url | title | abstract | text | meta |
|---|---|---|---|---|
https://arxiv.org/abs/1207.1228 | 1D analysis of 2D isotropic random walks | Many stochastic systems in physics and biology are investigated by recording the two-dimensional (2D) positions of a moving test particle in regular time intervals. The resulting sample trajectories are then used to infer the properties of the underlying stochastic process. Often, it can be assumed a priori that the underlying discrete-time random walk model is independent from absolute position (homogeneity), direction (isotropy) and time (stationarity), as well as ergodic. In this article we first review some common statistical methods for analyzing 2D trajectories, based on quantities with built-in rotational invariance. We then discuss an alternative approach in which the two-dimensional trajectories are reduced to one dimension by projection onto an arbitrary axis and rotational averaging. Each step of the resulting 1D trajectory is further factorized into sign and magnitude. The statistical properties of the signs and magnitudes are mathematically related to those of the step lengths and turning angles of the original 2D trajectories, demonstrating that no essential information is lost by this data reduction. The resulting binary sequence of signs lends itself to a pattern counting analysis, revealing temporal properties of the random process that are not easily deduced from conventional measures such as the velocity autocorrelation function. In order to highlight this simplified 1D description, we apply it to a 2D random walk with restricted turning angles (RTA model), defined by a finite-variance distribution $p(L)$ of step length and a narrow turning angle distribution $p(\phi)$, assuming that the lengths and directions of the steps are independent. | \section*{Quantifying 2D trajectories}
We consider a measured trajectory (Fig.1) that consists of $N+1$ discrete two-dimensional points $\vec{R}_t=(x_t,y_t)$ with $t=0\ldots N$, sampled in equal time intervals $\delta t_{rec}$. In the following, it is implicitly assumed that all absolute times $t$ and lag times $\Delta t$ are in units of this sampling interval. In a spatially homogeneous system, the absolute positions $\vec{R}_t$ are of no importance by themselves. All relevant information about the walk is contained in the $N$ steps $\vec{u}_t=\vec{R}_t-\vec{R}_{t-1}$, or the corresponding velocities $\vec{v}_t=\vec{u}_t/\delta t_{rec}$. These steps can be described in Cartesian or polar coordinate systems, $\vec{u}_t = (\Delta x_t , \Delta y_t) = (L_t\cos{\varphi_t},L_t\sin{\varphi_t}) = L_t \vec{e}_t$, where $\vec{e}_t$ is a unit direction vector. The angle $\phi_t = \varphi_t-\varphi_{t-1}$ between two successive step vectors is called the turning angle. In an isotropic system, the absolute step directions, measured by the angles $\varphi_t$, cannot be of importance either. The only relevant information of a trajectory is therefore contained in the set $\left\{ (L_t,\phi_t):t=1\ldots N\right\}$ of subsequent step lengths and turning angles. In the most general case, the underlying discrete-time random walk model has to determine the combined probability density $p\left( L_1,\phi_1,\;L_2,\phi_2,\;\ldots,\;L_N,\phi_N\right)$. In a stationary random process, the stochastic properties can only depend on the differences between the time indices. A stationary walk is therefore described by the combined probability density $p\left( \ldots,L_{t-1},\phi_{t-1},\;L_t,\phi_t,L_{t+1},\phi_{t+1},\ldots\right)$.
\section*{Aggregated statistical properties}
The aggregated statistical properties of the system are extracted by computing suitable averages. Because of the stationarity and ergodicity of the random process, we can replace ensemble averages (over different trajectories) by time averages. In the following we denote the average of a quantity $f_t$ over all absolute time points as $\left\langle f_t \right\rangle_t = \frac{1}{t_{max}\!-t_{min}\!+\!1}\sum_{t\!=\!t_{min}}^{t_{max}} f_t$.
It is important to keep in mind that, in general, a finite trajectory does not show all the symmetries of the underlying random process. For example, when analyzing relatively short trajectories with directional persistence, it may happen that all step directions fall into a narrow range of absolute angles $\varphi_t$. This can cause artifacts, such as significantly different distributions $p(\Delta x)$ and $p(\Delta y)$ of the Cartesian step components. To avoid such problems, one strategy is to use only quantities that are, by definition, invariant with respect to translations and rotations of the trajectories, such as the step length $L_t$ and the turning angles $\phi_t$.
\section*{Distributions and correlation properties of step length and turning angles}
The probability distributions of the step lengths and turning angles can be expressed formally as
\begin{eqnarray}
\label{eq:1}
p(L)&=&\left\langle \delta(L - L_t) \right\rangle_t \nonumber\\
p(\phi)&=&\left\langle \delta(\phi - \phi_t) \right\rangle_t.
\end{eqnarray}
From them follow the mean values (denoted by $\overline{L}$ and $\overline{\phi}$) and the variances (denoted by $\sigma_L^2$ and $\sigma_{\phi}^2$). In this paper, we assume that $\overline{L}$ and $\sigma_L^2$ are finite, excluding random walks with a heavy-tailed step length distribution such as the L\'evy flight.
Besides these distributions, one should take into account possible temporal correlations of these quantities as well. The normalized autocorrelation functions of $L$ and $\phi$ (as well as their cross-correlation function) are defined as
\begin{eqnarray}
\label{eq:2}
C_{LL}(\tau)&=&\left\langle (L_t-\overline{L})(L_{t+\tau}-\overline{L}) \right\rangle_t / \sigma_L^2 \nonumber\\
C_{\phi\phi}(\tau)&=&\left\langle (\phi_t-\overline{\phi})(\phi_{t+\tau}-\overline{\phi}) \right\rangle_t / \sigma_\phi^2 \nonumber\\
C_{L\phi}(\tau)&=&\left\langle (L_t-\overline{L})(\phi_{t+\tau}-\overline{\phi}) \right\rangle_t / (\sigma_L \sigma_{\phi}).
\end{eqnarray}
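For readers who wish to apply Eq.2 to recorded data, a minimal Python sketch of the time-averaged estimator may be helpful (the function name and all parameters are ours, introduced only for illustration):
\begin{verbatim}
import numpy as np

def norm_xcorr(a, b, max_lag):
    # normalized (cross-)correlation <(a_t-<a>)(b_{t+tau}-<b>)>/(sig_a sig_b),
    # estimated by time averaging over a single stationary, ergodic series
    da, db = a - a.mean(), b - b.mean()
    return np.array([np.mean(da[:len(a)-tau] * db[tau:])
                     for tau in range(max_lag + 1)]) / (a.std() * b.std())

# Given arrays L (step lengths) and phi (turning angles), Eq.2 becomes:
# C_LL = norm_xcorr(L, L, 50); C_phiphi = norm_xcorr(phi, phi, 50)
# C_Lphi = norm_xcorr(L, phi, 50)
\end{verbatim}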
We note that in most "standard models" of random walks, such as the discrete-time correlated random walk, $L$ and $\phi$ are drawn independently from fixed distributions $p(L)$ and $p(\phi)$, so that $C_{LL}(\tau)=C_{\phi\phi}(\tau)=\delta_{\tau,0}$ and $C_{L\phi}(\tau)=0$. However, some more complex stochastic systems show "super-statistical" effects, such as temporally correlated fluctuations of the step length, $C_{LL}(\tau\ge0)\neq 0$, or a time-dependent variance $\sigma_{\phi}^2=f(t)$ of the turning angle.
\section*{Vectorial velocity autocorrelation function and Mean Squared Displacement}
In contrast to the step vectors $\vec{u}_t$ themselves, certain combinations such as the dot product $\vec{u}_t\vec{u}_{t+\tau}$ are translationally and rotationally invariant. Therefore, a frequently used measure for the temporal structure of a random walk is the vectorial velocity autocorrelation (VAC) function:
\begin{equation}
\label{eq:3}
C_{\vec{u}\vec{u}}(\tau)=\left\langle
(\vec{u}_t-\overline{\vec{u}})(\vec{u}_{t+\tau}-\overline{\vec{u}})
\right\rangle_t / \sigma_{\vec{u}}^2.
\end{equation}
Another popular quantity with translational and rotational invariance is the Mean Squared Displacement (MSD):
\begin{equation}
\label{eq:4}
\overline{R^2}(\tau)= \left\langle \left|
\vec{R}_{t+\tau}-\vec{R}_t
\right|^2 \right\rangle_t.
\end{equation}
The MSD is mathematically related to the VAC by
\begin{equation}
\label{eq:5}
\overline{R^2}(\tau)= \sigma_{\vec{u}}^2
\sum_{t=-\tau}^{+\tau} (\tau-|t|) C_{\vec{u}\vec{u}}(t) .
\end{equation}
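The relation Eq.5 is easily verified numerically. The following Python sketch (ours; the test walk and its parameters are arbitrary) computes the MSD both directly via Eq.4 and from the VAC via Eq.5:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 20000
phi = rng.uniform(-np.pi/2, np.pi/2, N)           # turning angles
theta = rng.uniform(0, 2*np.pi) + np.cumsum(phi)  # absolute directions
L = rng.rayleigh(1.0, N)                          # step lengths
u = np.column_stack((L*np.cos(theta), L*np.sin(theta)))
R = np.vstack(([0.0, 0.0], np.cumsum(u, axis=0)))

du = u - u.mean(axis=0)
var_u = np.mean(np.sum(du**2, axis=1))            # sigma_u^2

def vac(t):                                       # Eq.3
    t = abs(t)
    if t == 0:
        return 1.0
    return np.mean(np.sum(du[:-t]*du[t:], axis=1)) / var_u

for tau in (1, 5, 20):
    msd_direct = np.mean(np.sum((R[tau:] - R[:-tau])**2, axis=1))  # Eq.4
    msd_vac = var_u * sum((tau - abs(t))*vac(t)
                          for t in range(-tau, tau + 1))           # Eq.5
    print(tau, msd_direct, msd_vac)
\end{verbatim}
Both numbers agree up to the statistical noise of the finite trajectory.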
From a practical point of view, the MSD has the advantage of being less sensitive to statistical noise, due to the summation. Also, while the normalized VAC always starts with the value 1 at lagtime zero, the MSD shows explicitly the scale of the displacements. Note that the scale factor $\sigma_{\vec{u}}^2 = \left\langle L^2 \right\rangle_t - |\overline{\vec{u}}|^2$ reduces for a drift-free walk to $\left\langle L^2 \right\rangle_t = \overline{L}^2+\sigma_L^2$ and thus only depends on the first and second moment of the step length distribution. The shape of the MSD, on the other hand, is entirely determined by the VAC.
It is worthwhile to consider which properties of the trajectory are responsible for this shape: For a trajectory without drift ($\overline{\vec{u}}=\vec{0}$), the VAC depends only on the expression $ \left\langle \vec{u}_t\vec{u}_{t+\tau} \right\rangle_t = \left\langle L_t L_{t+\tau} \cos(\varphi_{t+\tau}-\varphi_t) \right\rangle_t $. Consider first the case when step lengths and step directions are statistically independent. Then $ \left\langle \vec{u}_t\vec{u}_{t+\tau} \right\rangle_t $ factorizes as $\left\langle L_t L_{t+\tau} \cos(\varphi_{t+\tau}-\varphi_t) \right\rangle_t \!=\! \left\langle L_t L_{t+\tau} \right\rangle_t \cdot \left\langle \cos(\varphi_{t+\tau}-\varphi_t) \right\rangle_t$.
The first factor describes possible correlations between successive step lengths. However, the expression $\left\langle L_t L_{t+\tau} \right\rangle_t \!=\! \overline{L}^2 + \left\langle \Delta L_t \Delta L_{t+\tau} \right\rangle_t$ is always nonzero, even if the step length fluctuations are mutually uncorrelated. The second factor describes directional correlations and it {\em can} be zero. If it {\em is} zero, the VAC is $\delta$-correlated and the MSD increases linearly with lagtime, indicating trivial diffusive behavior. Consequently, any non-trivial lagtime-dependence of the VAC/MSD (such as two distinct lagtime regimes, sub-diffusive, or super-diffusive behavior) can only arise if directional correlations are present. In this case, step length correlations may have an additional effect on the shape of the VAC/MSD. The most general case would even include cross-correlations between step lengths and step directions.
\section*{The projected trajectory and rotational averaging}
Next we consider aggregated statistical properties based on quantities without built-in rotational invariance. In particular, we analyze the projection of the 2D trajectory onto some axis, for example the x-axis of the coordinate system. By the projection, the sequence of vectorial steps $\vec{u}_t$ is reduced to a sequence of scalar steps $\Delta x_t$, so that some directional information is lost. However, we can define for the 1D trajectory a pair of quantities equivalent to the step length and the step direction, by factorizing each scalar step into a magnitude and a sign factor:
\begin{eqnarray}
\label{eq:6}
\Delta x_t &=& m_t \cdot s_t \nonumber\\
&=&|\Delta x_t| \cdot sgn(\Delta x_t)\nonumber\\
&=& L_t|\cos(\varphi_t)|\cdot sgn(\cos(\varphi_t)).
\end{eqnarray}
We can compute the distributions $p(m)$ and $p(s)$ of the magnitudes and signs by temporal averaging over the projected trajectory. However, in order to avoid the above-mentioned artifacts related to the finite number of steps, we have to additionally perform a rotational averaging over the absolute direction angles $\varphi$ of each step:
\begin{equation}
\label{eq:7}
\left\langle f(\varphi) \right\rangle_{\varphi} =
\int_0^{2\pi} \frac{d\varphi}{2\pi} f(\varphi).
\end{equation}
Using this notation, we can write
\begin{eqnarray}
\label{eq:8}
p(m)&=&\left\langle \left\langle \delta(m - m_t) \right\rangle_t \right\rangle_{\varphi} \nonumber\\
p(s)&=&\left\langle \left\langle \delta(s - s_t) \right\rangle_t \right\rangle_{\varphi} .
\end{eqnarray}
We next consider how the distribution $p(m)$ is related to $p(L)$. If a single vectorial trajectory step $\vec{u}$ of step length $L$ is isotropically rotated, it produces a whole distribution $\rho(m,L)$ of projected magnitudes, ranging from $m=0$ to $m=L$:
\begin{equation}
\label{eq:9}
\rho(m,L) = \frac{ 2\;\theta(m)\;\theta(L-m)}{\pi \sqrt{L^2 \! -\! m^2}},
\end{equation}
where $\theta()$ indicates the Heaviside step function. Therefore, a given step length distribution $p(L)$ produces a corresponding distribution of magnitudes that is given by
\begin{equation}
\label{eq:10}
p(m) = \int_m^{\infty} \!dL \; \rho(m,L) \; p(L).
\end{equation}
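The kernel Eq.9 and the transformation Eq.10 can be checked by direct sampling. The following Python sketch (ours) uses a Rayleigh step length distribution purely as an example:
\begin{verbatim}
import numpy as np
from scipy import integrate

rng = np.random.default_rng(1)
L = rng.rayleigh(1.0, 200000)           # example p(L) with finite moments
phi = rng.uniform(0, 2*np.pi, L.size)   # isotropic absolute directions
m = L*np.abs(np.cos(phi))               # projected magnitudes

def p_L(l):
    return l*np.exp(-l**2/2)            # Rayleigh density, L_p = 1

def p_m(mv):
    # Eq.10; the substitution L = sqrt(m^2+u^2) removes the integrable
    # singularity of the kernel rho(m,L) at L = m
    f = lambda u: (2/np.pi)*p_L(np.sqrt(mv**2+u**2))/np.sqrt(mv**2+u**2)
    return integrate.quad(f, 0, np.inf)[0]

for mv in (0.2, 0.8, 1.5):              # histogram estimate vs. Eq.10
    print(mv, np.mean(np.abs(m - mv) < 0.02)/0.04, p_m(mv))
\end{verbatim}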
The quantity in 1D that corresponds to the turning angle distribution $p(\phi)$ in 2D is the probability $q=Prob("s_{t+1}\!=\!s_t")$ that two successive scalar steps have the same sign. The probability $q$ can also be called the persistence parameter of a trajectory, since $q\!=\!1/2$ indicates non-persistent behavior, while $q$-values smaller (larger) than $1/2$ indicate sub-diffusive (super-diffusive) behavior. In order to derive a relation between $p(\phi)$ and $q$, we consider a sequence of two vectorial steps $\vec{u}_1$ and $\vec{u}_2$, enclosing a turning angle $\phi$ (compare Fig.2) with probability $p(\phi)d\phi$. Assume that initially the two sign factors $s_1$ and $s_2$ are both positive. If we now gradually increase $\phi$, there is a critical turning angle $\phi_c(\varphi)$ at which $s_1=s(\varphi)=sgn(\cos(\varphi))$ becomes different from $s_2 =s(\varphi+\phi)=sgn(\cos(\varphi+\phi))$. We can therefore express $q$ as an angular integral over $p(\phi)$:
\begin{eqnarray}
\label{eq:11}
q &=& \int_0^{2\pi} \frac{d\varphi}{2\pi} \;\int_{-\pi}^{+\pi} \! d\phi \; \delta_{s_1,s_2} \; p(\phi) \nonumber\\
&=& \int_{-\pi}^{+\pi} \! d\phi \; \left[ \int_0^{2\pi} \frac{d\varphi}{2\pi} \; \delta_{s(\varphi),s(\varphi+\phi)} \right] \; p(\phi)\nonumber\\
&=& \int_{-\pi}^{+\pi} \! d\phi \; \left[\;1-|\phi/\pi|\;\right] \; p(\phi).
\end{eqnarray}
The fact that $p(\phi)$ is a distribution, whereas $q$ is just a number, demonstrates the information loss associated with the projection from 2D to 1D. This missing information is the detailed shape of the turning angle distribution, which in many cases will not be of particular interest. In this sense, the 1D projection of a 2D trajectory with rotational averaging represents a useful simplification and helps to filter out the essential information.
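The relation Eq.11 is also easily confirmed by Monte Carlo sampling. The following Python sketch (ours) uses a narrow uniform turning angle distribution as an example:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
phi_max = np.pi/20                            # example p(phi): uniform
phi = rng.uniform(-phi_max, phi_max, 10**6)   # turning angles
psi = rng.uniform(0, 2*np.pi, phi.size)       # rotational averaging

q_mc = np.mean(np.sign(np.cos(psi)) == np.sign(np.cos(psi + phi)))
q_eq11 = np.mean(1 - np.abs(phi/np.pi))       # Eq.11, sampled over p(phi)
print(q_mc, q_eq11, 1 - phi_max/(2*np.pi))    # all three values agree
\end{verbatim}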
Finally, the temporal structure of the projected trajectory can be analyzed with the rotationally averaged autocorrelation function of the scalar steps:
\begin{equation}
\label{eq:12}
C_{\Delta x\Delta x}(\tau)=\left\langle \left\langle
(\Delta x_t-\overline{\Delta x})(\Delta x_{t+\tau}-\overline{\Delta x})
\right\rangle_t / \sigma_{\Delta x}^2 \right\rangle_{\varphi}.
\end{equation}
The correlation functions $C_{mm}(\tau)$, $C_{ss}(\tau)$ and $C_{ms}(\tau)$ are defined in an analogous way.
\section*{Discrete pattern statistics and the persistent Markov chain of signs}
By the above procedure, we have mapped an originally two-dimensional trajectory onto two scalar time series, $m_t$ and $s_t$. Since the signs $s_t$ are binary variables, we can apply to them analysis tools that are tailor-made for discrete random processes. In particular, we can count the frequency of patterns, such as "-+-", within the time series. Once the probabilities for all patterns of a given length are known, it is straightforward to construct a higher-order Markov model that replicates the statistical properties of a measured time series.
The principles of pattern statistics can be demonstrated with a simple binary Markov chain $s_t$: At $t=0$ it starts randomly with "-" or "+" (equal probability). For each subsequent time, $Prob("s_{t+1}\!=\!s_t")=q$, with a pre-defined persistence parameter. What is the frequency distribution for a given pattern, such as "-+-", in this model? In this particular case, $p("-+-")=p("-")p("-"\rightarrow"+")p("+"\rightarrow"-")$, which yields $p("-+-")=(1/2)\;(1-q)\;(1-q)$. For reasons of symmetry, the probability of any pattern is equal to that of its inverse, where all "+" and "-" are exchanged. Also, we can temporally reverse a pattern without changing its probability. Thus, there are only 3 distinct patterns of length 3, and their relative frequencies in our model are: $p("---")\propto q^2$, $p("--+")\propto q(1-q)$ and $p("-+-")\propto (1-q)^2$. They all become equally frequent for the non-persistent case $q=1/2$.
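A short Python sketch (ours) that generates such a persistent Markov chain and counts its length-3 patterns illustrates the computation:
\begin{verbatim}
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
q, N = 0.975, 10**6

# persistent binary Markov chain: keep the previous sign with probability q
factors = np.where(rng.random(N-1) < q, 1, -1)       # -1 marks a sign flip
s = rng.choice([-1, 1])*np.concatenate(([1], np.cumprod(factors)))

sym = {1: '+', -1: '-'}
counts = Counter(''.join(sym[v] for v in s[t:t+3]) for t in range(N-2))

pred = {'---': q**2/2, '--+': q*(1-q)/2, '-+-': (1-q)**2/2}
for pat, p in pred.items():
    print(pat, counts[pat]/(N-2), p)   # empirical vs. predicted frequency
\end{verbatim}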
\section*{The restricted turning angle (RTA) model}
We next consider the class of 2D random walk models in which the step lengths $L$ and step directions $\varphi$ are statistically independent, $\left\langle L_t \varphi_{t+\tau} \right\rangle_t=0$. The step lengths have a fixed distribution $p(L)$ with finite values for mean $\overline{L}$ and variance $\sigma_L^2$. The turning angles also have a fixed distribution $p(\phi)$, with zero mean and variance $\sigma_{\phi}^2$. In particular, we are interested in the case where $p(\phi)$ is rather narrow (restricted turning angles), i.e.\ $\sigma_{\phi}^2$ is small, so that the walk has directional persistence. We call this case the RTA model in the following.
It is straightforward to show that, under the given assumptions, the vectorial velocity autocorrelation function is given by
\begin{equation}
\label{eq:13}
C_{\vec{u}\vec{u}}(\tau=n \;\delta t_{rec}) =
\left[ \frac{\overline{L}^2+\sigma_L^2\delta_{n0}}{\overline{L}^2+\sigma_L^2\;\;\;} \right]\;
\left\langle \cos(\varphi_{t+n}-\varphi_{t}) \right\rangle_t
\end{equation}
The directional correlation factor can be expressed as an integral over turning angles:
\begin{equation}
\label{eq:14}
\left\langle \cos(\varphi_{t+n}-\varphi_{t}) \right\rangle_t =
\int_{-\pi}^{+\pi} d\phi\; p(\phi,n) \cos(\phi).
\end{equation}
Here, $p(\phi,n)$ is the probability density that a vectorial step and its $n$th successor enclose a turning angle $\phi$. It is clear that $p(\phi,n=0)=\delta(\phi-0)$, that $p(\phi,n=1)=p(\phi)$ is just the prescribed turning angle distribution and that $p(\phi,n\rightarrow\infty)\rightarrow 1/(2\pi)$. The temporal development of $p(\phi,n)$ corresponds to a kind of diffusion process on the unit circle. As long as the width of $p(\phi,n)$ is smaller than $2\pi$, we can view the process as a diffusion on a linear $\phi$-axis. Then, $p(\phi,n)$ is just the $n$-fold convolution of the turning angle distribution $p(\phi)$ with itself. For lagtimes $1\ll n\ll n_{max}=2\pi/\sqrt{\sigma_{\phi}^2}$, the distributions $p(\phi,n)$ resemble normalized Gaussians with zero mean and a lagtime-dependent variance $\sigma_{\phi}^2(n)=n\cdot\sigma_{\phi}^2$. We insert the approximation $p(\phi,n)\approx\frac{1}{\sqrt{2\pi\sigma_{\phi}^2(n)}}\;e^{-\phi^2/(2\sigma_{\phi}^2(n))}$ into Eq.14 and obtain analytically
\begin{equation}
\label{eq:15}
\left\langle \cos(\varphi_{t+n}-\varphi_{t}) \right\rangle_t \approx
e^{-(\sigma_{\phi}^2/2)\;n}.
\end{equation}
Summing up, the velocity autocorrelation function in the RTA model will show a sudden drop between lagtimes $0$ and $1$. This drop occurs because the variance of the uncorrelated step lengths $L_t$ contributes to the total velocity variance only at lagtime $n=0$. For intermediate lagtimes in the regime $1\ll n\ll n_{max}=2\pi/\sqrt{\sigma_{\phi}^2}$ it will decay exponentially with a characteristic decay time inversely proportional to the variance of the turning angle distribution:
\begin{equation}
\label{eq:16}
C_{\vec{u}\vec{u}}(n) \approx
\left[ \frac{\overline{L}^2+\sigma_L^2\delta_{n0}}{\overline{L}^2+\sigma_L^2\;\;\;} \right]\;
e^{-(\sigma_{\phi}^2/2)\;n}.
\end{equation}
\section*{Simulation of RTA model}
For a concrete example, we consider Rayleigh-distributed step lengths with the most probable value $L_p$, $p(L\ge 0)=(L/L_p^2)\;e^{-L^2/(2L_p^2)}$, so that $\overline{L}=\sqrt{\pi/2}L_p$ and $\sigma_L^2=\frac{4-\pi}{2}L_p^2$. The turning angles are assumed to be uniformly distributed within a narrow interval $\left[ -\phi_{max}\ldots +\phi_{max} \right]$, corresponding to a variance $\sigma_{\phi}^2=\frac{1}{3}\phi_{max}^2$. Note that this model has only the two parameters $L_p$ and $\phi_{max}$.
For the numerical simulation of the RTA model we set $L_p=1.0$lu, with an arbitrary length unit lu, and $\phi_{max}=\pi/20$. The recording time interval is set to $\delta t_{rec}=1$. A segment of a typical trajectory is shown in Fig.3. The numerically calculated step length distribution, turning angle distribution and velocity autocorrelation function, together with the analytical results, are shown in Figs.4-5.
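For reference, a minimal Python sketch (ours) of this simulation reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, L_p, phi_max = 10**5, 1.0, np.pi/20            # parameters of the text

L = rng.rayleigh(L_p, N)                          # Rayleigh step lengths
phi = rng.uniform(-phi_max, phi_max, N)           # uniform turning angles
theta = rng.uniform(0, 2*np.pi) + np.cumsum(phi)  # absolute directions
u = np.column_stack((L*np.cos(theta), L*np.sin(theta)))
R = np.vstack(([0.0, 0.0], np.cumsum(u, axis=0))) # trajectory points R_t
\end{verbatim}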
\section*{1D-projection of RTA model}
Using the transformation formula Eq.10, we obtain for the 1D projection of the RTA model with Rayleigh-distributed step lengths a half-Gaussian magnitude distribution $ p(m\ge 0)=\frac{2}{L_p\sqrt{2\pi}} \; e^{-(m/L_p)^2/2}$ (see Fig.6). According to Eq.11, we expect a persistence parameter $q=1-\frac{\phi_{max}}{2\pi}$, yielding $q=0.975$ for a maximum turning angle $\phi_{max}=\pi/20$. By counting the fraction of pairs of identical signs in the simulated time series $s_t$ from the projected RTA model, we obtain $q=0.974$.
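Continuing the simulation sketch above, the projection, the sign factorization and the empirical persistence parameter take only a few lines (for a trajectory long enough that the direction angle wraps around the circle many times, the time average effectively includes the rotational average):
\begin{verbatim}
dx = u[:, 0]                           # scalar steps of the projection
m_t, s_t = np.abs(dx), np.sign(dx)     # magnitude/sign factors, Eq.6
q_hat = np.mean(s_t[1:] == s_t[:-1])   # empirical persistence parameter
print(q_hat, 1 - phi_max/(2*np.pi))    # compare with Eq.11: q = 0.975
\end{verbatim}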
The autocorrelation function of $s_t$ is shown in Fig.7. It is interesting to compare it with $C_{ss}(n)$ for a persistent Markov chain of signs with the same average $q$, which decays like $C_{ss}(n)=(2q-1)^n$ for $q \ge 1/2$. Note that the two models agree only for $n=0$ (automatic, due to the normalization of the autocorrelation function) and $n=1$ (correct average fraction of equal sign pairs). For larger lagtimes, the sign correlations in the projected RTA model decay more slowly than in the Markov chain.
The origin of these deviations is "higher order" correlations in the projected RTA model, beyond a simple Markov approximation. Consider the projection of the RTA-trajectory onto the x-axis. As long as the step direction is roughly parallel to the x-axis, the time series $s_t$ shows long sequences of identical signs, such as "+++++++", which could also be called a "bunching" of equal sign pairs. But occasionally, the direction diffuses into a nearly vertical orientation. During such phases, $s_t$ shows sequences such as "+-+-++-+--+", corresponding to an "anti-bunching" of equal sign pairs. Thus, while the average fraction of equal sign pairs agrees in both models, they are spread over the time axis in a different way in the projected RTA model. These differences are also reflected in the pattern statistics: For example, in the projected RTA model (Fig.9), the fraction of "--+" patterns is significantly diminished, compared to the Markov chain (Fig.8).
\section*{A "momentary persistence" variable and its temporal correlations}
The temporal distribution of equal sign pairs can be investigated by defining a momentary persistence variable,
\begin{equation}
\label{eq:17}
\eta_t = \delta_{s_{t\!-\!1},s_t}.
\end{equation}
The global persistence parameter $q=\left\langle \eta_t \right\rangle_t$ is just the time average of this variable. In a persistent Markov chain of signs, the random variable $\eta_t$ behaves like a Bernoulli process with the probability $q$ for the event $1$ and $1-q$ for the event $0$. For such a white noise process, the autocorrelation function is $C_{\eta\eta}(\tau)=\delta_{\tau,0}$. In the projected RTA model, however, the momentary persistence should have some memory time larger than zero. This is indeed the case, as shown in Fig.10.
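In code (continuing the sketch above), $\eta_t$ and its normalized autocorrelation are obtained as follows:
\begin{verbatim}
eta = (s_t[1:] == s_t[:-1]).astype(float)   # momentary persistence, Eq.17
de = eta - eta.mean()
for tau in range(6):
    c = 1.0 if tau == 0 else np.mean(de[:-tau]*de[tau:])/np.mean(de**2)
    print(tau, c)    # nonzero memory, unlike the Bernoulli/Markov case
\end{verbatim}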
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig1.pdf} \caption{\label{1} {\em
Sample trajectory, introducing positions $\vec{R}_t$, steps $\vec{u}_t$, step directions $\varphi_t$, step lengths $L_t$ and turning angles $\phi_t$.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig2.pdf} \caption{\label{2} {\em
Two successive vectorial steps, their projections $\Delta x_t$ onto the x-axis, and the signs $s_t$ of the steps.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig3.pdf} \caption{\label{3} {\em
Segment of a trajectory in the RTA model.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig4.pdf} \caption{\label{4} {\em
Simulated (symbols) and analytic (black lines) distributions of step lengths (left) and turning angles (right) in the RTA-model.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig5.pdf} \caption{\label{5} {\em
Vectorial velocity autocorrelation function in the RTA-model, comparing simulation (red symbols) with analytical approximation (black line).
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig6.pdf} \caption{\label{6} {\em
Simulated (symbols) and analytic (black line) distribution of step magnitude in the projected RTA-model.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig7.pdf} \caption{\label{7} {\em
Autocorrelation functions of sign factors in the projected RTA-model (red), and in a persistent Markov chain of signs (black). Note that models agree only at lagtimes $0$ and $1$.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig8.pdf} \caption{\label{8} {\em
Logarithmic frequency of selected patterns in a persistent Markov chain of signs $s_t$, for $q=0.975$.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig9.pdf} \caption{\label{9} {\em
Logarithmic frequency of selected patterns in the projected RTA model.
}}\end{figure}
\begin{figure}[htb] \includegraphics[width=14cm]{figures/fig10.pdf} \caption{\label{10} {\em
Autocorrelation function of the momentary persistence $\eta_t$ in the projected RTA model (red) and in a persistent Markov chain of signs with the same $q$ (black).
}}\end{figure}
\begin{acknowledgments}
This work was supported by grants from Deutsche Forschungsgemeinschaft.
\end{acknowledgments}
| {
"timestamp": "2012-08-27T02:01:24",
"yymm": "1207",
"arxiv_id": "1207.1228",
"language": "en",
"url": "https://arxiv.org/abs/1207.1228",
"abstract": "Many stochastic systems in physics and biology are investigated by recording the two-dimensional (2D) positions of a moving test particle in regular time intervals. The resulting sample trajectories are then used to induce the properties of the underlying stochastic process. Often, it can be assumed a priori that the underlying discrete-time random walk model is independent from absolute position (homogeneity), direction (isotropy) and time (stationarity), as well as ergodic. In this article we first review some common statistical methods for analyzing 2D trajectories, based on quantities with built-in rotational invariance. We then discuss an alternative approach in which the two-dimensional trajectories are reduced to one dimension by projection onto an arbitrary axis and rotational averaging. Each step of the resulting 1D trajectory is further factorized into sign and magnitude. The statistical properties of the signs and magnitudes are mathematically related to those of the step lengths and turning angles of the original 2D trajectories, demonstrating that no essential information is lost by this data reduction. The resulting binary sequence of signs lends itself for a pattern counting analysis, revealing temporal properties of the random process that are not easily deduced from conventional measures such as the velocity autocorrelation function. In order to highlight this simplified 1D description, we apply it to a 2D random walk with restricted turning angles (RTA model), defined by a finite-variance distribution $p(L)$ of step length and a narrow turning angle distribution $p(\\phi)$, assuming that the lengths and directions of the steps are independent.",
"subjects": "Quantitative Methods (q-bio.QM)",
"title": "1D analysis of 2D isotropic random walks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854155523791,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7089699417873808
} |
https://arxiv.org/abs/gr-qc/9510013 | Normal frames for non-Riemannian connections | The principal properties of geodesic normal coordinates are the vanishing of the connection components and first derivatives of the metric components at some point. It is well-known that these hold only at points where the connection has vanishing torsion and non-metricity. However, it is shown that normal frames, possessing the essential features of normal coordinates, can still be constructed when the connection is non-Riemannian. | \section{Introduction}
Almost the defining property of general relativity is its coordinate and
frame independence, a feature which becomes obvious when tensor notation is
used. Nevertheless, it is frequently convenient to single out certain
coordinate and frame choices in order to simplify particular calculations
or arguments, or to make approximations to compare with experiment. In many
places, for example in comparisons with special relativity or arguments
motivated by the equivalence principle, an inertial frame of reference is
desirable.
Unfortunately, curved space-time does not admit inertial frames of
reference in general, so geodesic normal coordinates are commonly used to
provide an instantaneous inertial frame at a single space-time point. In
these coordinates, the metric components are stationary and the connection
components vanish at the chosen point. As is common knowledge, these
features are lost when the connection is non-Riemannian. For example, it is
clear from the definition of the torsion operator $\tau$ on vector fields
$X$, $Y$,
$$ \tau(X,Y) = \nabla_X Y - \nabla_Y X - [X,Y] \eqno(1)
$$
that if $X$ and $Y$ are coordinate vectors for which the connection
components vanish at some point, then the torsion must vanish at the point
too. Likewise, the non-metricity
$$
\eqalign{ Q(X,Y,Z) &= \nabla_X g(Y,Z) \cr
&= X\big(g(Y,Z)\big) - g(\nabla_XY,Z) - g(Y,\nabla_XZ) \cr}
\eqno(2)
$$
reduces to first derivatives of the metric components in coordinates where
the connection components vanish.
It is, of course, still possible to construct normal coordinates using
the exponential map based on autoparallels of a non-Riemannian
connection. However, for non-vanishing torsion, the connection components
will not vanish, and in the presence of non-metricity, the metric
components will not be stationary.
The purpose of this note is to point out that, notwithstanding the failure
of normal coordinates, it is still possible to define a normal frame at a
point for a non-Riemannian connection. Such a possibility has been
mentioned before within the context of the equivalence principle for
theories with torsion (von der Heyde 1975, Hehl {\it et al\/}\ 1976), but without a
full proof. Related concepts for gauge theories (using Riemann normal
coordinates) have also been employed in heat-kernel studies on
Riemann-Cartan space-time (Obukhov 1983, Cognola and Zerbini 1988).
Let $M$ be a differentiable manifold, and $\nabla$ an arbitrary connection
on $M$. Take an arbitrary point $p\in M$, and let $\{X_i\}_p$ be a basis
for $T_pM$.
\bigbreak\noindent{\it Proposition}.\quad\ignorespaces
$\{X_i\}_p$ can be extended to a frame $\{X_i\}$ on some neighbourhood
$U\subset M$ of $p$ in such a way that $\nabla X_i = 0$ at $p$ for each $i$.
\bigbreak\noindent{\it Proof}.\quad\ignorespaces
For each $i$, there exists an autoparallel $C_i$ in a simply connected
neighbourhood $U_i$ of $p$ which passes through $p$ with tangent vector
$X_i$ at $p$. For each $i$, define a set of vector fields $\{X_j\}$ along
$C_i$ by parallel transporting $\{X_j\}_p$. By suitably restricting the
intersection $U$ of the $U_i$, the autoparallels will not meet away from $p$, and the
vector fields can be extended arbitrarily to form a frame on $U$. Since, by
construction, $\nabla_{X_i}X_j = 0$ at $p$ for all $i,j$, it follows that
$\nabla X_i = 0$ at $p$.
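The construction is easily illustrated numerically. The following Python sketch (ours, not part of the original argument) takes a two-dimensional manifold with constant connection components $\Gamma^k{}_{ij}$ carrying torsion, integrates the autoparallels through the origin, parallel transports the initial basis along them, and checks by finite differences that $\nabla X_j = 0$ at $p$:
\begin{verbatim}
import numpy as np

# constant connection components Gamma[k,i,j] with torsion
# (Gamma[k,i,j] != Gamma[k,j,i]); the numerical values are arbitrary
Gamma = np.zeros((2, 2, 2))
Gamma[0, 0, 1] = 0.3
Gamma[1, 1, 0] = -0.2

def transport(v0, t, steps=1000):
    # autoparallel x'' = -Gamma(x',x') from the origin with velocity v0,
    # parallel transporting the frame X (columns) along it; since Gamma
    # is constant, the position itself is not needed (Euler steps)
    h, v, X = t/steps, v0.astype(float), np.eye(2)
    for _ in range(steps):
        dv = -np.einsum('kij,i,j->k', Gamma, v, v)
        dX = -np.einsum('kij,i,jm->km', Gamma, v, X)
        v, X = v + h*dv, X + h*dX
    return X

E, h = np.eye(2), 1e-4
for i in range(2):
    Xh = transport(E[:, i], h)
    # finite-difference covariant derivative of each X_j along X_i at p
    nabla = (Xh - E)/h + np.einsum('kij,i,jm->km', Gamma, E[:, i], E)
    print(i, np.abs(nabla).max())      # O(h), i.e. nabla X_j = 0 at p
\end{verbatim}
The same components $\Gamma^k{}_{ij}$ remain non-zero in the coordinate frame, in line with the remark below.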
This result makes no mention of any metric on $M$; it is simply a property
of connections. Now let $g$ be a metric on $M$. A frame $\{X_i\}$ is said
to be {\it normal} (Sachs and Wu 1977) at $p$ if $\{X_i\}_p$ is orthonormal
and $\nabla X_i = 0$ at $p$. Starting with an orthonormal basis at any
point $p$ in $M$, it is always possible to extend it using the above method
to a local normal frame with respect to any connection. If the connection
and metric are compatible ($\nabla g=0$), the normal frame can be made
orthonormal away from $p$ as well.
This result can be expressed using the connection components defined by
$\nabla_{X_i} X_j = \Gamma^k{}_{ij} X_k$. Furthermore, it is obvious that
there always exist coordinates $\{x^i\}$ covering $p$ such that
$\partial_{x^i} = X_i$ at $p$. The component formulation is then
\bigbreak\noindent{\it Proposition}.\quad\ignorespaces
Let $M$ be a manifold with an arbitrary connection. For any point $p\in M$,
there exist coordinates $\{x^i\}$ and a frame $\{X_i\}$ in a neighbourhood
of $p$ such that, at $p$,
$$
\eqalign{ X_i &= \partial_{x^i} \cr
\Gamma^k{}_{ij} &= 0, \cr}
\eqno(3)
$$
where $\Gamma^k{}_{ij}$ are the connection components referred to the frame
$\{X_i\}$.
Note that the connection components referred to the frame
$\{\partial_{x^i}\}$ need not vanish (even at $p$). They will necessarily
be non-zero if the torsion is non-vanishing at $p$.
The essential properties of normal coordinates are the vanishing of the
connection components and first derivatives of the metric components. Both
of these are properties in the tangent spaces, not on the manifold
itself. So a normal frame suffices for most discussions where normal
coordinates are used. Since normal frames may be defined for
non-Riemannian connections also, these same discussions can be adapted to
cover non-Riemannian gravity as well.
This will be most successful in Riemann-Cartan space-time, where torsion is
present but the connection is metric compatible. Here, the normal frame may
also be made orthonormal. This is desirable if, for instance, the
discussion involves spinors.
It is interesting to note that, as follows easily from (1), the existence
of an anholonomic normal frame implies the presence of torsion. Likewise,
{}from (2), a normal frame for which the metric components are not stationary
requires non-metricity. Turned around, this means that the only normal
frames for a Riemannian connection are those generated by normal coordinates.
In this note, the existence of normal frames for space-times with
non-Riemannian connections has been proven by construction. These frames
have been argued to retain the salient features of normal coordinates in
general relativity, and thus allow similar simplifications and
approximations to be made in non-Riemannian gravity.
\ack
It is a pleasure to thank F~W~Hehl, F~Gronwald and P~A~Tuckey for helpful
discussions. The work described here was carried out with the support of
the Graduate College on Scientific Computing, University of Cologne and GMD
St Augustin, funded by the German Research Foundation (DFG).
\references
\refjl{von der Heyde P 1975}{\it Lett. Nuovo Cimento}{14}{250--252}
\refjl{Hehl F W, von der Heyde P, Kerlick G D and Nester J M
1976}{{\it Rev. Mod. Phys.}}{48}{393--416}
\refjl{Obukhov Y N 1983}{{\it Nucl. Phys.}}{B212}{237--254}
\refjl{Cognola G and Zerbini S 1988}{{\it Phys. Lett.}}{B214}{70--74}
\refbk{Sachs R K and Wu H 1977}{General Relativity for
Mathematicians}{(New York: Springer-Verlag) p. 79}
\bye
| {
"timestamp": "1995-10-09T10:44:18",
"yymm": "9510",
"arxiv_id": "gr-qc/9510013",
"language": "en",
"url": "https://arxiv.org/abs/gr-qc/9510013",
"abstract": "The principal properties of geodesic normal coordinates are the vanishing of the connection components and first derivatives of the metric components at some point. It is well-known that these hold only at points where the connection has vanishing torsion and non-metricity. However, it is shown that normal frames, possessing the essential features of normal coordinates, can still be constructed when the connection is non-Riemannian.",
"subjects": "General Relativity and Quantum Cosmology (gr-qc)",
"title": "Normal frames for non-Riemannian connections",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854146791213,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7089699411489782
} |
https://arxiv.org/abs/1105.1158 | Regularity properties of nonlocal minimal surfaces via limiting arguments | We prove an improvement of flatness result for nonlocal minimal surfaces which is independent of the fractional parameter $s$ when $s\rightarrow 1^-$. As a consequence, we obtain that all the nonlocal minimal cones are flat and that all the nonlocal minimal surfaces are smooth when the dimension of the ambient space is less than or equal to 7 and $s$ is close to 1. | \section{Notation}\label{notation}
A point~$x\in {\mathbb R}^n$ will be often written
in coordinates as~$x=(x',x_n)\in{\mathbb R}^{n-1}
\times{\mathbb R}$.
The complement of a set~$\Omega\subseteq{\mathbb R}^n$
will be denoted by~$\CC \Omega:={\mathbb R}^n\setminus\Omega$.
For any $P\in{\mathbb R}^n$ and $\rho>0$,
we define the cylinder
$$ K_\rho(P):= \{ |x'-P'|<\rho\}\times
\{ |x_n - P_n|<\rho\}.$$
We also set~$K_\rho:=K_\rho(0)$.
The $(n-1)$-dimensional
cube of side~$R$ centered at~$x_o'\in{\mathbb R}^{n-1}$ will
be denoted by~$Q_R(x_o)$.
If~$\nu\in{\rm S}^{n-1}$, given~$x\in{\mathbb R}^n$, we define its
projection along~$\nu$, that is~$\pi_\nu x:=
x-(x\cdot \nu)\nu$.
Given a set~$E\subset{\mathbb R}^n$, we denote
by~${d}_E (x)$ the signed
distance of a point~$x\in{\mathbb R}^n$ from~$\partial E$; we will
take the sign convention that~${d}_E (x)\ge0$
if~$x\in\CC E$.
If~$\Sigma\subset{\mathbb R}^n$ is a $C^2$-portion of
hypersurface, we define~${\mathcal{H}}(P)$
to be the mean curvature of~$\Sigma$ at~$P$
(with the convention that~${\mathcal{H}}$
equals the sum
of all the principal curvatures).
The $k$-dimensional Lebesgue measure of a (measurable) set~$A\subseteq
{\mathbb R}^k$ will be denoted by~$|A|$.
We let~$\varpi$ be the $(n-2)$-dimensional Hausdorff measure of
the boundary of the~$(n-1)$-dimensional unit ball.
Often, we will denote by~$c$, $C$ a suitable
positive constant, that we
allow ourselves the latitude of renaming at each
step of the computation.
\section{Proof of Theorem~\ref{MAIN}}
Now we start the proof of Theorem~\ref{MAIN},
which is based on several steps.
First, we need
to approximate our $s$-minimal surface with a graph.
As soon as~$s$ approaches~$1$, a flat $s$-minimal surface
approaches a classical, smooth, minimal surface,
and this will allow us to keep the Lipschitz norm
of this approximating graph under control.
Then, we perform an estimate on the detachment
of this graph from its tangent hyperplane: this bound
(together with a suitable auxiliary function and an estimate relating
the integral equation with the classical mean curvature
equation in the limit) provides
an Alexandrov-Bakelman-Pucci type theory
that controls the oscillation of the graph in measure.
This may be repeated at finer and finer scales via
dyadic decomposition, by possibly taking advantage
of the closeness to the smooth minimal surface
when the size of the cubes becomes too small. In this
way, one obtains a pointwise control on the oscillation
of the approximating graph (and so of the original
$s$-minimal surface), leading to the proof of Theorem~\ref{MAIN}.
Below are the full details of the proof.
\subsection{Building a graph via the distance function}
One of the difficulties of our framework
is that the~$s$-minimal surfaces we are dealing with
are not necessarily graphs. To get around this
problem, we follow an idea of~\cite{CC} and we consider
level sets of the distance function in an appropriate scaling
(this may be seen as a sup-convolution technique).
For this,
we recall the following classical
geometric observation
on the regularity of the level sets of
the distance function:
\begin{lemma}\label{CC1L}
Let~$E\subset{\mathbb R}^n$.
Assume that
\begin{equation}\label{Gt}
\{ x_n\le -\gamma\} \cap K_r\subseteq
E\cap K_r \subseteq \{ x_n\le \gamma\}
\cap K_r,\end{equation}
for some~$r>\gamma>0$.
Let~$\delta\in(0,r/4)$ and~$
{\mathcal{S}}^\pm:=\{ x\in {\mathbb R}^n {\mbox{ s.t. }} d_E(x)=
\pm\delta\}$.
Then, there exist~$c\in(0,1)$ and~$C\in(1,+\infty)$
such that if~$\gamma/\delta<c$
then~${\mathcal{S}}^\pm\cap K_{r-2\delta}$
is a Lipschitz graph in the~$n$th direction with
Lipschitz constant bounded by~$C\sqrt{\gamma/\delta}$.
Furthermore,~${\mathcal{S}}^-$
(resp.,~${\mathcal{S}}^+$) may be touched
at any point of~$K_{r-2\delta}$
by a tangent paraboloid from above (resp., below).
\end{lemma}
\begin{proof} We focus on~${\mathcal{S}}^-$, the
case of~${\mathcal{S}}^+$ being analogous.
We would like to show that
for any~$x$, $z\in {\mathcal{S}}^-\cap K_{r-2\delta}$
\begin{equation}\label{Ch7}
x_n-z_n\le C\sqrt{\frac{\gamma}{\delta}}\, |x'-z'|,
\end{equation}
from which the desired result follows
by possibly exchanging the roles of~$x$ and~$z$.
For this, we argue like this.
For any~$x\in{\mathcal{S}}^-\cap K_{r-2\delta}$,
the ball of radius~$\delta$
centered at~$x$
is tangent to~$\partial E$ at some point~$y(x)\in \partial E
\cap K_r$, and,
conversely,
\begin{equation}\label{y503}{\mbox{the ball
of radius~$\delta$
centered at~$y(x)$
is tangent to~${\mathcal{S}}^-$ at~$x$.}}
\end{equation}
Let~$e_n:=(0,\dots,1)$. Since~$x+\delta e_n\in B_{
\delta} (x)$, we have that~$x+\delta e_n$
must lie
in the closure of~$E$.
Hence,
by~\eqref{Gt},
\begin{equation}\label{1.3a}
x_n+\delta\le \gamma.\end{equation}
Similarly, since~$y(x)\in\partial E$, we obtain from~\eqref{Gt}
that
\begin{equation}\label{1.3a-b}
y_n(x)\ge -\gamma.\end{equation}
By~\eqref{1.3a} and~\eqref{1.3a-b},
\begin{equation}\label{X} {y_n(x)-x_n}
\ge \delta-{2\gamma}.\end{equation}
In the same way, we see that
\begin{equation}\label{Xz} {y_n(z)-z_n}
\ge \delta-{2\gamma}.\end{equation}
Now, if~$|x'-z'|\ge \sqrt{\gamma\delta}$, we use~\eqref{Gt}
and~\eqref{X}
to deduce that
\begin{eqnarray*}
&& x_n-z_n\le (x_n-y_n(x))+|y_n(x)|+|y_n(z)|+|y_n(z)-z_n|
\\&&\qquad\le
(2\gamma-\delta)+\gamma+\gamma+|y(z)-z|
\\&&\qquad\le
(2\gamma-\delta)+\gamma+\gamma+\delta\\
&& \qquad =4\gamma\le 4\sqrt{\frac{\gamma}{\delta}}\,|x'-z'|,
\end{eqnarray*}
which proves~\eqref{Ch7} in this case.
So, we may focus on the case in which
\begin{equation}\label{90c}
|x'-z'|\le\sqrt{\gamma\delta}.\end{equation}
Then, from~\eqref{Xz},
$$ \delta^2=|y(z)-z|^2=|y'(z)-z'|^2+|y_n(z)-z_n|^2
\ge |y'(z)-z'|^2+(\delta-2\gamma)^2,$$
which gives
\begin{equation}\label{90i}
|y'(z)-z'|\le 2\sqrt{\gamma\delta}.\end{equation}
Hence
\begin{equation*}
|x'-y'(z)|\le |x'-z'|+|z'-y'(z)|\le 3\sqrt{\gamma\delta},
\end{equation*}
due to~\eqref{90c}
and~\eqref{90i}, and so, in particular,
\begin{equation}\label{90ttt}
|x'-y'(z)|\le \frac{\delta}{100}.
\end{equation}
So, we can define
\begin{equation}\label{Xz2}
p:= \big( x', y_n(z)-\sqrt{\delta^2-|y'(z)-x'|^2}\big).
\end{equation}
We observe that
\begin{equation}\label{190ttt}
p\in \partial B_\delta(y(z)).\end{equation}
Also, from~\eqref{X}
and~\eqref{Gt},
$$ y_n(z)-x_n\ge y_n(z)-y_n(x)+\delta-2\gamma\ge\delta-4\gamma>0.$$
Therefore,
by~\eqref{90ttt}, we have that~$x$ must be below~$B_\delta(y(z))$,
hence~\eqref{190ttt}
implies that
\begin{equation}\label{78yy0}
x_n\le p_n.\end{equation}
Now, we define~$P:=(p-y(z))/\delta$ and~$Z:=(z-y(z))/\delta$.
We observe that~$P$, $Z\in \partial B_1$, due to~\eqref{190ttt}.
Also, $P_n$, $Z_n\le0$, due to~\eqref{Xz}
and~\eqref{Xz2}. Moreover, $|P'|+|Z'|\le 1/50$
thanks to~\eqref{90i}, \eqref{90ttt}
and~\eqref{Xz2}.
As a consequence
$$ |P_n-Z_n|\le 100\, |P'-Z'|^2.$$
By scaling back, this gives that
$$ |p_n-z_n|\le\frac{100}{\delta}\,|p'-z'|^2=
\frac{100}{\delta}\,|x'-z'|^2 \le
100\sqrt{\frac{\gamma}{\delta}} \,|x'-z'|,$$
where~\eqref{90c} was used once again.
{F}rom this and~\eqref{78yy0}, we infer that
$$ x_n-z_n\le p_n-z_n\le
100\sqrt{\frac{\gamma}{\delta}} \,|x'-z'|,$$
which gives~\eqref{Ch7} in this case too.
Then, the desired Lipschitz property is
a consequence of~\eqref{Ch7},
and the existence of a tangent paraboloid follows
from~\eqref{y503} (and, by~\eqref{X}, the touching occurs from above in this case).
\end{proof}
We point out that the Lipschitz bound $C\sqrt{\gamma/\delta}$
in Lemma \ref{CC1L} is optimal, as the example in Figure~1 shows.
\begin{figure}[htbp]
\begin{center}
\resizebox{13.2cm}{!}{\input{ligamma.pstex_t}}
{\caption{\it Optimality of the Lipschitz constant
$\sqrt{\gamma/\delta}=\gamma/\sqrt{\delta\gamma}$ in Lemma \ref{CC1L}.}}
\end{center}
\end{figure}
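This optimality can also be confirmed numerically. The following Python sketch (ours; the step-shaped boundary and all parameter values are illustrative) computes ${\mathcal{S}}^-$ as the boundary of the erosion of $E$ by a ball of radius $\delta$ and measures its maximal slope:
\begin{verbatim}
import numpy as np

gamma, delta = 0.01, 0.2
t = np.linspace(-delta, delta, 50001)
ball = np.sqrt(delta**2 - t**2)

def u_minus(a):
    # S^- as a graph: u^-(a) = min_{|t|<=delta} [g(a+t) - sqrt(delta^2-t^2)]
    # for E = {x_n <= g(x_1)}, here with g a step of height 2*gamma
    return np.min(gamma*np.sign(a + t) - ball)

x1 = np.linspace(-0.5, 0.5, 2001)
u = np.array([u_minus(a) for a in x1])
slope = np.max(np.abs(np.diff(u)/np.diff(x1)))
print(slope, np.sqrt(gamma/delta))   # maximal slope ~ 2*sqrt(gamma/delta)
\end{verbatim}
The maximal slope is attained where the eroded boundary climbs the step, over a horizontal distance of order $\sqrt{\gamma\delta}$, in agreement with the caption of Figure~1.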
A global version of Lemma~\ref{CC1L} is given by
the following result:
\begin{corollary}\label{CorCC}
Let~$E_\star\subseteq{\mathbb R}^n$. Suppose that~$\partial E_\star\cap K_2$
is a $C^{1,\alpha}$-graph in the $n$th direction, for some~$\alpha>0$,
and let~$M_\star$ be its $C^{1,\alpha}$-norm.
Then, there exists~$c_\star\in(0,1)$,
possibly depending on~$M_\star$,
such that the
following holds.
Let~$\gamma$, $\delta\in (0,1/4)$,~$E\subseteq{\mathbb R}^n$
and suppose that
\begin{equation}\label{n992}
{\mbox{$E\cap K_2$ lies in
a~$\gamma$-neighborhood of~$E_\star$.
}}\end{equation}
Let~$
{\mathcal{S}}^\pm:=\{ x\in {\mathbb R}^n {\mbox{ s.t. }} d_E(x)=
\pm\delta\}$.
Then,~${\mathcal{S}}^\pm\cap K_1$
is a Lipschitz graph in the~$n$th direction,
provided that~$\gamma/\delta<c_\star$, $\delta<c_\star
\gamma^{1/(1+\alpha)}$
and~$\gamma<c_\star$.
More precisely, there exists a constant~$C>1$ for which~$
{\mathcal{S}}^\pm\cap K_1$
is a Lipschitz graph in the~$n$th direction and
the Lipschitz norm of~${\mathcal{S}}^\pm\cap K_1$ is
controlled by~$C\sqrt{\gamma/\delta}+M_o$, where~$M_o$
is the Lipschitz norm
of~$\partial E_\star\cap K_2$.
Furthermore,~${\mathcal{S}}^-$
(resp.,~${\mathcal{S}}^+$) may be touched
at any point of~$K_{1-2\delta}$
by a tangent paraboloid from above (resp., below).
Finally, writing~${\mathcal{S}}^\pm\cap K_1$ as graphs~$x_n=u^\pm(x')$, for any~$|x'|\le 1/2$,
\begin{equation}\label{s8822211a}
u^+(x')-u^-(x')\le 2(2+M_o)(\gamma+\delta)
.\end{equation}
\end{corollary}
\begin{proof} Since~$\partial E_\star\cap K_2$
is~$C^{1,\alpha}$, it separates with power~$(1+\alpha)$
from its tangent hyperplane, with
multiplicative constant~$M_\star$.
Then, we take~$r:=
(\gamma/M_\star)^{1/(1+\alpha)}$ and we
cover~$\partial E_\star\cap K_2$
with cylinders~$K_r$, centered at points of~$\partial E_\star$
and rotated parallel to the
tangent plane of~$\partial E_\star$.
By construction,
in each of these cylinders, $\partial E_\star$ separates
no more than~$M_\star r^{1+\alpha}=\gamma$ from its
tangent hyperplane,
and so~$E$ is~$2\gamma$-close
to such hyperplane. Therefore, Lemma~\ref{CC1L}
applies (with~$\gamma$ there replaced by~$2\gamma$).
Consequently, in each of these cylinders,~${\mathcal{S}}^\pm$
is a Lipschitz graph with respect to the normal
direction~$\nu$ of~$\partial E_\star$ (and its Lipschitz norm
is bounded by~$C\sqrt{\gamma/\delta}$ with respect to~$\nu$).
This proves the first part of Corollary~\ref{CorCC}.
It remains to prove~\eqref{s8822211a}.
For this, we fix~$|\bar x'|\le 1/2$
and we set~$P^\pm:=(\bar x',u^\pm(\bar x'))\in{\mathcal{S}}^\pm$.
Then, we take~$Q^\pm \in \partial E$ that realizes the distance,
i.e. $|P^\pm-Q^\pm|=\delta$.
By~\eqref{n992}, we find points~$R^\pm\in \partial E_\star$ such
that~$|R^\pm-Q^\pm|\le\gamma$.
Notice that
$$ |(R^\pm)' - (P^\pm)'|\le |(R^\pm)' - (Q^\pm)'|+
|(Q^\pm)' - (P^\pm)'|\le \gamma+\delta.$$
Therefore, since~$(P^+)'=(P^-)'=\bar x'$,
$$ |(R^+)' - (R^-)'|\le |(R^+)'-(P^+)'|+|(P^-)'-(R^-)'|\le
2 (\gamma+\delta).$$
So, since~$\partial E_\star$
is a Lipschitz graph,
$$ |R^+_n - R^-_n|\le M_o|(R^+)' - (R^-)'|
\le2 M_o(\gamma+\delta).$$
In particular,
$$ |R^+-R^-|\le 2(1+M_o)(\gamma+\delta)$$
and so
\begin{eqnarray*}
&& |P^+-P^-|
\\ &\le& |P^+-Q^+|+|Q^+-R^+|+|R^+-R^-|
+|R^--Q^-|+|Q^--P^-|
\\ &\le&
2(1+M_o)(\gamma+\delta)+2\gamma+2\delta,
\end{eqnarray*}
which gives~\eqref{s8822211a}.
\end{proof}
\subsection{Detachment from the tangent hyperplane}\label{S2p}
The next result is one of the cornerstones of our
procedure since it manages
to reconstruct a geometry similar to the one
obtained in Lemma~8.1 of~\cite{CS}. In spite of
its technical flavor, it basically states
under which conditions we can say that
a function separates from
its tangent hyperplane
quadratically in a ring, independently of~$s$
as~$s\rightarrow 1^-$.
\begin{lemma}\label{S LEM} Fix~$\overline C\ge1$.
Let~$\varepsilon$, $R>0$ and~$\bar x'\in {\mathbb R}^{n-1}$.
Let~$u:{\mathbb R}^{n-1}\rightarrow{\mathbb R}$ be a Lipschitz function, with
\begin{equation}\label{gradient}
|\nabla u (x')|\le \overline C
\end{equation}
a.e.~$|x'-\bar x'|\le R$
and let~$\bar x_n:=u(\bar x')$, $\bar x:=(\bar x',\bar x_n)$
and~$E:=\{ x_n<u(x')\}$.
Assume that
\begin{equation}\label{the eq}
(1-s)\int_{B_{R}(\bar x)}\frac{\chi_E(y)-\chi_{\CC E}(y)}{|
\bar x-y|^{n+s}}\,dy
\le \frac{\varepsilon}{R^s}.
\end{equation}
Suppose that there exists~$\PPara\in C^{1,1}({\mathbb R}^{n-1})$
such that
\begin{equation}\label{C1.1}
|\nabla \PPara(x')|+R\,|D^2\PPara(x')|\le\varepsilon
\end{equation}
a.e.~$|x'-\bar x'|\le R$,
\begin{equation}\label{above gamma}
{\mbox{$\PPara(\bar x')=u(\bar x')$
and $\PPara(x')\le u(x')$
in $|x'-\bar x'|\le R$.}}
\end{equation}
Then, there exists a constant~$C\ge1$, only depending
on~$n$ and~$\overline C$,
such that\footnote{The reader may compare~\eqref{star3}
here and~(8.1) in~\cite{CS}.
Notice that such an estimate, roughly speaking,
says that $u$ separates quadratically from
its tangent hyperplane in a ring, up
to a set with small density -- and the the constants
are independent of~$s$.
{F}rom this, a general geometric argument implies
a uniform quadratic detachment
in a whole ball with smaller radius
(see~(8.2) and~(8.3) in~\cite{CS}) and consequently
a linear bound on the image of the subdifferential
of the convex envelope (see~(8.4) in~\cite{CS}),
and this is the necessary ingredient
for the Alexandrov-Bakelman-Pucci theory to work
(see Sections~8, 9 and~10 in~\cite{CS}).
In our framework,~$u$ will be the level set of the distance
from an~$s$-minimal surface: we will add to
it the auxiliary function
of Section~\ref{S COM}
and consider the touching point of the convex envelope.
These points, by construction are touched from below
by a hyperplane, so~$u$ is touched from below
by a smooth function, which motivates the setting
of Lemma~\ref{S LEM}.}
the following result
holds, as long as~$\varepsilon\in(0,1/C)$.
There exists an~$(n-1)$-dimensional
ring~$S_r:=\{ |x'-\bar x'|\in (r/C, r)\}$, with~$r\in(0,R]$,
such that, for any~$M>0$
we have
\begin{equation}\label{star3}
\frac{ \left| S_r \cap \Big\{
u(x')-\bar x_n-\nabla\PPara(\bar x')\cdot (x'-\bar x')>
\displaystyle\frac{
M\varepsilon r^2}{R}
\Big\}\right|}{\big| S_r\big|} \,\le\,
\frac{C}{M} .\end{equation}
\end{lemma}
\begin{proof} We consider the normal vector of
the graph of~$\PPara$
at~$\bar x'$, to wit
$$ \nu:=\frac{(-\nabla\PPara(\bar x'),1)}{
\sqrt{|\nabla\PPara(\bar x')|^2+1}}.$$
Let also
\begin{eqnarray*}
P&:=&\{ x_n<\PPara(x')\},\\
L&:=&\{x_n<\nabla\PPara(\bar x')\cdot (x'-\bar x')+\bar x_n\}
\\ {\mbox{and }}\;
A&:=&\bar x+\left\{ |x\cdot \nu|\le \frac{4\varepsilon}{R}
|\pi_\nu x|^2\right\}.\end{eqnarray*}
We recall that~$\pi_\nu$
is the projection along~$\nu$ (see Section~\ref{notation})
and we
notice that~$A$ is just the translation and the
rotation of the set
$$ \left\{ |x_n|\le \frac{4\varepsilon}{R}
|x'|^2 \right\}$$
and so, for any~$\rho>r>0$,
\begin{equation}\label{ST7}
\int_{B_\rho(\bar x)\setminus B_r(\bar x)}
{\chi_A(y)}\,dy
\le
\int_{|y'|\le \rho}\left[ \int_{|y_n|\le ({4\varepsilon}/{R})
|y'|^2}\,dy_n \right]\,dy'
\le\frac{C\varepsilon\rho^{n+1}}{ R}.
\end{equation}
On the other hand, since $L$ is a halfspace passing
through~$\bar x$,
the following cancellations hold:
\begin{equation}\label{A one}
\int_{B_\rho(\bar x)\setminus B_r(\bar x)}
{\chi_L(y)-\chi_{\CC L}(y)}\,dy =0
\;{\mbox{ and }}\;
\int_{B_\rho(\bar x)\setminus B_r(\bar x)}
\frac{\chi_L(y)-\chi_{\CC L}(y)}{|\bar x-y|^{n+s}}\,dy =0.
\end{equation}
Moreover, by~\eqref{above gamma}, we have that~$P\subseteq E$, thus
\begin{equation}\label{pre2.9a}
\chi_E\ge\chi_P
\;{\mbox{ and so }}\;
\chi_{\CC E}\le
\chi_{\CC P}.
\end{equation}
Also, the quadratic detachment of~$\PPara$ from
its tangent plane given by~\eqref{C1.1} implies that~$
(L\setminus A)\cap B_R\subseteq P\cap B_R$ and $(\CC P)
\cap B_R\subseteq((\CC L)\cup A)\cap B_R$.
Therefore, in~$B_R$,
\begin{equation}\label{pre2.9}
{\mbox{$\chi_L-\chi_A\le \chi_{L\setminus A}
\le\chi_P$ and~$\chi_{\CC P}\le \chi_{(\CC L)\cup A}
\le \chi_{\CC L}+\chi_A$.}}\end{equation}
So, from~\eqref{pre2.9a}
and~\eqref{pre2.9}, we obtain that,
in~$B_R$,
\begin{equation}\label{CHI}\chi_E-\chi_{\CC E}\ge
\chi_P-\chi_{\CC P}\ge \chi_L-\chi_{\CC L} -2\chi_A.
\end{equation}
Now, for any~$m\in{\mathbb N}$, let
\begin{eqnarray*}
r_m&:=&\frac{R}{\big( (2+\overline C) n\big)^{m}},\\
R_m&:=& B_{r_m}(\bar x)\setminus B_{r_{m+1}}(\bar x)\\
{\mbox{and }}
b_m&:=&\int_{R_m}\frac{\chi_E(y)-\chi_{\CC E}(y)}{|
\bar x-y|^{n+s}}\,dy.\end{eqnarray*}
Here above~$\overline C$ is the one fixed in the statement of
Lemma~\ref{S LEM}.
We claim that there exists~$m\in{\mathbb N}$ such that
\begin{equation}\label{the 1st}
b_m\le \frac{C_o\varepsilon r_m^{1-s}}{R} ,
\end{equation}
for a suitable constant~$C_o\ge1$.
The proof is by contradiction: if not, we have
\begin{equation*}\begin{split}
& \int_{B_R(\bar x)}\frac{\chi_E(y)-\chi_{\CC E}(y)}{|
\bar x-y|^{n+s}}\,dy\\
& \qquad=\sum_{m=0}^{+\infty} b_m
\ge
\frac{C_o\varepsilon}{R} \sum_{m=0}^{+\infty} r_m^{1-s}=
\frac{C_o\varepsilon}{R^{s}} \sum_{m=0}^{+\infty} \big(
(2+\overline C)n
\big)^{-(1-s)m}
\\ &\qquad =
\frac{C_o\varepsilon}{R^{s}}\cdot\frac{1}{1-\big(
(2+\overline C) n
\big)^{-(1-s)}}
>\frac{C_o\varepsilon}{R^{s}}\cdot\frac{1}{C(1-s)}
\end{split}\end{equation*}
for some~$C>0$.
This is in contradiction with~\eqref{the eq} if~$C_o$ is large,
and so~\eqref{the 1st} is established. {F}rom now on,~$m$
will be the one given by~\eqref{the 1st}, and~$C_o$ will be simply~$C$
(and, as usual, we will take the freedom of renaming $C$ line after line).
Now, we make use of~\eqref{CHI}, \eqref{A one} and~\eqref{ST7}
to obtain that
\begin{equation*}\begin{split}
& \int_{R_m} \Big( \chi_E(y)-\chi_{\CC E}(y)\Big)
\left( \frac{1}{|\bar x-y|^{n+s}}
-\frac{1}{r_m^{n+s}} \right)
\,dy\\ &\qquad\ge
\int_{R_m} \Big( \chi_L(y)-\chi_{\CC L}(y)-2\chi_A(y)\Big)
\left( \frac{1}{|\bar x-y|^{n+s}}
-\frac{1}{r_m^{n+s}} \right)
\,dy\\ &\qquad=
\int_{R_m} \Big( -2\chi_A(y)\Big)
\left( \frac{1}{|\bar x-y|^{n+s}}
-\frac{1}{r_m^{n+s}} \right)\,dy\\ &\qquad\ge
-2
\int_{R_m}
\frac{\chi_A(y)}{|\bar x-y|^{n+s}}\,dy
\\ &\qquad \ge-\frac{C}{r_m^{n+s}}
\int_{R_m} \chi_A(y)\,dy
\\ &\qquad \ge-\frac{C\varepsilon r_m^{1-s}}{ R}.
\end{split}\end{equation*}
Combining this
with~\eqref{the 1st}, we conclude that
\begin{equation*}\begin{split}
\frac{|E\cap R_m|-|(\CC E)\cap R_m|}{r_m^{n+s}}
\,=& \,\int_{R_m}\frac{\chi_E(y)-\chi_{\CC E}(y)}{r_m^{n+s}}\,dy
\\ =& \,b_m-
\int_{R_m} \Big( \chi_E(y)-\chi_{\CC E}(y)\Big)
\left(\frac{1}{|\bar x-y|^{n+s}}
-\frac{1}{r_m^{n+s}} \right)
\\ \le& \frac{C\varepsilon r_m^{1-s}}{R}
\end{split}\end{equation*}
that is
\begin{equation}\label{star1} |E\cap R_m|-|(\CC E)\cap R_m|\le
\frac{C\varepsilon
r_m^{n+1}}{R}.\end{equation}
Now we prove
that
\begin{equation}\label{star2}
\int_{ \{ r_{m+1}\le |x'-\bar x'|\le r_m/(\overline C\sqrt n) \} }
u(x')-\bar x_n-\nabla\PPara(\bar x')\cdot (x'-\bar x')\, dx'
\le
\frac{C\varepsilon r_m^{n+1}}{R}.
\end{equation}
To this scope, we observe that
$$ K_{r_m/\sqrt{n} }\subseteq B_{r_m}\subseteq K_{r_m}$$
and~$r_{m+1}< r_m/(\overline C\sqrt{n})$. Hence
\begin{equation}\label{9-11}
S_m:=\big\{ r_{m+1}< |x'-\bar x'|< r_m/\sqrt{n}\big\} \times
\big\{ |x_n-\bar x_n|< r_m/\sqrt{n}\big\} \,\subseteq \,R_m.
\end{equation}
Of course, no confusion should arise between~$S_m$ here and~$S_r$
in the statement of Lemma~\ref{S LEM}.
Let~$\alpha:=\chi_E-\chi_{L}=\chi_{\CC L}-
\chi_{\CC E}$. We recall that
\begin{equation}\label{9-11-bis}
{\mbox{$\alpha+\chi_A\ge 0$ in $R_m$,}}
\end{equation}
due to~\eqref{pre2.9a} and~\eqref{pre2.9}.
Accordingly,
by~\eqref{ST7}, \eqref{A one},
\eqref{9-11} and~\eqref{9-11-bis},
\begin{equation}\label{9-12}\begin{split}
&|E\cap R_m|-|(\CC E)\cap R_m|
\\ =\,& \int_{R_m} {\chi_E(y)-\chi_{\CC E}(y)} \,dy -0
\\ =\,& \int_{R_m} {\chi_E(y)-\chi_{\CC E}(y)} \,dy
-\int_{R_m} {\chi_{L}(y)-\chi_{\CC L}(y)} \,dy
\\ =\,&
2 \int_{R_m} {\alpha(y)}\,dy\\ =\,&
2 \int_{R_m} {\alpha(y)}+\chi_A(y) \,dy-2\int_{R_m}\chi_A(y)\,dy
\\ \ge\,& 2 \int_{S_m} {\alpha(y)}+\chi_A(y)
\,dy-\frac{C\varepsilon r_m^{n+1}}{R}.
\end{split}\end{equation}
Now,
we use~\eqref{gradient} and~\eqref{C1.1}
to see that, if~$|y'-\bar x'|<r_m/
(\overline C\sqrt n)$, we have
\begin{equation}\label{9-13}\begin{split}
& |\nabla\PPara(\bar x')\cdot(y'-\bar x')|\le
|y'-\bar x'|<r_m/\sqrt{n} \\ {\mbox{and }}\;&
|u(y')-\bar x_n| =|u(y')-u(\bar x')|\le \overline C
|y'-\bar x'|<r_m/\sqrt{n}.
\end{split}\end{equation}
Hence, fixed~$y'$, with~$|y'-\bar x'|\in \big(r_{m+1}, r_m/
(\overline C\sqrt{n})\big)$
we see that~$\alpha(y',y_n)=1$
when~$(y',y_n)$ is trapped between~$E$ and~$\CC L$
(notice that it cannot exit~$S_m$ from either the
top or the bottom, by~\eqref{9-13}), i.e.,
when
$$\bar x_n+\nabla\PPara(\bar x')\cdot (y'-\bar x')\le y_n<u(y').$$
So, recalling~\eqref{9-11-bis}
and integrating first in~$dy_n$, we have that
$$ \int_{S_m} {\alpha(y)}+\chi_A(y) \,dy \,\ge\,
\int_{ \big\{|y'-\bar x'|\in (r_{m+1}, r_m/(\overline C\sqrt{n}))
\big\} }\Big(
u(y') - \bar x_n-\nabla\PPara(
\bar x')\cdot(y'-\bar x')\Big)^+\,dy'.$$
This,~\eqref{9-12} and~\eqref{star1} imply~\eqref{star2}.
Then,~\eqref{star3} follows from~\eqref{star2}
and the Chebyshev Inequality, taking~$r:=r_m/(\overline C\sqrt{n})$,
$S_r:=
\{ |x'-\bar x'|\in (r_{m+1}, r_m/(\overline C\sqrt{n}))\}$
and noticing that~$|S_r|
\sim r_m^{n-1}$ (remember that~$S_r\subset{\mathbb R}^{n-1}$).
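For the reader's convenience, we unpack the Chebyshev step: denoting
by~$f\ge0$ the integrand in~\eqref{star2} and taking a level of the
form~$\lambda:=M\varepsilon r_m^2/R$, with~$M$ large, we have
$$ \big|S_r\cap\{f>\lambda\}\big|\le\frac1\lambda\int_{S_r}f(x')\,dx'
\le\frac{R}{M\varepsilon r_m^2}\cdot\frac{C\varepsilon r_m^{n+1}}{R}
=\frac{C r_m^{n-1}}{M}\le\frac{C\,|S_r|}{M}. $$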
\end{proof}
\subsection{The mean curvature as a limit equation}
In this section, we show that the integral equation
of~$s$-minimal surfaces converges, in a somewhat
uniform way, to the classical mean curvature equation
as~$s\rightarrow1^-$,
and we remark that the estimates
improve as the surfaces get flatter
and flatter (see~\cite{Aba} for a more detailed discussion
on nonlocal curvatures).
An estimate of this kind will be
useful in the computation of the
forthcoming Lemma~\ref{barrier}.
\begin{lemma}\label{limit curvature}
Let~$s\in[1/10,1)$.
Let~$\alpha\in(0,1)$.
Let~$F\subset{\mathbb R}^n$,
$x_o\in\partial F$,
and suppose that~$\partial F\cap
B_1(x_o)$
is a $C^{2,\alpha}$-graph in some direction,
with~$C^{2,\alpha}$-norm bounded by some~$M>0$.
Then, there exists~$C\ge 1$, only depending on~$\alpha$
and~$n$, such that
\begin{equation}\label{LAC1}
\left|{\mathcal{H}}(x_o)-\frac{(n-1)(1-s)}{\varpi}
\int_{B_r}\frac{\chi_F(y)-
\chi_{\CC F}(y)}{|x_o-y|^{n+s}}\,dy\right|\le
\frac{CM(1-s)}{r},
\end{equation}
where~${\mathcal{H}}$ is the mean curvature (see Section~\ref{notation}) and
\begin{equation}\label{r definition}
r:=\min\left\{\frac{1}n,\frac{1}{2M}\right\}.
\end{equation}
In particular, if~$M\in (0,1]$,
\begin{equation}\label{LAC3}
\left|{\mathcal{H}}(x_o)-\frac{(n-1)(1-s)}{\varpi}
\int_{B_r}\frac{\chi_F(y)-
\chi_{\CC F}(y)}{|x_o-y|^{n+s}}\,dy\right|\le
{C M(1-s)}.
\end{equation}
\end{lemma}
\begin{proof} Without loss of generality, up to
a translation and a rotation, which leave
our problem invariant, we may take~$x_o=0$
and the tangent hyperplane of~$\partial F$ at~$0$
to be~$\{x_n=0\}$. In this way, we write~$\partial F$
as the graph~$x_n=g(x')$, for~$|x'|\le
1/\sqrt{n}$,
with~$\nabla g(0)=0$ and~${\mathcal{H}}(0)=\Delta g(0)$.
Up to a rotation of the horizontal coordinates, we also
suppose that~$D^2 g(0)$ is diagonal, with
eigenvalues~$\lambda_1,\dots,\lambda_{n-1}$.
In this way
$$ g(y')=\frac{1}{2}\sum_{i=1}^{n-1}\lambda_i y_i^2 +h(y'),$$
and~$|h(y')|\le M|y'|^{2+\alpha}$.
So, for any~$|y'|\le r$,
\begin{equation}\label{in trap}
|g(y')|\le Mr^2\le \frac{r}{2},\end{equation}
thanks to~\eqref{r definition}.
We observe that, by rotational symmetry,
$$ \int_{\{|y'|\le r\}} y_j^2\,|y'|^{-(n+s)}\,dy'=
\int_{\{|y'|\le r\}} y_1^2\,|y'|^{-(n+s)}\,dy'$$
for any~$j=1,\dots,n-1$ and therefore, by summing up in~$j$,
\begin{equation*}\begin{split}
& \frac{\varpi r^{1-s}}{1-s}=
\int_{\{|y'|\le r\}} |y'|^{2-(n+s)}\,dy'\\
&\qquad=(n-1) \int_{\{|y'|\le r\}} y_1^2\,|y'|^{-(n+s)}\,dy'
=(n-1) \int_{\{|y'|\le r\}} y_i^2\,|y'|^{-(n+s)}\,dy'\end{split}
\end{equation*}
for any~$i=1,\dots,n-1$.
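For the reader's convenience, we remark that the first identity above is
just an integration in polar coordinates in~${\mathbb R}^{n-1}$,
with~$\varpi$ the $(n-2)$-dimensional measure of the unit sphere
of~${\mathbb R}^{n-1}$:
$$ \int_{\{|y'|\le r\}} |y'|^{2-(n+s)}\,dy'=
\varpi\int_0^r \rho^{2-(n+s)}\,\rho^{n-2}\,d\rho=
\varpi\int_0^r \rho^{-s}\,d\rho=\frac{\varpi\, r^{1-s}}{1-s}. $$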
Therefore
\begin{equation}\label{cure}
\int_{\{|y'|\le r\}} \sum_{i=1}^{n-1}\lambda_i
y_i^2\,|y'|^{-(n+s)}\,dy'=\frac{\varpi r^{1-s}{\mathcal{H}}(0)}{
(n-1)(1-s)}.
\end{equation}
Let now
$$ G_s(\tau):=\int_0^\tau \frac{dt}{(1+t^2)^{(n+s)/2}}.$$
We observe that~$G_s(0)=0$, $G'_s(0)=1$ and~$|G_s''(\tau)|
=(n+s)(1+\tau^2)^{-(n+s+2)/2} |\tau|\le (n+1)|\tau|$.
Therefore, a Taylor expansion gives
$$ G_s(\tau)=\tau+\widetilde G_s(\tau),$$
with~$|\widetilde G_s(\tau)|\le C|\tau|^3$.
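The cubic bound on the remainder follows, for instance, from the Taylor
formula with Lagrange remainder:
$$ |\widetilde G_s(\tau)|=\big|G_s(\tau)-G_s(0)-G_s'(0)\,\tau\big|\le
\frac{\tau^2}{2}\,\sup_{|t|\le|\tau|}|G_s''(t)|\le\frac{(n+1)\,|\tau|^3}{2}. $$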
Therefore, if we write
$$ \widetilde g(y'):=
\frac{g(y')}{|y'|}= \frac{1}{2|y'|}\sum_{i=1}^{n-1} \lambda_i y_i^2
+\widetilde h(y')$$
with~$|\widetilde h(y')|=|h(y')|/|y'|\le
M|y'|^{1+\alpha}$, we have that
\begin{equation*}
\begin{split}
G_s(\widetilde g(y'))&=\widetilde g(y')+\widetilde G_s
(\widetilde g(y'))
\\ &=\frac{1}{2|y'|}\sum_{i=1}^{n-1} \lambda_i y_i^2
+\widetilde h(y')
+\widetilde G_s(\widetilde g(y'))
\\ &=\frac{1}{2|y'|}\sum_{i=1}^{n-1} \lambda_i y_i^2
+\ell(y'),
\end{split}\end{equation*}
with
$$ |\ell(y')|\le |\widetilde h(y')|+C|\widetilde g(y')|^3\le
CM(|y'|^{1+\alpha}+|y'|^3)\le CM|y'|^{1+\alpha}$$
for any~$|y'|\le r$.
As a consequence of this and~\eqref{cure},
\begin{equation}\label{babel1}
\int_{\{ |y'|\le r\}} \frac{G_s(\widetilde g(y'))}{|y'|^{n+s-1}}\,dy'
=\frac{\varpi r^{1-s}{\mathcal{H}} (0)}{2(n-1)(1-s)}+\varepsilon_1
\end{equation}
with~$|\varepsilon_1|\le CM r^{1+\alpha-s}/(1+\alpha-s)\le CM$.
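Here, the bound on~$\varepsilon_1$ follows once more from polar
coordinates, since~$|\ell(y')|\le CM|y'|^{1+\alpha}$:
$$ |\varepsilon_1|\le\int_{\{|y'|\le r\}}
\frac{CM|y'|^{1+\alpha}}{|y'|^{n+s-1}}\,dy'
=CM\varpi\int_0^r\rho^{\alpha-s}\,d\rho
=\frac{CM\varpi\,r^{1+\alpha-s}}{1+\alpha-s}, $$
which is bounded by~$CM$, since~$r\le1$ and~$1+\alpha-s\ge\alpha$ (recall
that~$C$ is allowed to depend on~$\alpha$ and~$n$).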
Now, since the map~$(0,+\infty)\ni
t\mapsto 1-e^{-t}$ is concave, we have that~$1-e^{-t}
\in[0,t]$, hence
$$ 1-r^{1-s}\in \big[ 0,(1-s)\log r^{-1}\big].$$
Accordingly, we may write~\eqref{babel1} as
\begin{equation}\label{Bab2}
\int_{\{ |y'|\le r\}} \frac{G_s(
\widetilde g(y'))}{|y'|^{n+s-1}}\,dy'
=\frac{\varpi {\mathcal{H}} (0)}{2(n-1)(1-s)}+\varepsilon_2
\end{equation}
with~$|\varepsilon_2|\le CM(1+\log r^{-1})$.
Now, we recall~\eqref{in trap}, we integrate in the
vertical coordinate and we substitute~$t:=y_n/|y'|$
to obtain that
\begin{eqnarray*}
&&\int_{K_r}\frac{\chi_F(y)-\chi_{\CC F}(y)}{|y|^{n+s}}
\,dy
\\ &=&\int_{|y'|\le r} \left[
\int_{-r}^{g(y')}\frac{dy_n}{(|y'|^2+|y_n|^2)^{(n+s)/2}}
-\int_{g(y')}^r\frac{dy_n}{(|y'|^2+|y_n|^2)^{(n+s)/2}}
\right]\,dy'
\\ &=& \int_{|y'|\le r} \frac{1}{|y'|^{n+s}}\left[
\int_{-r}^{g(y')}\frac{dy_n}{(1+(|y_n|/|y'|)^2)^{(n+s)/2}}
-\int_{g(y')}^r\frac{dy_n}{(1+(|y_n|/|y'|)^2)^{(n+s)/2}}
\right]\,dy'
\\ &=&\int_{|y'|\le r} \frac{1}{|y'|^{n+s-1}}\left[
\int_{-r/|y'|}^{\widetilde
g(y')}\frac{dt}{(1+t^2)^{(n+s)/2}}
-\int^{r/|y'|}_{\widetilde
g(y')}\frac{dt}{(1+t^2)^{(n+s)/2}}
\right]\,dy'
\\ &=&\int_{|y'|\le r} \frac{1}{|y'|^{n+s-1}}\left[
G_s(\widetilde g(y'))-G_s(-r/|y'|)
-G_s({r/|y'|})+G_s({\widetilde
g(y')}) \right]\,dy'.
\end{eqnarray*}
Therefore, since~$G_s$ is odd,
\begin{equation}\label{78deii3ejjjjejej}
\int_{K_r}\frac{\chi_F(y)-\chi_{\CC F}(y)}{|y|^{n+s}}
\,dy=2\int_{|y'|\le r} \frac{G_s(\widetilde g(y'))}{|y'|^{n+s-1}}
\,dy'=
\frac{\varpi {\mathcal{H}} (0)}{(n-1)(1-s)}+\varepsilon_3
\end{equation}
with~$|\varepsilon_3|\le CM(1+\log r^{-1})$, due to~\eqref{Bab2}.
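Here, the oddness of~$G_s$ was used to cancel the boundary terms in the
square bracket:
$$ -G_s(-r/|y'|)-G_s(r/|y'|)=G_s(r/|y'|)-G_s(r/|y'|)=0, $$
so that only the term~$2\,G_s(\widetilde g(y'))$ survives.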
Now, we point out the following cancellation:
\begin{eqnarray*}
&& \left|\int_{K_r\setminus B_r}
\frac{\chi_F(y)-\chi_{\CC F}(y)}{|y|^{n+s}}
\,dy\right|\le\int_{(K_r\setminus K_{r/\sqrt n}
)\cap \{|y_n|\le M|y'|\}}
\frac{1}{|y|^{n+s}}
\,dy
\\ &&\qquad\le
CM\int_{r/\sqrt n}^r \rho^{-1-s}\,d\rho
=\frac{CM(n^{s/2}-1)}{s r^s}\le \frac{CM}{r}.
\end{eqnarray*}
Accordingly, we can write~\eqref{78deii3ejjjjejej} as
$$ \int_{B_r}\frac{\chi_F(y)-\chi_{\CC F}(y)}{|y|^{n+s}}
\,dy=
\frac{\varpi {\mathcal{H}} (0)}{(n-1)(1-s)}+\varepsilon_4
$$
with~$|\varepsilon_4|\le CM(1+\log r^{-1}+r^{-1})\le CMr^{-1}$.
This proves~\eqref{LAC1}.
Then,~\eqref{LAC3} follows from~\eqref{LAC1} and~\eqref{r definition},
by observing that, if~$M\in(0,1]$, then~$1/(2M)\ge 1/2\ge 1/n$, hence~$r=1/n$
does not depend on~$M$.
\end{proof}
\subsection{Construction of an
auxiliary function}\label{S COM}
The purpose of this section is
to obtain a special function, which is positive
in a large ball, and that
satisfies the correct inequality with respect
to the integral operator of~\eqref{the eq}
in a smaller ball.
This is needed to apply an appropriate variation of the
local Alexandrov-Bakelman-Pucci theory
of~\cite{Cabre, CS}, in order to localize the set
in which the solution we are considering becomes
positive. Indeed, the following function
is the one that replaces the
auxiliary functions
in Lemma~4.1 of~\cite{Cabre} and
Corollary~9.3 of~\cite{CS} for our framework
(here, some technical complications
also arise since the operator in~\eqref{6gghhssh}
is both nonlocal and nonlinear in its dependence on the sets):
\begin{lemma}\label{barrier} Fix~$R>0$
and constants~$c_1,\dots,c_5>0$. Fix also~$c_0\in (0,c_1)$.
There exists~$C\ge1$ (possibly depending on~$c_0,\dots,c_5$
but independent of~$R$)
such that,
if~$1-s$, $\varepsilon\in(0,1/C]$, the following results hold.
There exists~$\Phi\in
C^\infty({\mathbb R}^{n-1},[-C\varepsilon R,C\varepsilon R])$
satisfying the following conditions:
\begin{equation}\label{901212}
\begin{split}
&{\mbox{$\Phi(x')>\varepsilon R$ if~$|x'|\ge (c_1+c_2) R$,
$\Phi(x')\le-4\varepsilon R$
if~$|x'|\le c_1 R$, and}}\\ &\qquad
\sup_{{\mathbb R}^{n-1}}|\nabla\Phi|+
R\,|D^2\Phi|\le {C\varepsilon}. \end{split}\end{equation}
Also, let~$L$ be an affine function with
\begin{equation}\label{D L}
|\nabla L|\le\frac{1}{C},
\end{equation}
set
\begin{equation}\label{789sd90djwwjjdfff1}{\mbox{
$\tilde\Phi:=L-\Phi$ and~$F:=\{ x_n<\tilde\Phi(x')\}$.}}
\end{equation} Then
\begin{equation}\label{6gghhssh}
(1-s) \int_{B_{c_3 R}(x)}\frac{\chi_{F}(y)-
\chi_{\CC F}(y)}{|x-y|^{n+s}}\,dy\ge\frac{c_4\varepsilon}{R^s}
\end{equation}
for any~$x\in \partial F\cap \{c_0 R < |x'|\le (c_1+c_2+c_5)R\}$.
\end{lemma}
\begin{proof} Up to replacing~$\Phi(x')$ with~$R\Phi(x'/R)$,
we may and do consider just the
case~$R=1$.
Then, the function we will construct is depicted in Figure~2.
\begin{figure}[htbp]
\begin{center}
\resizebox{11.2cm}{!}{\input{fig0.pstex_t}}
{\caption{\it The auxiliary function~$\Phi$ (with~$R=1$).}}
\end{center}
\end{figure}
More explicitly, we
take~$\Phi$ to be smooth, radial,
radially increasing, satisfying~\eqref{901212}
with~$R=1$, and in fact
$$\|\Phi\|_{C^{2,\alpha}({\mathbb R}^{n-1})}
\le C(1+\mu_q)\varepsilon,$$ and such
that
\begin{equation*}
\Phi(x')=\varepsilon\left(\frac{c_0^q\mu_q}{c_1^q}-4-\frac{c_0^q\mu_q}{|x'|^q}\right)
\end{equation*}
if~$|x'|>c_0$. Here, $q>n-3$ is a fixed parameter and~$\mu_q>0$
will be chosen appropriately large\footnote{At the moment we only need that~$\mu_q$
is so large that
$$ c_0^q \mu_q \left(\frac{1}{c_1^q}-\frac{1}{(c_1+c_2)^q}\right)\ge6.$$
In this way, if~$|x'|\ge c_1+c_2$, then
$$ \Phi(x')\ge\varepsilon\left( \frac{c_0^q\mu_q}{c_1^q}-4-\frac{c_0^q\mu_q}{(c_1+c_2)^q}\right)\ge2\varepsilon$$
that gives~\eqref{901212}.}
at the end of the proof.
We observe that, if~$|x'|>c_0$,
\begin{eqnarray*}
|\partial_i\Phi|&\le& {\varepsilon q\mu_q}c_0^q\,|x'|^{-q-1},\\
|\partial^2_{ij}\Phi|&\le&{\varepsilon q (q+3)\mu_q
}c_0^q\,|x'|^{-q-2}
\\{\mbox{and }}\;
-\Delta\tilde\Phi=\Delta\Phi &=&-{\varepsilon q(q-n+3)\mu_q}\,
c_0^q|x'|^{-q-2}.
\end{eqnarray*}
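The last identity is a consequence of the expression of the Laplacian of
radial functions in~${\mathbb R}^{n-1}$: setting~$\rho:=|x'|$
and~$f(x'):=\rho^{-q}$, we have
$$ \Delta f=\partial^2_\rho f+\frac{n-2}{\rho}\,\partial_\rho f
=q(q+1)\,\rho^{-q-2}-(n-2)\,q\,\rho^{-q-2}=q\,(q+3-n)\,\rho^{-q-2}. $$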
Accordingly, if~$|x'|>c_0$,
\begin{eqnarray*} && \sqrt{1+|\nabla\tilde\Phi|^2}\in [1,2]\\
{\mbox{and }}&&
\Delta\tilde\Phi-\left|
\frac{(D^2\tilde\Phi\nabla\tilde\Phi)\cdot\nabla
\tilde\Phi}{1+|\nabla\tilde\Phi|^2}
\right|\ge \frac{\varepsilon q(q+3-n)\mu_q}{4}c_0^q\,|x'|^{-q-2}
\end{eqnarray*}
as long as~$\varepsilon$ is small enough, thanks to~\eqref{D L}.
Hence, we estimate the mean curvature of~$\partial F$ at some
point~$x$ with~$|x'|\in (c_0,c_1+c_2+c_5]$ as
$$ {\mathcal{H}} (x) = \frac1{\sqrt{1+|\nabla\tilde\Phi|^2}}\left(
\Delta\tilde\Phi-\frac{(D^2\tilde\Phi
\nabla\tilde\Phi)\cdot\nabla\tilde\Phi}{1+|\nabla\tilde\Phi|^2}
\right)
\ge \frac{\varepsilon \mu_q}{C} .$$
Therefore, if~$x\in\partial F$, $|x'|\in(c_0,c_1+c_2+c_5]$,
we have that
$$ (1-s) \int_{B_{c_3}(x)}\frac{\chi_F(y)-
\chi_{\CC F}(y)}{|x-y|^{n+s}}\,dy\ge
\frac{\varepsilon \mu_q}{C^2}
-C(1+\mu_q)\varepsilon (1-s)\ge
\frac{\varepsilon \mu_q}{C^3} $$
thanks to~\eqref{LAC3}
in Lemma~\ref{limit curvature}, as long as~$1-s$ and~$\varepsilon$
are small enough. This and a suitably large choice of~$\mu_q$
give~\eqref{6gghhssh} (namely, we take~$\mu_q /C^3\ge c_4$).
\end{proof}
\subsection{Measure estimates for the oscillation}
We obtain the following measure estimate.
Such a result may be
seen as the counterpart, in our framework,
of the measure estimate in Lemma~4.5 of~\cite{Cabre}
and Lemmata~8.6 and~10.1 of~\cite{CS}.
\begin{lemma}\label{SILV 8.6} Fix~$\overline C\ge1$.
Let~$\kappa\in {\mathbb R}$ and~$R>0$.
Let~$u:{\mathbb R}^{n-1}\rightarrow{\mathbb R}$ be a Lipschitz function, with
\begin{equation}\label{AL}|\nabla u (x')|\le \overline C
\end{equation}
a.e.~$|x'|\le 3R$, and
\begin{equation}\label{USE}
u(x')\ge\kappa {\mbox{ for any }}|x'|\ge R.
\end{equation}
Let~$E:=\{ x_n<u(x')\}$.
Assume that, for any~$x\in \partial E\cap B_{4n}$,
\begin{equation}\label{sd77ef12345d}
(1-s)\int_{B_{R}(x)}\frac{\chi_E(y)-\chi_{\CC E}(y)}{|
x-y|^{n+s}}\,dy
\le \frac{\varepsilon}{R^s}.\end{equation}
Then, if
\begin{equation}\label{901212-bis}
\inf_{Q_{3R}} u\le \kappa+\varepsilon R
\end{equation}
we have that
\begin{equation}\label{step zero}
\Big|\big\{ u-\kappa \le M\varepsilon R
\big\}\cap Q_R \Big|\ge \mu R^{n-1},
\end{equation}
for appropriate universal constants~$M>1$
and~$\mu\in(0,1)$, as long as~$1-s$ and~$\varepsilon\in (0,1/C]$,
with~$C\ge1$ suitably large.
Here,~$M$, $\mu$ and~$C$ only depend on~$n$
and~$\overline C$.
\end{lemma}
\begin{proof} Up to translation, we may suppose
that~$\kappa=0$.
Let~$\Phi$ be as in
Lemma~\ref{barrier} (with~$c_0,\dots,c_5$ to
be conveniently chosen in what follows).
Let~$v:=u+\Phi$ and~$\Gamma:{\mathbb R}^{n-1}\rightarrow{\mathbb R}$
be the convex envelope of~$v^-:=\min\{v,0\}$
in~$B_{6\sqrt n R}$, that is
$$ \Gamma(x):=
\left\{
\begin{matrix}
\displaystyle\sup_\Xi \ell(x) & {\mbox{ if }} |x'|<{6\sqrt n R}, \\
0 & {\mbox{ if }}|x'|\ge{6\sqrt n R},
\end{matrix}
\right.$$
where~$\Xi$ above is a short-hand notation
for all the affine functions~$\ell$
such that $\ell(y')\le v^-(y')$ for any
$|y'|<{6\sqrt n R}$ (see
pages 23--27 of~\cite{Cabre}
for the basic properties of the convex envelope).
Let~${\mathcal{T}}$ be the touching set
between~$v$ and~$\Gamma$, i.e.
$$ {\mathcal{T}}:=\{ x'\in{\mathbb R}^{n-1} {\mbox{ s.t. }}
\Gamma(x')=v(x')\}.$$
Let
$$ m_o:=-\inf_{Q_{3R}} v.$$
Notice that~$v\le u-4\varepsilon R$ in~$Q_{3R}$,
due to~\eqref{901212} (for this we choose~$c_1:=3\sqrt{n}/2$
in Lemma~\ref{barrier},
so that~$Q_{3R}\subseteq \{|x'|\le c_1 R\}$; the other constants~$c_0$, $c_2,\dots,c_5$
will be fixed in the sequel).
Therefore, by~\eqref{901212-bis},
$$ \inf_{Q_{3R}} v\le -2\varepsilon R,$$
so~$m_o\ge 2\varepsilon R$.
We recall that
all the hyperplanes with slope bounded by~$
m_o/(CR)$ belong to~$\nabla\Gamma(B_{6\sqrt n R})$
(see page~24 of~\cite{Cabre} and also~(3.9) there),
hence
\begin{equation}\label{Slope}
\varepsilon^{n-1}\le C\left(\frac{m_o}{R}\right)^{n-1}\le
C|\nabla\Gamma({\mathcal{T}})|.
\end{equation}
Now, for any~$\bar x'\in {\mathcal{T}}$,
we let
\begin{eqnarray*}
&& L(x'):=v(\bar x')+
\nabla\Gamma(\bar x')\cdot (x'-\bar x')\\{\mbox{and }}&&
\PPara:=L-\Phi.\end{eqnarray*}
We point out
that~$v>0$ in~$\{|x'|\ge 3\sqrt{n} R\}$, thanks to~\eqref{901212}
and~\eqref{USE}
(for this, we choose~$c_2:=3\sqrt{n}/2$
in Lemma~\ref{barrier},
so that~$c_1+c_2=3\sqrt{n}$).
In particular, since~$\Gamma\le0$,
we see that~$\bar x'\in{\mathcal{T}}\subseteq\{|x'|\le 3\sqrt{n}R\}$.
Also, from~\eqref{901212}, we have
\begin{equation}\label{par1}
|D^2\PPara|=|D^2\Phi|\le \frac{C\varepsilon}{R}.
\end{equation}
Moreover, $v$ is above~$\Gamma$ which is above~$L$ in~$B_{{6\sqrt n R}}$,
by convexity,
therefore, for any~$e\in S^{n-1}$
$$ 0\ge \Gamma(\bar x'+Re)\ge L(\bar x'+Re)
=v(\bar x')+R\nabla\Gamma(\bar x')\cdot e
\ge -C\varepsilon R+R\nabla\Gamma(\bar x')\cdot e$$
that is~$\nabla\Gamma(\bar x')\cdot e\le C\varepsilon$.
So, since~$e$ is an arbitrary unit vector, we get that
\begin{equation}\label{D L 2}
|\nabla L|=
|\nabla\Gamma(\bar x')|\le
C\varepsilon,\end{equation} and so, by~\eqref{901212},
\begin{equation}\label{par2}
|D\PPara|\le C\varepsilon.
\end{equation}
Now we observe that
\begin{equation}\label{bar trap}
{\mathcal{T}}\subseteq Q_R.
\end{equation}
The proof is by contradiction: if not,~$u+\Phi\ge L$
in~$\{ |x'|\le 6\sqrt{n}R\}$, with
equality at some~$\bar x'$ with~$\bar x'\not\in Q_R$.
In particular,~$|\bar x'|\ge R/2$.
Then, we can use Lemma~\ref{barrier}, with~$F$
as in~\eqref{789sd90djwwjjdfff1}
(notice that~\eqref{D L} is satisfied here due to~\eqref{D L 2}).
For this, we set~$\bar x:=(\bar x',u(\bar x'))\in\partial F$,
and we
choose~$c_0:= 1/4$, $c_4:=2$ and~$c_5:=
100\sqrt{n}$ in Lemma~\ref{barrier}.
In this way,
since~$E\cap B_{{6\sqrt n R}}\supseteq F\cap B_{{6\sqrt n R}}$,
we deduce from~\eqref{6gghhssh} that
\begin{eqnarray*}
(1-s) \int_{B_R(\bar x)}\frac{\chi_{E}(y)-
\chi_{\CC E}(y)}{|\bar x-y|^{n+s}}\,dy\ge
(1-s) \int_{B_R(\bar x)}\frac{\chi_{F}(y)-
\chi_{\CC F}(y)}{|\bar x-y|^{n+s}}\,dy\ge \frac{2\varepsilon}{R^s}
.\end{eqnarray*}
This is in contradiction with~\eqref{sd77ef12345d}
and so it establishes~\eqref{bar trap}.
Also, given~$\bar x'
\in {\mathcal{T}}$, we have that~$\PPara(\bar x')=v(\bar x')
-\Phi(\bar x')=u(\bar x')$ and
$$ \PPara\le\Gamma-\Phi\le v-\Phi=u.$$
This,~\eqref{AL}, \eqref{sd77ef12345d}, \eqref{par1}
and~\eqref{par2}
say that the hypotheses of Lemma~\ref{S LEM} are fulfilled (up
to scaling~$\varepsilon$ to~$C\varepsilon$).
As a consequence, by~\eqref{star3}, for any~$M$ large enough,
\begin{equation}\label{9099}\frac{ \left| S^{(\bar x')}
\cap \Big\{
u(x')-u(\bar x')-\nabla\PPara(\bar x')\cdot (x'-\bar x')>
\displaystyle\frac{
M\varepsilon r_{\bar x'}^2}{R}
\Big\}\right|}{\big| S^{(\bar x')}\big|} \,\le\,
\frac{C}{M} \end{equation}
for a suitable ring~$S^{(\bar x')}:= \big\{ |x'-\bar x'|\in\big(
{r_{\bar x'}}/C, {r_{\bar x'}}\big)\big\}$ and
a suitable~$ {r_{\bar x'}}
\in (0,R]$.
On the other hand, by~\eqref{901212},
$$-\Phi(x')+\Phi(\bar x')+\nabla\Phi(\bar x')\cdot
(x'-\bar x')\ge -\frac{\varepsilon r_{\bar x'}^2}{R}\ge
-\frac{M\varepsilon r_{\bar x'}^2}{2R}$$
if~$x'\in S^{(\bar x')}$, as long as~$M$ is big enough.
Consequently, using that~$v$ lies above~$\Gamma$ and
that~$\bar x'\in{\mathcal{T}}$, we have that
\begin{eqnarray*}
&& \Gamma(x')-\Gamma(\bar x')-\nabla\Gamma(\bar x')\cdot(x'-\bar x')
-\frac{M \varepsilon r_{\bar x'}^2}{2R}\\
&\le&\Gamma(x')-\Phi(x')-\Gamma(\bar x')+
\Phi(\bar x')
-\Big(\nabla\Gamma(\bar x')-
\nabla\Phi(\bar x')\Big)\cdot(x'-\bar x')
\\ &\le&
v(x')-\Phi(x')-v(\bar x')+
\Phi(\bar x')
-\Big(\nabla\Gamma(\bar x')-
\nabla\Phi(\bar x')\Big)\cdot(x'-\bar x')
\\ &=&
u(x')-u(\bar x')
-\nabla\PPara(\bar x')\cdot(x'-\bar x').
\end{eqnarray*}
The latter estimate and~\eqref{9099}
imply that
$$ \frac{ \left| S^{(\bar x')}
\cap \Big\{
\Gamma(x')-\Gamma(\bar x')-\nabla\Gamma(\bar x')\cdot (x'-\bar x')>
\displaystyle\frac{
M\varepsilon r_{\bar x'}^2}{2R}
\Big\}\right|}{\big| S^{(\bar x')}\big|} \,\le\,
\frac{C}{M} .$$
So, by taking~$M$ appropriately large and using
Lemma~8.4 of~\cite{CS} we deduce that
\begin{equation}\label{b782221122m}
\Gamma(x')-\Gamma(\bar x')-\nabla\Gamma(\bar x')\cdot (x'-\bar x')
\le\frac{C\varepsilon r_{\bar x'}^2}{R}
\end{equation}
for any~$|x'-\bar x'|< r_{\bar x'}/2$.
In particular, for any~$|x'-\bar x'|< r_{\bar x'}/4$,
we set~$\rho:=r_{\bar x'}/4$,
we plug the point~$x'+\rho e$ inside~\eqref{b782221122m},
we use the convexity of~$\Gamma$ twice
and we obtain
\begin{eqnarray*}
\frac{ C\varepsilon \rho^2}{R}&\ge&
\Gamma(x'+\rho e)-\Gamma(\bar x')-
\nabla\Gamma(\bar x')\cdot (x'+\rho e-\bar x')
\\ &\ge& \Gamma(x')+\rho \nabla\Gamma(x')\cdot e\\
&&\qquad
-\Gamma(\bar x')-
\nabla\Gamma(\bar x')\cdot (x'+\rho e-\bar x')
\\ &\ge& \Gamma(\bar x')+\nabla\Gamma(\bar x')\cdot(x'-\bar x')
+\rho \nabla\Gamma(x')\cdot e\\
&&\qquad -\Gamma(\bar x')-
\nabla\Gamma(\bar x')\cdot (x'+\rho e-\bar x')
\\ &=&\rho \big(\nabla\Gamma(x')-
\nabla\Gamma(\bar x')\big)\cdot e.
\end{eqnarray*}
So, since~$e$ is an arbitrary unit vector, it follows
that
\begin{equation*}
|\nabla\Gamma(x')-\nabla\Gamma(\bar x')|\le
\frac{C\varepsilon r_{\bar x'}}{R}
\end{equation*}
for any~$|x'-\bar x'|< r_{\bar x'}/4$,
that is: the~$(n-1)$-dimensional
ball of radius~$r_{\bar x'}/4$ centered at~$\bar x'$
(which we now call~$B^{(\bar x')}$)
is sent, via the map~$\nabla\Gamma$,
inside the~$(n-1)$-dimensional
ball of radius~${C\varepsilon r_{\bar x'}}/{R}$ centered at~$
\nabla\Gamma(\bar x')$ (we observe that the
latter ball is smaller by
a scale factor~$C\varepsilon/R$,
and let us call~$\widetilde B^{(\bar x')}$
such a ball).
Now we cover~${\mathcal{T}}$
with a countable, finite overlapping system of these balls, say~$\big\{
B^{(j)}\big\}_{j\in {\mathbb N}}$.
By the previous observations,
this covering induces a covering of~$\nabla\Gamma({\mathcal{T}})$
made of balls~$\big\{\widetilde B^{(j)}\big\}_{j\in {\mathbb N}}$, with~$|\widetilde B^{(j)}|
\le C(\varepsilon/R)^{n-1}|B^{(j)}|$.
So, we obtain
the measure estimate
\begin{equation}\label{0shhh65www}
|\nabla \Gamma({\mathcal{T}})|\le \sum_{j\in{\mathbb N}}
|\widetilde B^{(j)}|
\le C\left(\frac\varepsilon{R}\right)^{n-1}
\sum_{j\in{\mathbb N}} |B^{(j)}|.
\end{equation}
On the other hand,
we observe that, if~$ |x'-\bar x'|\le{r_{\bar x'}}$,
then
\begin{eqnarray*}
u(x')&\le& u(x')-\Gamma(x') \\&\le&u(x')
-\Gamma(\bar x')-\nabla\Gamma(\bar
x')\cdot(x'-\bar x')
\\ &=& u(x')
-u(\bar x')-\Phi(\bar x')-\big(\nabla\PPara(\bar
x')
+\nabla\Phi(\bar x')\big)\cdot(x'-\bar x')
\\ &\le& u(x')-u(\bar x')-\Phi(x')-\nabla\PPara(\bar x')
\cdot(x'-\bar x')+\frac{C\varepsilon}{R}|x'-\bar x'|^2
\\ &\le& u(x')-u(\bar x')-\nabla\PPara(\bar x')
\cdot(x'-\bar x')+C\varepsilon R
\end{eqnarray*}
thanks to the convexity of~$\Gamma$ and~\eqref{par1}.
Therefore
\begin{equation}\label{L.A.79}\begin{split}
& S^{(\bar x')}
\cap \Big\{
u(x')-u(\bar x')-\nabla\PPara(\bar x')\cdot (x'-\bar x')\le
\displaystyle\frac{
M\varepsilon r_{\bar x'}^2}{R}\Big\}\\
\subseteq\;\,&
S^{(\bar x')}
\cap \{u(x')\le C\varepsilon R\}\\
\subseteq\;\,&
B^{(\bar x')}
\cap \{u(x')\le C\varepsilon R\}.
\end{split}\end{equation}
Also, by~\eqref{9099}
\begin{eqnarray*}&&\left| S^{(\bar x')}
\cap \Big\{
u(x')-u(\bar x')-\nabla\PPara(\bar x')\cdot (x'-\bar x')\le
\displaystyle\frac{
M\varepsilon r_{\bar x'}^2}{R}
\Big\}\right|\\ &&\qquad\ge \left(1-
\frac{C}{M}\right)\,| S^{(\bar x')}|\ge
\frac{| S^{(\bar x')}|}2
\ge \frac{| B^{(\bar x')}|}{C}.\end{eqnarray*}
This and~\eqref{L.A.79} give that
$$ | B^{(\bar x')}|\le
C |B^{(\bar x')}
\cap \{u(x')\le C\varepsilon R\}|.$$
Gathering this estimate,
\eqref{Slope} and \eqref{0shhh65www},
and using the finite overlapping property of~$\big\{
B^{(j)}\big\}_{j\in {\mathbb N}}$, we conclude that
\begin{equation}\label{step zero a}
\begin{split}
& \varepsilon^{n-1}\le
C|\nabla\Gamma({\mathcal{T}})|
\le C\left( \frac\varepsilon{R}\right)^{n-1}
\sum_{j\in{\mathbb N}} |B^{(j)}|
\\ &\qquad \le C\left( \frac\varepsilon{R}\right)^{n-1}
\sum_{j\in{\mathbb N}}
\Big|B^{(j)}
\cap \{u\le C\varepsilon R\}\Big|\le C\left(
\frac\varepsilon{R}\right)^{n-1}
\left|
\bigcup_{j\in{\mathbb N}} B^{(j)}
\cap \{u\le C\varepsilon R\}\right|
.\end{split}\end{equation}
Accordingly,~\eqref{step zero}
is a consequence of~\eqref{step zero a}
and~\eqref{bar trap}.
\end{proof}
\subsection{Uniform improvement
of flatness}
The cornerstone of the regularity theory of~\cite{CRS}
is Lemma~6.9 there, to wit a Harnack Inequality, according to which
$s$-minimal surfaces become more and more flat
when we get
closer and closer to any of their points.
However, the estimates in Lemma~6.9
of~\cite{CRS} are all uniform
when~$s$ is bounded away from
both~$0$ and~$1$, but they do degenerate
as~$s\rightarrow1^-$ (see, in particular, the estimate on~$I_1$
on page~1129 of~\cite{CRS}), therefore such a result
cannot be applied directly in our framework.
To this end, we provide the following
result, which is a version of Lemma~6.9
of~\cite{CRS} with uniform estimates as~$s\rightarrow1^-$.
In fact, the
reader may compare Lemma~\ref{ovidiu 6.9}
here below with Lemma~6.9 in~\cite{CRS}: the only
difference is that the estimates here are
uniform as~$s\rightarrow1^-$.
Our proof is completely different from the one in~\cite{CRS}
and it is based
on the uniformity of the results obtained
in the preceding sections, together with
a Calder{\'o}n--Zygmund iteration, which needs
to distinguish between two scales of the dyadic cubes.
\begin{lemma}\label{ovidiu 6.9}
Fix~$s_o\in(0,1)$ and~$\alpha\in(0,1)$. Then, there
exist~$K\in {\mathbb N}$
and~$d\in(0,1)$ which only depend
on~$n$, $\alpha$ and~$s_o$, for which the following result holds.
Let~$a:=2^{-K\alpha}$. Let~$E$ be a set with~$s$-minimal perimeter
in~$B_{2^{K+1}}$, with~$s\in[1/10,1)$. Assume that
\begin{equation}\label{X51}
\partial E\cap B_1\subseteq \{ |x_n|\le a\}
\end{equation}
and, for any~$i\in \{0,\dots,K\}$,
\begin{equation}\label{X52}
\partial E\cap B_{2^i}\subseteq \{ |x\cdot
\nu_i|\le a2^{i(1+\alpha)}\}
\end{equation}
for some~$\nu_i\in{\rm S}^{n-1}$.
Then
\begin{equation}\label{X53}\begin{split}
&{\mbox{either }}\,\partial E\cap B_d\subseteq \{ x_n\le
a(1-d^2)\}\\ &{\mbox{or }}\,
\partial E\cap B_d\subseteq \{ x_n\ge a(-1+d^2)\}.\end{split}
\end{equation}
\end{lemma}
\begin{proof} The proof is not simple, but the naive idea is to
argue by
contradiction, supposing that there is
a sequence of~$E_j$'s
that oscillate too much. Then one performs the following steps:
\begin{itemize}
\item By~\cite{CV}, one gets a sequence~$s_j\rightarrow 1^-$
for which~$E_j$ approaches a classical minimal surface~$E_\star$;
\item By~\eqref{CorCC},
one shadows~$E_j$ with level sets of distance
functions~$u^\pm_j$ from above and below, and the graphs
of $u^\pm_j$ are close to~$\partial E_\star$ as~$s_j\rightarrow 1^-$;
\item Since (by contradiction) we assumed~$E_j$ to oscillate
too much, there are points of~$E_j$ (and so of the graphs of~$u^\pm_j$)
that stay very close to the bottom and the top of the cylinder
of height~$a$;
\item Accordingly, from the fact that there is a point
for which~$u^-_j$ is close to the bottom, we deduce
that~$u^-_j$ is close to the bottom in a rather large set:
for this, one
needs to use
a dyadic cube argument -- when the cubes are reasonably big,
one can repeat Lemma~\ref{SILV 8.6}, and when the cubes get too small
one takes advantage of the regularity theory for the classical
minimal surface~$E_\star$;
\item Analogously, from the fact that there is a point
for which~$u^+_j$ is close to the top, we deduce
that~$u^+_j$ is close to the top in a rather large set;
\item In particular, we find a point for which~$u^+_j$ is
close to the top and~$u^-_j$ close to the bottom, that is~$u^+_j
-u^-_j$ is of the order of~$a$;
\item This is in contradiction with~\eqref{s8822211a}
and so it completes the proof.
\end{itemize}
We remark that, in these arguments,
there are two
uncorrelated scales involved.
One is the flatness of order one
(which, in the course of the proof,
will be dominated by a configuration of cylinders whose ratio
between the height and the base is
some~$\varepsilon^\star$); the other is
the one
induced by
the criticality ratio for the minimal surfaces flatness condition
(which is some universal~$\varepsilon_o$).
Of course, both these configurations
are somewhat induced by the trapping of the surface
in a strip of small size~$a$.
The interplay between these two scales
is what allows us to choose
the critical $s$ in an independent way,
and so to decouple the ratio of the scales involved.
Finally, this implies also that as
the flatness~$\varepsilon_\flat$ of~\eqref{main trap}
improves (while
the classical minimal surfaces flatness~$\varepsilon_o$
is a fixed constant), we can apply
the decrease of oscillation more and more times, so that in the vertical
blow up limit we get a H\"older graph, that is
harmonic in viscosity sense (see~\cite{CRS}).
Below is the discussion in full detail.
The proof is by contradiction.
If the claim were false,
since the estimates of Lemma~6.9 of~\cite{CRS} are
uniform when~$s\ge1/10$ is bounded away from~$1$,
it follows that
there exist
\begin{equation}\label{XoX5}
s_j\rightarrow 1^-,\end{equation}
and a sequence~$E_j$ of~$s_j$-minimal surfaces
in~$B_{2^{K+1}}$ such that
\begin{equation}\label{trap 0}
\partial E_j\cap B_1\subseteq \{ |x_n|\le a\}
\end{equation}
and, for any~$i\in \{0,\dots,K\}$,
\begin{equation}\label{trap i}
\partial E_j\cap B_{2^i}\subseteq \{
|x\cdot \nu_i|\le a2^{i(1+\alpha)}\}
\end{equation}
for suitable~$\nu_i\in{\rm S}^{n-1}$, but
\begin{equation}\label{not trap}
\partial E_j\cap B_d\cap \{ x_n\ge a(1-d^2)\}\ne\varnothing
{\mbox{ and }}\;
\partial E_j\cap B_d\cap \{ x_n\le a(-1+d^2)\}
\ne\varnothing.\end{equation}
By~\eqref{XoX5} and
Theorem~7 in~\cite{CV}, we have that~$\chi_{E_j}$
converges in~$L^1(B_{(9/7) 2^K})$ to
$\chi_{E_\star}$, for some set~$E_\star$ (possibly up to subsequence).
Therefore (see the Remark after Corollary~17
in~\cite{CV}) $E_j$ approaches~$E_\star$ uniformly
in~$B_{(8/7) 2^K}$ and then, by Theorem~6 in~\cite{CV},
we have that~$E_\star$ is a classical minimal surface
in~$B_{2^K}$.
We define~$\gamma_j$ to be the distance between~$E_j$
and~$E_\star$ in~$B_{2^K}$: by construction
\begin{equation}\label{dw110dfu2bxq0}
\lim_{j\rightarrow+\infty}\gamma_j=0.
\end{equation}
Let also
$$ \delta_j:=
a\gamma_j^{1/(1+\alpha)},$$
and notice that
\begin{equation}\label{dw110dfu2bxq}
\lim_{j\rightarrow+\infty}\delta_j=0.
\end{equation}
Now, we observe that~$K\alpha>4(1+\alpha)$ if~$K$
is large enough, and so we can
take~$K'\in {\mathbb N}$ such that
\begin{equation}\label{sd9790e3c-33yyy}
\frac{K\alpha}{2(1+\alpha)}-1<K'\le
\frac{K\alpha}{2(1+\alpha)}.\end{equation}
Now, we denote by~$\varepsilon_o$ the
flatness constant of the classical minimal surfaces
(see, e.g.,~\cite{CC} and references therein) according to which
if a minimal surface is trapped in a
cylinder whose ratio between the height and the base
is below~$\varepsilon_o$, then the minimal surface
is a~$C^{1,\alpha}$-graph in half the cylinder.
By~\eqref{trap i}, \eqref{sd9790e3c-33yyy}
and the uniform convergence
of~$E_j$, we see that, for large~$K$
(possibly in dependence of~$\varepsilon_o$),
\begin{eqnarray*}
&& \partial E_\star \cap B_{2^{K'}}\subseteq
\{ |x\cdot \nu_{K'}|\le 2^{-K\alpha} 2^{K'(1+\alpha)}\}
\\ &&\qquad\subseteq\{ |x\cdot \nu_{K'}|\le 2^{-K\alpha/2}
\}\subseteq \{ |x\cdot \nu_{K'}|\le\varepsilon_o\},\end{eqnarray*}
and so
\begin{equation}\label{8d9e93333kk}
{\mbox{$\partial E_\star \cap B_{2^{K'-1}}$ is
a~$C^{1,\alpha}$-graph.}}
\end{equation}
Now, we use Corollary~\ref{CorCC}
with~$\gamma:=\gamma_j$
and~$\delta:=\delta_j$:
for this,
we define
\begin{equation}\label{dw110dfu2bxq.2}
{\mathcal{S}}^\pm_j:=\{ x\in {\mathbb R}^n {\mbox{ s.t. }} d_{E_j}(x)=
\pm \delta_j\}\end{equation}
and we deduce from~\eqref{8d9e93333kk} and
Corollary~\ref{CorCC} that~${\mathcal{S}}^\pm_j \cap B_{2^{K'-2}}$ is
\begin{equation}\label{e L}
\begin{split}
&{\mbox{ the graph of a uniformly Lipschitz function, say~$u^\pm_j$.}}
\end{split}\end{equation}
Also, from~\eqref{s8822211a}, \eqref{dw110dfu2bxq0}
and~\eqref{dw110dfu2bxq}, we have that
\begin{equation}\label{so long}
u^+_j(x')-u^-_j(x')\le C\delta_j
\end{equation}
for any~$|x'|\le 1$, as long as~$j$ is large enough.
Now we will concentrate on~$u^-_j$ (the case of~$u^+_j$
being symmetric): we
set~$E^-_j:= \{x_n<u^-_j(x')\}$, so that~$\partial E^-_j
= {\mathcal{S}}^-_j$. {F}rom~\eqref{not trap}
and the fact that~${\mathcal{S}}^-_j$ lies
below~$E_j$, we obtain that there exists~$\zeta'\in{\mathbb R}^{n-1}$
with
\begin{equation}\label{ds73jjjjj11}
|\zeta'|\le d
\end{equation}
and
\begin{equation}\label{pr915}
u^-_j(\zeta')\le a(-1+d^2).\end{equation}
As usual in this type of proof, the appropriate~$d$ for our argument will be chosen later on,
in dependence of the constants of the previous lemmata (see~\eqref{ne.2}
below).
Now, we use the following notation:
given any~$x\in{\mathcal{S}}^-_j$,
let~$y(x)\in \partial E_j$ be such that~$|y(x)-x|=\delta_j$,
and let~$\nu(x):=y(x)-x$. Then
\begin{equation}\label{incl}
E^-_j +\nu(x)\subseteq \overline{E}_j.
\end{equation}
Indeed, if~$p\in E^-_j +\nu(x)$, we have that~$p-\nu(x)\in E^-_j$
and so~$\overline{B_{\delta_j}(p-\nu(x))}\subseteq \overline{E}_j$.
Then, since~$|\nu(x)|=\delta_j$, we have~$p\in
\overline{B_{\delta_j}(p-\nu(x))}\subseteq \overline{E}_j$,
proving~\eqref{incl}.
Moreover~$\partial E_j$ has zero Lebesgue measure
(see, e.g., Corollary~4.4(i) of~\cite{CRS}), thus we
infer from~\eqref{incl} that, if~$x_o\in\partial E^- _j$,
\begin{equation}\label{Incl}
\chi_{E^- _j+\nu(x_o)}\le\chi_{E_j} \qquad{\mbox{ and }}\qquad
\chi_{\CC(E^-_j +\nu(x_o))}\ge\chi_{\CC E_j}.
\end{equation}
Therefore, using~\eqref{Incl}, the Euler-Lagrange equation
satisfied by~$E_j$ (see Theorem~5.1 of~\cite{CRS})
and the change of variable~$z:=x+\nu(x_o)$, we obtain
\begin{equation}\label{E.L.}
\begin{split}
&\int_{{\mathbb R}^n}
\frac{\chi_{E^-_j}(x)-\chi_{\CC(E^-_j)}(x)}{|x-x_o|^{n+s_j}}\,dx
=
\int_{{\mathbb R}^n} \frac{\chi_{E^-_j
+\nu(x_o)}(z)-\chi_{\CC(E^-_j +\nu(x_o))}(z)}{|z
-y(x_o)|^{n+s_j}}\,dz
\\ &\qquad
\le \int_{{\mathbb R}^n} \frac{\chi_{E_j}(z)-\chi_{\CC E_j}
(z)}{|z-y(x_o)|^{n+s_j}}\,dz\le0
\end{split}\end{equation}
for any~$x_o\in\partial E^-_j\cap B_C$.
On the other hand, by~\eqref{trap i}, we have that~$|x_o\cdot\nu_i|\le
C a 2^{i(1+\alpha)}$, and so
\begin{eqnarray*}
&& \partial E_j\cap B_{2^i}(x_o)\subseteq
\partial E_j\cap B_{2^{i+C}}\\
&&\qquad\subseteq
\{|x\cdot\nu_i|\le C
a2^{i(1+\alpha)} \}
\subseteq
\{|(x-x_o)\cdot\nu_i|\le C
a2^{i(1+\alpha)} \}
\end{eqnarray*}
for any~$1\le i\le K-C$.
Therefore, for $j$ large,
$$ \partial E^-_j\cap B_{2^i}(x_o)\subseteq
\{|(x-x_o)\cdot\nu_i|\le
Ca2^{i(1+\alpha)} \}$$
for any~$1\le i\le K-C$.
As a consequence,
we obtain the
following cancellation:
\begin{equation}\label{8.20p}\begin{split}
& \left|\int_{\CC B_1(x_o)}
\frac{\chi_{E^-_j}(x)-\chi_{\CC(E^-_j)}(x)}{|x-x_o|^{n+s_j}}\,dx\right|
\\ &\le
\sum_{i=1}^{K-C}
\left|\int_{B_{2^i}(x_o)\setminus B_{2^{i-1}}(x_o)}
\frac{\chi_{E^-_j}(x)-\chi_{\CC(E^-_j)}(x)}{|x-x_o|^{n+s_j}}\,dx\right|
+
\left|\int_{\CC B_{2^{K-C}}(x_o)}
\frac{\chi_{E^-_j}(x)-\chi_{\CC(E^-_j)}(x)}{|x-x_o|^{n+s_j}}\,dx\right|
\\ &\le
C\left[ \sum_{i=1}^{K-C}
\limits\int_{ {B_{2^i}(x_o)\setminus B_{2^{i-1}}(x_o)}\atop{
\{ |(x-x_o)\cdot\nu_i| \le Ca2^{i(1+\alpha)}\}
}}
\frac{1}{|x-x_o|^{n+s_j}}\,dx
+
\int_{\CC B_{2^{K-C}}(x_o)}
\frac{1}{|x-x_o|^{n+s_j}}\,dx\right]
\\ &\le C\left[ \sum_{i=1}^{K-C}
\int_{2^{i-1}}^{2^i} \frac{a2^{i(1+\alpha)} \rho^{n-2}}{\rho^{n+s_j}}
\,d\rho
+
\int_{2^{K-C}}^{+\infty}
\frac{\rho^{n-1}}{\rho^{n+s_j}}\,d\rho\right]
\\ &\le Ca
\end{split}\end{equation}
provided that~$j$ is big enough (in particular,~$s_j$
is larger than~$\alpha$).
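More explicitly, the dyadic sum in the chain above is controlled by a
geometric series, namely
$$ \sum_{i=1}^{K-C}\int_{2^{i-1}}^{2^i}
\frac{a2^{i(1+\alpha)}\,\rho^{n-2}}{\rho^{n+s_j}}\,d\rho\le
Ca\sum_{i=1}^{K-C} 2^{i(1+\alpha)}\,2^{-i(1+s_j)}
=Ca\sum_{i=1}^{K-C} 2^{-i(s_j-\alpha)}\le Ca, $$
since~$s_j>\alpha$, while the tail term is of
order~$2^{-(K-C)s_j}\le C\,2^{-K\alpha}=Ca$.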
Therefore, by~\eqref{E.L.}
and~\eqref{8.20p},
for any~$x_o\in \partial E^-_j\cap B_C$,
\begin{equation}\label{ne.1} \int_{B_1(x_o)}
\frac{\chi_{E^-_j}(x)-\chi_{\CC(E^-_j)}(x)}{|x-x_o|^{n+s_j}}\,dx
\le Ca .\end{equation}
With this, we
are in position to
obtain a finer bound in measure,
often referred to as the~``$L^\beta$-estimate''
(see, e.g.,
Lemma~4.6 of~\cite{Cabre}
and Lemma~9.2 of~\cite{CS}
for the corresponding results for fully nonlinear
or fractional operators, the proof of which is based
on related, but quite different, techniques).
Such an estimate will be based on a
Calder{\'o}n--Zygmund type dyadic cube decomposition.
According to the different scales involved,
we use either
a repeated version of Lemma~\ref{SILV 8.6} or
the vicinity of the classical minimal surface~$E_\star$
to deduce the necessary rigidity features.
Here are the details of this $L^\beta$-estimate.
We take~$\mu\in(0,1)$ and~$M\in(1,+\infty)$
as in Lemma~\ref{SILV 8.6}, and we fix a large integer~$k_o$
such that
\begin{equation}\label{k o}
(1-\mu)^{k_o}\le\frac14.
\end{equation}
Then, we choose
\begin{equation} \label{ne.2}
d:=\frac{1}{2M^{k_o}} \in(0,1),\end{equation}
we set~$a_j:=a+\delta_j+\gamma_j$,
and we claim that, for any $k\in{\mathbb N}$, with~$1\le k\le k_o$, we have that
\begin{equation}\label{L beta}
\left| \Big\{ u^-_j +a_j\ge
\frac{ a_j M^{k-k_o}}{2} \Big\} \cap Q_1\right|\le
(1-\mu)^k
\end{equation}
as long as~$j$ is large enough.
Indeed, when~$k=1$,~\eqref{L beta} is a consequence
of~\eqref{step zero}, by applying
Lemma~\ref{SILV 8.6} here with~$\varepsilon:=d a_j$, $\kappa:=-a_j$
and~$R:=1$ -- for this recall~\eqref{pr915}, \eqref{ne.1}
and~\eqref{ne.2}
in order to check~\eqref{sd77ef12345d}
and~\eqref{901212-bis}, and consider
the complement set in~\eqref{step zero}:
such configuration is sketched in Figure~3.
\begin{figure}[htbp]
\begin{center}
\resizebox{11.2cm}{!}{\input{fig1.pstex_t}}
{\caption{\it Proving~\eqref{L beta} when $k=1$.}}
\end{center}
\end{figure}
Then, we proceed by induction, by supposing
that~\eqref{L beta} holds for~$k-1$, and we prove
it for~$k\le k_o$.
For simplicity, we just
perform the step from~$k=1$ to~$k=2$ (the others are
analogous).
For this, we define
$$ A:= \Big\{ u^-_j +a_j>
\frac{ a_j M^{2-k_o}}{2} \Big\} \cap Q_1
\;{\mbox{ and }}\;
B:= \Big\{ u^-_j +a_j>
\frac{ a_j M^{1-k_o}}{2} \Big\} \cap Q_1 .$$
Notice that
\begin{equation}
A\subseteq B\subseteq Q_1
\end{equation} and
\begin{equation}
|A|\le \left|
\Big\{ u^-_j +a_j>
\frac{ a_j M^{1-k_o}}{2} \Big\} \cap Q_1\right|
\le 1-\mu,
\end{equation}
since we know that~\eqref{L beta} holds when~$k=1$.
Now we take a dyadic cube decomposition
of~$Q_1$, with the notation that
if~$Q$ is one of the cubes of the family, its predecessor
is denoted by~$\tilde Q$.
We claim that
\begin{equation}\label{post}
{\mbox{if $|A\cap Q|>(1-\mu) |Q|$ then
$\tilde Q\subseteq B$.
}}
\end{equation}
Notice that if~\eqref{post} holds, then, by Lemma~4.2
of~\cite{Cabre} (applied here with~$\delta:=1-\mu$)
and the inductive
assumption (that is, in this case,~\eqref{L beta}
with~$k=1$), we have that
\begin{equation*}\begin{split}
& \left|\Big\{ u^-_j +a_j>
\frac{ a_j M^{2-k_o}}{2} \Big\} \cap Q_1\right|=|A|
\\ &\qquad\le(1-\mu) |B|=(1-\mu)
\left| \Big\{ u^-_j +a_j>
\frac{ a_j M^{1-k_o}}{2} \Big\} \cap Q_1\right|
\le (1-\mu)^2.\end{split}
\end{equation*}
This would complete the induction necessary for
the proof of~\eqref{L beta}, hence we focus on the proof
of~\eqref{post}.
For the proof of~\eqref{post}, we argue by
contradiction, by supposing that
\begin{equation}\label{DQ}
|A\cap Q|>(1-\mu)|Q|
\end{equation}
but there exists~$\xi'\in\tilde Q\setminus B$, i.e.
\begin{equation}\label{xi prime}
u^-_j(\xi') +a_j\le
\frac{ a_j M^{1-k_o}}{2}.
\end{equation}
We denote by~$\ell$
the width of~$Q$
(which is, say, centered at some~$x_\star'\in{\mathbb R}^{n-1}$).
We need to distinguish two cases,
according to the scale of the cube~$Q$, namely,
we distinguish whether or
not~$a_j/\ell \le
\varepsilon^\star$, using either
Lemma~\ref{SILV 8.6} or the minimal surface rigidity
(here~$\varepsilon^\star$ is a small quantity,
say the minimum between the threshold for the classical
minimal surface regularity~$\varepsilon_o$, as
introduced after~\eqref{sd9790e3c-33yyy},
and the small constants given
by Lemma~\ref{SILV 8.6}: a precise requirement about this will
be taken after~\eqref{EL2}).
If
\begin{equation}\label{less}
a_j/\ell\le \varepsilon^\star,\end{equation} we use
Lemma~\ref{SILV 8.6}.
To this end, given~$x_o\in\partial E^-_j\cap B_{C}$, we notice that
\begin{eqnarray*}
&& \left| \int_{B_1\setminus B_\ell(x_o)}
\frac{\chi_{E^-_j} (x)-\chi_{\CC (E^-_j)}(x) }{|x-x_o|^{n+s_j}}\,dx
\right|=
\left| \int_{(B_1\setminus B_\ell(x_o))\cap \{|x_n|\le Ca_j\}}
\frac{\chi_{E^-_j} (x)-\chi_{\CC (E^-_j)}(x) }{|x-x_o|^{n+s_j}}\,dx
\right|\\
&&\qquad \le C \int_{(\CC B_\ell(x_o))\cap \{|x_n|\le Ca_j\}}
\frac{1}{|x'-x_o'|^{n+s_j}}\,dx \le C a_j \int_\ell^{+\infty}
\frac{\rho^{n-2}}{\rho^{n+s_j}}\,d\rho\le \frac{C a_j}{\ell^{1+s_j}}.
\end{eqnarray*}
As a consequence, recalling~\eqref{ne.1},
\begin{equation}\label{EL2}
(1-s_j) \int_{B_\ell(x_o)}
\frac{\chi_{E^-_j} (x)-\chi_{\CC (E^-_j)}(x) }{|x-x_o|^{n+s_j}}\,dx
\le \frac{C (1-s_j) a_j}{\ell^{1+s_j}}.
\end{equation}
With this, we are in position to apply
Lemma~\ref{SILV 8.6} with~$\kappa:=-a_j$,
$R:=\ell$ and~$\varepsilon:=a_j
M^{1-k_o}/
(2\ell)$ -- notice indeed that~\eqref{sd77ef12345d}
follows from~\eqref{EL2}, \eqref{901212-bis}
follows from~\eqref{xi prime} and, recalling~\eqref{less},
we see that~$\varepsilon\le
\varepsilon^\star M^{1-k_o}/2$ which is small if so
is~$\varepsilon^\star$: this configuration is
represented in Figure~4.
\begin{figure}[htbp]
\begin{center}
\resizebox{11.2cm}{!}{\input{fig2.pstex_t}}
{\caption{\it Proving the inductive step of~\eqref{L beta}
when~$a_j/\ell\le \varepsilon^\star$.}}
\end{center}
\end{figure}
So, we obtain from~\eqref{step zero}
that
\begin{eqnarray*}
&& |A\cap Q|=\Big|\big\{ u^-_j +a_j > \frac{a_j M^{2-k_o}}{2}
\big\}\cap Q\Big|
\\ &&\qquad=
\Big|\big\{ u^-_j -\kappa>
M\varepsilon R
\big\}\cap Q\Big|\le (1-\mu) |Q|,\end{eqnarray*}
which is in contradiction with~\eqref{DQ}.
This proves~\eqref{post} if~\eqref{less} holds true.
Now we deal with the case in which~$a_j/\ell \ge
\varepsilon^\star$, and we fix~$\theta\in(0,1)$
to be chosen suitably small in the sequel.
We set~$p:=a_j/(\theta^2 \varepsilon^\star)$.
Notice that, for small~$\theta$, we
have that~$p>10 a_j/\varepsilon^\star \ge10\ell$.
Also, the ratio between~$a_j$ and~$p$ is below~$\theta^2
\varepsilon^\star$,
hence a minimal surface
that is trapped inside~$\{|x'|\le p\}\times\{|x_n|\le 8 a_j\}$
is the graph of a function~$\omega$, with~$|\nabla\omega|\le
\theta^{3/2} \varepsilon^\star$.
Accordingly,
\begin{equation}\label{omega os}\begin{split}&{\mbox{
the oscillation of~$\omega$ in~$\{|x'_i|\le 6\ell\}$}}\\
&\qquad{\mbox{
is bounded by~$\theta\varepsilon^\star\ell \le \theta a_j$.}}
\end{split}\end{equation}
Keeping this in mind,
we take~$j$ so large that~$\gamma_j$, i.e. the distance
between~$E_j$
and~$E_\star$, is less than~$\theta^2\varepsilon^\star p/2$
(recall~\eqref{dw110dfu2bxq0}).
Also, for large~$j$, we have that the graph of~$u^-_j$ is
at distance~$\delta_j$ less than~$\theta^3\varepsilon^\star p/2$
from~$E_j$,
and so less than~$\theta^3\varepsilon^\star p$ from~$E_\star$
(recall~\eqref{dw110dfu2bxq}
and~\eqref{dw110dfu2bxq.2}).
Accordingly, $\partial E_\star\cap \{|x'_i|\le 6\ell\}$
is trapped in a slab of width~$4a_j+2\theta^3\varepsilon^\star p<8a_j$,
and, by~\eqref{xi prime}, it contains a point
with vertical entry below~$
(a_j M^{1-k_o}/{2})+\theta^3\varepsilon^\star p$.
Then, by~\eqref{omega os}, the whole of~$\partial E_\star\cap \{|x'_i|\le
4\ell\}$ has vertical entry below
$$ -a_j+(a_j M^{1-k_o}/{2})+\theta^3\varepsilon^\star p+\theta a_j.$$
Consequently, the graph of~$u^-_j$ on~$Q$ would stay below
\begin{eqnarray*}
&& -a_j+(a_j M^{1-k_o}/{2})+\theta^3\varepsilon^\star p+\theta a_j
+\theta^3\varepsilon^\star p
\\ &&\qquad=-a_j+(a_j M^{1-k_o}/{2})+3\theta a_j
< -a_j+(a_j M^{2-k_o}/{2}),
\end{eqnarray*}
as long as we choose~$\theta<M^{1-k_o} (M-1)/6$. Hence,~$A\cap Q=
\varnothing$,
which is in contradiction with~\eqref{DQ}.
This ends the proof of~\eqref{post},
and therefore the one of~\eqref{L beta}.
As a consequence, by taking~$k:=k_o$ in~\eqref{L beta}
and recalling~\eqref{k o}, we obtain that
\begin{equation}\label{L beta 1}
\left| \Big\{ u^-_j <
-\frac{ a_j }{2} \Big\} \cap Q_1\right|\ge
\frac{3}{4}
\end{equation}
for large~$j$.
A mirror argument on~$u^+_j$ gives that
\begin{equation}\label{L beta 2}
\left| \Big\{ u^+_j >
\frac{ a_j }{2} \Big\} \cap Q_1\right|\ge
\frac{3}{4}
\end{equation}
for large~$j$.
So, by~\eqref{L beta 1}
and~\eqref{L beta 2},
there must exist~$y'_j$
such that~$u^-_j( y'_j) \le -a_j/2$ and~$u^+_j(y'_j)\ge a_j/2$,
hence
$$ u^+_j(y'_j)-u^-_j(y'_j)\ge a_j\ge a/2.$$
This is in contradiction with~\eqref{so long},
and so the proof of Lemma~\ref{ovidiu 6.9}
is completed.
\end{proof}
\subsection{Completion of the proof of Theorem~\ref{MAIN}}
Thanks to Lemma~\ref{ovidiu 6.9},
we have obtained a statement analogous to the one
of Lemma~6.9 of~\cite{CRS}, but with uniform estimates.
Then, the argument from Lemma~6.10 to the end of Section~6
in~\cite{CRS} also yields the proof of Theorem~\ref{MAIN} here.
\section{Proof of Theorem~\ref{-2-}}
The proof is by contradiction.
We suppose that there are~$s_k$-minimal cones~$E_k$
that are not hyperplanes, with~$s_k\rightarrow 1^-$.
By dimensional reduction (see Theorem~10.3 of~\cite{CRS}),
we may focus on the case in which~$E_k$ is singular at
the origin.
{F}rom~\cite{CV}, up to subsequence, we have that~$E_k$
approaches locally uniformly a classical cone of minimal
perimeter. Since~$n\le 7$, we have that such a cone
is a halfspace, say~$\{x_n<0\}$
(see, e.g., Section~1.5.2 of~\cite{Miranda}).
So, for large~$k$, we have that~\eqref{main trap}
holds true for~$E_k$, namely
$$ \partial E_k\cap B_1\subseteq \{ |x\cdot e_n|\le
\varepsilon_\flat \}.$$
Therefore, by Theorem~\ref{MAIN},
we obtain that~$\partial E_k$
is smooth, i.e.~$E_k$
is a hyperplane, for infinitely many~$k$'s.
This is a contradiction with our assumptions
and it proves
Theorem~\ref{-2-}.
\section{Proof of Theorem~\ref{-3-}}
Let~$E$ be $s$-minimal. We take the blow up
of~$E$ and we obtain a minimal cone~$E'$ (see Theorem~9.2
of~\cite{CRS}).
By Theorem~\ref{-2-}, we know that~$E'$ is a hyperplane.
Then, $\partial E$ is~$C^{1,\alpha}$, thanks to Theorem~9.4
in~\cite{CRS}. This ends the proof of Theorem~\ref{-3-}.
\section{Proof of Theorems~\ref{GG1} and~\ref{GG2}}
The proofs of Theorems~\ref{GG1} and~\ref{GG2}
follow now verbatim the ones of Theorems~11.7
and~11.8 in~\cite{Giusti} (the only difference
is that the dimensional reduction is
performed via
Theorem~10.3 of~\cite{CRS},
and the regularity needed in low dimension
is assured here by
Theorem~\ref{-2-}).
| {
"timestamp": "2013-02-07T02:02:20",
"yymm": "1105",
"arxiv_id": "1105.1158",
"language": "en",
"url": "https://arxiv.org/abs/1105.1158",
"abstract": "We prove an improvement of flatness result for nonlocal minimal surfaces which is independent of the fractional parameter $s$ when $s\\rightarrow 1^-$.As a consequence, we obtain that all the nonlocal minimal cones are flat and that all the nonlocal minimal surfaces are smooth when the dimension of the ambient space is less or equal than 7 and $s$ is close to 1.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Regularity properties of nonlocal minimal surfaces via limiting arguments",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854146791214,
"lm_q2_score": 0.7310585727705126,
"lm_q1q2_score": 0.7089699411489782
} |
https://arxiv.org/abs/1808.00223 | The power spectrum indicator: A new, efficient method for the early detection of chaos | Determining the regular or chaotic nature of orbits in dynamical systems can be quite an issue. In this article, following Vozikis et al. (2000), we propose a new tool, namely, the Power Spectrum Indicator (PSI), $\psi^2$, that enables us to determine, as early as possible, whether an orbit of a two-dimensional map is chaotic or not. This new method is based on the frequency analysis of a data series constructed by recording the logarithm of the amplification factor of the deviation vector of nearby orbits. Accordingly, two datasets are recorded and the $\chi^2$-likelihood of their power spectra is computed. Ordered orbits always have the same power spectrum, so their $\chi^2 \equiv \psi^2$ acquires a zero value. On the contrary, a chaotic orbit has a power spectrum that varies with time; hence, chaotic orbits always exhibit a non-zero $\psi^2$ value. Even as regards "sticky" orbits, the PSI method is very effective in the early detection of chaos, while the global behavior of the $\psi^2$ indicator can provide information (also) on the intensity of the chaotic behavior, i.e., on how "strong" or "weak" the associated chaos may be. | \section{Introduction}
A major issue in studying non-integrable dynamical systems is the
determination, as early as possible, of the orbits' (chaotic or not) nature. In
the pioneering work of H\'enon and Heiles (1964), when the related research
was limited to two-dimensional $(2D)$ systems, the study of the orbits' nature
was performed by means of the surface of section (SoS). When research extended
to more complex, three-dimensional $(3D)$ systems (where the method of SoS
cannot be applied), the problem was addressed in terms of the Lyapunov
characteristic numbers - LCN (Benettin et al. 1976, Froeschl\'e 1984). The LCN
method tracks the evolution of the deviation vector, $\vec{d}$, that connects
the positions of two nearby orbits in phase-space at each time-step or,
equivalently, at each iteration. Unfortunately, both methods have the same
weakness: They cannot distinguish, early enough, a {\it "sticky"} chaotic
orbit from an ordered one (see, e.g., Contopoulos \& Voglis 1997).
Since then, several other methods have been proposed. Some of them are based
on the analysis of a time-series associated with the values of the generalized
coordinates or functions of these coordinates, such as the rotation number method
(Contopoulos 1966), the frequency map analysis (Laskar et al. 1992, Laskar
1993) and the power spectrum of quasi-integrals analysis (Voyatzis \&
Ichtiaroglou 1992). Other methods use the geodesic divergence of initially
nearby trajectories, as in the probability-density analysis of stretching
numbers (Froeschl\'e et al. 1993, Voglis \& Contopoulos 1994), the fast
Lyapunov indicators (Froeschl\'e et al. 1997), and the methods of alignment
indexes, namely, the small alignment indexes - SALI (Skokos 2001, Skokos et
al. 2003, 2004, Skokos \& Manos 2016) and the generalized alignment indexes -
GALI (Skokos et al. 2007, Skokos \& Manos 2016). Each and every one of the
above methods has its own advantages and
weaknesses; some of them are more efficient in addressing 2D systems than
their higher-dimensional counterparts, while others perform better on
mappings than on flows.
In this context, eighteen years ago, Vozikis et al. (2000) proposed a method
based on the frequency analysis (power spectrum) of stretching numbers. A
variant of this method was used also by Karanis \& Vozikis (2008). The method
is fast and efficient, but it has a major disadvantage. In order to
decide whether an orbit is regular or not, one needs to visually inspect the
power spectrum and to classify it as either
representing a regular orbit or a chaotic one.
In the present article, we are revisiting the method proposed by
Vozikis et al. (2000),
introducing a major improvement that may help us to overcome the previous
disadvantage. As a result of this new method, we end up with (just) a single
number that enables us to classify an orbit as being either regular or
chaotic.
This article is organized as follows: In Section 2, we summarize the
power-spectrum method of Vozikis et al. (2000), in the context of which we
will also set up our new model. In Section 3, we perform a {\it
goodness-of-fit}, $\chi^2$-analysis of several successive power spectra,
resulting from datasets that are associated with the deviation vectors of
three particular types of orbit, namely, regular, chaotic, and {\it
"sticky"}. As a result of the aforementioned likelyhood analysis, a new
indicator of chaotic behavior is introduced (Section 4), which allows us to
classify any kind of orbit on a $2D$ mapping, as early as possible. Finally,
we conclude in Section 5.
\section{The model and the power spectrum method}
\subsection{The model}
One of the most frequently used test-models for studying chaotic motion is the
$2D$ {\it standard map}, which appears in the literature in many forms
(Lichtenberg \& Lieberman 1983, Ichikawa et al. 1987, Aubry and Abramovici
1990, Contopoulos \& Voglis 1997, Gelfreich 1999, Lazutkin 2005). In the
present article, we adopt the following set of recursive relations
\numparts
\begin{eqnarray}
J_{i+1} & = & J_i + k \cos(2 \theta_i) ~~~ \mathrm{mod}(2 \pi) \: ,\\
\theta_{i+1} & = & \theta_i + J_{i+1} ~~~~~~~~~~ \mathrm{mod}(2 \pi) \:,
\label{eq:map}
\end{eqnarray}
\endnumparts
where $J_i$ and $\theta_i$ are the associated action-angle variables, $i$
stands for the iteration number, and $k$ is the {\it "stochasticity
parameter"} (see, e.g., Lichtenberg \& Lieberman 1983). In what follows, we
consider that $k = 0.7$, a case where the standard map possesses both regular
and chaotic regions.
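As an illustration, the following minimal numerical sketch iterates the
recursive relations above (the two initial conditions reproduce the orbits
of figure \ref{fig:points1}; the variable names are, of course, our own
choice):
\begin{verbatim}
import numpy as np

def standard_map(J0, theta0, k=0.7, n_iter=20000):
    # Iterate the standard map defined above, mod 2*pi.
    J = np.empty(n_iter + 1)
    theta = np.empty(n_iter + 1)
    J[0], theta[0] = J0, theta0
    for i in range(n_iter):
        J[i + 1] = (J[i] + k * np.cos(2.0 * theta[i])) % (2.0 * np.pi)
        theta[i + 1] = (theta[i] + J[i + 1]) % (2.0 * np.pi)
    return J, theta

# ordered orbit (left frame) and chaotic orbit (right frame):
J_ord, th_ord = standard_map(np.pi, 1.5 * np.pi)
J_cha, th_cha = standard_map(1.3 * np.pi, 1.5 * np.pi)
\end{verbatim}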
In figures \ref{fig:points1}, \ref{fig:points2} and \ref{fig:points3}
we present four characteristic orbits of the
standard map, for $k=0.7$. Notice that, in all figures, $J_i$ and $\theta_i$
are given as multiples of $\pi$. The two frames of figure \ref{fig:points1}
represent the first 20\,000 points of an ordered orbit, with initial
conditions $J_0 = \pi $, $\theta_0 = 1.5~\pi$ (left frame), and of a chaotic
orbit, with initial position at $J_0 = 1.3~\pi $, $\theta_0 = 1.5~\pi$ (right
frame). The left frame of figure \ref{fig:points2} represents a chaotic orbit,
originating at $J_0
= 1.1998~\pi $, $\theta_0 = 1.49~\pi$. Although the orbit looks ordered, a
zoom (right frame) on the area near the separatrix reveals its chaotic
nature. Finally the two frames of figure \ref{fig:points3} represent a {\it
"sticky"} orbit
with initial position $J_0 = \pi $, $\theta_0 = 1.538~\pi$. The left frame
shows the first 10\,000 successive points, while the right one presents the
first 15\,000 points. This {\it "sticky"} orbit, although it is chaotic,
behaves, at least macroscopically, like an ordered one for about 11\,000
iterations.
\begin{figure*}
\resizebox{\hsize}{!}
{\includegraphics*{fig01tl.ps}
\hspace{1cm}
\includegraphics*{fig01tr.ps}}
\caption{Successive points of two orbits on the {\it standard map}, when $k =
0.7$. Left frame, an ordered orbit originating at $J_0 = \pi $,
$\theta_0 = 1.5~\pi$; Right frame, a chaotic orbit originating at $J_0 =
1.3~\pi $, $\theta_0 = 1.5~\pi$ (the axes units are in multiples of $\pi$). }
\label{fig:points1}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}
{\includegraphics*{fig01cl.ps}
\hspace{1cm}
\includegraphics*{fig01cr.ps}}
\caption{Successive points of a chaotic orbit on the {\it standard map}, when
$k = 0.7$, originating at $J_0 = 1.1998 ~\pi $, $\theta_0 = 1.49~\pi$. Left
frame, full view; Right frame, a zoom on the area near the separatrix of
the orbit.}
\label{fig:points2}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}
{\includegraphics*{fig01bl.ps}
\hspace{1cm}
\includegraphics*{fig01br.ps}}
\caption{Successive points of a ``sticky'' orbit originating at
$J_0 = \pi $, $\theta_0 = 1.538~\pi$.
Left frame, the first 10\,000 points;
Right frame, the first 15\,000 points.}
\label{fig:points3}
\end{figure*}
\subsection{The power spectrum (PSOD) method}
In order to decide on the nature of an orbit (chaotic or not), Vozikis et
al. (2000) proposed the Power Spectrum of Orbits Divergence (PSOD)
method. This method consists in the numerical iteration of the orbit
originating at $\left( J_0, \theta_0 \right)$, along with a nearby one,
originating at an infinitesimally-close distance in phase space,
$\vec{d_0}=\left( dJ_0, d\theta_0 \right)$, i.e., of an initial position
$J_0^{\prime} = J_0 + dJ_0$, $\theta_0^{\prime} = \theta_0 + d\theta_0$. At
each iteration, $i$, the quantity
\begin{equation}
q_i = \ln (d_i/d_0) \: ,
\label{eq:q}
\end{equation}
where $\vec{d_i}=\left ( dJ_i, d\theta_i \right )$, is recorded. To calculate
$\vec{d_i}$, instead of integrating numerically also the second, nearby,
orbit, one can use the so called {\it variational equations}
\begin{eqnarray}
dJ_{i+1} & = & dJ_i - 2k \sin (2 \theta_i) d \theta_i \nonumber \\ d \theta_{i+1} & = & d \theta_i + dJ_{i+1} .
\label{eq:var_map}
\end{eqnarray}
These equations can be easily obtained, by
substituting in the standard map $J^{\prime} = J + dJ$ and $\theta^{\prime} =
\theta + d \theta$, and expanding $\cos (2 \theta^{\prime})$ as a Taylor
series, keeping only first-order terms in $dJ$ and $d \theta$. Prior to each
iteration, the deviation vector, $\vec{d}$, is renormalized, upon
multiplication by the factor $d_0/d_i$. In other words, at each and every $i$,
although $\vec{d_i}$ retains the orientation acquired, its norm remains equal
to $d_0$. Thus, after tracking this orbit for $N$ successive iterations, we
are left with a series of consecutive $q_i$ $( i = 1, 2, ..., N)$.
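For concreteness, a possible implementation of this recording procedure is
sketched below: it iterates the variational equations alongside the orbit
and renormalizes the deviation vector at every step (the initial deviation
vector and the value of $d_0$ are our own arbitrary choices):
\begin{verbatim}
import numpy as np

def q_series(J0, theta0, k=0.7, n_iter=20000, d0=1.0e-8):
    # Record q_i = ln(d_i/d_0) along an orbit of the standard map,
    # resetting the deviation vector to norm d_0 before each iteration.
    J, theta = J0, theta0
    dJ, dth = 0.0, d0        # arbitrary initial deviation of norm d0
    q = np.empty(n_iter)
    for i in range(n_iter):
        # variational equations, evaluated at the current theta
        dJ = dJ - 2.0 * k * np.sin(2.0 * theta) * dth
        dth = dth + dJ
        d = np.hypot(dJ, dth)
        q[i] = np.log(d / d0)
        dJ, dth = dJ * d0 / d, dth * d0 / d   # renormalization
        # advance the orbit itself
        J = (J + k * np.cos(2.0 * theta)) % (2.0 * np.pi)
        theta = (theta + J) % (2.0 * np.pi)
    return q
\end{verbatim}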
The power spectrum of the aforementioned $q$-series can be obtained by taking
the discrete Fourier transform of $q_k$, multiplied by a {\it window
function}, $w_k$,
\begin{equation}
Q_j = \sum_{k=0}^{2N-1} q_k w_k e^{2 \pi i \frac{jk}{2N}} ~~~~~j = 0,..., \left( 2 N - 1 \right) \: .
\end{equation}
As window function we use the so-called {\it Hanning window} (see, e.g., Press
et al. 1992). In this context, the power spectrum, $P(f_j)$, is defined over a
set of $M = N+1$ frequencies, $f_j$, as
\begin{eqnarray}
\label{eq:Power}
P(f_0) & = & \frac{1}{W} ~ \vert Q_0 \vert^2 \: , \nonumber \\
P(f_j) & = & \frac{1}{W} ~ \left( \vert Q_j \vert^2 + \vert Q_{2N-j} \vert^2 \right )~~~~~ j = 1,..., \left ( N-1 \right ) \: , \\
P(f_c) & = & \frac{1}{W}~ \vert Q_{N} \vert^2 \: , \nonumber
\end{eqnarray}
where we have set
\begin{equation}
W = 2N \sum_{k=0}^{2N-1}w_k^2 \: .
\end{equation}
In Eqs.~(\ref{eq:Power}), the frequency $f_c = f_{N}$ is the Nyquist
frequency, which, in
the case of the standard map, is
\begin{equation}
f_c = \frac{1}{2}.
\end{equation}
The group of frequencies embraced by the power spectrum of
Eqs.~(\ref{eq:Power}) is given by:
\begin{equation}
f_j = f_c~\frac{j}{N}~~~~~~~j = 0, ..., N \: .
\label{eq_freqs}
\end{equation}
More detailed information on the calculation of the power spectrum can be
found in the book {\it "Numerical Recipes"} by Press et al. (1992). In this
article, as far as the computation of the power spectrum is concerned, we use
two successive sets of $2 \times N$ data each, which overlap with each other
by one half of their length. In other words, the whole dataset of $q_i$
involved in a single calculation of the power spectrum has length
$N_S = 3 \times N$, where $N$ is a power of 2.
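\noindent A minimal numerical sketch of this computation (ours; the averaging
of the two overlapping segments follows the overlapping-data scheme of Press
et al. 1992, and numpy's \texttt{hanning} and \texttt{fft} routines are
assumed) could read:
\begin{verbatim}
def segment_spectrum(q):
    """Power spectrum of one 2N-point segment, Eqs. (Power)."""
    n2 = len(q)                   # 2N samples
    N = n2 // 2
    w = np.hanning(n2)            # Hanning window w_k
    Q = np.fft.fft(q * w)         # discrete Fourier transform Q_j
    W = n2 * np.sum(w ** 2)       # W = 2N * sum_k w_k^2
    P = np.empty(N + 1)           # M = N + 1 frequencies
    P[0] = np.abs(Q[0]) ** 2 / W
    j = np.arange(1, N)
    P[1:N] = (np.abs(Q[j]) ** 2 + np.abs(Q[n2 - j]) ** 2) / W
    P[N] = np.abs(Q[N]) ** 2 / W  # Nyquist frequency f_c = 1/2
    return P

def psod(q):
    """PSOD of N_S = 3N values of q_i: average over the two 2N-point
    segments that overlap by one half of their length."""
    N = len(q) // 3
    return 0.5 * (segment_spectrum(q[:2 * N])
                  + segment_spectrum(q[N:3 * N]))
\end{verbatim}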
Figure \ref{fig:psod_oc} presents the power spectra associated with the two
orbits of figure \ref{fig:points1}. The spectrum on the left frame is for the
regular orbit, while the one on the right frame is for the chaotic one. One
may easily decide on the nature of a particular orbit simply by inspecting
its spectrum. Ordered motion corresponds to a power spectrum with only a few
spikes at certain frequencies. On the contrary, a chaotic orbit exhibits a
spectrum that contains almost all frequencies, with varying amplitudes.
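\noindent Combining the two sketches above, the spectra of the two orbits
could be reproduced along the following (hypothetical) lines; the value
$k = 0.7$ is taken from the caption of figure \ref{fig:points2} and is
assumed to hold for all the orbits discussed here:
\begin{verbatim}
# hypothetical usage: few spikes vs. broad-band power
n_iter = 3 * 256                                 # N_S = 3 x 256
q_ord = q_series(np.pi, 1.5 * np.pi, k=0.7, N=n_iter)
q_cha = q_series(1.3 * np.pi, 1.5 * np.pi, k=0.7, N=n_iter)
P_ord, P_cha = psod(q_ord), psod(q_cha)
\end{verbatim}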
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{fig02l.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{fig03l.ps}}}
\caption{The power spectra of two orbits for $N_S = 3 \times 256$
  iterations. Left frame: The power spectrum associated with the ordered orbit
  originating at $J_0 = \pi $, $\theta_0 = 1.5~\pi$. Right frame: The
  corresponding quantity as regards the chaotic orbit originating at $J_0 =
  1.3~\pi $, $\theta_0 = 1.5~\pi$.}
\label{fig:psod_oc}
\end{figure*}
\section{A novel method}
\subsection{The basic idea}
The basic idea behind our new method is that ordered motion is a kind of
quasi-periodic motion. Thus, the power spectrum of an ordered orbit will
always exhibit the same characteristic frequencies, i.e., it will be
independent of time (or, equivalently, of the iteration number). On the
contrary, chaotic motion is a sort of random walk. Hence, we expect that, for
a chaotic orbit, two different sets of $q_i$ (eq. \ref{eq:q}) recorded at
different times (iterations) will be completely different, since the motion
possesses no periodicity at all.
The validity of this idea can easily be verified in figures
\ref{fig:comp_psd_o} to \ref{fig:comp_psd_s}. In all three figures, we first
follow a specific orbit for $N_{S} = 3 \times 256$ iterations. The spectrum
of the associated set of $q_i$ is presented on the left frame of each
figure. Next, we follow the same orbit for another $N_{S}$ iterations, thus
creating a second set of $q_i$. The spectrum of this second set is presented
on the right frame. In figure \ref{fig:comp_psd_o}, the two successive spectra
of an ordered orbit are presented; they appear to be identical. On the
contrary, as far as a chaotic orbit is concerned, the difference between the
two associated spectra is more than obvious (cf. figure
\ref{fig:comp_psd_c}).
Detecting the chaotic nature of a {\it "sticky"} orbit is the most challenging
case, since such an orbit can behave like an ordered one for many
iterations. A reliable {\it chaos-detecting tool} must reveal the true
identity of these orbits as early as possible. Figure \ref{fig:comp_psd_s}
shows the two spectra of a {\it "sticky"} orbit. We see that these spectra
resemble those of an ordered orbit. However, although they appear to be the
same, a closer look shows that they are not: there are small differences at
those frequencies that are associated with low amplitudes.
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{fig02l.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{fig02r.ps}}}
\caption{The PSODs of the first $N_S = 3 \times 256 = 768$ iterations (left)
and the next $N_S$ iterations (right) of an ordered orbit originating at
$J_0 = \pi $, $\theta_0 = 1.5~\pi$.}
\label{fig:comp_psd_o}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{fig03l.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{fig03r.ps}}}
\caption{Same as figure \ref{fig:comp_psd_o}, but, this time, for a chaotic
orbit originating at $J_0 = 1.3~\pi $, $\theta_0 = 1.5~\pi$.}
\label{fig:comp_psd_c}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{fig04l.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{fig04r.ps}}}
\caption{Same as figures \ref{fig:comp_psd_o} and \ref{fig:comp_psd_c}, but
now, as far as the {\it "sticky"} orbit originating at $J_0 = \pi $,
$\theta_0 = 1.538~\pi$ is concerned.}
\label{fig:comp_psd_s}
\end{figure*}
\subsection{On the $\chi^2$-likelihood of the PSOD method}
In order to see whether, and by how much, the PSOD of the same orbit changes,
we perform a $\chi^2$-statistical comparison of two PSODs of a particular
orbit. Let $P_j$ be the power spectrum $P \left ( f_j \right )$ of the first
PSOD, computed from the values $q_1, \ldots, q_{N_S}$, and let $P_j^{\prime}$
be the corresponding spectrum of the second PSOD, computed from the
subsequent values $q_{N_S+1}, \ldots, q_i$. Their $\chi^2$-likelihood is then
defined as (see, e.g., Chapter 14.3 of Press et al. 1992)
\begin{equation}
\chi^2 = {\sum_{j=0}^N {\frac{\left ( P_j^{\prime} - P_j \right )^2}{P_j^{\prime} + P_j}}} \: .
\label{eq:chi2}
\end{equation}
Since the values of $P_j^{\prime}$ of the second PSOD depend on the particular
set of $q_i$, we consider that the second PSOD, $P_j^{\prime}$, is a function
of the iteration number $i$, namely,
\begin{eqnarray}
P_j & = & {\rm PSOD} \left[ q \left ( 1 \right ),~...,~q(N_S) \right ] \: , \nonumber \\
P_j^{\prime} (i) & = & {\rm PSOD} \left [ q \left ( i-N_S+1 \right ),~...,~q(i) \right ] \: .
\label{eq:Pj}
\end{eqnarray}
As a consequence, $\chi^2$ also becomes a function of the iteration number, $i$:
\begin{equation}
\chi^2 = \chi^2(i) = {\sum_{j=0}^N{\frac{\left ( P_j^{\prime} (i) - P_j \right )^2}{P_j^{\prime} (i) + P_j}}} \: .
\label{eq:chi2i}
\end{equation}
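\noindent In code, the comparison of Eq. (\ref{eq:chi2i}) could be sketched as
follows (our illustration, building on the \texttt{psod} sketch above; the
guard against empty frequency bins is an implementation detail not discussed
in the text):
\begin{verbatim}
def chi2(P, Pprime):
    """Chi-square likelihood of two power spectra, Eq. (chi2i)."""
    den = P + Pprime
    mask = den > 0                # avoid 0/0 in empty bins
    return np.sum((Pprime[mask] - P[mask]) ** 2 / den[mask])

def chi2_evolution(q, N):
    """chi^2(i): the first PSOD is fixed at the start of the orbit,
    the second one slides with the iteration number i, Eq. (Pj)."""
    NS = 3 * N
    P_first = psod(q[:NS])
    return np.array([chi2(P_first, psod(q[i - NS:i]))
                     for i in range(NS + 1, len(q) + 1)])
\end{verbatim}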
To begin with, let us examine how $\chi^2(i)$ behaves for various types of
orbits (i.e., ordered, chaotic, or {\it "sticky"}) and whether the idea
suggested in Section 3.1 is valid, in the sense that it can give us reliable
results as far as the early prediction of chaos is concerned. Figure
\ref{fig:chi2} shows the evolution of $\chi^2(i)$ of the PSODs as a function
of the iteration number, $i$, for the four orbits of figures
\ref{fig:points1}, \ref{fig:points2} and \ref{fig:points3}. The top left
frame corresponds to the ordered orbit, with initial conditions
$J_0 = \pi$, $\theta_0 = 1.5~\pi$. As we can see, the corresponding
$\chi^2(i)$ value remains zero for every $i$. As we have already discussed in
Section 3.1, an ordered orbit represents a sort of quasi-periodic motion and,
thus, its PSOD exhibits only a few characteristic frequencies, remaining
time-invariant (or, equivalently, invariant with respect to the iteration
number, $i$). On the contrary, chaotic orbits behave in a definitely
non-periodic manner and, therefore, their PSOD will be completely different
at different times (equivalently, at different values of the iteration
number, $i$). The top right frame of figure \ref{fig:chi2} depicts the
evolution of $\chi^2(i)$ for a chaotic orbit, originating at $J_0 = 1.3~\pi$,
$\theta_0 = 1.5 ~ \pi$. As we can see, in this case $\chi^2(i)$ starts from a
value of $0.23$ and fluctuates, but it never drops to zero. A similar
behavior can be seen in the evolution of $\chi^2(i)$ of the second chaotic
orbit, originating at $J_0 = 1.1998~\pi $, $\theta_0 = 1.49~\pi$. This orbit,
although it presents very weak chaos (cf. figure \ref{fig:points2}),
exhibits, from the very beginning, a value of $\chi^2(i)$ around $0.18$. The
final, bottom right frame of figure \ref{fig:chi2} presents the evolution of
$\chi^2(i)$ associated with the {\it "sticky"} orbit, originating at
$J_0 = \pi $, $\theta_0 = 1.538 ~ \pi$. We can see that, even in the
beginning, where it is not at all easy to (visually) distinguish the
difference between the two PSODs (cf. figure \ref{fig:comp_psd_s}), the
$\chi^2(i)$ method can reveal the true nature of the orbit, acquiring a value
of $0.03$. To put it more clearly, in this case the non-zero value of
$\chi^2(i)$ of the two PSODs indicates that there are differences between
them and, therefore, in the end, the orbit will exhibit a clearly chaotic
behavior. Indeed, as the time (or, equivalently, the iteration number, $i$)
goes by, $\chi^2(i)$ increases and, when the particular orbit leaves the
{\it "sticky"} region and enters the {\it "big chaotic sea"}, it climbs to
values higher than $0.5$!
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figx2_o.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figx2_c.ps}}}
\vskip 0.1cm
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figx2_c2.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figx2_s.ps}}}
\caption{$\chi^2(i)$-likelihood of the PSODs corresponding to the orbits of
  figure 1. Top left frame, the ordered orbit originating at $J_0 = \pi $,
  $\theta_0 = 1.5~\pi$; Top right frame, the chaotic orbit originating at $J_0
  = 1.3~\pi$, $\theta_0 = 1.5~\pi$; Bottom left frame, the second chaotic
  orbit originating at $J_0 = 1.1998~\pi $, $\theta_0 = 1.49~\pi$; Bottom
  right frame, the {\it "sticky"} orbit originating at $J_0 = \pi $,
  $\theta_0 = 1.538~\pi$.}
\label{fig:chi2}
\end{figure*}
\section{The power spectrum indicator, $\psi^2$}
So far, we have seen that, as regards two PSODs of a particular orbit, the
$\chi^2(i)$-likelihood analysis can give us important information on whether
this orbit is ordered or not. Clearly, a non-zero value of $\chi^2$ suggests
that the orbit is definitely chaotic. In this context, the results of Section
3.2 may give rise to the following questions:
\begin{enumerate}
\item Why does the $\chi^2$ value corresponding to the {\it "sticky"} orbit
  rise to such high values, as compared to the chaotic orbits presented in
  figure \ref{fig:chi2}?
\item Can we modify the method, in a way that it can give us (also) a clear
indication on the degree of chaos?
\end{enumerate}
The answer to the first question is easy. The two spectra we compare in the
$\chi^2$-analysis differ significantly, not only in the frequencies they
contain but also in the total power,
\begin{equation}
S_P(i)=\sum_{j=0}^N{P_j^{\prime}(i)} \: ,
\label{eq:Sp}
\end{equation}
of each spectrum. The orbit migrates from a region of weak (i.e., not visually
observable) chaos to a region with strong chaotic behavior. Figure
\ref{fig:Sp} shows the evolution of the total power, $S_P(i)$, of the spectra
corresponding to the four orbits associated with the $\chi^2(i)$ values of
figure \ref{fig:chi2}.
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figSp_o.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figSp_c.ps}}}
\vskip 0.1cm
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figSp_c2.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figSp_s.ps}}}
\caption{The evolution of the total power, $S_P(i)$, of the spectra associated
  with the four orbits of figure \ref{fig:chi2}.}
\label{fig:Sp}
\end{figure*}
In an effort to answer the second question, we further ask ourselves:
{\it "What would our results be if we compared two power spectra obtained
from datasets that are only a few (i.e., a constant number, $n$, of)
iterations apart?"}
To do so, instead of taking the first dataset as corresponding to the
beginning of the orbit, i.e., as originating from $i=1$, we consider that it
ends at iteration $i-n$. In this case, the two power spectra corresponding to
iteration $i$ are
\begin{eqnarray}
P_j^{\prime} (i) & = & {\rm PSOD} \left[ q \left ( i-N_S+1 \right ),~...,~q(i) \right ] \nonumber \\
P_j(i) & = & {\rm PSOD} \left [ q \left ( i-N_S+1-n \right ),~...,~q(i-n) \right ]
\label{eq:newPj}
\end{eqnarray}
and their $\chi^2$-likelihood, which is our new Power Spectrum Indicator
(PSI), denoted from now on by $\psi^2$, is given by
\begin{equation}
\psi^2(i) \equiv {\sum_{j=0}^N{\frac{\left ( P_j^{\prime}(i) - P_j(i) \right )^2}{P_j^{\prime}(i) + P_j(i)}}} = \chi^2 (i) \: .
\label{eq:psi2i}
\end{equation}
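\noindent A sketch of the corresponding computation (again ours, reusing the
\texttt{psod} and \texttt{chi2} functions introduced above) could be:
\begin{verbatim}
def psi2_evolution(q, N, n):
    """psi^2(i), Eq. (psi2i): compare the PSOD of the window ending
    at iteration i with that of the window ending at i - n."""
    NS = 3 * N
    return np.array([chi2(psod(q[i - NS - n:i - n]),
                          psod(q[i - NS:i]))
                     for i in range(NS + n, len(q) + 1)])
\end{verbatim}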
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figy2_o.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figy2_c.ps}}}
\vskip 0.1cm
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figy2_c2.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figy2_s.ps}}}
\caption{The evolution of $\psi^2(i)$ associated with the four orbits depicted
  in figure 1. The top left frame corresponds to the ordered orbit,
  originating at $J_0 = \pi $, $\theta_0 = 1.5~\pi$. Top right: The first
  chaotic orbit, originating at $J_0 = 1.3~\pi$, $\theta_0 = 1.5~\pi$. Bottom
  left: The second chaotic orbit, originating at $J_0 = 1.1998~\pi $,
  $\theta_0 = 1.49~\pi$. Bottom right: The power spectrum indicator associated
  with the {\it "sticky"} orbit, originating at $J_0 = \pi $, $\theta_0 =
  1.538~\pi$. In each case, the datasets involved differ by $n = 256$
  iterations.}
\label{fig:psi2}
\end{figure*}
Figure \ref{fig:psi2} presents the evolution of our new indicator, $\psi^2$,
as a function of the iteration number $i$, regarding the four orbits of figure
1. The two spectra are calculated from datasets that are $n = 256$ iterations
apart. As expected, for the ordered orbit (upper left frame) the value of
$\psi^2$ is always zero. On the contrary, the two chaotic orbits (upper right
and lower left) exhibit a clearly non-zero value of $\psi^2$. This value is by
no means constant, because, at different values of the iteration number, $i$,
the orbit visits different areas of the chaotic region. Notice that the
values of $\psi^2$ corresponding to the first chaotic orbit (upper right
frame) are much higher than those of the second one (lower left frame),
indicating that the former orbit is in a region of {\it "strong"} chaos, as
compared to the {\it "weaker"} chaotic region in which the latter orbit lies.
Finally, as regards the power spectrum indicator, $\psi^2$, associated with
the {\it "sticky"} orbit (lower right frame of figure \ref{fig:psi2}), it
initially admits a low, but clearly non-zero value, indicating that it rests
in a {\it "weak"} chaotic region. However, after a lot of iterations, the
orbit migrates from the {\it "weak"} chaotic region to a region with
{\it "stronger chaos"}, analogous to the region of the first chaotic
orbit. Accordingly, the value of $\psi^2$ climbs to higher levels, exhibitting
a behavior similar to that of the first chaotic orbit.
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figy2a_o.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figy2a_c.ps}}}
\vskip 0.1cm
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figy2a_c2.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figy2a_s.ps}}}
\caption{Same as figure \ref{fig:psi2}, but for $n = 64$.}
\label{fig:psi2a}
\end{figure*}
At this point, we need to stress that the potential values of $\psi^2$ depend
on the separation $n$ of the two datasets. The lower the value of $n$, the
closer the two datasets are and, hence, the lower their $\psi^2$ will
be. Note that for $n < N_S$ the two datasets have $N_S - n$ data points in
common. Nevertheless, even for $n = 64$ ($N_S = 3 \times 256$) the PSI method
gives quite accurate results, as can be readily seen in figure
\ref{fig:psi2a}.
As mentioned above, the value of $\psi^2(i)$ gives us a local (i.e.,
restricted to the region currently visited by the orbit) indication of how
strong the chaos may be. In order to attain a global indicator (i.e., one
that covers the whole area visited by an orbit), we should consider the
average value of $\psi^2$, defined as
\begin{equation}
\left \langle \psi^2 \right \rangle (i) = \frac{1}{i-N_S-n+1}{\sum_{j=N_S+n}^i \psi^2(j)} \: .
\label{eq:psi2ai}
\end{equation}
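\noindent The running mean of Eq. (\ref{eq:psi2ai}) admits a one-line sketch
(ours), given the $\psi^2$ values already computed:
\begin{verbatim}
def psi2_average(psi2_values):
    """Running mean <psi^2>(i), Eq. (psi2ai); psi2_values[0] is
    assumed to correspond to the iteration i = N_S + n."""
    return np.cumsum(psi2_values) / np.arange(1, len(psi2_values) + 1)
\end{verbatim}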
Figure \ref{fig:psi2m} shows the evolution of $\left \langle \psi^2 \right
\rangle$ for the four orbits considered throughout this article. Their
$\psi^2$ indicators are computed using a dataset separation of $n = 256$
iterations.
\begin{figure*}
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figy2m_o.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figy2m_c.ps}}}
\vskip 0.1cm
\resizebox{\hsize}{!}
{\rotatebox{270}{\includegraphics*{figy2m_c2.ps}}
\hspace{1cm}
\rotatebox{270}{\includegraphics*{figy2m_s.ps}}}
\caption{The evolution of the global indicator $\left \langle \psi^2 \right
\rangle (i)$, for $n = 256$, as regards the four orbits presented in figure
1. Top left, the ordered orbit originating at $J_0 = \pi $, $\theta_0 =
1.5~\pi$; Top right, the chaotic orbit originating at $J_0 = 1.3~\pi $,
  $\theta_0 = 1.5~\pi$; Bottom left, the second chaotic orbit originating at
  $J_0 = 1.1998~\pi $, $\theta_0 = 1.49~\pi$; Bottom right, the {\it "sticky"}
  orbit originating at $J_0 = \pi $, $\theta_0 = 1.538~\pi$.}
\label{fig:psi2m}
\end{figure*}
\section{Conclusions}
In the present paper we propose a new tool, called the Power Spectrum
Indicator (PSI), or $\psi^2$, that enables us to determine, as early as
possible, the chaotic nature of orbits in dynamical systems. This new method
is based on the method of Vozikis et al. (2000), i.e., on the frequency
analysis of a data series constructed by recording the logarithm of the
amplification factor of the deviation vector of nearby orbits. To this end,
two datasets are recorded and the $\chi^2$-likelihood of their power spectra
is computed. Ordered orbits always have the same power spectrum, so their
$\psi^2 \equiv \chi^2$ remains identically zero. On the contrary, a chaotic
orbit has a power spectrum that varies with time (equivalently, with the
number of iterations). Therefore, chaotic orbits always exhibit a non-zero
$\psi^2$ value; hence, by calculating the $\psi^2$ of an orbit, we can easily
decide whether it is chaotic or not. Even for {\it "sticky"} orbits, the PSI
method is very effective in the early detection of chaos. Finally, the global
behavior of the $\psi^2$ indicator can also provide information on the
intensity of the chaotic behavior, i.e., on how {\it "strong"} or
{\it "weak"} the associated chaos may be.
However, we need to stress that the aforementioned results refer to the case
where the system under study is a $2D$ mapping. Further investigation is
needed if the $\psi^2$ method is to be implemented in Hamiltonian flows as
well. As stated in the discussion of Vozikis et al. (2000), the difference
between maps and flows is that, in the case of flows, a detailed analysis
concerning the selection of the renormalization time is needed, since it may
significantly affect the corresponding results.
\ack{Financial support by the Research Committee of the Technological
Education Institute of Central Macedonia at Serres, Greece, under grant
SAT/CE/201217-aa/01, is gratefully acknowledged.}
\References
\item[] Aubry S and Abramovici G 1990 {\it Physica D} {\bf 43} 199
\item[] Benettin G, Galgani L and Strelcyn J M 1976 \PR A {\bf 14} 2338
\item[] Contopoulos G 1966 {\it Les Nouvelles M\'ethodes de la Dynamique
Stellaire} ed F Nahon and M H\'enon (Paris: CNRS)
\item[] Contopoulos G and Voglis N 1997 {\it Astron. Astroph.} {\bf 317} 73
\item[] H\'enon M and Heiles C 1964 \AJ {\bf 69} 73
\item[] Froeschl\'e C 1984 {\it Cel. Mech.} {\bf 34} 95
\item[] Froeschl\'e C, Froeschl\'e Ch and Lohinger E 1993 {\it
Cel. Mech. Dyn. Astron.} {\bf 51} 135
\item[] Froeschl\'{e} C, Lega E and Gonzi R 1997 {\it Cel. Mech. Dyn. Astron.}
{\bf 67} 41
\item[] Gelfreich V G 1999 {\it Commun. Math. Phys.} {\bf 201} 155
\item[] Ichikawa Y H, Kamimura T and Hatori T 1987 {\it Physica D} {\bf 29} 247
\item[] Karanis G and Vozikis Ch 2008 {\it Astron. Nachr.} {\bf 320} 403
\item[] Laskar J 1993 {\it Physica D} {\bf 67} 257
\item[] Laskar J, Froeschl\'e C and Celleti A 1992 {\it Physica D} {\bf 56}
253
\item[] Lazutkin V F 2005 {\it J. Math. Science} {\bf 128} 2687
\item[] Lichtenberg A J and Lieberman M A 1983 {\it Regular and stochastic
  motion} (New York: Springer) pp. 77-84
\item[] Press W H, Teukolsky S A, Vetterling W T and Flannery B P 1992
{\it Numerical Recipes in Fortran -- The Art of Scientific Computing 2nd
edn} (Cambridge: Cambridge University Press)
\item[] Skokos Ch 2001 \JPA {\bf 34} 10029
\item[] Skokos Ch, Antonopoulos C, Bountis A and Vrahatis M N 2003 {\it
Prog. Theor. Phys. Suppl.} {\bf 150} 439
\item[] Skokos Ch, Antonopoulos C, Bountis A and Vrahatis M N 2004 \JPA {\bf
37} 6269
\item[] Skokos Ch, Bountis A and Antonopoulos C G 2007 {\it Physica D} {\bf
231} 30
\item[] Skokos Ch and Manos T 2016 {\it Chaos Detection and Predictability
}({\it Lecture Notes in Physics vol 915}) ed Skokos Ch, Gottwald G and Laskar
J (Berlin, Heidelberg : Springer)
\item[] Voglis N and Contopoulos G 1994 \JPA {\bf 27} 4899
\item[] Voyatzis G and Ichtiaroglou S 1992 \JPA {\bf 25} 5931
\item[] Vozikis Ch L, Varvoglis H and Tsiganis K 2000 {\it Astron. Astrophys.}
{\bf 359} 386
\endrefs
\end{document}
| {
"timestamp": "2018-08-02T02:07:22",
"yymm": "1808",
"arxiv_id": "1808.00223",
"language": "en",
"url": "https://arxiv.org/abs/1808.00223",
"abstract": "To determine the regular or chaotic nature of the orbits in dynamical systems can be quite an issue. In this article, following Vozikis et al. (2000), we propose a new tool, namely, the Power Spectrum Indicator (PSI), $\\psi^2$, that enables us to determine, as early as posible, whether an orbit of a two-dimensional map is chaotic or not. This new method is based on the frequency analysis of a data series constucted by recording the logarithm of the amplification factor of the deviation vector of nearby orbits. Accordingly, two datasets are recorded and the $\\chi^2$-likelyhood of their power spectra is computed. Ordered orbits have always the same power spectrum, so their $\\chi^2 \\equiv \\psi^2$ acquires a zero value. On the contrary, a chaotic orbit has a power spectrum that varies with time, hence, chaotic orbits always exhibit a non-zero $\\psi^2$ value. Even as regards \"sticky\" orbits, the PSI method is very effective in the early detection of chaos, while the global behavior of the $\\psi^2$ indicator can provide information (also) on the intense of the chaotic behavior, i.e., on how \"strong\" or \"weak\" the associated chaos may be.",
"subjects": "Chaotic Dynamics (nlin.CD); Dynamical Systems (math.DS)",
"title": "The power spectrum indicator: A new, efficient method for the early detection of chaos",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854111860906,
"lm_q2_score": 0.7310585727705126,
"lm_q1q2_score": 0.708969938595368
} |
https://arxiv.org/abs/2007.05844 | Weak normality properties in $Ψ$-spaces | Almost disjoint families of true cardinality $\mathfrak{c}$ are used to produce an example of a mildly-normal not partly-normal $\Psi$-space and a quasi-normal not almost-normal $\Psi$-space. This is related with a problem posed by Lufti Kalantan where he asks whether there exists a mad family so that the related Mrówka-Isbell space is partly-normal. In addition, a consistent example of a Luzin mad family such that its associated $\Psi$-space is quasi-normal is provided. | \section{Introduction}
\noindent Mr\'owka-Isbell $\Psi$-spaces give a number of interesting counterexamples in many areas of topology, including normality and related covering properties (\cite{Mr}, \cite{GJ}). $\Psi$-spaces associated to maximal almost disjoint families are never normal. Weakenings of normality have been considered in the literature since the late 1960s and early 1970s; for instance, quasi-normal \cite{Za}, almost-normal \cite{AS}, mildly-normal \cite{Shch}, \cite{SS} and, more recently, $\pi$-normal \cite{K2008} and partly-normal \cite{K2017} spaces. In \cite{KS}, L. Kalantan and the second author prove that any product of ordinals is mildly-normal. Kalantan builds a $\Psi$-space which is not mildly-normal in \cite{K2002} and, in \cite{K2017}, using {\bf CH}, constructs a mad family so that the associated $\Psi$-space is quasi-normal.\\
\noindent Standard notation is followed and any undefined term can be found in \cite{E}. A subset $A$ of a topological space $X$ is called \emph{regularly closed} (also called a closed domain) if $A = \overline{int(A)}$ ($cl_X(A)$, or simply $cl(A)$, will also denote the closure of $A$ in the space $X$). A set $A$ will be called \emph{$\pi$-closed} if $A$ is a finite intersection of regularly closed sets. Two subsets $A$ and $B$ of a topological space $X$ are said to be \emph{separated} if there exist two disjoint open sets $U$ and $V$ of $X$ such that $A\subseteq U$ and $B \subseteq V$.
\newpage
\begin{definition}A regular space $X$ is called:
\begin{enumerate}
\item \emph{$\pi$-normal \cite{K2008}} if any two nonintersecting sets $A$ and $B$, where $A$ is closed and $B$ is $\pi$-closed, are separated.
\item \emph{almost-normal \cite{AS}} if any two nonintersecting sets $A$ and $B$, where $A$ is closed and $B$ is regularly closed, are separated.
\item \emph{quasi-normal \cite{Za}} if any two nonintersecting $\pi$-closed sets $A$ and $B$ are separated.
\item \emph{partly-normal \cite{K2017}} if any two nonintersecting sets $A$ and $B$, where $A$ is regularly closed and $B$ is $\pi$-closed, are separated.
\item \emph{mildly-normal (also called $\kappa$-normal), \cite{Shch} \cite{SS}} if any two nonintersecting regularly closed sets $A$ and $B$ are separated.
\end{enumerate}
\end{definition}
\noindent Since ``regular closed $\rightarrow$ $\pi$-closed $\rightarrow$ closed'' holds, it follows that normal spaces are $\pi$-normal and:
\begin{displaymath}
\begin{array}{ccccccccc}
& \nearrow & \textit{quasi-normal} & \searrow & & & \\
\textit{$\pi$-normal} & & & & \textit{partly-normal} & \rightarrow & \textit{mildly-normal.} \\
& \searrow & \textit{almost-normal} & \nearrow & & &
\end{array}
\end{displaymath}
\begin{proposition}
\label{PiandAlmostNormalCoincide}
Almost-normal spaces are $\pi$-normal.
\end{proposition}
\begin{proof}
Assume $X$ is an almost-normal space. For a positive integer $n$, call a set $n$-$\pi$-closed if it is the intersection of $n$ many regularly closed sets. We will show, by induction on $n$, that in $X$ every $n$-$\pi$-closed set can be separated from every closed set disjoint from it. This is enough to show that $X$ is $\pi$-normal.\\
\emph{Base case:} $n=1$. Since $X$ is almost-normal, every closed set $H$ and every $1$-$\pi$-closed set $K$ in $X$ such that $H \cap K= \emptyset$ can be separated ($K$ is a regularly closed set).
\emph{Inductive step:} Assume that for all $1\le i \le n$ if $H$ is closed, $K$ is $i$-$\pi$-closed in $X$ and, $H \cap K= \emptyset$, then $H$ and $K$ can be separated. Let $H \subset X$ be a closed set and let $K$ be an $(n+1)$-$\pi$-closed set such that $H \cap K = \emptyset$. Thus, $K = \bigcap _{0\le j \le n}K_j$, where each $K_j$ is a regular closed set in $X$. We show that $H$ and $K$ can be separated.\\
\noindent \emph{Case 1:} $H \cap (\bigcap _{j < n}K_j) = \emptyset$ ($H \cap K_n = \emptyset$).\\
Then, by the inductive hypothesis, we can find $U, V \subset X$ open such that $U \cap V =\emptyset$, $H \subseteq U$, $\bigcap _{j < n}K_j \subseteq V$ ($K_n \subseteq V$). Since $K \subseteq \bigcap _{j < n}K_j$ ($K \subseteq K_n$), $H$ and $K$ are separated by $U$ and $V$.\\
\noindent \emph{Case 2:} $H \cap (\bigcap _{j < n}K_j)\neq \emptyset \neq H \cap K_n$.\\
Given that $H \cap K = \emptyset$, $[H \cap (\bigcap _{j < n}K_j)] \cap K_n = \emptyset$. In addition, $H \cap (\bigcap _{j < n}K_j)$ is closed and non-empty, and $K_n$ is a regularly closed set; since $X$ is almost-normal, there are $U_n, V_n \subset X$ open such that $U_n \cap V_n = \emptyset$, $H \cap (\bigcap _{j < n}K_j) \subseteq U_n$, $K_n \subseteq V_n$.\\
\noindent Now, $H \smallsetminus U_n = H \cap (X \smallsetminus U_n)$ is closed, non-empty (since $H \cap K_n \subseteq H \smallsetminus U_n)$, and disjoint from $\bigcap _{j < n}K_j$, which is an $n$-$\pi$-closed set. Hence, by the inductive hypothesis, there are $U_K, V_K \subset X$ open such that $U_K \cap V_K = \emptyset$, $H \smallsetminus U_n \subseteq U_K$, $\bigcap _{j < n}K_j \subseteq V_K$. Let $U = U_n \cup U_K$, $V = V_n \cap V_K$.\\
\noindent \emph{Claim:} $U$ and $V$ are a separation of $H$ and $K$. \\
Assume there is $x \in U \cap V$, then $x \in U_n \cap V_n$ or $x \in U_K \cap V_K$, which is a contradiction. Thus, $U \cap V = \emptyset$. In addition, $H = (H \cap U_n) \cup (H \smallsetminus U_n) \subseteq U_n \cup U_K = U$ and $K = (\bigcap _{j < n}K_j) \cap K_n \subseteq V_K \cap V_n = V$. Hence, $H$ and $K$ are separated.\\
Therefore, for any closed set $H$ and for each $n$, if $K$ is $n$-$\pi$-closed and $H \cap K = \emptyset$, then $H$ and $K$ can be separated. Whence, $X$ is $\pi$-normal.
\end{proof}
\noindent Hence, the previous diagram is simplified as follows:
\begin{center}
\emph{almost-normal $\rightarrow$ quasi-normal $\rightarrow$ partly normal $\rightarrow$ mildly-normal}.
\end{center}
\noindent A family $\mathcal{A}$ of infinite subsets of $\omega$ is called an \emph{almost disjoint family} if and only if any two distinct members meet in a finite set (for each $a,b \in \mathcal{A}$, $a\neq b \rightarrow |a\cap b| < \omega$). All almost disjoint families considered here will be infinite.
\noindent Given an almost disjoint family $\mathcal{A}$, the Mr\'owka-Isbell $\Psi$-space $\Psi(\mathcal{A})$ is defined as follows: the underlying set is $\omega \cup \mathcal{A}$; if $n \in \omega$, then $\{n\}$ is open, and if $a\in \mathcal{A}$, then for any finite set $F \subset \omega$, $\{a\} \cup (a \setminus F)$ is a basic open neighbourhood of $a$. $\Psi(\mathcal{A})$ is a separable, first countable, zero-dimensional regular space. For a detailed survey on open problems and recent work on almost disjoint families and $\Psi$-spaces see \cite{HH}.
\begin{definition} Given an almost disjoint family $\mathcal{A}$,
\begin{itemize}[nosep]
\item If $B \subseteq \omega$, let $\mathcal{A}\upharpoonright _ B = \{a\in \mathcal{A}: |a \cap B| = \omega\}$.
\item $\mathcal{I}^+(\mathcal{A}) = \{B \subseteq \omega: |\mathcal{A}\upharpoonright _ B|\geq \omega\}$ is the family of big sets (the sets that have infinite intersection with infinitely many members of the family).
\item $\mathcal{I}(\mathcal{A}) = \{B \subseteq \omega: |\mathcal{A}\upharpoonright _ B|< \omega\}$, the family of small sets. This family forms an ideal.
\item $\mathcal{A}$ will be called \emph{completely separable \cite{GS}} if for each $B \in \mathcal{I}^+(\mathcal{A})$, there is some $a \in \mathcal{A}$ with $a \subseteq B$.
\item $\mathcal{A}$ will be called \emph{of true cardinality $\mathfrak{c}$ \cite{HS}} if for every $B \subseteq \omega$ either $\mathcal{A}\upharpoonright _ B $ is finite, or it has size $\mathfrak{c}$.
\item If $\Psi(\mathcal{A})$ is a normal space (almost-normal, quasi-normal, partly-normal, mildly-normal), it will be said that $\mathcal{A}$ is normal (almost-normal, quasi-normal, partly-normal, mildly-normal, respectively).
\end{itemize}
\end{definition}
\noindent The existence in ZFC of a completely separable mad family is an important open question that has many interesting consequences (see \cite{HS}). Completely separable almost disjoint (not maximal) families do exist in ZFC and also have interesting consequences (see \cite{GS}). It is not hard to show that if $\mathcal{A}$ is a completely separable almost disjoint family and $B \in \mathcal{I}^+(\mathcal{A})$, then $|\{a \in \mathcal{A}: a \subseteq {B}\}|= \mathfrak{c}$. This fact has the following consequence: if $\mathcal{A}$ is completely separable, then for any $B \subseteq \omega$, the set $\mathcal{A}\upharpoonright _ B$ is either finite or of size $\mathfrak{c}$. That is, every completely separable almost disjoint family is of true cardinality $\mathfrak{c}$, and therefore almost disjoint families of true cardinality $\mathfrak{c}$ exist in ZFC. Furthermore, every infinite almost disjoint family $\mathcal{A}$ of true cardinality $\mathfrak{c}$ has size $\mathfrak{c}$ and therefore $\mathcal{A}$ is not normal (as a consequence of Jones' Lemma). Actually, something slightly stronger holds:
\begin{observation}
\label{IfTrueCardcAandCtlbeComplementcannotseparate}
If $\mathcal{A}$ is an almost disjoint family of true cardinality $\mathfrak{c}$, then for all $\mathcal{C} \in [\mathcal{A}]^{\aleph _0}$, $\mathcal{C}$ and $\mathcal{A} \smallsetminus \mathcal{C}$ cannot be separated in $\Psi(\mathcal{A})$.
\end{observation}
\begin{proof}
Let $U$, $V$ be any open sets in $\Psi(\mathcal{A})$ so that $\mathcal{C} \subseteq U$, $\mathcal{A} \setminus \mathcal{C} \subseteq V$. Let $W = U \cap \omega$, then for all $c \in \mathcal{C}$, $c \subseteq ^* W$. Hence, $|\mathcal{A}\upharpoonright _ W|\geq \omega$. Thus, $|\mathcal{A}\upharpoonright _ W|= \mathfrak{c}$. Pick $a \in \mathcal{A}\setminus \mathcal{C}$ such that $|W\cap a| = \omega$. Since $a \subseteq ^* V \cap \omega$, $U \cap V \neq \emptyset$.
\end{proof}
\noindent The following observations are not hard to show and they will be used in various occasions in the next section.
\begin{observation}
\label{closureofsubsetsofomega}
Given any almost disjoint family $\mathcal{A}$, if $W \subseteq \omega$, then $cl_{\Psi(\mathcal{A})}(W)$ is a regular closed subset of $\Psi(\mathcal{A})$.
\end{observation}
\begin{observation}
\label{regclosedset}
Given any almost disjoint family $\mathcal{A}$, if $H \subset \Psi(\mathcal{A})$ is a regular closed set,
then for each $a \in \mathcal{A}$, $a \in H$ if and only if $|a \cap H| = \omega$.
\end{observation}
\begin{observation}
\label{closedsets}
Given any almost disjoint family $\mathcal{A}$ and $H, K \subset \Psi(\mathcal{A})$ such that $H$ and $K$ are closed sets, $H\cap K = \emptyset$ and $|H \cap \mathcal{A}| < \omega$, then $H$ and $K$ can be separated. In particular, for each closed set $H \subset \Psi(\mathcal{A})$ that has finite intersection with $\mathcal{A}$, $H$ and $\mathcal{A} \smallsetminus H$ can be separated.
\end{observation}
\section{Examples}
\noindent Example \ref{QuasiNormalNotAlmostNormalPsiSpace} provides a quasi-normal not almost-normal almost disjoint family $\mathcal{F}$ which is constructed from a particular non almost-normal almost disjoint family $\mathcal{A}$ of true cardinality $\mathfrak{c}$. Each element of $\mathcal{F}$ will be a finite union of elements of $\mathcal{A}$. In order to make $\mathcal{F}$ quasi-normal, all pairs of disjoint $\pi$-closed sets in $\Psi(\mathcal{F})$ have to be separated. By Observation \ref{closedsets}, the only pairs of $\pi$-closed sets $(A,B)$ that might be difficult to separate are the ones where $A \cap \mathcal{F}$ and $B \cap \mathcal{F}$ are infinite. Using the fact that $\mathcal{A}$ is of true cardinality $\mathfrak{c}$, it will be possible to build $\mathcal{F}$ so that all such pairs have a point in common. Thus, all pairs of disjoint $\pi$-closed sets in $\Psi(\mathcal{F})$ will be trivial, i.e., one of them will have finite intersection with $\mathcal{F}$. Hence, $\mathcal{F}$ will be quasi-normal. In addition, it won't be hard to carry out this construction so that the non almost-normality of $\mathcal{A}$ is preserved in $\mathcal{F}$. That is, a closed set $\mathcal{C}$ and a regular closed set $E$ with empty intersection that cannot be separated in $\Psi(\mathcal{A})$ will be transformed into a pair of witnesses of non almost-normality in $\Psi(\mathcal{F})$. Now, let us obtain the required non almost-normal almost disjoint family of true cardinality $\mathfrak{c}$.\\
\noindent The following example is an instance of a machine for converting two almost disjoint families of the same cardinality, into a single almost disjoint family $\mathcal{A}$ with a countable set $\mathcal{C} \subset \mathcal{A}$ and a set $E \subset \Psi(\mathcal{A})$ such that $\mathcal{C}$ is closed and $E$ is regular closed in $\Psi(\mathcal{A})$, $\mathcal{C} \cap E = \emptyset$ and $\mathcal{A} \subset \mathcal{C} \cup E$.
\begin{example}
\label{TrueCountableC}
There is an almost disjoint family $\mathcal{A}$ of true cardinality $\mathfrak{c}$ on $\omega$ so that there is $\mathcal{C} \in [\mathcal{A}]^\omega$ and $W \in [\omega]^\omega$, such that $cl_{\Psi(\mathcal{A})}(W) \cap \mathcal{A} = \mathcal{A} \smallsetminus \mathcal{C}$. In particular, there is a non almost-normal almost disjoint family of true cardinality $\mathfrak{c}$.
\end{example}
\begin{proof}
Partition $\omega$ into two infinite disjoint sets $V,W$. Let $\mathcal{A}_0, \mathcal{A}_1$ be almost disjoint families of true cardinality $\mathfrak{c}$ on $V$ and $W$, respectively, and let $\mathcal{C} \in [\mathcal{A}_0]^\omega$. Now, a new family is built as follows, let $\alpha: \mathcal{A}_0 \smallsetminus \mathcal{C} \leftrightarrow \mathcal{A}_1$ be a bijective function. Let $\mathcal{A} = \{a \cup \alpha(a): a \in \mathcal{A}_0 \smallsetminus \mathcal{C}\} \cup \mathcal{C}$.\\
Let us check that $\mathcal{A}$ is the desired family. Clearly, it is almost disjoint. To see that it has true cardinality $\mathfrak{c}$ let $M\subseteq \omega$ such that $|\mathcal{A}\upharpoonright _ M|\geq \omega$. Then, either $|\mathcal{C}\upharpoonright _ M|\geq \omega$ or $|(\mathcal{A} \setminus \mathcal{C})\upharpoonright _ M|\geq \omega$. Hence, $|\mathcal{A}_0\upharpoonright _ M|\geq \omega$ or $|\mathcal{A}_1\upharpoonright _ M|\geq \omega$. Therefore, $|\mathcal{A}_0\upharpoonright _ M| = \mathfrak{c}$ or $|\mathcal{A}_1\upharpoonright _ M|= \mathfrak{c}$. In any case, $|\mathcal{A}\upharpoonright _ M|= \mathfrak{c}$. Thus, $\mathcal{A}$ is of true cardinality $\mathfrak{c}$.\\
Now, $a \in cl_{\Psi(\mathcal{A})}(W) \cap \mathcal{A} \leftrightarrow a \in \mathcal{A} \wedge |a \cap W| = \omega \leftrightarrow a \in \mathcal{A} \wedge \big(\exists a_0 \in \mathcal{A}_0 \smallsetminus \mathcal{C}\,[a = a_0 \cup \alpha(a_0)]\big) \leftrightarrow a \in \mathcal{A} \smallsetminus \mathcal{C}$.\\
By Observation \ref{IfTrueCardcAandCtlbeComplementcannotseparate}, $\mathcal{A}$ is not almost-normal.
\end{proof}
\noindent If in the previous example we assume, in addition, that $\mathcal{A}_0, \mathcal{A}_1$ are mad families of the same cardinality, the resulting family $\mathcal{A}$ is mad as well: if $M \in [\omega]^\omega$, then $M$ has infinite intersection either with $V$ or with $W$; since $\mathcal{A}_0, \mathcal{A}_1$ are both mad, there is $a \in \mathcal{A}$ such that $|a \cap M| = \omega$. Hence, the following holds:
\begin{corollary}
\label{TrueCountableCandmad}
The existence of a mad family of true cardinality $\mathfrak{c}$ implies the existence of a mad family $\mathcal{A}$ of true cardinality $\mathfrak{c}$ on $\omega$ so that there is $\mathcal{C} \in [\mathcal{A}]^\omega$ and $W \in [\omega]^\omega$, such that $cl_{\Psi(\mathcal{A})}(W) \cap \mathcal{A} = \mathcal{A} \smallsetminus \mathcal{C}$. In particular, the existence of a mad family of true cardinality $\mathfrak{c}$ implies the existence of a non almost-normal mad family of true cardinality $\mathfrak{c}$.
\end{corollary}
\begin{example}
\label{QuasiNormalNotAlmostNormalPsiSpace}
There is a quasi-normal not almost-normal almost disjoint family of true cardinality $\mathfrak{c}$.
\end{example}
\begin{proof}
Let $\mathcal{A}$ be a not almost-normal almost disjoint family of true cardinality $\mathfrak{c}$ as in Example \ref{TrueCountableC}. Hence, let $\mathcal{C} \in [\mathcal{A}]^\omega$ and $W \in [\omega]^\omega$, with $|\omega \smallsetminus W| = \omega$, such that $cl_{\Psi(\mathcal{A})}(W) \cap \mathcal{A} = \mathcal{A} \smallsetminus \mathcal{C}$. Consider the family of finite subsets of $[\omega]^\omega$, $\mathcal{E} = \big[[\omega]^\omega\big]^{<\omega}$ and let $\mathcal{B} = \{\{ C,D\} \in [\mathcal{E}]^2: (\bigcap C) \cap (\bigcap D) = \emptyset\}$. Since $|\mathcal{B}| = \mathfrak{c}$, we can list it as $\mathcal{B} = \{\{ C_\alpha,D_\alpha \} : \alpha < \mathfrak{c}\}$. A sequence of finite sets $\mathcal{F}_\alpha \in [\mathcal{A}]^{<\omega}$ will be built recursively in $\mathfrak{c}$ many steps.\\
\noindent For $\alpha = 0$, consider $\{ C_0,D_0 \} \in \mathcal{B}$. If for each $C \in C_0$ and $D \in D_0$, $\mathcal{A}\upharpoonright _C$ and $\mathcal{A}\upharpoonright _D$ all have size $\mathfrak{c}$, then for each $C \in C_0$ and $D \in D_0$ pick $a_C, b_D \in \mathcal{A} \setminus \mathcal{C}$ such that $|a_C\cap C| = \omega = |b_D\cap D|$ and all the $a_C$'s and $b_D$'s are distinct ($|\{a_C, b_D: C \in C_0, D \in D_0\}| = |C_0|+ |D_0|$). Let $\mathcal{F}_0 = \{a_C, b_D: C \in C_0, D \in D_0\}$. If there is $C \in C_0$ (or $D \in D_0$) such that $\mathcal{A}\upharpoonright _C$ is finite ($\mathcal{A}\upharpoonright _D$ is finite), let $\mathcal{F}_0 = \emptyset$. Observe that these are the only two possibilities as $\mathcal{A}$ is of true cardinality $\mathfrak{c}$.\\
\noindent Now assume $0 < \alpha < \mathfrak{c}$ and that for each $\beta < \alpha$, $\mathcal{F}_\beta$ is either empty or a finite subset of $ \mathcal{A} \setminus (\mathcal{C} \cup \bigcup _{\gamma < \beta}\mathcal{F}_\gamma)$. Consider the pair $\{ C_\alpha,D_\alpha \}$. If for each $C \in C_\alpha$ and $D \in D_\alpha$, $\mathcal{A}\upharpoonright _C$ and $\mathcal{A}\upharpoonright _D$ all have size $\mathfrak{c}$, then for each $C \in C_\alpha$ and $D \in D_\alpha$ pick $a_C, b_D \in \mathcal{A} \setminus (\mathcal{C} \cup \bigcup _{\beta < \alpha}\mathcal{F}_\beta)$ such that $|a_C\cap C| = \omega = |b_D\cap D|$ and all the $a_C$'s and $b_D$'s are distinct ($|\{a_C, b_D: C \in C_\alpha, D \in D_\alpha\}| = |C_\alpha|+ |D_\alpha|$). Let $\mathcal{F}_{\alpha} = \{a_C, b_D: C \in C_\alpha, D \in D_\alpha\}$. If there is $C \in C_\alpha$ (or $D \in D_\alpha$) such that $\mathcal{A}\upharpoonright _C$ is finite ($\mathcal{A}\upharpoonright _D$ is finite), let $\mathcal{F}_{\alpha} = \emptyset$. Let
$$\mathcal{F} = \big \{\bigcup \mathcal{F}_\alpha:\alpha < \mathfrak{c} \big \} \cup \big(\mathcal{A} \setminus \bigcup_{\alpha < \mathfrak{c}} \mathcal{F}_\alpha \big).$$
\noindent Since each $a \in \mathcal{F}$ is either an element of $\mathcal{A}$ or a finite union of elements of $\mathcal{A}$, it is clear that $\mathcal{F}$ is an almost disjoint family of true cardinality $\mathfrak{c}$.\\
\noindent {\bf Claim:} $\Psi(\mathcal{F})$ is quasi-normal.
\noindent Let $A \neq \emptyset \neq B$ be disjoint $\pi$-closed subsets of $\Psi(\mathcal{F})$. Write $A = \bigcap_{i=1}^nA_i$ and $B = \bigcap_{j=1}^mB_j$, where each $A_i$ and each $B_j$ is a regular closed set. It can be assumed that for each $i \leq n$ and for each $j \leq m$, $|A_i \cap \omega| = \omega = |B_j \cap \omega|$. Let $\alpha < \mathfrak{c}$ be minimal such that $C_\alpha = \{A_i \cap \omega:i\leq n\}$ and $D_\alpha = \{B_j \cap \omega:j\leq m\}$.\\
At stage $\alpha$, either $\mathcal{F}_{\alpha} = \emptyset$ or $\mathcal{F}_{\alpha} = \{a_C, b_D: C \in C_\alpha, D \in D_\alpha\}$. The latter is not possible since for each $C \in C_\alpha$ and each $D \in D_\alpha$ the $a_C$'s and $b_D$'s were chosen so that $|a_C \cap C| = \omega = |b_D \cap D|$ and this implies $\bigcup \mathcal{F}_{\alpha}$ is in the closure of each $C \in C_\alpha$ and each $D \in D_\alpha$ (see Observation \ref{closureofsubsetsofomega} and Observation \ref{regclosedset}). Hence $\bigcup \mathcal{F}_{\alpha} \in A \cap B$, but it is assumed that $A$ and $B$ are disjoint.\\
\noindent Thus, $\mathcal{F}_{\alpha} = \emptyset$. This means that there exists $C \in C_\alpha$ such that $\mathcal{A}\upharpoonright _C = H$ for some finite set $H$ (or there exists $D \in D_\alpha$ such that $\mathcal{A}\upharpoonright _D = H$ for some finite set $H$). Without loss of generality assume there exists such a $C \in C_\alpha$. Hence, $\mathcal{A} \upharpoonright _C = H_0$ for some finite set $H_0$. Observe that, since each $a \in \mathcal{F}$ is either an element of $\mathcal{A}$ or a finite union of elements of $\mathcal{A}$, we have $\mathcal{F} \upharpoonright _C = H_1$ for some finite $H_1$ with $|H_1| \leq |H_0|$. Now fix $i \leq n$ such that $A_i \cap \omega = C$. Since $A_i$ is regular closed, by Observation \ref{regclosedset}, $A_i \cap \mathcal{F} = H_1$. Thus, $A \cap \mathcal{F} \subseteq H_1$ and, by Observation \ref{closedsets}, $A$ and $B$ can be separated. Therefore $\Psi(\mathcal{F})$ is quasi-normal.\\
\noindent {\bf Claim:} $\Psi(\mathcal{F})$ is not almost-normal.\\
Fix $a \in \mathcal{F} \smallsetminus \mathcal{C}$, then $a \in \mathcal{A} \smallsetminus \mathcal{C}$ or $a$ is a finite union of elements of $\mathcal{A} \smallsetminus \mathcal{C}$. Since $cl_{\Psi(\mathcal{A})}(W) \cap \mathcal{A} = \mathcal{A} \smallsetminus \mathcal{C}$, $|W \cap a| = \omega$. Hence, $a \in cl_{\Psi(\mathcal{F})}(W)$, i.e., $\mathcal{F} \smallsetminus \mathcal{C} \subseteq cl_{\Psi(\mathcal{F})}(W)$. On the other hand, if $c \in \mathcal{C}$, $c \notin cl_{\Psi(\mathcal{A})}(W)$, thus $|c \cap W| < \omega$ and therefore $c \notin cl_{\Psi(\mathcal{F})}(W)$. \\
Hence, $\mathcal{C}$ is a closed set, $cl_{\Psi(\mathcal{F})}(W)$ is a regular closed set, they do not intersect and by Observation \ref{IfTrueCardcAandCtlbeComplementcannotseparate} they cannot be separated.
\end{proof}
\noindent If in the construction of Example \ref{QuasiNormalNotAlmostNormalPsiSpace}, a mad family as in Corollary \ref{TrueCountableCandmad} is chosen, then the resulting family $\mathcal{F}$ is mad, quasi-normal and not almost-normal. Thus:
\begin{corollary}
\label{TruemadQuasiNotAlmNor}
The existence of a mad family of true cardinality $\mathfrak{c}$ implies the existence of a quasi-normal, non almost-normal mad family of true cardinality $\mathfrak{c}$.
\end{corollary}
\noindent The following example provides a mildly-normal not partly-normal almost disjoint family $\mathcal{F}$ of true cardinality $\mathfrak{c}$, which is constructed using three almost disjoint families of true cardinality $\mathfrak{c}$. In order to make $\mathcal{F}$ mildly-normal, all pairs of disjoint regular closed sets in $\Psi(\mathcal{F})$ have to be separated. A similar approach to that of Example \ref{QuasiNormalNotAlmostNormalPsiSpace} is followed. It will be possible to build $\mathcal{F}$ so that all pairs of disjoint regular closed sets in $\Psi(\mathcal{F})$ will be trivial, i.e., one of them will have finite intersection with $\mathcal{F}$ (Observation \ref{closedsets} guarantees they can be separated). To make $\mathcal{F}$ not partly-normal, there will be a regular closed set $A$ disjoint from a $\pi$-closed set $B$ such that $A$ and $B$ cannot be separated. The basic idea is to partition $\omega$ into three infinite sets, $W$, $V_0$, $V_1$, take an almost disjoint family of true cardinality $\mathfrak{c}$ on each one of them (the property of true cardinality $\mathfrak{c}$ is used to make $\mathcal{F}$ mildly-normal), and build $\mathcal{F}$ so that in $\Psi(\mathcal{F})$, $A = cl_{\Psi(\mathcal{F})}(W)$ and
$B = cl_{\Psi(\mathcal{F})}(V_0) \cap cl_{\Psi(\mathcal{F})}(V_1)$ are disjoint but cannot be separated.
\begin{example}
\label{mildlynotpartlyNormal}
There exists a mildly-normal not partly-normal almost disjoint family of true cardinality $\mathfrak{c}$.
\end{example}
\begin{proof}
Partition $\omega$ into three disjoint infinite pieces, that is $W, V_0, V_1 \in [\omega]^{\omega}$ and $W \cup V_0 \cup V_1 = \omega$. If $Y \in \{W, V_0, V_1\}$ let $\mathcal{A}_Y$ be an almost disjoint family of true cardinality $\mathfrak{c}$ on $Y$. List all pairs of infinite subsets of $\omega$ with empty intersection as $\{ \{ C_\alpha , D_\alpha \}: \alpha < \mathfrak{c} \}$. A sequence of finite sets $\mathcal{F}_\alpha \subset \mathcal{A}_{W} \cup \mathcal{A}_{V_0} \cup \mathcal{A}_{V_1}$ will be built recursively in $\mathfrak{c}$ many steps.\\
\noindent Fix $\alpha < \mathfrak{c}$ and assume that for each $\beta < \alpha$ a possibly empty finite set $\mathcal{F}_\beta \subset (\mathcal{A}_{W} \cup \mathcal{A}_{V_0} \cup \mathcal{A}_{V_1}) \setminus \bigcup _{\gamma < \beta}\mathcal{F}_\gamma$ has been defined such that either $\mathcal{F}_\beta \subset \mathcal{A}_{W}$ or $\mathcal{F}_\beta$ has nonempty intersection with exactly two elements of $\{\mathcal{A}_{W}, \mathcal{A}_{V_0}, \mathcal{A}_{V_1}\}$. Consider $\{C_\alpha, D_\alpha\}$.\\
\noindent {\bf Case 1:} Either all three sets $\mathcal{A}_{W}\upharpoonright _{C_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{C_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{C_\alpha}$ are finite, or all three sets $\mathcal{A}_{W}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{D_\alpha}$ are finite. In this case, let $\mathcal{F}_{\alpha} = \emptyset$.\\
\noindent {\bf Case 2:} Case 1 is false. That is (given that $\mathcal{A}_{W}$, $\mathcal{A}_{V_0}$, $\mathcal{A}_{V_1}$ are of true cardinality $\mathfrak{c}$): at least one of the three sets $\mathcal{A}_{W}\upharpoonright _{C_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{C_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{C_\alpha}$ has size $\mathfrak{c}$ and at least one of the three sets $\mathcal{A}_{W}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{D_\alpha}$ has size $\mathfrak{c}$. Choose the smallest $i$ such that Subcase $2.i$ (below) holds, define $\mathcal{F}_{\alpha}$ accordingly, and ignore the other subcases.\\
\noindent {\bf Subcase 2.1:} $|\mathcal{A}_{W}\upharpoonright _{C_\alpha}| = \mathfrak{c}=|\mathcal{A}_{W}\upharpoonright _{D_\alpha}|$. Pick $c_\alpha, d_\alpha \in \mathcal{A}_{W}\setminus \bigcup _{\beta < \alpha}\mathcal{F}_\beta$ such that $c_\alpha \neq d_\alpha$ and $|c_\alpha \cap C_\alpha| = \omega = |d_\alpha \cap D_\alpha|$. Let $\mathcal{F}_{\alpha} = \{c_\alpha, d_\alpha\}$.\\
\noindent {\bf Subcase 2.2:} There exists $i \in \{0,1\}$ so that $|\mathcal{A}_{V_i}\upharpoonright _{C_\alpha}| = \mathfrak{c} = |\mathcal{A}_{V_i}\upharpoonright _{D_\alpha}|$. Pick $c_\alpha, d_\alpha \in \mathcal{A}_{V_i}\setminus \bigcup _{\beta < \alpha}\mathcal{F}_\beta$, such that $c_\alpha \neq d_\alpha$ and $|c_\alpha \cap C_\alpha| = \omega = |d_\alpha \cap D_\alpha|$. In addition, pick $e_\alpha \in \mathcal{A}_{V_{1-i}}\setminus\bigcup _{\beta < \alpha}\mathcal{F}_\beta$. Let $\mathcal{F}_{\alpha} = \{c_\alpha, d_\alpha, e_\alpha\}$.\\
\noindent {\bf Subcase 2.3:} $|\mathcal{A}_{V_0}\upharpoonright _{C_\alpha}| = \mathfrak{c}=|\mathcal{A}_{V_1}\upharpoonright _{D_\alpha}|$. Pick $c_\alpha \in \mathcal{A}_{V_0}\setminus \bigcup _{\beta < \alpha}\mathcal{F}_\beta$ and $d_\alpha \in \mathcal{A}_{V_1}\setminus \bigcup _{\beta < \alpha}\mathcal{F}_\beta$ such that $|c_\alpha \cap C_\alpha| = \omega = |d_\alpha \cap D_\alpha|$ and let $\mathcal{F}_{\alpha} = \{c_\alpha, d_\alpha\}$.\\
\noindent {\bf Subcase 2.4:} $|\mathcal{A}_W\upharpoonright _{C_\alpha}| = \mathfrak{c}$ and there exists $i \in \{0,1\}$ so that $|\mathcal{A}_{V_i}\upharpoonright _{D_\alpha}|= \mathfrak{c}$. Pick $c_\alpha \in \mathcal{A}_W\setminus \bigcup _{\beta < \alpha}\mathcal{F}_\beta$ and $d_\alpha \in \mathcal{A}_{V_i}\setminus \bigcup _{\beta < \alpha}\mathcal{F}_\beta$ such that $|c_\alpha \cap C_\alpha| = \omega = |d_\alpha \cap D_\alpha|$ and let $\mathcal{F}_{\alpha} = \{c_\alpha, d_\alpha\}$.\\
\noindent This finishes Case 2 and the construction of $\mathcal{F}_\alpha$ for $\alpha < \mathfrak{c}$. Let
$$\mathcal{F} = \big \{\bigcup \mathcal{F}_\alpha:\alpha < \mathfrak{c} \big \} \cup \big((\mathcal{A}_{W} \cup \mathcal{A}_{V_0} \cup \mathcal{A}_{V_1}) \setminus \bigcup_{\alpha < \mathfrak{c}} \mathcal{F}_\alpha \big).$$
\noindent It will be shown that $\mathcal{F}$ is the desired almost disjoint family. Since each $a \in \mathcal{F}$ is either an element of $\mathcal{A}_{W} \cup \mathcal{A}_{V_0} \cup \mathcal{A}_{V_1}$ or a finite union of such elements, and each of $\mathcal{A}_{W}$, $\mathcal{A}_{V_0}$ and $\mathcal{A}_{V_1}$ is of true cardinality $\mathfrak{c}$, it follows that $\mathcal{F}$ is an almost disjoint family of true cardinality $\mathfrak{c}$.\\
\noindent {\bf $\Psi(\mathcal{F})$ is not partly-normal:}\\
\noindent Let $A = cl_{\Psi(\mathcal{F})}(W)$ and
$B = cl_{\Psi(\mathcal{F})}(V_0) \cap cl_{\Psi(\mathcal{F})}(V_1)$.
By Observation \ref{closureofsubsetsofomega}, $A$ is regular closed and $B$ is a $\pi$-closed set. Observe that, since $\mathcal{A}_{V_0}$ and $\mathcal{A}_{V_1}$ are of true cardinality $\mathfrak{c}$, there are infinitely many pairs $\{C_\alpha,D_\alpha\}$ such that $C_\alpha \subset V_0$, $D_\alpha \subset V_1$, and $|\mathcal{A}_{V_0} \upharpoonright _{C_\alpha}| = \mathfrak{c} = |\mathcal{A}_{V_1} \upharpoonright _{D_\alpha}|$. For such pairs Subcase 2.3 applies and therefore $|B \cap \mathcal{F}| \ge \omega$. In addition, $A \cap B = \emptyset$: assume there is $a \in A \cap B$. Since $V_0 \cap V_1 = \emptyset$, $B \cap \omega = \emptyset$, hence $a \in \mathcal{F} \cap A \cap B$. By Observation \ref{regclosedset}, $|a \cap W|= |a \cap V_0|= |a \cap V_1| = \omega$. This implies that $a \notin \mathcal{A}_W \cup \mathcal{A}_{V_0} \cup\mathcal{A}_{V_1}$. There is $\alpha < \mathfrak{c}$ such that $a = \bigcup \mathcal{F}_{\alpha}$, but, by the construction, $\mathcal{F}_{\alpha} \subset \mathcal{A}_W$ or $\mathcal{F}_{\alpha}$ intersects exactly two elements of $\{\mathcal{A}_W, \mathcal{A}_{V_0},\mathcal{A}_{V_1}\}$, which contradicts the fact that $a$ has infinite intersection with $W$, $V_0$ and $V_1$. Whence, $A\cap B = \emptyset$.\\
\noindent It remains to show that $A$ and $B$ cannot be separated. Assume, on the contrary, that there are $S, T \subseteq \Psi(\mathcal{F})$ open such that $A \subseteq S$, $B \subseteq T$ and $S \cap T = \emptyset$. Let $\alpha < \mathfrak{c}$ such that $C_\alpha = \omega \cap S$ and $D_\alpha = \omega \cap T$. For the pair $\{C_\alpha,D_\alpha\}$, either Case 1 or Case 2 of the construction holds.\\
\noindent {\it If Case 1 holds:} since $W \subseteq C_\alpha$, $\mathcal{A}_{W}\upharpoonright _{C_\alpha}$ is not finite. Hence, $\mathcal{A}_{W}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{D_\alpha}$ are finite. Thus, $\mathcal{F}\upharpoonright _{D_\alpha}$ is finite. Since $cl_{\Psi(\mathcal{F})}(D_\alpha)$ is regular closed and $\mathcal{F}\upharpoonright _{D_\alpha}$ is finite, by Observation \ref{regclosedset}, $\mathcal{F} \cap cl_{\Psi(\mathcal{F})}(D_\alpha)$ is finite. Now, $T$ is open and $D_\alpha = \omega \cap T$, therefore $T \subseteq cl_{\Psi(\mathcal{F})}(D_\alpha)$. Hence, $\mathcal{F} \cap T$ is finite. Given that $|B \cap \mathcal{F}| \ge \omega$, $B \not \subseteq T$, which is a contradiction.\\
\noindent {\it If Case 2 holds:} Either $\mathcal{F}_{\alpha} \subset \mathcal{A}_W$ or $\mathcal{F}_{\alpha}$ intersects exactly two elements of $\{\mathcal{A}_W, \mathcal{A}_{V_0},\mathcal{A}_{V_1}\}$. In any case $\bigcup \mathcal{F}_{\alpha}$ is an element of $A$ or $B$. In addition, there exist $c_\alpha, d_\alpha \in \mathcal{F}_{\alpha}$ such that $|c_\alpha \cap C_\alpha| = \omega = |d_\alpha \cap D_\alpha|$. If $\bigcup \mathcal{F}_{\alpha} \in A$, then for each open neighbourhood $U$ of $\bigcup \mathcal{F}_{\alpha}$, $U \cap T \neq \emptyset$ (which implies $U \not \subseteq S$), and this contradicts that $S$ is open. We reach a similar contradiction if $\bigcup \mathcal{F}_{\alpha} \in B$. Hence, $A$ and $B$ cannot be separated.\\
\noindent {\bf $\Psi(\mathcal{F})$ is mildly-normal:}\\
\noindent Let $C \neq \emptyset \neq D$ be disjoint regular closed subsets of $\Psi(\mathcal{F})$. It can be assumed that $|C \cap \omega| = \omega = |D \cap \omega|$. Fix $\alpha < \mathfrak{c}$ such that $C \cap \omega = C_\alpha$ and $D \cap \omega = D_\alpha$. For the pair $\{C_\alpha,D_\alpha\}$, either Case 1 or Case 2 holds. If Case 2 holds, there exist $c_\alpha, d_\alpha \in \mathcal{F}_{\alpha}$ such that $|c_\alpha \cap C_\alpha| = \omega = |d_\alpha \cap D_\alpha|$. Thus, $\bigcup \mathcal{F}_{\alpha} \in cl_{\Psi(\mathcal{F})}(C_\alpha) \cap cl_{\Psi(\mathcal{F})}(D_\alpha)$ $\subseteq cl_{\Psi(\mathcal{F})}(C) \cap cl_{\Psi(\mathcal{F})}(D) = C \cap D$. This contradicts $C \cap D = \emptyset$.\\
Thus, Case 1 holds. This means that all three sets $\mathcal{A}_{W}\upharpoonright _{C_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{C_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{C_\alpha}$ are finite, or all three sets $\mathcal{A}_{W}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_0}\upharpoonright _{D_\alpha}$, $\mathcal{A}_{V_1}\upharpoonright _{D_\alpha}$ are finite.\\
Without loss of generality, assume the former. This implies that $\mathcal{F}\upharpoonright _{C_\alpha}$ is finite. Given that $C$ is a regular closed set and $C_\alpha = C\cap \omega$, by Observation \ref{regclosedset} $C \cap \mathcal{F}$ is finite and by Observation \ref{closedsets}, $C$ and $D$ can be separated. Therefore $\Psi(\mathcal{F})$ is mildly-normal.
\end{proof}
\noindent Observe that if, in the construction of Example \ref{mildlynotpartlyNormal}, the families $\mathcal{A}_{W}$, $\mathcal{A}_{V_0}$ and $\mathcal{A}_{V_1}$ are mad families of true cardinality $\mathfrak{c}$, then the family $\mathcal{F}$ is mad as well. Therefore:
\begin{corollary}
\label{TruemadMildlyNotPartly}
If there exists a mad family of true cardinality $\mathfrak{c}$, then there is a mildly-normal, not partly-normal mad family of true cardinality $\mathfrak{c}$.
\end{corollary}
\begin{definition}
\label{nPartlyNormal}
For a positive $n\in \omega$, a regular space will be called \emph{$n$-partly-normal} if any two nonintersecting sets $A$ and $B$, where $A$ is regular closed and $B$ is the intersection of at most $n$ regular closed sets, can be separated.
\end{definition}
\noindent Observe that $1$-partly-normal coincides with mildly-normal and that, for each positive $n\in \omega$, partly-normal $\rightarrow$ $(n+1)$-partly-normal $\rightarrow$ $n$-partly-normal $\rightarrow$ mildly-normal (indeed, an intersection of at most $n$ regular closed sets is in particular an intersection of at most $n+1$ of them, as one may repeat a set, and every such intersection is $\pi$-closed). It is possible to extend the idea in Example \ref{mildlynotpartlyNormal} (partition $\omega$ into $n+2$ pairwise disjoint infinite pieces, take an almost disjoint family of true cardinality $\mathfrak{c}$ on each piece, and let $\{\mathbb{C}_\alpha:\alpha < \mathfrak{c}\}$ list all sets $\mathbb{C} \subset [\omega]^\omega$ such that $2 \le |\mathbb{C}| \le n+1$) to show the following:
\begin{theorem}
\label{npartlyNormalnotn+1}
For each positive $n\in \omega$, there exists an $n$-partly-normal, not $(n+1)$-partly-normal, almost disjoint family of true cardinality $\mathfrak{c}$.
\end{theorem}
\noindent Similarly to Corollary \ref{TruemadMildlyNotPartly}, the following also holds:
\begin{corollary}
\label{TruemadNPartlyNotn+1}
If there exists a mad family of true cardinality $\mathfrak{c}$, then for each positive $n\in \omega$, there is an $n$-partly-normal, not $(n+1)$-partly-normal, mad family of true cardinality $\mathfrak{c}$.
\end{corollary}
\noindent Corollary \ref{TruemadQuasiNotAlmNor} says, in particular, that there is a quasi-normal mad family, provided there is a completely separable mad family. Our next example shows that, assuming {\bf CH}, not only does a quasi-normal mad family exist, but there is one that is also Luzin. Recall that an almost disjoint family is \emph{Luzin} if it can be enumerated as $\{a_\alpha:\alpha <\omega_1\}$ so that for each $\alpha < \omega_1$ and each $n \in \omega$, $\{\beta < \alpha : a_\alpha \cap a_\beta \subseteq n \}$ is finite. Luzin introduced this kind of almost disjoint family in \cite{Lu} to provide an example of an almost disjoint family $\mathcal{A}$ such that no pair of uncountable subfamilies of $\mathcal{A}$ has a \emph{separation} (two subfamilies $\mathcal{B}$ and $\mathcal{C}$ of $\mathcal{A}$ are said to have a separation if there is $X \subseteq \omega$ such that for each $b \in \mathcal{B}$, $b \subseteq ^* X$ and for each $c \in \mathcal{C}$, $c \cap X =^* \emptyset$). Thus, Luzin families are far from being normal. No mad family is normal, no Luzin family is normal, and yet there is, consistently, a quasi-normal Luzin mad family.
\begin{example}[CH]
\label{MadLuzinQuasi}
There is a Luzin mad family $\mathcal{A}$ which is quasi-normal.
\end{example}
\begin{proof}
The standard construction of a Luzin family is modified to build a family $\mathcal{A}$ with the following extra property: for each $X \subseteq \omega$, either $X$ is covered by finitely many elements of $\mathcal{A}$, or the set of elements of $\mathcal{A}$ that have finite intersection with $X$ is countable.\\
The idea is to use {\bf CH} to list all infinite subsets $X_\alpha \subseteq \omega$, with $\alpha < \omega_1$, and, at stage $\alpha < \omega_1$ of the construction of the family, either to cover $X_\alpha$ by the $\alpha$-th element of the family together with finitely many previously constructed elements or, if $X_\alpha$ has infinite intersection with infinitely many elements of the family constructed so far, to guarantee that, from that stage on, all elements of the family have infinite intersection with $X_\alpha$.\\
Partition $\omega$ into infinite pairwise disjoint subsets $a_i$, with $i \in \omega$, that is $\omega = \bigcup _{i\in\omega}a_i$, and $i \neq j $ implies $a_i \cap a_j = \emptyset$. List all infinite subsets of $\omega$ as $[\omega]^\omega = \{X_\alpha:\alpha < \omega_1\}$ such that for each $n \in \omega$, $X_n = a_n$. If $\alpha$ is such that $\omega \leq \alpha < \omega_1$, recursively assume we have constructed $a_\beta$ for $\beta < \alpha$ such that $\{a_\beta: \beta < \alpha\}$ is an almost disjoint family and for each $\beta < \alpha$, $X_\beta$ is covered by finitely many elements of $\{a_\gamma: \gamma \le \beta\}$ or for each $\beta \le \gamma < \alpha$, $|X_\beta \cap a_\gamma| = \omega$.\\
The $\alpha$-th element of the family will be constructed. Reenumerate the sets $\mathcal{A}_\alpha = \{a_\beta : \beta < \alpha\}$ and $J_\alpha = \{X_\beta : \beta \leq \alpha\}$ as $\mathcal{A}_\alpha = \{a^\alpha_n : n \in \omega\}$ and $J_\alpha = \{X^\alpha_n : n \in \omega\}$. Let $I_\alpha = \{n \in \omega: X_n^\alpha \in \mathcal{I}^+(\mathcal{A}_\alpha)\}$.\\
There are two cases: either $X_\alpha \in \mathcal{I}^+(\mathcal{A}_\alpha)$ or $X_\alpha \notin \mathcal{I}^+(\mathcal{A}_\alpha)$. We will construct $a_\alpha$ depending on whether, at this stage, $I_\alpha$ is still empty or not.\\
If $I_\alpha = \emptyset$ (observe that in particular $X_\alpha \notin \mathcal{I}^+(\mathcal{A}_\alpha)$), let $p_n^\alpha \subseteq a_n^\alpha \setminus \bigcup _{i < n}a_i^\alpha$ be such that $|p_n^\alpha| = n$, and let $a_\alpha = \bigcup _{n \in \omega} p_n^\alpha \cup (X_\alpha \setminus\bigcup (\mathcal{A}_\alpha \upharpoonright _{X_\alpha}))$.\\
If $I_\alpha \neq \emptyset$, let $\{Y_n:n\in \omega\}$ list all $X_n^\alpha$ such that $n \in I_\alpha$, in such a way that not only does each such $X_n^\alpha$ appear infinitely often, but also, for each $n \in I_\alpha$ and each $m \in \omega$, there is some $s \geq m$ such that $Y_s = X_n^\alpha$ and $|a_s^\alpha \cap Y_s| = \omega$. For $n \in \omega$, if $|a_n^\alpha \cap Y_n| < \omega$, let $p_n^\alpha \subseteq a_n^\alpha \setminus \bigcup _{i < n}a_i^\alpha$ be such that $|p_n^\alpha| = n$; if $|a_n^\alpha \cap Y_n| = \omega$, let $p_n^\alpha \subseteq (a_n^\alpha \setminus \bigcup _{i < n}a_i^\alpha) \cap Y_n$ be such that $|p_n^\alpha|=n$. Let $a_\alpha = \bigcup_{n \in \omega}p_n^\alpha \cup (X_\alpha \setminus\bigcup (\mathcal{A}_\alpha \upharpoonright _{X_\alpha}))$. Observe that if $X_\alpha \notin \mathcal{I}^+(\mathcal{A}_\alpha)$, then the construction of $a_\alpha$ guarantees that $X_\alpha$ is covered by finitely many elements of $\mathcal{A}_\alpha \cup \{a_\alpha\}$. On the other hand, if $X_\alpha \in \mathcal{I}^+(\mathcal{A}_\alpha)$, then $X_\alpha$ appears infinitely often in $\{Y_n:n\in \omega\}$; thus, it has infinite intersection with $a_\alpha$, and it will have infinite intersection with $a_\beta$ for each $\beta > \alpha$.
\noindent Finally, let $\mathcal{A} = \{a_\alpha:\alpha < \omega_1\}$. The construction guarantees that $\mathcal{A}$ is Luzin: let $\alpha \in \omega_1$ and $n \in \omega$. Recall that $\mathcal{A}_\alpha = \{a_\beta : \beta < \alpha\} = \{a_m^\alpha: m \in \omega\}$ and for each $m \ge n$, $p_m^\alpha \subseteq a_\alpha \cap a_m^\alpha$ and $|p_m^\alpha| = m \ge n$. Hence, $\{\beta < \alpha: a_\beta\cap a_\alpha \subseteq n\}$ is finite. Let us verify that it is mad. Let $\alpha, \beta \in \omega_1$ such that $\beta < \alpha$. There is $n \in \omega$ with $a_\beta = a_n^\alpha$. Observe that for $i\leq n$, $p_i ^\alpha$ is finite, for $i >n$, $p_i^\alpha \cap a_\beta = \emptyset$ and, $ (X_\alpha \setminus\bigcup (\mathcal{A}_\alpha \upharpoonright _{X_\alpha})) \cap a_\beta$ is finite. Hence, $a_\beta \cap a_\alpha$ is finite. Now, let $X \in [\omega]^\omega$ and $\alpha < \omega_1$ such that $X = X_\alpha$. Either $X \notin \mathcal{I}^+(\mathcal{A}_\alpha)$, in which case $X$ is covered by finitely many elements of $\mathcal{A}_\alpha \cup \{a_\alpha\}$ (i.e. $X$ has infinite intersection with some element of $\mathcal{A}$), or $X \in \mathcal{I}^+(\mathcal{A}_\alpha)$, in which case for each $\gamma > \alpha$, $|X \cap a_\gamma| = \omega$. Thus, $\mathcal{A}$ is mad and it has the desired property. \\
\noindent Let us show that $\mathcal{A}$ is quasi-normal. Let $A, B \subseteq \Psi(\mathcal{A})$ be $\pi$-closed sets with $A \cap B = \emptyset$. Thus, $A = \bigcap _{i<n}A_i$ and $B = \bigcap _{j<m}B_j$, where the $A_i$, $B_j$ are regular closed subsets of $\Psi(\mathcal{A})$ for $i<n$ and $j<m$. Assume that for each $i<n$ and each $j<m$, $|A_i \cap \mathcal{A}| \ge \omega$ and $|B_j \cap \mathcal{A}| \ge \omega$. Hence, for each $i<n$ and each $j<m$, $A_i \cap \omega \in \mathcal{I}^+(\mathcal{A})$ and $B_j \cap \omega \in \mathcal{I}^+(\mathcal{A})$. By the construction of $\mathcal{A}$, for each $i<n$ and each $j<m$ the sets $\{a \in \mathcal{A}:|a \cap (A_i \cap \omega)|< \omega \}$ and $\{a \in \mathcal{A}:|a \cap (B_j \cap \omega)|< \omega \}$ are countable. Since the $A_i$'s and $B_j$'s are regular closed sets, for each $i<n$ and each $j<m$, $|\mathcal{A} \setminus A_i| \leq \omega$ and $|\mathcal{A} \setminus B_j | \leq \omega$. Thus, $|\mathcal{A} \setminus \bigcap_{i<n}A_i| \leq \omega$ and $|\mathcal{A} \setminus \bigcap_{j<m} B_j | \leq \omega$. Therefore, $A \cap B \neq \emptyset$, a contradiction. Hence, there exists some $i<n$ (or some $j<m$) such that $|A_i \cap \mathcal{A}| < \omega$ (respectively, $|B_j \cap \mathcal{A}| < \omega$). Then $|A \cap \mathcal{A}| < \omega$ (respectively, $|B \cap \mathcal{A}| < \omega$) and, by Observation \ref{closedsets}, $A$ can be separated from $B$.
\end{proof}
\section{Strongly $\aleph_0$-separated almost disjoint families}
It is still open whether there can be (e.g., assuming CH) a mad family whose $\Psi$-space is almost-normal, or an almost disjoint family whose $\Psi$-space is almost-normal but not normal. However, we can construct a mad family with a slightly weaker property:
\begin{definition}
\label{stronglycountableseparated}
An almost disjoint family $\mathcal{A}$ will be called \emph{strongly $\aleph_0$-separated}, if and only if for each pair of disjoint countable subfamilies there is a clopen partition of $\mathcal{A}$ that separates them. That is, for each $A,B \in [\mathcal{A}]^\omega$, with $A\cap B = \emptyset$, there is $X \subset \omega$ such that
\begin{enumerate}[nosep]
\item For each $a \in \mathcal{A}$, $a \subseteq ^* X$ or $a \cap X =^* \emptyset$,
\item For each $a \in A$, $a \subseteq ^* X$,
\item For each $a \in B$, $a \cap X =^* \emptyset$.
\end{enumerate}
\end{definition}
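\noindent Observe that condition 1 indeed produces a clopen partition of $\Psi(\mathcal{A})$ (a minimal check): if $X$ satisfies condition 1, then $cl_{\Psi(\mathcal{A})}(X) = X \cup \{a \in \mathcal{A}: a \subseteq^* X\}$ and $cl_{\Psi(\mathcal{A})}(\omega \setminus X) = (\omega \setminus X) \cup \{a \in \mathcal{A}: a \cap X =^* \emptyset\}$ are disjoint, complementary and closed, hence both are clopen.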
\begin{lemma}
\label{quasiseparation}
Almost-normal almost disjoint families are strongly $\aleph_0$-separated.
\end{lemma}
\begin{proof}
Let $\mathcal{A}$ be an almost-normal almost disjoint family. First, let us recall that each pair of disjoint countable closed subsets of a regular space can be separated. Hence, given that $\Psi(\mathcal{A})$ is regular and $\mathcal{A}$ is a closed discrete subset of $\Psi(\mathcal{A})$, if we consider $A, B \in [\mathcal{A}]^\omega$ with $A \cap B = \emptyset$, then $A$ and $B$ can be separated. Thus, there exist open subsets $U_A, U_B$ of $\Psi(\mathcal{A})$ such that $U_A \cap U_B = \emptyset$, $A \subseteq U_A$ and $B \subseteq U_B$. Let $C = cl_{\Psi(\mathcal{A})}(U_A \cap \omega)$. By Observation \ref{closureofsubsetsofomega}, $C$ is a regular closed set. Then $C$ and $\mathcal{A} \setminus C$ form a disjoint pair consisting of a regular closed set and a closed set. Since $\mathcal{A}$ is almost-normal, there exist open subsets $V$, $W$ of $\Psi(\mathcal{A})$ such that $V \cap W = \emptyset$, $C \subseteq V$ and $\mathcal{A} \setminus C \subseteq W$.\\
Let us check that $X = V \cap \omega$ has the desired properties. Indeed, let $a \in \mathcal{A}$: if $a \in C$, then $a \subseteq ^* V \cap \omega = X$; if $a \in \mathcal{A}\setminus C$, then $a\subseteq ^* W \cap \omega$, and thus $a \cap X =^* \emptyset$. Now, if $a \in A$, then $a \subseteq ^* U_A \cap \omega \subseteq C \cap \omega \subseteq V \cap \omega = X$. If $b \in B$, then $|b \cap U_A| < \omega$, and thus $b \in \mathcal{A} \setminus C$. Hence $b \subseteq ^* W$, i.e. $b \cap X =^* \emptyset$. Therefore, $\mathcal{A}$ is strongly $\aleph_0$-separated.
\end{proof}
\begin{proposition}[CH]
\label{CHimpliesMADstronglycountableseparated}
There is a strongly $\aleph_0$-separated mad family.
\end{proposition}
\begin{proof}
Let $\{(A_\beta,B_\beta)\in [\omega_1]^\omega \times [\omega_1]^\omega : \omega \le \beta < \omega_1\}$ list all disjoint pairs of countable subsets of $\omega_1$ in such a way that for each $\omega \le \beta < \omega_1$, $A_\beta \cup B_\beta \subseteq \beta$. In addition, list $[\omega]^\omega$ as $\{Y_\alpha: \omega \le \alpha < \omega_1\}$.\\
Let $\omega \le \alpha <\omega_1$ and assume that for each $\omega \le \beta < \alpha$, the sets $X_\beta$, $a_\beta \subset \omega$ have been defined such that:
\begin{enumerate}[nosep]
\item For each $\gamma \in A_\beta: a_\gamma \subseteq^* X_\beta$,
\item For each $\gamma \in B_\beta: a_\gamma \cap X_\beta =^* \emptyset$,
\item For each $\gamma < \alpha: a_\gamma \subseteq^* X_\beta$ or $a_\gamma \cap X_\beta =^* \emptyset$,
\item If there is $\gamma < \beta$ such that $|a_\gamma \cap Y_\beta| = \omega$, then $a_\beta = \emptyset$. Otherwise, $|a_\beta \cap Y_\beta|=\omega$,
\item For each $\eta, \gamma < \alpha$, $a_\eta \cap a_\gamma =^* \emptyset$,
\end{enumerate}
\noindent Let us construct $X_\alpha$. List $\alpha \smallsetminus B_\alpha$ and $B_\alpha$ as $\alpha \smallsetminus B_\alpha =\{\gamma _n:n\in \omega\}$ and $B_\alpha=\{\beta _n:n\in \omega\}$. Since $A_\alpha \cup B_\alpha \subseteq \alpha$, we have $A_\alpha \subseteq \alpha \smallsetminus B_\alpha$ and, for each $n \in \omega$, $\gamma_n, \beta_n < \alpha$; that is, $a_{\gamma_n}$ and $a_{\beta_n}$ have already been defined. In addition, for $n \in \omega$, $W_n =a_{\gamma_n}\smallsetminus [\bigcup_{j\le n}a_{\beta_j}]$ is either empty or infinite. Define $X_\alpha = \bigcup_{n\in \omega}W_n$. Observe that $(A_\alpha,B_\alpha)$ and $X_\alpha$ satisfy properties 1. and 2. of the recursive construction.\\
Now let us build $a_\alpha$. Reenumerate $\{X_\beta: \beta <\alpha\}$$\cup \{X_\alpha\}$ as $\{X^n:n\in \omega\}$. For $n \in \omega$, let $X_1^n = X^n$, $X_0^n = \omega \smallsetminus X^n$.
If there is $\gamma < \alpha$ such that $|a_\gamma \cap Y_\alpha| = \omega$, then let $a_\alpha = \emptyset$.
On the other hand, if for each $\gamma < \alpha$, $|a_\gamma \cap Y_\alpha| < \omega$, then for $n \in \omega$ pick $i(n) \in \{0,1\}$ so that $Y_\alpha \cap \bigcap_{j \le n}X_{i(j)}^j$ is infinite, and pick $p_n \in \big[Y_\alpha \cap \bigcap_{j \le n}X_{i(j)}^j\big] \smallsetminus \{p_j:j<n\}$. In this case, let $a_\alpha = \{p_n:n\in\omega\}$. Since $a_\alpha \subseteq Y_\alpha$, for each $\beta < \alpha$, $a_\beta \cap a_\alpha$ is finite.\\
This finishes the recursive construction of $X_\alpha$ and $a_\alpha$. Regardless of whether $a_\alpha$ is empty or not, it satisfies properties 4. and 5. In addition, it holds true that for each $\gamma, \beta \le \alpha$: $a_\gamma \subseteq ^* X_\beta$ or $a_\gamma \cap X_\beta =^* \emptyset$. Thus, property 3. is satisfied.\\
Let $\mathcal{A} = \{a_\alpha: \omega \le \alpha < \omega_1 \textit{ and } a_\alpha \neq \emptyset \}$. Observe that properties 4. and 5. guarantee that $\mathcal{A}$ is a mad family. Properties 1., 2. and 3. guarantee that $\mathcal{A}$ is strongly $\aleph_0$-separated. Hence, $\mathcal{A}$ is the desired family.
\end{proof}
\section{Questions and Remarks}
\noindent We don't even have consistent examples to answer the following questions:
\begin{question} Is there a partly-normal not quasi-normal almost disjoint family?
\end{question}
\begin{question} Is there an almost-normal not normal almost disjoint family?
\end{question}
\begin{question} Is there an almost-normal mad family?
\end{question}
\noindent If $\mathcal{A}$ is mad, then $\Psi(\mathcal{A})$ is a pseudocompact, not countably compact space. Recall that normal pseudocompact spaces are countably compact, and so it is natural to ask the following more general question:
\begin{question} Are almost-normal pseudocompact spaces countably compact?
\end{question}
\noindent Since $\Psi$-spaces are always Tychonoff and not countably compact, the existence of an almost-normal mad family would answer this question in the negative.
\noindent Finally, we have not considered the relationship between these weakenings of normality and countable paracompactness:
\begin{question} Is there a relationship between countable paracompactness and any of these weakenings of normality?
\end{question}
\section*{Acknowledgements}
The first author was partly supported for this research by the Consejo Nacional de Ciencia y Tecnolog\'ia CONACYT, M\'exico, Scholarship 411689.
\small
\baselineskip=5pt
| {
"timestamp": "2020-07-14T02:11:47",
"yymm": "2007",
"arxiv_id": "2007.05844",
"language": "en",
"url": "https://arxiv.org/abs/2007.05844",
"abstract": "Almost disjoint families of true cardinality $\\mathfrak{c}$ are used to produce an example of a mildly-normal not partly-normal $\\Psi$-space and a quasi-normal not almost-normal $\\Psi$-space. This is related with a problem posed by Lufti Kalantan where he asks whether there exists a mad family so that the related Mrówka-Isbell space is partly-normal. In addition, a consistent example of a Luzin mad family such that its associated $\\Psi$-space is quasi-normal is provided.",
"subjects": "General Topology (math.GN)",
"title": "Weak normality properties in $Ψ$-spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9697854164256366,
"lm_q2_score": 0.7310585669110203,
"lm_q1q2_score": 0.708969936743333
} |
https://arxiv.org/abs/1501.04918 | A density property for fractional weighted Sobolev spaces | In this paper we show a density property for fractional weighted Sobolev spaces. That is, we prove that any function in a fractional weighted Sobolev space can be approximated by a smooth function with compact support. The additional difficulty in this nonlocal setting is caused by the fact that the weights are not necessarily translation invariant. | \section{Introduction}
The goal of this paper is to provide an approximation result
by smooth and compactly supported functions
for a fractional Sobolev space with weights that are
not necessarily translation invariant.
The functional framework is the following.
Given $s\in(0,1)$, $p\in(1,+\infty)$ and
\begin{equation}\label{range}
a\in\left[0,\;\frac{n-sp}{2}\right)
\end{equation}
we introduce the semi-norm
\begin{equation}\label{norm}
[u]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}:=
\left(\iint_{{\mathbb R}^{2n}}\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\,\frac{dx}{|x|^a}
\,\frac{dy}{|y|^a}\right)^{1/p}.
\end{equation}
We define the space
$$ \widetilde{W}^{s,p}_a({\mathbb R}^n):=\{u:{\mathbb R}^n\to{\mathbb R} {\mbox{ measurable s.t. }}
[u]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}<+\infty\}.$$
Also, we define the weighted norm
\begin{equation}\label{UP}
\|u\|_{L^{p^*_s}_a({\mathbb R}^n)}:=\left(\int_{{\mathbb R}^n}\frac{|u(x)|^{p^*_s}}{|x|^{\frac{2ap^*_s}{p}}}\,dx
\right)^{1/p^*_s},
\end{equation}
where $p^*_s$ is the fractional critical Sobolev exponent associated to $p$, namely
$$ p^*_s:=\frac{np}{n-sp}.$$
Moreover, we set
$$ L^{p^*_s}_a({\mathbb R}^n):=\{u:{\mathbb R}^n\to{\mathbb R} {\mbox{ measurable s.t. }}
\|u\|_{L^{p^*_s}_a({\mathbb R}^n)}<+\infty\}.$$
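\noindent For later reference, we note the elementary identity
$$ \frac{2ap^*_s}{p}=\frac{2a}{p}\cdot\frac{np}{n-sp}=\frac{2an}{n-sp},$$
so that, by~\eqref{range}, $\frac{2ap^*_s}{p}<n$; in particular, the weight~$|x|^{-\frac{2ap^*_s}{p}}$
appearing in~\eqref{UP} is locally integrable.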
The importance of the weighted norm in~\eqref{UP}
lies in the fact that, when~$a$ lies in the range
prescribed by~\eqref{range},
a weighted fractional Sobolev inequality holds true,
as proved in~\cite{AB}: more precisely, there exists a constant~$C_{n,s,p,a}>0$
such that
$$ \|u\|_{L^{p^*_s}_a({\mathbb R}^n)}\le
C_{n,s,p,a}\, [u]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)},$$
for any~$u\in C^\infty_0({\mathbb R}^n)$.
So we define~$\dot{W}^{s,p}_a({\mathbb R}^n)
:=\widetilde{W}^{s,p}_a({\mathbb R}^n)\cap L^{p^*_s}_a({\mathbb R}^n)$, which is naturally
endowed with the norm
\begin{equation}\label{norm2}
\|u\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}:=
[u]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}+\|u\|_{L^{p^*_s}_a({\mathbb R}^n)}.\end{equation}
The space~$\dot{W}^{s,p}_a({\mathbb R}^n)$ has recently appeared
in the literature in several circumstances, such as
in a clever change of variable (see~\cite{frank}), and
in a critical and fractional Hardy equation (see~\cite{DMPS}).
Even the case with~$a=0$ presents some applications,
see e.g.~\cite{maria}.\medskip
A natural question is whether functions
with finite norm in~$\dot{W}^{s,p}_a({\mathbb R}^n)$
can be approximated by smooth functions with compact support.
This is indeed the case, as stated by our main result:
\begin{theorem}\label{TH}
For any $u\in\dot{W}^{s,p}_a({\mathbb R}^n)$ there exists a sequence of
functions $u_\epsilon\in C^{\infty}_0({\mathbb R}^n)$ such that
$\|u-u_\epsilon\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}\to0$ as $\epsilon\to0$.
Namely, $C^\infty_0({\mathbb R}^n)$ is dense in $\dot{W}^{s,p}_a({\mathbb R}^n)$.
\end{theorem}
We observe that Theorem~\ref{TH} comprises also the ``unweighted''
case~$a=0$
(though, in this setting, the proof can be radically simplified,
thanks to the translation invariance of the kernel, see e.g.~\cite{FSV}).
The result obtained in Theorem~\ref{TH} here plays also a crucial role
in~\cite{DMPS} to obtain sharp decay estimates of the solution of
a weighted equation near the singularities and at infinity.\medskip
For related results in weighted Sobolev spaces with integer exponents
see for instance~\cite{CS, kilpe, Bo, tolle} and the references therein.
\medskip
The paper is essentially self-contained and written in the most
accessible way. We tried to avoid
as much as possible any unnecessary complication arising
from the presence of the weights and to clearly explain all the technical
details of the arguments presented.\medskip
The paper is organized as follows. In Section \ref{sec:basic} we show
a basic lemma that states that the space under consideration
is not trivial.
In Section \ref{sec:compact} we show that we can perform an
approximation with compactly supported functions.
The approximation with smooth functions is, in general, more
difficult to obtain, due to the presence of weights that
are not translation invariant. More precisely,
the standard approximation techniques that rely on convolution
need to be carefully reviewed, since the arguments
based on the continuity under translations in the classical Lebesgue spaces
fail in this case. To overcome this type of difficulties,
in Section~\ref{AVE} we estimate the ``averaged'' error
produced by translations of the weights and we use this
estimate to control the norm of a mollification in terms
of the norm of the original function.
Then, in Section~\ref{AVE2}, we perform an approximation
with continuous functions, by carefully exploiting Lusin's Theorem.
The approximation with smooth functions is proved
in Section \ref{sec:smooth}, by using all the ingredients
that were previously introduced.
Finally, Section \ref{sec:proof} is devoted to the proof of Theorem \ref{TH}.
\section{A basic lemma}\label{sec:basic}
In this section we consider a more general semi-norm
and we show that it is bounded for functions in $C^\infty_0({\mathbb R}^n)$.
This remark shows that there is an interesting range of
parameters for which the space considered here is not trivial.
We take $\alpha,\beta\in{\mathbb R}$ such that
\begin{equation}\label{cond ab}
-s p<\alpha,\beta<n \ {\mbox{ and }} \ \alpha +\beta <n,
\end{equation}
and we define
\begin{equation}\label{2.1bis} [u]_{\widetilde{W}^{s,p}_{\alpha,\beta}({\mathbb R}^n)}
:=\iint_{{\mathbb R}^{2n}}\frac{|u(x)-u(y)|^p}{|x-y|^{n+s p}}\,\frac{dx}{|x|^\alpha}
\,\frac{dy}{|y|^\beta}.\end{equation}
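\noindent We remark that, in view of~\eqref{range}, the choice~$\alpha=\beta=a$
is admissible in~\eqref{cond ab}: indeed~$a\ge0>-sp$ and
$$ \alpha+\beta=2a<n-sp<n,$$
so the $p$-th power of the semi-norm in~\eqref{norm} is a particular case of~\eqref{2.1bis}.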
\begin{lemma}\label{lemma 0}
Let $\varphi\in C^\infty_0({\mathbb R}^n)$. Then there exists a positive constant $C$ such that
$$ [\varphi]_{\widetilde{W}^{s,p}_{\alpha,\beta}({\mathbb R}^n)}\le C.$$
\end{lemma}
\begin{proof}
We take $\varphi\in C^\infty_0({\mathbb R}^n)$ and we suppose that the support of $\varphi$ is contained in the
ball $B_R$ (for some $R>1$). Therefore, if $x,y\in {\mathbb R}^n\setminus
B_R$ then $\varphi(x)=\varphi(y)=0$,
and so, up to a factor~$2$ (and up to exchanging the roles of~$\alpha$ and~$\beta$,
which is immaterial, since assumption~\eqref{cond ab} is symmetric in~$\alpha$ and~$\beta$),
we can assume in the integral in~\eqref{2.1bis} for~$\varphi$
that $x\in B_R$. That is, we have to estimate
\begin{equation}\label{int0}
I\,= \,\iint_{B_R\times {\mathbb R}^n}\frac{|\varphi(x)-\varphi(y)|^p}{|x-y|^{n+s p}}\,\frac{dx}{|x|^\alpha}
\,\frac{dy}{|y|^\beta}\,=\, I_1 +I_2,
\end{equation}
where
\begin{eqnarray*}
I_1&:=&\iint_{B_R\times B_{2R}}\frac{|\varphi(x)-\varphi(y)|^p}{|x-y|^{n+s p}}\,\frac{dx}{|x|^\alpha}
\,\frac{dy}{|y|^\beta} \\
{\mbox{ and }} I_2&: =& \iint_{B_R\times ({\mathbb R}^n\setminus B_{2R})}
\frac{|\varphi(x)-\varphi(y)|^p}{|x-y|^{n+s p}}\,\frac{dx}{|x|^\alpha}\,\frac{dy}{|y|^\beta}.
\end{eqnarray*}
We first estimate $I_1$: we have
\begin{equation}\label{2.2bis}
I_1\le C\iint_{B_{2R}\times B_{2R}}\frac{|x-y|^p}{|x-y|^{n+s p}}\,\frac{dx}{|x|^\alpha}
\,\frac{dy}{|y|^\beta},\end{equation}
for some constant $C>0$ depending on the $C^1$-norm of $\varphi$.
Now, if $\alpha,\beta<0$ then $|x|^{-\alpha}\le (2R)^{-\alpha}$ and $|y|^{-\beta}\le (2R)^{-\beta}$.
Therefore, by the change of variable $z=x-y$, we get
$$ I_1 \le C\,(2R)^{-\alpha}(2R)^{-\beta}\int_{B_{2R}}dx
\int_{B_{4R}}dz\, |z|^{p-n-s p} \le C, $$
up to renaming $C$, which possibly depends on $R$.
Now we suppose that $\alpha,\beta\ge0$. We claim that
\begin{equation}\label{Iuno}
I_1\le C\,\iint_{B_{2R}\times B_{2R}}|x-y|^{p-n-s p}\,\frac{dx\,dy}{|x|^{\alpha+\beta}}.
\end{equation}
Indeed, if $|x|\le|y|$, then formula~\eqref{Iuno} trivially follows
from~\eqref{2.2bis}.
On the other hand, if $|x|\ge|y|$, then
$$ I_1\le C\,\iint_{B_{2R}\times B_{2R}}|x-y|^{p-n-s p}\,\frac{dx\,dy}{|y|^{\alpha+\beta}},$$
and so by symmetry we get \eqref{Iuno}.
From \eqref{Iuno} we obtain that
$$ I_1 \le C\,\int_{B_{2R}}\frac{dx}{|x|^{\alpha+\beta}}\,
\int_{B_{4R}}\frac{dz}{|z|^{n+s p-p}}\le C, $$
thanks to \eqref{cond ab}, up to renaming~$C$.
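Indeed, both factors in the last line are finite: passing to polar coordinates,
$$ \int_{B_{2R}}\frac{dx}{|x|^{\alpha+\beta}}=C\int_0^{2R}\rho^{\,n-1-\alpha-\beta}\,d\rho<+\infty
\qquad{\mbox{and}}\qquad
\int_{B_{4R}}\frac{dz}{|z|^{n+s p-p}}=C\int_0^{4R}\rho^{\,p(1-s)-1}\,d\rho<+\infty,$$
since~$\alpha+\beta<n$ by~\eqref{cond ab} and~$p(1-s)>0$.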
Finally, we deal with the case $\alpha\ge0$ and $\beta\le0$ (the other situation is symmetric).
Then, $|y|^{-\beta}\le (2R)^{-\beta}$, and so
\begin{eqnarray*}
I_1 &\le & C\,(2R)^{-\beta} \iint_{B_{2R}\times B_{2R}}|x-y|^{p-n-s p}
\,\frac{dx\,dy}{|x|^{\alpha}} \\
& \le & C\,(2R)^{-\beta} \int_{B_{2R}}\frac{dx}{|x|^\alpha}\,
\int_{B_{4R}}|z|^{p-n-s p}\,dz \\
&\le & C,
\end{eqnarray*}
thanks to \eqref{cond ab}, up to relabelling $C$ (that depends also on $R$).
Therefore, we have shown that for any $\alpha,\beta$ that satisfy~\eqref{cond ab} we have that
\begin{equation}\label{est 1}
I_1\le C,
\end{equation}
up to renaming the constant $C$.
Now we estimate $I_2$. For this, we observe that if $x\in B_{R}$ and $y\in {\mathbb R}^n\setminus B_{2R}$ then
$$ |x-y|\ge |y|-|x|=\frac{|y|}{2}+\frac{|y|}{2}-|x|\ge \frac{|y|}{2}.$$
Thus
\begin{eqnarray*}
I_2 &\le & 2^{n+s p}\,\left(2\|\varphi\|_{L^\infty({\mathbb R}^n)}\right)^p\,
\iint_{B_R\times ({\mathbb R}^n\setminus B_{2R})}\frac{1}{|y|^{n+s p}}
\,\frac{dx}{|x|^\alpha}\,\frac{dy}{|y|^\beta} \\
&\le & 2^{n+s p}\,\left(2\|\varphi\|_{L^\infty({\mathbb R}^n)}\right)^p\,
\int_{B_R}\frac{dx}{|x|^\alpha}\, \int_{{\mathbb R}^n\setminus B_{2R}}
\frac{dy}{|y|^{n+s p+\beta}}\\
&\le & C,
\end{eqnarray*}
thanks to \eqref{cond ab}. Plugging this and \eqref{est 1} into \eqref{int0},
we obtain that $I$ is bounded.
\end{proof}
As an obvious consequence of Lemma~\ref{lemma 0},
we have that~$C^\infty_0({\mathbb R}^n)\subseteq{\widetilde{W}^{s,p}_a({\mathbb R}^n)}$,
and so, by~\eqref{range}, we see that~$C^\infty_0({\mathbb R}^n)\subseteq{\dot{W}^{s,p}_a({\mathbb R}^n)}$.
This says that the approximation sought in Theorem~\ref{TH}
is meaningful.
\section{Approximation with compactly supported functions}\label{sec:compact}
In this section we will prove that we can approximate
a function in $\dot{W}^{s,p}_a({\mathbb R}^n)$ with another function
with compact support, by keeping the error small.
\begin{lemma}\label{lemma support}
Let~$u\in\dot{W}^{s,p}_a({\mathbb R}^n)$.
Let~$\tau\in C^\infty_0(B_2,[0,1])$ with~$\tau=1$ in~$B_1$,
and~$ \tau_j(x):=\tau (x/j)$. Then
$$ \lim_{j\to+\infty} \|u-\tau_j u\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}=0.$$
\end{lemma}
\begin{proof}
We set~$\eta_j:=1-\tau_j$.
Then $u-\tau_j u=\eta_j u$, and~$\eta_j(x)-\eta_j(y)=
\tau_j(y)-\tau_j(x)$. Accordingly~$|u(x)-\tau_j u(x)|\le 2|u(x)|$
and so
\begin{equation}\label{898989789}
\lim_{j\to+\infty} \|u-\tau_j u\|_{L^{p^*_s}_a({\mathbb R}^n)}=0,\end{equation}
by the Dominated Convergence Theorem.
Moreover,
$$ |\eta_j u(x)-\eta_j u(y)|\le
|\tau_j(x)-\tau_j(y)|\,|u(y)| + |u(x)-u(y)|\,\eta_j(x).$$
Also, we
observe that if both $x$ and~$y$ lie in~$B_j$, then~$\tau_j(x)=\tau_j(y)=1$.
Therefore
\begin{equation}\label{EEEE}\begin{split}
& \iint_{{\mathbb R}^{2n}} \frac{\big| (u-\tau_ju)(x)-(u-\tau_ju)(y)\big|^p}{
|x-y|^{n+sp}}\,\frac{dx}{|x|^a}\,\frac{dy}{|y|^a}
\\&\qquad\le 2 \iint_{{\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)}
\frac{\big| (u-\tau_ju)(x)-(u-\tau_ju)(y)\big|^p}{
|x-y|^{n+sp}}\,\frac{dx}{|x|^a}\,\frac{dy}{|y|^a}\\
&\qquad \le C\,(I_j + J_j),\end{split}\end{equation}
where
\begin{eqnarray*}
I_j &:=& \iint_{{\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)}
\frac{ |\tau_j(x)-\tau_j(y)|^p\,|u(y)|^p }{|x-y|^{n+sp}}\,\frac{dx}{|x|^a}\,\frac{dy}{|y|^a}\\
{\mbox{and }} J_j &:=&
\iint_{{\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)}
\frac{ |u(x)-u(y)|^p\,\eta_j^p(x)}{|x-y|^{n+sp}}\,\frac{dx}{|x|^a}\,\frac{dy}{|y|^a}.
\end{eqnarray*}
We estimate these two terms separately. First of all,
we estimate~$I_j$. For this, we define
\begin{eqnarray*}
D_{j,0}:=\Big\{ (x,y)\in {\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)
{\mbox{ s.t. }} |x|\le |y|/2\Big\},\\
D_{j,1}:= \Big\{ (x,y)\in {\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)
{\mbox{ s.t. }} |x|> |y|/2 {\mbox{ and }}
|x-y|\ge j\Big\}\\
{\mbox{and }}D_{j,2}:= \Big\{ (x,y)\in {\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)
{\mbox{ s.t. }}|x|> |y|/2 {\mbox{ and }}|x-y|< j \Big\},
\end{eqnarray*}
and we write, for $k\in\{0,1,2\}$,
$$ I_{j,k}:=
\iint_{D_{j,k}}
\frac{ |\tau_j(x)-\tau_j(y)|^p\,|u(y)|^p }{|x-y|^{n+sp}}\,\frac{dx}{|x|^a}\,\frac{dy}{|y|^a}.$$
Notice that
\begin{equation}\label{EQ00}
I_j =I_{j,0}+I_{j,1} + I_{j,2}.
\end{equation}
So we define~$\sigma_0:=s$, and we fix~$\sigma_1\in(0,s)$
and~$\sigma_2\in (s,1)$.
We write
$$ \frac{ |\tau_j(x)-\tau_j(y)|^p\,|u(y)|^p}{|x-y|^{n+sp}\,|x|^a\,|y|^{a}}=
\frac{ |\tau_j(x)-\tau_j(y)|^p}{|x-y|^{(s+\sigma_k)p}}
\cdot
\frac{ |u(y)|^p }{|x-y|^{n-\sigma_k p}\,|x|^a\,|y|^{a}}.$$
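We note that the two factors above indeed recombine the original integrand, since
$$ (s+\sigma_k)p+(n-\sigma_k p)=n+sp,$$
and that the exponents used below are conjugate, since~$\frac{sp}{n}+\frac{n-sp}{n}=1$.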
Thus we apply the H\"older inequality with exponents~$n/sp$
and~$p^*_s/p$
(which is in turn equal to~$n/(n-sp)$) and, for any~$k\in\{0,1,2\}$,
we obtain that
\begin{equation}\label{EQ3}
\begin{split}
I_{j,k} \le \left[ \iint_{D_{j,k}}
\frac{ |\tau_j(x)-\tau_j(y)|^{\frac{n}{s}} }{|x-y|^{\frac{(s+\sigma_k)n}{s}}}
\,dx\,dy\right]^{\frac{sp}{n}}\cdot
\left[ \iint_{D_{j,k}}
\frac{ |u(y)|^{p^*_s} }{
|x-y|^{\frac{(n-\sigma_k p)n}{n-sp}}
|x|^{\frac{ap^*_s}{p}}
|y|^{\frac{ap^*_s}{p}}}
\,dx\,dy\right]^{\frac{n-sp}{n}}.\end{split}\end{equation}
Now we change
variables~$X:=x/j$ and~$Y:=y/j$, and we see that
\begin{eqnarray*}
&& \iint_{D_{j,k}}
\frac{ |\tau_j(x)-\tau_j(y)|^{\frac{n}{s}} }
{|x-y|^{\frac{(s+\sigma_k)n}{s}} }
\,dx\,dy
=\iint_{D_{j,k}}
\frac{ |\tau(x/j)-\tau(y/j)|^{\frac{n}{s}} }
{|x-y|^{\frac{(s+\sigma_k)n}{s}}}
\,dx\,dy\\
&&\qquad
= j^{2n-\frac{(s+\sigma_k)n}{s}}
\iint_{D_{1,k}}
\frac{ |\tau(X)-\tau(Y)|^{\frac{n}{s}} }{
|X-Y|^{\frac{(s+\sigma_k)n}{s}} }
\,dX\,dY
=j^{\frac{(s-\sigma_k)n}{s}}
\iint_{D_{1,k}}
\frac{ |\tau(x)-\tau(y)|^{\frac{n}{s}} }{
|x-y|^{n+\sigma_k\frac{n}{s}}}
\,dx\,dy.
\end{eqnarray*}
That is, if we set~$P:=n/s$, we get that
\begin{equation}\begin{split}
\label{EQ4}
\iint_{D_{j,k}}
\frac{ |\tau_j(x)-\tau_j(y)|^{\frac{n}{s}} }
{|x-y|^{\frac{(s+\sigma_k)n}{s}} }
\,dx\,dy\le &\, j^{\frac{(s-\sigma_k)n}{s}}
[\tau]_{\dot{W}^{\sigma_k,P}({\mathbb R}^n)}^{P}\le
C j^{\frac{(s-\sigma_k)n}{s}},\end{split}\end{equation}
where~$[\,\cdot\,]_{\dot{W}^{\sigma,P}({\mathbb R}^n)}$
denotes the usual Gagliardo semi-norm
(which coincides with that of~${\widetilde{W}^{\sigma,P}_a({\mathbb R}^n)}$
with~$a=0$, see e.g.~\cite{guida}).
In addition, if~$(x,y)\in D_{j,0}$, we have that~$|x-y|\ge|y|-|x|\ge|y|/2$
and so
\begin{equation}\label{98sadvsfg4ewes}
\begin{split}
&\iint_{D_{j,0}}
\frac{ |u(y)|^{p^*_s} }{
|x-y|^{\frac{(n-\sigma_0 p)n}{n-sp}} |x|^{\frac{ap^*_s}{p}}
|y|^{\frac{ap^*_s}{p}}} \,dx\,dy
\le
C\iint_{D_{j,0}}
\frac{ |u(y)|^{p^*_s} }{
|x|^{\frac{ap^*_s}{p}}
|y|^{\frac{(n-\sigma_0 p)n}{n-sp}+
\frac{ap^*_s}{p}}} \,dx\,dy \\
&\qquad\le C \int_{{\mathbb R}^n\setminus B_j}
\left[ \int_0^{|y|/2} \rho^{n-1-\frac{ap^*_s}{p}}
\frac{ |u(y)|^{p^*_s} }{ |y|^{ \frac{n(n-sp+a)}{n-sp} } } \,d\rho\right]\,dy
=
C \int_{{\mathbb R}^n\setminus B_j}
\frac{ |y|^{ \frac{n(n-sp-a)}{n-sp} }
|u(y)|^{p^*_s} }{ |y|^{ \frac{n(n-sp+a)}{n-sp} } }\,dy\\ &\qquad=
C \int_{{\mathbb R}^n\setminus B_j}\frac{
|u(y)|^{p^*_s} }{ |y|^{\frac{2ap^*_s}{p}}}\,dy
.\end{split}
\end{equation}
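For the reader's convenience, we record the exponent bookkeeping used in the last two steps: since~$\sigma_0=s$,
$$ \frac{(n-\sigma_0 p)n}{n-sp}=n,\qquad
n-\frac{ap^*_s}{p}=\frac{n(n-sp-a)}{n-sp}
\qquad{\mbox{and}}\qquad
\frac{n(n-sp+a)}{n-sp}-\frac{n(n-sp-a)}{n-sp}=\frac{2an}{n-sp}=\frac{2ap^*_s}{p}.$$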
Moreover, if~$k\in\{1,2\}$,
using the change of variable~$z:=x-y$
(and integrating in~$y\in{\mathbb R}^n\setminus B_j$ separately),
we see that
\begin{eqnarray*}
\iint_{D_{j,k}}
\frac{ |u(y)|^{p^*_s} }{
|x-y|^{\frac{(n-\sigma_k p)n}{n-sp}} |x|^{\frac{ap^*_s}{p}}
|y|^{\frac{ap^*_s}{p}}} \,dx\,dy &\le&
C \iint_{D_{j,k}}
\frac{ |u(y)|^{p^*_s} }{
|x-y|^{\frac{(n-\sigma_k p)n}{n-sp}}
|y|^{\frac{2ap^*_s}{p}}} \,dx\,dy \\
&=&C
\iint_{D_{j,k}}
\frac{ |u(y)|^{p^*_s} }{
|x-y|^{n+\frac{(s-\sigma_k)pn}{n-sp}}|y|^{\frac{2ap^*_s}{p}}} \,dx\,dy
\\ &\le& \left\{\begin{matrix}
C \|u\|^{p^*_s}_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}\, \displaystyle\int_{{\mathbb R}^n\setminus B_j}
\displaystyle\frac{ dz}{
|z|^{n+\frac{(s-\sigma_1)pn}{n-sp}}} & {\mbox{ if }} k=1,\\
\, \\
C \|u\|^{p^*_s}_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}\, \displaystyle\int_{B_j}
\displaystyle\frac{ dz}{
|z|^{n+\frac{(s-\sigma_2)pn}{n-sp}}} & {\mbox{ if }} k=2.
\end{matrix}
\right.
\end{eqnarray*}
Thus, recalling that~$\sigma_1<s<\sigma_2$, we conclude that,
for any~$k\in\{1,2\}$,
\begin{equation}\label{98sadvsfg4ewes2}
\iint_{D_{j,k}}
\frac{ |u(y)|^{p^*_s} }{
|x-y|^{\frac{(n-\sigma_k p)n}{n-sp}}|x|^{\frac{ap^*_s}{p}}
|y|^{\frac{ap^*_s}{p}}} \,dx\,dy
\le C\, \|u\|^{p^*_s}_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}\, j^{ \frac{(\sigma_k-s)pn}{n-sp}}.\end{equation}
As a matter of fact, in view of~\eqref{98sadvsfg4ewes},
and recalling that~$\sigma_0=s$,
the above estimate is valid also for~$k=0$.
So, for~$k\in\{0,1,2\}$,
we insert formulas~\eqref{98sadvsfg4ewes2}
and~\eqref{EQ4}
into~\eqref{EQ3} and we conclude that
$$ I_{j,k}\le C \big(j^{\frac{(s-\sigma_k)n}{s}}\big)^{\frac{sp}{n}}\,
\cdot \,\big(\|u\|^{p^*_s}_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}\,
j^{ \frac{(\sigma_k-s)pn}{n-sp}}
\big)^{\frac{n-sp}{n}}\le C\,\|u\|_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}^{p}.
$$
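We stress that the powers of~$j$ cancel exactly, since
$$ \frac{(s-\sigma_k)n}{s}\cdot\frac{sp}{n}+\frac{(\sigma_k-s)pn}{n-sp}\cdot\frac{n-sp}{n}
=(s-\sigma_k)p+(\sigma_k-s)p=0,$$
while~$\big(\|u\|^{p^*_s}_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}\big)^{\frac{n-sp}{n}}
=\|u\|^{p}_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}$, due to the identity~$p^*_s\cdot\frac{n-sp}{n}=p$.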
Thus, by~\eqref{EQ00}, we obtain
\begin{equation}\label{EE8}
I_j\le C\,\|u\|_{L^{p^*_s}_a({\mathbb R}^n\setminus B_j)}^{p}\longrightarrow0
\qquad{\mbox{ as }}j\to+\infty.
\end{equation}
Now we consider~$J_j$. For this, we define
$$ \psi_j(x,y):=\chi_{{\mathbb R}^{n}\times ({\mathbb R}^n\setminus B_j)}(x,y)\,
\frac{ |u(x)-u(y)|^p\,\eta_j^p(x)}{|x-y|^{n+sp}|x|^a|y|^a}.$$
Notice that
$$ |\psi_j(x,y)|\le \frac{ |u(x)-u(y)|^p}{|x-y|^{n+sp}|x|^a|y|^a}\in
L^1({\mathbb R}^{2n}),$$
thus, by the Dominated Convergence Theorem,
$$ J_j=\iint_{{\mathbb R}^{2n}} \psi_j(x,y)\,dx\,dy
\longrightarrow0
\qquad{\mbox{ as }}j\to+\infty.$$
This, \eqref{EEEE} and~\eqref{EE8} give that
\begin{equation*}
\iint_{{\mathbb R}^{2n}} \frac{\big| (u-\tau_ju)(x)-(u-\tau_ju)(y)\big|^p}{
|x-y|^{n+sp}|x|^a|y|^a}\,dx\,dy\longrightarrow0
\qquad{\mbox{ as }}j\to+\infty.
\end{equation*}
The latter formula and~\eqref{898989789}
give the desired result.
\end{proof}
\section{Estimates in average and control of the convolution}\label{AVE}
Here we perform some detailed estimates on the ``averaged''
effect of the weights under consideration.
Roughly speaking, the weights themselves are not translation
invariant, but we will be able to estimate the averaged effect
of the translations in a somehow uniform way.
{F}rom this, we will be able to control the norm of
the mollification by the norm of the original function,
and this fact will in turn play a crucial role
in the approximation with smooth functions performed in Section~\ref{sec:smooth}
(namely,
one will approximate first a given function in the space
with a continuous and compactly supported function,
so one will have to bound the convolution of
this difference in terms of the difference of the original functions).
Due to the presence of two types of weights, the arguments of
this part are quite technical, but we tried to explain all
the details in a clear and self-contained way.
We start with an averaged weighted estimate:
\begin{prop}\label{WE}
There exists~$C>0$ such that
$$ \sup_{r>0} \frac{1}{r^{n}} \int_{B_r} \frac{dz}{|x+z|^a |y+z|^a}\le
\frac{C}{|x|^a|y|^a},$$
for every~$x$, $y\in{\mathbb R}^n\setminus\{0\}$.
\end{prop}
\begin{proof} Fix~$r>0$ and consider the following four domains:
\begin{eqnarray*}
D_0 &:=& \left\{ z\in B_r {\mbox{ s.t. }} |x+z|\ge\frac{|x|}{2}
{\mbox{ and }} |y+z|\ge\frac{|y|}{2}\right\}, \\
D_1 &:=& \left\{ z\in B_r {\mbox{ s.t. }} |x+z|\le\frac{|x|}{2}
{\mbox{ and }} |y+z|\ge\frac{|y|}{2}\right\}, \\
D_2 &:=& \left\{ z\in B_r {\mbox{ s.t. }} |x+z|\ge\frac{|x|}{2}
{\mbox{ and }} |y+z|\le\frac{|y|}{2}\right\} \\
{\mbox{and }}\;
D_3 &:=& \left\{ z\in B_r {\mbox{ s.t. }} |x+z|\le\frac{|x|}{2}
{\mbox{ and }} |y+z|\le\frac{|y|}{2}\right\}.
\end{eqnarray*}
Then
\begin{equation}\label{KK0}
\int_{D_0} \frac{dz}{|x+z|^a |y+z|^a}\le
\int_{B_r} \frac{dz}{(|x|/2)^a (|y|/2)^a}\le
\frac{4^a\, |B_r|}{|x|^a|y|^a}.
\end{equation}
Now we observe that
\begin{equation}\label{AUX0}\begin{split}
&{\mbox{if there exists~$z\in B_r$
such that~$|x+z|\le\displaystyle\frac{|x|}{2}$,}}
\\ &\qquad{\mbox{then~$r\ge |z|\ge |x|-|x+z|
\ge\displaystyle\frac{|x|}{2}$.}}\end{split}
\end{equation}
{F}rom this, we observe that
if~$D_1\ne\varnothing$ it follows that~$r\ge |x|/2$ and so,
using the substitution~$\zeta:=x+z$,
\begin{equation}\label{KK1}
\begin{split}
&\int_{D_1} \frac{dz}{|x+z|^a |y+z|^a}\le
\frac{2^a}{|y|^a}\int_{B_r} \frac{dz}{|x+z|^a}
\le\frac{2^a}{|y|^a}\int_{B_{r+|x|}} \frac{d\zeta}{|\zeta|^a}
\\ &\quad\le
\frac{C_1 (r+|x|)^{n-a}}{|y|^a}\le \frac{C_2 (3r)^{n}}{(r+|x|)^a|y|^a}
\le \frac{C_2 (3r)^{n}}{|x|^a|y|^a}=\frac{C_3 r^{n}}{|x|^a|y|^a},
\end{split}\end{equation}
for some constants~$C_1$, $C_2$, $C_3>0$.
Similarly, by exchanging the roles of~$x$ and $y$,
we see that
\begin{equation}\label{KK2}
\int_{D_2} \frac{dz}{|x+z|^a |y+z|^a}\le
\frac{C_4 r^{n}}{|x|^a|y|^a}.
\end{equation}
Moreover, if~$D_3\ne\varnothing$, we deduce from~\eqref{AUX0}
(and the similar formula for~$y$) that
$$ r\ge\max\left\{ \frac{|x|}{2},\,\frac{|y|}{2}\right\}$$
and therefore
\begin{equation}\label{KK3}
\begin{split}&\int_{D_3} \frac{dz}{|x+z|^a |y+z|^a}\le
\sqrt{ \int_{B_r} \frac{dz}{|x+z|^{2a} } }
\,\sqrt{ \int_{B_r} \frac{dz}{|y+z|^{2a} } }
\\ &\quad\le
\sqrt{ \int_{B_{r+|x|}} \frac{d\zeta}{|\zeta|^{2a} } }
\,\sqrt{ \int_{B_{r+|y|}} \frac{d\zeta}{|\zeta|^{2a} } }
\le C_5 \sqrt{(r+|x|)^{n-2a}}\,\sqrt{(r+|y|)^{n-2a}}
\\ &\quad=\frac{C_5 (r+|x|)^{n/2}\,(r+|y|)^{n/2}}{
(r+|x|)^a (r+|y|)^a}
\le \frac{C_5 (3r)^{n/2}\,(3r)^{n/2}}{
|x|^a|y|^a} =\frac{C_6r^n}{|x|^a|y|^a}.
\end{split}\end{equation}
Notice that, throughout the above integrals, we have used
that~$a\le 2a <n$,
thanks to~\eqref{range}.
The desired result now follows by combining~\eqref{KK0},
\eqref{KK1}, \eqref{KK2}, \eqref{KK3} and the fact that~$B_r=D_0\cup D_1\cup D_2\cup D_3$.
\end{proof}
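\noindent We note in passing that, choosing~$y=x$, Proposition~\ref{WE} gives
$$ \sup_{r>0} \frac{1}{r^{n}} \int_{B_r} \frac{dz}{|x+z|^{2a}}\le\frac{C}{|x|^{2a}},$$
which is the estimate of the forthcoming Proposition~\ref{WE-simple} with the
exponent~$b$ there replaced by~$2a$ (notice that~$2a<n$, thanks to~\eqref{range}).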
A simpler (but still useful for our purposes) version
of Proposition~\ref{WE} is the following:
\begin{prop}\label{WE-simple}
Let~$b:=\frac{2ap^*_s}{p}=\frac{2an}{n-sp}$. There exists~$C>0$ such that
$$ \sup_{r>0} \frac{1}{r^{n}} \int_{B_r} \frac{dz}{|x+z|^b}\le
\frac{C}{|x|^b},$$
for every~$x\in{\mathbb R}^n\setminus\{0\}$.
\end{prop}
\begin{proof} The proof is similar to the one of Proposition~\ref{WE},
just dropping the dependence on~$y$. We give the details for
the reader's convenience.
Fix~$r>0$ and consider the following two domains:
\begin{eqnarray*}
D_0 &:=& \left\{ z\in B_r {\mbox{ s.t. }} |x+z|\ge\frac{|x|}{2}
\right\}, \\
{\mbox{and }}\;
D_1 &:=& \left\{ z\in B_r {\mbox{ s.t. }} |x+z|\le\frac{|x|}{2}
\right\}.
\end{eqnarray*}
Then
\begin{equation}\label{KK0-simple}
\int_{D_0} \frac{dz}{|x+z|^b}\le
\int_{B_r} \frac{dz}{(|x|/2)^b}\le
\frac{2^b |B_r|}{|x|^b}.
\end{equation}
Now we observe that
if there exists~$z\in B_r$
such that~$|x+z|\le|x|/2$, then~$r\ge |z|\ge |x|-|x+z|
\ge|x|/2$. {F}rom this, we observe that
if~$D_1\ne\varnothing$ it follows that~$r\ge |x|/2$ and so
\begin{equation}\label{0uyfd8uhasdfghertyuioiuytf}
\int_{D_1} \frac{dz}{|x+z|^b}\le
\int_{B_{r+|x|}} \frac{d\zeta}{|\zeta|^b}
\le C_1 (r+|x|)^{n-b}=\frac{C_1(r+|x|)^n}{(r+|x|)^b}
\le \frac{C_1\,(3r)^n}{|x|^b}
\end{equation}
for some constant~$C_1>0$. We observe that above we have used that~$b<n$,
thanks to~\eqref{range}.
Then, formulas~\eqref{0uyfd8uhasdfghertyuioiuytf}
and~\eqref{KK0-simple} imply the desired result.
\end{proof}
Now, we observe that, in this paper, two types
of ``different'' weighted norms appear all over, namely~\eqref{norm}
and~\eqref{UP}. In order to deal with both of them at the same
time, we introduce now an ``abstract'' notation, by working in~${\mathbb R}^N$
(then, in our application, we will
choose either~$N=n$ or~$N=2n$).
Also, we will consider
two functions~$\varpi:{\mathbb R}^n\to{\mathbb R}^N$
and~$\Theta:{\mathbb R}^N\to[0,+\infty]$.
The main assumption that we will take is that
\begin{equation}\label{basic}
\sup_{r>0} \frac{1}{r^{n}} \int_{B_r} \frac{dz}{\Theta\big(X+\varpi(z)\big)}\le
\frac{C}{\Theta(X)},
\end{equation}
for a suitable~$C>0$, for a.e.~$X\in{\mathbb R}^N$.
We point out that
the integral in~\eqref{basic} is always performed on an~$n$-dimensional
ball~$B_r$ (i.e., in that notation,~$z\in B_r\subset{\mathbb R}^n$),
but the point~$X$ lies in~${\mathbb R}^N$ (and~$n$ and~$N$ may be different).
Concretely, in the light of Propositions~\ref{WE}
and~\ref{WE-simple}, we have that
\begin{equation}\label{basic2}
\begin{split}
&{\mbox{condition \eqref{basic} holds true when}}\\
&\qquad N=2n, \qquad \varpi(z)=(z,z),
\qquad\Theta(X)=|x|^a|y|^a, \qquad X=(x,y)\in{\mathbb R}^n\times{\mathbb R}^n,\\
&{\mbox{and when}}
\\ &\qquad N=n, \qquad \varpi(z)=z,\qquad
\Theta(x)=|x|^b,\qquad b=\frac{2ap^*_s}{p}.
\end{split}
\end{equation}
{F}rom~\eqref{basic}, we obtain
a useful bound on (a suitable variant of)
the maximal function in~${\mathbb R}^{N}$:
\begin{lemma}\label{9dvfbgfn32456645}
Assume that condition~\eqref{basic} holds true. Let~$q>1$.
Let~$V$ be a measurable
function from~${\mathbb R}^{N}$ to~${\mathbb R}$. Then, for any~$r>0$,
\begin{equation}\label{9scdvf3efdvvas} \int_{{\mathbb R}^{N}}
\left[ \frac{1}{r^{n}} \int_{B_r} |V(X-\varpi(z))|\,dz\right]^q
\,\frac{dX}{\Theta(X)}
\le C\int_{{\mathbb R}^{N}}
\frac{|V(X)|^q}{\Theta(X)}\,dX,\end{equation}
for a suitable~$C>0$.
\end{lemma}
\begin{proof} We may suppose that the right hand side of~\eqref{9scdvf3efdvvas}
is finite, otherwise we are done.
We use the
H\"older inequality with exponents $q$ and~$q/(q-1)$,
to see that
\begin{eqnarray*}
&&\frac{1}{r^{n}} \int_{B_r} |V(X-\varpi(z))|\,dz
\le \frac{1}{r^{n}} \left[\int_{B_r} |V(X-\varpi(z))|^q\,dz\right]^{1/q}
\left[\int_{B_r} 1\,dz\right]^{(q-1)/q}
\\&&\qquad= \frac{C_1}{r^{n/q}} \left[\int_{B_r} |V(X-\varpi(z))|^q\,dz
\right]^{1/q},\end{eqnarray*}
for some~$C_1>0$, and so,
by~\eqref{basic}, and using the change of variable~$\widetilde X:=X-\varpi(z)$
over~${\mathbb R}^N$, we obtain
\begin{eqnarray*}
&& \int_{{\mathbb R}^{N}}
\left[ \frac{1}{r^{n}} \int_{B_r} |V(X-\varpi(z))|\,dz\right]^q
\,\frac{dX}{\Theta(X)} \le \frac{C_1^q}{r^{n}}
\int_{{\mathbb R}^{N}}
\left[\int_{B_r} |V(X-\varpi(z))|^q\,dz\right]
\,\frac{dX}{\Theta(X)}
\\ &&\quad=
\frac{C_1^q}{r^{n}} \int_{B_r}
\left[\int_{{\mathbb R}^{N}} |V(X-\varpi(z))|^q \,\frac{dX}{\Theta(X)}
\right] \,dz
=
\frac{C_1^q}{r^{n}} \int_{B_r}
\left[\int_{{\mathbb R}^{N}} |V(\widetilde X)|^q \,
\frac{d\widetilde X}{\Theta(\varpi(z)+\widetilde X)}
\right] \,dz
\\ &&\quad= \frac{C_1^q}{r^{n}}
\int_{{\mathbb R}^{N}} |V(\widetilde X)|^q
\left[\int_{B_r}
\frac{dz}{\Theta(\varpi(z)+\widetilde X)}
\right] \,d\widetilde X
\le
\int_{{\mathbb R}^{N}}
|V(\widetilde X)|^q \,
\frac{C_2}{\Theta(\widetilde X)}
\,d\widetilde X
,\end{eqnarray*}
as desired.
\end{proof}
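\noindent As a simple consistency check, when~$N=n$, $\varpi(z)=z$ and~$\Theta\equiv1$,
condition~\eqref{basic} holds trivially (the left hand side equals~$|B_1|$), and
estimate~\eqref{9scdvf3efdvvas} reduces to the classical fact that averaging over balls
of a fixed radius does not increase the~$L^q$ norm, up to a multiplicative constant.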
With the estimate in Lemma~\ref{9dvfbgfn32456645}, we are in a position to bound
a (suitable variant of) the
standard mollification.
For this, we take a radially symmetric, radially decreasing
function $\eta_o\in C^\infty({\mathbb R}^n)$,
with~$\eta_o\ge0$, ${\rm supp}\;\eta_o\subseteq B_1$ and
\begin{equation}\label{sadvbefewgfew}
\int_{{\mathbb R}^n}\eta_o(x)\,dx=1
\end{equation}
With a slight abuse of notation, we write~$\eta_o(r)=\eta_o(x)$
whenever~$|x|=r$.
Given a measurable
function~$v=v(x,y)$ from~${\mathbb R}^{2n}$ to~${\mathbb R}$,
we also define
\begin{equation}\label{STARR}
v\star \eta_o(x,y):=\int_{{\mathbb R}^n} v(x-z,y-z)\,\eta_o(z)\,dz.\end{equation}
Then we have:
\begin{prop}\label{7scdvgr3fcedrefd}
For every measurable
function~$v=v(x,y)$ from~${\mathbb R}^{2n}$ to~${\mathbb R}$, we have that
$$ \iint_{{\mathbb R}^{2n}}
\frac{|v\star \eta_o(x,y)|^p}{ |x|^a|y|^a }\,dx\,dy
\le C\,
\iint_{{\mathbb R}^{2n}}
\frac{|v(x,y)|^p}{ |x|^a|y|^a}\,dx\,dy,$$
for a suitable~$C>0$.
\end{prop}
\begin{proof} The argument is a careful modification of
the one on pages~63--65 of~\cite{stein}.
First of all, we use an integration by parts to notice that
\begin{equation}\label{789sdfqfgft4tysdfhjk}
\int_0^1 r^n \,|\eta_o'(r)|\,dr
= -\int_0^1 r^n \,\eta_o'(r)\,dr
= n \int_0^1 r^{n-1} \,\eta_o(r)\,dr = C_0
\int_{B_1}\eta_o(x)\,dx=C_0,
\end{equation}
for some~$C_0>0$,
due to~\eqref{sadvbefewgfew}.
We define
\begin{eqnarray*}
\lambda(r,x,y)&:=& r^{n-1}
\int_{S^{n-1}} |v(x-r\omega,y-r\omega)|\,d{\mathcal{H}}^{n-1}(\omega)\\
{\mbox{and }}\;
\Lambda(r,x,y)&:=& \int_{B_r}|v(x-z,y-z)|\,dz.
\end{eqnarray*}
Now we use Lemma~\ref{9dvfbgfn32456645}
with~$N:=2n$, $\varpi(z):=(z,z)$, $X:=(x,y)$, $\Theta(X):=|x|^a|y|^a$, $q:=p$
and~$V(X):=v(x,y)$,
see~\eqref{basic2}. In this way we obtain that
\begin{equation}\label{etrherger442}\begin{split}\iint_{{\mathbb R}^{2n}}
\left[ \frac{\Lambda(r,x,y)}{r^{n}} \right]^p
\,\frac{dx\,dy}{|x|^a|y|^a}
\le C_1 \iint_{{\mathbb R}^{2n}}
\frac{|v(x,y)|^p}{|x|^a|y|^a}\,dx\,dy,\end{split}\end{equation}
for some~$C_1>0$.
Moreover, by polar coordinates,
\begin{eqnarray*}\Lambda(r,x,y)&=&C_2 \int_0^r \left[\rho^{n-1}
\int_{S^{n-1}} |v(x-\rho\omega,y-\rho\omega)|
\,d{\mathcal{H}}^{n-1}(\omega)\right]\,d\rho\\&=&
C_2 \int_0^r \lambda(\rho,x,y)\,d\rho,\end{eqnarray*}
and therefore
$$ \frac{\partial}{\partial r }\Lambda(r,x,y)=C_2
\lambda(r,x,y).$$
Notice also that~$\Lambda(0,x,y)=0=\eta_o(1)$.
Consequently, using again polar coordinates and an integration
by parts, we obtain
\begin{equation}\label{9sdsfghedvfdfgadrsg}\begin{split}
&|v\star \eta_o(x,y)|\le
\int_{B_1} |v(x-z,y-z)|\,\eta_o(z)\,dz \\
&\quad= C_3 \int_0^1 \left[ \int_{S^{n-1}} r^{n-1}
|v(x-r\omega,y-r\omega)|\,\eta_o(r) \,d{\mathcal{H}}^{n-1}(\omega)\right]\,dr = C_3
\int_0^1 \lambda(r,x,y)\,\eta_o(r)\,dr\\
&\quad= C_4 \int_0^1 \frac{\partial\Lambda}{\partial r }(r,x,y)\,\eta_o(r)\,dr
= -C_4
\int_0^1 \Lambda (r,x,y)\,\eta_o'(r)\,dr.\end{split}\end{equation}
We recall that~$\eta_o'\le0$, so the latter term is indeed non-negative.
Now we use the Minkowski integral inequality (see e.g. Appendix A.1
in~\cite{stein}): this gives that, for a given~$F=F(r,x,y)$,
and~$d\mu(x,y):=\frac{dx\,dy}{|x|^a|y|^a}$,
we have
\[ \left[\iint_{{\mathbb R}^{2n}}\left[ \int_0^1 |F(r,x,y)|\,dr\right]^p\,d\mu(x,y)
\right]^{1/p}\le \int_0^1
\left[ \iint_{{\mathbb R}^{2n}}
|F(r,x,y)|^p\,d\mu(x,y) \right]^{1/p}\,dr.\]
Using this with~$F(r,x,y):=\Lambda (r,x,y)\,\eta_o'(r)$ and recalling~\eqref{9sdsfghedvfdfgadrsg},
we conclude that
\begin{eqnarray*}
\left[ \iint_{{\mathbb R}^{2n}}
\frac{|v\star \eta_o(x,y)|^p}{ |x|^a|y|^a }\,dx\,dy \right]^{1/p}
&\le& C_5\left[ \iint_{{\mathbb R}^{2n}}
\left[
\int_0^1 \Lambda (r,x,y)\,|\eta_o'(r)|\,dr\right]^p
\frac{dx\,dy}{ |x|^a|y|^a } \right]^{1/p}\\&\le& C_5
\int_0^1
\left[ \iint_{{\mathbb R}^{2n}}
|\Lambda (r,x,y)|^p\,|\eta_o'(r)|^p\,
\frac{dx\,dy}{ |x|^a|y|^a }
\right]^{1/p}\,dr \\&=&
C_5
\int_0^1
\left[ \iint_{{\mathbb R}^{2n}}
\left[\frac{\Lambda (r,x,y)}{r^n}\right]^p
\frac{dx\,dy}{ |x|^a|y|^a }
\right]^{1/p} r^n \,|\eta_o'(r)|\,dr
.\end{eqnarray*}
Therefore, recalling~\eqref{etrherger442},
\begin{eqnarray*}
\left[ \iint_{{\mathbb R}^{2n}}
\frac{|v\star \eta_o(x,y)|^p}{ |x|^a|y|^a }\,dx\,dy \right]^{1/p}&\le&
C_6
\int_0^1
\left[
\iint_{{\mathbb R}^{2n}}
\frac{|v(x,y)|^p}{|x|^a|y|^a}\,dx\,dy
\right]^{1/p} r^n \,|\eta_o'(r)|\,dr .\end{eqnarray*}
This and~\eqref{789sdfqfgft4tysdfhjk}
give the desired result.
\end{proof}
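We also record, for possible future use, a scaling remark (a minimal check): if, for~$\epsilon\in(0,1]$,
one sets~$\eta_\epsilon(x):=\epsilon^{-n}\eta_o(x/\epsilon)$, then the proof above carries over
with a constant independent of~$\epsilon$, since~\eqref{basic} is uniform in the radius and,
substituting~$r=\epsilon t$,
$$ \int_0^\epsilon r^n\,|\eta_\epsilon'(r)|\,dr=
\int_0^1 (\epsilon t)^n\,\epsilon^{-n-1}\,|\eta_o'(t)|\,\epsilon\,dt
=\int_0^1 t^n\,|\eta_o'(t)|\,dt=C_0.$$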
A simpler, but still useful, version of Proposition~\ref{7scdvgr3fcedrefd}
holds for the standard convolution of a function~$u:{\mathbb R}^n\to{\mathbb R}$, i.e.
$$ u*\eta_o(x):=\int_{{\mathbb R}^n} u(x-z)\,\eta_o(z)\,dz.$$
The reader may compare the latter formula
with~\eqref{STARR}. In this more standard
setting, we have:
\begin{prop}\label{7scdvgr3fcedrefd-s}
Let~$b:=\frac{2ap^*_s}{p}$.
For every measurable
function~$u$ from~${\mathbb R}^{n}$ to~${\mathbb R}$, we have that
$$ \int_{{\mathbb R}^{n}}
\frac{|u*\eta_o(x)|^{p^*_s} }{ |x|^b}\,dx
\le C\,
\int_{{\mathbb R}^{n}}
\frac{|u(x)|^{p^*_s}}{ |x|^b}\,dx,$$
for a suitable~$C>0$.
\end{prop}
\begin{proof} The argument is a
simplification of the one given for
Proposition~\ref{7scdvgr3fcedrefd}. For the convenience of the reader,
we provide all the details.
We define
\begin{eqnarray*}
\lambda(r,x)&:=& r^{n-1}
\int_{S^{n-1}} |u(x-r\omega)|\,d{\mathcal{H}}^{n-1}(\omega)\\
{\mbox{and }}\;
\Lambda(r,x)&:=& \int_{B_r}|u(x-z)|\,dz.
\end{eqnarray*}
Here we use Lemma~\ref{9dvfbgfn32456645}
with~$N:=n$, $\varpi(z):=z$, $X:=x$, $\Theta(X):=|x|^b$, $q:={p^*_s}$
and~$V(X):=u(x)$,
see~\eqref{basic2}. In this way we obtain that
\begin{equation}\label{etrherger442-s}\begin{split}\int_{{\mathbb R}^{n}}
\left[ \frac{\Lambda(r,x)}{r^{n}} \right]^{p^*_s}
\,\frac{dx}{|x|^b}
\le C_1 \int_{{\mathbb R}^{n}}
\frac{|u(x)|^{p^*_s}}{|x|^b}\,dx,\end{split}\end{equation}
for some~$C_1>0$.
Moreover, by polar coordinates,
$$ \Lambda(r,x)=C_2 \int_0^r \left[\rho^{n-1}
\int_{S^{n-1}} |u(x-\rho\omega)|
\,d{\mathcal{H}}^{n-1}(\omega)\right]\,d\rho=
C_2 \int_0^r \lambda(\rho,x)\,d\rho,$$
and therefore
$$ \frac{\partial}{\partial r }\Lambda(r,x)=C_2
\lambda(r,x).$$
Notice also that~$\Lambda(0,x)=0=\eta_o(1)$.
Consequently, using again polar coordinates and an integration
by parts, we obtain
\begin{equation*}
\begin{split}
&|u*\eta_o(x)|\le
\int_{B_1} |u(x-z)|\,\eta_o(z)\,dz
= C_3 \int_0^1 \left[ \int_{S^{n-1}} r^{n-1}
|u(x-r\omega)|\,\eta_o(r)\,d{\mathcal{H}}^{n-1}(\omega)\right]\,dr \\
&\qquad= C_3
\int_0^1 \lambda(r,x)\,\eta_o(r)\,dr
= C_4 \int_0^1 \frac{\partial\Lambda}{\partial r }(r,x)\,\eta_o(r)\,dr
= -C_4
\int_0^1 \Lambda (r,x)\,\eta_o'(r)\,dr.\end{split}\end{equation*}
Now we use the Minkowski integral inequality (see e.g. Appendix A.1
in~\cite{stein})
and we conclude that
\begin{eqnarray*}
&& \left[ \int_{{\mathbb R}^{n}}
\frac{|u*\eta_o(x)|^{p^*_s}}{ |x|^b }\,dx \right]^{1/ {p^*_s}}
\le C_5\left[ \int_{{\mathbb R}^{n}}
\left[
\int_0^1 \Lambda (r,x)\,|\eta_o'(r)|\,dr\right]^{p^*_s}
\frac{dx}{ |x|^b } \right]^{1/ {p^*_s}}\\&&\qquad\le C_5
\int_0^1
\left[ \int_{{\mathbb R}^{n}}
|\Lambda (r,x)|^{p^*_s}\,|\eta_o'(r)|^{p^*_s}\,
\frac{dx}{ |x|^b }
\right]^{1/ {p^*_s}}\,dr =
C_5
\int_0^1
\left[ \int_{{\mathbb R}^{n}}
\left[\frac{\Lambda (r,x)}{r^n}\right]^{p^*_s}
\frac{dx}{ |x|^b }
\right]^{1/ {p^*_s}} r^n \,|\eta_o'(r)|\,dr
.\end{eqnarray*}
So, recalling~\eqref{etrherger442-s},
\begin{eqnarray*} \left[ \int_{{\mathbb R}^{n}}
\frac{|u* \eta_o(x)|^{p^*_s}}{ |x|^b }\,dx\right]^{1/ {p^*_s}} &\le&
C_6
\int_0^1
\left[
\int_{{\mathbb R}^{n}}
\frac{|u(x)|^{p^*_s}}{|x|^b}\,dx
\right]^{1/{p^*_s}} r^n \,|\eta_o'(r)|\,dr.
\end{eqnarray*}
{F}rom this and~\eqref{789sdfqfgft4tysdfhjk}
we obtain the desired result.
\end{proof}
\section{Approximation in weighted Lebesgue spaces with
continuous functions}\label{AVE2}
In order to deal with
the semi-norm in~\eqref{norm}, it is often convenient
to introduce a weighted norm over~${\mathbb R}^{2n}$, by proceeding as follows.
Given a measurable
function~$v=v(x,y)$ from~${\mathbb R}^{2n}$ to~${\mathbb R}$, we define
\begin{equation}\label{norm-p-2n}
\| v\|_{L^p_{a,a}({\mathbb R}^{2n})}:=
\left(\iint_{{\mathbb R}^{2n}} |v(x,y)|^p\,\frac{dx}{|x|^a}
\,\frac{dy}{|y|^a}\right)^{1/p}.
\end{equation}
When~$\| v\|_{L^p_{a,a}({\mathbb R}^{2n})}$ is finite, we say that~$v$ belongs
to~$L^p_{a,a}({\mathbb R}^{2n})$.
Notice that
\begin{equation}\label{9as8dfb0oijh4trdskjjhh}
\begin{split}
&{\mbox{if~$v^{(u)}(x,y):=\displaystyle\frac{u(x)-u(y)}{|x-y|^{\frac{n}{p}+s}}$, then
formula \eqref{norm-p-2n} reduces to~\eqref{norm},}}\\
& {\mbox{namely }} \; \| v^{(u)}\|_{L^p_{a,a}({\mathbb R}^{2n})}=
[u]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}.\end{split}\end{equation}
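Indeed, since~$\big(\frac{n}{p}+s\big)p=n+sp$, we have that
$$ |v^{(u)}(x,y)|^p=\frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}},$$
and~\eqref{9as8dfb0oijh4trdskjjhh} follows by integrating against the weights~$|x|^{-a}\,|y|^{-a}$.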
Now we give two approximation results
(namely Lemmata~\ref{678dvfdh7654wedrft0}
and~\ref{678dvfdh7654wedrft})
with respect to the norm in~\eqref{norm-p-2n}.
\begin{lemma}\label{678dvfdh7654wedrft0}
Let~$v\in L^p_{a,a}({\mathbb R}^{2n})$.
Then there exists a sequence of
functions~$v_M\in L^p_{a,a}({\mathbb R}^{2n})\cap L^\infty({\mathbb R}^{2n})$
such that~$\| v-v_M\|_{L^p_{a,a}({\mathbb R}^{2n})}\to0$ as $M\to+\infty$.
\end{lemma}
\begin{proof} We set
$$ v_M(x,y):=\left\{
\begin{matrix}
M & {\mbox{ if }} v(x,y)\ge M,\\
v(x,y) & {\mbox{ if }} v(x,y)\in(-M,M),\\
-M & {\mbox{ if }} v(x,y)\le- M.
\end{matrix}
\right.$$
We have that~$v_M\to v$ a.e. in~${\mathbb R}^{2n}$
and
$$ \frac{|v_M(x,y)|^p}{|x|^a|y|^a}\le
\frac{|v(x,y)|^p}{|x|^a|y|^a}\in L^1({\mathbb R}^{2n}),$$ thus the claim follows from
the Dominated Convergence Theorem.
\end{proof}
\begin{lemma}\label{678dvfdh7654wedrft}
Let~$v\in L^p_{a,a}({\mathbb R}^{2n})$.
Then there exists a sequence of continuous and compactly supported
functions~$v_\delta:{\mathbb R}^{2n}\to{\mathbb R}$
such that~$\| v-v_\delta\|_{L^p_{a,a}({\mathbb R}^{2n})}\to0$ as $\delta\to0$.
\end{lemma}
\begin{proof} In the light of Lemma~\ref{678dvfdh7654wedrft0},
we can also assume that
\begin{equation}\label{dfgfdjnthgrefdwrg}
v\in L^\infty({\mathbb R}^{2n}).
\end{equation}
Let~$\tau_j\in C^\infty({\mathbb R}^{2n},[0,1])$, with~$\tau_j(P)=1$
if~$|P|\le j$ and~$\tau_j(P)=0$ if~$|P|\ge j+1$.
Let~$v_j:=\tau_j v$. Then~$v_j\to v$ pointwise in~${\mathbb R}^{2n}$
as~$j\to+\infty$, and
$$
\frac{|v(x,y)-v_j(x,y)|^p}{|x|^a|y|^a}
\le \frac{2^p |v(x,y)|^p}{|x|^a|y|^a}\in L^1({\mathbb R}^{2n}).$$
As a consequence, by the Dominated Convergence Theorem,
$$ \lim_{j\to+\infty}\| v-v_j\|_{L^p_{a,a}({\mathbb R}^{2n})}=0.$$
So, given~$\delta>0$, we find~$j_\delta\in{\mathbb N}$ such that
\begin{equation}\label{9dcvb4wersdvqewas0009}
\| v-v_{j_\delta}\|_{L^p_{a,a}({\mathbb R}^{2n})}\le \delta.\end{equation}
Notice that~$v_{j_\delta}$ is supported
in~$\{ P\in{\mathbb R}^{2n} {\mbox{ s.t. }} |P|\le j_\delta+1\}$.
Also, given a set~$A\subseteq{\mathbb R}^{2n}$, we set
$$ \mu_{a,a}(A):=
\iint_{A}\frac{dx\,dy}{|x|^a|y|^a}.$$
By~\eqref{range}, we see that~$\mu_{a,a}$ is finite
over compact sets.
So, we can use Lusin's Theorem (see e.g. Theorem~7.10 in~\cite{Fol},
and page~121 there for the definition of the uniform norm).
We obtain that there exists a closed set~$E_\delta\subset{\mathbb R}^{2n}$
and a continuous and compactly supported function~$v_\delta:{\mathbb R}^{2n}\to{\mathbb R}$
such that~$v_{\delta}=v_{j_\delta}$ in~${\mathbb R}^{2n}\setminus E_\delta$,
$\mu_{a,a}(E_\delta)\le\delta^p$ and~$\|v_\delta\|_{L^\infty({\mathbb R}^{2n})}\le
\|v_{j_\delta}\|_{L^\infty({\mathbb R}^{2n})}$.
In particular, since~$\tau_{j_\delta}\in[0,1]$,
we have that~$\|v_\delta\|_{L^\infty({\mathbb R}^{2n})}
\le\|v\|_{L^\infty({\mathbb R}^{2n})}$,
and this quantity is finite, due to~\eqref{dfgfdjnthgrefdwrg}. Therefore
\begin{eqnarray*}&& \| v_{j_\delta}-v_{\delta}\|_{L^p_{a,a}({\mathbb R}^{2n})}^p
=
\iint_{E_\delta} |v_{j_\delta}(x,y)-v_{\delta}(x,y)|^p\,\frac{dx}{|x|^a}
\,\frac{dy}{|y|^a}
\\ &&\qquad\le 2^p \big( \|v_{j_\delta}\|_{L^\infty({\mathbb R}^{2n})}^p+
\|v_{\delta}\|_{L^\infty({\mathbb R}^{2n})}^p\big)\mu_{a,a}(E_\delta)\le
2^{p+1} \|v\|_{L^\infty({\mathbb R}^{2n})}^p \delta^p.\end{eqnarray*}
{F}rom this and~\eqref{9dcvb4wersdvqewas0009},
we obtain that~$\|v -v_{\delta}\|_{L^p_{a,a}({\mathbb R}^{2n})}
\leq\big(1+4\|v\|_{L^\infty({\mathbb R}^{2n})}\big)\delta$, which concludes the proof.
\end{proof}
We remark that a simpler version of Lemma~\ref{678dvfdh7654wedrft}
also holds true in~$L^{p^*_s}_a({\mathbb R}^n)$. We state the result explicitly
as follows:
\begin{lemma}\label{678dvfdh7654wedrft-s}
Let~$u\in L^{p^*_s}_a({\mathbb R}^n)$.
Then there exists a sequence of continuous and compactly supported
functions~$u_\delta:{\mathbb R}^{n}\to{\mathbb R}$
such that~$\| u-u_\delta \|_{L^{p^*_s}_a({\mathbb R}^n)}\to0$ as $\delta\to0$.
\end{lemma}
\begin{proof} The argument is a simplified version of
the one given for Lemma~\ref{678dvfdh7654wedrft}. Full details are
provided for the reader's convenience.
First of all, by the Dominated Convergence Theorem,
we can approximate $u$
in~$L^{p^*_s}_a({\mathbb R}^n)$ with a sequence of bounded functions
$$ u_M(x):=
\begin{cases}
M & {\mbox{ if }} u(x)\ge M,\\
u(x) & {\mbox{ if }} u(x)\in(-M,M),\\
-M & {\mbox{ if }} u(x)\le- M.
\end{cases}$$
Consequently, we can also assume that
\begin{equation}\label{dfgfdjnthgrefdwrg-s}
u\in L^\infty({\mathbb R}^{n}).
\end{equation}
Let~$\tau_j\in C^\infty({\mathbb R}^{n},[0,1])$, with~$\tau_j(P)=1$
if~$|P|\le j$ and~$\tau_j(P)=0$ if~$|P|\ge j+1$.
Let~$u_j:=\tau_j u$. Then~$u_j\to u$ pointwise in~${\mathbb R}^{n}$
as~$j\to+\infty$,
and
$$
\frac{|u(x)-u_j(x)|^{p^*_s}}{|x|^{\frac{2ap^*_s}{p}}}
\le \frac{2^{p^*_s} |u(x)|^{p^*_s}}{|x|^{\frac{2ap^*_s}{p}}}\in L^1({\mathbb R}^{n}).$$
As a consequence, by the Dominated Convergence Theorem,
$$ \lim_{j\to+\infty}\| u-u_j\|_{L^{p^*_s}_a({\mathbb R}^n)}=0.$$
So, having fixed~$\delta>0$, we find~$j_\delta\in{\mathbb N}$ such that
\begin{equation}\label{9dcvb4wersdvqewas0009-s}
\| u-u_{j_\delta}\|_{L^{p^*_s}_a({\mathbb R}^n)}\le \delta.\end{equation}
Notice that~$u_{j_\delta}$ is supported
in~$\overline{B_{j_\delta+1}}$.
Also, given a set~$A\subseteq{\mathbb R}^{n}$, we set
$$ \mu_{a}(A):=
\int_{A}\frac{dx}{|x|^{\frac{2ap^*_s}{p}}}.$$
By~\eqref{range}, we see that~$\mu_{a}$ is finite
over compact sets.
So, we can use Lusin's Theorem (see e.g. Theorem~7.10 in~\cite{Fol},
and page~121 there for the definition of the uniform norm).
We obtain that there exists a closed set~$E_\delta\subset{\mathbb R}^{n}$
and a continuous and compactly supported function~$u_\delta:{\mathbb R}^{n}\to{\mathbb R}$
such that~$u_{\delta}=u_{j_\delta}$ in~${\mathbb R}^{n}\setminus E_\delta$,
$\mu_{a}(E_\delta)\le\delta^{p^*_s}$ and~$\|u_\delta\|_{L^\infty({\mathbb R}^{n})}\le
\|u_{j_\delta}\|_{L^\infty({\mathbb R}^{n})}$.
In particular, since~$\tau_{j_\delta}\in[0,1]$,
we have that~$\|u_\delta\|_{L^\infty({\mathbb R}^{n})}
\le\|u\|_{L^\infty({\mathbb R}^{n})}$,
and this quantity is finite, due to~\eqref{dfgfdjnthgrefdwrg-s}. Therefore
\begin{eqnarray*}&& \| u_{j_\delta}-u_{\delta}\|_{L^{p^*_s}_a({\mathbb R}^n)}^{p^*_s}
=
\int_{E_\delta} |u_{j_\delta}(x)-u_{\delta}(x)|^{p^*_s}
\,\frac{dx}{|x|^{\frac{2ap^*_s}{p}}}
\\ &&\qquad\le 2^{p^*_s} \big( \|u_{j_\delta}\|_{L^\infty({\mathbb R}^{n})}^{p^*_s}+
\|u_{\delta}\|_{L^\infty({\mathbb R}^{n})}^{p^*_s}\big)\mu_{a}(E_\delta)\le
2^{p^*_s+1}\|u\|_{L^\infty({\mathbb R}^{n})}^{p^*_s} \delta^{p^*_s}.\end{eqnarray*}
{F}rom this and~\eqref{9dcvb4wersdvqewas0009-s},
we obtain that~$\|u-u_{\delta}\|_{L^{p^*_s}_a({\mathbb R}^n)}
\leq\big(1+4\|u\|_{L^\infty({\mathbb R}^{n})}\big)\delta$, which concludes the proof.
\end{proof}
\section{Approximation with smooth functions}\label{sec:smooth}
In this section we show that we can approximate a function in the space $\dot{W}^{s,p}_a({\mathbb R}^n)$
with a smooth one.
We remark that, if there are no weights,
smooth approximation is much more standard,
since one can directly use
the continuity of translations in $L^p({\mathbb R}^{2n})$.
Since the weights are not translation invariant, and the continuity
of translations in Lebesgue spaces is, in general, not
uniform, a more careful procedure is needed in our case
(namely, to overcome this difficulty we exploit
the techniques developed in Sections~\ref{AVE}
and~\ref{AVE2}).
We take a radially symmetric, radially decreasing
function $\eta\in C^\infty_0({\mathbb R}^n)$
such that $\eta\ge0$, ${\rm supp}\;\eta\subseteq B_1$ and
\begin{equation}\label{int one}
\int_{B_1}\eta(x)\,dx=1,
\end{equation}
and, for $\epsilon>0$, we define the mollifier $\eta_\epsilon$ as
$$ \eta_\epsilon(x):=\frac{1}{\epsilon^{n}}\eta\left(\frac{x}{\epsilon}\right),
\quad {\mbox{ for any }} x\in{\mathbb R}^n.$$
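For concreteness, one admissible choice (given here only as an example; it is not needed in what follows) is
$$ \eta(x):= c_n\, e^{\frac{1}{4|x|^2-1}} \ {\mbox{ if }} |x|<\frac12, \qquad \eta(x):=0 \ {\mbox{ otherwise,}}$$
with~$c_n>0$ a normalizing constant chosen so that~\eqref{int one} holds; this~$\eta$ is smooth, radially symmetric, radially decreasing, and supported in~$\overline{B_{1/2}}\subset B_1$.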
Then, given $u\in\dot{W}^{s,p}_a({\mathbb R}^n)$, we consider its standard
convolution with the
mollifier $\eta_\epsilon$. That is, for any $\epsilon>0$, we define
\begin{equation}\label{STARR2}
u_\epsilon(x):=(u*\eta_\epsilon)(x)=\int_{{\mathbb R}^n} u(x-z)\,\eta_\epsilon(z)\,dz, \quad {\mbox{ for any }} x\in{\mathbb R}^n.\end{equation}
By construction, $u_\epsilon\in C^\infty({\mathbb R}^n)$.
We will show that, if $\epsilon$ is sufficiently small, then
the error made approximating $u$ with $u_\epsilon$ is ``small''.
The rigorous result is the following:
\begin{lemma}\label{lemma smooth}
Let~$u\in\dot{W}^{s,p}_a({\mathbb R}^n)$. Then
$$ \lim_{\epsilon\to 0} \|u-u_\epsilon\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}=0.$$
\end{lemma}
\begin{proof}
We first check that
\begin{equation}\label{7890sdfgh}
\lim_{\epsilon\to 0}\|u-u_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}=0.\end{equation}
To this scope, we start by proving that
\begin{equation}\label{PRE-s}
\begin{split}&{\mbox{if~$\widetilde u:{\mathbb R}^n\to{\mathbb R}$ is continuous and compactly supported, then}}
\\&\qquad \lim_{\epsilon\to0} \| \widetilde u-\widetilde u*
\eta_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}=0.
\end{split}\end{equation}
For this, we fix~$\epsilon_o>0$
and we use the fact that~$\widetilde u$ is uniformly continuous
to write that
$$ \sup_{z\in B_1}
|\widetilde u(x-\epsilon z)-\widetilde u(x)|\le \epsilon_o,$$
provided that~$\epsilon$ is small
enough (possibly in dependence of~$\epsilon_o$).
Also, since~$\widetilde u$ is compactly supported, say in~$B_R$,
and writing~$b:=\frac{2ap^*_s}{p}$, we obtain that
\begin{eqnarray*}
&& \int_{{\mathbb R}^{n}} |\widetilde u(x)-\widetilde u*\eta_\epsilon(x)|^{p^*_s}
\frac{dx}{|x|^b}
\le \int_{B_{R+1}}
\left[ \int_{B_1} \big|\widetilde u(x)-\widetilde u
(x-\epsilon z)\big|\,\eta(z)\,dz\right]^{p^*_s}
\frac{dx}{|x|^b}\\
&&\qquad\le \epsilon_o^{p^*_s}\, \int_{B_{R+1}}\frac{dx}{|x|^b}= C\epsilon_o^{p^*_s},
\end{eqnarray*}
with~$C$ independent of~$\epsilon$ and~$\epsilon_o$.
Since~$\epsilon_o$ can be taken arbitrarily small,
the proof of~\eqref{PRE-s} is complete.
Now we prove~\eqref{7890sdfgh}. For this,
we fix~$\epsilon_o>0$, to be taken as small as we wish in the sequel,
and we use Lemma~\ref{678dvfdh7654wedrft-s} to find
a continuous and compactly supported
function~$\widetilde u:{\mathbb R}^{n}\to{\mathbb R}$
such that~$\| u-\widetilde u\|_{L^{p^*_s}_a({\mathbb R}^n)}\le\epsilon_o$.
By Proposition~\ref{7scdvgr3fcedrefd-s}, we deduce that
$$ \|u*\eta_\epsilon -\widetilde u*\eta_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}
=\|(u-\widetilde u)*\eta_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}\le
C \|u-\widetilde u\|_{L^{p^*_s}_a({\mathbb R}^n)}\le C\epsilon_o.$$
Furthermore, by~\eqref{PRE-s}, we know that
$$ \| \widetilde u-\widetilde u*
\eta_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}\le\epsilon_o,$$
as long as $\epsilon$ is sufficiently small. By collecting these pieces
of information, we conclude that
\begin{eqnarray*} \|u-u_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}&\le&
\|u-\widetilde u\|_{L^{p^*_s}_a({\mathbb R}^n)}+
\|\widetilde u-\widetilde u*\eta_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}
+\|\widetilde u*\eta_\epsilon-u*\eta_\epsilon\|_{L^{p^*_s}_a({\mathbb R}^n)}
\\&\le& (2+C)\epsilon_o.\end{eqnarray*}
This completes the proof of~\eqref{7890sdfgh}.
Now we recall the notation in~\eqref{STARR} and we prove that
\begin{equation}\label{PRE}
\begin{split}&{\mbox{if~$v:{\mathbb R}^{2n}\to{\mathbb R}$ is continuous and compactly supported, then}}
\\&\qquad \lim_{\epsilon\to0} \| v-v\star\eta_\epsilon\|_{L^p_{a,a}({\mathbb R}^{2n})}=0.
\end{split}\end{equation}
For this, we fix~$\epsilon_o>0$
and we use the fact that~$v$ is uniformly continuous
to write that
$$ \sup_{z\in B_1}
|v(x-\epsilon z,y-\epsilon z)-v(x,y)|\le \epsilon_o,$$
provided that~$\epsilon$ is small enough (possibly in dependence of~$\epsilon_o$).
Also, since~$v$ is compactly supported, say in~$\{ |(x,y)|\le R\}$,
for some~$R>0$, we have that
$$ v(x,y)=0=v(x-\epsilon z,y-\epsilon z)$$
if~$z\in B_1$ and~$\max\{ |x|,|y|\}\ge R+1$, as long as~$\epsilon<1$. Moreover
$$ v(x,y)-v\star\eta_\epsilon(x,y)
= \int_{B_1} \Big(v(x,y)-v(x-\epsilon z,y-\epsilon z)\Big)\,\eta(z)\,dz,$$
and, as a consequence,
\begin{eqnarray*}
&& \iint_{{\mathbb R}^{2n}} |v(x,y)-v\star\eta_\epsilon(x,y)|^p
\frac{dx\,dy}{|x|^a|y|^a}\\
&\le& \iint_{B_{R+1}\times B_{R+1}}
\left[ \int_{B_1} \big|
v(x,y)-v(x-\epsilon z,y-\epsilon z)\big|\,\eta(z)\,dz\right]^p
\frac{dx\,dy}{|x|^a|y|^a}\\
&\le& \epsilon_o^p\, \iint_{B_{R+1}\times B_{R+1}}\frac{dx\,dy}{|x|^a|y|^a}\\
&=& C\epsilon_o^p,
\end{eqnarray*}
with~$C$ depending on~$v$,
but independent of~$\epsilon$ and~$\epsilon_o$.
Since~$\epsilon_o$ can be taken arbitrarily small,
the proof of~\eqref{PRE} is complete.
Now we are in the position of completing the proof of Lemma~\ref{lemma smooth}.
We remark that,
by~\eqref{7890sdfgh}, and
recalling \eqref{norm} and~\eqref{norm2}, in order to prove
Lemma~\ref{lemma smooth},
it only remains to show that
\begin{equation}\label{limit}
\lim_{\epsilon\to0}\iint_{{\mathbb R}^{2n}}\frac{|u(x)-u_\epsilon(x)-u(y)+u_\epsilon(y)|^p}{|x-y|^{n+sp}}\,\frac{dx}{|x|^a}
\,\frac{dy}{|y|^a}=0.
\end{equation}
To this goal, we let
\begin{equation*} v^{(u)}(x,y):= \frac{u(x)-u(y)}{
|x-y|^{\frac{n}{p}+s}}.\end{equation*}
By comparing~\eqref{STARR}
and~\eqref{STARR2}, we see that
\begin{equation}\label{PRE1}
\begin{split}
& v^{(u)} \star\eta_\epsilon(x,y)
=\int_{{\mathbb R}^n} v^{(u)}(x-z,y-z)\,\eta_\epsilon(z)\,dz \\
&\quad=\int_{{\mathbb R}^n} \frac{u(x-z)-u(y-z)}{
|x-y|^{\frac{n}{p}+s}}\,\eta_\epsilon(z)\,dz =
\frac{u*\eta_\epsilon(x)-u*\eta_\epsilon(y)}{|x-y|^{\frac{n}{p}+s}}=v^{(u*\eta_\epsilon)}(x,y).
\end{split}
\end{equation}
We fix~$\epsilon_o>0$, to be taken as small as we wish
in the sequel, and use
Lemma~\ref{678dvfdh7654wedrft},
to find a continuous and compactly supported
function~$v$
such that
\begin{equation}\label{NN1}
\| v^{(u)}-v\|_{L^p_{a,a}({\mathbb R}^{2n})}\le\epsilon_o.\end{equation}
Notice that, by~\eqref{PRE},
\begin{equation}\label{NN1.1}
\| v-v\star \eta_\epsilon\|_{L^p_{a,a}({\mathbb R}^{2n})}\le\epsilon_o,\end{equation}
as long as~$\epsilon$ is sufficiently small.
Moreover, by Proposition~\ref{7scdvgr3fcedrefd} (applied here to
the function~$v^{(u)}-v$)
and~\eqref{NN1}, we have that
\begin{equation}\label{NN2}
\big\| (v^{(u)}-v)\star\eta_\epsilon\big\|_{L^p_{a,a}({\mathbb R}^{2n})}\le
C\,\| v^{(u)}-v\|_{L^p_{a,a}({\mathbb R}^{2n})}\le C \epsilon_o.\end{equation}
Also, by~\eqref{9as8dfb0oijh4trdskjjhh}
$$ [u-u*\eta_\epsilon]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}=
\| v^{(u-u*\eta_\epsilon)}\|_{L^p_{a,a}({\mathbb R}^{2n})}=
\| v^{(u)}-v^{(u*\eta_\epsilon)}\|_{L^p_{a,a}({\mathbb R}^{2n})}.$$
Thus, recalling~\eqref{PRE1},
$$ [u-u*\eta_\epsilon]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}=
\| v^{(u)}-v^{(u)}\star \eta_\epsilon\|_{L^p_{a,a}({\mathbb R}^{2n})}.$$
Accordingly, by~\eqref{NN1}, \eqref{NN1.1} and~\eqref{NN2},
\begin{eqnarray*}
[u-u*\eta_\epsilon]_{\widetilde{W}^{s,p}_a({\mathbb R}^n)}&\le&
\| v^{(u)}-v\|_{L^p_{a,a}({\mathbb R}^{2n})}+
\| v-v\star \eta_\epsilon\|_{L^p_{a,a}({\mathbb R}^{2n})}+
\| v\star \eta_\epsilon-v^{(u)}\star \eta_\epsilon\|_{L^p_{a,a}({\mathbb R}^{2n})}\\
&\le& (2+C)\epsilon_o.
\end{eqnarray*}
Since~$\epsilon_o$ can be taken arbitrarily small, we have proved~\eqref{limit},
and therefore the proof of Lemma~\ref{lemma smooth} is
complete.
\end{proof}
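Lemma~\ref{lemma smooth} can also be tested numerically in a simplified setting. The following minimal sketch (an illustration only, with toy data of our choosing: dimension one, $u(x)=e^{-x^2}$, weight $|x|^{-1/2}$, and exponent $2$ in place of the weighted norms above) approximates $u*\eta_\epsilon$ by a discrete convolution and exhibits the decay of the weighted error as $\epsilon\to0$:
\begin{verbatim}
import numpy as np

# Toy check (illustration only): u(x) = exp(-x^2) on a 1D midpoint grid,
# weighted L^2 error with weight |x|^(-1/2), bump mollifier on (-eps, eps).
N = 12000
h = 12.0 / N
x = np.linspace(-6 + h / 2, 6 - h / 2, N)        # midpoints avoid x = 0
u = np.exp(-x**2)
w = np.abs(x) ** (-0.5)

def mollify(u, eps):
    n = int(round(2 * eps / h)) - 1              # odd, symmetric node count
    z = (np.arange(n) - (n - 1) / 2) * h         # grid inside (-eps, eps)
    eta = np.exp(1.0 / ((z / eps) ** 2 - 1.0))   # bump supported in (-eps, eps)
    eta /= eta.sum() * h                         # unit mass, as in (int one)
    return np.convolve(u, eta, mode="same") * h  # discrete u * eta_eps

for eps in [0.8, 0.4, 0.2, 0.1]:
    err = np.sqrt(((u - mollify(u, eps)) ** 2 * w).sum() * h)
    print(f"eps = {eps}: ||u - u_eps|| ~ {err:.2e}")  # decays as eps -> 0
\end{verbatim}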
\section{Proof of Theorem \ref{TH}}\label{sec:proof}
Let $u\in\dot{W}^{s,p}_a({\mathbb R}^n)$, and fix $\delta>0$.
If $\tau_j$ is as in Lemma \ref{lemma support}, then for $j$ large enough we have that
\begin{equation}\label{primo}
\|u-\tau_ju\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}<\frac{\delta}{2},
\end{equation}
thanks to Lemma \ref{lemma support}.
Now, for any $\epsilon>0$, let $\eta_\epsilon$ be the mollifier defined
at the beginning of Section \ref{sec:smooth}. We set
$$ \rho_\epsilon:=\tau_j u*\eta_\epsilon.$$
By construction, $\rho_\epsilon\in C^\infty({\mathbb R}^n)$.
Moreover, standard properties of the convolution imply that
$$ {\rm supp}\; \rho_\epsilon\subseteq {\rm supp}\;(\tau_j u)+\overline{B}_\epsilon. $$
Also (see e.g. Lemma 9 in \cite{FSV}) one sees that
$$ {\rm supp}\;(\tau_j u)\subseteq ({\rm supp}\; \tau_j)\cap ({\rm supp}\; u)\subseteq \overline{B}_{2j}\cap ({\rm supp}\; u).$$
Hence
$$ {\rm supp}\; \rho_\epsilon\subseteq \left(\overline{B}_{2j}\cap ({\rm supp}\; u)\right) +\overline{B}_\epsilon.$$
As a consequence, $\rho_\epsilon\in C^\infty_0({\mathbb R}^n)$.
Furthermore, Lemma \ref{lemma smooth} gives that
$$ \|\rho_\epsilon-\tau_ju\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}<\frac{\delta}{2}, $$
if $\epsilon$ is sufficiently small.
Therefore, from this and \eqref{primo} we obtain that
$$ \|u-\rho_\epsilon\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}\le
\|u-\tau_ju\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}+ \|\tau_ju-\rho_\epsilon\|_{\dot{W}^{s,p}_a({\mathbb R}^n)}
<\frac{\delta}{2}+\frac{\delta}{2}=\delta. $$
Since $\delta$ can be taken arbitrarily small, this concludes the proof of Theorem \ref{TH}.
| {
"timestamp": "2015-01-21T02:14:20",
"yymm": "1501",
"arxiv_id": "1501.04918",
"language": "en",
"url": "https://arxiv.org/abs/1501.04918",
"abstract": "In this paper we show a density property for fractional weighted Sobolev spaces. That is, we prove that any function in a fractional weighted Sobolev space can be approximated by a smooth function with compact support.The additional difficulty in this nonlocal setting is caused by the fact that the weights are not necessarily translation invariant.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "A density property for fractional weighted Sobolev spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.969785415552379,
"lm_q2_score": 0.7310585669110203,
"lm_q1q2_score": 0.7089699361049305
} |
https://arxiv.org/abs/2008.02238 | On the equilibrium shape of a crystal | A solution is given to a long-standing open problem posed by Almgren. | \section{Introduction}
According to thermodynamics, the equilibrium shape of a small drop of water or a small crystal minimizes the free energy under a mass constraint. The phenomenon was independently discovered by W. Gibbs in 1878 \cite{G} and P. Curie in 1885 \cite{Crist}. Assuming the gravitational effect is negligible, minimizing the free energy amounts to minimizing the surface energy, and the solution is the convex set
$$
K= \bigcap_{v \in \mathbb{S}^{n-1}} \{x \in \mathbb{R}^n: \langle x, v\rangle< f(v)\}
$$
called the Wulff shape, where $f$ is a surface tension, i.e. a convex, positively 1-homogeneous function
$$f:\mathbb{R}^n\rightarrow [0,\infty)$$
with $f(x)>0$ if $x \neq 0$ \cite{flashes, hilt, liebm, lau, MR0012454, MR82697, MR493671, MR1116536, MR1130601, MR1170247}. If $f(v)=R |v|$, i.e. the surface tension is isotropic, $K=B_R$ is the solution of the classical isoperimetric problem.
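For a crystalline example, take $f(v)=|v_1|+\ldots+|v_n|$: since $\langle x, v\rangle \le \|x\|_\infty\,(|v_1|+\ldots+|v_n|)$, with equality when $v$ is a suitable coordinate direction, one obtains $K=\{x\in\mathbb{R}^n: \|x\|_\infty<1\}$, the open unit cube, whose boundary consists of flat facets, as expected for a crystal.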
Two main ingredients define the free energy of a set of finite perimeter $E \subset \mathbb{R}^n$ with reduced boundary $\partial^* E$: the surface energy
$$
\mathcal{F}(E)=\int_{\partial^* E} f(\nu_E) d\mathcal{H}^{n-1};
$$
and the potential energy
$$
\mathcal{G}(E)=\int_E g(x)dx,
$$
where $g \ge 0$, $g(0)=0$.
The free energy is the sum:
$$
\mathcal{E}(E)=\mathcal{F}(E)+\mathcal{G}(E).
$$
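For a polygonal set $E\subset\mathbb{R}^2$, the surface energy reduces to a finite sum over edges: $\mathcal{F}(E)=\sum_i f(\nu_i)\,\mathcal{H}^1(e_i)$, where $\nu_i$ is the outward unit normal of the edge $e_i$. The following minimal numerical sketch (an illustration of this formula only; the crystalline tension $f(v)=|v_1|+|v_2|$ and the test polygon are assumptions of the example, not taken from the text) evaluates the sum:
\begin{verbatim}
import numpy as np

def surface_energy(vertices, f):
    """F(E) = sum_i f(nu_i) * length(e_i) over the edges of a polygon
    with counterclockwise vertices; nu_i is the outward unit normal."""
    V = np.asarray(vertices, dtype=float)
    total = 0.0
    for i in range(len(V)):
        d = V[(i + 1) % len(V)] - V[i]            # edge vector
        length = np.hypot(d[0], d[1])
        nu = np.array([d[1], -d[0]]) / length     # outward normal (CCW)
        total += f(nu) * length
    return total

triangle = [(0, 0), (1, 0), (0, 1)]               # CCW right triangle
crystalline = lambda v: np.abs(v).sum()           # f(v) = |v_1| + |v_2|
print(surface_energy(triangle, crystalline))      # 4.0
print(surface_energy(triangle, np.linalg.norm))   # perimeter: 2 + sqrt(2)
\end{verbatim}
With the isotropic tension $f(v)=|v|$, the same routine returns the Euclidean perimeter.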
In a gravitational field, the equilibrium shape for liquids was studied by P.S. Laplace in the early 1800s \cite{cap}. If the surface tension is isotropic, uniqueness and convexity were obtained by Finn \cite{MR607991, MR816345} and Wente \cite{MR607986}. If $n=2$, subject to a wetting condition, the anisotropic tension was investigated by Avron, Taylor, and Zia via quadrature \cite{MR732374}. The work was motivated by low temperature experiments on helium crystals in equilibrium with a superfluid \cite{e1, e2, e3, e4, e5}. Also, various phase transition experiments were conducted in \cite{tr9, tr} and shape variations were investigated in \cite{trk} by L.D. Landau.
McCann considered the equilibrium shape for convex potentials with a bounded zero set \cite{MR1641031}. The central result is that the equilibrium planar crystals are a finite union of disjoint convex sets and each non-trivial component minimizes the free energy uniquely among convex sets of the same mass.
Moreover, he proved that if the Wulff shape and potential are symmetric under $x \rightarrow -x$, there is a unique convex minimizer. Therefore, this provides information on the following problem mentioned in his paper:\\
Even when the field is the (negative) gradient of a convex potential, the equilibrium crystal is not known to be connected, much less convex or unique, \cite[p. 700]{MR1641031}. \\
If $g$ is locally bounded, Theorem \ref{@'} yields a stability result for small mass in any dimension which is stronger than uniqueness.
Assuming $g \in C^1$ is coercive ($g(x)\rightarrow \infty$ as $|x|\rightarrow \infty$) and $f \in C^{2, \alpha}(\mathbb{R}^n\setminus\{0\})$, $\alpha \in (0,1)$, is $\lambda$-elliptic, Theorem 2 in Figalli and Maggi \cite{MR2807136} yields convexity if the mass is small (a recent result obtained by Figalli and Zhang \cite{pFZ} implies that when $f$ is crystalline, if the mass is small, minimizers are polyhedra, cf. Remark \ref{Qzn}). Hence, the theorems together imply the uniqueness and convexity of minimizers if the mass is small.
Theorem \ref{85} asserts that either: uniqueness and convexity hold for all masses; or, there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M}]$, $E_m$ is unique and convex and for $m>\mathcal{M}$ there exists $a<m$ such that either convexity or uniqueness fails for mass $a$; or, the optimal critical mass $\mathcal{M}$ is exposed via
\begin{equation*} \label{Mph}
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}}}{w_m(\epsilon)} \ge \frac{1}{\gamma},
\end{equation*}
\\
where $w_m(\epsilon)>0$ is a modulus for the energy (see Proposition \ref{K}), $\gamma>0$ and $\epsilon \le \epsilon_0$, where $\epsilon_0$ is small. In two dimensions, the regularity assumptions are superfluous and the parameter which encodes the phase change is identified via Theorem \ref{8p}. Supposing the sub-level sets $\{g < t\}$ are convex $\&$ assuming a uniqueness property in the class of convex sets (valid for convex $g$), the critical mass is completely identified in Theorem \ref{8qp}.
A convexity theorem for all masses necessarily must have restrictions on the potential. The following problem historically is attributed to Almgren \cite{MR2807136, MR1641031}.\\
{\bf Problem:} If the potential $g$ is convex (or, more generally, if the sub-level sets $\{g < t\}$
are convex), are minimizers convex or, at least, connected? \cite[p. 146]{MR2807136}.\\
\noindent I first proved that a convexity assumption is in general not sufficient.
\begin{thm} \label{g}
There exists $g \ge 0$ convex such that $g(0)=0$ $\&$ such that if $m>0$, then there is no solution to
$$
\inf\{\mathcal{E}(E): |E|=m\}.
$$
\end{thm}
Nevertheless, subject to additional assumptions, if the sub-level sets $\{g < t\}$
are convex, the convexity is true:
\begin{thm} \label{n=2}
If $n=2$ and\\
(i) g is locally Lipschitz in $\{g<\infty\}$\\
(ii) g admits minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$ \\
(iii) the sub-level sets $\{g<t\}$ are convex\\
(iv) when $E \subset \{g<\infty\}$ is bounded convex, $0 \notin E$, $|E \cap \{g \neq 0\}|>0$, then
$$
\int_E \nabla g(x) dx \neq 0,
$$
then $E_m$ is convex for all $m \in (0,|\{g<\infty\}|)$.
\end{thm}
Note that if $g$ is convex, (i) is true. In addition, if $g$ is coercive, (ii) holds. In certain configurations, one may prove existence for non-coercive potentials, e.g. the gravitational potential \cite{MR732374, MR361996, MR420406}. Hence, (ii) is a natural assumption to ensure that the example in Theorem \ref{g} is precluded. In particular, since coercivity excludes the gravitational potential, a natural assumption is one that includes it; fortunately, it is not difficult to prove that (iv) includes the gravitational potential and therefore unifies the theory.
The result is the first theorem for $|\{g<\infty\}|=\infty$ with the weakest assumption formulated in Almgren's problem: convexity of sub-level sets.
If $g \in C^{1,\alpha}$ is strictly convex coercive, $f \in C^{3,\alpha}(\mathbb{R}^n\setminus \{0\})$ is uniformly elliptic, connectedness of equilibrium shapes was obtained by De Philippis and Goldman for $m \in (0,\infty)$ \cite{D}; if $n=2$ the strict convexity is not needed. Therefore, their theorem combined with McCann's theorem yields uniqueness and convexity in $\mathbb{R}^2$ subject to the regularity assumptions. If $g$ is convex, radial, and coercive, convexity of the equilibrium shapes for general convex surface tensions and $m \in (0,\infty)$ is a corollary of the above theorem without the regularity assumptions (Corollary \ref{ra}).
\begin{cor}
If $n=2$ and\\
(i) $g=g(|x|)$ is locally Lipschitz \\
(ii) $g$ is coercive \\
(iii) when $g>0$, $g$ is (strictly) increasing \\
then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
\begin{cor}[The radial convex potential] \label{ra}
If $n=2$ and $g=g(|x|)$ is coercive and convex, then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
Assuming $g$ to be the gravitational potential, existence $\&$ convexity was proved by Baer in higher dimension with a structural assumption on the surface tension \cite{R}, cf. \cite{MR2126076, MR2321891, MR2349864, MR2875653, MR872883, MR493670, MR543126, MR543127, MR872883}. Assuming smoothness on the surface tension, he proved uniqueness. If $m$ is small, Corollary \ref{@'w09} yields uniqueness for more general gravitational potentials and without the additional regularity assumption.
In two dimensions, the above theorem implies convexity when the surface tension admits minimizers (e.g. $f$ is admissible \cite{R}).
\begin{cor} \label{2-gr}
If $n=2$, $\phi(0)= 0$, $\phi'>0$, and\\
(i)
\begin{equation*}
g(x)=\begin{cases}
\phi(x_2) & \text{if } x_2\ge 0\\
\infty & \text{if }x_2<0
\end{cases}
\end{equation*}
(ii) $\mathcal{F}$ satisfies assumptions for the existence of minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$, \\
\noindent then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
Moreover, with convexity of sub-level sets, Theorem \ref{mi} yields that minimizers are a finite union of convex sets with disjoint closures. Therefore, this contains the geometry in McCann's result when $g$ is assumed convex with bounded zero set. In my proofs of Theorems \ref{n=2} $\&$ \ref{q} (and Corollary \ref{m+e}), Theorem \ref{mi} is an important technical tool.
The technique implies the following: if $A \subset \mathbb{R}^2$ is bounded, convex and
$$\mathcal{F}(E_m)=\inf\{\mathcal{F}(E): E\subset A, |E|=m\},$$ with $|K_a|<m\le |A|$ $\&$ $|K_a|$ the measure of the largest Wulff shape in $A$, then $E_m$ is convex. Interestingly, understanding the convexity in the case of the isotropic perimeter was the main objective in \cite{MR1669207} $\&$ the problem has appeared in a few subsequent papers: \cite[Theorem 3.32.]{MR1669207}, \cite[Theorem 11]{MR2178065}, \cite[(1.8)]{MR2436794}, and \cite[Remark 2.6.]{MR2468216}.
One feature is to investigate the problem within a wider framework also inclusive of the Cheeger constant of a fixed domain.
Supposing the surface tension to be even $\&$ a special condition involving $A$ and $f$, convexity for an interval was proved when $n \ge 2$ in \cite[Theorem 6.5.]{MR2436794} (cf. \cite{MR3719067}).
Similar problems are investigated in convergence of curvature flows \cite{MR2248685, MR3981988, MR4025327, MR2558422, MR2208291, MR1205983, MR1078266, MR1087347}.
\section{Main theorems}
\subsection{Proof of Theorem \ref{g}}
\begin{proof}
Define
\begin{equation*}
g(x,y)=\begin{cases}
x^2(1-y)+x^2y^2 & \text{if } y\le0\\
\frac{x^2}{1+y} & \text{if } y > 0.
\end{cases}
\end{equation*}
Note that
\begin{equation*}
D^2g(x,y)=\begin{cases}
\begin{pmatrix}
2(1-y+y^2) & 4xy-2x \\
4xy-2x & 2x^2
\end{pmatrix}
& \text{if } y\le0\\
\begin{pmatrix}
\frac{2}{1+y} & -\frac{2x}{(1+y)^2} \\
-\frac{2x}{(1+y)^2} & \frac{2x^2}{(1+y)^3}
\end{pmatrix}
& \text{if } y > 0;\\
\end{cases}
\end{equation*}
let $\lambda_1, \lambda_2$ be the eigenvalues of $D^2g(x,y)$,
\begin{equation*}
\lambda_1(x,y)\lambda_2(x,y)=\det(D^2g(x,y))=\begin{cases}
12x^2y(1-y)
& \text{if } y\le0\\
0
& \text{if } y > 0,
\end{cases}
\end{equation*}
\begin{equation*}
\lambda_1(x,y)+\lambda_2(x,y)=\begin{cases}
2(1-y+y^2+x^2)
& \text{if } y\le0\\
\frac{2}{1+y}\Big(1 + \frac{x^2}{(1+y)^2}\Big)
& \text{if } y > 0.\\
\end{cases}
\end{equation*}
\begin{figure}[htbp]
\caption{$g$}
\label{f211}
\centering
\includegraphics[width=.6 \textwidth]{saq.pdf}
\end{figure}
In particular,
$$
\lambda_1(x,y), \lambda_2(x,y) \ge 0.
$$
Hence, $D^2g(x,y)$ is a real symmetric positive semidefinite matrix and can therefore be diagonalized with a non-negative diagonal matrix $\Lambda(x,y)$ and a real orthogonal matrix $O(x,y)$ whose columns are real eigenvectors:
$$
D^2g(x,y)=O(x,y)\Lambda(x,y) O(x,y)^T.
$$
\noindent Assume $w \in \mathbb{R}^2 \setminus \{0\}$ and set $z=O(x,y)^T w$;
\begin{align*}
\langle D^2g(x,y) w, w \rangle&= \langle O(x,y)\Lambda(x,y) O(x,y)^T w, w \rangle\\
&=\langle \Lambda(x,y) O(x,y)^T w, O(x,y)^T w \rangle \\
&= \lambda_1(x,y)|z_1|^2+\lambda_2(x,y)|z_2|^2 \ge 0
\end{align*}
and this implies that $g$ is convex, see Figure \ref{f211}.
Set $e_2=(0,1)$, $a>0$; the potential is non-increasing in the $y$-variable and strictly decreasing if $x\neq 0:$
$$
\partial_y g(x,y)=
\begin{cases}
x^2(-1+2y)
& \text{if } y\le0\\
-\frac{x^2}{(1+y)^2}
& \text{if } y > 0;\\
\end{cases}
$$
in particular, if a minimizer $E_m$ exists,
$$
\int_{E_m+ae_2} g(x,y) dxdy < \int_{E_m} g(x,y) dxdy,
$$
\begin{figure}[htbp]
\centering
\includegraphics[width=.6 \textwidth]{a12}
\caption{$g^\epsilon$}
\label{a12}
\end{figure}
\noindent which yields
$$
\mathcal{E}(E_m+ae_2)< \mathcal{E}(E_m),
$$
a contradiction.
Observe that one may also concatenate another function at $y=0$: let $\epsilon>0$ $\&$ define
$$
g^\epsilon(x,y)=
\begin{cases}
x^2(1-y)+\frac{\epsilon}{2}y^2
& \text{if } -\sqrt{\frac{\epsilon}{2}} \le x \le \sqrt{\frac{\epsilon}{2}}, y \le 0\\
\frac{x^2}{1+y}
& \text{if } -\sqrt{\frac{\epsilon}{2}} \le x \le \sqrt{\frac{\epsilon}{2}}, y >0\\
\end{cases}
$$
$\&$ extend $g^\epsilon$ by a convex envelope, see Figure \ref{a12}.
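For the reader's convenience, a direct check that $g^\epsilon$ is convex on the strip $\{|x|\le\sqrt{\epsilon/2}\}$: for $y\le0$,
$$
D^2g^\epsilon(x,y)=\begin{pmatrix} 2(1-y) & -2x \\ -2x & \epsilon \end{pmatrix}, \qquad \det D^2g^\epsilon(x,y)=2\epsilon(1-y)-4x^2\ge 2\epsilon-4x^2\ge0,
$$
while for $y>0$ the Hessian coincides with the one computed above for $\frac{x^2}{1+y}$, with vanishing determinant; in both cases the trace is positive, hence $D^2g^\epsilon\ge0$ on the strip.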
\end{proof}
\begin{rem}
Note that to apply sharp theorems on the extension of $g^\epsilon$, the corners of the domain can be smoothed and the extension then inherits up to $C_{loc}^{1,1}$ regularity \cite{MR3324933} (an application to PDEs is given in \cite{MR3933403}).
\end{rem}
\subsection{A stability theorem}
The counterexample shows that in a general context, more assumptions are necessary to ensure existence. A well-known assumption is coercivity. Nevertheless, coercivity excludes the gravitational potential. In order to obtain convexity without relying on constrained settings for existence, {\bf $g$ admits minimizers} is defined to mean any assumption generating a minimizer: one may e.g. obtain existence with $\mathcal{F}=\mathcal{H}^{n-1}$ and the gravitational potential by utilizing Steiner symmetrization \cite{MR2178968, MR3055761} to rule out minimizing sequences escaping to infinity, which would otherwise obstruct a compactness argument. Other situations specific to the potential or surface tension may likewise generate existence although not included under coercivity.
\begin{thm} \label{@'}
Suppose $g \in L_{loc}^\infty(\{g<\infty\})$ admits minimizers $E_m \subset B_{R}$ if $m$ is small. For all $\epsilon>0$ there exist $c_\epsilon>0$, $a=a(\cdot, \epsilon)$, $m_0>0$ such that
$$
\inf_{y>0} \frac{a(y,\epsilon)}{y^{\frac{n-1}{n}}} \ge c_\epsilon,
$$
$\&$ for all $E \subset B_R$, $|E|=|E_m|=m<m_0$, if
$$
|\mathcal{E}(E_m)-\mathcal{E}(E)| < a(m,\epsilon),
$$
there exists $x_0$ such that
$$
\frac{|(E+x_0) \Delta E_m|}{|E_m|} < \epsilon.
$$
\end{thm}
\begin{proof}
Assume the theorem is false. Then there exists $\epsilon>0$ such that, for every $c>0$ and every $a(\cdot, \epsilon)$ with
$$
\inf_{y>0} \frac{a(y,\epsilon)}{y^{\frac{n-1}{n}}} \ge c,
$$
and for every $m_0 \in (0,\infty)$, there exist $m<m_0$, a minimizer $E_{m} \subset B_R$, $\&$ a set $E_{m}' \subset B_R$ with
$$|E_{m}|=|E_{m}'|=m$$
such that
$$
|\mathcal{E}(E_m)-\mathcal{E}(E_{m}')| < a(m,\epsilon)
$$
and
$$
\inf_{x_0 \in \mathbb{R}^n} \frac{|(E_{m}'+x_0) \Delta E_{m}|}{|E_{m}|} \ge \epsilon>0.
$$
Let $w_k \rightarrow 0^+$, $q$ a modulus of continuity ($q(0^+)=0$), and define $c_k=w_k q(\epsilon)$,
$a_k(y,\epsilon)=c_k y^{\frac{n-1}{n}}$; now,
$$
\inf_{y>0} \frac{a_k(y,\epsilon)}{y^{\frac{n-1}{n}}} = c_k
$$
and selecting $m_0=\frac{1}{k}$ for $k \in \mathbb{N}$, there exist minimizers $E_{m_k}\subset B_R$ and sets $E_{m_k}' \subset B_R$, $|E_{m_k}|=|E_{m_k}'|=m_k<\frac{1}{k}$ such that
$$
|\mathcal{E}(E_{m_k})-\mathcal{E}(E_{m_k}')| < a_k(m_k,\epsilon),
$$
and
$$
\inf_{x_0 \in \mathbb{R}^n} \frac{|(E_{m_k}'+x_0) \Delta E_{m_k}|}{|E_{m_k}|} \ge \epsilon>0.
$$
Set $a_k=a_k(m_k,\epsilon)$, $\gamma_k=(\frac{|K|}{m_k})^{\frac{1}{n}}$,
$$|\gamma_k E_{m_k}|=|K|;$$
since $m_k \rightarrow 0$,
$$
\delta(\gamma_k E_{m_k}):=\frac{\mathcal{F}(\gamma_k E_{m_k})}{n|K|^{\frac{1}{n}}|\gamma_k E_{m_k}|^{\frac{n-1}{n}}} - 1 \rightarrow 0
$$
(via e.g. Corollary 2 in Figalli and Maggi \cite[p. 176]{MR2807136}).
By the triangle inequality,
\begin{align*}
|\mathcal{F}(E_{m_k}')&-\mathcal{F}(E_{m_k})|\\
&=|[\mathcal{E}(E_{m_k}')-\mathcal{E}(E_{m_k})]+[\int_{E_{m_k}} g(x)dx-\int_{E_{m_k}'} g(x)dx]|\\
&\le |\mathcal{E}(E_{m_k}')-\mathcal{E}(E_{m_k})|+\int g(x)|\chi_{E_{m_k}'}-\chi_{E_{m_k}}| dx\\
&<a_k+\int_{E_{m_k}' \Delta E_{m_k}} g(x)dx.
\end{align*}
Multiplying both sides by $\gamma_k^{n-1}$,
\begin{align*}
|\mathcal{F}(\gamma_kE_{m_k}')&-\mathcal{F}(\gamma_kE_{m_k})|\\
&<\gamma_k^{n-1}a_k+2\gamma_k^{n-1} m_k(\sup_{B_R \cap \{g<\infty\}} g) \\
&=|K|^{\frac{n-1}{n}}\frac{a_k}{m_k^{\frac{n-1}{n}}}+2|K|^{\frac{n-1}{n}}(\sup_{B_R \cap \{g<\infty\}} g) m_k^{1-\frac{n-1}{n}}
\end{align*}
and since $a_k=a_k(m_k, \epsilon)=c_k m_k^{\frac{n-1}{n}}=w_k q(\epsilon)m_k^{\frac{n-1}{n}}$,
$$
\frac{a_k}{m_k^{\frac{n-1}{n}}}=w_k q(\epsilon) \rightarrow 0
$$
$$
|\mathcal{F}(\gamma_kE_{m_k}')-\mathcal{F}(\gamma_kE_{m_k})| \rightarrow 0
$$
\begin{align*}
\delta(\gamma_k E_{m_k}') & \le |\delta(\gamma_k E_{m_k}')-\delta(\gamma_k E_{m_k})|+\delta(\gamma_k E_{m_k})\\
&= \frac{1}{n|K|}|\mathcal{F}(\gamma_kE_{m_k}')-\mathcal{F}(\gamma_kE_{m_k})|+\delta(\gamma_k E_{m_k})\\
&\hskip .15in \rightarrow 0.
\end{align*}
Hence, there exist $x_k, x_k' \in \mathbb{R}^n$ such that
$$
\frac{|(\gamma_kE_{m_k}+x_k) \Delta K|}{|\gamma_k E_{m_k}|} \rightarrow 0,
$$
\&
$$
\frac{|(\gamma_kE_{m_k}'+x_k') \Delta K|}{|\gamma_k E_{m_k}'|} \rightarrow 0
$$
via a compactness argument \footnote{or, the stability of the anisotropic isoperimetric inequality \cite{MR2672283}}:
if this is not true, then up to a subsequence
$$
\inf_x \frac{|(\gamma_kE_{m_k}+x) \Delta K|}{|\gamma_k E_{m_k}|} \ge a>0;
$$
let $E_k:=\gamma_k E_{m_k}$ $\&$ observe
$$
\sup_k \mathcal{F}(E_k)<\infty,
$$
hence up to a subsequence, $E_k \rightarrow E$ in $L_{loc}^1$, $|E|=|E_k|=|K|$,
$$
\mathcal{F}(E) \le \liminf_k \mathcal{F}(E_k)=\mathcal{F}(K) \le \mathcal{F}(E)
$$
via the anisotropic isoperimetric inequality. Therefore, there exists $x \in \mathbb{R}^n$ such that
$$
0<a \le \frac{|(\gamma_kE_{m_k}+x) \Delta K|}{|\gamma_k E_{m_k}|} \rightarrow \frac{|(E+x) \Delta K|}{|K|} =0,
$$
a contradiction. Similarly, a symmetric argument implies
$$
\frac{|(\gamma_kE_{m_k}'+x_k') \Delta K|}{|\gamma_k E_{m_k}'|} \rightarrow 0
$$
and this yields $k \in \mathbb{N}$ (via the triangle inequality in $L^1$ applied to characteristic functions) such that
$$
\frac{|(E_{m_k}'+\frac{(x_k'-x_k)}{\gamma_k}) \Delta E_{m_k}|}{|E_{m_k}|}<\epsilon,
$$
a contradiction to
$$
\inf_{x_0 \in \mathbb{R}^n} \frac{|(E_{m_k}'+x_0) \Delta E_{m_k}|}{|E_{m_k}|} \ge \epsilon>0.
$$
\end{proof}
\begin{rem}
The result may also be extended to $g \in L_{loc}^1(\{g<\infty\})$ subject to some assumptions. In the extension, Lebesgue's differentiation theorem is utilized.
\end{rem}
\begin{cor} \label{@'w}
Suppose $g \in L_{loc}^\infty$ is coercive. For all $\epsilon>0$ there exist $c_\epsilon>0$, $a=a(\cdot, \epsilon)$, $m_0>0$ such that
$$
\inf_{y>0} \frac{a(y,\epsilon)}{y^{\frac{n-1}{n}}} \ge c_\epsilon,
$$
$\&$ for a minimizer $E_m \subset B_R$, $E \subset B_R$, $|E|=|E_m|=m<m_0$, if
$$
|\mathcal{E}(E_m)-\mathcal{E}(E)| < a(m,\epsilon),
$$
there exists $x_0$ such that
$$
\frac{|(E+x_0) \Delta E_m|}{|E_m|} < \epsilon.
$$
Therefore, if $m<m_0$, $E_m$ is (mod translations and sets of measure zero) unique.
\end{cor}
\begin{rem}
Suppose $g \in L_{loc}^\infty$ $\&$ $n=2$, then there exists $m_0>0$ such that for $m<m_0$, $E_m$ is unique and convex (via combining Corollary \ref{@'w} with Theorem 1 in Figalli and Maggi).
\end{rem}
\begin{rem} \label{Qzn}
Assuming $g$ is coercive, convex (strictly for $n >2$), and additional regularity,
De Philippis and Goldman \cite{D} proved minimizers are connected and therefore if $n=2$, convex $\&$ unique via McCann \cite{MR1641031}. In the theorem, I obtain uniqueness for $m$ small without regularity, without a convexity assumption, and without the displacement interpolation introduced in \cite{Mcq, Mc, MR1641031}; and this for all $n\ge2$. This also implies a new proof of convexity when $n=2$ in the convex case: De Philippis and Goldman in particular prove that there exist convex minimizers for $m \in \mathbb{R}^+$ \cite[Corollary 1.2., Remark 1.3.]{D} and if $m<m_0$, Theorem \ref{@'} implies, up to translations and sets of measure zero, that no others exist.
The small mass convexity for $n=2$ has already been proven in Figalli and Maggi's Theorem 1; the result in the corollary improves upon the convexity by adding uniqueness. To the best of my knowledge, this is the first uniqueness and convexity result for small mass with merely $g \in L_{loc}^\infty(\mathbb{R}^2)$ and general convex surface tension. If $f$ is crystalline, Figalli and Zhang obtained the geometry of minimizers in higher dimension: for sufficiently small mass minimizers are polyhedra \cite{pFZ}. Assuming a stability condition, Corollary \ref{m_s} implies convexity for small mass with $g \in L_{loc}^\infty(\mathbb{R}^n).$
\end{rem}
Also, new results on the gravitational potential are obtained via the theorem.
\begin{cor} \label{@'w09}
Suppose $0 \le \phi \in L_{loc}^\infty([0,\infty))$ and\\
(i)
\begin{equation*}
g(x)=\begin{cases}
\phi(x_n) & \text{if } x_n\ge 0\\
\infty & \text{if }x_n<0
\end{cases}
\end{equation*}
(ii) $\mathcal{F}$ satisfies assumptions for the existence of minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$,\\
\noindent then if $m>0$ is small, $E_m$ is unique. Moreover, for all $\epsilon>0$ there exist $c_\epsilon>0$, $a=a(\cdot, \epsilon)$, $m_0>0$ such that
$$
\inf_{y>0} \frac{a(y,\epsilon)}{y^{\frac{n-1}{n}}} \ge c_\epsilon,
$$
$\&$ for $E \subset B_R$, $|E|=|E_m|=m<m_0$, if
$$
|\mathcal{E}(E_m)-\mathcal{E}(E)| < a(m,\epsilon),
$$
there exists $x_0$ such that
$$
\frac{|(E+x_0) \Delta E_m|}{|E_m|} < \epsilon.
$$
\end{cor}
\begin{cor} \label{q,}
Suppose $\alpha>0$,\\
(i)
\begin{equation*}
g(x)=\begin{cases}
\alpha x_n & \text{if } x_n\ge 0\\
\infty & \text{if }x_n<0
\end{cases}
\end{equation*}
(ii) $f$ is admissible \cite{R},\\
\noindent then if $m>0$ is small, $E_m$ is unique. Moreover, for all $\epsilon>0$ there exist $c_\epsilon>0$, $a=a(\cdot, \epsilon)$, $m_0>0$ such that
$$
\inf_{y>0} \frac{a(y,\epsilon)}{y^{\frac{n-1}{n}}} \ge c_\epsilon,
$$
$\&$ for $E \subset B_R$, $|E|=|E_m|=m<m_0$, if
$$
|\mathcal{E}(E_m)-\mathcal{E}(E)| < a(m,\epsilon),
$$
there exists $x_0$ such that
$$
\frac{|(E+x_0) \Delta E_m|}{|E_m|} < \epsilon.
$$
\end{cor}
\begin{rem}
Assuming that $f$ is smooth, Baer proved uniqueness \cite[Theorem 3.12.]{R}. Corollary \ref{q,} together with \cite[Theorem 3.10.]{R} yields a unique convex minimizer when $m$ is small under the assumptions in Corollary \ref{q,} (in particular, without additional regularity).
\end{rem}
\subsection{Geometry of $E_m$}
Next, the initially stated problem is revisited to address results for $m \in (0,\infty)$.\\
{\bf Problem:} If the potential $g$ is convex (or, more generally, if the sub-level sets $\{g < t\}$
are convex), are minimizers convex or, at least, connected?\\
\begin{thm} \label{mi}
If $n=2$, the sub-level sets $\{g < t\}$ are convex, $g$ is locally Lipschitz, and $g$ admits minimizers $E_m \subset B_{R(m)}$, then
$$E_{m}=\cup_{i=1}^N A_{i},$$
where $A_{i}$ are convex and have disjoint closures, $N<\infty$.
\end{thm}
\begin{proof}
The first variation formula for $\mathcal{F}$ implies that the anisotropic mean
curvature of $\partial E_{m}$ is non-negative and in two dimensions this is sufficient for the convexity of
every connected component of $E_{m}$ (via e.g. Proposition 1 in Figalli and Maggi \cite{MR2807136}).
Therefore,
$$E_{m}=\cup_{i=1}^\infty A_{i},$$
where $A_{i}$ are disjoint, convex. Theorem 4.4 in Giusti \cite{Giust} yields -- up to sets of measure zero --
$$
\partial E_m = \overline{\partial^* E_m},
$$
$\&$ density estimates imply
$$
\mathcal{H}^1(\partial E_m \setminus \partial^* E_m)=0;
$$
hence,
$$\mathcal{H}^1(\partial \overline{E_m} \setminus \partial \text{int} E_m)=0.$$
\\
This implies that $\text{int} E_m$ is also a minimizer. If $A_j$ and $A_l$ ($j \neq l$) have non-disjoint closures, let $x \in \partial^* E_m\cap \overline{A_j} \cap \overline{A_l} $ and note that thanks to the regularity $x$ has density $1$, but since
$x \notin \text{int} E_m$, the density estimate (e.g. Proposition A.6 in \cite{MR1641031} applied with $\text{int} E_m$) implies
$$
|B_r(x) \setminus \text{int} E_m| \ge a r^2>0
$$
when $r>0$ is small, a contradiction. Suppose $x \in (\partial E_m \setminus \partial^* E_m)\cap \overline{A_j} \cap \overline{A_l}$; thanks to
$
\mathcal{H}^1(\partial E_m \setminus \partial^* E_m)=0
$
($\&$ convexity of $A_{i}$)
$\partial A_j \cap \partial A_l=\{x\}.$ Let $E=E_m \cup \text{conv}(A_j \cup A _l)$: firstly observe that $|E|>m$, and since the closure of $A_j \cup A _l$ is connected,
$$
\mathcal{F}(\text{conv}(A_j \cup A _l)) \le \mathcal{F}(A_j \cup A _l)
$$
(via e.g. Corollary 2.8 in \cite{MR1641031}); in addition, the surface energy satisfies the inclusion-exclusion estimate
$$
\mathcal{F}(A \cup B)+\mathcal{F}(A \cap B) \le \mathcal{F}(A)+\mathcal{F}(B)
$$
\begin{figure}[htbp]
\centering
\label{nwing}
\includegraphics[width=.6 \textwidth]{drawin}
\caption{(a)}
\end{figure}
(e.g. via (32) in \cite{MR1641031});
this implies
\begin{align*}
\mathcal{F}(E) &= \mathcal{F}((E_m \setminus (A_j \cup A _l)) \cup \text{conv}(A_j \cup A _l))\\
&\le \mathcal{F}(E_m \setminus (A_j \cup A _l))+\mathcal{F}(\text{conv}(A_j \cup A _l))\\
& \le \mathcal{F}(E_m \setminus (A_j \cup A _l))+\mathcal{F}(A_j \cup A _l)= \mathcal{F}(E_m)
\end{align*}
which implies
\begin{equation} \label{E}
\mathcal{F}(E) \le \mathcal{F}(E_m).
\end{equation}
If $E$ is not contained in $\{g=0\}$, let $[E]_m=\{g\le \lambda\} \cap E$, where $\lambda$ is chosen so that $|[E]_m|=m$; then
$$
\mathcal{F}([E]_m) \le \mathcal{F}(E),
$$
$$
\mathcal{G}([E]_m) \le \mathcal{G}(E_m)
$$
with strict inequality unless either (a) $\mathcal{G}(E_m)=0$ or (b) $E_m=[E]_m$
(via e.g. Lemma 2.6 $\&$ Lemma 3.4 in \cite{MR1641031}).
This yields
$$
\mathcal{E}([E]_m) \le \mathcal{E}(E_m) \le \mathcal{E}([E]_m).
$$
\\
\noindent In particular, one of (a) or (b) must be true: if (a) is true $E_m \subset \{g=0\}$,
$\mathcal{E}(E_m)=\mathcal{F}(E_m)$, therefore contracting with $a<1$, $|aE|=m$, $E\subset \{g =0\}$ via the convexity of
$\{g =0\}$ (Figure 3), hence
$$\mathcal{E}(aE)=a\mathcal{E}(E)<\mathcal{E}(E) \le \mathcal{E}(E_m)$$
\\
\noindent because \eqref{E} is true, which contradicts that $E_m$ is a minimizer; suppose (b) is true,
$$E_m=[E]_m=\{g\le \lambda\} \cap E= \{g\le \lambda\} \cap (E_m \cup (\text{conv}(A_j \cup A _l)))\\$$
\\
\noindent and this implies $E_m \subset \{g\le \lambda\}$; thus, since $\{g\le \lambda\}$ is convex, $E \subset \{g\le \lambda\}$ which implies $E=E_m$ contradicting $|E|>m$.
Hence, $\{A_{i}\}$ have disjoint closures. Now, there exists $w(m)>0$ such that $\inf_i |A_{i}|\ge w(m) > 0$: if $|A_{i}|\rightarrow 0$, let $E_1= A_{1}$, $E_2=A_{i}$, and let $h>1$ be defined via
$$
|hE_1|=|E_1|+|E_2|,
$$
so that $|E_2|=(h^2-1)|E_1|=\gamma r^2$ with $r=\sqrt{h^2-1}$ and $\gamma=|E_1|>0$. Thus, $\mathcal{F}(E_2) \ge c\sqrt{h^2-1}$ by the anisotropic isoperimetric inequality, and if $h>1$ is sufficiently near $1$,
\begin{align*}
&\mathcal{F}(hE_1)+\int_{hE_1} g(x)dx \\
&\le \mathcal{F}(E_1) +c_1(h-1)+\int_{E_1} g(x)dx + c_2|hE_1 \setminus E_1|+\mathcal{F}(E_2)+\int_{E_2} g(x)dx\\
&\hskip .3in -\mathcal{F}(E_2)-\int_{E_2} g(x)dx\\
&\le \mathcal{F}(E_1)+\mathcal{F}(E_2)+\int_{E_1} g(x)dx+\int_{E_2} g(x)dx+\hat c(h-1)-c\sqrt{h^2-1}\\
&<\mathcal{F}(E_1)+\mathcal{F}(E_2)+\int_{E_1} g(x)dx+\int_{E_2} g(x)dx.
\end{align*}
Hence, since $\sqrt{h^2-1}$ dominates $h-1$ as $h\rightarrow1^+$, assuming there exists $h>1$ with $h-1$ sufficiently small such that
\begin{equation} \label{e}
h E_1 \cap (E_m \setminus E_1) = \emptyset,
\end{equation}
the inequality yields
$$
\mathcal{E}(hE_1)<\mathcal{E}(E_1 \cup E_2)
$$
and
$$
|hE_1 \cup (E_m \setminus (E_1 \cup E_2))|=m,
$$
\\
\begin{figure}[htbp]
\includegraphics[width=.8 \textwidth]{drawin1p.pdf}
\caption{A pathological scenario}
\label{dra}
\end{figure}
\noindent a contradiction which finishes the proof. Suppose now the pathological scenario in which \eqref{e} fails: for any two sets in $\{A_i\}$, the $h$-dilation of one of them, say $A_e(=:E_1)$, contains sets $A_k^h \in \{A_i\} \setminus \{A_e, E_2\}$ ($E_2$ being the second set), see Figure \ref{dra}. Then
$$
h E_1 \cap (E_m \setminus E_1) \neq \emptyset,
$$
but as $h\rightarrow 1^+$,
$$
|h E_1 \cap (E_m \setminus E_1)| \rightarrow 0;
$$
\noindent therefore one may argue as in Proposition A.9 in \cite{MR1641031} to obtain a contradiction.
\end{proof}
\begin{rem}
If $g \in C^{1,\alpha}$ is convex and coercive, and $f \in C^{3,\alpha}(\mathbb{R}^2\setminus \{0\})$ is uniformly elliptic, connectedness of equilibrium shapes was obtained by De Philippis and Goldman for $m \in (0,\infty)$ \cite{D}. Therefore, their theorem combined with my theorem yields convexity in $\mathbb{R}^2$ subject to the regularity assumptions and without McCann's theorem. Another proof of convexity when $f$ is Euclidean is via Theorem 1 in Ferriero and Fusco \cite{MR2461811} $\&$ Proposition 1 in Figalli and Maggi \cite{MR2807136}:
$$
\mathcal{H}^1(\partial \text{co}(E_m))\le \mathcal{H}^1(\partial E_m) \le \mathcal{H}^1(\partial \text{co}(E_m))
$$
yields
$\text{co}(E_m)=E_m$ \cite{MR2461811}. If $f$ is elliptic, the same argument above and Corollary 2.8 in McCann \cite{MR1641031} (see the proof) replacing Theorem 1 in Ferriero and Fusco \cite{MR2461811} implies convexity.
\end{rem}
\begin{cor}
If $A \subset \mathbb{R}^2$ is bounded, convex and $$\mathcal{F}(E_m)=\inf\{\mathcal{F}(E): E\subset A, |E|=m\},$$ $|K_a|<m\le |A|$, $|K_a|$ is the measure of the largest Wulff shape in $A$, then $E_m$ is convex.
\end{cor}
\begin{proof}
Let
\begin{equation*}
g(x)=\begin{cases}
0 & \text{if } x\in \bar A\\
\infty & \text{if }x \notin \bar A.
\end{cases}
\end{equation*}
If $m \le |A|$, note that a compactness argument implies the existence of minimizers $E_m$. An analog of the proof of Theorem \ref{mi} implies $E_{m}=\cup_{i=1}^N A_{i} \subset \{g=0\}$, $A_i$ convex, disjoint. In particular, the argument to exclude (a) in the proof of the theorem implies $N=1$.
\end{proof}
\begin{rem}
The problem in the case of the isotropic perimeter was the main objective in \cite{MR1669207} $\&$ has appeared in the articles: \cite[Theorem 11]{MR2178065}, \cite[(1.8)]{MR2436794}, and \cite[Remark 2.6.]{MR2468216}.
\end{rem}
\begin{cor} \label{mil}
Assume $n=2$, the sub-level sets $\{g < t\}$ are convex, $g$ is locally Lipschitz, and $g$ is coercive, then
$$E_{m}=\cup_{i=1}^N A_{i},$$
where $A_{i}$ are convex and have disjoint closures, $N<\infty$.
\end{cor}
\begin{rem}
In the case when $g$ is convex and coercive, McCann proved the result in the corollary \cite{MR1641031}.
\end{rem}
\begin{thm} \label{q}
If $n=2$, the sub-level sets $\{g < t\}$ are convex, $g$ is locally Lipschitz, and $g$ admits minimizers $E_m \subset B_{R(m)}$, then
$\{m: E_m \text{ is convex}\}$
is open.
\end{thm}
\begin{proof}
Assume $m>0$ and $E_m$ is convex; if $m_k>m$, $E_{m_k}$ are not convex, and $m_k \rightarrow m$, let $R>0$ satisfy $E_{m_k} \subset B_R$,
$$E_{m_k}=\cup_{i=1}^N A_{k,i},$$
$A_{k,i}$ convex and have disjoint closures (Theorem \ref{mi}); $A_{i}=\liminf_k A_{k,i}$ (in $L^1(B_R)$).
There exists $w(m)>0$ such that $|A_{k,i}|\ge w(m) > 0$: if $|A_{k,i}|\rightarrow 0$, let $\inf_k |A_{k,l}| \ge w(N,m)>0$ ($N$ is bounded; therefore such a constant exists); then, set $E_1= A_{k,l}$, $E_2=A_{k,i}$,
$$
|hE_1|=|E_1|+|E_2|
$$
\\
$|E_2|=(h^2-1)|E_1|=\gamma r^2,$ $r=\sqrt{h^2-1}$, $\gamma>0$. Thus, $\mathcal{F}(E_2) \ge c\sqrt{h^2-1}$, and if $h>1$ is sufficiently near $1$,
\begin{align*}
&\mathcal{F}(hE_1)+\int_{hE_1} g(x)dx \\
&\le \mathcal{F}(E_1) +c_1(h-1)+\int_{E_1} g(x)dx + c_2|hE_1 \setminus E_1|+\mathcal{F}(E_2)+\int_{E_2} g(x)dx\\
&\hskip .3in -\mathcal{F}(E_2)-\int_{E_2} g(x)dx\\
&\le \mathcal{F}(E_1)+\mathcal{F}(E_2)+\int_{E_1} g(x)dx+\int_{E_2} g(x)dx+\hat c(h-1)-c\sqrt{h^2-1}\\
&<\mathcal{F}(E_1)+\mathcal{F}(E_2)+\int_{E_1} g(x)dx+\int_{E_2} g(x)dx.
\end{align*}
Hence, $$\sum_i |A_{k,i}|=m_k$$
$$
\sum_i\mathcal{F}(A_{k,i})=\mathcal{F}(E_{m_k}),
$$
and $A=\cup_i A_i$ has $|A|=m$;
therefore if $s_k>0$, $|s_kE_m|=|E_{m_k}|$, $s_k \rightarrow 1$,
$$\mathcal{E}(E_m)\le \mathcal{E}(A)\le \liminf_k \mathcal{E}(E_{m_k}) \le \liminf_k \mathcal{E}(s_kE_m)=\mathcal{E}(E_m)$$
$$
\int_{A_{k_l,i}} g(x)dx \rightarrow \int_{A_i} g(x)dx;
$$
this implies that $A$ is a minimizer with mass $m$ \&
\begin{align*}
|\mathcal{F}(A) - \sum_i\mathcal{F}(A_{i})|& \le |\mathcal{E}(A)-\mathcal{E}(E_{m_k})|+|\mathcal{G}(E_{m_k})-\mathcal{G}(A)|\\
& + |\mathcal{F}(E_{m_k}) - \sum_i\mathcal{F}(A_{i})|\\
& \hskip .15in \rightarrow 0,
\end{align*}
which contradicts that minimizers with mass $m$ are convex (since $|A_{i}|\ge w(m) > 0, A=\cup_i A_i$).
Hence there exists $e_a>0$ such that for all $\bar m\in (m, m+e_a)$, $E_{\bar m}$ is convex. A symmetric argument yields $e_a>0$ such that for all $\bar m\in (m-e_a, m)$, $E_{\bar m}$ is convex.
\end{proof}
\begin{cor} \label{qw}
If $n=2$, the sub-level sets $\{g < t\}$ are convex, $g$ is locally Lipschitz, and $g$ is coercive, then
$\{m: E_m \text{ is convex}\}$
is open.
\end{cor}
An important application of Corollary \ref{qw} is the enlarging of the small mass convexity in Figalli and Maggi \cite{MR2807136}: \\
``Theorems 1 and 2 deal with the connectedness and convexity properties
of liquid drops and crystals in the small mass regime. Outside this special regime,
one expects convexity of minimizers, provided $g$ is convex'' ... ``this was proved in [8,12] when the mass is large enough. The
natural problem of how to fill the gap in between these two results is open. It seems
very likely that new ideas are needed to deal with this case''
(p. 149). \\
In the following, the interval in Theorem 1 in Figalli and Maggi \cite{MR2807136} is extended with merely the assumption of convex sub-level sets and a local Lipschitz regularity assumption (if $g$ is convex, both of these are true). Also, a stability result is proved via uniqueness.
\begin{cor} \label{m+e}
If $n=2$ and $m_c$ is the critical mass in Figalli and Maggi's Theorem 1 (for all $m \in (0,m_c]$, $E_m$ is convex, \cite{MR2807136}), then if $g$ is locally Lipschitz and the sub-level sets $\{g < t\}$ are convex there exists $e>0$ such that for $m \in (m_c, m_c+e)$, $E_m$ is convex.
\end{cor}
\begin{cor} \label{m+e2}
If $n=2$, $m_c$ is the critical mass in Figalli and Maggi's Theorem 1 (for all $m \in (0,m_c]$, $E_m$ is convex, \cite{MR2807136}), and if $g$ is convex there exists $e>0$ such that for $m \in (0, m_c+e)$, $E_m$ is unique $\&$ convex and for $\epsilon>0$ there exists $w_m(\epsilon)>0$ such that if
$|E|=|E_m|$, $E \subset B_R$, and
$$
|\mathcal{E}(E)-\mathcal{E}(E_m)|<w_m(\epsilon),
$$
then there exists $x \in \mathbb{R}^2$ such that
$$
\frac{|(E_m+x) \Delta E|}{|E_m|}<\epsilon.
$$
\end{cor}
If the mass is small, the convexity of the sub-level is not necessary to obtain the uniqueness and convexity of minimizers. Therefore, if $g$ is merely in $L_{loc}^\infty$, by enlarging $m$ one expects non-convex minimizers or at least two convex minimizers. The critical mass when this occurs is identified via Theorem \ref{8p} $\&$ in higher dimension with additional regularity in Theorem \ref{85}. If the sub-levels of $g$ are convex $\&$ a uniqueness assumption in the class of convex sets holds, the critical mass is completely identified in $\mathbb{R}^2$ via Theorem \ref{8qp}.
\begin{thm} \label{85}
If $g \in C^1$, $f \in C^{2, \alpha}(\mathbb{R}^n\setminus\{0\})$, $\alpha \in (0,1)$, is $\lambda$-elliptic
and $g$ admits minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$, either:\\
(i) $E_m$ is convex $\&$ unique for all $m \in (0,\infty)$;\\
(ii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M})$, $E_m$ is unique, convex and there exist $\epsilon_0, \gamma>0$ such that for all $\epsilon \le \epsilon_0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}})}{w_m(\epsilon)} \ge 1,
$$
where $w_m(\epsilon)>0$ satisfies Proposition \ref{K};\\
(iii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M}]$, $E_m$ is unique, convex and for $m>\mathcal{M}$ there exists $a<m$ such that either convexity or uniqueness fails for mass $a$.
\end{thm}
\begin{proof}
Define
$$
\mathcal{A}=\{m: E_m \hskip .08in \text{is unique $\&$ convex}\}
$$
$$
\mathcal{M}=\sup \mathcal{A}.
$$
Theorem \ref{@'} and Theorem 2 in Figalli and Maggi \cite{MR2807136} imply $(0,m_a) \subset \mathcal{A}$, hence $\mathcal{M} >0$. If $\mathcal{M}<\infty$, for $m \in (0,\mathcal{M})$, $E_m$ is unique $\&$ convex.
Therefore either: (a) there exists a non-convex minimizer having mass $\mathcal{M}$; (b) there exist two convex minimizers having mass $\mathcal{M}$; or (c) for all $m \in (0, \mathcal{M}]$, $E_m$ is unique, convex and for $m>\mathcal{M}$ there exists $a<m$ such that either convexity or uniqueness fails for minimizers with mass $a$.
If $m_k<\mathcal{M}$, $m_k \rightarrow \mathcal{M}$, along a subsequence, $E_{m_k} \rightarrow T_\mathcal{M}$, with $|T_\mathcal{M}|=\mathcal{M}$, $T_\mathcal{M}$ a convex minimizer. Set
$$
\epsilon=\frac{1}{5} \inf_{x} \frac{|(E_\mathcal{M}+x) \Delta T_\mathcal{M}|}{|E_\mathcal{M}|}>0,
$$
where if (a) $E_\mathcal{M}$ is the non-convex minimizer and if (b) $E_\mathcal{M}$ is a convex minimizer not (mod sets of measure zero and translations) equal to $T_\mathcal{M}$.
If $m \in (0,\mathcal{M})$, the uniqueness of convex minimizers implies that for all $\epsilon>0$ there exists $w_m(\epsilon)>0$ such that, if
$|E|=|E_m|$, $E \subset B_R$, and
$$
|\mathcal{E}(E)-\mathcal{E}(E_m)|<w_m(\epsilon),
$$
then there exists $x \in \mathbb{R}^n$ such that
$$
\frac{|(E_m+x) \Delta E|}{|E_m|}<\epsilon
$$
via Proposition \ref{K}.
Let $\{m_k\}$ be the sequence such that
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}}}{w_m(\epsilon)}=\lim_{k \rightarrow \infty} \frac{\mathcal{M}^{\frac{n-1}{n}}-m_k^{\frac{n-1}{n}}}{w_{m_k}(\epsilon)},
$$
and define $\gamma_k$ via $|\gamma_k E_\mathcal{M}|=|E_{m_k}|$ (i.e. $\gamma_k=(\frac{m_k}{\mathcal{M}})^{\frac{1}{n}}$); note
\begin{align*}
|\mathcal{E}(\gamma_k E_\mathcal{M})-\mathcal{E}(E_{m_k})|& \le |\mathcal{E}(\gamma_k E_\mathcal{M})-\mathcal{E}(E_\mathcal{M})|+|\mathcal{E}(T_\mathcal{M})-\mathcal{E}(E_{m_k})|\\
&\le\mathcal{F}(E_\mathcal{M})(1-\gamma_k^{n-1})+ (\sup_{B_R}g) |E_\mathcal{M} \Delta (\gamma_k E_\mathcal{M})|\\
&+|\mathcal{E}(T_\mathcal{M})-\mathcal{E}(E_{m_k})|\\
\end{align*}
\begin{align*}
\mathcal{E}(T_{\mathcal{M}})& \le \mathcal{E}(\frac{1}{\gamma_k}E_{m_k})\\
&=\frac{1}{\gamma_k^{n-1}}\mathcal{F}(E_{m_k})+ \int_{\frac{1}{\gamma_k}E_{m_k}}g(x)dx\\
&\le (\frac{1}{\gamma_k^{n-1}}-1)\mathcal{F}(E_{m_k})+(\sup_{B_R}g) |\frac{1}{\gamma_k}E_{m_k} \Delta E_{m_k}|+\mathcal{E}(E_{m_k})
\end{align*}
and similarly thanks to $|\frac{1}{\gamma_k}E_{m_k} \Delta E_{m_k}| \le a(\frac{1}{\gamma_k}-1)$ (e.g. via \cite[Lemma 4]{MR2807136}) this implies
$$
|\mathcal{E}(T_{\mathcal{M}})-\mathcal{E}(E_{m_k})| \le \alpha_p (\frac{1}{\gamma_k^{n-1}}-1)=\alpha (\mathcal{M}^{\frac{n-1}{n}}-m_k^{\frac{n-1}{n}}),
$$
for $m_k \thickapprox \mathcal{M}$.
In particular,
$$
|\mathcal{E}(\gamma_k E_\mathcal{M})-\mathcal{E}(E_{m_k})| \le \gamma (\mathcal{M}^{\frac{n-1}{n}}-m_k^{\frac{n-1}{n}})
$$
where $\gamma=\gamma(\mathcal{M})$.
Suppose
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}}}{w_m(\epsilon)}<\frac{1}{\gamma},
$$
then for $k$ large
$$
|\mathcal{E}(\gamma_k E_\mathcal{M})-\mathcal{E}(E_{m_k})| \le \frac{ \gamma (\mathcal{M}^{\frac{n-1}{n}}-m_k^{\frac{n-1}{n}})}{w_{m_k}(\epsilon)} w_{m_k}(\epsilon)<w_{m_k}(\epsilon)
$$
and this implies the existence of $x_k$ such that
$$
\frac{|(E_{m_k}+x_k) \Delta (\gamma_k E_\mathcal{M})|}{|E_{m_k}|}<\epsilon;
$$
however, if $k$ is large, $\gamma_k \thickapprox 1$, which implies
\begin{align*}
\frac{|(E_{m_k}+x_k) \Delta (\gamma_k E_\mathcal{M})|}{|E_{m_k}|}& \thickapprox \frac{|(T_\mathcal{M}+x_k) \Delta E_\mathcal{M}|}{|E_\mathcal{M}|}\\
&\ge \inf_x \frac{|(E_\mathcal{M}+x) \Delta T_\mathcal{M}|}{|E_\mathcal{M}|} =5\epsilon,
\end{align*}
a contradiction.
Therefore
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}}}{w_m(\epsilon)}\ge \frac{1}{\gamma},
$$
for $$\epsilon \le \epsilon_0:=\frac{1}{5}\inf_x \frac{|(E_\mathcal{M}+x) \Delta T_\mathcal{M}|}{|E_\mathcal{M}|}.$$
\end{proof}
\noindent {\bf Example:} \\
If $g=0$, $a(m,\epsilon)=w_m(\epsilon)=c(n)\epsilon^2 m^{\frac{n-1}{n}}$ via Figalli-Maggi-Pratelli \cite{MR2672283}: since the numerator below vanishes as $m \rightarrow \mathcal{M}^-$ while the denominator tends to $c(n)\epsilon^2 \mathcal{M}^{\frac{n-1}{n}}>0$, this yields for all $\mathcal{M}, \epsilon, \gamma>0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}})}{w_m(\epsilon)} = \liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}})}{c(n)\epsilon^2 m^{\frac{n-1}{n}}}=0
$$
which precludes (ii).
\begin{cor}
If $g \in C^1$ is coercive, $f \in C^{2, \alpha}(\mathbb{R}^n\setminus\{0\})$, $\alpha \in (0,1)$, is $\lambda$-elliptic, either:\\
(i) $E_m$ is convex $\&$ unique for all $m \in (0,\infty)$;\\
(ii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M})$, $E_m$ is unique, convex and there exist $\epsilon_0, \gamma>0$ such that for all $\epsilon \le \epsilon_0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}^{\frac{n-1}{n}}-m^{\frac{n-1}{n}})}{w_m(\epsilon)} \ge 1,
$$
where $w_m(\epsilon)>0$ satisfies Proposition \ref{K};\\
(iii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M}]$, $E_m$ is unique, convex and for $m>\mathcal{M}$ there exists $a<m$ such that either convexity or uniqueness fails for mass $a$.
\end{cor}
\begin{thm} \label{8p}
If $g \in L_{loc}^\infty$, $n=2$, and $g$ admits minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$, either:\\
(i) $E_m$ is convex $\&$ unique for all $m \in (0,\infty)$;\\
(ii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M})$, $E_m$ is unique, convex and there exist $\epsilon_0, \gamma>0$ such that for all $\epsilon \le \epsilon_0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}-m)}{w_m(\epsilon)} \ge 1,
$$
where $w_m(\epsilon)>0$ satisfies Proposition \ref{K};\\
(iii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M}]$, $E_m$ is unique, convex and for $m>\mathcal{M}$ there exists $a<m$ such that either convexity or uniqueness fails for mass $a$.
\end{thm}
\begin{proof}
Define
$$
\mathcal{A}=\{m: E_m \hskip .08in \text{is unique $\&$ convex}\}
$$
$$
\mathcal{M}=\sup \mathcal{A}.
$$
Theorem \ref{@'} and Theorem 1 in Figalli and Maggi \cite{MR2807136} imply $(0,m_a) \subset \mathcal{A}$, hence $\mathcal{M} >0$. Thus one may argue -- verbatim -- as in the proof of Theorem \ref{85}.
\end{proof}
\begin{cor}
If $g$ is coercive, locally Lipschitz, and $n=2$, then either:\\
(i) $E_m$ is convex $\&$ unique for all $m \in (0,\infty)$;\\
(ii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M})$, $E_m$ is unique, convex and there exist $\epsilon_0, \gamma>0$ such that for all $\epsilon \le \epsilon_0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}-m)}{w_m(\epsilon)} \ge 1,
$$
where $w_m(\epsilon)>0$ satisfies Proposition \ref{K};\\
(iii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M}]$, $E_m$ is unique, convex and for $m>\mathcal{M}$ there exists $a<m$ such that either convexity or uniqueness fails for mass $a$.
\end{cor}
\begin{thm} \label{8qp}
If $n=2$, the sub-level sets $\{g < t\}$ are convex, $g$ is locally Lipschitz, and $g$ admits minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$ which, if convex, are unique within the class of convex sets, then either:\\
(i) $E_m$ is convex for all $m \in (0,\infty)$;\\
(ii) there exists $\mathcal{M}>0$ such that for all $m \in (0, \mathcal{M})$, $E_m$ is convex and there exist $\epsilon_0, \gamma>0$ such that for all $\epsilon \le \epsilon_0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}-m)}{w_m(\epsilon)} \ge 1,
$$
where $w_m(\epsilon)>0$ satisfies Proposition \ref{K}.
\end{thm}
\begin{proof}
Define
$$
\mathcal{A}=\{m: E_m \hskip .08in \text{is convex}\}
$$
$$
\mathcal{M}=\sup \mathcal{A}.
$$
Corollary \ref{m+e} implies $(0,m_a) \subset \mathcal{A}$, hence $\mathcal{M} >0$. Assume $E_{\mathcal{M}}$ is convex; then Theorem \ref{q} implies the existence of $\epsilon>0$ such that $E_m$ is convex for all $m \in (\mathcal{M}, \mathcal{M}+\epsilon)$, contradicting $\mathcal{M}=\sup \mathcal{A}$. Thus, for $m<\mathcal{M}$, $E_m$ is convex and there exists a non-convex minimizer $E_\mathcal{M}$. If $m_k<\mathcal{M}$, $m_k \rightarrow \mathcal{M}$, then along a subsequence $E_{m_k} \rightarrow T_\mathcal{M}$, where $T_\mathcal{M}$ is a convex minimizer with $|T_\mathcal{M}|=\mathcal{M}$. Set
$$
\epsilon=\frac{1}{5} \inf_{x} \frac{|(E_\mathcal{M}+x) \Delta T_\mathcal{M}|}{|E_\mathcal{M}|}>0
$$
and observe that the argument in Theorem \ref{85}'s proof yields (ii).
\end{proof}
\begin{rem}
If $g$ is convex $\&$ coercive, then the assumptions in the theorem hold.
\end{rem}
\noindent {\bf Example:} \\
If $g=0$, $n=2$, $a(m,\epsilon)=w_m(\epsilon)=c\epsilon^2 m^{\frac{1}{2}}$ via Figalli-Maggi-Pratelli \cite{MR2672283}: this yields for all $\mathcal{M}, \epsilon, \gamma>0$,
$$
\liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}-m)}{w_m(\epsilon)} = \liminf_{m \rightarrow \mathcal{M}^-} \frac{\gamma(\mathcal{M}-m)}{c\epsilon^2 m^{\frac{1}{2}}}=0
$$
which precludes (ii).
\subsection{Proof of Theorem \ref{n=2}}
\begin{proof}
If $E_m=\cup_{k=1}^N A_k$, $\{A_k\}$ convex, disjoint, non-empty, $N>1$, there exists $\bar K$ such that
$$
|A_{\bar K} \cap \{g \neq 0\}|>0,
$$
$0 \notin A_{\bar K}$: assume not; then let $E=E_m \cup \text{conv}(A_j \cup A _l) \subset \{g=0\}$, using the convexity of
$\{g =0\}$ (see Figure 3). First, observe that $|E|>m$; moreover, since, up to a translation, the closure of $A_j \cup A _l$ is connected,
$$
\mathcal{F}(\text{conv}(A_j \cup A _l)) \le \mathcal{F}(A_j \cup A _l)
$$
(via e.g. Corollary 2.8 in \cite{MR1641031}); this implies
\begin{align*}
\mathcal{F}(E) &\le \mathcal{F}(E_m \setminus (A_j \cup A _l))+\mathcal{F}(\text{conv}(A_j \cup A _l))\\
& \le \mathcal{F}(E_m \setminus (A_j \cup A _l))+\mathcal{F}(A_j \cup A _l)= \mathcal{F}(E_m)
\end{align*}
which implies
\begin{equation} \label{E@}
\mathcal{F}(E) \le \mathcal{F}(E_m)
\end{equation}
therefore, contracting with $a<1$ so that $|aE|=m$, and using $E\subset \{g =0\}$,
$$\mathcal{E}(aE)=a\mathcal{E}(E)<\mathcal{E}(E) \le \mathcal{E}(E_m)$$
by \eqref{E@}, which contradicts that $E_m$ is a minimizer;
hence there exists $\bar K$ such that
$$
|A_{\bar K} \cap \{g \neq 0\}|>0,
$$
$0 \notin A_{\bar K}$; (iv) implies
$$
\int_{A_{\bar K}} \nabla g(x) dx \neq 0.
$$
If $\nu \in \mathbb{S}^{1}$ and $t>0$ is small,
$$
\int_{A_{\bar K}+t\nu} g(x) dx \ge \int_{A_{\bar K}} g(x) dx
$$
since
$$
\int_{A_{\bar K}+t\nu} g(x) dx < \int_{A_{\bar K}} g(x) dx \Rightarrow \mathcal{E}(E_m) > \mathcal{E}\big((\cup_{k \neq \bar K} A_k) \cup (A_{\bar K}+t\nu)\big),
$$
where for sufficiently small $t>0$, $|(\cup_{k \neq \bar K} A_k) \cup (A_{\bar K}+t\nu)|=m$, and hence this contradicts $E_m$ having the smallest energy;
$$
\Rightarrow \int_{A_{\bar K}} \big(g(x+t\nu)-g(x)\big)\, dx \ge 0
$$
for $\nu \in \mathbb{S}^1$, $t>0$ small. Fix $\nu \in \mathbb{S}^1$; then for $t>0$ small,
the local Lipschitz assumption (i) yields, with $M$ a local Lipschitz constant of $g$,
$$
\frac{|g(x+t\nu)-g(x)|}{t} \le M
$$
and dominated convergence implies
$$
\int_{A_{\bar K}} \nabla g(x) \cdot \nu dx \ge 0;
$$
$$
\Rightarrow \nu \cdot \int_{A_{\bar K}} \nabla g(x) dx \ge 0
$$
for $\nu \in \mathbb{S}^1$; since
$$
\int_{A_{\bar K}} \nabla g(x) dx \neq 0
$$
$$
\Rightarrow \Big(\frac{-\int_{A_{\bar K}} \nabla g(x)\, dx }{|\int_{A_{\bar K}} \nabla g(x)\, dx|}\Big) \cdot \int_{A_{\bar K}} \nabla g(x)\, dx \ge 0
$$
$$
\Rightarrow \int_{A_{\bar K}} \nabla g(x) dx = 0,
$$
and this contradicts (iv).
\end{proof}
\begin{rem}
The result is the first general convexity theorem for $m \in (0,|\{g<\infty\}|)$ assuming only the convexity of the sub-level sets $\{g<t\}$ (instead of a convexity assumption on $g$), and it includes non-convex functions. Moreover, the
condition:\\
if $E\subset \{g<\infty\}$ is bounded convex, $0 \notin E$, $|E \cap \{g \neq 0\}|>0$, then
$$
\int_E \nabla g(x) dx \neq 0,
$$
encodes the gravitational potential
\begin{equation*}
g(x)=\begin{cases}
\alpha x_n & \text{if } x_n\ge 0\\
\infty & \text{if }x_n<0
\end{cases}
\end{equation*}
because
$$
\int_E \nabla g(x) dx=(0,\alpha |E|)\neq 0,
$$
therefore the condition yields a unified theory that includes the gravitational case.
\end{rem}
\begin{cor}
If $n=2$, $\phi(0)= 0$, $\phi'>0$, and\\
(i)
\begin{equation*}
g(x)=\begin{cases}
\phi(x_n) & \text{if } x_n\ge 0\\
\infty & \text{if }x_n<0
\end{cases}
\end{equation*}
(ii) $\mathcal{F}$ satisfies assumptions for the existence of minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$, \\
\noindent then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
\begin{rem}
The simple case is when $\mathcal{F}=\mathcal{H}^1$: existence follows via Steiner symmetrization \cite{MR2178968, MR3055761} $\&$ compactness. Likewise, one may obtain existence for $\mathcal{F}$ which have symmetric tensions with respect to $\{x_1=0\}$, e.g. $f$ is admissible \cite{R}; see also \cite{MR2245755}. Classically, $\phi(x_n)=\alpha x_n$ generates the gravitational potential. The above generalization is new.
\end{rem}
\begin{cor}
If $n=2$ and\\
(i) $g$ is locally Lipschitz in $\{g<\infty\}$\\
(ii) $g$ is coercive \\
(iii) the sub-level sets $\{g<t\}$ are convex\\
(iv) when $E \subset \{g<\infty\}$ is bounded convex, $0 \notin E$, $|E \cap \{g \neq 0\}|>0$, then
$$
\int_E \nabla g(x) dx \neq 0,
$$
then $E_m$ is convex for all $m \in (0,|\{g<\infty\}|)$.
\end{cor}
\begin{cor}
If $n=2$ and\\
(i) $g$ is locally Lipschitz\\
(ii) $g$ is coercive \\
(iii) the sub-level sets $\{g<t\}$ are convex\\
(iv) when $E$ is bounded convex, $0 \notin E$, $|E \cap \{g \neq 0\}|>0$, then
$$
\int_E \nabla g(x) dx \neq 0,
$$
then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
\begin{cor} \label{a_radi}
If $n=2$ and\\
(i) $g=g(|x|)$ is locally Lipschitz \\
(ii) $g$ admits minimizers $E_m \subset B_{R(m)}$ with $R \in L_{loc}^\infty(\mathbb{R}^+)$ \\
(iii) when $g>0$, $g$ is (strictly) increasing \\
then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
\begin{figure}
{\centering\includegraphics[width=.6 \textwidth]{drawingq}} \caption{$h'(|x|)=0$ on $B_r(z) \subset E \cap \{g \neq 0\}$, Corollary \ref{a_radi} \label{drawingq} }
\end{figure}
\begin{proof}
(i) yields $g(x)=h(|x|)$, $h:\mathbb{R}^+ \rightarrow \mathbb{R}^+$
$$
\Rightarrow \nabla g(x)=h'(|x|)\frac{x}{|x|},
$$
and if $E$ is convex with $0 \notin E$ and $|E \cap \{ g \neq 0\}|>0$,
$$
\Rightarrow \int_E \nabla g(x)dx = \int_E h'(|x|) \frac{x}{|x|} dx.
$$
Hence, since $E$ is convex and $0 \notin E$, up to a rotation,
$$
E \subset \text{a convex cone in $\{x_2>0\}$}
$$
thus
$$
\int_E \nabla g(x)dx=0 \Rightarrow \int_E h'(|x|) \frac{x_2}{|x|} dx=0
$$
and since $h' \ge 0$ a.e.,
$h'(|x|)=0$ for a.e. $x \in E$, see Figure \ref{drawingq}; therefore there exists $r>0$ such that $h'(|x|)=0$ on $B_r(z) \subset E \cap \{g \neq 0\}$,
so $g$ is constant on some interval in $\{g > 0\}$, contradicting (iii). This yields
$$
\int_E \nabla g(x)dx\neq 0.
$$
\end{proof}
\begin{cor}
If $n=2$ and\\
(i) $g=g(|x|)$ is locally Lipschitz \\
(ii) $g$ is coercive \\
(iii) when $g>0$, $g$ is (strictly) increasing \\
then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
\begin{cor}[The radial convex potential] \label{ra}
If $n=2$ and $g=g(|x|)$ is coercive and convex, then $E_m$ is convex for all $m \in (0,\infty)$.
\end{cor}
\begin{rem}
This corollary is new. Under the additional assumption $f(v)=f(-v)$, the convexity was obtained by McCann (Corollary 1.4 in \cite{MR1641031}).
\end{rem}
\begin{rem}
$g=g(|x|) \ge 0$ coercive and convex is equivalent to $g=g(|x|)\ge 0$ not identically zero and convex.
\end{rem}
\noindent {\bf Acknowledgement:} I want to thank several individuals for their comments on a preliminary version of the paper: Alessio Figalli, Robert McCann, Shibing Chen, Frank Morgan, Laszlo Lempert, and John Andersson. These interactions improved the quality of the paper. I want to especially thank Alessio, Robert, and Shibing for their energy and time investment.
\section{Appendix}
\subsection{Modulus of the free energy}
\begin{prop} \label{K}
If $m>0$, $g \in L_{loc}^\infty$, and $g$ admits minimizers $E_m$ which are unique up to translations and sets of measure zero, then for every $\epsilon>0$ there exists $w_m(\epsilon)>0$ such that if
$|E|=|E_m|$, $E \subset B_R$, and
$$
\mathcal{A}(E,E_m):=|\mathcal{F}(E)-\mathcal{F}(E_m)|+\Big|\int_E g(x)\,dx-\int_{E_m} g(x)\,dx\Big|<w_m(\epsilon),
$$
then there exists $x \in \mathbb{R}^n$ such that
$$
\frac{|(E_m+x) \Delta E|}{|E_m|}<\epsilon.
$$
\end{prop}
\begin{proof}
Assume not; then there exist $\epsilon>0$ and, for each $w>0$, a set $E_w'$ with $|E_w'|=|E_m|=m$ such that
$$
\mathcal{A}(E_w',E_m)<w
$$
$$
\inf_{x} \frac{|(E_m+x)\Delta E_w'|}{|E_m|} \ge \epsilon;
$$
in particular, set $w=\frac{1}{k}$, $k \in \mathbb{N}$; observe that there exist $E_{\frac{1}{k}}'$, $|E_{\frac{1}{k}}'|=m$,
$$
|\mathcal{E}(E_m)-\mathcal{E}(E_{\frac{1}{k}}')|\le \mathcal{A}(E_{\frac{1}{k}}',E_m)<\frac{1}{k},
$$
$$
\inf_{x} \frac{|(E_m+x)\Delta E_{\frac{1}{k}}'|}{|E_m|} \ge \epsilon;
$$
thus
$$
\mathcal{F}(E_{\frac{1}{k}}') \le \mathcal{E}(E_{\frac{1}{k}}') < \frac{1}{k}+\mathcal{E}(E_m),
$$
$E_{\frac{1}{k}}' \subset B_R$,
and the compactness for sets of finite perimeter implies --up to a subsequence--
$$
E_{\frac{1}{k}}' \rightarrow E' \quad \text{in } L^1(B_R),
$$
and therefore $|E'|=m$,
$$
\mathcal{E}(E') \le \liminf_k \mathcal{E}(E_{\frac{1}{k}}') =\mathcal{E}(E_m),
$$
which implies that
$E'$ is a minimizer, contradicting
$$
\inf_{x} \frac{|(E_m+x)\Delta E'|}{|E_m|} \ge \epsilon
$$
via the uniqueness of minimizers.
\end{proof}
\begin{cor} \label{K*}
If $m>0$, $g \in L_{loc}^\infty$, and $g$ admits minimizers $E_m$ which are unique up to translations and sets of measure zero, then
for every $\epsilon>0$ there exists $w_m(\epsilon)>0$ such that if
$|E|=|E_m|$, $E \subset B_R$, and
$$
|\mathcal{E}(E)-\mathcal{E}(E_m)|<w_m(\epsilon),
$$
then there exists $x \in \mathbb{R}^n$ such that
$$
\frac{|(E_m+x) \Delta E|}{|E_m|}<\epsilon.
$$
\end{cor}
\begin{proof}
The proof of Proposition \ref{K} uses the hypothesis $\mathcal{A}(E,E_m)<w_m(\epsilon)$ only through the consequence
$$
|\mathcal{E}(E)-\mathcal{E}(E_m)| \le \mathcal{A}(E,E_m)<w_m(\epsilon);
$$
hence the same argument applies verbatim under the weaker hypothesis $|\mathcal{E}(E)-\mathcal{E}(E_m)|<w_m(\epsilon)$.
\end{proof}
\subsection{Convexity in higher dimension}
One feature of Theorem \ref{@'} is the lower bound on the modulus. The result also addresses a conjecture stated in Figalli and Maggi \cite{MR2807136}.\\
{\bf Conjecture: convexity of minimizers in the small mass regime} \\
In the small mass regime, minimizers are connected and uniformly close to a (properly
rescaled and translated) Wulff shape, in terms of the smallness of the mass. The convexity
of these minimizers remains conjectural, with the exception of the planar case $n=2$ and of
the $\lambda$-elliptic case\\
(Figalli and Maggi \cite{MR2807136}, p. 147).\\
Assuming $g \in C^1$, $f \in C^{2, \alpha}(\mathbb{R}^n\setminus\{0\})$, $\alpha \in (0,1)$ is $\lambda-$elliptic, Figalli and Maggi \cite{MR2807136} proved the existence of $m_0=m_0(n,g,f)>0$ such that if $m \le m_0$, $E_m$ is convex [Theorem 2, \cite{MR2807136}].
Assume $g \in L_{loc}^\infty$; $m_0(n,g,f)$ is called {\bf stable} if there exist $g_a \rightarrow g$, $f_a \rightarrow f$ pointwise, with $g_a \in C^1$, $f_a \in C^{2, \alpha}(\mathbb{R}^n\setminus\{0\})$ $\lambda-$elliptic, $\alpha \in (0,1)$, such that: $$\liminf_a m_0(n,g_a,f_a) >0.$$
Note that if $g \in C^1$, $f \in C^{2, \alpha}(\mathbb{R}^n\setminus\{0\})$ is $\lambda-$elliptic, $\alpha \in (0,1)$, then $m_0(n,g,f)$ is stable because there exist $f_a=f \rightarrow f$, $g_a=g \rightarrow g$ $\&$
$$\liminf_a m_0(n,g_a,f_a)=m_0(n,g,f) >0.$$
\begin{cor}[Corollary of Theorem \ref{@'}] \label{m_s}
If $g \in L_{loc}^\infty$ and $m_0(n,g,f)$ is stable, then the conjecture is true.
\end{cor}
\begin{proof}
Theorem 2 in Figalli and Maggi \cite{MR2807136} implies that for smooth $g_a$ and elliptic $f_a$, if $m$ is sufficiently small, there exists a convex set $E_{m,a}$ which is a minimizer of the free energy with respect to $f_a$, $g_a$ with mass
$m$. Assume $f$ is not elliptic but $1$-homogeneous and convex. Then there exists $f_a$ elliptic such that $f_a(x) \rightarrow f(x)$ for all $x$; moreover, there exists $\{g_a\}$ smooth with $g_a \rightarrow g$. Thus, supposing $m_0(n,g,f)$ is stable, along a subsequence,
$$
E_{m,a_k} \rightarrow E,
$$
where $E$ is a convex minimizer of the free energy with respect to $f$, $g$ and $|E|=m < \liminf_a m_0(n,g_a,f_a)$. The above theorem implies that for $m$ sufficiently small, $E$ is the unique minimizer (mod translations and sets of measure zero).
\end{proof}
\begin{rem}
Assuming $m_0$ is stable, the theorem in Figalli and Maggi implies the existence of convex minimizers for $m \le m_0$; nevertheless, it does not preclude the existence of other minimizers. Theorem \ref{@'} yields that all minimizers are convex when $m$ is small.
\end{rem}
\begin{rem}
If $f$ is crystalline, Figalli and Zhang recently proved the conjecture: for sufficiently small mass minimizers are polyhedra \cite{pFZ}.
\end{rem}
\newpage
\bibliographystyle{amsalpha}
https://arxiv.org/abs/1508.00381 | Trojan dynamics well approximated by a new Hamiltonian normal form | We revisit a classical perturbative approach to the Hamiltonian related to the motions of Trojan bodies, in the framework of the Planar Circular Restricted Three-Body Problem (PCRTBP), by introducing a number of key new ideas in the formulation. In some sense, we adapt the approach of Garfinkel (1977) to the context of the normal form theory and its modern techniques. First, we make use of Delaunay variables for a physically accurate representation of the system. Therefore, we introduce a novel manipulation of the variables so as to respect the natural behavior of the model. We develop a normalization procedure over the fast angle which exploits the fact that singularities in this model are essentially related to the slow angle. Thus, we produce a new normal form, i.e. an integrable approximation to the Hamiltonian. We emphasize some practical examples of the applicability of our normalizing scheme, e.g. the estimation of the stable libration region. Finally, we compare the level curves produced by our normal form with surfaces of section provided by the integration of the non--normalized Hamiltonian, with very good agreement. Further precision tests are also provided. In addition, we give a step-by-step description of the algorithm, allowing for extensions to more complicated models. | \section{Introduction}\label{sec:intro}
Series expansions in terms of small physical parameters are a common
way of dealing with dynamical models in Celestial Mechanics. One case
where this approach has been extensively used is the problem of Trojan
motion. The so-called Trojan stability problem, i.e. the study of the
long-term dynamics for massless particles in the neighborhood of the
equilateral Lagrangian points, has been studied both analytically and
numerically in several contexts\footnote{For an introduction to Trojan
dynamics, see, e.g.,~\cite{Erdi-97}}.
From the numerical point of view, it is possible to consider quite
complete models (including perturbations from additional planets and/or
3-dimensional configurations). Such experiments are in general based on
accurate long integrations, done through high-precision integrators. A
clear outcome is the determination of the size and shape of the stable
domain around the equilibrium points. Secondary resonances
embedded within the domain of the 1:1 Mean Motion Resonance (MMR)
are of particular importance in the stability issue. For example,
in~\cite{Rob-et-al-05} and~\cite{Rob-Gab-06}, the importance of the
resonance web is highlighted for what concerns the process of
de-population of the Trojan domain, in the case of Jupiter's
asteroids. In~\cite{Dvo-Lhot-Zhou-12}, the authors study the orbital
stability of the only discovered Trojan asteroid of the Earth,
i.e.~2010 TK7, and determine the stability region around L4 and L5 in
the Sun-Earth system.
Besides pure numerical investigations, several works have also
explored the applicability of analytical methods in the problem of
Trojan stability. In particular, the Trojan problem has served as a
classical model for testing methods based on the construction of a
so-called {\it Hamiltonian normal form}.
Semi-analytic methods share a common structure: each method is based
on an explicit algorithm, which can, as a rule, be translated into a
programming code, and which computes the expansion of a suitable
\emph{normal form} that provides some special
solutions. A normal form is therefore a good \emph{local}
approximation to the complete Hamiltonian, used to reproduce the
orbits of particular objects.
Normal forms have been used in several cases for computing the
size and shape of the stability domain. From the point of view of
Nekhoroshev stability estimates
\cite{Nekhoroshev-1977},\cite{Nekhoroshev-1979}, examples of this
computation are in~\cite{Cel-Gio-91}, \cite{Gio-Sko-97},
\cite{Efthy-Sand-05}, \cite{Lhot-Efthy-Dvo-08}, \cite{Efthy-2013}. On
the other hand, in~\cite{Gab-Jor-Loc-05}, the stability estimation is
obtained by applying the Kolmogorov-Arnold-Moser (KAM) theory. But all
these approaches, while introducing novel and promising features, tend
to fail when it comes to reproducing the numerical results about the
size and shape of the whole stability region. In general, they are
able to justify stability for just a small subset of Trojan orbits,
located relatively close to the equilibrium point.
In the present work, we develop a new normal form approach to the
classical PCRTBP by combining three main ideas. First, we make an
accurate choice of variables for the representation of the system.
Second, we explicitly take benefit of the existence of a slow and a
fast degree of freedom, dealing with them differently in our normal
form construction. Finally, we make use of Lie series techniques for
the averaging over the fast angle; in fact, as shown below, a key
element of our method is that such a treatment makes it possible to
overcome the only true singularity that the Trojan motion contains,
namely possible \emph{close encounters} with the primary.
We motivate these ideas as follows.
Many of the previous attempts describe the system in terms of cartesian
coordinates, which, however, do not properly capture the \emph{physical
configuration} of the model. Considering canonical variables well
adapted to the system helps to improve the accuracy of the normal form
and the estimation of stability domains. To this end, in the present
paper, we adopt (and show the usefulness of) a modified version of
Delaunay-like coordinates introduced long ago in~\cite{Garfinkel-77}.
Regarding the other two ideas, an important point is that the only
real physical singularity in the 1:1 MMR region is due to possible
collisions~/~close encounters of the Trojan body with the primary. In
our setting of modified Delaunay-like variables (see
definition~\ref{eq:Delaunay-coord}), this singularity takes place at
$\lambda=0$, where $\lambda$ is the \emph{synodic mean longitude}
of the massless body. Thus, any polynomial expansion around the
equilibrium point exhibits a bad convergence behavior for orbits
approaching this singularity. Nevertheless, in this work we show that
this problem can be overcome in a rather simple way. This is based on
the following remarks.
On one hand, as noted already, an inspection of the main terms of the
Hamiltonian for a Trojan body indicates that the motion is ruled by a
fast and a slow frequency. In Delaunay variables, these two dynamics
are represented by two independent pairs of canonical coordinates.
However, the long term behavior of the orbits depends essentially only
on the slow degree of freedom which, in our variables, corresponds to
the canonical pair including the angle $\lambda$. In other words, an
appropriate normal form construction that aims to study the long term
dynamics involves averaging over the fast angle only. This produces
an \emph{integrable} Hamiltonian of two degrees of freedom in which
the fast angle is ignorable.
On the other hand, such an averaging implies
that one has to solve a so-called homological equation in which
$\lambda$ plays no role. As shown in Section 2 below, due to this
property, one can retain in the Hamiltonian a complicated functional
dependence on $\lambda$, other than trigonometric or simply
polynomial. In our case this dependence turns out to be of the
form of powers of the quantity $\beta(\lambda) = \frac{1}{\sqrt{2-2
\cos \lambda}} $. Thus, using this technique allows one to greatly
extend the convergence domain of the \emph{final} normal
form.
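As a minimal numerical illustration (a Python sketch of ours, not part of the paper's computations), $\beta(\lambda)$ stays of order one over the tadpole and horseshoe range but diverges as $\lambda\rightarrow 0$:
\begin{verbatim}
import numpy as np

def beta(lam):
    # beta(lambda) = 1/sqrt(2 - 2 cos(lambda)); singular only at
    # lambda = 0, i.e. at a close encounter with the primary
    return 1.0 / np.sqrt(2.0 - 2.0 * np.cos(lam))

for lam in (np.pi / 3, np.pi / 6, 0.1, 0.01):
    print(f"lambda = {lam:7.4f}   beta = {beta(lam):9.3f}")
# beta(pi/3) = 1 exactly (the L4/L5 value), while beta -> infinity as
# lambda -> 0: the reason to keep powers of beta unexpanded.
\end{verbatim}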
Finally, we develop a new normalization algorithm for the computation
of the integrable approximation, by using the Lie series formalism,
adapting in a modern way the technique described
in~\cite{Garfinkel-77}. With this integrable normal form, we can
approximate Trojan orbits having either tadpole or horseshoe shapes,
even in cases where distances from the equilateral point become large
and where previous approaches tend to fail.
There are several different examples where such a normalizing scheme
can be applied. The most evident corresponds to the use of the scheme
for the estimation of the stability domain. Such a computation, in the
same direction as the references mentioned before, aims at estimating the
effects induced by the remainder ${\cal R}$ (in later
Eq.~\ref{eq:H(R1,R2)}) of the normal form produced by the algorithm.
According to our results, our novel normalization may radically
improve the results regarding the size of the domain of stability
around the equilateral Lagrangian points, underestimated so far. Other
examples inherent directly to the normal form can be mentioned. For
instance, in~\cite{Paez-Loc-14}, we used the normalized coordinates
for the design and optimization of maneuvers aiming to transfer a
spacecraft into the tadpole region. Moreover, the numerical
experiments described in the present work highlight that our method
allows us to clearly differentiate the tadpole region from the horseshoe
region. In some cases, when the mass ratio between the two primary
bodies is very small and consequently also the chaotic regions are
small, such a computation gives a first order estimation of the stable
tadpole domain. On the other hand, this kind of perturbative approach
is motivating for problems of diverse nature. For instance,
in~\cite{Cec-Big-2013}, the authors make use of the Relegation algorithm,
which shares a similar structure with our normalizing scheme, to
compute some particular orbits around an irregularly shaped asteroid.
This paper is structured as follows: in Section~\ref{sec:expl_alg}, we
describe the expansion and normalization scheme, applied to the case
of the PCRTBP Hamiltonian, in the 1:1 MMR region; in
Section~\ref{sec:results_sect}, we provide different tests for a
suitable accuracy verification of the normal form, involving
comparisons with the numerical integration of the full
problem; in Section~\ref{sec:concl_future}, we summarize the
work and outline future applications of this method. Furthermore,
Appendix~\ref{sec:techn_things} formally presents the algorithm in
such a way that it can be adapted also to different models, for
example, the Elliptic Restricted Three Body Problem (ERTBP) or the
Restricted MultiPlanet Problem (RMPP), described in~\cite{Paez-Efthy-15}.
\section{Construction of the integrable approximation}\label{sec:expl_alg}
\subsection{Initial settings}\label{sbs:settings}
In heliocentric canonical variables $(\mathbf{p},\mathbf{r})$, the
Hamiltonian of the PCRTBP can be written as:
\begin{equation}\label{ham1}
H={p^2\over 2} -{G M\over r} - G m' \left({1\over\Delta}
-{\mathbf{r}\cdot\mathbf{r}'\over r'^3}\right)
\end{equation}
where $M$ and $m'$ are the masses of the larger and smaller primary,
respectively, $\mathbf{r}$ is the heliocentric position vector of the
test particle, $\mathbf{r'}$ the one corresponding to the second
primary, $p=\|\mathbf{p}\|$, $r=\|\mathbf{r}\|$, $r'=\|\mathbf{r'}\|$
and $\Delta = \| \mathbf{r} - \mathbf{r}' \|$.
In what follows, $a\,$, $e\,$, $M$ and $\varpi$ symbolize the
semi-major axis, the eccentricity, the mean anomaly and the longitude of the
perihelion (primed quantities correspond to the primary). We set the
unit of length as $a'=1$, and the unit of time such that the mean
motion of the primary is equal to $n=1$. This implies $G(M+m')=1$.
The unit of mass is set so that $G=1$. Defining the mass parameter
as $\mu=m'$, the Hamiltonian in the above units takes the form:
\begin{equation}
H={p^2\over 2} -{1\over r} - \mu F~~,
\label{eq:ham2}
\end{equation}
where the so--called disturbing function $\mu F$ is such that
\begin{equation}
F=\left({1\over\Delta}
-{\mathbf{r}\cdot\mathbf{r}'\over r'^3}-{1\over r}\right)~~.
\label{eq:perturbing-term}
\end{equation}
Including the small keplerian correction $\mu/r$ in the disturbing
function allows us to define action-angle variables with values
independent of the mass parameter $\mu$. Inspired
by~\cite{Paez-Efthy-15} and references therein, we introduce modified
Delaunay action-angle variables
\begin{equation}
\vcenter{\openup1\jot\halign{
\hbox {\hfil $\displaystyle {#}$}
&\hbox {\hfil $\displaystyle {#}$\hfil}
&\hbox {$\displaystyle {#}$\hfil}
&\hbox {\hfil $\displaystyle {#}$}
&\hbox {\hfil $\displaystyle {#}$\hfil}
&\hbox {$\displaystyle {#}$\hfil}\cr
G &=& \sqrt{a(1-e^2)}\ ,
\qquad
&\lambda &=& M+\varpi-M^{\prime}-\varpi^{\prime}\ ,
\cr
\Gamma &=& \sqrt{a}\left(1-\sqrt{1-e^2}\right)\ ,
\qquad
&\ell &=& M\ .
\cr
}}
\label{eq:Delaunay-coord}
\end{equation}
According to the definitions of the orbital elements above,
$\lambda$ is the \emph{synodic} mean longitude.
In order to remove the \emph{fictitious} singularity of the Delaunay--like
variables when $e=0$ (i.e., for circular orbits), it is
convenient to introduce canonical coordinates
$(\rho,\xi,\lambda,\eta)$, similar to Poincar\'e coordinates
for the {\em planar Keplerian problem}. Thus, let us define
\begin{equation}
\vcenter{\openup1\jot\halign{ \hbox {\hfil $\displaystyle {#}$} &\hbox
{\hfil $\displaystyle {#}$\hfil} &\hbox {$\displaystyle {#}$\hfil}
&\hbox {\hfil $\displaystyle {#}$} &\hbox {\hfil $\displaystyle
{#}$\hfil} &\hbox {$\displaystyle {#}$\hfil}\cr \rho &= & G-1\ ,
\qquad &\lambda &= & \lambda\ , \cr \xi &=
&\sqrt{2\Gamma}\cos\ell\ , \qquad &\eta &=
&\sqrt{2\Gamma}\sin\ell\ , \cr }}
\label{eq:Garfinkel-coord}
\end{equation}
where the values of $\rho$, $\xi$ and $\eta$ remain small
in a region surrounding the triangular Lagrangian points, for
instance, in the case of tadpole or horseshoe orbits (since $a\simeq
1$ and $e\gtrsim 0$). Let us recall that those variables have been
introduced in~\cite{Garfinkel-77} to study the PCRTBP.
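For concreteness, the change from planar orbital elements to these variables can be sketched in a few lines of Python (the function name and interface are ours, for illustration only):
\begin{verbatim}
import numpy as np

def to_poincare_delaunay(a, e, M, varpi, Mp, varpip):
    # modified Delaunay actions, Eq. (Delaunay-coord)
    G = np.sqrt(a * (1.0 - e**2))
    Gamma = np.sqrt(a) * (1.0 - np.sqrt(1.0 - e**2))
    # synodic mean longitude (primed elements refer to the primary)
    lam = (M + varpi - Mp - varpip) % (2.0 * np.pi)
    # Poincare-like variables, Eq. (Garfinkel-coord), with ell = M
    rho = G - 1.0
    xi = np.sqrt(2.0 * Gamma) * np.cos(M)
    eta = np.sqrt(2.0 * Gamma) * np.sin(M)
    return rho, xi, lam, eta
\end{verbatim}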
Before starting the construction of the normal form, it is necessary
to express Hamiltonian~\eqref{eq:ham2}, in particular the disturbing
function $F$ in~\eqref{eq:perturbing-term}, in terms of
Poincar\'e--Delaunay--like coordinates
$(\rho,\xi,\lambda,\eta)$. Here, we limit ourselves to sketch this
preliminary procedure\footnote{An exhaustive description of those
expansions is beyond the scope of this publication. All the details
are deferred to P\'aez, R.I., \emph{Ph.D. Thesis} (2015), that is in
preparation and is available on request to the
author.}. Essentially, as a first step, the terms $1/r$, $r^2$ and
$\mathbf{r}\cdot\mathbf{r}'$ (appearing in the Hamiltonian) must be
expanded with respect to orbital elements $a$, $e$, $M$ and $\lambda$,
following e.g. \S6 in \cite{MurrDermm-1999}. Afterwards, the
eccentricity $e$ is replaced by its expansion in power series of the
parameter $\sqrt{2\Gamma/G}$. By using the equations $a=(G+\Gamma)^2$,
$G=1+\rho$, $\Gamma=(\xi^2+\eta^2)/2$, we can then express $1/r$,
$r^2$ and $\mathbf{r}\cdot\mathbf{r}'$ as functions of the canonical
variables~\eqref{eq:Garfinkel-coord}.
As explained in the Introduction, the particular normalizing scheme we
use allows us to keep a functional dependence on $\lambda$ that is neither
polynomial nor trigonometric. By making use of this idea, we keep powers
of $\beta(\lambda)=1/\sqrt{2-2\cos\lambda}$, in the expansions of the
term $1\over\Delta$ of $F$, which describes the main part of the
inverse of the distance from the primary (for values of the major
semi-axis $a\simeq 1$ and small eccentricity). Thus, the Hamiltonian
ruling the motion of the third body takes the following
form \footnote{The first term in~\eqref{eq:initial-Ham-rewritten}
comes from the expansion of the Keplerian part, i.e. the part of the
Hamiltonian independent of $\mu$, in terms of $\xi$, $\eta$ and
$\rho$, that corresponds to ${\cal K}=
-\frac{1}{2\left(\rho+1+\frac{\xi^2+\eta^2}{2}\right)^2} -\rho -1$.}
\begin{equation}
\vcenter{\openup1\jot\halign{ \hbox {\hfil $\displaystyle {#}$} &\hbox
{\hfil $\displaystyle {#}$\hfil} &\hbox {$\displaystyle
{#}$\hfil}\cr {\cal H}(\rho,\xi,\lambda,\eta) &= & -\frac{1}{2}
\sum_{j=0}^{\infty} (-1)^{j} (j+1)
\left(\rho+\frac{\xi^2+\eta^2}{2} \right)^{j} \, -1 -\rho
\,+\,\mu\big(1+\cos(\lambda)-\beta(\lambda)\big) \cr &+ \mu
&\sum_{l=1}^{\infty}\,\sum_{{\scriptstyle{m_1+m_2}}\atop{\scriptstyle
{+m_3=l}}} \ \sum_{{\scriptstyle{k_1+k_2\le
l}}\atop{\scriptstyle {j\le 2l+1}}}
b_{m_1,m_2,m_3,k_1,k_2,j}\,\rho^{m_1}\xi^{m_2}\eta^{m_3}
\,\cos^{k_1}(\lambda)\sin^{k_2}(\lambda)\,\beta^j(\lambda)\ , \cr
}}
\label{eq:initial-Ham-rewritten}
\end{equation}
where $b_{m_1,m_2,m_3,k_1,k_2,j}$ are rational numbers.
Actually, we just produce a truncated expansion of
formula~\eqref{eq:initial-Ham-rewritten} up to a finite polynomial
degree in $\rho$, $\xi$ and $\eta$, by using {\tt Mathematica}. By
inspection, we check that both properties $k_1+k_2\leq l$ and
$j\leq2l+1$ are satisfied in our initial expansion, and we just {\it
conjecture} that they hold true also at any higher polynomial
degree in $\rho$, $\xi$ and $\eta$ (such a proof being beyond the
scope of this work).
\subsection{Algorithm constructing the normal form averaged over
the fast angle}\label{sbs:average}
Let us focus on the first main terms of the Keplerian part:
\begin{equation}
-\frac{3}{2}+\frac{\xi^2+\eta^2}{2}
-\frac{3}{2}\left[\rho+\frac{\xi^2+\eta^2}{2}\right]^2 + \ldots\, =\,
-\frac{3}{2}+\Gamma-\frac{3}{2}(\rho+\Gamma)^2 + \ldots\ ,
\label{eq:quadr-part-Kepl-appr}
\end{equation}
where we refer to the dynamics of the new canonical
coordinates $(\xi,\eta)$ by retaining the old action--angle pair
$(\Gamma,\ell)$, which allows us to sketch our strategy in a
simpler way. Indeed, the previous formula shows that the angular
velocities have different order of magnitude
\begin{equation}
\dot\lambda=\frac{\partial\,{\cal H}}{\partial\rho}\simeq 0\ ,
\qquad
\dot\ell=\frac{\partial\,{\cal H}}{\partial\Gamma}\simeq 1\ ,
\label{eq:slow-fast-ang-velocities}
\end{equation}
because $\mu\ll 1$ and the values of the actions $\rho$ and $\Gamma$
are small in a region surrounding the Lagrangian points. Thus,
$\lambda$ can be seen as a slow angle and $\ell$ as a fast angle. This
motivates averaging the Hamiltonian over the fast angle (see, e.g.,~\S~52 of \cite{Arnold-book-analyt-mech}), in order to focus mainly on
the secular evolution of the system. Therefore, we want to remove (most
of) the terms depending on the fast angle $\ell$, which can be
conveniently done in the setting of the variables
$(\rho,\xi,\lambda,\eta)$, by performing a sequence of canonical
transformations. In the following, this strategy is translated into an
explicit algorithm using Lie series, by adapting to the present
context an approach that has been fruitfully applied to various
problems in Celestial Mechanics (for instance, for locating the
elliptic lower-dimensional tori in a rather realistic four-body
planetary model in~\cite{San-Loc-Gio-2011}, or studying the secular
behavior of the orbital elements of extrasolar planets
in~\cite{Lib-San-2013}). By applying such a procedure
to our Hamiltonian model, we construct an averaged normal form,
satisfying two important properties: (I)~it provides an accurate
approximation of the starting model; (II)~it only depends on the
actions and one of the angles and, therefore, it is integrable.
Our final normal form is produced by an algorithm dealing
with a sequence of Hamiltonians, whose expansions can be
conveniently described after introducing the following
\begin{definition}
A generic function $g=g(\rho,\xi,\lambda,\eta)$ belongs to the class
${\cal P}_{l,s}\,$, if its expansion is of the type:
$$
\sum_{2m_1+m_2+m_3=l}
\ \sum_{{\scriptstyle{k_1+k_2\le l+4s-3}}\atop{\scriptstyle {j\le 2l+7s-6}}}
c_{m_1,m_2,m_3,k_1,k_2,j}\,\rho^{m_1}\xi^{m_2}\eta^{m_3}
\,(\cos\lambda)^{k_1}(\sin\lambda)^{k_2}\,\big(\beta(\lambda)\big)^j\ ,
$$
where $c_{m_1,m_2,m_3,k_1,k_2,j}$ are real coefficients.
\label{def-functions-classes}
\end{definition}
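As a simple illustration of this bookkeeping (a helper of our own, not part of the paper's code), one can test whether the exponents of a monomial are compatible with the class ${\cal P}_{l,s}$:
\begin{verbatim}
def in_class_P(l, s, m1, m2, m3, k1, k2, j):
    # exponents of rho^m1 xi^m2 eta^m3 cos^k1 sin^k2 beta^j,
    # cf. Definition (def-functions-classes)
    return (2*m1 + m2 + m3 == l
            and k1 + k2 <= l + 4*s - 3
            and j <= 2*l + 7*s - 6)
\end{verbatim}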
\noindent
Let $r_1$ and $r_2\,$ be two integer counters, running in the
intervals $[1\,,\,R_1]$ and $[0\,,\,R_2]$, respectively, with
$R_1\,,\,R_2\in\mathbb{N}$ fixed numbers. At each $(r_1,r_2)$--th step,
our algorithm introduces a new Hamiltonian $H^{(r_1,r_2)}$ such that
\begin{equation}
\vcenter{\openup1\jot\halign{
\hbox {\hfil $\displaystyle {#}$}
&\hbox {\hfil $\displaystyle {#}$\hfil}
&\hbox {$\displaystyle {#}$\hfil}\cr
H^{(r_1,r_2)}(\rho,\xi,\lambda,\eta) &=
&\frac{\xi^2+\eta^2}{2}+
\sum_{l\ge 4}Z_l^{(0)}\Big(\rho,\frac{\xi^2+\eta^2}{2}\Big)
\cr
&+ & \sum_{s=1}^{r_1-1}\left(\sum_{l=0}^{R_2}
\mu^s Z_l^{(s)}\Big(\rho,\frac{\xi^2+\eta^2}{2},\lambda\Big)
+\sum_{l>R_2}\mu^{s}f_l^{(r_1,r_2;s)}(\rho,\xi,\lambda,\eta)\right)
\cr
&+ &\sum_{l=0}^{r_2}\mu^{r_1}
Z_l^{(r_1)}\Big(\rho,\frac{\xi^2+\eta^2}{2},\lambda\Big)
\,+\,\sum_{l>r_2}\mu^{r_1}f_l^{(r_1,r_2;r_1)}(\rho,\xi,\lambda,\eta)
\cr
&+ &\sum_{s>r_1}\sum_{l\ge 0}\mu^sf_l^{(r_1,r_2;s)}(\rho,\xi,\lambda,\eta)\ ,
\cr
}}
\label{eq:H(r1,r2)}
\end{equation}
where $Z_l^{(0)}\in{\cal P}_{l,0}$ $\forall\ l\ge 4$,
$Z_l^{(s)}\in{\cal P}_{l,s}$ $\forall\ 0\le l\le R_2\,,\ 1\le s<r_1\,$,
$Z_l^{(r_1)}\in{\cal P}_{l,r_1}$ $\forall\ 0\le l\le r_2\,$,
$f_l^{(r_1,r_2;r_1)}\in{\cal P}_{l,r_1}$ $\forall\ l> r_2\,$,
$f_l^{(r_1,r_2;s)}\in{\cal P}_{l,s}$ $\forall\ l>R_2\,,\ 1\le s<r_1\,$
and $\forall\ l\ge 0,\ s>r_1\,$. From ~\eqref{eq:H(r1,r2)}, we
emphasize that
\begin{itemize}
\item The splitting of the Hamiltonian into sub-functions belonging to
different sets ${\cal P}_{l,s}$ basically gathers all the terms with the
same order of magnitude $\mu^s$ and total degree $l/2$ (which can be
half-odd) in the actions $\rho$ and $\Gamma\,$. This is done in
order to develop a normalization procedure, exploiting the existence
of natural small parameters: $\mu$ and the values of the
pair of actions $(\rho,\Gamma)$.
\item All the terms $Z_l^{(s)}$ and $f_l^{(r_1,r_2;s)}$ appearing in
equation~\eqref{eq:H(r1,r2)} are made by expansions including a {\it
finite} number of monomials of the type described in
Definition~\ref{def-functions-classes}.
\item At the beginning of our algorithm, we can set $H^{(1,0)}={\cal H}$,
because the expansion~\eqref{eq:initial-Ham-rewritten} of the
initial Hamiltonian ${\cal H}$ can be expressed as in
equation~\eqref{eq:H(r1,r2)}.
\end{itemize}
Our algorithm requires just $R_1R_2$ normalization steps, which are
performed by constructing the finite sequence of Hamiltonians
\begin{displaymath}
H^{(1,0)}={\cal H},\ H^{(1,1)},\ \ldots\,,\ H^{(1,R_2)},\ \ldots\,,
\ H^{(R_1,0)},\ H^{(R_1,1)},\ \ldots\,,\ H^{(R_1,R_2)}~~.
\end{displaymath}
They are defined so that $H^{(r_1+1,0)}=H^{(r_1,R_2)}$ $\forall\ 1\le
r_1<R_1$ and the $(r_1,r_2)$--th normalization step is performed by a
canonical transformation. This recursively introduces the new
Hamiltonian in such a way that
\begin{equation}
H^{(r_1,r_2)}=\exp\Big({\Lscr}_{\mu^{r_1}\chi_{r_2}^{(r_1)}}\Big)H^{(r_1,r_2-1)}\ ,
\label{eq:def-funzionale-H(r1,r2)}
\end{equation}
with a generating function
$\chi_{r_2}^{(r_1)}=\chi_{r_2}^{(r_1)}(\rho,\xi,\lambda,\eta)$ that
is determined by solving a so--called ``homological equation'', while
$\exp\big({\Lscr}_{\chi}\big)\,\cdot=\sum_{j\ge
0}\frac{1}{j!}{\Lscr}_{\chi}^j\,\cdot$ denotes nothing but the Lie
series operator. The Lie derivative ${\Lscr}_{\chi} g=\poisson{g}{\chi}$
is such that $\poisson{\cdot}{\cdot}$ is the classical Poisson
bracket, with $g$ a generic function defined on the phase space and
$\chi$ any generating function (see, e.g.,~\cite{Giorgilli-2003.1}
for an introduction to canonical transformations expressed by Lie
series in the context of the Hamiltonian perturbation theory). All
the recursive formulas, which determine the terms of type $Z$ and $f$
appearing in~\eqref{eq:H(r1,r2)}, are reported in
Appendix~\ref{sec:techn_things}. Let us stress that, after each
transformation, in the present subsection we do not change the name of
the canonical variables in order to simplify the notation. We
emphasize here that the new generating function introduced at the
generic $(r_1,r_2)$--th step, namely
$\mu^{r_1}\chi_{r_2}^{(r_1)}(\rho,\xi,\lambda,\eta)$, is determined
so as to remove from the main perturbing term\footnote{Let us recall
that the size of $\mu^{s}f_{r_2}^{(r_1,r_2-1;s)}\in{\cal P}_{r_2,s}$ is
expected to decrease when the indices $s$ or $r_2$ are increased,
because the values of $\mu$, $\rho$ and $\sqrt{\xi^2+\eta^2}$ are
assumed to be small.} $\mu^{r_1}f_{r_2}^{(r_1,r_2-1;r_1)}$ its
subpart that is not in normal form.
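A minimal {\tt sympy} sketch of a truncated Lie series operator (our own illustration, assuming for definiteness that $(\lambda,\eta)$ act as coordinates and $(\rho,\xi)$ as the conjugate momenta, consistently with $\xi=\sqrt{2\Gamma}\cos\ell$, $\eta=\sqrt{2\Gamma}\sin\ell$; the paper's actual implementation is in {\tt Mathematica} and exploits the classes ${\cal P}_{l,s}$):
\begin{verbatim}
import sympy as sp

rho, xi, lam, eta = sp.symbols('rho xi lamda eta')

def poisson(f, g):
    # {f, g} with coordinates (lamda, eta) and momenta (rho, xi)
    return (sp.diff(f, lam)*sp.diff(g, rho) - sp.diff(f, rho)*sp.diff(g, lam)
            + sp.diff(f, eta)*sp.diff(g, xi) - sp.diff(f, xi)*sp.diff(g, eta))

def lie_exp(chi, f, order):
    # exp(L_chi) f = sum_{j>=0} (1/j!) L_chi^j f, with L_chi f = {f, chi},
    # truncated at the given order
    out, term = f, f
    for j in range(1, order + 1):
        term = sp.expand(poisson(term, chi)) / j
        out += term
    return sp.expand(out)

# e.g. a near-identity deformation of rho generated by chi = c*sin(lamda):
c = sp.symbols('c')
print(lie_exp(c * sp.sin(lam), rho, 3))   # -> rho - c*cos(lamda)
\end{verbatim}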
We rewrite the final Hamiltonian in such a way to distinguish the
normal form part from the rest, as follows:
\begin{equation}
H^{(R_1,R_2)}(\rho,\xi,\lambda,\eta) =
{\cal Z}^{(R_1,R_2)}\big(\rho,(\xi^2+\eta^2)/2,\lambda\big)
+{\cal R}^{(R_1,R_2)}(\rho,\xi,\lambda,\eta)\ ,
\label{eq:H(R1,R2)}
\end{equation}
where all the averaged terms of type $Z$ are gathered into the
integrable part ${\cal Z}^{(R_1,R_2)}$, while the others contribute to
the remainder ${\cal R}^{(R_1,R_2)}$. Here, our algorithm is described at
a {\it purely formal} level in the sense that the problem of the
analytic convergence of the series on some domains is not
considered. However, it is natural to expect that our procedure
defines {\it diverging} series in the limit $R_1,R_2\to\infty$,
because a non-integrable Hamiltonian cannot be transformed into an
integrable one on any open domain. In principle, the values of both
integer parameters $R_1$ and $R_2$ should be carefully chosen in
such a way to reduce the size of ${\cal R}^{(R_1,R_2)}$ as much as
possible. In practice, we simply fixed the values of
$R_1$ and $R_2$ according to the computational resources, in order to
deal with the application described in the following
section~\ref{sec:results_sect}.
\subsection{Numerical computation of the flow induced by
the integrable approximation}\label{sbs:semi-analytical_integr-scheme}
Lie series induce canonical transformations in a Hamiltonian
framework; this fundamental feature allows us to design a numerical
integration method, by using both the normal form previously discussed
and the corresponding canonical coordinates. Let us denote with
$\big(\rho^{(r_1,r_2)},\xi^{(r_1,r_2)},\lambda^{(r_1,r_2)},\eta^{(r_1,r_2)}\big)$
the set of canonical coordinates related to the $(r_1,r_2)$--th
normalization step. By applying the so--called 'exchange theorem'
(see~\cite{Giorgilli-2003.1}), we have that
\begin{equation}
H^{(r_1,r_2)}
\big(\rho^{{\scriptscriptstyle (r_1,r_2)}},\xi^{{\scriptscriptstyle (r_1,r_2)}},
\lambda^{{\scriptscriptstyle (r_1,r_2)}},\eta^{{\scriptscriptstyle (r_1,r_2)}}\big)
=H^{(r_1,r_2-1)}\Big(\varphi^{(r_1,r_2)}
\big(\rho^{{\scriptscriptstyle (r_1,r_2)}},\xi^{{\scriptscriptstyle (r_1,r_2)}},
\lambda^{{\scriptscriptstyle (r_1,r_2)}},\eta^{{\scriptscriptstyle (r_1,r_2)}}\big)\Big)\ ,
\label{eq:exchange-theorem}
\end{equation}
where the variables related to the previous step, i.e.
$(\rho^{{\scriptscriptstyle (r_1,r_2-1)}},\xi^{{\scriptscriptstyle (r_1,r_2-1)}},
\lambda^{{\scriptscriptstyle (r_1,r_2-1)}},\eta^{{\scriptscriptstyle (r_1,r_2-1)}})$,
are given as
\begin{equation}
\varphi^{(r_1,r_2)}
\big(\rho^{(r_1,r_2)},\xi^{(r_1,r_2)},\lambda^{(r_1,r_2)},\eta^{(r_1,r_2)}\big)=
\exp\Big({\Lscr}_{\mu^{r_1}\chi_{r_2}^{(r_1)}}\Big)
\big(\rho^{(r_1,r_2)},\xi^{(r_1,r_2)},\lambda^{(r_1,r_2)},\eta^{(r_1,r_2)}\big)\ .
\label{eq:coord-change-(r1,r2)}
\end{equation}
According to this, the Lie series must be applied separately to each
variable, in order to properly define all the coordinates for the
canonical transformation $\varphi^{(r_1,r_2)}$. Thus, the whole
normalization procedure is described by the canonical transformation
\begin{equation}
{\cal C}^{(R_1,R_2)}=\varphi^{(1,1)}\circ\ldots\circ\varphi^{(1,R_2)}\circ\ldots
\circ\varphi^{(R_1,1)}\ldots\circ\varphi^{(R_1,R_2)} \ .
\label{eq:def-total-canonical-transf}
\end{equation}
Such a composition of all the intermediate changes of variables can be
used for providing the following semi-analytical scheme to integrate
the equations of motion:
\begin{equation}
\vcenter{\openup1\jot\halign{
\hbox to 34 ex{\hfil $\displaystyle {#}$\hfil}
&\hbox to 11 ex{\hfil $\displaystyle {#}$\hfil}
&\hbox to 34 ex{\hfil $\displaystyle {#}$\hfil}\cr
\left(\rho^{(0,0)}(0),\xi^{(0,0)}(0),
\lambda^{(0,0)}(0),\eta^{(0,0)}(0)\right)
&\build{\longrightarrow}_{}^{{{\scriptstyle
\big({\cal C}^{(R_1,R_2)}\big)^{-1}}
\atop \phantom{0}}}
&\left(\rho^{(R_1,R_2)}(0),\xi^{(R_1,R_2)}(0),
\lambda^{(R_1,R_2)}(0),\eta^{(R_1,R_2)}(0)\right)
\cr
& &\big\downarrow \build{\Phi_{{\cal Z}^{(R_1,R_2)}}^{t}}_{}^{}
\cr
\left(\rho^{(0,0)}(t),\xi^{(0,0)}(t),
\lambda^{(0,0)}(t),\eta^{(0,0)}(t)\right)
&\build{\longleftarrow}_{}^{{{\scriptstyle {\cal C}^{(R_1,R_2)}} \atop \phantom{0}}}
&\left(\rho^{(R_1,R_2)}(t),\xi^{(R_1,R_2)}(t),
\lambda^{(R_1,R_2)}(t),\eta^{(R_1,R_2)}(t)\right)
\cr
}}
\qquad\qquad\ ,
\label{semi-analytical_scheme}
\end{equation}
where $\Phi_{{\cal K}}^{t}$ denotes the flow induced on the canonical
coordinates by the generic Hamiltonian ${\cal K}$ for an interval of time
equal to $t$. Let us emphasize that the above integration scheme
provides just an {\it approximate} solution. From an ideal point of
view (i.e., if all the expansions were performed without errors and
truncations), formula~\eqref{semi-analytical_scheme} would be exact if
the normal form part ${\cal Z}^{(R_1,R_2)}$ corresponded to the
complete Hamiltonian $H^{(R_1,R_2)}$; moreover, ${\cal Z}^{(R_1,R_2)}$ is
{\it integrable} and its flow is easy to compute\footnote{In order to
explicitly describe the solutions of the equation of motions for the
normal form ${\cal Z}^{(R_1,R_2)}$, it is convenient to introduce the
temporary action--angle variables
$\big(\Gamma^{(R_1,R_2)},\ell^{(R_1,R_2)}\big)$ such that
$\xi^{(R_1,R_2)}=\sqrt{2\Gamma^{(R_1,R_2)}}\cos\ell^{(R_1,R_2)}$
and
$\eta^{(R_1,R_2)}=\sqrt{2\Gamma^{(R_1,R_2)}}\sin\ell^{(R_1,R_2)}$,
where $\Gamma^{(R_1,R_2)}$ is a constant of motion for the normal
form
${\cal Z}^{(R_1,R_2)}={\cal Z}^{(R_1,R_2)}\big(\rho^{(R_1,R_2)},\Gamma^{(R_1,R_2)},\lambda^{(R_1,R_2)}\big)$. By
considering $\Gamma^{(R_1,R_2)}$ as a fixed parameter and using the
standard quadrature method for conservative systems with $1$~d.o.f.,
one can compute $\rho^{(R_1,R_2)}(t)$ and $\lambda^{(R_1,R_2)}(t)$
at any time $t\,$. The same can be done for the evolution of
$\ell^{(R_1,R_2)}(t)$, by evaluating the integral corresponding to
the differential equation
${\dot\ell}^{(R_1,R_2)}=\frac{\partial\,{\cal Z}^{(R_1,R_2)}}{\partial\Gamma^{(R_1,R_2)}}\,$. For
practical purposes, the application of the classical quadrature
method can be replaced by any numerical integrator that is precise
enough. Finally, the values of $\xi^{(R_1,R_2)}(t)$ and
$\eta^{(R_1,R_2)}(t)$ can be directly calculated from those of the
corresponding action--angle variables, that are
$\Gamma^{(R_1,R_2)}(t)$ and $\ell^{(R_1,R_2)}(t)$.}, which is why
using ${\cal Z}^{(R_1,R_2)}$ becomes valuable. According to
equation~\eqref{eq:H(R1,R2)}, the approximate solution provided by the
scheme~\eqref{semi-analytical_scheme} is the more accurate, the smaller
the perturbing part ${\cal R}^{(R_1,R_2)}$ is with respect to
${\cal Z}^{(R_1,R_2)}$.
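Schematically, the scheme takes only a few lines; in the following Python sketch, {\tt C}, {\tt C\_inv} and {\tt nf\_flow} are placeholders (a hypothetical interface of ours) for the truncated transformation ${\cal C}^{(R_1,R_2)}$, its inverse and the flow $\Phi^t_{{\cal Z}^{(R_1,R_2)}}$:
\begin{verbatim}
def semi_analytic_orbit(z0, times, C, C_inv, nf_flow):
    # map the initial condition to normalized variables, flow it with
    # the integrable normal form, then map each point back to the
    # original variables, following scheme (semi-analytical_scheme)
    w0 = C_inv(z0)
    return [C(nf_flow(w0, t)) for t in times]
\end{verbatim}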
A key point concerns the structure of our expansions. If we
{\it truncate} the r.h.s. of equation~\eqref{eq:H(r1,r2)} up to a
finite order of magnitude in $\mu$ and a fixed total degree $l/2$ in
the actions $\rho$ and $\Gamma=(\xi^2+\eta^2)/2$, since each term of
type $Z$ and $f$ is included in a corresponding class of functions
${\cal P}_{l,s}\,$, the truncated expansion of $H^{(r_1,r_2)}$ just
includes a {\it finite} number of monomials. The same applies also to
the truncated expansions of both the final Hamiltonian $H^{(R_1,R_2)}$
and the canonical transformation ${\cal C}^{(R_1,R_2)}$. Therefore,
after introducing the truncation rules, the normalization algorithm
requires a {\it finite} total number of operations to compute the
elements necessary to implement the whole integration
scheme~\eqref{semi-analytical_scheme}. Thus, it can be translated into a
programming code.
In order to concretely apply our semi-analytical scheme, by using {\tt
Mathematica}, we compute the truncations up to the terms
${\cal O}(\mu^3)$ and to the fifth total degree with respect to the
square roots of the actions $\rho$ and $(\xi^2+\eta^2)/2$
(i.e. $R_1=3$, $R_2=5$, in order to obtain ${\cal Z}^{(3,5)}\simeq
H^{(3,5)}$, ${\cal C}^{(3,5)}$ and its inverse). At the end of its
execution, that code writes, to an external {\it file}, {\bf C}
functions which are able to compute the changes of coordinates induced
by ${\cal C}^{(3,5)}$ and by its inverse. Moreover, it also provides
another external {\it file}, where the integrable equations of motion
related to the Hamiltonian ${\cal Z}^{(3,5)}$ are written according to
the syntax of the {\tt Taylor}\footnote{{\tt Taylor} is an automatic
translator producing a {\bf C} function acting as a time-stepper
specific for a given ordinary differential equation (by means of the
Taylor method). It is publicly available at the following website:
{\tt http://www.maia.ub.es/{$\sim$}angel/soft.html}} software
package. A basic use of {\tt Linux} {\it shell scripting} allowed
us to automatically combine all these parts of the computational
procedure, so as to produce a new {\bf C} program as output. This
final code is able to numerically integrate the equations of motion
via the scheme~\eqref{semi-analytical_scheme}, where the flow
$\Phi_{{\cal Z}^{(3,5)}}^{t}$ is computed by using the Taylor method
(based on the automatic differentiation technique,
see~\cite{Jor-Zou-2005} and~\cite{Barrio-et-al-2012}). The truncation
rules are fixed in such a way to execute all the computations in a
reasonable amount of CPU-time.
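To give an idea of the kind of time-stepper produced by {\tt Taylor}, here is a toy Python version for the harmonic oscillator $\dot x = y$, $\dot y = -x$ (a simplified example of ours; the actual package derives such recurrences automatically for the equations of motion of ${\cal Z}^{(3,5)}$):
\begin{verbatim}
def taylor_step(x0, y0, h, order=10):
    # Taylor coefficients from the recurrences induced by x' = y, y' = -x:
    # (k+1) x_{k+1} = y_k   and   (k+1) y_{k+1} = -x_k
    X, Y = [x0], [y0]
    for k in range(order):
        X.append(Y[k] / (k + 1))
        Y.append(-X[k] / (k + 1))
    # evaluate the truncated Taylor polynomials at t = h
    x = sum(c * h**k for k, c in enumerate(X))
    y = sum(c * h**k for k, c in enumerate(Y))
    return x, y
\end{verbatim}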
\section{Accuracy control for the normal form}\label{sec:results_sect}
\nobreak An averaged model approximating a certain problem is powerful
to the extent that it reproduces the main features of the original
system (in this case the Planar Circular Restricted Three Body
Problem). It is well known that the PCRTBP is only a very simplistic
representation of Trojan motion, but its basic dynamics allows us
to test the normalization method in a simple way. Furthermore,
according to the nature of the method, much more complex models (i.e.,
non-planar, eccentric, or including additional planets) can be
treated without any substantial change in the averaging scheme.
When seen from the most commonly used reference system (i.e. a
\emph{synodic} rotating frame with origin at the barycenter), motions
around the equilateral Lagrangian points are represented by the
so-called \emph{tadpole} or \emph{horseshoe} orbits (see \S3.9 of
\cite{MurrDermm-1999}). In our set of variables, these orbits are
characterized by large variations of the angle $\lambda$. As a first
requirement, a suitable averaged approximation has to be able to
reproduce these variations correctly. In particular, this should hold
even in the challenging case of bodies whose orbits lie very close to
the border of the stability domain. As a second goal, the averaged
Hamiltonian should be able also to distinguish between tadpole orbits
(around just one equilateral equilibrium point) and horseshoe orbits
(around both points). In Section~\ref{sec:expl_alg}, we have provided
an integrable averaged Hamiltonian ${\cal Z}^{(R_1,R_2)}$
(see~\ref{eq:H(R1,R2)}), that explicitly describes the behavior of the
degree of freedom related to the canonical pair of variables
($\rho$,$\lambda$). In order to compare the original problem with our
averaged version, we develop two different tests.
\subsection{Numerical surfaces of section vs. semi-analytical level curves}\label{sbs:graph_test}
\nobreak The first test consists of a graphical comparison between the
orbits provided by the complete Hamiltonian and those provided by
normal form ${\cal Z}^{(R_1,R_2)}$. In both cases, the initial conditions
are fictitious, but each set is derived from the catalogued
position of a real generating Trojan body. Since we are just
interested in the behavior of the slow degree of freedom, for the
complete problem (originally 4D), we just retain the evolution of the
variables $\lambda$ and $\rho$ by means of isoenergetic surfaces of
section, which gives a 2D representation. In the two cases presented
below, the \emph{generating bodies} are 2010 TK$_7$, Trojan asteroid
of the Sun-Earth system and the asteroid 1872 Helenos of the
Sun-Jupiter system.
From a catalogue, we obtain the coordinates (rotated to the plane
of the primaries) of each generating body for a certain epoch and we
convert them to our canonical coordinates
$(\rho_{gb},\lambda_{gb},\xi_{gb},\eta_{gb})$. This initial condition
provides the Jacobi constant $C_{J_{gb}}$ for the body, after which it
is possible to produce an isoenergetic surface of section. We generate
a set of 10 new orbits by keeping fixed the initial values for the
variables $\rho=\rho_{gb}$ and $\eta=\eta_{gb}$, scanning a range of
values for $\lambda$ and obtaining $\xi$ in such a way that
$C_J(\rho,\lambda,\xi,\eta) = C_{J_{gb}}$ (isoenergetic orbits).
These initial conditions are numerically integrated for a short time,
until the variables satisfy the relation $M(\xi,\eta) = 0$, where
$M$ corresponds to the mean anomaly. We call this new set
${\cal S}_{gb}$, and it generates both the surface of section and the
level curves.
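The construction of ${\cal S}_{gb}$ amounts to a one-dimensional root search at fixed $\rho$ and $\eta$; the following Python sketch assumes a user-supplied function {\tt CJ(rho, lam, xi, eta)} returning the Jacobi constant in our variables (a hypothetical interface, for illustration only):
\begin{verbatim}
from scipy.optimize import brentq

def isoenergetic_set(CJ, rho_gb, eta_gb, CJ_gb, lam_values,
                     xi_bracket=(-0.5, 0.5)):
    # keep rho and eta fixed, scan lambda, and solve CJ(...) = CJ_gb
    # for xi, yielding isoenergetic initial conditions
    out = []
    for lam in lam_values:
        f = lambda xi: CJ(rho_gb, lam, xi, eta_gb) - CJ_gb
        try:
            xi = brentq(f, *xi_bracket)
            out.append((rho_gb, lam, xi, eta_gb))
        except ValueError:
            pass  # no sign change in the bracket: skip this lambda
    return out
\end{verbatim}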
\subsubsection{Computation of surfaces of section}\label{sss:surf_sec}
Starting from the Newtonian formulation of the PCRTBP, we derive the
equations of motion for the third body in cartesian coordinates in the
heliocentric system\footnote{Although cartesian coordinates are not
canonically conjugate variables, they provide the simplest
closed form for these equations of motion, and for this
reason they are still widely used in numerical experiments.}
\begin{equation}\label{eq:eqs-mot-complete-crtbp}
\vcenter{\openup1\jot\halign{
\hbox {\hfil $\displaystyle {#}$}
&\hbox {\hfil $\displaystyle {#}$\hfil}
&\hbox {$\displaystyle {#}$\hfil}
\qquad
&\hbox {\hfil $\displaystyle {#}$\hfil}
&\hbox {\hfil $\displaystyle {#}$\hfil}
&\hbox {\hfil $\displaystyle {#}$\hfil}\cr
\dot{x} & = & v_x\ ,
&\dot{v}_x & =
& - \frac{(1-\mu)x}{(x^2+y^2)^{3/2}}
- \mu \left(\frac{x- x_P}{\big((x-x_P)^2+(y-y_P)^2\big)^{3/2}}
+ \frac{x_P}{(x_P^2+y_P^2)^{3/2}} \right)\ ,
\cr
\dot{y} & = & v_y\ ,
&\dot{v}_y & = & - \frac{(1-\mu)y}{(x^2+y^2)^{3/2}}
-\mu \left( \frac{y- y_P}{\big((x-x_P)^2+(y-y_P)^2\big)^{3/2}}
+ \frac{y_P}{(x_P^2+y_P^2)^{3/2}} \right)\ .
\cr
}}
\end{equation}
where $x$, $y$, $v_x$, $v_y$ correspond to cartesian positions and
velocities of the massless body, and $x_P$, $y_P$ give the
instantaneous position of the planet. We translate initial conditions
of the set ${\cal S}_{gb}$ to cartesian coordinates and we integrate them
with a Runge-Kutta $7\>$-$\>8^{\>{\rm th}}$ order integrator, along
1500 periods of the primaries, with time-step equal to $2\pi/100$. During this
integration we collect the points contained in the pericentric surface
of section, which in our case is represented by the condition
\begin{equation}\label{eq:peric-surf-sect}
M (\xi,\eta) = 0 \quad \mathrm{or\:equivalently} \quad \eta = 0 ~~,
\end{equation}
and we obtain about 1500 points per orbit. The output data is again
translated to Delaunay variables and is then suitable for comparison
with the results of the averaged Hamiltonian.
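A compact Python counterpart of this computation (a sketch of ours: we use a standard adaptive integrator rather than the Runge--Kutta 7--8 used for the figures, and we omit the collection of the pericentric section points, which requires the conversion to Delaunay variables):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.953855e-3  # Sun-Jupiter mass parameter used below

def rhs(t, s):
    # heliocentric PCRTBP, Eq. (eqs-mot-complete-crtbp); the planet
    # moves on the unit circle with mean motion n = 1, so |r'| = 1
    x, y, vx, vy = s
    xP, yP = np.cos(t), np.sin(t)
    r3 = (x**2 + y**2)**1.5
    d3 = ((x - xP)**2 + (y - yP)**2)**1.5
    ax = -(1 - mu)*x/r3 - mu*((x - xP)/d3 + xP)
    ay = -(1 - mu)*y/r3 - mu*((y - yP)/d3 + yP)
    return [vx, vy, ax, ay]

# initial condition at L4 (for illustration): r = 1, 60 deg ahead of
# the planet, on a circular orbit; reduce the time span for quick tests
s0 = [np.cos(np.pi/3), np.sin(np.pi/3), -np.sin(np.pi/3), np.cos(np.pi/3)]
sol = solve_ivp(rhs, (0.0, 2*np.pi*1500), s0, rtol=1e-10, atol=1e-12)
\end{verbatim}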
\subsubsection{Computation of level curves}\label{sss:lev_curv}
Through Hamilton's equations, we can
easily derive the equations of motion for the canonical variables
$\rho$ and $\lambda$,
\begin{equation}\label{eq:haml-eqs}
\dot{\rho} = -\frac{\partial {\cal Z}^{(R_1,R_2)}}{\partial \lambda} \qquad \dot{\lambda}
= \frac{\partial {\cal Z}^{(R_1,R_2)}}{\partial \rho} ~~.
\end{equation}
Since ${\cal Z}^{(R_1,R_2)}$ is given by a series expansion of the
variables $\rho$, $\Gamma$, $\cos{\lambda}$, $\sin{\lambda}$,
$\beta(\lambda)$ and the small parameter $\mu$, the equations of
motion inherit the same structure. Through a suitable storage of the
coefficients of these series and management of the equations of motion
implemented in {\tt Mathematica} (as explained in the last paragraph of
Subsection~\ref{sbs:semi-analytical_integr-scheme}), we compute the
evolution of the orbits according to the averaged Hamiltonian. Every
initial condition of the set ${\cal S}_{gb}$ is first converted to
normalized variables and then integrated until about 2000 points are
collected, keeping the relative energy error of the integration smaller
than $10^{-12}$. Such a numerical integration is an efficient way to
compute the {\it level curves} for the integrable normal form
${\cal Z}^{(R_1,R_2)}$ corresponding to the values
$\Gamma=(\xi^2+\eta^2)/2$ and
${\cal Z}^{(R_1,R_2)}(\rho,\Gamma,\lambda)$ (in normalized variables).
We complement every point of a level curve with the values
$\xi=\sqrt{2\Gamma}$ and $\eta=0$ (equivalent to $M=0$). Let us
note here that the condition $M=0$ in the normalized coordinates does
not correspond exactly to the surface of section $M=0$ in the original
variables. However, since the change of coordinates
${\cal C}^{(R_1,R_2)}$ (that gives the values of the non--normalized
variables in the scheme~\eqref{semi-analytical_scheme}) is, by
construction, a near-to-identity canonical transformation, we assume
that the conditions for each surface of section do not differ too much.
Finally, via ${\cal C}^{(R_1,R_2)}$, we back-transform all the points of
a level curve to the original variables and we graphically compare
them with the corresponding numerical surface of section.
\subsubsection{Examples and results}\label{sss:examples_results}
\begin{figure}
\includegraphics[width=.35\textwidth,angle=270]{twosurfaces.ps}
\includegraphics[width=.35\textwidth,angle=270]{comparisguy.ps}
\caption{Comparison between the level curves produced by the
averaged Hamiltonian in red (light gray) and the points of the
surface of section for the complete problem in blue (dark gray),
for the Sun-Earth problem (left panel) and Sun-Jupiter problem
(right panel). In the Sun-Earth case, the generating body is the
Earth Trojan 2010 TK$_7$. In the Sun-Jupiter case, the generating
body is Trojan asteroid 1872 Helenos. See text for more
details.}
\label{fig:com_surf}
\end{figure}
We choose two systems with very different values of the mass parameter
for a better contrast in the results of the test. The first case is
provided by the PCRTBP approach to the Sun-Earth system, which is
defined by a mass parameter $\mu=0.30003 \times 10^{-5}$. The
generating body chosen for this system is the Earth Trojan 2010
TK$_7$, whose coordinates we obtain from the Jet Propulsion
Laboratory (JPL) Ephemerides Service\footnote{{\tt
http://ssd.jpl.nasa.gov/?ephemerides}}, at epoch 2456987.5 JD
(2014-Nov-26). We translate them to our set of canonical variables,
resulting in $\rho_{TK7} = -1.8401447 \times 10^{-2}$, $\lambda_{TK7} =
3.5736334$, $\eta_{TK7} = 0.1152511$ and $\xi_{TK7} = -0.1530054\,$.
For the second case, we choose the Sun-Jupiter system, defined by the
mass parameter $\mu=0.953855\times10^{-3}$. The generating body in
this case is the Trojan asteroid 1872 Helenos, which belongs to
the Trojan camp around L5 in such a system. The set of initial
conditions for Helenos was obtained from the Bowell Catalogue\footnote{
{\tt http://www.naic.edu/$\sim$nolan/astorb.html}}, at 2452600.5~JD
(2002-Oct-22), and after being translated to canonical Delaunay
variables, they read $\rho_{1872} = -0.3836735\times10^{-2}$,
$\lambda_{1872} = 5.6716748$, $\eta_{1872} = -0.0154266$ and
$\xi_{1872} = -0.1104177\,$.
Figure~\ref{fig:com_surf} shows the comparison between the surface of
section and the level curves computed for Sun-Earth system (left
panel), and Sun-Jupiter system (right panel). For both cases, the
points of the surface of section are represented in blue (dark gray)
and the curves produced with the averaged Hamiltonian are in red
(light gray). For the Sun-Earth system, the agreement between
the two representations is excellent. According to the milestones we
defined at the beginning of this section, the averaged Hamiltonian
accurately reproduces the large variations of $\lambda$. In
particular, it is perfectly able to distinguish between orbits
belonging to the tadpole or to the horseshoe region. In the case of the
Sun-Jupiter system, for which the mass parameter value is 3 orders of
magnitude larger, there is a substantial presence of
chaos. Nevertheless, the averaged Hamiltonian is able to \emph{locate}
any tadpole orbit provided by the complete Hamiltonian, even in cases
when the motion is trapped into secondary resonances, and its validity
is also good for orbits close to the border of the stable region.
\subsection{Computation of quasi-actions}\label{sbs:comp_act}
So far in the literature, one of the most successful attempts to construct
a normal form for testing the stability of Trojans in the context of
the Hamiltonian formalism is discussed
in~\cite{Gab-Jor-Loc-05}. However, of the 34 initial conditions they
used for the system Sun-Jupiter, 4 presented orbits that were highly
chaotic after being rotated into the
planar CRTBP, while the Kolmogorov normalization algorithm defined in
that work did not work properly for 7 other cases. Here, we revisit
the fictitious initial conditions they considered for these latter
seven asteroids (1868~Thersites, 1872~Helenos, 2146~Stentor,
2207~Antenor, 2363~Cebriones, 2674~Pandarus and 2759~Idomeneus). Since
they form a set of coordinates that either lie very close to
the border of stability or show anomalous behavior (with respect to
the expected tadpole orbit), they provide a naturally harder and
more quantitative test.
In the previous subsection, we showed that the evolution of
the orbits given by ${\cal Z}^{(R_1,R_2)}$ correctly emulates, in many
cases, the evolution under the original non-normalized Hamiltonian, by
means of graphical comparisons between level curves and surfaces of
section. This normal form ${\cal Z}^{(R_1,R_2)}$ contains two different
actions or integrals of motion. One, obtained by construction and
through the normalization, is given by $\Gamma$. The other one, not
explicitly obtained, is due to the fact that, after the reduction of
$\Gamma$, the normal form bears just 1 d.o.f., i.e. it is
integrable. This second constant of motion is associated with the area
enclosed by the level curve computed under the integration of
${\cal Z}^{(R_1,R_2)}$, and therefore provides another quantity to be
checked. In order to do so, the computation of the orbits is done as
explained in
subsections~\ref{sss:surf_sec}--\ref{sss:examples_results}, but for
just one initial condition. After integrating both curves, we compute
the maximum and minimum values for the two variables $\rho$ and
$\lambda$ reached during the integration. In Fig.~\ref{fig:strg_guy},
we show the positions of those quantities, over the surfaces of
section defined in subsection~\ref{sbs:graph_test}. With those values,
first we obtain the positions of the centers for the two orbits
through
\begin{figure}
\centering
\includegraphics[width=.5\textwidth,angle=270]{plotito.ps}
\caption{Location of the maximum and minimum values for the
coordinates $\rho$ and $\lambda$ in the integration of the
fictitious initial condition associated to 1872 Helenos, around
L5. Red (light gray) points correspond to the averaged level curve
and blue (dark gray) points to the numerical surface of section.}
\label{fig:strg_guy}
\end{figure}
\begin{equation}\label{eq:center-avrg}
\mathrm{C}_{\mathrm{avrg}} = \left(
\mathrm{C}_{(\lambda,\mathrm{avrg})},
\mathrm{C}_{(\rho,\mathrm{avrg})} \right)= \left(
\frac{\left(\mathrm{Max}\,\lambda_{\mathrm{avrg}} +
\mathrm{Min}\,\lambda_{\mathrm{avrg}} \right)}{2}, \frac{ \left(
\mathrm{Max}\,\rho_{\mathrm{avrg}} +
\mathrm{Min}\,\rho_{\mathrm{avrg}} \right) }{2} \right) ~~,
\end{equation}
and
\begin{equation}\label{eq:center-num}
\mathrm{C}_{\mathrm{num}} = \left(
\mathrm{C}_{(\lambda,\mathrm{num})}, \mathrm{C}_{(\rho,\mathrm{num})}
\right)= \left( \frac{\left(\mathrm{Max}\,\lambda_{\mathrm{num}} +
\mathrm{Min}\,\lambda_{\mathrm{num}} \right)}{2}, \frac{ \left(
\mathrm{Max}\,\rho_{\mathrm{num}} +
\mathrm{Min}\,\rho_{\mathrm{num}} \right) }{2} \right) ~~.
\end{equation}
The dispersion between centers, which shows how much displacement
there is between the orbits, is evaluated by setting
\begin{equation}\label{eq:disp}
\delta \mathrm{C} = \frac{\left(\mathrm{C}_{(\rho,\mathrm{num})} -
\mathrm{C}_{(\rho,\mathrm{avrg})}
\right)}{\mathrm{C}_{(\rho,\mathrm{num})}} +
\frac{\left(\mathrm{C}_{(\lambda,\mathrm{num})} -
\mathrm{C}_{(\lambda,\mathrm{avrg})}
\right)}{\mathrm{C}_{(\lambda,\mathrm{num})}} ~~.
\end{equation}
Furthermore, we obtain the value of the enclosed area for each curve
as follows: we first refer all the points composing the curve to the
already computed center, by introducing the quantities $\delta
\lambda=\lambda-C_{\lambda}$ and $\delta\rho=\rho-C_{\rho}\,$;
then we obtain the distance to the center, $\mathrm{d} =
\sqrt{\delta \rho^2 + \delta \lambda^2}$, and the angle $\theta$ with
respect to the horizontal line ($ \theta = {\rm atan2}
(\delta\rho\,,\,\delta \lambda)$). Re-ordering the points by
increasing value of the angle $\theta$, we compute the area contained
in the triangle generated by two consecutive points and the center.
The sum over all the triangles represents the area contained within
each curve, $A_{\mathrm{num}}$ for the complete system and
$A_{\mathrm{avrg}}$ for the averaged Hamiltonian flow. For a further
comparison, we also compute the relative difference between the areas
$\delta
A/A_{\mathrm{num}}=|A_{\mathrm{num}}-A_{\mathrm{avrg}}|/A_{\mathrm{num}}$,
and the displacement of the centers of the orbits with respect to
the position of the equilateral Lagrangian point they librate around
($\rho = 0$, $\lambda_{L4}=\pi/3$ for $L_4$ and
$\lambda_{L5}=5\pi/3$ for $L_5$).
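As an illustrative sketch (our reconstruction, not the code used for
the paper), the center, dispersion and triangle-fan area computations
can be written compactly in Python; \texttt{lam\_num}, \texttt{rho\_num},
\texttt{lam\_avrg} and \texttt{rho\_avrg} are assumed to be arrays of
section and level-curve points.
\begin{verbatim}
# Sketch: centers, dispersion delta C and triangle-fan areas.
import numpy as np

def orbit_quantities(lam, rho):
    # Center of the curve from the extremal values of each coordinate.
    c_lam = 0.5*(lam.max() + lam.min())
    c_rho = 0.5*(rho.max() + rho.min())
    # Refer all points to the center and sort them by polar angle.
    d_lam, d_rho = lam - c_lam, rho - c_rho
    order = np.argsort(np.arctan2(d_rho, d_lam))
    d_lam, d_rho = d_lam[order], d_rho[order]
    # Triangle (p_i, p_{i+1}, center): the cross product of consecutive
    # position vectors gives twice the area of each triangle.
    x2, y2 = np.roll(d_lam, -1), np.roll(d_rho, -1)
    area = 0.5*np.sum(np.abs(d_lam*y2 - d_rho*x2))
    return (c_lam, c_rho), area

(cl_n, cr_n), A_num = orbit_quantities(lam_num, rho_num)
(cl_a, cr_a), A_avrg = orbit_quantities(lam_avrg, rho_avrg)
delta_C = (cr_n - cr_a)/cr_n + (cl_n - cl_a)/cl_n
delta_A = abs(A_num - A_avrg)/A_num
\end{verbatim}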
\begin{table}
\centering
\caption{Summary of the results for the quantities defining each
averaged and numerical orbit}
\label{tab:table_data}
\begin{tabular}{|cccccc|}
\hline
Asteroid & $A_{\mathrm{num}}$ & $\delta A/A_{\mathrm{num}}$ & $\delta \mathrm{C}$ & $\mathrm{C}_{(\rho,\mathrm{num})}$ & $\mathrm{C}_{(\lambda,\mathrm{num})}-\lambda_{L4,L5}$ \\
\hline
\hline
1868 & $2.03\times 10^{-2}$ & $\, 3.21\times 10^{-3}$& $\, 6.95\times 10^{-3}$& $-1.08\times 10^{-2}$ & $-0.163$ \\
\hline
1872 & $3.75\times 10^{-2}$ & $\, 1.39\times 10^{-3}$& $\, 5.14\times 10^{-2}$ & $-6.86\times 10^{-3}$ & $-0.235$ \\
\hline
2146 & $1.67\times 10^{-2}$ & $\, 1.25\times 10^{-1}$& $\, 3.71\times 10^{-2}$ & $-1.94\times 10^{-1}$ & $-0.530$ \\
\hline
2207 & $2.31\times 10^{-2}$ & $\, 6.59\times 10^{-3}$& $\, 7.50\times 10^{-3}$& $-1.31\times 10^{-2}$ & $-0.196$ \\
\hline
2674 & $3.56\times 10^{-3}$ & $\, 1.51\times 10^{-2}$& $\, 3.61\times 10^{-3}$& $-1.43\times 10^{-2}$ & $-0.077$ \\
\hline
2759 & $2.67\times 10^{-2}$ & $\, 1.29\times 10^{-2}$& $\, 1.04\times 10^{-2}$& $-1.63\times 10^{-2}$ & $-0.232$ \\
\hline
\end{tabular}
\end{table}
\noindent
For 6 of the 7 cases mentioned above, we are able to obtain averaged
orbits that reflect the behavior of the numerical integrations. In
Table~\ref{tab:table_data}, where we report the results for the
previously defined quantities, we show that the averaged areas clearly
match their associated numerical areas, with a relative error smaller
than 2\%, except for one case (asteroid 2146 Stentor), for which the
error is about 13\%. This may be due to the fact that 2146 Stentor
presents the largest displacement with respect to the corresponding
Lagrangian point, in a quite anomalous orbit. For the rest of the
asteroids, the position of the orbits in the surface of section turns
out to be very close to that generated by the equivalent averaged level
curve, both centered at a triangular Lagrangian point. On the other
hand, in Table~\ref{tab:table_data} we do not present data for the highly
inclined\footnote{According to the Bowell Catalogue at~2452600.5 JD}
($39^{\circ}$) asteroid 2363~Cebriones of our sample. For this
asteroid, our normal form fails to provide an accurate orbit, using
the initial conditions provided in~\cite{Gab-Jor-Loc-05}. However, we
find that the numerical orbit generated by 2363~Cebriones presents a
very peculiar angular excursion (in $\lambda$) with respect to the
Lagrangian point. This failure of the normal form could originate
from an inconsistent rotation of the initial condition to the plane of
the primaries in the original work.
\section{Summary and perspectives}\label{sec:concl_future}
In this paper we present a novel normalization scheme that provides
an integrable approximation of the dynamics of a Trojan asteroid
Hamiltonian in the framework of the Planar Circular Restricted
Three-Body Problem (PCRTBP). This new algorithm is based on three
interrelated points: the introduction of a set of variables which
respects the physical configuration of the system; the existence of
two degrees of freedom with well distinguishable roles, one
corresponding to a fast motion and another corresponding to a slow
motion, that allows us to fruitfully average over the fast angle;
finally, the fact that the analytic singularity of this model is
related exclusively to the slow angle.
These three concepts motivate a new way to deal with the initial
expansion of the Hamiltonian, as is necessary for the normalization
procedure. The slow angle $\lambda$ does not affect the solution of
the homological equation, determined in order to remove the dependence
on the fast angle. Thus, we are able to carefully approximate level
curves that represent tadpole and horseshoe orbits by keeping, in the
expansions, a non-polynomial dependence only with respect to
$\lambda$.
In order to examine the accuracy of the normal form produced, we
develop some tests. We study numerically integrated surfaces of
section, along the flow of the complete Hamiltonian, and we
contrast them with the level curves provided by our integrable normal
form, with very good agreement. Furthermore, we estimate some
quasi-integrals of motion (by computation of enclosed areas), which
also show excellent agreement with those corresponding to the
numerically computed surfaces of section. On the whole, this novel
approach for producing a new normal form results in a very promising
approximation of the global behavior of the Trojan motion.
From a rather theoretical point of view, we think that our normalizing
scheme can be complemented with a scheme of estimates, so as to make
the remainder exponentially small on a suitable open domain. If our
algorithm is combined with a Nekhoroshev-like approach, it should ensure
that the eventual diffusion is effectively bounded (i.e., for
intervals of time comparable with the age of our Solar system), for a
set of initial conditions. We expect such a set to be significantly
larger than those considered in the works already existing in the
literature. This expectation is due to the fact that our method offers
a wide coverage of the Trojans orbits. For the same reason, we think
that our integrable approximation can be used in many cases to
successfully start the Kolmogorov normalization algorithm. While the
corresponding solution is valid for any time, the construction of an
invariant KAM torus is an extremely local procedure, because it must
be adapted to the orbit of each Trojan body to be studied. In this
context, we think that our algorithm can be efficiently used
jointly with a KAM-like approach in those cases for which the
previous implementation of the Kolmogorov normalization algorithm
failed. Let us remark that such a new application of the KAM theory
would require preliminarily constructing the action--angle coordinates
also for the slow degree of freedom, before starting the final
normalization procedure. As an alternative strategy, a different
formulation of the KAM theorem that is not strictly based on
action--angle variables could be used~\cite{Lla-Gon-Jor-Vil-2005}.
More practically, in our opinion the most exciting point is that our
method is suitable to be translated to more complex models, without
requiring essential changes. Since the PCRTBP is a very
simplified representation of the Trojan domain, we are presently
working to extend the normalization to a Hamiltonian that also
considers the eccentricity of the primary. The first preliminary
results show that our method can be used so as to locate the main
secondary resonances within the 1:1 MMR region. In particular, we find
a good agreement with other purely numerical indicators, when the mass
ratio between the primaries is small. We plan to include these and
other results in a future publication. Furthermore, we think that our
present and future contributions will help to fill the still-existing
gap between the semi-analytic studies and the more complete numerical
experiments of the stability region.
\section*{Acknowledgements}
The authors would like to thank C.~Efthymiopoulos for his advice and
constant support. We are also indebted to C.~Sim\'o, who suggested
reconsidering the article~\cite{Garfinkel-77}, and to the anonymous
referee, whose contribution helped improve the original
manuscript. During this work, R.I.P. was supported by the Astronet-II
Marie Curie Training Network (PITN-GA-2011-289240), while U.L. was
partially supported also by the research program ``Teorie geometriche
e analitiche dei sistemi Hamiltoniani in dimensioni finite e
infinite'', PRIN 2010JJ4KPA\_009, financed by MIUR.
| {
"timestamp": "2015-08-04T02:15:00",
"yymm": "1508",
"arxiv_id": "1508.00381",
"language": "en",
"url": "https://arxiv.org/abs/1508.00381",
"abstract": "We revisit a classical perturbative approach to the Hamiltonian related to the motions of Trojan bodies, in the framework of the Planar Circular Restricted Three-Body Problem (PCRTBP), by introducing a number of key new ideas in the formulation. In some sense, we adapt the approach of Garfinkel (1977) to the context of the normal form theory and its modern techniques. First, we make use of Delaunay variables for a physically accurate representation of the system. Therefore, we introduce a novel manipulation of the variables so as to respect the natural behavior of the model. We develop a normalization procedure over the fast angle which exploits the fact that singularities in this model are essentially related to the slow angle. Thus, we produce a new normal form, i.e. an integrable approximation to the Hamiltonian. We emphasize some practical examples of the applicability of our normalizing scheme, e.g. the estimation of the stable libration region. Finally, we compare the level curves produced by our normal form with surfaces of section provided by the integration of the non--normalized Hamiltonian, with very good agreement. Further precision tests are also provided. In addition, we give a step-by-step description of the algorithm, allowing for extensions to more complicated models.",
"subjects": "Earth and Planetary Astrophysics (astro-ph.EP)",
"title": "Trojan dynamics well approximated by a new Hamiltonian normal form",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517501236461,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7089606538536063
} |
https://arxiv.org/abs/2011.10498 | Weighted automata are compact and actively learnable | We show that weighted automata over the field of two elements can be exponentially more compact than non-deterministic finite state automata. To show this, we combine ideas from automata theory and communication complexity. However, weighted automata are also efficiently learnable in Angluin's minimal adequate teacher model in a number of queries that is polynomial in the size of the minimal weighted automaton. We include an algorithm for learning WAs over any field based on a linear algebraic generalization of the Angluin-Schapire algorithm. Together, this produces a surprising result: weighted automata over fields are structured enough that even though they can be very compact, they are still efficiently learnable. | \section{Introduction}
Weighted automata (WAs) are an alternative model of finite state machines and a natural way to represent monoids.
They have received a lot of interest in the learning community because they provide an interesting way to represent and analyze sequence data, such as music~\cite{MMW09} or text and speech processing~\cite{MPR08}.
\textcite{M09} provide a nice survey of algorithms related to weighted automata.
In this paper, we aim to expand the theoretical results known about the representational power and learnability of WAs.
We show that WAs over $\mathbb{Z}_2$ (WA2s) -- which can be viewed as word recognizers for regular languages in a natural way -- can be exponentially more compact than non-deterministic finite state automata (NFAs) and yet learnable in Angluin's~\cite{A87} queries and counter-examples (or minimal adequate teacher) model.
With Theorem~\ref{thm:sep}, we show that there exists a family of languages where the minimal WA2s are exponentially smaller than the smallest NFAs.
Unfortunately, in Theorem~\ref{thm:smallNFA} we show that there also exists an exponential separation in the other direction.
This shows that one can sometimes but not always get a significantly more compact representation by using WAs.
However, the compactness result is still interesting because there are efficient algorithms for minimizing WAs~\cite{WAmin} whereas finding a minimal NFA is PSPACE-complete~\cite{MS72}.
This makes weighted automata compact yet -- unlike NFAs -- structured enough to be actively learnable in Angluin's minimal-adequate teacher model.
In Section~\ref{sec:algo}, we show how to extend the Angluin-Schapire algorithm~\cite{A87,S91} to weighted automata over any field.
As such, we show that although WAs can be exponentially smaller than NFAs (and thus also DFAs), they still have a structure that we can exploit for efficient learning.
Since weighted automata correspond more closely to popular models like POMDPs and probabilistic automata~\cite{CT04}, this might open new avenues for learning algorithms of those representations.
\section{Formal background}
\label{sec:genAut}
\subsection{Finite state automata}
\begin{myDef}
Given a fixed alphabet $\Sigma$ and finite dimensional vector space $\mathbb{F}^n$, a weighted automaton $A$ over $\mathbb{F}$ of size $n$ is given by:
\begin{equation}
A = \langle \alpha, \omega \in \mathbb{F}^n, \{M_\sigma \in \mathbb{F}^{n \times n} | \sigma \in \Sigma \} \rangle
\end{equation}
where $\alpha$ is the initial state, $\omega$ is a final measurement, and for each $\sigma \in \Sigma$ we have a corresponding transition matrix $M_\sigma$. The function recognized by this automaton is given by:
\begin{equation}
f_A(\sigma_1...\sigma_m) = \alpha^T M_{\sigma_1} ... M_{\sigma_m} \omega
\end{equation}
\label{def:WAgen}
\end{myDef}
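As a small illustration (our addition, not part of the formal development), the function $f_A$ can be evaluated with a single pass over the word; the NumPy sketch below assumes the field is represented by a NumPy dtype (e.g. floats for $\mathbb{R}$, or integer arithmetic followed by reduction mod $p$ for $\mathbb{Z}_p$):
\begin{verbatim}
# Sketch: evaluating f_A(w) = alpha^T M_{w_1} ... M_{w_m} omega.
import numpy as np

def wa_value(alpha, omega, M, word):
    # alpha, omega: length-n vectors; M: dict sigma -> (n x n) matrix.
    state = alpha.copy()
    for sigma in word:
        state = M[sigma].T @ state   # advance the state by one symbol
    return state @ omega
\end{verbatim}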
When dealing with automata, it is useful to adopt a general matrix representation of the function they recognize:
\begin{myDef}
Given a function $f: \; \Sigma^* \rightarrow \mathbb{F}$ the \emph{Hankel matrix} $H_f: \Sigma^* \times \Sigma^* \rightarrow \mathbb{F}$ of $f$ is: $H_f(u,v) = f(uv)$.
\end{myDef}
We will also talk about the \emph{restricted Hankel matrix} $H_f|_n: \Sigma^n \times \Sigma^n \rightarrow \mathbb{F}$ of $f$ to strings of length $n$.
The Hankel matrix allows us to come to grips with weighted automata and their size:
\begin{prop}[\cite{CP71,F74}]
$\mathrm{rank}_\mathbb{F}(H_f) \leq n$ if and only if there exists a weighted automaton $A$ over $\mathbb{F}$ of size $n$ such that $f_A = f$.
\label{thm:WAsize}
\end{prop}
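As a small computational aside (our addition), Proposition~\ref{thm:WAsize} can be used to lower-bound the minimal WA2 size of a 0/1 function numerically; the sketch below builds the restricted Hankel matrix $H_f|_n$ and computes its rank over $\mathbb{Z}_2$ by Gaussian elimination on bitmask-encoded rows. It is feasible only for small $n$, since the matrix has $|\Sigma|^n$ rows.
\begin{verbatim}
# Sketch: rank of H_f|_n over Z_2 for a 0/1-valued function f.
from itertools import product

def gf2_rank(rows):
    # rows: list of ints, each encoding one 0/1 row as a bitmask.
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        lsb = pivot & -pivot            # pivot column = lowest set bit
        rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def restricted_hankel_rank(f, alphabet, n):
    words = [''.join(w) for w in product(alphabet, repeat=n)]
    rows = []
    for u in words:
        bits = 0
        for j, v in enumerate(words):
            bits |= f(u + v) << j       # entry H_f(u, v) = f(uv)
        rows.append(bits)
    return gf2_rank(rows)
\end{verbatim}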
If we are going to study weighted automata over $\mathbb{Z}_2$ (WA2) and non-deterministic finite state automata (NFA) together then it is best to express them in a common framework.
To do this, we will define a generic finite state automaton (Definition~\ref{def:FSA}) and then see how augmenting this model with different acceptance criteria can produce WA2s (Definition~\ref{def:WFSA}) or NFAs (Definition~\ref{def:NDFSA}), or restricting the kinds of transitions can produce deterministic finite-state automata (DFA; Definition~\ref{def:DFSA}).
\begin{myDef}
A \emph{finite state automaton} (FSA) is a tuple
$A = \langle Q, \Sigma, \delta: Q \times \Sigma \rightarrow 2^Q, S \subseteq Q, F \subseteq Q \rangle$
where $Q$ is a finite set of states, $\Sigma$ is a finite alphabet, $\delta$ is the transition function, $S$ is a set of starting states, and $F$ is a set of final states.
The size $|A|$ of the automaton is the number of states $|Q|$.
\label{def:FSA}
\end{myDef}
\begin{myDef}
The dynamics of an FSA $A$ are defined by looking at $\mathrm{paths}_A: \Sigma^* \rightarrow 2^{Q^*}$
where for $p \in Q^*$, $w \in \Sigma^*$, $q,q' \in Q$, and $a \in \Sigma$ we have the recursive definition:
\begin{itemize}
\item $\mathrm{paths}_A(\epsilon) = S$; and
\item $pqq' \in \mathrm{paths}_A(wa)$ if $pq \in \mathrm{paths}_A(w)$
and $q' \in \delta(q,a)$.
\end{itemize}
We say that a \emph{path is accepting} if it ends in $F$, or formally:
$\mathrm{apaths}_A(w) \subseteq \mathrm{paths}_A(w)$ where $pq \in \mathrm{apaths}_A(w)$ if $q \in F$.
\label{def:paths}
\end{myDef}
To define various types of finite automata, we focus on the conditions under which $A$ accepts a word.
\begin{myDef}
If the FSA $A$ is a \emph{weighted automaton over $\mathbb{Z}_2$} (WA2) then
\begin{equation}
w \in L_A \subseteq \Sigma^* \iff |\mathrm{apaths}(w)| \equiv 1 \pmod 2.
\end{equation}
\label{def:WFSA}
\end{myDef}
It might not be obvious that this definition of weighted automata over $\mathbb{Z}_2$ is the same as Definition~\ref{def:WAgen}.
To see the identity, let the basis elements be the states, let the initial state $\alpha$ be the indicator vector for $S$, and let the measurement $\omega$ be the indicator vector for $F$.
The transition function $\delta(\cdot,\sigma)$ is given by $M_\sigma$. Note that it doesn't matter when we switch to mod 2: the matrix multiplication in Definition~\ref{def:WAgen} can be done over $\mathbb{R}$ until we multiply by the final measurement vector.
Multiplying by transition matrices is the same thing as counting the number of paths, and multiplication by $\omega$ adds up the paths that lead to final states (so computes $|\mathrm{apaths}(w)|$); taking the result mod 2 then completes our computation.
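To make the path-counting view concrete, the sketch below (our addition) evaluates a WA2 with integer arithmetic and reduces mod 2 only at the end, exactly as argued above; \texttt{s} and \texttt{fin} are the 0/1 indicator vectors of $S$ and $F$:
\begin{verbatim}
# Sketch: WA2 acceptance via integer path counting, mod 2 at the end.
import numpy as np

def wa2_accepts(s, fin, M, word):
    # s, fin: 0/1 indicator vectors; M: dict sigma -> 0/1 adjacency
    # matrix of delta(., sigma).  state[q] counts paths ending in q.
    state = s.astype(np.int64)
    for sigma in word:
        state = M[sigma].T @ state
    return int(state @ fin) % 2 == 1   # |apaths(w)| mod 2
\end{verbatim}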
\begin{myDef}
If the FSA $A$ is a \emph{non-deterministic finite automaton} (NFA) then
\begin{equation}
w \in L_A \subseteq \Sigma^* \iff |\mathrm{apaths}(w)| \geq 1.
\end{equation}
\label{def:NDFSA}
\end{myDef}
Note that, by the same argument as above, this is also equivalent to using the Boolean semiring ('or' for addition, 'and' for multiplication) instead of a field in Definition~\ref{def:WAgen}. In other words, NFAs can also be thought of as weighted automata over the Boolean semiring. This is why, when we discuss weighted automata in the rest of this article, we focus only on WAs over fields.
We can make a final familiar definition of finite state automata by putting restrictions on $\delta$, $S$:
\begin{myDef}
An FSA $A$ is a \emph{deterministic finite automaton} (DFA) if it respects the restrictions of a single start state ($|S| = 1$) and deterministic transitions:
\begin{equation}
\forall q \in Q, \; a \in \Sigma \; |\delta(q,a)| = 1;
\label{eq:deterministic}
\end{equation}
and has the acceptance criteria:
\begin{equation}
w \in L_A \subseteq \Sigma^* \iff |\mathrm{apaths}(w)| = 1.
\end{equation}
\label{def:DFSA}
\end{myDef}
Note that the DFA restrictions of a single start state and determinism (Equation~\ref{eq:deterministic}) imply that given a DFA $A$, any word $w$ defines only one path (i.e., $\forall w \in \Sigma^* \;\; |\mathrm{paths}_A(w)| = 1$) and this path is either accepting or not.
This means that a DFA is also an NFA and a WA2.
It will be useful to have the following two refinements of paths:
\begin{myDef}
Given an FSA $A$ and a state $q \in Q$ we say that a word $w \in \mathrm{past}(q)$ if $pq \in \mathrm{paths}(w)$ for some $p \in Q^*$.
\end{myDef}
In other words, $\mathrm{past}(q)$ is the set of all words that lead to $q$. In a similar vein, we can define:
\begin{myDef}
Given an FSA $A$ and a state $q \in Q$ we say that $w \in \mathrm{future}(q)$ if $\exists v \in \mathrm{past}(q)$ and $p,r \in Q^*$ such that $pqr \in \mathrm{apaths}(vw)$.
\end{myDef}
In other words, $\mathrm{future}(q)$ is the set of all words that lead from $q$ to a state in $F$.
\subsection{Tools from communication complexity}
It will be useful to observe a link between the Hankel matrix and a concept from communication complexity:
\begin{myDef}
The \emph{1-monochromatic rectangle covering} of a function $f: \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}$ is the smallest number $\chi_1(f)$ of pairs of sets (called rectangles) $A_i,B_i \subseteq \{0,1\}^n$
for $1 \leq i \leq \chi_1(f)$ such that:
\begin{enumerate}
\item for every $(x,y) \in A_i \times B_i$ we have $f(x,y) = 1$ (i.e., $A_i \times B_i$ is 1-monochromatic), and
\item for every $(x,y) \in f^{-1}(1)$ we have at least one index $i \in \{1, ..., \chi_1(f)\}$ such that $(x,y) \in A_i \times B_i$.
\end{enumerate}
\label{def:rect}
\end{myDef}
Based on the argument in \textcite{HS08}, we can show that $\chi_1$ relates nicely to the size of NFAs:
\begin{prop}
$|\mathrm{NFA}(f)| \geq \chi_1(H_f|_n)$ for any $n \in \mathbb{N}$.
\label{prop:NFAchiLow}
\end{prop}
\begin{proof}
Let $A$ be a minimal $\mathrm{NFA}$ recognizing $f$.
For each state $q \in Q$ define $A_q = \mathrm{past}(q)$ and $B_q = \mathrm{future}(q)$, by the definition of $\mathrm{future}(q)$ for any $u \in A_q$ and $v \in B_q$ we have $f(uv) = 1$.
Therefore, the $\{A_q,B_q\}_{q \in Q}$ are 1-monochromatic rectangles.
Now, consider any $uv \in f^{-1}(1)$, say that $q \in Q_u$ if $\exists p \in Q^*$ such that $pq \in \mathrm{paths}(u)$.
Since $f(uv) = 1$, there must be at least one $q \in Q_u$ such that $v \in \mathrm{future}(q) = B_q$.
Therefore, the $\{A_q,B_q\}_{q \in Q}$ are a cover of the whole Hankel matrix, and hence they also cover any restricted submatrix.
\end{proof}
Another useful tool for proving lower bounds in communication complexity is:
\begin{myDef}
The \emph{discrepancy} of a function $f: \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}$ is:
\begin{equation}
\mathrm{disc}(f) = \max_{A,B \subseteq \{0,1\}^n} \frac{1}{2^{2n}}|\sum_{x \in A, y \in B} (-1)^{f(x,y)}|
\end{equation}
\label{def:disc}
\end{myDef}
Definitions~\ref{def:rect} and~\ref{def:disc} relate nicely to each other by an extension of Lemma~13.13 from~\textcite{AB09}:
\begin{lem}
$\chi_1(f) \geq \frac{|f^{-1}(1)|}{2^{2n}\mathrm{disc}(f)}$
\label{lem:disc}
\end{lem}
\begin{proof}
Since all the ones in our function can be covered by $\chi_1(f)$ rectangles, and a total of $|f^{-1}(1)|$ ones need to be covered, there must be at least one monochromatic rectangle $A \times B$ that covers the average number of ones or more.
This means that $|A||B| \geq |f^{-1}(1)|/\chi_1(f)$.
Now, since the discrepancy is a max over rectangles, we can pick $A \times B$ to lower bound it:
\begin{align}
\mathrm{disc}(f) & \geq \frac{1}{2^{2n}} |\sum_{x \in A, y \in B} (-1)^{f(x,y)}| \\
& \geq \frac{1}{2^{2n}} |\sum_{x \in A, y \in B} -1| \\
& \geq \frac{|A||B|}{2^{2n}} \\
& \geq \frac{|f^{-1}(1)|}{2^{2n} \chi_1(f)}
\end{align}
\noindent where the second line follows from the first because the rectangle is 1-monochromatic. The last line can be rearranged to complete the proof.
\end{proof}
\section{Size of NFAs and WA2s}
We are interested in the following question: given a regular language $L$, what is the size of the smallest automaton $A$ with $L_A = L$?
In particular, we will define $|\mathrm{NFA}(L)|$ to be the largest integer such that for any NFA $A$, if $L_A = L$ then $|A| \geq |\mathrm{NFA}(L)|$, and similarly for $|\mathrm{DFA}(L)|$,
and $|\mathrm{WA2}(L)|$.
\subsection{WA2s can be exponentially smaller than NFAs}
The gap between $|\mathrm{NFA}(L)|$ and $|\mathrm{WA2}(L)|$ can be exponentially large. Technically, this means that:
\begin{thm}
There exists a family of regular languages $\{L_n\}$ such that
$|\mathrm{NFA}(L_n)| \in 2^{\Omega({|\mathrm{WA2}(L_n)|})}$
\label{thm:sep}
\end{thm}
To find our separating family of languages, we will look at the inner-product function:
\begin{myDef}
The \emph{n-bit inner product} is a function $\wedge^{\otimes n} : \{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}$
acting on two bit strings $x = x_1..x_n \in \{0,1\}^n$ and $y = y_1...y_n \in \{0,1\}^n$ as
$x \wedge^{\otimes n} y = \sum_{i =1}^n x_i\cdot y_i \mod 2$
\end{myDef}
Sometimes, when the size of $x$ and $y$ is obvious, we will omit the $\otimes n$.
Note that the number of zeros and ones in $\wedge$ is well balanced.
\begin{prop}
$|(\wedge^{\otimes n})^{-1}(1)| = 2^{n - 1}(2^n - 1)$
\label{prop:size}
\end{prop}
\begin{proof}
Let $D = \{(x,y) | \exists i \in [n] \; \mathrm{s.t}\; x_i = y_i \}$ be the set of pairs of
strings that overlap in at least one place. Now, consider a function $h$ defined on $D$ that, given
$(x,y)$, takes the smallest index of overlap $i$ (i.e., for all $j < i$, $x_j \neq y_j$) and sends
$x_i \rightarrow \bar{x_i}$ and $y_i \rightarrow \bar{y_i}$; this function is a bijection on $D$.
However, note that if $\wedge^{\otimes n}(x,y) = b$ then $\wedge^{\otimes n} h(x,y) = \bar{b}$.
Thus, $\wedge$ has the same number of zeros and ones in $D$.
The only pairs missing from $D$ are the ones of the form $(x,\bar{x})$ and there are $2^n$ such strings, so $|D| = 2^{2n} - 2^n$.
Finally, note that $x \wedge^{\otimes n} \bar{x} = 0$ thus $|(\wedge^{\otimes n})^{-1}(1)| = |D|/2$.
\end{proof}
\begin{lem}
$\chi_1(\wedge^{\otimes n}) \geq 2^{n/2 -2}$
\label{lem:wedge}
\end{lem}
\begin{proof}
Example 13.16 in~\cite{AB09} shows that $\mathrm{disc}(\wedge^{\otimes n}) \leq 2^{-n/2}$ which combined with Lemma~\ref{lem:disc} and Proposition~\ref{prop:size} gives us $\chi_1(\wedge^{\otimes n}) \geq \frac{2^{n - 1}(2^n - 1)2^{n/2}}{2^{2n}} \geq 2^{n/2 - 2}$.
\end{proof}
The inner product allows us to define a special class of language families with an important property:
\begin{myDef}
A language family $\{L_n\}$ is called an \emph{inner-product kernel family} if:
\begin{equation}
\forall n \; \forall x,y \in \{0,1\}^n \quad L_n(xy) = x \wedge^{\otimes n} y
\end{equation}
\end{myDef}
Note that the above definition places no restriction on how $L_n$ behaves on words of length other than $2n$, so there are many inner-product kernel families based on the many ways languages can behave outside the kernels.
\begin{prop}
If $\{L_n\}$ is an inner-product kernel family then $|\mathrm{NFA}(L_n)| \geq 2^{n/2 - 2}$
\label{lem:NFA}
\end{prop}
\begin{proof}
We use the communication complexity techniques from Proposition~\ref{prop:NFAchiLow}.
We can use any finite submatrix of $H_L$ to lower-bound $|\mathrm{NFA}(L)|$.
In particular, if for $L_n$ we look at the submatrix of $H_L$ corresponding to length-$n$ strings, then that is equivalent, as a communication problem, to $\wedge^{\otimes n}$.
Thus, $|\mathrm{NFA}(L_n)| \geq \chi_1(\wedge^{\otimes n}) \geq 2^{n/2 - 2}$, where the first inequality is an application of the lower-bound technique of Proposition~\ref{prop:NFAchiLow} and the second inequality is from Lemma~\ref{lem:wedge}.
\end{proof}
\begin{figure}
\center
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[initial,state](S){$s$};
\node[state](M1)[below of=S]{$m_1$};
\node[state](M2)[right of=M1]{$m_2$};
\node (Mdot)[right of=M2]{$\cdots$};
\node[state](Mn)[right of=Mdot]{$m_n$};
\node[state,accepting](F)[above of=Mn]{$f$};
\path (S) edge node {1} (M1)
edge [loop right] node {0,1} (S)
(M1) edge node {0,1} (M2)
(M2) edge node {0,1} (Mdot)
(Mdot) edge node {0,1} (Mn)
(Mn) edge node {1} (F)
(F) edge [loop left] node {0,1} (F)
;
\end{tikzpicture}
\caption{A picture of the weighted automaton used to prove Theorem~\ref{thm:sep}.}
\label{fig:WAprod}
\end{figure}
%
We finish the proof of Theorem~\ref{thm:sep} by noticing that the family of weighted automata in Figure~\ref{fig:WAprod} recognizes languages in an inner-product kernel family with only $n + 2$ states.
More formally:
\begin{prop}
Let $\text{WA}^\text{prod}_n$ be the weighted automaton in Figure~\ref{fig:WAprod}.
Given any $x,y \in \{0,1\}^n$:
\begin{equation}
xy \in L_{\text{WA}^\text{prod}_n} \iff \sum_{i = 1}^n x_iy_i \mod 2 = 1.
\label{eq:propWAprod}
\end{equation}
\label{prop:WAprod}
\end{prop}
\begin{proof}
Any accepting path in $\text{WA}^\text{prod}_n$ must have the form $p \in s^*m_1m_2...m_nf^*$.
A path $p$ is caused by transitions corresponding to a word of the pattern:
\begin{equation}
\{0,1\}^*1\{0,1\}^{n-1}1\{0,1\}^*
\label{eq:pattern}
\end{equation}
\noindent i.e., by a word that has two $1$s that are exactly $n$ letters apart.
Now, let us count the number of accepting paths for any $xy$.
The word $xy$ matches the pattern in Equation~\ref{eq:pattern} for each $1 \leq i \leq n$ such that $x_i = y_i = 1$ and for no other: i.e., only for the partition $\{0,1\}^{i-1}1\{0,1\}^{n-1}1\{0,1\}^{n-i}$.
Each of these partitions of $xy$ corresponds to a unique path, so the total number of accepting paths is $\sum_{i = 1}^n x_iy_i$ and
Equation~\ref{eq:propWAprod} follows from the acceptance criteria of WAs in Definition~\ref{def:WFSA}.
\end{proof}
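As a sanity check (our addition, reusing the \texttt{wa2\_accepts} sketch from the previous section), the automaton of Figure~\ref{fig:WAprod} can be built explicitly and Proposition~\ref{prop:WAprod} tested on random strings:
\begin{verbatim}
# Sketch: building WA^prod_n (states s, m_1..m_n, f) over Z_2 and
# checking f(xy) = <x, y> mod 2; reuses wa2_accepts from above.
import numpy as np

def wa_prod(n):
    N = n + 2                       # indices: 0 = s, 1..n = m_i, n+1 = f
    M = {a: np.zeros((N, N), dtype=np.int64) for a in (0, 1)}
    for a in (0, 1):
        M[a][0, 0] = 1              # self-loop on s reading 0 or 1
        M[a][N-1, N-1] = 1          # self-loop on f reading 0 or 1
        for i in range(1, n):
            M[a][i, i+1] = 1        # m_i -> m_{i+1} on any letter
    M[1][0, 1] = 1                  # s -> m_1 on reading a 1
    M[1][n, N-1] = 1                # m_n -> f on reading a 1
    s = np.eye(N, dtype=np.int64)[0]
    fin = np.eye(N, dtype=np.int64)[N-1]
    return s, fin, M

n = 6
s, fin, M = wa_prod(n)
for _ in range(100):
    x = np.random.randint(0, 2, n)
    y = np.random.randint(0, 2, n)
    word = np.concatenate([x, y])
    assert wa2_accepts(s, fin, M, word) == (int(x @ y) % 2 == 1)
\end{verbatim}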
\subsection{NFAs can be exponentially smaller than WA2s}
Unfortunately, there are also cases where the opposite happens and we do not have a small WA2 while a small NFA exists:
\begin{thm}
There exists a family of regular languages $\{L_n\}$ such that
$|\mathrm{WA2}(L_n)| \in 2^{\Omega({|\mathrm{NFA}(L_n)|})}$
\label{thm:smallNFA}
\end{thm}
\begin{proof}
For this, consider a language family where, for $u,v \in \{0,1\}^n$, $uv \in L_n$ if and only if $u \neq v$. If we look at the Hankel matrix of $L_n$ restricted to columns and rows of length $n$, then it is a matrix of all ones except with zeros on the diagonal. Clearly, this matrix has full rank, so by Proposition~\ref{thm:WAsize} $|\mathrm{WA2}(L_n)| \geq 2^n$.
On the other hand, an NFA of size $2(n + 1)$ that recognizes a language consistent with $L_n$ is given in Figure~\ref{fig:smallNFA}.
Notice that any accepting path in this NFA can only have been caused by a word of the pattern $\{0,1\}^n0\{0,1\}^{n-1}1\{0,1\}^n$ (left branch) or $\{0,1\}^n1\{0,1\}^{n-1}0\{0,1\}^n$ (right branch).
When we restrict this to words $xy$ with $x,y \in \{0,1\}^n$, we see that one of the patterns is realized only if there is some $1 \leq i \leq n$ such that $x_i \neq y_i$.
\end{proof}
\begin{figure}
\center
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[initial above,state](S){$s$};
\node[state](L1)[left of=S]{$l_1$};
\node[state](L2)[below of=L1]{$l_2$};
\node (Ldot)[below of=L2]{$\vdots$};
\node[state](Ln)[below of=Ldot]{$l_n$};
\node[state](R1)[right of=S]{$r_1$};
\node[state](R2)[below of=R1]{$r_2$};
\node (Rdot)[below of=R2]{$\vdots$};
\node[state](Rn)[below of=Rdot]{$r_n$};
\node[state,accepting](F)[right of=Ln]{$f$};
\path (S) edge node {0} (L1)
edge node {1} (R1)
edge [loop below] node {0,1} (S)
(L1) edge node {0,1} (L2)
(L2) edge node {0,1} (Ldot)
(Ldot) edge node {0,1} (Ln)
(Ln) edge node {1} (F)
(R1) edge node {0,1} (R2)
(R2) edge node {0,1} (Rdot)
(Rdot) edge node {0,1} (Rn)
(Rn) edge node {0} (F)
(F) edge [loop above] node {0,1} (F)
;
\end{tikzpicture}
\caption{A picture of the NFA used in the proof of Theorem~\ref{thm:smallNFA}}
\label{fig:smallNFA}
\end{figure}
\section{Efficient active learning algorithm for weighted automata}
\label{sec:algo}
Deterministic finite state automata (DFAs) are not passively learnable: i.e., DFAs are known to be difficult to PAC-learn from randomly drawn labeled examples in any representation~\cite{KV94}.
However, we can instead consider a model with active learning that instead of random labeled examples has the following two types of queries:
\begin{enumerate}
\item for any string $x \in \Sigma^*$ we can query $f(x)$. This is the active learning component, since the algorithm generates the query to ask, and
\item given a candidate weighted automaton $A$, we can ask if it is correct. If $A$ computes $f$ (i.e. $f_A = f$) then the teacher will say ``CORRECT", otherwise the teacher will return a counter-example $z$ such that $f_A(z) \neq f(z)$. If a teacher is unavailable then this can alternatively be replaced by random sampling if we want a PAC-like model, and would correspond to the non-active part of learning.
\end{enumerate}
This is Angluin's queries and counter-examples or 'minimal adequate teacher' (MAT) model~\cite{A87}.
\textcite{A87} famously showed that -- in the MAT model -- regular languages are efficiently learnable in the size of their minimal DFA representation.
Later, \textcite{S91} improved the efficiency of Angluin's algorithm for learning DFAs.
In this section, we show how to adapt the Angluin-Schapire algorithm from learning DFAs to learning WAs over any field $\mathbb{F}$.
For the rest of the section, suppose we are trying to learn an unknown function $f: \Sigma^* \rightarrow \mathbb{F}$ with Hankel matrix $H: \Sigma^* \times \Sigma^* \rightarrow \mathbb{F}$.
\subsection{Initialization}
\begin{figure}
\centering
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=1.5cm, semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\node[initial,state](S){$(s,[f(\epsilon)])$};
\path (S) edge [loop right] node {$(\sigma, [\frac{f(\sigma)}{f(\epsilon)}])$} (S);
\end{tikzpicture}
\caption{Initial weighted automaton. There is a single state that is initial and outputs its weight times $f(\epsilon)$. There is a self-loop for each letter $\sigma \in \Sigma$ weighted by $\frac{f(\sigma)}{f(\epsilon)}$. Note that we are using the WLOG assumption that $f(\epsilon) \neq 0$.}
\label{fig:initial}
\end{figure}
At all times, our algorithm will keep track of two finite sets $S,E \subseteq \Sigma^*$ of equal size ($|S| = |E|$);
$S$ will be prefix closed and we will call its elements states.
For convenience, we will define a function $F: S \rightarrow \mathbb{F}^E$.
If we view $F$ as a matrix, then it is a restriction of $H$ to $S$ and $E$, i.e. $F = H(S,E)$ or more explicitly for $s \in S$ and $e \in E$, $F(s,e) = f(se)$.
Our algorithm will ensure that $F$ is full rank, i.e. $\mathrm{rank}_{\mathbb{F}}(F) = |S|$.
We will start with $S = E = \{\epsilon\}$ and without loss of generality assume that $f(\epsilon) \neq 0$ (if it equals zero, then just replace $f$ by $f + 1$, learn that, and then subtract $1$ from each value in the final/measurement state).
See Figure~\ref{fig:initial} for the initial automaton.
\subsection{Automaton corresponding to matrix $F$}
For each $\sigma \in \Sigma$, consider $F_\sigma: S \rightarrow \mathbb{F}^E$ where $F_\sigma(s,e) = f(s\sigma e)$.
Since $F$ has full rank (its rows thus form a basis for $\mathbb{F}^E$), we know that there is a $T_\sigma: S \rightarrow \mathbb{F}^S$ such that for every $s \in S$ we have $F_\sigma(s) = \sum_{s' \in S} T_\sigma(s,s') F(s')$.
This allows us to define the corresponding weighted automaton over $\mathbb{F}$ (see Definition~\ref{def:WAgen}) on state space $\mathbb{F}^E$, with initial state the indicator vector of $\epsilon$, final/measurement vector given by $F(\cdot,\epsilon)$, and transition matrices $T_\sigma$.
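For concreteness, a sketch (our addition) of building this hypothesis from the table $F$ follows; it assumes the field is $\mathbb{R}$ in exact arithmetic (over a finite field one would substitute a modular linear solver), that $\Sigma$ consists of single characters, and that \texttt{f} is the membership oracle:
\begin{verbatim}
# Sketch: hypothesis WA from F.  Each T_sigma solves T_sigma F = F_sigma,
# i.e. F^T T_sigma^T = F_sigma^T (F is square and full rank).
import numpy as np

def hypothesis(f, S, E, Sigma):
    F = np.array([[f(s + e) for e in E] for s in S])
    T = {}
    for a in Sigma:
        Fa = np.array([[f(s + a + e) for e in E] for s in S])
        T[a] = np.linalg.solve(F.T, Fa.T).T
    alpha = np.eye(len(S))[S.index('')]   # indicator of epsilon in S
    omega = F[:, E.index('')]             # measurement column F(., epsilon)
    return alpha, omega, T
\end{verbatim}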
\subsection{Learning from counter-example query}
Now, suppose we tried this automaton $A$ and our teacher returned a counter-example $z$.
We will use this counter-example to find strings to extend $S$ and $E$ and thus increase the rank of our matrix $F$.
Now for each $1 \leq i \leq |z| + 1$ consider the partitions $z = z_{< i}\sigma_i z_{> i}$ (for $i = |z| + 1$ the suffix $\sigma_i z_{> i}$ is empty).
For each $z_{< i}$ define $Z_i: S \rightarrow \mathbb{F}$ to be the state of our candidate automaton when we run it on $z_{< i}$.
Let $f_i = \sum_{s \in S} Z_i(s)f(s\sigma_i z_{> i})$. From our definition, we know that $f_1 = f(z) \neq f_A(z) = f_{|z| + 1}$, thus as we increase $i$ there must be some point $k$ where $f_k \neq f_{k + 1}$. We find this point by using binary search on $i$.
Let us write out $f_{k + 1}$:
\begin{eqnarray}
f_{k + 1} &=& \sum_{s' \in S}Z_{k + 1}(s') f(s'z_{> k}) \\ &=& \sum_{s,s' \in S} T_{\sigma_k}(s,s') Z_k(s) f(s'z_{> k}) \label{eq:longkplusone}
\end{eqnarray}
Now, proceed by contradiction: if $\forall s \in S$ we have
$f(s\sigma_kz_{> k}) = \sum_{s' \in S} T_{\sigma_k}(s,s')f(s'z_{> k})$ then
\begin{eqnarray}
f_k &=& \sum_{s \in S} Z_k(s) f(s\sigma_k z_{> k})\\ &=& \sum_{s \in S} Z_k(s) \sum_{s' \in S} T_{\sigma_k}(s,s')f(s'z_{> k}) = f_{k + 1}
\end{eqnarray}
where the last equality follows from Equation~\ref{eq:longkplusone} and contradicts $f_k \neq f_{k + 1}$. Thus, there must be some $s^* \in S$ such that
$f(s^*\sigma_kz_{> k}) \neq \sum_{s' \in S} T_{\sigma_k}(s^*,s')f(s'z_{> k})$.
Now, consider any $s\sigma \in S$; then
\begin{equation}
F(s\sigma) = F_{\sigma}(s) =
\sum_{s'}T_{\sigma}(s,s')F(s')
\end{equation}
\noindent but since the $F(s)$ are linearly independent, we must have that $T_\sigma(s,s\sigma) = 1$ and for $s' \neq s\sigma$ we must have $T_\sigma(s,s') = 0$. Plugging this into our contradiction assumption, we see that for $s\sigma_k \in S$ we have $\sum_{s' \in S} T_{\sigma_k}(s,s')f(s'z_{> k}) = f(s\sigma_k z_{> k})$. Therefore, our $s^*\sigma_k \not\in S$. Now, we can add $s^*\sigma_k$ to $S$ and $z_{> k}$ to $E$ to get a new linearly independent row and column and increase the rank of our matrix by 1.
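A sketch (our addition, in the notation of the hypothesis sketch above and assuming exact field arithmetic, so that equality tests are meaningful) of the binary search for the breakpoint $k$ with $f_k \neq f_{k+1}$:
\begin{verbatim}
# Sketch: binary search for the breakpoint k with f_k != f_{k+1}.
def run(alpha, T, w):
    state = alpha.copy()
    for a in w:
        state = T[a].T @ state      # Z(w)^T = alpha^T T_{w_1}...T_{w_m}
    return state

def f_i(f, alpha, T, S, z, i):
    # f_i = sum_s Z_i(s) f(s . sigma_i z_{>i}), with Z_i after z_{<i}.
    Z = run(alpha, T, z[:i-1])
    return sum(Z[j] * f(S[j] + z[i-1:]) for j in range(len(S)))

def find_breakpoint(f, alpha, T, S, z):
    lo, hi = 1, len(z) + 1          # invariant: f_lo != f_hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f_i(f, alpha, T, S, z, mid) == f_i(f, alpha, T, S, z, lo):
            lo = mid                # breakpoint lies in [mid, hi]
        else:
            hi = mid                # breakpoint lies in [lo, mid]
    return lo                       # k = lo satisfies f_k != f_{k+1}
\end{verbatim}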
\subsection{Termination}
Since $F$ is a full-rank submatrix of $H_f$, the minimal weighted automaton corresponding to $f$ must have at least $\mathrm{rank}(F)$ states.
At every counter-example query, we increase our rank by one, so if our target $f$ is represented by a minimal weighted automaton with $n$ states then after $n - 1$ counter-example queries we must have $\mathrm{rank}(F) = \mathrm{rank}(H_f)$.
Since our automaton agrees with $f$ on every value in $F$, the $n$th counter-example query gets it ``CORRECT".
\section{Conclusion and future work}
As far as we know,
this is the first time it has been shown that weighted automata can be exponentially smaller than NFAs.
Together with the learning algorithm, this produces a somewhat surprising result: weighted automata are structured enough that even though they are compact, they are still efficiently learnable.
This also means that some languages where the minimal DFAs and NFAs are exponentially bigger than the minimal WAs can be learned much faster using the WA representation.
Since WAs are never larger than DFAs, learning WA2s can replace the standard Angluin-Schapire algorithm~\cite{A87,S91} for learning regular languages.
In the cases where WAs are the same size as DFAs, we can achieve the same performance, and in the cases in which WAs are more compact, we provide an exponential savings in terms of queries used.
\section*{Acknowledgements}
We are indebted to helpful discussion with Borja Balle and Doina Precup.
\section*{References}
\bibliographystyle{plainnat}
| {
"timestamp": "2020-11-23T02:17:30",
"yymm": "2011",
"arxiv_id": "2011.10498",
"language": "en",
"url": "https://arxiv.org/abs/2011.10498",
"abstract": "We show that weighted automata over the field of two elements can be exponentially more compact than non-deterministic finite state automata. To show this, we combine ideas from automata theory and communication complexity. However, weighted automata are also efficiently learnable in Angluin's minimal adequate teacher model in a number of queries that is polynomial in the size of the minimal weighted automaton.. We include an algorithm for learning WAs over any field based on a linear algebraic generalization of the Angluin-Schapire algorithm. Together, this produces a surprising result: weighted automata over fields are structured enough that even though they can be very compact, they are still efficiently learnable.",
"subjects": "Formal Languages and Automata Theory (cs.FL)",
"title": "Weighted automata are compact and actively learnable",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517475646369,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7089606519986565
} |
https://arxiv.org/abs/2202.06482 | Splitting numerical integration for matrix completion | Low rank matrix approximation is a popular topic in machine learning. In this paper, we propose a new algorithm for this topic by minimizing the least-squares estimation over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical gradient descent within the framework of optimization on manifolds. In particular, we reformulate an unconstrained optimization problem on a low-rank manifold into a differential dynamic system. We develop a splitting numerical integration method by applying a splitting integration scheme to the dynamic system. We conduct the convergence analysis of our splitting numerical integration algorithm. It can be guaranteed that the error between the recovered matrix and true result is monotonically decreasing in the Frobenius norm. Moreover, our splitting numerical integration can be adapted into matrix completion scenarios. Experimental results show that our approach has good scalability for large-scale problems with satisfactory accuracy | \section{Introduction}
Computing an efficient and reliable low-rank approximation of a given matrix is a fundamental task in many machine learning problems, such as principal component analysis~\cite{jolliffe2002principal}, face recognition~\cite{muller2004singular} and large scale data compression~\cite{drineas2006subspace,huang2020deeppurpose}.
It is well-known that the truncated singular value decomposition (SVD) provides the best low-rank approximation to the matrix in question.
Specifically, for a given matrix $\mathbf{M}\in\mathbb{R}^{m\times n}$ with $m\geq n$, its SVD is defined as
\begin{equation*}
\label{eqn:svd}
\mathbf{M} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}^T,
\end{equation*}
where $\mathbf{U}=[\mathbf{u}_1, \ldots,\mathbf{u}_n ]$ is an $m\times n$ column orthonormal matrix, $\mathbf{V}= [\mathbf{v}_1, \ldots,\mathbf{v}_n ]$ is an orthonormal matrix, and $\mathbf{\Sigma} = \mathrm{diag}(\sigma_1, \ldots,\sigma_n)$ is a diagonal matrix with the diagonal entries $\sigma_1\geq \sigma_2\geq \cdots\geq\sigma_n\geq 0$.
Moreover,
$\mathbf{u}_j$ and $\mathbf{v}_j$ are called the left and right singular vectors corresponding to $\sigma_j$, the $j$-th largest singular value of $\mathbf{M}$.
For any $1\leq r\leq n$, then
\begin{equation*}
\label{eqn:truncated_svd}
\mathbf{M}_r =[\mathbf{u}_1, \ldots,\mathbf{u}_r] \mathrm{diag}(\sigma_1, \ldots,\sigma_r) [\mathbf{v}_1, \ldots,\mathbf{v}_r]^T
\end{equation*}
is the truncated SVD of $\mathbf{M}$ of rank at most $r$, which is unique only if $\sigma_{r+1} < \sigma_r$.
The assumption that $m\geq n\gg r$ will be maintained throughout this paper for clarity of statement.
All results will also hold for $m<n$, applied to $\mathbf{M}^T$.
It is well known that $\mathbf{M}_r$ is the best rank-$r$ approximation to $\mathbf{M}$
\cite{eckart1936approximation,golub2012matrix}.
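As a minimal illustration (our addition, assuming a dense input matrix \texttt{M}), the rank-$r$ truncation can be computed directly with NumPy:
\begin{verbatim}
# Sketch: the best rank-r approximation M_r via the full thin SVD.
import numpy as np

def truncated_svd(M, r):
    U, sigma, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] @ np.diag(sigma[:r]) @ Vt[:r, :]
\end{verbatim}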
The truncated SVD can be cast into a fixed-rank optimization problem. That is,
\begin{equation}
\label{eqn:fix_rank}
\begin{aligned}
& \underset{\mathbf{Y}}{\arg\min} \Vert \mathbf{M}- \mathbf{Y} \Vert_F,\\
& \text{s.t.}\ \text{rank}(\mathbf{Y})= r.
\end{aligned}
\end{equation}
A number of algorithms, such as the Lanczos algorithm~\cite{golub2012matrix}, the randomized SVD algorithm~\cite{halko2011finding}, and subspace iteration~\cite{gu2015subspace}, have been proposed to solve this problem.
An equivalent formulation is given by introducing the concept of
matrix manifold as follows:
\begin{equation}
\label{eqn:fix_manifold}
\begin{aligned}
& \underset{\mathbf{Y}}{\arg\min} \frac{1}{2}\Vert \mathbf{M} - \mathbf{Y} \Vert_F^2,\\
& \text{s.t.}\ \mathbf{Y} \in \mathcal{M}_r,
\end{aligned}
\end{equation}
where $\mathcal{M}_r$ represents the smooth manifold of rank-$r$ matrices.
This formulation can be viewed as an unconstrained optimization problem on the low-rank manifold. Accordingly, Koch and Lubich
\cite{lubich2007dynamical} proposed a dynamical low rank approximation approach, which is a gradient descent procedure on the manifold in essence.
In this paper, we attempt to make the gradient descent process ``finer-grained.''
In particular, we view the problem from a perspective of differential dynamic systems and employ a splitting scheme to solve it.
Accordingly, we devise a novel method for low rank matrix approximation that we call a splitting numerical integration method.
Theoretical analysis guarantees that the splitting numerical integration algorithm converges asymptotically.
In addition, we apply splitting numerical integration to matrix completion scenarios, where the matrix $\mathbf{M}$ is partially observed.
Empirical results are also encouraging, especially on large-scale datasets.
The remainder of the paper is organized as follows.
Section~\ref{sec:notation}
presents the notation frequently used in this paper and the problem formulation.
Section~\ref{sec:method} describes our algorithm and theoretical analysis.
Empirical results are given in Section~\ref{sec:experiment}.
\section{Notation and Preliminaries}
\label{sec:notation}
First of all, we present the notation used in this paper. Let $\mathbf{I}_m$ be the
$m \times m$ identity matrix.
Given a matrix $\mathbf{Y} \in \mathbb{R}^{m\times n}$, $\Vert \mathbf{Y}\Vert_F$ denotes
the Frobenius norm of $\mathbf{Y}$ and $\Vert \mathbf{Y}\Vert_2$ denotes the spectral norm.
It is well established that every rank-$r$ matrix $\mathbf{Y} \in\mathbb{R}^{m\times n}$ can be written in the form
\begin{equation}
\label{eqn:SVDX}
\begin{aligned}
\mathbf{Y} = \mathbf{U} \mathbf{S}\mathbf{V}^T,
\end{aligned}
\end{equation}
where $\mathbf{U} \in \mathbb{R}^{m\times r}$ and $\mathbf{V} \in \mathbb{R}^{n\times r}$ are column orthonormal, i.e.,
$ \mathbf{U}^T\mathbf{U} = \mathbf{I}_r \; \mbox{ and }
\mathbf{V}^T\mathbf{V} = \mathbf{I}_r$,
and
$\mathbf{S} \in \mathbb{R}^{r\times r}$
is nonsingular. Notice that here we do not require $\mathbf{S}$ to be the diagonal matrix of the singular values.
The representation in Eqn. \eqref{eqn:SVDX} is not unique because
$\mathbf{Y} = \hat{\mathbf{U}}\hat{\mathbf{S}}\hat{\mathbf{V}}^T$ is another representation where
$\hat{\mathbf{U}} = \mathbf{U} \mathbf{P}$, $\hat{\mathbf{V}} = \mathbf{V} \mathbf{Q} $, and $\hat{\mathbf{S}} =
\mathbf{P}^T\mathbf{S}\mathbf{Q}$ whenever $\mathbf{P}, \mathbf{Q}\in \mathbb{R}^{r\times r}$ are any orthonormal matrices.
To cope with the non-uniqueness in Eqn. (\ref{eqn:SVDX}), we will use a
unique decomposition in the tangent space. Let $\mathcal{V}_{m,r}$ represent
the Stiefel manifold of real column orthonormal matrices of size $m\times r
$ ($m > r$). The tangent space at the point $\mathbf{U} \in \mathcal{V}_{m,r}$ is defined as:
\begin{equation*}
\label{eqn:tangentspace}
\begin{aligned}
\mathcal{T}_{\mathbf{U}}\mathcal{V}_{m,r} &= \{\delta\mathbf{U}\in \mathbb{R}^{m\times r}:\delta \mathbf{U}^T\mathbf{U} + \mathbf{U}^T\delta\mathbf{U} = \mathbf{0} \} \\
& = \{\delta\mathbf{U} \in \mathbb{R}^{m\times r}:\mathbf{U}^T\delta\mathbf{U}\in so(r)\},
\end{aligned}
\end{equation*}
where $so(r)$ denotes the space of skew-symmetric real $r\times r$
matrices. Consider the extended tangent map of
$(\mathbf{S},\mathbf{U},\mathbf{V})\longmapsto \mathbf{Y} = \mathbf{U}\mathbf{S}\mathbf{V}^T$,
\begin{equation*}
\label{eqn:tangentmap}
\begin{aligned}
\mathbb{R}^{r\times r} \times \mathcal{T}_{\mathbf{U}}\mathcal{V}_{m,r} \times \mathcal{T}_{\mathbf{V}}\mathcal{V}_{n,r} & \xrightarrow{} \mathcal{T}_{\mathbf{Y}}\mathcal{M}_r \times so(r)\times so(r), \\
(\delta\mathbf{S},\delta \mathbf{U},\delta\mathbf{V}) & \xrightarrow{} (\delta\mathbf{U}\mathbf{S}\mathbf{V}^T + \mathbf{U}\delta\mathbf{S}\mathbf{V}^T + \mathbf{U}\mathbf{S}\delta\mathbf{V}^T, \mathbf{U}^T\delta\mathbf{U},\mathbf{V}^T\delta\mathbf{V}).
\end{aligned}
\end{equation*}
The manifold of rank-$r$ matrices is denoted by $\mathcal{M}_r$.
The tangent space at any $\mathbf{Y} \in \mathcal{M}_r$ is denoted by $\mathcal{T}_{\mathbf{Y}}\mathcal{M}_r $, which is defined as follows.
Every $\delta\mathbf{Y}\in \mathcal{T}_{\mathbf{Y}}\mathcal{M}_r$ can be written into the following form:
\begin{equation}
\label{eqn:differentialofX}
\begin{aligned}
\delta{\mathbf{Y}} = \delta{\mathbf{U}}\mathbf{S}\mathbf{V}^T + \mathbf{U}\delta{\mathbf{S}}\mathbf{V}^T + \mathbf{U}\mathbf{S}\delta{\mathbf{V}}^T,
\end{aligned}
\end{equation}
where $\delta\mathbf{S}\in \mathbb{R}^{r\times r}$, $\delta \mathbf{U}\in \mathcal{T}_{\mathbf{U}}\mathcal{V}_{m,r}$ and $\delta\mathbf{V} \in \mathcal{T}_{\mathbf{V}}\mathcal{V}_{n,r}$.
Furthermore, $\delta\mathbf{S}$, $\delta\mathbf{U}$ and $\delta\mathbf{V} $ are uniquely
determined by $\delta\mathbf{Y}$ if we impose the orthogonality constraints:
\begin{equation}
\label{eqn:orthogonal}
\begin{aligned}
& \mathbf{U}^T\delta\mathbf{U} = \mathbf{0}, \\
& \mathbf{V}^T\delta\mathbf{V} = \mathbf{0}.
\end{aligned}
\end{equation}
The projection operators onto the spaces spanned by the
columns of $\mathbf{U}$ and $\mathbf{V}$, and their orthogonal complements are defined as
\begin{equation*}
\label{eqn:project}
\begin{aligned}
& P_{\mathbf{U}} = \mathbf{U}\bfU^T, \\
& P_{\mathbf{V}} = \mathbf{V}\bfV^T, \\
& P^{\bot}_{\mathbf{U}} = \mathbf{I}_m - \mathbf{U}\bfU^T, \\
& P^{\bot}_{\mathbf{V}} = \mathbf{I}_n - \mathbf{V}\bfV^T.
\end{aligned}
\end{equation*}
Then $\delta\mathbf{S}$, $\delta\mathbf{U} $ and $\delta\mathbf{V} $ are uniquely determined by
$\delta\mathbf{Y}$ as follows:
\begin{equation*}
\label{eqn:yield}
\begin{aligned}
& \delta\mathbf{S} = \mathbf{U}^T\delta\mathbf{Y}\mathbf{V}, \\
& \delta\mathbf{U} = P^{\bot}_{\mathbf{U}}\delta\mathbf{Y}\mathbf{V}\mathbf{S}^{-1}, \\
& \delta\mathbf{V} = P^{\bot}_{\mathbf{V}}\delta\mathbf{Y}^T\mathbf{U}\mathbf{S}^{-T}.
\end{aligned}
\end{equation*}
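As a small illustrative sketch (our addition, assuming $\mathbf{Y}=\mathbf{U}\mathbf{S}\mathbf{V}^T$ with nonsingular $\mathbf{S}$ and a tangent perturbation $\delta\mathbf{Y}$), these three factors can be computed directly with NumPy:
\begin{verbatim}
# Sketch: the unique tangent-space factors dS, dU, dV of dY.
import numpy as np

def tangent_factors(U, S, V, dY):
    Sinv = np.linalg.inv(S)
    dS = U.T @ dY @ V                                   # U^T dY V
    dU = (dY @ V - U @ dS) @ Sinv                       # P_U^perp dY V S^{-1}
    dV = (dY.T @ U - V @ (V.T @ (dY.T @ U))) @ Sinv.T   # P_V^perp dY^T U S^{-T}
    return dS, dU, dV
\end{verbatim}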
\subsection{Dynamic Low Rank Approximation}
\label{sec:formulation}
Let us return to Problem~\eqref{eqn:fix_manifold} and let
\begin{equation*}
\label{fixrank2}
\begin{aligned}
f(\mathbf{Y}) = \frac{1}{2} \Vert \mathbf{Y} - \mathbf{M}\Vert_F^2
\end{aligned}
\end{equation*}
denote the objective function.
The gradient of $f(\mathbf{Y})$ in $\mathbf{Y}$ can be written as:
\begin{equation*}
\label{eqn:gradient}
\begin{aligned}
\nabla f(\mathbf{Y}) = \mathbf{Y} - \mathbf{M}.
\end{aligned}
\end{equation*}
Recalling the constraint $\mathbf{Y}\in\mathcal{M}_r$, the Riemannian gradient, denoted $\nabla\mathbf{Y}$, is used instead of $\nabla f(\mathbf{Y})$; it is the specific tangent vector corresponding to the direction of steepest ascent of $f(\mathbf{Y})$ restricted to the tangent space $\mathcal{T}_{\mathbf{Y}}\mathcal{M}_r$.
It can be solved via the following optimization problem:
\begin{equation}
\label{eqn:differentequation}
\begin{aligned}
& \underset{\nabla{\mathbf{Y}}\in\mathbb{R}^{m\times n}}{\arg\min} \; \frac{1}{2} \Vert \nabla{\mathbf{Y}} - \nabla{\mathbf{A}} \Vert_F^2\\
& \text{s.t.}\ \nabla{\mathbf{Y}}\in \mathcal{T}_{\mathbf{Y}}\mathcal{M}_r, \\
\end{aligned}
\end{equation}
where we denote $\nabla{\mathbf{A}} \triangleq - \nabla f(\mathbf{Y})$ for notational simplicity
and $\nabla\mathbf{A}$ can be seen as a given constant at each iteration.
The solution of Problem (\ref{eqn:differentequation}) is well-studied in the differential geometry literature. That is,
\begin{proposition}
\emph{\cite{lubich2007dynamical}} Let $\mathbf{Y} = \mathbf{U}\mathbf{S}\mathbf{V}^T \in \mathcal{M}_r$ where $\mathbf{S}\in \mathbb{R}^{r \times r}$ is nonsingular, and $\mathbf{U}\in \mathbb{R}^{m\times r}$ and $\mathbf{V}\in \mathbb{R}^{n\times r}$ are column orthonormal. Then
the solution to Problem (\ref{eqn:differentequation}) can be written in the following form:
\begin{equation}
\label{eqn:prop2.1}
\begin{aligned}
\nabla{\mathbf{Y}} = \nabla{\mathbf{U}}\mathbf{S}\mathbf{V}^T + \mathbf{U}\nabla{\mathbf{S}}\mathbf{V}^T + \mathbf{U}\mathbf{S}\nabla{\mathbf{V}}^T,
\end{aligned}
\end{equation}
where
\begin{equation}
\label{prop2.11}
\begin{aligned}
& \nabla{\mathbf{S}} = \mathbf{U}^T\nabla{\mathbf{A}}\mathbf{V}, \\
& \nabla{\mathbf{U}} = P_{\mathbf{U}}^{\bot}\nabla{\mathbf{A}}\mathbf{V}\mathbf{S}^{-1}, \\
& \nabla{\mathbf{V}} = P_{\mathbf{V}}^{\bot}\nabla{\mathbf{A}}^T\mathbf{U}\mathbf{S}^{-T}.
\end{aligned}
\end{equation}
\end{proposition}
The resulting algorithm is a simple gradient descent procedure.
That is,
\begin{equation}
\label{eqn:dlra}
\begin{aligned}
\mathbf{U} \Leftarrow \mathbf{U} + \epsilon\nabla\mathbf{U}, \; \mathbf{S} \Leftarrow \mathbf{S} + \epsilon\nabla\mathbf{S}, \; \mbox{ and } \; \mathbf{V} \Leftarrow \mathbf{V} + \epsilon\nabla\mathbf{V},
\end{aligned}
\end{equation}
where $\epsilon$ is the stepsize. The algorithm was first proposed in~\cite{lubich2007dynamical} under the name dynamical low-rank approximation, a landmark in treating the SVD from the viewpoint of dynamic systems.
Prior work on computing the SVD via dynamic systems addressed only the full SVD, without considering the truncated SVD.
Under the low-rank assumption, the matrix $\mathbf{Y}$ is never formed explicitly, for ease of computation.
Instead, $\mathbf{Y}$ is represented via the product of $\mathbf{U}$, $\mathbf{S}$ and $\mathbf{V}^T$ (at a storage cost of order $O(mr+nr)$), as shown in Eqn.~(\ref{eqn:SVDX}).
Similarly, the Riemannian gradient $\nabla \mathbf{Y}$ can also be represented by relatively small matrices (again of order $O(mr+nr)$), as shown in Eqn.~(\ref{eqn:prop2.1}).
For notational convenience, however, we write $\mathbf{Y}$ and $\nabla\mathbf{Y}$ instead of $\mathbf{U}\mathbf{S}\mathbf{V}^T$ and $(\nabla\mathbf{U})\mathbf{S}\mathbf{V}^T+\mathbf{U}(\nabla\mathbf{S})\mathbf{V}^T+\mathbf{U}\mathbf{S}(\nabla\mathbf{V})^T$, respectively.
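For illustration, one step of the update in Eqn.~\eqref{eqn:dlra} might be sketched as follows (a NumPy sketch of our own, forming $\dot{\mathbf{A}}$ densely purely for clarity; the final re-orthonormalization is the extra operation criticized below):
\begin{verbatim}
import numpy as np

def dlra_step(U, S, V, M, eps):
    """One dynamical low-rank step for f(Y) = 0.5 * ||Y - M||_F^2."""
    gradA = M - U @ S @ V.T                              # -grad f(Y)
    dS = U.T @ gradA @ V
    dU = (gradA - U @ (U.T @ gradA)) @ V @ np.linalg.inv(S)
    dV = (gradA.T - V @ (V.T @ gradA.T)) @ U @ np.linalg.inv(S.T)
    U, S, V = U + eps * dU, S + eps * dS, V + eps * dV
    # Extra re-orthonormalization of U and V (the lossy repair step).
    U, Ru = np.linalg.qr(U)
    V, Rv = np.linalg.qr(V)
    return U, Ru @ S @ Rv.T, V
\end{verbatim}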
Though dynamical low-rank approximation is a benchmark for low-rank approximation from the dynamic-system viewpoint,
it is not competitive with state-of-the-art low-rank approximation approaches, for the following reasons.
First, the framework does not exploit geometric information sufficiently.
Specifically, on the fixed-rank manifold, in moving from $\mathbf{Y}_i = \mathbf{U}_i\mathbf{S}_i\mathbf{V}_i^T$ to $\mathbf{Y}_{i+1} = \mathbf{U}_{i+1}\mathbf{S}_{i+1}\mathbf{V}_{i+1}^T$, the algorithm only uses the Riemannian gradient at $\mathbf{Y}_i$.
Second, since the discretization error of the dynamic system is proportional to the stepsize~\cite{lubich2007dynamical}, the stepsize must be extremely small, which limits the convergence rate.
Third, extra operations are introduced to enforce the column orthogonality of $\mathbf{U}$ and of $\mathbf{V}$, a rather crude strategy that often leads to loss of information~\cite{lubich2007dynamical}.
To address these issues, we adopt a scheme widely used for differential systems to carry out the gradient descent.
Specifically, we use a splitting integration technique to update the three components ($\mathbf{U}$, $\mathbf{S}$, $\mathbf{V}$) step by step, making fuller use of the geometric information.
\section{Methodology}
\label{sec:method}
In this section we present our method.
We first give a novel dynamic-system view of Problem~\eqref{eqn:differentequation}.
With this view, we use a splitting integration scheme to devise a dynamic flow subspace method for solving Problem~\eqref{eqn:fix_manifold}. Finally, we give a convergence analysis of the method and extend it to full SVD and low-rank matrix completion.
\subsection{Dynamic System}
Starting from the perspective of differential systems, we study the solution of the optimization problem in \eqref{eqn:differentequation}.
In particular, we consider an alternative formulation for Problem (\ref{eqn:fix_manifold}) in the form of a dynamic system:
\begin{equation}
\label{eqn:differentequation2}
\begin{aligned}
& \underset{{\dot{\mathbf{Y}}\in \mathbb{R}^{m\times n}}}{\arg\min} \; \frac{1}{2} \Vert \dot{\mathbf{Y}} - \dot{\mathbf{A}} \Vert_F^2\\
& \text{s.t.}\ \dot{\mathbf{Y}}\in \mathcal{T}_{\mathbf{Y}}\mathcal{M}_r,
\end{aligned}
\end{equation}
where $\mathbf{Y}$ is regarded as a time-dependent matrix such that $\mathbf{Y}=\mathbf{Y}(t)$ and $\dot{\mathbf{Y}}$ denotes the derivative of $\mathbf{Y}$ w.r.t. time, and
$\dot{\mathbf{A}} \triangleq \nabla \mathbf{A} = - \nabla f(\mathbf{Y})$ in our case.
Notice that this is the continuous-time version of Problem~\eqref{eqn:differentequation}.
According to the Galerkin condition on the tangent space
$\mathcal{T}_{\mathbf{Y}}\mathcal{M}_r$ in numerical analysis~\cite{hairer2006geometric},
Problem (\ref{eqn:differentequation2}) is equivalent to the following projection:
\begin{equation}
\label{eqn:galerkin}
\begin{aligned}
& \text{finding}\ \dot{\mathbf{Y}}\in \mathcal{T}_{\mathbf{Y}}\mathcal{M}_r\ \text{such that}\\
& \left \langle \dot{\mathbf{Y}} - \dot{\mathbf{A}},\delta \mathbf{Y} \right \rangle = 0 \ \
\text{for\ all\ } \delta \mathbf{Y} \in \mathcal{T}_{\mathbf{Y}}\mathcal{M}_r.
\end{aligned}
\end{equation}
Furthermore,
Problem (\ref{eqn:galerkin}) can be transformed into the following form:
\begin{equation}
\label{eqn:galerkin2}
\begin{aligned}
\dot{\mathbf{Y}} = \tilde{P}_\mathbf{Y}(\dot{\mathbf{A}}),
\end{aligned}
\end{equation}
where $\tilde{P}_\mathbf{Y}(\cdot)$ is a projection operator, defined as
\begin{equation}
\label{eqn:projectionoperator3}
\begin{aligned}
& \tilde{P}_\mathbf{Y}(\mathbf{B}) = \mathbf{B} - \tilde{P}_\mathbf{Y}^\bot (\mathbf{B}) \\
& \text{with} \ \ \tilde{P}_\mathbf{Y}^\bot(\mathbf{B}) = P^\bot_{\mathbf{U}}\mathbf{B} P^\bot_{\mathbf{V}}\ \;\ \forall\ \mathbf{B}\in \mathbb{R}^{m\times n}.
\end{aligned}
\end{equation}
Substituting Eqn. (\ref{eqn:projectionoperator3}) into
Eqn. (\ref{eqn:galerkin2}), we have the following dynamic system:
\begin{equation}
\label{eqn:projectionoperator2}
\begin{aligned}
\dot{\mathbf{Y}} = \mathbf{U}\bfU^T\dot{\mathbf{A}} - \mathbf{U}\bfU^T\dot{\mathbf{A}}\mathbf{V}\bfV^T + \dot{\mathbf{A}}\mathbf{V}\bfV^T.
\end{aligned}
\end{equation}
\subsection{Dynamic Flow Subspace Method}
Our current concern is
to solve the differential equation in (\ref{eqn:projectionoperator2}). We resort to a
splitting scheme~\cite{hairer2006geometric}. In particular, let
$\mathcal{L}$ be a local generator~\cite{leimkuhler2013rational} corresponding to the exact solution to
Problem (\ref{eqn:projectionoperator2}) and separate it into several
sub-generators as follows:
\begin{equation*}
\label{eqn:split}
\begin{aligned}
\mathcal{L} = \mathcal{L}_A + \mathcal{L}_B + \mathcal{L}_O,
\end{aligned}
\end{equation*}
where
\begin{equation}
\label{eqn:3_steps}
\begin{aligned}
& \mathcal{L}_A:\ \dot{\mathbf{Y}} = \mathbf{U}\bfU^T\dot{\mathbf{A}}, \\
& \mathcal{L}_B:\ \dot{\mathbf{Y}} = - \mathbf{U}\bfU^T\dot{\mathbf{A}}\mathbf{V}\bfV^T, \\
& \mathcal{L}_O:\ \dot{\mathbf{Y}} = \dot{\mathbf{A}}\mathbf{V}\bfV^T.
\end{aligned}
\end{equation}
\begin{theorem}
\label{thm:subODE}
For $\mathbf{Y}$ defined in Eqn.~(\ref{eqn:SVDX}) and $\dot{\mathbf{Y}}$ defined in Eqn.~(\ref{eqn:differentialofX}), assume that the condition described in Eqn.~(\ref{eqn:orthogonal}) is satisfied. Then
the analytical solution to the sub-generator $\mathcal{L}_A$ described in Eqn. (\ref{eqn:3_steps}) is
\begin{equation*}
\label{eqn:solve111111}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{S}\mathbf{V}^T\right) & = \mathbf{U}^T\dot{\mathbf{A}}, \\
\dot{\mathbf{U}} & = \mathbf{0}.
\end{aligned}
\end{equation*}
The analytical solution to the sub-generator $\mathcal{L}_B$ is
\begin{equation*}
\label{eqn:solve33333}
\begin{aligned}
& \dot{\mathbf{S}} = -\mathbf{U}^T\dot{\mathbf{A}}\mathbf{V}, \\
& \dot{\mathbf{U}} = \mathbf{0}, \\
& \dot{\mathbf{V}} = \mathbf{0}.
\end{aligned}
\end{equation*}
And the analytical solution to the sub-generator $\mathcal{L}_O $ is
\begin{equation*}
\label{eqn:solve22222}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d}t}\left(\mathbf{U}\mathbf{S}\right) & = \dot{\mathbf{A}}\mathbf{V}, \\
\dot{\mathbf{V}} & = \mathbf{0}.
\end{aligned}
\end{equation*}
\end{theorem}
Based on Theorem~\ref{thm:subODE}, we devise a novel method for Problem~\eqref{eqn:fix_manifold}. The splitting integration scheme allows us to update $\mathbf{U}$, $\mathbf{V}$ and $\mathbf{S}$ alternately, rather than updating $\mathbf{Y}$ directly.
Owing to the Markovian property of the Kolmogorov
operator, different orderings of the sub-generators $\mathcal{L}_A$, $\mathcal{L}_B$ and $\mathcal{L}_O$ are
equivalent~\cite{leimkuhler2013rational}.
In our work, we restrict our attention to the `OBA' scheme: $\mathcal{L}_O + \mathcal{L}_B + \mathcal{L}_A$.
We call our method the splitting numerical integration method.
The details are given in Algorithm~\ref{alg:sos-LRMC}.
Here $\sigma_{\text{min}}(\mathbf{V}_{i-1}^T\mathbf{V}_i)$ measures the distance between the column spaces of $\mathbf{V}_{i-1}$ and $\mathbf{V}_i$, and
it is employed as the stopping criterion.
Compared with dynamical low-rank approximation in Eqn.~\eqref{eqn:dlra}, which updates $\mathbf{Y}$ directly and requires the stepsize to be small enough (approaching 0),
our splitting numerical integration method updates the three components ($\mathbf{U}$, $\mathbf{S}$ and $\mathbf{V}$) in a ``finer-grained'' manner.
More specifically, in Step 7 of Algorithm \ref{alg:sos-LRMC} we use the fresh $\mathbf{U}_i$ rather than $\mathbf{U}_{i-1}$. Notice that in Algorithm~\ref{alg:sos-LRMC} the stepsize is implicitly set to 1.
In summary, our approach addresses the issues raised at the end of Section~\ref{sec:formulation}.
\subsection{Convergence Analysis}
In this section we study convergence properties of our splitting numerical integration algorithm.
\begin{theorem}
\label{thm:fullobservation}
Let $\{\mathbf{Y}_i\colon i=0, 1, \ldots \}$ denote the sequence generated by Algorithm \ref{alg:sos-LRMC}.
Then
\begin{equation*}
\label{eqn:thmfull}
\begin{aligned}
& \| \mathbf{Y}_{i-1} - \mathbf{M} \|_F > \|
\mathbf{Y}_{i} - \mathbf{M}\|_F.
\end{aligned}
\end{equation*}
\end{theorem}
Furthermore, it can also be shown that our splitting numerical integration method converges in terms of the subspace estimation.
\begin{theorem}
\label{thm:subspace}
Let $\{\mathbf{U}_i\colon i=0, 1, \ldots \}$ be the sequence generated by splitting numerical integration in Algorithm \ref{alg:sos-LRMC}.
Then the subspace error $\Vert \mathbf{U}_{i}\mathbf{U}_{i}^T\mathbf{M} - \mathbf{M} \Vert_F$ decreases monotonically until convergence.
In particular, we have
\begin{equation*}
\label{eqn:subspace}
\Vert\mathbf{U}_{i-1}\mathbf{U}_{i-1}^T\mathbf{M} - \mathbf{M} \|_F > \|
\mathbf{U}_i\mathbf{U}_i^T\mathbf{M} - \mathbf{M}\|_F.
\end{equation*}
Similar results hold for $\{\mathbf{V}_i\colon i=0, 1, \ldots \}$.
That is,
\[
\Vert\mathbf{M}\mathbf{V}_{i-1} \mathbf{V}_{i-1}^T- \mathbf{M}\Vert_F > \Vert\mathbf{M}\mathbf{V}_{i} \mathbf{V}_{i}^T- \mathbf{M}\Vert_F.
\]
\end{theorem}
\begin{algorithm}[!ht]
\caption{splitting numerical integration}
\label{alg:sos-LRMC}
\begin{algorithmic}[1]
\REQUIRE matrix $\mathbf{M} \in \mathbb{R}^{m{\times} n}$, target rank $r$, initial value $\mathbf{Y}_0 = \mathbf{U}_0\mathbf{S}_0\mathbf{V}_0^T \in
\mathcal{M}_r$, tolerance for the stopping criterion $\tau<1$,
maximal iteration $T$.
\ENSURE truncated SVD of $\mathbf{M}$.
\FOR {$i = 1:T$}
\STATE Compute the gradient $\dot{\mathbf{A}}=-\nabla f(\mathbf{Y}_{i-1})$. \quad \# O($mn$) flops
\STATE $\mathbf{Q} = \dot{\mathbf{A}}\mathbf{V}_{i-1}\ \ \ \in \mathbb{R}^{m\times r}$ \quad \# O($mnr$) flops
\STATE $\mathbf{K} = \mathbf{U}_{i-1}\mathbf{S}_{i-1} + \mathbf{Q}\ \ \ \in \mathbb{R}^{m\times r}$ \quad \# O($mr^2$) flops
\STATE Perform QR-factorization to $\mathbf{K}$:
\ \ $[\mathbf{U}_i,\mathbf{S}_{i-1+1/3}] = \text{QR}(\mathbf{K}) $ \quad \# O($mr^2$) flops
\STATE $\mathbf{S}_{i-1+2/3} = \mathbf{S}_{i-1+1/3} - \mathbf{U}^T_{i}\mathbf{Q}\ \ \ \in \mathbb{R}^{r\times r}$ \quad \# O($mr^2$) flops
\STATE $\mathbf{L} = \mathbf{V}_{i-1}\mathbf{S}_{i-1+2/3}^T + \dot{\mathbf{A}}^T\mathbf{U}_i\
\ \ \in \mathbb{R}^{n\times r} $ \quad \# O($mnr$) flops
\STATE Perform QR-factorization to $\mathbf{L}$:
\ \ $[\mathbf{V}_i,\mathbf{S}_i^T] = \text{QR}(\mathbf{L}) $ \quad \# O($nr^2$) flops
\IF {($\sigma_{\text{min}}(\mathbf{V}_{i-1}^T\mathbf{V}_i) > \tau $)} \quad \# O($nr^2$) flops
\STATE break.
\ENDIF
\ENDFOR
\STATE Perform the SVD $\mathbf{S}_T = \mathbf{U}_s\mathbf{D}\mathbf{V}_s^T$; then $\mathbf{M}_r = (\mathbf{U}_T\mathbf{U}_s)\mathbf{D}(\mathbf{V}_T\mathbf{V}_s)^T$.
\end{algorithmic}
\end{algorithm}
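For readers who prefer running code, a direct NumPy transcription of Algorithm~\ref{alg:sos-LRMC} might look as follows (our sketch; it forms $\dot{\mathbf{A}}$ densely for clarity, whereas an efficient implementation would only compute its products with $\mathbf{U}$ and $\mathbf{V}$):
\begin{verbatim}
import numpy as np

def sni_truncated_svd(M, r, tau=1.0 - 1e-12, T=100, seed=0):
    """Splitting numerical integration (Algorithm 1), NumPy sketch."""
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((m, r)))
    V, _ = np.linalg.qr(rng.standard_normal((n, r)))
    S = np.eye(r)                          # simple initialization Y_0
    for _ in range(T):
        gradA = M - U @ S @ V.T            # Adot = -grad f(Y_{i-1})
        Q = gradA @ V                      # step O
        K = U @ S + Q
        V_prev = V
        U, S13 = np.linalg.qr(K)           # fresh U_i
        S23 = S13 - U.T @ Q                # step B, using the fresh U_i
        L = V_prev @ S23.T + gradA.T @ U   # step A
        V, St = np.linalg.qr(L)
        S = St.T
        if np.linalg.svd(V_prev.T @ V, compute_uv=False).min() > tau:
            break
    Us, D, Vst = np.linalg.svd(S)          # S_T = Us diag(D) Vs^T
    return U @ Us, D, V @ Vst.T            # M_r = (U Us) diag(D) (V Vs)^T
\end{verbatim}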
\subsection{Application in low-rank matrix completion}
The matrix completion problem is to recover a low-rank matrix from
a few observations of this matrix.
In the fixed-rank formulation of matrix completion, we modify Problem~\eqref{eqn:fix_manifold} into
\begin{equation*}
\label{eqn:fix_completion}
\begin{aligned}
& \underset{\mathbf{Y}}{\arg\min} \frac{1}{2}\Vert P_\Omega({\mathbf{M}})- P_\Omega({\mathbf{Y}}) \Vert_F^2\\
& \text{s.t.}\ \mathbf{Y} \in \mathcal{M}_r,
\end{aligned}
\end{equation*}
where $\Omega$ is the index set of observed entries and $P_\Omega$ is the corresponding sampling operator.
Naturally, our splitting numerical integration method applies to this scenario, with the only modification being that the objective function becomes
\begin{equation}
\label{eqn:object2}
f_1(\mathbf{Y}) = \frac{1}{2}\Vert P_\Omega({\mathbf{M}})- P_\Omega({\mathbf{Y}}) \Vert_F^2.
\end{equation}
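Computationally, the only change is in the gradient computation (Step 2 of Algorithm~\ref{alg:sos-LRMC}); a minimal sketch of ours, where \texttt{Omega} is a hypothetical boolean mask of observed entries, reads:
\begin{verbatim}
import numpy as np

def masked_grad(M, U, S, V, Omega):
    """Adot = -grad f1(Y) = P_Omega(M) - P_Omega(U S V^T)."""
    return np.where(Omega, M - U @ S @ V.T, 0.0)
\end{verbatim}
In practice this matrix is sparse with support $\Omega$ and should be stored accordingly.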
The convergence property then extends to the partially observed case.
In the following theorem, we prove that the objective function decreases monotonically until the convergence condition is reached.
\begin{theorem}
\label{thm:convergence}
Let $\{\mathbf{Y}_i \colon i=0, 1, \ldots\}$ be the sequence generated by splitting numerical integration in Algorithm~\ref{alg:sos-LRMC} under an arbitrary observation index set $\Omega$.
Then the objective function $f_1$ defined in Eqn.~(\ref{eqn:object2}) decreases monotonically; that is,
\begin{equation*}
\label{eqn:thmpartial7}
\begin{aligned}
f_1(\mathbf{Y}_{i-1}) > f_1(\mathbf{Y}_i).
\end{aligned}
\end{equation*}
\end{theorem}
\section{Empirical Evaluation}
\label{sec:experiment}
In this section, we conduct an empirical analysis of the splitting numerical integration method. First, we analyze its performance
for low-rank matrix approximation on simulated datasets. Then we validate its performance for low-rank matrix completion on a set of real data~\cite{fu2019ddl,fu2021mimosa}.
\subsection{Low rank matrix approximation}
We evaluate splitting numerical integration on low-rank matrix approximation, comparing it with several popular baseline methods:
the power method~\cite{gu2015subspace,fu2021probabilistic}, randomized SVD (RSVD)~\cite{halko2011finding} and dynamical low-rank approximation~\cite{lubich2007dynamical}. The primary goal is to assess the approximation accuracy on three simulated datasets.
In particular,
three target matrices with different ranks are randomly generated.
The error is measured by $\Vert\mathbf{U}\mathbf{S}\mathbf{V}^T-\mathbf{M}_r\Vert_F/\Vert\mathbf{M}_r\Vert_F$.
Since the complexity of all these approaches is $O(mnr)$, their runtimes are similar and are not reported.
Each trial is repeated three independent times,
and the average errors are reported in Table~\ref{table:approx}.
We observe that our splitting numerical integration method has a clear accuracy advantage over the baseline methods.
In addition, we test a special initialization in which the column spaces of $\mathbf{U}_0$ and $\mathbf{V}_0$ lie in the orthogonal complements of those of $\mathbf{U}_r$ and $\mathbf{V}_r$, respectively.
Similar accuracy is obtained, indicating that our splitting numerical integration method is insensitive to initialization.
\begin{table*}[]
\centering
\caption{Performance of all methods on low-rank matrix approximation. DLRA denotes dynamical low-rank approximation and RSVD denotes randomized SVD. Note that ``(20K,20K,20K)'' gives the numbers of rows and columns and the rank, respectively.
}
\label{table:approx}
\begin{tabular}{|l|c|c|c|}
\hline
Method & A:(20K,20K,20K) & B:(20K,20K,2K) & C:(20K,20K,200) \\ \hline
DLRA & 2.8e-03 & 1.0e-03 & 1.2e-03 \\ \hline
RSVD & 9.3e-02 & 6.0e-02 & 7.2e-02 \\ \hline
power & 2.5e-08 & 4.7e-07 & 7e-08 \\ \hline
SNI & 2.3e-11 & 2.1e-10 & 1.4e-10 \\ \hline
\end{tabular}
\end{table*}
\subsection{Low rank matrix completion}
\begin{table*}[tbp]
\centering
\small
\caption{Results on recommendation systems measured in terms of the RMSE.
`-' denotes an absent result, meaning that the corresponding algorithm fails on the task due to memory or running-time issues.}
\label{table:resultRecommend}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
Data set & Soft-Impute & ALS & GECO & LMaFit & RP & ScGrass & LRGeomCG & SNI \\ \hline
Movielens 100K & 0.9026 & 0.9696 & 0.9528 & 1.0821 & 0.9508 & 0.9502 & 0.9643 & 0.9501 \\ \hline
Movielens 1M & 0.9127 & 0.9159 & 0.8601 & {0.8972} & 0.8590 & 0.8723 & 0.8934 & {0.8612} \\ \hline
Movielens 10M & 0.8915 & 0.8726 & 0.8241 & 0.8921 & 0.8290 & 0.8991 & 0.8779 & {0.823} \\ \hline
Netflix & 0.9356 & 0.9501 & 0.8738 & 0.9247 & {0.8601} & 0.9232 & 0.8723 & {0.8612} \\ \hline
Yahoo Music & 24.77 & 24.59 & - & 26.43 & 23.93 & - & 24.09 & {22.86} \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[]
\centering
\small
\caption{Running time (in seconds) of all methods on recommendation systems.
}
\label{table:resultRecommend2}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
Data set & Soft-Impute & ALS & GECO & LMaFit & RP & ScGrass & LRGeomCG & SNI \\ \hline
Movielens 100K & 2.56 & 0.49 & 2.90 & 0.230 & 0.21 & 0.92 & 0.99 & 0.094 \\ \hline
Movielens 1M & 22.81 & 5.61 & 176.11 & 1.412 & 1.00 & 50.11 & 15.23 & 0.94\\ \hline
Movielens 10M & 675.11 & 88.40 & $>10^3$ & 159.80 & 147.34 & $>10^3$ & 313.30 & 47.93 \\ \hline
Netflix & $>5\times 10^3 $ & 1189.47 & $>10^4$ & 345.00 & 744.43 & $>5\times10^3 $ & 3823.13 & 350.38 \\ \hline
Yahoo Music & $>5\times 10^4 $ & 8522.23 & - & 1239.56 & 1858.43 & - & 4043.22 & 236.32 \\ \hline
\end{tabular}
\end{table*}
We now conduct an empirical analysis of our splitting numerical integration method on the low-rank matrix completion (LRMC) problem.
To show its efficiency and effectiveness, we compare it with a number of baseline methods,
including Soft-Impute~\cite{mazumder2010spectral},
ALS (Soft-Impute Alternating Least Squares)~\cite{hastie2014matrix},
GECO (Greedy Efficient Component Optimization)~\cite{shalev2011large},
LMaFit (Low Rank Matrix Fitting)~\cite{wen2010low},
RP (Riemann Pursuit for matrix recovery)~\cite{tan2014riemannian},
ScGrass (Scaled Gradient on Grassmann Manifold)~\cite{scaled}, and
LRGeomCG (Low rank Geometric Conjugate Gradient)~\cite{vander}.
The code for all of these methods is available online, e.g.,
Soft-Impute and ALS \footnote{http://web.stanford.edu/~hastie/pub.htm},
GECO\footnote{http://www.cs.huji.ac.il/~shais/code/index.html},
LMaFit\footnote{http://lmafit.blogs.rice.edu/},
RP and LRGeomCG\footnote{http://www.tanmingkui.com/rp.html}, and
ScGrass\footnote{http://www-users.cs.umn.edu/~thango/}.
These algorithms are considered state-of-the-art for low-rank matrix completion.
We compare the methods on several popular recommendation-system datasets.
We use five publicly available datasets: Movielens 100K, 1M, 10M, Netflix, and Yahoo Music Track 1, to evaluate both the effectiveness and efficiency of our method.
It is worth mentioning that the large-scale datasets (Netflix and Yahoo Music) are used to evaluate the scalability of our method.
Testing error in terms of RMSE (Root-Mean-Square Error) and computational efficiency measured by running time are shown in Table \ref{table:resultRecommend} and Table \ref{table:resultRecommend2}, respectively.
From Table \ref{table:resultRecommend}, we observe that our method achieves better performance than most of the baseline methods in terms of RMSE~\cite{fu2020alpha,fu2019pearl}.
Table \ref{table:resultRecommend2} shows that our method achieves a substantial speedup over almost all baseline methods under the same setting. It is worth mentioning that on large-scale tasks, such as the Yahoo Music dataset, some results are not listed, meaning that the corresponding algorithm cannot handle these cases within a reasonable time, or simply fails.
\bibliographystyle{named}
| {
"timestamp": "2022-02-15T02:34:56",
"yymm": "2202",
"arxiv_id": "2202.06482",
"language": "en",
"url": "https://arxiv.org/abs/2202.06482",
"abstract": "Low rank matrix approximation is a popular topic in machine learning. In this paper, we propose a new algorithm for this topic by minimizing the least-squares estimation over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical gradient descent within the framework of optimization on manifolds. In particular, we reformulate an unconstrained optimization problem on a low-rank manifold into a differential dynamic system. We develop a splitting numerical integration method by applying a splitting integration scheme to the dynamic system. We conduct the convergence analysis of our splitting numerical integration algorithm. It can be guaranteed that the error between the recovered matrix and true result is monotonically decreasing in the Frobenius norm. Moreover, our splitting numerical integration can be adapted into matrix completion scenarios. Experimental results show that our approach has good scalability for large-scale problems with satisfactory accuracy",
"subjects": "Machine Learning (cs.LG); Numerical Analysis (math.NA)",
"title": "Splitting numerical integration for matrix completion",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517437261225,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7089606492162315
} |
https://arxiv.org/abs/2002.02657 | Optimization of Structural Similarity in Mathematical Imaging | It is now generally accepted that Euclidean-based metrics may not always adequately represent the subjective judgement of a human observer. As a result, many image processing methodologies have been recently extended to take advantage of alternative visual quality measures, the most prominent of which is the Structural Similarity Index Measure (SSIM). The superiority of the latter over Euclidean-based metrics have been demonstrated in several studies. However, being focused on specific applications, the findings of such studies often lack generality which, if otherwise acknowledged, could have provided a useful guidance for further development of SSIM-based image processing algorithms. Accordingly, instead of focusing on a particular image processing task, in this paper, we introduce a general framework that encompasses a wide range of imaging applications in which the SSIM can be employed as a fidelity measure. Subsequently, we show how the framework can be used to cast some standard as well as original imaging tasks into optimization problems, followed by a discussion of a number of novel numerical strategies for their solution. | \section{Introduction}
Image denoising, deblurring, and inpainting are only a few examples of standard image processing tasks which are traditionally solved through numerical optimization. In most cases, the objective function associated with such problems is expressed as the sum of a {\em data fidelity term} $f$ and a {\em regularization term} $h$ (or a number thereof) \cite{frjg,Chambolle04,Boyd,BoydADMM}. In particular, considering the desired image estimate $x$ to be a (column) vector in $\mathbb{R}^n$, both $f$ and $h$ are usually defined as non-negative functionals on $\mathbb{R}^n$, in which case the standard form of an optimization-based imaging task is given by
\begin{equation}\label{tasks1}
\min_x \, f(x)+\lambda h(x).
\end{equation}
Here $\lambda > 0$ is a {\em regularization constant} that balances the effects of empirical and prior information on the optimal solution. Specifically, the first (fidelity) term in \eqref{tasks1} forces the solution to ``agree" with the observed data $y$, as it would be the case if one set, e.g., $f(x) = \frac{1}{2} \sum_{i=1}^n |x_i - y_i|^2 = \frac{1}{2} \| x - y \|_2^2$. On the other hand, the prior information is represented by the second (regularization) term in \eqref{tasks1}, which is frequently required to prevent overfitting and/or to render the optimal solution unique and stably computable. For instance, when the optimal solution is expected to be sparse, it is common to set $h(x)=\sum_{i=1}^n |x_i| = \|x\|_1$\cite{frjg,beckteboulle,turlach}.
The convexity and differentiability of the squared Euclidean distance, along with the unparalleled convenience of its numerical handling, are the main reasons for its prevalence as a measure of image proximity. The same applies to the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR)---closely related metrics, which have been extensively used to quantify the visual quality of images and videos. Yet, neither of the above quantitative measures can be considered a good model for the Human Visual System (HVS), as opposed to the Structural Similarity Index (SSIM), originally proposed by Wang {\em et al.} \cite{wang2002, wang2004}. The SSIM index is based upon the assumption that the HVS has evolved to perceive visual distortions as changes in structural information. On the basis of subjective quality assessments involving large databases, SSIM has been generally accepted to be one of the best measures of visual quality and, by extension, of perceptual proximity between images.
Considering the exceptional characteristics of the SSIM mentioned above, the idea of replacing the standard norm-based fidelity measures with SSIM seems rather straightforward. However, the optimization problems thus obtained demand much more careful treatment, especially when the question of existence and uniqueness of their solutions is concerned. The main difficulty here stems from the fact that SSIM is not a convex function.
Notwithstanding the above obstacles, optimization problems employing the SSIM as a data fidelity term $f$ have already been addressed. For instance, in \cite{BrunetVrscayWang10}, the authors address the problem of finding the best approximation of data images in the domain of an orthogonal transform, with the optimization performed with respect to the SSIM measure. In this work, instead of maximizing SSIM, the authors minimize
\begin{equation}\label{txy}
T(x,y) = 1-\text{SSIM}(x, y), \quad \mbox{with } x = \Phi c,
\end{equation}
in which case $\Phi$ is a matrix representing the {\em synthesis} phase of an orthogonal transform (e.g., either Fourier, cosine, or wavelet transform), $c$ is a vector of approximation coefficients, and $y$ is a data image to be approximated. It is important to note that, as opposed to the SSIM, $T(x,y)$ may be considered a measure of structural {\em dissimilarity} between $x$ and $y$.
Based on the above results, Rehman {\em et al.} \cite{Rehman} address the problem of sparse dictionary learning through modifying the original k-SVD method of Elad {\em et al.} in \cite{Elad}. The conceptual novelty of the work in \cite{Rehman} consists in using SSIM instead of $\ell_2$-norm based proximity measures, which has been shown to yield substantial improvements in the performance of the sparse learning as well as its application to a number of image processing tasks, including super-resolution. Another interesting use of the SSIM for denoising images was proposed in \cite{Channappayya}. Here, the authors introduce the statistical SSIM index (statSSIM), an extension of the SSIM to wide-sense stationary random processes. Subsequently, using the proposed statSSIM, the authors have been able to reformulate a number of classical statistical estimation problems, such as adaptive filter design. Moreover, it was shown in \cite{Channappayya} that statSSIM is a {\em quasi-concave} function, which opens up a number of possibilities for its efficient numerical optimization, e.g., using the bisection method \cite{Boyd,Channappayya}. Interestingly enough, what seems to have been omitted in \cite{Channappayya} is the recognition that the SSIM itself is a quasi-concave function \cite{BrunetVrscayWang12}, and hence can be optimized by a number of efficient numerical methods as well. For more examples of exploiting SSIM in imaging sciences, the reader is referred to \cite{Wang12,Rehman13,Brunet2017}, which demonstrate the application of this measure to rate distortion optimization, video coding, and image classification.
Finally, we note that maximization of $\text{SSIM}(x,y)$ is equivalent to minimization of $T(x,y)$ in \eqref{txy}, with $T(x, y) = 0$ if and only if $x = y$. Consequently, many image processing problems can be formulated in the form of a quasi-convex program as given by
\begin{eqnarray}\label{eqregssim}
\min_x &&T\left( \Phi(x), y \right) \\
\text{subject to } && h_i(x) \le 0, \quad i = 1, \ldots, p, \notag
\end{eqnarray}
where $x \in\mathbb{R}^n$ is an optimization variable, $\Phi:\mathbb{R}^n\to\mathbb{R}^m$ is a linear transform, $y \in\mathbb{R}^m$ is a vector of observed data, and $h_i:\mathbb{R}^n\to \mathbb{R}$ are convex inequality constraint functions. With little loss of generality, in this work, we assume $p=1$, in which case the problem in \eqref{eqregssim} can be rewritten in its equivalent {\em unconstrained} (Lagrangian) form as
\begin{equation}\label{regssim}
\min_x \, T(\Phi(x),y)+\lambda h(x),
\end{equation}
which fits the format of \eqref{tasks1}. In what follows, we refer to the above problem as an unconstrained SSIM-based optimization problem, as opposed to its constrained version in \eqref{eqregssim}. Note that both problems could also be viewed as special instances under the umbrella of {\em SSIM-based optimization}.
As opposed to developing specific methods for particular applications, in this paper, we introduce a set of algorithms to solve the general problems (\ref{regssim}) and (\ref{eqregssim}). Subsequently, we demonstrate the performance of these algorithms on several applications, such as total variation (TV) denoising and sparse approximation, and provide comparisons against more traditional approaches.
\section{Structural Similarity Index (SSIM)}
\subsection{Definition}
\label{def}
The SSIM index provides a measure of visual similarity between two images, which could be, for instance, an original image and a distorted version of it. Since it is assumed that the distortionless image is always available, the SSIM is considered a \textit{full-reference} measure of image quality assessment (IQA) \cite{wang2004}. Its definition is based on two assumptions: (i) images are highly structured (that is, pixel values tend to be correlated, especially if they are spatially close), and (ii) the HVS is particularly sensitive to structural information. For these reasons, SSIM assesses similarity by quantifying changes in perceived structural information, which can be expressed in terms of the luminance, contrast, and structure of the compared images. In particular, given two images, $x$ and $y$, let $\mu_x$ and $\mu_y$ denote their mean values. Also, let $\sigma_x$ and $\sigma_y$ be the standard deviations of the images, whose cross-correlation coefficient is equal to $\sigma_{x y}$. Then, discrepancies between the luminance of $x$ and $y$ can be quantified using
\begin{equation}
l(x,y)=\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1},
\end{equation}
where $C_1 > 0$ is added for stability purposes. Note that $l(x,y)$ is sensitive to relative (rather than absolute) changes of luminance, which makes it consistent with Weber's law---a model for light adaptation in the HVS \cite{wang2004}.
\noindent
Further, the contrast of $x$ and $y$ can be compared using
\begin{equation}
c(x,y)=\frac{2\sigma_x\sigma_y+C_2}{\sigma_x^2+\sigma_y^2+C_2},
\end{equation}
where, as before, $C_2 > 0$ is added to prevent division by zero. It is interesting to note that, in response to a change in contrast, $c(x,y)$ is more sensitive when the base contrast is low than when it is high---a characteristic behaviour of the HVS.
\noindent
Finally, the structural component of SSIM is defined as
\begin{equation}
s(x,y)=\frac{\sigma_{xy}+C_3}{\sigma_x\sigma_y+C_3},
\end{equation}
with $C_3 >0$, which makes it very similar to the normalized cross-correlation.
Once the three components $l$, $c$, and $s$ are computed, SSIM is defined according to
\begin{equation}\label{ssim}
\text{SSIM}(x,y)=l(x,y)^\alpha c(x,y)^\beta s(x,y)^\gamma,
\end{equation}
where $\alpha>0$, $\beta>0$ and $\gamma>0$ control the relative influence of their respective components. In \cite{wang2004}, the authors simplify (\ref{ssim}) by setting $\alpha=\beta=\gamma=1$ and $C_3=C_2/2$. This leads to the following well known formula
\begin{equation}
\text{SSIM}(x,y)=\left(\frac{2\mu_x\mu_y+C_1}{\mu_x^2+\mu_y^2+C_1}\right)\left(\frac{2\sigma_{xy}+C_2}{\sigma_x^2+\sigma_y^2+C_2}\right).
\label{stassim}
\end{equation}
This definition of the SSIM will be employed for the remainder of the paper.
We close this section by pointing out that the statistics of natural images vary greatly across their spatial domain, in which case it makes sense to replace the {\em global} means $\mu_x$, $\mu_y$, variances $\sigma_x$, $\sigma_y$, and cross-correlation $\sigma_{x y}$ by their {\em localized} versions computed over $N$ (either distinct or overlapping) local neighbourhoods. In this case, using the localized statistics would result in $N$ values of the SSIM, $\{ {\rm SSIM}_{i}(x_i,y_i)\}_{i=1}^N$, which can be averaged to yield a {\em mean} SSIM index (MSSIM) \cite{wang2004}, which is a more frequently used metric in practice.
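For reference, a direct NumPy transcription of \eqref{stassim} with global statistics might read as follows (our sketch; the default constants are the conventional choices $C_1=(0.01)^2$ and $C_2=(0.03)^2$ for images with values in $[0,1]$, and the MSSIM is obtained by averaging this quantity over local windows):
\begin{verbatim}
import numpy as np

def ssim_global(x, y, C1=0.01**2, C2=0.03**2):
    """SSIM computed from global image statistics (two-factor form)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # sigma_x^2, sigma_y^2
    sxy = ((x - mx) * (y - my)).mean()      # sigma_xy
    luminance = (2 * mx * my + C1) / (mx**2 + my**2 + C1)
    contrast_structure = (2 * sxy + C2) / (vx + vy + C2)
    return luminance * contrast_structure
\end{verbatim}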
\subsection{A normalized metric yielded by the SSIM}
Let $x$ and $y$ be (column) vectors in $\mathbb{R}^n$. In the special case when $x$ and $y$ have equal means, i.e., $\mu_x = \mu_y$, the luminance component $l(x,y)$ in \eqref{stassim} is equal to one, therefore \eqref{stassim} is reduced to
\begin{equation}
\text{SSIM}(x,y)=\frac{2\sigma_{xy}+C_2}{\sigma_x^2+\sigma_y^2+C_2}.
\label{varssim}
\end{equation}
This less cumbersome version of the SSIM can be simplified even further if both $x$ and $y$ have zero mean, i.e., $\mu_x = \mu_y = 0$. In this case, the (biased) sample statistics reduce to $\sigma_{xy}=x^Ty/n$ and $\sigma_x^2=\|x\|_2^2/n$, so multiplying the numerator and denominator of \eqref{varssim} by $n$ shows that
\begin{equation}\label{ssimple}
\text{SSIM}(x,y)=\frac{2x^Ty+C}{\|x\|_2^2+\|y\|_2^2+C}, \quad C = C_2 n,
\end{equation}
with the associated dissimilarity index $T(x,y)$ thus becoming
\begin{equation}\label{dssimple}
T(x,y) = 1-\text{SSIM}(x,y)=\frac{\|x-y\|^2_2}{\|x\|^2_2+\|y\|_2^2+C}.
\end{equation}
Note that, if $C = 0$, $0 \leq T(x,y) \leq 2$, while $T(x,y)=0$ if and only if $x=y$.
$T(x,y)$ given by \eqref{dssimple} is an example of a (squared) normalized metric, which has been discussed in \cite{Brunet, BrunetVrscayWang12}. Thus, while $\text{SSIM}(x,y)$ tells us about how correlated or similar $x$ and $y$ are, $T(x,y)$ gives us a sense of how far $x$ is from a given observation $y$ (in the normalized sense). Moreover, since in the majority of optimization problems the fidelity term quantifies the {\em distance} between a model/estimate and its observation, it seems more suitable for us to proceed with minimization of $T(x,y)$.
We finally note that, throughout the rest of the paper, unless otherwise stated, we will be working with zero-mean images, thus using the definitions of SSIM as given in \eqref{ssimple} and \eqref{dssimple}. This simplification, however, is by no means restrictive. Indeed, many algorithms of modern image processing are implemented under Neumann boundary conditions, thus preserving the mean brightness (or, equivalently, mean value) of input images. Hence, it is rarely a problem to subtract the mean value before the processing starts and to add it back to the final result.
\subsection{Quasiconvexity and quasiconcavity}
An interesting property of the dissimilarity measure $T$ is that it is quasiconvex over a half-space of $\mathbb{R}^n$. This can be easily proved by using the fact that a function $f:\mathbb{R}^n\to\mathbb{R}$ is quasiconvex if its domain and all its sub-level sets
\begin{equation}
S_\alpha=\{x\in\textbf{dom}~f~|~f(x)\leq\alpha\}, \quad \alpha\in\mathbb{R},
\label{level}
\end{equation}
are convex \cite{Boyd}.
\begin{theorem}
\cite{Otero}
Let $y\in\mathbb{R}^n$ be fixed. Then, $T(x,y)$ is quasiconvex if $x^Ty\geq-\frac{C}{2}$, where $C$ is the stability constant of the dissimilarity measure $T$.
\end{theorem}
Note that, from the relation $T(x,y)-1=-\text{SSIM}(x,y)$, it immediately follows that ${\rm SSIM}(x,y)$ is a quasiconcave function over the halfspace defined by $x^T y\geq-\frac{C}{2}$. Moreover, using the definition of quasiconcavity, the same line of arguments as in \cite{Otero} can be used to show that $T(x,y)$ (resp. ${\rm SSIM}(x,y)$) is a quasiconcave (resp. quasiconvex) function, when restricted to the half-space defined by $x^Ty\leq-\frac{C}{2}$.
\section{Constrained SSIM-based Optimization}
A standard quasiconvex optimization problem is defined as follows:
\begin{eqnarray}
\min_x~~&&f(x)\nonumber\\
\label{quasi}
\text{subject to}~~&& h_i(x)\leq 0,~~i=1,\dots,m\\
&&Ax=b\nonumber,
\end{eqnarray}
where $f:\mathbb{R}^n\to\mathbb{R}$ is a quasiconvex cost function, $Ax=b$ (with $A \in \mathbb{R}^{p\times n}$ and $b \in \mathbb{R}^p$) represents a set of $p$ affine equality constraints, and $h_i$ are a set of $m$ convex inequality constraint functions. A standard approach to solving \eqref{quasi} is by means of the {\em bisection method}, which is designed to converge to the desired solution up to a certain accuracy \cite{Boyd}. This method reduces direct minimization of $f$ to a sequence of convex feasibility problems. Using the above approach it is therefore possible to take advantage of the quasiconvexity of $T(x,y)$ to cast SSIM-based optimization problems as quasiconvex optimization problems, which can be subsequently solved by efficient numerical means.
In particular, in what follows, we consider the following quasiconvex optimization problem
\begin{eqnarray}
\min_x~~&& \, T(\Phi x, y) \nonumber\\
\label{conssim}
\text{subject to}~~&& h_i(x)\leq 0,~~i=1,\dots,m\\
&&Ax=b\nonumber,
\end{eqnarray}
where $\Phi \in \mathbb{R}^{m\times n}$ represents a linear transform and $y \in \mathbb{R}^m$ is a data image. Although the dissimilarity index $T$ above is generally defined as given by \eqref{dssimple}, in practical settings, it is common to deal with non-trivial data images, suggesting $\| y \|_2^2 > 0$. In this case, the stability constant $C$ in \eqref{dssimple} can be set to zero, thereby leading to a simplified expression for $T$ as given by
\begin{equation}
\label{nose}
T(\Phi x ,y)=\frac{\|\Phi x-y\|^2_2}{\|\Phi x\|^2_2+\|y\|_2^2}.
\end{equation}
Note that, for $\|y\|_2 \neq 0$, the above expression is well defined and differentiable for all $x\in\mathbb{R}^n$.
As it was mentioned earlier in this section, the quasiconvex problem \eqref{conssim} can be efficiently solved by means of the bisection algorithm, a particular version of which we introduce next.
\subsection{Quasiconvex optimization}
Before providing details on the proposed optimization procedure, we note that, in image processing, one usually deals with positive-valued vectors $x$. Moreover, it is also frequently the case that the matrix $\Phi$ obeys the property of producing positive-valued results when multiplied by positive-valued vectors. In other words, provided $x \succeq 0$, the above property guarantees that $\Phi x \succeq 0$, which is very typical for situations when $\Phi$ describes the effects of, e.g., blurring and/or subsampling. In such cases, it is reasonable to assume that $(\Phi x)^T y \ge 0$ (provided, of course, $y \succeq 0$ as well), thereby allowing us to consider $T(\Phi x, y)$ as a quasiconvex function of $x$.
Given the latter, in general, a solution of \eqref{conssim} can be found by solving a sequence of feasibility problems \cite{Boyd}. These problems are formulated using a family of convex inequalities, which are defined by means of a family of functions $\phi_\alpha:\mathbb{R}^n\to\mathbb{R}$ such that
\begin{equation}
f(x)\leq\alpha\iff\phi_\alpha(x)\leq0,
\label{iff}
\end{equation}
with the additional property that $\phi_\beta(x)\leq\phi_\alpha(x)$ for all $x$, whenever $\alpha\leq\beta$. In particular, one can show that all the above properties are satisfied by
\begin{equation}
\phi_\alpha(x)=(1-\alpha)\|\Phi x-y\|_2^2-2\alpha \, x^T \Phi^T y.
\label{convexfun}
\end{equation}
Consequently, the feasibility problems then assume the following form
\begin{eqnarray}
\text{find}~~&& x\nonumber\\
\label{feasible}
\text{subject to}~~&& \phi_\alpha(x)\leq 0\\
&&h_i(x)\leq 0,~~i=1,\dots,m\nonumber\\
&&Ax=b\nonumber.
\end{eqnarray}
Let $p^\ast$ be the optimal value of \eqref{conssim}. Then, if \eqref{feasible} is feasible, we have that $p^\ast\leq\alpha$, whereas for the infeasible case, $p^\ast>\alpha$.
Using the fact that $0\leq T(\Phi x,y)\leq2$, and defining $\mathbf{1}$ and $\mathbf{0}$ to be vectors in $\mathbb{R}^n$ whose entries are all equal to one and zero, respectively, we propose the following algorithm for solving (\ref{conssim}).
\noindent
\begin{framed}
\begin{center}
{\sc\textbf{\textsc{Algorithm I:}} Bisection method for constrained SSIM-based optimization}
\begin{tabbing}
\textbf{initialize} $x=\mathbf{0}$, $l=0$, $u=2$, $\epsilon>0$;\\
\textbf{data preprocessing} $y=y-\frac{1}{n}(\mathbf{1}^T y)\,\mathbf{1}$;\\
\textbf{while} $u-l>\epsilon$ \textbf{do}\\
\hspace{.2in}$\alpha:=(l+u)/2$;\\
\hspace{.2in}Solve (\ref{feasible});\\
\hspace{.2in}\textbf{if} (\ref{feasible}) is feasible \textbf{then}\\
\hspace{.4in} $u:=\alpha$;\\
\hspace{.2in}\textbf{elseif} $\alpha=1$ \textbf{then}\\
\hspace{.4in}(\ref{conssim}) cannot be solved, \textbf{break};\\
\hspace{.2in}\textbf{else}\\
\hspace{.4in}$l:=\alpha$;\\
\hspace{.2in}\textbf{end if}\\
\textbf{end while}\\
\textbf{return} $x$.
\end{tabbing}
\end{center}
\end{framed}
Notice that this method finds a solution $x^\ast$ such that $l\leq T(\Phi x^\ast,y)\leq l+\epsilon$ in exactly $\lceil\log_2((u-l)/\epsilon)\rceil$ iterations \cite{Boyd}, provided that such a solution lies in the quasiconvex region of $T(\Phi x,y)$, in which case the algorithm converges to the optimal value $p^\ast$ to a precision controllable by the value of $\epsilon$. Once again, we note that the optimal solution $x^\ast$ can reasonably be assumed to lie within the region of quasiconvexity of $T$, since one is normally interested in reconstructions that are positively correlated with the given observation $y$.
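To fix ideas, consider the unconstrained special case of \eqref{conssim} (no inequality or equality constraints) with $C=0$. Each feasibility problem \eqref{feasible} then reduces to checking whether the minimum of the convex quadratic $\phi_\alpha$ in \eqref{convexfun} is nonpositive, and Algorithm I can be sketched in a few lines (our NumPy illustration, assuming $\Phi^T\Phi$ is nonsingular):
\begin{verbatim}
import numpy as np

def bisection_ssim(Phi, y, eps=1e-6):
    """Bisection for min_x T(Phi x, y) with no additional constraints."""
    y = y - y.mean()                          # zero-mean preprocessing
    G, b = Phi.T @ Phi, Phi.T @ y
    l, u, x = 0.0, 2.0, np.zeros(Phi.shape[1])
    while u - l > eps:
        alpha = 0.5 * (l + u)
        if alpha >= 1.0:                      # phi_1(0) = 0, so alpha = 1
            u = alpha                         # is always feasible
            continue
        # Minimizer of phi_alpha: (1 - alpha) G x = b.
        xa = np.linalg.solve(G, b / (1.0 - alpha))
        phi = (1 - alpha) * np.sum((Phi @ xa - y) ** 2) - 2 * alpha * (xa @ b)
        if phi <= 0:                          # feasible: T(Phi xa, y) <= alpha
            u, x = alpha, xa
        else:
            l = alpha
    return x
\end{verbatim}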
\section{Unconstrained SSIM-based Optimization}
In many practical applications, the feasible set of SSIM-based optimization can be described by a single constraint, in which case the optimization problem in \eqref{conssim} can be rewritten in its equivalent unconstrained (Lagrangian) form, that is,
\begin{equation}
\label{unicorn}
\min_{x}\, T(\Phi x, y)+\lambda h(x),
\end{equation}
where $\lambda > 0$ is a regularization parameter and $h: \mathbb{R}^n \to \mathbb{R}$ is a regularization functional, which is often defined to be convex. As usual, when $\lambda$ is strictly greater than zero, the regularization term forces the optimal solution to reside within a predefined ``target'' space, thus letting one use {\em a priori} information on the properties of the desired solution. This is particularly important in the case where $\Phi^T \Phi$ is either rank-deficient or poorly conditioned, in which case the regularization can help to render the solution well-defined and stably computable. However, since the first term in \eqref{unicorn} is not convex, the entire cost function is not convex either. Consequently, the existence of a unique global minimizer of \eqref{unicorn} cannot be guaranteed in general. With this restriction in mind, however, it is still possible to devise efficient numerical methods capable of converging to either a locally or globally optimal solution, as will be shown in the following section of the paper.
\subsection{ADMM-based Approach}
In order to solve the problem in \eqref{unicorn}, we follow an approach based on the Alternating Direction Method of Multipliers (ADMM). This methodology is convenient since it allows us to solve a wide variety of unconstrained SSIM-based optimization problems by splitting the cost function into simpler subproblems that are easier to solve. Moreover, as will be seen in Section \ref{adssimm}, in several cases these simpler problems have closed-form solutions which may be computed very quickly. This is, of course, advantageous since it improves the performance of the resulting algorithm in terms of execution time.
Before going any further, it is worthwhile to mention that one of these simpler optimization problems will usually have the following form,
\begin{equation}
\label{partquad}
\min_x T(\Phi x, y)+\lambda \|x-z\|_2^2,
\end{equation}
where both $y\in\mathbb{R}^m$ and $z\in\mathbb{R}^n$ are given. We consider this special case separately in the following section.
\subsubsection{Quadratic regularization}
\label{quadratic}
Since the cost function in \eqref{partquad} is twice-differentiable, root-finding algorithms such as the \textit{generalized Newton's method} \cite{Ortega} can be employed to find a local zero-mean solution $x^\ast$.
To this end, we first compute the gradient of \eqref{partquad}; setting it to zero shows that $x^\ast$ satisfies
\begin{equation}
[s(x^\ast)\Phi^T\Phi+\lambda r(x^\ast)I]x^\ast-\lambda r(x^\ast)z-\Phi^Ty=0.
\end{equation}
Here, $s(x):=1-T(\Phi x,y)$, $r(x):=\|\Phi x\|_2^2+\|y\|_2^2+C$, and $I\in\mathbb{R}^{n\times n}$ is the identity matrix. If we define the following function $f:X\subset\mathbb{R}^n\to\mathbb{R}^n$,
\begin{equation}
\label{thef}
f(x)=[s(x)\Phi^T\Phi+\lambda r(x)I]x-\lambda r(x)z-\Phi^Ty,
\end{equation}
then $x^\ast$ is a (zero-mean) vector in $\mathbb{R}^n$ such that $f(x^\ast)=0$.
Under certain conditions, the convergence of this method is guaranteed
by the \textit{Newton-Kantorovich theorem}, stated below.
\begin{theorem}
\cite{Ortega}
Let $X$ and $Y$ be Banach spaces and $g:A\subset X\to Y$. Assume $g$ is Fr\'{e}chet differentiable on an open convex set $D\subset A$ and that
\begin{equation}
\|g'(x)-g'(z)\|\leq K\|x-z\|, ~\text{for all $x,z\in D$}.
\end{equation}
Also, for some $x_0\in D$, suppose that $g'(x_0)^{-1}$ is defined on all of $Y$ and that
\begin{equation}
h:=K\beta\eta\leq\frac{1}{2},
\end{equation}
where $\|g'(x_0)^{-1}\|\leq\beta$ and $\|g'(x_0)^{-1}g(x_0)\|\leq\eta$. Set
\begin{equation}
K_1=\frac{1-\sqrt{1-2h}}{K\beta} ~\text{and }K_2=\frac{1+\sqrt{1-2h}}{K\beta},
\end{equation}
and assume that $S:=\{x:\|x-x_0\|\leq K_1\}\subset D$. Then, the Newton iterates
\begin{equation}
x_{k+1}=x_k-g'(x_k)^{-1}g(x_k), ~k\in\mathbb{N},
\end{equation}
are well defined, lie in $S$ and converge to a solution $x^\ast$ of $g(x)=0$, which is unique in $D\cap\{x:\|x-x_0\|\leq K_2\}$. Moreover, if $h<\frac{1}{2}$, the order of convergence is at least quadratic.
\end{theorem}
In our particular case, the Fr\'{e}chet derivative of $f$ is its Jacobian, which we denote as $J_f$,
\begin{equation}
J_f(x)=\Phi^T\Phi x(\nabla s(x))^T+s(x)\Phi^T\Phi+\lambda (x-z)(\nabla r(x))^T+\lambda r(x)I.
\end{equation}
By the previous theorem, convergence is guaranteed on any open convex subset $\Omega$ of $X$, where $X\subset\mathbb{R}^n$, as long as the initial guess $x_0$ satisfies the condition
\begin{equation}
L\|J_f(x_0)^{-1}\|\|J_f(x_0)^{-1}f(x_0)\|\leq\frac{1}{2} \, .
\label{inicon}
\end{equation}
Here, $J_f(\cdot)^{-1}$ denotes the inverse of $J_f(\cdot)$, and $L>0$ is a Lipschitz constant of $J_f(\cdot)$. Indeed, it can be proved that $J_f(\cdot)$ is Lipschitz continuous on any open convex subset $\Omega\subset X$.
\begin{theorem}
Let $f:X\subset\mathbb{R}^n\to\mathbb{R}^n$ be defined as in Eq. (\ref{thef}). Then, its Jacobian is Lipschitz continuous on any open convex set $\Omega\subset X$; that is, there exists a constant $L>0$ such that for any $x,w\in\Omega$,
\begin{equation}
\|J_f(x)-J_f(w)\|_F\leq L\|x-w\|_2 \, .
\end{equation}
Here, $\|\cdot\|_F$ denotes the Frobenius norm\footnote{The Frobenius norm of an $m\times n$ matrix $A$ is defined as $\|A\|_F=\sqrt{\sum_{i=1}^m\sum_{j=1}^n|a_{ij}|^2}$.} and
\begin{equation}
L = C_1\|\Phi^T\Phi\|_F+\lambda (C_2\|z\|_2+C_3),~C_1,C_2,C_3>0.
\end{equation}
\begin{proof}
See Appendix.
\end{proof}
\end{theorem}
From this discussion, we propose the following algorithm for solving the problem in \eqref{partquad}.
\begin{framed}
\begin{center}
{\sc\textbf{\textsc{Algorithm II:}} Generalized Newton's method for unconstrained SSIM-based optimization with quadratic regularization}
\begin{tabbing}
\textbf{initialize} choose $x=x_0$ according to (\ref{inicon});\\
\textbf{data preprocessing} $\bar{y}=\frac{1}{n}\mathbf{1}^Ty$, $y=y-\bar{y}\mathbf{1}$;\\
\textbf{repeat}\\
\hspace{.2in}$x:=x-J_f(x)^{-1}f(x)$;\\
\textbf{until} stopping criterion is satisfied\\
\textbf{return} $x$.
\end{tabbing}
\end{center}
\end{framed}
It is worth pointing out that, in some cases, computing the inverse of $J_f(x)$ may be computationally expensive, which, in turn, makes the $x$-update costly. This difficulty can be addressed by updating the variable $x$
in the following manner,
\begin{equation}
J_f(x^k)x^{k+1}=J_f(x^k)x^k-f(x^k).
\end{equation}
Note that this $x$-update involves the solution of a linear system of equations which can be conveniently done by numerical schemes such as the \textit{Gauss-Seidel} method \cite{BoydADMM}.
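As a concrete illustration of this subsection, the generalized Newton iteration for \eqref{partquad} can be sketched as follows (our NumPy code, under the formulas derived above; each step solves the linear system rather than inverting $J_f$):
\begin{verbatim}
import numpy as np

def newton_ssim_quad(Phi, y, z, lam, C=0.0, x0=None, iters=50, tol=1e-12):
    """Generalized Newton sketch for min_x T(Phi x, y) + lam ||x - z||_2^2."""
    y = y - y.mean()                              # zero-mean preprocessing
    n = Phi.shape[1]
    x = np.zeros(n) if x0 is None else np.asarray(x0, float).copy()
    G, b = Phi.T @ Phi, Phi.T @ y
    for _ in range(iters):
        Gx = G @ x
        r = x @ Gx + y @ y + C                    # r(x) = ||Phi x||^2 + ||y||^2 + C
        s = 1.0 - np.sum((Phi @ x - y) ** 2) / r  # s(x) = 1 - T(Phi x, y)
        f = s * Gx + lam * r * (x - z) - b        # the function f defined above
        grad_r = 2.0 * Gx
        grad_s = -(2.0 / r) * (s * Gx - b)        # grad s = -grad T
        J = (np.outer(Gx, grad_s) + s * G
             + lam * np.outer(x - z, grad_r) + lam * r * np.eye(n))
        dx = np.linalg.solve(J, f)                # J_f(x) dx = f(x)
        x -= dx
        if np.linalg.norm(dx) < tol:
            break
    return x
\end{verbatim}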
\subsubsection{ADMM-based Algorithm}
\label{adssimm}
The problem in \eqref{unicorn} can be solved efficiently by taking advantage of the fact that the objective function is separable. Let us introduce a new variable, $z\in\mathbb{R}^n$. Also, let the independent variable of the regularizing term $h$ be equal to $z$. We then define the following optimization problem,
\begin{eqnarray}
\label{uniadmm}
\min_{x,z}&&T(\Phi x, y)+\lambda h(z),\\ \nonumber
\text{subject to}&& x-z=0.
\end{eqnarray}
Clearly, \eqref{uniadmm} is equivalent to problem \eqref{unicorn}, thus a solution of \eqref{uniadmm} is also a minimizer of the original optimization problem \eqref{unicorn}. Furthermore, notice that \eqref{uniadmm} is presented in ADMM form \cite{BoydADMM}, implying that an analogue of the standard ADMM algorithm can be employed to solve it.
As is customary in the ADMM methodology, let us first form the corresponding augmented Lagrangian of \eqref{uniadmm},
\begin{equation}
\label{Lagran}
L_\rho(x,z,v)=T(\Phi x, y)+\lambda h(z)+v^T(x-z)+\frac{\rho}{2}\|x-z\|_2^2.
\end{equation}
This expression can be rewritten in its more convenient scaled form by combining the linear and quadratic terms and scaling the dual variable $v$, yielding
\begin{equation}
\label{Lagrang}
L_\rho(x,z,u)=T(\Phi x, y)+\lambda h(z)+\frac{\rho}{2}\|x-z+u\|_2^2,
\end{equation}
where $u=v/\rho$. We employ this version of Eq. \eqref{Lagran} since the formulas associated with it are usually simpler than their unscaled counterparts \cite{BoydADMM}. As usual, the iterations of the proposed algorithm for solving \eqref{uniadmm} will be the minimization of Eq. \eqref{Lagrang} with respect to variables $x$ and $z$ in an alternate fashion, and the update of the dual variable $u$, which accounts for the maximization of the dual function $g(u)$:
\begin{equation}
g(u):=\inf_{x,z}L_\rho(x,z,u).
\end{equation}
As such, we define the following iteration schemes for minimizing the cost function of the equivalent counterpart of problem \eqref{unicorn}:
\begin{eqnarray}
x^{k+1}&:=&\argmin_x\left(T(\Phi x,y)+\frac{\rho}{2}\|x-z^{k}+u^{k}\|_2^2\right),\\
z^{k+1}&:=&\argmin_z\left(h(z)+\frac{\rho}{2\lambda}\|x^{k+1}-z+u^{k}\|_2^2\right),\\
u^{k+1}&:=&u^k+x^{k+1}-z^{k+1}.
\end{eqnarray}
Observe that the $x$-update can be computed using the method described in the previous section. Furthermore, when $h$ is convex, the $z$-update is equal to the \textit{proximal operator} of $(\lambda/\rho)h$ \cite{Parikh}. Recall that for a convex function $f:\mathbb{R}^n\to\mathbb{R}$ its proximal operator $\mathbf{prox}_f:\mathbb{R}^n\to\mathbb{R}^n$ is defined as
\begin{equation}
\mathbf{prox}_f(v):=\argmin_x\left(f(x)+\frac{1}{2}\|x-v\|_2^2\right) \, .
\end{equation}
It follows that
\begin{equation}
z^{k+1}:=\mathbf{prox}_{\frac{\lambda}{\rho}h}(x^{k+1}+u^k).
\end{equation}
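For instance, in the frequently encountered case of the sparsity-promoting regularizer $h=\|\cdot\|_1$, the proximal operator separates across coordinates and reduces to the well-known soft-thresholding operator: writing $\tau=\lambda/\rho$,
\begin{equation*}
\left[\mathbf{prox}_{\tau\|\cdot\|_1}(v)\right]_i=\operatorname{sign}(v_i)\max\{|v_i|-\tau,\,0\},\quad i=1,\ldots,n.
\end{equation*}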
Taking this into account, we introduce the following algorithm for solving the problem in \eqref{unicorn}.
\begin{framed}
\begin{center}
{\sc\textbf{\textsc{Algorithm III:}} ADMM-based method for unconstrained SSIM-based optimization}
\begin{tabbing}
\textbf{initialize} $x=z=x_0$, $u=\mathbf{0}$;\\
\textbf{data preprocessing} $y=y-\frac{1}{n}(\mathbf{1}^T y)\,\mathbf{1}$;\\
\textbf{repeat}\\
\hspace{.2in}$x:=\argmin_x\left(T(\Phi x,y)+\frac{\rho}{2}\|x-z+u\|_2^2\right)$;\\
\hspace{.2in}$z:=\argmin_z\left(h(z)+\frac{\rho}{2\lambda}\|x-z+u\|_2^2\right)$;\\
\hspace{.2in}$u:=u+x-z$;\\
\textbf{until} stopping criterion is satisfied.\\
\textbf{return} $x$.
\end{tabbing}
\end{center}
\end{framed}
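As a usage example (our sketch, not part of the original algorithmic development), take $h=\|\cdot\|_1$ for sparse approximation. The $x$-update then calls the generalized Newton routine sketched in Section \ref{quadratic} with the quadratic weight $\rho/2$, and the $z$-update is the soft-thresholding operator recalled above:
\begin{verbatim}
import numpy as np

def admm_ssim_l1(Phi, y, lam, rho=1.0, iters=100):
    """Algorithm III sketch with h(x) = ||x||_1 (sparse approximation)."""
    y = y - y.mean()
    n = Phi.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        # x-update: min_x T(Phi x, y) + (rho/2) ||x - (z - u)||_2^2,
        # solved with the generalized Newton sketch from Section 4.1.1.
        x = newton_ssim_quad(Phi, y, z - u, rho / 2.0, x0=x)
        # z-update: prox of (lam/rho) ||.||_1, i.e. soft thresholding.
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        u = u + x - z
    return x
\end{verbatim}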
\subsection{Alternative ADMM-based Approach}
As shown in the previous section, one of the advantages of the ADMM-based approach is that it allows us to solve problem \eqref{unicorn} by solving a sequence of simpler optimization problems. The complexity of the proposed method can be further reduced by reformulating \eqref{unicorn} as the following equivalent optimization problem:
\begin{eqnarray}
\min_{x,w,z}&&T(w, y)+\lambda h(z),\\ \nonumber
\text{subject to}&& \Phi x-w=0, \nonumber\\
&& x-z=0.
\label{altadmm}
\end{eqnarray}
The scaled form of the augmented Lagrangian of this new problem is given by
\begin{eqnarray*}
L_{\rho,\mu}(x,w,z,u,v)&=&T(w, y)+\lambda h(z)+\frac{\rho}{2}\|\Phi x-w+u\|_2^2+\frac{\mu}{2}\|x-z+v\|_2^2\nonumber.
\end{eqnarray*}
Therefore, problem \eqref{altadmm} can be solved by defining the following iterations:
\begin{eqnarray}
x^{k+1}&:=&\argmin_x\left(\frac{\rho}{2}\|\Phi x-w^k+u^{k}\|_2^2+\frac{\mu}{2}\|x-z^{k}+v^{k}\|_2^2\right),\\
w^{k+1}&:=&\argmin_w\left(T(w,y)+\frac{\rho}{2}\|w-\Phi x^{k+1}-u^{k}\|_2^2\right),\label{wup}\\
z^{k+1}&:=&\argmin_z\left(h(z)+\frac{\mu}{2\lambda}\|z-x^{k+1}-v^{k}\|_2^2\right),\\
u^{k+1}&:=&u^k+\Phi x^{k+1}-w^{k+1},\\
v^{k+1}&:=&v^k+ x^{k+1}-z^{k+1}.
\end{eqnarray}
\noindent
Notice that the $x$-update has a closed form solution, which is given by
\begin{equation}
x^{k+1}=(\rho\Phi^T\Phi+\mu I)^{-1}(\rho\Phi^T(w^k-u^k)+\mu(z^k-v^k)),
\end{equation}
where $I\in\mathbb{R}^{n\times n}$ is the identity matrix. Alternatively, observe that the variable $x$ can also be updated by solving the following system of linear equations:
\begin{equation}
(\rho\Phi^T\Phi+\mu I)x^{k+1}=(\rho\Phi^T(w^k-u^k)+\mu(z^k-v^k)).
\end{equation}
This option is convenient when the computation of the inverse of the matrix $\rho\Phi^T\Phi+\mu I$ is expensive. Since the matrix $\rho\Phi^T\Phi+\mu I$ is symmetric and positive-definite we can employ a \textit{conjugate gradient method} to update the primal variable $x$ \cite{Hestenes}.
The $w$-update is a special case (with $\Phi=I$) of the more general problem described in Section \ref{quadratic}; thus the methods discussed in that section can be employed to compute \eqref{wup}.
Finally, the $z$-update is simply the proximal operator of the function $\frac{\lambda}{\mu}h:\mathbb{R}^n\to\mathbb{R}$,
\begin{equation}
z^{k+1}:=\mathbf{prox}_{\frac{\lambda}{\mu}h}(x^{k+1}+v^k),
\end{equation}
provided $h$ is a convex function.
Given the above discussion, we present the following alternative algorithm for solving the problem in \eqref{unicorn}.
\newpage
\begin{framed}
\begin{center}
{\textbf{\textsc{Algorithm IV:}} Alternative ADMM-based method for unconstrained SSIM-based optimization}
\begin{tabbing}
\textbf{initialize} $x=z=x_0$, $w=\Phi x_0$, $u=v=\mathbf{0}$;\\
\textbf{data preprocessing} $y=y-\left(\frac{1}{n}\mathbf{1}^T y\right)\mathbf{1}$;\\
\textbf{repeat}\\
\hspace{.2in}$x:=(\rho\Phi^T\Phi+\mu I)^{-1}(\rho\Phi^T(w-u)+\mu(z-v))$;\\
\hspace{.2in}$w:=\argmin_w\left(T(w,y)+\frac{\rho}{2}\|w-\Phi x-u\|_2^2\right)$;\\
\hspace{.2in}$z:=\argmin_z\left(h(z)+\frac{\mu}{2\lambda}\|z-x-v\|_2^2\right)$;\\
\hspace{.2in}$u:=u+\Phi x-w$;\\
\hspace{.2in}$v:=v+x-z$;\\
\textbf{until} stopping criterion is satisfied.\\
\textbf{return} $x$.
\end{tabbing}
\end{center}
\end{framed}
\section{Applications}
Naturally, different sets of constraints and regularization terms yield different SSIM-based imaging problems. In this section, we review some applications and the corresponding optimization problems associated with them.
\subsection{SSIM with Tikhonov regularization}
A common method for ill-posed problems is \textit{Tikhonov regularization}, also known as \textit{ridge regression}. It is essentially a constrained version of least squares and is found in different fields such as statistics and engineering. It is stated as follows:
\begin{eqnarray}
\label{tiko}
\min_x&&\,\|\Phi x-y\|_2^2\\
\text{subject to}&&\, \|Ax\|_2^2\leq \lambda\nonumber,
\end{eqnarray}
where $A\in\mathbb{R}^{p\times n}$ is called the \textit{Tikhonov matrix}. A common choice for the matrix $A$ is the identity matrix; other choices include a scaled finite approximation of a differential operator or a scaled orthogonal projection \cite{HochstenbachReichel,FuhryReichel,MorigiReichelSgallari}.
The SSIM counterpart of problem (\ref{tiko}) is obtained by replacing the Euclidean fidelity term of problem (\ref{tiko}) by the dissimilarity measure $T(\Phi x,y)$:
\begin{eqnarray}
\min_x&&\,\,T(\Phi x,y)\\
\text{subject to}&&\,\|Ax\|_2^2\leq \lambda\nonumber.
\end{eqnarray}
Notice that this problem can be addressed by employing Algorithm I.
Algorithm II can be employed to solve the unconstrained counterpart of the previous problem, given by
\begin{equation}
\min_{x} \, T(\Phi x,y)+\lambda\|Ax\|_2^2 \, .
\end{equation}
In this particular case, the corresponding iteration schemes
are defined as follows,
\begin{eqnarray}
x^{k+1}&:=&\argmin_x\left(T(\Phi x,y)+\frac{\rho}{2}\|x-z^{k}+u^{k}\|_2^2\right),\\
z^{k+1}&:=&\left(\frac{2\lambda}{\rho}A^TA+I\right)^{-1}(x^{k+1}+u^{k}),\\
u^{k+1}&:=&u^k+x^{k+1}-z^{k+1},
\end{eqnarray}
where $I\in\mathbb{R}^{n\times n}$ is the identity matrix. In some cases, the computation of the inverse of the matrix $\frac{2\lambda}{\rho}A^TA+I$ may be expensive, making it more appropriate to employ alternative methods to compute the $z$-update. For instance, if $p$ is smaller than $n$, it may be convenient to employ the \textit{matrix inversion lemma} \cite{BoydADMM}. Furthermore, notice that the $z$-update is equivalent to solving the following system of linear equations,
\begin{equation}
\left(\frac{2\lambda}{\rho}A^TA+I\right)z^{k+1}=x^{k+1}+u^{k} \, .
\end{equation}
Since the matrix $\frac{2\lambda}{\rho}A^TA+I$ is positive-definite and symmetric, the variable $z$ can be updated efficiently by using a conjugate gradient method.
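For moderately sized problems, the $z$-update also admits a direct implementation. A minimal sketch, assuming $A$ is a dense NumPy array, is the following; for large $p$ or $n$ the conjugate gradient route just described is preferable.
\begin{verbatim}
# Direct z-update for the Tikhonov case:
# solve ((2*lam/rho) A^T A + I) z = x + u.  Illustrative sketch.
import numpy as np

def z_update_tikhonov(A, x, u, lam, rho):
    n = A.shape[1]
    M = (2.0 * lam / rho) * (A.T @ A) + np.eye(n)
    return np.linalg.solve(M, x + u)
\end{verbatim}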
\subsection{SSIM-$\mathbf{\ell^1}$-based optimization}
\label{eleuno}
Other interesting applications emerge when the $\ell_1$ norm is used either as a constraint or a regularization term. For example, by defining the constraint $h(x)=\|x\|_1-\lambda$, we obtain the following SSIM-based optimization problem,
\begin{eqnarray}
\label{ssiml1}
\min_x&& \, T(\Phi x,y) \\
\text{subject to}&& \|x\|_1\leq \lambda\nonumber.
\end{eqnarray}
As expected, the solution of \eqref{ssiml1} can be obtained by means of Algorithm I. Clearly, the unconstrained counterpart of (\ref{ssiml1}) is given by,
\begin{equation}
\min_{x} \, T(\Phi x,y)+\lambda\|x\|_1,
\label{unssiml1}
\end{equation}
which can be solved with Algorithm II. Problem \eqref{unssiml1} can be seen as the SSIM counterpart of the classical regularized least squares method known as LASSO (Least Absolute Shrinkage and Selection Operator) \cite{efron2004,frjg}. The corresponding iteration schemes to solve it are as follows,
\begin{eqnarray}
x^{k+1}&:=&\argmin_x\left(T(\Phi x,y)+\frac{\rho}{2}\|x-z^{k}+u^{k}\|_2^2\right),\\
z^{k+1}&:=&S_{\frac{\lambda}{\rho}}(x^{k+1}+u^k),\\
u^{k+1}&:=&u^k+x^{k+1}-z^{k+1} \, .
\end{eqnarray}
Here, the $z$-update is equal to the proximal operator of the $\ell_1$ norm, also known as the \textit{soft thresholding operator} \cite{efron2004,frjg,BoydADMM}. This is an element-wise operator, which is defined as
\begin{equation}
S_\tau(t)=
\begin{cases}
t-\tau, & t>\tau,\\
0, & |t|\leq\tau,\\
t+\tau, & t<-\tau\, .
\end{cases}
\end{equation}
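In code, the soft thresholding operator reduces to a one-line element-wise expression; the following NumPy sketch is equivalent to the three-case definition above.
\begin{verbatim}
# Element-wise soft thresholding S_tau, i.e., the proximal operator
# of tau*||.||_1.  Equivalent to the case definition in the text.
import numpy as np

def soft_threshold(t, tau):
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)
\end{verbatim}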
Problems \eqref{ssiml1} and \eqref{unssiml1} are appealing because they combine the concepts of similarity and sparseness; they are therefore relevant in applications in which sparsity is the assumed underlying model for images. To the best of our knowledge, optimization of the SSIM along with the $\ell^1$ norm has only been reported in \cite{Otero,Otero2014,OteroIPCV} and in this work.
\subsection{SSIM and Total Variation}
\label{latv}
By employing the constraint $h(x)=\|Dx\|_1-\lambda$, where $D\in\mathbb{R}^{n\times n}$ is a difference matrix and $\Phi$ is the identity matrix $I\in\mathbb{R}^{n\times n}$, we can define a SSIM-total-variation-denoising method for one-dimensional discrete signals as follows. Given a noisy signal $y\in\mathbb{R}^n$, its denoised reconstruction $x$ is the solution of the problem,
\begin{eqnarray}
\label{consTV}
\min_x&& \,\, T (x,y) \\
\text{subject to}\,&& \|Dx\|_1\leq \lambda\nonumber.
\end{eqnarray}
Here, we consider $\|Dx\|_1$ as a measure of the total variation (TV) of the vector $x$. Notice that instead of minimizing the TV, we employ it as a constraint. Moreover, methods for solving the $\ell_2$ version of \eqref{consTV} can be found in \cite{CombettesP04,FadiliP11}. As with the classical TV optimization problems, solutions of \eqref{consTV} have bounded variation as well.
As for images, these can be denoised in the following manner. Let $Y\in\mathbb{R}^{m\times n}$ be a noisy image. Also, let $V:\mathbb{R}^{m\times n}\to\mathbb{R}^{mn\times1}$ be a linear transformation that converts matrices into column vectors, that is,
\begin{equation}
V(A)=\vect(A)=[a_{11}, a_{21},\dots,a_{(m-1)n},a_{mn}]^T.
\end{equation}
A reconstruction $X\in\mathbb{R}^{m\times n}$ of the noiseless image can be obtained by means of the following SSIM-based optimization problem,
\begin{equation}
\min_{X} \, T(V(X),V(Y))+\lambda\|X\|_{TV},
\label{unssimtv}
\end{equation}
where the regularizing term is a discretization of the isotropic TV seminorm for real-valued images. In particular, the TV of $X$ is the discrete counterpart of the standard total variation of a continuous image $g$ that belongs to the function space $L^1(\Omega)$, where $\Omega$ is an open subset of $\mathbb{R}^2$:
\begin{equation}
TV(g)=\int_\Omega \|Dg(x)\|_2dx=\sup_{\xi\in \Xi}\left\{\int_\Omega g(x)\nabla\cdot\xi~dx\right\}.
\label{dualtv}
\end{equation}
Here, $\Xi=\{\xi:\xi\in C_c^1(\Omega,\mathbb{R}^2), \|\xi(x)\|_2\leq 1\ \forall x\in\Omega\}$, and $\nabla\cdot$ is the divergence operator \cite{Chambolle04}. As anticipated, we employ the following iterations for solving \eqref{unssimtv}:
\begin{eqnarray}
X^{k+1}&:=&\argmin_X\left(T(V(X),V(Y))+\frac{\rho}{2}\|X-Z^{k}+U^{k}\|_F^2\right),\\
Z^{k+1}&:=&\argmin_Z\left(\|Z\|_{TV}+\frac{\rho}{2\lambda}\|Z-X^{k+1}-U^{k}\|_F^2\right),\\
U^{k+1}&:=&U^k+X^{k+1}-Z^{k+1},
\end{eqnarray}
where $\|\cdot\|_F$ is the Frobenius norm. Notice that the $Z$-update may be computed efficiently by using the algorithm introduced by Chambolle in \cite{Chambolle04}.
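For reference, one common discretization of the isotropic TV seminorm sums, over all pixels, the Euclidean norm of the forward-difference gradient. The sketch below is one of several possible discretizations and is shown only for illustration; the boundary handling, in particular, is an arbitrary choice.
\begin{verbatim}
# Discrete isotropic TV of an image X via forward differences,
# replicating the last row/column at the boundary.  One of several
# common discretizations; illustrative only.
import numpy as np

def tv_isotropic(X):
    dx = np.diff(X, axis=1, append=X[:, -1:])  # horizontal differences
    dy = np.diff(X, axis=0, append=X[-1:, :])  # vertical differences
    return np.sum(np.sqrt(dx**2 + dy**2))
\end{verbatim}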
As mentioned before, it is more convenient to employ an average of local SSIMs as a fidelity term. Let $\{Y_i\}_{i=1}^N$ be a partition of the given image $Y$ such that $\cup_{i=1}^NY_i=Y$. Further, let $\{X_i,Z_i\}_{i=1}^N$ also be partitions of the variables $X$ and $Z$ such that $\cup_{i=1}^NX_i=X$ and $\cup_{i=1}^NZ_i=Z$. Also, let $MT:\mathbb{R}^{m\times n}\times\mathbb{R}^{m\times n}\to\mathbb{R}$ be given by
\begin{equation}
MT(X,Y)=\frac{1}{N}\sum_{i=1}^N T(V(X_i),V(Y_i)).
\label{MT}
\end{equation}
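Computationally, $MT$ is just an average of the dissimilarity measure over the blocks of the partition. The sketch below assumes non-overlapping $b\times b$ blocks, image dimensions divisible by $b$, and a user-supplied function \texttt{T\_pair} (a placeholder name) implementing $T(\cdot,\cdot)$ on vectorized blocks.
\begin{verbatim}
# Blockwise fidelity MT(X, Y): average of T over non-overlapping
# b x b blocks.  T_pair is a placeholder for the dissimilarity T;
# image dimensions are assumed divisible by b.
import numpy as np

def MT(X, Y, T_pair, b=8):
    m, n = X.shape
    vals = [T_pair(X[i:i+b, j:j+b].ravel(), Y[i:i+b, j:j+b].ravel())
            for i in range(0, m, b) for j in range(0, n, b)]
    return float(np.mean(vals))
\end{verbatim}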
Then, the optimization problem that is to be solved is
\begin{equation}
\min_{X} \, MT(X,Y)+\lambda\|X\|_{TV}.
\label{unmssimtv}
\end{equation}
If $\{Y_i,X_i,Z_i\}_{i=1}^N$ are partitions of non-overlapping blocks, the problem in \eqref{unmssimtv} can be solved by carrying out the following iterations,
\begin{eqnarray}
X_i^{k+1}&:=&\argmin_{X_i}\left(T(V(X_i),V(Y_i))+\frac{N\rho}{2}\|X_i-Z_i^{k}+U_i^{k}\|_F^2\right),\\
Z^{k+1}&:=&\argmin_Z\left(\|Z\|_{TV}+\frac{\rho}{2\lambda}\|Z-X^{k+1}-U^{k}\|_F^2\right),\\
U^{k+1}&:=&U^k+X^{k+1}-Z^{k+1},
\end{eqnarray}
where $U_i$ is an element of the partition of the dual variable $U$. As expected, $\cup_{i=1}^NU_i=U$, and $U_i\cap U_j=\varnothing$ for all $i\neq j$. The extension of this algorithm when a weighted average of local SSIMs is used as a measure of similarity between images is straightforward.
Other imaging tasks can be performed by solving the following variant of the SSIM-based optimization problem in \eqref{unmssimtv},
\begin{equation}
\min_{X} \, MT(A(X),Y)+\lambda\|X\|_{TV},
\label{unmopssimtv}
\end{equation}
where $A(\cdot)$ is a linear operator (e.g., blurring kernel, subsampling procedure, etc.). A minimizer of \eqref{unmopssimtv} may be found by means of Algorithm IV. In this case, the corresponding ADMM iterations are the following:
\begin{eqnarray}
X^{k+1}&:=&\argmin_X\left(\frac{\rho}{2}\|A(X)-W^k+U^{k}\|_2^2+\frac{\mu}{2}\|X-Z^{k}+V^{k}\|_2^2\right),\\
W^{k+1}&:=&\argmin_W\left(MT(W,Y)+\frac{\rho}{2}\|W-A(X^{k+1})-U^{k}\|_2^2\right),\\
Z^{k+1}&:=&\argmin_Z\left(\|Z\|_{TV}+\frac{\mu}{2\lambda}\|Z-X^{k+1}-V^{k}\|_2^2\right),\\
U^{k+1}&:=&U^k+A(X^{k+1})-W^{k+1},\\
V^{k+1}&:=&V^k+ X^{k+1}-Z^{k+1}.
\end{eqnarray}
Furthermore, under certain circumstances, Algorithm III can be employed to solve problem \eqref{unmopssimtv}. Let $\{A(X)_i\}_{i=1}^N$ be a partition of $A(X)$ such that $\cup_{i=1}^NA(X)_i=A(X)$, and $A(X)_i\cap A(X)_j=\varnothing$ for all $i\neq j$. If there exist operators $D_i$, $1\le i\leq N$, such that
\begin{equation}
D_i(V(X_i))=V(A(X)_i),
\end{equation}
for all $i\in\{1,\dots,N\}$, then a minimizer of \eqref{unmopssimtv} can be found by means of Algorithm III. The corresponding iterations are as follows:
\begin{eqnarray}
X_i^{k+1}&:=&\argmin_{X_i}\left(T(D_i(V(X_i)),V(Y_i))+\frac{N\rho}{2}\|X_i-Z_i^{k}+U_i^{k}\|_F^2\right),\\
Z^{k+1}&:=&\argmin_Z\left(\|Z\|_{TV}+\frac{\rho}{2\lambda}\|Z-X^{k+1}-U^{k}\|_F^2\right),\\
U^{k+1}&:=&U^k+X^{k+1}-Z^{k+1},
\end{eqnarray}
where $X_i$, $U_i$ and $Z_i$ are elements of the partitions $\{X_i,U_i,Z_i\}$ defined above.
We close this section by mentioning that to the best of our knowledge, the contributions reported in \cite{YuShao,OteroIPCV,Otero} along with the applications presented above are the only approaches in the literature that combine TV and the SSIM.
\section{Experiments}
In this section, we provide some numerical and visual results to evaluate the performance of some of the methods introduced above. We have focussed our attention on two types of unconstrained SSIM-based optimization problems that have been barely explored in the literature: SSIM with $\ell_1$ regularization and SSIM with the TV seminorm as a regularizing term. The results are presented as follows: In Section \ref{unconop}, we assess the efficacy of the ADMM-SSIM-based approach when the $\ell_1$ regularization is employed, whereas in Section \ref{ssimtvexp}, performance of the ADMM-SSIM-based methodology that employs the TV seminorm as a regularizing term is evaluated.
In all experiments, we employed non-overlapping pixel blocks. Performance of the $\ell_2$- and SSIM-based approaches is compared by computing the MSSIM of the original images and their corresponding reconstructions. Here, the MSSIM is simply the average of the SSIM values of all non-overlapping blocks, which are computed using Eq. \eqref{nose}.
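For completeness, the evaluation metric can be computed along the following lines. The sketch uses the standard two-term SSIM formula with the usual stability constants for $8$-bit images; the exact form of Eq. \eqref{nose} may differ in details (e.g., the treatment of the means), so this is only indicative.
\begin{verbatim}
# MSSIM over non-overlapping b x b blocks, using the standard SSIM
# formula; the constants follow the usual convention for 8-bit
# images and may differ from the paper's Eq. (nose).
import numpy as np

def ssim_block(x, y, C1=(0.01 * 255)**2, C2=(0.03 * 255)**2):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + C1) * (2*cxy + C2)) / \
           ((mx**2 + my**2 + C1) * (vx + vy + C2))

def mssim(X, Y, b=8):
    m, n = X.shape
    return float(np.mean([ssim_block(X[i:i+b, j:j+b], Y[i:i+b, j:j+b])
                          for i in range(0, m, b)
                          for j in range(0, n, b)]))
\end{verbatim}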
\newpage
\subsection{SSIM with $\ell_1$ Regularization}
\label{unconop}
Here, we consider the unconstrained SSIM-based optimization problem,
\begin{equation}
\label{missl1}
\min_x \, T(Dx,y) + \lambda \|x\|_1,
\end{equation}
where $D$ is an $n\times n$ Discrete Cosine Transform (DCT) matrix, and $y\in\mathbb{R}^n$ is the given observation. This problem can be solved with either Algorithm III or Algorithm IV. For these experiments we have employed Algorithm III.
We shall compare the solutions obtained by the proposed method with those obtained by the $\ell_2$ version of problem \eqref{missl1}, namely,
\begin{equation}
\label{eltwo}
\min_x \, \frac{1}{2} \| Dx - y \|_2^2 + \lambda \| x \|_1,
\end{equation}
which can be solved by means of the soft thresholding (ST) operator \cite{frjg,turlach} if $D$ is an orthogonal matrix. For the sake of a fair comparison between the two approaches, the regularization of each $8\times8$ image-block was adjusted so that the $\ell_0$ norm of the set of recovered DCT coefficients is 18 in all cases.
In these experiments, the well-known test images \textit{Lena} and \textit{Mandrill} were employed.
The SSIM maps clearly show that the proposed method (ADMM-SSIM) outperforms the classical $\ell_2$ approach (ST) with respect to the SSIM. Moreover, by taking a closer look at the reconstructions, we observe that the SSIM-based approach yields images which are brighter and possess a higher contrast than the $\ell_2$ reconstructions (e.g., compare the central regions of the reconstructions in the bottom row). This should not be surprising, since the dissimilarity measure $T(Dx,y)$ takes into account the component of the SSIM that measures the contribution of the contrast of the images being compared. On the other hand, thanks to the structural component of the SSIM, textures are better captured by the proposed method.
Regarding numerical results, the MSSIMs that are obtained with the proposed approach are 0.9164 (\textit{Lena}) and 0.8440 (\textit{Mandrill}). As for the classical $\ell_2$ method, the corresponding MSSIMs are 0.9020 (\textit{Lena}) and 0.7935 (\textit{Mandrill}). While for the image \textit{Lena} the SSIM-based approach is only somewhat superior, the proposed method significantly outperforms the $\ell_2$ reconstruction of \textit{Mandrill} (see the SSIM maps of the third row). The main reason is that this image is much less regular than \textit{Lena}, so its features are better approximated by SSIM-based techniques.
In order to obtain a deeper insight into how the SSIM-based approach yields brighter reconstructions with higher contrast, it is worthwhile to examine the type of solutions that are obtained by both the $\ell_2$ and SSIM approaches as the regularization varies. Experimental results show that, in general, the shrinking of the DCT coefficients is not as strong as that induced by the classical ST operator. In other words, the SSIM solution is usually a ``scaled'' version of the $\ell_2$ solution; nevertheless, the experimental results show that this scaling is not always the same for all recovered coefficients---an example is presented in Figure \ref{dctcoeff}. In the plots that are shown, the same image-block is processed but subjected to different degrees of regularization. In the plot on the left hand side, the $\ell_0$ norm of both solutions is 18, whereas in the plot on the right hand side, the number of non-zero coefficients for both methods is 3.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth,height=1\textwidth]{dct_coeff.eps}
\caption{A visual comparison between the original and recovered coefficients from a particular block of the \textit{Lena} image can be observed. Regularization is carried out so that the two methods being compared induce the same sparseness in their recoveries. In the two shown examples, the same block was processed but subjected to different amounts of regularization. In particular, the $\ell_0$ norm of the set of DCT coefficients that were recovered by both the proposed method and ST is 18 for the first example (first plot from left to right), and 3 for the second (plot on the right hand side).}
\label{dctcoeff}
\end{center}
\end{figure}
As a complement to the above discussion, Figure \ref{spassim} shows how the MSSIM changes as a function of the $\ell_0$ norm of the solutions that are obtained by both methods. The plot on the left hand side shows this behaviour of the MSSIM for an image patch of \textit{Lena}, whereas the other plot illustrates what happens in the case of an image patch of \textit{Mandrill}. It can be seen that when the regularization is not strong, the performance of both approaches is very similar. However, as the regularization is increased, the difference of the MSSIM values yielded by both methods tends to be greater. As expected, this is more noticeable for \textit{Mandrill}. These results show that when high compression is required, and when the images are not so regular, it is more convenient to opt for a SSIM-based technique for the sake of visual quality.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth,height=1\textwidth]{mssim_spa.eps}
\caption{In this figure, both plots correspond to the average SSIM versus the $\ell_0$ norm of the recovered coefficients for the test images \textit{Lena} and \textit{Mandrill}. It can be observed that the SSIM-based technique gradually outperforms the classical $\ell_2$ method as regularization increases.}
\label{spassim}
\end{center}
\end{figure}
\subsection{SSIM and Total Variation}
\label{ssimtvexp}
In this section, we examine several imaging tasks which employ the TV seminorm as a regularization term. In order to assess the performance of the proposed ADMM-SSIM methods, we shall compare their results with their $\ell_2$ counterparts.
It is important to mention that, in order to reduce blockiness in the reconstructions, the mean of each non-overlapping pixel block is not subtracted prior to processing. This implies that the fidelity term defined in \eqref{MT} is not equivalent to, but only based on, the dissimilarity measure introduced in Section \ref{def}. Despite this, the experiments presented below suggest that this fidelity measure may be used as a substitute for the SSIM.
\subsubsection{Denoising}
In the following experiments, the denoising of some images corrupted with Additive White Gaussian Noise (AWGN) was performed. Although from a maximum a posteriori (MAP) perspective the ADMM-SSIM approach is not optimal, it is worthwhile to see how denoising is carried out when the SSIM-based metric is employed as a fidelity term.
As one might expect, the noiseless approximation is obtained by solving the following unconstrained SSIM-based optimization problem,
\begin{equation}
\min_{X} \, MT(X,Y)+\lambda\|X\|_{TV},
\label{STV}
\end{equation}
where $MT:\mathbb{R}^{m\times n}\times\mathbb{R}^{m\times n}\to\mathbb{R}$ is the fidelity term that was previously defined in Eq. \eqref{MT}. Problem \eqref{STV} can be solved by using either Algorithm III or Algorithm IV. Here, we employed Algorithm III since the ADMM-SSIM iterations are quite simple optimization problems.
In order to assess the performance of the proposed ADMM-SSIM method, we compare it with its $\ell_2$ counterpart, namely,
\begin{equation}
\min_{X} \, \|X-Y\|_2^2+\lambda\|X\|_{TV}.
\end{equation}
Naturally, Chambolle's algorithm can be employed for solving this optimization problem \cite{Chambolle04}. In order to compare the effectiveness of the proposed approach and Chambolle's method (TV), regularization was carried out in such a way that the TV seminorms of the reconstructions yielded by both methods are the same.
In Figure \ref{lemanssimtv}, some visual results are shown. Once again, we employed the test images \textit{Lena} and \textit{Mandrill}. The noisy images, as well as the SSIM maps, can be observed in the first and the third rows. Reconstructions and original images are presented in the second and fourth rows. The TV seminorm of the reconstructions is 2500 for \textit{Lena} and 4500 for \textit{Mandrill}. The Peak Signal-to-Noise Ratio (PSNR) prior to denoising was 18.067 dB in all experiments.
In the case of \textit{Lena}, it is evident that the proposed method performs significantly better than its $\ell_2$ counterpart. Notice that some features of the original \textit{Lena} (e.g., the eyes) are better reconstructed, whereas in the TV reconstruction these features are considerably blurred. This is mainly due to the fact that the noise does not completely hide some of the more important attributes of the original image. Since the fidelity term forces the minimizer of problem \eqref{STV} to be visually as similar as possible to the given noisy observation, while denoising is still accomplished by the regularizing term, the reconstruction yielded by the ADMM-SSIM approach is visually more similar to the noiseless image. As for MSSIM values, these are 0.4669 and 0.6426 for the TV and ADMM-SSIM reconstructions, respectively.
Nevertheless, in some circumstances, there is not such a noticeable gap between the ADMM-SSIM and TV approaches. This is the case for the \textit{Mandrill} image. This image is much less regular than \textit{Lena}, therefore TV-based denoising techniques will deliver reconstructions devoid of the fine details that the original \textit{Mandrill} has (e.g., the fur). The ADMM-SSIM method performs better than the $\ell_2$ approach. However, its effectiveness is only somewhat better due to the low regularity of \textit{Mandrill}. Regarding numerical results, the computed MSSIMs are 0.4832 (ADMM-SSIM) and 0.4708 (TV).
\begin{figure}[htbp]
\centering
\includegraphics[width=1\textwidth,height=0.63\textwidth]{lena_ssim_tv1.eps}
\includegraphics[width=1\textwidth,height=0.63\textwidth]{mandrill_ssim_tv1.eps}
\caption{Some visual results for the denoising of the test images \textit{Lena} and \textit{Mandrill}. For \textit{Lena}, the TV seminorm of the shown reconstructions is 2500, whereas for \textit{Mandrill} it is 4500. In the first and third rows are shown the noisy images along with the SSIM maps between the reconstructions and the original images. The original and reconstructed images are shown in the second and fourth rows. The MSSIMs for \textit{Lena} and \textit{Mandrill} that are obtained by using the ADMM-SSIM-based method are 0.6426 and 0.4832, respectively. The corresponding MSSIM values for the
$\ell^2$ approach are 0.4669 (\textit{Lena}) and 0.4708 (\textit{Mandrill}).}
\label{lemanssimtv}
\end{figure}
In order to have a general idea of the effectiveness of the SSIM-based methodology when regularization varies, in Figure \ref{ssimtv}, we show the behaviour of the MSSIM as a function of the TV seminorm of the reconstructions obtained by both the ADMM-SSIM and the TV approaches. The plot on the left shows the behaviour of the MSSIM for a noisy image patch of \textit{Lena} whereas the plot on the right shows the results for a corrupted image patch of \textit{Mandrill}. As expected, the plot on the right hand side shows that for images with low regularity---such as \textit{Mandrill}---the ADMM-SSIM and TV methods exhibit similar effectiveness over a wide range of regularization values. On the other hand, for the image \textit{Lena}, one observes a significant difference between the performances of the two methods. This suggests that when strong regularization is required, it is more advantageous to employ SSIM-based techniques over $\ell_2$ methods if certain visual features need to be recovered, provided that the reconstruction possesses some degree of regularity.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth,height=0.65\textwidth]{mssim_tv.eps}
\caption{The behaviour of the average SSIM of reconstructed images
obtained from the proposed SSIM-based method and the classical
$\ell_2$ method as a function of the TV seminorm of the reconstruction.
{\bf Left:} The \textit{Lena} image. {\bf Right:} The
\textit{Mandrill} image.
In the case of the \textit{Lena} image, the SSIM-based approach clearly outperforms the classical $\ell_2$ method. For the \textit{Mandrill} image, however, the performance of both methods is, in general, very similar.}
\label{ssimtv}
\end{center}
\end{figure}
\subsubsection{Zooming}
Given a low resolution image $Y$, we wish to find an image $X$ which is a high resolution version of $Y$. This imaging task may be achieved by solving the following optimization problem \cite{ComboChambolle},
\begin{equation}
\min_{X} \, \|S(X)-Y\|_2^2+\lambda\|X\|_{TV},
\label{stv}
\end{equation}
where $S(\cdot)$ is a linear operator that consists of a blurring kernel followed by a subsampling procedure. Problem \eqref{stv} can be solved by different approaches such as the Chambolle-Pock algorithm \cite{ChambollePock} and ADMM \cite{BoydADMM}. In our particular case, we employed the ADMM method.
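As a concrete illustration, the operator $S(\cdot)$ can be realized as a Gaussian blur followed by decimation. In the sketch below, the kernel width and the sampling grid are arbitrary assumptions made for the sake of the example; only the zoom factor of four matches the experiments reported later.
\begin{verbatim}
# The linear operator S: Gaussian blur followed by subsampling.
# sigma and the sampling grid are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def S(X, sigma=1.0, factor=4):
    return gaussian_filter(X, sigma)[::factor, ::factor]
\end{verbatim}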
\noindent
The SSIM counterpart is given by
\begin{equation}
\min_{X} \, MT(S(X),Y)+\lambda\|X\|_{TV}.
\label{ztv}
\end{equation}
As mentioned in Section \ref{latv}, under certain circumstances, both Algorithm III and Algorithm IV can be used to perform the minimization of \eqref{ztv}. In this study, we employed Algorithm IV.
Visual results are presented in Figure \ref{zoomtv}. In the first row, the SSIM maps are presented. The reconstructions yielded by both methods are presented in the bottom row along with the original image and its given low resolution counterpart. For the sake of a fair comparison, the strength of the regularization was adjusted so that the obtained reconstructions have the same TV. Also, in these experiments, the zoom factor is four.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth,height=0.7\textwidth]{zoom_eye.eps}
\caption{Visual results for the Zooming application. In the first row, the SSIM maps are shown. Original image, low resolution observation, and reconstructions are presented in the bottom row. In this case, the zoom factor is four. The TV and the MSSIM for both reconstructions are 4563.26 and 0.7695 respectively. The image can be downloaded from \url{https://pixabay.com/fr/belle-gros-plan-oeil-sourcils-cils-2315/}.}
\label{zoomtv}
\end{center}
\end{figure}
It can be observed that both the proposed ADMM-SSIM method and the classical $\ell_2$ approach exhibit very similar performance. In fact, the MSSIM value for both reconstructions is 0.7695. In general, we found that the capabilities of both methods are virtually the same even when the strength of the regularization changes. This behaviour can be observed in Figure \ref{tvzoom}.
A possible explanation for this phenomenon is that many important features such as textures and fine details are either lost or barely present in the low resolution observation. The ADMM-SSIM method will obtain reconstructions which, after subsampling, will be visually as similar as possible to the low resolution observations. These, in general, will not possess visual elements that are better captured by the SSIM-based approach. In other words, if, hypothetically speaking, we had two perfect reconstructions, one that is considered optimal with respect to the SSIM, and the other optimal with respect to $\ell_2$, both images would look almost identical when observed from afar. This is equivalent to having the same low resolution image as an observation.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth,height=0.7\textwidth]{eye_zoom.eps}
\caption{A plot of the average SSIM as a function of the TV seminorm of the reconstructions yielded by both the ADMM-SSIM and $\ell_2$ methods. In these experiments, a patch of the reference image was employed for the zooming application. The performance of both methods is virtually the same.}
\label{tvzoom}
\end{center}
\end{figure}
\subsubsection{Deblurring}
We perform the deblurring of an image by solving the following SSIM-based optimization problem:
\begin{equation}
\min_{X} \, MT(B(X),Y)+\lambda\|X\|_{TV},
\label{vtv}
\end{equation}
where $Y$ is the given blurred image, and $B(\cdot)$ is a blurring kernel. In all the experiments presented in this section, a Gaussian kernel was employed. Once again, we solve \eqref{vtv} by using Algorithm IV. The $\ell_2$ counterpart of \eqref{vtv} is given by
\begin{equation}
\min_{X} \, \|B(X)-Y\|_2^2+\lambda\|X\|_{TV}.
\label{btv}
\end{equation}
The minimization of \eqref{btv} was performed using ADMM.
Some visual results are shown in Figure \ref{blurtv}. The blurred observation along with the SSIM maps of the reconstructions are presented in the first row. The original image and the corresponding reconstructions of the two methods are presented in the bottom row. The TV of both reconstructions is 4000. The standard deviation of the Gaussian blurring kernel was 5.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\textwidth,height=0.7\textwidth]{blur_eye.eps}
\caption{Some visual results of the deblurring application. In the first row, the SSIM maps and the blurred observation are shown. Original image and reconstructions are presented in the bottom row. The standard deviation of the Gaussian blurring kernel was five. The MSSIM values of the reconstructions are 0.7517 and 0.7455 for the ADMM-SSIM and $\ell_2$ approaches, respectively. Notice that the fine features of the original image are better reconstructed by the proposed approach. The image can be downloaded from \url{https://pixabay.com/fr/belle-gros-plan-oeil-sourcils-cils-2315/}.}
\label{blurtv}
\end{center}
\end{figure}
Overall, the effectiveness of both the $\ell_2$ and the ADMM-SSIM approaches is similar, however, the proposed method exhibits a better reconstruction of the fine details of the original image, e.g., the eyebrow, eyelashes and the reflection in the eye. This can also be observed in the SSIM maps---compare, for example, the regions that correspond to the eyelashes of the upper eyelid. In this area, the SSIM map of the ADMM-SSIM reconstruction is brighter than the SSIM map of its $\ell_2$ counterpart. As for numerical results, the MSSIM values of the ADMM-SSIM and $\ell_2$ reconstructions are 0.7517 and 0.7455, respectively.
\section{Final Remarks}
In this paper, we have provided a general mathematical
as well as computational framework for
constrained and unconstrained SSIM-based optimization.
This framework provides a means of including SSIM as a fidelity term
in a wide range of applications. Several
algorithms which can be used to accomplish a variety of
SSIM-based imaging tasks have also been proposed.
Problems in which both the $\ell_1$-norm and the
TV seminorm are used, either as constraints or as
regularization terms, have also been examined in this paper. To the
best of our knowledge, these problems have been examined only to a small
degree in the literature \cite{Otero,Otero2014,YuShao,Brunet2017}.
We have employed a
simplified version of the SSIM which yields
a dissimilarity measure $T(x,y)$ that is, in fact, an example of a squared
normalized metric.
Mathematically, it is more desirable to work with $T(x,y)$ because of its quasiconvexity.
Our SSIM-based optimization problems
are then formulated as minimization problems involving $T(x,y)$.
Formally, the dissimilarity measure $T(x,y)$ used in this paper
was obtained with the assumption that the vectors $x$ and $y$
have zero mean. That being said,
some experimental results of Section \ref{ssimtvexp} suggest that
$T(x,y)$ may be used effectively in cases where the vectors have nonzero mean.
The results presented in this paper indicate that, in general, the performance of SSIM-based optimization schemes can be at least as good as that of their classical $\ell_2$ counterparts. In some cases, SSIM-based methods appear to work better than $\ell_2$ approaches. One example is the compression of images with low regularity (see Figure \ref{lemanssim}). It appears that in these cases, the given (degraded) observation possesses at least partial information of the key features of the uncorrupted image.
That being said, the determination of problems for which
SSIM-based optimization outperforms
$\ell_2$-based methods, and {\em vice versa},
requires much more investigation.
The primary purpose of this paper, however, was to provide the framework
for such work.\\
\noindent
{\bf Acknowledgements:} This work has been supported in part
by Discovery Grants (ERV and OM) from the Natural Sciences and
Engineering Research Council of Canada (NSERC). Financial
support from the Faculty of Mathematics and the Department of
Applied Mathematics, University of Waterloo (DO) is also gratefully
acknowledged.
\section{Appendix}
\textbf{Proof of Theorem 4:} \textit{Let $f:X\subset\mathbb{R}^n\to\mathbb{R}$ be defined as in Eq. (\ref{thef}). Then, its Jacobian is Lipschitz continuous on any open convex set $\Omega\subset X$; that is, there exists a constant $L>0$ such that for any $x,w\in\Omega$,
\begin{equation}
\|J_f(x)-J_f(w)\|_F\leq L\|x-w\|_2 \, .
\end{equation}
Here, $\|\cdot\|_F$ denotes the Frobenius norm, and
\begin{equation}
L = C_1\|\Phi^T\Phi\|_F+\lambda (C_2\|z\|_2+C_3),~C_1,C_2,C_3>0.
\end{equation}}
\begin{proof}
Without loss of generality, and for the sake of simplicity, let the stability constant $C$ of the dissimilarity measure be zero. Also, let $y$ be a non-zero vector in $\mathbb{R}^m$. Let us define
\begin{equation}
s(x):=\frac{2(\Phi x)^Ty}{\|\Phi x\|_2^2+\|y\|_2^2},
\end{equation}
and
\begin{equation}
r(x):=\|\Phi x\|_2^2+\|y\|_2^2.
\end{equation}
Therefore, we have that $\|J_f(x)-J_f(w)\|_F$ is bounded by
\begin{eqnarray}
\|J_f(x)-J_f(w)\|_F&\leq&\|\Phi^T\Phi\|_F\|x\nabla s(x)^T-w\nabla s(w)^T\|_F+\nonumber\\
&&\lambda\|z(\nabla r(x)^T-\nabla r(w)^T)\|_F+\nonumber\\
&&\lambda\|x\nabla r(x)^T-w\nabla r(w)^T\|_F+\nonumber\\
&&|s(x)-s(w)|\|\Phi^T\Phi\|_F+\lambda|r(x)-r(w)| \, .
\end{eqnarray}
To show that $J_f$ is Lipschitz continuous on $\Omega$, we have to show that each term is Lipschitz continuous on $\Omega$ as well. Let us begin with the term $|r(x)-r(w)|$. By using the mean-value theorem for real-valued functions of several variables, we have that
\begin{equation}
|r(x)-r(w)|\leq\|2\Phi^T\Phi(\alpha x+(1-\alpha)w)\|_2\|x-w\|_2 \, ,
\end{equation}
for some $\alpha\in[0,1]$ and all $x,w\in\Omega$. Thus,
\begin{eqnarray}
|r(x)-r(w)|&\leq&2\|\Phi^T\Phi\|_2(\alpha\|x-w\|_2+\|w\|_2)\|x-w\|_2\\
&\leq&2\|\Phi^T\Phi\|_2(\|x-w\|_2+\|w\|_2)\|x-w\|_2.
\end{eqnarray}
Let $\sigma(\Omega)$ be the diameter of the set $\Omega$, that is,
\begin{equation}
\sigma(\Omega)=\sup_{x,v\in\Omega}\|x-v\|_2.
\end{equation}
Also, let $\rho(\Omega)$ be the $\ell_2$ norm of the largest element of the set $\Omega$, i.e.,
\begin{equation}
\rho(\Omega)=\sup_{x\in\Omega}\|x\|_2.
\end{equation}
Then,
\begin{eqnarray}
|r(x)-r(w)|&\leq&2\|\Phi^T\Phi\|_2(\sigma(\Omega)+\rho(\Omega))\|x-w\|_2\\
&\leq&K_1\|x-w\|_2,
\end{eqnarray}
where $K_1=2\|\Phi^T\Phi\|_2(\sigma(\Omega)+\rho(\Omega))$.
\noindent
As for $|s(x)-s(w)|$, in a similar fashion, we obtain that
\begin{equation}
|s(x)-s(w)|\leq\|\nabla s(\alpha x+(1-\alpha)w)\|_2\|x-w\|_2 \, ,
\end{equation}
for some $\alpha\in[0,1]$. In fact, it can be shown that for any vector $v\in\mathbb{R}^n$, the norm of the gradient of $s$ is bounded by
\begin{equation}
\|\nabla s(v)\|_2\leq(\sqrt{2}+1)\frac{\|\Phi\|_2}{\|y\|_2}.
\end{equation}
Let $K_2=(\sqrt{2}+1)\frac{\|\Phi\|_2}{\|y\|_2}$. Thus, $|s(x)-s(w)|\leq K_2\|x-w\|_2$.
Regarding the term $\|x\nabla s(x)^T-w\nabla s(w)^T\|_F$, we have that the $ij$-th entry of the $n\times n$ matrix $x\nabla s(x)^T-w\nabla s(w)^T$ is given by
\begin{equation}
\nabla_js(x)x_i-\nabla_js(w)w_i,
\end{equation}
where $\nabla_js(\cdot)$ is the $j$-th component of the gradient of $s(\cdot)$. By employing the mean-value theorem for functions of one variable we obtain that
\begin{equation}
|\nabla_js(x)x_i-\nabla_js(w)w_i|=\left|\frac{\partial}{\partial x_i}(\nabla_js(x(v)))\right||x_i-w_i|,
\end{equation}
for some $v\in\mathbb{R}$. Here, $x(v)=[x_1,\dots,x_{i-1},v,x_{i+1},\dots,x_n]$. The partial derivative in the previous equation is bounded, as can be shown using the classical triangle inequality and elementary differential calculus. Given this, we have that
\begin{eqnarray}
\left|\frac{\partial}{\partial x_i}(\nabla_js(x))(v)\right|&\leq&(\sqrt{2}+3)\frac{\|\Phi_i^T\|_2\|\Phi_j^T\|_2}{\|y\|_2^2}+(2\sqrt{3}+2)\frac{\|\Phi_j^T\|_2}{\|y\|_2^3}\\
&=&K_{ij},
\end{eqnarray}
where $\Phi_k^T$ is the $k$-th row of the transpose of the matrix $\Phi$. Therefore,
\begin{equation}
|\nabla_js(x)x_i-\nabla_js(w)w_i|\leq K_{ij}|x_i-w_i|.
\end{equation}
Using this result, we can conclude that
\begin{equation}
\|x\nabla s(x)^T-w\nabla s(w)^T\|_F\leq K_3\|x-w\|_2,
\end{equation}
where $K_3$ is equal to
\begin{equation}
K_3=n\max_{1\leq i,j\leq n}K_{ij};
\end{equation}
that is, $K_3$ is equal to the largest $K_{ij}$ times $n$.
\noindent
In a similar manner, it can be shown that
\begin{equation}
\|x\nabla r(x)^T-w\nabla r(w)^T\|_F\leq K_4\|x-w\|_2,
\end{equation}
where $K_4$ is given by
\begin{equation}
K_4=\max_{1\leq i,j\leq n}\{2nK_1\|\Phi_j^T\|_2(\|\Phi_i^T\|_2+\|\Phi\|_2)\}.
\end{equation}
\noindent
As for the term $\lambda\|z(\nabla r(x)^T-\nabla r(w)^T)\|_F$, this is equal to
\begin{equation}
\lambda\|z(\nabla r(x)^T-\nabla r(w)^T)\|_F=2\lambda\|z(\Phi^T\Phi(x-w))^T\|_F.
\end{equation}
Each $ij$-th entry of the matrix $z(\Phi^T\Phi(x-w))^T$ is given by $z_i(\Phi^T\Phi_j(x-w))^T$, where $\Phi^T\Phi_j$ is the $j$-th row of $\Phi^T\Phi$. Then, we have that
\begin{equation}
2\lambda\|z(\Phi^T\Phi(x-w))^T\|_F=2\lambda\sqrt{\sum_i^n\sum_j^n|z_i(\Phi^T\Phi_j(x-w))^T|^2}.
\end{equation}
Therefore,
\begin{eqnarray*}
2\lambda\sqrt{\sum_i^n\sum_j^n|z_i(\Phi^T\Phi_j(x-w))^T|^2}&\leq&2\lambda\sqrt{\sum_i^n\sum_j^n|z_i|^2\|\Phi^T\Phi_j\|_2^2\|x-w\|_2^2}\\
&\leq&2\lambda\sqrt{\sum_i^n|z_i|^2\sum_j^n\|\Phi^T\Phi_j\|_2^2}\|x-w\|_2\\
&=&2\lambda\|z\|_2\sqrt{\sum_j^n\|\Phi^T\Phi_j\|_2^2}\|x-w\|_2\\
&=&\lambda K_5\|z\|_2\|x-w\|_2 \, ,
\end{eqnarray*}
where $K_5=2\sqrt{\sum_j^n\|\Phi^T\Phi_j\|_2^2}$.
\noindent
Finally, we obtain that
\begin{equation*}
\|J_f(x)-J_f(w)\|_F\leq[(K_2+K_3)\|\Phi^T\Phi\|_F+\lambda(K_5\|z\|_2+K_1+K_4)]\|x-w\|_2,
\end{equation*}
which completes the proof.
\end{proof}
| {
"timestamp": "2020-02-10T02:06:28",
"yymm": "2002",
"arxiv_id": "2002.02657",
"language": "en",
"url": "https://arxiv.org/abs/2002.02657",
"abstract": "It is now generally accepted that Euclidean-based metrics may not always adequately represent the subjective judgement of a human observer. As a result, many image processing methodologies have been recently extended to take advantage of alternative visual quality measures, the most prominent of which is the Structural Similarity Index Measure (SSIM). The superiority of the latter over Euclidean-based metrics have been demonstrated in several studies. However, being focused on specific applications, the findings of such studies often lack generality which, if otherwise acknowledged, could have provided a useful guidance for further development of SSIM-based image processing algorithms. Accordingly, instead of focusing on a particular image processing task, in this paper, we introduce a general framework that encompasses a wide range of imaging applications in which the SSIM can be employed as a fidelity measure. Subsequently, we show how the framework can be used to cast some standard as well as original imaging tasks into optimization problems, followed by a discussion of a number of novel numerical strategies for their solution.",
"subjects": "Optimization and Control (math.OC); Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV); Numerical Analysis (math.NA)",
"title": "Optimization of Structural Similarity in Mathematical Imaging",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517507633983,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7089606485042177
} |
https://arxiv.org/abs/0801.4500 | On the motion under focal attraction in a rotating medium | New results are established here on the phase portraits and bifurcations of the kinematic model in a system of ODE's, first presented by H.K. Wilson in his 1971 book, and by him attributed to L. Markus (unpublished). A new, self-sufficient, study which extends Wilson's result and allows an essential conclusion for the applicability of the model is reported here. | \section{Introduction}\label{sec:1}
Consider the following family of planar differential equations
depending on three
real parameters $(\omega, v, R)$, with $\omega \geq 0, \; v > 0, \; R >
0$,
\begin{equation}
\label{eq:01}
\begin{array}{lclr}
\dot{x} & = & -\omega y +
v\frac{R - x}{\sqrt{(R - x)^2 + y^2}}, & \\
\dot{y} & = & \,\,\, \, \omega x - v\frac{y}{\sqrt{(R - x)^2 +
y^2}}.
\end{array}
\end{equation}
The {\it solutions}, also called {\it orbits}, of system
(\ref{eq:01}) describe the motion of certain {\it entities} (such
as particles or micro-organisms) attracted by a {\it focus} $F =
(R,0)$ (such as a light source or a magnetic pole) toward which
they move with {\it velocity} $ v$; the motion takes place in a
{\it medium}\, (such as a fluid) which rotates with {\it angular
velocity} $\omega$ around a fixed point, located at the origin
$O=(0,0)$.
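Although the analysis below is purely qualitative, the reader may find it instructive to integrate system (\ref{eq:01}) numerically. A minimal Python sketch follows; the parameter values and the initial point are arbitrary illustrative choices.
\begin{verbatim}
# Numerical integration of system (eq:01).  The values of omega, v,
# R and the initial point are arbitrary illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

def field(t, p, omega=2.0, v=1.0, R=1.0):
    x, y = p
    d = np.sqrt((R - x)**2 + y**2)
    return [-omega * y + v * (R - x) / d,
             omega * x - v * y / d]

sol = solve_ivp(field, (0.0, 50.0), [0.1, 0.0], max_step=0.01)
# sol.y[0], sol.y[1] trace one orbit; for omega > v/R it spirals
# toward the attracting equilibrium P of Theorem 1.
\end{verbatim}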
The focal point $F =
(R,0)$, where system (\ref{eq:01}) is undefined, will be regarded
as a {\it singular point}, outside of which it is analytic. A
point at which the components of the system vanish will be
referred to as an {\it equilibrium
point}.
According to Wilson \cite {wils}, p. 297, this is a model
suggested by L. Markus (see \cite{wils}, p. viii) for the motion
of {\it phototropic platyhelminthes}
---light-seeking flatworms--- swimming in a liquid filling a
shallow circular recipient with section
$$G_R = \{\, x^2 + y^2 \leq R^2 \,\}. $$
No printed
reference source containing Markus' suggestion is given in \cite{wils}.
\begin {definition} Denote by $W^s (Q)$ the {\it basin of attraction}
of an equilibrium or singular point
$Q$ of (\ref{eq:01}). This is the set of points $p_0$ such that $\varphi (t, p_0 ) \to Q$
as $t \to m_+ (p_0),$
the right extreme of the maximal interval of $\varphi (t, p_0 ) $.
The {\it basin of repulsion}, $W^u (Q)$, is defined analogously
for $t \to m_- (p_0)$, referring to the left extreme of the
maximal interval.
\end {definition}
Notice that for $Q=F$ the approach to $F$ by orbits of
(\ref{eq:01}) happens in finite time, that is, $ m_+ (p_0)$ or $
m_-(p_0)$ is finite.
In \cite{wils}, p. 298, the Poincar\'e--Bendixson Theorem and the Bendixson Negative Criterium are used to prove the existence and uniqueness of an {\it equilibrium point} $P= P_{(\omega, v, R)}$ attracting all positively {\it complete} semi-orbits $\varphi (t, p_0 )$ of system ({\ref{eq:01}}) starting at points $ p_0 \in G_R \setminus F$.
Complete means that $ \varphi (t, p_0 )$
is defined for all $ t \in [0, \infty). $ Denote such set of points $p_0$ by $G_{R,
+}$.
For the terms in ODEs not defined here, besides \cite{wils}, the reader can profit from consulting Chicone \cite{car}, among other more recent books.
Under the assumption
$\omega
> v/R$, $P$ is
\begin{equation} \label{eq:02}
P_{(\omega, v, R)} = ((v/{\omega})^2 /R, \sqrt{R^2-(v/\omega)^2
}(v/\omega R)).
\end{equation}
\noindent This corrects a misprint in \cite{wils}, p. 298.
The theorem
below improves the results on this subject outlined in
\cite{wils}. An elementary proof will be given in section
\ref{sec:2}.
\begin {theorem} \label{th:01}
For all $\omega \geq 0$, the region $ G_R \setminus F$
is positively invariant, in fact the radial component of system
(\ref{eq:01}) is negative on the complement of the closed disk $C$
of center $(R/2, 0)$ and radius $R/2$.
\noindent It holds that
\begin{itemize}
\item[1.] For $ 0 \leq \omega \leq v/R $, $F$ is a global attractor: \,
$W^s (F) = \mathbb R ^2 \setminus F .$
\item[2.] For $\omega > v/R $ there is a unique hyperbolic
attracting equilibrium $P$ located at (\ref{eq:02}) whose basin
of attraction contains $W^u (F)$, the basin of repulsion of $F$,
itself a regular analytic curve contained in $G_R \setminus F.$
\item[3.] Also, $W^s (F)$, the basin of attraction of $F$
is a
regular analytic
curve
disjoint from $G_R$.
\noindent In particular it holds that for $\omega > v/R $,
$$G_{R, +} = \, G_R \cap W^s (P)\, =\, G_R \setminus
F.$$
\end{itemize}
\end{theorem}
\section{Proof of Theorem \ref{th:01}} \label{sec:2}
Performing the change of variables and parameter
rescaling
\begin{equation}
\label{eq:03} x= R \bar x,\, y=R \bar y, \, t=\bar t R/v , \,
\omega = \bar \omega v/R
\end{equation}
and then removing the bars, we obtain system (\ref{eq:01}) with
$R\, =\, v\, =\, 1 :$
\begin{equation}
\label{eq:01.1} \dot{x} = -\omega y + \frac{1 - x}{\sqrt{(1 -
x)^2 + y^2}}, \,\,\,\,\dot{y} = \omega x - \frac{y}{\sqrt{(1 -
x)^2 + y^2}}.
\end{equation}
Writing this equation in polar coordinates centered at $F$,
given by
\begin{equation} \label{eq:05} x= 1-r\cos\theta, \, \; y = \,r\sin\theta, \,
\end{equation}
\noindent and multiplying both components by $r$, which amounts to rescaling the time once more, we obtain
\begin{equation}
\label{eq:07}
\dot{\theta} = -\omega(r-\cos\theta ), \; \;
\dot{r} = \,\,\, \, r(\omega\sin\theta -1).
\end{equation}
Differentiating $z = x^2 + y^2$ (which, in polar coordinates, is $z= 1+r^2 -2r\cos\theta$) along the direction field of equation (\ref{eq:01.1}) (which, in polar coordinates, coincides with that of (\ref{eq:07})) gives $z^\prime = -2r(r-\cos\theta)$.
Clearly $z^\prime $ is negative above the
curve $r=\cos\theta$, which is the polar expression for the
border of the disk $C$.
Calculation of the equilibrium of (\ref{eq:07}) in the half-plane $r>0$ gives the point $P$ whose polar coordinates
$(\theta_P , r_P) $
satisfy
$$\sin\theta_P=1/\omega,\, \, r_P =
\cos\theta_P = \sqrt{1- (1/\omega)^2}.$$
The expression, $(x_P , y_P)$, of $P$ in cartesian coordinates
follows:
$$x_P=1- \sqrt{1-(1/\omega)^2} \sqrt{1- (1/\omega)^2} = (1/\omega)^2
, \, \, y_P=\sqrt{1- (1/\omega)^2}/\omega,$$
\noindent which in the original coordinates, before the change in
(\ref{eq:03}), reproduces the expression in (\ref{eq:02}).
Calculation of the divergence, $\sigma$, and Jacobian, $\delta$,
of (\ref{eq:07}) gives
$$\sigma = -1, \, \, \, \delta = -\omega^2 + \omega \sin\theta +
\omega^2 r \cos\theta+\omega^2 \cos^2 \theta . $$
Evaluation at $P$, i.e. at $(\theta_P , r_P ) $, gives $\delta =
\omega^2 -1$. This shows that $P$ is an attracting hyperbolic
equilibrium point of {\it node} or {\it focus} type.
Calculation of the discriminant, $\sigma ^2 -4\delta$, for values of $\omega \geq 1$ shows that the transition of $P$ from a node to a focus occurs at $\omega=\sqrt{5}/2$, that is, at $\omega = \sqrt{5} v /2R$ in the original parameters.
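Explicitly, with $\sigma=-1$ and $\delta=\omega^2-1$,
$$\sigma^2-4\delta \,=\, 1-4(\omega^2-1)\,=\,5-4\omega^2,$$
which is positive (node) for $1\leq\omega<\sqrt{5}/2$ and negative (focus) for $\omega>\sqrt{5}/2$.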
This proves the first general assertion and item 1 in the
theorem.
For $\omega >0$, system (\ref{eq:07}) has two equilibria on the $\theta$-axis; one is a
hyperbolic saddle at $S_- = (-\pi/2, 0)$ with unstable separatrix
along the $\theta$-axis and attracting eigenvector parallel to
$(1,(2\omega+1)/\omega)$.
The other equilibrium, located at $S_+ = (\pi/2, 0),$ is a
hyperbolic node for $\omega < 1$. For $\omega
> 1$, $S_+ = (\pi/2, 0)$ is a hyperbolic saddle with stable
separatrix along the $\theta$-axis and with repelling eigenvector,
for positive eigenvalue $\omega -1,$ parallel to $(1,(-2\omega +
1)/\omega)$.
For the equilibrium at
$(\pi/2, 0)$, the transition at $\omega = 1$, for increasing $\omega$, is a cubic {\it pitchfork} node-to-saddle bifurcation. For $\omega > 1$, two attracting nodal equilibrium points bifurcate, one into $r > 0$ and the other into $r < 0$, and they capture the unstable separatrices of the saddle. Only the first one, located at $P$, is seen in the original $(x,y)$-plane.
\begin{figure}[htb]
\psfrag{L}{Saddle $S_- $} \psfrag{G}{Border of $G$}
\psfrag{C}{Border of $C$} \psfrag{R}{Saddle $S_+ $}
\begin{center}
\includegraphics[height=6cm]{FiguraS.eps}
\end{center}
\caption{Borders of $G$ and $C$ and Separatrix Directions
\label{fig1}}
\end{figure}
\begin{figure}[htb]
\psfrag{G}{Border of $G$} \psfrag{C}{Border of $C$}
\psfrag{F}{Focus $(R,0) $} \psfrag{P}{ $ P $}
\begin{center}
\includegraphics[height=6cm]{platfot.eps}
\caption{Borders of $G$ and $C$, Equilibrium Point $P$ and
Separatrix Directions at Focus $F$ \label{fig2}}
\end{center}
\end{figure}
Therefore, the slope of the
unstable separatrix, at $S_+ $, is $-2+ 1/\omega$ and that of
the stable separatrix at $S_- $ is $2+ 1/\omega$.
Comparison at $S_- $ of the slope of the stable separatrix with the
slopes of the borders of $G_R$ (for $R=1$), given by
$r=2\cos\theta$, which is $2$, and of $C$, given by
$r=\cos\theta$, which is $1$, leads to the location of $W^s
(F)$ outside of $G_R$.
The location of $W^u (F)$ between $G_R$ and $C$ follows from
a similar comparison of slopes, taking into account that at $S_+ $ the slope of the border of $G_R$ (for $R=1$) is $-2$, and that of $C$ is $-1$.
See Fig. \ref{fig1}, in polar coordinates, and Fig.
\ref{fig2}, in the original coordinates.
By continuity, for any $\omega > 1$ (that is, $\omega > v/R$ in the original parameters), the basin of repulsion of $F$ is always contained in $W^s(P).$ For $\omega$ near $1$, this is a consequence of the structure of the local {\it pitchfork} bifurcation described above. For large $\omega$ this follows from the Poincar\'e--Bendixson Theorem and the Bendixson Negative Criterium \cite{wils}.
From the fact that $z^{\prime}$ is negative outside the disk $C$ it follows that $W^s(P)$ coincides with $\mathbb R ^2 \setminus W^s(F)$.
The two main conclusions in items 2 and 3 are established.
\section{Concluding Comments} \label{sec:3}
Item 3 is essential for the purpose of the biological model in \cite{wils}. In fact, knowledge of the correct location, given in (\ref{eq:02}), of the equilibrium attracting all entities moving with velocity $v$ allows their separation from similar entities, also contained in the positively invariant set $G_R$, that have different characteristic velocities and therefore cluster at different points of $G_R$ for large $t$. From this information, the removal of the entities for study in isolation could be implemented in practice. However, no laboratory experiment report where the model has actually been used is known to the author.
Both Wilson \cite{wils} and Sotomayor \cite{lic}, p. 264,
the only written sources for system (\ref{eq:01}),
overlooked the property in item 3 of the theorem. Its
consideration was proposed to the author toward 1995 by Dan Henry
(1944 - 2002).
\vskip 0.2cm
\noindent{\bf Acknowledgement} The author is grateful to R.
Garcia for his help with the pictures and to L. F. Mello and A.
Gasull for their comments.
| {
"timestamp": "2008-01-29T15:21:16",
"yymm": "0801",
"arxiv_id": "0801.4500",
"language": "en",
"url": "https://arxiv.org/abs/0801.4500",
"abstract": "New results are established here on the phase portraits and bifurcations of the kinematic model in a system of ODE's, first presented by H.K. Wilson in his 1971 book, and by him attributed to L. Markus (unpublished). A new, self-sufficient, study which extends Wilson's result and allows an essential conclusion for the applicability of the model is reported here.",
"subjects": "Dynamical Systems (math.DS); Classical Analysis and ODEs (math.CA)",
"title": "On the motion under focal attraction in a rotating medium",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517501236461,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7089606480404803
} |
https://arxiv.org/abs/1908.08420 | When is the Sum of Two Closed Subgroups Closed in a Locally Compact Abelian Group | Locally compact abelian groups are classified in which the sum of any two closed subgroups is itself closed. This amounts to reproving and extending results by Yu.~N.~Mukhin from 1970. Namely we contribute a complete classification of all totally disconnected \lca\ groups with $X+Y$ closed for any closed subgroups $X$ and $Y$. | \section{Introduction and Main Results}
\label{s:intro-ab}
It was {\sc R. Dedekind}, who
in 1877 proved the modular law
for the subgroup lattice of
a certain abelian group (see \cite{Dedekind77}).
However, his proof works for {\em any} abelian group.
In 1970 Yu.~N.~Mukhin investigated
the analogous property for \lc\ a\-bel\-ian\ groups in \cite{muk2}.
The {\em closed subgroup lattice} $L(G)$ of a topological group
is its set of closed subgroups endowed with
{\em join} given as $A\vee B:= \gen{A\cup B}$
and {\em meet} as $A\wedge B:= A\cap B$ for $A$ and
$B$ any closed subgroups. Then $G$ is a {\em \tM\ group}
provided the modular law holds for
any closed subgroups $A$, $B$ and $C$ of $G$ with $A$ a subgroup of $C$:
\[A\vee (B\wedge C)=(A\vee B)\wedge C\]
Note that a group is {\em modular} if, and only if,
its lattice of closed subgroups does not contain
a sublattice isomorphic to $E_5$ (cf. \cite[2.1.2 Theorem]{schmidt}),
i.e., geometrically, a pentagon (see also Remark \ref{rem:not-M} below).
\[\xymatrix@R=2mm@C=3mm{& \bullet\ar@{-}[ld]\ar@{-}[rdd]& \\
\bullet\ar@{-}[dd]& & \\
& & \bullet\ar@{-}[ldd]\\
\bullet\ar@{-}[rd]& & \\
&\bullet&}\]
Mukhin, in the same paper, also classifies all \lc\ a\-bel\-ian\ groups
$G$ for which $X+Y$ is closed whenever $X$ and $Y$ are closed
subgroups of $G$ and we will call any \lc\ a\-bel\-ian\ group
$G$ with this property {\em strong\-ly \tqh}.
It follows from the definitions that every \stqh\ group\ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
We shall derive his results with a slightly different approach, but
essentially the same methods of proof. Our motivation comes from
studying nonabelian lo\-cal\-ly com\-pact\ groups satisfying an analogous property,
see \cite{HHR-stqh-nonab}.
Let us fix some notation. We mostly use the notation from \cite{hofmor}.
The {\em ${\mathbb Z}$-rank} of a discrete abelian
group $A$ is the {\em torsion free rank}, i.e., the $\Q$-dimension
of $A\otimes_{\mathbb Z}\Q$. When $A$ is tor\-sion-free\ then
the ${\mathbb Z}$-rank of $A$ is the dimension of
its dual $\hat A$ (see e.g. Theorem 8.22
in \cite{hofmor}).
We shall call a \lc\ a\-bel\-ian\ group $A$
{\em periodic} if it is both totally disconnected and the union
of its compact subgroups.
We shall use additive notation unless
stated differently. For $p$ a prime, an element $x$
in a \lc\ a\-bel\-ian\ group $G$ with $p^kx\to0$ as $k$ tends to infinity,
is called a {\em $p$-element}. As discussed on page 48 in \cite{HHR18}
this definition is equivalent to saying that $\gen x$ is
a pro-$p$ group.
In a periodic \lc\ a\-bel\-ian\ group $A$
the set of $p$-elements is a
closed subgroup $A_p$ -- its {\em $p$-primary component}
(or \psyl p subgroup,
see \cite[Definitions 8.7]{hofmor}).
For a set of primes $\pi$, the {\em $\pi$-primary component} $A_\pi$ of a periodic group $A$ is defined
to be the subgroup of $A$ topologically generated by all
$p$-primary components with $p\in\pi$. For a periodic group $A$
we denote by $\pi(A)$ the
set of all primes $p$ with $A_p$ not trivial.
If $\pi(A)=\{p\}$, for a single prime $p$, then $A$ is a {\em $p$-group}.
For a compact group our definition
agrees with \cite[Definition 8.7]{hofmor}.
For any fixed prime $p$ the kernel of the map $x\mapsto px$
is the {\em $p$-socle} of $A$ and will be denoted by $\text{\rm socle}_p(A)$
or just by $\text{\rm socle}(A)$ if there is no danger of confusion
(see \cite[Definition A1.20]{hofmor}).
A \lc\ a\-bel\-ian\ group $A$ will be termed {\em finitely generated} if there is a finite subset
$X$ of $A$ which generates $A$ {\em topologically},
i.e., $A=\gen X$.
We say that a \lc\ a\-bel\-ian\ $p$-group $A$ has finite \hbox{$p$-rank}\ $n$ if,
and only if,
every finitely generated\ closed subgroup $H$ has a set of topological generators of
cardinality $n$ and, in addition, $A$ contains a finitely generated\ subgroup
which cannot be generated topologically
with fewer elements. If $A$ contains finitely generated\ subgroups of arbitrarily large rank
then the \hbox{$p$-rank}\ of $A$ is said to be infinite.
For a finitely generated\ compact $p$-group $A$ this definition
agrees with $d(A)$, the minimum cardinality of
a topological set of generators of $A$,
as given in \cite[p.~43]{ribes-zalesskii}, i.e.,
$d(A)=\mathop\mathrm{rank}\nolimits_p(A)$. Moreover, by \cite[Proposition 4.3.6]{ribes-zalesskii},
for any closed subgroup $H$ of $A$
one has $d(H)\le d(A)$. Therefore for any finite \hbox{$p$-rank}\ abelian $p$-group
$G$ and closed subgroup $H$ one has $\mathop\mathrm{rank}\nolimits_p(H)\le \mathop\mathrm{rank}\nolimits_p(G)$.
It is a consequence of
\cite[Lemma 3.91]{HHR18} that our definition is equivalent to the one
given by \v Carin in \cite{Charin66}.
If $A$ is not topologically finitely generated, we say that
$A$ has {\em infinite \hbox{$p$-rank}}. A proposal for defining the \hbox{$p$-rank}\
of an arbitrary \lc\ a\-bel\-ian\ $p$-group has been recently made
in \cite[Section 10]{HHR-comfort}, which is reproduced in \cite[3.10, p.~93]{HHR18}.
For a prime $p$ the symbols $\Q_p$, ${\mathbb Z}_p$, $\prf p$ denote respectively
the additive group of the field of $p$-adic rationals, its closed subgroup
of $p$-adic integers, and the factor group $\Q_p/{\mathbb Z}_p$
(see \cite[p.~28 ff]{hofmor}).
The set of elements of finite order of an abelian group $G$ will
be denoted by $\mathop\mathrm{tor}(G)$ and we let $\div(G)$ stand for the
largest divisible subgroup of $G$.
The main properties of $\mathop\mathrm{tor}(G)$ and $\div(G)$
are discussed in \cite[Appendix 1]{hofmor}.
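As a simple illustration: $\mathop\mathrm{tor}(\Q_p)=\{0\}$ and
$\div(\Q_p)=\Q_p$, whereas for $G={\mathbb Z}_p\oplus\prf p$ one has
\[\mathop\mathrm{tor}(G)=\div(G)=\{0\}\oplus\prf p,\]
since ${\mathbb Z}_p$ is tor\-sion-free\ and every divisible subgroup of
${\mathbb Z}_p$ is contained in $\bigcap_{k\ge1}p^k{\mathbb Z}_p=\{0\}$.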
Our main results are as follows:
\begin{theorem}\label{t:mainA}
The following statements for a \lc\ a\-bel\-ian\ $p$-group $G$ are equivalent:
\begin{enumerate}[\rm(a)]
\item $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\item For an open compact subgroup $U$, exactly one
of the following holds:
\begin{enumerate}[\rm({b.}\rm 1)]
\item $U$ has finite \hbox{$p$-rank}. Then $\mathop\mathrm{tor}(G)$ is discrete and
$G/\mathop\mathrm{tor}(G)$ has finite \hbox{$p$-rank}.
\item $U$ has infinite \hbox{$p$-rank}. Then $\div(G)$ is closed,
$G/U$ and $\div(G)$ both have finite \hbox{$p$-rank},
and, $G/\div(G)$ is compact.
\end{enumerate}
\item $G$ is strong\-ly \tqh.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{t:M-periodic}
A periodic \lc\ a\-bel\-ian\ group $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ if, and only if,
for every prime $p$ the respective $p$-component is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\end{theorem}
The next two results exhibit the structure of
any torsion \stqh\ group\ and the splitting of the torsion subgroup
in \tM\ group s.
\begin{theorem}\label{t:abelian-tor-stqh}
Let $A$ be a \lc\ a\-bel\-ian\ torsion group.
The following statements are equivalent:
\begin{enumerate}[\rm(a)]
\item The group $A$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\item There
is a partition
\[\pi(A)=\delta\cup \phi\]
and all of the following hold:
\begin{enumerate}[\rm({b}.1)]
\item The set of primes $\phi$ is finite and
\[A_\phi=D_\phi\oplus V_\phi\]
for $D_\phi$ discrete and divisible
and $V_\phi$ compact and open in $A$.
\item $A_\delta$ is a discrete subgroup of $A$.
\end{enumerate}
\item The group $A$ is strong\-ly \tqh.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{t:mainB}
Let $G$ be a totally disconnected \lc\ a\-bel\-ian\ group -- neither discrete
nor periodic.
Then the following statements are equivalent.
\begin{enumerate}[\rm(a)]
\item $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\item All of the following hold:
\begin{enumerate}[\rm ({b.}1)]
\item
$T=\mathop\mathrm{tor}(G)=\mathop{\rm comp}\nolimits(G)$ is open in $G$ and $G/T$ is discrete
and tor\-sion-free\ of finite ${\mathbb Z}$-rank.
\item The torsion subgroup $T=\mathop\mathrm{tor}(G)$ is strong\-ly \tqh.
\end{enumerate}
Moreover, if $N$ is any closed subgroup of $G$, contained in $T$
then $\mathop\mathrm{tor}(G/N)=T/N$ and (b) holds for $G/N$ with
$T$ replaced by $T/N$.
\item $G$ is strong\-ly \tqh.
\end{enumerate}
\end{theorem}
The preceding result corrects
\cite[Theorem 14.32(b.3)]{HHR18}.
Using Pontryagin duality (see \cite[Chapter 7]{hofmor})
we shall deduce a structure theorem for
\lc\ a\-bel\-ian\ to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ groups with nontrivial connected components, see
Theorem \ref{t:M-conn}.
The fact that not every periodic nondiscrete to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ group is strong\-ly \tqh,
will be shown in Lemma \ref{l:cyclic-stqh}.
A \lc\ a\-bel\-ian\ group $A$ is {\em in\-duc\-ti\-ve\-ly mo\-no\-the\-tic} provided
every finite subset of $A$ is contained in a monothetic subgroup
(see Definition \ref{d:img-per} below):
\begin{theorem}\label{t:ab-periodic-stqh}
For a \lc\ a\-bel\-ian\ periodic group $A$ and open compact
subgroup $U$ the following statements are equivalent:
\begin{enumerate}[\rm(A)]
\item $A$ is strong\-ly \tqh.
\item There is a partition of $\pi(A)$ into
4 disjoint subsets
$\delta$, $\gamma$, $\phi$,
and $\mu$, and all of the following hold:
\begin{enumerate}[\rm(i)]
\item $\delta:=\{p\in\pi(A):A_p\cap U=\{0\}\}$
and $A_\delta$ is a discrete subgroup of $A$.
\item $\gamma:=\{p\in\pi(A):A_p\le U\}$ and $A_\gamma$
is a profinite subgroup of $A$.
\item $\phi:=\{p\in\pi(A)\setminus(\delta\cup\gamma):
\mathop\mathrm{rank}\nolimits_p(A_p)\ge2\}$.
The set $\phi$ is finite and
for all $p\in\phi$ the \psyl p subgroup $A_p$ is strong\-ly \tqh.
\item $A_\mu$ is in\-duc\-ti\-ve\-ly mo\-no\-the\-tic.
\item $A=A_\delta\oplus A_\gamma\oplus A_\phi\oplus A_\mu$
topologically and algebraically.
\end{enumerate}
\end{enumerate}
\end{theorem}
This result, we feel, is our genuine contribution.
Namely, for periodic to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ groups
Theorem 2 in \cite{muk2} and its proof seem not to lead
to a proof of our description of periodic nondiscrete \stqh\ group s.
We also correct \cite[Theorem 14.22(B)]{HHR18}, where $\gamma$
and $A_\gamma$ are missing in the decomposition.
The concluding Section \ref{s:consequences} contains several
consequences.
\section{Preliminaries}
\label{s:prelim-ab}
In a number of places we shall need a fact about
compact abelian torsion groups, \cite[Corollary 8.9]{hofmor},
which we rephrase here:
\begin{proposition}\label{p:cpt-tg}
The following statements about a compact abelian group $G$ are equivalent:
\begin{enumerate}[\rm(a)]
\item $G$ is a torsion group.
\item $G$ is profinite and has finite exponent.
\item $G=\prod_{p\in S}G_p$ is the cartesian product of compact
$p$-groups of finite exponent for $p$ in a finite set $S$.
\end{enumerate}
\end{proposition}
We have the following observation:
\begin{lemma}\label{l:stqh<tM}
Every \stqh\ group\ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\end{lemma}
\begin{proof}
Let $A\subseteq C$ and $B$ be closed subgroups. Since the group is
strong\-ly \tqh, the sums $A+B$ and $A+(B\cap C)$ are closed and therefore
agree with the joins $A\vee B$ and $A\vee(B\wedge C)$, respectively.
The equality $(A\vee B)\wedge C=A\vee(B\wedge C)$ then follows from the containments
\[
\begin{array}{cccccccr}
(A\vee B)\wedge C&=&(A+B)\cap C &
\subseteq & A+(B\cap C)&=&A\vee(B\wedge C),\\
A\vee (B\wedge C)&=&A+(B\cap C) &
\subseteq & (A+B)\cap C&=&(A\vee B)\wedge C. \\
\end{array}\]
\end{proof}
A result by \v Carin (see \cite[Theorem 5]{Charin66} and a mild
generalisation in \cite{HHR-comfort})
will be used frequently.
\begin{proposition} \label{p:finrank} For a \lc\ a\-bel\-ian\ $p$-group $G$
the following conditions are equivalent:
\begin{enumerate}[\rm(1)]
\item $G$ has finite $p$-rank.
\item There is a compact open subgroup $U$ such that
both $\mathop\mathrm{rank}\nolimits_p(U)$ and $\mathop\mathrm{rank}\nolimits_p(G/U)$ are finite.
\item There is a compact open subgroup $U$ such that
$U\cong {\mathbb Z}_p^m\oplus F$ for a nonnegative integer
$m\in\N_0$ and a finite abelian group $F$ and that
the $p$-socle
$\text{\rm socle}_p(G/U)$ is isomorphic to ${\mathbb Z}(p)^n$
for some $n\in\N_0$.
\item
There is a natural number $r$ such that every finitely generated\
subgroup of $G$ can be generated by at most $r$ elements.
\item There are nonnegative integers $m$, $n$, $k$, and a finite
$p$-group $F$ such that
algebraically and topologically
\[ G\cong \Q_p^m\oplus \prf p^n\oplus {\mathbb Z}_p^k\oplus F.\]
\end{enumerate}
\end{proposition}
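By way of example, $G=\Q_p\oplus\prf p$ (with $\prf p$ discrete) is of the
form (5) with $m=n=1$, $k=0$, and $F=\{0\}$; the compact open subgroup
$U={\mathbb Z}_p\oplus\{0\}$ witnesses (2) and (3), as
\[G/U\cong \prf p\oplus\prf p\quad\mbox{and}\quad
\text{\rm socle}_p(G/U)\cong{\mathbb Z}(p)^2.\]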
\begin{lemma}[{\cite[Lemma 3.6]{HHR-comfort}}]\label{l:Rb}
For a \lc\ a\-bel\-ian\ $p$-group $G$ the following statements are equivalent:
\begin{enumerate}[\rm(a)]
\item $G$ is finitely generated.
\item $G$ is compact and has finite \hbox{$p$-rank}.
\item There are $m\ge0$ and a finite abelian $p$-group $F$
such that $G$ is algebraically and topologically
isomorphic to ${\mathbb Z}_p^m\oplus F$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{l:GU-prank}
Let $G$ be a \lc\ a\-bel\-ian\ tor\-sion-free\ $p$-group containing a compact
open subgroup $U$
of finite \hbox{$p$-rank}. Then \[\mathop\mathrm{rank}\nolimits_p(G)=\mathop\mathrm{rank}\nolimits_p(U).\]
\end{lemma}
\begin{proof}
By the definition of the \hbox{$p$-rank}\ we need to prove that
every finitely generated\ subgroup $T$ of $G$ satisfies $d(T)=\mathop\mathrm{rank}\nolimits_p(T)\le r:= \mathop\mathrm{rank}\nolimits_p(U)$.
Lemma \ref{l:Rb} implies that $T$ is compact and so is $T+U$.
Because of $\mathop\mathrm{rank}\nolimits_p(T)\le\mathop\mathrm{rank}\nolimits_p(T+U)$
it will suffice to prove $\mathop\mathrm{rank}\nolimits_p(T+U)\le r$,
i.e., we may assume $U\le T$. Since $T$ is compact and $U$ is an open
subgroup there is $k\ge0$ such that $|T/U|=p^k$.
The homomorphism $\phi:T\to U$ sending $t\in T$ to $p^kt$
is continuous and injective, and therefore the compact subgroup
$\phi(T)=p^kT$ is isomorphic to $T$, algebraically and topologically.
Deduce from this that
\[\mathop\mathrm{rank}\nolimits_p(T)=\mathop\mathrm{rank}\nolimits_p(p^kT)\le r=\mathop\mathrm{rank}\nolimits_p(U)\le \mathop\mathrm{rank}\nolimits_p(T),\]
showing $\mathop\mathrm{rank}\nolimits_p(T)=r$, as desired.
\end{proof}
We record a well-known fact, see e.g. \cite[2.13 Corollary]{armacost81}:
\begin{lemma}\label{l:Rc}
A \lc\ a\-bel\-ian\ group $G$ is a $p$-group if, and only if, its dual $\hat G$ is.
\end{lemma}
\begin{remark}\label{rem:tMg}\rm
As has been said above, a lattice is modular if, and only if,
it is $E_5$-free (see \cite[2.1.2 Theorem]{schmidt}).
The absence of $E_5$ in the closed subgroup
lattice is inherited by closed subgroups and quotient groups.
Hence closed subgroups and quotient groups of a \tM\ group\ are \tM\ group s.
Moreover, a \lc\ a\-bel\-ian\ group $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ if, and only if, its Pontryagin dual
$\hat G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ (the latter fact follows from applying the
Annihilator Mechanism, see \cite[p.~314]{hofmor}).
\end{remark}
However, the class of \tM\ group s fails to be closed under the formation
of strict projective
limits and (local) products, as an example due to
Mukhin shows (see \cite{muk2}; it is reproduced in
Example \ref{ex:P+S} below).
\begin{remark}\label{rem:not-M}\rm
Every group $G$ that is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ must contain closed subgroups $A$, $B$,
and $C$, where $A\subseteq C$,
the meet $B\wedge C$ is a proper subgroup of $A$, and, $C$ is a proper subgroup
of the join $A\vee B$. Then the five closed subgroups
\[A\vee B,\ C,\ A,\ B,\ B\wedge C \tag{$\dagger$}\label{eq:ABC}\]
form a sublattice, isomorphic to the pentagon $E_5$, of the lattice of
closed subgroups of $G$; equivalently, the five groups in
Eq.~(\ref{eq:ABC}) are pairwise different and $B\cap C\subseteq A$.
Indeed, if the closed subgroups $X\subset Z$ and $Y$ do not satisfy the
modular identity then $A:= X\vee(Y\wedge Z)$,
$B:= Y$, and, $C:=(X\vee Y)\wedge Z$
serve the purpose.
\end{remark}
This observation provides a simple method
for exhibiting important examples of \lc\ a\-bel\-ian\ groups not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\begin{example}\label{ex:reals}\rm
Let $G:={\mathbb R}$ be the reals and
fix subgroups $C:={\mathbb Z}$, $A:= 2{\mathbb Z}$, and, $B:=\sqrt2{\mathbb Z}=
\{z\sqrt2:z\in{\mathbb Z}\}$.
Then $A\vee B={\mathbb R}$ by the density of $2{\mathbb Z}+\sqrt2{\mathbb Z}=2({\mathbb Z}+\frac{\sqrt2}2{\mathbb Z})$.
Moreover, $B\wedge C=\{0\}$ is contained in $A$. Hence
$A\vee(B\wedge C)=A=2{\mathbb Z}$, whereas $(A\vee B)\wedge C={\mathbb R}\cap{\mathbb Z}={\mathbb Z}=C$,
so ${\mathbb R}$ is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\end{example}
\begin{example}\label{ex:pK=K}\rm
Let $p$ be a prime and
$G=Z\oplus K$ be the topological direct sum of a discrete group $Z\cong{\mathbb Z}$
and an infinite compact monothetic group $K$. Suppose that $K=pK$, i.e.,
$K$ is $p$-divisible.
Fix a topological generator $k$ of $K$ and a generator $t$ of $Z$.
Let $C:= Z$, $A:= pZ$, and,
$B:=\{zt+zk: z\in {\mathbb Z}\}$ and observe that it is
the graph of the homomorphism $f:Z\to K$ sending the
generator $t$ of $Z$ to the generator $k$ of $K$. Hence $B$
is discrete. For proving $G=A\vee B$
observe first that $pK=K$ implies that $\gen{pk}=K$.
Then $A+B$ contains all elements of the form $put+v(t+k)=put+vt+vk$ for
$u$ and $v$ in ${\mathbb Z}$. Select $u:= 1$ and
$v:=-p$ in order to see that $pk\in A+B$.
Therefore $A\vee B=\overline{A+B}$ contains $\gen{pk}=K$
and hence
\begin{equation}\label{eq:AvBC}
(A\vee B)\wedge C=G\cap C=C=Z.
\end{equation}
Suppose there exists an integer $z\in{\mathbb Z}$ such that
$z(t+k)=zt+zk\in B\cap C$. This implies $zk=0$.
However, $K=\gen k$ is an infinite monothetic group and thus $k$ cannot be a
torsion element. Thus $z=0$ and therefore $B\cap C=\{0\}$, so that
taking Eq.~(\ref{eq:AvBC}) into account,
\[A\vee (B\cap C)=A\neq C=(A\vee B)\wedge C\]
follows. Thus $G$ is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\end{example}
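A concrete instance: take $K={\mathbb Z}_q$ for a prime $q\neq p$,
topologically generated by $1$. Then $K$ is an infinite compact monothetic
group with $pK=K$, as $p$ is invertible in ${\mathbb Z}_q$; hence
${\mathbb Z}\oplus{\mathbb Z}_q$, with ${\mathbb Z}$ discrete, is not
to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.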
\begin{example}\label{ex:P+S}\rm
Let $S:={\mathbb Z}(p)^{(\N)}$ and $P:={\mathbb Z}(p)^{\N}$
and form $G:= S\oplus P$, the
topological direct sum. Let $\iota:S\to P$ be the canonical dense embedding
of $S$ in $P$ and $K\cong{\mathbb Z}(p)$ a finite subgroup
of $P$ intersecting $\iota(S)$ trivially. Such $K$ can be provided
by the subgroup of all constant maps $\N\to {\mathbb Z}(p)$.
Define closed subgroups $C:= S\oplus K$, $A:= S$, and,
$B:=\{s+\iota(s):s\in S\}$. Then $B$ is the graph of the function $\iota$
and hence, being homeomorphic to
the discrete group $S$, a discrete subgroup of $G$.
Then, for $x$ to belong to $B\cap C$ it is necessary and sufficient
that there are $s,s'\in S$ and $k\in K$ with
\[x=s+\iota(s)=s'+k.\]
Since $K\cap\iota(S)=\{0\}$ we must have $\iota(s)=s=k=0$.
Hence $B\wedge C=\{0\}$.
Since $A+B=S+\iota(S)$ and $\overline{\iota(S)}=P$
one finds $A\vee B=\overline{A+B}=S+P=G$. Consequently
$A\vee(B\wedge C)=A=S$, whereas $(A\vee B)\wedge C=C=S\oplus K$,
so $G$ is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\end{example}
For describing the next examples, and also later, for the
proof of Theorem \ref{t:M-abelian-p}, we need to recall the notion of
{\em local product} of lo\-cal\-ly com\-pact\ groups.
\begin{definition}\label{d:loc-prod}
\rm Let $(G_j)_{j\in J}$ be a family of
lo\-cal\-ly com\-pact\ groups and assume that for each $j\in J$ the group
$G_j$ contains a compact open subgroup $C_j$. Let $P$ be the
subgroup of the cartesian product of the $G_j$ containing
exactly those $J$-tuples $(g_j)_{j\in J}$ of elements $g_j\in G_j$
for which the set $\{j\in J: g_j\notin C_j\}$ is finite. Then
$P$ contains the cartesian product $C:=\prod_{j\in J} C_j$ which is
a compact topological group with respect to the Tychonoff topology.
The group $P$ has a unique group topology with respect to which $C$
is an open subgroup. Now
the {\em local product} of the family $((G_j,C_j))_{j\in J}$ is the group $P$
with this topology, and it is denoted by
\[P=\prod_{j\in J}^{\rm loc}(G_j,C_j).\]
\end{definition}
Finally, when $G=\prod^{\rm loc}_{i\ge1}(G_i,C_i)$
is a local product and
$G_i\cong A$ and $C_i\cong B$ algebraically and topologically then
we shall denote $G$ by $(A,B)^{{\rm loc,}\,\N}$.
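A familiar instance, recorded here merely for orientation: for a fixed
prime $p$ the group $(\Q_p,{\mathbb Z}_p)^{{\rm loc,}\,\N}$ is the local
product of countably many copies of $\Q_p$ taken along the compact open
subgroups ${\mathbb Z}_p$; letting instead the prime vary,
$\prod^{\rm loc}_{p}(\Q_p,{\mathbb Z}_p)$ is the group of finite rational
adeles.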
\begin{example}\label{ex:p-quadrat}\rm
Let us show that the local product
\[L:=(\Z(p^2),p\Z(p^2))^{{\rm loc,}\, \N}\]
cannot be to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
We are going to show that a closed subgroup of a Hausdorff quotient group
of $L$ is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ and hence $L$ is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ by Remark~\ref{rem:tMg}.
Select an infinite subset $I$ of $\N$ with infinite
complement $J:= \N\setminus I$. Then there is
a topological and algebraic isomorphism
\[L\cong L_I\oplus L_J,\]
where $L_I:= ({\mathbb Z}(p^2),p{\mathbb Z}(p^2))^{{\rm loc},\, I}$ and
$L_J:=({\mathbb Z}(p^2),p{\mathbb Z}(p^2))^{{\rm loc},\, J}$ are both algebraically
and topologically isomorphic to $L$. The socles $S_I$ and $S_J$
of respectively $L_I$ and $L_J$ are compact and open therein and
isomorphic to ${\mathbb Z}(p)^\N$. Since $L_I/S_I\cong {\mathbb Z}(p)^{(\N)}$
the subquotient $(L_I/S_I)\oplus S_J$ of $L$
is algebraically and topologically
isomorphic to ${\mathbb Z}(p)^{(\N)}\oplus {\mathbb Z}(p)^\N$, which is not to\-po\-lo\-gi\-cal\-ly mo\-du\-lar,
as has been shown in Example \ref{ex:P+S}.
\end{example}
\begin{lemma}\label{l:pC=C}
Let $C$ be a compact monothetic group which is not a torsion group.
Then there are a prime $p$ and a monothetic subgroup $K$ of $C$
with $pK=K$ such that $K$ is not torsion.
\end{lemma}
\begin{proof}
If the connected component $C_0$ of $C$ is not trivial we may choose
$K:= C_0$. Since $C_0$ is divisible, for any prime $p$,
$pK=K$. Since the weight of $C_0$ does not exceed the weight of $C$
infer from \cite[(25.17) Theorem]{hewitt-ross1-book}
that $C_0$ is monothetic.
Next assume $C_0=\{0\}$. Then $C$ is profinite and abelian and
hence pronilpotent. Making
use of \cite[Proposition 2.3.8]{ribes-zalesskii}, we deduce
that $C=\prod_pC_p$
is the cartesian product of its \psyl p subgroups.
If there is a prime $p$ with $C_p=\{0\}$
then $K:= C$ serves the purpose.
Now assume that $C_p\neq\{0\}$ holds for all primes $p$. Select any prime
$p$ and note that by Proposition \ref{p:cpt-tg} the
closed subgroup $K:= \prod_{q\neq p}C_q$ cannot be torsion.
Certainly $pK=K$.
\end{proof}
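For example, when $C={\mathbb Z}_p$ the connected component is trivial and
$C_q=\{0\}$ for every prime $q\neq p$, so the proof returns $K=C$ together
with any such prime $q$; indeed $q{\mathbb Z}_p={\mathbb Z}_p$, as $q$ is
invertible in ${\mathbb Z}_p$.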
\begin{lemma}\label{l:not-tM}
Let $G$ be a \lc\ a\-bel\-ian\ \tM\ group. Then
\begin{enumerate}[\rm(a)]
\item The connected component $G_0$ of $G$ is compact
and $\mathop{\rm comp}\nolimits(G)$ is an open subgroup of $G$.
\item
If $\mathop{\rm comp}\nolimits(G)$ is a proper subgroup of $G$ then $\mathop{\rm comp}\nolimits(G)=\mathop\mathrm{tor}(G)$.
\item If $U$ is any open compact subgroup then $G/U$ has finite
${\mathbb Z}$-rank.
\end{enumerate}
\end{lemma}
\begin{proof}
(a)
Since $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ so is, by Remark \ref{rem:tMg},
the connected component $G_0$. By the Vector Splitting Theorem (see
\cite[Theorem 7.57]{hofmor}) there is $n\ge0$ and a compact
connected subgroup $K$ such that
\[G_0={\mathbb R}^n\oplus K.\]
If $G_0$ were not compact then $n>0$ and hence there is a closed
subgroup $R\cong{\mathbb R}$ of $G_0$ which must be to\-po\-lo\-gi\-cal\-ly mo\-du\-lar,
contradicting the findings in Example \ref{ex:reals}.
Hence $G_0=K$ is compact.
The factor group $G/G_0$ is totally disconnected and thus
contains a compact open subgroup, say $C$.
The latter gives rise to an open
compact subgroup of $G$.
Therefore $\mathop{\rm comp}\nolimits(G)$ is open.
\medskip
(b)
Since $\mathop{\rm comp}\nolimits(G)<G$, (a) implies that
the factor group $G/\mathop{\rm comp}\nolimits(G)$ is discrete
and tor\-sion-free. Therefore one can find a discrete subgroup $Z\cong{\mathbb Z}$ of $G$.
Suppose $G$ to contain an element $c$ with $C:=\gen c$
compact and not torsion.
Then $C$ is an infinite monothetic subgroup.
Lemma \ref{l:pC=C} provides a prime $p$ and a monothetic infinite
subgroup $K$ of $C$ with $K=pK$. Remark \ref{rem:tMg} shows that
the closed subgroup $Z\oplus K$ must be to\-po\-lo\-gi\-cal\-ly mo\-du\-lar. This leads to a contradiction
in light of Example \ref{ex:pK=K}.
\medskip
(c)
If, for some open compact subgroup $U$ of $G$, the factor group
$G/U$ has infinite ${\mathbb Z}$-rank then $G$ contains a closed subgroup
$S\cong {\mathbb Z}^{(\N)}\oplus U$. By (b) $U$ is a compact torsion group
and by Proposition \ref{p:cpt-tg} it
is the cartesian product $U=\prod_{p\in\sigma}U_p$ of compact finite
exponent $p$-groups for a finite set $\sigma$ of primes.
Since $U$ is assumed to be infinite (else $G$ would be discrete)
there is $p\in\sigma$ with $U_p/pU_p$ infinite. Since $G$ is
by assumption to\-po\-lo\-gi\-cal\-ly mo\-du\-lar, so is $R:={\mathbb Z}^{(\N)}\oplus U_p/pU_p$.
Let $(V_i)_{i\in\N}$ be a properly descending sequence of open subgroups of
$R$ all contained in $U_p/pU_p$ and let
$V:=\bigcap_{i\ge1}V_i$ denote the intersection.
Then $(U_p/pU_p)/V$ is first countable and has exponent $p$.
Therefore $(U_p/pU_p)/V\cong {\mathbb Z}(p)^\N$ and hence
topologically and algebraically
\[R/V\cong {\mathbb Z}^{(\N)}\oplus {\mathbb Z}(p)^{\N}.\]
Letting $A={\mathbb Z}^{(\N)}$ denote the first direct summand
we may factor $pA$ and obtain the to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ $p$-group
\[(R/V)/pA\cong {\mathbb Z}(p)^{(\N)}\oplus{\mathbb Z}(p)^{\N}\]
contradicting our findings in Example \ref{ex:P+S}.
\end{proof}
If a periodic group is the topological direct sum
of groups $G$ and $H$ with
$\pi(G)\cap\pi(H)=\emptyset$, then in order to prove that $G\oplus H$ is
strong\-ly \tqh\ it is enough to ensure that each factor is strong\-ly \tqh.
\begin{lemma}\label{l:coprime}
If $G$ and $H$ are both periodic \stqh\ group s and $\pi(G)\cap\pi(H)=\emptyset$
then their topological direct sum $G\oplus H$ is a \stqh\ group.
\end{lemma}
\begin{proof}
Put $\pi:= \pi(G)$ and $\sigma:= \pi(H)$.
For closed subgroups $X$ and $Y$
there is a corresponding decomposition
\[X=X_\pi\oplus X_\sigma, \ \ Y= Y_\pi\oplus Y_\sigma.\]
Then $X_\pi+Y_\pi$ and $X_\sigma+Y_\sigma$ are both closed subgroups in
respectively $G$ and $H$ by our assumptions. Hence
\[ X+Y=(X_\pi+ Y_\pi)\oplus(X_\sigma+Y_\sigma) \]
is a closed subgroup of $G\oplus H$.
\end{proof}
The following fact has already been observed in \cite[Remark 2]{muk5}.
\begin{lemma}\label{l:cyclic-stqh}
Let $I$ be a nonempty index set and select
for every $i\in I$ a prime $p_i$. Set
\[A:= \bigoplus_{i\in I}{\mathbb Z}(p_i)\oplus \prod_{i\in I}{\mathbb Z}(p_i).\]
Then $A$ is strong\-ly \tqh\ if, and only if, $I$ is finite.
\end{lemma}
\begin{proof}
Suppose that $A$ is strong\-ly \tqh.
Let $c_i$ be a topological generator of ${\mathbb Z}(p_i)$
in the profinite factor
\[C:= \prod_{i\in I}{\mathbb Z}(p_i)\]
of $A$
and let $b_i$ be a generator of ${\mathbb Z}(p_i)$ in the discrete factor
\[B:= \bigoplus_{i\in I}{\mathbb Z}(p_i)\]
of $A$. Then
\[A=B\oplus C\]
where $C$ is a compact open subgroup of $A$. For $i\in I$
set $a_i:= b_i+c_i$ and define $X:=\gen{a_i:i\in I}$.
View $X$ as the graph of the
obvious injection $\iota\colon B\to C$ in $B\times C$.
The graph of a continuous function is
always homeomorphic to its domain and
therefore $X$ is a discrete subgroup of $A$.
Set $Y:= B$. Then
\[X+Y=\gp{a_i:i\in I}+\gp{b_i: i\in I}=B+\iota(B)\]
is dense in $B\oplus C=A$
and is therefore closed if, and only if, $I$ is finite.
Conversely, when $I$ is finite the group $A$ is finite and hence
trivially strong\-ly \tqh.
\end{proof}
\begin{remark}\rm\label{r:tqh-stqh}
If $I$ is countably infinite and
the primes $p_i$ are pairwise different
then we will show later, in Theorem \ref{t:M-periodic},
that $A$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ and not strong\-ly \tqh.
Note that $A$ is the projective limit with
compact kernels of discrete \stqh\ group s.
\end{remark}
The {\em good}
properties of the class of \stqh\ group s are the
following ones.
\begin{proposition}\label{p:stqh-class}
Let $\mathfrak X$ be either the class of \tM\ group s or of \stqh\ group s.
Then $\mathfrak X$ is closed under
\begin{enumerate}[\rm (a)]
\item passing to closed subgroups; and
\item passing to factor groups modulo closed normal subgroups.
\end{enumerate}
\end{proposition}
For its proof we first establish an elementary fact.
\begin{lemma}\label{l:quotient-closed}
Let $G$ be a topological group and $N$ a closed normal subgroup.
Then any subgroup $S$ containing $N$ is closed in $G$ if and only if
$S/N$ is a closed subgroup of $G/N$.
\end{lemma}
\begin{proof}
Let $\phi:G\to G/N$ denote the quotient map. Then $\phi(S)\subseteq G/N$
is closed if, and only if, $S+N=\phi^{-1}(\phi(S))$ is closed in $G$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p:stqh-class}]
When $\mathfrak X$ is the class of all \tM\ group s then
(a) and (b) follow from the fact that the absence of a sublattice
isomorphic to $E_5$ in the lattice of closed subgroups is inherited
by closed subgroups and quotient groups (Remark \ref{rem:tMg}).
\medskip
We turn to $\mathfrak X$ being the class of \stqh\ group s.
Let $G\in\mathfrak X$ and $L$ a closed subgroup. Then the sum
of any two closed subgroups of $L$ is a closed subgroup of $G$ and hence
of $L$. Thus $L$ is strong\-ly \tqh.
That $G/N$ is strong\-ly \tqh\ follows from Lem\-ma \ref{l:quotient-closed}.
\end{proof}
A fact about certain $p$-groups of exponent $p^2$ and the
local product
\[L:=(\Z(p^2),p\Z(p^2))^{{\rm loc,}\, \N} \tag{*}\label{eq:p2p}\]
will be needed.
From Example \ref{ex:p-quadrat} it should be clear that
we are looking for conditions which ensure that a locally compact
abelian $p$-group has a quotient containing a subgroup
isomorphic to $L$ in Eq.~(\ref{eq:p2p}).
The following discussion serves this purpose.
The group $L$ has two significant components, namely,
\[P:= p{\mathbb Z}(p^2)^\N, \]
the socle of $L$, a compact open characteristic subgroup, and
\[F:={\mathbb Z}(p^2)^{(\N)},\]
a noncharacteristic dense countable subgroup
such that
\[L= F+P, \mbox{ and } p\.F=F\cap P, \mbox{ dense in }P.\]
We observe that we have a basis of compact open zero neighborhoods
\[P_m:= p{\mathbb Z}(p^2)^{\{n\in\N:m\le n\}},\ m\in\N\]
in $L$, and an ascending union of discrete
finite subgroups $F_0=\{0\}$ and
\[F_m:={\mathbb Z}(p^2)^{\{n\in\N: n<m\}},\ m\in\N\]
such that
\begin{align*}
F&=\bigcup_{m\in\N}F_m,\\
L&=\bigcup_{m\in\N}(F_m\oplus P_m)={\rm colim}_{m\in\N}(F_m\oplus P_m),\\
p\.F_m\oplus P_m&=P \quad\mbox{for all }m\in\N.
\end{align*}
Finding a copy of $L$ in a $p$-group $G$ now amounts to finding
cyclic subgroups $Z_k$ isomorphic to ${\mathbb Z}(p^2)$ matching these configurations.
\begin{lemma}\label{l:p2p}
Assume that a locally compact abelian $p$-group $G$
has a basis of zero neighborhoods consisting of compact open subgroups
$V_1\supseteq V_2\supseteq V_3\supseteq\cdots$
and a family $\{Z_k:k\in\N\}$ of subgroups with
isomorphisms $\zeta_k\colon {\mathbb Z}(p^2)\to Z_k$
satisfying the following conditions
\begin{enumerate}[\hspace*{1em}\rm(a)]
\item $V_1$ has exponent $p$.
\item The sum $Z_1+\cdots+Z_m+V_{m+1}$ is direct (algebraically and
topologically) for $m=1,2,\dots$.
\item $p\.Z_k\subseteq V_k$ for $k=1,2,\dots$.
\end{enumerate}
\noindent Set $F_G:=\sum_{k\in\N}Z_k$, and $L_G:=\overline{F_G}$,
further $P_G:=\overline{p\.F_G}\subseteq V_1$.
Then $L_G$ is equal to the sum $F_G+P_G$ and all of the following hold:
\begin{enumerate}[\rm(i)]
\item there is an algebraic isomorphism
$\eta_F\colon {\mathbb Z}(p^2)^{(\N)}\to F_G$ such that the restriction
to the $k$-th summand is $\zeta_k$.
\item There is an isomorphism of compact groups
$\eta_P\colon p{\mathbb Z}(p^2)^\N \to P_G$
such that the restriction to the $k$-th factor is $\zeta_k|p{\mathbb Z}(p^2)$.
\item There is an isomorphism of topological groups
$\eta_L\colon L\to L_G$.
\end{enumerate}
\end{lemma}
\begin{proof}
There is no loss of generality to assume $G=L_G$.
For each $k$ we have an isomorphism $\zeta_k\colon{\mathbb Z}(p^2)\to Z_k$
by the definition of $Z_k$. Conclusion (i) follows from Assumption (b).
Since $F_G$ is dense in $L_G=G$ we have
that $F_G\cap V_1$ is dense in the open set $V_1$,
i.e., $\overline{F_G\cap V_1}=V_1$.
Since $V_1$ has exponent $p$ the equation $F_G\cap V_1=pF_G$
follows from (c).
Then
\begin{equation}\label{eq:FGV1}
V_1=\overline{F_G\cap V_1}=\overline{pF_G}=P_G \end{equation}
is open in $G$. As a consequence
by the density of $F_G$ in $L_G$ we have
\begin{equation}G=F_G+P_G.\label{eq:FGPG}\end{equation}
Furthermore,
\[(\forall k\in\N)\, Z_k\cap P_G=Z_k\cap V_1\cap F_G=Z_k\cap pF_G=pZ_k.\]
We now prove (ii).
Let us set $F_{G,k}:= \bigoplus_{1\le j\le k}Z_j$.
We claim that for any $k\ge1$
\[P_G=pF_{G,k}\oplus V_{k+1}.\]
Passing in (c) on both sides to the union and noting
that $(V_l)_{l\ge1}$ is decreasing one obtains
\[\bigcup_{l\ge1}pZ_l=
\left(\bigcup_{1\le l\le k}pZ_l\right)\cup\left(\bigcup_{l\ge k+1}pZ_l\right)
\subseteq
\left(\bigcup_{1\le l\le k}pZ_l\right)\cup \left(\bigcup_{l\ge k+1}V_l\right)
=
\left(\bigcup_{1\le l\le k}pZ_l\right)\cup V_{k+1}.\]
Observing that the set on the left hand side generates $pF_G$
and, taking (b) into account, one arrives at
$pF_G\le pF_{G,k}\oplus V_{k+1}$.
Hence $P_G=\overline{pF_{G}} \le pF_{G,k}\oplus V_{k+1}$.
For proving the converse containment, we take Eq.~(\ref{eq:FGV1}) into
account and observe
\[pF_{G,k}\oplus V_{k+1}\le pF_G+V_1=P_G.\]
Thus,
\[P_G=pF_{G,k}\oplus V_{k+1},\]
that is, there is a projection $p_k\colon P_G\to pF_{G,k}$,
and for each $k\ge2$ there is a canonical
projection $\phi_k:pF_{G,k}\to pF_{G,k-1}$ with kernel $pZ_k$.
So $(pF_{G,k},\phi_k)_{k\in\N}$ forms
an inverse system with projective limit
$\varprojlim_k pF_{G,k}= \prod_{k\ge1}pZ_k$.
Let us prove the equality \[\phi_k\circ p_k=p_{k-1}, \ \ k\ge2.\]
As $P_G=pF_{G,k}\oplus V_{k+1}$ we may decompose $x\in P_G$
as $x=pf+v$ for $f\in F_{G,k}=\bigoplus_{1\le j\le k}Z_j$ and $v\in V_{k+1}$.
Therefore
\[(\phi_k\circ p_k)(pf+v)=\phi_k(pf).\]
Decomposing $f=f_1+z_k$ for some $f_1\in F_{G,k-1}$ and $z_k\in Z_k\le V_k$,
the expression on the right yields
\[\phi_k(pf)=\phi_k(pf_1+pz_k)=pf_1=p_{k-1}(pf_1+pz_k+v)=p_{k-1}(x),\]
as needed.
Therefore, by the universal property of the limit,
there is a unique morphism
\[\phi\colon P_G\to \varprojlim_kpF_{G,k}=\prod_k pZ_k. \]
Since all morphisms $(p_k)_{k\ge1}$ are surjective, so is
$\phi$, and since these morphisms separate the points,
$\phi$ is an isomorphism of compact groups.
By the definition of $Z_k$ we have an isomorphism
$\alpha\colon p{\mathbb Z}(p^2)^\N\to \prod_{k\ge1}pZ_k$ so that the
restriction and corestriction to the $k$-th factor agrees
with $\zeta_k|p{\mathbb Z}(p^2):p{\mathbb Z}(p^2)\to pZ_k$.
Thus, as has been claimed,
$\eta_P=\phi^{-1}\circ\alpha\colon p{\mathbb Z}(p^2)^\N\to P_G$
is an isomorphism
mapping the $k$-th factor of $p{\mathbb Z}(p^2)^\N$ to $pZ_k\subseteq P_G$.
For a proof of (iii) denote ${\mathbb Z}(p^2)^{(\N)}$ by $F$ and note
that it is a free ${\mathbb Z}(p^2)$-module. Therefore there is a homomorphism
$\eta_F:F\to \sum_{k\ge1}Z_k$ which restricts to $\zeta_k$ on the
$k$-th direct summand of $F$. Take an element
$z=(z_n)_{n\in\N}\in {\mathbb Z}(p^2)^{(\N)} \cap (p{\mathbb Z}(p^2))^{\N}$.
Then $\eta_F(z)=\sum_{n\in\N} \zeta_n(z_n)$ by (i) in view of the
definition of a direct sum. Therefore the restrictions of respectively $\eta_F$
and $\eta_P$ from (ii)
to $pF=(p{\mathbb Z}(p^2))^{(\N)}$ agree.
Setting $P:= (p{\mathbb Z}(p^2))^{\N}$ one observes that
$\eta_F$ and $\eta_P$ agree on $F\cap P$ and thus define
a unique algebraic morphism $\eta\colon L=F+P\to F_G+P_G=G$,
where $F_G$ and $P_G$ are as in Eq.~(\ref{eq:FGPG}).
Since $\eta$ agrees on the open subgroup $P$ with the continuous and
open map $\eta_P$ it is continuous and open.
Since $\eta_F$ is an isomorphism,
$F_G$ is in the image of $\eta$. Similarly, $P_G$ is in the image of
$\eta$ as well. Hence $\eta$ is surjective. If $\eta(z)=0$, and
$z\in F$, then $0=\eta(z)=\eta_F(z)$ implies $z=0$ since $\eta_F$
is injective. If $z\in P$ then $0=\eta_F(z)=\eta_P(z)$ and
by (ii) we must have $z=0$.
If $z=f+v$ for $f\in F$ and $v\in P$
then $0=\eta(z)=\eta(f)+\eta(v)$ implies
$0=p\eta(f)=\eta(pf)$ so that from $pf\in P$ and $\eta\vert_{P}=\eta_P$
we may deduce $pf=0$. Therefore $f$ itself belongs to $P$
and thus $\eta(f+v)=0$ implies $z=f+v=0$.
Hence $\eta$
is injective and thus is an isomorphism of topological groups.
This completes the proof.
\end{proof}
\begin{proposition}\label{p:findim-neu}
Let $U$ be a closed totally disconnected subgroup of
a compact connected $n$-dimensional abelian group $G$. Then,
for every $p\in\pi(G)$, the \hbox{$p$-rank}\ of the \psyl p subgroup
$U_p$ of $U$ is bounded by $n$.
In particular, every subgroup of $G$ of finite exponent is finite.
\end{proposition}
\begin{proof}
By \cite[Corollary 8.24(iv)]{hofmor} we have $\dim(G/U)=\dim(G)=n$.
Observing that $G$ is connected, duality applied to the
exact sequence
\[\{0\}\to U\to G\to G/U\to \{0\}\]
renders an exact sequence
\[\{0\}\to \widehat{G/U}\to \hat G\to \hat U\to\{0\}\]
in which $\widehat{G/U}$ and $\hat G$ are tor\-sion-free\ groups of ${\mathbb Z}$-rank $n$
and $\hat U$ is a discrete torsion group.
Since $\hat G$ is a subgroup of $\Q^n$ and $\widehat{G/U}$ must contain
a subgroup $L\cong{\mathbb Z}^n$ it follows that $\hat U$ must be a subgroup
of a quotient of
\[\Q^n/L\cong \bigoplus_p\prf p^n.\]
Therefore the \hbox{$p$-rank}\ of $\Q^n/L$ does not exceed $n$. Hence
$\hat U$ has \hbox{$p$-rank}\ not exceeding $n$.
Recalling from \cite[Definition 3.1]{HHR-comfort} that the \hbox{$p$-rank}\ of
$U_p$ is precisely the \hbox{$p$-rank}\ of the socle of $\hat U_p$ we arrive
at $\mathop\mathrm{rank}\nolimits_p(U_p)\le n$, as needed.
\medskip
For proving the second statement, let $E$ be a subgroup of $G$ of
finite exponent, say $e$. Then its closure $\overline E$ also has
exponent $e$ and thus there is no loss of generality to assume that
$E$ is closed and hence compact.
Therefore Proposition \ref{p:cpt-tg} implies that
$E=\prod_{p\in S}E_p$ for a finite set $S$ of primes and,
moreover, $E_p$ has finite exponent.
By the first part of the proof we know that $\mathop\mathrm{rank}\nolimits_p(E_p)$ must
be finite, and therefore, $E_p$ is finite for every $p\in S$, and
so is $E$.
\end{proof}
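The one-dimensional case is already instructive: every closed totally
disconnected subgroup of $G={\mathbb R}/{\mathbb Z}$ is finite cyclic, and a
subgroup of exponent $e$ is contained in
\[\{x\in{\mathbb R}/{\mathbb Z}:ex=0\}=\tfrac1e{\mathbb Z}/{\mathbb Z}\cong{\mathbb Z}(e),\]
in accordance with the proposition.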
\begin{corollary}\label{c:findim-neu}
Suppose $G$ is a \lc\ a\-bel\-ian\ group with compact finite dimensional connected
component $G_0$ and suppose that $\mathop\mathrm{tor}(G/G_0)$ is a discrete subgroup
of $G/G_0$ and has finite exponent, say $e$.
Then $E:=\gp{x\in G:ex=0}$ is a discrete subgroup of $G$.
\end{corollary}
\begin{proof}
By \cite[(5.23) Lemma]{hewitt-ross1-book} the isomorphism
$(E+G_0)/G_0\to E/(E\cap G_0)$ maps open sets to open sets
and since $(E+G_0)/G_0$
is discrete we may conclude that $E/(E\cap G_0)$ is discrete.
Proposition \ref{p:findim-neu} shows that $F:= E\cap G_0$ is finite.
Since $E$ is a closed totally disconnected subgroup of $G$ it contains
a compact open subgroup $V$ with $V\cap F=\{0\}$. Therefore the
discrete compact and hence finite group $(V+G_0)/G_0$
maps onto $V/(V\cap G_0)$ by the above map showing that $V$ itself is
finite. Thus $E$ is a discrete subgroup of $G$.
\end{proof}
\begin{lemma}\label{l:XYU}
Let $G$ be a \lc\ a\-bel\-ian\ group and $X$ and $Y$ be closed subgroups.
Let $U_X$ and $U_Y$ be compact subgroups of respectively $X$ and $Y$.
Then $\widetilde X:= X+U_Y$ and $\widetilde Y:= Y+U_X$
are closed and $\widetilde X+\widetilde Y=X+Y$.
Letting $K:= U_X+U_Y$, the subgroup
$X+Y$ is closed in $G$
if, and only if, $\widetilde X/K+\widetilde Y/K$ is closed in $G/K$.
If there is a compact subgroup $U$ with $U_X=X\cap U$ and
$U_Y=Y\cap U$ then we have $\widetilde X\cap U=\widetilde Y\cap U=K$. If, in
addition, $U$ is open, then $X/(X\cap U)$ is a discrete
subgroup of $G/(X\cap U)$.
\end{lemma}
\begin{proof}
Since $K=U_X+U_Y$ is the sum of
compact subgroups of $G$ it is compact. Certainly
\[\widetilde X+\widetilde Y=X+U_Y+Y+U_X=X+U_X+Y+U_Y=X+Y.\]
Now the result follows from Lemma \ref{l:quotient-closed}.
For proving the second statement, we only show that $\widetilde X\cap U=K$,
as the equality $\widetilde Y\cap U=K$ can be proved along the same lines.
By construction, $K\le \widetilde X\cap U$. Now fix $z\in \widetilde X\cap U$.
Then there are $x\in X$ and $y\in Y\cap U=U_Y$ with $z=x+y$.
Since $y\in K\le U$ and $z\in U$ we can conclude $x\in X\cap U=U_X$. Hence
$z=x+y\in U_X +K=K$. Thus $\widetilde X\cap U=K$.
Suppose that $U$ is compact open. Then $(X+U)/U$ is discrete and
\cite[(5.32) Theorem]{hewitt-ross1-book}
implies that the isomorphism $(X+U)/U\to X/(X\cap U)$ maps open
subsets of $(X+U)/U$ to open subsets of $X/(X\cap U)$.
Hence $X/(X\cap U)$ is a discrete subgroup of $G/(X\cap U)$.
\end{proof}
\begin{lemma}\label{l:XK}
Let $G$ be a \lc\ a\-bel\-ian\ group.
Suppose $K$ is a compact subgroup of $G$ and $X$ a closed subgroup of $G$.
Then $X+K$ is closed in $G$. Furthermore, $X$ is compact
if, and only if, $(X+K)/K$ is compact.
\end{lemma}
\begin{proof}
Certainly $X+K$ is closed and if $X$ is compact, so is $(X+K)/K$.
On the other hand $X+K$ is \lc\ a\-bel\-ian, so if $(X+K)/K$ is compact then so is $X+K$ by
\cite[(5.25) Theorem]{hewitt-ross1-book} and therefore $X$ is compact.
\end{proof}
\begin{lemma}\label{l:phi-kappa}
Let $H$ be a closed subgroup of a \lc\ a\-bel\-ian\ group $A$ satisfying the following
premises:
\begin{enumerate}[\rm(a)]
\item $A$ is periodic and $\phi:=\pi(A)$ is finite.
\item $E:=\mathop\mathrm{tor}(A)$ is discrete and has finite exponent, say $e$.
\item $A/E$ is finitely generated.
\end{enumerate}
Then, algebraically and topologically,
$H=Z\oplus E_H$ for $Z$ tor\-sion-free\ and finitely generated\ and $E_H=\mathop\mathrm{tor}(H)$.
\end{lemma}
\begin{proof}
We first claim that $A=X\oplus E$ for $X$ a finitely generated\ tor\-sion-free\ subgroup of $A$.
Since $\phi$ is finite and, algebraically and topologically,
$A=\bigoplus_{p\in\phi}A_p$ for $A_p$ the $p$-primary
subgroup of $A$, it will suffice to prove the claim under the additional
assumption that $\phi=\{p\}$ and thus $A$ is a \lc\ a\-bel\-ian\ $p$-group.
According to (c), $A/E$ is finitely generated. Hence Lemma \ref{l:Rb}
implies for some $r\ge0$ the topological isomorphism
$A/E\cong{\mathbb Z}_p^r$. Lifting topological
generators of $A/E$ to $A$ gives rise to a closed subgroup $X\cong{\mathbb Z}_p^r$
of $A$ with $A=X\oplus E$. The claim holds.
\medskip
As in the proof of the claim,
for establishing the statements about $H$,
we may assume that $H$ is a $p$-group for some prime $p$.
Certainly $E_H=\mathop\mathrm{tor}(H)=H\cap \mathop\mathrm{tor}(A)$ is discrete.
The continuous epimorphism
$\phi:A\to A/E\cong{\mathbb Z}_p^r$ restricts to a map $\chi:H\to A/E$ with kernel
$E_H=\mathop\mathrm{tor}(H)$. The induced homomorphism
$\bar\chi$ from the compact group $H/E_H$
to $A/E$ is continuous and renders a compact image, say $L$, in $A/E$.
It follows
from the compactness of $H/E_H$ and $A/E$ that
$\bar\chi:H/E_H\to L$ is an isomorphism of topological groups
and hence $H/E_H$ must
be finitely generated. Therefore, replacing in the above claim $A$ by $H$, we can deduce
the existence of a finitely generated\ subgroup $Z$ of $H$ with $H=Z\oplus E_H$
algebraically and topologically.
\end{proof}
\section{Proving the Main Results}
We shall proceed in three subsections, dealing first with $p$-groups,
then with totally disconnected ones, and finally with groups having
nontrivial connected components.
\subsection{$p$-Groups}
\label{ss:Mg-p-case}
Let us first describe the structure
of an abelian to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ $p$-group.
\begin{theorem}[Mukhin, see \cite{muk2}]\label{t:M-abelian-p}
A \lc\ a\-bel\-ian\ to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ $p$-group $G$ satisfies one of the following conditions:
\begin{enumerate}[\rm(a)]
\item $G$ contains an open compact subgroup of finite \hbox{$p$-rank}.
Then the torsion subgroup $T=\mathop\mathrm{tor}(G)$ of\, $G$ is discrete and $G/T$
has finite \hbox{$p$-rank}.
\item There is an open compact subgroup $U$ of $G$ with infinite \hbox{$p$-rank}.
Then $G/U$ has finite \hbox{$p$-rank}\ and $G$
contains a closed subgroup $D$ of finite $p$-rank
with compact factor group $G/D$. In particular, $D$ can be taken to be
$\div(G)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose first that the premise of (a) is valid, i.e., $\mathop\mathrm{rank}\nolimits_p(U)$ is
finite for some open compact subgroup $U$ of $G$. Then, taking
Proposition~\ref{p:finrank} into account, we may
replace $U$ by a suitable open subgroup
and achieve that $U$ is tor\-sion-free.
Therefore $\mathop\mathrm{tor}(G)$ must be a discrete subgroup.
As the tor\-sion-free\ group $G/\mathop\mathrm{tor}(G)$ contains
the open subgroup $(U+\mathop\mathrm{tor}(G))/\mathop\mathrm{tor}(G)$
and the latter has finite \hbox{$p$-rank}, Lemma \ref{l:GU-prank} implies that
$G/T=G/\mathop\mathrm{tor}(G)$ has finite \hbox{$p$-rank}.
\medskip
Let us assume the premise of (b) now.
Suppose, by way of contradiction, the \hbox{$p$-rank}\ of $G/U$ to be infinite.
We shall derive a contradiction from this by showing that a local
product isomorphic to the one in Eq.~(\ref{eq:p2p})
can be manufactured to be a factor group of a closed subgroup of $G$,
being to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ by Remark \ref{rem:tMg},
and then refer to Example \ref{ex:p-quadrat}.
\medskip
{\em Claim 1: One can assume $U$ to have exponent $p$.}
\medskip
Since $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar, so is $G/pU$. Consider instead of $G$ and $U$
the factor group $G/pU$ and its open compact subgroup $U/pU$
which still has infinite \hbox{$p$-rank}.
\medskip
{\em Claim 2: One can assume $U$ to be first countable and hence metric.
Furthermore, one can arrange $G/U\cong{\mathbb Z}(p)^{(\N)}$ and $U\cong{\mathbb Z}(p)^\N$.
Moreover, every open subgroup $V$ of $U$ is isomorphic to $U$. In particular, $V$ is metrizable, has infinite \hbox{$p$-rank}, and, has exponent $p$.}
\medskip
There is a strictly decreasing sequence $(V_k)_{k\ge1}$ of open
subgroups of $U$. Letting $V:=\bigcap_{k\ge1}V_k$
we pass from $(G,U)$ to $(G/V,U/V)$. Then $G/V$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ and
$U/V$ is first countable and infinite of exponent $p$.
Therefore $U/V$ has infinite \hbox{$p$-rank}\
and is thus isomorphic to ${\mathbb Z}(p)^{\N}$.
By replacing $G$ with the inverse image of $\text{\rm socle}(G/U)$ under
projection $G\to G/U$ we achieve $G/U\cong{\mathbb Z}(p)^{(\N)}$.
The ``moreover'' statement follows from $U\cong{\mathbb Z}(p)^{\N}$
and \cite[Theorem 4.3.8]{ribes-zalesskii}.
\medskip
{\em Claim 3: One can assume that $\overline{pG\cap U}$ is
open in $G$. Moreover, for any open subgroup $V$ of $U$ the
intersection $pG\cap V\neq\{0\}$.}
\medskip
If $T:=\overline{pG\cap U}$ is {\em not} open in $G$ it must have
infinite index in $U$. By Claim 2 the group $G$ has exponent $p^2$
and $pG\le U$. Therefore the factor group $G/T$
has exponent $p$ and, as $G/U$ is infinite, so is
$(G/T)/(U/T)$. Since $G/T$ may be considered a GF$(p)$-vector space
the open subgroup $U/T$ admits a complement, say $S$.
Thus algebraically and topologically
\[G/T\cong S\oplus U/T.\]
Select in $G/T$ a countable subgroup $\Sigma_p$ of $S$
isomorphic to ${\mathbb Z}(p)^{(\N)}$.
Since $U/T$ is a compact group of exponent $p$ (by Claim 1) and
metrizable (by Claim 2), it follows that it
is topologically isomorphic to ${\mathbb Z}(p)^\N$. Hence it turns out
that $\Sigma_p\oplus U/T$ is a factor group of a closed subgroup of $G$
and therefore is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ by Remark \ref{rem:tMg}.
This contradicts the finding in Example \ref{ex:P+S}.
\medskip
For proving the second statement, suppose, by way of contradiction
that $pG\cap V=\{0\}$ for some open subgroup $V$ of $U$.
The (purely algebraic) isomorphism
\[pG\cong pG/(pG\cap V)\cong (pG+V)/V\]
and the fact that $V$ and $pG+V$ are open subgroups of $U$ implies
that $pG$ must be finite. But then the open subgroup
$\overline{pG\cap U}$ would be finite and $G$ would be discrete, a
contradiction.
\medskip
{\em Claim 4: There is a sequence $(F_k,V_k)_{k\ge1}$
of pairs of compact subgroups of $G$ where
for all $k\ge1$
\begin{enumerate}[(1)]
\item $F_k$ is finite, $V_k$ is open, and, $F_k\cap V_{k+1}=\{0\}$; and
\item $F_k\cap V_k=\gp{px_k}$ for some nontrivial element $x_k\in G$; and
\item $F_k\subseteq F_{k+1}$ and $V_{k+1}\subseteq V_k$
and $\bigcap_{k\ge1}V_k=\{0\}$.
\end{enumerate}
}
\medskip
We proceed by induction on $k$ and recall that for all $k\ge1$
the subgroups $V_k$ of $U$ will be separable of infinite \hbox{$p$-rank}, and,
have exponent $p$ (see Claim 2).
For $k=1$ let $V_1$ be the open subgroup $\overline{pG\cap U}$ (see Claim 3).
Then there exists $x_1\in G$ of order $p^2$ with $px_1\in V_1$ and
we set $F_1:=\gp{x_1}$.
Suppose $(F_i,V_i)$ have been found for $1\le i\le k$.
Then $F_k$ is finite and hence there exists an open subgroup $V_{k+1}$
contained in $V_k$ with $F_k\cap V_{k+1}=\{0\}$.
By Claim 3 the intersection $pG\cap V_{k+1}\neq\{0\}$.
Therefore one can find
$x_{k+1}\in G$ of order $p^2$ with $px_{k+1}\in V_{k+1}$.
Set $F_{k+1}:= F_k\oplus \gp{x_{k+1}}$.
(1) and the first statement of (3)
are now clear from the construction.
For proving (2) suppose $x\in F_{k+1}\cap V_{k+1}$.
Then $x=f_k+\lambda x_{k+1}$ for some $f_k\in F_k$ and $0\le\lambda
\le p^2-1$.
Since $x\in V_{k+1}$ we must have $0=px=pf_k+p\lambda x_{k+1}$
and $pf_k=-\lambda px_{k+1}\in F_k\cap V_{k+1}=\{0\}$ (by the construction
of $V_{k+1}$).
Hence $\lambda=p\mu$ for some $0\le\mu\le p-1$, and consequently
$f_k=x-\mu px_{k+1}\in F_k\cap V_{k+1}=\{0\}$.
This implies $x=\lambda x_{k+1}=\mu px_{k+1}\in\gp{px_{k+1}}$.
Selecting at each step $V_i$ small enough
one can achieve the second statement in (3), namely
$\bigcap_{i\ge1}V_i=\{0\}$.
\medskip
Setting, for all $k\in\N$, $Z_k:=\gp{x_k}$
and $F:=\gp{Z_k:k\ge1}$ in Claim 4 shows that the assumptions of
Lemma \ref{l:p2p} hold. Therefore there is a closed
subgroup $L$ of $G$ topologically and algebraically isomorphic
to the group in Eq.~(\ref{eq:p2p}).
We have reached a contradiction and therefore the \hbox{$p$-rank}\ of
$G/U$ must be finite.
\medskip
For proving the remaining assertions of (b)
let us return to the original meaning of $G$ and its open compact
subgroup $U$ of infinite \hbox{$p$-rank}. By what we just proved,
the factor group $G/U$ has finite \hbox{$p$-rank}.
Thus, according to Proposition \ref{p:finrank},
$G/U\cong \prf p^m\oplus F$, for some $m\ge0$ and finite group $F$.
Replacing $U$ by the preimage of $F$ in $G$ allows us to assume $F=\{0\}$.
Lemma \ref{l:Rc} implies that $\hat G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ and
duality theory applied to the short exact sequence $U\to G\to\prf p^m$
implies that $\hat G$ must contain an open compact subgroup $\cong{\mathbb Z}_p^m$.
Hence $\hat G$ satisfies the premise of (a) and therefore $\mathop\mathrm{tor}(\hat G)$
is a discrete subgroup of $\hat G$ with $\hat G/\mathop\mathrm{tor}(\hat G)$
being tor\-sion-free\ of finite \hbox{$p$-rank}.
Therefore Proposition \ref{p:finrank}(5) implies the existence
of $k\ge0$ and $l\ge0$ such that
\[\hat G/\mathop\mathrm{tor}(\hat G)\cong \Q_p^k\oplus {\mathbb Z}_p^l.\]
Duality applies and yields topological isomorphisms
\[D:= \mathop\mathrm{tor}(\hat G)^\perp\cong (\hat G/\mathop\mathrm{tor}(\hat G))^{\hat{\phantom{m}}}
\cong \Q_p^k\oplus \prf p^l.\]
Hence $D$ is a closed divisible subgroup of $G$
having finite \hbox{$p$-rank}, with compact factor group $G/D$.
\end{proof}
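Both cases occur, as the following illustrations (combined with Theorem
\ref{t:mainA}) show: $G=\Q_p$ falls under (a), with $U={\mathbb Z}_p$ of
\hbox{$p$-rank}\ $1$, $\mathop\mathrm{tor}(G)=\{0\}$ discrete, and
$G/\mathop\mathrm{tor}(G)=\Q_p$ of finite \hbox{$p$-rank}; the compact group
$G={\mathbb Z}(p)^\N$ falls under (b), with $U=G$ of infinite \hbox{$p$-rank},
$G/U=\{0\}$, $\div(G)=\{0\}$, and $G/\div(G)$ compact. By Theorem
\ref{t:mainA} both groups are then even strong\-ly \tqh.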
\begin{lemma}\label{l:U-fg-stqh}
Let a \lc\ a\-bel\-ian\ $p$-group $G$ contain a finitely generated\ open subgroup $U$.
Then $G$ is strong\-ly \tqh.
\end{lemma}
\begin{proof} Since $G$ is periodic the finitely generated\ subgroup
$U$ is compact.
Making use of Proposition \ref{p:finrank}
and Theorem \ref{t:M-abelian-p}(a)
we can pass to a smaller open subgroup of $U$ and arrange that $U$
is tor\-sion-free. Moreover, since the smaller subgroup is open in
the finitely generated\ subgroup $U$
it is itself finitely generated\ (see e.g. \cite[Proposition 2.5.5]{ribes-zalesskii}).
Then $\mathop\mathrm{tor}(G)$ turns out to be a discrete, hence closed
subgroup of $G$.
Let $X$ and $Y$ be closed subgroups of $G$.
Since $U$ is compact and open, we can pass to a factor group of $G$
which still satisfies the premises of the lemma and, modifying
$X$ and $Y$ by means of
Lemma \ref{l:XYU}, achieve that $X$ and $Y$ are both discrete
subgroups of the modified group $G$. Therefore $X+Y$ is contained
in the discrete subgroup $\mathop\mathrm{tor}(G)$ and is thus closed in $G$.
\end{proof}
\begin{theorem}\label{th:tM=stqh p}
The following statements about a \lc\ a\-bel\-ian\ $p$-group $G$ are equivalent:
\begin{enumerate}[(a)]
\item $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\item $G$ is strong\-ly \tqh.
\end{enumerate}
\end{theorem}
\begin{proof}
In light of Lemma \ref{l:stqh<tM}, we only need to show that (b)
is a consequence of (a).
Thus assume (a) and let $U$ be an open compact subgroup of $G$.
If $\mathop\mathrm{rank}\nolimits_p(U)$ is finite then (b) is a consequence of Lemma \ref{l:U-fg-stqh}.
We assume from now on that the \hbox{$p$-rank}\ of $U$ is infinite.
Let $X$ and $Y$ be closed subgroups of $G$.
Making use of Lemma \ref{l:XYU} with $K:= X\cap U+Y\cap U$
by modifying $X$ and $Y$ and replacing $G$ by $G/K$
(which is still to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ by Proposition \ref{p:stqh-class})
we can arrange $X\cap U=Y\cap U=\{0\}$.
If, after this reduction, $\mathop\mathrm{rank}\nolimits_p(U)$ is finite then $G$ is strong\-ly \tqh\ by
Lemma \ref{l:U-fg-stqh}.
Otherwise the \hbox{$p$-rank}\ of $U$ is infinite and therefore
Theorem \ref{t:M-abelian-p}(b) implies that
the discrete group $G/U$ has finite \hbox{$p$-rank}.
Moreover, the embeddings $X\to G/U$ and
$Y\to G/U$ are embeddings of discrete
groups. Hence the \hbox{$p$-rank} s of $X$ and $Y$ are finite and therefore
the \hbox{$p$-rank}\ of $X+Y$ is finite (considered without topology).
Thus $X+Y$ has finite socle.
Making our open compact subgroup $U$ small enough and observing
that the \hbox{$p$-rank}\ of $U$ is then still infinite
allows us to assume that $\text{\rm socle}(X+Y)$ and
hence $X+Y$ intersect $U$ trivially.
Therefore $X+Y$ is a discrete subgroup of $G$
and hence is closed.
Thus (b) holds.
\end{proof}
\begin{lemma}\label{l:prf} Let $G$ be a \lc\ a\-bel\-ian\ $p$-group containing
a subgroup $D$ which algebraically is isomorphic to $\prf p^k$ for
some $k\ge1$. Then $D$ is a discrete and hence closed subgroup of $G$.
\end{lemma}
\begin{proof}
There is no loss of generality to assume $G=\overline D$. Let $U$ be
a compact open subgroup of $G$. We claim that $D\cap U$ must have finite
exponent. In order to see this we remark that $D\cap U$, as an abstract
group, has finite \hbox{$p$-rank}\ at most $k$ and thus, algebraically
\[D\cap U\cong\prf p^l\oplus F\]
for some $0\le l\le k$ and finite $F$. Since $U$ is a compact $p$-group it is
reduced and cannot contain a divisible subgroup. Hence $l=0$ and
$D\cap U=F$ is finite. Since $D\cap U$ is dense in $U$ we may
conclude that $U$
itself is finite and thus $D$ must be a discrete subgroup of $G$.
\end{proof}
We are ready for proving the first main result.
\begin{proof}[Proof of Theorem \ref{t:mainA}]
Let (a) be true. Then (b) follows from Theorem \ref{t:M-abelian-p}.
\medskip
Assume (b.1). This condition holds for closed subgroups and factor groups.
Let $X$ and $Y$ be any closed subgroups. Then, letting $U_X:=X\cap U$
and $U_Y:=Y\cap U$ for $U$ the given open compact subgroup, and taking
Lemma \ref{l:XYU} into account, one may pass to
the factor group $H:= G/K$ for $K:= U_X+U_Y$ and hence assume that $X$ and $Y$
intersect $U$ trivially. Hence $X$ and $Y$ can be assumed to be discrete
and are therefore torsion subgroups of $H$.
Since $\mathop\mathrm{tor}(H)$ is discrete conclude that
$X+Y$ is discrete and hence closed.
\ssk
Assume (b.2).
Let $X$ and $Y$ be any closed subgroups and, similarly as before,
setting $U_X:= U\cap X$ and $U_Y:= U\cap Y$, and, using
Lemma \ref{l:XYU}, replace $X$ and
$Y$ respectively by $X+U_Y$ and $Y+U_X$, and,
factor $U_X+U_Y$ in $G$, we can achieve that $X$ and $Y$ can
be assumed to be discrete subgroups of $G$.
If $U/(U_X+U_Y)$ has finite \hbox{$p$-rank}, we may
apply the reasoning of case (b.1). Let us assume now that, after factoring
$U_X+U_Y$, the group $U/(U_X+U_Y)$ still has infinite \hbox{$p$-rank}. Simplifying notation
we let $G$, $U$, $X$, and $Y$ denote the respective factor groups.
Since $(X+U)/U\cong X/(X\cap U)\cong X$ is discrete and has finite \hbox{$p$-rank}\ by
condition (b.2) and a similar statement holds for $Y$, we can assume
\[ X=D_X+F_X, \ \ Y=D_Y+F_Y\]
for finite groups $F_X$ and $F_Y$ and finite \hbox{$p$-rank}\ torsion divisible
subgroups $D_X$ and $D_Y$ of $G$. Since
\[ X+Y=(D_X+D_Y)+(F_X+F_Y)\]
it will suffice to prove
that $\Delta:= D_X+D_Y$ is closed and to note
that adding the finite summand $F_X+F_Y$ renders again a closed subgroup
of $G$. Since $\Delta$, as an abstract group,
is isomorphic to $\prf p^k$ for some $k\ge1$ it follows from Lemma \ref{l:prf}
that $\Delta$ and hence $X+Y$ is closed.
\ssk
Thus (b) implies (c).
\medskip
That (c) implies (a) has been shown in Theorem \ref{th:tM=stqh p}.
\end{proof}
More can be said if $G$ is torsion.
\begin{corollary}\label{c:M-abelian-p}
Let $G$ be a \lc\ a\-bel\-ian\ nondiscrete torsion $p$-group.
Then $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ if, and only if,
the maximal divisible subgroup $D:=\div(G)$ is discrete and
has finite \hbox{$p$-rank}\ and
there is a compact open, and hence reduced, subgroup $R$ such that
\[G=R\oplus D\]
algebraically and topologically.
\end{corollary}
\begin{proof}
We discuss the cases (a) and (b) in Theorem \ref{t:M-abelian-p}.
If the premise in (a) holds then, since $U$ is a compact torsion group
having finite exponent and finite \hbox{$p$-rank}, it is finite. Then
$G$ would have to be discrete, contrary to our assumptions.
Thus $G$ satisfies premise (b) in Theorem \ref{t:M-abelian-p}.
Therefore the \hbox{$p$-rank}\ of $G/U$ is finite
and $D$ is a finite \hbox{$p$-rank}\ divisible torsion subgroup -- hence $D$ is a
discrete subgroup of $G$ by Lemma \ref{l:prf}. Passing to a smaller open
compact subgroup of $U$ if necessary,
\cite[Proposition 2.5.5]{ribes-zalesskii}
ensures finite generation, and,
taking Proposition \ref{p:finrank}
into account, we can in addition assume $D\cap U=\{0\}$.
Using a result of R.~Baer (see \cite[Theorem 22.2]{fuchs1}), one can
find a reduced subgroup $R$ of $G$ containing the reduced subgroup
$U$ with $G=D\oplus R$. Necessarily $R$ is open in $G$ and $G=D\oplus R$
algebraically and topologically.
Hence $R$ is itself
compact.
Since $R$ is open and $D\cap R=\{0\}$ we infer that
$D$ is discrete.
\end{proof}
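For instance, the corollary exhibits $G=\prf p\oplus{\mathbb Z}(p)^\N$, with
$\prf p$ discrete and ${\mathbb Z}(p)^\N$ compact open, as
to\-po\-lo\-gi\-cal\-ly mo\-du\-lar: here $D=\div(G)=\prf p$ is discrete of
\hbox{$p$-rank}\ $1$, and $R={\mathbb Z}(p)^\N$ is compact, open, and reduced.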
\subsection{Totally Disconnected LCA-Groups}
We treat the case when $G$ is periodic first and later turn to
totally disconnected but not periodic $G$.
\begin{proof}[Proof of Theorem \ref{t:abelian-tor-stqh}]
(a)\implies(b).
Assume first that $A$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
Select an open compact subgroup, say $U$, of $A$.
Since $A$ is torsion, $U$ is a compact abelian torsion group,
and therefore the set $\phi:= \pi(U)$ must be finite.
Put $\delta:=\pi(A)\setminus\phi$.
From $A_\delta\cap U=\{0\}$ it follows that $A_\delta$ is a discrete
subgroup of $A$, proving (b.2).
Next observe that
\[A_\phi=\bigoplus_{p\in\phi}A_p\]
is a direct sum since $\phi$ is finite.
Corollary \ref{c:M-abelian-p}
(in conjunction with Theorem \ref{t:M-abelian-p})
implies that for each $p$ in $\phi$ there
is a decomposition
\[A_p=D_p\oplus V_p,\]
with $D_p$ a divisible finite $p$-rank subgroup of $A$ and $V_p$ compact.
Thus there is a decomposition
\[A_\phi=D_\phi\oplus V_\phi\]
where $D_\phi:=\bigoplus_{p\in\phi}D_p$
is a discrete divisible subgroup,
and,
$V_\phi:=\bigoplus_{p\in\phi}V_p$ is compact.
Thus also (b.1) is established.
\medskip
(b)\implies(c). Assume (b) to hold.
Apply Lemma \ref{l:coprime}
to $A_\delta$ and the finitely many factors $A_p$ for $p\in\phi$.
Then $A_\delta$ being discrete, is strong\-ly \tqh\
and by Corollary \ref{c:M-abelian-p} (in conjunction with Theorem
\ref{t:M-abelian-p})
so is $A_p$ for every $p$ in $\phi$.
\medskip
(c)\implies(a).
Assume (c) to hold.
Then, using Lemma \ref{l:stqh<tM}, (a) follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t:M-periodic}]
If $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar, then, for every $p\in\pi(G)$, the $p$-component
is a factor group of $G$ and hence, taking Remark \ref{rem:tMg}
into account, is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
Let us sketch how to prove the converse, for more details see
\cite{muk2}.
For any closed subgroup $H$ of $G=\prod^{\rm loc}_{p\in\pi}(G_p,C_p)$, where $\pi:=\pi(G)$, one has
\[H=\gen{H_p:p\in\pi}=\prod^{\rm loc}_{p\in\pi}(H_p,H_p\cap C_p).\]
It then follows that
\[X\vee Y=\gen{X_p\vee Y_p:p\in\pi} \ \ {\rm and} \ \
X\wedge Y=\gen{X_p\wedge Y_p:p\in\pi}.\]
Using these equalities and the fact that $X\le Z$ if, and only if,
$X_p\le Z_p$ holds for all $p\in\pi$, one derives from
\[\forall p\in\pi: \ \ (X_p\vee Y_p)\wedge Z_p=X_p\vee(Y_p\wedge Z_p)\]
that $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\end{proof}
\begin{proof}[Proof of Theorem \ref{t:mainB}]
Remark first that $G$ is totally disconnected and \lc\ a\-bel\-ian. Therefore
a compact open subgroup $U$ exists. Since $G$ is neither discrete
nor periodic $U$ is infinite.
(a)\implies(b):
\ssk
(b.1)
Since $G$ is not periodic $\mathop{\rm comp}\nolimits(G)$ is a proper subgroup of $G$.
Therefore Lemma \ref{l:not-tM}(b) shows that $\mathop\mathrm{tor}(G)=\mathop{\rm comp}\nolimits(G)$.
By Lemma \ref{l:not-tM}(c),
the ${\mathbb Z}$-rank of $G/U$ is finite. Since $U\le \mathop{\rm comp}\nolimits(G)$ both subgroups
are open and hence $G/U\to G/\mathop{\rm comp}\nolimits(G)$
is a quotient map of discrete groups.
This implies that the ${\mathbb Z}$-rank of $G/\mathop\mathrm{tor}(G)$
does not exceed the ${\mathbb Z}$-rank
of $G/U$ and is therefore finite.
\ssk
(b.2)
Since, by (b.1), $T=\mathop\mathrm{tor}(G)$ is a to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ torsion group it follows from
Theorem \ref{t:abelian-tor-stqh} that $T$ is strong\-ly \tqh.
Let us prove the extra statement
about $G/N$ with $N$ a closed subgroup of $T$.
Certainly $\mathop\mathrm{tor}(G/N)=T/N$ because $N\le T=\mathop\mathrm{tor}(G)$.
Since $N$ is a closed subgroup of $G$ it follows that $G/N$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
As $N\le T$ it follows that $G/N$ cannot be periodic.
Indeed,
by the second isomorphism theorem $G/T\cong (G/N)/(T/N)$
must be torsion free of finite ${\mathbb Z}$-rank and
$\mathop\mathrm{tor}(G/N)=T/N$ is strong\-ly \tqh\ by Proposition \ref{p:stqh-class}.
Thus the factor group $G/N$ will enjoy all the
properties listed in (b).
\medskip
(b) $\implies$ (c):
Fix closed subgroups $X$ and $Y$ of $G$. During the proof
we shall modify $X$ and $Y$ and factor
some closed subgroup $N$ of $\mathop\mathrm{tor}(G)$ with $N\le X\cap Y$.
By the additional statement in (b) $G/N$ still enjoys
the properties in (b) and thus $X+Y$ will be closed if, and only if,
$(X+Y)/N$ is closed in $G/N$, by Lemma \ref{l:quotient-closed}.
Thus we need to show that $X+Y$
is closed and first consider the cases:
\noindent $\alpha.)$ $X$ and $Y$ are both torsion.\medskip
\noindent $\beta.)$ $Y$ is torsion.\medskip
$\alpha.)$ Since $X+Y\le T=\mathop\mathrm{tor}(G)$ and the latter is strong\-ly \tqh\ by
(b), conclude that $X+Y$ must be closed.
(Alternatively, since $\phi$ is finite, one may conclude
from $X+Y=\bigoplus_{p\in\phi}(X_p+Y_p)$ being topologically
isomorphic to the direct product of the $p$-primary groups
$X_p+Y_p$ that $X+Y$ is closed.)
\medskip
$\beta.)$
We already know that
$\mathop\mathrm{tor}(X)+Y=\mathop\mathrm{tor}(X)+\mathop\mathrm{tor}(Y)$ is closed,
and we first remark that $X+Y$ is closed if, and only if,
$(X+(\mathop\mathrm{tor}(X)+Y))/\mathop\mathrm{tor}(X)$
and $\mathop\mathrm{tor}(X)$ are
closed, in light of Lemma \ref{l:quotient-closed}.
Therefore observing that
$X/\mathop\mathrm{tor}(X)=X/(\mathop\mathrm{tor}(G)\cap X)\cong (X+\mathop\mathrm{tor}(G))/\mathop\mathrm{tor}(G)$ is discrete
and of finite ${\mathbb Z}$-rank, we may factor
the closed subgroup $\mathop\mathrm{tor}(X)$ and hence assume that $X$ is tor\-sion-free, has
finite ${\mathbb Z}$-rank, and, that $Y$ is torsion. Moreover, due to the
additional statement in (b),
our modified group $G$ still satisfies (b).
Since $\mathop\mathrm{tor}(G)$ is open in $G$ and $X\cap \mathop\mathrm{tor}(G)=\{0\}$ we have that
$M:= X+\mathop\mathrm{tor}(G)$ is open and a direct sum $M=X\oplus\mathop\mathrm{tor}(G)$. Since
$Y$ is a closed subgroup of $\mathop\mathrm{tor}(G)$ deduce that $X\oplus Y$ is
closed in $M$ and hence in $G$.
\medskip
For finishing the proof of ``(b) $\implies$ (c)''
let $X$ and $Y$ now be arbitrary closed subgroups of $G$. Then
$X+\mathop\mathrm{tor}(Y)$ and $Y+\mathop\mathrm{tor}(X)$ are closed subgroups of $G$ and
$X+Y$ is closed if, and only if, $(X+\mathop\mathrm{tor}(Y))+(Y+\mathop\mathrm{tor}(X))$ is closed.
Since
\[\mathop\mathrm{tor}(X+\mathop\mathrm{tor}(Y))=\mathop\mathrm{tor}(Y+\mathop\mathrm{tor}(X))=\mathop\mathrm{tor}(X)+\mathop\mathrm{tor}(Y)\]
we may factor $\mathop\mathrm{tor}(X)+\mathop\mathrm{tor}(Y)$ and, taking the additional statement
in (b) into account,
in the sequel assume that
both, $X$ and $Y$, are tor\-sion-free\ and
hence discrete subgroups with finite ${\mathbb Z}$-rank.
Therefore, if $r$ is the sum of the ${\mathbb Z}$-ranks of $X$ and $Y$,
it turns out that every {\em algebraically}
finitely generated\ subgroup of $X+Y$
can be generated by at most $r$ elements. This property holds in particular
for $C:=(X+Y)\cap U$, where $U$ is any open compact subgroup of $G$.
Since $U\le\mathop\mathrm{tor}(G)$ it is a compact torsion group and has therefore
finite exponent; and so has its subgroup $C$.
Therefore $C$ is finite. Hence, passing to
a smaller open compact subgroup $V$ of $U$,
one can arrange $(X+Y)\cap V=\{0\}$.
Therefore $X+Y$ is a discrete and hence closed subgroup of $G$.
\medskip
Certainly (c) implies (a), by Lemma \ref{l:stqh<tM}.
\end{proof}
\begin{remark}\label{r:nonsplit}\rm
The group $G$ in Theorem \ref{t:mainB}
need not be a split extension of $\mathop\mathrm{tor}(G)$ by $L$ --
even if $G$ is discrete,
as has been demonstrated by \cite[Appendix 1, Theorem A1.32]{hofmor}.
\end{remark}
\begin{corollary}\label{c:M-td}
Let $G$ be a totally disconnected nonperiodic
\lc\ a\-bel\-ian\ strong\-ly \tqh\ group. Then
every tor\-sion-free\ subgroup is discrete and hence closed. Moreover,
every algebraically finitely generated\ subgroup is discrete and hence closed.
\end{corollary}
\begin{proof}
Since $G$ is not periodic the subgroup $\mathop{\rm comp}\nolimits(G)$ is open and by the premises
it agrees with $T=\mathop\mathrm{tor}(G)$. Therefore, for $H$ any tor\-sion-free\ subgroup of $G$,
one has $T\cap H=\{0\}$ showing that $H$ is a discrete subgroup of $G$.
The second statement follows from the first one and the
structure of algebraically
finitely generated abelian groups.
\end{proof}
\subsection{Groups with a Nontrivial Connected Component}
Via Pontryagin duality Theorem \ref{t:mainB}
implies at once the following structure theorem.
\begin{theorem}\label{t:M-conn}
The following statements about a \lc\ a\-bel\-ian\ group $G$, neither compact
nor discrete, and with nontrivial component $G_0$, are equivalent:
\begin{enumerate}[\rm(a)]
\item $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\item There are a finite set $\phi$ of primes and a set $\delta$ of primes
disjoint from $\phi$ such that all of the following statements hold:
\begin{enumerate}[\rm({b.}1)]
\item The component $G_0$ is a finite dimensional compact connected
subgroup of $G$.
\item $G/G_0$ algebraically and topologically decomposes as
\[G/G_0=F_\phi\oplus Z_\phi\oplus Z_\delta\oplus S_\delta,\]
where $F_\phi$ is discrete of finite exponent,
$Z_\phi=\prod_{p\in\phi}Z_p$ is tor\-sion-free\
and for every $p\in\phi$ one has $\mathop\mathrm{rank}\nolimits_p(Z_p)$ finite. Moreover,
$Z_\delta=\prod_{p\in\delta}Z_p$ is compact and tor\-sion-free,
and, for every $p\in\delta$, the $p$-primary subgroup
$Z_p\cong {\mathbb Z}_p^{\mathfrak m_p}$, where $\mathfrak m_p$ is some cardinal.
The subgroup $S_\delta$ is compact and coreduced (i.e., its dual is
reduced).
\item The preimage, say $K$, of $Z_\phi\oplus Z_\delta$ under the
canonical epimorphism from $G$ onto $G/G_0$
is a split extension of $G_0$ by $Z_\phi\oplus Z_\delta$,
i.e., algebraically and topologically
\[K\cong G_0\oplus Z_\phi\oplus Z_\delta\]
and the factor group $G/K$ is algebraically and topologically
isomorphic to $F_\phi\oplus S_\delta$.
\end{enumerate}
\item $G$ is strong\-ly \tqh.
\end{enumerate}
\end{theorem}
\begin{proof}
Suppose (a).
\medskip
{\em
Claim 1: The dual $\hat G$ satisfies the premise of Theorem \ref{t:mainB}.}
\medskip
Remark \ref{rem:tMg} implies that $\hat G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
Since $G$ is neither discrete nor compact, neither is $\hat G$, by duality.
Therefore, in particular, $\mathop{\rm comp}\nolimits(\hat G)<\hat G$.
Now Lemma \ref{l:not-tM}(a) and (b) together
imply that $(\hat G)_0$ is compact
and $\mathop{\rm comp}\nolimits(\hat G)=\mathop\mathrm{tor}(\hat G)$. Then $(\hat G)_0$ is a compact
torsion group and therefore is trivial
(see \cite[Corollary 8.5(a)(e)]{hofmor}).
Hence $\hat G$ is totally disconnected and Claim 1 is established.
\medskip
{\em Claim 2: {\rm (b.1)} holds.}
\medskip
By \cite[Corollary 7.69]{hofmor} we have that
$\widehat{G_0}\cong \hat G/\mathop\mathrm{tor}(\hat G)$.
The last statement in Theorem \ref{t:mainB}(b.1) shows that $\widehat{G_0}$
has finite ${\mathbb Z}$-rank. Therefore
\cite[Theorem 8.22]{hofmor} implies that $G_0$
has finite dimension. Whence Claim 2 follows.
\medskip
{\em Claim 3: {\rm (b.2)} holds.}
\medskip
Claim 1 shows that $\mathop\mathrm{tor}(\hat G)$ satisfies
Theorem \ref{t:abelian-tor-stqh}(a).
Therefore Theorem \ref{t:abelian-tor-stqh}(b.2)
applied to $\mathop\mathrm{tor}(\hat G)$
yields a finite set $\phi$ of primes and
\[\mathop\mathrm{tor}(\hat G)=D_\phi\oplus E\oplus D_\delta \oplus R_\delta\]
where $D_\phi$ is discrete, divisible, and of finite \hbox{$p$-rank}\ for all
$p\in\phi$, $E$ is a compact subgroup and $\pi(E)\subseteq\phi$,
$D_\delta$ is a discrete divisible torsion group, and,
$R_\delta$ is a reduced discrete subgroup. Moreover
$\pi(D_\delta)\cup\pi(R_\delta)$ intersects $\phi$ trivially.
Dualization yields a decomposition
\[G/G_0\cong \mathop\mathrm{tor}(\hat G)\hbox{$\hat{~}$}\cong Z_\phi\oplus F_\phi\oplus
Z_\delta\oplus S_\delta\]
where, for respective \psyl p subgroups $Z_p\cong {\mathbb Z}_p^{\mathfrak m_p}$ for
cardinalities $\mathfrak m_p$ and $p\in\phi\cup\delta$,
we have $Z_\phi=\prod_{p\in\phi}Z_p$ and $Z_\delta=\prod_{p\in\delta}Z_p$.
Moreover, for $p\in\phi$ the cardinality
$\mathfrak m_p$ is finite. Furthermore, $F_\phi\cong \widehat{E}$
is discrete and has finite exponent, and $S_\delta\cong \widehat{R_\delta}$
is compact; since $R_\delta$ is reduced, $S_\delta$ is coreduced.
Thus Claim 3 is established.
\medskip
Let us show that (b.3) holds. Note that $K$ is compact and
$K/G_0\cong Z_\phi\oplus Z_\delta$ is tor\-sion-free\ and compact.
Therefore \cite[(25.30)(b)]{hewitt-ross1-book}
implies a splitting, i.e., algebraically
and topologically
\[K\cong G_0\oplus Z_\phi\oplus Z_\delta.\]
Hence (b.3) holds.
\medskip
(b) $\implies$ (c). We start with a simple observation:
\ssk
{\em Claim 1: Let $K$ be any closed subgroup
of $G_0$. Then the factor group $G_0/K$ is finite dimensional and
$G/G_0\cong (G/K)/(G_0/K)$ algebraically and topologically.}
\ssk
As $G_0$ has finite dimension so has $G_0/K$.
The second isomorphism theorem yields the second statement of the Claim.
\ssk
{\em Claim 2: We may assume $G_0\cap X=G_0\cap Y=\{0\}$.}
\ssk
We may use Lemma \ref{l:XYU} with $G_0$ playing the role of $U$, modify
$X$ and $Y$ according to the lemma, make use of Claim 1 where
we let $K:= X\cap G_0+Y\cap G_0$, and pass to the factor group $G/K$.
Then certainly $X\cap G_0=Y\cap G_0=\{0\}$.
Observe that we still have a decomposition of $G/G_0$ as in (b.2).
Claim 2 holds.
\ssk
{\em Claim 3:
Let $C$ be any closed subgroup of $G$ with $C\cap G_0=\{0\}$
and let $\widetilde C:= (C+G_0)/G_0$.
Let $p:G\to G/G_0$ be the canonical projection and
$\phi_C:\widetilde C\to C$ the canonical algebraic isomorphism.
All of the following holds:
\begin{enumerate}[\rm (i)]
\item
For any closed subgroup $\widetilde L$ of $\widetilde C$ one has
$\phi_C(\widetilde L)=p^{-1}(\widetilde L)\cap C$.
Moreover, if $\widetilde L$ is compact, then so is $\phi_C(\widetilde L)$.
\item
For $\sigma\subseteq \pi(\widetilde C)$
and $\widetilde C_\sigma$ the $\sigma$-primary
subgroup of $\widetilde C$ one has $C_\sigma=\phi_C(\widetilde C_\sigma)$.
\item
There is an algebraic and topological direct decomposition
$\widetilde C=\widetilde Z_C\oplus \widetilde F_C\oplus\widetilde C_\delta$
with $\widetilde F_C=\mathop\mathrm{tor}(\widetilde C_\phi)$ and $\widetilde Z_C$ a finitely generated\ tor\-sion-free\
subgroup of $\widetilde C_\phi$.
\item If $C$ is torsion and only contains $\phi$-elements, it is discrete
and has finite exponent.
\item
The map $\phi_C$ induces algebraic and topological isomorphisms
\[Z_C:= \phi_C(\widetilde Z_C)\cong \widetilde Z_C, \ \
F_C := \mathop\mathrm{tor}(C_\phi) \cong \widetilde F_C \ \
{\rm and} \ \ C_\delta=\phi_C(\widetilde C_\delta)\cong\widetilde C_\delta\]
and the subgroups $Z_C$ and $C_\delta$ are compact.
\item
$C\cong\widetilde C$ algebraically and topologically.
\end{enumerate}
}
\ssk
(i)
Let $i_C:C\to G$ be the canonical embedding and $j_C:C\to\widetilde C$ be the
inverse of $\phi_C$. Then, as $j_C=p\circ i_C$, the map $j_C$
is continuous and one finds
\[\phi_C(\widetilde L)=j_C^{-1}(\widetilde L)
=i_C^{-1}p^{-1}(\widetilde L)
=p^{-1}(\widetilde L)\cap C.\]
Suppose that in addition $\widetilde L$ is compact. Then, applying
Lemma \ref{l:XK} to $p^{-1}(\widetilde L)$ where $G_0$ plays the
role of $K$ we find that $p^{-1}(\widetilde L)$ is compact. Therefore
so is the intersection
$\phi_C(\widetilde L)=p^{-1}(\widetilde L)\cap C$.
(ii)
Pick $x\in C_\sigma$. The algebraic isomorphism $\phi_C:\widetilde C\to C$
induces a topological isomorphism
\[\gen x\cong (\gen x+G_0)/G_0=\gen{\phi_C(x)}\le C.\]
Thus $\phi_C(\widetilde C_\sigma)\le C_\sigma$. On the other hand, since
$\phi_C:\widetilde C\to C$ is an algebraic isomorphism with continuous
inverse $j_C$, it follows that $j_C(C_\sigma)\le \widetilde C_\sigma$,
and applying $\phi_C$ on both sides renders $C_\sigma\le \phi_C(\widetilde C_\sigma)$.
Therefore $\phi_C(\widetilde C_\sigma)=C_\sigma$.
\ssk
(iii)
The decomposition of $G/G_0$ in (b.2)
in conjunction with Lemma \ref{l:phi-kappa}
applied to the $\phi$-component of
$\widetilde C$ implies that
\[\widetilde C=\widetilde Z_C\oplus \widetilde F_C\oplus \widetilde C_\delta\]
for $\widetilde Z_C$ finitely generated\ tor\-sion-free\
with $\pi(\widetilde Z_C)\subseteq\phi$,
$\widetilde F_C$ a discrete subgroup of finite exponent, and,
$\widetilde Z_C$ and $\widetilde C_\delta$ profinite groups
with $\pi(\widetilde C_\delta)\subseteq\delta$.
\ssk
(iv) Observe first that $\widetilde C:=(C+G_0)/G_0$ is discrete,
being contained in the discrete subgroup $\mathop\mathrm{tor}(G/G_0)_\phi$.
Thus there is an open compact subgroup $U$ of $G/G_0$ with
$\widetilde C\cap U=\{0\}$. Let $p:G\to G/G_0$ be the canonical
projection. Then $W:= p^{-1}(U)$ is open and
\[ W\cap C\le (W\cap (C+G_0))\cap C\le G_0\cap C=\{0\}\]
showing that $C$ is indeed a discrete subgroup of $G$.
Since $\mathop\mathrm{tor}(G/G_0)_\phi$ has finite exponent so has $\widetilde C$
and thus also $C$.
\ssk
(v)
Setting $\sigma:=\phi$ in (ii)
implies $C_\phi=\phi_C(\widetilde C_\phi)$ and using (i) with $\widetilde L:=
\widetilde Z_C$ one obtains a topological isomorphism
$Z_C:=\phi_C(\widetilde Z_C)\cong\widetilde Z_C$
since $\widetilde Z_C$ and hence $Z_C$ are finitely generated\ and hence compact.
For proving the second equation we note that by (iv)
$\mathop\mathrm{tor}(C_\phi)$ is discrete and hence
\[F_C:=\phi_C(\widetilde F_C)=\phi_C(\mathop\mathrm{tor}(\widetilde C_\phi))=\mathop\mathrm{tor}(C_\phi).\]
The third topological isomorphism follows by letting $\sigma:=\delta$ and
making use of (i) in order to see that $C_\delta$ is compact.
\ssk
(vi)
Because $G_0\cap C=\{0\}$, we have $C/(G_0\cap C)=C$, and
\cite[(5.32) Theorem]{hewitt-ross1-book} implies that
the map $\phi_C:\widetilde C=(C+G_0)/G_0\to C$
is an algebraic isomorphism carrying open sets
to open sets. The compact subgroup $\widetilde Z_C\oplus \widetilde C_\delta$
is open in $\widetilde C$ and therefore, taking (i) into account, its image
\[\phi_C(\widetilde Z_C\oplus\widetilde C_\delta)=Z_C\oplus C_\delta\]
is an open compact subgroup of $C$.
Thus the restriction of $\phi_C$ to $\widetilde Z_C\oplus\widetilde C_\delta$ is a
topological isomorphism onto an open subgroup of $C$ and therefore
$\phi_C$ is a topological isomorphism. Thus (vi) and hence Claim 3 are
established.
\medskip
Let us finish proving ``(b) $\implies$ (c)''.
Applying Claim 3(vi) to $X$ and $Y$ separately one finds
compact groups $Z_X$, $Z_Y$, $X_\delta$, $Y_\delta$ and
discrete torsion subgroups
$F_X$ and $F_Y$ with $\pi(F_X)\cup\pi(F_Y)\subseteq\phi$
such that algebraically and topologically
\[X=Z_X\oplus F_X\oplus X_\delta, \ \ Y=Z_Y\oplus F_Y\oplus Y_\delta.\]
The sum of compact subgroups
\[V:=Z_X+Z_Y+X_\delta+Y_\delta\]
is compact. Therefore
\[X+Y=(F_X+F_Y)+V\]
will turn out to be closed if we can show that $F_X+F_Y$ is closed.
From (iv) it follows that $F_X$ and $F_Y$ are discrete torsion
groups of finite exponent and hence
closed subgroups of $G$ and Corollary \ref{c:findim-neu}
implies the finiteness of $(F_X+F_Y)\cap G_0$.
Now we may use Claim 1 with $K:= (F_X+F_Y)\cap G_0$ and
assume $(F_X+F_Y)\cap G_0=\{0\}$. Since $\pi(F_X+F_Y)\subseteq\phi$ and
$F_X+F_Y$ is torsion deduce from (iv) that indeed $F_X+F_Y$ is closed.
Therefore (c) holds.
\medskip
(c) $\implies$ (a).
This follows from Lemma \ref{l:stqh<tM}.
\end{proof}
The preceding result corrects \cite[Theorem 14.34(ii)]{HHR18}.
\section{Some Consequences}
\label{s:consequences}
From Corollary \ref{c:M-abelian-p} one obtains
refined structure results for reduced, for torsion, and, for
divisible $p$-groups.
\begin{corollary}\label{c:reduced-p-stqh}
Let $G$ be a reduced \lc\ a\-bel\-ian\ torsion $p$-group.
To be strong\-ly \tqh\ it is necessary and sufficient that
$G$ is either discrete or compact.
\end{corollary}
\begin{corollary}\label{c:stqh-p-divisible}
Let $G$ be a \lc\ a\-bel\-ian\ torsion strong\-ly \tqh\ $p$-group. Then either $G$ is discrete or
$\div(G)$ has finite $p$-rank.
In particular $\div(G)$ is a closed subgroup of $G$.
\end{corollary}
A referee provided the proof of the preceding corollary
and actually proved the following very general result:
\begin{proposition}\label{p:referee}
Let $G$ be a topological group with a subgroup topology
and $D$ a discrete divisible subgroup.
Then $G=D\oplus R$ algebraically and topologically
for some open subgroup $R$.
\end{proposition}
\begin{proof}
Since $D$ is discrete there is an open subgroup $U$
of $G$ intersecting $D$ trivially and hence $D+U=D\oplus U$
algebraically and topologically.
The canonical epimorphism $\phi:D+U\to D$ extends, by the universal
property of divisibility, to a homomorphism $\phi':G\to D$.
The latter agrees with $\phi$ on the open subgroup $D\oplus U$
of $G$, so that $\phi'$ is continuous. Therefore
\[G=D\oplus R\]
for $R:= \ker(\phi')$, which is an open subgroup since it contains $U$.
\end{proof}
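A simple illustration of the proposition: let $G=\prf p\oplus{\mathbb Z}_p$,
where $\prf p$ carries the discrete topology and ${\mathbb Z}_p$ is compact open,
and let $D=\prf p$. Taking $U={\mathbb Z}_p$ one has $G=D\oplus U$, the extension
$\phi'$ coincides with the projection $\phi$ itself, and the proposition returns
the evident splitting with $R={\mathbb Z}_p$.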
For the $p$-group case the following immediate consequence will be helpful.
\begin{corollary}\label{c:stqh-ab-p-D-tor}
Let $G$ be a \stqh\ group\ having discrete maximal
divisible subgroup $D:=\div(G)$.
Then $G=D\oplus R$ for a reduced subgroup $R$
algebraically and topologically.
\end{corollary}
An additional consequence can be drawn.
\begin{corollary}\label{c:maxdiv}
Let $G$ be a \lc\ a\-bel\-ian\ strong\-ly \tqh\ nondiscrete torsion $p$-group.
Then, for $C$ a
compact subgroup of $G$,
\[(\div(G)+C)/C = \div(G/C).\]
\end{corollary}
\begin{proof}
Theorem \ref{th:tM=stqh p} implies that $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ and Corollary
\ref{c:M-abelian-p} shows that the maximal divisible subgroup
$D:=\div(G)$ of $G$ is discrete and has finite \hbox{$p$-rank}.
Moreover, $G=D\oplus R$, for some compact open torsion subgroup $R$ of $G$.
By Proposition \ref{p:cpt-tg} $R$ and hence $R/(R\cap(D+C))$ have
finite exponent.
Because $(D+C)/C$ is divisible we have $(D+C)/C\le \div(G/C)$.
In order to show that
$(D+C)/C$ is the {\em maximal} divisible subgroup of $G/C$ consider
the algebraic isomorphisms
\[(G/C)/((D+C)/C)\cong G/(D+C)=(R+D+C)/(D+C)\cong R/(R\cap(D+C)).\]
It follows that $(G/C)/((D+C)/C)$ has finite exponent
and is hence reduced
showing the desired containment $\div(G/C)\le (D+C)/C$.
\end{proof}
Every compact or discrete abelian $p$-group clearly is strong\-ly \tqh.
Therefore we first concentrate on groups neither
compact nor discrete.
\begin{proposition}\label{p:p-red-stqh}
The following statements about a \lc\ a\-bel\-ian\ reduced $p$-group $G$,
neither discrete nor compact, are equivalent:
\begin{enumerate}[\rm (a)]
\item $G$ is strong\-ly \tqh.
\item $G$ contains an open finitely generated\ subgroup.
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose (a). Then by Theorem \ref{th:tM=stqh p} $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\
and therefore either (a) or (b) of Theorem \ref{t:M-abelian-p} must hold.
In the latter case $G$ would have to be compact since $\div(G)=\{0\}$.
Hence case (a) of Theorem \ref{t:M-abelian-p} holds and therefore,
as desired, $G$ has an open finitely generated\ subgroup.
\medskip
That (b) implies (a) is an immediate
consequence of Lemma \ref{l:U-fg-stqh}.
\end{proof}
\begin{remark}\label{r:p-red-stqh}\rm
An example of a reduced \lc\ a\-bel\-ian\ $p$-group which is
strong\-ly \tqh\ and neither compact nor discrete can be
found in \cite[Remark 14.6]{HHR18}.
\end{remark}
\begin{lemma}\label{l:stqhg-p-DR}
Let $G$ be a non-discrete \lc\ a\-bel\-ian\ $p$-group with a finitely generated\
open subgroup $U$. Then the maximal
divisible subgroup $D$ is closed and $G$ algebraically
and topologically decomposes
\[G=D\oplus R,\]
for $R$ a closed reduced subgroup of $G$.
\end{lemma}
\begin{proof}
Lemma \ref{l:U-fg-stqh} implies that $G$ is strong\-ly \tqh.
Since $U$ is finitely generated\ Lemma \ref{l:Rb} implies that $U$ is
algebraically and topologically isomorphic to $V\oplus F$
for some finite subgroup $F$ and $V$ open and tor\-sion-free.
If $V=\{0\}$ the group $G$ is discrete, a contradiction.
Thus $V\neq\{0\}$ and, replacing $U$ by $V$ we can arrange
that $U$ is tor\-sion-free.
Then certainly $\overline{D\cap U}$ is finitely generated\ and hence by
Proposition 3.76 in \cite{HHR18} $D$ is closed and
thus $X:= D\cap U=\overline{D\cap U}$ is finitely generated.
Since $U/X$ is finitely generated, by Lemma \ref{l:Rb}
there are a finite $p$-group $F$, some $r\ge0$, and,
a closed
subgroup $W$ of $U$ containing $X$ such that $U/X=F\oplus W/X$ and
$W/X\cong {\mathbb Z}_p^r$.
Lifting $r$ generators of ${\mathbb Z}_p^r$ to $W$ yields a closed subgroup
$Z\cong{\mathbb Z}_p^r$ of $W$ such that
\[W=X\oplus Z=(D\cap U)\oplus Z.\]
The projection $\eta:W\to D\cap U$ along $Z$ extends to a continuous
homomorphism $\widetilde\eta:W+D\to D$ which restricts to the identity
on $D$; by divisibility of $D$, it extends further to a homomorphism
$\widetilde\eta':G\to D$, which is continuous since it agrees with
$\widetilde\eta$ on the open subgroup $W+D$. Therefore, setting
$R:=\ker(\widetilde\eta')$,
\[G=D\oplus R\]
is a splitting.
\end{proof}
Now we offer a new and short argument for the following
result of Mukhin \cite{muk2}.
\begin{proposition}\label{p:p-D-stqh}
A divisible \lc\ a\-bel\-ian\ $p$-group $G$ is strong\-ly \tqh\ if and only if, for
some set $I$ and nonnegative integer $m$,
\[G\cong \prf p^{(I)}\oplus\Q_p^m\]
algebraically and topologically.
\end{proposition}
\begin{proof}
Assume first that $G$ is strong\-ly \tqh.
If $G$ is discrete it has the described structure for $m=0$.
Thus we may assume $G$ {\em not} to be discrete.
Fix an open compact subgroup $U$ of $G$.
Then $G/pU$ is a torsion \stqh\ group\
and Corollary \ref{c:M-abelian-p}
(in conjunction with Theorem \ref{t:M-abelian-p})
implies that $G/pU$ is either
discrete or has finite $p$-rank.
In either case $U/pU$ is finite and
hence $U$ has finite $p$-rank. Therefore,
by Proposition \ref{p:finrank},
\[U\cong F\oplus {\mathbb Z}_p^m\]
for some nonnegative integer $m$ and a finite subgroup $F$ of $G$.
Choosing the open compact subgroup $U$ small enough, we can arrange
\[U\cap \mathop\mathrm{tor}(G)=\{0\}.\]
Hence the subgroup $\mathop\mathrm{tor}(G)$ is discrete and divisible
and is therefore, as a consequence of Lemma 4.4 in \cite{HHR-comfort},
topologically and algebraically a direct summand of $G$ intersecting
$U$ trivially. Then
\[G=\mathop\mathrm{tor}(G)\oplus D_U,\]
with $D_U\cong \Q_p^m$ a divisible hull of $U$.
Since $\mathop\mathrm{tor}(G)$ is discrete and divisible there is a set $I$ with
\[\mathop\mathrm{tor}(G)\cong \prf p^{(I)}.\]
\medskip
Conversely, suppose
\[G\cong \prf p^{(I)}\oplus \Q_p^m\]
for $I$ some set and $m\ge0$.
Since the \hbox{$p$-rank}\ of the open summand
$\Q_p^m$ is finite (equal to $m$)
there is an open finitely generated\ subgroup $U$ of $\Q_p^m$ and
hence of $G$.
Lemma \ref{l:U-fg-stqh} implies that $G$ is strong\-ly \tqh.
\end{proof}
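For instance, the proposition shows that $G=\prf p\oplus \Q_p$ is strong\-ly \tqh\
(take $I$ a singleton and $m=1$); in the notation of the converse direction,
$U={\mathbb Z}_p\le\Q_p$ serves as an open finitely generated\ subgroup of $G$.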
Next we turn to providing a full
classification of the periodic \stqh\ group s,
Theorem \ref{t:ab-periodic-stqh}.
\begin{definition}\label{d:img-per}\rm
A periodic \lc\ a\-bel\-ian\ group $G$ is {\em in\-duc\-ti\-ve\-ly mo\-no\-the\-tic},
provided every finitely generated\ subgroup $H$ of $G$ can be
topologically generated by a single element.
\end{definition}
Note that a periodic \lc\ a\-bel\-ian\ group $G$ is in\-duc\-ti\-ve\-ly mo\-no\-the\-tic\ if, and only if,
for every $p\in\pi(G)$ the $p$-primary subgroup $G_p$ has \hbox{$p$-rank}\ $1$
(see Subsection \ref{ss:Mg-p-case}).
That in\-duc\-ti\-ve\-ly mo\-no\-the\-tic\ groups are always strong\-ly \tqh\ could be read off
from Mukhin's characterization
of abelian \stqh\ group s, see \cite{muk2}; a very elementary proof
of this fact follows.
\begin{proposition}\label{p:img-stqh}
Every periodic in\-duc\-ti\-ve\-ly mo\-no\-the\-tic\ group $H$ is strong\-ly \tqh.
\end{proposition}
\begin{proof}
Assume first that $H$ is a $p$-group. Then, as a consequence of
Proposition \ref{p:finrank}, $H$ is isomorphic to either
the additive group of the $p$-adic field $\Q_p$, or to the additive group
of the $p$-adic integers ${\mathbb Z}_p$, or to a finite cyclic $p$-group,
or to Pr\"ufer's $p$-group $\prf p$.
Then, for $X$ and $Y$ any closed subgroups of $H$,
either $X\subseteq Y$ or $Y\subseteq X$ must hold.
But then $X+Y$ agrees with one of the closed subgroups $Y$ or $X$
and is hence a closed subgroup of $H$.
\medskip
Let $H$ now be arbitrary and consider closed subgroups $X$ and $Y$.
Put $\pi:= \{p\in \pi(H): X_p\subseteq Y_p\}$.
Then, letting $\pi':= \pi(X)\setminus\pi$,
there are direct sum decompositions
\[X=X_\pi\oplus X_{\pi'} \ \hbox{ and } \ Y=Y_\pi\oplus Y_{\pi'}.\]
For proving $X+Y$ to be a closed subgroup of $H$,
in light of Lemma \ref{l:coprime}, it suffices to prove closedness
of the two subgroups $X_\pi+ Y_\pi$ and $X_{\pi'}+Y_{\pi'}$.
In the first case the group in question agrees
with the closed subgroup $Y_\pi$ and
in the second one with the closed subgroup $X_{\pi'}$. Hence $X+Y$ is a closed
subgroup. Thus $H$ is strong\-ly \tqh.
\end{proof}
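A concrete instance (an illustration only): $H=\Q_p\times{\mathbb Z}_q$ for distinct
primes $p\neq q$ is periodic with \psyl p subgroup $\Q_p$ and \psyl q subgroup
${\mathbb Z}_q$, both of rank $1$, so $H$ is in\-duc\-ti\-ve\-ly mo\-no\-the\-tic.
For the closed subgroups $X=p{\mathbb Z}_p\times{\mathbb Z}_q$ and
$Y={\mathbb Z}_p\times q{\mathbb Z}_q$ the proof above produces $\pi=\{p\}$ and
$\pi'=\{q\}$, and indeed $X+Y={\mathbb Z}_p\times{\mathbb Z}_q$ is closed.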
Having described all {\em torsion} \lc\ a\-bel\-ian\ \stqh\ group s above,
we can now complete Mukhin's classification
of abelian \stqh\ group s.
For a profinite abelian group $U$ the
{\em Frattini subgroup} $\Phi(U)$ is the
intersection of all maximal open subgroups of $U$.
As pointed out in \cite{ribes-zalesskii} the Frattini subgroup
of $A=\prod_pA_p$ takes the form
\[\Phi(A)=\prod_p\Phi(A_p)=\prod_p pA_p.\]
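For instance, $\Phi({\mathbb Z}_p)=p{\mathbb Z}_p$ is the unique maximal open
subgroup of ${\mathbb Z}_p$, and for $A=\prod_p{\mathbb Z}_p$ the formula gives
$\Phi(A)=\prod_p p{\mathbb Z}_p$ with $A/\Phi(A)\cong\prod_p{\mathbb Z}(p)$.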
An elementary fact about \lc\ a\-bel\-ian\ $p$-groups will be needed.
\begin{lemma}\label{l:ApMp}
Let $A$ be a \lc\ a\-bel\-ian\ $p$-group properly containing
an open compact subgroup $U$ of exponent $p$.
If $A$ is not in\-duc\-ti\-ve\-ly mo\-no\-the\-tic\ then
there are elements $a\in A\setminus U$ and
$0\neq b\in U$ with $\gp a\cap \gp b=\{0\}$.
\end{lemma}
\begin{proof}
Since the compact open subgroup $U$ has exponent $p$
the group $A$ is torsion and $U\le \text{\rm socle}(A)$.
If there is $a\in\text{\rm socle}(A)\setminus U$
then $\gp a + U=\gp a\oplus U$ and we may pick $0\neq b\in U$ in order
to have $\gp a\cap \gp b=\{0\}$.
Suppose next that $\text{\rm socle}(A)=U$. Pick any $a\in A\setminus U$.
Since $A$, by assumption, is {\em not} in\-duc\-ti\-ve\-ly mo\-no\-the\-tic\ and is torsion $U=\text{\rm socle}(A)$
cannot be cyclic and hence $U$ must contain
a subgroup $L=\gp{x,y}\cong{\mathbb Z}(p)\oplus{\mathbb Z}(p)$. There must be
$b\in L$ not belonging to $\text{\rm socle}(\gp a)$. Then $\gp a\cap \gp b=
\text{\rm socle}(\gp a)\cap \gp b=\{0\}$.
\end{proof}
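A minimal example for the lemma: take $A={\mathbb Z}(p)\oplus{\mathbb Z}(p)$,
discrete, and $U$ one of the two summands; then $A$ is not in\-duc\-ti\-ve\-ly
mo\-no\-the\-tic, and a generator $a$ of the other summand together with any
$0\neq b\in U$ satisfies $\gp a\cap\gp b=\{0\}$.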
We now come to prove our
addition to the classification results in \cite{muk2}.
\begin{proof}[Proof of Theorem \ref{t:ab-periodic-stqh}]
Assume (A). Then certainly $A_\delta\cap U=\{0\}$
and hence $A_\delta$ is a discrete \psyl\delta subgroup of $A$.
Since $A_\gamma\le U$ conclude that $A_\gamma$ is profinite and
so (ii) holds.
It follows that
\[A=A_\delta\times A_\gamma\times A_{\delta'},\]
for $\delta':= \pi(A)\setminus(\delta\cup\gamma)$,
algebraically and topologically. Thus (i) is established.
\medskip
For establishing (iii) and (iv)
we may restrict ourselves
to the case $\delta=\gamma=\emptyset$, i.e.,
$A_\delta=A_\gamma=\{0\}$ from now on.
\medskip
Since, by Proposition \ref{p:stqh-class},
every subgroup and quotient of a \stqh\ group\ again is a \stqh\ group,
passing to subgroups of quotients of $A_\phi$
renders \stqh\ group s. For proving that $\phi$ must be finite we may
factor $\Phi(U_\phi)=\prod_{p\in\phi}pU_p$
and achieve that $U_p$ has exponent $p$. By using
Lemma \ref{l:ApMp} one can find for
every $p\in\phi$ elements $a_p\in A_p\setminus U_p$ of order at most $p^2$
and $0\neq b_p\in U_p$ with $\gp{a_p}\cap \gp{b_p}=\{0\}$.
The closed subgroup
\[L:= \gen{a_p,b_p: p\in\phi}\]
of $A_\phi$ is still strong\-ly \tqh. Observe that
$U\cap L=\gen{pa_p,b_p:p\in\phi}$ and, after factoring out the
closed subgroup $N$ generated by all elements
of the form $pa_p$, we find an algebraic and topological isomorphism
\[L/N\cong \bigoplus_{p\in\phi}{\mathbb Z}(p)\oplus \prod_{p\in\phi}{\mathbb Z}(p).\]
Since $L/N$ is strong\-ly \tqh\ deduce from Lem\-ma \ref{l:cyclic-stqh}
that $\phi$ must be finite. Hence (iii) holds.
\medskip
(iv) is an immediate consequence of the fact
that $p\in\mu$ if and only if $\mathop\mathrm{rank}\nolimits_p(A_p)=1$
if and only if $A_p$ is in\-duc\-ti\-ve\-ly mo\-no\-the\-tic.
\medskip
Finally, $A$ certainly is the Cartesian product of the Sylow subgroups $A_\delta$, $A_\gamma$, $A_\phi$ and $A_\mu$.
Hence (B) holds.
\medskip
Assume now (B). In light of Lemma \ref{l:coprime}
it will suffice to prove that $A_\delta$, $A_\gamma$,
$A_\phi$, and $A_\mu$ are strong\-ly \tqh.
For $A_\delta$ this is obvious since $A_\delta$ is discrete.
Since $A_\gamma$ is compact it is strong\-ly \tqh.
For every $p$ in the finite set $\phi$ we know from (iii) that
$A_p$ is strong\-ly \tqh. Thus applying Lemma \ref{l:coprime}
to the finite product
\[A_\phi=\prod_{p\in\phi}A_p\]
shows that $A_\phi$ is strong\-ly \tqh.
Finally observe that $A_\mu$ has \hbox{$p$-rank}\ $1$ \psyl p
subgroups for all $p\in\mu$ and is therefore in\-duc\-ti\-ve\-ly mo\-no\-the\-tic,
hence strong\-ly \tqh\ by Proposition \ref{p:img-stqh}.
\end{proof}
One may ask under which conditions on a \lc\ a\-bel\-ian\ group $G$
the group is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ if, and only if, $G$ is strong\-ly \tqh.
When $G$ is discrete then, as has been mentioned in the
introduction, Dedekind \cite{Dedekind77}
proved that $G$, equipped with the discrete topology,
is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ and it certainly is strong\-ly \tqh.
We conclude our work by contributing to the above question.
\begin{theorem}\label{t:M-stqh-td}
Let $G$ be a nonperiodic totally disconnected \lc\ a\-bel\-ian\ group.
The following statements are equivalent:
\begin{enumerate}[\rm(a)]
\item $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
\item $G$ is strong\-ly \tqh.
\end{enumerate}
\end{theorem}
\begin{proof}
Since, by Lemma \ref{l:stqh<tM}, (b) implies (a)
we only need to deduce (b) from (a).
Thus assume that $G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar.
If $G$ is compact then $G$ is certainly strong\-ly \tqh\ else
$G$ is neither discrete nor compact and the result
follows from Theorem \ref{t:mainB}.
\end{proof}
Let us summarize the findings of Theorems \ref{th:tM=stqh p},
\ref{t:M-conn}, and, \ref{t:M-stqh-td}
and keep in mind Lemma \ref{l:cyclic-stqh} and Remark \ref{r:tqh-stqh}:
\begin{corollary}\label{l:summary-M-stqh}
Under the following conditions a nondiscrete \lc\ a\-bel\-ian\ group
$G$ is to\-po\-lo\-gi\-cal\-ly mo\-du\-lar\ if, and only if, $G$ is strong\-ly \tqh.
\begin{enumerate}[\rm(i)]
\item $G$ is a $p$-group.
\item $G$ is totally disconnected but not periodic.
\item $G_0$ is not trivial.
\end{enumerate}
Moreover, whenever $G$ satisfies one of the conditions (i)--(iii)
then the Pontryagin dual $\hat G$ is strong\-ly \tqh\ if, and only if,
$G$ is strong\-ly \tqh.
\end{corollary}
\subsection*{Acknowledgement}
We are thankful to the referees, not only
for reading our paper carefully and proposing numerous valuable suggestions
which lead to significant improvement of our work, but particularly
for continued interest in our results and
going with enormous patience and endurance through
several gradually improved versions of our paper.
| {
"timestamp": "2019-08-23T02:13:08",
"yymm": "1908",
"arxiv_id": "1908.08420",
"language": "en",
"url": "https://arxiv.org/abs/1908.08420",
"abstract": "Locally compact abelian groups are classified in which the sum of any two closed subgroups is itself closed. This amounts to reproving and extending results by Yu.~N.~Mukhin from 1970. Namely we contribute a complete classification of all totally disconnected \\lca\\ groups with $X+Y$ closed for any closed subgroups $X$ and $Y$.",
"subjects": "Group Theory (math.GR)",
"title": "When is the Sum of Two Closed Subgroups Closed in a Locally Compact Abelian Group",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517443658749,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7089606438668431
} |
https://arxiv.org/abs/1102.5464 | Lattices of subalgebras of Leibniz algebras | I describe the lattice of subalgebras of a one-generator Leibniz algebra. Using this, I show that, apart from one special case, a lattice isomorphism between Leibniz algebras L, L' maps the Leibniz kernel of L to that of L'. | \section{Introduction} \label{sec-intro}
A lot is known about the lattices of subalgebras of Lie algebras. See for example, Barnes \cite{LIsos}, \cite{Lautos}, Goto \cite{Goto}, Towers \cite{T-dimpres}, \cite{T-Lautos}, \cite{smod}. In this paper, I look at some basic results needed to extend these results to Leibniz algebras.
In the following, $L,L'$ are finite-dimensional (left) Leibniz algebras over the field $F$. I denote by $\langle a, b, \dots\rangle$ the subspace spanned by the listed elements and by $\alg a,b, \dots \rangle$ the subalgebra they generate. The Leibniz kernel of $L$ is the subspace $\Leib(L) = \langle x^2 \mid x \in L\rangle$ spanned by the squares of the elements of $L$. By Barnes \cite[Lemma 1.1]{Sch-Leib}, it is a $2$-sided ideal and $L/\Leib(L)$ is a Lie algebra.
Since the main aim of this paper is to show that (apart from one exceptional case,) $\Leib(L)$ can be recognised from lattice properties alone, we say of a subalgebra $U$ of $L$, that $U$ is \textit{recognisable} if, from properties of the lattice $\lat(L)$, it can be shown that $U \subseteq \Leib(L)$, that is, if for every lattice isomorphism $\phi \co \lat(L) \to \lat(L')$, we have $\phi(U) \subseteq \Leib(L')$.
There is one case in which $\Leib(L)$ is not recognisable.
\begin{example} \label{diamond} Let $L = \alg a \rangle = \langle a, a^2 \rangle$ with $a^3 = a^2$. Put $b = a - a^2$ and $v = a^2$. Then $b^2 = v^2 =vb = 0$ and $bv = v$. Then $\Leib(L) = \langle v \rangle$ and $\lat(L)$ is
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(16,10)(-5,0)
\put(5,10){\circle*{1}} \put(0,5){\circle*{1}} \put(10,5){\circle*{1}} \put(5,0){\circle*{1}}
\put(0,5){\line(1,1){5}} \put(5,0){\line(1,1){5}}
\put(0,5){\line(1,-1){5}} \put(5,10){\line(1,-1){5}}
\put(-5,4){$\langle b \rangle$}
\put(11,4){$\langle v \rangle$}
\end{picture}
\end{center}
The map $\phi \co \lat(L) \to \lat(L)$ which interchanges $\langle b \rangle$ and $\langle v \rangle$ is clearly a lattice automorphism.
\end{example}
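One checks readily that these relations do define a (left) Leibniz algebra: the left Leibniz identity $x(yz)=(xy)z+y(xz)$ needs to be verified only on the basis $\{b,v\}$, for instance
\[b(bv)=bv=v=(bb)v+b(bv) \quad\text{and}\quad b(vb)=0=(bv)b+v(bb),\]
while all instances with left factor $v$ are trivial since $v^2=vb=0$.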
The lattice of Example \ref{diamond} will be called the \textit{diamond lattice} and an algebra with this as its subalgebra lattice will be called a \textit{diamond algebra.} The lattice of all subspaces of a vector space over $F$ will be called a \textit{vector space lattice}.
\begin{lemma} Let $D$ be a diamond algebra. Then there exist $b,v \in D$ such that $D = \langle b,v\rangle$ and $b^2 = v^2 = vb = 0$ and $bv = v$.
\end{lemma}
\begin{proof} A Leibniz algebra $L$ which is not a Lie algebra has a proper subalgebra $\Leib(L)$, so an algebra without proper nonzero subalgebras is a 1-dimensional Lie algebra. $D$ has two proper nonzero subalgebras, $B$ and $V$ say. As not every 1-dimensional subspace of $D$ is a subalgebra, $D$ is not a Lie algebra, so one of these, $V$ say, is $\Leib(D)$. Take generators $b, v$ of $B, V$ respectively. Then $b^2 = v^2 = vb=0$ and $bv = \lambda v$ for some $\lambda \in F$, $\lambda \ne 0$. We replace $b$ with $b/\lambda$.
\end{proof}
To avoid repeated descriptions of notations, I use the following convention. Whenever we have a subalgebra $U$ of $L$, $U'$ denotes the subalgebra $\phi(U)$ of $L'$ under the lattice isomorphism $\phi \co \lat(L) \to \lat(L')$. For an element $a \in L$, $a'$ denotes a generator of $\alg a \rangle'$. (We shall see later, Lemma \ref{lem-kchains}, that $\alg a\rangle'$ is a one-generator algebra. This holds trivially if $\dim\alg a\rangle = 1$.) If $A \supseteq B$ are subalgebras of $L$, I denote by $A \div B$ the lattice interval consisting of all subalgebras $C$ with $A \supseteq C \supseteq B$.
\section{One-generator algebras}
Consider the one-generator Leibniz algebra $L = \langle a, a^2, \dots, a^n \rangle$. We have here a vector space $V = \langle a^2, \dots, a^n \rangle$ acted on by a linear transformation $\theta : V \to V$ such that $V$ is generated as $F[\theta]$-module by the element $a^2$. Let $f(x)= x^r g(x)$, where $g(0)\ne 0$, be the characteristic polynomial of $\theta$. Then $f(x)$ is also the minimum polynomial of $\theta$, since $V$ is a cyclic $F[\theta]$-module. Put $V_1 = \theta^rV$. Then $g(x)$ is the minimum polynomial of $\theta|_{V_1}$.
Put $h(x) = \bigl( g(x) - g(0)\bigr)/\bigl(xg(0)\bigr)$ and $b= a + h(\theta)a^2$. Then
$$b^{r+2}= a^{r+2} + \theta^{r+1}h(\theta)a^2 = a^{r+2} + \bigl(g(\theta)-g(0)\bigr)a^{r+2}/g(0) = 0.$$
Let $B$ be the subalgebra generated by $b$. Then $B^2$ is the only maximal subalgebra of $B$ and $\lat(B)$ consists of $B$ and all the subspaces of $B^2$. If $a$ is not nilpotent, then $B$ is a proper subalgebra of $L$ not contained in the maximal subalgebra $V$, so $V$ is not the only maximal subalgebra of $L$.
\begin{lemma}\label{nilp1gen} Let $B = \alg b\rangle$ be a subalgebra generated by a nilpotent element $b$. Then $B \simeq B'$ and $B^2 = \Leib(B)$ is recognisable.
\end{lemma}
\begin{proof} $B^2$ is the only maximal subalgebra of $B$, so $W = \phi(B^2)$ is the only maximal subalgebra of $B'$. There exists $c \in B'$ , $c \notin W$. Since $c$ is not contained in any maximal subalgebra of $B'$, we have $\alg c \rangle = B'$. Since $W$ is the only maximal subalgebra of $B'$, $c$ is nilpotent. Since the maximal chains of $\lat(B)$ and $\lat(B')$ have the same length, we have $B \simeq B'$. $\Leib(B) = B^2$ and $\phi(\Leib(B)) = W = \Leib(B')$.
\end{proof}
\begin{lemma} \label{ifnilp} If $c \notin V$ is nilpotent, then $\alg c \rangle = B$.
\end{lemma}
\begin{proof} Since $c \notin V$, we have $c = \lambda b + b_1 + v$ for some $\lambda \in F$, $\lambda \ne 0$, $b_1 \in B^2$ and $v \in V_1$. Since $\theta$ is non-singular on $V_1$, $v = 0$ and $c \in B$. But $c$ is not in the only maximal subalgebra of $B$, so $\alg c \rangle = B$.
\end{proof}
To determine the invariant subspaces, we use the prime power factorisation $g(x) = p_1^{r_1}(x) \dots p_k^{r_k}(x)$ where the $p_i(x)$ are distinct irreducible polynomials. Since $V$ and so also $V_1$ is generated as $F[\theta]$-module by a single element, the only invariant subspaces of $V_1$ are the spaces
$$V_{s_1, \dots, s_k} = \{v \in V \mid p_1^{s_1}(\theta) \dots p_k^{s_k}(\theta) v = 0\} = \theta^r p_1^{r_1-s_1}(\theta) \dots p_k^{r_k-s_k}(\theta) V,$$
for $s_i \le r_i$. Put $U_{s_1, \dots, s_k} = B + V_{s_1, \dots, s_k}$.
\begin{theorem} Let $L = \alg a \rangle$. Then the $U_{s_1, \dots, s_k}$ are the only subalgebras of $L$ not contained in $V$. The lattice interval $L \div B$ is the lattice product of $k$ chains of lengths $r_1, \dots, r_k$.
\end{theorem}
\begin{proof} Let $C$ be a subalgebra of $L$ not contained in $V$. Then $C$ has an element $c = b + b_1 + v$ where $b_1 \in B^2$ and $v \in V_1$. As above, we obtain a nilpotent element $c_0 = c+w \in C$ where $w \in V$. By Lemma \ref{ifnilp}, $c_0$ generates $B$. Thus $B \subseteq C$. It follows also that $v \in C$, and the invariant subspace generated by $v$ is contained in $C$. It follows that $C = U_{s_1, \dots, s_k}$ for some $s_1, \dots, s_k$. That the lattice interval is the product of chains as described follows.
\end{proof}
To illustrate this, I consider the case $r=0$, $k=1$, $r_1=1$ and $p_1(x) = x-1$.
\begin{example}\label{ex-schain} Let $L = \langle b, v_1, v_2\rangle$ with $b^2 = v_ix = 0$ for all $x \in L$ and $bv_1 = v_1+v_2$, $bv_2 = v_2$. Then $L = \alg b+v_1\rangle$ and $\lat(L)$ is
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(30,30)(-15,-2)
\put(8,16){\circle*{2}} \put(0,8){\circle*{1}} \put(16,8){\circle*{2}} \put(8,0){\circle*{2}}
\put(0,8){\line(1,1){8}} \put(8,0){\line(1,1){8}}
\put(0,8){\line(1,-1){8}} \put(8,16){\line(1,-1){8}}
\put(8,8){\circle*{1}} \put(8,0){\line(0,1){16}}
\put(3,8){\circle*{0.7}} \put(4,8){\circle*{0.7}} \put(5,8){\circle*{0.7}}
\put(13,8){\circle*{0.7}} \put(11,8){\circle*{0.7}} \put(12,8){\circle*{0.7}}
\put(-7,7){$\langle v_2 \rangle$} \put(17.5,7){$\langle v_1 \rangle$} \put(9.5,15){$V = V_1$} \put(10,-1){$0$}
\put(-12,8){\circle*{2}} \put(8,0){\line(-5,2){20}}
\put(-12,8){\line(1,1){16}} \put(4,24){\circle*{2}}
\put(-4,16){\circle*{1}}
\put(4,24){\line(1,-2){4}} \put(-4,16){\line(1,-2){4}}
\put(5.5,23){$L$} \put(-18,7){$\langle b \rangle$}
\put(-14.5,15){$\langle b,v_2 \rangle$}
\end{picture}\end{center}
\end{example}
Observe that the emphasised points $L,V_1, \langle b\rangle, \langle v_1\rangle, 0$ form a sublattice, and that $\lat(L)$ is not modular. Indeed, for any one-generator algebra with $\dim(V_1) > 1$, taking $v_1$ an element which generates $V_1$ under the action of $\theta$, we obtain in this way the standard non-modular lattice.
\begin{definition} The \textit{signature} of the one-generator Leibniz algebra $L$ is the list $[r|r_1, \dots, r_k|d_1, \dots, d_k]$ where $d_i$ is the degree of the irreducible polynomial $p_i(x)$. If $k=1$, we call $L$ a \textit{single-chain algebra}.
\end{definition}
Clearly, from the signature and knowledge of the field $F$, one can reconstruct $\lat(L)$. The algebra $L$ of Example \ref{ex-schain} is a single-chain algebra with signature $[0|2|1]$.
\begin{lemma} \label{lem-schain} Suppose that $L$ is a single-chain algebra. Then $L'$ is a single-chain algebra with the same signature as $L$. If $\dim(L) > 2$, then $\phi(\Leib(L)) = \Leib(L')$.
\end{lemma}
\begin{proof} $L$ has exactly two maximal subalgebras, so $L'$ has exactly two maximal subalgebras. A vector space cannot be the set union of two proper subspaces, so there exists $a' \in L'$ which is not contained in any maximal subalgebra. Thus $\alg a' \rangle = L'$. Let $[r|r_1|d_1]$ be the signature of $L$. Let $M$ be the maximal subalgebra containing $B$.
The lattice of $V$ is the vector space lattice of dimension $\dim(V) = r+r_1d_1$ and it follows that $\dim(L') \ge 1+r+r_1d_1$, while chains in $\lat(M)$ have length at most $r+r_1d_1$. If $\Leib(L') = M'$, then $M$ also has the $(r+ r_1d_1)$-dimensional vector space lattice. But $M$ has at most two maximal subalgebras, one containing $B$ and $M \cap V$. This requires $\dim(M) = 1$ and $\dim(L)=2$. In this case, $L \simeq L'$. If $\dim(L)>2$, then $\Leib(L') = V'$, and the signature of $L'$ can be read from the length of the chain $L' \div B'$ and the dimensions of $B' \cap V'$ and $V'$.
\end{proof}
\begin{lemma}\label{lem-kchains} Let $L$ be a one-generator Leibniz algebra and suppose that the number $k$ of chains is greater than $1$. Then $L'$ is a one-generator algebra with the same signature as $L$ and $\phi(\Leib(L)) = \Leib(L')$.
\end{lemma}
\begin{proof} Let $[r|r_1, \dots, r_k|d_1, \dots, d_k]$ be the signature of $L$. For each $i$, we have a single-chain subalgebra $C_i \supset B$ with signature $[r|r_i|d_i]$ such that $L \div B$ is the product of the chains $C_i\div B$.
I prove first that $B' \not\subseteq \Leib(L')$. For this, it is sufficient to consider the case $k=2$. If $r>0$ or if $r_id_i>1$ for some $i$, then by Lemma \ref{lem-schain}, $B' \not\subseteq \Leib(L')$.
So suppose that $r=0$ and that $r_i = d_i = 1$ for $i=1,2$. Then $\lat(L)$ is
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(50,40)(0,-5)
\put(20,30){\circle*{1}} \put(18.5,31.5){$L$}
\put(10,20){\circle*{1}} \put(5,19){$C_1$}
\put(20,20){\circle*{1}} \put(22,19){$C_2$}
\put(40,20){\circle*{1}} \put(42,19){$V$}
\put(10,20){\line(1,1){10}} \put(20,20){\line(0,1){10}}
\put(20,30){\line(2,-1){20}}
\put(10,10){\circle*{1}} \put(5,9){$B$}
\put(10,10){\line(0,1){10}} \put(10,10){\line(1,1){10}}
\put(30,10){\circle*{1}} \put(40,10){\circle*{1}}
\put(20,20){\line(2,-1){20}} \put(10,20){\line(2,-1){20}}
\put(30,10){\line(1,1){10}} \put(40,10){\line(0,1){10}}
\put(30,0){\circle*{1}} \put(29,-3.5){$0$}
\put(10,10){\line(2,-1){20}} \put(30,0){\line(0,1){10}}
\put(30,0){\line(1,1){10}}
\put(30,0){\line(2,1){20}}
\put(50,10){\circle*{1}} \put(40,20){\line(1,-1){10}}
\put(43,10){\circle*{0.7}} \put(44,10){\circle*{0.7}} \put(45,10){\circle*{0.7}}
\end{picture}\end{center}
As $C_1, C_2$ do not have vector space lattices, $\Leib(L')$ cannot be $C'_1$ or $C'_2$. If $B' \subseteq \Leib(L')$, then $B' = \Leib(L')$ and $L'/B'$ is a Lie algebra, contrary to $L' \div B'$ being the diamond lattice.
We now have, whatever the signature of $L$, that $\phi(\Leib(B))$, $\phi(\Leib(C_i))$ are all contained in $\Leib(L')$. But they generate $V'$, so $V' \subseteq \Leib(L')$. As $V'$ is a maximal subalgebra of $L'$, this implies that $V' = \Leib(L')$.
Take an element $b'$ which generates $B'$ and let $\beta\co V' \to V'$ be left multiplication by $b'$. Then the minimum polynomial of $\beta|_{C'_i \cap V'}$ is $x^rq^{r_i}_i(x)$ for some irreducible polynomial $q_i(x)$. I now prove that the $q_i$ are distinct.
Suppose that $q_1=q_2$. Let $W_i$ be a minimal invariant subspace of $C'_i \cap V'$ with minimum polynomial $q_i(x)$. Then $W_1 \simeq W_2$ as $F[\beta]$-modules. Take an isomorphism $\gamma \co W_1 \to W_2$ and put $W^* = \{w+\gamma(w) \mid w \in W_1 \}$. Then $W^* $ is an invariant subspace and $B'+W^*$ is a subalgebra not in the product of chains. Therefore $q_1 \ne q_2$.
We now have that $\beta^rV'$ is generated as $F[\beta]$-module by some single element $w$, and it follows that $\alg b'+w \rangle = L'$.
\end{proof}
\section{Recognising $\Leib(L)$}
In this section, $L,L'$ are Leibniz algebras and $\phi \co \lat(L) \to \lat(L')$ is a lattice isomorphism. The aim is to prove that $\phi(\Leib(L)) = \Leib(L')$. For this to fail, we must have a diamond subalgebra $\langle b,v\rangle$, $bv = v$, with $\langle b \rangle'\subseteq \Leib(L')$. I shall assume this and show that, if $\dim(L) \ge 3$, then there are other subalgebras whose relation to $\langle b, v\rangle$ makes this impossible. It is convenient to represent data as a geometric configuration, with points representing 1-dimensional subalgebras and lines representing 2-dimensional subalgebras, all of whose subspaces are subalgebras. Thus the lines represent Lie subalgebras. Broken lines are used to represent 2-dimensional subalgebras which have subspaces that are not subalgebras. Their lattice of subalgebras is the diamond lattice, the product of two chains each of length 1.
\begin{theorem} Let $L,L'$ be Leibniz algebras and let $\phi \co \lat(L) \to \lat(L')$ be a lattice isomorphism. Suppose $\dim(L) \ge 3$. Then $\phi(\Leib(L)) = \Leib(L')$.
\end{theorem}
\begin{proof} In the notation set out above, I assume that $\langle b \rangle' \subseteq \Leib(L')$. I investigate and eliminate a number of cases.
\textit{Case 1:} Suppose that there exists $x \in L$, $x^2 = xb=bx=xv=vx=0$. Since $(b+\lambda x)v=v$, we have that $\langle (b+\lambda x), v\rangle$ is a diamond subalgebra for all $\lambda \in F$.
\begin{center} \setlength{\unitlength}{1mm}
\begin{picture}(100,40)
\put(50,30){\circle*{1}} \put(50,10){\circle*{1}}
\put(30,10){\circle*{1}} \put(70,10){\circle*{1}}
\multiput(52,28)(6,-6){3}{\line(1,-1){4}}
\multiput(48,28)(-6,-6){3}{\line(-1,-1){4}}
\put(25,10){\line(1,0){50}}
\put(50,5){\line(0,1){30}}
\put(52,29){$\langle v \rangle$}
\put(52,6){$\langle x \rangle$}
\put(28,6){$\langle b \rangle$}
\put(65,6){$\langle b+x \rangle$}
\end{picture}
\end{center}
For suitable choice of $b'$ and $x'$, we have $b'+x' \in \langle b+x\rangle'$. Since $b' \in \Leib(L')$, we can choose $v'$ such that $v'b'=b'$. Since $\langle v', b'+x'\rangle$ is a diamond algebra, we must have $v'(b'+x') = \lambda(b'+x')$ for some $\lambda \in F$. But $b', (b'+x') \in \Leib(L')$, so $x' \in \Leib(L')$. Since $\langle v', x' \rangle$ is a Lie algebra, $v'x' = - x'v'=0$ and $v'(b'+x') = b'$ contrary to $v'(b'+x') = \lambda(b'+x')$. Thus Case 1 is impossible.
\textit{Case 2:} Suppose that $\dim(\Leib(L)) > 1$. Then there exists $w \in \Leib(L)$ not in $\langle v\rangle$. If the space $W$ generated by $w$ under the action $\theta$ of $b$ contains an element $w_0$ such that $\theta w_0 = 0$, then we have Case 1. Therefore $\theta$ acts non-singularly on $W$ and $\alg b+w\rangle \supset W$. If $\dim(W) >1$, then we cannot have $b' \in \Leib(L')$, so $bw=\lambda w$, $\lambda \ne 0$.
If $\lambda \ne 1$, then for every $\mu \ne 0$, $b$ and $v+\mu w$ generate a 3-dimensional subalgebra. If $\lambda = 1$, then for all $\mu$, $b$ and $v+\mu w$ generate a diamond algebra.
\begin{center} \setlength{\unitlength}{1mm}
\begin{picture}(120,40)
\put(30,30){\circle*{1}} \put(30,10){\circle*{1}}
\put(10,10){\circle*{1}} \put(50,10){\circle*{1}}
\multiput(32,28)(6,-6){3}{\line(1,-1){4}}
\multiput(28,28)(-6,-6){3}{\line(-1,-1){4}}
\put(5,10){\line(1,0){50}}
\put(32,29){$\langle b \rangle$}
\put(23.5,6){$\langle v+\mu w \rangle$}
\put(8,6){$\langle v \rangle$}
\put(47.5,6){$\langle w \rangle$}
\put(16,0){Case 2(a): $\lambda \ne 1$}
\put(90,30){\circle*{1}} \put(90,10){\circle*{1}}
\put(70,10){\circle*{1}} \put(110,10){\circle*{1}}
\multiput(92,28)(6,-6){3}{\line(1,-1){4}}
\multiput(88,28)(-6,-6){3}{\line(-1,-1){4}}
\put(65,10){\line(1,0){50}}
\put(92,29){$\langle b \rangle$}
\put(83.5,6){$\langle v+\mu w \rangle$}
\put(68,6){$\langle v \rangle$}
\put(107.5,6){$\langle w \rangle$}
\multiput(90,12)(0,6){3}{\line(0,1){3}}
\put(76,0){Case 2(b): $\lambda= 1$}
\end{picture}
\end{center}
Since $b' \in \Leib(L')$, for suitable choice of $v', w'$, we have $v'b' = b'$ and $w'b'=b'$. But this implies that $(v'-w')$ and $b'$ generate a 2-dimensional Lie algebra, contrary to the lattice information. Therefore Case 2 is impossible.
\textit{Case 3:} $\dim(\Leib(L)) = 1$. Take $x \notin \langle b,v \rangle$. If $x^2 \ne 0$ but $x^3 = 0$, then $\phi(\langle x^2 \rangle )\subseteq \Leib(L')$ contrary to assumption. Therefore either $x^2 = 0$ or $\langle x, v\rangle$ is the diamond algebra. In either case, there exists $c = x + \lambda v$ with $c^2 = 0$. Either $cv=v$ or $cv=0$. Since $v' \notin \Leib(L')$, if $cv = v$, then $c' \in \Leib(L')$ and $\dim(\Leib(L')) >1$, contrary to Case 2 applied to $\phi^{-1}$. So $cv=0$. But this implies that $x^2 = xv= 0$. Therefore, for every $x \notin \langle b,v \rangle$, we have $xv=0$. But $xv=0$ and $(x+b)v = 0$ implies $bv=0$ contrary to assumption. Thus Case 3 also is impossible.
\end{proof}
\bibliographystyle{amsplain}
| {
"timestamp": "2011-03-01T02:01:15",
"yymm": "1102",
"arxiv_id": "1102.5464",
"language": "en",
"url": "https://arxiv.org/abs/1102.5464",
"abstract": "I describe the lattice of subalgebras of a one-generator Leibniz algebra. Using this, I show that, apart from one special case, a lattice isomorphism between Leibniz algebras L, L' maps the Leibniz kernel of L to that of L'.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Lattices of subalgebras of Leibniz algebras",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517443658749,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.708960643866843
} |
https://arxiv.org/abs/1611.08253 | A geometric proof of Lück's vanishing theorem for the first $L^2$-Betti number of the total space of a fibration | A significant theorem of Lück says that the first $L^2$-Betti number of the total space of a fibration vanishes under some conditions on the fundamental groups. The proof is based on constructions on chain complexes. In the present paper, we translate the proof into the world of CW-complexes to make it more accessible. | \section{Introduction}
In \cite{Lueck}, Lück proved the following significant theorem:
\begin{theorem}[{\cite[Theorem 3.1]{Lueck}}]
Let $F\xrightarrow{i} E\xrightarrow{p}B$ be a fibration of connected CW-complexes such that $F$ and $B$ have finite $2$-skeletons. Then $E$ has finite $2$-skeleton up to homotopy. If the image of $\pi_1(F)\to\pi_1(E)$ is infinite and $\pi_1(B)$ contains $\mathbb{Z}$ as a subgroup, then the first $L^2$-Betti number of $E$ vanishes:
$b_1(E)=0$.
\end{theorem}
Lück's proof is based on somewhat abstract constructions in the world of chain complexes, which make it quite hard to understand what really is going on geometrically.
On closer examination, however, it turns out that most of these constructions do have a counterpart already at the level of CW-complexes.
The purpose of the present paper is to elaborate these geometric counterparts and thereby to translate Lück's proof into the world of CW-complexes.
The hope is that the geometric version of the proof is more accessible to the generic reader.
It should be said that the present paper is not meant to be considered independently of the original paper \cite{Lueck}.
In particular, we use without any recapitulation the same notation and assume that the reader is familiar with the basic results in \cite[Sections 1 \& 2]{Lueck}.
Furthermore, we only re-prove the original theorem shown above, but not any generalization such as \cite[Theorem 6.67]{LueckBuch} (although the proof of the latter theorem contains a geometric construction which exhibits a slight similarity to what we do here).
After all, the purpose of this paper is to simplify matters, not to complicate them.
The fact that our proof takes more space than the original proof in \cite{Lueck} is mainly due to the fact that we included a few more details.
\section{Outline of proof}
The idea of the proof is as follows. We will construct a more accessible CW-complex $T$ and a $1$-connected map $h:T\to E$ for which we can directly prove
\[b_1(E)\leq b_1(T,(h_*)^*\ell^2(\pi_1(E)))=0\,.\]
In preparation for the proof we shall make the set-up precise. First of all, we assume that $E$ has finite $2$-skeleton, and we can and will assume that all maps appearing (including loops defined on the unit interval $I:=[0,1]$ with the obvious cell structure) are cellular. Secondly, we choose $0$-cells $e\in E$ and $b:=p(e)\in B$ as basepoints and let $F:=p^{-1}(b)$ with basepoint $e\in F$.
Now, denote $\pi:=\pi_1(B,b)$, $\Gamma:=\pi_1(E,e)$ and $\Delta=\im(i_*:\pi_1(F,e)\to\Gamma)$. Thus, we obtain a group extension
\[1\to\Delta\to\Gamma\xrightarrow{p_*}\pi\to 1\,.\]
For each $w\in\pi$ we choose some arbitrary pre-image $\overline{w}\in\Gamma$ under $p_*$. We shall also use the same letters $w,\overline{w}$ for representing loops $I\to B$, $I\to E$, respectively, and assume $w=p\circ \overline{w}$.
Choose a solution $h(w)$ to the lifting problem
\[\xymatrix@C=16ex{
F\times\{0\}\cup\{e\}\times I \ar[r]^-{i\cup\overline{w}} \ar[d]_{\text{incl.}}
&E\ar[d]^{p}
\\F\times I\ar[r]_-{w\circ \operatorname{pr}_I}\ar[ur]^{h(w)}
&B
}\]
and denote $\sigma(w):=h(w)(\_,1):(F,e)\to (F,e)$.
The pointed homotopy class of $\sigma(w)$ is independent of the choices made and is called the pointed fibre transport along $w$.
Denote by
\[T_{\sigma(w)}:=F\times I/(x,1)\sim(\sigma(w)x,0)\]
the mapping torus of $\sigma(w)$.
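For instance, if $\sigma(w)$ is the identity of $F$, then $T_{\sigma(w)}=F\times S^1$; in general, $T_{\sigma(w)}$ fibres over $S^1$ with fibre $F$.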
We are now ready to define the CW-complex $T$. Choose a generating set $S=\{s_1,\dots, s_g\}$ of $\pi$ such that $s_1$ has infinite order and apply the constructions above to each $w\in S$. Then $T$ is obtained by gluing together $T_{\sigma(s_1)},\dots,T_{\sigma(s_g)}$ along the common subcomplex $F\times\{0\}$. It is obviously connected, because $F$ is connected.
All $h(s_1),\dots,h(s_g)$ together assemble to a map $h:T\to E$ which fits into the self-explanatory commutative diagram
\[\xymatrix@C=16ex{
F\ar@{=}[d]\ar[r]^{\times\{0\}}
&T\ar[d]^h\ar[r]^{\text{proj.}}
&\bigvee_{n=1}^gS^1\ar[d]^{\bigvee_{n=1}^gs_n}
\\F\ar[r]^i
&E\ar[r]^p&B\,.
}\]
On fundamental groups, this induces
\[\xymatrix@C=16ex{
\pi_1(F,e)\ar@{->>}[d]^{i_*}\ar[r]
&\pi_1(T,(e,0))\ar[d]^{h_*}\ar@{->>}[r]
&\mathbb{Z}^{*g}\ar@{->>}[d]
\\\Delta\ar@{^{(}->}[r]
&\Gamma\ar@{->>}[r]^{p_*}&\pi
}\]
and exactness of the lower row together with the indicated surjectivity of some of the maps immediately implies that $h_*$ is surjective, too, and so $h$ is $1$-connected.
Denote by $\widetilde E\to E$ and $\widetilde T\to T$ the universal coverings and by $\widehat{T}\xrightarrow{t} T$ the connected covering of $T$ associated to the subgroup $\ker(h_*)$. Thus, the latter has deck transformation group $\Gamma$ and there is a $\Gamma$-equivariant lift $\widehat{h}:\widehat{T}\to \widetilde E$ of $h$.
We obtain a $1$-connected $\mathbb{Z}\Gamma$-chain map of free $\mathbb{Z}\Gamma$-chain complexes
\[\mathbb{Z}\Gamma\otimes_{\mathbb{Z}\pi_1(T)}C_*(\widetilde T)\cong C_*(\widehat{T})\xrightarrow{\widehat{h}_*}C_*(\widetilde E)\]
and the proof of \cite[Lemma 1.2.1]{Lueck} implies
\[b_1(E)\leq b_1(T,(h_*)^*\ell^2\Gamma)\,.\]
In the following section, we shall provide a more concrete construction of $\widehat{T}$ which allows us to calculate the right hand side of this inequality directly in the final section.
\section{Explicit construction of $\widehat{T}$}
Denote by $f:\overline{F}\to F$ the connected covering corresponding to the subgroup $\ker(i_*)$, which has $\Delta$ as deck transformation group.
Choose any $0$-cell $\overline{e}\in\overline{F}$ with $f(\overline{e})=e$ as basepoint.
For arbitrary $w$, the map $h(w):F\times I\to E$ is a homotopy between $i\circ\sigma(w)$ and $i$, which implies
\[\im((\sigma(w)\circ f)_*)=(\sigma(w))_*(\ker(i_*))=\ker(i_*)=\im(f_*)\]
and thus $\sigma(w)$ lifts to a map $\overline{\sigma}(w):\overline{F}\to\overline{F}$ which fixes $\overline{e}$.
This map is not $\Delta$-equivariant, but:
\begin{lemma}\label{lem:noncommuting}
For arbitrary $\delta\in\Delta$ we have
\[\overline{\sigma}(w)\circ\delta= \underbrace{\overline{w}^{-1}\delta\overline{w} }_{\in\Delta}\,\circ\,\overline{\sigma}(w)\,.\]
\end{lemma}
\begin{proof}
Note that both sides are lifts $\overline{F}\to\overline{F}$ of the map $\sigma(w):F\to F$. It therefore suffices to prove the equality at the point $\overline{e}$, i.\,e.\ that $\overline{\sigma}(w)(\delta\cdot\overline{e})= (\overline{w}^{-1}\delta\overline{w})\cdot \overline{e}$.
Denote a representative loop $I\to F\subset E$ of $\delta$ by the same letter and let $\overline{\delta}$ be a lift of $\delta$ to $\overline{F}$ with $\overline{\delta}(0)=\overline{e}$. Then $\delta\cdot \overline{e}$ is defined as $\overline{\delta}(1)$.
With this data at hand, the point $\overline{\sigma}(w)(\delta\cdot\overline{e})$ is defined as $\alpha(1)$, where $\alpha:I\to \overline{F}$ is the lift of the loop $\sigma(w)\circ\delta$ with starting point $\alpha(0)=\overline{e}$. In other words, the action of $\sigma(w)\circ\delta\in\Delta$ takes $\overline{e}$ to $\overline{\sigma}(w)(\delta\cdot\overline{e})$.
But $\sigma(w)\circ\delta= \overline{w}^{-1}\delta\overline{w}$ in $\Gamma$ and therefore also in $\Delta$, because $h(w)$ gives rise to a homotopy in $E$ between those loops. This proves the claim.
\end{proof}
Denote by $\widehat{F}:=\Gamma\times\overline{F}/\Delta$ the $\Gamma$-CW-complex obtained from $\Gamma\times\overline{F}$ by dividing out the equivalence relation $(\gamma,x)\sim(\gamma\delta^{-1},\delta x)$. The $\Gamma$-action is the obvious left action on the first component.
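Unwinding the definition: after choosing a representative for each coset, every class in $\widehat{F}$ has a unique representative of the form $(\gamma_0,x)$, so as a CW-complex
\[\widehat{F}\;\cong\;\coprod_{\gamma\Delta\in\Gamma/\Delta}\overline{F}\,,\]
a disjoint union of copies of $\overline{F}$ indexed by the cosets $\Gamma/\Delta$, and the $\Gamma$-action permutes these copies.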
Lemma \ref{lem:noncommuting} now implies that the $\Gamma$-equivariant map
\[\widehat{\sigma}(w):[(\gamma,x)]\mapsto [(\gamma\overline{w},\overline{\sigma}(w)x)]\]
is well-defined.
Denote by $T_{\widehat{\sigma}(w)}=\widehat{F}\times I/\sim$ the mapping torus of $\widehat{\sigma}(w)$.
We now define $\widehat{T}$
by gluing together the $T_{\widehat{\sigma}(s_1)},\dots,T_{\widehat{\sigma}(s_g)}$ along the common $\widehat{F}\times\{0\}$ and claim that it is exactly the covering described in the previous section.
First of all, note that $\widehat{T}$ is indeed a covering of $T$ with each of the subcomplexes $T_{\widehat{\sigma}(s_n)}$ covering the corresponding subcomplex $T_{\sigma(s_n)}$, and clearly, the canonical $\Gamma$-action on $\widehat{T}$ coming from the action on $\widehat{F}$ is by deck transformations.
\begin{lemma}
The space $\widehat{T}$ is connected.
\end{lemma}
\begin{proof}
Note that it clearly suffices to show that the points of
\[t^{-1}\{(e,0)\}=\{[(\gamma,\overline{e},0)]\,|\,\gamma\in\Gamma\}\]
can be connected by paths in $\widehat{T}$.
For each $n=1,\dots,g$ and $\gamma\in\Gamma$, the path
\[\widehat{s}_{n,\gamma}:\,I\to T_{\widehat{\sigma}(s_n)}\subset \widehat{T}\,,\quad r\mapsto [(\gamma,\overline{e},r)]\]
connects
$[(\gamma,\overline{e},0)]$ with \[[(\gamma,\overline{e},1)]=[(\gamma \overline{s_n},\overline{\sigma}(s_n)\overline{e},0)]=[(\gamma \overline{s_n},\overline{e},0)]\]
and is mapped to $\overline{s_n}$, $s_n$ under $h\circ t$ and $p\circ h\circ t$, respectively.
By applying this repeatedly, we see that each $[(\gamma,\overline{e},0)]$ is connected to $[(\delta,\overline{e},0)]=[(1,\delta\overline{e},0)]$ for some $\delta\in \ker(p_*)=\Delta$, and this in turn is connected to $[(1,\overline{e},0)]$, because $\overline{F}$ is path connected.
\end{proof}
\begin{lemma}
If $\tau:I\to\widehat{T}$ is a path connecting $[(\gamma',\overline{e},0)]$ to
$[(\gamma'\gamma,\overline{e},0)]$, then
$h\circ t$ maps $\tau$ to a representative loop $I\to E$ of $\gamma$.
\end{lemma}
\begin{proof}
We have already seen this for $\tau$ being one of the paths $\widehat{s}_{n,\gamma'}$ defined in the proof of the previous lemma. It is also clear for $\tau$ a path within $\widehat{F}\times\{0\}$, because any such path is of the form $r\mapsto [(\gamma',\tau'(r),0)]$ with $\tau'$ a path in $\overline{F}$ satisfying $\tau'(0)=\overline{e}$ and $(\gamma'\gamma,\overline{e})\sim (\gamma',\tau'(1))$, which implies $\gamma\in\Delta$ and $\tau'(1)=\gamma\overline{e}$.
The set of all paths which satisfy the claim is clearly closed under concatenation and taking reversed paths. It is thus sufficient to show that any path $\tau$ satisfying the prerequisites of the lemma can be homotoped into a concatenation of the $\widehat{s}_{n,\gamma'}$ and their inverses and paths within $\widehat{F}\times\{0\}$.
By cellular approximation and a subsequent homotopy within the parameter space $I$, any such $\tau$ can be written as a concatenation of finitely many paths $\tau_1,\dots,\tau_k$, each of which is a constant speed path along a $1$-cell of $\widehat{T}$.
These are either contained in $\widehat{F}\times\{0\}$ or run along a $1$-cell of the form $c\times I\subset T_{\widehat{\sigma}(s_n)}\subset \widehat{T}$
with $c$ being a $0$-cell of $\widehat{F}$.
Denote the paths along the latter in positive direction by $\rho_{n,c}$.
Note that for $c=[(\gamma,\overline{e})]$ we recover the path $\widehat{s}_{n,\gamma}$.
Any $c$ which is not of this form can be connected to some $[(\gamma,\overline{e})]$ by a path $\alpha:I\to\widehat{F}$ and an obvious homotopy in $\widehat{T}$ shows
\[\rho_{n,c}\cdot(\widehat{\sigma}(s_n)\circ\alpha)\simeq \alpha\cdot\widehat{s}_{n,\gamma}\,.\]
This allows us to trade any of the $\tau_m$ which is equal to some $\rho_{n,c}$ (or its inverse) for a concatenation of two paths in $\widehat{F}$ and one of the $\widehat{s}_{n,\gamma}$ (or its inverse) in between.
This shows the claim.
\end{proof}
The last two lemmas imply that $\widehat{T}$ is exactly the covering associated to $\ker(h_*)$: it is connected, and if $\tau:I\to\widehat{T}$ maps to a loop in $T$ based at $(e,0)$, then $\tau$ itself is a loop if and only if $h_*[t\circ\tau]$ is trivial.
Furthermore, the last lemma shows that the two canonical actions of $\Gamma$ on $\widehat{T}$ as deck transformations, the action coming from general covering theory and the action induced by the $\Gamma$-action on $\widehat{F}$, are in fact the same.
\section{Calculating $b_1(T,(h_*)^*\ell^2\Gamma)=0$}
The proof of the theorem is completed by calculating $b_1(T,(h_*)^*\ell^2\Gamma)=0$.
Note that $\widehat{T}\setminus T_{\widehat{\sigma}(s_1)}=\coprod_{n=2}^g\widehat{F}\times(0,1)$ and we therefore obtain a short exact sequence of $\Gamma$-chain-complexes
\[0\to C_*(T_{\widehat{\sigma}(s_1)})\to C_*(\widehat{T})\to \bigoplus_{n=2}^g C_{*-1}(\widehat{F})\to 0\,.\]
This induces by \cite[Thm. 2.1 on p.10]{CheegerGromov} a weakly exact $L^2$-homology sequence
\begin{align*}
H_1(\ell^2\Gamma\otimes_{\mathbb{Z}\Gamma}C_*(T_{\widehat{\sigma}(s_1)}))
&\to H_1(\ell^2\Gamma\otimes_{\mathbb{Z}\Gamma}C_*(\widehat{T}))
\to \bigoplus_{n=2}^g H_0(\ell^2\Gamma\otimes_{\mathbb{Z}\Gamma}C_*(\widehat{F}))\,.
\end{align*}
On the right hand side, the von Neumann dimension of the summands is $b_0(F,(i_*)^*\ell^2\Gamma)$, which vanishes by \cite[Lemma 1.2.5]{Lueck} as $\im(i_*)=\Delta$ is infinite.
The von Neumann dimension of the left hand side is $b_1(T_{\sigma(s_1)},(\phi_*)^*\ell^2\Gamma)$, where $\phi:T_{\sigma(s_1)}=F\times I/{\sim}\to E$ is a quotient of $h(s_1)$.
Let $\Gamma'\subset\Gamma$ be the image of $\phi_*$, which is exactly the subgroup of $\Gamma$ generated by $\Delta$ and $\overline{s_1}$. As $s_1\in\pi$ has infinite order, the canonical map $\pi_1(T_{\sigma(s_1)},e)\to\mathbb{Z}$ factors as $\pi_1(T_{\sigma(s_1)},e)\xrightarrow{\phi'}\Gamma'\to\mathbb{Z}$.
Using \cite[Lemma 1.2.3 and Theorem 2.1]{Lueck} we conclude
\[b_1(T_{\sigma(s_1)},(\phi_*)^*\ell^2\Gamma)=b_1(T_{\sigma(s_1)},(\phi')^*\ell^2\Gamma')=0\,.\]
Thus, the weakly exact sequence implies that the von Neumann dimension of the middle term $H_1(\ell^2\Gamma\otimes_{\mathbb{Z}\Gamma}C_*(\widehat{T}))$, which is exactly $b_1(T,(h_*)^*\ell^2\Gamma)$, vanishes as well and the proof of the theorem is complete.
\bibliographystyle{plain}
| {
"timestamp": "2016-11-28T02:07:16",
"yymm": "1611",
"arxiv_id": "1611.08253",
"language": "en",
"url": "https://arxiv.org/abs/1611.08253",
"abstract": "A significant theorem of Lück says that the first $L^2$-Betti number of the total space of a fibration vanishes under some conditions on the fundamental groups. The proof is based on constructions on chain complexes. In the present paper, we translate the proof into the world of CW-complexes to make it more accessible.",
"subjects": "Algebraic Topology (math.AT)",
"title": "A geometric proof of Lück's vanishing theorem for the first $L^2$-Betti number of the total space of a fibration",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978051741806865,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7089606420118929
} |
https://arxiv.org/abs/1111.2730 | A statistical and computational theory for robust and sparse Kalman smoothing | Kalman smoothers reconstruct the state of a dynamical system starting from noisy output samples. While the classical estimator relies on quadratic penalization of process deviations and measurement errors, extensions that exploit Piecewise Linear Quadratic (PLQ) penalties have been recently proposed in the literature. These new formulations include smoothers robust with respect to outliers in the data, and smoothers that keep better track of fast system dynamics, e.g. jumps in the state values. In addition to L2, well known examples of PLQ penalties include the L1, Huber and Vapnik losses. In this paper, we use a dual representation for PLQ penalties to build a statistical modeling framework and a computational theory for Kalman smoothing. We develop a statistical framework by establishing conditions required to interpret PLQ penalties as negative logs of true probability densities. Then, we present a computational framework, based on interior-point methods, that solves the Kalman smoothing problem with PLQ penalties and maintains the linear complexity in the size of the time series, just as in the L2 case. The framework presented extends the computational efficiency of the Mayne-Fraser and Rauch-Tung-Striebel algorithms to a much broader non-smooth setting, and includes many known robust and sparse smoothers as special cases. | \section{Introduction}
Consider the following discrete-time linear state-space model
\begin{equation}
\label{LinearGaussModel}
\begin{array}{rcl}
x_1&=&x_0 + w_1
\\
x_k & = & G_k x_{k-1} + w_k, \qquad k=2,3,\ldots,N
\\
z_k & = & H_k x_k + v_k, \quad \qquad k=1,2,\ldots,N
\end{array}
\end{equation}
where $x_k \in \mB{R}^n$ is the state, $x_0$ is known, $z_k \in
\mB{R}^m$ contains noisy output samples, $G_k$ and $H_k$ are known
matrices. Further, $\{w_k\}$ and $\{v_k\}$ are mutually independent
zero-mean random variables with
covariances given by $\{Q_k\}$ and $\{R_k\}$, respectively.\\
The classical fixed-interval Kalman smoothing problem is to obtain
the (unconditional) minimum variance linear estimator of
the states $\{x_k\}_{k=1}^N$ as a function of $\{z_k\}_{k=1}^N$. It
is well known that the structure of this estimator
is related to the following optimization problem %
\begin{equation}
\label{KSNonlinObjective} \min_{\{x_k\}} %
\sum_{k=1}^N \|z_k - H_k x_k\|_{R_k^{-1}}^2 + \|x_k - G_k
x_{k-1}\|_{Q_k^{-1}}^2
\end{equation}
where $G_1$ denotes the identity matrix and $\|a\|^2_{\Sigma}:=a^\top
\Sigma a$ for every column vector $a$. When data become available,
the solution can be computed by the classical Kalman smoother with the
number of operations linear in $N$. This procedure also provides the
minimum variance estimate
of the states when all the system noises are assumed to be Gaussian.\\
In many circumstances, linear estimators that rely on quadratic penalization of
model deviation, such as (\ref{KSNonlinObjective}), lead to unsatisfactory results. For
instance, they are not robust with respect to the presence of
outliers in the data \citep{Hub,Aravkin2011,Farahmand2011} and may
have difficulties in reconstructing fast system dynamics, e.g. jumps
in the state values \citep{Ohlsson2011}. In addition,
sparsity-promoting regularization is often used in order to extract
from a large measurement or parameter vector a small subset having
greatest impact on the predictive capability of the estimate for
future data. This sparsity principle permeates many well known
techniques in machine learning and signal processing, such as feature
selection, selective shrinkage, and compressed sensing
\citep{Hastie90,LARS2004,Donoho2006}. In many circumstances, when
smoothing is considered, this sparsity principle can be interpreted as a sparse
non-Gaussian prior distribution on the noises entering the system. In these cases,
the estimator (\ref{KSNonlinObjective}) is often replaced by
\begin{equation} \label{prob2}
\sum_{k=1}^N V\left(z_k - H_k x_k;R_k\right) + J\left(x_k - G_k x_{k-1};Q_k \right)
\end{equation}
where, for example, $V$ can be the Huber loss or Vapnik's
$\epsilon$-insensitive loss, used in support vector regression
\citep{Vapnik98,Evgeniou99}, while $J$ may be the $\ell_1$-norm, as
in the LASSO procedure \citep{Lasso1996}.\\
The interpretation of problems such as (\ref{prob2}) in terms of Bayesian estimation
has been extensively studied in the statistical and machine learning literature
in recent years and
probabilistic approaches used in the analysis of estimation and learning algorithms
can be found e.g. in \citep{McKayARD,Tipping2001,Wipf_IEEE_TIT_2011}.
Non-Gaussian model errors and priors leading to a great %
variety of loss and penalty functions are also reviewed in
\citep{Wipf_ARD_NIPS_2006} using convex-type and integral-type
variational representations, with the latter being related to
Gaussian scale mixtures.
The fundamental novelty in this work is that, rather than taking this approach, we start
with a particular class of losses, called PLQ penalties, well known
from the optimization literature \citep{RTRW}. We establish conditions
which allow these losses to be viewed as negative logs of true densities,
ensuring that $w_k$ and $v_k$ in (\ref{LinearGaussModel}) come from
true distributions. This in turn allows us to
interpret the solution to the problem (\ref{prob2}) as a MAP
estimator when the loss functions $V$ and $J$ come from this
subclass of PLQ penalties. We will show that this subclass includes
the four key examples, the $L_2$, $L_1$, Huber, and Vapnik
penalties.\\
The density characterization of PLQ penalties is achieved using a
dual representation, which also underlies the development of algorithms for fitting models of
the form (\ref{prob2}). In particular, in the second part of the
paper we derive the conditions, complementary to those needed to set
up the statistical framework, that allow the development of new and
computationally efficient Kalman smoothers designed using
non-smooth penalties on the process and measurement deviations.
Amazingly, it turns out that the interior point method used in \citep{Aravkin2011}
generalizes perfectly to the entire class of PLQ densities under a
simple verifiable non-degeneracy condition. Hence, the solutions of
all the PLQ Kalman smoothers can be computed with a number of
operations that scales linearly in $N$, as in the quadratic case.
Such theoretical foundation generalizes the results recently
obtained in
\citep{Aravkin2011,AravkinIFAC,Farahmand2011,Ohlsson2011}, framing
them as particular cases of the framework presented here.\\
The paper is organized as follows. In Section \ref{PLQP} we
introduce the class of PLQ convex functions, and provide the
conditions under which they can be interpreted as negative logs
of corresponding densities. In Section \ref{InteriorPointKS} we
present a new PLQ Kalman smoother theorem
that generalizes the well known
Mayne-Fraser two-filter and the Rauch-Tung-Striebel
algorithm \citep{Gelb} to nonsmooth formulations. This theorem is obtained
by solving the Karush-Kuhn-Tucker (KKT) system for PLQ penalties
using interior point methods, and exploiting the state space structure
to obtain the solution in linear time. The necessary
results and proofs supporting the main theorems appear in the Appendix.
We end the paper with a few concluding remarks.
\section{Piecewise Linear Quadratic Penalties and Densities}
\label{PLQP}
\subsection{Preliminaries}
We recall a few definitions from
convex analysis.
\begin{itemize}
\item \index{affine hull} (Affine hull) Define the affine hull of any set
$S$, denoted by $\R{aff}\; S$, as the smallest affine set that
contains $S$.
\item (Cone) For any set $S$, denote by $\R{cone}\; S$ the set $\{ts | s \in S, t
\in \mB{R}_+\}$.
\item \index{polar} (Polar Cone) For any cone $K \subset \mB{R}^m$,
the polar of $K$ is defined to be
\[
K^\circ := \{v | \langle v, w \rangle \leq 0 \; \forall \; w \in
K\}.
\]
\item (Horizon cone) The (convex) horizon cone $C^{\infty}$
is the set of `unbounded directions' for $C$,
i.e. $d\in C^\infty$ if for any point $\bar w \in C$
we have $\bar w + \tau d \in \R{cl}\; C$
for all $\tau \geq 0$.
\end{itemize}
\subsection{PLQ densities}
We now introduce the PLQ penalties and densities that are the focus of this paper.
\begin{definition}
\index{penalties!piecewise linear-quadratic} (piecewise
linear-quadratic penalties) \citep{RTRW}. For a nonempty polyhedral
set $U \subset \mB{R}^m$ and a symmetric positive-semidefinite
matrix $M\in \mB{R}^{m\times m}$ (possibly $M =0$), the function
$\theta_{U, M}: \mB{R}^m \rightarrow \overline{\mB{R}}$ defined by
\begin{equation}
\label{PLQbasic}
\theta_{U, M}(w) := \sup_{u \in U}\left\{\langle u,w \rangle -
\frac{1}{2}\langle u, Mu \rangle\right\}
\end{equation}
\noindent is proper, convex, and piecewise linear-quadratic. When $M
=0$, it is piecewise linear; $\theta_{U, 0}=\sigma_U$, the support
function of $U$. The effective domain of $\theta_{U, M}$, denoted by
$\R{dom}(\theta_{U, M})$, is the set of $w \in \mB{R}^m$ where
$\theta_{U, M}(w) < \infty$, and is given by $(U^\infty \cap \R{Null}(M))^\circ$.
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{definition}
In order to capture the full class of penalties of interest, we
consider injective affine transformations into
$\mB{R}^m$ of the form $b + By$. The requirements on $B$ therefore
are $m \geq n$ and $\mathrm{Null}(B) = \{0\}$. The final technical
requirement we impose is that $b \in \R{dom}\; \theta_{U, M}$.
\begin{definition}(PLQ penalties with shifts and transforms)
\label{generalPLQ} Using (\ref{PLQbasic}),
define $\rho: \mB{R}^n \rightarrow \mB{R}$
as $\theta_{U,M}(b + By)$:
\begin{equation}
\begin{array}{rcl}
\rho_{U, M, b, B}(y) &:=&
\sup_{u \in U}
\left\{ \langle u,b + By \rangle - \frac{1}{2}\langle u, Mu
\rangle \right\} \;
\end{array}
\end{equation}
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{definition}
The following result characterizes the effective domain of $\rho$ (see Appendix for proof).
\begin{theorem}
\label{domainCharTheorem} Let $\rho$ denote $\rho_{U, M, B, b}(y)$, and
$K$ denote $U^\infty \cap \mathrm{Null}(M)$.
Suppose $U \subset \mB{R}^m$ is a
polyhedral set, $y \in \mB{R}^n$, $b \in K^\circ$, $M \in
\mB{R}^{m\times m}$ is positive semidefinite,
and $B\in \mB{R}^{m \times n}$ is injective.
Then we have \((B^\R{T}K)^\circ \subset
\mathrm{dom}(\rho) \) and \((B^\R{T}(K\cap -K))^\perp =
\mathrm{aff}(\mathrm{dom}(\rho))\).
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{theorem}
Note that the functions $\rho$ are still piecewise linear-quadratic.
All of the important examples mentioned before can be
represented in this way, as shown below.
\begin{figure} \label{HuberVapnikFig}
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.29]{Huber-1.pdf}
\hspace{.1in}
{\includegraphics[scale=0.36]{Vapnik-1.pdf}}
%
%
\end{tabular}
\caption{Huber (left) and Vapnik (right) Penalties}
\end{center}
\end{figure}
\begin{remark}[scalar examples]
\label{scalarExamples} The $L_2$, $\ell_1$, Huber, and Vapnik
penalties are representable in the notation of Definition
\ref{generalPLQ}.
\begin{enumerate}
\item $L_2$: Take $U = \B{R}$, $M = 1$, $b = 0$, and $B = 1$. We obtain
\( \displaystyle \rho(y) = \sup_{u \in \B{R}}\left\{ uy -
\frac{1}{2}u^2 \right\}\;. \) The function inside the $\sup$ is
maximized at $u = y$, whence $\rho(y) = \frac{1}{2}y^2$.
\item $\ell_1$: Take $U = [-1, 1]$, $M = 0$, $b = 0$, and $B = 1$. We obtain
\( \displaystyle \rho(y) = \sup_{u \in [-1, 1]}\left\{
uy\right\}\;. \) The function inside the $\sup$ is maximized by
taking $u = \R{sign}(y)$, whence $\rho(y) = |y|$.
\item Huber: Take $U = [-K, K]$, $M = 1$, $b = 0$, and $B = 1$.
We obtain \( \displaystyle \rho(y) = \sup_{u \in [-K,
K]}\left\{ uy - \frac{1}{2}u^2 \right\}\;. \) Take the
derivative with respect to $u$ and consider the following cases:
\begin{enumerate}
\item If $ y < -K $, take $u = -K$ to obtain
$-Ky -\frac{1}{2}K^2$.
\item If $-K \leq y \leq K$, take $u = y$ to obtain
$\frac{1}{2}y^2$.
\item If $y > K $, take $u = K$ to obtain
$Ky -\frac{1}{2}K^2$.
\end{enumerate}
This is the Huber penalty with parameter $K$, shown in the left
panel of Fig. 1; the closed forms of this and the Vapnik penalty are collected after this remark.%
\item Vapnik: take $U = [0,1]\times[0,1]$,
$M = \left[\begin{smallmatrix}0 & 0\\0 & 0
\end{smallmatrix}\right]$, $B = \left[ \begin{smallmatrix} 1\\-1
\end{smallmatrix} \right]$, and $b = \left[ \begin{smallmatrix}
-\epsilon \\-\epsilon \end{smallmatrix} \right]$, for some $\epsilon
> 0$. We obtain \( \rho(y) = \sup_{u_1, u_2 \in [0,1]} \left\langle
\begin{bmatrix}
y-\epsilon\\
-y-\epsilon
\end{bmatrix},
\begin{bmatrix}
u_1\\
u_2
\end{bmatrix}
\right\rangle . \)
We can obtain an explicit representation by considering three cases:
\begin{enumerate}
\item If $|y| < \epsilon$, take $u_1 = u_2 = 0$. Then $\rho(y) = 0$.
\item If $y > \epsilon$, take $u_1 = 1$ and $u_2 = 0$. Then
$\rho(y) = y - \epsilon$.
\item If $y < -\epsilon$, take $u_1 = 0$ and $u_2 = 1$. Then
$\rho(y) = -y - \epsilon$.
\end{enumerate}
This is the Vapnik penalty with parameter $\epsilon$, shown in the
right panel of Fig. 1.%
\end{enumerate}
Note that the affine generalization (Definition \ref{generalPLQ}) is
already needed to express the Vapnik penalty.
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{remark}
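For later reference, the case analyses in Remark \ref{scalarExamples} collapse to the familiar closed forms
\[
\rho_{\mathrm{Huber}}(y)=\begin{cases}\frac{1}{2}y^2, & |y|\leq K,\\ K|y|-\frac{1}{2}K^2, & |y|>K,\end{cases}
\qquad
\rho_{\mathrm{Vapnik}}(y)=\max\{|y|-\epsilon,\,0\}\,.
\]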
In order to characterize PLQ penalties as negative logs
of density functions, we need to ensure the integrability of said
density functions. A function $\rho(x)$ is called \emph{coercive} if
$\lim_{\|x\|\rightarrow \infty}\rho(x) = \infty$, and coercivity
turns out to be the key property to ensure integrability.
The proof of this fact, and the characterization of
coercivity for PLQ penalties using the notation of
Def. \ref{generalPLQ}, are the subject of the
next two theorems (see Appendix for proofs).
\begin{theorem}
\label{PLQIntegrability} (PLQ Integrability). Suppose $\rho(y)$ is
coercive, and let $n_{\R{aff}}$ denote the dimension of
$\R{aff}(\R{dom}\; \rho)$. Then the function $f(y) = \exp(-\rho(y))$
is integrable on $\R{aff}(\R{dom}\; \rho)$ with the
$n_{\R{aff}}$-dimensional Lebesgue measure.
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{theorem}
\begin{theorem}
\label{coerciveRho} (Coercivity of $\rho$). $\rho$ is coercive if
and only if $[B^\R{T}\mathrm{cone}(U)]^\circ = \{0\}$.
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{theorem}
Theorem \ref{coerciveRho} can be used to show the coercivity
of familiar penalties.
\begin{corollary} The penalties $L_2$, $L_1$, Vapnik, and
Huber are all coercive.
\end{corollary} %
{\bf{Proof:}} We show all of these penalties satisfy the hypothesis of Theorem
\ref{coerciveRho}.
\begin{enumerate}
\item $L_2$: $U = \B{R}$ and $B = 1$, so $[B^\R{T}\R{cone}(U)]^\circ =
\B{R}^\circ = \{0\}$.
\item $\ell_1$: $U = [-1, 1]$, so $\R{cone}(U) = \B{R}$, and $B = 1$,
so the proof reduces to that of case 1.
\item Huber: $U = [-K,K]$, so $\R{cone}(U) = \B{R}$, and $B = 1$, so proof reduces to that of case 1.
\item Vapnik: $U = [0,1] \times [0,1]$, so $\R{cone}(U) = \B{R}^2_+$.
$B = \left[ \begin{smallmatrix} 1\\-1 \end{smallmatrix} \right]$, so
$B^\R{T}\R{cone}(U) = \B{R}$, and again we reduce to case 1.
\end{enumerate}
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}%
We now define a family of distributions on $\mB{R}^n$ by
interpreting piecewise linear quadratic functions $\rho$ as negative
logs of corresponding densities.
Note that the support of the distributions is always contained
in the affine set $\R{aff}(\R{dom}\; \rho)$, characterized
in Th. \ref{domainCharTheorem}.
\begin{definition}
\label{PLQDensityDef} \index{density!piecewise linear-quadratic}
(Piecewise linear quadratic densities). Let $\rho$ be any coercive
piecewise linear quadratic function on $\mB{R}^n$ of the form
$\rho_{U, M, B, b}(y) = \theta_{U, M}(b + By)$. Define $\B{p}(y)$
to be the following density on $\mB{R}^n$:
\begin{equation}
\label{PLQdensity} \B{p}(y) =
\begin{cases}
c_1^{-1}\exp(- \rho(y)) & y \in \R{dom}\; \rho\\
0 & \R{else},
\end{cases}
\end{equation}
where
\[
c_1 = \int_{y \in \R{dom}\; \rho} \exp(-\rho(y))\,dy,
\]
and the integral is with respect to the Lebesgue measure of
dimension $\R{dim}\Big(\R{aff}(\R{dom}\; \rho)\Big)$.
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{definition}
PLQ densities are true densities on the affine hull of the domain of $\rho$. %
The proof of Theorem \ref{PLQIntegrability} can be easily adapted to
show that they have moments of all orders.
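For example, in the scalar $L_2$ and $\ell_1$ cases of Remark \ref{scalarExamples} one computes $c_1=\int_{\mathbb{R}}e^{-y^2/2}\,dy=\sqrt{2\pi}$ and $c_1=\int_{\mathbb{R}}e^{-|y|}\,dy=2$, so the corresponding PLQ densities are precisely the standard Gaussian and the Laplace densities.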
\section{Kalman Smoothing with PLQ penalties}
\label{InteriorPointKS}
In this section, we consider the model
(\ref{LinearGaussModel}), but in the general case
where errors $w_k$ and $v_k$ can come from any of the
densities introduced in the previous section. To this
end, we first formulate the KS problem over the entire
sequence $\{x_k\}$.
Given a sequence of column vectors $\{ u_k \}$ and matrices $ \{ T_k
\}$ we use the notation
\[
\R{vec} ( \{ u_k \} ) =
\begin{bmatrix}
u_1 \\ u_2 \\ \vdots \\ u_N
\end{bmatrix}
\; , \; \R{diag} ( \{ T_k \} ) =
\begin{bmatrix}
T_1 & 0 & \cdots & 0 \\
0 & T_2 & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & T_N
\end{bmatrix} \; .
\]
We make the following definitions.
\[
\begin{array}{lll}
& x = \R{vec}\{x_1, \cdots, x_N\}\;,\qquad
&w = \R{vec}\{w_1, \cdots, w_N\}\; \\
&v = \R{vec}\{v_1, \cdots, v_N\}\;, \qquad
&Q = \R{diag}\{Q_1, \cdots, Q_N\}\;\\
&R = \R{diag}\{R_1, \cdots, R_N\}\;, \qquad
&H = \R{diag}\{H_1, \cdots, H_N\}.
\end{array}
\]
We also introduce the matrices $G$ and $H$:
\[
G =
\begin{bmatrix}
\R{I} & 0 & &
\\
-G_2 & \R{I} & \ddots &
\\
& \ddots & \ddots & 0
\\
& & -G_N & \R{I}
\end{bmatrix}\;,\; \quad
H =
\begin{bmatrix}
H_1 & 0 & &
\\
0 & H_2 & \ddots &
\\
& \ddots & \ddots & 0
\\
& & 0 & H_N
\end{bmatrix}\;.
\]
With this notation, model (\ref{LinearGaussModel}) can be written
\begin{equation}
\label{fullStat}
\begin{array}{lll}
Gx
&=&
\tilde x_0 + w\\
z
&=&
Hx + v\;,
\end{array}
\end{equation}
where $x \in \mB{R}^{nN}$ is the entire state sequence of
interest, $w$ is corresponding process noise,
$z$ is the vector of all measurements,
$v$ is the measurement noise, and
$\tilde x_0$ is a vector of size $nN$ with the first
$n$-block equal to $x_0$, the initial state estimate,
and the other blocks set to $0$.
The general Kalman smoothing problem is
described in the following proposition.
\begin{proposition}
\label{prop:KS}
Suppose that the noises $w$ and $v$ in the model~\eqref{fullStat}
have PLQ densities with mean $0$ and variances $Q$ and $R$, respectively
(see Def. \ref{PLQDensityDef}). Then, for suitable
$U^w, M^w,b^w,B^w$ and $U^v, M^v,b^v,B^v$
we have %
\begin{equation}
\label{kalmanDensities}
\begin{aligned}
\B{p}(w)&\propto \exp(-\theta_{U^w, M^w}(b^w + B^w Q^{-1/2}w)) \\
\B{p}(v) &\propto \exp(-\theta_{U^v, M^v}(b^v + B^v R^{-1/2}v))\;
\end{aligned}
\end{equation}
while the MAP estimator
of $x$ in the model~\eqref{fullStat} is
\begin{equation}
\arg \min_{x\in \mB{R}^{nN}}
\left\{
\begin{aligned}
\label{PLQsubproblem}
&\theta_{U^w,M^w}(b^w + B^w Q^{-1/2}(Gx - \tilde x_0)) \\
&+ \theta_{U^v, M^v}(b^v + B^vR^{-1/2}(Hx - z))
\end{aligned}
\right\}\;
\end{equation}
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{proposition}
Note that since $w_k$ and $v_k$ are independent,
problem~\eqref{PLQsubproblem} is decomposable into a sum of
terms analogous to~\eqref{KSNonlinObjective}.
This special structure is manifest in the
block diagonal structure of $H, Q, R, B^v, B^w$,
the bidiagonal structure of $G$, and
the structure of sets $U^w$ and $U^v$,
and is key in proving the linear complexity
result that will be derived in the next part of this section.\\
For our purposes, it is now important to recall that, when the sets $U^w$ and $U^v$ are polyhedral,
(\ref{PLQsubproblem}) is an Extended
Linear Quadratic program (ELQP), described in \cite[Example
11.43]{RTRW}.
Rather than directly solving~\eqref{PLQsubproblem},
we work with the Karush-Kuhn-Tucker (KKT) system.
We present the system in the following lemma, and derive
it in the Appendix.
\begin{lemma}
\label{lem:KKT}
Suppose that the sets $U^w$ and $U^v$
are polyhedral, i.e. can be written
\[
U^w = \{u|(A^w)^Tu \leq a^w \}, \quad U^v = \{u|(A^v)^Tu\leq a^v\}\;.
\]
Then the necessary first-order conditions
for optimality of~\eqref{PLQsubproblem} are given by
\begin{equation}
\label{PLQFinalConditions}
\begin{array}{lll}
&\begin{array}{llllll}
0 &=& (A^w)^\R{T}u^w + s^w - a^w\;;&& 0= (A^v)^\R{T}u^v + s^v - a^v\\
0 &=& (s^w)^\R{T}q^w\;; && 0= (s^v)^\R{T}q^v
\end{array}\\\\
&\begin{array}{llllll}
0 &=& \tilde b^w + B^w Q^{-1/2}G\bar{d} - M^w \bar{u}^w - A^wq^w\\
0 &=& \tilde b^v - B^v R^{-1/2}H\bar{d} - M^v \bar{u}^v - A^v q^v\\
0 &=& G^\R{T}Q^{-\R{T}/2}(B^w)^\R{T}\bar u^w -
H^\R{T}R^{-\R{T}/2}(B^v)^\R{T}\bar u^v\\
0 &\leq& s^w, s^v, q^w, q^v.
\end{array}
\end{array}
\end{equation}
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{lemma}
Our approach is to solve~\eqref{PLQFinalConditions}
directly using Interior Point (IP) methods.
IP methods work by applying a damped Newton iteration to a
relaxed version of (\ref{PLQFinalConditions}),
specifically relaxing the `complementarity conditions':
\[
\begin{array}{lll}
(s^w)^\R{T}q^w = 0 & \rightarrow & Q^wS^w\B{1} - \mu\B{1} = 0 \\
(s^v)^\R{T}q^v = 0 & \rightarrow & Q^vS^v\B{1} - \mu\B{1} = 0\;,
\end{array}
\]
where $Q^w, S^w, Q^v, S^v$ are diagonal matrices
with diagonals $q^w, s^w, q^v, s^v$ respectively.
The parameter $\mu$ is aggressively decreased to $0$ as the IP
iterations proceed. Typically, no more than 10 or 20 iterations
of the relaxed system are required to obtain a solution of~\eqref{PLQFinalConditions},
and hence an optimal solution to~\eqref{PLQsubproblem}.
The following theorem is key and represents the main result of this section.
It shows that the computational effort required (per IP iteration)
is linear in the number of time steps whatever PLQ density
enters the state space model.
\begin{theorem}
\label{thm:PLQsmoother}
(PLQ Kalman Smoother Theorem) Suppose that all $w_k$ and $v_k$ in
the Kalman smoothing model (\ref{LinearGaussModel}) come from PLQ
densities that satisfy
$\mathrm{Null}(M)\cap U^{\infty} = \{0\}$, i.e.
their corresponding penalties are finite-valued.
Then~\eqref{PLQsubproblem} can be solved using an IP method,
with computational complexity $O(Nn^3 + Nm)$ per IP iteration.%
\vspace{-.55cm}
\begin{flushright}
$\blacksquare$
\end{flushright}\end{theorem}
The proof is presented in the Appendix
and shows that IP methods for
solving~\eqref{PLQsubproblem}
preserve the key block tridiagonal structure of the
standard smoother. General smoothing
estimates can therefore be computed in $O(Nn^3)$ time,
as long as the number of IP iterations is fixed
(as it usually is in practice, to $10$ or $20$).\\
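To make the linear-complexity claim concrete, here is a minimal Python sketch (our illustration, with hypothetical names; not the authors' implementation) of the block-tridiagonal solve that dominates the work of each IP iteration:
\begin{verbatim}
import numpy as np

def block_thomas(D, L, r):
    # Solve a symmetric block-tridiagonal system with diagonal
    # blocks D[0..N-1], subdiagonal blocks L[0..N-2], and
    # right-hand-side blocks r[0..N-1], in O(N n^3) operations.
    N = len(D)
    D = [d.copy() for d in D]
    r = [b.copy() for b in r]
    for k in range(1, N):               # forward block elimination
        C = L[k - 1] @ np.linalg.inv(D[k - 1])
        D[k] = D[k] - C @ L[k - 1].T
        r[k] = r[k] - C @ r[k - 1]
    x = [None] * N                      # backward substitution
    x[-1] = np.linalg.solve(D[-1], r[-1])
    for k in range(N - 2, -1, -1):
        x[k] = np.linalg.solve(D[k], r[k] - L[k].T @ x[k + 1])
    return x
\end{verbatim}
Each damped Newton step of the IP method reduces, after block elimination of the slack and dual variables, to one solve of this kind, which is why the overall cost scales linearly in $N$.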
It is important to observe that the motivating examples
(see Remark \ref{scalarExamples}) all satisfy the conditions
of Theorem \ref{thm:PLQsmoother}.
\begin{corollary}
\label{cor:examples}
The densities corresponding to $L^1, L^2$, Huber,
and Vapnik penalties
all satisfy the hypotheses of Theorem \ref{thm:PLQsmoother}.
\end{corollary}
{\bf Proof:}
We verify that $\mathrm{Null}(M) \cap U^{\infty} = \{0\}$
for each of the four penalties.
In the $L^2$ case, $M$ has full rank.
For the $L^1$, Huber, and Vapnik penalties, the respective
sets $U$ are bounded, so $U^{\infty}= \{0\}$.
\section{Conclusions}
We have presented a new theory for robust and sparse Kalman smoothing using
nonsmooth PLQ penalties applied to process and measurement deviations.
These smoothers can be designed within a statistical framework obtained
by viewing PLQ penalties as negative logs of true probability densities,
and we have presented necessary conditions that allow this interpretation.
In this regard, the coercivity condition characterized in Th. \ref{coerciveRho}
has been shown to play a central role. Notice that such a condition is also a nice example of how the
statistical framework established in the first part of this paper gives an alternative viewpoint for an idea
useful in machine learning. In fact, coercivity is also a
fundamental prerequisite in sparse and robust estimation as it precludes
directions for which the loss and the regularizer are insensitive to
large parameter/state changes. Thus, the condition for a (PLQ) penalty
to be a negative log of a true density also
ensures that the problem is well posed and that the learning machine/smoother
can control model complexity.\\
In the second part of the paper, we have
shown that solutions to PLQ Kalman
smoothing formulations can be computed with a number of operations
that is linear in the length of the time series, as in the quadratic case.
A sufficient condition for the successful execution of IP iterations
is that the PLQ penalties used should be finite valued,
which implies non-degeneracy of the corresponding statistical
distribution (the support cannot be contained in a lower-dimensional
subspace). The statistical interpretation is thus strongly linked
to the computational procedure.\\
The computational framework presented allows
a broad application of interior point methods to a wide class of
smoothing problems of interest to
practitioners. The powerful algorithmic scheme designed here,
together with the breadth and significance of the new statistical framework presented,
underscores the practical utility and flexibility of this approach.
We believe that this perspective on model
development and Kalman smoothing will be useful in a number of applications
in the years ahead.
\bibliographystyle{plain}
| {
"timestamp": "2011-11-14T02:02:13",
"yymm": "1111",
"arxiv_id": "1111.2730",
"language": "en",
"url": "https://arxiv.org/abs/1111.2730",
"abstract": "Kalman smoothers reconstruct the state of a dynamical system starting from noisy output samples. While the classical estimator relies on quadratic penalization of process deviations and measurement errors, extensions that exploit Piecewise Linear Quadratic (PLQ) penalties have been recently proposed in the literature. These new formulations include smoothers robust with respect to outliers in the data, and smoothers that keep better track of fast system dynamics, e.g. jumps in the state values. In addition to L2, well known examples of PLQ penalties include the L1, Huber and Vapnik losses. In this paper, we use a dual representation for PLQ penalties to build a statistical modeling framework and a computational theory for Kalman smoothing.We develop a statistical framework by establishing conditions required to interpret PLQ penalties as negative logs of true probability densities. Then, we present a computational framework, based on interior-point methods, that solves the Kalman smoothing problem with PLQ penalties and maintains the linear complexity in the size of the time series, just as in the L2 case. The framework presented extends the computational efficiency of the Mayne-Fraser and Rauch-Tung-Striebel algorithms to a much broader non-smooth setting, and includes many known robust and sparse smoothers as special cases.",
"subjects": "Optimization and Control (math.OC); Statistics Theory (math.ST); Applications (stat.AP); Computation (stat.CO)",
"title": "A statistical and computational theory for robust and sparse Kalman smoothing",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978051741806865,
"lm_q2_score": 0.7248702821204019,
"lm_q1q2_score": 0.7089606420118928
} |
https://arxiv.org/abs/2102.00906 | On upper bounds for the count of elite primes | We look at upper bounds for the count of certain primes related to the Fermat numbers $F_n=2^{2^n}+1$ called elite primes. We first note an oversight in a result of Krizek, Luca and Somer and give the corrected, slightly weaker upper bound. We then assume the Generalized Riemann Hypothesis for Dirichlet L functions and obtain a stronger conditional upper bound. | \section{\@startsection {section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries\boldmath}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{corollary}{Corollary}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newtheorem{conjecture}{Conjecture}
\newtheorem{remark}{Remark}
\renewcommand{\mod}[1]{\ (\text{mod}\ #1)}
\begin{document}
\begin{center}
\uppercase{\bf On Upper Bounds for the Count of Elite Primes}
\vskip 20pt
{\bf Matthew Just}\\
{\smallit Department of Mathematics, University of Georgia, Athens, Georgia 30605, United States}\\
{\tt justmatt@uga.edu}\\
\end{center}
\centerline{\bf Abstract}
\noindent
We look at upper bounds for the count of certain primes related to the Fermat numbers $F_n=2^{2^n}+1$ called elite primes. We first note an oversight in a result of K{\v{r}}{\'\i}{\v{z}}ek, Luca and Somer and give the corrected, slightly weaker upper bound. We then assume the Generalized Riemann Hypothesis for Dirichlet $L$ functions and obtain a stronger conditional upper bound.
\section{Introduction}
The Fermat numbers are given by $F_n=2^{2^n}+1$ for $n\geq 0$. The first few Fermat numbers are 3, 5, 17, 257, 65537, 4294967297. Notice that the first five Fermat numbers are prime, and it was initially conjectured (by Fermat) that all such numbers are prime. The sixth Fermat number is not prime, and no other Fermat primes are known. It is known that $F_n$ is composite for $5\leq n \leq 32$ though interestingly no prime factor of $F_{14}$, $F_{20}$, $F_{22}$, or $F_{24}$ is known (see \cite{crandall2003twenty}).
An efficient test exists to determine whether or not a Fermat number is prime, called P\'epin's test.
\begin{proposition}[P\'epin's test \cite{ribenboim2012new}]
Let $n>0$. Then $F_n$ is prime if and only if \[3^{(F_n-1)/2} \equiv -1 \emph{$\mod{F_n}$}.\]
\end{proposition}
\begin{comment}
\begin{proof}
First suppose that $F_n$ is prime. Then by Euler's criterion $3^{(F_n-1)/2} \equiv -1 \mod{F_n}$ if and only if 3 is a quadratic nonresidue$\mod{F_n}$. That is to say \[\left(\frac{3}{F_n}\right) = -1\] where here we use the Legendre symbol defined for odd primes $p$ by \[\left(\frac{a}{p}\right)=\begin{cases}0 & \text{if $p\mid a$} \\ 1 & \text{if $p\nmid a$ and $a$ is a quadratic residue$\mod{p}$ } \\-1 & \text{if $p\nmid a$ and $a$ is a quadratic nonresidue$\mod{p}$ }\end{cases}\] Now for $n>0$ it is clear that $F_n \equiv 1\mod{4}$ so by the law of quadratic reciprocity \[\left(\frac{3}{F_n}\right) = \left(\frac{F_n}{3}\right)\] so it is enough to show that $F_n$ is a quadratic nonresidue$\mod{3}$. But for $n>0$ it is again clear $F_n\equiv -1 \mod{3} $, so $F_n$ is a quadratic nonresidue$\mod{3}$.
Now if $3^{(F_n-1)/2}\equiv -1 \mod{F_n}$ let $f$ be the order of $3\mod{F_n}$. Then $f\mid (F_n -1) = 2^{2^n}$. But it is also that case that $f\nmid (F_n-1)/2 = 2^{2^n-1}$, so $f=2^{2^n}$. Thus the order of the multiplicative group of units$\mod{F_n}$ has order $2^{2^n}=F_n-1$ which can only happen if $F_n$ is prime. \end{proof}
\end{comment}
The only fact about the prime 3 used in this proof is that $F_n$ is a quadratic nonresidue$\mod{3}$ for $n>0$. Thus we can replace 3 with any other prime that satisfies this requirement.
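To make the test concrete, here is a minimal Python sketch (ours, for illustration only); fast modular exponentiation makes it feasible for moderate $n$:
\begin{verbatim}
def pepin(n):
    # F_n = 2^(2^n) + 1 is prime iff 3^((F_n - 1)/2) = -1 (mod F_n)
    F = 2**(2**n) + 1
    return pow(3, (F - 1) // 2, F) == F - 1

print([n for n in range(1, 9) if pepin(n)])   # prints [1, 2, 3, 4]
\end{verbatim}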
Primes $p$ for which all sufficiently large Fermat numbers are quadratic nonresidues$\mod{p}$ are called \textit{elite primes}, a term introduced by Aigner \cite{aigner1986primzahlen}. Thus we can use elite primes to test the primality of $F_n$ for all but finitely many Fermat numbers. For more on the search for elite primes see \cite{allep}, \cite{continuingsearch}, and \cite{complex}.
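The eventual periodicity of $F_n$ modulo $p$ (used again in the proof of Theorem 2 below) makes eliteness checkable in finite time. A minimal Python sketch (ours; the helper name \verb|is_elite| and the use of \verb|sympy| are our choices) combines this observation with Euler's criterion:
\begin{verbatim}
from sympy import isprime   # any primality test would do

def is_elite(p):
    # p is elite iff every Fermat number in the eventual cycle of
    # 2^(2^n) mod p is a quadratic nonresidue mod p (Euler's criterion)
    x, seen, n = 2 % p, {}, 0
    while x not in seen:
        seen[x] = n
        x = x * x % p       # 2^(2^(n+1)) = (2^(2^n))^2
        n += 1
    cycle = [v for v, i in seen.items() if i >= seen[x]]
    return all(pow(v + 1, (p - 1) // 2, p) == p - 1 for v in cycle)

# the elite primes below 10^5; the list begins 3, 5, 7, 41, ...
print([p for p in range(3, 10**5) if isprime(p) and is_elite(p)])
\end{verbatim}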
Similarly defined are anti-elite primes which are primes $p$ such that $F_n$ is a quadratic residue for all but finitely many $n$. Generalizations of elite and anti-elite primes for numbers other than the Fermat numbers were studied by M{\"u}ller, see \cite{muller2007anti}, \cite{muller2008generalization}, and \cite{muller2008generalized}.
Let $E(x)$ be the number of elite primes up to $x$. K{\v{r}}{\'\i}{\v{z}}ek, Luca and Somer \cite{kvrivzek2002convergence} gave the upper bound \[E(x) = O\left( \frac{x}{(\log x)^2} \right)\] as $x\rightarrow\infty$. Their proof used the estimate \[\prod_{i=0}^{2t}F_i< 2^{2^{t+1}};\] however one can check that for all $t\geq 0$ \[\prod_{i=0}^{2t}F_i = 2^{2^{2t+1}} -1.\] Using the revised estimate $\prod_{i=0}^{2t}F_{i} < 2^{2^{2t+1}}$ we obtain the following result by following their proof:
\begin{theorem}
Let $E(x)$ be the number of elite primes up to $x$. Then \[E(x)=O\left(\frac{x}{(\log x)^{3/2}}\right)\] as $x\rightarrow \infty$.
\end{theorem}
Corollary 2 of K{\v{r}}{\'\i}{\v{z}}ek, Luca and Somer's paper states that the sum of the reciprocals of the elite primes converges. We note that this still follows from Theorem 1.
We next give our main result which is a stronger upper bound for $E(x)$ conditional upon the Generalized Riemann Hypothesis for Dirichlet $L$ functions:
\begin{theorem}
Let $E(x)$ be the number of elite primes up to $x$. Then assuming the Generalized Riemann Hypothesis \[E(x) = O(x^{5/6})\] as $x\rightarrow \infty$.
\end{theorem}
\section{Proof of Theorem 2}
Before beginning the proof we introduce some tools we will use. Let $\left(\frac{a}{m}\right)$ denote the Kronecker symbol.
\begin{comment}
The Jacobi symbol, a generalization of the Legendre symbol, is defined for odd numbers $m=p_1^{\alpha_1}p_2^{\alpha_2}\ldots p_k^{\alpha_k}$ by \[\left( \frac{a}{m} \right) = \prod_{i=1}^k \left(\frac{a}{p_i} \right)^{\alpha_i}\] We will use the following version of the law of quadratic reciprocity that states if $a$ and $m$ are odd coprime integers and either $a$ or $m$ is congruent to $1\mod{4}$ then \[\left( \frac{a}{m} \right) = \left( \frac{m}{a} \right)\]
The Kronecker symbol, a generalization of the Jacobi symbol, is defined for all integers $a$ and $m$ (which we write as $m=u2^s k$ where $u=\pm 1$ and $k$ is odd) by \[\left( \frac{a}{m} \right) = \left( \frac{a}{u}\right) \left( \frac{a}{2}\right)^s \left( \frac{a}{k} \right)\] where \[\left(\frac{a}{u}\right) = \begin{cases}1 & \text{if $u=1$ or $a<0$} \\ -1 & \text{if $u=-1$ and $a\geq 0$} \end{cases}\] and \[\left(\frac{a}{2}\right)=\begin{cases}0 & \text{if $2\mid a$} \\ 1 & \text{if $a\equiv \pm1\mod{8}$} \\ -1 & \text{if $a\equiv\pm3\mod{8}$}\end{cases}\] Finally if $m=0$ then \[\left(\frac{a}{0}\right) = \begin{cases}1 & \text{if $a=\pm 1$} \\ 0 & \text{otherwise} \end{cases}\]
\end{comment}
We will use the fact that for a fixed integer $a$ satisfying $a\neq 0$ and $a\not\equiv 3\mod{4}$ the Kronecker symbol is a Dirichlet character to the modulus $|a|$ (or $4|a|$ if $a\equiv2\mod{4}$). Furthermore this Dirichlet character is not the principal character as long as $a$ is not a square. The key lemma will be the following classical result which can be found in Montgomery and Vaughan \cite{montgomery2007multiplicative}.
\begin{lemma}
Let $\chi$ be a Dirichlet character to the modulus $q$, not the principal character. Then assuming the Generalized Riemann Hypothesis \[\sum_{p\leq x} \chi(p) = O(\sqrt{x} \log(qx)).\]
\end{lemma}
\begin{proof}[Proof of Theorem 2]
Let $p\leq x$ be an elite prime. We write \[p-1 = 2^{e_p} k_p, \] where $k_p$ is odd and let $f_p$ denote the multiplicative order of $2\mod{k_p}$. Now for any $m\geq e_p$ we have that \[(p-1)\mid 2^m (2^{f_p}-1)=2^{m+f_p}-2^m,\] which shows by Fermat's little theorem that \[2^{2^{m+f_p}}\equiv 2^{2^m}\mod{p}.\] This periodicity of the sequence of Fermat numbers shows that if there exists an $m\geq e_p$ such that $F_m$ is a quadratic residue$\mod{p}$ then $p$ cannot be elite since $F_{m+\ell f_p}$ would be a quadratic residue$\mod{p}$ for all $\ell \geq 0$.
Now suppose $p$ is a prime with $e_p> t$, where $t$ is a parameter depending on $x$ to be chosen later. Then $p$ lies in the residue class $1\mod{2^t}$. As long as $2^t\leq \sqrt{x}$, we may apply the Brun-Titchmarsh inequality to get an upper bound on the distribution of such primes in arithmetic progressions: \[\pi(x;2^t,1) \leq \frac{2x}{\phi(2^t) (\log x - \log 2^t)} \ll \frac{x}{2^t}. \]
Now we assume $p$ is an elite prime with $e_p\leq t$. We see now that $F_{t+i}$ must be a quadratic nonresidue$\mod{p}$ for all $i\geq 0$. Looking at the Legendre symbol we see that for any prime number $p$ \[ 1 - \left( \frac{F_{t+i}}{p} \right) = \begin{cases}2 & \text{if $F_{t+i}$ is a quadratic nonresidue$\mod{p}$}, \\ 0 & \text{if $F_{t+i}$ is a quadratic residue$\mod{p}$} , \\ 1 & \text{if $p\mid F_{t+i}$}.\end{cases} \]
Fixing another parameter $T$ depending on $x$ to be chosen later let \[A=\prod_{i=0}^T F_{t+i}\] so that we now have the following upper bound for $E(x)$: \[E(x) \leq \frac{1}{2^{T+1}}\sum_{p\leq x }\prod_{i=0}^T \left(1-\left(\frac{F_{t+i}}{p}\right) \right) + \sum_{p|A} 1 + O\left(\frac{x}{2^t}\right).\] Notice that \[A<\prod_{i=0}^{t+T} F_i <2^{2^{T+t+1}},\] and thus \[\sum_{p|A}1 \ll \log A \ll 2^T2^t. \]
As for the first term in our estimate for $E(x)$, we have
\[\frac{1}{2^{T+1}} \sum_{j=1}^{T+1} (-1)^j \sum_{\substack{B\subset \{0,1,2,\ldots,T \}\\ |B|=j}} \sum_{p\leq x}\left(\frac{\prod_{b\in B} F_{t+b}}{p} \right) + \frac{\pi(x)}{2^{T+1}}.\] Using the fact that these inner Kronecker symbols are Dirichlet characters to the modulus \[\prod_{b\in B} F_{t+b} \leq A < 2^{2^{t+T+1}}, \] we can apply Lemma 1 once we observe that Fermat numbers are pairwise coprime and hence any product of Fermat numbers will never be a square. We then have the upper bound \[\sum_{p\leq x}\left(\frac{\prod_{b\in B} F_{t+b}}{p} \right) \ll \sqrt{x}(\log x + 2^{t+T}). \] Putting all this together, we have \[E(x) \ll \sqrt{x}\log x + \sqrt{x}2^{t+T} + \frac{x}{2^T} + \frac{x}{2^t}.\]
Letting $t=T=\frac{\log x}{6\log 2}$ gives the desired result. \end{proof}
\noindent{\bf Remark.} Theorem 2 gives an upper bound for the count of elite primes, but if we instead used \[1 + \left( \frac{F_{t+i}}{p} \right) = \begin{cases}2 & \text{if $F_{t+i}$ is a quadratic residue$\mod{p}$} \\ 0 & \text{if $F_{t+i}$ is a quadratic nonresidue$\mod{p}$}\\ 1 & \text{if $p\mid F_{t+i}$},\end{cases}\] we obtain the same upper bound for the count of anti-elite primes. Furthermore, if we consider the generalized Fermat numbers to the base $b$, \[b^{2^n} + 1,\] we can ask what is special about the case $b=2$. In fact the only time the base $b=2$ is used in the proof of Theorem 2 is that for a fixed prime $p$ the sequence $2^{2^n}+1$ will eventually be periodic$\mod{p}$. But this is true for any base $b$, hence Theorem 2 will also hold for the count of generalized elite and anti-elite primes with respect to the generalized Fermat numbers to the base $b$.
\section*{Acknowledgments}
The author was partially supported by the Research and Training Group grant DMS-1344994 funded by the National Science Foundation. He thanks Paul Pollack and Florian Luca for helpful comments.
| {
"timestamp": "2021-02-02T02:36:37",
"yymm": "2102",
"arxiv_id": "2102.00906",
"language": "en",
"url": "https://arxiv.org/abs/2102.00906",
"abstract": "We look at upper bounds for the count of certain primes related to the Fermat numbers $F_n=2^{2^n}+1$ called elite primes. We first note an oversight in a result of Krizek, Luca and Somer and give the corrected, slightly weaker upper bound. We then assume the Generalized Riemann Hypothesis for Dirichlet L functions and obtain a stronger conditional upper bound.",
"subjects": "Number Theory (math.NT)",
"title": "On upper bounds for the count of elite primes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978051747564637,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7089606403724045
} |
https://arxiv.org/abs/2005.03219 | $L^{q}$-error estimates for approximation of irregular functionals of random vectors | Avikainen showed that, for any $p,q \in [1,\infty)$, and any function $f$ of bounded variation in $\mathbb{R}$, it holds that $\mathbb{E}[|f(X)-f(\widehat{X})|^{q}] \leq C(p,q) \mathbb{E}[|X-\widehat{X}|^{p}]^{\frac{1}{p+1}}$, where $X$ is a one-dimensional random variable with a bounded density, and $\widehat{X}$ is an arbitrary random variable. In this article, we will provide multi-dimensional versions of this estimate for functions of bounded variation in $\mathbb{R}^{d}$, Orlicz--Sobolev spaces, Sobolev spaces with variable exponents, and fractional Sobolev spaces. The main idea of our arguments is to use the Hardy--Littlewood maximal estimates and pointwise characterizations of these function spaces. We apply our main results to analyze the numerical approximation for some irregular functionals of the solution of stochastic differential equations. | \section{Introduction}\label{sec_1}
Numerical analysis for stochastic differential equations (SDEs) is a significant issue in stochastic calculus, from both theory and practical points of view.
In order to approximate the solution of the SDE $\mathrm{d} X(t)=b(X(t)) \mathrm{d} t+\sigma(X(t))\mathrm{d} B(t)$ driven by a Brownian motion $B$, one often uses the Euler--Maruyama scheme $X^{(h)}$ with time step $h>0$ (see, \cite{KP}).
The convergence of this scheme has been widely studied, see, e.g., \cite{BaTa96, GoLa08, Gu06, KoMa02, KoMe17, TaTu90} for the weak convergence; \cite{Av09,BaHuYu19,BuDaGe19,GyRa11,LeSz17b,MeTa,MuYa20,NT1,NT2,Ta20} for the strong convergence.
There are many modifications of the Euler--Maruyama scheme, see, e.g., \cite{Al13,HiMaSt02,NeSz14} for backward schemes and \cite{HuJeKl12,Sa16} for tamed schemes.
In particular, Bally and Talay \cite{BaTa96} proved that whenever the coefficients $b$ and $\sigma$ satisfy certain regularity conditions, $|\mathbb{E}[f(X(T))]-\mathbb{E}[f(X^{(h)}(T))]|\leq Ch$ for any bounded measurable function $f$, and time step $h=T/n$ (see also \cite{GoLa08,Gu06,KoMa02,TaTu90}).
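To fix notation, a minimal Python sketch of the scheme in the scalar case (our illustration; parameters and names are ours):
\begin{verbatim}
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    # one path of the Euler-Maruyama scheme with step h = T/n,
    # returning the endpoint X^{(h)}(T)
    h = T / n
    x = x0
    for _ in range(n):
        x = x + b(x) * h + sigma(x) * rng.normal(0.0, np.sqrt(h))
    return x

rng = np.random.default_rng(0)
# e.g. an Ornstein-Uhlenbeck endpoint X^{(h)}(1) with h = 2^{-8}:
xT = euler_maruyama(lambda x: -x, lambda x: 1.0, 1.0, 1.0, 2**8, rng)
\end{verbatim}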
In this article, for an irregular function $f$ (e.g., bounded variation or Sobolev differentiable) and $q \in [1,\infty)$, we are interested in the rate of convergence to zero of
\begin{align}\label{MSE_0}
\mathbb{E}\left[
\left|
f(X(T))
-
f(X^{(h)}(T))
\right|^{q}
\right]
\end{align}
as $h \to 0$ (see, Theorem \ref{Cor_0}).
This rate is needed to apply the multilevel Monte Carlo (MLMC) method, whose computational cost is much lower than that of the classical (single-level) Monte Carlo method, to the computation of $\mathbb{E}\left[\left|f(X(T))\right|\right]$.
Heinrich \cite{He01} first introduced the MLMC method for parametric integrations.
Later, Giles \cite{Gi08} developed the method for SDEs based on the Euler--Maruyama scheme $X^{(h)}$ as a generalization of the statistical Romberg method proposed by Kebaier \cite{Ke05}.
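A minimal Python sketch of the resulting estimator in the scalar case (our illustration; the coupling, parameters and names are our own choices):
\begin{verbatim}
import numpy as np

def mlmc_estimate(f, b, sig, x0, T, L, M, rng):
    # MLMC sketch: level l uses 2^l Euler-Maruyama steps; the coarse
    # path of each correction reuses the fine path's Brownian increments
    def endpoints(l):
        n = 2 ** l
        h = T / n
        dW = rng.normal(0.0, np.sqrt(h), n)
        xf = x0
        for k in range(n):
            xf = xf + b(xf) * h + sig(xf) * dW[k]
        if l == 0:
            return xf, None
        xc = x0
        for k in range(0, n, 2):   # one coarse step per two fine steps
            xc = xc + b(xc) * 2 * h + sig(xc) * (dW[k] + dW[k + 1])
        return xf, xc
    est = np.mean([f(endpoints(0)[0]) for _ in range(M[0])])
    for l in range(1, L + 1):
        est += np.mean([f(xf) - f(xc) for xf, xc in
                        (endpoints(l) for _ in range(M[l]))])
    return est

rng = np.random.default_rng(1)
# e.g. an indicator functional of geometric Brownian motion:
val = mlmc_estimate(lambda x: float(x > 1.0), lambda x: 0.05 * x,
                    lambda x: 0.2 * x, 1.0, 1.0, 5, [4000] * 6, rng)
\end{verbatim}
The irregular payoff $f={\bf 1}_{(1,\infty)}$ in the usage line is exactly the situation in which the rate of \eqref{MSE_0}, and hence results of the type studied in this article, governs the variance decay of the level corrections.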
If the function $f$ is $\alpha$-H\"older continuous for some $\alpha \in (0,1]$, then \eqref{MSE_0} can be bounded from above by $\|f\|_{\alpha}^{q} \mathbb{E}[|X(T)-X^{(h)}(T)|^{q \alpha}]$, where $\|f\|_{\alpha}:=\sup_{x \neq y} \frac{|f(x)-f(y)|}{|x-y|^{\alpha}}$, and thus its rate of convergence to zero can be obtained under some suitable assumptions on the coefficients $b$ and $\sigma$ (see, e.g., \cite{BaHuYu19, BuDaGe19,GyRa11,KP,LeSz17b,MeTa,MuYa20,NT1,NT2}).
However, if the function $f$ is irregular, it is not clear how to derive the rate of convergence to zero.
Motivated by such a problem, Avikainen \cite{Av09} proved the following remarkable inequality.
\begin{Thm}[Theorem 2.4 (i) in \cite{Av09}]
Let $X$ be a one-dimensional random variable with a bounded density function $p_{X}$, and let $f$ be a real valued function of bounded variation in $\mathbb{R}$.
Then for any $p,q \in [1,\infty)$ and random variable $\widehat{X}$, it holds that
\begin{align}\label{Av_0}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
\leq
3^{q+1}
V(f)^{q}
\left(
\sup_{x \in \mathbb{R}}
p_{X}(x)
\right)^{\frac{p}{p+1}}
\mathbb{E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}},
\end{align}
where $V(f)$ is the total variation of $f$.
\end{Thm}
Note that this estimate is optimal, that is, there exist random variables $X$, $\widehat{X}$ and a function $f$ such that equality holds in \eqref{Av_0} (see, Theorem 2.4 (ii) in \cite{Av09}).
The proof of this estimate is based on the following three ideas.
The first idea is to use the Lipschitz continuity of the distribution function of $X$, which is equivalent to the existence of a bounded density function of $X$.
Note that if the distribution function of $X$ is H\"older continuous, it is still possible to obtain a generalized version of Avikainen's estimate, which is useful to evaluate numerical schemes for some stochastic processes (see, \cite{Ta20}).
The second idea is to use Skorokhod's ``explicit'' representation to embed the distribution of $X$ in the probability space $([0,1], \mathscr{B}([0,1]), \mathrm{Leb})$ (e.g., Section 3 in \cite{Wi91}).
For multi-dimensional random variables, this representation is known as Skorokhod's embedding theorem (e.g., Theorem 2.7 in \cite{IkWa}).
However, it might be difficult to apply it to the multi-dimensional case since it is not explicit.
The third idea is that every function $f$ of (normalized) bounded variation in $\mathbb{R}$ can be expressed as an integral of indicator functions ${\bf 1}_{(z,\infty)}$ with respect to a signed measure $\nu(\mathrm{d} z)$ which has bounded total variation.
Then the estimate \eqref{Av_0} can be obtained by first considering it for $f(x)={\bf 1}_{(z,\infty)}(x)$ (see, Lemma 3.4 in \cite{Av09} for details and Proposition 5.3 in \cite{GiXi17} for a simple proof).
In this article, we will propose some versions of Avikainen's estimate \eqref{Av_0} for multi-dimensional random variables.
To the best of our knowledge, there is no result in this direction so far.
As mentioned above, it might be difficult to apply the approach in \cite{Av09} to multi-dimensional random variables.
Instead, we propose a new approach based on the Hardy--Littlewood maximal operator $M$ for locally finite vector valued measures $\nu$, which is defined by
\begin{align*}
M\nu(x)
:=
\sup_{s>0}
\Xint-_{B(x;s)}
\mathrm{d} |\nu|(z),
~
\Xint-_{B(x;s)}
\mathrm{d} |\nu|(z)
:=
\frac{|\nu|(B(x;s))}{\mathrm{Leb}(B(x;s))},~
x \in \mathbb{R}^{d},
\end{align*}
where $|\nu|$ is the total variation of $\nu$ and $B(x;r)$ is the closed ball in ${\mathbb R}^{d}$ with center $x$ and radius $r$.
The operator $M$ is well-studied in the fields of real analysis and harmonic analysis, and it satisfies the following Hardy--Littlewood maximal weak type estimate
\begin{align*}
\mathrm{Leb}
(\{x\in \mathbb{R}^{d}~;~M\nu(x) > \lambda\})
\leq
A_{1}
|\nu|(\mathbb{R}^{d})
\lambda^{-1},~\lambda>0,
\end{align*}
where the constant $A_{1}$ depends only on $d$.
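As a worked example in dimension one, take $\nu=\delta_{0}$, the unit point mass at the origin. The closed ball $B(x;s)$ contains $0$ precisely when $s\geq |x|$, so $M\delta_{0}(x)=\sup_{s\geq|x|}\frac{1}{2s}=\frac{1}{2|x|}$, and hence
\[
\mathrm{Leb}(\{x\in\mathbb{R}~;~M\delta_{0}(x)>\lambda\})
=\mathrm{Leb}\left(\left\{x~;~|x|<\tfrac{1}{2\lambda}\right\}\right)
=\lambda^{-1},
\]
which shows that the weak type estimate cannot be improved beyond the constant $A_1$.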
Using this estimate, we will prove that for any random variables $X,\widehat{X}:\Omega \to \mathbb{R}^{d}$ with density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively, and for any $f \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d})$, $p \in (0, \infty)$ and $q\in [1,\infty)$, if $p_{X}$ and $p_{\widehat{X}}$ are bounded, then it holds that
\begin{align}\label{Av_1}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
\leq
C
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}}
\end{align}
for some constant $C$ which depends on $p,q,d,f$, $\|p_{X}\|_{\infty}$ and $\|p_{\widehat{X}}\|_{\infty}$ (for more details, see, Theorem \ref{main_0}).
Here, $BV(\mathbb{R}^{d})$ is the class of functions $f$ of bounded variation in $\mathbb{R}^{d}$, that is, the set of $f \in L^{1}(\mathbb{R}^{d})$ for which the total variation $|Df|(\mathbb{R}^{d})=\int_{\mathbb{R}^{d}} |Df|$ of the Radon measure $Df$, defined by
\begin{align*}
\int_{\mathbb{R}^{d}}|Df|
:=
\sup
\left\{
\int_{\mathbb{R}^{d}}
f(x) \mathrm{div} g(x)
\mathrm{d} x
~;~
g \in C^{1}_{\mathrm{c}}(\mathbb{R}^{d};\mathbb{R}^{d})
\text{~and~}
\sup_{x \in \mathbb{R}^{d}}
|g(x)|
\leq 1
\right\}
\end{align*}
is finite, where the Radon measure $Df$ is the generalized derivative of $f$, formulated via integration by parts for functions of bounded variation (for more details, see, Section \ref{sec_2_1}).
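Two standard examples may help fix ideas (recorded here for convenience; see \cite{EvGa92,Gi84}): if $f \in W^{1,1}(\mathbb{R}^{d})$, then integration by parts in the supremum above gives $\int_{\mathbb{R}^{d}}|Df|=\int_{\mathbb{R}^{d}}|\nabla f(x)|\,\mathrm{d} x$, whereas for the indicator of the closed unit ball,
\begin{align*}
\int_{\mathbb{R}^{d}}
\left|D{\bf 1}_{B(0;1)}\right|
=
\frac{2\pi^{d/2}}{\Gamma(d/2)},
\end{align*}
the surface area of the unit sphere $\partial B(0;1)$, although ${\bf 1}_{B(0;1)}$ has no weak gradient in $L^{1}(\mathbb{R}^{d})$.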
The most important property of $f \in BV(\mathbb{R}^{d})$ which we use in this article is the following pointwise estimate (see, Lemma \ref{Lem_key_0})
\begin{align}\label{pw_esti}
|f(x)-f(y)|
&\leq
K_{0}
|x-y|
\left\{
M_{2|x-y|}(Df)(x)
+
M_{2|x-y|}(Df)(y)
\right\},
\text{ $\mathrm{Leb}$-a.e. }x,y \in \mathbb{R}^{d},
\end{align}
where for $R>0$, $M_{R}\nu$ is the restricted Hardy--Littlewood maximal function defined by
\begin{align*}
M_{R}\nu(x)
:=
\sup_{0<s\leq R}
\Xint-_{B(x;s)}
\mathrm{d} |\nu|(z),~
x \in {\mathbb R}^{d}.
\end{align*}
It is worth noting that Haj\l{}asz \cite{Ha96,Ha03} characterized Sobolev spaces $W^{1,p}(\mathbb{R}^{d})$, $1 \leq p <\infty$ by using a pointwise estimate similar to \eqref{pw_esti}, and defined Sobolev spaces on metric spaces using its pointwise estimate.
On the other hand, Lahti and Tuominen \cite{LaTu14}, and Tuominen \cite{Tu07} generalized this characterization to $BV(\mathbb{R}^{d})$ and the Orlicz--Sobolev space $W^{1,\Phi}(\mathbb{R}^{d})$ with a Young function $\Phi$ such that both $\Phi$ and its complementary function $\Psi$ satisfy the $\Delta_{2}$-condition (or are doubling, see Section \ref{sec_2_1}).
Moreover, the Sobolev space $W^{1,p(\cdot)}(\mathbb{R}^{d})$ with a variable exponent $p:\mathbb{R}^{d} \to [1,\infty]$ and the fractional Sobolev space $W^{s,p}(\mathbb{R}^{d})$ for $(s,p) \in (0,1) \times [1,\infty)$ also satisfy some pointwise estimates similar to \eqref{pw_esti}.
Inspired by these facts, we will also provide estimates similar to \eqref{Av_1} for $f \in \{W^{1,\Phi}(\mathbb{R}^{d}) \cup W^{1,p(\cdot)}(\mathbb{R}^{d}) \cup W^{s,p}(\mathbb{R}^{d})\} \cap L^{\infty}(\mathbb{R}^{d})$.
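As an informal numerical illustration of \eqref{Av_1} (a minimal sketch of our own, not part of the formal development; the choice $f={\bf 1}_{B(0;1)}$, the Gaussian pair $(X,\widehat{X})$ and all parameters below are assumptions made only for this example), one can estimate both sides by Monte Carlo with $d=2$ and $p=q=1$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, p, q = 2, 10**6, 1.0, 1.0

def f(x):
    # f = indicator of the closed unit ball: a BV function on R^d
    return (np.linalg.norm(x, axis=1) <= 1.0).astype(float)

X = rng.standard_normal((n, d))      # X has a bounded (Gaussian) density
fX = f(X)
for eps in (0.2, 0.1, 0.05, 0.025):
    Xh = X + eps * rng.standard_normal((n, d))   # perturbed copy of X
    lhs = np.mean(np.abs(fX - f(Xh)) ** q)
    rhs = np.mean(np.linalg.norm(X - Xh, axis=1) ** p) ** (1 / (p + 1))
    print(f"eps={eps:.3f}  lhs={lhs:.4f}  rhs={rhs:.4f}  ratio={lhs / rhs:.3f}")
\end{verbatim}
The observed ratio remains bounded as $\varepsilon \downarrow 0$, consistent with a finite constant $C$ in \eqref{Av_1}.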
This article is structured as follows.
In Section \ref{sec_2}, we first recall the definitions and properties of functions of bounded variation in $\mathbb{R}^{d}$, Orlicz--Sobolev spaces, Sobolev spaces with variable exponents, fractional Sobolev spaces and the Hardy--Littlewood maximal function, and recall its estimates on their function spaces.
Then we will provide multi-dimensional versions of Avikainen's estimate (see, Theorems \ref{main_0}, \ref{main_1}, \ref{main_2} and \ref{main_3}) for these function spaces.
In Section \ref{sec_3}, we apply our main results to numerical analysis on irregular functionals of the solution of SDEs based on the Euler--Maruyama scheme and the multilevel Monte Carlo method.
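For readers unfamiliar with the scheme, the following minimal sketch (our own toy example; the drift $b$, diffusion $\sigma$ and the discontinuous BV payoff $f={\bf 1}_{[1,\infty)}$ are illustrative assumptions, not the precise setting of Section \ref{sec_3}) shows the type of approximation $\widehat{X}=X_{T}^{n}$ to which our estimates apply:
\begin{verbatim}
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    # One path of the Euler-Maruyama scheme X^n for the scalar SDE
    # dX_t = b(X_t) dt + sigma(X_t) dW_t on [0, T], with n equal steps.
    h = T / n
    x = x0
    for _ in range(n):
        x = x + b(x) * h + sigma(x) * np.sqrt(h) * rng.standard_normal()
    return x

rng = np.random.default_rng(1)
f = lambda x: float(x >= 1.0)    # an irregular (BV, discontinuous) payoff
samples = [f(euler_maruyama(lambda x: -x, lambda x: 1.0, 0.0, 1.0, 64, rng))
           for _ in range(10**4)]
print("E[f(X_T^n)] approx", np.mean(samples))
\end{verbatim}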
\subsection*{Notations}
We give some basic notations and definitions used throughout this article.
We consider that elements of $\mathbb{R}^d$ are column vectors, and for $x\in\mathbb{R}^d$, we write $x=(x_{1},\ldots,x_{d})^{\top}$.
Let $C^{1}_{\mathrm{c}}(U;\mathbb{R}^q)$ be the space of $\mathbb{R}^q$-valued functions on an open set $U$ of $\mathbb{R}^{d}$ with compact support whose first-order partial derivatives exist and are continuous on $U$.
For differentiable functions $f:\mathbb{R}^{d} \to \mathbb{R}^{d}$ and $g:\mathbb{R}^{d} \to \mathbb{R}$, we define the divergence of $f$ by $\mathrm{div}f:=\sum_{i=1}^{d}\frac{\partial f_{i}}{\partial x_{i}}$ and the gradient of $g$ by $\nabla g=(\frac{\partial g}{\partial x_{1}},\ldots,\frac{\partial g}{\partial x_{d}})^{\top}$.
For an essentially bounded measurable function $f:\mathbb{R}^{d} \to \mathbb{R}^{q}$, the supremum norm of $f$ is defined by $\|f\|_{\infty}:={\rm ess~sup}_{x \in \mathbb{R}^{d}}|f(x)|$.
For measurable functions $f:\mathbb{R}^{d} \to \mathbb{R}$ and $g:\mathbb{R}^{d} \to [0,\infty)$, we define
\begin{align*}
\|f\|_{L^{r}(\mathbb{R}^{d}, g)}
:=
\left\{ \begin{array}{ll}
\displaystyle
\left(
\int_{\mathbb{R}^{d}}
|f(x)|^{r}
g(x)
\mathrm{d} x
\right)^{1/r},
&\text{ if } r \in [1,\infty), \\
\displaystyle
\|f\|_{\infty},
&\text{ if } r = \infty
\end{array}\right.
\end{align*}
and the class of all functions with $\|f\|_{L^{r}(\mathbb{R}^{d}, g)}<\infty$ by $L^{r}(\mathbb{R}^{d}, g)$.
In particular, if $g\equiv 1$, then we use the notation $L^{r}(\mathbb{R}^{d})$ as usual $L^{r}$ space in $\mathbb{R}^{d}$.
For $s>0$ and $x \in \mathbb{R}^{d}$, we denote open and closed balls by $U(x;s):=\{y \in \mathbb{R}^{d}~;~|y-x| < s\}$ and $B(x;s):=\{y \in \mathbb{R}^{d}~;~|y-x|\leq s\}$, respectively.
For an invertible $d \times d$-matrix $A=(A_{i,j})_{1 \leq i,j\leq d}$, we set $|A|^2:=\sum_{i,j=1}^{d} A_{i,j}^2$ and
$g_A(x,y)
=\frac{\exp(-\frac{1}{2} \langle A^{-1}(y-x),y-x \rangle_{{\mathbb R}^{d}})}
{(2\pi)^{d/2} \sqrt{\det A}}$,
and $g_{c}(x,y)=g_{cI}(x,y)$ for $c>0$, where $I$ is the identity matrix; for symmetric positive definite $A$, $g_{A}(x,\cdot)$ is the Gaussian density with mean $x$ and covariance matrix $A$.
We denote the gamma function by $\Gamma(x):=\int_{0}^{\infty} t^{x-1}e^{-t} \mathrm{d} t$ for $x \in (0,\infty)$.
\section{Multi-dimensional Avikainen's estimates}\label{sec_2}
Let $(\Omega,\mathscr{F},\mathbb{P})$ be a probability space and let $d$ be a positive integer.
In this section, we provide multi-dimensional versions of Avikainen's estimate for any random variables $X$ and $\widehat{X}$ with bounded density functions and for functions of bounded variation in $\mathbb{R}^{d}$, Orlicz--Sobolev spaces, Sobolev spaces with variable exponents and fractional Sobolev spaces.
\subsection{Function spaces}\label{sec_2_1}
In this subsection, we provide the definitions and properties of functions of bounded variation in $\mathbb{R}^{d}$, Orlicz--Sobolev spaces, Sobolev spaces with variable exponents and fractional Sobolev spaces.
\subsubsection*{Bounded variation in $\mathbb{R}^{d}$}
We first recall the definition of functions of bounded variation in an open subset $U$ of $\mathbb{R}^{d}$.
For more detail, we refer to \cite{EvGa92,Gi84}.
A function $f \in L^{1}(U)$ has bounded variation in $U$, denoted by $f \in BV(U)$, if
\begin{align*}
\int_{U}|Df|
:=
\sup
\left\{
\int_{U}
f(x) \mathrm{div} g(x)
\mathrm{d} x
~;~
g \in C^{1}_{\mathrm{c}}(U;\mathbb{R}^{d}),~
\sup_{x \in U}
|g(x)|
\leq 1
\right\}
<\infty.
\end{align*}
We call $\int_{U}|Df|$ the total variation of $f$ in $U$.
A function $f \in L_{\mathrm{loc}}^{1}(U)$ has locally bounded variation in $U$, denoted by $f \in BV_{\mathrm{loc}}(U)$, if $\int_{V}|Df|<\infty$ for any open set $V \subset U$ such that its closure $\overline{V}$ is compact and $\overline{V} \subset U$.
It follows from the structure theorem (e.g., Theorem 5.1 in \cite{EvGa92}) that for $f \in BV_{\mathrm{loc}}(U)$, there exists a vector valued Radon measure $Df=(D_{1}f,\ldots,D_{d}f)^{\top}$ on $(U,\mathscr{B}(U))$ such that the following integration by parts formula holds:
\begin{align*}
\int_{U}
f (x)\mathrm{div} g(x)
\mathrm{d} x
=
-
\sum_{k=1}^{d}
\int_{U}
g_{k}(x)
D_{k}f(\mathrm{d} x)
,~\text{for all}~g \in C^{1}_{\mathrm{c}}(U;\mathbb{R}^{d}).
\end{align*}
\begin{Rem}\label{Rem_BV_0}
\begin{itemize}
\item[(i)]
$BV(U)$ is a Banach space with the norm $\|f\|_{BV(U)}:=\|f\|_{L^{1}(U)}+\int_{U}|Df|$ (see, Remark 1.12 in \cite{Gi84}).
\item[(ii)]
Let $\{f_{n}\}_{n \in \mathbb{N}}$ be a sequence of functions in $BV(U)$ which converges to $f$ in $L^{1}_{\mathrm{loc}}(U)$.
Then it holds that $\int_{U}|Df| \leq \liminf_{n \to \infty} \int_{U} |Df_{n}|$ (semi-continuity, e.g., Theorem 5.2 in \cite{EvGa92} or Theorem 1.9 in \cite{Gi84}).
\item[(iii)]
Sobolev's inequality holds on $BV(\mathbb{R}^{d})$, that is, if $f \in BV(\mathbb{R}^{d})$ and $d \geq 2$, then there exists $C>0$ such that $\|f\|_{L^{d/(d-1)}({\mathbb R}^{d})} \leq C\int_{\mathbb{R}^{d}}|Df|$ (see, Theorem 5.10 (i) in \cite{EvGa92} or Theorem 1.28 (A) in \cite{Gi84}).
And if $d=1$, then $\|f\|_{\infty} \leq \int_{{\mathbb R}}|Df|$.
Indeed, in the same way as the proof of Theorem 5.6 in \cite{EvGa92}, we choose $f_{k} \in C^{1}_{\mathrm{c}}(\mathbb{R};\mathbb{R})$, $k \in \mathbb{N}$ such that $f_{k} \to f$ $\mathrm{Leb}$-a.e. and $\int_{\mathbb{R}}|f'_{k}(z)| \mathrm{d} z \to \int_{\mathbb{R}} |Df|$ as $k \to \infty$.
Then by the fundamental theorem of calculus, $|f_{k}(x)| \leq |\int_{-\infty}^{x}f_{k}'(z) \mathrm{d} z| \leq \int_{\mathbb{R}}|f_{k}'(z)| \mathrm{d} z$, which implies $\|f\|_{\infty} \leq \int_{{\mathbb R}}|Df|$.
\end{itemize}
\end{Rem}
\begin{Eg}\label{Eg_0}
\begin{itemize}
\item[(i)]
Let $W^{1,1}(U)$ be the classical Sobolev space.
Then $W^{1,1}(U) \subset BV(U)$ (see, e.g., Example, page 197 of \cite{EvGa92} or Example 1.2 in \cite{Gi84}).
\item[(ii)]
Let $E$ be a bounded subset of $\mathbb{R}^{d}$ with $C^{2}$ boundary.
Then ${\bf 1}_{E} \in BV(\mathbb{R}^{d}) \setminus W^{1,1}(\mathbb{R}^{d})$ (see, e.g. Example 1.4 in \cite{Gi84}).
\end{itemize}
\end{Eg}
\subsubsection*{Orlicz--Sobolev spaces}
We now recall the definition of the Orlicz--Sobolev space $W^{1,\Phi}(\mathbb{R}^{d})$ with a Young function $\Phi$.
For more detail, we refer to \cite{HaHa19,RaRe}.
We first recall the definitions of Young functions and N-functions.
A convex function $\Phi:[0,\infty) \to [0,\infty]$ is called a {\it Young function} if it satisfies the conditions: $\Phi(0)=0$ and $\lim_{x \to \infty}\Phi(x)=\infty$.
A Young function $\Phi$ has the following integral form
\begin{align*}
\Phi(x)
=
\int_{0}^{x}
\varphi(y)
\mathrm{d} y,~x \in [0,\infty),
\end{align*}
where $\varphi:[0,\infty) \to [0,\infty]$ is non-decreasing and left continuous such that $\varphi(0)=0$, and if $\varphi(x)=\infty$ for $x \geq a \geq 0$, then $\Phi(x)=\infty$ for $x \geq a$ (see, e.g., Section 1.3, Corollary 2 in \cite{RaRe}).
For a Young function $\Phi$, the {\it complementary function} $\Psi:[0,\infty) \to [0,\infty]$ of $\Phi$ and the {\it generalized inverse} $\Phi^{-1}:[0,\infty] \to [0,\infty]$ of $\Phi$ are defined by
\begin{align*}
\Psi(x)
:=
\sup_{y \geq 0}\{yx-\Phi(y)\}
=
\int_{0}^{x}
\varphi^{-1}(y)
\mathrm{d} y
\quad\text{and}\quad
\Phi^{-1}(x)
:=
\inf\{y \geq 0~;~\Phi(y)>x\},
\end{align*}
where $\varphi^{-1}(x):=\inf\{y \geq 0~;~\varphi(y)>x\}$, $x \geq 0$.
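For instance (an elementary computation), for $\Phi(x):=x^{p}/p$ with $p \in (1,\infty)$ we have $\varphi(y)=y^{p-1}$ and $\varphi^{-1}(y)=y^{1/(p-1)}$, so that
\begin{align*}
\Psi(x)
=
\int_{0}^{x}
y^{\frac{1}{p-1}}
\mathrm{d} y
=
\frac{x^{p^{*}}}{p^{*}},
\qquad
p^{*}:=\frac{p}{p-1},
\end{align*}
and the definition $\Psi(x)=\sup_{y \geq 0}\{yx-\Phi(y)\}$ then yields Young's inequality $yx \leq y^{p}/p+x^{p^{*}}/p^{*}$.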
A Young function $\Phi$ is called an {\it N-function} if it satisfies the following conditions: (N-i) $\Phi$ is continuous; (N-ii) $\Phi(x)=0$ if and only if $x=0$; (N-iii) $\lim_{x \to 0}\Phi(x)/x=0$ and $\lim_{x \to \infty}\Phi(x)/x=\infty$.
Here, continuity in the topology of $C([0,\infty); [0,\infty])$ means that $\lim_{y \to x}\Phi(y)=\Phi(x)$ for every point $x \in [0,\infty)$, regardless of whether $\Phi(x)$ is finite or infinite (e.g., page 14 of \cite{HaHa19}).
For a Young function $\Phi$, the Orlicz space $L^{\Phi}(\mathbb{R}^{d})$ is defined by
\begin{align*}
L^{\Phi}(\mathbb{R}^{d})
&:=
\bigcup_{\alpha >0}
\left\{
f:\mathbb{R}^{d} \to \mathbb{R}
~;~
f\text{ : measurable and }
\int_{\mathbb{R}^{d}}
\Phi(\alpha |f(x)|)
\mathrm{d} x
<\infty
\right\}.
\end{align*}
If $\mathrm{Leb}$-a.e. equal functions are identified, then this function space is a Banach space with the Luxemburg norm
\begin{align*}
\|f\|_{L^{\Phi}(\mathbb{R}^{d})}
:=
\inf
\left\{
\lambda>0~;~
\int_{\mathbb{R}^{d}}
\Phi
\left(
\frac{|f(x)|}{\lambda}
\right)
\mathrm{d} x
\leq 1
\right\}
\end{align*}
(e.g., Section 3.3, Theorem 10 in \cite{RaRe}), and if $\Psi$ is the complementary function of $\Phi$, then the generalized H\"older's inequality
\begin{align}\label{eq_GHolder}
\int_{\mathbb{R}^{d}}
|f(x)g(x)|
\mathrm{d} x
\leq
2
\|f\|_{L^{\Phi}(\mathbb{R}^{d})}
\|g\|_{L^{\Psi}(\mathbb{R}^{d})}
\end{align}
holds for any $f \in L^{\Phi}(\mathbb{R}^{d})$ and $g \in L^{\Psi}(\mathbb{R}^{d})$ (e.g., Section 3.3, Proposition 1 in \cite{RaRe}).
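As a consistency check (our own computation): for $\Phi(x)=x^{p}/p$ and its complementary function $\Psi(x)=x^{p^{*}}/p^{*}$, $p \in (1,\infty)$, one finds $\|f\|_{L^{\Phi}(\mathbb{R}^{d})}=p^{-1/p}\|f\|_{L^{p}(\mathbb{R}^{d})}$ and $\|g\|_{L^{\Psi}(\mathbb{R}^{d})}=(p^{*})^{-1/p^{*}}\|g\|_{L^{p^{*}}(\mathbb{R}^{d})}$, so that \eqref{eq_GHolder} follows from the classical H\"older inequality together with the elementary bound
\begin{align*}
p^{1/p}(p^{*})^{1/p^{*}} \leq 2,
\qquad p \in (1,\infty),
\end{align*}
with equality exactly at $p=2$.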
A Young function $\Phi$ satisfies the $\Delta_{2}$-condition (or is {\it doubling}) if there exists $C>0$ such that for each $x>0$, $\Phi(2x) \leq C \Phi(x)$.
Various equivalent characterizations of the $\Delta_{2}$-condition are described in Section 2.3 of \cite{RaRe}.
For a Young function $\Phi$, the Orlicz--Sobolev space $W^{1,\Phi}(\mathbb{R}^{d})$ is defined by
\begin{align*}
W^{1,\Phi}(\mathbb{R}^{d})
:=
\left\{
f \in L^{\Phi}(\mathbb{R}^{d})
~;~
|Df| \in L^{\Phi}(\mathbb{R}^{d})
\right\},
\end{align*}
where $Df:=(D_{1}f,\ldots,D_{d}f)^{\top}$ is the vector of the first order weak partial derivatives $D_{i}f$ of $f$ for $i=1,\ldots,d$ (e.g., Section 9.3, Definition 1 in \cite{RaRe}).
\begin{Rem}\label{Rem_Orlicz_1}
\begin{itemize}
\item[(i)] The complementary function $\Psi$ to a Young function $\Phi$ is also a Young function (e.g., page 10 of \cite{RaRe}, or page 14 of \cite{HaHa19} and Lemma 2.4.2 in \cite{HaHa19}).
\item[(ii)]
The complementary function $\Psi$ to an N-function $\Phi$ satisfies (N-ii) and (N-iii).
Indeed, if $\Psi(x)=0$, then it holds that $x \leq \Phi(y)/y$ for $y>0$.
Thus we obtain $x=0$ by $\lim_{y \to 0}\Phi(y)/y=0$.
On the other hand, since $\varphi$ is non-decreasing, it holds that $\Phi(x)/x \leq \varphi(x) \leq \Phi(2x)/x$ for $x>0$.
Hence $\Phi$ satisfies $\lim_{x \to 0}\Phi(x)/x=0$ and $\lim_{x \to \infty}\Phi(x)/x=\infty$ if and only if $\varphi$ satisfies $\lim_{x \to 0}\varphi(x)=0$ and $\lim_{x \to \infty}\varphi(x)=\infty$.
This implies that $\lim_{x \to 0}\Psi(x)/x=0$ and $\lim_{x \to \infty}\Psi(x)/x=\infty$.
Furthermore, $\Psi$ is left continuous (see, e.g., page 14 of \cite{HaHa19} and Lemma 2.4.2 in \cite{HaHa19}).
In particular, if $\Psi$ is real-valued, then it is an N-function.
\item[(iii)] In general, the complementary function $\Psi$ to a Young function $\Phi$ does not always satisfy the $\Delta_{2}$-condition even if $\Phi$ satisfies the $\Delta_{2}$-condition (e.g., $\Phi(x)=\int_{0}^{x}\log(1+y){\rm d}y=(1+x)\log(1+x)-x$, $x \in [0,\infty)$).
\item[(iv)]
Let $\Phi$ be a Young function.
Then the following inclusion relations hold: $W^{1,\Phi}({\mathbb R}^{d}) \subset W_{{\rm loc}}^{1,1}({\mathbb R}^{d}) \subset BV_{{\rm loc}}({\mathbb R}^{d})$.
Indeed, the first relation is shown as follows:
Let $f \in W^{1,\Phi}({\mathbb R}^{d})$ and $V$ be a compact subset of ${\mathbb R}^{d}$.
By the definition of $L^{\Phi}({\mathbb R}^{d})$, there exists $\alpha>0$ such that $\int_{{\mathbb R}^{d}}\Phi(\alpha|Df(x)|){\rm d}x<\infty$.
Note that if $\Phi^{-1}(x)=\infty$, then for any $y \geq 0$, $\Phi(y) \leq x$, and thus by taking $y \to \infty$, we conclude $x=\infty$.
Therefore, we have $\Phi^{-1}(\Xint-_{V}\Phi(\alpha|Df(x)|){\rm d}x)<\infty$.
By using Jensen's inequality, we obtain
\begin{align*}
\int_{V}|Df(x)|{\rm d}x
&=
\frac{{\rm Leb}(V)}{\alpha}
\Xint-_{V}
\alpha|Df(x)|
{\rm d}x
\leq
\frac{{\rm Leb}(V)}{\alpha}
\Phi^{-1}
\left(
\Phi
\left(
\Xint-_{V}\alpha|Df(x)|{\rm d}x
\right)
\right) \\
&\leq
\frac{{\rm Leb}(V)}{\alpha}
\Phi^{-1}
\left(
\Xint-_{V}\Phi(\alpha|Df(x)|){\rm d}x
\right)
<\infty,
\end{align*}
which implies $f \in W^{1,1}_{\mathrm{loc}}(\mathbb{R}^{d})$.
\item[(v)]
If a Young function $\Phi$ satisfies the $\Delta_{2}$-condition, then the Orlicz space $L^{\Phi}({\mathbb R}^{d})$ coincides with the set of all functions $f$ which satisfy $\int_{{\mathbb R}^{d}}\Phi(|f(x)|){\rm d}x<\infty$.
\end{itemize}
\end{Rem}
\begin{Eg}\label{Ex_Orlicz_0}
\begin{itemize}
\item[(i)]
Let $p \in (1,\infty)$ and $\Phi(x):=x^{p}/p$, $x \in [0,\infty)$.
Then $\Phi$ is an N-function and satisfies the $\Delta_{2}$-condition.
Moreover, the Orlicz space $L^{\Phi}({\mathbb R}^{d})$ coincides with the classical Lebesgue space $L^{p}({\mathbb R}^{d})$.
\item[(ii)]
Let $p>1$ and $\alpha>0$, or $p>1-\alpha$ and $-1\leq \alpha<0$.
Then the function $\Phi(x):=x^{p} (\log(e+x))^{\alpha}$, $x \in [0,\infty)$ is an N-function.
Moreover, $\Phi$ and its complementary function $\Psi$ satisfy the $\Delta_{2}$-condition.
Note that the Orlicz--Sobolev spaces with such $\Phi$ are used and studied in \cite{AdHu03,IwKoOn01}.
We only check that $\Phi$ and $\Psi$ satisfy the $\Delta_{2}$-condition.
Let $x>0$.
Since $\log(e+x) \leq \log(e+2x) \leq 2\log(e+x)$, we obtain $\Phi(2x) \leq 2^{p}\max\{1,2^{\alpha}\}\Phi(x)$.
On the other hand, if $p>1$ and $\alpha>0$, we obtain for any $y \geq 0$,
\begin{align*}
y(2x)-\Phi(y)
&\leq
2yx
-
y^{p}
\left\{
\log
\left(
e+\frac{y}{2^{\frac{1}{p-1}}}
\right)
\right\}^{\alpha}
=
2^{\frac{p}{p-1}}
\left\{
\frac{y}{2^{\frac{1}{p-1}}}x
-
\Phi
\left(
\frac{y}{2^{\frac{1}{p-1}}}
\right)
\right\}.
\end{align*}
Thus $\Psi(2x) \leq 2^{\frac{p}{p-1}}\Psi(x)$.
If $p>1-\alpha$ and $-1\leq \alpha<0$, since the function $y \mapsto \log(e+y)$ is concave, we obtain for any $y \geq 0$,
\begin{align*}
y(2x)
-
\Phi(y)
&\leq
2yx
-
2^{\frac{\alpha}{\alpha+p-1}}
y^{p}
\left\{
\log\left(
e+\frac{y}{2^{\frac{1}{\alpha+p-1}}}
\right)
\right\}^{\alpha}
=
2^{\frac{\alpha+p}{\alpha+p-1}}
\left\{
\frac{y}{2^{\frac{1}{\alpha+p-1}}}x
-
\Phi\left(
\frac{y}{2^{\frac{1}{\alpha+p-1}}}
\right)
\right\}.
\end{align*}
Thus $\Psi(2x) \leq 2^{\frac{\alpha+p}{\alpha+p-1}}\Psi(x)$.
\item[(iii)]
If $\Phi$ is a Young function satisfying condition (N-iii) and $f \in L^{1}(\mathbb{R}^{d})$ is a continuous, strictly positive function with $\lim_{|x| \to \infty}f(x)=0$, then $f \in L^{\Phi}(\mathbb{R}^{d})$.
Indeed, for any $\alpha>0$, there exists $K>0$ such that $C_{\alpha}:=\sup_{|x|>K}\frac{\Phi(\alpha f(x))}{\alpha f(x)}<\infty$.
Thus since $\Phi$ is non-decreasing and $f$ is bounded on $B(0;K)$, we obtain $\int_{{\mathbb R}^{d}}\Phi(\alpha|f(x)|){\rm d}x \leq {\rm Leb}(B(0;K))\Phi(\alpha\sup_{x \in B(0;K)}|f(x)|)+\alpha C_{\alpha}\|f\|_{L^{1}({\mathbb R}^{d})}<\infty$.
\end{itemize}
\end{Eg}
\subsubsection*{Sobolev spaces with variable exponents}
We next recall the definition of the Sobolev space $W^{1,p(\cdot)}({\mathbb R}^{d})$ with a variable exponent $p$.
This space is an instance of a generalized Orlicz space (also known as a Musielak--Orlicz space), defined through the modular $f \mapsto \int_{{\mathbb R}^{d}}|f(x)|^{p(x)}{\rm d}x$.
For more detail, we refer to \cite{DiHaHaRu11}.
A measurable function $p:\mathbb{R}^{d} \to [1,\infty]$ is called a variable exponent on ${\mathbb R}^{d}$, and the set of all variable exponents is denoted by ${\mathcal P}({\mathbb R}^{d})$.
We define
\begin{align*}
p^{-}
:=
\underset{x \in \mathbb{R}^{d}}{{\rm ess~inf~}}
p(x)
\quad \text{ and } \quad
p^{+}
:=
\underset{x \in \mathbb{R}^{d}}{{\rm ess~sup~}}
p(x).
\end{align*}
The Lebesgue space $L^{p(\cdot)}({\mathbb R}^{d})$ with a variable exponent $p \in {\mathcal P}({\mathbb R}^{d})$ is defined by
\begin{align*}
L^{p(\cdot)}({\mathbb R}^{d})
&:=\bigcup_{\alpha>0}
\left\{
f:\mathbb{R}^{d} \to \mathbb{R}
~;~
f\text{ : measurable and }
\int_{{\mathbb R}^{d}}
\left|
\alpha
f(x)
\right|^{p(x)}
{\rm d}x
<\infty
\right\}.
\end{align*}
If $\mathrm{Leb}$-a.e. equal functions are identified, then this function space is a Banach space with respect to the norm
\begin{align*}
\|f\|_{L^{p(\cdot)}({\mathbb R}^{d})}
:=
\inf
\left\{
\lambda>0
~;~
\int_{{\mathbb R}^{d}}
\left|
\frac{f(x)}{\lambda}
\right|^{p(x)}
{\rm d}x
\leq 1
\right\}
\end{align*}
(e.g., Theorem 3.2.7 in \cite{DiHaHaRu11}).
Let $p,q,s \in {\mathcal P}({\mathbb R}^{d})$ and assume that
\begin{align*}
\frac{1}{s(x)}=\frac{1}{p(x)}+\frac{1}{q(x)},~
\text{$\mathrm{Leb}$-a.e}.~x \in \mathbb{R}^{d}.
\end{align*}
Then for any $f \in L^{p(\cdot)}({\mathbb R}^{d})$ and $g \in L^{q(\cdot)}({\mathbb R}^{d})$, the generalized H\"older's inequality
\begin{align}\label{eq_GHolder_1}
\|fg\|_{L^{s(\cdot)}({\mathbb R}^{d})}
\leq
2
\|f\|_{L^{p(\cdot)}({\mathbb R}^{d})}
\|g\|_{L^{q(\cdot)}({\mathbb R}^{d})}
\end{align}
holds (see, Lemma 3.2.20 in \cite{DiHaHaRu11}).
In the case $s=p=q=\infty$, we use the convention $1/\infty=0$.
We say a function $f: {\mathbb R}^{d} \to {\mathbb R}$ is locally log-H\"older continuous on ${\mathbb R}^{d}$ if there exists $C>0$ such that for any $x, y \in {\mathbb R}^{d}$,
\begin{align*}
\left|
f(x)
-
f(y)
\right|
\leq
\frac{C}{\log(e+1/|x-y|)}.
\end{align*}
Various equivalent characterizations of locally log-H\"older continuity are described in Lemma 4.1.6 of \cite{DiHaHaRu11}.
We say that $f: {\mathbb R}^{d} \to {\mathbb R}$ satisfies the log-H\"older decay condition if there exist $f_{\infty} \in {\mathbb R}$ and $C>0$ such that for any $x \in {\mathbb R}^{d}$,
\begin{align}
\label{eq:7_1}
\left|
f(x)
-
f_{\infty}
\right|
\leq
\frac{C}{\log(e+|x|)}.
\end{align}
We say that $f: {\mathbb R}^{d} \to {\mathbb R}$ is globally log-H\"older continuous on ${\mathbb R}^{d}$ if it is locally log-H\"older continuous on ${\mathbb R}^{d}$ and satisfies the log-H\"older decay condition.
If $f: {\mathbb R}^{d} \to {\mathbb R}$ is globally log-H\"older continuous on ${\mathbb R}^{d}$, then the constant $f_{\infty}$ in \eqref{eq:7_1} is unique and $f$ is bounded (e.g., page 100 of \cite{DiHaHaRu11}).
We define
\begin{align*}
{\mathcal P}^{\log}({\mathbb R}^{d})
:=
\left\{
p \in {\mathcal P}({\mathbb R}^{d})
~;~
1/p \text{ is globally log-H\"older continuous}
\right\}
\end{align*}
and define $p_{\infty}$ by $1/p_{\infty}:=\lim_{|x| \to \infty}1/p(x)$.
As usual we use the convention $1/\infty:=0$.
The Sobolev space $W^{1,p(\cdot)}({\mathbb R}^{d})$ with a variable exponent $p \in {\mathcal P}({\mathbb R}^{d})$ is defined by
\begin{align*}
W^{1,p(\cdot)}({\mathbb R}^{d})
:=
\left\{
f \in L^{p(\cdot)}({\mathbb R}^{d})
~;~
|Df|
\in
L^{p(\cdot)}({\mathbb R}^{d})
\right\},
\end{align*}
where $Df:=(D_{1}f,\ldots,D_{d}f)^{\top}$ is the vector of the first order weak partial derivatives $D_{i}f$ of $f$ for $i=1,\ldots,d$ (see, e.g., Definition 8.1.2 in \cite{DiHaHaRu11}).
\begin{Rem}\label{Rem_Sob_expo}
\begin{itemize}
\item[(i)]
For $p \in {\mathcal P}^{\log}({\mathbb R}^{d})$, although $1/p$ is bounded, $p$ is not always bounded (e.g., page 101 of \cite{DiHaHaRu11}).
\item[(ii)]
$p \in {\mathcal P}^{\log}({\mathbb R}^{d})$ if and only if $p^{*}:=p/(p-1) \in {\mathcal P}^{\log}({\mathbb R}^{d})$, and then $(p_{\infty})^{*}=(p^{*})_{\infty}$ (e.g., page 101 of \cite{DiHaHaRu11}).
\item[(iii)]
For $p \in {\mathcal P}({\mathbb R}^{d})$ with $p^{+}<\infty$, $p \in {\mathcal P}^{\log}({\mathbb R}^{d})$ if and only if $p$ is globally log-H\"older continuous.
This is due to the fact that $p \mapsto 1/p$ is a bilipschitz mapping from $[p^{-},p^{+}]$ to $[1/p^{+},1/p^{-}]$ (e.g., Remark 4.1.5 in \cite{DiHaHaRu11}).
\item[(iv)]
Note that the following inclusion relations hold: $W^{1,p(\cdot)}({\mathbb R}^{d}) \subset W_{{\rm loc}}^{1,p^{-}}({\mathbb R}^{d}) \subset W_{{\rm loc}}^{1,1}({\mathbb R}^{d})$.
Indeed, the first relation is shown by using the generalized H\"older's inequality \eqref{eq_GHolder_1}.
\end{itemize}
\end{Rem}
\begin{Eg}
Let $d=1$ and
\begin{align*}
p(x)
:=
\max
\left\{
1-e^{3-|x|},
\min
\left\{
\frac{6}{5},
\max
\left\{
\frac{1}{2},
\frac{3}{2}-x^{2}
\right\}
\right\}
\right\}
+1,
\quad
x \in {\mathbb R}.
\end{align*}
Then $p \in {\mathcal P}^{\log}({\mathbb R})$ and $1<p^{-}<p^{+}<\infty$ (e.g., Example 1.3 in \cite{NaSa12}, or Example 9.1.15 and Example 9.1.16 in \cite{YaLiKy17}).
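For orientation (our own elementary computation): here $p(0)=1+6/5=11/5$, $p(x)=3/2$ for $1 \leq |x| \leq 3+\log 2$, and $p(x)=2-e^{3-|x|} \to 2$ as $|x| \to \infty$, so that indeed $p^{-}=3/2$ and $p^{+}=11/5$.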
\end{Eg}
\subsubsection*{Fractional Sobolev spaces}
We finally recall the definition of the fractional Sobolev space $W^{s,p}({\mathbb R}^{d})$.
For more detail, we refer to \cite{DiPaVa12}.
Let $s \in (0,1)$ be a fractional exponent and $p \in [1,\infty)$.
The fractional Sobolev space $W^{s,p}({\mathbb R}^{d})$ is defined by
\begin{align*}
W^{s,p}({\mathbb R}^{d})
:=
\left\{
f \in L^{p}({\mathbb R}^{d})
~;~
\int_{{\mathbb R}^{d}}
\int_{{\mathbb R}^{d}}
\left(
\frac{|f(x)-f(y)|}{|x-y|^{d/p+s}}
\right)^{p}
\mathrm{d} x
\mathrm{d} y
<\infty
\right\}.
\end{align*}
In the literature, the fractional Sobolev space is also called the Aronszajn, Gagliardo or Slobodeckij space.
For $s \in (0,1)$, $p \in [1,\infty)$ and $f \in W^{s,p}(\mathbb{R}^{d})$, we define the operator $G_{s,p}$ by
\begin{align*}
G_{s,p}f(x)
:=
\left(
\int_{{\mathbb R}^{d}}
\left(
\frac{|f(x)-f(y)|}{|x-y|^{d/p+s}}
\right)^{p}
{\rm d}y
\right)^{1/p},~
x \in {\mathbb R}^{d}.
\end{align*}
Then $G_{s,p}f \in L^{p}(\mathbb{R}^{d})$; indeed, by Tonelli's theorem, $\|G_{s,p}f\|_{L^{p}({\mathbb R}^{d})}^{p}$ is exactly the double integral appearing in the definition of $W^{s,p}({\mathbb R}^{d})$.
\subsection{Hardy--Littlewood maximal function and estimates}\label{sec_2_2}
In this subsection, we recall the definition of the Hardy--Littlewood maximal function and its estimates on the function spaces defined in Section \ref{sec_2_1}; these estimates are well known in the fields of real analysis and harmonic analysis.
Let $\nu$ be a locally finite vector valued measure on $\mathbb{R}^{d}$.
The Hardy--Littlewood maximal operator $M$ for $\nu$ is defined by
\begin{align*}
M\nu(x)
:=
\sup_{s>0}
\Xint-_{B(x;s)}
\mathrm{d} |\nu|(z),
\quad
\Xint-_{B(x;s)}
\mathrm{d} |\nu|(z)
:=
\frac{|\nu|(B(x;s))}{\mathrm{Leb}(B(x;s))},
\end{align*}
where $|\nu|$ is the total variation of $\nu$.
For $R>0$, we define the restricted Hardy--Littlewood maximal function $M_{R}\nu$ by
\begin{align*}
M_{R}\nu(x)
:=
\sup_{0<s\leq R}
\Xint-_{B(x;s)}
\mathrm{d} |\nu|(z),~
x \in {\mathbb R}^{d}.
\end{align*}
If $\nu(\mathrm{d} x)=f(x) \mathrm{d} x$, then we write $Mf(x)$ and $M_{R}f(x)$, respectively.
The following lemma is well-known as the Hardy--Littlewood maximal weak and strong type estimates.
\begin{Lem}\label{Lem_key_2}
\begin{itemize}
\item[(i)]
Weak type estimate (e.g., Chapter III, Section 4.1, (a) in \cite{St70}).
There exists $A_{1}>0$ such that for any finite vector valued signed measure $\nu$ on $\mathbb{R}^{d}$ and $\lambda>0$,
\begin{align*}
\mathrm{Leb}
\left(\left\{x\in \mathbb{R}^{d}~;~M\nu(x) > \lambda\right\}\right)
\leq
A_{1}
|\nu|(\mathbb{R}^{d})
\lambda^{-1}.
\end{align*}
\item[(ii)]
Strong type estimate (e.g., Chapter I, Section 1.3, Theorem 1 (c) in \cite{St70}).
For any $p \in (1,\infty]$, there exists $A_{p}>0$ such that for any $f \in L^{p}(\mathbb{R}^{d})$,
\begin{align*}
\|Mf\|_{L^{p}(\mathbb{R}^{d})}
\leq
A_{p} \|f\|_{L^{p}(\mathbb{R}^{d})}.
\end{align*}
\end{itemize}
\end{Lem}
\begin{Rem}
\begin{itemize}
\item[(i)]
The estimate in Lemma \ref{Lem_key_2} (i) can be shown in the same way as the proof of Theorem 1 (b) in Chapter I, Section 1.3 of \cite{St70}, as an application of Vitali's covering lemma (e.g., Chapter I, Section 1.6, Lemma in \cite{St70}), and the constant $A_{1}$ can be chosen as $A_{1}=5^{d}$.
\item[(ii)]
The Hardy--Littlewood maximal operator is used to prove the flow property of ordinary differential equations (ODEs) and stochastic differential equations (SDEs) with Sobolev coefficients.
In particular, by using this maximal operator, Crippa and De Lellis \cite{CrLe08} proved the existence of a unique regular Lagrangian flow for ODEs with a local Sobolev coefficient, and Zhang \cite{Zh11} (also see, \cite{Zh13}) studied the stochastic homeomorphism flows property for SDEs with local Sobolev coefficients.
\end{itemize}
\end{Rem}
The following lemma shows that the $\Delta_{2}$-condition is equivalent to the Hardy--Littlewood maximal strong type estimate on the Orlicz space.
\begin{Lem}[Theorem 2.1 in \cite{Ga88}]\label{Lem_key_3}
Let $\Phi$ be an N-function and $\Psi$ be its complementary function.
Then $\Psi$ satisfies the $\Delta_{2}$-condition if and only if there exists $A_{\Phi}>0$ such that for any $f \in L^{\Phi}(\mathbb{R}^{d})$,
\begin{align*}
\|Mf\|_{L^{\Phi}({\mathbb R}^{d})}
\leq
A_{\Phi}
\|f\|_{L^{\Phi}({\mathbb R}^{d})}.
\end{align*}
\end{Lem}
The following lemma shows that the Hardy--Littlewood maximal strong type estimate holds on the Sobolev space $W^{1,p(\cdot)}(\mathbb{R}^{d})$ with a variable exponent $p \in {\mathcal P}^{\log}({\mathbb R}^{d})$.
\begin{Lem}[e.g., Theorem 4.3.8 in \cite{DiHaHaRu11}]
\label{lem:0.3}
Let $p \in {\mathcal P}^{\log}({\mathbb R}^{d})$ with $1<p^{-}$.
Then there exists $A_{p(\cdot)}>0$ such that for any $f \in L^{p(\cdot)}({\mathbb R}^{d})$,
\begin{align*}
\|Mf\|_{L^{p(\cdot)}({\mathbb R}^{d})}
\leq
A_{p(\cdot)}
\|f\|_{L^{p(\cdot)}({\mathbb R}^{d})}.
\end{align*}
\end{Lem}
\subsection{Main results}\label{sec_2_3}
In this subsection, we state the main results of this article: multi-dimensional versions of Avikainen's estimate.
We use the conventions $1/\infty:=0$ and $1/0:=\infty$.
We first consider the case of bounded variation in $\mathbb{R}^{d}$.
\begin{Thm}\label{main_0}
Let $X, \widehat{X}:\Omega \to \mathbb{R}^{d}$ be random variables which admit density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively, and let $r \in (1,\infty]$.
Suppose that $p_{X},p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$.
Then for any $f \in BV(\mathbb{R}^{d}) \cap L^{r}(\mathbb{R}^{d},p_{X}) \cap L^{r}(\mathbb{R}^{d},p_{\widehat{X}})$, $p \in (0,\infty)$ and $q \in [1,r)$, it holds that
\begin{align}\label{main_0_1}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
\leq
C_{BV}(p,q,r)
\mathbb{E}\left[\left|X-\widehat{X}\right|^{p}\right]^{\frac{1-q/r}{p+1}},
\end{align}
where the constant $C_{BV}(p,q,r)$ is defined by
\begin{align*}
&C_{BV}(p,q,r)
\\&:=
\left\{
\begin{array}{lll}
\begin{array}{l}
\displaystyle
(2\|f\|_{\infty})^{q-1}
\Big(
2^{p+1}K_{0}^{p}
\|f\|_{\infty}
\\
\hspace{1.92cm}
\displaystyle
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|Df|
\Big),
\end{array}
&\text{if}\quad
r=\infty,\\
\begin{array}{l}
\hspace{-0.17cm}
\displaystyle{
2^{q-1}
\left(
2^{p+1}K_{0}^{p}
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{{\mathbb R}^{d}}
|Df|
\right)
} \\
\hspace{1.92cm}
\displaystyle
+
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{r},
\end{array}
&\text{if}\quad
r \in (1,\infty),
& {\mathbb E}[|X-\widehat{X}|^{p}]<1, \\
\displaystyle
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q},
&\text{if}\quad
r \in (1,\infty),
& {\mathbb E}[|X-\widehat{X}|^{p}] \geq 1.
\end{array}\right.
\end{align*}
Here, $K_{0}$ and $A_{1}$ are the constants of the pointwise estimate \eqref{Lem_key_0_1} in Lemma \ref{Lem_key_0} and of the Hardy--Littlewood maximal weak type estimate in Lemma \ref{Lem_key_2} (i), respectively.
\end{Thm}
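For concreteness (a direct specialization of the theorem, with no new content): taking $r=\infty$ and $p=q=1$, the estimate \eqref{main_0_1} reads
\begin{align*}
\mathbb{E}\left[\left|f(X)-f(\widehat{X})\right|\right]
\leq
\left(
4K_{0}\|f\|_{\infty}
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}|Df|
\right)
\mathbb{E}\left[\left|X-\widehat{X}\right|\right]^{\frac{1}{2}},
\end{align*}
a form directly comparable with the one-dimensional estimate \eqref{Av_0}.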
\begin{Rem}
\label{rem:2.12}
\begin{itemize}
\item[(i)]
In Theorem \ref{main_0}, we need the existence of bounded density functions for both $X$ and $\widehat{X}$ in order to use the pointwise estimate \eqref{Lem_key_0_1} in Lemma \ref{Lem_key_0}.
\item[(ii)]
For the one-dimensional case, Theorem \ref{main_0} with $r=\infty$ requires a stronger assumption than the result of Avikainen \cite{Av09}, namely the existence of bounded density functions for both $X$ and $\widehat{X}$.
Since the constant $C_{BV}(p,q,\infty)$ depends on $\|f\|_{\infty}$ and $K_{0}$, it might be difficult to compare the constants in the upper bound of \eqref{Av_0} and \eqref{main_0_1} in general.
\item[(iii)]
Arguing as in the proof of Theorem 2.4 (ii) in \cite{Av09}, we can prove that in the estimate \eqref{main_0_1} for $r=\infty$, the power $1/(p+1)$ is optimal, that is, there exist $f \in BV(\mathbb{R}^{d}) \cap L^{\infty}({\mathbb R}^{d})$ and random variables $X$ and $\widehat{X}$ with bounded density functions such that both sides in \eqref{main_0_1} coincide for some constant $C_{BV}(p,q,\infty)$.
\item[(iv)]
In the case of $r \in (1,\infty)$ and ${\mathbb E}[|X-\widehat{X}|^{p}] \geq 1$, the power of the right hand side of the estimate \eqref{main_0_1} does not necessarily have to be $\frac{1-q/r}{p+1}$ and can be chosen arbitrarily.
Indeed, for any $\alpha \geq 0$, since ${\mathbb E}[|X-\widehat{X}|^{p}]^{\alpha} \geq 1$, we obtain
\begin{align*}
{\mathbb E}\left[\left|f(X)-f(\widehat{X})\right|^{q}\right]
&\leq \left(\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}+\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}\right)^{q} \\
&\leq \left(\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}+\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}\right)^{q}{\mathbb E}\left[\left|X-\widehat{X}\right|^{p}\right]^{\alpha}.
\end{align*}
\item[(v)]
If $r \in (1,\infty)$, then the constant $C_{BV}(p,q,r)$ depends on both densities $p_{X}$ and $p_{\widehat{X}}$ or expectations $\mathbb{E}[|f(X)|^{r}]$ and $\mathbb{E}[|f(\widehat{X})|^{r}]$.
However, in the application to the multilevel Monte Carlo method in Section \ref{sec_3}, it might be possible to improve this dependence by using the Gaussian upper bound (see, \eqref{GB_1}, \eqref{GB_2} and Remark \ref{Rem_GB_0} (ii)).
\end{itemize}
\end{Rem}
Before proving Theorem \ref{main_0}, we give a pointwise estimate for functions of locally bounded variation in $\mathbb{R}^{d}$, which plays a crucial role in our arguments.
\begin{Lem}
\label{Lem_key_0}
Let $f \in BV_{\mathrm{loc}}(\mathbb{R}^{d})$.
Then there exist a constant $K_{0}>0$ and a Lebesgue null set $N \in {\mathscr B}({\mathbb R}^{d})$ such that for all $x,y \in \mathbb{R}^{d} \setminus N$,
\begin{align}
|f(x)-f(y)|
&\leq
K_{0}
|x-y|
\left\{
M_{2|x-y|}(Df)(x)
+
M_{2|x-y|}(Df)(y)
\right\}.
\label{Lem_key_0_1}
\end{align}
\end{Lem}
\begin{Rem}\label{Rem_0}
\begin{itemize}
\item[(i)]
Note that Theorem 3 in \cite{LaTu14} shows that functions in $BV(\mathbb{R}^{d})$ can be characterized by the estimate \eqref{Lem_key_0_1}.
\item[(ii)]
In Theorem \ref{main_0}, we need to assume boundedness for the density functions of both random variables $X$ and $\widehat{X}$, which is a stronger assumption than in the one-dimensional case (see, Theorem 2.4(i) in \cite{Av09}).
Here, if a ``nonsymmetric" version $|f(x)-f(y)|\leq K_{0}|x-y|M_{2|x-y|}(Df)(x)$, $\mathrm{Leb}$-a.e. $x, y \in \mathbb{R}^{d}$ of the pointwise estimate is correct, then we can remove the boundedness of the density function of either $X$ or $\widehat{X}$ in Theorem \ref{main_0}.
\end{itemize}
\end{Rem}
The estimate \eqref{Lem_key_0_1} is essentially known in the field of harmonic analysis.
For the convenience of readers, we will give a proof below.
\begin{proof}[Proof of Lemma \ref{Lem_key_0}]
The proof is based on Theorem 3.2 in \cite{HaKo00}.
We first note that if $d \geq 2$, by using Jensen's inequality and Poincar\'e's inequality for functions of locally bounded variation (see, e.g., Theorem 5.10 (ii) in \cite{EvGa92}), there exists a constant $C_{0}>0$ such that for any $x \in \mathbb{R}^{d}$ and $r>0$,
\begin{align}\label{Lem_key_0_2}
\Xint-_{B(x;r)}
\left|
f(z)
-
(f)_{x,r}
\right|
\mathrm{d} z
&\leq
\left(
\Xint-_{B(x;r)}
\left|
f(z)
-
(f)_{x,r}
\right|^{\frac{d}{d-1}}
\mathrm{d} z
\right)^{\frac{d-1}{d}} \notag\\
&\leq
\frac{C_{0}}{\mathrm{Leb}(U(x;r))^{\frac{d-1}{d}}}
\int_{U(x;r)}|Df| \notag\\
&\leq
C_{d}
r
M_{r}(Df)(x),
\end{align}
where $(f)_{x,r}:=\Xint-_{B(x;r)} f(z) \mathrm{d} z$ and $C_{d}:=C_{0} \sqrt{\pi} \Gamma(d/2+1)^{-1/d}$.
If $d=1$, there exists $\{f_{k}\}_{k \in \mathbb{N}} \subset C^{1}(U(x;r); {\mathbb R})$ such that $f_{k} \to f$ in $L^{1}(U(x;r))$ and $\int_{U(x;r)} |f_{k}'(z)| \mathrm{d} z \to \int_{U(x;r)}|Df|$ as $k \to \infty$ (e.g., Theorem 5.3 in \cite{EvGa92} or Theorem 1.17 in \cite{Gi84}).
Then by using Fatou's lemma and Lemma 4.1 in \cite{EvGa92} with $p=1$, there exists $C_{1}>0$ such that
\begin{align}\label{Lem_key_0_20}
\Xint-_{B(x;r)}
\left|
f(z)
-
(f)_{x,r}
\right|
\mathrm{d} z
&\leq
\liminf_{k \to \infty}
\Xint-_{U(x;r)}
\Xint-_{U(x;r)}
\left|
f_{k}(z)
-
f_{k}(y)
\right|
\mathrm{d} y
\mathrm{d} z \notag\\
&\leq C_{1}r
\liminf_{k \to \infty}
\Xint-_{U(x;r)}
\Xint-_{U(x;r)}
|f_{k}'(y)|
\mathrm{d} y
\mathrm{d} z
\notag
\\
&=C_{1}r
\Xint-_{U(x;r)} |Df|
\leq
C_{1}rM_{r}(Df)(x).
\end{align}
Moreover, the Lebesgue differentiation theorem (e.g., Theorem 1.32 in \cite{EvGa92}) shows that there exists a Lebesgue null set $N \in {\mathscr B}({\mathbb R}^{d})$ such that for any $x \in \mathbb{R}^{d} \setminus N$,
\begin{align}\label{Lem_key_0_3}
\lim_{r \to 0}
(f)_{x,r}
=
f(x).
\end{align}
Let $x, y \in \mathbb{R}^{d} \setminus N$ be fixed and set $r_{i}:=2^{-i}|x-y|$ for $i \in \mathbb{N}\cup\{0\}$.
Then by using \eqref{Lem_key_0_3}, we obtain
\begin{align*}
|f(x)-(f)_{x,r_{0}}|
&\leq
\sum_{i=0}^{\infty}
\left|
(f)_{x,r_{i+1}}
-
(f)_{x,r_{i}}
\right|\notag\\
&\leq
\sum_{i=0}^{\infty}
\Xint-_{B(x;r_{i+1})}
\left|
f(z)
-
(f)_{x,r_{i}}
\right|
\mathrm{d} z\notag\\
&\leq
2^{d}
\sum_{i=0}^{\infty}
\Xint-_{B(x;r_{i})}
\left|
f(z)
-
(f)_{x,r_{i}}
\right|
\mathrm{d} z.
\end{align*}
Therefore, it follows from \eqref{Lem_key_0_2} or \eqref{Lem_key_0_20} that
\begin{align}\label{Lem_key_0_4}
|f(x)-(f)_{x,r_{0}}|
&\leq
2^{d+1}
C_{d}
|x-y|
M_{|x-y|}(Df)(x).
\end{align}
In the same way, we have
\begin{align}\label{Lem_key_0_5}
|f(y)-(f)_{y,r_{0}}|
\leq
2^{d+1}
C_{d}
|x-y|
M_{|x-y|}(Df)(y).
\end{align}
On the other hand, it holds from \eqref{Lem_key_0_2} or \eqref{Lem_key_0_20} that
\begin{align}\label{Lem_key_0_6}
|(f)_{x,r_{0}}-(f)_{y,r_{0}}|
&\leq
|(f)_{x,r_{0}}-(f)_{x,2r_{0}}|
+
|(f)_{x,2r_{0}}-(f)_{y,r_{0}}| \notag\\
&\leq
\Xint-_{B(x;r_{0})}
|f(z)-(f)_{x,2r_{0}}|
\mathrm{d} z
+
\Xint-_{B(y;r_{0})}
|(f)_{x,2r_{0}}-f(z)|
\mathrm{d} z \notag\\
&\leq
2^{d+1}
\Xint-_{B(x;2r_{0})}
|f(z)-(f)_{x,2r_{0}}|
\mathrm{d} z \notag\\
&\leq
2^{d+2}C_{d}|x-y|M_{2|x-y|}(Df)(x).
\end{align}
By combining \eqref{Lem_key_0_4}, \eqref{Lem_key_0_5} and \eqref{Lem_key_0_6}, we conclude the proof.
\end{proof}
By using the Hardy--Littlewood maximal weak type estimate in Lemma \ref{Lem_key_2} (i) and the pointwise estimate \eqref{Lem_key_0_1} in Lemma \ref{Lem_key_0}, we first prove the estimate \eqref{main_0_1} for indicator functions ${\bf 1}_{E} \in BV(\mathbb{R}^{d})$, which is a multi-dimensional version of Lemma 3.4 in \cite{Av09} and Proposition 5.3 in \cite{GiXi17}.
\begin{Lem}\label{Lem_key_1}
Let $X, \widehat{X}:\Omega \to \mathbb{R}^{d}$ be random variables which admit density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively.
Suppose that $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$.
Then for any $E \in \mathscr{B}(\mathbb{R}^{d})$ with ${\bf 1}_{E} \in BV(\mathbb{R}^{d})$ and $p,q \in (0,\infty)$, it holds that
\begin{align}\label{main_0_2}
\mathbb{E}\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{q}
\right]
\leq
\left(
(2K_{0})^{p}
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E}|
\right)
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}}.
\end{align}
\end{Lem}
\begin{proof}
If $\mathbb{E}[|X-\widehat{X}|^{p}]=0$ then $X=\widehat{X}$ almost surely, and thus the statement is obvious.
We assume $\mathbb{E}[|X-\widehat{X}|^{p}]>0$.
For $\lambda>0$, we define the event $\Omega(D {\bf 1}_{E},\lambda) \in \mathscr{F}$ by
\begin{align*}
\Omega(D {\bf 1}_{E},\lambda)
:=
\left\{
M(D {\bf 1}_{E})(X)>\lambda
\right\}
\cup
\left\{
M(D {\bf 1}_{E})(\widehat{X})>\lambda
\right\}.
\end{align*}
We first remark that for any $x,y \in \mathbb{R}^{d}$, it holds that
\begin{align}\label{Lem_key_1_1}
|{\bf 1}_{E}(x)-{\bf 1}_{E}(y)|^{q}
=
|{\bf 1}_{E}(x)-{\bf 1}_{E}(y)|^{p}.
\end{align}
By using this trick, we obtain
\begin{align*}
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{q}
\right]
=
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{p}
{\bf 1}_{\Omega(D {\bf 1}_{E},\lambda)}
\right]
+
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{p}
{\bf 1}_{\Omega(D {\bf 1}_{E},\lambda)^{{\rm c}}}
\right].
\end{align*}
On the event $\Omega(D {\bf 1}_{E},\lambda)$, since $X$ and $\widehat{X}$ have bounded density functions, by using Lemma \ref{Lem_key_2} (i), we have
\begin{align}
\label{eq:7}
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{p}
{\bf 1}_{\Omega(D {\bf 1}_{E},\lambda)}
\right]
&\leq
{\mathbb P}(M(D{\bf 1}_{E})(X)>\lambda)
+
{\mathbb P}(M(D{\bf 1}_{E})(\widehat{X})>\lambda)
\notag\\&
\leq
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E}|
\lambda^{-1}.
\end{align}
Let $N \in {\mathscr B}({\mathbb R}^{d})$ be the Lebesgue null set provided by Lemma \ref{Lem_key_0} for $f={\bf 1}_{E}$.
On the event $\Omega(D {\bf 1}_{E},\lambda)^{{\rm c}}$, since $X$ and $\widehat{X}$ have density functions, by Lemma \ref{Lem_key_0}, we obtain
\begin{align}
\label{eq:8}
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{p}
{\bf 1}_{\Omega(D {\bf 1}_{E},\lambda)^{{\rm c}}}
\right]
&=
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{p}
{\bf 1}_{\Omega(D {\bf 1}_{E},\lambda)^{{\rm c}}}
{\bf 1}_{\mathbb{R}^{d} \setminus N}(X)
{\bf 1}_{\mathbb{R}^{d} \setminus N}(\widehat{X})
\right]
\notag\\
&\leq
K_{0}^{p}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\{
M(D{\bf 1}_{E})(X)
+
M(D{\bf 1}_{E})(\widehat{X})
\}^{p}
{\bf 1}_{\Omega(D {\bf 1}_{E},\lambda)^{{\rm c}}}
\right]
\notag \\
&\leq
(2K_{0})^{p}
\lambda^{p}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right].
\end{align}
Hence, by \eqref{eq:7} and \eqref{eq:8}, we have
\begin{align*}
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{q}
\right]
\leq
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E}|\lambda^{-1}
+
(2K_{0})^{p}
\lambda^{p}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right].
\end{align*}
Now we choose $\lambda:={\mathbb E}[|X-\widehat{X}|^{p}]^{-R}>0$ for some $R>0$.
Then we obtain
\begin{align*}
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{q}
\right]
\leq
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E}|
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{R}
+
(2K_{0})^{p}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{1-pR}.
\end{align*}
Choosing $R$ such that $R=1-pR$, that is, $R=\frac{1}{p+1}$, we obtain
\begin{align*}
{\mathbb E}
\left[
\left|
{\bf 1}_{E}(X)
-
{\bf 1}_{E}(\widehat{X})
\right|^{q}
\right]
\leq
\left(
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E}|
+
(2K_{0})^{p}
\right)
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}},
\end{align*}
which concludes the statement.
\end{proof}
\begin{Rem}
Note that the equation \eqref{Lem_key_1_1} is the key trick for replacing the power $q$ in the left hand side of Avikainen's estimates \eqref{main_0_1} and \eqref{main_0_2} by $p$ in the right hand side.
\end{Rem}
By using Lemma \ref{Lem_key_1} with the coarea formula for functions of bounded variation, we now prove Theorem \ref{main_0} for general functions $f \in BV(\mathbb{R}^{d})$.
\begin{proof}[Proof of Theorem \ref{main_0}]
For $\lambda>0$, we define the event $\Omega(f,\lambda) \in \mathscr{F}$ by
\begin{align*}
\Omega(f,\lambda)
:=
\{
|f(X)|>\lambda
\}
\cup
\{
|f(\widehat{X})|>\lambda
\}.
\end{align*}
For $t \in \mathbb{R}$, we define $E_{t}:=\{x \in \mathbb{R}^{d}~;~f(x) > t\}$.
Then for any $x,y \in \mathbb{R}^{d}$, it holds that
\begin{align*}
|f(x)-f(y)|
=
\int_{f(x) \wedge f(y)}^{f(x)\vee f(y)}
\left|
{\bf 1}_{E_{t}}(x)
-
{\bf 1}_{E_{t}}(y)
\right|
\mathrm{d} t.
\end{align*}
Hence, since $q \in [1,\infty)$, by using Jensen's inequality, it holds that
\begin{align*}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(f,\lambda)^{{\rm c}}}
\right]
\leq
(2\lambda)^{q-1}
\int_{-\lambda}^{\lambda}
\mathbb{E}\left[
\left|
{\bf 1}_{E_{t}}(X)
-
{\bf 1}_{E_{t}}(\widehat{X})
\right|^{q}
\right]
\mathrm{d} t.
\end{align*}
It follows from Theorem 5.9 (i) in \cite{EvGa92} that ${\bf 1}_{E_{t}} \in BV(\mathbb{R}^{d})$ for $\mathrm{Leb}$-a.e. $t \in \mathbb{R}$.
Note that by the coarea formula for functions of bounded variation (see, e.g., Theorem 5.9 (ii) in \cite{EvGa92}), it holds that
\begin{align*}
\int_{\mathbb{R}}
\mathrm{d} t
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E_{t}}|
=
\int_{\mathbb{R}^{d}}
|Df|.
\end{align*}
Therefore, by using Lemma \ref{Lem_key_1} with $E=E_{t}$, we obtain
\begin{align}\label{eq_0_2}
&\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(f,\lambda)^{{\rm c}}}
\right] \notag
\\&\leq
(2\lambda)^{q-1}
\int_{-\lambda}^{\lambda}
\left(
(2K_{0})^{p}
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|D{\bf 1}_{E_{t}}|
\right)
\mathrm{d} t
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}} \notag\\
&\leq
(2\lambda)^{q-1}
\left(
2^{p+1}K_{0}^{p}
\lambda
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{\mathbb{R}^{d}}
|Df|
\right)
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}}.
\end{align}
If $r=\infty$ (i.e., $f$ is
essentially bounded with respect to Lebesgue measure), then since $X$ and $\widehat{X}$ have density functions, by choosing $\lambda:=\|f\|_{\infty}$, it holds that $\mathbb{P}(\Omega(f,\lambda)^{{\rm c}})=1$.
Thus the estimate \eqref{eq_0_2} implies that the estimate \eqref{main_0_1} in the case of $r=\infty$ holds.
We next show the estimate \eqref{main_0_1} in the case of $r \in (1,\infty)$ and ${\mathbb E}[|X-\widehat{X}|^{p}]<1$.
On the event $\Omega(f,\lambda)$, by using H\"older's inequality with $\frac{1}{r/q}+\frac{1}{r/(r-q)}=1$, we obtain
\begin{align}\label{eq_0_3}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(f,\lambda)}
\right]
&\leq
\left(\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}+\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}\right)^{q}
\mathbb{P}(\Omega(f,\lambda))^{1-\frac{q}{r}} \notag\\
&\leq
\left(\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}+\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}\right)^{r}
\lambda^{-(r-q)}.
\end{align}
We choose $\lambda:=(\mathbb{E}[|X-\widehat{X}|^{p}]^{\frac{1}{p+1}})^{-1/r}>1$.
Then by \eqref{eq_0_2} and \eqref{eq_0_3}, we have
\begin{align*}
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
&\leq
2^{q-1}\lambda^{q}
\left(
2^{p+1}K_{0}^{p}
+
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\int_{{\mathbb R}^{d}}|Df|
\right)
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1}{p+1}}\\
&\hspace{0.35cm}
+
\left(
\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}
+
\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}
\right)^{r}
\lambda^{-(r-q)} \\
&=C_{BV}(p,q,r)
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{p}
\right]^{\frac{1-q/r}{p+1}},
\end{align*}
which concludes the estimate \eqref{main_0_1} in the case of $r \in (1,\infty)$ and ${\mathbb E}[|X-\widehat{X}|^{p}]<1$.
The case of $r \in (1,\infty)$ and ${\mathbb E}[|X-\widehat{X}|^{p}] \geq 1$ has already been shown in Remark \ref{rem:2.12} (iv).
\end{proof}
\subsubsection*{Orlicz--Sobolev spaces and Sobolev spaces with variable exponents}
For a function $f$ in $BV_{\mathrm{loc}}(\mathbb{R}^{d})$ or $W^{1,1}_{\mathrm{loc}}(\mathbb{R}^{d})$, $\int_{\mathbb{R}^{d}}|Df|$ might not be finite, and thus it is difficult to estimate the probability $\mathbb{P}(M(Df)(X)>\lambda)$.
Therefore, we now consider Avikainen's estimates for several subspaces of $W^{1,1}_{\mathrm{loc}}(\mathbb{R}^{d})$.
We first consider the case of the Orlicz--Sobolev space.
\begin{Thm}\label{main_1}
Let $\Phi$ be an N-function and $\Psi$ be its complementary function.
Suppose that $\Psi$ satisfies the $\Delta_{2}$-condition.
Let $X, \widehat{X}:\Omega \to \mathbb{R}^{d}$ be random variables which admit density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively, and let $r \in (1,\infty]$.
Suppose that $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d}) $ or $p_{X}, p_{\widehat{X}} \in L^{\Psi}(\mathbb{R}^{d})$.
Then for any $f \in W^{1,\Phi}(\mathbb{R}^{d}) \cap L^{r}(\mathbb{R}^{d},p_{X}) \cap L^{r}(\mathbb{R}^{d},p_{\widehat{X}})$ and $q \in (0,r)$, it holds that
\begin{align}\label{main_1_1}
&\mathbb{E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right] \notag \\
&\leq
\left\{
\begin{array}{ll}
\displaystyle
C_{W^{1,\Phi}}(q,r,\infty)
\inf_{\lambda>0}
\left\{
\lambda^{-(1-\frac{q}{r})}
+
(\Phi^{-1}(\lambda))^{q}
{\mathbb E}
\left[
\left|
X-\widehat{X}
\right|^{q}
\right]
\right\},
&\text{if} \quad p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d}), \\
\displaystyle
C_{W^{1,\Phi}}(q,r,\Psi)
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right]^{\frac{1-q/r}{q+1-q/r}},
&\text{if} \quad p_{X}, p_{\widehat{X}} \in L^{\Psi}(\mathbb{R}^{d}),
\end{array}\right.
\end{align}
where the constants $C_{W^{1,\Phi}}(q,r,\infty)$ and $C_{W^{1,\Phi}}(q,r,\Psi)$ are defined by
\begin{align*}
&C_{W^{1,\Phi}}(q,r,\infty)
\\&:=
\max\left\{
\left(
\frac{2K_{0}}{\alpha}
\right)^{q},
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\|\Phi(\alpha|Df|)\|_{L^{1}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}
\right\}, \\
&C_{W^{1,\Phi}}(q,r,\Psi)
\\&:=
(2K_{0})^{q}
+
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
2A_{\Phi}
\{
\|p_{X}\|_{L^{\Psi}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{\Psi}({\mathbb R}^{d})}
\}
\| |Df| \|_{L^{\Phi}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}.
\end{align*}
Here, $K_{0}$, $A_{1}$ and $A_{\Phi}$ are the constants of the pointwise estimate \eqref{Lem_key_0_1} in Lemma \ref{Lem_key_0}, of the Hardy--Littlewood maximal weak and strong type estimates in Lemma \ref{Lem_key_2} (i) and Lemma \ref{Lem_key_3}, respectively, and $\alpha$ is any positive constant such that $\|\Phi(\alpha|Df|)\|_{L^{1}({\mathbb R}^{d})}<\infty$.
\end{Thm}
\begin{Rem}\label{rem:2.13}
\begin{itemize}
\item[(i)]
In the right hand side of the estimates \eqref{main_1_1}, the power inside the expectation is $q$, not an arbitrary $p \in (0,\infty)$, unlike in the case of $BV(\mathbb{R}^{d})$ (see, Theorem \ref{main_0}).
The reason is that we do not know whether the indicator function ${\bf 1}_{E_{t}}$, $E_{t}:=\{x \in \mathbb{R}^{d}~;~f(x) > t\}$, belongs to $BV(\mathbb{R}^{d})$ or $W^{1,\Phi}(\mathbb{R}^{d})$, and thus we cannot apply the trick \eqref{Lem_key_1_1} for replacing the power $q$ by $p$.
\item[(ii)]
Let $\Phi$ be a Young function.
Since $W^{1,\Phi}(\mathbb{R}^{d}) \subset BV_{\mathrm{loc}}(\mathbb{R}^{d})$ (see, Remark \ref{Rem_Orlicz_1} (iv)), the pointwise estimates in Lemma \ref{Lem_key_0} and Remark \ref{Rem_0} hold for $f \in W^{1,\Phi}({\mathbb R}^{d})$.
Moreover, by using Jensen's inequality for the convex function $\Phi$, for $\mathrm{Leb}$-a.e. $x,y \in {\mathbb R}^{d}$,
\begin{align}\label{eq:21}
|f(x)-f(y)|
\leq
K_{1}|x-y|
\left\{
\Phi^{-1}(M_{2|x-y|}(\Phi(|Df|))(x))
+
\Phi^{-1}(M_{2|x-y|}(\Phi(|Df|))(y))
\right\}.
\end{align}
Theorem 1.2 in \cite{Tu07} shows that functions $f \in W^{1,\Phi}(\mathbb{R}^{d})$ can be characterized by the estimate \eqref{eq:21}.
\item[(iii)]
If $r \in (1,\infty)$, then the constants $C_{W^{1,\Phi}}(q,r,\infty)$ and $C_{W^{1,\Phi}}(q,r,\Psi)$ depend on both densities $p_{X}$ and $p_{\widehat{X}}$ or expectations $\mathbb{E}[|f(X)|^{r}]$ and $\mathbb{E}[|f(\widehat{X})|^{r}]$ (see, Remark \ref{rem:2.12} (v)).
\end{itemize}
\end{Rem}
As a consequence of Theorem \ref{main_0} and Theorem \ref{main_1}, together with Example \ref{Ex_Orlicz_0} (i), we obtain the following estimates for the Sobolev space $W^{1,p}(\mathbb{R}^{d})$ for $p \in [1,\infty)$.
\begin{Cor}\label{Cor_1}
Let $X, \widehat{X}:\Omega \to \mathbb{R}^{d}$ be random variables which admit density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively, and let $r \in (1,\infty]$, $p \in [1,\infty)$ and $p^{*}:=p/(p-1)$.
Suppose that $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$ or $p_{X},p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d})$.
Then for any $f \in W^{1,p}(\mathbb{R}^{d}) \cap L^{r}(\mathbb{R}^{d},p_{X}) \cap L^{r}(\mathbb{R}^{d},p_{\widehat{X}})$ and $q \in (0,r)$, there exist $C_{W^{1,p}}(q,r,\infty)>0$ and $C_{W^{1,p}}(q,r,p^{*})>0$ such that
\begin{align*}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
&\leq
\left\{ \begin{array}{ll}
\displaystyle
C_{W^{1,p}}(q,r,\infty)
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right]^{\frac{p(1-q/r)}{q+p(1-q/r)}},
&\text{if} \quad p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
C_{W^{1,p}}(q,r,p^{*})
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right]^{\frac{1-q/r}{q+1-q/r}},
&\text{if} \quad p_{X}, p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d}).
\end{array}\right.
\end{align*}
\end{Cor}
\begin{proof}
It is sufficient to consider $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$: for $p \in (1,\infty)$, the case $p_{X}, p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d})$ follows from Theorem \ref{main_1}, since the complementary function of $\Phi(x):=x^{p}/p$ is $\Psi(x)=x^{p^{*}}/p^{*}$, so that $L^{\Psi}(\mathbb{R}^{d})=L^{p^{*}}(\mathbb{R}^{d})$.
Let $\Phi(x):=x^{p}/p$, $x \in [0,\infty)$.
Then the Orlicz--Sobolev space $W^{1,\Phi}(\mathbb{R}^{d})$ coincides with the classical Sobolev space $W^{1,p}(\mathbb{R}^{d})$ (see, Example \ref{Ex_Orlicz_0} (i)).
Since $\Phi^{-1}(x)=p^{1/p} x^{1/p}$, the infimum of the right hand side of \eqref{main_1_1} is bounded from above by
\begin{align*}
\lambda^{-(1-\frac{q}{r})}
+
(p \lambda)^{q/p}
{\mathbb E}
\left[
\left|
X-\widehat{X}
\right|^{q}
\right]
\end{align*}
for any $\lambda >0$.
By choosing $\lambda:=\mathbb{E}[|X-\widehat{X}|^{q}]^{-\frac{1}{1+q/p-q/r}}$, we conclude the statement.
\end{proof}
\begin{Rem}
Let $f \in W^{1,p}(\mathbb{R}^{d})$ with $d<p<\infty$.
Then by using Morrey's inequality (see, Theorem 4.10 in \cite{EvGa92}) and the Lebesgue differentiation theorem (see, Theorem 1.32 in \cite{EvGa92}), there exists a $(1-d/p)$-H\"older continuous function $f^{*}$ such that $f=f^{*}$, $\mathrm{Leb}$-a.e.
Hence from Jensen's inequality, we have
\begin{align*}
\mathbb{E}\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
\leq
\|f^{*}\|_{1-d/p}^{q}
\mathbb{E}\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right]^{1-\frac{d}{p}},
\end{align*}
where $\|f^{*}\|_{\alpha}:=\sup_{x \neq y}
\frac{|f^{*}(x)-f^{*}(y)|}{|x-y|^{\alpha}}$ for $\alpha \in (0,1]$.
On the other hand, if $\mathbb{E}[|X-\widehat{X}|^{q}]<1$ and $q \in (0,d/(p-d))$, that is, $1-d/p<1/(q+1)$, then Corollary \ref{Cor_1} with $p_{X}, p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d})$ and $r= \infty$ is sharper than the above estimate.
\end{Rem}
\begin{proof}[Proof of Theorem \ref{main_1}]
We first assume that $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$.
Since $|Df| \in L^{\Phi}({\mathbb R}^{d})$, there exists $\alpha>0$ such that $\|\Phi(\alpha|Df|)\|_{L^{1}({\mathbb R}^{d})}<\infty$.
Then for $\lambda>0$, we define the event $\Omega(\Phi(\alpha|Df|),\lambda) \in {\mathscr F}$ by
\begin{align*}
\Omega(\Phi(\alpha|Df|),\lambda)
:=
\left\{
M(\Phi(\alpha|Df|))(X)>\lambda
\right\}
\cup
\left\{
M(\Phi(\alpha|Df|))(\widehat{X})>\lambda
\right\}.
\end{align*}
Since $X$ and $\widehat{X}$ have bounded density functions, by using Lemma \ref{Lem_key_2} (i), we obtain
\begin{align*}
\mathbb{P}(\Omega(\Phi(\alpha|Df|),\lambda))
&\leq
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\|\Phi(\alpha|Df|)\|_{L^{1}({\mathbb R}^{d})}
\lambda^{-1}.
\end{align*}
Hence by using H\"older's inequality with $\frac{1}{r/q}+\frac{1}{r/(r-q)}=1$ in the case of $r \in (1,\infty)$ and by using the boundedness of $f$ in the case of $r=\infty$, we have
\begin{align}\label{eq:14}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(\Phi(\alpha|Df|),\lambda)}
\right] \notag \\ \notag
&\leq
\left(
\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}
+
\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}
\right)^{q}
{\mathbb P}(\Omega(\Phi(\alpha|Df|),\lambda))^{1-\frac{q}{r}}\\
&\leq
\left(
\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}
+
\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\|\Phi(\alpha|Df|)\|_{L^{1}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}
\lambda^{-(1-\frac{q}{r})}.
\end{align}
Let $N \in {\mathscr B}({\mathbb R}^{d})$ be the Lebesgue null set provided by Lemma \ref{Lem_key_0}.
On the event $\Omega(\Phi(\alpha|Df|),\lambda)^{{\rm c}}$, since $X$ and $\widehat{X}$ have density functions and $\Phi^{-1}$ is non-decreasing, in a similar way to \eqref{eq:21} in Remark \ref{rem:2.13} (ii), we obtain
\begin{align}\label{eq:15}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(\Phi(\alpha|Df|),\lambda)^{{\rm c}}}
\right] \notag \\
&=
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(\Phi(\alpha|Df|),\lambda)^{{\rm c}}}
{\bf 1}_{\mathbb{R}^{d} \setminus N}(X)
{\bf 1}_{\mathbb{R}^{d} \setminus N}(\widehat{X})
\right]\notag\\
&\leq
\left(\frac{K_{0}}{\alpha}\right)^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{q}
\left\{
\Phi^{-1}(M(\Phi(\alpha|Df|))(X))
+
\Phi^{-1}(M(\Phi(\alpha|Df|))(\widehat{X}))
\right\}^{q}
{\bf 1}_{\Omega(\Phi(\alpha|Df|),\lambda)^{{\rm c}}}
\right]\notag\\
&\leq
\left(\frac{2K_{0}}{\alpha}\right)^{q}
(\Phi^{-1}(\lambda))^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right].
\end{align}
Hence, by \eqref{eq:14} and \eqref{eq:15}, we have
\begin{align*}
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
\leq
C_{W^{1,\Phi}}(q,r,\infty)
\left(
\lambda^{-1+\frac{q}{r}}
+
(\Phi^{-1}(\lambda))^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right]
\right),
\end{align*}
which concludes the statement for $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$.
Now we suppose $p_{X}, p_{\widehat{X}} \in L^{\Psi}(\mathbb{R}^{d})$.
For $\lambda>0$, we define the event $\Omega(Df,\lambda) \in \mathscr{F}$ by
\begin{align*}
\Omega(Df,\lambda)
:=
\left\{
M(Df)(X)>\lambda
\right\}
\cup
\left\{
M(Df)(\widehat{X})>\lambda
\right\}.
\end{align*}
Since $\Psi$ satisfies the $\Delta_{2}$-condition, it follows from Lemma \ref{Lem_key_3} that
\begin{align*}
\|M(Df)\|_{L^{\Phi}({\mathbb R}^{d})}
\leq
A_{\Phi}
\||Df|\|_{L^{\Phi}({\mathbb R}^{d})}.
\end{align*}
Hence by using the Markov inequality and the generalized H\"older's inequality \eqref{eq_GHolder}, we obtain
\begin{align*}
{\mathbb P}(\Omega(Df,\lambda))
&\leq
\int_{{\mathbb R}^{d}}
M(Df)(x)
\{
p_{X}(x)
+
p_{\widehat{X}}(x)
\}
\mathrm{d} x
\lambda^{-1}\\
&\leq
2
\|M(Df)\|_{L^{\Phi}({\mathbb R}^{d})}
\{
\|p_{X}\|_{L^{\Psi}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{\Psi}({\mathbb R}^{d})}
\}
\lambda^{-1}\\
&\leq
2A_{\Phi}
\||Df|\|_{L^{\Phi}({\mathbb R}^{d})}
\{
\|p_{X}\|_{L^{\Psi}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{\Psi}({\mathbb R}^{d})}
\}
\lambda^{-1}.
\end{align*}
Hence by using H\"older's inequality with $\frac{1}{r/q}+\frac{1}{r/(r-q)}=1$ in the case of $r \in (1,\infty)$ and by using the boundedness of $f$ in the case of $r=\infty$, we have
\begin{align}\label{eq:25}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(Df,\lambda)}
\right] \notag \\
&\leq
\left(
\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}
+
\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}
\right)^{q}
{\mathbb P}(\Omega(Df,\lambda))^{1-\frac{q}{r}}, \notag \\
&\leq
\left(\|f\|_{L^{r}({\mathbb R}^{d},p_{X})}+\|f\|_{L^{r}({\mathbb R}^{d},p_{\widehat{X}})}\right)^{q}
\left(
2
A_{\Phi}
\{
\|p_{X}\|_{L^{\Psi}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{\Psi}({\mathbb R}^{d})}
\}
\||Df|\|_{L^{\Phi}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}
\lambda^{-(1-\frac{q}{r})}.
\end{align}
On the event $\Omega(Df,\lambda)^{{\rm c}}$, since $X$ and $\widehat{X}$ have density functions, by Lemma \ref{Lem_key_0} and Remark \ref{Rem_Orlicz_1} (iv), we obtain
\begin{align}\label{eq:23}
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(Df,\lambda)^{{\rm c}}}
\right]
&=
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(Df,\lambda)^{{\rm c}}}
{\bf 1}_{\mathbb{R}^{d} \setminus N}(X)
{\bf 1}_{\mathbb{R}^{d} \setminus N}(\widehat{X})
\right]
\notag\\
&\leq
K_{0}^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{q}
\{
M(Df)(X)
+
M(Df)(\widehat{X})
\}^{q}
{\bf 1}_{\Omega(Df,\lambda)^{{\rm c}}}
\right]\notag\\
&\leq
(2K_{0})^{q}
\lambda^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right].
\end{align}
We choose $\lambda:=\mathbb{E}[|X-\widehat{X}|^{q}]^{-\frac{1}{q+1}}$ in the case of $r=\infty$ and $\lambda:=\mathbb{E}[|X-\widehat{X}|^{q}]^{-\frac{1}{q+1-q/r}}$ in the case of $r \in (1,\infty)$, and then we conclude the statement from \eqref{eq:25} and \eqref{eq:23}.
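Indeed, this choice balances the two contributions: in the case $r=\infty$, with $\lambda=\mathbb{E}[|X-\widehat{X}|^{q}]^{-\frac{1}{q+1}}$ the right-hand sides of \eqref{eq:25} and \eqref{eq:23} are both of the order
\begin{align*}
\lambda^{-1}
=
\lambda^{q}\,
\mathbb{E}[|X-\widehat{X}|^{q}]
=
\mathbb{E}[|X-\widehat{X}|^{q}]^{\frac{1}{q+1}},
\end{align*}
and in the case $r \in (1,\infty)$ the exponents $-(1-q/r)$ and $q$ balance in the same way, which yields the rate $\mathbb{E}[|X-\widehat{X}|^{q}]^{\frac{1-q/r}{q+1-q/r}}$.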
\end{proof}
We next consider the case of the Sobolev space with a variable exponent.
\begin{Thm}\label{main_2}
Let $p \in {\mathcal P}^{\log}({\mathbb R}^{d})$ with $1<p^{-}$ and $p^{*}(\cdot):=p(\cdot)/(p(\cdot)-1)$.
Let $X, \widehat{X}:\Omega \to \mathbb{R}^{d}$ be random variables which admit density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively, and let $r \in (1,\infty]$.
Suppose that $p_{X}, p_{\widehat{X}} \in L^{p^{*}(\cdot)}({\mathbb R}^{d})$.
Then for any $f \in W^{1,p(\cdot)}(\mathbb{R}^{d}) \cap L^{r}(\mathbb{R}^{d},p_{X}) \cap L^{r}(\mathbb{R}^{d},p_{\widehat{X}})$ and $q \in (0,r)$, it holds that
\begin{align*}
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
&\leq
C_{W^{1,p(\cdot)}}(q,r,p^{*}(\cdot))
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{q}
\right]^{\frac{1-q/r}{q+1-q/r}},
\end{align*}
where the constant $C_{W^{1,p(\cdot)}}(q,r,p^{*}(\cdot))$ is defined by
\begin{align*}
&C_{W^{1,p(\cdot)}}(q,r,p^{*}(\cdot))
\\&:=
(2K_{0})^{q}
+
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
2A_{p(\cdot)}
\{
\|p_{X}\|_{L^{p^{*}(\cdot)}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{p^{*}(\cdot)}({\mathbb R}^{d})}
\}
\| |Df| \|_{L^{p(\cdot)}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}.
\end{align*}
Here, $K_{0}$ and $A_{p(\cdot)}$ are the constants of the pointwise estimate \eqref{Lem_key_0_1} in Lemma \ref{Lem_key_0} and of the Hardy--Littlewood maximal strong estimate in Lemma \ref{lem:0.3}, respectively.
\end{Thm}
\begin{proof}
We can use the Hardy--Littlewood maximal strong type estimate in Lemma \ref{lem:0.3} since $p \in \mathcal{P}^{\log}(\mathbb{R}^{d})$.
Moreover, it holds that $W^{1,p(\cdot)}({\mathbb R}^{d}) \subset BV_{{\rm loc}}({\mathbb R}^{d})$ (see, Remark \ref{Rem_Orlicz_1} (iv) and Remark \ref{Rem_Sob_expo} (iv)).
Therefore, in the same way as in the proof of Theorem \ref{main_1}, we can prove the statement by using the generalized H\"older's inequality \eqref{eq_GHolder_1} for $M(Df) \in L^{p(\cdot)}({\mathbb R}^{d})$ and $p_{X}, p_{\widehat{X}} \in L^{p^{*}(\cdot)}({\mathbb R}^{d})$; the details are omitted.
\end{proof}
\begin{Rem}
For the Sobolev space with a variable exponent $p$, it is difficult to obtain Avikainen's estimate in the case of $p_{X}, p_{\widehat{X}} \in L^{\infty}({\mathbb R}^{d})$.
The reason is that, since the variable exponent $p$ is not constant, we cannot use Jensen's inequality in the same way as in the estimate \eqref{eq:15}.
\end{Rem}
\subsubsection*{Fractional Sobolev spaces}
We finally consider Avikainen's estimates for fractional Sobolev spaces.
\begin{Thm}\label{main_3}
Let $s \in (0,1)$, $p \in [1,\infty)$ and $p^{*}:=p/(p-1)$.
Let $X, \widehat{X}:\Omega \to \mathbb{R}^{d}$ be random variables which admit the density functions $p_{X}$ and $p_{\widehat{X}}$ with respect to Lebesgue measure, respectively, and let $r \in (1,\infty]$.
Suppose that $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$ or $p_{X}, p_{\widehat{X}} \in L^{p^{*}}({\mathbb R}^{d})$.
Then for any $f \in W^{s,p}(\mathbb{R}^{d}) \cap L^{r}(\mathbb{R}^{d},p_{X}) \cap L^{r}(\mathbb{R}^{d},p_{\widehat{X}})$ and $q \in (0,r)$, it holds that
\begin{align*}
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
\right]
\leq
\left\{ \begin{array}{ll}
\displaystyle
C_{W^{s,p}}(q,r,\infty)
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\right]^{\frac{p(1-q/r)}{q+p(1-q/r)}},
&\text{if} \quad p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d}), \\
\displaystyle
C_{W^{s,p}}(q,r,p^{*})
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\right]^{\frac{1-q/r}{q+1-q/r}},
&\text{if} \quad p_{X}, p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d}),
\end{array}\right.
\end{align*}
where
\begin{align*}
&C_{W^{s,p}}(q,r,\infty)
\\&:=
(2K_{0}(s,p))^{q}
+
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\||G_{s,p}f|^{p}\|_{L^{1}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}, \\
&C_{W^{s,p}}(q,r,p^{*})
\\&:=
(2K_{0}(s,p))^{q}
+
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
A_{p}
\{
\|p_{X}\|_{L^{p^{*}}(\mathbb{R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{p^{*}}(\mathbb{R}^{d})}
\}
\|G_{s,p}f\|_{L^{p}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}.
\end{align*}
Here, $K_{0}(s,p)$, $A_{1}$ and $A_{p}$ are the constants of the pointwise estimate \eqref{eq:28} in Lemma \ref{lem:0.5} and of the Hardy--Littlewood maximal weak and strong type estimates in Lemma \ref{Lem_key_2} (i) and (ii), respectively.
\end{Thm}
Before proving Theorem \ref{main_3}, we give a pointwise estimate for functions in $W^{s,p}(\mathbb{R}^{d})$, which plays a crucial role in our argument.
\begin{Lem}\label{lem:0.5}
Let $s \in (0,1)$, $p \in [1,\infty)$ and $f \in W^{s,p}({\mathbb R}^{d})$.
Then there exist a constant $K_{0}(s,p)>0$ and a Lebesgue null set $N \in {\mathscr B}({\mathbb R}^{d})$ such that for all $x,y \in {\mathbb R}^{d} \setminus N$,
\begin{align}
\label{eq:28}
\left|
f(x)
-
f(y)
\right|
\leq
K_{0}(s,p)
|x-y|^{s}
\left\{
M_{2|x-y|}(G_{s,p}f)(x)
+
M_{2|x-y|}(G_{s,p}f)(y)
\right\}.
\end{align}
\end{Lem}
\begin{Rem}
Note that Yang \cite{Ya03} introduced the Haj\l{}asz--Sobolev space $W^{s,p}(X)$ on a metric measure space $X$ of homogeneous type by using a pointwise estimate similar to \eqref{eq:28} (see, Definition 1.4 in \cite{Ya03}).
\end{Rem}
\begin{proof}[Proof of Lemma \ref{lem:0.5}]
The proof is similar to that of Lemma \ref{Lem_key_0}.
By using Jensen's inequality, for any $x \in \mathbb{R}^{d}$ and $r>0$,
\begin{align}\label{eq:0.1}
\Xint-_{B(x;r)}
\left|
f(z)
-
(f)_{x,r}
\right|
{\rm d}z
&\leq
\Xint-_{B(x;r)}
\left(
\Xint-_{B(x;r)}
\left|
f(z)
-
f(y)
\right|^{p}
{\rm d}y
\right)^{1/p}
{\rm d}z \notag\\
&\leq
(2r)^{(d+sp)/p}
\Xint-_{B(x;r)}
\left(
\Xint-_{B(x;r)}
\frac
{|f(z)-f(y)|^{p}}
{|z-y|^{d+sp}}
{\rm d}y
\right)^{1/p}
{\rm d}z \notag\\
&\leq
C_{0}(s,p)
r^{s}
\Xint-_{B(x;r)}
G_{s,p}f(z)
\mathrm{d} z \notag\\
&\leq
C_{0}(s,p)
r^{s}
M_{r}(G_{s,p}f)(x),
\end{align}
where $C_{0}(s,p):=2^{(d+sp)/p}(\frac{\Gamma(d/2+1)}{\pi^{d/2}})^{1/p}$.
Let $N \in {\mathscr B}({\mathbb R}^{d})$ be the Lebesgue null set defined in \eqref{Lem_key_0_3}.
Then, in the same way as in the proof of Lemma \ref{Lem_key_0}, for fixed $x, y \in \mathbb{R}^{d} \setminus N$ and $r_{i}:=2^{-i}|x-y|$, $i \in \mathbb{N} \cup \{0\}$, we obtain
\begin{align}\label{eq:0.2}
\left|
f(x)
-
(f)_{x,r_{0}}
\right|
&\leq
2^{d}
\sum_{i=0}^{\infty}
\Xint-_{B(x;r_{i})}
\left|
f(z)
-
(f)_{x,r_{i}}
\right|
{\rm d}z \notag\\
&\leq
2^{d}
C_{0}(s,p)
M_{|x-y|}(G_{s,p}f)(x)
\sum_{i=0}^{\infty}
r_{i}^{s}
\notag\\
&=
\frac{2^{s+d}}{2^{s}-1}
C_{0}(s,p)
M_{|x-y|}(G_{s,p}f)(x)
|x-y|^{s}.
\end{align}
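Here we used \eqref{eq:0.1} with $r=r_{i}$, the monotonicity of $r \mapsto M_{r}$ together with $r_{i} \leq |x-y|$, and the geometric sum $\sum_{i=0}^{\infty} r_{i}^{s} = |x-y|^{s}\sum_{i=0}^{\infty}2^{-is} = \frac{2^{s}}{2^{s}-1}|x-y|^{s}$.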
In the same way, we have
\begin{align}\label{eq:0.22}
\left|
f(y)
-
(f)_{y,r_{0}}
\right|
&\leq
\frac{2^{s+d}}{2^{s}-1}
C_{0}(s,p)
M_{|x-y|}(G_{s,p}f)(y)
|x-y|^{s}.
\end{align}
On the other hand, in the same way as in the proof of Lemma \ref{Lem_key_0}, it follows from \eqref{eq:0.1} that
\begin{align}\label{eq:0.23}
|(f)_{x,r_{0}}-(f)_{y,r_{0}}|
&\leq
2^{d+1}
\Xint-_{B(x;2r_{0})}
|f(z)-(f)_{x,2r_{0}}|
\mathrm{d} z \notag\\
&\leq
2^{s+d+1}
C_{0}(s,p)
|x-y|^{s}
M_{2|x-y|}(G_{s,p}f)(x).
\end{align}
By combining \eqref{eq:0.2}, \eqref{eq:0.22} and \eqref{eq:0.23}, we conclude the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main_3}]
We first assume $p_{X},p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$.
For $\lambda>0$, we define the event $\Omega(|G_{s,p}f|^{p},\lambda) \in \mathscr{F}$ by
\begin{align*}
\Omega(|G_{s,p}f|^{p},\lambda)
:=
\left\{
M(|G_{s,p}f|^{p})(X)
>
\lambda
\right\}
\cup
\left\{
M(|G_{s,p}f|^{p})(\widehat{X})
>
\lambda
\right\}.
\end{align*}
Since $|G_{s,p}f|^{p} \in L^{1}({\mathbb R}^{d})$, by using Lemma \ref{Lem_key_2} (i), we obtain
\begin{align*}
\mathbb{P}(\Omega(|G_{s,p}f|^{p},\lambda))
\leq
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\||G_{s,p}f|^{p}\|_{L^{1}(\mathbb{R}^{d})}
\lambda^{-1}.
\end{align*}
Hence by using H\"older's inequality with $\frac{1}{r/q}+\frac{1}{r/(r-q)}=1$ in the case of $r \in (1,\infty)$ and by using the boundedness of $f$ in the case of $r=\infty$, we have
\begin{align}\label{eq:18}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(|G_{s,p}f|^{p},\lambda)}
\right] \notag\\
&\leq
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
{\mathbb P}(\Omega(|G_{s,p}f|^{p},\lambda))^{1-\frac{q}{r}} \notag \\
&\leq
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
A_{1}
\{
\|p_{X}\|_{\infty}
+
\|p_{\widehat{X}}\|_{\infty}
\}
\||G_{s,p}f|^{p}\|_{L^{1}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}
\lambda^{-(1-\frac{q}{r})}.
\end{align}
Let $N \in {\mathscr B}({\mathbb R}^{d})$ be the Lebesgue null set defined in Lemma \ref{lem:0.5}.
On the event $\Omega(|G_{s,p}f|^{p},\lambda)^{{\rm c}}$, since $X$ and $\widehat{X}$ have density functions, by Lemma \ref{lem:0.5} and Jensen's inequality, we obtain
\begin{align}
\label{eq:35}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(|G_{s,p}f|^{p},\lambda)^{{\rm c}}}
\right] \notag
=
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(|G_{s,p}f|^{p},\lambda)^{{\rm c}}}
{\bf 1}_{\mathbb{R}^{d}\setminus N}(X){\bf 1}_{\mathbb{R}^{d}\setminus N}(\widehat{X})
\right] \notag \\
&\leq
K_{0}(s,p)^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\{
M(G_{s,p}f)(X)
+
M(G_{s,p}f)(\widehat{X})
\}^{q}
{\bf 1}_{\Omega(|G_{s,p}f|^{p},\lambda)^{{\rm c}}}
\right]
\notag \\
&\leq
K_{0}(s,p)^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\{
M(|G_{s,p}f|^{p})(X)^{\frac{1}{p}}
+
M(|G_{s,p}f|^{p})(\widehat{X})^{\frac{1}{p}}
\}^{q}
{\bf 1}_{\Omega(|G_{s,p}f|^{p},\lambda)^{{\rm c}}}
\right]
\notag \\
&\leq
(2K_{0}(s,p))^{q}
\lambda^{\frac{q}{p}}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\right].
\end{align}
By choosing $\lambda:={\mathbb E}[|X-\widehat{X}|^{qs}]^{-\frac{1}{q/p+1-q/r}}$, we conclude the statement for $p_{X}, p_{\widehat{X}} \in L^{\infty}(\mathbb{R}^{d})$ from \eqref{eq:18} and \eqref{eq:35}.
Now we suppose $p_{X}, p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d})$.
For $\lambda>0$, we define the event $\Omega(G_{s,p}f, \lambda) \in \mathscr{F}$ by
\begin{align*}
\Omega(G_{s,p}f, \lambda)
:=
\left\{
M(G_{s,p}f)(X)>\lambda
\right\}
\cup
\left\{
M(G_{s,p}f)(\widehat{X})>\lambda
\right\}.
\end{align*}
By using the Markov inequality, H\"older's inequality and Lemma \ref{Lem_key_2} (ii), we obtain
\begin{align*}
{\mathbb P}(\Omega(G_{s,p}f, \lambda))
&\leq
\int_{{\mathbb R}^{d}}
M(G_{s,p}f)(x)
\{
p_{X}(x)
+
p_{\widehat{X}}(x)
\}
\mathrm{d} x
\lambda^{-1}\\
&\leq
\{
\|p_{X}\|_{L^{p^{*}}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{p^{*}}({\mathbb R}^{d})}
\}
\|M(G_{s,p}f)\|_{L^{p}({\mathbb R}^{d})}
\lambda^{-1}\\
&\leq
A_{p}
\{
\|p_{X}\|_{L^{p^{*}}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{p^{*}}({\mathbb R}^{d})}
\}
\|G_{s,p}f\|_{L^{p}({\mathbb R}^{d})}
\lambda^{-1}.
\end{align*}
Hence by using H\"older's inequality with $\frac{1}{r/q}+\frac{1}{r/(r-q)}=1$ in the case of $r \in (1,\infty)$ and by using the boundedness of $f$ in the case of $r=\infty$, we have
\begin{align}\label{eq:19}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(G_{s,p}f, \lambda)}
\right] \notag \\
&\leq \left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}{\mathbb P}(\Omega(G_{s,p}f,\lambda))^{1-\frac{q}{r}} \notag \\
&\leq
\left(
\|f\|_{L^{r}(\mathbb{R}^{d},p_{X})}
+
\|f\|_{L^{r}(\mathbb{R}^{d},p_{\widehat{X}})}
\right)^{q}
\left(
A_{p}
\{
\|p_{X}\|_{L^{p^{*}}({\mathbb R}^{d})}
+
\|p_{\widehat{X}}\|_{L^{p^{*}}({\mathbb R}^{d})}
\}
\|G_{s,p}f\|_{L^{p}({\mathbb R}^{d})}
\right)^{1-\frac{q}{r}}
\lambda^{-(1-\frac{q}{r})}.
\end{align}
On the event $\Omega(G_{s,p}f, \lambda)^{{\rm c}}$, since $X$ and $\widehat{X}$ have density functions, by Lemma \ref{lem:0.5}, we obtain
\begin{align}\label{eq:20}
&{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(G_{s,p}f, \lambda)^{{\rm c}}}
\right]
\notag\\&=
{\mathbb E}
\left[
\left|
f(X)
-
f(\widehat{X})
\right|^{q}
{\bf 1}_{\Omega(G_{s,p}f, \lambda)^{{\rm c}}}
{\bf 1}_{\mathbb{R}^{d} \setminus N}(X)
{\bf 1}_{\mathbb{R}^{d} \setminus N}(\widehat{X})
\right]
\notag\\
&\leq
K_{0}(s,p)^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\{
M(G_{s,p}f)(X)
+
M(G_{s,p}f)(\widehat{X})
\}^{q}
{\bf 1}_{\Omega(G_{s,p}f, \lambda)^{{\rm c}}}
\right]\notag\\
&\leq
(2K_{0}(s,p))^{q}
\lambda^{q}
{\mathbb E}
\left[
\left|
X
-
\widehat{X}
\right|^{qs}
\right].
\end{align}
By choosing $\lambda:=\mathbb{E}[|X-\widehat{X}|^{qs}]^{-\frac{1}{q+1-q/r}}$, we conclude the statement for $p_{X}, p_{\widehat{X}} \in L^{p^{*}}(\mathbb{R}^{d})$ from \eqref{eq:19} and \eqref{eq:20}.
\end{proof}
\section{Applications}\label{sec_3}
In this section, we apply Avikainen's estimates proved in Section \ref{sec_2} to the numerical analysis of irregular functionals of solutions to stochastic differential equations (SDEs), based on the multilevel Monte Carlo method.
\subsection{Upper bound and integrability of density functions}\label{sec_3_1}
In order to apply Avikainen's estimates proved in Section \ref{sec_2}, we need an appropriate upper bound on, or integrability of, the density functions.
In this subsection, we give some examples of random variables with a bounded or integrable density function, which have been studied in various ways.
We first recall a well-known fact which follows from L\'evy's inversion formula.
\begin{Eg}\label{Eg_density_0}
Let $X:\Omega \to \mathbb{R}^{d}$ be a random variable.
If the characteristic function $\varphi_{X}(\xi):=\mathbb{E}[e^{\sqrt{-1}\langle \xi, X\rangle_{\mathbb{R}^{d}}}]$ belongs to $L^{1}(\mathbb{R}^{d})$, then by using L\'evy's inversion formula, $X$ admits a continuous density function $p_{X}$ of the form
\begin{align*}
p_{X}(x)
=
\frac{1}{(2\pi)^{d}}
\int_{\mathbb{R}^{d}}
e^{-\sqrt{-1}\langle x, \xi \rangle_{{\mathbb R}^{d}}}
\varphi_{X}(\xi)
\mathrm{d} \xi
\end{align*}
(see, e.g., Proposition 2.5 in \cite{Sato} or Theorem 16.6 in \cite{Wi91}), and thus $X$ has a bounded density function with $\|p_{X}\|_{\infty} \leq (2\pi)^{-d}\|\varphi_{X}\|_{L^{1}(\mathbb{R}^{d})}$.
\end{Eg}
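As a toy numerical check of this inversion formula (a sketch of ours, not taken from \cite{Sato,Wi91}; $d=1$ and the standard Gaussian characteristic function are assumptions made for the illustration):
\begin{verbatim}
import numpy as np

# characteristic function of N(0,1); it belongs to L^1(R)
phi = lambda xi: np.exp(-xi**2 / 2.0)

xi = np.linspace(-40.0, 40.0, 200001)

def p_X(x):
    # Levy inversion: (2*pi)^{-1} \int exp(-i*x*xi) phi(xi) dxi
    integrand = np.exp(-1j * x * xi) * phi(xi)
    return np.real(np.trapz(integrand, xi)) / (2.0 * np.pi)

print(p_X(0.0))                    # ~0.398942 = 1/sqrt(2*pi)
print(1.0 / np.sqrt(2.0 * np.pi))  # density of N(0,1) at 0
\end{verbatim}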
Next, we recall the Gaussian two-sided bound for density functions of solutions to SDEs driven by a Brownian motion.
\begin{Eg}\label{Eg_GB_1}
Let $B=(B(t))_{t \in [0,T]}$ be a $d$-dimensional standard Brownian motion,
and let $X=(X(t))_{t \in [0,T]}$ be a solution to the following $d$-dimensional Markovian SDE of the form
\begin{align}\label{SDE_0}
\mathrm{d} X(t)
=
b(t,X(t)) \mathrm{d} t
+
\sigma(t,X(t))\mathrm{d} B(t),
~X(0)=x \in \mathbb{R}^{d},
~t\in [0,T],
\end{align}
and let $X^{(n)}=(X^{(n)}(t))_{t \in [0,T]}$ be the Euler--Maruyama scheme for SDE \eqref{SDE_0} with time step $T/n$, which is defined by
\begin{align*}
\mathrm{d} X^{(n)}(t)
=
b(\eta_{n}(t), X^{(n)}(\eta_{n}(t))) \mathrm{d} t
+
\sigma(\eta_{n}(t), X^{(n)}(\eta _{n}(t))) \mathrm{d} B(t),~
X^{(n)}(0)=X(0),~
t \in [0,T],
\end{align*}
where $\eta _{n}(s):=kT/n$ if $s \in [kT/n,(k+1)T/n)$, the drift coefficient $b:[0,T] \times \mathbb{R}^{d} \to \mathbb{R}^{d}$ and the diffusion matrix $\sigma:[0,T] \times \mathbb{R}^{d} \to \mathbb{R}^{d \times d}$ are measurable functions.
Suppose that $b$ is bounded and $\sigma$ satisfies the following two conditions.
\begin{itemize}
\item[(i)]
$a:=\sigma \sigma^{\top}$ is $\alpha$-H\"older continuous in space and $\alpha/2$-H\"older continuous in time for some $\alpha \in (0,1]$, that is,
\begin{align*}
\|a\|_{\alpha}
:=
\sup_{t \in [0,T], x \neq y}
\frac{|a(t,x)-a(t,y)|}{|x-y|^{\alpha}}
+
\sup_{x\in \mathbb{R}^d, t \neq s}
\frac{|a(t,x)-a(s,x)|}{|t-s|^{\alpha/2}}
<\infty.
\end{align*}
\item[(ii)]
The diffusion coefficient $\sigma$ is bounded and uniformly elliptic, that is, there exist $\underline{a}, \overline{a}>0$ such that for any $(t,x,\xi) \in [0,T] \times \mathbb{R}^d \times \mathbb{R}^d$, $\underline{a}|\xi|^2 \leq \langle a(t,x) \xi,\xi \rangle_{{\mathbb R}^{d}} \leq \overline{a} |\xi|^2$.
\end{itemize}
Then there exists a weak solution of SDE \eqref{SDE_0} and uniqueness in law holds (see, Theorem 4.2, 5.6 in \cite{StVa69} or Proposition 1.14 in \cite{ChEn}). Moreover, it is well-known that for all $t \in (0,T]$, $X(t)$ admits a density function $p_{t}(x,\cdot)$ with respect to Lebesgue measure (e.g. Theorem 9.1.9 in \cite{StVa79}) which satisfies the Gaussian two-sided bound, that is, there exist $C_{\pm}>0$ and $c_{\pm}>0$ such that for any $(t,x,y) \in (0,T]\times \mathbb{R}^{d} \times \mathbb{R}^{d}$,
\begin{align}\label{GB_1}
C_{-} g_{c_{-}t}(x,y)
&\leq
p_{t}(x,y)
\leq
C_{+} g_{c_{+}t}(x,y).
\end{align}
As references for this two-sided bound, we refer to Theorem 9.4.2 in \cite{Fr64}, Sections 4.1 and 4.2 and Remark 4.1 in \cite{LeMe10}, and \cite{Sh91} for the time-independent case (see also \cite{QiZh02,QiZh03,TaTa18} for a sharp two-sided bound in the case $\sigma \equiv I$).
Therefore, it holds that $p_{t}(x,\cdot) \in L^{\infty}(\mathbb{R}^{d}) \cap L^{\Psi}(\mathbb{R}^{d}) \cap L^{p^{*}(\cdot)}(\mathbb{R}^{d}) $ for the complementary function $\Psi$ of an N-function $\Phi$ (see, Remark \ref{Rem_Orlicz_1} (ii) and Example \ref{Ex_Orlicz_0} (iii)) and $p^{*}(\cdot):=p(\cdot)/(p(\cdot)-1)$ for $p \in \mathcal{P}(\mathbb{R}^{d})$ with $1<p^{-}\leq p^{+} <\infty$.
Moreover, for all $k=1,\ldots,n$, $X^{(n)}(kT/n)$ admits a density function $p_{kT/n}^{(n)}(x,\cdot)$ with respect to Lebesgue measure (e.g. section 2 in \cite{LeMe10}), which satisfies the Gaussian two-sided bound uniformly in $n$, that is, there exist $C_{\pm}>0$ and $c_{\pm}>0$ independent of $n$ such that for any $(k,x,y) \in \{1,\ldots,n\} \times \mathbb{R}^{d} \times \mathbb{R}^{d}$,
\begin{align}\label{GB_2}
C_{-} g_{c_{-}kT/n}(x,y)
&\leq
p_{kT/n}^{(n)}(x,y)
\leq
C_{+} g_{c_{+}kT/n}(x,y)
\end{align}
(see, Theorem 2.1 in \cite{LeMe10}).
Moreover, the same or similar bounds as \eqref{GB_1} hold for SDEs with a path-dependent or an unbounded drift (see, Theorem 2.5 in \cite{Ku17}, Theorem 3.4 in \cite{TaTa18} and Theorem 1.2 in \cite{MePeZh20}), and the Gaussian two-sided bound \eqref{GB_1} holds for Brownian motions with a signed measure valued drift belonging to the Kato class $K_{d,1}$ (see, Theorem 3.14 in \cite{KiSo06}).
It is also well-known that the Gaussian two-sided bound holds for the fundamental solution $\Gamma(s,x;t,y)$ of parabolic equations in the divergence form $(\frac{\partial}{\partial s}+\frac{1}{2}\sum_{i,j=1}^{d} \frac{\partial}{\partial x_{i}}a_{i,j}(x) \frac{\partial}{\partial x_{j}}) u(s,x)=0$ (see, \cite{Ar67}), and there exists a Hunt process with the transition density function $\Gamma$ (see, Example 4.5.2, Theorem A.2.2 in \cite{FOT} and Theorem I 9.4 in \cite{BlGe68}).
On the other hand, Malliavin calculus can be used to study the regularity and upper bounds of density functions.
Indeed, it is known that under H\"ormander's and the smoothness conditions on the coefficients, $X(t)$ admits a bounded and smooth density function (see, e.g., Theorem 6.16 in \cite{Shi04} and see, also \cite{DeMe10,KoMa13} for the Gaussian type estimates for density functions of solutions to degenerate SDEs).
We also note that the Gaussian type two sided bound holds for density functions of solutions to SDEs driven by a fractional Brownian motion (see, \cite{BaNuOuTi16,BeKoTi16}).
\end{Eg}
The next example shows density estimates for path-dependent stochastic differential equations.
\begin{Eg}
Let $B=(B(t))_{t \in [0,T]}$ be a $d$-dimensional standard Brownian motion,
and let $X=(X(t))_{t \in [0,T]}$ be a solution to the following $d$-dimensional path-dependent SDE of the form
\begin{align*}
\mathrm{d} X(t)
=
b(t,X) \mathrm{d} t
+
\sigma(t,X)\mathrm{d} B(t),
~X(0)=x \in \mathbb{R}^{d},
~t\in [0,T],
\end{align*}
where the drift coefficient $b:[0,T] \times C([0,T];\mathbb{R}^d) \to \mathbb{R}^{d}$ and the diffusion matrix $\sigma:[0,T] \times C([0,T];\mathbb{R}^d) \to \mathbb{R}^{d \times d}$ are measurable functions.
\begin{itemize}
\item[(i)]
Suppose that the coefficients $b$ and $\sigma$ are continuous in time, bounded and continuously G\^ateaux differentiable up to order $n+2$ in space, and $\sigma$ is uniformly elliptic.
Then, by using Malliavin calculus, it is shown that for all $t \in (0,T]$, $X(t)$ admits a density function with respect to Lebesgue measure which belongs to $C^{n}_{b}(\mathbb{R}^{d};\mathbb{R})$ (see, \cite{KuSt82}).
\item[(ii)]
Suppose that the coefficients $b$ and $\sigma$ are bounded, $\sigma$ is uniformly elliptic and there exist $\varepsilon>0$ and $C>0$ such that for any $(s,t,\omega) \in [0,T] \times [0,T] \times C([0,T];\mathbb{R}^d)$ with $s<t$,
\begin{align*}
\sup_{1\leq j \leq d}
|\sigma_{j}(t,\omega) -\sigma_j(s,\omega)|
\leq
C
\left\{
\log
\left(
\frac{1}{\sup_{s\leq u \leq t} |\omega_u-\omega_s|}
\right)
\right\}^{-(2+\varepsilon)},
\end{align*}
where $\sigma_{j}:=(\sigma_{1,j},\ldots,\sigma_{d,j})^{\top}$.
Then, by using an interpolation method, it is shown that for all $t \in (0,T]$, $X(t)$ admits a density function $p_{t}(x,\cdot)$ with respect to Lebesgue measure which belongs to $L^{\mathbf{e}_{\log}}(\mathbb{R}^{d})$, where $\mathbf{e}_{\log}(x):=(1+|x|)\log (1+|x|)$ (see, Theorem 3.1 in \cite{BaCa}).
\end{itemize}
\end{Eg}
Finally, we give examples of two-sided bounds for density functions of solutions to SDEs driven by a rotation invariant $\alpha$-stable process.
\begin{Eg}\label{Eg_Levy}
Let $Z=(Z(t))_{t \in [0,T]}$ be a rotation invariant $\alpha$-stable process in $\mathbb{R}^{d}$ with $\alpha \in (0,2)$ and $\mathbb{E}[e^{\sqrt{-1}\langle \xi,Z(t) \rangle}]=e^{-t|\xi|^{\alpha}}$, $\xi \in \mathbb{R}^{d}$ (see Theorem 14.14 in \cite{Sato}), and let $X=(X(t))_{t \in [0,T]}$ be a solution to the following $d$-dimensional SDE of the form
\begin{align}\label{SDE_stable_0}
\mathrm{d} X(t)
=
b(X(t)) \mathrm{d} t
+
\sigma(X(t-))I\mathrm{d} Z(t),~
X(0)=x \in \mathbb{R}^{d},~
t\in [0,T],
\end{align}
where $b:\mathbb{R}^{d} \to \mathbb{R}^{d}$ and $\sigma:\mathbb{R}^{d} \to \mathbb{R}$ are bounded measurable functions.
Suppose that the drift coefficient $b$ is $\gamma$-H\"older continuous with $\gamma \in (0,1]$, and the jump intensity coefficient $a:=|\sigma|^{\alpha}$ is $\eta$-H\"older continuous with $\eta \in (0,1]$ and uniformly positive, that is, there exists $\underline{a}>0$ such that for any $x \in {\mathbb R}^{d}$, $a(x) \geq \underline{a}$.
Under the balance condition $\alpha+\gamma>1$, Kulik \cite{Kul19} proved the existence of a unique weak solution to equation \eqref{SDE_stable_0}.
Moreover, by using the parametrix method, he showed that for all $t \in (0,T]$, $X(t)$ admits a density function $p_{t}(x,\cdot)$ with respect to Lebesgue measure and gave a two-sided bound for it, that is, there exist $C_{\pm}>0$ such that for any $(t,x,y) \in (0,T]\times \mathbb{R}^{d} \times \mathbb{R}^{d}$,
\begin{align*}
C_{-}
\widetilde{p}_{t}(x,y)
\leq
p_{t}(x,y)
\leq
C_{+}
\widetilde{p}_{t}(x,y),
\end{align*}
where
\begin{align*}
\widetilde{p}_{t}(x,y)
:=
\frac{1}{t^{d/\alpha} a(x)^{d/\alpha}}
g^{(\alpha)}
\left(
\frac{y-v_{t}(x)}{t^{1/\alpha} a(x)^{1/\alpha}}
\right),
\end{align*}
$g^{(\alpha)}$ is the density function of $Z(1)$ and $\{v_{t}(x)\}_{t \in [0,T]}$ is a solution to ODE $\mathrm{d} v_{t}(x)=b(v_{t}(x)) \mathrm{d} t$ with $v_{0}(x)=x$ (see Theorem 2.1 and Theorem 2.2 in \cite{Kul19}).
Note that if $\gamma<1$, then the solution of this ODE may fail to be unique, and if $\alpha+\gamma<1$, then a solution of SDE \eqref{SDE_stable_0} may fail to be unique in law (see, Theorem 3.2 (ii) in \cite{TaTsuWa74}).
Moreover, by the asymptotic behaviour of $g^{(\alpha)}$ (see, e.g., Theorem 2.1 in \cite{BlGe60}), we have $g^{(\alpha)}(x) \leq C \min\{1, |x|^{-d-\alpha}\}$ for some $C>0$, which implies that $p_{t}(x,\cdot) \in L^{\infty}(\mathbb{R}^{d}) \cap L^{\Psi}(\mathbb{R}^{d}) \cap L^{p^{*}(\cdot)}(\mathbb{R}^{d})$ for the complementary function $\Psi$ of an N-function $\Phi$ (see, Remark \ref{Rem_Orlicz_1} (ii) and Example \ref{Ex_Orlicz_0} (iii)) and $p^{*}(\cdot):=p(\cdot)/(p(\cdot)-1)$ for $p \in \mathcal{P}(\mathbb{R}^{d})$ with $1<p^{-}\leq p^{+} <\infty$ (see also \cite{KaSz15,KnSc13,Kuh19} for upper bounds of density functions of L\'evy processes).
\end{Eg}
\subsection{Multilevel Monte Carlo method}\label{sec_3_2}
In this subsection, we apply Avikainen's estimates proved in Section \ref{sec_2} to the multilevel Monte Carlo method for solutions to SDE \eqref{SDE_0}.
We first define the union of function spaces $F(\mathbb{R}^{d})$ by
\begin{align*}
F(\mathbb{R}^{d})
:=
\left\{
BV(\mathbb{R}^{d})
\cup
W^{1,\Phi}(\mathbb{R}^{d})
\cup
W^{1,p(\cdot)}(\mathbb{R}^{d})
\cup
W^{s,p}(\mathbb{R}^{d})
\right\}
\bigcap
L^{\infty}(\mathbb{R}^{d})
\end{align*}
for an N-function $\Phi$ with its complementary function $\Psi$ which satisfies the $\Delta_{2}$-condition, a variable exponent $p(\cdot) \in \mathcal{P}^{\log}(\mathbb{R}^{d})$ with $1<p^{-}\leq p^{+} <\infty$ and $(s,p) \in (0,1] \times [1,\infty)$.
We consider the computational complexity required to achieve a given mean squared error (MSE) when estimating the expectation of $P:=f(X(T))$ for some measurable function $f:\mathbb{R}^{d} \to \mathbb{R}$ with $\mathbb{E}[|f(X(T))|]<\infty$, by using the standard and the multilevel Monte Carlo methods.
We first recall the standard Monte Carlo method.
Let $X^{(h)}=(X^{(h)}(t))_{t \in [0,T]}$ be the Euler--Maruyama scheme for SDE \eqref{SDE_0} with time step $h \in (0,T)$, which is defined by
\begin{align*}
\mathrm{d} X^{(h)}(t)
=
b(\eta_{h}(t), X^{(h)}(\eta_{h}(t))) \mathrm{d} t
+
\sigma(\eta_{h}(t), X^{(h)}(\eta _{h}(t))) \mathrm{d} B(t),~
X^{(h)}(0)=X(0),~
t \in [0,T],
\end{align*}
where $\eta _{h}(s):=kh$ and $k$ is the natural number such that $s/h-1<k\leq s/h$.
We define $\widehat{P}^{(h)}:=f(X^{(h)}(T))$.
Let $\widehat{Y}^{(h)}$ be an estimator for $\widehat{P}^{(h)}$.
For example, one may take $\widehat{Y}^{(h)}$ to be the arithmetic mean, that is,
\begin{align*}
\widehat{Y}^{(h)}
:=
N^{-1}
\sum_{i=1}^{N}
\widehat{P}^{(h,i)},
\end{align*}
where $\widehat{P}^{(h,1)}, \ldots, \widehat{P}^{(h,N)}$ are i.i.d. random variables which have the same distribution as $\widehat{P}^{(h)}$.
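For concreteness, the scheme and the estimator can be sketched as follows (our illustration; the one-dimensional coefficients, the bounded variation payoff and all numerical parameters are placeholder assumptions):
\begin{verbatim}
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    # one path of the Euler--Maruyama scheme with time step h = T/n
    h = T / n
    x = float(x0)
    for k in range(n):
        t = k * h
        x += b(t, x) * h + sigma(t, x) * rng.normal(0.0, np.sqrt(h))
    return x

b = lambda t, x: -x            # placeholder drift
sigma = lambda t, x: 1.0       # placeholder diffusion
f = lambda x: float(x > 0.0)   # indicator payoff (a BV function)

rng = np.random.default_rng(0)
N, n, T = 10000, 100, 1.0
# arithmetic-mean estimator \hat{Y}^{(h)} with h = T/n
Y_h = np.mean([f(euler_maruyama(b, sigma, 0.5, T, n, rng))
               for _ in range(N)])
print(Y_h)
\end{verbatim}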
We suppose that the weak rate of convergence for $X^{(h)}$ is $\alpha>0$, that is, there exists $c_{0}>0$ such that $|\mathbb{E}[f(X(T))]-\mathbb{E}[f(X^{(h)}(T))]| \leq c_{0}h^{\alpha}$.
Then the mean squared error is estimated as follows:
\begin{align*}
\text{MSE}
:=
\mathbb{E}[|\widehat{Y}^{(h)}-\mathbb{E}[P]|^{2}]
=
\mathbb{E}[|\widehat{Y}^{(h)}
-
\mathbb{E}[\widehat{Y}^{(h)}]|^{2}]
+
\left|\mathbb{E}[\widehat{P}^{(h)}]-\mathbb{E}[P]\right|^{2}
\leq
\mathrm{Var}[\widehat{P}^{(h)}]
N^{-1}
+
c_{0}^{2}
h^{2\alpha}.
\end{align*}
Assume that $\sup_{h \in (0,T)} \mathrm{Var}[\widehat{P}^{(h)}]<\infty$.
Then, if we would like to achieve $\mathrm{MSE} \leq \varepsilon^{2}$ for a given $\varepsilon>0$, we choose $N$ and $h$ to satisfy $\sup_{h \in (0,T)} \mathrm{Var}[\widehat{P}^{(h)}]N^{-1} \leq \varepsilon^{2}/2$ and $c_{0}^{2}h^{2\alpha} \leq \varepsilon^{2}/2$.
Then the computational complexity for $\widehat{Y}^{(h)}$ is bounded above by $c_{1}\varepsilon^{-(2+1/\alpha)}$ for some constant $c_{1}>0$.
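Explicitly, these two constraints force $N \asymp \varepsilon^{-2}$ and $h \asymp \varepsilon^{1/\alpha}$, and simulating $N$ independent paths with $h^{-1}$ time steps each costs
\begin{align*}
N h^{-1}
\asymp
\varepsilon^{-2} \cdot \varepsilon^{-1/\alpha}
=
\varepsilon^{-(2+1/\alpha)}.
\end{align*}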
Now we recall the multilevel Monte Carlo method.
Let $M \in {\mathbb N}$, let $X_{\ell}=(X_{\ell}(t))_{t \in [0,T]}$, $\ell=0,\ldots,L$, be numerical approximations to $X$ with respective time steps $h_{\ell}:=T/M^{\ell}$, and define $\widehat{P}_{\ell}:=f(X_{\ell}(T))$.
Then it holds that
\begin{align*}
\mathbb{E}[\widehat{P}_{L}]
=
\sum_{\ell=0}^{L}
\mathbb{E}[
\widehat{P}_{\ell}
-
\widehat{P}_{\ell-1}
],
\end{align*}
where $\widehat{P}_{-1}:=0$.
Let $\widehat{Y}_{\ell}$, $\ell=0,\ldots,L$ be independent estimators for each $\mathbb{E}[\widehat{P}_{\ell}-\widehat{P}_{\ell-1}]$ and $C_{\ell}$, $\ell=0,\ldots,L$ be their corresponding computational complexities.
For example, one may take $\widehat{Y}_{\ell}$ to be the arithmetic mean, that is,
\begin{align}\label{MLMC_estimator_0}
\widehat{Y}_{\ell}
=
N_{\ell}^{-1}
\sum_{i=1}^{N_{\ell}}
\left(
\widehat{P}_{\ell}^{(\ell,i)}
-
\widehat{P}_{\ell-1}^{(\ell,i)}
\right),
\end{align}
where $\widehat{P}_{\ell}^{(\ell,1)}-\widehat{P}_{\ell-1}^{(\ell,1)},\ldots,\widehat{P}_{\ell}^{(\ell,N_{\ell})}-\widehat{P}_{\ell-1}^{(\ell,N_{\ell})}$ are i.i.d. random variables which have the same distribution as $\widehat{P}_{\ell}-\widehat{P}_{\ell-1}$.
Note that the random variable $\widehat{P}_{\ell}-\widehat{P}_{\ell-1}$ is the difference between two discrete approximations with different time steps $h_{\ell}$ and $h_{\ell-1}$, and the key point is that they are defined through the same Brownian motion.
We define the estimator $\widehat{Y}:=\sum_{\ell=0}^{L} \widehat{Y}_{\ell}$ and its computational complexity $C_{\text{MLMC}}:=\sum_{\ell=0}^{L}C_{\ell}$.
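The coupling through a common Brownian path can be sketched as follows (our illustration with refinement factor $M=2$, reusing the placeholder functions \texttt{b}, \texttt{sigma}, \texttt{f} and the generator \texttt{rng} from the sketch above):
\begin{verbatim}
def coupled_sample(b, sigma, f, x0, T, n_fine, M, rng):
    # fine (n_fine steps) and coarse (n_fine/M steps) Euler--Maruyama
    # paths driven by the SAME Brownian increments
    h = T / n_fine
    xf, xc, dB_c = float(x0), float(x0), 0.0
    for k in range(n_fine):
        dB = rng.normal(0.0, np.sqrt(h))
        xf += b(k * h, xf) * h + sigma(k * h, xf) * dB
        dB_c += dB
        if (k + 1) % M == 0:        # one coarse step is complete
            tc = (k + 1 - M) * h
            xc += b(tc, xc) * (M * h) + sigma(tc, xc) * dB_c
            dB_c = 0.0
    return f(xf) - f(xc)            # one sample of P_l - P_{l-1}

# level-l estimator \hat{Y}_l as an arithmetic mean
N_l, n_l = 5000, 64
Y_l = np.mean([coupled_sample(b, sigma, f, 0.5, 1.0, n_l, 2, rng)
               for _ in range(N_l)])
print(Y_l)
\end{verbatim}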
Then the following complexity theorem holds for the MLMC method.
\begin{Thm}[Complexity theorem, Theorem 3.1 in \cite{Gi08}, Theorem 2.1 in \cite{Gi15}]\label{MLMC_0}
Let $X$ be a solution to SDE \eqref{SDE_0}, and define $P:=f(X(T))$.
We assume that $\{\widehat{P}_{\ell}\}_{\ell=0,\ldots,L}$, independent estimators $\{\widehat{Y}_{\ell}\}_{\ell=0,\ldots,L}$ and their corresponding computational complexities $\{C_{\ell}\}_{\ell=0,\ldots,L}$ satisfy the following conditions:
there exist positive constants $c_{1}, c_{2}, c_{3}$ and $\alpha,\beta$ such that $\alpha \geq \beta/2$ and
(i) $|\mathbb{E}[P]-\mathbb{E}[\widehat{P}_{\ell}]|\leq c_{1}h_{\ell}^{\alpha}$;
(ii) $\mathbb{E}[\widehat{Y}_{\ell}]=\mathbb{E}[\widehat{P}_{\ell}-\widehat{P}_{\ell-1}]$;
(iii) $\mathrm{Var}[\widehat{Y}_{\ell}] \leq c_{2} N_{\ell}^{-1} h_{\ell}^{\beta}$;
(iv) $C_{\ell} \leq c_{3}N_{\ell} h_{\ell}^{-1}$.
Then for any $\varepsilon \in (0,1/e)$, the mean squared error is estimated by
\begin{align*}
\text{MSE}
:=
\mathbb{E}\left[
\left|
\widehat{Y}
-
\mathbb{E}[P]
\right|^2
\right]
\leq
\varepsilon^2
\end{align*}
with the computational complexity $C_{\text{MLMC}}$ for $\widehat{Y}$ bounded by
\begin{align*}
C_{\text{MLMC}}
\leq
\left\{ \begin{array}{ll}
\displaystyle
c_{4}\varepsilon^{-2},
&\text{ if } \beta \in (1,\infty),\\
\displaystyle
c_{4}\varepsilon^{-2} (\log \varepsilon)^{2},
&\text{ if } \beta =1, \\
\displaystyle
c_{4}
\varepsilon^{-\{2+(1-\beta)/\alpha\}},
&\text{ if } \beta \in (0,1)
\end{array}\right.
\end{align*}
for some constant $c_{4}>0$.
\end{Thm}
Before applying Theorem \ref{MLMC_0} to the Euler--Maruyama scheme $X^{(n)}$ with time step $h=T/n$, we provide the strong rate of convergence of \eqref{MSE_0} for irregular functions $f \in F(\mathbb{R}^{d})$.
\begin{Thm}\label{Cor_0}
Suppose that the coefficients $b$ and $\sigma$ of SDE \eqref{SDE_0} are bounded and Lipschitz continuous in space, and $1/2$-H\"older continuous in time, and $\sigma$ is uniformly elliptic.
Then for any $f \in F(\mathbb{R}^{d})$, $q \in [1,\infty)$ and $\delta \in (0,1)$, there exist $C_{\text{EM}}(q,\delta)>0$ and $C_{\text{EM}}(q)>0$ such that
\begin{align*}
\mathbb{E}\left[
\left|
f(X(T))
-
f(X^{(n)}(T))
\right|^{q}
\right]
\leq
\left\{ \begin{array}{ll}
\displaystyle
C_{\text{EM}}(q,\delta)
n^{-\frac{\delta}{2}},
&\text{ if } f \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}), \\
\displaystyle
C_{\text{EM}}(q)
n^{-\frac{q}{2(q+1)}},
&\text{ if } f \in \{W^{1,\Phi}(\mathbb{R}^{d}) \cup W^{1,p(\cdot)}(\mathbb{R}^{d})\} \cap L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
C_{\text{EM}}(q)
n^{-\frac{pqs}{2(q+p)}},
&\text{ if } f \in W^{s,p}(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}).
\end{array}\right.
\end{align*}
\end{Thm}
\begin{proof}
We first note that under the assumptions on the coefficients, the strong rate of convergence for the Euler--Maruyama scheme is $1/2$, that is, for any $p >0$, there exists a constant $C_{p}>0$ such that $\mathbb{E}[|X(T)-X^{(n)}(T)|^{p}]^{1/p} \leq C_{p} n^{-1/2}$ (see, e.g. \cite{KP}).
It follows from Example \ref{Eg_GB_1} that $X(T)$ and $X^{(n)}(T)$ admit density functions $p_{T}(x,\cdot)$ and $p_{T}^{(n)}(x,\cdot)$ with respect to Lebesgue measure which satisfy the Gaussian upper bounds in \eqref{GB_1} and \eqref{GB_2}, respectively.
Hence we have $p_{T}(x,\cdot), p_{T}^{(n)}(x,\cdot) \in L^{\infty}(\mathbb{R}^{d}) \cap L^{\Psi}(\mathbb{R}^{d}) \cap L^{p^{*}(\cdot)}(\mathbb{R}^{d}) $.
Therefore, by using Theorems \ref{main_0}, \ref{main_1}, \ref{main_2} and \ref{main_3} with $r=\infty$, we conclude the statement.
\end{proof}
\begin{Rem}\label{Rem_GB_0}
\begin{itemize}
\item[(i)]
Recently, the strong rate of convergence of the Euler--Maruyama scheme under non-Lipschitz coefficients has been widely studied (see, \cite{BaHuYu19,BuDaGe19,GyRa11,KuSc19,LeSz17b,MeTa,MuYa20,NT1,NT2}).
\item[(ii)]
If we assume $f \in L^{r}(\mathbb{R}^{d},p_{T}(x,\cdot))$ for some $r \in (1,\infty)$, then estimates similar to those of Theorem \ref{Cor_0} hold.
Note that the constants $C_{EM}$ may then depend on $\|f\|_{L^{r}(\mathbb{R}^{d},p_{T}^{(n)}(x,\cdot))}$, and in particular on the time step $h=T/n$ (see also Remark \ref{rem:2.12} (v) and Remark \ref{rem:2.13} (iii)).
However, by using the Gaussian upper bound \eqref{GB_2} for the density of $X^{(n)}(T)$, under the additional assumption $f \in L^{r}(\mathbb{R}^{d}, g_{c_{+}T}(x,\cdot))$, the constants $C_{EM}$ are uniformly bounded with respect to the time step $h=T/n$.
\end{itemize}
\end{Rem}
As applications of Theorem \ref{MLMC_0} and Theorem \ref{Cor_0}, we have the following two examples for irregular functions $f \in F(\mathbb{R}^{d})$.
\begin{Eg}\label{Ex_MLMC_0}
Let $X_{\ell}$ be the Euler--Maruyama scheme with time step $h_{\ell}=T/M^{\ell} $ and $\widehat{Y}_{\ell}$ be the independent estimator defined by \eqref{MLMC_estimator_0}.
Suppose the coefficients $b$ and $\sigma$ of SDE \eqref{SDE_0} satisfy $b \in C^{1,3}_{b}([0,T]\times \mathbb{R}^{d};{\mathbb R}^{d})$, $\sigma \in C^{1,3}_{b}([0,T]\times \mathbb{R}^{d};{\mathbb R}^{d \times d})$ and $\partial_{t}\sigma \in C^{0,1}_{b}([0,T]\times \mathbb{R}^{d};{\mathbb R}^{d \times d})$, and $\sigma$ is uniformly elliptic.
Then it follows from Theorem 2.5 in \cite{GoLa08} that for any bounded measurable function $f:\mathbb{R}^{d} \to \mathbb{R}$, the weak rate of convergence $\alpha$ in Theorem \ref{MLMC_0} is one (see also, Theorem 3.5 in \cite{BaTa96}, Corollary 22 in \cite{Gu06}, Theorem 1.1 in \cite{KoMa02} and Theorem 1 in \cite{TaTu90}).
Note that
\begin{align*}
\mathrm{Var}[\widehat{Y}_{\ell}]
&=
\mathbb{E}[|\widehat{Y}_{\ell}-\mathbb{E}[\widehat{Y}_{\ell}]|^{2}]
=
N_{\ell}^{-1}
\mathrm{Var}[\widehat{P}_{\ell}-\widehat{P}_{\ell-1}]
\end{align*}
for each $\ell=0,\ldots,L$.
Hence by using Theorem \ref{Cor_0}, for any $f \in F(\mathbb{R}^{d})$, we have
\begin{align*}
&\mathrm{Var}[\widehat{P}_{\ell}-\widehat{P}_{\ell-1}]
\leq
\left|
\mathbb{E}[\widehat{P}_{\ell}]
-
\mathbb{E}[\widehat{P}_{\ell-1}]
\right|^{2}
+
2\mathbb{E}[|P-\widehat{P}_{\ell}|^{2}]
+
2\mathbb{E}[|P-\widehat{P}_{\ell-1}|^{2}]
\\&\leq
c_{0}^{2}
\{
h_{\ell}
+
h_{\ell-1}
\}^{2}
+
\left\{ \begin{array}{ll}
\displaystyle
2C_{\text{EM}}(2,\delta)
\{
h_{\ell}^{\frac{\delta}{2}}
+
h_{\ell-1}^{\frac{\delta}{2}}
\},
&\text{ if } f \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
2C_{\text{EM}}(2)
\{
h_{\ell}^{\frac{1}{3}}
+
h_{\ell-1}^{\frac{1}{3}}
\},
&\text{ if } f \in \{W^{1,\Phi}(\mathbb{R}^{d}) \cup W^{1,p(\cdot)}(\mathbb{R}^{d})\} \cap L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
2C_{\text{EM}}(2)
\{
h_{\ell}^{\frac{ps}{p+2}}
+
h_{\ell-1}^{\frac{ps}{p+2}}
\},
&\text{ if } f \in W^{s,p}(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}).
\end{array}\right.
\end{align*}
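In the notation of Theorem \ref{MLMC_0}, these variance bounds give $\beta=\delta/2$, $\beta=1/3$ and $\beta=ps/(p+2)$, respectively, with weak rate $\alpha=1$; for instance, in the BV case $\beta \in (0,1)$ and
\begin{align*}
2+\frac{1-\beta}{\alpha}
=
3-\frac{\delta}{2}
=
\frac{6-\delta}{2}.
\end{align*}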
Hence, for $f \in F(\mathbb{R}^{d})$, the computational complexity $C_{\text{MLMC}}$ for $\widehat{Y}$ is bounded from above by
\begin{align*}
C_{\text{MLMC}}
\leq
\left\{ \begin{array}{ll}
\displaystyle
c_{4}
\varepsilon^{-\frac{6-\delta}{2}},
&\text{ if } f \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}), \\
\displaystyle
c_{4}
\varepsilon^{-\frac{8}{3}},
&\text{ if } f \in \{W^{1,\Phi}(\mathbb{R}^{d}) \cup W^{1,p(\cdot)}(\mathbb{R}^{d})\} \cap L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
c_{4}
\varepsilon^{-(3-\frac{ps}{p+2})},
&\text{ if } f \in W^{s,p}(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}).
\end{array}\right.
\end{align*}
\end{Eg}
\begin{Eg}\label{Ex_MLMC_1}
Let $X_{\ell}$ be the Euler--Maruyama scheme with time step $h_{\ell}=T/M^{\ell}$ and $\widehat{Y}_{\ell}$ be the independent estimator defined by \eqref{MLMC_estimator_0}.
Suppose the coefficients $b$ and $\sigma$ of SDE \eqref{SDE_0} are bounded and Lipschitz continuous in space, and $1/2$-H\"older continuous in time, and $\sigma$ is uniformly elliptic.
Then for any bounded measurable function $f:\mathbb{R}^{d} \to \mathbb{R}$ and $\delta \in (0,1)$, the weak rate of convergence $\alpha$ in Theorem \ref{MLMC_0} is $\delta/2$ (see Theorem 1.1 in \cite{KoMe17}).
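The variance bounds, and hence the values of $\beta$, are the same as in Example \ref{Ex_MLMC_0}, while now $\alpha=\delta/2$; for instance, in the BV case $\beta=\delta/2$ and
\begin{align*}
2+\frac{1-\beta}{\alpha}
=
2+\frac{1-\delta/2}{\delta/2}
=
1+\frac{2}{\delta}.
\end{align*}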
Therefore, for $f \in F(\mathbb{R}^{d})$, the computational complexity $C_{\text{MLMC}}$ for $\widehat{Y}$ is bounded from above by
\begin{align*}
C_{\text{MLMC}}
\leq
\left\{ \begin{array}{ll}
\displaystyle
c_{4}
\varepsilon^{-1-\frac{2}{\delta}},
&\text{ if } f \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}), \\
\displaystyle
c_{4}
\varepsilon^{-2-\frac{4}{3\delta}},
&\text{ if } f \in \{W^{1,\Phi}(\mathbb{R}^{d}) \cup W^{1,p(\cdot)}(\mathbb{R}^{d})\} \cap L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
c_{4}
\varepsilon^{-2-\frac{2\{p(1-s)+2\}}{\delta(p+2)}},
&\text{ if } f \in W^{s,p}(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}).
\end{array}\right.
\end{align*}
\end{Eg}
\begin{comment}
\subsection{$L^{2}$-time regularity of BSDEs}\label{sec_3_3}
In this subsection, inspired by \cite{EHaJe17,GoMa10,HaJeE18,HuPhWa20,Zhj04}, we apply Avikainen's estimates proved in Section \ref{sec_2} to numerical analysis for BSDEs with irregular terminal functions $g\in F(\mathbb{R}^{d})$.
Let $(X,Y,Z)$ be a solution of the following (Markovian) decoupled forward-backward stochastic differential equation (FBSDE):
\begin{align}\label{FBSDE_0}
\begin{split}
X(t)
&=
x
+
\int_{0}^{t}
b(s,X(s))
\mathrm{d} s
+
\int_{0}^{t}
\sigma(s,X(s))
\mathrm{d} B(s),
\quad
x \in \mathbb{R}^{d},~t \in [0,T],\\
Y (t)
&=
g\left(X(T)\right)
+
\int_{t}^{T}
f(s,X(s),Y(s),Z(s))
\mathrm{d} s
-
\int_{t}^{T}
Z(s)^{\top}
\mathrm{d} B(s),~t \in [0,T],
\end{split}
\end{align}
where $B$ is a $d$-dimensional $({\mathscr F}_{t})_{t \in [0,T]}$-Brownian motion, and $b:[0,T] \times {\mathbb R}^d\to {\mathbb R}^d$, $\sigma:[0,T] \times {\mathbb R}^d\to {\mathbb R}^{d\times d}$, $f:[0,T]\times {\mathbb R}^d \times {\mathbb R} \times {\mathbb R}^d\to {\mathbb R}$ and $g:{\mathbb R}^d \to {\mathbb R}$ are measurable functions.
For details of the theory and applications of BSDEs, we refer to \cite{Bi76,EHaJe17,KaPeQu97,GoMa10,HaJeE18,HuPhWa20,PaPa90,Zhj04}.
We need the following assumptions on the coefficients of FBSDE \eqref{FBSDE_0}.
\begin{Ass} \label{ASS_BSDE}
\begin{itemize}
\item[(i)]
The drift coefficient $b$ and the diffusion coefficient $\sigma$ are bounded and twice continuously differentiable in space, and their partial derivatives are uniformly bounded and $\gamma$-H\"older continuous with $\gamma \in (0,1]$.
Moreover, $b$ and $\sigma$ are $1/2$-H\"older continuous in time, and $\sigma$ is uniformly elliptic.
\item[(ii)]
The driver $f$ is continuous and continuously differentiable in space, and its partial derivative is uniformly bounded.
Moreover, $\int_{0}^{T}|f(s,0,0,0)| \mathrm{d} s <\infty$.
\end{itemize}
\end{Ass}
It is known (see, \cite{BoTo04,GoLe06,HuPhWa20, Zhj04}) that the rate of convergence for numerical schemes of \eqref{FBSDE_0} is derived by the $L^{2}$-time regularity $\varepsilon(Z,\pi)$, that is, for a given time mesh $\pi:0=t_{0}<\cdots<t_{n}=T$,
\begin{align}\label{Reg_z_0}
\varepsilon(Z,\pi)
&:=
\mathbb{E}
\left[
\sum_{i=0}^{n-1}
\int_{t_i}^{t_{i+1}}
\left|
Z(t) - \overline{Z}(t_i)
\right|^2
\mathrm{d} t
\right],~
\overline{Z}(t_i)
:= \frac{1}{t_{i+1}-t_{i}}
\mathbb{E}\left[
\int_{t_i}^{t_{i+1}}
Z(t)
\mathrm{d} t
\mid
\mathscr{F}_{t_{i}}
\right].
\end{align}
For providing the error estimate of $\varepsilon(Z,\pi)$, we consider the following space:
\begin{align*}
{\bf L}_{2,\alpha}
:=
\left\{
g:\mathbb{R}^{d} \to \mathbb{R}
~;~
\mathbb{E}\left[
\left|
g(X(T))
\right|^{2}
\right]
+
\sup_{0\leq t <T}
\frac{
\mathbb{E}
\left[
\left|
g(X(T))
-
\mathbb{E}
\left[
g(X(T))
\mid
\mathscr{F}_{t}
\right]
\right|^{2}
\right]
}
{
(T-t)^{\alpha}
}
<\infty
\right\}.
\end{align*}
Under Assumption \ref{ASS_BSDE}, Gobet and Makhlouf \cite{GoMa10} provided an upper bound of the $L^{2}$-time regularity $\varepsilon(Z,\pi)$ for $g \in {\bf L}_{2,\alpha}$ (see, Theorem 3.1 and Theorem 3.2 in \cite{GoMa10}).
By using these theorems and Avikainen's estimate proved in Section \ref{sec_2}, we can provide the order of the $L^{2}$-time regularity $\varepsilon(Z,\pi)$ for $g \in F({\mathbb R}^{d})$.
\begin{Thm}\label{main_5}
Suppose that Assumption \ref{ASS_BSDE} holds.
Let $g \in F(\mathbb{R}^{d})$ and $(X,Y,Z)$ be a solution of decoupled FBSDE \eqref{FBSDE_0}, and let $\pi$ be the equidistant time mesh, that is, $\pi=\{t_{i}=iT/n~;~ i=0,\ldots,n\}$, and let $\delta \in (0,1)$.
Then, there exist constants $C_{\delta}>0$ and $C>0$ which do not depend on $n$ such that
\begin{align*}
\varepsilon(Z,\pi)
\leq
\left\{ \begin{array}{ll}
\displaystyle
C_{\delta}
n^{-\frac{\delta}{2}},
&\text{if} \quad g \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}), \\
\displaystyle
C
n^{-\frac{1}{3}},
&\text{if} \quad g \in \{W^{1,\Phi}(\mathbb{R}^{d}) \cup W^{1,p(\cdot)}(\mathbb{R}^{d})\} \cap L^{\infty}(\mathbb{R}^{d}),\\
\displaystyle
C
n^{-\frac{ps}{p+2}},
&\text{if} \quad g \in W^{s,p}(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d}).
\end{array}\right.
\end{align*}
\end{Thm}
\begin{proof}
We only prove the statement for $g \in BV(\mathbb{R}^{d}) \cap L^{\infty}(\mathbb{R}^{d})$.
The proofs for the other function spaces are similar.
We first note that since $g$ is bounded, $\mathbb{E}[|g(X(T))|^{2}]<\infty$.
From Theorem 3.2 (a) in \cite{GoMa10}, it suffices to prove that for any $\delta \in (0,1)$,
\begin{align*}
\sup_{0\le t <T}
\frac{
\mathbb{E}\left[
\left|
g(X(T))
-
\mathbb{E}\left[
g(X(T))
\mid
\mathscr{F}_{t}
\right]
\right|^{2}
\right]
}
{
(T-t)^{\frac{\delta}{2}}
}
<\infty.
\end{align*}
Under Assumption \ref{ASS_BSDE} (i), the stochastic process $X$ admits a transition probability density $p(s,x;t,\cdot)$ with respect to Lebesgue measure, and it has the Gaussian upper bound, that is, there exist $C_{+}>0$ and $c_{+}>0$ such that for any $x,y \in \mathbb{R}^{d}$ and $0\leq s<t \leq T$,
\begin{align*}
p(s,x;t,y)
\leq
C_{+} g_{c_{+}(t-s)}(x,y)
\end{align*}
(e.g., Theorem 6.4.5 in \cite{Fr64}).
Therefore, by using the Markov property of $X$, Jensen's inequality and change of variables, for any $0<t<T$, we have
\begin{align*}
\mathbb{E}\left[
\left|
g(X(T))
-
\mathbb{E}\left[
g(X(T))
\mid
\mathscr{F}_{t}
\right]
\right|^{2}
\right]
&=
\mathbb{E}\left[
\left|
\int_{\mathbb{R}^{d}}
\left\{
g(X(T))
-
g(y)
\right\}
p(t,X(t);T,y)
\mathrm{d} y
\right|^{2}
\right]\\
&\leq
\mathbb{E}\left[
\int_{\mathbb{R}^{d}}
\left|
g(X(T))
-
g(y)
\right|^{2}
p(t,X(t);T,y)
\mathrm{d} y
\right]\\
&\leq
C_{+}
\int_{\mathbb{R}^{d}}
\mathbb{E}\left[
\left|
g(X(T))
-
g(y+X(t))
\right|^{2}
\right]
g_{c_{+}(T-t)}(0,y)
\mathrm{d} y.
\end{align*}
Note that $X(T)$ and $y+X(t)$ admit density functions with respect to Lebesgue measure, and the density function of $X(T)$ has the Gaussian upper bound (see, \eqref{GB_1}).
Thus by using Theorem \ref{main_0} (i), for $p\in (0,\infty)$, we obtain
\begin{align}\label{eq_main_5_0}
&\mathbb{E}\left[
\left|
g(X(T))
-
\mathbb{E}\left[
g(X(T))
\mid
\mathscr{F}_{t}
\right]
\right|^{2}
\right] \notag\\
&\leq
C_{+}
C_{BV}(p,2,\infty)
\int_{\mathbb{R}^{d}}
\mathbb{E}
\left[
\left|
X(T)
-
(y+X(t))
\right|^{p}
\right]^{\frac{1}{p+1}}
g_{c_{+}(T-t)}(0,y)
\mathrm{d} y \notag\\
&\leq
2^{\frac{p-1}{p+1}}
C_{+}
C_{BV}(p,2,\infty)
\left(
\mathbb{E}\left[
\left|
X(T)
-
X(t)
\right|^{p}
\right]^{\frac{1}{p+1}}
+
\int_{\mathbb{R}^{d}}
|y|^{\frac{p}{p+1}}
g_{c_{+}(T-t)}(0,y)
\mathrm{d} y
\right) \notag\\
&\leq
C(p)
|T-t|^{\frac{p}{2(p+1)}}
\end{align}
for some constant $C(p)>0$ independent of $t$.
Hence by choosing $p:=\frac{\delta}{1-\delta}$, we conclude the proof.
\end{proof}
\begin{Rem}\label{Rem_BSDE}
\begin{itemize}
\item[(i)]
If we assume $g \in L^{r}(\mathbb{R}^{d},p_{T}(x,\cdot))$ for some $r \in (1,\infty)$, then the constant $C(p)$ in \eqref{eq_main_5_0} may depend on $\|g\|_{L^{r}(\mathbb{R}^{d},p_{t}(x,\cdot))}$, especially on $t$ (see also Remark \ref{rem:2.12} (v) and Remark \ref{rem:2.13} (iii)).
For instance, even if we apply the Gaussian upper bound \eqref{GB_1} to $p_{t}(x,\cdot)$, we cannot obtain an upper bound of $C(p)$ uniformly in $t \in (0,T)$ in general.
Therefore, it is difficult to apply our main results in Section \ref{sec_2} in order to estimate the $L^{2}$-time regularity $\varepsilon(Z,\pi)$.
To avoid this problem, we assume the boundedness of $g$ in Theorem \ref{main_5}.
\item[(ii)]
Recently, numerical schemes for high-dimensional forward-backward SDEs based on machine learning have been studied (e.g., \cite{EHaJe17,HaJeE18,HuPhWa20}).
In particular, in \cite{HuPhWa20}, several backward schemes based on a dynamic programming equation are proposed, and the upper bound of the squared error of their schemes has the sum of the mean squared error $\mathbb{E}[|g(X(T))-g(X^{(\pi)}(T))|^{2}]$ and the $L^{2}$-time regularity $\varepsilon(Z,\pi)$ defined by \eqref{Reg_z_0}, where $X$ is a solution to forward equation of \eqref{FBSDE_0}, $X^{(\pi)}$ is its Euler--Maruyama scheme with time step $h =T/n$ and $g$ is the terminal condition of backward equation of \eqref{FBSDE_0} (see, Theorem 4.1 and Theorem 4.2 in \cite{HuPhWa20}).
As one of applications of Theorem \ref{Cor_0} and Theorem \ref{main_5}, we can conclude the convergence of their numerical schemes in the case of irregular terminal conditions $g \in F(\mathbb{R}^{d})$ for any dimension $d \geq 1$.
\end{itemize}
\end{Rem}
\end{comment}
\subsection*{Acknowledgements}
The authors would like to thank the anonymous referees for their careful reading and valuable comments.
The authors are deeply grateful to Professor Flavien Leger for his valuable comments.
The first author was supported by JSPS KAKENHI Grant Number 19K14552.
The second author was supported by Sumitomo Mitsui Banking Corporation.
The third author was supported by JSPS KAKENHI Grant Number 17J05514.
| {
"timestamp": "2020-11-30T02:06:16",
"yymm": "2005",
"arxiv_id": "2005.03219",
"language": "en",
"url": "https://arxiv.org/abs/2005.03219",
"abstract": "Avikainen showed that, for any $p,q \\in [1,\\infty)$, and any function $f$ of bounded variation in $\\mathbb{R}$, it holds that $\\mathbb{E}[|f(X)-f(\\widehat{X})|^{q}] \\leq C(p,q) \\mathbb{E}[|X-\\widehat{X}|^{p}]^{\\frac{1}{p+1}}$, where $X$ is a one-dimensional random variable with a bounded density, and $\\widehat{X}$ is an arbitrary random variable. In this article, we will provide multi-dimensional versions of this estimate for functions of bounded variation in $\\mathbb{R}^{d}$, Orlicz--Sobolev spaces, Sobolev spaces with variable exponents, and fractional Sobolev spaces. The main idea of our arguments is to use the Hardy--Littlewood maximal estimates and pointwise characterizations of these function spaces. We apply our main results to analyze the numerical approximation for some irregular functionals of the solution of stochastic differential equations.",
"subjects": "Probability (math.PR); Numerical Analysis (math.NA)",
"title": "$L^{q}$-error estimates for approximation of irregular functionals of random vectors",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517475646369,
"lm_q2_score": 0.7248702761768249,
"lm_q1q2_score": 0.7089606403724045
} |
https://arxiv.org/abs/1703.00102 | SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient | In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm. | \section{Introduction}
\label{introduction}
We are interested in solving a problem of the form
\begin{gather}\label{eq:problem}
\min_{w\in\R^d} \left\{ \ P(w) \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i\in[n]} f_i(w)\right\},
\end{gather}
where each $f_i$, $i \in [n]\stackrel{\text{def}}{=}\{1,\dots,n\}$, is convex
with a Lipschitz continuous gradient. Throughout the paper, we assume that there exists an optimal solution $w^{*}$ of \eqref{eq:problem}.
Problems of this type arise frequently in supervised learning applications~\cite{ESL}. Given a training set $\{(x_i,y_i)\}_{i=1}^n$ with $x_i \in\R^{d}, y_i\in\R$, the least squares regression model, for example, is written as \eqref{eq:problem} with $f_i(w)\stackrel{\text{def}}{=} (x_i^Tw-y_i)^2 + \frac{\lambda}{2} \|w\|^2$, where $\|\cdot\|$ denotes the $\ell_2$-norm. The $\ell_2$-regularized logistic regression for binary classification is written with $f_i(w) \stackrel{\text{def}}{=} \log (1+\exp(-y_ix_i^Tw)) + \frac{\lambda}{2}\|w\|^2$ $(y_i\in\{-1,1\})$.
In recent years, many advanced optimization methods have been developed for problem \eqref{eq:problem}. While the objective function is smooth and convex, traditional optimization methods, such as gradient descent (GD) or Newton's method, are often impractical for this problem when $n$ --
the number of training samples and hence the number of $f_i$'s -- is very large. In particular, GD updates iterates as follows
$$w_{t+1} = w_{t} - \eta_t \nabla P(w_{t}), \quad t=0, 1, 2, \ldots .$$
Under strong convexity assumption on $P$ and with appropriate choice of $\eta_t$, GD converges at a linear rate in terms of objective function values $P(w_t)$.
However, when $n$ is large, computing $ \nabla P(w_{t})$ at each iteration can be prohibitive.
As an alternative, stochastic gradient descent (SGD)\footnote{We mark here that even though stochastic gradient is referred to as SG in literature, the term stochastic gradient descent (SGD) has been widely used in many important works of large-scale learning, including SAG/SAGA, SDCA, SVRG and MISO.}, originating from the seminal work of Robbins and Monro in 1951 \cite{RM1951}, has become the method of choice for solving \eqref{eq:problem}. At each step, SGD picks an index $i\in [n]$ uniformly at random, and updates the iterate as
$w_{t+1} = w_{t} - \eta_t \nabla f_i(w_{t})$, which is up to $n$ times cheaper than an iteration of a full gradient method. The convergence rate of SGD is
slower than that of GD; in particular, it is sublinear in the strongly convex case. The trade-off, however, is advantageous due to the tremendous per-iteration savings and the fact that low accuracy solutions are sufficient. This trade-off has been thoroughly analyzed in \cite{Bottou1998}. Unfortunately, in practice the SGD method is often too slow and its performance is too sensitive to the variance in the sample gradients $\nabla f_i(w_{t})$. Mini-batches (averaging multiple sample gradients $\nabla f_i(w_{t})$) were used in \cite{pegasosICML, acceleratedmb, takac2013ICML} to reduce the variance and improve the convergence rate by constant factors. A diminishing sequence $\{\eta_t\}$ can be used to control the variance \cite{pegasos, bottou2016optimization}, but the practical convergence of SGD is known to be very sensitive to the choice of this sequence, which needs to be hand-picked.
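As a minimal illustration of this baseline (a sketch of ours; the synthetic data, the mini-batch size and the Pegasos-type step size $\eta_t=1/(\lambda t)$ are assumptions made for the example):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 1000, 10, 1e-2
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

def grad_batch(w, idx):
    # mini-batch gradient of f_i(w) = log(1+exp(-y_i x_i^T w))
    #                                 + (lam/2)||w||^2
    s = -y[idx] * (X[idx] @ w)
    coef = -y[idx] / (1.0 + np.exp(-s))
    return (coef[:, None] * X[idx]).mean(axis=0) + lam * w

w = np.zeros(d)
for t in range(1, 5001):
    idx = rng.integers(0, n, size=16)            # mini-batch
    w -= (1.0 / (lam * t)) * grad_batch(w, idx)  # diminishing step
\end{verbatim}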
Recently, a class of more sophisticated
algorithms has emerged; these methods exploit the specific finite-sum form of \eqref{eq:problem} and combine deterministic and stochastic aspects to reduce the variance of the steps. Examples of such methods are
SAG/SAGA \cite{SAG, SAGA}, SDCA \cite{SDCA}, SVRG \cite{SVRG, Xiao2014}, DIAG \cite{mokhtari2017double}, MISO \cite{mairal2013optimization}
and S2GD \cite{S2GD}, all of which enjoy faster convergence rate than that of SGD and use a fixed learning rate parameter $\eta$. In this paper we introduce a new method in this category, SARAH, which further improves several aspects of the existing methods.
In Table~\ref{table:summary} we summarize complexity and some other properties of the existing methods and SARAH when applied to strongly convex problems.
Although SVRG and SARAH have the same convergence rate, we introduce a practical variant of SARAH that outperforms SVRG in our experiments.
\begin{table}
\small
\caption{Comparisons between different algorithms for strongly convex functions. $\kappa = L/\mu$ is the condition number.
}
\label{table:summary}
\centering
\begin{tabular}{C{1.25cm} c C{1.4cm} C{1.3cm} }
Method & Complexity & Fixed Learning Rate & Low Storage Cost \\
\hline
GD & $\Ocal\left(n\kappa \log\left({1}/{\epsilon}\right)\right)$ & \ding{51} & \ding{51} \\
\hline
SGD & $\Ocal\left({1}/{\epsilon}\right)$ & \ding{55} & \ding{51} \\
\hline
SVRG & $\Ocal\left((n+\kappa) \log\left({1}/{\epsilon}\right)\right)$ & \ding{51} & \ding{51} \\
\hline
SAG/SAGA & $\Ocal\left((n + \kappa) \log\left({1}/{\epsilon}\right)\right)$ & \ding{51} & \ding{55} \\
\hline
{\textbf{SARAH}} & {$\Ocal\left((n + \kappa) \log\left({1}/{\epsilon}\right)\right)$} & {\ding{51}} & {\ding{51}} \\
\hline
\end{tabular}
\vskip0.3cm
\caption{Comparisons between different algorithms for convex functions.}
\label{table:summary_convex}
\begin{tabular}{C{3cm} c }
Method & Complexity \\
\hline
GD & $\Ocal\left(n/\epsilon \right)$ \\
\hline
SGD & $\Ocal\left({1}/{\epsilon^2}\right)$ \\
\hline
SVRG & $\Ocal\left(n + (\sqrt{n}/\epsilon) \right)$ \\
\hline
SAGA & $\Ocal\left(n + (n/\epsilon) \right)$ \\
\hline
{\textbf{SARAH}} & {$\Ocal\left((n + (1/\epsilon)) \log(1/\epsilon) \right)$} \\
\hline
{\textbf{SARAH (one outer loop)}} & {$\Ocal\left(n + (1/\epsilon^2) \right)$} \\
\hline
\end{tabular}
\end{table}
In addition, theoretical results for the complexity of the methods or their variants when applied to general convex functions have been derived~\cite{SAGjournal, SAGA, nonconvexSVRG, SVRG++, Katyusha}. In Table~\ref{table:summary_convex} we summarize the key complexity results, noting that the convergence rates are now sublinear.
\paragraph{Our Contributions.} In this paper, we propose a novel algorithm which combines some of the good properties of existing algorithms, such as SAGA and SVRG, while aiming to improve on both of these methods. In particular, our algorithm does not take steps along a stochastic gradient direction, but rather along an accumulated direction using past stochastic gradient information (as in SAGA) and occasional exact gradient information (as in SVRG). We summarize the key properties of the proposed algorithm below.
\begin{itemize}[noitemsep,nolistsep]
\item Similarly to SVRG, SARAH's iterations are divided into an outer loop, where a full gradient is computed, and an inner loop, where only stochastic gradients are computed. Unlike the case of SVRG, the steps of the inner loop of SARAH are based on accumulated stochastic information.
\item Like SAG/SAGA and SVRG, SARAH has a sublinear rate of convergence for general convex functions, and a linear rate of convergence for strongly convex functions.
\item SARAH uses a constant learning rate, whose size is larger than that of SVRG. We analyze and discuss the optimal choice of the learning rate and the number of inner loop steps. However, unlike SAG/SAGA but similar to SVRG, SARAH does not require a storage of $n$ past stochastic gradients.
\item We also prove a linear convergence rate (in the strongly convex case) for the inner loop of SARAH, the property that SVRG does not possess. We show that the variance of the steps inside the inner loop goes to zero, thus SARAH is theoretically more stable and reliable than SVRG.
\item We provide a practical variant of SARAH based on the convergence properties of the inner loop, in which a simple and stable stopping criterion for the inner loop is used (see Section \ref{sec:sarahplus} for more details). This variant shows how SARAH can be made more stable than SVRG in practice.
\end{itemize}
\section{Stochastic Recursive Gradient Algorithm}
Now we are ready to present our SARAH (Algorithm \ref{sarah}).
\begin{algorithm}
\caption{SARAH}
\label{sarah}
\begin{algorithmic}
\STATE {\bfseries Parameters:} the learning rate $\eta > 0$ and the inner loop size $m$.
\STATE {\bfseries Initialize:} $\tilde{w}_0$
\STATE {\bfseries Iterate:}
\FOR{$s=1,2,\dots$}
\STATE $w_0 = \tilde{w}_{s-1}$
\STATE $v_0 = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(w_0)$
\STATE $w_1 = w_0 - \eta v_0$
\STATE {\bfseries Iterate:}
\FOR{$t=1,\dots,m-1$}
\STATE Sample $i_{t}$ uniformly at random from $[n]$
\STATE $v_{t} = \nabla f_{i_{t}} (w_{t}) - \nabla f_{i_{t}}(w_{t-1}) + v_{t-1}$
\STATE $w_{t+1} = w_{t} - \eta v_{t}$
\ENDFOR
\STATE Set $\tilde{w}_s = w_{t}$ with $t$ chosen uniformly at random from $\{0,1,\dots,m\}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
The key step of the algorithm is a recursive update of the stochastic gradient estimate \textit{(SARAH update)}
\begin{equation}\label{eq:vt}
v_{t} = \nabla f_{i_{t}} (w_{t}) - \nabla f_{i_{t}}(w_{t-1}) + v_{t-1},
\end{equation}
followed by the iterate update:
\begin{equation}\label{eq:iterate}
w_{t+1} = w_{t} - \eta v_{t}.
\end{equation}
For comparison, SVRG update can be written in a similar way as
\begin{equation}\label{eq:svrgvt}
v_{t} = \nabla f_{i_{t}} (w_{t}) - \nabla f_{i_{t}}(w_{0}) + v_{0}.
\end{equation}
Observe that in SVRG, $v_t$ is an unbiased estimator of the gradient, while this is not the case for SARAH. Specifically, \footnote{
$\Exp [\cdot | \mathcal{F}_{t}] = \Exp_{i_{t}}[\cdot]$, which is expectation with respect to the random choice of index $i_{t}$ (conditioned on
$w_0, i_1, i_2, \dots, i_{t-1}$).}
\begin{equation}\label{eq:NotSGD}
\Exp[v_{t} | \mathcal{F}_{t}] = \nabla P(w_{t}) - \nabla P(w_{t-1}) + v_{t-1}\neq \nabla P(w_{t}),
\end{equation}
where \footnote{$\mathcal{F}_{t}$ also contains all the information of $w_0,\dots,w_{t}$ as well as $v_0,\dots,v_{t-1}.$} $\mathcal{F}_{t} = \sigma(w_0,i_1,i_2,\dots,i_{t-1})$ is the $\sigma$-algebra generated by $w_0,i_1,i_2,\dots,i_{t-1}$; $\mathcal{F}_{0} = \mathcal{F}_{1} = \sigma(w_0)$.
Hence, SARAH is different from SGD- and SVRG-type methods; however, the following total expectation holds, $\Exp[v_{t}] = \Exp[\nabla P(w_{t})]$, which differentiates SARAH from SAG/SAGA.
SARAH is similar to SVRG since they both contain outer loops which require one full gradient evaluation per outer iteration followed by one full gradient descent step with a given learning rate. The difference lies in the inner loop, where SARAH updates the stochastic step direction $v_t$ recursively by adding and subtracting component gradients to and from the previous $v_{t-1}\ (t\geq 1)$ in \eqref{eq:vt}. Each inner iteration evaluates
$2$ stochastic gradients and hence the total work per outer iteration is $\Ocal(n+m)$ in terms of the number of gradient evaluations.
Note that, without running the inner loop, i.e., with $m = 1$, SARAH reduces to the GD algorithm.
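For concreteness, here is a minimal NumPy sketch of one outer iteration of Algorithm~\ref{sarah}; all names, in particular the user-supplied oracle \texttt{grad\_i} returning $\nabla f_i(w)$, are our own illustrative choices:
\begin{verbatim}
import numpy as np

def sarah_outer(w_tilde, grad_i, n, eta, m,
                rng=np.random.default_rng()):
    # Full gradient step: v_0 = (1/n) sum_i grad f_i(w_0).
    w_prev = w_tilde
    v = np.mean([grad_i(w_prev, i) for i in range(n)], axis=0)
    iterates = [w_prev]
    w = w_prev - eta * v
    iterates.append(w)
    for t in range(1, m):
        i_t = rng.integers(n)
        # SARAH update (2); an SVRG inner loop would instead use
        # v = grad_i(w, i_t) - grad_i(iterates[0], i_t) + v_full.
        v = grad_i(w, i_t) - grad_i(w_prev, i_t) + v
        w_prev = w
        w = w - eta * v
        iterates.append(w)
    # Return w_t with t chosen uniformly from {0, 1, ..., m}.
    return iterates[rng.integers(m + 1)]
\end{verbatim}
Calling this function repeatedly, each time feeding its output back in as the next $\tilde{w}_{s-1}$, reproduces the outer loop of Algorithm~\ref{sarah}.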
\section{Theoretical Analysis}
To proceed with the analysis of the proposed algorithm, we will make the following common assumptions.
\begin{ass}[$L$-smooth]
\label{ass_Lsmooth}
Each $f_i: \mathbb{R}^d \to \mathbb{R}$, $i \in [n]$, is $L$-smooth, i.e., there exists a constant $L > 0$ such that
$$
\| \nabla f_i(w) - \nabla f_i(w') \| \leq L \| w - w' \|, \ \forall w,w' \in \mathbb{R}^d.
$$
\end{ass}
Note that this assumption implies that $P(w) = \frac{1}{n}\sum_{i=1}^n f_i(w)$ is also $L$-\emph{smooth}. The following strong convexity assumption
will be made for the appropriate parts of the analysis; otherwise, it is dropped.
\begin{subass}
\begin{ass}[$\mu$-strongly convex]
\label{ass_stronglyconvex}
The function $P: \mathbb{R}^d \to \mathbb{R}$, is $\mu$-strongly convex, i.e., there exists a constant $\mu > 0$ such that $\forall w,w' \in \mathbb{R}^d$,
\begin{gather*}
P(w) \geq P(w') + \nabla P(w')^\top (w - w') + \tfrac{\mu}{2}\|w - w'\|^2.
\end{gather*}
\end{ass}
Another, stronger assumption of $\mu$-strong convexity for \eqref{eq:problem} will also be imposed when required in our analysis. Note that Assumption~\ref{ass_stronglyconvex2} implies Assumption~\ref{ass_stronglyconvex} but not vice versa.
\begin{ass}
\label{ass_stronglyconvex2}
Each function $f_i: \mathbb{R}^d \to \mathbb{R}$, $i \in [n]$, is strongly convex with $\mu>0$.
\end{ass}
\end{subass}
Under Assumption \ref{ass_stronglyconvex}, let us define the (unique) optimal solution of \eqref{eq:problem} as $w^{*}$.
Then strong convexity of $P$ implies that
\begin{equation}\label{eq:strongconvexity2}
2\mu [ P(w) - P(w^{*})] \leq \| \nabla P(w)\|^2, \ \forall w \in \mathbb{R}^d.
\end{equation}
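For completeness, \eqref{eq:strongconvexity2} follows directly from Assumption~\ref{ass_stronglyconvex}: minimizing both sides of the strong convexity inequality over $w$ (the right-hand side is minimized at $w = w' - \nabla P(w')/\mu$) yields
\begin{align*}
P(w^{*}) \geq \min_{w}\Big[ P(w') + \nabla P(w')^\top (w - w') + \tfrac{\mu}{2}\|w - w'\|^2 \Big]
= P(w') - \tfrac{1}{2\mu}\|\nabla P(w')\|^2,
\end{align*}
which is \eqref{eq:strongconvexity2} after renaming $w'$ to $w$.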
We note here, for future use, that for strongly convex functions of the form \eqref{eq:problem}, arising in machine learning applications, the condition number is defined as $\kappa\stackrel{\text{def}}{=} L/\mu$. Furthermore, Assumptions \ref{ass_stronglyconvex} and \ref{ass_stronglyconvex2} both cover a wide range of problems, e.g., $\ell_2$-regularized empirical risk minimization problems with convex losses.
Finally, as a special case of the strong convexity of all $f_i$'s with $\mu=0$, we state the general convexity assumption, which we will use for convergence analysis.
\begin{ass}
\label{ass_convex}
Each function $f_i: \mathbb{R}^d \to \mathbb{R}$, $i \in [n]$, is convex, i.e.,
\begin{gather*}
f_i(w) \geq f_i(w') + \nabla f_i(w')^\top (w - w'), \quad \forall w,w' \in \mathbb{R}^d.
\end{gather*}
\end{ass}
Again, we note that Assumption~\ref{ass_stronglyconvex2} implies Assumption~\ref{ass_convex}, but Assumption \ref{ass_stronglyconvex} does not.
Hence in our analysis, depending on the result we aim at, we will require Assumption~\ref{ass_convex} to hold by itself, or Assumption \ref{ass_stronglyconvex} and Assumption~\ref{ass_convex} to hold together, or Assumption~\ref{ass_stronglyconvex2} to hold by itself. We will always use Assumption \ref{ass_Lsmooth}.
Our iteration complexity analysis aims to bound the number of outer iterations $\mathcal{T}$ (or the total number of stochastic gradient evaluations) needed to guarantee that
$\|\nabla P(w_\mathcal{T})\|^2\leq \epsilon$; in this case we say that $w_\mathcal{T}$ is an $\epsilon$-accurate solution.
However, as is common practice for stochastic gradient algorithms, we aim to bound the number of iterations required to guarantee a bound on the expected squared norm of the gradient, i.e.,
\begin{equation}\label{eq:accuracy}
\Exp [\| \nabla P(w_\mathcal{T}) \|^2] \leq \epsilon.
\end{equation}
\subsection{Linearly Diminishing Step-Size in a Single Inner Loop}\label{sec:linearconvergence}
The most important property of the SVRG algorithm is the variance reduction of the steps. This property holds as the number of outer iterations grows, but it does not hold if only the number of inner iterations increases. In other words, if we simply run the inner loop for many iterations (without executing additional outer loops), the variance of the steps does not reduce in the case of SVRG, while it goes to zero in the case of SARAH.
To illustrate this effect, let us take a look at Figures~\ref{fig:VR1} and \ref{fig:VR2}.
In Figure~\ref{fig:VR1}, we applied one outer loop of SVRG and SARAH to a sum of $5$ quadratic functions in a two-dimensional space, where the optimal solution is at the origin. The black lines and black dots indicate the trajectory of each algorithm and the red point indicates the final iterate.
Initially, both SVRG and SARAH take steps along stochastic gradient directions towards the optimal solution. However, later iterations of SVRG wander randomly around the origin with large deviation from it, while SARAH follows a much more stable convergent trajectory, with a final iterate falling in a small neighborhood of the optimal solution.
\begin{figure}
\centering
\epsfig{file=Figs/ExSVRG.eps,width=0.22\textwidth}
\hspace{4mm}
\epsfig{file=Figs/ExSARAH.eps,width=0.22\textwidth}
\caption{\footnotesize A two-dimensional example of $\min_w P(w)$ with $n=5$ for SVRG (left) and SARAH (right).}
\label{fig:VR1}
$\ $\\$\ $\\
\centering
\epsfig{file=Figs/VR1.eps,width=0.22\textwidth}
\hspace{4mm}
\epsfig{file=Figs/VR20.eps,width=0.22\textwidth}
\caption{\footnotesize An example of $\ell_2$-regularized logistic regression on \emph{rcv1} training dataset for SARAH, SVRG, SGD+ and FISTA with multiple outer iterations (left) and a single outer iteration (right).}
\label{fig:VR2}
\end{figure}
In Figure~\ref{fig:VR2}, the x-axis denotes the \emph{number of effective passes}, which is equivalent to the number of passes through all of the data in the dataset, the cost of each pass being equal to the cost of one full gradient evaluation; the y-axis represents $\|v_t\|^2$.
Figure~\ref{fig:VR2} shows the evolution of $\|v_t\|^2$ for SARAH, SVRG, SGD+ (SGD with decreasing learning rate) and FISTA (an accelerated version of GD~\cite{fista}) with $m=4n$,
where the left plot shows the trend over multiple outer iterations and the right plot shows a single outer iteration\footnote{In the plots of Figure~\ref{fig:VR2}, since the data for SVRG is noisy, we smooth it by using moving average filters with spans 100 for the left plot and 10 for the right one.}. We can see that for SVRG, $\|v_t\|^2$ decreases over the outer iterations, while it has an increasing or oscillating trend within each inner loop. In contrast, SARAH enjoys a decreasing trend both in the outer and the inner loop iterations.
We will now show that the stochastic steps computed by SARAH converge linearly in the inner loop. We present two linear convergence results based on our two different assumptions of $\mu$-\emph{strong convexity}. These results substantiate our conclusion that SARAH uses more stable stochastic gradient estimates than SVRG. The following theorem is our first result to demonstrate the linear convergence of our stochastic recursive step $v_t$.
\begin{subthm}
\begin{thm}\label{lem_bouned_moment_stronglyconvexP}
Suppose that Assumptions \ref{ass_Lsmooth}, \ref{ass_stronglyconvex} and \ref{ass_convex} hold. Consider $v_{t}$ defined by \eqref{eq:vt} in SARAH (Algorithm \ref{sarah}) with $\eta < 2/L$. Then, for any $t\geq 1$,
\begin{align*}
\mathbb{E}[\|v_{t}\|^2]
&\leq \left[ 1 - \left(\tfrac{2}{\eta L} - 1 \right) \mu^2 \eta^2 \right] \mathbb{E}[\|v_{t-1}\|^2]
\\%========================
&\leq \left[ 1 - \left(\tfrac{2}{\eta L} - 1 \right) \mu^2 \eta^2 \right]^{t} \mathbb{E}[\| \nabla P(w_{0}) \|^2].
\end{align*}
\end{thm}
This result implies that by choosing $\eta=\Ocal(1/L)$, we obtain the linear convergence of $\|v_t\|^2$ in expectation with the rate $(1-1/\kappa^2)$. Below we show that
a better convergence rate can be obtained under a stronger convexity assumption.
\begin{thm}\label{thm:bound_moment}
Suppose that Assumptions \ref{ass_Lsmooth} and \ref{ass_stronglyconvex2} hold. Consider $v_{t}$ defined by \eqref{eq:vt} in SARAH (Algorithm \ref{sarah}) with $\eta \leq 2/(\mu+L)$. Then the following bound holds, $\forall\ t\geq 1$,
\begin{align*}
\mathbb{E}[\| v_{t} \|^2 ]
& \leq \left( 1 - \tfrac{2 \mu L \eta}{\mu + L} \right) \Exp[ \|v_{t-1} \|^2 ]
\\%========================
& \leq \left(1 - \tfrac{2\mu L \eta}{\mu + L} \right)^{t} \Exp[ \| \nabla P(w_{0}) \|^2].
\end{align*}
\end{thm}
\end{subthm}
Again, by setting $\eta=\Ocal(1/L)$, we derive the linear convergence with the rate of $(1-1/\kappa)$, which is a significant improvement over the result of Theorem~\ref{lem_bouned_moment_stronglyconvexP}, when the problem is severely ill-conditioned.
\subsection{Convergence Analysis}
In this section, we derive the general convergence rate results for Algorithm \ref{sarah}. First, we present two important lemmas as the foundation of our theory. Then, we prove a sublinear convergence rate of a single outer iteration when applied to general convex functions. Finally, we prove that the algorithm with multiple outer iterations has a linear convergence rate in the strongly convex case.
We begin by proving two useful lemmas that do not require any convexity assumption.
The first, Lemma \ref{lem_main_derivation}, bounds the sum of expected values of $\|\nabla P(w_t)\|^2$; the second, Lemma \ref{lem:var_diff_01}, bounds $\mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]$.
\begin{lem}\label{lem_main_derivation}
Suppose that Assumption \ref{ass_Lsmooth} holds. Consider SARAH (Algorithm \ref{sarah}). Then, we have
\begin{align*}
& \sum_{t=0}^{m} \mathbb{E}[ \| \nabla P(w_{t})\|^2 ] \leq \frac{2}{\eta} \mathbb{E}[ P(w_{0}) - P(w^{*})]
\tagthis \label{eq:001}
\\&\quad+ \sum_{t=0}^{m} \mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]
- ( 1 - L\eta ) \sum_{t=0}^{m} \mathbb{E} [ \| v_{t} \|^2 ].
\end{align*}
\end{lem}
\begin{lem}\label{lem:var_diff_01}
Suppose that Assumption \ref{ass_Lsmooth} holds. Consider $v_{t}$ defined by \eqref{eq:vt} in SARAH (Algorithm \ref{sarah}). Then for any $t\geq 1$,
\begin{align*}
&\mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]
= \sum_{j = 1}^{t} \mathbb{E}[ \| v_{j} - v_{j-1} \|^2 ]
\\&\qquad\qquad
- \sum_{j = 1}^{t} \mathbb{E}[ \| \nabla P(w_{j}) - \nabla P(w_{j-1}) \|^2 ].
\end{align*}
\end{lem}
Now we are ready to provide our main theoretical results.
\subsubsection{General Convex Case}
Following from Lemma \ref{lem:var_diff_01}, we can obtain the following upper bound for $\mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]$ for convex functions $f_i, i\in[n]$.
\begin{lem}\label{lem_bound_var_diff_str_02}
Suppose that Assumptions \ref{ass_Lsmooth} and \ref{ass_convex} hold. Consider $v_{t}$ defined as \eqref{eq:vt} in SARAH (Algorithm \ref{sarah}) with $\eta < 2/L$. Then we have that for any $t\geq 1$,
\begin{align*}
\mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]
&\leq \frac{\eta L}{2 - \eta L} \Big[ \mathbb{E}[ \|v_{0} \|^2] - \mathbb{E}[\| v_{t} \|^2] \Big]
\\%==============================
&\leq \frac{\eta L}{2 - \eta L} \mathbb{E}[ \|v_{0} \|^2].
\tagthis\label{eq:bound1}
\end{align*}
\end{lem}
Using the above lemmas, we can state and prove one of our core theorems as follows.
\begin{thm}\label{thm:generalconvex_01}
Suppose that Assumptions \ref{ass_Lsmooth} and \ref{ass_convex} hold. Consider SARAH (Algorithm \ref{sarah}) with $\eta \leq 1/L$. Then for any $s\geq 1$, we have
\begin{align*}
\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ]
&\leq \frac{2}{\eta (m + 1)} \mathbb{E}[ P(\tilde w_{s-1}) - P(w^{*})]
\\
&\qquad + \frac{ \eta L}{2 - \eta L} \mathbb{E}[ \| \nabla P(\tilde w_{s-1})\|^2 ]. \tagthis \label{eq:agasgsasw}
\end{align*}
\end{thm}
\begin{proof}
Since $v_0=\nabla P(w_0)$ implies $\| \nabla P(w_{0}) - v_{0} \|^2 = 0$, by Lemma \ref{lem_bound_var_diff_str_02} we can write
\begin{align*}
& \textstyle{\sum}_{t=0}^{m} \mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]
\leq \frac{m\eta L}{2 - \eta L} \mathbb{E}[ \|v_{0} \|^2]. \tagthis \label{eq:abcdef}
\end{align*}
Hence, by Lemma \ref{lem_main_derivation} with $\eta\leq 1/L$, we have
\begin{align*}
& \textstyle{\sum}_{t=0}^{m} \mathbb{E}[ \| \nabla P(w_{t})\|^2 ] \\
& \leq \tfrac{2}{\eta} \mathbb{E}[ P(w_{0}) - P(w^{*})] + \textstyle{\sum}_{t=0}^{m} \mathbb{E}[ \| \nabla P(w_{t}) - v_{t} \|^2 ]
\\%==============================
& \overset{\eqref{eq:abcdef}}{\leq} \tfrac{2}{\eta} \mathbb{E}[ P(w_{0}) - P(w^{*})] + \tfrac{m\eta L}{2 - \eta L} \mathbb{E}[ \| v_{0} \|^2 ]. \tagthis\label{eq:thm1conv}
\end{align*}
Since we are considering one outer iteration with $s\geq 1$, we have $v_0 = \nabla P(w_0) = \nabla P(\tilde w_{s-1})$ (since $w_0 =\tilde{w}_{s-1}$), and $\tilde{w}_s = w_t$, where $t$ is picked uniformly at random from $\{0,1,\dots,m\}$. Therefore, the following holds,
\begin{align*}
\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ] &= \tfrac{1}{m+1}\textstyle{\sum}_{t=0}^{m} \mathbb{E}[ \| \nabla P(w_{t})\|^2 ]
\\%==============================
&\overset{\eqref{eq:thm1conv}}{\leq} \tfrac{2}{\eta (m + 1)} \mathbb{E}[ P(\tilde w_{s-1}) - P(w^{*})]
\\
&\qquad + \tfrac{ \eta L}{2 - \eta L} \mathbb{E}[ \| \nabla P(\tilde w_{s-1})\|^2 ]. \qedhere
\end{align*}
\end{proof}
Theorem~\ref{thm:generalconvex_01}, in the case when $\eta\leq 1/L$, implies that
\begin{align*}
\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ]
&\leq \tfrac{2}{\eta (m + 1)} \mathbb{E}[ P(\tilde w_{s-1}) - P(w^{*})]
\\
&\qquad + { \eta L} \mathbb{E}[ \| \nabla P(\tilde w_{s-1})\|^2].
\end{align*}
By choosing the learning rate $\eta = \sqrt{\frac{2}{L(m+1)}}$ (with $m$ such that $\sqrt{\frac{2}{L(m+1)}}\leq 1/L$) we can derive the following convergence result,
\begin{align*}
&\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ]
\\
&\qquad \leq \sqrt{\tfrac{2L}{m + 1}} \mathbb{E}[ P(\tilde w_{s-1}) - P(w^{*}) + \| \nabla P(\tilde w_{s-1})\|^2].
\end{align*}
Clearly, this result shows a sublinear convergence rate for SARAH under the general convexity assumption within a single outer iteration, with increasing $m$,
and consequently we have the following complexity bound.
\begin{cor}\label{cor:generalconvex_1}
Suppose that Assumptions \ref{ass_Lsmooth} and \ref{ass_convex} hold. Consider SARAH (Algorithm \ref{sarah}) within a single outer iteration with the learning rate $\eta = \sqrt{\frac{2}{L(m+1)}}$, where $m\geq 2L - 1$ is the total number of iterations. Then $\|\nabla P(w_t)\|^2$ converges
sublinearly in expectation with a rate of $\sqrt{\frac{2L}{m+1}}$, and therefore the total complexity to achieve an $\epsilon$-accurate solution as defined in \eqref{eq:accuracy} is $\Ocal(n+1/\epsilon^2)$.
\end{cor}
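To see where the bound in Corollary~\ref{cor:generalconvex_1} comes from: by the preceding display, the expected squared gradient norm falls below $\epsilon$ (up to the bounded initial quantity $\mathbb{E}[P(\tilde w_{0}) - P(w^{*}) + \| \nabla P(\tilde w_{0})\|^2]$) once
\begin{align*}
\sqrt{\tfrac{2L}{m+1}} \leq \epsilon
\quad\Longleftrightarrow\quad
m + 1 \geq \tfrac{2L}{\epsilon^2},
\end{align*}
and the single outer iteration costs $n$ gradient evaluations for $v_0$ plus two stochastic gradients per inner step, i.e. $n + 2m = \Ocal(n + 1/\epsilon^2)$ in total.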
\begin{figure*}
\centering
\epsfig{file=Figs/stepsize.eps,width=0.3\textwidth}
\epsfig{file=Figs/rate1.eps,width=0.3\textwidth}
\epsfig{file=Figs/rate2.eps,width=0.3\textwidth}
\caption{\footnotesize Theoretical comparisons of learning rates (left) and convergence rates (middle and right) with $n=1,000,000$ for SVRG and SARAH in one inner loop.}
\label{fig:comparison}
\end{figure*}
We now turn to estimating the convergence of SARAH with multiple outer steps. Simply using Theorem~\ref{thm:generalconvex_01} for each of the outer steps, we have the following result.
\begin{thm}\label{thm:generalconvex_02}
Suppose that Assumptions \ref{ass_Lsmooth} and \ref{ass_convex} hold. Consider SARAH (Algorithm \ref{sarah}) and define
\begin{align*}
\delta_{k} &= \tfrac{2}{\eta(m+1)} \mathbb{E}[ P(\tilde{w}_{k}) - P(w^{*})], \ k = 0,1,\dots,s-1,
\end{align*}
and $\delta = \max_{0 \leq k \leq s-1} \delta_{k}$. Then we have
\begin{align*}
\tagthis\label{asdfsfas}
\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ] - \Delta \leq \alpha^s ( \| \nabla P(\tilde{w}_0)\|^2 - \Delta),
\end{align*}
where
$\Delta = \delta\left(1 + \tfrac{\eta L}{2(1 - \eta L)} \right),$ and
$\alpha = \tfrac{\eta L}{2 - \eta L}.$
\end{thm}
Based on Theorem \ref{thm:generalconvex_02}, we have the following total complexity for SARAH in the general convex case.
\begin{cor}\label{cor:thm_3_complexity}
Let us choose $\Delta = \epsilon/4$, $\alpha=1/2$ (with $\eta = 2/(3 L)$), and $m = \Ocal(1/\epsilon)$ in Theorem \ref{thm:generalconvex_02}. Then, the total complexity to achieve an $\epsilon$-accurate solution as defined in \eqref{eq:accuracy} is $\Ocal((n + (1/\epsilon))\log(1/\epsilon))$.
\end{cor}
\subsubsection{Strongly Convex Case}
We now turn to
the discussion of the linear convergence rate of SARAH under the strong convexity assumption on $P$.
From Theorem~\ref{thm:generalconvex_01}, for any $s\geq 1$, using property \eqref{eq:strongconvexity2} of the $\mu$-\emph{strongly convex} $P$, we have
\begin{align*}
&\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ]
\leq \tfrac{2}{\eta (m + 1)} \mathbb{E}[ P(\tilde w_{s-1}) - P(w^{*})]
\\
&\qquad\qquad\qquad\qquad + \tfrac{ \eta L}{2 - \eta L} \mathbb{E}[ \| \nabla P(\tilde w_{s-1})\|^2 ]
\\
&\qquad \overset{\eqref{eq:strongconvexity2}}{\leq}
\left( \tfrac{1}{\mu \eta (m+1)} + \tfrac{\eta L}{2 - \eta L} \right) \mathbb{E}[ \| \nabla P(\tilde w_{s-1})\|^2 ],
\end{align*}
and equivalently, defining $\sigma_m \stackrel{\text{def}}{=} \tfrac{1}{\mu \eta (m + 1)} + \tfrac{\eta L}{2 - \eta L}$, we have
\begin{equation}\label{eq:recursive}
\Exp [ \| \nabla P(\tilde{w}_s)\|^2 ] \leq \sigma_m \Exp [ \| \nabla P(\tilde w_{s-1})\|^2 ].
\end{equation}
By choosing $\eta$ and $m$ such that $\sigma_m<1$, and applying \eqref{eq:recursive} recursively, we reach the following convergence result.
\begin{thm}\label{thm:stronglyconvexconvergence}
Suppose that Assumptions \ref{ass_Lsmooth}, \ref{ass_stronglyconvex} and \ref{ass_convex} hold. Consider SARAH (Algorithm \ref{sarah})
with the choice of $\eta$ and $m$ such that
\begin{equation}\label{eq:sigma0}
\sigma_m \stackrel{\text{def}}{=} \frac{1}{\mu \eta (m + 1)} + \frac{\eta L}{2 - \eta L} < 1.
\end{equation}
Then, we have
\begin{align*}
\mathbb{E}[ \| \nabla P(\tilde{w}_s)\|^2 ] \leq (\sigma_m)^s \| \nabla P(\tilde{w}_0)\|^2.
\end{align*}
\end{thm}
\begin{remark}\label{rem1}
Theorem~\ref{thm:stronglyconvexconvergence} implies that any $\eta<1/L$ will work for SARAH. Let us compare our convergence rate to that of SVRG. The linear rate of SVRG, as presented in \cite{SVRG}, is given by
\begin{align*}
\alpha_m = \tfrac{1}{\mu \eta (1 - 2 L \eta) m} + \tfrac{2 \eta L}{1 - 2 \eta L} < 1.
\end{align*}
We observe that this implies that the learning rate has to satisfy $\eta < 1/(4L)$, which is a tighter restriction than the $\eta<1/L$ required by SARAH. In addition, with the same values of $m$ and $\eta$, the rate of convergence of the outer iterations of SARAH is always smaller than that of SVRG:
\begin{align*}
\sigma_m &= \tfrac{1}{\mu \eta (m + 1)} + \tfrac{\eta L}{2 - \eta L} = \tfrac{1}{\mu \eta (m + 1)} + \tfrac{1}{2/(\eta L) - 1}
\\%==============================
&< \tfrac{1}{\mu \eta (1 - 2 L \eta) m} + \tfrac{1}{0.5/(\eta L) - 1} = \alpha_m.
\end{align*}
\end{remark}
\begin{remark}\label{rem2}
To further demonstrate the better convergence properties of SARAH, let us consider the following optimization problems
$$\min_{0<\eta<1/L}\ \sigma_m,
\qquad \min_{0<\eta<1/{4L}}\ \alpha_m,$$
which can be interpreted as the best convergence rates for different values of $m$, for both SARAH and SVRG. After simple calculations, we
plot both learning rates and the corresponding theoretical rates of convergence, as shown in Figure~\ref{fig:comparison},
where the right plot is a zoom-in on a part of the middle plot. The left plot shows that the optimal learning rate for SARAH is significantly larger than that of SVRG, while the other two plots show significant improvement upon outer iteration convergence rates for SARAH over SVRG.
\end{remark}
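The curves in Figure~\ref{fig:comparison} can be reproduced (up to plotting) by a direct grid search over the admissible learning rates; the following is a sketch under our own naming, with $L$ normalized to $1$ so that $\mu = 1/\kappa$:
\begin{verbatim}
import numpy as np

def best_rates(m, mu, L=1.0, grid=100000):
    # SARAH: minimize sigma_m over eta in (0, 1/L).
    eta_s = np.linspace(1e-9, 1.0 / L, grid)
    sigma = (1.0 / (mu * eta_s * (m + 1))
             + eta_s * L / (2.0 - eta_s * L))
    # SVRG: minimize alpha_m over eta in (0, 1/(4L)).
    eta_v = np.linspace(1e-9, 1.0 / (4.0 * L), grid)
    alpha = (1.0 / (mu * eta_v * (1.0 - 2.0 * L * eta_v) * m)
             + 2.0 * eta_v * L / (1.0 - 2.0 * eta_v * L))
    i, j = sigma.argmin(), alpha.argmin()
    return (eta_s[i], sigma[i]), (eta_v[j], alpha[j])
\end{verbatim}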
Based on Theorem~\ref{thm:stronglyconvexconvergence}, we are able to derive the following total complexity for SARAH in the strongly convex case.
\begin{cor}\label{cor:complexity}
Fix $\epsilon\in(0,1)$, and let us run SARAH with $\eta = 1/(2L)$ and $m = 4.5\kappa$ for $\mathcal{T}$ outer iterations, where
$\mathcal{T} = \lceil \log(\|\nabla P(\tilde{w}_0)\|^2/\epsilon)/\log(9/7) \rceil.$
Then we obtain an $\epsilon$-accurate solution as defined in \eqref{eq:accuracy}. Furthermore, the total complexity of SARAH to achieve this $\epsilon$-accurate solution is
$\Ocal\left((n+\kappa)\log(1/\epsilon)\right).$
\end{cor}
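The constants can be verified directly: with $\eta = 1/(2L)$ and $m = 4.5\kappa$,
\begin{align*}
\sigma_m = \frac{1}{\mu \eta (m+1)} + \frac{\eta L}{2 - \eta L}
= \frac{2\kappa}{m+1} + \frac{1/2}{3/2}
< \frac{2\kappa}{4.5\kappa} + \frac{1}{3} = \frac{7}{9},
\end{align*}
so Theorem~\ref{thm:stronglyconvexconvergence} gives $\mathbb{E}[\|\nabla P(\tilde{w}_\mathcal{T})\|^2] \leq (7/9)^{\mathcal{T}} \|\nabla P(\tilde{w}_0)\|^2 \leq \epsilon$ for the stated $\mathcal{T}$, and each outer iteration costs $n + 2m = \Ocal(n + \kappa)$ stochastic gradient evaluations.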
\section{A Practical Variant}
\label{sec:sarahplus}
While SVRG is an efficient variance-reducing stochastic gradient method, one of its main drawbacks is the sensitivity of its practical performance to the choice of $m$. It is known that $m$ should be around $\Ocal(\kappa)$,\footnote{
In practice, when $n$ is large, $P(w)$ is often considered as a regularized empirical loss minimization problem with regularization parameter $\lambda = \frac1n$, in which case $\kappa \sim \Ocal(n).$
}
while the exact best choice remains unknown. In this section, we propose a practical variant of SARAH, called SARAH+ (Algorithm \ref{sarah_sc}), which provides an automatic and adaptive choice of the inner loop size $m$. Guided by the linear convergence of the steps in the inner loop, demonstrated in Figure~\ref{fig:VR2}, we introduce a stopping criterion based on the values of $\|v_t\|^2$, while upper-bounding the total number of steps by a large enough $m$ for robustness. The other modification compared to SARAH (Algorithm \ref{sarah}) is the more practical choice $\tilde{w}_s = w_{t}$, where $t$ is the last index of the particular inner loop, instead of a randomly selected intermediate index.
\begin{algorithm}
\caption{SARAH+}
\label{sarah_sc}
\begin{algorithmic}
\STATE {\bfseries Parameters:} the learning rate $\eta > 0$, $0 < \gamma \leq 1$ and the maximum inner loop size $m$.
\STATE {\bfseries Initialize:} $\tilde{w}_0$
\STATE {\bfseries Iterate:}
\FOR{$s=1,2,\dots$}
\STATE $w_0 = \tilde{w}_{s-1}$
\STATE $v_0 = \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(w_0)$
\STATE $w_1 = w_0 - \eta v_0$
\STATE $t = 1$
\WHILE{$\|v_{t-1}\|^2 > \gamma \|v_{0}\|^2$ {\bf and} $t<m$}
\STATE Sample $i_{t}$ uniformly at random from $[n]$
\STATE $v_{t} = \nabla f_{i_{t}} (w_{t}) - \nabla f_{i_{t}}(w_{t-1}) + v_{t-1}$
\STATE $w_{t+1} = w_{t} - \eta v_{t}$
\STATE $t = t + 1$
\ENDWHILE
\STATE Set $\tilde{w}_s = w_{t}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
Different from SARAH, SARAH+ provides the possibility of earlier termination and avoids the need for a careful choice of $m$; it also covers classical gradient descent when we set $\gamma = 1$ (since the while loop does not proceed). In Figure~\ref{fig:SARAHplus} we present the numerical performance of SARAH+ with different $\gamma$s on the \emph{rcv1} and \emph{news20} datasets. The size of the inner loop provides a trade-off between the fast sublinear convergence in the inner loop and the linear convergence in the outer loop. From the results, it appears that $\gamma=1/8$ is the optimal choice. With a larger $\gamma$, i.e., $\gamma > 1/8$, the iterates in the inner loop do not provide sufficient reduction before another full gradient computation is required, while with $\gamma < 1/8$ an unnecessary number of inner steps is performed without gaining substantial progress. Clearly $\gamma$ is another parameter that requires tuning; however, in our experiments, the performance of SARAH+ has been very robust with respect to the choice of $\gamma$ and did not vary much from one data set to another.
Similarly to SVRG, $\|v_t\|^2$ decreases in the outer iterations of SARAH+. However, unlike SVRG, SARAH+ also inherits from SARAH the consistent decrease of $\|v_t\|^2$ in expectation in the inner loops. It is not possible to apply the same idea of adaptively terminating the inner loop of SVRG based on the reduction in $\|v_t\|^2$, as $\|v_t\|^2$ may fluctuate, as shown in Figure~\ref{fig:VR2}.
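A minimal sketch of the SARAH+ inner loop, in the same illustrative notation as the SARAH sketch above (the oracle \texttt{grad\_i} and all names are our own):
\begin{verbatim}
import numpy as np

def sarah_plus_inner(w0, grad_i, n, eta, gamma, m, rng):
    # Full gradient step, as in SARAH.
    v = np.mean([grad_i(w0, i) for i in range(n)], axis=0)
    v0_norm2 = v.dot(v)
    w_prev, w, t = w0, w0 - eta * v, 1
    # Stop early once ||v_{t-1}||^2 <= gamma * ||v_0||^2,
    # or after at most m inner steps; gamma = 1 recovers GD.
    while v.dot(v) > gamma * v0_norm2 and t < m:
        i_t = rng.integers(n)
        v = grad_i(w, i_t) - grad_i(w_prev, i_t) + v
        w_prev, w, t = w, w - eta * v, t + 1
    return w  # SARAH+ returns the last inner iterate.
\end{verbatim}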
\begin{figure}
\epsfig{file=Figs/SARAHplusrcv1.eps,width=0.23\textwidth}
\epsfig{file=Figs/SARAHplusnews20.eps,width=0.23\textwidth}
\caption{\footnotesize An example of $\ell_2$-regularized logistic regression on \emph{rcv1} (left) and \emph{news20} (right) training datasets for SARAH+ with different $\gamma$s on loss residuals $P(w)-P(w^*)$.}
\label{fig:SARAHplus}
\end{figure}
\section{Numerical Experiments}
\begin{figure*}
\centering
\epsfig{file=Figs/covtype.eps,width=0.23\textwidth}
\epsfig{file=Figs/ijcnn1test.eps,width=0.23\textwidth}
\epsfig{file=Figs/news20.eps,width=0.23\textwidth}
\epsfig{file=Figs/rcv1test.eps,width=0.23\textwidth}
\epsfig{file=Figs/covtypeerror.eps,width=0.23\textwidth}
\epsfig{file=Figs/ijcnn1testerror.eps,width=0.23\textwidth}
\epsfig{file=Figs/news20error.eps,width=0.23\textwidth}
\epsfig{file=Figs/rcv1testerror.eps,width=0.23\textwidth}
\caption{\footnotesize Comparisons of loss residuals $P(w) - P(w^*)$ (top) and test errors (bottom) from different modern stochastic methods on \emph{covtype, ijcnn1, news20} and \emph{rcv1}.}
\label{fig:test_errors}
\end{figure*}
To support the theoretical analyses and insights, we present our empirical experiments, comparing SARAH and SARAH+ with state-of-the-art first-order methods for $\ell_2$-regularized logistic regression problems with
$$f_i(w) = \log (1+\exp(-y_ix_i^Tw)) + \tfrac{\lambda}{2}\|w\|^2,$$
on the datasets \emph{covtype, ijcnn1, news20} and \emph{rcv1}\footnote{All datasets are available at \burl{http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/}.}. For \emph{ijcnn1} and \emph{rcv1} we use the predefined testing and training sets, while \emph{covtype} and \emph{news20} do not have test data, so we randomly split these datasets with $70\%$ for training and $30\%$ for testing. Some statistics of the datasets are summarized in Table~\ref{table:datasets}.
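As a side note, the component gradients used by all methods here have a closed form; a sketch in our own notation (rows $x_i \in \mathbb{R}^d$, labels $y_i \in \{\pm 1\}$):
\begin{verbatim}
import numpy as np

def grad_fi(w, x_i, y_i, lam):
    # Gradient of f_i(w) = log(1 + exp(-y_i x_i^T w))
    #                      + (lam / 2) ||w||^2.
    return (-y_i * x_i / (1.0 + np.exp(y_i * x_i.dot(w)))
            + lam * w)
\end{verbatim}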
\begin{table}
\scriptsize
\centering
\caption{Summary of datasets used for experiments.}
\label{table:datasets}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Dataset & $d$ & $n$ (train) & Sparsity & $n$ (test) & $L$ \\
\hline \hline
\emph{covtype} & 54 & 406,709 & 22.12\% & 174,303 & 1.90396 \\
\hline
\emph{ijcnn1} & 22 & 91,701 & 59.09\% & 49,990 & 1.77662 \\
\hline
\emph{news20} & 1,355,191 & 13,997 & 0.03375\% & 5,999 & 0.2500 \\
\hline
\emph{rcv1} & 47,236 & 677,399 & 0.1549\% & 20,242 & 0.2500\\
\hline
\end{tabular}
\end{table}
The penalty parameter $\lambda$ is set to $1/n$, as is common practice \cite{SAG}. Note that, like SVRG/S2GD and SAG/SAGA, SARAH also allows an efficient sparse implementation named ``lazy updates''~\cite{konecny2015mini}.
We conduct and compare numerical experiments of SARAH with SVRG, SAG, SGD+ and FISTA. SVRG~\cite{SVRG} and SAG~\cite{SAG} are classic modern stochastic methods. SGD+ is SGD with decreasing learning rate $\eta=\eta_0/(k+1)$, where $k$ is the number of effective passes and $\eta_0$ is some initial constant learning rate. FISTA~\cite{fista} is the Fast Iterative Shrinkage-Thresholding Algorithm, well known as an efficient accelerated version of gradient descent. Even though each method has a theoretically safe learning rate, we compare the results for the best learning rates in hindsight.
Figure~\ref{fig:test_errors} shows numerical results in terms of loss residuals (top) and test errors (bottom) on the four datasets. SARAH is sometimes comparable to or slightly worse than the other methods at the beginning; however, it quickly catches up to or surpasses all other methods, demonstrating a faster rate of decrease across all experiments. We observe that on \emph{covtype} and \emph{rcv1}, SARAH, SVRG and SAG are comparable, with some advantage of SARAH on \emph{covtype}. On \emph{ijcnn1} and \emph{news20}, SARAH and SVRG consistently surpass the other methods.
\begin{table}
\scriptsize
\centering
\caption{Summary of best parameters for all the algorithms on different datasets.}
\label{table:stats}
\begin{tabular}{|c|C{1.25cm}|C{1.25cm}|C{0.7cm}|C{0.7cm}|C{0.7cm}|}
\hline
Dataset & SARAH $(m^*,\eta^*)$ & SVRG $(m^*,\eta^*)$ & SAG ($\eta^*$) & SGD+ ($\eta^*$) & FISTA ($\eta^*$) \\
\hline \hline
\emph{covtype} & (2n, 0.9/L)& (n, 0.8/L) & 0.3/L & 0.06/L & 50/L\\
\hline
\emph{ijcnn1} & (0.5n, 0.8/L) & (n, 0.5/L) & 0.7/L & 0.1/L & 90/L \\%& 1.77662 \\
\hline
\emph{news20} & (0.5n, 0.9/L) & (n, 0.5/L) & 0.1/L & 0.2/L & 30/L \\%& 0.2500 \\
\hline
\emph{rcv1} & (0.7n, 0.7/L) & (0.5n, 0.9/L) & 0.1/L & 0.1/L & 120/L \\
\hline
\end{tabular}
\end{table}
In particular, to validate the efficiency of our practical variant SARAH+, we provide an insight into how important the choices of $m$ and $\eta$ are for SVRG and SARAH in Table~\ref{table:stats} and Figure~\ref{fig:ms}. Table~\ref{table:stats} presents the optimal choices of $m$ and $\eta$ for each of the algorithms, while Figure~\ref{fig:ms} shows the behavior of SVRG and SARAH with different choices of $m$ for \emph{covtype} and \emph{ijcnn1}, where $m^*$ denotes the best choice.
In Table~\ref{table:stats}, the optimal learning rates of SARAH vary less among the datasets than those of all the other methods, and they approach the theoretical upper bound for SARAH ($1/L$); on the contrary, for the other methods the empirical optimal rates can exceed their theoretical limits (SVRG with $1/(4L)$, SAG with $1/(16L)$, FISTA with $1/L$). These empirical studies suggest that it is much easier to tune and find the ideal learning rate for SARAH. As observed in Figure~\ref{fig:ms}, the behavior of both SARAH and SVRG is quite sensitive to the choice of $m$. With improper choices of $m$, the loss residuals can increase considerably, from $10^{-15}$ to $10^{-3}$, on both \emph{covtype} in 40 effective passes and \emph{ijcnn1} in 17 effective passes for SARAH/SVRG.
\begin{figure}
\centering
\epsfig{file=Figs/covtype_SVRG.eps,width=0.23\textwidth}
\epsfig{file=Figs/ijcnn1_SVRG.eps,width=0.23\textwidth}
\epsfig{file=Figs/covtype_SARAH.eps,width=0.23\textwidth}
\epsfig{file=Figs/ijcnn1_SARAH.eps,width=0.23\textwidth}
\caption{\footnotesize Comparisons of loss residuals $P(w) - P(w^*)$ for different inner loop sizes with SVRG (top) and SARAH (bottom) on \emph{covtype} and \emph{ijcnn1}.}
\label{fig:ms}
\end{figure}
\section{Conclusion}
We propose a new variance-reducing stochastic recursive gradient algorithm, SARAH, which combines some of the properties of well-known existing algorithms such as SAGA and SVRG. For smooth convex functions, we show a sublinear convergence rate, while for strongly convex problems we prove a linear convergence rate with the same computational complexity as SVRG and SAG. However, compared to SVRG, SARAH's convergence rate constant is smaller and the algorithm is more stable, both theoretically and numerically. Additionally, we prove linear convergence for the inner loops of SARAH, which supports the claim of stability. Based on this convergence, we derive a practical version of SARAH, with a simple stopping criterion for the inner loops.
\section*{Acknowledgements}
The authors would like to thank the reviewers for useful suggestions which helped to improve the exposition in the paper.
\section{Electronic Submission}
\label{submission}
Submission to ICML 2017 will be entirely electronic, via a web site
(not email). Information about the submission process and \LaTeX\ templates
are available on the conference web site at:
\begin{center}
\textbf{\texttt{http://icml.cc/2017/}}
\end{center}
Send questions about submission and electronic templates to
\texttt{icml2017pc@gmail.com}.
The guidelines below will be enforced for initial submissions and
camera-ready copies. Here is a brief summary:
\begin{itemize}
\item Submissions must be in PDF.
\item The maximum paper length is \textbf{8 pages excluding references and acknowledgements, and 10 pages
including references and acknowledgements} (pages 9 and 10 must contain only references and acknowledgements).
\item Do \textbf{not include author information or acknowledgements} in your initial
submission.
\item Your paper should be in \textbf{10 point Times font}.
\item Make sure your PDF file only uses Type-1 fonts.
\item Place figure captions {\em under} the figure (and omit titles from inside
the graphic file itself). Place table captions {\em over} the table.
\item References must include page numbers whenever possible and be as complete
as possible. Place multiple citations in chronological order.
\item Do not alter the style template; in particular, do not compress the paper
format by reducing the vertical spaces.
\item Keep your abstract brief and self-contained, one
paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase.
Title should have content words capitalized.
\end{itemize}
\subsection{Submitting Papers}
{\bf Paper Deadline:} The deadline for paper submission to ICML 2017
is at \textbf{23:59 Universal Time (3:59 p.m.\ Pacific Standard Time) on February 24, 2017}.
If your full submission does not reach us by this time, it will
not be considered for publication. There is no separate abstract submission.
{\bf Anonymous Submission:} To facilitate blind review, no identifying
author information should appear on the title page or in the paper
itself. Section~\ref{author info} will explain the details of how to
format this.
{\bf Simultaneous Submission:} ICML will not accept any paper which,
at the time of submission, is under review for another conference or
has already been published. This policy also applies to papers that
overlap substantially in technical content with conference papers
under review or previously published. ICML submissions must not be
submitted to other conferences during ICML's review period. Authors
may submit to ICML substantially different versions of journal papers
that are currently under review by the journal, but not yet accepted
at the time of submission. Informal publications, such as technical
reports or papers in workshop proceedings which do not appear in
print, do not fall under these restrictions.
\medskip
To ensure our ability to print submissions, authors must provide their
manuscripts in \textbf{PDF} format. Furthermore, please make sure
that files contain only Type-1 fonts (e.g.,~using the program {\tt
pdffonts} in linux or using File/DocumentProperties/Fonts in
Acrobat). Other fonts (like Type-3) might come from graphics files
imported into the document.
Authors using \textbf{Word} must convert their document to PDF. Most
of the latest versions of Word have the facility to do this
automatically. Submissions will not be accepted in Word format or any
format other than PDF. Really. We're not joking. Don't send Word.
Those who use \textbf{\LaTeX} to format their accepted papers need to pay close
attention to the typefaces used. Specifically, when producing the PDF by first
converting the dvi output of \LaTeX\ to Postscript the default behavior is to
use non-scalable Type-3 PostScript bitmap fonts to represent the standard
\LaTeX\ fonts. The resulting document is difficult to read in electronic form;
the type appears fuzzy. To avoid this problem, dvips must be instructed to use
an alternative font map. This can be achieved with the following two commands:
{\footnotesize
\begin{verbatim}
dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi
ps2pdf paper.ps
\end{verbatim}}
Note that it is a zero following the ``-G''. This tells dvips to use
the config.pdf file (and this file refers to a better font mapping).
A better alternative is to use the \textbf{pdflatex} program instead of
straight \LaTeX. This program avoids the Type-3 font problem, however you must
ensure that all of the fonts are embedded (use {\tt pdffonts}). If they are
not, you need to configure pdflatex to use a font map file that specifies that
the fonts be embedded. Also you should ensure that images are not downsampled
or otherwise compressed in a lossy way.
Note that the 2017 style files use the {\tt hyperref} package to
make clickable links in documents. If this causes problems for you,
add {\tt nohyperref} as one of the options to the {\tt icml2017}
usepackage statement.
\subsection{Reacting to Reviews}
We will continue the ICML tradition in which the authors are given the
option of providing a short reaction to the initial reviews. These
reactions will be taken into account in the discussion among the
reviewers and area chairs.
\subsection{Submitting Final Camera-Ready Copy}
The final versions of papers accepted for publication should follow the
same format and naming convention as initial submissions, except of
course that the normal author information (names and affiliations)
should be given. See Section~\ref{final author} for details of how to
format this.
The footnote, ``Preliminary work. Under review by the International
Conference on Machine Learning (ICML). Do not distribute.'' must be
modified to ``\textit{Proceedings of the
$\mathit{34}^{th}$ International Conference on Machine Learning},
Sydney, Australia, PMLR 70, 2017.
Copyright 2017 by the author(s).''
For those using the \textbf{\LaTeX} style file, this change (and others) is
handled automatically by simply changing
$\mathtt{\backslash usepackage\{icml2017\}}$ to
$$\mathtt{\backslash usepackage[accepted]\{icml2017\}}$$
Authors using \textbf{Word} must edit the
footnote on the first page of the document themselves.
Camera-ready copies should have the title of the paper as running head
on each page except the first one. The running title consists of a
single line centered above a horizontal rule which is $1$ point thick.
The running head should be centered, bold and in $9$ point type. The
rule should be $10$ points above the main text. For those using the
\textbf{\LaTeX} style file, the original title is automatically set as running
head using the {\tt fancyhdr} package which is included in the ICML
2017 style file package. In case that the original title exceeds the
size restrictions, a shorter form can be supplied by using
\verb|\icmltitlerunning{...}|
just before $\mathtt{\backslash begin\{document\}}$.
Authors using \textbf{Word} must edit the header of the document themselves.
\section{Format of the Paper}
All submissions must follow the same format to ensure the printer can
reproduce them without problems and to let readers more easily find
the information that they desire.
\subsection{Length and Dimensions}
Papers must not exceed eight (8) pages, including all figures, tables,
and appendices, but excluding references and acknowledgements. When references and acknowledgements are included,
the paper must not exceed ten (10) pages.
Acknowledgements should be limited to grants and people who contributed to the paper.
Any submission that exceeds
this page limit or that diverges significantly from the format specified
herein will be rejected without review.
The text of the paper should be formatted in two columns, with an
overall width of 6.75 inches, height of 9.0 inches, and 0.25 inches
between the columns. The left margin should be 0.75 inches and the top
margin 1.0 inch (2.54~cm). The right and bottom margins will depend on
whether you print on US letter or A4 paper, but all final versions
must be produced for US letter size.
The paper body should be set in 10~point type with a vertical spacing
of 11~points. Please use Times typeface throughout the text.
\subsection{Title}
The paper title should be set in 14~point bold type and centered
between two horizontal rules that are 1~point thick, with 1.0~inch
between the top rule and the top edge of the page. Capitalize the
first letter of content words and put the rest of the title in lower
case.
\subsection{Author Information for Submission}
\label{author info}
To facilitate blind review, author information must not appear. If
you are using \LaTeX\/ and the \texttt{icml2017.sty} file, you may use
\verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information
will not be printed unless {\tt accepted} is passed as an argument to the
style file. (Again, see the TeX code used to produce this PDF.)
Submissions that include the author information will not
be reviewed.
\subsubsection{Self-Citations}
If your are citing published papers for which you are an author, refer
to yourself in the third person. In particular, do not use phrases
that reveal your identity (e.g., ``in previous work \cite{langley00}, we
have shown \ldots'').
Do not anonymize citations in the reference section by removing or
blacking out author names. The only exception are manuscripts that are
not yet published (e.g. under submission). If you choose to refer to
such unpublished manuscripts \cite{anonymous}, anonymized copies have
to be submitted
as Supplementary Material via CMT. However, keep in mind that an ICML
paper should be self contained and should contain sufficient detail
for the reviewers to evaluate the work. In particular, reviewers are
not required to look a the Supplementary Material when writing their
review.
\subsubsection{Camera-Ready Author Information}
\label{final author}
If a paper is accepted, a final camera-ready copy must be prepared.
For camera-ready papers, author information should start 0.3~inches
below the bottom rule surrounding the title. The authors' names should
appear in 10~point bold type, in a row, separated by white space, and centered.
Author names should not be broken across lines.
Unbolded superscripted numbers, starting 1, should be used to refer to affiliations.
Affiliations should be numbered in the order of appearance. A single footnote block of text should be used to list all the affiliations. (Academic affiliations should list Department, University, City, State/Region, Country. Similarly for industrial affiliations.)
Each distinct affiliations should be listed once. If an author has multiple affiliations, multiple superscripts should be placed after the name, separated by thin spaces. If the authors would like to highlight equal contribution by multiple first authors, those authors should have an asterisk placed after their name in superscript, and the term ``\textsuperscript{*}Equal contribution" should be placed in the footnote block ahead of the list of affiliations. A list of corresponding authors and their emails (in the format Full Name \textless{}email@domain.com\textgreater{}) can follow the list of affiliations. Ideally only one or two names should be listed.
A sample file (in PDF) with author names is included in the ICML2017
style file package. Turn on the \texttt{[accepted]} option to the ICML stylefile to see the names rendered.
All of the guidelines above are automatically met by the \LaTeX\ style file.
\subsection{Abstract}
The paper abstract should begin in the left column, 0.4~inches below
the final address. The heading `Abstract' should be centered, bold,
and in 11~point type. The abstract body should use 10~point type, with
a vertical spacing of 11~points, and should be indented 0.25~inches
more than normal on left-hand and right-hand margins. Insert
0.4~inches of blank space after the body. Keep your abstract brief and
self-contained,
limiting it to one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase.
\subsection{Partitioning the Text}
You should organize your paper into sections and paragraphs to help
readers place a structure on the material and understand its
contributions.
\subsubsection{Sections and Subsections}
Section headings should be numbered, flush left, and set in 11~pt bold
type with the content words capitalized. Leave 0.25~inches of space
before the heading and 0.15~inches after the heading.
Similarly, subsection headings should be numbered, flush left, and set
in 10~pt bold type with the content words capitalized. Leave
0.2~inches of space before the heading and 0.13~inches afterward.
Finally, subsubsection headings should be numbered, flush left, and
set in 10~pt small caps with the content words capitalized. Leave
0.18~inches of space before the heading and 0.1~inches after the
heading.
Please use no more than three levels of headings.
\subsubsection{Paragraphs and Footnotes}
Within each section or subsection, you should further partition the
paper into paragraphs. Do not indent the first line of a given
paragraph, but insert a blank line between succeeding ones.
You can use footnotes\footnote{For the sake of readability, footnotes
should be complete sentences.} to provide readers with additional
information about a topic without interrupting the flow of the paper.
Indicate footnotes with a number in the text where the point is most
relevant. Place the footnote in 9~point type at the bottom of the
column in which it appears. Precede the first footnote in a column
with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can
appear in each column, in the same order as they appear in the text,
but spread them across columns and pages if possible.}
\begin{figure}[ht]
\vskip 0.2in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{icml_numpapers}}
\caption{Historical locations and number of accepted papers for International
Machine Learning Conferences (ICML 1993 -- ICML 2008) and
International Workshops on Machine Learning (ML 1988 -- ML
1992). At the time this figure was produced, the number of
accepted papers for ICML 2008 was unknown and instead estimated.}
\label{icml-historical}
\end{center}
\vskip -0.2in
\end{figure}
\subsection{Figures}
You may want to include figures in the paper to help readers visualize
your approach and your results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
{\it after\/} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
Figure~\ref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment {\tt figure*} in \LaTeX), but always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
Algorithm~\ref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title {\it above\/} the table with at least
0.1~inches of space before the title and the same after it, as in
Table~\ref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\hline
\abovespace\belowspace
Data set & Naive & Flexible & Better? \\
\hline
\abovespace
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
\belowspace
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\hline
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material that can be typeset, as contrasted
with figures, which contain graphical material that must be drawn.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns, but place two-column tables at the
top or bottom of the page.
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use {\tt natbib.sty} and {\tt icml2017.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to Section~\ref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and
use a hanging indent style, with the first line of the reference flush
against the left margin and subsequent lines indented by 10 points.
The references at the end of this document give examples for journal
articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI},
technical reports \cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
\subsection{Software and Data}
We strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, do not
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
| {
"timestamp": "2017-06-06T02:03:32",
"yymm": "1703",
"arxiv_id": "1703.00102",
"language": "en",
"url": "https://arxiv.org/abs/1703.00102",
"abstract": "In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH), as well as its practical variant SARAH+, as a novel approach to the finite-sum minimization problems. Different from the vanilla SGD and other modern stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple recursive framework for updating stochastic gradient estimates; when comparing to SAG/SAGA, SARAH does not require a storage of past gradients. The linear convergence rate of SARAH is proven under strong convexity assumption. We also prove a linear convergence rate (in the strongly convex case) for an inner loop of SARAH, the property that SVRG does not possess. Numerical experiments demonstrate the efficiency of our algorithm.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG); Optimization and Control (math.OC)",
"title": "SARAH: A Novel Method for Machine Learning Problems Using Stochastic Recursive Gradient",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517469248845,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.708960639908667
} |
https://arxiv.org/abs/2302.06812 | Scalable Optimal Multiway-Split Decision Trees with Constraints | There has been a surge of interest in learning optimal decision trees using mixed-integer programs (MIP) in recent years, as heuristic-based methods do not guarantee optimality and find it challenging to incorporate constraints that are critical for many practical applications. However, existing MIP methods that build on an arc-based formulation do not scale well as the number of binary variables is in the order of $\mathcal{O}(2^dN)$, where $d$ and $N$ refer to the depth of the tree and the size of the dataset. Moreover, they can only handle sample-level constraints and linear metrics. In this paper, we propose a novel path-based MIP formulation where the number of decision variables is independent of $N$. We present a scalable column generation framework to solve the MIP optimally. Our framework produces a multiway-split tree which is more interpretable than the typical binary-split trees due to its shorter rules. Our method can handle nonlinear metrics such as F1 score and incorporate a broader class of constraints. We demonstrate its efficacy with extensive experiments. We present results on datasets containing up to 1,008,372 samples while existing MIP-based decision tree models do not scale well on data beyond a few thousand points. We report superior or competitive results compared to the state-of-art MIP-based methods with up to a 24X reduction in runtime. | \section{Additional Details for Section 3: Problem Formulation}
\subsection{Proof for Proposition 1}
\begin{proof}
Given $k$-dimensional input data, denote by $\eta_f$ the number of unique feature values associated with the $f^{th}$ feature. The number of paths from the source to the sink is given by $|\mathcal{P}|=O\left(\prod_{f=1}^k \eta_f\right)=O\left(\eta^k\right)$, for some constant $\eta$.
\end{proof}
\subsection{Feature graph}
While the solution quality (in terms of the objective value of the MIP) is independent of the feature order, different orders lead to different decision trees (via different feature graphs), so the interpretability of the solution may depend on the order. In Section 5, we give an example in the medical field where a certain feature order, representing the sequence of diagnostic tests, is preferred by doctors. Our approach provides an easy way to enforce such a structure in the rules by simply arranging features according to the preferred order in the feature graph. In another scenario, one may first run a black-box prediction model to learn the feature importance (e.g., SHAP scores \cite{lundberg2017unified}), and arrange the features in descending order of importance. This is particularly relevant for high-dimensional datasets where $k$ is large.
Using sorted features, one can find a good solution quickly by exploring the more important feature combinations first.
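As an illustration of this preprocessing step, the following Python sketch (not part of our pipeline) orders features by mean absolute SHAP value, assuming a fitted scikit-learn model and the \texttt{shap} package.
\begin{verbatim}
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

def feature_order_by_importance(X, y, feature_names):
    """Sort feature names by descending mean |SHAP| value of a
    black-box model, to seed the feature-graph ordering."""
    model = GradientBoostingClassifier().fit(X, y)
    vals = shap.TreeExplainer(model).shap_values(X)
    vals = np.abs(np.asarray(vals))        # (N, k) or (classes, N, k)
    importance = vals.mean(axis=tuple(range(vals.ndim - 1)))
    order = np.argsort(importance)[::-1]   # most important first
    return [feature_names[i] for i in order]
\end{verbatim}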
\subsection{Optimal decision tree via solving OMT}
Decision trees are different from rule sets -- the latter consist of unordered rules which may overlap, whereas a decision tree has a hierarchical structure of features and each example is covered by exactly one rule. The feature graph provides an embedded hierarchical structure of features. Thus, when combined with the set coverage constraint, which enforces that each sample is assigned to exactly one rule, the optimal solution to the mixed-integer optimization problem, OMT, identifies an optimal multiway-split decision tree.
Once we obtain the tree, the label of a leaf node is determined in the same way as in traditional decision trees, i.e., with a classification tree, the label of a leaf node/path is the majority class of samples in that node.
\section{Additional Details for Section 4: Column Generation }
Column generation (CG) is a prominent method for coping with a huge number of variables. We refer readers to \cite{lubbecke2005selected} for an excellent survey on column generation, where numerous integer programming CG applications are described.
\subsection{Proof for Proposition 2}
\begin{proof}
When we impose a constraint on tree depth $d$, we restrict attention to paths that contain $d$ features; by Proposition 1, these are on the order of $\eta^d$. We are also choosing $d$ out of $k$ features, giving rise to ${k \choose d}$ combinations. Thus, the feasible path set for a tree of depth $d$ is in $O\left( {k \choose d} \eta^d \right)$.
\end{proof}
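For concreteness, with $k=21$ features, $\eta=4$ values per feature, and depth $d=3$, the bound evaluates to ${21 \choose 3}\,4^3 = 1330 \times 64 = 85{,}120$ feasible paths, versus $\eta^k = 4^{21} \approx 4.4\times 10^{12}$ in the unconstrained case of Proposition 1 (figures chosen here purely for illustration).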
Comparing this result to Proposition 1, where we do not limit rule length, we see that a constraint on the tree depth vastly reduces the total number of feasible paths. Additional constraints such as a minimum number of samples per path further reduce the number of feasible paths (e.g., in our experiments, we require at least 1\% of the training data for the selected rules). We discuss in detail in Section 5 how such constraints can easily be incorporated into our approach within the KSP subproblem.
\subsection{Subproblem Heuristic}
While an exact subproblem solution is required to achieve provable optimality \cite{barnhart1998branch}, as pointed out in \cite{lubbecke2005selected}, there is often no need to solve the subproblem exactly -- the role of the subproblem is to provide a column with a negative reduced cost or to prove that none exists. It is important to see that any column with negative reduced cost contributes to this aim. Our proposed KSP method is a polynomial-time heuristic path-dependent adaptation of the $K$-shortest path method for acyclic networks having arc costs \cite{horne1980finding, eppstein1998finding}.
At a high level, the KSP algorithm starts from the source node and proceeds in a fixed order according to the feature graph, visiting every node and extending up to the $K$ best reduced-cost paths at each node to the nodes of the next feature layer. This algorithm differs from standard shortest path algorithms in that the path costs are not additive over arcs. When a path is extended to include the next feature node, metrics $\xi_j$ such as the misclassification error, and the resulting reduced cost, are re-computed.
More specifically, given a feature graph $G$, we label the nodes from 1 through $|V|$, starting from the source and proceeding to the sink, where node $1$ is the source node and node $|V|$ is the sink node. For each node $t$ other than the sink, denote by $C(t)$ its child nodes in the next feature layer. We denote by $\Pi_t$ the set of (up to) $K$ shortest paths (i.e., those with the lowest reduced cost) at node $t$.
For each path $j$ in the feature graph, recall that $S_j$ represents the samples that fall into rule $j$.
Once $S_j$ is identified, we can compute the loss metric $\xi_j$
as well as the reduced cost $rc_j$ according to Equation (4) in the paper. Denote $\phi_j$ as an ordered index set of the visited feature nodes in path $j$.
For the algorithm, we define the following two operators:
$\Leftarrow$: \textit{Insert} a path $\phi_j$ into the path set $\Pi_t$ and retain the $K$ best paths in $\Pi_t$.
$\oplus$: \textit{Extend} a path $\phi_j$ by adding node $t$ to it and create a new path $\phi_{j^{new}}$.
For example, in the feature graph shown in Figure 2, we may extend the path $\phi_j:=$\{Source, Persons-2, Buying-High\} to node \{Safety-Medium\} to create a new path $\phi_{j^{new}}:=$\{Source, Persons-2, Buying-High, Safety-Medium\}.
This operation consists of three sub-tasks:
\begin{enumerate}
\item Filter $S_j$ to obtain a subset of samples, $S_{j^{new}}$, which is covered by the new path $\phi_{j^{new}}$.
\item Verify the feasibility of $\phi_{j^{new}}$ against attribute and path constraints, returning $\emptyset$ if infeasible.
\item Compute the loss metric $\xi_{j^{new}}$ and the resulting reduced cost $rc_{j^{new}}$ = $\xi_{j^{new}} - (\sum_{i\in S_{j^{new}}} \lambda_i +\mu) $.
\end{enumerate}
The pseudo-code for the KSP subproblem is shown in Algorithm~\ref{alg:cap}. The nodes are examined in sequence from the source to the sink. In Lines 2--5, for every node $t$, we examine its KSP partial path list $\Pi_t$; if a path has a negative reduced cost, it is immediately inserted into $\Pi_{|V|}$, the KSP path list at the sink node. In Lines 6--8, we extend a path to the nodes in the next feature layer. The output of the algorithm is the KSP list at the sink node, $\Pi_{|V|}$.
\begin{algorithm}
\caption{KSP subproblem}\label{alg:cap}
\noindent\textbf{Input}: $K,\lambda, \mu$
\noindent\textbf{Output}: $\Pi_{|V|}$
\noindent\textbf{Initialize}: $\Pi_1 \gets \{1\}, \Pi_t \gets \emptyset$, for $t=2,\cdots, |V|$
\begin{algorithmic}[1]
\For{$t$ = 1 to $|V|$}
\ForAll{$j$ in $\Pi_t$}
\If{ $rc_j < 0$}
\State $\Pi_{|V|} \Leftarrow \phi_j$
\EndIf
\ForAll{$m \in C(t)$}
\State $\Pi_m \Leftarrow \phi_j \oplus m$
\EndFor
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
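For readers who prefer executable pseudocode, the following Python sketch mirrors the control flow of Algorithm~\ref{alg:cap}; \texttt{G.num\_nodes}, \texttt{G.children}, \texttt{G.extend}, and \texttt{G.all\_samples} are hypothetical helpers standing in for the data structures described above, not our actual implementation.
\begin{verbatim}
def ksp_subproblem(G, K):
    """Sketch of the KSP heuristic. A path is a tuple
    (rc, nodes, samples); G.children(t) yields the nodes of the
    next feature layer, and G.extend(path, m) performs the three
    sub-tasks (filtering, feasibility check, reduced-cost update),
    returning None if the extension is infeasible."""
    V = G.num_nodes                       # nodes 1..V; source = 1, sink = V
    Pi = {t: [] for t in range(1, V + 1)}
    Pi[1] = [(0.0, (1,), G.all_samples)]  # the trivial path at the source
    for t in range(1, V):                 # the sink list is the output
        for path in Pi[t]:
            if path[0] < 0:               # negative reduced cost: candidate
                _insert(Pi[V], path, K)
            for m in G.children(t):
                new_path = G.extend(path, m)
                if new_path is not None:
                    _insert(Pi[m], new_path, K)
    return Pi[V]                          # up to K columns for the RMP

def _insert(paths, path, K):
    """Insert `path` and retain only the K lowest reduced-cost paths."""
    paths.append(path)
    paths.sort(key=lambda p: p[0])
    del paths[K:]
\end{verbatim}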
Our extensive experiments show that this heuristic works well in terms of runtime while achieving near-optimal solution quality.
Resource-constrained shortest path heuristics (\cite{desrosiers2005primer, desaulniers2006column}) similar to our KSP procedure have been applied in large-scale industrial applications. For example, \citealp{subramanian2008effective} apply a KSP-like subproblem algorithm within a large-scale airline crew scheduling optimizer on graphs having several thousand nodes. Although the network sizes they analyze are considerably larger than the biggest feature graph we report on, to the best of our knowledge the \textit{Crop Mapping} dataset (175 categorical features) we analyze (see the experiments section) has by far the largest feature set in the MIP-based ODT literature.
\subsection{Complexity analysis}
For exposition, consider solving OMT for a classification task.
We begin by analyzing the worst-case runtime of the KSP subproblem. First, note that we visit every node and perform no more than $K\eta$ path extensions from each node. Next, the runtime complexity of each such path extension $\phi_j \oplus m$ is obtained by examining the computation time of each of the three sub-tasks:
Sub-task i) Filtering: computing the intersection of the sets of data samples belonging to the path and to the next node can be performed in $\mathcal{O}(N\log N)$ time using an efficient sorting algorithm.
Sub-task ii) Assuming we only have a maximum rule length $d$ and a minimum sample size constraint, and that we store the length of the path and the number of data samples belonging to it, feasibility can be verified in $\mathcal{O}(1)$ time.
Sub-task iii) Computing the loss metric $\xi_j$ requires us to count the number of data samples of the path that belong to each class. The misclassification cost $\xi_j$ is simply the difference between the number of data samples of the path and the sample count for the most popular class. The dual values ($\lambda,\mu$) of the data samples have to be summed to recalculate the reduced cost. The computation time is linear in the data sample size: $\mathcal{O}(|S_j|)$ time, or $\mathcal{O}(N)$ in the worst case.
The runtime is dominated by the first sub-task, giving a worst-case $\mathcal{O}(N\log N)$ time per path extension, which yields an overall polynomial runtime bound of $\mathcal{O}(K\eta|V|N\log N)$ for KSP.
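As an illustration, the two dominant sub-tasks could be realized with NumPy as follows; this is a sketch under stated assumptions (sample indices stored as sorted arrays of unique integers, class labels encoded as $0,\ldots,C-1$), not our implementation.
\begin{verbatim}
import numpy as np

def filter_samples(path_samples, node_samples):
    """Sub-task i): the samples covered by the extended path are the
    intersection of the path's and the node's sample-index arrays;
    np.intersect1d sorts internally, hence O(N log N) worst case."""
    return np.intersect1d(path_samples, node_samples, assume_unique=True)

def misclassification(labels, sample_idx):
    """Sub-task iii): loss = |S_j| minus the majority-class count,
    computed in a single O(|S_j|) counting pass."""
    if len(sample_idx) == 0:
        return 0
    counts = np.bincount(labels[sample_idx])
    return len(sample_idx) - counts.max()
\end{verbatim}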
RMP, being a linear program, is also solvable in polynomial time \cite{bazaraa2008linear}, and therefore every CG iteration of RMP and KSP runs in polynomial time.
Note that the Master-MIP solved after the CG procedure is an NP-hard problem due to the presence of the binary variables that inject non-convexity. The number of constraints in the RMP or the MIP increases in proportion to the number of data samples, but this is not a limiting factor, as the constraints are convex and processed efficiently within the (polynomial-time solvable) LP subproblems solved during the MIP solver's branch-and-bound procedure \cite{wolsey1999integer}. This ability to efficiently handle constraints is also why MIP-based methods are a valuable solution approach to ODTs.
\subsection{Limitations and potential enhancements}
There are several limitations regarding the current implementation. For simplicity and replicability, we chose to employ standard off-the-shelf methods (e.g., the CPLEX barrier and MIP solvers for the RMP and Master-MIP). For instance, the final Master-MIP problem can be solved to optimality with more advanced methods such as branch-and-price instead of a standard solver. The RMP implementation can be enhanced using subgradient methods, which are usually much faster \cite{subramanian2008effective}. Also, rather than starting from scratch with no columns, the CG procedure can be ``warm-started'' using the rules obtained by running CART.
The existing KSP procedure is also a ``plain vanilla'' implementation for easy reproducibility, and can be enhanced in several ways. For example, it can easily be parallelized by dividing the path search among different processors. Next, the nodes in the feature graph can be arranged based on the SHAP scores of the corresponding features, obtained from any black-box prediction model. This can accelerate the convergence of KSP and CG to good-quality solutions.
\section{Additional Details for Section 5: Flexible Framework}
\subsection{Incorporating nonlinear metrics in constraints}
To model an F1-score at the sample or dataset level, we first compute path-level metrics such as true positives (tp), false positives (fp), and false negatives (fn). To be more precise, suppose we want to impose a constraint that the F1-score at the dataset level is above a certain threshold $\delta$, that is,
$$F1=\frac{\sum_j tp_jz_j}{\sum_j (tp_j + 0.5(fp_j + fn_j))z_j}\geq \delta.$$
We can then rewrite this as a linear constraint and add it to the RMP, i.e., $$\sum_j tp_jz_j\geq \delta\sum_j \left(tp_j + 0.5(fp_j + fn_j)\right)z_j.$$
Such a constraint will influence the subproblem via a modified reduced cost to ensure that paths that are more likely to be feasible are added to the RMP. The same idea can be applied to Matthews correlation coefficient and Fowlkes–Mallows index. With the proposed framework, we can also impose path-level constraints which we discuss in the next section.
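To make this concrete, the sketch below computes the coefficient of each $z_j$ in the linearized F1 constraint; the helper names and the binary-label encoding are illustrative assumptions, not our implementation.
\begin{verbatim}
import numpy as np

def f1_constraint_coeffs(y_true, path_label, sample_sets, delta):
    """Coefficient of z_j in the linearized constraint
    sum_j [tp_j - delta*(tp_j + 0.5*(fp_j + fn_j))] z_j >= 0.
    path_label[j] is the class assigned to rule j (labels in {0,1})
    and sample_sets[j] holds the indices in S_j."""
    coeffs = []
    for label, S in zip(path_label, sample_sets):
        pos = int(np.sum(y_true[S] == 1))      # positives in the rule
        if label == 1:
            tp, fp, fn = pos, len(S) - pos, 0
        else:
            tp, fp, fn = 0, 0, pos             # positives predicted 0
        coeffs.append(tp - delta * (tp + 0.5 * (fp + fn)))
    return coeffs
\end{verbatim}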
\subsection{Fairness constraints}
Prior fairness metrics are imposed as \emph{sample-level} constraints, e.g., statistical parity conditioned on certain sensitive features such as gender or race \cite{aghaei2019learning,aghaei2020learning}.
However, none of the existing MIP-based methods are able to efficiently handle constraints at \emph{path-level}, as the notion of ``path'' is not explicitly defined in the arc-based formulation.
Consider a use case where the sensitive feature is ``gender'' =\{M,F\}. For each rule $j$, we denote the gender-specific fairness metric as $\phi_{Mj}$ and $\phi_{Fj}$, and apply a path-level constraint: $|\phi_{Mj}-\phi_{Fj}|\leq \delta$, where $\delta$ refers to a user-specified bias tolerance.
By applying the fairness constraint to each rule, we eliminate blatantly unfair rules, which may be admissible in prior fairness-constrained MIP models that are unable to efficiently evaluate such path-level metrics.
Meanwhile, a sample-level ``fairness budget'' constraint akin to \cite{aghaei2020learning} can also be modeled in our framework by summing the fairness metric over all paths, i.e., $\sum_{j=1}^L|\phi_{Mj}-\phi_{Fj}|z_j\leq \delta$.
Once path $j$ is delineated in the feature graph, the associated metrics can be defined and simply enter the RMP as coefficients associated with rule $j$. For example, the cardinality constraint in the OMT formulation is a path-based restriction that directly controls the number of active leaf nodes in a tree, which is relatively difficult to enforce in prior MIP-based methods.
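A minimal sketch of such a path-level feasibility check follows; the specific metric used here (the fraction of each gender's samples captured by the rule, i.e., coverage parity) is only one illustrative choice for $\phi$.
\begin{verbatim}
import numpy as np

def path_is_fair(gender, S, delta):
    """Path-level check |phi_Mj - phi_Fj| <= delta, performed during
    sub-task ii) of a path extension. gender is an array of 'M'/'F'
    over the full dataset; S holds the sample indices of rule j."""
    in_rule = np.zeros(len(gender), dtype=bool)
    in_rule[S] = True
    phi_m = in_rule[gender == 'M'].mean()   # share of men in the rule
    phi_f = in_rule[gender == 'F'].mean()   # share of women in the rule
    return abs(phi_m - phi_f) <= delta
\end{verbatim}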
\section{Additional Details for Section 6: Experiments}
\subsection{Small/medium datasets}
\subsubsection*{Experiment setup}
We closely follow the experiment setup in \cite{aghaei2020learning}.
As \textsf{FlowOCT} only allows binary input, we perform one-hot encoding on integer input and quantile discretization on features with real numbers. For \textsf{OMT}, we perform cumulative binning on numerical and ordinal features, where thresholds are determined via quantile discretization. Meanwhile, \textsf{OCT} and \textsf{BinOCT} use the original input.
As \textsf{OCT} and \textsf{FlowOCT} have a regularization parameter which controls the complexity of the tree, for each split and each depth, we tune the corresponding model by selecting the regularization parameter from the set $\{0, 0.01, 0.1\}$ with the best performance on the validation set. Next, we retrain a model with this parameter on both the training and validation sets
and report the associated out-of-sample performance.
The validation step is skipped for \textsf{BinOCT}, which does not have such a tuning parameter and only constructs full trees with $2^d$ leaf nodes. For \textsf{OMT}, we use cross-validation to determine the number of discrete bins for numerical features. More specifically, the numbers of bins tested were \{3, 4, 5, 8\}, and quantile discretization was applied to determine the corresponding thresholds.
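One plausible NumPy realization of this binning step is sketched below; the exact encoding used in our code may differ.
\begin{verbatim}
import numpy as np

def cumulative_bins(x, n_bins):
    """Quantile discretization with cumulative encoding: the nodes of
    a numerical feature correspond to nested conditions x <= q_1,
    x <= q_2, ..., with thresholds at the empirical quantiles of x."""
    qs = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.stack([x <= q for q in qs], axis=1)  # N x (n_bins - 1)
\end{verbatim}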
\subsubsection*{Results}
Table \ref{table_small_data_full} shows the experimental results in terms of the out-of-sample accuracy averaged over five splits for $d\in\{2,3,4,5\}$, conducted over the same 12 UCI datasets that were used in \textsf{FlowOCT} \cite{aghaei2020learning}, which is considered the state-of-the-art ODT. The best accuracy in a given row is reported in \textbf{bold}. Compared to three recently proposed MIP-based alternatives as well as CART, \textsf{OMT} achieves the best accuracy in 56.3\% of instances, versus 33.3\% for \textsf{FlowOCT}, 29.2\% for \textsf{OCT}, 8.3\% for \textsf{BinOCT}, and 14.6\% for \textsf{CART} (including ties).
\begin{table}
\centering
\begin{tabular}{rrrr}
\toprule
$d$ & Mean & Std. Dev & Median \\
\midrule
2 & 0.08 & 0.05 & 0.10 \\
3 & 0.59 & 2.21 & 0.14 \\
4 & 4.37 & 10.35 & 0.21 \\
5 & 3.65 & 6.94 & 0.26 \\
\bottomrule
\end{tabular}
\caption{Optimality gap $\Delta$ $(\%)$}\label{optimality_gap}
\end{table}
\begin{table*}[t]
\small
\caption{Mean $\pm$ standard deviation of out-of-sample accuracy on the small/medium datasets for $d\in\{2,3,4,5\}$.
}
\centering
\begin{tabular}{lllccccc}
\toprule
dataset & $N$ & $d$ & OMT & OCT & BinOCT & FlowOCT & CART \\
\midrule
soybean-small & 47& 2 & 0.95±0.112 & \B 1.0±0.048 & 0.972±0.048 & \B 1.0±0.048 & 0.778±0.048 \\
soybean-small & 47& 3 & 0.983±0.037 & 0.944±0.0 & 0.75±0.3 & 0.972±0.048 & \B 1.0±0.0 \\
soybean-small & 47& 4 & 0.883±0.139 & 0.944±0.048 & 0.833±0.22 & 0.944±0.083 & \B 1.0±0.0 \\
soybean-small & 47& 5 & 0.95±0.075 & 0.972±0.096 & 0.722±0.21 & 0.972±0.083 & \B 1.0±0.0 \\
monks-3 & 122& 2 & 0.967±0.013 & 0.966±0.004 & 0.966±0.004 & 0.966±0.004 & \B 0.971±0.012 \\
monks-3 & 122& 3 & 0.987±0.008 & \B 0.99±0.011 & 0.99±0.011 & \B 0.99±0.011 & 0.986±0.019 \\
monks-3 & 122& 4 & 0.987±0.008 & 0.99±0.015 & 0.988±0.011 & 0.99±0.011 & \B 0.993±0.007 \\
monks-3 & 122& 5 & 0.987±0.008 & 0.978±0.014 & 0.983±0.015 & 0.99±0.012 & \B 0.993±0.007 \\
monks-1 & 124& 2 & 0.751±0.035 & 0.751±0.041 & 0.751±0.041 & 0.751±0.041 & 0.734±0.019 \\
monks-1 & 124& 3 & \B 1.0±0.0 & 0.856±0.097 & 0.859±0.015 & 0.859±0.03 & 0.818±0.087 \\
monks-1 & 124& 4 & \B 1.0±0.0 & \B 1.0±0.029 & \B 1.0±0.0 & \B 1.0±0.03 & 0.806±0.064 \\
monks-1 & 124& 5 & \B 1.0±0.0 & 0.935±0.142 & \B 1.0±0.0 & \B 1.0±0.03 & 0.787±0.042 \\
hayes-roth & 132& 2 & \B 0.615±0.11 & 0.608±0.014 & 0.55±0.1 & 0.608±0.052 & 0.425±0.09 \\
hayes-roth & 132& 3 & \B 0.78±0.111 & 0.758±0.038 & 0.7±0.115 & 0.725±0.09 & 0.517±0.038 \\
hayes-roth & 132& 4 & 0.75±0.075 & 0.75±0.038 & 0.642±0.138 & \B0.8±0.066 & 0.55±0.087 \\
hayes-roth &132& 5 & 0.77±0.089 & 0.75±0.076 & 0.575±0.066 & \B 0.817±0.029 & 0.708±0.058 \\
monks-2 & 169& 2 & 0.633±0.024 & \B 0.662±0.054 & 0.607±0.019 & \B 0.662±0.054 & 0.596±0.02 \\
monks-2 & 169& 3 & 0.583±0.035 & \B 0.662±0.04 & 0.585±0.044 & \B 0.662±0.043 & 0.594±0.034 \\
monks-2 & 169& 4 & 0.603±0.045 & \B 0.662±0.031 & 0.581±0.027 & \B 0.662±0.023 & 0.598±0.033 \\
monks-2 & 169& 5 & \B 0.779±0.03 & 0.662±0.05 & 0.607±0.03 & 0.662±0.055 & 0.651±0.055 \\
house-votes-84 & 232& 2 & \B 0.979±0.008 & 0.971±0.01 & 0.966±0.017 & 0.971±0.01 & 0.977±0.01 \\
house-votes-84 & 232& 3 & \B 0.976±0.009 & 0.971±0.01 & 0.971±0.01 & 0.971±0.02 & 0.96±0.02 \\
house-votes-84 & 232& 4 & 0.962±0.046 & \B 0.971±0.01 & 0.914±0.03 & \B 0.971±0.01 & 0.96±0.02 \\
house-votes-84 & 232& 5 & 0.955±0.026 & \B 0.971±0.01 & 0.96±0.01 & \B 0.971±0.01 & 0.96±0.02 \\
spect & 267& 2 & 0.755±0.054 & \B 0.836±0.071 & 0.781±0.071 & \B 0.836±0.071 & 0.711±0.023 \\
spect & 267& 3 & 0.749±0.048 & \B 0.836±0.06 & 0.756±0.06 &\B 0.836±0.091 & 0.731±0.015 \\
spect & 267& 4 & \B 0.834±0.04 & 0.801±0.086 & 0.746±0.065 & 0.791±0.079 & 0.731±0.026 \\
spect & 267& 5 & \B 0.803±0.039 & 0.791±0.06 & 0.721±0.048 & 0.796±0.085 & 0.731±0.03 \\
breast-cancer & 277& 2 & \B 0.774±0.07 & 0.695±0.036 & 0.686±0.038 & 0.681±0.079 & 0.743±0.043 \\
breast-cancer & 277& 3 & \B 0.72±0.06 & 0.714±0.079 & 0.714±0.049 & 0.681±0.092 & 0.705±0.036 \\
breast-cancer & 277& 4 & 0.74±0.046 & 0.724±0.038 & 0.662±0.091 & \B 0.743±0.079 & 0.69±0.058 \\
breast-cancer & 277& 5 & 0.669±0.094 & \B 0.738±0.0 & 0.567±0.157 & 0.676±0.036 & 0.671±0.049 \\
balance-scale & 625& 2 & \B 0.703±0.043 & 0.665±0.019 & 0.65±0.013 & 0.671±0.007 & 0.62±0.016 \\
balance-scale & 625& 3 & \B 0.726±0.006 & 0.696±0.051 & 0.665±0.042 & 0.696±0.026 & 0.709±0.037 \\
balance-scale & 625& 4 & \B 0.781±0.016 & 0.747±0.048 & 0.707±0.006 & 0.699±0.01 & 0.769±0.033 \\
balance-scale & 625& 5 & \B 0.772±0.012 & 0.735±0.01 & 0.565±0.1 & 0.72±0.045 & 0.762±0.013 \\
tic-tac-toe & 958& 2 & \B 0.71±0.018 & 0.689±0.05 & 0.689±0.021 & 0.689±0.05 & 0.672±0.035 \\
tic-tac-toe & 958& 3 & 0.739±0.013 &\B 0.765±0.027 & 0.75±0.036 & 0.735±0.05 & 0.725±0.045 \\
tic-tac-toe & 958& 4 & \B 0.802±0.039 & 0.776±0.073 & 0.786±0.021 & 0.757±0.032 & 0.758±0.019 \\
tic-tac-toe & 958& 5 & \B 0.82±0.061 & 0.711±0.025 & 0.812±0.029 & 0.788±0.05 & 0.778±0.046 \\
car-evaluation & 1728& 2 & 0.769±0.013 & 0.765±0.01 & 0.765±0.01 & 0.765±0.01 & \B 0.782±0.013 \\
car-evaluation & 1728& 3 & \B 0.803±0.015 & 0.789±0.02 & 0.786±0.029 & 0.798±0.016 & 0.782±0.033 \\
car-evaluation & 1728& 4 & \B 0.879±0.009 & 0.796±0.076 & 0.848±0.01 & 0.823±0.016 & 0.842±0.029 \\
car-evaluation & 1728& 5 & \B 0.91±0.005 & 0.742±0.041 & 0.815±0.052 & 0.8±0.016 & 0.857±0.019 \\
kr-vs-kp & 3196& 2 & \B 0.866±0.016 & \B 0.866±0.014 & \B 0.866±0.014 & \B 0.866±0.014 & 0.762±0.017 \\
kr-vs-kp & 3196& 3 & \B 0.941±0.008 & 0.859±0.096 & 0.925±0.025 & 0.938±0.011 & 0.898±0.014 \\
kr-vs-kp & 3196& 4 & \B 0.96±0.006 & 0.847±0.094 & 0.938±0.012 & 0.94±0.011 & 0.94±0.011 \\
kr-vs-kp & 3196& 5 & \B 0.968±0.003 & 0.652±0.098 & 0.847±0.164 & 0.946±0.057 & 0.94±0.011 \\
\bottomrule
\end{tabular}\label{table_small_data_full}
\end{table*}
Table~\ref{optimality_gap} reports the MIP--LP gap with respect to $d$, defined as $\Delta = (\nu_{IP} - \nu_{LP})/\nu_{IP}$, i.e., the relative difference between the objective value achieved by the Master-MIP ($\nu_{IP}$) and by the final RMP ($\nu_{LP}$).
The small gaps suggest that the path-based LP relaxation is a relatively strong approximation of the nonconvex discrete OMT model. It also underscores the advantage of converging to a relatively small subset of high-quality paths from which an effective decision tree can be distilled using a standard MIP solver.
To illustrate the reduction in runtime and MIP size for large datasets, the CG procedure for the largest MIP-ODT instance analyzed in the literature in terms of sample size (\textit{Skin}) at $d = 2$ converges in $7\pm1$ minutes. The procedure generated fewer than $\hat{L} = 2400$ columns to improve upon the solution quality achieved by the multivariate MIP approach \textsf{S1O}, which requires several hours and employs a data preprocessing step. This result is not surprising considering that this dataset yields a relatively small feature graph after discretizing its few numerical features. Hence, by Proposition 2, the number of feasible paths at $d = 2$ is limited (and independent of the data sample size). Prior MIP-based binary ODT models are unable to exploit this hierarchical graph structure and employ an arc-based approach that requires more than $2^dN$ ($\approx 800{,}000$) binary decision variables.
\subsection{Large datasets}
For the experiments with large-scale datasets, we down-sampled the original \textsf{susy} dataset to about a million samples to enable it to be processed on a laptop.
\clearpage
\bibliographystyle{plain}
\section{Introduction}
Decision trees are among the most popular machine learning models as the tree structure is visually easy to understand.
As learning an optimal decision tree is NP-hard \cite{laurent1976constructing}, popular algorithms such as CART \cite{breiman1984classification}, ID3
\cite{quinlan1986induction} and C4.5 \cite{quinlan2014c4} rely on greedy heuristics to construct trees. Motivated by the heuristic nature of the traditional methods, there have been many efforts across different fields to learn optimal decision trees (ODT), e.g., dynamic programming \cite{lin2020generalized}, constraint programming \cite{verhaeghe2020learning}, Boolean satisfiability \cite{narodytska2018learning}, itemset mining \cite{aglin2020learning}.
In particular, recent advances in modern optimization have facilitated a nascent stream of research that leverages mixed-integer programming (MIP) to train globally optimal
trees with constraints \cite{bertsimas2017optimal,aghaei2019learning,verwer2019learning, aghaei2020learning} -- this is the methodology we focus on in this paper.
Prior MIP-based methods rely on an \emph{arc-based} formulation that requires a large number of binary decision variables to identify splitting conditions at branch nodes as well as label and sample assignments to leaf nodes.
This approach has several drawbacks: (1) the optimization problem easily becomes intractable, as the number of binary variables and constraints increases linearly with the training data; hence, experiments are typically restricted to datasets with no more than a few thousand samples.
(2) Prior MIP frameworks can only handle linear metrics.
In many applications, it is desirable to consider nonlinear metrics, e.g., F1-score is preferred over accuracy to evaluate machine learning models trained on imbalanced datasets. (3) With the arc-based formulation, it is challenging to impose constraints on individual decision rules and feature combinations. One such example occurs in the medical field, where doctors have to arrive at an appropriate diagnosis while taking into account the costs of medical tests \cite{lomax2013survey}.
(4) The vast majority of the decision tree literature focuses on binary-split trees, where each node can have at most two child nodes.
Multiway-split trees (see Figure~\ref{fig_OMT} for an example) whose branching condition may contain several values are often more intuitive and comprehensible \cite{fulton1995efficient}.
\begin{figure}[h]
\centering
\includegraphics[width=.9\linewidth]{fig/std_OMT.png}
\caption{An example of OMT trained on \textsf{car-evaluation}.}\label{fig_OMT}
\end{figure}
In this paper, we attempt to address these prior limitations by proposing a scalable MIP-based framework to train constrained \emph{optimal multiway-split trees} (OMT).
Our contributions are five-fold.
\textbf{Novel path-based MIP formulation} Unlike arc-based MIP, we explicitly model decision rules or paths in a tree.
The high-level idea is as follows: we first define a feature graph that admits every possible combination
of input features as feasible decision rule candidates. The objective of the MIP is to identify a constrained subset of rules from this potentially huge rule space to minimize the prediction error. The feature graph naturally embeds the hierarchical structure of a tree, and we add a constraint to model another property of decision trees, i.e., each sample can only be assigned to a single rule.
\textbf{Scalable algorithm}
Unlike existing MIP ODTs whose binary decision variables are typically in the order of $\mathcal{O}(2^dN)$, where $d$ and $N$ refer to the depth of the tree and the size of training data, the
number of binary decision variables in our formulation is independent of
$N$. It equals the number of candidate decision rules defined in the feature graph. While this number
can also be excessive in the worst case, we employ column generation (CG) to dynamically generate a relatively small number of candidates to recover a near-optimal solution.
While existing MIP approaches can manage no more than a few thousand training samples, an initial implementation of our proposed CG algorithm effectively solved a dataset with $10^6$ samples.
\textbf{Flexible framework}
Our framework can be applied to both classification and regression settings. The path-based model allows us to incorporate
any path-level metric, including nonlinear ones, which enter the MIP as input parameters.
We also show how a broader class of constraints including path-level, and attribute-level constraints, which are difficult to express within existing arc-based formulations, can be easily incorporated into our CG framework.
\textbf{Enhanced interpretability with multiway-splits} To the best of our knowledge, we are the first to train optimal multiway-split trees for prediction tasks, where a node may have more than two child nodes.
Since a feature seldom appears more than once in any root-to-leaf path, multiway-split trees are easier to comprehend than their binary counterparts \cite{fulton1995efficient}.
\textbf{Extensive numerical results}
We conduct extensive computational studies on public datasets and report the first MIP-based results for large datasets containing up to 1,008,372 samples, which are an order of magnitude larger than the prior datasets analyzed in the MIP-based ODT literature. We show that our framework is either competitive or improves upon the state-of-the-art MIP methods in terms of out-of-sample accuracy and achieves up to a 24-fold reduction in runtime.
\section{Related literature}
The discrete nature of decisions involved in training a decision tree and recent algorithmic advances in integer optimization have inspired a burgeoning body of literature that utilize a MIP formulation to construct optimal decision trees \cite{bertsimas2017optimal,verwer2019learning,aghaei2019learning,NEURIPS2020_1373b284,aghaei2020learning,gunluk2021optimal}.
Most of the existing MIP methods build on top of the Optimal Classification Tree (OCT) first introduced in \citealp{bertsimas2017optimal}, where decisions about the split condition at each node, label assignment for each leaf node, and the routing of each data point from the root node to a leaf node are made.
Different flavors of optimal decision trees have been proposed in the literature, e.g., discrimination-aware trees \cite{aghaei2019learning}, trees with combinatorial splitting rules \cite{gunluk2021optimal}, multivariate trees \cite{NEURIPS2020_1373b284}.
As the tractability of the original formulation is limited by data size, subsequent efforts sought to improve the efficiency of the approach.
The BinOCT approach of \citealp{verwer2019learning} employs a binary encoding method to model the threshold selection at branch nodes to
reduce the number of binary decision variables needed. However, both the OCT and BinOCT use “big-M” constraints which may weaken the underlying LP relaxations, leading to poor performance of the branch-and-bound method. Most notably, a recent work proposes FlowOCT \cite{aghaei2020learning}, a strong max-flow based MIP formulation with binary data.
Its formulation yields a tighter underlying LP relaxation and outperforms prior methods in many instances.
Nevertheless, existing methods' tractability is limited by data size and tree depth as the arc-based formulation uses binary variables to assign samples to nodes. This motivates us to seek a completely different approach,
by explicitly modeling the paths used to construct a tree.
While the worst-case number of rules can be huge with high-dimensional data, we show that our problem can be solved efficiently via column generation (CG). CG has been successful in solving large-scale discrete optimization models in many domains including vehicle routing (\citealp{chen2006dynamic}), crew scheduling (\citealp{subramanian2008effective, bront2009column}), and supply chain management, among others (\citealp{xu2019solving}).
Utilizing large-scale optimization techniques for MIP-based ODTs has been attempted previously.
\citealp{aghaei2020learning} show that FlowOCT can be solved by Benders' decomposition (row generation). Its subproblem potentially involves solving a max-flow problem for \emph{every} sample, limiting results to datasets having a few thousand samples.
\section{Problem formulation}
We consider a dataset which consists of $N$ samples, $\{(x_i,y_i)\}_{i=1}^N$, where $x_i \in \mathcal{X}^k$ are features which are assumed to be categorical. We will discuss how to handle numerical features in Section~\ref{sect_cumulative_binning}, and present experiments on datasets with both categorical and numerical features in Section~\ref{sect_experiments}.
$y_i$ is the outcome, which can be a discrete label for classification or a continuous quantity for regression.
A binary-split tree of depth $d$ can have at most $2^d$ leaf nodes. In a multiway-split tree, each node may have more than two children. Thus, we use the depth of a tree $d$, as well as the number of leaf nodes $l$, which are user-specified parameters, to describe such a tree. An example of a multiway-split tree with $d=3$ and $l=8$ is shown in Figure \ref{fig_OMT}.
In our framework, we explicitly model individual decision paths from the root to the leaf nodes. We begin by defining a feature graph
that contains every possible combination of input features, followed by introducing a MIP optimization problem to identify a subset of paths to form the tree.
\subsection{Feature graph}\label{sect_rule_space}
We consider an acyclic multi-level digraph, $G(V, E)$, where each feature defines a level in the graph, represented by multiple nodes corresponding to its distinct feature values.
Nodes of a feature are fully connected to nodes in the next level. The graph includes a source and sink node. A decision rule is defined as a path
from the source to the sink node.
For each feature, we introduce a dummy node \emph{SKIP}.
If a path goes through the \emph{SKIP} node of a feature, that particular feature is not used in the decision rule.
As \emph{SKIP} nodes allow paths to ignore features, paths on this acyclic graph represent all possible feature combinations, defining the full search space $\mathcal{P}$ that one may need to consider to construct an optimal tree. Figure~\ref{feature_space} illustrates an example of a feature graph using three features from \textsf{car-evaluation}, a UCI dataset \cite{dua2017uci}, which records assessment of cars based on criteria such as ``Persons'' (number of seats), ``Buying'' (purchase cost) and ``Safety''.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{fig/feature_graph.png}
\caption{The feature graph for \textsf{car-evaluation}
}\label{feature_space}
\end{figure}
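To make the construction concrete, a minimal Python sketch of the feature graph follows, instantiated with the three \textsf{car-evaluation} features of Figure~\ref{feature_space}; the representation is illustrative, not our implementation.
\begin{verbatim}
def build_feature_graph(feature_values):
    """Layered feature graph: one layer per feature, one node per
    distinct value plus a SKIP node; consecutive layers are fully
    connected, with a source before the first layer and a sink
    after the last."""
    layers = [["SOURCE"]]
    for f, values in feature_values.items():
        layers.append([(f, v) for v in values] + [(f, "SKIP")])
    layers.append(["SINK"])
    edges = [(u, v)
             for left, right in zip(layers, layers[1:])
             for u in left for v in right]
    return layers, edges

layers, edges = build_feature_graph({
    "Persons": ["2", "4", "more"],
    "Buying":  ["low", "med", "high", "vhigh"],
    "Safety":  ["low", "med", "high"],
})
\end{verbatim}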
Denote by $\eta$ the maximum number of unique feature values associated with any feature. The following result implies that enumerating the space $\mathcal{P}$ may become impractical for high-dimensional datasets.
\begin{proposition}\label{prop_feature_space}
The search space $\mathcal{P}$ is $\mathcal{O}(\eta^k).$
\end{proposition}
Later in Section \ref{sect_algorithm}, we describe our algorithm which uses a large-scale optimization technique to intelligently search through the space, and only paths that can improve the current solution are generated on the fly.
\subsection{Path-Based MIP formulation}\label{sect_formulation}
We denote a decision rule as $j \in\{ 1,\cdots,L\}$, where $L=|\mathcal{P}|$. Let $S_j \subseteq [N]$ be the subset of observations which fall into rule $j$.
Denote $\xi_j$ as a user-specified metric associated with rule $j$, e.g., misclassification error in rule $j$ in a classification task, squared loss or absolute loss for regression.
Let $z_j$ be a binary decision variable which indicates whether rule $j$ is selected (1) or not (0), for $j = 1,\cdots,L$.
We define non-negative slack variables $s_i$ and a positive penalty cost $c_i$ for each sample $i$.
A MIP formulation to determine an optimal multiway-split tree ({OMT}) can be written as follows,
\begin{align}
\min\quad & \sum_{j=1}^L \xi_jz_j +\sum_{i=1}^N c_is_i\nonumber\\
\textrm{s.t.} \quad & \sum_{j=1}^L a_{ij}z_j + s_i = 1, \quad\forall i=1,\cdots,N\label{constraint_coverage}\\
&\sum_{j=1}^L z_j \leq l \label{capacity}\\
&z_j\in\{0,1\}, \quad \forall j=1,\cdots,L \nonumber\\ &s_i \geq 0, \quad \forall i=1,\cdots,N \nonumber
\end{align}
The input parameter $a_{ij}=1$ if sample $i$ satisfies the conditions specified in rule $j$, and 0 otherwise.
While several rules may contain the same data sample, the set partitioning constraint (\ref{constraint_coverage}), together with a sufficiently large penalty $c_i$,
ensures that each sample is ultimately assigned to exactly one rule.
The cardinality constraint (\ref{capacity}) stipulates that no more than $l$ rules are active in the optimal solution $\mathbf{z}^*$.
It is well-known that set partitioning problems are \textit{NP}-hard \cite{hartmanis1982computers}.
Thus, while optimal solutions can be obtained in practice for moderate sized instances \cite{atamturk1996combined}, the problem may easily become intractable to solve directly as the cardinality of feasible rules grows exponentially with the feature space. We will present an efficient algorithm to overcome this computational challenge in the next section.
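To make the formulation concrete, the sketch below states OMT directly over a small, fully enumerated candidate rule set using the open-source PuLP modeler; our experiments instead use CPLEX together with the column generation procedure of the next section.
\begin{verbatim}
import pulp

def solve_omt(xi, a, c, l_max):
    """OMT over an explicit rule set: xi[j] is the loss of rule j,
    a[i][j] = 1 if sample i satisfies rule j, c[i] the slack
    penalty, and l_max the leaf-node budget l."""
    L, N = len(xi), len(c)
    prob = pulp.LpProblem("OMT", pulp.LpMinimize)
    z = pulp.LpVariable.dicts("z", range(L), cat="Binary")
    s = pulp.LpVariable.dicts("s", range(N), lowBound=0)
    prob += (pulp.lpSum(xi[j] * z[j] for j in range(L))
             + pulp.lpSum(c[i] * s[i] for i in range(N)))
    for i in range(N):                    # set partitioning constraint
        prob += pulp.lpSum(a[i][j] * z[j] for j in range(L)) + s[i] == 1
    prob += pulp.lpSum(z[j] for j in range(L)) <= l_max  # cardinality
    prob.solve()
    return [j for j in range(L) if z[j].value() > 0.5]   # selected rules
\end{verbatim}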
\section{Column generation}\label{sect_algorithm}
Column generation (CG) is a classical technique to solve large MIPs for which it is not practical to explicitly generate all columns (variables) of the problem \cite{lubbecke2005selected}.
Specifically, we first consider a \emph{restricted master program} (\textsf{RMP}) version of \textsf{OMT}, where we 1) consider only a subset of paths $\hat{L}$, which is typically much smaller than $L$ and is determined dynamically, and 2) relax the integrality constraints on $z_j$ to $0\leq z_j\leq 1$ for all $j=1,\cdots,\hat{L}$.
Denote the dual variables associated with the set partitioning constraints in (\ref{constraint_coverage}) and the cardinality constraint in (\ref{capacity}) as ${\lambda}_i$ and $\mu$ respectively. The dual formulation of RMP can be written as follows,
\begin{align}
\quad\max\quad & \sum_{i=1}^N \lambda_i + l\mu\nonumber\\
\textrm{s.t.} \quad & \sum_{i=1}^N a_{ij}\lambda_i+\mu\leq \xi_j, \quad \forall j=1,\cdots,\hat{L}\label{dual_constraint}\\
& \mu\leq0, \quad \lambda_i\leq c_i, \quad\forall i = 1,\cdots, N
\nonumber
\end{align}
Note that since $z_j\leq 1$ is implied by Eq (\ref{constraint_coverage}), we are only left with $z_j \geq 0$ in the primal \textsf{RMP}.
Based on the dual feasibility constraint for path $j$ in (\ref{dual_constraint}), the reduced cost for path $j$ is equal to
\begin{equation}\small
rc_j = \xi_j -\left(\sum_{i=1}^N a_{ij}\lambda_i + \mu\right). \label{eq_reduced cost}
\end{equation}
When RMP is solved to optimality, dual feasibility is guaranteed only for the rules included in $\hat{L}$.
A path violating the dual constraint (\ref{dual_constraint}) has a negative reduced cost and must be added to the RMP for the next iteration.
To identify the paths that maximally violate Eq (\ref{dual_constraint}), we need to solve $\min_j rc_j$.
We define the \emph{subproblem} as the search for some $K$ paths having the most negative reduced cost, which is a \emph{$K$-shortest path problem} (KSP) over the feature graph $G$, where the cost of a path is its reduced cost defined in (\ref{eq_reduced cost}). If no violating path exists, we have attained dual feasibility and the CG has converged.
The high-level view of the CG procedure is as follows: we start by solving the RMP with an empty set of candidate rules, $\hat{L}=\emptyset$, which sets the slack variables $s_i$ to 1 and yields an initial dual solution.
Next, we use these dual values to solve the subproblem and identify up to $K$ candidate rules having the most negative reduced cost. They are added to $\hat{L}$, the RMP is re-optimized to obtain a new primal-dual solution, and the process iterates.
The CG procedure converges to an optimal solution of the LP relaxation of \textsf{OMT} when dual feasibility is achieved, i.e., $rc_j \geq 0$ for all $j = 1,\cdots,L$.
After the CG procedure has converged, the binary restrictions are reimposed on $\mathbf{z}$ and we solve the resultant Master-MIP.
The CG procedure can be implemented exactly for provable optimality, or approximately to obtain a near-optimal decision tree. Prior works \cite{barnhart1998branch} have proposed a branch-and-price technique to solve the Master-MIP to provable optimality, which also requires the subproblem to be solved exactly
\cite{desrosiers2005primer}.
For simplicity and replicability, we employ a CG heuristic that solves the resultant Master-MIP directly using a standard optimization package \cite{cplex2020}.
As noted in \citealp{lubbecke2005selected}, the role of the subproblem is to provide a potential column or to prove that none exists.
Since any negative reduced-cost path contributes to this aim,
we employ a heuristic subproblem algorithm that is a path-dependent adaptation of a $K$-shortest path method for acyclic graphs with additive arc costs \cite{horne1980finding}. Our KSP method is similar to the resource-constrained shortest path heuristics used in CG applications \cite{desrosiers2005primer,desaulniers2006column} and has polynomial-time complexity $\mathcal{O}(K\eta|V|N\log N)$. Details are given in the appendix. In our experiments, the CG procedure terminates when the dual solution converges to within a tolerance or we reach a maximum iteration limit.
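Schematically, one CG iteration can be written as follows; \texttt{rmp} and \texttt{ksp} are hypothetical stand-ins for the LP solver and the KSP heuristic, not our actual interfaces.
\begin{verbatim}
def column_generation(rmp, ksp, K, max_iter=100, tol=1e-6):
    """High-level CG loop: solve the restricted master LP, pass the
    duals (lambda, mu) to the KSP subproblem, add any negative
    reduced-cost columns, and repeat until none remain or an
    iteration limit is hit."""
    for _ in range(max_iter):
        lam, mu = rmp.solve()                 # dual solution of the RMP
        columns = ksp(K=K, lam=lam, mu=mu)    # up to K candidate paths
        columns = [c for c in columns if c[0] < -tol]  # c[0] = reduced cost
        if not columns:                       # dual feasible: converged
            break
        rmp.add_columns(columns)
    return rmp                                # ready for the Master-MIP
\end{verbatim}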
To preserve interpretability, shallow trees with small depth $d$ are often preferred. The following result shows how this constraint dramatically shrinks the CG search space, especially for high-dimensional datasets.
\begin{proposition}
The constrained search space is $\mathcal{O}\big({k \choose d}\eta^d\big)$.
\end{proposition}
Comparing this result to the unconstrained setting shown in Proposition~\ref{prop_feature_space}, we see that restricting each path to contain no more than $d$ features significantly reduces the total number of feasible paths. In other words, in the worst case $\hat{L}\ll L$. For example, with $d=\eta=2$, $\hat{L}$ is bounded by $\mathcal{O}(k^2)$, whereas $L$ is $\mathcal{O}(2^k)$.
Prior MIP models are restricted to a fixed-depth binary tree representation, which requires categorical attributes to be encoded into a generic set of binary features. Doing so prevents them from exploiting the beneficial hierarchical feature graph structure used in our approach.
\section{Flexible framework}
\subsection{Nonlinear metrics}\label{sect_nonlinear_metrics}
Nonlinear metrics such as the F1-score, Matthews correlation coefficient, and Fowlkes–Mallows index are often used to evaluate the performance of machine learning models trained on imbalanced data \cite{nonlinearmetric2021}.
Existing MIP-based decision trees only consider linear metrics. In our approach, once the feature graph is defined, the set of samples $S_j$ satisfying a rule $j$ is known, and the metric associated with the rule can be computed directly in the subproblem.
These nonlinear metrics may be represented as $\xi_j$ and enter the RMP as input parameters in the objective, as shown in Section \ref{sect_formulation}. They can also enter the RMP as constraints; we provide an example of incorporating the F1-score in the appendix. This transformative modeling capability of CG is invaluable when solving optimization problems that are highly nonlinear and nonconvex in their original form.
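To make this concrete, the following minimal Python sketch (with hypothetical variable names; not the authors' implementation) evaluates the F1-score of a single candidate rule from its covered sample set $S_j$, producing the scalar $\xi_j$ that enters the RMP as a coefficient.
\begin{verbatim}
# Minimal sketch (hypothetical names): evaluating a nonlinear metric
# (here F1) for one candidate rule j once its covered samples S_j are known.

def f1_for_rule(covered, y, rule_class=1):
    """F1-score of predicting `rule_class` on the samples covered by the rule."""
    cov = set(covered)
    tp = sum(1 for i in cov if y[i] == rule_class)
    fp = len(cov) - tp
    fn = sum(1 for i in range(len(y)) if i not in cov and y[i] == rule_class)
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

y   = [1, 0, 1, 1, 0, 0]    # toy labels
S_j = [0, 2, 4]             # samples satisfying the conditions of rule j
xi_j = f1_for_rule(S_j, y)  # = 2/3; enters the RMP objective as a coefficient
\end{verbatim}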
\subsection{Constraint enforcement}\label{sect_constraints}
Existing optimal classification trees in the literature have been extended to incorporate constraints to address fairness and imbalance issues.
In this section, we show how our method provides an elegant and unified framework to handle constraints, including those which existing arc-based MIP formulations cannot efficiently manage.
\paragraph{Path-level constraints}
Prior MIP-based decision trees typically manage \emph{sample-level} constraints, e.g., constraining precision or recall conditioned on samples' class labels \cite{aghaei2020learning,gunluk2021optimal}, or fairness metrics such as statistical parity conditioned on sensitive features \cite{aghaei2019learning,aghaei2020learning}.
However, none of the existing MIP-based methods are able to efficiently handle constraints at \emph{path-level}, as the notion of ``path'' is not explicitly defined in the arc-based formulation.
Consider the example of cost-sensitive decision trees, which is motivated by the medical domain. Often, doctors must arrive at a diagnosis by taking into account the economic constraints faced by a patient when different test options involve a tradeoff between accuracy and measurement cost \cite{lomax2013survey, nunez1991use}.
Denoting by $\rho_j$ the cost associated with decision rule $j$ (which consists of several medical tests) and by $C$ the budget, we specify the following constraint: $\rho_j z_j\leq C$ for all $j=1,\cdots, L$. This constraint, which ensures that every selected diagnostic rule (with $z_j=1$) stays within the budget, can be processed within the KSP subproblem.
Additional examples
can be found in the appendix.
In general, \emph{path-level} constraints can be expressed as a set of polyhedral inequalities on $\mathbf{z}$, i.e., $\sum_{j=1}^L\rho_{mj}z_j\geq q_m, \forall m=1,\cdots, M.$
Let $\tau_m \geq 0$ denote the dual variables corresponding to these constraints. Then the reduced cost for path $j$ becomes
\begin{align*}
rc_j = \xi_j -\left(\sum_{i=1}^N a_{ij}\lambda_i + \mu +\sum_{m=1}^M \rho_{mj}\tau_m \right).
\end{align*}
These constraints influence the subproblem through the resultant reduced costs to ensure that paths that are more likely to be feasible are added to the RMP.
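A minimal sketch of this computation (illustrative; the variable names are ours, not the paper's):
\begin{verbatim}
# Sketch: reduced cost of a candidate path j, mirroring
# rc_j = xi_j - ( sum_i a_ij * lambda_i + mu + sum_m rho_mj * tau_m ).

def reduced_cost(xi_j, a_j, lam, mu, rho_j, tau):
    """a_j[i] = 1 if sample i is covered by path j; rho_j[m] is the
    coefficient of path j in side constraint m; lam, mu, tau are the
    current dual values of the RMP."""
    return xi_j - (sum(a * l for a, l in zip(a_j, lam)) + mu
                   + sum(r * t for r, t in zip(rho_j, tau)))
\end{verbatim}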
\begin{table*}[t] \small
\centering
\begin{tabular}{lllrccccc}
\toprule
dataset & $N$ &$d$ & OMT & OCT & BinOCT & FlowOCT & CART \\
\midrule
soybean-small & 47& 4 & 0.883±0.139 & 0.944±0.048 & 0.833±0.22 & 0.944±0.083 & \B 1.0±0.0 \\
soybean-small & 47& 5 & 0.95±0.075 & 0.972±0.096 & 0.722±0.21 & 0.972±0.083 & \B 1.0±0.0 \\
monks-3 & 122& 4 & 0.987±0.008 & 0.99±0.015 & 0.988±0.011 & 0.99±0.011 & \B 0.993±0.007 \\
monks-3 & 122& 5 & 0.987±0.008 & 0.978±0.014 & 0.983±0.015 & 0.99±0.012 & \B 0.993±0.007 \\
monks-1 & 124& 4 & \B 1.0±0.0 & \B 1.0±0.029 & \B 1.0±0.0 & \B 1.0±0.03 & 0.806±0.064 \\
monks-1 & 124& 5 & \B 1.0±0.0 & 0.935±0.142 & \B 1.0±0.0 & \B 1.0±0.03 & 0.787±0.042 \\
hayes-roth & 132& 4 & 0.75±0.075 & 0.75±0.038 & 0.642±0.138 & \B0.8±0.066 & 0.55±0.087 \\
hayes-roth &132& 5 & 0.77±0.089 & 0.75±0.076 & 0.575±0.066 & \B 0.817±0.029 & 0.708±0.058 \\
monks-2 & 169& 4 & 0.603±0.045 & \B 0.662±0.031 & 0.581±0.027 & \B 0.662±0.023 & 0.598±0.033 \\
monks-2 & 169& 5 & \B 0.779±0.03 & 0.662±0.05 & 0.607±0.03 & 0.662±0.055 & 0.651±0.055 \\
house-votes-84 & 232& 4 & 0.962±0.046 & \B 0.971±0.01 & 0.914±0.03 & \B 0.971±0.01 & 0.96±0.02 \\
house-votes-84 & 232& 5 & 0.955±0.026 & \B 0.971±0.01 & 0.96±0.01 & \B 0.971±0.01 & 0.96±0.02 \\
spect & 267& 4 & \B 0.834±0.04 & 0.801±0.086 & 0.746±0.065 & 0.791±0.079 & 0.731±0.026 \\
spect & 267& 5 & \B 0.803±0.039 & 0.791±0.06 & 0.721±0.048 & 0.796±0.085 & 0.731±0.03 \\
breast-cancer & 277& 4 & 0.74±0.046 & 0.724±0.038 & 0.662±0.091 & \B 0.743±0.079 & 0.69±0.058 \\
breast-cancer & 277& 5 & 0.669±0.094 & \B 0.738±0.0 & 0.567±0.157 & 0.676±0.036 & 0.671±0.049 \\
balance-scale & 625& 4 & \B 0.781±0.016 & 0.747±0.048 & 0.707±0.006 & 0.699±0.01 & 0.769±0.033 \\
balance-scale & 625& 5 & \B 0.772±0.012 & 0.735±0.01 & 0.565±0.1 & 0.72±0.045 & 0.762±0.013 \\
tic-tac-toe & 958& 4 & \B 0.802±0.039 & 0.776±0.073 & 0.786±0.021 & 0.757±0.032 & 0.758±0.019 \\
tic-tac-toe & 958& 5 & \B 0.82±0.061 & 0.711±0.025 & 0.812±0.029 & 0.788±0.05 & 0.778±0.046 \\
car-evaluation & 1728& 4 & \B 0.879±0.009 & 0.796±0.076 & 0.848±0.01 & 0.823±0.016 & 0.842±0.029 \\
car-evaluation & 1728& 5 & \B 0.91±0.005 & 0.742±0.041 & 0.815±0.052 & 0.8±0.016 & 0.857±0.019 \\
kr-vs-kp & 3196& 4 & \B 0.96±0.006 & 0.847±0.094 & 0.938±0.012 & 0.94±0.011 & 0.94±0.011 \\
kr-vs-kp & 3196& 5 & \B 0.968±0.003 & 0.652±0.098 & 0.847±0.164 & 0.946±0.057 & 0.94±0.011 \\
\bottomrule
\end{tabular}
\caption{Mean ± standard deviation of out-of-sample accuracy on the small/medium datasets.}\label{table_small_data}
\end{table*}
\paragraph{Attribute-level constraints}
Existing MIP methods are also challenged by \emph{attribute-level} constraints: complex nonlinear conditions involving several features that cannot be efficiently abstracted into linear constraints, e.g., disallowing certain feature combinations.
These constraints are easily handled within the KSP subproblem as a feasibility check while extending a partial path to the next node in $G$.
A practical example of an attribute-level requirement is the need to preserve an attribute hierarchy.
For example, in the medical domain, doctors may perform temperature checks or blood tests before proceeding to more advanced tests. Enforcing such a hierarchy makes the decision tree more reliable and comprehensible for domain experts \cite{nanfack2022constraint}. This can be easily achieved by appropriately arranging the nodes in the feature graph.
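As an illustration, the sketch below (hypothetical data structures; the actual subproblem is a k-shortest-path search) shows how a resource budget and a feature hierarchy reduce to a constant-time feasibility test when a partial path is extended by one node in $G$.
\begin{verbatim}
# Sketch: rule-level feasibility checks during partial-path extension in
# the KSP (hypothetical data structures).

def can_extend(path, path_cost, node, cost, budget, must_precede):
    """True if appending `node` keeps the partial path feasible.
    must_precede[f]: features that must already be on the path before f."""
    if path_cost + cost[node] > budget:               # resource constraint
        return False
    if not must_precede.get(node, set()) <= set(path):
        return False                                  # hierarchy violated
    return True

# Example: a blood test may only follow a temperature check.
cost = {"temp_check": 1.0, "blood_test": 5.0, "mri": 50.0}
must_precede = {"blood_test": {"temp_check"}}
print(can_extend(["temp_check"], 1.0, "blood_test", cost, 10.0, must_precede))  # True
print(can_extend([], 0.0, "blood_test", cost, 10.0, must_precede))              # False
\end{verbatim}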
\subsection{Cumulative binning on numerical features} \label{sect_cumulative_binning}
For tree-based approaches, numerical inputs are typically handled via thresholding. For example, consider a numerical feature with values in $[0, 1]$ divided into 3 intervals, e.g., $[0, 0.33)$, $[0.33, 0.67)$ and $[0.67, 1.0]$. One can transform this numerical feature into a categorical feature with 3 values and create 3 nodes representing this feature in the graph. This approach may be limiting, however, as a binary-split tree can branch on conditions such as $x\leq 0.67$ or $x> 0.33$. To address this limitation, we employ \emph{cumulative binning}, where intervals may overlap: we create additional nodes representing the intervals $[0, 0.67)$ and $[0.33, 1.0]$, yielding a total of 5 nodes.
The coverage constraints (\ref{constraint_coverage}) ensure that the final rule set does not contain overlapping samples. In our experiments, quantile discretization is used to create the intervals. More generally, with $\kappa$ thresholds, cumulative binning results in $\mathcal{O}(\kappa^2)$ nodes, i.e., a gain in a rule's expressiveness at the cost of higher computational effort.
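A minimal sketch of the binning step (assuming the thresholds have already been computed, e.g., from quantiles): with $\kappa$ thresholds there are $\kappa+1$ base intervals, and every contiguous union of base intervals except the trivial full range becomes a node, giving $(\kappa+1)(\kappa+2)/2-1 = \mathcal{O}(\kappa^2)$ nodes.
\begin{verbatim}
# Sketch: cumulative binning of a numerical feature with range [lo, hi].

def cumulative_bins(thresholds, lo=0.0, hi=1.0):
    cuts = [lo] + sorted(thresholds) + [hi]
    k = len(cuts) - 1                      # number of base intervals
    return [(cuts[i], cuts[j])
            for i in range(k) for j in range(i + 1, k + 1)
            if not (i == 0 and j == k)]    # drop the trivial full range

print(cumulative_bins([0.33, 0.67]))
# [(0.0, 0.33), (0.0, 0.67), (0.33, 0.67), (0.33, 1.0), (0.67, 1.0)]
# i.e., the 5 nodes of the example above
\end{verbatim}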
\section{Experiments}\label{sect_experiments}
While {OMT} is a general framework which can
also be used in regression settings,
we focus on classification tasks in our experiments as most of the existing ODTs are classification trees.
We group our experiments by dataset size, where
\textit{small/medium} datasets are loosely defined as having a few thousand samples,
and a dataset is considered \textit{large} if it has many thousands of samples and several hundred binary features \cite{dash2018boolean}.
To benchmark our approach \textsf{OMT}, we implement the following MIP-based methods, i.e., \textsf{FlowOCT} \cite{aghaei2020learning}, \textsf{BinOCT} \cite{verwer2019learning} and \textsf{OCT} \cite{bertsimas2017optimal}.
Although \textsf{CART} cannot produce constrained decision rules, we still include it as a baseline.
As the benchmarks produce binary-split trees of a given depth $d$, we construct a comparable multiway-split tree via the cardinality constraint, restricting the number of leaf nodes $l$ to at most $2^d$, i.e., $l\le 2^d$, and limiting the rule length to $d$.
CPLEX 20.1 \cite{cplex2020} was used to solve the MIP-based methods.
\textsf{CART} was trained with scikit-learn \cite{pedregosa2011scikit} using default hyper-parameters.
The minimum number of samples per rule for \textsf{OMT} was set to 1\% of the training data. The maximum value for $K$ in the KSP was set to 1000 for all instances except the large dataset experiments, where it was reduced to 100 to stay within the RAM limit. We set a maximum CG iteration limit of 40 and an $\hat{L}$ limit of 10,000. All experiments were run on an Intel 8-core i7 PC with 32GB RAM. Details on the experiment setup can be found in the appendix.
\subsection{Small/medium datasets}
We evaluate on the same 12 classification datasets from the UCI repository \cite{dua2017uci} that were used for \textsf{FlowOCT} \cite{aghaei2020learning}, which is considered the state-of-the-art ODT.
We closely follow the experiment setup in \citealp{aghaei2020learning} to construct decision trees with $d\in\{2,3,4,5\}$.
We create 5 random splits for each dataset into training (50\%), validation (25\%), and test sets (25\%).
A time limit of 20 minutes is imposed on each experiment, in contrast to the one-hour limit used in \citealp{aghaei2020learning}.
Table \ref{table_small_data} reports the
achieved out-of-sample accuracy averaged over five splits for $d\in\{4,5\}$ (the complete results across all depths can be found in the appendix). The best accuracy in a given row is reported in \textbf{bold}. Among the 48 (dataset, depth) combinations, \textsf{OMT} achieves the best accuracy in 56.3\% of the cases, \textsf{FlowOCT} in 33.3\%, \textsf{OCT} in 29.2\%, \textsf{BinOCT} in 8.3\%, and \textsf{CART} in 14.6\% (including ties). These results demonstrate that our \textsf{OMT} method achieves competitive and often superior results compared to MIP-based binary-split ODT models.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{fig/runtime}
\caption{Runtime performance on a total of 240 instances}\label{fig_runtime1}
\end{figure}
Next, we analyze the solution time taken by different MIP-based methods. We define an ``instance'' as a unique (dataset, depth, data split) combination. With 5 random data splits, there are $12\times4\times5=240$ instances in total.
In Figure \ref{fig_runtime1}, the x-axis shows the solution time in seconds, while the y-axis shows the number of instances solved. For the MIP-based ODT benchmarks, a solution time under the time limit implies that the instance is solved to (near) optimality.
Figure \ref{fig_runtime1} shows that \textsf{OMT} solves the most instances within the 20-minute time limit, followed by \textsf{FlowOCT}, \textsf{BinOCT} and \textsf{OCT}. Specifically, 188 out of 240 instances were solved under 1 minute by \textsf{OMT}, in contrast to 137, 95 and 59 instances by \textsf{FlowOCT}, \textsf{BinOCT} and \textsf{OCT} respectively; 230 out of 240 instances were solved under 10 minutes by \textsf{OMT},
compared to 175, 118 and 90 instances by the benchmarks.
Most of the instances left unsolved at the time limit are medium datasets with $d\geq3$.
Table \ref{table_runtime} reports the solution times of the different MIP methods with respect to tree depth; the solution time of unsolved instances is capped at 20 minutes. Because the large discrepancies in data and feature sizes produce skewed runtime distributions, we report median values. As \textsf{CART} finishes all instances in seconds, we omit its results from the table. {In general, \textsf{CART}'s solutions can be infeasible as it ignores many of the constraints we consider in this work.}
Table \ref{table_runtime} shows that the state-of-the-art MIP-based benchmark, \textsf{FlowOCT}, takes the least time per instance among all methods, including \textsf{OMT}, at $d=2$. However, it experiences a nearly 20X increase in runtime as $d$ increases from 2 to 3, and a further 4.5X increase at $d=4$. We believe this understates the true growth, as the solution time of many unsolved instances at higher depths is capped at 20 minutes.
The runtime growth is even steeper for \textsf{OCT} and \textsf{BinOCT}, whose number of binary variables increases exponentially with $d$. In contrast, for \textsf{OMT}, the relative runtime increases are merely 1.4X, 0.68X and 0.4X as $d$ increases from 2 to 5.
Overall, our speedup over \textsf{FlowOCT} at $d=4$ and 5 is 10.3X and 24.6X respectively. The exact \textsf{OMT} speedup is likely even higher when runtimes are compared at equal solution quality.
{The strong CG performance can also be explained by examining the gap between the objective value achieved by the Master-MIP ($\nu_{IP}$) and the final RMP ($\nu_{LP}$), i.e., $\Delta = (\nu_{IP} - \nu_{LP})/\nu_{IP}$. A table that reports on this MIP-LP gap with respect to $d$ is included in the appendix.
In particular, the median gap is no more than 0.3\% across all depths, suggesting that the path-based LP relaxation is a relatively strong approximation of the nonconvex discrete OMT model. This tight gap also underscores the advantage of converging to a relatively small subset of high quality paths from which an effective decision tree can be distilled using a standard MIP solver. }
\begin{table}\small
\centering
\begin{tabular}{rrrrr}
\toprule
$d$ & OMT & FlowOCT & BinOCT & OCT \\
\midrule
2 & 3.9 & 1.9 & 6.1 & 33.9 \\
3 & 9.3 & 32.5& 727.8 & 1200.4 \\
4 & 15.6 & 177.0 & 1201.9 & 1202.5 \\
5 & 21.9 & 561.4 & 1203.5 & 1205.9 \\
\bottomrule
\end{tabular}
\caption{Median runtime (seconds) for MIP-based methods}
\label{table_runtime}\end{table}
\begin{table*}[t]
\small
\makebox[\textwidth][c]{
\begin{tabular}{cccccccccc}
\toprule
& & & & \multicolumn{5}{c}{Univariate} & \multicolumn{1}{c}{Multivariate} \\
\cmidrule(lr){5-9}\cmidrule(lr){10-10}
dataset & $N$ & $k$ & $d$ & OMT & OCT & BinOCT & FlowOCT & CART & S1O \\
\midrule
pendigits & 7494 & 16 & 2 & \B0.395$^*$ & 0.348 & 0.291 & 0.329 & 0.352/0.362 & 0.389 \\
pendigits & 7494 & 16 & 3 & \B0.685$^*$ & 0.517 & 0.347 & 0.459 & 0.558/0.579 & 0.625 \\
avila & 10430 & 10 & 2 & 0.475 & 0.461 & 0.096 & \B0.509 & 0.501/0.503 & 0.526$^*$ \\
avila & 10430 & 10 & 3 & 0.524 & 0.417 & 0.403 & 0.504 & \textbf{0.527}/0.535 & 0.558$^*$ \\
EEG & 14980 & 14 & 2 & \B0.658 & 0.602 & 0.584 & 0.648 & 0.624/0.586 & 0.665$^*$ \\
EEG & 14980 & 14 & 3 & \B0.690$^*$ & 0.572 & 0.493 & 0.649 & 0.659/0.642 & 0.665 \\
HTRU & 17898 & 8 & 2 & 0.973 & 0.977 & 0.705 & 0.956 & \textbf{0.978}/0.973 & 0.978$^*$ \\
HTRU & 17898 & 8 & 3 & 0.977 & 0.978 & 0.552 & 0.973 & \textbf{0.979}/0.981$^*$ & 0.979 \\
shuttle & 43500 & 9 & 2 & \B0.968$^*$ & 0.821 & 0.285 & 0.920 & 0.939/0.938 & 0.940 \\
shuttle & 43500 & 9 & 3 & 0.984 & 0.793 & 0.390 & 0.914 & \textbf{0.996}/0.997 & 0.995$^*$ \\
skin & 245057 & 3 & 2 & 0.875$^*$ & 0.899 & 0.774 & 0.802 & \textbf{0.907}/0.806 & 0.863 \\
skin & 245057 & 3 & 3 & \B0.967$^*$ & 0.793 & 0.855 & 0.801 & 0.965/0.871 & 0.949 \\
\bottomrule
\end{tabular}
}
\caption{Out-of-sample accuracy on the large datasets from \cite{NEURIPS2020_1373b284}.}
\label{table_zhu_dataset_full}
\end{table*}%
\subsection{Large datasets}
We test our method on the six largest datasets analyzed in the MIP-based ODT literature \cite{NEURIPS2020_1373b284}.
We follow the same experiment setup described earlier but limit runtime to one hour to account for the larger data size.
As in \citealp{NEURIPS2020_1373b284}, we focus on $d\in\{2,3\}$. Table \ref{table_zhu_dataset_full} summarizes the average out-of-sample accuracy of the different MIP approaches. The \emph{left} entries under column ``CART'' show the \textsf{CART} results that we obtain.
None of the arc-based MIP methods obtains an optimal solution on any of the 12 large instances at $d=3$ within the time limit; \textsf{FlowOCT} wins in one instance, while \textsf{OCT} and \textsf{BinOCT} win none. On the other hand, \textsf{OMT} wins 6 out of 12 cases and
\textsf{CART} 5 out of 12. {While the quality improvement over \textsf{CART} is not dramatic, the latter is unable to handle constraints, which is a key practical feature of our approach.} {Meanwhile, we improve upon the average misclassification error achieved by \textsf{FlowOCT} by 8.4\%.}
Table \ref{table_zhu_dataset_full} also includes a column ``S1O'', taken from \citealp{NEURIPS2020_1373b284}. For ease of comparison, we include their \textsf{CART} values as the right entries under column ``CART'' in Table \ref{table_zhu_dataset_full}. We point out that a direct comparison of the two is not quite fair. Firstly, \textsf{OMT} is a univariate tree with axis-parallel splits, whereas \textsf{S1O} is a multivariate (oblique) tree, in which a split at a node may use multiple variables, i.e., hyperplanes. Such multivariate splits tend to be much stronger than univariate splits, as shown in \cite{bertsimas2017optimal}, at the expense of interpretability.
Furthermore, the time limit for \textsf{S1O} reported in \cite{NEURIPS2020_1373b284} was four hours, while we limit \textsf{OMT} to one hour ({the average \textsf{OMT} runtime per instance is 413 and 835 seconds at $d = 2$ and 3, respectively}). Finally, instead of running on the raw data as \textsf{OMT} does, \textsf{S1O} employs an LP-based data-selection preprocessing step.
Nevertheless, we benchmark \textsf{OMT} against \textsf{S1O} (and their \textsf{CART} results) in Table \ref{table_zhu_dataset_full}, with the best achieved accuracy marked with a star ($*$). Of the 12 cases, \textsf{OMT} still wins 6, while \textsf{S1O} and \textsf{CART} win 5 and 2 cases respectively.
These results highlight our method's ability to produce competitive results in a relatively short training time, and without sacrificing interpretability.
Lastly, we stress-test our method by analyzing challenging datasets in the UCI repository that are an order of magnitude larger than those reported for prior optimal ODT methods. We increase the degree of difficulty in two dimensions: More samples (up to one million) and more raw features (up to $k=175$).
For such large datasets, we found a negligible change in solution quality across random seeds, so we solved each instance once using a fixed random seed and a six-hour time limit; the results are reported in Table \ref{table_additional_data}. As none of the prior optimal ODTs can process data of this size, we compare the achieved solution quality to \textsf{CART}.
Note via Proposition~\ref{prop_feature_space} that setting $d = 2$ for the \textit{crop-mapping} dataset ($\eta=4$) yields $\hat{L} \leq 16\times175^2$, versus $L=4^{175}$. We improve upon \textsf{CART}'s test accuracy in 6 of the 8 instances, with an average runtime of 3.7 hours per large instance.
\begin{table}[t]
\centering\small
\begin{tabular}{lrrrrr}
\toprule
dataset & $N$ & $k$ & $d$ & OMT & CART \\
\midrule
MiniBooNE & 130065 & 50 & 2 & \B0.850 & 0.835 \\
MiniBooNE & 130065 & 50 & 3 & \B0.873 & 0.865 \\
crop mapping & 325834 & 175 & 2 & \B0.769 & 0.679 \\
crop mapping & 325834 & 175 & 3 & \B0.841 & 0.797 \\
covertype & 581012 & 54 & 2 & 0.667 & \B0.668 \\
covertype & 581012 & 54 & 3 & \B0.684 & 0.677 \\
susy & 1008372 & 18 & 2 & 0.744 & \B0.748 \\
susy & 1008372 & 18 & 3 & \B0.760 & 0.755 \\
\bottomrule
\end{tabular}
\caption{Out-of-sample accuracy on additional large datasets}\label{table_additional_data}
\end{table}
\section{Conclusion}
In this paper, we propose a scalable and flexible mixed-integer optimization framework based on column generation (CG) to identify
optimal multiway-split decision trees with constraints.
We present a novel path-based MIP formulation for decision trees where the number of columns or binary variables is independent of the training data size.
Our method can generate both classification and regression trees by minimizing any of the commonly used nonlinear error metrics while still solving a linear MIP model. As each categorical feature only enters a rule once, multiway-split trees are easier to comprehend than their binary-split counterparts.
A first ``plain vanilla'' CG implementation is tested on several public datasets ranging up to a million data samples. We are able to achieve performance comparable or superior to that of the state-of-the-art optimal classification tree methods in the literature while consuming only a small fraction of their runtime.
Furthermore, our CG based formulation is able to seamlessly handle a wide variety of practical constraints that cannot be efficiently managed by prior ODT models or popular heuristics like \textsf{CART}.
The current framework has to discretize numerical features. One way to handle numerical features \emph{without} discretization is to employ a preprocessing step as in \textsf{BinOCT} \cite{verwer2019learning}, which we leave as future work.
| {
"timestamp": "2023-02-15T02:06:07",
"yymm": "2302",
"arxiv_id": "2302.06812",
"language": "en",
"url": "https://arxiv.org/abs/2302.06812",
"abstract": "There has been a surge of interest in learning optimal decision trees using mixed-integer programs (MIP) in recent years, as heuristic-based methods do not guarantee optimality and find it challenging to incorporate constraints that are critical for many practical applications. However, existing MIP methods that build on an arc-based formulation do not scale well as the number of binary variables is in the order of $\\mathcal{O}(2^dN)$, where $d$ and $N$ refer to the depth of the tree and the size of the dataset. Moreover, they can only handle sample-level constraints and linear metrics. In this paper, we propose a novel path-based MIP formulation where the number of decision variables is independent of $N$. We present a scalable column generation framework to solve the MIP optimally. Our framework produces a multiway-split tree which is more interpretable than the typical binary-split trees due to its shorter rules. Our method can handle nonlinear metrics such as F1 score and incorporate a broader class of constraints. We demonstrate its efficacy with extensive experiments. We present results on datasets containing up to 1,008,372 samples while existing MIP-based decision tree models do not scale well on data beyond a few thousand points. We report superior or competitive results compared to the state-of-art MIP-based methods with up to a 24X reduction in runtime.",
"subjects": "Machine Learning (cs.LG)",
"title": "Scalable Optimal Multiway-Split Decision Trees with Constraints",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517456453798,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.708960638981192
} |
https://arxiv.org/abs/2203.08978 | Flooding in weighted sparse random graphs of active and passive nodes | This paper discusses first passage percolation and flooding on large weighted sparse random graphs with two types of nodes: active and passive nodes. In mathematical physics passive nodes can be interpreted as closed gates where fluid flow or water cannot pass through and active nodes can be interpreted as open gates where water may keep flowing further. The model of this paper has many applications in real life, for example, information spreading, where passive nodes are interpreted as passive receivers who may read messages but do not respond to them. In the epidemic context passive nodes may be interpreted as individuals who self-isolate after having a disease to stop spreading the disease any further. When all weights on edges between active nodes and between active and passive nodes are independent and exponentially distributed (but not necessarily identically distributed), this article provides an approximation formula for the weighted typical flooding time. | \section{Introduction}
First passage percolation is one of the classical models in probability theory and mathematical physics, introduced by Hammersley and Welsh \cite{HW} in $1965$ as a generalization of Bernoulli percolation \cite{BH,DC}. Its lasting interest is due to the model's simplicity and its various applications, from theoretical physics to biology \cite{ME, GK, AP, GM, OG}.
The first study dates back to $1957$, when Broadbent and Hammersley \cite{BH} studied
the first passage time of fluid flow through the random properties of a porous medium.
The porous medium can be modelled as a random undirected connected graph $G$ where each undirected edge $e$ carries an independent nonnegative random weight $W_e$ drawn from a distribution $L_e$. The weight $W_e$ represents the transmission time of fluid flow from one node to another.
Hammersley and Welsh \cite{HW} defined a transmission time from $a$ to $b$ along a connected path $\pi$ on $G$ by
\begin{align}
\label{definition: The active transmitter time of the path}
t_G(\pi)= \sum_{e\in\pi}W_e.
\end{align}
The first passage time from $a$ to $b$ on $G$ is defined by
\begin{align*}
\tau_G(a,b) = \inf_{\pi\in S}t_G(\pi),
\end{align*}
where $S$ is the set of all possible paths from $a$ to $b$.
The flooding time of node $a$ in $G$ is defined by
\begin{align}
\flood_G(a) = \max_b\tau_G(a,b),
\end{align}
where the maximum is taken over all nodes $b$ in $G$. \\
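Computationally, both quantities reduce to a single-source shortest-path problem. The following self-contained Python sketch (purely illustrative, not from the source) computes $\tau_G(a,\cdot)$ with Dijkstra's algorithm and can be used to check the constants below empirically on moderately sized graphs.
\begin{verbatim}
import heapq, math, random

def first_passage_times(adj, a):
    """Dijkstra. adj[u] = list of (v, w_uv); returns tau(a, b) for all b."""
    tau = {u: math.inf for u in adj}
    tau[a] = 0.0
    heap = [(0.0, a)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > tau[u]:
            continue                       # stale heap entry
        for v, w in adj[u]:
            if d + w < tau[v]:
                tau[v] = d + w
                heapq.heappush(heap, (tau[v], v))
    return tau

def flooding_time(adj, a):
    return max(first_passage_times(adj, a).values())

# Complete graph with i.i.d. Exp(1) weights: flood(a)/(log n / n) ~ 2.
n = 300
adj = {u: [] for u in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        w = random.expovariate(1.0)
        adj[u].append((v, w)); adj[v].append((u, w))
print(flooding_time(adj, 0) / (math.log(n) / n))   # roughly 2 for large n
\end{verbatim}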
When $G$ is the complete graph with $n$ nodes and the edge weights are independent with identical exponential distribution $L_e = \Exp(1)$, Janson \cite{J} showed in $1998$ that
\begin{align}
\frac{\tau_G(a,b)}{\log n/n}&\prto 1,\\
\frac{\flood_G(U_n)}{\log n/n}&\prto 2,\\
\frac{\max_a\flood_G(a)}{\log n/n}&\prto 3,
\end{align}
where $U_n$ is a uniformly chosen node in $[n]=\{1,2,3,...,n\}$ and notation "$\prto$" means convergence in probability.
These results have inspired many subsequent studies in various settings (see e.g.\;\cite{DKLP,BHH, BHH17, ADL, AL, PAL, LN19}).
When $G = G(n, (d_i)_{i=1}^{n})$ is a sparse random graph with a given degree sequence $(d_i)_{i=1}^{n}$ constructed via the configuration model \cite{B,RHI} and the edge weights are independent of each other and identically exponentially distributed with parameter $1$ ($L_e = \Exp(1)$), it has been shown in \cite{ADL} that
\begin{align}
\frac{\tau_G(a,b)}{\log n}&\prto \frac{1}{\nu-1},\\
\frac{\flood_G(U_n)}{\log n}&\prto \frac{1}{\nu-1}+\frac{1}{\dmin},\\
\frac{\max_a\flood_G(a)}{\log n}&\prto \frac{1}{\nu-1}+\frac{2}{\dmin},
\end{align}
where $\dmin = \min_{1\le i\le n} d_i\ge 3$,
$\nu = \lim_{n\to\infty}\frac{\E(d_{U_n}^2)}{\E(d_{U_n})}-1\gr 1$ and $U_n$ is a uniformly chosen node in $[n]$.\\
This paper generalizes the model of Hammersley and Welsh by including passive nodes and studies how these nodes slow down the typical flooding time on a large weighted sparse random graph.
\section{Definitions and notations}
Let $G = (V,E)$ be a finite undirected random graph, where $V$ is the set of nodes and $E$ is the set of edges. In this paper it is assumed that the set of nodes $V$ consists of active nodes $V_1$ and passive nodes $V_2$; the set of edges $E$ consists of edges from active to active $E_{11}$, from active to passive $E_{12}$ and from passive to passive $E_{22}$. The indices $i=1,2$ are used to refer to different types of nodes: active and passive nodes. The size of set $V$ is denoted by $n := \abs{V}$. Note that $V = V_1\cup V_2$ and $n = n_1 + n_2$, where $n_1$ and $n_2$ are the number of active and passive nodes, respectively. By symmetry, we see that $E_{12} = E_{21}$, and hence $E = E_{11}\cup E_{12}\cup E_{22}$.
\subsubsection*{Walkable path}
A path of length $\ell$ from $a$ to $b$ is a sequence $\pi:= (v_0,e_1,v_1,e_2,\dots, v_{\ell-1},e_\ell,v_\ell)$, where the nodes $v_0 = a, v_1,\dots, v_\ell=b$ are in $V$ (not necessarily distinct) and the edges $e_j=\{v_{j-1},v_j\}$ ($j=1,2,\dots,\ell$) are in $E$. A path $\pi$ from $a$ to $b$ is said to be \textit{walkable} if the nodes $(v_j)_{j\ls\ell}$ in $\pi$ are active.
An inverse path of $\pi$ is defined by $\pi^{-1}:= (v_\ell,e_{\ell},v_{\ell-1},\dots,e_2,v_1,e_1,v_0)$.
We say that a path $\pi$ is \textit{strongly walkable}, if also its inverse path is walkable. Note that all paths in undirected (active) subgraph $G_1: = (V_1,E_{11})$ are strongly walkable.
\subsection{Weighted random graphs}
An undirected graph $G$ equipped with random weights $W = \{W_e\}$ on its edges is called a \textit{weighted random undirected graph}.
\subsubsection*{Weighted first passage time}
The \textit{weighted first passage time} from $a$ to $b$ on $G$ is defined by
\begin{align*}
\tau(a,b) = \inf_{\pi\in S} \sum_{e\in\pi} W_e,
\end{align*}
where $S$ is the set of all walkable paths from $a$ to $b$. If there does not exist a walkable path from $a$ to $b$ (\ie $S = \emptyset$), we set $\tau(a,b) = \infty$.
\subsubsection*{Weighted flooding times}
The weighted flooding time on $G$ of an active node $a$ in $V_1$ is defined by
\begin{align*}
\label{definition:flooding time}
\flood(a) &= \max\{\tau(a,b)\ls \infty : b\in V\}.
\end{align*}
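Algorithmically, restricting to walkable paths changes the standard shortest-path computation in a single place: the search may reach a passive node but never relaxes edges out of it. A minimal sketch under this interpretation (illustrative; not from the source):
\begin{verbatim}
import heapq, math

def walkable_first_passage(adj, active, a):
    """Dijkstra over walkable paths. adj[u] = list of (v, w); `active` is
    the set of active nodes. Passive nodes can be reached but are never
    expanded, so every non-terminal node of a found path is active."""
    tau = {u: math.inf for u in adj}
    tau[a] = 0.0
    heap = [(0.0, a)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > tau[u] or u not in active:
            continue                 # stale entry, or passive: no forwarding
        for v, w in adj[u]:
            if d + w < tau[v]:
                tau[v] = d + w
                heapq.heappush(heap, (tau[v], v))
    return tau                       # tau(a,b) = inf means no walkable path

def flood(adj, active, a):
    """Maximum of the finite first passage times from the active node a."""
    return max(t for t in walkable_first_passage(adj, active, a).values()
               if t < math.inf)
\end{verbatim}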
\subsection{General configuration model}
\label{configuration model of three types of half-edges}
A classical way to construct an undirected random graph $G$ with a given degree sequence $(d_i)_{i=1}^{n}$ is to use the configuration model introduced by \Bollobas \cite{B} (see also e.g.\;\cite{RHI}). This classical configuration model has only one type of nodes and half-edges: half-edges are matched uniformly at random, one pair at a time, until none are left. Let us introduce in the following a \textit{general configuration model with two types of nodes} (see the related work \cite{CO,SB}). In this general configuration model, type-$1$ and type-$2$ nodes can be anything; they need not be active or passive nodes. \\
Let $\bd = \{(d_{11}(v))_{v\in V_1}, (d_{12}(v))_{v\in V_1}, (d_{21}(v))_{v\in V_2}, (d_{22}(v))_{v\in V_2}\}$ be a collection of integer sequences satisfying the following properties:
\begin{itemize}
\item[(i)] $\sum_{v\in V_1}d_{11}(v)$ and $\sum_{v\in V_2}d_{22}(v)$ are even,
\item[(ii)]
\label{balance condition}
$\sum_{v\in V_1}d_{12}(v) = \sum_{v\in V_2}d_{21}(v)$ (\textit{the sum condition of bipartite graph}).
\end{itemize}
For each node $u$ in $V_1$ we attach $d_{11}(u)$ and $d_{12}(u)$ labelled elements called type-$11$ and type-$12$ half-edges, respectively. Similarly, each node $v$ in $V_2$ is attached with $d_{22}(v)$ and $d_{21}(v)$ labelled elements called type-$22$ and type-$21$ half-edges, respectively. We pair half-edges uniformly at random until no half-edges are left, according to the following rules:
\begin{enumerate}
\label{matching 1}
\item Type-$ii$ half-edges are paired with each other for all $i=1,2$,
\item Each type-$12$ half-edge is paired to a type-$21$ half-edge.
\end{enumerate}
A pair of matched half-edges in above form the different types of edges: type-$ij$ edges, where $i,j = 1,2$. The matchings may result in self-loops or parallel edges, and hence the obtained graph is called a multigraph, denoted by $\tG = (V,\bd)$, where $V=V_1\cup V_2$. Note that in this construction $\tG$ has three multi-subgraphs \ie $\tG = \tG_1\cup \tG_2\cup \tG_3$, where $\tG_1:=(V_1, \bd_{11})$, $\tG_2:=(V_2, \bd_{22})$, $\tG_3:=(V, \bd_{12}\cup\bd_{21})$ (obtained by pairing half-edges according to the above rules $1$ and $2$) and $\bd_{ij} = (d_{ij}(v))_{v\in V_i}$ for all $i,j= 1,2$.
\\
A graph is said to be \textit{simple} if it contains no self-loops and no parallel edges. Conditional on $\tG$ being simple, we obtain a simple random graph with given degree sequence $\bd$, denoted $G=G_{1}\cup G_{2}\cup G_{3}$ \cite{BS13,J09}, where all subgraphs $G_k$ ($k=1,2,3$) are simple and uniformly distributed with the given degree sequences \cite{RHI}. We see that $G$ is simple if and only if all its subgraphs $G_k$ are simple. To ensure that the probability of $\tG$ being simple stays away from zero, we assume that the degree sequences $\bd_{11}$ and $\bd_{22}$ satisfy the \Erdos--Gallai conditions \cite{MOA} and that the degree sequence $\bd_{12}\cup\bd_{21}$ satisfies the Gale--Ryser conditions \cite{G57}.
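The pairing rules translate directly into a sampling procedure. The sketch below (illustrative; the conditioning on simplicity, i.e., rejecting self-loops and parallel edges, is omitted) constructs the multigraph $\tG$ by shuffling lists of labelled half-edges.
\begin{verbatim}
import random

def two_type_multigraph(d11, d12, d21, d22):
    """Configuration-model pairing with two node types (sketch).
    d11, d12: dicts of degrees over type-1 nodes; d21, d22: over type-2."""
    stubs = lambda deg: [v for v, d in deg.items() for _ in range(d)]
    edges = []
    for deg in (d11, d22):              # rule 1: type-ii paired internally
        s = stubs(deg)
        assert len(s) % 2 == 0          # evenness condition (i)
        random.shuffle(s)
        edges += list(zip(s[::2], s[1::2]))
    s12, s21 = stubs(d12), stubs(d21)   # rule 2: type-12 with type-21
    assert len(s12) == len(s21)         # bipartite sum condition (ii)
    random.shuffle(s12); random.shuffle(s21)
    edges += list(zip(s12, s21))
    return edges                        # may contain loops/parallel edges
\end{verbatim}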
\section{Model descriptions}
\label{model}
Let $G^{(\kappa)} = (V^{(\kappa)},\bd^{(\kappa)})$ be a random graph of active and passive nodes constructed as in Section \ref{configuration model of three types of half-edges}, where the index $\kappa$ takes positive integer values. All edges in $G^{(\kappa)}$ are attached with random weights $W_e^{(\kappa)}$ as follows:
\begin{itemize}
\item $W_e^{(\kappa)} \st \Exp(\lambda_{11})$ if $e\in E_{11}^{(\kappa)}$,
\item $W_e^{(\kappa)} \st \Exp(\lambda_{12})$ if $e\in E_{12}^{(\kappa)}$,
\item all edge weights mentioned above are independent,
\end{itemize}
where rate parameters $\lambda_{11}$ and $\lambda_{12}$ are strictly positive.
\subsection{Conditions for nodes}
The numbers of active and passive nodes are assumed to have the same order as $\kappa$, i.e., $n_1^{(\kappa)}=\btheta(\kappa)$ and $n_2^{(\kappa)}=\btheta(\kappa)$, implying also that $n^{(\kappa)} =\btheta(\kappa)$.
The empirical type-$i1$ degree distribution is defined by
\[p_{i1}^{(\kappa)}(j) = \frac{\abs{\{v\in V_i^{(\kappa)}:d_{i1}^{(\kappa)}(v) = j\}}}{n_i^{(\kappa)}}\]
for all integers $j\ge 0$, where $i=1,2$. The degree sequence $\bd^{(\kappa)}$ is assumed to satisfy the following two conditions: the regularity conditions (Condition \ref{regularity conditions}, introduced in \cite{MR} or \cite{RHII}) and the Blanchet and Stauffer conditions (Condition \ref{asymptotic bipartite graph conditions} \cite{BS13}, asymptotic bipartite graph conditions).
\begin{condition}
\label{regularity conditions}
\textit{(Regularity conditions)}. There exist limiting distributions $p_{i1} = (p_{i1}(j))_{j\in\N}$ ($i=1,2$) and positive integer $\kappa_0$ such that
\begin{enumerate}
\item[(1)] $p_{i1}^{(\kappa)}(j)\to p_{i1}(j)$ for all integers $j\ge 1$ as $\kappa\to\infty$,
\item[(2)] $\sum_j j^{2+\epsilon} p_{i1}(j) = O(1)$ for some $\epsilon\gr 0$ (bounded moment of order $2+\epsilon$),
\item[(3)] $\min_{v\in V_i}d_{i1}^{(\kappa)}(v) = \delta_{i1}$ for all $\kappa\ge \kappa_0$ and $p_{i1}(\delta_{i1})\gr 0$, where $\delta_{11}\ge 3$ and $\delta_{21}\ge 1$.
\end{enumerate}
\end{condition}
The means of the limiting distribution $p_{11}$ and of its downshifted size-biased
distribution (see the definition in \cite{LN17}) are denoted by
\begin{align*}
\mu_{11} := \sum_{j}j p_{11}(j)\quad\text{and}\quad \nu_{11} := \frac{1}{\mu_{11}}\sum_{j}j(j-1)p_{11}(j),
\end{align*}
respectively, and both means are assumed to be strictly positive numbers.\\
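Both means are immediate to evaluate from a candidate limiting distribution; e.g., in the $3$-regular case $p_{11}(3)=1$ one gets $\mu_{11}=3$ and $\nu_{11}=2$. A short sketch:
\begin{verbatim}
def mu_nu(p):                      # p: dict degree j -> probability p11(j)
    mu = sum(j * q for j, q in p.items())
    nu = sum(j * (j - 1) * q for j, q in p.items()) / mu
    return mu, nu

print(mu_nu({3: 1.0}))             # (3.0, 2.0): 3-regular active subgraph
print(mu_nu({3: 0.5, 4: 0.5}))     # (3.5, 2.571...): mixed degrees
\end{verbatim}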
Let us denote $N = \sum _{v\in V_1}d_{12}^{(\kappa)}(v) = \sum _{v\in V_2}d_{21}^{(\kappa)}(v)$.
We order degree sequences $(d_{12}^{(\kappa)}(v))_{v\in V_1}$ and $(d_{21}^{(\kappa)}(v))_{v\in V_2}$ in decreasing orders as $s_1\ge s_2\ge\cdots\ge s_{n_1}$ and $t_1\ge t_2\ge\cdots\ge t_{n_2}$, respectively. Denote $s = \max_i s_i$ and $t = \max_i t_i$.
\begin{condition} (Blanchet and Stauffer conditions \cite{BS13}).
\label{asymptotic bipartite graph conditions}
The following two conditions hold:
\begin{enumerate}
\item[(i)]\
\begin{align*}
\sum_i\sum_j s_i(s_i-1)t_j(t_j-1) = O(N^2).
\end{align*}
\item[(ii)] For any $m\ge 1$,
\begin{align*}
\sum_{i= t \wedge m}^{n_1} s_i &= \Omega(N),\\
\sum_{i= s \wedge m}^{n_2} t_i &= \Omega(N).
\end{align*}
\end{enumerate}
\end{condition}
\subsection{Main result}
\label{main result}
\begin{theorem}
Consider a sequence of random simple graphs $G^{(\kappa)}=(V^{(\kappa)},\bd^{(\kappa)})$ so that its degree sequence $\bd^{(\kappa)}$ satisfies Conditions \ref{regularity conditions} and \ref{asymptotic bipartite graph conditions}. Then for a uniformly chosen active node $A$ in $G^{(\kappa)}$,
\begin{align*}
\frac{\flood(A)}{\log\kappa} &\prto \frac{1}{\lambda_{11}(\nu_{11}-1)} + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}
\end{align*}
as $\kappa$ tends to infinity.
\end{theorem}
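As a worked instance (a sanity check under concrete parameter values, not part of the proof): take $\lambda_{11}=\lambda_{12}=1$ and a $3$-regular active subgraph, so that $p_{11}(3)=1$, $\nu_{11}=2$ and $\delta_{11}=3$, together with $\delta_{21}=1$. Then
\begin{align*}
\frac{\flood(A)}{\log\kappa} \prto \frac{1}{1\cdot(2-1)} + \frac{1}{3\wedge 1} = 2,
\end{align*}
whereas the corresponding single-type limit of \cite{ADL} with $\dmin=3$ is $1 + \tfrac{1}{3}$; passive nodes with few active neighbours thus dominate the second term and slow down the flooding.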
\section{Proof of main result}
To keep the notation simple, the index $\kappa$ is dropped from now on (unless otherwise mentioned), and we write $n_1, n_2, n, V_1, d_{11}$ and so on. Note that $n_i\to\infty$ if and only if $\kappa\to\infty$.\\
By assumption the minimum type-$11$ degree is at least $3$, and hence $G_1$ is connected \whp (see e.g.\;\cite{ADL}[Lemma 2.1]). Since each passive node has at least one active neighbour ($\delta_{21}\ge 1$), we conclude that \whp $G$ is connected. Furthermore, for any two given nodes $a$ in $V_1$ and $b$ in $V$, \whp there exists a walkable path from $a$ to $b$ in $G$. Note that $G$ may still remain connected even if its subgraphs $G_2$ and $G_3$ are not. To ease the analysis of the weighted flooding time we may remove all edges between two passive nodes, since their edge weights do not affect the flooding time.
\subsection{Proof of upper bound}
In this section it will be shown that for any $\epsilon\gr 0$ \whp,
\begin{align}
\label{typical flooding time: upper bound}
\flood(A) &\le \Big(\frac{1}{\lambda_{11}(\nu_{11}-1)} + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}\ + \epsilon\Big)\log \kappa.
\end{align}
We notice that the flooding time of $A$ consists of two parts: flooding from $A$ to all active nodes and flooding from $A$ to all passive nodes. These quantities are defined by
\begin{align*}
\flood_1(A) = \max_{b\in V_1}\tau(A,b)\quad\quad\text{and}\quad\quad \flood_2(A) = \max_{b\in V_2}\tau(A,b)
\end{align*}
and are called the \textit{flooding time 1} and the \textit{flooding time 2}, respectively. In the following subsections, upper bounds for flooding times $1$ and $2$ are derived.
\subsubsection{Upper bound of flooding time 1}
\subsubsection*{Definitions and notations}
The $t$-radius \textit{active neighbourhood} of $a$ in $V_1$ is defined by
\begin{align*}
B_1(a,t) = \{b\in V_1: \tau(a,b)\le t\}
\end{align*}
Note that, restricted to the set $V_1$, $\tau$ is a metric on $V_1$. The time to reach $k$ active nodes is defined by
\begin{align*}
T_a(k) = \min\{t\ge 0: \abs{B_1(a,t)}\ge k+1\}
\end{align*}
Denote scale parameters (depending on $\kappa$)
\begin{align*}
\alpha &= \lfloor \log^3 n_1 \rfloor,\\
\beta &=\Big\lfloor 3\sqrt{\tfrac{\mu_{11}}{\nu_{11}-1}n_1\log n_1} \Big\rfloor,
\end{align*}
and define $T_A(\alpha,\beta) = T_A(\beta) -T_A(\alpha)$.
Let us introduce the following three results from \textit{Amini and Lelarge} \cite{AL} (alternatively, see \cite{DKLP}).
\begin{proposition}
\label{weighted flooding time 1: two balls with the size of beta intersect}
With high probability,
\begin{align*}
\tau(a,b) \le T_a(\beta) + T_b(\beta)
\end{align*}
for all $a$ and $b$ in $V_1$.
\end{proposition}
\begin{lemma}
\label{weighted flooding time 1: typical time to reach alpha active nodes}
For a uniformly chosen active node $A$ in $V_1$ and any $\epsilon\gr 0$,
\begin{align*}
\pr\Big(T_A(\alpha) \ge \Big(\frac{1}{\lambda_{11}\delta_{11}} + \epsilon\Big)\log n_1\Big) = o(n_1^{-1}).
\end{align*}
\end{lemma}
\begin{lemma}
\label{weighted flooding time 1: typical time to reach from alpha to beta active nodes}
For a uniformly chosen active node $A$ in $V_1$ and any $\epsilon\gr 0$,
\begin{align*}
\pr\Big(T_A(\alpha,\beta) \ge \Big(\frac{1}{2\lambda_{11}(\nu_{11}-1)}+\epsilon\Big)\log n_1\Big) = o(n_1^{-1}).
\end{align*}
\end{lemma}
Applying the above three results, we obtain the following.
\begin{proposition} For any $\epsilon\gr 0$ \whp,
\label{weighted flooding time 1: time to reach beta active nodes}
\begin{enumerate}
\item [(1)]$T_A(\beta) \le \frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \epsilon\log n_1$,
\item [(2)]$T_a(\beta) \le \frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1}{\lambda_{11}\delta_{11}}\log n_1 + \epsilon\log n_1$\:\: for all $a\in V_1$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item[(1)] By Lemmas \ref{weighted flooding time 1: typical time to reach alpha active nodes} and \ref{weighted flooding time 1: typical time to reach from alpha to beta active nodes} \whp,
\begin{align*}
T_A(\beta) &= T_A(\alpha) + T_A(\alpha,\beta)\\
&\le \frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + 2\epsilon\log n_1.
\end{align*}
\item[(2)] By Lemmas \ref{weighted flooding time 1: typical time to reach alpha active nodes} and \ref{weighted flooding time 1: typical time to reach from alpha to beta active nodes} \whp,
\begin{align*}
\max_a T_a(\beta) &= \max_a \big(T_a(\alpha) + T_a(\alpha,\beta)\big)\\
&\le\frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \epsilon\log n_1 + \max_a T_a(\alpha)\\
&\le\frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1}{\lambda_{11}\delta_{11}}\log n_1 + 2\epsilon\log n_1.\\
\end{align*}
\end{enumerate}
\end{proof}
\begin{theorem} For any $\epsilon\gr 0$ \whp,
\label{weighted flooding time 1:upper bound}
\begin{align*}
\flood_1(A)\le\frac{1}{\lambda_{11}(\nu_{11}-1)}\log \kappa + \frac{1}{\lambda_{11}\delta_{11}}\log \kappa + \epsilon\log \kappa.
\end{align*}
\end{theorem}
\begin{proof}
By Propositions \ref{weighted flooding time 1: two balls with the size of beta intersect} and \ref{weighted flooding time 1: time to reach beta active nodes} \whp,
\begin{align*}
\flood_1(A) &= \max_u\tau(A,u)\\
&\le \max_u \big(T_A(\beta) + T_{u}(\beta)\big)\\
&\le T_A(\beta) + \max_u T_{u}(\beta)\\
&\le\frac{1}{\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1}{\lambda_{11}\delta_{11}}\log n_1 + 2\epsilon\log n_1.
\end{align*}
The claim follows now by using assumption $n_1(\kappa) = \Theta(\kappa)$.
\end{proof}
\subsubsection{Upper bound of flooding time 2}
We define the active neighbourhood and the number of active neighbours of $v$ in $V$ by
\[
N_1(v) = \{a\in V_1: \{a,v\} \in E\}\quad\quad\text{and}\quad\quad \deg_1(v) = \abs{N_1(v)},
\]
respectively. The quantity $\deg_1(v)$ is called the type-$1$ degree of $v$, interpreted as the number of active neighbours of node $v$. Note that in this paper $\deg_1(v)$ is not random, since the degree sequences $(d_{11}(v))_{v\in V_1}$ and $(d_{21}(v))_{v\in V_2}$ are fixed.\\
Recall that every passive node $b$ is assumed to have at least one active neighbour ($\delta_{21}\ge 1$). Hence, for any passive node $b$ in $V_2$ there exists an active neighbour of $b$, denoted by $u_b$, that attains the minimum edge weight $W_b = \min_{a\in N_1(b)}W_{ab}$ among the edges between $b$ and its active neighbours. Since the minimum of $d$ independent $\Exp(\lambda_{12})$-distributed weights is $\Exp(d\lambda_{12})$-distributed, $W_b$ is stochastically dominated by an $\Exp(\delta_{21}\lambda_{12})$-distributed variable; this is the source of the term $\lambda_{12}\delta_{21}$ in the main result. Let us introduce the following proposition.
\begin{proposition}
\label{weighted flooding time 2; upper bound: two balls with the size of beta nodes intersect}
With high probability,
\begin{align}
\tau(a,b) \le T_a (\beta) + T_{u_b}(\beta) + W_b
\end{align}
for all $a$ in $V_1$ and $b$ in $V_2$.
\end{proposition}
\begin{proof}
By the definition of the first passage time and Proposition \ref{weighted flooding time 1: two balls with the size of beta intersect} \whp,
\begin{align*}
\tau(a,b)&\le \tau(a,u_b) + W_b\\
&\le T_a(\beta) + T_{u_b}(\beta) + W_b
\end{align*}
for all $a\in V_1$ and $b\in V_2$.
\end{proof}
\begin{proposition}
\label{weighted flooding time 2: time to reach alpha + 1 active nodes}
For a uniformly chosen passive node $B$ in $V_2$, any $0\le x\le 1$ and any $\epsilon\gr 0$,
\begin{align*}
\pr\left( T_{u_B}(\alpha_{n_1}) + W_B \ge \Big( \frac{x}{\lambda_{11} \delta_{11} \wedge \lambda_{12}\delta_{21}} + \epsilon \Big) \log n_1 \right)
= o(n_1^{-x}).
\end{align*}
\begin{proof}
Let $G_{\delta_{11}}$ be a $\delta_{11}$-regular random graph on $n_1$ nodes (see the definition in \cite{DKLP}). Let $u^*$ be a uniformly chosen node and $T_{u^*}(\alpha_{n_1})$ be the time to reach $\alpha_{n_1}$ nodes in $G_{\delta_{11}}$. Denote $t_{n_1} = \Big(\frac{x}{\lambda_{11} \delta_{11} \wedge \lambda_{12}\delta_{21}} + \epsilon \Big) \log n_1$ and let $X$ be an exponential random weight with rate parameter $\lambda_{11}\delta_{11}\gr 0$, independent of $T_{u^*}(\alpha_{n_1})$ and $G_{\delta_{11}}$. By the stochastic orderings $T_{u_B}(\alpha_{n_1})\lest T_{u^*}(\alpha_{n_1})$ and $W_B\lest X$, and the independence of the random variables,
\begin{align*}
T_{u_B}(\alpha_{n_1}) + W_B \lest T_{u^*}(\alpha_{n_1}) + X.
\end{align*}
Now by \cite{LN19}[Proposition 1] $\pr(T_{u^*}(\alpha_{n_1}) + X \ge t_{n_1}) = o(n_1^{-x})$, and hence,
\begin{align*}
\pr( T_{u_B}(\alpha_{n_1}) + W_B \ge t_{n_1})\le \pr(T_{u^*}(\alpha_{n_1}) + X \ge t_{n_1}) = o(n_1^{-x}).
\end{align*}
\end{proof}
\end{proposition}
\begin{proposition} For any $\epsilon\gr 0$ \whp,
\label{weighted flooding time 2: time to reach beta active nodes}
\begin{enumerate}
\item [(i)]$T_{u_B}(\beta) + W_B \le \frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \epsilon\log n_1$,
\item [(ii)]$T_{u_b}(\beta) + W_b \le \frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}\log n_1 + \epsilon\log n_1$\\ for all $b\in V_2$.
\end{enumerate}
\end{proposition}
\begin{proof} By Proposition \ref{weighted flooding time 2: time to reach alpha + 1 active nodes}, Lemma \ref{weighted flooding time 1: typical time to reach from alpha to beta active nodes} and the union bound \whp,
\begin{enumerate}
\item[(i)]
\begin{align*}
T_{u_B}(\beta) + W_B & = T_{u_B}(\alpha) + W_B + T_{u_B}(\alpha,\beta)\\
&\le \frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + 2\epsilon\log n_1.
\end{align*}
\item[(ii)]
\begin{align*}
\max_b (T_{u_b}(\beta) + W_b) &= \max_b \big(T_{u_b}(\alpha) + W_b + T_{u_b}(\alpha,\beta)\big)\\
&\le\frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \epsilon\log n_1 + \max_b\ \big(T_{u_b}(\alpha) + W_b\big)\\
&\le\frac{1}{2\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}\log n_1 + 2\epsilon\log n_1.\\
\end{align*}
\end{enumerate}
\end{proof}
\begin{theorem} For any $\epsilon\gr 0$ \whp,
\label{weighted flooding time 2:upper bound}
\begin{align*}
\flood_2(A)\le\frac{1}{\lambda_{11}(\nu_{11}-1)}\log \kappa + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}\log \kappa + \epsilon\log \kappa.
\end{align*}
\end{theorem}
\begin{proof}
By Propositions \ref{weighted flooding time 1: time to reach beta active nodes}, \ref{weighted flooding time 2; upper bound: two balls with the size of beta nodes intersect} and \ref{weighted flooding time 2: time to reach beta active nodes} \whp,
\begin{align*}
\flood_2(A) &= \max_b\tau(A,b)\\
&\le \max_b \big(T_A(\beta) + T_{u_b}(\beta) + W_b\big)\\
&\le T_A(\beta) + \max_b\big(T_{u_b}(\beta) + W_b\big)\\
&\le\frac{1}{\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}\log n_1 + 2\epsilon\log n_1.
\end{align*}
The claim now follows from the assumption $n_1(\kappa) = \Theta(\kappa)$ and the arbitrariness of $\epsilon$.
\end{proof}
\subsubsection{Proof of \eqref{typical flooding time: upper bound}}
Taking the maximum of the flooding times $\flood_1(A)$ and $\flood_2(A)$, we have
\begin{align*}
\flood(A) = \max(\flood_1(A),\flood_2(A)).
\end{align*}
Applying Theorems \ref{weighted flooding time 1:upper bound} and \ref{weighted flooding time 2:upper bound} we conclude that \whp,
\begin{align*}
\flood(A)\le \Big(\frac{1}{\lambda_{11}(\nu_{11}-1)}+\frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}}\Big)\log \kappa +\epsilon\log \kappa.
\end{align*}
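As an aside, and not as part of the proofs, the mechanism behind these bounds can be illustrated numerically. The following minimal Python sketch (ours; the toy graph, rates and the active/passive split are purely hypothetical) computes the weighted flooding time on a small two-type graph in which passive nodes receive but never relay:
\begin{verbatim}
import heapq, random

def flooding_time(adj, active, source):
    """Dijkstra-style first passage: only active nodes relay."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # stale heap entry
        if u != source and not active[u]:
            continue                  # passive: receive, do not relay
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return max(dist.values()) if len(dist) == len(adj) else float("inf")

random.seed(1)
n, lam = 8, 1.0
active = {i: i < 5 for i in range(n)}     # nodes 0..4 active, 5..7 passive
adj = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if active[i] or active[j]:        # no passive-passive edges
            w = random.expovariate(lam)
            adj[i].append((j, w))
            adj[j].append((i, w))
print(flooding_time(adj, active, source=0))
\end{verbatim}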
\subsection{Proof of lower bound}
In this section we show that for any $\epsilon\gr 0$ \whp,
\begin{align}
\label{typical broadcast time: lower bound}
\flood(A) &\ge \Big(\frac{1}{\lambda_{11}(\nu_{11}-1)} + \frac{1}{\lambda_{11}\delta_{11}\wedge\lambda_{12}\delta_{21}} - \epsilon\Big) \log \kappa.
\end{align}
\subsection*{Notation and definitions}
The (weighted) first passage time from a subset $S\subset V_1$ to a node $y$ in $V$ is defined by
\[
\tau(S,y) = \inf_{x\in S}\tau(x,y).
\]
The set of active nodes within time $t\ge 0$ from the active neighbourhood $N_1(v)$ of a node $v$ in $V$ is defined by
\[B'(v,t) = \{a\in V_1: \tau(N_1(v),a)\le t\}.\]
Since the weights on the edges incident to $v$ play no role in $B'(v,t)$, in particular for passive nodes $v$ in $V_2$, we obtain from \cite{ADL} (replacing $\dmin$ by bounded degrees in the proof of Proposition 4.13; see also \cite{AL}[Proposition 4.2] and \cite{DKLP}[Lemma 3.5]) the following proposition.
\begin{proposition}
\label{lower bound proofs: two balls B' with size t_n1 do not intersect}
For any $\epsilon\gr 0$ and any two distinct nodes $u$ and $v$ in $V$ with bounded degrees $\deg_1(u) = O(1)$ and $\deg_1(v)=O(1)$ \whp,
\[
B'(u,t_{n_1}) \cap B'(v,t_{n_1}) = \emptyset,
\]
where $t_{n_1} =\frac{1-\epsilon}{\lambda_{11}(\nu_{11}-1)}\log n_1$. If node $u$ (resp., $v$) is chosen uniformly at random, the same result holds without the condition $\deg_1(u) = O(1)$ (resp., $\deg_1(v) = O(1)$).
\end{proposition}
\textit{Proof of \eqref{typical broadcast time: lower bound}}.
The claim follows by applying the assumptions $n_1(\kappa) = \Theta(\kappa)$ and $n_2(\kappa) = \Theta(\kappa)$ to the following proposition.
\begin{proposition}
For a uniformly chosen active node $A$ in $V_1$ and any $\epsilon\gr 0$ \whp,
\begin{align*}
\flood(A)\ge \frac{1-\epsilon}{\lambda_{11}(\nu_{11}-1)}\log n_1 + \max_{i=1,2}\Big(\frac{1-\epsilon}{\lambda_{1i}\delta_{i1}}\log n_i\Big).
\end{align*}
\end{proposition}
\begin{proof}
It is sufficient to show that, \whp, each group $V_i$ contains a node $v_i^*$ such that
\begin{align}
\label{broadcast time: lower bound proof; the existences of bad nodes}
\tau(A,v_i^*)\ge \frac{1-\epsilon}{\lambda_{11}(\nu_{11}-1)}\log n_1 + \frac{1-\epsilon}{\lambda_{1i}\delta_{i1}} \log n_i
\end{align}
for all $i=1,2$.\\
Denote $a=\frac{1-\epsilon}{2\lambda_{11}(\nu_{11}-1)}\log n_1$ and $b_i=\frac{1-\epsilon}{\lambda_{1i}\delta_{i1}}\log n_i$ (both depending on $\kappa$), where $i=1,2$. Let $S_{\delta_{i1}}$ be the set of type-$i$ nodes with minimum degree $\delta_{i1}$. A node in $S_{\delta_{i1}}$ is said to be bad if all of its $\delta_{i1}$ edge weights are greater than $b_i$ ($i=1,2$). Let $C^{(i)}_v$ be the event that node $v$ in $S_{\delta_{i1}}$ is bad. Since the minimum of $\delta_{i1}$ independent exponential weights with rate $\lambda_{1i}$ is exponentially distributed with rate $\lambda_{1i}\delta_{i1}$, the probability of this event is
\begin{align*}
\pr(C^{(i)}_v) = \pr\Big(\min_{u\in N_1(v)}W_{uv}\ge b_i\Big) = e^{-\lambda_{1i}\delta_{i1} b_i} = n_i^{-1+\epsilon}.
\end{align*}
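As an illustrative aside (not part of the proof), this identity for the minimum of independent exponential weights can be checked by a quick Monte Carlo experiment; all parameters below are hypothetical:
\begin{verbatim}
import math, random

# Check: Pr(min of d independent Exp(lam) weights >= b) = exp(-lam*d*b),
# with b chosen as b_i above, so the probability equals n**(-1+eps).
random.seed(0)
lam, d, n, eps = 1.0, 3, 10_000, 0.3
b = (1 - eps) / (lam * d) * math.log(n)
trials = 200_000
hits = sum(min(random.expovariate(lam) for _ in range(d)) >= b
           for _ in range(trials))
print(hits / trials, n ** (-1 + eps))   # both roughly 1.6e-3
\end{verbatim}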
Let $M_i = \sum_{v}\1_{C^{(i)}_v}$ be the number of type-$i$ bad nodes in $S_{\delta_{i1}}$. Then the expected value and the variance of $M_i$ satisfy
\begin{align*}
\E(M_i) &= \sum_v\pr(C^{(i)}_v) = \abs{S_{\delta_{i1}}} n_i^{-1+\epsilon} = (p_{i1}(\delta_{i1}) + o(1)) n_i^{\epsilon},\\
\Var(M_i) &= \sum_{u,v}\cov(\1_{C^{(i)}_u},\1_{C^{(i)}_v}) \\
&= \sum_{u}\var(\1_{C^{(i)}_u}) + \sum_u \sum_{v\in N_1(u)}\cov(\1_{C^{(i)}_u},\1_{C^{(i)}_v})\\
&\le (\delta_{i1} + 1) \E(M_i).
\end{align*}
By Chebyshev's inequality \whp,
\begin{align}
\label{the numbers of bad nodes}
M_i\ge \frac{1}{2}\E(M_i) = \frac{1}{2} (p_{i1}(\delta_{i1}) + o(1))n_i^{\epsilon}.
\end{align}
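In more detail, Chebyshev's inequality together with the variance bound above gives
\[
\pr\Big(M_i < \tfrac{1}{2}\,\E(M_i)\Big) \le \frac{\Var(M_i)}{\big(\E(M_i)/2\big)^2} \le \frac{4(\delta_{i1}+1)}{\E(M_i)} \longrightarrow 0,
\]
since $\E(M_i) = \Theta(n_i^{\epsilon}) \to \infty$.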
Let $N_i$ be the number of type-$i$ bad nodes which take at most $2a + b_i$ time to reach from $A$. Since the weights on the edges incident to $A$ and $v$ do not play a part, Proposition \ref{lower bound proofs: two balls B' with size t_n1 do not intersect} gives
\begin{align*}
\pr(C^{(i)}_v,B'(A,t_{n_1}) \cap B'(v,t_{n_1}) \neq \emptyset) = o(\pr(C^{(i)}_v))
\end{align*}
and, since for a bad node $v$ disjointness of $B'(A,t_{n_1})$ and $B'(v,t_{n_1})$ forces $\tau(A,v) \gr 2a + b_i$ (all edges incident to $v$ have weight greater than $b_i$),
\begin{align*}
\E(N_i) \le \sum_{v} \pr(C^{(i)}_v,\,B'(A,t_{n_1}) \cap B'(v,t_{n_1}) \neq \emptyset) = o(\E(M_i)).
\end{align*}
By Markov's inequality \whp $N_i\le\frac{1}{4}\E(M_i)$. Hence \whp,
\begin{align*}
M_i-N_i\ge\frac{1}{4}\E(M_i) =\frac{1}{4}(p_{i1}(\delta_{i1})+ o(1))n_i^{\epsilon}\ge 1,
\end{align*}
where $i=1,2$. This implies that \whp there exist the desired nodes $v_i^*$ satisfying \eqref{broadcast time: lower bound proof; the existences of bad nodes}.
\end{proof}
\newpage
| {
"timestamp": "2022-03-18T01:08:51",
"yymm": "2203",
"arxiv_id": "2203.08978",
"language": "en",
"url": "https://arxiv.org/abs/2203.08978",
"abstract": "This paper discusses first passage percolation and flooding on large weighted sparse random graphs with two types of nodes: active and passive nodes. In mathematical physics passive nodes can be interpreted as closed gates where fluid flow or water cannot pass through and active nodes can be interpreted as open gates where water may keep flowing further. The model of this paper has many applications in real life, for example, information spreading, where passive nodes are interpreted as passive receivers who may read messages but do not respond to them. In the epidemic context passive nodes may be interpreted as individuals who self-isolate themselves after having a disease to stop spreading the disease any further. When all weights on edges between active nodes and between active and passive nodes are independent and exponentially distributed (but not necessary identically distributed), this article provides an approximation formula for the weighted typical flooding time.",
"subjects": "Probability (math.PR)",
"title": "Flooding in weighted sparse random graphs of active and passive nodes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517450056273,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7089606385174545
} |
https://arxiv.org/abs/0909.5388 | A Universal Crease Pattern for Folding Orthogonal Shapes | We present a universal crease pattern--known in geometry as the tetrakis tiling and in origami as box pleating--that can fold into any object made up of unit cubes joined face-to-face (polycubes). More precisely, there is one universal finite crease pattern for each number n of unit cubes that need to be folded. This result contrasts previous universality results for origami, which require a different crease pattern for each target object, and confirms intuition in the origami community that box pleating is a powerful design technique. | \section{Introduction}
An early result in computational origami is that every polyhedral surface
can be folded from a large enough square of paper
\cite{Demaine-Demaine-Mitchell-2000}.
But each such folding uses a different crease pattern.
Into how many different shapes can a single crease pattern fold?
Our motivation is developing programmable matter out of a foldable sheet.
The idea is to statically manufacture a sheet with specific creases,
and then dynamically program how much to fold each crease in the sheet.
Thus a single manufactured sheet can be programmed to fold into anything that
the single crease pattern (or a subset thereof) can fold.
We prove a universality result: a single $n \times n$ crease pattern
can fold into all face-to-face gluings of $O(n)$ cubes.
Thus, by setting the resolution $n$ sufficiently large,
we can fold any 3D solid up to a desired accuracy.
Our crease patterns are finite (rectangular) portions
of a single infinite tiling.
The \emph{tetrakis tiling} \cite{Gruenbaum-Shephard-1987}
is formed from the unit square grid by subdividing
each square in half vertically, horizontally, and by the two diagonals,
forming eight right isosceles triangles; see Figure~\figref{tetrakis}.
Note the scaling: one unit is the side length of an original square.
Equivalently, the tetrakis tiling can be formed from a half-unit square grid
with squares filled alternately with positive- and negative-slope diagonals.
\begin{figure}
\centering
\includegraphics[scale=0.6]{tetrakis}
\caption{A $4 \times 4$ region of the tetrakis tiling.}
\figlab{tetrakis}
\end{figure}
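As an aside, the tetrakis tiling is easy to generate programmatically. The following Python sketch (ours, not part of the paper) enumerates the crease segments of an $n \times n$ region, in half-unit integer coordinates so that one original unit square spans $2 \times 2$:
\begin{verbatim}
def tetrakis_segments(n):
    segs = set()
    for i in range(n):
        for j in range(n):
            x, y = 2 * i, 2 * j                  # lower-left corner
            c = (x + 1, y + 1)                   # centre of the square
            corners = [(x, y), (x + 2, y), (x + 2, y + 2), (x, y + 2)]
            mids = [(x + 1, y), (x + 2, y + 1), (x + 1, y + 2), (x, y + 1)]
            for p in corners + mids:             # diagonals and midlines,
                segs.add(frozenset((p, c)))      # split at the centre
            for k in range(4):                   # boundary, split at midpoints
                segs.add(frozenset((corners[k], mids[k])))
                segs.add(frozenset((mids[k], corners[(k + 1) % 4])))
    return segs

print(len(tetrakis_segments(1)))   # 16 = 8 spokes + 8 boundary half-edges
\end{verbatim}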
A \emph{tetrakis crease pattern} is a rectangular region (with integer
coordinates) of the tetrakis tiling.
Tetrakis crease patterns are similar to a style of origami
called \emph{box pleating} in which all creases are horizontal,
vertical, or diagonal, but their endpoints may not lie on the integer grid.
Box pleating was developed and popularized by Neal Elias in the 1960s
\cite{Kirschenbaum-Elias}, and explored more mathematically in
\cite{Lang-2003-secrets}.
Our universality result may have been suspected by origami artists,
but has not been proved until now.
\section{Definitions}
We start with a few definitions about origami,
specified somewhat informally for brevity.
For more formal definitions, see \cite[ch.~11]{Demaine-O'Rourke-2007}.
For our purposes, a \emph{piece of paper} is a two-dimensional surface
(formally, a metric 2-manifold).
A \emph{crease pattern} is a graph drawn on the piece of paper
with straight edges and no crossings; edges are called \emph{creases}.
An \emph{angle assignment} is an assignment of real numbers in
$[-180^\circ,+180^\circ]$ to creases in the crease pattern,
specifying a fold angle (negative for valley, positive for mountain).
We allow a crease to be assigned an angle of~$0$,
in which case we call the crease \emph{trivial},
though we do not draw trivial creases in figures.
A crease pattern and angle assignment determine the 3D geometry of a folded
state, where each face maps as a 3D polygon via a rigid motion (isometry).
A \emph{folded state} consists of this geometry together with an
\emph{overlap order} defining the stacking relationship among faces of the
crease pattern that touch in the 3D geometry, allowing the paper to touch
but not cross itself.
We will specify such overlap orders visually using diagrams
that exaggerate the infinitesimal space between layers.
A \emph{polycube} is an interior-connected union of unit cubes from the
unit cube lattice. The \emph{dual graph} of a polycube has a vertex
for each unit cube and an edge between two vertices whose corresponding
cubes share a face.
By the interior-connected property, the dual graph is connected.
The \emph{faces} of the polycube are the (square) faces of the individual cubes
that are not shared by any other cubes.
A \emph{folding of a polycube} is a folded state that covers all faces
of the polycube, and nothing outside the polycube. In particular, we
allow the folded state to cover squares of the cubic lattice
interior to the polycube. (Indeed, our foldings cover all such squares.)
A face of the folded polycube is \emph{seamless} if the outermost layer of
paper covering it is an uncreased unit square of paper.
\section{Folding Polycubes}
In this section, we describe an algorithm for folding a given $n$-cube
polycube from a square sheet of paper with crease pattern equal
to an $O(n) \times O(n)$ region of the tetrakis tiling.
\begin{theorem} \theolab{rectangle seam}
Any polycube of $n$ cubes can be folded from a tetrakis crease pattern
on a $(4 n + 1) \times (2 n + 1)$ rectangle of paper,
with all faces seamless and made from one side of the paper,
except for one specified face which has seams.
\end{theorem}
\begin{proof}
The base case is $n=1$.
Figures~\figref{OneCubeCrease} and \figref{Extrusion} show the crease
pattern and folded state, respectively, for a single cube folded from a
$5 \times 3$ rectangular sheet of paper, which is within the desired bound.
For this base case, we fold just the shaded $5 \times 3$ part of the crease
pattern in Figure~\figref{OneCubeCrease}, making exactly the desired cube.
Although hidden in Figure~\figref{Extrusion}, the bottom face of the
cube is indeed covered, with seams. All other faces are seamless.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{one_cube_edge_labels}
\caption{Crease pattern for a folding a single unit cube. Red line segments
are mountain folds by $180^\circ$, orange segments are $90^\circ$ mountain
folds, green segments are $90^\circ$ valley folds, and blue segments are
$180^\circ$ valley folds.}
\figlab{OneCubeCrease}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{InductiveStepLayered2}
\caption{Folding of a unit cube from the crease pattern
in Figure~\protect\figref{OneCubeCrease}.}
\figlab{Extrusion}
\end{figure}
It remains to prove the inductive step.
Consider a polycube $P$ of $n$ cubes
and a specified face $g \in P$ for seams.
Let $b$ be the unique cube having $g$ as a face.
Let $T$ be a spanning tree of the dual graph of~$P$.
Because every tree has at least two leaves,
$T$ has a leaf corresponding to a cube $l \neq b$.
Let $t$ be the unique cube sharing a face with~$l$,
and let $f$ be the face shared by $t$ and~$l$.
Now consider the polycube $P' = P \setminus \{l\}$, with $n-1$ cubes.
Because $l \neq b$, $g$ remains a face of~$P'$.
By induction, the $(4(n-1)+1) \times (2(n-1)+1)$ tetrakis crease pattern
$C'$ folds into $P'$ with an angle assignment~$A'$,
without seams on all faces but~$g$.
By symmetry we can assume that all faces are made
from the top side of the paper.
We modify $(C',A')$ as follows to obtain an angle assignment $A$ for the
$(4 n + 1) \times (2 n + 1)$ tetrakis crease pattern $C$ folding into~$P$,
without seams on all faces but~$g$.
Because $f$ is a face of $P'$ and $f \neq g$,
the folding of $(C',A')$ has $f$ seamless.
Hence there is a unique unit square $s$ of $C'$
corresponding to the outermost layer of paper covering $f$.
Suppose that $s$ lies in row $i$ and column $j$ of~$C'$.
We insert two columns to the left of column~$j$, two columns to the right of
column~$j$, a single row above row~$i$, and a single row below row~$i$,
and add the crease pattern shown in Figure~\figref{InsertionStep}.
In particular, we have replaced $s$ by the $5 \times 3$ crease pattern for a
unit cube, and we filled the remainder of the added rows and columns
with the creases shown in Figure~\figref{OneCubeCrease}.
We also reflect the creases in the original row $i$ into the added rows,
and similarly for the creases in the original column $j$, as described below.
The new crease pattern $C$ has width $(4(n-1)+1)+4 = 4 n + 1$
and height $(2(n-1)+1)+2 = 2 n + 1$ as desired.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{CreasePrePostInsertion}
\caption{Modifying $(C',A')$ (green) to produce $(C,A)$.
Crease coloring is the same as Figure~\protect\figref{OneCubeCrease}.}
\figlab{InsertionStep}
\end{figure}
Finally we show that the constructed crease pattern $C$ and
angle assignment $A$ fold into the desired polycube~$P$.
We construct the folded state in two steps.
First, we fold using just the inserted crease pattern,
producing a sheet with a cube $l$ sticking out in place of~$s$,
as in Figure~\figref{Extrusion}.
As mentioned in the base case, all faces of $l$ except $f$
are seamless in this folding.
Furthermore, all other unit squares of the sheet
are seamless on the top side.
Second, we apply the folding of $(C',A')$ to this folded object,
pretending that the cube $l$ was just the unit square~$s$.
Because $(C',A')$ does not fold~$s$, the folding of $(C',A')$ still works.
The only difference is that all folds in row $i$ and column $j$
apply now to three layers, not just one, causing additional creases
in the inserted rows and columns.
Because all faces of $P'$ are made from the top side of the paper,
the folding remains seamless on all faces of $P'$ except $f$ and~$g$.
Face $f$ is now the cube~$l$, which means that we have folded~$P$.
\end{proof}
A simple extension makes the folding entirely seamless:
\begin{corollary} \theolab{rectangle seamless}
Any polycube of $n$ cubes can be folded from a tetrakis crease pattern
on a $(4 n + 1) \times (2 n + 2)$ rectangle of paper,
with all faces seamless and made from one side of the paper.
\end{corollary}
\begin{proof}
Let $P$ be a polycube of $n$ cubes and let $f$ be any of its faces.
We compute the folding from Theorem~\theoref{rectangle seam}
of a $(4 n + 1) \times (2 n + 1)$ rectangle into~$P$,
seamless except for~$f$.
We add an extra column on the right,
extending any nontrivial horizontal creases into this column.
The resulting crease pattern and angle assignment fold into
the desired polycube, with an extra seamless square
attached along an edge of~$f$.
We fold the seamless square on top of $f$
to obtain an entirely seamless folding.
\end{proof}
Finally we show that a slightly more careful construction improves the
size of the required square of paper.
\begin{theorem} \theolab{cube}
Any polycube of $n$ cubes can be folded from a tetrakis crease pattern
on a square of paper of side length $3 n + 2$,
with all faces seamless and made from one side of the paper.
\end{theorem}
\begin{proof}
We follow the same construction as Theorem~\theoref{rectangle seam},
but through the induction on~$n$, we alternate between the same modification
and the $90^\circ$ rotation of the modification. In other words,
in odd steps we add four rows and two columns, and in even steps
we add two rows and four columns. Thus we add three rows and columns
on average per step, starting from a $5 \times 3$ rectangle.
For $n$ odd, we have an additional row, which is accounted for by the $+2$
(instead of $+1$).
In all cases, we have an additional column of paper, and for $n$ even,
we have an additional row as well. Folding these over in sequence,
similar to Corollary~\theoref{rectangle seamless}, removes the seams
from the last face.
\end{proof}
Note that these bounds are tight up to constant factors for square paper,
as folding an $n\times 1\times 1$ tower of unit cubes requires starting from
a square of side length $\Omega(n)$ in order to have diameter $\Omega(n)$,
because the diameter of the tower is $n$ and folding can only decrease
diameter.
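As an aside, the induction above implicitly orders the cubes so that each inserted cube is a leaf of a spanning tree of the current dual graph. The following Python sketch (ours; the toy polycube is illustrative) carries out this bookkeeping and reports the paper size of Theorem~\theoref{rectangle seam}:
\begin{verbatim}
from collections import deque

NBRS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

def is_connected(cubes):
    cubes = set(cubes)
    if not cubes:
        return True
    start = next(iter(cubes))
    seen, todo = {start}, deque([start])
    while todo:
        x, y, z = todo.popleft()
        for dx, dy, dz in NBRS:
            nb = (x + dx, y + dy, z + dz)
            if nb in cubes and nb not in seen:
                seen.add(nb)
                todo.append(nb)
    return len(seen) == len(cubes)

def peel_order(cubes):
    """Remove one non-cut cube (a spanning-tree leaf) at a time."""
    cubes, order = set(cubes), []
    while len(cubes) > 1:
        leaf = next(c for c in cubes if is_connected(cubes - {c}))
        order.append(leaf)
        cubes.remove(leaf)
    order.append(cubes.pop())
    return order                   # reverse = build order, base cube first

tower = {(0, 0, k) for k in range(3)}          # a 1 x 1 x 3 tower, n = 3
print(peel_order(tower))
print("paper:", (4 * 3 + 1, 2 * 3 + 1))        # (13, 7), cf. the theorem
\end{verbatim}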
\iffull
\section{Folding Arbitrary Orthogonal Shapes}
Let $\L^3$ denote a partitioning of the Euclidean 3-space by orthogonal planes. Let $R$ be a collection of rectangular boxes in $\L^3$ (where one rectangular prism corresponds to a volume completely enclosed by partitioning planes).
Define the \emph{dual graph of $R$} as follows:
There is one node per prism, and two nodes are joined via an edge if their corresponding prisms in $R$ share a face. We say that $R$ is \emph{connected} if its dual graph is connected. A \emph{polyprism} refers to a connected collection of prisms.
Let $P$ be a polyprism. A face $f$ in $P$ is \emph{exposed} if it is the face of exactly one prism in $P$; in other words, no pair of prisms in $P$ shares $f$. Let $C(P)$ be a crease pattern in $\mathbb{T}$ which folds to $P$. A face in the folding of crease pattern $C(P)$ is \emph{seamless} if the outermost rectangle in the folding of that face has no creases. For any polyprism $P$, we will find a crease pattern $C(P)$ in $\mathbb{T}$ such that $C(P)$ folds to $P$. Only one exposed face of $P$ will not be seamless in the folding of $C(P)$, and we call this face the \emph{base face}.
\begin{theorem}
Any polyprism \xxx{do this} cube $P$ of size $n$ can be folded from a square sheet of paper with side length $(3n+1)$ for even $n$ and $(3n+2)$ for odd $n$. Moreover, the crease pattern used is a subgraph of the tetrakis tiling.
\theolab{PolycubeResult}
\end{theorem}
\section{Variations}
\xxx{complete draft: this is more of notes of what to write later than anything else. it would be much more elegant if we first introduced the 22.5 degree version, and then modify the bounds for box pleating.}
\subsection{Folding Polycubes when allowing 22.5 degree angles}
If one allows additional crease angles such as those shown in Figure \xxx{add figure}, then additional paper is not needed to create `buffer' regions. This decreases both the total number of needed tiles and the total number of layers.
\subsection{Folding Polycubes Using Paper with Slits}
Similarly, if the folded paper has a pattern of slits such as that shown in Figure \xxx{add figure}, then additional paper is not needed to create the `buffer' regions, having the same effect as 22.5 degree angles on all bounds. However, this introduces many `seams' into the paper, where slits lie along faces. Ensuring that all faces are seamless requires a custom slit pattern for a particular polycube, or a different general pattern with additional paper and layers. An algorithm for finding an optimal general slit pattern for a particular set of polycubes to be folded is an open problem.
\fi
| {
"timestamp": "2009-09-29T21:32:58",
"yymm": "0909",
"arxiv_id": "0909.5388",
"language": "en",
"url": "https://arxiv.org/abs/0909.5388",
"abstract": "We present a universal crease pattern--known in geometry as the tetrakis tiling and in origami as box pleating--that can fold into any object made up of unit cubes joined face-to-face (polycubes). More precisely, there is one universal finite crease pattern for each number n of unit cubes that need to be folded. This result contrasts previous universality results for origami, which require a different crease pattern for each target object, and confirms intuition in the origami community that box pleating is a powerful design technique.",
"subjects": "Computational Geometry (cs.CG)",
"title": "A Universal Crease Pattern for Folding Orthogonal Shapes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517507633985,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7089606368779654
} |
https://arxiv.org/abs/1506.07779 | On phase separation in systems of coupled elliptic equations: asymptotic analysis and geometric aspects | We consider a family of positive solutions to the system of $k$ components \[-\Delta u_{i,\beta} = f(x, u_{i,\beta}) - \beta u_{i,\beta} \sum_{j \neq i} a_{ij} u_{j,\beta}^2 \qquad \text{in $\Omega$}, \] where $\Omega \subset \mathbb{R}^N$ with $N \ge 2$. It is known that uniform bounds in $L^\infty$ of $\{\mathbf{u}_{\beta}\}$ imply convergence of the densities to a segregated configuration, as the competition parameter $\beta$ diverges to $+\infty$. In this paper %we study more closely the asymptotic property of the solutions of the system in this singular limit: we establish sharp quantitative point-wise estimates for the densities around the interface between different components, and we characterize the asymptotic profile of $\mathbf{u}_\beta$ in terms of entire solutions to the limit system \[\Delta U_i = U_i \sum_{j\neq i} a_{ij} U_j^2. \] Moreover, we develop a uniform-in-$\beta$ regularity theory for the interfaces. | \section{Introduction}
The aim of this paper is to prove qualitative properties of positive solutions to competing systems with variational interaction, whose prototype is the coupled Gross-Pitaevskii equation
\[
\begin{cases}
-\Delta u_{i,\beta} +\lambda_{i,\beta} u_{i,\beta} = \mu_i u_{i,\beta}^3 -\beta u_{i,\beta} \sum_{j \neq i} a_{ij} u_{j,\beta}^2 & \text{in $\Omega$} \\
u_i >0 & \text{in $\Omega$},
\end{cases} \quad i=1,\dots,k,
\]
in the limit of strong competition $\beta \to +\infty$. This problem naturally arises in different contexts: from the physics world, it is of interest in nonlinear optics and in the Hartree-Fock approximation for Bose-Einstein condensates with multiple hyperfine states, see e.g. \cite{AkAn, Timm}. From a mathematical point of view, it is useful in the approximation of optimal partition problems for Laplacian eigenvalues, and in the theory of harmonic maps into singular manifolds, see \cite{CaffLin, CoTeVe2002,CoTeVe2003, rtt,TaTePoin}. Several papers are devoted to the development of a common regularity theory for families of solutions associated to families of parameters $\beta \to +\infty$, to the analysis of the convergence of such families to some limit profile, and to the regularity issues for the emerging free-boundary problem, see \cite{CaffLin, CaffLin2010, ChangLinLinLin, CoTeVe2002, CoTeVe2003, NoTaTeVe, rtt, SoTaTeZi, SoZi, WeiWeth}. On the other hand, not much is known about finer qualitative properties, such as:
\begin{itemize}
\item the decay rate of convergence of the solutions,
\item the geometric structure of the solutions in a neighbourhood of the ``interface" between different components (a concept which will be conveniently defined),
\item the geometric structure of the interface itself.
\end{itemize}
To our knowledge, the only contribution dealing with this kind of problem is \cite{BeLiWeZh}, where Berestycki et al. considered the $1$-dimensional system
\begin{equation}\label{main system}
\begin{cases}
-w_{1,\beta}'' +\lambda_{1,\beta} w_{1,\beta} = \mu_1 w_{1,\beta}^3-\beta w_{1,\beta} w_{2,\beta}^2 & \text{in $(0,1)$} \\
-w_{2,\beta}'' +\lambda_{2,\beta} w_{2,\beta} = \mu_2 w_{2,\beta}^3-\beta w_{1,\beta}^2 w_{2,\beta} & \text{in $(0,1)$} \\
w_{i,\beta}>0 \quad \text{in $(0,1)$}, \quad w_{i,\beta} \in H_0^1(0,1) & i=1,2 \\
\int_0^1 w_{1,\beta}^2 = \int_0^1 w_{2,\beta}^2 = 1.
\end{cases}
\end{equation}
Under the assumption that $(\lambda_{1,\beta})$ and $(\lambda_{2,\beta})$ are bounded sequences, they proved that if $x_\beta \in \{w_{1,\beta}=w_{2,\beta}\}$ (the interface between $w_{1,\beta}$ and $w_{2,\beta}$), then there exists $C>1$ such that
\begin{equation}\label{eq: lower ext}
\frac{1}{C} \le \beta w_{1,\beta}^2(x_\beta) w_{2,\beta}^2(x_\beta) \le C \quad \forall \beta > 0;
\end{equation}
that is, any family of solutions decays, along sequences of points where $w_{1,\beta}=w_{2,\beta}$, like $\beta^{-1/4}$, see \cite[Theorem 1.1]{BeLiWeZh}. Furthermore, they showed that suitable scalings of $(w_{1,\beta},w_{2,\beta})$ in a neighbourhood of the interface converge, in $\mathcal{C}^2_{\mathrm{loc}}(\mathbb{R})$, to an entire solution of
\begin{equation}\label{entire 1-d intro}
\begin{cases}
W_1'' = W_1 W_2^2 \\
W_2'' = W_1^2 W_2 & \text{in $\mathbb{R}$}\\
W_1, W_2 >0,
\end{cases}
\end{equation}
see \cite[Theorem 1.2]{BeLiWeZh}. This means that the geometry of the solutions to \eqref{entire 1-d intro} is related to the geometry of the solutions to \eqref{main system} near the interface; and in this perspective it is remarkable that, up to scaling, translations and exchange of the components, \eqref{entire 1-d intro} has only one solution, see \cite[Theorem 1.1]{BeTeWaWe}.
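As an illustrative aside (ours, not taken from \cite{BeLiWeZh}), the $\beta^{-1/4}$ interface scaling can be observed numerically on a caricature of \eqref{main system} in which the cubic terms and the mass constraints are dropped, namely $u_1'' = \beta u_1 u_2^2$ and $u_2'' = \beta u_2 u_1^2$ on $(0,1)$, with hypothetical boundary data $u_1(0)=u_2(1)=1$ and $u_1(1)=u_2(0)=0$. A crude alternating-solve sketch in Python:
\begin{verbatim}
import numpy as np

def solve_pair(beta, m=400, iters=200):
    h = 1.0 / (m + 1)
    x = np.linspace(h, 1.0 - h, m)
    u1, u2 = 1.0 - x, x.copy()          # affine initial guesses
    for _ in range(iters):
        for u, v, left in ((u1, u2, 1.0), (u2, u1, 0.0)):
            # discretize u'' = beta*v^2*u, Dirichlet data moved to the RHS
            A = (np.diag(-2.0 - h * h * beta * v**2)
                 + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
            b = np.zeros(m)
            b[0] -= left                # value of u at x = 0
            b[-1] -= 1.0 - left         # value of u at x = 1
            u[:] = np.linalg.solve(A, b)
    return x, u1, u2

for beta in (1e2, 1e4, 1e6):
    x, u1, u2 = solve_pair(beta)
    k = int(np.argmin(np.abs(u1 - u2)))      # discrete interface point
    print(beta, u1[k] * beta**0.25)
\end{verbatim}
The printed values of $\beta^{1/4} u_1$ at the discrete interface should stabilize as $\beta$ grows, consistently with \eqref{eq: lower ext}.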
The purpose of this paper is to generalize the analysis in \cite{BeLiWeZh} in higher dimension and to $k \ge 2$ components systems with general form. In order to present and motivate our study, we introduce some notation and review some known results. For simplicity, in the rest of the paper the expression ``up to a subsequence" will be understood without always being mentioned.
We consider weak solutions to
\begin{equation}\tag{$P_\beta$}\label{main system k comp}
\begin{cases}
-\Delta u_{i,\beta} = f_{i,\beta}(x,u_{i,\beta})-\beta u_{i,\beta} \sum_{j \neq i} a_{ij} u_{j,\beta}^2 & \text{in $\Omega$} \\
u_{i,\beta}>0 & \text{in $\Omega$},
\end{cases} \quad i=1,\dots,k,
\end{equation}
where $a_{ij}=a_{ji}>0$, $\beta>0$, and $\Omega$ is a domain of $\mathbb{R}^N$ neither necessarily bounded, nor necessarily smooth, with $N \le 4$. Since any coupling parameter $\beta a_{ij}$ is positive, with the considered sign convention the relation between any pair of densities $u_{i,\beta}$ and $u_{j,\beta}$ is of competitive type. Concerning the nonlinearities $f_{i,\beta}$, we always assume that $f_{i,\beta} \in \mathcal{C}^1(\Omega \times \mathbb{R})$ are such that:
\begin{itemize}
\item[(F1)] $f_{i,\beta}(x,s) = O(s)$ as $s \to 0$, uniformly in $x \in \Omega$, that is there exists $C>0$ such that
\[
\max_{s \in [0,1]} \sup_{x \in \Omega} \left|\frac{f_{i,\beta}(x,s)}{s}\right| \leq C \qquad i=1,\dots,k;
\]
\item[(F2)] for any sequence $\beta \to +\infty$ there exist a subsequence (still denoted $\beta$) and functions $f_i \in \mathcal{C}^1(\Omega \times \mathbb{R})$ such that $f_{i,\beta} \to f_i$ in $\mathcal{C}_{\mathrm{loc}}(\Omega \times \mathbb{R})$.
\end{itemize}
We explicitly remark that for the nonlinearity appearing in the classical Gross-Pitaevskii equation, i.e.
\[
f_{i,\beta}(x,s) = \mu_i s^3-\lambda_{i,\beta} s
\]
with $\mu_i,\lambda_{i,\beta} \in \mathbb{R}$, both (F1) and (F2) are satisfied provided $\{\lambda_{i,\beta}\}$ is bounded. This is exactly the assumption in \cite{BeLiWeZh}.
Let us suppose that
\begin{equation}\tag{U1}\label{boundedness}
\text{$\{\mf{u}_\beta:\beta >0\}$ is a family of solutions to \eqref{main system k comp}, uniformly bounded in $L^\infty(\Omega)$},
\end{equation}
where we used the vector notation $\mf{u}_\beta=(u_{1,\beta},\dots,u_{k,\beta})$. Then, as we showed in \cite{SoZi}, for any compact $K \Subset \Omega$ we have that $\{\mf{u}_\beta\}$ is uniformly bounded in $\mathrm{Lip}(K)$. As a consequence, it is possible to infer that there exists a locally Lipschitz continuous limit $\mf{u}$ such that $\mf{u}_{\beta} \to \mf{u}$ as $\beta \to +\infty$ in $\mathcal{C}^{0,\alpha}_{\mathrm{loc}}(\Omega)$ (for every $0<\alpha<1$) and in $H^1_{\mathrm{loc}}(\Omega)$, and
\begin{equation}\label{limit system}
\begin{cases}
-\Delta u_i = f_i(x,u_i) & \text{in $\{u_{i}>0\}$} \\
u_{i} \cdot u_{j} \equiv 0 & \text{in $\Omega$, for every $i \neq j$},
\end{cases}
\end{equation}
where $f_i$ is the limit of the considered sequence $\{f_{i,\beta}\}$ (see \cite{NoTaTeVe,SoTaTeZi,Wa}).
In the present paper we always assume that (F1), (F2) and \eqref{boundedness} are satisfied, and therefore we will not explicitly recall them in all our statements. Moreover, from now on we shall always focus on a particular converging subsequence, and on the corresponding limit profile, without changing the notation for the sake of simplicity.
Since the limit $\mf{u}$ is segregated, it is natural to define the \emph{nodal set}, or \emph{free-boundary}, as $\Gamma:= \{ u_{i}=0 \ \text{for every $i$}\}$. The properties of the free-boundary were studied in \cite{tt} (see also \cite{CaffLin}). Being a limit of a strongly competing system, by \cite[Theorem 8.1]{tt} $\mf{u}$ belongs to a class of segregated vector-valued functions, called $\mathcal{G}(\Omega)$ (see Definition 1.2 in \cite{tt}), and hence the nodal set $\Gamma$ has the following properties: it has Hausdorff dimension $N-1$, and it is decomposed into two parts $\mathcal{R}$ and $\Sigma$. The set $\mathcal{R}$, called the \emph{regular part}, is relatively open in $\Gamma$ and is a union of hyper-surfaces of class $\mathcal{C}^{1,\alpha}$ for every $0<\alpha<1$. The set $\Sigma=\Gamma \setminus \mathcal{R}$, the \emph{singular part}, is relatively closed in $\Gamma$ and has Hausdorff dimension at most $N-2$. By means of \emph{Almgren's frequency function}
\begin{equation}\label{def: Almgren intro}
N(\mf{u},x_0,r):= \frac{r \int_{B_r(x_0)} \sum_{i=1}^k \left( |\nabla u_i|^2 - f_i(x,u_i) u_i \right) }{ \int_{\partial B_r(x_0)} \sum_{i=1}^k u_i^2},
\end{equation}
regular and singular part are defined by
\[
\mathcal{R} := \left\{ x \in \Gamma: N(\mf{u},x,0^+)=1 \right\}, \quad \quad
\Sigma:= \left\{ x \in \Gamma: N(\mf{u},x,0^+)>1 \right\}.
\]
Combining the results in \cite{tt} with those in Section 10 of \cite{DaWaZh}, it is possible to deduce also that every point $x_0 \in \mathcal{R}$ has multiplicity exactly equal to $2$, that is
\[
\# \left\{i=1,\dots,k: \text{ $B_r(x_0) \cap \{u_i>0\} \neq \emptyset$ for every $r>0$}\right\}=2.
\]
This prevents in particular the occurrence of self-segregation.
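To fix ideas on the role of \eqref{def: Almgren intro}, consider the model case of a single component ($k=1$) with $f_1 \equiv 0$ in $\mathbb{R}^2$: for the homogeneous harmonic function $u(\rho,\theta)=\rho^d \cos(d\theta)$, a direct computation gives
\[
\int_{B_r} |\nabla u|^2 = \pi d\, r^{2d}, \qquad \int_{\partial B_r} u^2 = \pi r^{2d+1},
\]
so that $N(u,0,r) \equiv d$. The frequency thus detects the vanishing order of the solution, and the threshold value $1$ in the definition of $\mathcal{R}$ corresponds to non-degenerate, linear-order vanishing at the free-boundary.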
\subsection{Main results: asymptotic estimates}
For a large part of the paper we will be interested in studying the decay rate of the sequence $\{\mf{u}_\beta\}$ as $\beta \to +\infty$ in a neighbourhood of points of the free-boundary $\Gamma=\{\mf{u}=\mf{0}\}$. As already recalled, the only results available in this context are those contained in \cite[Theorem 1.1]{BeLiWeZh}. We mention that decay estimates are not only relevant for themselves, but are useful since they suggest the correct asymptotic behaviour in some approximated optimal partition problems, finally leading to powerful monotonicity formulae for competing systems, see \cite[Theorem 1.6]{BeLiWeZh}, \cite[Theorem 5.6]{BeTeWaWe} and \cite[Lemma 4.2]{Wa}.
In what follows we discuss the generalization of the analysis in \cite{BeLiWeZh} in higher dimension. Already in the plane, the situation is much more involved than in the $1$-dimensional problem: first, due to the richer structure of the free-boundary $\Gamma$.
Second, due to the fact that we deal with more than $2$ components, so that, as we shall see, we have to distinguish the dominating components (which will have a suitable decay) from the other ones (which will decay much faster). Finally, due to the fact that the Hamiltonian structure of the problem, one of the key tools used in \cite{BeLiWeZh} for the proof of \eqref{eq: lower ext} in dimension $N=1$, is lost for general nonlinearities $f_{i,\beta}$, and in any case is much less powerful in subsets of $\mathbb{R}^N$ with $N \ge 2$ than in $\mathbb{R}$ (we point out that for general $f_{i,\beta}$ the forthcoming results are new also in dimension $N=1$).
In order to overcome these difficulties, we develop a new approach based upon monotonicity formulae and tools from geometric measure theory.
The first of our results is a consequence of the uniform Lipschitz boundedness of $\{\mf{u}_\beta\}$ in compacts of $\Omega$, see \cite{SoZi}, and extends the upper estimate in \eqref{eq: lower ext} to the present setting.
\begin{theorem}\label{thm: global upper estimate}
For every compact set $K \Subset \Omega$ there exists $C >0$ such that
\[
\beta u_{i,\beta}^2 u_{j,\beta}^2 \le C \quad \text{in $K$, for every $i \neq j$}.
\]
\end{theorem}
Notice that, by the lower estimate in \eqref{eq: lower ext}, the result is optimal in general.
In order to derive finer properties, we introduce the concept of \emph{interface} of $\mf{u}_\beta$.
\begin{definition}\label{def interface}
We define the \emph{interface} of $\mf{u}_\beta$ as
\[
\Gamma_\beta:= \left\{ x \in \Omega \left| \begin{array}{l}\text{$u_{i,\beta}(x) = u_{j,\beta}(x)$ for some $i \neq j$} \\
\text{and $u_{i,\beta}(x) \ge u_{l,\beta}(x)$ for all the other indices $l$}\end{array}\right. \right\}
\]
\end{definition}
Roughly speaking, a point $x$ is on the interface of $\mf{u}_\beta$ if at least two components coincide at $x$, and the remaining ones are smaller. Notice that, if the number of components is $k=2$, then the interface is naturally defined as
\[
\Gamma_\beta:= \{ u_{1,\beta} = u_{2,\beta}\}.
\]
As we shall see, the interface plays the role of the free boundary $\Gamma=\{\mf{u}=\mf{0}\}$ for the $\beta$-problem \eqref{main system k comp}. A simple intuitive reason for this is that any converging sequence of points in $\Gamma_\beta$ necessarily converges to a limit in $\Gamma$. Moreover, if $x \in \Gamma$, then there exists a sequence of points in $\Gamma_\beta$ approaching $x$.
\begin{proposition}\label{prop: interfaces are good approximation}
If $x_\beta \in \Gamma_\beta$ and $x_\beta \to x_0 \in \Omega$ as $\beta \to +\infty$, then $x_0 \in \Gamma$. Moreover, if $\mf{u} \not \equiv \mf{0}$, then for any $x_0 \in \Gamma$ there exists $x_\beta \in \Gamma_\beta$ such that $x_\beta \to x_0$.
\end{proposition}
In what follows we consider the problem of estimating the rate of convergence of $\mf{u}_\beta$ in sequences of points on the interfaces $\Gamma_\beta$. By Theorem \ref{thm: global upper estimate}, if $x_\beta \in \Gamma_\beta$, then
\begin{equation}\label{eq: general decay}
u_{i,\beta}(x_\beta) \le \frac{C}{\beta^{1/4}} \qquad \text{for every $i=1,\dots,k$}.
\end{equation}
This estimate holds for all the components $u_{i,\beta}$. On the other hand, since on the interface we have two (or more) components dominating over the others, it is natural to expect that for the remaining ones the rate of convergence to $0$ is faster. We can prove this assuming that $x_\beta \to x_0 \in \mathcal{R}$, the regular part of $\Gamma$; recall that in this case $x_0$ has multiplicity $2$.
\begin{theorem}\label{corol: decay components vanishing}
Let $x_0 \in \mathcal{R}$. Let $i_1$ and $i_2$ be the only two indices such that $u_{i_1}, u_{i_2} \not \equiv 0$ in a neighbourhood of $x_0$. There exist a radius $R>0$ and constants independent of $\beta \gg 1$ such that:
\begin{itemize}
\item $B_R(x_0) \cap \Gamma_\beta = \{ u_{i_1, \beta} = u_{i_2, \beta} \} \cap B_R(x_0)$, and moreover $B_R(x_0) \setminus \Gamma_\beta$ consists of exactly two connected components, namely $\{ u_{i_1, \beta} > u_{i_2, \beta} \} \cap B_R(x_0)$ and $\{ u_{i_1, \beta} < u_{i_2, \beta} \} \cap B_R(x_0)$;
\item for any $j \neq i_1, i_2$, the density $u_{j,\beta}$ decays exponentially, in the sense that there exist $C_1,C_2>0$ independent of $\beta$ such that
\[
\sup_{B_R(x_0)} u_{j,\beta} \le C_1 e^{-C_1 \beta^{C_2}}.
\]
\item in $B_R(x_0)$ the system reduces to
\[
\begin{cases}
-\Delta u_{i_1,\beta} =f_{i_1,\beta}(x,u_{i_1,\beta}) - \beta u_{i_1,\beta} u_{i_2,\beta}^2 - u_{i_1,\beta}o_\beta(1) \\
-\Delta u_{i_2,\beta} =f_{i_2,\beta}(x,u_{i_2,\beta}) - \beta u_{i_2,\beta} u_{i_1,\beta}^2 - u_{i_2,\beta}o_\beta(1)
\end{cases}
\]
where $o_\beta(1)$ is an exponentially small perturbation in the $L^{\infty}$-norm.
\end{itemize}
\end{theorem}
The theorem establishes that the components that converge to zero in a neighbourhood of $x_0$ decay much faster (indeed, exponentially in $\beta$) than those which survive in the limit. This, although naturally expected, is far from trivial, and is new also in dimension $N=1$. Moreover, an important consequence of the first point is that in a neighbourhood of any point $x_0 \in \mathcal{R}$ the interfaces $\Gamma_\beta$ do not self-intersect, and separate $B_R(x_0)$ into exactly two connected components.
We now turn to the problem of extending the lower bound in \eqref{eq: lower ext} to higher dimension. It is interesting that such an estimate does not always hold; this is related to the fact that, while in $\mathbb{R}$ the free-boundary is made of single points and is purely regular, in $\mathbb{R}^N$ with $N \ge 2$ the singular part $\Sigma$ appears, and it turns out that therein the decay of the solutions is faster. In order to prove this, we suppose that:
\begin{equation}\tag{U2}\label{nontrivial}
\text{the limit profile of the sequence $\{\mf{u}_\beta\}$ is $\mf{u} \not \equiv \mf{0}$}.
\end{equation}
When compared with the setting considered in \cite{BeLiWeZh}, namely system \eqref{main system}, assumption \eqref{nontrivial} reduces to the normalization condition on the $L^2$-mass of the components (our assumption is in fact weaker).
\begin{theorem}\label{thm: decay higher multiplicity}
Under \eqref{nontrivial}, if $x_\beta \in \Gamma_\beta$ and $x_\beta \to x_0 \in \Sigma$ as $\beta \to +\infty$, then
\[
\limsup_{\beta \to +\infty} \beta^{1/4} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta)\right) = 0.
\]
\end{theorem}
The previous result leaves open the possibility that the lower estimate \eqref{eq: lower ext} still holds for sequences in $\Gamma_\beta$ converging to the regular part of the free-boundary. We believe that this is the case, but for the moment we can only prove a sub-optimal result.
\begin{theorem}\label{thm: lowe estimate}
Under \eqref{nontrivial}, let $x_\beta \in \Gamma_\beta$, and suppose that $x_\beta \to x_0 \in \mathcal{R} \subset \Gamma$ as $\beta \to +\infty$. Let $i_1$ and $i_2$ be the only two indices such that $u_{i_1}, u_{i_2} \not \equiv 0$ in a neighbourhood of $x_0$.
Then, for every $\varepsilon>0$ there exists $C_\varepsilon>0$ such that
\[
\beta^{1/4+\varepsilon}u_{i_1,\beta}(x_\beta) = \beta^{1/4+\varepsilon}u_{i_2,\beta}(x_\beta)\ge C_\varepsilon.
\]
\end{theorem}
We conjecture that the previous estimate holds with the exponent $1/4+\varepsilon$ replaced by $1/4$. Theorem \ref{thm: lowe estimate} is actually a corollary of a more general statement. We recall the definition of the Almgren quotient, equation \eqref{def: Almgren intro}.
\begin{theorem}\label{thm: lower general}
Under assumption \eqref{nontrivial}, let $x_\beta \in \Gamma_\beta$ such that $x_\beta \to x_0 \in \Gamma$. Let $N(\mf{u},x_0,0^+)=D$. Then for any $\varepsilon>0$ there exists $C_\varepsilon>0$ such that
\[
\liminf_{\beta \to +\infty} \beta^{(D+\varepsilon)/(2+2D)} \left(\sum_{i=1}^k u_{i,\beta}(x) \right)\ge C_\varepsilon.
\]
\end{theorem}
The last asymptotic estimate we present concerns the quantification of the improved decay around the singular part of the free-boundary, under additional assumptions. We suppose that \begin{equation}\tag{A}\label{a_ij=1}
\text{$a_{ij} = 1$ for every $i \neq j$},
\end{equation}
and introduce the following notion:
\begin{definition}\label{def singular interface}
We define the \emph{singular part of the interface $\Gamma_\beta$} as
\[
\Sigma_\beta:=\Gamma_\beta \setminus \left\{ x \in \Gamma_\beta\left| \begin{array}{l}
\text{there exist exactly two indices $i_1 \neq i_2$ such that} \\
\text{$u_{i_1,\beta}(x) = u_{i_2,\beta}(x) > u_{j,\beta}(x)$ for all $j \neq i_1,i_2$}, \\
\text{and $\nabla(u_{i_1,\beta} -u_{i_2,\beta})(x) \neq 0$}
\end{array} \right. \right\}.
\]
\end{definition}
The definition is inspired by classical contributions regarding the singular set of solutions of elliptic equations, see for instance \cite{HanHardtLin} and the references therein. Actually, thanks to (F1) the main results in \cite{HanHardtLin} are applicable for any fixed $\beta$, and hence the closed set $\Gamma_\beta$ can be decomposed into $(\Gamma_\beta \setminus \Sigma_\beta) \cup \Sigma_\beta$, where $\Gamma_\beta \setminus \Sigma_\beta$ is relatively open in $\Gamma_\beta$ and is a union of $\mathcal{C}^{1,\alpha}$ hyper-surfaces, while $\Sigma_\beta$ is relatively closed and has Hausdorff dimension at most $N-2$. In other words, the same decomposition holding for $\Gamma$ holds also for $\Gamma_\beta$.
\begin{theorem}\label{cor: improved decay singular sequence}
Under assumptions \eqref{nontrivial} and \eqref{a_ij=1}, let $x_\beta \in \Sigma_\beta$ for every $\beta$, $x_\beta \to x_0$ as $\beta \to +\infty$. Then $x_0 \in \Sigma$, and for every $\varepsilon>0$ there exists $C_\varepsilon>0$ such that
\[
\limsup_{\beta \to +\infty} \beta^{3/10-\varepsilon} \left( \sum_{i=1}^k u_{i,\beta}(x_\beta)\right) \le C_\varepsilon.
\]
\end{theorem}
The condition $x_\beta \in \Sigma_\beta$ means that we can reach the singular part of the free-boundary through a sequence of points on the singular part of $\Gamma_\beta$ (in general, the existence of such a sequence is not guaranteed).
\begin{remark}
We believe that assumption \eqref{a_ij=1} is not really necessary, and that the last result holds also for general symmetric matrices $(a_{ij})_{i,j}$. Nevertheless, in the proof of Theorem \ref{cor: improved decay singular sequence} we shall make use of several intermediate results proved in \cite{BeTeWaWe, SoTe}, where a system with $a_{ij}=1$ is considered. For this reason, we prefer to assume \eqref{a_ij=1}.
\end{remark}
In what follows we briefly describe the strategy of the proofs of the previous results. While Theorem \ref{thm: global upper estimate} rests essentially only on the Lipschitz boundedness of $\{\mf{u}_\beta\}$ in compacts of $\Omega$, the other decay estimates are much more involved and require several intermediate propositions of independent interest.
\subsection{Main results: normalization and blow-up}
The following is a crucial intermediate step in the proofs of Theorems \ref{thm: decay higher multiplicity}-\ref{cor: improved decay singular sequence}:
\begin{theorem}\label{thm: blow-up}
Under assumption \eqref{nontrivial}, let $x_\beta \in \Gamma_\beta$, and suppose that $x_\beta \to x_0 \in \Gamma$ as $\beta \to +\infty$. There exists a sequence of radii $r_\beta>0$, $r_\beta \to 0$ as $\beta \to +\infty$, such that the scaled sequence
\[
\mf{v}_\beta(x):=\frac{\mf{u}_{\beta}(x_\beta + r_\beta x)}{H(\mf{u}_\beta,x_\beta,r_\beta)^{1/2}}, \quad \text{where} \quad H(\mf{u}_\beta,x_\beta,r_\beta) := \frac{1}{r_\beta^{N-1}} \int_{\partial B_{r_\beta}} \sum_{i=1}^k u_{i,\beta}^2,
\]
is convergent in $\mathcal{C}^2_{\mathrm{loc}}(\mathbb{R}^N)$ to a limit $\mf{V}$, solution to
\begin{equation}\label{entire system a_{ij}}
\begin{cases}
\Delta V_i = \sum_{j \neq i} a_{ij} V_i V_j^2 \\
V_i \ge 0
\end{cases} \quad \text{in $\mathbb{R}^N$}.
\end{equation}
The profile $\mf{V}$ has at least two non-trivial components and at most polynomial growth, in the sense that
\[
V_1(x) + \dots+ V_k(x) \le C(1+|x|^d) \qquad \forall x \in \mathbb{R}^N
\]
for some $C,d \ge 1$.
\end{theorem}
Hence, for any dimension $N \ge 1$, the geometry of the solutions with polynomial growth of \eqref{entire system a_{ij}} is responsible for the geometry of $\mf{u}_\beta$ near the interface $\Gamma_\beta$, at least for $\beta$ sufficiently large (cf. \cite[Theorem 1.2]{BeLiWeZh}).
In this perspective, we can completely characterize the solution $\mf{V}$, and hence the geometry of $\{\mf{u}_\beta\}$, around the regular part of the free boundary.
\begin{corollary}\label{thm: classification limits regular part}
Under the assumptions of Theorem \ref{thm: blow-up}, let $x_0 \in \mathcal{R}$. Then $\mf{V}$ has only two non-trivial components, say $V_1$ and $V_2$; $(V_1,V_2)$ has linear growth, and is the unique $1$-dimensional solution of
\begin{equation}\label{system 2}
\begin{cases}
\Delta V_1 = a_{12} V_1 V_2^2 \\
\Delta V_2 = a_{12} V_1^2 V_2\\
V_1,V_2>0
\end{cases} \quad \text{in $\mathbb{R}^N$.}
\end{equation}
\end{corollary}
Here and in what follows we write that a function is $1$-dimensional if, up to a rotation, it depends only on one variable. We postpone a detailed review of the known results about \eqref{entire system a_{ij}} to Section \ref{sec: prel}. For the moment, we anticipate that solutions of \eqref{entire system a_{ij}} having linear growth are classified: up to rigid motions and suitable scaling, there exists a unique $1$-dimensional solution \cite{SoTe,Wa,Wa2}. Therefore, the theorem establishes that, along sequences of points converging to the regular part of $\Gamma$, suitable scaling of the original solutions approaches a uniquely determined archetype profile in $\mathcal{C}^2$-sense.
If $x_0 \in \Sigma$, the singular part of the free-boundary, then the picture is more involved and a complete classification of the admissible limits solving \eqref{entire system a_{ij}} seems out of reach. Indeed, in such a case the emerging profile $\mf{V}$ does not have linear growth, and \eqref{entire system a_{ij}} has infinitely many distinct superlinear solutions \cite{BeTeWaWe,SoZi2,SoZi1}.
In any case, under additional assumptions we can still say something on the emerging limit profile. Recall that $\Sigma_\beta$ has been defined in Definition \ref{def singular interface}.
\begin{corollary}\label{thm: non-simple blow-up}
Under assumptions \eqref{nontrivial} and \eqref{a_ij=1}, let $x_\beta \in \Sigma_\beta$ for every $\beta$, with $x_\beta \to x_0$ as $\beta \to +\infty$. Then $x_0 \in \Sigma$, and the limit profile $\mf{V}$ obtained in Theorem \ref{thm: blow-up} is not $1$-dimensional.
\end{corollary}
The relation between Theorem \ref{thm: blow-up} and the proofs of Theorems \ref{thm: decay higher multiplicity}-\ref{cor: improved decay singular sequence} can be summarized by the following simple idea:
\begin{itemize}
\item firstly, we can deduce properties of the emerging limit $\mf{V}$, imposing different assumptions on $x_\beta$;
\item secondly, we can use the properties of $\mf{V}$ in order to prove the desired decay estimates.
\end{itemize}
For instance Corollary \ref{thm: non-simple blow-up} will be the base point in the derivation of Theorem \ref{cor: improved decay singular sequence}.
\subsection{Main results: uniform regularity of the interfaces and its consequences}
We now present our analysis concerning uniform regularity properties for the interfaces $\Gamma_\beta$ away from its singular set $\Sigma_\beta$.
Notice that, by definition and by the regularity of $\mf{u}_\beta$ for $\beta$ fixed, the sets $\Gamma_{\beta}$ are closed subsets. Moreover, $\Sigma_\beta$ is a relatively closed subset of $\Gamma_\beta$.
It is now time to introduce a convenient notion of ``regular part'' of $\Gamma_\beta$.
\begin{definition}\label{def: regular interface}
For $\rho > 0$ fixed, we let
\[
\mathcal{R}_{\beta}(\rho) = \left\{ x \in \Gamma_{\beta} : N_\beta(\mf{u}_\beta, x, \rho) < 1+ \frac14 \right\}.
\]
\end{definition}
\begin{figure}[h]
\centering
\begin{overpic}[width=0.6\textwidth]{dessin_new}
\put(30,20) {$u_{i,\beta} > u_{j,\beta}$}
\put(60,40) {$u_{i,\beta}=u_{j,\beta}$}
\put(76,21) {$\Sigma_\beta$}
\put(40,37) {$\mathcal{R}_\beta(\rho)$}
\end{overpic}
\caption{A sketch of the interface $\Gamma_\beta$ for $\beta$ fixed: the dashed line represents the regular part of the free boundary $\mathcal{R}_\beta(\rho)$, while the corner points belong to the singular part $\Sigma_\beta$. As will be proved, the singular part $\Sigma_\beta$ is detached from $\mathcal{R}_\beta(\rho)$.}
\end{figure}
As we shall see in Lemma \ref{lem: basic prop regular part}, by taking the parameter $\rho$ sufficiently small, the set $\mathcal{R}_{\beta}(\rho)$ is a subset of $\Gamma_\beta \setminus \Sigma_\beta$ and
is detached, uniformly in $\beta$, from the singular part $\Sigma$ of the limit free-boundary $\Gamma=\{\mf{u}=\mf{0}\}$ (and thus is also uniformly detached from $\Sigma_\beta$). Our main result states that for any fixed $\rho > 0$, the sets $\mathcal{R}_{\beta}(\rho)$ enjoy a \emph{uniform vanishing Reifenberg flatness condition}. Specifically, we have:
\begin{theorem}\label{thm: reifenberg flat uniform}
Let $K \Subset \Omega$ be a compact set, let $\rho>0$, and let us assume that \eqref{nontrivial} holds. For any $\delta > 0$ there exists $R > 0$ such that for any $x_\beta \in \mathcal{R}_{\beta}(\rho) \cap K$ and $0 < r < R$ there exists a hyper-plane $H_{x_\beta,r} \subset \mathbb{R}^N$ containing $x_\beta$ such that
\[
{\rm dist}_{\mathcal{H}}(\mathcal{R}_{\beta}(\rho) \cap B_r(x_\beta), H_{x_\beta,r} \cap B_r(x_\beta)) \leq \delta r.
\]
\end{theorem}
Here and in what follows ${\rm dist}_{\mathcal{H}}$ denotes the Hausdorff distance, defined by
\[
{\rm dist}_{\mathcal{H}}(A,B) := \sup\left\{\sup_{a \in A} {\rm dist}(a,B), \sup_{b \in B} {\rm dist}(b, A) \right\}.
\]
We emphasize that in the previous theorem, the radius $R$ depends on $\rho$ and $\delta$, but not on $\beta$: this is what we mean when we write that the condition holds uniformly.
The uniform vanishing Reifenberg flatness condition has several consequences:
first, it implies a uniform-in-$\beta$ local separation property of $\Gamma_\beta$ in a neighbourhood of any point of $\mathcal{R}_\beta(\rho)$. This, in turn, recalling also Proposition \ref{prop: interfaces are good approximation}, is the key to the proof of Theorem \ref{corol: decay components vanishing}.
At the moment we do not know if the vanishing Reifenberg flatness condition is the optimal property which holds, uniformly in $\beta$, for a subset of $\Gamma_\beta \setminus \Sigma_\beta$. In order to understand whether Theorem \ref{thm: reifenberg flat uniform} is really satisfying or not, let us focus for simplicity on a $2$-components system, so that $\Gamma_\beta = \{ u_{1,\beta}-u_{2,\beta} =0\}$,
and let $x_0$ be a regular point of the limit free boundary $\Gamma=\{\mf{u}=\mf{0}\}$. Recalling the decompositions of $\Gamma$ and $\Gamma_\beta$ into regular and singular parts, and also the first point in Theorem \ref{corol: decay components vanishing}, we know that for $R>0$ small enough $\{\Gamma_\beta \cap B_R(x_0): \beta\} \cup \Gamma \cap B_R(x_0)$ is a family of $\mathcal{C}^{1,\alpha}$-hypersurfaces. It is natural to wonder if this family is uniformly of class $\mathcal{C}^{1,\alpha}$, that is, if any $\Gamma_\beta$ is locally the graph of a function $\phi_\beta$, with $\{\phi_{\beta}\}$ bounded in $\mathcal{C}^{1,\alpha}$. This would imply the uniform Reifenberg flatness, being a much stronger result. A natural attempt to prove uniform $\mathcal{C}^{1,\alpha}$ regularity consists in trying to show that $\{u_{1,\beta}-u_{2,\beta}\}$ is uniformly bounded in $\mathcal{C}^{1,\alpha}(B_R(x_0))$ (we recall that by the reflection law in \cite{tt}, even though the limit function $\mf{u}$ is not regular, the difference $u_1-u_2$ is of class $\mathcal{C}^1$ in a neighbourhood of any point in the regular part of $\Gamma$). With the $\mathcal{C}^{1,\alpha}$-boundedness of $\{u_{1,\beta}-u_{2,\beta}\}$ and other considerations, one could prove the uniform $\mathcal{C}^{1,\alpha}$ regularity of $\Gamma_\beta \cap B_R(x_0)$; a natural question is then: is it true that $\{u_{1,\beta}-u_{2,\beta}\}$ is uniformly bounded in $\mathcal{C}^{1,\alpha}$ in $B_R(x_0)$?
\begin{proposition}\label{prop: non C^1}
If $x_\beta \in \Gamma_\beta$ is such that $x_\beta \to x_0 \in \mathcal{R}$, then in general
\[
\lim_{\beta \to +\infty} \nabla (u_{1,\beta}-u_{2,\beta})(x_\beta) \neq \nabla (u_1-u_2)(x_0).
\]
In particular, in this case $\{u_{1,\beta}-u_{2,\beta}\}$ cannot be bounded in $\mathcal{C}^{1,\alpha}$.
\end{proposition}
For this reason, we think that the uniform Reifenberg flatness can be already considered as a relevant result.
\begin{remark}
Thanks to \cite[Section 8]{tt}, it is known that limits of the strongly competing system \eqref{main system k comp} share a number of properties with limits of the Lotka-Volterra system
\begin{equation}\label{LV}
-\Delta u_{i,\beta} = f_{i,\beta}(x,u_{i,\beta})-\beta u_{i,\beta} \sum_{j \neq i} a_{ij} u_{j,\beta} \qquad \text{in $\Omega$}.
\end{equation}
It is then remarkable to observe that, while by the nature of the interaction $\{u_{1,\beta}-u_{2,\beta}\}$ is uniformly bounded in $\mathcal{C}^{1,\alpha}$ if $\{\mf{u}_\beta\}$ is a family of solutions to \eqref{LV}, this is not the case for \eqref{main system k comp}.
\end{remark}
\begin{remark}
At a first glance the reader could think that $\Gamma_\beta \setminus \Sigma_\beta$ would have been a more natural notion of regular part of $\Gamma_\beta$. But we point out that we cannot expect any uniform-in-$\beta$ regularity property for $\Gamma_\beta \setminus \Sigma_\beta$, since this relatively open subset of $\Gamma_\beta$ naturally approaches the singular part $\Sigma_\beta$ (and thus $\Sigma$). This is why we introduced $\mathcal{R}_\beta(\rho)$.
\end{remark}
\subsection*{Structure of the paper and some notation} The second section is devoted to some preliminaries on monotonicity formulae for solutions to \eqref{main system k comp} and their limits, most of which are already known, and to the collection of some useful results regarding entire solutions of system \eqref{entire system a_{ij}}.
Theorem \ref{thm: global upper estimate} and Proposition \ref{prop: interfaces are good approximation} are proved in Section \ref{sec: decay 1}. Theorems \ref{thm: decay higher multiplicity}-\ref{cor: improved decay singular sequence} are the object of Section \ref{sec: decay 2}, where we also prove Theorem \ref{thm: blow-up} and its corollaries. The uniform Reifenberg flatness condition and its consequences, among which Theorem \ref{corol: decay components vanishing}, are addressed in Section \ref{sec: Reif}.
With the exception of the proof of Theorem \ref{thm: global upper estimate}, we will consider for the sake of simplicity the system with $f_{i,\beta} \equiv0$, that is
\begin{equation}\label{system simplified}
\begin{cases}
\Delta u_i= \beta u_i \sum_{j \neq i} a_{ij} u_j^2 &\text{ in $\Omega$}\\
u_i > 0 &\text{ in $\Omega$},
\end{cases}
\end{equation}
with $a_{ij}=a_{ji}>0$ and $\beta>0$. All the results that we present hold for the complete system \eqref{main system k comp}, as stated in the introduction. The proofs differ mainly in technical details, related to the fact that we shall use several monotonicity formulae, which in the presence of $f_{i,\beta} \not \equiv 0$ become almost-monotonicity formulae; hence in most of the forthcoming estimates exponential remainder terms appear. The point is that, thanks to (F1) and \eqref{boundedness}, such terms can be conveniently controlled. The interested reader can fill in the details by combining the approach here with that in \cite{SoZi}, where all the results are proved in full generality, and where we had to deal with the same technical complications; see also the remarks in the next sections for further details. We chose to focus on system \eqref{system simplified} with the aim of making our ideas more transparent, and the proofs technically simpler.
In this paper we adopt a notation which is mainly standard. We mention that we denote by $B_r(x)$ the ball of center $x$ and radius $r$, writing simply $B_r$ in the frequent case $x=0$. We recall that we often omit the expression ``up to a subsequence". Finally, $C$ will always denote a positive constant independent of $\beta$, whose exact value can be different from line to line.
\section{Preliminaries}\label{sec: prel}
\subsection{Monotonicity formulae for solutions to competing systems}
We collect some known and new results concerning monotonicity formulae for solutions of \eqref{system simplified}, for which we refer to \cite[Section 3.1]{SoZi} (see also \cite{BeTeWaWe,CaffLin,NoTaTeVe} for similar results).
For $x_0 \in \Omega$ and $r>0$ such that $B_r(x_0) \Subset \Omega$, we define
\begin{equation}\label{def N regular}
\begin{split}
\bullet \quad & H(\mf{u},x_0,r):= \frac{1}{r^{N-1}} \int_{\partial B_r(x_0)} \sum_{i=1}^k u_i^2\\
\bullet \quad & E(\mf{u},x_0,r):= \frac{1}{r^{N-2}} \int_{B_r(x_0)} \sum_{i=1}^k |\nabla u_i|^2+ 2\beta\sum_{1\le i<j\le k} a_{ij} u_i^2 u_j^2\\
\bullet \quad & N(\mf{u},x_0,r):= \frac{E(\mf{u},x_0,r)}{H(\mf{u},x_0,r)} \qquad (\text{Almgren frequency function}).
\end{split}
\end{equation}
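For orientation, we record a model computation showing that $N$ detects the rate of vanishing (a heuristic sketch, disregarding whether such a profile actually solves the system): if $\mf{u}(x_0+x) = |x|^\gamma\, \mf{g}(x/|x|)$ is homogeneous of degree $\gamma$ about $x_0$, then
\[
H(\mf{u},x_0,r) = \frac{1}{r^{N-1}} \int_{\partial B_r(x_0)} \sum_{i=1}^k u_i^2 = r^{2\gamma} \int_{\mathbb{S}^{N-1}} |\mf{g}|^2,
\]
so that $\frac{d}{dr}\log H(\mf{u},x_0,r) = 2\gamma/r$ and, in view of \eqref{der H} below, $N(\mf{u},x_0,r) \equiv \gamma$.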
\begin{proposition}\label{prop: almgren}
In the previous setting, for $N \le 4$ the function $r \mapsto N(\mf{u},x_0,r)$ is monotone non-decreasing.
Moreover,
\begin{equation}\label{der H}
\frac{d}{dr} \log H(\mf{u},x_0,r) = \frac{2}{r} N(\mf{u},x_0,r) \ge 0.
\end{equation}
\end{proposition}
As a consequence of the monotonicity of the Almgren frequency function, we have the following.
\begin{proposition}\label{prop: monot e+h}
Let $\mf{u}$ be a solution of \eqref{system simplified}, and for some $x_0 \in \Omega$ and $\tilde r>0$, let $\gamma:= N(\mf{u},x_0,\tilde r)$. Then
\[
r \mapsto \frac{E(\mf{u},x_0,r)+H(\mf{u},x_0,r)}{r^{2\gamma}} \quad \text{is non-decreasing for $r>\tilde r$}.
\]
\end{proposition}
\begin{proof}
At first, integrating \eqref{der H} in $(\tilde r,r)$, we deduce that
\[
r \mapsto \frac{H(\mf{u},x_0,r)}{r^{2\gamma}} \quad \text{is non-decreasing for $r>\tilde r$}.
\]
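Explicitly, since $N(\mf{u},x_0,s) \ge N(\mf{u},x_0,\tilde r) = \gamma$ for every $s \ge \tilde r$, by \eqref{der H} we have, for $\tilde r \le r_1 < r_2$,
\[
\log \frac{H(\mf{u},x_0,r_2)}{H(\mf{u},x_0,r_1)} = \int_{r_1}^{r_2} \frac{2 N(\mf{u},x_0,s)}{s}\, ds \ge 2\gamma \log\frac{r_2}{r_1},
\]
which is precisely the claimed monotonicity.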
Therefore, using also the monotonicity of $N(\mf{u},x_0,\cdot)$, it results
\begin{align*}
\frac{d}{dr} \log \left( \frac{E(\mf{u},x_0,r)+H(\mf{u},x_0,r)}{r^{2\gamma}} \right) &= \frac{d}{dr} \log \left( N(\mf{u},x_0,r) + 1 \right) \\
& + \frac{d}{dr} \log \left( \frac{H(\mf{u},x_0,r)}{r^{2\gamma}} \right) \ge 0. \qedhere
\end{align*}
\end{proof}
Finally, we recall a version of the Alt-Caffarelli-Friedman monotonicity formula suited to deal with solutions of \eqref{system simplified}, see Theorem 3.14 in \cite{SoZi} and also Theorem 4.3 in \cite{Wa}. To this aim, we introduce the functionals
\begin{align*}
J_1(r) & := \int_{B_r} \frac{|\nabla u_1|^2 + \beta a_{12} u_1^2 u_2^2}{|x|^{N-2}} \\
J_2(r) & := \int_{B_r} \frac{|\nabla u_2|^2 + \beta a_{12} u_1^2 u_2^2}{|x|^{N-2}},
\end{align*}
and we define $J(r):= J_1(r) J_2(r) / r^4$.
\begin{proposition}\label{prop: ACF}
Let $\mf{u}$ be a solution of \eqref{system simplified}, with $\Omega \Supset B_R(0)$ for some $R>1$, and let us assume that there exist $\lambda,\mu>0$ such that
\[
\frac{1}{\lambda} \le \frac{ \int_{\partial B_r} u_1^2 }{\int_{\partial B_r} u_2^2} \le \lambda \quad \text{and} \quad \frac{1}{r^{N-1}} \int_{\partial B_r} u_1^2 \ge \mu
\]
for every $r \in [1,R]$. Then there exists $C>0$ depending only on $\lambda,\mu$, and on the dimension $N$, such that
\[
r \mapsto J(r) \exp\{-C (\beta r^2)^{-1/4}\} \quad \text{is non-decreasing for $r \in [1,R]$}.
\]
\end{proposition}
\subsection{Almgren monotonicity formulae for segregated configurations}\label{sub: segregated configurations}
In \cite[Definition 1.2]{tt}, the authors introduced the sets $\mathcal{G}(\Omega)$ and $\mathcal{G}_{\mathrm{loc}}(\Omega)$, classes of segregated vector valued functions sharing several properties with solutions of competitive systems, including a version of the Almgren monotonicity formula. What is important for us is that, as already recalled in the introduction, if $\{\mf{u}_\beta\}$ is a sequence of solutions of \eqref{main system k comp} (or of the simplified system \eqref{system simplified}) and $\mf{u}_\beta \to \mf{u}$ locally uniformly and in $H^1_{\mathrm{loc}}$, then $\mf{u} \in \mathcal{G}(\Omega)$.
For elements of $\mathcal{G}(\Omega)$, with a slight abuse of notation, let
\begin{equation}\label{def N segregated}
\begin{split}
\bullet \quad & E(\mf{v},x_0,r):= \frac{1}{r^{N-2}} \int_{B_r(x_0)} \sum_{i=1}^k |\nabla v_i|^2\\
\bullet \quad & N(\mf{v},x_0,r):= \frac{E(\mf{v},x_0,r)}{H(\mf{v},x_0,r)} \qquad (\text{Almgren frequency function}),
\end{split}
\end{equation}
where $H$ is defined as in \eqref{def N regular}.
We recall some known facts. The following are a monotonicity formula for functions of $\mathcal{G}(\Omega)$, and a lower estimate of $N(\mf{v},x_0,0^+)$ for points $x_0$ on the free boundary $\{\mf{v}=\mf{0}\}$, for which we refer to \cite[Theorem 2.2 and Corollary 2.7]{tt} and \cite[Lemma 4.2]{SoTe}.
\begin{proposition}\label{prop: monot segregated}
Let $\mf{v} \in \mathcal{G}(\Omega)$. For every $x_0 \in \Omega$ and $r >0$ such that $B_r(x_0) \Subset \Omega$, we have $H(\mf{v},x_0,r) \neq 0$, and the function $N(\mf{v},x_0,r)$ is absolutely continuous and non-decreasing in $r$. Moreover
\[
\frac{d }{d r} \log H(\mf{v},x_0,r) = \frac{2N(\mf{v},x_0,r)}{r},
\]
and $N(\mf{v},x_0,r) \equiv \alpha$ is constant for $r \in (r_1,r_2)$ if and only if $\mf{v}=r^\alpha \mf{g}(\theta)$ is homogeneous of degree $\alpha$ in $\{r_1<|x|<r_2\}$ (here $(r,\theta)$ is a system of polar coordinates centred in $x_0$). Finally, if $x_0 \in \{\mf{v}=\mf{0}\}$, then either $N(\mf{v},x_0,0^+) =1$, or $N(\mf{v},x_0,0^+) \ge 3/2$.
\end{proposition}
\begin{remark}
In \cite[Lemma 4.2]{SoTe} it is shown that the alternative $N(\mf{v},x_0,0^+) =1$, or $N(\mf{v},x_0,0^+) \ge 3/2$, holds for the subclass of $\mathcal{G}_{\mathrm{loc}}(\mathbb{R}^N)$ containing all the homogeneous functions. This is sufficient to have the result in $\mathcal{G}(\Omega)$ for any $\Omega$, and to prove this we argue in the following way: let $\mf{v} \in \mathcal{G}(\Omega)$, not necessarily homogeneous, and let $x_0 \in \{\mf{v}=\mf{0}\}$. Let us consider a normalized blow-up
\[
w_{i,\rho}(x):= \frac{v_i(x_0+\rho x)}{H(\mf{v},x_0,\rho)^{1/2}}.
\]
Up to a subsequence, the family $\{\mf{w}_\rho\}$ is convergent in $\mathcal{C}^{0,\alpha}_{\mathrm{loc}}(\mathbb{R}^N)$ and $H^1_{\mathrm{loc}}(\mathbb{R}^N)$, as $\rho \to 0^+$, to a limit \emph{homogeneous} function $\mf{w} \in \mathcal{G}_{\mathrm{loc}}(\mathbb{R}^N)$ (see Section 3 in \cite{tt}). Thus, for every $r>0$
\begin{align*}
N(\mf{w},0,0^+) &= ( \text{by homogeneity} ) = N(\mf{w},0,r) = \lim_{\rho \to 0^+} N(\mf{w}_\rho,0, r)\\
&= \lim_{\rho \to 0^+} N(\mf{v},x_0, \rho r) = N(\mf{v},x_0,0^+) ,
\end{align*}
and Lemma 2.7 in \cite{SoTe} applies.
\end{remark}
\begin{remark}\label{rem: homogeneity non-segregated}
It is worth observing that the characterization ``$N(\mf{v},x_0,r) \equiv \alpha$ is constant for $r \in (r_1,r_2)$ if and only if $\mf{v}=r^\alpha \mf{g}(\theta)$ is homogeneous of degree $\alpha$ in $\{r_1<|x|<r_2\}$" remains true also if $\mf{v}$ is a solution of \eqref{system simplified}. But, since such a problem does not admit homogeneous solutions other than constant ones, this means that for any non-constant solution of \eqref{system simplified} the Almgren frequency function is strictly monotone.
\end{remark}
\subsection{On entire solutions of system \ref{entire system a_{ij}}}\label{sub: entire}
Theorem \ref{thm: blow-up} establishes a relationship between the behaviour of solutions to \eqref{main system k comp} near the interface and the geometry of the solutions to \eqref{entire system a_{ij}}:
\[
\begin{cases}
\Delta V_i = V_i \sum_{j \neq i} a_{ij} V_j^2 \\
V_i \ge 0
\end{cases} \quad \text{in $\mathbb{R}^N$},
\]
with $k \ge 2$, $N \ge 1$, and $a_{ij}=a_{ji}$. As stated in the introduction, this relationship will be exploited many times in the rest of the paper, and to this aim we recall some known results concerning existence and classification of solutions to \eqref{entire system a_{ij}}.
The first trivial observation is that, by the strong maximum principle, the dichotomy $V_i >0$ or $V_i \equiv 0$ in $\mathbb{R}^N$ holds.
Let us now consider the system with $k=2$ components; in such a situation, without loss of generality we can suppose $a_{12}=a_{21}=1$. The $1$-dimensional problem ($N =1$) is completely classified: up to rigid motions and suitable scalings, there exists a unique $1$-dimensional solution satisfying the symmetry property $V_2(x) = V_1(-x)$, the monotonicity conditions $V_1'>0$ and $V_2'<0$ in $\mathbb{R}$, and having at most linear growth, see \cite[Lemma 4.1 and Theorem 1.3]{BeLiWeZh} and \cite[Theorem 1.1]{BeTeWaWe}.
Linear growth is the minimal admissible growth for non-constant solutions of \eqref{entire system a_{ij}}, in the sense that in any dimension $N \ge 1$, if $(V_1,V_2)$ is a nonnegative solution satisfying the sublinear growth condition
\[
V_1(x)+V_2(x) \le C(1+|x|^\alpha) \qquad \text{in $\mathbb{R}^N$}
\]
for some $\alpha \in (0,1)$ and $C>0$, then one between $V_1$ and $V_2$ is $0$, and the other has to be constant. This \emph{Liouville-type theorem} has been proved by B. Noris et al. in \cite[Proposition 2.6]{NoTaTeVe}.
In contrast to the $1$-dimensional case, already for $N = 2$ the $2$-components system \eqref{entire system a_{ij}} has infinitely many positive solutions with algebraic growth, see \cite{BeTeWaWe}, and also solutions with exponential growth, see \cite{SoZi1}. These existence results were extended to systems with arbitrary $k >2$, but only under assumption \eqref{a_ij=1}. Notice that by Theorem \ref{thm: blow-up} solutions with exponential growth cannot be obtained as blow-up limits of sequences $\{\mf{u}_\beta\}$ satisfying \eqref{boundedness} and \eqref{nontrivial}.
We also observe that the existence of solutions in $\mathbb{R}^N$ with $N \ge 3$ which are not obtained from solutions in $\mathbb{R}^2$ has been recently proved in \cite{SoZi2}.
In parallel to the study of existence, great efforts have been devoted to the search for reasonable conditions which, if satisfied by a solution of \eqref{entire system a_{ij}}, imply the $1$-dimensional symmetry of such a solution; this, as explained in \cite{BeLiWeZh}, is inspired by an analogy between the derivation of \eqref{entire system a_{ij}} and that of the Allen-Cahn equation, for which symmetry results in the spirit of the celebrated De Giorgi conjecture have been widely investigated. For systems of $k=2$ components, we refer to \cite{Fa}, dealing with monotone solutions in dimension $N=2$; to \cite{FaSo}, where a Gibbons-type conjecture for \eqref{entire system a_{ij}} is proved for any $N \ge 2$; and to \cite{Wa,Wa2}, where it is shown that in any dimension $N \ge 2$, any solution of \eqref{entire system a_{ij}} having linear growth is $1$-dimensional. Writing that $(V_1,V_2)$ has linear growth, we mean that there exists $C>0$ such that
\[
V_1(x)+ V_2(x) \le C (1+|x|) \qquad \forall x \in \mathbb{R}^N.
\]
It is worth pointing out that the linear growth condition can be rephrased by requiring that $N(\mf{V},0,+\infty) \le 1$, where $N(\mf{V},0,+\infty) = \lim_{r \to +\infty} N(\mf{V},0,r)$ (which exists by monotonicity of the frequency function).
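For the reader's convenience, here is a sketch of one implication: if $N(\mf{V},0,\tilde r) = \gamma > 1$ for some $\tilde r>0$, then integrating \eqref{der H} on $(\tilde r,r)$ and using the monotonicity of the frequency yields
\[
H(\mf{V},0,r) \ge H(\mf{V},0,\tilde r) \left(\frac{r}{\tilde r}\right)^{2\gamma} \qquad \forall r > \tilde r,
\]
which is incompatible with linear growth, since the latter gives $H(\mf{V},0,r) \le C(1+r)^2$. Hence linear growth forces $N(\mf{V},0,+\infty) \le 1$.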
Other symmetry results for $k=2$ are \cite[Theorem 1.8]{BeLiWeZh} and \cite[Theorem 1.12]{BeTeWaWe}, which are now particular cases of Wang's results, and the theorems in \cite{Dip}, where stable or monotone solutions with linear growth of more general systems are considered.
Regarding $1$-dimensional symmetry for systems with several components, we refer to \cite[Theorem 1.3]{SoTe}, where for any $k \ge 2$ the authors generalized the main results in \cite{FaSo} and \cite{Wa,Wa2} under assumption \eqref{a_ij=1}. Another important result, which we shall use in the following, is \cite[Corollary 1.9]{SoTe}, where it is proved that if \eqref{a_ij=1} holds and $\mf{V}$ is a non-constant solution to \eqref{entire system a_{ij}}, then
\begin{itemize}
\item either $N(\mf{V},0,+\infty) =1$, in which case $\mf{V}$ has linear growth, has exactly $2$ nontrivial components, and is $1$-dimensional;
\item or $N(\mf{V},0,+\infty) \ge 3/2$, and hence $\mf{V}$ does not have linear growth. In this latter case, adapting Lemma 4.2 in \cite{BeLiWeZh} to systems with several components, it is not difficult to deduce that $\mf{V}$ cannot be $1$-dimensional.
\end{itemize}
To conclude this section, we remark that when $k >2$ but \eqref{a_ij=1} does not hold, it is still possible to recover the classification results in \cite{Wa,Wa2}. Indeed, independently of the $a_{ij}$, by \cite[Corollary 1.12]{SoTe} any non-constant solution to \eqref{entire system a_{ij}} having linear growth has exactly $2$ nontrivial components, and hence the results in \cite{Wa,Wa2} are applicable.
\section{Decay estimates I}\label{sec: decay 1}
This section is devoted to the proof of Theorem \ref{thm: global upper estimate} and Proposition \ref{prop: interfaces are good approximation}. Thus, (F1), (F2) and \eqref{boundedness} are in force. We start by recalling an important decay estimate which will be used frequently in this paper.
\begin{lemma}[Lemma 4.4 in \cite{ctv}]\label{lem: decay}
Let $x_0 \in \mathbb{R}^N$ and $r>0$. Let $v \in H^1(B_{2r}(x_0))$ satisfy
\[
\begin{cases}
-\Delta v \le -K v & \text{in $B_{2r}(x_0)$} \\
v \ge 0 & \text{in $B_{2r}(x_0)$} \\
v \le A & \text{on $\partial B_{2r}(x_0)$},
\end{cases}
\]
where $K$ and $A$ are two positive constants. Then for every $\alpha \in (0,1)$ there exists $C_\alpha>0$, not depending on $A$, $K$, $r$ and $x_0$, such that
\[
\sup_{x \in B_r(x_0)} v(x) \le \alpha A e^{-C_\alpha K^{1/2} r}.
\]
\end{lemma}
This result, together with the uniform boundedness in the Lipschitz norm of $\{\mf{u}_\beta\}$ (proved in \cite{SoZi}), is the main ingredient in the proof of Theorem \ref{thm: global upper estimate}.
\begin{proof}[Proof of Theorem \ref{thm: global upper estimate}]
For an arbitrary compact $K \Subset \Omega$, let $K'$ be such that $K \Subset K' \Subset \Omega$. By contradiction, we assume that there exist sequences $\beta_n \to +\infty$ and $x_n \in K$ such that
\[
\beta_n^{1/2} u_{i,n}(x_n) u_{j,n}(x_n) \to +\infty \qquad \text{as $n \to \infty$ for some $i,j =1,\dots,k$},
\]
where $\mf{u}_n= \mf{u}_{\beta_n}$. By compactness, up to a subsequence $x_n \to x_0 \in K$. Moreover, without loss of generality, we can suppose that $i=1$, $j=2$, and
\begin{equation}\label{ordering}
u_{1,n}(x_n) , u_{2,n}(x_n) \ge u_{h,n}(x_n) \qquad \forall h \neq 1,2.
\end{equation}
\textbf{Step 1)} \emph{For every $i$, the sequence $(u_{i,n}(x_n))$ converges to $0$ as $n \to \infty$.} \\
As already observed in the introduction, by (F1), (F2) and \eqref{boundedness} we know that $\mf{u}_n \to \mf{u}$ in $\mathcal{C}^0(K')$ and $H^1(K')$. If for instance $u_{1,n}(x_n) \ge 3\delta > 0$ for every $n$, then $u_1(x_0) \ge 2\delta$, and in turn, by the uniform Lipschitz boundedness \cite{SoZi}, $u_{1,n} \ge \delta$ in a neighbourhood $B_{2\rho}(x_0)$. In particular, this implies by (F2) that for any $j \neq 1$
\begin{equation}\label{elliptic inequality}
-\Delta u_{j,n} \le - \beta_n a_{1j} u_{1,n}^2 u_{j,n} + f_{j,\beta_n}(x,u_{j,n}) \le (C- C \beta_n) u_{j,n} \le -C \beta_n u_{j,n}
\end{equation}
in $B_{2\rho}(x_0)$. Since $u_{j,n}$ is also positive and bounded in $L^\infty(B_{2\rho}(x_0))$, uniformly in $n$, we deduce by Lemma \ref{lem: decay} that
\[
\sup_{B_{\rho}(x_0)} u_{j,n} \le C e^{-C \beta_n^{1/2} \rho} \qquad \forall j=2,\dots,k, \ \forall n.
\]
It follows that
\[
\beta_n^{1/2} u_{1,n}(x_n) u_{2,n}(x_n) \le C \beta_n^{1/2} e^{-C \beta_n^{1/2} \rho}
\]
for every $n$ sufficiently large, in contradiction with the unboundedness of the left-hand side.
\medskip
\noindent \textbf{Step 2)} \emph{Conclusion of the proof.} \\
Let $\varepsilon_n:= \sum_{i=1}^k u_{i,n}(x_n)$. By Step 1, $\varepsilon_n \to 0$ as $n \to \infty$. Let
\[
\tilde u_{i,n}(x):= \frac{1}{\varepsilon_n} u_{i,n}(x_n+\varepsilon_n x) \qquad i=1,\dots,k,
\]
well defined on the scaled domains $K_n':= (K'-x_n)/\varepsilon_n$, which exhaust $\mathbb{R}^N$ as $n \to \infty$ (here we used the fact that $K \Subset K'$). Note that the normalization has been chosen in such a way that $\sum_i \tilde u_{i,n}(0)= 1$, and that the sequence $\{\tilde{\mf{u}}_n\}$ inherits from $\{\mf{u}_n\}$ the uniform boundedness of the Lipschitz semi-norm. As a consequence, $\{\tilde{\mf{u}}_n\}$ is uniformly bounded on compact sets. Now,
\[
-\Delta \tilde u_{i,n} = - \varepsilon_n^4\beta_n \tilde u_{i,n} \sum_{j \neq i} a_{ij} \tilde u_{j,n}^2 + \varepsilon_n f_{i,\beta_n}(x_n +\varepsilon_n x, \varepsilon_n \tilde u_{i,n}(x)) \qquad \text{in $K_n'$},
\]
and the new competition parameter diverges: indeed by assumption
\[
\varepsilon_n^4 \beta_n = \left(\sum_i u_{i,n}(x_n) \right)^4 \beta_n \ge 6 \beta_n u_{1,n}^2(x_n) u_{2,n}^2(x_n) \to +\infty.
\]
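For the reader's convenience, we record the computation behind the equation satisfied by $\tilde{\mf{u}}_n$ displayed above: since $u_{i,n}(x_n+\varepsilon_n x)=\varepsilon_n \tilde u_{i,n}(x)$, the chain rule gives
\[
-\Delta \tilde u_{i,n}(x) = \varepsilon_n\left(-\Delta u_{i,n}\right)(x_n+\varepsilon_n x) = -\varepsilon_n \beta_n\, u_{i,n} \sum_{j \neq i} a_{ij} u_{j,n}^2 + \varepsilon_n f_{i,\beta_n} = -\varepsilon_n^4 \beta_n\, \tilde u_{i,n} \sum_{j \neq i} a_{ij} \tilde u_{j,n}^2 + \varepsilon_n f_{i,\beta_n},
\]
where $u_{i,n}$ and $f_{i,\beta_n}$ are evaluated at $x_n+\varepsilon_n x$, and $\tilde u_{i,n}$ at $x$.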
Therefore, by \cite{SoTaTeZi} (see also \cite{NoTaTeVe,Wa}) we infer that, up to a subsequence, $\tilde{\mf{u}}_n \to \tilde{\mf{u}}$ locally uniformly and in $H^1_{\mathrm{loc}}(\mathbb{R}^N)$, with $\tilde u_i \tilde u_j \equiv 0$ in $\mathbb{R}^N$ for every $j \neq i$. Together with the considered normalization, this implies that, for instance, $\tilde u_1(0) =1$, while $\tilde u_j(0)=0$ for all the other indices $j$. Recalling again the uniform Lipschitz boundedness of $\{\tilde{\mf{u}}_n\}$ on compact sets, for every $n$ sufficiently large we have $\tilde u_{1,n} \ge 1/2$ in a neighbourhood $B_{2\rho}$. By (F1), we finally conclude
\[
-\Delta \tilde u_{j,n} \le - C \varepsilon_n^4 \beta_n \tilde u_{j,n} + C \varepsilon_n^2 \tilde u_{j,n} \le - C \varepsilon_n^4 \beta_n \tilde u_{j,n} \qquad \forall j \neq 1, \ \forall n
\]
in the ball $B_{2\rho}$. Thanks to Lemma \ref{lem: decay}, this implies that
\[
\beta_n u_{1,n}^2 (x_n) u_{2,n}^2(x_n) = \beta_n \varepsilon_n^4 \tilde u_{1,n}^2(0) \tilde u_{2,n}^2(0) \le C \beta_n \varepsilon_n^4 e^{-C \beta_n^{1/2} \varepsilon_n^{2} \rho} \to 0
\]
as $n \to \infty$, a contradiction.
\end{proof}
\begin{remark}
We wish to observe that the uniform Lipschitz boundedness of the sequence $\{\mf{u}_n\}$ is essential in our proof in order to deduce that $\{\tilde{\mf{u}}_n\}$ is $L^\infty$-bounded on compact sets of $\mathbb{R}^N$. Notice that uniform H\"older boundedness would not be sufficient.
\end{remark}
Now we proceed with the:
\begin{proof}[Proof of Proposition \ref{prop: interfaces are good approximation}] If $x_\beta \in \Gamma_\beta$ and $x_\beta \to x_0$, then clearly $x_0 \in \Gamma$ by the local uniform convergence $\mf{u}_\beta \to \mf{u} \in \mathcal{G}(\Omega)$. Let now $x_0 \in \Sigma$ with $\mf{u} \not \equiv \mf{0}$, and let us show that there exists $x_\beta \in \Gamma_\beta$ such that $x_\beta \to x_0$. If this is not the case, then ${\rm dist}(x_0, \Gamma_\beta) \ge \delta>0$ independently of $\beta$. But then there exists an index $i$ such that (up to a subsequence)
\[
x_0 \in \{u_{i,\beta}> u_{j,\beta} \text{ for every $j \neq i$}\} \qquad \forall \beta \quad \Longrightarrow \quad B_{\delta/2}(x_0) \subset \{u_i>0\} \cup \Gamma.
\]
It cannot happen that $B_{\delta/2} (x_0) \cap \mathcal{R} \neq \emptyset$: otherwise we would have self-segregation around the regular part of the free boundary, in contradiction with \cite[Section 10]{DaWaZh}. This means that $B_{\delta/2}(x_0) \cap \Gamma = B_{\delta/2}(x_0) \cap \Sigma$, since the Hausdorff dimension of $\Sigma$ is at most $N-2$ (see \cite{tt} and the previous section). In turn, we deduce that $-\Delta u_i=f_i(x,u_i)$ and $u_i>0$ in $B_{\delta/2}(x_0) \setminus \Sigma$, which implies $-\Delta u_i=f_i(x,u_i)$ in $B_{\delta/2}(x_0)$ since $\Sigma$ has zero capacity, and in turn gives $u_i(x_0)>0$ by the strong maximum principle, a contradiction.
\end{proof}
\section{Blow-up and decay estimates II}\label{sec: decay 2}
In the first part of this section we prove Theorem \ref{thm: blow-up}. This, together with Corollaries \ref{thm: classification limits regular part} and \ref{thm: non-simple blow-up}, will be the starting point to obtain Theorems \ref{thm: lower general}-\ref{cor: improved decay singular sequence}. Theorem \ref{thm: decay higher multiplicity} will be the object of the last part of the section.
As announced at the end of the introduction, for the sake of simplicity we deal with a sequence of solutions to \eqref{system simplified},
\[
\begin{cases}
\Delta u_{i,\beta} = \beta u_{i,\beta} \sum_{j \neq i} a_{ij} u_{j,\beta}^2 & \text{in $\Omega$} \\
u_{i,\beta} > 0 & \text{in $\Omega$},
\end{cases}
\]
satisfying \eqref{boundedness} and \eqref{nontrivial}: $\{\mf{u}_\beta\}$ is uniformly bounded in $L^\infty(\Omega)$, and is convergent to a nontrivial limit profile $\mf{u} \in \mathcal{G}(\Omega)$. Let $K \Subset \Omega$ be a compact set; then there exists $\bar r>0$ such that $B_{2\bar r}(x) \Subset \Omega$ for every $x \in K$.
Firstly, we derive a simple consequence of assumption \eqref{nontrivial}.
\begin{lemma}\label{lem: N bounded}
There exists $\bar C>0$ such that
\[
E(\mf{u}_\beta,x_0,r) , N(\mf{u}_\beta,x_0,r) \le \bar C
\]
for every $x_0 \in K$, $r \in (0,\bar r]$, and for every $\beta$.
\end{lemma}
\begin{proof}
Since $E$ is monotone non-decreasing as function of $r$, for the first part of the thesis it is sufficient to bound $E(\mf{u}_\beta,x_0,\bar r)$ uniformly in $x_0$ and $\beta$.
This can be done as in point (6) of Lemma 2.1 in \cite{SoZi} (in the present setting it is actually easier), and we only sketch the proof for the sake of completeness. Let $\varphi \in \mathcal{C}^\infty_c(B_{2\bar r})$, with $\varphi \equiv 1$ in $B_{\bar r}$ and $0 \le \varphi \le 1$, and let $\varphi_{x_0}(y):= \varphi(y-x_0)$. By testing the equation in \eqref{system simplified} with $\varphi_{x_0}$, we can show that
\[
\int_{B_{\bar r}(x_0)} \beta u_{i,\beta} \sum_{j \neq i} a_{ij} u_{j,\beta}^2 \le C
\]
for some $C>0$ independent of $x_0$ and $\beta$. This, together with assumption \eqref{boundedness}, gives the boundedness of
\[
\int_{B_{\bar r}(x_0)}\sum_{j \neq i} a_{ij} u_{i,\beta}^2 u_{j,\beta}^2.
\]
To control the integrals of the square of the gradient, we test the equation in \eqref{system simplified} with $u_{i,\beta} \varphi_{x_0}^2$, and obtain the desired estimate after some integration by parts.
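A sketch of this last step, for the reader's convenience: testing the $i$-th equation with $u_{i,\beta} \varphi_{x_0}^2$ and integrating by parts (no boundary terms appear, since $\varphi_{x_0}$ is compactly supported) gives
\[
\int_{B_{2\bar r}(x_0)} |\nabla u_{i,\beta}|^2 \varphi_{x_0}^2 = - \int_{B_{2\bar r}(x_0)} \beta u_{i,\beta}^2 \sum_{j \neq i} a_{ij} u_{j,\beta}^2\, \varphi_{x_0}^2 - 2 \int_{B_{2\bar r}(x_0)} \varphi_{x_0}\, u_{i,\beta}\, \nabla u_{i,\beta} \cdot \nabla \varphi_{x_0};
\]
the first term on the right-hand side is non-positive, and the second one can be absorbed into the left-hand side by Young's inequality, using \eqref{boundedness} to bound $\int u_{i,\beta}^2 |\nabla \varphi_{x_0}|^2$.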
Concerning the boundedness of the Almgren quotient, by monotonicity again (Proposition \ref{prop: almgren}) it is sufficient to check that $N(\mf{u}_\beta,x_0,\bar r)$ is bounded uniformly in $x_0$ and $\beta$. Thanks to the first part, it is equivalent to prove that $H(\mf{u}_\beta,x_0,\bar r) \ge C >0$ independently of $x_0$ and $\beta$. If this is not true, then there exist sequences $\beta \to +\infty$ and $x_\beta \in K$ such that $H(\mf{u}_\beta,x_\beta,\bar r) \to 0$. On the other hand, observing that $x_\beta \to x_0 \in K$ since $K$ is compact, by uniform convergence we have $H(\mf{u}_\beta,x_\beta,\bar r) \to H(\mf{u},x_0,\bar r)$. This is a strictly positive quantity, as ensured by Proposition \ref{prop: monot segregated} and assumption \eqref{nontrivial}, and hence we reached the desired contradiction.
\end{proof}
The following statement suggests the proper choice of $r_\beta$ in Theorem \ref{thm: blow-up}.
\begin{lemma}\label{lem: choice of r}
For any $x_0 \in K$ and $\beta > 0$ sufficiently large, there exists a unique $r_\beta(x_0)> 0$ such that
\[
\beta H(\mf{u}_\beta, x_0, r_\beta(x_0)) r_\beta(x_0)^2 = 1.
\]
Moreover, let $\{x_\beta\} \subset K$. Then $r_\beta(x_\beta) \to 0$, and consequently
\[
\frac{\Omega - x_\beta}{r_\beta(x_\beta)} \to \mathbb{R}^N \qquad \text{as $\beta \to +\infty$},
\]
in the sense that for any $R>0$ there exists $\bar \beta$ sufficiently large such that $B_R \subset (\Omega - x_\beta)/r_\beta(x_\beta)$ provided $\beta > \bar \beta$.
\end{lemma}
\begin{proof}
First of all, by Proposition \ref{prop: almgren}
\[
r \mapsto \beta H(\mf{u}_\beta,x_0,r) r^2
\]
is increasing for any $x_0 \in K$ and $\beta$ fixed. Since $H(\mf{u}_\beta,x_0,\bar r) \to H(\mf{u},x_0,\bar r)$ and assumption \eqref{nontrivial} is in force, we have
\[
\beta H(\mf{u}_\beta,x_0,\bar r) \bar r^2 >1 \qquad \forall \beta>\bar \beta.
\]
Moreover, since $\mf{u}_\beta$ is a vector valued smooth function with positive components, it results
\[
\lim_{r \to 0^+} \beta H(\mf{u}_\beta,x_0,r) r^2 = 0,
\]
and hence the thesis follows by the intermediate value theorem, with uniqueness given by the strict monotonicity of $r \mapsto \beta H(\mf{u}_\beta,x_0,r)\, r^2$.
For the second part of the lemma we argue by contradiction, assuming that for a sequence $x_\beta \in K$ it results $r_\beta(x_\beta) \ge \tilde r>0$. By compactness $x_\beta \to x_0 \in K$, and thanks to \eqref{nontrivial} and uniform convergence
\[
1 = \beta H(\mf{u}_\beta,x_\beta,r_\beta(x_\beta)) r_\beta(x_\beta)^2 \ge \frac{\beta}{2} H(\mf{u},x_0,\tilde r) \tilde r^2 \to +\infty
\]
as $\beta \to +\infty$, a contradiction.
\end{proof}
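For later reference, we observe that the defining relation of $r_\beta(x_0)$ can be equivalently written as
\[
\beta\, H(\mf{u}_\beta,x_0,r_\beta(x_0)) = \frac{1}{r_\beta(x_0)^2},
\]
an identity that will be used repeatedly in the sequel.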
With the previous lemmas in hand we can proceed with the proof of Theorem \ref{thm: blow-up}. Before doing so, we observe that by definition
\begin{equation}\label{scaled quantities}
\begin{split}
H(\mf{v}_\beta,0,r) & = \frac{H(\mf{u}_\beta,x_\beta,r_\beta r)}{H(\mf{u}_\beta,x_\beta,r_\beta)} \\
E(\mf{v}_\beta,0,r) & = \frac{E(\mf{u}_\beta,x_\beta,r_\beta r)}{H(\mf{u}_\beta,x_\beta,r_\beta)} \\
N(\mf{v}_\beta,0,r) & = N(\mf{u}_\beta,x_\beta,r_\beta r).
\end{split}
\end{equation}
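These identities follow directly from the change of variables $y = x_\beta + r_\beta x$; for instance, for the first one,
\[
H(\mf{v}_\beta,0,r) = \frac{1}{r^{N-1}} \int_{\partial B_r} \sum_{i=1}^k v_{i,\beta}^2 = \frac{1}{H(\mf{u}_\beta,x_\beta,r_\beta)} \cdot \frac{1}{(r_\beta r)^{N-1}} \int_{\partial B_{r_\beta r}(x_\beta)} \sum_{i=1}^k u_{i,\beta}^2 = \frac{H(\mf{u}_\beta,x_\beta,r_\beta r)}{H(\mf{u}_\beta,x_\beta,r_\beta)},
\]
and the remaining two are analogous.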
\begin{proof}[Proof of Theorem \ref{thm: blow-up}]
Let us consider the scaled sequence $\mf{v}_\beta$:
\[
v_{i,\beta}(x):= \frac{u_{i,\beta}(x_\beta + r_\beta x)}{H(\mf{u}_\beta,x_\beta,r_\beta)^{1/2}}
\]
where $r_\beta=r_\beta(x_\beta)$ is given by Lemma \ref{lem: choice of r}, and we recall that $x_\beta$ is a sequence of points on the interfaces $\Gamma_\beta$.
Thanks to the choice of $r_\beta$
\begin{equation}\label{scaled equation}
\Delta v_{i,\beta}(x) = v_{i,\beta} \sum_{j \neq i} a_{ij} v_{j,\beta}^2 \qquad \text{in } \frac{\Omega-x_\beta}{r_\beta},
\end{equation}
and moreover by \eqref{scaled quantities} and Lemma \ref{lem: N bounded}
\[
N(\mf{v}_\beta,0,r) \le \bar C \qquad \forall r \le \frac{\bar r}{r_\beta},
\]
where we recall that $\bar r>0$ has been chosen so that $B_{2\bar r}(x) \Subset \Omega$ for every $x \in K$.
The previous estimate, together with Proposition \ref{prop: almgren}, implies that
\[
\frac{d}{dr}\log H(\mf{v}_\beta,0,r) = \frac{2 N(\mf{v}_\beta,0,r)}{r} \le \frac{2\bar C}{r} \qquad \forall r \le \frac{\bar r}{r_\beta};
\]
by integrating
\[
H(\mf{v}_\beta,0,r) \le H(\mf{v}_\beta,0,1) r^{2 \bar C} \qquad \forall r \in \left[1,\frac{\bar r}{r_\beta}\right].
\]
Consequently, for any fixed $r>1$, the sequence $\{H(\mf{v}_\beta,0,r)\}_\beta$ is bounded, and since $\{N(\mf{v}_\beta,0,r)\}_\beta$ is also bounded by Lemma \ref{lem: N bounded}, we infer that $\{E(\mf{v}_\beta,0,r)\}_\beta$ is in turn bounded. Using a Poincar\'e inequality, it is not difficult to deduce that this gives boundedness of $\{\mf{v}_\beta\}$ in $H^1(B_r)$, and hence also in $L^2(\partial B_r)$. By subharmonicity, $\{\mf{v}_\beta\}$ is then $L^\infty$-bounded in any compact subset of $B_r$, and, by regularity theory for elliptic equations, this provides $\mathcal{C}^2_{\mathrm{loc}}(B_r)$ convergence to a limit $\mf{V}^{(r)}$, solution to \eqref{entire system a_{ij}} in $B_r$. Since in the previous argument $r>1$ has been arbitrarily chosen, we can take a sequence of radii diverging to $+\infty$, and with a diagonal selection we obtain $\mathcal{C}^2_{\mathrm{loc}}$ convergence to an entire limit profile $\mf{V}$. Notice that $\mf{V}$ has at least two nontrivial components. Indeed, by definition of $\Gamma_\beta$, we know that $u_{i_1,\beta}(x_\beta)= u_{i_2,\beta}(x_\beta) \ge u_{j,\beta}(x_\beta)$ for at least two indices $i_1 \neq i_2$ and for all $j$. This implies that $v_{i_1,\beta}(0) = v_{i_2,\beta}(0) \ge v_{j,\beta}(0)$. Now, it is easy to check that $V_{i_1}(0) > 0$: if not, then $V_j(0) = 0$ for all $j$, and since $V_j$ is nonnegative and solves
\[
\Delta V_j = V_j \sum_{i \neq j} a_{ij} V_i^2 \qquad \text{in $\mathbb{R}^N$}
\]
by the strong maximum principle $V_{j} \equiv 0$ in $\mathbb{R}^N$ for all $j$, in contradiction with the fact that
\[
\int_{\partial B_1} \sum_{i=1}^k v_{i,\beta}^2 = 1. \qedhere
\]
\end{proof}
\begin{remark}\label{rem: conv also not on gamma}
If $x_\beta$ is not necessarily a sequence in $\Gamma_\beta$, the previous proof still establishes that the scaled sequence $\mf{v}_\beta$ converges to a limit $\mf{V} \not \equiv \mf{0}$. It is worth pointing out that, in case $x_\beta \not \in \Gamma_\beta$ for $\beta$ large, such convergence is not really informative, since the limit function will have only one nontrivial component, which is constant, while all the others will vanish.
\end{remark}
\begin{remark}\label{rem: con f 1}
It is worth observing explicitly that the limit system does not change in the presence of nontrivial $f_{i,\beta}(x,u_{i,\beta})$. Indeed, the transformed nonlinearities appearing in \eqref{scaled equation} take the form
\[
\frac{r_\beta^2}{H(\mf{u}_\beta,x_\beta,r_\beta)} f_{i,\beta}\left( x_\beta + r_\beta x, u_{i,\beta}(x_\beta+r_\beta x) \right),
\]
and by (F1) can be easily controlled by
\[
\frac{r_\beta^2 u_{i,\beta}(x_\beta+r_\beta x) }{H(\mf{u}_\beta,x_\beta,r_\beta)} = r_\beta^2 v_{i,\beta}(x).
\]
Therefore, once the local $L^\infty$ boundedness of $\{\mf{v}_\beta\}$ is proved (instead of the subharmonicity, one can use a Brezis-Kato argument), the transformed nonlinearities converge to $0$ locally uniformly since $r_\beta \to 0$.
In the same spirit, we observe that if $f_{i,\beta} \not \equiv 0$ and (F1) holds, then both $H(\mf{u}_\beta,x_\beta,\cdot)$ and $N(\mf{u}_\beta,x_\beta,\cdot)$ are not necessarily monotone, but satisfy some almost-monotonicity formulae, see Proposition 3.5 in \cite{SoZi}. Using such a result and refining the previous computations a little, it is not difficult to check that everything we proved in this subsection holds also in that context, as stated in the introduction.
\end{remark}
\subsection{Lower estimates on the decay}
We aim at proving Theorem \ref{thm: lower general}. As a first step, we relate the value of $\mf{u}_\beta$ on the interface to $H(\mf{u}_\beta,x_\beta,r_\beta)$.
\begin{lemma}\label{lem: H con m}
Let $\{x_\beta\} \subset K$, and let $r_\beta=r_\beta(x_\beta)$ be defined by Lemma \ref{lem: choice of r}. There exists $C>1$ such that
\[
\frac{1}{C} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^2 \le H(\mf{u}_\beta,x_\beta,r_\beta) \le C \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^2.
\]
\end{lemma}
\begin{proof}
By Theorem \ref{thm: blow-up} (see also Remark \ref{rem: conv also not on gamma}) we know that $\mf{v}_\beta \to \mf{V} \not \equiv \mf{0}$. Thus there exists $C>0$ such that
\[
\frac{1}{C} \le \sum_{i=1}^{k} v_{i,\beta}^2(0) \le C.
\]
Recalling the definition of $\mf{v}_\beta$ and the equivalence of the $\ell^1$ and $\ell^2$ norms on $\mathbb{R}^k$, we obtain the desired result.
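Indeed, by the definition of $\mf{v}_\beta$ we have $u_{i,\beta}(x_\beta) = H(\mf{u}_\beta,x_\beta,r_\beta)^{1/2}\, v_{i,\beta}(0)$, so that
\[
\sum_{i=1}^k u_{i,\beta}^2(x_\beta) = H(\mf{u}_\beta,x_\beta,r_\beta) \sum_{i=1}^k v_{i,\beta}^2(0) \in \left[ \frac{H(\mf{u}_\beta,x_\beta,r_\beta)}{C},\; C\, H(\mf{u}_\beta,x_\beta,r_\beta) \right],
\]
and the thesis follows since, the components being positive, $\frac{1}{k} \big(\sum_{i=1}^k u_{i,\beta}(x_\beta)\big)^2 \le \sum_{i=1}^k u_{i,\beta}^2(x_\beta) \le \big(\sum_{i=1}^k u_{i,\beta}(x_\beta)\big)^2$.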
\end{proof}
We are ready to proceed with the:
\begin{proof}[Proof of Theorem \ref{thm: lower general}]
Let $x_\beta \in \Gamma_\beta$, and let $r_\beta=r_\beta(x_\beta)$ be defined by Lemma \ref{lem: choice of r}. Let $\varepsilon>0$ be fixed. By the continuity of the Almgren frequency function there exists $\bar r>0$ such that $N(\mf{u},x_0,\bar r) \le D+\varepsilon/2$. By convergence, $N(\mf{u}_\beta,x_\beta,\bar r) \le D+\varepsilon$ at least for $\beta$ sufficiently large, and as a consequence of Proposition \ref{prop: almgren} we deduce that
\[
N(\mf{u}_\beta,x_\beta, r) \le D+\varepsilon \qquad \forall r \leq \bar r.
\]
Therefore
\[
\frac{d}{dr} \log H(\mf{u}_\beta,x_\beta,r) = \frac{2N(\mf{u}_\beta,x_\beta,r)}{r} \le \frac{2(D+\varepsilon)}{r} \qquad \forall r \in (0,\bar r],
\]
which by integration implies that
\[
C_\varepsilon \le \frac{H(\mf{u}_\beta,x_\beta,\bar r)}{\bar r^{2(D+\varepsilon)}} \le \frac{H(\mf{u}_\beta,x_\beta, r)}{r^{2(D+\varepsilon)}} \qquad \forall r \in (0,\bar r],
\]
with $C_\varepsilon>0$ by \eqref{nontrivial}. In particular, recalling that $r_\beta \to 0$, this estimate holds for $r=r_\beta$, at least for $\beta$ sufficiently large. But then, thanks to Lemma \ref{lem: H con m} and the choice of $r_\beta$, we obtain
\begin{align*}
\left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^2 & \ge C_\varepsilon r_\beta^{2(D+\varepsilon)} \ge \frac{C_\varepsilon}{\beta^{(D+\varepsilon)} H(\mf{u}_\beta,x_\beta,r_\beta)^{(D+\varepsilon)}} \\
& \ge \frac{C_\varepsilon }{\beta^{(D+\varepsilon)} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^{2(D+\varepsilon)} },
\end{align*}
that is
\[
\beta^{D+\varepsilon} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^{2(1+D+\varepsilon)} \ge C_\varepsilon,
\]
whence the thesis follows.
\end{proof}
As a corollary:
\begin{proof}[Proof of Theorem \ref{thm: lowe estimate}]
If $x_0 \in \mathcal{R}$, then Theorem \ref{thm: lower general} holds with $D=1$, or equivalently
\[
\liminf_{\beta \to +\infty} \beta^{1/4+\varepsilon} \sum_{i=1}^k u_{i,\beta}(x_\beta) \ge C_\varepsilon.
\]
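For the reader's convenience, we record the elementary manipulation behind this: raising the estimate of Theorem \ref{thm: lower general} with $D=1$ to the power $1/(4+2\varepsilon)$ gives
\[
\liminf_{\beta \to +\infty}\, \beta^{\frac{1+\varepsilon}{4+2\varepsilon}} \sum_{i=1}^k u_{i,\beta}(x_\beta) \ \ge\ C_\varepsilon > 0,
\]
and since $\frac{1+\varepsilon}{4+2\varepsilon} \le \frac{1}{4}+\varepsilon$, replacing the exponent with $1/4+\varepsilon$ only increases the left-hand side for $\beta \ge 1$.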
The thesis is then a consequence of this estimate and Theorem \ref{corol: decay components vanishing}, which will be proved in the next section with an independent argument.
\end{proof}
\begin{remark}
If $f_{i,\beta} \not \equiv 0$, then we know that $N(\mf{u}_\beta,x_\beta,r)$ is not necessarily monotone in $r$. However, using Proposition 3.5 in \cite{SoZi}, there exists $C>0$ independent of $\beta$ (for this we use (F1) and \eqref{boundedness}) such that $(N(\mf{u}_\beta,x_\beta,r) +1)\exp\{C r\}$ is non-decreasing in $r$. This allows us to prove Theorem \ref{thm: lower general} in the following way: for $\varepsilon>0$, we first fix $\rho>0$ such that
\[
(N(\mf{u},x_0,\rho)+1)e^{C \rho} \le (D+2\varepsilon+1).
\]
Since $N(\mf{u},x_0,0^+)=D$, this is possible. At least for $\beta$ sufficiently large, this choice implies that $N(\mf{u}_\beta,x_\beta,r) \le D+2\varepsilon$ for any $0<r<\rho$. As a consequence, up to relabelling $\varepsilon$, we can proceed with the proof of Theorem \ref{thm: lower general} without further changes.
\end{remark}
\subsection{Further consequences of the existence of non-segregated blow-up}
In the rest of the section we keep the notation introduced in the proof of Theorem \ref{thm: blow-up}: $\{\mf{u}_\beta\}$ denotes the original sequence, with limit $\mf{u} \not \equiv \mf{0}$ in $\mathcal{G}_{\mathrm{loc}}(\Omega)$, and $\{\mf{v}_\beta\}$ denotes the scaled sequence defined in the quoted statement, which converges in $\mathcal{C}^2_{\mathrm{loc}}(\mathbb{R}^N)$ to a limit $\mf{V}$, solution to \eqref{entire system a_{ij}} with at least two nontrivial components. As reviewed in Section \ref{sec: prel}, a relevant quantity for understanding the properties of $\mf{V}$ is the limit at infinity of the Almgren frequency. Let
\[
d:= \lim_{r \to +\infty} N(\mf{V},0,r), \quad \text{and} \quad D:= \lim_{r \to 0^+} N(\mf{u},x_0,r) = N(\mf{u},x_0,0^+).
\]
\begin{lemma}
In the previous notation, $d \le D$.
\end{lemma}
\begin{proof}
By the convergence of $\{\mf{u}_\beta\}$ and $\{\mf{v}_\beta\}$, together with the monotonicity of the Almgren frequency function (see Proposition \ref{prop: almgren}), we have that for any $r,\rho>0$
\begin{align*}
N(\mf{V},0,r) &= \lim_{\beta \to +\infty} N(\mf{v}_\beta,0,r) = \lim_{\beta \to +\infty} N(\mf{u}_\beta,x_\beta,r_\beta r) \\
&\le \lim_{\beta \to +\infty} N(\mf{u}_\beta, x_\beta, \rho) = N(\mf{u},x_0,\rho)
\end{align*}
(notice that, for any $r,\rho>0$ it results that $r_\beta r \le \rho$ for $\beta$ sufficiently large). Passing to the limit as $r \to +\infty$ and $\rho \to 0^+$, we obtain the desired result.
\end{proof}
Let us point out that the previous result is somewhat sharp: without further technical assumptions, it is not possible to show that $d = D$ for a generic point $x_0$. Actually, it is possible to construct counterexamples with $d < D$: this can be done by considering suitable translations of the original functions $\{\mf{u}_\beta\}$, so that the macroscopic scale and the blow-up scale behave in different ways.
In any case, the previous lemma has two direct consequences:
\begin{proof}[Proof of Corollary \ref{thm: classification limits regular part}]
If $x_0 \in \mathcal{R}$, then by definition $D=1$. Therefore, the thesis is a simple consequence of the uniqueness of solutions of \eqref{entire system a_{ij}} with $N(\mf{V},0,+\infty) \le 1$ and having at least two non-trivial components (see the main results in \cite{Wa,Wa2} for $k=2$, and Theorem 1.3 in \cite{SoTe} for an arbitrary $k \ge 2$).
\end{proof}
\begin{proof}[Proof of Corollary \ref{thm: non-simple blow-up}]
If $x_\beta \in \Sigma_\beta$, then we can show that $0$ is a singular point for the function $\mf{V}$, in the following sense: denoting by $\Gamma_{\mf{V}}$ the interface of $\mf{V}$ and by $\Sigma_{\mf{V}}$ its singular part (see Definitions \ref{def interface} and \ref{def singular interface}), we prove that $\mf{0} \in \Sigma_{\mf{V}}$. Notice that by definition of $\Sigma_\beta$ there are two possibilities: either (up to a subsequence) there exist at least $3$ distinct indices such that
\begin{equation}\label{singular condition 1}
u_{i_1,\beta}(x_\beta) = u_{i_2,\beta}(x_\beta) = u_{i_3,\beta}(x_\beta) \ge u_{j,\beta}(x_\beta) \qquad \forall \beta, \ \forall j,
\end{equation}
or (up to a subsequence) there exist two indices $i_1$ and $i_2$ such that
\begin{equation}\label{singular condition 2}
\begin{cases}
u_{i_1,\beta}(x_\beta) = u_{i_2,\beta}(x_\beta) > u_{j,\beta}(x_\beta) \quad \forall j \neq i_1,i_2,\\
\nabla (u_{i_1,\beta}-u_{i_2,\beta})(x_\beta) = 0
\end{cases} \qquad \forall \beta.
\end{equation}
If \eqref{singular condition 1} is in force, then
\[
v_{i_1,\beta}(0) = v_{i_2,\beta}(0) = v_{i_3,\beta}(0) \ge v_{j,\beta}(0) \qquad \forall \beta, \forall j,
\]
while if \eqref{singular condition 2} holds, then
\[
\begin{cases}
v_{i_1,\beta}(0) = v_{i_2,\beta}(0) > v_{j,\beta}(0) \quad \forall j \neq i_1,i_2,\\
\nabla (v_{i_1,\beta}-v_{i_2,\beta})(0) = 0
\end{cases} \qquad \forall \beta.
\]
Recalling that $\mf{v}_\beta \to \mf{V}$ in $\mathcal{C}^2_{\mathrm{loc}}(\mathbb{R}^N)$, we infer that in any case $0 \in \Sigma_{\mf{V}}$. Now, if $\mf{V}$ is $1$-dimensional, then $\Gamma_{\mf{V}}$ is a hyperplane and $\Sigma_{\mf{V}} = \emptyset$. Therefore, $\mf{V}$ cannot be $1$-dimensional. Since $\mf{V}$ is not $1$-dimensional, by \cite[Theorem 1.3 and Corollary 1.9]{SoTe} we have $3/2 \le d$, and since $d \le D$ we conclude that $x_0 \in \Sigma$.
\end{proof}
We now proceed with the proof of Theorem \ref{cor: improved decay singular sequence}.
\begin{proposition}\label{prop 151}
Under assumption \eqref{nontrivial}, let $\mf{V}$ be the limit profile given by Theorem \ref{thm: blow-up}, and let $d = N(\mf{V},0,+\infty)$. For any $\varepsilon>0$ there exists $C_\varepsilon>0$ such that
\[
\limsup_{\beta \to +\infty} \beta^{(d-\varepsilon)/(2+2d)} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right) \le C_\varepsilon.
\]
\end{proposition}
\begin{proof}
We study the monotonicity of the function
\[
r \mapsto \frac{H(\mf{u}_\beta,x_\beta,r)}{r^{2N(\mf{u}_\beta,x_\beta,r)}} \qquad r \in (0,\bar r],
\]
where we recall that $\bar r>0$ has been chosen so that $B_{2\bar r}(x) \Subset \Omega$ for every $x \in K$. Recalling Proposition \ref{prop: almgren}, we have
\begin{align*}
\frac{d}{dr} \log \frac{H(\mf{u}_\beta,x_\beta,r)}{r^{2N(\mf{u}_\beta,x_\beta,r)}} &= \frac{d}{dr} \log H(\mf{u}_\beta,x_\beta,r) - \frac{d}{dr}\left( 2N(\mf{u}_\beta,x_\beta,r) \log r \right) \\
& = \frac{2N(\mf{u}_\beta,x_\beta,r)}{r} - \frac{2N(\mf{u}_\beta,x_\beta,r)}{r} - 2(\log r) \frac{d}{dr} N(\mf{u}_\beta,x_\beta,r) \\
& \ge 0 \qquad \forall r \in (0,\bar r]
\end{align*}
(here we assumed, without loss of generality, that $\bar r \le 1$, so that $\log r \le 0$, while $\frac{d}{dr} N(\mf{u}_\beta,x_\beta,r) \ge 0$ by Proposition \ref{prop: almgren}).
Therefore, using also the boundedness of $\{\mf{u}_\beta\}$, we infer that
\[
H(\mf{u}_\beta,x_\beta,r) \le \frac{H(\mf{u}_\beta,x_\beta,\bar r)}{\bar r^{\,2N(\mf{u}_\beta,x_\beta,\bar r)}}\, r^{2N(\mf{u}_\beta,x_\beta,r)} \le C r^{2N(\mf{u}_\beta,x_\beta,r)} \qquad \forall r \in (0,\bar r].
\]
Now, since $d= N(\mf{V},0,+\infty)$, for any $\varepsilon>0$ there exists $\rho=\rho(\varepsilon)>1$ sufficiently large such that $N(\mf{V},0,\rho) \ge d-\varepsilon/2$, and hence by convergence $d-\varepsilon \le N(\mf{v}_\beta,0,\rho) \le d$ for $\beta$ sufficiently large. With this choice of $\rho$, we observe that, again for $\beta$ large,
\begin{equation}\label{stima 151}
H(\mf{u}_\beta,x_\beta,\rho r_\beta) \le C (\rho r_\beta)^{2N(\mf{u}_\beta,x_\beta, \rho r_\beta)},
\end{equation}
since $r_\beta \to 0$ as $\beta \to +\infty$, and hence $\rho r_\beta \le \bar r$ for $\beta$ large.
The left hand side can be controlled from below as follows:
\begin{equation}\label{stima basso 151}
\left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^2 \le C H(\mf{u}_\beta,x_\beta,r_\beta) \le C H(\mf{u}_\beta,x_\beta,r_\beta \rho),
\end{equation}
where we used Lemma \ref{lem: H con m} and the monotonicity of $H$ (recall that $\rho>1$). To control the right hand side in \eqref{stima 151}, we recall \eqref{scaled quantities} and that $N(\mf{v}_\beta,0,\rho) \le d$ for every $\beta$ large, so that
\[
C (\rho r_\beta)^{2N(\mf{u}_\beta,x_\beta, \rho r_\beta)} \le C \rho^{2d} r_\beta^{2N(\mf{v}_\beta,0, \rho)} \le C_\varepsilon r_\beta^{2N(\mf{v}_\beta,0, \rho)},
\]
where the dependence of $C$ on $\varepsilon$ comes from the dependence $\rho=\rho(\varepsilon)$. By definition of $r_\beta$ (see Lemma \ref{lem: choice of r}), the previous estimate implies that
\begin{equation}\label{stima alto 151}
\begin{split}
C (\rho r_\beta)^{2N(\mf{u}_\beta,x_\beta, \rho r_\beta)} &\le \frac{C_\varepsilon}{ \beta^{N(\mf{v}_\beta,0, \rho)} H(\mf{u}_\beta,x_\beta,r_\beta)^{N(\mf{v}_\beta,0, \rho) } } \\
& \le \frac{C_\varepsilon}{ \beta^{N(\mf{v}_\beta,0, \rho)} \left( \sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^{2N(\mf{v}_\beta,0, \rho)} },
\end{split}
\end{equation}
where in the last step we used Lemma \ref{lem: H con m}. Collecting \eqref{stima basso 151} and \eqref{stima alto 151}, and coming back to \eqref{stima 151}, we conclude that
\[
\beta^{d-\varepsilon} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^{2+2d} \le \beta^{N(\mf{v}_\beta,0, \rho)} \left(\sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^{2+ 2 N(\mf{v}_\beta,0, \rho)} \le C_\varepsilon
\]
for some $C_\varepsilon>0$ independent of $\beta$.
\end{proof}
As a consequence:
\begin{proof}[Proof of Theorem \ref{cor: improved decay singular sequence}]
Let $\mf{V}$ be the limit profile defined in Theorem \ref{thm: blow-up}. By Corollary \ref{thm: non-simple blow-up} it is not $1$-dimensional, and hence by \cite[Theorem 1.3 and Corollary 1.9]{SoTe} it results $3/2 \le d$. Since the function $d \mapsto (d-\varepsilon)/(2+2d)$ is strictly increasing for $d \ge 1$, this together with Proposition \ref{prop 151} gives the desired result.
\end{proof}
\begin{remark}
As in the previous subsections, we point out that replacing Proposition \ref{prop: almgren} with Proposition 3.5 in \cite{SoZi} and refining the computations, it is not difficult to extend the above proofs in case $f_{i,\beta} \not \equiv 0$.
\end{remark}
\subsection{General decay estimate around singular points}
In this subsection we prove Theorem \ref{thm: decay higher multiplicity}. Let us fix $x_0 \in \Sigma$, so that by definition $D:= N(\mf{u},x_0,0^+)>1$, and let $0<\varepsilon<D-1$ be arbitrarily chosen. Using the notation introduced in Theorem \ref{thm: blow-up}, let $d:= N(\mf{V},0,+\infty)$. If $d>1$, then we can proceed as in the proof of Theorem \ref{cor: improved decay singular sequence}, whose thesis is in fact stronger than the one considered here. Thus, we only have to examine the case $d=1$: we recall that this means that $\mf{V}$ is the unique $1$-dimensional solution of \eqref{entire system a_{ij}} \emph{having linear growth}. Let us introduce
\[
\begin{split}
R_\beta &:= \inf\left\{r>0: N(\mf{u}_\beta,x_\beta,r) >D-\varepsilon \right\} \\
\rho_\beta& := \inf\left\{r>0: N(\mf{u}_\beta,x_\beta,r) >1 \right\}.
\end{split}
\]
Let $\bar r>0$ be such that $B_{2 \bar r}(x) \Subset \Omega$ for any $x \in K$. Recall now that $N(\mf{u}_\beta,x_\beta,\cdot)$ is non-decreasing. Thus, observing that for any $r \in (0,\bar r]$ one has $N(\mf{u}_\beta,x_\beta,r) \to N(\mf{u},x_0,r) \ge D$ as $\beta \to +\infty$, while $N(\mf{u}_\beta,x_\beta,0^+)= 0$ for any $\beta$ fixed, we deduce that $\rho_\beta$ and $R_\beta$ are positive real numbers, and $0<\rho_\beta<R_\beta \to 0^+$.
With the notation of Theorem \ref{thm: blow-up}, let
\[
\begin{split}
\bar R_\beta &:= \inf\left\{r>0: N(\mf{v}_\beta,0,r) >D-\varepsilon \right\} \\
\bar \rho_\beta& := \inf\left\{r>0: N(\mf{v}_\beta,0,r) >1 \right\}.
\end{split}
\]
Notice that, by definition and \eqref{scaled quantities}, one has
\begin{equation}\label{rapporti scales}
\bar R_\beta = \frac{R_\beta}{r_\beta} \quad \text{and} \quad \bar \rho_\beta = \frac{\rho_\beta}{r_\beta}.
\end{equation}
Moreover, recall that $\mf{v}_\beta$ is defined in a domain containing the ball $B_{\bar r/r_\beta}$.
Having introduced $\bar \rho_\beta$ and $\bar R_\beta$, we can now borrow some ideas from the proof of Theorem 1.3 in \cite{SoZi}, see Section 4 therein. We shall carry some information through the different scales $1<\bar\rho_\beta<\bar R_\beta<\bar r/r_\beta$. In doing so, we shall use three different monotonicity formulae, one for each of the intervals $(1,\bar\rho_\beta)$, $(\bar\rho_\beta,\bar R_\beta)$, $(\bar R_\beta, \bar r/r_\beta)$, whose validity rests essentially on the corresponding estimate on $N(\mf{v}_\beta,0,\cdot)$.
\begin{lemma}
It results that $\bar \rho_\beta, \bar R_\beta \to +\infty$ as $\beta \to +\infty$.
\end{lemma}
\begin{proof}
Since by definition $\bar \rho_\beta \le \bar R_\beta$, it is sufficient to check that $\bar \rho_\beta \to +\infty$. This is a simple consequence of the convergence $\mf{v}_\beta \to \mf{V}$ and of the fact that $N(\mf{V},0,r) \le 1$ for every $r>0$. As observed in Remark \ref{rem: homogeneity non-segregated}, since $\mf{V}$ solves \eqref{entire system a_{ij}}, this implies that $N(\mf{V},0,r) <1$ for every $r >0$. Therefore, if by contradiction we suppose that $\{\bar \rho_\beta\}$ is bounded, then up to a subsequence $\bar \rho_\beta \to \bar \rho$, and hence
\[
N(\mf{V},0,\bar \rho) = \lim_{\beta \to +\infty} N(\mf{v}_\beta,0,\bar \rho_\beta) =1,
\]
a contradiction.
\end{proof}
By definition and by Proposition \ref{prop: monot e+h}, in the intervals $(\bar \rho_\beta,\bar R_\beta)$ and $(\bar R_\beta, \bar r/r_\beta)$ we have two powerful monotonicity formulae. In $(1,\bar \rho_\beta)$ we do not have any estimate from below on the Almgren frequency, and hence we shall use a perturbed Alt-Caffarelli-Friedman monotonicity formula. To this end, we recall again that $\mf{v}_\beta \to \mf{V}$ in $\mathcal{C}^2_{\mathrm{loc}}(\mathbb{R}^N)$, and that $\mf{V}$ is the unique $1$-dimensional solution of \eqref{entire system a_{ij}}, which has exactly two nontrivial components. Up to a relabelling, it is not restrictive to suppose that $V_1,V_2 \not \equiv 0$, so that we are naturally led to consider $J_\beta(r) :=r^{-4} J_{1,\beta}(r) \cdot J_{2,\beta}(r)$, where
\begin{align*}
J_{1,\beta}(r) &:= \int_{B_r} \left(\left|\nabla v_{1,\beta}\right|^2 + a_{12} v_{1,\beta}^2 v_{2,\beta}^2 \right)|x|^{2-N} \\
J_{2,\beta}(r)& := \int_{B_r} \left(\left|\nabla v_{2,\beta}\right|^2 + a_{12} v_{1,\beta}^2 v_{2,\beta}^2\right)|x|^{2-N}.
\end{align*}
The validity of the following monotonicity formula will be the key in our concluding argument.
\begin{lemma}\label{lem: ACF uniform}
There exists $C>0$ independent of $\beta$ such that $J_{1,\beta}(r) \ge C$ and $J_{2,\beta}(r) \ge C$ for every $r \in [1,\bar \rho_\beta/3]$, and
\[
r \mapsto J_{\beta}(r) e^{-C r^{-1/2}} \quad \text{is non-decreasing for $r \in [1,\bar \rho_\beta/3]$}.
\]
\end{lemma}
The proof consists in checking that the assumptions of Proposition \ref{prop: ACF} are satisfied by $(v_{1,\beta},v_{2,\beta})$ uniformly in $\beta$: that is, one has to show that there exist $\lambda,\mu>0$ such that
\[
\frac{1}{\lambda} \le \frac{ \int_{\partial B_r} v_{1,\beta}^2 }{\int_{\partial B_r} v_{2,\beta}^2} \le \lambda \quad \text{and} \quad \frac{1}{r^{N-1}} \int_{\partial B_r} v_{1,\beta}^2 \ge \mu
\]
for every $r \in [1,\bar \rho_\beta/3]$, for every $\beta$. This can be done arguing exactly as in the proof of Lemma 4.9 in \cite{SoZi} (actually the proof is easier in the present setting, since we neglected the nonlinearities $f_{i,\beta}$), see Section 4.1 in the quoted paper, and thus we omit the details. We emphasize that there we only used the fact that $(v_{1,\beta},v_{2,\beta}) \to (V_1,V_2)$ locally uniformly and in $H^1_{\mathrm{loc}}(\mathbb{R}^N)$, with $V_1,V_2 \not \equiv 0$, and the control $N(\mf{v}_\beta,0,r) \le 1$ for $r \le \bar \rho_\beta/3$. Both these properties are satisfied in the present setting.
With Lemma \ref{lem: ACF uniform} in hand, we can proceed with the:
\begin{proof}[Conclusion of the proof of Theorem \ref{thm: decay higher multiplicity}]
By Lemma \ref{lem: ACF uniform}, and since $(V_1,V_2)$ are two positive non-constant functions, for some $C>0$ we have
\begin{equation}\label{chain 1}
C \le J_\beta(1) e^{-C} \le J_\beta\left(\frac{\bar \rho_\beta}{3} \right)e^{-C \bar \rho_\beta^{-1/2}} \le C J_\beta\left(\bar \rho_\beta \right).
\end{equation}
We claim that
\[
J_\beta\left(\bar \rho_\beta \right) \le \left( \frac{E(\mf{v}_\beta,0,\bar \rho_\beta) + \frac{N-2}{2} H(\mf{v}_\beta,0,\bar \rho_\beta)}{\bar{\rho}_\beta^2}\right)^2.
\]
To prove it, we firstly test the equation for $v_{1,\beta}$ with $v_{1,\beta} |x|^{2-N}$ in $B_r$; integrating by parts twice we obtain
\begin{align*}
J_{1,\beta}(r) & = -\frac12 \int_{B_r} \nabla(v_{1,\beta}^2)\cdot \nabla (|x|^{2-N}) + \frac{1}{r^{N-2}}\int_{\partial B_r} v_{1,\beta} \partial_{\nu} v_{1,\beta} \\
& \le \frac{1}{r^{N-2}}\int_{\partial B_r} v_{1,\beta} \partial_{\nu} v_{1,\beta} +\frac{N-2}{2r^{N-1}} \int_{\partial B_r} v_{1,\beta}^2.
\end{align*}
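In the last inequality we used the following elementary facts (a short justification, for the reader's convenience): for $N \ge 3$ the function $|x|^{2-N}$ is superharmonic, i.e. $\Delta |x|^{2-N} \le 0$ in the sense of distributions, and $\partial_\nu |x|^{2-N} = -(N-2)r^{1-N}$ on $\partial B_r$, so that
\[
-\frac12 \int_{B_r} \nabla(v_{1,\beta}^2)\cdot \nabla (|x|^{2-N}) = \frac12 \int_{B_r} v_{1,\beta}^2\, \Delta(|x|^{2-N}) - \frac12 \int_{\partial B_r} v_{1,\beta}^2\, \partial_\nu(|x|^{2-N}) \le \frac{N-2}{2r^{N-1}} \int_{\partial B_r} v_{1,\beta}^2
\]
(the case $N=2$, where the weight is constant, is analogous and simpler).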
Now the divergence theorem yields
\begin{equation}\label{da J a E}
\begin{split}
J_{1,\beta}(r) & \le \frac{1}{r^{N-2}} \int_{B_r} |\nabla v_{1,\beta}|^2 + v_{1,\beta}^2 \sum_{j \neq 1} a_{1j} v_{j,\beta}^2 + \frac{N-2}{2r^{N-1}} \int_{\partial B_r} v_{1,\beta}^2 \\
& \le E(\mf{v}_\beta,0,r) + \frac{N-2}{2}H(\mf{v}_\beta,0,r).
\end{split}
\end{equation}
If we choose $r=\bar \rho_\beta$ and we use the same argument on $J_{2,\beta}$, the claim follows.
Thus, coming back to \eqref{chain 1} we have
\[
C \le J_\beta(\bar \rho_\beta) \le C \left( \frac{E(\mf{v}_\beta,0,\bar \rho_\beta) + H(\mf{v}_\beta,0,\bar \rho_\beta)}{ \bar \rho_\beta^2} \right)^2,
\]
and to the last term we can apply the monotonicity formula of Proposition \ref{prop: monot e+h}, available first in the interval $(\bar \rho_\beta,\bar R_\beta)$ with $\gamma=1$, and then in $(\bar R_\beta, \bar r/r_\beta)$ with $\gamma= D-\varepsilon$: recalling \eqref{scaled quantities} and Lemma \ref{lem: N bounded}, this gives
\begin{align*}
C & \le \left( \frac{E(\mf{v}_\beta,0,\bar \rho_\beta) + H(\mf{v}_\beta,0,\bar \rho_\beta)}{\bar \rho_\beta^2} \right)^2 \\
& \le \left( \frac{E(\mf{v}_\beta,0,\bar R_\beta) + H(\mf{v}_\beta,0,\bar R_\beta)}{\bar R_\beta^{2(D-\varepsilon)}} \right)^2 \cdot \bar R_\beta^{4(D-\varepsilon-1)} \\
& \le \left( \frac{E(\mf{v}_\beta,0,\bar r/r_\beta) + H(\mf{v}_\beta,0,\bar r/r_\beta)}{\bar r^{2(D-\varepsilon)}} r_\beta^{2(D-\varepsilon)} \right)^2 \cdot \bar R_\beta^{4(D-\varepsilon-1)} \\
& = \left( \frac{E(\mf{u}_\beta,x_\beta,\bar r) + H(\mf{u}_\beta,x_\beta,\bar r)}{\bar r^{2(D-\varepsilon)}} \right)^2 \frac{r_\beta^{4(D-\varepsilon)}}{H(\mf{u}_\beta,x_\beta,r_\beta)^2} \cdot \left( \frac{R_\beta}{r_\beta} \right)^{4(D-\varepsilon-1)} \\
& \le C \frac{r_\beta^4 R_\beta^{4(D-\varepsilon-1)} }{H(\mf{u}_\beta,x_\beta,r_\beta)^2},
\end{align*}
whence $H(\mf{u}_\beta,x_\beta,r_\beta)^2 \le C r_\beta^4 R_\beta^{4(D-\varepsilon-1)}$. Finally, using also Lemmas \ref{lem: choice of r} and \ref{lem: H con m}, we deduce that
\begin{align*}
\beta^2\left( \sum_{i=1}^k u_{i,\beta}(x_\beta) \right)^8 & \le C \left(\beta H(\mf{u}_\beta,x_\beta,r_\beta)\right)^2 H(\mf{u}_\beta,x_\beta,r_\beta)^2 \le C\cdot \frac{1}{r_\beta^4} \cdot r_\beta^4 R_\beta^{4(D-\varepsilon-1)} = C R_\beta^{4(D-\varepsilon-1)},
\end{align*}
and since $\varepsilon < D-1$ and $R_\beta \to 0$, the last term vanishes as $\beta \to +\infty$, which is the desired result.
\end{proof}
\begin{remark}
As already pointed out, in order to prove Theorem \ref{thm: decay higher multiplicity} in the presence of $f_{i,\beta} \not \equiv 0$ it is possible to combine the techniques used here with the almost-monotonicity formulae introduced in \cite{SoZi} (see Theorem 3.14 and Lemma 4.7 therein).
\end{remark}
\section{Uniform regularity of the interfaces and decay estimates III}\label{sec: Reif}
The aim of this section is to study the uniform regularity of the interfaces $\Gamma_\beta$, and then to prove, as a corollary, Theorem \ref{corol: decay components vanishing}.
Before proceeding, we make some remarks about Definition \ref{def: regular interface}, where we introduced $\mathcal{R}_\beta(\rho)$. First, since for $\beta$ finite the functions $\mf{u}_\beta$ are smooth, the function $(x, \rho) \mapsto N_\beta(\mf{u}_\beta, x, \rho)$ is continuous; in particular, for any $\rho>0$, $\mathcal{R}_{\beta}(\rho)$ is a relatively open subset of $\Gamma_{\beta}$. In Definition \ref{def: regular interface}, in light of the dichotomy $N(\mf{u},x,0^+) = 1$ or $N(\mf{u},x,0^+) \ge 3/2$ (see Proposition \ref{prop: monot segregated}), we could replace $1/4$ with any positive number strictly less than $1/2$ without affecting the rest of the section. We also observe that, thanks to the monotonicity of the Almgren quotient, for a fixed $\mf{u}_\beta$ the proposed decomposition enjoys the following monotonicity property:
\[
\forall \rho_1, \rho_2, \; 0 < \rho_1 < \rho_2 \implies \mathcal{R}_{\beta}(\rho_1) \supset \mathcal{R}_{\beta}(\rho_2).
\]
The stratification induced by the previous construction on the free boundary $\Gamma_{\beta}$ may seem to be useless: indeed we have
\[
\Gamma_{\beta} = \bigcup_{\rho > 0} \mathcal{R}_{\beta}(\rho).
\]
This is due to the fact that the maximum principle implies that all the components of $\mf{u}_\beta$ are strictly positive in $\Omega$, and thus for any $x \in \Omega$ we can easily prove that $N_\beta(\mf{u}_\beta, x, 0^+) = 0$; see the short computation after this paragraph. Nonetheless, Lemma \ref{lem: basic prop regular part} below can be used to acquire the geometric intuition behind the definition.
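For the reader's convenience, here is the elementary computation behind the last claim (with constants depending on the fixed $\beta$): since $\mf{u}_\beta$ is smooth and $\mf{u}_\beta(x) \neq \mf{0}$, as $r \to 0^+$ we have
\[
H(\mf{u}_\beta,x,r) \,\to\, \mathcal{H}^{N-1}(\mathbb{S}^{N-1}) \sum_{i=1}^k u_{i,\beta}^2(x) > 0, \qquad E(\mf{u}_\beta,x,r) \le \frac{C\, r^N}{r^{N-2}} = C r^2 \to 0,
\]
whence $N_\beta(\mf{u}_\beta,x,0^+) = 0$.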
\begin{lemma}\label{lem: basic prop regular part}
Let us assume that $x_\beta \in \Gamma_{\beta}$, for every $\beta$.
\begin{itemize}
\item If there exists $x_0 \in \mathcal{R}$ such that $x_\beta \to x_0$, then there exist $\rho > 0$ and $\bar \beta >0$ such that
\[
x_\beta \in \mathcal{R}_{\beta}(\rho) \qquad \forall \beta > \bar \beta.
\]
\item If there exists $x_0 \in \Sigma$ such that $x_\beta \to x_0$, then for every $\rho > 0$ there exists $\bar \beta >0$ such that
\[
x_\beta \not \in \mathcal{R}_{\beta}(\rho) \qquad \forall \beta > \bar \beta.
\]
In particular, for any compact $K \Subset \Omega$ and $\rho>0$ there exists $s>0$ independent of $\beta$ such that
\[
B_s(x) \cap \Sigma = \emptyset \quad \text{for every $x \in \mathcal{R}_\beta(\rho)$, for every $\beta$}.
\]
\end{itemize}
\end{lemma}
\begin{proof}
We show only the first conclusion, since the second one is similar. Let $x_0 \in \mathcal{R}$; then
\[
\begin{split}
N(\mf{u}, x_0,0^+) = 1
&\quad\implies N(\mf{u}, x_0, \rho) < 1 + \frac{1}{2 \cdot 4} \qquad \text{for some small $\rho$}\\
&\quad\implies N(\mf{u}_\beta, x_\beta, \rho) < 1 + \frac{1}{4}
\end{split}
\]
for sufficiently large $\beta$, by the $\mathcal{C}^0_{\mathrm{loc}}(\Omega)$ and the strong $H^1_{\mathrm{loc}}(\Omega)$ convergence of $\mf{u}_\beta$ to $\mf{u}$.
\end{proof}
We now investigate the uniform regularity of the sets $\mathcal{R}_{\beta}(\rho) \cap K$, proving Theorem \ref{thm: reifenberg flat uniform}. Recall that $K$ is an arbitrary compact set in $\Omega$. In order to establish that the sets $\mathcal{R}_{\beta}(\rho) \cap K$ enjoy what we defined as the \emph{uniform vanishing Reifenberg flatness condition}, we proceed in two steps. First of all, we show it under a smallness assumption on the radius.
\begin{lemma}\label{lem: Reif small}
Let $K \Subset \Omega$ be a compact set, $\rho > 0$ and $C>0$. For $\beta$ sufficiently large, for any $\delta > 0$, $x_\beta \in \mathcal{R}_{\beta}(\rho) \cap K$ and $0 < r < C r_\beta(x_\beta)$ there exists a hyperplane $H_{x_\beta,r} \subset \mathbb{R}^N$ containing $x_\beta$ such that
\[
{\rm dist}_{\mathcal{H}}(\mathcal{R}_{\beta}(\rho) \cap B_r(x_\beta), H_{x_\beta,r} \cap B_r(x_\beta)) \leq \delta r.
\]
\end{lemma}
In the statement of Theorem \ref{thm: reifenberg flat uniform} we required $R$ to be independent of $\beta$, whence the uniformity of the vanishing Reifenberg flatness of the ``regular part" of the interfaces. Here, instead, we prove a preliminary result in the case $R = C r_\beta$.
For future convenience, we recall that the notation $B_r$ is used for balls centered at $0$.
\begin{proof}
By contradiction, we suppose that there exist $\bar \delta >0$, $x_\beta \in \mathcal{R}_\beta(\rho) \cap K$ and $0<r_\beta'<C r_\beta(x_\beta)$ such that
\[
\inf_H {\rm dist}_{\mathcal{H}}(\mathcal{R}_\beta(\rho) \cap B_{r_\beta'}(x_\beta) , H \cap B_{r_\beta'}(x_\beta)) \ge \bar \delta r_\beta' \qquad \forall \beta,
\]
where the infimum is taken over all the hyperplanes passing through $x_\beta$. Since the notion of Reifenberg flatness is invariant under translations and scalings, the previous condition is equivalent to
\begin{equation}\label{contr Reif step 1}
\inf_H {\rm dist}_{\mathcal{H}}\left(\mathcal{R}_\beta^{(S)}(\rho) \cap B_{r_\beta'/r_\beta(x_\beta)} , H \cap B_{r_\beta'/r_\beta(x_\beta)} \right) \ge \bar \delta \frac{r_\beta'}{r_\beta(x_\beta)}
\end{equation}
for every $\beta$, where $\mathcal{R}_\beta^{(S)}(\rho)$ is obtained by $\mathcal{R}_\beta(\rho)$ after the change of variable $x = x_\beta + r_\beta y$, and now the infimum is taken over the hyperplanes through the origin.
The contradiction will be achieved by proving that the sets $\mathcal{R}_\beta^{(S)}(\rho)$ are uniformly Reifenberg flat around $0$ up to the scale $C$, in the sense that for any $\delta>0$ and $0<r<C$ we have
\begin{equation}\label{4251}
\inf_H {\rm dist}_{\mathcal{H}}(\mathcal{R}_\beta^{(S)}(\rho) \cap B_r , H \cap B_{r} ) \le \delta r \qquad \forall \beta.
\end{equation}
Since $r_\beta'/r_\beta(x_\beta) \le C$, this contradicts \eqref{contr Reif step 1} and completes the proof. To prove \eqref{4251}, we introduce as usual the sequence
\[
\mf{v}_\beta(x):= \frac{\mf{u}_\beta(x_\beta + r_\beta x)}{H(\mf{u}_\beta,x_\beta,r_\beta)^{1/2}}.
\]
Since $x_\beta \in \mathcal{R}_\beta(\rho) \cap K$, up to a subsequence $x_\beta \to x_0$. By Proposition \ref{prop: interfaces are good approximation} we have $x_0 \in \Gamma = \{\mf{u}=\mf{0}\}$, and by Lemma \ref{lem: basic prop regular part} it follows that $x_0 \in \mathcal{R}$, the regular part of $\Gamma$. As a consequence, Corollary \ref{thm: classification limits regular part} establishes that $\mf{v}_\beta \to \mf{V}$ in $\mathcal{C}^2_{\mathrm{loc}}(\mathbb{R}^N)$, where $\mf{V}$ is a $1$-dimensional solution of \eqref{entire system a_{ij}}. Up to a rotation and a relabelling, we can suppose that $\{V_1= V_2\} = \{x_N=0\}$ and $V_1, V_2$ are the only nontrivial components of $\mf{V}$. By $\mathcal{C}^2_{\mathrm{loc}}$ convergence, this implies that:
\begin{itemize}
\item $\mathcal{R}_\beta^{(S)}(\rho) \cap B_C = \{v_{1,\beta} -v_{2,\beta}=0\} \cap B_C $;
\item there exists $C_1>0$ such that $|\partial_{x_N} (v_{1,\beta}-v_{2,\beta})| > C_1>0$ in $B_C$, for every $\beta$;
\item for every $\delta>0$ there exists $\bar \beta>0$ such that $|\partial_{x_i} (v_{1,\beta}-v_{2,\beta})| < \delta/(C_1(N-1))$ in $B_C$ provided $\beta >\bar \beta$.
\end{itemize}
Therefore, for $\beta>\bar \beta$ we can apply the implicit function theorem: there exists a $\mathcal{C}^1$ function $f_\beta$, defined on the projection $U_\beta$ of $\mathcal{R}_\beta^{(S)}(\rho) \cap B_C $ into $\mathbb{R}^{N-1}$, such that $\mathcal{R}_\beta^{(S)}(\rho) \cap B_C = \{x_N= f_\beta(x')\}$. Moreover, $f_\beta(0) = 0$ (since $0 \in \mathcal{R}_\beta^{(S)}(\rho) \cap B_C $) and $|\nabla' f_\beta| \le \delta$ in $U_\beta$. As a result, choosing $\bar H= \{x_N=0\}$, and denoting by $U_\beta^r$ the set $U_\beta \cap \{|x'|<r\}$, we have
\begin{align*}
{\rm dist}_{\mathcal{H}}(\mathcal{R}_{\beta}^{(S)}(\rho) \cap B_r , \bar H \cap B_r ) & = \sup_{\mathcal{R}_{\beta}^{(S)}(\rho) \cap B_r } |x_N| \le \sup_{U_\beta^r} |f_\beta| \\
& \le \sup_{U_\beta^r} | \nabla' f_\beta| |x'| \leq \delta r,
\end{align*}
which gives the desired contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm: reifenberg flat uniform}]
We now conclude the proof of the uniform vanishing Reifenberg flatness of the sets $\mathcal{R}_\beta(\rho)$.
By contradiction again, let us assume that there exist $\bar \delta >0$ and sequences $\beta_n \to +\infty$, $x_{n} \in \mathcal{R}_{\beta_n}(\rho) \cap K$, $r_{n} \to 0^+$ such that
\begin{equation}\label{eqn: failure reif}
{\rm dist}_{\mathcal{H}}(\mathcal{R}_{\beta_n}(\rho) \cap B_{r_n}(x_{n}), H \cap B_{r_n}(x_{n})) \geq \bar \delta r_n
\end{equation}
for every hyperplane $H$ passing through $x_n$. We start with the simple observation that, thanks to Lemma \ref{lem: Reif small}, there cannot exist a constant $C >0$ such that $r_n < C r_{\beta_n}(x_{n})$ along a subsequence: in other words, it must be that
\begin{equation}\label{eqn: rn to infinity}
\liminf_{n \to \infty} \frac{r_n}{r_{\beta_n}(x_{n})} = +\infty.
\end{equation}
Now we introduce the scaled functions
\[
\mf{w}_n(x) = \frac{1}{\sqrt{H(\mf{u}_{\beta_n}, x_n, r_n)}} \mf{u}_{\beta_n}(x_n + r_n x).
\]
The equation for $\mf{w}_n$ is
\[
\Delta w_{i,n} = r_n^2 H(\mf{u}_{\beta_n},x_n,r_n) \beta_n w_{i,n} \sum_{j \neq i} a_{ij} w_{j,n}^2,
\]
and by \eqref{eqn: rn to infinity} and the choice of $r_{\beta_n}(x_n)$, Lemma \ref{lem: choice of r}, the interaction parameter is
\[
r_n^2 H(\mf{u}_{\beta_n},x_n,r_n) \beta_n = r_{\beta_n}^2 H(\mf{u}_{\beta_n},x_n,r_n) \beta_n \cdot \left(\frac{r_n}{r_{\beta_n}(x_n)}\right)^2 \to +\infty.
\]
Moreover, for any $R>1$ and $0<r<R$
\[
N(\mf{w}_n,0,r) \le N(\mf{w}_n,0,R) = N(\mf{u}_{\beta_n},x_n,r_n R) \le N(\mf{u}_{\beta_n},x_n,\rho) \le \frac{5}{4}
\]
provided $n$ is sufficiently large, which implies
\[
\frac{d}{dr} \log H(\mf{w}_n,0,r) \le \frac{5}{2r} \quad \Longrightarrow \quad H(\mf{w}_n,0,R) = \frac{H(\mf{w}_n,0,R)}{H(\mf{w}_n,0,1)} \le R^{5/2}.
\]
In turn, by subharmonicity, and since $R$ has been arbitrarily chosen, this ensures that $\{\mf{w}_n\}$ is locally bounded in $L^\infty$, and applying as usual \cite{SoTaTeZi} (see also \cite{NoTaTeVe,tt,Wa}) we finally infer that $\mf{w}_n \to \mf{W} \in \mathcal{G}_{\mathrm{loc}}(\mathbb{R}^N)$, locally uniformly and in $H^1_{\mathrm{loc}}(\mathbb{R}^N)$. We recall that the main properties of the class $\mathcal{G}$ have been reviewed in Section \ref{sec: prel}, and we point out that $\mf{W} \not \equiv \mf{0}$ since the $L^2$-norm of $\mf{W}$ on the unit sphere is normalized to $1$. Directly from the convergence we deduce that $N(\mf{W},0,r) \le 5/4$ for every $r>0$. Actually a stronger estimate holds, since for any $r,\tilde r>0$ we have
\begin{align*}
N(\mf{W},0,r) &= \lim_{n \to \infty} N(\mf{w}_n,0,r) = \lim_{n \to \infty} N(\mf{u}_{\beta_n},x_n,r_n r)\\
& \le \lim_{n \to \infty} N(\mf{u}_{\beta_n},x_n,\tilde r) = N(\mf{u},x_0,\tilde r),
\end{align*}
where we used the compactness of $K$ to infer that $x_n \to x_0$. Notice that, by Lemma \ref{lem: basic prop regular part}, $x_0 \in \mathcal{R}$. Therefore, since $r$ and $\tilde r$ in the previous estimate are arbitrarily chosen, we can pass to the limit as $r \to +\infty$ and $\tilde r \to 0^+$, deducing that $N(\mf{W},0,+\infty) \le 1$. Using also the monotonicity of the Almgren quotient and the lower bound on $N(\mf{W},0,0^+)$ (see Proposition \ref{prop: monot segregated}), we conclude that
\[
1 \le N(\mf{W},0,0^+) \le N(\mf{W},0,+\infty) \leq 1 \quad \Longrightarrow \quad N(\mf{W},0,r) = 1 \quad \forall r.
\]
As a consequence, up to a rotation and a relabelling $\mf{W} = \alpha(x_N^+, x_N^-, 0, \dots, 0)$ for some positive $\alpha$, and in particular $\{\mf{W}=\mf{0}\} = \{x_N = 0\}$.
To complete the proof, we observe that scaling \eqref{eqn: failure reif} we have
\[
{\rm dist}_{\mathcal{H}}(\mathcal{R}^{(S)}_{\beta_n}(\rho) \cap B_{1} , H \cap B_{1} ) \geq \bar \delta \quad \text{for every hyperplane $H$ passing through $0$},
\]
for every $n$. On the other hand, by the uniform convergence $\mf{w}_n \to \mf{W}$ it is not difficult to check that
\begin{equation}\label{4821}
{\rm dist}_{\mathcal{H}}(\mathcal{R}^{(S)}_{\beta_n}(\rho) \cap B_{1} , \{x_N=0\} \cap B_{1} ) \to 0 \qquad \text{as $n \to +\infty$},
\end{equation}
which gives the sought contradiction (concerning the detailed verification of \eqref{4821}, we refer the interested reader to the proof of Lemma 5.3 in \cite{tt}, where the authors deal with a similar context).
\end{proof}
An important consequence of the Reifenberg flatness of the free boundary is given by a local separation property. We say that a set $\omega \subset \Omega$ \emph{separates $\Omega$ in a neighbourhood of $x \in \omega$} if there exists $r>0$ such that $B_r(x) \subset \Omega$ and $B_r(x) \setminus \omega$ consists of two connected components. As we shall see, the interface $\Gamma_\beta$ enjoys this important property in a neighbourhood of any point $x \in \mathcal{R}_\beta(\rho)$, with separation radius uniform in $x$. Consequently, we have that:
\begin{itemize}
\item in an $R$-neighbourhood of $\mathcal{R}_\beta(\rho) \cap K$ (with $R$ independent of $\beta$), the interface $\Gamma_\beta$ never self-intersects;
\item in an $R$-neighbourhood of $\mathcal{R}_\beta(\rho) \cap K$ (with $R$ independent of $\beta$), two densities dominate the others.
\end{itemize}
\begin{proposition}\label{prp: local sep}
Let $K \Subset \Omega$ be a compact set and let $\rho > 0$. There exists $R > 0$ such that $B_R(x_\beta) \setminus \Gamma_\beta$ has exactly two connected components for every $x_\beta \in \mathcal{R}_\beta(\rho) \cap K$.
\end{proposition}
The proof of this result is very similar to the one given in the limit setting by Tavares and Terracini in \cite{tt}, which was in turn based on \cite[Theorem 4.1]{HongWang}. Thus, we only sketch it.
\begin{proof}
The fundamental observation here is that the family $\mathcal{R}_\beta(\rho)$ consists of sets which enjoy the uniform vanishing Reifenberg flatness property: as a consequence, if one proves that the local separation property holds for one of them, and the proof is based only on uniform-in-$\beta$ assumptions, the general case follows immediately.
Let $\rho > 0$ be fixed; we consider a small $\delta$-flatness parameter ($\delta < 1/6$, for instance, is sufficient), and let $R' = R'(\delta)$ be the uniform-in-$\beta$ radius for which the $(\delta,R')$-Reifenberg flatness condition holds for each set $\mathcal{R}_\beta(\rho)$. Let also $s>0$ be given by Lemma \ref{lem: basic prop regular part}. We define $R:= \min\{s/2,R'/2\}$, and we show that this is a local separation radius for every $x \in \mathcal{R}_\beta(\rho)$, for every $\beta$. To this aim, we can replicate almost word by word the proof of \cite[Proposition 5.4]{tt}.
In particular, since $B_R(x) \cap \mathcal{R}_\beta(\rho)$ is $(\delta,R)$-Reifenberg flat and is detached from $\Sigma$, the set $B_R(x) \cap \mathcal{R}_\beta(\rho)$ is trapped between two parallel hyperplanes at distance $2\delta$, and the complementary region is given by two open and disjoint subsets of $B_R(x)$. We now consider inductively the radii $R/2^k$, $k \geq 1$, balls $B_{R/2^k}(y)$ centered at points $y \in B_R(x) \cap \mathcal{R}_\beta(\rho)$, and the new connected components generated by the respective trapping hyperplanes. Thanks to the fact that $\delta$ is small, it is possible to show that each of these pairs of new components intersects one and only one of the connected components of the previous step. Joining all the corresponding sets, we find two new connected components of $B_R(x)$ that are at distance $\delta/2^{k-1}$, and the set $B_R(x) \cap \mathcal{R}_\beta(\rho)$ is again trapped between the two. Iterating this process, we conclude the proof.
\end{proof}
Using the properties so far shown for $\mathcal{R}_\beta(\rho)$, we can better describe the behaviour of the functions near the interface set.
\begin{proposition}\label{clean up regular}
Let $K \Subset \Omega$ be a compact set, $\rho > 0$, and let $R > 0$ be the separation radius of Proposition \ref{prp: local sep}, independent of $\beta$. For any $x \in \mathcal{R}_\beta(\rho) \cap K$, there exist two indices $i_1 \neq i_2$ such that:
\begin{itemize}
\item $\mathcal{R}_\beta(\rho) \cap B_R(x) = \{ u_{i_1, \beta} = u_{i_2, \beta} \} \cap B_R(x)$ and moreover the two connected components of $ B_R(x) \setminus \mathcal{R}_\beta(\rho)$ are given by $\{ u_{i_1, \beta} > u_{i_2, \beta} \} \cap B_R(x)$ and $\{ u_{i_1, \beta} < u_{i_2, \beta} \} \cap B_R(x)$;
\item for any $j \neq i_1, i_2$, the density $u_{j,\beta}$ is exponentially small with respect to $u_{i_1,\beta}$ and $u_{i_2,\beta}$, in the sense that there exist $C_1,C_2>0$ such that
\[
\qquad \sup_{ B_{R/2}(x)} u_{j,\beta} \leq C_1 e^{- C_1 \beta^{C_2}};
\]
\item in $B_{R/2}(x)$ the system reduces to
\[
\begin{cases}
-\Delta u_{i_1,\beta} = - \beta u_{i_1,\beta} u_{i_2,\beta}^2 - u_{i_1,\beta}o_\beta(1) \\
-\Delta u_{i_2,\beta} = - \beta u_{i_2,\beta} u_{i_1,\beta}^2 - u_{i_2,\beta}o_\beta(1)
\end{cases}
\]
where $o_\beta(1)$ is an (exponentially) small perturbation in the $L^{\infty}$-norm.
\end{itemize}
\end{proposition}
As we shall see, Theorem \ref{corol: decay components vanishing} is a simple consequence of this proposition together with the compactness of $K$ and the definition of $\mathcal{R}_\beta(\rho)$.
\begin{proof}
For $x \in \mathcal{R}_\beta(\rho) \cap K$, the set $B_R(x) \setminus \Gamma_\beta$ is given by two connected components. By Lemma \ref{lem: basic prop regular part} and by the choice of the local separation radius $R \le s/2$, it follows also that $B_R(x) \cap \Sigma_\beta =\emptyset$, where we recall that the singular part of the interface was introduced in Definition \ref{def singular interface}. Indeed, if this is not the case we can find a sequence $x_\beta \in K \cap \mathcal{R}_\beta(\rho)$ and, correspondingly, $y_\beta \in B_R(x_\beta) \cap \Sigma_\beta$. By compactness and Corollary \ref{cor: improved decay singular sequence}, we deduce that $y_\beta \to y \in \Sigma$, in contradiction with the second point in Lemma \ref{lem: basic prop regular part} and the fact that $R \le s/2$. Therefore, in each of the connected components of $B_R(x) \setminus \Gamma_\beta$, one function dominates the other $k-1$, and by \cite[Section 10]{DaWaZh} the two dominating functions must be different.
We explicitly remark that, if necessary replacing $R$ with a smaller quantity, it is possible to assume that
\[
\text{the closure of }\left(\bigcup_{\beta} \bigcup_{x \in \mathcal{R}_\beta(\rho) \cap K} B_{R}(x)\right) \quad \text{is a compact subset of $\Omega$}.
\]
Therefore, by Lemma \ref{lem: N bounded}, there exists $\bar C>0$ independent of $\beta$ such that
\begin{equation}\label{universal bound on N}
\sup_{\beta} \, \sup_{x \in \mathcal{R}_\beta(\rho) \cap K} \, \sup_{y \in B_R(x)} N(\mf{u}_\beta,y,R/4) \le \bar C.
\end{equation}
Let $\tilde C:= 1/(2+2 \bar C)$. We claim that there exists $C>0$ such that
\begin{equation}\label{eqn lower clean up}
\inf_{x \in \mathcal{R}_\beta(\rho) \cap K} \inf_{y \in B_{3R/4}(x)} \sum_{i = 1}^{k} u_{i,\beta}(y) \geq C \beta^{-\frac12 + \tilde C}.
\end{equation}
To prove the previous claim, we argue as in Theorem \ref{thm: lowe estimate}. Suppose by contradiction that the claim is not true: then there exist sequences $\beta \to +\infty$, $x_\beta \in \mathcal{R}_\beta(\rho)$ and $y_\beta \in B_{3R/4}(x_\beta)$ such that
\begin{equation}\label{abs eqn lower bound clean up}
\lim_{\beta \to +\infty} \beta^{\frac{1}{2}-\tilde C}\sum_{i =1}^k u_{i,\beta}(y_\beta) =0.
\end{equation}
Thus $y_\beta \to \bar y \in \Omega$, and since \eqref{nontrivial} is in force, we find a sequence $r_\beta = r_\beta(y_\beta) \to 0$ as in Lemma \ref{lem: choice of r}. Moreover, recalling \eqref{universal bound on N} and \eqref{prop: almgren}, we have also
\[
\frac{d}{dr}\log H(\mf{u}_\beta,y_\beta,r) \le \frac{2 \bar C}{r} \qquad \forall 0 < r < \frac{R}{4},
\]
whence by \eqref{nontrivial} we infer
\[
\frac{H(\mf{u}_\beta,y_\beta,r_\beta)}{r_\beta^{2 \bar C}} \ge \frac{H(\mf{u}_\beta,y_\beta,R/4)}{R^{2 \bar C}} \ge C.
\]
This estimate can be used as in Theorem \ref{thm: lowe estimate}: thanks to Lemmas \ref{lem: choice of r} and \ref{lem: H con m},
\begin{align*}
\left(\sum_{i=1}^k u_{i,\beta}(y_\beta)\right)^2 & \ge C
H(\mf{u}_\beta,y_\beta,r_\beta) \ge C {r_\beta^{2 \bar C}} \ge \frac{C}{H(\mf{u}_\beta,y_\beta,r_\beta)^{\bar C} \beta^{\bar C} } \\
& \ge \frac{C}{\beta^{\bar C}} \left(\sum_{i=1}^k u_{i,\beta}(y_\beta)\right)^{-2 \bar C}.
\end{align*}
It is not difficult to obtain a contradiction with \eqref{abs eqn lower bound clean up}, thus proving claim \eqref{eqn lower clean up}.
By the local separation property we know that for any $x \in \mathcal{R}_\beta(\rho)$ there are two indices $i_1,i_2$ such that the functions $u_{i_1,\beta}$ and $u_{i_2,\beta}$ dominate the remaining $k-2$ components in $B_{3R/4}(x)$. Combining this with \eqref{eqn lower clean up}, we obtain
\[
\inf_{y \in B_{3R/4}(x)} \left( u_{i_1,\beta}(y) + u_{i_2,\beta}(y) \right) \geq C \beta^{-\frac12 + \tilde C}
\]
(here $x$ depends on $\beta$, and $i_1,i_2$ could depend both on $x$ and on $\beta$, but we do not stress this to keep the notation simple; what is important is that $R$ is independent of $\beta$). To complete the proof, we shall use the previous estimate in the equation satisfied by the function $u_{j,\beta}$, $j \neq i_1, i_2$, in the ball $B_{3R/4}(x)$: this gives
\[
- \Delta u_{j,\beta} = - \beta u_{j,\beta} \sum_{i \neq j} u_{i,\beta}^2 \leq - C \beta u_{j,\beta} \left( u_{i_1,\beta} + u_{i_2,\beta} \right)^2 \leq - C \beta^{2\tilde C} u_{j,\beta},
\]
and thus, invoking Lemma \ref{lem: decay} and assumption \eqref{boundedness}, we finally infer
\[
\sup_{B_{R/2}(x)} u_{j,\beta} \leq C e^{- C \beta^{2\tilde C}},
\]
proving the second point in the thesis. The third point follows easily.
\end{proof}
Theorem \ref{corol: decay components vanishing} is a simple corollary of the previous statement.
\begin{proof}[Proof of Theorem \ref{corol: decay components vanishing}]
Under the assumptions of the theorem, there exists $x_\beta \in \Gamma_\beta$ such that $x_\beta \to x_0$, see Proposition \ref{prop: interfaces are good approximation}. Moreover, $x_\beta \not \in \Sigma_\beta$, otherwise we would have a contradiction with Corollary \ref{thm: non-simple blow-up}. We claim that there exists $\rho>0$ (independent of $\beta$) such that $x_\beta \in \mathcal{R}_\beta(\rho) \cap K$ for every $\beta$. Once this is proved, the thesis follows by Proposition \ref{clean up regular}. Suppose by contradiction that such a value $\rho$ does not exist. Then there exists $\rho_\beta \to 0^+$ such that
\[
N(\mf{u}_\beta,x_\beta,\rho_\beta) \ge 1+ \frac{1}{4}.
\]
On the other hand, since $x_0 \in \mathcal{R}$ there exists $\bar r>0$ such that $N(\mf{u},x_0,\bar r) \le 1+1/8$, and by monotonicity of the Almgren quotient and the usual convergence we easily reach a contradiction:
\[
1+\frac{1}{8} >N(\mf{u},x_0,\bar r) = \lim_{\beta \to +\infty}
N(\mf{u}_\beta,x_\beta,\bar r)
\ge \lim_{\beta \to +\infty} N(\mf{u}_\beta,x_\beta,\rho_\beta) \ge 1+ \frac{1}{4}.
\]
This proves the existence of $\rho$, and in turn the desired result.
\end{proof}
We conclude this section with the:
\begin{proof}[Proof of Proposition \ref{prop: non C^1}]
We can provide a counterexample to the convergence of the gradients. As reviewed in the preliminaries, there exists a unique solution to the system of ordinary differential equations
\[
\begin{cases}
u'' = uv^2 \\
v'' = u^2 v \\
u,v>0
\end{cases} \quad \text{in $\mathbb{R}$}, \quad \text{with $u'(+\infty) = 1$ and $v(x) = u(-x)$}.
\]
Notice that, consequently, for the (constant) Hamiltonian function we have
\[
(u')^2(x) + (v')^2(x) -u^2(x)v^2(x) =1 \qquad \forall x \in \mathbb{R}.
\]
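As an aside, the conservation of this quantity can be checked symbolically. The following Python sketch (a minimal illustration assuming the sympy library is available; it is not part of the original argument) differentiates $E=(u')^2+(v')^2-u^2v^2$ and substitutes the system:
\begin{verbatim}
# Minimal sympy sketch: verify that E = (u')^2 + (v')^2 - u^2 v^2
# is conserved along solutions of u'' = u v^2, v'' = u^2 v.
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
v = sp.Function('v')(x)

E = sp.diff(u, x)**2 + sp.diff(v, x)**2 - u**2 * v**2
dE = sp.diff(E, x).subs({sp.diff(u, x, 2): u*v**2,
                         sp.diff(v, x, 2): u**2*v})
print(sp.simplify(dE))  # prints 0, so E is constant in x
\end{verbatim}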
Let us consider
\[
(u_R(x),v_R(x)):= \frac{1}{R}(u(Rx), v(Rx)).
\]
This is a sequence of solutions to \eqref{system simplified} with $\beta(R)=R^4 \to +\infty$, and it is not difficult to deduce by usual arguments that it is locally uniformly bounded in $L^\infty$. Thus, by \cite{SoTaTeZi} (see also \cite{NoTaTeVe,Wa}), it is convergent in $\mathcal{C}^0_{\mathrm{loc}}(\mathbb{R})$ and in $H^1_{\mathrm{loc}}(\mathbb{R})$, up to a subsequence, to a limit profile $(U,V)$, such that $U-V$ is harmonic, and thus affine, in $\mathbb{R}$. Since $u'_R(1) \to 1$ as $R \to +\infty$, and since $u_R \to U$ in $\mathcal{C}^1_{\mathrm{loc}}(\mathbb{R} \setminus\{0\})$, we deduce that $(U,V)=(x^+,x^-)$. Let us suppose now by contradiction that $u_R-v_R \to U-V$ in $\mathcal{C}^1([-\varepsilon,\varepsilon])$ for some $\varepsilon>0$; then, recalling the symmetry of the solution, we infer that
\[
1= U'(0)-V'(0) = \lim_{R \to \infty} u_R'(0) - v_R'(0) = u'(0) -v'(0) = 2 u'(0),
\]
so that $u'(0) =-v'(0) = 1/2$. Coming back to the definition of the energy, we finally obtain
\[
1= \frac{1}{2} - u^2(0) v^2(0) < 1,
\]
a contradiction.
\end{proof}
\noindent \textbf{Acknowledgements:} part of this work was carried out while Nicola Soave was visiting the Centre d'Analyse et de Math\'{e}matique Sociales in Paris, and he wishes to thank the Centre for its hospitality. The authors are partially supported through the project ERC Advanced Grant 2013 n. 339958 ``Complex Patterns for Strongly Interacting Dynamical Systems - COMPAT''. Alessandro Zilio is also partially supported by the ERC Advanced Grant 2013 n. 321186 ``ReaDi -- Reaction-Diffusion Equations, Propagation and Modelling''.
| {
"timestamp": "2015-09-04T02:08:54",
"yymm": "1506",
"arxiv_id": "1506.07779",
"language": "en",
"url": "https://arxiv.org/abs/1506.07779",
"abstract": "We consider a family of positive solutions to the system of $k$ components \\[-\\Delta u_{i,\\beta} = f(x, u_{i,\\beta}) - \\beta u_{i,\\beta} \\sum_{j \\neq i} a_{ij} u_{j,\\beta}^2 \\qquad \\text{in $\\Omega$}, \\] where $\\Omega \\subset \\mathbb{R}^N$ with $N \\ge 2$. It is known that uniform bounds in $L^\\infty$ of $\\{\\mathbf{u}_{\\beta}\\}$ imply convergence of the densities to a segregated configuration, as the competition parameter $\\beta$ diverges to $+\\infty$. In this paper %we study more closely the asymptotic property of the solutions of the system in this singular limit: we establish sharp quantitative point-wise estimates for the densities around the interface between different components, and we characterize the asymptotic profile of $\\mathbf{u}_\\beta$ in terms of entire solutions to the limit system \\[\\Delta U_i = U_i \\sum_{j\\neq i} a_{ij} U_j^2. \\] Moreover, we develop a uniform-in-$\\beta$ regularity theory for the interfaces.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "On phase separation in systems of coupled elliptic equations: asymptotic analysis and geometric aspects",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517424466175,
"lm_q2_score": 0.7248702761768249,
"lm_q1q2_score": 0.7089606366625044
} |
https://arxiv.org/abs/2212.09919 | Generating Functions for Asymmetric Random Walk Processes With Double Absorbing Barriers | Generating functions for asymmetric step-size paths restricted by two absorbing barriers are derived. The method begins by applying the Lagrange inversion formula to arbitrary powers of roots of the characteristic equation, a trinomial, which produces generating functions, as functions of $z$, for the conditional probability of absorption of a particle on a path restricted by two absorbing barriers. The exact enumeration of an asymmetric walk with two absorbing barriers is given. | \section{Introduction}
The one-step-forward, one-step-back process is well understood; examples are given by Feller (1957), Lengyel (2009), Krattenthaler (2000), and others. Because the characteristic function can be expressed as a quadratic, it is possible to generate closed-form, analytic expressions (as sums of trigonometric functions or of binomials, for instance by applying Legendre polynomials) for the exact probabilities of absorption after some number of steps, assuming two absorbing barriers and a starting position between them. However, less information is available about such a process when the random walk is asymmetric, again assuming two absorbing barriers. In this case, the characteristic function is a cubic or some higher-degree equation. This paper derives an algorithm for generating functions for conditional, time-discrete probabilities of absorption for asymmetric random walks with two absorbing barriers, which may be unique to the literature, including a more restricted generalization of the so-called 'Duchon's club' problem.
Sato (1983) and Krattenthaler (2017) derived generating functions for asymmetric walks with double barriers, but their results do not apply to absorbing barriers, nor do the supplied formulas readily generalize beyond slopes of the form 3/2 (corresponding to three steps forward, two back). Shehawey (2008) derived closed-form formulas for double absorbing barriers with symmetric steps. This paper provides a complete enumeration, as a generating function, for the $y$-steps-forward, 2-steps-back process constrained by double absorbing barriers, and generalizes to arbitrary $y$-steps-forward, $b$-steps-back processes. Likewise, Banderier (2002) provides an enumeration of the conditional 3-steps-forward, 2-steps-back process with a single absorbing barrier, the so-called 'Duchon's club' problem; I extend the result to two barriers, as a generating function. This requires using all of the roots of the characteristic function, a trinomial, instead of only the small ones.
Although the transfer matrix method allows one to produce generating functions for the probabilities, it does not tell us how to compute the actual determinant, particularly when the transition matrix is of an arbitrary size $m$. This paper provides an algorithm built on a modified Vandermonde matrix, whose determinant encodes the underlying process with the necessary boundary conditions. It is also possible to derive the generating functions by solving systems of equations whose solutions correspond to entries of the powers of the transition matrix, but this still requires solving potentially large systems of equations for arbitrary matrix sizes. The approach with the Lagrange inversion formula bypasses having to solve a large array of linear equations.
\section{Single barrier}\label{sec:2}
For a single absorbing barrier set at $y=0$ and any starting position for integers $k \geq 0$, exact expressions for probability of absorption for a two-steps forward, one-step back process after some arbitrary number of steps are given by the Fuss-Catalan numbers. The generating function for the exact probabilities of absorption (on the left-sided barrier) is thus:
\begin{displaymath}
\sum_{n=0} \frac{kz^n}{2^k 8^n (3n+k)} {3n+k \choose n}
\end{displaymath}
For $z=1$, the above sum converges to $(\phi-1)^k$, the probability of absorption after an infinite number of steps, where $\phi$ is the golden ratio; this is a well-known result found in many introductory textbooks. The Fuss-Catalan series generalizes to any $n$-steps-forward, one-step-back process with a single absorbing barrier.
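As a quick numerical sanity check of this classical limit (a minimal Python sketch with illustrative names, not from the original text), partial sums of the series at $z=1$ can be compared against $(\phi-1)^k$:
\begin{verbatim}
# Partial sums of the absorption series at z = 1 versus (phi-1)^k,
# for the 2-forward / 1-back walk with a single barrier at 0.
from math import comb

def absorption_prob(k, terms=300):
    # sum_n  k / (2^k 8^n (3n+k)) * C(3n+k, n)  evaluated at z = 1
    return sum(k * comb(3*n + k, n) / (2**k * 8**n * (3*n + k))
               for n in range(terms))

phi = (1 + 5**0.5) / 2
for k in (1, 2, 5):
    print(k, absorption_prob(k), (phi - 1)**k)
\end{verbatim}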
\section{Two barriers}\label{sec:3}
In this section we describe the above process but with two barriers, the first at 0 and the second set at some value $m$. The usual methods that work for the quadratic double-barrier case do not readily generalize to the asymmetric cubic problem. We can derive a generating function that encodes the exact probabilities of absorption at either barrier (or both) after some arbitrary number of trials, and Fuss-like series for exact probabilities valid for a limited number of trials.
The two-steps-forward, one-step-back process is described by the following banded transition matrix (in this example, $m=5$). Here we can see that absorption occurs if the particle hits the squares corresponding to $0$, $m$, or $m+1$:
\begin{displaymath}
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 & 0 & 0 \\
0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 & 0 \\
0 & 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} & 0 \\
0 & 0 & 0 & \frac{1}{2} & 0 & 0 & \frac{1}{2} \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\end{displaymath}
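The absorption probabilities encoded by this matrix can be computed directly from its powers. The following Python sketch (illustrative only; the function name is ours) builds the matrix for general $m$:
\begin{verbatim}
# Build the transition matrix with absorbing squares {0, m, m+1}
# and read absorption probabilities from a high matrix power.
import numpy as np

def transition_matrix(m):
    P = np.zeros((m + 2, m + 2))
    for s in (0, m, m + 1):      # absorbing squares
        P[s, s] = 1.0
    for s in range(1, m):        # interior squares
        P[s, s - 1] = 0.5        # one step back
        P[s, s + 2] = 0.5        # two steps forward
    return P

m, start = 5, 4
P = np.linalg.matrix_power(transition_matrix(m), 200)
print(P[start, 0], P[start, m], P[start, m + 1])
\end{verbatim}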
The characteristic equation is given by:
\begin{displaymath}
x^3-x/p+z(1-p)/p=0
\end{displaymath}
for $p=1/2$ (equal probability of stepping in either direction):
\begin{displaymath}
x^3-2x+z=0
\end{displaymath}
with roots denoted as $a,b,c$
\begin{equation}
\begin{cases}
a=-b-c\\
b=2\,\sqrt{\frac{2}{3}}\,\sin\left[\,\frac{1}{3} \arcsin\left(\frac{3z}{4}\sqrt{\frac{3}{2}}\,\right) \,\right]\\
c=-2\,\sqrt{\frac{2}{3}}\,\cos\left[\,\frac{1}{3} \arccos\left(\frac{3z}{4}\sqrt{\frac{3}{2}}\,\right) \,\right]
\end{cases}\,.
\end{equation}
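These closed forms can be spot-checked against a numerical root finder at a sample value of $z$ (a minimal sketch, not part of the derivation):
\begin{verbatim}
# Check the trigonometric root formulas for x^3 - 2x + z = 0.
import numpy as np

z = 0.3
b = 2*np.sqrt(2/3) * np.sin(np.arcsin(0.75*z*np.sqrt(1.5)) / 3)
c = -2*np.sqrt(2/3) * np.cos(np.arccos(0.75*z*np.sqrt(1.5)) / 3)
a = -b - c
print(sorted([a, b, c]))
print(sorted(np.roots([1, 0, -2, z]).real))  # should agree
\end{verbatim}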
Using the Lagrange–Bürmann formula, it's possible to derive series for roots raised to an arbitrary power as a single summation:
\begin{equation}
c^{m}=2^{m/2}\left((-1)^{m}-\frac{m}{2}\sum_{n=0} \frac{(-1)^{1+n-m}z^{n+1}}{(2\sqrt{2})^{1+n} (n+1)} {3n/2-m/2+1/2 \choose n} \right)
\end{equation}
\begin{equation}
a^{m}=2^{m/2}\left(1-\frac{m}{2}\sum_{n=0} \frac{z^{n+1}}{(2\sqrt{2})^{1+n} (n+1)} {3n/2-m/2+1/2 \choose n} \right)
\end{equation}
\begin{equation}
b^{m}=mz^{m}\sum_{n=0} \frac{z^{2n}}{2^{3n+m} (2n+m)} {3n+m-1 \choose n}
\end{equation}
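As a sanity check of (4) (a minimal numerical sketch assuming numpy; the helper name is ours), the series for $b^m$ can be compared with the $m$-th power of the small root computed numerically:
\begin{verbatim}
# Compare the series for b^m with a direct numerical small root.
import numpy as np
from math import comb

def b_series(m, z, terms=60):
    # b^m = m z^m sum_n z^{2n} / (2^{3n+m} (2n+m)) C(3n+m-1, n)
    return m * z**m * sum(z**(2*n) * comb(3*n + m - 1, n)
                          / (2**(3*n + m) * (2*n + m))
                          for n in range(terms))

z = 0.3
b = min(np.roots([1, 0, -2, z]), key=abs).real  # small root, ~ z/2
for m in (1, 2, 5):
    print(m, b_series(m, z), b**m)
\end{verbatim}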
These root expansions lead to one of two possible systems of equations, depending on whether success is defined as hitting the left barrier, $0$, or one of the right ones, $m$ and $m+1$ (we write $w$ for the third unknown to avoid a clash with the formal variable $z$):
\begin{equation}
\begin{cases}
x+y+w=0\\
xa^{m}+yb^{m}+wc^{m}=1\\
xa^{m+1}+yb^{m+1}+wc^{m+1}=1
\end{cases}\,.
\end{equation}
\begin{equation}
\begin{cases}
x = \frac{-b^{m + 1} + b^m + (c - 1) c^m}{(a^{m + 1} (b^m - c^m) + a^m (c^{m + 1} - b^{m + 1}) + (b - c) b^m c^m)}\\
y = \frac{-a^{m + 1} + a^m + (c - 1) c^m}{-(a^{m + 1} (b^m - c^m) - a^m (c^{m + 1} - b^{m + 1}) - (b - c) b^m c^m)}\\
w = \frac{-a^{m + 1} + a^m + (b - 1) b^m}{(a^{m + 1} (b^m - c^m) + a^m (c^{m + 1} - b^{m + 1}) + (b - c) b^m c^m)}\\
\end{cases}\,.
\end{equation}
\begin{equation}
\begin{cases}
x+y+w=1\\
xa^{m}+yb^{m}+wc^{m}=0\\
xa^{m+1}+yb^{m+1}+wc^{m+1}=0
\end{cases}\,.
\end{equation}
\begin{equation}
\begin{cases}
x = \frac{(b-c) b^m c^m}{(a^{m + 1} (b^m - c^m) + a^m (c^{m + 1} - b^{m + 1}) + (b - c) b^m c^m)}\\
y = \frac{(a-c) a^m c^m}{-(a^{m + 1} (b^m - c^m) - a^m (c^{m + 1} - b^{m + 1}) - (b - c) b^m c^m)}\\
w = \frac{(a-b) b^m a^m}{(a^{m + 1} (b^m - c^m) + a^m (c^{m + 1} - b^{m + 1}) + (b - c) b^m c^m)}\\
\end{cases}\,.
\end{equation}
Either of these systems is valid.
The probability generating function is thus of the form:
\begin{equation}
f(z,s,m)=xa^s+yb^s+wc^s, \qquad 0 \leq s \leq m+1
\end{equation}
For either of the above forms, the denominator is the same.
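At $z=1$, where $f$ returns the eventual absorption probability, the construction can be cross-checked against powers of the transition matrix. The following sketch (illustrative only; it solves the boundary system numerically rather than symbolically) does this for the right-barrier case:
\begin{verbatim}
# f(1, s, m) from the 3x3 boundary system versus matrix powers.
import numpy as np

def f_right(z, s, m):
    a, b, c = np.roots([1.0, 0.0, -2.0, z])  # roots of x^3 - 2x + z
    A = np.array([[1.0, 1.0, 1.0],
                  [a**m, b**m, c**m],
                  [a**(m+1), b**(m+1), c**(m+1)]])
    coef = np.linalg.solve(A, [0.0, 1.0, 1.0])  # 0 at s=0; 1 at m, m+1
    return (coef @ np.array([a**s, b**s, c**s])).real

m, s = 5, 4
P = np.zeros((m + 2, m + 2))
for t in (0, m, m + 1):          # absorbing squares
    P[t, t] = 1.0
for t in range(1, m):            # interior squares
    P[t, t - 1] = P[t, t + 2] = 0.5
Pn = np.linalg.matrix_power(P, 2000)
print(f_right(1.0, s, m), Pn[s, m] + Pn[s, m + 1])  # should agree
\end{verbatim}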
\subsection{Approximation}
If we let $s=m-1$ in (5) and let $b\to 0$, then after some labor we have the Fuss-like generating function for the exact probability of absorption, valid up to $m$ terms:
\begin{equation}
\frac{b(b+1)}{z}
\end{equation}
and for $s=m-2$, ad infinitum...
\begin{equation}
\frac{-b^4+4b^2-zb+b^3}{z^2}
\end{equation}
Using (4) these can be converted to a single summation.
For example, letting $m=8$ and $s=7$ we have the series
\begin{equation}
\frac{1}{2} + \frac{z}{4}+ \frac{z^2}{16} + \frac{z^3}{16} + \frac{3 z^4}{128} + \frac{7 z^5}{256} + \frac{3 z^6}{256} +\frac{15 z^7}{1024} ...
\end{equation}
This is valid up to the 7th term, corresponding to the underlying transition matrix being raised to the 10th power. Letting $z=1$ and summing the first 7 terms gives an exact probability of absorption of 15/16. It fails on the 11th trial, overestimating the probability by 1/2048; the exact probability is 1949/2048. This error corresponds to a path in which the particle, which starts at square 7, makes seven consecutive steps backwards, hits zero, and is then absorbed on the 8th square $m$, which should not be allowed.
\subsection{Exact Generating Function for 1-Step Back Process}
For two barriers, the exact generating function for the simplest non-trivial case $s=m-1$ (corresponding to (7)), giving the exact probability of absorption on the left-side barrier $p_l(z)$ after some arbitrary number of steps, conditional on not having been absorbed before, is:
\begin{displaymath}
u[m]=\sum_{n=0}\frac{z^{2n}}{2^{3n+1-m}}{3n-m \choose n};3n-m<0
\end{displaymath}
\begin{equation}
p_l(z)=\frac{1}{u[m]}
\end{equation}
Example ($m=8$) gives:
\begin{displaymath}
\frac{1}{6z^4-80z^2+128}
\end{displaymath}
The generating function for the probability of being absorbed on either of the two right-sided barriers $p_r(z)$, $m$ and $m+1$, for the particle starting at $s=m-1$, is:
\begin{equation}
p_r(z)=\frac{zu[m-1]+z^{2}u[m-2]}{u[m]}
\end{equation}
For example, for $m=12$ with starting position at $s=11$, the probability of absorption on either of the two right-sided barriers conditional on having not been absorbed before is given by:
\begin{displaymath}
\frac{z (z^7 + 8 z^6 - 80 z^5 - 240 z^4 + 448 z^3 + 1024 z^2 - 512 z - 1024)}{8 (5 z^6 - 84 z^4 + 288 z^2 - 256)}
\end{displaymath}
which has the expansion $z/2 + z^2/4 + z^3/16 + z^4/16 + 3 z^5/128 + 7 z^6/256 + 3 z^7/256 + 15 z^8/1024...$
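These rational forms can be reproduced symbolically. The sketch below (assuming sympy; the helper $u$ mirrors the definition above) rebuilds $p_r(z)$ for $m=12$ and recovers the quoted expansion:
\begin{verbatim}
# Rebuild u[m] from its binomial sum and expand p_r(z) for m = 12.
import sympy as sp

z = sp.symbols('z')

def u(m):
    # u[m] = sum_{3n < m} z^{2n} 2^{m-1-3n} C(3n-m, n)
    return sum(z**(2*n) * sp.Integer(2)**(m - 1 - 3*n)
               * sp.binomial(3*n - m, n)
               for n in range(m // 3 + 1) if 3*n - m < 0)

m = 12
p_r = (z * u(m - 1) + z**2 * u(m - 2)) / u(m)
print(sp.series(p_r, z, 0, 9))  # z/2 + z^2/4 + z^3/16 + ...
\end{verbatim}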
The construction can be generalized to any $y$ steps forward:
\begin{equation}
u[y,m]=\sum_{n=0}\frac{z^{yn}}{2^{(y+1)n+1-m}}{(y+1)n-m \choose n};(y+1)n-m<0
\end{equation}
\begin{displaymath}
p_l(z)=\frac{1}{u[y,m]}
\end{displaymath}
\begin{displaymath}
p_r(z)=\frac{zu[y,m-1]+z^{2}u[y,m-2]+...+z^{y}u[y,m-y]}{u[y,m]}
\end{displaymath}
For example, for $m=9$ and $y=3$ and particle starting at 8, the exact generating function for absorption on any of the three right-sided barriers corresponding to $m, m+1,m+2$, conditional on having not been absorbed before, is given by:
\begin{displaymath}
\frac {-32 z (z^3 - 4)+4 z^2 (16 - 3 z^3)-4 z^3 (z^3 - 8)}{ z^6 - 80 z^3 + 256}
\end{displaymath}
Although Helmut Prodinger (2020) alludes to a proof of a related generating function involving certain Dyck paths, none is supplied [1]. Many proofs are given for second-degree formulas, but none for cubics. A non-inductive proof is supplied later for the case $y=3$, which generalizes to any $y$.
\section{More generalized walks}\label{sec:4}
It logically follows that if the particle is allowed to go one step back, why not two steps, or any number. The characteristic function is more general, $x^v-2x^u+z=0$ for integers $u,v$. This case is more difficult because the usual procedure for (14) does not generalize for $u>1$: the numerators of the system of $v$ equations no longer have solutions in which the terms carrying the superscript $m$ cancel.
It is also vastly more complicated. The four determinants behind (13) and (14) require only 6 terms for the numerator and 6 for the denominator; the three-steps-forward, two-steps-back walk needs five linear equations, which have hundreds of total terms even in the simplest of cases.
Nevertheless, it is still possible to derive binomial-type generating functions, similar to (13) albeit slightly more complicated, for the exact probabilities of double-barrier processes of $y$ steps forward, 2 steps back. For $y$ steps forward, 2 steps back, we have the following (for certain values of $m$, as discussed in the appendix).
Define:
\begin{displaymath}
u[m]=\sum_{n=0}\frac{z^{(yn-m)/2-1}(-1)^{n-m}}{2^{((y+2)n-m)/2+1}}{(y+2)n/2-m/2-1 \choose n}
\end{displaymath}
\begin{displaymath}
g[m]=\sum_{n=0}\frac{z^{(yn-m)/2-1}}{2^{((y+2)n-m)/2+1}}{(y+2)n/2-m/2-1 \choose n}
\end{displaymath}
We have:
\begin{equation}
\frac{u[m-2]+g[m-2]}{2u[m-2]g[m-2]-2u[m-1]g[m-3]}
\end{equation}
The above is the generating function for the probability of being absorbed at the inner-most left barrier, conditional on not having been absorbed before, for the particle starting at $s=m-1$. This is possibly related to the problem of Dyck paths of rational slope $n/2$ for odd $n>2$ (Banderier, 2016).
Using the procedure of Section 3.1, it is possible to derive elegant, Fuss-like generating functions for left-sided barriers.
For $x^v-2x^2+z=0$, corresponding to a two-steps-back process, we have the probability of absorption on either of the two left-sided barriers for some starting point $s$. In this case, $s=0$ and $s=1$ are both absorbing barriers.
\begin{equation}
\frac{(1-b)a^s+b^s(a-1)}{a-b}
\end{equation}
For $s=2$ it is just $a+b-ab$, in which $a,b$ are the two roots of the corresponding characteristic function with Euclidean norm less than one. A series solution as a function of $z$ is found with the same procedure as (4). Applying the Cauchy product formula for $ab$, we can derive the well-known summation formula for the so-called Duchon's club problem (Banderier 2002; Banderier and Wallner 2016) and variants therein.
This can be generalized by solving a system of $u$ equations for barriers corresponding to $s=0,1,\ldots,u-1$.
For example, for a three steps forward, two steps back process, for a particle starting at $s=2$ with absorbing barriers at $m=0,m=1$, the probability of the particle landing on the square corresponding to $m=1$ (the inner-most left-sided barrier) has the closed form Fuss-like series:
\begin{equation}
\sum_{n=0} \frac{z^{6n+4}}{2^{5n+3} (2n+1)} {5n+2 \choose 2n}
\end{equation}
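The leading coefficients of (18) can be checked by brute force (a minimal sketch; it enumerates all step sequences, so it is only practical for short paths). The first-passage path lengths here are $5n+3$, while the $z$-exponents $6n+4$ in (18) reflect the weighting convention rather than the raw step count, so only the path counts are compared:
\begin{verbatim}
# Count first-passage paths from s = 2 to square 1 (absorbing at 0, 1)
# for +3 / -2 steps, and compare with C(5n+2, 2n) / (2n+1) from (18).
from itertools import product
from math import comb

def count_paths(length):
    hits = 0
    for steps in product((3, -2), repeat=length):
        pos, ok = 2, False
        for i, st in enumerate(steps):
            pos += st
            if pos in (0, 1):   # absorbed
                ok = (i == length - 1 and pos == 1)
                break
        if ok:
            hits += 1
    return hits

for n, length in enumerate((3, 8, 13)):  # lengths 5n + 3
    print(length, count_paths(length), comb(5*n + 2, 2*n) // (2*n + 1))
\end{verbatim}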
\section{Derivation of double barrier generating functions}\label{sec:5}
Deriving generating functions like (16) and (13) for double barriers requires the following steps:
1. Expressing the generating function with the necessary boundary conditions as the determinants of Vandermonde-like matrices. The determinant is computed using block matrices based on the small roots (of size $\sim z$) and the large roots (of size $\sim 1+z$).
2. Use of the Lagrange–Bürmann formula for raised powers of roots of the characteristic equation to generate binomial expressions.
3. Reduction of the related Schur polynomials via the derivative method. Although the determinant can be expressed as a Schur polynomial, we have to convert this into a series in powers of $z$ to get the generating function.
4. Bounding the asymptotic behavior or cancellation of terms of the numerator and denominator of the generating function.
\subsection{Proof of Cubic walk}
For (13) using (8) and (9) and setting $s=m-1$ we have:
\begin{displaymath}
\frac{(a-b)(b-c)(a-c)}{z((b-c)a^{-m}+(a-b)c^{-m}+(c-a)b^{-m})}
\end{displaymath}
In which $a,b,c$ are the roots of the trinomial $x^3-2x+z=0$, as described in Section 3.
The product of the pairwise differences of the roots is equal to an algebraic function of $z$, denoted $h(z)$. In this case, $(a-b)(b-c)(a-c)=\pm\sqrt{32-27z^2}$, which factors out and does not contribute to the generating function.
Conjecture: for a trinomial, the relationship holds for each root $r_i$, $i=1,2,\ldots,n$. For example, for $r_1$:
\begin{equation}
-\frac{r_1^m}{(r_1-r_2)(r_1-r_3)(r_1-r_4)...}=\frac{r_1^{m+1}}{m+1}\frac{d}{dz}
\end{equation}
also
\begin{equation}
-r_1^m \prod_{2 \leq j < k \leq n} (r_j-r_k)=h(z)\left[\frac{r_1^{m+1}}{m+1}\frac{d}{dz}\right]
\end{equation}
...in which $r^{m}_i$ is the series expansion of the specified root, and the postfix $\frac{d}{dz}$ denotes differentiation of the preceding expression with respect to $z$. Thus (19) reads $-r_1^m/\prod_{j \neq 1}(r_1-r_j) = \frac{d}{dz}\big[\frac{r_1^{m+1}}{m+1}\big]$, which follows from $\frac{dr_1}{dz}=-1/p'(r_1)$ and $p'(r_1)=\prod_{j \neq 1}(r_1-r_j)$ for the monic trinomial $p$.
Using (20), we have:
\begin{equation}
\left[{z(\frac{c^{-m+1}}{-m+1}\frac{d}{dz}+\frac{a^{-m+1}}{-m+1}\frac{d}{dz}+\frac{b^{-m+1}}{-m+1} \frac{d}{dz})}\right]^{-1}
\end{equation}
Letting $m=2u$ in (21), we have for the denominator:
\begin{equation}
-\frac{2^{-u}}{2}\sum_{n=0} \frac{z^{2n}}{8^n} {3n+u \choose 2n} +\frac{4^{u}}{2}\sum_{n=0} \frac{z^{2n-2u}}{8^n} {3n-2u \choose n}
\end{equation}
The second binomial sum is zero for $u>n>2u/3$. In the second summation, shift the index $n \to n+u$. Hence, the first summation cancels with the remaining terms of the second one, giving (13).
This procedure generalizes to the general process (15).
\subsection{Proof of the reduction formula}
Consider a trinomial $x^v-jx^u+z=0$
For one of the roots $r_1$, we have the product polynomial $P$ of the root differences:
$(r_1-r_2)(r_1-r_3)(r_1-r_4)\cdots=P_{r_1},$
which is expanded as a polynomial of the form $r_1^{v-1}+q_1r_1^{v-2}+\ldots-z/r_1$.
Then, for the polynomial $x^v+d_1x^{v-1}+\ldots+d_u x^u+\ldots+d_v$,
through some labor via Vieta's formulas we can equate the $q_v$ and $d_v$ terms, without needing $d_u$, to find a simple recursion for $q_v$. (Equivalently, note that $P_{r_1}=p'(r_1)$ for the monic trinomial $p(x)=x^v-jx^u+z$; eliminating $j$ via $jr_1^u=r_1^v+z$ gives (23) directly.)
Hence we have (for each root $r_i$):
\begin{equation}
(v-u)r_1^{v-1}-uz/r_1=P_{r_1}
\end{equation}
Plugging (23) into (19) gives:
\begin{equation}
f(z)-\left[f(z)\frac{d}{dz}\right]uz+\left[f^{v+1}(z)\frac{d}{dz}\right]\frac{v-u}{v+1}=0
\end{equation}
Define the generalized series solution for powers $k$ of the $u$ small roots of the trinomial $x^v-jx^u+z=0$:
\begin{equation}
f^{k}(z)=r_I^k=k\sum_{n=0} \frac{e^{2i{\pi}I( (v-u)n+k)/u} z^{(n(v-u)+k)/u}}{j^{(nv+k)/u} (n(v-u)+k)} {nv/u +k/u-1 \choose n}
\end{equation}
for $I=0,1,\ldots,u-1$
Plugging (25) into (24) and manipulating the binomials and series ${a \choose b+1}= (a-b)/(b+1){a \choose b}$ gives the desired result, completing the proof.
\subsection{Proof of (16)}
The y-steps forward, 2-steps back process can be represented as a generating function of the form:
\begin{equation}
\frac{v_1(z)+z^{-m/2}{v_2(z)}}{\mu_1(z)+z^{m/2}\mu_2(z)+z^{m}\mu_3(z)}
\end{equation}
In which the $\mu_i(z)$ are functions of $z$.
This allows for the denominator to be simplified in such a way that the $\mu_3(z)$ terms can always be disregarded, and for certain values of $m$ the $\mu_2(z)$ terms can also be disregarded.
Let $y=3$, and let the 5 roots of the associated characteristic function $x^5-2x^2+z=0$ be denoted by $a,b$ for the small roots and $c,d,f$ for the large ones.
We have 5 Vandermonde-like matrices for the numerator, one for each root, after applying the necessary boundary conditions and letting $s=m-1$:
\begin{equation}
A=a^{m-1} det \begin{bmatrix}
0 & b & c & d & f \\
1 & b^2 & c^2 & d^2 & f^2 \\
0 & b^m & c^m & d^m & f^m \\
0 & b^{m+1} & c^{m+1} & d^{m+1} & f^{m+1} \\
0 & b^{m+2} & c^{m+2} & d^{m+2} & f^{m+2}
\end{bmatrix}
\end{equation}
The denominator:
\begin{equation}
\rho= det \begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
a & b & c & d & f \\
a^m & b^m & c^m & d^m & f^m \\
a^{m+1} & b^{m+1} & c^{m+1} & d^{m+1} & f^{m+1} \\
a^{m+2} & b^{m+2} & c^{m+2} & d^{m+2} & f^{m+2}
\end{bmatrix}
\end{equation}
The generating function via Cramer's rule is thus $\frac{A+B+C+D+F}{\rho}$, which has the form (26). Using Vieta's formulas, $(abc\cdots)^{m} = (-z)^m$, we have for the numerator (in which the $(-z)^m$ term is disregarded):
$-a^{1-m}(b-c)(b-d)(c-d)(c-f)(d-f)(b-f)+b^{1-m}(a-c)(a-d)(a-f)(c-d)(c-f)(d-f)...$
Applying (20) to the numerator, we have for even $m$:
\begin{equation}
(-z)^{m}h(z)\sum_{n=0} \frac{z^{3n-m/2}}{2^{5n-m/2+1}} {5n-m/2 \choose 2n}
\end{equation}
For odd $m$:
\begin{equation}
(-z)^{m}h(z)\sum_{n=0} \frac{z^{3n+3/2-m/2}}{2^{5n+7/2-m/2}} {5n+5/2-m/2 \choose 2n+1}
\end{equation}
The complete numerator involving the $c^{-m},d^{-m},f^{-m}$ terms for even $m$ is given by (58).
The $\mu_1(z)$ term of the denominator can be simplified by applying (19) and (20):
\begin{equation}
=(a-b)(c-d)(c-f)(d-f)(ab)^{-m}(-z)^{m}
\end{equation}
\begin{equation}
=\frac{h(z)(-z)^{m}(ab)^{-m}}{(a-c)(a-d)(a-f)(b-c)(b-d)(b-f)}
\end{equation}
\begin{equation}
=h(z)(-z)^{m}(a-b)(b-a)\frac{b^{1-m}}{1-m}\frac{d}{dz}\frac{a^{1-m}}{1-m}\frac{d}{dz}
\end{equation}
\begin{equation}
=h(z)(-z)^{m}\left[-\frac{a^{3-m}}{3-m}\frac{d}{dz}\frac{b^{1-m}}{1-m}\frac{d}{dz}+2\frac{a^{2-m}}{2-m}\frac{d}{dz}\frac{b^{2-m}}{2-m}\frac{d}{dz}-\frac{a^{1-m}}{1-m}\frac{d}{dz}\frac{b^{3-m}}{3-m}\frac{d}{dz}\right]
\end{equation}
\begin{equation}
=h(z)(-z)^{m}(2u[m-2]g[m-2]-2u[m-1]g[m-3])
\end{equation}
(The double series for (35) is given in Section 5.6.)
The $\mu_2(z)$ part has 6 terms similar to (31), and $\mu_3(z)$ has three, which do not contribute to the generating function for certain values of $m$ and can be disregarded. The $h(z)(-z)^{m}$ terms cancel out, giving (16). (26) can be represented as a finite polynomial of the form below, for some integer constants $c_n$ and $\beta_n$:
\begin{equation}
\frac{\sum_{n=0}^{u-1}\beta_n z^{yn}}{\sum_{n=0}^u c_n z^{yn} }
\end{equation}
In which $u(y+b)+b-1-m<0$, $y$ is the number of steps forward, and $b$ is the number of steps backwards; extraneous powers of $z$ in the constants $c_n$ are discarded.
If $yu<(m-1+\epsilon)/2$ for $\epsilon = 0,1,2$ and $(m-1+\epsilon) \mod 2 =0$ then (16) yields the exact generating function. Otherwise, $\mu_2(z)$ terms are needed. This is a consequence of finding the overlap, discussed in section 5.8.
Example: $m=20, y=3$ , and $u=3$
The generating function is $-z^{10}(15 z^3 + 32)/(4 (5 z^9 - 492 z^6 + 3328 z^3 - 4096))=z^{10}/512 + 41 z^{13}/16384 + 943 z^{16}/524288...$
Example: $m=14, y=3$ , and $u=2$
We have the generating function $(64z^7+6z^{10})/(4096-1792z^3+36z^6)$ with the expansion $z^7/64 + 17 z^{10}/2048 + 229 z^{13}/65536...$
Example: For $y=5, m=22$ we have $u=2$
The generating function: $z^{11}(80 z^5 + 1024)/(64 (93 z^{10} - 4608 z^5 + 16384))$
$= z^{11}/1024 + 23 z^{16}/65536 + (1563 z^{21})/16777216 ...$
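These expansions are easy to reproduce symbolically (a minimal sketch assuming sympy is available):
\begin{verbatim}
# Series-expand the three example generating functions above.
import sympy as sp

z = sp.symbols('z')
g1 = -z**10*(15*z**3 + 32) / (4*(5*z**9 - 492*z**6 + 3328*z**3 - 4096))
g2 = (64*z**7 + 6*z**10) / (4096 - 1792*z**3 + 36*z**6)
g3 = z**11*(80*z**5 + 1024) / (64*(93*z**10 - 4608*z**5 + 16384))

for g, order in ((g1, 17), (g2, 14), (g3, 22)):
    print(sp.series(g, z, 0, order))
\end{verbatim}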
\subsection{Generalizing the denominator}
In this section we derive a generalized formula for $\mu_1(z)$.
Consider a process in which the particle advances $y$ steps forward and $y-1$ steps back, with a 50-percent likelihood of either event. We thus have the characteristic equation $x^{2y-1}-2x^{y-1}+z=0$. There are $y$ large roots (with Euclidean norm $>1$) and $y-1$ small roots. So we have a matrix of the form (28) with dimensions $2y-1$ by $2y-1$. This determinant is composed of sums of permutations of $y$-tuples of roots. For the particle starting at $s=m-1$ as above, the numerator involves $(y-2)$-tuples of roots. Because the small roots have series approximations of the form $z^{1/(y-1)}$ via (25), and the large roots have the behavior $\alpha+\beta z$ via (48), (26) can be generalized for arbitrary step-size walks.
Thus the generating function for the generalized walk is given by:
\begin{equation}
\frac{(\nu_1(z)+\nu_2(z)z^{-m/(y-1)}+\nu_3(z)z^{-2m/(y-1)}...\nu_{y-1}(z)z^{-(y-2)m/(y-1)})}{\mu_1(z)+\mu_2(z)z^{m/(y-1)}+\mu_3(z)z^{2m/(y-1)}...\mu_{y}(z)z^{m}}
\end{equation}
[The $\mu_{y}(z)$, $\nu_{y-1}(z)$ terms are functions of products of $b$-length strings of large roots $R_i$ and small roots $r_i$, defined in (59). (36) is explained in more detail in Section 5.8 for arbitrary step-size walks.]
Using the same procedure as (31), the associated matrix for the $\mu_1(z)$ term has a simple form in which all the entries corresponding to small roots raised to $m$ powers can be zeroed-out. The result is a block matrix with one of the blocks zero. The determinant is the product of the determinant of the $y-1$ by $y-1$ Vandermonde matrix, denoted by $\Gamma_1$, times the determinant of the $y$ by $y$ matrix, denoted by $\Gamma_2$, the first corresponding to the small roots and the second the big roots.
The $\Gamma_1$ determinant has $(y-1)(y-2)/2$ unique product pairs of the small roots. The $\Gamma_2$ determinant has $y(y-1)/2$ unique product pairs of the big roots. The total number of unique pairs is $(y-1)^2$. For $h(z)$ there are $(2y-1)(y-1)$ unique product pairs, which we denote $\Gamma_3$. Note that $\Gamma_1 \Gamma_2 / \Gamma_3$ cancels such that the result is $1/\Gamma_4$, in which $\Gamma_4$ consists of the unique pairwise differences between large and small roots. The total number of unique pairs for $\Gamma_4$ is $(y-1)y$, corresponding to (32).
So we have, in which $r_i$ are small roots:
\begin{equation}
\mu_1(z)=\frac{h(z)(-z)^{m}(\prod_{i = 1}^{y-1} r_{i})^{-m}}{\Gamma_4}
\end{equation}
Applying (19), we notice that there are $2y-2$ unique pairs for (19), and this is repeated for each small root; the entire product will be denoted $\Gamma_5$. $\Gamma_5/\Gamma_4$ gives twice as many elements, $(y-1)(y-2)$, compared to $\Gamma_1$, because each pair is repeated twice for the small roots (e.g., $(a-b)(b-a)$).
For example:
$y=3; (a-b)(b-a)$
$y=4; (a-b)(a-c)(b-a)(b-c)(c-a)(c-b)$
$y=5; (a-b)(a-c)(a-d)(b-a)(b-c)(b-d)(c-a)(c-b)(c-d)(d-a)(d-b)(d-c)$
Applying (19) gives:
\begin{equation}
=-h(z)(-z)^{m}\Gamma_1^2 \left[ \frac{ (\prod_{i = 1}^{y-1} r^{1-m}_{i})}{1-m}\frac{d}{dz} \right]
\end{equation}
The highest weights of $\Gamma_1^2$ have the leading terms $r^{2(y-2)}_i$. So, applying (34), we have the first approximation (ignoring $-h(z)(-z)^{m}$):
\begin{equation}
= \frac{r^{2y-3-m}_{i}}{2y-3-m}\frac{d}{dz}
\end{equation}
Via (25), the value of $u$ is given by $u(y+b)+b-1-m<0$ again.
This can also be derived by inspection of the leading $z$ exponent of the characteristic polynomial of the larger $m-2$ by $m-2$ matrix, in which the leading $b$ columns and rows, and the final $y$ columns and rows, are deleted.
\subsection{Exact 3-forward, 2-backwards process}
This section extends the results of Section 5.3 to find exact generating functions for an arbitrarily large class of matrix sizes for the 3-forward, 2-backwards process. This also gives a partial solution, as a generating function, to the more general version of the so-called 'Duchon's club': there is a discrete-time process, $z$, at which pairs of people leave the club or trios enter; if the club becomes too overcrowded at some capacity $m$, the club also closes. [The derivation below assumes that the club initially has $m-1$ patrons.] [5]
Let $m=6\kappa$ for positive integers $\kappa$. This is chosen so that powers of $z$ of $\mu_2(z)$ for (26) are multiples of 3. Thus the denominator will be of the form:
\begin{equation}
a_0+a_1z^3+a_2z^6...+z^{3m}(b_0+b_1z^3+b_2z^6...)+z^{6m}(c_0+c_1z^3+c_2z^6...)
\end{equation}
Like above:
\begin{equation}
a_0+a_1z^3+a_2z^6...=(z)^{6\kappa}(2u[6\kappa-2]g[6\kappa-2]-2u[6\kappa-1]g[6\kappa-3])
\end{equation}
Using the procedure for (34), we have 6 partitions, each with three triples, for $\mu_2(z)$. This can be expressed more compactly (the $f(z)$ terms are disregarded because they cancel out):
\begin{equation}
z^{3m}(b_0+b_1z^3+b_2z^6...)=z^{6\kappa}(A_1+A_2+A_3)
\end{equation}
\begin{equation}
A_1=-\left[\frac{a^{3-6\kappa}}{3-6\kappa}\frac{d}{dz}+\frac{b^{3-6\kappa}}{3-6\kappa}\frac{d}{dz}\right] \frac{ {\Phi^{-6\kappa+1}(z)}} {{-6\kappa+1}} \frac{d}{dz}
\end{equation}
\begin{equation}
A_2=-\left[\frac{a^{1-6\kappa}}{1-6\kappa}\frac{d}{dz}+\frac{b^{1-6\kappa}}{1-6\kappa}\frac{d}{dz}\right]\frac{ {\Phi^{-6\kappa+3}(z)}} {{-6\kappa+3}} \frac{d}{dz}
\end{equation}
\begin{equation}
A_3=2\left[\frac{a^{2-6\kappa}}{2-6\kappa}\frac{d}{dz}+\frac{b^{2-6\kappa}}{2-6\kappa}\frac{d}{dz}\right]\frac{ {\Phi^{-6\kappa+2}(z)}} {{-6\kappa+2}} \frac{d}{dz}
\end{equation}
In which $\Phi^k(z)$ is the sum of the large roots of $x^5-2x^2+z$, each raised to the $k$-th power. Hence:
$\left(R_0^k(z)+R_1^k(z)+R_2^k(z)\right)\frac{d}{dz}=\Phi^k(z)\frac{d}{dz}$=
\begin{equation}
=\frac{-k}{3}\left(\frac{1}{2}\right)^{-k/3}\sum_{n=0} \theta z^n \left(\frac{1}{2}\right)^{5(n+1)/3} {(-k+2+5n)/3 \choose n},
\end{equation}
And define: $\theta=(1+e^{-2i\pi(2(1+n)-k)/3}+e^{-4i\pi(2(1+n)-k)/3}) $
This follows from the series solution for powers $k$ of the large roots for the trinomial: $x^v-jx^u+z=0$:
\begin{equation}
R_I^k(z) = j^{\frac{k}{v-u}}\left[e^{\frac{2kI\pi i}{u-v}}+\frac{k}{u-v}\sum_{n=0} \frac{z^{1+n}}{n+1} j^{\frac{v(n+1)}{u-v}} e^{\frac{2I\pi i(u(1+n)-k)}{v-u}} {\frac{vn+u-k}{v-u} \choose n}\right]
\end{equation}
for $I=0,1,\ldots,v-u-1$
Because the generating function derived from $A_n$ must be rational, certain terms of (47) can be omitted. Hence we have:
\begin{equation}
A_3=-z^{-3\kappa}2^{\kappa-1}\sum_{n=0}\frac{z^{3n}}{2^{5n}}{2\kappa+5n \choose 3n} \sum_{n=0}\frac{z^{3n}}{2^{5n}}{5n-3\kappa \choose 2n}
\end{equation}
\begin{equation}
A_2=z^{-3\kappa+3}2^{\kappa-7}\sum_{n=0}\frac{z^{3n}}{2^{5n}}{2\kappa+3+5n \choose 3n+2} \sum_{n=0}\frac{z^{3n}}{2^{5n}}{5n-3\kappa+2 \choose 2n+1}
\end{equation}
\begin{equation}
A_1=z^{-3\kappa+3}2^{\kappa-7}\sum_{n=0}\frac{z^{3n}}{2^{5n}}{2\kappa+2+5n \choose 3n+1} \sum_{n=0}\frac{z^{3n}}{2^{5n}}{5n-3\kappa+3 \choose 2n+1}
\end{equation}
The total solution, which describes the probability of being absorbed by any of the barriers instead of just one of the left-sided ones, requires modifying (27) as below for each column of the numerator. This is considerably more complicated, because the numerator of the generating function also involves permutations of pairs of the roots, similar to the denominator:
\begin{equation}
A=a^{m-1} det \begin{bmatrix}
1 & 1 & 1 & 1 & 1 \\
1 & b^1 & c^1 & d^1 & f^1 \\
1 & b^m & c^m & d^m & f^m \\
1 & b^{m+1} & c^{m+1} & d^{m+1} & f^{m+1} \\
1 & b^{m+2} & c^{m+2} & d^{m+2} & f^{m+2}
\end{bmatrix}
\end{equation}
\subsection{Explicit polynomial for y-steps forward, 2-steps back process}
Eq. (35) can be expressed as a polynomial in $z$, as a double summation, using a combination of the Cauchy product and various binomial manipulations (for any integer $m$).
\begin{equation}
\addtolength\jot{3pt}
\begin{split}
\mu_1(z) &= \frac{(-2)^m}{8}
\sum_{K=0}^{}\frac{z^{yK}}{2^{K(y+2)}}
\sum_{L=0}^{K}
\left( \frac{(-1)^L}{1+ \lfloor (1+L)/(1+K) \rfloor} \right) \\
& \quad \left[
\left(\frac{Ly-m+1}{(y+2)L-m+1}+
\frac{(2K-L)y-m+1}{(y+2)(2K-L)-m+1}
\right)
\right. \\
& \qquad
\binom{-yL/2+m/2-3/2}{L}
\binom{-y(2K-L)/2+m/2-3/2}{2K-L} \\
& \qquad \left.
{}+ 2\binom{-yL/2+m/2 -1}{L}
\binom{-y(2K-L)/2+m/2 -1}{2K-L}
\right]
\end{split}
\end{equation}
Or (which doesn't have potential singularities):
\begin{equation}
\begin{split}
\mu_1(z) &= \frac{(-2)^m}{8}
\sum_{K=0}^{}\frac{z^{yK}}{2^{K(y+2)}}
\sum_{L=0}^{2K}(-1)^L(S_1+S_2)\\
\end{split}
\end{equation}
\begin{displaymath}
\begin{split}
S_1={-yL/2+m/2-1/2 \choose L}
{-y(2K-L)/2+m/2-3/2 \choose 2K-L}\\
S_2={-yL/2+m/2 -1 \choose L}
{-y(2K-L)/2+m/2 -1 \choose 2K-L}
\end{split}
\end{displaymath}
And define $\mu_2(z)z^{m/2}=(a_1+a_2+a_3)$ (for odd $y$ and even $m$)
\begin{align}
\begin{split}
a_1 & = 2^{(-yv_0-y-2v_0-m-1)/y}z^{v_0+(y+1+m)/2} \times \\
&\sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(2+y)n+(y+3-m)/2 \choose 2n+1} \sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(y+2)n+(2v_0+yv_0+1+m)/y \choose yn+v_0}
\end{split}
\\[2ex]
\begin{split}
a_2 & = 2^{(-yv_2-y-2v_2-m+1)/y}z^{v_2+(y-1+m)/2} \times \\
&\sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(2+y)n+(y+1-m)/2 \choose 2n+1} \sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(y+2)n+(2v_2+yv_2-1+m)/y \choose yn+v_2}
\end{split}
\\[2ex]
\begin{split}
a_3 & = -2^{m/2+(-yv_1-y-2v_1-m)/y}z^{v_1+m/2} \times \\
&\sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(2+y)n-m/2 \choose 2n} \sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(y+2)n+((y+2)v_1+m)/y \choose yn+v_1}
\end{split}
\end{align}
The numerator corresponding to the starting position $s=m-1$ is
\begin{align}
\begin{split}
& = -2^{(-yv_1-y-2v_1-m)/y} z^{v_1+m} \times \\
&\sum_{n=0}\frac{z^{yn}}{2^{(y+2)n}}{(y+2)n+((y+2)v_1+m)/y \choose yn+v_1}+
\sum_{n=0}\frac{z^{yn+m/2}}{2^{(y+2)n-m/2+1}}{(2+y)n-m/2 \choose 2n}
\end{split}
\end{align}
And (for integers $v_0,v_1 \geq 0$):
$(2v_0+m+1) \mod y=0$
$(2v_1+m) \mod y=0$
$v_2=v_0+1$ (from $(2v_2+m-1) \mod y=0$)
Example: For $y=5, m=22$ we have the generating function $z^{11}(80 z^5 + 1024)/(64 (93 z^{10} - 4608 z^5 + 16384))$. We can show that the $z^{15}$ and $z^{20}$ terms of the denominator are zero using (56), (57), (58). The $z^{25}$ and beyond terms require computing the $\mu_3(z)z^{m}$ terms, which for any $m$ do not contribute to the generating function and can be ignored. We have $-(15301 z^{25})/1024 - (92309 z^{20})/256 - 315 z^{15}$ (with $v_0=1, v_1=4, v_2=2$). Using (55) and computing the $z^{15}$ and $z^{20}$ terms and adding, we see that the result is zero, which is to be expected given there is no overlap.
Example: For $y=3, m=18$, we have $u=3$, and an overlap at $z^9$. Applying (58) and (55-57) , we have:
\begin{displaymath}
\frac{16 z^9(5 z^3 + 16)}{-8 z^9 + 4416 z^6 - 45056 z^3 + 65536}=\frac{z^9}{256} + \frac{z^{12}}{256} + \frac{635 z^{15}}{262144} + \frac{5883z^{18}}{194304}...
\end{displaymath}
For the denominator, we can confirm that the $z^{12},z^{15}$ coefficients $325z^{12}/8$ and $9443z^{15}/128$ generated by (54) cancel out with (55-57), as expected.
(Computing the $z^{18}$ and higher coefficients requires another set of summations, similar to (22); but because there is no overlap for any $m$, this summation can be disregarded.)
\subsection{Deriving formulas 55-57}
Because $\mu_2(z)z^{m/(y-1)}$ is rational, the series for $R_i$, $r_i$ must also contain only rational powers if $y$ and $b$ have no common factors, so we let $y$ be odd (and $b=2$). Thus, the following condition must hold: $(m-w_i+(y+b)(n+1)-1) \mod y = 0$. This follows from applying (59) to (48). Letting $n=ny+v_i$ and $b=2$, in which $v_i$ is a positive integer, we have $(2v_i+m-w_i+1) \mod y =0$. So we have to solve:
$(2v_0+m+1) \mod y=0$ ( $w_i=0$);
$(2v_1+m) \mod y=0$ ( $w_i=1$);
$(2v_2+m-1) \mod y=0$ ( $w_i=2$)
For $y=3$ and $m=12$, the solution pairs $(w_i, v_i)$ are $(0,1),(1,0),(2,2)$. This corresponds to (49), (50), and (51).
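These congruences can also be checked by brute force; the following illustrative sketch recovers the stated solution pairs for $y=3$, $m=12$.
\begin{verbatim}
# Sketch: smallest non-negative v_i solving each congruence (y=3, m=12).
y, m = 3, 12
v0 = next(v for v in range(y) if (2*v + m + 1) % y == 0)  # w_i = 0
v1 = next(v for v in range(y) if (2*v + m) % y == 0)      # w_i = 1
v2 = next(v for v in range(y) if (2*v + m - 1) % y == 0)  # w_i = 2
print([(0, v0), (1, v1), (2, v2)])  # -> [(0, 1), (1, 0), (2, 2)]
\end{verbatim}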
Handling the trigonometric component of (48) requires solving $(m-w_i+b(n+1)-1) \mod y=0$. Letting $b=2$ and $n=yn+v_i$, one sees that it is the same as above.
For $r_i$ (setting $b=2$), the solution is of the form $n=2n+v_i$. We have to solve the following (which follows from applying (59) to (25)):
$(yv_0+m-1) \mod 2 =0$ ($w_i=0$);
$(yv_1+m) \mod 2 =0$ ($w_i=1$);
$(yv_2+m+1) \mod 2 =0$ ($w_i=2$).
The solution pairs $(w_i, v_i)$ for any odd positive $y>1$ and even $m$ are $(0,1),(1,0),(2,1)$. This again corresponds to (49), (50), and (51).
To prove that the above are the only possible solutions, we apply the Cauchy product to pairs $z^{m}R_i^{w_1}r_i^{w_2}$ (defined in (59)), in which $w_1+w_2=2=b(b-1)$. Here we have $b=2$, odd $y$, and even integer $m$. Merging (48) with (25), we must ensure that the resulting convolution is rational, which means that the powers of $z$ in the resulting generating function of the convolution have the form $(y+b)n_2$ for integers $n_2 \geq 0$, as shown in section 5.4. Likewise, the powers of 2 in the convolution ($j=2$) must be rational. Thus we have (for integers $n_1,n_2 \geq 0$):
$[m(b-1)+1+w_1-b+yn_2+bn_1] \mod b=0$
$[b(y+b)(n_1+1)-b(1-m+w_1)+y(n_2(y+b)+1-m-w_2)] \mod (by)=0$
Here we let $w_1$ correspond to the $w_i$ value for $r_i$, and $w_2$ correspond to the $w_i$ value for $R_i$.
For $w_1=w_2=1$ and $b=2$, after some algebra the first equation forces even values of $n_2$. The second equation takes the form $(-4n_1-n_2y^2-2m) \mod 2y=0$, from which $(2n_1+m) \mod y=0$ follows, so $n_1$ is of the form $yn_1$.
For $w_1=0,w_2=2$, the first equation gives solutions of the form $n_2=2n_2+1$. Plugging $n_2=2n_2+1$ into the second equation (noting that $-y(y+1) \mod 2y =0$), we obtain $(2n_1+1+m) \mod y =0$. Following the same procedure, $w_1=2,w_2=0$ also gives $n_2=2n_2+1$, with $(2n_1-1+m) \mod y =0$.
\subsection{Arbitrary y-steps forward, b-steps backwards process}
In this section, referring again to 5.4, we generalize the above formulas to an arbitrary walk, finding the denominator of the generating function for a $y$-steps-forward, $b$-steps-back process. The same procedure can be used to evaluate the generating function of the numerator for an arbitrary starting position and boundary conditions, but for brevity this is omitted.
For a $y$-steps-forward, $b$-steps-backwards process, denote the large roots of the characteristic equation $x^{y+b}-2x^b+z=0$ by $R_1, R_2,\dotsc,R_{y}$ and the small roots by $r_1, r_2,\dotsc,r_{b}$; their series expansions for the powers $R^k$ and $r^k$ are given by (48) and (25), respectively.
We define a string $s_{i,q}(gR,jr)$ as a product of $g$ large roots and $j$ small roots, each raised to some power $w$, with $g+j=b$
(e.g.\ $r_{i1}^{w_1}r_{i2}^{w_2}\dotsm r_{ib}^{w_b}$ or $r_{i1}^{w_{1}}r_{i2}^{w_{2}}\dotsm r_{i(b-1)}^{w_{b-1}}R_{i1}^{w_b}$).
And:
\begin{equation}
R_{I,i}^w,\;r_{I,i}^w =\frac{1}{w+1-m}\,\frac{d}{dz}\,(r_{I,i},R_{I,i})^{1-m+w}
\end{equation}
The matrix (28) in section 5.3 generalizes to the $y$-forward, $b$-back process:
\begin{equation}
\begin{bmatrix}
1 & 1 & \cdots & 1 & 1 & 1 & \cdots & 1 \\
r_1 & r_2 & \cdots & r_b & R_1 & R_2 & \cdots & R_y \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
r_1^b & r_2^b & \cdots & r_b^b & R_1^b & R_2^b & \cdots & R_y^b \\
r_1^m & r_2^m & \cdots & r_b^m & R_1^m & R_2^m & \cdots & R_y^m \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
r_1^{m+y-2} & r_2^{m+y-2} & \cdots & r_b^{m+y-2} & R_1^{m+y-2} & R_2^{m+y-2} & \cdots & R_y^{m+y-2} \\
r_1^{m+y-1} & r_2^{m+y-1} & \cdots & r_b^{m+y-1} & R_1^{m+y-1} & R_2^{m+y-1} & \cdots & R_y^{m+y-1}
\end{bmatrix}
\end{equation}
By the Leibniz determinant formula, the determinant is a sum of ${y+b \choose b} = {y+b \choose y}$ tuples of $b$-length or $y$-length strings of products of large and small roots, such as $r_1^m \dotsm R_1^m \dotsm$.
So for the $7\times7$ version of the above matrix ($y=4,b=3$), we have a sum of 35 strings of the form $\prod_{i=1}^{4} R_i^m,r_i^m$ or $(-z)^m\prod_{i=1}^{3} R_i^{-m},r_i^{-m}$ (via Vieta's formulas).
Observe that $r_1,r_2,r_3 \approx z^{1/3}$ and $R_1,R_2,R_3,R_4 \approx 1$ to leading order in their series expansions.
The determinant is therefore of the form $f_1(z)+f_2(z)z^{m/3}+f_3(z)z^{2m/3}+f_4(z)z^{m}$.
The interpretation is that we choose the four large roots $R_I^m$ ($I=1,2,3,4$), corresponding to $f_1(z)$; then three large roots and one small one ($r_I^m$ for $I=1,2,3$), corresponding to $f_2(z)z^{m/3}$; then two large roots and two small ones, corresponding to $f_3(z)z^{2m/3}$; and so on.
So we have ${4 \choose 4}{3 \choose 0}+{4 \choose 3}{3 \choose 1}+{4 \choose 2}{3 \choose 2}+{4 \choose 1}{3 \choose 3}={7 \choose 4}=35$
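This count is also easy to confirm mechanically (an illustrative one-liner, not needed for the argument):
\begin{verbatim}
# Sketch: Vandermonde-type count of strings for y=4, b=3.
from math import comb
print(sum(comb(4, 4 - k) * comb(3, k) for k in range(4)))  # -> 35 == comb(7, 4)
\end{verbatim}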
$f_1(z)$ is the easiest, having only a single choice, namely $C(4,4)$. This means the $r_I^{m},r_I^{m+1},r_I^{m+2}$ terms can be zeroed out, so one obtains a partitioned block matrix with one block zero; the determinant is then easy to evaluate, being the product of the determinants of two small ordinary Vandermonde matrices, i.e.\ a product of 9 pairwise differences (three for the $3\times3$ matrix and six for the $4\times4$ one). Generalized, we have (38) and (39) for the first term.
Based on the properties of $\Gamma_1^2$ and using the procedure of (34) (swapping out a single small root for a large one, then swapping out two small roots, etc.), we can write the $\mu(z)$ values of the denominator of (37) as a sum of products of roots:
\begin{equation}
\begin{cases}
\mu_1(z)=-h(z)(-z)^m \sum_{ i=1}^{P} \lambda_i \sum_{q=1 }^{p_i} s_{i,q}(0R,br)\\
\mu_2(z)=h(z)(-z)^m \sum_{ i=1}^{P} \lambda_i \sum_{q=1 }^{p_i} s_{i,q}(1R,(b-1)r) \\
...\\
\mu_{b-c}(z)=h(z)(-1)^{b-c+1}(-z)^m \sum_{ i=1}^{P} \lambda_i \sum_{q=1 }^{p_i} s_{i,q}((b-c-1)R,(c+1)r) \\
\end{cases}
\end{equation}
This gives the values of the $\mu_i$ terms of the denominator of (36).
The choice of $w_i$ are conditional on the following rules holding:
1. $w_1+w_2+\dotsb+w_b=b(b-1)$
2. Maximum height/weight: $2(b-1) \geq w_i$
3. String length of $R,r$ = $b$
4. The number of duplicates of quantities of $w_i$ for the given string is restricted by the number of pair partitions that satisfy the above criteria, and constrained by $\Gamma_1^2$, which means neither of weights of possible pair partitions for a given $w$ may exceed $b-1$. For example, there can only be a single $w_i^0$ $(0+0)$, two partitions of $w^1$ $(1+0,0+1)$, three partitions of $w^2$ $(2+0,0+2,1+1)$, etc.
Thus, for example, if $b=5$ and we denote the five roots by $a,b,c,d,f$, we have a maximum sum of 20 and a maximum weight of 8. $a^8b^8c^3d^1f^0$ is not valid because there is only a single possible pair partition $(4+4)$ that can be constructed from $\Gamma_1^2$ (which is $(a^4b^3c^2d^1f^0\dotsm)^2$ and has a total of $5!$ terms inside the parentheses, covering all possible permutations of the powers of roots from 0 to 4). However, $a^7b^7c^5d^1f^0$ is valid because there are two pair partitions less than or equal to four ($b-1$), namely $(4+3,3+4)$.
$c$ is the overlap, discussed in section 5.9. It determines exactly, or gives a bound for, how many terms of $\mu_i(z)$ need to be computed to get the exact generating function.
$\lambda_i$ is a partition constant for a specified string, which may be negative. It is defined by the number of unique pair partitions subject to the above conditions on $w_i$. For $b=2$ we have two partitions ($P=2$), namely a square term and the product of two linear terms, obtained by expanding $(a-b)^2= b^2 +a^2 -2ab$. We have the following $\lambda$ constants and their associated partitions, each of which adds up to $b(b-1)=2$: $(1:2,0),(-2:1,1)$.
$P$ is given explicitly as the $[q^{b(b-1)}]$ coefficient of the $q$-binomial ${3b-2 \choose b}_q$, counting the possible partitions that satisfy the above criteria.
For $b=3$ we have $((a-b)(a-c)(c-b))^2=a^4 b^2 - 2 a^4 b c + a^4 c^2 - 2 a^3 b^3 + 2 a^3 b^2 c + 2 a^3 b c^2 - 2 a^3 c^3 + a^2 b^4 + 2 a^2 b^3 c - 6 a^2 b^2 c^2 + 2 a^2 b c^3 + a^2 c^4 - 2 a b^4 c + 2 a b^3 c^2 + 2 a b^2 c^3 - 2 a b c^4 + b^4 c^2 - 2 b^3 c^3 + b^2 c^4$. We have the following $\lambda$ constants and their five ($P=5$) associated partitions, each of which adds up to $b(b-1)=6$: $(1:4,2,0),(-2:4,1,1), (-2:3,3,0), (2:3,2,1), (-6:2,2,2)$.
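The $\lambda$ constants can be read off mechanically by expanding the squared Vandermonde product; a purely illustrative sympy sketch for $b=3$:
\begin{verbatim}
# Sketch: recover lambda constants for b=3 from the expansion above.
import sympy as sp

a, b, c = sp.symbols('a b c')
P = sp.Poly(sp.expand(((a - b)*(a - c)*(c - b))**2), a, b, c)
print(P.coeff_monomial(a**4*b**2))       # ->  1   partition (4,2,0)
print(P.coeff_monomial(a**3*b**2*c))     # ->  2   partition (3,2,1)
print(P.coeff_monomial(a**2*b**2*c**2))  # -> -6   partition (2,2,2)
\end{verbatim}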
Example: for $r_1^3r_2^3$, $\lambda=2$ because it has the following unique pairs of decompositions: $r_1^2r_2^1 \cdot r_1^1r_2^2$ and $r_2^2r_1^1 \cdot r_2^1r_1^2$.
$p_i$ is the number of permutations of a string, where $r_{i}$ denotes the number of times a value of $w_i$ repeats in the specified partition $i$. There are $g$ entries of $R$ and $j$ entries of $r$, with $g+j =b$:
\begin{equation}
p_i={b \choose j} {y \choose g} \frac{b!}{\prod r_{i}!}
\end{equation}
The summation $\sum_{q=1 }^{p_i}$ denotes a symmetric-type polynomial containing $p_i$ terms (defined above) for a specified string.
Example: $y=3,b=2$. We have two partitions, $(0+2)$ and $(1+1)$, thus $P=2$; $\lambda_1=1$ corresponds to the $(0+2)$ partition, and $\lambda_2=-2$ corresponds to the $(1+1)$ partition.
For $\mu_1$, the string corresponding to $\lambda_2$ is $r_1^1r_2^1$, and that corresponding to $\lambda_1$ is $r_2^0r_1^2$.
For $\lambda_1$, $p_1=2$ because $y=3,b=2,j=2,g=0$, with no repeats of $w$: ${2 \choose 2} {3 \choose 0} \frac{2!}{0!}=2$.
For $\lambda_2$, $p_2=1$, because there is a single repeated value of $w$ in $r_1^1r_2^1$, so ${2 \choose 2} {3 \choose 0} \frac{2!}{2!}=1$.
Putting it together, we have $\mu_1=h(z)(-z)^m(-2r_1^1r_2^1+r_2^0r_1^2+r_2^2r_1^0)$. Over the rationals, $r_2^0r_1^2=r_2^2r_1^0$ (for all $y$), so we have $\mu_1=h(z)(-z)^m(-2r_1^1r_2^1+2r_2^2r_1^0)$, which is (35).
[Proof: via the trigonometric component of (25), applying the Cauchy formula to $r_2^0r_1^2$ with $I=0,1$ and using (59), we have to show that $\cos(\pi(yn+1-m))=\cos(\pi(yn+3-m))$ for positive integers $y,m$; this is immediate, since the two arguments differ by $2\pi$.]
For $\mu_2(z)z^{m/2}$, the associated strings have a single copy each of $R$ and $r$, so $(1R,1r)$. For the string associated with the $\lambda_1$ partition, we have a polynomial with 6 entries, $p_1=6$: here $y=3,b=2,j=1,g=1$, with no repeats of $w$ for either $R$ or $r$, so ${2 \choose 1} {3 \choose 1} \frac{2!}{2!}=6$. For the string associated with $\lambda_2$, we have $p_2=12={2 \choose 1} {3 \choose 1} \frac{2!}{1!}$ via (62). This means there are a total of 18 entries (for $y=3$). Hence we have
\begin{displaymath}
\mu_2(z)z^{m/2}=h(z)(-z)^m\big((r_1^0+r_2^0)(R_1^2+R_2^2+\dotsb+R_y^2)-2(r_1^1+r_2^1)(R_1^1+R_2^1+\dotsb+R_y^1)+(r_1^2+r_2^2)(R_1^0+R_2^0+\dotsb+R_y^0)\big).
\end{displaymath}
This formula is used for the exact 3-steps-forward, 2-steps-back process described in section 5.5. It can also be compressed: $h(z)(-z)^m(-4yR_I^1r_i^1+2yR_I^2r_i^0+2yR_I^0r_i^2)$, which over the rationals is valid for all $R_I$ and $r_i$ with $I=1,2,\dotsc,y$ and $i=1,2$. This means there are three pairs of double product summations, namely (55), (56), and (57).
For $y=4,b=3$ we have (for constants $c_i$):
\begin{displaymath}
\begin{split}
\mu_2(z)z^{m/3}=h(z)(-z)^m \Big( & (R_1^0+R_2^0+R_3^0+R_4^0)\sum(c_1r_I^2r_i^4+c_2r_I^3r_i^3) \\
&+(R_1^1+R_2^1+R_3^1+R_4^1)\sum(c_3r_I^3r_i^2+c_4r_I^4r_i^1) \\
&+(R_1^2+R_2^2+R_3^2+R_4^2)\sum(c_5r_I^2r_i^2+c_6r_I^1r_i^3+c_7r_I^0r_i^4) \\
&+(R_1^3+R_2^3+R_3^3+R_4^3)\sum(c_8r_I^1r_i^2+c_9r_I^0r_i^3) \\
&+(R_1^4+R_2^4+R_3^4+R_4^4)\sum(c_{10}r_I^0r_i^2+c_{11}r_I^1r_i^1)\Big)
\end{split}
\end{displaymath}
This has 228 elements, because $r_I,r_i$ run through $i, I= 1,2,3$ (with $r_I \neq r_i$), giving 6 elements for each pair of $r$'s of differing weights and 3 elements of equal weights.
Via (62), ${3 \choose 2} {4 \choose 1}\,3!\left(1+\frac{1}{2}+\frac{1}{2}+1+\frac{1}{6}\right)=228$ (summed over the 5 possible partitions).
Likewise:
\begin{displaymath}
\begin{split}
\mu_3(z)z^{2m/3}=h(z)(-z)^m \Big( & (r_1^0+r_2^0+r_3^0)\sum(c_1R_I^2R_i^4+c_2R_I^3R_i^3) \\
&+(r_1^1+r_2^1+r_3^1)\sum(c_3R_I^3R_i^2+c_4R_I^4R_i^1) \\
&+(r_1^2+r_2^2+r_3^2)\sum(c_5R_I^2R_i^2+c_6R_I^1R_i^3+c_7R_I^0R_i^4) \\
&+(r_1^3+r_2^3+r_3^3)\sum(c_8R_I^1R_i^2+c_9R_I^0R_i^3) \\
&+(r_1^4+r_2^4+r_3^4)\sum(c_{10}R_I^0R_i^2+c_{11}R_I^1R_i^1)\Big)
\end{split}
\end{displaymath}
This has 342 elements, because $R_I,R_i$ run through $i, I= 1,2,3,4$ (with $R_I \neq R_i$), giving 12 elements for each pair of $R$'s of differing weights and 6 elements of equal weights. Via (62), ${3 \choose 1} {4 \choose 2}\,3!\left(1+\frac{1}{2}+\frac{1}{2}+1+\frac{1}{6}\right)=342$.
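Both counts follow from (62) summed over the five $b=3$ partitions; as a quick check (the repeat factors $\prod r_i!$ per partition are $1,2,2,1,6$):
\begin{verbatim}
# Sketch: reproduce the 228- and 342-element counts via (62).
from fractions import Fraction
from math import comb, factorial

repeats = [1, 2, 2, 1, 6]  # prod r_i! for (4,2,0),(4,1,1),(3,3,0),(3,2,1),(2,2,2)
common = factorial(3) * sum(Fraction(1, f) for f in repeats)  # = 19
print(comb(3, 2) * comb(4, 1) * common)  # -> 228  (the mu_2 count)
print(comb(3, 1) * comb(4, 2) * common)  # -> 342  (the mu_3 count)
\end{verbatim}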
\subsection{Finding the overlap}
Because the denominator of (37) is of the form $\sum_{n=0}^u {c_n}{z^{yn}}$, there may be overlap between $\mu_2(z)z^{m/(y-1)}$ and $\mu_1(z)$, or elsewhere, depending on the values of $b$ and $m$ chosen.
$\mu_2(z)z^{m/(y-1)}$ has the form $R_ir_iz^{m}$ (for brevity, not all 18 terms are shown). Referring to (25), let $v=y+b$, $u=b$, and $k=1-m+w_i$. Applying (59) and multiplying by $z^m$ (disregarding $h(z)$), we can pull out the leading power of $z$ (letting $b=2$): $o=z^{(m+w_i-1)/2}$, in which $w_i=0,1,2$ is chosen such that $o$ (the overlap) is rational, or equivalently that $(m+w_i-1) \mod 2=0$, as in section 5.3.
We can approximate $R_i$ by a linear function $\alpha+\beta z$; this follows from applying (59) to (49). The linearity of $R_i$ means that it can be disregarded in the calculation.
Because the degree of the generating-function polynomial for the denominator satisfies $u(y+b)+b-1-m<0$ (referring to section 5.4), an overlap (for $b=2$) occurs when $yu\geq (m+w_i-1)/2$ (for the maximum possible $u$).
We observe that letting $m=(b+y)p+b-1$ and $u=p-1$ maximizes $m$ relative to $u$, for some positive integer $p$. So letting $y=3, b=2$, we see that $m=26$ (corresponding to $p=5$) is the largest value of $m$ which has no overlap for any $y>2$ or $b>1$.
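As a sanity check, the criterion can be tested on the earlier worked examples ($y=5, m=22$ and $y=3, m=18$); the sketch below assumes the $b=2$, $w_i=1$ conventions of this section and takes $u$ maximal subject to $u(y+b)+b-1-m<0$.
\begin{verbatim}
# Sketch: test the overlap criterion yu >= (m + w_i - 1)/2 for b=2, even m.
def has_overlap(y, m, b=2, w=1):
    u = (m + 1 - b) // (y + b)        # largest u with u*(y+b) + b - 1 - m < 0
    if (m + 1 - b) % (y + b) == 0:
        u -= 1
    return y * u >= (m + w - 1) // 2

print(has_overlap(5, 22))  # -> False (no overlap, as in the y=5, m=22 example)
print(has_overlap(3, 18))  # -> True (overlap at z^9, as in the y=3, m=18 example)
\end{verbatim}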
As shown earlier, computing each $\mu$ term in the denominator is not necessary to obtain the exact generating function, but we only need enough terms to account for any overlaps.
The question is how many $\mu$ terms need to be computed. Deferring to (61), each successive $\mu$ term has one fewer $r_i$ factor in each string of length $b$.
Consider a string containing $c$ copies of $r_i$. As above, we can factor out the leading exponent, times $c$ copies of it, so we have to solve
$yu<(m(b-c)+c(1-b)+\sum_{i=1}^{c}w_i)/b$.
For example, for $y=4,b=3$, $c=1$ would correspond to $\mu_3(z)z^{2m/3}$, because there is only a single copy of $r_i$; likewise, $\mu_2(z)z^{m/3}$ corresponds to $c=2$, because there is a product of two $r_i$. (Both are described in section 5.8.)
We will let $\sum_{i=1}^{c}w_i=0$. This tightens the bound and does not affect the final calculation. The bound is tightened again by letting $u=p-1$ and $m=(y+b)p+b$ for some positive integer $p$, which maximizes $u$ relative to $m$. After some algebra, we have $0<p(b^2-bc-cy)+by+c(1-b)$. Because $p$ is unbounded, we can tighten the bound even more by setting $b^2-bc-cy=0$, which gives $c=b^{2}/(b+y)$. We then have $0<b^2y+by^2+b^2-b^3$, which, because $y>b$, is obviously true. The limiting behavior is thus $c=b/2$. Letting, for example, $b=3,y=4$ gives $c=9/7$, which means that $\mu_3(z)z^{2m/3}$ can be disregarded for any $m$. Likewise, for $b=2,y=3$ we have $c=4/5$, which means that $c=0$ for (61).
\subsection{Enumerating strings}
In this section we make a slight modification to (61) in order to enumerate strings explicitly. Rules 1--4 apply, but we write capital $W$ for the weights of the large roots $R$ and lowercase $w$ for those of the small roots $r$ (again with $j+g=b$).
1. $w_1+w_2+\dotsb+w_j+W_1+W_2+\dotsb+W_g=b(b-1)$
Like in section 5.8, denote a string as: $r^{w_1}_1r_2^{w_2}...r^{w_j}_jR^{W_1}_1R^{W_2}_2...R^{W_g}_g$
(For the trinomial, here we let $j=1$ for (48) and (25), which only superficially affects the answer. To recover the probabilities, we let $z=z_1z$ in the resulting generating function, for some constant $z_1$.)
Applying a convolution $j$ times to (25) and $g$ times to (48), and then combining the two for a string of length $b$, we can factor out the following trigonometric component, which is rational for integers $0 \leq I^i \leq b-1$ (for the small roots) and $0 \leq I_i \leq y-1$ (for the large roots):
\begin{equation}
\begin{split}
\theta_a=\cos (2\pi ( b(b-1+m)\sum_{i=1}^{g}I_i+b^2\sum_{i=1}^{g}V_{i,a}I_i-b\sum_{i=1}^{g}W_iI_i+\\
y^2\sum_{i=1}^{j}I^iv_{i,a}+y(1-m)\sum_{i=1}^{j}I^i+y\sum_{i=1}^{j}I^iw_i)/(yb))
\end{split}
\end{equation}
And:
\begin{equation}
Q=\sum_{i=1}^{g}V_{i,a}+\frac{1}{b}\left [\sum_{i=1}^{j}(yv_{i,a}+w_i)+j(1-b-m)\right]
\end{equation}
Putting it all together, we have:
\begin{equation}
\begin{split}
\frac{(-1)^{g+1+m} \lambda}{b^jy^g} \sum_{a=1}^{r} \theta_a z^{Q+m} \left [ \prod_{i=1}^{j}\sum_{n=0}^{}z^{yn}{((bn+v_{i,a})(y+b)+1+w_i-m-b)/b \choose bn+v_{i,a}}\right] \\
\left [ \prod_{i=1}^{g}\sum_{n=0}^{}z^{yn}{((yn+V_{i,a})(y+b)+b+m-1-W_i)/y \choose yn+V_{i,a}}\right]
\end{split}
\end{equation}
\newpage
For a given string, $r$ denotes the number of solutions for which $\theta_a$ is rational, and $V_{i,a}, v_{i,a}$ are non-negative integer solutions with $0 \leq V_{i,a} < y$, $0 \leq v_{i,a} < b$ that satisfy (64) and (63), subject to the following restrictions:
\begin{equation}
\begin{split}
(Q+m) \mod y = 0\\
(\sum_{i=1}^{j}(yv_{i,a}+w_i)+j(1-b-m)) \mod b =0
\end{split}
\end{equation}
As an example of how this works, consider $b=1$: then $\lambda=1$, $w_i=0$, $j=1$, $g=0$, and the $v_i,V_i$ terms are zero. This recovers (15).
For the 2-steps-back, 3-steps-forward process ($y=3,b=2, m=12$), we have the first three strings: $-2r_1^1r_2^1+r_2^0r_1^2+r_2^2r_1^0$. For $r_1^1r_2^1$ we have $w_1,w_2=1$, $\lambda=-2$, $j=2$, $g=0$, $I^1=1$, $I^2=0$.
For $a=1$, $v_{1,1}=0, v_{2,1}=0$, and $\theta_1=1$. For $a=2$, $v_{1,2}=1, v_{2,2}=1$, and $\theta_2=-1$.
Likewise, for the string $r_2^0r_1^2$ we have $w_1=2,w_2=0$, $\lambda=1$, $j=2$, $g=0$, $I^1=1$, $I^2=0$. For $a=1$, $v_{1,1}=0, v_{2,1}=0$, and $\theta_1=-1$. For $a=2$, $v_{1,2}=1, v_{2,2}=1$, and $\theta_2=1$. Interchanging $w_1=2,w_2=0$ gives an equivalent string, so $2r_2^0r_1^2=2r_2^2r_1^0=r_2^0r_1^2+r_2^2r_1^0$. We thus have:
\begin{displaymath}
\begin{split}
&\frac{1}{2} \Big(\sum_{n=0} z^{3n} {5n-6 \choose 2n}\Big)^2 -\frac{z^3}{2} \Big(\sum_{n=0} z^{3n} {5(2n+1)/2-6 \choose 2n}\Big)^2 \\
&-\frac{z^3}{2} \Big(\sum_{n=0} z^{3n} {5(2n+1)/2-13/2 \choose 2n+1}\Big) \Big(\sum_{n=0} z^{3n} {5(2n+1)/2-11/2 \choose 2n+1}\Big) \\
&+\frac{1}{2} \Big(\sum_{n=0} z^{3n} {5n-13/2 \choose 2n}\Big) \Big(\sum_{n=0} z^{3n} {5n-11/2 \choose 2n}\Big),
\end{split}
\end{displaymath}
which has the expansion $1-10z^3+3z^6+\dotsb$.
The next six strings are of the form $-2r_i^1R_I^1$, with $i=1,2$ and $I=1,2,3$. Here we have $\lambda=-2$, $g=1$, $j=1$, $w_1,w_2=1$. Letting all the $v,V$ terms be zero gives $\theta_1=1$ and satisfies the restrictions, so $Q=-6$ for each of the six strings. Hence we have $-2z^6$ as the leading term, which, when added to the earlier strings, gives the correct polynomial of the generating function, $1-10z^3+z^6$.
\newpage
| {
"timestamp": "2022-12-21T02:04:00",
"yymm": "2212",
"arxiv_id": "2212.09919",
"language": "en",
"url": "https://arxiv.org/abs/2212.09919",
"abstract": "Generating functions for asymmetric step-size paths restricted by two absorbing barriers are derived. The method begins by applying the Lagrange inversion formula to arbitrary powers of roots of the characteristic equation, that being a trinomial, which produces generating function as function (z) of the conditional probability of absorption of a particle, on a path restricted by two absorbing barriers. The exact enumeration of an asymmetric walk with two absorbing barriers is given.",
"subjects": "Probability (math.PR); Combinatorics (math.CO)",
"title": "Generating Functions for Asymmetric Random Walk Processes With Double Absorbing Barriers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517482043892,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7089606350230158
} |
https://arxiv.org/abs/0712.2337 | Mould expansions for the saddle-node and resurgence monomials | This article is an introduction to some aspects of Écalle's mould calculus, a powerful combinatorial tool which yields surprisingly explicit formulas for the normalising series attached to an analytic germ of singular vector field or of map. This is illustrated on the case of the saddle-node, a two-dimensional vector field which is formally conjugate to Euler's vector field $x^2\frac{\pa}{\pa x}+(x+y)\frac{\pa}{\pa y}$, and for which the formal normalisation is shown to be resurgent in $1/x$. Resurgence monomials adapted to alien calculus are also described as another application of mould calculus. | \section{Introduction}
Mould calculus was developed by J.~\'Ecalle in relation with his Resurgence
theory almost thirty years ago (\cite{Eca81}, \cite{dulac}, \cite{Eca93}).
The primary goal of this text is to give an introduction to mould calculus,
together with an exposition of the way it can be applied to a specific geometric
problem pertaining to the theory of dynamical systems: the analytic
classification of saddle-node singularities.
The treatment of this example was indicated in \cite{Eca84} in a concise manner
(see also \cite{CNP}), but I found it useful to provide a self-contained presentation of
mould calculus and detailed explanations for the saddle-node problem,
in the same spirit as Resurgence theory and alien calculus were presented in
\cite{kokyu} together with the example of the analytic classification of
tangent-to-identity transformations in complex dimension~$1$.
Basic facts from Resurgence theory are also recalled in the course of
the exposition, with the hope that this text will serve to a broad readership.
I also included a section on the relation between the resurgent approach to the
saddle-node problem and Martinet-Ramis's work \cite{MR}.
The text consists of three parts.
\begin{enumerate}[A.]
\item
Section~\ref{secSN} describes the problem of the normalisation of the
saddle-node and Section~\ref{secMCexpSN} outlines its treatment by the method of
mould-comould expansions.
\item
The second part has an ``algebraic'' flavour: it is devoted to a systematic
exposition of some features of mould algebras (Sections~\ref{secAlgMoulds}
and~\ref{secAltSym})
and mould-comould expansions (Sections~\ref{secGenMcM}
and~\ref{secContrAltSym}).
\item
The third part is mainly concerned with the applications
to Resurgence theory of the previous results
(Sections~\ref{secResurSN}--\ref{secMR} show the consequences for the problem
of the saddle-node and have an ``analytic'' flavour,
Section~\ref{secResurMonom} describes the construction of resurgence monomials
which allow one to check the freeness of the algebra of alien derivations);
other applications are also briefly alluded to in Section~\ref{secOtherAppli}
(with a few words about arborification and multizetas).
\end{enumerate}
All the ideas come from J.~\'Ecalle's articles and lectures.
An effort has been made to provide full details, which occasionally may have
resulted in original definitions, but they must be considered as auxiliary {with respect to}\
the overall theory.
The details of the resurgence proofs which are given in
Sections~\ref{secResurSN} and~\ref{secBE} are original, at least I have not
seen them in the literature before.
\vfill
\pagebreak
\part{The saddle-node problem}
\section{The saddle-node and its formal normalisation} \label{secSN}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let us consider a germ of complex analytic $2$-dimensional vector field
\begin{equation} \label{eqdefX}
X = x^2 \frac{\partial\,}{\partial x} + A(x,y) \frac{\partial\,}{\partial y},
\qquad A \in \mathbb{C}\{x,y\},
\end{equation}
for which we assume
\begin{equation} \label{eqassA}
A(0,y) = y, \qquad \frac{\partial^2 A}{\partial x\partial y}(0,0) = 0.
\end{equation}
Assumption~\eqref{eqassA} ensures that $X$ is formally conjugate to the normal
form
\begin{equation}
X_0 = x^2 \frac{\partial\,}{\partial x} + y \frac{\partial\,}{\partial y}.
\end{equation}
We shall be interested in the formal transformations which conjugate~$X$ and~$X_0$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
This is the simplest case from the point of view of {\em formal} classification of
saddle-node singularities of analytic differential equations.\footnote{
A singular differential equation is essentially the same thing as a differential
$1$-form which vanishes at the origin. It defines a singular foliation, the leaves
of which can also be obtained by integrating a singular vector field, but
classifying singular foliations (or singular differential equations) is
equivalent to classifying singular vector fields \emph{up to time-change}.
See {e.g.}\ \cite{Moussu}.}
Indeed, when a differential equation $B(x,y)\,{\mathrm d} y - A(x,y)\,{\mathrm d} x = 0$ is singular
at the origin ($A(0,0)=B(0,0)=0$) and its $1$-jet has eigenvalues $0$ and~$1$,
it is always formally conjugate to one of the normal forms
$x^{p+1}\,{\mathrm d} y - (1+\lambda x)y\,{\mathrm d} x = 0$ ($p\in\mathbb{N}^*$, $\lambda\in\mathbb{C}$)
or $y\,{\mathrm d} x=0$.
What we call saddle-node singularity corresponds to the first case, and the
normal form~$X_0$ corresponds to $p=1$ and $\lambda=0$.
Moreover, a saddle-node singularity can always be analytically reduced to the
form $x^{p+1}\,{\mathrm d} y - A(x,y)\,{\mathrm d} x=0$ with $A(0,y)=y$
(this result goes back to Dulac---see \cite{MR}, \cite{Moussu}),
it is thus legitimate to
consider vector fields of the form~\eqref{eqdefX}, which generate the same
foliations
(we restrict ourselves to $(p,\lambda)=(1,0)$ for the sake of simplicity).
The problem of the {\em analytic} classification of saddle-node singularities was
solved in~\cite{MR}. The resurgent approach to this problem is indicated
in~\cite{Eca84} and~\cite[Vol.~3]{Eca81} (see also~\cite{CNP}).
The resurgent approach consists in analysing the divergence of the normalising
transformation through alien calculus.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
By a normalising transformation we mean a formal diffeomorphism~$\th$ that solves the
conjugacy equation
\begin{equation} \label{eqconjugeq}
X = \th^* X_0.
\end{equation}
Due to the shape of~$X$, one can find a
unique formal solution of the form
\begin{equation} \label{eqdefthph}
\th(x,y) = \big(x, \varphi(x,y) \big), \qquad
\varphi(x,y) = y + \sum_{n\ge0} \varphi_n(x) y^n, \qquad
\varphi_n(x) \in x\mathbb{C}[[x]].
\end{equation}
The first step in the resurgent approach consists in proving that the formal
series~$\varphi_n$ are resurgent {with respect to}\ the variable $z=-1/x$.
We shall prove this fact by using \'Ecalle's mould calculus (see
Theorem~\ref{thmResur} in Section~\ref{secResur} below).
The Euler series $\varphi_0(x) = - \sum_{n\ge1} (n-1)! x^n$ appears in the case
$A(x,y)=x+y$, for which the solution of the conjugacy equation is simply
$\th(x,y) = \big(x, y + \varphi_0(x)\big)$.
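Although no computation is needed here, this example is easy to check to any order: written in the variable~$x$, the equation satisfied by~$\varphi_0$ (equation~\eqref{eqdiffeqzero} below) reads $x^2\varphi_0' = x + \varphi_0$. The following Python sketch, purely illustrative, verifies the first coefficients of the Euler series.
\begin{verbatim}
# Sketch: check x^2 phi0' = x + phi0 on the first N coefficients of the
# Euler series phi0(x) = -sum_{n>=1} (n-1)! x^n.
import sympy as sp

x = sp.symbols('x')
N = 12
phi0 = -sum(sp.factorial(n - 1) * x**n for n in range(1, N + 1))
residual = sp.expand(x**2 * sp.diff(phi0, x) - x - phi0)
print(all(residual.coeff(x, k) == 0 for k in range(N + 1)))  # -> True
\end{verbatim}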
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Observe that $\th(x,y) = \left(x, y + \sum_{n\ge0} \varphi_n(x) y^n\right)$ is a solution of the conjugacy
equation if and only if
\begin{equation} \label{eqdefYzu}
\widetilde Y(z,u) = u \, {\mathrm e}^z + \sum_{n\ge0} u^n {\mathrm e}^{nz} \widetilde\varphi_n(z), \qquad
\widetilde\varphi_n(z) = \varphi_n(-1/z) \in z^{-1}\mathbb{C}[[z^{-1}]],
\end{equation}
is a solution of the differential equation
\begin{equation} \label{eqdiffeqX}
\partial_z \widetilde Y = A(-1/z,\widetilde Y)
\end{equation}
associated with the vector field~$X$.
(Indeed, the first component of the flow of~$X$ is trivial and the
second component is determined by solving~\eqref{eqdiffeqX}; on the other hand,
the flow of~$X_0$ is trivial and, by plugging it into~$\th$, one obtains the
flow of~$X$.)
The formal expansion~$\widetilde Y(z,u)$ is called {\em formal integral}
of the differential equation~\eqref{eqdiffeqX}.
One can obtain its components~$\widetilde\varphi_n(z)$
(and, consequently, the formal series~$\varphi_n(x)$ themselves)
as solutions of ordinary differential
equations, by expanding~\eqref{eqdiffeqX} in powers of~$u$:
\begin{gather}
\label{eqdiffeqzero}
\frac{{\mathrm d} \widetilde\varphi_0}{{\mathrm d} z\,} = A\big(-1/z,\widetilde\varphi_0(z)\big),\\
\label{eqdiffeqn}
\frac{{\mathrm d} \widetilde\varphi_n}{{\mathrm d} z\,} + n\widetilde\varphi_n(z) =
\partial_y A\big(-1/z,\widetilde\varphi_0(z)\big) \widetilde\varphi_n(z) + \widetilde\chi_n(z),
\end{gather}
with $\widetilde\chi_n$ inductively determined by $\widetilde\varphi_0,\dotsc,\widetilde\varphi_{n-1}$.
Only the first equation is non-linear. One can prove the resurgence of the
$\widetilde\varphi_n$'s by exploiting their property of being the unique solutions in
$z^{-1}\mathbb{C}[[z^{-1}]]$ of these equations and devising a perturbative scheme to solve
the first one,\footnote{
See Section~2.1 of~\cite{kokyu} for an illustration of this method on a
non-linear difference equation.
}
but mould calculus is quite a different approach.
\section{Mould-comould expansions for the saddle-node}
\label{secMCexpSN}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The analytic vector fields~$X$ and~$X_0$ can be viewed as derivations of the
algebra $\mathbb{C}\{x,y\}$,
but since we are interested in formal conjugacy, we now consider them as
derivations of $\mathbb{C}[[x,y]]$. We shall first rephrase our problem as a
problem about operators of this algebra.\footnote{
Our algebras will be, unless otherwise specified, associative unital algebras
over~$\mathbb{C}$ (possibly non-commutative).
In this article, operator means endomorphism of the underlying vector space;
thus an operator of $\mathscr A=\mathbb{C}[[x,y]]$ is an element of $\operatorname{End}_\mathbb{C}(\mathscr A)$.
The space $\operatorname{End}_\mathbb{C}(\mathscr A)$ has natural structures of ring and of $\mathscr A$-module,
which are compatible in the sense that $f\in\mathscr A \mapsto f \operatorname{Id}\in \operatorname{End}_\mathbb{C}(\mathscr A)$
is a ring homomorphism.
}
The commutative algebra $\mathscr A=\mathbb{C}[[x,y]]$ is also a local ring; as such, it is
endowed with a metrizable topology, in which the powers of the maximal ideal
$\mathfrak M=\ao f\in\mathbb{C}[[x,y]]\mid f(0,0)=0\af$ form a system of neighbourhoods of~$0$,
which we call Krull topology or topology of the formal convergence and which is
complete (as a uniform structure).\footnote{
This is also called the $\mathfrak M$-adic topology, or the $(x,y)$-adic topology.
Beware that $\mathbb{C}[[x,y]]$ is a topological algebra only if we put the discrete
topology on~$\mathbb{C}$.
}
\begin{lemma}
The set of all continuous algebra homomorphisms of $\mathbb{C}[[x,y]]$ coincides with the
set of all substitution operators, {i.e.}\ operators of the form $f \mapsto
f\circ\th$ with $\th\in\mathfrak M\times\mathfrak M$.
\end{lemma}
\begin{proof}
Any substitution operator is clearly a continuous algebra homomorphism of
$\mathbb{C}[[x,y]]$.
Conversely, let $\Theta$ be a continuous algebra homomorphism.
The idea is that $\Theta$ will be determined by its action on the two
generators of the maximal ideal, and setting $\th = (\Theta x,\Theta y)$ we can
identify~$\Theta$ with the substitution operator $f \mapsto f\circ\th$.
We just need to check that $\Theta x$ and $\Theta y$ both belong to the maximal ideal,
which is the case because, by continuity, $(\Theta x)^n = \Theta(x^n)$ and $(\Theta y)^n
= \Theta(y^n)$ must tend to~$0$ as $n\to\infty$;
one can then write any~$f$ as a convergent---for the Krull topology---series of
monomials $\sum f_{m,n} x^m y^n$ and its image as the formally convergent series
$\Theta f = \sum \Theta(f_{m,n} x^m y^n) = \sum f_{m,n} (\Theta x)^m (\Theta y)^n$.
\end{proof}
A formal invertible transformation thus amounts to a continuous automorphism of
$\mathbb{C}[[x,y]]$.
Since the conjugacy equation~\eqref{eqconjugeq} can be written
$$
X f = \big[ X_0(f\circ\th) \big] \circ \th^{-1},
\qquad f\in\mathbb{C}[[x,y]],
$$
if we work at the level of the substitution operator,
we are left with the problem of finding a continuous automorphism~$\Theta$ of
$\mathbb{C}[[x,y]]$ such that $\Theta (Xf) = X_0(\Theta f)$ for all $f$, {i.e.}
\begin{equation} \label{eqconjugOp}
\Theta X = X_0 \Theta.
\end{equation}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The idea is to construct a solution to~\eqref{eqconjugOp} from the
``building blocks'' of~$X$.
Let us use the Taylor expansion
\begin{equation} \label{eqdefiancN}
A(x,y) = y + \sum_{n\in\mathcal{N}} a_n(x) y^{n+1}, \qquad
\mathcal{N} = \ao n\in\mathbb{Z} \mid n \ge -1 \af
\end{equation}
to write
\begin{gather}
\label{eqdefBn}
X = X_0 + \sum_{n\in\mathcal{N}} a_n(x) B_n, \qquad
B_n = y^{n+1} \frac{\partial\,}{\partial y}, \\
\label{eqassan}
a_n(x) \in x\mathbb{C}\{x\}, \qquad a_0(x) \in x^2\mathbb{C}\{x\}
\end{gather}
(thus incorporating the information from~\eqref{eqassA}).
The series in~\eqref{eqdefBn} must be interpreted as a simply convergent
series of operators of~$\mathbb{C}[[x,y]]$ (the series $\sum a_n B_n f$ is formally
convergent for any $f\in\mathbb{C}[[x,y]]$).
Let us introduce the differential operators
\begin{equation} \label{eqdefbBn}
\mathbf{B}_\emptyset = \operatorname{Id}, \qquad
\mathbf{B}_{\omega_1,\dotsc,\omega_r} = B_{\omega_r} \dotsm B_{\omega_1}
\end{equation}
for $\omega_1,\dotsc,\omega_r\in\mathcal{N}$.
We shall look for an automorphism~$\Theta$ solution of~\eqref{eqconjugOp} in the
form
\begin{equation} \label{eqexpmouldcomould}
\Theta = \sum_{r\ge0} \, \sum_{\omega_1,\dotsc,\omega_r\in\mathcal{N}}
\mathcal{V}^{\omega_1,\dotsc,\omega_r}(x) \mathbf{B}_{\omega_1,\dotsc,\omega_r},
\end{equation}
with the convention that the only term with $r=0$ is
$\mathcal{V}^\emptyset\mathbf{B}_\emptyset$, with $\mathcal{V}^\emptyset = 1$,
and with coefficients
$\mathcal{V}^{\omega_1,\dotsc,\omega_r}(x) \in x\mathbb{C}[[x]]$
to be determined from the data $\{a_n,\, n\in\mathcal{N}\}$
in such a way that
\begin{enumerate}[(i)]
\item the expression~\eqref{eqexpmouldcomould} is a formally convergent series of
operators of~$\mathbb{C}[[x,y]]$ and defines an operator~$\Theta$ which is continuous for
the Krull topology,
\item the operator~$\Theta$ is an algebra automorphism,
\item the operator~$\Theta$ satisfies the conjugacy equation~\eqref{eqconjugOp}.
\end{enumerate}
In \'Ecalle's terminology,
the collection of operators $\{ \mathbf{B}_{\omega_1,\dotsc,\omega_r} \}$ is a typical
example of {\em comould};
any collection of coefficients $\{ \mathcal{V}^{\omega_1,\dotsc,\omega_r} \}$ is a {\em
mould} (here with values in~$\mathbb{C}[[x]]$, but other algebras may be used);
a formally convergent series of the form~\eqref{eqexpmouldcomould} is a {\em
mould-comould expansion}, often abbreviated as
$$
\Theta = \sum \mathcal{V}^\bullet \mathbf{B}_\bullet
$$
(we shall clarify later what ``formally convergent'' means for such multiply-indexed series of
operators).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let us indicate right now the formulas for the problem of the saddle-node~\eqref{eqdefX}:
\begin{lemma} \label{lemdefcV}
The equations
\begin{align}
&\mathcal{V}^\emptyset = 1 \nonumber \\
\label{eqdefcV}
(x^2\frac{{\mathrm d}\,}{{\mathrm d} x} + \omega_1 + \dotsb + \omega_r) &\mathcal{V}^{\omega_1,\dotsc,\omega_r}
= a_{\omega_1} \mathcal{V}^{\omega_2,\dotsc,\omega_r}, \qquad
\omega_1,\dotsc,\omega_r \in \mathcal{N}
\end{align}
inductively determine a unique collection of formal series
$\mathcal{V}^{\omega_1,\dotsc,\omega_r} \in x\mathbb{C}[[x]]$ for $r\ge1$.
Moreover,
\begin{equation} \label{eqvalcV}
\mathcal{V}^{\omega_1,\dotsc,\omega_r} \in x^{\ceil{r/2}}\mathbb{C}[[x]],
\end{equation}
where $\ceil{s}$ denotes, for any $s\in\mathbb{R}$, the least integer not smaller than~$s$.
\end{lemma}
\begin{proof}
Let $\nu$ denote the valuation in $\mathbb{C}[[x]]$: $\nu(\sum c_m x^m) = \min\ao m \mid
c_m\neq0 \af \in\mathbb{N}$ for a non-zero formal series and $\nu(0)=\infty$.
Since $\partial = x^2\frac{{\mathrm d}\,}{{\mathrm d} x}$ increases valuation by at least one unit,
$\partial + \mu$ is invertible for any $\mu\in\mathbb{C}^*$ and the inverse operator
\begin{equation} \label{eqdefinvpamu}
(\partial+\mu)^{-1} = \sum_{r\ge0} \mu^{-r-1}(-\partial)^r
\end{equation}
(formally convergent series of operators) leaves $x\mathbb{C}[[x]]$ invariant.
On the other hand, we {\em define} $\partial^{-1} \colon x^2\mathbb{C}[[x]] \to x\mathbb{C}[[x]]$ by the
formula
$\partial^{-1} \varphi(x) = \int_0^x \big( t^{-2}\varphi(t) \big) \,{\mathrm d} t$,
so that $\psi=\partial^{-1}\varphi$ is the unique solution in $x\mathbb{C}[[x]]$ of the equation
$\partial\psi=\varphi$ whenever $\varphi\in x^2\mathbb{C}[[x]]$.
For $r=1$, equation~\eqref{eqdefcV} has a unique solution~$\mathcal{V}^{\omega_1}$ in
$x\mathbb{C}[[x]]$, because the {right-hand side}\ is~$a_{\omega_1}$, element of $x\mathbb{C}[[x]]$, and even of
$x^2\mathbb{C}[[x]]$ when $\omega_1=0$.
By induction, for $r\ge2$, we get a {right-hand side}\ in $x^2\mathbb{C}[[x]]$ and a unique solution
$\mathcal{V}^\text{\boldmath{$\om$}}$ in $x\mathbb{C}[[x]]$ for $\text{\boldmath{$\om$}}=(\omega_1,\dotsc,\omega_r)\in\mathcal{N}^r$.
Moreover, with the notation $`\text{\boldmath{$\om$}} = (\omega_2,\dotsc,\omega_r)$, we have
$$
\nu(\mathcal{V}^\text{\boldmath{$\om$}}) \ge \alpha^\text{\boldmath{$\om$}} + \nu(\mathcal{V}^{`\text{\boldmath{$\om$}}}), \qquad
\text{with}\;
\alpha^\text{\boldmath{$\om$}} = \left| \begin{aligned}
0 \quad &\text{if $\omega_1+\dotsb+\omega_r=0$ and $\omega_1\neq0$,} \\
1 \quad &\text{if $\omega_1+\dotsb+\omega_r\neq0$ or $\omega_1=0$.}
\end{aligned} \right.
$$
Thus $\nu(\mathcal{V}^\text{\boldmath{$\om$}}) \ge \operatorname{card} \mathcal{R}^\text{\boldmath{$\om$}}$, with
$\mathcal{R}^\text{\boldmath{$\om$}} = \ao i\in[1,r] \mid \omega_i+\dotsb+\omega_r\neq0 \;\text{or}\; \omega_i=0
\af$
for $r\ge1$.
Let us check that $\operatorname{card}\mathcal{R}^\text{\boldmath{$\om$}} \ge \ceil{r/2}$.
This stems from the fact that if $i\not\in\mathcal{R}^\text{\boldmath{$\om$}}$, $i\ge2$, then $i-1\in\mathcal{R}^\text{\boldmath{$\om$}}$
(indeed, in that case $\omega_{i-1}+\dotsb+\omega_r = \omega_{i-1}$),
and that $\mathcal{R}^\text{\boldmath{$\om$}}$ has at least one element, namely~$r$.
The inequality is thus true for $r=1$ or~$2$; by induction, if $r\ge3$, then
$\mathcal{R}^\text{\boldmath{$\om$}} \cap [3,r] = \mathcal{R}^{``\text{\boldmath{$\om$}}}$ with $``\text{\boldmath{$\om$}} = (\omega_3,\dotsc,\omega_r)$ and
either $2\in \mathcal{R}^\text{\boldmath{$\om$}}$, or $2\not\in\mathcal{R}^\text{\boldmath{$\om$}}$ and $1\in\mathcal{R}^\text{\boldmath{$\om$}}$,
thus $\operatorname{card}\mathcal{R}^\text{\boldmath{$\om$}} \ge 1 + \operatorname{card}\mathcal{R}^{``\text{\boldmath{$\om$}}}$.
\end{proof}
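As an illustration (not needed in what follows), the recursion~\eqref{eqdefcV} can be run directly on truncated series. The sketch below uses the toy datum $a_{-1}(x)=x$, {i.e.}\ the Euler case $A(x,y)=x+y$, an assumption made purely for the example; it recovers the Euler series as~$\mathcal{V}^{(-1)}$.
\begin{verbatim}
# Sketch: solve (x^2 d/dx + om_1+...+om_r) V = a_{om_1} V^{tail} on series
# truncated at order N, with the toy input a_{-1}(x) = x (Euler case).
from fractions import Fraction

N = 8
a = {-1: [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 1)}  # a_{-1} = x

def mul(f, g):                        # product of truncated series
    h = [Fraction(0)] * (N + 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j <= N:
                h[i + j] += fi * gj
    return h

def solve(mu, g):                     # unique V in xC[[x]] with (d + mu)V = g,
    V = [Fraction(0)] * (N + 1)       # where (d f)_k = (k-1) f_{k-1}
    for k in range(1, N + 1):
        if mu != 0:
            V[k] = (g[k] - (k - 1) * V[k - 1]) / mu
        elif k < N:                   # mu = 0 needs g in x^2 C[[x]]
            V[k] = g[k + 1] / Fraction(k)
    return V

def V(word):                          # V^{om_1,...,om_r} via the recursion
    if not word:
        return [Fraction(1)] + [Fraction(0)] * N
    return solve(sum(word), mul(a[word[0]], V(word[1:])))

print([str(c) for c in V((-1,))[1:6]])  # -> ['-1', '-1', '-2', '-6', '-24']
\end{verbatim}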
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
To give a definition of formally summable families of operators adapted to our
needs, we shall consider our operators as elements of a topological ring of a
certain kind and make use of the Cauchy criterium for summable families.
\begin{definition} \label{defpseudoval}
Given a ring~$\mathscr E$ (possibly non-commutative), we call pseudovaluation any map
$\operatorname{val} \colon \mathscr E \to
\mathbb{Z}\cup\{\infty\}$ satisfying, for any $\Theta,\Theta_1,\Theta_2\in\mathscr E$,
\begin{itemize}
\item $\vl{\Theta}=\infty$ iff $\Theta=0$,
\item $\vl{\Theta_1-\Theta_2} \ge \min\big\{ \!\vl{\Theta_1}, \vl{\Theta_2} \big\}$,
\item $\vl{\Theta_1\Theta_2} \ge \vl{\Theta_1} + \vl{\Theta_2}$,
\end{itemize}
The formula $\operatorname{d}_{\operatorname{val}}(\Theta_1,\Theta_2) = 2^{-\vl{\Theta_2-\Theta_1}}$ then defines a
distance, for which $\mathscr E$ is a topological ring.
We call $(\mathscr E,\operatorname{val})$ a complete pseudovaluation ring if the distance~$\operatorname{d}_{\operatorname{val}}$ is complete.
\end{definition}
We use the word pseudovaluation rather than valuation because
$\mathscr E$ is not assumed to be an integral domain, and we do not impose equality in the
third property.
The distance~$\operatorname{d}_{\operatorname{val}}$ is ultrametric, translation-invariant, and it satisfies
$\operatorname{d}_{\operatorname{val}}(0,\Theta_1\Theta_2) \le \operatorname{d}_{\operatorname{val}}(0,\Theta_1) \operatorname{d}_{\operatorname{val}}(0,\Theta_2)$.
Let us denote by~$1$ the unit of~$\mathscr E$.
Giving a pseudovaluation on~$\mathscr E$ such that $\vl{1}=0$ is equivalent to giving a filtration
$(\mathscr E_\delta)_{\delta\in\mathbb{Z}}$ that is compatible with its ring structure ({i.e.}\ a sequence of
additive subgroups such that $1\in\mathscr E_0$, $\mathscr E_{\delta+1}\subset\mathscr E_\delta$ and
$\mathscr E_\delta\mathscr E_{\delta'}\subset\mathscr E_{\delta+\delta'}$ for all $\delta,\delta'\in\mathbb{Z}$),
exhaustive ($\bigcup\mathscr E_\delta=\mathscr E$) and separated ($\bigcap\mathscr E_\delta=\{0\}$).
Indeed, the order function~$\operatorname{val}$ associated with the filtration, defined by $\vl{\Theta} =
\sup\ao\delta\in\mathbb{Z} \mid \Theta\in\mathscr E_\delta \af$, is then a
pseudovaluation; conversely, one can set $\mathscr E_\delta = \ao \Theta\in\mathscr E \mid
\vl{\Theta}\ge\delta \af$.
\begin{definition} \label{defformsumfam}
Let $(\mathscr E,\operatorname{val})$ be a complete pseudovaluation ring.
Given a set $I$, a family $(\Theta_i)_{i\in I}$ in~$\mathscr E$ is said to be formally
summable if, for any $\delta\in\mathbb{Z}$,
the set $\ao i\in I \mid \vl{\Theta_i} \le \delta \af$ is finite
(the support of the family is thus countable, if not~$I$ itself).
\end{definition}
One can then check that, for any exhaustion $(I_k)_{k\in\mathbb{N}}$ by finite sets of
the support of the family, the sequence $\sum_{i\in I_k} \Theta_i$ is a Cauchy
sequence for $\operatorname{d}_{\operatorname{val}}$, and that the limit does not depend on the chosen
exhaustion; the common limit is then denoted $\sum_{i\in I}\Theta_i$.
Observe that there must exist $\delta_*\in\mathbb{Z}$ such that $\vl{\Theta_i}\ge\delta_*$ for
all $i\in I$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We apply this to operators of $\mathscr A = \mathbb{C}[[x,y]]$ as follows.
The Krull topology of~$\mathscr A$ can be defined with the help of the monomial valuation
$$
\nu_4(f) = \min\ao 4m+n \mid f_{m,n}\neq0 \af
\quad \text{for} \; f = \sum f_{m,n} x^m y^n \neq 0,
\qquad \nu_4(0)=\infty.
$$
Indeed, for any sequence $(f_k)_{k\in\mathbb{N}}$ of~$\mathscr A$,
$$
f_k \xrightarrow[k\to\infty]{} 0
\quad\Longleftrightarrow\quad
\sum_{k\in\mathbb{N}} f_k \;\text{formally convergent}
\quad\Longleftrightarrow\quad
\nu_4(f_k)\xrightarrow[k\to\infty]{}\infty.
$$
In particular, $(\mathbb{C}[[x,y]],\nu_4)$ is a complete pseudovaluation ring.
Suppose more generally that $(\mathscr A,\nu)$ is any complete pseudovaluation ring
such that~$\mathscr A$ is also an algebra.
Corresponding to the filtration $\mathscr A_p = \ao f\in\mathscr A \mid \nu(f)\ge p\af$,
$p\in\mathbb{Z}$, there is a filtration of $\operatorname{End}_\mathbb{C}(\mathscr A)$:
$$
\mathscr E_\delta = \ao \Theta \in \operatorname{End}_\mathbb{C}(\mathscr A) \mid \Theta(\mathscr A_p) \subset \mathscr A_{p+\delta}
\;\text{for each $p$} \af, \qquad \delta\in\mathbb{Z}.
$$
\begin{definition} \label{defopval}
Let $\delta\in\mathbb{Z}$. An element~$\Theta$ of~$\mathscr E_\delta$ is said to be
an ``operator of valuation~$\ge\delta$''.
We then define $\vln\Theta \in \mathbb{Z}\cup\{\infty\}$, the ``valuation of~$\Theta$'',
as the largest $\delta_0$ such that $\Theta$ has
valuation~$\ge\delta_0$; this number is infinite only for $\Theta=0$.
\end{definition}
Denote by $\mathscr E$ the union $\bigcup\mathscr E_\delta$ over all $\delta\in\mathbb{Z}$:
these are the operators of~$\mathscr A$ ``having a valuation'' ({with respect to}~$\nu$), {i.e.}
$$
\mathscr E = \ao \Theta\in\operatorname{End}_\mathbb{C}(\mathscr A) \mid
\vln{\Theta} = \inf_{f\in\mathscr A} \{ \nu(\Theta f) - \nu(f) \} > -\infty \af.
$$
They clearly are continuous for the topology induced by~$\nu$ on~$\mathscr A$;
they form a subalgebra of the algebra of all continuous operators\footnote{
Not all continuous operators of~$\mathscr A$ belong to~$\mathscr E$: think of the operator
of~$\mathbb{C}[[y]]$ which maps $y^m$ to~$y^{m/2}$ if $m$ is even and to~$0$ if $m$
is odd.
}
and $(\mathscr E,\operatorname{val}_\nu)$ is a complete pseudovaluation ring.
For any formally summable family $(\Theta_i)_{i\in I}$ of sum~$\Theta$ in~$\mathscr E$ and
$f\in\mathscr A$, the family $(\Theta_i f)_{i\in I}$ is summable in the topological
ring $\mathscr A$, with sum~$\Theta f$.
\begin{lemma} \label{lemCVformal}
With the notation of formula~\eqref{eqdefbBn} and Lemma~\ref{lemdefcV}, the
family $(\mathcal{V}^{\omega_1,\dotsc,\omega_r} \mathbf{B}_{\omega_1,\dotsc,\omega_r})_{r\ge1,\,
\omega_1,\dotsc,\omega_r\in\mathcal{N}}$ is formally summable in the algebra of
operators of $\mathbb{C}[[x,y]]$ having a valuation {with respect to}~$\nu_4$.
In particular the resulting operator~$\Theta$ is continuous for the Krull
topology.
Similarly, the formula
\begin{equation} \label{eqdefcVt}
{\cV\hspace{-.45em}\raisebox{.35ex}{--}}^{\omega_1,\dotsc,\omega_r} = (-1)^r \mathcal{V}^{\omega_r,\dotsc,\omega_1}
\end{equation}
gives rise to a formally summable family
$({\cV\hspace{-.45em}\raisebox{.35ex}{--}}^{\omega_1,\dotsc,\omega_r} \mathbf{B}_{\omega_1,\dotsc,\omega_r})_{r\ge1,\,
\omega_1,\dotsc,\omega_r\in\mathcal{N}}$.
\end{lemma}
\begin{proof}
Clearly $\nu_4(B_n f) \ge \nu_4(f) + n$ and, by induction,
$$
\nu_4(\mathbf{B}_{\omega_1,\dotsc,\omega_r} f) \ge \nu_4(f) + \omega_1+\dotsb+\omega_r.
$$
As a consequence of~\eqref{eqvalcV},
$$
\nu_4(\mathcal{V}^{\omega_1,\dotsc,\omega_r} \mathbf{B}_{\omega_1,\dotsc,\omega_r} f)
\ge \nu_4(f) + \omega_1+\dotsb+\omega_r + 2r, \qquad
\omega_1,\dotsc,\omega_r\in\mathcal{N}.
$$
Hence, with the above notations, each $\mathcal{V}^{\omega_1,\dotsc,\omega_r}
\mathbf{B}_{\omega_1,\dotsc,\omega_r}$ is an element $\mathscr E$ with valuation $\ge
\omega_1+\dotsb+\omega_r + 2r$,
and the same thing holds for each ${\cV\hspace{-.45em}\raisebox{.35ex}{--}}^{\omega_1,\dotsc,\omega_r}
\mathbf{B}_{\omega_1,\dotsc,\omega_r}$.
The $\omega_i$'s may be negative but they are always $\ge-1$, thus
$\omega_1+\dotsb+\omega_r + r\ge0$.
Therefore, for any $\delta>0$, the condition $\omega_1+\dotsb+\omega_r + 2r \le \delta$
implies $r\le \delta$ and
$\sum (\omega_i+1) = \omega_1+\dotsb+\omega_r + r \le \delta$.
Since this condition is fulfilled only a finite number of times, the conclusion
follows.
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Here is the key statement, the proof of which will be spread over
Sections~\ref{secAlgMoulds}--\ref{secPfThm}:
\begin{thm} \label{thmSNformal}
The continuous operator $\Theta = \sum \mathcal{V}^\bullet \mathbf{B}_\bullet$ defined by
Lemmas~\ref{lemdefcV} and~\ref{lemCVformal} is an algebra automorphism of
$\mathbb{C}[[x,y]]$ which satisfies the conjugacy equation~\eqref{eqconjugOp}.
The inverse operator is $\sum {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet \mathbf{B}_\bullet$.
\end{thm}
Observe that $\Theta x = x$, thus $\Theta$ must be the substitution operator for a
formal transformation of the form $\th(x,y) = \big( x,\varphi(x,y)\big)$,
with $\varphi = \Theta y$, in accordance with~\eqref{eqdefthph}.
An easy induction yields
\begin{equation} \label{eqdefbeomb}
\mathbf{B}_\text{\boldmath{$\om$}} y = \beta_\text{\boldmath{$\om$}} y^{\omega_1+\dotsb+\omega_r+1}, \qquad \text{\boldmath{$\om$}}\in\mathcal{N}^r,\, r\ge 1,
\end{equation}
with $\beta_\text{\boldmath{$\om$}} = 1$ if $r=1$,
$\beta_\text{\boldmath{$\om$}} = (\omega_1+1)(\omega_1+\omega_2+1)\dotsm(\omega_1+\dotsb+\omega_{r-1}+1)$ if $r\ge2$.
We have $\beta_\text{\boldmath{$\om$}}=0$ whenever $\omega_1+\dotsb+\omega_r \le -2$ (since \eqref{eqdefbeomb}
holds a priori in the fraction field $\mathbb{C}(\!(y)\!)$ but $\mathbf{B}_\text{\boldmath{$\om$}} y$ belongs to~$\mathbb{C}[[y]]$), hence
\begin{multline} \label{eqseriesgivphn}
\th(x,y) = \big( x,\varphi(x,y)\big), \\
\varphi(x,y) = y + \sum_{n\ge0} \varphi_n(x) y^n, \qquad
\varphi_n = \sum_{\substack{r\ge1, \, \text{\boldmath{$\om$}}\in\mathcal{N}^r \\ \omega_1+\dotsb+\omega_r+1=n}}
\beta_\text{\boldmath{$\om$}} \mathcal{V}^\text{\boldmath{$\om$}}
\end{multline}
(in the series giving $\varphi_n$, there are only finitely many terms for each~$r$,
\eqref{eqvalcV} thus yields its formal convergence in $x\mathbb{C}[[x]]$).
Similarly, $\Theta^{-1} = \sum {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet \mathbf{B}_\bullet$ is the substitution operator of
a formal transformation $(x,y) \mapsto \big(x,\psi(x,y)\big)$, which is nothing
but~$\th^{-1}$, and
\begin{equation} \label{eqpsiThii}
\psi(x,y) = \Theta^{-1} y = y + \sum_{n\ge0} \psi_n(x) y^n,
\end{equation}
where each coefficient can be represented as a formally convergent series
$\displaystyle
\psi_n = \sum_{\omega_1+\dotsb+\omega_r+1=n}
\beta_\text{\boldmath{$\om$}} {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\text{\boldmath{$\om$}}$.
\smallskip
See Lemma~\ref{lemLagrpsinphn} on p.~\pageref{lemLagrpsinphn} for formulas
relating directly the~$\varphi_n$'s and the~$\psi_n$'s.
\begin{remark} \label{remheuri}
The $\mathcal{V}^\text{\boldmath{$\om$}}$'s are generically divergent with at most Gevrey-$1$ growth of the
coefficients, as can be expected from formula~\eqref{eqdefinvpamu}; for
instance, for $\omega_1\neq0$, we get
$\mathcal{V}^{\omega_1}(x) = \sum \omega_1^{-r-1} \left(-x^2\frac{{\mathrm d}\,}{{\mathrm d}
x}\right)^r a_{\omega_1}$ which is generically divergent because the repeated
differentiations are not compensated by the division by any factorial-like
expression.
This divergence is easily studied through formal Borel transform {with respect to}\ $z=-1/x$,
which is the starting point of the resurgent analysis of the saddle-node---see
Section~\ref{secResur}.
We shall see in Section~\ref{secBESN} why the $\mathcal{V}^\text{\boldmath{$\om$}}$'s can be called
``resurgence monomials''.
\end{remark}
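To make the first formula of the remark concrete, here is a purely illustrative computation with $a_{\omega_1}(x)=x$ and $\omega_1=-1$ (the Euler case again): the partial sums of the formula build up the Euler series.
\begin{verbatim}
# Sketch: partial sums of V^{om_1} = sum_r om_1^{-r-1} (-x^2 d/dx)^r a_{om_1}
# for a_{om_1}(x) = x and om_1 = -1.
import sympy as sp

x = sp.symbols('x')
term, total = x, sp.Integer(0)
for r in range(8):
    total += sp.Integer(-1)**(-r - 1) * term
    term = -x**2 * sp.diff(term, x)   # one more power of (-x^2 d/dx)
print(sp.expand(total))
# -> -x - x**2 - 2*x**3 - 6*x**4 - ... - 5040*x**8 (the Euler series)
\end{verbatim}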
The proof of Theorem~\ref{thmSNformal} will follow easily from the general
notions introduced in the next sections.
\part{The formalism of moulds}
\section{The algebra of moulds} \label{secAlgMoulds}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
In this section and the next three ones, we assume that we are given a non-empty
set~$\Omega$ and a commutative $\mathbb{C}$-algebra~$\mathbf{A}$, the unit of which is
denoted~$1$.
In the previous section, the roles of~$\Omega$ and~$\mathbf{A}$ were played by~$\mathcal{N}$ and
$\mathbb{C}[[x]]$.
It is sometimes convenient to have a commutative semigroup structure on~$\Omega$;
then we would rather take $\Omega=\mathbb{Z}$ in the previous section and consider that the
mould $\{ \mathcal{V}^{\omega_1,\dotsc,\omega_r} \}$ was defined on~$\mathbb{Z}$ but supported
on~$\mathcal{N}$ ({i.e.}\ we extend it by~$0$ whenever one of the $\omega_i$'s is $\le-2$).
We consider~$\Omega$ as an alphabet and denote by $\Omega^\bullet$ the free monoid of
{\em words}: a word is any finite sequence of letters,
$\text{\boldmath{$\om$}} = (\omega_1,\dotsc,\omega_r)$ with $\omega_1, \dotsc,\omega_r\in\Omega$;
its {\em length} $r = r(\text{\boldmath{$\om$}})$ can be any non-negative integer.
The only word of zero length is the empty word, denoted~$\emptyset$, which is
the unit of {\em concatenation}, the monoid law $(\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}})\mapsto\text{\boldmath{$\om$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\eta$}}$
defined by
$$
(\omega_1,\ldots,\omega_r) \raisebox{.15ex}{$\centerdot$} (\eta_1,\ldots,\eta_s) =
(\omega_1,\ldots,\omega_r,\eta_1,\ldots,\eta_s)
$$
for non-empty words.
As previously alluded to, a {\em mould on~$\Omega$ with values in~$\mathbf{A}$} is nothing but
a map $\Omega^\bullet \to \mathbf{A}$.
It is customary to denote the value of the mould on a word~$\text{\boldmath{$\om$}}$ by
affixing~$\text{\boldmath{$\om$}}$ as an upper index to the symbol representing the mould,
and to refer to the mould itself by using~$\bullet$ as upper index.
Hence $\mathcal{V}^\bullet$ is the mould, the value of which at~$\text{\boldmath{$\om$}}$ is denoted $\mathcal{V}^\text{\boldmath{$\om$}}$.
A mould with values in~$\mathbb{C}$ is called a {\em scalar mould}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Being the set of all maps from a set to the ring~$\mathbf{A}$, the set of moulds
$\mathscr M^\bullet(\Omega,\mathbf{A})$ has a natural structure of $\mathbf{A}$-module:
addition and ring multiplication are defined component-wise
(for instance, if $\mu\in\mathbf{A}$ and $M^\bullet\in\mathscr M^\bullet(\Omega,\mathbf{A})$,
the mould $N^\bullet = \mu M^\bullet$ is defined by $N^\text{\boldmath{$\om$}} = \mu M^\text{\boldmath{$\om$}}$
for all $\text{\boldmath{$\om$}}\in\Omega^\bullet$).
The ring structure of~$\mathbf{A}$ together with the monoid structure
of~$\Omega^\bullet$ also give rise to a {\em multiplication of moulds}, thus defined:
\begin{equation} \label{eqdefmultiplimould}
P^\bullet = M^\bullet \times N^\bullet \colon \;
\text{\boldmath{$\om$}} \mapsto P^\text{\boldmath{$\om$}} = \sum_{\text{\boldmath{$\om$}}= \text{\boldmath{$\om$}}^1\!\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\om$}}^2} M^{\text{\boldmath{$\om$}}^1} N^{\text{\boldmath{$\om$}}^2},
\end{equation}
with summation over the $r(\text{\boldmath{$\om$}})+1$ decompositions of~$\text{\boldmath{$\om$}}$ into two words (including $\text{\boldmath{$\om$}}^1$
or $\text{\boldmath{$\om$}}^2 = \emptyset$).
Mould multiplication is associative but not commutative (except if $\Omega$ has
only one element).
We get a ring structure on $\mathscr M^\bullet(\Omega,\mathbf{A})$, with unit
$$
1^\bullet \colon\; \text{\boldmath{$\om$}} \mapsto 1^\text{\boldmath{$\om$}} =
\left| \begin{aligned}
1 \quad &\text{if $\text{\boldmath{$\om$}} = \emptyset$}\\
0 \quad &\text{if $\text{\boldmath{$\om$}} \neq \emptyset$.}
\end{aligned} \right.
$$
One can check that a mould~$M^\bullet$ is invertible if and only if $M^\emptyset$
is invertible in~$\mathbf{A}$ (see below).
One must in fact regard $\mathscr M^\bullet(\Omega,\mathbf{A})$ as an $\mathbf{A}$-algebra, {i.e.}\ its module
structure and ring structure are compatible:
$\mu\in\mathbf{A} \mapsto \mu\, 1^\bullet\in\mathscr M^\bullet(\Omega,\mathbf{A})$ is indeed a ring
homomorphism, the image of which lies in the center of the ring of moulds.
The reader familiar with Bourbaki's {\em Elements of mathematics} will have
recognized in $\mathscr M^\bullet(\Omega,\mathbf{A})$ the large algebra (over~$\mathbf{A}$) of the
monoid~$\Omega^\bullet$ ({\em Alg.}, chap.~III, \S2, n$^{\text{o}}$10).
Other authors use the notation $\mathbf{A} \langle\!\langle \Omega \rangle\!\rangle$ or $\mathbf{A}[[ \mathrm{T}^\Omega ]]$
to denote this $\mathbf{A}$-algebra, viewing it as the completion of the
free $\mathbf{A}$-algebra over~$\Omega$ for the pseudovaluation~$\operatorname{ord}$ defined below.
The originality of moulds lies in the way they are used:
\begin{enumerate}[--]
\item the shuffling operation available in the free monoid~$\Omega^\bullet$ will lead us in
Section~\ref{secAltSym} to single out specific classes of moulds, enjoying
certain symmetry or antisymmetry properties of fundamental importance (and this
is only a small amount of all the structures used by \'Ecalle in wide-ranging contexts);
\item we shall see in Sections~\ref{secGenMcM} and~\ref{secContrAltSym} how to contract moulds into
``comoulds'' (and this yields non-trivial results in the local study of analytic
dynamical systems);
\item the extra structure of commutative semigroup on~$\Omega$ will allow us to
define another operation, the ``composition'' of moulds (see below).
\end{enumerate}
There is a pseudovaluation $\operatorname{ord} \colon \mathscr M^\bullet(\Omega,\mathbf{A}) \to
\mathbb{N}\cup\{\infty\}$, which we call ``order'':
we say that a mould~$M^\bullet$ has order $\ge s$ if $M^\text{\boldmath{$\om$}}=0$ whenever
$r(\text{\boldmath{$\om$}})<s$,
and $\od{M^\bullet}$ is the largest such~$s$.
This way, we get a complete pseudovaluation ring
$\big(\mathscr M^\bullet(\Omega,\mathbf{A}),\operatorname{ord}\big)$.
In fact, if $\mathbf{A}$ is an integral domain (as is the case of~$\mathbb{C}[[x]]$), then
$\mathscr M^\bullet(\Omega,\mathbf{A})$ is an integral domain and $\operatorname{ord}$ is a valuation.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
It is easy to construct ``mould derivations'', {i.e.}\ $\mathbb{C}$-linear operators~$D$ of
$\mathscr M^\bullet(\Omega,\mathbf{A})$ such that $D(M^\bullet\times N^\bullet) = (D M^\bullet)\times N^\bullet
+ M^\bullet \times D N^\bullet$.
For instance, for any function $\varphi \colon \Omega \to \mathbf{A}$, the formula
$$
D_\varphi M^\text{\boldmath{$\om$}} = \left| \begin{aligned}
0 \qquad\qquad\qquad \quad & \text{if $\text{\boldmath{$\om$}} = \emptyset$}\\
\big( \varphi(\omega_1) + \dotsb + \varphi(\omega_r) \big) M^\text{\boldmath{$\om$}}
\quad & \text{if $\text{\boldmath{$\om$}} = (\omega_1,\ldots,\omega_r)$}
\end{aligned} \right.
$$
defines a mould derivation $D_\varphi$.
With $\varphi\equiv1$, we get $D M^\text{\boldmath{$\om$}} = r(\text{\boldmath{$\om$}})M^\text{\boldmath{$\om$}}$.
When $\Omega$ is a commutative semigroup (the operation of which is denoted
additively), we define the {\em sum of a non-empty word} as
$$
\norm{\text{\boldmath{$\om$}}} = \omega_1 + \dotsb + \omega_r \in \Omega,
\qquad \text{\boldmath{$\om$}} = (\omega_1, \dotsc, \omega_r) \in \Omega^\bullet.
$$
Then, for any mould $U^\bullet$ such that $U^\emptyset=0$, the formula
\begin{equation} \label{eqdefnaUbul}
\nabla_{U^\bullet} M^\text{\boldmath{$\om$}} = \sum_{\text{\boldmath{$\om$}}=\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}},\,\text{\boldmath{$\beta$}}\neq\emptyset}
U^\text{\boldmath{$\beta$}} \, M^{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$} \norm{\text{\boldmath{$\beta$}}} \raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}}
\end{equation}
defines a mould derivation $\nabla_{U^\bullet}$.
The derivation~$D_\varphi$ is nothing but $\nabla_{U^\bullet}$ with $U^\text{\boldmath{$\om$}} =
\varphi(\omega_1)$ for $\text{\boldmath{$\om$}}=(\omega_1)$ and $U^\text{\boldmath{$\om$}} = 0$ for $r(\text{\boldmath{$\om$}})\neq1$.
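For instance, at a word of length two, formula~\eqref{eqdefnaUbul} gives
$$
\nabla_{U^\bullet} M^{\omega_1,\omega_2} = \big( U^{\omega_1} + U^{\omega_2} \big) M^{\omega_1,\omega_2}
+ U^{\omega_1,\omega_2} M^{(\omega_1+\omega_2)},
$$
in which only the first term survives when $U^\bullet$ is concentrated in length~$1$.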
When $\Omega\subset\mathbf{A}$, an important example is
\begin{equation} \label{eqdefna}
\nabla M^\text{\boldmath{$\om$}} = \norm{\text{\boldmath{$\om$}}} M^\text{\boldmath{$\om$}},
\end{equation}
obtained with $\varphi(\eta)\equiv\eta$.
On the other hand, every derivation $d\colon\mathbf{A}\to\mathbf{A}$ obviously induces a mould
derivation $D$, the action of which on any mould~$M^\bullet$ is defined by
\begin{equation} \label{eqdefDd}
D M^\text{\boldmath{$\om$}} = d(M^\text{\boldmath{$\om$}}), \qquad \text{\boldmath{$\om$}}\in\Omega^\bullet.
\end{equation}
\begin{remark} \label{remmouldVsol}
With $\Omega=\mathcal{N}$ defined by~\eqref{eqdefiancN} and $\mathbf{A}=\mathbb{C}[[x]]$, the
mould~$\mathcal{V}^\bullet$ determined in Lemma~\ref{lemdefcV} is the unique solution of
the mould equation
\begin{equation} \label{eqmouldeq}
(D+\nabla)\mathcal{V}^\bullet = J_a^\bullet \times \mathcal{V}^\bullet,
\end{equation}
such that $\mathcal{V}^\emptyset=1$ and $\mathcal{V}^\text{\boldmath{$\om$}}\in x\mathbb{C}[[x]]$ for $\text{\boldmath{$\om$}}\neq\emptyset$,
with $D$ induced by $d=x^2\frac{{\mathrm d}\,}{{\mathrm d} x}$ and
\begin{equation} \label{eqdefJa}
J_a^\text{\boldmath{$\om$}} = \left| \begin{aligned}
a_{\omega_1} \quad & \text{if $\text{\boldmath{$\om$}} = (\omega_1)$}\\
0 \enspace\; \quad & \text{if $r(\text{\boldmath{$\om$}})\neq1$}.
\end{aligned} \right.
\end{equation}
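Concretely, at a word $(\omega_1)$ of length one, equation~\eqref{eqmouldeq} reads
$$
x^2\frac{{\mathrm d}\,}{{\mathrm d} x}\mathcal{V}^{(\omega_1)} + \omega_1\,\mathcal{V}^{(\omega_1)} = a_{\omega_1},
$$
since $(J_a^\bullet\times\mathcal{V}^\bullet)^{(\omega_1)} = J_a^{(\omega_1)}\,\mathcal{V}^\emptyset = a_{\omega_1}$.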
\end{remark}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
When $\Omega$ is a commutative semigroup, the {\em composition of moulds} is defined as follows:
\label{seccomposmoulds}
\begin{multline*}
C^\bullet = M^\bullet \circ U^\bullet \colon \qquad
\emptyset \enspace\mapsto\enspace C^\emptyset = M^\emptyset, \\
\text{\boldmath{$\om$}}\neq\emptyset \mapsto C^\text{\boldmath{$\om$}} = \sum_{ \substack{s\ge1,\,
\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^s\neq\emptyset \\
\text{\boldmath{$\om$}} = \text{\boldmath{$\om$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\om$}}^s} }
M^{(\norm{\text{\boldmath{$\om$}}^1},\dotsc,\norm{\text{\boldmath{$\om$}}^s})}
U^{\text{\boldmath{$\om$}}^1} \dotsm U^{\text{\boldmath{$\om$}}^s}, \qquad \,
\end{multline*}
with summation over all possible decompositions of $\text{\boldmath{$\om$}}$ into non-empty words
(thus $1\le s\le r(\text{\boldmath{$\om$}})$ and the sum is finite).
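For instance,
$$
C^{\omega_1,\omega_2} = M^{(\omega_1+\omega_2)}\, U^{\omega_1,\omega_2}
+ M^{\omega_1,\omega_2}\, U^{\omega_1} U^{\omega_2},
$$
the two terms corresponding to $s=1$ and $s=2$ respectively.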
The map $M^\bullet \mapsto M^\bullet \circ U^\bullet$ is clearly $\mathbf{A}$-linear; it is in fact an
$\mathbf{A}$-algebra homomorphism:
$$
(M^\bullet \circ U^\bullet) \times (N^\bullet \circ U^\bullet) =
(M^\bullet\times N^\bullet) \circ U^\bullet
$$
(the verification of this multiplicativity property is left as an exercise).
Obviously,
$ 1^\bullet \circ U^\bullet = 1^\bullet $ for any mould~$U^\bullet$.
The {\em identity mould}
$$
I^\bullet \colon \text{\boldmath{$\om$}} \mapsto I^\text{\boldmath{$\om$}} =
\left| \begin{aligned}
1 \quad &\text{if $r(\text{\boldmath{$\om$}}) = 1$}\\
0 \quad &\text{if $r(\text{\boldmath{$\om$}}) \neq 1$}
\end{aligned} \right.
$$
satisfies $ M^\bullet \circ I^\bullet = M^\bullet $ for any mould~$M^\bullet$.
But $ I^\bullet \circ U^\bullet = U^\bullet $ only if $U^\emptyset = 0$ (a requirement that we
could have imposed when defining mould composition, since the value of~$U^\emptyset$
is ignored when computing $M^\bullet\circ U^\bullet$); in general,
$ I^\bullet \circ U^\bullet = U^\bullet - U^\emptyset\,1^\bullet$.
Mould composition is associative\footnote{%
Hint: The computation of $M^\bullet\circ(U^\bullet\circ V^\bullet)$ at~$\text{\boldmath{$\om$}}$ involves all the
decompositions $\text{\boldmath{$\om$}} = \text{\boldmath{$\om$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\om$}}^s$ into non-empty words and then
all the decompositions of each factor $\text{\boldmath{$\om$}}^i$ as
$\text{\boldmath{$\om$}}^1 = \text{\boldmath{$\alpha$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^{i_1},
\text{\boldmath{$\om$}}^2 = \text{\boldmath{$\alpha$}}^{i_1+1} \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^{i_2}, \dotsc,
\text{\boldmath{$\om$}}^s = \text{\boldmath{$\alpha$}}^{i_{s-1}+1} \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^{i_s}$
(where $1\le i_1 < i_2 < \dotsb < i_s=t$, with each $\text{\boldmath{$\alpha$}}^j$ non-empty);
it is equivalent to sum first over all the decompositions
$\text{\boldmath{$\om$}} = \text{\boldmath{$\alpha$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^t$ and then to consider all manners of regrouping
adjacent factors $(\text{\boldmath{$\alpha$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^{i_1}) \raisebox{.15ex}{$\centerdot$}
(\text{\boldmath{$\alpha$}}^{i_1+1} \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^{i_2}) \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,
(\text{\boldmath{$\alpha$}}^{i_{s-1}+1} \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\alpha$}}^{i_s})$,
which yields the value of $(M^\bullet\circ U^\bullet)\circ V^\bullet$ at~$\text{\boldmath{$\om$}}$.
}
and not commutative. One can check that a
mould~$U^\bullet$ admits an inverse for composition (a mould~$V^\bullet$ such that
$V^\bullet \circ U^\bullet = U^\bullet \circ V^\bullet = I^\bullet$) if and only if
$U^\text{\boldmath{$\om$}}$ is invertible in~$\mathbf{A}$ whenever $r(\text{\boldmath{$\om$}})=1$
and $U^\emptyset=0$.
These moulds thus form a group under composition.
In the following, we do not always assume~$\Omega$ to be a commutative semigroup
and mould composition is thus not always defined.
However, observe that, in the absence of semigroup structure, the definition of
$M^\bullet \circ U^\bullet$ makes sense for any mould~$M^\bullet$ such that $M^\text{\boldmath{$\om$}}$ only
depends on~$r(\text{\boldmath{$\om$}})$ and that most of the above properties can be adapted to
this particular situation.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
As an elementary illustration, one can express the multiplicative inverse of a
mould $M^\bullet$ with $\mu=M^\emptyset$ invertible as
$$
(M^\bullet)^{\times(-1)} = G^\bullet \circ M^\bullet, \qquad
\text{with}\enspace
G^\text{\boldmath{$\om$}} = (-1)^{r(\text{\boldmath{$\om$}})} \mu^{-r(\text{\boldmath{$\om$}})-1}.
$$
Indeed, $G^\bullet$ is nothing but the multiplicative inverse of
$\mu\,1^\bullet+I^\bullet$ and
$$
M^\bullet = \mu\,1^\bullet + I^\bullet\circ M^\bullet = (\mu\,1^\bullet+I^\bullet)\circ M^\bullet,
$$
whence the result follows immediately.
The above computation does not require any semigroup structure on~$\Omega$.
Besides, one can also write
$ (M^\bullet)^{\times(-1)} = \sum_{s\ge0} (-1)^s \mu^{-s-1} (M^\bullet-\mu \, 1^\bullet)^{\times s}$
(convergent series for the topology of $\mathscr M^\bullet(\Omega,\mathbf{A})$ induced by~$\operatorname{ord}$).
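As a consistency check at length one, both expressions yield
$\big( (M^\bullet)^{\times(-1)} \big)^{(\omega_1)} = -\mu^{-2} M^{\omega_1}$,
which can also be obtained directly from $(M^\bullet)^{\times(-1)} \times M^\bullet = 1^\bullet$.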
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We define elementary scalar moulds $\exp_t^\bullet$, $t\in\mathbb{C}$, and $\log^\bullet$ by the formulas
$\exp_t^\text{\boldmath{$\om$}} = \frac{t^{r(\text{\boldmath{$\om$}})}}{r(\text{\boldmath{$\om$}})!}$ and
$$
\log^\text{\boldmath{$\om$}} = 0 \quad \text{if $\text{\boldmath{$\om$}} = \emptyset$}, \qquad
\log^\text{\boldmath{$\om$}} = \tfrac{ (-1)^{r(\text{\boldmath{$\om$}})-1} }{ r(\text{\boldmath{$\om$}}) } \quad \text{if $\text{\boldmath{$\om$}} \neq \emptyset$.}
$$
One can check that
\begin{gather*}
\exp_0^\bullet = 1^\bullet, \qquad
\exp_{t_1}^\bullet \times \exp_{t_2}^\bullet = \exp_{t_1+t_2}^\bullet,
\qquad t_1,t_2\in\mathbb{C},\\
(\exp_t^\bullet - 1^\bullet) \circ \frac{1}{t}\log^\bullet
= \frac{1}{t}\log^\bullet \circ \, (\exp_t^\bullet - 1^\bullet)
= I^\bullet, \qquad t\in\mathbb{C}^*
\end{gather*}
(use for instance $\exp_t^\bullet = \sum_{s\ge0} \frac{t^s}{s!}(I^\bullet)^{\times
s}$ and $\log^\bullet = \sum_{s\ge1} \frac{(-1)^{s-1}}{s} (I^\bullet)^{\times s}$;
mould composition is well-defined here even if~$\Omega$ is not a semigroup).
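At the level of entries, the identity $\exp_{t_1}^\bullet \times \exp_{t_2}^\bullet = \exp_{t_1+t_2}^\bullet$ amounts to the binomial formula: at a word of length~$r$, the decompositions into a prefix of length~$i$ and a suffix of length~$r-i$ give
$$
\sum_{i=0}^{r} \frac{t_1^i}{i!}\,\frac{t_2^{r-i}}{(r-i)!} = \frac{(t_1+t_2)^r}{r!}.
$$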
Now, consider on the one hand the Lie algebra
\begin{multline} \label{eqdefgLbul}
\mathfrak L^\bullet(\Omega,\mathbf{A}) = \ao U^\bullet \in \mathscr M^\bullet(\Omega,\mathbf{A}) \mid U^\emptyset = 0 \af,\\
\quad \text{with bracketing
$[U^\bullet,V^\bullet] = U^\bullet\times V^\bullet - V^\bullet\times U^\bullet$,}
\end{multline}
and on the other hand the subgroup
\begin{equation} \label{eqdefGbul}
G^\bullet(\Omega,\mathbf{A}) = \ao M^\bullet \in \mathscr M^\bullet(\Omega,\mathbf{A}) \mid M^\emptyset = 1 \af
\end{equation}
of the multiplicative group of invertible moulds.
Then, for each $U^\bullet \in \mathfrak L^\bullet(\Omega,\mathbf{A})$, $( \exp_t^\bullet\circ \, U^\bullet)_{t\in\mathbb{C}}$
is a one-parameter group inside $G^\bullet(\Omega,\mathbf{A})$.
Moreover, the map
$$
E_t \colon
U^\bullet \in \mathfrak L^\bullet(\Omega,\mathbf{A}) \mapsto
M^\bullet = \exp_t^\bullet \circ\, U^\bullet \in G^\bullet(\Omega,\mathbf{A})
$$
is a bijection for each $t\in\mathbb{C}^*$ (with reciprocal $M^\bullet\mapsto
\frac{1}{t}\log^\bullet\circ\, M^\bullet$),
which allows us to consider $\mathfrak L^\bullet(\Omega,\mathbf{A})$ as the Lie algebra of
$G^\bullet(\Omega,\mathbf{A})$ in the sense that
$$
[U^\bullet,V^\bullet] = \frac{{\mathrm d}\,}{{\mathrm d} t} \Big(
E_t(U^\bullet) \times V^\bullet \times E_t(U^\bullet)^{\times(-1)}
\Big)_{\textstyle | t=0}.
$$
Observe that mould composition is not necessary to define the map~$E_t$ and its
reciprocal: one can use the series
\begin{equation} \label{eqdefEt}
E_t(U^\bullet) = \sum_{s\ge0} \frac{t^s}{s!} (U^\bullet)^{\times s}, \quad
E_t^{-1}(M^\bullet) = \frac{1}{t} \sum_{s\ge1} \tfrac{(-1)^{s-1}}{s} (M^\bullet-1^\bullet)^{\times s}
\end{equation}
(they are formally convergent because $\od{U^\bullet}\ge1$ and $\od{M^\bullet-1^\bullet}\ge1$).
\section{Alternality and symmetrality} \label{secAltSym}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Even if $\Omega$ is not a semigroup,
another operation available in~$\Omega^\bullet$ is {\em shuffling}: if two non-empty words
$\text{\boldmath{$\om$}}^1 = (\omega_1,\dotsc,\omega_\ell)$
and $\text{\boldmath{$\om$}}^2 = (\omega_{\ell+1},\dotsc,\omega_r)$
are given, one says that a word $\text{\boldmath{$\om$}}$ belongs to their shuffling
if it can be written $(\omega_{{\sigma}(1)},\dotsc,\omega_{{\sigma}(r)})$ with a
permutation~${\sigma}$ such that
${\sigma}(1)<\dotsm<{\sigma}(\ell)$ and ${\sigma}(\ell+1)<\dotsm<{\sigma}(r)$
(in other words, $\text{\boldmath{$\om$}}$ can be obtained by interdigitating the letters
of~$\text{\boldmath{$\om$}}^1$ and those of~$\text{\boldmath{$\om$}}^2$ while preserving their internal order
in~$\text{\boldmath{$\om$}}^1$ or~$\text{\boldmath{$\om$}}^2$).
We denote by $\sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}}$ the number of such permutations~${\sigma}$,
and we set $\sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}}=0$ if $\text{\boldmath{$\om$}}$ does not belong to the
shuffling of~$\text{\boldmath{$\om$}}^1$ and~$\text{\boldmath{$\om$}}^2$.
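Note that $\sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}}$ may exceed~$1$ when letters repeat: for instance, $\sh{(\omega)}{(\omega)}{(\omega,\omega)} = 2$, since both admissible permutations produce the same word.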
\begin{definition} \label{defialtsym}
A mould~$M^\bullet$ is said to be alternal if $M^\emptyset=0$ and, for any two non-empty words $\text{\boldmath{$\om$}}^1$,
$\text{\boldmath{$\om$}}^2$,
\begin{equation} \label{eqdefaltal}
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}} M^\text{\boldmath{$\om$}} = 0.
\end{equation}
It is said to be symmetral if $M^\emptyset=1$ and, for any two non-empty words $\text{\boldmath{$\om$}}^1$,
$\text{\boldmath{$\om$}}^2$,
\begin{equation} \label{eqdefsymal}
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}} M^\text{\boldmath{$\om$}} = M^{\text{\boldmath{$\om$}}^1} M^{\text{\boldmath{$\om$}}^2}.
\end{equation}
\end{definition}
Of course the above sums always have finite support.
For instance, if $\text{\boldmath{$\om$}}^1=(\omega_1)$ and $\text{\boldmath{$\om$}}^2=(\omega_2,\omega_3)$, the {left-hand side}\ in both
previous formulas is
$M^{\omega_1,\omega_2,\omega_3} + M^{\omega_2,\omega_1,\omega_3} + M^{\omega_2,\omega_3,\omega_1}$.
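In particular, taking $\text{\boldmath{$\om$}}^1=(\omega_1)$ and $\text{\boldmath{$\om$}}^2=(\omega_2)$, alternality imposes $M^{\omega_1,\omega_2} + M^{\omega_2,\omega_1} = 0$, while symmetrality imposes $M^{\omega_1,\omega_2} + M^{\omega_2,\omega_1} = M^{\omega_1} M^{\omega_2}$.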
The motivation for this definition lies in formula~\eqref{eqmotivsh} below.
We shall see in Section~\ref{secmotivsh} the interpretation of alternality or
symmetrality in terms of the operators obtained by mould-comould expansions:
alternal moulds will be related to the Lie algebra of derivations, symmetral
moulds to the group of automorphisms.
Alternal (resp.\ symmetral) moulds have to do with primitive (resp.\ group-like)
elements of a certain graded cocommutative Hopf algebra, at least when $\mathbf{A}$ is a
field---see the remark on Lemma~\ref{lemxitenstau} below.
An obvious example of alternal mould is $I^\bullet$, or any mould~$J^\bullet$ such that
$J^\text{\boldmath{$\om$}}=0$ for $r(\text{\boldmath{$\om$}})\neq1$ (as is the case of~$J_a^\bullet$ defined by~\eqref{eqdefJa}).
An elementary example of symmetral mould is $\exp_t^\bullet$ for any $t\in\mathbb{C}$;
a non-trivial example is the mould~$\mathcal{V}^\bullet$ determined by
Lemma~\ref{lemdefcV}, the symmetrality of which is the object of
Proposition~\ref{propcVsym} below.
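(The symmetrality of $\exp_t^\bullet$ can be checked directly: for words $\text{\boldmath{$\om$}}^1$, $\text{\boldmath{$\om$}}^2$ of lengths $\ell$ and~$m$, one has $\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}} = \binom{\ell+m}{\ell}$, so the {left-hand side}\ of~\eqref{eqdefsymal} equals $\binom{\ell+m}{\ell} \frac{t^{\ell+m}}{(\ell+m)!} = \frac{t^\ell}{\ell!}\,\frac{t^m}{m!}$.)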
The mould $\log^\bullet$ is not alternal (nor symmetral), but ``alternel'';
alternelity and symmetrelity are two other types of symmetry introduced by
\'Ecalle, parallel to alternality and symmetrality, but we shall not be
concerned with them in this text (see however the end of Section~\ref{secAlludel}).
The next paragraphs contain the proof of the following properties:
\begin{prop} \label{propstructaltsym}
Alternal moulds form a Lie subalgebra $\mathfrak L^\bullet_{\text{alt}}(\Omega,\mathbf{A})$ of the
Lie algebra $\mathfrak L^\bullet(\Omega,\mathbf{A})$ defined by~\eqref{eqdefgLbul}.
Symmetral moulds form a subgroup $G^\bullet_{\text{sym}}(\Omega,\mathbf{A})$ of the
multiplicative group $G^\bullet(\Omega,\mathbf{A})$ defined by~\eqref{eqdefGbul}.
The map~$E_t$ defined by~\eqref{eqdefEt} induces a bijection from
$\mathfrak L^\bullet_{\text{alt}}(\Omega,\mathbf{A})$ to $G^\bullet_{\text{sym}}(\Omega,\mathbf{A})$ for each $t\in\mathbb{C}^*$.
\end{prop}
\begin{prop} \label{propinvsym}
Given a mould~$M^\bullet$, we define a mould $S M^\bullet = \widetilde M^\bullet$ by the formulas
\begin{equation} \label{eqdefinvol}
\widetilde M^\emptyset = M^\emptyset, \qquad
\widetilde M^{\omega_1,\dotsc,\omega_r} = (-1)^r M^{\omega_r,\dotsc,\omega_1}, \qquad
r\ge1, \; \omega_1, \dotsc, \omega_r \in \Omega.
\end{equation}
Then $S$ is an involution and an antihomomorphism of the $\mathbf{A}$-algebra
$\mathscr M^\bullet(\Omega,\mathbf{A})$, and
\begin{align*}
M^\bullet \enspace\text{alternal} &\quad\Rightarrow\quad
S M^\bullet = - M^\bullet, \\
M^\bullet \enspace\text{symmetral} &\quad\Rightarrow\quad
S M^\bullet = (M^\bullet)^{\times(-1)} \enspace \text{(multiplicative inverse)}.
\end{align*}
\end{prop}
\begin{prop} \label{propcomposalt}
If $\Omega$ is a commutative semigroup and $U^\bullet$ is alternal, then
\begin{align}
M^\bullet \enspace\text{alternal} &\quad\Rightarrow\quad
M^\bullet\circ U^\bullet \enspace\text{alternal,}\\
M^\bullet \enspace\text{symmetral} &\quad\Rightarrow\quad
M^\bullet\circ U^\bullet \enspace\text{symmetral.}
\end{align}
If moreover $U^\bullet$ admits an inverse for composition ({i.e.}\ if $U^\text{\boldmath{$\om$}}$ has a
multiplicative inverse in~$\mathbf{A}$ whenever $r(\text{\boldmath{$\om$}})=1$), then this inverse is
alternal itself; thus alternal invertible moulds form a subgroup of the group
(for composition) of invertible moulds.
\end{prop}
\begin{prop} \label{propDerivSym}
If $D$ is a mould derivation induced by a derivation of~$\mathbf{A}$, or of the form
$D_\varphi$ with $\varphi\colon\Omega\to\mathbf{A}$, or of the form $\nabla_{J^\bullet}$ with $J^\bullet$
alternal (with the assumption that $\Omega$ is a
commutative semigroup in this last case), and if $M^\bullet$ is symmetral, then
$(D M^\bullet)\times (M^\bullet)^{\times(-1)}$ and $(M^\bullet)^{\times(-1)} \times(D M^\bullet)$
are alternal.
\end{prop}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The following definition will facilitate the proof of most of these properties
and enlighten the connection with derivations and algebra automorphisms to be
discussed in Section~\ref{secaltsym}.
\begin{definition}
We call dimould\footnote{
Not to be confused with the {\em bimoulds} introduced by \'Ecalle in
connection with Multizeta values, which correspond to the case where the
set~$\Omega$ itself is the cartesian product of two sets---see the end of
Section~\ref{secOtherAppli}.
}
any map~$\mathbf{M}^{\bul,\bul}$ from $\Omega^\bullet\times\Omega^\bullet$ to~$\mathbf{A}$;
its value on $(\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}})$ is denoted $\mathbf{M}^{\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}}}$.
The set of dimoulds is denoted $\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$; when viewed as the large
algebra of the monoid $\Omega^\bullet\times\Omega^\bullet$, it is a non-commutative $\mathbf{A}$-algebra.
\end{definition}
Observe that, the monoid law on $\Omega^\bullet\times\Omega^\bullet$ being
$$
\text{\boldmath{$\vpi$}}^1=(\text{\boldmath{$\om$}}^1,\text{\boldmath{$\eta$}}^1),\; \text{\boldmath{$\vpi$}}^2=(\text{\boldmath{$\om$}}^2,\text{\boldmath{$\eta$}}^2)
\enspace\Rightarrow\enspace
\text{\boldmath{$\vpi$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\vpi$}}^2 = (\text{\boldmath{$\om$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\om$}}^2,\text{\boldmath{$\eta$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\eta$}}^2),
$$
the finiteness of the number of decompositions of any
$\text{\boldmath{$\vpi$}}\in\Omega^\bullet\times\Omega^\bullet$ as $\text{\boldmath{$\vpi$}} = \text{\boldmath{$\vpi$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\vpi$}}^2$ allows us to
consider this large algebra, in which the multiplication is defined by a formula
similar to~\eqref{eqdefmultiplimould}.
The unit of dimould multiplication is $1^{\bul,\bul} \colon (\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}})\mapsto 1$ if
$\text{\boldmath{$\om$}}=\text{\boldmath{$\eta$}}=\emptyset$ and $0$ otherwise.
\begin{lemma} \label{lemhomomtau}
The map $\tau \colon M^\bullet\in\mathscr M^\bullet(\Omega,\mathbf{A}) \mapsto \mathbf{M}^{\bul,\bul}\in\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$ defined by
$$
\mathbf{M}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}} M^\text{\boldmath{$\om$}},
\qquad \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}} \in \Omega^\bullet
$$
is an $\mathbf{A}$-algebra homomorphism.
\end{lemma}
\begin{proof}
The map~$\tau$ is clearly $\mathbf{A}$-linear and $\tau(1^\bullet) = 1^{\bul,\bul}$.
Let $P^\bullet=M^\bullet \times N^\bullet$ and $\mathbf{P}^{\bul,\bul}=\tau(P^\bullet)$; since
$$
\mathbf{P}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \sum_{\text{\boldmath{$\gamma$}}^1,\text{\boldmath{$\gamma$}}^2\in\Omega^\bullet}
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}^2} M^{\text{\boldmath{$\gamma$}}^1} N^{\text{\boldmath{$\gamma$}}^2},
\qquad \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet,
$$
the property $\mathbf{P}^{\bul,\bul} = \tau(M^\bullet) \times \tau(N^\bullet)$
follows from the identity
\begin{equation} \label{eqidentiteshdeux}
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}^2} =
\sum_{\text{\boldmath{$\alpha$}}=\text{\boldmath{$\alpha$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\alpha$}}^2,\, \text{\boldmath{$\beta$}}=\text{\boldmath{$\beta$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}^2}
\sh{\text{\boldmath{$\alpha$}}^1}{\text{\boldmath{$\beta$}}^1}{\text{\boldmath{$\gamma$}}^1} \sh{\text{\boldmath{$\alpha$}}^2}{\text{\boldmath{$\beta$}}^2}{\text{\boldmath{$\gamma$}}^2}
\end{equation}
(the verification of which is left to the reader).
\end{proof}
As in the case of moulds, we can define the ``order'' of a dimould and get a
pseudovaluation $\operatorname{ord} \colon \mathscr M^{\bul\bul}(\Omega,\mathbf{A}) \to \mathbb{N}\cup\{\infty\}$:
by definition $\od{\mathbf{M}^{\bul,\bul}} \ge s$ if $\mathbf{M}^{\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}}}=0$ whenever
$r(\text{\boldmath{$\om$}})+r(\text{\boldmath{$\eta$}})<s$.
We then get a complete pseudovaluation ring $\big(\mathscr M^{\bul\bul}(\Omega,\mathbf{A}),\operatorname{ord}\big)$
and the homomorphism $\tau$ is continuous since
$\od{\tau(M^\bullet)} \ge \od{M^\bullet}$.
\begin{definition} \label{defidec}
We call decomposable a dimould $\mathbf{P}^{\bul,\bul}$ of the form
$\mathbf{P}^{\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}}} = M^\text{\boldmath{$\om$}} N^\text{\boldmath{$\eta$}}$ (for all $\text{\boldmath{$\om$}},\text{\boldmath{$\eta$}}\in\Omega^\bullet$), where
$M^\bullet$ and $N^\bullet$ are two moulds.
We then use the notation
$\mathbf{P}^{\bul,\bul} = M^\bullet \otimes N^\bullet$.
\end{definition}
One can check that the relation
\begin{equation} \label{eqrhohomomAalg}
(M_1^\bullet \otimes N_1^\bullet) \times (M_2^\bullet \otimes N_2^\bullet) =
(M_1^\bullet \times M_2^\bullet) \otimes (N_1^\bullet \times N_2^\bullet)
\end{equation}
holds in $\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$, for any four moulds $M_1^\bullet,N_1^\bullet,M_2^\bullet,N_2^\bullet$.
With this notation for decomposable dimoulds, we can now rephrase
Definition~\ref{defialtsym} with the help of the homomorphism~$\tau$ of
Lemma~\ref{lemhomomtau}:
\begin{lemma} \label{lemtaualtsym}
A mould~$M^\bullet$ is alternal iff $\tau(M^\bullet)=M^\bullet\otimes 1^\bullet + 1^\bullet\otimes M^\bullet$.
A mould~$M^\bullet$ is symmetral iff $M^\emptyset=1$ and $\tau(M^\bullet)=M^\bullet\otimes M^\bullet$.
\end{lemma}
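(Both equivalences rest on the relation $\tau(M^\bullet)^{\text{\boldmath{$\alpha$}},\emptyset} = \tau(M^\bullet)^{\emptyset,\text{\boldmath{$\alpha$}}} = M^\text{\boldmath{$\alpha$}}$, which settles the pairs of words in which one member is empty; in particular, the entry at $(\emptyset,\emptyset)$ of the alternality condition reads $M^\emptyset = 2M^\emptyset$, forcing $M^\emptyset=0$, while the pairs of non-empty words are covered by Definition~\ref{defialtsym}.)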
Notice that the image of~$\tau$ is contained in the set of {\em symmetric dimoulds},
{i.e.}\ those $\mathbf{M}^{\bul,\bul}$ such that $\mathbf{M}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \mathbf{M}^{\text{\boldmath{$\beta$}},\text{\boldmath{$\alpha$}}}$, because of
the obvious relation
\begin{equation} \label{eqcommsh}
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}} = \sh{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\om$}}},
\qquad \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\om$}}\in\Omega^\bullet.
\end{equation}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Remark on Definition~\ref{defidec}.}
The tensor product is used here as a mere notation, which is related to the
tensor product of $\mathbf{A}$-algebras as follows:
there is a unique $\mathbf{A}$-linear map
$\rho \colon \mathscr M^\bullet(\Omega,\mathbf{A}) \otimes_\mathbf{A} \mathscr M^\bullet(\Omega,\mathbf{A}) \to \mathscr M^{\bul\bul}(\Omega,\mathbf{A})$
such that $\rho(M^\bullet \otimes N^\bullet)$ is the above dimould~$\mathbf{P}^{\bul,\bul}$.
The map~$\rho$ is an $\mathbf{A}$-algebra homomorphism, according to~\eqref{eqrhohomomAalg}; however, its
injectivity is not obvious when~$\mathbf{A}$ is not a field, and denoting $\rho(M^\bullet
\otimes N^\bullet)$ simply as $M^\bullet \otimes N^\bullet$, as in
Definition~\ref{defidec}, is thus an abuse of notation.
In fact, if $\mathbf{A}$ is an integral domain, then the $\mathbf{A}$-module
$\mathscr M^\bullet(\Omega,\mathbf{A})$ is torsion-free ($\mu M^\bullet=0$ implies $\mu=0$ or
$M^\bullet=0$) and $\operatorname{Ker}\rho$ coincides with the set $\mathscr T$ of all torsion elements of
$\mathscr M^\bullet(\Omega,\mathbf{A}) \otimes_\mathbf{A} \mathscr M^\bullet(\Omega,\mathbf{A})$.
Indeed, for any $\xi\in\mathscr T$, there is a non-zero $\mu\in\mathbf{A}$ such that
$\mu\xi=0$, thus $\mu\rho(\xi)=0$ in $\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$, whence $\rho(\xi)=0$.
Conversely, suppose $\xi = \sum_{i=1}^n M_i^\bullet \otimes N_i^\bullet \in \operatorname{Ker}\rho$,
where the moulds $M_i^\bullet$ are not all zero; without loss of generality we can
suppose $M_n^\bullet\neq0$ and choose $\text{\boldmath{$\om$}}^1\in\Omega^\bullet$ such that $\mu_n =
M_n^{\text{\boldmath{$\om$}}^1} \neq0$. Setting $\mu_i = M_i^{\text{\boldmath{$\om$}}^1}$ for the other $i$'s, we get
$\sum_{i=1}^n \mu_i N_i^\bullet = 0$, whence
$\mu_n \xi = \sum_{i=1}^{n-1} (\mu_n M_i^\bullet - \mu_i M_n^\bullet)\otimes N_i^\bullet$,
still with $\mu_n\xi\in\operatorname{Ker}\rho$. By induction on~$n$, one gets a non-zero
$\mu\in\mathbf{A}$ such that $\mu\xi=0$.
Therefore, $\rho$ is injective when~$\mathbf{A}$ is a principal integral domain,
as is the case of $\mathbb{C}[[x]]$,
because any torsion-free $\mathbf{A}$-module is then flat (Bourbaki, {\em Alg.\ comm.},
chap.~I, \S2, n$^{\text{o}}$4, Prop.~3), hence its tensor product with itself is also
torsion-free (by flatness, the injectivity of $\phi \colon M^\bullet \mapsto \mu
M^\bullet$, for $\mu\neq0$, implies the injectivity of $\phi\otimes\operatorname{Id} \colon \xi
\mapsto \mu\xi$).
This is a fortiori the case when $\mathbf{A}$ is a field; this is used in the
remark on Lemma~\ref{lemxitenstau} below.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Proof of Proposition~\ref{propstructaltsym}.}
The set $\mathfrak L^\bullet_{\text{alt}}(\Omega,\mathbf{A})$ of alternal moulds is clearly an
$\mathbf{A}$-submodule of $\mathfrak L^\bullet(\Omega,\mathbf{A})$.
Given $U^\bullet$ and $V^\bullet$ in this set, the alternality of $[U^\bullet,V^\bullet]$ is
easily checked with the help of Lemma~\ref{lemhomomtau},
formula~\eqref{eqrhohomomAalg} and Lemma~\ref{lemtaualtsym}.
Let $M^\bullet$ and $N^\bullet$ be symmetral. The symmetrality of $M^\bullet\times
N^\bullet$ follows from Lemma~\ref{lemhomomtau}, formula~\eqref{eqrhohomomAalg} and
Lemma~\ref{lemtaualtsym}.
Similarly, the multiplicative inverse $\widetilde M^\bullet$ of~$M^\bullet$ satisfies
$\tau(\widetilde M^\bullet)\times\tau(M^\bullet) = \tau(M^\bullet)\times\tau(\widetilde
M^\bullet) = \tau(1^\bullet) = 1^{\bul,\bul}$; by uniqueness of the multiplicative inverse in
$\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$, it follows that $\tau(\widetilde M^\bullet) = \widetilde
M^\bullet\otimes \widetilde M^\bullet$ and $\widetilde M^\bullet$ is symmetral.
Now let $t\in\mathbb{C}^*$. Suppose first $U^\bullet\in\mathfrak L^\bullet_{\text{alt}}(\Omega,\mathbf{A})$.
We check that $M^\bullet = E_t(U^\bullet)$ is symmetral by using the continuity of~$\tau$
and formula~\eqref{eqdefEt}:
$\tau(M^\bullet) = \exp(a+b)$ with $a=t U^\bullet \otimes 1^\bullet$ and $b=1^\bullet \otimes
t U^\bullet$, where the exponential series is well-defined in $\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$
because $\od{a},\od{b}>0$; since $a\times b=b\times a$, the standard properties
of the exponential series yield
$$
\tau(M^\bullet) = \exp(a) \times \exp(b) =
\big( \exp(t U^\bullet)\otimes 1^\bullet \big) \times \big( 1^\bullet \otimes \exp(t U^\bullet) \big)
= M^\bullet \otimes M^\bullet.
$$
Conversely, supposing $M^\bullet = 1^\bullet+N^\bullet \in G^\bullet_{\text{sym}}(\Omega,\mathbf{A})$, we check that
$U^\bullet = E_t^{-1}(M^\bullet)$ is alternal:
by continuity, we can apply $\tau$ termwise to the logarithm series
in~\eqref{eqdefEt} and write $\tau(M^\bullet-1^\bullet) = N^\bullet\otimes 1^\bullet +
1^\bullet\otimes N^\bullet + N^\bullet\otimes N^\bullet = a + b + a\times b$, with
$a = N^\bullet\otimes 1^\bullet$ and $b = 1^\bullet\otimes N^\bullet$ commuting in
$\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$; the conclusion then follows from the identity
$$
\sum_{s\ge1}\tfrac{(-1)^{s-1}}{s} (a + b + a\times b)^{\times s} =
\sum_{s\ge1}\tfrac{(-1)^{s-1}}{s} a^{\times s} +
\sum_{s\ge1}\tfrac{(-1)^{s-1}}{s} b^{\times s}
$$
(which follows from the observation that, given $c\in\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$ with
$\od{c}>0$, $\sum\tfrac{(-1)^{s-1}}{s} c^{\times s}$ is the only dimould~$\ell$
of positive order such that $\exp(\ell)=1^{\bul,\bul}+c$).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Proof of Proposition~\ref{propinvsym}.}
It is obvious that $S$ is an involution and the identity
$$
S(M^\bullet\times N^\bullet) = S N^\bullet \times S M^\bullet,
\qquad M^\bullet, N^\bullet \in \mathscr M^\bullet(\Omega,\mathbf{A})
$$
clearly follows from the Definition~\eqref{eqdefmultiplimould} of mould multiplication.
Let us define an $\mathbf{A}$-linear map
$$
\xi \,\colon\; \mathbf{M}^{\bul,\bul} \in \mathscr M^{\bul\bul}(\Omega,\mathbf{A}) \mapsto
P^\bullet = \xi(\mathbf{M}^{\bul,\bul}) \in \mathscr M^\bullet(\Omega,\mathbf{A})
$$
by the formula
\begin{equation} \label{eqdefxiPbM}
P^\text{\boldmath{$\om$}} = \sum_{\text{\boldmath{$\om$}}=\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}} (-1)^{r(\text{\boldmath{$\alpha$}})} \mathbf{M}^{\widetilde\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}},
\qquad \text{\boldmath{$\om$}} \in \Omega^\bullet,
\end{equation}
where $\widetilde\text{\boldmath{$\alpha$}} = (\omega_i,\dotsc,\omega_1)$ for $\text{\boldmath{$\alpha$}} = (\omega_1,\dotsc,\omega_i)$ with $i\ge1$
and $\widetilde\emptyset = \emptyset$.
Thus
\begin{multline*}
P^\emptyset = \mathbf{M}^{\emptyset,\emptyset}, \quad
P^{(\omega_1)} = \mathbf{M}^{\emptyset,(\omega_1)} - \mathbf{M}^{(\omega_1),\emptyset}, \\
P^{(\omega_1,\omega_2)} = \mathbf{M}^{\emptyset,(\omega_1,\omega_2)} - \mathbf{M}^{(\omega_1),(\omega_2)} + \mathbf{M}^{(\omega_2,\omega_1),\emptyset},
\end{multline*}
and so on. The rest of Proposition~\ref{propinvsym} follows from
\begin{lemma} \label{lemxitenstau}
For any two moulds $M^\bullet,N^\bullet$, one has
\begin{gather}
\label{eqxitens}
\xi(M^\bullet\otimes N^\bullet) = (S M^\bullet) \times N^\bullet, \\
\label{eqxitau}
\xi\circ\tau(M^\bullet) = M^\emptyset \, 1^\bullet,
\end{gather}
with the homomorphism~$\tau$ of Lemma~\ref{lemhomomtau}.
\end{lemma}
Indeed, if $M^\bullet$ is alternal, then
$$
S M^\bullet + M^\bullet = (S M^\bullet)\times 1^\bullet + 1^\bullet\times M^\bullet
= \xi(M^\bullet \otimes 1^\bullet + 1^\bullet \otimes M^\bullet) = \xi\circ\tau(M^\bullet) = 0,
$$
and if $M^\bullet$ is symmetral, then
$$
(S M^\bullet) \times M^\bullet = \xi(M^\bullet \otimes M^\bullet) =
\xi\circ\tau(M^\bullet) = 1^\bullet
$$
and similarly $M^\bullet \times S M^\bullet = 1^\bullet$ because $S M^\bullet$ is
clearly symmetral too.
\medskip
\noindent
\emph{Proof of Lemma~\ref{lemxitenstau}.}
Formula~\eqref{eqxitens} is obvious.
Let $\mathbf{M}^{\bul,\bul} = \tau(M^\bullet)$ and $P^\bullet = \xi(\mathbf{M}^{\bul,\bul})$.
Clearly $P^\emptyset = \mathbf{M}^{\emptyset,\emptyset} = M^\emptyset$.
Let $\text{\boldmath{$\om$}} = (\omega_1,\ldots,\omega_r)$ with $r\ge 1$: we must show that $P^\text{\boldmath{$\om$}}=0$.
Using the notations $\text{\boldmath{$\alpha$}}^i = (\omega_1,\dotsc,\omega_i)$
and $\text{\boldmath{$\beta$}}^i = (\omega_{i+1},\dotsc,\omega_r)$ for $0 \le i \le r$
(with $\text{\boldmath{$\alpha$}}^0 = \text{\boldmath{$\beta$}}^r = \emptyset$),
we can write
$P^\text{\boldmath{$\om$}} = \sum_{i=0}^r (-1)^i \mathbf{M}^{\widetilde\text{\boldmath{$\alpha$}}^i,\text{\boldmath{$\beta$}}^i}$;
we then split the sum
$$
\mathbf{M}^{\widetilde\text{\boldmath{$\alpha$}}^i,\text{\boldmath{$\beta$}}^i} = \sum_\text{\boldmath{$\gamma$}} \tsh{\widetilde\text{\boldmath{$\alpha$}}^i}{\text{\boldmath{$\beta$}}^i}{\text{\boldmath{$\gamma$}}} M^\text{\boldmath{$\gamma$}}
$$
according to the first letter of the dummy variable~$\text{\boldmath{$\gamma$}}$:
$\mathbf{M}^{\widetilde\text{\boldmath{$\alpha$}}^i,\text{\boldmath{$\beta$}}^i} = Q_i + R_i$ with
\begin{alignat*}{3}
Q_i &= \sum_{\text{\boldmath{$\gamma$}}}
\tsh{\widetilde\text{\boldmath{$\alpha$}}^{i-1}}{\text{\boldmath{$\beta$}}^i}{\text{\boldmath{$\gamma$}}} M^{(\omega_i)\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}}
&\enspace &\text{if $1\le i\le r$},&\qquad Q_0&=0,\\
R_i &= \sum_{\text{\boldmath{$\gamma$}}}
\tsh{\widetilde\text{\boldmath{$\alpha$}}^i}{\text{\boldmath{$\beta$}}^{i+1}}{\text{\boldmath{$\gamma$}}} M^{(\omega_{i+1})\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}}
&\enspace &\text{if $0\le i\le r-1$},&\qquad R_r&=0.
\end{alignat*}
But, if $0 \le i \le r-1$, $Q_{i+1} = R_i$,
whence $\displaystyle P^\text{\boldmath{$\om$}} = \sum_{i=1}^r (-1)^i Q_i + \sum_{i=0}^{r-1} (-1)^i Q_{i+1} = 0$.
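(For $r=1$, this computation reduces to $P^{(\omega_1)} = \mathbf{M}^{\emptyset,(\omega_1)} - \mathbf{M}^{(\omega_1),\emptyset} = M^{\omega_1} - M^{\omega_1} = 0$.)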
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Remark on Lemma~\ref{lemxitenstau}.}
Although this will not be used in the rest of the article, it is worth noting
here that the structure we have on~$\mathscr M^\bullet(\Omega,\mathbf{A})$ is very reminiscent of
that of a {\em cocommutative Hopf algebra}:
the algebra structure is given by mould
multiplication~\eqref{eqdefmultiplimould}, with its unit~$1^\bullet$;
as for the cocommutative coalgebra structure, we may think of the map ${\varepsilon} \colon
M^\bullet\mapsto M^\emptyset$ as a {\em counit} and of the homomorphism~$\tau$ as a kind
of {\em coproduct} (although its range is not exactly
$\mathscr M^\bullet(\Omega,\mathbf{A})\otimes_\mathbf{A}\mathscr M^\bullet(\Omega,\mathbf{A})$);
we now may consider that the involution $S \colon M^\bullet \mapsto \widetilde M^\bullet$ behaves as
an {\em antipode}.
Indeed, the identity\footnote{%
derived from the obvious relation
$\tsh{\emptyset}{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\om$}}} =\tsh{\text{\boldmath{$\alpha$}}}{\emptyset}{\text{\boldmath{$\om$}}} = 1_{\{\text{\boldmath{$\om$}}=\text{\boldmath{$\alpha$}}\}}$.
}
$\tau(M^\bullet)^{\emptyset,\text{\boldmath{$\alpha$}}} = \tau(M^\bullet)^{\text{\boldmath{$\alpha$}},\emptyset} = M^\text{\boldmath{$\alpha$}}$
can be interpreted as a counit-like property for~${\varepsilon}$
and the fact that any dimould in the image of~$\tau$ is symmetric (consequence
of~\eqref{eqcommsh}) as a cocommutativity-like property, in the sense that
$\tau(M^\bullet) = \sum P_i^\bullet\otimes Q_i^\bullet$ implies
$\sum {\varepsilon}(P_i^\bullet) Q_i^\bullet = \sum {\varepsilon}(Q_i^\bullet) P_i^\bullet = M^\bullet$
and $\sum P_i^\bullet\otimes Q_i^\bullet = \sum Q_i^\bullet\otimes P_i^\bullet$.
The analogue of coassociativity for~$\tau$ is obtained by considering the
maps~$\tau_{\ell}$ and~$\tau_{r}$ which associate with any dimould~$\mathbf{M}^{\bul,\bul}$ the ``trimoulds''
$\mathbf{P}^{\bul,\bul,\bul} = \tau_{\ell}(\mathbf{M}^{\bul,\bul})$ and $\mathbf{Q}^{\bul,\bul,\bul} = \tau_{r}(\mathbf{M}^{\bul,\bul})$
defined by
$$
\mathbf{P}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}}} = \sum_{\text{\boldmath{$\eta$}}\in\Omega^\bullet}
\tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\eta$}}} \mathbf{M}^{\text{\boldmath{$\eta$}},\text{\boldmath{$\gamma$}}},
\quad
\mathbf{Q}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}}} = \sum_{\text{\boldmath{$\eta$}}\in\Omega^\bullet}
\tsh{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}}{\text{\boldmath{$\eta$}}} \mathbf{M}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\eta$}}}
$$
and by observing\footnote{%
Proof: for a mould~$M^\bullet$, we have
$\tau_{\ell}\circ\tau(M^\bullet)^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}}} = \sum_\text{\boldmath{$\om$}} \tssh M^\text{\boldmath{$\om$}}$
with $\tssh = \sum_{\text{\boldmath{$\eta$}}} \tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\eta$}}} \tsh{\text{\boldmath{$\eta$}}}{\text{\boldmath{$\gamma$}}}{\text{\boldmath{$\om$}}}$
coinciding with $\sum_{\text{\boldmath{$\eta$}}} \tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\eta$}}}{\text{\boldmath{$\om$}}} \tsh{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}}{\text{\boldmath{$\eta$}}}$,
hence $\tau_{r}\circ\tau(M^\bullet)^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}}} = \sum_\text{\boldmath{$\om$}} \tssh M^\text{\boldmath{$\om$}}$ as well.}
that $\tau_{\ell}\circ\tau = \tau_{r}\circ\tau$: when
$\tau(M^\bullet) = \sum_i P_i^\bullet\otimes Q_i^\bullet$
with $\tau(P_i^\bullet) = \sum_j A_{i,j}^\bullet\otimes B_{i,j}^\bullet$
and $\tau(Q_i^\bullet) = \sum_k C_{i,k}^\bullet\otimes D_{i,k}^\bullet$,
this yields
$$
\sum_{i,j} A_{i,j}^\bullet\otimes B_{i,j}^\bullet\otimes Q_i^\bullet =
\sum_{i,k} P_i^\bullet \otimes C_{i,k}^\bullet\otimes D_{i,k}^\bullet.
$$
Finally, the compatibility of ${\varepsilon}$, $\tau$ and~$S$ is expressed through formulas
\eqref{eqxitens}--\eqref{eqxitau}
(complemented by relations $\xi'(M^\bullet\otimes N^\bullet) = M^\bullet \times
S N^\bullet$ and $\xi'\circ \tau(M^\bullet) = M^\emptyset \, 1^\bullet$
involving a map~$\xi'$ defined by replacing $(-1)^{r(\text{\boldmath{$\alpha$}})} \mathbf{M}^{\widetilde\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}}$
with $(-1)^{r(\text{\boldmath{$\beta$}})} \mathbf{M}^{\text{\boldmath{$\alpha$}},\widetilde\text{\boldmath{$\beta$}}}$ in~\eqref{eqdefxiPbM});
therefore
$$
\tau(M^\bullet) = \sum_i P_i^\bullet\otimes Q_i^\bullet
\enspace\Rightarrow\enspace
\sum S P_i^\bullet \times Q_i^\bullet = M^\emptyset\,1^\bullet
= \sum P_i^\bullet \times S Q_i^\bullet.
$$
When $\mathbf{A}$ is a field, we get a true cocommutative Hopf algebra (graded by~$\operatorname{ord}$) by
considering
$\mathscr H^\bullet(\Omega,\mathbf{A}) = \tau^{-1}(\mathscr B)$ with $\mathscr B = \mathscr M^\bullet(\Omega,\mathbf{A})\otimes_\mathbf{A}\mathscr M^\bullet(\Omega,\mathbf{A})$
(we can view $\mathscr B$ as a subalgebra of $\mathscr M^{\bul\bul}(\Omega,\mathbf{A})$ according to the
remark on Definition~\ref{defidec}).
Indeed, in view of the above, it suffices essentially to check that
$M^\bullet\in\mathscr H=\mathscr H^\bullet(\Omega,\mathbf{A})$ implies $\tau(M^\bullet) \in \mathscr H\otimes_\mathbf{A}\mathscr H$ (and not only
$\tau(M^\bullet) \in \mathscr B$), so that the restriction of the homomorphism~$\tau$
to~$\mathscr H$ is a {\em bona fide} coproduct
$$
\Delta \colon \mathscr H \to \mathscr H \otimes_\mathbf{A} \mathscr H.
$$
This can be done by choosing a minimal~$N$ such that $\tau(M^\bullet)$ can be written
as a sum of $N$ decomposable dimoulds:
$\tau(M^\bullet) = \sum_{i=1}^N P_i^\bullet \otimes Q_i^\bullet$ then implies that the
$Q_i^\bullet$'s are linearly independent over~$\mathbf{A}$ and the coassociativity property
allows one to show that each~$P_i^\bullet$ lies in~$\mathscr H$
(choose a basis of~$\mathscr M^\bullet(\Omega,\mathbf{A})$, the first $N$ vectors of which are
$Q_1^\bullet,\dotsc,Q_N^\bullet$, and call $\xi_1,\dotsc,\xi_N$ the first $N$ covectors
of the dual basis: the coassociativity identity can be written
$\sum_i \tau(P_i^\bullet)^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} Q_i^\text{\boldmath{$\gamma$}} =
\sum_j P_j^\text{\boldmath{$\alpha$}} \tau(Q_j^\bullet)^{\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}}}$,
thus $\tau(P_i^\bullet) = \sum_j P_j^\bullet \otimes N_{i,j}^\bullet$ with
$N_{i,j}^\text{\boldmath{$\beta$}} = \xi_i\big(\tau(Q_j^\bullet)^{\text{\boldmath{$\beta$}},\bullet}\big)$,
hence $P_i^\bullet\in\mathscr H$); similarly each~$Q_i^\bullet$ lies in~$\mathscr H$.
By definition, all the alternal and symmetral moulds belong to this Hopf
algebra~$\mathscr H$, in which they appear respectively as {\em primitive} and {\em group-like}
elements.
Finally, when $\mathbf{A}$ is only supposed to be an integral domain,
$\mathscr M^\bullet(\Omega,\mathbf{A})$ can be viewed as a subalgebra of~$\mathscr M^\bullet(\Omega,K)$, where $K$
denotes the fraction field of~$\mathbf{A}$; the $\mathbf{A}$-valued alternal and symmetral moulds
belong to the corresponding Hopf algebra $\mathscr H^\bullet(\Omega,K)$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Proof of Proposition~\ref{propcomposalt}.}
The structure of commutative semigroup on~$\Omega$ allows us to define a
composition involving a dimould and a mould as follows:
$\mathbf{C}^{\bul,\bul} = \mathbf{M}^{\bul,\bul} \circ U^\bullet$ if, for all $\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet$,
$$
\mathbf{C}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \sum
\mathbf{M}^{(\norm{\text{\boldmath{$\alpha$}}^1},\dotsc,\norm{\text{\boldmath{$\alpha$}}^s}),(\norm{\text{\boldmath{$\beta$}}^1},\dotsc,\norm{\text{\boldmath{$\beta$}}^t})}
U^{\text{\boldmath{$\alpha$}}^1} \dotsm U^{\text{\boldmath{$\alpha$}}^s} U^{\text{\boldmath{$\beta$}}^1} \dotsm U^{\text{\boldmath{$\beta$}}^t},
$$
with summation over all possible decompositions of $\text{\boldmath{$\alpha$}}$ and~$\text{\boldmath{$\beta$}}$ into
non-empty words; when $\text{\boldmath{$\alpha$}}$ is the empty word, the convention is to
replace $(\norm{\text{\boldmath{$\alpha$}}^1},\dotsc,\norm{\text{\boldmath{$\alpha$}}^s})$ by~$\emptyset$ and
$U^{\text{\boldmath{$\alpha$}}^1} \dotsm U^{\text{\boldmath{$\alpha$}}^s}$ by~$1$, and similarly when $\text{\boldmath{$\beta$}}$ is
the empty word.
One can check that $\mathbf{M}^{\bul,\bul} \circ I^\bullet = \mathbf{M}^{\bul,\bul}$ and
$\mathbf{M}^{\bul,\bul} \circ (U^\bullet \circ V^\bullet) =
(\mathbf{M}^{\bul,\bul} \circ U^\bullet) \circ V^\bullet$
for any dimould~$\mathbf{M}^{\bul,\bul}$ and any two moulds $U^\bullet,V^\bullet$ (by the same
argument as for the associativity of mould composition).
Proposition~\ref{propcomposalt} will follow from
\begin{lemma}
For any three moulds $M^\bullet,N^\bullet,U^\bullet$,
\begin{equation} \label{eqcomposotimes}
(M^\bullet\otimes N^\bullet)\circ U^\bullet = (M^\bullet\circ U^\bullet) \otimes (N^\bullet\circ U^\bullet).
\end{equation}
For any two moulds $M^\bullet,U^\bullet$,
\begin{equation} \label{eqcompostaualt}
U^\bullet \enspace\text{alternal} \quad\Rightarrow\quad
\tau(M^\bullet\circ U^\bullet) = \tau(M^\bullet) \circ U^\bullet.
\end{equation}
\end{lemma}
\begin{proof}
The identity~\eqref{eqcomposotimes} is an easy consequence of the definition of
mould composition in Section~\ref{seccomposmoulds}.
As for~\eqref{eqcompostaualt}, let us suppose $U^\bullet$ alternal and let
$\mathbf{M}^{\bul,\bul} = \tau(M^\bullet)$, $\mathbf{U}^{\bul,\bul}=\tau(U^\bullet)$, $\mathbf{C}^{\bul,\bul} = \tau(M^\bullet\circ U^\bullet)$.
We have $\mathbf{C}^{\emptyset,\emptyset} = M^\emptyset = \mathbf{M}^{\emptyset,\emptyset}$, as desired. Suppose now that $\text{\boldmath{$\alpha$}}$ or
$\text{\boldmath{$\beta$}}$ is non-empty; then
$$
\mathbf{C}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \sum_{s\ge1, \, \text{\boldmath{$\gamma$}}^1,\dotsc,\text{\boldmath{$\gamma$}}^s\neq\emptyset}
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\gamma$}}^s}
M^{(\norm{\text{\boldmath{$\gamma$}}^1},\dotsc,\norm{\text{\boldmath{$\gamma$}}^s})}
U^{\text{\boldmath{$\gamma$}}^1} \dotsm U^{\text{\boldmath{$\gamma$}}^s}.
$$
Using the identity
(which is an easy generalisation of~\eqref{eqidentiteshdeux})
\begin{equation} \label{eqidentiteshs}
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\gamma$}}^s} =
\sum_{\text{\boldmath{$\alpha$}}=\text{\boldmath{$\alpha$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\alpha$}}^s,\, \text{\boldmath{$\beta$}}=\text{\boldmath{$\beta$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\beta$}}^s}
\sh{\text{\boldmath{$\alpha$}}^1}{\text{\boldmath{$\beta$}}^1}{\text{\boldmath{$\gamma$}}^1} \dotsm \sh{\text{\boldmath{$\alpha$}}^s}{\text{\boldmath{$\beta$}}^s}{\text{\boldmath{$\gamma$}}^s},
\end{equation}
with possibly empty factors $\text{\boldmath{$\alpha$}}^i, \text{\boldmath{$\beta$}}^i$,
we get
\begin{equation} \label{eqsumbC}
\mathbf{C}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \sum_{s\ge1, \, \text{\boldmath{$\alpha$}}=\text{\boldmath{$\alpha$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\alpha$}}^s,\, \text{\boldmath{$\beta$}}=\text{\boldmath{$\beta$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\beta$}}^s}
M^{(\norm{\text{\boldmath{$\alpha$}}^1}+\norm{\text{\boldmath{$\beta$}}^1},\dotsc,\norm{\text{\boldmath{$\alpha$}}^s}+\norm{\text{\boldmath{$\beta$}}^s})}
\mathbf{U}^{\text{\boldmath{$\alpha$}}^1,\text{\boldmath{$\beta$}}^1} \dotsm \mathbf{U}^{\text{\boldmath{$\alpha$}}^s,\text{\boldmath{$\beta$}}^s},
\end{equation}
with the convention $\norm{\emptyset}=0$. Observe that this last summation involves
only finitely many nonzero terms because $\text{\boldmath{$\alpha$}}^i=\text{\boldmath{$\beta$}}^i=\emptyset$ implies $\mathbf{U}^{\text{\boldmath{$\alpha$}}^i,\text{\boldmath{$\beta$}}^i}=0$.
If $\text{\boldmath{$\alpha$}}$ or~$\text{\boldmath{$\beta$}}$ is the empty word, since
$\mathbf{U}^{\text{\boldmath{$\om$}},\emptyset} = \mathbf{U}^{\emptyset,\text{\boldmath{$\om$}}} = U^\text{\boldmath{$\om$}}$ we obtain that the values of $\mathbf{C}^{\bul,\bul}$
and $\mathbf{M}^{\bul,\bul}\circ U^\bullet$ at $(\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})$ coincide.
If neither $\text{\boldmath{$\alpha$}}$ nor~$\text{\boldmath{$\beta$}}$ is empty, then we have moreover
$\mathbf{U}^{\text{\boldmath{$\alpha$}}^i,\text{\boldmath{$\beta$}}^i}\neq0 \;\Rightarrow\; \text{\boldmath{$\alpha$}}^i \ \text{or}\ \text{\boldmath{$\beta$}}^i=\emptyset$, thus
\eqref{eqsumbC} can be rewritten (retaining only non-empty factors)
$$
\mathbf{C}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \sum \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet}
\tsh{(\norm{\text{\boldmath{$\alpha$}}^1},\dotsc,\norm{\text{\boldmath{$\alpha$}}^s})}{(\norm{\text{\boldmath{$\beta$}}^1},\dotsc,\norm{\text{\boldmath{$\beta$}}^t})}{\text{\boldmath{$\om$}}}
M^{\text{\boldmath{$\om$}}}
U^{\text{\boldmath{$\alpha$}}^1} \dotsm U^{\text{\boldmath{$\alpha$}}^s} U^{\text{\boldmath{$\beta$}}^1} \dotsm U^{\text{\boldmath{$\beta$}}^t},
$$
with the first summation over all possible decompositions of $\text{\boldmath{$\alpha$}}$ and~$\text{\boldmath{$\beta$}}$ into
non-empty words.
We thus get the desired result.
\end{proof}
\noindent
\emph{End of the proof of Proposition~\ref{propcomposalt}.}
We now suppose that $U^\bullet$ is an alternal mould.
If $M^\bullet$ is an alternal mould, then
$$
\tau(M^\bullet\circ U^\bullet) = (M^\bullet\otimes 1^\bullet + 1^\bullet\otimes M^\bullet)\circ
U^\bullet
= (M^\bullet\circ U^\bullet)\otimes 1^\bullet + 1^\bullet \otimes (M^\bullet\circ U^\bullet)
$$
by \eqref{eqcomposotimes}--\eqref{eqcompostaualt}, while, for $M^\bullet$ symmetral,
$$
\tau(M^\bullet\circ U^\bullet) = (M^\bullet\otimes M^\bullet)\circ U^\bullet
= (M^\bullet\circ U^\bullet)\otimes(M^\bullet\circ U^\bullet).
$$
Finally, if moreover $U^\bullet$ is invertible for composition and $V^\bullet =
(U^\bullet)^{\circ(-1)}$, then
$\tau(V^\bullet) = \tau(V^\bullet) \circ (U^\bullet\circ V^\bullet)
= \big( \tau(V^\bullet) \circ U^\bullet \big)\circ V^\bullet
= \tau(V^\bullet\circ U^\bullet) \circ V^\bullet
= (I^\bullet\otimes 1^\bullet + 1^\bullet\otimes I^\bullet) \circ V^\bullet
= V^\bullet \otimes 1^\bullet + 1^\bullet \otimes V^\bullet$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Proof of Proposition~\ref{propDerivSym}.}
Let $M^\bullet$ be a symmetral mould.
If $D$ is induced by a derivation $d\colon \mathbf{A}\to\mathbf{A}$, then we can apply~$d$ to
both sides of equation~\eqref{eqdefsymal} and we get
\begin{equation} \label{eqtauDM}
\tau(D M^\bullet) = D M^\bullet \otimes M^\bullet + M^\bullet \otimes D M^\bullet.
\end{equation}
Let us show that the same relation holds when $D=\nabla_{J^\bullet}$ with $J^\bullet$
alternal (this includes the case $D=D_\varphi$).
We set $C^\bullet = \nabla_{J^\bullet} M^\bullet$ and denote respectively by $\mathbf{J}^{\bul,\bul}, \mathbf{M}^{\bul,\bul},
\mathbf{C}^{\bul,\bul}$ the images of $J^\bullet, M^\bullet, C^\bullet$ by the homomorphism~$\tau$.
We first observe that $C^\emptyset = 0$ and $\mathbf{C}^{\emptyset,\emptyset} = 0$.
Let $\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2\in\Omega^\bullet$ with at least one of them non-empty.
From the definition of~$C^\bullet$, we have
\begin{multline*}
\mathbf{C}^{\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2} = \sum_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}},\, \text{\boldmath{$\beta$}}\neq\emptyset}
\tsh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}}
M^{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\norm{\text{\boldmath{$\beta$}}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}} J^\text{\boldmath{$\beta$}} \\[1ex]
= \sum_{ \substack{ \text{\boldmath{$\om$}}^1 = \text{\boldmath{$\alpha$}}^1 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\beta$}}^1 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\gamma$}}^1 \\
\text{\boldmath{$\om$}}^2 = \text{\boldmath{$\alpha$}}^2 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\beta$}}^2 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\gamma$}}^2 } } \
\sum_{ \substack{ \text{\boldmath{$\alpha$}},\text{\boldmath{$\gamma$}} \\ \text{\boldmath{$\beta$}}\neq\emptyset } } \
\tsh{\text{\boldmath{$\alpha$}}^1}{\text{\boldmath{$\alpha$}}^2}{\text{\boldmath{$\alpha$}}} \tsh{\text{\boldmath{$\gamma$}}^1}{\text{\boldmath{$\gamma$}}^2}{\text{\boldmath{$\gamma$}}}
M^{\text{\boldmath{$\alpha$}} \raisebox{.15ex}{$\centerdot$} ( \norm{\text{\boldmath{$\beta$}}^1}+\norm{\text{\boldmath{$\beta$}}^2} ) \raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}}
\tsh{\text{\boldmath{$\beta$}}^1}{\text{\boldmath{$\beta$}}^2}{\text{\boldmath{$\beta$}}} J^\text{\boldmath{$\beta$}}
\end{multline*}
by virtue of~\eqref{eqidentiteshs} with $s=3$.
The summation over~$\text{\boldmath{$\beta$}}$ leads to the appearance of the factor
$\mathbf{J}^{\text{\boldmath{$\beta$}}^1,\text{\boldmath{$\beta$}}^2}$. By alternality of~$J^\bullet$, this factor vanishes if
both~$\text{\boldmath{$\beta$}}^1$ and~$\text{\boldmath{$\beta$}}^2$ are non-empty, thus
\begin{multline*}
\mathbf{C}^{\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2} = \sum_{
\text{\boldmath{$\om$}}^1 = \text{\boldmath{$\alpha$}}^1 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\beta$}}^1 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\gamma$}}^1,\, \text{\boldmath{$\beta$}}^1\neq\emptyset}
\Phi_1(\text{\boldmath{$\alpha$}}^1,\norm{\text{\boldmath{$\beta$}}^1},\text{\boldmath{$\gamma$}}^1;\text{\boldmath{$\om$}}^2) J^{\text{\boldmath{$\beta$}}^1} \\
+ \sum_{
\text{\boldmath{$\om$}}^2 = \text{\boldmath{$\alpha$}}^2 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\beta$}}^2 \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\gamma$}}^2,\, \text{\boldmath{$\beta$}}^2\neq\emptyset}
\Phi_2(\text{\boldmath{$\om$}}^1;\text{\boldmath{$\alpha$}}^2,\norm{\text{\boldmath{$\beta$}}^2},\text{\boldmath{$\gamma$}}^2) J^{\text{\boldmath{$\beta$}}^2},
\end{multline*}
with $\displaystyle \Phi_1(\text{\boldmath{$\alpha$}}^1,b,\text{\boldmath{$\gamma$}}^1;\text{\boldmath{$\om$}}^2) =
\sum_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\gamma$}},\ \text{\boldmath{$\om$}}^2=\text{\boldmath{$\alpha$}}^2\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}^2}
\tsh{\text{\boldmath{$\alpha$}}^1}{\text{\boldmath{$\alpha$}}^2}{\text{\boldmath{$\alpha$}}} \tsh{\text{\boldmath{$\gamma$}}^1}{\text{\boldmath{$\gamma$}}^2}{\text{\boldmath{$\gamma$}}}
M^{\text{\boldmath{$\alpha$}} \raisebox{.15ex}{$\centerdot$} b \raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}}$
and a symmetric definition for~$\Phi_2$.
A moment of thought shows that
$$
\Phi_1(\text{\boldmath{$\alpha$}}^1,b,\text{\boldmath{$\gamma$}}^1;\text{\boldmath{$\om$}}^2) =
\sum_\text{\boldmath{$\om$}} \sh{\text{\boldmath{$\alpha$}}^1\raisebox{.15ex}{$\centerdot$} b\raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\gamma$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}} M^\text{\boldmath{$\om$}}
= \mathbf{M}^{\text{\boldmath{$\alpha$}}^1\raisebox{.15ex}{$\centerdot$} b\raisebox{.15ex}{$\centerdot$} \text{\boldmath{$\gamma$}}^1,\text{\boldmath{$\om$}}^2},
$$
with a symmetric formula for~$\Phi_2$, so that
$$
\mathbf{C}^{\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2} = \sum_{\text{\boldmath{$\om$}}^1 = \text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}},\,\text{\boldmath{$\beta$}}\neq\emptyset}
\mathbf{M}^{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\norm{\text{\boldmath{$\beta$}}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}},\text{\boldmath{$\om$}}^2} J^\text{\boldmath{$\beta$}}
+ \sum_{\text{\boldmath{$\om$}}^2 = \text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}},\,\text{\boldmath{$\beta$}}\neq\emptyset}
\mathbf{M}^{\text{\boldmath{$\om$}}^1,\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\norm{\text{\boldmath{$\beta$}}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}} J^\text{\boldmath{$\beta$}},
$$
whence formula~\eqref{eqtauDM} follows.
Since the multiplicative inverse $\widetilde M^\bullet$ of~$M^\bullet$ is known
to be symmetral by Proposition~\ref{propstructaltsym},
we can multiply both sides of~\eqref{eqtauDM} by $\tau(\widetilde M^\bullet)$
and use Lemma~\ref{lemhomomtau} and formula~\eqref{eqrhohomomAalg};
this yields the alternality of $D M^\bullet \times \widetilde M^\bullet$ and $\widetilde M^\bullet \times D M^\bullet$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
There is a kind of converse to Proposition~\ref{propDerivSym}, which is
essential in the application to the saddle-node; we state it in this context only:
\begin{prop} \label{propcVsym}
Let $\Omega = \mathcal{N}$ as in~\eqref{eqdefiancN} and $\mathbf{A}=\mathbb{C}[[x]]$.
Then the mould~$\mathcal{V}^\bullet$ defined by Lemma~\ref{lemdefcV} is symmetral.
\end{prop}
\begin{proof}
We must show
\begin{equation} \label{eqVsymal}
\mathcal{V}^\text{\boldmath{$\alpha$}} \mathcal{V}^\text{\boldmath{$\beta$}} = \sum_{\text{\boldmath{$\gamma$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}} \mathcal{V}^\text{\boldmath{$\gamma$}},
\qquad \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet.
\end{equation}
Since $\mathcal{V}^\emptyset = 1$, this is obviously true for $\text{\boldmath{$\alpha$}}$ or $\text{\boldmath{$\beta$}} = \emptyset$.
We now argue by induction on $r = r(\text{\boldmath{$\alpha$}}) + r(\text{\boldmath{$\beta$}})$.
We thus suppose $r\ge1$ and, without loss of generality, both of $\text{\boldmath{$\alpha$}}$ and
$\text{\boldmath{$\beta$}}$ non-empty.
With the notations $d = x^2\frac{{\mathrm d}\,}{{\mathrm d} x}$,
$\norm{\text{\boldmath{$\alpha$}}}=\alpha_1+\dotsb+\alpha_{r(\text{\boldmath{$\alpha$}})}$
and $\norm{\text{\boldmath{$\beta$}}}=\beta_1+\dotsb+\beta_{r(\text{\boldmath{$\beta$}})}$,
we compute
\begin{multline*}
A := (d+\norm{\text{\boldmath{$\alpha$}}}+\norm{\text{\boldmath{$\beta$}}}) \sum_\text{\boldmath{$\gamma$}} \tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}} \mathcal{V}^\text{\boldmath{$\gamma$}} \\
= \sum_{\text{\boldmath{$\gamma$}}\neq\emptyset} \tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}} (d+\norm{\text{\boldmath{$\gamma$}}}) \mathcal{V}^\text{\boldmath{$\gamma$}}
= \sum_{\text{\boldmath{$\gamma$}}\neq\emptyset} \tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\gamma$}}} a_{\gamma_1} \mathcal{V}^{`\text{\boldmath{$\gamma$}}},
\end{multline*}
using the notation $`\text{\boldmath{$\om$}} = (\omega_2,\dotsc,\omega_s)$ for any non-empty
$\text{\boldmath{$\om$}} = (\omega_1,\dotsc,\omega_s)$ and the defining equation of~$\mathcal{V}^\bullet$.
Splitting the last summation according to the value of~$\gamma_1$, we get
$$
A = \sum_\text{\boldmath{$\delta$}} \tsh{`\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\delta$}}} a_{\alpha_1} \mathcal{V}^\text{\boldmath{$\delta$}}
+ \sum_\text{\boldmath{$\delta$}} \tsh{\text{\boldmath{$\alpha$}}}{`\text{\boldmath{$\beta$}}}{\text{\boldmath{$\delta$}}} a_{\beta_1} \mathcal{V}^\text{\boldmath{$\delta$}}
= a_{\alpha_1} \mathcal{V}^{`\text{\boldmath{$\alpha$}}} \cdot \mathcal{V}^\text{\boldmath{$\beta$}}
+ \mathcal{V}^{\text{\boldmath{$\alpha$}}} \cdot a_{\beta_1} \mathcal{V}^{`\text{\boldmath{$\beta$}}}
$$
(using the induction hypothesis), hence
$$
A = (d+\norm{\text{\boldmath{$\alpha$}}})\mathcal{V}^\text{\boldmath{$\alpha$}} \cdot \mathcal{V}^\text{\boldmath{$\beta$}}
+ \mathcal{V}^\text{\boldmath{$\alpha$}} \cdot (d+\norm{\text{\boldmath{$\beta$}}})\mathcal{V}^\text{\boldmath{$\beta$}}
= (d+\norm{\text{\boldmath{$\alpha$}}}+\norm{\text{\boldmath{$\beta$}}}) (\mathcal{V}^\text{\boldmath{$\alpha$}} \mathcal{V}^\text{\boldmath{$\beta$}}).
$$
We conclude that both sides of~\eqref{eqVsymal} coincide: the operator
$d+\norm{\text{\boldmath{$\alpha$}}}+\norm{\text{\boldmath{$\beta$}}}$ is invertible on $\mathbb{C}[[x]]$ when $\norm{\text{\boldmath{$\alpha$}}}+\norm{\text{\boldmath{$\beta$}}}\neq0$,
while both sides belong to $x\mathbb{C}[[x]]$, on which $d$ alone is injective,
so the desired conclusion also holds when $\norm{\text{\boldmath{$\alpha$}}}+\norm{\text{\boldmath{$\beta$}}}=0$.
\end{proof}
\section{General mould-comould expansions} \label{secGenMcM}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We still assume that we are given a set~$\Omega$ and a commutative $\mathbb{C}$-algebra~$\mathbf{A}$.
When $\Omega$ is the trivial one-element semigroup $\{0\}$, the algebra of
$\mathbf{A}$-valued moulds on~$\Omega$ is nothing but the algebra of formal series
$\mathbf{A}[[\mathrm{T}]]$, with its usual multiplication and composition laws:
the monoid of words is then isomorphic to~$\mathbb{N}$ via the map~$r$, and one can identify a
mould~$M^\bullet$ with the generating series $\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} M^\text{\boldmath{$\om$}}
\mathrm{T}^{r(\text{\boldmath{$\om$}})}$; it is then easy to check that the above definitions of
multiplication and composition boil down to the usual ones.
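For instance, writing $M^r$ for the value of a mould on the unique word of length~$r$ in $\{0\}^\bullet$, the definition of mould multiplication unfolds as
$$
(M^\bullet \times N^\bullet)^r = \sum_{r_1+r_2=r} M^{r_1} N^{r_2},
$$
which is exactly the Cauchy product of the generating series $\sum M^r\,\mathrm{T}^r$ and $\sum N^r\,\mathrm{T}^r$.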
In the case of a general set~$\Omega$, the analogue of this is to identify a
mould~$M^\bullet$ with the element
$\sum M^{\omega_1,\dotsc,\omega_r} \mathrm{T}_{\omega_1}\dotsm\mathrm{T}_{\omega_r}$
of the completion
of the free associative (non-commutative) algebra generated by the symbols
$\mathrm{T}_\eta$, $\eta\in\Omega$.
When replacing the $\mathrm{T}_\eta$'s by elements~$B_\eta$ of an $\mathbf{A}$-algebra, one gets
what is called a mould-comould expansion; we now define these objects in a
context inspired by Section~\ref{secMCexpSN}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Suppose that $(\mathscr F,\operatorname{val})$ is a complete pseudovaluation ring, possibly
non-commutative, with unit denoted by $\operatorname{Id}$, such that~$\mathscr F$ is also an
$\mathbf{A}$-algebra.
We thus have a ring homomorphism $\mu\in\mathbf{A} \mapsto \mu\operatorname{Id}\in\mathscr F$, the image of
which lies in the center of~$\mathscr F$.
\begin{definition} \label{defcomouldmult}
A comould on~$\Omega$ with values in~$\mathscr F$ is any map
$\mathbf{B}_\bullet \colon \text{\boldmath{$\om$}}\in\Omega^\bullet \mapsto \mathbf{B}_\text{\boldmath{$\om$}} \in \mathscr F$ such that
$\mathbf{B}_\emptyset = \operatorname{Id}$ and
\begin{equation} \label{eqcaraccomould}
\mathbf{B}_{\text{\boldmath{$\om$}}^1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\om$}}^2} = \mathbf{B}_{\text{\boldmath{$\om$}}^2} \mathbf{B}_{\text{\boldmath{$\om$}}^1},
\qquad \text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2 \in \Omega^\bullet.
\end{equation}
\end{definition}
Such an object could even be called {\em multiplicative comould} to emphasize
that the map $\mathbf{B}_\bullet \colon \Omega^\bullet \to \mathscr F$ is required to be a monoid
homomorphism from~$\Omega^\bullet$ to the multiplicative monoid underlying the
opposite ring of~$\mathscr F$.
Observe that there is a one-to-one correspondence between comoulds and families
$(B_\eta)_{\eta\in\Omega}$ of~$\mathscr F$ indexed by one-letter words:
the formulas $\mathbf{B}_\emptyset=\operatorname{Id}$ and $\mathbf{B}_\text{\boldmath{$\om$}} = B_{\omega_r}\dotsm B_{\omega_1}$ for
$\text{\boldmath{$\om$}} = (\omega_1,\dotsc,\omega_r) \in\Omega^\bullet$ with $r\ge1$ define a comould,
which we call the {\em comould generated by $(B_\eta)_{\eta\in\Omega}$}, and all
comoulds are obtained this way.
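For instance, if $\text{\boldmath{$\om$}}^1=(\eta_1)$ and $\text{\boldmath{$\om$}}^2=(\eta_2)$, property~\eqref{eqcaraccomould} forces $\mathbf{B}_{(\eta_1,\eta_2)} = \mathbf{B}_{(\eta_2)}\mathbf{B}_{(\eta_1)} = B_{\eta_2} B_{\eta_1}$; the reversal of the order of the factors in the general formula $\mathbf{B}_\text{\boldmath{$\om$}} = B_{\omega_r}\dotsm B_{\omega_1}$ is precisely what is needed for $\mathbf{B}_\bullet$ to be a homomorphism into the opposite ring.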
Suppose a comould $\mathbf{B}_\bullet$ is given.
For any $\mathbf{A}$-valued mould~$M^\bullet$ on~$\Omega$ such that the family
$(M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is formally summable in~$\mathscr F$ (in
particular this family has countable support---{cf.}\
Definition~\ref{defformsumfam}), we can consider the mould-comould expansion,
also called {\em contraction of~$M^\bullet$ into~$\mathbf{B}_\bullet$},
$$
\sum M^\bullet \mathbf{B}_\bullet = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} \,\in\, \mathscr F.
$$
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The example to keep in mind is related to Definition~\ref{defopval}.
Suppose that $(\mathscr A,\nu)$ is any complete pseudovaluation ring such that~$\mathscr A$ is
a commutative $\mathbf{A}$-algebra, the unit of which is denoted by~$1$; thus $\mathbf{A}$ is
identified with a subalgebra of~$\mathscr A$
(for instance $(\mathscr A,\nu)=(\mathbb{C}[[x,y]],\nu_4)$ and $\mathbf{A}=\mathbb{C}[[x]]$).
Denote by $\mathscr E$ the subalgebra of $\operatorname{End}_\mathbb{C}(\mathscr A)$ consisting of operators having
a valuation {with respect to}~$\nu$, so that $(\mathscr E,\operatorname{val}_\nu)$ is a complete pseudovaluation
ring.
Let
\begin{equation} \label{eqdefgFexemp}
\mathscr F_{\mathscr A,\mathbf{A}} =
\ao \Theta\in\mathscr E \mid \text{$\Theta$ and $\mu\operatorname{Id}$ commute for all $\mu\in\mathbf{A}$} \af
= \mathscr E \cap \operatorname{End}_\mathbf{A}(\mathscr A).
\end{equation}
We get an $\mathbf{A}$-algebra, which is a closed subset of~$\mathscr E$ for the topology
induced by~$\operatorname{val}_\nu$, thus $(\mathscr F_{\mathscr A,\mathbf{A}},\operatorname{val}_\nu)$ is also a complete pseudovaluation ring;
these are the $\mathbf{A}$-linear operators of~$\mathscr A$ having a valuation {with respect to}~$\nu$.
In practice, the $B_\eta$'s which generate a comould are related
to the homogeneous components of an operator of~$\mathscr A$ that one wishes to analyse.
In Section~\ref{secMCexpSN} for instance, the derivation $X-X_0$ of $\mathscr A=\mathbb{C}[[x,y]]$
was decomposed into a sum of multiples of $B_n$ according to~\eqref{eqdefBn},
where each term $a_n(x)B_n$ is homogeneous of degree~$n$ in the sense that it sends
$y^{n_0}\mathbb{C}[[x]]$ into $y^{n_0+n}\mathbb{C}[[x]]$ for every~$n_0$.
Observe that the commutation of the $B_\eta$'s with the image of $\mathbf{A}=\mathbb{C}[[x]]$ in~$\mathscr E$
reflects the fact that the vector field $X-X_0$ is ``fibred'' over the
variable~$x$;
similarly, one can look for a solution~$\Theta$ of equation~\eqref{eqconjugOp}
in $\mathscr F_{\mathscr A,\mathbf{A}}$ because the corresponding formal transformation $(x,y)\mapsto\th(x,y)$
is expected to be fibred likewise---{cf.}\ \eqref{eqdefthph}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Returning to the general situation, we now show how, via mould-comould
expansions, mould multiplication corresponds to multiplication in~$\mathscr F$:
\begin{prop} \label{propmultiplimould}
Suppose that $\mathbf{B}_\bullet$ is an $\mathscr F$-valued comould on~$\Omega$ and that $M^\bullet$
and~$N^\bullet$ are $\mathbf{A}$-valued moulds on~$\Omega$ such that the families
$(M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ and $(N^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
are formally summable.
Then the mould $P^\bullet = M^\bullet \times N^\bullet$ gives rise to a
formally summable family $(P^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ and
$$
\sum \left(M^\bullet \times N^\bullet \right) \mathbf{B}_\bullet =
\left( \sum N^\bullet \mathbf{B}_\bullet \right)\left( \sum M^\bullet \mathbf{B}_\bullet \right).
$$
\end{prop}
\begin{proof}
Let $\delta_*\in\mathbb{Z}$ be such that
$v_1(\text{\boldmath{$\om$}}) = \vl{M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}} \ge \delta_*$
and $v_2(\text{\boldmath{$\om$}}) = \vl{N^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}} \ge \delta_*$
for all $\text{\boldmath{$\om$}}\in\Omega^\bullet$.
Then
$$
P^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} = \sum_{\text{\boldmath{$\om$}} = \text{\boldmath{$\om$}}^1\!\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\om$}}^2}
N^{\text{\boldmath{$\om$}}^2} \mathbf{B}_{\text{\boldmath{$\om$}}^2}
M^{\text{\boldmath{$\om$}}^1} \mathbf{B}_{\text{\boldmath{$\om$}}^1}
$$
(since $\mathbf{A}$ is a commutative algebra and its image in~$\mathscr F$ commutes with
the~$\mathbf{B}_{\text{\boldmath{$\om$}}^2}$'s), thus
$\vl{P^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}} \ge \min\ao v_1(\text{\boldmath{$\om$}}^1) + v_2(\text{\boldmath{$\om$}}^2) \mid
\text{\boldmath{$\om$}} = \text{\boldmath{$\om$}}^1\!\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\om$}}^2 \af \ge 2\delta_*$
and, for any $\delta\in\mathbb{Z}$, the condition $\vl{P^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}} \le \delta$ implies
that $\text{\boldmath{$\om$}}$ can be written as $\text{\boldmath{$\om$}}^1\!\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\om$}}^2$ with
$v_1(\text{\boldmath{$\om$}}^1) \le \delta-\delta_*$ and $v_2(\text{\boldmath{$\om$}}^2) \le \delta-\delta_*$,
hence there are only finitely many such $\text{\boldmath{$\om$}}$'s.
To compute $\sum P^\bullet \mathbf{B}_\bullet$, we can suppose $\Omega$ countable (replacing it,
if necessary, by the set of all letters appearing in the union of the supports
of $(M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})$ and $(N^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})$, which is countable),
choose an exhaustion of~$\Omega$ by finite sets $\Omega_K$, $K\ge0$, and use
$ \Omega^{K,R} = \ao \text{\boldmath{$\om$}} \in \Omega^\bullet \mid r = r(\text{\boldmath{$\om$}}) \le R,\;
\omega_1,\dotsc,\omega_r \in \Omega_K \af$, $K,R\ge0$, as an exhaustion of $\Omega^\bullet$.
The conclusion follows from the identity
\begin{multline*}
\bigg( \sum_{\text{\boldmath{$\om$}}\in\Omega^{K,R}} N^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} \bigg)
\bigg( \sum_{\text{\boldmath{$\om$}}\in\Omega^{K,R}} M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} \bigg)
- \sum_{\text{\boldmath{$\om$}}\in\Omega^{K,R}} P^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} \\ =
\sum_{ \substack{ \text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2 \in\Omega^{K,R} \\
r(\text{\boldmath{$\om$}}^1) + r(\text{\boldmath{$\om$}}^2) > R }}
N^{\text{\boldmath{$\om$}}^2} \mathbf{B}_{\text{\boldmath{$\om$}}^2}
M^{\text{\boldmath{$\om$}}^1} \mathbf{B}_{\text{\boldmath{$\om$}}^1},
\end{multline*}
where the {right-hand side}\ tends to~$0$ as $K,R\to\infty$, since its valuation is at least
$\min\ao v_1(\text{\boldmath{$\om$}}^1) + v_2(\text{\boldmath{$\om$}}^2) \mid
\text{\boldmath{$\om$}}^1, \text{\boldmath{$\om$}}^2 \in \Omega^{K,R},\, r(\text{\boldmath{$\om$}}^1)+r(\text{\boldmath{$\om$}}^2)>R \af
\ge \nu_*(K,R) + \delta_*$,
with
$$
\nu_*(K,R) = \min\ao \min(v_1(\text{\boldmath{$\om$}}), v_2(\text{\boldmath{$\om$}})) \mid
\text{\boldmath{$\om$}} \in \Omega^{K,R},\, r(\text{\boldmath{$\om$}})>R/2 \af
\xrightarrow[R\to\infty]{} \infty
$$
for any $K$ (because, for any finite subset~$F$ of~$\Omega^\bullet$, $\text{\boldmath{$\om$}}\notin F$ as
soon as $r(\text{\boldmath{$\om$}})$ is large enough).
\end{proof}
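As an elementary illustration of Proposition~\ref{propmultiplimould} (a consistency check, not needed in the sequel): for a word of length two, the above formula for $P^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}$ reads
$$
P^{(\eta_1,\eta_2)} \mathbf{B}_{(\eta_1,\eta_2)} =
\big( M^\emptyset N^{(\eta_1,\eta_2)} + M^{(\eta_1)} N^{(\eta_2)} + M^{(\eta_1,\eta_2)} N^\emptyset \big)
B_{\eta_2} B_{\eta_1},
$$
whose middle term arises in the product $\big( \sum N^\bullet \mathbf{B}_\bullet \big)\big( \sum M^\bullet \mathbf{B}_\bullet \big)$ as $N^{(\eta_2)}\mathbf{B}_{(\eta_2)}\, M^{(\eta_1)}\mathbf{B}_{(\eta_1)}$; this makes visible why the order of the two contractions must be reversed.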
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Suppose $\Omega$ is a commutative semigroup.
A motivation for the definition of mould composition in Section~\ref{secAlgMoulds} is
\begin{prop} \label{propcomposexp}
Suppose that $U^\bullet$ and $M^\bullet$ are moulds such that the families
$(U^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ and
$$
\Theta_{\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^s} =
M^{\norm{\text{\boldmath{$\om$}}^1},\dotsc,\norm{\text{\boldmath{$\om$}}^s}}
U^{\text{\boldmath{$\om$}}^1} \dotsm U^{\text{\boldmath{$\om$}}^s}
\mathbf{B}_{\text{\boldmath{$\om$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\om$}}^s},
\qquad s\ge1,\; \text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^s \in \Omega^\bullet
$$
are formally summable.\footnote{
Notice that the formal summability of the second family follows from the formal
summability of the first one when the valuation $\operatorname{val}$ on~$\mathscr F$ only takes non-negative values.
}
Suppose moreover $U^\emptyset=0$, let
\begin{equation} \label{eqdefBprime}
B'_\eta = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet \,\text{s.t.}\, \norm{\text{\boldmath{$\om$}}}=\eta}
U^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}, \qquad \eta\in\Omega,
\end{equation}
and consider the comould $\mathbf{B}'_\bullet$ generated by $(B'_{\eta})_{\eta\in\Omega}$.
Then the mould $C^\bullet = M^\bullet \circ U^\bullet$ gives rise to a
formally summable family $(C^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ and
$$
\sum \left(M^\bullet \circ U^\bullet \right) \mathbf{B}_\bullet =
\sum M^\bullet \mathbf{B}'_\bullet.
$$
\end{prop}
\begin{proof}
We have $C^\emptyset \mathbf{B}_\emptyset = M^\emptyset \mathbf{B}'_\emptyset$, since $C^\emptyset = M^\emptyset$.
If $\text{\boldmath{$\om$}}$ and $\text{\boldmath{$\eta$}}$ are non-empty words in~$\Omega^\bullet$, with
$\text{\boldmath{$\eta$}} = (\eta_1,\dotsc,\eta_{\sigma})$,
$$
C^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} =
\sum_{\substack{s\ge1,\,\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^s \neq\emptyset \\
\text{\boldmath{$\om$}} = \text{\boldmath{$\om$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\om$}}^s} }
\Theta_{\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^s}, \qquad
M^\text{\boldmath{$\eta$}} \mathbf{B}'_\text{\boldmath{$\eta$}} =
\sum_{ \substack{\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^{\sigma} \neq\emptyset \\
\norm{\text{\boldmath{$\om$}}^1}=\eta_1, \dotsc, \norm{\text{\boldmath{$\om$}}^{\sigma}}=\eta_{\sigma}} }
\Theta_{\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^{\sigma}}.
$$
The conclusion follows easily.
\end{proof}
The idea is that, when indexation by $\eta\in\Omega$ corresponds to a decomposition
of an element of~$\mathscr F$ into homogeneous components, we use the mould~$U^\bullet$ to
go from $X = \sum_{\eta\in\Omega} B_\eta$ to
$Y = \sum_{\eta\in\Omega} B'_\eta$
by contracting it into the comould~$\mathbf{B}_\bullet$ associated with~$X$; then we
use~$M^\bullet$ to go from~$Y$ to the contraction~$Z$ of~$M^\bullet$ into the
comould~$\mathbf{B}'_\bullet$ associated with~$Y$.
Mould composition thus reflects the composition of these operations on elements
of~$\mathscr F$, $X\mapsto Y$ and $Y\mapsto Z$.
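At length two, for instance, the definition of mould composition unfolds as
$$
(M^\bullet \circ U^\bullet)^{(\omega_1,\omega_2)} =
M^{(\omega_1+\omega_2)}\, U^{(\omega_1,\omega_2)} + M^{(\omega_1,\omega_2)}\, U^{(\omega_1)} U^{(\omega_2)},
$$
the two terms corresponding to the two ways of cutting $(\omega_1,\omega_2)$ into non-empty consecutive blocks, of norms $\omega_1+\omega_2$, respectively $\omega_1$ and~$\omega_2$.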
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
For example, suppose that $(\mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is formally
summable. Then, in particular, $(B_\eta)_{\eta\in\Omega}$ is formally summable, and
$X=\sum B_\eta$ is ``exponentiable'': for any $t\in\mathbb{C}$, the series $\exp(tX) =
\sum_{s\ge0}\frac{t^s}{s!}X^s$ is convergent; moreover, $\exp(tX) =
\sum \exp_t^\bullet \mathbf{B}_\bullet$.
On the other hand, $\operatorname{Id}+X$ has an ``infinitesimal generator'': the series $Y =
\sum_{s\ge1} \frac{(-1)^{s-1}}{s}X^s$ is convergent and $\exp(Y)=\operatorname{Id}+X$; one has
$Y = \sum \log^\bullet\mathbf{B}_\bullet$.
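Indeed, since $\mathbf{B}_\text{\boldmath{$\om$}} = B_{\omega_r} \dotsm B_{\omega_1}$, one has $X^s = \sum_{r(\text{\boldmath{$\om$}})=s} \mathbf{B}_\text{\boldmath{$\om$}}$ for every $s\ge0$, so that these two expansions correspond to the moulds $\exp_t^{\text{\boldmath{$\om$}}} = t^{r(\text{\boldmath{$\om$}})}/r(\text{\boldmath{$\om$}})!$ and, for non-empty~$\text{\boldmath{$\om$}}$, $\log^{\text{\boldmath{$\om$}}} = (-1)^{r(\text{\boldmath{$\om$}})-1}/r(\text{\boldmath{$\om$}})$, which depend on a word only through its length.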
Now, if $U^\bullet \in \mathfrak L^\bullet(\Omega,\mathbf{A})$ is such that $(U^{\text{\boldmath{$\om$}}^1}\dotsm
U^{\text{\boldmath{$\om$}}^s} \mathbf{B}_{\text{\boldmath{$\om$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\om$}}^s})_{s\ge1,\,\text{\boldmath{$\om$}}^1,\dotsc,\text{\boldmath{$\om$}}^s\in\Omega^\bullet}$
is formally summable, then in particular $(U^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is formally
summable and $X' = \sum U^\bullet\mathbf{B}_\bullet$ is exponentiable, with
$\exp(tX') = \sum (\exp_t^\bullet \circ\, U^\bullet)\mathbf{B}_\bullet$
for any $t\in\mathbb{C}$.
Similarly, if $M^\bullet= 1^\bullet+V^\bullet \in G^\bullet(\Omega,\mathbf{A})$ with $(V^{\text{\boldmath{$\om$}}^1}\dotsm
V^{\text{\boldmath{$\om$}}^s} \mathbf{B}_{\text{\boldmath{$\om$}}^1\raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\,\text{\boldmath{$\om$}}^s})$ formally
summable, then $\sum M^\bullet\mathbf{B}_\bullet$ has infinitesimal generator
$\sum (\log^\bullet \circ\, V^\bullet)\mathbf{B}_\bullet$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
For the interpretation of the mould derivations $\nabla_{U^\bullet}$
defined by~\eqref{eqdefnaUbul},
consider a situation similar to that of Proposition~\ref{propcomposexp}, with a
comould $\mathbf{B}_\bullet \colon \Omega^\bullet \to \mathscr F$, a mould $U^\bullet \in \mathfrak L^\bullet(\Omega,\mathbf{A})$ such that $(U^\text{\boldmath{$\om$}}
\mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is formally summable and, for each $\eta\in\Omega$,
$B'_\eta \in \mathscr F$ still defined by~\eqref{eqdefBprime}.
But instead of considering the comould~$\mathbf{B}'_\bullet$ generated by $(B'_\eta)$,
{i.e.}\
$\mathbf{B}'_\text{\boldmath{$\om$}} = B'_{\omega_r} \dotsm B'_{\omega_1}$ for $\text{\boldmath{$\om$}} = (\omega_1,\dotsc,\omega_r)$,
set
$$
\mathbf{B}'_\text{\boldmath{$\om$}} = \sum_{\text{\boldmath{$\om$}} = \text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}},\, r(\text{\boldmath{$\beta$}})=1}
\mathbf{B}_\text{\boldmath{$\gamma$}} B'_\text{\boldmath{$\beta$}} \mathbf{B}_\text{\boldmath{$\alpha$}},
\qquad \text{\boldmath{$\om$}} \in \Omega^\bullet
$$
{i.e.}\ $\mathbf{B}'_\emptyset = 0$ and
$\mathbf{B}'_\text{\boldmath{$\om$}} = \sum_{i=1}^r B_{\omega_r}\dotsm B_{\omega_{i+1}} B'_{\omega_i}
B_{\omega_{i-1}} \dotsm B_{\omega_1}$ for $r\ge1$
(beware that $\mathbf{B}'_\bullet \colon \Omega^\bullet \to \mathscr F$ is not a comould, since
multiplicativity fails).
Then one can check the formal summability
of $\big( (\nabla_{U^\bullet} M^\text{\boldmath{$\om$}}) \mathbf{B}_\text{\boldmath{$\om$}} \big)_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
for any mould $M^\bullet$ such that the families
$(M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ and
$(M^{\text{\boldmath{$\alpha$}},\norm{\text{\boldmath{$\beta$}}},\text{\boldmath{$\gamma$}}} U^\text{\boldmath{$\beta$}} \mathbf{B}_{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\gamma$}}})_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}},\text{\boldmath{$\gamma$}}\in\Omega^\bullet}$
are formally summable, with
$$
\sum \left( \nabla_{U^\bullet} M^\bullet \right) \mathbf{B}_\bullet = \sum M^\bullet \mathbf{B}'_\bullet.
$$
If, moreover, there is an $\mathbf{A}$-linear derivation $\mathscr D \colon \mathscr F \to \mathscr F$ such
that $\mathscr D B_\eta = B'_\eta$ for each $\eta\in\Omega$, then $\mathbf{B}'_\text{\boldmath{$\om$}}$ is nothing
but $\mathscr D \mathbf{B}_\text{\boldmath{$\om$}}$ (by the Leibniz rule applied to the product $B_{\omega_r}\dotsm B_{\omega_1}$) and the previous identity takes the form
$$
\sum \left( \nabla_{U^\bullet} M^\bullet \right) \mathbf{B}_\bullet
= \mathscr D \left( \sum M^\bullet \mathbf{B}_\bullet \right).
$$
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
For a given commutative algebra~$\mathbf{A}$, we now consider the case where
$$
\Omega \subset \mathbb{Z}^n, \quad \mathscr A = \mathbf{A}[[y_1,\ldots,y_n]],
$$
for a fixed $n\in\mathbb{N}^*$.
\begin{definition} \label{defhomog}
Given $\eta\in\mathbb{Z}^n$ and $\Theta\in\operatorname{End}_\mathbf{A}(\mathbf{A}[[y_1,\dotsc,y_n]])$, we say that $\Theta$ is homogeneous of
degree~$\eta$ if $\Theta y^m \in \mathbf{A} y^{m+\eta}$ for every $m\in\mathbb{N}^n$
(with the usual notation $y^m=y_1^{m_1}\dotsm y_n^{m_n}$ for monomials).
\end{definition}
For example, any $\lambda\in\mathbf{A}^n$ gives rise to an operator
\begin{equation} \label{eqdefgXla}
\mathscr X_\lambda = \lambda_1 y_1 \frac{\partial\;}{\partial y_1} +
\dotsb + \lambda_n y_n \frac{\partial\;}{\partial y_n}
\end{equation}
which is homogeneous of degree~$0$, since
$\mathscr X_\lambda y^m = \langle m, \lambda\rangle y^m$.
Suppose moreover that we are given a pseudovaluation $\operatorname{val} \colon \mathscr A \to
\mathbb{Z}\cup\{\infty\}$ such that $(\mathscr A,\operatorname{val})$ is complete and
$\frac{\partial\;}{\partial y_1},\dotsc,\frac{\partial\;}{\partial y_n}$ are continuous,
and consider $\mathscr F = \mathscr F_{\mathscr A,\mathbf{A}}$ as defined by~\eqref{eqdefgFexemp}.
We suppose $\Omega\subset\mathbb{Z}^n$ because we are interested in $\mathscr F$-valued {\em
homogeneous} comoulds, {i.e.}\ $\mathscr F$-valued comoulds~$\mathbf{B}_\bullet$ such that
$\mathbf{B}_\text{\boldmath{$\om$}}$ is homogeneous of degree
$\norm{\text{\boldmath{$\om$}}} = \omega_1 + \dotsb + \omega_r \in \mathbb{Z}^n$
for every non-empty $\text{\boldmath{$\om$}}\in\Omega^\bullet$,
and homogeneous of degree~$0$ for $\text{\boldmath{$\om$}}=\emptyset$;
in fact, the multiplicativity property~\eqref{eqcaraccomould} will not be used
in what follows: the following proposition holds for any map $\mathbf{B}_\bullet\colon
\Omega^\bullet \to \mathscr F$, provided it is homogeneous in the sense just defined.
In the case of a comould satisfying the multiplicativity property as required in
Definition~\ref{defcomouldmult}, homogeneity is equivalent to the fact that each
$B_\eta = \mathbf{B}_{(\eta)}$, $\eta\in\Omega$, is homogeneous of degree~$\eta$.
\begin{prop} \label{propcommutXla}
Let $\lambda\in\mathbf{A}^n$ and $\mathbf{B}_\bullet$ be an $\mathscr F$-valued homogeneous comould.
Then, for every $\mathbf{A}$-valued mould~$M^\bullet$ such that $(M^\text{\boldmath{$\om$}}
\mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is formally summable,
\begin{equation} \label{eqXlaDph}
\big[ \mathscr X_\lambda, \sum M^\bullet \mathbf{B}_\bullet \big] = \sum (D_\varphi M^\bullet) \mathbf{B}_\bullet,
\qquad \text{with} \enspace
\varphi = \langle \, \cdot \,,\lambda \rangle \colon \Omega \to \mathbf{A}.
\end{equation}
\end{prop}
Thus, this mould derivation~$D_\varphi$ reflects the action of the derivation $\operatorname{ad}_{\mathscr X_\lambda}$ of $\operatorname{End}_\mathbf{A}(\mathscr A)$.
\begin{proof}
We first check that, if $\Theta\in\operatorname{End}_\mathbf{A}(\mathbf{A}[[y_1,\dotsc,y_n]])$ is homogeneous of
degree $\eta\in\mathbb{Z}^n$, then
$$
[ \mathscr X_\lambda, \Theta ] = \langle \eta,\lambda \rangle \Theta.
$$
By $\mathbf{A}$-linearity and continuity, it is sufficient to check that both operators
act the same way on a monomial~$y^m$.
We have $\Theta y^m = \beta_{m} y^{m+\eta}$ with a $\beta_{m} \in \mathbf{A}$,
thus $\mathscr X_\lambda \Theta y^m = \langle m+\eta,\lambda \rangle \beta_{m} y^{m+\eta}
= \langle m+\eta,\lambda \rangle \Theta y^m$
while $\Theta \mathscr X_\lambda y^m = \langle m,\lambda \rangle \Theta y^m$, hence
$[ \mathscr X_\lambda, \Theta ] y^m = \langle\eta,\lambda\rangle \Theta y^m$ as required.
It follows that
$$
[ \mathscr X_\lambda, \mathbf{B}_\text{\boldmath{$\om$}} ] = \big(\varphi(\omega_1)+\dotsb+\varphi(\omega_r)\big) \mathbf{B}_\text{\boldmath{$\om$}}, \qquad
\text{\boldmath{$\om$}} = (\omega_1,\dotsc,\omega_r) \in \Omega^\bullet.
$$
Let $N^\bullet = D_\varphi M^\bullet$.
For any exhaustion of~$\Omega^\bullet$ by finite sets $I_k$, letting
$\Theta_k = \sum_{\text{\boldmath{$\om$}}\in I_k} M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}$ and
$\Theta'_k = \sum_{\text{\boldmath{$\om$}}\in I_k} N^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}}$, we get
$[\mathscr X_\lambda,\Theta_k] = \Theta'_k$.
For every $f\in\mathscr A$, we have $\Theta'_k f \xrightarrow[k\to\infty]{}\sum N^\bullet
\mathbf{B}_\bullet f$ on the one hand,
while, by continuity of~$\mathscr X_\lambda$,
$[\mathscr X_\lambda,\Theta_k]f \xrightarrow[k\to\infty]{} \big[\mathscr X_\lambda,\sum M^\bullet \mathbf{B}_\bullet\big]f$
on the other hand; since $[\mathscr X_\lambda,\Theta_k] = \Theta'_k$, the two limits coincide, which proves~\eqref{eqXlaDph}.
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Notice that, in the above situation,
any $\mathbb{C}$-linear derivation $d \colon \mathbf{A} \to \mathbf{A}$ induces a derivation
$\tilde d \colon \mathscr A \to \mathscr A$ (defined by $\tilde d \sum a_m y^m = \sum (d a_m) y^m$) and a mould
derivation~$D$ (defined just before Remark~\ref{remmouldVsol}).
If $\tilde d$ commutes with the $B_\eta$, $\eta\in\Omega$, one easily gets
\begin{equation} \label{eqcommuttid}
\big[ \tilde d, \sum M^\bullet \mathbf{B}_\bullet \big] = \sum (D M^\bullet) \mathbf{B}_\bullet.
\end{equation}
On the other hand, $D_\varphi M^\text{\boldmath{$\om$}} = \langle \norm{\text{\boldmath{$\om$}}},\lambda \rangle M^\text{\boldmath{$\om$}}$ if $\varphi = \langle \, \cdot,\lambda \rangle$.
Thus $D_\varphi = \nabla$ when $n=1$ and $\lambda=1$.
This is the relevant situation for the saddle-node:
\begin{cor} \label{corEC}
Choose $\Omega = \mathcal{N}$ as in~\eqref{eqdefiancN}, $\mathbf{A} = \mathbb{C}[[x]]$ and
$(\mathscr A,\operatorname{val}) = (\mathbf{A}[[y]],\nu_4)$, $\mathscr F = \mathscr F_{\mathscr A,\mathbf{A}}$.
Let $\mathbf{B}_\bullet$ denote the $\mathscr F$-valued comould generated by $B_\eta =
y^{\eta+1}\frac{\partial\,}{\partial y}$.
Let $(a_\eta)_{\eta\in\Omega}$ be as in~\eqref{eqassan} and
$$
X_0 = x^2 \frac{\partial\,}{\partial x} + y \frac{\partial\,}{\partial y},
\qquad
X = X_0 + \sum_{\eta\in\Omega} a_\eta B_\eta.
$$
Then the mould-comould contraction $\Theta=\sum \mathcal{V}^\bullet \mathbf{B}_\bullet \in \mathscr F$,
where~$\mathcal{V}^\bullet$ is determined by Lemma~\ref{lemdefcV}, is a solution of the
conjugacy equation~\eqref{eqconjugOp} $\Theta X = X_0 \Theta$ in $\operatorname{End}_\mathbb{C}(\mathscr A)$.
\end{cor}
\begin{proof}
It was already observed that each $B_\eta$ is homogeneous of degree~$\eta$
and the formal summability of $(\mathcal{V}^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
was checked in Lemma~\ref{lemCVformal}.
Let $d = x^2\frac{{\mathrm d}\,}{{\mathrm d} x} \colon \mathbf{A} \to \mathbf{A}$.
The corresponding derivation of~$\mathscr A$ is $\tilde d = x^2\frac{\partial\,}{\partial x}$, which
commutes with the $B_\eta$'s.
On the other hand, with the notation of Proposition~\ref{propcommutXla},
$X_0 = \tilde d + \mathscr X_1$.
Since $X-X_0 = \sum J_a^\bullet \mathbf{B}_\bullet$ with the notation of Remark~\ref{remmouldVsol},
equation~\eqref{eqconjugOp} is equivalent to
$\big[\tilde d + \mathscr X_1,\Theta\big] = \Theta \sum J_a^\bullet \mathbf{B}_\bullet$;
plugging any formally convergent mould-comould expansion $\Theta = \sum M^\bullet
\mathbf{B}_\bullet$ into it, we find $\sum (D M^\bullet+\nabla M^\bullet) \mathbf{B}_\bullet$ for the {left-hand side}\
by~\eqref{eqXlaDph} and~\eqref{eqcommuttid} while, according to
Proposition~\ref{propmultiplimould}, the {right-hand side}\ can be written $\sum
(J_a^\bullet\times M^\bullet) \mathbf{B}_\bullet$, hence the conclusion follows
from~\eqref{eqmouldeq}.
\end{proof}
\begin{remark} \label{remcommentpf}
The symmetrality of the mould~$\mathcal{V}^\bullet$ obtained in Proposition~\ref{propcVsym}
shows us that $\Theta$ is invertible, with inverse
$\Theta^{-1} = \sum {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet \mathbf{B}_\bullet$.
The proof of Theorem~\ref{thmSNformal} will thus be complete when we have
checked that $\Theta$ is an algebra automorphism; this will follow from the results
of the next section on the contraction of symmetral moulds into a comould generated
by derivations.
\end{remark}
\section{Contraction into a cosymmetral comould} \label{secContrAltSym}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
For the interpretation of alternality and symmetrality of moulds in terms
of the corresponding mould-comould expansions, we focus on the case where the
comould~$\mathbf{B}_\bullet$ is generated by a family of $\mathbf{A}$-linear derivations
$(B_\eta)_{\eta\in\Omega}$ of a commutative algebra~$\mathscr A$.
The main result of this section is Proposition~\ref{propaltsym} below, according to
which, in this case,
\emph{the contraction of an alternal mould into~$\mathbf{B}_\bullet$ gives rise to a
derivation}
and \emph{the contraction of a symmetral mould gives rise to an algebra
automorphism}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We thus assume that $(\mathscr A,\nu)$ is a complete pseudovaluation ring such that~$\mathscr A$ is
a commutative $\mathbf{A}$-algebra, and we define $\mathscr F = \mathscr F_{\mathscr A,\mathbf{A}}$ by~\eqref{eqdefgFexemp}.
Since we shall be interested in the way the elements of~$\mathscr F$ act on products of
elements of~$\mathscr A$, we consider the left $\mathscr F$-module $\operatorname{Bil}_\mathbf{A}$ of $\mathbf{A}$-bilinear maps
from $\mathscr A\times\mathscr A$ to~$\mathscr A$
(the module multiplication being the composition $(\Theta,\Phi)\in\mathscr F\times\operatorname{Bil}_\mathbf{A} \mapsto \Theta\circ\Phi\in\operatorname{Bil}_\mathbf{A}$)
and its filtration
$$
\mathscr B_\delta = \ao \Phi \in \operatorname{Bil}_\mathbf{A} \mid \nu\big( \Phi(f,g) \big) \ge \nu(f)+\nu(g)+\delta
\;\text{for all $f,g\in\mathscr A$} \af, \qquad \delta\in\mathbb{Z}.
$$
By defining $\mathscr B = \bigcup_{\delta\in\mathbb{Z}} \mathscr B_\delta$ we get a left $\mathscr F$-submodule of
$\operatorname{Bil}_\mathbf{A}$, for which the filtration $(\mathscr B_\delta)_{\delta\in\mathbb{Z}}$ is exhaustive,
separated and compatible with the filtration of~$\mathscr F$ induced by~$\operatorname{val}_\nu$: the
subgroups $\mathscr F_\delta = \ao \Theta\in\mathscr F \mid \vln{\Theta}\ge\delta \af$ satisfy
$\mathscr F_\delta \mathscr B_{\delta'} \subset \mathscr B_{\delta+\delta'}$ for all $\delta,\delta'\in\mathbb{Z}$.
The corresponding distance on~$\mathscr B$ is complete, by completeness of
$(\mathscr A,\nu)$.
We now define a map ${\sigma} \colon \mathscr F \to \mathscr B$ by
$\Theta\in\mathscr F \mapsto {\sigma}(\Theta) = \Phi$ such that
$$
\Phi \colon
(f,g) \in \mathscr A\times\mathscr A \mapsto \Phi(f,g) = \Theta(fg) \in \mathscr A.
$$
This map is to be understood as a kind of coproduct.
Observe that ${\sigma}$ is $\mathscr F$-linear, {i.e.}\ ${\sigma}(\Theta \Theta')=\Theta{\sigma}(\Theta')$
(thus it boils down to ${\sigma}(\Theta)=\Theta\circ{\sigma}(\operatorname{Id})$, and ${\sigma}(\operatorname{Id})$ is just the
multiplication of~$\mathscr A$)
and continuous because ${\sigma}(\mathscr F_\delta) \subset \mathscr B_\delta$ for each $\delta\in\mathbb{Z}$.
Viewing $\mathscr F$ as an $\mathbf{A}$-module, we also define an $\mathbf{A}$-linear map
$$
\rho \colon \mathscr F_2 = \mathscr F \otimes_\mathbf{A} \mathscr F \to \mathscr B
$$
by its action on decomposable elements:
$$
\rho(\Theta_1\otimes\Theta_2)(f,g) = (\Theta_1 f)(\Theta_2 g), \qquad f,g\in\mathscr A
$$
for any $\Theta_1,\Theta_2\in\mathscr F$.
(A remark parallel to the remark on Definition~\ref{defidec} applies: the kernel
of~$\rho$ is the torsion submodule of~$\mathscr F_2$ when $\mathscr A$ is an integral
domain; if moreover $\mathbf{A}$ is principal, then $\rho$ is injective.)
Notice that, for $\Theta_1\in\mathscr F_{\delta_1}$ and $\Theta_2\in\mathscr F_{\delta_2}$, one has
$\rho(\Theta_1\otimes\Theta_2) \in \mathscr B_{\delta_1+\delta_2}$,
hence the map $\tilde\rho \colon (\Theta_1,\Theta_2) \mapsto \rho(\Theta_1\otimes\Theta_2)$
from $\mathscr F\times\mathscr F$ to~$\mathscr B$ is continuous.
Using the $\mathbf{A}$-algebra structure of~$\mathscr F_2$, we see that
\begin{equation} \label{eqCorrespAlg}
{\sigma}(\Theta) = \rho(\xi),\; {\sigma}(\Theta') = \rho(\xi')
\quad \Rightarrow \quad
{\sigma}(\Theta\Th') = \rho(\xi\xi')
\end{equation}
for any $\Theta,\Theta' \in \mathscr F$, $\xi,\xi'\in\mathscr F_2$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
With the above notations, the set of all $\mathbf{A}$-linear derivations of~$\mathscr A$
having a valuation is
$$
\mathfrak L_\mathscr F = \ao \Theta\in\mathscr F \mid {\sigma}(\Theta) = \rho(\Theta\otimes\operatorname{Id} + \operatorname{Id}\otimes\Theta) \af
$$
(it is a Lie algebra for the bracketing $[\Theta_1,\Theta_2] = \Theta_1\Theta_2 - \Theta_2\Theta_1$).
Letting $\mathscr F^*$ denote the multiplicative group of invertible elements of~$\mathscr F$,
we may also consider its subgroup
$$
G_\mathscr F = \ao \Theta\in\mathscr F^* \mid {\sigma}(\Theta) = \rho(\Theta\otimes\Theta) \af,
$$
the elements of which are $\mathbf{A}$-linear algebra automorphisms of~$\mathscr A$.
\begin{lemma} \label{lemcosym}
Assume that the generators $B_\eta$, $\eta\in\Omega$, of an $\mathscr F$-valued
comould~$\mathbf{B}_\bullet$ all belong to~$\mathfrak L_\mathscr F$.
Then
\begin{equation} \label{eqmotivsh}
{\sigma}(\mathbf{B}_\text{\boldmath{$\om$}}) = \sum_{\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2\in\Omega^\bullet} \sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}}
\rho( \mathbf{B}_{\text{\boldmath{$\om$}}^1} \otimes \mathbf{B}_{\text{\boldmath{$\om$}}^2} ), \qquad \text{\boldmath{$\om$}}\in\Omega^\bullet.
\end{equation}
\end{lemma}
Such a comould is said to be {\em cosymmetral}.
\begin{proof}
Let $\text{\boldmath{$\om$}}=(\omega_1,\ldots,\omega_r)\in\Omega^\bullet$.
We proceed by induction on~$r$.
Equation~\eqref{eqmotivsh} holds if $r=0$, since ${\sigma}(\operatorname{Id})=\rho(\operatorname{Id}\otimes\operatorname{Id})$,
or $r=1$ (by assumption); we thus suppose $r\ge2$.
By~\eqref{eqcaraccomould}, we can write $\mathbf{B}_\text{\boldmath{$\om$}} = \mathbf{B}_{`\text{\boldmath{$\om$}}} B_{\omega_1}$ with
$`\text{\boldmath{$\om$}} = (\omega_2,\dotsc,\omega_r)$.
Using the induction hypothesis and~\eqref{eqCorrespAlg}, we get
${\sigma}(\mathbf{B}_\text{\boldmath{$\om$}})=\rho(\xi)$ with
\begin{multline*}
\xi = \sum_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{`\text{\boldmath{$\om$}}}
(\mathbf{B}_\text{\boldmath{$\alpha$}}\otimes\mathbf{B}_\text{\boldmath{$\beta$}})(B_{\omega_1}\otimes\operatorname{Id} + \operatorname{Id}\otimes B_{\omega_1}) \\
= \sum_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{`\text{\boldmath{$\om$}}}
(\mathbf{B}_{\omega_1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\alpha$}}}\otimes\mathbf{B}_\text{\boldmath{$\beta$}} + \mathbf{B}_\text{\boldmath{$\alpha$}}\otimes\mathbf{B}_{\omega_1\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}}).
\end{multline*}
This coincides with
$\displaystyle\sum_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet} \tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}} \mathbf{B}_\text{\boldmath{$\alpha$}}\otimes\mathbf{B}_\text{\boldmath{$\beta$}}$,
since
$$
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}} = \sh{`\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{`\text{\boldmath{$\om$}}} 1_{\{\alpha_1=\omega_1\}}
+ \sh{\text{\boldmath{$\alpha$}}}{`\text{\boldmath{$\beta$}}}{`\text{\boldmath{$\om$}}} 1_{\{\beta_1=\omega_1\}}
$$
(particular case of~\eqref{eqidentiteshdeux} with $\text{\boldmath{$\gamma$}}^1=\omega_1$ and $\text{\boldmath{$\gamma$}}^2=`\text{\boldmath{$\om$}}$).
\end{proof}
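As a concrete instance of~\eqref{eqmotivsh} at length two: applying the Leibniz rule twice,
$$
\mathbf{B}_{(\eta_1,\eta_2)}(fg) = B_{\eta_2} B_{\eta_1} (fg)
= \big( \mathbf{B}_{(\eta_1,\eta_2)} f \big)\, g
+ \big( B_{\eta_1} f \big) \big( B_{\eta_2} g \big)
+ \big( B_{\eta_2} f \big) \big( B_{\eta_1} g \big)
+ f \, \mathbf{B}_{(\eta_1,\eta_2)}\, g,
$$
i.e.\ ${\sigma}(\mathbf{B}_{(\eta_1,\eta_2)}) = \rho\big( \mathbf{B}_{(\eta_1,\eta_2)} \otimes \operatorname{Id} + \mathbf{B}_{(\eta_1)} \otimes \mathbf{B}_{(\eta_2)} + \mathbf{B}_{(\eta_2)} \otimes \mathbf{B}_{(\eta_1)} + \operatorname{Id} \otimes \mathbf{B}_{(\eta_1,\eta_2)} \big)$, in which one recognizes the shuffling coefficients of the word $(\eta_1,\eta_2)$.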
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We are now ready to study the effect of alternality or symmetrality in this context.
\label{secaltsym} \label{secmotivsh}
\begin{prop} \label{propaltsym}
Suppose that $\mathbf{B}_\bullet$ is an $\mathscr F$-valued cosymmetral comould
and let $M^\bullet \in \mathscr M^\bullet(\Omega,\mathbf{A})$ be such that
$(M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is formally summable.
Then:
\begin{enumerate}[--]
\item If $M^\bullet$ is alternal, then $\sum M^\bullet \mathbf{B}_\bullet \in \mathfrak L_\mathscr F$.
\item If $M^\bullet$ is symmetral, then $\sum M^\bullet \mathbf{B}_\bullet \in G_\mathscr F$.
\item More generally, denoting by $\mathbf{M}^{\bul,\bul}$ the image of~$M^\bullet$ by the
homomorphism~$\tau$ of Lemma~\ref{lemhomomtau} and assuming that the family
$\big( \rho(\mathbf{M}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} \mathbf{B}_{\text{\boldmath{$\alpha$}}}\otimes\mathbf{B}_{\text{\boldmath{$\beta$}}})
\big)_{(\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})\in\Omega^\bullet\times\Omega^\bullet}$
is formally summable in~$\mathscr B$,
\begin{equation} \label{eqsigtau}
{\sigma}\left( \sum M^\bullet \mathbf{B}_\bullet \right)
= \sum_{(\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})\in\Omega^\bullet\times\Omega^\bullet}
\rho(\mathbf{M}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} \mathbf{B}_{\text{\boldmath{$\alpha$}}}\otimes\mathbf{B}_{\text{\boldmath{$\beta$}}}).
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
Let $\delta_*\in\mathbb{Z}$ be such that $v(\text{\boldmath{$\om$}}) = \vln{M^\text{\boldmath{$\om$}}\mathbf{B}_\text{\boldmath{$\om$}}} \ge \delta_*$ for all $\text{\boldmath{$\om$}}\in\Omega^\bullet$,
and set $\Theta = \sum M^\bullet \mathbf{B}_\bullet$.
We shall use the notation
$\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = \mathbf{M}^{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} \mathbf{B}_{\text{\boldmath{$\alpha$}}}\otimes\mathbf{B}_{\text{\boldmath{$\beta$}}}$
for $(\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})\in\Omega^\bullet\times\Omega^\bullet$.
Lemma~\ref{lemtaualtsym} yields
$$
\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = 1_{\{\text{\boldmath{$\beta$}}=\emptyset\}} M^\text{\boldmath{$\alpha$}}\mathbf{B}_\text{\boldmath{$\alpha$}} \otimes \operatorname{Id}
+ 1_{\{\text{\boldmath{$\alpha$}}=\emptyset\}} \operatorname{Id} \otimes M^\text{\boldmath{$\beta$}}\mathbf{B}_\text{\boldmath{$\beta$}}
$$
for $M^\bullet$ alternal and
$\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}} = M^\text{\boldmath{$\alpha$}} \mathbf{B}_\text{\boldmath{$\alpha$}} \otimes M^\text{\boldmath{$\beta$}} \mathbf{B}_\text{\boldmath{$\beta$}}$
for $M^\bullet$ symmetral.
In both cases, the set $\ao (\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})\in\Omega^\bullet\times\Omega^\bullet \mid
\rho(\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}}) \notin \mathscr B_\delta \af$ is thus finite for any $\delta\in\mathbb{Z}$, in
view of the formal summability hypothesis,
and the sum of the family $\big(\rho(\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}})\big)$ is respectively
$\rho(\Theta\otimes\operatorname{Id}+\operatorname{Id}\otimes\Theta)$ or $\rho(\Theta\otimes\Theta)$, by continuity of~$\tilde\rho$.
Therefore it is sufficient to prove the third property
(the invertibility of~$\Theta$ when $M^\bullet$ is symmetral is a simple consequence
of Proposition~\ref{propmultiplimould} and of the invertibility of~$M^\bullet$).
We thus assume $\big(\rho(\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}})\big)$ formally summable in~$\mathscr B$.
As in the proof of Proposition~\ref{propmultiplimould}, we can suppose $\Omega$
countable and choose an exhaustion of~$\Omega^\bullet$ by finite sets of the
form~$\Omega^{K,R}$.
Then, by virtue of the definition of~$\tau$ and of Lemma~\ref{lemcosym},
\begin{multline*}
A_{K,R} :=
\sum_{(\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})\in\Omega^{K,R}\times\Omega^{K,R}} \rho(\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}})
- {\sigma}\Bigg( \sum_{\text{\boldmath{$\om$}}\in\Omega^{K,R}} M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} \Bigg) \\
= \Bigg( \sum_{ \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^{K,R}, \, \text{\boldmath{$\om$}}\in\Omega^\bullet }
- \sum_{ \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^\bullet, \, \text{\boldmath{$\om$}}\in\Omega^{K,R} } \Bigg)
\sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}} M^\text{\boldmath{$\om$}} \rho(\mathbf{B}_\text{\boldmath{$\alpha$}}\otimes\mathbf{B}_\text{\boldmath{$\beta$}}) \\
= \sum_{ \substack{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^{K,R} \\ \text{s.t.}\, r(\text{\boldmath{$\alpha$}})+r(\text{\boldmath{$\beta$}})>R} }
\rho(\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}})
\end{multline*}
(the last equality stems from the fact that, if $\text{\boldmath{$\om$}}\in\Omega^{K,R}$, then
$\tsh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}}\neq0$ implies $\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in\Omega^{K,R}$).
The formal summability of $\big(\rho(\Phi_{\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}})\big)$ yields
$A_{K,R}\to0$ as $K,R\to\infty$, which is the desired result since ${\sigma}$ is
continuous.
\end{proof}
\begin{remark}
The proof of Theorem~\ref{thmSNformal} is now complete:
in view of the symmetrality of~$\mathcal{V}^\bullet$ with $\Omega=\mathcal{N}$ and $\mathbf{A}=\mathbb{C}[[x]]$
(Proposition~\ref{propcVsym})
and the cosymmetrality of~$\mathbf{B}_\bullet$ defined by~\eqref{eqdefbBn}
with $\mathscr F=\mathscr F_{\mathscr A,\mathbf{A}}$, $(\mathscr A,\operatorname{val}) = (\mathbb{C}[[x,y]],\nu_4)$,
Proposition~\ref{propaltsym} shows that $\Theta = \sum \mathcal{V}^\bullet \mathbf{B}_\bullet$ is an
automorphism of~$\mathscr A$.
As noticed in Remark~\ref{remcommentpf}, this was the only thing which remained to be
checked.
\end{remark}
\label{secPfThm}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Another way of checking that the contraction of an alternal mould into a
cosymmetral comould~$\mathbf{B}_\bullet$ is a derivation is to express it as a sum of iterated Lie
brackets of the derivations~$B_\eta$ which generate the comould.
For $\text{\boldmath{$\om$}}=(\omega_1,\dotsc,\omega_r)\in\Omega^\bullet$ with $r\ge2$, let
$$
\mathbf{B}_{[\text{\boldmath{$\om$}}]} = [ B_{\omega_r}, [ B_{\omega_{r-1}}, [ \dotsb
[B_{\omega_2}, B_{\omega_1}] \dotsb ]]].
$$
One can check that, for any alternal mould~$M^\bullet$ and for any finite
subset~$\Omega_{\!f}$ of~$\Omega$,
$$
\sum_{\text{\boldmath{$\om$}}\in\Omega_{\!f}^r} M^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} =
\frac{1}{r} \sum_{\text{\boldmath{$\om$}}\in\Omega_{\!f}^r} M^\text{\boldmath{$\om$}} \mathbf{B}_{[\text{\boldmath{$\om$}}]},
\qquad r\ge2
$$
(identifying $\Omega_{\!f}^r$ with the set of all words of length~$r$ the letters of
which belong to~$\Omega_{\!f}$).
The proof is left to the reader.
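For $r=2$, at least, the verification is immediate: alternality gives $M^{(\omega_2,\omega_1)} = -M^{(\omega_1,\omega_2)}$, whence
$$
\frac{1}{2} \sum_{\omega_1,\omega_2\in\Omega_{\!f}} M^{(\omega_1,\omega_2)} [B_{\omega_2}, B_{\omega_1}]
= \frac{1}{2} \sum_{\omega_1,\omega_2\in\Omega_{\!f}}
\big( M^{(\omega_1,\omega_2)} - M^{(\omega_2,\omega_1)} \big) B_{\omega_2} B_{\omega_1}
= \sum_{\omega_1,\omega_2\in\Omega_{\!f}} M^{(\omega_1,\omega_2)} \mathbf{B}_{(\omega_1,\omega_2)}
$$
(renaming the summation indices in the term coming from $-B_{\omega_1} B_{\omega_2}$).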
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let $\mathbf{B}_\bullet$ denote an $\mathscr F$-valued comould.
Suppose that $(B_\eta)_{\eta\in\Omega}$ is formally summable and consider
$Y=\sum_{\eta\in\Omega} B_\eta \in \mathscr F$.
We have seen that the comould is cosymmetral iff each~$B_\eta$
is a derivation of~$\mathscr A$ (Lemma~\ref{lemcosym} gives one implication; the case $r=1$ of~\eqref{eqmotivsh} gives the other); then $Y$ is itself a derivation.
This is the situation when there is an appropriate notion of homogeneity, as in
Definition~\ref{defhomog}, and we expand an $\mathbf{A}$-linear derivation~$Y$ into a
sum of homogeneous components, each $B_\eta$ being homogeneous of degree~$\eta$.
\label{secAlludel}
Suppose now that the object to analyse is not a singular vector field, as in the
case of the saddle-node, but a local transformation; considering the associated
substitution operator, we are thus led to an automorphism of~$\mathscr A$, typically of
the form $\phi=\operatorname{Id}+Y$. Then the homogeneous components~$B_\eta$ of~$Y$ are no longer
derivations; expanding ${\sigma}(\phi) = \rho(\phi\otimes\phi)$, we rather get
\begin{equation} \label{eqmodifLeib}
{\sigma}(B_\eta) = \rho\Big( B_\eta \otimes \operatorname{Id} +
\sum_{\eta'+\eta''=\eta} B_{\eta'}\otimes B_{\eta''}
+ \operatorname{Id} \otimes B_\eta \Big).
\end{equation}
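Indeed, subtracting ${\sigma}(\operatorname{Id}) = \rho(\operatorname{Id}\otimes\operatorname{Id})$ from ${\sigma}(\phi) = \rho(\phi\otimes\phi)$ yields ${\sigma}(Y) = \rho\big( Y\otimes\operatorname{Id} + \operatorname{Id}\otimes Y + Y\otimes Y \big)$, and extracting the homogeneous components of degree~$\eta$ on both sides gives~\eqref{eqmodifLeib}.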
The comould~$\mathbf{B}_\bullet$ they generate is then called {\em cosymmetrel}.
A cosymmetrel comould is characterized by identities similar
to~\eqref{eqmotivsh} but with the shuffling coefficients
$\tsh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}}$ replaced by new ones, denoted by
$\tcsh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}}$ and called ``contracting shuffling coefficients''.
Dually, using these new coefficients instead of the previous shuffling coefficients in
formulas~\eqref{eqdefaltal} and~\eqref{eqdefsymal}, one gets the definition of
{\em alternel} and {\em symmetrel} moulds, which were only briefly alluded to at
the beginning of Section~\ref{secAltSym}.
The contraction of alternel or symmetrel moulds into cosymmetrel comoulds
enjoys properties parallel to those that we just described in the cosymmetral
case.
This allows one to treat local vector fields and local discrete
dynamical systems with completely parallel formalisms.
\part{Resurgence, alien calculus and other applications}
\section{Resurgence of the normalising series} \label{secResurSN}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The purpose of this section is to use the mould-comould representation of the
formal normalisation of the saddle-node given by Theorem~\ref{thmSNformal} to
deduce ``resurgent properties''.
We begin with a few reminders about \'Ecalle's Resurgence theory. We
follow the notations of~\cite{kokyu}.
\medskip
\noindent --
The formal Borel transform is the $\mathbb{C}$-linear homomorphism
$$
\mathcal{B} \,:\;
\widetilde\varphi(z) = \sum_{n\ge0} c_n z^{-n-1} \in z^{-1}\mathbb{C}[[z^{-1}]]
\enspace\mapsto\enspace
\widehat\varphi({\zeta}) = \sum_{n\ge0} c_n \frac{{\zeta}^n}{n!} \in \mathbb{C}[[{\zeta}]].
$$
In the case of a convergent~$\widetilde\varphi$, one gets a convergent series~$\widehat\varphi$
which defines an entire function of exponential type.
Namely, if $\varphi(x) = \widetilde\varphi(-1/x)\in x\mathbb{C}[[x]]$ has radius of convergence $>\rho$, then
there exists $K>0$ such that $|\varphi(x)|\le K|x|$ for $|x|\le\rho$ and this
implies, by virtue of the Cauchy inequalities, that
$|c_n| \le K \rho^{-n}$, hence $\widehat\varphi$ is entire and
\begin{equation} \label{ineqentire}
|\widehat\varphi({\zeta})| \le K\, {\mathrm e}^{\rho^{-1}|{\zeta}|}, \qquad {\zeta}\in\mathbb{C}.
\end{equation}
We are particularly interested in the case where $\widehat\varphi({\zeta})\in\mathbb{C}\{{\zeta}\}$
without being necessarily entire; this is equivalent to Gevrey-$1$ growth
for the coefficients of~$\widetilde\varphi$:
\begin{multline*}
\widetilde\varphi(z) \in z^{-1}\mathbb{C}[[z^{-1}]]_1
\quad \overset{\text{def}}{\Longleftrightarrow} \quad
\exists C,K>0 \; \text{s.t.}\; |c_n|\le K C^n n! \; \text{for all $n$} \\
\quad\Longleftrightarrow\quad
\mathcal{B}\widetilde\varphi({\zeta}) \in \mathbb{C}\{{\zeta}\}.
\end{multline*}
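A classical illustration is the Euler series $\widetilde\varphi(z) = \sum_{n\ge0} (-1)^n n!\, z^{-n-1}$: it is divergent but Gevrey-$1$, and its formal Borel transform $\widehat\varphi({\zeta}) = \sum_{n\ge0} (-{\zeta})^n = \frac{1}{1+{\zeta}}$ has radius of convergence~$1$ without being entire.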
\medskip
\noindent --
The counterpart in~$\mathbb{C}[[{\zeta}]]$ of the multiplication (Cauchy product) of
$z^{-1}\mathbb{C}[[z^{-1}]]$ is called {\em convolution} and denoted by~$*$, thus
$\mathcal{B}(\widetilde\varphi\cdot\widetilde\psi) = \mathcal{B}(\widetilde\varphi) * \mathcal{B}(\widetilde\psi)$.
Now, if $\widehat\varphi = \mathcal{B}(\widetilde\varphi)$ and $\widehat\psi= \mathcal{B}(\widetilde\psi)$ belong
to~$\mathbb{C}\{{\zeta}\}$, then $\widehat\varphi * \widehat\psi \in \mathbb{C}\{{\zeta}\}$ and this germ of
holomorphic function is determined by
\begin{equation} \label{eqdefconvol}
(\widehat\varphi * \widehat\psi)({\zeta}) = \int_0^{\zeta} \widehat\varphi({\zeta}_1)\, \widehat\psi({\zeta}-{\zeta}_1) \, {\mathrm d}{\zeta}_1
\qquad \text{for $|{\zeta}|$ small enough.}
\end{equation}
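For instance, on monomials one has $\mathcal{B}(z^{-a}) = \frac{{\zeta}^{a-1}}{(a-1)!}$ for any integer $a\ge1$, and the elementary Beta integral $\int_0^{\zeta} {\zeta}_1^{a-1} ({\zeta}-{\zeta}_1)^{b-1} \, {\mathrm d}{\zeta}_1 = \frac{(a-1)!\,(b-1)!}{(a+b-1)!}\, {\zeta}^{a+b-1}$ yields
$$
\frac{{\zeta}^{a-1}}{(a-1)!} * \frac{{\zeta}^{b-1}}{(b-1)!} = \frac{{\zeta}^{a+b-1}}{(a+b-1)!},
$$
in accordance with $z^{-a} \cdot z^{-b} = z^{-(a+b)}$.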
We have an algebra $(\mathbb{C}\{{\zeta}\},*)$ without unit, isomorphic via~$\mathcal{B}$ to
$(z^{-1}\mathbb{C}[[z^{-1}]]_1,\cdot)$.
By adjunction of unit, we get an algebra isomorphism
$$
\mathcal{B} \colon \mathbb{C}[[z^{-1}]]_1 \overset{\sim}{\to} \mathbb{C}\, \delta \oplus \mathbb{C}\{{\zeta}\}
$$
(where $\delta = \mathcal{B} 1$ is a symbol for the unit of convolution).
We can even take into account the differential $\frac{{\mathrm d}\,}{{\mathrm d} z}$: its
counterpart via~$\mathcal{B}$ is
$\widehat\partial \colon c\,\delta + \widehat\varphi({\zeta}) \mapsto -{\zeta}\widehat\varphi({\zeta})$.
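For instance, $\frac{{\mathrm d}\,}{{\mathrm d} z} z^{-n-1} = -(n+1)\, z^{-n-2}$ has formal Borel transform $-(n+1)\, \frac{{\zeta}^{n+1}}{(n+1)!} = -{\zeta}\, \frac{{\zeta}^n}{n!}$, as predicted by the formula for~$\widehat\partial$.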
\medskip
\noindent --
Let us now consider all the rectifiable oriented paths of~$\mathbb{C}$ which start from
the origin and then avoid~$\mathbb{Z}$, {i.e.}\ oriented paths represented by absolutely
continuous maps $\gamma \colon [0,1] \to
\mathbb{C}\setminus\mathbb{Z}^*$ such that $\gamma(0)=0$ and $\gamma^{-1}(0)$ is connected.
We denote by~$\mathscr R(\mathbb{Z})$ the set of all homotopy classes~$[\gamma]$ of
such paths~$\gamma$ and by~$\pi$ the map $[\gamma]\in\mathscr R(\mathbb{Z}) \mapsto \gamma(1)\in\mathbb{C}\setminus\mathbb{Z}^*$;
considering $\pi$ as a covering map, we get a Riemann surface structure
on~$\mathscr R(\mathbb{Z})$.
Observe that $\pi^{-1}(0)$ consists of a single point, the ``origin''
of~$\mathscr R(\mathbb{Z})$; this is the only difference between $\mathscr R(\mathbb{Z})$ and the universal
cover of~$\mathbb{C}\setminus\mathbb{Z}$.
The space $\wHR\mathbb{Z}$ of all holomorphic functions of~$\mathscr R(\mathbb{Z})$ can be identified
with the space of all $\widehat\varphi({\zeta}) \in \mathbb{C}\{{\zeta}\}$ which admit an
analytic continuation along any representative of any element of~$\mathscr R(\mathbb{Z})$
({cf.}\ \cite{kokyu}, Definition~3 and Lemma~2).
\begin{definition}
We define the convolutive model of the algebra of resurgent functions over~$\mathbb{Z}$
as $\widehat{\boldsymbol{R}}_\mathbb{Z} = \mathbb{C}\, \delta \oplus \wHR\mathbb{Z}$.
We define the formal model of the algebra of resurgent functions over~$\mathbb{Z}$
as $\widetilde{\boldsymbol{R}}_\mathbb{Z} = \mathcal{B}^{-1}(\widehat{\boldsymbol{R}}_\mathbb{Z})$.
\end{definition}
It turns out that $\widehat{\boldsymbol{R}}_\mathbb{Z}$ is a subalgebra of the convolution algebra
$\mathbb{C}\, \delta \oplus \mathbb{C}\{{\zeta}\}$, {i.e.}\ the aforementioned property of analytic
continuation is stable by convolution;
the proof of this fact relies on the notion of symmetrically contractile path
(see for instance {\em op.\ cit.}, \S1.3), which we shall not develop here.
Therefore $\widetilde{\boldsymbol{R}}_\mathbb{Z}$ is a subalgebra of $\mathbb{C}[[z^{-1}]]$ and $\mathcal{B}$ induces an algebra
isomorphism $\widetilde{\boldsymbol{R}}_\mathbb{Z} \to \widehat{\boldsymbol{R}}_\mathbb{Z}$.
\medskip
\noindent --
An obvious example of an element of~$\wHR\mathbb{Z}$ is an entire function, or a
meromorphic function of~$\mathbb{C}$ without poles outside~$\mathbb{Z}^*$.
Indeed, for such a function~$\widehat\varphi$, we can define $\hat\phi\in\wHR\mathbb{Z}$ by
$\hat\phi({\zeta}) = \widehat\varphi\big(\pi({\zeta})\big)$ for all ${\zeta}\in\mathscr R(\mathbb{Z})$.
We usually identify~$\widehat\varphi$ and~$\hat\phi$.
For example, if $\omega_1\neq0$, Remark~\ref{remheuri} shows that
$\widetilde\mathcal{V}^{\omega_1}(z) = \mathcal{V}^{\omega_1}(-1/z) \in z^{-1}\mathbb{C}[[z^{-1}]]$
has formal Borel transform
$\widehat\mathcal{V}^{\omega_1}({\zeta}) = \sum \omega_1^{-r-1}(-\widehat\partial\,)^r \widehat a_{\omega_1}
= \frac{\widehat a_{\omega_1}({\zeta})}{\omega_1-{\zeta}}$,
where $\widehat a_{\omega_1}$ denotes the formal Borel transform of $a_{\omega_1}(-1/z)$,
which is an entire function (since $a_{\omega_1}$ is convergent),
thus $\widehat\mathcal{V}^{\omega_1}$ is meromorphic with at most one simple pole, located
at~$\omega_1$.
On the other hand, $\widehat\mathcal{V}^0({\zeta}) = \frac{1}{{\zeta}}\widehat a_0({\zeta})$ is entire.
We shall see that, for each non-empty word $\text{\boldmath{$\om$}}$, the formal Borel transform of
$\mathcal{V}^\text{\boldmath{$\om$}}(-1/z)$ belongs to~$\wHR\mathbb{Z}$, but this function is usually not
meromorphic if $r(\text{\boldmath{$\om$}})\ge2$.
For instance, for $\text{\boldmath{$\om$}}=(\omega_1,\omega_2)$, one gets
$\frac{1}{-{\zeta}+\omega_1+\omega_2} \big(\widehat a_{\omega_1}*\widehat\mathcal{V}^{\omega_2} \big)$
which is multivalued in general
(see formula~\eqref{eqcViter} below for the general case).
\medskip
\noindent --
A formal series~$\widetilde\varphi(z)$ without constant term belongs to~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$ iff its
formal Borel transform~$\widehat\varphi({\zeta})$ converges to a germ of holomorphic function
which extends analytically to~$\mathscr R(\mathbb{Z})$.
In particular, the principal branch\footnote{
The principal branch is defined as the analytic continuation of~$\widehat\varphi$ in the
maximal open subset of~$\mathbb{C}$ which is star-shaped {with respect to}~$0$; its domain is
the cut plane obtained by removing the singular half-lines
$\left[1,+\infty\right[$ and $\left]-\infty,-1\right]$, unless $\widehat\varphi$ happens
to be regular at~$1$ or~$-1$.
}
of~$\widehat\varphi$ is holomorphic in sectors which extend up to infinity. If it has at
most exponential growth in a sector $\ao {\zeta}\in\mathbb{C} \mid \th_1\le \arg{\zeta} \le
\th_2 \af$
(as is the case of~$\widehat\mathcal{V}^{\omega_1}({\zeta})$ for instance),
then one can perform a Laplace transform and get a function
$$
\widetilde\ph^{\text{ana}}(z) = \int_0^{{\mathrm e}^{{\mathrm i}\th}\infty} \widehat\varphi({\zeta})\,{\mathrm e}^{-z{\zeta}} \, {\mathrm d}{\zeta},
\qquad \th \in [\th_1,\th_2],
$$
which is analytic for~$z$ belonging to a sectorial neighbourhood of
infinity.
This is called {\em Borel-Laplace summation} (see
{e.g.}~\cite{kokyu}, \S1.1).
Since multiplication is turned into convolution by~$\mathcal{B}$ and then turned again
into multiplication by the Laplace transform, and similarly with
$\frac{{\mathrm d}\,}{{\mathrm d} z}$ which is transformed into multiplication by $-{\zeta}$
by~$\mathcal{B}$, the Borel-Laplace process transforms the formal solution of a
differential equation like~\eqref{eqdiffeqzero}, \eqref{eqdiffeqn}
or~\eqref{eqdefcV} into an analytic solution of the same equation.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The stability of~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$ under multiplication together with the previous computation
explains to some extent why we can expect the solutions of a non-linear problem
like the formal classification of the saddle-node to be resurgent.
However, controlling products in~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$ means controlling convolution products
in~$\widehat{\boldsymbol{R}}_\mathbb{Z}$, and it is not so easy to extract from the stability statement the
quantitative information which would guarantee the convergence in~$\widehat{\boldsymbol{R}}_\mathbb{Z}$ of a
method of majorant series for instance
(see the discussion at the end of the sketch of proof of Theorem~2 of~\cite{kokyu}).
Thanks to the mould-comould expansion given in Section~\ref{secMCexpSN}, we
shall be able to use much simpler arguments: the convolution product of an
element of~$\widehat{\boldsymbol{R}}_\mathbb{Z}$ with an entire function belongs to~$\widehat{\boldsymbol{R}}_\mathbb{Z}$ and efficient
bounds are available in this particular case of the stability statement (much
easier than the general one---see Lemma~\ref{lemAnCont}
below).
\begin{thm} \label{thmResur}
Consider the saddle-node problem, with hypotheses
\eqref{eqdefX}--\eqref{eqassA}.
Let $\th(x,y) = \left(x, y + \sum_{n\ge0} \varphi_n(x) y^n\right)$ denote the formal
transformation, the substitution operator of which is $\Theta = \sum \mathcal{V}^\bullet
\mathbf{B}_\bullet$, in accordance with Lemmas~\ref{lemdefcV} and~\ref{lemCVformal} and Theorem~\ref{thmSNformal}.
Let $\th^{-1}(x,y) = \left(x, y + \sum_{n\ge0} \psi_n(x) y^n\right)$ denote the inverse
transformation.
Then, for each $n\in\mathbb{N}$, the formal series $\widetilde\varphi_n(z) = \varphi_n(-1/z)$ and $\widetilde\psi_n(z) =
\psi_n(-1/z)$ belong to~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$, and
the analytic continuations of the formal Borel transforms
$\widehat\varphi_n({\zeta}),\widehat\psi_n({\zeta})\in\mathbb{C}\{{\zeta}\}$ satisfy the following:
\begin{enumerate}[(i)]
\item All the branches of the analytic continuation of~$\widehat\varphi_n$ are regular at
the points of $n+\mathbb{N} = \{n, n+1, n+2, \dotsc\}$.
\item All the branches of the analytic continuation of~$\widehat\psi_n$ are regular at
the points of $-\mathbb{N}^* = \{-1, -2, -3, \dotsc\}$,
with the sole exception that the branches of~$\widehat\psi_0$ may have simple poles
at~$-1$.
\item Given any $\rho\in\left]0,\frac{1}{2}\right[$ and $N\in\mathbb{N}^*$, there exist
positive constants $K,L,C$ which depend only on $\rho,N$ such that,
for any $(\rho,N,n-\mathbb{N}^*)$-adapted infinite path~$\gamma$ issuing from the origin,
\begin{equation} \label{ineqwhphn}
\big| \widehat\varphi_n\big( \gamma(t) \big) \big| \le K L^n \, {\mathrm e}^{(n^2+1) C t}
\enspace\text{for all $t\ge0$ and $n\in\mathbb{N}$,}
\end{equation}
while, for any $(\rho,N,\mathbb{N})$-adapted infinite path~$\gamma$ issuing from the origin,
\begin{equation} \label{ineqwhpsin}
\big| \widehat\psi_n\big( \gamma(t) \big) \big| \le K L^n \, {\mathrm e}^{C t}
\enspace\text{for $n\ge1$,}\quad
\big| \big(\gamma(t)+1\big) \widehat\psi_0\big( \gamma(t) \big) \big| \le K \, {\mathrm e}^{C t}
\end{equation}
for all $t\ge0$.
\end{enumerate}
\end{thm}
\label{secResur}
What we call a $(\rho,N,\mathscr P^\pm)$-adapted infinite path, with $\mathscr P^+ = \mathbb{N}$ or $\mathscr P^- = n-\mathbb{N}^*$, is defined
below in Definition~\ref{defiadaptedpath}; see Figure~\ref{figrhoNadapt}.
These are arc-length parametrised paths $\gamma \colon \left[0,+\infty\right[
\to \mathbb{C}$ ({i.e.}\ $\gamma$ is absolutely continuous and $|\dot\gamma(t)|=1$ for almost
every~$t$) which start as rectilinear segments of length~$\rho$ issuing from the
origin and which then do not approach~$\mathscr P^\pm$ nor $\pm{\Sigma}(\rho,N)$ at a distance
$<\rho$, where $\pm{\Sigma}(\rho,N)$ denotes the sector of half-opening
$\arcsin(\rho/N)$ bisected by $\pm\left[N,+\infty\right[$.
In particular, inequalities \eqref{ineqwhphn}--\eqref{ineqwhpsin} yield an
exponential bound at infinity for the principal branch of each~$\widehat\varphi_n$
or~$\widehat\psi_n$ along all the half-lines issuing from~$0$
except the singular half-lines $\pm\left[0,+\infty\right[$
(the half-line $\left[0,+\infty\right[$ is not singular for $\widehat\varphi_0$ and the
half-line $-\left[0,+\infty\right[$ is not singular for any $\widehat\psi_n$).
We recall that $\Theta$ establishes a conjugacy between the saddle-node vector
field~$X$ and its normal form~$X_0$, thus the formal series $\widetilde\varphi_n(z)$ are
the components of a formal integral~$\widetilde Y(z,u)$, as described
in~\eqref{eqdefYzu}--\eqref{eqdiffeqX}.
The resurgence statement contained in Theorem~\ref{thmResur} thus means that the
formal solutions of the singular differential
equations~\eqref{eqdiffeqzero}--\eqref{eqdiffeqn} may be divergent but that this
divergence is of a very precise nature.
We shall briefly indicate in Section~\ref{secBE} how alien calculus allows one
to take advantage of this information to study the problem of analytic
classification.
\begin{remark} \label{remgetrid}
Theorem~\ref{thmResur} also yields analytic solutions
of~\eqref{eqdiffeqzero}--\eqref{eqdiffeqn} via Borel-Laplace summation.
It is thus worth mentioning that one can get rid of the dependence on~$n$ in the
exponential which appears in~\eqref{ineqwhphn}, provided one restricts oneself
to paths which start from the origin and then stay at a distance $\ge\rho$ from the set
$\mathbb{Z}\cup{\Sigma}(\rho,N)\cup\big(-{\Sigma}(\rho,N)\big)$, and which cross the cuts (the
segments between consecutive points of~$\mathbb{Z}$) at most~$N'$ times.
For instance, with $N'=0$, one obtains
\begin{equation} \label{inequniformphn}
\big| \widehat\varphi_n( {\zeta} ) \big| \le K L^n \, {\mathrm e}^{C |{\zeta}|}
\end{equation}
for the principal branch of~$\widehat\varphi_n$, possibly with larger constants~$K,L,C$
but still independent of~$n$.
For the other branches, which correspond to $N'\ge1$ and $N\ge2$, one has to resort to
symmetrically contractile paths and the implied constants~$K,L,C$ depend only
on~$\rho,N,N'$.
Therefore, when performing Laplace transform,
inequalities~\eqref{inequniformphn} allow one to get the same domain of
analyticity for all the functions~$\widetilde\ph^{\text{ana}}_n(z)$ solutions
of~\eqref{eqdiffeqzero}--\eqref{eqdiffeqn} (a sectorial neighbourhood of
infinity which depends only on~$C$; see {e.g.}~\cite{kokyu}, \S1.1),
with explicit bounds which make it possible to study the domain of analyticity
of a {\em sectorial formal integral}
$\widetilde Y^{\text{ana}}(z,u) = u\,{\mathrm e}^z + \sum u^n \, {\mathrm e}^{nz}\widetilde\ph^{\text{ana}}_n(z)$
or of analytic normalising transformations $\ph^{\text{ana}}(x,y)$, $\psi^{\text{ana}}(x,y)$.
This will be used in Section~\ref{secMR}.
\end{remark}
The rest of this section is devoted to the proof of Theorem~\ref{thmResur} and
to the derivation of inequalities~\eqref{inequniformphn}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Using $\Omega=\mathcal{N} = \ao \eta\in\mathbb{Z} \mid \eta\ge -1 \af$ as an alphabet, we know that
the $\mathbb{C}[[x]]$-valued moulds~$\mathcal{V}^\bullet$ and~${\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet$ are symmetral and
mutually inverse for mould multiplication. We recall that
$$
\Theta^{-1} = \sum {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet \mathbf{B}_\bullet, \qquad
{\cV\hspace{-.45em}\raisebox{.35ex}{--}}^{\omega_1,\dotsc,\omega_r} = (-1)^r \mathcal{V}^{\omega_r,\dotsc,\omega_1}.
$$
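\medskip
\noindent --
As a quick consistency check of this inversion at short lengths (a side remark,
not used in the sequel), with $\times$ denoting mould multiplication one finds
$$
\big({\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet \times \mathcal{V}^\bullet\big)^{\omega_1}
= {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^{\omega_1} + \mathcal{V}^{\omega_1} = 0,
\qquad
\big({\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\bullet \times \mathcal{V}^\bullet\big)^{\omega_1,\omega_2}
= \mathcal{V}^{\omega_2,\omega_1} - \mathcal{V}^{\omega_1}\mathcal{V}^{\omega_2} + \mathcal{V}^{\omega_1,\omega_2} = 0,
$$
the second identity being the symmetrality relation
$\mathcal{V}^{\omega_1}\mathcal{V}^{\omega_2} = \mathcal{V}^{\omega_1,\omega_2} + \mathcal{V}^{\omega_2,\omega_1}$.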
With the notation $\norm{\text{\boldmath{$\om$}}}=\omega_1+\dotsb+\omega_r$ for any non-empty word
$\text{\boldmath{$\om$}}\in\Omega^\bullet$,
equation~\eqref{eqdefbeomb} can be written
$\mathbf{B}_\text{\boldmath{$\om$}} y = \beta_\text{\boldmath{$\om$}} y^{\norm{\text{\boldmath{$\om$}}}+1}$,
with the coefficients~$\beta_\text{\boldmath{$\om$}}$ defined at the end of Section~\ref{secMCexpSN}.
As was already observed, since $\Theta y = \sum\varphi_n(x)y^n$ and $\Theta^{-1} y =
\sum\psi_n(x)y^n$, the formal series we are interested in can be written as
formally convergent series in $\mathbb{C}[[x]]$:
\begin{equation} \label{eqformulphnpsin}
\varphi_n = \sum_{\norm{\text{\boldmath{$\om$}}}=n-1} \beta_\text{\boldmath{$\om$}} \mathcal{V}^\text{\boldmath{$\om$}}, \qquad
\psi_n = \sum_{\norm{\text{\boldmath{$\om$}}}=n-1} \beta_\text{\boldmath{$\om$}} {\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\text{\boldmath{$\om$}}, \qquad n\in\mathbb{N},
\end{equation}
with summation over all words~$\text{\boldmath{$\om$}}$ of positive length subject to the condition
$\norm{\text{\boldmath{$\om$}}}=n-1$. In fact, not all of these words contribute to these series:
\begin{lemma} \label{lemexo}
For any non-empty $\text{\boldmath{$\om$}} = (\omega_1,\dotsc,\omega_r)\in\Omega^\bullet$,
using the notations
\begin{equation} \label{eqnotahatcheck}
\wc\omega_i = \omega_1 + \dotsb + \omega_i, \quad
\htb\omega_i = \omega_i + \dotsb + \omega_r, \qquad
1 \le i \le r,
\end{equation}
we have
$$
\beta_\text{\boldmath{$\om$}} \neq0 \enspace\Rightarrow\enspace
\norm{\text{\boldmath{$\om$}}} \ge -1, \enspace
\wc\omega_1,\dotsc,\wc\omega_{r-1}\ge0 \enspace\text{and}\enspace
\htb\omega_1,\dotsc,\htb\omega_r\le\norm{\text{\boldmath{$\om$}}}.
$$
\end{lemma}
\begin{proof}
We have
\begin{equation} \label{eqdefibeomb}
\beta_\text{\boldmath{$\om$}} = 1 \enspace\text{if $r=1$},\qquad
\beta_\text{\boldmath{$\om$}} = (\wc\omega_1+1)(\wc\omega_2+1)\dotsm(\wc\omega_{r-1}+1) \enspace\text{if $r\ge2$}.
\end{equation}
The property $\beta_\text{\boldmath{$\om$}}\neq0 \;\Rightarrow\; \norm{\text{\boldmath{$\om$}}}\ge-1$ was already observed
at the end of Section~\ref{secMCexpSN}, as a consequence of $\mathbf{B}_\text{\boldmath{$\om$}} y
\in\mathbb{C}[[y]]$
(one can also argue directly from formula~\eqref{eqdefibeomb}).
Now suppose $\beta_\text{\boldmath{$\om$}}\neq0$ and $1\le i \le r-1$.
The identity
$$
\beta_\text{\boldmath{$\om$}} = \beta_{\omega_1,\dotsc,\omega_i}
(\wc\omega_i+1)\dotsm(\wc\omega_{r-1}+1)
$$
implies $\wc\omega_i\neq-1$ and $\beta_{\omega_1,\dotsc,\omega_i}\neq0$, hence
$\omega_1+\dotsb+\omega_i\ge-1$.
Therefore $\wc\omega_i\ge0$ and $\htb\omega_{i+1} = \norm{\text{\boldmath{$\om$}}} - \wc\omega_i \le
\norm{\text{\boldmath{$\om$}}}$, while $\htb\omega_1=\norm{\text{\boldmath{$\om$}}}$.
\end{proof}
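\medskip
\noindent --
For instance, for $n=1$, the words of length~$\le2$ with $\norm{\text{\boldmath{$\om$}}}=0$ are
$(0)$, $(0,0)$, $(1,-1)$ and $(-1,1)$; the last one has $\wc\omega_1=-1$, hence
$\beta_{(-1,1)}=0$ and it does not contribute, while $\beta_{(0)}=\beta_{(0,0)}=1$
and $\beta_{(1,-1)}=2$, so that the first series in~\eqref{eqformulphnpsin}
begins with
$$
\varphi_1 = \mathcal{V}^{0} + \mathcal{V}^{0,0} + 2\,\mathcal{V}^{1,-1} + \dotsb,
$$
the dots standing for the contributions of words of length~$\ge3$.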
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We recall that the convergent series $a_\eta(x)$ were defined
in~\eqref{eqdefiancN} as Taylor coefficients {with respect to}~$y$ of the saddle-node vector
field~\eqref{eqdefX}.
We define $\widetilde\varphi_n(z)$, $\widetilde\psi_n(z)$, $\widetilde a_\eta(z)$, $\widetilde\mathcal{V}^\text{\boldmath{$\om$}}(z)$, ${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}(z)$
from $\varphi_n(x)$, $\psi_n(x)$, $a_\eta(x)$, $\mathcal{V}^\text{\boldmath{$\om$}}(x)$, ${\cV\hspace{-.45em}\raisebox{.35ex}{--}}^\text{\boldmath{$\om$}}(x)$
by the change of variable $z=-1/x$ (for any $n\in\mathbb{N}$, $\eta\in\Omega$, $\text{\boldmath{$\om$}}\in\Omega^\bullet$),
and we denote by $\widehat\varphi_n({\zeta})$, $\widehat\psi_n({\zeta})$, etc.\ the formal Borel transforms of these
formal series.
In view of Lemma~\ref{lemdefcV}, the formal series~$\widetilde\mathcal{V}^\text{\boldmath{$\om$}}$ are uniquely
determined by the equations $\widetilde\mathcal{V}^\emptyset = 1$ and
\begin{equation*}
\big(\frac{{\mathrm d}\,}{{\mathrm d} z} + \norm{\text{\boldmath{$\om$}}}\big) \widetilde\mathcal{V}^\text{\boldmath{$\om$}}
= \widetilde a_{\omega_1} \widetilde\mathcal{V}^{`\text{\boldmath{$\om$}}}, \qquad
\widetilde\mathcal{V}^\text{\boldmath{$\om$}} \in z^{-1}\mathbb{C}[[z^{-1}]]
\end{equation*}
for $\text{\boldmath{$\om$}}$ non-empty, with $`\text{\boldmath{$\om$}}$ denoting $\text{\boldmath{$\om$}}$ deprived of its first letter.
Since $\mathcal{B}$ transforms $\frac{{\mathrm d}\,}{{\mathrm d} z}$ into multiplication by $-{\zeta}$ and
multiplication into convolution, we get
$\widehat\mathcal{V}^\emptyset = \delta$
and
$$
\widehat\mathcal{V}^\text{\boldmath{$\om$}}({\zeta}) = -\frac{1}{{\zeta}-\norm{\text{\boldmath{$\om$}}}} \big( \widehat a_{\omega_1} * \widehat\mathcal{V}^{`\text{\boldmath{$\om$}}} \big),
\qquad \text{\boldmath{$\om$}}\neq\emptyset,
$$
where the {right-hand side}\ belongs to $\mathbb{C}[[{\zeta}]]$ even if $\norm{\text{\boldmath{$\om$}}}=0$, by the same
argument as in the proof of Lemma~\ref{lemdefcV}.
It belongs in fact to $\mathbb{C}\{{\zeta}\}$, by induction on $r(\text{\boldmath{$\om$}})$, and
\begin{equation} \label{eqcViter}
\widehat\mathcal{V}^\text{\boldmath{$\om$}} = (-1)^r
\frac{1}{{\zeta}-\htb\omega_1} \Big( \widehat a_{\omega_1} *
\Big( \frac{1}{{\zeta}-\htb\omega_2} \Big( \widehat a_{\omega_2} *
\Big( \dotsb
\Big( \frac{1}{{\zeta}-\htb\omega_r} \widehat a_{\omega_r}
\Big) \dotsm \Big)\Big)\Big)\Big)
\end{equation}
with the notation of~\eqref{eqnotahatcheck}.
In view of the stability properties of~$\wHR\mathbb{Z}$ (stability by convolution with
another element of~$\wHR\mathbb{Z}$, a fortiori with an entire function, or by
multiplication with a meromorphic function regular on $\mathbb{C}\setminus\mathbb{Z}^*$), this
implies that \emph{the functions $\widehat\mathcal{V}^\text{\boldmath{$\om$}}$ are resurgent}, as announced in
the introduction to this section.
We shall give more details on this later.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Here is a first consequence for the functions $\widehat\varphi_n$ and~$\widehat\psi_n$:
\begin{lemma} \label{lemtcSm}
For each $n\in\mathbb{N}$,
\begin{equation} \label{eqwhphnwhpsin}
\widehat\varphi_n = \sum_{\norm{\text{\boldmath{$\om$}}}=n-1} \beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}},
\qquad
\widehat\psi_n = \sum_{\norm{\text{\boldmath{$\om$}}}=n-1} \beta_\text{\boldmath{$\om$}} {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb},
\end{equation}
with formally convergent series in $\mathbb{C}[[{\zeta}]]$, and
for each non-empty~$\text{\boldmath{$\om$}}$ such that $\norm{\text{\boldmath{$\om$}}}=n-1$,
\begin{equation} \label{eqbewhcV}
\beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}} = \mathcal{S}_{\shtb\omega_1}\mathcal{A}_{\omega_1}
\dotsm \mathcal{S}_{\shtb\omega_r}\mathcal{A}_{\omega_r} \delta, \qquad
\beta_\text{\boldmath{$\om$}} {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} = \tfrac{1}{{\zeta}-(n-1)} \mathcal{A}_{\omega_r}
{\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_{\swc\omega_{r-1}}\mathcal{A}_{\omega_{r-1}}
\dotsm {\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_{\swc\omega_1}\mathcal{A}_{\omega_1} \delta,
\end{equation}
with convolution operators
$$
\mathcal{A}_\eta \colon \widehat\varphi \mapsto \widehat a_\eta * \widehat\varphi, \qquad
\eta \in \Omega
$$
and multiplication operators
\begin{equation} \label{eqdefiStSm}
\mathcal{S}_m \colon \widehat\varphi \mapsto -\tfrac{n-m}{{\zeta}-m}\, \widehat\varphi, \quad
{\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m \colon \widehat\varphi \mapsto \tfrac{m+1}{{\zeta}-m}\, \widehat\varphi, \qquad
m \in \mathbb{Z}
\end{equation}
\end{lemma}
\begin{proof}
Formula~\eqref{eqwhphnwhpsin} is a direct consequence
of~\eqref{eqformulphnpsin}.
In order to deal with ${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}$, we pass from $\text{\boldmath{$\om$}}=(\omega_1,\dots, \omega_r)$ to
$\widetilde\text{\boldmath{$\om$}} = (\omega_r,\dots, \omega_1)$ and this exchanges $\htb\omega_i$ and
$\wc\omega_{r-i+1}$, thus \eqref{eqcViter} implies
$$
{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} =
\frac{1}{{\zeta}-\wc\omega_r} \Big( \widehat a_{\omega_r} *
\Big( \frac{1}{{\zeta}-\wc\omega_{r-1}} \Big( \widehat a_{\omega_{r-1}} *
\Big( \dotsb
\Big( \frac{1}{{\zeta}-\wc\omega_1} \widehat a_{\omega_1}
\Big) \dotsm \Big)\Big)\Big)\Big).
$$
Since $\wc\omega_r=n-1$, multiplying by $\beta_\text{\boldmath{$\om$}} =
(\wc\omega_{r-1}+1)\dotsm(\wc\omega_1+1)$, we get the second part of~\eqref{eqbewhcV}.
The first part of this formula is obtained by multiplying~\eqref{eqcViter}
by~$\beta_\text{\boldmath{$\om$}}$ written in the form
$\beta_\text{\boldmath{$\om$}} = (n-\htb\omega_1)(n-\htb\omega_2)\dotsm(n-\htb\omega_r)$
(indeed, $n-\htb\omega_1=1$ and $n-\htb\omega_i = \wc\omega_{i-1}+1$ for $2\le i\le r$).
\end{proof}
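\medskip
\noindent --
As the simplest instance of~\eqref{eqbewhcV}: for the one-letter word
$\text{\boldmath{$\om$}}=(n-1)$ (available as soon as $n-1\in\Omega$), one has $\beta_\text{\boldmath{$\om$}}=1$,
$\htb\omega_1=n-1$ and
$$
\beta_\text{\boldmath{$\om$}}\widehat\mathcal{V}^\text{\boldmath{$\om$}} = \mathcal{S}_{n-1}\mathcal{A}_{n-1}\,\delta
= -\frac{1}{{\zeta}-(n-1)}\,\widehat a_{n-1},
$$
in agreement with the case $r=1$ of~\eqref{eqcViter}.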
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The appearance of singularities in our problem is due to the multiplication
operators $\mathcal{S}_{\shtb\omega_i}$ or~${\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_{\swc\omega_i}$. In view of
Lemma~\ref{lemexo} and formulas~\eqref{eqbewhcV}--\eqref{eqdefiStSm},
we are led to introduce subspaces of~$\wHR\mathbb{Z}$ formed of functions with smaller
sets of singularities.
We do this by considering Riemann surfaces $\mathscr R(\mathscr P)$ slightly more general than
$\mathscr R(\mathbb{Z})$.
Let $\mathscr P$ denote a subset of~$\mathbb{Z}$.
We define the Riemann surface $\mathscr R(\mathscr P)$ as the set of all homotopy classes of
rectifiable oriented paths which start from the origin and then avoid~$\mathscr P$.
The Riemann surface $\mathscr R(\mathscr P)$ and the universal cover of $\mathbb{C}\setminus\mathscr P$
coincide if $0\not\in\mathscr P$; there is a difference between them when $0\in\mathscr P$:
there is no point which projects onto~$0$ in the second one, while the first one
still has an ``origin''.
The space $\wHR\mathscr P$ of all holomorphic functions on $\mathscr R(\mathscr P)$ can
be identified with the space of all $\widehat\varphi({\zeta}) \in \mathbb{C}\{{\zeta}\}$ which admit an
analytic continuation along any representative of any element of $\mathscr R(\mathscr P)$.
It can thus also be identified with the subspace of~$\wHR\mathbb{Z}$ consisting of
those functions holomorphic in~$\mathscr R(\mathbb{Z})$, the branches of which are regular at each
point of~$\mathbb{Z}\setminus\mathscr P$.
We shall particularly be interested in two cases: $\mathscr P^- = n-\mathbb{N}^*$ and $\mathscr P^+ = \mathbb{N}$.
Indeed, our aim is to show that
the functions $\widehat\varphi_n$ belong to $\wHR{n-\mathbb{N}^*}$ for any $n\in\mathbb{N}$
and that the functions $\widehat\psi_n$ belong to $\wHR\mathbb{N}$ for any $n\ge1$,
while $({\zeta}+1)\widehat\psi_0({\zeta}) \in \wHR\mathbb{N}$.
One could prove (with the help of symmetrically contractile paths) that the
spaces $\wHR\mathbb{N}$,
$\wHR{-\mathbb{N}^*}$ or $\wHR{-\mathbb{N}}$ are stable by convolution because the corresponding
sets~$\mathscr P$ are stable by addition,
but beware that this is not the case for $\wHR{n-\mathbb{N}^*}$ if $n\ge2$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
As previously mentioned, for each $\text{\boldmath{$\om$}}\neq\emptyset$,
$\beta_\text{\boldmath{$\om$}}\widehat\mathcal{V}^\text{\boldmath{$\om$}}$ and $\beta_\text{\boldmath{$\om$}}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}$ belong to~$\wHR\mathbb{Z}$ by virtue of
general stability properties.
But formula~\eqref{eqbewhcV} permits a more elementary argument and more precise
conclusions.
Indeed $\mathcal{A}_{\omega_r}\delta = \widehat a_{\omega_r}$, resp.\ $\mathcal{A}_{\omega_1}\delta = \widehat a_{\omega_1}$,
is an entire function, which vanishes at the origin if $\omega_r=0$, resp.\
$\omega_1=0$.
Thus
\begin{equation} \label{eqinifcns}
\mathcal{S}_{\shtb\omega_r}\mathcal{A}_{\omega_r} \delta = -\frac{n-\omega_r}{{\zeta}-\omega_r} \, \widehat a_{\omega_r},
\qquad \text{resp.}\quad
{\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_{\swc\omega_1}\mathcal{A}_{\omega_1} \delta = \frac{\omega_1+1}{{\zeta}-\omega_1} \, \widehat a_{\omega_1},
\end{equation}
is meromorphic on~$\mathbb{C}$ and regular at the origin if $\omega_r\neq0$, resp.\ $\omega_1\neq0$,
and entire if $\omega_r=0$, resp.\ $\omega_1=0$.
In fact,
$$
\htb\omega_r \le n-1 \enspace\Rightarrow\enspace \mathcal{S}_{\shtb\omega_r}\mathcal{A}_{\omega_r} \delta \in \wHR{n-\mathbb{N}^*},
\qquad
\wc\omega_1 \ge 0 \enspace\Rightarrow\enspace {\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_{\swc\omega_1}\mathcal{A}_{\omega_1} \delta \in \wHR\mathbb{N}.
$$
Therefore, one can apply $r-1$ times the following
\begin{lemma} \label{lemAnCont}
Suppose that $\mathscr P\subset\mathbb{Z}$, $\widehat\varphi \in \wHR\mathscr P$ and $\widehat b$ is entire.
Then $\widehat b*\widehat\varphi \in \wHR\mathscr P$.
If furthermore $\widehat s$ is a meromorphic function, the poles of which all belong
to~$\mathscr P$ and with at most a simple pole at the origin,
then $\widehat s(\widehat b*\widehat\varphi) \in \wHR\mathscr P$.
Consider a rectifiable oriented path with arc-length parametrisation $\gamma \colon
[0,T] \to \mathbb{C}$, such that $\gamma(0)=0$ and $\gamma(t) \in \mathbb{C}\setminus\mathscr P$ for $0<t\le
T$.
Denoting the analytic continuation of~$\widehat\varphi$ along~$\gamma$ by the same symbol~$\widehat\varphi$, we
suppose moreover that
$$
\big| \widehat\varphi\big( \gamma(t) \big) \big| \le P(t) \,{\mathrm e}^{Ct}, \qquad
0 \le t \le T,
$$
with a continuous function~$P$ and a constant $C\ge0$, and that there is a
continuous monotonic non-decreasing function~$Q$ such that $|\widehat b({\zeta})| \le
Q\big(|{\zeta}|\big)\,{\mathrm e}^{C|{\zeta}|}$ for all ${\zeta}\in\mathbb{C}$.
Then, for all $t\in [0,T]$, the analytic continuation of $\widehat b*\widehat\varphi$ at
${\zeta}=\gamma(t)$ satisfies
\begin{equation} \label{eqbstarwhph}
\widehat b*\widehat\varphi({\zeta}) = \int_{\gamma_{\zeta}}
\widehat b({\zeta}-{\zeta}') \widehat\varphi({\zeta}') \,{\mathrm d}{\zeta}',
\qquad
\big|\widehat b*\widehat\varphi\big( \gamma(t) \big)\big| \le
P*Q(t) \, {\mathrm e}^{Ct},
\end{equation}
with $\gamma_{\zeta}$ denoting the restriction $\gamma_{| [0,t]}$ and
$P*Q(t) = \int_0^t P(t')Q(t-t')\,{\mathrm d} t'$.
\end{lemma}
\begin{proof}
The first statement and the first part of~\eqref{eqbstarwhph} are obtained by
means of the Cauchy theorem:
if $\gamma_1$ and~$\gamma_2$ are two representatives of the same element of~$\mathscr R(\mathscr P)$ and
$\xi=\pi([\gamma_1])=\pi([\gamma_2])$, then $\int_{\gamma_1}\widehat b(\xi-\xi')
\widehat\varphi(\xi') \,{\mathrm d}\xi'$ and $\int_{\gamma_2}\widehat b(\xi-\xi') \widehat\varphi(\xi')
\,{\mathrm d}\xi'$ coincide; one can check that the function thus defined on~$\mathscr R(\mathscr P)$ is
holomorphic and this is clearly an extension of $\widehat b*\widehat\varphi$.
Moreover $\widehat b*\widehat\varphi$ vanishes at the origin, thus $\widehat s(\widehat
b*\widehat\varphi)\in\wHR{\mathscr P}$ even if~$\widehat s$ has a simple pole at~$0$.
We thus have
$$
\widehat b*\widehat\varphi\big( \gamma(t) \big) = \int_0^t
\widehat b\big( \gamma(t)-\gamma(t') \big) \widehat\varphi\big( \gamma(t') \big)
\dot\gamma(t')\,{\mathrm d} t'.
$$
For almost every $t'\in[0,t]$, $|\dot\gamma(t')|=1$ and $|\gamma(t)-\gamma(t')|\le t-t'$, whence
$|\widehat b\big( \gamma(t)-\gamma(t') \big)| \le Q(t-t')\,{\mathrm e}^{C(t-t')}$ by monotonicity
of $\xi\mapsto Q(\xi)\,{\mathrm e}^{C\xi}$.
The conclusion follows.
\end{proof}
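\medskip
\noindent --
Observe how the bound~\eqref{eqbstarwhph} accumulates under iteration: if
$Q\equiv1$ and $P(t)=\frac{t^k}{k!}$, then $P*Q(t)=\frac{t^{k+1}}{(k+1)!}$,
i.e.\ each convolution with an entire function gains one integration in the
polynomial factor; this is the mechanism which will produce the factors
$1^{*\ceil{r/2}}$ in the estimates below.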
In view of Lemma~\ref{lemexo} and formula~\eqref{eqbewhcV}, the first part of
Lemma~\ref{lemAnCont} implies
\begin{cor} \label{corbecVo}
Let $n\in\mathbb{N}$ and $\text{\boldmath{$\om$}}$ be a non-empty word such that $\norm{\text{\boldmath{$\om$}}}=n-1$.
Then the function $\beta_\text{\boldmath{$\om$}}\widehat\mathcal{V}^\text{\boldmath{$\om$}}$ belongs to $\wHR{n-\mathbb{N}^*}$ and
the function
$$
{\zeta} \mapsto \big({\zeta}-(n-1)\big) \beta_\text{\boldmath{$\om$}}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}({\zeta})
$$
belongs to $\wHR\mathbb{N}$.
\end{cor}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Our aim is now to exploit formula~\eqref{eqbewhcV} and the quantitative information
contained in Lemma~\ref{lemAnCont} to produce upper bounds for
$$
\big|\beta_\text{\boldmath{$\om$}}\widehat\mathcal{V}^\text{\boldmath{$\om$}}({\zeta})\big|, \quad \text{resp.}\enspace
\big| \big({\zeta}-(n-1) \big) \beta_\text{\boldmath{$\om$}}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}({\zeta}) \big|
$$
which will ensure the uniform convergence of the series~\eqref{eqwhphnwhpsin}
(up to the factor ${\zeta}-(n-1)$ for the second one) in any compact subset
of $\mathscr R(n-\mathbb{N}^*)$, resp.\ $\mathscr R(\mathbb{N})$.
We first choose positive constants $K,L,C$ such that
\begin{equation} \label{ineqwhaeta}
\left| \widehat a_\eta({\zeta}) \right| \le K L^\eta \, {\mathrm e}^{C|{\zeta}|},
\qquad {\zeta}\in\mathbb{C},\; \eta\in\Omega.
\end{equation}
This is possible, since $\sum \dfrac{a_\eta(x)}{x} y^{\eta+1} =
\dfrac{A(x,y)-y}{y} \in \mathbb{C}\{x,y\}$ by assumption, thus one can find constants
such that
$\big| \frac{a_\eta(x)}{x} \big| \le K L^\eta$ for $|x|\le C^{-1}$
and use~\eqref{ineqentire}.
We can also assume, possibly at the price of increasing~$K$, that
\begin{equation} \label{ineqwhazero}
\left| \widehat a_0({\zeta}) \right| \le K |{\zeta}| \, {\mathrm e}^{C|{\zeta}|},
\qquad {\zeta}\in\mathbb{C},
\end{equation}
since $a_0(x) \in x^2\mathbb{C}\{x\}$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Next, we define exhaustions of $\mathscr R(n-\mathbb{N}^*)$, resp.\ $\mathscr R(\mathbb{N})$, by subsets
${{\mathscr R}_{\rho,N}}(n-\mathbb{N}^*)$, resp.\ ${{\mathscr R}_{\rho,N}}(\mathbb{N})$, in which we shall be able to derive
appropriate bounds for our functions.
Let $\rho\in\left]0,\frac{1}{2}\right[$ and $N\in\mathbb{N}^*$.
We denote by ${\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(n-\mathbb{N}^*)$ the subset of~$\mathbb{C}$ obtained by removing the open discs
$D(m,\rho)$ with radius~$\rho$ and integer centres $m\le n-1$,
and removing also the points~${\zeta}$ such that the segment $[0,{\zeta}]$ intersects the
open disc $D(-N,\rho)$ ({i.e.}\ the points which are hidden by
$D(-N,\rho)$ to an observer located at the origin).
Similarly, we denote by ${\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathbb{N})$ the subset of~$\mathbb{C}$ obtained by removing the open discs
$D(m,\rho)$ with radius~$\rho$ and integer centres $m\ge 0$,
and removing also the points~${\zeta}$ such that the segment $[0,{\zeta}]$ intersects the
open disc $D(N,\rho)$.
Thus, with the notations $\mathscr P^- = n-\mathbb{N}^*$ and $\mathscr P^+ = \mathbb{N}$,
\begin{multline*}
{\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P^\pm) = \ao {\zeta}\in\mathbb{C} \mid
\operatorname{dist}({\zeta},\mathscr P^\pm) \ge \rho \;\text{and}\;
\operatorname{dist}(\pm N,[0,{\zeta}]) \ge \rho \af \\[.7ex]
= \ao {\zeta}\in\mathbb{C} \mid
\operatorname{dist}\big( {\zeta}, \mathscr P^\pm \cup \pm{\Sigma}(\rho,N) \big) \ge \rho \af,
\end{multline*}
with the notation~${\Sigma}$ introduced after the statement of
Theorem~\ref{thmResur}; see Figure~\ref{figrhoNadapt}.
Now, for $\mathscr P=\mathscr P^\pm$, consider the rectifiable oriented paths~$\gamma$ which start at the origin and either
stay in the disc $D(0,\rho)$, or leave it and then stay in~${\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)$. The
homotopy classes of such paths form a set~${{\mathscr R}_{\rho,N}}(\mathscr P)$ which we can identify with a
subset of~$\mathscr R(\mathscr P)$.
\begin{definition} \label{defiadaptedpath}
If the arc-length parametrisation of a rectifiable oriented path
$\gamma \colon [0,T] \to \mathbb{C}$ satisfies, for each $t\in[0,T]$,
\begin{align*}
0\le t\le \rho &\enspace\Rightarrow\enspace |\gamma(t)| = t, \\
t>\rho &\enspace\Rightarrow\enspace \gamma(t) \in {\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P),
\end{align*}
then we say that the parametrised path~$\gamma$ is $(\rho,N,\mathscr P)$-adapted.
We speak of infinite $(\rho,N,\mathscr P)$-adapted path if $\gamma$ is defined on
$\left[0,+\infty\right[$.
\end{definition}
\begin{figure}
\begin{center}
\psfrag{n}{$\scriptstyle n$}
\psfrag{m}{$\scriptstyle n-1$}
\psfrag{0}{$\scriptstyle \mbox{}\hspace{.2em}0$}
\psfrag{N}{$\scriptstyle \mbox{}\hspace{.4em}N$}
\psfrag{M}{$\scriptstyle \mbox{}\hspace{-1em}-N$}
\psfrag{Pp}{$\mathscr P^+ = \mathbb{N}$}
\psfrag{Pm}{$\mathscr P^- = n-\mathbb{N}^*$}
\psfrag{S}{$\scriptstyle {\Sigma}(\rho,N)$}
\psfrag{mS}{$\scriptstyle -{\Sigma}(\rho,N)$}
\epsfig{file=FigrhoN2.eps,height=2.5cm,angle = 0}
\end{center}
\caption{ \label{figrhoNadapt}
The set $\protect{\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P^\pm)$
and the image of a $(\rho,N,\mathscr P^\pm)$-adapted path.}
\end{figure}
One can characterize~${{\mathscr R}_{\rho,N}}(\mathscr P)$ as follows:
a point of $\mathscr R(\mathscr P)$ belongs to~${{\mathscr R}_{\rho,N}}(\mathscr P)$ iff it can be represented by a
$(\rho,N,\mathscr P)$-adapted path.
Observe that the projection onto~$\mathbb{C}$ of ${{\mathscr R}_{\rho,N}}(\mathscr P)$ is ${\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)\cup
D(0,\rho)$ (only for $\mathscr P=-\mathbb{N}^*$ is $D(0,\rho)$ contained in~${\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)$) and
that $\mathscr R(\mathscr P) = \bigcup_{\rho,N} {{\mathscr R}_{\rho,N}}(\mathscr P)$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We now show how to control the operators~$\mathcal{S}_m$ and~${\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m$ uniformly
in a set~${{\mathscr R}_{\rho,N}}(\mathscr P^\pm)$:
\begin{lemma} \label{leminiSmtSm}
Let $n\in\mathbb{N}$ and $\mathcal{S}_m,{\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m$ as in~\eqref{eqdefiStSm}, and consider the
meromorphic functions $S_m = \mathcal{S}_m 1$ and ${S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m = {\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m 1$.
Given $\rho,N$ as above, there exist $\lambda>0$ which depends only on $\rho,N$ and
$\lambda_n>0$ which depends only on $\rho,N,n$ such that, for
$m\in\mathscr P\setminus\{0\}$,
\begin{alignat}{3}
\label{ineqSm}
&\text{if $\mathscr P = n-\mathbb{N}^*$:}& \qquad &
|S_m({\zeta})| \le \lambda_n & \quad &
\text{for}\enspace {\zeta}\in{\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)\cup D(0,\rho) \\
\tag{\ref{ineqSm}$'$}
&\text{if $\mathscr P = \mathbb{N}$:}& \qquad &
|{S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m({\zeta})| \le \lambda & \quad &
\text{for}\enspace {\zeta}\in{\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)\cup D(0,\rho)
\end{alignat}
and
\begin{alignat}{3}
\label{ineqSzeroout}
&|S_0({\zeta})| \le \lambda_n,& \qquad &|{S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_0({\zeta})| \le \lambda,& \qquad
&\text{for}\enspace {\zeta}\in \mathbb{C}\setminus D(0,\rho) \\
\label{ineqSzeroin}
&|S_0({\zeta})| \le \frac{\rho \lambda_n}{|{\zeta}|},& \qquad &|{S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_0({\zeta})| \le \frac{\rho \lambda}{|{\zeta}|},& \qquad
&\text{for}\enspace {\zeta}\in D(0,\rho).
\end{alignat}
One can take $\lambda = (N+1)\rho^{-1}$ and $\lambda_n = (|n|+N)\rho^{-1}$.
\end{lemma}
\begin{proof}
Let $m\in\mathscr P\setminus\{0\}$ and ${\zeta}\in{\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)\cup D(0,\rho)$,
thus $|{\zeta}-m|\ge\rho$.
Consider first the case $\mathscr P=\mathbb{N}$.
If $m\ge N$, then $|{\zeta}-m| \ge \frac{\rho |m|}{N}$ by Thales' theorem;
thus $\frac{1}{|{\zeta}-m|} \le \rho^{-1}$ and $\big|\frac{m}{{\zeta}-m}\big| \le
N\rho^{-1}$ for any $m\in\mathbb{N}^*$.
Therefore $|{S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m({\zeta})| = \big|\frac{m+1}{{\zeta}-m}\big| \le \lambda = (N+1)\rho^{-1}$.
Since $\lambda\ge\rho^{-1}$, ${S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_0({\zeta}) = 1/{\zeta}$ also satisfies the required inequalities.
When $\mathscr P=n-\mathbb{N}^*$, one argues similarly except that the case $N\le m \le n-1$
must be treated separately.
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Combining the previous two lemmas, we get
\begin{lemma} \label{lemIneqfond}
Let us fix $n,\rho,N$ as above,
$K,L,C$ as in~\eqref{ineqwhaeta}--\eqref{ineqwhazero} and $\lambda,\lambda_n$ as in
Lemma~\ref{leminiSmtSm}.
Suppose that $\mathscr P = n-\mathbb{N}^*$ or~$\mathbb{N}$,
$\gamma \colon [0,T] \to \mathbb{C}$ is $(\rho,N,\mathscr P)$-adapted and
$\widehat\varphi \in \wHR\mathscr P$ satisfies
$$
\big| \widehat\varphi\big( \gamma(t) \big) \big| \le P(t) \,{\mathrm e}^{Ct}, \qquad
0 \le t \le T,
$$
with a continuous monotonic non-decreasing function~$P$ and a constant $C\ge0$.
Assume $m\in\mathscr P$, with the restriction $m\neq0$ if $n=0$ and $\mathscr P=-\mathbb{N}^*$.
Then, for any $\eta\in\Omega$,
$$
\mathscr P= n-\mathbb{N}^* \;\Rightarrow\;
\mathcal{S}_m \mathcal{A}_\eta\widehat\varphi \in \wHR{n-\mathbb{N}^*}, \quad
\mathscr P= \mathbb{N} \;\Rightarrow\;
{\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m \mathcal{A}_\eta\widehat\varphi \in \wHR\mathbb{N},
$$
and, in the first case,
\begin{align}
m\neq0 \enspace\text{or}\enspace \eta=0 &\enspace\Rightarrow\enspace
\big| \mathcal{S}_m \mathcal{A}_\eta\widehat\varphi\big( \gamma(t) \big) \big| \le
\lambda_n K L^\eta (1*P)(t) \, {\mathrm e}^{C t}
\\
m=0 \enspace\text{and}\enspace \eta\neq0 &\enspace\Rightarrow\enspace
\big| \mathcal{S}_m \mathcal{A}_\eta\widehat\varphi\big( \gamma(t) \big) \big| \le
\lambda_n K L^\eta \big((\delta+1)*P\big)(t) \, {\mathrm e}^{C t}
\end{align}
for all $t\in[0,T]$, while in the second case
the function~${\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m \mathcal{A}_\eta\widehat\varphi$ satisfies the same inequalities with~$\lambda$ replacing~$\lambda_n$.
\end{lemma}
\begin{proof}
We suppose $\mathscr P=n-\mathbb{N}^*$ and show the properties for $\mathcal{S}_m \mathcal{A}_\eta\widehat\varphi$ only,
the other case being similar.
Since $\mathcal{S}_m \mathcal{A}_\eta\widehat\varphi = S_m (\widehat a_\eta * \widehat\varphi)$, this function belongs to
$\wHR\mathscr P$ by the first part of Lemma~\ref{lemAnCont}.
In view of~\eqref{ineqwhaeta}--\eqref{ineqwhazero}, the second part of this lemma yields
\begin{align}
\label{ineqAetawhph}
\big| \mathcal{A}_\eta \widehat\varphi\big( \gamma(t) \big) \big| &\le K L^\eta (1*P)(t) \,{\mathrm e}^{Ct}
\\
\label{ineqAzerowhph}
\big| \mathcal{A}_0 \widehat\varphi\big( \gamma(t) \big) \big| &\le K (I*P)(t) \,{\mathrm e}^{Ct}
\end{align}
for all $t\in[0,T]$, with $I(t)\equiv t$ (notice that the first inequality holds if $\eta=0$
as well).
If $m\neq0$, then \eqref{ineqSm} yields the desired inequality for $\big| \mathcal{S}_m
\mathcal{A}_\eta \widehat\varphi\big( \gamma(t) \big) \big|$.
Suppose $m=0$; thus $n\neq0$ by assumption.
We observe that, if $t>\rho$, then $\gamma(t) \in {\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)$ has modulus $>\rho$ and
\eqref{ineqSzeroout} yields
$\big| S_0\big( \gamma(t) \big) \big| \le \lambda_n$,
whereas if $t\le\rho$, then $|\gamma(t)| = |t|$ and
\eqref{ineqSzeroin} yields
$\big| S_0\big( \gamma(t) \big) \big| \le \frac{\rho \lambda_n}{t}$.
Thus, if $m=0$ and $\eta=0$, then \eqref{ineqAetawhph} yields the desired inequality when
$t>\rho$ and \eqref{ineqAzerowhph} yields
$\big| \mathcal{S}_0 \mathcal{A}_0 \widehat\varphi\big( \gamma(t) \big) \big| \le K\rho\lambda_n
\frac{I*P(t)}{t} \,{\mathrm e}^{Ct}$
for $t\le\rho$, which is sufficient since
$\frac{I*P(t)}{t} = \frac{1}{t} \int_0^t t' P(t-t') \,{\mathrm d} t' \le 1*P(t)$ and $\rho<1$.
We conclude with the case where $m=0$ and $\eta\neq0$.
Using \eqref{ineqAetawhph}, we obtain the result when $t>\rho$, since $1*P\le
P+1*P$.
When $t\le\rho$, we get
$\big| \mathcal{S}_0 \mathcal{A}_\eta \widehat\varphi\big( \gamma(t) \big) \big| \le K L^\eta\rho\lambda_n
\frac{1*P(t)}{t} \,{\mathrm e}^{Ct}$,
which is sufficient since
$\frac{1*P(t)}{t} = \frac{1}{t} \int_0^t P(t') \,{\mathrm d} t' \le P(t)$.
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{End of the proof of Theorem~\ref{thmResur}:} case of~$\widehat\varphi_n$.
\medskip
Let $n\in\mathbb{N}$. According to~\eqref{eqwhphnwhpsin}, the formal series $\widehat\varphi_n$
can be written as the formally convergent series
$\sum_{\norm{\text{\boldmath{$\om$}}}=n-1} \beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}}$.
Let $\mathscr P = n-\mathbb{N}^*$; according to Corollary~\ref{corbecVo} each
$\beta_\text{\boldmath{$\om$}}\widehat\mathcal{V}^\text{\boldmath{$\om$}}$ converges to a function of~$\wHR\mathscr P$,
it is thus sufficient to check the uniform convergence of the above series
\emph{as a series of holomorphic functions} in each compact subset
of~$\mathscr R(\mathscr P)$ and to give appropriate bounds.
Let us fix $\rho\in\left]0,\frac{1}{2}\right[$, $N\in\mathbb{N}^*$ and $K,L,C,\lambda,\lambda_n$ as
in Lemma~\ref{lemIneqfond}.
\medskip
\noindent --
We first show that, for any $(\rho,N,\mathscr P)$-adapted path~$\gamma$ (infinite or not)
and for any $\text{\boldmath{$\om$}}=(\omega_1,\dotsc,\omega_r)\in\Omega^r$ with $r\ge1$ and $\norm{\text{\boldmath{$\om$}}}=n-1$,
one has for all~$t$
\begin{equation} \label{ineqbecV}
\big| \beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}}\big( \gamma(t) \big) \big| \le
(\lambda_n K)^r L^{n-1} \widehat P_r(t) \, {\mathrm e}^{Ct},
\qquad \widehat P_r = (\delta+1)^{*\flo{r/2}} * 1^{*\ceil{r/2}},
\end{equation}
with the same notation as in~\eqref{eqvalcV} for $\ceil{r/2}$ and
$\flo{r/2}=r-\ceil{r/2}$.
Observe that $\widehat P_r$ is a polynomial with non-negative coefficients.
If $\omega_r=\htb\omega_r\neq0$, then \eqref{ineqwhaeta} and \eqref{ineqSm} yield
$| \mathcal{S}_{\shtb\omega_r} \widehat a_{\omega_r}({\zeta}) | \le \lambda_n K L^{\omega_r}\, {\mathrm e}^{C|{\zeta}|}$
for all ${\zeta}\in{\raisebox{0ex}[2.6ex]{$\overset{\raisebox{-.8ex}{$\scriptscriptstyle\bullet$}}{\mathscr R}_{\rho,N}$}}(\mathscr P)\cup D(0,\rho)$.
The same inequality holds also if $\omega_r=0$ (use \eqref{ineqwhaeta}
and~\eqref{ineqSzeroout} if $|{\zeta}|>\rho$, and \eqref{ineqwhazero}
and~\eqref{ineqSzeroin} if $|{\zeta}|\le\rho$). Therefore
\begin{equation} \label{ineqiniomr}
\big| \mathcal{S}_{\shtb\omega_r} \widehat a_{\omega_r}\big(\gamma(t)\big) \big| \le \lambda_n K L^{\omega_r}\, {\mathrm e}^{Ct},
\qquad t\ge0
\end{equation}
(since $|\gamma(t)|\le t$).
Since Lemma~\ref{lemexo} implies $\htb\omega_1,\dotsc,\htb\omega_{r-1} \le n-1$, we can
apply $r-1$ times Lemma~\ref{lemIneqfond} and get
\begin{equation} \label{ineqba}
\big| \beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}}\big( \gamma(t) \big) \big| \le
(\lambda_n K)^r L^{n-1} \big( (\delta+1)^{*(r-a)} * 1^{*a}\big)(t) \, {\mathrm e}^{Ct},
\end{equation}
with $a = \operatorname{card} \ao i\in[1,r] \mid \htb\omega_i\neq 0 \;\text{or}\; \omega_i=0 \af$.
But $a\ge\ceil{r/2}$, as was shown in Lemma~\ref{lemdefcV}, hence the polynomial
expression in~$t$ appearing in the {right-hand side}\ of~\eqref{ineqba} can be written
$(\delta+1)^{*(r-a)} * 1^{*(a-\ceil{r/2})} * 1^{*\ceil{r/2}} \le
(\delta+1)^{*(r-\ceil{r/2})} * 1^{*\ceil{r/2}}$,
which yields~\eqref{ineqbecV}.
\medskip
\noindent --
We have
\begin{multline*}
\operatorname{card}\ao \text{\boldmath{$\om$}}\in\Omega^r \mid \norm{\text{\boldmath{$\om$}}}=n-1 \af =
\operatorname{card}\ao k\in\mathbb{N}^r \mid \norm{k}=n+r-1 \af \\[1ex]
=\binom{n+2(r-1)}{r-1} \le 2^{n+2(r-1)},
\end{multline*}
hence, for each $r\ge1$,
\begin{equation} \label{ineqCVUdeux}
\sum_{r(\text{\boldmath{$\om$}})=r,\norm{\text{\boldmath{$\om$}}}=n-1}
\big| \beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}}\big( \gamma(t) \big) \big| \le
2 \lambda_n K (2L)^{n-1} \Lambda_n^{r-1} \widehat P_r(t) \, {\mathrm e}^{Ct}
\end{equation}
with $\Lambda_n = 4\lambda_n K$.
But $\widetilde P_r(z) = \mathcal{B}^{-1}\widehat P_r = (1+z^{-1})^{\flo{r/2}} z^{-\ceil{r/2}}$ gives
rise to
$$
\widetilde\Phi_n(z) = \sum_{r\ge1} \Lambda_n^{r-1} \widetilde P_r(z) =
z^{-1} \big( 1+\Lambda_n(1+z^{-1}) \big) \big( 1 - \Lambda_n^2(z^{-1}+z^{-2}) \big)^{-1}
$$
which is convergent (with non-negative coefficients), thus
$\displaystyle\sum_{r\ge1} \Lambda_n^{r-1} \widehat P_r(t) = \mathcal{B}\widetilde\Phi_n(t)$ is convergent for all~$t$.
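(The closed form of~$\widetilde\Phi_n$ is obtained by grouping the terms $r=2k+1$ and
$r=2k+2$, $k\ge0$: their sum equals
$z^{-1}\big(1+\Lambda_n(1+z^{-1})\big)\big(\Lambda_n^2(z^{-1}+z^{-2})\big)^k$, and summing
the resulting geometric series in~$k$ yields the formula above.)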
Therefore $\widehat\varphi_n$ is the sum of a series of holomorphic functions uniformly
convergent in every compact subset of~$\mathscr R_{\rho,N}(\mathscr P)$ satisfying
$$
\big| \widehat\varphi_n\big(\gamma(t)\big) \big| \le 2\lambda_n K (2L)^{n-1} \mathcal{B}\widetilde\Phi_n(t)\,{\mathrm e}^{C t}.
$$
\medskip
\noindent --
We conclude by using inequalities of the form~\eqref{ineqentire} to bound
$\mathcal{B}\widetilde\Phi_n$:
one can check that $|z|\ge 4\Lambda_n^2$ implies
$|z\widetilde\Phi_n(z)| \le 2(2+\Lambda_n)$, hence
$$
\mathcal{B}\widetilde\Phi_n(t) \le 2(2+\Lambda_n) \, {\mathrm e}^{4\Lambda_n^2 t}.
$$
In view of the explicit dependence of~$\lambda_n$ on~$n$ indicated in
Lemma~\ref{leminiSmtSm}, we easily get inequalities of the
form~\eqref{ineqwhphn} (possibly with larger constants $K,L,C$).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{End of the proof of Theorem~\ref{thmResur}:} case of~$\widehat\psi_n$.
\medskip
We only indicate the inequalities that one obtains when adapting the previous arguments to
the case of~$\widehat\psi_n$.
Let $\mathscr P=\mathbb{N}$ and
$$
\widehat\chi_n({\zeta}) = \big({\zeta}-(n-1)\big)\widehat\psi_n({\zeta}), \qquad
\widehat\mathcal{W}^\text{\boldmath{$\om$}}({\zeta}) = \big({\zeta}-(n-1)\big)\beta_\text{\boldmath{$\om$}}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}({\zeta}).
$$
The initial bound corresponding to~\eqref{ineqiniomr} is
$\big| {\cS\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_{\swc\omega_1} \widehat a_{\omega_1}\big(\gamma(t)\big) \big| \le$
\lambda K L^{\omega_1}\, {\mathrm e}^{Ct}$.
This yields
\begin{equation} \label{ineqCVUtrois}
\big| \widehat\mathcal{W}^\text{\boldmath{$\om$}}\big( \gamma(t) \big) \big| \le
K (\lambda K)^{r-1} L^{n-1} \widehat Q_r(t) \, {\mathrm e}^{Ct},
\qquad \widehat Q_r = (\delta+1)^{*\ceil{\frac{r}{2}-1}} * 1^{*\flo{\frac{r}{2}+1}}
\end{equation}
after $r-2$ applications of Lemma~\ref{lemIneqfond}
(with an intermediary inequality analogous to~\eqref{ineqba} but involving
$b = 1 + \operatorname{card} \ao i\in[1,r-1] \mid \wc\omega_i\neq 0 \;\text{or}\; \omega_i=0 \af
\ge \flo{\frac{r}{2}+1}$ instead of~$a$).
Therefore
$\big| \widehat\chi_n\big(\gamma(t)\big) \big| \le 2 K (2L)^{n-1} \mathcal{B}\widetilde\Psi(t)\,{\mathrm e}^{C t}$,
with
$$
\widetilde\Psi(z) = \sum_{r\ge1} \Lambda^{r-1} \mathcal{B}^{-1}\widehat Q_r(z) =
z^{-1} (1+\Lambda z^{-1}) \big( 1 - \Lambda^2(z^{-1}+z^{-2}) \big)^{-1},
\qquad \Lambda = 4\lambda K,
$$
whence $\big| \widehat\chi_n\big( \gamma(t) \big) \big| \le K_1 (2L)^n \, {\mathrm e}^{C_1 t}$,
with suitable constants $K_1,C_1$ independent of~$n$.
This is the desired conclusion when $n=0$. When $n\ge1$, we can pass
from~$\widehat\chi_n$ to~$\widehat\psi_n$ since $|\gamma(t)-(n-1)|\ge\rho$,
with only one exception; namely, if $n=1$ and $t<\rho$, then we only have a
bound for $|{\zeta}\widehat\psi_1({\zeta})|$ with ${\zeta}=\gamma(t)\in D(0,\rho)$, but in that case
the analyticity of~$\widehat\chi_1$ at the origin of~$\mathscr R(\mathbb{N})$ is sufficient since we
know that its Taylor series has no constant term.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Proof of inequalities~\eqref{inequniformphn}.}
They will follow from a lemma which has its own interest.
\begin{lemma} \label{lemLagrpsinphn}
For every $n\in\mathbb{N}$, the following identity holds in~$\mathbb{C}[[x]]$:
\begin{equation} \label{eqLagrpsinphn}
\varphi_n = \sum_{ \substack{s\ge1,\ n_1,\dotsc,n_s\ge0 \\
n_1 + \dotsb + n_s = n+s-1}}
\frac{(-1)^s}{s} \binom{n+s-1}{s-1}
\psi_{n_1} \dotsm \psi_{n_s},
\end{equation}
where the {right-hand side}\ is a formally convergent series.
\end{lemma}
\begin{proof}
This is the consequence of the following version of Lagrange inversion formula:
If $\chi(t,y)\in\mathbb{C}[[t,y]]$, then the formal transformation
$$
(t,x,y) \mapsto \big( t, x, y-x\chi(t,y) \big)
$$
has an inverse of the form
$(t,x,y) \mapsto \big( t, x, \mathscr Y(t,x,y) \big)$
with $\mathscr Y \in \mathbb{C}[[t,x,y]]$ given by
\begin{equation} \label{eqforminvLagr}
\mathscr Y(t,x,y) = y + \sum_{s\ge1} \, \frac{x^s}{s!} \,
\Big(\frac{\partial\,}{\partial y}\Big)^{\!s-1} \big( \chi(t,y)^s \big).
\end{equation}
(Proof: The transformation is invertible, because its $1$-jet is, and
the inverse must be of the form $\big(t,x,\mathscr Y(t,x,y)\big)$ with $\mathscr Y(t,0,y)=y$
and $\partial_y\mathscr Y(t,0,y)=1$.
It is thus sufficient to check the formula
$$
\partial_x^s \mathscr Y(t,x,y) =
\Big(\frac{\partial\,}{\partial y}\Big)^{\!s-1} \bigg[ \Big( \chi\big(t, \mathscr Y(t,x,y) \big) \Big)^{\!s}
\partial_y\mathscr Y(t,x,y) \bigg], \qquad s\ge1
$$
by induction on~$s$, which is easy.)
Since $\psi_n(x) \in x\mathbb{C}[[x]]$, we can apply this with $\chi(t,y) = \sum_{n\ge0}
\chi_n(t) y^n$ where $\chi_n(x) = -\frac{\psi_n(x)}{x}$:
this way $y-x\chi(x,y) = y + \sum_{n\ge0} \psi_n(x) y^n = \psi(x,y)$, and
\eqref{eqforminvLagr} yields
$$
\varphi(x,y) = y + \sum_{n\ge0} \varphi_n(x) y^n =
y + \sum_{s\ge1} \, \frac{(-1)^s}{s!} \,
\Big(\frac{\partial\,}{\partial y}\Big)^{\!s-1} \bigg[
\bigg( \sum_{n\ge0} \psi_n(x) y^n \bigg)^{\!\!s}\, \bigg]
$$
by specialization to $t=x$, whence the result follows (one gets a formally
convergent series because $\psi_n(x) \in x\mathbb{C}[[x]]$).
\end{proof}
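\medskip
\noindent --
As a low-order check of~\eqref{eqLagrpsinphn}: the terms $s=1$ and $s=2$ give
$$
\varphi_n = -\psi_n + \frac{n+1}{2} \sum_{n_1+n_2=n+1} \psi_{n_1}\psi_{n_2} - \dotsb,
$$
consistent with what one finds by expanding the identity
$\th\circ\th^{-1}=\operatorname{id}$ at the first orders in~$x$
(recall that $\psi_n(x) \in x\mathbb{C}[[x]]$, so that the $s$-th term is $O(x^s)$).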
As a consequence, we get
\begin{equation*}
\widehat\varphi_n = \sum_{ \substack{s\ge1,\ n_1,\dotsc,n_s\ge0 \\
n_1 + \dotsb + n_s = n+s-1}}
\frac{(-1)^s}{s} \binom{n+s-1}{s-1}
\widehat\psi_{n_1} * \dotsm * \widehat\psi_{n_s}
\end{equation*}
a priori in~$\mathbb{C}[[{\zeta}]]$, but the {right-hand side}\ is also a series of holomorphic functions
and inequalities~\eqref{ineqwhpsin} will yield uniform convergence in every
compact subset of the principal sheet of~$\mathscr R(\mathbb{Z})$.
Indeed, let $\rho\in\left]0,\frac{1}{2}\right[$.
The domain considered in~\eqref{inequniformphn} consists of those ${\zeta}\in\mathbb{C}$
such that the segment $[0,{\zeta}]$ does not meet the open discs $D(-1,\rho)$ and
$D(1,\rho)$.
All the $\widehat\psi_n$'s are holomorphic in this domain~$\mathscr D_\rho$ (we had to delete the disc
around~$-1$ only because of~$\widehat\psi_0$).
Since~$\mathscr D_\rho$ is star-shaped {with respect to}~$0$, the analytic continuation of the
convolution product of any two functions~$\widehat\varphi$ and~$\widehat\psi$ holomorphic
in~$\mathscr D_\rho$ is defined by formula~\eqref{eqdefconvol} regardless of the size
of~$|{\zeta}|$.
If moreover one has inequalities of the form
$|\widehat\varphi({\zeta})| \le \Phi(|{\zeta}|)\,{\mathrm e}^{C|{\zeta}|}$ and
$|\widehat\psi({\zeta})| \le \Psi(|{\zeta}|)\,{\mathrm e}^{C|{\zeta}|}$ in~$\mathscr D_\rho$,
then the inequality
$|\widehat\varphi*\widehat\psi({\zeta})| \le \Phi*\Psi(|{\zeta}|)\,{\mathrm e}^{C|{\zeta}|}$ holds in~$\mathscr D_\rho$.
Hence
$$
|\widehat\varphi_n({\zeta})| \le \sum_{ \substack{s\ge1,\ n_1,\dotsc,n_s\ge0 \\
n_1 + \dotsb + n_s = n+s-1}}
\frac{1}{s} \binom{n+s-1}{s-1}
K^s L^{n+s-1} M_s(|{\zeta}|) \, {\mathrm e}^{C|{\zeta}|},
\qquad {\zeta}\in\mathscr D_\rho,
$$
with $M_s({\zeta}) = 1^{*s}({\zeta}) = \frac{{\zeta}^{s-1}}{(s-1)!}$.
The conclusion follows since the {right-hand side}\ is less than $K (4L)^n \,{\mathrm e}^{(C+8KL)|{\zeta}|}$.
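(Indeed, $\frac{1}{s}\binom{n+s-1}{s-1} \le 2^{n+s-1}$ and
$M_s(|{\zeta}|) = \frac{|{\zeta}|^{s-1}}{(s-1)!}$, so the sum is bounded by
$K(2L)^n \sum_{s\ge1} \frac{(2KL|{\zeta}|)^{s-1}}{(s-1)!}\,{\mathrm e}^{C|{\zeta}|}
= K(2L)^n\,{\mathrm e}^{(C+2KL)|{\zeta}|}$, which is smaller than the stated expression.)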
\section{The $\widetilde\mathcal{V}^\text{\boldmath{$\om$}}$'s as resurgence
monomials---introduction to alien calculus} \label{secBESN}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Resurgence theory means much more than Borel-Laplace summation.
It incorporates a study of the role of the singularities which appear in the
Borel plane ({i.e.}\ the plane of the complex variable~${\zeta}$),
which can be performed through the so-called {\em alien calculus}.
We shall now recall \'Ecalle's definitions in a particular case which will suffice for
the saddle-node problem.
We shall give less details than in the previous section; see {e.g.}~\cite{kokyu},
\S2.3 for more information (and {\em op.\ cit.}, \S3 for an outline of the
general case and more references).
The reader will thus find in this section the definition of a subalgebra
$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ of~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$, which is called the algebra of {\em simple resurgent
functions over~$\mathbb{Z}$},
and of a collection of operators~$\Delta_{m}$, $m\in\mathbb{Z}^*$, which are
derivations of~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ called {\em alien derivations}.
Alien calculus consists in the proper use of these derivations.
We shall see that the formal series $\widetilde\mathcal{V}^{\omega_1,\dotsc,\omega_r}$ belong
to~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ and study the effect of the alien derivations on them.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let $\widehat\varphi$ be holomorphic in an open subset~$U$ of~$\mathbb{C}$ and $\omega\in\partial U$.
We say that~$\widehat\varphi$ has a {\em simple singularity} at~$\omega$ if there exist
$C\in\mathbb{C}$ and $\widehat\chi({\zeta}),\mathrm{reg}({\zeta})\in\mathbb{C}\{{\zeta}\}$ such that
\begin{equation} \label{eqsimplesing}
\widehat\varphi({\zeta}) =
\frac{C}{2\pi{\mathrm i}({\zeta}-\omega)} + \frac{1}{2\pi{\mathrm i}}\widehat\chi({\zeta}-\omega)\log({\zeta}-\omega)
+ \mathrm{reg}({\zeta}-\omega)
\end{equation}
for ${\zeta}$ close enough to~$\omega$. The {\em residuum}~$C$ and the
{\em variation}~$\widehat\chi$ are then determined by~$\widehat\varphi$ (independently of the
choice of the branch of the logarithm):
$$
C = 2\pi{\mathrm i}
\lim_{\substack{{\zeta}\to\omega \\ {\scriptscriptstyle {\zeta}\in U}}}
({\zeta}-\omega)\widehat\varphi({\zeta}), \qquad
\widehat\chi({\zeta}) = \widehat\varphi(\omega+{\zeta}) - \widehat\varphi(\omega+{\zeta}\,{\mathrm e}^{-2\pi{\mathrm i}}),
$$
where it is understood that considering $\omega+{\zeta}\,{\mathrm e}^{-2\pi{\mathrm i}}$ means following
the analytic continuation of~$\widehat\varphi$ along the circular path
$t\in[0,1]\mapsto \omega+{\zeta}\,{\mathrm e}^{-2\pi{\mathrm i} t}$
(which is possible when starting from $\omega+{\zeta}\in U$ provided $|{\zeta}|$ is small enough).
In this situation, let us use the notation
$$
\operatorname{sing}_\omega \widehat\varphi = C\,\delta + \widehat\chi \,\in\, \mathbb{C}\,\delta \oplus \mathbb{C}\{{\zeta}\}.
$$
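For example, the simple pole $\widehat\varphi({\zeta}) = \frac{1}{1+{\zeta}}$ has a simple
singularity at $\omega=-1$, with $\operatorname{sing}_{-1}\widehat\varphi = 2\pi{\mathrm i}\,\delta$
(residuum $C=2\pi{\mathrm i}$, vanishing variation), while a purely logarithmic
singularity $\widehat\varphi({\zeta}) = \frac{1}{2\pi{\mathrm i}}\,\widehat\chi({\zeta}-\omega)\log({\zeta}-\omega)$ has
vanishing residuum and variation~$\widehat\chi$.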
We recall that $\widehat{\boldsymbol{R}}_\mathbb{Z} = \mathbb{C}\, \delta \oplus \wHR\mathbb{Z}$.
\begin{definition}
A simple resurgent function over~$\mathbb{Z}$ is any $c\,\delta + \widehat\varphi \in \widehat{\boldsymbol{R}}_\mathbb{Z}$ such that all
branches of the holomorphic function $\widehat\varphi\in\widehat H\big(\mathscr R(\mathbb{Z})\big)$ only have
simple singularities (necessarily located at points of~$\mathbb{Z}$).
The space of simple resurgent functions over~$\mathbb{Z}$ will be denoted~$\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
\end{definition}
It turns out that \emph{$\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ is stable by convolution:} it is a subalgebra
of~$\widehat{\boldsymbol{R}}_\mathbb{Z}$.
This is the \emph{convolutive model of the algebra of simple resurgent functions}.
The \emph{formal model} is defined as $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z} = \mathcal{B}^{-1}(\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z})$, which is a
subalgebra of~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
For a simple resurgent function $c\,\delta + \widehat\varphi$ and a path~$\gamma$ which starts
from~$0$ and then avoids~$\mathbb{Z}$, we shall denote by $\operatorname{cont}_\gamma\widehat\varphi$ the analytic
continuation of~$\widehat\varphi$ along~$\gamma$: this function is analytic in a
neighbourhood of the endpoint of~$\gamma$ and admits itself an analytic
continuation along all the paths which avoid~$\mathbb{Z}$.
If the endpoint of~$\gamma$ is close to~$m$ (say at a distance
$<\frac{1}{2}$), then the singularity
$\operatorname{sing}_m(\operatorname{cont}_\gamma\widehat\varphi) \in \mathbb{C}\,\delta \oplus \mathbb{C}\{{\zeta}\}$
is well-defined
(notice that it depends on the branch under consideration, {i.e.}\ on~$\gamma$, and
not only on~$m$ and~$\widehat\varphi$).
It is easy to see that $\operatorname{sing}_m(\operatorname{cont}_\gamma\widehat\varphi)$ is itself a simple resurgent
function; we thus have, for $\gamma$ and~$m$ as above, a $\mathbb{C}$-linear operator
$c\,\delta + \widehat\varphi \mapsto \operatorname{sing}_m(\operatorname{cont}_\gamma\widehat\varphi)$ from $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ to itself.
\begin{definition} \label{defaliender}
Let $m\in\mathbb{Z}^*$. If $m\ge1$, we define an operator from $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ to
itself by using $2^{m-1}$ particular paths~$\gamma$:
\begin{equation} \label{eqdefDeomgen}
\Delta_m(c\,\delta + \widehat\varphi) = \sum_{{\varepsilon}\in\{+,-\}^{m-1}}
\frac{p_{\varepsilon}!q_{\varepsilon}!}{m!} \operatorname{sing}_m(\operatorname{cont}_{\gamma_{\varepsilon}}\widehat\varphi)
\end{equation}
where $p_{\varepsilon}$ and $q_{\varepsilon}=m-1-p_{\varepsilon}$ denote the numbers of signs~`$+$' and
of signs~`$-$' in the sequence ${\varepsilon} = ({\varepsilon}_1,\dotsc,{\varepsilon}_{m-1})$,
and the oriented path~$\gamma_{\varepsilon}$ connects~$0$ and~$m$ following the
segment~$\left]0,m\right[$ but circumventing the intermediary integer points~$k$
to the right if~${\varepsilon}_k=+$ and to the left if~${\varepsilon}_k=-$.
If $m\le-1$, the $\mathbb{C}$-linear operator $\Delta_m$ is defined similarly, using the
$2^{|m|-1}$ paths $\gamma_{\varepsilon}$ which follow the segment~$\left]0,m\right[$
but circumvent the intermediary integer points~$-k$ to the right
if~${\varepsilon}_k=+$ and to the left if~${\varepsilon}_k=-$.
\end{definition}
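\noindent
For instance, for $m=\pm1$ there is a single path~$\gamma_{\varepsilon}$ (the rectilinear
segment, the sequence~${\varepsilon}$ being empty) and the weight
$\frac{p_{\varepsilon}!q_{\varepsilon}!}{|m|!}$ equals~$1$, so that
$\Delta_{\pm1}(c\,\delta + \widehat\varphi) = \operatorname{sing}_{\pm1}(\widehat\varphi)$;
applied to the simple pole $\widehat\varphi({\zeta})=\frac{1}{1+{\zeta}}$ considered above, this
gives $\Delta_{-1}\widehat\varphi = 2\pi{\mathrm i}\,\delta$ and $\Delta_{1}\widehat\varphi = 0$.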
\begin{prop} \label{propAlDer}
For each $m\in\mathbb{Z}^*$, the operator $\Delta_m$ is a $\mathbb{C}$-linear
derivation of~$\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
\end{prop}
\noindent
(For the proof, see \cite{Eca81} or \cite{kokyu}, \S2.3; see also
Lemma~\ref{lemrelDemDepm} and the comment on it below.)
By conjugacy by the formal Borel transform~$\mathcal{B}$, we get a derivation
of~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$, still denoted~$\Delta_m$ since there is no risk of confusion.
The operator~$\Delta_m$ is called the \emph{alien derivation of index~$m$} (either in
the convolutive model $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ or in the formal model $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$).
One can easily check from Definition~\ref{defaliender} that
\begin{equation} \label{eqcommutalien}
[\partial,\Delta_m] = m\Delta_m \enspace\text{in $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$},\qquad
[\widehat\partial,\Delta_m] = m\Delta_m \enspace\text{in $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$},
\end{equation}
where $\partial$ denotes the natural derivation $\frac{{\mathrm d}\,}{{\mathrm d} z}$ of $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$
and $\widehat\partial$ is the corresponding derivation
$c\,\delta + \widehat\varphi({\zeta}) \mapsto -{\zeta}\widehat\varphi({\zeta})$
of $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We shall see that the operators~$\Delta_m$ are independent in a strong sense
(see Theorem~\ref{thmfree} below).
This will rely on a study of the way the alien derivations act on the resurgent
functions $\widehat\mathcal{V}^{\omega_1,\dotsc,\omega_r}$.
In this article, for the sake of simplicity, we shall not introduce the larger
commutative algebras $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}$ and $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}$ of simple resurgent functions
``over~$\mathbb{C}$'', {i.e.}\ with simple singularities in the Borel plane which can be
located anywhere.
In these algebras act alien derivations indexed by any non-zero complex number.
One could easily adapt the arguments that we are about to develop to the study
of the alien derivations $\Delta_\omega$, $\omega \in \mathbb{C}^*$, in $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}$.
One can also define an even larger commutative algebra of resurgent functions, without any
restriction on the nature of the singularities to be encountered in the Borel
plane, on which act alien derivations $\Delta_\omega$ indexed by points~$\omega$ of the
Riemann surface of the logarithm, but there is no formal counterpart contained
in~$\mathbb{C}[[z^{-1}]]$ (see {e.g.}~\cite{kokyu}, \S3, and the references therein).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We now check that the formal series $\widetilde\mathcal{V}^{\omega_1,\dotsc,\omega_r}$ are simple
resurgent functions and, at the same time, slightly extend their definition.
\begin{lemma} \label{defnewcVomb}
Let $\mathbf{A} = \raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ and $\Omega \subset \mathbb{Z}$.
Assume that $a = (\widehat a_\eta)_{\eta\in\Omega}$ is a family of entire functions;
if $0\in\Omega$, we assume furthermore that $\widehat a_0(0)=0$.
Let $\tilde a_\eta = \mathcal{B}^{-1}\widehat a_\eta \in z^{-1}\mathbb{C}[[z^{-1}]]$.
Then the equations $\widetilde\mathcal{V}_a^\emptyset = {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\est} = 1$ and,
for $\text{\boldmath{$\om$}} \in \Omega^\bullet$ non-empty,
\begin{equation} \label{eqdefnewcV}
\big(\frac{{\mathrm d}\,}{{\mathrm d} z} + \norm{\text{\boldmath{$\om$}}}\big) \widetilde\mathcal{V}_a^\text{\boldmath{$\om$}}
= \widetilde a_{\omega_1} \widetilde\mathcal{V}_a^{`\text{\boldmath{$\om$}}}, \qquad
\big(\frac{{\mathrm d}\,}{{\mathrm d} z} + \norm{\text{\boldmath{$\om$}}}\big) {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}
= -\widetilde a_{\omega_r} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\ombp}}
\end{equation}
(with $`\text{\boldmath{$\om$}}$ denoting $\text{\boldmath{$\om$}}$ deprived of its first letter,
$\text{$\omb$\hspace{-.115em}'}$ denoting $\text{\boldmath{$\om$}}$ deprived of its last letter and
$\norm{\text{\boldmath{$\om$}}}$ the sum of the letters of~$\text{\boldmath{$\om$}}$)
determine inductively two moulds $\widetilde\mathcal{V}_a^\bullet, {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \in \mathscr M^\bullet(\Omega,\mathbf{A})$, which
are symmetral and mutually inverse for mould multiplication.
\end{lemma}
\begin{proof}
A mere adaptation of Lemma~\ref{lemdefcV} and Proposition~\ref{propcVsym} (in
which the fact that $\Omega=\mathcal{N}$ played no role) shows that $\widetilde\mathcal{V}_a^\bullet$ and
${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}$ are well-defined by~\eqref{eqdefnewcV} as moulds on~$\Omega$ {\em with values in
$\mathbb{C}[[z^{-1}]]$}, with $\widetilde\mathcal{V}_a^\text{\boldmath{$\om$}},{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb} \in z^{-1}\mathbb{C}[[z^{-1}]]$ as soon as
$\text{\boldmath{$\om$}}\neq\emptyset$, that they are related by the involution~$S$ of Proposition~\ref{propinvsym}:
\begin{equation} \label{eqrelStcVcV}
{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} = S\widetilde\mathcal{V}_a^\bullet
\end{equation}
and symmetral, hence mutually inverse.
The formal Borel transforms are given by $\widehat\mathcal{V}_a^\emptyset = {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\est} = \delta$ and,
for $\text{\boldmath{$\om$}}\neq\emptyset$,
\begin{equation} \label{eqdefhcVhtcV}
\widehat\mathcal{V}_a^\text{\boldmath{$\om$}}({\zeta}) = -\frac{1}{{\zeta}-\norm{\text{\boldmath{$\om$}}}} \big( \widehat a_{\omega_1} * \widehat\mathcal{V}_a^{`\text{\boldmath{$\om$}}} \big),
\qquad {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}({\zeta}) = \frac{1}{{\zeta}-\norm{\text{\boldmath{$\om$}}}} \big( \widehat a_{\omega_r} * {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\ombp}} \big),
\end{equation}
where the right-hand sides belong to $\mathbb{C}[[{\zeta}]]$ even if $\norm{\text{\boldmath{$\om$}}}=0$, by the same
argument as in the proof of Lemma~\ref{lemdefcV},
and in fact to $\mathbb{C}\{{\zeta}\}$, by induction on $r(\text{\boldmath{$\om$}})$.
Since $\norm{\text{\boldmath{$\om$}}}$ always lies in~$\mathbb{Z}$, we can apply Lemma~\ref{lemAnCont}: we
get
$\widehat\mathcal{V}_a^\text{\boldmath{$\om$}}, {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb} \in \widehat H\big(\mathscr R(\mathbb{Z})\big)$ for all $\text{\boldmath{$\om$}}\neq\emptyset$ by
induction on~$r(\text{\boldmath{$\om$}})$, hence our moulds take their values in $\widehat{\boldsymbol{R}}_\mathbb{Z}$.
We see that the singularities of $\widehat\mathcal{V}_a^\text{\boldmath{$\om$}}$ and ${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}$ are all simple
singularities, because of the following addendum to Lemma~\ref{lemAnCont}:
with the hypotheses and notations of that lemma, if moreover
$\widehat\varphi\in\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$, then $\widehat b*\widehat\varphi$ vanishes at the origin and only has
simple singularities with vanishing residuum
(this follows from the first formula in~\eqref{eqbstarwhph}),
hence $\widehat s(\widehat b*\widehat\varphi) \in \raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
\end{proof}
Notice that, by iterating~\eqref{eqdefhcVhtcV}, one gets
\begin{gather} \label{eqnewcViter}
\widehat\mathcal{V}_a^\text{\boldmath{$\om$}} = (-1)^r
\frac{1}{{\zeta}-\htb\omega_1} \Big( \widehat a_{\omega_1} *
\Big( \frac{1}{{\zeta}-\htb\omega_2} \Big( \widehat a_{\omega_2} *
\Big( \dotsb
\Big( \frac{1}{{\zeta}-\htb\omega_r} \widehat a_{\omega_r}
\Big) \dotsm \Big)\Big)\Big)\Big) \\
\label{eqnewtcViter}
{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb} =
\frac{1}{{\zeta}-\wc\omega_r} \Big( \widehat a_{\omega_r} *
\Big( \frac{1}{{\zeta}-\wc\omega_{r-1}} \Big( \widehat a_{\omega_{r-1}} *
\Big( \dotsb
\Big( \frac{1}{{\zeta}-\wc\omega_1} \widehat a_{\omega_1}
\Big) \dotsm \Big)\Big)\Big)\Big)
\end{gather}
with the notation of~\eqref{eqnotahatcheck}.
These are iterated integrals; for instance, the second formula can be written
\begin{multline} \label{eqiteratedinthcVAo}
{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}({\zeta}) = \frac{1}{{\zeta}-\wc\omega_r}
\underset{0<{\zeta}_1<\dotsb<{\zeta}_{r-1}<{\zeta}}{\idotsint}
\widehat a_{\omega_r}({\zeta}-{\zeta}_{r-1}) \cdot \\[1ex]
\cdot \frac{\widehat a_{\omega_{r-1}}({\zeta}_{r-1}-{\zeta}_{r-2})}{{\zeta}_{r-1}-\wc\omega_{r-1}}
\dotsm\frac{\widehat a_{\omega_{2}}({\zeta}_{2}-{\zeta}_{1})}{{\zeta}_{2}-\wc\omega_{2}}
\frac{\widehat a_{\omega_{1}}({\zeta}_{1})}{{\zeta}_{1}-\wc\omega_{1}}
\,{\mathrm d}{\zeta}_{1}\dotsm{\mathrm d}{\zeta}_{r-1}
\end{multline}
and its analytic continuation along any parametrised path~$\gamma$ which starts
from~$0$ and then avoids~$\mathbb{Z}$ is given by the same integral, but taken over all
$(r-1)$-tuples
$({\zeta}_1,\dotsc,{\zeta}_{r-1}) = \big( \gamma(t_1),\dotsc,\gamma(t_{r-1}) \big)$
with $t_1 < \dotsb < t_{r-1}$.
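For instance, for a word $(\omega_1,\omega_2)$ of length two,
formulas \eqref{eqnewcViter}--\eqref{eqnewtcViter} reduce to single convolution
integrals:
\begin{equation*}
\widehat\mathcal{V}_a^{\omega_1,\omega_2}({\zeta})
= \frac{1}{{\zeta}-\omega_1-\omega_2}
\int_0^{\zeta} \widehat a_{\omega_1}({\zeta}-{\zeta}_1)\,
\frac{\widehat a_{\omega_2}({\zeta}_1)}{{\zeta}_1-\omega_2} \,{\mathrm d}{\zeta}_1,
\qquad
{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_1,\om_2}}({\zeta})
= \frac{1}{{\zeta}-\omega_1-\omega_2}
\int_0^{\zeta} \widehat a_{\omega_2}({\zeta}-{\zeta}_1)\,
\frac{\widehat a_{\omega_1}({\zeta}_1)}{{\zeta}_1-\omega_1} \,{\mathrm d}{\zeta}_1,
\end{equation*}
and one sees directly on the second formula that the analytic continuation of
${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_1,\om_2}}$ can only be singular at
$\wc\omega_1=\omega_1$ (pole of the integrand) and at
$\wc\omega_2=\omega_1+\omega_2$ (pole of the prefactor).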
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We are now ready to study the alien derivatives of the resurgent functions
$\widetilde\mathcal{V}_a^{\omega_1,\dotsc,\omega_r}$ and ${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_1,\dotsc,\om_r}}$
(in the formal model as well as in the convolutive model, the difference is
immaterial here).
\begin{prop} \label{propVam}
Let $\Omega\subset\mathbb{Z}$ and $a = (\widehat a_\eta)_{\eta\in\Omega}$ as in Lemma~\ref{defnewcVomb}.
For each $m\in\mathbb{Z}^*$, denote by the same symbol~$\Delta_m$ the alien derivation of
index~$m$ on $\mathbf{A}=\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ and the mould derivation it induces
on~$\mathscr M^\bullet(\Omega,\mathbf{A})$ by~\eqref{eqdefDd}.
Then there exists a scalar-valued alternal mould $V_a^\bullet(m)
\in \mathscr M^\bullet(\Omega,\mathbb{C})$ such that
\begin{equation} \label{eqDemtcVb}
\Delta_m \widetilde\mathcal{V}_a^\bullet = \widetilde\mathcal{V}_a^\bullet \times V_a^\bullet(m),
\qquad \Delta_m {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} = - V_a^\bullet(m) \times {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}.
\end{equation}
Moreover, if $\text{\boldmath{$\om$}}\in\Omega^\bullet$ is non-empty,
\begin{equation} \label{eqannulVm}
\norm{\text{\boldmath{$\om$}}}\neq m
\enspace\Rightarrow\enspace
V_a^\text{\boldmath{$\om$}}(m) = 0.
\end{equation}
\end{prop}
\begin{proof}
Since $\widetilde\mathcal{V}_a^\bullet$ and $S \widetilde\mathcal{V}_a^\bullet = {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}$ are mutually inverse, we
get
$$
\Delta_m \widetilde\mathcal{V}_a^\bullet = \widetilde\mathcal{V}_a^\bullet \times \widetilde V_a^\bullet(m),
\qquad \Delta_m {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} = {\widetilde V\hspace{-.45em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}^{\hspace{.1em}\bul}(m)} \times {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}
$$
by {\em defining} the moulds $\widetilde V_a^\bullet(m)$ and ${\widetilde V\hspace{-.45em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}^{\hspace{.1em}\bul}(m)}$ as
$$
\widetilde V_a^\bullet(m) = {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \times \Delta_m \widetilde\mathcal{V}_a^\bullet,
\qquad {\widetilde V\hspace{-.45em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}^{\hspace{.1em}\bul}(m)} = \Delta_m {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \times \widetilde\mathcal{V}_a^\bullet,
$$
but a priori all these moulds take their values in~$\mathbf{A}$.
The operators~$\Delta_m$ and~$S$ clearly commute, thus ${\widetilde V\hspace{-.45em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}^{\hspace{.1em}\bul}(m)} = S \widetilde V_a^\bullet(m)$.
Proposition~\ref{propDerivSym} shows that $\widetilde V_a^\bullet(m)$ and ${\widetilde V\hspace{-.45em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}^{\hspace{.1em}\bul}(m)}$ are alternal;
Proposition~\ref{propinvsym} then shows that they are the opposite of one another:
${\widetilde V\hspace{-.45em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}^{\hspace{.1em}\bul}(m)} = - \widetilde V_a^\bullet(m)$.
It only remains to be checked that $\widetilde V_a^\bullet(m)$ is scalar-valued and
satisfies~\eqref{eqannulVm}.
This will follow from the equation
\begin{equation} \label{eqpanamVa}
(\partial+\nabla-m) \widetilde V_a^\bullet(m) = 0,
\end{equation}
where $\partial$ denotes the differential $\frac{{\mathrm d}\,}{{\mathrm d} z}$ as well as the mould
derivation it induces by~\eqref{eqdefDd} and $\nabla$ is the mould derivation~\eqref{eqdefna}.
Here is the proof of~\eqref{eqpanamVa}:
$\widetilde\mathcal{V}_a^\bullet$ is defined on non-empty words by the first equation
in~\eqref{eqdefnewcV}, which can be written
\begin{equation} \label{eqdefnewcVmould}
(\partial+\nabla)\widetilde\mathcal{V}_a^\bullet = \widetilde J_a^\bullet \times \widetilde\mathcal{V}_a^\bullet,
\end{equation}
with $\widetilde J_a^\bullet \in \mathscr M^\bullet(\Omega,\mathbf{A})$ defined exactly as in~\eqref{eqdefJa}.
Let us apply the derivation~$\Delta_m$ to both sides of equation~\eqref{eqdefnewcVmould}, using
$\Delta_m(\partial+\nabla) = (\partial+\nabla-m)\Delta_m$ (consequence of~\eqref{eqcommutalien} and of
$[\nabla,\Delta_m]=0$) and $\Delta_m \widetilde J_a^\bullet = 0$ (consequence of the vanishing
of~$\Delta_m$ on entire functions):
$$
(\partial+\nabla-m)\Delta_m\widetilde\mathcal{V}_a^\bullet = \widetilde J_a^\bullet \times \Delta_m \widetilde\mathcal{V}_a^\bullet.
$$
Writing $\Delta_m \widetilde\mathcal{V}_a^\bullet$ as $\widetilde\mathcal{V}_a^\bullet \times \widetilde V_a^\bullet(m)$ and using
the fact that $\partial+\nabla$ is a derivation, we get
$$
\big( (\partial+\nabla)\widetilde\mathcal{V}_a^\bullet \big) \times \widetilde V_a^\bullet(m)
+ \widetilde\mathcal{V}_a^\bullet \times (\partial+\nabla-m)\widetilde V_a^\bullet(m)
= \widetilde J_a^\bullet \times \widetilde\mathcal{V}_a^\bullet \times \widetilde V_a^\bullet(m),
$$
whence $\widetilde\mathcal{V}_a^\bullet \times (\partial+\nabla-m)\widetilde V_a^\bullet(m)=0$ by a further use
of~\eqref{eqdefnewcVmould}. Since $\widetilde\mathcal{V}_a^\emptyset=1$ and $\mathscr M^\bullet(\Omega,\mathbf{A})$ is
an integral domain, this yields~\eqref{eqpanamVa}.
We conclude the proof by interpreting this relation in the convolutive model: we
already knew that $\widetilde V_a^\emptyset(m) = 0$; now, for any non-empty~$\text{\boldmath{$\om$}}$, we have
$$
\mathcal{B}\widetilde V_a^\text{\boldmath{$\om$}}(m) = V_a^\text{\boldmath{$\om$}}(m)\delta + \widehat V_a^\text{\boldmath{$\om$}}(m)({\zeta})
$$
with $V_a^\text{\boldmath{$\om$}}(m) \in \mathbb{C}$ and $\widehat V_a^\text{\boldmath{$\om$}}(m) \in \widehat H\big(\mathscr R(\mathbb{Z})\big)$ satisfying
$$
(\norm{\text{\boldmath{$\om$}}}-m)V_a^\text{\boldmath{$\om$}}(m) = 0,
\qquad (-{\zeta}+\norm{\text{\boldmath{$\om$}}}-m)\widehat V_a^\text{\boldmath{$\om$}}(m) = 0,
$$
whence $V_a^\text{\boldmath{$\om$}}(m)=0$ for $\norm{\text{\boldmath{$\om$}}}\neq m$ and
$\widehat V_a^\text{\boldmath{$\om$}}(m) = 0$ for all~$\text{\boldmath{$\om$}}$ (since both $\mathbb{C}$ and $\widehat
H\big(\mathscr R(\mathbb{Z})\big)\subset\mathbb{C}\{{\zeta}\}$ are integral domains).
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Formulas~\eqref{eqDemtcVb}, when evaluated in the convolutive model on $\text{\boldmath{$\om$}}\in\Omega^\bullet$, read
$$
\Delta_m \widehat\mathcal{V}_a^\emptyset = \Delta_m {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\est} = 0
$$
for $r(\text{\boldmath{$\om$}})=0$, which is obvious since $\widehat\mathcal{V}_a^\emptyset = {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\est} = \delta$.
For $r(\text{\boldmath{$\om$}})=1$, we get
\begin{equation*}
\Delta_m \widehat\mathcal{V}_a^{\omega_1} = - \Delta_m {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_1}} = V_a^{\omega_1}(m)\,\delta
\end{equation*}
and the explicit value of the coefficient is
\begin{equation} \label{eqVmrun}
\omega_1=m \enspace\Rightarrow\enspace
V_a^{\omega_1}(m) = -2 \pi{\mathrm i}\, \widehat a_m(m),
\qquad \omega_1\neq m \enspace\Rightarrow\enspace
V_a^{\omega_1}(m) = 0.
\end{equation}
This is a simple residuum computation for the meromorphic functions
$\widehat\mathcal{V}_a^{\omega_1}({\zeta}) = -{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_1}}({\zeta}) = -\frac{\widehat a_{\omega_1}({\zeta})}{{\zeta}-\omega_1}$
(observe that the value of~$\widehat a_m$ at~$m$, and hence that of $V_a^{m}(m)$, depends
transcendentally on the Taylor coefficients of~$\widehat a_m$ at the origin).
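To make the last observation explicit, write
$\widetilde a_m = \sum_{k\ge0} \alpha_k z^{-k-1}$, so that
$\widehat a_m({\zeta}) = \sum_{k\ge0} \alpha_k \frac{{\zeta}^k}{k!}$;
the first formula in~\eqref{eqVmrun} then reads
\begin{equation*}
V_a^{m}(m) = -2\pi{\mathrm i} \sum_{k\ge0} \alpha_k \frac{m^k}{k!},
\end{equation*}
an expression which involves all the Taylor coefficients at once
(for $\widetilde a_m = \alpha\, z^{-1}$ it reduces to $-2\pi{\mathrm i}\,\alpha$).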
For $r=r(\text{\boldmath{$\om$}})\ge2$, we get
\begin{multline}
\Delta_m \widehat\mathcal{V}_a^\text{\boldmath{$\om$}} = V_a^\text{\boldmath{$\om$}}(m)\,\delta + \sum_{i=1}^{r-1}
V_a^{\omega_{i+1},\dotsc,\omega_r}(m) \widehat\mathcal{V}_a^{\omega_1,\dotsc,\omega_i}, \\
\label{eqVaoresidu}
\Delta_m {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb} = - V_a^\text{\boldmath{$\om$}}(m)\,\delta - \sum_{i=1}^{r-1}
V_a^{\omega_1,\dotsc,\omega_i}(m) {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_{i+1},\dotsc,\om_r}}.
\end{multline}
The number $V_a^\text{\boldmath{$\om$}}(m)$ thus appears as the {\em residuum} of a certain simple
singularity, which is a combination of the singularities at~$m$ of certain
branches of $\widehat\mathcal{V}_a^\text{\boldmath{$\om$}}$ or ${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}$; on the other hand, the fact that the
{\em variation} of this singularity can be expressed as a linear combination of the
functions $\widehat\mathcal{V}_a^{\omega_1,\dotsc,\omega_i}$ or ${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_{i+1},\dotsc,\om_r}}$ is related to the very
origin of the name ``resurgent functions'':
the functions $\widehat\mathcal{V}_a^\text{\boldmath{$\om$}}({\zeta})$ or ${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}({\zeta})$, which were initially
defined for ${\zeta}$ close to the origin by \eqref{eqnewcViter}--\eqref{eqnewtcViter},
``resurrect'' in the variation of the singularities of their analytic
continuations.
An even more striking instance of this ``resurgence phenomenon'' is the Bridge
Equation, to be discussed in the case of the saddle-node problem in
Section~\ref{secBE} below.
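For instance, for $r=2$ and $\omega_2=m$, the first formula in~\eqref{eqVaoresidu}
combined with~\eqref{eqVmrun} gives
\begin{equation*}
\Delta_m \widehat\mathcal{V}_a^{\omega_1,m}
= V_a^{\omega_1,m}(m)\,\delta - 2\pi{\mathrm i}\,\widehat a_m(m)\,\widehat\mathcal{V}_a^{\omega_1}:
\end{equation*}
the function $\widehat\mathcal{V}_a^{\omega_1}$, initially defined near the origin,
reappears in the variation of the singularity at~$m$ of the branches
of~$\widehat\mathcal{V}_a^{\omega_1,m}$.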
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The computation of the number~$V_a^\text{\boldmath{$\om$}}(m)$ is not as easy for $r(\text{\boldmath{$\om$}})\ge2$ as it is in
the case $r=1$.
First observe that the vanishing of~$V_a^\text{\boldmath{$\om$}}(m)$ when $\norm{\text{\boldmath{$\om$}}}
= \htb\omega_1 = \wc\omega_r \neq m$ could be obtained as a
consequence of the analytic continuation of formulas
\eqref{eqnewcViter}--\eqref{eqnewtcViter}
(for instance, the singularities of the analytic continuation of~${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}$
can only be located at $\wc\omega_1,\dots,\wc\omega_r$ and, among them, only the one
at~$\wc\omega_r$ can have a non-zero residuum---{cf.}\ the argument at the end of the
proof of Lemma~\ref{defnewcVomb}).
For $\norm{\text{\boldmath{$\om$}}}=m$, using the notations of Definition~\ref{defaliender}, one
can write $V_a^\text{\boldmath{$\om$}}(m)$ as a combination of iterated integrals:
\eqref{eqiteratedinthcVAo} and~\eqref{eqVaoresidu} yield
\begin{multline} \label{eqformGaeps}
V_a^\text{\boldmath{$\om$}}(m) = -2\pi{\mathrm i}
\sum_{{\varepsilon}\in\{+,-\}^{|m|-1}} \frac{p_{\varepsilon}!q_{\varepsilon}!}{|m|!}
\int_{{\Gamma}_{\varepsilon}}
\widehat a_{\omega_r}(m-{\zeta}_{r-1}) \cdot \\[1ex]
\cdot \frac{\widehat a_{\omega_{r-1}}({\zeta}_{r-1}-{\zeta}_{r-2})}{{\zeta}_{r-1}-\wc\omega_{r-1}}
\dotsm\frac{\widehat a_{\omega_{2}}({\zeta}_{2}-{\zeta}_{1})}{{\zeta}_{2}-\wc\omega_{2}}
\frac{\widehat a_{\omega_{1}}({\zeta}_{1})}{{\zeta}_{1}-\wc\omega_{1}}
\,{\mathrm d}{\zeta}_{1}\dotsm{\mathrm d}{\zeta}_{r-1},
\end{multline}
where ${\Gamma}_{\varepsilon}$ consists of all $(r-1)$-tuples
$({\zeta}_1,\dotsc,{\zeta}_{r-1}) = \big( \gamma_{\varepsilon}(t_1),\dotsc,\gamma_{\varepsilon}(t_{r-1}) \big)$
with $t_1 < \dotsb < t_{r-1}$, for any parametrisation of the oriented
path~$\gamma_{\varepsilon}$ (which connects~$0$ and $m=\wc\omega_r$).
In fact, one can restrict oneself to the paths which follow the segment
$\left]0,m\right[$ circumventing the points of
$\{\wc\omega_1,\dotsc,\wc\omega_{r-1}\}\cap\left]0,m\right[ =
\{k_1,\dots,k_s\}$ to the right or to the
left, labelled by sequences ${\varepsilon}\in\{+,-\}^s$, with weights $p_{\varepsilon}!q_{\varepsilon}!/(s+1)!$.
The formula gets simpler when $\Omega\subset\mathbb{Z}^*$ and $\widetilde a_\eta \equiv
z^{-1}$ for each $\eta\in\Omega$, since each $\widehat a_\eta$ is then the
constant function with value~$1$:
\begin{equation*}
\norm{\text{\boldmath{$\om$}}}=m \enspace\Rightarrow\enspace
V_a^\text{\boldmath{$\om$}}(m) = -2\pi{\mathrm i}
\sum_{{\varepsilon}\in\{+,-\}^{|m|-1}} \frac{p_{\varepsilon}!q_{\varepsilon}!}{|m|!}
\int_{{\Gamma}_{\varepsilon}}
\frac{{\mathrm d}{\zeta}_{1}\dotsm{\mathrm d}{\zeta}_{r-1}}{%
({\zeta}_{1}-\wc\omega_{1})\dotsm({\zeta}_{r-1}-\wc\omega_{r-1})}.
\end{equation*}
In this last case, the numbers~$V_a^\text{\boldmath{$\om$}}(m)$ are connected with multiple
logarithms. They are studied under the name ``canonical hyperlogarithmic
mould'' in \cite{Eca81}, chap.~7, without the restriction $\Omega\subset\mathbb{Z}$ (which
we imposed here only to avoid having to define the larger algebra $\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}$;
also the condition $0\notin\Omega$ was imposed here only to simplify the discussion).
Observe that $V_a^\bullet(m)$ is always a primitive element of the graded
cocommutative Hopf algebra $\mathscr H^\bullet(\Omega,\mathbb{C})$ defined in Section~\ref{secAltSym}
(this is just a rephrasing of the shuffle relations encoded by the alternality
of this scalar mould).
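(At depth two, for instance, this alternality simply reads
$V_a^{\omega_1,\omega_2}(m) + V_a^{\omega_2,\omega_1}(m) = 0$
for all $\omega_1,\omega_2\in\Omega$.)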
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Formulas~\eqref{eqDemtcVb} can be iterated so as to express all the successive
alien derivatives of our resurgent functions~$\widetilde\mathcal{V}_a^\text{\boldmath{$\om$}}$ or~${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}$:
\begin{multline} \label{eqiterAlDercVa}
\Delta_{m_s}\dotsm \Delta_{m_1} \widetilde\mathcal{V}_a^\bullet = \widetilde\mathcal{V}_a^\bullet \times
V_a^\bullet(m_s) \times \dotsm \times V_a^\bullet(m_1), \\
\Delta_{m_s}\dotsm \Delta_{m_1} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} =
(-1)^s V_a^\bullet(m_1) \times \dotsm \times V_a^\bullet(m_s) \times {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul},
\end{multline}
for $s\ge1$ and $m_1,\dotsc,m_s\in\mathbb{Z}^*$.
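Indeed, each~$\Delta_{m_i}$ acts as a mould derivation which annihilates the
scalar-valued moulds $V_a^\bullet(m_j)$; for instance, for $s=2$,
\begin{equation*}
\Delta_{m_2}\Delta_{m_1} \widetilde\mathcal{V}_a^\bullet
= \Delta_{m_2}\big( \widetilde\mathcal{V}_a^\bullet \times V_a^\bullet(m_1) \big)
= \big( \Delta_{m_2}\widetilde\mathcal{V}_a^\bullet \big) \times V_a^\bullet(m_1)
= \widetilde\mathcal{V}_a^\bullet \times V_a^\bullet(m_2) \times V_a^\bullet(m_1).
\end{equation*}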
We can consider the collection of resurgent functions
$(\widetilde\mathcal{V}_a^\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ (or $({\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$) as closed
under alien derivation ({i.e.}\ all their alien derivatives can be expressed through
relations involving themselves and scalars);
it was already closed under multiplication (by symmetrality),
and even under ordinary differentiation, in view of~\eqref{eqdefnewcV}, if we
admit relations with coefficients in~$\mathbb{C}\{z^{-1}\}$ (but, after all, convergent
series can be considered as ``resurgent constants'': all alien derivations act
trivially on them).
This is why the $\widetilde\mathcal{V}_a^\text{\boldmath{$\om$}}$'s are called ``resurgent monomials'': they
behave nicely under elementary operations such as multiplication and alien derivations.
In fact, in Section~\ref{secResurMonom} below, we shall deduce from them another
family of resurgence monomials which behave even better under the action of
alien derivations (but the price to pay is that their ordinary derivatives are
not as simple as~\eqref{eqdefnewcV}).
Notice that the operator $\Delta_{m_s}\dotsm \Delta_{m_1}$ measures a combination of
singularities located at $m_1+\dotsb+m_s$.
For instance, the fact that $V_a^\bullet(m_s) \times \dotsm \times V_a^\bullet(m_1)$
vanishes on any word~$\text{\boldmath{$\om$}}$ such that $\norm{\text{\boldmath{$\om$}}}\neq m_1+\dotsb+m_s$ (easy
consequence of~\eqref{eqannulVm}) is consistent with the vanishing of the
residuum at any point $\neq\htb\omega_1$ of any branch of~$\widehat\mathcal{V}_a^\text{\boldmath{$\om$}}$
(consequence of the analytic continuation of~\eqref{eqnewcViter}).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let $\Omega\subset\mathbb{Z}$ and $a = (\widehat a_\eta)_{\eta\in\Omega}$ be a family of entire
functions as in Lemma~\ref{defnewcVomb}, thus with $\widehat a_0(0)=0$ if $0\in\Omega$.
We end this section with an illustration of mould calculus: we use it to derive
quadratic shuffle relations for the numbers
\begin{equation} \label{eqdefLao}
L_a^\text{\boldmath{$\om$}} = 2\pi{\mathrm i} \int_{{\Gamma}^+}
\widehat a_{\omega_r}(\norm{\text{\boldmath{$\om$}}}-{\zeta}_{r-1})
\tfrac{\widehat a_{\omega_{r-1}}({\zeta}_{r-1}-{\zeta}_{r-2})}{{\zeta}_{r-1}-\wc\omega_{r-1}}
\dotsm\tfrac{\widehat a_{\omega_{2}}({\zeta}_{2}-{\zeta}_{1})}{{\zeta}_{2}-\wc\omega_{2}}
\tfrac{\widehat a_{\omega_{1}}({\zeta}_{1})}{{\zeta}_{1}-\wc\omega_{1}}
\,{\mathrm d}{\zeta}_{1}\dotsm{\mathrm d}{\zeta}_{r-1},
\end{equation}
for $\text{\boldmath{$\om$}}\in\Omega^\bullet$ non-empty,
where ${\Gamma}^+ = {\Gamma}_{\varepsilon}$ with ${\varepsilon} = (+,\dotsc,+)\in\{+,-\}^{|m|-1}$ for
$m=\norm{\text{\boldmath{$\om$}}}$ (notation of~\eqref{eqformGaeps};
if $r=1$, then $L_a^{\omega_1} = 2\pi{\mathrm i} \widehat a_{\omega_1}(\omega_1)$).
This includes the case of the multiple logarithms
$$
L^\text{\boldmath{$\om$}} = 2\pi{\mathrm i}
\int_{{\Gamma}^+}
\frac{{\mathrm d}{\zeta}_{1}\dotsm{\mathrm d}{\zeta}_{r-1}}{%
({\zeta}_{1}-\wc\omega_{1})\dotsm({\zeta}_{r-1}-\wc\omega_{r-1})},
$$
with $\omega_1,\dotsc,\omega_r\in\Omega\subset\mathbb{Z}^*$
(obtained when $\widehat a_\eta({\zeta})\equiv 1$).\footnote{%
We recall that $\wc\omega_1=\omega_1, \wc\omega_2=\omega_1+\omega_2, \dotsc,
\wc\omega_{r-1} = \omega_1+\dotsb+\omega_{r-1}$
(thus $L^\text{\boldmath{$\om$}}$ depends on~$\omega_r$ only through~${\Gamma}^+$
which connects the origin and~$\wc\omega_r$).
}
It is convenient to use here the auxiliary operators~$\Delta^+_m$ of
$\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ defined by the formulas $\Delta^+_0=\operatorname{Id}$ and, for $m\in\mathbb{Z}^*$,
\begin{equation} \label{eqdefDemplus}
\Delta^+_m(c\,\delta + \widehat\varphi) = \operatorname{sing}_m(\operatorname{cont}_{\gamma^+}\widehat\varphi),
\end{equation}
where $\gamma^+ = \gamma_{\varepsilon}$ with ${\varepsilon} = (+,\dotsc,+)\in\{+,-\}^{|m|-1}$.
Thus
\begin{equation} \label{eqlienLaDep}
L_a^\text{\boldmath{$\om$}} = \text{coefficient of~$\delta$ in $\Delta^+_{\norm{\text{\boldmath{$\om$}}}}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}$.}
\end{equation}
We shall consider $L_a^\text{\boldmath{$\om$}}$ as the value at~$\text{\boldmath{$\om$}}$ of a scalar
mould~$L_a^\bullet$; we set $L_a^\emptyset=1$, so that~\eqref{eqlienLaDep} still holds
when $\text{\boldmath{$\om$}}=\emptyset$.
\begin{prop} \label{propLabsym}
The numbers $L_a^\text{\boldmath{$\om$}}$ satisfy the shuffle relations
$$
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}} L_a^\text{\boldmath{$\om$}} \;=\; \left|
\begin{aligned}
& L_a^{\text{\boldmath{$\om$}}^1} L_a^{\text{\boldmath{$\om$}}^2} \quad \text{if $\norm{\text{\boldmath{$\om$}}^1}\cdot\norm{\text{\boldmath{$\om$}}^2}\ge0$}\\[1ex]
& \quad 0 \qquad\quad \text{if not}
\end{aligned} \right.
$$
for any non-empty $\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2\in\Omega^\bullet$.
Equivalently, the scalar moulds $\pmL{\bullet}$ defined by
\begin{equation} \label{eqdefpmL}
\pL{\text{\boldmath{$\om$}}} = 1_{\{\norm{\text{\boldmath{$\om$}}}\ge0\}} L_a^\text{\boldmath{$\om$}}, \qquad
\mL{\text{\boldmath{$\om$}}} = 1_{\{\norm{\text{\boldmath{$\om$}}}\le0\}} L_a^\text{\boldmath{$\om$}}
\end{equation}
(for any $\text{\boldmath{$\om$}}\in\Omega^\bullet$, with the convention $\norm{\emptyset}=0$)
are symmetral.
\end{prop}
This can be rephrased by saying that $\pL\bullet$ and $\mL\bullet$ are
group-like elements of the graded cocommutative Hopf algebra $\mathscr H^\bullet(\Omega,\mathbb{C})$
defined in Section~\ref{secAltSym}.
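For instance, at depth $(1,1)$, with $\text{\boldmath{$\om$}}^1=(\eta_1)$ and
$\text{\boldmath{$\om$}}^2=(\eta_2)$, the shuffle relation of
Proposition~\ref{propLabsym} reads
\begin{equation*}
L_a^{\eta_1,\eta_2} + L_a^{\eta_2,\eta_1} \;=\; \left|
\begin{aligned}
& L_a^{\eta_1} L_a^{\eta_2} \quad \text{if $\eta_1\eta_2\ge0$}\\[1ex]
& \quad 0 \qquad\quad \text{if not,}
\end{aligned} \right.
\end{equation*}
to be contrasted with the unconditional relation
$V_a^{\eta_1,\eta_2}(m) + V_a^{\eta_2,\eta_1}(m) = 0$ expressing the alternality
of~$V_a^\bullet(m)$.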
The rest of this section is devoted to the proof of Proposition~\ref{propLabsym}.
We begin with a few facts about the operators~$\Delta^+_m$; unlike the alien derivations~$\Delta_m$,
these are not derivations, but they are related to them and satisfy modified Leibniz rules
analogous to~\eqref{eqmodifLeib}:
\begin{lemma} \label{lemrelDemDepm}
The operators~$\Delta^+_m$ defined in~\eqref{eqdefDemplus} are related to the alien
derivations~\eqref{eqdefDeomgen} by the following relations:
for any $m\in\mathbb{Z}^*$,
\begin{equation} \label{eqrelDemDepm}
\Delta^+_m = \sum \tfrac{1}{s!} \Delta_{m_s}\dotsm\Delta_{m_1}, \qquad
\Delta_m = \sum \tfrac{(-1)^{s-1}}{s} \Delta^+_{m_s}\dotsm\Delta^+_{m_1},
\end{equation}
with both sums taken over all $s\ge1$ and $m_1,\dotsc,m_s\in\mathbb{Z}^*$ of the same sign
as~$m$ such that $m_1+\dotsb+m_s=m$ (these are thus finite sums).
Moreover, for any $\widehat\chi_1,\widehat\chi_2\in\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ and $m\in\mathbb{Z}$,
\begin{equation} \label{eqLeibDep}
\Delta^+_m(\widehat\chi_1*\widehat\chi_2) = \sum \Delta^+_{m_1}\widehat\chi_1 * \Delta^+_{m_2}\widehat\chi_2
\end{equation}
with summation over all $m_1,m_2\in\mathbb{Z}$ of the same sign as~$m$ (but possibly
vanishing) such that $m_1+m_2=m$.
\end{lemma}
Let us denote by the same symbols the operators of~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ obtained
from the~$\Delta^+_m$'s by conjugacy by the formal Borel transform~$\mathcal{B}$, as we did for
the~$\Delta_m$'s.
If we consider the algebras $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}[[{\mathrm e}^{-z}]]$ and $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}[[{\mathrm e}^{z}]]$,
formula~\eqref{eqrelDemDepm} can be written
\begin{equation} \label{eqDepexpDe}
\sum_{m\ge0} {\mathrm e}^{-mz} \Delta^+_m = \exp\bigg( \sum_{m>0} {\mathrm e}^{-mz} \Delta_m \bigg),
\quad
\sum_{m\le0} {\mathrm e}^{-mz} \Delta^+_m = \exp\bigg( \sum_{m<0} {\mathrm e}^{-mz} \Delta_m \bigg).
\end{equation}
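Expanding the exponentials and collecting the coefficient of~${\mathrm e}^{-mz}$
(the factors ${\mathrm e}^{-m_iz}$ being treated as central) indeed recovers the
first formula in~\eqref{eqrelDemDepm}: for instance,
\begin{equation*}
\exp\bigg( \sum_{k>0} {\mathrm e}^{-kz} \Delta_k \bigg)
= \operatorname{Id} + \sum_{s\ge1} \tfrac{1}{s!} \sum_{m_1,\dotsc,m_s>0}
{\mathrm e}^{-(m_1+\dotsb+m_s)z}\, \Delta_{m_s}\dotsm\Delta_{m_1},
\end{equation*}
and taking the logarithm of both sides similarly yields the second formula.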
We do not give the proof of this lemma here; see {e.g.}\ \cite{kokyu}, Lemmas~4 and~5
(the coefficients $p_{\varepsilon}!q_{\varepsilon}!/|m|!$ in Definition~\ref{defaliender} were
chosen exactly so that~\eqref{eqrelDemDepm} hold; the standard properties of the
logarithm and exponential series then show that~\eqref{eqLeibDep} and
Proposition~\ref{propAlDer} are equivalent;
it is in fact easy to check first~\eqref{eqLeibDep} by deforming the contour of integration in
the integral giving~$\widehat\chi_1*\widehat\chi_2$, and then to deduce Proposition~\ref{propAlDer}).
\begin{lemma} \label{lemW}
For any $m\in\mathbb{Z}^*$, define a scalar mould $L_a^\bullet(m)$ by the formula
$$
L_a^\bullet(m) = \sum \tfrac{(-1)^{s}}{s!} V_a^\bullet(m_1)\times\dotsb\times V_a^\bullet(m_s),
$$
with summation over all $s\ge1$ and $m_1,\dotsc,m_s\in\mathbb{Z}^*$ of the same sign
as~$m$ such that $m_1+\dotsb+m_s=m$.
Define also $L_a^\bullet(0)=1^\bullet$. Then, for every $m\in\mathbb{Z}$,
\begin{enumerate}[(i)]
\item \label{itemfirstpty}
$\Delta^+_m {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} = L_a^\bullet(m) \times {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}$,
\item \label{itemsecpty}
$\tau\big( L_a^\bullet(m) \big) = \sum L_a^\bullet(m_1)\otimes L_a^\bullet(m_2)$, with
summation over all $m_1,m_2\in\mathbb{Z}$ of the same sign as~$m$ such that $m_1+m_2=m$,
\item \label{itemthirdpty}
$m=\norm{\text{\boldmath{$\om$}}} \;\Rightarrow\; L_a^\text{\boldmath{$\om$}}(m) = L_a^\text{\boldmath{$\om$}}$, \quad
$m\neq\norm{\text{\boldmath{$\om$}}} \;\Rightarrow\; L_a^\text{\boldmath{$\om$}}(m) = 0$
\enspace (for any $\text{\boldmath{$\om$}}\in\Omega^\bullet$, with the convention $\norm{\emptyset}=0$).
\end{enumerate}
\end{lemma}
\begin{proof}
The first property follows from~\eqref{eqiterAlDercVa} and~\eqref{eqrelDemDepm}.
For the second, we write the symmetrality of~${\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}$ and~$\widehat\mathcal{V}_a^\bullet$ as
identities in $\mathscr M^{\bul\bul}(\Omega,\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z})$:
$$
\tau({\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}) = {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \otimes {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}, \qquad
\tau(\widehat\mathcal{V}_a^\bullet) = \widehat\mathcal{V}_a^\bullet \otimes \widehat\mathcal{V}_a^\bullet.
$$
The operator~$\Delta^+_m$ induces operators acting on moulds and dimoulds which clearly satisfy
$\Delta^+_m\circ\tau({\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}) = \tau(\Delta^+_m{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul})$, and relation~\eqref{eqLeibDep}
implies
$$
\tau(\Delta^+_m{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}) = \sum_{\substack{m=m_1+m_2 \\ m_i m\ge0}}
\Delta^+_{m_1}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \otimes \Delta^+_{m_2}{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul},
$$
whence the result follows since
$\tau\big(L_a^\bullet(m)\big) = \tau(\Delta^+_m{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}) \times \tau(\widehat\mathcal{V}_a^\bullet)$
by the homomorphism property of~$\tau$ applied to~(\ref{itemfirstpty}).
The second part of the third property is obvious when $m=0$ and follows
from~\eqref{eqannulVm} when $m\neq0$,
because $\norm{\text{\boldmath{$\om$}}}\neq m_1+\dotsb+m_s$ implies that
$V_a^\bullet(m_1)\times\dotsb\times V_a^\bullet(m_s)$ vanishes on~$\text{\boldmath{$\om$}}$
(even if $\text{\boldmath{$\om$}}=\emptyset$).
The first part of the third property follows from~\eqref{eqlienLaDep}, since
property~(\ref{itemfirstpty}) yields
$$
\Delta^+_m{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb} = L_a^\text{\boldmath{$\om$}}(m) \,\delta +
\sum_{i=1}^{r-1}
L_a^{\omega_1,\dotsc,\omega_i}(m) {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^{\om_{i+1},\dotsc,\om_r}}
$$
if $r=r(\text{\boldmath{$\om$}})\ge1$ and $\Delta^+_m{\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\est} = L_a^\emptyset(m)\delta$ if $\text{\boldmath{$\om$}}=\emptyset$.
\end{proof}
\medskip
\noindent
\emph{Proof of Proposition~\ref{propLabsym}.}
We have $L_a^\emptyset=1$.
Let $\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2\in\Omega^\bullet$ be non-empty. Property~(\ref{itemsecpty}) with
$m=\norm{\text{\boldmath{$\om$}}^1}+\norm{\text{\boldmath{$\om$}}^2}$ yields
$$
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\om$}}^1}{\text{\boldmath{$\om$}}^2}{\text{\boldmath{$\om$}}} L_a^\text{\boldmath{$\om$}}(m) =
\sum_{\substack{m=m_1+m_2 \\ m_i m\ge0}} L_a^{\text{\boldmath{$\om$}}^1}(m_1) L_a^{\text{\boldmath{$\om$}}^2}(m_2).
$$
According to Property~(\ref{itemthirdpty}), the {left-hand side}\ is
$\tau(L_a^\bullet)^{\text{\boldmath{$\om$}}^1,\text{\boldmath{$\om$}}^2}$ (because any nonzero term in it has $\norm{\text{\boldmath{$\om$}}}=m$).
Among the $|m|+1$ terms of the {right-hand side}, at most one may be nonzero:
if $\norm{\text{\boldmath{$\om$}}^1}$ and $\norm{\text{\boldmath{$\om$}}^2}$ have the same sign, then the term
corresponding to $m_1=\norm{\text{\boldmath{$\om$}}^1}$ is $L_a^{\text{\boldmath{$\om$}}^1} L_a^{\text{\boldmath{$\om$}}^2}$ while all
the others vanish; but in the opposite case, this term does not belong to the
summation and one gets~$0$ as {right-hand side}.
This is the desired shuffle relation; we leave it to the reader to interpret it
in terms of symmetrality for the moulds~$\pmL{\bullet}$ by distinguishing the four
possible cases: $\norm{\text{\boldmath{$\om$}}^1}\cdot\norm{\text{\boldmath{$\om$}}^2} \ge0$ or~$<0$, and
$\norm{\text{\boldmath{$\om$}}^1}+\norm{\text{\boldmath{$\om$}}^2} \ge0$ or~$<0$.
\qed
\medskip
In fact, we can write
\begin{gather} \label{eqrelLaV}
\pL{\bullet} = \sum_{m\ge0} L_a^\bullet(m) = \exp\big(-\pV{\bullet}\big),
\qquad \pV{\bullet} = \sum_{m>0} V_a^\bullet(m) \\
\mL{\bullet} = \sum_{m\le0} L_a^\bullet(m) = \exp\big( -\mV{\bullet} \big),
\qquad \mV{\bullet} = \sum_{m<0} V_a^\bullet(m),
\end{gather}
with well-defined alternal moulds~$\pmV{\bullet}$
(and using $\exp$ as a short-hand for $E_1$---see~\eqref{eqdefEt}),
since Lemma~\ref{lemW}~(\ref{itemthirdpty}) and property~\eqref{eqannulVm}
imply that, when evaluated on a given word~$\text{\boldmath{$\om$}}$, these formulas involve only
finitely many terms; one could thus have invoked
Proposition~\ref{propstructaltsym} to deduce the symmetrality of~$\pmL\bullet$.
\section{The Bridge Equation for the saddle-node} \label{secBE}
In this section, returning to the saddle-node problem, we shall explain why the
formal series $\widetilde\varphi_n(z) = \varphi_n(-1/z)$ and $\widetilde\psi_n(z) = \psi_n(-1/z)$ of
Theorem~\ref{thmResur}, which were proved to belong to~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$, are in fact
simple resurgent functions.
Moreover, we shall express their alien derivatives in terms of
themselves and of the numbers $V_a^\bullet(m)$ of Proposition~\ref{propVam}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We recall the hypotheses and the notations for the saddle-node:
$$ X = x^2 \frac{\partial\,}{\partial x} + A(x,y) \frac{\partial\,}{\partial y} $$
with $A(x,y) = y + \sum_{\eta\in\Omega} a_\eta(x)y^{\eta+1} \in \mathbb{C}\{x,y\}$,
where $\Omega = \ao \eta\in\mathbb{Z} \mid \eta\ge-1\af$,
$\widetilde a_\eta(z) = a_\eta(-1/z) \in z^{-1}\mathbb{C}\{z^{-1}\}$ and
$\widetilde a_0(z) \in z^{-2}\mathbb{C}\{z^{-1}\}$.
We also recall that $\eta\in\Omega \mapsto B_\eta = y^{\eta+1} \frac{\partial\,}{\partial y}$
gives rise to a comould~$\mathbf{B}_\bullet$ such that
$\mathbf{B}_\text{\boldmath{$\om$}} y = \beta_\text{\boldmath{$\om$}} y^{\norm{\text{\boldmath{$\om$}}}+1}$, where the numbers $\beta_\text{\boldmath{$\om$}}$,
$\text{\boldmath{$\om$}}\in\Omega^\bullet$, satisfy Lemma~\ref{lemexo} (we define $\beta_\emptyset=1$ and
$\norm{\emptyset}=0$).
We set $a = (\mathcal{B}\widetilde a_\eta)_{\eta\in\Omega}$, so as to be able to make use of the
constants $V_a^\text{\boldmath{$\om$}}(m)$, $(m,\text{\boldmath{$\om$}})\in\mathbb{Z}^*\times\Omega^\bullet$ defined in
Proposition~\ref{propVam} and more explicitly by formulas~\eqref{eqVmrun}
and~\eqref{eqformGaeps}.
Later in this section we shall prove
\begin{prop} \label{propCm}
The family of complex numbers
$\big( \beta_\text{\boldmath{$\om$}} V_a^\text{\boldmath{$\om$}}(m) \big)_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m}$
is summable for each $m\in\mathbb{Z}^*$. Let
\begin{equation} \label{eqdefCm}
C_m = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m} \beta_\text{\boldmath{$\om$}} V_a^\text{\boldmath{$\om$}}(m),
\qquad m\in\mathbb{Z}^*.
\end{equation}
Then $C_m=0$ for $m\le-2$.
\end{prop}
We call \emph{\'Ecalle's invariants} of~$X$ the complex numbers
$C_{-1},C_1,C_2,\dotsc,C_m,\dotsc$ because of their role in the Bridge Equation
(Theorem~\ref{thmBE} below) and in the classification problem (Theorem~\ref{thmCmInv} and
Section~\ref{secMR} below).
The formal transformations $\th(x,y) = \big(x,\varphi(x,y)\big)$ and
$\th^{-1}(x,y) = \big(x,\psi(x,y)\big)$ which conjugate~$X$ to its normal form
$X_0 = x^2 \frac{\partial\,}{\partial x} + y \frac{\partial\,}{\partial y}$ were constructed in the
first part of this article through mould-comould expansions for the
corresponding substitution operators~$\Theta$ and~$\Theta^{-1}$.
Passing to the resurgence variable $z=-1/x$, we set
$$
\widetilde\varphi(z,y) = \varphi(-1/z,y) = y + \sum_{n\ge0} \widetilde\varphi_n(z)y^n, \quad
\widetilde\psi(z,y) = \psi(-1/z,y) = y + \sum_{n\ge0} \widetilde\psi_n(z)y^n,
$$
where the coefficients $\widetilde\varphi_n(z)$ and~$\widetilde\psi_n(z)$ are known to belong to
the algebra~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$ of resurgent functions, by Theorem~\ref{thmResur}.
We also introduce the substitution operator
\begin{equation} \label{defThti}
\widetilde\Theta \colon
\tilde f(z,y) \mapsto \tilde f\big(z,\widetilde\varphi(z,y)\big)
\end{equation}
(a priori defined in $\mathbb{C}[[z^{-1},y]]$).
Later in this section, we shall prove
\begin{thm} \label{thmBE}
The formal series $\widetilde\varphi_n(z)$ and~$\widetilde\psi_n(z)$ are simple resurgent
functions, thus
$\widetilde\varphi(z,y)$ and~$\widetilde\psi(z,y)$ belong in fact to $\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}[[y]]$.
Moreover, for any $m\in\mathbb{Z}^*$, the formal series of~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}[[y]]$
$$
\Delta_m \widetilde\varphi := \sum_{n\ge0} (\Delta_m\widetilde\varphi_n) y^n, \qquad
\Delta_m \widetilde\psi := \sum_{n\ge0} (\Delta_m\widetilde\psi_n) y^n
$$
are given by the formulas
\begin{equation} \label{eqBEtigA}
\Delta_m \widetilde\varphi = C_m y^{m+1} \frac{\partial\widetilde\varphi}{\partial y},
\qquad
\Delta_m \widetilde\psi = - C_m \widetilde\psi^{m+1},
\qquad m\in\mathbb{Z}^*.
\end{equation}
\end{thm}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The two equations in~\eqref{eqBEtigA} are equivalent forms of the so-called {\em
Bridge Equation}, here expressed in~$\mathbf{A}[[y]]$ with $\mathbf{A} = \raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
On the one hand, the {left-hand side} s represent the action of the alien derivation~$\Delta_m$
of~$\mathbf{A}[[y]]$ (we denote by the same symbol the alien derivation~$\Delta_m$
of~$\mathbf{A}$ and the operator it induces in $\mathbf{A}[[y]]$ by acting separately on each
coefficient).
On the other hand, both {right-hand side} s can be expressed with the help of the ordinary
differential operator
$$
\mathscr C(m) = C_m y^{m+1}\frac{\partial\,}{\partial y},
$$
yielding
\begin{align}
\Delta_m \widetilde\varphi &= \mathscr C(m)\widetilde\varphi = \mathscr C(m)\widetilde\Theta y, \\[1ex]
\label{eqDempsiThiigCm}
\Delta_m \widetilde\psi &= -\widetilde\Theta^{-1}\mathscr C(m)y.
\end{align}
See the end of this section for more symmetric formulations of the Bridge
Equation, which involve only the operators $\widetilde\Theta$ or~$\widetilde\Theta^{-1}$ and~$\Delta_m$ for
the {left-hand side} s, and $\mathscr C(m)$ for the {right-hand side} s.
The name ``Bridge Equation'' refers to the link thus established between alien and ordinary
differential calculus when dealing with the solutions~$\widetilde\varphi$ and~$\widetilde\psi$ of
our formal normalisation problem (or with the operator~$\Theta$ solution of the
conjugacy equation~\eqref{eqconjugOp}).
This is a very general phenomenon, in which one sees the advantage of measuring
the singularities in the Borel plane through {\em derivations}:
we are dealing with the solutions of non-linear equations ({e.g.}\ $(\partial + y
\frac{\partial\,}{\partial y})\widetilde\varphi(z,y) = A(-1/z,\widetilde\varphi(z,y))$ in
$\mathbb{C}[[z^{-1},y]]$),
and their alien derivatives must satisfy the linearised versions of these
equations; it is thus natural that these alien derivatives can be expressed in
terms of the ordinary derivatives of the solutions.
The above argument could be used to derive the form of
equation~\eqref{eqBEtigA}\footnote{
Compare the linear equations
$L \partial_y\widetilde\varphi = 0$ and
$(L-m-1) \Delta_m\widetilde\varphi = 0$
where
$$
L = \widetilde X_0 + \tilde\lambda(z,y), \qquad
\tilde\lambda(z,y) = 1 - \partial_y A(-1/z,\widetilde\varphi(z,y)), \quad
\widetilde X_0 = \partial + y\frac{\partial\,}{\partial y}.
$$
The second equation follows from~\eqref{eqcommutalien} for the computation of
$\Delta_m(\partial + y\frac{\partial\,}{\partial y})\widetilde\varphi(z,y)$,
and from the relation
$\Delta_m A(-1/z,\widetilde\varphi(z,y)) = \big(\partial_y A(-1/z,\widetilde\varphi(z,y))\big)\Delta_m\widetilde\varphi(z,y)$
deduced from Proposition~\ref{propResurzCVy} below
(indeed, $A(-1/z,y)\in\mathbb{C}\{z^{-1},y\} \subset \mathbf{A}\{y\}$).
Since $\partial_y\widetilde\varphi = 1 + \mathscr O(z^{-1},y)$ is invertible, we can set
$\widetilde\chi = (\partial_y\widetilde\varphi)^{-1} \Delta_m\widetilde\varphi$;
the above linear equations imply that
$\widetilde\chi$ is annihilated by $\widetilde X_0-(m+1)$, thus proportional
to~$y^{m+1}$: there exists $c_m\in\mathbb{C}$ such that $\Delta_m\widetilde\varphi = c_m y^{m+1}\partial_y\widetilde\varphi(z,y)$.
The relation $\Delta_m\widetilde\psi = -c_m \widetilde\psi^{m+1}$ follows by the alien chain rule:
$y = \widetilde\varphi\big(z,\widetilde\psi(z,y)\big) = \widetilde\Theta^{-1}\widetilde\varphi$
implies $(\Delta_m\widetilde\varphi)(z,\widetilde\psi) +
\partial_y\widetilde\varphi(z,\widetilde\psi)\Delta_m\widetilde\psi = 0$
by Proposition~\ref{propResurzCVy} below (using $\widetilde\varphi\in\mathbf{A}\{y\}$).
},
however, in the proof below, we prefer to use the
explicit mould representations involving~$\widetilde\mathcal{V}^\bullet$ and~${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\bul}$ so as to
obtain formulas~\eqref{eqdefCm} for the coefficients~$C_m$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Theorem~\ref{thmBE} could also have been formulated in terms of the formal
integral defined by~\eqref{eqdefYzu}:
$\widetilde Y(z,u) = \widetilde\varphi(z,u\,{\mathrm e}^z) \in \raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}[[u\,{\mathrm e}^z]]$
and
$$
\dDem \widetilde Y = C_m u^{m+1} \frac{\partial\widetilde Y}{\partial u},
\qquad m\in\mathbb{Z}^*,
$$
where $\dDem = {\mathrm e}^{-mz} \Delta_m$ is the {\em dotted alien derivation of index~$m$}, which already
appeared in formula~\eqref{eqDepexpDe}.
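As a consistency check (a purely formal computation, in which $\dDem$ acts
coefficient-wise on the powers of~$u\,{\mathrm e}^z$), the first equation
of~\eqref{eqBEtigA} yields
\begin{equation*}
\dDem \widetilde Y(z,u)
= {\mathrm e}^{-mz}\, C_m\, (u\,{\mathrm e}^z)^{m+1}\,
\frac{\partial\widetilde\varphi}{\partial y}(z,u\,{\mathrm e}^z)
= C_m u^{m+1}\, {\mathrm e}^{z}\,
\frac{\partial\widetilde\varphi}{\partial y}(z,u\,{\mathrm e}^z)
= C_m u^{m+1} \frac{\partial\widetilde Y}{\partial u},
\end{equation*}
since $\frac{\partial\widetilde Y}{\partial u}(z,u)
= {\mathrm e}^z\, \frac{\partial\widetilde\varphi}{\partial y}(z,u\,{\mathrm e}^z)$.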
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The Bridge Equations~\eqref{eqBEtigA} are a compact writing of infinitely many
``resurgence equations'' for the series $\Delta_m\widetilde\varphi_n$ or~$\Delta_m\widetilde\psi_n$,
obtained by expanding them in powers of~$y$.
For instance, setting
\begin{equation} \label{eqdefPhiph}
\widetilde\Phi_n = \left|
\begin{aligned}
1 + \widetilde\varphi_1 & \qquad \text{if $n=1$} \\
\widetilde\varphi_n\enspace\; & \qquad \text{if $n\neq1$,}
\end{aligned} \right.
\end{equation}
so that $\widetilde\varphi(z,y) = \sum_{n\ge0} \widetilde\Phi_n(z) y^n$, we get
$$
\Delta_m\widetilde\Phi_n = \left|
\begin{aligned}
(n-m)C_m\widetilde\Phi_{n-m} & \qquad \text{if $-1\le m \le n-1$} \\
0\qquad\qquad & \qquad \text{if $m\le-2$ or $m\ge n$.}
\end{aligned} \right.
$$
Thus \begin{itemize}
\item $\Delta_m\widetilde\varphi_0=0$ for $m\neq-1$,
while $\Delta_{-1}\widetilde\varphi_0 = C_{-1}(1+\widetilde\varphi_1)$;
\item $\Delta_m\widetilde\varphi_1=0$ for $m\neq-1$, while $\Delta_{-1}\widetilde\varphi_1 = 2 C_{-1} \widetilde\varphi_2$;
\item $\Delta_m\widetilde\varphi_2=0$ for $m\notin\{-1,1\}$, while $\Delta_{-1}\widetilde\varphi_2 = 3 C_{-1} \widetilde\varphi_3$
and $\Delta_1\widetilde\varphi_2 = C_1 (1+\widetilde\varphi_1)$;
\item $\Delta_m\widetilde\varphi_3=0$ for $m\notin\{-1,1,2\}$, while\ldots
\end{itemize}
$\quad\qquad\vdots$
\medskip
\noindent
Similarly, with
$$
\widetilde\Psi_n = \left|
\begin{aligned}
1 + \widetilde\psi_1 & \qquad \text{if $n=1$} \\
\widetilde\psi_n\enspace\; & \qquad \text{if $n\neq1$,}
\end{aligned} \right.
$$
we have
$ \sum (\Delta_m\widetilde\Psi_n) y^n = - C_m \big( \sum \widetilde\Psi_n y^n \big)^{m+1} $,
which means that $\Delta_m\widetilde\Psi_n = 0$ for all $n\in\mathbb{N}$ when $m\le -2$,
$$
\Delta_{-1}\widetilde\Psi_n = \left|
\begin{aligned}
-C_{-1} & \qquad \text{if $n=0$} \\
0 \quad & \qquad \text{if $n\neq0$}
\end{aligned} \right.
$$
and
$\displaystyle \Delta_m\widetilde\Psi_n = - C_m \sum_{n_1+\dotsc+n_{m+1}=n}
\widetilde\Psi_{n_1} \dotsm \widetilde\Psi_{n_{m+1}}$ for any $n\in\mathbb{N}$ and $m\ge1$.
In particular, $C_m$ is the constant term in $\Delta_m\widetilde\varphi_{m+1}$ or in
$-\Delta_m\widetilde\psi_{m+1}$.
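(For instance, for $m=1$ and $n=2$, the last formula gives
$\Delta_1\widetilde\Psi_2
= -C_1\big( \widetilde\Psi_1^2 + 2\widetilde\Psi_0\widetilde\Psi_2 \big)
= -C_1\big( (1+\widetilde\psi_1)^2 + 2\widetilde\psi_0\widetilde\psi_2 \big)$,
whose constant term is indeed~$-C_1$, the series~$\widetilde\psi_n$ having no
constant term.)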
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Proof of Proposition~\ref{propCm} and Theorem~\ref{thmBE}.}
We have a Fr\'echet space structure on $\wHR\mathbb{Z}$, with
seminorms $\norm{\,.\,}_K$ indexed by the compact subsets of~$\mathscr R(\mathbb{Z})$:
$$
\norm{\widehat\varphi}_K = \max_{{\zeta}\in K} \left|\widehat\varphi({\zeta})\right|,
\qquad \widehat\varphi \in \wHR\mathbb{Z}, \quad K\in\mathscr K.
$$
We thus naturally get Fr\'echet space structures on $\widehat{\boldsymbol{R}}_\mathbb{Z} = \mathbb{C}\,\delta\oplus\wHR\mathbb{Z}$,
by defining
$\norm{c\,\delta + \widehat\varphi}_K := \max\big( |c|, \norm{\widehat\varphi}_K \big)$,
and on $\widetilde{\boldsymbol{R}}_\mathbb{Z} = \mathcal{B}^{-1} \widehat{\boldsymbol{R}}_\mathbb{Z}$, with
$\norm{\widetilde\chi}_K := \norm{\mathcal{B}\widetilde\chi}_K$ for $\widetilde\chi = c+\widetilde\varphi \in \widetilde{\boldsymbol{R}}_\mathbb{Z}$.
The space $\mathbf{A}=\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ of simple resurgent functions is a closed subspace
of~$\widetilde{\boldsymbol{R}}_\mathbb{Z}$ and the $\Delta_m$ are continuous operators.
Indeed, the map $\widehat\varphi \mapsto \operatorname{sing}_m(\operatorname{cont}_\gamma\widehat\varphi)$ is continuous on
$\widehat\mathbf{A}=\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ because the variation can be expressed as a difference of branches
and the residuum as a Cauchy integral.
Consider now the formal series
$\widetilde\mathcal{V}^\text{\boldmath{$\om$}}(z) = \widetilde\mathcal{V}_a^\text{\boldmath{$\om$}}(z), {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}(z) = {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\omb}(z) \in \mathbf{A}$,
and their formal Borel transforms, which belong to $\widehat\mathbf{A}$.
The end of the proof of Theorem~\ref{thmResur} shows that
$(\beta_\text{\boldmath{$\om$}} \widehat\mathcal{V}^\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=n-1}$ and
$(\beta_\text{\boldmath{$\om$}} {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb})_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=n-1}$
are summable families of~$\widehat\mathbf{A}$ for each $n\in\mathbb{N}$;
indeed, for any compact subset~$K$ of~$\mathscr R(\mathbb{Z})$, there exist $\rho$, $N$ and~$L$
such that any point of~$K$ is the endpoint of a $(\rho,N,n-\mathbb{N}^*)$-adapted path
of length~$\le L$ and also the endpoint of a $(\rho,N,\mathbb{N})$-adapted path
of length~$\le L$, and one can use \eqref{ineqbecV}, \eqref{ineqCVUdeux}
and~\eqref{ineqCVUtrois}.
Hence the sums $\widehat\varphi_n$ and~$\widehat\psi_n$ of these families belong to~$\widehat\mathbf{A}$.
Equivalently, the formal series $\widetilde\varphi_n$ and~$\widetilde\psi_n$ appear as sums of
summable families of~$\mathbf{A}$:
$$
\widetilde\varphi_n = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=n-1}
\beta_\text{\boldmath{$\om$}} \widetilde\mathcal{V}^\text{\boldmath{$\om$}}
\quad \text{and} \quad
\widetilde\psi_n = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=n-1}
\beta_\text{\boldmath{$\om$}} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}
\quad \text{in $\mathbf{A}$,}
$$
they are thus simple resurgent functions themselves.
To end the proof of Theorem~\ref{thmBE}, we thus only have to study the alien
derivatives $\Delta_m\widetilde\varphi_n$ and $\Delta_m\widetilde\psi_n$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{End of the proof of Proposition~\ref{propCm}:}
Let $m\in\mathbb{Z}^*$.
In view of Lemma~\ref{lemexo}, we can suppose $m\ge-1$.
By continuity of~$\Delta_m$,
$(\beta_\text{\boldmath{$\om$}} \Delta_m{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb})_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm\text{\boldmath{$\om$}}=m}$ is a summable family
of~$\mathbf{A}$, of sum~$\Delta_m\widetilde\psi_{m+1}$.
In particular, the family obtained by extracting the constant terms is summable,
but the constant term in~$\Delta_m{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}$ is $-V_a^\text{\boldmath{$\om$}}(m)$
by~\eqref{eqDemtcVb}.
Hence we get the summability of
$$
C_m = \sum_{\norm{\text{\boldmath{$\om$}}}=m} \beta_\text{\boldmath{$\om$}} V_a^\text{\boldmath{$\om$}}(m)
\quad \text{in $\mathbb{C}$,}
$$
which is the constant term in $-\Delta_m\widetilde\psi_{m+1}$.
\qed
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
As vector spaces, $\mathbb{C}[[y]]$ and $\mathbf{A}[[y]]$ can be identified with~$\mathbb{C}^\mathbb{N}$
and~$\mathbf{A}^\mathbb{N}$ and are thus also Fr\'echet spaces if we put the product topology on them.
As an intermediary step in the proof of Theorem~\ref{thmBE}, let us show
\begin{lemma} \label{lemintermed}
Let $m\in\mathbb{Z}^*$ and
$$
\mathscr C(m) = C_m y^{m+1}\frac{\partial\,}{\partial y}.
$$
Then, for each $n_0\in\mathbb{N}$, the families
$({\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} \mathbf{B}_\text{\boldmath{$\om$}} y^{n_0})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
and $(V_a^\text{\boldmath{$\om$}}(m) \mathbf{B}_\text{\boldmath{$\om$}} y^{n_0})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
are summable in~$\mathbf{A}[[y]]$, of sums $\widetilde\Theta^{-1} y^{n_0}$ and $\mathscr C(m) y^{n_0}$.
\end{lemma}
\begin{proof}
Our aim is to show that $({\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
and $(V_a^\text{\boldmath{$\om$}}(m) \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
are pointwise summable families of operators of~$\mathbf{A}[[y]]$; in view of the above,
since $\mathbf{B}_\text{\boldmath{$\om$}} y = \beta_\text{\boldmath{$\om$}} y^{\norm{\text{\boldmath{$\om$}}}+1}$, we can already evaluate these operators
on~$y$ and write
\begin{equation} \label{eqdeterminC}
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} V_a^\text{\boldmath{$\om$}}(m) \mathbf{B}_\text{\boldmath{$\om$}} y =
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m} V_a^\text{\boldmath{$\om$}}(m) \mathbf{B}_\text{\boldmath{$\om$}} y
= C_m y^{m+1}
\quad \text{in $\mathbb{C}[[y]]$}
\end{equation}
(the first identity stems from~\eqref{eqannulVm}) and
$$
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} \mathbf{B}_\text{\boldmath{$\om$}} y =
y + \sum_{n\ge0} \widetilde\psi_n(z) y^n = \widetilde\Theta^{-1} y
\quad \text{in $\mathbf{A}[[y]]$.}
$$
Although similar to formula~\eqref{eqpsiThii}, the last equation is stronger in
that it gives the sum of a summable family of~$\mathbf{A}[[y]]$ rather
than of a formally summable family of~$\mathbb{C}[[z^{-1},y]]$.
When evaluating the operators~$\mathbf{B}_\text{\boldmath{$\om$}}$ on~$y^{n_0}$, we get
coefficients~$\beta_{\text{\boldmath{$\om$}},n_0}$ which generalise the~$\beta_\text{\boldmath{$\om$}}$'s:
$$
\mathbf{B}_\text{\boldmath{$\om$}} y^{n_0} = \beta_{\text{\boldmath{$\om$}},n_0} y^{n_0+\norm{\text{\boldmath{$\om$}}}}
$$
with $\beta_{\emptyset,n_0} = 1$,
$\beta_{(\omega_1),n_0} = n_0$,
$\beta_{\text{\boldmath{$\om$}},n_0} = n_0 (n_0 + \wc\omega_1) (n_0 + \wc\omega_2) \dotsm (n_0 + \wc\omega_{r-1})$
for $r\ge2$.
Notice that $\beta_{\text{\boldmath{$\om$}},n_0}\neq0 \;\Rightarrow\; \norm{\text{\boldmath{$\om$}}}\ge -n_0$.
A suitable modification of the proof of Theorem~\ref{thmResur} shows that the
families $(\beta_{\text{\boldmath{$\om$}},n_0} {\widehat\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb})_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m}$
are summable in~$\widehat\mathbf{A}$ for all $m\ge-n_0$
(replace the functions ${S\hspace{-.65em}\raisebox{.35ex}{--}\hspace{0.07em}}_m({\zeta}) = \tfrac{m+1}{{\zeta}-m}$ of Lemma~\ref{lemtcSm} by
$\tfrac{m+n_0}{{\zeta}-m}$, for which the bounds are
only slightly worse than in Lemma~\ref{leminiSmtSm}).
This yields the first part of the lemma, since we can now write
$$
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} \mathbf{B}_\text{\boldmath{$\om$}} y^{n_0} =
\sum_{m\ge -n_0} \bigg( \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\, \norm{\text{\boldmath{$\om$}}}=m}
\beta_{\text{\boldmath{$\om$}},n_0} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} \bigg) y^{n_0+m}
= \widetilde\Theta^{-1} y^{n_0}
\quad \text{in $\mathbf{A}[[y]]$.}
$$
By continuity of~$\Delta_m$, we also get the summability of
$(\beta_{\text{\boldmath{$\om$}},n_0} \Delta_m{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb})_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m}$
in~$\mathbf{A}$,
hence of the family $(-\beta_{\text{\boldmath{$\om$}},n_0}
V_a^\text{\boldmath{$\om$}}(m))_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m}$ obtained by extracting the
constant terms.
Let
$$
C_{m,n_0} = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m} \beta_{\text{\boldmath{$\om$}},n_0} V_a^\text{\boldmath{$\om$}}(m)
\quad \text{in $\mathbb{C}$}.
$$
Thus $(V_a^\text{\boldmath{$\om$}}(m) \mathbf{B}_\text{\boldmath{$\om$}} y^{n_0})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ is summable in
$\mathbf{A}[[y]]$, with sum $C_{m,n_0} y^{n_0+m}$.
Let $\Omega^{k,R}$ ($k,R\in\mathbb{N}^*$) denote an exhaustion of~$\Omega^\bullet$ by finite
sets as in the proof of Proposition~\ref{propmultiplimould}.
We conclude by showing that $C_{m,n_0} y^{n_0+m} = \mathscr C(m) y^{n_0}$.
This follows from the fact that the operators
$$
\mathscr C^{k,R}(m) = \sum_{\text{\boldmath{$\om$}}\in\Omega^{k,R}} V_a^\text{\boldmath{$\om$}}(m) \mathbf{B}_\text{\boldmath{$\om$}}
$$
are all derivations of $\mathbb{C}[[y]]$ because of the alternality of~$V_a^\bullet(m)$ (the
Leibniz rule is easily checked with the help of the cosymmetrality
of~$\mathbf{B}_\bullet$), thus their pointwise limit is also a derivation, which cannot be
anything but~$\mathscr C(m)$ by virtue of~\eqref{eqdeterminC}.
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{End of the proof of Theorem~\ref{thmBE}:}
In $\mathbf{A}[[y]]$, the families
$(\widetilde\mathcal{V}^\text{\boldmath{$\om$}} \mathbf{B}_\text{\boldmath{$\om$}} y)_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$ and
$({\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb} \mathbf{B}_\text{\boldmath{$\om$}} y)_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
are summable, of sums
\begin{equation} \label{eqwtphpsisum}
\widetilde\varphi(z,y) = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \widetilde\mathcal{V}^\text{\boldmath{$\om$}}(z) \mathbf{B}_\text{\boldmath{$\om$}} y,
\qquad
\widetilde\psi(z,y) = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}(z) \mathbf{B}_\text{\boldmath{$\om$}} y.
\end{equation}
The derivation of~$\mathbf{A}[[y]]$ induced by~$\Delta_m$ is clearly continuous;
applying $\Delta_m$ to both sides of the first equation in~\eqref{eqwtphpsisum} and
using~\eqref{eqcaraccomould} and~\eqref{eqDemtcVb}, we find
\begin{multline*}
\Delta_m\widetilde\varphi = \sum_\text{\boldmath{$\om$}} (\Delta_m \widetilde\mathcal{V}^\text{\boldmath{$\om$}}) \mathbf{B}_\text{\boldmath{$\om$}} y
= \sum_{\text{\boldmath{$\om$}}^1, \text{\boldmath{$\om$}}^2} \widetilde\mathcal{V}^{\text{\boldmath{$\om$}}^1} V_a^{\text{\boldmath{$\om$}}^2}(m) \mathbf{B}_{\text{\boldmath{$\om$}}^2} \mathbf{B}_{\text{\boldmath{$\om$}}^1} y
\\[1ex]
= \sum_{\text{\boldmath{$\om$}}^2} V_a^{\text{\boldmath{$\om$}}^2}(m) \mathbf{B}_{\text{\boldmath{$\om$}}^2} \widetilde\Theta y
= \mathscr C(m)\widetilde\varphi
\end{multline*}
(with the help of Lemma~\ref{lemintermed} for the last identities).
Similarly,
\begin{multline*}
\Delta_m\widetilde\psi = \sum_\text{\boldmath{$\om$}} (\Delta_m {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^\omb}) \mathbf{B}_\text{\boldmath{$\om$}} y
= -\sum_{\text{\boldmath{$\om$}}^1, \text{\boldmath{$\om$}}^2} V_a^{\text{\boldmath{$\om$}}^1}(m) {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^{\omb^2}} \mathbf{B}_{\text{\boldmath{$\om$}}^2} \mathbf{B}_{\text{\boldmath{$\om$}}^1} y
\\[1ex]
= -\sum_{\text{\boldmath{$\om$}}^2} {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}^{\omb^2}} \mathbf{B}_{\text{\boldmath{$\om$}}^2} \mathscr C(m) y
= -\widetilde\Theta^{-1}(C_m y^{m+1}) = -C_m(\widetilde\Theta^{-1} y)^{m+1}.
\end{multline*}
\qed
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{Operator form of the Bridge Equation.}
As announced after the statement of Theorem~\ref{thmBE}, the Bridge Equation can be
given a form which involves the operators~$\widetilde\Theta$ or~$\widetilde\Theta^{-1}$ in a more symmetric
way.
This will require a further construction.
\begin{prop} \label{propResurzCVy}
Let $\mathbf{A} = \raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$. The set
$$
\mathbf{A}\{y\} = \Big\{ \sum_{n\ge0} \tilde f_n(z) y^n \in \mathbf{A}[[y]] \mid
\forall K\in \mathscr K,\exists c,\Lambda>0 \;\text{s.t.}\; \norm{\tilde f_n}_K \le c \Lambda^n
\; \text{for all $n$}
\Big\}
$$
is a subalgebra of~$\mathbf{A}[[y]]$, which contains $\widetilde\varphi(z,y)$ and~$\widetilde\psi(z,y)$
and which is invariant by all the alien derivations~$\Delta_m$.
Moreover, the substitution operators~$\widetilde\Theta$ and~$\widetilde\Theta^{-1}$
leave $\mathbf{A}\{y\}$ invariant and the operators they induce on $\mathbf{A}\{y\}$ satisfy
the ``alien chain rule''
$$
\Delta_m \widetilde\Theta \tilde f = \widetilde\Theta \Delta_m \tilde f + (\widetilde\Theta\partial_y\tilde f) \Delta_m\widetilde\varphi, \quad
\Delta_m \widetilde\Theta^{-1} \tilde f = \widetilde\Theta^{-1} \Delta_m \tilde f + (\widetilde\Theta^{-1}\partial_y\tilde f) \Delta_m\widetilde\psi.
$$
\end{prop}
\noindent
\emph{Idea of the proof:}
The fact that $\widetilde\varphi,\widetilde\psi\in\mathbf{A}\{y\}$ follows easily from \eqref{ineqwhphn}--\eqref{ineqwhpsin}.
The other statements require symmetrically contractile paths, first to control the seminorm
$\norm{\,.\,}_K$ of a product of simple resurgent functions ($\mathbf{A}$ is in fact
a Fr\'echet algebra),
and then to study $\partial_y^n\tilde f(z,\widetilde\varphi_0(z))$ which appears in the substitution
of~$\widetilde\varphi$ inside a series with resurgent coefficients:
$$
\tilde f(z,\widetilde\varphi) = \tilde f(z,\widetilde\varphi_0) + y \partial_y\tilde f(z,\widetilde\varphi_0)\widetilde\Phi_1 +
y^2\Big(\partial_y\tilde f(z,\widetilde\varphi_0)\widetilde\Phi_2 + \frac{1}{2!}\partial_y^2\tilde f(z,\widetilde\varphi_0)\widetilde\Phi_1^2\Big)
+ \dotsb
$$
with the notation~\eqref{eqdefPhiph}.
See \cite{kokyu} ({e.g.}\ \S2.3, formula~(41)).
\qed
\begin{thm} \label{thmBrOp}
We have the following identities in $\operatorname{End}_\mathbb{C}(\mathbf{A}\{y\})$:
\begin{equation} \label{eqBrOp}
\big[ \Delta_m, \widetilde\Theta \big] = \mathscr C(m) \widetilde\Theta,
\qquad
\big[ \Delta_m, \widetilde\Theta^{-1} \big] = - \widetilde\Theta^{-1} \mathscr C(m),
\end{equation}
for all $m\in\mathbb{Z}^*$.
\end{thm}
\begin{proof}
We must prove that $\widetilde\Theta\Delta_m\widetilde\Theta^{-1}-\Delta_m = -\mathscr C(m)$, to which both identities in~\eqref{eqBrOp} are equivalent.
The operators $\widetilde\Theta$ and $\widetilde\Theta^{-1}$ are mutually inverse $\mathbf{A}$-linear automorphisms
of $\mathscr A=\mathbf{A}\{y\}$
and $\mathscr C(m)$ is an $\mathbf{A}$-linear derivation.
The operator~$\Delta_m$ is a derivation which is not $\mathbf{A}$-linear, but
$D = \widetilde\Theta\Delta_m\widetilde\Theta^{-1}-\Delta_m$ is an $\mathbf{A}$-linear derivation;
indeed, if $\mu(z)\in\mathbf{A}$ and $ f(z,y)\in\mathscr A$, then
\begin{multline*}
D(\mu f) =
\widetilde\Theta\Delta_m(\mu\widetilde\Theta^{-1} f) - \Delta_m(\mu f) = \\
\widetilde\Theta\big( \mu \Delta_m\widetilde\Theta^{-1} f + (\Delta_m\mu) \widetilde\Theta^{-1} f \big)
-\big( \mu\Delta_m f + (\Delta_m\mu) f \big) = \\
\mu\widetilde\Theta\Delta_m\widetilde\Theta^{-1} f + (\Delta_m\mu) f
- \mu\Delta_m f - (\Delta_m\mu) f
= \mu Df.
\end{multline*}
It is thus sufficient to check that the operator $D+\mathscr C(m)$ vanishes on~$y$
(being a continuous $\mathbf{A}$-linear derivation of~$\mathscr A$, it must then
vanish everywhere).
But, in view of~\eqref{eqDempsiThiigCm}, $D y = \widetilde\Theta\Delta_m\widetilde\psi =
-C_m(\widetilde\Theta\widetilde\psi)^{m+1} = - C_m y^{m+1}$, as required.
\end{proof}
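As a quick consistency check, one may apply the first identity of~\eqref{eqBrOp} to
the element $y\in\mathbf{A}\{y\}$: since $\Delta_m y = 0$ and $\widetilde\Theta y = \widetilde\varphi$, it reduces to
$$
\Delta_m \widetilde\varphi = \mathscr C(m)\, \widetilde\varphi,
$$
which is the form of the Bridge Equation obtained in the proof of
Theorem~\ref{thmBE} above.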
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
\emph{The Bridge Equation and the problem of analytic classification.}
We now explain why the coefficients~$C_m$ involved in the Bridge Equation are
``analytic invariants'' of the vector field~$X$.
Suppose we are given two saddle-node vector fields, $X_1$ and~$X_2$, of the
form~\eqref{eqdefX} and satisfying~\eqref{eqassA}.
Both of them are formally conjugate to the normal form~$X_0$, hence they are
mutually formally conjugate.
Namely, we have formal substitution automorphisms $\Theta_i$ (or~$\widetilde\Theta_i$, when
using the variable~$z$ instead of~$x$) conjugating $X_i$ with~$X_0$, for $i=1,2$, hence
$$
\Theta X_1 = X_2 \Theta, \qquad \Theta = \Theta_2^{-1} \Theta_1.
$$
The operator~$\Theta$ is the substitution operator associated with
$$
\th \colon (x,y) \mapsto \big( x, \varphi(x,y) \big),
\qquad \varphi(x,y) = \Theta y = \varphi_1\big(x,\psi_2(x,y)\big),
$$
which is the unique formal transformation of the form~\eqref{eqdefthph} such
that $X_1 = \th^* X_2$.
One can check that, when passing to the variable~$z$, one gets as a consequence
of Proposition~\ref{propResurzCVy} and Theorem~\ref{thmBrOp}:
$$
\widetilde\Theta \in \operatorname{End}_\mathbb{C}(\mathbf{A}\{y\}), \qquad
\big[ \Delta_m, \widetilde\Theta \big] = \widetilde\Theta_2^{-1} \big( \mathscr C_1(m) - \mathscr C_2(m) \big) \widetilde\Theta_1,
\qquad m\in\mathbb{Z}^*,
$$
where $\mathscr C_i(m) = C_{i,m} y^{m+1}\frac{\partial\,}{\partial y}$ is the derivation
appearing in the {right-hand side}\ of the Bridge Equations~\eqref{eqBrOp} for~$X_i$.
If $X_1$ and~$X_2$ are holomorphically conjugate, then the unique formal
conjugacy~$\th$ is given by a convergent series~$\th(x,y)$, thus all the alien
derivatives of~$\widetilde\varphi$ vanish and $\mathscr C_1(m) = \mathscr C_2(m)$ for all~$m$.
We thus have proved half of
\begin{thm} \label{thmCmInv}
Two saddle-node vector fields of the
form~\eqref{eqdefX} and satisfying~\eqref{eqassA}
are analytically conjugate if and only if their Bridge
Equations~\eqref{eqBEtigA} share the same collection of coefficients
$(C_m)_{m\in\mathbb{Z}^*}$.
\end{thm}
According to this theorem, the numbers~$C_m$ constitute a complete system
of analytic invariants for a saddle-node vector field.
To complete the proof of Theorem~\ref{thmCmInv}, one needs to show the reverse
implication,
{i.e.}\ that the identities $\mathscr C_1(m)=\mathscr C_2(m)$ imply the convergence of
$\varphi_1\big(x,\psi_2(x,y)\big)$.
This will follow from the results of the next section, according to which the
coefficients~$C_m$ are related to another complete system of analytic
invariants, which admits a more geometric description.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We end this section with a look at simple cases of the general theory.
The ``Euler equation'' corresponds to $A(x,y) = x+y$, as mentioned in
Section~\ref{secSN}.
We may call Euler-like equations those which correspond to the case in which
$a_\eta=0$ for $\eta\ge1$, thus $A(x,y) = a_{-1}(x) + \big(1+a_0(x)\big) y$.
For them, the formal integral is explicit.
Set $\widetilde a_0(z) = a_0(-1/z) \in z^{-2}\mathbb{C}\{z^{-1}\}$ and
$\widetilde a_{-1}(z) = a_{-1}(-1/z) \in z^{-1}\mathbb{C}\{z^{-1}\}$ as usual.
Let $\widetilde\alpha(z)$ be the unique series such that $\partial_z\widetilde\alpha = \widetilde
a_0$ and $\widetilde\alpha \in z^{-1}\mathbb{C}\{z^{-1}\}$.
Set also $\widetilde\beta = \widetilde a_{-1} \,{\mathrm e}^{-\widetilde\alpha} \in z^{-1}\mathbb{C}\{z^{-1}\}$ and $\widehat\beta =
\mathcal{B}\widetilde\beta$ (which is an entire function of exponential type).
One finds
$$
\widetilde Y(z,u) = \widetilde\varphi_0(z) + u\,{\mathrm e}^{z+\widetilde\alpha(z)}, \qquad
\widetilde\varphi_0 = - {\mathrm e}^{\widetilde\alpha} \, \mathcal{B}^{-1} \Big( {\zeta} \mapsto \frac{\widehat\beta({\zeta})}{{\zeta}+1} \Big).
$$
Correspondingly, $\varphi(x,y) = \Phi_0(x) + \Phi_1(x) y$ with
$\Phi_0(x) = \widetilde\varphi_0(-1/x)$ generically divergent and $\Phi_1(x) =
{\mathrm e}^{\widetilde\alpha(-1/x)}$ convergent.
One has $C_m=0$ for every $m\in\mathbb{Z}\setminus\{-1\}$, but
$$
C_{-1} = {\mathrm e}^{-\widetilde\alpha} \Delta_{-1}\widetilde\varphi_0 = -2\pi{\mathrm i} \, \widehat\beta(-1).
$$
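As an illustration, consider the classical Euler equation itself, $A(x,y)=x+y$.
Assuming the usual normalisation $\mathcal{B} \colon z^{-n-1} \mapsto {\zeta}^n/n!$ for the
formal Borel transform, one gets $\widetilde\alpha = 0$,
$\widetilde\beta = \widetilde a_{-1} = -z^{-1}$ and $\widehat\beta({\zeta}) \equiv -1$, whence
$$
\widetilde\varphi_0(z) = \mathcal{B}^{-1}\Big( {\zeta} \mapsto \frac{1}{1+{\zeta}} \Big)
= \sum_{n\ge0} (-1)^n n!\, z^{-n-1},
\qquad
C_{-1} = -2\pi{\mathrm i}\,\widehat\beta(-1) = 2\pi{\mathrm i},
$$
i.e.\ one recovers the familiar divergent Euler series, with $C_{-1}=2\pi{\mathrm i}$ as its
only nonzero invariant.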
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Another particular case, much less trivial, is that of Riccati equations (see
\cite{Eca84}, \cite[Vol.~2]{Eca81} or \cite{BSSV}):
when $a_1\neq0$ and $a_\eta=0$ for $\eta\ge2$, hence $A(x,y) = a_{-1}(x) +
\big(1+a_0(x)\big) y + a_1(x) y^2$, one can check that the formal integral has a linear
fractional dependence upon the parameter~$u$:
$$
\widetilde Y(z,u) = \frac{\widetilde\varphi_0(z) + u\,{\mathrm e}^z \widetilde\chi(z)}{
1 + u\,{\mathrm e}^z \widetilde\chi(z)\widetilde\varphi_\infty(z)},
$$
where $\widetilde\varphi_0$, $\widetilde\varphi_\infty$ and $-1+\widetilde\chi$ belong to $z^{-1}\mathbb{C}[[z^{-1}]]$;
$\widetilde\varphi_0$ and $1/\widetilde\varphi_\infty$ can be found as the unique solutions of the
differential equation~\eqref{eqdiffeqX} in the fraction field $\mathbb{C}(\!(z^{-1})\!)$.
Correspondingly, the normalising series $\varphi(x,y)$ and~$\psi(x,y)$ have a linear
fractional dependence upon~$y$.
In the Riccati case, only $C_{-1}$ and~$C_1$ may be nonzero. Indeed,
$$
\Delta_m\widetilde\varphi_0 \neq 0 \;\Rightarrow\; m=-1,
\quad
\Delta_m\widetilde\varphi_\infty \neq 0 \;\Rightarrow\; m=1,
\quad
\Delta_m\widetilde\chi \neq 0 \;\Rightarrow\; m=\pm1.
$$
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We may call ``canonical Riccati equations'' the equations corresponding to a
function~$A$ of the form
$A(x,y) = y + \frac{1}{2\pi{\mathrm i}}B_- x + \frac{1}{2\pi{\mathrm i}}B_+ x y^2$, with $B_-,B_+\in\mathbb{C}$.
Thus, for them, the differential equation~\eqref{eqdiffeqX} reads
$$
\partial_z \widetilde Y = \widetilde Y - \frac{1}{2\pi{\mathrm i} z}(B_- + B_+ \widetilde Y^2).
$$
A direct mould computation based on~\eqref{eqdefCm}
is given in \cite{Eca81}, Vol.~2, pp.~476--480, yielding
$$
C_{-1} = B_- {\sigma}(B_- B_+), \quad
C_{1} = - B_+ {\sigma}(B_- B_+),
$$
with ${\sigma}(b) = \frac{2}{b^{1/2}} \sin \frac{b^{1/2}}{2}$
(see \cite{BSSV} for a computation by another method).
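As a consistency check (with the same Borel normalisation as in the Euler-like
computation above), note that ${\sigma}(b) = 1 - \frac{b}{24} + O(b^2)$, so that letting
$B_+\to0$ yields $C_{-1}\to B_-$ and $C_1\to 0$; on the other hand, for $B_+=0$ the
equation is Euler-like with $a_{-1}(x) = \frac{1}{2\pi{\mathrm i}}B_- x$ and $a_0=a_1=0$, for
which $\widehat\beta({\zeta}) \equiv -\frac{B_-}{2\pi{\mathrm i}}$ and the Euler-like formula gives
$$
C_{-1} = -2\pi{\mathrm i}\,\widehat\beta(-1) = B_-,
$$
as expected.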
\section{Relation with Martinet-Ramis's invariants} \label{secMR}
In this section, we continue to investigate the consequences of the resurgence
of the solution of the conjugacy equation for a saddle-node~$X$.
We shall now connect the ``alien computations'' of the previous section with
Martinet-Ramis's solution of the problem of analytic classification \cite{MR},
completing at the same time the proof of Theorem~\ref{thmCmInv}.
This will be done by comparing sectorial solutions of the conjugacy problem
obtained by Borel-Laplace summation on the one hand, and by deriving geometric
consequences of the Bridge Equation through exponentiation and summation on
the other hand
(this amounts to a resurgent description of the ``Stokes phenomenon'' for the
differential equation~\eqref{eqdiffeqX}).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let us call \emph{Martinet-Ramis's invariants} of~$X$ the numbers
$\xi_{-1},\xi_1,\xi_2,\dotsc$ defined in terms of \'Ecalle's
invariants by the formulas
\begin{align}
\label{defxiCminus}
\xi_{-1} &= - C_{-1}, \\
\label{defxiCplus}
\xi_m &= \sum_{r\ge1}
\sum_{\substack{m_1,\dotsc,m_r\ge1 \\ m_1+\dotsb+m_r = m}}
\frac{(-1)^r}{r!} \beta_{m_1,\ldots,m_r} C_{m_1}\dots C_{m_r},
\qquad m\ge1,
\end{align}
where, as usual, $\beta_{m_1} = 1$ and
$\beta_{m_1,\ldots,m_r} = (m_1+1)(m_1+m_2+1)\dotsm(m_1+\dotsb+m_{r-1}+1)$
for $r\ge2$.
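At lowest orders, these formulas give
$$
\xi_1 = -C_1, \qquad
\xi_2 = -C_2 + \tfrac{1}{2}\,\beta_{1,1}\, C_1^2 = -C_2 + C_1^2,
$$
since $\beta_{1,1} = m_1+1 = 2$.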
Observe that they are obtained by integrating backwards the vector fields
$$
\mathscr C_- = \mathscr C(-1) = C_{-1} \frac{\partial\,}{\partial u}, \qquad
\mathscr C_+ = \sum_{m>0} \mathscr C(m) = \sum_{m>0} C_m u^{m+1} \frac{\partial\,}{\partial u}.
$$
Indeed, the time-$(-1)$ maps of~$\mathscr C_-$ and~$\mathscr C_+$ are
\begin{equation} \label{eqdefxipm}
u \mapsto \xi_-(u) = u + \xi_{-1}, \qquad
u \mapsto \xi_+(u) = u + \sum_{m>0} \xi_m u^{m+1}
\end{equation}
(as can be checked by viewing $-\mathscr C_+$ as an elementary mould-comould expansion
on the alphabet $\mathbb{N}^*$; the reason for changing the variable~$y$ into~$u$ will
appear later).\footnote{
Thus one always has $\xi_-(u) = u - C_{-1}$, and in the Riccati case as at the
end of the previous section
$\xi_+(u) = \frac{u}{1 + C_{1} u}$.
}
These numbers can also be defined directly from the iterated
integrals~$L_a^\text{\boldmath{$\om$}}$ of~\eqref{eqdefLao}:
\begin{prop}
The family
$\big( \beta_\text{\boldmath{$\om$}} L_a^\text{\boldmath{$\om$}} \big)_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\,\norm{\text{\boldmath{$\om$}}}=m}$
is summable in~$\mathbb{C}$ for each $m\in\mathbb{Z}^*$ and
$$
\xi_m = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet,\, \norm{\text{\boldmath{$\om$}}} = m} \beta_\text{\boldmath{$\om$}} L_a^\text{\boldmath{$\om$}},
$$
with the convention $\xi_m = 0$ for $m\le-2$.
\end{prop}
\noindent
\emph{Idea of the proof.} The relations $\xi_\pm(u) = \sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \pmL\text{\boldmath{$\om$}}
\mathbf{B}_\text{\boldmath{$\om$}} u$ (where $\pmL\bullet$ is defined by~\eqref{eqdefpmL}) formally follow
from the formula $\pmL{\bullet} = \exp\big(-\pmV{\bullet}\big)$ and
Lemma~\ref{lemintermed}, according to which
$(\pmV{\text{\boldmath{$\om$}}} \mathbf{B}_\text{\boldmath{$\om$}})_{\text{\boldmath{$\om$}}\in\Omega^\bullet}$
is a pointwise summable family of operators of~$\mathbf{A}[[u]]$ with sum~$\mathscr C_\pm$.
The summability can be justified by the same kind of arguments as in the proof
of Proposition~\ref{propCm} and Theorem~\ref{thmBE}.
\qed
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The formulas~\eqref{defxiCminus}--\eqref{defxiCplus} can be inverted so as to
express the $C_m$'s in terms of the $\xi_m$'s.
Theorem~\ref{thmCmInv} is thus equivalent to the fact that the $\xi_m$'s
constitute themselves a complete system of analytic invariants for the
saddle-node classification problem.
We shall now prove this fact directly.
In fact, we shall obtain more: the pair $(\xi_-,\xi_+)$ is a complete system of
analytic invariants and \emph{$\xi_+$ is necessarily convergent}.
Thus, not all collections of numbers $(C_m)_{m\in\{-1\}\cup\mathbb{N}^*}$
can appear as analytic invariants,
only those for which the corresponding $\xi_m$'s admit geometric bounds
$|\xi_m| \le K^m$ for $m\ge1$
(hence they have to satisfy Gevrey bounds themselves: $|C_m| \le K_1^m m!$ for
$m\ge1$).
This information will follow from the geometric interpretation of~$\xi_\pm$.
Martinet and Ramis have also shown that any collection
$(\xi_m)_{m\in\{-1\}\cup\mathbb{N}^*}$ subject to the
previous growth constraint can be obtained as a system of analytic invariants
for some saddle-node vector field, but we shall not consider this question here.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let us consider the saddle-node vector field~$X$ and its normal form~$X_0$ in
the variable $z=-1/x$ instead of~$x$:
$$
\widetilde X = \frac{\partial\,}{\partial z} + A(-1/z,y) \frac{\partial\,}{\partial y}, \qquad
\widetilde X_0 = \frac{\partial\,}{\partial z} + y \frac{\partial\,}{\partial y}.
$$
For ${\varepsilon}\in \left]0,\pi/2\right[$ and $R>0$, we set
\begin{align*}
\cD^{up}(R,\eps) &= \ao z \in\mathbb{C} \mid
-\tfrac{\pi}{2}+{\varepsilon} \le \arg z \le \tfrac{3\pi}{2}-{\varepsilon}, \;
|z| \ge R \af, \\
\cD^{low}(R,\eps) &= \ao z \in\mathbb{C} \mid
-\tfrac{3\pi}{2}+{\varepsilon} \le \arg z \le \tfrac{\pi}{2}-{\varepsilon}, \;
|z| \ge R \af,
\end{align*}
which are ``sectorial neighbourhoods of infinity'' in the $z$-plane
(corresponding to certain sectorial neighbourhoods of the origin in the $x$-plane).
Their intersection has two connected components:
\begin{align*}
\cD_-(R,\eps) &=
\ao z \in\mathbb{C} \mid
\tfrac{\pi}{2}+{\varepsilon} \le \arg z \le \tfrac{3\pi}{2}-{\varepsilon}, \;
|z| \ge R \af
\subset \ao\mathop{\Re e}\nolimits z<0\af, \\
\cD_+(R,\eps) &=
\ao z \in\mathbb{C} \mid
-\tfrac{\pi}{2}+{\varepsilon} \le \arg z \le \tfrac{\pi}{2}-{\varepsilon}, \;
|z| \ge R \af
\subset \ao\mathop{\Re e}\nolimits z>0\af.
\end{align*}
\begin{thm} \label{thmximInv}
Let ${\varepsilon}\in \left]0,\pi/2\right[$. Then there exist $R,\rho>0$ such that:
\begin{enumerate}[(i)]
\item
By Borel-Laplace summation, the formal series~$\widetilde\varphi_n(z)$ give
rise to functions $\widetilde\varphi_n^{up}(z)$, resp.\ $\widetilde\varphi_n^{low}(z)$, which are analytic
in $\cD^{up}(R,\eps)$, resp.\ $\cD^{low}(R,\eps)$, such that the formulas
$$
\widetilde\varphi^{up}(z,y) = \sum_{n\ge0} \widetilde\varphi_n^{up}(z) y^n, \quad
\widetilde\varphi^{low}(z,y) = \sum_{n\ge0} \widetilde\varphi_n^{low}(z) y^n
$$
define two functions $\widetilde\varphi^{up}$ and~$\widetilde\varphi^{low}$ analytic in
$\cD^{up}(R,\eps)\times \ao |y|\le \rho\af$,
resp.\ $\cD^{low}(R,\eps)\times \ao |y|\le \rho\af$,
and each of the transformations
$$
\widetilde\th^{{up}}(z,y) = \big(z,\widetilde\varphi^{up}(z,y)\big), \quad
\widetilde\th^{{low}}(z,y) = \big(z,\widetilde\varphi^{low}(z,y)\big)
$$
is injective in its domain and establishes there a conjugacy between the
normal form~$\widetilde X_0$ and the saddle-node vector field~$\widetilde X$.
\item
The series~$\xi_+$ of~\eqref{eqdefxipm} has positive radius of convergence and
the upper and lower normalisations are connected by the formulas
\begin{alignat*}{3}
&\widetilde\th^{{up}}(z,y) &=& \widetilde\th^{{low}}\big(z,\xi_-(y\,{\mathrm e}^{-z})\,{\mathrm e}^z\big)
&=& \widetilde\th^{{low}}(z,y + \xi_{-1}{\mathrm e}^z)
\\
\intertext{for $z\in\cD_-(R,\eps)$ and $|y|\le \rho$, whereas}
&\widetilde\th^{{low}}(z,y) &=& \widetilde\th^{{up}}\big(z,\xi_+(y\,{\mathrm e}^{-z})\,{\mathrm e}^z\big)
&=& \widetilde\th^{{up}}\big(z, y + \xi_1 y^2{\mathrm e}^{-z} + \xi_2 y^3{\mathrm e}^{-2z} + \dotsb\big)\\
\intertext{for $z\in\cD_+(R,\eps)$ and $|y|\le \rho$.}
\end{alignat*}
\item
The pair $(\xi_-,\xi_+)$ is a complete system of
analytic invariants for~$X$.
\end{enumerate}
\end{thm}
As already mentioned, Theorem~\ref{thmximInv} contains Theorem~\ref{thmCmInv}.
The rest of this section is devoted to the proof of Theorem~\ref{thmximInv}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
In view of inequalities~\eqref{ineqwhpsin}--\eqref{inequniformphn}, the
principal branches of the Borel transforms~$\widehat\varphi_n({\zeta})$ and~$\widehat\psi_n({\zeta})$
admit exponential bounds of the form $K L^n \, {\mathrm e}^{C |{\zeta}|}$ in the sectors
$\ao {\zeta}\in\mathbb{C} \mid \tfrac{{\varepsilon}}{2} \le \arg{\zeta} \le \pi-\tfrac{{\varepsilon}}{2} \af$
and $\ao {\zeta}\in\mathbb{C} \mid \pi+\tfrac{{\varepsilon}}{2} \le \arg{\zeta} \le 2\pi-\tfrac{{\varepsilon}}{2} \af$.
Using the directions of the first sector for instance, we can
define analytic functions by gluing the Laplace transforms
corresponding to various directions:
$$
\widetilde\varphi_n^{low}(z) = \int_0^{{\mathrm e}^{{\mathrm i}\th}\infty} \widehat\varphi_n({\zeta})\,{\mathrm e}^{-z{\zeta}} \, {\mathrm d}{\zeta},
\qquad
\widetilde\psi_n^{low}(z) = \int_0^{{\mathrm e}^{{\mathrm i}\th}\infty} \widehat\psi_n({\zeta})\,{\mathrm e}^{-z{\zeta}} \, {\mathrm d}{\zeta},
$$
with $\th \in [\tfrac{{\varepsilon}}{2},\pi-\tfrac{{\varepsilon}}{2}]$.
If we take $R$ large enough, then the union of the
half-planes $\ao
\mathop{\Re e}\nolimits(z\,{\mathrm e}^{{\mathrm i}\th}) > C \af$ contains $\cD^{low}(R,\eps)$ and the functions
$$
\widetilde\varphi^{low}(z,y) = \sum_{n\ge0} \widetilde\varphi_n^{low}(z) y^n,
\qquad
\widetilde\psi^{low}(z,y) = \sum_{n\ge0} \widetilde\psi_n^{low}(z) y^n
$$
are analytic for $z\in\cD^{low}(R,\eps)$ and $|y|\le\rho$ as soon as $\rho<1/L$.
The standard properties of Borel-Laplace summation ensure that the relations
$y = \widetilde\varphi\big( z, \widetilde\psi(z,y) \big) =
\widetilde\psi\big( z, \widetilde\varphi(z,y) \big)$
and
$\widetilde X_0 \widetilde\varphi(z,y) = A\big( -1/z, \widetilde\varphi(z,y) \big)$
yield similar relations for $\widetilde\varphi^{low}$ and $\widetilde\psi^{low}$, possibly in smaller
domains (because $\widetilde\varphi^{low}(z,y)-y$ and $\widetilde\psi^{low}(z,y)-y$ can be made
uniformly small by increasing~$R$ and diminishing~$\rho$).
Hence the transformations
$$
(z,y) \mapsto \big(z,\widetilde\varphi^{low}(z,y)\big), \qquad
(z,y) \mapsto \big(z,\widetilde\psi^{low}(z,y)\big)
$$
(or rather the sectorial germs they represent)
are mutually inverse and establish a conjugacy between~$\widetilde X_0$ and~$\widetilde X$.
We define similarly $\widetilde\varphi^{up}(z,y)$ and $\widetilde\psi^{up}(z,y)$ with the desired
properties, by means of Laplace transforms in directions belonging to
$[\pi+\tfrac{{\varepsilon}}{2},2\pi-\tfrac{{\varepsilon}}{2}]$.
This yields the first statement in Theorem~\ref{thmximInv}.
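For the Euler equation considered at the end of the previous section, these
sectorial sums are explicit: there $\widehat\varphi_0({\zeta}) = \frac{1}{1+{\zeta}}$ (under the
Borel normalisation assumed there), so
$$
\widetilde\varphi_0^{low}(z) = \int_0^{{\mathrm e}^{{\mathrm i}\th}\infty} \frac{{\mathrm e}^{-z{\zeta}}}{1+{\zeta}} \, {\mathrm d}{\zeta},
\qquad \th \in [\tfrac{{\varepsilon}}{2},\pi-\tfrac{{\varepsilon}}{2}],
$$
the various choices of~$\th$ gluing into one function analytic in~$\cD^{low}(R,\eps)$;
the pole of~$\widehat\varphi_0$ at ${\zeta}=-1$ is precisely what prevents $\widetilde\varphi_0^{low}$
and~$\widetilde\varphi_0^{up}$ from matching.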
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We now have at our disposal two sectorial normalisations
$$
\widetilde\th^{{low}} \colon (z,y) \mapsto \big(z,\widetilde\varphi^{low}(z,y)\big), \qquad
\widetilde\th^{{up}} \colon (z,y) \mapsto \big(z,\widetilde\varphi^{up}(z,y)\big),
$$
which are defined in different but overlapping domains, and which admit the same
asymptotic expansion {with respect to}~$z$ (when one first expands in powers of~$y$).
If we consider $\big(\widetilde\th^{{up}}\big)^{-1}\circ\,\widetilde\th^{{low}}$ or $\big(\widetilde\th^{{low}}\big)^{-1}\circ\,\widetilde\th^{{up}}$ in
one of the two components of $\cD^{low}(R,\eps)\cap\cD^{up}(R,\eps)$, we thus get a transformation of the
form
\begin{equation} \label{eqdefchizy}
(z,y) \mapsto \big( z, \chi(z,y) \big)
\end{equation}
which conjugates the normal form~$\widetilde X_0$ with itself, to which one can apply the following:
\begin{lemma}
Let $\mathcal{D}$ be a domain in~$\mathbb{C}$.
Suppose that the transformation $(z,y) \mapsto \big( z, \chi(z,y) \big)$ is
analytic and injective for
$z \in \mathcal{D}$ and $|y| \le \rho$,
and that it conjugates~$\widetilde X_0$ with itself.
Then there exists $\xi(u)\in\mathbb{C}\{u\}$ such that
\begin{equation} \label{eqchixi}
\chi(z,y) = \xi(y\,{\mathrm e}^{-z}) {\mathrm e}^z.
\end{equation}
\end{lemma}
Such transformations are called \emph{sectorial isotropies} of the normal form.
\begin{proof}
By assumption $\chi = \widetilde X_0\chi$.
Since $y = \widetilde X_0 y$, this implies that $\frac{1}{y}\chi(z,y)$ is a first
integral of~$\widetilde X_0$.
Thus $\frac{1}{u\,{\mathrm e}^z}\chi(z,u\,{\mathrm e}^z)$ is independent of~$z$ and can be
written $\frac{\xi(u)}{u}$, where obviously $\xi(u)\in\mathbb{C}\{u\}$.
\end{proof}
When $\chi(z,y)$ comes from
$\big(\widetilde\th^{{up}}\big)^{-1}\circ\,\widetilde\th^{{low}}$ or $\big(\widetilde\th^{{low}}\big)^{-1}\circ\,\widetilde\th^{{up}}$, we have a
further piece of information:
in the Taylor expansion $\chi(z,y)- y = \sum_{n\ge0} \chi_n(z) y^n$,
each component $\chi_n(z)$ admits the null series as asymptotic expansion in
$\cD_\pm(R,\eps)$
(the transformation~\eqref{eqdefchizy} is asymptotic to the identity because
$\widetilde\th^{{up}}$ and~$\widetilde\th^{{low}}$ share the same asymptotic expansion).
This has different implications according to whether the domain $\mathcal{D}$ is $\cD_-(R,\eps)$
or $\cD_+(R,\eps)$.
Indeed, if we expand $\xi(u)-u = \sum_{n\ge0} \alpha_n u^n$, we get
$$
\chi_0(z) = \alpha_0\, {\mathrm e}^{z}, \quad
\chi_1(z) = \alpha_1, \quad
\chi_2(z) = \alpha_2\, {\mathrm e}^{-z}, \quad
\chi_3(z) = \alpha_3\, {\mathrm e}^{-2z}, \,
\dotsc
$$
hence
\begin{align*}
\mathcal{D} = \cD_-(R,\eps) \subset \ao \mathop{\Re e}\nolimits z <0 \af &\enspace\Rightarrow\enspace
\alpha_n = 0 \enspace\text{for $n\neq0$},\\
\mathcal{D} = \cD_+(R,\eps) \subset \ao \mathop{\Re e}\nolimits z >0 \af &\enspace\Rightarrow\enspace
\alpha_0 = \alpha_1 = 0.
\end{align*}
The upshot is that there exist $\alpha_0\in\mathbb{C}$ and
$\xi(u) = u + \alpha_2 u^2 + \alpha_3 u^3 + \dotsb \in \mathbb{C}\{u\}$
such that
\begin{alignat*}{3}
&\big(\widetilde\th^{{low}}\big)^{-1}\circ\,\widetilde\th^{{up}}(z,y) &=& (z,y + \alpha_0{\mathrm e}^z),
&\qquad& z\in\cD_-(R,\eps),
\\
&\big(\widetilde\th^{{up}}\big)^{-1}\circ\widetilde\th^{{low}}(z,y) &=&
(z, y + \alpha_2 y^2{\mathrm e}^{-z} + \alpha_3 y^3{\mathrm e}^{-2z} + \dotsb),
&\qquad& z\in\cD_+(R,\eps).
\end{alignat*}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
It is elementary to check that the pair of sectorial isotropies
$\Big(
\big(\widetilde\th^{{low}}\big)^{-1} \circ\, \widetilde\th^{{up}}_{\hspace{1.2em}|\cD_-(R,\eps)},
\big(\widetilde\th^{{up}}\big)^{-1} \circ\, \widetilde\th^{{low}}_{\hspace{.9em}|\cD_+(R,\eps)}
\Big)$
is a complete system of analytic invariants for~$X$:
suppose indeed that two saddle-node vector fields~$X_1$ and~$X_2$ are given and
that we wish to know whether the unique formal transformation~$\th$ of the
form~\eqref{eqdefthph} which conjugates them is convergent;
then
$\widetilde\th^{{up}}_2 \circ \big(\widetilde\th^{{up}}_1\big)^{-1}$ and
$\widetilde\th^{{low}}_2 \circ \big(\widetilde\th^{{low}}_1\big)^{-1}$ are two sectorial conjugacies between~$X_1$
and~$X_2$ defined in different but overlapping domains and admitting~$\th$ as
asymptotic expansion (up to the change $x=-1/z$);
they coincide and define an analytic conjugacy iff
$\big(\widetilde\th^{{low}}_2\big)^{-1} \circ \widetilde\th^{{up}}_2 =
\big(\widetilde\th^{{low}}_1\big)^{-1} \circ \widetilde\th^{{up}}_1$
in both components of the intersection of the domains.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Therefore, it only remains to be checked that
$\alpha_0=\xi_{-1}$ and $\xi=\xi_+$.
This will follow from the interpretation of the operators~$\Delta_m^+$ as
components of the ``Stokes automorphism''.
For this part, the reader may consult the end of \S2.4 in \cite{kokyu}.
Suppose that a simple resurgent function $c\,\delta+\widehat\varphi\in\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ has the
following property:
the functions~$\widehat\chi_m$ defined by $\Delta_m^+(c\,\delta+\widehat\varphi) = \gamma_m\,\delta +
\widehat\chi_m$ and~$\widehat\varphi$ itself have at most exponential growth in each
non-horizontal direction, so that one can consider the Laplace transforms
$\mathcal{L}^\th\widehat\varphi(z) = \int_0^{{\mathrm e}^{{\mathrm i}\th}\infty} \widehat\varphi({\zeta})\,{\mathrm e}^{-z{\zeta}} \, {\mathrm d}{\zeta}$
or $\mathcal{L}^\th\widehat\chi_m(z)$ for $\th\in\left]{\varepsilon},\pi-{\varepsilon}\right[$ or
$\th\in\left]\pi+{\varepsilon},2\pi-{\varepsilon}\right[$, which are analytic in
sectorial neighbourhoods of infinity of the form $\cD^{low}(R,\eps)$ or $\cD^{up}(R,\eps)$.
Let $\th<0<\th'$, with $\th$ and~$\th'$ both close to~$0$;
by deforming a contour of integration,
one deduces from the definition~\eqref{eqdefDemplus} that,
for any $M\in\mathbb{N}^*$ and ${\sigma}\in\left]0,1\right[$,
$$
c + \mathcal{L}^\th\widehat\varphi(z) = c + \mathcal{L}^{\th'}\widehat\varphi(z) + \sum_{m=1}^M {\mathrm e}^{-mz}
\big( \gamma_m + \mathcal{L}^{\th'}\widehat\chi_m(z) \big) + O(|{\mathrm e}^{-(M+{\sigma})z}|)
$$
in the sectorial neighbourhood of infinity obtained by imposing that both
$\mathop{\Re e}\nolimits(z\,{\mathrm e}^{{\mathrm i}\th})$ and $\mathop{\Re e}\nolimits(z\,{\mathrm e}^{{\mathrm i}\th'})$ be large enough, which is
contained in the right half-plane $\ao \mathop{\Re e}\nolimits z > 0 \af$.
Let us denote this by:
$\mathcal{L}^\th(c\,\delta+\widehat\varphi) \sim
\sum_{m\ge0} {\mathrm e}^{-mz}\mathcal{L}^{\th'}\Delta_m^+(c\,\delta+\widehat\varphi)$ in $\ao \mathop{\Re e}\nolimits z > 0 \af$.
Similarly, if $\th<\pi<\th'$ with $\th$ and~$\th'$ both close to~$\pi$, one gets
$\mathcal{L}^\th(c\,\delta+\widehat\varphi) \sim
\sum_{m\le0} {\mathrm e}^{-mz}\mathcal{L}^{\th'}\Delta_m^+(c\,\delta+\widehat\varphi)$ in the left half-plane
$\ao \mathop{\Re e}\nolimits z < 0 \af$.
We can even write $\mathcal{L}^\th \sim \mathcal{L}^{\th'}\circ \sum_{m \ge 0} \dDep$ in $\ao \mathop{\Re e}\nolimits z > 0 \af$
and $\mathcal{L}^\th \sim \mathcal{L}^{\th'}\circ \sum_{m \le 0} \dDep$ in $\ao \mathop{\Re e}\nolimits z < 0 \af$, if
we define $\dDep$ properly in the convolutive model; see \cite{kokyu}:
$\dDep = \tau_m \circ \Delta_m^+$, with a shift operator $\tau_m \colon \raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z} \to
\tau_m(\raisebox{0ex}[1.9ex]{$\widehat{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z})$, the target space being the set of simple resurgent
functions ``based at~$m$'' (instead of being based at the origin).
On the other hand, we can rephrase~\eqref{eqDepexpDe} as
$\sum_{m \ge 0} \dDep = \exp\Big( \sum_{m>0} \dDem \Big)$,
$\sum_{m\le0} \dDep = \exp\Big( \sum_{m<0} \dDem \Big)$.
Apply this to $\widetilde Y(z,u)$ (or, rather, to each of its components):
when $\th$ and~$\th'$ are close to~$0$, we have $\mathcal{L}^\th\widehat Y = \widetilde Y^{up}$ and
$\mathcal{L}^{\th'}\widehat Y = \widetilde Y^{low}$ in~$\cD_+(R,\eps)$, hence, in view of the Bridge Equation,
$\widetilde Y^{up} \sim (\mathcal{L}^{\th'}\circ \exp\mathscr C_+) \widehat Y$,
which yields $\widetilde Y^{up}(z,u) \sim \widetilde Y^{low}\big(z,(\xi_+)^{-1}(u)\big)$ in~$\cD_+(R,\eps)$.
Similarly, $\widetilde Y^{low}(z,u) \sim \widetilde Y^{up}\big(z,(\xi_-)^{-1}(u)\big)$ in the domain~$\cD_-(R,\eps)$.
When interpreting these relations componentwise with respect to~$u$ and modulo
$O(|{\mathrm e}^{\pm(M+{\sigma})z}|)$ in $\cD_\pm(R,\eps)$ with arbitrarily large~$M$, we get the desired
relations between $\widetilde\varphi^{up}(z,y) = \widetilde Y^{up}(z,y {\mathrm e}^{-z})$ and
$\widetilde\varphi^{low}(z,y) = \widetilde Y^{low}(z,y {\mathrm e}^{-z})$.
\section{The resurgence monomials $\widetilde\mathcal{U}_a^\text{\boldmath{$\om$}}$ and the freeness of alien
derivations} \label{secResurMonom}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
The first goal of this section is to construct families of simple resurgent functions
which form closed systems for multiplication and alien derivations in the
following sense:
\begin{definition}
We call $\Delta$-friendly monomials the members of any family of simple resurgent functions
$(\widetilde\mathcal{U}^{\omega_1,\dotsc,\omega_r})_{r\ge0,\,\omega_1,\dotsc,\omega_r\in\mathbb{Z}^*}$, such that on
the one hand
\begin{equation} \label{eqsystDefriend}
\Delta_m \widetilde\mathcal{U}^{\omega_1,\dotsc,\omega_r} = \left| \begin{alignedat}{2}
&\widetilde\mathcal{U}^{\omega_2,\dots,\omega_r} &\quad &\text{if $r\ge1$ and $\omega_1=m$,} \\
& \enspace\quad 0 &\quad &\text{if not,}
\end{alignedat} \right.
\end{equation}
for every $m\in\mathbb{Z}^*$, and on the other hand
$\widetilde\mathcal{U}^\emptyset=1$ and
$$
\widetilde\mathcal{U}^\text{\boldmath{$\alpha$}} \widetilde\mathcal{U}^\text{\boldmath{$\beta$}} =
\sum_{\text{\boldmath{$\om$}}\in\Omega^\bullet} \sh{\text{\boldmath{$\alpha$}}}{\text{\boldmath{$\beta$}}}{\text{\boldmath{$\om$}}} \widetilde\mathcal{U}^\text{\boldmath{$\om$}},
\qquad \text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}}\in (\mathbb{Z}^*)^\bullet,
$$
{i.e.}, when viewed as a mould, $\widetilde\mathcal{U}^\bullet \in \mathscr M^\bullet(\mathbb{Z}^*,\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z})$ is symmetral.
\end{definition}
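To see how the two requirements interlock, consider one-letter words: symmetrality
requires
$$
\widetilde\mathcal{U}^{\omega_1} \, \widetilde\mathcal{U}^{\omega_2} = \widetilde\mathcal{U}^{\omega_1,\omega_2} + \widetilde\mathcal{U}^{\omega_2,\omega_1},
$$
and, applying~$\Delta_m$ to both sides, the Leibniz rule on the left-hand side and
\eqref{eqsystDefriend} on the right-hand side both yield
$\delta_{m,\omega_1}\, \widetilde\mathcal{U}^{\omega_2} + \delta_{m,\omega_2}\, \widetilde\mathcal{U}^{\omega_1}$
(with Kronecker deltas), so the two conditions are compatible at this depth.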
J.~\'Ecalle calls such resurgent functions $\Delta$-friendly, by contrast with the functions
$\widetilde\mathcal{V}_a^{\omega_1,\dotsc,\omega_r}$, which can be termed ``$\partial$-friendly monomials'' because
of~\eqref{eqdefnewcV} (using $\partial$ as short-hand for $\frac{{\mathrm d}\,}{{\mathrm d} z}$).
As a matter of fact, $\Delta$-friendly monomials will be defined with the help of
the moulds $\widetilde\mathcal{V}_a^\bullet$, $V_a^\bullet$ of Section~\ref{secBESN} and mould composition,
but we first need to enlarge the definition of mould composition slightly.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We thus begin with a kind of addendum to Sections~\ref{secAlgMoulds} and~\ref{secAltSym}.
Assume that~$\mathbf{A}$ is a commutative $\mathbb{C}$-algebra, the unit of which is
denoted~$1$, and~$\Omega$ is a commutative semigroup, the operation of which is denoted
additively.
We still use the notations $\norm{\text{\boldmath{$\om$}}} = \omega_1 + \dotsb + \omega_r$ and $\norm{\emptyset}=0$.
Let us call {\em restricted moulds} the elements of $\hM^\bul(\Om^*,\bA)$, where
$\Omega^* = \Omega \setminus \{0\}$.
The example we have in mind is $\Omega = \mathbb{Z}$ and $\mathbf{A} = \mathbb{C}$ or~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
\begin{definition}
We call {\em licit mould} any restricted mould~$U^\bullet$ such that
$$
\norm{\text{\boldmath{$\om$}}} = 0 \enspace\Rightarrow\enspace
U^\text{\boldmath{$\om$}} = 0
$$
for any $\text{\boldmath{$\om$}} \in (\Omega^*)^\bullet$.
The set of licit moulds will be denoted $\hM_{\textit{lic}}^\bul(\Om^*,\bA)$.
\end{definition}
The set $\hM_{\textit{lic}}^\bul(\Om^*,\bA)$ is clearly an $\mathbf{A}$-submodule of~$\hM^\bul(\Om^*,\bA)$, but not an
$\mathbf{A}$-subalgebra.
Notice that $U^\bullet\in\hM_{\textit{lic}}^\bul(\Om^*,\bA)$ implies $U^\emptyset=0$.
We now define the {\em composition of a restricted mould and a licit mould} as follows:
$$
(M^\bullet,U^\bullet) \in \hM^\bul(\Om^*,\bA)\times\hM_{\textit{lic}}^\bul(\Om^*,\bA) \mapsto
C^\bullet = M^\bullet \circ U^\bullet \in \hM^\bul(\Om^*,\bA),
$$
with $C^\emptyset = M^\emptyset$ and, for $\text{\boldmath{$\om$}}\neq\emptyset$,
$$
C^\text{\boldmath{$\om$}} = \sum_{ \substack{s\ge1,\,\text{\boldmath{$\om$}} = \text{\boldmath{$\om$}}^1 \raisebox{.15ex}{$\centerdot\!\centerdot\!\centerdot$}\, \text{\boldmath{$\om$}}^s \\
\norm{\text{\boldmath{$\om$}}^1},\dotsc,\norm{\text{\boldmath{$\om$}}^s}\neq0} }
M^{(\norm{\text{\boldmath{$\om$}}^1},\dotsc,\norm{\text{\boldmath{$\om$}}^s})}
U^{\text{\boldmath{$\om$}}^1} \dotsm U^{\text{\boldmath{$\om$}}^s}.
$$
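For instance, for a two-letter word the sum reduces to
$$
C^{(\omega_1,\omega_2)} = M^{(\omega_1+\omega_2)}\, U^{(\omega_1,\omega_2)}
+ M^{(\omega_1,\omega_2)}\, U^{(\omega_1)} U^{(\omega_2)},
$$
where the first term must be omitted when $\omega_1+\omega_2=0$; licitness of~$U^\bullet$ makes
this omission harmless, since the would-be factor $U^{(\omega_1,\omega_2)}$ vanishes anyway
in that case.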
The map $M^\bullet \mapsto M^\bullet \circ U^\bullet$ is clearly $\mathbf{A}$-linear;
we leave it to the reader\footnote{
The verification of most of the properties indicated in this paragraph can be
simplified by observing that the canonical restriction map
$\rho \colon \mathscr M^\bullet(\Omega,\mathbf{A}) \to \hM^\bul(\Om^*,\bA)$
is an $\mathbf{A}$-algebra homomorphism which satisfies
$\rho(M^\bullet\circ U^\bullet) = \rho(M^\bullet) \circ \rho(U^\bullet)$
for any two moulds~$M^\bullet$ and~$U^\bullet$ such that $\rho(U^\bullet)$ is licit
and which preserves alternality and symmetrality.
}
to check that it is an $\mathbf{A}$-algebra homomorphism, that
$$
U^\bullet, V^\bullet \in \hM_{\textit{lic}}^\bul(\Om^*,\bA) \enspace\Rightarrow\enspace U^\bullet \circ V^\bullet \in \hM_{\textit{lic}}^\bul(\Om^*,\bA),
$$
and that
$$
M^\bullet\in \hM^\bul(\Om^*,\bA) \;\text{and}\; U^\bullet, V^\bullet \in \hM_{\textit{lic}}^\bul(\Om^*,\bA)
\enspace\Rightarrow\enspace
(M^\bullet \circ U^\bullet) \circ V^\bullet = M^\bullet \circ (U^\bullet \circ V^\bullet).
$$
The {\em restricted identity mould} is
$$
I_*^\bullet \colon \text{\boldmath{$\om$}}\in(\Omega^*)^\bullet \mapsto I_*^\text{\boldmath{$\om$}} =
\left| \begin{aligned}
1 \quad &\text{if $r(\text{\boldmath{$\om$}}) = 1$,}\\
0 \quad &\text{if $r(\text{\boldmath{$\om$}}) \neq 1$.}
\end{aligned} \right.
$$
It is a licit mould, which satisfies
$M^\bullet \circ I_*^\bullet = M^\bullet$ for any restricted mould~$M^\bullet$
and $I_*^\bullet \circ U^\bullet = U^\bullet $ for any licit mould~$U^\bullet$.
One can check that a licit mould~$U^\bullet$ admits an inverse for composition iff
$U^\text{\boldmath{$\om$}}$ is invertible in~$\mathbf{A}$ whenever $r(\text{\boldmath{$\om$}})=1$.
A proposition analogous to Proposition~\ref{propcomposalt} holds.
In particular, \emph{alternal invertible licit moulds form a subgroup of the
composition group of invertible licit moulds}.\footnote{
This can be checked by means of the restriction homomorphism of the previous
footnote:
if $U^\bullet$ is licit and $U^\text{\boldmath{$\om$}}$ is invertible whenever $r(\text{\boldmath{$\om$}})=1$, then any
$U_0^\bullet\in\mathscr M^\bullet(\Omega,\mathbf{A})$ such that $\rho(U_0^\bullet) = U^\bullet$ and
$U_0^{(0)}=1$ is an invertible mould, the composition inverse of which has a
restriction~$V^\bullet$ which satisfies $U^\bullet\circ V^\bullet = V^\bullet\circ U^\bullet =
I_*^\bullet$; if moreover $U^\bullet$ is alternal, then one can choose $U_0^\bullet$
alternal (take $U_0^\text{\boldmath{$\om$}} = 0$ whenever $r(\text{\boldmath{$\om$}})\ge2$ and one of the letters
of~$\text{\boldmath{$\om$}}$ is~$0$), thus its inverse and the restriction of its inverse are alternal.
}
The property that inversion of licit moulds preserves alternality will be used
in the next paragraph.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We now take $\Omega = \mathbb{Z}$ and $\mathbf{A} = \raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
Assume that $a = (\widehat a_\eta)_{\eta\in\mathbb{Z}^*}$ is any family of entire functions
such that $\widehat a_\eta(\eta)\neq0$ for each $\eta\in\mathbb{Z}^*$.
We still use the notations
$\widetilde a_\eta = \mathcal{B}^{-1}\widehat a_\eta \in z^{-1}\mathbb{C}[[z^{-1}]]$ and
$J_a^\text{\boldmath{$\om$}} = \widetilde a_\eta$ if $\text{\boldmath{$\om$}}=(\eta)$, $0$ if not.
We recall that, according to Section~\ref{secBESN}, the equation
\begin{equation} \label{eqdeftV}
(\partial+\nabla){\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} = - {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \times J_a^\bullet
\end{equation}
defines a symmetral mould ${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \in \hM^\bul(\Om^*,\bA)$, and that, for each $m\in\mathbb{Z}^*$, we
have an alternal scalar mould ${V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet(m) = -V_a^\bullet(m) \in \hM^\bul(\Om^*,\C)$ which satisfies
\begin{gather} \label{eqaldertcV}
\Delta_m{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} = {V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet(m) \times {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul},\\
\label{eqnultvVm}
{V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\text{\boldmath{$\om$}}(m) \neq 0 \enspace\Rightarrow\enspace \norm{\text{\boldmath{$\om$}}}=m.
\end{gather}
Moreover ${V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^{(\eta)}(\eta) = 2\pi{\mathrm i}\, \widehat a_\eta(\eta)$.
\begin{thm}
The formula ${V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet = \sum_{m\in\mathbb{Z}^*} {V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet(m)$ defines an alternal scalar
licit mould, which admits a composition inverse~$U_a^\bullet$.
The formula
\begin{equation} \label{eqdefcUa}
\widetilde\mathcal{U}_a^\bullet = {\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul} \circ U_a^\bullet \,\in\, \hM^\bul(\Om^*,\tRsimpZ)
\end{equation}
defines a family of $\Delta$-friendly monomials~$\widetilde\mathcal{U}_a^\text{\boldmath{$\om$}}$.
\end{thm}
\begin{proof}
In view of~\eqref{eqnultvVm}, the definition of~${V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet$ makes sense and its
alternality follows from the alternality of each ${V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet(m)$.
This mould is clearly licit, and
${V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^{(\eta)} = 2\pi{\mathrm i}\, \widehat a_\eta(\eta) \neq0$, hence its invertibility.
The general properties of the composition of a restricted mould and a licit
mould ensure that \eqref{eqdefcUa} defines a symmetral mould.
Its alien derivatives are easily computed since $U_a^\bullet$ is a scalar mould:
$$
\Delta_m \widetilde\mathcal{U}_a^\bullet = (\Delta_m{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}) \circ U_a^\bullet =
({V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet(m)\times{\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}) \circ U_a^\bullet = I_m^\bullet \times \widetilde\mathcal{U}_a^\bullet,
$$
with $I_m^\bullet = {V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet(m)\circ U_a^\bullet$ (the last identity follows from the
$\mathbf{A}$-algebra homomorphism property of post-composition with~$U_a^\bullet$).
The conclusion follows from the fact that
\begin{equation} \label{eqdefIm}
I_m^\text{\boldmath{$\om$}} = 1 \enspace\text{if $\text{\boldmath{$\om$}}=(m)$,} \quad 0 \enspace\text{if not.}
\end{equation}
This formula can be checked by introducing the map
$\rho_m \colon M^\bullet \in \hM^\bul(\Om^*,\bA) \mapsto M_m^\bullet \in \hM^\bul(\Om^*,\bA)$ defined by $M_m^\text{\boldmath{$\om$}}
= M^\text{\boldmath{$\om$}}$ if $\norm{\text{\boldmath{$\om$}}}=m$, $0$ if not, and observing that
$\rho_m(M^\bullet\circ U^\bullet) = \rho_m(M^\bullet) \circ U^\bullet$ for any licit
mould~$U^\bullet$;
thus $I_m^\bullet = \rho_m({V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^\bullet) \circ U_a^\bullet = \rho_m(I_*^\bullet)$.
\end{proof}
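In depth~$1$ the construction is transparent: for $\text{\boldmath{$\om$}}=(\eta)$ one has
$U_a^{(\eta)} = \big({V\hspace{-.5em}\raisebox{.35ex}{-}_{\hspace{-.1em}a}\hspace{-0.22em}}^{(\eta)}\big)^{-1} = \big(2\pi{\mathrm i}\,\widehat a_\eta(\eta)\big)^{-1}$,
so that $\widetilde\mathcal{U}_a^{(\eta)}$ is the single-letter function of~${\widetilde\cV\hspace{-.45em}\raisebox{.35ex}{--}_a^\bul}$ divided by
$2\pi{\mathrm i}\,\widehat a_\eta(\eta)$; formulas \eqref{eqaldertcV}--\eqref{eqnultvVm} then give directly
$$
\Delta_m \widetilde\mathcal{U}_a^{(\eta)} = 1 \enspace\text{if $m=\eta$,} \quad 0 \enspace\text{if not,}
$$
as \eqref{eqsystDefriend} requires.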
\begin{remark}
An analogous computation yields
$$
(\partial+\nabla) \widetilde\mathcal{U}_a^\bullet = -\widetilde\mathcal{U}_a^\bullet \times \widetilde K^\bullet
$$
with a licit alternal mould $\widetilde K^\bullet \in \mathscr M^\bullet(\mathbb{Z}^*,\mathbb{C}[[z^{-1}]])$ defined by
$\widetilde K^\text{\boldmath{$\om$}} = U_a^\text{\boldmath{$\om$}} \, \widetilde a_{\norm{\text{\boldmath{$\om$}}}}$ if $\norm{\text{\boldmath{$\om$}}}\neq0$.
\end{remark}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
As an application of the existence of $\Delta$-friendly monomials, we now show
\begin{thm} \label{thmfree}
Let $\mathbf{A} = \raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
The subalgebra of $\operatorname{End}_\mathbb{C} \mathbf{A}$ generated by the
operators~$\Delta_m$, $m\in\mathbb{Z}^*$, is isomorphic to the free associative algebra on~$\mathbb{Z}^*$.
\end{thm}
\noindent
In fact, we shall prove a stronger statement:
for any non-commutative polynomial {\em with coefficients in~$\mathbf{A}$},
$$
P = \sum_{(m_1,\dots,m_r)\in\mathscr F} \widetilde\varphi^{m_1,\dots,m_r}
\Delta_{m_r}\dotsm\Delta_{m_1},
\qquad \text{$\mathscr F$ finite subset of~$(\mathbb{Z}^*)^\bullet$,}
$$
there exists $\widetilde\psi\in\mathbf{A}$ such that $P\widetilde\psi\neq0$, unless all the
coefficients $\widetilde\varphi^{m_1,\dots,m_r}$ are zero.
Thus there is no non-trivial polynomial relation between the alien derivations~$\Delta_m$.
\begin{proof}
Assume that not all the coefficients are zero.
We may suppose $\mathscr F\neq\emptyset$ and $\widetilde\varphi^\text{\boldmath{$\om$}}\neq0$ for each $\text{\boldmath{$\om$}}\in\mathscr F$.
Choose $\text{\boldmath{$m$}}=(m_1,\dotsc,m_r)\in\mathscr F$ with minimal length; then, for any
family of $\Delta$-friendly monomials $\widetilde\mathcal{U}^\bullet$, we find
$P\widetilde\mathcal{U}^\text{\boldmath{$m$}} = \widetilde\varphi^{m_1,\dots,m_r} \neq 0$
as a consequence of
\begin{equation} \label{eqiterDeDefr}
\Delta_{m_s}\dotsm\Delta_{m_1} \widetilde\mathcal{U}^\text{\boldmath{$\om$}} = \left| \begin{alignedat}{2}
&\widetilde\mathcal{U}^{\text{\boldmath{$n$}}} &\quad &\text{if $\text{\boldmath{$\om$}} = (m_1,\dotsc,m_s) \raisebox{.15ex}{$\centerdot$} \text{\boldmath{$n$}}$ with $\text{\boldmath{$n$}}\in(\mathbb{Z}^*)^\bullet$,} \\
& \; 0 &\quad &\text{if not.}
\end{alignedat} \right.
\end{equation}
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Let us call \emph{resurgence constant} any $\widetilde\varphi\in\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ such that
$\Delta_m\widetilde\varphi = 0$ for any $m\in\mathbb{Z}^*$.
This is equivalent to saying that $\mathcal{B}\widetilde\varphi = c\,\delta + \widehat\varphi({\zeta})$ with
$c\in\mathbb{C}$ and $\widehat\varphi$ entire
(in particular every convergent series $\widetilde\varphi(z)\in\mathbb{C}\{z^{-1}\}$ is a resurgence
constant, but the converse is not true since we did not require the Borel
transform to be of exponential type: the entire function~$\widehat\varphi$ might have
order~$>1$).
Resurgence constants form a subalgebra~$\widetilde\mathscr P_0$ of~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$.
\begin{prop}
Let $\widetilde\mathcal{U}_1^\bullet$ and $\widetilde\mathcal{U}_2^\bullet$ be two moulds in $\hM^\bul(\Om^*,\tRsimpZ)$ and suppose
that $\widetilde\mathcal{U}_1^\bullet$ is a family of $\Delta$-friendly monomials.
Then $\widetilde\mathcal{U}_2^\bullet$ is a family of $\Delta$-friendly monomials iff
there exists a symmetral mould $\widetilde M^\bullet \in \mathscr M^\bullet(\mathbb{Z}^*,\widetilde\mathscr P_0)$ such that
\begin{equation} \label{eqrelcUcU}
\widetilde\mathcal{U}_2^\bullet = \widetilde\mathcal{U}_1^\bullet \times \widetilde M^\bullet.
\end{equation}
\end{prop}
\noindent
Thus all the families of $\Delta$-friendly monomials can be deduced from one of them.
\begin{proof}
Let $\widetilde M^\bullet = (\widetilde\mathcal{U}_1^\bullet)^{-1} \times \widetilde\mathcal{U}_2^\bullet \in \hM^\bul(\Om^*,\tRsimpZ)$. This mould is symmetral iff
$\widetilde\mathcal{U}_2^\bullet$ is symmetral.
Let $m\in\mathbb{Z}^*$.
We have $\Delta_m\widetilde\mathcal{U}_1^\bullet = I_m^\bullet \times \widetilde\mathcal{U}_1^\bullet$, with the
mould~$I_m^\bullet$ defined by~\eqref{eqdefIm}.
The Leibniz rule applied to~\eqref{eqrelcUcU} yields
$\Delta_m\widetilde\mathcal{U}_2^\bullet = I_m^\bullet \times \widetilde\mathcal{U}_2^\bullet +
\widetilde\mathcal{U}_1^\bullet \times \Delta_m \widetilde M^\bullet$.
Thus $\widetilde\mathcal{U}_2^\bullet$ satisfies \eqref{eqsystDefriend} for all~$m$ iff
$\widetilde\mathcal{U}_1^\bullet \times \Delta_m \widetilde M^\bullet=0$ for all~$m$, which is equivalent to
$\widetilde M^\text{\boldmath{$\om$}}\in\widetilde\mathscr P_0$ since $\widetilde\mathcal{U}_1^\bullet$ admits a multiplicative inverse.
\end{proof}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
Define $\text{\boldmath{$\De$}}_\emptyset=\operatorname{Id}$ and
$\text{\boldmath{$\De$}}_\text{\boldmath{$\om$}} = \Delta_{\omega_r}\dotsm\Delta_{\omega_1}$ for $\text{\boldmath{$\om$}}=(\omega_1,\dotsc,\omega_r)\in(\mathbb{Z}^*)^\bullet$.
We call \emph{resurgence polynomial} any $\widetilde\varphi\in\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ such that
$\text{\boldmath{$\De$}}_{\text{\boldmath{$\om$}}}\widetilde\varphi = 0$ for all but finitely many $\text{\boldmath{$\om$}} \in(\mathbb{Z}^*)^\bullet$.
Resurgence polynomials form a subalgebra~$\widetilde\mathscr P$ of~$\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ (which contains~$\widetilde\mathscr P_0$).
\begin{prop}
Let $\widetilde\mathcal{U}^\bullet$ be any family of $\Delta$-friendly monomials and $\widetilde\varphi$ be any
simple resurgent function.
Then $\widetilde\varphi$ is a resurgence polynomial iff $\widetilde\varphi$ can be written as
\begin{equation} \label{eqreprespol}
\widetilde\varphi = \sum_{\text{\boldmath{$\om$}}\in\mathscr F} \widetilde\mathcal{U}^\text{\boldmath{$\om$}} \widetilde\varphi_\text{\boldmath{$\om$}}, \qquad
\text{$\mathscr F$ finite subset of $(\mathbb{Z}^*)^\bullet$,}
\end{equation}
with $\widetilde\varphi_\text{\boldmath{$\om$}}\in\widetilde\mathscr P_0$ for every $\text{\boldmath{$\om$}}\in\mathscr F$.
Moreover, such a representation of a resurgence polynomial is unique and the formula
$\mathscr E = \sum S \widetilde\mathcal{U}^\bullet \text{\boldmath{$\De$}}_\bullet$
(with $S$ defined by~\eqref{eqdefinvol}, thus $S\widetilde\mathcal{U}^\bullet$ is the
multiplicative inverse of~$\widetilde\mathcal{U}^\bullet$)
defines an algebra homomorphism $\mathscr E\colon\widetilde\mathscr P \to \widetilde\mathscr P_0$ such that
$$
\widetilde\varphi_\text{\boldmath{$\om$}} = \mathscr E \text{\boldmath{$\De$}}_\text{\boldmath{$\om$}}\widetilde\varphi, \qquad \text{\boldmath{$\om$}} \in (\mathbb{Z}^*)^\bullet.
$$
\end{prop}
\begin{proof}
In view of~\eqref{eqiterDeDefr}, formula~\eqref{eqreprespol} defines a
resurgence polynomial whenever the $\widetilde\varphi_\text{\boldmath{$\om$}}$'s are resurgence constants.
The formula $\mathscr E = \sum S \widetilde\mathcal{U}^\bullet \text{\boldmath{$\De$}}_\bullet$ makes sense as an operator
$\widetilde\mathscr P\to\raisebox{0ex}[1.9ex]{$\widetilde{\mathrm{RES}}$}^{\mathrm{simp}}_\mathbb{Z}$ since the sum is locally finite;
an easy adaptation of the arguments of Section~\ref{secContrAltSym} shows that
$\mathscr E$ is an algebra homomorphism because $\widetilde\mathcal{U}^\bullet$ is symmetral and
$\text{\boldmath{$\De$}}_\bullet$ can be viewed as a cosymmetral comould (the $\Delta_m$'s which generate
it are derivations of~$\widetilde\mathscr P$).
Let us check that $\mathscr E\Big(\widetilde\mathscr P\Big) \subset \widetilde\mathscr P_0$.
Let $\widetilde\varphi\in\widetilde\mathscr P$ and $m\in\mathbb{Z}^*$; we can write $\Delta_m = \sum
I_m^\bullet\text{\boldmath{$\De$}}_\bullet$ with the notation~\eqref{eqdefIm}.
A computation analogous to the proof of Proposition~\ref{propmultiplimould}, but
taking into account the fact that~$\Delta_m$ does not commute with the
multiplication by~$(S\widetilde\mathcal{U})^\text{\boldmath{$\om$}}$, shows that
$$
\Delta_m\mathscr E\widetilde\varphi = \sum \big( (S\widetilde\mathcal{U}^\bullet \times I_m^\bullet) + \Delta_m S\widetilde\mathcal{U}^\bullet \big)\text{\boldmath{$\De$}}_\bullet\widetilde\varphi.
$$
Since $\Delta_m\widetilde\mathcal{U}^\bullet = I_m^\bullet\times\widetilde\mathcal{U}^\bullet$ and $S$ is an
anti-homomorphism such that $S I_m^\bullet = -I_m^\bullet$ and $S\Delta_m=\Delta_mS$, we have
$\Delta_m S\widetilde\mathcal{U}^\bullet = -S\widetilde\mathcal{U}^\bullet \times I_m^\bullet$, hence $\Delta_m\mathscr E\widetilde\varphi=0$.
We conclude by considering $\widetilde\varphi\in\widetilde\mathscr P$ and setting $\widetilde\varphi_\text{\boldmath{$\alpha$}} =
\mathscr E\text{\boldmath{$\De$}}_\text{\boldmath{$\alpha$}}\widetilde\varphi$ for every word $\text{\boldmath{$\alpha$}} \in (\mathbb{Z}^*)^\bullet$ (but only finitely many
words may yield a nonzero result).
We have
$\widetilde\varphi_\text{\boldmath{$\alpha$}} = \sum_{\text{\boldmath{$\beta$}}} (S\widetilde\mathcal{U}^\bullet)^\text{\boldmath{$\beta$}} \text{\boldmath{$\De$}}_{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}}\widetilde\varphi$,
thus
$\sum_{\text{\boldmath{$\alpha$}}} \widetilde\mathcal{U}^\text{\boldmath{$\alpha$}} \widetilde\varphi_\text{\boldmath{$\alpha$}} = \sum_{(\text{\boldmath{$\alpha$}},\text{\boldmath{$\beta$}})}
\widetilde\mathcal{U}^\text{\boldmath{$\alpha$}} (S\widetilde\mathcal{U}^\bullet)^\text{\boldmath{$\beta$}} \text{\boldmath{$\De$}}_{\text{\boldmath{$\alpha$}}\raisebox{.15ex}{$\centerdot$}\text{\boldmath{$\beta$}}}\widetilde\varphi$,
and the identity $\widetilde\mathcal{U}^\bullet \times S\widetilde\mathcal{U}^\bullet = 1^\bullet$ implies
$\sum_{\text{\boldmath{$\alpha$}}} \widetilde\mathcal{U}^\text{\boldmath{$\alpha$}} \widetilde\varphi_\text{\boldmath{$\alpha$}} = \widetilde\varphi$.
\end{proof}
\section{Other applications of mould calculus} \label{secOtherAppli}
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
In this last section, we wish to indicate how mould calculus can be applied to
another classical normal form problem: the linearisation of a vector field with
non-resonant spectrum.
Let $\mathscr A = \mathbb{C}[[y_1,\dotsc,y_n]]$ with $n\in\mathbb{N}^*$, and consider a vector field
with diagonal linear part:
$$
X = \sum_{i=1}^n a_i(y) \frac{\partial\;}{\partial y_i}, \qquad
a_i(y) = \lambda_i y_i + \sum_{k\in\mathbb{N}^n,\, |k|\ge2} a_{i,k} y^k
$$
(with standard notations for the multi-indices:
$y^k = y_1^{k_1}\dotsm y_n^{k_n}$ and $|k| = k_1+\dotsb+k_n$
if $k = (k_1,\dotsc,k_n)$).
The first problem consists in finding a formal transformation which
conjugates~$X$ and its linear part
$$
X^{\text{lin}} = \sum_{i=1}^n \lambda_i y_i \frac{\partial\;}{\partial y_i}.
$$
This linear part is thus considered as a natural candidate to be a normal form;
it is determined by the spectrum
$\lambda = (\lambda_1,\dotsc,\lambda_n)$.
In fact $X^{\text{lin}} = \mathscr X_\lambda$ with the notation~\eqref{eqdefgXla}.
It is not always possible to find a formal conjugacy between~$X$ and~$X^{\text{lin}}$,
because elementary calculations bring out rational functions of the spectrum,
whose denominators are of the form
\begin{equation} \label{eqdefdiv}
\langle m,\lambda \rangle = m_1 \lambda_1 + \dots + m_n \lambda_n
\end{equation}
with certain multi-indices $m\in\mathbb{Z}^n$.
Let us make the following {\em strong non-resonance assumption}:
\begin{equation} \label{eqStNRass}
\langle m,\lambda \rangle \neq 0 \quad
\text{for every $m\in\mathbb{Z}^n\setminus\{0\}$.}
\end{equation}
We shall now indicate how to construct a formal conjugacy via mould-comould
expansions under this assumption.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
We are in the framework of Section~\ref{secGenMcM} with $\mathbf{A}=\mathbb{C}$.
Let us use the standard monomial valuation on~$\mathscr A$, defined by $\nu(y^k)=|k|$.
We shall manipulate operators of~$\mathscr A$ having a valuation {with respect to}~$\nu$; they form
a subspace~$\mathscr F$ of $\operatorname{End}_\mathbb{C}\mathscr A$ which was denoted $\mathscr F_{\mathscr A,\mathbf{A}}$
in~\eqref{eqdefgFexemp}.
We first decompose~$X$ as a sum of homogeneous components, in the sense of
Definition~\ref{defhomog}:
$X^{\text{lin}}$ is homogeneous of degree~$0$ and we can write
$$
X - X^{\text{lin}} = \sum_{i=1}^n \sum_{k\in\mathbb{Z}^n} a_{i,k} y^k \frac{\partial\;}{\partial y_i},
$$
thus extending the definition of the $a_{i,k}$'s:
$$
a_{i,k} \neq0 \enspace\Rightarrow\enspace k\in\mathbb{N}^n \enspace\text{and}\enspace |k|\ge2.
$$
Using the canonical basis $(e_1,\dotsc,e_n)$ of~$\mathbb{Z}^n$, we can write
\begin{equation} \label{eqdefBnei}
X-X^{\text{lin}} = \sum_{m\in\mathbb{Z}^n} B_m, \qquad
B_m = \sum_{i=1}^n a_{i,m+e_i} y^m \cdot y_i\frac{\partial\;}{\partial y_i}.
\end{equation}
Observe that each $B_m$ is homogeneous of degree $m\in\mathbb{Z}^n$ and that
\begin{multline}
\notag
\quad B_m\neq0 \enspace\Rightarrow\enspace m\in\mathcal{N}, \\
\label{eqdefcNalph}
\mathcal{N} = \ao m\in\mathbb{Z}^n \mid \exists i \;\text{such that}\; m+e_i\in\mathbb{N}^n
\;\text{and}\; |m|\ge1 \af.\quad
\end{multline}
We thus view~$\mathcal{N}$ as an alphabet and consider
$$
\mathbf{B}_\emptyset = \operatorname{Id}, \qquad
\mathbf{B}_{m_1,\dots,m_r} = B_{m_r} \dotsm B_{m_1}
$$
as a comould on~$\mathcal{N}$ with values in~$\mathscr F$.
For instance, $X-X^{\text{lin}} = \sum I^\bullet \mathbf{B}_\bullet$.
The inequalities
$$
\vln{\mathbf{B}_{m_1,\dots,m_r}} \ge | m_1 + \dotsb + m_r |
$$
show that, for any scalar mould $M^\bullet \in \mathscr M^\bullet(\mathcal{N},\mathbb{C})$, the family
$(M^\text{\boldmath{$m$}} \mathbf{B}_\text{\boldmath{$m$}})_{\text{\boldmath{$m$}}\in\mathcal{N}^\bullet}$ is formally summable in~$\mathscr F$
(indeed, for any $\delta\in\mathbb{Z}$,
$\vln{M^{m_1,\dots,m_r}\mathbf{B}_{m_1,\dots,m_r}} \le \delta$ implies
$r\le |m_1|+\dotsb+|m_r|\le\delta$ and there are only finitely many $\eta\in\mathcal{N}$
such that $|\eta|\le\delta$).
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
According to the general strategy of mould-comould expansions, we now look for a
formal conjugacy~$\th$ between $X$ and~$X^{\text{lin}}$ through its substitution
automorphism~$\Theta$, which should satisfy $X = \Theta^{-1}X^{\text{lin}}\Theta$.
This conjugacy equation can be rewritten
$$
\left[ X^{\text{lin}}, \Theta \right] = \Theta \left( X-X^{\text{lin}} \right).
$$
Propositions~\ref{propmultiplimould} and~\ref{propcommutXla} show that, given
any $M^\bullet\in\mathscr M^\bullet(\mathcal{N},\mathbb{C})$,
$\Theta = \sum M^\bullet \mathbf{B}_\bullet$ is a solution as soon as
\begin{equation} \label{eqConjugM}
D_\varphi M^\bullet = I^\bullet \times M^\bullet,
\end{equation}
with $D_\varphi M^\text{\boldmath{$m$}} = \langle \norm{\text{\boldmath{$m$}}},\lambda \rangle M^\text{\boldmath{$m$}}$ for $\text{\boldmath{$m$}}\in\mathcal{N}^\bullet$.
Assumption~\eqref{eqStNRass} allows us to find a unique solution of
equation~\eqref{eqConjugM} such that $M^\emptyset = 1$; it is inductively determined
by
$$
M^{m_1,\dotsc,m_r} = \frac{1}{\langle \norm{\text{\boldmath{$m$}}},\lambda \rangle} M^{m_2,\dotsc,m_r},
$$
hence
\begin{equation} \label{eqdefsollin}
M^{\text{\boldmath{$m$}}} = \frac{1}{\langle {m_1+\dotsb+m_r},\lambda \rangle}
\frac{1}{\langle {m_2+\dotsb+m_r},\lambda \rangle}
\dotsm \frac{1}{\langle {m_r},\lambda \rangle}.
\end{equation}
The symmetrality of this solution can be obtained by mimicking the proof of
Proposition~\ref{propcVsym}.
Since $\mathbf{B}_\bullet$ is cosymmetral, we thus have an automorphism $\Theta = \sum
M^\bullet \mathbf{B}_\bullet$;
since $\Theta$ is continuous for the Krull topology, $\th = (\th_1,\dots,\th_n)$
with $\th_i=\Theta y_i$ yields a formal tangent-to-identity transformation which
conjugates~$X$ and~$X^{\text{lin}}$.
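As a quick consistency check (a straightforward expansion, included here for orientation), the contribution of words of length one to~$\Theta y_i$ can be made explicit: since $B_m y_i = a_{i,m+e_i}\, y^{m+e_i}$ and $M^{m} = \langle m,\lambda \rangle^{-1}$, we get
$$
\th_i(y) = \Theta y_i = y_i + \sum_{m\in\mathcal{N}} \frac{a_{i,m+e_i}}{\langle m,\lambda \rangle}\, y^{m+e_i}
+ (\text{contributions of words of length} \ge 2),
$$
in which one recognizes, writing $k=m+e_i$, the small denominators $\langle m,\lambda\rangle = \langle k,\lambda\rangle - \lambda_i$ familiar from Poincar\'e's theory of linearisation.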
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
As was alluded to at the end of Section~\ref{secContrAltSym}, the formalism of
moulds can be equally applied to the normalisation of discrete dynamical
systems.
A problem parallel to the previous one is the linearisation of a formal
transformation with multiplicatively non-resonant spectrum.
Suppose indeed that $f = (f_1,\dotsc,f_n)$ is an $n$-tuple of formal series
of~$\mathscr A$ without constant terms, with diagonal linear part
$f^{\text{lin}} \colon (y_1,\dotsc,y_n) \mapsto (\ell_1 y_1, \dotsc, \ell_n y_n)$.
Conjugating~$f$ and~$f^{\text{lin}}$ is equivalent to finding a continuous
automorphism~$\Theta$ which conjugates the corresponding substitution automorphisms:
$F = \Theta^{-1} F^{\text{lin}} \Theta$.
This is possible under the following {\em strong multiplicative non-resonance
assumption} on the spectrum $\ell = (\ell_1,\dotsc,\ell_n)$:
\begin{equation} \label{eqStNRassMult}
\ell^m -1 \neq 0 \quad
\text{for every $m\in\mathbb{Z}^n\setminus\{0\}$.}
\end{equation}
An explicit solution is obtained by expanding $F (F^{\text{lin}})^{-1}$ in homogeneous
components
$$
F = \Big( \operatorname{Id} + \sum_{m\in\mathcal{N}} B_m \Big) F^{\text{lin}},
$$
where the homogeneous operators~$B_m$ are no longer derivations; instead, they
satisfy the modified Leibniz rule~\eqref{eqmodifLeib} and generate a {\em
cosymmetrel} comould~$\mathbf{B}_\bullet$.
Correspondingly, the scalar mould
\begin{equation} \label{eqdefsollinMult}
M^\emptyset = 1, \qquad
M^{\text{\boldmath{$m$}}} = \frac{1}{(\ell^{m_1+\dotsb+m_r}-1)
(\ell^{m_2+\dotsb+m_r}-1)
\dotsm (\ell^{m_r}-1)}
\end{equation}
is {\em symmetrel} and $\Theta = \sum M^\bullet\mathbf{B}_\bullet$ is the desired automorphism
(see \cite{Eca93}),
whence a formal tangent-to-identity transformation~$\th$ which conjugates $f$ and~$f^{\text{lin}}$.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
In both previous problems, it is a classical result that a formal linearising
transformation~$\th$ exists under a weaker non-resonance assumption:
namely, it is sufficient that~\eqref{eqStNRass} or~\eqref{eqStNRassMult}
hold with~$\mathbb{Z}^n\setminus\{0\}$ replaced by~$\mathcal{N}\setminus\{0\}$.
Unfortunately, this is not clear on the mould-comould expansion, since under
this weaker assumption the formula~\eqref{eqdefsollin}
or~\eqref{eqdefsollinMult} may involve a zero divisor, thus the mould~$M^\bullet$
is not well-defined.
J.~\'Ecalle has invented a technique called {\em arborification} which solves
this problem and which goes far beyond: arborification also makes it possible to recover
the Bruno-R\"ussmann theorem, according to which the formal linearisation~$\th$
is convergent whenever the vector field~$X$ or the transformation~$f$ is
convergent and the spectrum~$\lambda$ or~$\ell$ satisfies the so-called {\em Bruno
condition} (a Diophantine condition which states that the divisors $\langle m,\lambda
\rangle$ or $\ell^m - 1$ do not approach zero ``abnormally well'').
The point is that, even when $X$ or~$f$ is convergent and the spectrum is
Diophantine, it is hard to check that~$\th_i(y)$ is convergent because it is
represented as the sum of a formally summable family $(M^\text{\boldmath{$m$}} \mathbf{B}_\text{\boldmath{$m$}}
y_i)_{\text{\boldmath{$m$}}\in\mathcal{N}^\bullet}$ in~$\mathbb{C}[[y_1,\dotsc,y_n]]$, but
the family $\big(|M^\text{\boldmath{$m$}} \mathbf{B}_\text{\boldmath{$m$}} y_i|\big)_{\text{\boldmath{$m$}}\in\mathcal{N}^\bullet}$ may fail to be
summable in~$\mathbb{C}$ for any $y\in\mathbb{C}^n\setminus\{0\}$.
However, arborification provides a systematic way of reorganizing the terms of
the sum: $\th_i(y)$ then appears as the sum of a summable family indexed by
``arborescent sequences'' rather than words.
The reader is referred to \cite{EcaNonAb}, \cite{dulac}, \cite{EV}, and also to the
recent article \cite{Eca05}.
\medskip \addtocounter{parag}{1} \noindent{\theparag\ }
There is another context, totally different, in which J.~\'Ecalle has used mould
calculus with great efficiency.
The multizeta values
$$
{\zeta}(s_1,s_2,\dotsc,s_r) = \sum_{n_1>n_2>\dotsb>n_r>0}
\frac{1}{n_1^{s_1}n_2^{s_2}\dotsm n_r^{s_r}}
$$
naturally present themselves as a scalar mould on~$\mathbb{N}^*$; in fact,
\begin{multline*}
{\zeta}(s_1,\dotsc,s_r) = \operatorname{Ze}^{\left(\begin{smallmatrix}
0,&\dotsc\,,&0\\s_1,&\dotsc\,,&s_r\end{smallmatrix}\right)},\\
\text{with}\quad
\operatorname{Ze}^{\left(\begin{smallmatrix}
{\varepsilon}_1,&\dotsc\,,&{\varepsilon}_r\\s_1,&\dotsc\,,&s_r\end{smallmatrix}\right)}
= \sum_{n_1>\dotsb>n_r>0}
\frac{{\mathrm e}^{2\pi{\mathrm i}(n_1{\varepsilon}_1+\dotsb+n_r{\varepsilon}_r)}}{n_1^{s_1}\dotsm n_r^{s_r}}
\end{multline*}
for $s_1,\dotsc,s_r\in\mathbb{N}^*$, ${\varepsilon}_1,\dotsc,{\varepsilon}_r\in\mathbb{Q}/\mathbb{Z}$
(with a suitable convention to handle possible divergences).
The mould~$\operatorname{Ze}^\bullet$ is the central object;
it turns out that it is {\em symmetrel}.
It is called a {\em bimould} because the letters of the alphabet are naturally given
as members of a product space, here $\mathbb{N}^*\times(\mathbb{Q}/\mathbb{Z})$; this makes it possible
to define new operations and structures.
This is the starting point of a whole theory, aimed at describing the algebraic
structures underlying the relations between multizeta values.
See \cite{EcaBil} or \cite{EcaARI}.
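As a concrete instance of the symmetrel property (a standard computation, stated here for orientation): for integers $s_1,s_2\ge2$, splitting the double sum according to $n_1>n_2$, $n_2>n_1$ or $n_1=n_2$ yields the ``stuffle'' relation
$$
{\zeta}(s_1)\,{\zeta}(s_2) = {\zeta}(s_1,s_2) + {\zeta}(s_2,s_1) + {\zeta}(s_1+s_2),
$$
which is exactly the relation imposed on a symmetrel mould evaluated on two one-letter words.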
\vspace{.7cm}
\subsubsection*{Acknowledgements}
It is a pleasure to thank F.~Fauvet, F.~Menous, F.~Patras and B.~Teissier for
their help.
I am also indebted to F.~Fauvet and J.-P.~Ramis, for having organized the conference
``Renormalization and Galois theories'' and given me a great opportunity of
lecturing on Resurgence theory.
This work was done in the framework of the project {\em Ph\'enom\`ene de Stokes,
renormalisation, th\'eories de Galois} from the {\em agence nationale pour la recherche}.
\vspace{.3cm}
\frenchspacing
| {
"timestamp": "2007-12-14T13:35:59",
"yymm": "0712",
"arxiv_id": "0712.2337",
"language": "en",
"url": "https://arxiv.org/abs/0712.2337",
"abstract": "This article is an introduction to some aspects of Écalle's mould calculus, a powerful combinatorial tool which yields surprisingly explicit formulas for the normalising series attached to an analytic germ of singular vector field or of map. This is illustrated on the case of the saddle-node, a two-dimensional vector field which is formally conjugate to Euler's vector field $x^2\\frac{\\pa}{\\pa x}+(x+y)\\frac{\\pa}{\\pa y}$, and for which the formal normalisation is shown to be resurgent in $1/x$. Resurgence monomials adapted to alien calculus are also described as another application of mould calculus.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Mould expansions for the saddle-node and resurgence monomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.978051747564637,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7089606345592785
} |
https://arxiv.org/abs/1805.03778 | Threshold functions for substructures in random subsets of finite vector spaces | The study of substructures in random objects has a long history, beginning with Erdős and Rényi's work on subgraphs of random graphs. We study the existence of certain substructures in random subsets of vector spaces over finite fields. First we provide a general framework which can be applied to establish coarse threshold results and prove a limiting Poisson distribution at the threshold scale. To illustrate our framework we apply our results to $k$-term arithmetic progressions, sums, right triangles, parallelograms and affine planes. We also find coarse thresholds for the property that a random subset of a finite vector space is sum-free, or is a Sidon set. | \section{Introduction}\label{sec:introduction}
Let $\mathbb{F}_{q}$ denote the finite field with
$q$ elements, where $q$ is a prime power, and let $\mathbb{F}_{q}^{n}$ be the $n$-dimensional vector space over this
field. Our goal is to establish threshold functions for the
existence of certain structures, which we call ``patterns'', in random subsets of $\mathbb{F}_{q}^{n}$. Threshold functions are
well-studied in many contexts, for example random graphs~\cite{KF},
random subsets of integers~\cite{RSZ} and random fractal sets~\cite{SS}.
We start by describing the random model. Given $0<\delta<1$, let $E=E^{\omega}$ be a random subset of $\mathbb{F}_{q}^{n}$ chosen as follows: let $x\in E$ with probability $\delta$ and $x\notin E$ with probability $1-\delta$, independently for all $x\in \mathbb{F}_{q}^{n}$. Denote the resulting probability space by $\Omega(\mathbb{F}_{q}^{n},\delta)$.
The model $\Omega(\mathbb{F}_{q}^{n},\delta)$ is closely related to the Gilbert random graph model, see for example \cite{KF},
and can be considered as a discrete version of fractal percolation (Mandelbrot percolation), see \cite[Chapter 15]{Falconer}. For some applications of the random model $\Omega(\mathbb{F}_{q}^{n}, \delta)$ to some deterministic problems, we refer to \cite[Theorem 5.2]{Babai} and \cite{Chens} for finding subsets of $\mathbb{F}_{q}^{n}$ with small Fourier coefficients, and \cite[Theorems 1.5-1.6]{Chenp} for finding sets without exceptional projections.
We study the number of subsets of $E$ of certain types, which we define below.
Our results will be asymptotic as at least one of $q, n$ tends to infinity.
The parameter which tends to infinity is specified below, for each pattern.
\begin{definition}\label{def:def}
\emph{
\begin{itemize}
\item A \emph{non-trivial three-term arithmetic progression} consists of three distinct vectors
$x, x+v, x+2v$ in $\mathbb{F}_{q}^{n}$. Asymptotics are as $q+n$ tends to infinity.
\item A \emph{non-trivial parallelogram} consists of four distinct vectors $x_{1}, x_{2}, x_{3}, x_{4}$ of $\mathbb{F}_{q}^{n}$
such that
\[ x_{\sigma(1)}-x_{\sigma(2)}=x_{\sigma(4)}-x_{\sigma(3)} \quad \text{ and } \quad
x_{\sigma(1)}-x_{\sigma(4)}=x_{\sigma(2)}-x_{\sigma(3)}\]
for some permutation $\sigma$. Asymptotics are as $q+n$ tends to infinity.
\item A \emph{non-trivial right triangle} consists of three distinct vectors $x_{1}, x_{2}, x_{3}$ of $\mathbb{F}_{q}^{n}$ such that
\[ (x_{\sigma(2)}-x_{\sigma(1)})\cdot (x_{\sigma(3)}-x_{\sigma(1)})=0 \]
for some permutation $\sigma$.
Asymptotics are as $q$ tends to infinity, with $n\geq 2$.
(Here $n$ may be bounded or growing.)
\item
An $m$-\emph{dimensional plane} is a translate of an
$m$-dimensional subspace, for any $m\in \{1,2,\ldots, n-1\}$, where $n\geq 2$. Each $m$-dimensional plane contains $q^{m}$ vectors.
Asymptotics are as $n$ tends to infinity, with $q$ and $m$ fixed.
\end{itemize}
}
\label{patterns}
\end{definition}
Since our methods may also work for other patterns, we introduce a general framework for our calculations.
Let $\mathcal{A}=\mathcal{A}_{a}$ be a family of subsets of $\mathbb{F}_{q}^{n}$ such that each element of $\mathcal{A}$ has $a$ vectors.
(Note that all patterns introduced above satisfy this condition.)
Let $X=X_{\mathcal{A}}$ be the random variable which counts the elements of $\mathcal{A}$ in the random set $E$.
That is,
\begin{equation}\label{eq:x}
X=X_{\mathcal{A}}=\sum_{T\in \mathcal{A}} {\bf 1}_{E}(T)
\end{equation}
where ${\bf 1}_E(T)$ equals 1 if $T\subseteq E$ and equals 0 otherwise.
We write $|S|$ to denote the cardinality of a set $S$. By linearity of expectation we have $\mathbb{E}(X)= |\mathcal{A}|\,\delta^{a}$.
We write ``a.a.s.'' to mean \emph{asymptotically almost surely},
which means that the property holds with probability which tends to $1$.
Standard asymptotic notation is given at the start of Section~\ref{sec:pre}.
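As an illustration, here is a minimal brute-force sketch (in Python; the helper names are ours, not from the paper) which samples $E$ from $\Omega(\mathbb{F}_{q}^{n},\delta)$ and computes $X_{\mathcal{A}}$ for the family of $3$-APs, so that the empirical mean of $X$ over many samples can be compared with $|\mathcal{A}|\,\delta^{3}$:
\begin{verbatim}
import itertools, random

def sample_E(q, n, delta, rng=random.Random(1)):
    # Omega(F_q^n, delta): keep each vector independently with prob. delta.
    return {x for x in itertools.product(range(q), repeat=n)
            if rng.random() < delta}

def three_aps(E, q):
    # Number of elements of A (non-trivial 3-APs {x, x+v, x+2v}, three
    # distinct vectors) contained in E; arithmetic is mod q (q prime here).
    aps = set()
    for x, y in itertools.permutations(E, 2):
        v = tuple((b - a) % q for a, b in zip(x, y))
        z = tuple((b + c) % q for b, c in zip(y, v))
        if z in E and len({x, y, z}) == 3:
            aps.add(frozenset((x, y, z)))
    return len(aps)

print(three_aps(sample_E(q=5, n=2, delta=0.3), q=5))
\end{verbatim}
This is only feasible for small $q$ and $n$, of course, but it makes the threshold behaviour established below easy to observe empirically.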
Let $P$ be a combinatorial property and $E\in \Omega(\mathbb{F}_{q}^{n},\delta)$. In
this context, we say that $t(n,q)$ is a \emph{(coarse) threshold function} for the property $P$ if
the following holds:
\begin{itemize}
\item[(i)] if $\delta = o(t(n,q))$ then $\mathbb{P}(\text{$E$ satisfies property $P$}) \rightarrow 0$, and
\item[(ii)] if $\delta = \omega (t(n,q))$ then $\mathbb{P}(\text{$E$ satisfies property $P$} ) \rightarrow 1$.
\end{itemize}
\medskip
\begin{theorem}\label{thm:main}
Let $\mathcal{A}$ be one of the patterns defined in Definition~\ref{patterns}.
The functions
\begin{equation}
t(n,q)=
\begin{cases}
q^{-2n/3} & \text{3-APs}, \\
q^{-n+1/3} & \text{right triangles},\\
q^{-3n/4} &\text{parallelograms},\\
q^{-(m+1)n/q^{m}} & \text{$m$-dimensional planes}.
\end{cases}
\end{equation}
are threshold functions for the property
``$E$ contains an element of $\mathcal{A}$''. All asymptotics are taken with respect to the
limits described in Definition~\emph{\ref{patterns}}.
\end{theorem}
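For orientation, consider $3$-APs with $q$ odd (for even $q$ the pattern degenerates, since $x+2v=x$): there are $q^{n}(q^{n}-1)$ pairs $(x,v)$ with $v\neq 0$, the three points $x, x+v, x+2v$ are then automatically distinct, and each progression arises from $O(1)$ such pairs, so $|\mathcal{A}|=\Theta(q^{2n})$ and
\[
\mathbb{E}(X)=|\mathcal{A}|\,\delta^{3}=\Theta(q^{2n}\delta^{3}),
\]
which is of constant order precisely when $\delta=\Theta(q^{-2n/3})$. This computation (ours, included for the reader's convenience) explains the first case in the display above; the other cases arise analogously from the counts established in Section~\ref{sec:proofofmaintheorem}.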
Let Po$(\mu)$ denote the Poisson distribution with mean $\mu$, and write
$X \xrightarrow{\text{d}} \operatorname{Po}(\mu)$ if $X$ tends in distribution to Po$(\mu)$. Writing $\lambda_{\mathcal{A}}:=\mathbb{E}(X_{\mathcal{A}})=|\mathcal{A}|\,\delta^{a}$, in the special case that $\lambda_{\mathcal{A}}$ has a limit $\lambda$ we have the following.
\begin{theorem}\label{thm:Poisson}
Let $\mathcal{A}$ be one of the patterns defined in Definition~\ref{patterns}.
Suppose that $\lambda_{ \mathcal{A}}\rightarrow \lambda$ where $\lambda\in (0,\infty)$.
Then $X \xrightarrow{\text{d}} \operatorname{Po}(\lambda)$.
All asymptotics are taken with respect to the limits described in Definition~\emph{\ref{patterns}}.
\end{theorem}
We now outline our proof technique.
We will show that when $\delta= o(t(n,q))$ then $\mathbb{E}(X_{\mathcal{A}})$ tends to zero.
The negative side of Theorem \ref{thm:main}
follows directly from Markov's inequality (first moment method), since
\[
\mathbb{P}(X_{\mathcal{A}}\geq 1)\leq \mathbb{E}(X_{\mathcal{A}}).
\]
The proof of Theorem \ref{thm:main} is completed using the second moment method.
To establish Theorem \ref{thm:Poisson}, we study the higher order moments of the random variable $X_{\mathcal{A}}$
and apply the method of moments.
The paper is organised as follows. In Section~\ref{sec:pre} we introduce some notation, then introduce
conditions which are sufficient to imply
Theorems~\ref{thm:main} and~\ref{thm:Poisson}.
In Section~\ref{sec:proofofmaintheorem} we prove that these conditions are satisfied
by each of the patterns described in Definition~\ref{patterns},
completing the proof of Theorems~\ref{thm:main} and \ref{thm:Poisson}.
We treat $3$-APs, right triangles, and parallelograms together, using a general framework
which may be applicable to further problems of this kind.
The case of $m$-dimensional planes requires special care and is treated separately in Section \ref{sec:planes}.
Section~\ref{sec:further} contains some final remarks.
\section{Preliminaries}\label{sec:pre}
The following standard asymptotic notation will be used, assuming that asymptotics are taken with
respect to a parameter $N$.
We write $f=O(g)$ if there is a positive constant $C$ such that $|f(N)|\leq C|g(N)|$ for all $N$ sufficiently large,
and write $f=\Omega(g)$ if $g=O(f)$.
If $f=O(g)$ and $f=\Omega(g)$ then $f=\Theta(g)$.
We write $f = o(g)$ if $f(N)/g(N)\rightarrow 0$, and write
$f = \omega(g)$ if $g=o(f)$. Finally, $f\sim g$ means that $f = g(1+o(1))$.
We also use $f=O_{L}(g)$ to mean that $|f(N)|\leq C|g(N)|$ for all $N$ sufficiently large, where $L$ is a list of parameters and $C$ is a positive constant which depends on $L$.
The parameter $\lambda$ always denotes a fixed positive real number.
Recall that $\mathcal{A}=\mathcal{A}_{a}$ is a family of subsets of $\mathbb{F}_{q}^{n}$ such that every element of $\mathcal{A}$ contains $a$ points.
In the calculations for the second moment, we consider intersections of pairs of elements of $\mathcal{A}$.
For $k=0,1,\ldots, a$, define
\begin{equation}\label{eq:ik}
I_{k}=I_{\mathcal{A}, k}=\{(T,T'): T, T' \in \mathcal{A}, \, |T \cap T'|=k\}.
\end{equation}
Let $Y=Y_{\mathcal{A}}$ be the random variable which counts pairs $(T,T')$ of distinct elements of $\mathcal{A}$
with non-empty intersection
such that both $T$, $T'$ are contained in the random set $E$. That is,
\begin{align}\label{eq:y}
Y=Y_{\mathcal{A}}=\sum_{\substack{T,T'\in \mathcal{A}\\
1\leq |T\cap T'|\leq a-1}} {\bf 1}_{E}(T\cup T')
&=\sum_{k=1}^{a-1}\sum_{(T,T')\in I_{k}}{\bf 1}_{E}(T\cup T').
\end{align}
Recall the definition of $X$ from (\ref{eq:x}).
\begin{lemma}[Second moment] \label{lem:second}
Suppose that $\mathbb{E}(X)=|\mathcal{A}| \delta^{a}\rightarrow \infty$, and that
\begin{itemize}
\item[\emph{(C1)}] $|I_{0}|\sim |\mathcal{A}|^{2}$;
\item[\emph{(C2)}] $\mathbb{E}(Y) = o(\mathbb{E}(X)^{2})$.
\end{itemize}
Then a.a.s. $E$ contains some element of $\mathcal{A}$.
\end{lemma}
\begin{proof}
Observe that $|I_a| = |\mathcal{A}|$, and hence that $|I_a|\delta^a = \mathbb{E}(X) = o(\mathbb{E}(X)^2)$. Therefore, using (C1) and (C2) we have
\begin{align*}
\mathbb{E}(X^{2})&=\mathbb{E} \Big(\sum_{(T, T') \in \mathcal{A}\times\mathcal{A}}{\bf 1}_{E}(T) {\bf 1}_{E}(T') \Big)\\
&=|I_{0}|\,\delta^{2a}+|I_{a}|\,\delta^{a}+\mathbb{E}(Y)\\
&\sim \mathbb{E}(X)^{2}.
\end{align*}
Hence, by the Paley-Zygmund inequality,
\[
\mathbb{P}(X>0)\geq \frac{\mathbb{E}(X)^{2}}{\mathbb{E}(X^{2})}\rightarrow 1,
\]
completing the proof.
\end{proof}
Now suppose that $\mathbb{E}(X)\rightarrow \lambda$ where $\lambda \in (0, \infty)$. We consider the factorial
moment $\mathbb{E}((X)_{r})$ of the random variable $X$ for positive integers $r$. Here
\[
(X)_{r}:=X(X-1)\ldots(X-r+1).
\]
We will show that $\mathbb{E}((X)_{r})\rightarrow \lambda^{r}$ for all $r\in \mathbb{N}$.
From this, the method of moments implies that $X$ converges in distribution to a
Poisson distribution with parameter $\lambda$, as explained below.
\begin{lemma}
Let $\mu$ be a positive real constant.
If $\mathbb{E}((X)_r)\rightarrow \mu^r$ for all $r\in\mathbb{N}$ then $X$ converges in
distribution to a Poisson distribution with mean $\mu$.
\label{method-moments}
\end{lemma}
\begin{proof}
Let $Z\sim \operatorname{Po}(\mu)$.
It is well known that the Poisson distribution $Z$ is uniquely determined by its moments.
Now
$\mathbb{E}((X)_{r})\rightarrow \mu^r = \mathbb{E}((Z)_{r})$ for any $r\in \mathbb{N}$.
This implies that $\mathbb{E}(X^{r})\rightarrow \mathbb{E}(Z^{r})$ for any $r\in \mathbb{N}$.
The proof is completed by applying the method of moments
(see for example~\cite[Section 30]{Bi} or~\cite[Chapter 20]{KF}).
\end{proof}
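For the reader's convenience, the factorial moments of $Z\sim \operatorname{Po}(\mu)$ used above can be computed directly:
\[
\mathbb{E}((Z)_{r})=\sum_{k\geq r}\frac{k!}{(k-r)!}\, e^{-\mu}\frac{\mu^{k}}{k!}
=\mu^{r}e^{-\mu}\sum_{j\geq 0}\frac{\mu^{j}}{j!}=\mu^{r}.
\]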
Recall the definition of $I_k$ from (\ref{eq:ik}) and define
\[ I_{\geq 1}=\bigcup_{k=1}^{a}I_{k}. \]
The following lemma can be used to check that the conditions
of Lemma~\ref{method-moments} hold.
\begin{lemma}[Higher moments] \label{lem:high}
Assume that $|\mathcal{A}|\rightarrow\infty$ and let $X=X_{\mathcal{A}}$ be as defined in (\ref{eq:x}).
Suppose that $\mathbb{E}(X)\rightarrow \lambda$ where $\lambda$ is a positive real number,
and that the following conditions hold:
\begin{itemize}
\item[\emph{(C1)}] $| I_{0}|\sim|\mathcal{A}|^{2}$;
\item[\emph{(C2)}] $\mathbb{E}(Y)=o(\mathbb{E}(X)^{2})$;
\item[\emph{(C3)}] For any fixed integer $r\geq 2$ the high moment
$\mathbb{E}(X^{r})=O_{r}(1)$.
\end{itemize}
Then $\mathbb{E}((X)_r)\rightarrow \lambda^r$ for all $r\in\mathbb{N}$.
\end{lemma}
\begin{proof} For any integer $r\geq 2$, define
\[
(\mathcal{A})_{r}:=\{(T_{1},\ldots, T_{r})\in \mathcal{A}^{r} \, : \, T_1,\ldots, T_r \text{ are pairwise distinct} \}.
\]
Let
\[
\Gamma_{0}=\{(T_{1},\ldots, T_{r})\in (\mathcal{A})_{r} \, : \, T_1,\ldots, T_r \text{ are pairwise disjoint} \},
\]
and write
$\Gamma_{\geq 1}=(\mathcal{A})_{r}\backslash \Gamma_{0}$. Choosing the two positions of an intersecting pair and bounding the remaining $r-2$ entries trivially, we observe that
\[
|\Gamma_{\geq 1}|\leq {r \choose 2}|I_{\geq1}||\mathcal{A}|^{r-2}.
\]
Condition (C1) implies that $|I_{\geq1}|=o(|\mathcal{A}|^{2})$, and hence
\[
|\Gamma_{\geq 1}|=o(|\mathcal{A}|^{r}).
\]
It follows that
\begin{equation}\label{eq:Gammazero}
|\Gamma_{0}|\sim |(\mathcal{A})_{r}|.
\end{equation}
Write the random variable $(X)_{r}$ as
\[
(X)_{r} =\sum_{t \in (\mathcal{A})_{r}} {\bf 1}_{E} (t)
= \sum_{t \in \Gamma_{0} } {\bf 1}_{E} (t) +\sum_{t \in \Gamma_{\geq 1} } {\bf 1}_{E} (t).
\]
Applying the estimate \eqref{eq:Gammazero} and the assumption that $|\mathcal{A}|\rightarrow \infty$, we obtain
\begin{equation}\label{eq:expectationGamma}
\mathbb{E}\Big(\sum_{t \in \Gamma_{0} } {\bf 1}_{E} (t)\Big)=|\Gamma_{0}|\,\delta^{ar}\sim |(\mathcal{A})_{r}|\,\delta^{ar}\sim \lambda^{r}.
\end{equation}
It remains to prove that $\mathbb{E}\Big(\sum_{t \in \Gamma_{\geq 1} } {\bf 1}_{E} (t)\Big) = o(1)$.
Recalling the definition of the random variable $Y$ from (\ref{eq:y}), if $Y=0$ with probability one then $\sum_{t\in \Gamma_{\geq 1} } {\bf 1}_{E} (t)=0$ with probability one, which finishes the proof in this trivial case.
Now suppose that $Y>0$ with positive probability, and define
\[
N=N_{Y,r}:=\mathbb{E}(Y)^{-\frac{1}{2r}}.
\]
This is well-defined for any particular values of $q, n$. Thus we obtain
\begin{align}
\sum_{t \in \Gamma_{\geq 1} } {\bf 1}_{E} (t)
&= {\bf 1}_{(X< N)}\sum_{t \in \Gamma_{\geq 1} } {\bf 1}_{E} (t)
\,\, + \,\, {\bf 1}_{(X\geq N)}\sum_{t \in \Gamma_{\geq 1} } {\bf 1}_{E} (t)\nonumber \\
&\leq \binom{r}{2}\, Y\, N^{r-2} \,\, +\,\, {\bf 1}_{(X\geq N)}X^{r}.
\label{eq:bad}
\end{align}
By assumption (C2) and since $\mathbb{E}(X)\rightarrow\lambda$, we have $\mathbb{E}(Y)=o(\mathbb{E}(X)^{2})\rightarrow0$; hence, by definition of $N$,
\begin{equation}\label{eq:firstpart}
\mathbb{E}(YN^{r-2})= \mathbb{E}(Y)^{1 - \frac{r-2}{2r}} = \mathbb{E}(Y)^{\frac{r+2}{2r}} \rightarrow 0.
\end{equation}
Applying the Cauchy-Schwarz inequality to ${\bf 1}_{(X\geq N)}X^{r}$, we obtain
\begin{align}
\mathbb{E}\Big({\bf 1}_{(X\geq N)}X^{r}\Big)&\leq \mathbb{P}(X\geq N)^{1/2}\,\, \mathbb{E}(X^{2r})^{1/2}\nonumber \\
&\leq C_{2r}^{1/2}\, \left(\mathbb{E}(X)\, \mathbb{E}(Y)^{\frac{1}{2r}}\right)^{1/2}\rightarrow0.
\label{eq:Cauchy}
\end{align}
The second inequality holds using Markov's inequality, the definition of $N$ and
the assumption (C3).
Combining the estimates \eqref{eq:bad}, \eqref{eq:firstpart}, and \eqref{eq:Cauchy}, we obtain
\[
\mathbb{E}\Big(\sum_{t \in \Gamma_{\geq1} } {\bf 1}_{E} (t)\Big)=O\Big(\mathbb{E}(YN^{r-2})+ \mathbb{E}({\bf 1}_{(X\geq N)}X^{r})\Big)\rightarrow 0.
\]
Together with the estimate \eqref{eq:expectationGamma}, we obtain that
$\mathbb{E}((X)_{r})\rightarrow \lambda^{r}$, as required.
\end{proof}
\section{Proof of our main results}\label{sec:proofofmaintheorem}
Recall that every element of $\mathcal{A}$ contains $a$ vectors.
\begin{lemma}\label{lem:3}
Suppose there exist constants $b>0$ and $c\geq 0$ with $b\leq a$ such that $|\mathcal{A}|=\Theta (q^{bn-c})$. Assume that for any set $S$ of $k$ distinct points in $\mathbb{F}_{q}^{n}$, the number of elements of $\mathcal{A}$ which contain $S$ is
\begin{equation}\label{eq:10}
\begin{cases}
O(q^{(b-k)n-c}) & \text{if $1\leq k\leq b-1$}, \\
O(1) &\text{otherwise}.
\end{cases}
\end{equation}
Here all asymptotics are as $q+n\rightarrow \infty$, if $b<a$, and are as $q\rightarrow \infty$ if $b=a$.
Then the event ``contains an element of $\mathcal{A}$'' has a coarse threshold function
\[
t(n, q)=q^{(c-bn)/a}.
\]
Furthermore, if
$\mathbb{E}(X)\rightarrow \lambda$ for some constant $\lambda>0$ then $X \xrightarrow{\text{d}} \operatorname{Po}(\lambda)$.
\end{lemma}
\begin{proof}
By definition of $t(n,q)$, if $\delta =o(t(n,q))$ then
\[ \mathbb{E}(X) = |\mathcal{A}|\, \delta^a = |\mathcal{A}|\, o( q^{c-bn} ) = o(1).\]
This establishes the negative side of the threshold result.
Next we estimate the cardinality of $I_{k}$ given in \eqref{eq:ik}.
For any $(T,T')\in I_{k}$ with $ k= 1, \ldots, b-1$, the sets $T$ and $T'$ intersect in a set of $k$ points of $\mathbb{F}_{q}^{n}$. It follows that
\[
|I_{k}|=O(q^{kn})\,O(q^{2(b-k)n-2c})=|\mathcal{A}|^{2}\,O(q^{-kn}).
\]
For the case $b\leq k\leq a$, since any set of $k\geq b$ points is contained in $O(1)$ elements of $\mathcal{A}$ and $a$ is a fixed constant, we obtain $|I_{k}|\leq |\mathcal{A}|\,{a \choose k}\,O(1)$. Therefore we obtain the following upper bounds:
\begin{equation}\label{eq:3ik}
|I_{k}|=
\begin{cases}
|\mathcal{A}|^{2}\,O(q^{-kn}) & \text{if $1\leq k\leq b-1$}, \\
|\mathcal{A}|^{2}\,O(q^{-bn+c}) &\text{otherwise}.
\end{cases}
\end{equation}
It follows that $|I_{\geq 1}|=|\mathcal{A}|^{2}\,O(q^{-n}+q^{-bn+c})$, where both error terms tend to zero (note that $q^{-bn+c}=O(|\mathcal{A}|^{-1})\rightarrow 0$), and hence $|I_{0}|\sim |\mathcal{A}|^{2}$, which gives condition (C1).
For the random variable $X$ given in \eqref{eq:x} and random variable $Y$ given in \eqref{eq:y}, we intend to show that if $\mathbb{E}(X)=|\mathcal{A}|\,\delta^a\rightarrow \infty$ or $\mathbb{E}(X)=|\mathcal{A}|\,\delta^a\rightarrow \lambda$ for some fixed $\lambda >0$ then $\mathbb{E}(Y)=o(\mathbb{E}(X)^{2})$. Observe that
\begin{align*}
\mathbb{E}(Y)&=\sum_{k=1}^{a-1}|I_k|\,\delta^{2a-k}\\
&=|\mathcal{A}|^{2}\delta^{2a}\,\left(\sum_{k=1}^{b-1 }O(q^{-nk}\delta^{-k})+\sum_{k=b}^{a-1}O(q^{-bn+c}\delta^{-k})\right).
\end{align*}
Note that $\mathbb{E}(X)=\Theta(q^{bn-c})\delta^{a}$.
Under the given asymptotic conditions, we obtain
\[
q^{n}\delta=\Theta(q^{n(1-\frac{b}{a})+\frac{c}{a}}\mathbb{E}(X)^{\frac{1}{a}})\rightarrow \infty,
\]
and
\[
q^{bn-c}\delta^{k}=\Theta(\mathbb{E}(X))\delta^{k-a} \rightarrow \infty
\]
provided $\mathbb{E}(X)\rightarrow \infty$ or $\mathbb{E}(X)\rightarrow \lambda$ for some fixed positive $\lambda$. Hence $\mathbb{E}(Y)=o(\mathbb{E}(X)^{2})$, so (C2) holds.
If $\delta = \omega(t(n, q))$ then $\mathbb{E}(X)\rightarrow \infty$, and the above arguments show that (C1) and (C2) hold.
By Lemma~\ref{lem:second}, we conclude that a.a.s.\ $E$ contains some element of $\mathcal{A}$.
This completes the proof that $t(n,q)$ is a coarse threshold function for $\mathcal{A}$.
Now assume that $\mathbb{E}(X)\rightarrow \lambda$ for some fixed positive $\lambda$.
We will establish (C3) by proving that $\mathbb{E}(X^{r})=O_{r}(1)$, by induction on $r$. The case $r=1$ holds by assumption. Now we suppose that $\mathbb{E}(X^{r})\leq C_{r}$ holds for some $r\geq 1$. Observe that
\begin{align*}
\mathbb{E}(X^{r+1})&=\mathbb{E}\left(\sum_{(T_{i_{1}}, \ldots, T_{i_{r+1}} )\in \mathcal{A}^{r+1}} {\bf 1}_{E}(T_{i_1})\ldots {\bf 1}_{E}(T_{i_{r+1}}) \right)\\
&=\sum_{(T_{i_{1}}, \ldots, T_{i_{r+1}} )\in \mathcal{A}^{r+1}} \mathbb{P}\left(\cup_{\ell=1}^{r+1}T_{i_{\ell}} \subset E \right)\\
&=\sum_{(T_{i_{1}}, \ldots, T_{i_{r+1}} )\in \mathcal{A}^{r+1}} \mathbb{P}\left(T_{i_{r+1}}\subset E\big | \cup_{\ell=1}^{r}T_{i_{\ell}} \subset E \right)\mathbb{P}\left(\cup_{\ell=1}^{r}T_{i_{\ell}} \subset E\right).
\end{align*}
Hence it is sufficient to prove that there exists a positive constant $C_{r}$ such that for any
$T_{1}, \ldots, T_{r} \in \mathcal{A}$,
\begin{equation*}
\sum_{T\in \mathcal{A}}\mathbb{P}(T\subset E \big| \cup_{\ell=1}^{r}T_{\ell}\subset E)\leq C_{r}.
\end{equation*}
Let $B=\bigcup_{i=1}^{r}T_{i}$. For $0\leq k\leq a $, define
\[
J_{k}=\{T \in \mathcal{A}: |T \cap B|=k\}.
\]
It follows from \eqref{eq:10} that
\begin{equation*}
|J_{k}|=
\begin{cases}
O_{r}(q^{(b-k)n-c}) & \text{if $0\leq k\leq b-1$}, \\
O_{r}(1) &\text{otherwise}.
\end{cases}
\end{equation*}
Since $\mathbb{E}(X)=\Theta(q^{bn-c}\delta^{a})\rightarrow \lambda$, we obtain $q^{n}\delta\rightarrow \infty$. Therefore
\begin{align*}
\sum_{T\in \mathcal{A}}\mathbb{P}(T \subset E \big| \cup_{\ell=1}^{r}T_{\ell}\subset E)&=\sum^{a}_{k=0}|J_{k}|\,\delta^{a-k}\\
&=\sum^{b-1}_{k=0}\lambda\, O(q^{-kn}\,\delta^{-k})+\sum^{a}_{k=b}O(1)\,\delta^{a-k}\\
&=O_{\lambda,r}(1).
\end{align*}
This shows that (C3) holds, and hence by Lemma~\ref{lem:high} we conclude that
$X \xrightarrow{\text{d}} \operatorname{Po}(\lambda)$, completing the proof.
\end{proof}
We can now prove our results for three of the patterns.
\begin{corollary}
Theorem \ref{thm:main} and Theorem \ref{thm:Poisson} hold for $3$-APs, parallelograms, and right triangles.
\end{corollary}
\begin{proof}
It suffices to prove that the conditions of Lemma \ref{lem:3} hold with the parameters given in Table~\ref{t:params},
noting that the asymptotic assumptions of Lemma~\ref{lem:3} match those specified in Definition~\ref{def:def}.
\begin{table}[ht!]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c |c |c| c|}
\hline
$\mathcal{A}_{a}$ & $a$ & $b$ & $c$ \\
\hline
3-APs & 3 & 2 & 0\\
\hline
Parallelograms & 4 & 3 & 0 \\
\hline
Right triangles & 3 & 3 & 1 \\
\hline
\end{tabular}
\caption{Parameter values for 3-APs, parallelograms and right triangles}
\label{t:params}
\end{center}
\end{table}
We give the proof for right triangles only: the proof for the other two patterns follows similarly.
Let $\mathcal{A}$ be the set of all right triangles in $\mathbb{F}_{q}^n$.
Clearly $a=3$ as each right triangle contains $3$ points. Each right triangle can be chosen as follows:
we first choose two distinct vectors $x, y\in \mathbb{F}_{q}^{n}$, in $\Theta(q^{2n})$ different ways, then we choose a
vector $z\in \mathbb{F}_{q}^{n}$ such that
\begin{equation}\label{eq:orthogonal}
(z-x)\cdot(y-x)=0 \quad \text{ or } \quad (z-y) \cdot (x-y)=0.
\end{equation}
Since $\{\xi\in \mathbb{F}_{q}^{n}: \xi \cdot (x-y)=0\}$ is an $(n-1)$-dimensional subspace, there are $\Theta(q^{n-1})$ choices of $z$ such that \eqref{eq:orthogonal} holds.
Thus $|\mathcal{A}|=O(q^{3n-1})$. For a lower bound, observe that each right triangle in $\mathbb{F}_{q}^{n}$ was chosen at most $O(1)$ times using this process. Furthermore, the above process may also produce triples $(x, y, z)$ with $z=x$ or $z=y$, and
these are not right triangles. Therefore
\[
\Theta(q^{3n-1})\leq O(1)\, |\mathcal{A}|\, +O(q^{2n}).
\]
Since $n\geq 2$ it follows that $|\mathcal{A}| = \Theta (q^{3n-1})$,
so we take $b=3$ and $c=1$ as in Table~\ref{t:params}.
Furthermore, the above argument also implies that condition \eqref{eq:10} holds for right triangles.
\end{proof}
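As a sanity check on the count $|\mathcal{A}|=\Theta(q^{3n-1})$, one can enumerate right triangles by brute force for small prime $q$ (a sketch of ours in Python; for prime powers $q$ the dot product must be computed in $\mathbb{F}_{q}$ rather than mod $q$):
\begin{verbatim}
import itertools

def count_right_triangles(q, n):
    # Brute force over all 3-subsets of F_q^n (q prime); the right angle
    # may sit at any of the three vertices.
    pts = list(itertools.product(range(q), repeat=n))
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v)) % q
    count = 0
    for x, y, z in itertools.combinations(pts, 3):
        if any(dot(tuple((b - a) % q for a, b in zip(p, r)),
                   tuple((c - a) % q for a, c in zip(p, s))) == 0
               for p, r, s in ((x, y, z), (y, x, z), (z, x, y))):
            count += 1
    return count

# e.g. count_right_triangles(5, 2) can be compared against 5**(3*2 - 1).
\end{verbatim}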
\subsection{Affine planes}\label{sec:planes}
Let $G(n,m)$ be the collection of all $m$-dimensional linear subspaces of $\mathbb{F}_{q}^{n}$, and $\mathcal{A}=A(n,m)$ be the family of all $m$-dimensional planes. It is not hard to obtain (see \cite[Theorem 6.3]{Cameron} or the proof of Lemma~\ref{lem:intersection}(i), below),
\begin{equation*}
|G(n,m)|=\frac{(q^{n}-1)(q^{n}-q)\ldots (q^{n}-q^{m-1})}{(q^{m}-1)(q^{m}-q)\ldots (q^{m}-q^{m-1})},
\end{equation*}
and hence
\begin{equation}\label{eq:G}
|G(n,m)|=\Theta_{q, m}(q^{mn}).
\end{equation}
Since every element of $A(n,m)$ is a translate of some $m$-dimensional subspace, and each $m$-dimensional subspace gives rise to exactly $q^{n-m}$ distinct translates, we obtain
\begin{equation}\label{eq:A(n,m)}
|A(n,m)|=|G(n,m)|\, q^{n-m}=\Theta_{q,m}(q^{(m+1)n}).
\end{equation}
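For example, for lines ($m=1$) these formulas give $|G(n,1)|=(q^{n}-1)/(q-1)$ and $|A(n,1)|=q^{n-1}\,(q^{n}-1)/(q-1)=\Theta_{q}(q^{2n})$: a line is determined by its direction ($(q^{n}-1)/(q-1)$ choices) together with one of its $q^{n-1}$ translates.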
By definition of $t(n, q)=q^{-\frac{(m+1)n}{q^{m}}}$, if $\delta=o(t(n, q))$ then
\[
\mathbb{E}(X)=|A(n,m)|\delta^{q^{m}}=o(1).
\]
This establishes the negative side of the threshold result.
Note that for any $T, T' \in A(n,m)$, the intersection $T\cap T'$ is either empty or a $k$-dimensional plane for some $k=0, \ldots, m$. Recall the definition of $I_{q^{k}}$ from \eqref{eq:ik}.
\begin{lemma}\label{lem:intersection} Using the above notation, we obtain the following estimates.
\begin{itemize}
\item[\emph{(i)}] Let $V\in A(n,k)$. Then for $m\geq k,$
\[
|\{W\in A(n,m): V\subset W\}|=|G(n-k,m-k)|=\Theta (q^{(m-k)n}).
\]
\item[\emph{(ii)}] For $k=0, \ldots, m$ we have
\[
|I_{q^{k}}|\leq |A(n,k)|| G(n-k,m-k)|^{2}=| A(n,m)|^{2} \, O(q^{-(k+1)n}).
\]
It follows that
\[
| I_{\geq 1}|=| A(n,m)|^{2}\, O(q^{-n}),
\]
and hence
\[
|I_{0}|= |A(n,m)|^{2}\, (1+O(q^{-n})).
\]
\item[\emph{(iii)}] Recall that $\mathbb{E}(X)=| A(n,m)|\delta^{q^{m}}=:\lambda_{\mathcal{A}}$. If $\lambda_{\mathcal{A}}\rightarrow \infty$ or $\lambda_{\mathcal{A}}\rightarrow \lambda$ for some positive $\lambda$, then for any $ k=0, \ldots, m-1$,
\begin{equation}
q^{-n(k+1)}\delta^{-q^{k}}=o(1).
\end{equation}
\end{itemize}
\vspace*{-\baselineskip}
\end{lemma}
\begin{proof}
For (i), we may assume that $m>k$ and, after translating, that $V\in G(n,k)$ (the case $m=k$ is trivial). Observe that to obtain an $m$-dimensional subspace which contains $V$, it is sufficient to choose a set $S=\{u_{1}, \ldots, u_{m-k}\}$ of vectors of $\mathbb{F}_{q}^{n}$ such that $S\cup V$ spans an $m$-dimensional subspace. There are $q^{n}-q^{k}$ choices for $u_{1}$ (all except the vectors from the subspace $V$). To choose $u_{2}$, we have $q^{n}-q^{k+1}$ choices to ensure that $u_{2}$ is independent of $\{u_{1}\}\cup V$, and so on. In the end there are
\[
(q^{n}-q^{k})(q^{n}-q^{k+1})\ldots (q^{n}-q^{m-1})
\] ways to choose $S$. Note that for any $m$-dimensional subspace which contains $V$, there are
\[
(q^{m}-q^{k})(q^{m}-q^{k+1})\ldots (q^{m}-q^{m-1})
\] choices of $S$ which generate (or span) the same subspace. It follows that the number of $m$-dimensional subspaces which contain $V$ is
\[
\frac{(q^{n}-q^{k})(q^{n}-q^{k+1})\ldots (q^{n}-q^{m-1})}{(q^{m}-q^{k})(q^{m}-q^{k+1})\ldots (q^{m}-q^{m-1})}=|G(n-k,m-k)|,
\]
and applying \eqref{eq:G} completes the proof of (i).
To prove (ii), note that for any $(T, T')\in I_{q^{k}}$, $T$ intersects $T'$ in some $k$-dimensional plane. For every $k$-plane there are $|G(n-k,m-k)|$ $m$-planes which contain it, and hence by (i) above we have
\[
|I_{q^{k}}|\, \leq |A(n,k)|| G(n-k,m-k)|^{2}
= | A(n,m)|^{2}\, O(q^{-n(k+1)}).
\]
To establish (iii), since $\lambda_{\mathcal{A}}=|A(n,m)|\delta^{q^{m}}$, by \eqref{eq:A(n,m)} we have
\[
\delta^{q^{m}}=\lambda_{\mathcal{A}}\, q^{-n(m+1)}\, \Theta(1).
\]
Then
\begin{align*}
q^{n(k+1)}\, \delta^{q^{k}}&=q^{n(k+1)}\,\lambda_{\mathcal{A}}^{q^{k-m}}\, q^{-n(m+1)\,q^{k-m}}\, \Theta(1)\\
&=\lambda_{\mathcal{A}}^{q^{k-m}}q^{n(k+1-(m+1)q^{k-m})}\,\Theta(1).
\end{align*}
Observe that for any $x\geq 0$ and $q\geq 3$,
\[
\frac{x+1}{q^{x}}>\frac{x+2}{q^{x+1}}.
\]
It follows that $k+1>(m+1)q^{k-m} $ for any $k=0, 1, \ldots, m-1$. Define
\[
\alpha=\alpha(q,m)=\min\{k+1-(m+1)q^{k-m}\,: k\in \mathbb{N}, \, k=0, \ldots, m-1\}.
\]
Thus $\alpha >0$. It follows that for any $ k=0, \ldots, m-1$,
\[
q^{n(k+1)}\,\delta^{q^{k}}=\lambda_{\mathcal{A}}^{q^{k-m}}\,q^{n\alpha}\,\Omega_{q,m}(1).
\]
Since $\lambda_{\mathcal{A}}^{q^{k-m}}$ is bounded away from zero under either assumption on $\lambda_{\mathcal{A}}$, and $q^{n\alpha}\rightarrow \infty$ as $n\rightarrow \infty$, the left-hand side tends to infinity, completing the proof.
\end{proof}
We immediately have the following consequence.
\begin{lemma}\label{lem:yy}
Suppose that $\mathbb{E}(X)\rightarrow \infty$ or $\mathbb{E}(X)\rightarrow \lambda$ for some fixed $\lambda >0$. Then $\mathbb{E}(Y)=o(\mathbb{E}(X)^{2})$, so \emph{(C2)} holds.
\end{lemma}
\begin{proof}
Observe that, using Lemma \ref{lem:intersection}(ii),
\begin{equation*}
\mathbb{E}(Y)=\sum_{k=0}^{m-1}|\,I_{q^{k}}|\,\delta^{2q^{m}-q^{k}}=\mathbb{E}(X)^2 \sum_{k=0}^{m-1}O(q^{-n(k+1)}\,\delta^{-q^{k}}).
\end{equation*}
Lemma~\ref{lem:intersection}(iii) implies that
\[
\sum_{k=0}^{m-1}O(q^{-n(k+1)}\,\delta^{-q^{k}})=o(1),
\]
as required.
\end{proof}
Recall that $t(n, q)=q^{-\frac{(m+1)n}{q^{m}}}$. If $\delta=\omega(t(n,q))$ then $\mathbb{E}(X)=|A(n, m)| \delta^{q^{m}}\rightarrow \infty$. Lemma \ref{lem:intersection}(ii) and Lemma \ref{lem:yy} imply that the hypotheses of Lemma \ref{lem:second} hold for $m$-dimensional planes, completing the proof of Theorem \ref{thm:main} for $m$-dimensional planes.
For later use, we state the following easy fact as a lemma. It follows directly from Lemma \ref{lem:intersection}(i).
\begin{lemma}\label{lem:cover}
Given $F=\{x_{1}, \ldots, x_{t}\}\subseteq \mathbb{F}_{q}^{n}$, let $d(F)$ be the smallest integer $k$ such that $F\subseteq V$ for some $V\in A(n, k)$. Then for $m\geq d(F)$,
\[
|\{T\in A(n,m): F\subseteq T\}|=|G(n-d(F),m-d(F))|=\Theta_{q, m}(q^{n(m-d(F))}).
\]
\end{lemma}
Now we show that if $\mathbb{E}(X)\rightarrow \lambda$ for some fixed positive $\lambda$ then for any $r\in \mathbb{N}$, $\mathbb{E}(X^{r})=O_{r}(1)$. Applying the same argument as in the proof of Lemma \ref{lem:3}, it is sufficient to prove the following result.
\begin{lemma}\label{lem:condition}
Assume that $\mathbb{E}(X)\rightarrow \lambda$ for some fixed positive $\lambda$. Let $T_{1}, \ldots, T_{r} \in A(n,m)$. Then
\begin{equation*}
\sum_{T\in A(n,m)}\mathbb{P}(T\subseteq E \mid \cup_{i=1}^{r}T_{i}\subseteq E)=O_{q,m,\lambda,r}(1).
\end{equation*}
\end{lemma}
\begin{proof}
Let $B=\cup_{i=1}^{r}T_{i}$. For $k=0, 1, \ldots, q^{m}$, define
\[
J_{k}=\{T \in A(n,m): |T \cap B|=k\}.
\]
We will split the sum up as follows:
\begin{align}
& \sum_{T\in A(n,m)}\mathbb{P}(T\subseteq E \big| \cup_{i=1}^{r}T_{i}\subseteq E) \nonumber \\
&\qquad =\sum_{k=0}^{q^{m}}|J_{k}|\delta^{a-k} \nonumber \\
&\qquad \leq |J_{0}|\delta^{a} +|J_{1}|\delta^{a-1}+ \sum_{j=1}^{m-1} \sum_{q^{j-1}<k\leq q^{j}}|J_{k}|\delta^{a-q^{j}}+\sum_{q^{m-1}<k\leq q^{m}}|J_{k}| \delta^{a-k}.
\label{eq:split}
\end{align}
Here $a:=q^{m}$ and $\lambda=\mathbb{E}(X)=O(1)$; in the last step we used $\delta^{a-k}\leq\delta^{a-q^{j}}$ for $q^{j-1}<k\leq q^{j}$.
First, $|J_{0}|\leq |A(n,m)|$ so
\begin{equation}
\label{eq:0}
|J_{0}|\delta^{a}\leq |A(n, m)|\delta^{a}=\lambda=O(1).
\end{equation}
Next, by (\ref{eq:A(n,m)}),
\[
|J_{1}|\leq |B|\, |G(n, m)|=O(q^{-n})\,|A(n, m)|.
\]
Applying Lemma \ref{lem:intersection}(iii) we have
\begin{equation}
|J_{1}|\delta^{a-1}=O( q^{-n} \delta^{-1} \lambda ) =o(1).
\label{eq:1}
\end{equation}
Now let $k\in \{2, 3, \ldots, q^{m}\}$. Then there exists $j\in \{ 1, \ldots, m\}$ such that $q^{j-1}<k\leq q^{j}$.
Let $F\subseteq B$ with $|F|=k$. The smallest plane which contains $F$ has dimension at least~$j$, since a $(j-1)$-dimensional plane contains only $q^{j-1}<k$ points.
Hence Lemma \ref{lem:cover} implies that
\[
|\{T\in A(n, m): F\subseteq T\}|
=O(q^{n(m-j)}).
\]
Since $|B|\leq r q^{m}=O(1)$, there are ${|B| \choose k}=O(1)$ different choices for a subset $F$
consisting of
$k$ points of $B$. Therefore, by (\ref{eq:A(n,m)}),
\begin{equation*}
|J_{k}|=O(q^{n(m-j)})=O(q^{-n(j+1)})\,|A(n,m)|
\end{equation*}
and since $q, m$ are fixed, we obtain
\begin{equation}
\label{sumj}
\sum_{q^{j-1}<k\leq q^{j}} |J_{k} |= O(q^{-n(j+1)})\, |A(n,m)|.
\end{equation}
If $j \leq m-1$ then applying Lemma \ref{lem:intersection}(iii) gives
\begin{align}
\sum_{j=1}^{m-1} \sum_{q^{j-1}<k\leq q^{j}}|J_{k}|\delta^{a-q^{j}}
&=\sum_{j=1}^{m-1} O(q^{-n(j+1)}\delta^{-q^{j}}) |A(n,m)| \delta^{a} \nonumber \\
&= o(1).
\label{eq:thirdterm}
\end{align}
Finally, when $j=m$ we have, by (\ref{eq:A(n,m)}) and (\ref{sumj}),
\begin{equation}
\sum_{q^{m-1}<k\leq q^{m}}|J_{k}| \delta^{a-k} \leq
\sum_{q^{m-1}<k\leq q^{m}}|J_{k}| =
O(1).
\label{eq:m}
\end{equation}
The result follows by substituting (\ref{eq:0}), (\ref{eq:1}), (\ref{eq:thirdterm}) and (\ref{eq:m}) into (\ref{eq:split}).
\end{proof}
Applying Lemma \ref{lem:condition} and arguing as in the proof of Lemma \ref{lem:3}, we obtain that condition (C3) of Lemma~\ref{lem:high} holds for $m$-dimensional planes. Together with Lemma \ref{lem:intersection}(ii) and Lemma \ref{lem:yy}, which give conditions (C1) and (C2), this completes the proof of Theorem \ref{thm:Poisson} for $m$-dimensional planes.
\section{Further results}\label{sec:further}
We now give some applications to extremal problems, and discuss an Erd{\H o}s--R{\' e}nyi
variant of the random model.
\subsection{Applications to extremal problems}
Recall the definition of $E=E^{\omega}$ from Section~\ref{sec:introduction}.
Borrowing notation from extremal graph theory, let
$\operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A})$ denote the maximal cardinality of a subset of $\mathbb{F}_{q}^{n}$ which is $\mathcal{A}$-free.
That is,
\[
\operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A})=\max\{|B|: B \subseteq \mathbb{F}_{q}^{n},\,\, B \text{ does not contain any element of $\mathcal{A}$ }\}.
\]
For the random variable $X$, Markov's inequality implies that
\begin{equation}\label{eq:markov}
\mathbb{P}(X\geq 1)\leq \mathbb{E}(X)=|\mathcal{A}| \delta^{a}.
\end{equation}
Chebyshev's inequality implies that for our random subset $E$,
\begin{equation}\label{eq:chebyshev}
\mathbb{P}\big(| | E| - q^{n}\delta |\geq
\dfrac{1}{2}q^{n}\delta\big)\leq \frac{4q^{n}\delta(1-\delta)}{(q^{n}\delta)^{2}}
= \frac{4(1-\delta)}{q^n \delta}.
\end{equation}
That is, $|E|$ is concentrated around $q^n \delta$ when $q^n\delta$ is large.
Applying the estimates \eqref{eq:markov} and \eqref{eq:chebyshev} leads to the following lower bound on
$\operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A}_{a})$.
\begin{lemma} Suppose that $|\mathcal{A}| \delta^{a}= \tfrac{1}{2}$ and $q^{n}\delta\geq 100$. Then there exists a subset
$S\subseteq \mathbb{F}_q^n$ which is $\mathcal{A}$-free and satisfies $|S| =\Theta(q^{n}\delta)$. This implies that
\[
\operatorname{ex}(\mathbb{F}_{q}^{n}, \mathcal{A})=\Omega(q^{n}\delta)=\Omega(q^{n}|\mathcal{A}|^{-\frac{1}{a}}).
\]
\end{lemma}
\begin{proof}
By \eqref{eq:markov}, $\mathbb{P}(X\geq 1)\leq |\mathcal{A}|\,\delta^{a}=\tfrac{1}{2}$, while by \eqref{eq:chebyshev}, $\mathbb{P}\big(|E|< \tfrac{1}{2}q^{n}\delta\big)\leq 4/(q^{n}\delta)\leq \tfrac{1}{25}$. Hence with probability at least $\tfrac{1}{2}-\tfrac{1}{25}>0$ the set $E$ is $\mathcal{A}$-free and satisfies $|E|\geq \tfrac{1}{2}q^{n}\delta$, so we may take $S=E$; note that $\delta=(2|\mathcal{A}|)^{-1/a}$.
\end{proof}
This leads to the following lower bounds for the extremal problem for the patterns defined
in Definition~\ref{patterns}.
\begin{corollary}
As usual, let $q\geq 2$ be a prime power and let $n$ be a positive integer.
Let $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ be the collection of all the $3$-APs and
parallelograms in $\mathbb{F}_q^n$, respectively.
Similarly, let $\mathcal{A}_3$ denote the collection of all right triangles in
in $\mathbb{F}_q^n$, where $n\geq 2$, and let
$\mathcal{A}_4$ denote the collection of all $m$-dimensional planes in $\mathbb{F}_q^n$,
where $m,q$ are fixed and $q\geq 3$. Then
\begin{align*}
\operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A}_{1})=\Omega(q^{\frac{n}{3}}), &\qquad \operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A}_{2})=\Omega(q^{\frac{n}{4}}),\\
\operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A}_{3})=\Omega(q^{\frac{1}{3}}), &\qquad \operatorname{ex}(\mathbb{F}_{q}^{n},\mathcal{A}_{4})=\Omega\left(q^{n(1- \frac{m+1}{q^{m}})}\right).
\end{align*}
\end{corollary}
Note that the extremal problem for $3$-APs in $\mathbb{F}_{3}^{n}$ is called the cap set problem, and a much stronger lower bound is known. For the cap set problem, the best known bounds are
\[
2.2^{n}\leq \operatorname{ex}(\mathbb{F}_{3}^{n}, \text{ 3-AP} )\leq 2.756^{n}.
\]
See \cite{Edel} for the lower bound and \cite{EG} for the upper bound (and
further background).
To our knowledge the other lower bounds appear to be new.
\subsection{Erd\H{o}s-R\'enyi model for finite vector spaces}
In this subsection, we consider another random model in $\mathbb{F}_{q}^{n}$ which is an analogue of Erd\H{o}s-R\'enyi random graphs.
Let $M=M_{n,q}\leq q^{n}$ be a positive integer. Choose $E=E^{\omega}$ uniformly at random from the set of all subsets of $\mathbb{F}_{q}^{n}$ of cardinality $M$. Denote this probability space by $\Omega(\mathbb{F}_{q}^{n}, M)$.
Note that for a subset $F\subseteq \mathbb{F}_{q}^{n}$ with $|F|\leq M$, we have
\begin{equation}\label{eq:ppp}
\mathbb{P}(F \subseteq E)=\frac{M(M-1)\ldots(M-|F|+1)}{q^{n}(q^{n}-1)\ldots(q^{n}-|F|+1)}.
\end{equation}
It follows that if $|F|=O(1)$ and $M_{n,q}\rightarrow \infty$ then the identity \eqref{eq:ppp} becomes
\begin{equation*}
\mathbb{P}(F\subseteq E)\sim \left(\frac{M_{n,q}}{q^{n}}\right)^{|F|}.
\end{equation*}
For the model $\Omega(\mathbb{F}_{q}^{n},M)$, we can obtain similar results to Theorems \ref{thm:main} and \ref{thm:Poisson} by taking $\frac{M_{n,q}}{q^{n}}$ instead of $\delta$ in our former proofs. We omit these arguments.
| {
"timestamp": "2018-05-15T02:13:10",
"yymm": "1805",
"arxiv_id": "1805.03778",
"language": "en",
"url": "https://arxiv.org/abs/1805.03778",
"abstract": "The study of substructures in random objects has a long history, beginning with Erdős and Rényi's work on subgraphs of random graphs. We study the existence of certain substructures in random subsets of vector spaces over finite fields. First we provide a general framework which can be applied to establish coarse threshold results and prove a limiting Poisson distribution at the threshold scale. To illustrate our framework we apply our results to $k$-term arithmetic progressions, sums, right triangles, parallelograms and affine planes. We also find coarse thresholds for the property that a random subset of a finite vector space is sum-free, or is a Sidon set.",
"subjects": "Combinatorics (math.CO)",
"title": "Threshold functions for substructures in random subsets of finite vector spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517475646369,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7089606345592783
} |
https://arxiv.org/abs/2207.11299 | Rank-constrained Hyperbolic Programming | We extend rank-constrained optimization to general hyperbolic programs (HP) using the notion of matroid rank. For LP and SDP respectively, this reduces to sparsity-constrained LP and rank-constrained SDP that are already well-studied. But for QCQP and SOCP, we obtain new interesting optimization problems. For example, rank-constrained SOCP includes weighted Max-Cut and nonconvex QP as special cases, and dropping the rank constraints yield the standard SOCP-relaxations of these problems. We will show (i) how to do rank reduction for SOCP and QCQP, (ii) that rank-constrained SOCP and rank-constrained QCQP are NP-hard, and (iii) an improved result for rank-constrained SDP showing that if the number of constraints is $m$ and the rank constraint is less than $2^{1/2-\epsilon} \sqrt{m}$ for some $\epsilon>0$, then the problem is NP-hard. We will also study sparsity-constrained HP and extend results on LP sparsification to SOCP and QCQP. In particular, we show that there always exist (a) a solution to SOCP of cardinality at most twice the number of constraints and (b) a solution to QCQP of cardinality at most the sum of the number of linear constraints and the sum of the rank of the matrices in the quadratic constraints; and both (a) and (b) can be found efficiently. | \section{Introduction}
In this paper, we study rank-constrained and sparsity-constrained hyperbolic programming (HP). Specifically, we consider four types of HP: linear programming (LP), quadratically constrained quadratic program (QCQP), second order cone programming (SOCP), and semidefinite programming (SDP).
Rank-constrained SDP occurs frequently in combinatorial optimization \cite{Anjos2002,Lim2017,Luo2010}. It is well-known that Max-Cut can be viewed as a rank-constrained SDP, and dropping this rank constraint yields the standard SDP-relaxation of Max-Cut \cite{Anjos2002}. Thus, it is natural to ask when we can get a solution to an SDP of small rank. In \cite{Barvinok1995, LemonSoYe2015, Pataki1998}, it is shown that every feasible SDP with $m$ linear constraints always has a solution of rank at most $(\sqrt{1 + 8m} - 1)/2$. Furthermore, this low-rank solution can be found in polynomial time by first solving the SDP and then running a rank reduction algorithm proposed in \cite{Barvinok1995, LemonSoYe2015, Pataki1998}. Specifically, a rank reduction algorithm takes a solution to an SDP as input and outputs another low-rank solution to this SDP. This result implies that rank-constrained SDP is polynomial time solvable for any rank constraint that is at least $(\sqrt{1 + 8m} - 1)/2$.
Parallel to SDP rank reduction, LP sparsification is studied in \cite{Cara1907, LemonSoYe2015, Ye2008}. For any feasible LP with $m$ linear constraints, there always exists a solution of cardinality at most $m$. Moreover, this low-cardinality solution can be found in polynomial time by first solving the LP and then running an LP sparsification algorithm \cite{Cara1907, LemonSoYe2015, Ye2008}. Specifically, an LP sparsification algorithm takes a solution to an LP as input and outputs another sparse solution.
For rank-constrained problems, we use rank in HP \cite{Brand2011, Renegar2006} to define rank in LP, QCQP, SOCP, and SDP, by viewing them as special cases of HP. For each of these problems, we study the corresponding rank-constrained problem in two ways. First, we give a polynomial time rank-reduction algorithm to show that it is always possible to get a solution of ``small'' rank provided that the problem is feasible. Second, we consider the complexity of these rank-constrained problems. Under certain conditions, we will show that rank-constrained LP is polynomial time solvable and rank-constrained QCQP and rank-constrained SOCP are both NP-hard. For rank-constrained SDP with $m$ linear constraints, we consider a rank constraint $r(m)$ that is a function of $m$. Then, we show that the complexity of rank-constrained SDP changes as $r(m)$ passes through $\sqrt{2m}$. In particular, rank-constrained SDP is NP-hard when $r(m) \ll \sqrt{2m}$ and polynomial time solvable when $r(m) \gg \sqrt{2m}$.
For sparsity-constrained problems, we extend the results in LP sparsification \cite{Cara1907, LemonSoYe2015, Ye2008} to QCQP and SOCP. Previous results show that every feasible LP with $m$ linear constraints has a solution of cardinality at most $m$, and we can find such a solution in polynomial time \cite{Cara1907, LemonSoYe2015, Ye2008}. In addition, there are examples of LP with $m$ constraints whose solutions have cardinality at least $m$ \cite{Cara1907, LemonSoYe2015, Ye2008}, which shows that this LP sparsification result cannot be improved without further assumptions. We extend this result to QCQP and SOCP and show that our results cannot be improved without further assumptions.
\subsection{Further related works}
Rank for Lorentz cone has been studied through the lens of Euclidean Jordan algebra \cite{symmetric_cone_analysis,Jordan_rank,SDP_handbook}. The definition of rank for points in a Lorentz cone in \cite{symmetric_cone_analysis,Jordan_rank} is the same as the rank for points in SOCP in our work when there is only one second order cone constraint. In this case, the rank estimation theorem in \cite{Jordan_rank} gives the same result as the SOCP rank reduction result in our work.
In addition, our results on SOCP rank reduction can also be deduced from \cite{SDPgeometry,SDP_handbook}. Specifically, the author gives an algorithm which constructs an extreme point solution from any starting solution \cite{SDPgeometry,SDP_handbook} for conic LP problems. Applying this algorithm to SOCP gives a solution of small rank.
\section{Rank-Constrained SDP}
In this section, we study Semidefinite Programming (SDP) with rank constraint:
\begin{equation} \label{eq:sdp_rank}
\begin{split}
\underset{X \in \mathbb{S}^n}{\textrm{minimize}}
\hspace{2mm} & \tr(AX) \\
\textrm{subject to}
\hspace{2mm} & \tr(A_i X) = b_i, \; i = 1, \dots, m; \\
& X \geq 0; \\
& \rank(X) \leq r(m),
\end{split}
\end{equation}
where $A,A_1,\dots,A_m \in \mathbb{S}^n$, $b_1, \dots, b_m \in \mathbb{R}$, and $r(m)$ is a function of $m$. In this section, we begin with some examples of rank-constrained SDP. Then, we study the conditions under which rank-constrained SDP is NP-hard. We will show that there is a phase transition in the complexity of rank-constrained SDP when $r(m)$ passes through $\sqrt{2m}$.
\subsection{Examples of rank-constrained SDP}
Rank-constrained SDP appears in many combinatorial problems such as weighted Max-Cut, clique number, and stability number. In the following, we formulate these combinatorial problems in terms of rank-constrained SDP.
\begin{example}
Consider a graph $G = (V,E)$ with vertex set $V$ and edge set $E$. Let $w: E \longrightarrow \mathbb{R}$ be a weight function on $G$. Without loss of generality, we may assume that $G$ is a complete graph (i.e. $(i,j) \in E$ for all $i,j \in V$), allowing some of the edges to have zero weight (i.e. $w(e) = 0$ for some $e \in E$). In the weighted Max-Cut problem, the goal is to find a partition $V = V_1 \sqcup V_2$ that maximizes the sum of the weights on edges whose endpoints lie in different parts of the partition. To be specific, we want to solve the following problem:
\begin{equation*}
\begin{split}
\underset{V = V_1 \sqcup V_2}{\textrm{maximize}}
\hspace{2mm} & \sum_{i \in V_1, j \in V_2} w(i,j).
\end{split}
\end{equation*}
When $w(i,j) \in \{0,1\}$ for all $i,j \in V$, we call this problem the unweighted Max-Cut problem.
For simplicity, we identify $V$ with $[n] \vcentcolon= \{1,2,\dots,n\}$. We define the weight matrix $W \in \mathbb{R}^{n \times n}$ by
\begin{equation*}
W[i,j] = \begin{cases}
w(i,j)/4 & \textrm{if} \quad i \neq j \\
-\sum_{k \neq i}w(i,k)/4 & \textrm{if} \quad i = j.
\end{cases}
\end{equation*}
Then weighted Max-Cut is equivalent to the following rank constrained SDP \cite{Anjos2002}:
\begin{equation} \label{eq:1}
\begin{split}
\underset{X \in \mathbb{S}^n}{\textrm{minimize}}
\hspace{2mm} & \tr(WX) \\
\textrm{subject to}
\hspace{2mm} & X_{ii} = 1, \; i = 1, \dots, n; \\
& X \geq 0; \\
& \rank(X) \leq 1.
\end{split}
\end{equation}
\end{example}
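For concreteness, dropping the rank constraint in \eqref{eq:1} gives the standard SDP-relaxation of Max-Cut, which can be solved numerically. Below is a minimal sketch (ours, assuming the Python libraries cvxpy with an SDP-capable solver such as SCS, and numpy), including hyperplane rounding in the style of Goemans--Williamson:
\begin{verbatim}
import numpy as np
import cvxpy as cp

def maxcut_sdp(W):
    # Solve min tr(W X) s.t. diag(X) = 1, X PSD (rank constraint dropped).
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    prob = cp.Problem(cp.Minimize(cp.trace(W @ X)), [cp.diag(X) == 1])
    prob.solve()
    return X.value

def round_cut(X, rng=np.random.default_rng(0)):
    # Factor X = V V^T (clipping tiny negative eigenvalues caused by
    # numerical error) and cut by a random hyperplane; the +/-1 signs
    # give the partition V_1, V_2.
    w, U = np.linalg.eigh(X)
    V = U * np.sqrt(np.clip(w, 0, None))
    return np.sign(V @ rng.standard_normal(X.shape[0]))
\end{verbatim}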
\begin{example}
Next, we consider the problem of computing clique number. Given a graph $G$, a clique in $G$ is a subgraph $H$ of $G$ such that any two distinct vertices in $H$ are adjacent (i.e. there is an edge between them in $G$). The clique number $\omega(G)$ of $G$ is the maximum number of vertices in a clique in $G$.
By results in \cite{Lim2017},
\[
1 - \frac{1}{\omega(G)} = 2\underset{x \in \Delta^n}{\max} \sum_{(i,j) \in E} x_ix_j,
\]
where $\Delta^n = \{x \in \mathbb{R}^n: x_1 + \dots + x_n = 1, x_i \geq 0\}$ is the unit simplex in $\mathbb{R}^n$. Thus, to compute the clique number, it suffices to solve the following QP:
\[
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}}
\hspace{2mm} & \sum_{(i,j) \in E} -x_ix_j \\
\textrm{subject to}
\hspace{2mm} & x_i \geq 0, \; i = 1, \dots, n; \\
& \sum_{i = 1}^n x_i = 1.
\end{split}
\]
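As a sanity check of the identity above (our remark; the sum runs over unordered edges), putting uniform weight $x_i = 1/\omega(G)$ on the vertices of a maximum clique and $0$ elsewhere yields
\[
2\sum_{(i,j) \in E} x_ix_j = 2\binom{\omega(G)}{2}\frac{1}{\omega(G)^{2}} = 1 - \frac{1}{\omega(G)},
\]
and the content of the Motzkin--Straus theorem is that no other $x\in\Delta^n$ does better.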
By results in \cite{Luo2010}, we can convert this QP into a rank constrained SDP. We first homogenize it to obtain the following QP:
\[
\begin{split}
\underset{x \in \mathbb{R}^n, t \in \mathbb{R}}{\textrm{minimize}}
\hspace{2mm} & \sum_{(i,j) \in E} -x_ix_j \\
\textrm{subject to}
\hspace{2mm} & tx_i \geq 0, \; i = 1, \dots, n; \\
& \sum_{i = 1}^n tx_i = 1; \\
& t^2 = 1.
\end{split}
\]
This is clearly equivalent to the original QP by substituting $tx$ for $x$. Let $A \in \mathbb{R}^{(n+1) \times (n+1)}$ be such that
\[
A[i,j] = \begin{cases} -1 \hspace{2mm} &\textrm{if} \hspace{2mm} i\leq n, j \leq n, (i,j) \in E, \\
0 \hspace{2mm} &\textrm{otherwise}.
\end{cases}
\]
Then the homogenized QP is equivalent to the following rank constrained SDP \cite{Luo2010}:
\[
\begin{split}
\underset{X \in S^{n+1}}{\textrm{minimize}}
\hspace{2mm} & \tr(AX) \\
\textrm{subject to}
\hspace{2mm} & X_{i(n+1)} \geq 0, \; i = 1, \dots, n; \\
& \sum_{i = 1}^n X_{i(n+1)} = 1; \\
& X_{(n+1)(n+1)} = 1; \\
& X \geq 0; \\
& \rank(X) \leq 1.
\end{split}
\]
For any solution $X$, $\rank(X) \neq 0$ since $X_{(n+1)(n+1)} = 1$. Thus, $X = vv^T$ for some $v \in \mathbb{R}^{n+1}$. Then $(x,t) = v$ is a solution to the homogenized QP.
\end{example}
\begin{example}
Next, we consider the problem of computing stability number. Given a graph $G$, the stability number $\alpha(G)$ of $G$ is the maximum number of vertices in $G$, of which no two are adjacent (i.e. there is no edge between them in $G$).
By results in \cite{Lim2017},
\[
1 - \frac{1}{\alpha(G)} = 2\underset{x \in \Delta^n}{\max} \sum_{(i,j) \notin E} x_ix_j,
\]
where $\Delta^n = \{x \in \mathbb{R}^n: x_1 + \dots + x_n = 1, x_i \geq 0\}$ is the unit simplex in $\mathbb{R}^n$. Thus, to compute the stability number, it suffices to solve the following QP:
\[
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}}
\hspace{2mm} & \sum_{(i,j) \notin E} -x_ix_j \\
\textrm{subject to}
\hspace{2mm} & x_i \geq 0, \; i = 1, \dots, n; \\
& \sum_{i = 1}^n x_i = 1.
\end{split}
\]
Let $B \in \mathbb{R}^{(n+1) \times (n+1)}$ be such that
\[
B[i,j] = \begin{cases} -1 \hspace{2mm} &\textrm{if} \hspace{2mm} i\leq n, j \leq n, (i,j) \notin E, \\
0 \hspace{2mm} &\textrm{otherwise}.
\end{cases}
\]
By the same argument as we used in the clique number example, this QP is equivalent to the following rank constrained SDP \cite{Luo2010}:
\[
\begin{split}
\underset{X \in S^{n+1}}{\textrm{minimize}}
\hspace{2mm} & \tr(BX) \\
\textrm{subject to}
\hspace{2mm} & X_{i(n+1)} \geq 0, \; i = 1, \dots, n; \\
& \sum_{i = 1}^n X_{i(n+1)} = 1; \\
& X_{(n+1)(n+1)} = 1; \\
& X \geq 0; \\
& \rank(X) \leq 1.
\end{split}
\]
\end{example}
\subsection{Complexity of rank-constrained SDP}
In this section, we give the condition under which rank constrained SDP is NP-hard. Recall that rank-constrained SDP is formulated as:
\begin{equation*}
\begin{split}
\underset{X \in \mathbb{S}^n}{\textrm{minimize}}
\hspace{2mm} & \tr(AX) \\
\textrm{subject to}
\hspace{2mm} & \tr(A_i X) = b_i, \; i = 1, \dots, m; \\
& X \geq 0; \\
& \rank(X) \leq r(m),
\end{split}
\end{equation*}
where $A,A_1,\dots,A_m \in \mathbb{S}^n$, $b_1, \dots, b_m \in \mathbb{R}$, and $r(m)$ is a function of $m$. In the last section, we saw that weighted Max-Cut can be formulated as a rank-constrained SDP with $r(m) = 1$. As a result, rank-constrained SDP is NP-hard if $r(m) = 1$ for all $m$. On the other hand, for any feasible SDP, we could first solve the vanilla SDP (i.e. without rank constraint) and then run a rank reduction algorithm to find another optimal solution of rank at most $(\sqrt{1 + 8m} - 1)/2$ \cite{Barvinok1995, LemonSoYe2015, Pataki1998}. Thus, if $r(m) \geq (\sqrt{1 + 8m} - 1)/2$ for all $m$, then we can always solve the rank-constrained SDP by the procedure we just described. Assuming that we can compute real numbers exactly, solving the vanilla SDP (without rank constraint) and running the rank reduction algorithm can both be done in polynomial time \cite{Barvinok1995, LemonSoYe2015, Pataki1998}. Roughly speaking, when $r(m) \gg \sqrt{2m}$, rank-constrained SDP is polynomial time solvable. In the following result, we show that when $r(m) \ll \sqrt{2m}$, rank-constrained SDP is NP-hard.
\begin{theorem} \label{thm:00}
Let $A,A_1,\dots,A_m \in \mathbb{S}^n$, $b_1, \dots, b_m \in \mathbb{R}$, and $r:\mathbb{Z}^+ \longrightarrow \mathbb{Z}^+$ be given. Suppose that there exist constants $M, \epsilon>0$, such that
\begin{equation} \label{eq:rank}
r(m) < 2^{1/2 - \epsilon} \sqrt{m}, \quad \textrm{for all} \quad m \geq M.
\end{equation}
Then, the rank-constrained SDP
\begin{equation*}
\begin{split}
\underset{X \in \mathbb{S}^n}{\operatorname{minimize}}
\hspace{2mm} & \tr(AX) \\
\operatorname{subject \hspace{1mm} to}
\hspace{2mm} & \tr(A_i X) = b_i, \; i = 1, \dots, m; \\
& X \geq 0; \\
& \rank(X) \leq r(m),
\end{split}
\end{equation*}
is NP-hard.
\end{theorem}
The above result, together with the SDP rank reduction results \cite{Barvinok1995, LemonSoYe2015, Pataki1998}, shows that there is a phase transition in the complexity of rank-constrained SDP as $r(m)$ passes through $\sqrt{2m}$.
Before giving the formal proof, we explain the main ideas of this proof. To begin with, consider the case $r(m) = 2$. Recall that weighted Max-Cut is equivalent to the following rank-constrained SDP:
\begin{equation} \label{eq:maxcut}
\begin{split}
\underset{X \in \mathbb{S}^n}{\textrm{minimize}}
\hspace{2mm} & \tr(WX) \\
\textrm{subject to}
\hspace{2mm} & X_{ii} = 1, \; i = 1, \dots, n; \\
& X \geq 0; \\
& \rank(X) \leq 1.
\end{split}
\end{equation}
Now, we transform the above rank-constrained SDP into a new rank-constrained SDP, whose rank constraint is two.
Let
\begin{equation*}
W' =
\begin{bsmallmatrix}
0 & 0\\
0 & W
\end{bsmallmatrix} \in \mathbb{R}^{(n+1) \times (n+1)},
\end{equation*}
where $0$ denotes vectors whose entries are zeros.
Then, consider the following rank-constrained SDP:
\begin{equation} \label{eq:maxcut2}
\begin{split}
\underset{X' \in S^{n+1}}{\textrm{minimize}} \hspace{2mm} & \tr(W'X') \\
\textrm{subject to}
\hspace{2mm }& X'_{ii} = 1, \; i = 1, \dots, n+1; \\
& X'_{1j} = 0, \; j = 2, \dots, n+1; \\
& X' \geq 0; \\
& \rank(X') \leq 2.
\end{split}
\end{equation}
Note that any solution $X'$ to the above rank-constrained SDP must have the form
\begin{equation*}
X' =
\begin{bsmallmatrix}
1 & 0\\
0 & X
\end{bsmallmatrix} \in \mathbb{R}^{(n+1) \times (n+1)},
\end{equation*}
where $0$ denotes vectors whose entries are zeros.
The rank-constrained SDP in \eqref{eq:maxcut2} is equivalent to the one in \eqref{eq:maxcut} since $\rank(X') = \rank(X) + 1$.
By applying this ``rank increment'' technique, we can show that rank-constrained SDP with constant rank constraint (i.e. $r(m) = r$ for some constant $r$) is NP-hard.
To get the NP-hardness result for $r$, we need $m = rn + r(r-1)/2 = r(n-1) + r(r+1)/2$ many linear constraints. In other words,
\begin{equation*}
n = \Biggl\lfloor \Biggl(m - \frac{r(r+1)}{2} \Biggr) / r \Biggr\rfloor + 1.
\end{equation*}
Thus, we need $m > r(r+1)/2$; roughly speaking, $r \ll \sqrt{2m}$. In addition, note that as long as $r \ll \sqrt{2m}$, we have $n = \Omega(\sqrt{m}) = \Omega(r)$. Hence the new dimension $n+r-1$ of the transformed problem is polynomial in the original dimension $n$, and any polynomial time algorithm for the transformed problem translates into a polynomial time algorithm for the original problem.
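The padding construction itself is mechanical. The following sketch (a hypothetical helper written for this exposition, not code from any cited reference) builds the padded cost matrix of the ``rank increment'' step together with the index sets of the forced constraints:
\begin{verbatim}
import numpy as np

def pad_for_rank(W, r):
    """Embed an n x n Max-Cut cost matrix W into dimension n + r - 1.

    The returned constraints force the leading (r-1) x (r-1) block of
    any feasible X' to be the identity, so rank(X') <= r iff the
    trailing block X satisfies rank(X) <= 1.
    """
    n = W.shape[0]
    N = n + r - 1
    W_pad = np.zeros((N, N))
    W_pad[r - 1:, r - 1:] = W              # W' = blkdiag(0, W)
    diag = [(i, i) for i in range(N)]      # X'_{ii} = 1
    off = [(i, j) for i in range(r - 1)    # X'_{ij} = 0 for the first
           for j in range(i + 1, N)]       # r - 1 rows, j > i
    # len(diag) + len(off) == (n - 1) * r + r * (r + 1) // 2
    return W_pad, diag, off
\end{verbatim}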
\begin{proof}
We begin with the special case that $r(m)$ is non-decreasing (i.e. $r(m+1) \geq r(m)$ for all $m$). Then, we will drop this additional assumption.
\pmb{Special Case:} \par
Our goal is to reduce weighted Max-Cut to rank-constrained SDP. Suppose that there is a polynomial time algorithm $\mathcal{A}$ for rank-constrained SDP with $r(m)$ satisfying equation \eqref{eq:rank}. Then, we will show that we can use this algorithm to solve weighted Max-Cut in polynomial time. Let
\begin{equation*}
\phi(m) = \Biggl\lfloor \bigg(m - \frac{r(m)(r(m)+1)}{2} \bigg) \bigg/ r(m) \Biggr\rfloor + 1.
\end{equation*}
Given an input graph $G$ with $n$ nodes and a weight function $w$, we consider two cases $(i) n \geq C$ and $(ii) n < C$ separately, where $C$ is some constant which only depends on $\epsilon$ and $M$. We will pick $C$ later in the proof but it can be determined before we receive the input of weighted Max-Cut.
If $n \geq C$, our algorithm works as follows:
\begin{enumerate}
\item find $m$ such that $n = \phi(m)$;
\item construct a rank-constrained SDP with $m$ linear constraints that is equivalent to weighted Max-Cut on the weighted graph $(G,w)$;
\item solve this rank-constrained SDP using algorithm $\mathcal{A}$.
\end{enumerate}
If $n<C$, we use brute force to solve the problem. Now, we discuss each step in detail.
\pmb{$(1)$}
We begin with some observations on $\phi(m)$. For $m \geq M$,
\begin{equation} \label{pfrank:01}
\begin{split}
\phi(m) & \geq \bigg(m - \frac{r(m)(r(m)+1)}{2} \bigg) \bigg/ r(m) \\
& \stackrel{(a)} \geq \bigg(m - 2^{-2\epsilon}m - 2^{-1/2-\epsilon} \sqrt{m} \bigg) \bigg/ r(m) \\
& \stackrel{(b)} \geq \delta \sqrt{m} \quad \textrm{for some} \hspace{1mm} \delta>0, \hspace{1mm} \textrm{for all} \hspace{1mm} m \geq M', \hspace{1mm} \textrm{for some} \hspace{1mm} M'>0,
\end{split}
\end{equation}
where $\delta$ and $M'$ only depend on $\epsilon$, and we used equation \eqref{eq:rank} in $(a)$ and $(b)$. Now, we pick $C = \max(M,M')+1$. Recall that we only consider input graphs $G$ with $n \geq C$ nodes.
Since $r(m)$ is non-decreasing, for all $m \geq C-1$,
\begin{equation} \label{pfrank:02}
\begin{split}
\phi(m+1) - \phi(m) & = \Biggl\lfloor \bigg(m+1 - \frac{r(m+1)(r(m+1)+1)}{2} \bigg) \bigg/ r(m+1)\Biggr\rfloor - \Biggl\lfloor \bigg(m - \frac{r(m)(r(m)+1)}{2} \bigg) \bigg/ r(m)\Biggr\rfloor \\
& \leq \Biggl\lfloor \bigg(m+1 - \frac{r(m)(r(m)+1)}{2} \bigg) \bigg/ r(m)\Biggr\rfloor - \Biggl\lfloor \bigg(m - \frac{r(m)(r(m)+1)}{2} \bigg) \bigg/ r(m)\Biggr\rfloor \\
& \leq 1.
\end{split}
\end{equation}
Note that
\begin{equation} \label{pfrank:03}
\phi(n-1) \leq n-1+1 = n.
\end{equation}
In addition, by equation \eqref{pfrank:01},
\begin{equation} \label{pfrank:04}
\phi(\lceil n/\delta \rceil^2) \geq n.
\end{equation}
By equation \eqref{pfrank:02}, \eqref{pfrank:03}, and \eqref{pfrank:04}, there exists some $m \in [n-1,\lceil n/\delta \rceil^2]$ such that $\phi(m) = n$. By computing $\phi(m)$ from $m = n-1$ to $m = \lceil n/\delta \rceil^2$, we can find $m$ with $\phi(m) = n$ in $\mathcal{O}(n^2)$ time.
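A sketch of this scan (the rank bound $r(\cdot)$ is passed as a callable and $\delta$ is the constant from equation \eqref{pfrank:01}; both are inputs here, not quantities the code derives):
\begin{verbatim}
import math

def find_m(n, r, delta):
    """Scan m = n-1, ..., ceil(n/delta)^2 for the first m with phi(m) = n."""
    def phi(m):
        rm = r(m)
        return (m - rm * (rm + 1) // 2) // rm + 1
    for m in range(n - 1, math.ceil(n / delta) ** 2 + 1):
        if phi(m) == n:
            return m
    raise RuntimeError("no such m; the hypotheses on r are violated")
\end{verbatim}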
\pmb{$(2)$}
Recall that weighted Max-Cut is equivalent to the following rank-constrained SDP \cite{Anjos2002}:
\begin{equation} \label{pfsdp:1}
\begin{split}
\underset{X \in \mathbb{S}^n}{\textrm{minimize}}
\hspace{2mm} & \tr(WX) \\
\textrm{subject to}
\hspace{2mm} & X_{ii} = 1, \; i = 1, \dots, n; \\
& X \geq 0; \\
& \rank(X) \leq 1,
\end{split}
\end{equation}
where $W \in \mathbb{R}^{n \times n}$ is the weight matrix induced by the weighted graph $(G,w)$:
\begin{equation*}
W[i,j] = \begin{cases}
w(i,j)/4 & \textrm{if} \quad i \neq j \\
-\sum_{k \neq i}w(i,k)/4 & \textrm{if} \quad i = j.
\end{cases}
\end{equation*}
Now, we will construct an equivalent rank-constrained SDP of dimension $n+r(m)-1$. Let
\begin{equation*}
W' =
\begin{bmatrix}
0 & 0 \\
0 & W
\end{bmatrix} \in \mathbb{R}^{(n+r(m)-1) \times (n+r(m)-1)},
\end{equation*}
where $0$ denotes a matrix whose entries are zeroes. Then, we claim that the rank-constrained SDP in \eqref{pfsdp:1} is equivalent to the following rank-constrained SDP:
\begin{equation} \label{pfsdp:2}
\begin{split}
\underset{X' \in \mathbb{S}^{n+r(m)-1}}{\textrm{minimize}}
\hspace{2mm} & \tr(W'X') \\
\textrm{subject to}
\hspace{2mm} & X'_{ii} = 1, \; i = 1, \dots, n+r(m)-1; \\
& X'_{ij} = 0, \; j = i+1, \dots, n+r(m)-1; \; i = 1, \dots, r(m)-1; \\
& X' \geq 0; \\
& \rank(X') \leq r(m).
\end{split}
\end{equation}
To see this, note that any solution $X'$ to the rank-constrained SDP in \eqref{pfsdp:2} must have the form
\begin{equation*}
X' =
\begin{bmatrix}
I_{r(m)-1} & 0 \\
0 & X
\end{bmatrix} \in \mathbb{R}^{(n+r(m)-1) \times (n+r(m)-1)},
\end{equation*}
where $I_{r(m)-1} \in \mathbb{R}^{(r(m)-1) \times (r(m)-1)}$ is the identity matrix, $X \in \mathbb{S}^n$, and $0$ denotes a matrix whose entries are zeros. Thus, $\rank(X') \leq r(m)$ if and only if $\rank(X) \leq 1$, and $X' \geq 0$ if and only if $X \geq 0$. In addition, $\tr(W'X') = \tr(WX)$. Hence, the rank-constrained SDP in \eqref{pfsdp:2} is equivalent to the one in \eqref{pfsdp:1}, and in order to solve weighted Max-Cut, it suffices to solve the rank-constrained SDP in \eqref{pfsdp:2}. Note that the rank-constrained SDP in \eqref{pfsdp:2} has
\begin{equation} \label{pfsdp:count_constraint}
\sum_{i = n}^{n+r(m)-1}i = \frac{(2n+r(m)-1)r(m)}{2} = (n-1)r(m) + \frac{r(m)(r(m)+1)}{2}
\end{equation}
many linear constraints. Since $n = \phi(m)$,
\begin{equation} \label{pfsdp:count_constraint2}
(n-1)r(m) + \frac{r(m)(r(m)+1)}{2} \leq m.
\end{equation}
By adding superfluous linear constraints (e.g. duplicates of an existing constraint) to \eqref{pfsdp:2}, we get a rank-constrained SDP which is equivalent to \eqref{pfsdp:2} and has exactly $m$ linear constraints.
\pmb{$(3)$}
Finally, we can apply algorithm $\mathcal{A}$ to solve this rank-constrained SDP in $\mathop{\mathrm{poly}}(n+r(m)-1)$ time. Since we search for $m$ in $[n-1,\lceil n/\delta \rceil^2]$ in step $(1)$, $m \geq n-1$. Since $n \geq C$, $m \geq C-1 \geq M$. Thus,
\begin{equation} \label{pfsdp:31}
r(m) < 2^{1/2 - \epsilon} \sqrt{m}.
\end{equation}
Since $n = \phi(m) \geq \delta \sqrt{m}$ by equation \eqref{pfrank:01},
\begin{equation*}
r(m) = \mathcal{O}(n),
\end{equation*}
by equation \eqref{pfsdp:31}. Thus, algorithm $\mathcal{A}$ solves the rank-constrained SDP in $\mathop{\mathrm{poly}}(n+r(m)-1) = \mathop{\mathrm{poly}}(n)$ time.
\pmb{Case when $n<C$}
If the number of nodes $n$ is less than $C$, we solve weighted Max-Cut by brute force: we simply try each of the $2^n$ possible partitions of the vertex set, compute the value of the cut in each case, and take the maximum. Since $n<C$, this takes at most $\mathcal{O}(2^C)$ time, which is a constant. Thus, overall, the algorithm runs in $\mathop{\mathrm{poly}}(n) + \mathcal{O}(2^C) = \mathop{\mathrm{poly}}(n)$ time.
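For completeness, the brute force base case might look as follows (a minimal sketch; \texttt{w} is any indexable weight table with \texttt{w[i][j]} the weight of edge $(i,j)$):
\begin{verbatim}
from itertools import product

def max_cut_brute_force(n, w):
    """Try all 2^n vertex bipartitions and return the maximum cut value."""
    best = 0.0
    for signs in product((-1, 1), repeat=n):   # side of each vertex
        cut = sum(w[i][j]
                  for i in range(n) for j in range(i + 1, n)
                  if signs[i] != signs[j])     # edge crosses the cut
        best = max(best, cut)
    return best
\end{verbatim}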
\pmb{General Case:} \par
Now, we drop the assumption that $r(m+1) \geq r(m)$ for all $m$. Note that the only place we used this assumption in the proof of special case is to show that
\begin{equation*}
\phi(m+1) - \phi(m) \leq 1.
\end{equation*}
The way to avoid using this assumption is to change the definition of $\phi(m)$. First, note that in the proof of the special case, we used $r(m)$ only when $m \geq M$. Thus, we can assume without loss of generality that
\begin{equation} \label{pfgen:newr}
r(m) < 2^{1/2 - \epsilon} \sqrt{m}, \quad \textrm{for all} \quad m \in \mathbb{Z}^+.
\end{equation}
Then, define $\Tilde{r}:\mathbb{Z}^+ \longrightarrow \mathbb{Z}^+$ as
\begin{equation} \label{pfgen:newr2}
\Tilde{r}(m) = \max\{r(i):i = 1,\dots,m\}.
\end{equation}
Clearly,
\begin{equation} \label{pfgen:eq:02}
\Tilde{r}(m+1) \geq \Tilde{r}(m), \quad \textrm{for all} \hspace{1mm} m.
\end{equation}
Note that for any $m \in \mathbb{Z}^+$,
\begin{equation} \label{pfgen:eq:01}
\begin{split}
\Tilde{r}(m) & = r(t) \quad \textrm{for some} \hspace{1mm} t \leq m \\
& \stackrel{(a)} < 2^{1/2 - \epsilon} \sqrt{t} \\
& \leq 2^{1/2 - \epsilon} \sqrt{m},
\end{split}
\end{equation}
where we used equation \eqref{pfgen:newr} in $(a)$. Let
\begin{equation} \label{pfgen:newphi}
\phi(m) = \Biggl\lfloor \bigg(m - \frac{\Tilde{r}(m)(\Tilde{r}(m)+1)}{2} \bigg) \bigg/ \Tilde{r}(m) \Biggr\rfloor + 1.
\end{equation}
The skeleton of the algorithm is the same as before. Given an input graph $G$ with $n$ nodes and a weight function $w$, we consider two cases $(i)$ $n \geq C$ and $(ii)$ $n < C$ separately, where $C$ is some constant which only depends on $\epsilon$ and $M$. We will pick $C$ later in the proof but it can be determined before we receive the input of weighted Max-Cut.
If $n \geq C$, our algorithm works as follows:
\begin{enumerate}
\item find $m$ such that $n = \phi(m)$;
\item construct a rank-constrained SDP with $m$ linear constraints that is equivalent to weighted Max-Cut on the weighted graph $(G,w)$;
\item solve this rank-constrained SDP using algorithm $\mathcal{A}$.
\end{enumerate}
If $n<C$, we use brute force to solve the problem. Now, we discuss each step in detail.
\pmb{$(1)$} By equations \eqref{pfgen:eq:01}, \eqref{pfgen:eq:02}, and \eqref{pfgen:newphi}, we can get
\begin{equation} \label{pfgen:eq:318}
\begin{split}
& \phi(m) \geq \delta \sqrt{m}; \\
& \phi(m+1) - \phi(m) \leq 1; \\
& \phi(n-1) \leq n; \\
& \phi(\lceil n/\delta \rceil^2) \geq n,
\end{split}
\end{equation}
in the same way as we did in the special case of the proof. Thus, this step remains unchanged.
\pmb{$(2)$} Once we find $m$ such that $n = \phi(m)$, we consider the rank-constrained SDP in \eqref{pfsdp:2}, in exactly the same way as we did in the special case. Note that $\Tilde{r}(m)$ is used solely to choose the value of $m$; from this point on, only $r(m)$ is used. Thus, the number of linear constraints in \eqref{pfsdp:2} is exactly the same as we counted in equation \eqref{pfsdp:count_constraint}, which is $(n-1)r(m) + r(m)(r(m)+1)/2$. Then,
\begin{equation}
\begin{split}
& (n-1)r(m) + r(m)(r(m)+1)/2 \\
& \stackrel{(a)} \leq (n-1)\Tilde{r}(m) + \Tilde{r}(m)(\Tilde{r}(m)+1)/2 \\
& \stackrel{(b)} \leq m,
\end{split}
\end{equation}
where we used equation \eqref{pfgen:newr2} in $(a)$ and equation \eqref{pfgen:newphi} in $(b)$. The rest of this step remains unchanged.
\pmb{$(3)$}
Finally, we need to show that $r(m) = \mathcal{O}(n)$. This holds since $n = \phi(m) \geq \delta \sqrt{m}$ by equation \eqref{pfgen:eq:318}, and $r(m) < 2^{1/2 - \epsilon} \sqrt{m}$ by equation \eqref{pfgen:newr}.
The case when $n<C$ is exactly the same as before.
\end{proof}
\begin{corollary} \label{cor:sdp_feasible}
Let $A_1,\dots,A_m \in \mathbb{S}^n$, $b_1, \dots, b_m \in \mathbb{R}$, and $r:\mathbb{Z}^+ \longrightarrow \mathbb{Z}^+$ be given. Suppose that there exist constants $M, \epsilon>0$, such that
\begin{equation*}
r(m) < 2^{1/2 - \epsilon} \sqrt{m}, \quad \textrm{for all} \quad m \geq M.
\end{equation*}
Then, the rank-constrained SDP feasibility problem:
\begin{equation} \label{eq:sdp_feasibility}
\begin{split}
\operatorname{Find}
\hspace{2mm} & X \in \mathbb{S}^n \\
\operatorname{subject \hspace{1mm} to}
\hspace{2mm} & \tr(A_i X) = b_i, \; i = 1, \dots, m; \\
& X \geq 0; \\
& \rank(X) \leq r(m),
\end{split}
\end{equation}
is NP-hard.
\end{corollary}
\begin{proof}
Suppose that there is a polynomial time algorithm which solves \eqref{eq:sdp_feasibility}. We call this algorithm a feasibility oracle. Then we show that we can also solve the unweighted Max-Cut problem in polynomial time using this feasibility oracle. Let $G = (V,E,w)$ be a weighted graph. Since we are solving the unweighted Max-Cut problem, we may assume that $w(i,j) \in \{0,1\}$ for all $i,j \in V$. Let $n = |V|$ be the number of nodes in $G$.
By the same arguments as in the proof of Theorem \ref{thm:00}, unweighted Max-Cut is equivalent to some rank-constrained SDP
\begin{equation} \label{pf:sdp_fea:01}
\begin{split}
\underset{X \in \mathbb{S}^N}{\textrm{minimize}}
\hspace{2mm} & \tr(AX) \\
\textrm{subject to}
\hspace{2mm} & \tr(A_i X) = b_i, \; i = 1, \dots, m; \\
& X \geq 0; \\
& \rank(X) \leq r(m),
\end{split}
\end{equation}
where $N = n+r(m)-1 = \mathop{\mathrm{poly}}(n)$, when $n \geq C$ for some constant $C$. When $n < C$, we can solve the unweighted Max-Cut problem by brute force in $\mathcal{O}(2^C C^2) = \mathcal{O}(1)$ time. Then, it suffices to solve \eqref{pf:sdp_fea:01} using the feasibility oracle. To do so, we write the objective in \eqref{pf:sdp_fea:01} as a linear constraint $\tr(AX) \leq c$ and search over $c$ by bisection. For the unweighted Max-Cut problem, the objective is bounded below by $0$ and above by $n^2$. Since the optimal value of unweighted Max-Cut is an integer, we can find it by applying the feasibility oracle $\mathcal{O}(\log n)$ times. Thus, we can solve unweighted Max-Cut in polynomial time. Since unweighted Max-Cut is NP-hard, we are done.
\end{proof}
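The bisection step is standard; a minimal sketch, where \texttt{feasible(c)} stands in for the hypothetical oracle that decides feasibility of \eqref{eq:sdp_feasibility} with the added constraint $\tr(AX) \leq c$:
\begin{verbatim}
def optimal_value(n, feasible):
    """Binary search the integer optimum of the Max-Cut SDP in [0, n^2]."""
    lo, hi = 0, n * n              # the optimum is an integer in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):          # a feasible point with value <= mid
            hi = mid
        else:
            lo = mid + 1
    return lo
\end{verbatim}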
\section{Sparsity-Constrained Problems}
In this section, we study sparsity-constrained problems. We start with sparsity-constrained Linear Programming (LP):
\begin{equation} \label{def:sclp}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & Ax = b; \\
& x \geq 0; \\
& \mathop{\mathrm{card}}(x) \leq \kappa,
\end{split}
\end{equation}
where $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, c \in \mathbb{R}^n$, $\mathop{\mathrm{card}}(x)$ is the cardinality of $x$, i.e. the number of nonzero entries of $x$, and $\kappa \geq 0$ is a constant. Here, cardinality is the analogue of rank in SDP. To be specific, if we write the LP as an SDP such that $x$ is mapped to $X = \mathop{\mathrm{diag}}(x)$, where $\mathop{\mathrm{diag}}(x)$ denotes the diagonal matrix whose diagonal entries are the entries of $x$, then $\mathop{\mathrm{card}}(x) = \rank(X)$. Unlike rank-constrained SDP, the sparsity-constrained LP \eqref{def:sclp} is polynomial time solvable for any constant $\kappa$.
\begin{theorem} \label{thm:lp:complexity}
Let $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$ be given. Then, for any constant $\kappa \geq 0$, the sparsity-constrained LP problem
\[
\begin{split}
\underset{x\in \mathbb{R}^n}{\operatorname{minimize}} \hspace{2mm} & c^Tx \\
\operatorname{subject \hspace{1mm} to} \hspace{2mm} & Ax = b; \\
& x \geq 0; \\
& \mathop{\mathrm{card}}(x) \leq \kappa,
\end{split}
\]
is polynomial time solvable.
\end{theorem}
\begin{proof}
To solve this problem, it suffices to solve $\binom{n}{\kappa}$ many LP problems. We encode each LP by a set $S \subset [n]$ of size $n-\kappa$. For each $S$, we set the variable $x_i$ to be $0$ for all $i \in S$. Then we solve the resulting LP defined as follows. Let $c = (c_1,\dots,c_n)^T$ and $A = (a_1,\dots,a_n)$, where $a_i \in \mathbb{R}^m$. Let $c_S$ be the subvector of $c$ obtained by dropping the entries $c_i$ for all $i \in S$. Similarly, let $A_S$ be the submatrix of $A$ obtained by dropping columns $a_i$ for all $i \in S$. Then, the LP corresponding to $S$ is defined as
\[
\begin{split}
\underset{x' \in \mathbb{R}^{n-\kappa}}{\textrm{minimize}} \hspace{2mm} & c_S^T x' \\
\textrm{subject to} \hspace{2mm} & A_S x' = b; \\
& x' \geq 0.
\end{split}
\]
If the above LP is solvable, we keep its solution $x_S$ and optimal value $y_S$. If it is not solvable, we do nothing. If none of the $\binom{n}{\kappa}$ LPs is solvable, we conclude that no solution to the sparsity-constrained LP exists. Otherwise, let $S^*$ be the set such that $y_{S^*}$ is minimal among all $y_S$'s. Then, we output the solution $\Tilde{x}_{S^*}$ defined by
\[
\begin{cases}
\Tilde{x}_{S^*}[i] = 0 \hspace{2mm} &\textrm{if} \hspace{2mm} i \in S^*\\
\Tilde{x}_{S^*}[i] = x_{S^*}[k_i] \hspace{2mm} &\textrm{if} \hspace{2mm} i \notin S^*,
\end{cases}
\]
where $k_i$ denotes the position of $i$ in $[n]-S^*$, together with the optimal value $y_{S^*}$. Since LP is polynomial time solvable and $\binom{n}{\kappa} \leq n^\kappa$ is polynomial in $n$, this algorithm runs in polynomial time.
\end{proof}
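A direct sketch of this enumeration using an off-the-shelf LP solver (here scipy; \texttt{c}, \texttt{A}, \texttt{b} are numpy arrays, and we equivalently enumerate supports of size $\kappa$ rather than their complements $S$):
\begin{verbatim}
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def sparse_lp(c, A, b, kappa):
    """min c^T x s.t. Ax = b, x >= 0, card(x) <= kappa, by enumeration."""
    n = len(c)
    best_val, best_x = None, None
    for support in map(list, combinations(range(n), kappa)):
        res = linprog(c[support], A_eq=A[:, support], b_eq=b,
                      bounds=(0, None), method="highs")
        if res.status == 0 and (best_val is None or res.fun < best_val):
            x = np.zeros(n)
            x[support] = res.x
            best_val, best_x = res.fun, x
    return best_val, best_x        # (None, None) if no support is solvable
\end{verbatim}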
Note that the above result does not contradict the fact that finding a minimum-cardinality solution of an LP is NP-hard \cite{Garey1979}: Theorem \ref{thm:lp:complexity} only applies to a constant cardinality bound $\kappa$, whereas finding a minimum-cardinality solution requires cardinality constraints that grow with $n$.
Parallel to the rank reduction results in SDP \cite{Barvinok1995, LemonSoYe2015, Pataki1998}, every feasible LP has a solution $x^*$ whose cardinality is at most $m$ \cite{Cara1907, LemonSoYe2015, Ye2008}.
Next, we extend the sparsification techniques in LP to Quadratically Constrained Quadratic Program (QCQP) and Second Order Cone Programming (SOCP).
Before considering QCQP and SOCP sparsification, we make a simple observation on LP sparsification, which will be used in QCQP and SOCP sparsification. Consider the following LP:
\begin{equation} \label{def:lp}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & Ax = b; \\
& x \geq 0, \\
\end{split}
\end{equation}
where $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$. Given a solution $y$ to \eqref{def:lp}, there is an efficient algorithm to find another solution $x^*$ to \eqref{def:lp}, whose cardinality is at most $m$ \cite{Cara1907, LemonSoYe2015, Ye2008}. Now, we observe that the same result holds without the condition $x \geq 0$.
\begin{lemma} \label{lem:lp_sp}
Let $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$. Suppose that the following LP
\begin{equation} \label{def:lp2}
\begin{split}
\underset{x\in \mathbb{R}^n}{\operatorname{minimize}} \hspace{2mm} & c^Tx \\
\operatorname{subject \hspace{1mm} to} \hspace{2mm} & Ax = b;
\end{split}
\end{equation}
has a finite optimal value (i.e. the problem is feasible and the objective is bounded). Then, there exists a solution $x^*$ to \eqref{def:lp2} such that $\mathop{\mathrm{card}}(x^*) \leq m$. Moreover, $x^*$ can be found in polynomial time.
\end{lemma}
\begin{proof}
Let $y$ be a solution to \eqref{def:lp2}. Let $S = \{i \in [n]: y_i<0\}$. Let $c = (c_1,\dots,c_n)^T$ and $A = (a_1,\dots,a_n)$, where $a_i \in \mathbb{R}^m$. Let $\Tilde{c} \in \mathbb{R}^n$ be defined as
\[
\Tilde{c}_i =
\begin{cases}
c_i & \quad \textrm{if} \quad i \notin S; \\
-c_i & \quad \textrm{if} \quad i \in S.
\end{cases}
\]
Similarly, let $\Tilde{A} = (\Tilde{a}_1, \dots, \Tilde{a}_n)$ be defined as
\[
\Tilde{a}_i =
\begin{cases}
a_i & \quad \textrm{if} \quad i \notin S; \\
-a_i & \quad \textrm{if} \quad i \in S.
\end{cases}
\]
Now, consider the following LP:
\begin{equation} \label{def:lp3}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & \Tilde{c}^T x \\
\textrm{subject to} \hspace{2mm} & \Tilde{A}x = b; \\
& x \geq 0.
\end{split}
\end{equation}
Let $\Tilde{y} = |y|$, where $|\cdot|$ is applied entry-wise. We claim that $\Tilde{y}$ is a solution to \eqref{def:lp3}. To see this, note that $\Tilde{A}\Tilde{y} = Ay = b$ by definition. Suppose that there exists a solution $\Tilde{z} \in \mathbb{R}^n$ to \eqref{def:lp3} such that $\Tilde{c}^T \Tilde{z}<\Tilde{c}^T\Tilde{y}$. Let $z \in \mathbb{R}^n$ be defined as
\[
z_i =
\begin{cases}
\Tilde{z}_i & \quad \textrm{if} \quad i \notin S; \\
-\Tilde{z}_i & \quad \textrm{if} \quad i \in S.
\end{cases}
\]
Then, $Az = \Tilde{A}\Tilde{z} = b$ and $c^Tz = \Tilde{c}^T\Tilde{z}<\Tilde{c}^T\Tilde{y} = c^Ty$, which contradicts the fact that $y$ is a solution to \eqref{def:lp2}. Thus, $\Tilde{y}$ is a solution to \eqref{def:lp3}.
Now, we apply the LP sparsification algorithm to $\Tilde{y}$ to get a solution $\Tilde{x}$ of \eqref{def:lp3} such that $\mathop{\mathrm{card}}(\Tilde{x}) \leq m$. Let $x^* \in \mathbb{R}^n$ be defined as
\[
x^*_i =
\begin{cases}
\Tilde{x}_i & \quad \textrm{if} \quad i \notin S; \\
-\Tilde{x}_i & \quad \textrm{if} \quad i \in S.
\end{cases}
\]
Then, $Ax^* = \Tilde{A}\Tilde{x} = b$ and $c^Tx^* = \Tilde{c}^T\Tilde{x} = \Tilde{c}^T\Tilde{y} = c^Ty$. Thus, $x^*$ is a solution to \eqref{def:lp2}. Note that $\mathop{\mathrm{card}}(x^*) = \mathop{\mathrm{card}}(\Tilde{x}) \leq m$.
\end{proof}
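The proof is constructive and easy to mechanize. In the sketch below, \texttt{sparsify\_nonneg} stands in for any Caratheodory-type sparsification routine for the nonnegative case \cite{Cara1907, LemonSoYe2015, Ye2008}; only the sign-flip wrapper from the proof is shown:
\begin{verbatim}
import numpy as np

def sparsify_free(A, c, y, sparsify_nonneg):
    """Given a solution y of min c^T x s.t. Ax = b (x unconstrained in
    sign), return a solution with at most m = A.shape[0] nonzeros."""
    s = np.where(y < 0, -1.0, 1.0)        # flip signs of negative entries
    A_t, c_t = A * s, c * s               # flip the corresponding columns
    y_t = np.abs(y)                       # solves the flipped LP with x >= 0
    x_t = sparsify_nonneg(A_t, c_t, y_t)  # card(x_t) <= m
    return s * x_t                        # undo the sign flips
\end{verbatim}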
\subsection{QCQP sparsification}
Consider the following Quadratically Constrained Quadratic Program (QCQP):
\begin{equation*}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & x^TQ_0x + c_0^Tx \\
\textrm{subject to} \hspace{2mm} & x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots k; \\
& Ax = b,
\end{split}
\end{equation*}
where $Q_i \in \mathbb{S}^n_+$ and $c_i \in \mathbb{R}^n$ for each $i = 0,1,\dots,k$, $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$. In this section, we first give a sparsification result on QCQP. Then, we show that this sparsification result cannot be improved without additional assumptions.
\begin{theorem} \label{thm:qcqp:sparsity}
Let $Q_i \in \mathbb{S}^n_+$ and $c_i \in \mathbb{R}^n$ for each $i = 0,1,\dots,k$, $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$ be given.
Suppose the following QCQP:
\begin{equation} \label{qcqp:sp:def}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & x^TQ_0x + c_0^Tx \\
\textrm{subject to} \hspace{2mm} & x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots k; \\
& Ax = b,
\end{split}
\end{equation}
is feasible. Then there exists a solution $x^*$ to \eqref{qcqp:sp:def} such that
\[\mathop{\mathrm{card}}(x^*)\leq m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1).\]
Moreover, $x^*$ can be found in polynomial time.
\end{theorem}
\begin{proof}
For each $i = 0,1,\dots,k$, let $r_i = \rank(Q_i)$. Since $Q_i \in \mathbb{S}^n_+$, there exist an orthogonal matrix $U_i$ and a diagonal matrix $D_i$ such that $Q_i = U_i^T D_i U_i$ and
\[
D_i =
\begin{bmatrix}
B_i^2 & 0 \\
0 & 0
\end{bmatrix},
\]
where $B_i \in \mathbb{R}^{r_i \times r_i}$ is a diagonal matrix. Let
\[
P_i = \begin{bmatrix}
B_i & 0
\end{bmatrix} U_i \in \mathbb{R}^{r_i \times n}.
\]
Then, $Q_i = P_i^T P_i$ and $x^T Q_i x = \Vert P_i x \rVert_2^2$. Let $y \in \mathbb{R}^n$ be a solution to the QCQP \eqref{qcqp:sp:def}. Now, consider the following LP:
\begin{equation} \label{pf:sp:qcqp:01}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c_0^T x \\
\textrm{subject to} \hspace{2mm} & P_i x = P_i y, \hspace{2mm} i = 0,\dots,k; \\
& c_i^T x = c_i^T y, \hspace{2mm} i = 1,\dots,k; \\
& Ax = b.
\end{split}
\end{equation}
Note that the above LP has a finite optimal value and $y$ is a solution to it. To see this, first note that $y$ is clearly feasible. Second, if there were a solution $z$ such that $c_0^T z < c_0^T y$, then $z$ would be a feasible point of the QCQP \eqref{qcqp:sp:def} (its constraint values agree with those of $y$, since $P_i z = P_i y$ and $c_i^T z = c_i^T y$) with $z^T Q_0 z + c_0^T z = \Vert P_0 z \rVert_2^2 + c_0^T z < \Vert P_0 y \rVert_2^2 + c_0^T y = y^T Q_0 y + c_0^T y$, which contradicts the fact that $y$ is a solution to the QCQP \eqref{qcqp:sp:def}. Thus, $y$ is a solution to the LP \eqref{pf:sp:qcqp:01}. Now, we can find a solution $x^*$ to the LP \eqref{pf:sp:qcqp:01} of cardinality at most
\[m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1),\]
by Lemma \ref{lem:lp_sp}. Since $x^*$ and $y$ are both solutions of \eqref{pf:sp:qcqp:01},
\[
c_0^T x^* = c_0^T y.
\]
Thus,
\[
(x^*)^T Q_0 x^* + c_0^T x^* = \Vert P_0 x^* \rVert_2^2 + c_0^T y = \Vert P_0 y \rVert_2^2 + c_0^T y = y^T Q_0 y + c_0^T y.
\]
Thus, $x^*$ is a solution to the QCQP \eqref{qcqp:sp:def}. Since LP sparsification can be done in polynomial time, we can find $x^*$ in polynomial time by the above procedure.
\end{proof}
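Assembling the auxiliary LP \eqref{pf:sp:qcqp:01} is routine linear algebra; a minimal sketch (the factor $P_i$ is read off the eigendecomposition of $Q_i$, and the stacked equality system can then be handed to the procedure of Lemma \ref{lem:lp_sp}):
\begin{verbatim}
import numpy as np

def factor_psd(Q, tol=1e-10):
    """Return P with Q = P^T P, where P has rank(Q) rows."""
    w, U = np.linalg.eigh(Q)
    keep = w > tol
    return np.sqrt(w[keep])[:, None] * U[:, keep].T

def qcqp_aux_lp(Qs, cs, A, b, y):
    """Stack P_i x = P_i y (i = 0..k), c_i^T x = c_i^T y (i >= 1), Ax = b."""
    rows, rhs = [], []
    for i, Q in enumerate(Qs):
        P = factor_psd(Q)
        rows.append(P); rhs.append(P @ y)
        if i >= 1:                        # c_0 remains the LP objective
            rows.append(cs[i][None, :])
            rhs.append(np.atleast_1d(cs[i] @ y))
    rows.append(A); rhs.append(b)
    return np.vstack(rows), np.concatenate(rhs)
\end{verbatim}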
Theorem \ref{thm:qcqp:sparsity} gives an upper bound on the minimal cardinality of solutions of QCQP. In the following example, we show that this bound is tight, which means that it cannot be improved without additional assumptions.
\begin{example}
Let $n,m,\rank(Q_0),\dots,\rank(Q_k) \in \mathbb{Z}^+$ be given. To make the bound in Theorem \ref{thm:qcqp:sparsity} nontrivial, assume that
\[
m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1) < n.
\]
For each $i = 0,1,\dots,k$, let
\[
r_i = \rank(Q_i), \quad \textrm{and} \quad s_i = \sum_{j = 0}^{i-1}(r_j+1).
\]
Note that $s_0 = 0$ since it is an empty sum.
For each $i = 0,1,\dots,k$, let
\[
Q_i =
\begin{bmatrix}
0_{s_i \times s_i} & 0_{s_i \times r_i} & 0_{s_i \times (n - r_i - s_i)} \\
0_{r_i \times s_i} & I_{r_i} & 0_{r_i \times (n - r_i - s_i)} \\
0_{(n - r_i - s_i) \times s_i} & 0_{(n - r_i - s_i) \times r_i} & 0_{(n - r_i - s_i) \times (n - r_i - s_i)}
\end{bmatrix} \in \mathbb{R}^{n \times n},
\]
where $I_{r_i} \in \mathbb{R}^{r_i \times r_i}$ is the identity matrix and
$0_{s \times t} \in \mathbb{R}^{s \times t}$ denotes a matrix whose entries are zeros.
For each $i = 1,\dots,k$, let
\[
c_i = -e_{s_i + r_i + 1} - 2 \sum_{j = s_i+1}^{s_i+r_i} e_j,
\]
where $e_{s_i + r_i + 1} = (0,\dots,0,1,0,\dots,0)^T \in \mathbb{R}^n$ is the $(s_i + r_i + 1)$th standard basis vector. Let
\[
c_0 = -2\sum_{i = 1}^{r_0}e_i + \sum_{i = 1}^k e_{s_i + r_i + 1}.
\]
Let
\[
A =
\begin{bmatrix}
0_{m \times s_{k+1}} & I_{m} & 0_{m \times (n-m-s_{k+1})}
\end{bmatrix}.
\]
Consider the following QCQP:
\begin{equation} \label{eg:sp:qcqp:01}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & x^T Q_0 x + c_0^T x \\
\textrm{subject to} \hspace{2mm} & x^T Q_i x + c_i^T x + r_i + 1 \leq 0, \hspace{2mm} i = 1,\dots k; \\
& Ax = \mathbbm{1}_m,
\end{split}
\end{equation}
where $\mathbbm{1}_m \in \mathbb{R}^m$ is a vector whose entries are ones. We claim that any solution to the above QCQP has at least $m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1)$ nonzero entries. Let $z = (z_1,\dots,z_n) \in \mathbb{R}^n$ be a feasible point of the QCQP \eqref{eg:sp:qcqp:01}. Then, for each $i = 1,\dots,k$,
\begin{equation*}
z^T Q_i z + c_i^T z + r_i + 1 \leq 0,
\end{equation*}
which implies
\begin{equation*}
\sum_{j = s_i+1}^{s_i+r_i} z_j^2 - 2 \sum_{j = s_i+1}^{s_i+r_i} z_j - z_{s_i+r_i+1} + r_i + 1 \leq 0,
\end{equation*}
which implies
\begin{equation} \label{pf:eg:qcqp:sp:01}
z_{s_i+r_i+1} \geq 1 + \sum_{j = s_i+1}^{s_i+r_i} (z_j^2 - 2 z_j + 1) \geq 1.
\end{equation}
Then,
\begin{equation} \label{pf:eg:qcqp:sp:02}
z^T Q_0 z + c_0^T z = \sum_{i = 1}^{r_0} (z_i^2 - 2 z_i) + \sum_{i = 1}^k z_{s_i+r_i+1} \geq -r_0 + \sum_{i = 1}^k 1 = k - r_0,
\end{equation}
where the last step follows from equation \eqref{pf:eg:qcqp:sp:01}.
Let $x^* \in \mathbb{R}^n$ be the vector with $x^*_i = 1$ for every $i \in \{1,\dots,s_{k+1}+m\} \setminus \{s_1\}$ and $x^*_i = 0$ otherwise; the entry $s_1 = r_0+1$ appears in no constraint and in no objective term, and $x^*$ has exactly $m - 1 + \sum_{i = 0}^k (r_i+1)$ nonzero entries. Then, $x^*$ satisfies all the constraints in the QCQP \eqref{eg:sp:qcqp:01} and
\begin{equation} \label{pf:eg:qcqp:sp:03}
(x^*)^T Q_0 x^* + c_0^T x^* = k - r_0.
\end{equation}
Thus, by equations \eqref{pf:eg:qcqp:sp:02} and \eqref{pf:eg:qcqp:sp:03}, the optimal value of the QCQP \eqref{eg:sp:qcqp:01} is $k - r_0$.
Now, let $y \in \mathbb{R}^n$ be a solution to the QCQP \eqref{eg:sp:qcqp:01}. We will show that $\mathop{\mathrm{card}}(y) \geq m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1)$. By equation \eqref{pf:eg:qcqp:sp:01}, we have
\begin{equation*}
y_{s_i+r_i+1} \geq 1,
\end{equation*}
for all $i = 1,\dots,k$. Since $y$ is optimal, we have
\begin{equation*}
k - r_0 = y^T Q_0 y + c_0^T y = \sum_{i = 1}^{r_0} (y_i^2 - 2y_i) + \sum_{i = 1}^k y_{s_i+r_i+1} \geq -r_0 + \sum_{i = 1}^k 1 = k - r_0,
\end{equation*}
which implies that
\begin{equation} \label{pf:eg:qcqp:sp:04}
y_{s_i+r_i+1} = 1,
\end{equation}
for all $i = 1,\dots,k$ and
\begin{equation} \label{pf:eg:qcqp:sp:05}
y_i = 1,
\end{equation}
for all $i = 1,\dots,r_0$. By equations \eqref{pf:eg:qcqp:sp:04} and \eqref{pf:eg:qcqp:sp:01},
\begin{equation} \label{pf:eg:qcqp:sp:06}
y_j = 1
\end{equation}
for all $j = s_i + 1, \dots, s_i + r_i$ and all $i = 1,\dots,k$. Then, equations \eqref{pf:eg:qcqp:sp:04}, \eqref{pf:eg:qcqp:sp:05}, and \eqref{pf:eg:qcqp:sp:06}, together with the fact that $A y = \mathbbm{1}_m$, imply that
\[
y_j = 1
\]
for every $j \in \{1,\dots,s_{k+1}+m\} \setminus \{s_1\}$, a set of $m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1)$ indices. Thus,
\begin{equation*}
\mathop{\mathrm{card}}(y) \geq m - 1 + \sum_{i = 0}^k (\rank(Q_i)+1).
\end{equation*}
This shows that the bound in Theorem \ref{thm:qcqp:sparsity} is tight.
\end{example}
\subsection{SOCP sparsification}
Consider the following Second Order Cone Programming (SOCP):
\begin{equation*}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,
\end{split}
\end{equation*}
where $A_i \in \mathbb{R}^{m_i \times n}, b_i \in \mathbb{R}^{m_i}, c_i \in \mathbb{R}^n$, and $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $F \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n$, and $g \in \mathbb{R}^{m}$.
In this section, we first give a sparsification result on SOCP. Then, we show that this sparsification result cannot be improved without additional assumptions.
\begin{theorem} \label{thm:socp:sparsity}
Let $A_i \in \mathbb{R}^{m_i \times n}, b_i \in \mathbb{R}^{m_i}, c_i \in \mathbb{R}^n$, and $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $F \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n$, and $g \in \mathbb{R}^{m}$ be given.
Suppose the following SOCP:
\begin{equation} \label{def:socp}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,
\end{split}
\end{equation}
is feasible. Then there exists a solution $x^*$ to \eqref{def:socp} such that
\[
\mathop{\mathrm{card}}(x^*)\leq m + \sum_{i = 1}^k(m_i + 1).
\]
Moreover, $x^*$ can be found in polynomial time.
\end{theorem}
\begin{proof}
Let $y$ be a solution to the SOCP \eqref{def:socp}.
Then, consider the following LP:
\begin{equation} \label{pf:thm:socp:sp}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & A_ix = A_iy, \hspace{2mm} i = 1,\dots,k; \\
& c_i^Tx = c_i^Ty, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g.
\end{split}
\end{equation}
Note that the above LP has a finite optimal value and $y$ is a solution to it. To see this, first $y$ is clearly feasible. Second, if there is a solution $z$ such that $c^T z < c^T y$, then $z$ is a feasible point of the SOCP \eqref{def:socp} and $c^T z < c^T y$, which contradicts the fact that $y$ is a solution to the SOCP \eqref{def:socp}. Thus, $y$ is a solution to the LP \eqref{pf:thm:socp:sp}.
Now, we can find a solution $x^*$ to the LP \eqref{pf:thm:socp:sp} of cardinality at most
\[m + \sum_{i = 1}^k(m_i + 1),\]
by Lemma \ref{lem:lp_sp}.
Since $x^*$ and $y$ are both solutions of \eqref{pf:thm:socp:sp},
\[
c^Tx^* = c^Ty.
\]
Thus, $x^*$ is a solution to the SOCP \eqref{def:socp}. Since LP sparsification can be done in polynomial time, we can find $x^*$ in polynomial time by the above procedure.
\end{proof}
Theorem \ref{thm:socp:sparsity} gives an upper bound on the minimal cardinality of solutions of SOCP. In the following example, we show that this bound is tight, which means that it cannot be improved without additional assumptions.
\begin{example}
Let $n,m,m_1,m_2,\dots,m_k \in \mathbb{Z}^+$ be given. To make the bound in Theorem \ref{thm:socp:sparsity} nontrivial, assume that
\[m + \sum_{i = 1}^k (m_i+1) < n.\]
For each $i = 1,\dots,k+1$, let
\[r_i = \sum_{j = 1}^{i-1} (m_j+1). \]
Note that $r_1 = 0$ since it is an empty sum.
For each $i = 1,\dots,k$, let
\[c_i = e_{r_i + m_i + 1}\] and
\begin{equation*}
E_i =
\begin{bmatrix}
0_{m_i \times r_i} & I_{m_i} & 0_{m_i \times (n-r_{i+1}+1)}
\end{bmatrix} \in \mathbb{R}^{m_i \times n},
\end{equation*}
where $I_{m_i} \in \mathbb{R}^{m_i \times m_i}$ is the identity matrix, $0_{s \times t} \in \mathbb{R}^{s \times t}$ denotes a matrix whose entries are zeros, and $e_{r_i + m_i + 1} = (0,\dots,0,1,0,\dots,0)^T \in \mathbb{R}^n$ is the $(r_i+m_i+1)$th standard basis vector.
Let
\[
F =
\begin{bmatrix}
0_{m \times r_{k+1}} & I_{m} & 0_{m \times (n-m-r_{k+1})}
\end{bmatrix}.
\]
Let
\[
c = \sum_{i = 1}^k c_i.
\]
Consider the following SOCP:
\begin{equation} \label{socp:sp:tight:eg}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert E_i x - \mathbbm{1}_{m_i} \rVert_2 \leq c_i^Tx - 1,\hspace{2mm} i = 1, \dots, k; \\
& Fx = \mathbbm{1}_m,
\end{split}
\end{equation}
where $\mathbbm{1}_d \in \mathbb{R}^d$ is a vector whose entries are ones.
We claim that any solution to the above SOCP has at least $m + \sum_{i = 1}^k (m_i+1)$ nonzero entries. Let $z$ be a feasible point of the SOCP \eqref{socp:sp:tight:eg}. Then, for each $i = 1, \dots, k$,
\begin{equation*}
z_{r_i+m_i+1} - 1 = c_i^Tz -1 \geq \Vert E_i z - \mathbbm{1}_{m_i} \rVert_2 \geq 0,
\end{equation*}
which implies that
\begin{equation} \label{pf:socp:eg:03}
z_{r_i+m_i+1} \geq 1
\end{equation}
for all $i = 1, \dots, k$. Thus,
\begin{equation} \label{pf:socp:eg:01}
c^Tz = \sum_{i = 1}^k z_{r_i+m_i+1} \geq k.
\end{equation}
Let $x^* = (1,1,\dots,1,0,\dots,0) \in \mathbb{R}^n$ be a vector whose first $m + \sum_{i = 1}^k (m_i+1)$ entries are ones and the rest are zeros. Then, $x^*$ satisfies all the constraints in the SOCP \eqref{socp:sp:tight:eg} and
\begin{equation} \label{pf:socp:eg:02}
c^Tx^* = k.
\end{equation}
Thus, by equations \eqref{pf:socp:eg:01} and \eqref{pf:socp:eg:02}, the optimal value of the SOCP \eqref{socp:sp:tight:eg} is $k$.
Now, let $y \in \mathbb{R}^n$ be a solution to the SOCP \eqref{socp:sp:tight:eg}. We will show that $\mathop{\mathrm{card}}(y) \geq m + \sum_{i = 1}^k (m_i+1)$. By equation \eqref{pf:socp:eg:03}, we have
\begin{equation} \label{pf:socp:eg:04}
y_{r_i+m_i+1} \geq 1,
\end{equation}
for all $i = 1,\dots,k$.
Since $y$ is optimal, we have
\begin{equation} \label{pf:socp:eg:05}
k = c^T y = \sum_{i = 1}^k y_{r_i+m_i+1}.
\end{equation}
By equations \eqref{pf:socp:eg:04} and \eqref{pf:socp:eg:05},
\begin{equation} \label{pf:socp:eg:06}
y_{r_i+m_i+1} = 1,
\end{equation}
for all $i = 1,\dots,k$.
This implies that for each $i = 1,\dots,k$,
\begin{equation}
\Vert E_iy - \mathbbm{1}_{m_i} \rVert_2 = 0,
\end{equation}
which implies that
\begin{equation} \label{pf:socp:eg:07}
E_i y = \mathbbm{1}_{m_i}.
\end{equation}
Then, equations \eqref{pf:socp:eg:06} and \eqref{pf:socp:eg:07}, together with the fact that $F y = \mathbbm{1}_m$, imply that
\begin{equation*}
y_i = 1
\end{equation*}
for all $i = 1,\dots,m+\sum_{j = 1}^k (m_j+1)$. Thus,
\begin{equation*}
\mathop{\mathrm{card}}(y) \geq m+\sum_{j = 1}^k (m_j+1).
\end{equation*}
This implies that our bound in Theorem \ref{thm:socp:sparsity} is tight.
\end{example}
\section{Rank-Constrained Hyperbolic Programming}
In this section, we consider rank-constrained Hyperbolic Programming (HP), which unifies rank-constrained SDP and sparsity-constrained LP. We extend rank reduction techniques to rank-constrained QCQP and rank-constrained SOCP, which are special cases of rank-constrained HP. In addition, we study the complexity of these two optimization problems.
We first recall the definition of a hyperbolic polynomial \cite{Brand2011, hprank2, Renegar2006}.
\begin{definition}
A homogeneous polynomial $p: \mathbb{R}^n \longrightarrow \mathbb{R}$ is hyperbolic if there exists a direction $e \in \mathbb{R}^n$ such that $p(e) \neq 0$ and for each $x \in \mathbb{R}^n$ the univariate polynomial $t \mapsto p(x-te)$ has only real roots. The polynomial $p$ is said to be hyperbolic in direction $e$.
\end{definition}
Then, we recall the definition of characteristic polynomial and eigenvalues in HP \cite{Brand2011, hprank2, Renegar2006}.
\begin{definition}
Given $x \in \mathbb{R}^n$ and a polynomial $p: \mathbb{R}^n \longrightarrow \mathbb{R}$ that is hyperbolic in direction $e \in \mathbb{R}^n$, the characteristic polynomial of $x$ with respect to $p$ in direction $e$ is the univariate polynomial $\lambda \mapsto p(x - \lambda e)$. The roots of the characteristic polynomial are the eigenvalues of $x$.
\end{definition}
Next, we recall the definition of hyperbolic programming \cite{Brand2011, hprank2, Renegar2006}.
\begin{definition}
Given a hyperbolic polynomial $p: \mathbb{R}^n \longrightarrow \mathbb{R}$ that is hyperbolic in direction $e \in \mathbb{R}^n$, the hyperbolic cone for $p$ in direction $e$ is defined as
\begin{equation*}
\Lambda_{++} \vcentcolon = \{x \in \mathbb{R}^n:\lambda_{\textrm{min}}(x) > 0 \},
\end{equation*}
where $\lambda_{\textrm{min}}(x)$ is the minimum eigenvalue of $x$. Let
\begin{equation*}
\Lambda_+ \vcentcolon = \{x \in \mathbb{R}^n:\lambda_{\textrm{min}}(x) \geq 0 \}
\end{equation*}
be the closure of $\Lambda_{++}$.
\end{definition}
\begin{definition}
Given a hyperbolic polynomial $p: \mathbb{R}^n \longrightarrow \mathbb{R}$ that is hyperbolic in direction $e \in \mathbb{R}^n$, a hyperbolic program is an optimization problem of the form
\begin{equation} \label{def:hp}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & Ax = b; \\
& x \in \Lambda_+,
\end{split}
\end{equation}
where $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$ are given.
\end{definition}
Finally, we recall the definition of rank in HP \cite{Brand2011, hprank2, Renegar2006}.
\begin{definition}
Let $p$ be a hyperbolic polynomial that is hyperbolic in direction $e \in \mathbb{R}^n$. The rank of $x \in \mathbb{R}^n$ is defined as $\rank(x) \vcentcolon = \deg p(e + tx)$, where $t$ is the indeterminate. Equivalently, $\rank(x)$ is the number of non-zero eigenvalues of $x$.
\end{definition}
Note that HP includes SDP and LP as special cases. Moreover, rank-constrained HP includes rank-constrained SDP and sparsity-constrained LP as special cases. To be specific, when $p(X) = \det(X)$ and $e = I$ (in this case, the domain of $p$ is the set of symmetric matrices, which can be identified with $\mathbb{R}^{n(n+1)/2}$), HP becomes SDP and $\rank(X)$ is simply the usual rank of a matrix \cite{Brand2011, hprank2, Renegar2006}. In addition, when $p(x) = \prod_{i = 1}^n x_i$, where $x_i$ is the $i$th component of $x$ and $e = (1,1,\dots, 1)$, HP becomes LP and $\rank(x) = \mathop{\mathrm{card}}(x)$ \cite{Brand2011, hprank2, Renegar2006}.
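For instance, in these two cases the rank can be read off directly from the definition $\rank(x) = \deg p(e + tx)$:
\[
\det(I + tX) = \prod_{i = 1}^n \bigl(1 + t\lambda_i(X)\bigr)
\qquad \textrm{and} \qquad
p(e + tx) = \prod_{i = 1}^n (1 + tx_i),
\]
whose degrees in $t$ are, respectively, the number of nonzero eigenvalues of $X$ and the number of nonzero entries of $x$.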
\subsection{Rank-Constrained SOCP}
In this section, we study rank-constrained SOCP. We first define the rank of SOCP by viewing it as a HP. Then, we give a rank reduction result for SOCP. Next, we show that Max-Cut can be written as a rank-constrained SOCP. Finally, we study the complexity of rank-constrained SOCP and show that it is NP-hard in certain circumstances.
\subsubsection{SOCP rank reduction}
We consider the following SOCP:
\begin{equation} \label{hp:socp:def}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,
\end{split}
\end{equation}
where $A_i \in \mathbb{R}^{m_i \times n}, b_i \in \mathbb{R}^{m_i}, c_i \in \mathbb{R}^n$, and $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $F \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n$, and $g \in \mathbb{R}^{m}$. We first show that it can be written as an HP. This step is similar to the Lorentz cone example in \cite{Sendov2001}. First, we associate each second order cone constraint $\Vert A_ix + b_i \rVert_2 \leq c_i^T x + d_i$ with a new variable $y_i$. Let $y = (y_1, \dots, y_k)$. Let $z = (x,y)$. For each $j \in [m_i]$, let $a_{i,j}^T$ be the $j$th row of $A_i$, $b_{i,j}$ be the $j$th entry of $b_i$, and let $q_{i,j}(x) = a_{i,j}^T x + b_{i,j}$. Then let
\[
p_i(z) = y_i^2 - \sum_{j = 1}^{m_i} q_{i,j}(x)^2.
\]
Now let
\[
p(z) = \prod_{i = 1}^k p_i(z).
\]
Let $e = (0,\dots,0,1,\dots,1)$ where there are $n$ zeros followed by $k$ ones. For each $i$,
\begin{equation} \label{eq:2}
p_i(z - te) = (y_i - t)^2 - \sum_{j = 1}^{m_i} q_{i,j}(x)^2 = t^2 - 2y_it + \Biggl(y_i^2 - \sum_{j = 1}^{m_i} q_{i,j}(x)^2\Biggr).
\end{equation}
The discriminant
\[
\Delta_i = 4y_i^2 - 4\Biggl(y_i^2 - \sum_{j = 1}^{m_i} q_{i,j}(x)^2\Biggr) = 4\sum_{j = 1}^{m_i} q_{i,j}(x)^2 \geq 0.
\]
Thus, $p_i$ is hyperbolic in $e$ for all $i$. Hence, $p$ is hyperbolic in $e$.
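Explicitly, solving the quadratic in equation \eqref{eq:2} shows that the eigenvalues of $z$ contributed by the $i$th factor are
\[
t = y_i \pm \sqrt{\sum_{j = 1}^{m_i} q_{i,j}(x)^2} = y_i \pm \Vert A_ix + b_i \rVert_2.
\]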
Note that the roots of $p(z - te)$ are all positive if and only if the roots of $p_i(z - te)$ are all positive for every $i$.
From equation (\ref{eq:2}), we see that this holds if and only if
\[
y_i \geq 0 \quad \textrm{and} \quad y_i^2 \geq \sum_{j = 1}^{m_i} q_{i,j}(x)^2.
\]
This is equivalent to
\[
y_i \geq \sqrt{\sum_{j = 1}^{m_i} q_{i,j}(x)^2} = \Vert A_ix + b_i \rVert_2.
\]
Now for each $i$, we add the linear constraint
\[
y_i = c_i^Tx + d_i.
\]
Then the resulting HP with the original linear constraint $Fx = g$ and the new linear constraints $y_i = c_i^Tx + d_i$ is equivalent to the SOCP problem \eqref{hp:socp:def}.
Next, we consider the rank. Note that
\[
p_i(e + tz) = (ty_i + 1)^2 - \sum_{j = 1}^{m_i} q_{i,j}(x)^2 t^2 = \Biggl(y_i^2 - \sum_{j = 1}^{m_i} q_{i,j}(x)^2\Biggr) t^2 + 2y_it + 1.
\]
Then we have
\[
\deg(p_i(e + tz)) = \begin{cases}
0 & \hspace{2mm} \textrm{if} \hspace{2mm} y_i^2 = \sum_{j = 1}^{m_i} q_{i,j}(x)^2 = 0 \\
1 & \hspace{2mm} \textrm{if} \hspace{2mm} y_i^2 = \sum_{j = 1}^{m_i} q_{i,j}(x)^2 \neq 0 \\
2 & \hspace{2mm} \textrm{otherwise}.
\end{cases}
\]
Let $s(x)$ be the number of second order cone constraints that are satisfied with equality. Let $e(x)$ be the number of second order cone constraints that are satisfied with equality with both sides equal to zero.
Since
\[
\deg(p(e+tz)) = \sum_{i = 1}^k \deg(p_i(e + tz)),
\]
we have
\[
\rank(z) = 2k - s(x) - e(x).
\]
In addition, setting $y_i = c_i^Tx + d_i$, we define
\[\rank(x) = \rank(z).\]
Now we consider SOCP rank reduction.
\begin{theorem} \label{socp:rank:reduction}
Let $A_i \in \mathbb{R}^{m_i \times n}, b_i \in \mathbb{R}^{m_i}, c_i \in \mathbb{R}^n$, and $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $F \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n$, and $g \in \mathbb{R}^{m}$ be given.
Suppose the following SOCP:
\begin{equation} \label{def:socp:hp}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,
\end{split}
\end{equation}
is feasible and
\[
\bigcap_{i \in [k+1]} \ker B_i = \{0\},
\]
where
\[
B_i = \begin{bmatrix}
A_{i} \\
c_{i}^T
\end{bmatrix} \quad \textrm{for each} \hspace{2mm} i \in [k], \hspace{2mm} \textrm{and} \quad B_{k+1} = \begin{bmatrix}
F \\
c^T
\end{bmatrix}.
\]
Then there exists a solution $x^*$ to \eqref{def:socp:hp} such that
\[
\rank(x^*) \leq 2k - \Bigl \lceil \frac{n}{\max(m,m_1,m_2,\dots,m_k)+1} \Bigr \rceil + 1.
\]
Moreover, $x^*$ can be found in polynomial time.
\end{theorem}
\begin{proof}
It suffices to show that we can find a solution $x$ such that
\[
s(x) \geq \Bigl \lceil \frac{n}{\max(m,m_1,m_2,\dots,m_k)+1} \Bigr \rceil - 1.
\]
Let
\[
m' = \max(m,m_1,m_2,\dots,m_k).
\]
Let $x^{(0)}$ be a solution to the SOCP. We define an algorithm iteratively with the inductive hypothesis that at the beginning of step $i$,
\[
\lVert A_j x^{(i-1)} + b_j \rVert_2 = c_j^T x^{(i-1)} + d_j,
\]
for all $j \in S_{i-1}$, where $x^{(i)}$ is the value of $x$ at the end of the $i$th iteration and $S_i$ is a set of size $i$ which is updated in each iteration. This clearly holds for $i = 1$ with $S_0 = \emptyset$. At step $i$, we do the following. Let $S_{i - 1} = \{u_1, \dots, u_{i-1}\}$ and
\[
M_i = \begin{bmatrix}
B_{k+1} \\
B_{u_1} \\
\vdots \\
B_{u_{i-1}}
\end{bmatrix}
\]
Then $M_i \in \mathbb{R}^{t_i \times n}$ for some $t_i \leq (m'+1)i$. Suppose that
\[
i \leq \bigg \lceil \frac{n}{m'+1} \bigg \rceil - 1.
\]
Then $t_i \leq (m'+1)i<n$. Thus, $\ker M_i \neq \{0\}$. Take $v \in \ker M_i$ such that $v \neq 0$. Since
\[
\bigcap_{i \in [k+1]} \ker B_i = \{0\},
\]
there exists $u \in [k]$ such that $B_{u} v \neq 0$ (note that $B_{k+1}v = 0$ automatically, since $B_{k+1}$ is a block of $M_i$). Clearly, $u \notin S_{i-1}$, since $v \in \ker B_{u_j}$ for every $u_j \in S_{i-1}$. Now we consider two cases. If $c_{u}^T v = 0$, then $A_{u} v \neq 0$. Thus,
\[
\lVert A_{u} (x^{(i-1)} +\lambda v) + b_{u} \rVert_2 - (c_{u}^T (x^{(i-1)} +\lambda v) + d_{u}) \longrightarrow \infty,
\]
as $\lambda \longrightarrow \infty$. Since the above expression is less than or equal to $0$ when $\lambda = 0$, there exists $\lambda^*$ such that
\[
\lVert A_{u} (x^{(i-1)} +\lambda^* v) + b_{u} \rVert_2 - (c_{u}^T (x^{(i-1)} +\lambda^* v) + d_{u}) = 0.
\]
Now if $c_{u}^T v \neq 0$, then multiplying $v$ by $-1$ if necessary, we can assume that $c_{u}^T v < 0$. Thus,
\[
\lVert A_{u} (x^{(i-1)} +\lambda v) + b_{u} \rVert_2 - (c_{u}^T (x^{(i-1)} +\lambda v) + d_{u}) \longrightarrow \infty,
\]
as $\lambda \longrightarrow \infty$. Then again we have there exists $\lambda^*$ such that
\[
\lVert A_{u} (x^{(i-1)} +\lambda^* v) + b_{u} \rVert_2 - (c_{u}^T (x^{(i-1)} +\lambda^* v) + d_{u}) = 0.
\]
Now let
\[E = \{u:B_{u} v \neq 0\}.\]
For each $u \in E$, let
\[
\lambda_u = \arg\min_{\lambda \in \mathbb{R}} \Biggl \{|\lambda|: \lVert A_{u} (x^{(i-1)} +\lambda v) + b_{u} \rVert_2 = (c_{u}^T (x^{(i-1)} +\lambda v) + d_{u}) \Biggr \}.
\]
Now let
\[u_i \in \arg\min_{u \in E} |\lambda_u|.\]
Update
\[
\begin{split}
x^{(i)} & = x^{(i-1)} + \lambda_{u_i} v \\
S_i & = S_{i-1} \cup \{u_i\}.
\end{split}
\]
Then clearly
\[
\lVert A_j x^{(i-1)} + b_j \rVert_2 = c_j^T x^{(i-1)} + d_j,
\]
for all $j \in S_{i}$ and
\[
|S_i| = |S_{i-1}| + 1 = i.
\]
Note that $x^{(i)}$ is still a feasible solution: for $u \notin E$ the constraint values are unchanged, since $B_u v = 0$, and for all $u \in E$,
\[
\lVert A_{u} (x^{(i-1)} +\lambda_{u_i} v) + b_{u} \rVert_2 - (c_{u}^T (x^{(i-1)} +\lambda_{u_i} v) + d_{u}) \leq 0,
\]
by minimality of $|\lambda_{u_i}|$. This process stops when $i = \lceil n/(m'+1)\rceil$. Then we have
\[
\lVert A_j x^{(i-1)} + b_j \rVert_2 = c_j^T x^{(i-1)} + d_j,
\]
for all $j \in S_{\lceil n/(m'+1) \rceil - 1}$ and thus
\[
s(x^{(i-1)}) \geq \Bigl \lceil \frac{n}{m'+1} \Bigr \rceil - 1.
\]
\end{proof}
\subsubsection{Complexity of rank-constrained SOCP}
In this section, we study the complexity of rank-constrained SOCP. First, we will show that Max-Cut can be formulated as a rank-constrained SOCP. Then, we will use this reduction to show that rank-constrained SOCP is NP-hard under certain circumstances. Our reduction of Max-Cut to rank-constrained SOCP is a slight modification of the SOCP relaxation results in \cite{MST2003}.
\begin{example}
First, we consider a nonconvex Quadratically Constrained
Linear Program (QCLP):
\begin{equation} \label{def:nonconvex:qclp}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & x^TQ_ix+ g_i^Tx + f_i = 0,\hspace{2mm} i = 1,\dots,m,
\end{split}
\end{equation}
where $Q_i \in \mathbb{S}^n$, $g_i \in \mathbb{R}^n$, and $f_i \in \mathbb{R}$. We will first show that the above nonconvex QCLP is equivalent to a rank-constrained SOCP. Then, since Max-Cut can be written as a nonconvex QCLP, Max-Cut is also equivalent to a rank-constrained SOCP.
\pmb{QCLP to rank-constrained SOCP:}
Since
\[
x^TQ_ix = \tr(Q_i x x^T),
\]
the QCLP in \eqref{def:nonconvex:qclp} is clearly equivalent to
\[
\begin{split}
\underset{x \in \mathbb{R}^n, X \in \mathbb{S}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \tr(Q_i X)+ g_i^Tx + f_i = 0,\hspace{2mm} i = 1,\dots,m, \\
& X - xx^T = 0.
\end{split}
\]
The above is an LP with the additional constraint $X - xx^T = 0$. Thus, it suffices to write that constraint as second order cone constraints. Let $C_1, \dots, C_r$ be an orthonormal basis for $\mathbb{S}^n$. Then $X - xx^T = 0$ if and only if
\[
\tr(C_i (X - xx^T)) = 0,
\]
for all $i \in [r]$. For each $i$, there exists $\lambda_i$ such that $C_i + \lambda_i I \in \mathbb{S}^n_+$. Let $\Tilde{C}_i = C_i + \lambda_i I$. Then for all $i \in [r]$, $\tr(C_i (X - xx^T)) = 0$ if
\begin{equation} \label{nonconvex:qclp:eq:1}
\tr(\Tilde{C}_i (X - xx^T)) = 0, \hspace{2mm} \textrm{and} \hspace{2mm} \tr(I (X - xx^T)) = 0.
\end{equation}
On the other hand, if $X - xx^T = 0$, then equation \eqref{nonconvex:qclp:eq:1} certainly holds for all $i \in [r]$. Thus, $X - xx^T = 0$ if and only if equation \eqref{nonconvex:qclp:eq:1} holds for all $i \in [r]$.
Now let $A \in \mathbb{S}^n_+$. Then $A = VV^T$ for some $V \in \mathbb{R}^{n \times n}$. Note that
\begin{equation} \label{maxcut:hp:key}
\tr(A (X - xx^T)) = \tr(AX) - x^TVV^Tx = a - u^Tu,
\end{equation}
where $a = \tr(AX), u = V^T x$. Now note that
\[ a- u^Tu = 0\]
if and only if
\[
(a+1)^2 = (a - 1)^2 + 4u^Tu.
\]
Let $w = \begin{bmatrix}
a-1 \\
2u
\end{bmatrix}$,
then the above equation is equivalent to
\[
\lVert w \rVert_2 = a+1.
\]
In other words, we get
\begin{equation} \label{eq:30}
\Bigg \lVert
\begin{bmatrix}
\tr(AX)-1 \\
2V^T x
\end{bmatrix}
\Bigg \rVert_2
= \tr(AX) + 1,
\end{equation}
which is a second order cone constraint with equality. Note that the left hand side and right hand side of \eqref{eq:30} cannot both be $0$. Otherwise, we would have $\tr(AX) = 1$ from the left hand side and $\tr(AX) = -1$ from the right hand side, which is a contradiction. Thus,
\[
e(x) = 0,
\]
for all $x$ satisfying \eqref{eq:30}, where $e(x)$ is the number of second order cone constraints that are satisfied with equality and with both sides zero. Then, we need
\[
s(x) = k \quad \quad \quad \quad \textrm{and} \quad \quad \quad \quad e(x) = 0,
\]
where
\[
k = 1+\frac{n(n+1)}{2}
\]
is the number of second order cone constraints and $s(x)$ is the number of second order cone constraints that are satisfied with equality. Thus, the original QCLP can be written as an SOCP problem with the additional rank constraint
\[
\rank(x) \leq k.
\]
\pmb{Max-Cut to QCLP:}
It is well known that Max-Cut can be written in the form \cite{MST2003}
\[
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & x^TQx \\
\textrm{subject to} \hspace{2mm} & x_i^2 - 1 = 0,\hspace{2mm} i = 1,\dots,n,
\end{split}
\]
for some $Q \in \mathbb{S}^n$. Then it is equivalent to
\[
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & t \\
\textrm{subject to} \hspace{2mm} & x_i^2 - 1 = 0,\hspace{2mm} i = 1,\dots,n, \\
& x^TQx - t = 0,
\end{split}
\]
which is then in the form of the nonconvex QCLP in \eqref{def:nonconvex:qclp}. Thus, Max-Cut is equivalent to a rank-constrained SOCP.
\end{example}
From the above example, we see that rank-constrained SOCP includes some interesting combinatorial problems, which motivates the study of rank-constrained SOCP:
\[
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,\\
& \rank(x) \leq r(k)
\end{split}
\]
where $r(k)$ is a function in $k$, $A_i \in \mathbb{R}^{m_i \times n}, b_i \in \mathbb{R}^{m_i}, c_i \in \mathbb{R}^n$, and $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $F \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n$, and $g \in \mathbb{R}^{m}$. Next, we give a complexity result of this problem.
\begin{theorem} \label{socp:rank:hardness}
Let $A_i \in \mathbb{R}^{m_i \times n}, b_i \in \mathbb{R}^{m_i}, c_i \in \mathbb{R}^n$, and $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $F \in \mathbb{R}^{m \times n}, c \in \mathbb{R}^n$, and $g \in \mathbb{R}^{m}$ be given. Let $s \geq 0$ be a constant. Then the following rank-constrained SOCP:
\begin{equation*}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,\\
& \rank(x) \leq k+s,
\end{split}
\end{equation*}
is NP-hard.
\end{theorem}
\begin{proof}
From the example at the beginning of this section, we see that rank-constrained SOCP with $r(k) = k$ is NP-hard since we can reduce Max-Cut to it. In other words, the following problem:
\begin{equation} \label{eq:10}
\begin{split}
\underset{x\in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & c^Tx \\
\textrm{subject to} \hspace{2mm} & \Vert A_ix + b_i \rVert_2 \leq c_i^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& Fx = g,\\
& \rank(x) \leq k,
\end{split}
\end{equation}
is NP-hard.
Now, we show how to increase $r(k)$ from $k$ to $k+1$. Consider the following problem:
\begin{equation} \label{eq:19}
\begin{split}
\underset{x\in \mathbb{R}^{n+1}}{\textrm{minimize}} \hspace{2mm} & \Tilde{c}^Tx \\
\textrm{subject to} \hspace{2mm} & \big \lVert \Tilde{A_i}x + \Tilde{b}_i \big \rVert_2 \leq \Tilde{c_i}^Tx + d_i, \hspace{2mm} i = 1,\dots,k; \\
& \Tilde{F}x = g,\\
& x_{n+1} = 1,\\
& \lVert Ex \rVert_2 \leq 2, \\
& \rank(x) \leq (k+1)+1,
\end{split}
\end{equation}
where $\Tilde{A}_i = \begin{bsmallmatrix}
A_i \\
0
\end{bsmallmatrix}, \Tilde{c} = \begin{bsmallmatrix}
c \\
0
\end{bsmallmatrix}, \Tilde{b}_i = \begin{bsmallmatrix}
b_i \\
0
\end{bsmallmatrix}, \Tilde{c}_i = \begin{bsmallmatrix}
c_i \\
0
\end{bsmallmatrix}, \Tilde{F} = \begin{bsmallmatrix}
F & 0
\end{bsmallmatrix}, E = e_{n+1}e_{n+1}^T$, and $e_{n+1} = (0,\dots,0,1)$. Note that since $x_{n+1} = 1$,
\[
\lVert Ex \rVert_2 = 1<2.
\]
Thus, this second order cone constraint always holds strictly and contributes $2$ to the rank, so \eqref{eq:19} is equivalent to \eqref{eq:10}. Note that there are $k+1$ second order cone constraints in \eqref{eq:19}. Applying this argument $s$ times finishes the proof.
\end{proof}
\subsection{Rank-Constrained QCQP}
In this section, we study rank-constrained QCQP. We first define the rank of QCQP by viewing it as a SOCP. Then, we apply the results in the previous section to do rank reduction on QCQP. Next, we show that Max-Cut can also be written as a rank-constrained QCQP. Finally, we show that rank-constrained QCQP is NP-hard in certain circumstances.
\subsubsection{QCQP rank reduction}
Consider the following QCQP:
\begin{equation} \label{def:qcqp:rank}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & x^TQ_0x + c_0^Tx \\
\textrm{subject to} \hspace{2mm} & x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots,k; \\
& Ax = b,
\end{split}
\end{equation}
where $Q_i \in \mathbb{S}^n_+$, $c_i \in \mathbb{R}^n$ for each $i = 0,1,\dots,k$, $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$. First, we write the QCQP \eqref{def:qcqp:rank} as a SOCP of the form \eqref{hp:socp:def}. To do this, we first consider the epigraph version of \eqref{def:qcqp:rank}:
\begin{equation} \label{def:qcqp:rank2}
\begin{split}
\underset{x \in \mathbb{R}^n, t \in \mathbb{R}}{\textrm{minimize}} \hspace{2mm} & t \\
\textrm{subject to} \hspace{2mm}
& x^TQ_0x + c_0^Tx - t \leq 0 \\
& x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots,k; \\
& Ax = b.
\end{split}
\end{equation}
Then, it suffices to convert a quadratic constraint
\begin{equation} \label{hp:qcqp:eq:1}
x^TQ_ix + c_i^Tx + d_i \leq 0
\end{equation}
to a second order cone constraint. For each $i = 0,1,\dots,k$, let $r_i = \rank(Q_i)$. Since $Q_i \in \mathbb{S}^n_+$, there exists $P_i \in \mathbb{R}^{r_i \times n}$ such that $Q_i = P_i^T P_i$. Then, constraint \eqref{hp:qcqp:eq:1} becomes
\begin{equation} \label{hp:qcqp:eq:2}
\lVert P_i x \rVert_2^2 \leq -c_i^T x - d_i.
\end{equation}
Since
\[
-c_i^T x - d_i = (1/4 -c_i^T x - d_i)^2 - (1/4 + c_i^T x + d_i)^2,
\]
inequality \eqref{hp:qcqp:eq:2} is equivalent to
\begin{equation*}
\lVert P_i x \rVert_2^2 + (1/4 + c_i^T x + d_i)^2 \leq (1/4 -c_i^T x - d_i)^2,
\end{equation*}
which is equivalent to
\begin{equation} \label{hp:qcqp:eq:3}
\Bigg \lVert
\begin{bmatrix}
P_i x \\
1/4 + c_i^T x + d_i
\end{bmatrix}
\Bigg \rVert_2
\leq 1/4 -c_i^T x - d_i,
\end{equation}
which is a second order cone constraint. Note that the left hand side and the right hand side of \eqref{hp:qcqp:eq:3} cannot both be zero, since $1/4 + c_i^T x + d_i$ and $1/4 - c_i^T x - d_i$ cannot both be zero. Thus, according to the definition of rank for SOCP, the rank in QCQP \eqref{def:qcqp:rank} is defined as
\begin{equation*}
\rank(x) \vcentcolon = 2k+1 - s(x),
\end{equation*}
where $s(x)$ is the number of quadratic constraints in \eqref{def:qcqp:rank} that are satisfied with equality (i.e., $x^TQ_ix + c_i^Tx + d_i = 0$). Note that by writing the QCQP as a SOCP, we obtain $k+1$ second order cone constraints; however, the epigraph constraint is always active, since $x^TQ_0x + c_0^Tx - t = 0$ for any $x$ that attains the optimal value.
Note that in QCQP,
\begin{equation*}
\rank(x) \geq k + 1
\end{equation*}
for all $x \in \mathbb{R}^n$.
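The equivalence between the quadratic constraint \eqref{hp:qcqp:eq:1} and the second order cone constraint \eqref{hp:qcqp:eq:3} can also be sanity-checked numerically. Below is a minimal sketch of ours (not code from the paper); in practice, a factor $P_i$ with $Q_i = P_i^TP_i$ can be obtained from an eigendecomposition or a Cholesky factorization of $Q_i$, while here we simply generate $P_i$ at random.
\begin{verbatim}
# Sketch (ours): verify numerically that x^T Q x + c^T x + d <= 0, with
# Q = P^T P, holds iff ||(P x, 1/4 + c^T x + d)||_2 <= 1/4 - c^T x - d.
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 3
P = rng.standard_normal((r, n))      # Q = P^T P is PSD with rank(Q) <= r
Q = P.T @ P
c = rng.standard_normal(n)
d = -1.0

for _ in range(10_000):
    x = rng.standard_normal(n)
    quad = x @ Q @ x + c @ x + d                     # quadratic margin
    t = c @ x + d
    soc = np.linalg.norm(np.append(P @ x, 0.25 + t)) - (0.25 - t)
    if abs(quad) > 1e-9:             # skip numerically borderline points
        assert (quad <= 0) == (soc <= 0)
print("quadratic and second order cone forms agree on all sampled points")
\end{verbatim}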
Now, we do rank reduction on QCQP. By viewing QCQP as a SOCP and applying Theorem \ref{socp:rank:reduction}, we get the following result.
\begin{theorem}
Let $Q_i \in \mathbb{S}^n_+$, $c_i \in \mathbb{R}^n$ for each $i = 0,1,\dots,k$, $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$ be given.
Suppose the following QCQP:
\begin{equation} \label{qcqp:sp:def2}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm} & x^TQ_0x + c_0^Tx \\
\textrm{subject to} \hspace{2mm} & x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots,k; \\
& Ax = b,
\end{split}
\end{equation}
is feasible and
\[
\bigcap_{i = 0}^{k+1} \ker B_i = \{0\},
\]
where
\[
B_i = \begin{bmatrix}
Q_{i} \\
c_{i}^T
\end{bmatrix} \quad \textrm{for each} \hspace{2mm} i = 0,1,\dots,k, \hspace{2mm} \textrm{and} \quad B_{k+1} = A.
\]
Then there exists a solution $x^*$ to \eqref{qcqp:sp:def2} such that
\begin{equation*}
\rank(x^*) \leq 2k + 3 - \Bigl \lceil \frac{n}{\max(m-1,\rank(Q_0),\rank(Q_1),\dots,\rank(Q_k))+1} \Bigr \rceil.
\end{equation*}
Moreover, $x^*$ can be found in polynomial time.
\end{theorem}
\begin{proof}
First, we write the QCQP as a SOCP, which yields $k+1$ second order cone constraints. Note that for $B_{k+1}$ we do not need to concatenate $A$ with $c_0$, as we did in the SOCP case, since the objective $x^TQ_0x + c_0^Tx$ itself becomes a second order cone constraint. Moreover, if $Q_i = P_i^T P_i$, then $\ker(Q_i) = \ker(P_i)$; hence, for $i = 1,\dots,k$, instead of defining $B_i$ as a concatenation of $P_i$, $c_i^T$, and $c_i^T$, we may define it as a concatenation of $Q_i$ and $c_i^T$, since the two matrices have the same kernel. Applying Theorem \ref{socp:rank:reduction} to the resulting SOCP finishes the proof.
\end{proof}
\subsubsection{Complexity of rank-constrained QCQP}
In this section, we study the complexity of rank-constrained QCQP. First, note that Max-Cut can be written as a rank-constrained QCQP. This follows from the fact that equation \eqref{maxcut:hp:key} can be seen as a quadratic constraint
\begin{equation*}
x^TAx - \tr(AX) \leq 0,
\end{equation*}
that holds with equality. Thus, Max-Cut can be written as a rank-constrained QCQP where the constraint on the rank is $\rank(x) \leq k+1$. Using techniques similar to those in the proof of Theorem \ref{socp:rank:hardness}, we obtain the following result.
\begin{theorem}
Let $Q_i \in \mathbb{S}^n_+$, $c_i \in \mathbb{R}^n$ for each $i = 0,1,\dots,k$, $d_i \in \mathbb{R}$ for each $i = 1,\dots,k$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$ be given. Let $s \geq 1$ be a constant. Then the following rank-constrained QCQP:
\begin{equation*}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm}
& x^TQ_0x + c_0^Tx \\
\textrm{subject to} \hspace{2mm}
& x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots,k; \\
& Ax = b,\\
& \rank(x) \leq k+s,
\end{split}
\end{equation*}
is NP-hard.
\end{theorem}
\begin{proof}
Since we can write Max-Cut as a rank-constrained QCQP with the rank constraint $\rank(x) \leq k+1$, the following problem
\begin{equation} \label{pf:qcqp:rank:1}
\begin{split}
\underset{x \in \mathbb{R}^n}{\textrm{minimize}} \hspace{2mm}
& x^TQ_0x + c_0^Tx \\
\textrm{subject to} \hspace{2mm}
& x^TQ_ix + c_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots,k; \\
& Ax = b,\\
& \rank(x) \leq k+1,
\end{split}
\end{equation}
is NP-hard. Now, we show how to increase the rank constraint from $k+1$ to $k+2$. For each $i = 0,\dots,k$, $Q_i = P_i^T P_i$ for some $P_i \in \mathbb{R}^{n \times n}$. Consider the following problem:
\begin{equation} \label{pf:qcqp:rank:2}
\begin{split}
\underset{x \in \mathbb{R}^{n+1}}{\textrm{minimize}} \hspace{2mm}
& x^T\Tilde{P}_0^T\Tilde{P}_0x + \Tilde{c}_0^Tx \\
\textrm{subject to} \hspace{2mm}
& x^T\Tilde{P}_i^T\Tilde{P}_ix + \Tilde{c}_i^Tx + d_i \leq 0, \hspace{2mm} i = 1,\dots,k; \\
& \Tilde{A}x = b,\\
& x_{n+1} = 1, \\
& x^T E x \leq 2, \\
& \rank(x) \leq (k+1)+2,
\end{split}
\end{equation}
where $\Tilde{A} = \begin{bsmallmatrix}
A &
0
\end{bsmallmatrix}, \Tilde{P}_i = \begin{bsmallmatrix}
P_i &
0
\end{bsmallmatrix}, \Tilde{c}_i = \begin{bsmallmatrix}
c_i \\
0
\end{bsmallmatrix}, E = e_{n+1}e_{n+1}^T$, and $e_{n+1} = (0,\dots,0,1)$. Note that since $x_{n+1} = 1$,
\[
x^T E x = 1<2.
\]
Thus, this quadratic constraint is never active and contributes $2$ to the rank, so \eqref{pf:qcqp:rank:2} is equivalent to \eqref{pf:qcqp:rank:1}. Note that there are $k+1$ quadratic constraints in \eqref{pf:qcqp:rank:2}. Applying this argument $s-1$ times finishes the proof.
\end{proof}
\section{Conclusion}
In this paper, we study rank-constrained and sparsity-constrained HP. For rank-constrained HP, we design algorithms for rank reduction and study the complexity of rank-constrained HP. We show that both rank-constrained QCQP and rank-constrained SOCP are NP-hard. In addition, we show that there is a phase transition in the complexity of rank-constrained SDP with $m$ linear constraints when the rank constraint $r(m)$ passes through $\sqrt{2m}$.
For sparsity-constrained HP, we extend results on LP sparsification to QCQP and SOCP and show that our results give tight upper bounds on minimal cardinality solutions to QCQP and SOCP.
\bibliographystyle{abbrv}
| {
"timestamp": "2022-07-26T02:01:09",
"yymm": "2207",
"arxiv_id": "2207.11299",
"language": "en",
"url": "https://arxiv.org/abs/2207.11299",
"abstract": "We extend rank-constrained optimization to general hyperbolic programs (HP) using the notion of matroid rank. For LP and SDP respectively, this reduces to sparsity-constrained LP and rank-constrained SDP that are already well-studied. But for QCQP and SOCP, we obtain new interesting optimization problems. For example, rank-constrained SOCP includes weighted Max-Cut and nonconvex QP as special cases, and dropping the rank constraints yield the standard SOCP-relaxations of these problems. We will show (i) how to do rank reduction for SOCP and QCQP, (ii) that rank-constrained SOCP and rank-constrained QCQP are NP-hard, and (iii) an improved result for rank-constrained SDP showing that if the number of constraints is $m$ and the rank constraint is less than $2^{1/2-\\epsilon} \\sqrt{m}$ for some $\\epsilon>0$, then the problem is NP-hard. We will also study sparsity-constrained HP and extend results on LP sparsification to SOCP and QCQP. In particular, we show that there always exist (a) a solution to SOCP of cardinality at most twice the number of constraints and (b) a solution to QCQP of cardinality at most the sum of the number of linear constraints and the sum of the rank of the matrices in the quadratic constraints; and both (a) and (b) can be found efficiently.",
"subjects": "Optimization and Control (math.OC); Computational Complexity (cs.CC)",
"title": "Rank-constrained Hyperbolic Programming",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517469248845,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7089606340955409
} |
https://arxiv.org/abs/2111.15567 | Distribution-free tests of multivariate independence based on center-outward quadrant, Spearman, Kendall, and van der Waerden statistics | Due to the lack of a canonical ordering in ${\mathbb R}^d$ for $d>1$, defining multivariate generalizations of the classical univariate ranks has been a long-standing open problem in statistics. Optimal transport has been shown to offer a solution in which multivariate ranks are obtained by transporting data points to a grid that approximates a uniform reference measure (Chernozhukov et al., 2017; Hallin, 2017; Hallin et al., 2021), thereby inducing ranks, signs, and a data-driven ordering of ${\mathbb R}^d$. We take up this new perspective to define and study multivariate analogues of the sign covariance/quadrant statistic, Spearman's rho, Kendall's tau, and van der Waerden covariances. The resulting tests of multivariate independence are fully distribution-free, hence uniformly valid irrespective of the actual (absolutely continuous) distribution of the observations. Our results provide the asymptotic distribution theory for these new test statistics, with asymptotic approximations to critical values to be used for testing independence between random vectors, as well as a power analysis of the resulting tests in an extension of the so-called Konijn model. For the van der Waerden tests, this power analysis includes a multivariate Chernoff--Savage property guaranteeing that, under elliptical generalized Konijn models, the asymptotic relative efficiency with respect to Wilks' classical (pseudo-)Gaussian procedure of our van der Waerden tests is strictly larger than or equal to one, where equality is achieved under Gaussian distributions only. We similarly provide a lower bound for the asymptotic relative efficiency of our Spearman procedure with respect to Wilks' test, thus extending the classical result by Hodges and Lehmann on the asymptotic relative efficiency, in univariate location models, of Wilcoxon tests with respect to the Student ones. | \section{Introduction}
The problem of testing for independence between two random variables
with unspecified densities has been among the very first
applications of rank-based methods in statistical inference.
Spearman's correlation coefficient was proposed in the early 1900s \citep{Spearman1904}, and Kendall's rank correlation goes back to \citet{Kendall1938}, long before \citet{10.2307/3001968} gave his rank sum and signed rank tests for location.
The multivariate version of the same problem---testing independence
between two random vectors with unspecified densities---is
significantly harder, crucially due to the difficulty of defining a
multivariate counterpart to univariate ranks. Indeed, for $d>1$ the real space $\mathbbm{R}^d$ lacks a canonical ordering.
As a result, the problem of defining, in dimension $d>1$, concepts of signs and ranks enjoying the properties that make the traditional ranks so successful in univariate statistical inference has been open for more than half a century. One of the most important such properties is exact distribution-freeness (for i.i.d.~samples from absolutely continuous distributions).
In an important new development involving optimal transport, the concept of center-outward ranks and signs was proposed recently by \citet{MR3611491}, \citet{hallin2017distribution}, and \citet{MR4255122}; it enjoys a property of ``maximal distribution-freeness'', contrary to earlier concepts put forth in work such as \citet{MR0298844,MR2598854,MR1212489,MR2329471,MR1926170,MR1963662}.
For testing independence between two random vectors,
the first attempt to provide a rank-based alternative to the Gaussian likelihood ratio
method of \citet{Wilks1935} was developed in Chapter~8 of \citet{MR0298844} and, for almost thirty years, remained
the only rank-based approach to the problem. The proposed tests,
however, are based on componentwise rankings and are not
distribution-free---unless, of course, both vectors have
dimension one, in which case we are back to the traditional context
of bivariate independence (see, e.g., Chapter~III.6 of \citet{MR0229351}).
This issue persists in more recent work, e.g., that of
\citet{MR0298844}, \citet{MR1134492}, \citet{MR2691505},
\citet{MR1467849}, \citet{MR1965367,MR2088309}, and \citet{MR2201019}.
We note here that the above work does provide test statistics that are
asymptotically distribution-free in subclasses such as elliptical
distributions. From the perspective we take here, such subclasses are
too restrictive. Moreover, there is a crucial difference between
finite-sample and asymptotic distribution-freeness.
Indeed,
one should be wary that a sequence of tests $\psi^{(n)}$ with asymptotic
size~$\lim_{n\to\infty}{\rm E}_{{\rm P}}[\psi^{(n)}]=\alpha$ under any
element~${\rm P}$ in a class $\mathcal P$ of distributions does not
necessarily have asymptotic size~$\alpha$ under
unspecified~${\rm P}\!\in\!\mathcal P$: the convergence of ${\rm
E}_{{\rm P}}[\psi^{(n)}]$ to $\alpha$, indeed, typically is not uniform
over~$\mathcal P$, so that, in general, $\lim_{n\to\infty}\sup_{{\rm P}\in\mathcal
P}{\rm E}_{{\rm P}}[\psi^{(n)}]\neq \alpha$. Genuinely distribution-free
tests $\phi^{(n)}$, for which ${\rm E}_{{\rm P}}[\phi^{(n)}]$ does not depend on
${\rm P}$, do not suffer that problem, and this is why finite-sample
distribution-freeness is a fundamental property.
Palliating these limitations of the existing procedures by defining genuinely distribution-free---now over the class of all absolutely continuous distributions---multivariate extensions of the quadrant, Spearman, and Ken\-dall tests, based
on the concept of center-outward ranks and signs,
is thus highly desirable; it is the objective of this paper.
While this paper focuses on quadrant, Spearman, and Kendall tests of independence, other tests have been considered in the literature. Center-outward ranks and signs have been used recently by \citet{shi2019distribution} in the construction of distribution-free versions of distance covariance tests for multivariate independence, and a general framework for designing distribution-free tests of multivariate independence
that are consistent and statistically efficient
based on center-outward ranks and signs
has been developed in \citet{shi2020rate}. Multivariate ranks (based on measure transportation to the unit cube rather than the unit ball) have been used similarly in \citet{ghosal2019multivariate} and \citet{deb2019multivariate}.
Center-outward ranks and signs also have been used successfully in
other statistical problems: construction of R-estimators \citep{hallin2019center,hallin2020rankbased} in VARMA models, rank tests for multiple-output regression and MANOVA \citep{hallin2020efficient}, and two-sample goodness-of-fit tests \citep{deb2019multivariate,deb2021efficiency,hallin2021finitesample}. We show here how center-outward ranks and signs naturally allow us to define distribution-free multivariate versions of the popular quadrant, Spearman, and Kendall tests.
The paper is organized as follows. Section~2 briefly reviews the notion of center-outward ranks and signs, and Section~3 introduces our tests of multivariate independence based on center-outward ranks and signs.
In Section~4, we establish an elliptical Chernoff--Savage property for our center-outward test based on van der Waerden scores, which uniformly dominates, against Konijn alternatives, Wilks' test for multivariate independence,
and we also derive an analog of \citet{MR79383}'s result for the problem under study.
The paper ends with a short conclusion in Section~5.
All proofs are relegated to the appendix.
\section{Center-outward distribution
functions, ranks,\\ and signs}
\subsection{Definitions}\label{FQsec}
Denoting by~${\mathbb{S}_d}$
and ${\mathcal{S}_{d-1}}$, respectively, the open
unit ball and the
unit hypersphere in ${\mathbb R}^d$, let ${\rm U}_d$ stand for the spherical\footnote{Namely, the spherical
distribution with uniform (over $[0,1]$) radial density---equivalently,
the product of a uniform over the distances to the origin and a
uniform over the unit sphere ${\cal S}_{d-1}$. For~$d=1$, ${\rm U}_1$ coincides with the Lebesgue uniform over~$(-1,1)$.} uniform distribution over~${\mathbb{S}_d}$. Let ${\rm P}$ belong to the class ${\cal P}_d$ of Lebesgue-absolutely continuous distributions over $\mathbbm{R}^d$. The main result in \citet{MR1369395} then implies the existence of an a.e.\ unique convex (and lower semi-continuous) function $\phi:\mathbbm{R}^d\to\mathbbm{R}$ with gradient $\nabla\phi$ such that\footnote{We borrow from measure transportation the convenient notation $T\#\mathrm{P}$ ($T:\mathbbm{R}^d\to\mathbbm{R}^d$ {\it pushes $\mathrm{P}$ forward to~$T\#\mathrm{P}$}) for the distribution of $T({\bf Z})$ under ${\bf Z}\sim\mathrm{P}$.}~$\nabla\phi\#{\rm P}={\rm U}_d$. Call {\it center-outward distribution function} of $\rm P$ any version ${\bf F}_{\scriptscriptstyle \pm}$
of this a.e.~unique gradient.
Further properties of ${\bf F}_{\scriptscriptstyle \pm}$ require further regularity assumptions. Assume that ${\rm P}$ is in the so-called class ${\cal P}^+_d\subset{\cal P}_d$ of distributions {\it with nonvanishing densities}---namely, the class of distributions with density $f:={\rm d P}/{\rm d}\mu_d$ ($\mu_d$ the $d$-dimensional Lebesgue measure) such that, for all~$D\in\mathbbm{R}^+$, there exist constants $\lambda^-_{D;\mathrm{P}}$ and $\lambda^+_{D;\mathrm{P}}$ satisfying
\begin{equation}\label{nonvanprop}
0<\lambda^-_{D;\mathrm{P}}\leq f({\bf z})
\leq \lambda^+_{D;\mathrm{P}}<\infty
\end{equation}
for all $\bf z$ with $\Vert{\bf z}\Vert \leq D$.
Then, it follows from \citet{MR3886582}
that there exists
a version of ${\bf F}_{\scriptscriptstyle \pm}$ defining a homeo\-morphism between the punctured unit ball~${\mathbb S}_d\!\setminus~\!\{{\bf 0}\}$ and $\mathbbm{R}^d\setminus {\bf F}_{\scriptscriptstyle \pm}^{-1}(\{{\bf 0}\})$; that version has a continuous inverse ${\bf Q}_{\scriptscriptstyle \pm}$ (with domain ${\mathbb S}_d\!\setminus\!\{{\bf 0}\}$), which naturally qualifies as~${\rm P}$'s {\it center-outward quantile function}. Figalli's result is extended, in \citet{MR4147635}, to a more general\footnote{Namely, ${\cal P}_d^+\subsetneq{\cal P}_d^{\scriptscriptstyle \pm}\subsetneq{\cal P}_d$.} class ${\cal P}_d^{\scriptscriptstyle \pm}$ of absolutely continuous distributions, while the definition of ${\bf F}_{\scriptscriptstyle \pm}$ given in \citet{MR4255122} aims at selecting, for each ${\rm P}\in{\cal P}_d$, a version of $\nabla\phi$ which, whenever ${\rm P}\in{\cal P}_d^{\scriptscriptstyle \pm}$, yields that homeomorphism. For the sake of simplicity, since we are not interested in quantiles, we stick here to the a.e.\ unique definition given above for ${\rm P}\in{\cal P}_d$, and, whenever asymptotic statements are made, to~${\rm P}\in{\cal P}_d^+$.
Turning to sample quantities, denote by~$\mathbf{Z}^{(n)}\!:=\big(\mathbf{Z}_1^{(n)},\dots, \mathbf{Z}_n^{(n)}\big)$, $n\in~\!\mathbb{N}$ a triangular array of i.i.d.\ $d$-dimensional random vectors with distribution~$\mathrm{P}$. Associated with~$\mathbf{Z}^{(n)}$ is the {\it
empirical center-outward distribution function}~${\bf F}_{\scriptscriptstyle \pm}^{(n)}$ mapping the $n$-tuple
$\mathbf{Z}_1^{(n)},\dots, \mathbf{Z}_n^{(n)}$ to a ``regular'' grid $\mathfrak{G}_n$ of the unit
ball~${\mathbb S}_d$. That regular grid $\mathfrak{G}_n$ is obtained as follows:
\begin{compactenum}
\item[{\it (a)}] first factorize $n$ into $n=n_Rn_S + n_0$, with
$0\leq n_0<\min(n_R, n_S)$;\footnote{Note that this implies that $n_0/n = o(1)$ as $n\to\infty$. See \citet[Chapter~7.4]{mordant2021transporting} for a suggestion of selecting $n_R$ and~$n_S$. }
\item[{\it (b)}] next consider a ``regular array"
$\mathfrak{S}_{n_S}:=\{{\bf s}^{n_S}_1,\ldots,{\bf s}^{n_S}_{n_S}\}$ of $n_S$ points on the sphere
${\cal S}_{d-1}$ (see the comment below);
\item[{\it (c)}] construct the grid consisting in the collection $\mathfrak{G}_n$ of the
$n_Rn_S $ points $\mathfrak{g}$ of the form
$$\big(r/\big(n_R +1\big)\big){\bf s}^{n_S}_s, \quad
r=1,\ldots,n_R,~s=1,\ldots,n_S,$$ along with ($n_0$ copies of) the
origin in case $n_0\neq 0$; thus, in total, $n-(n_0 -1)$ or $n$ distinct points, according as~$n_0>0$ or $n_0=0$.
\end{compactenum}
By ``regular'' we mean ``as regular as
possible'', in the sense, for example of the {\it
low-discrepancy sequences} of the type considered in numerical
integration, Monte-Carlo methods, and experimental design.\footnote{See also \citet{hallin2021finitesample} for a spherical version of the so-called Halton sequences.}
The only mathematical requirement needed for the asymptotic results below is the weak convergence, as~$n_S\to\infty$, of the uniform discrete distribution over~$\mathfrak{S}_{n_S}$ to the uniform distribution over
${\cal S}_{d-1}$. A uniform i.i.d.~sample of points over~${\cal S}_{d-1}$ (almost surely) satisfies such a requirement. However, one easily can construct arrays that are ``more regular'' than an i.i.d.~one. For instance, one could require that $n_S$ or $n_S-1$
of the points in~$\mathfrak{S}_{n_S}$ are such that $-\, {\bf s}^{n_S}_s$ also belongs to~$\mathfrak{S}_{n_S}$, so that~$\Vert \sum_{s=1}^{n_S}{\bf s}^{n_S}_s\Vert =0$ or~1 according as $n_S$ is even or odd. One also could consider factorizations of the form $n=n_Rn_S + n_0$ with $n_S$ even,
then require~$\mathfrak{S}_{n_S}$ to be symmetric with respect to the origin, yielding~$\sum_{s=1}^{n_S}{\bf s}^{n_S}_s={\bf 0}$.
The empirical counterpart ${\bf F}_{\scriptscriptstyle \pm}^{(n)}$ of ${\bf F}_{\scriptscriptstyle \pm}$
is defined as the (bijective, once the origin is given multiplicity $n_0$)
mapping from~$\mathbf{Z}_1^{(n)},\dots, \mathbf{Z}_n^{(n)}$ to the grid~$\mathfrak{G}_n$ that minimizes
$\sum_{i=1}^n\big\Vert {\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)}) - \mathbf{Z}_i^{(n)} \big\Vert ^2$. That
mapping is unique with probability one; in practice, it is obtained
via a simple optimal assignment (pairing) algorithm (a linear program; see \citet{MR4255122}
for details).
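For illustration, here is a minimal sketch of ours (not the authors' code) of the whole construction in dimension $d=2$ with $n_0=0$: it builds the grid $\mathfrak{G}_n$ of steps {\it (a)}--{\it (c)} above and computes ${\bf F}_{\scriptscriptstyle \pm}^{(n)}$, together with the center-outward ranks and signs defined in the next paragraph, by solving the optimal assignment with a Hungarian-type solver.
\begin{verbatim}
# Sketch (ours): empirical center-outward ranks and signs in dimension 2,
# with n = n_R * n_S (so n_0 = 0), via optimal assignment to the grid.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n_R, n_S = 10, 20
n = n_R * n_S
Z = rng.standard_normal((n, 2))                 # i.i.d. sample from P

theta = 2 * np.pi * np.arange(n_S) / n_S        # a "regular" array on S^1
S = np.c_[np.cos(theta), np.sin(theta)]
radii = np.arange(1, n_R + 1) / (n_R + 1)
grid = np.concatenate([r * S for r in radii])   # the n_R * n_S gridpoints

cost = cdist(Z, grid, "sqeuclidean")            # squared Euclidean costs
rows, cols = linear_sum_assignment(cost)        # optimal pairing
F_emp = np.empty_like(Z)
F_emp[rows] = grid[cols]                        # empirical F_plusminus(Z_i)

norms = np.linalg.norm(F_emp, axis=1)
ranks = np.rint((n_R + 1) * norms).astype(int)  # center-outward ranks
signs = F_emp / norms[:, None]                  # center-outward signs
\end{verbatim}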
Call {\it center-outward rank} of~$\mathbf{Z}_i^{(n)}$ the integer (in~$\{1,\ldots , n_R\}$ or~$\{0, \ldots , n_R\}$ according as $n_0=0$ or not)
$$R^{(n)}_{i;{{{\scriptscriptstyle \pm}}}}:=(n_R +1)\big\Vert {\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})\big\Vert, \quad i=1,\ldots,n,$$
and {\it center-outward sign} of~$\mathbf{Z}_i^{(n)}$ the unit vector
$${\bf S}^{(n)}_{i;{{\scriptscriptstyle \pm}}}:={\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})/\big\Vert {\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})\big\Vert\quad \text{for ${\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)})\neq{\bf 0}$;}$$
put ${\bf S}^{(n)}_{i;{{{\scriptscriptstyle \pm}}}}={\bf 0}$ for ${\bf F}_{\scriptscriptstyle \pm}^{(n)} (\mathbf{Z}_i^{(n)}) ={\bf 0}$.
Some desirable finite-sample properties, such as strict independence between the ranks and the signs, only hold for~$n_0=0$ or 1, due to the fact that the mapping from the sample to the grid is no longer injective for $n_0\geq 2$. This, which has no asymptotic consequences (since the number~$n_0$ of tied values involved is $o(n)$ as $n\to\infty$), is easily taken care of by the following tie-breaking device:
\begin{compactenum}
\item[{\it (i)}] randomly select $n_0$ directions ${\bf s}^0_1,\ldots,{\bf s}^0_{n_0}$
in~$\mathfrak{S}_{n_S}$, then
\item[{\it (ii)}] replace the $n_0$ copies of the origin with the new gridpoints
\begin{equation}\label{tiebreak}[1/2(n_R+1)]{\bf s}^0_1,\ldots,[1/2(n_R+1)]{\bf s}^0_{n_0}.
\end{equation}
\end{compactenum}
The resulting grid (for simplicity, the same notation ${\mathfrak{G}}_n$ is used) no longer has multiple points, and the optimal pairing between the sample and this grid is bijective; the $n_0$ smallest ranks, however, take the non-integer value~$1/2$.
\subsection{Main properties}\label{Propsec} This section summarizes some of the main properties of the concepts defined in Section~\ref{FQsec}; further properties and the proofs can be found in \citet{MR4255122}, \citet{hallin2020efficient}, and \citet{hallin2021measure}.
\begin{proposition}\label{H2018} Let ${\bf F} _{{\scriptscriptstyle \pm}}$ denote the center-outward distribution function of~${\rm P}\in{\cal P}_d$. Then,
\begin{compactenum}\vspace{.5mm}
\item[(i)]${\bf F} _{{\scriptscriptstyle \pm}}$ is a probability integral transformation of $\mathbbm{R}^d$: namely, ${\bf Z}\sim {\rm P}$ iff~${\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})\sim {\rm U}_d$; by construction, $\Vert{\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})\Vert$ is uniform over $[0, 1)$, ${\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})/\Vert{\bf F} _{{\scriptscriptstyle \pm}}({\bf Z})\Vert$ is uniform over the sphere ${\cal S}_{d-1}$, and they are mutually independent.
\end{compactenum}\vspace{.5mm}
Let ${\bf Z}^{(n)}_1,\ldots ,{\bf Z}^{(n)}_n$ be i.i.d.\ with distribution ${\rm P}\in{\mathcal P}_d$ and center-outward distribution function~${\bf F} _{{\scriptscriptstyle \pm}}$. Then,
\begin{compactenum}\vspace{.5mm}
\item[(ii)] $\big({\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{1}),\ldots , {\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{n}) \big)$ is uniformly distributed over the $n!/n_0!$ permutations with repetitions of the gridpoints in $\mathfrak{G}_n$ with the origin counted as $n_0$ indistinguishable points (resp. the $n!$ permutations of~$\mathfrak{G}_n$ if either $n_0\leq 1$ or the tie-breaking device described in Section~\ref{FQsec} is adopted);
\item[(iii)] if either $n_0=0$ or the tie-breaking device described in Section~\ref{FQsec} is adopted, the $n$-tuple of center-outward ranks $\big(R^{(n)}_{1;{\scriptscriptstyle \pm} }, \ldots , R^{(n)}_{n;{\scriptscriptstyle \pm} }\big)$ and the $n$-tuple of~center-out\-ward signs $\big({\bf S}^{(n)}_{1;{\scriptscriptstyle \pm} }, \ldots , {\bf S}^{(n)}_{n;{\scriptscriptstyle \pm} }\big)$ are mutually independent;
\item[(iv)] if either $n_0\leq 1$ or the tie-breaking device described in Section~\ref{FQsec} is adopted, $\big({\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{1}),\ldots , {\bf F}^{(n)}_{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_{n}) \big)$ is {\em strongly essentially maximal ancillary}.\footnote{See Section~2.4 and Appendices D.1 and D.2 of \citet{MR4255122} for a precise definition and a proof of this essential property.}
\end{compactenum}\vspace{.5mm}
Assuming, moreover,
that ${\rm P}\in{\mathcal P}_d^+$,
\begin{compactenum}\vspace{.5mm}
\item[(v)] (Glivenko--Cantelli)
\begin{equation*}
\displaystyle{\max_{1\leq i\leq n}}\Big\Vert {\bf F}^{(n)} _{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_i) - {\bf F} _{{\scriptscriptstyle \pm}}({\bf Z}^{(n)}_i) \Big\Vert \rightarrow 0 ~\textrm{\em a.s.} \quad \text{as}~n\to\infty.
\end{equation*}
\end{compactenum}
\end{proposition}
Center-outward distribution functions, ranks, and signs also inherit, from the invariance of squared Euclidean distances, elementary but quite remarkable invariance and equivariance properties under orthogonal transformations and global rescaling. Denote by ${\bf F}^{{\bf Z}}_{{\scriptscriptstyle \pm}}$ the center-outward distribution function of $\bf Z$ and by~${\bf F}^{{\bf Z};(n)}_{{\scriptscriptstyle \pm}}$ the empirical distribution function of an i.i.d. sample~${\bf Z}_1,\ldots,{\bf Z}_n$ associated with a grid $\mathfrak{G}_n$.
\begin{proposition}\label{invF} Let $\boldsymbol{\mu}\in\mathbbm{R}^d$, $k\in\mathbbm{R}^+$, and denote by ${\bf O}$ a~$d\times d$ orthogonal matrix. Then,
\begin{compactenum}
\item[(i)] ${\bf F}^{\boldsymbol{\mu}+ k{\bf O}{\bf Z}}_{{\scriptscriptstyle \pm}} (\boldsymbol{\mu} + k{\bf O}{\bf z})= {\bf O}{\bf F}^{\bf Z}_{{\scriptscriptstyle \pm}}({\bf z})$, ${\bf z}\in\mathbbm{R}^d$;
\item[(ii)] denoting by ${\bf F}^{\boldsymbol{\mu}+ k{\bf O}{\bf Z};(n)}_{{\scriptscriptstyle \pm}}$ the empirical distribution function of the sample~$\boldsymbol{\mu}+ k{\bf O}{\bf Z}_1,\ldots, \boldsymbol{\mu}+ k{\bf O}{\bf Z}_n$ associated with the grid ${\bf O}\mathfrak{G}_n$ (hence by~${\bf F}^{{\bf Z};(n)}_{{\scriptscriptstyle \pm}}$ the empirical distribution function of the sample~${\bf Z}_1,\ldots, {\bf Z}_n$ associated with the grid $\mathfrak{G}_n$),
\begin{equation}\label{equiv}
{\bf F}^{\boldsymbol{\mu}+ k{\bf O}{\bf Z};(n)}_{{\scriptscriptstyle \pm}} (\boldsymbol{\mu} + k{\bf O}{\bf Z}_i)= {\bf O}{\bf F}^{{\bf Z};(n)}_{{\scriptscriptstyle \pm}}({\bf Z}_i), \quad i=1,\ldots,n.
\end{equation}
\end{compactenum}
\end{proposition}
\section{Rank-based tests for multivariate independence}
\subsection{Center-outward test statistics for multivariate independence}
In this section, we describe the test statistics we are proposing for testing independence between two random vectors. Consider a sample
$$({\mathbf{X}} ^{\prime}_{11},{\mathbf{X}} ^{\prime}_{21})^{\prime},
({\mathbf{X}} ^{\prime}_{12},{\mathbf{X}} ^{\prime}_{22})^{\prime}, \ldots, ({\mathbf{X}} ^{\prime}_{1n},{\mathbf{X}}
^{\prime}_{2n})^{\prime}$$
of $n$ \mbox{i.i.d.} copies of some $d$-dimensional (where $d=d_1+d_2$) random vector
$({\mathbf{X}} ^{\prime}_1,{\mathbf{X}} ^{\prime}_2)^{\prime}$ with Lebesgue-absolutely continuous distribution~${\rm P}\in{\cal P}_d$ and density $f$. We are
interested in the null hypothesis under which ${\mathbf{X}} _1$ and ${\mathbf{X}} _2$,
with unspecified marginal distributions~${\rm
P}_1$ (density~$f_1$) and ${\rm
P}_2$ (density~$f_2$), respectively, are mutually
independent: $f$ then factorizes into $f=f_1f_2$.
Denote by $R^{(n)}_{ki;{\scriptscriptstyle \pm}}$ and ${\bf S}^{(n)}_{ki;{\scriptscriptstyle \pm}}$, $i=1,2,\ldots,n$ the center-outward rank and the sign of ${\mathbf{X}} _{ki}$ computed from~${\mathbf{X}} _{k1},
{\mathbf{X}} _{k2}, \ldots, {\mathbf{X}} _{kn}$, $k=1,2$, respectively. For simplicity of notation, assume, without loss of generality as~$n\to\infty$, that the grids used for computing those ranks and signs (in dimensions $d_1$ and $d_2$) are such that~$\sum_{s=1}^{n_S}{\bf s}^{n_S}_s={\bf 0}$. Also assume that $n_0 =0$ or~1 (if necessary, after implementing the tie-breaking device described in Section~\ref{FQsec}). This implies that $\sum_{i=1}^n{\bf S}^{(n)}_{ki;{\scriptscriptstyle \pm}} =~\!{\bf 0}$ for~$k=1,2$, and moreover, that $$\sum_{i=1}^n J_k\big(R^{(n)}_{ki;{\scriptscriptstyle \pm}}/\big(n_R+1\big)\big){\bf S}^{(n)}_{ki;{\scriptscriptstyle \pm}} = {\bf 0}$$ for any {\it score functions} $J_k: [0, 1) \to \mathbbm{R}$, $k=1,2$.
Consider the $d_1\times d_2$ matrices
\begin{align}\label{tildeW}
{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}&:=\frac{1}{n} \sum_{i=1}^n{\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)\prime}_{2i;{\scriptscriptstyle \pm}},\\
{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}&:= \frac{1}{n(n_R+1)^2} \sum_{i=1}^nR^{(n)}_{1i;{\scriptscriptstyle \pm}}R^{(n)}_{2i;{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)\prime}_{2i;{\scriptscriptstyle \pm}},\\
{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}&:= {n \choose 2}^{-1}
\sum_{i<i^{\prime}}
\text{sign}\Big[
\Big(R^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}- R^{(n)}_{1i^{\prime};{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{1i^{\prime};{\scriptscriptstyle \pm}}\Big)\nonumber\\
&\hspace{16mm}\times\Big(R^{(n)}_{2i;{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{2i;{\scriptscriptstyle \pm}}- R^{(n)}_{2i^{\prime};{\scriptscriptstyle \pm}}{\bf S}^{(n)}_{2i^{\prime};{\scriptscriptstyle \pm}}\Big)^{\prime} \Big],
\end{align}
where sign$\big[{\bf M}\big]$ stands for the matrix collecting the signs of the entries of a real matrix $\bf M$, and
\begin{align}\label{scoreW}
{\tenq{\mathbf W}}\,\!_{J}^{(n)}&:=
\frac{1}{n} \sum_{i=1}^n
J_1\Big(\frac{R^{(n)}_{1i;{\scriptscriptstyle \pm}}}{n_R+1}\Big)
J_2\Big(\frac{R^{(n)}_{2i;{\scriptscriptstyle \pm}}}{n_R+1}\Big){\bf S}^{(n)}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{(n)\prime}_{2i;{\scriptscriptstyle \pm}},
\end{align}
where the {\it score functions} $J_k:[0,1)\to \mathbbm{R}$, $k=1,2$, are square-integrable differences of two monotone increasing functions, with
\begin{equation}\label{scorevariance}
0<\sigma_{J_k}^2:=\int_0^1J_k^2(u){\rm d} u<\infty.
\end{equation}
The matrices defined in \eqref{tildeW}--\eqref{scoreW} clearly constitute cross-covariance measurements based on center-outward ranks and signs (for ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, signs only).
For~$d_1=1=d_2$, it is easily seen that ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, and~${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$,
up to scaling constants, reduce to the quadrant, Spearman, and Kendall test statistics, while ${\tenq{\mathbf W}}\,\!_{J}^{(n)}$ yields a score-based extension of Spearman's correlation coefficient.
\subsection{Asymptotic representation and asymptotic normality}\label{asreprsec}
Each of the rank-based matrices defined in \eqref{tildeW}--\eqref{scoreW} has an asymptotic representation in terms of i.i.d.~variables. More precisely, defining ${\bf S}_{ki;{\scriptscriptstyle \pm}}$ as~${\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{ki})/\big\Vert {\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{ki})\big\Vert$ if ${\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{ki})\neq{\bf 0}$ and ${\bf 0}$ otherwise for $k=1,2$, let
\begin{align}\label{tildeWas}
{{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}&:=\frac{1}{n} \sum_{i=1}^n{\bf S}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{\prime}_{2i;{\scriptscriptstyle \pm}},\\
{{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}&:=
\frac{1}{n} \sum_{i=1}^n {\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i})
{\bf F}^{\prime}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i}),
\\
{{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}&:= {n \choose 2}^{-1} \sum_{i<i^{\prime}}
\text{sign}\Big[
\Big({\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i}) - {\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i^{\prime}}) \Big)\nonumber\\
&\hspace{16mm}\times\Big({\bf F}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i}) -{\bf F}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i^{\prime}}) \Big)^{\prime}\, \Big],
\end{align}
and
\begin{align}\label{scoreWas}
\qquad{{\mathbf W}}\,\!_{J}^{(n)}:=\frac{1}{n} \sum_{i=1}^n
J_1\Big(\big\Vert{\bf F}_{1;{\scriptscriptstyle \pm}}({\bf X}_{1i}) \big\Vert\Big)
J_2\Big(\big\Vert {\bf F}_{2;{\scriptscriptstyle \pm}}({\bf X}_{2i}) \big\Vert\Big)
{\bf S}_{1i;{\scriptscriptstyle \pm}}{\bf S}^{\prime}_{2i;{\scriptscriptstyle \pm}} .
\end{align}
The following asymptotic representation results then hold under the
null hypothesis of independence (hence, also under contiguous
alternatives).
\begin{proposition}\label{prop:hajek}
Under the null hypothesis of independence, as $n_R$ and $n_S$ tend to infinity,
$\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)} \! - {{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}\big),$
$\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}\! - {{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}\big),$
$\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}\! - {{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}\big),$
and, provided that $J_1$ and $J_2$ are square-integrable differences of two monotone increasing functions, $\text{\rm vec}\big({\tenq{\mathbf W}}\,\!_{J}^{(n)} - {{\mathbf W}}\,\!_{J}^{(n)}\big)$ are all~$o_{\text{\rm q.m.}}(n^{-1/2})$.
\end{proposition}
\medskip
The asymptotic normality for {\rm vec}${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, {\rm vec}${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, {\rm vec}${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$, and {\rm vec}${\tenq{\mathbf W}}\,\!_{J}^{(n)}$ follows immediately from the asymptotic representation results and the standard central-limit behavior of {\rm vec}${{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, {\rm vec}${{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, {\rm vec}${{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$, and {\rm vec}${{\mathbf W}}\,\!_{J}^{(n)}$.
\begin{proposition}\label{prop:asym} Under the null (independence) hypothesis, as $n_R$ and $n_S$ tend to infinity,
$n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)},$
$n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)},$
$n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)},$
and $n^{1/2}{\rm vec}{\tenq{\mathbf W}}\,\!_{J}^{(n)}$
are asymptotically normal with mean vectors ${\bf 0}_{d_1d_2}$
and covariance matrices
$$\frac{1}{d_1d_2}{\bf I}_{d_1d_2}, \quad
\frac{1}{9d_1d_2}{\bf I}_{d_1d_2}, \quad
\frac{4}{9}{\bf I}_{d_1d_2}, \quad
\text{and} \quad \frac{\sigma^{2}_{J_1}\sigma^{2}_{J_2}}{d_1d_2}{\bf I}_{d_1d_2},
$$
respectively.
\end{proposition}
\subsection{Center-outward sign, Spearman, Kendall, and score tests}\label{testprocsec}
Associated with ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}$, ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}$, ${\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}$, and ${\tenq{\mathbf W}}\,\!_{J}^{(n)}$ are the sign, Spearman, Kendall, and score test statistics
\begin{align*
&{\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}:= nd_1d_2\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm sign}}^{(n)}\big\Vert^2_{\mathrm F},\quad
{\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}:= 9nd_1d_2\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm S}}^{(n)}\big\Vert^2_{\mathrm F}, \quad \\
&{\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}:= \frac{9n}{4}\big\Vert {\tenq{\mathbf W}}\,\!_{\text{\tiny\rm K}}^{(n)}\big\Vert^2_{\mathrm F}, \quad \text{and} \quad
{\tenq T}\,\!_{J}^{(n)}:=
\frac{nd_1d_2}{\sigma^{2}_{J_1}\sigma^{2}_{J_2}}
\big\Vert {\tenq{\mathbf W}}\,\!_J^{(n)}\big\Vert^2_{\mathrm F},
\end{align*}
respectively, where $\Vert {\bf M}\Vert _{\mathrm F}$ stands for the Frobenius norm of a matrix $\bf M$, and $\sigma^{2}_{J_k}$, $k=1,2$ are defined as in \eqref{scorevariance}.
In view of the asymptotic normality results in Proposition~\ref{prop:asym}, the tests (denoted respectively by ${\psi}\,\!_{\text{\tiny\rm sign}}^{(n)}$, ${\psi}\,\!_{\text{\tiny\rm S}}^{(n)}$, ${\psi}\,\!_{\text{\tiny\rm K}}^{(n)}$, and ${\psi}\,\!_{J}^{(n)}$) rejecting the null hypothesis of independence whenever ${\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}$, $ {\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}$, ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$, or ${\tenq T}\,\!_{J}^{(n)}$ exceeds the~$(1-~\!\alpha)$-quan\-tile~$\chi^2_{d_1d_2;1-\alpha}$ of a chi-square distribution with $d_1d_2$ degrees of freedom have asymptotic level $\alpha$. These tests are strictly distribution-free, however, and exact critical values can be computed or simulated as well. The tests based on~${\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}$, $ {\tenq T}\,\!_{\text{\tiny\rm S}}^{(n)}$, and ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$ are multivariate extensions of the traditional quadrant, Spearman, and Kendall tests, respectively,
to which they reduce for~$d_1=1=d_2$.
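As a quick numerical illustration (a Monte Carlo sketch of ours, not taken from the paper), the chi-square calibration of ${\tenq T}\,\!_{\text{\tiny\rm sign}}^{(n)}$ under the null can be checked by simulating the signs in their limiting regime, where the two sign samples behave like independent uniform directions on their respective unit spheres.
\begin{verbatim}
# Monte Carlo sketch (ours): calibration of the sign test statistic
# T_sign = n d1 d2 ||W_sign||_F^2 under independence, with the signs
# replaced by their limiting i.i.d. uniform-on-the-sphere behavior.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
d1, d2, n, reps = 2, 3, 500, 2000

def unit(v):                          # normalize rows to the unit sphere
    return v / np.linalg.norm(v, axis=1, keepdims=True)

T = np.empty(reps)
for b in range(reps):
    S1 = unit(rng.standard_normal((n, d1)))
    S2 = unit(rng.standard_normal((n, d2)))
    W = S1.T @ S2 / n                 # d1 x d2 sign cross-covariance
    T[b] = n * d1 * d2 * np.sum(W ** 2)

print("empirical size at nominal 5%:",
      np.mean(T > chi2.ppf(0.95, d1 * d2)))   # should be close to 0.05
\end{verbatim}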
\section{Local asymptotic power}\label{powersec}
While there is only one way for two random vectors ${\bf X}_1$
and~${\bf X}_2$ to be independent, their mutual dependence can take
many forms. The classical benchmark, in testing for bivariate
independence, is a ``local'' form of an independent component analysis model that goes back to \citet{MR79384}. A multivariate extension of such alternatives has also been considered by \citet{MR1467849} and \citet{MR1965367} in the elliptical context. We extend it further here to more general, non-elliptical situations.
\subsection{Generalized Konijn alternatives}\label{Konijnsec}
Let ${\bf X} ^*=({\bf X} ^{*\prime}_1, {\bf X} ^{*\prime}_2)^{\prime}$, where ${\bf X} ^*_1$ and ${\bf X} ^*_2$ are mutually independent random vectors, with absolutely continuous distributions ${\rm P}_{1}$ over $\mathbbm{R}^{d_1}$ and~${\rm P}_{2}$ over~$\mathbbm{R}^{d_2}$ and densities $f_1$ and $f_2$, respectively; then ${\bf X} ^*$ has density $f=f_1f_2$ over $\mathbbm{R}^{d}$. Consider
\begin{align}\label{Konijn}
{\bf X}=\left(\!
\begin{array}{c}
{\bf X}_1\\ {\bf X}_2
\end{array}
\!\right)
:={\bf M}_\delta\left(\!
\begin{array}{c}
{\bf X} ^*_1\\ {\bf X} ^*_2
\end{array}
\!\right)
:=
\left(\!
\begin{array}{cc}
(1-{\delta}
){\bf I}_{d_1}&{\delta}{\bf M_1}\\
{\delta}{\bf M}_2&(1-{\delta}){\bf I}_{d_2}
\end{array}
\!\right)
\!\left(\!
\begin{array}{c}
{\bf X} ^*_1\\ {\bf X} ^*_2
\end{array}
\!\right)
\end{align}
where ${\delta}\!\in\!\mathbbm{R}$ and ${\bf M}_1\!\in\!\mathbbm{R}^{d_1\times d_2}$, ${\bf M}_2\!\in\!\mathbbm{R}^{d_2\times d_1}$
are nonzero. For given~${\rm P}_{1}$, ${\rm P}_{2}$, ${\bf M}_1$, and ${\bf M}_2$, the distribution ${\rm P}^{\bf X}$ of ${\bf X}$ belongs to a one-parameter family ${\cal P}^{\bf X}:=\{{\rm P}^{\bf X}_{\delta} \vert\, {\delta}\in\mathbbm{R}\}.$
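For concreteness, a sample from ${\rm P}^{\bf X}_{\delta}$ is easily simulated; in the sketch below (ours, not from the paper), the choices of ${\rm P}_{1}$, ${\rm P}_{2}$, ${\bf M}_1$, and ${\bf M}_2$ are arbitrary illustrations.
\begin{verbatim}
# Sketch (ours): sample n i.i.d. copies of X = M_delta X* from the
# generalized Konijn family; delta = 0 recovers independence.
import numpy as np

rng = np.random.default_rng(3)
d1, d2, n, delta = 2, 3, 1000, 0.1
M1 = rng.standard_normal((d1, d2))              # any nonzero M1
M2 = rng.standard_normal((d2, d1))              # any nonzero M2
M_delta = np.block([[(1 - delta) * np.eye(d1), delta * M1],
                    [delta * M2, (1 - delta) * np.eye(d2)]])

X_star = np.c_[rng.standard_normal((n, d1)),    # P_1: standard Gaussian
               rng.laplace(size=(n, d2))]       # P_2: heavier-tailed
X = X_star @ M_delta.T                          # rows are draws of X
\end{verbatim}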
On $f_1$ and $f_2$, we make the following assumption. \smallskip
\begin{assumption}\label{asp:K}
\mbox{}\vspace{1mm}
\begin{compactenum}
\item[(K1)] The densities $f_1$ and $f_2$ are such that
\begin{align*}
\int_{\mathbbm{R}^{d_k}}\!{\bf x}f_k({\bf x})\,{\rm d} {\bf x}
={\bf 0}\quad\text{and}\quad
0<\int_{\mathbbm{R}^{d_k}}\!{\bf x}{\bf x}^{\prime} f_k({\bf x})\,{\rm d} {\bf x}=:{\boldsymbol{\Sigma}}_{k}<\infty,\quad k=1,2.
\end{align*}
\item[(K2)] The functions ${\bf x}_k\mapsto (f_k({\bf x}_k))^{1/2}$, $k=1,2$
admit quadratic mean partial derivatives\footnote{Existence of quadratic mean partial derivatives is equivalent to quadratic mean differentiability; this was shown in \citet{MR307329} and independently rediscovered by \citet[Lemma~2.1]{MR1364260}.}
$$D_{\ell}[(f_k)^{1/2}], \quad \ell =1,\ldots,d_k, \
k=1,2.$$
\item[(K3)] Letting
$${\boldsymbol\varphi}:= \left({\boldsymbol\varphi}_1^{\prime} ,{\boldsymbol\varphi}_2^{\prime}\right)^{\prime}:= \left(\varphi_{1;1},\ldots,\varphi_{1;d_1},\varphi_{2;1},\ldots,\varphi_{2;d_2}
\right)^{\prime}$$
with
\begin{align*}\varphi_{k;\ell}:=-2D_\ell[(f_k)^{1/2}]/(f_k)^{1/2}\stackrel{\text{a.e.}}{=} -\partial_\ell f_k/f_k, \quad \ell =1,\ldots,d_k, \
k=1,2,\end{align*}
it holds that, for $k=1,2$ and $ \ell =1,\ldots,d_k$, $0<~\int_{{\mathbbm R}^{d_k} }\big( \varphi_{k;\ell}({\bf x})\big)^{2}f_k({\bf x})\,{\rm d}{\bf x}<\infty$,
and\footnote{Integration by parts yields
$\int_{{\mathbbm R}^{d_k} } {\boldsymbol\varphi}_k({\bf x}) f_k({\bf x}){\rm d} {\bf x} ={\bf 0}$,
$\int_{{\mathbbm R}^{d_k} } {\bf x}^{\prime} {\boldsymbol\varphi}_k({\bf x}) f_k({\bf x}){\rm d} {\bf x} =d_k$, and~
$\int_{{\mathbbm R}^{d_k} } {\bf x} {\boldsymbol\varphi}_k({\bf x})^{\prime} f_k({\bf x}){\rm d} {\bf x} =~{\bf I}_{d_k}$,
$k=1,2$; see also \citet[page~555]{MR1364260}.}
\begin{align*}
{\cal J}\!_k:={\rm Var}\left({\bf X} ^{*\prime}_k {\boldsymbol\varphi}_k({\bf X} ^*_k)\right)=
\int_{{\mathbbm R}^{d_k} } \left({\bf x}^{\prime} {\boldsymbol\varphi}_k({\bf x}) -d_k\right)^2
f_k({\bf x}){\rm d} {\bf x}<\infty .
\end{align*}
\end{compactenum}
\end{assumption}
\smallskip
It should be stressed, however, that these assumptions need not be imposed on the observations for our tests to be valid; they are only intended to provide an analytically convenient benchmark for the comparison of local powers. Let
$${\boldsymbol{\cal I}}_k:= \int_{{\mathbbm R}^{d_k} }
{\boldsymbol\varphi}_k({\bf x}){\boldsymbol\varphi}_k^{\prime}({\bf x})
f_k({\bf x})\,{\rm d} {\bf x}<\infty.
$$
Under ${\rm P}^{\bf X}_0$, ${\bf X}_1={\bf X} ^*_1$ and ${\bf X}_2=~\!{\bf X} ^*_2$ are mutually independent; for~${\delta}\neq~\!0$, call ${\rm P}^{\bf X}_{\delta}$ a (generalized) {\it Konijn alternative} to ${\rm P}^{\bf X}_0$. Sequences of the form~${\rm P}^{\bf X}_{n^{-1/2}\tau}$ with $\tau\neq 0$, as we shall see, constitute local alternatives to the null hypothesis of independence in a sample of size $n$. More precisely, the following LAN property holds in the vicinity of ${\delta}=0$.
\begin{proposition}\label{prop:lan} Let ${\rm P}_{1}$ and ${\rm P}_{2}$ satisfy Assumption~\ref{asp:K}. Then, denoting by~${\bf X}^{(n)}:=({\bf X}_1,\ldots,{\bf X}_n)$, $n\in\mathbbm{N}$, a triangular array of $n$ independent copies of ${\bf X}\sim{\rm P}^{\bf X}_0$, for given nonzero~${\bf M}_1$ and ${\bf M}_2$,
the family ${\cal P}^{\bf X}$ of Konijn alternatives is LAN at~${\delta}=~\!0$ with root-$n$ contiguity rate, central sequence
\begin{multline}\label{KonDelta}
\Delta^{(n)}({\bf X}^{(n)})
:=
\sum_{i=1}^n
\Big[{\bf X}^{\prime}_{1i}{\bf M}_2^{\prime}{\boldsymbol\varphi}_2({\bf X}_{2i}) +
{\bf X}^{\prime}_{2i}{\bf M}_1^{\prime}{\boldsymbol\varphi}_1({\bf X}_{1i}) \\
- \Big({\bf X}^{\prime}_{1i}{\boldsymbol\varphi}_1({\bf X}_{1i}) -d_1
\Big)
-\Big({\bf X}^{\prime}_{2i}{\boldsymbol\varphi}_2({\bf X}_{2i}) -d_2
\Big)\Big]
\end{multline}
and Fisher information
\begin{multline}\label{Kongamma}
\gamma^2:={\cal J}_1 + {\cal J}_2
+ \text{\rm vec} ^{\prime}\! \left({\boldsymbol{\Sigma}}_1\right) \text{\rm vec}\! \left({\bf M}_2^{\prime}{\boldsymbol{\cal I}}_2{\bf M}_2\right) \\
+ \text{\rm vec} ^{\prime}\! \left({\boldsymbol{\Sigma}}_2\right) \text{\rm vec}\! \left({\bf M}_1^{\prime}{\boldsymbol{\cal I}}_1{\bf M}_1\right)
+ \text{\rm tr}({\bf M}_1{\bf M}_2)
+ \text{\rm tr}({\bf M}_2{\bf M}_1).
\end{multline}
Namely, under ${\rm P}^{\bf X}_0$,
\begin{align}
\Lambda^{(n)}({\bf X}^{(n)}):=\log\frac{{\rm d}{\rm P}^{\bf X}_{n^{-1/2}\tau}}{{\rm d}{\rm P}^{\bf X}_0}({\bf X}^{(n)})
=\tau\Delta^{(n)}({\bf X}^{(n)}) -\frac{1}{2}\tau^2\gamma^2 + o_{{\rm P}}(1)
\label{LANKon}
\end{align}
and $\Delta^{(n)}({\bf X}^{(n)})$ is asymptotically normal, with mean zero and variance $\gamma^2$ as $n\to\infty$.
\end{proposition}
\subsection{Limiting distributions and Pitman efficiencies}
In this section, we aim to establish elliptical Chernoff--Savage and Hodges--Lehmann results for our center-outward tests based on van der Waerden and Wilcoxon scores, respectively, relative to Wilks' test; compare \citet{MR100322} and \citet{MR79383}.
To this end, we first derive the limiting distributions of ${\tenq T}\,\!_{J}^{(n)}$ and ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$ under the sequence of alternatives~${\rm P}^{\bf X}_{n^{-1/2}\tau}$.
\begin{proposition}\label{prop:J} Let ${\rm P}_{1}$ and ${\rm P}_{2}$ satisfy Assumption~\ref{asp:K}. Then, if the observations are $n$ independent copies of ${\bf X}$ with distribution~${\rm P}^{\bf X}_{n^{-1/2}\tau}$, for given nonzero~${\bf M}_1$ and ${\bf M}_2$,
\begin{compactenum}
\item[(i)] the limiting distribution of the test statistic ${\tenq T}\,\!_{J}^{(n)}$ is noncentral chi-square with $d_1d_2$ degrees of freedom and noncentrality parameter
\begin{equation*}
\frac{\tau^2 d_1d_2}{\sigma^{2}_{J_1}\sigma^{2}_{J_2}}\Big\Vert {\rm E}_{H_0}\Big[{\bf J}_1({\bf F}_{1;{\scriptscriptstyle \pm}} ({\bf X}_1))
{\bf R} {\bf J}_2({\bf F}_{2;{\scriptscriptstyle \pm}} ({\bf X}_2))^{\prime}\Big]\Big\Vert^2_{\mathrm F},
\end{equation*}
where ${\bf R}:={\bf X}_{1}^{\prime}{\bf M}_2^{\prime}{\boldsymbol\varphi}_2({\bf X}_{2}) +
{\bf X}_{2}^{\prime}{\bf M}_1^{\prime}{\boldsymbol\varphi}_1({\bf X}_{1})$ and
$${\bf J}_k({\bf u}):= J_k(\Vert{\bf u}\Vert)\frac{{\bf u}}{\Vert{\bf u}\Vert}{\bf 1}_{[\Vert{\bf u}\Vert\neq 0]},\quad {\bf u}\in{\mathbb{S}_{d_k}};$$
\item[(ii)] the limiting distribution of the test statistic ${\tenq T}\,\!_{\text{\tiny\rm K}}^{(n)}$ is noncentral chi-square with $d_1d_2$ degrees of freedom and noncentrality parameter
\begin{equation*}
{9\tau^2}\Big\Vert {\rm E}_{H_0}\Big[{\bf F}^{\square}_{1;{\scriptscriptstyle \pm}} ({\bf X}_1)
{\bf R} {\bf F}^{\square}_{2;{\scriptscriptstyle \pm}} ({\bf X}_2)^{\prime}\Big]\Big\Vert^2_{\mathrm F},
\end{equation*}
where
$$\big({\bf F}^{\square}_{k;{\scriptscriptstyle \pm}} ({\bf X}_k)\big)_j:=2F_{kj}\Big(\big({\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{k})\big)_{j}\Big)-1$$
(recall that $F_{kj}$ denotes the cumulative distribution function of~$\big({\bf F}_{k;{\scriptscriptstyle \pm}}({\bf X}_{k}) \big)_{j}$).
\end{compactenum}
\end{proposition}
Suppose that all the conditions in Proposition~\ref{prop:J} hold. Then the limiting alternative distribution of Wilks' (log) likelihood ratio test statistic
is also noncentral chi-square, with~$d_1d_2$ degrees of freedom and noncentrality parameter
\begin{equation*}
\tau^2 \Big\Vert
{\boldsymbol{\Sigma}}_{1}^{ 1/2}{\bf M}_2^{\prime}{\boldsymbol{\Sigma}}_2^{-1/2} +
{\boldsymbol{\Sigma}}_{1}^{-1/2}{\bf M}_1 {\boldsymbol{\Sigma}}_2^{ 1/2} \Big\Vert^2_{\mathrm F};
\end{equation*}
see, e.g., page 919 of \citet{MR2201019}.
Now we are ready to compute the asymptotic relative efficiencies of our center-outward rank tests with respect to Wilks' likelihood ratio test.
\begin{proposition}\label{prop:Pitman}
Let ${\rm P}_{1}$ and ${\rm P}_{2}$ be elliptically symmetric distributions, namely, admit densities of the form
$$f_k({\bf x}_k)\propto ({\rm det}({\boldsymbol{\Sigma}}_k))^{-1/2}
\phi_k\Big(\sqrt{{\bf x}_k^{\prime}{\boldsymbol{\Sigma}}_k^{-1}{\bf x}_k}\,\Big),\quad k=1,2,$$
satisfying Assumption~\ref{asp:K}. Then,
the Pitman asymptotic relative efficiency (ARE)
of the center-outward test based on score functions $J_k$, $k=1,2$
with respect to Wilks' test (denoted by ${\psi}\,\!_{\mathcal N}^{(n)}$) is
\begin{align*}
{\rm ARE}({\psi}\,\!_{J}^{(n)}, {\psi}\,\!_{\mathcal N}^{(n)})
=\frac{\Big\Vert
D_1C_2{\boldsymbol{\Sigma}}_{1}^{ 1/2}{\bf M}_2^{\prime}{\boldsymbol{\Sigma}}_2^{-1/2} +
D_2C_1{\boldsymbol{\Sigma}}_{1}^{-1/2}{\bf M}_1 {\boldsymbol{\Sigma}}_2^{ 1/2} \Big\Vert^2_{\mathrm F}}{d_1d_2\sigma^{2}_{J_1}\sigma^{2}_{J_2}\Big\Vert
{\boldsymbol{\Sigma}}_{1}^{ 1/2}{\bf M}_2^{\prime}{\boldsymbol{\Sigma}}_2^{-1/2} +
{\boldsymbol{\Sigma}}_{1}^{-1/2}{\bf M}_1 {\boldsymbol{\Sigma}}_2^{ 1/2} \Big\Vert^2_{\mathrm F}},
\end{align*}
where
\begin{align*}
&C_k\equiv C_k(J_k,\phi_k):={\rm E}[J_k(U)\rho_k(\tilde F_k^{-1}(U))],\\
&D_k\equiv D_k(J_k,\phi_k):={\rm E}[J_k(U)\,\tilde F_k^{-1}(U)],
\end{align*}
$\rho_k:=-\,\phi_k^{\prime}/\phi_k$,
$\tilde F_k$ denotes the cumulative distribution function of $\Vert {\bf Y}_k\Vert$ with ${\bf Y}_k:={\boldsymbol{\Sigma}}_{k}^{-1/2}{\bf X}_k$,
and $U$ stands for a random variable uniformly distributed over $(0,1)$.
In particular, if
${\boldsymbol{\Sigma}}_{1} {\bf M}_2^{\prime}
={\bf M}_1 {\boldsymbol{\Sigma}}_{2}$, we have
\begin{compactenum}
\item[(i)]${\rm ARE}({\psi}\,\!_{J^{\text{\tiny{\rm vdW}}}}^{(n)}, {\psi}\,\!_{\mathcal N}^{(n)})\ge 1,$
where $J^{\text{\tiny{\rm vdW}}}_k,~k=1,2$~are~the
van der Waerden score functions~$J^{\text{\tiny{\rm vdW}}}_k(u)\!:=\!\big(F_{\chi^2_{d_k}}^{-1}\!(u)\big)^{\!1/2}\!$ with $F_{\chi^2_d}$ the $\chi^2_d$ cumulative distribution function;
\item[(ii)]
$
{\rm ARE}({\psi}\,\!_{J^{\text{\tiny{\rm W}}}}^{(n)}, {\psi}\,\!_{\mathcal N}^{(n)})
\ge \Omega(d_1,d_2)
\ge {9}/{16},
$
where the
Wilcoxon score functions are defined as $J^{\text{\tiny{\rm W}}}_k(u):=u$ for~$k=1,2$, and
\begin{align*}
&\Omega(d_1,d_2):=\frac{9(2c_{d_1}^2+d_1-1)^2(2c_{d_2}^2+d_2-1)^2}{1024 d_1d_2c_{d_1}^2c_{d_2}^2}, \\
&c_d:=\inf\Big\{ x>0 \ \Big\vert\ \Big(\sqrt{x} B_{\sqrt{2d-1} / 2}(x)\Big)^{\prime} = 0\Big\}, \\
&B_{a}(x):=\sum_{m=0}^{\infty}{\frac {(-1)^{m}}{m!\Gamma (m+a+1)}}{\left({\frac {x}{2}}\right)}^{2m+a}.
\end{align*}
\end{compactenum}
\end{proposition}
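The van der Waerden scores appearing in claim (i) are easy to evaluate numerically; the short sketch below (ours, not from the paper) also verifies the univariate identity $J^{\text{\tiny{\rm vdW}}}_1(u)=\Phi^{-1}((1+u)/2)$, with $\Phi$ the standard normal distribution function, which follows from $F_{\chi^2_1}(x)=2\Phi(\sqrt{x})-1$.
\begin{verbatim}
# Sketch (ours): van der Waerden score J(u) = sqrt(chi2_d quantile at u),
# with a check of the d = 1 identity J(u) = Phi^{-1}((1 + u) / 2).
import numpy as np
from scipy.stats import chi2, norm

def J_vdW(u, d):
    return np.sqrt(chi2.ppf(u, df=d))

u = np.linspace(0.01, 0.99, 99)
assert np.allclose(J_vdW(u, 1), norm.ppf((1 + u) / 2))
\end{verbatim}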
\citet{MR2691505} notes that
the Pitman ARE depends on the underlying covariance structures (${\boldsymbol{\Sigma}}_{1}$ and ${\boldsymbol{\Sigma}}_{2}$)
of ${\bf X}_1$ and ${\bf X}_2$ with elliptically symmetric distributions,
while most of the existing literature (e.g., \citet{MR2691505}, \citet{MR1467849},
\citet{MR1965367,MR2088309}, \citet{MR2201019}, \citet{MR2462206}, and \citet{deb2021efficiency}) focuses on the spherically symmetric case.
The proposition above fills this gap
by providing an explicit formula for the ARE with general ${\boldsymbol{\Sigma}}_{k}$'s.
Claim (i) establishes the Pitman non-admissibility, under ellipticity, of Wilks' test, which is uniformly dominated by our center-outward test with van der Waerden scores over elliptically symmetric distributions.
This is comparable with Theorem~4.1 in \citet{deb2021efficiency}.
Claim (ii) is a multivariate extension of \citet{MR79383}'s result; the minimum of $\Omega(d_1,d_2)$, 9/16, is achieved when $d_1,d_2\to\infty$. One can find more numerical values of~$\Omega(d_1,d_2)$ for fixed $d_1,d_2$ in \citet[Table~3]{MR2462206}.
\section{Conclusion}
Optimal transport provides an entirely new approach to rank-based statistical inference in dimension $d\geq 2$. The new multivariate ranks retain many of the favorable properties one is used to with the classical univariate ranks. Here, we demonstrate how the new multivariate ranks can be used for a definition of multivariate versions of popular rank correlations such as Kendall’s tau or Spearman’s rho. We show how the new multivariate rank correlations yield
fully distribution-free, yet powerful and computationally efficient tests of independence. A highlight of our results is the fact that the use of van der Waerden scores allows one to design a nonparametric test whose asymptotic efficiency under arbitrary elliptical densities never drops below that of Wilks' test---not even under a Gaussian model.
\setcitestyle{numbers}
\bibliographystyle{apalike}
| {
"timestamp": "2021-12-01T02:30:38",
"yymm": "2111",
"arxiv_id": "2111.15567",
"language": "en",
"url": "https://arxiv.org/abs/2111.15567",
"abstract": "Due to the lack of a canonical ordering in ${\\mathbb R}^d$ for $d>1$, defining multivariate generalizations of the classical univariate ranks has been a long-standing open problem in statistics. Optimal transport has been shown to offer a solution in which multivariate ranks are obtained by transporting data points to a grid that approximates a uniform reference measure (Chernozhukov et al., 2017; Hallin, 2017; Hallin et al., 2021), thereby inducing ranks, signs, and a data-driven ordering of ${\\mathbb R}^d$. We take up this new perspective to define and study multivariate analogues of the sign covariance/quadrant statistic, Spearman's rho, Kendall's tau, and van der Waerden covariances. The resulting tests of multivariate independence are fully distribution-free, hence uniformly valid irrespective of the actual (absolutely continuous) distribution of the observations. Our results provide the asymptotic distribution theory for these new test statistics, with asymptotic approximations to critical values to be used for testing independence between random vectors, as well as a power analysis of the resulting tests in an extension of the so-called Konijn model. For the van der Waerden tests, this power analysis includes a multivariate Chernoff--Savage property guaranteeing that, under elliptical generalized Konijn models, the asymptotic relative efficiency with respect to Wilks' classical (pseudo-)Gaussian procedure of our van der Waerden tests is strictly larger than or equal to one, where equality is achieved under Gaussian distributions only. We similarly provide a lower bound for the asymptotic relative efficiency of our Spearman procedure with respect to Wilks' test, thus extending the classical result by Hodges and Lehmann on the asymptotic relative efficiency, in univariate location models, of Wilcoxon tests with respect to the Student ones.",
"subjects": "Statistics Theory (math.ST)",
"title": "Distribution-free tests of multivariate independence based on center-outward quadrant, Spearman, Kendall, and van der Waerden statistics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517456453797,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7089606331680659
} |
https://arxiv.org/abs/1806.00230 | Invariant property for discontinuous mean-type mappings | It is known that if $M,\,N$ are continuous two-variable means such that $|M(x,y)-N(x,y)| < |x-y|$ for every $x,\ y$ with $x\ne y$, then there exists a unique invariant mean (which is continuous too). We are looking for invariant means for pairs satisfying the inequality above, but continuity of means is not assumed. In this setting the invariant mean is no longer uniquely defined, but we prove that there exist the smallest and the biggest one. Furthermore it is shown that there exists at most one continuous invariant mean related to each pair. | \section{Introduction}
The idea of invariant means was first introduced by Gauss \cite{Gau18} who considered so-called arithmetic-geometric means.
It was obtained as a limit in the iteration process
\Eq{*}{
x_{n+1}=\frac{x_n+y_n}{2},\qquad y_{n+1}=\sqrt{x_ny_n} \qquad (n \in \N_+\cup\{0\}),
}
where $x_0,\,y_0$ are two positive arguments. It is known that both $(x_n)$ and $(y_n)$ converge to a common limit, which is called the \emph{arithmetic-geometric mean} (of the initial arguments $x_0:=x$ and $y_0:=y$).
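To illustrate, here is a minimal numerical sketch of the Gauss iteration in Python (the function name \texttt{agm} and the stopping tolerance are our own illustrative choices, not part of the original construction):
\begin{verbatim}
import math

def agm(x, y, tol=1e-15):
    # Iterate x_{n+1} = (x_n + y_n)/2, y_{n+1} = sqrt(x_n * y_n)
    # for positive x, y; both sequences converge to the common limit,
    # the arithmetic-geometric mean of x and y.
    while abs(x - y) > tol:
        x, y = (x + y) / 2.0, math.sqrt(x * y)
    return x

print(agm(1.0, 2.0))  # approximately 1.4567910310469...
\end{verbatim}
The rapid (quadratic) convergence visible in such runs is one reason the arithmetic-geometric mean is useful in high-precision computation.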
In a more general setting a \emph{mean} is an arbitrary function $M \colon I^2 \to I$ (from now on $I$ stands for an arbitrary interval) such that
\Eq{*}{
\min(x,y) \le M(x,y) \le \max(x,y)\quad\text{ for all }\quad x,\,y \in I.
}
If the inequalities above are strict whenever $x\ne y$, then the mean $M$ itself is called \emph{strict}.
For two means $M,\,N$ on $I$ we define a selfmapping $(M,\,N) \colon I^2 \to I^2$ by
$(M,\,N)(x,y):=(M(x,y),N(x,y))$. We say that a mean $K$ on $I$ is an \emph{$(M,\,N)$-invariant mean} if $K=K \circ (M,N)$; more precisely,
\Eq{*}{
K(x,y)=K\big(M(x,y),N(x,y)\big)\quad\text{ for all }\quad x,\,y \in I.
}
In this setting the arithmetic-geometric mean is the invariant mean for the pair of the arithmetic and geometric means. In fact it was proved \cite[Theorem~8.2]{BorBor87} that if $M$ and $N$ are continuous and strict, then such a $K$ always exists and is uniquely determined. Later Matkowski \cite{Mat99b} proved that the strictness assumption can be relaxed to
\Eq{weakIn}{
\abs{M(x,y)-N(x,y)}< \abs{x-y} \text{ for all }x,\,y \in I,\:x \ne y.
}
Finally, as in the case of the arithmetic-geometric mean, we know (see e.g. \cite{BorBor87}) that the $(M,\,N)$-invariant mean is obtained as the common limit of the iterates of the mean-type mapping $(M,\,N)$ given by
\Eq{E:xnyn}{
x_0&=x, &\qquad y_0&=y;\\
x_{n+1}&=M(x_n,y_n), &\qquad y_{n+1}&=N(x_n,y_n) \qquad\text{ for all }n \ge0;
}
where $x$ and $y$ are its arguments. These sequences of iterates are used so often that, whenever a quadruple $(M,N,x,y)$ is given, the sequences $(x_n)$ and $(y_n)$ are understood to be defined by \eq{E:xnyn}.
Invariant means were extensively studied in recent years, see for example the papers by Baj\'ak--P\'ales \cite{BajPal09b,BajPal09a,BajPal10,BajPal13}, by Dar\'oczy--P\'ales \cite{Dar05a,DarPal02c,DarPal03a}, by G{\l}azowska \cite{Gla11b,Gla11a}, by Matkowski \cite{Mat99b,Mat02b,Mat05,Mat06b,Mat13}, by Matkowski--P\'ales \cite{MatPal15}, by the author \cite{Pas15c,Pas16a,Pas1801}, and the seminal book by Borwein--Borwein \cite{BorBor87}.
We will consider $(M,\,N)$-invariant means where $M,\,N$ satisfy inequality \eq{weakIn}, but the continuity assumption is replaced by symmetry (i.e. $M(x,y)=M(y,x)$ for all $x,\,y \in I$).
Let us emphasize that we do not require the means to be discontinuous. On the other hand, if both of them are continuous, then our considerations reduce to the case which has already been studied many times (see the references above).
\section{Invariant means with no continuity assumption}
In this section we are going to present some examples of constructions which provide $(M,\,N)$-invariant means, where $M$ and $N$ are not necessarily continuous.
There are two essentially independent ways of defining such means. The first idea is to generalize the limit which appears in the definition of an invariant mean (for example to $\liminf$ or $\limsup$); we realize this idea in Section~\ref{sec:BIM}. The second one is related to transfinite iteration (Section~\ref{sec:TIM}).
Let us begin with two elementary, yet useful, results.
\begin{lem}
If $M,N \colon I^2 \to I$ are symmetric means then every $(M,\,N)$-invariant mean is symmetric.
\end{lem}
Indeed, if $K$ is an arbitrary $(M,\,N)$-invariant mean then for every $x,\,y \in I$ we get
\Eq{*}{
K(x,y)=K(M(x,y),N(x,y))=K(M(y,x),N(y,x))=K(y,x).
}
\begin{lem}
\label{lem:symme}
If $M,N \colon I^2 \to I$ are symmetric means then a mean is $(M,\,N)$-invariant if and only if it is $(M\wedge N,M \vee N)$-invariant, where
\Eq{*}{
(M\wedge N)(x,y):=\min(M(x,y),N(x,y)),\quad x,\,y \in I,\\
(M\vee N)(x,y):=\max(M(x,y),N(x,y)),\quad x,\,y \in I.
}
\end{lem}
Indeed, by the previous lemma, every $(M,\,N)$-invariant (or $(M\wedge N,M \vee N)$-invariant) mean is symmetric. Furthermore, for every symmetric function $K \colon I^2 \to I$ we have $K\circ(M,N)=K \circ (M\wedge N,M \vee N)$.
\subsection{\label{sec:BIM} Boundary invariant means}
This idea is motivated by generalized limit functions. Our considerations cover all standard types of limits (i.e. $\lim$, $\liminf$, $\limsup$), but also more general functionals like the Banach limit\footnote{A Banach limit is a linear functional $L \colon \ell^\infty \to \R$ such that $\norm{L}_{\infty}=1$; $L(a_2,a_3,\dots)=L(a)$ for every $a \in \ell^\infty$; $L(a) \ge 0$ whenever $a_n \ge 0$ for all $n$; and $L(a)=\lim_{n \to \infty} a_n$ for every convergent sequence $(a_n)$ (cf. Conway \cite{Con90}).}. A function $\phi \colon \ell^{\infty}(I) \to I$ is called
\emph{2-limit-like} if for every $a=(a_1,a_2,\dots) \in \ell^{\infty}(I)$
\begin{enumerate}[(i)]
\item $\phi(a_1,a_2,a_3,\dots)=\phi(a_3,a_4,a_5,\dots)$, and
\item $\liminf_{n \to \infty} a_n \le \phi(a_1,a_2,\dots)\le \limsup_{n \to \infty} a_n$.
\end{enumerate}
Note that whenever the sequence $a$ is convergent, $\phi(a)= \lim_{n \to \infty} a_n$.
Let us emphasize that 2-limit-like functions are much more general objects than common (or even Banach) limits. In fact we can construct $2^{\mathfrak{c}}$ different 2-limit-like functions. Indeed, each function $w \colon [0,1] \to [0,1]$ leads to a 2-limit-like function on $\ell^{\infty}[0,1]$ given by
\Eq{*}{
\phi_w(a):= \liminf_{n \to \infty} a_{n} + w\big(\liminf_{n \to \infty} a_{2n}\big) \cdot \Big( \limsup_{n \to \infty} a_{n}- \liminf_{n \to \infty} a_{n} \Big).
}
Furthermore, by taking the family of $4$-periodic sequences $a^{(x)}=(0,x,0,1,0,x,0,1,\dots)$ for $x \in [0,1]$, it can be verified that the mapping $w \mapsto \phi_w$ is one-to-one: indeed, $\liminf_{n \to \infty} a^{(x)}_n=0$, $\limsup_{n \to \infty} a^{(x)}_n=1$ and $\liminf_{n \to \infty} a^{(x)}_{2n}=x$, whence $\phi_w\big(a^{(x)}\big)=w(x)$.
We can now use this definition to introduce a wide class of $(M,\,N)$-invariant means.
\begin{prop}
Let $M,\,N \colon I^2 \to I$ be two means and $\phi \colon \ell^{\infty}(I) \to I$ be a 2-limit-like function. Then the mean $\Bo_\phi$ given by
\Eq{*}{
\Bo_\phi(x,y):=\phi(x_0,y_0,x_1,y_1,x_2,y_2,\dots)
}
is $(M,\,N)$-invariant.
Conversely, every $(M,\,N)$-invariant mean equals $\Bo_\phi$ for some 2-limit-like function $\phi$.
\end{prop}
\begin{proof}
By the definition of mean we have, for all $n \ge 0$,
\Eq{*}{
\max(x_{n+1},y_{n+1}) \le \max(x_n,y_n).
}
Thus the sequence $(\max(x_{n},y_{n}))_{n \in \N}$ is nonincreasing and
\Eq{*}{
\limsup \:(x_0,y_0,x_1,y_1,x_2,y_2,\dots)
&=\limsup_{n \to \infty} \max(x_n,y_n)\\
&\le \max(x_0,y_0)=\max(x,y).
}
Similarly we obtain $\liminf \:(x_0,y_0,x_1,y_1,x_2,y_2,\dots)\ge\min(x,y)$.
Now, as $\phi$ is between $\liminf$ and $\limsup$, we obtain that $\Bo_\phi$ is a mean. Moreover
\Eq{*}{
\Bo_\phi(M(x,y),N(x,y))&=\phi\Big(M(x_0,y_0),N(x_0,y_0),\\
&\qquad\hspace{-4em} M\big(M(x_0,y_0),N(x_0,y_0)\big),
N\big(M(x_0,y_0),N(x_0,y_0)\big),\dots\Big)\\
&=\phi(x_1,y_1,x_2,y_2,\dots)
=\phi(x_0,y_0,x_1,y_1,x_2,y_2,\dots)\\
&=\Bo_\phi(x_0,y_0)=\Bo_\phi(x,y),
}
which concludes the proof.
To prove the converse, let $K$ be an arbitrary $(M,\,N)$-invariant mean.
We define the function $\phi$ on the orbits of the mean-type mapping by
\Eq{proper}{
\phi(x_0,y_0,x_1,y_1\dots):=K(x,\,y) &\qquad x,\,y \in I,
}
and on all remaining sequences by
\Eq{fulfilled}{
\phi(a_1,a_2,a_3,a_4,\dots)=\liminf_{n \to \infty} a_n.
}
By the definition of the sequences $(x_n),\,(y_n)$ and elementary properties of $\liminf$ we obtain that $\phi$ satisfies (i). Moreover, applying (i) repeatedly and the easy-to-check inequality $\inf(a) \le \phi(a)\le \sup(a)$ to the tails of $a$, we obtain that property (ii) is also valid.
\end{proof}
In the two particular cases $\phi=\liminf$ and $\phi=\limsup$, as the inclusion
\Eq{*}{
[\min(M(x,y),N(x,y)),\,\max(M(x,y),N(x,y))] \subset [x,\,y]
}
is valid for every $x,\,y \in I$ with $x <y$ (so the intervals $[\min(x_n,y_n),\,\max(x_n,y_n)]$ are nested and the limits below exist), we obtain two very important $(M,\,N)$-invariant means.
Define \emph{lower-} and \emph{upper-invariant means} $\Lo,\,\Up \colon I^2 \to I$ by
\Eq{*}{
\Lo(x,y)&:=\Bo_{\liminf}(x,y)=\lim_{n \to \infty} \min(x_n,y_n), \\
\Up(x,y)&:=\Bo_{\limsup}(x,y)=\lim_{n \to \infty} \max(x_n,y_n).
}
In fact, $\Lo$ and $\Up$ are the smallest and the greatest $(M,\,N)$-invariant means, respectively: every $(M,\,N)$-invariant mean $K$ satisfies $K(x,y)=K(x_n,y_n)$, hence is bounded from below by $\min(x_n,y_n)$ and from above by $\max(x_n,y_n)$ (for all $n \in \N$).
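Although $\Lo$ and $\Up$ are defined through limits, they are straightforward to approximate numerically. Here is a minimal Python sketch (the function name and the default iteration count are our own illustrative choices), applicable to a pair of possibly discontinuous means:
\begin{verbatim}
def lower_upper(M, N, x, y, n_iter=100):
    # Iterate the mean-type mapping (M, N).  By the monotonicity noted
    # above, min(x_n, y_n) increases towards Lo(x, y) and max(x_n, y_n)
    # decreases towards Up(x, y), so after finitely many steps the
    # returned pair brackets the interval [Lo(x, y), Up(x, y)].
    for _ in range(n_iter):
        x, y = M(x, y), N(x, y)
    return min(x, y), max(x, y)
\end{verbatim}
For continuous $M,\,N$ satisfying \eq{weakIn} the two returned values approach each other and recover the classical unique invariant mean; for discontinuous pairs they may remain apart, as in the example of the application section below.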
\subsection{\label{sec:TIM}Transfinite invariant mean}
The transfinite invariant mean is the third (after the lower- and upper-) natural invariant mean. In order to define it, we assume comparability of the means $M$ and $N$, more precisely $M(x,y) \le N(x,y)$ for all $x,\,y \in I$. Moreover, we assume that the inequality \eq{weakIn} is valid.
Let us consider two transfinite sequences\footnote{That is, sequences which are enumerated by ordinal numbers; cf. Cantor \cite{Can55}.}
$(x_\alpha)$ and $(y_\alpha)$, defined by extending the convention \eq{E:xnyn} in the following way:
\Eq{E:defxalyal}{
x_{\alpha}:=\lim_{\beta\nearrow\alpha} x_\beta, \qquad y_{\alpha}:=\lim_{\beta\nearrow\alpha} y_\beta \qquad\text{ for all limit ordinals }\alpha.
}
To ensure the correctness of this definition, we observe that $(x_\alpha)$ is nondecreasing while $(y_\alpha)$ is nonincreasing, so the limits in \eq{E:defxalyal} exist. As before, whenever $M$, $N$, $x$, and $y$ are given, these sequences are automatically defined.
Inequality $M \le N$ implies that $x_\alpha \le y_\alpha$ for every $\alpha>0$. In particular, by the definition of $\Lo$ and $\Up$, we get
\Eq{E:LoUpomega}{
\Lo(x,\,y)= x_{\omega}\quad\text{ and }\quad\Up(x,\,y)=y_{\omega}\,.
}
Thus
\Eq{*}{
A_\alpha\colon I^2 \ni (x,y)\mapsto x_\alpha \in I \qquad \text{and}\qquad B_\alpha\colon I^2 \ni (x,y)\mapsto y_\alpha \in I
}
are expressed as functions of $\Lo(x,\,y)$ and $\Up(x,\,y)$ for all $\alpha>\omega$. In particular, they are all $(M,\,N)$-invariant. Moreover, $A_\omega=\Lo$ and $B_\omega=\Up$.
The next lemma shows that the iteration sequences $(A_\alpha)$ and $(B_\alpha)$ are eventually fixed. They reach that state after at most $\omega_1$ iterations ($\omega_1$ stands for the first uncountable ordinal). This implies that there is no point in considering indexes greater than $\omega_1$, as no new means are obtained.
\begin{lem}
Let $M,\,N \colon I^2 \to I$ be two means having property \eq{weakIn} such that $M\le N$. Then $A_{\omega_1}(x,y)=B_{\omega_1}(x,y)$ for all $x,\,y \in I$.
\end{lem}
\begin{proof}
We need to prove that $x_{\omega_1}=y_{\omega_1}$.
Inequality $M \le N$ implies $x_\alpha \le y_\alpha$ for all $\alpha \ge 1$. Moreover \eq{weakIn} yields that for every $\alpha<\omega_1$ either $y_\alpha=x_\alpha$ (equivalently $y_\alpha-x_\alpha=0$) or $y_{\alpha+1}-x_{\alpha+1} < y_{\alpha}-x_{\alpha}$.
If $x_{\alpha_0}=y_{\alpha_0}$ for some $\alpha_0<\omega_1$, then by the reflexivity of means we obtain $x_{\alpha}=y_{\alpha}$ for all $\alpha \in [\alpha_0,\omega_1]$. In particular $x_{\omega_1}=y_{\omega_1}$.
From now on we may assume that $(y_\alpha-x_\alpha)_{\alpha<\omega_1}$ is strictly decreasing. As $x_\alpha \le y_\alpha$, this sequence consists of nonnegative entries only. This leads to a contradiction, as every strictly decreasing transfinite sequence of nonnegative real numbers has countable length.
\end{proof}
\begin{rem}
As both $M$ and $N$ are means we obtain, applying the inequality $M \le N$, that the sequence
$(x_\alpha)_{\alpha \le \omega_1}$ is nondecreasing, while $(y_\alpha)_{\alpha \le \omega_1}$ is nonincreasing.
\end{rem}
Based on the lemma above we can define, for $M \le N$, a \emph{transfinite invariant mean} $\Tr \colon I^2 \to I$ by
\Eq{def:Trdev}{
\Tr(x,y):=A_{\omega_1}(x,y)=B_{\omega_1}(x,y).
}
By virtue of Lemma~\ref{lem:symme}, we can drop the comparability assumption whenever both means are symmetric (as was already done in the case of $\Lo$ and $\Up$).
Let us now present an important property of the transfinite invariant mean.
\begin{thm} \label{thm:uniqcont}
Let $I$ be an interval, $M,\,N \colon I^2 \to I$ be means with $M\le N$ satisfying \eq{weakIn}. Either $\Tr$ is a unique continuous $(M,\,N)$-invariant mean or there are no continuous $(M,\,N)$-invariant means.
\end{thm}
\begin{proof}
Let $K$ be an arbitrary continuous $(M,\,N)$-invariant mean. We show that $K=\Tr$.
Fix $x,\,y \in I$. Using the definition of $\Tr$, it suffices to prove that $K(x,y)=x_{\omega_1}$. We will prove by transfinite induction that
\Eq{E:TrIndas}{
K(x,y)=K(x_\alpha,y_\alpha)\quad\text{ for all }\quad\alpha\ge 0.
}
Indeed, as $K$ is $(M,\,N)$-invariant, we obtain
\Eq{*}{
K(x_{\alpha+1},y_{\alpha+1})=K\big(M(x_\alpha,y_\alpha),N(x_\alpha,y_\alpha)\big)=K(x_\alpha,y_\alpha).
}
Furthermore, as $K$ is continuous, for every limit ordinal number $\alpha$, we get
\Eq{*}{
K(x_\alpha,\,y_\alpha )
=K\Big(\lim_{\beta \nearrow \alpha} x_\beta,\,\lim_{\beta \nearrow \alpha} y_\beta \Big)
=\lim_{\beta \nearrow \alpha} K( x_\beta,\,y_\beta).
}
Now \eq{E:TrIndas} easily follows. Finally, the reflexivity of $K$ combined with the equality $x_{\omega_1}=y_{\omega_1}$ concludes the proof.
\end{proof}
\begin{xrem}
By Lemma~\ref{lem:symme}, we can drop the comparability assumption whenever both $M$ and $N$ are symmetric.
\end{xrem}
\section{Application and conclusions}
\subsection{Example of invariant property for noncontinuous means}
Fix an interval $I$ with $|I|>1$ and functions $M,\,N \colon I^2 \to I$ defined by
\Eq{*}{
M(x,y)&:=\begin{cases} \frac12(x+y) & \text{ for } \abs{x-y} \le 1, \\ \frac12 \big(x+y-\sqrt{\abs{x-y}}\:\big) & \text{ for }\abs{x-y} > 1, \end{cases}\qquad x,\,y \in I;\\
N(x,y)&:=\begin{cases} \frac12(x+y) & \text{ for } \abs{x-y} \le 1, \\ \frac12 \big(x+y+\sqrt{\abs{x-y}}\:\big) & \text{ for }\abs{x-y} > 1, \end{cases}\qquad x,\,y \in I.
}
It is easy to check that both $M$ and $N$ are symmetric and strict means on $I$. Furthermore, the arithmetic mean is $(M,\,N)$-invariant. Hence, by Theorem~\ref{thm:uniqcont}, it coincides with the transfinite invariant mean for this pair.
Let $(x_\alpha)$ and $(y_\alpha)$ be the two transfinite sequences corresponding to the iteration of $(M,\,N)$. Obviously, as $N \ge M$, we have $y_\alpha \ge x_\alpha$ for all $\alpha>0$. Thus, for all $\alpha\ge 0$,
\Eq{*}{
y_{\alpha+1}-x_{\alpha+1}=\begin{cases}
0 & \text{ if }\abs{y_\alpha-x_\alpha}\le 1,\\
\sqrt{\abs{y_\alpha-x_\alpha}} & \text{ if }\abs{y_\alpha-x_\alpha}>1.
\end{cases}
}
The iteration of the square root is well known (for $d>1$ the iterates $d^{1/2^n}$ decrease to $1$), so we obtain
\Eq{Ex1}{
y_{\omega}-x_{\omega}=\begin{cases} 0 & \text{ if }\abs{x-y}\le1, \\
1 & \text{ if }\abs{x-y}> 1.
\end{cases}
}
On the other hand, we can check by a simple transfinite induction that
\Eq{Ex2}{
x_\alpha+y_\alpha=x+y\quad \text{ for all }\alpha \ge 0.
}
We now combine \eq{E:LoUpomega}, \eq{Ex1}, and \eq{Ex2} for $\alpha=\omega$ to obtain
\Eq{*}{
\Lo(x,y)&=\begin{cases} \frac{x+y}2 & \text{ if }\abs{x-y}\le1, \\
\frac{x+y-1}2 & \text{ if }\abs{x-y}> 1;
\end{cases} \\
\Up(x,y)&=\begin{cases} \frac{x+y}2 & \text{ if }\abs{x-y}\le1, \\
\frac{x+y+1}2 & \text{ if }\abs{x-y}> 1.
\end{cases}
}
To express it briefly, for every $c \in [-1,1]$, define the mean $K_c \colon I^2 \to I$ by
\Eq{*}{
K_c(x,y):=\begin{cases} \frac{x+y}2 & \abs{x-y} \le 1, \\
\frac{x+y+c}2 & \abs{x-y} > 1.
\end{cases}
}
With this notation we can simply write $\Lo=K_{-1}$, $\Up=K_1$, and $\Tr=K_0$.
If we now continue the inductive steps, we get $A_{\omega+1}=B_{\omega+1}=K_0=\Tr$. Thus (in this example) the sequences $(A_\alpha)_{\alpha\ge \omega}$ and $(B_\alpha)_{\alpha\ge \omega}$ contain the lower-, upper-, and transfinite invariant means only.
On the other hand, every convex combination of invariant means is again an invariant mean; in particular, as $K_c=\frac{1-c}{2}K_{-1}+\frac{1+c}{2}K_1$, the mean $K_c$ is $(M,\,N)$-invariant for all $c \in [-1,1]$. This shows that not every $(M,\,N)$-invariant mean is obtained in the sequences $(A_\alpha)$, $(B_\alpha)$.
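As a numerical sanity check of these formulas, one may apply the sketch \texttt{lower\_upper} from Section~\ref{sec:BIM} to this pair (the starting point and the iteration count below are our own illustrative choices, and finitely many iterations only approximate stage $\omega$ of the transfinite process):
\begin{verbatim}
import math

def M(x, y):
    d = abs(x - y)
    return (x + y) / 2.0 if d <= 1 else (x + y - math.sqrt(d)) / 2.0

def N(x, y):
    d = abs(x - y)
    return (x + y) / 2.0 if d <= 1 else (x + y + math.sqrt(d)) / 2.0

# For x = 0, y = 10 we have |x - y| > 1, so Lo = 4.5 and Up = 5.5.
print(lower_upper(M, N, 0.0, 10.0, n_iter=40))
# approximately (4.5, 5.5), i.e. (K_{-1}(0, 10), K_1(0, 10))
\end{verbatim}
In exact arithmetic the difference $y_n-x_n=10^{1/2^n}$ stays above $1$ for every finite $n$; if the loop is continued long enough in floating point, rounding eventually forces the difference to $1$, after which one further step collapses both values to $5=K_0(0,10)$, a finite-precision shadow of the transfinite step $A_{\omega+1}=B_{\omega+1}=\Tr$.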
\subsection{Application to functional equations} There appears a natural problem: which results known for continuous means can be adapted to the discontinuous setting?
In this section we are going to prove just a single result inspired by Matkowski \cite[Theorem 4]{Mat06b}.
\begin{prop}
Let $M,\,N \colon I^2 \to I$ be two means with $M \le N$, having property \eq{weakIn}, and $\Phi \colon I^2 \to \R$ be a continuous function. Then
\Eq{Mat06bAs}{
\Phi(x,y)=\Phi(M(x,y),N(x,y)) \qquad \text{for all }x,\,y \in I
}
if and only if there exists a continuous function $f \colon I \to \R$ such that
\Eq{*}{
\Phi=f \circ \Tr.
}
Moreover if $x \mapsto \Phi(x,x)$ is an injective function then $\Tr$ is continuous.
\end{prop}
Recall that, as in many other results above, the comparability assumption may be replaced by symmetry.
\begin{proof}
Take $x,\,y \in I$ arbitrarily. Using \eq{E:defxalyal}, equality \eq{Mat06bAs} can be rewritten as
$\Phi(x_\alpha,y_\alpha)=\Phi(x_{\alpha+1},y_{\alpha+1})$ for every $\alpha$.
By continuity of $\Phi$ we may extend the inductive proof to limit ordinals and
obtain $\Phi(x_\alpha,y_\alpha)=\Phi(x_0,y_0)$ for every $\alpha$. If we put $\alpha=\omega_1$, by \eq{def:Trdev}, we obtain
\Eq{E:fTrpr}{
\Phi(\Tr(x,y),\Tr(x,y))=\Phi(x,y).
}
To complete the first implication we can simply define $f(x):=\Phi(x,x)$.
The converse implication is immediate in view of $(M,N)$-invariance of $\Tr$.
Additionally, if $x \mapsto \Phi(x,x)$ is injective, then so is $f$. In particular, $f^{-1}$ exists and is continuous, being the inverse of a continuous injection on an interval.
Consequently $\Tr=f^{-1} \circ \Phi$ is continuous, too.
\end{proof}
\subsection{Conclusions}
In this paper we discussed some invariant means which emerge naturally in the case of two noncontinuous means which are either comparable or both symmetric
(sometimes additionally satisfying condition \eq{weakIn}).
There appear several natural problems concerning this new setting. For example:
{\it (i)} find the `noncontinuous counterparts' of results which are stated for continuous means,
{\it (ii)} find additional assumption(s) on an invariant mean which can be made in order to obtain uniqueness (we presented three such properties: minimality, maximality, and continuity),
{\it (iii)} generalize this concept to multivariable means (this is relatively natural in the case of $\Lo$ and $\Up$ only).
Some progress toward (i) and (ii) was presented above, while the third aspect is outside the scope of the present paper.
| {
"timestamp": "2018-06-04T02:08:13",
"yymm": "1806",
"arxiv_id": "1806.00230",
"language": "en",
"url": "https://arxiv.org/abs/1806.00230",
"abstract": "It is known that if $M,\\,N$ are continuous two-variable means such that $|M(x,y)-N(x,y)| < |x-y|$ for every $x,\\ y$ with $x\\ne y$, then there exists a unique invariant mean (which is continuous too).We are looking for invariant means for pairs satisfying the inequality above, but continuity of means is not assumed.In this setting the invariant mean is no longer uniquely defined, but we prove that there exist the smallest and the biggest one. Furthermore it is shown that there exists at most one continuous invariant mean related to each pair.",
"subjects": "Functional Analysis (math.FA)",
"title": "Invariant property for discontinuous mean-type mappings",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.97805174308637,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7089606313131157
} |
https://arxiv.org/abs/1606.00176 | Large time monotonicity of solutions of reaction-diffusion equations in R^N | In this paper, we consider nonnegative solutions of spatially heterogeneous Fisher-KPP type reaction-diffusion equations in the whole space. Under some assumptions on the initial conditions, including in particular the case of compactly supported initial conditions, we show that, above any arbitrary positive value, the solution is increasing in time at large times. Furthermore, in the one-dimensional case, we prove that, if the equation is homogeneous outside a bounded interval and the reaction is linear around the zero state, then the solution is time-increasing in the whole line at large times. The question of the monotonicity in time is motivated by a medical imagery issue. | \subsubsection*{Framework and main assumptions}
The initial condition $u_0$ is in $L^{\infty}(\mathbb{R}^N)$ with $0\le u_0(x)\le 1$ a.e. in $\mathbb{R}^N$ and $u_0$ is non-trivial, in the sense that $\|u_0\|_{L^{\infty}(\mathbb{R}^N)}>0$. We also assume that either there exists $\beta>0$ such that
\begin{equation}\label{u0gaussian}
u_0(x)=O(e^{-\beta|x|^2})\ \hbox{ as }|x|\to+\infty
\end{equation}
(a particularly important case is that of compactly supported $u_0$), or there exist $0<\gamma\le\delta$ and~$\lambda>0$ such that
\begin{equation}\label{u0}
\gamma\,e^{-\lambda|x|}\le u_0(x)\le\delta\,e^{-\lambda|x|}\ \hbox{ for all }|x|\hbox{ large enough},
\end{equation}
where $|\cdot|$ denotes the Euclidean norm in $\mathbb{R}^N$.
The diffusion term $A$ is assumed to be a symmetric matrix field $A=(A_{ij})_{1\le i,j\le N}$ of class~$C^{1,\alpha}(\mathbb{R}^N)$ for some $0<\alpha<1$ and uniformly positive definite: there exists a constant~$\nu\ge1$ such that
\begin{equation}\label{hypA}
\nu^{-1}I\le A(x)\le\nu I\ \hbox{ for all }x\in\mathbb{R}^N,
\end{equation}
in the sense of symmetric matrices, where $I\in\mathbb{S}_N(\mathbb{R})$ is the identity matrix. One also assumes that~$A$ is locally asymptotically homogeneous at infinity, in the sense that
\begin{equation}\label{oscA}
\forall\,1\le i,j\le N,\ \ |\nabla A_{ij}(x)|\to0\ \hbox{ as }|x|\to+\infty.
\end{equation}
A particular example of a $C^{1,\alpha}(\mathbb{R}^N)$ matrix field satisfying~\eqref{oscA} is when $A_{ij}(x)$ converges to a constant as $|x|\to+\infty$ for every $1\le i,j\le N$. An important subcase is that of a matrix field~$A$ which is independent of $x$. Notice that, since $A$ is of class $C^{1,\alpha}(\mathbb{R}^N)$, the condition~\eqref{oscA} is equivalent to the fact that the local oscillations of the functions $A_{ij}$ converge to $0$ at infinity, that is, for every $R>0$ and $1\le i,j\le N$,
$$\mathop{\rm{osc}}_{\overline{B(x,R)}}A_{ij}:=\max_{\overline{B(x,R)}}A_{ij}-\min_{\overline{B(x,R)}}A_{ij}\ \to\ 0\ \hbox{ as }|x|\to+\infty,$$
where $B(x,R)$ denotes the open Euclidean ball of center $x$ and radius $R$. However, notice that the matrix fields $A(x)$ satisfying this property may not converge as $|x|\to+\infty$ in general, even in dimension $N=1$.
The reaction term $f:\mathbb{R}^N\times[0,1]\to\mathbb{R}$ is a continuous function, of class $C^{0,\alpha}$ in $x$ uniformly with respect to $u\in[0,1]$, and Lipschitz continuous in $u$, uniformly with respect to $x\in\mathbb{R}^N$. Throughout the paper, one assumes that
\begin{equation}\label{f1}
f(x,0)=f(x,1)=0\ \hbox{ for every }x\in\mathbb{R}^N
\end{equation}
and that
\begin{equation}\label{f2}
u\mapsto\frac{f(x,1-u)}{u}\ \hbox{ is non-increasing in }(0,1]
\end{equation}
for every $x\in\mathbb{R}^N$. One also assumes that there exist $\mu>0$ and $s_0\in(0,1)$ such that
\begin{equation}\label{hypf}
f(x,s)\ge\mu\,s\ \hbox{ for all }(x,s)\in\mathbb{R}^N\times[0,s_0].
\end{equation}
These assumptions imply in particular that $f$ is positive in $\mathbb{R}^N\times(0,1)$ and even that $\inf_{x\in\mathbb{R}^N}f(x,s)\ge\mu s>0$ for every $s\in(0,s_0]$ and $\inf_{x\in\mathbb{R}^N}f(x,s)\ge\mu s_0(1-s)/(1-s_0)>0$ for every $s\in[s_0,1)$. Furthermore, $f$ is assumed to be of class $C^1$ with respect to $u$ in~$\mathbb{R}^N\times([0,s_0]\cup[s_1,1])$ for some $s_1\in(0,1)$ with $f_u=\frac{\partial f}{\partial u}$ bounded and uniformly continuous in $\mathbb{R}^N\times([0,s_0]\cup[s_1,1])$, and of class $C^{0,\alpha}$ with respect to $x$ uniformly in $s\in[0,s_0]\cup[s_1,1]$. Lastly, one assumes that~$f_u(\cdot,0)$ is locally asymptotically homogeneous at infinity, in the sense that, for every $R>0$,
\begin{equation}\label{oscf}
\mathop{\rm{osc}}_{\overline{B(x,R)}}f_u(\cdot,0)\to0\ \hbox{ as }|x|\to+\infty.
\end{equation}
Notice that~\eqref{oscf} holds if $f_u(\cdot,0)\in C^1(\mathbb{R}^N)$ and $|\nabla f_u(x,0)|\to0$ as $|x|\to+\infty$ or if~$f_u(x,0)$ converges to a constant as~$|x|\to+\infty$ (in particular, if $f_u(\cdot,0)$ is constant). An important class of examples of functions $f$ satisfying the aforementioned hypotheses is when $f(x,u)=r(x)\,g(u)$, where $g$ is of class $C^1$, concave in $[0,1]$, positive in $(0,1)$ with~$g(0)=g(1)=0$, and $r$ is of class~$C^{0,\alpha}(\mathbb{R}^N)$, locally asymptotically homogeneous at infinity and $0<\inf_{\mathbb{R}^N}r\le\sup_{\mathbb{R}^N}r<+\infty$. The archetype is the homogeneous logistic Fisher-KPP~\cite{f,kpp} reaction $f(x,u)=u(1-u)$ with $r(x)=1$ and $g(u)=u(1-u)$ as above. However, for general functions $f(x,u)$ satisfying the above assumptions, slow oscillations at infinity are not excluded, even in dimension $N=1$ (see~\cite{ggn} for the study of one-dimensional equations of the type~\eqref{eq} with slow oscillations as $x\to\pm\infty$).
From the parabolic regularity theory, the solution $u$ of~\eqref{eq} is well-defined for all $t>0$ and it is classical in~$(0,+\infty)\times\mathbb{R}^N$ with
\begin{equation}\label{0u1}
0<u(t,x)<1\ \hbox{ for all }t>0\hbox{ and }x\in\mathbb{R}^N,
\end{equation}
by the strong parabolic maximum principle. From the assumptions made on $f$, even without~\eqref{oscf}, it is shown in~\cite{bhr} that any stationary solution $p(x)$ of~\eqref{eq} such that $0\le p\le 1$ in~$\mathbb{R}^N$ is either identically equal to $0$ in $\mathbb{R}^N$ or is bounded from below by a positive constant in~$\mathbb{R}^N$. Since $\inf_{x\in\mathbb{R}^N}f(x,s)>0$ for every $s\in(0,1)$, it then follows immediately in the latter case that $p$ is identically equal to $1$ in $\mathbb{R}^N$. Therefore, again from~\cite{bhr}, the solution $u$ of~\eqref{eq} satisfies $u(t,x)\to1$ as $t\to+\infty$ locally uniformly in $x\in\mathbb{R}^N$.
Lastly, from~\cite{bhn}, it is also known that there is $c>0$ such that
\begin{equation}\label{defc}
\min_{|x|\le c t}u(t,x)\to1\ \hbox{ as }t\to+\infty.
\end{equation}
In other words, the state $1$ invades the whole space as $t\to+\infty$ with at least a positive spreading speed $c>0$. But, the asymptotic spreading speed of $u$ may not be unique, in the sense that some oscillations of the spreading rates of the level sets of $u$ between two different positive speeds are possible in general even for compactly supported initial conditions, see~\cite{ggn}. This means that, in general, there is no speed $c_0>0$ such that~\eqref{defc} holds for all $c\in[0,c_0)$ and $\max_{|x|\ge ct}u(t,x)\to0$ as $t\to+\infty$ for all $c>c_0$. However, when the equation~\eqref{eq} is homogeneous and the initial condition is compactly supported, there exists such a positive spreading speed $c_0$, see e.g.~\cite{aw}.
\subsubsection*{Main results}
The main result of our paper is the following asymptotic time-monotonicity of the solutions of~\eqref{eq}.
\begin{theo}\label{th1}
Under the above assumptions~\eqref{u0gaussian} or~\eqref{u0}, and~\eqref{hypA}-\eqref{oscf}, the solution $u$ of~\eqref{eq} satisfies
\begin{equation}\label{ut}
\inf_{x\in\mathbb{R}^N}u_t(t,x)\to0\ \hbox{ as }t\to+\infty.
\end{equation}
Furthermore, for every $0<\varepsilon<1$, there is a time $T_{\varepsilon}>0$ such that
\begin{equation}\label{Teps}
\forall\,(t,x)\in[T_{\varepsilon},+\infty)\times\mathbb{R}^N,\ \ u(t,x)\ge\varepsilon\ \Longrightarrow\ u_t(t,x)>0.
\end{equation}
\end{theo}
Property~\eqref{Teps} means the monotonicity in time at large times in the time-dependent sets where $u$ is bounded away from $0$. On the other hand, in the sets where, say, $t\ge 1$ and $u$ is close to $0$, then $u_t$ is close to $0$ too.\footnote{Indeed, if $u(t_n,x_n)\to0$ with $(t_n,x_n)\in[1,+\infty)\times\mathbb{R}^N$, then the functions $v_n(t,x):=u(t+t_n,x+x_n)$ converge locally in $C^{1,2}_{t,x}((-1,+\infty)\times\mathbb{R}^N)$, up to extraction of a subsequence, to a solution $v$ of an equation of the type $v_t=\mbox{div}(A_{\infty}(x)\nabla u)+f_{\infty}(x,u)$ for some diffusion and reaction coefficients $A_{\infty}$ and $f_{\infty}$ satisfying the same type of assumptions as $A$ and $f$. Furthermore, $v(0,0)=0$ and $0\le v\le 1$ in $(-1,+\infty)\times\mathbb{R}^N$, whence $v=0$ in $(-1,0]\times\mathbb{R}^N$ from the strong maximum principle and then $v=0$ in $(-1,+\infty)\times\mathbb{R}^N$ from the uniqueness of the solutions of the associated Cauchy problem. Finally, $v_t(0,0)=0$ and $u_t(t_n,x_n)=(v_n)_t(0,0)\to v_t(0,0)=0$ as $n\to+\infty$.} Therefore, property~\eqref{Teps} easily yields~\eqref{ut}. Lastly, since
\begin{equation}\label{conv0}
u(t,x)\to0\hbox{ as }|x|\to+\infty\hbox{ locally uniformly in }t\in[0,+\infty),
\end{equation}
as will be easily seen in the proof of Theorem~\ref{th1} (more precisely, see the proof of Lemma~\ref{lem1} below), property~\eqref{Teps} implies that, for every $T\ge T_{\varepsilon}$, the set $\big\{(t,x)\in[T_{\varepsilon},T]\times\mathbb{R}^N,\ u(t,x)\ge\varepsilon\big\}$ is compact, whence
$$\min_{(t,x)\in[T_{\varepsilon},T]\times\mathbb{R}^N,\,u(t,x)\ge\varepsilon}u_t(t,x)>0.$$
Let us now comment on some earlier related references in the literature. In~\cite{r}, the question of the time-monotonicity at large times had been addressed for the solutions of some reaction-diffusion equations in straight infinite cylinders with advection shear flows and with $f$ being independent of the unbounded variable. Other time-monotonicity results have been obtained in~\cite{bh} for time-global transition fronts of space-heterogeneous reaction-diffusion equations of the type~\eqref{eq} connecting two stable limiting points. In~\cite{z}, the time-monotonicity of the solutions~$u$ of equations $u_t=\Delta u+f(x,u)$ with reactions $f$ of the ignition type or involving a weak Allee effect has been established for large times in the set where $0<\varepsilon\le u(t,x)\le 1-\varepsilon<1$, for any $\varepsilon>0$ small enough. Lastly, we refer to~\cite{dm} for some results on time-monotonicity for small $t$ and large $x$ for the solutions of the homogeneous equation $u_t=\Delta u+g(u)$ which are initially compactly supported.
For the heterogeneous Fisher-KPP type equation~\eqref{eq}, we conjecture that, under the assumptions of Theorem~\ref{th1}, $u_t(t,\cdot)>0$ in $\mathbb{R}^N$ for $t$ large enough. This is still an open question. However, we can answer positively under some additional assumptions on~\eqref{eq} in dimension~$1$.
\begin{theo}\label{th2}
In addition to~\eqref{u0gaussian} or~\eqref{u0},~\eqref{hypA} and~\eqref{f1}-\eqref{hypf}, assume that $N=1$, that~$A'(x)=0$ for $|x|$ large enough and that there are $\lambda^{\pm}>0$, $\theta\in(0,1)$ and two functions $f^{\pm}:[0,1]\to\mathbb{R}$ such that $f(x,u)=f^{\pm}(u)$ for $\pm x$ large enough and $f^{\pm}(u)=\lambda^{\pm}u$ for all $u\in[0,\theta]$. Then there is $\tau>0$ such that the solution $u$ of~\eqref{eq} satisfies
\begin{equation}\label{utT}
u_t(t,x)>0\ \hbox{ for all }t\ge\tau\hbox{ and }x\in\mathbb{R}.
\end{equation}
\end{theo}
Let us now describe the main ideas of the proof of Theorems~\ref{th1} and~\ref{th2} and the outline of the paper. In Section~\ref{sec2}, the solution $u$ is proved to be $T$-monotone in time ($u(t+T,x)\ge u(t,x)$) at large time $t$ and for all $T$ large enough, by using the decay of $u_0$ at infinity and some Gaussian estimates for the fundamental solution associated with the linear equation obtained from~\eqref{eq}. This $T$-monotonicity is then improved in Section~\ref{sec3} by compactness arguments in the region where $u$ is away from $0$ and from $1$ and then in Section~\ref{sec4} by using in particular the assumption~\eqref{f2} and by an application of the maximum principle in some sets which are defined recursively. In Section~\ref{sec5}, the monotonicity in time is proved in the region where $u$ is close to $1$ by using Harnack inequality applied to the function $1-u$ and some passage to the limit. In Section~\ref{sec6}, the $\tau$-monotonicity in time, for any $\tau>0$, is shown in the region where $u$ is close to $0$, by using some Gaussian estimates as well as some new quantitative inequalities for the fundamental solutions associated with families of linear equations similar to~\eqref{eq} (these new estimates are proved in Section~\ref{secpro1}). Section~\ref{sec7} is devoted to the proof of properties~\eqref{ut} and~\eqref{Teps} of Theorem~\ref{th1}. Lastly, Section~\ref{sec9} is concerned with the proof of Theorem~\ref{th2}, where explicit estimates of the Green function associated to some one-dimensional initial and boundary value problem in half-lines are used.
\begin{rem}{\rm Assume in this remark that, instead of the whole space $\mathbb{R}^N$, equation~\eqref{eq} is set on a smooth bounded domain $\Omega\subset\mathbb{R}^N$ with Neumann type boundary conditions $\mu(x)\cdot\nabla u(t,x)=0$ on $\partial\Omega$, where $\mu$ is a continuous vector field such that $\mu(x)\cdot\nu(x)>0$ for all $x\in\partial\Omega$ and $\nu$ denotes the outward normal vector field on $\partial\Omega$. Then it follows from the arguments used in the proof of Theorem~\ref{th1} (see especially Section~\ref{sec5}) that, under assumptions~\eqref{hypA} and~\eqref{f1}-\eqref{hypf}, any solution $u$ with a nontrivial initial condition $0\le u_0\le 1$, $u_0\not\equiv0$, $u_0\not\equiv1$, is increasing in time in the whole set $\overline{\Omega}$ at large times.}
\end{rem}
\subsubsection*{Modeling and background}
The question of the monotonicity of the solution for large times comes from a simple medical imagery question. A natural way to model a tumor is to introduce a function $\phi(t,x)$ describing the density of tumor cells. In some types of cancers, tumor cells migrate and multiply. They migrate randomly and multiply according to logistic type laws. The simplest model of tumor is therefore the classical KPP equation, as described by Murray~\cite{m}
$$
\phi_t - \nu \Delta \phi = \lambda \phi ( 1 - \phi),
$$
with positive coefficients $\nu$ and $\lambda$. Treatments like radiotherapy or chemotherapy induce the death of a part of tumor cells. A simple way to model a treatment at time $t_0$ is to say that $\phi$ is discontinuous at $t_0$ and
$$
\phi(t_0^+,x) = \beta\,\phi(t_0^-,x)
$$
for all $x$ and for some $0 < \beta < 1$.
Now the tumor size can be evaluated through medical imagery devices which detect tumor cells only if their density is large enough, above some threshold $\sigma>0$. The measured size of the tumor is therefore
$$
S(t) = \int_{\mathbb{R}^N} 1_{\phi(t,x) > \sigma} dx .
$$
A natural question is to know whether $S(t)$ can decrease just after a treatment, namely: can the observed size of a tumor decrease whereas its actual total mass $\int_{\mathbb{R}^N}\phi(t,x) dx$ increases~?
Let us detail the link between this question and the positivity of $\phi_t$. For this let~$\Omega(t) = \{ x\in\mathbb{R}^N;\ \phi(t,x) > \sigma \}$, and let $x_0 \in \partial \Omega(t_0^+)$. We have
$$\begin{array}{rcl}
\phi_t(t_0^+,x_0) & = & \nu \Delta \phi(t_0^+,x_0)+\lambda \phi(t_0^+,x_0) (1 - \phi(t_0^+,x_0))\vspace{3pt}\\
& = & \nu \beta \Delta \phi(t_0^-,x_0)+\lambda \beta \phi(t_0^-,x_0) (1 - \beta \phi(t_0^-,x_0))\vspace{3pt}\\
& = & \beta\phi_t(t_0^-,x_0) - \lambda \beta \phi(t_0^-,x_0) (1 - \phi(t_0^-,x_0)) + \lambda \beta \phi(t_0^-,x_0) (1 - \beta \phi(t_0^-,x_0))\vspace{3pt}\\
& = & \beta \phi_t(t_0^-,x_0) + \lambda \beta (1- \beta) \phi^2(t_0^-,x_0).\end{array}$$
The second term is positive, hence if $\phi_t(t_0^-,x) > 0$ everywhere on $\partial\Omega(t_0^+)$, this implies that~$\phi_t(t_0^+,\cdot)$ is positive on $\partial \Omega(t_0^+)$, hence that $S(t)$ is increasing just after $t_0$. The medical imagery question therefore reduces to the study of the sign of $\phi_t$.
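The scenario above is easy to explore numerically. Below is a minimal one-dimensional finite-difference sketch in Python/NumPy (all parameter values, the grid, and the treatment time are illustrative assumptions made for this sketch, not quantities taken from the analysis): it evolves the homogeneous KPP model, applies the treatment $\phi\mapsto\beta\phi$ at time $t_0$, and records the measured size $S(t)$.
\begin{verbatim}
import numpy as np

# Illustrative parameters (hypothetical choices for this sketch only).
nu, lam = 1.0, 1.0          # diffusion and reaction coefficients
beta, sigma = 0.5, 0.1      # treatment factor and detection threshold
t0, t_end = 10.0, 20.0      # treatment time and final time
L, nx = 200.0, 2001
dx = L / (nx - 1)
dt = 0.2 * dx**2 / nu       # explicit scheme: nu*dt/dx^2 <= 1/2

x = np.linspace(-L / 2, L / 2, nx)
phi = np.where(np.abs(x) < 2.0, 0.8, 0.0)   # compactly supported datum

def step(phi):
    # One explicit Euler step for phi_t = nu*phi_xx + lam*phi*(1-phi),
    # with homogeneous Neumann conditions at the (far away) boundary.
    lap = np.empty_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    lap[0] = 2.0 * (phi[1] - phi[0]) / dx**2
    lap[-1] = 2.0 * (phi[-2] - phi[-1]) / dx**2
    return phi + dt * (nu * lap + lam * phi * (1.0 - phi))

t, treated, history = 0.0, False, []
while t < t_end:
    phi = step(phi)
    t += dt
    if not treated and t >= t0:
        phi *= beta                        # phi(t0^+) = beta * phi(t0^-)
        treated = True
    history.append((t, dx * np.count_nonzero(phi > sigma)))  # S(t)
\end{verbatim}
Plotting the recorded pairs $(t,S(t))$ in such runs shows the measured size dropping at the treatment time and growing again shortly afterwards, in line with the sign analysis of $\phi_t$ above.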
\setcounter{equation}{0} \section{$T$-monotonicity in time}\label{sec2}
Throughout this section and the next ones, one assumes that the conditions~\eqref{hypA}-\eqref{oscf} are fulfilled and $u$ denotes a solution of~\eqref{eq} with initial condition $u_0$ having Gaussian decay at infinity as in~\eqref{u0gaussian} or satisfying~\eqref{u0}. The first step in the proof of Theorem~\ref{th1} consists in showing that~$u$ is $T$-monotone in time.
\begin{lem}\label{lem1}
There is $T>0$ such that
\begin{equation}\label{defT}
u(1+t,x)\ge u(1,x)\ \hbox{ for all }t\ge T\hbox{ and }x\in\mathbb{R}^N.
\end{equation}
\end{lem}
\noindent{\bf{Proof.}} First of all, as already emphasized, the strong maximum principle implies that~$u(1,x)<1$ for all $x\in\mathbb{R}^N$. Remember also that $u(1,\cdot)$ is actually of class $C^2(\mathbb{R}^N)$. The strategy consists in bounding $u(1,x)$ from above as $|x|\to+\infty$ by a function having the same decay as $u_0$, and then in showing that $u(1+t,\cdot)$ is above $u(1,\cdot)$ in $\mathbb{R}^N$ for all $t>0$ large enough. To do so, we will use some lower and upper bounds for the heat kernel associated with the linearized equation~\eqref{eqv} below, as well as the spreading property~\eqref{defc}. For the sake of clarity, the two cases -- Gaussian decay for $u_0$ or~\eqref{u0}-- will be treated separately.\par
{\it Case 1: Gaussian decay}. Assume here that $u_0$ has Gaussian decay at infinity, that is, there exists $\beta>0$ such that $u_0(x)=O(e^{-\beta|x|^2})$ as $|x|\to+\infty$. Since $u_0\in L^{\infty}(\mathbb{R}^N;[0,1])$, there is then~$C>0$ such that
$$0\le u_0(x)\le C\,e^{-\beta|x|^2}\ \hbox{ for a.e. }x\in\mathbb{R}^N.$$
Remember that the function $f$ is globally Lipschitz continuous in its second variable, uniformly with respect to $x\in\mathbb{R}^N$. Since $f(\cdot,0)=0$ in $\mathbb{R}^N$, let then $L>0$ be such that
\begin{equation}\label{defL}
f(x,s)\le Ls\ \hbox{ for all }(x,s)\in\mathbb{R}^N\times[0,1].
\end{equation}
The maximum principle yields
$$0\le u(1,x)\le e^L\,v(1,x)\ \hbox{ for all }x\in\mathbb{R}^N,$$
where $v$ denotes the solution of the Cauchy problem
\begin{equation}\label{eqv}\left\{\begin{array}{rcll}
v_t & = & \mbox{div}(A(x)\nabla v), & t>0,\ x\in\mathbb{R}^N,\vspace{3pt}\\
v(0,\cdot) & = & u_0.\end{array}\right.
\end{equation}
Therefore,
$$0\le u(1,x)\le C\,e^L\,\int_{\mathbb{R}^N}p(1,x;y)\,e^{-\beta|y|^2}\,dy\ \hbox{ for all }x\in\mathbb{R}^N,$$
where $p(t,x;y)$ denotes the heat kernel associated to the linear equation~\eqref{eqv}, that is, for every $y\in\mathbb{R}^N$, $p(\cdot,\cdot;y)$ solves~\eqref{eqv} with the Dirac distribution $\delta_y$ at $y$ as initial condition. It follows from the bounds of $p$ in~\cite{n} (see also~\cite{a,d,fs,fr} for related results) that there is a real number $K\ge1$ such that
\begin{equation}\label{boundsp}
\frac{e^{-K|x-y|^2/t}}{K\,t^{N/2}}\le p(t,x;y)\le\frac{K\,e^{-|x-y|^2/(Kt)}}{t^{N/2}}\ \hbox{ for all }t>0\hbox{ and }(x,y)\in\mathbb{R}^N\times\mathbb{R}^N.
\end{equation}
In particular,
$$0\le u(1,x)\le K\,C\,e^L\int_{\mathbb{R}^N}e^{-|x-y|^2/K-\beta|y|^2}dy\ \hbox{ for all }x\in\mathbb{R}^N.$$
Let $\eta\in(0,1)$ be such that $\eta<\beta\,K\,(1-\eta)$ and denote $\rho=\beta-\eta/(K(1-\eta))>0$. By writing
$$\begin{array}{rcl}
\displaystyle-\frac{|x-y|^2}{K}=-\frac{|x|^2}{K}+\frac{2(x\cdot y)}{K}-\frac{|y|^2}{K} & \le & \displaystyle-\frac{|x|^2}{K}+\frac{(1-\eta)|x|^2}{K}+\frac{|y|^2}{K(1-\eta)}-\frac{|y|^2}{K}\vspace{3pt}\\
& = & \displaystyle-\frac{\eta\,|x|^2}{K}+\frac{\eta\,|y|^2}{K\,(1-\eta)},\end{array}$$
it follows that
$$0\le u(1,x)\le K\,C\,e^L\,e^{-\eta|x|^2/K}\int_{\mathbb{R}^N}e^{-\rho|y|^2}dy\ \hbox{ for all }x\in\mathbb{R}^N.$$
To sum up, since the continuous function $u(1,\cdot)$ is less than $1$ in $\mathbb{R}^N$ by~\eqref{0u1}, one infers that there exist some real numbers $\theta\in(0,1)$ and $\omega>0$ such that
\begin{equation}\label{u1}
u(1,x)\le\min\big(\theta,\omega\,e^{-\eta|x|^2/K}\big)\ \hbox{ for all }x\in\mathbb{R}^N.
\end{equation}\par
Let us now show that $u(1+t,\cdot)$ is above $u(1,\cdot)$ in $\mathbb{R}^N$ for all $t>0$ large enough. Since $f$ is nonnegative in $\mathbb{R}^N\times[0,1]$, one infers from the maximum principle that $u(1+t,x)\ge v(1+t,x)$ for all $t\ge0$ and $x\in\mathbb{R}^N$, where $v$ solves~\eqref{eqv}. Since $u_0$ is nonnegative a.e. in $\mathbb{R}^N$ and non-trivial, there is $R>0$ such that
$$\sigma:=\int_{B(0,R)}u_0(y)\,dy>0$$
and
$$u(1+t,x)\ge\int_{\mathbb{R}^N}p(1+t,x;y)\,u_0(y)\,dy\ge\frac{1}{K\,(1+t)^{N/2}}\int_{B(0,R)}e^{-K|x-y|^2/(1+t)}\,u_0(y)\,dy$$
for all $t\ge0$ and $x\in\mathbb{R}^N$, from~\eqref{boundsp}. By writing
$$-\frac{K\,|x-y|^2}{1+t}\ge-\frac{2\,K\,|x|^2}{1+t}-\frac{2\,K\,|y|^2}{1+t}\ge-\frac{2\,K\,|x|^2}{1+t}-2KR^2$$
for all $t\ge0$, $x\in\mathbb{R}^N$ and $y\in B(0,R)$, one gets that
\begin{equation}\label{ut1}
u(1+t,x)\ge\frac{e^{-2KR^2}\,e^{-2K|x|^2/(1+t)}}{K\,(1+t)^{N/2}}\int_{B(0,R)}u_0(y)\,dy=\frac{\sigma\,e^{-2KR^2}\,e^{-2K|x|^2/(1+t)}}{K\,(1+t)^{N/2}}
\end{equation}
for all $t\ge0$ and $x\in\mathbb{R}^N$.\par
We finally show that~\eqref{defT} holds for some $T>0$ large enough. Assume not. Then there exist a sequence $(T_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that~$T_n\to+\infty$ as $n\to+\infty$ and $u(1+T_n,x_n)<u(1,x_n)$ for all $n\in\mathbb{N}$. Since $u(1,\cdot)\le\theta<1$ in~$\mathbb{R}^N$ and $\min_{|x|\le c t}u(t,x)\to1$ as $t\to+\infty$ with $c>0$ by~\eqref{defc}, it follows that $|x_n|\ge c(1+T_n)$ for $n$ large enough, while $u(1+T_n,x_n)<u(1,x_n)$ and~\eqref{u1}-\eqref{ut1} yield
$$\frac{\sigma\,e^{-2KR^2}\,e^{-2K|x_n|^2/(1+T_n)}}{K\,(1+T_n)^{N/2}}<\omega\,e^{-\eta|x_n|^2/K}\ \hbox{ for all }n\in\mathbb{N},$$
whence
$$\sigma\,K^{-1}\,\omega^{-1}\,e^{-2KR^2}\,(1+T_n)^{-N/2}<e^{-\eta|x_n|^2/K+2K|x_n|^2/(1+T_n)}\le e^{-\eta|x_n|^2/(2K)}\le e^{-\eta c^2(1+T_n)^2/(2K)}$$
for all $n$ large enough. This clearly leads to a contradiction. As a consequence, there is $T>0$ such that~\eqref{defT} holds.\par
{\it Case 2: assumption~\eqref{u0}}. Since $0\le u_0\le 1$ a.e. in $\mathbb{R}^N$, it follows from~\eqref{u0} that there is~$\delta'>0$ such that $u_0(x)\le\delta'\,e^{-\lambda|x|}$ for a.e. $x\in\mathbb{R}^N$. Therefore, with the same notations as in case~1, one infers that
$$u(1,x)\le\delta'\,e^L\int_{\mathbb{R}^N}p(1,x;y)\,e^{-\lambda|y|}\,dy\le K\,\delta'\,e^L\int_{\mathbb{R}^N}e^{-|x-y|^2/K-\lambda|y|}\,dy\ \hbox{ for all }x\in\mathbb{R}^N.$$
Hence,
$$u(1,x)\le K\,\delta'\,e^L\int_{\mathbb{R}^N}e^{-|y|^2/K-\lambda|x-y|}\,dy\le K\,\delta'\,e^L\,e^{-\lambda|x|}\int_{\mathbb{R}^N}e^{-|y|^2/K+\lambda|y|}\,dy\ \hbox{ for all }x\in\mathbb{R}^N$$
and, since $u(1,\cdot)$ is continuous and less than $1$ in $\mathbb{R}^N$, there are then $\theta'\in(0,1)$ and $\omega'>0$ such that
\begin{equation}\label{u1bis}
u(1,x)\le\min\big(\theta',\omega'\,e^{-\lambda|x|}\big)\ \hbox{ for all }x\in\mathbb{R}^N.
\end{equation}\par
On the other hand, assumption~\eqref{u0} yields the existence of $R>0$ such that $u_0(x)\ge\gamma\,e^{-\lambda|x|}$ for all $|x|\ge R$. It follows then from~\eqref{boundsp} and the nonnegativity of $f$ and $u_0$ that, for all $t\ge0$ and~$x\in\mathbb{R}^N$,
\begin{equation}\label{uT1bis}\begin{array}{rcl}
\displaystyle u(1+t,x)\ge\int_{\mathbb{R}^N}p(1+t,x;y)\,u_0(y)\,dy & \!\!\ge\!\! & \displaystyle\frac{\gamma}{K\,(1+t)^{N/2}}\int_{\mathbb{R}^N\backslash B(0,R)}e^{-K|x-y|^2/(1+t)-\lambda|y|}\,dy\vspace{3pt}\\
& \!\!=\!\! & \displaystyle\frac{\gamma}{K}\int_{\{z\in\mathbb{R}^N;\,|x-\sqrt{1+t}\,z|\ge R\}}e^{-K|z|^2-\lambda|x-\sqrt{1+t}\,z|}\,dz.\end{array}
\end{equation}\par
Assume now by contradiction that property~\eqref{defT} does not hold for any $T>0$. Then there exist a sequence $(T_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that $T_n\to+\infty$ as $n\to+\infty$ and $u(1+T_n,x_n)<u(1,x_n)$ for all $n\in\mathbb{N}$. Since $u(1,\cdot)\le\theta'<1$ in~$\mathbb{R}^N$ and $\min_{|x|\le c t}u(t,x)\to1$ as $t\to+\infty$ with $c>0$ by~\eqref{defc}, it follows that $|x_n|\ge c(1+T_n)$ for $n$ large enough, while $u(1+T_n,x_n)<u(1,x_n)$ and~\eqref{u1bis}-\eqref{uT1bis} yield
$$\omega'\,e^{-\lambda|x_n|}>\frac{\gamma}{K}\int_{\{z\in\mathbb{R}^N;\,|x_n-\sqrt{1+T_n}\,z|\ge R\}}e^{-K|z|^2-\lambda|x_n-\sqrt{1+T_n}\,z|}\,dz\ \hbox{ for all }n\in\mathbb{N}.$$
Since $\liminf_{n\to+\infty}|x_n|/T_n\ge c>0$, one has $B(x_n/|x_n|,1/2)\subset\{z\in\mathbb{R}^N;\,|x_n-\sqrt{1+T_n}\,z|\ge R\}$ for~$n$ large enough, whence
$$\omega'\,e^{-\lambda|x_n|}>\frac{\gamma}{K}\int_{B(x_n/|x_n|,1/2)}\!\!\!e^{-K|z|^2-\lambda|x_n-\sqrt{1+T_n}\,z|}\,dz\ge\frac{\gamma\,e^{-9K/4}}{K}\int_{B(0,1/2)}\!\!\!e^{-\lambda|x_n-\sqrt{1+T_n}\,(x_n/|x_n|+y)|}\,dy$$
for $n$ large enough. For $n$ large enough so that $\sqrt{1+T_n}\le|x_n|$, it follows that, for all~$y\in B(0,1/2)$,
$$\Big|x_n-\sqrt{1+T_n}\,\Big(\frac{x_n}{|x_n|}+y\Big)\Big|\le|x_n|\Big(1-\frac{\sqrt{1+T_n}}{|x_n|}\Big)+\frac{\sqrt{1+T_n}}{2}=|x_n|-\frac{\sqrt{1+T_n}}{2},$$
whence
$$\omega'\,e^{-\lambda|x_n|}>\frac{\gamma\,e^{-9K/4}\,e^{-\lambda|x_n|+\lambda\sqrt{1+T_n}/2}}{K}\int_{B(0,1/2)}dy$$
for $n$ large enough. This leads to a contradiction since $T_n\to+\infty$ as $n\to+\infty$.\par
As a conclusion,~\eqref{defT} holds when~\eqref{u0} is fulfilled and the proof of Lemma~\ref{lem1} is thereby complete.~\hfill$\Box$\break
From Lemma~\ref{lem1} and the maximum principle, the following corollary immediately holds.
\begin{cor}\label{cor1}
For every $t\ge1$, $T'\ge T$ and $x\in\mathbb{R}^N$, one has $u(t+T',x)\ge u(t,x)$.
\end{cor}
\begin{rem}{\rm Notice from the proof of Lemma~\ref{lem1} that time $1$ could be replaced by any positive time $t_0$ in the statement: namely, for any $t_0>0$, there exists $T_0>0$ such that~$u(t_0+t,x)\ge u(t_0,x)$ for all $t\ge T_0$ and $x\in\mathbb{R}^N$. However, this property does not hold in general with $t_0=0$. Indeed, if~$0\le u_0\le 1$ is continuous and $\max_{\mathbb{R}^N}u_0=1$, then $u_0$ can never be bounded from above in $\mathbb{R}^N$ by~$u(t,\cdot)$ for any $t>0$, since $u(t,x)<1$ for all $t>0$ and~$x\in\mathbb{R}^N$ by the strong parabolic maximum principle.}
\end{rem}
\begin{rem}{\rm The assumptions~\eqref{u0gaussian} or~\eqref{u0} were crucially used in the proof of Lemma~\ref{lem1}, in order to trap $u(1,x)$ between two comparable functions as $|x|\to+\infty$, the lower one giving rise to a solution which, after some time, is above the upper one at time $1$. The conclusion of Lemma~\ref{lem1} may not hold for more general initial conditions $u_0$, for instance if $\gamma\,e^{-\lambda_1|x|}\le u_0(x)\le\delta\,e^{-\lambda_2|x|}$ for $|x|$ large enough, with $\gamma,\,\delta>0$, $0<\lambda_2<\lambda_1$, $\liminf_{|x|\to+\infty}u_0(x)\,e^{\lambda_1|x|}<+\infty$ and $\limsup_{|x|\to+\infty}u_0(x)\,e^{\lambda_2|x|}>0$. For such initial conditions, more complex dynamics may occur in general, even for homogeneous one-dimensional equations, see e.g.~\cite{hn,y}.}
\end{rem}
\setcounter{equation}{0} \section{Improved monotonicity when $u(t,x)$ is away from $0$ and~$1$}\label{sec3}
In this section, we improve the $T$-monotonicity result stated in Corollary~\ref{cor1}, for the points~$(t,x)$ such that $0<a\le u(t,x)\le b<1$, where $0<a\le b<1$ are given. To do so, let us first define
\begin{equation}\label{deftau*}
\tau_*=\inf\big\{\tau>0;\ \exists\,t_0\ge 0,\ \forall\,\tau'\ge\tau,\ \forall\,t\ge t_0,\ \forall\,x\in\mathbb{R}^N,\ u(t+\tau',x)\ge u(t,x)\big\}.
\end{equation}
It follows from Corollary~\ref{cor1} that $0\le \tau_*\le T<+\infty$. Our goal is to show that $\tau_*=0$ (this goal will be achieved at the beginning of Section~\ref{sec7}).
\begin{lem}\label{lem2}
Let $a$ and $b$ be any two real numbers such that $0<a\le b<1$ and let $\tau$ be any real number such that $\tau\ge\tau_*$ and $\tau>0$. Then,
$$\liminf_{t\to+\infty,\ a\le u(t,x)\le b}\frac{u(t+\tau,x)}{u(t,x)}>1,$$
that is, there exist $t_0>0$ and $\delta>0$ such that, for all $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ with $a\le u(t,x)\le b$, there holds $u(t+\tau,x)\ge(1+\delta)\,u(t,x)$.
\end{lem}
\noindent{\bf{Proof.}} The proof shall use the definition of $\tau_*$ and the positivity of $\tau$ together with the spreading pro\-perties of solutions of equations obtained as finite or infinite spatial shifts of~\eqref{eq}. We argue by contradiction. So, assume that the conclusion of Lemma~\ref{lem2} does not hold. Then there are two sequences $(t_n)_{n\in\mathbb{N}}$ and $(\delta_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that $\delta_n\to0$ as $n\to+\infty$, $t_n\to+\infty$ as $n\to+\infty$ and
\begin{equation}\label{un}
a\le u(t_n,x_n)\le b\ \hbox{ and }\ u(t_n+\tau,x_n)<(1+\delta_n)\,u(t_n,x_n)\ \hbox{ for all }n\in\mathbb{N}.
\end{equation}\par
Shift the origin at the points $(t_n,x_n)$ and define
$$u_n(t,x)=u(t+t_n,x+x_n).$$
The functions $u_n$ are classical solutions of
\begin{equation}\label{equn}
(u_n)_t=\hbox{div}(A(x+x_n)\nabla u_n)+f(x+x_n,u_n),\ \ t>-t_n,\ x\in\mathbb{R}^N
\end{equation}
with $0<u_n(t,x)<1$ for all $(t,x)\in(-t_n,+\infty)\times\mathbb{R}^N$. From the Arzel\`a-Ascoli theorem, up to extraction of a subsequence, the functions $\mathbb{R}^N\times[0,1]\ni(x,s)\mapsto f(x+x_n,s)$ converge locally uniformly in $\mathbb{R}^N\times[0,1]$ to a continuous function $f_{\infty}:\mathbb{R}^N\times[0,1]\to\mathbb{R}$ which actually shares with $f$ the following properties: $f_{\infty}(\cdot,0)=f_{\infty}(\cdot,1)=0$, $f_{\infty}(x,1-u)/u$ is nonincreasing in $u\in(0,1]$, and $f_{\infty}$ satisfies~\eqref{hypf}, whence $\inf_{x\in\mathbb{R}^N}f_{\infty}(x,s)>0$ for every $s\in(0,1)$. Furthermore, up to extraction of another subsequence, the matrix fields $x\mapsto A(x+x_n)$ converge in $C^1_{loc}(\mathbb{R}^N)$ to a uniformly positive definite symmetric matrix field $A_{\infty}$.\footnote{As a matter of fact, since $u(t_n,x_n)\le b<1$ and $t_n\to+\infty$, then $|x_n|\to+\infty$ by~\eqref{defc}, whence $A_{\infty}$ is a constant matrix due to~\eqref{oscA}. However, the fact that $A_{\infty}$ is constant is not used in the proof of the present lemma.} Lastly, from standard parabolic estimates, the functions $u_n$ converge locally uniformly in $C^{1,2}_{t,x}(\mathbb{R}\times\mathbb{R}^N)$, up to extraction of another subsequence, to a classical solution~$u_{\infty}$ of
\begin{equation}\label{uinfty}
(u_{\infty})_t=\hbox{div}(A_{\infty}\nabla u_{\infty})+f_{\infty}(x,u_{\infty}),\ \ t\in\mathbb{R},\ x\in\mathbb{R}^N,
\end{equation}
such that $0\le u_{\infty}(t,x)\le 1$ for all $(t,x)\in\mathbb{R}\times\mathbb{R}^N$.\par
Now, for any $\varepsilon>0$, it follows from $\tau\ge\tau_*$ and from the definition of~$\tau_*$ in~\eqref{deftau*} that there is~$T_0>0$ such that
$$u(t+\tau+\varepsilon,x)\ge u(t,x)\ \hbox{ for all }(t,x)\in[T_0,+\infty)\times\mathbb{R}^N$$
(actually, if $\tau>\tau_*$, then one can also take $\varepsilon=0$). In particular, since $t_n\to+\infty$ as $n\to+\infty$, one infers that $u_{\infty}(t+\tau+\varepsilon,x)\ge u_{\infty}(t,x)$ for all $(t,x)\in\mathbb{R}\times\mathbb{R}^N$. Since $\varepsilon>0$ can be arbitrary, one gets that
$$u_{\infty}(t+\tau,x)\ge u_{\infty}(t,x)\ \hbox{ for all }(t,x)\in\mathbb{R}\times\mathbb{R}^N.$$
On the other hand, the inequalities~\eqref{un} and $\lim_{n\to+\infty}\delta_n=0$ imply that $a\le u_{\infty}(0,0)\le b$ and $u_{\infty}(\tau,0)\le u_{\infty}(0,0)$, whence $u_{\infty}(\tau,0)=u_{\infty}(0,0)$. As a consequence, the bounded functions~$u_{\infty}(\cdot+\tau,\cdot)$ and $u_{\infty}(\cdot,\cdot)$ are ordered in $\mathbb{R}\times\mathbb{R}^N$ and are equal at $(0,0)$. It follows from the strong maximum principle that
$$u_{\infty}(t+\tau,x)=u_{\infty}(t,x)$$
for all $(t,x)\in(-\infty,0]\times\mathbb{R}^N$, and then for all $(t,x)\in\mathbb{R}\times\mathbb{R}^N$ from the uniqueness of the Cauchy problem associated with~\eqref{uinfty}. Furthermore, $0<a\le u_{\infty}(0,0)\le b<1$ and~$0\le u_{\infty}\le1$ in~$\mathbb{R}\times\mathbb{R}^N$, whence $0<u_{\infty}<1$ in $\mathbb{R}\times\mathbb{R}^N$ from the strong maximum principle. Lastly, $u_{\infty}(t,x)\to1$ as~$t\to+\infty$ locally uniformly in $x\in\mathbb{R}^N$, as recalled in Section~\ref{intro} for $u$ and $f$, from the properties shared by $f_{\infty}$ with $f$. Thus, the limit $\mathbb{N}\ni m\to+\infty$ in $u_{\infty}(m\tau,0)=u_{\infty}(0,0)\le b<1$ leads to a contradiction, since $\tau>0$ by assumption. The proof of Lemma~\ref{lem2} is thereby complete.~\hfill$\Box$\break
From Lemma~\ref{lem2} and the uniform continuity of $u$ in, say, $[1,+\infty)\times\mathbb{R}^N$, the inequalities stated in Lemma~\ref{lem2} hold uniformly for some time-shifts in a neighborhood of $\tau_*$ if $\tau_*$ is positive, as the following corollary shows.
\begin{cor}\label{cor2}
Let $a$ and $b$ be any two real numbers such that $0<a\le b<1$. If one assumes that $\tau_*>0$, then there exist~$t_0>0$, $\delta>0$ and $0<\underline{\tau}<\tau_*<\overline{\tau}$ such that, for all $\tau\in[\underline{\tau},\overline{\tau}]$ and $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ with~$a\le u(t,x)\le b$, one has $u(t+\tau,x)\ge(1+\delta)\,u(t,x)$.
\end{cor}
\noindent{\bf{Proof.}} From Lemma~\ref{lem2} applied with $\tau=\tau_*$, there are $t_0>0$ and $\delta>0$ such that~$u(t+\tau_*,x)\ge(1+2\delta)\,u(t,x)$ for all $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ with $a\le u(t,x)\le b$. Choose~$\varepsilon\in(0,1)$ so that $(1-\varepsilon)(1+2\delta)\ge1+\delta$. Since $u$ is uniformly continuous in~$[t_0,+\infty)\times\mathbb{R}^N$ from standard parabolic estimates, there exist some real numbers~$\underline{\tau}$ and~$\overline{\tau}$ such that $0<\underline{\tau}<\tau_*<\overline{\tau}$ and
$$|u(t+\tau,x)-u(t+\tau_*,x)|\le\varepsilon\,(1+2\delta)\,a\ \hbox{ for all }\tau\in[\underline{\tau},\overline{\tau}]\hbox{ and for all }(t,x)\in[t_0,+\infty)\times\mathbb{R}^N.$$
Fix now any $\tau\in[\underline{\tau},\overline{\tau}]$ and any $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ with $a\le u(t,x)\le b$. One has~$u(t+\tau_*,x)\ge(1+2\delta)\,u(t,x)\ge(1+2\delta)\,a$, whence
$$u(t+\tau,x)\ge u(t+\tau_*,x)-\varepsilon\,(1+2\delta)\,a\ge(1-\varepsilon)\,u(t+\tau_*,x)\ge(1-\varepsilon)\,(1+2\delta)\,u(t,x)\ge(1+\delta)\,u(t,x).$$
This is the desired result and the proof is thereby complete.\hfill$\Box$
\setcounter{equation}{0} \section{Improved monotonicity when $u(t,x)$ is away from $0$}\label{sec4}
In this section, by using especially the fact that $f(x,1-u)/u$ is nonincreasing with respect to $u\in(0,1]$ for every $x\in\mathbb{R}^N$, we improve the $\tau$-monotonicity of $u$ (with $\tau>\tau_*$) in the region where $u(t,x)$ is close to $1$ (we recall that~$0<u(t,x)<1$ for all~$(t,x)\in(0,+\infty)\times\mathbb{R}^N$). Namely, we will prove the following lemma.
\begin{lem}\label{lem3}
Let $a$ and $\tau$ be any real numbers such that $0<a<1$ and $\tau>\tau_*$. Then,
$$\limsup_{t\to+\infty,\ u(t,x)\ge a}\frac{1-u(t+\tau,x)}{1-u(t,x)}<1,$$
that is, there exist $t_0>0$ and $\delta>0$ such that, for all $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ with $u(t,x)\ge a$, there holds $1-u(t+\tau,x)\le(1-\delta)\,(1-u(t,x))$.
\end{lem}
\noindent{\bf{Proof.}} First of all, since $\tau>\tau_*$, it follows from the definition of $\tau_*$ that there is $T_0>0$ such that
\begin{equation}\label{utau}
u(t+\tau,x)\ge u(t,x)\hbox{ for all }(t,x)\in[T_0,+\infty)\times\mathbb{R}^N.
\end{equation}
Notice that the strong maximum principle then yields $u(t+\tau,x)>u(t,x)$ in $(T_0,+\infty)\times\mathbb{R}^N$ (otherwise, one would have $u(t+\tau,x)=u(t,x)$ in $[T_0,T_1]\times\mathbb{R}^N$ with some $T_1>T_0$, whence~$u(t+\tau,x)=u(t,x)$ in $[T_0,+\infty)\times\mathbb{R}^N$ and $u(T_0+m\tau,0)=u(T_0,0)<1$ for all $m\in\mathbb{N}$, whereas~$u(t,0)\to1$ as $t\to+\infty$). Even if it means increasing $T_0$, one can then assume without loss of generality that
$$u(t+\tau,x)>u(t,x)\hbox{ for all }(t,x)\in[T_0,+\infty)\times\mathbb{R}^N,\ \ u(T_0,0)\ge a\ \hbox{ and }\ T_0>\tau.$$\par
Define now, for every $k\in\mathbb{N}=\{0,1,2,\cdots\}$,
$$E_k=\big\{x\in\mathbb{R}^N;\,\exists\,t\in[T_0+k\tau,T_0+(k+1)\tau],\,u(t,x)\ge a\big\}.$$
The set $E_0$ is not empty since $u(T_0,0)\ge a$. As a consequence,
$$u(T_0+k\tau,0)\ge u(T_0+(k-1)\tau,0)\ge\cdots\ge u(T_0,0)\ge a,$$
whence $0\in E_k$ for every $k\in\mathbb{N}$. Thanks to~\eqref{utau}, the same argument implies that $E_k\subset E_{k+1}$ for every $k\in\mathbb{N}$. Furthermore, each set $E_k$ is closed by continuity of $u$ in $[T_0,+\infty)\times\mathbb{R}^N$. Lastly, as done for the proof of~\eqref{u1} and~\eqref{u1bis} in Lemma~\ref{lem1}, one easily infers that $u(t,x)\to0$ as~$|x|\to+\infty$ locally uniformly in $t>0$, whence each set $E_k$ is bounded. Therefore, the sets~$E_k$ are a non-decreasing sequence of non-empty compact subsets of $\mathbb{R}^N$.\par
We are going to apply the maximum principle to the functions $1-u(t+\tau,x)$ and $1-u(t,x)$ in the sets $[T_0+k\tau,T_0+(k+1)\tau]\times E_k$ by induction with respect to $k$, in order to improve quantitatively the inequality $1-u(t+\tau,x)\le 1-u(t,x)$ in $[T_0+k\tau,T_0+(k+1)\tau]\times E_k$.\par
To do so, we first claim that the function $u$ is bounded from below by a positive constant uniformly in the sets $[T_0+k\tau,T_0+(k+1)\tau]\times E_k$, that is, there is $\underline{a}\in(0,a]$ such that
\begin{equation}\label{claim1}
\forall\,k\in\mathbb{N},\ \forall\,(t,x)\in[T_0+k\tau,T_0+(k+1)\tau]\times E_k,\ u(t,x)\ge\underline{a}>0.
\end{equation}
Indeed, otherwise, there exist a sequence $(k_n)_{n\in\mathbb{N}}$ of integers and, for each $n\in\mathbb{N}$, a time~$t_n\in[T_0+k_n\tau,T_0+(k_n+1)\tau]$ and a point $x_n\in E_{k_n}$, with $u(t_n,x_n)\to0$ as $n\to+\infty$. For each $n\in\mathbb{N}$, since $x_n\in E_{k_n}$, there is a time $t'_n\in[T_0+k_n\tau,T_0+(k_n+1)\tau]$ such that $u(t'_n,x_n)\ge a$. Consider the functions
$$(t,x)\mapsto u_n(t,x)=u(t+t_n,x+x_n),$$
which are defined in $(-t_n,+\infty)\times\mathbb{R}^N\supset(-T_0,+\infty)\times\mathbb{R}^N$ and solve~\eqref{equn}, together with $0\le u_n\le 1$. From the Arzel\`a-Ascoli theorem and standard parabolic estimates, up to extraction of a subsequence, these functions $u_n$ converge locally uniformly in $C^{1,2}_{t,x}((-T_0,+\infty)\times\mathbb{R}^N)$ to a solution $0\le u_{\infty}\le 1$ of an equation of the type~\eqref{uinfty} in $(-T_0,+\infty)\times\mathbb{R}^N$ (notice that the sequences $(t_n)_{n\in\mathbb{N}}$ and $(x_n)_{n\in\mathbb{N}}$ may be bounded, in which case the limiting equation satisfied by~$u_{\infty}$ may just be a finite spatial shift of~\eqref{eq}). Anyway, $u_n(0,0)=u(t_n,x_n)\to0$ as $n\to+\infty$, whence $u_{\infty}(0,0)=0$. Therefore, $u_{\infty}=0$ in $(-T_0,0]\times\mathbb{R}^N$ from the strong maximum principle, and $u_{\infty}=0$ in $(-T_0,+\infty)\times\mathbb{R}^N$ from the uniqueness of the Cauchy problem associated with~\eqref{uinfty}. On the other hand, $|t'_n-t_n|\le\tau<T_0$ for every $n\in\mathbb{N}$. Up to extraction of another subsequence, one can assume that $t'_n-t_n\to t'_{\infty}>-T_0$ as $n\to+\infty$. Since $u_n(t'_n-t_n,0)=u(t'_n,x_n)\ge a$, one gets $u_{\infty}(t'_{\infty},0)\ge a>0$, which leads to a contradiction. As a consequence, the claim~\eqref{claim1} is proved.\par
The second claim is concerned with an upper bound of the values of $u$ on the boundaries~$\partial E_k$ of the sets $E_k$, on the time intervals $[T_0+k\tau,T_0+(k+1)\tau]$. Namely, we claim that there is a real number $b\in(0,1)$ such that
\begin{equation}\label{claim2}
\forall\,k\in\mathbb{N},\ \forall\,(t,x)\in[T_0+k\tau,T_0+(k+1)\tau]\times\partial E_k,\ u(t,x)\le b<1.
\end{equation}
Assume not. Then, there exist a sequence $(k_n)_{n\in\mathbb{N}}$ of integers and, for each $n\in\mathbb{N}$, a time~$t_n\in[T_0+k_n\tau,T_0+(k_n+1)\tau]$ and a point~$x_n\in\partial E_{k_n}$, with $u(t_n,x_n)\to1$ as $n\to+\infty$. For each~$n\in\mathbb{N}$, since $x_n\in\partial E_{k_n}\subset E_{k_n}$ and since $u(\cdot,x_n)$ is continuous on $[T_0+k_n\tau,T_0+(k_n+1)\tau]$, the definition of $E_{k_n}$ yields
$$\max_{[T_0+k_n\tau,T_0+(k_n+1)\tau]}u(\cdot,x_n)\ge a.$$
Furthermore, if $\min_{[T_0+k_n\tau,T_0+(k_n+1)\tau]}u(\cdot,x_n)>a$, then by uniform continuity of $u$ in $[T_0,+\infty)\times\mathbb{R}^N$ one would have $\min_{[T_0+k_n\tau,T_0+(k_n+1)\tau]}u(\cdot,x)>a$ for all $x$ in a neighborhood of~$x_n$ and~$x_n$ would then be an interior point of $E_{k_n}$. Therefore, $\min_{[T_0+k_n\tau,T_0+(k_n+1)\tau]}u(\cdot,x_n)\le a$ and there is a time~$t'_n\in[T_0+k_n\tau,T_0+(k_n+1)\tau]$ such that
$$u(t'_n,x_n)=a.$$
Now, as in the previous paragraph, the functions $(t,x)\mapsto u_n(t,x)=u(t+t_n,x+x_n)$ converge, up to extraction of a subsequence, locally uniformly in $C^{1,2}_{t,x}((-T_0,+\infty)\times\mathbb{R}^N)$ to a solution $0\le u_{\infty}\le 1$ of an equation of the type~\eqref{uinfty} in $(-T_0,+\infty)\times\mathbb{R}^N$. One has $u_{\infty}(0,0)=1$, whence $u_{\infty}=1$ in $(-T_0,0]\times\mathbb{R}^N$ and then in $(-T_0,+\infty)\times\mathbb{R}^N$. On the other hand, up to extraction of another subsequence, there holds $\lim_{n\to+\infty}(t'_n-t_n)=t'_{\infty}\in[-\tau,\tau]\subset(-T_0,+\infty)$ and $u_{\infty}(t'_{\infty},0)=a<1$. One has reached a contradiction, and the claim~\eqref{claim2} follows.\par
Similarly, we claim that there is a real number $\overline{b}\in(0,1)$ such that
\begin{equation}\label{claim3}
\forall\,k\in\mathbb{N},\ \forall\,x\in E_{k+1}\backslash E_k,\ u(T_0+(k+1)\tau,x)\le\overline{b}<1.
\end{equation}
Otherwise, there exist a sequence $(k_n)_{n\in\mathbb{N}}$ of integers and, for each $n\in\mathbb{N}$, a point $x_n\in E_{k_n+1}\backslash E_{k_n}$, with $u(T_0+(k_n+1)\tau,x_n)\to1$ as $n\to+\infty$. For each $n\in\mathbb{N}$, since $x_n\in E_{k_n+1}\backslash E_{k_n}$, there holds
$$\max_{[T_0+(k_n+1)\tau,T_0+(k_n+2)\tau]}u(\cdot,x_n)\ge a\ \hbox{ and }\ \max_{[T_0+k_n\tau,T_0+(k_n+1)\tau]}u(\cdot,x_n)<a,$$
whence there is a time $t_n\in[T_0+(k_n+1)\tau,T_0+(k_n+2)\tau]$ such that $u(t_n,x_n)=a$. Up to extraction of a subsequence, the functions
$$(t,x)\mapsto u_n(t,x)=u(t+T_0+(k_n+1)\tau,x+x_n)$$
converge locally uniformly in $C^{1,2}_{t,x}((-T_0-\tau,+\infty)\times\mathbb{R}^N)$ to a solution $0\le u_{\infty}\le 1$ of an equation of the type~\eqref{uinfty} in $(-T_0-\tau,+\infty)\times\mathbb{R}^N$. One has $u_{\infty}(0,0)=1$, whence $u_{\infty}=1$ in~$(-T_0-\tau,0]\times\mathbb{R}^N$ and then in $(-T_0-\tau,+\infty)\times\mathbb{R}^N$. On the other hand, up to extraction of another subsequence, there holds $\lim_{n\to+\infty}(t_n-(T_0+(k_n+1)\tau))=t_{\infty}\in[0,\tau]\subset(-T_0-\tau,+\infty)$ and $u_{\infty}(t_{\infty},0)=a<1$. One has reached a contradiction, and the claim~\eqref{claim3} is proved.\par
Putting together~\eqref{claim1},~\eqref{claim2} and~\eqref{claim3}, one gets that
$$\left\{\begin{array}{ll}
\forall\,k\in\mathbb{N},\ \forall\,(t,x)\in[T_0+k\tau,T_0+(k+1)\tau]\times\partial E_k, & 0<\underline{a}\le u(t,x)\le b<1,\vspace{3pt}\\
\forall\,k\in\mathbb{N},\ \forall\,x\in E_{k+1}\backslash E_k, & 0<\underline{a}\le u(T_0+(k+1)\tau,x)\le\overline{b}<1.\end{array}\right.$$
It follows then from Lemma~\ref{lem2} applied once with $(\underline{a},b,\tau)$ and another time with $(\underline{a},\overline{b},\tau)$ (notice that $\tau\ge\tau_*$ and $\tau>0$ since here $\tau>\tau_*$) that there are $k_0\in\mathbb{N}$ and $\delta_0\in(0,+\infty)$ such that, for all $k\ge k_0$,
$$\left\{\begin{array}{ll}
\forall\,(t,x)\in[T_0+k\tau,T_0+(k+1)\tau]\times\partial E_k, & u(t+\tau,x)\ge(1+\delta_0)\,u(t,x),\vspace{3pt}\\
\forall\,x\in E_{k+1}\backslash E_k, & u(T_0+(k+2)\tau,x)\ge(1+\delta_0)\,u(T_0+(k+1)\tau,x).\end{array}\right.$$
Define
$$\delta=\delta_0\underline{a}>0.$$
One infers that
\begin{equation}\label{Ek}\begin{array}{l}
\forall\,k\ge k_0,\ \forall\,(t,x)\in[T_0+k\tau,T_0+(k+1)\tau]\times\partial E_k,\vspace{3pt}\\
\quad\begin{array}{rcl}
1-u(t+\tau,x)\le1-(1+\delta_0)\,u(t,x)\le1-u(t,x)-\delta_0\underline{a} & \!\!=\!\! & 1-u(t,x)-\delta\vspace{3pt}\\
& \!\!\le\!\! & (1-\delta)\,(1-u(t,x)),\end{array}\end{array}
\end{equation}
and, by arguing similarly with $x\in E_{k+1}\backslash E_k$, that
\begin{equation}\label{Ek1k}
\forall\,k\ge k_0,\ \forall\,x\in E_{k+1}\backslash E_k,\ \ 1-u(T_0+(k+2)\tau,x)\le(1-\delta)\,(1-u(T_0+(k+1)\tau,x)).
\end{equation}
On the other hand, since
$$1>u(t+\tau,x)>u(t,x)>0$$
for all $(t,x)\in[T_0+k_0\tau,T_0+(k_0+1)\tau]\times E_{k_0}$ and since both functions $u(\cdot+\tau,\cdot)$ and $u$ are continuous on this compact set $[T_0+k_0\tau,T_0+(k_0+1)\tau]\times E_{k_0}$, it follows that, even if it means decreasing $\delta>0$,
\begin{equation}\label{Ek0}
\forall\,(t,x)\in[T_0+k_0\tau,T_0+(k_0+1)\tau]\times E_{k_0},\ \ 1-u(t+\tau,x)\le(1-\delta)\,(1-u(t,x)).
\end{equation}\par
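(To make the preceding compactness argument explicit, here is one possible way to quantify it, spelled out for the reader's convenience: one may set
$$\delta':=1-\max_{(t,x)\in[T_0+k_0\tau,T_0+(k_0+1)\tau]\times E_{k_0}}\frac{1-u(t+\tau,x)}{1-u(t,x)},$$
which is well defined and belongs to $(0,1)$ since the above ratio is continuous and smaller than $1$ on this compact set; replacing $\delta$ by $\min(\delta,\delta')$ then yields~\eqref{Ek0}.)\par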
Finally, we claim by induction on $k$ that
\begin{equation}\label{claim4}
\forall\,k\ge k_0,\ \forall\,(t,x)\in[T_0+k\tau,T_0+(k+1)\tau]\times E_k,\ \ 1-u(t+\tau,x)\le(1-\delta)\,(1-u(t,x)).
\end{equation}
First of all, the property is true at $k=k_0$, by~\eqref{Ek0}. Assume now that the property is satisfied for some $k\in\mathbb{N}$ with $k\ge k_0$. In particular, by choosing $t=T_0+(k+1)\tau$, there holds
$$\forall\,x\in E_k,\ \ 1-u(T_0+(k+2)\tau,x)\le(1-\delta)\,(1-u(T_0+(k+1)\tau,x)).$$
This last inequality also holds for all $x\in E_{k+1}\backslash E_k$, by~\eqref{Ek1k}. Therefore,
\begin{equation}\label{Ek1}
\forall\,x\in E_{k+1},\ \ 1-u(T_0+(k+2)\tau,x)\le(1-\delta)\,(1-u(T_0+(k+1)\tau,x)).
\end{equation}
Furthermore, property~\eqref{Ek} yields
\begin{equation}\label{Ek1bis}
\forall\,(t,x)\in[T_0+(k+1)\tau,T_0+(k+2)\tau]\times\partial E_{k+1},\ \ 1-u(t+\tau,x)\le(1-\delta)\,(1-u(t,x)).
\end{equation}
Consider the functions
$$v(t,x)=1-u(t+\tau,x)\ \hbox{ and }\ \overline{v}(t,x)=(1-\delta)\,(1-u(t,x))$$
in the compact set
$$Q_k=[T_0+(k+1)\tau,T_0+(k+2)\tau]\times E_{k+1}.$$
The inequalities~\eqref{Ek1} and~\eqref{Ek1bis} mean that
$$v(t,x)\le\overline{v}(t,x)\ \hbox{ for all }(t,x)\,\in\,\{T_0+(k+1)\tau\}\!\times\!E_{k+1}\,\cup\,[T_0+(k+1)\tau,T_0+(k+2)\tau]\!\times\!\partial E_{k+1},$$
namely $v\le\overline{v}$ on the parabolic boundary of $Q_k$. Let us now check that $\overline{v}$ is a supersolution of the equation satisfied by $v$. On the one hand, the function $v$ satisfies $0\le v\le 1$ and obeys
$$v_t=\hbox{div}(A(x)\nabla v)+g(x,v)\ \hbox{ in }Q_k,$$
where $g$ is defined by $g(x,s)=-f(x,1-s)$ for all $(x,s)\in\mathbb{R}^N\times[0,1]$. On the other hand, the function $\overline{v}$ satisfies $0\le\overline{v}\le 1$ in $Q_k$ and
$$\begin{array}{rcl}
\overline{v}_t-\hbox{div}(A(x)\nabla\overline{v})-g(x,\overline{v}) & = & -(1-\delta)\,u_t+(1-\delta)\,\hbox{div}(A(x)\nabla u)-g(x,\overline{v})\vspace{3pt}\\
& = & -(1-\delta)\,f(x,u)-g(x,\overline{v})\vspace{3pt}\\
& = & (1-\delta)\,g(x,1-u)-g(x,(1-\delta)\,(1-u)).\end{array}$$
But the function $g(x,s)/s$ is nondecreasing with respect to $s\in(0,1]$, since by assumption the function $f(x,1-s)/s$ is nonincreasing with respect to $s\in(0,1]$. Hence,
$$g(x,(1-\delta)\,(1-u(t,x)))\le(1-\delta)\,g(x,1-u(t,x))\ \hbox{ in }Q_k$$
and
$$\overline{v}_t-\hbox{div}(A(x)\nabla\overline{v})-g(x,\overline{v})\ge0\ \hbox{ in }Q_k.$$
The parabolic maximum principle then implies that $v\le\overline{v}$ in $Q_k$. This means that property~\eqref{claim4} is satisfied with $k+1$ and finally that it holds by induction for all $k\ge k_0$.\par
As a conclusion, set $t_0=T_0+k_0\tau$ and consider any $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ such that $u(t,x)\ge a$. Let $k\in\mathbb{N}$, $k\ge k_0$ be such that $T_0+k\tau\le t\le T_0+(k+1)\tau$. Thus, $x\in E_k$ and property~\eqref{claim4} yields
$$1-u(t+\tau,x)\le(1-\delta)\,(1-u(t,x)).$$
The proof of Lemma~\ref{lem3} is thereby complete.\hfill$\Box$
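Let us also point out, as a side remark which will not be needed in the sequel, that the conclusion of Lemma~\ref{lem3} can be iterated: if $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$ and $u(t,x)\ge a$, then $u(t+\tau,x)\ge u(t,x)\ge a$ as well, so that an immediate induction gives
$$1-u(t+k\tau,x)\le(1-\delta)^k\,(1-u(t,x))\ \hbox{ for all }k\in\mathbb{N},$$
that is, $u(t+k\tau,x)$ converges to $1$ at least geometrically fast along the times $t+k\tau$.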
\setcounter{equation}{0} \section{Monotonicity in time when $u(t,x)$ is close to $1$}\label{sec5}
In this section, based on Lemma~\ref{lem3}, we will show that $u$ is actually increasing in time at large time when it is close to $1$.
\begin{lem}\label{lem4}
There exist $b\in(0,1)$ and $\widetilde{T}>0$ such that, for all $(t,x)\in[\widetilde{T},+\infty)\times\mathbb{R}^N$ with~$u(t,x)\ge b$, there holds $u_t(t,x)>0$.
\end{lem}
\noindent{\bf{Proof.}} As in the proof of Lemma~\ref{lem3}, denote $v=1-u$. The function $v$ satisfies $0<v<1$ in~$(0,+\infty)\times\mathbb{R}^N$ and
$$v_t=\hbox{div}(A(x)\nabla v)+g(x,v),\ \ t>0,\ x\in\mathbb{R}^N$$
with $g(x,s)=-f(x,1-s)$. Furthermore, by choosing, say, $\tau=\tau_*+1$, it follows from definition~\eqref{deftau*} that there is $t_0>1$ such that, for all $(t,x)\in[t_0,+\infty)\times\mathbb{R}^N$, $1>u(t+\tau,x)\ge u(t,x)$, that is,
\begin{equation}\label{vtau}
0<v(t+\tau,x)\le v(t,x)\ \hbox{ in }[t_0,+\infty)\times\mathbb{R}^N.
\end{equation}
From standard parabolic estimates and Harnack inequality, there are some positive constants $C_1$ and $C_2$ such that
$$\forall\,(t,x)\in[t_0,+\infty)\times\mathbb{R}^N,\ |v_t(t,x)|+|\nabla v(t,x)|\le C_1\max_{[t-1,t]\times\overline{B(x,1)}}v\le C_2\,v(t+\tau,x).$$
Together with~\eqref{vtau}, it follows that the fields $v_t/v$ and $\nabla v/v$ are bounded in $[t_0,+\infty)\times\mathbb{R}^N$. Define now
\begin{equation}\label{defM}
M=\limsup_{t\to+\infty,\,v(t,x)\to0}\frac{v_t(t,x)}{v(t,x)}.
\end{equation}
From the previous observations and the fact that $v(t,x)=1-u(t,x)\to0$ as $t\to+\infty$ locally uniformly in $x\in\mathbb{R}^N$, one infers that $M$ is a real number. To complete the proof of Lemma~\ref{lem4}, it will actually be sufficient to show that $M<0$.\par
To do so, owing to the definition of $M$, pick a sequence of points $(t_n,x_n)_{n\in\mathbb{N}}$ in $[t_0,+\infty)\times\mathbb{R}^N$ such that
$$t_n\to+\infty,\ \ v(t_n,x_n)\to0\ \hbox{ and }\ \frac{v_t(t_n,x_n)}{v(t_n,x_n)}\to M\ \hbox{ as }n\to+\infty.$$
Define
$$v_n(t,x)=\frac{v(t+t_n,x+x_n)}{v(t_n,x_n)}>0\ \hbox{ in }(-t_n,+\infty)\times\mathbb{R}^N.$$
Since the fields $v_t/v$ and $\nabla v/v$ are bounded in $[t_0,+\infty)\times\mathbb{R}^N$, one infers that the functions $v_n$ are bounded locally in $\mathbb{R}\times\mathbb{R}^N$, in the sense that, for any compact subset $K$ of $\mathbb{R}\times\mathbb{R}^N$, there is $n_K\in\mathbb{N}$ such that $v_n$ is well defined in $K$ for every $n\ge n_K$ and $\sup_{n\ge n_K}\|v_n\|_{L^{\infty}(K)}<+\infty$. Furthermore, the functions $v_n$ obey
\begin{equation}\label{eqvn}
(v_n)_t(t,x)=\hbox{div}(A(x+x_n)\nabla v_n(t,x))+\frac{g(x+x_n,v(t_n,x_n)\,v_n(t,x))}{v(t_n,x_n)},\ \ t>-t_n,\ x\in\mathbb{R}^N.
\end{equation}
Remember now that $f(\cdot,1)=0$ in $\mathbb{R}^N$, that the function $(x,s)\mapsto f(x,s)$ is Lipschitz continuous with respect to $s$ uniformly in $x\in\mathbb{R}^N$, of class $C^1$ with respect to $s$ in $\mathbb{R}^N\times[s_1,1]$ for some~$s_1\in(0,1)$, and that $f_s$ is uniformly continuous in $\mathbb{R}^N\times[s_1,1]$ and of class $C^{0,\alpha}$ with respect to $x$ uniformly in $s\in[s_1,1]$. Therefore, the function $g$ satisfies the same properties in~$\mathbb{R}^N\times[0,1-s_1]$. In particular, the functions
$$(t,x)\mapsto h_n(t,x):=\frac{g(x+x_n,v(t_n,x_n)\,v_n(t,x))}{v(t_n,x_n)}$$
are bounded locally in $\mathbb{R}\times\mathbb{R}^N$ and $\|h_n-g_s(x+x_n,0)\,v_n\|_{L^{\infty}(K)}\to0$ as $n\to+\infty$ for any compact set $K\subset\mathbb{R}\times\mathbb{R}^N$, from the mean value theorem. From standard parabolic estimates and Sobolev estimates, it follows that the functions $v_n$ are bounded locally in $W^{1,2,p}_{t,x}(\mathbb{R}\times\mathbb{R}^N)$ and are therefore bounded locally in $C^{0,\alpha}(\mathbb{R}\times\mathbb{R}^N)$. It is then straightforward to check that the functions $h_n$ are actually bounded locally in $C^{0,\alpha}(\mathbb{R}\times\mathbb{R}^N)$. Notice also that, up to extraction of a subsequence, the functions $g_s(\cdot+x_n,0)$ converge locally uniformly in $\mathbb{R}^N$ to a function $a\in C^{0,\alpha}(\mathbb{R}^N)$ and that the matrix fields $A(\cdot+x_n)$ converge locally uniformly in $\mathbb{R}^N$ to a uniformly positive definite symmetric matrix field $A_{\infty}\in C^{1,\alpha}(\mathbb{R}^N)$. As a consequence, again by standard parabolic estimates, the functions $v_n$ converge, up to extraction of a subsequence, locally uniformly in $C^{1,2}_{t,x}(\mathbb{R}\times\mathbb{R}^N)$, to a nonnegative classical solution $v_{\infty}$ of
\begin{equation}\label{eqvinfty}
(v_{\infty})_t=\hbox{div}(A_{\infty}(x)\nabla v_{\infty})+a(x)\,v_{\infty},\ \ t\in\mathbb{R},\ x\in\mathbb{R}^N.
\end{equation}\par
On the other hand, $v_n(0,0)=1$, whence $v_{\infty}(0,0)=1$ and $v_{\infty}>0$ in $\mathbb{R}\times\mathbb{R}^N$ from the strong maximum principle. Hence, the functions $(v_n)_t/v_n$ and $\nabla v_n/v_n$ converge locally uniformly in~$\mathbb{R}\times\mathbb{R}^N$ to $(v_{\infty})_t/v_{\infty}$ and $\nabla v_{\infty}/v_{\infty}$. In particular,
$$\frac{(v_{\infty})_t(0,0)}{v_{\infty}(0,0)}=(v_{\infty})_t(0,0)=M.$$
Moreover, since the fields $v_t/v$ and $\nabla v/v$ are bounded in $[t_0,+\infty)\times\mathbb{R}^N$ together with $\lim_{n\to+\infty}t_n=+\infty$ and $\lim_{n\to+\infty}v(t_n,x_n)=0$, it follows that the fields $(v_{\infty})_t/v_{\infty}$ and $\nabla v_{\infty}/v_{\infty}$ are bounded in $\mathbb{R}\times\mathbb{R}^N$ and that $v(t+t_n,x+x_n)\to0$ as $n\to+\infty$ locally uniformly in $\mathbb{R}\times\mathbb{R}^N$. Hence, owing to the definition of $M$ in~\eqref{defM}, one infers that
$$\frac{(v_{\infty})_t(t,x)}{v_{\infty}(t,x)}\le M\ \hbox{ for all }(t,x)\in\mathbb{R}\times\mathbb{R}^N.$$\par
Denote $z=(v_{\infty})_t/v_{\infty}$. Since the coefficients of~\eqref{eqvinfty} do not depend on $t$, it follows from standard parabolic estimates and the differentiation of~\eqref{eqvinfty} with respect to $t$ that $z$ is a classical solution of
\begin{equation}\label{eqz}
z_t=\hbox{div}(A_{\infty}\nabla z)+2\frac{\nabla v_{\infty}}{v_{\infty}}\cdot A_{\infty}\nabla z
\end{equation}
in $\mathbb{R}\times\mathbb{R}^N$. Furthermore, $z$ and $\nabla v_{\infty}/v_{\infty}$ are bounded in $\mathbb{R}\times\mathbb{R}^N$ and $z\le M$ in $\mathbb{R}\times\mathbb{R}^N$ with~$z(0,0)=M$. The strong parabolic maximum principle then implies that $z=M$ in~$(-\infty,0]\times\mathbb{R}^N$, and hence in $\mathbb{R}\times\mathbb{R}^N$ by uniqueness of the Cauchy problem associated with~\eqref{eqz}. In other words, $(v_{\infty})_t/v_{\infty}=M$ in $\mathbb{R}\times\mathbb{R}^N$. In particular, since $v_{\infty}(0,0)=1$, one gets that~$v_{\infty}(\tau,0)=e^{M\tau}$ (we recall that $\tau=\tau_*+1$).\par
Lastly, by Lemma~\ref{lem3} applied with $\tau=\tau_*+1$ and $a=1/2$, there are $T_0>0$ and $\delta>0$ such that
$$1-u(t+\tau,x)\le(1-\delta)\,(1-u(t,x))\ \hbox{ for all }(t,x)\in[T_0,+\infty)\times\mathbb{R}^N\hbox{ with }u(t,x)\ge\frac12.$$
Thus, for any given $(t,x)\in\mathbb{R}\times\mathbb{R}^N$, since $t+t_n\ge T_0$ and
$$u(t+t_n,x+x_n)=1-v(t+t_n,x+x_n)\ge\frac12$$
for all $n$ large enough, one infers that
$$1-u(t+t_n+\tau,x+x_n)\le(1-\delta)\,(1-u(t+t_n,x+x_n)),$$
whence $v_n(t+\tau,x)\le(1-\delta)\,v_n(t,x)$ for all $n$ large enough. Thus, $v_{\infty}(t+\tau,x)\le(1-\delta)\,v_{\infty}(t,x)$ for all $(t,x)\in\mathbb{R}\times\mathbb{R}^N$. Consequently, $e^{M\tau}=v_{\infty}(\tau,0)\le(1-\delta)v_{\infty}(0,0)=1-\delta<1$ and $M<0$.\par
As a conclusion, owing to the definition of $M$ in~\eqref{defM} and since $v=1-u$, the conclusion of Lemma~\ref{lem4} follows.\hfill$\Box$
\setcounter{equation}{0} \section{$\tau$-monotonicity in time when $u(t,x)$ is close to $0$}\label{sec6}
In this section, for any arbitrary $\tau>0$, we show the $\tau$-monotonicity in time at large time in the region where $u(t,x)$ is close to $0$. We shall use in particular the assumptions~\eqref{oscA} and~\eqref{oscf} on asymptotic homogeneity of the coefficients $A$ and $f_u(\cdot,0)$. The key step will be the following proposition, which is of independent interest.
\begin{pro}\label{pro1}
Let $\underline{\nu}$ and $\overline{\nu}$ be two fixed positive real numbers such that $0<\underline{\nu}\le\overline{\nu}$ and let~$\sigma\in(0,1)$ be fixed. Then, there exist $\tau>0$ and $\eta>0$ such that, for every $C^1(\mathbb{R}^N)$ symmetric matrix field $a=(a_{ij})_{1\le i,j\le N}:\mathbb{R}^N\to\mathbb{S}_N(\mathbb{R})$ with $\underline{\nu}I\le a\le\overline{\nu}I$ and $|\nabla a|\le\eta$ in $\mathbb{R}^N$ $($where~$|\nabla a(x)|=\max_{1\le i,j\le N}|\nabla a_{ij}(x)|)$, the fundamental solution $p(t,x;y)$ of
\begin{equation}\label{defp}\left\{\begin{array}{rcl}
p_t(t,x;y) & = & {\rm{div}}(a(x)\nabla p(t,x;y)),\ \ t>0,\ x\in\mathbb{R}^N,\vspace{3pt}\\
p(0,\cdot;y) & = & \delta_y\end{array}\right.
\end{equation}
satisfies
\begin{equation}\label{ptau}
p(\tau+1,x;0)\ge\sigma\,p(\tau,x;0)\ \hbox{ for all }x\in\mathbb{R}^N.
\end{equation}
\end{pro}
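Before going further, let us illustrate why some largeness condition on $\tau$ cannot be avoided in~\eqref{ptau}, even for the standard heat kernel (the case $a=I$, an elementary computation given here only as an illustration): one has
$$\partial_t\Big(\frac{e^{-|x|^2/(4t)}}{(4\pi t)^{N/2}}\Big)=\frac{e^{-|x|^2/(4t)}}{(4\pi t)^{N/2}}\Big(\frac{|x|^2}{4t^2}-\frac{N}{2t}\Big),$$
which is negative when $|x|^2<2Nt$. In other words, the kernel does decay in time in a large region around the origin, and~\eqref{ptau} compensates this decay by the factor $\sigma<1$, at the price of taking $\tau$ large.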
Let us postpone the proof of this proposition to Section~\ref{secpro1}. We continue in this section the proof of Theorem~\ref{th1} for the solution $u$ of~\eqref{eq}. The main result proved in this section is the following lemma.
\begin{lem}\label{lem5}
Let $\theta$ and $\theta'$ be any two real numbers such that $0<\theta\le\theta'$. Then there exist $T_0>0$ and $\varepsilon>0$ such that
$$\forall\,\tau\in[\theta,\theta'],\ \forall\,(t,x)\in[T_0,+\infty)\times\mathbb{R}^N,\ \ \big(u(t,x)\le\varepsilon\big)\Longrightarrow\big(u(t+\tau,x)>u(t,x)\big).$$
\end{lem}
\noindent{\bf{Proof.}} Let us argue by way of contradiction. So, assume that there exist a sequence $(\tau_n)_{n\in\mathbb{N}}$ in~$[\theta,\theta']$, a sequence $(t_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that
\begin{equation}\label{tntaun}
t_n\mathop{\longrightarrow}_{n\to+\infty}+\infty,\ u(t_n,x_n)\mathop{\longrightarrow}_{n\to+\infty}0\hbox{ and }u(t_n+\tau_n,x_n)\le u(t_n,x_n)\hbox{ for all }n\in\mathbb{N}.
\end{equation}
Up to extraction of a subsequence, one can assume without loss of generality that
$$\tau_n\to\tau_{\infty}\in[\theta,\theta']\subset(0,+\infty)\ \hbox{ as }n\to+\infty.$$
Notice that~\eqref{defc} and~\eqref{tntaun} yield $\lim_{n\to+\infty}|x_n|=+\infty$ and even $\liminf_{n\to+\infty}|x_n|/t_n\ge c>0$. Therefore, up to extraction of another subsequence, the matrix fields $x\mapsto A(x+x_n)$ converge in~$C^1_{loc}(\mathbb{R}^N)$ to a positive definite symmetric matrix field $A_{\infty}$, which turns out to be a constant matrix due to~\eqref{oscA}. Observe also that~\eqref{tntaun} and the nonnegativity of $u$ imply that~$u(t_n+\tau_n,x_n)\to0$ as $n\to+\infty$.\par
Define the functions
$$u_n(t,x)=\frac{u(t+t_n+\tau_n,x+x_n)}{u(t_n+\tau_n,x_n)},\ \ (t,x)\in(-t_n-\tau_n,+\infty)\times\mathbb{R}^N.$$
Each function $u_n$ obeys
$$(u_n)_t(t,x)=\hbox{div}(A(x+x_n)\nabla u_n(t,x))+\frac{f(x+x_n,u(t_n+\tau_n,x_n)\,u_n(t,x))}{u(t_n+\tau_n,x_n)}.$$
For each compact set $K\subset(-\infty,0)\times\mathbb{R}^N$, there is $n_K\in\mathbb{N}$ such that $K\subset(-t_n-\tau_n+1,+\infty)\times\mathbb{R}^N$ for all $n\!\ge\!n_K$ and it follows from Harnack inequality applied to $u$ that $\sup_{n\ge n_K}\!\|u_n\|_{L^{\infty}(K)}\!<\!+\!\infty$. Remember that $f(\cdot,0)=0$ in $\mathbb{R}^N$, that the function $(x,s)\mapsto f(x,s)$ is Lipschitz continuous with respect to $s$ uniformly in $x\in\mathbb{R}^N$, of class $C^1$ with respect to $s$ in $\mathbb{R}^N\times[0,s_0]$ with~$s_0\in(0,1)$, and that $f_s$ is uniformly continuous in $\mathbb{R}^N\times[0,s_0]$ and of class $C^{0,\alpha}$ with respect to~$x$ uniformly in~$s\in[0,s_0]$. Furthermore, up to extraction of another subsequence, the functions~$x\mapsto f_s(x+x_n,0)$ converge locally uniformly in $\mathbb{R}^N$ to a function $r\in C^{0,\alpha}(\mathbb{R}^N)$, which is actually a constant such that~$r\ge\mu>0$ by~\eqref{hypf} and~\eqref{oscf}. Therefore, as we did in the proof of Lemma~\ref{lem4} for the functions~$v_n$ satisfying~\eqref{eqvn}, we get that, up to extraction of a subsequence, the positive functions~$u_n$ converge locally uniformly in $C^{1,2}_{t,x}((-\infty,0)\times\mathbb{R}^N)$ to a nonnegative solution $u_{\infty}$ of
$$(u_{\infty})_t=\hbox{div}(A_{\infty}\nabla u_{\infty})+r\,u_{\infty}\ \hbox{ in }(-\infty,0)\times\mathbb{R}^N.$$
Since $u_n(-\tau_n,0)\ge1$ by~\eqref{tntaun} and $\tau_n\to\tau_{\infty}>0$ as $n\to+\infty$, we get that $u_{\infty}(-\tau_{\infty},0)\ge1$, whence $u_{\infty}>0$ in $(-\infty,0)\times\mathbb{R}^N$ from the strong maximum principle and the uniqueness of the Cauchy problem associated with that equation. Since $A_{\infty}$ is a constant symmetric positive definite matrix, there is an invertible matrix $M$ such that the function $\widetilde{u}_{\infty}$ defined in $(-\infty,0)\times\mathbb{R}^N$ by $\widetilde{u}_{\infty}(t,x)=u_{\infty}(t,Mx)$ satisfies $(\widetilde{u}_{\infty})_t=\Delta\widetilde{u}_{\infty}+r\,\widetilde{u}_{\infty}$ in $(-\infty,0)\times\mathbb{R}^N$. In other words, the function $(t,x)\mapsto e^{-rt}\widetilde{u}_{\infty}(t,x)$ is a positive solution of the heat equation in $(-\infty,0)\times\mathbb{R}^N$. Thus, by~\cite{w}, it is nondecreasing with respect to $t$ in $(-\infty,0)\times\mathbb{R}^N$. Therefore, the function $(t,x)\mapsto e^{-rt}u_{\infty}(t,x)$ is nondecreasing with respect to $t$ in $(-\infty,0)\times\mathbb{R}^N$.\par
Remember now that $\mu>0$ and that $\theta>0$ is given in the statement of Lemma~\ref{lem5}. Fix a real number $\sigma\in(0,1)$ close enough to $1$ so that
$$\sigma\,e^{\,\mu\,\theta/2}>1,$$
and let
$$\tau>0\hbox{ and }\eta>0$$
be as in Proposition~\ref{pro1} applied with this real number $\sigma$ and with $\underline{\nu}=\nu^{-1}$ and $\overline{\nu}=\nu$ (remember that~$\nu^{-1}I\le A\le\nu I$ in $\mathbb{R}^N$ with $\nu\ge1$). Finally, let $L>0$ be such that~\eqref{defL} holds and let us fix~$\varepsilon>0$ small enough so that
\begin{equation}\label{defepsilon}
0<\varepsilon\le\frac{\theta}{4},\ \ \sqrt{\varepsilon}\,|\nabla A|\le\eta\hbox{ in }\mathbb{R}^N\ \hbox{ and }\ e^{-\varepsilon L\tau}\,\sigma\,e^{\,\mu\,\theta/2}>1.
\end{equation}\par
Let us finally complete the proof of Lemma~\ref{lem5} by reaching a contradiction. Since the function~$t\mapsto e^{-rt}u_{\infty}(t,x)$ is nondecreasing in $(-\infty,0)$ for each given $x\in\mathbb{R}^N$ and since~$0<\varepsilon<2\varepsilon<\theta\le\tau_{\infty}$, one has $e^{r\varepsilon}u_{\infty}(-\varepsilon,0)\ge e^{r\tau_{\infty}}u_{\infty}(-\tau_{\infty},0)$. But $u_n\to u_{\infty}>0$ locally uniformly in $(-\infty,0)\times\mathbb{R}^N$ and $\tau_n\to\tau_{\infty}$ as $n\to+\infty$. As a consequence,
$$e^{r\varepsilon}u_n(-\varepsilon,0)\ge e^{r(\tau_n-\varepsilon)}u_n(-\tau_n,0)\ \hbox{ for all }n\hbox{ large enough},$$
whence
$$u(-\varepsilon+t_n+\tau_n,x_n)\ge e^{r(\tau_n-2\varepsilon)}u(t_n,x_n)\ge e^{\,\mu\,\theta/2}u(t_n,x_n)\ge e^{\,\mu\,\theta/2}u(t_n+\tau_n,x_n)$$
for all $n$ large enough, since $r\ge\mu>0$, $\tau_n-2\varepsilon\ge\theta-2\varepsilon\ge\theta/2>0$ and $u(t_n,x_n)\ge u(t_n+\tau_n,x_n)>0$ by~\eqref{tntaun}. Lastly, consider the parabolically rescaled functions
$$v_n(t,x)=u(\varepsilon\,t+t_n+\tau_n,\sqrt{\varepsilon}\,x+x_n),\ \ (t,x)\in(-\varepsilon^{-1}(t_n+\tau_n),+\infty)\times\mathbb{R}^N$$
and observe that
\begin{equation}\label{vn10}
v_n(-1,0)\ge e^{\,\mu\,\theta/2}v_n(0,0)\ \hbox{ for all }n\hbox{ large enough}.
\end{equation}
Furthermore, the functions $v_n$ obey
\begin{equation}\label{eqvnbis}
(v_n)_t=\hbox{div}(A_n(x)\nabla v_n)+\varepsilon\,f(\sqrt{\varepsilon}\,x+x_n,v_n)\ \hbox{ in }(-\varepsilon^{-1}(t_n+\tau_n),+\infty)\times\mathbb{R}^N,
\end{equation}
with $A_n(x)=A(\sqrt{\varepsilon}\,x+x_n)$, and they are positive in $(-\varepsilon^{-1}(t_n+\tau_n),+\infty)\times\mathbb{R}^N$. For each $n\in\mathbb{N}$ and $y\in\mathbb{R}^N$, call $p_n$ and $p_{n,y}$ the fundamental solutions of~\eqref{defp} with diffusion matrix fields $a=A_n$ and $a=A_n(\cdot+y)$, respectively. Remember that $\tau>0$ is given above from Proposition~\ref{pro1} and choose~$n\in\mathbb{N}$ large enough so that $-\varepsilon^{-1}(t_n+\tau_n)<-\tau-1$ and~\eqref{vn10} holds. Since the function $f$ is such that~$0\le f(x,s)\le Ls$ for all $(x,s)\in\mathbb{R}^N\times[0,1]$, it follows from~\eqref{eqvnbis} that
\begin{equation}\label{vn00}
v_n(0,0)\ge\int_{\mathbb{R}^N}p_n(\tau+1,0;y)\,v_n(-\tau-1,y)\,dy=\int_{\mathbb{R}^N}p_{n,y}(\tau+1,-y;0)\,v_n(-\tau-1,y)\,dy
\end{equation}
and
\begin{equation}\label{vn-10}
v_n(-1,0)\le e^{\varepsilon L\tau}\int_{\mathbb{R}^N}p_n(\tau,0;y)\,v_n(-\tau-1,y)\,dy=e^{\varepsilon L\tau}\int_{\mathbb{R}^N}p_{n,y}(\tau,-y;0)\,v_n(-\tau-1,y)\,dy.
\end{equation}
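(Let us briefly justify these two integral inequalities; this is a standard comparison argument, sketched here for convenience. Since $0\le\varepsilon f(\sqrt{\varepsilon}\,x+x_n,s)\le\varepsilon Ls$ for all $s\in[0,1]$, the function $v_n$ is a supersolution of the pure diffusion equation $w_t=\hbox{div}(A_n(x)\nabla w)$, which gives~\eqref{vn00}, while
$$\big(e^{-\varepsilon L(t+\tau+1)}v_n\big)_t-\hbox{div}\big(A_n(x)\nabla(e^{-\varepsilon L(t+\tau+1)}v_n)\big)=e^{-\varepsilon L(t+\tau+1)}\big(\varepsilon f(\sqrt{\varepsilon}\,x+x_n,v_n)-\varepsilon Lv_n\big)\le0,$$
so that $e^{-\varepsilon L(t+\tau+1)}v_n$ is a subsolution of the same equation; comparing it at time $t=-1$ with the solution emanating from $v_n(-\tau-1,\cdot)$ gives~\eqref{vn-10}.)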
On the other hand, for every $y\in\mathbb{R}^N$, the matrix field $A_{n,y}:=A_n(\cdot+y)$ is of class $C^1(\mathbb{R}^N)$; it satisfies $\nu^{-1}I\le A_{n,y}\le\nu I$ in $\mathbb{R}^N$ and~$|\nabla A_{n,y}(x)|=\sqrt{\varepsilon}\,|\nabla A(\sqrt{\varepsilon}\,(x+y)+x_n)|\le\eta$ for all $x\in\mathbb{R}^N$ by~\eqref{defepsilon}, where $\eta>0$ is given above from Proposition~\ref{pro1}. It follows then from the conclusion of Proposition~\ref{pro1} that
$$p_{n,y}(\tau+1,-y;0)\ge\sigma\,p_{n,y}(\tau,-y;0)\ \hbox{ for all }y\in\mathbb{R}^N,$$
whence $v_n(0,0)\ge e^{-\varepsilon L\tau}\sigma\,v_n(-1,0)$ by~\eqref{vn00} and~\eqref{vn-10}, and finally $v_n(0,0)\ge e^{-\varepsilon L\tau}\sigma\,e^{\,\mu\,\theta/2}v_n(0,0)$ by~\eqref{vn10}. The posi\-tivity of $v_n(0,0)=u(t_n+\tau_n,x_n)$ (since $t_n+\tau_n>0$) contradicts the last property in~\eqref{defepsilon}. The proof of Lemma~\ref{lem5} is thereby complete.\hfill$\Box$
\setcounter{equation}{0} \section{Conclusion of the proof of Theorem~\ref{th1}}\label{sec7}
Remember the definition of $\tau_*$ in~\eqref{deftau*} and that $0\le\tau_*<+\infty$. Before completing the proof of Theorem~\ref{th1}, we first show that $\tau_*=0$.
\begin{lem}\label{lem6}
There holds
$$\tau_*=0.$$
\end{lem}
\noindent{\bf{Proof.}} Assume by contradiction that $\tau_*>0$. It follows from the definition~\eqref{deftau*} of $\tau_*$ that there exist two sequences $(\tau_n)_{n\in\mathbb{N}}$ and $(t_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that
\begin{equation}\label{tntaunu}
\lim_{n\to+\infty}t_n=+\infty,\ \ \liminf_{n\to+\infty}\tau_n\ge\tau_*\ \hbox{ and }\ u(t_n+\tau_n,x_n)<u(t_n,x_n)\hbox{ for all }n\in\mathbb{N}.
\end{equation}
Actually, it turns out that $\tau_n\to\tau_*$ as $n\to+\infty$: otherwise, one would have $\limsup_{n\to+\infty}\tau_n>\tau_*$, which together with $\lim_{n\to+\infty}t_n=+\infty$ would contradict the definition of $\tau_*$. The real numbers $u(t_n,x_n)$ take values in $(0,1)$. Thus, up to extraction of a subsequence, three cases can occur.\par
{\it Case 1: $u(t_n,x_n)\to m\in(0,1)$ as $n\to+\infty$.} There are then two real numbers $a$ and $b$ such that~$0<a\le u(t_n,x_n)\le b<1$ for all $n\in\mathbb{N}$. Since $t_n\to+\infty$ and $\tau_n\to\tau_*>0$ as $n\to+\infty$, Corollary~\ref{cor2} yields the existence of $\delta>0$ such that $u(t_n+\tau_n,x_n)\ge(1+\delta)\,u(t_n,x_n)$ for all $n$ large enough. This is impossible by~\eqref{tntaunu}.\par
{\it Case 2: $u(t_n,x_n)\to1$ as $n\to+\infty$.} Therefore, $u(t_n,x_n)\ge b$ and $t_n\ge\widetilde{T}$ for all $n$ large enough, where $b\in(0,1)$ and $\widetilde{T}$ are given as in Lemma~\ref{lem4}. One infers then from Lemma~\ref{lem4} that, for all $n$ large enough, $u_t(t_n,x_n)>0$ and thus even that $u_t(t,x_n)>0$ for all $t\ge t_n$. In particular,~$u(t_n+\tau_n,x_n)>u(t_n,x_n)$ for all $n$ large enough, contradicting~\eqref{tntaunu}. Thus, Case~2 is ruled out too.\par
{\it Case 3: $u(t_n,x_n)\to0$ as $n\to+\infty$.} Since $\tau_n\to\tau_*>0$ and each $\tau_n$ is positive, there are two real numbers $0<\theta\le\theta'$ such that $\theta\le\tau_n\le\theta'$ for all $n\in\mathbb{N}$. Let then $T_0>0$ and~$\varepsilon>0$ be as in Lemma~\ref{lem5}. For all $n$ large enough, one has $t_n\ge T_0$ and $u(t_n,x_n)\le\varepsilon$, whence~$u(t_n+\tau_n,x_n)>u(t_n,x_n)$. This contradicts~\eqref{tntaunu}.\par
As a conclusion, all three cases are impossible. Therefore, $\tau_*=0$ and the proof of Lemma~\ref{lem6} is complete.\hfill$\Box$\break
Based on the previous lemma, we can now complete the proof of Theorem~\ref{th1}.\hfill\break
\noindent{\bf{Proof of Theorem~\ref{th1}.}} We first notice that, for any $t>0$,
\begin{equation}\label{ut0}
u_t(t,x)\to0\ \hbox{ as }|x|\to+\infty.
\end{equation}
Indeed, on the one hand, as in the proof of Lemma~\ref{lem2}, for any sequence $(x_n)_{n\in\mathbb{N}}$ in $\mathbb{R}^N$ with~$\lim_{n\to+\infty}|x_n|=+\infty$, the functions $u_n:(t,x)\mapsto u(t,x+x_n)$ converge locally uniformly in $C^{1,2}_{t,x}((0,+\infty)\times\mathbb{R}^N)$, up to extraction of a subsequence, to a solution $0\le u_{\infty}\le 1$ of an equation of the type
$$(u_{\infty})_t=\hbox{div}(A_{\infty}\nabla u_{\infty})+f_{\infty}(x,u_{\infty})\ \hbox{ in }(0,+\infty)\times\mathbb{R}^N$$
for some constant symmetric positive definite matrix $A_{\infty}$ and for some function $f_{\infty}$ satis\-fying the same properties as $f$. On the other hand, as already emphasized in the proof of Lemma~\ref{lem1},~$u(t,x)\to0$ as $|x|\to+\infty$, for every $t>0$. Therefore, $u_{\infty}=0$ in $(0,+\infty)\times\mathbb{R}^N$. Hence, by uniqueness, the whole sequence $(u_n)_{n\in\mathbb{N}}$ converges to $0$ locally uniformly in $C^{1,2}_{t,x}((0,+\infty)\times\mathbb{R}^N)$ and~$u_t(t,x+x_n)\to0$ as $n\to+\infty$ for every $(t,x)\in(0,+\infty)\times\mathbb{R}^N$. As a consequence,~\eqref{ut0} holds.\par
In particular, it follows that $\inf_{\mathbb{R}^N}u_t(t,\cdot)\le0$ for all $t>0$. We now prove~\eqref{ut}, and then~\eqref{Teps}.\footnote{We could also view~\eqref{ut} as a consequence of~\eqref{Teps} by observing that $u_t(t,x)\to0$ as $u(t,x)\to0$. But since the proof of~\eqref{ut} is easy even without~\eqref{Teps}, we choose to carry it out before~\eqref{Teps}.} Assume now by contradiction that~$\inf_{\mathbb{R}^N}u_t(t,\cdot)\not\to0$ as $t\to+\infty$. Since $u_t$ is bounded in~$(1,+\infty)\times\mathbb{R}^N$, it follows then that there are a sequence $(t_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that~$t_n\to+\infty$ and $\limsup_{n\to+\infty}u_t(t_n,x_n)\in(-\infty,0)$. As done in the previous paragraph or in the proof of Lemma~\ref{lem2}, the functions
$$(t,x)\mapsto u(t+t_n,x+x_n)$$
converge, up to extraction of a subsequence, locally uniformly in $C^{1,2}_{t,x}(\mathbb{R}\times\mathbb{R}^N)$ to a solution~$0\le u_{\infty}\le1$ of an equation of the type~\eqref{uinfty} with $(u_{\infty})_t(0,0)<0$. However, for any $\tau>0$, it follows from the definition~\eqref{deftau*} of $\tau_*$ together with $\lim_{n\to+\infty}t_n=+\infty$ and Lemma~\ref{lem6} ($\tau_*=0$) that, for any given~$(t,x)\in\mathbb{R}\times\mathbb{R}^N$, there holds
$$u(t+\tau+t_n,x+x_n)\ge u(t+t_n,x+x_n)\ \hbox{ for all }n\hbox{ large enough},$$
whence $u_{\infty}(t+\tau,x)\ge u_{\infty}(t,x)$. Therefore, since $\tau>0$ and $(t,x)\in\mathbb{R}\times\mathbb{R}^N$ are arbitrary, one gets that $(u_{\infty})_t\ge0$ in $\mathbb{R}\times\mathbb{R}^N$, which yields a contradiction. In other words, one has shown that~$\inf_{\mathbb{R}^N}u_t(t,\cdot)\to0$ as $t\to+\infty$, that is,~\eqref{ut}.\par
Finally, let $\varepsilon\in(0,1)$ be given and let us show~\eqref{Teps}, that is, the existence of $T_{\varepsilon}>0$ such that~$u_t(t,x)>0$ for every $(t,x)\in[T_{\varepsilon},+\infty)\times\mathbb{R}^N$ with $u(t,x)\ge\varepsilon$. Assume not. Then there are a sequence~$(t_n)_{n\in\mathbb{N}}$ of positive real numbers and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that~$t_n\to+\infty$ as~$n\to+\infty$, while $u(t_n,x_n)\ge\varepsilon$ and $u_t(t_n,x_n)\le0$ for all $n\in\mathbb{N}$. It follows then from Lemma~\ref{lem4} that~$1$ is not a limiting value of the sequence $(u(t_n,x_n))_{n\in\mathbb{N}}$. Therefore, up to extraction of a subsequence, one can assume without loss of generality that
$$u(t_n,x_n)\to m\in(0,1)\ \hbox{ as }n\to+\infty.$$
As done in the previous paragraph, one infers that the functions $(t,x)\mapsto u(t+t_n,x+x_n)$ converge, up to extraction of a subsequence, locally uniformly in $C^{1,2}_{t,x}(\mathbb{R}\times\mathbb{R}^N)$ to a solution $0\le u_{\infty}\le1$ of an equation of the type
$$(u_{\infty})_t=\hbox{div}(A_{\infty}\nabla u_{\infty})+f_{\infty}(x,u_{\infty}),\ \ t\in\mathbb{R},\ x\in\mathbb{R}^N$$
with $(u_{\infty})_t(t,x)\ge0$ for all $(t,x)\in\mathbb{R}\times\mathbb{R}^N$, whereas $u_{\infty}(0,0)=m\in(0,1)$ and $(u_{\infty})_t(0,0)\le0$. It follows from the strong maximum principle applied to the function $(u_{\infty})_t$ that $(u_{\infty})_t=0$ in~$(-\infty,0]\times\mathbb{R}^N$ and then in $\mathbb{R}\times\mathbb{R}^N$. Furthermore, the strong maximum principle applied to~$u_{\infty}$ also implies that $0<u_{\infty}<1$ in $\mathbb{R}\times\mathbb{R}^N$. As recalled in Section~\ref{intro}, one infers then from the properties shared by $f_{\infty}$ with $f$ that $u_{\infty}(t,x)\to1$ as $t\to+\infty$ for every $x\in\mathbb{R}^N$. This leads to a contradiction, since $(u_{\infty})_t=0$ in $\mathbb{R}\times\mathbb{R}^N$ and $u_{\infty}(0,0)=m<1$. Therefore,~\eqref{Teps} is shown and the proof of Theorem~\ref{th1} is thereby complete.\hfill$\Box$
\setcounter{equation}{0} \section{Proof of Proposition~\ref{pro1}}\label{secpro1}
Let $0<\underline{\nu}\le\overline{\nu}$ and $0<\sigma<1$ be fixed. When $a=D\,I$ with a real number $D\in[\underline{\nu},\overline{\nu}]$, then the conclusion~\eqref{ptau} holds immediately for $\tau>0$ large enough, from the explicit formula
$$p(t,x;0)=\frac{e^{-|x|^2/(4Dt)}}{(4\pi Dt)^{N/2}}.$$
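Indeed, with this expression (a one-line computation, spelled out here for convenience), the ratio of the two sides of~\eqref{ptau} is
$$\frac{p(\tau+1,x;0)}{p(\tau,x;0)}=\Big(\frac{\tau}{\tau+1}\Big)^{N/2}\exp\Big(\frac{|x|^2}{4D}\Big(\frac{1}{\tau}-\frac{1}{\tau+1}\Big)\Big)\ge\Big(\frac{\tau}{\tau+1}\Big)^{N/2},$$
uniformly with respect to $x\in\mathbb{R}^N$, and the right-hand side converges to $1$ as $\tau\to+\infty$; it is larger than $\sigma$ as soon as $\tau$ satisfies condition~\eqref{tausigma} below.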
In the general case where $a$ may not be constant, we will get the estimates~\eqref{ptau} by using uniform Gaussian estimates for large $x$ and small $t$, and by approximating locally, when $a$ is nearly locally constant, the solutions $p$ of~\eqref{defp} by explicit fundamental solutions of parabolic equations with constant coefficients.\par
More precisely, choose first any real number $\tau>0$ large enough so that
\begin{equation}\label{tausigma}
\Big(\frac{\tau}{\tau+1}\Big)^{N/2}>\sigma.
\end{equation}
We shall prove that the conclusion of Proposition~\ref{pro1} is fulfilled with any such real number $\tau$.\par
First of all, it follows from the Gaussian upper and lower bounds of the fundamental solutions of~\eqref{defp} in~\cite{n} that there exists a constant $K\ge 1$ such that, for every $L^{\infty}(\mathbb{R}^N)$ matrix field~$a:\mathbb{R}^N\to\mathbb{S}_N(\mathbb{R})$ with $\underline{\nu}I\le a\le\overline{\nu}I$ a.e. in $\mathbb{R}^N$, the fundamental solution $p$ of~\eqref{defp} satisfies
\begin{equation}\label{pbounds}
\frac{e^{-K\,|x|^2/t}}{K\,t^{N/2}}\le p(t,x;0)\le\frac{K\,e^{-|x|^2/(Kt)}}{t^{N/2}}\ \hbox{ for all }(t,x)\in(0,+\infty)\times\mathbb{R}^N.
\end{equation}\par
Therefore, we claim that there exists $\tau_0\in(0,\tau)$ small enough such that, for every $L^{\infty}(\mathbb{R}^N)$ matrix field $a:\mathbb{R}^N\to\mathbb{S}_N(\mathbb{R})$ with $\underline{\nu}I\le a\le\overline{\nu}I$ a.e. in $\mathbb{R}^N$, there holds
\begin{equation}\label{ptau0}
p(\tau_0+1,x;0)\ge\sigma\,p(\tau_0,x;0)\ \hbox{ for all }x\in\mathbb{R}^N\hbox{ with }|x|\ge1.
\end{equation}
Indeed, otherwise, by picking any sequence $(\tau_n)_{n\in\mathbb{N}}$ in $(0,\tau)$ such that $\lim_{n\to+\infty}\tau_n=0$, there would exist a sequence $(a_n)_{n\in\mathbb{N}}$ of bounded symmetric matrix fields with $\underline{\nu}I\le a_n\le\overline{\nu}I$ a.e. in $\mathbb{R}^N$ and a sequence $(x_n)_{n\in\mathbb{N}}$ of points in $\mathbb{R}^N$ such that
$$|x_n|\ge1\ \hbox{ and }\ p_n(\tau_n+1,x_n;0)<\sigma\,p_n(\tau_n,x_n;0)\ \hbox{ for all }n\in\mathbb{N},$$
where $p_n$ denotes the fundamental solution of~\eqref{defp} with $a=a_n$. It would then follow from~\eqref{pbounds} that
$$\frac{e^{-K|x_n|^2/(\tau_n+1)}}{K\,(\tau_n+1)^{N/2}}<\frac{\sigma\,K\,e^{-|x_n|^2/(K\tau_n)}}{\tau_n^{N/2}}$$
for all $n\in\mathbb{N}$, that is,
$$e^{|x_n|^2(1/(K\tau_n)-K/(\tau_n+1))}<\sigma\,K^2\,\Big(\frac{\tau_n+1}{\tau_n}\Big)^{N/2}.$$
But $|x_n|\ge1$ for all $n\in\mathbb{N}$ and $1/(K\tau_n)-K/(\tau_n+1)\ge1/(2K\tau_n)$ for all $n$ large enough, since~$\tau_n\to0^+$ as $n\to+\infty$. Therefore, $e^{1/(2K\tau_n)}<\sigma\,K^2\,(1+1/\tau_n)^{N/2}$ for all $n$ large enough, which leads to a contradiction by passing to the limit as $n\to+\infty$. As a consequence, there is $\tau_0\in(0,\tau)$ such that~\eqref{ptau0} holds for every $L^{\infty}(\mathbb{R}^N)$ matrix field~$a:\mathbb{R}^N\to\mathbb{S}_N(\mathbb{R})$ with $\underline{\nu}I\le a\le\overline{\nu}I$ a.e. in~$\mathbb{R}^N$.\par
Now, we fix a real number $R\ge1$ large enough such that
\begin{equation}\label{defR}
\sigma\,\Big(1+\frac{1}{\tau_0}\Big)^{N/2}e^{-R^2/(4\overline{\nu}\tau(\tau+1))}<1.
\end{equation}
Given this choice of $R\ge1$, we claim that there exists a real number $\eta>0$ such that, for every~$a\in C^1(\mathbb{R}^N;\mathbb{S}_N(\mathbb{R}))$ with $\underline{\nu}I\le a\le\overline{\nu}I$ and $|\nabla a|\le\eta$ in $\mathbb{R}^N$, the fundamental solution~$p(t,x;y)$ of~\eqref{defp} satisfies
\begin{equation}\label{boundaries}\left\{\begin{array}{rl}
p(\tau+1,x;0)\ge\sigma\,p(\tau,x;0) & \hbox{for all }|x|\le R,\vspace{3pt}\\
p(t+1,x;0)\ge\sigma\,p(t,x;0) & \hbox{for all }t\in[\tau_0,\tau]\hbox{ and }|x|=R.\end{array}\right.
\end{equation}
Indeed, otherwise, there exist a sequence $(a_n)_{n\in\mathbb{N}}$ in $C^1(\mathbb{R}^N;\mathbb{S}_N(\mathbb{R}))$ with $\underline{\nu}I\le a_n\le\overline{\nu}I$ in $\mathbb{R}^N$ and~$\lim_{n\to+\infty}\|\,|\nabla a_n|\,\|_{L^{\infty}(\mathbb{R}^N)}=0$, as well as a sequence of points $(t_n,x_n)_{n\in\mathbb{N}}$ in $(0,+\infty)\times\mathbb{R}^N$ such that
\begin{equation}\label{tnxn}
(t_n,x_n)\,\in\,\{\tau\}\!\times\!\overline{B(0,R)}\,\cup\,[\tau_0,\tau]\!\times\!\partial B(0,R)\ \hbox{ and }\ p_n(t_n+1,x_n;0)<\sigma\,p_n(t_n,x_n;0)
\end{equation}
for all $n\in\mathbb{N}$, where $p_n$ denotes the fundamental solution of~\eqref{defp} with $a=a_n$. Up to extraction of a subsequence, the matrix fields $a_n$ converge locally uniformly in $\mathbb{R}^N$ to a constant symmetric positive definite matrix $a_{\infty}$ such that $\underline{\nu}I\le a_{\infty}\le\overline{\nu}I$. Furthermore, the functions $(p_n)_{n\in\mathbb{N}}$ are bounded locally in $(0,+\infty)\times\mathbb{R}^N$ from the bounds~\eqref{pbounds}. From standard parabolic estimates, the functions $p_n(\cdot,\cdot;0)$ converge then locally uniformly in $(0,+\infty)\times\mathbb{R}^N$ to a classical solution~$p_{\infty}$ of
$$(p_{\infty})_t=\hbox{div}(a_{\infty}\nabla p_{\infty})\ \hbox{ in }(0,+\infty)\times\mathbb{R}^N$$
such that
\begin{equation}\label{pinfty0}
K^{-1}t^{-N/2}e^{-K|x|^2/t}\le p_{\infty}(t,x)\le K\,t^{-N/2}e^{-|x|^2/(Kt)}\ \hbox{ for all }(t,x)\in(0,+\infty)\times\mathbb{R}^N.
\end{equation}
Moreover, it follows from~\eqref{tnxn} that there exists a point $(t_{\infty},x_{\infty})$ such that
\begin{equation}\label{tinfty}
(t_{\infty},x_{\infty})\,\in\,\{\tau\}\!\times\!\overline{B(0,R)}\,\cup\,[\tau_0,\tau]\!\times\!\partial B(0,R)\ \hbox{ and }\ p_{\infty}(t_{\infty}+1,x_{\infty})\le\sigma\,p_{\infty}(t_{\infty},x_{\infty}).
\end{equation}
From the aforementioned Gaussian estimates, the function $p_{\infty}$ is therefore positive in $(0,+\infty)\times\mathbb{R}^N$ and, since $a_{\infty}\in\mathbb{S}_N(\mathbb{R})$ satisfies $\underline{\nu}I\le a_{\infty}\le\overline{\nu}I$, there is an orthogonal linear map $M:x\mapsto y$ such that the function $(t,y)\mapsto q(t,y)=p_{\infty}(t,M^{-1}y)=p_{\infty}(t,x)$ is a positive solution of
$$q_t=\sum_{1\le i\le N}\lambda_i\frac{\partial^2q}{\partial y_i^2}\ \hbox{ in }(0,+\infty)\times\mathbb{R}^N$$
for some real numbers $\lambda_1,\ldots,\lambda_N\in[\underline{\nu},\overline{\nu}]$. Hence, there is a nonnegative Radon measure $\lambda$ such that
$$p_{\infty}(t,x)=\frac{1}{(4\pi t)^{N/2}}\int_{\mathbb{R}^N}e^{-\sum_{1\le i\le N}|y_i/\sqrt{\lambda_i}-z_i|^2/(4t)}\,d\lambda(z)\ \hbox{ for all }(t,x)\in(0,+\infty)\times\mathbb{R}^N.$$
Since, by~\eqref{pinfty0}, $\int_{B(x_0,r)}p_{\infty}(t,x)\,dx\to0^+$ as $t\to0^+$ for every $x_0\in\mathbb{R}^N$ and $r>0$ such that~$0\not\in\overline{B(x_0,r)}$, it follows easily that $\lambda$ is supported on the singleton~$\{0\}$ and that there is~$\rho>0$ such that
\begin{equation}\label{pinfty}
p_{\infty}(t,x)=\frac{\rho\,e^{-\sum_{1\le i\le N}|y_i|^2/(4\lambda_it)}}{(4\pi t)^{N/2}}\ \hbox{ for all }(t,x)\in(0,+\infty)\times\mathbb{R}^N.
\end{equation}
Remember now~\eqref{tinfty}. On the one hand, if $t_{\infty}=\tau$ (and $|x_{\infty}|\le R$), then~\eqref{tinfty} and~\eqref{pinfty} imply that
$$\frac{\rho\,e^{-\sum_{1\le i\le N}|y_{\infty,i}|^2/(4\lambda_i(\tau+1))}}{(4\pi(\tau+1))^{N/2}}\le\sigma\times\frac{\rho\,e^{-\sum_{1\le i\le N}|y_{\infty,i}|^2/(4\lambda_i\tau)}}{(4\pi\tau)^{N/2}},$$
where $y_{\infty}=M\,x_{\infty}$. Since $\rho>0$, $\lambda_i>0$ for all $1\le i\le N$ and $0<\tau<\tau+1$, one gets that~$\tau^{N/2}\le\sigma(\tau+1)^{N/2}$, which contradicts~\eqref{tausigma}. On the other hand, if $|x_{\infty}|=R$ (and $t_{\infty}\in[\tau_0,\tau]$), then~\eqref{tinfty} and~\eqref{pinfty} yield, with the same notations as above,
$$\frac{\rho\,e^{-\sum_{1\le i\le N}|y_{\infty,i}|^2/(4\lambda_i(t_{\infty}+1))}}{(4\pi(t_{\infty}+1))^{N/2}}\le\sigma\times\frac{\rho\,e^{-\sum_{1\le i\le N}|y_{\infty,i}|^2/(4\lambda_it_{\infty})}}{(4\pi t_{\infty})^{N/2}},$$
whence
$$1\le\sigma\,\Big(1+\frac{1}{t_{\infty}}\Big)^{N/2}e^{-\sum_{1\le i\le N}|y_{\infty,i}|^2/(4\lambda_it_{\infty}(t_{\infty}+1))}.$$
Since $0<\tau_0\le t_{\infty}\le\tau$, $0<\underline{\nu}\le\lambda_i\le\overline{\nu}$ for all $1\le i\le N$ and $|y_{\infty}|=|M\,x_{\infty}|=|x_{\infty}|=R$, one infers that
$$1\le\sigma\,\Big(1+\frac{1}{\tau_0}\Big)^{N/2}e^{-R^2/(4\overline{\nu}\tau(\tau+1))},$$
which contradicts~\eqref{defR}.\par
As a consequence, there is $\eta>0$ such that~\eqref{boundaries} holds for every $a\in C^1(\mathbb{R}^N;\mathbb{S}_N(\mathbb{R}))$ with~$\underline{\nu}I\le a\le\overline{\nu}I$ and $|\nabla a|\le\eta$ in $\mathbb{R}^N$. Consider finally any such matrix field $a$ and let us show that the conclusion~\eqref{ptau} holds, that is $p(\tau+1,\cdot;0)\ge\sigma\,p(\tau,\cdot;0)$ in $\mathbb{R}^N$, where $p$ solves~\eqref{defp}. First of all, it follows from~\eqref{boundaries} that
\begin{equation}\label{ptau1}
p(\tau+1,\cdot;0)\ge\sigma\,p(\tau,\cdot;0)\ \hbox{ in }\overline{B(0,R)}.
\end{equation}
On the other hand,
$$p(\tau_0+1,\cdot;0)\ge\sigma\,p(\tau_0,\cdot;0)\ \hbox{ in }\mathbb{R}^N\backslash B(0,R)\subset\mathbb{R}^N\backslash B(0,1)$$
by~\eqref{ptau0} and $R\ge 1$. Lastly,
$$p(t+1,\cdot;0)\ge\sigma\,p(t,\cdot;0)\ \hbox{ on }\partial B(0,R)\ \hbox{ for all }t\in[\tau_0,\tau]$$
by~\eqref{boundaries}. Therefore, since $p(\cdot,\cdot;0)$ and $p(\cdot+1,\cdot;0)$ are two positive bounded solutions of the same linear parabolic equation in (at least) $[\tau_0,\tau]\times(\mathbb{R}^N\backslash B(0,R))$, it follows from the parabolic maximum principle that
$$p(\tau+1,\cdot;0)\ge\sigma\,p(\tau,\cdot;0)\ \hbox{ in }\mathbb{R}^N\backslash B(0,R).$$
Together with~\eqref{ptau1}, one concludes that $p(\tau+1,\cdot;0)\ge\sigma\,p(\tau,\cdot;0)$ in $\mathbb{R}^N$ and the proof of Proposition~\ref{pro1} is thereby complete.\hfill$\Box$
\setcounter{equation}{0} \section{Proof of Theorem~\ref{th2}}\label{sec9}
The key point in the proof of Theorem~\ref{th2} is the following result of independent interest on some monotonicity properties of the solutions of a boundary value problem in a half-line for a homogeneous linear one-dimensional reaction-diffusion equation.
\begin{pro} \label{propsigne}
Let $a$ and $\lambda$ be two positive real numbers and let $v$ be the solution of
\begin{equation} \label{lin1}
v_t = a\,v_{xx} + \lambda\,v,\ \ t>0,\ x>0,
\end{equation}
with boundary condition
\begin{equation} \label{lin2}
v(t,0) = g(t),\ \ t>0,
\end{equation}
and initial datum $v_0\in L^{\infty}(0,+\infty)\backslash\{0\}$. Assume that $v_0(x) \ge 0$ for a.e. $x>0$ and that $g$ is continuous, nonnegative and nondecreasing on $(0,+\infty)$. Then
$$
v_t(t,x) > 0
$$
provided $t\ge t_0$ and $x\ge\sqrt{8at}$, where $t_0=(2\lambda)^{-1}+e(e-1)^{-1}\lambda^{-1}$ is a positive constant depending only on $\lambda$.
\end{pro}
\noindent{\bf{Proof.}} Observe that $v$ can be written as
$$v=w+z,$$
where $w$ and $z$ solve~\eqref{lin1}, with $w(0,x)=v_0(x)$ and $z(0,x)=0$ for a.e. $x>0$, while $w(t,0)=0$, $z(t,0)=g(t)$ for all $t>0$.
Let us first consider the solution $w$ of the problem with homogeneous Dirichlet boundary condition. There holds, for all $t>0$ and $x>0$,
$$w(t,x)=\int_0^{+\infty}G(t,x,y)\,v_0(y)\,dy,$$
where the Green function $G$ is given by
$$
G(t,x,y) = {e^{\lambda t} \over \sqrt{4 \pi a t} } \Bigl[ e^{ - | x - y |^2 / (4at) } - e^{- | x + y |^2 / (4at)} \Bigr]
$$
for $t>0$, $x>0$ and $y>0$. The time derivative of $G$ satisfies
$$
G_t(t,x,y) = {e^{\lambda t} \over \sqrt{4 \pi a t^3} } e^{ - | x - y |^2 / (4at) } \Bigl[ \lambda t - {1 \over 2 } + { | x - y |^2 \over 4 a t } \Bigr]- {e^{\lambda t} \over \sqrt{4 \pi a t^3} } e^{ - | x + y |^2 / (4at) } \Bigl[ \lambda t - {1 \over 2} + { | x + y |^2 \over 4 a t } \Bigr].
$$
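Two elementary verifications are spelled out here for the reader's convenience. First, at $x=0$ the two Gaussian terms coincide, so that $G(t,0,y)=0$: the image term enforces the homogeneous Dirichlet condition, while the factor $e^{\lambda t}$ accounts for the zero-order term $\lambda v$ in~\eqref{lin1}. Second, the above expression of $G_t$ follows from differentiating each Gaussian term separately: with $s=|x-y|$ or $s=|x+y|$,
$$
\frac{\partial}{\partial t}\Big({e^{\lambda t} \over \sqrt{4 \pi a t}}\,e^{-s^2/(4at)}\Big)={e^{\lambda t} \over \sqrt{4 \pi a t}}\,e^{-s^2/(4at)}\Big(\lambda-\frac{1}{2t}+\frac{s^2}{4at^2}\Big)={e^{\lambda t} \over \sqrt{4 \pi a t^3}}\,e^{-s^2/(4at)}\Big(\lambda t-\frac12+\frac{s^2}{4at}\Big).
$$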
Let us also introduce the function $\phi:[0,+\infty)\to[0,+\infty)$ defined by
$$
\phi(x) = x\,e^{-x}.
$$
Since $\phi'(x)=(1-x)\,e^{-x}$, this function $\phi$ attains its maximum $\phi(1)=e^{-1}$ at $x=1$, is increasing in $[0,1]$ and decreasing in $[1,+\infty)$.
We are going to prove that $G_t(t,x,y)>0$ for all $y\in(0,+\infty)$ provided $x\ge\sqrt{8at}$ and $t\ge t_0$, where $t_0>0$ is given in the statement of Proposition~\ref{propsigne}. In this paragraph, we fix any $t\ge t_0$, $x\ge\sqrt{8at}$ and $y>0$, and we first observe that $|x + y|^2 / (4at) \ge 2$. Two cases can then occur, depending on whether $|x-y|^2/(4at)$ is larger or smaller than $1$. In the first case, namely if $|x - y |^2 /(4at)\ge 1$, then $1 \le |x - y |^2 / (4at) < |x + y |^2 / (4at)$ and
$$
\phi \Bigl( {|x - y |^2 \over 4at } \Bigr) > \phi \Bigl( {|x + y |^2 \over 4at } \Bigr),
$$
hence we get $G_t(t,x,y) > 0$ (we also use the fact that $t \ge t_0\ge(2 \lambda)^{-1}$). In the second case, one has $0\le|x - y |^2 / (4at) <1$ and
$$
\big( \lambda t - {1 \over 2 }\big) \bigl[ e^{ - | x - y |^2 / (4at) } - e^{ - | x + y |^2 / (4at) } \bigr]
\ge
\big( \lambda t - {1 \over 2 } \big) ( e^{-1} - e^{- 2} ) \ge e^{-1}
$$
since $t\ge t_0=(2\lambda)^{-1}+e(e-1)^{-1}\lambda^{-1}$, whence $\lambda t-\frac12\ge e\,(e-1)^{-1}$, whereas
$$
{ | x + y |^2 \over 4at}e^{ - | x + y |^2 / (4at) } < \phi(1)=e^{-1}.
$$
Hence, we also have $G_t(t,x,y) >0$ in the second case. As a consequence, $G_t(t,x,\cdot)>0$ in $(0,+\infty)$ for all $t\ge t_0$ and $x\ge\sqrt{8at}$, hence $w_t(t,x)>0$ for all $t\ge t_0$ and $x\ge\sqrt{8at}$, since $v_0$ is nonnegative and nontrivial in $(0,+\infty)$.
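Observe incidentally that the value of $t_0$ in the statement is precisely tailored to the second case above (an elementary computation included for clarity):
$$
\lambda t_0-\frac12=\lambda\Big(\frac{1}{2\lambda}+\frac{e}{(e-1)\,\lambda}\Big)-\frac12=\frac{e}{e-1},\ \hbox{ whence }\ \Big(\lambda t_0-\frac12\Big)(e^{-1}-e^{-2})=\frac{e}{e-1}\cdot\frac{e-1}{e^2}=e^{-1}.
$$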
Let us now turn to the solution $z$ of equation~(\ref{lin1}) with vanishing initial condition and with boundary condition~(\ref{lin2}). Since $g$ is nonnegative and bounded in any interval $(0,T)$ with $T\in(0,+\infty)$, it follows from the maximum principle that $z$ is nonnegative and bounded in $(0,T)\times(0,+\infty)$ too. Furthermore, for any $h>0$ and $t>0$, there holds $z(h,\cdot)\ge 0=z(0,\cdot)$ in $(0,+\infty)$ and $z(t+h,0)=g(t+h)\ge g(t)=z(t,0)$. Hence, the maximum principle yields $z(t+h,x)\ge z(t,x)$ for all $t>0$ and $x>0$. In other words, the function $z$ is nondecreasing with respect to $t$.
As a conclusion, since $v=w+z$ with $w_t(t,x)>0$ and $z_t(t,x)\ge0$ in this range, the solution $v$ of~(\ref{lin1}) with boundary condition~(\ref{lin2}) satisfies $v_t(t,x)> 0$ provided $t\ge t_0$ and $x\ge\sqrt{8at}$.\hfill$\Box$\break
From the previous proposition and a change of variable $x\to-x$, the following result immediately follows.
\begin{pro} \label{propsigne2}
Let $a$ and $\lambda$ be two positive real numbers and let $v$ be the solution of
\begin{equation} \label{lin1b}
v_t = a\,v_{xx} + \lambda\,v,\ \ t>0,\ x<0,
\end{equation}
with boundary condition~\eqref{lin2} and initial datum $v_0\in L^{\infty}(-\infty,0)\backslash\{0\}$. Assume that $v_0(x) \ge 0$ for a.e. $x<0$ and that $g$ is continuous, nonnegative and nondecreasing on $(0,+\infty)$. Then $v_t(t,x) > 0$ provided $t\ge t_0$ and $x\le-\sqrt{8at}$, where $t_0=(2\lambda)^{-1}+e(e-1)^{-1}\lambda^{-1}>0$.
\end{pro}
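To make the change of variable explicit (a one-line reduction, included for completeness): if $v$ solves~\eqref{lin1b} with boundary condition~\eqref{lin2} and initial datum $v_0$, then $\widetilde{v}(t,x):=v(t,-x)$ satisfies
$$\widetilde{v}_t=a\,\widetilde{v}_{xx}+\lambda\,\widetilde{v}\ \hbox{ for }t>0,\ x>0,\qquad\widetilde{v}(t,0)=g(t),\qquad\widetilde{v}(0,x)=v_0(-x),$$
so that Proposition~\ref{propsigne} yields $\widetilde{v}_t(t,x)>0$ for $t\ge t_0$ and $x\ge\sqrt{8at}$, that is, $v_t(t,x)>0$ for $t\ge t_0$ and $x\le-\sqrt{8at}$.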
With these two propositions in hand, let us now turn to the proof of Theorem~\ref{th2}.\hfill\break
\noindent{\bf{Proof of Theorem~\ref{th2}.}} Let $A$, $f$, $f^{\pm}$, $\lambda^{\pm}>0$, $\theta\in(0,1)$ and $u_0$ be as in the statement and let $R>0$ and $a^{\pm}\in(0,+\infty)$ be such that
\begin{equation}\label{defR2}
f(x,\cdot)=f^{\pm}\hbox{ in }[0,1]\ \hbox{ and }\ A(x)=a^{\pm}\ \hbox{ for all }|x|\ge R\hbox{ with }\pm x>0.
\end{equation}
Let us call
\begin{equation}\label{defT0}
T_0=\max\big((2\lambda^-)^{-1}+e(e-1)^{-1}(\lambda^-)^{-1},(2\lambda^+)^{-1}+e(e-1)^{-1}(\lambda^+)^{-1}\big)>0.
\end{equation}\par
Firstly, we claim that there exists $\eta\in(0,\theta)$ such that
\begin{equation}\label{claimeta}
\forall\,t\ge 1,\ \forall\,x\in\mathbb{R},\ \ \big(u(t,x)\le\eta\big)\ \Longrightarrow\ \big(u(s,x)\le\theta\hbox{ for all }s\in[t,t+T_0]\big).
\end{equation}
Assume not. Then there are a sequence $(t_n)_{n\in\mathbb{N}}$ in $[1,+\infty)$ and a sequence $(x_n)_{n\in\mathbb{N}}$ in $\mathbb{R}$ such that
$$u(t_n,x_n)\to0\hbox{ as }n\to+\infty\ \hbox{ and }\ \max_{[t_n,t_n+T_0]}u(\cdot,x_n)>\theta\hbox{ for all }n\in\mathbb{N}.$$
If the sequence $(x_n)_{n\in\mathbb{N}}$ were bounded, then the sequence $(t_n)_{n\in\mathbb{N}}$ would be bounded too due to~\eqref{defc}. Hence, up to extraction of a subsequence, $(t_n,x_n)$ would converge to a point $(t,x)$ in $[1,+\infty)\times\mathbb{R}$ with $u(t,x)=0$, which is impossible due to~\eqref{0u1}. Therefore, the sequence $(x_n)_{n\in\mathbb{N}}$ is unbounded. Up to extraction of a subsequence, let us then assume without loss of generality that $x_n\to+\infty$ as $n\to+\infty$ (the case $\lim_{n\to+\infty}x_n=-\infty$, up to extraction of a subsequence, can be handled similarly). From standard parabolic estimates, up to extraction of a subsequence, the functions $u_n:(t,x)\mapsto u(t+t_n,x+x_n)$ converge in $C^{1,2}_{t,x}$ locally in (at least) $(-1,+\infty)\times\mathbb{R}$ to a solution $v$ of
$$v_t=a^+v_{xx}+f^+(v)\ \hbox{ in }(-1,+\infty)\times\mathbb{R}$$
such that $v(0,0)=0$ and $\max_{[0,T_0]}v(\cdot,0)\ge\theta$. Furthermore, $0\le v\le1$ in $(-1,+\infty)\times\mathbb{R}$ by~\eqref{0u1}. Hence, $v=0$ in $(-1,0]\times\mathbb{R}$ from the strong maximum principle, and $v=0$ in $[0,+\infty)\times\mathbb{R}$ by the uniqueness of the solutions of the associated Cauchy problem. This contradicts the property $\max_{[0,T_0]}v(\cdot,0)\ge\theta\ (>0)$. Finally, the claim~\eqref{claimeta} has been proved.\par
Secondly, since the function $u$ is positive in $(0,+\infty)\times\mathbb{R}$ and the function $f$ is Lipschitz continuous with respect to $u\in[0,1]$ uniformly in $x\in\mathbb{R}$, it follows from Harnack inequality that there is a constant $C\in(0,1)$ such that, for all $(t,x)\in[1,+\infty)\times\mathbb{R}$,
\begin{equation}\label{harnack}
u(t+T_0,x\pm\sqrt{8a^{\pm}T_0})\ge C\,u(t,x).
\end{equation}
Let us denote
\begin{equation}\label{defeps3}
\varepsilon=C\,\eta
\end{equation}
and notice that $0<\varepsilon<\eta<\theta<1$. From Theorem~\ref{th1}, there is $T_{\varepsilon}>0$ such that~\eqref{Teps} holds, that is,
\begin{equation}\label{Teps2}
\forall\,(t,x)\in[T_{\varepsilon},+\infty)\times\mathbb{R},\ \ u(t,x)\ge\varepsilon\ \Longrightarrow\ u_t(t,x)>0.
\end{equation}
From~\eqref{defc}, since $\eta<\theta<1$, one can also assume without loss of generality that $T_{\varepsilon}\ge1$ and that
$$\min_{[-R,R]}u(T_{\varepsilon},\cdot)\ge\eta,$$
where $R>0$ is given in~\eqref{defR2}. Since $\eta>0$ and $u(T_{\varepsilon},\pm\infty)=0$ by~\eqref{conv0}, it follows from the continuity of $u(T_{\varepsilon},\cdot)$ that there are some real numbers $x^{\pm}$ such that
$$x^-\le-R<R\le x^+,\ \ u(T_{\varepsilon},x^{\pm})=\eta\ \hbox{ and }\ u(T_{\varepsilon},\cdot)\le\eta\hbox{ in }(-\infty,x^-]\cup[x^+,+\infty).$$\par
Let now $v$ be the solution of~\eqref{lin1} with $a=a^+$, $\lambda=\lambda^+$ and initial and boundary conditions given by
$$v_0=u(T_{\varepsilon},\cdot+x^+)\,(>0)\hbox{ in }(0,+\infty)\ \hbox{ and }\ v(t,0)=u(t+T_{\varepsilon},x^+)\,(>0)\hbox{ for all }t>0.$$
Since $u(T_{\varepsilon},x^+)=\eta>\varepsilon$, one infers from~\eqref{Teps2} that the continuous nonnegative function $t\mapsto u(t+T_{\varepsilon},x^+)$ is actually increasing in $[0,+\infty)$. Therefore, as $T_0\ge(2\lambda^+)^{-1}+e(e-1)^{-1}(\lambda^+)^{-1}$ by~\eqref{defT0}, it follows from Proposition~\ref{propsigne} that, in particular,
$$v_t(T_0,x)>0\hbox{ for all }x\ge\sqrt{8a^+T_0}.$$
On the other hand, since $T_{\varepsilon}\ge1$ and $u(T_{\varepsilon},\cdot)\le\eta$ in $[x^+,+\infty)$, it follows from~\eqref{claimeta} that $u\le\theta$ in $[T_{\varepsilon},T_{\varepsilon}+T_0]\times[x^+,+\infty)$. In that set, since $x^+\ge R$, there holds $f(x,u)=f^+(u)=\lambda^+u$ (and $A(x)=a^+$), by~\eqref{defR2} and the assumption on $f^+$. Therefore, $u(\cdot+T_{\varepsilon},\cdot+x^+)$ satisfies the same linear equation as $v$ in the set $[0,T_0]\times[0,+\infty)$, with the same initial and boundary conditions on $\{0\}\times[0,+\infty)$ and $[0,T_0]\times\{0\}$. Thus,
$$u(t+T_{\varepsilon},x+x^+)=v(t,x)\hbox{ for all }(t,x)\in[0,T_0]\times[0,+\infty)$$
and
\begin{equation}\label{monotone}
u_t(T_0+T_{\varepsilon},x)>0\hbox{ for all }x\ge x^++\sqrt{8a^+T_0}.
\end{equation}\par
Furthermore, since $T_{\varepsilon}\ge1$ and $u(T_{\varepsilon},x^+)=\eta$, it follows from~\eqref{harnack} and~\eqref{defeps3} that
$$u(T_{\varepsilon}+T_0,x^++\sqrt{8a^+T_0})\ge C\,\eta=\varepsilon.$$
Hence, $u_t(t,x^++\sqrt{8a^+T_0})>0$ for all $t\ge T_{\varepsilon}+T_0$ by~\eqref{Teps2}. Together with~\eqref{monotone} and the fact that the equation~\eqref{eq} does not depend on time, one concludes from the maximum principle applied to $u_t$ that
\begin{equation}\label{monotone2}
u_t(t,x)>0\hbox{ for all }(t,x)\in[T_{\varepsilon}+T_0,+\infty)\times[x^++\sqrt{8a^+T_0},+\infty).
\end{equation}
Similarly, by using Proposition~\ref{propsigne2} among other things, one can show that
\begin{equation}\label{monotone3}
u_t(t,x)>0\hbox{ for all }(t,x)\in[T_{\varepsilon}+T_0,+\infty)\times(-\infty,x^--\sqrt{8a^-T_0}].
\end{equation}\par
Finally, due to~\eqref{defc}, there is $\tau\ge T_{\varepsilon}+T_0$ such that $u(\tau,\cdot)\ge\varepsilon$ in $[x^--\sqrt{8a^-T_0},x^++\sqrt{8a^+T_0}]$. Hence, by~\eqref{Teps2}, there holds
$$u_t(t,x)>0\hbox{ for all }(t,x)\in[\tau,+\infty)\times[x^--\sqrt{8a^-T_0},x^++\sqrt{8a^+T_0}].$$
Together with~\eqref{monotone2} and~\eqref{monotone3}, one concludes that
$$u_t(t,x)>0\hbox{ for all }(t,x)\in[\tau,+\infty)\times\mathbb{R}$$
and the proof of Theorem~\ref{th2} is thereby complete.\hfill$\Box$
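\medskip
The large-time monotonicity established above is also easy to observe numerically. The following minimal Python sketch (all parameters are illustrative choices, and it treats only the spatially homogeneous model case $u_t=u_{xx}+u(1-u)$ rather than the general heterogeneous setting of the theorem) integrates the equation from compactly supported initial data by explicit finite differences and checks that $u_t>0$ wherever $u$ exceeds a small threshold at a large time.
\begin{verbatim}
# Minimal illustrative check (homogeneous model case, arbitrary grid sizes):
# integrate u_t = u_xx + u(1-u) from compactly supported data by explicit
# finite differences and test that u_t > 0 wherever u > eps at a large time.
import numpy as np

L, N = 100.0, 2001
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2                 # explicit scheme is stable for dt <= dx^2/2

u = np.where(np.abs(x) < 1.0, 0.5, 0.0)   # compactly supported initial datum

t, T = 0.0, 20.0
while t < T:
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2
    ut = uxx + u*(1.0 - u)       # u_t at the current time level
    u = u + dt*ut
    t += dt

eps = 1e-3
print("min u_t on {u > eps}:", ut[u > eps].min())   # expected to be positive
\end{verbatim}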
| {
"timestamp": "2016-06-02T02:06:46",
"yymm": "1606",
"arxiv_id": "1606.00176",
"language": "en",
"url": "https://arxiv.org/abs/1606.00176",
"abstract": "In this paper, we consider nonnegative solutions of spatially heterogeneous Fisher-KPP type reaction-diffusion equations in the whole space. Under some assumptions on the initial conditions, including in particular the case of compactly supported initial conditions, we show that, above any arbitrary positive value, the solution is increasing in time at large times. Furthermore, in the one-dimensional case, we prove that, if the equation is homogeneous outside a bounded interval and the reaction is linear around the zero state, then the solution is time-increasing in the whole line at large times. The question of the monotonicity in time is motivated by a medical imagery issue.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Large time monotonicity of solutions of reaction-diffusion equations in R^N",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517507633985,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7089606310648392
} |
https://arxiv.org/abs/1807.04634 | On the Decomposition of Forces | We show that any continuously differentiable force is decomposed into the sum of a Rayleigh force and a gyroscopic force. We also extend this result to piecewise continuously differentiable forces. Our result improves the result on the decomposition of forces in a book by David Merkin and further extends it to piecewise continuously differentiable forces. | \section{Introduction}
Generally, forces depend on both position and velocity variables. Among those forces, gyroscopic forces receive special attention since they do not contribute to any change in the total energy of a mechanical system. Another special type of force is a Rayleigh force, which can be represented as the derivative of a function with respect to the velocity variables \cite{Ro60,Wh89}. These two types of forces are important not only in mechanics but also in control. For example, both gyroscopic forces and dissipative Rayleigh forces are employed in feedback control design to stabilize mechanical systems \cite{Ch10:SIAM,NgChLa13}. In this paper we show that any force, dependent upon both position and velocity, is uniquely decomposed into the sum of a Rayleigh force and a gyroscopic force. Constructive formulas and examples for the decomposition are provided.
\section{Main Results}
\subsection{Decomposition of Continuously Differentiable Forces}
Let $Q$ be a configuration manifold of dimension $n$. We denote by $TQ$ the tangent bundle, or velocity phase space, of $Q$ and by $T^*Q$ the cotangent bundle, or momentum phase space, of $Q$. We use $q = (q^1, \ldots, q^n)$ for coordinates on $Q$, and $(q,v) = (q^1,\ldots, q^n, v^1, \ldots, v^n)$ for coordinates on $TQ$. Sometimes we simply write $v$ instead of $(q,v)$. The cotangent bundle $T^*Q$ is locally spanned by the coordinate one-forms $\{ \mathbf{d}q^1, \ldots, \mathbf{d}q^n\}$, where one can identify the set $\{ \mathbf{d}q^1, \ldots, \mathbf{d}q^n\}$ with the standard basis for $\mathbb R^n$ without loss of generality in this paper. The Einstein summation convention is employed such that repeated indices are implicitly summed over.
Hence, we have the following identification:
\[
a_i \mathbf{d}q^i = a_1 \mathbf{d} q^1 + \cdots + a_n \mathbf{d}q^n = \begin{bmatrix} a_1\\ \vdots \\ a_n \end{bmatrix}.
\]
A fiber-preserving map $F: TQ \rightarrow T^*Q$ is called a force map, or simply a force. A force $F$ is called gyroscopic if $\langle F(q,v), v\rangle = 0$ for all $(q,v) \in TQ$, where $\langle \, , \rangle$ is the canonical pairing between cotangent vectors in $T^*Q$ and tangent vectors in $TQ$. A force $F$ is called a Rayleigh force if there exists a function $R: TQ \rightarrow \mathbb R$ such that
$F = \frac{\partial R}{\partial v^i} \mathbf{d}q^i$ for all $(q,v)\in TQ$, where the function $R$ is called the Rayleigh function for the force $F$.
\begin{lemma}\label{rayleigh:lemma}
For any continuously differentiable force $F: TQ \rightarrow T^*Q$, there exists a differentiable function $R: TQ \rightarrow \mathbb R$ such that
\begin{equation}\label{power:equal}
\langle F (q,v), v\rangle = \frac{\partial R}{\partial v^i} v^i
\end{equation}
for all $(q,v) \in TQ$. Concretely, the function $R: TQ \rightarrow \mathbb R$ defined by
\begin{equation}\label{def:Pi}
R (q,v) = \int_0^1 \langle F(q,sv), v\rangle ds
\end{equation}
satisfies (\ref{power:equal}). Moreover, if there is another function $\tilde R : TQ \rightarrow \mathbb R$ such that
\begin{equation}\label{tilde:Pi}
\langle F (q,v), v\rangle = \frac{\partial \tilde R}{\partial v^i} v^i
\end{equation}
for all $(q,v) \in TQ$, then there exists a function $f: Q \rightarrow \mathbb R$ such that
\[
\tilde R (q,v)= R (q,v) + f (q)
\]
for all $(q,v) \in TQ$.
\begin{proof}
Adapting the proof of the Proposition on p.517 in \cite{MaHo93}, one can show that the function $R$ defined in (\ref{def:Pi}) is differentiable on $TQ$. For any $\tau \in \mathbb R$
\begin{align*}
R (q,\tau v) = \int_0^1 \langle F(q,s\tau v), \tau v\rangle ds = \int_0^\tau \langle F(q,sv), v\rangle ds.
\end{align*}
Hence,
\[
\frac{\partial R}{\partial v^i}v^i = \left . \frac{d }{d \tau}\right |_{\tau = 1} R(q,\tau v) = \langle F(q,v), v\rangle.
\]
Suppose that there is another differentiable function $\tilde R :TQ\rightarrow \mathbb R$ such that (\ref{tilde:Pi}) holds.
Then,
\begin{align*}
\tilde R (q,v) &= \tilde R(q,0) + \int_0^1 \frac{d}{ds}\tilde R(q,sv) ds\\
&= \tilde R (q,0) + \int_0^1 \frac{\partial \tilde R}{\partial v^i} (q,sv) v^ids\\
&= \tilde R (q,0) + \int_0^1 \langle F(q,s v), v\rangle ds\\
&=\tilde R (q,0) + R (q,v)
\end{align*}
where (\ref{tilde:Pi}) is used for the third equality. Therefore, the difference between $\tilde R$ and $R$ is a function on $Q$.
\end{proof}
\end{lemma}
The following theorem states that any continuously differentiable force can be expressed as the sum of a Rayleigh force and a gyroscopic force.
\begin{theorem}\label{theorem:decomposition:force}
For any continuously differentiable force $F: TQ \rightarrow T^*Q$, there exists a differentiable function $R: TQ \rightarrow \mathbb R$ and a gyroscopic force $G: TQ \rightarrow T^*Q$ such that
\begin{equation}\label{FRG}
F = \frac{\partial R}{\partial v^i}\mathbf{d}q^i + G.
\end{equation}
Moreover, the decomposition is unique.
\begin{proof}
Let $R $ be the function defined in (\ref{def:Pi}), and let $G = F - \frac{\partial R}{\partial v^i}\mathbf{d}q^i$. By Lemma~\ref{rayleigh:lemma}, it is easy to show that $G$ is gyroscopic. Suppose that there exists another differentiable function $\tilde R$ on $TQ$ and another gyroscopic force $\tilde G$ such that
\begin{equation}\label{FRG:tilde}
F = \frac{\partial \tilde R}{\partial v^i}\mathbf{d}q^i + \tilde G.
\end{equation}
Since both $G$ and $\tilde G$ are gyroscopic, by (\ref{FRG}) and (\ref{FRG:tilde})
\[
\frac{\partial R}{\partial v^i}v^i = \langle F(q,v), v \rangle = \frac{\partial \tilde R}{\partial v^i}v^i.
\]
Hence, by Lemma~\ref{rayleigh:lemma} there exists a function $f: Q\rightarrow \mathbb R$ such that $R (q,v) = f(q) + \tilde R(q,v)$. Thus, $\frac{\partial R}{\partial v^i}\mathbf{d}q^i = \frac{\partial \tilde R}{\partial v^i}\mathbf{d}q^i$ and $G = F - \frac{\partial R}{\partial v^i}\mathbf{d}q^i = F - \frac{\partial \tilde R}{\partial v^i}\mathbf{d}q^i = \tilde G$, which proves the uniqueness of the decomposition.
\end{proof}
\end{theorem}
Notice that Theorem \ref{theorem:decomposition:force} is an improvement of the exposition in Section 6.2 of \cite{Me97}. Our proof herein is more rigorous, and we discuss and prove the uniqueness of the decomposition as well.
Let us take an example. Consider a force
\[
F(q,v) = \begin{bmatrix}
q^1 v^2 \\ v^1v^2
\end{bmatrix}
\]
on $\mathbb R^2$, where $q = (q^1,q^2)$ and $v=(v^1,v^2)$. By (\ref{def:Pi})
\begin{align*}
R(q,v) &= \int_0^1 (s q^1v^1v^2 + s^2 v^1 (v^2)^2 ) ds = \frac{1}{2} q^1v^1v^2 + \frac{1}{3} v^1 (v^2)^2,\\
\frac{\partial R}{\partial v} &= \begin{bmatrix} \frac{\partial R}{\partial v^1} \\ \frac{\partial R}{\partial v^2}\end{bmatrix} = \begin{bmatrix} \frac{1}{2}q^1v^2 + \frac{1}{3}(v^2)^2 \\ \frac{1}{2} q^1v^1 + \frac{2}{3}v^1v^2\end{bmatrix},\\
G &= F - \frac{\partial R}{\partial v} = \begin{bmatrix} \frac{1}{2}q^1 v^2 - \frac{1}{3}(v^2)^2 \\ \frac{1}{3}v^1v^2 - \frac{1}{2}q^1v^1\end{bmatrix}
\end{align*}
such that
\[
F(q,v) = \begin{bmatrix} \frac{1}{2}q^1v^2 + \frac{1}{3}(v^2)^2 \\ \frac{1}{2} q^1v^1 + \frac{2}{3}v^1v^2\end{bmatrix} + \begin{bmatrix} \frac{1}{2}q^1 v^2 - \frac{1}{3}(v^2)^2 \\ \frac{1}{3}v^1v^2 - \frac{1}{2}q^1v^1\end{bmatrix},
\]
where the first term on the right-hand side is the Rayleigh part and the second the gyroscopic part of $F$.
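This decomposition can also be verified symbolically. The following minimal Python/sympy sketch (an illustrative computation; the variable names are ours) evaluates the constructive formula (\ref{def:Pi}) for this force and checks that the resulting $G$ is gyroscopic.
\begin{verbatim}
import sympy as sp

q1, v1, v2, s = sp.symbols('q1 v1 v2 s', real=True)
F = sp.Matrix([q1*v2, v1*v2])                 # the force of this example
v = sp.Matrix([v1, v2])

P = (F.subs({v1: s*v1, v2: s*v2}).T * v)[0]   # <F(q, s v), v>
R = sp.integrate(P, (s, 0, 1))                # the Rayleigh function
dRdv = sp.Matrix([sp.diff(R, v1), sp.diff(R, v2)])
G = sp.expand(F - dRdv)                       # candidate gyroscopic part

print(R)                            # q1*v1*v2/2 + v1*v2**2/3, as above
print(G.T)                          # matches the gyroscopic part above
print(sp.simplify((G.T * v)[0]))    # 0, i.e. <G(q,v), v> = 0
\end{verbatim}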
Let us take another example. Consider a force $F(q,v) = a(q)(v^1)^{i_1} \cdots (v^n)^{i_n} \mathbf{d}q^1$ on $\mathbb R^n$, where $i_k$'s are non-negative integers such that $\sum_{k=1}^n i_k \geq 0$. Then, by (\ref{def:Pi})
\[
R(q,v) = \frac{a(q)}{ (i_1 +1) + \cdots + i_n}(v^1)^{i_1+1} \cdots (v^n)^{i_n}.
\]
One can easily extend this computation to the case where $F$ is a force each component of which is a polynomial in $v$, since the left-hand side of (\ref{power:equal}) is linear in $F$ and the right-hand side of (\ref{power:equal}) is linear in $R$.
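As a quick sanity check of the monomial formula, the following sympy sketch (with a particular illustrative choice of exponents on $\mathbb R^2$) compares the formula with a direct evaluation of (\ref{def:Pi}).
\begin{verbatim}
import sympy as sp

q1, q2, v1, v2, s = sp.symbols('q1 q2 v1 v2 s', real=True)
a = sp.Function('a')(q1, q2)      # an arbitrary coefficient function
i1, i2 = 2, 3                     # an illustrative choice of exponents

P = a * (s*v1)**i1 * (s*v2)**i2 * v1          # <F(q, s v), v>
R = sp.integrate(P, (s, 0, 1))
expected = a / ((i1 + 1) + i2) * v1**(i1 + 1) * v2**i2
print(sp.simplify(R - expected))              # 0
\end{verbatim}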
We now give an example to show the role of a gyroscopic force as a coupling force. Consider the following forced 2-dimensional harmonic oscillator:
\begin{align*}
\ddot q^1 + q^1 &= -\dot q^1 + \epsilon (\dot q^2)^2,\\
\ddot q^2 + q^2 &= -\epsilon \dot q^1 \dot q^2,
\end{align*}
where $\epsilon$ is a constant parameter. One can easily show that the force vector on the right-hand side is decomposed as follows:
\[
F = \begin{bmatrix} -\dot q^1 \\ 0 \end{bmatrix} + \epsilon \begin{bmatrix}
(\dot q^2)^2 \\ - \dot q^1 \dot q^2
\end{bmatrix},
\]
where the first vector on the right-hand side is a Rayleigh force and the second a gyroscopic force. The Rayleigh force is dissipative and the total energy function of the system is given by
\[
E = \frac{1}{2} ( (q^1)^2 + (q^2)^2 + (\dot q^1)^2 + (\dot q^2)^2).
\]
If $\epsilon = 0$, then the $q^1$-dynamics and the $q^2$-dynamics get decoupled. As a result the $q^2$-dynamics becomes only Lyapunov stable, not asymptotically stable, while the $q^1$-dynamics is exponentially stable. However, if $\epsilon \neq 0$, then the two dynamics get coupled through the gyroscopic force, and one can apply LaSalle's invariance principle with the energy function $E$ as a Lyapunov function
to show that the origin is an asymptotically stable equilibrium point in the total dynamics. This shows that the gyroscopic force creates a coupling between the two sub-dynamics and propagates the partially dissipative force throughout the entire dynamics.
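A minimal numerical sketch of this coupling effect (with arbitrary illustrative parameter values; not part of the argument) integrates the system for $\epsilon=0$ and $\epsilon=0.5$ and compares the final energies.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, eps):
    q1, q2, p1, p2 = y               # p_i = dq^i/dt
    return [p1, p2,
            -q1 - p1 + eps*p2**2,    # Rayleigh + gyroscopic, 1st component
            -q2 - eps*p1*p2]         # gyroscopic, 2nd component

def energy(y):
    q1, q2, p1, p2 = y
    return 0.5*(q1**2 + q2**2 + p1**2 + p2**2)

y0 = [1.0, 1.0, 0.0, 0.0]
for eps in (0.0, 0.5):
    sol = solve_ivp(rhs, (0.0, 200.0), y0, args=(eps,),
                    rtol=1e-9, atol=1e-12)
    print(f"eps = {eps}: E(200) = {energy(sol.y[:, -1]):.2e}")
# Expected: E(200) stays near 0.5 when eps = 0 (the q2 mode keeps its
# energy), but is close to 0 when eps = 0.5.
\end{verbatim}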
\subsection{Decomposition of Piecewise Continuously Differentiable Forces}
We now study decomposition for a class of piecewise continuously differentiable forces. A good example is a Coulomb friction force such as $F = -\textup{sgn}(v)$. A submanifold $S$ of $TQ$ is called a star-shaped subbundle of $TQ$ if $ S = \bigcup_{q\in Q}S_q$ where for each $q\in Q$ the set $S_q$ is a star-shaped open subset of $T_qQ$, i.e., $S_q$ is an open subset of $T_qQ$ and $\lambda v_q \in S_q$ for all $v_q \in S_q$ and $0<\lambda\leq 1$.
\begin{theorem}\label{theorem:piecewise:C1:F}
Let $S$ be a star-shaped subbundle of $TQ$, and $F: TQ \rightarrow T^*Q$ be a force that is continuously differentiable on $S$. Let $x = (q,v) \in TQ$. Suppose that for each $x_0 \in S$ there exists a neighborhood $U$ of $x_0$ in $S$ such that the function $P: U \times (0,1] \rightarrow \mathbb R$ defined by $P((q,v),\tau ) = \langle F(q,\tau v), v\rangle$ satisfies the following:
\begin{itemize}
\item [1.] $P$ extends to be continuous on $U \times [0,1]$,
\item [2.] $\frac{\partial}{\partial x}P$ extends to be continuous on $U \times [0,1]$.
\end{itemize}
Then, there exists a differentiable function $R :S \rightarrow \mathbb R$, and a force $G: S \rightarrow T^*Q$ that is gyroscopic on $S$, i.e., $\langle G(q,v), v\rangle = 0$ for all $(q,v)\in S$, such that
\[
F = \frac{\partial R}{\partial v^i}\mathbf{d}q^i + G
\]
for all $(q,v) \in S$.
Moreover, the decomposition is unique.
\begin{proof}
By the hypothesis on $P(x,s)$, we may assume that $P$ and $\frac{\partial P}{\partial x}$ are continuous functions on $U\times [0,1]$. Then, adapting the Proposition on p.517 of \cite{MaHo93}, one can show that
\[
R(q,v) = \lim_{\epsilon \rightarrow 0^+}\int_\epsilon^1 \langle F(q, \tau v), v\rangle d\tau
\]
is a well-defined differentiable function on $S$. The rest of the proof follows from the proof of Theorem~\ref{theorem:decomposition:force}.
\end{proof}
\end{theorem}
Here is an example. Let $Q = \mathbb R^2$ and $F = \cos(v^2) \textup{sgn}(v^1) \mathbf{d}q^1 + |v^2|\mathbf{d}q^2$. It is easy to see that $S =\{(q, v) \in \mathbb R^2 \times \mathbb R^2\mid v^1 \neq 0, v^2 \neq 0\}$ is a star-shaped subbundle of $TQ$. Then, for $\tau \in (0,1]$ and $(q,v) \in S$
\begin{align*}
P((q,v),\tau):=\langle F(q,\tau v), v\rangle &= \cos(\tau v^2) \textup{sgn}(\tau v^1) v^1 + |\tau v^2|v^2\\
&= \cos(\tau v^2) |v^1| + \tau v^2 |v^2|,
\end{align*}
and
\begin{align*}
\frac{\partial P}{\partial q}((q,v),\tau) &=(0,0),\\
\frac{\partial P}{\partial v}((q,v),\tau ) &= ( \cos (\tau v^2) \textup{sgn}(v^1), -\tau \sin (\tau v^2)|v^1| + 2\tau |v^2|).
\end{align*}
It is easy to show that $P$, $\frac{\partial P}{\partial q}$ and $\frac{\partial P}{\partial v}$ extend to be continuous on $S \times [0,1]$, so the force decomposition is possible by Theorem \ref{theorem:piecewise:C1:F}. Indeed $F= \frac{\partial R}{\partial v^i}\mathbf{d}q^i + G$ where
\begin{align*}
R (q,v) &= \int_0^1(\cos( \tau v^2) |v^1| + \tau v^2 |v^2| ) d\tau = \frac{\sin (v^2)}{v^2}|v^1| + \frac{1}{2}v^2 |v^2|,\\
\frac{\partial R}{\partial v^i}\mathbf{d}q^i &= \frac{\sin(v^2)}{v^2}\textup{sgn}(v^1) \mathbf{d}q^1 + \left (\frac{\cos(v^2) v^2 - \sin(v^2)}{(v^2)^2}|v^1| + |v^2| \right ) \mathbf{d}q^2,\\
G &= \frac{\cos(v^2)v^2 - \sin (v^2)}{v^2}\textup{sgn}(v^1) \mathbf{d}q^1 - \frac{\cos(v^2) v^2 - \sin(v^2)}{(v^2)^2}|v^1|\mathbf{d}q^2.
\end{align*}
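As a quick consistency check, the following sympy sketch verifies this decomposition on the quadrant $v^1>0$, $v^2>0$ of $S$, where $\textup{sgn}(v^1)=1$ and $|v^2|=v^2$; the other quadrants are analogous. The computation is ours and purely illustrative.
\begin{verbatim}
import sympy as sp

v1, v2 = sp.symbols('v1 v2', positive=True)   # the quadrant v1 > 0, v2 > 0
s = sp.symbols('s', positive=True)

F = sp.Matrix([sp.cos(v2), v2])               # F on this quadrant
v = sp.Matrix([v1, v2])

P = (F.subs({v1: s*v1, v2: s*v2}).T * v)[0]   # <F(q, s v), v>
R = sp.integrate(P, (s, 0, 1))
dRdv = sp.Matrix([sp.diff(R, v1), sp.diff(R, v2)])
G = sp.simplify(F - dRdv)

print(sp.simplify(R - (sp.sin(v2)/v2 * v1 + v2**2/2)))  # 0, matching the text
print(sp.simplify((G.T * v)[0]))                        # 0: G is gyroscopic
\end{verbatim}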
\section{Conclusion}
We have shown that any continuously differentiable force can be uniquely expressed as the sum of a Rayleigh force and a gyroscopic force and extended this result to piecewise continuously differentiable forces. We have provided formulas for the decomposition and illustrated the formulas with examples.
\bibliographystyle{aipnum4-1}
| {
"timestamp": "2018-07-13T02:09:08",
"yymm": "1807",
"arxiv_id": "1807.04634",
"language": "en",
"url": "https://arxiv.org/abs/1807.04634",
"abstract": "We show that any continuously differentiable force is decomposed into the sum of a Rayleigh force and a gyroscopic force. We also extend this result to piecewise continuously differentiable forces. Our result improves the result on the decomposition of forces in a book by David Merkin and further extends it to piecewise continuously differentiable forces.",
"subjects": "Classical Physics (physics.class-ph); Systems and Control (eess.SY)",
"title": "On the Decomposition of Forces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517424466175,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7089606308493783
} |
https://arxiv.org/abs/1112.3413 | Potential scattering and the continuity of phase-shifts | Let $S(k)$ be the scattering matrix for a Schrödinger operator (Laplacian plus potential) on $\RR^n$ with compactly supported smooth potential. It is well known that $S(k)$ is unitary and that the spectrum of $S(k)$ accumulates on the unit circle only at 1; moreover, $S(k)$ depends analytically on $k$ and therefore its eigenvalues depend analytically on $k$ provided the values stay away from 1.We give examples of smooth, compactly supported potentials on $\RR^n$ for which (i) the scattering matrix $S(k)$ does not have 1 as an eigenvalue for any $k > 0$, and (ii) there exists $k_0 > 0$ such that there is an analytic eigenvalue branch $e^{2i\delta(k)}$ of S(k)$ converging to 1 as $k \downarrow k_0$. This shows that the eigenvalues of the scattering matrix, as a function of $k$, do not necessarily have continuous extensions to or across the value 1. In particular this shows that a `micro-Levinson theorem' for non-central potentials in $\RR^3$ claimed in a 1989 paper of R. Newton is incorrect. | \section{Introduction}
In this article, we consider scattering in $\RR^n$ due to a nonpositive potential function, which we call a potential well. We denote the potential $-V(x)$, $V \geq 0$, and assume for simplicity that $V$ is smooth and compactly supported. Recall that the Schr\"odinger operator $H = \Delta - V$, where $\Delta = - \sum \partial_{x_i}^2$, has absolutely continuous spectrum on $(0, \infty)$ and may have finitely many eigenvalues on the nonpositive real axis. The scattering matrix, $S(k)$, $k > 0$, can be defined in terms of the generalized eigenfunctions or scattering solutions for $H$. As is well known, for each smooth function $q_\inc$ on the sphere, there is a unique solution $u$ to $(H - k^2)u = 0$, with $u$ taking the form
$$
u = r^{-(n-1)/2} \Big( e^{-ikr} q_\inc(\omega) + e^{ikr} q_\out(-\omega) \Big) + O(r^{-(n+1)/2}),
$$
as $r = |x| \to \infty$ \cite{GST}. Here $q_\out \in C^\infty(S^{n-1})$. As a consequence of uniqueness, $q_\out$ is determined by $q_\inc$; the map $q_\inc \mapsto e^{i\pi(n-1)/2} q_\out$ is by definition the scattering matrix $S(k)$. The normalization factor $e^{i\pi(n-1)/2}$ is chosen so that this `stationary' definition of the scattering matrix agrees with time-dependent definitions (see e.g.\ \cite{RSIII} or \cite{Yafaev}), and is such that the scattering matrix for the zero potential is the identity. The scattering matrix $S(k)$ extends to a unitary map from $L^2(S^{n-1})$ to $L^2(S^{n-1})$ for each $k > 0$.
In this note, we are interested in the eigenvalues of the scattering matrix $S(k)$. As $S(k)$ is unitary, these lie on the unit circle, and they are conventionally denoted $e^{2i\delta_j(k)}$ where $\delta_j$ is real; the $\delta_j$ are called phase shifts. They are determined up to a multiple of $\pi$.
The scattering matrix $S(k)$ for compactly supported potentials takes the form $\Id + A(k)$, where $A(k)$ has a smooth kernel. It follows that $S(k) - \Id$ is a compact operator for each $k$, and therefore the spectrum of $S(k)$ is discrete except at $1$. Moreover, for nonpositive potentials, the spectrum only accumulates `from above', that is, from the upper half plane \cite[Theorem 1.7.9]{Yafaev}. The scattering amplitude $A(k)$ is analytic in $k$, which implies that the eigenprojections of $S(k)$ vary analytically with $k$, provided the eigenvalue stays away from $1$.
The question we address here is whether the eigenvalues can pass through $1$, or more precisely, whether a phase shift $\delta(k)$ that tends upward to $\pi$ as $k \to k_0$ can be extended continuously up to, and even past, $k = k_0$.
In the case that $V$ is spherically symmetric, the scattering
solutions take the form of a spherical harmonic times a function of
$r$, and ODE analysis establishes that the phase shifts are analytic
for \emph{all} $k > 0$. Furthermore, Levinson's theorem for central
potentials can be used to guarantee that eigenvalues of $S(k)$ do pass through $1$.
\medskip
\noindent \textbf{Levinson's theorem (central potentials).}
\textit{Given $V(r) \in C^{\infty}_c(\mathbb{R})$ and a spherical harmonic $\phi$, let $\alpha(k)$ be the eigenvalue of $\phi$ for the scattering matrix $S(k)$ of $H = \Delta - V(\absv{x})$, i.e.\ $S(k)\phi = \alpha(k)\phi$. Let $n$ be the dimension of the subspace of $L^2$ eigenvectors of $H$ of the form $u = a(r)\phi$. Then the counterclockwise winding number of $\alpha(k)$ as $k$ goes from $\infty$ to $0$ satisfies
\begin{equation}
- \frac{1}{2\pi i} \int_0^{\infty} \frac{\alpha'(k)}{\alpha(k)} dk = n + \nu,
\end{equation}
where $\nu = 1/2$ if $\phi \equiv 1$ and there is a half-bound state, and is $0$ otherwise.
}
\medskip
\noindent See Theorem XI.59 of \cite{RSIII} for a proof of this well-known fact in the case $\phi \equiv 1$, and equation (5.15) of \cite{N1960} for the general case. For a fixed, non-negative, spherically symmetric $V$, the number of bound states of $\Delta - \lambda V$ with angular part $\phi$ grows at least as fast as $c \sqrt{\lambda}$ for some constant $c > 0$ (Theorem XIII.9 of \cite{RSIIII}), so for sufficiently large $\lambda$ there are eigenvalues of the scattering matrix which pass continuously through $1$.
By contrast, our main result is that, for an arbitrary $C_c^\infty$ potential function, phase shifts \textit{cannot} necessarily be so continued: we give an explicit example of a potential well for which
\begin{itemize}
\item
the scattering matrix $S(k)$ does not have $1$ as an eigenvalue for any $k > 0$, and
\item
there exists $k_0 > 0$ and an eigenvalue branch $e^{2i\delta(k)}$ of $S(k)$ such that $\delta(k) \uparrow \pi$ as $k \downarrow k_0$.
\end{itemize}
Lest this seem bizarre, we mention that the corresponding phenomenon for obstacle scattering has already been observed by Eckmann-Pillet \cite{EP1995}.
In both cases, the source of the phenomenon is the same: for a compactly supported perturbation of $\RR^n$, if $1$ is an eigenvalue of $S(k)$ then there is a generalized eigenfunction that outside the perturbation agrees \emph{exactly} with a generalized eigenfunction of the free Laplacian (which has scattering matrix $S(k)$ equal to the identity, and hence has every eigenvalue $1$).
See Section \ref{sec:eval1} for an elementary (and well-known) proof of this fact. This is an extremely restrictive situation, and for certain perturbations one can show that it is not possible.
Our interest in the continuity of phase shifts through multiples of
$\pi$, or equivalently in eigenvalues of $S(k)$ through $1$, came from
reading a 1989 paper of R. Newton published in \emph{Annals of
Physics} in which, in particular, it is claimed that one can label
the phase shifts of the scattering matrix for any $C_c^\infty$
potential in $\RR^3$ (actually, Newton considers the larger class of
bounded potentials with exponential decay) in such a way that they are
continuous functions of $k \in (0, \infty)$. This is then used to
claim a `micro-Levinson theorem' relating phase shifts at $k=0$ to the
non-positive spectrum of $H$. But it is straightforward to show that
for any nonnegative $V \in C_c^\infty(\RR^3)$ and $\lambda$
sufficiently large, some phase shift of $- \lambda V$ will approach $\pi$ from below
as $k$ approaches some finite positive value $k_0$ from above (see
Section \ref{sec:monotone}), and so Newton's result would imply that the scattering matrix
of $-\lambda V$ at energy $k_{0}$ has $1$ as an eigenvalue. Our example shows that this is not true, and therefore shows that Newton's claimed theorem is incorrect. Further discussion about Newton's paper is given in Section~\ref{sec:newton} below.
The scattering matrix for nonpositive potentials is not the only family of operators of interest in spectral analysis in which there is a one-sided accumulation of spectrum and where one is interested in eigenvalues approaching the accumulation point. Another example is the Neumann-to-Dirichlet operator $N(k)$ for a smooth bounded domain $\Omega$ in $\RR^n$ (or more generally a Riemannian manifold with boundary). This operator, defined for complex $k$ (except for $k^2$ in the Neumann spectrum of $\Omega$), takes
$L^2(\dOmega)$ to $L^2(\dOmega)$ and maps $f \in C^\infty(\dOmega)$ to the boundary value of the function $u \in C^\infty(\Omega)$ satisfying the Helmholtz equation $(\Delta - k^2) u = 0$ and the Neumann boundary condition $d_n u = f$. This operator is a pseudodifferential operator of order $-1$ with positive principal symbol, and therefore has an accumulation of eigenvalues at $0$ from above. Eigenvalues of $N(k)$ are monotone increasing in $k$ \cite{Friedlander} and, for every Dirichlet eigenvalue $k_0^2$, there is an eigenvalue $\beta(k)$ of $N(k)$ such that $\beta(k) \uparrow 0$ as $k \uparrow k_{0}$. The Neumann-to-Dirichlet operator is quite closely analogous to the scattering matrix (and the analogue becomes even closer if one considers the Cayley transform of $N(k)$, which is a family of unitary operators defined for \emph{every} $k > 0$ and depending analytically on $k$). But there is an important difference between the two cases: in the case of $N(k)$, the eigenvalue branches $\beta(k)$ tending to zero as $k \uparrow k_0$ always have a continuous extension to $k_0$, with the eigenfunction at $k_0$ being the normal derivative of the corresponding Dirichlet eigenfunction on $\Omega$. This can be traced to the fact that the eigenfunction branch corresponding to $\beta(k)$ has a weak limit in $H^1(\Omega)$ and thus, thanks to the compact embedding $H^1(\Omega) \to L^2(\Omega)$, a \emph{strong} limit as $k \uparrow k_0$, which is necessarily nonzero \cite{Barnett-Hassell}. By contrast, the generalized eigenfunction branch for the scattering problem $(H - k^2) u = 0$ corresponding to a phase shift $\delta(k)$ may have only a weak limit --- which may be zero --- if $\delta \uparrow \pi$ as $k \downarrow k_0$.
\medskip
The authors would like to thank Lennie Friedlander, Rafe Mazzeo, and Yuri Safarov for helpful conversations during the writing of this paper.
\section{Consequences of the scattering matrix having eigenvalue $1$}\label{sec:eval1}
Suppose that we have a compactly supported perturbation, $H$, of the Laplacian $\Delta$ on $\RR^n$, such that the scattering matrix $S(k)$, $k > 0$, has an eigenvalue equal to $1$. Notice that since $S(k) = \Id + A(k)$, where $A(k)$ has a smooth kernel, the eigenfunction, say $q(\omega)$, is smooth. So there is a generalized eigenfunction $u$ of $H$ having the asymptotics
$$
u = r^{-(n-1)/2} \Big( e^{-ikr} q(\omega) + e^{ikr} e^{i\pi(n-1)/2} q(-\omega) \Big) + O(r^{-(n+1)/2}).
$$
Since the scattering matrix for the zero potential is the identity operator for all $k$, there is also a free generalized eigenfunction $u_f$, satisfying $(\Delta - k^2) u_f = 0$ on $\RR^n$, satisfying
$$
u_f = r^{-(n-1)/2} \Big( e^{-ikr} q(\omega) + e^{ikr} e^{i\pi(n-1)/2} q(-\omega) \Big) + O(r^{-(n+1)/2}).
$$
It follows that $u - u_f = O(r^{-(n+1)/2})$ near infinity. If outside some large ball in $\RR^n$ we expand $u - u_f$ in spherical harmonics, then we find that the coefficients are functions $j_l(r)$ such that $r^{(n-2)/2} j_l(r)$ satisfy Bessel's equation of order $l + (n-2)/2$, and are $O(r^{-3/2})$ as $r \to \infty$. As the only such solutions are identically zero, we find that $ u = u_f$ outside any ball containing the perturbation. Then applying standard unique continuation theorems, such as \cite[Theorem 17.2.6]{Hor}, we find that $u = u_f$ outside the support of the perturbation; in the case of potential scattering, this means outside the support of $V$.
It is now straightforward to understand the observation of Eckmann-Pillet that the typical smooth obstacle $\Omega$ in $\RR^n$, endowed with Dirichlet boundary conditions, will never have $1$ as an eigenvalue of its scattering matrix $S(k)$ for any $k > 0$. For if a free plane wave $u_f$ agrees with a generalized eigenfunction for the exterior domain $\RR^n \setminus \Omega$, then $u_f$ vanishes on $\dOmega$. But $u_f$ is real analytic, so that would imply that $\dOmega$ is contained in the zero set of a nontrivial real analytic function, and of course this is not true for a generic smooth obstacle. (As an example, take any smooth compact obstacle whose boundary contains an open subset of a hyperplane.)
On the other hand, the main result of Eckmann
and Pillet was that for each $k_0$ such that $k_0^2$ is a Dirichlet eigenvalue of $\Omega$, there is an eigenvalue branch $\beta(k)$ that tends to $1$ as $k \uparrow k_0$. This furnishes many examples of cases where an eigenvalue branch tending to $1$ does not extend continuously to include the value $1$, i.e.\ when the accumulation point of the spectrum is reached, the eigenfunction ceases to exist. See \cite{EP1995} for the details.
We introduce some terminology to describe these situations. We shall
say that a potential function $-V$ is \textbf{partially transparent}
at frequency $k_0 > 0$ if its scattering matrix $S(k_0)$ has an
eigenvalue $1$; \textbf{almost partially transparent} at frequency $k_0
\geq 0$ if there is an eigenvalue branch $e^{2i\delta(k)}$ for $k >
k_0$ that tends to $1$ as $k \downarrow k_0$; and \textbf{completely
non-transparent} if the scattering matrix is not partially
transparent for any $k$, i.e.\ if the scattering matrix has no
eigenvalue equal to $1$ for any $k > 0$. (Note that partial
transparency implies almost partial transparency.)
In Section \ref{sec:potential}, we give an example of a completely non-transparent potential. The example is not in the least bit pathological: it is simply a double well potential, where each well is smooth, compactly supported and spherically symmetric,
and the only subtlety is that the ratio between the radii of these wells is required to avoid a countable set of values (which may be dense). We believe that this property of being completely non-transparent is generic for potential wells in $C_c^\infty(\RR^n)$, but we do not attempt to prove this in this note.
\section{Model Example on $\ell^{2}$} \label{sec:littleell2}
We now give an explicit example of a family of
operators with an eigenvalue branch that vanishes at an accumulation point. This can be regarded as a model for the way in which an eigenbranch for the scattering matrix can disappear when the eigenvalue hits $1$. This example is illustrative only and is not used in the remainder of the paper.
Let $e_{j}$, $j = 1, 2, \dots$ denote the standard basis of $\ell^{2} =
\ell^{2}(\mathbb{N})$, and let $z_{0}$ be any vector with
\begin{equation}
\label{eq:z0def}
\begin{split}
\langle z_{0}, e_{j} \rangle &> 0 \mbox{ for all } j \\
\langle z_{0}, e_{j} \rangle &= \langle z_{0}, e_{k} \rangle \iff
j = k \\
\norm{ z_{0} }&\leq 1.
\end{split}
\end{equation}
For example, we can take $z_0 = \frac1{2} \sum_j e_j/j$.
We define a self-adjoint compact operator $T$ on $\ell^{2}$ by
\begin{equation}
\label{eq:T0def}
T(e_{j}) := \absv{\langle z_{0}, e_{j} \rangle}^{3} e_{j},
\end{equation}
and perturb it by a family of rank one self-adjoint operators parametrized by $k
\in \mathbb{R}$, as follows:
\begin{equation}
\label{eq:Tkdef}
T_{k}(z) := T(z) + k \langle z_{0}, z \rangle z_{0}.
\end{equation}
Notice that $T$, and hence $T_k$, has spectrum accumulating at $0$, from above only.
We will show that $T_{k}$ has the following properties
\begin{enumerate}
\item[(i)] for $k \geq 0$, $T_k$ has only positive eigenvalues;
\item[(ii)] for $k < 0$, there is exactly one negative eigenvalue $\alpha(k)$ of
$T_{k}$, satisfying $k < \alpha(k) < 0$;
\item[(iii)] $T_{k}$ shares no eigenvalue in common with $T_{0} = T$ for $k
\neq 0$, and for all $k$, every eigenvalue of $T_k$ is simple.
\end{enumerate}
The upshot is that for $k < 0$ there is an eigenvalue $\alpha(k)$ of $T_{k}$
which approaches $0$ from below as $k \uparrow 0$, but there is no corresponding zero eigenvalue at $k = 0$. By point (iii), there is no possible continuation of
$\alpha(k)$ to $k \geq 0$, even allowing a removable singularity at $k=0$: the eigenvalue branch $\alpha(k)$ simply ceases
to exist.
The proofs of these three facts are elementary. For $k \geq 0$, the
operator $T_{k}$ is manifestly
positive, and for $k < 0$, it is manifestly bounded below by $k$. For any $k$, $T_{k}$ agrees with $T$ on $\spn
\langle z_{0} \rangle^{\perp}$, hence its quadratic form is positive on a subspace of codimension $1$. On the other hand,
\begin{equation*}
\langle T_{k}(e_{j}), e_{j} \rangle = \absv{\langle z_{0},e_{j}
\rangle}^{3} + k \absv{\langle z_{0},e_{j} \rangle}^{2},
\end{equation*}
so for $k < 0$ this is negative for large $j$. Since the quadratic form of $T_{k}$ is positive on a subspace of codimension $1$, the min--max principle shows that there is exactly one
negative eigenvalue if $k < 0$.
To prove (iii), suppose that $\lambda$ is an eigenvalue of
both $T$ and $T_{k}$ for $k \neq 0$. All eigenspaces of $T$ are
simple, so $\lambda = \absv{\langle z_{0}, e_{j} \rangle}^{3}$ for
some $j$ and the eigenvector is $e_{j}$. Let $z$ be a corresponding eigenvector of $T_k$. Then we have
$$
\lambda \langle z, e_j \rangle = \langle T_k z, e_j \rangle = \langle z, T_0 e_j \rangle
\implies \langle (T_k - T_0) z, e_j \rangle = 0.
$$
This means that
$$
k \langle z_0, z \rangle \langle z_0, e_j \rangle = 0.
$$
Since $\langle z_0, e_j \rangle \neq 0$ we find that
\begin{equation}
\langle z_0, z \rangle = 0.
\label{ipz}\end{equation}
But this implies that $T_k z = T_0 z$. Therefore, $z$ is an
eigenfunction of $T_0$, i.e.\ $z = e_i$ for some $i$. But this contradicts \eqref{ipz}, since $\langle z_0, e_i \rangle \neq 0$.
Finally we show that $T_k$ has only simple eigenvalues. This is true by construction for $k = 0$. For $k \neq 0$, if $T_k$ had an eigenspace of two or more dimensions then it would contain a nontrivial eigenvector $w$ orthogonal to $z_0$. But then, as before, we would have $T_k w = T_0 w$ so $w$ would be an eigenvector of $T_0$, contradicting the fact just proved that $T_k$ and $T_0$ have no eigenvalues in common.
The spectrum of $T_k$ as a function of $k$ is therefore as in Figure \ref{fig:model}, necessarily with infinitely many avoided crossings due to property (iii) above.
\begin{figure}
\centering
\input{toy.pdf_tex}
\caption{Schematic depiction of the spectrum of $T_k$. The negative eigenvalue branch, $\alpha(k)$, vanishes as $k \uparrow 0$, the other eigenvalues avoid crossing, and the largest eventually increases as $\| z_0 \|^2k + O(1)$. The dotted line is the line $y = \| z_0 \|^2 k$.}
\label{fig:model}
\end{figure}
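Although no computation is needed above, the picture is easy to reproduce numerically: truncating $T_k$ to the span of $e_1,\dots,e_n$ and computing its smallest eigenvalue exhibits the branch $\alpha(k)$ for $k<0$ and its disappearance at $k=0$. The following Python sketch (the truncation size and the values of $k$ are illustrative choices) does this for $z_0=\frac12\sum_j e_j/j$.
\begin{verbatim}
import numpy as np

n = 1000                              # truncation size (illustrative)
z0 = 0.5 / np.arange(1, n + 1)        # z0 = (1/2) sum_j e_j / j
T = np.diag(z0**3)                    # T e_j = |<z0, e_j>|^3 e_j

for k in (-0.1, -0.01, -0.001, 0.0, 0.01):
    Tk = T + k * np.outer(z0, z0)     # rank-one perturbation T_k
    ev = np.linalg.eigvalsh(Tk)
    print(f"k = {k:8.4f}: smallest eigenvalue = {ev[0]: .3e}")
# Expected: for k < 0 (with |k| above roughly 1/(2n), so that the
# truncation can resolve it) the smallest eigenvalue is negative, lies in
# (k, 0), and tends to 0 as k increases to 0; for k >= 0 all eigenvalues
# are positive.
\end{verbatim}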
\section{Completely non-transparent potential}\label{sec:potential}
We now give an example of a compactly supported, smooth potential $V \geq 0$, $V \not\equiv 0$, which is completely non-transparent. In fact we show a slightly stronger result: for any sequence of `strengths', i.e.\ positive real numbers $\lambda_i \to \infty$, we find a $V$ such that all of the potentials $-{\lambda}_{i}V$ are completely non-transparent.
Given any $W \in C_{c}^{\infty}(\mathbb{R}^{n})$, let $S_W(k)$ denote
the scattering matrix of $\Delta - W$. If $ W$ is partially transparent at frequency $k$, then, as discussed in Section \ref{sec:eval1}, there are solutions
\begin{equation}\label{eq:wschroed}
\begin{split}
\lp \Delta - W - k^{2} \rp u &= 0 \\
\lp \Delta - k^{2} \rp u_{f} &= 0 \\
\end{split}
\end{equation}
such that $u =
u_{f}$ on the complement of $\supp W$. Let $\Omega$ be any smooth bounded domain containing the interior of $\supp W.$ Then, at the boundary of $\Omega$,
\begin{equation}\label{eq:doubledton}
\begin{split}
u \rvert_{\partial \Omega} &= u_{f} \rvert_{\partial \Omega} \\
\partial_{\nu} u \rvert_{\partial \Omega} &= \partial_{\nu} u_{f} \rvert_{\partial \Omega}.
\end{split}
\end{equation}
We will construct a domain so that for
any functions $u$ and $u_{f}$ satisfying \eqref{eq:wschroed}
\textbf{on $\Omega$}, equation \eqref{eq:doubledton} holds only
when $u \equiv u_{f} \equiv 0$.
Let $\chi \in C^{\infty}_c(\mathbb{R})$ be a bump function with $\chi \geq 0$, $\chi(r) \equiv 1$ for $r < 1/4$, and
$\supp \chi \subset \set{r \leq 1/2}$. Also choose $x_0 \in \RR^n$ with $|x_0| > 1$ and $R > 0$ such that the intersection $\overline{B_0(1)} \cap \overline{B_{x_0}(R)}$ is empty, that is, so that $R < |x_0| -1$. We define a potential
\begin{equation}
\label{eq:2}
\fbox{$ V_{R}(x) := \chi(\absv{x}) + R^{-2} \chi(\absv{x - x_0}/R) $.}
\end{equation}
We think of $x_0$ as fixed throughout.
\begin{theorem}\label{thm:notransparency}
Let $\Omega_{R} = B_{0}(1) \cup B_{x_0}(R)$ where $|x_0| > 1$. Then for any sequence of
positive numbers ${\lambda}_{i} \to \infty$, there is a countable set
$\Lambda$ so that for $R \not \in \Lambda$ and $0 < R < |x_0| - 1$, there are
no non-zero simultaneous solutions to \eqref{eq:wschroed} and
\eqref{eq:doubledton} with $W = \lambda_i V_{R}$. Thus, for each $\lambda_i$, the scattering matrix $S_{\lambda_{i} V_{R}}(k)$ does not have $1$ as an eigenvalue for any $k > 0$, i.e.\ $-\lambda_i V_R$ is completely non-transparent.
\end{theorem}
Before we begin the proof, we discuss the case of a single well potential
$$W_{R}(\absv{x}) = \chi(\absv{x}/R)/R^2.$$ If we
label the spherical harmonics in the standard way, $\phi_{lm}$, where
$\Delta_{S^{n-1}}\phi_{lm} = l(l + n - 2) \phi_{lm}$, any
solution $u, u_{f}$ to \eqref{eq:wschroed} on $B_{0}(R)$ can be
written
\begin{align}
\label{eq:things}
u_{f}(r\omega) &= \sum_{ |m| \leq l, l=0}^\infty a_{lm} r^{-(n-2)/2}
J_{l + (n-2)/2}(kr) \phi_{lm}(\omega)
\end{align}
where $J_{\nu}$ is the standard Bessel function of order $\nu$, and
\begin{align}
\label{eq:things2}
u(r\omega) &= \sum_{ |m| \leq l, l=0}^\infty b_{lm} r^{-(n-2)/2} \JJ_{l, k, \lambda, R}(r) \phi_{lm}(\omega)
\end{align}
where $\JJ_{l, k, \lambda, R}(r)$ is the unique solution to
\begin{equation}
\label{eq:almostbessell}
\lp - \partial_{r}^{2} - \frac{1}{r} \partial_{r} + \frac{(l+(n-2)/2)^2}{r^{2}} -
\lambda W_{R}(r) - k^{2} \rp \JJ_{l, k, \lambda, R}(r) = 0,
\end{equation}
which is equal to $ J_{l + (n-2)/2}(\sqrt{\lambda/R^{2} + k^{2}} \, r)$ near $r =
0$. (This is, up to scale, the unique regular solution, since $W_{R} = 1/R^2$ near $r = 0$.)
From this it is clear that a necessary and sufficient condition for solving both
\eqref{eq:wschroed} and \eqref{eq:doubledton} is that there exist a non-negative integer $l$ and a $k > 0$ such that the Wronskian
\begin{equation}
\begin{split}
D_{l, k, \lambda, R}(r) :=
\det \lp
\begin{array}{cc}
J_{l + (n-2)/2}(kr) & \JJ_{l, k, \lambda, R}(r) \\
\partial_r \big(J_{l + (n-2)/2}(kr) \big) & \partial_{r} \big(\JJ_{l, k, \lambda, R}(r) \big)
\end{array}
\rp
\end{split}
\end{equation}
satisfies
\begin{equation}
\label{eq:bessellzeroesR}
D_{l, k, \lambda, R}(R) = 0.
\end{equation}
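Although the argument below is purely analytic, the condition \eqref{eq:bessellzeroesR} is easy to explore numerically. The following Python sketch (with an assumed smooth bump profile for $\chi$ and illustrative values of $n$, $l$, $\lambda$ and $R$) integrates \eqref{eq:almostbessell} outward from $r\approx0$ and locates sign changes of $k\mapsto D_{l,k,\lambda,R}(R)$, consistent with the discreteness asserted in the lemma below.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv, jvp

n_dim, l, lam, R = 3, 0, 200.0, 1.0   # illustrative parameters
nu = l + (n_dim - 2) / 2

def chi(r):
    # an assumed smooth bump: 1 for r < 1/4, 0 for r > 1/2
    def f(s):
        return np.where(s > 0, np.exp(-1.0 / np.where(s > 0, s, 1.0)), 0.0)
    s = (0.5 - r) / 0.25
    return f(s) / (f(s) + f(1.0 - s))

def D(k):
    mu = np.sqrt(lam / R**2 + k**2)
    r0 = 1e-3 * R                     # start where W_R = 1/R^2 exactly
    y0 = [jv(nu, mu*r0), mu*jvp(nu, mu*r0)]
    def ode(r, y):
        W = chi(r / R) / R**2
        return [y[1], -y[1]/r + (nu**2/r**2 - lam*W - k**2)*y[0]]
    sol = solve_ivp(ode, (r0, R), y0, rtol=1e-8, atol=1e-10)
    JJ, JJp = sol.y[0, -1], sol.y[1, -1]
    return jv(nu, k*R)*JJp - k*jvp(nu, k*R)*JJ

ks = np.linspace(0.5, 12.0, 300)
vals = np.array([D(k) for k in ks])
zeros = ks[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
print("approximate zeroes in k:", np.round(zeros, 2))
\end{verbatim}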
We will prove
\begin{lemma}\label{thm:bessellzeroeslemma}
For fixed ${l}$, ${\lambda}$, and $R$, the set of zeroes of $D_{{l}, k,
{\lambda}, R}(R)$ as a function of $k$ is discrete (hence countable) in
$(0, \infty)$. For fixed ${l}$, ${\lambda}$, and $k$, the set of zeroes of $D_{{l}, k,
{\lambda}, R}(R)$ as a function of $R$ is discrete (hence countable) in
$(0, \infty)$.
\end{lemma}
Assuming the lemma, the theorem follows easily.
\begin{proof}[Proof of Theorem \ref{thm:notransparency}]
Fix a sequence $\lambda_{i} \to \infty$, and a particular element
$\lambda$ thereof. By the preceding discussion, given any $R < |x_0| - 1$, there is a solution
to \eqref{eq:wschroed} and \eqref{eq:doubledton} on $\Omega_{R}$ if
and only if there are $l, \wt{l} \geq 0$, and $k > 0$ so that
we have both
\begin{equation}
\label{eq:bessellzeroessystems}
\begin{split}
D_{l, k , {\lambda}, R}(R) &= 0 \\
D_{\wt{l}, k , {\lambda}, 1}(1) &= 0 .
\end{split}
\end{equation}
Fixing $l, \wt{l}$, consider the set of $R > 0$ for which there is a solution to \eqref{eq:bessellzeroessystems}. Using the first part of Lemma~\ref{thm:bessellzeroeslemma}, there
are only countably many solutions, $k_{i}$, to the second equation in \eqref{eq:bessellzeroessystems}. For each $k_{i}$, using the second part of Lemma~\ref{thm:bessellzeroeslemma}, there are only countably many
solutions $R_{i,j}$ to $ D_{l, k_{i} , {\lambda}, R_{i,j}}(R_{i,j}) = 0$, and thus there
are only countably many $R$ for which the system \eqref{eq:bessellzeroessystems}
admits a solution. There are
only countably many pairs $(l, \wt{l})$ of nonnegative integers,
so the set of $R$ such that \eqref{eq:bessellzeroessystems} holds for any pair
$(l, \wt{l})$ is countable.
Thus, for each $\lambda_{i}$, there are no solutions to
\eqref{eq:bessellzeroessystems} with $l, \wt{l} \in \{ 0, 1, 2, \dots \}$ and $R < |x_0| -1$ not in some
countable set $\Lambda_{i}$. Setting $\Lambda = \bigcup_{i}
\Lambda_{i}$ proves the theorem.
\end{proof}
It remains to prove the lemma.
\begin{proof}[Proof of Lemma \ref{thm:bessellzeroeslemma}]
To simplify notation, for fixed $R$, ${l}$, and ${\lambda}$
set $F(k) := D_{l, k, \lambda, R}(R)$, and for fixed $k$, ${l}$, and
${\lambda}$, set $G(R) = D_{l, k, \lambda, R}(R)$. Our goal is to
prove that both $F(k)$ and $G(R)$ have countably many zeroes.
To see this, note that by scaling the $r$ variable in \eqref{eq:almostbessell}, $\JJ_{l, k, \lambda, R}(r) = f(l, kR, \lambda, r/R)$, where
$f$ satisfies the differential equation
\begin{equation}
\label{eq:almostbessell2}
\lp - \partial_{r}^{2} - \frac{1}{r} \partial_{r} + \frac{(l+(n-2)/2)^2}{r^{2}} -
\lambda W_{1}(r) - k^{2} \rp f(l, k, \lambda, r) = 0.
\end{equation}
The two terms in $D_{l, k, \lambda, R}(R)$ involving $\JJ_{l, k, \lambda, R}$ are
$\JJ_{l, k, \lambda, R}(R) = f(l, kR, \lambda, 1)$ and \linebreak $\partial_{r} \lp
\JJ_{l, k, \lambda, R}(r) \rp \rvert_{r = R} = \partial_{r} f(l, kR, \lambda, 1) / R$. These are analytic functions of $kR$ by Theorem XI.56 in
\cite{RSIII}. Bessel functions are analytic, so $D_{l, k, \lambda, R}(R)$ is analytic in both $k$ and $R$.
It therefore suffices to check that neither $F(k)$ nor $G(R)$ is
identically zero, which we do using the Sturm comparison theorem and taking
$k$ and $R$ large in comparison with $\lambda$. Note that $D_{l, k, \lambda, R}(R)$ is nonzero precisely when $\JJ_{l, k, \lambda, R}$ is not a multiple of $J_{l+(n-2)/2}$ for $r \geq R/2$. We show this is true when $kR$ is large by comparing $\JJ_{l, k, \lambda, R}$ to the solutions to equation \eqref{eq:almostbessell} when the potential $-\lambda W_R$ is replaced by either $-\lambda R^{-2} 1_{[0, R/4]}$ or $-\lambda R^{-2} 1_{[0, R/2]}$. Since $R^{-2} 1_{[0, R/4]} \leq W_R \leq R^{-2} 1_{[0, R/2]}$, the Sturm comparison theorem tells us that the $N$th zero of $\JJ_{l, k, \lambda, R}$ lies between the $N$th zeroes of the solutions to equation \eqref{eq:almostbessell} with the potential $-\lambda W_R$ replaced by each of the two characteristic-function potentials above. Since the solutions for these are $J_{l+(n-2)/2}(\sqrt{k^2 + \lambda R^{-2}} \, r)$ on the support of the corresponding characteristic function, and since $\sqrt{k^2 + \lambda R^{-2}} = k + O(1/(kR^2))$, we find that the zeroes of $\JJ_{l, k, \lambda, R}(r)$ for $r \geq R$, which are $\sim \pi/k$ apart, are shifted by an amount between $c_1 (kR)^{-2}$ and $c_2 (kR)^{-2}$ relative to those of $J_{l + (n-2)/2}(kr)$, where $0 < c_1 < c_2$. This shows that when $k$ is large for fixed $R$, or $R$ is large for fixed $k$, $\JJ_{l, k, \lambda, R}(r)$ is not proportional to $J_{l + (n-2)/2}(kr)$ for $r \geq R$, and hence that $D_{l, k, \lambda, R}(R) \neq 0$.
\begin{comment}
A direct computation shows that
\begin{equation*}
\partial_{r} r D_{l, k, \lambda, R}(r) = - \lambda^{2}rW_{R}(r) J_{l + (n-2)/2}(kr) \JJ_{l, k, \lambda, R}(r),
\end{equation*}
so, since $\lim_{r \to 0} r D_{l, k, \lambda, R}(r) = 0$,
\begin{equation}\label{eq:fR}
D_{l, k, \lambda, R}(R) = - \frac{1}{R} \int_{0}^{R} s \lambda^{2} W_{R}(s) J_{l + (n-2)/2}(ks) \JJ_{l, k, \lambda, R}(s) ds
\end{equation}
For $f$ as in \eqref{eq:almostbessell2}, and changing variables to $ks = s'$, we have
\begin{equation}\label{eq:fR2}
k R^{2} D_{l, k, \lambda, R}(R) = - \frac{1}{kR} \int_{0}^{kR} s'
\lambda^{2} W_{kR}(s') J_{l + (n-2)/2}(s) f(l, \lambda, s/kR, kR) ds
\end{equation}
By \eqref{eq:almostbessell2}, we expect $f(l, \lambda, s/kR, kR)$ to behave like $ J_{l + (n-2)/2}(s)$ for large $kR$. Approximating using variation of parameters gives
\begin{equation*}
k R^{2} D_{l, k, \lambda, R}(R) = - \frac{1}{kR} \int_{0}^{kR} s'
\lambda^{2} W_{kR}(s') J^2_{l + (n-2)/2}(s) ds + \mathcal{O}(1/\sqrt{kR}),
\end{equation*}
Using the standard expansions for Bessel
functions near infinity it is easy to see that the integral converges to a non-zero constant for $kR$ large. Thus $ k R^{2} D_{l, k, \lambda, R}(R) $ is nonzero for large $k$ with $R$ fixed, and for large $R$ with fixed $k$. This proves the lemma.
\end{comment}
\end{proof}
\section{Monotonicity of phase shifts and the existence of almost partially transparent frequencies}\label{sec:monotone}
For any
potential $V \geq 0$, it is not difficult to show, using the monotonicity result of \cite{HR1976}, that the number of non-zero almost partially transparent
frequencies of $\Delta - \lambda V$ is unbounded as $\lambda
\to \infty$. We sketch an argument now. Without loss of generality, assume that
$V(0) \neq 0$, and let $\chi = \chi(r) \in C_{c}^{\infty}$ be a smooth, non-negative, spherically-symmetric
function with $V \geq \chi$. Setting $V(s) := s V+ (1 - s) \chi$,
let $S_{s}(k)$ be the scattering matrix for $\Delta - \lambda V(s)$ at
frequency $k$. Let $\alpha(s,k) = \exp(2i\delta(s,k))$ be an eigenvalue of $S_{s}(k)$. As long as it is not equal to $1$, $\alpha(s,k)$ can be taken analytic in $s$. If $\alpha(s,k) \neq 1$ for all $s \in [0,1]$, Theorem 1 of \cite{HR1976} gives
\begin{equation}\label{eq:mono}
\frac{ \partial \delta}{\partial s} \geq 0 \mbox{ for } s \in [0,1].
\end{equation}
Recall that for $s = 0$, i.e.\ for the potential $-\lambda \chi$, the
scattering matrix is diagonal with respect to (say, the standard)
basis of spherical
harmonics, $\phi_{lm}$. The eigenvalues $\alpha_{lm}(k) = \exp(2i\delta_{lm}(k))$ are
defined continuously for all $k$, and can be taken so that
$\delta_{lm}(k) \to 0$ as $k \to \infty$. As we saw in our discussion of Levinson's theorem for central potentials in the introduction, by taking $\lambda$ sufficiently large, the counterclockwise winding number of $\alpha_{lm}(k)$ can be assumed bigger than $1$. For $k$ taken large so that $S_{s}(k)$ is
very close to the identity, let $\alpha(s,k)$ be an eigenvalue with
$\alpha(0,k) = \alpha_{lm}(k)$. By \eqref{eq:mono}, $\delta_{lm}(k) < \delta(1,k)$.
If $\lambda$ was chosen
large enough so that
$\delta_{lm}(k)$ crosses $\pi$, say at $k_{0}$, then $\delta(1,k)$ must approach
$\pi$ from below at some frequency $k' \geq k_{0}$ --- see Figure~\ref{fig:almosttrans}. This produces a non-zero almost partially transparent frequency for each pair $(l,m)$ for which the winding number of $\alpha_{lm}(k)$ is bigger than $1$.
\begin{figure}[htbp]
\begin{center}
\input{monotone.pdf_tex}
\caption{For $k$ large, the phase shifts are monotone in $s$.}
\label{fig:almosttrans}
\end{center}
\end{figure}
As a simple corollary to this argument we have
\begin{corollary}\label{thm:onlyalmost}
For $V_R$ and $\lambda_i \to \infty$ as in Theorem \ref{thm:notransparency}, and for $\lambda_i$ large enough, there exist eigenvalues of the scattering matrix $S_{\lambda_i V_R}$ which approach $1$ at non-zero frequencies. Since $- \lambda_i
V_R$ is completely non-transparent, these limiting frequencies are almost partially transparent but not partially transparent.
\end{corollary}
\section{On Newton's `micro Levinson theorem' for non-central potentials}\label{sec:newton}
In 1989, R. Newton published a paper \emph{The spectrum of the Schr\"odinger $S$ matrix: low energies and a new Levinson theorem} \cite{N1989}, claiming the following result:
\medskip
\noindent \textbf{Claimed Theorem.} \textit{Assume that the
potential [is smooth and exponentially decaying]. Then each
eigenphase shift $\delta_{lm}(k)$ may be defined to be a continuous
function of $k$, to vanish as $k \to \infty$, and so that its value at the
origin is $\delta_{lm}(0) = \pi (\mathcal{N}_{lm} + \nu)$ where
$\mathcal{N}_{lm}$ is the number of bound states associated with the
pair $(l,m)$, $\nu = 1/2$ if $l = 0$ and there is a half-bound state,
and $\nu = 0$ otherwise.
}
\medskip
To label bound states, Newton introduces a strength parameter $\lambda$ and considers the negative spectrum for the family $\lambda V$; as $\lambda \to 0$ the bound states approach zero energy and then disappear, and the label is related to the asymptotic spatial behaviour of the limiting zero energy solution.
Corollary \ref{thm:onlyalmost} above shows that Newton's claimed theorem is
incorrect. Indeed, for $V_R$ and $\lambda_i$ as in the corollary, there are eigenvalues of the scattering matrix of $\Delta - \lambda_i V_R$ which approach $1$ as $k \downarrow k_0$ for some non-zero frequency $k_0$, but such that $1$ is not an eigenvalue of $S(k_0)$. Therefore, the phase shifts of $S(k)$ cannot always be taken to be continuous in $k$ on $(0, \infty)$.
We mention some other problems with Newton's paper.
\begin{itemize}
\item The proposed labelling of bound states is flawed. It seems to depend implicitly on the assumption that the limiting eigenfunctions of the scattering matrix $S_{\lambda V}(k)$ as $k \to 0$ are independent of $\lambda$, but this is not the case.
\item Lemma 4.2 of \cite{N1989} is incorrect. Using the notation in the lemma, if $u$ is a zero energy bound state of $(\Delta + V) u = 0$ with angular dependence $r^{-l-1} \mathcal{Y}_{ln}(\omega)$, where $\mathcal{Y}_{ln}(\omega)$ is a spherical harmonic with angular momentum quantum number $l$, then $u = Ku$ and $P_{ln} u = u$, and hence $u = K P_{ln} u$. However, the converse is certainly not true: from $u = K P_{ln} u$ we are not able to deduce that
both $u = Ku$ and $u = P_{ln} u$, which would be required to conclude Lemma 4.2.
\end{itemize}
The second point is of some significance: if Lemma 4.2 were correct for every $C_c^\infty(\RR^3)$ potential $V$, there would be infinitely many $\lambda$ such that $\Delta - \lambda V$ had a zero eigenvalue with eigenfunction behaving as $r^{-l-1} \mathcal{Y}_{ln}(\omega)$ at infinity, for some spherical harmonic
$\mathcal{Y}_{ln}(\omega)$ (depending on $\lambda$) with angular momentum quantum number $l$. However, we are sure this is not the case. In fact, we conjecture that the generic potential $V \in C_c^\infty(\RR^n)$ is such that all the zero energy eigenfunctions of $\Delta + \lambda V$ are half-bound states.
\section{Open problems}
This short note suggests some interesting open problems concerning potential scattering.
\begin{itemize}
\item Is the generic potential well in $C_c^\infty(\RR^n)$ completely non-transparent?
\item Is it true that for a generic potential $V$ in $C_c^\infty(\RR^n)$, all the zero-energy, decaying solutions $u$ to $(\Delta - \lambda V)u = 0$ are half-bound states? That is, are $L^2$ zero-energy eigenfunctions nongeneric relative to half-bound states?
\item Is there a Levinson-type theorem for non-central potentials where instead of looking at the value of phase-shifts at $k=0$, we count the number of almost partially transparent energies (counted with multiplicity)?
\item Can one characterize, in some spectral-geometric way, the almost partially transparent frequencies of a potential well $-V \in C_c^\infty(\RR^n)$, by analogy with that for obstacle scattering \cite{EP1995}?
\end{itemize}
\bibliographystyle{abbrv}
| {
"timestamp": "2012-03-16T01:01:34",
"yymm": "1112",
"arxiv_id": "1112.3413",
"language": "en",
"url": "https://arxiv.org/abs/1112.3413",
"abstract": "Let $S(k)$ be the scattering matrix for a Schrödinger operator (Laplacian plus potential) on $\\RR^n$ with compactly supported smooth potential. It is well known that $S(k)$ is unitary and that the spectrum of $S(k)$ accumulates on the unit circle only at 1; moreover, $S(k)$ depends analytically on $k$ and therefore its eigenvalues depend analytically on $k$ provided the values stay away from 1.We give examples of smooth, compactly supported potentials on $\\RR^n$ for which (i) the scattering matrix $S(k)$ does not have 1 as an eigenvalue for any $k > 0$, and (ii) there exists $k_0 > 0$ such that there is an analytic eigenvalue branch $e^{2i\\delta(k)}$ of S(k)$ converging to 1 as $k \\downarrow k_0$. This shows that the eigenvalues of the scattering matrix, as a function of $k$, do not necessarily have continuous extensions to or across the value 1. In particular this shows that a `micro-Levinson theorem' for non-central potentials in $\\RR^3$ claimed in a 1989 paper of R. Newton is incorrect.",
"subjects": "Spectral Theory (math.SP); Analysis of PDEs (math.AP)",
"title": "Potential scattering and the continuity of phase-shifts",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517424466175,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7089606308493782
} |
https://arxiv.org/abs/1509.01420 | Anti-Urysohn spaces | All spaces are assumed to be infinite Hausdorff spaces. We call a space "anti-Urysohn" $($AU in short$)$ iff any two non-empty regular closed sets in it intersect. We prove that$\bullet$ for every infinite cardinal ${\kappa}$ there is a space of size ${\kappa}$ in which fewer than $cf({\kappa})$ many non-empty regular closed sets always intersect;$\bullet$ there is a locally countable AU space of size $\kappa$ iff $\omega \le \kappa \le 2^{\mathfrak c}$.A space with at least two non-isolated points is called "strongly anti-Urysohn" $($SAU in short$)$ iff any two infinite closed sets in it intersect. We prove that$\bullet$ if $X$ is any SAU space then $ \mathfrak s\le |X|\le 2^{2^{\mathfrak c}}$;$\bullet$ if $\mathfrak r=\mathfrak c$ then there is a separable, crowded, locally countable, SAU space of cardinality $\mathfrak c$;$\bullet$ if $\lambda > \omega$ Cohen reals are added to any ground model then in the extension there are SAU spaces of size $\kappa$ for all $\kappa \in [\omega_1,\lambda]$;$\bullet$ if GCH holds and $\kappa \le\lambda$ are uncountable regular cardinals then in some CCC generic extension we have $\mathfrak s={\kappa}$, $\,\mathfrak c={\lambda}$, and for every cardinal ${\mu}\in [\mathfrak s, \mathfrak c]$ there is an SAU space of cardinality ${\mu}$.The questions of whether SAU spaces exist in ZFC or whether SAU spaces of cardinality $> \mathfrak c$ can exist remain open. | \section{Introduction}
In this paper ``space'' means ``infinite Hausdorff topological space''.
The space $X$ is called {\em anti-Urysohn} ({\em AU}, in short) iff
$A\cap B\ne \emptyset$ for any $A,B\in \operatorname{RC}^+(X)$, where $\operatorname{RC}^+(X)$ denotes the family of non-empty
regular closed sets in $X$.
We call the space $X$ {\em strongly anti-Urysohn} ({\em SAU}, in short)
iff $|X'|>1$, that is $X$ has at least two non-isolated points,
and $A\cap B\ne \emptyset$ for any $A,B\in \ \operatorname{\mc F}^+(X)$, where
$\operatorname{\mc F}^+(X)$ denotes the family of {\em infinite} closed subsets of $X$.
Clearly, AU spaces are crowded and a crowded SAU space is AU. Our original intention was to include
crowdedness in the definition of SAU spaces. However, we changed our minds after
we realized that it seems to be just as hard to construct them with the weaker property of having
at least two non-isolated points.
What led us to consider AU spaces was not just idle curiosity. Co-operating via correspondence with Alan Dow,
we have recently arrived at the result that in the Cohen model any separable and sequentially compact
Urysohn space has cardinality $\le \mathfrak{c}$. (This result will be published elsewhere.)
The natural question if this holds for all (Hausdorff) spaces, however, remained open.
When trying to find a ZFC counterexample, it was natural to look for spaces that are
as much non-Urysohn as possible.
Actually, a countable AU space, under a different name, had been constructed
by W. Gustin in \cite{Gu} a long time ago, as a simple(r) example
of a countable connected Hausdorff space. (It is obvious that AU spaces are connected.)
The first example of a countable connected Hausdorff space was constructed by
Urysohn in \cite{Ur}, but his construction is
extremely long and complicated: just the description of his space takes up three pages.
(We have no idea if Urysohn's example is AU or not.)
A much simpler example was obtained by
Gustin in \cite{Gu} where the following was proved:
\cite[Theorem 4.2]{Gu}
There is a countably infinite Hausdorff space $X$
such that no two distinct points in $X$ have disjoint closed neighbourhoods (i.e.
$X$ is AU).
An even simpler construction of
a countable connected Hausdorff space, which takes up only one page, was published by
Bing in \cite{Bi}. This is also presented as example 6.1.6 in Engelking's book \cite{En}.
Bing's example also turns out to be AU.
In contrast to this, we are not aware of any earlier appearance of SAU spaces. In fact,
we admit that when we first considered them we did not think that they could exist.
Our notation and terminology is standard. In set theory we follow \cite{Ku} and
in topology \cite{En}.
\section{Existence of anti-Urysohn spaces}
In this section we show that for every infinite cardinal $\kappa$ there
is an AU space of cardinality $\kappa$. In fact, we prove considerably more, which is
new and of interest even in the case $\kappa = \omega$.
To do that we need the following somewhat technical lemma that provides a general method for
constructing AU spaces.
\begin{lemma}\label{lm:large_SA}
Assume that ${\kappa}$ is an infinite cardinal and $X$ is a space with
$X\cap {\kappa}=\emptyset$, moreover
$\{K_{\alpha}:{\alpha}<{\kappa}\}$ are pairwise
disjoint non-empty compact subsets of $X$ such that
\begin{enumerate}[(1)]
\item if $\,a\subset {\kappa}$ is cofinal then
$\,\bigcup\nolimits_{{\alpha}\in a}K_{\alpha}$ is dense in $X$;
\smallskip
\item $Y=X\setminus \bigcup\nolimits_{{\alpha} < \kappa}K_{\alpha}$ is also dense in $X$.
\end{enumerate}
\smallskip
Define the topology $\varrho$ on $Z = Y\cup {\kappa}$ as follows:
\begin{itemize}
\item for $y \in Y$ the family $\{U \cap Y : y \in U \in \tau(X) \}$,
\smallskip
\item for ${\alpha}\in {\kappa}$ the family
\begin{align*}
\Big\{\{{\alpha}\}\cup (W\cap Y) : K_{\alpha} \subset W \in \tau(X) \Big\}
\end{align*}
\end{itemize}
is a $\varrho$-neighbourhood base. Then
\begin{enumerate}[(i)]
\item $\varrho$ is a Hausdorff topology on $Z$,
\item if $V \in \varrho$ is non-empty then $\,\overline{V}^\varrho$
includes a final segment of ${\kappa}$.
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lm:large_SA}]
It is straightforward to check that the families above are indeed neighbourhood bases for a topology $\varrho$ on $Z$.
That $\varrho$ is Hausdorff follows from the fact that any two disjoint
compact sets have disjoint neighbourhoods in $X$.
It is obvious from the definition that the subspace topology of $Y$ inherited from $X$ is
the same as $\varrho \upharpoonright Y$, moreover $Y$ is dense open in $\langle Z,\varrho \rangle$.
Thus it suffices to show (ii) for $V \in \varrho$ with $V \subset Y$.
But every $\varrho$-open set $V \subset Y$ is of the form $U \cap Y $ with $U \in \tau(X)$.
Now, if $V$ is non-empty then (1) implies that
\begin{align*}
I=\{{\alpha}<{\kappa}: K_{\alpha}\cap U \ne \emptyset\}
\end{align*}
contains a final segment of ${\kappa}$. But if ${\alpha} \in I$ and
$K_\alpha \subset W \in \tau(X)$ then $W \cap U \supset K_{\alpha}\cap U \ne \emptyset$, hence
$W \cap V = (W \cap Y) \cap V = W \cap U \cap Y \ne \emptyset$ as well because, by (2),
$Y$ is dense in $X$. Thus we have ${\alpha}\in \overline {V}^\varrho$ for all $\alpha \in I$,
which completes the proof of (ii).
\end{proof}
Now we shall present two applications of lemma \ref{lm:large_SA}.
\begin{theorem}\label{tm:any}
For any infinite cardinal ${\kappa}$ there is an AU space $Z$
such that
$|Z|={\kappa}$, $\,d(Z) = \log \kappa$, and
\begin{align*}
\text{$\bigcap \mc A\ne \emptyset$ whenever
$\mc A\in \br \operatorname{RC}^+(Z);<cf({\kappa});$.}
\end{align*}
\end{theorem}
\begin{proof}
[Proof of theorem \ref{tm:any}]
Consider in the Cantor cube $\mathbb{C}_\kappa = \{0,1\}^\kappa$ the pairwise
disjoint non-empty compact subsets
$$K_\alpha = \{x \in \mathbb{C}_\kappa : x(\alpha) = 1 \mbox{ and } x(\beta) = 0 \mbox{ for all } \alpha < \beta < \kappa \}.$$
It is well known that $d(\mathbb{C}_\kappa) = \log \kappa$ and we leave it to the reader to check that
the standard proof of this fact (see e.g. \cite{Ju}) yields a dense set $Y \subset \mathbb{C}_\kappa$ with $|Y| = \log \kappa$
such that $Y \cap \bigcup_{\alpha < \kappa}K_\alpha = \emptyset$.
Now, it is obvious that we may apply lemma \ref{lm:large_SA} to the subspace $X = \bigcup_{\alpha < \kappa}K_\alpha\, \cup\, Y$
of $\mathbb{C}_\kappa$ to obtain the required space on $Z = Y \cup \kappa$.
\end{proof}
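It may help to see the case $\kappa=\omega$ of this construction concretely; the following description is easily checked. In $\mathbb{C}_\omega=\{0,1\}^\omega$ the set $K_n=\{x : x(n)=1 \mbox{ and } x(m)=0 \mbox{ for all } m>n\}$ is finite of size $2^n$, and $\bigcup_{n<\omega}K_n$ consists exactly of the eventually zero points other than the constant zero function. For $Y$ one may take the countable dense set of all eventually $1$ points of $\mathbb{C}_\omega$, which is disjoint from $\bigcup_{n<\omega}K_n$; conditions (1) and (2) of lemma \ref{lm:large_SA} then hold because every non-empty clopen subset of $\mathbb{C}_\omega$ meets $Y$ as well as $K_n$ for all large $n$.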
In the case $\kappa = \omega$ the countable AU, hence connected, space we obtain from theorem \ref{tm:any}
has the stronger property that any intersection of finitely many
non-empty regular closed sets is non-empty. We think that its construction is at least as simple as
Bing's in \cite{Bi}. In any case, it is certainly stronger because,
as is easily checked, both Gustin's and Bing's countable AU spaces contain
three non-empty regular closed sets whose intersection is empty.
Our next application of lemma \ref{lm:large_SA} will enable us to construct large AU spaces
that are locally small.
Before formulating it we recall that the dispersion character
$\Delta(X)$ of a space $X$ is the
smallest size of a non-empty open set in $X$.
\begin{theorem}\label{tm:large_SA}
For any infinite cardinal ${\kappa}$ there is an AU space $Z$
with a closed discrete subset $D\subset Z$
such that
$|D|=|Z|=\Delta(Z)=d(Z)={\kappa}$ and
\begin{align}\label{eq:large_SA0}
\text{$|D\setminus A|<{\kappa}$ for each
$A\in \operatorname{RC}^+(Z)$.}
\end{align}
Moreover, there is a family
$\{U_{\alpha}:{\alpha}<{\kappa}\}$ of pairwise disjoint open
sets in $Z \setminus D$
such that every point $z\in Z$ has a neighborhood $W_z$ for which
\begin{align}\label{eq:large_SA2}
\{{\alpha}<{\kappa}: U_{\alpha}\cap W_z\ne \emptyset\}
\text{ is bounded in ${\kappa}$}.
\end{align}
\end{theorem}
\begin{proof}[Proof of theorem \ref{tm:large_SA}]
Let $\mbb E_{\kappa}$ be the product space $L({\kappa})^{\kappa}$,
where $L({\kappa})$ denotes $\kappa$ with the usual ordinal topology.
For any ${\alpha}<{\kappa}$ we let
\begin{align*}
K_{\alpha}=\{p\in \mbb E_{\kappa}:\ &
p({\zeta})\le {\alpha} \text{ for all ${\zeta}<{\alpha}$},
\\&p({\alpha})=1 \text{ and } p({\beta})=0
\text{ for all }
{\beta}\in {\kappa}\setminus ({\alpha}+1)\}.
\end{align*}
Then the $K_{\alpha}$ are pairwise disjoint compact subsets of $\mbb E_{\kappa}$.
Put $K_\kappa = \bigcup_{\alpha < \kappa}K_\alpha$.
Then clearly $\mbb E_{\kappa}\setminus K_{\kappa}$ is dense
in $\mbb E_{\kappa}$,
$\Delta(\mbb E_{\kappa}\setminus K_{\kappa})=2^{\kappa}$,
and $w(\mbb E_{\kappa})={\kappa}$, hence there is a dense set
$Y\subset \mbb E_{\kappa}\setminus K_{\kappa}$ such that
$|Y|=\Delta(Y)={\kappa}$.
We may then apply lemma \ref{lm:large_SA} to the space $X=Y\cup K_{\kappa}$ to obtain the space
$Z=\<Y\cup {\kappa},\varrho\>$ with the closed discrete set $D={\kappa}$.
Clearly, we have $|D|=|Z|=\Delta(Z)=d(Z)={\kappa}$ and property \eqref{eq:large_SA0} holds.
Let us now define
$$U_{\alpha}=\{p\in Y: p(0)={\alpha}+1\}$$ for
${\alpha}<{\kappa}$.
Since the singleton $\{{\alpha}+1\}$ is open in $L({\kappa})$, we have
$U_{\alpha} \in \varrho$, and clearly $\alpha \ne \beta$ implies $U_\alpha \cap U_\beta = \emptyset$.
Now, if $y\in Y$ then
$$W_y=\{p\in Y: p(0)\le y(0)\} \in \varrho$$
is a neighborhood of $y$ that witnesses \eqref{eq:large_SA2}
because $W_y\cap U_{\beta}=\emptyset$
for ${\beta}\ge y(0)$.
If, on the other hand, $\alpha \in {\kappa}$, then $G_\alpha = \{p \in \mathbb{E}_\kappa : p(0) \le \alpha + 1\}$
is an open subset of $\mathbb{E}_\kappa$ with $K_\alpha \subset G_\alpha$, hence
\begin{align*}
W_\alpha=\{\alpha\}\cup \{p\in Y: p(0)\le {\alpha}+1\} \in \varrho
\end{align*}
is a neighborhood of $\alpha$ in $Z$ with
$W_\alpha \cap U_{\beta}=\emptyset$ for all ${\beta}>{\alpha}$.
\end{proof}
We say that a space $X$ is {\em locally $\kappa$} if every point of $X$
has a neighbourhood of cardinality $\le\, \kappa$.
The following easy result yields an upper bound for the cardinality of a locally $\kappa$
AU space. It will also be used in section 3 for SAU spaces.
\begin{theorem}\label{tm:bound_loc_countable_SA}
Any locally $\kappa$ space $X$ contains an infinite clopen subset
$Y$ of cardinality $|Y| \le 2^{2^\kappa}$.
\end{theorem}
\begin{proof}
Since $X$ is Hausdorff, we have $|\overline A|\le 2^{2^\kappa}$ for all $A\in \br X;{\le \kappa};$.
We may fix for every point $p\in X$ a neighborhood $U_p$ of size $\le\, \kappa$.
A very simple closure procedure
then yields an infinite subset $Y\subset X$ of cardinality $\le 2^{2^\kappa}$
such that
\begin{enumerate}[(a)]
\item $U_p\subset Y$ for all $p \in Y$,
\item $\overline{A}\subset Y$ for all $A\in \br Y;{\le \kappa};$.
\end{enumerate}
Then (a) implies that $Y$ is open and (b) implies that $Y$ is closed because
$t(X) \le \kappa$.
\end{proof}
It is immediate from theorem \ref{tm:bound_loc_countable_SA} that any space which is locally $\kappa$
and connected, in particular AU, has cardinality $\le 2^{2^\kappa}$.
The following result implies that this upper bound is sharp: For every $\kappa$ there is a locally
$\kappa$ AU space of cardinality $ 2^{2^\kappa}$.
\begin{theorem}\label{tm:AU2}
For every infinite cardinal ${\kappa}$ there is a locally $\kappa$ space $X$
with a closed discrete subset $D$ such that
\begin{enumerate}[(i)]
\item $|X|=2^{2^{\kappa}}$ and $d(X)=|D|={\kappa}$,
\item $|D\setminus A|<{\kappa}$ holds
for any $A\in \operatorname{RC}^+(X)$.
\end{enumerate}
In particular, then $\bigcap \mc A\ne \emptyset$ whenever
$\mc A\in \br \operatorname{RC}^+(X);<cf({\kappa});$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{tm:AU2} ]
By theorem \ref{tm:large_SA}
there is a space $Z$ with a closed discrete $D\subset Z$ such that
$|D|=|Z|=\Delta(Z)=d(Z)={\kappa}$ and
\begin{align*}
\text{$|D\setminus A|<{\kappa}$ for all
$ A\in \operatorname{RC}^+(Z)$,}
\end{align*}
moreover there are pairwise disjoint open sets
$\{U_{\alpha}:{\alpha}<{\kappa}\}$ in $Z \setminus D$ so that
every point $z\in Z$ has a neighborhood $W_z$ which meets $U_\alpha$
only for boundedly many $\alpha < \kappa$.
The underlying set of our space is
\begin{align*}
X=Z\cup S({\kappa}),
\end{align*}
where $S({\kappa})$ is the set of all uniform ultrafilters on
${\kappa}$. (Of course, we may assume that $Z \cap S({\kappa}) = \emptyset$.)
So we have $|X| = |S(\kappa)| = 2^{2^{\kappa}}$.
Next we define the topology $\tau$ on $X$ with the following stipulations:
\begin{enumerate}[(i)]
\item $Z\in \tau$ and the subspace topology
of $Z$ inherited from $X$ is the original topology of $Z$;
\item for any uniform ultrafilter $x\in S({\kappa})$
the family
\begin{align*}
\mc U_x=\Big\{\{x\}\cup\bigcup\nolimits_{{\alpha}\in a} U_{\alpha}\ :\
a\in x\Big\}
\end{align*}
is a $\tau$-neighborhood base of $x$.
\end{enumerate}
It is obvious from this definition that $Z$ is a dense open subspace of $X$, moreover
$D$ remains a closed discrete set in $X$. This immediately implies (i), while (ii)
follows because if $A \in \operatorname{RC}^+(X)$ then $A \cap Z \in \operatorname{RC}^+(Z)$. The only thing
that is left to show is the Hausdorffness of $X$.
Since $Z$ is Hausdorff and open in $X$, it is obvious that any two points of $Z$
can be separated in $X$. If $\{x,y\} \in [S(\kappa)]^2$ then there are $a \in x$ and
$b \in y$ with $a \cap b = \emptyset$, hence
\begin{align*}
\{x\}\cup\bigcup\nolimits_{\alpha\in a} U_\alpha
\text{\quad and \quad}
\{y\}\cup\bigcup\nolimits_{\beta\in b} U_\beta
\end{align*}
are disjoint neighborhoods of $x$ and $y$ in $X$.
Finally, assume that $z\in Z$ and $x \in S({\kappa})$.
Then, by \eqref{eq:large_SA2},
there is ${\xi}\in {\kappa}$ such that
$W_z\cap U_{\zeta}=\emptyset$ for all ${\xi}\le {\zeta}<{\kappa}$.
But we may pick $a\in x$ with $a\cap {\xi}=\emptyset$, and
then $W_z$ and
\begin{align*}
\{x\}\cup\bigcup\nolimits_{{\zeta}\in a} U_\zeta
\end{align*}
are disjoint neighborhoods of $z$ and $x$.
\end{proof}
Since in the above construction $S({\kappa})$ is clearly closed discrete in
$X$, we actually get the following result.
\begin{corollary}
Given $\kappa \ge \omega$, for every cardinal $\lambda$ with $\kappa \le \lambda \le 2^{2^{\kappa}}$ there is a locally $\kappa$
AU space of cardinality $\lambda$. In particular, for every infinite cardinal $\lambda \le 2^\mathfrak{c}$
there is a locally countable AU space of cardinality $\lambda$.
\end{corollary}
\section{Existence of strongly anti-Urysohn spaces}
We will see later in this section that, at least consistently, strongly anti-Urysohn (SAU) spaces
exist. However, in strong contrast to the case of AU spaces, there are both lower and upper bounds
for their possible cardinalities. Before establishing these bounds, in the following theorem we
collect some simple properties of SAU spaces.
\begin{theorem}\label{tm:sAU-basic}
Let $X$ be any SAU space. Then
\begin{enumerate}[(1)]
\item $X$ is countably compact;
\item every compact subset of $X$ is finite;
\item any infinite closed set $F \subset X$ is uncountable; hence
$A\in \br X;{\omega};$ implies $|A'|>{\omega}$;\smallskip
\item $\operatorname{\mc F}^+(X)$ is closed under countable intersections.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) If $X$ were not countably compact, it would contain an infinite closed discrete set, and such a set breaks into two disjoint infinite closed sets, contradicting the SAU property.
\smallskip\noindent (2)
Assume that $F\subset X$ is compact and let $p_0$ and $p_1$ be two different accumulation points
of $X$ with disjoint neighborhoods $U_0$ and $U_1$, respectively.
Then the compact set $F\setminus U_0$ and the point $p_0$ have disjoint open
neighbourhoods: $F\setminus U_0\subset V$, $p_0\in W$ and $V\cap W=\emptyset$.
Then $F\setminus U_0$ and $\overline {W}$
are disjoint closed sets. But $p_0\in X'$ implies that
$\overline {W}$ is infinite, so $F\setminus U_0$ is finite because
$X$ is SAU.
A symmetrical argument yields that $F\setminus U_1$ is also finite,
hence $F=(F\setminus U_0)\cup (F\setminus U_1)$ implies that $F$ is finite as well.
\smallskip\noindent (3)
If $F\subset X$ is countable and closed then $F$ is compact because
$X$ is countably compact by (1).
Consequently, $F$ is finite by (2). The second part now follows from $\overline{A} = A \cup A'$.
\smallskip\noindent (4)
First we show that
\begin{align}\label{eq:finint}
\text{$\operatorname{\mc F}^+(X)$ is closed under finite intersections.}
\end{align}
Otherwise we could
choose $A,B\in \br X;{\omega};$ such that $n=|\overline A\cap \overline B| < \omega$ is minimal.
Since $X$ is SAU, we can
pick $p\in \overline A\cap \overline B\ne \emptyset$.
Then $A\cup \{p\}$ is not compact by (2), hence
there is an open set $U\ni p$ such that $A\setminus U$ is infinite.
But $p\notin \overline{A\setminus U}$,
so
\begin{align*}
\overline{A\setminus U}\cap \overline{B}\subset (\overline{A}\cap \overline{B})
\setminus \{p\},
\end{align*}
consequently $|\overline{A\setminus U}\cap \overline{B}| < n$,
which contradicts the choice of $n$.
So we proved \eqref{eq:finint}.
Now assume that $\{F_n:n\in {\omega}\}\subset \operatorname{\mc F}^+(X)$.
Using \eqref{eq:finint} and (3)
we can pick by recursion points $$p_n\in \bigcap_{m\le n}F_m\setminus \{p_i:i<n\}$$ for $n<{\omega}$, and put
$P=\{p_n:n<{\omega}\}$. Then, by (3), $P'$ is infinite, in fact even uncountable,
and we have $P'\subset \bigcap_{n\in {\omega}} F_n$.
\end{proof}
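Let us note two immediate consequences of (1) and (2): a SAU space is countably compact but contains no infinite compact subsets, hence, in particular, no nontrivial convergent sequences.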
All our (consistent) examples of SAU spaces that we shall construct below have cardinality $\le \mathfrak{c}$.
We do not know if SAU spaces of size $> \mathfrak{c}$ can exist but we have the following related result.
\begin{theorem}
Every SAU space $X$ has a
SAU subspace of size $\le \mathfrak{c}$.
\end{theorem}
\begin{proof}
We may fix a function $\,\varphi:\br X;{\omega};\times \br X;{\omega};\to X$
such that $\varphi(A,B)\in \overline A\cap \overline B$ for all
$\langle A,B \rangle \in \br X;{\omega};\times \br X;{\omega};$.
Let us also fix $Y_0\in \br X;{\omega};$. By theorem \ref{tm:sAU-basic}(3), $Y_0$ has two
accumulation points $p$ and $q$.
Since $\mf c^{\omega}=\mf c$, there is a set $Y$ with
$Y_0 \cup \{p,q\} \subset Y\in \br X;\le \mf c;$ which is $\varphi$-closed, i.e. $\varphi(A,B)\in Y$ holds
whenever $A,B\in \br Y;{\omega};$.
But then any two infinite closed subsets of $Y$ intersect,
moreover, $|Y'|>1$ because $p,q \in Y_0' \subset Y'$, hence
$Y$ is SAU.
\end{proof}
Now we turn to giving the lower and upper bound for the possible cardinalities of SAU spaces.
To do that we first prove two lemmas. The first one is purely combinatorial. To formulate it
we recall that a set $A$ is said to {\em split} another set $B$
iff both $B\cap A$ and $B\setminus A$ are infinite.
Also, a family of sets $\mc A$ is called a {\em splitting family for $X$} if
every $B\in \br X;{\omega};$ is split by some member of $\mc A$.
\begin{lemma} \label{lm:comb}
If $\mc A$ is a splitting family for $X$
then $|X|\le 2^{|\mc A|}$.
\end{lemma}
\begin{proof}
For every $x\in X$ let us put $\mc A(x)=\{A\in \mc A:x\in A\}$.
Then for each subfamily $\mc B\subset \mc A$ the set $S = \{x \in X :\mc A(x)=\mc B\}$ is finite
because no element of $\mc A$ can split $S$.
Thus the map $x\mapsto \mc A(x)$ is finite-to-one and hence we have $|X|\le 2^{|\mc A|}$.
\end{proof}
The next lemma involves the well-known splitting number $\mathfrak{s}$ which is defined as
the smallest cardinality of a splitting family for $\omega$.
\begin{lemma}\label{lm:small-w}
If $X$ is any space of weight $w(X)<\mf s$ then every set $A\in \br X;{\omega};$
has an infinite subset $B$ with at most one accumulation point in $X$, i.e. such that $|B'|\le 1$.
So, if in addition, $X$ is countably compact then actually $X$ is sequentially compact.
\end{lemma}
\begin{proof}
Let $\mc U$ be a base of $X$ with $|\mc U| <\mf s$. This implies that there is
$B\in \br A;{\omega};$ such that no element of $\mc U$
splits $B$. But then the Hausdorff property of $X$ clearly implies $|B'|\le 1$. Indeed, if
$x$ and $y$ were distinct accumulation points of $B$ with disjoint neighbourhoods
$U, V \in \mc U$, then both $U$ and $V$ would split $B$.
The second part follows because the countable compactness of $X$ implies $|B'| \ge 1$
for all $B\in \br X;{\omega};$.
\end{proof}
We are now ready to give the promised lower and upper bounds for the size of a SAU space.
\begin{theorem}\label{tm:bounds}
If $X$ is any SAU space then $$\mf s\le |X|\le 2^{2^{\mf c}}.$$
\end{theorem}
\begin{proof}
Assume that $|X|<\mf s$. Then there is a coarser Hausdorff topology
$\varrho$ on $X$ of weight
$w(X,\varrho)\le |X|$ (e.g. the topology generated by fixing, for each pair of distinct points of $X$, a pair of disjoint open sets separating them), hence by lemma \ref{lm:small-w} there is
$B\in \br X;{\omega};$ such that $B$ has at most one accumulation point in
$(X,\varrho)$. Then $B$ has at most one accumulation point in the finer topology of $X$, as well.
But this implies that $X$ is not SAU by Theorem \ref{tm:sAU-basic}(3).
To verify the upper bound of $|X|$,
consider any $A\in \br X;{\omega};$ and put $F=\overline A$.
Then, by Pospi\v sil's theorem, we have $|F|\le 2^{\mf c}$, hence
there is a family $\mathcal{A}$ of relatively open subsets of $F$
that $T_2$-separates the points of $F$ and $|\mc A|\le |F| \le 2^{\mf c}$.
By theorem \ref{tm:sAU-basic}(3) and (4),
for every $B\in \br X;{\omega};$ we have $F \cap B' \in \operatorname{\mc F}^+(X)$,
hence $B$ has at least two, in fact uncountably many, accumulation points in $F$.
But then some element of
$\mc A$ splits $B$, i.e. $\mc A$ is a splitting family for $X$.
By lemma \ref{lm:comb}, this implies
\begin{align*}
|X|\le 2^{|\mc A|}\le 2^{|F|}\le 2^{2^{\mf c}},
\end{align*}
which completes the proof.
\end{proof}
Of course, the previous results are only of interest if SAU spaces, at least consistently, exist.
So now we turn to proving that they do. Our first construction will make use of the
{\em reaping number} $\mathfrak{r}$ whose definition we recall next.
If $\mc A$ is any family of infinite sets then a set $S$ is said to reap $\mc A$
iff $S$ splits every member of $\mc A$. Now, $\mathfrak{r}$ is the minimum cardinality
of a family $\mc A \subset [\omega]^\omega$ such that no $S \in [\omega]^\omega$ reaps $\mc A$.
So the assumption $\mathfrak{r} = \mathfrak{c}$, that will figure in our construction of a SAU space given
below, is equivalent to the statement that every subfamily of $[\omega]^\omega$
of size $< \mathfrak{c}$ can be reaped by a member of $[\omega]^\omega$.
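For orientation, we recall the standard diagonalization showing that $\mathfrak r$ is uncountable. Given $\{A_n : n<\omega\}\subset [\omega]^\omega$, recursively pick pairwise distinct numbers $a_{n,k},\, b_{n,k}\in A_n$ for all $n,k<\omega$ (possible, as each $A_n$ is infinite and only finitely many numbers have been used at any stage), and put $S=\{a_{n,k} : n,k<\omega\}$. Then $S\cap A_n\supset \{a_{n,k}:k<\omega\}$ and $A_n\setminus S\supset \{b_{n,k}:k<\omega\}$ are both infinite, so $S$ reaps $\{A_n:n<\omega\}$.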
To make the inductive construction of our SAU space easier to digest,
we prove first the following lemma.
\begin{lemma}\label{lm:one-step}
Assume that $\<X,\tau\>$ is a locally countable space of weight $w(X)<\mf r$,
moreover we are given
$I,J\in \br X;{\omega};$ and a family $\{\<x_i,A_i\>:i<{\kappa}\} \subset X\times
\br X;{\omega};$, where $\kappa < \mf r$ and $x_i\in ({A_i}')^\tau$ for all $i < \kappa$.
Then, for any fixed $p \notin X$, there is a locally countable Hausdorff topology $\rho$ on $X\cup \{p\}$
such that
\begin{enumerate}[(i)]
\item $\tau\subset \rho$ and $w(\rho) \le w(\tau)$,
\smallskip
\item $p\in (I')^\rho \cap (J')^\rho$,
\smallskip
\item $x_i \in {(A_i')}^\rho$ for all $i<{\kappa}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\<X,\tau\>$ is locally countable, there is a countable
$U\in \tau$ with $I\cup J\subset U$.
We fix an enumeration $U=\{u_n:n\in {\omega}\}$ and
a base $\mc B$ of $X$ of cardinality $w(\tau)$.
Next we consider the family
\begin{align}\notag
\mc C_0 = \big(\{I,J\}\cup \big\{B\cap U: B\in \mc B \big\}
\cup
\big\{A_i\cap B\cap U : i<{\kappa},\,B\in \mc B \big\}\big) \cap [U]^\omega.
\end{align}
Clearly, $|\mc C_0| < \mf r$, hence there is $D_0 \in [U \setminus \{u_0\}]^\omega$ that reaps $\mc C_0$.
We continue this by recursion on $n\in {\omega}$: If $\mc C_n \in \big[[U]^\omega \big]^{< \mf r}$ and
$D_n \in [U \setminus \{u_n\}]^\omega$ reaping $\mc C_n$ have been defined, then we put
$$\mc C_{n+1}=\mc C_n\cup \{C\cap D_n\,,\, C\setminus D_n: C\in \mc C_n \}\,.$$
Then $\mc C_{n+1} \subset [U]^\omega$ and $|\mc C_{n+1}| \le 3\,|\mc C_n| < \mf r$,
hence we may choose $D_{n+1} \in [U \setminus \{u_{n+1}\}]^\omega$
that reaps $\,\mc C_{n+1}$.
Our topology $\rho$ on $X\cup\{p\}$ is generated by the family
\begin{align}
\tau\cup \big\{X\setminus D_n: n\in {\omega} \big\} \cup
\big\{\,\{p\} \cup D_n \,: n\in {\omega}\big\}.
\end{align}
Then (i) and the local countability of $\rho$ are clear.
To see that $\rho$ is Hausdorff it suffices to show that any $x \in X$ and $p$
have disjoint $\rho$-neighbourhoods. But if $x = u_n$ then $X \setminus D_n$ and
$\{p\} \cup D_n$ work,
while for $x \notin U$ any $X \setminus D_n$ and $\{p\} \cup D_n$ will work.
To check (ii), by symmetry, it suffices to show that $p\in (I')^\rho$.
This is clearly equivalent to the statement
\begin{align}
\text{ $I\cap \bigcap_{m<n}D_m$ is infinite for all $n < \omega$,}
\end{align}
which we can prove
by induction on $n$.
If $n=0$, then
$I\cap \bigcap_{m<n}D_m=I$
is infinite by assumption.
If
\begin{align}
\text{$C=I\cap \bigcap_{m<n}D_m$ is infinite}
\end{align}
then $C\in \mc C_n$, hence $C\cap D_n = I\cap \bigcap_{m<n+1}D_m$
is infinite as well, because $D_n$ splits $C$.
This completes the proof of (ii).
For any $\varepsilon\in Fn({\omega},2)$ let us put
\begin{align*}
D_\varepsilon =\bigcap_{\varepsilon(n)=1} D_n
\cap \bigcap_{\varepsilon(n)=0} (X\setminus D_n).
\end{align*}
It is easy to prove by induction on $|\varepsilon|$ that for every $\varepsilon\in Fn({\omega},2)$
and for every $C \in \mc C_0$ we have $|C \cap D_\varepsilon| = \omega$.
Now, to check (iii), fix $i<{\kappa}$.
If $x_i\in {(A_i\cap U)'}^\tau$, which certainly holds if $x_i \in U$, then for any $B \in \mc B$
with $x_i\in B$ we have $A_i\cap U\cap B \in \mc C_0$
and hence, by the above,
$A_i\cap U\cap B\cap D_\varepsilon$ is infinite
for each $\varepsilon\in Fn({\omega},2)$.
But this clearly implies that $x_i \in (A_i \cap U)'^\rho \subset (A'_i)^\rho$.
If, on the other hand, we have $x_i \notin {(A_i\cap U)'}^\tau$, then
$x_i\in {(A_i \setminus U)'}^\tau$. But in this case $x_i \notin U$ and then the obvious fact
that $\tau$ and $\rho$ coincide on $X \setminus U$ trivially implies that
$x_i\in {(A_i \setminus U)'}^\rho \subset (A'_i)^\rho$.
Thus we have verified (iii) and completed the proof of the lemma.
\end{proof}
We are now ready to formulate and prove our first existence result concerning
SAU spaces.
\begin{theorem}\label{tm:sAU-lc-cont}
If $\mf r=\mf c$ then there is a locally countable, separable, and crowded
SAU space of cardinality $\mf c$.
\end{theorem}
\begin{proof}[Proof of theorem \ref{tm:sAU-lc-cont}]
The underlying set of our space will be $\mbb Q\cup \mf c$, where $\mathbb{Q}$ is the set of rational numbers.
Our aim is to achieve that the closures of any two members of $[\mbb Q\cup \mf c]^\omega$
intersect, so we fix an enumeration of all these pairs:
\begin{align*}
\big\{\{I_{\zeta},J_{\zeta}\}:{\zeta}< \mathfrak{c}\big\}= \big[[\mbb Q\cup \mf c]^\omega \big]^2\,,
\end{align*}
where $I_{\zeta}\cup J_{\zeta}\subset \mbb Q\cup {\zeta}$ for all ${\zeta}<\mf c$.
Then, by transfinite recursion on $\zeta \le \mf c$, we define
locally countable Hausdorff topologies
$\tau_{\zeta}$ on $X_{\zeta}=\mbb Q\cup {\zeta}$ as follows:
\begin{enumerate}[(1)]
\item $\tau_0$
is the usual topology of $\mbb Q$;
\smallskip
\item if ${\zeta}<{\nu}\le\mf c$ then $w(\tau_\zeta) < \mf r$, moreover
\begin{align*}
\tau_{\zeta}\subset \tau_{\nu}\text{ and }
{\zeta}\in (I_{\zeta}')^{\tau_{\nu}}\cap
(J_{\zeta}')^{\tau_{\nu}};
\end{align*}
\smallskip
\item if ${\nu}\le \mf c$ is a limit ordinal then
$\tau_{\nu}$ is generated by
$\bigcup_{\zeta<\nu} \tau_\zeta$ on $X_\nu$;
\smallskip
\item for every ${\zeta} < \mf c$ we obtain $\tau_{\zeta + 1}$ by applying Lemma \ref{lm:one-step}
to the space $\<X_{\zeta},\tau_{\zeta}\>$ with $p={\zeta}$, the pair
$\{I_{\zeta}, J_{\zeta}\}$,
and the family
\begin{align*}
\big\{\<q,\mbb Q\setminus \{q\}\>:q\in \mbb Q\big\}\cup
\big\{\<{\xi}, I_{\xi}\> : {\xi}<{\zeta}\big\} \cup \big\{
\<{\xi}, J_{\xi}\> :{\xi}<{\zeta}\big\}.
\end{align*}
\end{enumerate}
We claim that the space $\<X_{\mf c},\,\tau_{\mf c}\>$ is as required. Indeed,
local countability and Hausdorffness is built in and,
using (4), one may easily check by transfinite induction that
$\mbb Q$ is both dense and crowded in each
$\<X_{\zeta}, \tau_\zeta\>$, hence $\<X_{\mf c},\,\tau_{\mf c}\>$ is separable and crowded.
Finally $\langle X_{\mf c}, \tau_{\mf c} \rangle$ is SAU
because for any $I,J\in \br X_{\mf c};{\omega};$ there is ${\zeta}<\mf c$
such that $\{I,J\} = \{I_\zeta , J_{\zeta}\}$, and so
${\zeta}\in (I')^{\tau_{\mf c}}\cap (J')^{\tau_{\mf c}}$, hence $\overline{I}\cap \overline{J}\ne \emptyset$.
\end{proof}
Now that we know that locally countable SAU spaces may consistently exist,
it makes sense to remark that they certainly cannot have cardinality $> 2^\mathfrak{c}$.
Indeed, this is an immediate consequence of theorem \ref{tm:bound_loc_countable_SA}
because a clopen subset of a SAU space must have finite complement.
Although we do not know if this upper bound is sharp, at least we know that it is
smaller than the upper bound for all SAU spaces given by theorem \ref{tm:bounds}.
We now turn to another method of constructing consistent examples of SAU spaces.
Unlike the construction in theorem \ref{tm:sAU-lc-cont}, this will allow us
to produce simultaneously SAU spaces of many different sizes.
These constructions will make use of certain (consistent) combinatorial principles that we
formulate below.
\begin{definition}\label{df:star}
Let ${\kappa} \ge \omega$ and ${\mu}$ be cardinals, where ${\mu}=1$ or ${\mu}$ is infinite.
Then $\circledast_{\kappa,\mu}$ is the following statement:
There is a sequence
\begin{align*}
\<\<A^0_{\alpha},A^1_{\alpha}\>:{\alpha} \in {\kappa}\>
\end{align*}
such that
\begin{enumerate}[(A)]
\item each $\<A^0_{\alpha},A^1_{\alpha}\>$ is a partition of ${\alpha}\times {\mu}$,
\item
for every $S\in \br {\kappa}\times {\mu};{\omega};$ there is
${\beta} <{\kappa}$ such that $\<\<A^0_{\alpha},A^1_{\alpha}\> : {\alpha} \in {\kappa} \setminus \beta\>$
is dyadic on $S$, i.e.
\begin{align*}
\big|S \cap \bigcap \{A^{\varepsilon({\zeta})}_{\zeta} : {\zeta}\in \operatorname{dom}(\varepsilon) \}\big|={\omega},
\end{align*}
whenever $\varepsilon \in Fn({\kappa}\setminus {\beta},2)$.
\end{enumerate}
\end{definition}
In what follows, we shall write
$\,A[\varepsilon]=\bigcap \{A^{\varepsilon({\zeta})}_{\zeta} : {\zeta}\in \operatorname{dom}(\varepsilon) \}$.
Also, a sequence witnessing $\circledast_{\kappa,\mu}$ will be called
simply a $\circledast_{\kappa,\mu}$-sequence.
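To illustrate the notation: if $\varepsilon\in Fn({\kappa},2)$ has $\operatorname{dom}(\varepsilon)=\{\zeta_0,\zeta_1\}$ with $\varepsilon(\zeta_0)=0$ and $\varepsilon(\zeta_1)=1$, then $A[\varepsilon]=A^0_{\zeta_0}\cap A^1_{\zeta_1}$. Thus dyadicity on $S$ requires, in particular, that both $S\cap A^0_{\zeta}$ and $S\cap A^1_{\zeta}$ be infinite for every relevant $\zeta$, i.e. that each such pair $\<A^0_{\zeta},A^1_{\zeta}\>$ splits $S$.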
The following simple result is given just for orientation.
\begin{proposition}\label{*}
$\circledast_{\kappa,{\mu}}$ implies
$\mf s\le cf({\kappa})\le \kappa\le \mf c$.
\end{proposition}
\begin{proof}
If $\<\<A^0_{\alpha},A^1_{\alpha}\>:{\alpha}<{\kappa}\>$
is a $\circledast_{\kappa, \mu}$-sequence and $I\subset {\kappa}$ is cofinal then,
by condition \ref{df:star}(B), $\,\{A^0_{\alpha} : {\alpha}\in I\}$ is a splitting family
for $\kappa \times \mu$. Thus $\mf s\le cf({\kappa})$.
Now, assume that we had ${\kappa}>\mf c$.
Then fix $S \in [\kappa \times \mu]^\omega$ and
${\beta} <{\kappa}$ such that $\<\<A^0_{\alpha},A^1_{\alpha}\>:{\beta}\le {\alpha}<{\kappa}\>$
is dyadic on $S$. But $|{\kappa}\setminus {\beta}|={\kappa}>\mf c$,
so there are ${\beta}<{\zeta}<{\xi}<{\kappa}$ such that
$A^0_{\zeta}\cap S = A^0_{\xi}\cap S$, contradicting that
$\<\<A^0_{\alpha},A^1_{\alpha}\>:{\beta}\le {\alpha}<{\kappa}\>$
is dyadic on $S$. This proves ${\kappa}\le \mf c$.
\end{proof}
To obtain SAU spaces from $\circledast_{\kappa,{\mu}}$, we actually need
$\circledast_{\kappa,{\mu}}$-sequences with an extra property that, luckily,
we can get for free.
\begin{lemma}\label{tm:strongst}
If $\circledast_{\kappa,\mu}$ holds then
there is a $\circledast_{\kappa,{\mu}}$-sequence
\begin{align*}
\<\<A^0_{\alpha},A^1_{\alpha}\>:{\alpha}<{\kappa}\>
\end{align*}
that, in addition to \ref{df:star}(A) and \ref{df:star}(B),
satisfies condition (C) below as well:
\begin{enumerate}[(A)]
\item[(C)] every pair $\{x,y\}\in \br {\kappa}\times {\mu};2;$ is separated by
$\<A^0_{\alpha},A^1_{\alpha}\>$ for cofinally many $\alpha < \kappa$, that is
for any ${\beta}<{\kappa}$
there is $\alpha \in \kappa \setminus \beta$ such that
\begin{align*}
\big|\{x,y\}\cap A^0_{\alpha}\big|=1.
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
Let us start by fixing a $\circledast_{\kappa,{\mu}}$-sequence
\begin{align*}
\<\<B^0_{\alpha},B^1_{\alpha}\>:{\alpha}<{\kappa}\>.
\end{align*}
Case 1: ${\mu}=1$.
We fix an injective map $f:{\kappa}\times {\kappa}\times {\kappa} \to {\kappa}$
such that $\max\{\zeta,\xi\} < f(\zeta,\xi,\eta)$
and then put
\begin{align*}
A^0_{\alpha}= \left\{
\begin{array}{ll}
(B^0_{\alpha}\cup\{{\zeta}\})\setminus \{{\xi}\}&\text{if ${\alpha}=f({\zeta},{\xi},\eta)$
for some ${\zeta},{\xi},\eta<{\kappa}$,}\\\\
B^0_{\alpha}&\text{otherwise,}
\end{array}
\right .
\end{align*}
and
\begin{align*}
A^1_{\alpha}=({\alpha}\times \{0\})\setminus A^0_{\alpha}.
\end{align*}
Then $|A^i_{\alpha}\bigtriangleup B^i_{\alpha}|\le 2$ implies that property (B) is preserved,
and (C) holds because for any $\langle \zeta,\xi \rangle \in \kappa \times \kappa$ there
are cofinally many $\alpha < \kappa$ with ${\alpha}=f({\zeta},{\xi},\eta)$.
Case 2: ${\mu}\ge {\omega}$.
For any
$a,b\in {\kappa}\times {\mu}$ we define
\begin{align*}
a\equiv b\ \,\Leftrightarrow\,
\exists\, {\gamma < \kappa\,}\, \forall\, {\zeta}\in {\kappa}\setminus {\gamma}
\ ( a\in B^0_{\zeta} \Longleftrightarrow
b\in B^0_{\zeta} )\,.
\end{align*}
Then $\equiv$ is clearly an equivalence relation and property (B) implies that
every equivalence class of $\equiv$ is finite as any
infinite subset of ${\kappa}\times {\mu}$ is split by $B^0_{\alpha}$ eventually.
This clearly implies that
if $X\subset {\kappa}\times {\mu}$
contains exactly one element from each $\equiv$-equivalence class
then $ |X\cap (\{{\alpha}\}\times {\mu})|={\mu}$
for all ${\alpha}<{\kappa}$.
Pick such an $X$ and, for every $\alpha < \kappa$, fix a bijection
$$f_\alpha : \{\alpha\} \times {\mu}\to X\cap (\{{\alpha}\}\times {\mu}).$$
Now it is obvious that if we put $f=\bigcup_{\alpha<\kappa}f_\alpha$ and $A^i_{\alpha}= f^{-1}(X\cap B^i_{\alpha})$,
then the sequence $\<\<A^0_{\alpha},A^1_{\alpha}\>:{\alpha}<{\kappa}\>$
has all the three properties (A), (B), and (C).
\end{proof}
A $\circledast_{\kappa, \mu}$-sequence with the additional property (C)
will be called a strong $\circledast_{\kappa, \mu}$-sequence.
These, as we shall now show, yield us spaces with very strong SAU properties.
\begin{theorem}\label{tm:star}
If $\<\<A^0_{\alpha},A^1_{\alpha}\>:{\alpha}<{\kappa}\>$ is a
strong $\circledast_{\kappa, \mu}$-sequence then
there is a separable and crowded Hausdorff topology $\tau$ on
${\kappa}\times {\mu}$ such that
for every infinite set $S \subset {\kappa}\times{\mu}$ there is ${\alpha}<{\kappa}$
for which $$\overline{S} \supset (\kappa \setminus {\alpha})\times {\mu} .$$
In particular, if ${\mu}<{\kappa}$, then
\begin{align}\label{eq:largeF1}
\text{ $|(\kappa \times \mu) \setminus \overline S|< \kappa$ for all infinite $S\subset \kappa \times \mu\,$.}
\end{align}
\end{theorem}
\begin{proof}
Let $\rho$ be a
topology on $Q = \omega \times \{0\}$ such that $\<Q,\rho\>$ is homeomorphic to
$\mbb Q$ with its usual topology and fix a countable base $\mc B$ of
$\<Q,\rho\>$. Since $cf({\kappa})\ge \mf s>{\omega}$ by proposition \ref{*}, we can pick
${\beta} <{\kappa}$ such that, for all $B\in \mc B$,
$\<\<A^0_{\alpha},A^1_{\alpha}\>:{\beta}\le {\alpha}<{\kappa}\>$
is dyadic on $B$.
Our topology $\tau$ is generated by
\begin{align}
\mc B \cup
\{A^0_{\alpha} : {\alpha}\in {\kappa}\setminus {\beta}\}
\cup
\{A^1_{\alpha} : {\alpha}\in {\kappa}\setminus {\beta}\}.
\end{align}
Then $\tau$ is Hausdorff by condition (C) of lemma \ref{tm:strongst}.
The choice of $\beta$ implies that $Q$ is both dense and crowded with
respect to $\tau$, hence $\tau$ is separable and crowded.
For every $S\in \br X;{\omega};$ there is
${\alpha} \in \kappa \setminus \beta$ such that
$\langle \<A^0_{\zeta}, A^1_{\zeta} \rangle : {\zeta}\in {\kappa}\setminus {\alpha}\>$
is dyadic on $S$. This clearly implies $({\kappa}\setminus {\alpha})\times {\mu}\subset \overline S$.
\end{proof}
The above space $X = \langle \kappa \times \mu , \tau \rangle$ trivially has the following very strong
SAU property: any family $\mathcal{A} \in [\operatorname{\mc F}^+ (X) ]^{< cf(\kappa)}$ has non-empty intersection.
Now we turn to examining the consistency of the principles $\circledast_{\kappa, \mu}$.
As it will turn out, many instances of them hold true in a generic extension obtained by
adding a lot of Cohen reals to an arbitrary ground model. To see this, we need the following
easy but technical lemma. First we fix some notation:
\begin{definition}\label{def:Cohen}
Assume that the cardinals ${\kappa},\,{\mu}$ and ${\alpha}\in {\kappa}$ are given.
Let $\mc G$ be $Fn(({\kappa}\setminus {\alpha})\times {\kappa}\times {\mu},2)$-generic over $V$.
Then in $V[\mc G]$ we put $g=\bigcup \mc G$ and define the sequence
\begin{align*}
\mc A_{\alpha}^{\mc G}=\<\<A^0_{\beta},A^1_{\beta} \>: {\alpha}\le {\beta}<{\kappa}\>
\end{align*}
by
\begin{align*}
A^i_{\beta}=\{\langle \xi,\zeta \rangle \in {\beta}\times {\mu}: g({\beta},\xi,\zeta)=i\}.
\end{align*}
\end{definition}
\begin{lemma}\label{lm:cohen}
Assume that $\mc G$ is
$Fn(({\kappa}\setminus {\alpha})\times {\kappa}\times {\mu},2)$-generic over $V$.
Then $\mc A_{\alpha}^{\mc G}$ is dyadic on each
$S\in V \cap \br {\alpha}\times {\mu};{\omega};$.
\end{lemma}
\begin{proof}
Fix $S\in V \cap \br {\alpha}\times {\mu};{\omega};$,
$\varepsilon \in Fn({\kappa}\setminus {\alpha},2)$, and $T \in [S]^{<\omega}$.
For any condition $p\in Fn(({\kappa}\setminus {\alpha})\times {\kappa}\times {\mu},2)$
we can find $\<\xi,\eta\> \in S\setminus T$ such that
\begin{align*}
(\operatorname{dom}(\varepsilon)\times \{\<\xi,\eta\>\})\cap \operatorname{dom} (p)=\emptyset.
\end{align*}
Then we can find a condition $q\supset p$ such that for each
${\zeta}\in \operatorname{dom}(\varepsilon)$ we have
\begin{align*}
q({\zeta}, \xi, \eta)=\varepsilon({\zeta}),
\end{align*}
hence
\begin{align*}
q\Vdash \<\xi,\eta\>\in \mc A_{\alpha}^{\mc G}[\varepsilon]\cap (S\setminus T).
\end{align*}
Since $p$ and $T$
were arbitrary, this implies
\begin{align*}
1 \Vdash \mc A_{\alpha}^{\mc G}[\varepsilon]\cap S\text{ is infinite},
\end{align*}
hence, as $\varepsilon$ was also arbitrary,
\begin{align*}
1\Vdash \text{$\mc A_{\alpha}^{\mc G}$
is dyadic on $S$.}
\end{align*}
\end{proof}
\begin{theorem}\label{tm:Cohen}
If we add $\lambda > \omega$ many Cohen reals to any ground model then
in the extension $\circledast_{\omega_1,\mu}$ holds for any uncountable
$\mu \le \lambda $.
\end{theorem}
\begin{proof}
It suffices to prove that $\circledast_{\omega_1,\lambda}$ holds in the extension, since restricting each member of a $\circledast_{\omega_1,\lambda}$-sequence to the corresponding set ${\alpha}\times{\mu}$ clearly yields a $\circledast_{\omega_1,\mu}$-sequence whenever ${\mu}\le{\lambda}$.
Since $|\omega_1 \times \omega_1 \times \lambda| = \lambda$, we may assume that our generic
extension is of the form $V[G]$ where $G$ is $Fn(\omega_1 \times \omega_1 \times \lambda\,,2)$-generic
over $V$.
Now, in $V[G]$, putting $g = \bigcup G$ we define for $\beta < \omega_1$ and $i < 2$
\begin{align*}
A^i_{\beta}=\{\langle \xi,\zeta \rangle \in {\beta}\times {\lambda}: g({\beta},\xi,\zeta)=i\},
\end{align*}
and we claim that
\begin{align*}
\<\<A^0_{\beta},A^1_{\beta}\>:{\beta} \in {\omega_1}\>
\end{align*}
is a $\circledast_{\omega_1,\lambda}$-sequence.
Then (A) is obvious and to check (B) consider any $S \in [\omega_1 \times \lambda]^\omega$.
It is well known that there is some $\alpha < \omega_1$ such that
$S \in V[G \cap Fn(\alpha \times \omega_1 \times \lambda,2)]$; increasing $\alpha$ if necessary, we may also assume that $S \subset \alpha \times \lambda$, as the set of first coordinates of $S$ is countable, hence bounded in $\omega_1$. From this,
applying lemma \ref{lm:cohen} to the ground model $V[G \cap Fn(\alpha \times \omega_1 \times \lambda,2)]$,
we may immediately deduce that the tail sequence $\<\<A^0_{\beta},A^1_{\beta}\>:{\beta} \in {\omega_1} \setminus \alpha\>$
is dyadic on $S$.
Actually, it is easy to see using genericity that $\<\<A^0_{\beta},A^1_{\beta}\>:{\beta} < {\omega_1}\>$
is a strong $\circledast_{\omega_1,\lambda}$-sequence.
\end{proof}
\begin{corollary}
If we add $\lambda > \omega$ many Cohen reals to our ground model then
in the extension for every cardinal $\mu \in [\omega_1, \lambda]$
there is a SAU space of size $\mu$.
\end{corollary}
Indeed, by theorem \ref{tm:Cohen} and lemma \ref{tm:strongst}, in the extension there is a strong $\circledast_{\omega_1,\mu}$-sequence for every ${\mu}\in[\omega_1,\lambda]$, and the space provided by theorem \ref{tm:star} is then a SAU space of cardinality $|\omega_1\times{\mu}|={\mu}$.
Theorem \ref{tm:Cohen} and proposition \ref{*} imply the well-known and trivial fact
that $\mf s = \omega_1$ holds in a generic extension obtained by adding
uncountably many Cohen reals. Hence by the above
corollary it is consistent to have a gap of any possible size
between $\mf s = \omega_1$ and $\mf c$ and to have
SAU spaces of all cardinalities between $\mf s$ and $\mf c$.
Our next theorem implies the consistency of the analogous statement with $\mf s > \omega_1$.
\begin{theorem}\label{tm:sAU-size}
Assume that $GCH$ holds and ${\nu}\le {\lambda}$ are uncountable cardinals
in our ground model $\,V$ such that $\nu$ is regular and $cf(\lambda) > \omega$.
Then there is a CCC, hence cardinal and cofinality preserving,
forcing notion $P$ such that in the extension $V^P$ we have
$\mf s={\nu}$, $\mf c = \lambda$, moreover $\circledast_{{\kappa},{\mu}}$ holds whenever
${\nu}\le {\kappa}=cf({\kappa})\le {\lambda}$ and either ${\omega}\le {\mu}\le {\lambda}$
or ${\mu}=1$.
\end{theorem}
\begin{proof}
Let $P=Fn({\lambda},2)*\dot{Q}$, where $\dot{Q}$ is a name in $V^{Fn({\lambda},2)}$ for
the standard finite support iteration
\begin{align*}
\<Q_{\zeta}:{\zeta}\le {\lambda}, R_{\xi}:{\xi}<{\lambda} \>
\end{align*}
that forces $\mathfrak{p} = \nu$ in such a way that
each $R_{\xi}$ is a CCC, even $\sigma$-centered, poset of size $<{\nu}$
in $V^{Fn({\lambda},2)*Q_{\xi}}$.
So in $V^P$ we have $\mathfrak{c}={\lambda}$ and
$\mf p = {\nu}$.
\medskip
Let us now fix ${\kappa}$ and $\mu$ as indicated and check that $\circledast_{{\kappa},{\mu}}$ holds.
This will imply $\mf s={\nu}$ because on one hand $\nu =\mathfrak{p} \le \mathfrak{s}$,
while, by proposition \ref{*}, $\circledast_{{\nu},{\mu}}$ implies $\mathfrak{s} \le \nu$.
Because of $\kappa \cdot \mu \le \lambda$ the forcings
$Fn({\lambda},2)\times Fn({\kappa}\times {\kappa}\times {\mu},2)$ and $Fn({\lambda},2)$
are equivalent, hence we may assume that actually
\begin{align*}
P=Fn({\lambda},2)\times Fn({\kappa}\times {\kappa}\times {\mu},2)*\dot{Q}\,,
\end{align*}
and from now on we will work in the intermediate model
\begin{align*}
W=V^{Fn({\lambda},2)}.
\end{align*}
Now, let $\mc G$ be $Fn({\kappa}\times {\kappa}\times {\mu},2)$-generic
over $W$ and $\mc H$ be $Q$-generic over $W[\mc G]$. Our result then follows from
the following claim.
\begin{claim}
The sequence
\begin{align*}
\mc A^\mc G_0=\<\<A^0_{\alpha}, A^1_{\alpha}\>:{\alpha}<{\kappa}\>\,,
\end{align*}
defined in $W[\mc G]$ following \ref{def:Cohen} with the choice $\alpha = 0$,
is a $\circledast_{{\kappa},{\mu}}$-sequence in the final generic extension
$W[\mc G][\mc H]$.
\end{claim}
\begin{proof}[Proof of the claim]
Let us fix $S\in \br {\kappa}\times {\mu};{\omega};$ in $W[\mc G][\mc H]$.
It is easy to see that there is
a regular suborder $Q'\lessdot Q$ such that $|Q'|<{\kappa}$ and
\begin{align*}
S\in W[\mc G][\mc H']
\end{align*}
where $\mc H' = \mc H \cap Q'$.
Since ${\kappa}$ is regular, there is ${\alpha}<{\kappa}$ such that
$Q'\in W[\mc G_{\alpha}]$, where $\mc G_{\alpha} = \mc G \cap Fn({\alpha}\times {\kappa}\times {\mu},2)$.
But $W[\mc G_{\alpha}]$ contains both $Q'$
and $Fn(({\kappa}\setminus {\alpha})\times {\kappa}\times {\mu},2)$,
hence, putting $$\mc G^\alpha = \mc G \cap Fn(({\kappa}\setminus {\alpha})\times {\kappa}\times {\mu},2)\,,$$ we have
\begin{align}\label{eq:change}
W[\mc G][\mc H'] =
W[\mc G_{\alpha}][\mc G^{\alpha}][\mc H']=
W[\mc G_{\alpha}][\mc H'][\mc G^{\alpha}].
\end{align}
By lemma \ref{lm:cohen} then we have
\begin{align*}
W[\mc G][\mc H'] = W[\mc G_{\alpha}][\mc H'][\mc G^{\alpha}]\models
\text{``$\mc A^{\mc G^{\alpha}}_\alpha$ is dyadic on $S$''},
\end{align*}
while $\mc A^{\mc G^{\alpha}}_\alpha$ is clearly just the final segment of $\mc A^\mc G_0$
starting at $\alpha$. But then by $W[\mc G][\mc H'] \subset W[\mc G][\mc H]$ we also have
\begin{align*}
W[\mc G][\mc H] \models
\text{``$\mc A^{\mc G^{\alpha}}_\alpha$ is dyadic on $S$''},
\end{align*}
hence $\mc A^\mc G_0$ is indeed a $\circledast_{{\kappa},{\mu}}$-sequence in $W[\mc G][\mc H]$.
\end{proof}
This completes the proof of our theorem.
\end{proof}
We have seen in theorem \ref{tm:star} that if $\mu < \kappa$
then our construction from $\circledast_{{\kappa},{\mu}}$ yields a space
$X$ of size $\kappa$ with the very strong SAU property that every
infinite closed set $F \in \operatorname{\mc F}^+(X)$ is ``co-small'', i.e. $|X \setminus F| < |X| = \kappa$.
Our following result shows that this strong property cannot be pushed any further.
\begin{theorem}\label{tm:nospace}
If $X$ is any space then
\begin{align*}
|X|\le \sup \{|X\setminus F|^+: F\in \operatorname{\mc F}^+(X)\}.
\end{align*}
\end{theorem}
\begin{proof}
Assume, on the contrary, that
\begin{align}\label{eq:nospace}
{\kappa}=\sup \{|X\setminus F|^+: F\in \operatorname{\mc F}^+(X)\}< |X|.
\end{align}
Then every point $p\in X$ has an open neighborhood $U(p)$ with $|U(p)|<{\kappa}$. Indeed, if every $F\in \operatorname{\mc F}^+(X)$ contained $p$, then $X\setminus U$, being closed, would be finite for every open $U\ni p$, hence every point of $X\setminus \{p\}$ would be isolated, and then any infinite set containing $p$ whose complement has size $|X|$ would be an infinite closed set contradicting \eqref{eq:nospace}. So we may pick $F\in \operatorname{\mc F}^+(X)$ with $p\notin F$ and put $U(p)=X\setminus F$.
But then, by Hajnal's Set Mapping
Theorem from \cite{Ha}, there is a set $Y\in \br X;|X|;$
such that $q\notin U(p)$ for any distinct $p,q \in Y$.
Fix any $Z\subset Y$ such that $|Z|=|Y|=|X|$ and $Y \setminus Z$ is infinite.
Then for $U=\bigcup_{p\in Z}U(p)$ we have $|U| = |X|$ while $X \setminus U \in \operatorname{\mc F}^+(X)$,
contradicting (\ref{eq:nospace}).
\end{proof}
Our results below show that, consistently, the existence of a SAU space of a given size
does not imply the existence of a space of the same size in which
every infinite closed set is ``co-small''.
\begin{theorem}\label{tm:rc_not_star}
Assume that $\mf r = \mf c={\omega}_2$ and $\clubsuit$ holds.
Then SAU spaces of cardinality $\mf c$ exist but
in every space $X$ of cardinality $\mf c$ there is an infinite closed
set $F$ such that $|X \setminus F| = \mf c$.
\end{theorem}
\begin{proof}
By theorem \ref{tm:sAU-lc-cont} $\mf r=\mf c$ implies that
SAU spaces of cardinality $\mf c$ exist.
Now assume that $X=\<{\omega}_2, \tau\>$ is a space
such that $|{\omega}_2\setminus F|\le {\omega}_1$ for all $F \in \operatorname{\mc F}^+(X)$ and
fix a $\clubsuit$-sequence
$\<T_{\alpha}:{\alpha} \in Lim({\omega}_1)\>$.
Then
\begin{align*}
F=\bigcap\nolimits_{{\alpha}\in Lim({\omega}_1)} \overline {T_{\alpha}}
\end{align*}
contains a final segment of ${\omega}_2$. So we may
pick two points $x,y \in F \setminus \omega_1$ with disjoint open neighborhoods
$U$ and $V$, respectively. Clearly,
then $U\cap T_{\alpha}$ is infinite for all ${\alpha} \in Lim({\omega}_1)$, consequently
$|U\cap {\omega}_1|={\omega}_1$: if $U\cap {\omega}_1$ were countable, it would be bounded by some ${\beta}<{\omega}_1$, whereas $T_{\alpha}\cap {\beta}$ is finite for every limit ${\alpha}>{\beta}$, as $T_{\alpha}$ is a cofinal subset of ${\alpha}$ of order type ${\omega}$.
But $\<T_{\alpha}:{\alpha} \in Lim({\omega}_1)\>$ is a $\clubsuit$-sequence, hence
there is ${\alpha} \in Lim(\omega_1)$ with $T_{\alpha}\subset U$.
Thus we get $V\cap T_{\alpha}=\emptyset$,
which contradicts $y\in F \subset \overline{T_\alpha}$.
\end{proof}
The consistency of the assumptions of theorem \ref{tm:rc_not_star}
follows from a result that had been
proved by the first author back in 1983 but has never been published. So
we decided to include it here. For that we need some preparation.
For any cardinal ${\mu}$ we shall write
\begin{align*}
S^{\omega}_{\mu}=\{{\alpha}<{\mu}: cf({\alpha})={\omega}\}.
\end{align*}
We also need the following definition.
\begin{definition}
For any given set $X$ we define the forcing notion
$\Pjuh X=\<\pjuh X,\le \>$ as follows:
\begin{align*}
\pjuh X=\{f\in &Fn(X\times {\omega},\,2\, ;\,{\omega}_1) :\\&
\operatorname{dom}(f)=A\times n \text{ for some }
A\in \br X;{\le \omega}; \text{ and }
n\in {\omega}\}.
\end{align*}
For $p,q\in \pjuh X$ we let $p\le q$ iff $p\supset q$.
\end{definition}
We now present some properties of this forcing.
\begin{theorem}\label{tm:j1}
Let ${\kappa}$ be any infinite cardinal in our ground model $V$.
\begin{enumerate}[(1)]
\item $\Pjuh \kappa$ is $\mathfrak{c}^+$-CC; in fact, for any $\{p_{\alpha}:{\alpha}<\mathfrak{c}^+\}\subset \pjuh{{\kappa}}$ there is
$I\in \br {\mathfrak{c}^+};{\mathfrak{c}^+};$ such that $\,\bigcup_{{\alpha}\in K}p_{\alpha}\in \pjuh{{\kappa}}$ whenever $K\in \br I;{\omega};$.
Consequently, the forcing $\Pjuh \kappa$ preserves all cardinals $> \mathfrak{c}$.
\smallskip
\item $\mathfrak{c}$ becomes countable in $V^{\Pjuh{{\kappa}}}$, hence $(\mathfrak{c}^+)^V = (\omega_1)^{V^{\Pjuh{\kappa}}}$.
\smallskip
\item If
$\clubsuit(S^{\omega}_{\mathfrak{c}^+})$ holds in $V$ then
$\clubsuit$ holds in $V^{\Pjuh{\kappa}}$ .
\smallskip
\item If ${\kappa}={\kappa}^\mathfrak{c}$ then $\mathfrak{c}^{V^{\Pjuh{\kappa}}}={\kappa}$ and
$\,V^{\Pjuh{\kappa}}\models \, {\rm MA(\mbox{\it countable})}\,$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Assume that $\operatorname{dom}(p_{\alpha})=A_{\alpha}\times n_{\alpha}$ for
${\alpha}< \mathfrak{c}^+.$ Clearly we can find $H \in \br {\mathfrak{c}^+};{\mathfrak{c}^+};$
and $n\in {\omega}$ such that $n_{\alpha}=n$ for all ${\alpha}\in H$. A simple $\Delta$-system
and counting argument then yields $I\in \br H;\mathfrak{c}^+;$ such that the functions
$\{p_{\alpha}:{\alpha}\in I\}$
are pairwise compatible.
It is obvious then that $I$ is as required.
\smallskip
(2) Let $\mc G$ be $\Pjuh {\kappa}$-generic over $V$, then
$g=\bigcup \mc G : {\kappa}\times {\omega}\to 2$. For each $n \in \omega\,$
we define the function $d_n \in {}^{\omega}2\,$ by putting for all $i < \omega$
\begin{align*}
d_n(i)=g(i,n).
\end{align*}
It is straightforward to check that if
$r:{\omega}\to 2$ is in the ground model then
\begin{align*}
D_r=\{p\in \pjuh{{\kappa}} : \exists n\in {\omega}\, \forall i\in {\omega} \, [r(i)= p(i,n)]\}
\end{align*}
is dense in $\Pjuh{{\kappa}}$, consequently we have
\begin{align*}
V^{\Pjuh {\kappa}}\models \{d_n:n<{\omega}\} \supset {}^{\omega}2\cap V.
\end{align*}
But this clearly implies that $\mathfrak{c}$ becomes countable in $V^{\Pjuh{{\kappa}}}$.
Then $(\mathfrak{c}^+)^V = (\omega_1)^{V^{\Pjuh{\kappa}}}$ follows because $(\mathfrak{c}^+)^V$
remains a cardinal by (1).
\smallskip
(3) To aid readability, we write $\mu = \mathfrak{c}^+$ and $S = S^{\omega}_{\mu}$.
Then we fix a $\clubsuit(S)$-sequence
$\<A_{\zeta}:{\zeta} \in S \>$ in $V$. By (2) we have $S \subset Lim(\omega_1)$
in the generic extension $V^{\Pjuh{\kappa}}$.
Let us assume now that $p \Vdash \dot X \in [\mu]^\mu$ for a condition $p$ in ${\Pjuh{\kappa}}$.
We can then define in $V$ a strictly increasing map $\varphi : S \to \mu$ and for each $\zeta \in S$
a condition $p_\zeta \le p$ such that $p_\zeta \Vdash \varphi(\zeta) \in \dot X$.
Applying (1) we can find
$I\in \br {\mu};{\mu};$ such that
$\,p_K = \bigcup_{{\zeta}\in K}p_{\zeta}\in \pjuh{{\kappa}}$ holds whenever $K\in \br I;{\omega};$.
Now, $\varphi[I] = \{\varphi(\zeta) : \zeta \in I\} \in [\mu]^\mu$, hence there is some
$\eta \in S$ such that $A_\eta \subset \varphi[I]$. But then for
$K= \varphi^{-1}[A_{\eta}] \in [I]^\omega$
we have $p_K \in \pjuh{{\kappa}}$ and $p_K \le p$, moreover
we clearly have $\,p_K \Vdash A_{\eta}\subset \dot X$. Thus, no matter how we define
$A_\zeta$ for $\zeta \in Lim(\omega_1) \setminus S$, the sequence $\<A_{\zeta}:{\zeta} \in Lim(\omega_1) \>$
will be a $\clubsuit$-sequence in the generic extension $V^{\Pjuh{\kappa}}$.
\smallskip
(4) For each ${\alpha}<{\kappa}$ we define the real $\,q_{\alpha}\in {}^{\omega}2$
in $V^{\Pjuh{\kappa}}$ by stipulating
$q_{\alpha}(n)=g({\alpha},n)$ for all $n \in \omega$. Then, by genericity,
$\{q_{\alpha} : {\alpha}<{\kappa}\}$ are pairwise distinct, hence we have
$\mathfrak{c}^{ {V^{{\Pjuh{\kappa}}}}} \ge {\kappa}$.
On the other hand, by (1) $\,\Pjuh {\kappa}$ satisfies the $\mathfrak{c}^+$-chain condition,
hence the standard calculation using nice names and the condition
${\kappa}={\kappa}^\mathfrak{c}$ yield us that
$\mathfrak{c}^{ {V^{{\Pjuh{\kappa}}}}}\le {\kappa}$.
Thus indeed $\mathfrak{c}^{ {V^{{\Pjuh{\kappa}}}}} = {\kappa}$.
Now suppose that $\mathfrak{c}^V \le {\lambda}<{\kappa}$ and $\mc D=\{D_{\alpha}:{\alpha}<{\lambda}\}$
is a family of dense subsets of $Fn({\omega},2)$ in $V^{\Pjuh{\kappa}}$.
Then there is $I\in \br {\kappa};{\lambda};$
such that $\mc D\in V^{\Pjuh{I}}$.
Pick any ${\beta}\in {\kappa}\setminus I$. Then, as $\mc D\in V^{{\Pjuh{{\kappa}\setminus \{{\beta}\}}}}$ and
$${\Pjuh{{\kappa}}}\approx{\Pjuh{{\kappa}\setminus \{{\beta}\}}} \times Fn({\omega},2), $$
the real $q_{\beta}$ is $Fn({\omega},2)$-generic over $V^{\Pjuh{{\kappa}\setminus \{{\beta}\}}}$, hence the filter it determines meets every $D_{\alpha}$.
This clearly implies {\rm MA(\mbox{\it countable})}\ in the generic extension $V^{\Pjuh{\kappa}}$.
\end{proof}
In the constructible universe $L$ we have $\mathfrak{c} = \omega_1$, $(\omega_2)^{\omega_1} = \omega_2$,
moreover $\clubsuit(S^{\omega}_{\mathfrak{c}^+})$ holds. Also, it is well-known and
easy to prove that {\rm MA(\mbox{\it countable})}\ implies $\mathfrak{r} = \mathfrak{c}$. Consequently, it is an
immediate corollary of theorem \ref{tm:j1} that $L^{\Pjuh{\omega_2}}$ satisfies all the assumptions of
theorem \ref{tm:rc_not_star}.
\section{Problems}
In this section we formulate the most intriguing questions concerning SAU spaces
that are left open.
\begin{problem}
Is there a SAU space in ZFC?
\end{problem}
\begin{problem}
Is it consistent that there is a SAU space
of cardinality $>\mf c$? Is it consistent that there is a locally countable SAU space
of cardinality $>\mf c$?
\end{problem}
\begin{problem}
Does the existence of a SAU space imply the existence of a crowded SAU space?
\end{problem}
| {
"timestamp": "2015-09-07T02:08:05",
"yymm": "1509",
"arxiv_id": "1509.01420",
"language": "en",
"url": "https://arxiv.org/abs/1509.01420",
"abstract": "All spaces are assumed to be infinite Hausdorff spaces. We call a space \"anti-Urysohn\" $($AU in short$)$ iff any two non-emty regular closed sets in it intersect. We prove that$\\bullet$ for every infinite cardinal ${\\kappa}$ there is a space of size ${\\kappa}$ in which fewer than $cf({\\kappa})$ many non-empty regular closed sets always intersect;$\\bullet$ there is a locally countable AU space of size $\\kappa$ iff $\\omega \\le \\kappa \\le 2^{\\mathfrak c}$.A space with at least two non-isolated points is called \"strongly anti-Urysohn\" $($SAU in short$)$ iff any two infinite closed sets in it intersect. We prove that$\\bullet$ if $X$ is any SAU space then $ \\mathfrak s\\le |X|\\le 2^{2^{\\mathfrak c}}$;$\\bullet$ if $\\mathfrak r=\\mathfrak c$ then there is a separable, crowded, locally countable, SAU space of cardinality $\\mathfrak c$; \\item if $\\lambda > \\omega$ Cohen reals are added to any ground model then in the extension there are SAU spaces of size $\\kappa$ for all $\\kappa \\in [\\omega_1,\\lambda]$;$\\bullet$ if GCH holds and $\\kappa \\le\\lambda$ are uncountable regular cardinals then in some CCC generic extension we have $\\mathfrak s={\\kappa}$, $\\,\\mathfrak c={\\lambda}$, and for every cardinal ${\\mu}\\in [\\mathfrak s, \\mathfrak c]$ there is an SAU space of cardinality ${\\mu}$.The questions if SAU spaces exist in ZFC or if SAU spaces of cardinality $> \\mathfrak c$ can exist remain open.",
"subjects": "General Topology (math.GN); Logic (math.LO)",
"title": "Anti-Urysohn spaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517501236461,
"lm_q2_score": 0.7248702642896702,
"lm_q1q2_score": 0.7089606306011018
} |
https://arxiv.org/abs/1901.06480 | Classifying uniformly generated groups | A finite group $G$ is called *uniformly generated*, if whenever there is a (strictly ascending) chain of subgroups $1<\langle x_1\rangle<\langle x_1,x_2\rangle <\cdots<\langle x_1,x_2,\dots,x_d\rangle=G$, then $d$ is the minimal number of generators of $G$. Our main result classifies the uniformly generated groups without using the simple group classification. These groups are related to finite projective geometries by a result of Iwasawa on subgroup lattices. | \section{Introduction}\label{S1}
Let $G$ be a finite group. A chain $1=G_0<G_1<\cdots<G_d=G$ of
subgroups of $G$ is called \emph{unrefinable} if $G_i$ is maximal in
$G_{i+1}$ for each $i$. The \emph{length} of $G$, denoted $\ell(G)$,
is the maximum length of an unrefinable chain, and the \emph{depth} of
$G$, denoted $\lambda(G)$, is the minimum length of an unrefinable
chain. By~\cite{BLS2}, a nonabelian simple group $G$ satisfies
\[
\lambda(G)\le\left(1+o(1)\right)\frac{\ell(G)}{\log_2(\ell(G))}.
\]
It was shown in~\cite{CST} that
$\ell(\Alt_n)=\left\lfloor\frac{3(n-1)}{2}\right\rfloor-s_2(n)$ where $\Alt_n$
is the alternating group of degree~$n$, and
$s_p(n)=\sum_{i\ge0}n_i$ is the sum of the digits
of the base-$p$ expansion of~$n=\sum_{i\ge0}n_ip^i$.
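For example, since $10=2^3+2^1$ gives $s_2(10)=2$, this formula yields $\ell(\Alt_{10})=\left\lfloor\frac{3\cdot 9}{2}\right\rfloor-2=13-2=11$.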
In~\cite{BLS} and~\cite{BLS2} the length and depth of finite groups, and algebraic groups, are studied. These references review some of the
earlier work in this area.
Iwasawa~\cite{I} proved a striking result, namely
$\ell(G) = \lambda(G)$ if and only if $G$ is supersolvable. Inspired by
this result,~\cite{BLS} classifies the finite groups $G$ for which
$\ell(G) - \lambda(G)$ is `small'. An elementary proof of Iwasawa's result is
given in~\cite{H}*{Theorem~19.3.1}.
We say that $G$ is \emph{$d$-uniformly generated} if for all
$(x_1,x_2,\dots,x_d)\in G^d$ with
\[
1<\langle x_1\rangle<\langle x_1,x_2\rangle <\cdots<\langle x_1,x_2,\dots,x_d\rangle
\]
we have $G=\langle x_1,x_2,\dots,x_d\rangle$. In Lemma~\ref{L3}, we
will prove that $G$ is $d$-uniformly generated if and only if $d=\ell(G)$.
In particular, this implies that $G$ can be $d$-uniformly generated for
at most one choice of~$d$. The minimal number of generators of $G$
is denoted~$d(G)$.
Clearly $G=\langle x_1,x_2,\dots,x_d\rangle$ implies $d\ge d(G)$.
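For example, $d(\Sym_4)=2$, but $\Sym_4$ is not $2$-uniformly generated: the chain $1<\langle (12)\rangle<\langle (12),(34)\rangle$ is strictly ascending, yet $\langle (12),(34)\rangle$ has order $4$. The unrefinable chain $1<\langle (12)\rangle<\langle (12),(34)\rangle<\langle (12),(34),(13)(24)\rangle<\Sym_4$ shows $\ell(\Sym_4)\ge 4$; since an unrefinable chain in a group of order $24=2^3\cdot 3$ has length at most $4$, in fact $\ell(\Sym_4)=4$, so by Lemma~\ref{L3} below $\Sym_4$ is $4$-uniformly generated.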
Recall that a generating set $S$ for a group $G$ is called \emph{independent}
(sometimes called \emph{irredundant}) if
$\langle S \setminus \{s\}\rangle < G$ for all $s \in S$. Let $m(G)$ denote
the maximal size of an independent generating set for $G$. For example,
$d(\Sym_n)=2$ for $n\ge3$, and $m(\Sym_n)=n-1$ for $n\ge1$ by \cite{W}.
The finite groups
with $m(G)=d(G)$ are classified by Apisa and Klopsch in~\cite{AK}*{Theorem~1.6}.
We say that $G$ is \emph{uniformly generated} if $G$ is $d(G)$-uniformly
generated. By Lemma~\ref{L3}, $G$ is uniformly generated if and only if
$d(G) = \ell(G)$. We classify such groups in Theorem~\ref{T1}. Our first
proof of this result (see~\cite{G}*{p.\,4}) relied on the
Classification of Finite Simple
Groups (CFSG). This dependence seemed undesirable as the conclusion did not involve any
nonabelian simple groups. The proof we give appeals to Iwasawa's result,
and is completely elementary.
\begin{theorem}\label{T1}
Let $G$ be a finite group, and let $\C_n$ denote a cyclic group of order~$n$. Then $G$ is uniformly generated if and only if either $G\cong (\C_p)^d$ is elementary abelian or $G\cong (\C_p)^{d-1}\rtimes\C_q$ where $p,q$ are primes and $\C_q$ acts as a nontrivial scalar on $(\C_p)^{d-1}$.
\end{theorem}
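For example, the smallest nonabelian group of the second kind is $\Sym_3\cong\C_3\rtimes\C_2$, with $p=3$, $q=2$ and $d=2$: here $\C_2$ acts on $\C_3$ by the scalar $-1$, and indeed $d(\Sym_3)=\ell(\Sym_3)=2$.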
\begin{remark}
There are two key ideas for the proof of Theorem~\ref{T1}. First,
for any group $G$, we have $d(G) \leq m(G) \leq \ell(G)$ and
$d(G) \leq \lambda(G) \leq \ell(G)$, and second
\begin{equation}\label{E1}
\textup{if $G$ is uniformly generated, then $d(G)=\ell(G)$ and hence
$\ell(G)=\lambda(G) = m(G)$.}
\end{equation}
Since $\lambda(G)=\ell(G)$, a uniformly generated group $G$ must be
supersolvable by~\cite{I}. Further, since
$d(G) = m(G)$ it is amongst the (solvable)
groups classified by Apisa and Klopsch in~\cite{AK}*{Theorem~1.6}.
Their groups are structurally similar to ours, but with a more general
module action.
Our proof does not refer to~\cite{AK}, even though it would be natural to do so,
because we want our proof to be independent of the CFSG.
\end{remark}
\begin{remark}
The groups we classify in Theorem~\ref{T1} arise in connection with other
very natural characterizations. For example, Iwasawa~\cite{I} classified
the groups $G$ whose subgroup lattice forms a finite projective
geometry with at least three points on a line, and found the same groups.
Further, Baer~\cite{B}*{Theorem~11.2(b)} determined the same groups when
considering ``subgroup-isomorphisms'' and ``ideal-cyclic'' groups~\cite{B}*{p.\,2, p.\,8}.
\end{remark}
\section{Proof}\label{S2}
The characterization of $d$-uniformly generated groups in Lemma~\ref{L3} below
helps to prove Theorem~\ref{T1}.
\begin{lemma}\label{L3}
A finite group $G$ is $d$-uniformly generated if and only if $d=\ell(G)$.
\end{lemma}
\begin{proof}
The inequality $d\le\ell(G)$ is clear. Suppose now that $G$ is $d$-uniformly
generated and $d<\ell(G)$. Then there exists an unrefinable chain
\[
1=G_0<G_1<\cdots<G_{\ell(G)}=G.
\]
Since $G_i$ is maximal in $G_{i+1}$, we have $G_{i+1}=\langle G_i,x_{i+1}\rangle$ for any $x_{i+1}\in G_{i+1}\setminus G_i$. Choosing such elements, it follows by induction that $G_i=\langle x_1,\dots,x_i\rangle$, so that $1<\langle x_1\rangle<\cdots<\langle x_1,\dots,x_d\rangle=G_d<G_{\ell(G)}=G$. Consequently, $G$ is not $d$-uniformly generated. This contradiction shows that $d$-uniform generation forces $d=\ell(G)$. Conversely, suppose $d=\ell(G)$ and that $1<\langle x_1\rangle<\cdots<\langle x_1,\dots,x_d\rangle=H$ with $H<G$. Refining the chain $1<\langle x_1\rangle<\cdots<H<G$ to an unrefinable chain gives $\ell(G)\ge d+1$, a contradiction. Hence $H=G$, and $G$ is $d$-uniformly generated.
\end{proof}
Recall the following definitions. The \emph{Frattini subgroup}, $\Phi(G)$, is the intersection of the maximal subgroups of $G$; so the elements of $\Phi(G)$ are precisely the elements of $G$ contained in no independent generating set of $G$. The \emph{Fitting subgroup}, $\Fit(G)$, is the largest normal nilpotent subgroup of $G$.
\begin{lemma}\label{L2}
Let $G$ be a finite uniformly generated group.
\begin{enumerate}[{\rm (a)}]
\item If $1 \lhdeq N \lhdeq G$, then $N$ and $G/N$ are both uniformly generated.
\item The Frattini subgroup $\Phi(G)$ is trivial.
\end{enumerate}
\end{lemma}
\begin{proof}
(a)~Suppose $1\lhdeq N\lhdeq G$. For any group $G$ we have
$d(G) \leq d(G/N) + d(N)$ and
$\ell(G) = \ell(G/N) + \ell(N)$, see~\cite{CST}*{Lemma~2.1}. Since $G$ is uniformly generated,
\[
d(G) = \ell(G) = \ell(G/N) + \ell(N) \geq d(G/N) + d(N) \geq d(G).
\]
Therefore, $\ell(G/N) = d(G/N)$ and $\ell(N) = d(N)$, implying that $G/N$ and $N$ are uniformly generated by Lemma~\ref{L3}.
(b)~Assume that $\Phi(G)\ne 1$, and choose $1\ne x_1\in\Phi(G)$. Suppose $Y=\{y_1,\dots,y_d\}$ generates $G$, where $d=d(G)$. The minimality of $d$ implies $\langle y_2,\dots,y_d\rangle<G$, and hence $\langle x_1,y_2,\dots,y_d\rangle<G$ as $x_1 \in \Phi(G)$. If for some $i<d$, the subgroup $\langle x_1,y_2,\dots,y_i\rangle$ equals $\langle x_1,y_2,\dots,y_{i+1}\rangle$, then $y_{i+1}\in\langle x_1,y_2,\dots,y_i\rangle$. In this case, we therefore have
\[
G=\langle y_1,\dots,y_i,y_{i+1},\dots,y_d\rangle
=\langle y_1,\dots,y_i,x_1,y_{i+2},\dots,y_d\rangle
=\langle y_1,\dots,y_i,y_{i+2},\dots,y_d\rangle< G.
\]
This contradiction shows that the chain
\[
1<\langle x_1\rangle<\langle x_1,y_2\rangle<\cdots
<\langle x_1,y_2,\dots,y_d\rangle<G
\]
is strictly ascending; its top is a proper subgroup of $G$, contradicting the fact that $G$ is $d$-uniformly generated.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{T1}]
Assume that $G$ is uniformly generated and $d=d(G)$.
Then $\ell(G)=\lambda(G)$ by~\eqref{E1}, and $G$ is supersolvable
by~\cite{I}. Assume $G\ne1$ and set $N:=\Fit(G)$.
Then $N \neq 1$ since $G$ is solvable.
Lemma~\ref{L2}(a,b) implies that $N$ is uniformly generated and $\Phi(N)=1$;
as $N$ is nilpotent with trivial Frattini subgroup, it is a direct product of
elementary abelian groups. If $|N|$ were divisible by two distinct primes,
say $N=P\times Q$ with $\gcd(|P|,|Q|)=1$ and $P,Q\ne1$, then
$d(N)=\max\{d(P),d(Q)\}<d(P)+d(Q)\le\ell(P)+\ell(Q)=\ell(N)$,
contradicting Lemma~\ref{L2}(a).
Hence~$N$ must be elementary abelian. The first possibility
is $G=N\cong (\C_p)^d$. Suppose now that~$N$ is a proper subgroup of~$G$.
Since $G$ is supersolvable, the derived subgroup $G'$ is nilpotent, so
$G'\le N$ and $G/N$ is abelian. The above argument applied to $G/N$ shows
that $G/N$ is an elementary abelian $q$-group for some prime $q$. Clearly $q\ne p$,
since otherwise $G$ would be a nilpotent $p$-group and hence equal to $N$.
Let $g\in G$ have order~$q$. By Lemma~\ref{L2}(a),
$N\langle g\rangle$ is uniformly generated, and by Maschke's
theorem $N$ is a direct sum of simple
$\langle g\rangle$-submodules which must
have dimension~1 and be isomorphic. Therefore $g$ acts as a scalar
matrix on $N$. The scalar has order~$q$, and not~1 because
$N=\Fit(G)$. Also, if $|G/N|=q^k$, then we must have $k=1$,
otherwise we could find an element of order $q$ centralizing
$N$ and hence $N<\Fit(G)$, a contradiction. In summary, either
$G\cong(\C_p)^d$ or $G\cong(\C_p)^{d-1}\rtimes\C_q$ where $\C_q$ acts as a
nontrivial scalar on $(\C_p)^{d-1}$. Conversely, such groups are easily shown
to be uniformly generated and to~have~$d=d(G)$.
\end{proof}
We conclude with two open problems.
\begin{problem}
Classify the finite groups $G$ with $m(G)-d(G)\le1$.
\end{problem}
\begin{problem}
Bound the difference $m(G)-d(G)$, for a connected algebraic group~$G$.
\end{problem}
\section*{Acknowledgments}
The problem of classifying uniformly generated groups was posed by the author
at the 2018 CMSC Annual Research Retreat, and solved promptly. I thank the
CMSC for hosting the Retreat, and Scott Harper for his helpful comments. I
acknowledge the support of the Australian Research Council Discovery Grants
DP160102323 and DP190100450. Finally, I thank the referee for suggesting
improvements to this note.
| {
"timestamp": "2019-05-31T02:10:43",
"yymm": "1901",
"arxiv_id": "1901.06480",
"language": "en",
"url": "https://arxiv.org/abs/1901.06480",
"abstract": "A finite group $G$ is called *uniformly generated*, if whenever there is a (strictly ascending) chain of subgroups $1<\\langle x_1\\rangle<\\langle x_1,x_2\\rangle <\\cdots<\\langle x_1,x_2,\\dots,x_d\\rangle=G$, then $d$ is the minimal number of generators of $G$. Our main result classifies the uniformly generated groups without using the simple group classification. These groups are related to finite projective geometries by a result of Iwasawa on subgroup lattices.",
"subjects": "Group Theory (math.GR)",
"title": "Classifying uniformly generated groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517494838938,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7089606301373643
} |
https://arxiv.org/abs/1701.04444 | The phases of large networks with edge and triangle constraints | Based on numerical simulation and local stability analysis we describe the structure of the phase space of the edge/triangle model of random graphs. We support simulation evidence with mathematical proof of continuity and discontinuity for many of the phase transitions. All but one of the many phase transitions in this model break some form of symmetry, and we use this model to explore how changes in symmetry are related to discontinuities at these transitions. | \section{Introduction}
\label{SEC:Intro}
We use the variational formalism of equilibrium statistical mechanics
to analyze large random graphs. More specifically we analyze ``emergent
phases", which represent the (nonrandom) large-scale structure of
typical large graphs under
global constraints on subgraph densities. We concentrate on
the model with edge and triangle density constraints, $\E$ and $\tau$,
which in this context play roles somewhat similar to those that mass and energy density constraints play in
microcanonical models of simple materials. Our goal is to understand the statistical states
$g_{\E,\tau}$ which maximize entropy for given $\E$ and $\tau$. These are the
analogues of Gibbs states in statistical mechanics.
Other parametric models of random graphs are
widely used, in particular exponential random graph models (analogues
of grand canonical models of materials), and the edge/triangle
constraints have been studied in that formalism since
they were popularized by Strauss in 1986 \cite{St}. (There is a short discussion
of exponential models in Section \ref{SEC:notation}.)
This paper, following on
\cite{RS1,RS2,RRS1,KRRS1,KRRS2,RRS2}, is the first attempt to
determine the qualitative features of the whole phase space of the
edge/triangle model. We discuss what phases exist, and give the basic features of the
transitions between these phases; see Figure \ref{phase-diagram}.
\begin{figure}[htbp]
\center{\includegraphics[width=5in]{phases10e}}
\caption{Distorted sketch of 14 of the phases in the edge/triangle model.
Continuous phase transitions are shown in red, and discontinuous
phase transitions in black.}
\label{phase-diagram}
\end{figure}
The phase space $\Gamma$ is the space of achievable values of the
constraints, and a phase is a connected open subset of $\Gamma$
in which the entropy optimizing $g_{\E,\tau}$ are unique for given $(\E,\tau)$, and
vary analytically with
$(\E,\tau)$. In the case of edge/triangle constraints $\Gamma$ is the
``scalloped triangle" of Razborov \cite{R3} (see Figure~\ref{Razborov-triangle}).
Determining the optimal states in the edge/triangle model is a difficult
variational problem, one that has not been
solved rigorously except in special
regions of the phase space \cite{RS1,RS2,RRS1,KRRS1,KRRS2,RRS2}. However there is
solid evidence that each individual optimizer
is \emph{multipodal}, that is, described by a stochastic block model (see definition below).
This evidence is borne out by our numerical studies, and based on them we conjecture that the optimal states
occur in three families (plus one extra phase), corresponding to certain
equivalences or symmetries, as follows. We will give the evidence for
our conjectures as we proceed.
In the region $\tau<\E^3$ of $\Gamma$ there are three infinite families of
phases, denoted $A(n,0)$, $B(n,1)$ and $C(n,2)$; see Figure~\ref{phase-diagram}.
Each statistical state in an $A(n,0)$ phase corresponds to a partition
of the set $\Sigma$ of nodes into $n\ge 2$ subsets $\Sigma_j$ which are
{\it equivalent} in the sense that the sizes of all $\Sigma_j$ are $1/n$
of the whole,
the probability $p_{jk}$ of an edge between $v\in\Sigma_j$ and $w\in\Sigma_k$
is independent of $j$ and $k$, only depending on whether $j=k$ or $j\ne k$.
It follows that for an $A(n,0)$ phase there are only two
statistical parameters $p_{11}$ and $p_{12}$, which are thus easily computable functions of the edge and
triangle constraints, $\E$ and $\tau$. See for example Figure \ref{phaseA} for the case $A(2,0)$,
and the left graphon in Figure \ref{ABCexample}.
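Explicitly, writing $a=p_{11}$ and $b=p_{12}$, a short computation (recorded here for the reader's convenience) gives
\[
\E=\frac{a+(n-1)b}{n},\qquad
\tau=\frac{a^3+3(n-1)ab^2+(n-1)(n-2)b^3}{n^2}
=\E^3-(n-1)\left(\frac{b-a}{n}\right)^3,
\]
which can be inverted to yield equation (\ref{a-and-b}) below.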
\begin{figure}[htbp]
\center{\includegraphics[width=5in]{phase_space1je2}}
\caption{Distorted sketch of Razborov's scalloped triangle. The dotted curve is not part of the region; it just indicates
the quadratic curve on which cusps lie.}
\label{Razborov-triangle}
\end{figure}
The $A(n,0)$ phase touches
the lower boundary of $\Gamma$ at the cusp $(\E,\tau)=
\left ( \frac{n}{n+1}, \frac{n(n-1)}{(n+1)^2} \right)$.
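For instance, $A(2,0)$ reaches the cusp $(2/3,\,2/9)$ and $A(3,0)$ the cusp $(3/4,\,3/8)$.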
A statistical state in a $B(n,1)$ phase corresponds to a partition of
$\Sigma$ into $n$ statistically equivalent subsets, $n\ge 1$, plus one
other set of statistically equivalent nodes, see Figure
\ref{ABCexample} middle. In the phase diagram, these phases are
arranged in stripes coming out of the point $(\E,\tau)=(1,1)$, and part
of the boundary of $B(n,1)$ is shared with $A(n+1,0), A(n+2,0)$ and
with $C(n,2)$. These are depicted schematically in Figure~\ref{phase-diagram}.
A statistical state in a $C(n,2)$ phase corresponds to a nodal
partition into $n$ equivalent subsets, $n\ge 1$,
plus another statistically equivalent pair of subsets, see Figure \ref{ABCexample} right.
The phase
shares boundary with $B(n,1)$, $A(n+2,0)$, and the scallop
connecting the cusps with $\E=n/(n+1)$ and
$\E=(n+1)/(n+2)$. See Figure~\ref{phase-diagram}.
$F(1,1)$ denotes the single phase for the region
$\tau>\E^3$; see Figure~\ref{phase-diagram}.
A statistical state in
$F(1,1)$ corresponds to a bipodal graphon.
Although this is the same basic structure as $B(1,1)$, the two phases
are different in more subtle ways.
\begin{figure}[htbp]
\center{\includegraphics[width=3in]{bipartite3b.pdf}}
\caption{The optimal graphons in phase $A(2,0)$}
\label{phaseA}
\end{figure}
\begin{figure}[htbp]
\center{\includegraphics[width=2in]{Aexample}\includegraphics[width=2in]{Bexample}\includegraphics[width=2in]{Cexample}}
\caption{\label{ABCexample}Example of an $A(3,0)$, a $B(2,1)$ and $C(3,2)$ graphon.}
\end{figure}
The multipodal structure of
the $g_{\E,\tau}$ has only been proven on certain subsets of the phase space: on the boundary, on the
Erd\H{o}s-R\'{e}nyi curve $\tau=\E^3$, on the line segment $(1/2,\tau)$
for $0\le \tau\le 1/8$, and in a region just above the
Erd\H{o}s-R\'{e}nyi curve; see \cite{KRRS2}. It has however also been proven
just above the
Erd\H{o}s-R\'{e}nyi curve in a wide family of other models \cite{KRRS2}, for
all parameters in edge/$k$-star models \cite{KRRS1}, and is supported in our
edge/triangle model by
extensive simulations for a range of constraints \cite{RRS1}.
The support for the specific pattern of phases in
Figure~\ref{phase-diagram} is purely from simulation and will be described
in Section~\ref{SEC:numerics}.
In Section~\ref{SEC:transitions} we use this detailed structure of the
optimal states to analyze the transitions between phases. In our
edge/triangle model all phase transitions involve a change of
symmetry.
Some of the
transitions are continuous and some discontinuous, and we prove some
of these results in that section. The role of symmetry in determining
these qualitative features is of interest, given the major
role that symmetry plays in the understanding of phase transitions in
statistical physics~\cite{LL,An}. We discuss this further in Section~\ref{SEC:symmetry}.
\section{Notation and formalism}
\label{SEC:notation}
The study of large dense
graphs uses the mathematical tool of graphons, which we now review \cite{L}, making use of the
discussion in \cite{Ra}. We let $G_n$ denote the set of graphs on $n$
nodes, which we label $\{1,\ldots,n\}$. Graphs are assumed simple, i.e.\ undirected
and without
multiple edges or loops. A graph $G$ in $G_n$ can be represented by
its $0,1$-valued adjacency matrix, or alternatively by the function
${g_{_G}}$ on the unit square $[0,1]^2$ with constant value 0 or 1 in each
of the subsquares of area $1/n^2$ centered at the points $((j-\frac12)/n,
(k-\frac12)/n)$; see Figure~\ref{multipartite}.
\begin{figure}[htbp]
\center{\includegraphics[width=2.5in]{multipartite2}}
\caption{Graph with 14 nodes}
\label{multipartite}
\end{figure}
More generally, a graphon $g\in {\mathcal G}$ is an
arbitrary symmetric
measurable function on $[0,1]^2$ with values in $[0,1]$.
{We define the ``cut metric'' on graphons by}
\begin{equation}
{d}(f,g)\equiv \sup_{S,T\subseteq [0,1]}\Big| \int_{S\times
T}[f(x,y)-g(x,y)]\,dx\,dy\Big|.
\label{cutmetric}
\end{equation}
Informally, $g(x,y)$ is the probability of an edge between nodes $x$
and $y$, and so two graphons
are called equivalent if they agree up to a
`node rearrangement', that is, $g(x,y)\sim g(\phi(x),\phi(y))$ where $\phi$
is a measure-preserving transformation of $[0,1]$
(see \cite{L} for details). {The cut metric on graphons is invariant under
the action of $\phi$: $d(f\circ \phi, g\circ \phi) = d(f,g)$.}
We define the cut metric on the quotient space $\tilde {\mathcal G}={\mathcal G}/\sim$ of `reduced
graphons' to be the infimum of (\ref{cutmetric}) over all representatives
of the given equivalence classes.
$\tilde {\mathcal G}$ is compact in the topology induced by this metric~\cite{L}.
We now consider the notion of `blowing up' a graph $G$ by replacing each
node with a cluster of $K$ nodes, for some fixed $K=2,3,\ldots$, with
edges inherited as follows: there is an edge between a node in
cluster $V$ (which replaced the node $v$ of $G$) and a node in cluster
$W$ (which replaced node $w$ of $G$) if and only if there is an edge
between $v$ and $w$ in $G$.
Note that the blowups of a graph are all represented by the same
reduced graphon, and ${g_{_G}}$ can therefore be considered a graph on
arbitrarily many -- even infinitely many -- nodes, which allows us to
reinterpret Figure~\ref{multipartite} as representing a multipartite graph.
This represents a form of symmetry which we exploit next, and discuss
in Section~\ref{SEC:symmetry}.
The `large scale' features of a graph $G$ on which we focus are the
densities with which various subgraphs $H$ sit in $G$. Assume for
instance that $H$ is a $4$-cycle. We could represent the density of
$H$ in $G$ in terms of the adjacency matrix $A_G$ by
\begin{equation}
\frac{1}{n(n-1)(n-2)(n-3)} \sum_{w,x,y,z} A_G(w,x)A_G(x,y)A_G(y,z)A_G(z,w),
\end{equation}
where the sum is over distinct nodes $\{w,x,y,z\}$ of $G$.
For large $n$ this can be approximated, to within $O(1/n)$, by:
\begin{equation}
\int_{[0,1]^4} {g_{_G}}(w,x){g_{_G}}(x,y){g_{_G}}(y,z){g_{_G}}(z,w)\, dw\, dx\, dy\, dz.
\end{equation}
It is therefore useful to define the
density $t_H(g)$ of this $H$ in a graphon $g$ by
\begin{equation}
t_H(g) = \int_{[0,1]^4} g(w,x)g(x,y)g(y,z)g(z,w)\, dw\, dx\, dy\, dz.
\end{equation}
The density for other subgraphs is defined analogously.
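In particular, the edge and triangle densities used throughout this paper are
\[
\E(g)=\int_{[0,1]^2} g(x,y)\,dx\,dy,
\qquad
\tau(g)=\int_{[0,1]^3} g(x,y)g(y,z)g(z,x)\,dx\,dy\,dz.
\]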
We note that $t_H(g)$ only depends on the equivalence class of $g$ and is a
continuous function of $g$ with respect to the cut metric
on reduced graphons. It is an important result of \cite{L} that the densities of subgraphs
are separating: any two reduced graphons with the
same values for all densities $t_H$ are the same.
Our goal is to analyze typical large graphs with
variable edge/triangle constraints in the phase space of Figure
\ref{Razborov-triangle}. Our densities are real numbers, limits of densities
which are attainable in large finite systems, so we begin by softening
the constraints, considering graphs with $n\gg1$ nodes and with
edge/triangle densities $(\E^\prime,\tau^\prime)$ satisfying
$\E-\delta<\E^\prime<\E+\delta$ and $\tau-\delta<\tau^\prime<\tau+\delta$
for some small $\delta$ (which will eventually disappear). It is easy to
show that the number of such constrained graphs is of the form $\exp
(sn^2)$, for some $s=s(\E,\tau,\delta)>0$ and by a typical graph we mean
one chosen from the uniform distribution on the constrained set.
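(For comparison, the total number of graphs on $n$ nodes is $2^{\binom{n}{2}}=\exp\big(\tfrac{\ln 2}{2}\,n^2+O(n)\big)$, so $s\le\tfrac{\ln 2}{2}$.)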
Getting back to our goal of analyzing constrained uniform
distributions, a related step is to determine the cardinality of the
set of graphs on $n$ vertices subject to the constraints. Constraints
are expressed in terms of a vector $\alpha$ of values of a set $C$ of densities,
and a small parameter
$\delta$. Denoting the cardinality by $Z_n(\alpha,\delta)$, it was proven
in \cite{RS1, RS2} that $\lim_{\delta\to 0}\lim_{n\to \infty}
(1/n^2)\ln[Z_n(\alpha,\delta)]$ exists; it is called the \emph{constrained
entropy} $s({\alpha})$. As in statistical mechanics this can be
usefully represented via a variational principle.
\begin{theorem}\label{thm:g-variation}
(The variational principle for constrained
graphs \cite{RS1,RS2})
For any $k$-tuple $\bar H=(H_1,\dots,H_k)$ of subgraphs and $k$-tuple $\alpha=(\alpha_1,\dots,\alpha_k)$ of
numbers,
\begin{equation} \label{var}
s(\alpha)=\max_{t_{H_j}(g)=\alpha_j} S(g),
\end{equation}
\noindent where
$S$ is the graphon entropy:
\begin{equation} \label{Shannon}
S(g)=-\frac12\int_{[0,1]^2} g(x,y)\ln[g(x,y)]+[1-g(x,y)]\ln[1-g(x,y)]\,dx\,dy.
\end{equation}
\end{theorem}
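For example, on the Erd\H{o}s-R\'{e}nyi curve $\tau=\E^3$ the constant graphon $g\equiv\E$ satisfies both constraints and, as noted in Section~\ref{SEC:transitions}, is the unique optimizer, so
\[
s(\E,\E^3)=S(g)=-\tfrac12\big[\E\ln\E+(1-\E)\ln(1-\E)\big].
\]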
Variational principles such as Theorem \ref{thm:g-variation} are well
known in statistical mechanics \cite{Ru1,Ru2,Ru3}.
One aspect of such a variational principle in the current
context is not well understood, and that is the uniqueness of the optimizer.
In all known graph examples the
optimizers in the variational principle are unique in $\tilde {\mathcal G}$
except on a lower dimensional set of constraints where
there is a phase transition. Assuming there is a unique optimizer for
(\ref{var}), this optimizer is the limiting constrained uniform distribution.
We will discuss the question of uniqueness of entropy optimizers
in Section~\ref{SEC:symmetry}.
Finally we contrast the above formalism with the formalism of exponential random graph
models (ERGM's) mentioned in the Introduction. The latter are widely
used, especially in the social sciences, to model graphs on a fixed,
small number of nodes \cite{N}. They are sometimes considered as
Legendre transforms of the models being discussed in this paper
\cite{RS1}. However, as was pointed out in \cite{CD}, this use of Legendre transform is largely problematic,
since the constrained entropy in these models is neither convex nor concave, and the Legendre
transform is therefore not invertible \cite{RS1}.
As a consequence
the parameters in ERGM's become redundant, and
this confuses any interpretation of phases or phase transitions in such
models.
\section{Numerical simulation}
\label{SEC:numerics}
We now briefly describe the computational algorithms that we used to obtain the phase portrait sketched in Figure~\ref{phase-diagram}. The details of the algorithms, as well as their benchmarking against known analytical results, can be found in~\cite{RRS1}. In the algorithms, we assume that the entropy maximizing graphons are at most $16$-podal, the largest number of podes that our computational power can handle. We then draw random samples from the space of $16$-podal graphons, and select those that satisfy, besides the symmetry constraints, the constraints on edge and triangle densities up to a given accuracy. We compute the entropy of these selected graphons and take the ones with maximal entropy as the optimizing graphons.
For each given $(\varepsilon,\tau)$ value, we generate a large number of samples, so that the optimizing graphons we obtain have entropy values within a given accuracy of the true values. The computational complexity of such a procedure is extremely high for high accuracy computations. Each optimizing graphon determined by the sampling algorithm is then used as the initial guess for a Newton-type local optimization algorithm, more precisely a sequential quadratic programming (SQP) algorithm, to search for local improvements. Detailed analysis of the computational complexity of the sampling algorithm and the implementation of the SQP algorithm are documented in~\cite{RRS1}. The results of the SQP algorithm are the candidate optimizing graphons that we show in the rest of this section.
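To make the sampling stage concrete, here is a minimal sketch (ours, not the code of~\cite{RRS1}; the function names, tolerance and sample count are illustrative assumptions) that generates random $K$-podal graphons, computes their edge and triangle densities and their entropy, and keeps the best constraint-satisfying sample; the SQP refinement stage is omitted:
\begin{verbatim}
# Minimal illustrative sketch, not the authors' code.
import numpy as np

def random_multipodal(K, rng):
    c = rng.dirichlet(np.ones(K))        # pode sizes, summing to 1
    P = rng.uniform(0.0, 1.0, (K, K))
    return c, (P + P.T) / 2              # symmetric edge probabilities

def densities(c, P):
    eps = c @ P @ c                                        # edge density
    tau = np.einsum('i,j,k,ij,jk,ki->', c, c, c, P, P, P)  # triangle density
    return eps, tau

def entropy(c, P):                       # graphon entropy (integral formula above)
    S0 = -(P*np.log(P) + (1-P)*np.log(1-P))   # assumes 0 < P < 1 entrywise
    return 0.5 * np.einsum('i,j,ij->', c, c, S0)

def best_graphon(eps0, tau0, K=16, tol=1e-3, n_samples=10**6, seed=0):
    rng = np.random.default_rng(seed)
    best, best_s = None, -np.inf
    for _ in range(n_samples):
        c, P = random_multipodal(K, rng)
        eps, tau = densities(c, P)
        if abs(eps - eps0) < tol and abs(tau - tau0) < tol:
            s = entropy(c, P)
            if s > best_s:
                best, best_s = (c, P), s
    return best, best_s
\end{verbatim}
The tiny acceptance probability of the rejection step is the source of the high computational cost mentioned above.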
\begin{figure}[!htb]
\centering
\includegraphics[width=0.675\textwidth,height=0.615\textwidth]{Boundary-30-40-normal-coordinate}
\caption{Boundaries of the $A(3,0)$ and the $A(4,0)$ phases, in solid blue lines, as determined by numerical simulations.
The black dashed lines are the Erd\H{o}s-R\'{e}nyi curve (upper) and Razborov scallops (bottom).}
\label{FIG:N-0 Boundary}
\end{figure}
\paragraph{The $A(n,0)$ phase boundaries.}
In the first group of numerical simulations we try to determine the
boundaries of two of the completely symmetric phases, the $A(3,0)$
phase and the $A(4,0)$ phase. As we pointed out in the Introduction,
for a given $(\varepsilon, \tau)$ value in $A(n,0)$ we know analytically the unique
expression for the optimal $A(n,0)$ graphon (see, for instance,
equation (\ref{a-and-b})) and the corresponding entropy. Therefore, a given
point $(\varepsilon, \tau)$ is outside of the $A(n, 0)$ phase if we can find
a graphon that gives a larger entropy value, and is within
the $A(n, 0)$ phase if we can not find a graphon that has a
larger value. We use this strategy to determine the boundaries
of the $A(3,0)$ and the $A(4, 0)$ phases. In
Figure~\ref{FIG:N-0 Boundary} we show the boundaries we determined
numerically. Note that since these boundaries are determined using a
mesh on the $(\varepsilon, \tau)$ plane, they are only accurate
up to the mesh size in each direction.
For better visualization, we reproduced
Figure~\ref{FIG:N-0 Boundary} in a rescaled coordinate system, $(\varepsilon,
\tau'=\tau-\varepsilon(2\varepsilon-1))$, in Figure~\ref{FIG:N-0 Boundary
Rescaled}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth]{Boundary-30-40-rescaled-coordinate}
\caption{Same as in Figure~\ref{FIG:N-0 Boundary} except that the coordinates $(\varepsilon, \tau'=\tau-\varepsilon(2\varepsilon-1))$ are used, and $\varepsilon$ is restricted to
$[0.5, 0.8]$.}
\label{FIG:N-0 Boundary Rescaled}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37845575-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37373880-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37368265-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37323341-2}\\
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37188572-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37171725-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37154879-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p37149264-2}\\
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p36138489-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p36132874-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p36127258-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p36020565-2}\\
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35762256-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35745410-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35739795-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35728564-2}\\
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35706102-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35694871-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35689256-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35678025-2}\\
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35475870-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35464639-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35318639-2}
\includegraphics[width=0.22\textwidth]{E0p73500000-T0p35183869-2}
\caption{The optimizing graphons at the phase boundaries crossed as $\tau$
decreases at fixed $\E=0.735$. The first two columns are before the
transition and the last two columns are after the transition. Rows,
top to bottom, show respectively the following transitions: (i)
$B(1,1)\to B(2,1)$, (ii) $B(2,1)\to A(3,0)$, (iii) $A(3,0)\to
B(2,1)$, (iv) $B(2,1)\to C(2,2)$, (v) $C(2,2)\to A(4,0)$, and (vi)
$A(4,0)\to C(2,2)$. We discuss in the text the $\tau$ values surrounding the
transitions, corresponding to the middle two columns.}
\label{FIG:Line E0p735}
\end{figure}
\paragraph{Phases along line segment $(0.735, \tau)$.}
In the second group of numerical simulations, we provide some
numerical evidence to support our conjecture on phases depicted
in Figure~\ref{phase-diagram}. Consider the optimizing graphons
in the phases through which the vertical line segment $(0.735, \tau)$,
$\tau\in(0.3504, 0.3971)$, cuts. (Figure~\ref{phase-diagram} is
quite crude: for an accurate representation of the
boundaries of $A(3,0)$ and $A(4,0)$ see Figure~\ref{FIG:N-0
Boundary}.) From top to bottom, the line
cuts through: $B(1,1)$, $B(2,1)$, $A(3,0)$, $B(2,1)$, $C(2,2)$,
$A(4,0)$, and $C(2,2)$ phases. In Figure~\ref{FIG:Line E0p735}, we show the
optimizing graphons in phases before and after each transition along
the line. Let us mention here two obvious discontinuous
transitions, the first being the transition from $B(1,1)$ to
$B(2,1)$ at about $\tau=0.3737$, and the second being the transition
from $B(2,1)$ to $C(2,2)$ at about $\tau=0.3574$.
{The derivative $\partial s/\partial \tau$ of the entropy with respect to
$\tau$ exhibits jumps at these transitions}, as seen in
Figure~\ref{FIG:Line E0p735 Entropy}. A theoretical analysis of the transitions is
presented in the next section.
\paragraph{Phases along line segment $(0.845, \tau)$.}
We performed similar simulations to reveal phases along the line
segment $(0.845, \tau)$, $\tau\in(0.5842, 0.6034)$. The phases, from
top to bottom, should be, respectively $B(1,1)$, $B(2,1)$, $B(3,1)$,
$B(4,1)$, $A(6,0)$, $B(4,1)$, $B(5,1)$, and
$C(5,2)$. Our simulation did not capture the last two phases, which lie
extremely close to the lower boundary of the phase space;
representative optimizing graphons for the others
are shown in Figure~\ref{FIG:Line E0p845}. Also, due to limitations in
computational power we were not able to resolve the transitions
between the phases as accurately as in the previous case, that is, those
in Figure~\ref{FIG:Line E0p735}.
\paragraph{Remark on our conjecture on the phase space structure.} We now briefly
summarize the evidence behind our conjecture of the phase portrait in
Figure~\ref{phase-diagram}. First of all, by simulations and proofs in
several models, in particular this one, we had found that in the
interior of phase spaces optimizing graphons have always been found to
be multipodal. In this model furthermore, the only place we find more
than 3 podes is for $\varepsilon>0.7$ and near the scalloped boundary.
(Recall that in all the numerical simulations in this paper we assume
that the graphons have $16$ or fewer podes, even though where we simulate
they always end up having many fewer than $16$ podes). Second, for each point
$(\varepsilon, \tau)$ in the phase space (except for $\varepsilon > 0.75$ and close to
the scallops), our algorithm, even though computationally expensive, could
determine the optimal graphons up to relatively high accuracy.
The computational cost is in general tolerable; in the exceptional
region it is too expensive to find enough graphons which satisfy all
the constraints to get the accuracy we wanted. (In more detail, we use the techniques explained in~\cite{RRS1} with edge and triangle constraint intervals of size $10^{-9}$, which determines entropy to order $10^{-6}$, from which we determine our optimal graphons.)
Third, within each phase away from the scallops our computational power allows us to
perform simulations on relatively fine meshes of $(\varepsilon, \tau)$. This
is how we determined the boundary of the $A(3,0)$ and $A(4,0)$ phases
as well as the transitions shown in Figure~\ref{FIG:Line E0p735
Entropy}, for instance. In more detail, consider the middle two graphons in each row in
Figure~\ref{FIG:Line E0p735}, which straddle the transitions. They all
have edge density $\E=0.735$. In the first row they have triangle
densities $0.37373880$ and $0.37368265$; in the second row the triangle densities
are $0.37171725$ and $0.37154879$; in the third row they are $0.36138489$
and $0.36132874$; in the fourth row they are $0.35745410$ and
$0.35739795$; in the fifth they are $0.35706102$ and $0.35694871$; and
in the last row they are $0.35475870$ and $0.35318639$.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{E0p73500000_smax_vs_t-2}
\includegraphics[width=0.2\textwidth,height=0.38\textwidth]{E0p73500000_smax_vs_t-zoom3574-2}
\includegraphics[width=0.2\textwidth,height=0.38\textwidth]{E0p73500000_smax_vs_t-zoom3737-2}
\caption{Left: entropy $s$ as a function of $\tau$ along the line segment $(0.735, \tau)$, $\tau\in(0.3504, 0.3971)$; Middle: zoom of $s(\tau)$ around the $B(2,1)\to C(2,2)$ phase transition at $\tau=0.3574$; Right: zoom of $s(\tau)$ around the $B(1,1)\to B(2,1)$ phase transition at $\tau=0.3737$.}
\label{FIG:Line E0p735 Entropy}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.3\textwidth]{E0p84500000-T0p59815784-2}
\includegraphics[width=0.3\textwidth]{E0p84500000-T0p59421364-2}
\includegraphics[width=0.3\textwidth]{E0p84500000-T0p59135216-2}\\
\includegraphics[width=0.3\textwidth]{E0p84500000-T0p58938006-2}
\includegraphics[width=0.3\textwidth]{E0p84500000-T0p58798799-2}
\includegraphics[width=0.3\textwidth]{E0p84500000-T0p58787198-2}
\caption{Typical optimizing graphons in the phases that the line
segment $(0.845, \tau)$, $\tau\in(0.5842, 0.6034)$ cuts
through. From top left to bottom right are graphons in
phases $B(1,1)$, $B(2,1)$, $B(3,1)$, $B(4,1)$, $A(6,0)$
and $B(4,1)$, respectively.}
\label{FIG:Line E0p845}
\end{figure}
\section{Analysis of transitions}
\label{SEC:transitions}
As noted above, the numerical simulations indicate that all phase
transitions below the Erd\H{o}s-R\'{e}nyi curve with the exception of
$A(n,0) \leftrightarrow B(n-1,1)$, occur discontinuously. At each
other transition, the optimizing graphon jumps from one form to
another, and so the densities of certain subgraphs also jump. In this
section, we prove that certain of these transitions \emph{can only
occur discontinuously}.
\begin{theorem}\label{thm:no-continuous-theorem-1}
Except at a finite number of values of the edge density $\E$,
there cannot be a
continuous transition from a $B(1,1)$ bipodal phase to a $B(2,1)$ or $C(1,2)$
tripodal phase.
\end{theorem}
Theorem~\ref{thm:no-continuous-theorem-1} can be generalized to consider all $B \leftrightarrow C$ transitions:
\begin{theorem}\label{thm:no-continuous-theorem-2}
Except at a finite number of values of the edge density $\E$, there cannot be a continuous transition from
a $B(m,1)$ to a $C(m,2)$ phase.
\end{theorem}
Assuming that our conjectured phase portrait is correct, this leaves the
$B(m,1) \leftrightarrow B(m-1,1)$, $C(n-2,2) \leftrightarrow A(n,0)$,
$A(n,0) \leftrightarrow B(n-2,1)$ and
$A(n,0) \leftrightarrow B(n-1,1)$ transitions for us to consider.
\begin{theorem}\label{thm:3}
For $m>2$, there cannot be a continuous transition from a $B(m,1)$
phase to a $B(m-1,1)$ phase, and there cannot be a continuous transition
from an $A(n,0)$ phase to a $B(n-2,1)$ phase.
\end{theorem}
To summarize (see Figure \ref{phase-diagram}) we have proven
that transitions cannot be continuous
between: any two $B$ phases; any
$A(n,0)$ and $B(n-2,1)$ phases; any $B$ and $C$ phases except perhaps
at finitely many values of $\E$.
As noted earlier, transitions between $A$ and $C$ phases appear to
always be discontinuous, and transitions between any $A(n,0)$ and
$B(n-1,1)$ phases
appear to always be continuous. However, we currently lack a
proof for these last two claims, although in \cite{RRS2} there is a possible
path to prove continuity for $A(2,0) \leftrightarrow B(1,1)$.
For completeness we note that the two transitions across the
Erd\H{o}s-R\'{e}nyi curve, $A(2,0) \leftrightarrow
F(1,1)$ and $B(1,1)\leftrightarrow
F(1,1)$, are continuous: for $(\E,\tau)$ on the Erd\H{o}s-R\'{e}nyi
curve the graphon with constant value $\E$ is easily seen to be the
unique entropy-optimizer, and the optimizers from each side must approach
it in $L^2$ and therefore in cut metric.
\noindent{\bf Proof of Theorem \ref{thm:no-continuous-theorem-1}.}
The $C(1,2)$ phase is in the $C$ family and appears to the left of
the $A(3,0)$ phase, while $B(2,1)$ is in the $B$
family and appears to the right of $A(3,0)$.
For purposes of this proof, however, they are indistinguishable.
All that matters is that there are three ``podes'', with a symmetry
swapping two of them.
The proof has three steps:
\begin{enumerate}
\item Showing that the Lagrange multipliers (see definition below) would have to diverge at
a continuous transition.
\item Showing that for a bipodal optimizing graphon, divergent Lagrange
multipliers can only occur at the ``natural boundary'' of the $B(1,1)$ phase,
namely at the minimum possible value of $\tau$ achievable by a bipodal
graphon for the given value of $\E$.
\item Showing that on the natural boundary, there exists a tripodal graphon
with the given values of $(\E,\tau)$ and with higher entropy than any
bipodal graphon. That is, showing that the natural boundary of the $B(1,1)$
phase actually lies within a tripodal phase. This step requires that a certain
analytic function of $\E$ be nonzero. Since analytic functions (that
aren't identically zero) can only have finitely many roots in a compact
interval, this step of the proof can break down at finitely many values
of $\E$. (Numerical examination of the analytic function
reveals that it does not have any roots at all for
relevant values of $\E$,
making the ``all but finitely many'' caveat moot in practice.)
\end{enumerate}
We establish some notation. For any graphon $g$, we
let $\E(g)$ and $\tau(g)$ denote the densities
$t_H(g)$ where $H$ is an edge and a triangle, respectively.
For our $B(1,1)$ graphon, we let
\begin{equation} a = p_{11}, \qquad d=p_{12}, \qquad b= p_{22},
\end{equation}
and let $c$ be the size of the first pode. For a $B(2,1)$ graphon, we
assume that the first two podes are interchangeable, each of size
$c/2$, and set $a_+ = p_{12}$, $a_- = p_{11} = p_{22}$,
$d=p_{13}=p_{23}$, and $b=p_{33}$. That is, the $B(2,1)$ graphon is
obtained from a $B(1,1)$ graphon by splitting the first pode in half (and
renumbering the last pode), and by making $p_{11}$ and $p_{12}$
distinct variables. The only way that a $B(1,1)$ graphon can be a limit
of $B(2,1)$ graphons is if $a_+ - a_-$ goes to zero (note that if $c\to 1$ it becomes a type $A(2,0)$ graphon,
not a $B(1,1)$). So to prove our
theorem, we must show that a sequence of $B(2,1)$ entropy maximizers
cannot approach a limiting graphon with $a_+ = a_-$.
The Euler-Lagrange equations for maximizing entropy (see \cite{KRRS1}) say that there exist constants $\alpha$
and $\beta$ such that, for all $(x,y) \in [0,1]^2$,
\begin{equation}
\frac{\delta S(g)}{\delta g(x,y)} = \alpha \frac{\delta \E(g)}{\delta g(x,y)} + \beta \frac{\delta \tau(g)}{\delta g(x,y)},
\end{equation}
or more explicitly
\begin{equation}
\label{EL1}
S_0'(g(x,y))
= \alpha + 3 \beta \int_0^1 g(x,z) g(y,z) dz,
\end{equation}
where
\begin{equation}
S(g) = \int S_0(g(x,y))\,dxdy,\ \hbox{ and }\ S_0(u)=-u\log(u) - (1-u)\log(1-u).
\end{equation}
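For later use we note that $S_0'(u)=\log\frac{1-u}{u}$ and $S_0''(u)=-\frac{1}{u(1-u)}$.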
For a $B(2,1)$ graphon,
the integral $\int_0^1 g(x,z) g(y,z) dz$ equals
\begin{equation}
\begin{cases}
\frac{c}{2} (a_+^2 + a_-^2) + (1-c) d^2, &\hbox{ for }x,y < \frac{c}{2} \hbox{ or } \frac{c}{2} < x,y < c;\cr
c a_+ a_- + (1-c)d^2, &\hbox{ for }x < \frac{c}{2} <y <c \hbox{ or } y < \frac{c}{2} < x < c;\cr
\frac{c}2 d (a_+ + a_-) + (1-c)bd, &\hbox{ for }x < c < y \hbox{ or } y < c < x;\cr
cd^2 + (1-c)b^2, &\hbox{ for }x,y > c.
\end{cases}
\end{equation}
The Lagrange multiplier $\beta$ equals $\partial s/\partial \tau$ and is
always positive, since we can increase both the entropy and $\tau$ by linearly
interpolating between our given graphon and a constant graphon.
Subtracting equation (\ref{EL1}) for $(x,y) = (\frac{c}{3},\frac{2c}{3})$ from equation (\ref{EL1}) for
$(x,y)=(\frac{c}{3}, \frac{c}{3})$ gives:
\begin{equation}
S_0'(a_-) - S_0'(a_+) = \frac{3\beta c}{2} (a_+ - a_-)^2.
\end{equation}
By the mean value theorem, the left hand side equals $-S_0''(u) (a_+
-a_-)$ for some $u$ between $a_-$ and $a_+$, and so
\begin{equation}
\beta = \frac{-2 S_0''(u)}{3c\,(a_+ - a_-)}.
\end{equation}
Since $\beta>0$ and
$S_0''(u)=\frac{-1}{u(1-u)}$ is negative and bounded away from zero,
$a_+$ must be greater than $a_-$ and $\beta$
must diverge to $+ \infty$ as $a_+ - a_-$ approaches zero. To
compensate, $\alpha$ must diverge to $-\infty$. This completes step 1.
To understand the effect of divergent Lagrange multipliers, we must
consider all of the variational equations for bipodal graphons,
including those related to changes in $c$. The edge, triangle and
entropy densities are:
\begin{eqnarray}
\E(g) & = & c^2 a + 2c(1-c) d + (1-c)^2 b \cr
\tau(g) & = & c^3 a^3 + 3c^2(1-c) ad^2 + 3c(1-c)^2 bd^2 + (1-c)^3 b^3 \cr
S(g) & = & c^2 S_0(a) + 2c(1-c) S_0(d) + (1-c)^2 S_0(b).
\end{eqnarray}
Taking gradients with respect to the four parameters $(a,b,c,d)$ and
setting $\nabla S = \alpha \nabla \E + \beta \nabla \tau$ gives the four
equations:
\begin{eqnarray}
S_0'(a) & = & \alpha + 3 \beta (c a^2 + (1-c) d^2), \cr
S_0'(b) & = & \alpha + 3 \beta (c d^2 + (1-c)b^2), \cr
2 c S_0(a) + 2(1-2c) S_0(d) - 2(1-c)S_0(b) & = & \alpha\left(2ca + 2(1-2c)d -2(1-c)b\right) \cr
& & +\ 3\beta \left(c^2 a^3 + (2c-3c^2)ad^2 + (3c^2-4c+1) bd^2 - (1-c)^2 b^3 \right), \cr
S_0'(d) & = & \alpha + 3\beta (cad + (1-c)bd).
\end{eqnarray}
Note that the left hand side of the third equation is always finite, and that
the left hand sides of the other equations only diverge if the
relevant parameter
$a$, $b$, or $d$ approaches 0 or 1. Otherwise, in the $\beta \to \infty$ limit
the ratios of the coefficients
of $\beta$ and $\alpha$ must be the same for all equations. In other words,
there is a constant $\lambda = -\beta/\alpha$ such that
\begin{equation} \nabla \tau = \lambda \nabla \E,
\end{equation}
where we restrict attention to parameters that are not 0 or 1, and such
that a parameter $i\in\{a,b,d\}$ equals 0 if
$\partial_i \tau > \lambda \partial_i \E$
and equals 1 if $\partial_i \tau < \lambda \partial_i \E$. (The signs
of the inequalities come
from the sign of $S_0'(u)$ as $u \to 0$ or $u \to 1$.)
However, these are precisely the same equations that describe
finding a stationary point for $\tau$ for fixed $\E$, without regard to the
entropy. For $B(1,1)$ graphons, such stationary points occur only for
Erd\H{o}s-R\'{e}nyi graphons, with $a=b=d$, or for minimizers of $\tau$,
with $d=1$ and $b=0$ and $c$ satisfying the algebraic condition
\begin{equation} \label{min-t-eq}
0 = a^3c^2 + ac^2 + 2a^2c + 2(1-c) -4a^2c^2 -4c(1-c). \end{equation}
This completes step 2.
Finally, we must show that a $B(1,1)$ graphon with $b=0$ and $d=1$, and
satisfying (\ref{min-t-eq}), is not an
entropy maximizer. Among bipodal graphons with $b=0$ and $d=1$ and a
fixed value of $\E$, minimizing $\tau$ and minimizing the entropy $s$ give
different analytic equations for $a$ and $c$. For all but finitely many values
of $\E$, these equations have distinct roots, implying that the graphon that
minimizes $\tau$ does not maximize the entropy. If we start at the bipodal
graphon that minimizes $\tau$ for fixed $\E$ and change $c$ to
$c + \delta_c$,
while adjusting $a$ to keep $\E$ fixed, we can increase the entropy to
first order in $\delta_c$ while only increasing $\tau$ to second order in
$\delta_c$.
To compensate for this increase in $\tau$, we can split the first pode
in half, yielding a $B(2,1)$ tripodal graphon with
$d=1$ and $b=0$, with $a_+= a+\delta_a$ and $a_-=a-\delta_a$.
This decreases $\tau$ by $c^3\delta_a^3$, while
decreasing the entropy by $O(\delta_a^2)$. By taking $\delta_a$
to be of order $\delta_c^{2/3}$, we can restore the initial value of $\tau$ at
an entropy cost of $O(\delta_c^{4/3})$.
For $\delta_c$ sufficiently small and of the correct sign,
the $O(\delta_c^1)$ gain in entropy from
changing $c$ and $a$ is greater than the $O(\delta_a^2)=O(\delta_c^{4/3})$
cost in entropy
from having $a_+ \ne a_-$, so the resulting $B(2,1)$ graphon has higher entropy,
but the same values of $\E$ and $\tau$, than the $B(1,1)$ graphon that minimizes $\tau$ (among $B(1,1)$ graphons). This completes step 3.
\hfill$\square$
\noindent{\bf Proof of Theorem~\ref{thm:no-continuous-theorem-2}.}
The proof follows the same strategy as that of the $m=1$ case. In
step 1 we show that the Lagrange multipliers must diverge at a
continuous transition. In step 2 we show that divergent Lagrange
multipliers force a $B(m,1)$ graphon to be a stationary point of $\tau$
for fixed $\E$. In step 3 we show that we can perturb such a
stationary $B(m,1)$ graphon into a $C(m,2)$ graphon with the same
values of $(\E,\tau)$ and more entropy, implying that we are not
actually at the phase boundary, which is a contradiction.
We parametrize $C(m,2)$ graphons
as follows: There are two interchangeable podes, each of size $c/2$, and $m$ podes of size $(1-c)/m$.
Let $p_{ij}$ denote the value of $g(x,y)$ when $x$ is in the $i$-th pode and $y$ is in the $j$-th. We define parameters
$\{a_+, a_-, b, d, p\}$ such that
\begin{equation} p_{ij} = \begin{cases} a_+ & \hbox{if }(i,j)=(1,2) \hbox{ or }(2,1), \cr
a_- & \hbox{if }(i,j)=(1,1) \hbox{ or }(2,2), \cr
d & \hbox{if }i \le 2 <j \hbox{ or } j \le 2 < i, \cr
b & \hbox{if } i=j>2, \cr
p & \hbox{if } i \ne j \hbox{ and } i,j>2.
\end{cases}
\end{equation}
The transition that we are trying to rule out is one where $a_\pm$ approach a common value $a$, resulting in a
$B(m,1)$ graphon with one pode of size $c$ and $m$ podes of size $(1-c)/m$, with
\begin{equation} p_{ij} = \begin{cases} a & \hbox{if }(i,j)=(1,1), \cr
d & \hbox{if }i =1 <j \hbox{ or } j =1 < i, \cr
b & \hbox{if } i=j>1, \cr
p & \hbox{if } i \ne j \hbox{ and } i,j>1.
\end{cases}
\end{equation}
Let $\nabla S$, $\nabla \E$ and $\nabla \tau$ be the gradients of the functionals $S$, $\E$ and $\tau$ with respect to the parameters
$(a,b,c,d,p)$ or $(a_+,a_-,b,c,d,p)$, depending on the phase we are considering. Maximizing the entropy for fixed $(\E,\tau)$ means finding Lagrange multipliers $\alpha$ and $\beta$ such that
\begin{equation}\label{EL2s} \nabla S = \alpha \nabla \E + \beta \nabla \tau. \end{equation}
In the $C(m,2)$ phase, one checks that $\partial S/\partial a_- - \partial S/\partial a_+$ is first order in $(a_+-a_-)$, while
$\partial \tau/\partial a_- - \partial \tau/\partial a_+$ is second order. This forces $\beta$ to diverge to $+\infty$ as
$a_+ - a_- \to 0^+$,
which in turn forces $\alpha$ to diverge to $-\infty$, exactly as in the proof of Theorem
\ref{thm:no-continuous-theorem-1}. This concludes step 1.
Next we consider equation (\ref{EL2s}) in the limit of divergent $\alpha$ and $\beta$. Restricting attention to
those parameters for which $\nabla S$ does not diverge (i.e. those that are not 0 or 1 in the limiting graphon), we
have that $\nabla \E$ and $\nabla \tau$ must be collinear. That is, there is a constant $\lambda$ such that,
for each parameter $q \in (a,b,d,p)$ taking
values in $(0,1)$, we must have that $\partial \tau /\partial q = \lambda \partial \E/\partial q$.
Furthermore, if in the limit $q$ goes to 0 we must have $\partial \tau/\partial q \ge \lambda \partial \E/\partial q$ and if
$q$ approaches 1 we must have $\partial \tau/\partial q \le \lambda \partial \E/\partial q$. That is,
\begin{equation} \label{EL3s} \nabla \tau = \lambda \nabla
\E, \end{equation} where we restrict attention to parameters that
are not 0 or 1, and we have 1-sided inequalities for those remaining
parameters. This is {\em precisely} the set of equations obtained by
ignoring entropy and looking for stationary points of $\tau$ for fixed
$\E$, with $\lambda$ being the Lagrange multiplier of this process.
Seeking entropy maximizers with divergent $(\alpha, \beta)$ is
equivalent to seeking stationary points of $\tau$ for fixed $\E$. This
concludes step 2.
Now we consider a 1-parameter family of graphons, with a fixed value of $\E$, that satisfy all of (\ref{EL3s})
except the equation relating $\partial \tau/\partial c$ and $\partial \E/\partial c$. Since by assumption we start at
a stationary point of $\tau$, moving along this family will only
change $\tau$ to second order or slower in the change in $c$, but for all but finitely many values of $\E$ will change $S$ to
first order. Move a distance $\delta_c$ in the direction of increasing $S$. A priori we do not know that the resulting
change in $\tau$ will be positive, but negative changes in $\tau$ can be compensated for while {\em increasing} $S$, since $\beta$ is positive.
If the change in $\tau$ is positive, then we can compensate by splitting the first pode in half, with $a_+-a_-$ of
order $(\delta_c^{2/3})$, restoring $\tau$ at an entropy cost of $O(\delta_c^{4/3})$.
Since $O(\delta_c^1) > O(\delta_c^{4/3})$, by picking $\delta_c$ small enough we can
always find a $C(m,2)$ graphon that does better than the $B(m,1)$ graphon that was purportedly the entropy
maximizer at the phase boundary, which is a contradiction.
\hfill$\square$
\noindent{\bf Proof of Theorem \ref{thm:3}.}
This proof does not require any consideration of entropy.
The symmetries are simply incompatible,
in that there is no way to approximate a
$B(m-1,1)$ graphon with a $B(m,1)$ graphon, and there is no way to approximate
a $B(n-2,1)$ graphon with an $A(n,0)$ graphon.
\hfill$\square$
We now turn to determining the locations of the phase transitions. For each of the discontinuous transitions, this is a difficult
problem. The optimizing graphons on each side of the transition line are very different, so it is impossible to use perturbation theory to understand the behavior near the line. Instead, one must study each phase separately and approximate the entropy in
each phase as an analytic function of $\E$ and $\tau$ (e.g., by doing a polynomial fit to numerical data). These functions can then
be continued over a larger region and compared. The phase transition line is the locus where the two functions are equal. Using
such techniques, we can localize the transition lines and the triple points with considerable accuracy, but in the end the results
remain grounded in numerical simulation, and cannot provide independent confirmation of our numerics.
For the continuous $A(n,0) \leftrightarrow B(n-1,1)$ transitions, however, it is possible to obtain an analytic equation satisfied
along the transition line. This was already done for the $A(2,0) \leftrightarrow B(1,1)$ transition in \cite{RRS1,RRS2}. Here we extend
the results to $n>2$.
This calculation, although elementary, is too long to present here in
its entirety; we give the method here. For each fixed $n>2$, the
space of $B(n-1,1)$ graphons is 5-dimensional. As in Figure 3, we
imagine $n-1$ intervals of size $\frac{c}{n-1}$ and one of size $1-c$,
and must specify $a=p_{1,1}$, $b=p_{1,2}$, $d=p_{1,n}$, $p=p_{n,n}$
and $c$. An $A(n,0)$ graphon is a special case of this with $p=a$,
$d=b$ and $c=\frac{n-1}{n}$, and for such graphons the parameters $a$
and $b$ are easily computed from the edge and triangle densities:
\begin{equation}\label{a-and-b}
a = \E - (n-1) \left(\frac{\E^3 - \tau}{n-1}\right )^{1/3}; \qquad
b = \E + \left(\frac{\E^3 - \tau}{n-1}\right )^{1/3}.
\end{equation}
Returning to a general $B(n-1,1)$ graphon, we use the constraints on
$\E$ and $\tau$ to eliminate two variables, expressing $d,p$ as functions of $a,b,c$: $d=d(a,b,c), p=p(a,b,c)$.
In fact we don't need to solve explicitly; we only need the first and second partial derivatives of $d(a,b,c),p(a,b,c)$
(evaluated at the parameter values of the $A(n,0)$ graphon), which can be obtained by implicit differentiation of the equations for $\E,\tau$.
Then the entropy $S$ is a function of $a,b,c$: $S=S(a,b,c,d(a,b,c),p(a,b,c)).$ Computing its Hessian
$H(S)$ at the $A(n,0)$ phase when $d=b,p=a$ yields
the matrix
\begin{equation}
H(S)=\begin{pmatrix}
\frac{(n-1) \left((a-b)^2-4 (a-1) a^2 X\right)}{(a-1) a (a-b)^2 n} & -\frac{2 b (n-2) (n-1) X}{(a-b)^2 n} & -\frac{2 (a+b) X}{a-b} \\
-\frac{2 b (n-2) (n-1) X}{(a-b)^2 n} & \frac{(n-2) (n-1) \left((a-b)^2-2 (b-1) b (2 a+b (n-4)) X\right)}{2 (a-b)^2 (b-1) b n} & -\frac{2 b (n-2) X}{a-b} \\
-\frac{2 (a+b) X}{a-b} & -\frac{2 b (n-2) X}{a-b} & \frac{2 n (\log (1-b)-\log (1-a))}{n-1}
\end{pmatrix}
\end{equation}
where $X=\tanh^{-1}(a)-\tanh^{-1}(b)$. Within the $A(n,0)$ phase,
this second variation is negative-definite, while within the
$B(n-1,1)$ phase the matrix has a positive eigenvalue. The boundary is
thus defined (locally) by the analytic equation $\det H(S)=0$.
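The following sketch (ours; it merely transcribes the matrix above, and the choice $n=3$ and the grid are illustrative assumptions) shows how this zero locus can be located numerically:
\begin{verbatim}
# Evaluate det H(S) for the A(n,0) graphon at (eps, tau); the stability
# boundary is its zero locus.  Valid for n > 2 and tau < eps^3.
import numpy as np

def hessian_det(eps, tau, n):
    x = ((eps**3 - tau) / (n - 1)) ** (1/3)
    a, b = eps - (n - 1)*x, eps + x            # equation (a-and-b)
    X = np.arctanh(a) - np.arctanh(b)
    H = np.empty((3, 3))
    H[0,0] = (n-1)*((a-b)**2 - 4*(a-1)*a**2*X) / ((a-1)*a*(a-b)**2*n)
    H[0,1] = H[1,0] = -2*b*(n-2)*(n-1)*X / ((a-b)**2*n)
    H[0,2] = H[2,0] = -2*(a+b)*X / (a-b)
    H[1,1] = (n-2)*(n-1)*((a-b)**2 - 2*(b-1)*b*(2*a+b*(n-4))*X) \
             / (2*(a-b)**2*(b-1)*b*n)
    H[1,2] = H[2,1] = -2*b*(n-2)*X / (a-b)
    H[2,2] = 2*n*(np.log(1-b) - np.log(1-a)) / (n-1)
    return np.linalg.det(H)

# Example: sign changes of det H(S) along the vertical line eps = 0.735
for tau in np.linspace(0.370, 0.395, 6):
    print(f"tau = {tau:.3f}  det H = {hessian_det(0.735, tau, 3):+.4e}")
\end{verbatim}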
One can also consider continuous changes from an $A(n,0)$ graphon to a
graphon with a symmetry other than $B(n-1,1)$. The local stability condition
for such changes works out to be exactly the same as for
$A(n,0) \leftrightarrow B(n-1,1)$. The upshot is that the analytic curve
$\det H(S)=0$ defines the boundary of the region where the $A(n,0)$ graphon
is stable against small perturbations.
In Figure \ref{FIG:Stability-regions} we plot the regions where the
$A(3,0)$ and $A(4,0)$ graphons are stable against small perturbations.
Comparing to Figures 6 and 7, we see that the stability regions are larger
than the actual $A(3,0)$ and $A(4,0)$ phases, and even overlap!
Without assuming {\em anything} about the phase portrait (beyond the
existence of $A(3,0)$ and $A(4,0)$ phases) this proves that
some of the transitions from $A(3,0)$ or $A(4,0)$ to other phases must not
be governed by the local stability condition, and so must be discontinuous.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\textwidth,height=0.41\textwidth]{Natural-Boundaries-30-40-normal-coordinate} \hskip 1cm
\includegraphics[width=0.45\textwidth,height=0.41\textwidth]{Natural-Boundaries-30-40-rescaled-coordinate.png}
\caption{The regions of local stability for $A(3,0)$ and $A(4,0)$
graphons, showing overlap.
The left plot uses coordinates $(\E,\tau)$ and the right plot uses
$(\E, \tau'=\tau-\E(2\E-1))$.}
\label{FIG:Stability-regions}
\end{figure}
The transition $A(2,0) \leftrightarrow B(1,1)$ is a supercritical pitchfork
bifurcation \cite{RRS2}. The analytic family of entropy maximizers in the
$A(2,0)$ phase has a natural analytic continuation into the $B(1,1)$ region,
but no longer maximizes entropy there. The analytic family of entropy maximizers
in the $B(1,1)$ phase doubles back on itself and cannot be continued into
the $A(2,0)$ region. Power series analysis suggests
that something similar happens in
the transition from $A(n,0)$ to $B(n-1,1)$. The family of graphons representing
$A(n,0)$ entropy maximizers can be continued into the $B(n-1,1)$ region but
no longer maximizes entropy. The family of graphons representing $B(n-1,1)$
cannot be analytically continued into the $A(n,0)$ region, but instead
doubles back into the $B(n-1,1)$ region. Unlike in the case of $B(1,1)$, the
``doubling back'' branch is not related by symmetry to the original branch,
and so represents a different set of reduced graphons, all stationary
points of the entropy, but with presumably lower entropy than graphons of the
original branch with the same values of $(\E,\tau)$.
It is a classical result (Mantel's theorem~\cite{Ma}, generalized by
Tur\' an~\cite{Tu}) that the only way to satisfy the constraints of
edge density $\E=1/2$ and triangle density $\tau=0$ is with the complete,
balanced bipartite graph, which implies that these two density
constraints determine the densities of all other
subgraphs. Put another way, there is a {\em unique} reduced graphon $g$ with
$\tau(g)=0$ and $\E(g)=1/2$. Likewise, for each $\E$ there is a unique
reduced graphon with the maximum possible value of $\tau=\E^{3/2}$.
This phenomenon is called `finite forcing'; see~\cite{LS}. However, this
phenomenon only occurs on the boundary of the
phase space:
\begin{theorem}For each pair $(\E_0,\tau_0)$ in the interior of the space of
achievable values (see Figure~\ref{Razborov-triangle}), there exist
multiple inequivalent graphons $g$ with $\E(g)=\E_0$ and $\tau(g)=\tau_0$.
\end{theorem}
\begin{proof}
For fixed $\E_0$, let $g_0$ be a graphon that minimizes $\tau(g)$ given
$\E(g)=\E_0$, and let $g_1$ be a graphon that maximizes $\tau(g)$, and
let $\phi: [0,1] \to [0,1]$ be a general measure-preserving homeomorphism.
The graphon $g_0$ is always $n$-podal for some $n$, while $g_1$ is
bipodal. For $t \in (0,1)$, let $g_{t,\phi} = t g_0 + (1-t) g_1 \circ \phi$.
This will be a multipodal graphon, generically with $2n$ podes whose sizes
depend on the details of $\phi$ but not on the value of $t$. In particular,
we can choose homeomorphisms $\phi_1$ and $\phi_2$ such that the podal
structure of $g_{t,\phi_1}$ is different from that of $g_{t,\phi_2}$, and hence
is different from that of $g_{t',\phi_2}$ for arbitrary $t' \in (0,1)$.
It is easy to check that $\E(g_{t,\phi}) = t \E(g_0) + (1-t) \E(g_1\circ \phi)
= \E_0$, and that for given $\phi$, $\tau(g_{t,\phi})$ is a continuous function
of $t$. By the intermediate
value theorem, we can thus find graphons $g_{t_1,\phi_1}$
and $g_{t_2,\phi_2}$ such that
$\E(g_{t_1,\phi_1})=\E(g_{t_2,\phi_2})=\E_0$ and
$\tau(g_{t_1,\phi_1})=\tau(g_{t_2,\phi_2})=\tau_0$. But
$g_{t_1,\phi_1}$ and $g_{t_2,\phi_2}$ have different podal structures, and so
are inequivalent.
\end{proof}
Although the graphons themselves are not determined by $\E$ and $\tau$,
simulations indicate that maximizing the entropy for fixed $(\E,\tau)$
does give a unique reduced graphon {\it throughout the interior,
except on a lower dimensional set}, the exceptions being
constraint values associated with the discontinuous phase transitions.
(The uniqueness of the entropy maximizer was also proven analytically
for $(\E,\tau)=(1/2,\tau)$ for any $0\le \tau\le 1/8$ in~\cite{RS1}, and for
an open subset of the $F(1,1)$ phase in~\cite{KRRS2}.) We do not yet
have a theoretical understanding of this fundamental issue, sometimes
called the Gibbs phase rule in physics; see however~\cite{KRRS1} for
$k$-star graph models, and~\cite{Ru2,I} for weak versions in physics.
\section{Symmetry}
\label{SEC:symmetry}
Throughout this paper, the symmetry of a phase refers to the symmetry
of the unique entropy-optimizing graphons $g_{\E,\tau}$ at points
$(\E,\tau)$ in that phase. These symmetries occur at two levels.
The first and most significant level of symmetry is a consequence of the multipodality of
the $g_{\E,\tau}$, which means that the set of all nodes is composed of
a finite number of equivalence classes: the probability of an edge
between a node $v_j$ in class $j$ and a node $v_k$ in class $k$ is
independent of $v_j$ and $v_k$, depending only on $j$ and $k$.
The second level
of symmetry concerns the equivalence classes of nodes:
certain equivalence classes have the same sizes and edge probabilities, and others do not.
These size and probability parameters are used to distinguish distinct phases, that is,
maximal open regions in the parameter space where the entropy-maximizing graphon $g_{\E,\tau}$ is unique and varies analytically.
Because of multipodality, the function $g_{\E,\tau}$ restricted to a given phase can be
considered a smooth vector-valued function of $(\E,\tau)$, the
coordinates being the probabilities of edges between node equivalence
classes and the relative sizes of those equivalence classes.
By the symmetry of a phase we refer to the symmetries among these
coordinates, with the following caveat, illustrated through an example. Within the phase
$F(1,1)$ there is a curve such that the two node equivalence classes
have the same size. In a narrow sense this might have signalled a
higher symmetry. However, there is no singular behavior as $(\E,\tau)$
crosses this curve, so the curve is simply part of $F(1,1)$ and does
not affect the `symmetry' of the phase.
Each of the phases has a different symmetry, and we
conjecture that each phase either falls into one of the three families $A(n,0)$,
$B(n,1)$ and $C(n,2)$ or is the exceptional phase $F(1,1)$; in each case the notation completely specifies the
symmetry (except for the pairs $F(1,1),\ B(1,1)$, and $C(1,2),\ B(2,1)$).
An important result of this paper is the conjectured phase diagram,
Figure~\ref{phase-diagram}. The other goal is to show how knowledge of the
structure of optimizing graphons in phases can be helpful in
understanding the role of symmetry in specific features of phase
transitions.
Consider the following, paraphrased from P.W.~Anderson~\cite{An}, a picture he attributes to Landau~\cite{LL}:
\leftskip=0.5truein
\rightskip=0.5truein
\noindent
The First Theorem of solid-state physics states that it is
impossible to change symmetry gradually. A symmetry element is
either there or it is not; there is no way for it to grow
imperceptibly.
\leftskip=0truein
\rightskip=0truein
\noindent
This intuitive picture has been applied, for instance by Landau, to
understand why there is no critical point for the fluid/solid
transition~\cite{An}, though it has been difficult to make the
argument rigorous:
\leftskip=0.5truein
\rightskip=0.5truein
\noindent
This is the theoretical argument, which has appeared to some to be a
little too straightforward to be absolutely convincing~\cite{Pi}.
\leftskip=0truein
\rightskip=0truein
\noindent
We suggest that network models
such as the edge/triangle model of this paper provide a useful
framework for enabling a rigorous study of such symmetry
principles. This was done in~\cite{RRS2} specifically for the issue of
existence of a critical point.
Landau's symmetry principles are commonly applied to the issue of
whether phases are continuous at a transition~\cite{LL},
which is also related to uniqueness of entropy optimizers.
We have proven that in the edge/triangle model certain transitions cannot be
continuous, and evidence suggests that certain other transitions are
continuous.
It is worthwhile discussing how these transitions are approached at the micro-level, that
is, in terms of the multipodal parameters, the probabilities of edges
between various types of nodes. In this regard Figure~\ref{phase-diagram}
and Figure~\ref{FIG:Line E0p735} are useful.
For all the discontinuous transitions, those proven and those only
seen in simulation, we believe from simulation that the transitions can
be visualized as the intersection of a pair of two-dimensional smooth
surfaces, both of which exist beyond the intersection but only
represent entropy-optimizers on one side. See rows $(i),
(iv),(v),(vi)$ in Figure~\ref{FIG:Line E0p735}.
For the continuous transitions there is more variety. By simulation
the continuous $B(2,1)\leftrightarrow A(3,0)$ transition is achieved
through directly acquiring the higher symmetry of $A(3,0)$. See rows
$(ii)$ and $(iii)$ in Figure~\ref{FIG:Line E0p735}. An analogue in
statistical mechanics would be a transition between crystal phases in
which a rhombohedral unit cell becomes, and remains, cubic. On the other hand the
$A(2,0)\leftrightarrow F(1,1)$ transition occurs by the different
bipodal symmetries on both
sides rising to the full symmetry of the constant graphons at the
Erd\H{o}s-R\'{e}nyi curve. It is noteworthy that the full symmetry of
the constant graphon is incompatible with any possible phase in our sense: since $\tau=\E^3$ on this curve, there
is not a two-dimensional family of parameter values.
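Indeed, for the constant graphon $g\equiv\E$ one computes directly
\[
\tau(g) \;=\; \int_0^1\!\!\int_0^1\!\!\int_0^1 g(x,y)\,g(y,z)\,g(z,x)\,dx\,dy\,dz
\;=\; \E^3,
\]
so along the Erd\H{o}s-R\'{e}nyi curve the achievable pairs $(\E,\tau)$ form
only a one-parameter family.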
\section{Conclusion}
\label{SEC:conclusion}
The edge/triangle model in this paper was built by
analogy with microcanonical mean-field models in statistical
mechanics. Mean-field models are useful because frequently the free
energy can be determined analytically as a function of the
thermodynamic parameters. An important distinction for the
edge/triangle and related random graph models is that not only the free energy
(entropy in this case) but also the entropy-optimizing graphons (which are the
analogues of the Gibbs states) can sometimes be determined for a range of
parameters. For instance in~\cite[Section 3.7]{RRS2}
this control of the optimizing states is used to compute how some
global quantity changes with the constraint parameters, a level of
analysis never possible in short-range or mean-field models in
statistical mechanics.
The main result of this paper is the conjectured phase
diagram, Figure~\ref{phase-diagram}, for edge/triangle constraints,
based largely on simulation, including continuity/discontinuity of all
the transitions and the structure of the entropy-optimizing states
within each phase.
The secondary goal is to show how knowledge of the structure of
the optimizing graphons can be helpful in understanding the role of
symmetry in the continuity/discontinuity of phase transitions.
Interesting subjects for further investigation include the triple points where
phases $F(1,1)$, $A(2,0)$ and $B(1,1)$ meet, all the pairwise transitions
being continuous, and where $B(1,1)$, $A(3,0)$
and $B(2,1)$ meet, with $B(1,1)\leftrightarrow A(3,0)$ and $B(2,1)
\leftrightarrow B(1,1)$ discontinuous and $A(3,0)\leftrightarrow
B(2,1)$ continuous.
\section*{Acknowledgments}
The main computational results were obtained using the
computational facilities of the Texas Advanced Computing Center
(TACC). We gratefully acknowledge this computational support.
This work was also partially supported by NSF grants DMS-1208191, DMS-1612668,
DMS-1509088, DMS-1321018 and DMS-1620473, and Simons Investigator grant 327929.
| {
"timestamp": "2017-02-09T02:07:05",
"yymm": "1701",
"arxiv_id": "1701.04444",
"language": "en",
"url": "https://arxiv.org/abs/1701.04444",
"abstract": "Based on numerical simulation and local stability analysis we describe the structure of the phase space of the edge/triangle model of random graphs. We support simulation evidence with mathematical proof of continuity and discontinuity for many of the phase transitions. All but one of themany phase transitions in this model break some form of symmetry, and we use this model to explore how changes in symmetry are related to discontinuities at these transitions.",
"subjects": "Combinatorics (math.CO); Statistical Mechanics (cond-mat.stat-mech); Social and Information Networks (cs.SI); Probability (math.PR)",
"title": "The phases of large networks with edge and triangle constraints",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517488441416,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7089606296736269
} |
https://arxiv.org/abs/2008.02143 | On the correctness of monadic backward induction | In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore-algebra. They hold in familiar settings like those of deterministic or stochastic SDPs but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material. |
\section{Introduction}
\label{section:introduction}
Backward induction is a method introduced by \cite{bellman1957} that
is routinely used to solve \emph{finite-horizon sequential decision
problems (SDPs)}. Such problems lie
at the core of many applications in economics, logistics,
and computer science \citep{finus+al2003, helm2003,
heitzig2012, gintis2007, botta+al2013b,
de_moor1995, de_moor1999}.
Examples include inventory, scheduling and shortest path
problems, but also the search for optimal strategies in
games~\citep{bertsekas1995, diederich01}.
Botta, Jansson and Ionescu (\citeyear{2017_Botta_Jansson_Ionescu})
propose a generic framework for \emph{monadic} finite-horizon SDPs as
generalisation of the deterministic, non-deterministic and stochastic SDPs
treated in control theory textbooks \citep{bertsekas1995,
puterman2014markov}. This framework allows one to
specify such problems and to solve them with a generic version of
backward induction that we will refer to as \emph{monadic backward
induction}.
The Botta-Jansson-Ionescu-framework, subsequently referred to as
\emph{BJI-framework}, \emph{BJI-theory} or simply \emph{framework},
already includes a verification of monadic
backward induction with respect to a certain underlying \emph{value}
function (see Sec.~\ref{subsection:solution_components}). However,
in the literature on stochastic SDPs this formulation of the function
is itself part of the backward induction algorithm and needs to be
verified against an optimisation criterion, the \emph{expected total
reward} \citep[Ch.~4.2]{puterman2014markov}.
For stochastic SDPs semi-formal proofs can be found in textbooks -- but
monadic SDPs are substantially more general than the stochastic SDPs
for which these results are established.
This observation raises a number of questions:
\begin{itemize}
\item What exactly should ``correctness'' mean for a solution of
monadic SDPs?
\item Does monadic backward induction provide
correct solutions in this sense for monadic SDPs in their full
generality?
\item And if not, is there a class of monadic SDPs for which
monadic backward induction does provide provably correct
solutions?
\end{itemize}
In the present paper we address these questions and make the
following contributions to answering them:
\begin{itemize}
\item We put forward a formal specification that monadic backward
induction should meet in order
to be considered ``correct'' as solution method for monadic SDPs.
This specification uses an optimisation criterion that is
a generic version of the \emph{expected total reward} of standard control
theory textbooks.\footnote{Note that in control theory
backward
induction is often referred to as \emph{the dynamic programming
algorithm} where the term \emph{dynamic programming} is used in
the original sense of \cite{bellman1957}.} In analogy, we call this
criterion \emph{measured total reward}.
%
\item We consider the value function underlying monadic backward
induction as ``correct'' if it computes the \emph{measured total reward}.
%
\item If the value function is correct, monadic backward induction can
be proven to be correct in our sense by extending the result of
\cite{2017_Botta_Jansson_Ionescu}.
However, we show that this is not necessarily the case, i.e.
the value function does not compute the \emph{measured total reward}
for arbitrary monadic SDPs.
\item We therefore formulate conditions that identify a class of monadic SDPs
for which the value function and thus monadic backward induction
can be shown to be correct. The conditions are fairly simple and allow for
a neat description in category-theoretical terms using the notion of
Eilenberg-Moore-algebra.
\item We give a formalised proof that monadic backward induction fulfils
the correctness criterion if the conditions hold. This correctness result can
be seen as a generic version of correctness results for standard backward induction
like \citep[Prop.~1.3.1]{bertsekas1995} and
\citep[Th.~4.5.1.c]{puterman2014markov}.
\end{itemize}
Our results rule out the application of backward induction to certain monadic SDPs that
were previously considered admissible in the BJI-framework. Thus, they complement
the verification result of \bottaetal and provide a necessary clarification.
Still, the new conditions are simple enough to be checked for
non-standard instantiations of the framework. This
allows to broaden the applicability of backward induction to settings
which are not commonly discussed in the literature and to obtain a
formalised proof of correctness with considerably less effort.
It is worth stressing that our conditions can
be useful for anyone interested in applying monadic backward induction in
non-standard situations -- completely independently of the BJI-framework.
Finally, the value function underlying monadic backward induction is
also interesting in itself. Given the conditions hold, it can be used to compute
the measured total reward efficiently, using a method reminiscent of a \emph{thinning}
algorithm \cite[Ch.~10]{adwh}.
For the reader unfamiliar with SDPs, we provide a brief informal
overview and two simple examples in the next section. We recap the
BJI-framework and its (partial) verification result for monadic
backward induction in Sec.~\ref{section:framework}.
In Sec.~\ref{section:preparation}
we specify correctness for monadic backward induction
and the BJI-value function. We also
show that in the general monadic case the value function does not
necessarily meet the specification. To resolve this problem, we
identify conditions under which the value function does meet the
specification. These conditions are stated and analysed in
Sec.~\ref{section:conditions}. In Sec.~\ref{section:valval} we
prove that, given the conditions hold, the BJI-value function and monadic
backward induction are correct in the sense defined in
Sec.~\ref{section:preparation}. We discuss the conditions from a more
abstract perspective in Sec.~\ref{section:discussion} and
conclude in Sec.~\ref{section:conclusion}.
Throughout the paper we use Idris as our host language
\citep{JFP:9060502,idrisbook}. We assume some familiarity
with Haskell-like syntax and notions like \emph{functor} and
\emph{monad} as used in functional programming. We tacitly consider
types as logical statements and programs as proofs, justified by the
propositions-as-types correspondence \citep[for an accessible
introduction see][]{DBLP:journals/cacm/Wadler15}.
\paragraph*{Source code.} \hspace{0.1cm}
Our development is
formalised in Idris as an extension of a lightweight version of the
BJI-framework. The proofs are machine-checked and the source code is
available as supplementary material attached to this paper.
The sources of this document have been written in literal Idris and are
available at \citep{IdrisLibsValVal}, together with some example code.
All source files can be type checked with Idris~1.3.2.
\section{Finite-horizon Sequential Decision Problems}
\label{section:SDPs}
In deterministic, non-deterministic and stochastic finite-horizon
SDPs, a decision maker seeks
to control the evolution of a \emph{(dynamical) system} at a finite number of
\emph{decision steps} by selecting certain \emph{controls} in sequence,
one after the other. The controls
available to the decision maker at a given decision step typically
depend on the \emph{state} of the system at that step.
In \emph{deterministic} problems, selecting a control in a state at
decision step \ensuremath{\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}} determines a unique next state at decision step
\ensuremath{\textcolor{PIKgreenDark2}{t}\mathbin{+}\mathrm{1}} through a given \emph{transition function}.
In \emph{non-deterministic} problems, the transition function yields a
whole set of \emph{possible} states at the next decision step.
In \emph{stochastic} problems, the transition function yields a
\emph{probability distribution} on states at the next decision step.
The notion of \emph{monadic} problem generalises that of deterministic,
non-deterministic and stochastic problem through a transition
function that yields an \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of next states where \ensuremath{\textcolor{PIKcyanDark2}{M}} is a
monad.
For example, the identity monad can be applied to model deterministic
systems. Non-deterministic systems can be represented in terms of
transition functions that return lists (or some other representations
of sets) of next states. Stochastic systems can be represented in
terms of probability distribution monads \citep{giry1981,
DBLP:journals/jfp/ErwigK06, DBLP:journals/scp/AudebaudP09,
DBLP:journals/tcs/Jacobs11}.
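The following minimal sketch (plain Idris~1 syntax, with a hypothetical
two-element state type and control type that are not part of the framework)
illustrates how only the monad wrapping the result distinguishes the three
kinds of transition functions:
\begin{verbatim}
data S = S0 | S1
data C = Stay | Move

-- deterministic: M = Identity, here elided to a plain function
nextId : S -> C -> S
nextId s  Stay = s
nextId S0 Move = S1
nextId S1 Move = S0

-- non-deterministic: M = List of possible next states
nextList : S -> C -> List S
nextList s Stay = [s]
nextList _ Move = [S0, S1]

-- stochastic: M = a naive distribution monad of (state, weight) pairs
SimpleProb : Type -> Type
SimpleProb a = List (a, Double)

nextProb : S -> C -> SimpleProb S
nextProb s Stay = [(s, 1.0)]
nextProb _ Move = [(S0, 0.5), (S1, 0.5)]
\end{verbatim}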
The uncertainty monad, the states, the controls and the next function
define what is often called a \emph{decision process}.
The idea of sequential decision problems is that each single decision
yields a \emph{reward} and these rewards add up to a \emph{total
reward} over all decision steps. Rewards are often represented by
values of a numeric type, and added up using the canonical addition.
If the transition function and thus
the evolution of the system is not deterministic, then the resulting
possible total rewards need to be aggregated to yield a single outcome
value.
In stochastic SDPs, evolving the
underlying stochastic system leads to a probability distribution
on total rewards which is usually aggregated using the familiar
\emph{expected value} measure. The value thus obtained is called the
\emph{expected total reward} \citep[ch.~4.1.2]{puterman2014markov} and
its role is central: It is the quantity that is to be optimised in
an SDP.
In monadic SDPs, the measure is generic, i.e. it is not fixed in advance
but has to be given as part of the specification of a concrete problem.
Therefore we will generalise the notion of \emph{expected total reward} to
a corresponding notion for monadic SDPs that we call
\emph{measured total reward} in analogy (see Sec.~\ref{section:preparation}).
\emph{Solving a stochastic SDP} consists in \emph{finding a list of rules
for selecting controls that maximises the expected total reward for
\ensuremath{\textcolor{PIKgreenDark2}{n}} decision steps when starting at
decision step \ensuremath{\textcolor{PIKgreenDark2}{t}}}.
Similarly, we define that \emph{solving a monadic SDP} consists in
\emph{finding a list of rules for selecting controls that maximises
the measured total reward}.
This means that when starting from any initial state at decision step
\ensuremath{\textcolor{PIKgreenDark2}{t}}, following the computed list of rules for selecting controls will
result in a value that is maximal as a measure of the sum of rewards
along all possible trajectories rooted in that initial state.
Equivalently, rewards can instead be considered as \emph{costs}
that need to be \emph{minimised}. This dual perspective is taken e.g.
in \citep{bertsekas1995}. In the subsequent sections we will follow
the terminology of the BJI-framework and \citep{puterman2014markov}
and speak of ``rewards'', but our second example below will illustrate
the ``cost'' perspective.
In mathematical theories of optimal control, the rules for selecting
controls are called \emph{policies}. A \emph{policy} for a decision
step is simply a function that maps each possible state to a
control. As mentioned above, the controls available in a given
state typically depend on that state, thus policies are dependently typed
functions. A list of such policies is called a \emph{policy sequence}.
The central idea underlying backward induction is to compute a globally
optimal solution of a multi-step SDP incrementally by solving
local optimisation problems at each decision step. This is captured
by \emph{Bellman's principle}: \emph{Extending an optimal
policy sequence with an optimal policy yields again an optimal policy
sequence}. However, as we will see in Sec.~\ref{subsection:counterEx},
one has to carefully check whether for a given SDP backward
induction is indeed applicable.
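For orientation, in the familiar stochastic case this local optimisation takes
the textbook form (cf.~\citep[Ch.~4]{puterman2014markov})
\[
v_t(x) \;=\; \max_{y}\; \sum_{x'} p(x' \mid x,y)\,\bigl(r(t,x,y,x') + v_{t+1}(x')\bigr),
\]
where $p$ denotes the transition probabilities and $r$ the rewards; the
monadic generalisation studied below replaces the expected value by a
generic measure.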
Two features are crucial for finite-horizon, monadic SDPs to be
solvable with the BJI-framework that we study in this
paper: (1) the number of decision steps has to be given explicitly as
input to the backward induction and (2) at each decision step, the
number of possible next states has to be \emph{finite}.
While (2) is a necessary condition for backward induction to be
computable, (1) is a genuine limitation of the BJI-framework: in many SDPs,
for example in a game of tic-tac-toe, the number of decision steps is
bounded but not known a priori.
Before we discuss the BJI-framework in the next section, we illustrate
the notion of sequential decision problem with two simple examples,
one in which the purpose is to maximise rewards and one in which the
purpose is to minimise costs. Rewards and costs in these examples are
just natural numbers and are summed up with ordinary addition. The
first example is a non-deterministic SDP. Although it is somewhat
oversimplified, it has the advantage of being tractable for
computations by hand while still being sufficient as basis for
illustrations in
sections~\ref{section:framework}--\ref{section:conditions}. The second
example is a deterministic SDP that stands for the important class of
scheduling SDPs. This problem highlights why dependent types are necessary
to model state and control spaces accurately. As in these simple
examples state and control spaces are finite, the transition functions
can be described by directed graphs. These are given in
Fig.~\ref{fig:examplesGraph}.
\begin{exm}[\emph{A toy climate problem}]
\label{subsection:example1SDPs}
\begin{figure}
\centering
\framebox{
\parbox{\textwidth-0.5cm}{
\vspace{0.2cm}
\begin{subfigure}[b]{.35\textwidth-0.55cm}
\centering
\includegraphics[height=0.26\textheight]{img/fig1a.eps}
\caption{Example~1}
\label{fig:example1}
\end{subfigure}
\begin{subfigure}[b]{.65\textwidth}
\centering
\includegraphics[height=0.25\textheight]{img/fig1b.eps}
\caption{Example~2}
\label{fig:example2}
\end{subfigure}
\begin{small}
\caption{Transition graphs for the example SDPs described in
Sec.~\ref{section:SDPs}. The edge labels denote pairs
$\textit{control} \mid \textit{reward}$ for the associated
transitions. In the first example, state and control spaces
are constant over time; we have therefore omitted the temporal
dimension.
\label{fig:examplesGraph}
}
\end{small}
}
}
\end{figure}
Our first example is a variant of a
stochastic climate science SDP studied in \citep{esd-9-525-2018},
stripped down to a simple non-deterministic SDP. At every decision
step, the world may be in
one of two \emph{states}, namely \emph{Good} or \emph{Bad}, and the
\emph{controls} determine whether a \emph{Low} or a \emph{High} amount
of green house gases is emitted into the
atmosphere. If the world is
in the \emph{Good} state, choosing \emph{Low} emissions will definitely
keep the world in the \emph{Good} state, but the result of choosing high
emissions is non-deterministic: either the world may stay in the
\emph{Good} or tip to the \emph{Bad} state. Similarly, in the \emph{Bad}
state, \emph{High} emissions will definitely keep the world in
\emph{Bad}, while with \emph{Low} emissions it might either stay in
\emph{Bad} or recover and return to the \emph{Good} state. The
transitions just described define a non-deterministic \emph{transition
function}. The \emph{rewards}
associated with each transition are determined by the respective control
and reached state. Now we can formulate an SDP: \emph{``Which
policy sequence will maximise the worst-case sum of rewards along all
possible trajectories when taking \ensuremath{\textcolor{PIKgreenDark2}{n}} decisions starting at decision
step \ensuremath{\textcolor{PIKgreenDark2}{t}}?''}. In this simple example the question is not hard to
answer: always choose \emph{Low} emissions, independent of decision step and
state. The \emph{optimal policy sequence} for any \ensuremath{\textcolor{PIKgreenDark2}{n}} and \ensuremath{\textcolor{PIKgreenDark2}{t}} would
thus consist of \ensuremath{\textcolor{PIKgreenDark2}{n}} constant \emph{Low} functions. But in a more
realistic example the
situation will be more involved: every option will have its benefits and
drawbacks encoded in a more complicated reward function, uncertainties
might come with different probabilities, there might be intermediate
states, different combinations of control options etc. For more along
these lines we refer the interested reader to~\citep{esd-9-525-2018}.
\end{exm}
\begin{exm}[\emph{Scheduling}]
\label{subsection:example2SDPs}
Scheduling
problems serve as canonical examples in control theory textbooks. The
one we present here is a slightly modified version of
\citep[Example~1.1.2]{bertsekas1995}.
Think of some machine in a factory that can perform different
operations, say $A, B, C$ and $D$. Each of these operations is supposed
to be performed once. The machine can only perform one operation at a
time, thus an order has to be fixed in which to perform the
operations. Setting the machine up for each operation incurs a specific
cost that might vary according to which operation has been performed
before. Moreover, operation $B$ can only be performed after operation
$A$ has already been completed, and operation $D$ only after operation
$C$. It suffices to fix the order in which the first three operations
are to be performed as this uniquely determines which will be the
fourth task. The aim is now to choose an order that minimises
the total cost of performing the four operations.
This situation can be modelled as a problem with three
decision steps, as follows: The \emph{states at each step} are the sequences of
operations already performed, with the empty sequence at step 0. The
\emph{controls at a decision step and in a state} are the operations which have
not already been performed at previous steps and which are permitted in
that state. For example, at decision step 0, only controls $A$ and $C$
are available because of the above constraint on performing $B$ and
$D$.
The \emph{transition} and \emph{cost} functions of the problem are depicted by
the graph in Fig.~\ref{fig:example2}. As the problem is deterministic,
picking a control will result in a unique next state and each sequence
of policies will result in a unique trajectory. In this setting,
solving the SDP
reduces to finding a control sequence that \emph{minimises the sum of costs
along the single resulting trajectory}. In Fig.~\ref{fig:example2} this
is the sequence $CAB(D)$.
\end{exm}
\section{The BJI-framework}
\label{section:framework}
The BJI-framework is a dependently typed formalisation of optimal
control theory for finite-horizon, discrete-time SDPs. It extends
mathematical formulations for stochastic
SDPs \citep{bertsekas1995, bertsekasShreve96, puterman2014markov}
to the general problem of optimal decision making under \emph{monadic}
uncertainty.
For monadic SDPs, the framework provides a
generic implementation of backward induction. It has been applied to
study the impact of uncertainties on optimal emission policies
\citep{esd-9-525-2018} and is currently used to investigate solar
radiation management problems under tipping point
uncertainty \citep{TiPES::Website}.
In a nutshell, the framework consists of two sets of components: one
for the \emph{specification} of an SDP and one for its \emph{solution}
with monadic backward induction.
\subsection{Problem specification components}
\label{subsection:specification_components}
In the following we discuss the components necessary to specify a
monadic SDP.\\
The first one is the monad \ensuremath{\textcolor{PIKcyanDark2}{M}}:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{13}{@{}>{\hspre}c<{\hspost}@{}}%
\column{13E}{@{}l@{}}%
\column{16}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{M}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}\textcolor{PIKcyanDark2}{Type} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
As discussed in the previous
section, \ensuremath{\textcolor{PIKcyanDark2}{M}} accounts for the uncertainties that affect the decision
problem. For our first example, we could for instance define \ensuremath{\textcolor{PIKcyanDark2}{M}} to be
\ensuremath{\textcolor{PIKcyanDark2}{List}}. For the second example it suffices to use \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{Id}} as the
problem is deterministic.
Further, the BJI-framework supports the specification of the
\emph{states}, the \emph{controls} and the \emph{transition function} of
an SDP through
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{13}{@{}>{\hspre}c<{\hspost}@{}}%
\column{13E}{@{}l@{}}%
\column{16}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{X}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Y}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{next}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x} \to \textcolor{PIKcyanDark2}{M}\;(\textcolor{PIKcyanDark2}{X}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
The interpretation is that \ensuremath{\textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}} represents the states at decision step
\ensuremath{\textcolor{PIKgreenDark2}{t}}.\footnote{Note that in Idris, $S$ and $Z$ are the familiar
constructors of the data type \ensuremath{\mathbb{N}}.} In the first example of
Sec.~\ref{section:SDPs}, there are just two states
(\ensuremath{\textcolor{PIKcyanDark2}{Good}} and \ensuremath{\textcolor{PIKcyanDark2}{Bad}}) such that \ensuremath{\textcolor{PIKcyanDark2}{X}} is a constant family:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}c<{\hspost}@{}}%
\column{15E}{@{}l@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{State}{}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}\textcolor{PIKcyanDark2}{Good}\mid \textcolor{PIKcyanDark2}{Bad}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{\char95 t}{}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}\textcolor{PIKcyanDark2}{State}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
But in the second example the possible states depend on the decision
step \ensuremath{\textcolor{PIKgreenDark2}{t}}. Taking for example step \ensuremath{\textcolor{PIKgreenDark2}{t}\mathrel{=}\mathrm{2}}, we could simply define
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{6}{@{}>{\hspre}l<{\hspost}@{}}%
\column{16}{@{}>{\hspre}c<{\hspost}@{}}%
\column{16E}{@{}l@{}}%
\column{19}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{State2}{}\<[16]%
\>[16]{}\mathrel{=}{}\<[16E]%
\>[19]{}\textcolor{PIKcyanDark2}{AB}\mid \textcolor{PIKcyanDark2}{AC}\mid \textcolor{PIKcyanDark2}{CA}\mid \textcolor{PIKcyanDark2}{CD}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{X}\;{}\<[6]%
\>[6]{}\mathrm{2}{}\<[16]%
\>[16]{}\mathrel{=}{}\<[16E]%
\>[19]{}\textcolor{PIKcyanDark2}{State2}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Alternatively, we could employ type dependency in a more systematic way
to express that in Ex.~2 states are admissible sequences of
actions
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{Act}\mathrel{=}\textcolor{PIKcyanDark2}{A}\mid \textcolor{PIKcyanDark2}{B}\mid \textcolor{PIKcyanDark2}{C}\mid \textcolor{PIKcyanDark2}{D}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Recall that some actions require another action to have been performed before,
that no action may be carried out twice, and that the problem is limited to
3 steps. These conditions might be captured by a type-valued predicate
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{AdmissibleState}{}\<[20]%
\>[20]{} \mathop{:} \{\mskip1.5mu \textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{Vect}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKcyanDark2}{Act} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
and the type of states might then be expressed as a dependent pair
of a vector of actions and a proof that it is admissible (a possible
concrete definition of the predicate is sketched below).
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}\mathrel{=}(\textcolor{PIKgreenDark2}{as} \mathop{:} \textcolor{PIKcyanDark2}{Vect}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKcyanDark2}{Act} \mathbin{*\!*} \textcolor{PIKcyanDark2}{AdmissibleState}\;\textcolor{PIKgreenDark2}{as}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
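One possible concrete definition of the predicate (a sketch in plain Idris~1
syntax, not the formalisation from the supplementary material) decides
admissibility with a boolean check and lifts it to a type using
\texttt{Data.So}:
\begin{verbatim}
import Data.Vect
import Data.So

data Act = A | B | C | D

Eq Act where
  A == A = True
  B == B = True
  C == C = True
  D == D = True
  _ == _ = False

-- an action is allowed if its prerequisite (if any) has been performed
allowed : Vect t Act -> Act -> Bool
allowed as B = elem A as
allowed as D = elem C as
allowed _  _ = True

-- admissible: not performed before and prerequisite satisfied
admissibleAct : Vect t Act -> Act -> Bool
admissibleAct as a = not (elem a as) && allowed as a

-- lift the boolean check on every prefix to a type-valued predicate
AdmissibleState : Vect t Act -> Type
AdmissibleState []        = ()
AdmissibleState (a :: as) = (So (admissibleAct as a), AdmissibleState as)
\end{verbatim}
With such a definition, the predicate \texttt{AdmissibleControl} introduced
below reduces to the admissibility check for the extended vector.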
Similarly, \ensuremath{\textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}} represents the controls available at decision step
\ensuremath{\textcolor{PIKgreenDark2}{t}} and in state \ensuremath{\textcolor{PIKgreenDark2}{x}} and \ensuremath{\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}} represents the states that can
be obtained by selecting control \ensuremath{\textcolor{PIKgreenDark2}{y}} in state \ensuremath{\textcolor{PIKgreenDark2}{x}} at decision step
\ensuremath{\textcolor{PIKgreenDark2}{t}}. In our first example, the available controls remain constant over
time (\ensuremath{\textcolor{PIKcyanDark2}{High}} or \ensuremath{\textcolor{PIKcyanDark2}{Low}}) like the states, but for the second example,
the type dependency is relevant: e.g. we might define (again at step
\ensuremath{\textcolor{PIKgreenDark2}{t}\mathrel{=}\mathrm{2}})
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{6}{@{}>{\hspre}l<{\hspost}@{}}%
\column{9}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{CtrlAC}{}\<[17]%
\>[17]{}\mathrel{=}{}\<[17E]%
\>[20]{}\textcolor{PIKcyanDark2}{B}\mid \textcolor{PIKcyanDark2}{D}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Y}\;{}\<[6]%
\>[6]{}\mathrm{2}\;{}\<[9]%
\>[9]{}\textcolor{PIKcyanDark2}{AC}{}\<[17]%
\>[17]{}\mathrel{=}{}\<[17E]%
\>[20]{}\textcolor{PIKcyanDark2}{CtrlAC}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
or, more elegantly, use dependent pairs to define the type of controls,
using the observation that an action is an
admissible control for a state represented by a vector of actions
if adding the action to the vector results again in an admissible state:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{22}{@{}>{\hspre}c<{\hspost}@{}}%
\column{22E}{@{}l@{}}%
\column{25}{@{}>{\hspre}l<{\hspost}@{}}%
\column{27}{@{}>{\hspre}c<{\hspost}@{}}%
\column{27E}{@{}l@{}}%
\column{30}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{AdmissibleControl}{}\<[22]%
\>[22]{} \mathop{:} {}\<[22E]%
\>[25]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{Vect}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKcyanDark2}{Act} \to \textcolor{PIKcyanDark2}{Act} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{AdmissibleControl}\;\textcolor{PIKgreenDark2}{as}\;\textcolor{PIKgreenDark2}{a}{}\<[27]%
\>[27]{}\mathrel{=}{}\<[27E]%
\>[30]{}\textcolor{PIKcyanDark2}{AdmissibleState}\;(\textcolor{PIKgreenDark2}{a}\mathbin{::}\textcolor{PIKgreenDark2}{as}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}(\textcolor{PIKgreenDark2}{a} \mathop{:} \textcolor{PIKcyanDark2}{Act} \mathbin{*\!*} \textcolor{PIKcyanDark2}{AdmissibleControl}\;(\textcolor{PIKgreenDark2}{fst}\;\textcolor{PIKgreenDark2}{x})\;\textcolor{PIKgreenDark2}{a}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Recall from Sec.~\ref{section:SDPs} that the monad, the states,
the controls and the next function together define a decision process.
In order to fully
specify a decision problem, one also has to define the rewards obtained
at each decision step and the operation that is used to add up rewards.
In the BJI-framework, this is done in terms of
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{13}{@{}>{\hspre}c<{\hspost}@{}}%
\column{13E}{@{}l@{}}%
\column{16}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Val}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}\textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{reward}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x} \to \textcolor{PIKcyanDark2}{X}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\\
\>[3]{}( \mathbin{\oplus} ){}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}\textcolor{PIKcyanDark2}{Val} \to \textcolor{PIKcyanDark2}{Val} \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Here, \ensuremath{\textcolor{PIKcyanDark2}{Val}} is the type of rewards and \ensuremath{\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}\;\textcolor{PIKgreenDark2}{x'}} is the reward
obtained by selecting control \ensuremath{\textcolor{PIKgreenDark2}{y}} in state \ensuremath{\textcolor{PIKgreenDark2}{x}} if the next state is
\ensuremath{\textcolor{PIKgreenDark2}{x'}}, an element of the state space at step \ensuremath{\textcolor{PIKgreenDark2}{t}\mathbin{+}\mathrm{1}}.
Note that for deterministic problems it is unnecessary to parameterise
the reward function over the next state as it is unique and can thus be
obtained from the current state and control. But for non-deterministic
problems it is useful to be able to assign rewards depending on the
(uncertain) outcome of a transition.
A few remarks are in order here.
\begin{itemize}
\item In many applications, \ensuremath{\textcolor{PIKcyanDark2}{Val}} is a numerical type and the controls of the
SDP represent resources (fuel, water, etc.) that come at a cost. In these
cases, the reward function encodes the costs and perhaps also the
benefits associated with a decision step. Often, the latter also depends
both on the current state \ensuremath{\textcolor{PIKgreenDark2}{x}} and on the next state \ensuremath{\textcolor{PIKgreenDark2}{x'}}. The
BJI-framework nicely copes with all these situations.
\item The operation \ensuremath{ \mathbin{\oplus} } determines how rewards are added up. It
could be a simple arithmetic operation, but it could also be defined
in terms of problem-specific parameters, e.g.\ discount factors to
give more weight to current rewards as compared to future rewards.
\item Mapping \ensuremath{\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}} onto \ensuremath{\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}} (remember
that \ensuremath{\textcolor{PIKcyanDark2}{M}} is a monad and thus a functor) yields a value of type \ensuremath{\textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{Val}}. These are the \emph{possible} rewards obtained by selecting
control \ensuremath{\textcolor{PIKgreenDark2}{y}} in state \ensuremath{\textcolor{PIKgreenDark2}{x}} at decision step \ensuremath{\textcolor{PIKgreenDark2}{t}} (see the sketch after this list).
In mathematical theories of optimal control, the implicit assumption
often is that \ensuremath{\textcolor{PIKcyanDark2}{Val}} is equal to \ensuremath{ \mathbb{R} } and that the \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure is a
probability distribution on real numbers which can be evaluated with
the \emph{expected value} measure.
However, in many practical applications, measuring uncertainty of rewards in
terms of the expected value is inadequate \citep{mercure2020risk}.
The BJI-framework therefore takes a generic approach and allows the
specification of SDPs in terms of problem-specific measures.
\end{itemize}
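To make the last remark concrete, here is a minimal self-contained sketch
(plain Idris~1 syntax, instantiated with the transition and reward values of
Ex.~\ref{subsection:example1SDPs}, cf.~Fig.~\ref{fig:example1Formal}; the
decision-step index is dropped since states and controls are constant over
time, and the name \texttt{oneStepValue} is ours):
\begin{verbatim}
data States   = Good | Bad
data Controls = High | Low

next : States -> Controls -> List States
next Good Low  = [Good]
next Bad  High = [Bad]
next _    _    = [Good, Bad]

-- rewards depend on the control and on the (uncertain) next state
reward : States -> Controls -> States -> Nat
reward _ Low  Good = 3
reward _ High Good = 2
reward _ Low  Bad  = 1
reward _ High Bad  = 0

-- worst-case measure: the minimum of the possible rewards
meas : List Nat -> Nat
meas []        = 0
meas (v :: vs) = foldl min v vs

-- mapping (reward x y) onto (next x y) yields the M-structure of
-- possible rewards; meas aggregates them into a single value
oneStepValue : States -> Controls -> Nat
oneStepValue x y = meas (map (reward x y) (next x y))
\end{verbatim}
For instance, \texttt{oneStepValue Good High} evaluates to
\texttt{meas [2, 0] = 0}, reflecting the worst case of tipping to the
\texttt{Bad} state.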
As just discussed, in SDPs with uncertainty a measure is required to
aggregate multiple possible rewards. The BJI-framework supports
the specification of the measure by:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{meas}{}\<[20]%
\>[20]{} \mathop{:} {}\<[20E]%
\>[23]{}\textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{Val} \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
In our first example we could use the minimum of a list as worst-case
measure, while in the second example the measure
would just be the identity (as the problem is deterministic).
Before we get to the solution components of the BJI-framework, one
more ingredient needs to be specified. In the next section we will
formalise a notion of optimality for which it is necessary to be able
to compare elements of \ensuremath{\textcolor{PIKcyanDark2}{Val}}.
The framework allows users to compare \ensuremath{\textcolor{PIKcyanDark2}{Val}}-values in terms of a
problem-specific comparison operator:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}( \,\sqsubseteq\, ){}\<[20]%
\>[20]{} \mathop{:} {}\<[20E]%
\>[23]{}\textcolor{PIKcyanDark2}{Val} \to \textcolor{PIKcyanDark2}{Val} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
The operator \ensuremath{( \,\sqsubseteq\, )} is required to define a total preorder on \ensuremath{\textcolor{PIKcyanDark2}{Val}}.
In our two examples, we simply have:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Val}{}\<[20]%
\>[20]{}\mathrel{=}{}\<[20E]%
\>[23]{}\mathbb{N}{}\<[E]%
\\
\>[3]{}( \mathbin{\oplus} ){}\<[20]%
\>[20]{}\mathrel{=}{}\<[20E]%
\>[23]{}(\mathbin{+}){}\<[E]%
\\
\>[3]{}( \,\sqsubseteq\, ){}\<[20]%
\>[20]{}\mathrel{=}{}\<[20E]%
\>[23]{}(\leq){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Three more ingredients are necessary to fully specify a monadic SDP,
but we defer discussing them until they come
up in the next subsection.
For illustration, a formalisation of Ex.~\ref{subsection:example1SDPs} can be found in
Fig.~\ref{fig:example1Formal}. A formalisation of Ex.~\ref{subsection:example2SDPs} is
included in the supplementary material.
\begin{figure}
\centering
\framebox{
\parbox{\textwidth-0.5cm}{
\scalebox{0.8}{
\small
\begin{tabular}{m{7.3cm} m{6.7cm}}
\multicolumn{2}{l}{ \parbox[t]{13cm}{
\textbf{Formalisation of Ex.~\ref{subsection:example1SDPs}:}\\
We use the monoid and preorder structure on \ensuremath{\mathbb{N}}, i.e.
\ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\mathbb{N}}, \ensuremath{( \mathbin{\oplus} )\mathrel{=}(\mathbin{+})}, \ensuremath{\textcolor{PIKgreenDark2}{zero}\mathrel{=}\mathrm{0}}, \ensuremath{(\leq)\mathrel{=}( \,\sqsubseteq\, )}.
}
}
\\
\parbox{6.9cm}{
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Measure:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{12}{@{}>{\hspre}c<{\hspost}@{}}%
\column{12E}{@{}l@{}}%
\column{15}{@{}>{\hspre}l<{\hspost}@{}}%
\column{22}{@{}>{\hspre}c<{\hspost}@{}}%
\column{22E}{@{}l@{}}%
\column{25}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{minList}{}\<[12]%
\>[12]{} \mathop{:} {}\<[12E]%
\>[15]{}\textcolor{PIKcyanDark2}{List}\;\mathbb{N} \to \mathbb{N}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{minList}\;[\mskip1.5mu \mskip1.5mu]{}\<[22]%
\>[22]{}\mathrel{=}{}\<[22E]%
\>[25]{}\mathrm{0}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{minList}\;(\textcolor{PIKgreenDark2}{x}\mathbin{::}[\mskip1.5mu \mskip1.5mu]){}\<[22]%
\>[22]{}\mathrel{=}{}\<[22E]%
\>[25]{}\textcolor{PIKgreenDark2}{x}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{minList}\;(\textcolor{PIKgreenDark2}{x}\mathbin{::}\textcolor{PIKgreenDark2}{xs}){}\<[22]%
\>[22]{}\mathrel{=}{}\<[22E]%
\>[25]{}\textcolor{PIKgreenDark2}{x}\mathbin{`\textcolor{PIKgreenDark2}{minimum}`}\textcolor{PIKgreenDark2}{xs}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{minList}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
States and Controls:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{12}{@{}>{\hspre}c<{\hspost}@{}}%
\column{12E}{@{}l@{}}%
\column{15}{@{}>{\hspre}l<{\hspost}@{}}%
\column{18}{@{}>{\hspre}c<{\hspost}@{}}%
\column{18E}{@{}l@{}}%
\column{21}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{States}{}\<[18]%
\>[18]{}\mathrel{=}{}\<[18E]%
\>[21]{}\textcolor{PIKcyanDark2}{Good}\mid \textcolor{PIKcyanDark2}{Bad}{}\<[E]%
\\
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{Controls}{}\<[18]%
\>[18]{}\mathrel{=}{}\<[18E]%
\>[21]{}\textcolor{PIKcyanDark2}{High}\mid \textcolor{PIKcyanDark2}{Low}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{\char95 t}{}\<[12]%
\>[12]{}\mathrel{=}{}\<[12E]%
\>[15]{}\textcolor{PIKcyanDark2}{States}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKgreenDark2}{\char95 x}{}\<[12]%
\>[12]{}\mathrel{=}{}\<[12E]%
\>[15]{}\textcolor{PIKcyanDark2}{Controls}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
}
&
\parbox{6.5cm}{
Transition function:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}l<{\hspost}@{}}%
\column{23}{@{}>{\hspre}c<{\hspost}@{}}%
\column{23E}{@{}l@{}}%
\column{26}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKcyanDark2}{Good}\;{}\<[17]%
\>[17]{}\textcolor{PIKcyanDark2}{Low}{}\<[23]%
\>[23]{}\mathrel{=}{}\<[23E]%
\>[26]{}[\mskip1.5mu \textcolor{PIKcyanDark2}{Good}\mskip1.5mu]{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKcyanDark2}{Bad}\;{}\<[17]%
\>[17]{}\textcolor{PIKcyanDark2}{High}{}\<[23]%
\>[23]{}\mathrel{=}{}\<[23E]%
\>[26]{}[\mskip1.5mu \textcolor{PIKcyanDark2}{Bad}\mskip1.5mu]{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKgreenDark2}{\char95 x}\;{}\<[17]%
\>[17]{}\textcolor{PIKgreenDark2}{\char95 y}{}\<[23]%
\>[23]{}\mathrel{=}{}\<[23E]%
\>[26]{}[\mskip1.5mu \textcolor{PIKcyanDark2}{Good},\textcolor{PIKcyanDark2}{Bad}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Rewards:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{22}{@{}>{\hspre}l<{\hspost}@{}}%
\column{28}{@{}>{\hspre}c<{\hspost}@{}}%
\column{28E}{@{}l@{}}%
\column{31}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKgreenDark2}{\char95 x}\;\textcolor{PIKcyanDark2}{Low}\;{}\<[22]%
\>[22]{}\textcolor{PIKcyanDark2}{Good}{}\<[28]%
\>[28]{}\mathrel{=}{}\<[28E]%
\>[31]{}\mathrm{3}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKgreenDark2}{\char95 x}\;\textcolor{PIKcyanDark2}{High}\;{}\<[22]%
\>[22]{}\textcolor{PIKcyanDark2}{Good}{}\<[28]%
\>[28]{}\mathrel{=}{}\<[28E]%
\>[31]{}\mathrm{2}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKgreenDark2}{\char95 x}\;\textcolor{PIKcyanDark2}{Low}\;{}\<[22]%
\>[22]{}\textcolor{PIKcyanDark2}{Bad}{}\<[28]%
\>[28]{}\mathrel{=}{}\<[28E]%
\>[31]{}\mathrm{1}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{\char95 t}\;\textcolor{PIKgreenDark2}{\char95 x}\;\textcolor{PIKcyanDark2}{High}\;{}\<[22]%
\>[22]{}\textcolor{PIKcyanDark2}{Bad}{}\<[28]%
\>[28]{}\mathrel{=}{}\<[28E]%
\>[31]{}\mathrm{0}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
}
\end{tabular}
}
\caption{A formalisation of Ex.~\ref{subsection:example1SDPs} from
Sec.~\ref{section:SDPs}
\label{fig:example1Formal}.
}
}
}
\end{figure}
\subsection{Problem solution components}
\label{subsection:solution_components}
The second set of components of the BJI-framework is an extension of the
mathematical theory of optimal control for stochastic sequential
decision problems to monadic problems. Here, we provide a summary of the
theory. Motivation and full details can be found in
\citep{2014_Botta_et_al, 2017_Botta_Jansson_Ionescu, esd-9-525-2018}.
The theory formalises the notion of policy (decision
rule) from Sec.~\ref{section:SDPs} as
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{13}{@{}>{\hspre}c<{\hspost}@{}}%
\column{13E}{@{}l@{}}%
\column{16}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Policy}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}{}\<[13]%
\>[13]{}\mathrel{=}{}\<[13E]%
\>[16]{}(\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Policy sequences are then essentially vectors
of policies\footnote{The curly brackets in the types of \ensuremath{\textcolor{PIKcyanDark2}{Nil}} and
\ensuremath{(\mathbin{::})} indicate that \ensuremath{\textcolor{PIKgreenDark2}{t}} and \ensuremath{\textcolor{PIKgreenDark2}{n}} are implicit arguments.}.
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{19}{@{}>{\hspre}c<{\hspost}@{}}%
\column{19E}{@{}l@{}}%
\column{22}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{PolicySeq}{}\<[19]%
\>[19]{} \mathop{:} {}\<[19E]%
\>[22]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Type}\;\mathbf{where}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\textcolor{PIKcyanDark2}{Nil}{}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKcyanDark2}{Z}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\mathbin{::}){}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Notice the role of the step (time) index \ensuremath{\textcolor{PIKgreenDark2}{t}} and of the length index \ensuremath{\textcolor{PIKgreenDark2}{n}}
in the constructors of policy sequences:
For a policy sequence to make sense, policies for taking decisions at
step \ensuremath{\textcolor{PIKgreenDark2}{t}} can only be prepended to policy sequences whose \emph{first}
decisions are taken at step \ensuremath{\textcolor{PIKgreenDark2}{t}\mathbin{+}\mathrm{1}}, and the operation yields
policy sequences whose \emph{first} decisions are taken at step \ensuremath{\textcolor{PIKgreenDark2}{t}}.
Thus the time index ensures a consistency property of policy
sequences by construction.
As for plain vectors and lists, prepending a policy to a policy sequence of
length \ensuremath{\textcolor{PIKgreenDark2}{n}} yields a policy sequence of length \ensuremath{\textcolor{PIKgreenDark2}{n}\mathbin{+}\mathrm{1}}.
Both the time and the
length index will be useful below: they make it possible to express
that the backward induction algorithm computes policy sequences that
start at a specific time and have a specific length, both determined
by its inputs.
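For instance, given hypothetical policies \ensuremath{\textcolor{PIKgreenDark2}{p0} \mathop{:} \textcolor{PIKcyanDark2}{Policy}\;\mathrm{0}} and \ensuremath{\textcolor{PIKgreenDark2}{p1} \mathop{:} \textcolor{PIKcyanDark2}{Policy}\;\mathrm{1}} (the names are ours, chosen for illustration only), the indices work out as in
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{ps2} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\mathrm{0}\;\mathrm{2}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{ps2}\mathrel{=}\textcolor{PIKgreenDark2}{p0}\mathbin{::}(\textcolor{PIKgreenDark2}{p1}\mathbin{::}\textcolor{PIKcyanDark2}{Nil}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
whereas the reversed sequence \ensuremath{\textcolor{PIKgreenDark2}{p1}\mathbin{::}(\textcolor{PIKgreenDark2}{p0}\mathbin{::}\textcolor{PIKcyanDark2}{Nil})} is rejected by the type checker: \ensuremath{\textcolor{PIKgreenDark2}{p0}\mathbin{::}\textcolor{PIKcyanDark2}{Nil}} has type \ensuremath{\textcolor{PIKcyanDark2}{PolicySeq}\;\mathrm{0}\;\mathrm{1}}, but prepending \ensuremath{\textcolor{PIKgreenDark2}{p1}} requires a tail of type \ensuremath{\textcolor{PIKcyanDark2}{PolicySeq}\;\mathrm{2}\;\textcolor{PIKgreenDark2}{n}}.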
Perhaps the most important ingredient of backward induction is a
\emph{value function} that incrementally measures and adds up rewards.
For a given decision problem,
the value function takes two arguments: a policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps}} for making
\ensuremath{\textcolor{PIKgreenDark2}{n}} decision steps starting from decision step \ensuremath{\textcolor{PIKgreenDark2}{t}} and an initial state
\ensuremath{\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}}. It computes the value of taking \ensuremath{\textcolor{PIKgreenDark2}{n}} decision steps
according to the policies \ensuremath{\textcolor{PIKgreenDark2}{ps}} when starting in \ensuremath{\textcolor{PIKgreenDark2}{x}}. In the
BJI-framework, the value function is defined as
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{8}{@{}>{\hspre}c<{\hspost}@{}}%
\column{8E}{@{}l@{}}%
\column{11}{@{}>{\hspre}l<{\hspost}@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{21}{@{}>{\hspre}l<{\hspost}@{}}%
\column{24}{@{}>{\hspre}c<{\hspost}@{}}%
\column{24E}{@{}l@{}}%
\column{27}{@{}>{\hspre}l<{\hspost}@{}}%
\column{36}{@{}>{\hspre}c<{\hspost}@{}}%
\column{36E}{@{}l@{}}%
\column{39}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{val}{}\<[8]%
\>[8]{} \mathop{:} {}\<[8E]%
\>[11]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{val}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;{}\<[12]%
\>[12]{}\textcolor{PIKcyanDark2}{Nil}\;{}\<[21]%
\>[21]{}\textcolor{PIKgreenDark2}{x}{}\<[24]%
\>[24]{}\mathrel{=}{}\<[24E]%
\>[27]{}\textcolor{PIKgreenDark2}{zero}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{val}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}{}\<[24]%
\>[24]{}\mathrel{=}{}\<[24E]%
\>[27]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{y}{}\<[36]%
\>[36]{}\mathrel{=}{}\<[36E]%
\>[39]{}\textcolor{PIKgreenDark2}{p}\;\textcolor{PIKgreenDark2}{x}\;\mathbf{in}{}\<[E]%
\\
\>[27]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{mx'}{}\<[36]%
\>[36]{}\mathrel{=}{}\<[36E]%
\>[39]{}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}\;\mathbf{in}{}\<[E]%
\\
\>[27]{}\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{mx'}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Notice that, independently of the initial state \ensuremath{\textcolor{PIKgreenDark2}{x}}, the value of the
empty policy sequence is \ensuremath{\textcolor{PIKgreenDark2}{zero}}.
This is a problem-specific reference
value
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{zero}{}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\textcolor{PIKcyanDark2}{Val}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
that has to be provided as part of a problem
specification.\footnote{The name might
suggest that \ensuremath{\textcolor{PIKgreenDark2}{zero}} is supposed to be a neutral element relative to
\ensuremath{ \mathbin{\oplus} }. However, this is not required by the framework.} It is one of
the specification components that we have not
discussed in Sec.~\ref{subsection:specification_components}.
The value of a policy sequence consisting of a first policy \ensuremath{\textcolor{PIKgreenDark2}{p}} and of a
tail policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps}} is defined inductively as the measure of an
\ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of \ensuremath{\textcolor{PIKcyanDark2}{Val}}-values. These values are obtained by first
computing the control \ensuremath{\textcolor{PIKgreenDark2}{y}} dictated by the policy \ensuremath{\textcolor{PIKgreenDark2}{p}} in state \ensuremath{\textcolor{PIKgreenDark2}{x}} and
the \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of possible next states \ensuremath{\textcolor{PIKgreenDark2}{mx'}} dictated by the
transition function \ensuremath{\textcolor{PIKgreenDark2}{next}}. Then, for each \ensuremath{\textcolor{PIKgreenDark2}{x'}} in \ensuremath{\textcolor{PIKgreenDark2}{mx'}}, the current reward
\ensuremath{\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}\;\textcolor{PIKgreenDark2}{x'}} is added to the aggregated outcome
\ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'}} for the tail policy sequence. Finally, the result of
this functorial mapping is aggregated with the problem-specific
measure \ensuremath{\textcolor{PIKgreenDark2}{meas}} to obtain a result of type \ensuremath{\textcolor{PIKcyanDark2}{Val}} as outcome
for the policy sequence \ensuremath{(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})}. The function which is
mapped onto \ensuremath{\textcolor{PIKgreenDark2}{mx'}} is just a lifted version of \ensuremath{ \mathbin{\oplus} }:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}( \mathbin{\medoplus} ){}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{f},\textcolor{PIKgreenDark2}{g} \mathop{:} \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Val}) \to \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{f} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{g}\mathrel{=}\lambda \textcolor{PIKgreenDark2}{a}\Rightarrow \textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{a} \mathbin{\oplus} \textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{a}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
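As a sanity check (our own unfolding, not part of the framework): in the deterministic case, i.e. for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{Id}} with \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{id}} and \ensuremath{\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\mathrel{=}\textcolor{PIKgreenDark2}{f}}, the second defining equation of \ensuremath{\textcolor{PIKgreenDark2}{val}} reduces to the familiar textbook Bellman recursion
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}\mathrel{=}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}\;\textcolor{PIKgreenDark2}{x'} \mathbin{\oplus} \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
where \ensuremath{\textcolor{PIKgreenDark2}{y}\mathrel{=}\textcolor{PIKgreenDark2}{p}\;\textcolor{PIKgreenDark2}{x}} and \ensuremath{\textcolor{PIKgreenDark2}{x'}\mathrel{=}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}}.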
We will come back to the value function of the BJI-theory in
Sec.~\ref{section:preparation} where we will contrast it
with a function \ensuremath{\textcolor{PIKgreenDark2}{val'}} that, for a policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps}} and an initial state
\ensuremath{\textcolor{PIKgreenDark2}{x}}, computes the measure of the sum of the
rewards along all possible trajectories starting at \ensuremath{\textcolor{PIKgreenDark2}{x}} under
\ensuremath{\textcolor{PIKgreenDark2}{ps}} (the \emph{measured total reward} that we anticipated in
Sec.~\ref{section:SDPs}).
For the time being, though, we accept the notion of value of a policy
sequence as put forward in the BJI-theory and we show how the
definition of \ensuremath{\textcolor{PIKgreenDark2}{val}} can be employed to compute policy sequences that
are provably optimal in the sense of
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{28}{@{}>{\hspre}c<{\hspost}@{}}%
\column{28E}{@{}l@{}}%
\column{31}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{OptPolicySeq}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{OptPolicySeq}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{n}\mskip1.5mu\}\;\textcolor{PIKgreenDark2}{ps}{}\<[28]%
\>[28]{}\mathrel{=}{}\<[28E]%
\>[31]{}(\textcolor{PIKgreenDark2}{ps'} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Notice the universal quantification in this definition:
A policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps}} is defined to be optimal iff \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} for any policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps'}} and for any state \ensuremath{\textcolor{PIKgreenDark2}{x}}.
The generic implementation of backward induction in the
BJI-framework is an application of \emph{Bellman's principle of
optimality} mentioned in Sec.~\ref{section:SDPs}.
In control theory textbooks, this principle is often
referred to as \emph{Bellman's equation}. It can be suitably
formulated in terms of the notion of \emph{optimal extension}. We say
that a policy \ensuremath{\textcolor{PIKgreenDark2}{p} \mathop{:} \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}} is an optimal extension of a policy
sequence \ensuremath{\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}} if it is
the case that the value of \ensuremath{\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps}} is at least as good as the value of
\ensuremath{\textcolor{PIKgreenDark2}{p'}\mathbin{::}\textcolor{PIKgreenDark2}{ps}} for any policy \ensuremath{\textcolor{PIKgreenDark2}{p'}} and for any state \ensuremath{\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}}:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{OptExt}{}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{OptExt}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{p}{}\<[20]%
\>[20]{}\mathrel{=}{}\<[20E]%
\>[23]{}(\textcolor{PIKgreenDark2}{p'} \mathop{:} \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{p'}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
With the notion of optimal extension in place, Bellman's principle can
be formulated as
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{12}{@{}>{\hspre}c<{\hspost}@{}}%
\column{12E}{@{}l@{}}%
\column{15}{@{}>{\hspre}l<{\hspost}@{}}%
\column{21}{@{}>{\hspre}c<{\hspost}@{}}%
\column{21E}{@{}l@{}}%
\column{24}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Bellman}{}\<[12]%
\>[12]{} \mathop{:} {}\<[12E]%
\>[15]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to {}\<[E]%
\\
\>[15]{}(\textcolor{PIKgreenDark2}{ps}{}\<[21]%
\>[21]{} \mathop{:} {}\<[21E]%
\>[24]{}\textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}) \to \textcolor{PIKcyanDark2}{OptPolicySeq}\;\textcolor{PIKgreenDark2}{ps} \to {}\<[E]%
\\
\>[15]{}(\textcolor{PIKgreenDark2}{p}{}\<[21]%
\>[21]{} \mathop{:} {}\<[21E]%
\>[24]{}\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{OptExt}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{p} \to {}\<[E]%
\\
\>[15]{}\textcolor{PIKcyanDark2}{OptPolicySeq}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
In words: \emph{extending an optimal policy sequence with an optimal
extension (of that policy sequence) yields an optimal policy sequence}
or, more briefly, \emph{prefixing with optimal extensions preserves
optimality}.
Proving Bellman's optimality principle is almost straightforward: it
relies on \ensuremath{ \,\sqsubseteq\, } being reflexive and transitive, and on two
\emph{monotonicity} properties:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{plusMonSpec}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{v1},\textcolor{PIKgreenDark2}{v2},\textcolor{PIKgreenDark2}{v3},\textcolor{PIKgreenDark2}{v4} \mathop{:} \textcolor{PIKcyanDark2}{Val}\mskip1.5mu\} \to \textcolor{PIKgreenDark2}{v1} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{v2} \to \textcolor{PIKgreenDark2}{v3} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{v4} \to (\textcolor{PIKgreenDark2}{v1} \mathbin{\oplus} \textcolor{PIKgreenDark2}{v3}) \,\sqsubseteq\, (\textcolor{PIKgreenDark2}{v2} \mathbin{\oplus} \textcolor{PIKgreenDark2}{v4}){}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{measMonSpec}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{f},\textcolor{PIKgreenDark2}{g} \mathop{:} \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Val}) \to ((\textcolor{PIKgreenDark2}{a} \mathop{:} \textcolor{PIKcyanDark2}{A}) \to \textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{a} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{a}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{ma} \mathop{:} \textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{A}) \to \textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{ma}) \,\sqsubseteq\, \textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{ma}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
The second condition is a special case of the measure monotonicity
requirement originally formulated by \cite{ionescu2009} in
the context of a theory of vulnerability and monadic dynamical
systems. It is a natural property and the expected value measure, the
worst (best) case measure and any sound statistical measure fulfil it.
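For instance (our own instantiation, anticipating the counter-example
setting of Sec.~\ref{subsection:counterEx}): for
\ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}},
\ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\mathbb{N}},
\ensuremath{ \,\sqsubseteq\, } the numeric ordering and
\ensuremath{\textcolor{PIKgreenDark2}{meas}} the arithmetic sum, a pointwise bound
\ensuremath{\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{a} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{a}} gives
\[
\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\;[\mskip1.5mu a_1,\ldots,a_k\mskip1.5mu])
\mathrel{=} \textcolor{PIKgreenDark2}{f}\;a_1 \mathbin{+} \cdots \mathbin{+} \textcolor{PIKgreenDark2}{f}\;a_k
\,\le\, \textcolor{PIKgreenDark2}{g}\;a_1 \mathbin{+} \cdots \mathbin{+} \textcolor{PIKgreenDark2}{g}\;a_k
\mathrel{=} \textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g}\;[\mskip1.5mu a_1,\ldots,a_k\mskip1.5mu])
\]
by \ensuremath{k}-fold application of the monotonicity of \ensuremath{\mathbin{+}}.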
Like the reference value \ensuremath{\textcolor{PIKgreenDark2}{zero}} discussed above, \ensuremath{\textcolor{PIKgreenDark2}{plusMonSpec}} and
\ensuremath{\textcolor{PIKgreenDark2}{measMonSpec}} are specification components of the BJI-framework that
we have not discussed in Sec.~\ref{subsection:specification_components}.
We provide a proof of \ensuremath{\textcolor{PIKcyanDark2}{Bellman}} in Appendix~\ref{appendix:Bellman}. As one
would expect, the proof makes essential use of the recursive definition of
the function \ensuremath{\textcolor{PIKgreenDark2}{val}} discussed above.
As a consequence, this precise definition of \ensuremath{\textcolor{PIKgreenDark2}{val}} plays a crucial
role for the verification of backward induction in
\citep{2017_Botta_Jansson_Ionescu}.
Apart from the increased level of generality, the
definitions of \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKcyanDark2}{Bellman}} are in fact just an
Idris formalisation of Bellman's equation as formulated in control
theory textbooks. With \ensuremath{\textcolor{PIKcyanDark2}{Bellman}} and provided that we can compute
optimal extensions of arbitrary policy sequences
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}c<{\hspost}@{}}%
\column{15E}{@{}l@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{optExt}{}\<[15]%
\>[15]{} \mathop{:} {}\<[15E]%
\>[18]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{optExtSpec}{}\<[15]%
\>[15]{} \mathop{:} {}\<[15E]%
\>[18]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}) \to \textcolor{PIKcyanDark2}{OptExt}\;\textcolor{PIKgreenDark2}{ps}\;(\textcolor{PIKgreenDark2}{optExt}\;\textcolor{PIKgreenDark2}{ps}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
it is easy to derive an implementation of monadic backward
induction that computes provably optimal policy sequences with
respect to \ensuremath{\textcolor{PIKgreenDark2}{val}}: first, notice that the empty policy sequence is
optimal:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{nilOptPolicySeq}{}\<[20]%
\>[20]{} \mathop{:} {}\<[20E]%
\>[23]{}\textcolor{PIKcyanDark2}{OptPolicySeq}\;\textcolor{PIKcyanDark2}{Nil}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{nilOptPolicySeq}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}\textcolor{PIKgreenDark2}{reflexive}\;\textcolor{PIKgreenDark2}{lteTP}\;\textcolor{PIKgreenDark2}{zero}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
This is the base case for constructing optimal policy sequences by
backward induction, starting from the empty policy sequence:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{7}{@{}>{\hspre}c<{\hspost}@{}}%
\column{7E}{@{}l@{}}%
\column{9}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}c<{\hspost}@{}}%
\column{15E}{@{}l@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{bi}{}\<[7]%
\>[7]{} \mathop{:} {}\<[7E]%
\>[11]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;{}\<[9]%
\>[9]{}\textcolor{PIKcyanDark2}{Z}{}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}\textcolor{PIKcyanDark2}{Nil}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}){}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{ps}\mathrel{=}\textcolor{PIKgreenDark2}{bi}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}\;\mathbf{in}\;\textcolor{PIKgreenDark2}{optExt}\;\textcolor{PIKgreenDark2}{ps}\mathbin{::}\textcolor{PIKgreenDark2}{ps}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
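Unfolding the recursion for a small case (our own illustration) makes
the backward order of computation explicit:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\mathrm{2}\mathrel{=}\textcolor{PIKgreenDark2}{optExt}\;(\textcolor{PIKgreenDark2}{optExt}\;\textcolor{PIKcyanDark2}{Nil}\mathbin{::}\textcolor{PIKcyanDark2}{Nil})\mathbin{::}(\textcolor{PIKgreenDark2}{optExt}\;\textcolor{PIKcyanDark2}{Nil}\mathbin{::}\textcolor{PIKcyanDark2}{Nil}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
The innermost \ensuremath{\textcolor{PIKgreenDark2}{optExt}\;\textcolor{PIKcyanDark2}{Nil}} computes the policy for the latest decision step \ensuremath{\textcolor{PIKgreenDark2}{t}\mathbin{+}\mathrm{1}} first; the outer call then extends the resulting sequence towards step \ensuremath{\textcolor{PIKgreenDark2}{t}}.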
That \ensuremath{\textcolor{PIKgreenDark2}{bi}} computes optimal policy sequences with respect to \ensuremath{\textcolor{PIKgreenDark2}{val}}
is then proved by induction on \ensuremath{\textcolor{PIKgreenDark2}{n}}, the input that determines the
length of the resulting policy sequence:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{10}{@{}>{\hspre}c<{\hspost}@{}}%
\column{10E}{@{}l@{}}%
\column{13}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}l<{\hspost}@{}}%
\column{16}{@{}>{\hspre}l<{\hspost}@{}}%
\column{21}{@{}>{\hspre}c<{\hspost}@{}}%
\column{21E}{@{}l@{}}%
\column{24}{@{}>{\hspre}l<{\hspost}@{}}%
\column{32}{@{}>{\hspre}l<{\hspost}@{}}%
\column{36}{@{}>{\hspre}l<{\hspost}@{}}%
\column{41}{@{}>{\hspre}c<{\hspost}@{}}%
\column{41E}{@{}l@{}}%
\column{44}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{biOptVal}{}\<[13]%
\>[13]{} \mathop{:} {}\<[16]%
\>[16]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{OptPolicySeq}\;(\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}){}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{biOptVal}\;\textcolor{PIKgreenDark2}{t}\;{}\<[15]%
\>[15]{}\textcolor{PIKcyanDark2}{Z}{}\<[21]%
\>[21]{}\mathrel{=}{}\<[21E]%
\>[24]{}\textcolor{PIKgreenDark2}{nilOptPolicySeq}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{biOptVal}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}){}\<[21]%
\>[21]{}\mathrel{=}{}\<[21E]%
\>[24]{}\textcolor{PIKcyanDark2}{Bellman}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{ops}\;\textcolor{PIKgreenDark2}{p}\;\textcolor{PIKgreenDark2}{oep}\;\mathbf{where}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\textcolor{PIKgreenDark2}{ps}{}\<[10]%
\>[10]{} \mathop{:} {}\<[10E]%
\>[13]{}\textcolor{PIKcyanDark2}{PolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}{}\<[32]%
\>[32]{}\quad;\quad {}\<[36]%
\>[36]{}\textcolor{PIKgreenDark2}{ps}{}\<[41]%
\>[41]{}\mathrel{=}{}\<[41E]%
\>[44]{}\textcolor{PIKgreenDark2}{bi}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\textcolor{PIKgreenDark2}{ops}{}\<[10]%
\>[10]{} \mathop{:} {}\<[10E]%
\>[13]{}\textcolor{PIKcyanDark2}{OptPolicySeq}\;\textcolor{PIKgreenDark2}{ps}{}\<[32]%
\>[32]{}\quad;\quad {}\<[36]%
\>[36]{}\textcolor{PIKgreenDark2}{ops}{}\<[41]%
\>[41]{}\mathrel{=}{}\<[41E]%
\>[44]{}\textcolor{PIKgreenDark2}{biOptVal}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\textcolor{PIKgreenDark2}{p}{}\<[10]%
\>[10]{} \mathop{:} {}\<[10E]%
\>[13]{}\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}{}\<[32]%
\>[32]{}\quad;\quad {}\<[36]%
\>[36]{}\textcolor{PIKgreenDark2}{p}{}\<[41]%
\>[41]{}\mathrel{=}{}\<[41E]%
\>[44]{}\textcolor{PIKgreenDark2}{optExt}\;\textcolor{PIKgreenDark2}{ps}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\textcolor{PIKgreenDark2}{oep}{}\<[10]%
\>[10]{} \mathop{:} {}\<[10E]%
\>[13]{}\textcolor{PIKcyanDark2}{OptExt}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{p}{}\<[32]%
\>[32]{}\quad;\quad {}\<[36]%
\>[36]{}\textcolor{PIKgreenDark2}{oep}{}\<[41]%
\>[41]{}\mathrel{=}{}\<[41E]%
\>[44]{}\textcolor{PIKgreenDark2}{optExtSpec}\;\textcolor{PIKgreenDark2}{ps}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
This is the verification result for \ensuremath{\textcolor{PIKgreenDark2}{bi}} of
\citep{2017_Botta_Jansson_Ionescu}.\footnote{Note that \ensuremath{\textcolor{PIKgreenDark2}{biOptVal}} is
called \ensuremath{\textcolor{PIKgreenDark2}{biLemma}} in \citep{2017_Botta_Jansson_Ionescu}. We chose the
new name to emphasise that \ensuremath{\textcolor{PIKgreenDark2}{bi}} computes optimal policy sequences
with respect to \ensuremath{\textcolor{PIKgreenDark2}{val}}.}
\subsection{BJI-framework wrap-up}
\label{subsection:wrap-up}
The specification and solution components discussed in the last two
sections are all we need to formulate precisely the problem of
correctness for monadic backward induction in the BJI-framework.
This is done in the next section.
Before turning to it, two further remarks are necessary:
\begin{itemize}
\item The theory proposed in \citep{2017_Botta_Jansson_Ionescu} is
slightly more general than the one summarised above. In our summary, policies are
just functions from states to controls:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Policy}{}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}\mathrel{=}(\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
By contrast, in \citep{2017_Botta_Jansson_Ionescu}, policies are indexed
over a number of decision steps \ensuremath{\textcolor{PIKgreenDark2}{n}}
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{19}{@{}>{\hspre}c<{\hspost}@{}}%
\column{19E}{@{}l@{}}%
\column{22}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Policy}{}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKcyanDark2}{Z}{}\<[19]%
\>[19]{}\mathrel{=}{}\<[19E]%
\>[22]{}\textcolor{PIKcyanDark2}{Unit}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{m}){}\<[19]%
\>[19]{}\mathrel{=}{}\<[19E]%
\>[22]{}(\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Reachable}\;\textcolor{PIKgreenDark2}{x} \to \textcolor{PIKcyanDark2}{Viable}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{m})\;\textcolor{PIKgreenDark2}{x} \to \textcolor{PIKcyanDark2}{GoodCtrl}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{m}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
and their domain for \ensuremath{\textcolor{PIKgreenDark2}{n}\mathbin{>}\mathrm{0}} is restricted to states that are
\emph{reachable} and \emph{viable} for \ensuremath{\textcolor{PIKgreenDark2}{n}} steps. This makes it
possible to cope with states whose control set is empty and with transition
functions that return empty \ensuremath{\textcolor{PIKcyanDark2}{M}}-structures of next states.
(For a discussion of reachability and viability
see \citep[Sec.~3.7~and~3.8]{2017_Botta_Jansson_Ionescu}.)
This generality, however, comes at a cost: compare, e.g.,
the proof of Bellman's principle from the last
subsection with the corresponding proof in
\cite[Appendix B]{2017_Botta_Jansson_Ionescu}. The impact of the
reachability and viability
constraints on other parts of the theory is even more severe.
Here, we have decided to trade some generality for better readability
and opted for a simplified version of the original theory.
Still, for the generic backward induction algorithm we need to make
sure that it is possible to define
policy sequences of the length required for a specific SDP.
This can be done, e.g., by postulating
that the control sets are non-empty:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{14}{@{}>{\hspre}c<{\hspost}@{}}%
\column{14E}{@{}l@{}}%
\column{17}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{notEmptyY}{}\<[14]%
\>[14]{} \mathop{:} {}\<[14E]%
\>[17]{}(\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
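With \ensuremath{\textcolor{PIKgreenDark2}{notEmptyY}} in place, policy
sequences of any length can indeed be constructed. A minimal sketch
(the name \ensuremath{\textcolor{PIKgreenDark2}{anyPolicySeq}} and its
definition are ours, not part of the framework):
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{anyPolicySeq} \mathop{:} (\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{anyPolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKcyanDark2}{Z}\mathrel{=}\textcolor{PIKcyanDark2}{Nil}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{anyPolicySeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n})\mathrel{=}\textcolor{PIKgreenDark2}{notEmptyY}\;\textcolor{PIKgreenDark2}{t}\mathbin{::}\textcolor{PIKgreenDark2}{anyPolicySeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{n}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Here \ensuremath{\textcolor{PIKgreenDark2}{notEmptyY}\;\textcolor{PIKgreenDark2}{t}} is itself a function of type \ensuremath{(\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}}, i.e. a \ensuremath{\textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}}.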
We also impose a non-emptiness requirement on
the transition function \ensuremath{\textcolor{PIKgreenDark2}{next}} that will be discussed in
Sec.~\ref{section:discussion}.
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{nextNotEmpty}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to (\textcolor{PIKgreenDark2}{y} \mathop{:} \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}) \to \textcolor{PIKcyanDark2}{NotEmpty}\;(\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\item In section \ref{subsection:solution_components}, we have not
discussed under which conditions one can implement optimal extensions of
arbitrary policy sequences. This is an interesting topic that is however
orthogonal to the purpose of the current paper.
For the same reason we have not addressed the question of how to make \ensuremath{\textcolor{PIKgreenDark2}{bi}}
more efficient by tabulation.
We briefly discuss the specification and implementation of optimal extensions
in the BJI-framework in Appendix~\ref{appendix:optimal_extension}.
We refer the reader interested in tabulation of \ensuremath{\textcolor{PIKgreenDark2}{bi}} to
\href{https://gitlab.pik-potsdam.de/botta/IdrisLibs/-/blob/master/SequentialDecisionProblems/TabBackwardsInduction.lidr}
{SequentialDecisionProblems.TabBackwardsInduction}
of \citep{botta20162018}.
\end{itemize}
\vfill
\pagebreak
\section{Correctness for monadic backward induction}
\label{section:preparation}
In this section we formally specify the notions of correctness for
monadic backward induction \ensuremath{\textcolor{PIKgreenDark2}{bi}} and the value function \ensuremath{\textcolor{PIKgreenDark2}{val}} of the
BJI-framework that we will study in the remainder of this paper.
We develop these notions as generic variants of the corresponding
notions for stochastic SDPs.
\subsection{Extension of the BJI-framework}
\label{subsection:frameworkExtension}
In the previous section, we have seen that a monadic SDP can be
specified in terms of nine components: \ensuremath{\textcolor{PIKcyanDark2}{M}}, \ensuremath{\textcolor{PIKcyanDark2}{X}}, \ensuremath{\textcolor{PIKcyanDark2}{Y}},
\ensuremath{\textcolor{PIKgreenDark2}{next}}, \ensuremath{\textcolor{PIKcyanDark2}{Val}}, \ensuremath{\textcolor{PIKgreenDark2}{zero}}, \ensuremath{ \mathbin{\oplus} }, \ensuremath{ \,\sqsubseteq\, } and \ensuremath{\textcolor{PIKgreenDark2}{reward}}.
Given a policy sequence (optimal or not) and an initial state
for an SDP, we can compute the \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of possible trajectories
starting at that state:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}c<{\hspost}@{}}%
\column{11E}{@{}l@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{22}{@{}>{\hspre}c<{\hspost}@{}}%
\column{22E}{@{}l@{}}%
\column{25}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\mathbf{data}\;\textcolor{PIKcyanDark2}{StateCtrlSeq}{}\<[22]%
\>[22]{} \mathop{:} {}\<[22E]%
\>[25]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Type}\;\mathbf{where}{}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\textcolor{PIKcyanDark2}{Last}{}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKcyanDark2}{Z}){}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}( \mathbin{\#\!\#} ){}\<[11]%
\>[11]{} \mathop{:} {}\<[11E]%
\>[14]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \mathbin{*\!*} \textcolor{PIKcyanDark2}{Y}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}) \to \textcolor{PIKcyanDark2}{StateCtrlSeq}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}) \to \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n})){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{8}{@{}>{\hspre}c<{\hspost}@{}}%
\column{8E}{@{}l@{}}%
\column{11}{@{}>{\hspre}l<{\hspost}@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{21}{@{}>{\hspre}l<{\hspost}@{}}%
\column{24}{@{}>{\hspre}c<{\hspost}@{}}%
\column{24E}{@{}l@{}}%
\column{27}{@{}>{\hspre}l<{\hspost}@{}}%
\column{35}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{trj}{}\<[8]%
\>[8]{} \mathop{:} {}\<[8E]%
\>[11]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{M}\;(\textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n})){}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{trj}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;{}\<[12]%
\>[12]{}\textcolor{PIKcyanDark2}{Nil}\;{}\<[21]%
\>[21]{}\textcolor{PIKgreenDark2}{x}{}\<[24]%
\>[24]{}\mathrel{=}{}\<[24E]%
\>[27]{}\textcolor{PIKgreenDark2}{pure}\;(\textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x}){}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{trj}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}{}\<[24]%
\>[24]{}\mathrel{=}{}\<[24E]%
\>[27]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{y}{}\<[35]%
\>[35]{}\mathrel{=}\textcolor{PIKgreenDark2}{p}\;\textcolor{PIKgreenDark2}{x}\;\mathbf{in}{}\<[E]%
\\
\>[27]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{mx'}\mathrel{=}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}\;\mathbf{in}{}\<[E]%
\\
\>[27]{}\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{x} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y}) \mathbin{\#\!\#} )\;(\textcolor{PIKgreenDark2}{mx'} \mathbin{>\!\!>\!\!=} \textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
where we use \ensuremath{\textcolor{PIKcyanDark2}{StateCtrlSeq}} as the type of trajectories. Essentially, it is
a non-empty list of (dependent) state/control pairs, except that the base case
is a singleton containing only the last state reached.
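For instance (with hypothetical states \ensuremath{\textcolor{PIKgreenDark2}{x0} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}}, \ensuremath{\textcolor{PIKgreenDark2}{x1} \mathop{:} \textcolor{PIKcyanDark2}{X}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})}, \ensuremath{\textcolor{PIKgreenDark2}{x2} \mathop{:} \textcolor{PIKcyanDark2}{X}\;(\textcolor{PIKcyanDark2}{S}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t}))} and matching controls \ensuremath{\textcolor{PIKgreenDark2}{y0}}, \ensuremath{\textcolor{PIKgreenDark2}{y1}}), a two-step trajectory has the shape
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}(\textcolor{PIKgreenDark2}{x0} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y0}) \mathbin{\#\!\#} ((\textcolor{PIKgreenDark2}{x1} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y1}) \mathbin{\#\!\#} \textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x2}) \mathop{:} \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;\mathrm{3}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks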
Furthermore, we can compute the \emph{total reward} for a single
trajectory, i.e. its sum of rewards:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{9}{@{}>{\hspre}c<{\hspost}@{}}%
\column{9E}{@{}l@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{31}{@{}>{\hspre}c<{\hspost}@{}}%
\column{31E}{@{}l@{}}%
\column{34}{@{}>{\hspre}l<{\hspost}@{}}%
\column{37}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{sumR}{}\<[9]%
\>[9]{} \mathop{:} {}\<[9E]%
\>[12]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{sumR}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;(\textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x}){}\<[34]%
\>[34]{}\mathrel{=}{}\<[37]%
\>[37]{}\textcolor{PIKgreenDark2}{zero}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{sumR}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;((\textcolor{PIKgreenDark2}{x} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y}) \mathbin{\#\!\#} \textcolor{PIKgreenDark2}{xys}){}\<[31]%
\>[31]{}\mathrel{=}{}\<[31E]%
\>[34]{}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}\;(\textcolor{PIKgreenDark2}{head}\;\textcolor{PIKgreenDark2}{xys}) \mathbin{\oplus} \textcolor{PIKgreenDark2}{sumR}\;\textcolor{PIKgreenDark2}{xys}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
where \ensuremath{\textcolor{PIKgreenDark2}{head}} is the helper function
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{9}{@{}>{\hspre}c<{\hspost}@{}}%
\column{9E}{@{}l@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{27}{@{}>{\hspre}c<{\hspost}@{}}%
\column{27E}{@{}l@{}}%
\column{30}{@{}>{\hspre}c<{\hspost}@{}}%
\column{30E}{@{}l@{}}%
\column{33}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{head}{}\<[9]%
\>[9]{} \mathop{:} {}\<[9E]%
\>[12]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}) \to \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{head}\;(\textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x}){}\<[30]%
\>[30]{}\mathrel{=}{}\<[30E]%
\>[33]{}\textcolor{PIKgreenDark2}{x}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{head}\;((\textcolor{PIKgreenDark2}{x} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y}) \mathbin{\#\!\#} \textcolor{PIKgreenDark2}{xys}){}\<[27]%
\>[27]{}\mathrel{=}{}\<[27E]%
\>[30]{}\textcolor{PIKgreenDark2}{x}{}\<[30E]%
\ColumnHook
\end{hscode}\resethooks
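For the two-step trajectory sketched above, \ensuremath{\textcolor{PIKgreenDark2}{sumR}} unfolds to (our own calculation)
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{sumR}\;((\textcolor{PIKgreenDark2}{x0} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y0}) \mathbin{\#\!\#} ((\textcolor{PIKgreenDark2}{x1} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y1}) \mathbin{\#\!\#} \textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x2})){}\<[E]%
\\
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}\mathrel{=}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x0}\;\textcolor{PIKgreenDark2}{y0}\;\textcolor{PIKgreenDark2}{x1} \mathbin{\oplus} (\textcolor{PIKgreenDark2}{reward}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{t})\;\textcolor{PIKgreenDark2}{x1}\;\textcolor{PIKgreenDark2}{y1}\;\textcolor{PIKgreenDark2}{x2} \mathbin{\oplus} \textcolor{PIKgreenDark2}{zero}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks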
By mapping \ensuremath{\textcolor{PIKgreenDark2}{sumR}} onto an \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of trajectories, we obtain an
\ensuremath{\textcolor{PIKcyanDark2}{M}}-structure containing the individual sums of rewards of the
trajectories. Now, using the measure function, we can compute the
generic analogue of the expected total reward for a policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps}}
and an initial state \ensuremath{\textcolor{PIKgreenDark2}{x}}:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{9}{@{}>{\hspre}c<{\hspost}@{}}%
\column{9E}{@{}l@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{val'}{}\<[9]%
\>[9]{} \mathop{:} {}\<[9E]%
\>[12]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKcyanDark2}{Val}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}{}\<[12]%
\>[12]{}\mathrel{=}{}\<[15]%
\>[15]{}\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
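The computation of \ensuremath{\textcolor{PIKgreenDark2}{val'}} thus factors into three stages (types shown for orientation):
\[
\textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}
\xrightarrow{\;\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}\;}
\textcolor{PIKcyanDark2}{M}\;(\textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}))
\xrightarrow{\;\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\;}
\textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{Val}
\xrightarrow{\;\textcolor{PIKgreenDark2}{meas}\;}
\textcolor{PIKcyanDark2}{Val}
\]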
As anticipated in Sec.~\ref{section:SDPs}, we call the value
computed by \ensuremath{\textcolor{PIKgreenDark2}{val'}} the \emph{measured total reward}. Recall that
solving a stochastic SDP commonly means finding a policy sequence that
maximises the \emph{expected total reward}. By analogy, we define that
solving a monadic SDP means to find a policy sequence that maximises
the \emph{measured total reward}. That is, given \ensuremath{\textcolor{PIKgreenDark2}{t}} and \ensuremath{\textcolor{PIKgreenDark2}{n}}, the solution
of a monadic SDP is a sequence of \ensuremath{\textcolor{PIKgreenDark2}{n}} policies that maximises the measure
of the sum of rewards along all possible trajectories of length \ensuremath{\textcolor{PIKgreenDark2}{n}}
that are rooted in an initial state at step \ensuremath{\textcolor{PIKgreenDark2}{t}}.
Again by analogy to the stochastic case, we define monadic
backward induction to be correct if, for a given SDP, the policy
sequence computed by \ensuremath{\textcolor{PIKgreenDark2}{bi}} is the solution to the SDP.
That is, we consider \ensuremath{\textcolor{PIKgreenDark2}{bi}} to be correct if it meets the specification
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{25}{@{}>{\hspre}c<{\hspost}@{}}%
\column{25E}{@{}l@{}}%
\column{28}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{biOptMeasTotalReward}{}\<[25]%
\>[25]{} \mathop{:} {}\<[25E]%
\>[28]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{GenOptPolicySeq}\;\textcolor{PIKgreenDark2}{val'}\;(\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
where \ensuremath{\textcolor{PIKcyanDark2}{GenOptPolicySeq}} is a generalised version of the optimality
predicate \ensuremath{\textcolor{PIKcyanDark2}{OptPolicySeq}} from
Sec.~\ref{subsection:solution_components}. It now takes as an
additional parameter the function with respect to which the policy
sequence is to be optimal:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{33}{@{}>{\hspre}c<{\hspost}@{}}%
\column{33E}{@{}l@{}}%
\column{36}{@{}>{\hspre}l<{\hspost}@{}}%
\column{40}{@{}>{\hspre}l<{\hspost}@{}}%
\column{74}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{GenOptPolicySeq}{}\<[20]%
\>[20]{} \mathop{:} {}\<[20E]%
\>[23]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to {}\<[40]%
\>[40]{}(\textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{Val}) \to {}\<[74]%
\>[74]{}\textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKcyanDark2}{GenOptPolicySeq}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{n}\mskip1.5mu\}\;\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{ps}{}\<[33]%
\>[33]{}\mathrel{=}{}\<[33E]%
\>[36]{}(\textcolor{PIKgreenDark2}{ps'} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
As recapitulated in Sec.~\ref{subsection:solution_components},
\bottaetal have already shown that if \ensuremath{\textcolor{PIKcyanDark2}{M}} is a monad, \ensuremath{ \,\sqsubseteq\, } a total
preorder and
\ensuremath{ \mathbin{\oplus} } and \ensuremath{\textcolor{PIKgreenDark2}{meas}} fulfil two monotonicity conditions, then \ensuremath{\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}}
yields an optimal policy sequence with respect to the value function
\ensuremath{\textcolor{PIKgreenDark2}{val}} in the sense that \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n})\;\textcolor{PIKgreenDark2}{x}} for any
policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps'}} and initial state \ensuremath{\textcolor{PIKgreenDark2}{x}}, for arbitrary \ensuremath{\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}}. Or, expressed using the generalised optimality predicate,
that the type
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{GenOptPolicySeq}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{t}\mskip1.5mu\}\;\{\mskip1.5mu \textcolor{PIKgreenDark2}{n}\mskip1.5mu\}\;\textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
is inhabited.
As seen in Sec.~\ref{subsection:solution_components}, the function
\ensuremath{\textcolor{PIKgreenDark2}{val}} measures and adds rewards incrementally. But does it always
compute the measured total reward like \ensuremath{\textcolor{PIKgreenDark2}{val'}}?
Modulo differences in presentation, \citet[Theorem
4.2.1]{puterman2014markov} suggests that for standard
stochastic SDPs, \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} are extensionally equal, which in turn
allows the use of backward induction for solving these SDPs.
Generalising, we therefore consider \ensuremath{\textcolor{PIKgreenDark2}{val}} as
correct if it fulfils the specification
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{23}{@{}>{\hspre}c<{\hspost}@{}}%
\column{23E}{@{}l@{}}%
\column{26}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{valMeasTotalReward}{}\<[23]%
\>[23]{} \mathop{:} {}\<[23E]%
\>[26]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
If this equality held for the general monadic SDPs of the BJI-theory,
we could prove the correctness of \ensuremath{\textcolor{PIKgreenDark2}{bi}} as immediate corollary of
\ensuremath{\textcolor{PIKgreenDark2}{valMeasTotalReward}} and \bottaetal\!'s result \ensuremath{\textcolor{PIKgreenDark2}{biOptVal}}.
The statement \ensuremath{\textcolor{PIKgreenDark2}{biOptMeasTotalReward}} can be seen as a generic version
of textbook correctness statements for backward induction as solution
method for stochastic SDPs like \citep[prop.1.3.1]{bertsekas1995} or
\citep[Theorem~4.5.1.c]{puterman2014markov}.
By proving \ensuremath{\textcolor{PIKgreenDark2}{valMeasTotalReward}} we could therefore extend the
verification of \citep{2017_Botta_Jansson_Ionescu} and obtain a
stronger correctness result for monadic backward induction.
\vspace{0.2cm}
Our main objective in the remainder of the paper is therefore to prove
that \ensuremath{\textcolor{PIKgreenDark2}{valMeasTotalReward}} holds.
But there is a problem.
\subsection{The problem with the BJI-value function}
\label{subsection:counterEx}
A closer look at \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} reveals two quite different
computational patterns: applied to a policy sequence \ensuremath{\textcolor{PIKgreenDark2}{ps}} of length \ensuremath{\textcolor{PIKgreenDark2}{n}\mathbin{+}\mathrm{1}}
and a state \ensuremath{\textcolor{PIKgreenDark2}{x}}, the function \ensuremath{\textcolor{PIKgreenDark2}{val}} directly evaluates \ensuremath{\textcolor{PIKgreenDark2}{meas}} on
the \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of rewards
corresponding to the possible next states after one step. This entails
further evaluations of \ensuremath{\textcolor{PIKgreenDark2}{meas}} for each possible next state.
By contrast, \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} entails only one evaluation of \ensuremath{\textcolor{PIKgreenDark2}{meas}},
independently of the length of \ensuremath{\textcolor{PIKgreenDark2}{ps}}. The computation, however, builds up
an intermediate \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of state-control sequences. The elements of this
\ensuremath{\textcolor{PIKcyanDark2}{M}}-structure, the state-control sequences, are then consumed by \ensuremath{\textcolor{PIKgreenDark2}{sumR}}
and finally the \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of rewards is reduced by \ensuremath{\textcolor{PIKgreenDark2}{meas}}.
For illustration, let us revisit Ex.~\ref{subsection:example1SDPs} from
Sec.~\ref{section:SDPs} as formalised in
Fig.~\ref{fig:example1Formal}.
To do an example calculation with \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} we first need a
concrete policy sequence as input.
The simplest two policies are the two constant policies:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{constH} \mathop{:} (\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{constH}\;\textcolor{PIKgreenDark2}{\char95 t}\mathrel{=}\textcolor{PIKgreenDark2}{const}\;\textcolor{PIKcyanDark2}{High}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{constL} \mathop{:} (\textcolor{PIKgreenDark2}{t} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{constL}\;\textcolor{PIKgreenDark2}{\char95 t}\mathrel{=}\textcolor{PIKgreenDark2}{const}\;\textcolor{PIKcyanDark2}{Low}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
From these, we can define a policy sequence
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\mathrm{0}\;\mathrm{3}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{ps}\mathrel{=}\textcolor{PIKgreenDark2}{constH}\;\mathrm{0}\mathbin{::}(\textcolor{PIKgreenDark2}{constL}\;\mathrm{1}\mathbin{::}(\textcolor{PIKgreenDark2}{constH}\;\mathrm{2}\mathbin{::}\textcolor{PIKcyanDark2}{Nil})){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
It is instructive to compute \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKcyanDark2}{Good}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKcyanDark2}{Good}} by hand. Recall that in this example, we have
\ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}} with \ensuremath{( \mathbin{>\!\!>\!\!=} )\mathrel{=}\textcolor{PIKgreenDark2}{concatMap}} and \ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\mathbb{N}} with \ensuremath{ \mathbin{\oplus} \mathrel{=}\mathbin{+}}.
The measure \ensuremath{\textcolor{PIKgreenDark2}{meas}} thus needs to
have the type \ensuremath{\textcolor{PIKcyanDark2}{List}\;\mathbb{N} \to \mathbb{N}}. Without instantiating \ensuremath{\textcolor{PIKgreenDark2}{meas}} for the
moment, the computations roughly exhibit the structure
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{19}{@{}>{\hspre}l<{\hspost}@{}}%
\column{25}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKcyanDark2}{Good}\mathrel{=}{}\<[18]%
\>[18]{}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mathrm{2}\mathbin{+}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mathrm{3}\mathbin{+}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mathrm{2},\mathrm{0}\mskip1.5mu]\mskip1.5mu],\mathrm{0}\mathbin{+}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mathrm{3}\mathbin{+}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mathrm{2},\mathrm{0}\mskip1.5mu],\mathrm{1}\mathbin{+}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mathrm{0}\mskip1.5mu]\mskip1.5mu]\mskip1.5mu]{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKcyanDark2}{Good}\mathrel{=}{}\<[19]%
\>[19]{}\textcolor{PIKgreenDark2}{meas}\;{}\<[25]%
\>[25]{}[\mskip1.5mu \mathrm{7},\mathrm{5},\mathrm{5},\mathrm{3},\mathrm{1}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
and it is not ``obviously clear'' that \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} are
extensionally equal without further knowledge about \ensuremath{\textcolor{PIKgreenDark2}{meas}}.
In the deterministic case, i.e. for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{Id}} and
\ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{id}}, \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} are indeed equal for all \ensuremath{\textcolor{PIKgreenDark2}{ps}}
and \ensuremath{\textcolor{PIKgreenDark2}{x}}, without imposing any further
conditions (as we will see in Sec.~\ref{section:valval}).
For the stochastic case, \cite[Theorem
4.2.1]{puterman2014markov} suggests that the equality
should hold. But for the monadic case, no such result has been
established.
As it turns out, the functions \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} are in general not
unconditionally equal -- consider the following counter-example:
We continue in the setting of Ex.~\ref{subsection:example1SDPs}
from above, but now instantiate the measure to the plain arithmetic sum
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{foldr}\;(\mathbin{+})\;\mathrm{0}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
This measure fulfils the monotonicity condition
(\ensuremath{\textcolor{PIKgreenDark2}{measMonSpec}}, Sec.~\ref{subsection:solution_components}) imposed by the
BJI-framework.
But if we instantiate the above computations with it, then we get \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKcyanDark2}{Good}\mathrel{=}\mathrm{13}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKcyanDark2}{Good}\mathrel{=}\mathrm{21}}!
We thus see that the equality between \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} cannot hold
unconditionally in the generic setting of the BJI-framework.
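For readers who want to replay this counter-example, here is a minimal transcription into plain Haskell (a sketch of the computation displayed above, not the framework's Idris code):
\begin{verbatim}
-- The measure from the counter-example: the plain arithmetic sum.
meas :: [Int] -> Int
meas = foldr (+) 0

-- 'val ps Good': the measure is applied locally at each decision step.
valGood :: Int
valGood = meas [ 2 + meas [3 + meas [2, 0]]
               , 0 + meas [3 + meas [2, 0], 1 + meas [0]] ]  -- = 13

-- 'val' ps Good': rewards are summed per trajectory, measured once.
valGood' :: Int
valGood' = meas [7, 5, 5, 3, 1]                              -- = 21
\end{verbatim}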
In the next section we therefore present conditions under which the
equality \emph{does} hold.
\section{Correctness conditions}
\label{section:conditions}
We now formulate three conditions on combinations of the monad,
the measure function and the binary operation \ensuremath{ \mathbin{\oplus} } that imply the
extensional equality of \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}}:
\setlist[itemize,1]{leftmargin=58pt}
\begin{itemize}
\item[{\bf Condition 1.}]
The measure needs to be left-inverse to \ensuremath{\textcolor{PIKgreenDark2}{pure}}:%
\footnote{The symbol \ensuremath{\doteq} denotes \emph{extensional} equality,
see Appendix~\ref{appendix:monadLaws}.}
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{20}{@{}>{\hspre}c<{\hspost}@{}}%
\column{20E}{@{}l@{}}%
\column{23}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measPureSpec}{}\<[20]%
\>[20]{} \mathop{:} {}\<[20E]%
\>[23]{}\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{pure}\doteq\textcolor{PIKgreenDark2}{id}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{center}
\includegraphics{img/diag1.eps}
\end{center}
\item[{\bf Condition 2.}]
Applying the measure after \ensuremath{\textcolor{PIKgreenDark2}{join}} needs to be extensionally
equal to applying it after \ensuremath{\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{meas}}:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measJoinSpec}{}\<[17]%
\>[17]{} \mathop{:} \textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{join}\doteq\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{meas}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{center}
\includegraphics{img/diag2.eps}
\end{center}
\item[{\bf Condition 3.}]
For arbitrary \ensuremath{\textcolor{PIKgreenDark2}{v} \mathop{:} \textcolor{PIKcyanDark2}{Val}} and non-empty \ensuremath{\textcolor{PIKgreenDark2}{mv} \mathop{:} \textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{Val}}, applying
the measure after mapping \ensuremath{(\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} )} onto \ensuremath{\textcolor{PIKgreenDark2}{mv}} needs to be equal to
applying \ensuremath{(\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} )} after the measure:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measPlusSpec}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}(\textcolor{PIKgreenDark2}{v} \mathop{:} \textcolor{PIKcyanDark2}{Val}) \to (\textcolor{PIKgreenDark2}{mv} \mathop{:} \textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{Val}) \to (\textcolor{PIKcyanDark2}{NotEmpty}\;\textcolor{PIKgreenDark2}{mv}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} ))\;\textcolor{PIKgreenDark2}{mv}\mathrel{=}((\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} )\mathbin{\circ}\textcolor{PIKgreenDark2}{meas})\;\textcolor{PIKgreenDark2}{mv}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{center}
\includegraphics{img/diag3.eps}
\end{center}
\end{itemize}
\setlist[itemize,1]{leftmargin=10pt}
Essentially, these conditions ensure that the measure is well-behaved
relative to the monad structure and the \ensuremath{ \mathbin{\oplus} }-operation.
They arise, once again, by generalising from the
standard example of stochastic SDPs with a probability monad,
the \emph{expected value} as measure and ordinary addition as \ensuremath{ \mathbin{\oplus} }.
The first two conditions are lifting properties: they allow computations
to be carried out either in the monad or in the underlying structure,
with the same result.
The third condition is a distributivity law. For the computation of the
measured total reward it means that, instead of adding the current reward to
the outcome of each trajectory and then measuring, one may equally first
measure the outcomes and then add the current reward.
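One way to make these conditions concrete is to render them as executable Haskell predicates over an arbitrary monad. The following is a sketch under naming of our own; the non-emptiness precondition of Condition 3 is left implicit, and decidable equality on values is assumed:
\begin{verbatim}
import Control.Monad (join)

-- Condition 1: meas is left-inverse to pure.
condPure :: (Monad m, Eq v) => (m v -> v) -> v -> Bool
condPure meas v = meas (pure v) == v

-- Condition 2: measuring after join equals measuring after fmap meas.
condJoin :: (Monad m, Eq v) => (m v -> v) -> m (m v) -> Bool
condJoin meas mmv = meas (join mmv) == meas (fmap meas mmv)

-- Condition 3: meas commutes with mapping (v `op`) over non-empty mv.
condPlus :: (Monad m, Eq v)
         => (m v -> v) -> (v -> v -> v) -> v -> m v -> Bool
condPlus meas op v mv = meas (fmap (op v) mv) == op v (meas mv)
\end{verbatim}
For instance, for lists of non-negative values, \verb|condPlus (foldr max 0) (+) v vs| holds for every non-empty \verb|vs|, while \verb|condPlus sum (+) 1 [1,2]| evaluates to \verb|False| (cf. the counter-example above).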
\vspace{0.2cm}
To illustrate the conditions, let us consider a simple representation of
discrete probability distributions as in \citep{DBLP:journals/jfp/ErwigK06}.
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{11}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}c<{\hspost}@{}}%
\column{15E}{@{}l@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{Dist} \mathop{:} {}\<[11]%
\>[11]{}\textcolor{PIKcyanDark2}{Type} \to \textcolor{PIKcyanDark2}{Type}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{Omega}{}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}\textcolor{PIKcyanDark2}{List}\;(\textcolor{PIKcyanDark2}{Omega},\textcolor{PIKcyanDark2}{Double}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
with
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{12}{@{}>{\hspre}c<{\hspost}@{}}%
\column{12E}{@{}l@{}}%
\column{15}{@{}>{\hspre}l<{\hspost}@{}}%
\column{18}{@{}>{\hspre}c<{\hspost}@{}}%
\column{18E}{@{}l@{}}%
\column{21}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{distMap}{}\<[12]%
\>[12]{} \mathop{:} {}\<[12E]%
\>[15]{}\{\mskip1.5mu \textcolor{PIKcyanDark2}{A},\textcolor{PIKcyanDark2}{B} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{f} \mathop{:} \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{B}) \to \textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{B}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{distMap}\;\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{aps}{}\<[18]%
\>[18]{}\mathrel{=}{}\<[18E]%
\>[21]{}[\mskip1.5mu (\textcolor{PIKgreenDark2}{f}\;(\textcolor{PIKgreenDark2}{fst}\;\textcolor{PIKgreenDark2}{ap}),\textcolor{PIKgreenDark2}{snd}\;\textcolor{PIKgreenDark2}{ap})\mid \textcolor{PIKgreenDark2}{ap}\leftarrow \textcolor{PIKgreenDark2}{aps}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}c<{\hspost}@{}}%
\column{15E}{@{}l@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{distPure}{}\<[15]%
\>[15]{} \mathop{:} {}\<[15E]%
\>[18]{}\{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{A}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{distPure}\;\textcolor{PIKgreenDark2}{a}{}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}[\mskip1.5mu (\textcolor{PIKgreenDark2}{a},\mathrm{1.0})\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{distJoin} \mathop{:} \{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to (\textcolor{PIKcyanDark2}{Dist}\;(\textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{A})) \to \textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{A}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{distJoin}\;\textcolor{PIKgreenDark2}{apsps}\mathrel{=}\textcolor{PIKgreenDark2}{concat}\;[\mskip1.5mu [\mskip1.5mu (\textcolor{PIKgreenDark2}{fst}\;\textcolor{PIKgreenDark2}{ap},\textcolor{PIKgreenDark2}{snd}\;\textcolor{PIKgreenDark2}{ap}\mathbin{*}\textcolor{PIKgreenDark2}{snd}\;\textcolor{PIKgreenDark2}{aps})\mid \textcolor{PIKgreenDark2}{ap}\leftarrow \textcolor{PIKgreenDark2}{fst}\;\textcolor{PIKgreenDark2}{aps}\mskip1.5mu]\mid \textcolor{PIKgreenDark2}{aps}\leftarrow \textcolor{PIKgreenDark2}{apsps}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
We choose \ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\textcolor{PIKcyanDark2}{Double}} and, as measure, the expected value
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{expected} \mathop{:} \textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{Double} \to \textcolor{PIKcyanDark2}{Double}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{expected}\;\textcolor{PIKgreenDark2}{dps}\mathrel{=}\textcolor{PIKgreenDark2}{sum}\;[\mskip1.5mu \textcolor{PIKgreenDark2}{fst}\;\textcolor{PIKgreenDark2}{dp}\mathbin{*}\textcolor{PIKgreenDark2}{snd}\;\textcolor{PIKgreenDark2}{dp}\mid \textcolor{PIKgreenDark2}{dp}\leftarrow \textcolor{PIKgreenDark2}{dps}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
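This representation can be transcribed almost verbatim into Haskell, which makes it easy to experiment with the conditions below (a sketch; the caveats about this representation discussed at the end of this section apply):
\begin{verbatim}
type Dist a = [(a, Double)]

distMap :: (a -> b) -> Dist a -> Dist b
distMap f aps = [ (f a, p) | (a, p) <- aps ]

distPure :: a -> Dist a
distPure a = [(a, 1.0)]

distJoin :: Dist (Dist a) -> Dist a
distJoin apsps =
  concat [ [ (a, p * q) | (a, p) <- aps ] | (aps, q) <- apsps ]

expected :: Dist Double -> Double
expected dps = sum [ d * p | (d, p) <- dps ]
\end{verbatim}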
With \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{expected}} and \ensuremath{ \mathbin{\oplus} \mathrel{=}\mathbin{+}}, we can now consider the three conditions from above.
\paragraph*{Condition 1.} \hspace{0.1cm}
The first condition \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}} holds since \ensuremath{\mathrm{1.0}}
is neutral for \ensuremath{\mathbin{*}}.
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{expected}\mathbin{\circ}\textcolor{PIKgreenDark2}{distPure}\;\textcolor{PIKgreenDark2}{a}\mathrel{=}\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{1.0}\mathrel{=}\textcolor{PIKgreenDark2}{a}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
The second and third conditions require some arithmetic reasoning, so
let us just check them on two examples.
Let \ensuremath{\textcolor{PIKgreenDark2}{a},\textcolor{PIKgreenDark2}{b},\textcolor{PIKgreenDark2}{c},\textcolor{PIKgreenDark2}{d}} be variables of type \ensuremath{\textcolor{PIKcyanDark2}{Double}} and say we have distributions
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{9}{@{}>{\hspre}c<{\hspost}@{}}%
\column{9E}{@{}l@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{dps1}{}\<[9]%
\>[9]{} \mathop{:} {}\<[9E]%
\>[12]{}\textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{Double}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{dps1}{}\<[9]%
\>[9]{}\mathrel{=}{}\<[9E]%
\>[12]{}[\mskip1.5mu (\textcolor{PIKgreenDark2}{a},\mathrm{0.5}),(\textcolor{PIKgreenDark2}{b},\mathrm{0.3}),(\textcolor{PIKgreenDark2}{c},\mathrm{0.2})\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{9}{@{}>{\hspre}c<{\hspost}@{}}%
\column{9E}{@{}l@{}}%
\column{12}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{dps2}{}\<[9]%
\>[9]{} \mathop{:} {}\<[9E]%
\>[12]{}\textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{Double}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{dps2}{}\<[9]%
\>[9]{}\mathrel{=}{}\<[9E]%
\>[12]{}[\mskip1.5mu (\textcolor{PIKgreenDark2}{a},\mathrm{0.4}),(\textcolor{PIKgreenDark2}{d},\mathrm{0.6})\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{10}{@{}>{\hspre}l<{\hspost}@{}}%
\column{13}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{dpdps}{}\<[10]%
\>[10]{} \mathop{:} {}\<[13]%
\>[13]{}\textcolor{PIKcyanDark2}{Dist}\;(\textcolor{PIKcyanDark2}{Dist}\;\textcolor{PIKcyanDark2}{Double}){}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{dpdps}{}\<[10]%
\>[10]{}\mathrel{=}[\mskip1.5mu (\textcolor{PIKgreenDark2}{dps1},\mathrm{0.1}),(\textcolor{PIKgreenDark2}{dps2},\mathrm{0.9})\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\paragraph*{Condition 2.} \hspace{0.1cm}
Then the second condition \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}} instantiates to
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}(\textcolor{PIKgreenDark2}{expected}\mathbin{\circ}\textcolor{PIKgreenDark2}{distJoin})\;\textcolor{PIKgreenDark2}{dpdps}\mathrel{=}(\textcolor{PIKgreenDark2}{expected}\mathbin{\circ}\textcolor{PIKgreenDark2}{distMap}\;\textcolor{PIKgreenDark2}{expected})\;\textcolor{PIKgreenDark2}{dpdps}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
This equality holds because of the standard properties of
addition and multiplication:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{84}{@{}>{\hspre}c<{\hspost}@{}}%
\column{84E}{@{}l@{}}%
\column{94}{@{}>{\hspre}c<{\hspost}@{}}%
\column{94E}{@{}l@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}(\textcolor{PIKgreenDark2}{expected}\mathbin{\circ}\textcolor{PIKgreenDark2}{distJoin})\;\textcolor{PIKgreenDark2}{dpdps}{}\<[94]%
\>[94]{}\mathrel{=}{}\<[94E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{expected}\;[\mskip1.5mu (\textcolor{PIKgreenDark2}{a},\mathrm{0.5}\mathbin{*}\mathrm{0.1}),(\textcolor{PIKgreenDark2}{b},\mathrm{0.3}\mathbin{*}\mathrm{0.1}),(\textcolor{PIKgreenDark2}{c},\mathrm{0.2}\mathbin{*}\mathrm{0.1}),(\textcolor{PIKgreenDark2}{a},\mathrm{0.4}\mathbin{*}\mathrm{0.9}),(\textcolor{PIKgreenDark2}{d},\mathrm{0.6}\mathbin{*}\mathrm{0.9})\mskip1.5mu]{}\<[94]%
\>[94]{}\mathrel{=}{}\<[94E]%
\\[\blanklineskip]%
\>[3]{}(\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.5}\mathbin{*}\mathrm{0.1})\mathbin{+}(\textcolor{PIKgreenDark2}{b}\mathbin{*}\mathrm{0.3}\mathbin{*}\mathrm{0.1})\mathbin{+}(\textcolor{PIKgreenDark2}{c}\mathbin{*}\mathrm{0.2}\mathbin{*}\mathrm{0.1})\mathbin{+}(\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.4}\mathbin{*}\mathrm{0.9})\mathbin{+}(\textcolor{PIKgreenDark2}{d}\mathbin{*}\mathrm{0.6}\mathbin{*}\mathrm{0.9}){}\<[94]%
\>[94]{}\mathrel{=}{}\<[94E]%
\\[\blanklineskip]%
\>[3]{}((\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.5}\mathbin{+}\textcolor{PIKgreenDark2}{b}\mathbin{*}\mathrm{0.3}\mathbin{+}\textcolor{PIKgreenDark2}{c}\mathbin{*}\mathrm{0.2})\mathbin{*}\mathrm{0.1}\mathbin{+}(\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.4}\mathbin{+}\textcolor{PIKgreenDark2}{d}\mathbin{*}\mathrm{0.6})\mathbin{*}\mathrm{0.9}{}\<[84]%
\>[84]{}\mathrel{=}{}\<[84E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{expected}\;[\mskip1.5mu (\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.5}\mathbin{+}\textcolor{PIKgreenDark2}{b}\mathbin{*}\mathrm{0.3}\mathbin{+}\textcolor{PIKgreenDark2}{c}\mathbin{*}\mathrm{0.2},\mathrm{0.1}),(\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.4}\mathbin{+}\textcolor{PIKgreenDark2}{d}\mathbin{*}\mathrm{0.6},\mathrm{0.9})\mskip1.5mu]{}\<[84]%
\>[84]{}\mathrel{=}{}\<[84E]%
\\[\blanklineskip]%
\>[3]{}(\textcolor{PIKgreenDark2}{expected}\mathbin{\circ}\textcolor{PIKgreenDark2}{distMap}\;\textcolor{PIKgreenDark2}{expected})\;\textcolor{PIKgreenDark2}{dpdps}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
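Using the Haskell transcription from above, this instance of the condition can also be checked numerically; with the hypothetical concrete values a = 1, b = 2, c = 3 and d = 4, both sides evaluate to 2.69 (up to floating-point rounding):
\begin{verbatim}
dps1, dps2 :: Dist Double
dps1 = [(1, 0.5), (2, 0.3), (3, 0.2)]   -- a = 1, b = 2, c = 3
dps2 = [(1, 0.4), (4, 0.6)]             -- a = 1, d = 4

dpdps :: Dist (Dist Double)
dpdps = [(dps1, 0.1), (dps2, 0.9)]

-- Both sides of measJoinSpec agree (approximately, due to Double):
checkCond2 :: Bool
checkCond2 = abs (lhs - rhs) < 1e-9
  where lhs = expected (distJoin dpdps)           -- 2.69
        rhs = expected (distMap expected dpdps)   -- 2.69
\end{verbatim}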
\paragraph*{Condition 3.} \hspace{0.1cm}
For the third condition \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}}, consider for some \ensuremath{\textcolor{PIKgreenDark2}{v} \mathop{:} \textcolor{PIKcyanDark2}{Double}}
the equation
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}(\textcolor{PIKgreenDark2}{expected}\mathbin{\circ}\textcolor{PIKgreenDark2}{distMap}\;(\textcolor{PIKgreenDark2}{v}\mathbin{+}))\;\textcolor{PIKgreenDark2}{dps1}\mathrel{=}((\textcolor{PIKgreenDark2}{v}\mathbin{+})\mathbin{\circ}\textcolor{PIKgreenDark2}{expected})\;\textcolor{PIKgreenDark2}{dps1}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Again using the usual arithmetic laws for \ensuremath{\mathbin{+}} and \ensuremath{\mathbin{*}}, we can calculate
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{66}{@{}>{\hspre}c<{\hspost}@{}}%
\column{66E}{@{}l@{}}%
\column{70}{@{}>{\hspre}c<{\hspost}@{}}%
\column{70E}{@{}l@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{expected}\;(\textcolor{PIKgreenDark2}{distMap}\;(\textcolor{PIKgreenDark2}{v}\mathbin{+})\;[\mskip1.5mu (\textcolor{PIKgreenDark2}{a},\mathrm{0.5}),(\textcolor{PIKgreenDark2}{b},\mathrm{0.3}),(\textcolor{PIKgreenDark2}{c},\mathrm{0.2})\mskip1.5mu]){}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\\[\blanklineskip]%
\>[3]{}(\textcolor{PIKgreenDark2}{v}\mathbin{+}\textcolor{PIKgreenDark2}{a})\mathbin{*}\mathrm{0.5}\mathbin{+}(\textcolor{PIKgreenDark2}{v}\mathbin{+}\textcolor{PIKgreenDark2}{b})\mathbin{*}\mathrm{0.3}\mathbin{+}(\textcolor{PIKgreenDark2}{v}\mathbin{+}\textcolor{PIKgreenDark2}{c})\mathbin{*}\mathrm{0.2}{}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\\[\blanklineskip]%
\>[3]{}(\textcolor{PIKgreenDark2}{v}\mathbin{*}\mathrm{0.5}\mathbin{+}\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.5})\mathbin{+}(\textcolor{PIKgreenDark2}{v}\mathbin{*}\mathrm{0.3}\mathbin{+}\textcolor{PIKgreenDark2}{b}\mathbin{*}\mathrm{0.3})\mathbin{+}(\textcolor{PIKgreenDark2}{v}\mathbin{*}\mathrm{0.2}\mathbin{+}\textcolor{PIKgreenDark2}{c}\mathbin{*}\mathrm{0.2}){}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\\[\blanklineskip]%
\>[3]{}(\textcolor{PIKgreenDark2}{v}\mathbin{*}\mathrm{0.5}\mathbin{+}\textcolor{PIKgreenDark2}{v}\mathbin{*}\mathrm{0.3}\mathbin{+}\textcolor{PIKgreenDark2}{v}\mathbin{*}\mathrm{0.2})\mathbin{+}(\textcolor{PIKgreenDark2}{a}\mathbin{*}\mathrm{0.5}\mathbin{+}\textcolor{PIKgreenDark2}{b}\mathbin{*}\mathrm{0.3}\mathbin{+}\textcolor{PIKgreenDark2}{c}\mathbin{*}\mathrm{0.2}){}\<[66]%
\>[66]{}\mathrel{=}{}\<[66E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{v}\mathbin{+}\textcolor{PIKgreenDark2}{expected}\;[\mskip1.5mu (\textcolor{PIKgreenDark2}{a},\mathrm{0.5}),(\textcolor{PIKgreenDark2}{b},\mathrm{0.3}),(\textcolor{PIKgreenDark2}{c},\mathrm{0.2})\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
As we can see, an essential ingredient for the equality to hold is that
the mapped occurrences of
\ensuremath{(\textcolor{PIKgreenDark2}{v}\mathbin{+})} are weighted by probabilities that add up to 1.
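The same numerical sanity check works for this equation (a sketch, reusing \verb|dps1| from above and the arbitrary choice v = 10):
\begin{verbatim}
-- Both sides of measPlusSpec agree (approximately, due to Double):
checkCond3 :: Bool
checkCond3 = abs (lhs - rhs) < 1e-9
  where lhs = expected (distMap (10 +) dps1)   -- 11.7
        rhs = 10 + expected dps1               -- 11.7
\end{verbatim}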
\vspace{0.2cm}
Note that in this example we have glossed over problems that might arise
from the use of \ensuremath{\textcolor{PIKcyanDark2}{Dist}} to represent probability distributions.%
\footnote{For the sake of simplicity,
we do not address (important)
conceptual questions concerning the representation
of probability distributions or the problems caused by the use of
floating point arithmetic in this example.
Note, however, that the chosen type \ensuremath{\textcolor{PIKcyanDark2}{Prob}} enforces neither
that the probabilities lie in the interval \ensuremath{[\mskip1.5mu \mathrm{0},\mathrm{1}\mskip1.5mu]} nor
that they add up to \ensuremath{\mathrm{1}}. These properties would, however,
be crucial for actual proofs.}
We will briefly address probability monads and the expected value
from a more abstract perspective in Subsection~\ref{subsection:impactMeas}.
\subsection{Examples and counter-examples}
\label{subsection:exAndCounterEx}
Besides the motivating example above, let us now consider some more
functions that have the correct type to serve as a measure,
and that do or do not fulfil the three conditions.
Simple examples of admissible measures are the minimum (\ensuremath{\textcolor{PIKgreenDark2}{minList}} as
defined in Fig.~\ref{fig:example1Formal}) or maximum (\ensuremath{\textcolor{PIKgreenDark2}{maxList}\mathrel{=}\textcolor{PIKgreenDark2}{foldr}\mathbin{`\textcolor{PIKgreenDark2}{maximum}`}\mathrm{0}}) of a list for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}} with \ensuremath{\mathbb{N}} as type of values
and ordinary addition as \ensuremath{ \mathbin{\oplus} }. It is straightforward to prove that
the conditions hold for these two measures; the proofs for
\ensuremath{\textcolor{PIKgreenDark2}{maxList}} are included in the supplementary material.
The function \ensuremath{\textcolor{PIKgreenDark2}{length}} is a very simple counter-example:
it has the right type for a list measure but fails all three
of the conditions.
As to other counter-examples, let us revisit the conditions one by one.
Throughout, we use \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}} with \ensuremath{\textcolor{PIKgreenDark2}{map}\mathrel{=}\textcolor{PIKgreenDark2}{listMap}}, \ensuremath{\textcolor{PIKgreenDark2}{join}\mathrel{=}\textcolor{PIKgreenDark2}{concat}}
and \ensuremath{ \mathbin{\oplus} \mathrel{=}\mathbin{+}}
(the canonical addition for the respective type of \ensuremath{\textcolor{PIKcyanDark2}{Val}}).
\paragraph*{Condition~1.} \hspace{0.1cm}
We remain in the setting of
Ex.~\ref{subsection:example1SDPs} with \ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\mathbb{N}},
and just vary the measure. Using a somewhat contrived
variation of \ensuremath{\textcolor{PIKgreenDark2}{maxList}}
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{15}{@{}>{\hspre}c<{\hspost}@{}}%
\column{15E}{@{}l@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{maxListVar}{}\<[15]%
\>[15]{} \mathop{:} {}\<[15E]%
\>[18]{}\textcolor{PIKcyanDark2}{List}\;\mathbb{N} \to \mathbb{N}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{maxListVar}{}\<[15]%
\>[15]{}\mathrel{=}{}\<[15E]%
\>[18]{}\textcolor{PIKgreenDark2}{foldr}\;(\lambda \textcolor{PIKgreenDark2}{x},\textcolor{PIKgreenDark2}{v}\Rightarrow (\textcolor{PIKgreenDark2}{x}\mathbin{+}\mathrm{1}\mathbin{`\textcolor{PIKgreenDark2}{maximum}`}\textcolor{PIKgreenDark2}{v}))\;\mathrm{0}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
with \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{maxListVar}} it suffices to consider,
for an arbitrary \ensuremath{\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}}, the computation
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}(\textcolor{PIKgreenDark2}{maxListVar}\mathbin{\circ}\textcolor{PIKgreenDark2}{pure})\;\textcolor{PIKgreenDark2}{n}\mathrel{=}\textcolor{PIKgreenDark2}{maxListVar}\;[\mskip1.5mu \textcolor{PIKgreenDark2}{n}\mskip1.5mu]\mathrel{=}(\textcolor{PIKgreenDark2}{n}\mathbin{+}\mathrm{1})\mathbin{`\textcolor{PIKgreenDark2}{maximum}`}\mathrm{0}\mathrel{=}\textcolor{PIKgreenDark2}{n}\mathbin{+}\mathrm{1}\neq\textcolor{PIKgreenDark2}{n}\mathrel{=}\textcolor{PIKgreenDark2}{id}\;\textcolor{PIKgreenDark2}{n}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
to see that the condition \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}} now fails.
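As a quick check in plain Haskell (with \verb|Int| standing in for \ensuremath{\mathbb{N}}):
\begin{verbatim}
maxListVar :: [Int] -> Int
maxListVar = foldr (\x v -> max (x + 1) v) 0

-- maxListVar (pure n) = max (n + 1) 0 = n + 1, not n, for all n >= 0.
\end{verbatim}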
\paragraph*{Condition~2.} \hspace{0.1cm}
To exhibit a measure that fails the condition \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}}, we
switch to \ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\textcolor{PIKcyanDark2}{Double}} and use the arithmetic average
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{8}{@{}>{\hspre}c<{\hspost}@{}}%
\column{8E}{@{}l@{}}%
\column{11}{@{}>{\hspre}l<{\hspost}@{}}%
\column{14}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{avg}{}\<[8]%
\>[8]{} \mathop{:} {}\<[8E]%
\>[11]{}\textcolor{PIKcyanDark2}{List}\;\textcolor{PIKcyanDark2}{Double} \to \textcolor{PIKcyanDark2}{Double}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{avg}\;[\mskip1.5mu \mskip1.5mu]{}\<[11]%
\>[11]{}\mathrel{=}{}\<[14]%
\>[14]{}\mathrm{0.0}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{avg}\;\textcolor{PIKgreenDark2}{ds}{}\<[11]%
\>[11]{}\mathrel{=}\textcolor{PIKgreenDark2}{sum}\;\textcolor{PIKgreenDark2}{ds}\mathbin{/}\textcolor{PIKgreenDark2}{cast}\;(\textcolor{PIKgreenDark2}{length}\;\textcolor{PIKgreenDark2}{ds}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
as measure \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{avg}}. Taking a list of lists of different lengths,
like [[1], [2, 3]], we have
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{39}{@{}>{\hspre}c<{\hspost}@{}}%
\column{39E}{@{}l@{}}%
\column{43}{@{}>{\hspre}l<{\hspost}@{}}%
\column{58}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{avg}\;(\textcolor{PIKgreenDark2}{concat}\;[\mskip1.5mu [\mskip1.5mu \mathrm{1}\mskip1.5mu],[\mskip1.5mu \mathrm{2},\mathrm{3}\mskip1.5mu]\mskip1.5mu]){}\<[39]%
\>[39]{}\mathrel{=}{}\<[39E]%
\>[43]{}\textcolor{PIKgreenDark2}{avg}\;[\mskip1.5mu \mathrm{1},\mathrm{2},\mathrm{3}\mskip1.5mu]{}\<[58]%
\>[58]{}\mathrel{=}\mathrm{2}{}\<[E]%
\\
\>[39]{}\neq{}\<[39E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{avg}\;(\textcolor{PIKgreenDark2}{listMap}\;\textcolor{PIKgreenDark2}{avg}\;[\mskip1.5mu [\mskip1.5mu \mathrm{1}\mskip1.5mu],[\mskip1.5mu \mathrm{2},\mathrm{3}\mskip1.5mu]\mskip1.5mu]){}\<[39]%
\>[39]{}\mathrel{=}{}\<[39E]%
\>[43]{}\textcolor{PIKgreenDark2}{avg}\;[\mskip1.5mu \mathrm{1},\mathrm{2.5}\mskip1.5mu]{}\<[58]%
\>[58]{}\mathrel{=}\mathrm{1.75}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
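The corresponding check in plain Haskell (a sketch):
\begin{verbatim}
avg :: [Double] -> Double
avg [] = 0.0
avg ds = sum ds / fromIntegral (length ds)

-- avg (concat [[1], [2,3]])  == avg [1, 2, 3] == 2.0
-- avg (map avg [[1], [2,3]]) == avg [1, 2.5]  == 1.75
\end{verbatim}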
\paragraph*{Condition~3.} \hspace{0.1cm}
Let \ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\mathbb{N}} again, and take another look at our counter-example
from the last section with \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{sum}}, the arithmetic sum of a list.
It does fulfil \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}} and \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}}, the first by definition,
the second by structural induction using the associativity of \ensuremath{\mathbin{+}}.
But it fails to fulfil
\ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}}. If the list has the form \ensuremath{\textcolor{PIKgreenDark2}{a}\mathbin{::}\textcolor{PIKgreenDark2}{as}}, we would have to
show the following equality for \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} to hold:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}(\textcolor{PIKgreenDark2}{sum}\mathbin{\circ}\textcolor{PIKgreenDark2}{listMap}\;(\textcolor{PIKgreenDark2}{v}\mathbin{+}))\;(\textcolor{PIKgreenDark2}{a}\mathbin{::}\textcolor{PIKgreenDark2}{as})\mathrel{=}((\textcolor{PIKgreenDark2}{v}\mathbin{+})\mathbin{\circ}\textcolor{PIKgreenDark2}{sum})\;(\textcolor{PIKgreenDark2}{a}\mathbin{::}\textcolor{PIKgreenDark2}{as}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Clearly, if \ensuremath{\textcolor{PIKgreenDark2}{v}\neq\mathrm{0}} and \ensuremath{\textcolor{PIKgreenDark2}{as}\neq[\mskip1.5mu \mskip1.5mu]} this equality cannot hold.
This is why in the last section the equality of \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}}
failed for \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{sum}}.\\
A similar failure would arise if we chose \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{foldr}\;(\mathbin{*})\;\mathrm{1}} instead,
as \ensuremath{\mathbin{+}} does not distribute over \ensuremath{\mathbin{*}}. But if we turned the situation
around by setting \ensuremath{ \mathbin{\oplus} \mathrel{=}\mathbin{*}} and \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{sum}}, the condition
\ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} would hold thanks to the usual arithmetic
distributivity law for \ensuremath{\mathbin{*}} over \ensuremath{\mathbin{+}}.
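The role of distributivity can be made explicit with two small Haskell predicates (a sketch):
\begin{verbatim}
-- Fails for v /= 0 and non-empty as: each element contributes one v.
cond3SumPlus :: Int -> [Int] -> Bool
cond3SumPlus v as = sum (map (v +) as) == v + sum as

-- Holds for all inputs, since (*) distributes over (+).
cond3SumTimes :: Int -> [Int] -> Bool
cond3SumTimes v as = sum (map (v *) as) == v * sum as
\end{verbatim}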
\vspace{0.2cm}
All of the measures considered in this subsection do fulfil the
\ensuremath{\textcolor{PIKgreenDark2}{measMonSpec}} condition imposed by the BJI-theory. This raises the
question of how previously admissible measures are impacted by adding the
three new conditions to the framework.
\subsection{Impact on previously admissible measures}
\label{subsection:impactMeas}
As we have seen in Sec.~\ref{subsection:solution_components}, the
BJI-framework already requires measures to fulfil the monotonicity
condition
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{16}{@{}>{\hspre}c<{\hspost}@{}}%
\column{16E}{@{}l@{}}%
\column{19}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measMonSpec}{}\<[16]%
\>[16]{} \mathop{:} {}\<[16E]%
\>[19]{}\{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{f},\textcolor{PIKgreenDark2}{g} \mathop{:} \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Val}) \to ((\textcolor{PIKgreenDark2}{a} \mathop{:} \textcolor{PIKcyanDark2}{A}) \to (\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{a}) \,\sqsubseteq\, (\textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{a})) \to {}\<[E]%
\\
\>[19]{}(\textcolor{PIKgreenDark2}{ma} \mathop{:} \textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{A}) \to \textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{ma}) \,\sqsubseteq\, \textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{ma}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\bottaetal show that the arithmetic average (for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}}), the
worst-case measure (for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}} and for a probability monad \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{Prob}})
and the expected value measure (for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{Prob}}) all fulfil \ensuremath{\textcolor{PIKgreenDark2}{measMonSpec}}.
Thus, a natural question is whether these measures also fulfil the three
additional requirements.
\paragraph*{Expected value for probability distributions.} \hspace{0.1cm}
As already discussed, most applications of backward induction concern
stochastic SDPs where possible rewards are aggregated using the
expected value measure from probability theory, commonly denoted as \ensuremath{\textcolor{PIKcyanDark2}{E}}.
Essentially, for a numerical type \ensuremath{\textcolor{PIKcyanDark2}{Q}}, the
expected value of a probability distribution on \ensuremath{\textcolor{PIKcyanDark2}{Q}} is
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{6}{@{}>{\hspre}c<{\hspost}@{}}%
\column{6E}{@{}l@{}}%
\column{9}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKcyanDark2}{E}{}\<[6]%
\>[6]{} \mathop{:} {}\<[6E]%
\>[9]{}\textcolor{PIKcyanDark2}{Num}\;\textcolor{PIKcyanDark2}{Q}\Rightarrow \textcolor{PIKcyanDark2}{Prob}\;\textcolor{PIKcyanDark2}{Q} \to \textcolor{PIKcyanDark2}{Q}{}\<[E]%
\\
\>[3]{}\textcolor{PIKcyanDark2}{E}\;\textcolor{PIKgreenDark2}{pq}\mathrel{=}\textcolor{PIKgreenDark2}{sum}\;[\mskip1.5mu \textcolor{PIKgreenDark2}{q}\mathbin{*}\textcolor{PIKgreenDark2}{prob}\;\textcolor{PIKgreenDark2}{pq}\;\textcolor{PIKgreenDark2}{q}\mid \textcolor{PIKgreenDark2}{q}\leftarrow \textcolor{PIKgreenDark2}{supp}\;\textcolor{PIKgreenDark2}{pq}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
where \ensuremath{\textcolor{PIKgreenDark2}{prob}} and \ensuremath{\textcolor{PIKgreenDark2}{supp}} are generic functions that encode the notions of
\emph{probability} and of \emph{support} associated with a finite
probability distribution:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{prob} \mathop{:} \{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{Prob}\;\textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{Q}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{supp} \mathop{:} \{\mskip1.5mu \textcolor{PIKcyanDark2}{A} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to \textcolor{PIKcyanDark2}{Prob}\;\textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{List}\;\textcolor{PIKcyanDark2}{A}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
For \ensuremath{\textcolor{PIKgreenDark2}{pa}} and \ensuremath{\textcolor{PIKgreenDark2}{a}} of suitable types, \ensuremath{\textcolor{PIKgreenDark2}{prob}\;\textcolor{PIKgreenDark2}{pa}\;\textcolor{PIKgreenDark2}{a}} represents the
probability of \ensuremath{\textcolor{PIKgreenDark2}{a}} according to \ensuremath{\textcolor{PIKgreenDark2}{pa}}. Similarly, \ensuremath{\textcolor{PIKgreenDark2}{supp}\;\textcolor{PIKgreenDark2}{pa}} returns a
list of those values whose probability is not zero in \ensuremath{\textcolor{PIKgreenDark2}{pa}}.
The probability function \ensuremath{\textcolor{PIKgreenDark2}{prob}} has to fulfil the axioms of
probability theory. In particular,
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{sum}\;[\mskip1.5mu \textcolor{PIKgreenDark2}{prob}\;\textcolor{PIKgreenDark2}{pa}\;\textcolor{PIKgreenDark2}{a}\mid \textcolor{PIKgreenDark2}{a}\leftarrow \textcolor{PIKgreenDark2}{supp}\;\textcolor{PIKgreenDark2}{pa}\mskip1.5mu]\mathrel{=}\mathrm{1}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
This condition implies that probability distributions cannot be empty, a
precondition of \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}}. Putting forward minimal specifications
for \ensuremath{\textcolor{PIKgreenDark2}{prob}} and \ensuremath{\textcolor{PIKgreenDark2}{supp}} is not completely trivial, but if the \ensuremath{\mathbin{+}}-operation
associated with \ensuremath{\textcolor{PIKcyanDark2}{Q}} is commutative and associative, if \ensuremath{\mathbin{*}}
distributes over \ensuremath{\mathbin{+}} and if the \ensuremath{\textcolor{PIKgreenDark2}{map}} and \ensuremath{\textcolor{PIKgreenDark2}{join}} associated with
\ensuremath{\textcolor{PIKcyanDark2}{Prob}} -- for \ensuremath{\textcolor{PIKgreenDark2}{f}}, \ensuremath{\textcolor{PIKgreenDark2}{a}}, \ensuremath{\textcolor{PIKgreenDark2}{b}}, \ensuremath{\textcolor{PIKgreenDark2}{pa}} and \ensuremath{\textcolor{PIKgreenDark2}{ppa}} of suitable types --
fulfil the conservation law
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{prob}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{pa})\;\textcolor{PIKgreenDark2}{b}\mathrel{=}\textcolor{PIKgreenDark2}{sum}\;[\mskip1.5mu \textcolor{PIKgreenDark2}{prob}\;\textcolor{PIKgreenDark2}{pa}\;\textcolor{PIKgreenDark2}{a}\mid \textcolor{PIKgreenDark2}{a}\leftarrow \textcolor{PIKgreenDark2}{supp}\;\textcolor{PIKgreenDark2}{pa},\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{a}\doubleequals\textcolor{PIKgreenDark2}{b}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
and the total probability law
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{prob}\;(\textcolor{PIKgreenDark2}{join}\;\textcolor{PIKgreenDark2}{ppa})\;\textcolor{PIKgreenDark2}{a}\mathrel{=}\textcolor{PIKgreenDark2}{sum}\;[\mskip1.5mu \textcolor{PIKgreenDark2}{prob}\;\textcolor{PIKgreenDark2}{pa}\;\textcolor{PIKgreenDark2}{a}\mathbin{*}\textcolor{PIKgreenDark2}{prob}\;\textcolor{PIKgreenDark2}{ppa}\;\textcolor{PIKgreenDark2}{pa}\mid \textcolor{PIKgreenDark2}{pa}\leftarrow \textcolor{PIKgreenDark2}{supp}\;\textcolor{PIKgreenDark2}{ppa}\mskip1.5mu]{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
then the expected value fulfils \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}}, \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}} and
\ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}}.
This is not surprising since -- as stated above -- this has been the
guiding example for the generalisation to monadic SDPs and the formulation
of the three conditions.
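To connect this abstract formulation back to the concrete list-of-pairs representation, \ensuremath{\textcolor{PIKgreenDark2}{prob}} and \ensuremath{\textcolor{PIKgreenDark2}{supp}} can be instantiated for the \verb|Dist| transcription from above (a sketch with names of our choosing; the framework itself keeps \ensuremath{\textcolor{PIKcyanDark2}{Prob}} abstract):
\begin{verbatim}
import Data.List (nub)

probD :: Eq a => Dist a -> a -> Double
probD pa a = sum [ p | (x, p) <- pa, x == a ]

suppD :: Eq a => Dist a -> [a]
suppD pa = nub [ x | (x, p) <- pa, p /= 0 ]

-- The abstract E agrees with 'expected' from above: e.g. on
-- [(1, 0.3), (1, 0.2), (2, 0.5)] both yield 1 * 0.5 + 2 * 0.5 = 1.5.
eD :: Dist Double -> Double
eD pq = sum [ q * probD pq q | q <- suppD pq ]
\end{verbatim}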
\paragraph*{Average and arithmetic sum.} \hspace{0.1cm}
As can already be concluded
from the corresponding counter-examples in the previous subsection,
neither the plain arithmetic average nor the arithmetic sum is
suitable as a measure when the standard monad structure on
\ensuremath{\textcolor{PIKcyanDark2}{List}} is used to represent non-deterministic
uncertainty. We think this is an important observation, as the
average seems innocent enough to come to mind as a simple way
to represent uniformly distributed outcomes:
\emph{``The probability of each element can simply be inferred from the length of the list
-- so why bother to explicitly deal with probabilities?''}
Although our counter-example shows that this idea is flawed, the intuition
behind it can be employed to define an alternative, but less general, monad
structure on lists by incorporating the averaging operation into the joining
of lists (i.e. by choosing \ensuremath{\textcolor{PIKgreenDark2}{join}\mathrel{=}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{avg}}).
However, this only makes sense for types that are instances of the \ensuremath{\textcolor{PIKcyanDark2}{Num}}
and \ensuremath{\textcolor{PIKcyanDark2}{Fractional}} type classes, and naturality only holds for a restricted class of
functions (namely additive functions). As a consequence, this alternative structure
does not seem particularly useful for our current purpose either.
\paragraph*{Worst-case measures.} \hspace{0.1cm}
In many important applications in
climate impact research but also in portfolio management and sports,
decisions are taken so as to minimise the consequences of worst-case
outcomes. Depending on how ``worst'' is defined, the corresponding
measures might pick the maximum or
minimum from an \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of values. In the previous subsection we
considered an example in which the monad was \ensuremath{\textcolor{PIKcyanDark2}{List}}, the operation
\ensuremath{ \mathbin{\oplus} } was plain addition, and the measure was either \ensuremath{\textcolor{PIKgreenDark2}{maxList}} or
\ensuremath{\textcolor{PIKgreenDark2}{minList}}. And indeed we can prove that for both measures the three
requirements hold (the proofs for \ensuremath{\textcolor{PIKgreenDark2}{maxList}} can be found in the
supplementary material). This gives us a useful notion of worst-case
measure that is admissible for monadic backward induction.
\vspace{0.2cm}
We can thus conclude that the new requirements hold for certain
familiar measures, but that they also rule out
certain instances that were considered admissible in the BJI-framework.
Given that the three conditions \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}}, \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}} and
\ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} hold, we can prove the
extensional equality of the functions \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} generically.
This is what we will do in the next section.
\section{Correctness Proofs}
\label{section:valval}
In this section we show that \ensuremath{\textcolor{PIKgreenDark2}{val}}
(Sec.~\ref{subsection:solution_components}) and
\ensuremath{\textcolor{PIKgreenDark2}{val'}} (Sec.~\ref{section:preparation}) are extensionally equal
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{23}{@{}>{\hspre}c<{\hspost}@{}}%
\column{23E}{@{}l@{}}%
\column{26}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{valMeasTotalReward}{}\<[23]%
\>[23]{} \mathop{:} {}\<[23E]%
\>[26]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to \textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
provided that the three conditions from the previous section hold. As a
corollary we then obtain our correctness result for monadic backward
induction.
We can understand the proof of \ensuremath{\textcolor{PIKgreenDark2}{valMeasTotalReward}} as an optimising
program transformation from the less efficient but ``obviously
correct'' implementation \ensuremath{\textcolor{PIKgreenDark2}{val'}} to the more efficient implementation
\ensuremath{\textcolor{PIKgreenDark2}{val}}. Therefore the equational reasoning proofs in this section will
proceed from \ensuremath{\textcolor{PIKgreenDark2}{val'}} to \ensuremath{\textcolor{PIKgreenDark2}{val}}. In Sec.~\ref{section:conditions} we
have stated sufficient conditions for this transformation to be
possible: \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}}, \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}}, \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}}. We have also
seen the different computational patterns that the two implementations
exhibit: While \ensuremath{\textcolor{PIKgreenDark2}{val'}} first computes all possible trajectories for the
given policy sequence and initial state, then computes their
individual sum of rewards and finally applies the measure once, \ensuremath{\textcolor{PIKgreenDark2}{val}}
computes its final result by adding the current reward to an
intermediate outcome and applying the measure locally at each decision
step. This suggests that a transformation from \ensuremath{\textcolor{PIKgreenDark2}{val'}} to \ensuremath{\textcolor{PIKgreenDark2}{val}} will
essentially have to push the application of the measure into the
recursive computation of the sum of rewards. The proof will be carried
out by induction on the structure of policy sequences.
\subsection{Deterministic Case}
\label{subsection:detCase}
To get a first intuition, let us have a look at what
the induction step looks like in the deterministic case
(i.e.\ when both the monad and the measure are fixed to be identities):
{\linespread{1.5}\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{7}{@{}>{\hspre}l<{\hspost}@{}}%
\column{48}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{valMeasTotalReward}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}\mathrel{=}{}\<[E]%
\\
\>[3]{}\hsindent{4}{}\<[7]%
\>[7]{}(\textcolor{PIKgreenDark2}{val'}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}){}\<[48]%
\>[48]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val'}\;\}\hspace{-3pt}={}\<[E]%
\\
\>[3]{}\hsindent{4}{}\<[7]%
\>[7]{}(\textcolor{PIKgreenDark2}{sumR}\;((\textcolor{PIKgreenDark2}{x} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y}) \mathbin{\#\!\#} \textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'})){}\<[48]%
\>[48]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{sumR}\;\}\hspace{-3pt}={}\<[E]%
\\
\>[3]{}\hsindent{4}{}\<[7]%
\>[7]{}(\textcolor{PIKgreenDark2}{r}\;(\textcolor{PIKgreenDark2}{head}\;(\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'})) \mathbin{\oplus} \textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'}){}\<[48]%
\>[48]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{headLemma}\;\}\hspace{-3pt}={}\<[E]%
\\
\>[3]{}\hsindent{4}{}\<[7]%
\>[7]{}(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'} \mathbin{\oplus} \textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'}){}\<[48]%
\>[48]{}=\hspace{-3pt}\{\; \text{by }\;\text{induction hypothesis}\;\}\hspace{-3pt}={}\<[E]%
\\
\>[3]{}\hsindent{4}{}\<[7]%
\>[7]{}(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'} \mathbin{\oplus} \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'}){}\<[48]%
\>[48]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val}\;\}\hspace{-3pt}={}\<[E]%
\\
\>[3]{}\hsindent{4}{}\<[7]%
\>[7]{}(\textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x})\;{}\<[48]%
\>[48]{}\hfill\Box{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
}
\noindent where \ensuremath{\textcolor{PIKgreenDark2}{y}\mathrel{=}\textcolor{PIKgreenDark2}{p}\;\textcolor{PIKgreenDark2}{x}}, \ensuremath{\textcolor{PIKgreenDark2}{x'}\mathrel{=}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}} and \ensuremath{\textcolor{PIKgreenDark2}{r}\mathrel{=}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}}. In the
proof sketch, we have first applied the definitions of \ensuremath{\textcolor{PIKgreenDark2}{val'}} and
\ensuremath{\textcolor{PIKgreenDark2}{sumR}}. Using the fact that in the deterministic case \ensuremath{\textcolor{PIKgreenDark2}{trj}} returns
exactly one state-control sequence and that the \ensuremath{\textcolor{PIKgreenDark2}{head}} of any
trajectory starting in \ensuremath{\textcolor{PIKgreenDark2}{x'}} is just \ensuremath{\textcolor{PIKgreenDark2}{x'}} (let us call the latter
\ensuremath{\textcolor{PIKgreenDark2}{headLemma}}), the left-hand side of the sum simplifies to \ensuremath{\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}}. Its
right-hand side amounts to \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x'}} so that we can apply the
induction hypothesis. The rest of the proof
only relies on definitional equalities. Thus in the deterministic case
\ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} are unconditionally extensionally equal -- or rather,
the conditions of Sec.~\ref{section:conditions} are trivially fulfilled.
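Spelled out for Haskell's \verb|Identity| monad, the three conditions indeed collapse to definitional identities (a sketch; \verb|runIdentity| plays the role of \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{id}}):
\begin{verbatim}
import Data.Functor.Identity (Identity (..))
import Control.Monad (join)

measId :: Identity v -> v
measId = runIdentity

-- Condition 1: measId (pure v)        = v
-- Condition 2: measId (join mmv)      = measId (fmap measId mmv)
-- Condition 3: measId (fmap (v +) mv) = v + measId mv
-- All three hold by unfolding definitions; no properties of (+) or
-- of the values are needed.
\end{verbatim}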
\subsection{Lemmas}
\label{subsection:lemmas}
To prove the general, monadic case, we proceed similarly.
This time, however, the situation is complicated by the presence of
the abstract monad \ensuremath{\textcolor{PIKcyanDark2}{M}}. Instead of being able to use the type structure
of some concrete monad, we need to rely on the properties of \ensuremath{\textcolor{PIKcyanDark2}{M}},
\ensuremath{\textcolor{PIKgreenDark2}{meas}} and \ensuremath{ \mathbin{\oplus} } postulated in Sec.~\ref{section:conditions}. To
facilitate the main proof, we first prove three lemmas about the
interaction of the measure with the monad structure and the
\ensuremath{ \mathbin{\oplus} }-operator on \ensuremath{\textcolor{PIKcyanDark2}{Val}}.
Machine-checked proofs are given in the
Appendices~\ref{appendix:theorem},~\ref{appendix:biCorrectness} and
\ref{appendix:lemmas}. The monad laws we use are stated in
Appendix~\ref{appendix:monadLaws}. In the remainder of this section,
we discuss semi-formal versions of the proofs.\\
\paragraph*{Monad algebras.} \hspace{0.1cm} The first lemma allows us
to lift and eliminate an application of the monad's \ensuremath{\textcolor{PIKgreenDark2}{join}} operation:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measAlgLemma}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKcyanDark2}{A},\textcolor{PIKcyanDark2}{B} \mathop{:} \textcolor{PIKcyanDark2}{Type}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{f} \mathop{:} \textcolor{PIKcyanDark2}{B} \to \textcolor{PIKcyanDark2}{Val}) \to (\textcolor{PIKgreenDark2}{g} \mathop{:} \textcolor{PIKcyanDark2}{A} \to \textcolor{PIKcyanDark2}{M}\;\textcolor{PIKcyanDark2}{B}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\mathbin{\circ}\textcolor{PIKgreenDark2}{g}))\doteq(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\mathbin{\circ}\textcolor{PIKgreenDark2}{join}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
The proof of this lemma hinges on the condition \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}}. It
allows us to trade the application of \ensuremath{\textcolor{PIKgreenDark2}{join}} for an application of
\ensuremath{\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{meas}}. The rest is just standard reasoning with monad and functor
laws, i.e. we use that the functorial map for \ensuremath{\textcolor{PIKcyanDark2}{M}} preserves
composition and that \ensuremath{\textcolor{PIKgreenDark2}{join}} is a natural transformation:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{53}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measAlgLemma}\;\textcolor{PIKgreenDark2}{f}\;\textcolor{PIKgreenDark2}{g}\;\textcolor{PIKgreenDark2}{ma}\mathrel{=}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\mathbin{\circ}\textcolor{PIKgreenDark2}{g}))\;\textcolor{PIKgreenDark2}{ma}){}\<[53]%
\>[53]{}=\hspace{-3pt}\{\; map \text{ preserves composition}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f})\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g})\;\textcolor{PIKgreenDark2}{ma}){}\<[53]%
\>[53]{}=\hspace{-3pt}\{\; map \text{ preserves composition}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f})\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g})\;\textcolor{PIKgreenDark2}{ma}){}\<[53]%
\>[53]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{measJoinSpec}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{join}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f})\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g})\;\textcolor{PIKgreenDark2}{ma}){}\<[53]%
\>[53]{}=\hspace{-3pt}\{\; join \text{ is a natural transformation}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{f}\mathbin{\circ}\textcolor{PIKgreenDark2}{join}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{g})\;\textcolor{PIKgreenDark2}{ma})\;{}\<[53]%
\>[53]{}\hfill\Box{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
This lemma is generic in the sense that it holds for arbitrary
Eilenberg-Moore algebras of a monad. Here we prove it for the
framework's measure \ensuremath{\textcolor{PIKgreenDark2}{meas}}, but note that in the appendix we prove a
generic version that is then appropriately instantiated.
\paragraph*{Head/trajectory interaction.} \hspace{0.1cm}
The second lemma amounts to a lifted version of \ensuremath{\textcolor{PIKgreenDark2}{headLemma}} in the
deterministic case. Mapping \ensuremath{\textcolor{PIKgreenDark2}{head}} onto an \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure of
trajectories computed with \ensuremath{\textcolor{PIKgreenDark2}{trj}} results in an \ensuremath{\textcolor{PIKcyanDark2}{M}}-structure filled
with the initial states of these trajectories; similarly, mapping \ensuremath{(\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})} onto \ensuremath{\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} for functions \ensuremath{\textcolor{PIKgreenDark2}{r}} and \ensuremath{\textcolor{PIKgreenDark2}{s}} of
appropriate type is the same as mapping \ensuremath{(\textcolor{PIKgreenDark2}{const}\;(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})} onto
\ensuremath{\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} (where \ensuremath{\textcolor{PIKgreenDark2}{const}} is the constant function). We can prove
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{headTrjLemma}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to (\textcolor{PIKgreenDark2}{r} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{Val}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{s} \mathop{:} \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}) \to \textcolor{PIKcyanDark2}{Val}) \to (\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}\mathrel{=}{}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{const}\;(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
by doing a case split on \ensuremath{\textcolor{PIKgreenDark2}{ps}}. In case \ensuremath{\textcolor{PIKgreenDark2}{ps}\mathrel{=}\textcolor{PIKcyanDark2}{Nil}}, the equality holds
because the monad's \ensuremath{\textcolor{PIKgreenDark2}{pure}} is a natural transformation and in case \ensuremath{\textcolor{PIKgreenDark2}{ps}\mathrel{=}\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps'}} because \ensuremath{\textcolor{PIKcyanDark2}{M}}'s functorial \ensuremath{\textcolor{PIKgreenDark2}{map}} preserves composition.
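For instance, in the \ensuremath{\textcolor{PIKcyanDark2}{Nil}} case both sides reduce to the same \ensuremath{\textcolor{PIKgreenDark2}{pure}}-value. A sketch in plain notation (writing \texttt{<+>} both for \ensuremath{ \mathbin{\medoplus} } on functions and for \ensuremath{ \mathbin{\oplus} } on values):
\begin{verbatim}
-- trj Nil x = pure (Last x) by definition, so
map (r . head <+> s) (pure (Last x))
  = pure ((r . head <+> s) (Last x))    -- pure is natural
  = pure (r x <+> s (Last x))           -- head (Last x) = x
  = pure ((const (r x) <+> s) (Last x))
  = map (const (r x) <+> s) (pure (Last x))
\end{verbatim}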
\paragraph*{Measure/sum interaction.} \hspace{0.1cm}
The third lemma allows us to both commute the measure into the right
summand of an \ensuremath{ \mathbin{\medoplus} }-sum and to perform the head/trajectory
simplification. It lies at the core of the relationship between \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}}.
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{17}{@{}>{\hspre}c<{\hspost}@{}}%
\column{17E}{@{}l@{}}%
\column{20}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measSumLemma}{}\<[17]%
\>[17]{} \mathop{:} {}\<[17E]%
\>[20]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}\mskip1.5mu\} \to (\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{r} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t} \to \textcolor{PIKcyanDark2}{Val}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{s} \mathop{:} \textcolor{PIKcyanDark2}{StateCtrlSeq}\;\textcolor{PIKgreenDark2}{t}\;(\textcolor{PIKcyanDark2}{S}\;\textcolor{PIKgreenDark2}{n}) \to \textcolor{PIKcyanDark2}{Val}) \to {}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\doteq{}\<[E]%
\\
\>[20]{}(\textcolor{PIKgreenDark2}{r} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{s}\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Recall that our third condition from
Sec.~\ref{section:conditions}, \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}}, plays the role of a
distributive law and allows us to ``factor out'' a partially applied
sum \ensuremath{(\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} )} for arbitrary \ensuremath{\textcolor{PIKgreenDark2}{v} \mathop{:} \textcolor{PIKcyanDark2}{Val}}.
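For reference, modulo the non-emptiness premise discussed in Sec.~\ref{section:discussion}, the condition has roughly the following shape (our transcription, writing \texttt{<+>} for \ensuremath{ \mathbin{\oplus} }):
\begin{verbatim}
measPlusSpec : (v : Val) -> (mv : M Val) -> NotEmpty mv ->
               meas (map (v <+>) mv) = v <+> meas mv
\end{verbatim}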
Given that \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} holds, the lemma is provable by simple
equational reasoning using the above head-trajectory lemma and the fact
that map preserves composition:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{66}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{measSumLemma}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{s}\;\textcolor{PIKgreenDark2}{x'}\mathrel{=}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{headTrjLemma}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{const}\;(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{s})\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of } \mathbin{\medoplus} ,\mathbin{\circ}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{const}\;(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{id})\mathbin{\circ}\textcolor{PIKgreenDark2}{s})\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; map \text{ preserves composition}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{const}\;(\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{id})\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{s}\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of } \mathbin{\medoplus} \;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\oplus} )\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{s}\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{measPlusSpec}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((((\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\oplus} )\mathbin{\circ}\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{s}\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of } \mathbin{\medoplus} \;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}((\textcolor{PIKgreenDark2}{r} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{s}\mathbin{\circ}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x'})\;{}\<[66]%
\>[66]{}\hfill\Box{}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Notice how \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} is used to transform an application of
\ensuremath{\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\oplus} )} into an application of \ensuremath{((\textcolor{PIKgreenDark2}{r}\;\textcolor{PIKgreenDark2}{x'}) \mathbin{\oplus} )\mathbin{\circ}\textcolor{PIKgreenDark2}{meas}}.
This is essential to simplify the computation of the measured total reward:
instead of first adding the current reward to the intermediate outcome of each
individual trajectory and then measuring the outcomes, one can first measure the
intermediate outcomes of the trajectories and then add the current reward.
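A toy instance (our illustration, not part of the framework): for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}} with a best-case measure and ordinary addition of rewards, the two evaluation orders visibly agree:
\begin{verbatim}
-- Assumed toy definitions, not from the framework
-- (maximum is the binary maximum on Nat from the Prelude):
measBest : List Nat -> Nat
measBest = foldr maximum 0

-- Both sides evaluate to 7: adding the reward 3 first and then
-- measuring agrees with measuring first and then adding 3.
ex : measBest (map (3 +) [1, 2, 4]) = 3 + measBest [1, 2, 4]
ex = Refl
\end{verbatim}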
\subsection{Correctness of the BJI-value function}
\label{subsection:valvalTh}
With the above lemmas in place, we now prove that \ensuremath{\textcolor{PIKgreenDark2}{val}} is
extensionally equal to \ensuremath{\textcolor{PIKgreenDark2}{val'}}.
\vspace{0.15cm}
Let \ensuremath{\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}}, \ensuremath{\textcolor{PIKgreenDark2}{ps} \mathop{:} \textcolor{PIKcyanDark2}{PolicySeq}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}}. We prove \ensuremath{\textcolor{PIKgreenDark2}{valMeasTotalReward}}
by induction on \ensuremath{\textcolor{PIKgreenDark2}{ps}}.
\vspace{0.15cm}
\paragraph*{Base case.}\hspace{0.1cm}
We need to show that for all \ensuremath{\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}}, \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}} = \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}}. The right hand side of this equation
reduces to \ensuremath{\textcolor{PIKgreenDark2}{zero}} by definition.
The left hand side can be simplified to \ensuremath{\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{pure}\;\textcolor{PIKgreenDark2}{zero})}
since \ensuremath{\textcolor{PIKgreenDark2}{pure}} is a natural transformation. At this point,
our first condition, \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}}, comes into play: using that
\ensuremath{\textcolor{PIKgreenDark2}{meas}} is a left inverse of \ensuremath{\textcolor{PIKgreenDark2}{pure}}, we can conclude
that the equality holds.\\
\noindent In equational reasoning style: For all \ensuremath{\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}},
{\linespread{1.2}
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{5}{@{}>{\hspre}l<{\hspost}@{}}%
\column{44}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{valMeasTotalReward}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}){}\<[44]%
\>[44]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val'}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\;(\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}))){}\<[44]%
\>[44]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{trj}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\;(\textcolor{PIKgreenDark2}{pure}\;(\textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x})))){}\<[44]%
\>[44]{}=\hspace{-3pt}\{\; pure \text{ is a natural transformation}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{pure}\;(\textcolor{PIKgreenDark2}{sumR}\;(\textcolor{PIKcyanDark2}{Last}\;\textcolor{PIKgreenDark2}{x})))){}\<[44]%
\>[44]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{sumR}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{pure}\;\textcolor{PIKgreenDark2}{zero})){}\<[44]%
\>[44]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{measPureSpec}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{zero}){}\<[44]%
\>[44]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{2}{}\<[5]%
\>[5]{}(\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKcyanDark2}{Nil}\;\textcolor{PIKgreenDark2}{x}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
}
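For reference, the condition used in the penultimate step above has (in our transcription) the simple shape:
\begin{verbatim}
measPureSpec : (v : Val) -> meas (pure v) = v
\end{verbatim}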
\paragraph*{Step case.}\hspace{0.1cm}
The induction hypothesis (\ensuremath{\textcolor{PIKcyanDark2}{IH}}) is:
for all \ensuremath{\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}}, \ensuremath{\textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}}. We have to show that
\ensuremath{\textcolor{PIKcyanDark2}{IH}} implies that for all \ensuremath{\textcolor{PIKgreenDark2}{p} \mathop{:} \textcolor{PIKcyanDark2}{Policy}\;\textcolor{PIKgreenDark2}{t}} and \ensuremath{\textcolor{PIKgreenDark2}{x} \mathop{:} \textcolor{PIKcyanDark2}{X}\;\textcolor{PIKgreenDark2}{t}}, the equality
\ensuremath{\textcolor{PIKgreenDark2}{val'}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}\mathrel{=}\textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}} holds.
For brevity (and to economise on brackets), in the following let
\ensuremath{\textcolor{PIKgreenDark2}{y}\mathrel{=}\textcolor{PIKgreenDark2}{p}\;\textcolor{PIKgreenDark2}{x}}, \ensuremath{\textcolor{PIKgreenDark2}{mx'}\mathrel{=}\textcolor{PIKgreenDark2}{next}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}}, \ensuremath{\textcolor{PIKgreenDark2}{r}\mathrel{=}\textcolor{PIKgreenDark2}{reward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{x}\;\textcolor{PIKgreenDark2}{y}}, \ensuremath{\textcolor{PIKgreenDark2}{trjps}\mathrel{=}\textcolor{PIKgreenDark2}{trj}\;\textcolor{PIKgreenDark2}{ps}}, and \ensuremath{\textcolor{PIKgreenDark2}{consxy}\mathrel{=}((\textcolor{PIKgreenDark2}{x} \mathbin{*\!*} \textcolor{PIKgreenDark2}{y}) \mathbin{\#\!\#} )}.
As in the base case, everything that has to be done on the \ensuremath{\textcolor{PIKgreenDark2}{val}}-side of the
equation depends only on definitional equality. However, it is more
involved to bring the \ensuremath{\textcolor{PIKgreenDark2}{val'}}-side into a form in which the induction
hypothesis can be applied. This is where we leverage the lemmas
proved above.
By definition and because \ensuremath{\textcolor{PIKgreenDark2}{map}} preserves
composition, we know that \ensuremath{\textcolor{PIKgreenDark2}{val'}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}} is equal to
\ensuremath{(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{sumR}))\;(\textcolor{PIKgreenDark2}{mx'} \mathbin{>\!\!>\!\!=} \textcolor{PIKgreenDark2}{trjps})}.
We use the relation between the monad's \ensuremath{\textcolor{PIKgreenDark2}{bind}} and \ensuremath{\textcolor{PIKgreenDark2}{join}} to eliminate
the \ensuremath{\textcolor{PIKgreenDark2}{bind}}-operator from the term.
Now we can apply the first lemma from above, \ensuremath{\textcolor{PIKgreenDark2}{measAlgLemma}}, to lift
and eliminate the \ensuremath{\textcolor{PIKgreenDark2}{join}} operation.
To commute the measure under the \ensuremath{ \mathbin{\medoplus} } and get rid of the application
of \ensuremath{\textcolor{PIKgreenDark2}{head}}, we use our third lemma, \ensuremath{\textcolor{PIKgreenDark2}{measSumLemma}}.
At this point we can apply the induction hypothesis and the resulting
term is equal to \ensuremath{\textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps}\;\textcolor{PIKgreenDark2}{x}} by definition.\\
\noindent The more detailed equational reasoning proof:
\footnote{We are very grateful to the anonymous reviewer who suggested
an alternative proof for the induction step. The proof presented
here is based on his proof, and his suggestions have led to
significantly weaker conditions on the measure and thus a stronger
result.}
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{4}{@{}>{\hspre}l<{\hspost}@{}}%
\column{66}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{valMeasTotalReward}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}\mathrel{=}{}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{val'}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val'}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\;(\textcolor{PIKgreenDark2}{trj}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}))){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{trj}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{consxy}\;(\textcolor{PIKgreenDark2}{mx'} \mathbin{>\!\!>\!\!=} \textcolor{PIKgreenDark2}{trjps})))){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; map \text{ preserves composition}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{sumR}\mathbin{\circ}\textcolor{PIKgreenDark2}{consxy})\;(\textcolor{PIKgreenDark2}{mx'} \mathbin{>\!\!>\!\!=} \textcolor{PIKgreenDark2}{trjps}))){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{sumR}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{sumR})\;(\textcolor{PIKgreenDark2}{mx'} \mathbin{>\!\!>\!\!=} \textcolor{PIKgreenDark2}{trjps}))){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{relation } bind/join \;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;((\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head}) \mathbin{\medoplus} \textcolor{PIKgreenDark2}{sumR})\;(\textcolor{PIKgreenDark2}{join}\;(\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{trjps}\;\textcolor{PIKgreenDark2}{mx'})))){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{measAlgLemma}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r}\mathbin{\circ}\textcolor{PIKgreenDark2}{head} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{sumR})\mathbin{\circ}\textcolor{PIKgreenDark2}{trjps})\;\textcolor{PIKgreenDark2}{mx'})){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by }\;\textcolor{PIKgreenDark2}{measSumLemma}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{meas}\mathbin{\circ}\textcolor{PIKgreenDark2}{map}\;\textcolor{PIKgreenDark2}{sumR}\mathbin{\circ}\textcolor{PIKgreenDark2}{trjps})\;\textcolor{PIKgreenDark2}{mx'})){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val'}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{val'}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{mx'})){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by }\;\text{induction hypothesis}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{r} \mathbin{\medoplus} \textcolor{PIKgreenDark2}{val}\;\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{mx'})){}\<[66]%
\>[66]{}=\hspace{-3pt}\{\; \text{by definition of }\;\textcolor{PIKgreenDark2}{val}\;\}\hspace{-3pt}={}\<[E]%
\\[\blanklineskip]%
\>[3]{}\hsindent{1}{}\<[4]%
\>[4]{}(\textcolor{PIKgreenDark2}{val}\;(\textcolor{PIKgreenDark2}{p}\mathbin{::}\textcolor{PIKgreenDark2}{ps})\;\textcolor{PIKgreenDark2}{x}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\hfill $\Box$
\paragraph*{Technical remarks.} \hspace{0.1cm}
The above proof of \ensuremath{\textcolor{PIKgreenDark2}{valMeasTotalReward}} omits some technical details that
may be uninteresting for a pen-and-paper proof, but turn out to be
crucial in the setting of an intensional type theory -- like Idris --
where function extensionality does not hold in general.
In particular, we have to postulate that the functorial \ensuremath{\textcolor{PIKgreenDark2}{map}}
preserves extensional equality (see Appendix~\ref{appendix:monadLaws}
and \citep{botta2020extensional}) for Idris to accept the proof.
In fact, most of the reasoning proceeds by replacing functions that are mapped
onto monadic values by other functions that are only extensionally
equal. Using that \ensuremath{\textcolor{PIKgreenDark2}{map}} preserves extensional equality
allows us to carry out such proofs generically without knowledge of
the concrete structure of the functor.
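In our transcription, the required postulate has roughly the shape (the name \texttt{mapPresEE} is ours; see the appendix for the actual formulation):
\begin{verbatim}
mapPresEE : (f, g : a -> b) -> ((x : a) -> f x = g x) ->
            (ma : M a) -> map f ma = map g ma
\end{verbatim}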
\subsection{Correctness of monadic backward induction}
\label{subsection:biCorrectness}
As a corollary, we can now prove the correctness of monadic backward induction,
namely that the policy sequences computed by \ensuremath{\textcolor{PIKgreenDark2}{bi}} are optimal with
respect to the measured total reward computed by \ensuremath{\textcolor{PIKgreenDark2}{val'}}:
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{6}{@{}>{\hspre}l<{\hspost}@{}}%
\column{18}{@{}>{\hspre}l<{\hspost}@{}}%
\column{25}{@{}>{\hspre}c<{\hspost}@{}}%
\column{25E}{@{}l@{}}%
\column{28}{@{}>{\hspre}l<{\hspost}@{}}%
\column{57}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{biOptMeasTotalReward}{}\<[25]%
\>[25]{} \mathop{:} {}\<[25E]%
\>[28]{}(\textcolor{PIKgreenDark2}{t},\textcolor{PIKgreenDark2}{n} \mathop{:} \mathbb{N}) \to \textcolor{PIKcyanDark2}{GenOptPolicySeq}\;\textcolor{PIKgreenDark2}{val'}\;(\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}){}\<[E]%
\\[\blanklineskip]%
\>[3]{}\textcolor{PIKgreenDark2}{biOptMeasTotalReward}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x}\mathrel{=}{}\<[E]%
\\
\>[3]{}\hsindent{3}{}\<[6]%
\>[6]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{vvEqL}{}\<[18]%
\>[18]{}\mathrel{=}\textcolor{PIKgreenDark2}{sym}\;(\textcolor{PIKgreenDark2}{valMeasTotalReward}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x})\;{}\<[57]%
\>[57]{}\mathbf{in}{}\<[E]%
\\
\>[3]{}\hsindent{3}{}\<[6]%
\>[6]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{vvEqR}{}\<[18]%
\>[18]{}\mathrel{=}\textcolor{PIKgreenDark2}{sym}\;(\textcolor{PIKgreenDark2}{valMeasTotalReward}\;(\textcolor{PIKgreenDark2}{bi}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n})\;\textcolor{PIKgreenDark2}{x})\;{}\<[57]%
\>[57]{}\mathbf{in}{}\<[E]%
\\
\>[3]{}\hsindent{3}{}\<[6]%
\>[6]{}\mathbf{let}\;\textcolor{PIKgreenDark2}{biOpt}{}\<[18]%
\>[18]{}\mathrel{=}\textcolor{PIKgreenDark2}{biOptVal}\;\textcolor{PIKgreenDark2}{t}\;\textcolor{PIKgreenDark2}{n}\;\textcolor{PIKgreenDark2}{ps'}\;\textcolor{PIKgreenDark2}{x}\;{}\<[57]%
\>[57]{}\mathbf{in}{}\<[E]%
\\
\>[3]{}\hsindent{3}{}\<[6]%
\>[6]{}\textcolor{PIKgreenDark2}{replace}\;\textcolor{PIKgreenDark2}{vvEqR}\;(\textcolor{PIKgreenDark2}{replace}\;\textcolor{PIKgreenDark2}{vvEqL}\;\textcolor{PIKgreenDark2}{biOpt}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
\section{Discussion}
\label{section:discussion}
In the last two sections we have seen what the three conditions mean
for concrete examples and how they are used in the correctness proof.
In this section we take a step back and consider them from a more abstract
point of view.
\paragraph*{Category-theoretical perspective.}\hspace{0.1cm}
Readers familiar with the theory of monads might have recognised that the
first two conditions ensure that \ensuremath{\textcolor{PIKgreenDark2}{meas}} is the structure map of a
monad algebra for \ensuremath{\textcolor{PIKcyanDark2}{M}} on \ensuremath{\textcolor{PIKcyanDark2}{Val}} and thus the pair \ensuremath{(\textcolor{PIKcyanDark2}{Val},\textcolor{PIKgreenDark2}{meas})} is an
object of
the Eilenberg-Moore category associated with the monad \ensuremath{\textcolor{PIKcyanDark2}{M}}. The third
condition requires the map \ensuremath{(\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} )} to be an \ensuremath{\textcolor{PIKcyanDark2}{M}}-algebra homomorphism
-- a structure preserving map -- for arbitrary values \ensuremath{\textcolor{PIKgreenDark2}{v}}.
This perspective allows us to use existing knowledge about monad
algebras as a first criterion for choosing measures. For example, the
Eilenberg-Moore algebras of the list monad are monoids -- this
implicitly played a role in the examples we considered
above. \cite{DBLP:journals/tcs/Jacobs11} shows that the algebras of
the distribution monad for probability distributions with finite
support correspond to convex sets. Interestingly, convex sets play an
important role in the theory of optimal control
\citep{bertsekas2003convex}.
\paragraph*{Measures for the list monad.} \hspace{0.1cm}
The knowledge that monoids are \ensuremath{\textcolor{PIKcyanDark2}{List}}-algebras suggests a generic
description of admissible measures for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}}:
Given a monoid \ensuremath{(\textcolor{PIKcyanDark2}{Val}, \mathbin{\odot} ,\textcolor{PIKgreenDark2}{neutr})}, we can prove that monoid
homomorphisms of the form \ensuremath{\textcolor{PIKgreenDark2}{foldr}\; \mathbin{\odot} \;\textcolor{PIKgreenDark2}{neutr}} fulfil the three conditions,
if \ensuremath{ \mathbin{\oplus} } distributes over $\odot$ on the left. That is, for \ensuremath{\textcolor{PIKgreenDark2}{meas}\mathrel{=}\textcolor{PIKgreenDark2}{foldr}\; \mathbin{\odot} \;\textcolor{PIKgreenDark2}{neutr}} the three conditions can be proven from
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{23}{@{}>{\hspre}c<{\hspost}@{}}%
\column{23E}{@{}l@{}}%
\column{26}{@{}>{\hspre}l<{\hspost}@{}}%
\column{43}{@{}>{\hspre}c<{\hspost}@{}}%
\column{43E}{@{}l@{}}%
\column{47}{@{}>{\hspre}l<{\hspost}@{}}%
\column{70}{@{}>{\hspre}c<{\hspost}@{}}%
\column{70E}{@{}l@{}}%
\column{73}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{odotNeutrRight}{}\<[23]%
\>[23]{} \mathop{:} {}\<[23E]%
\>[26]{}(\textcolor{PIKgreenDark2}{l} \mathop{:} \textcolor{PIKcyanDark2}{Val}){}\<[43]%
\>[43]{} \to {}\<[43E]%
\>[47]{}\textcolor{PIKgreenDark2}{l} \mathbin{\odot} \textcolor{PIKgreenDark2}{neutr}{}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\>[73]{}\textcolor{PIKgreenDark2}{l}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{odotNeutrLeft}{}\<[23]%
\>[23]{} \mathop{:} {}\<[23E]%
\>[26]{}(\textcolor{PIKgreenDark2}{r} \mathop{:} \textcolor{PIKcyanDark2}{Val}){}\<[43]%
\>[43]{} \to {}\<[43E]%
\>[47]{}\textcolor{PIKgreenDark2}{neutr} \mathbin{\odot} \textcolor{PIKgreenDark2}{r}{}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\>[73]{}\textcolor{PIKgreenDark2}{r}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{odotAssociative}{}\<[23]%
\>[23]{} \mathop{:} {}\<[23E]%
\>[26]{}(\textcolor{PIKgreenDark2}{l},\textcolor{PIKgreenDark2}{v},\textcolor{PIKgreenDark2}{r} \mathop{:} \textcolor{PIKcyanDark2}{Val}){}\<[43]%
\>[43]{} \to {}\<[43E]%
\>[47]{}\textcolor{PIKgreenDark2}{l} \mathbin{\odot} (\textcolor{PIKgreenDark2}{v} \mathbin{\odot} \textcolor{PIKgreenDark2}{r}){}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\>[73]{}(\textcolor{PIKgreenDark2}{l} \mathbin{\odot} \textcolor{PIKgreenDark2}{v}) \mathbin{\odot} \textcolor{PIKgreenDark2}{r}{}\<[E]%
\\
\>[3]{}\textcolor{PIKgreenDark2}{oplusOdotDistrLeft}{}\<[23]%
\>[23]{} \mathop{:} {}\<[23E]%
\>[26]{}(\textcolor{PIKgreenDark2}{n},\textcolor{PIKgreenDark2}{l},\textcolor{PIKgreenDark2}{r} \mathop{:} \textcolor{PIKcyanDark2}{Val}){}\<[43]%
\>[43]{} \to {}\<[43E]%
\>[47]{}\textcolor{PIKgreenDark2}{n} \mathbin{\oplus} (\textcolor{PIKgreenDark2}{l} \mathbin{\odot} \textcolor{PIKgreenDark2}{r}){}\<[70]%
\>[70]{}\mathrel{=}{}\<[70E]%
\>[73]{}(\textcolor{PIKgreenDark2}{n} \mathbin{\oplus} \textcolor{PIKgreenDark2}{l}) \mathbin{\odot} (\textcolor{PIKgreenDark2}{n} \mathbin{\oplus} \textcolor{PIKgreenDark2}{r}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
Neutrality of \ensuremath{\textcolor{PIKgreenDark2}{neutr}} on the right is needed for \ensuremath{\textcolor{PIKgreenDark2}{measPureSpec}},
while \ensuremath{\textcolor{PIKgreenDark2}{measJoinSpec}} follows from neutrality on the left and
the associativity of \ensuremath{ \mathbin{\odot} }. The algebra morphism condition on \ensuremath{(\textcolor{PIKgreenDark2}{v} \mathbin{\oplus} )}
is provable from the distributivity of \ensuremath{ \mathbin{\oplus} } over \ensuremath{ \mathbin{\odot} } and again
neutrality of \ensuremath{\textcolor{PIKgreenDark2}{neutr}} on the right.
If, moreover, \ensuremath{ \mathbin{\odot} } is monotone with respect to \ensuremath{ \,\sqsubseteq\, },
\begin{hscode}\SaveRestoreHook
\column{B}{@{}>{\hspre}l<{\hspost}@{}}%
\column{3}{@{}>{\hspre}l<{\hspost}@{}}%
\column{13}{@{}>{\hspre}c<{\hspost}@{}}%
\column{13E}{@{}l@{}}%
\column{16}{@{}>{\hspre}l<{\hspost}@{}}%
\column{E}{@{}>{\hspre}l<{\hspost}@{}}%
\>[3]{}\textcolor{PIKgreenDark2}{odotMon}{}\<[13]%
\>[13]{} \mathop{:} {}\<[13E]%
\>[16]{}\{\mskip1.5mu \textcolor{PIKgreenDark2}{a},\textcolor{PIKgreenDark2}{b},\textcolor{PIKgreenDark2}{c},\textcolor{PIKgreenDark2}{d} \mathop{:} \textcolor{PIKcyanDark2}{Val}\mskip1.5mu\} \to \textcolor{PIKgreenDark2}{a} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{b} \to \textcolor{PIKgreenDark2}{c} \,\sqsubseteq\, \textcolor{PIKgreenDark2}{d} \to (\textcolor{PIKgreenDark2}{a} \mathbin{\odot} \textcolor{PIKgreenDark2}{c}) \,\sqsubseteq\, (\textcolor{PIKgreenDark2}{b} \mathbin{\odot} \textcolor{PIKgreenDark2}{d}){}\<[E]%
\ColumnHook
\end{hscode}\resethooks
then we can also prove \ensuremath{\textcolor{PIKgreenDark2}{measMonSpec}} using the transitivity of \ensuremath{ \,\sqsubseteq\, }.
The proofs are simple and can be found in the
supplementary material to this paper.
This also illustrates how the three abstract conditions follow from
more familiar algebraic properties.
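A concrete instance of this recipe (our example, for \ensuremath{\textcolor{PIKcyanDark2}{M}\mathrel{=}\textcolor{PIKcyanDark2}{List}}): take $\mathbb{N}$ as value type, \ensuremath{ \mathbin{\odot} } the binary maximum with neutral element $0$, and \ensuremath{ \mathbin{\oplus} \mathrel{=}\mathbin{+}}. All four properties hold -- in particular $n + \max(l,r) = \max(n+l,\,n+r)$ -- so the best-case measure $\mathit{foldr}\;\max\;0$ satisfies the three conditions; and since $\max$ is monotone with respect to the usual order on $\mathbb{N}$, \ensuremath{\textcolor{PIKgreenDark2}{measMonSpec}} holds as well.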
\paragraph*{Mutual independence.}\hspace{0.1cm}
Although it does not seem surprising, it should be noted that the
three conditions are mutually independent. This can be concluded from
the counter-examples in Sec.~\ref{subsection:exAndCounterEx}: the
sum, the modified list maximum and the arithmetic average each fail
exactly one of the three conditions while satisfying the other two,
so no condition is implied by the remaining ones.
\paragraph*{Sufficient vs.\ necessary.}\hspace{0.1cm}
The three conditions are sufficient to prove the extensional equality
of the functions \ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}}. They are justified by their level
of generality and the fact that they hold for standard \ensuremath{\textcolor{PIKgreenDark2}{measures}} used
in control theory. However, we leave open the interesting question
whether these conditions are also necessary for the correctness of
monadic backward induction.
\paragraph*{Non-emptiness requirement.}\hspace{0.1cm}
Note that \ensuremath{\textcolor{PIKgreenDark2}{mv}} in the premises of \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} is required to be
non-empty.
This condition arises from a pragmatic consideration.
As an example, let us again use the list monad with \ensuremath{\textcolor{PIKcyanDark2}{Val}\mathrel{=}\mathbb{N}} and \ensuremath{ \mathbin{\oplus} \mathrel{=}\mathbin{+}}. It is not hard to see that
for any natural number \ensuremath{\textcolor{PIKgreenDark2}{n}} greater than 0 the equality
\ensuremath{\textcolor{PIKgreenDark2}{meas}\;(\textcolor{PIKgreenDark2}{map}\;(\textcolor{PIKgreenDark2}{n}\mathbin{+})\;[\mskip1.5mu \mskip1.5mu])\mathrel{=}\textcolor{PIKgreenDark2}{n}\mathbin{+}\textcolor{PIKgreenDark2}{meas}\;[\mskip1.5mu \mskip1.5mu]} must fail.
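Concretely: since $\mathit{map}\;f\;[\,] = [\,]$ for every $f$, the putative equation collapses to $\mathit{meas}\;[\,] = n + \mathit{meas}\;[\,]$, which is false for every $n > 0$, whatever value $\mathit{meas}\;[\,]$ takes.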
So, if we wish to use the standard list data type instead of
defining a custom type of non-empty lists, the only way to prove
the base case of \ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} is by contradiction with the
non-emptiness premise.
However, omitting the premise \ensuremath{\textcolor{PIKgreenDark2}{mv} \mathop{:} \textcolor{PIKcyanDark2}{NotEmpty}} would not prevent us
from generically proving the correctness result of
Section~\ref{section:valval} -- it would even simplify matters as it
would spare us reasoning about preservation of non-emptiness.
But it would implicitly restrict the class of monads that can be used
to instantiate \ensuremath{\textcolor{PIKcyanDark2}{M}}. For example, we have seen above that
\ensuremath{\textcolor{PIKgreenDark2}{measPlusSpec}} is not provable for the empty list without the
non-emptiness premise and we would therefore need to resort to a custom
type of non-empty lists instead.
The price to pay for including the non-emptiness premise is
the additional condition \ensuremath{\textcolor{PIKgreenDark2}{nextNotEmpty}} on the transition function
\ensuremath{\textcolor{PIKgreenDark2}{next}} that was already stated in Sec.~\ref{subsection:wrap-up}.
Moreover, we have to postulate non-emptiness preservation laws for the
monad operations (Appendix~\ref{appendix:monadLaws}) and to prove an
additional lemma about the preservation of non-emptiness
(Appendix~\ref{appendix:lemmas}).
Conceptually, it might seem cleaner to omit the non-emptiness
condition: In this case, the remaining conditions would only concern
the interaction between the monad, the measure, the type of values and the
binary operation \ensuremath{ \mathbin{\oplus} }. However, the non-emptiness preservation laws seem
less restrictive with respect to the monad. In particular, for our
above example of ordinary lists they hold (the relevant proofs can be
found in the supplementary material).
Thus we have opted for explicitly restricting the \ensuremath{\textcolor{PIKgreenDark2}{next}} function
instead of implicitly restricting the class of monads for which the
result of Sec.~\ref{section:valval} holds.
\section{Conclusion}
\label{section:conclusion}
In this paper, we have proposed correctness criteria for monadic backward
induction and its underlying value function in the
framework for specifying and solving finite-horizon, monadic SDPs proposed
in \citep{2017_Botta_Jansson_Ionescu}.
After having shown that these criteria are not necessarily met for arbitrary
monadic SDPs, we have formulated three general compatibility conditions.
We have given a proof that monadic backward induction and its underlying value
function are correct if these conditions are fulfilled.
The main theorem has been proved via the extensional equality of
two functions: 1) the value function of Bellman's dynamic programming
\citep{bellman1957} and optimal control theory \citep{bertsekas1995,
puterman2014markov} that is also at the core of the generic
backward induction algorithm of \citep{2017_Botta_Jansson_Ionescu} and
2) the measured total reward function that specifies the objective of
decision making in monadic SDPs: the maximisation of a measure of the
sum of the rewards along the trajectories rooted at the state
associated with the first decision.
Our contribution to verified optimal decision making is twofold: On the
one hand, we have implemented a machine-checked generalisation of the
semi-formal results for deterministic and stochastic SDPs
discussed in \citep[Prop.~1.3.1]{bertsekas1995} and
\citep[Theorem~4.5.1.c]{puterman2014markov}.
As a consequence, we now have a provably correct method for solving
deterministic and stochastic sequential decision problems with their
canonical measure functions.
On the other hand, we have identified three general conditions that are
sufficient for the equivalence between the two functions and thus the
correctness result to hold.
The first two conditions are natural compatibility conditions
between the measure of uncertainty \ensuremath{\textcolor{PIKgreenDark2}{meas}} and the monadic operations
associated with the uncertainty monad \ensuremath{\textcolor{PIKcyanDark2}{M}}. The third condition is a
distributivity principle concerning the relationship between \ensuremath{\textcolor{PIKgreenDark2}{meas}},
the functorial map associated with \ensuremath{\textcolor{PIKcyanDark2}{M}} and
the rule for adding rewards \ensuremath{ \mathbin{\oplus} }. All three conditions have a
straightforward category-theoretical interpretation in terms of
Eilenberg-Moore algebras \citep[ch.~VI.2]{maclane}.
As discussed in Sec.~\ref{section:discussion}, the three
conditions are independent and have non-trivial implications for the
measure and the addition function that cannot be derived from the
monotonicity condition on \ensuremath{\textcolor{PIKgreenDark2}{meas}} already imposed in
\citep{ionescu2009, 2017_Botta_Jansson_Ionescu}.
A consequence of this contribution is more flexibility:
We can now compute verified solutions of stochastic sequential
decision problems in which the
measure of uncertainty is different from the expected value
measure. This is important for applications in which the goal of
decision making is, for example, to maximise the value of
worst-case outcomes.
To the best of our knowledge, the formulation of the compatibility
conditions and the proof of the equivalence between the two value
functions are novel results.
The latter can be employed in a wider context than the one that has
motivated our study: in many practical problems in science and
engineering, the computation of optimal policies via backward induction
(leaving aside brute-force or gradient methods) is simply not feasible.
In these problems one often still needs to generate, evaluate and
compare different policies, and our result shows under which conditions
such evaluation can safely be done via the ``fast'' value function \ensuremath{\textcolor{PIKgreenDark2}{val}}
of standard control theory.
Finally, our contribution is an application of verified, literate
programming to optimal decision making: the sources of this document
have been written in literate Idris and are available at
\citep{IdrisLibsValVal}, where the reader can also find the bare code
and some examples. Although the development has been carried out in
Idris, it should be readily reproducible in other implementations of
type theory like Agda or Coq.
\section*{Acknowledgements} \label{section:acknowledgements}
The work presented in this paper was motivated by a remark of Marina
Mart{\'i}nez Montero, who raised the question of the equivalence between
\ensuremath{\textcolor{PIKgreenDark2}{val}} and \ensuremath{\textcolor{PIKgreenDark2}{val'}} (and, thus, of the correctness of the
\bottaetal framework) during an introduction to verified decision making
that the authors gave at UCL (Université catholique de Louvain) in
2019. We are especially thankful to Marina for that question!
We are grateful to Jeremy Gibbons, Christoph Kreitz, Patrik Jansson, Tim
Richter and to the JFP editors and reviewers, whose comments and
recommendations have led to significant improvements of the original
manuscript.
A very special thanks goes to the anonymous reviewer who has suggested
both a more straightforward proof of the \ensuremath{\textcolor{PIKgreenDark2}{val}}-\ensuremath{\textcolor{PIKgreenDark2}{val'}} equality and,
crucially, weaker conditions on the measure function for the
result to hold. This extends the applicability of the
\bottaetal framework for verified decision making to a wider class of
problems than our original conditions allowed.
The work presented in this paper heavily relies on free software, among
others on Coq, Idris, Agda, GHC, git, vi, Emacs, \LaTeX\ and on the
FreeBSD and Debian GNU/Linux operating systems. It is our pleasure to
thank all developers of these excellent products.
This is TiPES contribution No 37. This project has received funding from
the European Union’s Horizon 2020 research and innovation programme
under grant agreement No 820970 (TiPES
--Tipping Points in the Earth System, \citeyear{TiPES::Website}).
\subsection*{Conflicts of Interest}
None.
\bibliographystyle{jfplike}
| {
"timestamp": "2021-09-13T02:19:25",
"yymm": "2008",
"arxiv_id": "2008.02143",
"language": "en",
"url": "https://arxiv.org/abs/2008.02143",
"abstract": "In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as solution method for deterministic and stochastic SDPs.Botta, Jansson and Ionescu propose a generic framework for finite horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards.In the present paper we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore-algebra. They hold in familiar settings like those of deterministic or stochastic SDPs but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework.Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material.",
"subjects": "Logic in Computer Science (cs.LO)",
"title": "On the correctness of monadic backward induction",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517475646369,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.708960628746152
} |
https://arxiv.org/abs/2111.00630 | Long time decay and asymptotics for the complex mKdV equation | We study the asymptotics of the complex modified Korteweg-de Vries equation\begin{equation*} \partial_t u + \partial_x^3 u = \pm |u|^2 \partial_x u \end{equation*} In the real valued case, it is known that solutions with small, localized initial data exhibit modified scattering for $|x| \geq t^{1/3}$, and behave self-similarly for $|x| \leq t^{1/3}$. We prove that the same asymptotics hold for complex mKdV. The major difficulty in the complex case is that the nonlinearity cannot be expressed as a derivative, which makes the low-frequency dynamics harder to control. To overcome this difficulty, we introduce the decomposition $u = S + w$, where $S$ is a self-similar solution with the same mean as $u$ and $w$ is a remainder that has better decay. By using the explicit expression for $S$, we are able to get better low-frequency behavior for $u$ than we could from dispersive estimates alone. | \section{Introduction}
We study the complex modified Korteweg-de Vries (mKdV) equation
\begin{equation}\label{eqn:cmkdv}
\partial_t u + \partial_x^3 u = \pm |u|^2 \partial_x u
\end{equation}
This equation appears as a model in nonlinear optics, where it models higher order corrections for waves travelling in a nonlinear medium~\cite{rodriguezStandardEmbeddedSolitons2003,heFewcycleOpticalRogue2014,biswasOpticalSolitonsPresence2018}. It also describes higher order effects in vortex filament evolution~\cite{fukumotoThreedimensionalDistortionsVortex1991}. The complex mKdV equation is completely integrable, and has infinitely many conserved quantities, the first few of which are the momentum, angular twist, and energy~\cite{ancoTravelingWavesConservation2012}:
\begin{equation*}
\mathcal{P}(u) = \int |u|^2\;dx,\qquad\qquad \mathcal{W}(u) = \int |u|^2\arg(u)_x\;dx,\qquad\qquad \mathcal{E}(u) = \int \frac{1}{2}|\partial_x u|^2 \mp \frac{1}{4}|u|^4\;dx
\end{equation*}
\subsection{Known results}
The smoothing and maximal function estimates developed by Kenig, Ponce, and Vega in~\cite{kenigWellposednessScatteringResults1993,kenigGeneralizedKortewegdeVries1989} can be used to show that~\eqref{eqn:cmkdv} is locally wellposed in $H^s$, $s \geq \frac{1}{4}$. For real initial data, Colliander, Keel, Staffilani, Takaoka, and Tao show in~\cite{collianderSharpGlobalWellposedness2003} that for $s > 1/4$ solutions exist globally, and global wellposedness for real initial data at the $s = 1/4$ endpoint was shown independently by Kishimoto in~\cite{kishimotoWellposednessCauchyProblem2009} and by Guo in~\cite{guoGlobalWellposednessKorteweg2009}. If we require uniformly continuous dependence on the initial data, then the $s = 1/4$ endpoint is sharp, see~\cite{kenigIllposednessCanonicalDispersive2001} for the focusing case and~\cite{christAsymptoticsFrequencyModulation2003} for the defocusing case. Local wellposedness has also been shown in the weighted Sobolev spaces $H^{s} \cap |x|^{-m}L^2$ for $s \geq \max(2m, 1/4)$, in~\cite{katoCauchyProblemGeneralized1983,fonsecaPersistencePropertiesFractional2015}. For the equation set on the torus, wellposedness was shown by Chapouto in a range of Fourier-Lebesgue spaces~\cite{chapoutoRemarkWellposednessModified2021,chapoutoRefinedWellPosednessResult2021}. Relaxing the requirement of uniformly continuous dependence on the initial data, Harrop-Griffiths, Killip, and Visan used the complete integrability of the equation to prove a weaker form of wellposedness in $H^s$ for $s > - 1/2$ in~\cite{harrop-griffithsSharpWellposednessCubic2020}. They also show that for $s \leq -1/2$, the equation exhibits instantaneous norm inflation, so no wellposedness result is possible. Outside the scale of $H^s$ spaces, Gr\"unrock proved in~\cite{grunrockImprovedLocalWellposedness2004} that the equation is locally wellposed for real-valued initial data in the spaces $\widehat{H^s_r}$ defined by the norms $\lVert u \rVert_{\widehat{H^s_r}} := \lVert \jBra{\xi}^s \hat{u} \rVert_{L^{r'}}$ for the parameter range $\frac{4}{3} < r \leq 2$, $s \geq \frac{1}{2} - \frac{1}{2r}$. The parameter range was later improved by Gr\"unrock and Herr in~\cite{grunrockLocalWellposednessModified2009} to $1 < r \leq 2$, $s \geq \frac{1}{2} - \frac{1}{2r}$, which has the scaling-critical space $\widehat{H}^0_1 = \mathcal{F}L^\infty$ as the (excluded) endpoint. By using an approximation argument, Correia, C\^ote, and Vega were able to show a form of wellposedness in a critical space contained in $\mathcal{F}L^\infty$ in~\cite{correiaSelfSimilarDynamicsModified2020}.
The long time asymptotics of the real-valued mKdV equation have received a great deal of study. The first complete results were given in~\cite{deiftSteepestDescentMethod1993}, where Deift and Zhou used the complete integrability of the equations to obtain asymptotic formulas using the inverse scattering transform. The first results not depending on complete integrability were derived by Hayashi and Naumkin in~\cite{hayashiLargeTimeBehavior1999,hayashiModifiedKortewegVries2001}, where it was shown that solutions starting with small, localized data decay at the linear rate and exhibit {Painlev\'{e}} asymptotics in the self-similar region $|x| \leq t^{1/3}$. These results were extended to proving modified scattering in the region $x \leq -t^{1/3}$ and more rapid decay in the region $x \geq t^{1/3}$ by Harrop-Griffiths in~\cite{harrop-griffithsLongTimeBehavior2016} using the method of testing with wave packets developed by Ifrim and Tataru in~\cite{ifrimGlobalBoundsCubic2015,ifrimTwoDimensionalWater2016}. These results were also proved independently by Germain, Pusateri, and Rousset in~\cite{germainAsymptoticStabilitySolitons2016} using the method of space time resonances, and it was further shown that solitons are stable under small, localized perturbations, and that for long times the perturbation has the same asymptotics as in the small data case. More recently, Correia, C\^ote, and Vega extended these results in~\cite{correiaSelfSimilarDynamicsModified2020} by allowing the solution to have a jump discontinuity at $0$ in Fourier space, which corresponds to studying the dynamics of vortex filaments with corners, see~\cite{perelmanSelfsimilarPlanarCurves2007}. For the complex equation with nonlinearity $\partial_x(|u|^2 u)$, asymptotic results in the region $x/t < 0$ were given in~\cite{zhangSpectralAnalysisLongtime2022} using the Deift-Zhou steepest descent method. Results for the Sasa-Satsuma mKdV equation (nonlinearity $2|u|^2 \partial_x u + u \partial_x |u|^2$) are given for $x/t > 0$ in~\cite{liuLongtimeAsymptoticsSasa2019} and in the self-similar region $|x| \leq t^{1/3}$ in~\cite{huang_asymptotics_2020}. Except for the results in the region $x/t < 0$ given in~\cite{huangLongtimeAsymptoticHirota2015} using steepest descent, the asymptotics of the complex mKdV equation~\eqref{eqn:cmkdv} do not appear to have been studied.
Asymptotic results for the mKdV equation hinge on the decay properties of solutions to the linear equation
\begin{equation*}
\left\{\begin{array}{l}
\partial_t u + \partial_x^3 u = 0\\
u(t=0) = u_0
\end{array}\right.
\end{equation*}
The fundamental solution is given in terms of the Airy function by
\begin{equation*}
F(x,t) = (3t)^{-1/3}\Ai((3t)^{-1/3}x)
\end{equation*}
If $u_0$ is localized and regular, then for $t \geq 1$ we have
\begin{equation}\label{eqn:linear-estimates}
|u(x,t)| \lesssim t^{-1/3} \jBra{x/t^{1/3}}^{-1/4},\qquad\qquad |\partial_x u(x,t)| \lesssim t^{-2/3} \jBra{x/t^{1/3}}^{1/4}
\end{equation}
In particular, $|u||\partial_x u| \lesssim t^{-1}$, which suggests that the problem~\eqref{eqn:cmkdv} (like its real-valued counterpart) should be critical with respect to scattering.
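This can be made slightly more precise by a standard heuristic (our sketch, not from the paper): the cubic term acts on $u$ with a coefficient of size $|u||\partial_x u|$, so~\eqref{eqn:linear-estimates} gives the ODE-type bound
\begin{equation*}
\big| |u|^2 \partial_x u \big| = \big(|u|\,|\partial_x u|\big)\,|u| \lesssim t^{-1} |u|
\end{equation*}
Since $\int_1^T t^{-1}\,dt = \log T$ diverges, the nonlinearity is borderline non-integrable: it is too strong to be treated as a short-range perturbation, but weak enough that one expects only logarithmic phase corrections, as in~\eqref{eqn:negative-x-asymp} below.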
\subsection{Main results}
We will consider the equation with data prescribed at $t=1$:
\begin{equation}\label{eqn:cmkdv-t-1}
\left\{\begin{array}{c}
\partial_t u + \partial_x^3 u = \pm |u|^2 \partial_x u\\
u(t=1) = e^{-\partial_x^3}u_*
\end{array}\right.
\end{equation}
We will prove the following result:
\begin{thm}\label{thm:main-theorem}
There exists an $\epsilon_0 > 0$ such that if $\epsilon < \epsilon_0$, and $u_* \in H^2$ satisfies
\begin{equation}\label{eqn:main-thm-initial-bdd}
\lVert \hat{u}_* \rVert_{L^\infty} + \lVert x u_* \rVert_{L^2} \leq \epsilon
\end{equation}
then the solution $u$ to~\eqref{eqn:cmkdv-t-1} exists on $[1,\infty)$ and has the following asymptotics:
\paragraph{\indent For $x \geq t^{1/3}$,} we have rapid decay of the form
\begin{equation}\label{eqn:positive-x-asymp}
|u(x,t)| \lesssim \epsilon t^{-1/3} (x t^{-1/3} )^{-3/4}
\end{equation}
\paragraph{\indent For $x \leq -t^{1/3}$,} we have modified scattering
\begin{equation}\label{eqn:negative-x-asymp}\begin{split}
u(x,t) =& \frac{1}{\sqrt{12 t \xi_0}}\sum_{\nu \in \{1, -1\}}\exp\left(-2\nu it\xi_0^3 + \nu i\frac{\pi}{4} \pm i \nu \int_1^t \frac{|\hat{f}(\nu\xi_0,s)|^2}{s}\;ds\right)\hat{f}_\infty(\nu\xi_0)\\
&\qquad + O(\epsilon t^{-1/3} (|x| t^{-1/3})^{-9/28})
\end{split}\end{equation}
where $\xi_0 = \sqrt{\frac{-x}{3t}}$ and $\hat{f}_\infty$ is a bounded function of $\xi$.
\paragraph{\indent For $|x| \leq t^{1/3 + 4\beta}$,} we have self-similar behavior
\begin{equation}\label{eqn:small-x-asymptotics}
u(x,t) = S(x,t;\alpha) + O(\epsilon t^{-1/3-\beta})
\end{equation}
where $\alpha$ is some complex number with $|\alpha| \lesssim \epsilon$, $\beta = \frac{1}{6} - C\epsilon^2$ for some constant $C$, and $S$ is a self-similar solution; that is, $S(x,t;\alpha) = t^{-1/3} \sigma(x/t^{1/3};\alpha)$, where $\sigma$ is a bounded solution of
\begin{equation}\label{eqn:self-sim-def-intro}
\left\{\begin{array}{c}
\sigma'' - \frac{1}{3}x \sigma = \pm \frac{1}{3}|\sigma|^2\sigma\\
\hat{\sigma}(0) = \alpha
\end{array}\right.
\end{equation}
\end{thm}
\begin{rmk}
Equation~\eqref{eqn:self-sim-def-intro} is nothing more than a complex, phase-rotation invariant version of the Painlev\'{e} II equation,
\begin{equation}\label{eqn:real-painleve-II}
\left\{\begin{array}{c}
\tau'' - x \tau = \pm \tau^3\\
\hat{\tau}(0) = \alpha \in \mathbb{R}
\end{array}\right.
\end{equation}
It is known (see~\cite{deiftAsymptoticsPainleveII1995,hastingsBoundaryValueProblem1980,correiaAsymptoticsFourierSpace2020}) that~\eqref{eqn:real-painleve-II} has a unique bounded solution for $|\alpha| < 1$, and this fact will be used in~\Cref{sec:self-sim} to prove that~\eqref{eqn:self-sim-def-intro} has a unique, bounded solution.
\end{rmk}
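Indeed, the reduction to the real case is elementary (our sketch): equation~\eqref{eqn:self-sim-def-intro} is invariant under the phase rotations $\sigma \mapsto e^{i\theta}\sigma$, since $|e^{i\theta}\sigma|^2 e^{i\theta}\sigma = e^{i\theta}|\sigma|^2\sigma$. Writing $\alpha = e^{i\theta}|\alpha|$ and letting $\tau$ be the bounded solution of the real-coefficient version of~\eqref{eqn:self-sim-def-intro} with data $|\alpha|$, the function $\sigma = e^{i\theta}\tau$ is then a bounded solution of~\eqref{eqn:self-sim-def-intro} with $\hat{\sigma}(0) = \alpha$.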
\begin{rmk}\label{rmk:regularity-assumption-main-thm}
Note that the assumption that $u_* \in H^2$ is only used to give local wellposedness for the equation (using the $H^2 \cap x^{-1}L^2$ theory from~\cite{katoCauchyProblemGeneralized1983}). In particular, it plays no role in the a priori estimates which give us the asymptotics, and we do not need any smallness assumption on the $H^2$ norm of $u_*$.
If we could prove local wellposedness for~\eqref{eqn:cmkdv-t-1} with $u_* \in \mathcal{F}L^\infty \cap x^{-1}L^2$, then we could drop the requirement that $u_* \in H^2$ entirely, since our arguments would then imply that the local solution can be extended to a global one. However, proving local wellposedness in this space is not straightforward: the quasilinear behavior of the problem appears to preclude the use of a fixed-point argument, and smooth functions are not dense in $\mathcal{F}L^\infty$, which makes compactness arguments more complicated. It is possible that local existence could be proved by arguing along the lines of~\cite{correiaSelfSimilarDynamicsModified2020}; however, nontrivial modifications outside the scope of this paper would be needed to account for the different algebraic structure of the nonlinearity in the complex case.
\end{rmk}
It might appear somewhat unnatural to prescribe initial data in this form. However, by combining~\Cref{thm:main-theorem} with the weighted local wellposedness result in~\cite{katoCauchyProblemGeneralized1983} we obtain a result with initial conditions given at $t = 0$:
\begin{cor}\label{thm:main-thm-u-0}
Let $u$ solve
\begin{equation*}
\left\{\begin{array}{c}
\partial_t u + \partial_x^3 u = \pm |u|^2 \partial_x u\\
u(t=0) = u_0
\end{array}\right.
\end{equation*}
Then, there exists an $\epsilon_0 > 0$ such that for all $\epsilon < \epsilon_0$, if
\begin{equation*}
\lVert x u_0 \rVert_{L^2} + \lVert u_0 \rVert_{H^2} \leq \epsilon
\end{equation*}
then the solution $u$ has the same asymptotics as in~\Cref{thm:main-theorem}.
\end{cor}
\begin{proof}
By~\cite[Theorem 8.1]{katoCauchyProblemGeneralized1983}, for $\epsilon$ small enough there exists a local solution $u \in C([0,1], H^2 \cap |x|^{-1} L^2)$ with $\sup_{0 \leq t \leq 1} \lVert u(t) \rVert_{H^2} + \lVert xu(t) \rVert_{L^2} \lesssim \epsilon$. Now, let $u_* = e^{\partial_x^3} u(1)$. Since the linear propagator is unitary on $L^2$ Sobolev spaces, $u_* \in H^2$. Moreover, by using the identity $x e^{t\partial_x^3} = e^{t\partial_x^3} (x - 3t \partial_x^2)$, we see that
\begin{equation*}\begin{split}
\lVert \jBra{x} u_* \rVert_{L^2} \leq& \lVert u_* \rVert_{L^2} + \lVert x e^{t\partial_x^3} u(1) \rVert_{L^2}\\
\lesssim& \lVert u(1) \rVert_{L^2} + \lVert (x - 3 \partial_x^2) u(1) \rVert_{L^2}
\lesssim \epsilon
\end{split}\end{equation*}
This controls~\eqref{eqn:main-thm-initial-bdd} by the Sobolev-Morrey embedding, so \Cref{thm:main-theorem} gives the result.
\end{proof}
\begin{rmk}
As discussed in~\Cref{rmk:regularity-assumption-main-thm}, the role of the $H^2$ hypothesis in~\Cref{thm:main-thm-u-0} is largely to allow us to use the weighted local wellposedness theory of~\cite{katoCauchyProblemGeneralized1983}. In this case, however, it is much less clear that we could obtain a wellposedness theorem in the scaling critical space $\mathcal{F}L^\infty \cap x^{-1}L^2$, because the dispersive decay estimates degenerate at $t = 0$. Even in the real-valued case, very little is known about wellposedness on $[0,1]$ with initial data in $\mathcal{F}L^\infty \cap x^{-1}L^2$: see~\cite{correiaSelfSimilarDynamicsModified2020}.
\end{rmk}
\begin{rmk}
Although the arguments in this paper are given for the nonlinearity $\pm |u|^2 \partial_x u$, with slight modifications they apply to nonlinearities of the form $a |u|^2 \partial_x u + b u^2 \partial_x \overline{u}$ for $a,b$ real. If $a = 3b$, we obtain the Sasa-Satsuma equation, whose asymptotics were studied in~\cite{liu_long-time_2019,huang_asymptotics_2020}.
\end{rmk}
\subsection{Main difficulties}
The main new difficulty for complex mKdV, compared to real-valued mKdV, is the unfavorable location of the derivative in the nonlinearity.
The first difficulty comes when we try to control $u$ in weighted spaces. Our argument requires us to control $Lu$ in $L^2$, where
\begin{equation*}
L = e^{-t\partial_x^3}xe^{t\partial_x^3} = x - 3t\partial_x^2
\end{equation*}
Prior works for real mKdV estimate $Lu$ by relating it to the scaling transform of $u$:
\begin{equation*}
\Lambda u = (1 + x\partial_x + 3t\partial_t)u
\end{equation*}
via the identity
\begin{equation*}
Lu = \partial_x^{-1} \Lambda u \mp 3tu^3
\end{equation*}
which holds for solutions of real mKdV. Since the nonlinearity for real mKdV can be written as a derivative, we can integrate the equation for $\Lambda u$ and perform an energy estimate to get control of $\partial_x^{-1} \Lambda u$. This strategy fails completely for complex mKdV: the relationship between $Lu$ and $\Lambda u$ now reads
\begin{equation*}
Lu = \partial_x^{-1} \Lambda u \mp 3t \partial_x^{-1} \left( |u|^2 \partial_x u\right)
\end{equation*}
and the last term cannot be bounded in $L^2$. Thus, we must use a different approach.
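For the reader's convenience, here is a sketch of where these identities come from. Writing $\Lambda u = (1 + x\partial_x + 3t\partial_t)u$ and $Lu = (x - 3t\partial_x^2)u$, a direct computation gives
\begin{equation*}
\partial_x (Lu) = u + x \partial_x u - 3t \partial_x^3 u = \Lambda u - 3t\left(\partial_t u + \partial_x^3 u\right)
\end{equation*}
so any solution of $\partial_t u + \partial_x^3 u = N(u)$ satisfies $Lu = \partial_x^{-1} \Lambda u - 3t\, \partial_x^{-1} N(u)$. Taking $N(u) = \pm |u|^2 \partial_x u$ recovers the identity above for complex mKdV; in the real-valued case $\partial_x^{-1}(u^2 \partial_x u) = u^3/3$, which is how the cubic term arises (the constant in front of it depends on the chosen normalization of the nonlinearity).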
Our argument, very roughly speaking, amounts to performing an energy estimate on $Lu$ directly. The fact that we have a derivative in the nonlinearity means we must exploit some cancellation (via integration by parts) to avoid having to estimate terms containing a $\partial_x Lu$ factor. This is analogous to the situation for the $H^k$ energy estimates for mKdV: see~\cite[Chapter 4]{taoNonlinearDispersiveEquations2006}.
The other (and more major) problem appears when we try to establish bounds on $\hat{u}$ at low frequencies. Writing $f = e^{t\partial_x^3} u$ for the linear profile of $u$, we see that
\begin{equation*}
\partial_t \hat f(\xi,t) = \pm \frac{i}{2\pi} \int e^{it\phi} \hat{f}(\eta,t) \overline{\hat{f}(-\sigma,t)} (\xi - \eta - \sigma)\hat{f}(\xi - \eta - \sigma,t)\;d\eta d\sigma
\end{equation*}
for $\phi(\xi, \eta, \sigma) = \xi^3 - (\xi-\eta-\sigma)^3 - \eta^3 - \sigma^3 = 3(\eta + \sigma)(\xi - \eta)(\xi - \sigma)$. When $|\xi| \geq t^{-1/3}$, we can perform a stationary phase estimate on the nonlinear term to find that $\partial_t \hat{f}$ satisfies a perturbed Hamiltonian ODE, which we can integrate to get bounds for $\hat{f}$. However, the stationary phase estimate degenerates at low frequencies, which prevents us from applying this argument for $|\xi| < t^{-1/3}$. In the real-valued case, the failure of the stationary phase estimates is compensated by the favorable position of the derivative, allowing us to obtain global bounds using only dispersive estimates. In the complex valued case, the derivative structure is less favorable, and the dispersive estimates for $u$ are only strong enough to prove $|\partial_t \hat{f}| \lesssim \epsilon^3 t^{-1}$, which is insufficient to prove global bounds.
To overcome this second difficulty, we perform a modulation argument. We write $u = S + w$, where $S$ is a self-similar solution to~\eqref{eqn:cmkdv} satisfying $\hat{S}(0) = \hat{u}(0)$ and $w$ is a remainder. This decomposition is advantageous: the fact that $S$ is self-similar implies that $|S|^2 \partial_x S$ is a derivative, giving additional cancellation at low frequencies. Moreover, the Fourier space estimates of Correia, C\^ote, and Vega in~\cite{correiaAsymptoticsFourierSpace2020} can be combined with our estimates on the linear propagator to show that $S$ obeys the same decay estimates as $u$. In particular, $S$ and $u$ have matched asymptotics for $|x| \leq t^{1/3}$ (which corresponds to $|\xi| \leq t^{-1/3}$ in frequency space), so low-frequency projections of $w$ obey stronger decay bounds than either $u$ or $S$. By writing
\begin{equation*}\begin{split}
\partial_t \hat{f}(0,t) =& \pm\frac{1}{\sqrt{2\pi}} \int |u|^2 \partial_x u \;dx\\
=& \pm\frac{1}{\sqrt{2\pi}} \int |u|^2 \partial_x u - |S|^2 \partial_x S\;dx\\
=& \pm \frac{1}{\sqrt{2\pi}}\int |u|^2 \partial_x w + \left(2\Re(u \overline{w}) - |w|^2\right) \partial_x S\;dx
\end{split}\end{equation*}
and using this improved decay, we can prove that $\partial_t \hat{f}(0,t)$ decays at an integrable rate, which allows us to show that $\hat{f}(\xi,t)$ is bounded for $|\xi| \lesssim t^{-1/3}$. As a bonus, we immediately get the asymptotics $u \approx S$ for $|x| \lesssim t^{1/3}$.
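To make the last step concrete: integrating the decay $|\partial_t \hat{f}(0,t)| \lesssim \epsilon^3 t^{-1-\beta}$ in time shows that $\hat{f}(0,t)$ stays bounded and converges, and then (assuming the weighted bound $\lVert \partial_\xi \hat{f} \rVert_{L^2} \lesssim \epsilon t^{1/6}$, which is part of the bootstrap) the Sobolev-Morrey embedding $\dot{H}^1 \hookrightarrow C^{0,1/2}$ gives, for $|\xi| \lesssim t^{-1/3}$,
\begin{equation*}
|\hat{f}(\xi,t)| \leq |\hat{f}(0,t)| + \lVert \partial_\xi \hat{f} \rVert_{L^2}\, |\xi|^{1/2} \lesssim \epsilon + \epsilon t^{1/6}\, t^{-1/6} \lesssim \epsilon
\end{equation*}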
This argument has some similarities with the one used by Hayashi and Naumkin in~\cite{hayashiModifiedKortewegVries2001}: indeed, the estimates we find for $w$ are largely identical to theirs. However, our argument differs from theirs in three key regards. First, we cannot estimate $Lw$ using the scaling vector field $\Lambda$, so we instead must use the method of space-time resonances to perform an energy estimate on $Lw$ directly. Second, since the mean $\hat{u}(0, t)$ varies in time, we must modulate in time rather than subtracting a fixed self-similar solution. Finally, our argument is different in terms of how the estimates on $w$ fit into the proof. In~\cite{hayashiModifiedKortewegVries2001}, the estimates on $w$ are performed after the solution has been shown to decay at the linear rate for all time, and are only necessary to obtain the asymptotics in the self-similar region. In our work, on the other hand, the estimates on $w$ are necessary in order to prove that the solution $u$ decays at the linear rate globally in time.
\subsection{Plan of the proof}
\subsubsection{Overview of the space-time resonance method}
To prove~\Cref{thm:main-theorem}, we will work within the framework of the method of space-time resonances. This method, first developed by Germain, Shatah, and Masmoudi in~\cite{germainGlobalSolutions3D2008} and independently by Gustafson, Nakanishi, and Tsai in~\cite{gustafsonScatteringTheoryGross2009}, has been used to derive improved time of existence and asymptotics for a variety of equations with dispersive character; see~\cite{germainGlobalExistenceCoupled2011, germainGlobalExistenceEulerMaxwell2014,ionescuGlobalRegularity2d2018,ionescuEinsteinKleinGordonCoupledSystem2020,cordobaGlobalSolutionsGeneralized2019,katoNewProofLongrange2011,haniScatteringZakharovSystem2013,germainGlobalSolutions3D2008,germainGlobalSolution2D2012}. The method begins by rewriting the nonlinear equation for $u$ in terms of the profile $f(t) = e^{t\partial_x^3} u(t)$:
\begin{equation}\label{eqn:cmkdv-profile-eqn}
\left\{\begin{array}{l}
\partial_t f = \pm e^{t\partial_x^3}\left(\left|e^{-t\partial_x^3} f \right|^2 \partial_x e^{-t\partial_x^3} f\right)\\
f(t=1) = u_*
\end{array}\right.
\end{equation}
which can be rewritten in mild form as
\begin{equation}\label{eqn:cmkdv-profile-eqn-mild}
\hat f(\xi,t) = \hat{u}_*(\xi) \pm \frac{i}{2\pi} \int_1^t\int e^{is\phi} \hat{f}(\eta,s) \overline{\hat{f}(-\sigma,s)} (\xi - \eta - \sigma)\hat{f}(\xi - \eta - \sigma,s)\;d\eta d\sigma\;ds
\end{equation}
where $\phi$ is the phase associated with the four wave mixing by the cubic nonlinearity:
\begin{equation}\label{eqn:4-wave-phase}
\phi(\xi, \eta, \sigma) = \xi^3 - (\xi-\eta-\sigma)^3 - \eta^3 - \sigma^3 = 3(\eta + \sigma)(\xi - \eta)(\xi - \sigma)
\end{equation}
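For completeness, the factorization in~\eqref{eqn:4-wave-phase} can be checked directly. Writing $a = \eta + \sigma$ and using $\eta^3 + \sigma^3 = a(a^2 - 3\eta\sigma)$,
\begin{equation*}
\xi^3 - (\xi - a)^3 - \eta^3 - \sigma^3 = 3\xi^2 a - 3\xi a^2 + a^3 - a(a^2 - 3\eta\sigma) = 3a\left(\xi^2 - \xi a + \eta \sigma\right) = 3(\eta + \sigma)(\xi - \eta)(\xi - \sigma)
\end{equation*}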
Roughly speaking, we would like to show that the change in $f$ (given by the integral in~\eqref{eqn:cmkdv-profile-eqn-mild}) is small in some norm that gives us the required decay estimates for $u$. Heuristically, if we imagine that $\hat{f}$ is a smooth bump function, then the integral term in~\eqref{eqn:cmkdv-profile-eqn-mild} will be dominated by the stationary points of the phase, where $\nabla_{s, \eta,\sigma} (s \phi(\xi,\eta,\sigma)) = 0$. The points where $\phi = 0$ correspond to resonances (in the classical sense of the term) in the nonlinear interaction between plane waves of frequencies $\xi-\eta-\sigma$, $\eta$, and $-\sigma$.
For dispersive PDEs, it is more natural to think in terms of \emph{wave packets} instead of plane waves. A wave packet at frequency $\xi$ is a bump function which travels at the group velocity, which for complex mKdV is $v_\xi = -3\xi^2$. Clearly, wave packets can interact over large timescales only if they have the same group velocity, and the condition $\nabla_{\eta,\sigma} \phi = 0$ is precisely what is required for three wave-packets at frequencies $\xi-\eta-\sigma$, $\eta$, and $-\sigma$ to have the same group velocities. See~\cite{germainSpacetimeResonances2010} for an expository overview of the method.
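As a quick check of the group velocity claim: a plane wave $e^{i(\xi x - \omega t)}$ solves the linear part $\partial_t u + \partial_x^3 u = 0$ exactly when $\omega(\xi) = -\xi^3$, so a wave packet concentrated at frequency $\xi$ travels at
\begin{equation*}
v_\xi = \omega'(\xi) = -3\xi^2
\end{equation*}
which is also consistent with the dispersive part of the solution living in the half-line $x < 0$.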
In our application, we will see in~\Cref{sec:linear-ests} that the decay we want in~\Cref{thm:main-theorem} follows from the estimate $\sup_{t \geq 1} \lVert f \rVert_X \lesssim \epsilon$, where the $X$ norm is defined by
\begin{equation}\label{eqn:X-def}
\lVert f \rVert_{X} = \lVert \hat f(t) \rVert_{L^\infty} + t^{-1/6}\lVert xf(t) \rVert_{L^2}
\end{equation}
Note that this norm is scale invariant. We use a bootstrap argument to show that it remains small for all time.
\subsubsection{Step 1: Stationary phase estimate for high frequencies}
Let us first consider the $L^\infty$ bound for $\hat{f}(\xi,t)$. Since this amounts to a pointwise bound, it is natural to consider $\xi$ fixed. The stationary points $\nabla_{\eta,\sigma} \phi = 0$ are then given by
\begin{align*}
(\eta_1,\sigma_1) =& (\xi,\xi)\\
(\eta_2,\sigma_2) =& (\xi,-\xi)\\
(\eta_3,\sigma_3) =& (-\xi,\xi)\\
(\eta_4,\sigma_4) =& (\xi/3,\xi/3)
\end{align*}
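These points can be read off from the factorizations
\begin{equation*}
\partial_\eta \phi = 3(\xi - \sigma)(\xi - 2\eta - \sigma), \qquad \partial_\sigma \phi = 3(\xi - \eta)(\xi - \eta - 2\sigma)
\end{equation*}
each factor vanishes on a line in the $(\eta, \sigma)$ plane, and intersecting the two lines coming from $\partial_\eta \phi = 0$ with the two lines coming from $\partial_\sigma \phi = 0$ yields exactly the four points listed above.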
A formal stationary phase calculation then shows that
\begin{equation*}
\partial_t \hat f(\xi,t) = \pm \frac{i\sgn \xi}{6t} | \hat{f}(\xi, t)|^2 \hat{f}(\xi, t) + c\,e^{\frac{8}{9}it\xi^3}\frac{\sgn \xi}{t} | \hat{f}(\xi/3, t)|^2 \hat{f}(\xi/3, t) + \{\text{error}\}
\end{equation*}
where $c$ is some constant whose exact value is unimportant. Since the second term has a highly oscillatory phase, we expect that it will not be relevant on timescales $t \geq |\xi|^{-3}$. Similarly, we expect that the error term will be higher order in $t$, and hence will not contribute significantly to the asymptotics. After discarding the oscillatory term and the error term, we are left with a Hamiltonian ODE, which we can integrate explicitly to find that
\begin{equation*}
\hat f(\xi,t) \approx \exp\left(\pm\frac{i\sgn \xi}{6} \int_1^t \frac{|\hat{f}(\xi,s)|^2}{s}\;ds\right) f_\infty(\xi)
\end{equation*}
for some bounded function $f_\infty$.
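The integration is elementary: the model ODE (after discarding the oscillatory and error terms) conserves the modulus, since
\begin{equation*}
\partial_t |\hat{f}(\xi,t)|^2 = 2\Re\left(\overline{\hat{f}}\, \partial_t \hat{f}\right) = 2\Re\left(\pm \frac{i \sgn \xi}{6t} |\hat{f}(\xi,t)|^4\right) = 0
\end{equation*}
so only the phase evolves, and solving the resulting phase equation gives the formula above.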
\subsubsection{Step 2: Modulation analysis in the self-similar region}
The above argument only applies for frequencies $|\xi| \geq t^{-1/3}$. For smaller frequencies, there is not enough oscillation to neglect the oscillating term, and the error term in the stationary phase expansion becomes unacceptably large due to the coalescence of the stationary points $(\eta_i, \sigma_i)$ as $\xi \to 0$. Using the embedding $\dot{H}^1 \to C^{0,1/2}$ and making the bootstrap assumption that $\lVert \partial_\xi \hat{f} \rVert_{L^2} = \lVert xf \rVert_{L^2} \lesssim \epsilon t^{1/6}$, we see that the problem of controlling low frequencies reduces to understanding the behavior of the zero Fourier mode. In the real-valued case, $\hat{f}(0,t)$ is conserved by the flow (and hence the low frequency bounds are immediate), but in the complex valued case,
\begin{equation*}
\partial_t \hat{f}(0,t) = \partial_t \hat{u}(0,t) = \pm \frac{1}{\sqrt{2\pi}}\int |u|^2 \partial_x u\;dx
\end{equation*}
which is not zero in general.
The main difficulty for $|\xi| \leq t^{-1/3}$ is that the low-frequency component of $u$ evolves in a genuinely nonlinear manner. By analogy with the real-valued problem, we expect $u$ to exhibit self-similar asymptotics for $|x| \leq t^{1/3}$, which corresponds to the low frequency range $|\xi| \leq t^{-1/3}$. Thus, we will attempt to show that the behavior of $u$ at low frequencies is approximately self-similar. If $S(x,t) = t^{-1/3} \sigma(x t^{-1/3})$ is a self-similar solution of~\eqref{eqn:cmkdv}, then $\sigma$ satisfies the third order ODE
\begin{equation}\label{eqn:sigma-third-order-ODE}
\partial_x^3 \sigma - \frac{1}{3}\partial_x(x\sigma) = \pm|\sigma|^2 \sigma_x
\end{equation}
In order for $S$ to be compatible with the asymptotics given in~\Cref{thm:main-theorem}, we need $\sigma$ to be bounded and lie in a certain weighted $L^2$ space. In particular, this means $\sigma$ must solve the Painlev\'e-type equation~\eqref{eqn:self-sim-def-intro}. Since the mean of $u$ changes in time, we will also need to modulate the mean of the self-similar solution. This leads us to impose the condition $\hat{\sigma}(0) = p$, which by~\cite{correiaAsymptoticsFourierSpace2020} is enough to determine $\sigma$. By examining~\eqref{eqn:sigma-third-order-ODE} and recalling that $S$ is obtained by applying the self-similar scaling to $\sigma$, we see that $|S|^2\partial_x S$ is the derivative of a function. Thus, $|S|^2\partial_x S$ has mean zero. If we choose $S$ such that $\hat{S}(0,t) = \hat{\sigma}(0) = \hat{f}(0,t)$, then this implies that
\begin{equation*}
\partial_t \hat{f}(0,t) = \pm\frac{1}{\sqrt{2\pi}} \int |u|^2 \partial_x u - |S|^2 \partial_x S\;dx = \pm \frac{1}{\sqrt{2\pi}} \int |u|^2 \partial_x w + \left(2\Re(u\overline{w}) - |w|^2\right) \partial_x S\;dx
\end{equation*}
where $w = u - S$.
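The key structural fact used here is worth recording: solving~\eqref{eqn:sigma-third-order-ODE} for the nonlinearity gives
\begin{equation*}
|\sigma|^2 \partial_x \sigma = \pm\, \partial_x\left( \partial_x^2 \sigma - \tfrac{1}{3} x \sigma \right)
\end{equation*}
so $|\sigma|^2 \partial_x \sigma$ (and hence, after applying the self-similar scaling, $|S|^2 \partial_x S$) is an exact derivative with mean zero; this is precisely the cancellation exploited in the display above.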
\subsubsection{Step 3: Weighted bounds for $w$} We now consider the difference $w$ in more detail. By definition, $w$ has mean zero, and so does $g = e^{t\partial_x^3} w$. We will show that $\lVert xg \rVert_{L^2} \lesssim \epsilon t^{1/6 - \beta}$ for $\beta$ as in~\Cref{thm:main-theorem}. (This is similar to~\cite{hayashiModifiedKortewegVries2001}, where it is shown that $Lw$ obeys better $L^2$ bounds than $Lu$.) Using this estimate, we find that $w$ has better dispersive decay than $u$, which allows us to prove a bound
\begin{equation*}
|\partial_t \hat{f}(0,t)| \lesssim \epsilon^3 t^{-1-\beta}
\end{equation*}
Integrating in time gives the boundedness for low frequencies, and shows that $u(x,t) \approx S(x,t;\alpha)$ for $|x| \lesssim t^{1/3}$ and $t$ large, where $\alpha = \lim_{t\to\infty} \hat{f}(0,t)$. Moreover, using the self-similar scaling and~\eqref{eqn:self-sim-def-intro}, it can be seen that $S$ satisfies $\lVert LS \rVert_{L^2} \sim \epsilon^3 t^{1/6}$. Thus,
\begin{equation*}
\lVert xf \rVert_{L^2} \leq \lVert LS \rVert_{L^2} + \lVert xg \rVert_{L^2}
\end{equation*}
so the bound $\lVert xf \rVert_{L^2} \lesssim \epsilon t^{1/6}$ also follows from the improved $L^2$ bound for $xg$. Thus, the proof is complete if we can obtain the weighted $L^2$ bound for $g$.
In the real-valued case, it is possible to use the scaling vector field to control $Lw$ in $L^2$, but the non-divergence form of the nonlinearity in~\eqref{eqn:cmkdv} precludes this argument. Instead, we will use a more direct argument. Since $\mathcal{F}(xg) = i \partial_\xi \hat{g}$, by Plancherel's theorem it suffices to prove bounds on $\partial_\xi \hat{g}$ in $L^2$. We find that
\begin{equation}\label{eqn:cmkdv-localization-g-eqn}\begin{split}
\partial_t \partial_\xi \hat g =& \mp \frac{t}{2\pi} \int \partial_\xi \phi e^{it\phi} \hat{f}(\eta) (\xi - \eta - \sigma)\hat{g}(\xi-\eta-\sigma) \overline{\hat{f}(-\sigma)}\;d\eta d\sigma \\&\qquad \mp \frac{i}{2\pi} \int e^{it\phi} \hat{f}(\eta) (\xi - \eta - \sigma) \partial_\xi \hat{g}(\xi-\eta-\sigma) \overline{\hat{f}(-\sigma)}\;d\eta d\sigma\\
&+\{\text{easier terms}\}
\end{split}\end{equation}
The second term looks problematic at first, since our estimates do not allow us to control derivatives of $Lw$. However, by writing it as $\pm e^{t\partial_x^3} \left(|u|^2 \partial_x Lw\right)$ in physical space and performing an energy estimate, we see that this term is actually harmless:
\begin{equation*}
\frac{1}{2}\partial_t \lVert xg \rVert_{L^2}^2 = \pm\int |u|^2 \partial_x \left| Lw \right|^2\;dx + \cdots = \mp \int \partial_x \left| u \right|^2 \left| Lw \right|^2\;dx + \cdots
\end{equation*}
and $\int \partial_x |u|^2 |Lw|^2\;dx \lesssim \epsilon^2 t^{-1} \lVert xg \rVert_{L^2}^2$, which, by Gr\"onwall's inequality, contributes growth of order at most $t^{C\epsilon^2}$ and is therefore consistent with the slow growth of $xg$ in $L^2$. Thus, it only remains to control the first term in~\eqref{eqn:cmkdv-localization-g-eqn}. We do this by considering the space-time resonance structure of $\phi$, together with cancellations coming from the $\partial_\xi \phi$ multiplier. The derivatives of $\phi$ are
\begin{equation}\label{eqn:phi-derivatives}\begin{split}
\partial_\xi \phi =& 3(\eta + \sigma)(2\xi - \eta - \sigma)\\
\partial_\eta \phi =& 3(\xi - \sigma)(\xi - 2\eta - \sigma)\\
\partial_\sigma \phi =& 3(\xi - \eta) (\xi - \eta -2\sigma)
\end{split}\end{equation}
Based on~\cref{eqn:4-wave-phase,eqn:phi-derivatives}, we introduce the space-time resonant sets
\begin{align*}
\mathcal{T} =& \{\xi = \eta\} \cup \{\xi = \sigma\}\\
\mathcal{S} =& \{\eta = \sigma = \xi/3\}\\
\mathcal{R} =& \mathcal{S} \cap \mathcal{T} \\
=& \{(0,0,0)\}
\end{align*}
where $\mathcal{T}$ is the set of time resonances (where $\phi$ vanishes to higher order than $\partial_\xi \phi$), $\mathcal{S}$ is the set of space resonances (where $\nabla_{\eta,\sigma} \phi$ vanishes to higher order than $\partial_\xi \phi$), and the set of space-time resonances is $\mathcal{R} = \mathcal{T} \cap \mathcal{S}$; substituting $\eta = \sigma = \xi/3$ into the defining equations of $\mathcal{T}$ forces $\xi = 0$, so $\mathcal{R}$ reduces to the origin.
Away from the set $\mathcal{T}$ of time resonances the quotient $\frac{\partial_\xi \phi}{\phi}$ is bounded and we may integrate by parts in the time variable. This is akin to the normal form transformation method introduced by Shatah in~\cite{shatahNormalFormsQuadratic1985}. In particular, it transforms the cubic nonlinearity into a quintic nonlinearity, which gives us more decay and leads to better bounds.
On the other hand, outside of the set $\mathcal{S}$ of space resonances, $\frac{\partial_\xi \phi}{|\nabla_{\eta,\sigma} \phi|}$ is bounded, and we can integrate by parts using the relation $\frac{\nabla_{\eta,\sigma}\phi}{is|\nabla_{\eta,\sigma}\phi|^2} \cdot \nabla_{\eta,\sigma} e^{is\phi} = e^{is\phi}$ to gain a power of $s^{-1}$. This is similar in spirit to the vector field method developed by Klainerman in~\cite{klainermanNullConditionGlobal1986}. In principle, this integration could result in a loss of derivatives when the space weight and the derivative fall on the same term. In practice, however, we only need to apply this integration by parts in a small neighborhood of $\mathcal{T}$, where it can be seen that $|\xi - \eta - \sigma| \lesssim \max (|\eta|, |\sigma|)$, so we can move the derivative from the term carrying the $L$ weight to an unweighted term.
Finally, in a small (time-dependent) neighborhood of the space-time resonant set $\mathcal{R}$, we can integrate crudely using the volume bounds in Fourier space and the H\"older bound $|\hat{g}(\xi)| \lesssim \epsilon t^{1/6-\beta}|\xi|^{1/2}$ together with the $L^\infty$ bound for $\hat{f}$ to bound the contribution from $\mathcal{R}$.
\subsubsection{Organization of the paper} The plan of the rest of the paper is as follows: In~\Cref{sec:notation}, we present some notation that will be used throughout the paper, and give some conventions and results about pseudoproduct operators. In~\Cref{sec:linear-ests}, we will give decay estimates for the linear equation, and show how these estimates allow us to control bi- and trilinear terms. In~\Cref{sec:self-sim}, we will consider the self-similar solution $S$, and derive estimates which will be necessary for the later analysis. In~\Cref{sec:reduction}, we will show that~\Cref{thm:main-theorem} follows if we prove that $\hat{f}$ evolves by logarithmic phase rotation, and that $\lVert xg \rVert_{L^2}$, $\lVert \hat{f} \rVert_{L^\infty}$ and $|\partial_t \hat{f}(0,t)|$ satisfy certain bootstrap estimates. In~\Cref{sec:weighted-L2}, we prove that, under the bootstrap assumptions, $\lVert xg \rVert_{L^2} \lesssim \epsilon t^{1/6-\beta}$, using space-time resonances and a Gr\"onwall argument. We then verify that $|\partial_t \hat{u}(0,t)|$ obeys the required decay (and hence that $\hat{f}$ is bounded at low frequencies), and show in~\Cref{sec:L-infty-est} that at high frequencies $\hat{f}$ is bounded and undergoes the required logarithmic phase rotation.
\subsection*{Acknowledgements}
The author would like to thank P. Germain for many helpful discussions. The author would also like to thank P. Deift and R. C\^ote for discussions about the self-similar solution.
\section{Preliminaries}\label{sec:notation}
\subsection{Notation and basic inequalities}
We will make use of the Japanese bracket notation
\begin{equation*}
\jBra{x} := \sqrt{1 + x^2}
\end{equation*}
When discussing constants, we will write $C$ to denote a (positive) absolute constant, the exact value of which can change from line to line. We also write $c_j$ and $c_{j,k}$ to denote positive sequences whose $\ell^2$ norms are equal to some absolute constant:
\begin{equation*}\begin{split}
\left(\sum_j c_j^2\right)^{1/2} = C\\
\left(\sum_{j,k} c_{j,k}^2\right)^{1/2} = C
\end{split}\end{equation*}
If $X$ and $Y$ are two quantities which we wish to compare, but we want to suppress constant factors, we will write
\begin{itemize}
\item $X \lesssim Y$ if $X \leq CY$ for some $C > 0$,
\item $X \sim Y$ if $X \lesssim Y$ and $Y \lesssim X$,
\item $X \ll Y$ if $X \leq c Y$, where $c$ is a small constant, the exact value of which depends on the context.
\end{itemize}
If we want to allow the implicit constant to depend on some parameters $P_1, P_2, \cdots P_n$, then we will write $X \lesssim_{P_1, P_2, \cdots P_n} Y$, $X \sim_{P_1, P_2, \cdots P_n} Y,$ or $X \ll_{P_1, P_2, \cdots P_n} Y$, respectively.
We use the Fourier transform convention
\begin{equation*}
\mathcal{F}f(\xi) = \hat{f}(\xi) := \frac{1}{\sqrt{2\pi}}\int f(x) e^{-ix\xi}\,dx
\end{equation*}
with the inverse transformation
\begin{equation*}
\mathcal{F}^{-1}f(x) = \check f(x) := \frac{1}{\sqrt{2\pi}}\int f(\xi) e^{ix\xi}\,d\xi
\end{equation*}
Under this convention, multiplication and convolution are linked by
\begin{equation*}\begin{split}
\mathcal{F}(fg)(\xi) =& \frac{1}{\sqrt{2\pi}} (\hat{f} * \hat{g})(\xi)\\
\mathcal{F}(f * g)(\xi) =& \sqrt{2\pi} \hat{f}(\xi) \hat{g}(\xi)
\end{split}\end{equation*}
where
\begin{equation*}
f*g(x) = \int f(x-y) g(y)\,dy = \int f(y) g(x-y)\,dy
\end{equation*}
Using the Fourier transform, we can generalize the notion of differential operators to define Fourier multiplication operators. A Fourier multiplication operator with symbol $m : \mathbb{R} \to \mathbb{C}$ is given by
\begin{equation*}
m(D) f(x) := \mathcal{F}^{-1}(m(\xi) \hat{f}(\xi))(x)
\end{equation*}
The Littlewood-Paley projection operators are an especially important family of Fourier multiplication operators. Let ${\psi \in C^\infty(\mathbb{R})}$ be a function supported on $B_2(0)$ which is identically zero on $B_{1/2}(0)$ satisfying
\begin{equation*}
\sum_{j \in \mathbb{Z}} \psi(2^j \xi) = 1
\end{equation*}
for all $\xi \neq 0$. Then, we define the Littlewood-Paley projectors as
\begin{equation*}
P_j = \psi_j(D) = \psi\left(\frac{D}{2^j}\right)
\end{equation*}
and define $P_j^+$ and $P_j^-$ to be the projectors to positive and negative frequencies, respectively:
\begin{equation*}
P_j^+ = \psi_j(D) \mathds{1}_{D > 0},\qquad\qquad P_j^- = \psi_j(D) \mathds{1}_{D < 0},
\end{equation*}
We write
\begin{equation*}
P_{\leq j} = \sum_{k \leq j} P_k,\qquad P_{\geq j} = \sum_{k \geq j} P_k, \qquad P_{[j_1, j_2]} = \sum_{j_1 \leq k \leq j_2} P_k
\end{equation*}
with $P_{<j}$ and $P_{> j}$ being defined similarly. We also define
\begin{equation*}
P_{\lesssim j} = \sum_{k \leq j + 10} P_k\qquad P_{\ll j} = \sum_{k < j + 10} P_k \qquad P_{\sim j} = \sum_{j - 10 \leq k \leq j + 10} P_k
\end{equation*}
All of the Littlewood-Paley projectors are bounded from $L^p(\mathbb{R}) \to L^p(\mathbb{R})$, and moreover we have the Plancherel-type identity
\begin{equation*}
\sum_{j \in \mathbb{Z}}\lVert P_j f \rVert_{L^2}^2 \sim \lVert f \rVert_{L^2}^2
\end{equation*}
Furthermore, if $f$ has mean $0$ (i.e. $\hat{f}(0) = 0$), then taking Fourier transforms and applying Hardy's inequality (see~\cite{hardyInequalities1959}), we find that
\begin{equation}\label{eqn:hardy-est}
\lVert f \rVert_{\dot{H}^{-1}}^2 := \sum_{j \in \mathbb{Z}} 2^{-2j} \lVert P_j f \rVert_{L^2}^2 \lesssim \lVert xf \rVert_{L^2}^2
\end{equation}
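Schematically, the chain of estimates behind~\eqref{eqn:hardy-est} is
\begin{equation*}
\lVert f \rVert_{\dot{H}^{-1}}^2 \sim \int \frac{|\hat{f}(\xi)|^2}{|\xi|^2}\,d\xi \lesssim \lVert \partial_\xi \hat{f} \rVert_{L^2}^2 = \lVert x f \rVert_{L^2}^2
\end{equation*}
where the middle inequality is Hardy's inequality on the Fourier side (which requires $\hat{f}(0) = 0$) and the last equality is Plancherel's theorem.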
When discussing the complex mKdV equation, we will find it convenient to use the (time-dependent) frequency projectors given by
\begin{equation*}
Q_j = \begin{cases}
P_j & 2^j > t^{-1/3}\\
P_{\leq j} & 2^{j-1} < t^{-1/3} \leq 2^{j}\\
0 & \text{else}
\end{cases}
\end{equation*}
Clearly, these projectors obey the same $L^p \to L^p$ bounds as the usual Littlewood-Paley projectors uniformly in time. The decision not to distinguish between frequencies $\lesssim t^{-1/3}$ is motivated by our desire to have the frequency localization of $\hat{f}$ determine the spatial localization of $u = e^{-t\partial_x^3} f$ through the group velocity relation. For frequencies $\gtrsim t^{-1/3}$, this is possible (see~\Cref{sec:linear-ests}), but the uncertainty principle implies that this spatial localization deteriorates when we project to frequencies $\ll t^{-1/3}$. To denote the projector to the low frequencies, we will sometimes write
\begin{equation*}
Q_{\leq \log t^{-1/3}} = \psi_{\leq \log t^{-1/3}}(D) := P_{\leq j}
\end{equation*}
where $j \in \mathbb{Z}$ is such that $2^{j-1} < t^{-1/3} \leq 2^{j}$, so
\begin{equation*}
\operatorname{Id} = Q_{\leq \log t^{-1/3}} + \sum_{2^j > t^{-1/3}} Q_j
\end{equation*}
As with the projectors $P_j$, we define
\begin{equation*}
Q_{\leq j} = \sum_{k \leq j} Q_k,\qquad Q_{\geq j} = \sum_{k \geq j} Q_k, \qquad Q_{[j_1 , j_2]} = \sum_{j_1 \leq k \leq j_2} Q_k
\end{equation*}
with $Q_{< j}$ and $Q_{> j}$ being defined analogously, and
\begin{equation*}
Q_{\lesssim j} = Q_{\leq j + 10},\qquad Q_{\sim j} = Q_{[j - 10, j + 10]}, \qquad Q_{\ll j} = Q_{< j - 10}
\end{equation*}
We will often indicate the frequency localization of a function through a subscript, so
\begin{equation*}\begin{split}
f_j =& Q_j f\\
f_{[j_1,j_2]} =& Q_{[j_1,j_2]} f
\end{split}\end{equation*}
and similarly for $f_{< j}$, $f_{\sim j}$, etc.
To complement the $Q_j$, it will be useful to consider time-dependent functions $\chi_j$ with the property that if $f$ is a bump function localized in space near $0$, then $e^{-t\partial_x^3} Q_j f$ will be localized near the support of $\chi_j$ (up to more rapidly decaying tails). To do this, we define
\begin{equation*}
\chi_j(x;t) = \begin{cases}
\chi(x/(t2^{2j})) & 2^j > t^{-1/3}\\
\sum_{2^k \leq t^{-1/3}} \chi(x/(t2^{2k})) & 2^{j} \leq t^{-1/3} < 2^{j+1}\\
0 & 2^{j+1} \leq t^{-1/3}
\end{cases}
\end{equation*}
where $\chi$ is a non-negative bump function localized in the region $|x| \approx 1$ chosen so that $\sum_{j} \chi_j(x,t) = 1$ for all $x \neq 0$. As with the Fourier projectors, we define
\begin{equation*}
\chi_{\leq j} = \sum_{k \leq j} \chi_{k}, \qquad \chi_{< j} = \sum_{k < j} \chi_k, \qquad \chi_{[j_1, j_2]} = \sum_{j_1 \leq k \leq j_2} \chi_{k}
\end{equation*}
with $\chi_{> j}$ and $\chi_{\geq j}$ defined analogously, and
\begin{equation*}
\chi_{\lesssim j} = \chi_{\leq j + 10},\qquad \chi_{\ll j} = \chi_{< j - 10},\qquad \chi_{\sim j} = \chi_{[j - 10, j + 10]}
\end{equation*}
and similarly for $\chi_{\gtrsim j}$ and $\chi_{\gg j}$.
Note that for $f \in L^2$, each of the families $\{\chi_k f\}$, $\{Q_j f \}$ and $\{\chi_k Q_j f \}$ is almost orthogonal, which implies that
\begin{equation}\label{eqn:phy-and-fourier-almost-orthogonality}
\lVert f \rVert_{L^2}^2 \sim \sum_{2^k \gtrsim t^{-1/3}} \lVert \chi_k f \rVert_{L^2}^2 \sim \sum_{2^j \gtrsim t^{-1/3}} \lVert Q_j f \rVert_{L^2}^2 \sim \sum_{2^j, 2^k \gtrsim t^{-1/3}} \lVert \chi_k Q_j f \rVert_{L^2}^2
\end{equation}
We also recall the following bound (which expresses the pseudolocality of the projectors $Q_j$): For $\frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2}$,
\begin{equation}\label{eqn:P_j-loc-bound}
\lVert\left( Q_{\leq j} f\right) g\rVert_{L^p} \lesssim_N \jBra{2^j d(\supp(f), \supp(g))}^{-N} \lVert f \rVert_{L^{p_1}}\lVert g \rVert_{L^{p_2}}
\end{equation}
which can be obtained by writing $Q_{\leq j} f = \check{Q}_{\leq j} * f$ and noting that $\check{Q}_{\leq j}$ is a rapidly decreasing function. In particular, if the supports of $f$ and $g$ are separated by a distance much larger than $2^{-j}$, the term on the right in~\eqref{eqn:P_j-loc-bound} will be small.
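A minimal sketch of why~\eqref{eqn:P_j-loc-bound} holds, suppressing the bookkeeping of exponents: the kernel obeys
\begin{equation*}
|\check{Q}_{\leq j}(y)| \lesssim_N 2^j \jBra{2^j y}^{-N}
\end{equation*}
and for $x \in \supp(g)$ only translates with $|y| \geq d(\supp(f), \supp(g))$ contribute to the convolution $(\check{Q}_{\leq j} * f)(x)$, so the rapidly decaying tail of the kernel supplies the factor $\jBra{2^j d(\supp(f), \supp(g))}^{-N}$; H\"older's inequality then distributes the $L^{p_1}$ and $L^{p_2}$ norms.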
\begin{rmk}
The notion of pseudolocality holds in a much greater generality for pseudodifferential operators $a(x, hD)$, see~\cite{zworskiSemiclassicalAnalysis2012}. In equation~\eqref{eqn:P_j-loc-bound}, $2^{-j}$ plays the role of the small parameter $h$.
\end{rmk}
\begin{rmk}
We will often apply the estimate~\eqref{eqn:P_j-loc-bound} as follows: Taking $g = \chi_j$, we find that
\begin{equation}\label{eqn:pseudoloc-comm-est}
\left\lVert \chi_j Q_{\leq j} \left((1 - \chi_{\sim j}) F\right)\right\rVert_{L^p} \lesssim_N (t 2^{3j})^{-N} \lVert F \rVert_{L^p}
\end{equation}
so
\begin{equation*}
\lVert \chi_j Q_{\leq j} F \rVert_{L^p} \lesssim_N \lVert \chi_{\sim j} F \rVert_{L^p} + (t2^{3j})^{-N} \lVert F \rVert_{L^p}
\end{equation*}
In our applications, $\chi_j F$ and $\chi_{\sim j} F$ will generally obey the same sorts of bounds, so we can essentially commute the frequency localization of $Q_{\leq j}$ and the spatial localization of $\chi_j$ up to an error which is summable in $j$.
\end{rmk}
\begin{rmk}
By writing $Q_j = Q_{\leq j} - Q_{\leq j-1}$, we see that the same bounds hold true if $Q_{\leq j}$ is replaced by $Q_{j}$.
\end{rmk}
\subsection{Multilinear harmonic analysis}\label{sec:multilinear-defs}
For a symbol $m : \mathbb{R}^3 \to \mathbb{C}$, we define the trilinear pseudoproduct operator $T_m$ by
\begin{equation*}
\mathcal{F}T_m(f,g,h)(\xi) = \frac{1}{2\pi} \int m(\xi,\eta,\sigma) \hat{f}(\eta) \hat{g}(\xi - \eta - \sigma) \hat{h}(\sigma)\,d\eta d\sigma
\end{equation*}
In particular, $T_1(f,g,h)(x) = (fgh)(x)$, so $T_m$ can be thought of as a generalized product. If the symbol $m$ is sufficiently well-behaved, we can show that the pseudoproduct $T_m(f,g,h)$ obeys H\"older-type bounds (see also~\cite{coifmanAuDelaOperateurs1978}):
\begin{thm}\label{thm:L1-symbol-bounds}
Suppose $m$ is a symbol with $\check{m} \in L^1$, and $\frac{1}{p_1} + \frac{1}{p_2} + \frac{1}{p_3} = \frac{1}{p}$. Then,
\begin{equation}
\lVert T_m(f,g,h) \rVert_{L^p} \lesssim \lVert \check{m} \rVert_{L^1} \lVert f \rVert_{L^{p_1}} \lVert g\rVert_{L^{p_2}} \lVert h\rVert_{L^{p_3}}
\end{equation}
\end{thm}
\begin{proof}
By inverting the Fourier transform, we find
\begin{equation*}
T_m(f,g,h)(x) = \frac{1}{(2\pi)^{3/2}}\int \check{m}(y,z,w) f(x - y - z) g(x-y) h(x-y-w)\,dydzdw
\end{equation*}
which yields the result by Young's inequality.
\end{proof}
\begin{rmk}\label{rmk:freq-loc-symbol-bounds}
In our analysis, we will often consider symbols $m$ which are supported on a region of volume $O\left(2^{3j}\right)$ and satisfy the symbol bounds
\begin{equation*}
|\partial_{\xi,\eta,\sigma}^\alpha m(\xi,\eta,\sigma)| \lesssim_\alpha 2^{-|\alpha|j}
\end{equation*}
For such symbols,
\begin{equation*}
\left|\check{m}(y,z,w)\right| \lesssim_N \frac{2^j}{(1 + |2^{j}y|)^N}\frac{2^{j}}{(1 + |2^{j}z|)^N}\frac{2^{j}}{(1 + |2^{j}w|)^N}
\end{equation*}
which shows that $m$ satisfies the hypotheses of~\Cref{thm:L1-symbol-bounds}.
\end{rmk}
We also need a pseudolocality property for pseudoproduct operators given in the following lemma:
\begin{lemma}\label{thm:cm-paraprod-pseudolocality}
Suppose that $f_1, f_2, f_3, f_4$ are functions, and suppose that $\supp f_i$ and $\supp f_k$ are separated by a distance $R$ for some $i \neq k$. Let $m_j$ be a symbol supported on $|\xi| + |\eta| + |\sigma| \leq 2^{j+5}$ and satisfying the symbol bounds
\begin{equation*}
|\partial_{\xi,\eta,\sigma}^\alpha m_j(\xi,\eta,\sigma)| \lesssim_\alpha 2^{-j|\alpha|}
\end{equation*}
Then, for $\frac{1}{p_1} + \frac{1}{p_2} + \frac{1}{p_3} + \frac{1}{p_4} = \frac{1}{p}$, we have
\begin{equation}\label{eqn:pseudoloc-1}
\lVert f_4 T_{m_j}(f_1,f_2,f_3) \rVert_{L^p} \lesssim_N \jBra{2^j R}^{-N} \lVert f_1 \rVert_{L^{p_1}} \lVert f_2 \rVert_{L^{p_2}} \lVert f_3 \rVert_{L^{p_3}} \lVert f_4 \rVert_{L^{p_4}}
\end{equation}
\end{lemma}
\begin{proof}
We will assume $i=1, k = 2$, since the other cases are similar. Arguing as in the proof of~\Cref{thm:L1-symbol-bounds} and using the hypothesis on the supports of $f_1$ and $f_2$, we have that
\begin{equation*}
f_4T_{m_j}(f_1,f_2,f_3) = \int_{|z| \geq R} \check{m}_j(y,z,w) f_1\left( x - y - z\right) f_2\left( x - y\right) f_3\left( x - y - w\right) f_4(x)\,dy dz dw
\end{equation*}
Using Minkowski's and H\"older's inequalities, we see that~\eqref{eqn:pseudoloc-1} reduces to showing that
\begin{equation*}
\int_{|z| \geq R} |\check{m}_j(y,z,w)|\,dydzdw \lesssim_N \jBra{2^j R}^{-N}
\end{equation*}
Since $m_j$ is smooth and supported on a region of size $O(2^{3j})$, the same reasoning as in~\Cref{rmk:freq-loc-symbol-bounds} gives us the bound
\begin{equation*}
|\check{m}_j(y,z,w)| \lesssim_N 2^{3j}\jBra{(2^j y,2^j z,2^j w)}^{-N-10}
\end{equation*}
which gives the required bound.
\end{proof}
\section{Linear and multilinear estimates}\label{sec:linear-ests}
\subsection{The linear estimate}
We now turn our attention to the linear part of~\eqref{eqn:cmkdv}. We begin by proving a linear estimate for the Airy propagator. Define the spaces $X_j$ for $2^j \gtrsim t^{-1/3}$ by the norm
\begin{equation}\label{eqn:X-j-norm-def}
\lVert f \rVert_{X_j} := \lVert \widehat{Q_{\sim j} f} \rVert_{L^\infty} + t^{-1/6} \lVert x Q_{\sim j} f \rVert_{L^2}
\end{equation}
and note that $\lVert f \rVert_{X_j}^2 \lesssim \lVert f \rVert_{X}^2$, where $X$ is the norm defined in~\eqref{eqn:X-def}.
\begin{lemma}\label{thm:lin-decay-lemma}
Let $u(x,t) = e^{-t\partial_x^3} f(x,t)$. For $2^j > t^{-1/3}$, we have the pointwise estimate
\begin{equation}\label{eqn:freq-loc-ptwise-decay}\begin{split}
P^{\pm}_{j} u(x,t) =& \frac{1}{\sqrt{12 t \xi_0}} e^{\mp 2it\xi_0^3 \pm i \frac{\pi}{4}} \widehat{P^{\pm}_j f}\left(\pm \xi_0\right) \mathds{1}_{x < 0}\\ &\qquad + O\left(t^{-1/3} \left(2^j t^{1/3}\right)^{-9/14}\right) \mathds{1}_{x < 0}\chi_{\sim j}(x,t) \lVert f \rVert_{X_j}\\
&\qquad + O\left(t^{-1/3} \left((2^j + 2^{-j/3} |\xi_0|^{4/3}) t^{1/3}\right)^{-3/2}\right)\lVert f \rVert_{X_j}
\end{split}\end{equation}
where $\xi_0 = \sqrt{\left|\frac{x}{3t}\right|}$. Moreover, we have the estimate
\begin{equation}\label{eqn:low-freq-ptwise-decay}
|Q_{\leq \log t^{-1/3}} u(x,t)| \lesssim t^{-1/3} (1 + t^{2/3} \xi_0^2)^{-1} \lVert f \rVert_{X_{\leq \log t^{-1/3}}}
\end{equation}
so for $p \in [4, \infty]$,
\begin{equation}\label{eqn:freq-loc-lp-decay}
\lVert Q_j u \rVert_{L^p} \lesssim t^{-\frac{1}{2} + \frac{1}{p}} 2^{\left(\frac{2}{p} - \frac{1}{2}\right) j} \lVert f \rVert_{X_j}
\end{equation}
In particular, if $p > 4$,
\begin{equation}\label{eqn:lp-decay}
\lVert u \rVert_{L^p} \lesssim t^{-\frac{1}{3} + \frac{1}{3p}} \lVert f \rVert_{X}
\end{equation}
\end{lemma}
\begin{proof}
The estimate~\cref{eqn:freq-loc-lp-decay} follows directly from~\cref{eqn:freq-loc-ptwise-decay,eqn:low-freq-ptwise-decay}, and~\cref{eqn:lp-decay} follows from~\cref{eqn:low-freq-ptwise-decay,eqn:freq-loc-lp-decay} since
\begin{equation*}
\lVert u \rVert_{L^p} \lesssim \lVert Q_{\leq \log t^{-1/3}} u \rVert_{L^p} + \sum_{2^j > t^{-1/3}} \lVert Q_j u \rVert_{L^p}
\end{equation*}
so it only remains to prove~\eqref{eqn:freq-loc-ptwise-decay} and \eqref{eqn:low-freq-ptwise-decay}.
To prove the low frequency estimate \cref{eqn:low-freq-ptwise-decay}, it suffices to show that
\begin{equation*}
|Q_{\leq \log t^{-1/3}} u| \lesssim \min(t^{-1/3}, t^{-1} \xi_0^{-2})\lVert f \rVert_{X_{\leq \log t^{-1/3}}}
\end{equation*}
The bound $|Q_{\leq \log t^{-1/3}} u| \lesssim t^{-1/3} \lVert f \rVert_{X_{\leq \log t^{-1/3}}}$ follows immediately from the Hausdorff-Young inequality, so it suffices to consider the case $|\xi_0| \gg t^{-1/3}$. In this case, writing
\begin{equation*}
Q_{\leq \log t^{-1/3}} u(x,t) = \frac{1}{\sqrt{2\pi}} \int_\mathbb{R} \psi_{\leq \log t^{-1/3}}(\xi) e^{it\phi_\text{lin}(\xi)} \hat{f}(\xi)\,d\xi
\end{equation*}
for $\phi_\text{lin}(\xi) = \frac{x}{t}\xi + \xi^3 = \xi^3 - 3\xi_0^2\xi$, the assumption $|\xi_0| \gg t^{-1/3}$ implies that $|\partial_\xi \phi_\text{lin}| \sim |\xi_0|^2$ on the support of $\psi_{\leq \log t^{-1/3}}\hat{f}$, so integration by parts yields
\begin{equation*}\begin{split}
|Q_{\leq \log t^{-1/3}} u(x,t)| \lesssim& \frac{1}{t} \int_\mathbb{R} \left|\partial_\xi \left(\frac{1}{\partial_\xi \phi_\text{lin}}\right)\right| |\psi_{\leq \log t^{-1/3}}\hat{f}(\xi)|\,d\xi\\
&+ \frac{1}{t} \int_\mathbb{R} \left|\frac{1}{\partial_\xi \phi_\text{lin}}\right| \left|\partial_\xi \left(\psi_{\leq \log t^{-1/3}}\hat{f}(\xi)\right)\right|\,d\xi\\
\lesssim& t^{-1}|\xi_0|^{-2}\left( t^{-1/3} \xi_0^{-1}\lVert \widehat{Q_{\leq \log t^{-1/3}} f} \rVert_{L^\infty} + t^{-1/6} \lVert xQ_{\leq \log t^{-1/3}} f \rVert_{L^2}\right)\\
\lesssim& t^{-1} |\xi_0|^{-2} \lVert f \rVert_{X_{\leq \log t^{-1/3}}}
\end{split}\end{equation*}
as required.
We now turn to the estimate~\eqref{eqn:freq-loc-ptwise-decay}. We consider the estimate for $P^{+}_j u$: the estimate for $P^{-}_j u$ is similar. As before, we write
\begin{equation*}
P^+_j u(x,t) = \frac{1}{\sqrt{2\pi}} \int_0^\infty \psi^+_j(\xi) e^{it\phi_\text{lin}(\xi)} \hat{f}(\xi)\,d\xi
\end{equation*}
We distinguish between three cases depending on the relative sizes of $2^j$ and $|\xi_0|$ and the sign of $x$.
\paragraph{\indent \textbf{Case} $|\xi_0| < 2^{j-10}$}
In this case, $|\partial_\xi \phi_\text{lin}| \sim 2^{2j}$, and integration by parts gives
\begin{subequations}\begin{align}
|P^+_j u(x,t)| \lesssim& \frac{1}{t} \int_0^\infty \left|\partial_\xi\left( \frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}}\right)\right| |\psi^+_{\sim j}(\xi)\hat{f}(\xi)|\,d\xi\label{eqn:lin-est-small-xi-0-a}\\
&+ \frac{1}{t} \int_0^\infty \left|\psi^+_j(\xi) \frac{1}{\partial_\xi \phi_\text{lin}}\right| |\psi^+_{\sim j}(\xi)\partial_\xi \hat{f}(\xi)|\,d\xi\label{eqn:lin-est-small-xi-0-b}
\end{align}\end{subequations}
For~\eqref{eqn:lin-est-small-xi-0-a}, we observe that $\partial_{\xi} \frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}}$ has size $O(2^{-3j})$ and is supported on a region of size $O(2^j)$, so using Hardy's inequality~\eqref{eqn:hardy-est} yields
\begin{equation*}\begin{split}
\eqref{eqn:lin-est-small-xi-0-a} \lesssim& t^{-1}2^{-5/2j} \lVert \widehat{Q_{\sim j} f} \rVert_{L^2}\\
\lesssim& t^{-1/3} \left( t^{1/3} 2^j \right)^{-3/2} \lVert f \rVert_{X_j}
\end{split}\end{equation*}
Similarly, $\frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}}$ has size $O(2^{-2j})$ and is supported on a region of size $O(2^j)$, so
\begin{equation*}\begin{split}
\eqref{eqn:lin-est-small-xi-0-b} \lesssim& t^{-1}2^{-3/2j} \lVert Q_{\sim j} xf \rVert_{L^2}\\
\lesssim& t^{-1/3} \left(2^j t^{1/3}\right)^{-3/2} \lVert f \rVert_{X_j}
\end{split}\end{equation*}
as required.
\paragraph{\indent \textbf{Case} $|\xi_0| > 2^{j+10}$}
In this case, $|\partial_\xi \phi_\text{lin}| \sim \xi_0^2$, and a quick calculation gives that
\begin{equation}\label{eqn:lin-est-large-xi-0-bounds}\begin{split}
\left\lVert \partial_{\xi} \frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}(\xi)} \right\rVert_{L^2} \lesssim& \frac{1}{\xi_0^2 2^{j/2}}\\
\left\lVert \frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}(\xi)} \right\rVert_{L^2} \lesssim& \frac{2^{j/2}}{\xi_0^2}\\
\end{split}\end{equation}
Integrating by parts, we find that
\begin{subequations}\begin{align}
|P_j^{\pm} u(x,t)| \lesssim& \frac{1}{t} \int_0^\infty \left|\partial_{\xi} \frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}(\xi)} \right| |\psi^+_{\sim j} \hat{f}(\xi)|\,d\xi\label{eqn:lin-est-large-xi-0-1}\\
&+ \frac{1}{t}\int_0^\infty \left|\frac{\psi^+_j(\xi)}{\partial_\xi \phi_\text{lin}(\xi)} \right| |\psi^+_{\sim j} \partial_\xi \hat{f}(\xi)|\,d\xi\label{eqn:lin-est-large-xi-0-2}
\end{align}\end{subequations}
Using the bounds~\eqref{eqn:lin-est-large-xi-0-bounds} and arguing as in the case $|\xi_0| < 2^{j-10}$, we find that
\begin{equation*}\begin{split}
\eqref{eqn:lin-est-large-xi-0-1} \lesssim& t^{-1} \xi_0^{-2} 2^{-j/2} \lVert \widehat{Q_{\sim j} f} \rVert_{L^2}\\
\lesssim& t^{-1/3} \left(t^{1/3} 2^{-j/3}\xi_0^{4/3}\right)^{-3/2}\lVert f \rVert_{X_{j}}
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
\eqref{eqn:lin-est-large-xi-0-2} \lesssim& t^{-1} \xi_0^{-2} 2^{j/2} \lVert \partial_\xi \widehat{Q_{\sim j} f} \rVert_{L^2}\\
\lesssim& t^{-1/3} \left(t^{1/3} 2^{-j/3}\xi_0^{4/3}\right)^{-3/2}\lVert f \rVert_{X_{j}}
\end{split}\end{equation*}
which yields the desired bound for $P^+_j u(x,t)$.
\paragraph{\indent \textbf{Case} $x > 0$, $2^{j-10} \leq |\xi_0| \leq 2^{j+10}$}
Here, $|\partial_\xi \phi_\text{lin}| \sim \xi_0^{2}$, and the estimate is identical to the previous case.
\paragraph{\indent \textbf{Case} $x < 0$, $2^{j-10} \leq |\xi_0| \leq 2^{j+10}$}
Since $\partial_\xi \phi_\text{lin}$ vanishes at $\xi = \xi_0$, we employ the method of stationary phase. Let us write
\begin{equation*}
P^+_j u(x,t) = \sum_{\ell = \ell_0}^{j+10} I_{j,\ell}
\end{equation*}
where
\begin{equation*}\begin{split}
I_{j,\ell} =& \frac{1}{\sqrt{2\pi}} \int_0^\infty \psi^+_j(\xi) \psi_{\ell}(\xi - \xi_0) e^{it\phi_\text{lin}(\xi)} \widehat{Q_{\sim j}f}(\xi)\,d\xi, \qquad \ell > \ell_0\\
I_{j,\ell_0} =& \frac{1}{\sqrt{2\pi}} \int_0^\infty \psi^{+}_j(\xi)\psi_{\leq \ell_0}(\xi - \xi_0) e^{it\phi_\text{lin}(\xi)} \widehat{Q_{\sim j} f}(\xi)\,d\xi
\end{split}\end{equation*}
and $\ell_0$ is chosen such that $2^{\ell_0} \sim t^{-1/3} (2^j t^{1/3})^{-3/7}$.
For the $I_{j,\ell}$ factors with $\ell > \ell_0$, we have that $|\partial_\xi \phi_\text{lin}| \sim 2^\ell 2^j$. Integrating by parts, we find that
\begin{equation*}\begin{split}
|I_{j,\ell}| \lesssim& \frac{1}{t} \int_0^\infty \left| \partial_\xi \frac{\psi^+_j(\xi)\psi_{\ell}(\xi - \xi_0)}{\partial_\xi\phi_\text{lin}} \right| |\psi_{\sim j}(\xi) \hat{f}(\xi)| \,d\xi\\
&+ \frac{1}{t} \int_0^\infty \left|\frac{\psi^+_j(\xi)\psi_{\ell}(\xi - \xi_0)}{\partial_\xi\phi_\text{lin}} \right| |\psi_{\sim j}(\xi) \partial_\xi\hat{f}(\xi)| \,d\xi\\
\lesssim& t^{-1} \left(2^{-j}2^{-\ell} \lVert \widehat{Q_{\sim j}f} \rVert_{L^\infty} + 2^{-j} 2^{-\ell/2} \lVert \partial_\xi \widehat{Q_{\sim j}f} \rVert_{L^2}\right)
\end{split}\end{equation*}
Summing over $\ell > \ell_0$ gives
\begin{equation}\label{eqn:far-from-st-pt-est}\begin{split}
\sum_{\ell > \ell_0}|I_{j,\ell}| \lesssim& t^{-1} \left(2^{-j}2^{-\ell_0} \lVert \widehat{Q_{\sim j}f} \rVert_{L^\infty} + 2^{-j} 2^{-\ell_0/2} \lVert \partial_\xi \widehat{Q_{\sim j}f} \rVert_{L^2}\right)\\
\lesssim& \left( t^{-1} 2^{-j} 2^{-\ell_0} + t^{-5/6} 2^{-j} 2^{-\ell_0/2}\right) \lVert f \rVert_{X_j}
\end{split}\end{equation}
For the $I_{j,\ell_0}$ term, we write
\begin{subequations}\begin{align}
I_{j,\ell_0} =& \frac{1}{\sqrt{2\pi}} \int_0^\infty \psi_{\leq \ell_0}(\xi - \xi_0) e^{it\phi_\text{lin}(\xi)} \left(\psi_j^{+}(\xi)\widehat{Q_{\sim j}f}(\xi) - \psi_j^{+}(\xi_0)\widehat{Q_{\sim j}f}(\xi_0)\right)\,d\xi\label{eqn:lin-stat-ph-1}\\
&+ \frac{1}{\sqrt{2\pi}} \psi_j^{+}(\xi_0) \hat{f}(\xi_0) \int_0^\infty \psi_{\leq \ell_0}(\xi - \xi_0) \left(e^{it\phi_\text{lin}(\xi)} - e^{6it\xi_0 (\xi-\xi_0)^2 -2 it \xi_0^3}\right) \,d\xi\label{eqn:lin-stat-ph-2}\\
&+ \frac{1}{\sqrt{2\pi}} \psi_j^{+}(\xi_0)\hat{f}(\xi_0)e^{-2it\xi_0^3} \int_0^\infty \psi_{\leq \ell_0}(\xi - \xi_0) e^{6it \xi_0 (\xi - \xi_0)^2} \,d\xi\label{eqn:lin-stat-ph-3}
\end{align}\end{subequations}
For the first term, we note that
\begin{equation*}\begin{split}
\Big| \psi^+_j(\xi)\widehat{Q_{\sim j} f}(\xi) - \psi^+_j(\xi_0)\widehat{Q_{\sim j} f}(\xi_0)\Big| \lesssim& \big(2^{-j}|\xi - \xi_0| + t^{1/6}|\xi - \xi_0|^{1/2} \big) \lVert f \rVert_{X_j}
\end{split}\end{equation*}
by the Sobolev-Morrey embedding $\dot{H}^1 \to C^{0,1/2}$. Using this bound, we find that
\begin{equation*}\begin{split}
|\eqref{eqn:lin-stat-ph-1}| \lesssim& \int\psi_{\leq \ell_0}(\xi - \xi_0) \left(|\xi-\xi_0|2^{-j} + t^{1/6} |\xi - \xi_0|^{1/2}\right) \lVert f \rVert_{X_j}\,d\xi\\
\lesssim& \left(2^{3/2\ell_0} t^{1/6} + 2^{-j} 2^{2\ell_0}\right) \lVert f \rVert_{X_j}
\end{split}\end{equation*}
For the second term, we observe that
\begin{equation*}
\phi_\text{lin}(\xi) = -2\xi_0^3 + 6\xi_0 (\xi - \xi_0)^2 + (\xi - \xi_0)^3
\end{equation*}
so
\begin{equation*}\begin{split}
|\eqref{eqn:lin-stat-ph-2}| \lesssim& \lVert \psi_{j}^+(\xi) \hat{f} \rVert_{L^\infty}\int_0^\infty \psi_{\leq \ell_0} (\xi - \xi_0) \left| e^{it (\xi - \xi_0)^3} - 1 \right| \,d\xi\\
\lesssim& t 2^{4\ell_0} \lVert f \rVert_{X_j}
\end{split}\end{equation*}
Finally, rescaling and using the classical stationary phase estimate gives
\begin{equation*}\begin{split}
\eqref{eqn:lin-stat-ph-3} =& \frac{1}{\sqrt{2\pi}} \psi_j^{+}(\xi_0)\hat{f}(\xi_0)e^{-2it\xi_0^3} \int_0^\infty \psi_{\leq \ell_0}(\xi - \xi_0) e^{6it \xi_0 (\xi-\xi_0)^2} \,d\xi\\
=& \frac{2^{\ell_0}}{\sqrt{2\pi}} \psi_j^{+}(\xi_0)\hat{f}(\xi_0)e^{-2it\xi_0^3} \int_\mathbb{R} \psi_{\leq 0}(\xi) e^{6it \xi_0 2^{2\ell_0} \xi^2} \,d\xi\\
=& \frac{\psi_j^{+}(\xi_0)\hat{f}(\xi_0)}{\sqrt{12 t \xi_0}}e^{-2it\xi_0^3+i\frac{\pi}{4}} + O\left(t^{-3/2}2^{-2\ell_0} 2^{-3/2j} \lVert \widehat{Q_{\sim j} f} \rVert_{L^\infty}\right)
\end{split}\end{equation*}
Collecting the terms~\cref{eqn:far-from-st-pt-est,eqn:lin-stat-ph-1,eqn:lin-stat-ph-2,eqn:lin-stat-ph-3} and recalling the definition of $\ell_0$, we find that
\begin{equation*}
P^+_j u(x,t) = \frac{\psi_j^{+}(\xi_0)\hat{f}(\xi_0)}{\sqrt{12 t \xi_0}}e^{-2it\xi_0^3 + i\frac{\pi}{4}} + O\left(t^{-1/3}(2^j t^{1/3})^{-9/14}\lVert f \rVert_{X_j}\right)\qedhere
\end{equation*}
\end{proof}
As a corollary of the above estimate, we obtain an improved bilinear decay estimate:
\begin{cor}\label{thm:simple-bilinear-decay}
If $f, g \in X$, then
\begin{equation*}
|e^{-t\partial_x^3}f e^{-t\partial_x^3} \partial_x g| \lesssim t^{-1} \lVert f \rVert_{X} \lVert g \rVert_{X}
\end{equation*}
\end{cor}
\begin{proof}
We will prove that
\begin{equation*}
|\chi_k e^{-t\partial_x^3}f e^{-t\partial_x^3} \partial_x g| \lesssim t^{-1} \lVert f \rVert_{X} \lVert g \rVert_{X}
\end{equation*}
which gives the desired result. Using~\Cref{thm:lin-decay-lemma}, we find that
\begin{equation*}
|\chi_{\sim k} Q_{j} e^{-t\partial_x^3}f| \lesssim \begin{cases}
t^{-5/6} 2^{-3/2j} \lVert f \rVert_{X_j} & k < j - 20\\
t^{-1/2} 2^{-k/2} \lVert f \rVert_{X_j} & |j - k| \leq 20\\
t^{-5/6} 2^{j/2 - 2k} \lVert f \rVert_{X_j} & k > j + 20
\end{cases}
\end{equation*}
Thus, summing in $j$, we find that
\begin{equation*}
|\chi_{\sim k} e^{-t\partial_x^3}f| \lesssim t^{-1/2} 2^{-k/2} \lVert f \rVert_{X}
\end{equation*}
Now, $\lVert \partial_x g \rVert_{X_j} \sim 2^j \lVert g \rVert_{X_j}$, so a similar calculation shows that
\begin{equation*}
|\chi_{\sim k} e^{-t\partial_x^3} \partial_x g| \lesssim t^{-1/2} 2^{k/2} \lVert g \rVert_{X}
\end{equation*}
which gives the result.
\end{proof}
\begin{rmk}
Although \Cref{thm:simple-bilinear-decay} does not apply directly to pseudoproducts, we will see in \Cref{sec:weighted-L2} that~\Cref{thm:lin-decay-lemma} can be used together with the pseudolocality of pseudoproducts given in~\Cref{thm:cm-paraprod-pseudolocality} to give bounds of the same type for pseudoproducts involving a $\partial_x u$ term.
\end{rmk}
From the proof of~\Cref{thm:simple-bilinear-decay}, we also get a decay bound on $\chi_{\geq k} u$:
\begin{cor}\label{thm:u-space-loc-lin-decay}
If $f \in X$ and $u = e^{-t\partial_x^3} f$,
\begin{equation*} \lVert \chi_{\geq k} u \rVert_{L^\infty} \lesssim t^{-1/2} 2^{-k/2} \lVert f \rVert_X
\end{equation*}
\end{cor}
It will also be important later to have decay estimates for $e^{-t\partial_x^3} g$ when $\hat{g}(0) = 0$. We record them here:
\begin{cor}\label{thm:w-lin-decay}
Suppose $\hat{g}(0) = 0$ and $xg \in L^2$. Then, if $w = e^{-t\partial_x^3} g$, we have the bounds
\begin{equation}\label{eqn:w-lin-decay-est}
|Q_j w| \lesssim \left(t^{-1/2} \chi_{\sim j} + t^{-5/6} 2^{j/2} \left(2^j + 2^{-j/3} |\xi_0|^{4/3} \right)^{-3/2} \right) \lVert xg \rVert_{L^2}c_j
\end{equation}
and
\begin{equation}\label{eqn:w-space-loc-decay-est}
\lVert \chi_{k} w \rVert_{L^\infty} \lesssim t^{-1/2} \lVert xg \rVert_{L^2} c_k
\end{equation}
\end{cor}
\begin{proof}
By the Morrey-Sobolev embedding $\dot{H}^1 \to C^{0,1/2}$, we have that for $2^j \gtrsim t^{-1/3}$,
\begin{equation*}\begin{split}
\lVert g \rVert_{X_j} =& \lVert \mathcal{F}\left({Q_{[j - 10, j + 10]} g}\right) \rVert_{L^\infty} + t^{-1/6} \lVert x Q_{[j- 10, j+10]} g \rVert_{L^2}\\
\lesssim& (1 + t^{-1/6}2^{-j/2}) \lVert \mathcal{F}\left({Q_{[j - 20, j + 20]} g}\right) \rVert_{L^\infty} + t^{-1/6}\lVert Q_{[j - 20, j + 20]} (xg) \rVert_{L^2}\\
\lesssim& 2^{j/2} \lVert Q_{[j - 30, j + 30]} (xg) \rVert_{L^2}
\end{split}\end{equation*}
so applying~\Cref{thm:lin-decay-lemma} gives~\eqref{eqn:w-lin-decay-est}. To prove the localized bound~\eqref{eqn:w-space-loc-decay-est}, we note that~\eqref{eqn:w-lin-decay-est} implies that
\begin{equation*}\begin{split}
\lVert \chi_{k} w \rVert_{L^\infty} \leq& \lVert \chi_{k} Q_{\sim k} w \rVert_{L^\infty} + \lVert \chi_{k} Q_{\ll k} w \rVert_{L^\infty} + \lVert \chi_{ k} Q_{\gg k} w \rVert_{L^\infty}\\
\lesssim& \sum_{|k-\ell| \leq 10} \lVert Q_{\ell} w \rVert_{L^\infty} + t^{-5/6} \sum_{\ell < k-10 } \lVert \chi_k Q_{\ell} w \rVert_{L^\infty} + \sum_{\ell > k+ 10} \lVert \chi_k Q_{\ell} w \rVert_{L^\infty}\\
\lesssim& \sum_{|k-\ell| \leq 10} t^{-1/2} \lVert xg \rVert_{L^2}c_{\ell} + t^{-5/6} \sum_{\ell < k-10 } 2^{\ell - 2k} \lVert xg \rVert_{L^2} + \sum_{\ell > k+ 10} 2^{-\ell} \lVert xg \rVert_{L^2}\\
\lesssim& t^{-1/2} \lVert xg \rVert_{L^2}c_k + t^{-5/6} 2^{-k} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which can be seen to satisfy the required $\ell^2_k$ summability condition.
\end{proof}
\subsection{Bounds for cubic terms\label{sec:cubic-bounds}}
Since the complex mKdV equation has a cubic nonlinearity, we will naturally find ourselves dealing with frequency-localized terms of the form $Q_{j}(|u|^2 \partial_x u)$ and the like. In this section, we collect some basic bounds for these terms for later reference. Let $u_i = e^{-t\partial_x^3} f_i$ for $i = 1,2,3$ and $f_i \in X$. We begin by considering $\chi_{\geq j - 10} Q_{\sim j}(u_1 u_2 u_3)$. Using~\eqref{eqn:pseudoloc-comm-est} to control the error in commuting the physical and Fourier localization operators, we find that
\begin{equation}\label{eqn:cubic-sim-larger}\begin{split}
\lVert \chi_{\geq j - 10} Q_{\sim j}(u_1 u_2 u_3)\rVert_{L^\infty} \lesssim& \lVert \chi_{\geq j - 10} Q_{\sim j} (\chi_{\geq j - 20} u_1 u_2 u_3)\rVert_{L^\infty} \\
& + \lVert \chi_{\geq j - 10} Q_{\sim j} (\chi_{< j - 20} u_1 u_2 u_3)\rVert_{L^\infty}\\
\lesssim& \lVert \chi_{\geq j - 30} u_1 \rVert_{L^\infty}\lVert \chi_{\geq j - 30} u_2 \rVert_{L^\infty}\lVert \chi_{\geq j - 30} u_3 \rVert_{L^\infty} \\
&+ \left(t 2^{3j}\right)^{-1}\lVert u_1 \rVert_{L^\infty}\lVert u_2 \rVert_{L^\infty}\lVert u_3 \rVert_{L^\infty}\\
\lesssim& t^{-3/2} 2^{-3/2j} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{split}\end{equation}
Arguing in the same manner, we find that
\begin{equation}\label{eqn:cubic-bounds-compendium}\begin{split}
\lVert \chi_{\geq j - 10} Q_{\lesssim j}(u_1 u_2 u_3) \rVert_{L^\infty} \lesssim& t^{-3/2} 2^{-3/2 j}\lVert f_1 \rVert_{X}\lVert f_2 \rVert_{X}\lVert f_3 \rVert_{X}
\end{split}\end{equation}
Moreover, by using the bilinear decay estimate given in~\Cref{thm:simple-bilinear-decay}, we can obtain analogous bounds for terms containing a derivative:
\begin{equation}\label{eqn:cubic-deriv-bounds-compendium}\begin{split}
\lVert \chi_{\geq j - 10} Q_{\sim j}(u_1 \partial_x u_2 u_3) \rVert_{L^\infty} \lesssim& t^{-3/2} 2^{-j/2 }\prod_{i=1}^3 \lVert f_i \rVert_X\\
\lVert \chi_{\geq j - 10} Q_{\lesssim j}(u_1 \partial_x u_2 u_3) \rVert_{L^\infty} \lesssim& t^{-3/2} 2^{-j/2 }\prod_{i=1}^3 \lVert f_i \rVert_X
\end{split}\end{equation}
In the bounds~\cref{eqn:cubic-sim-larger,eqn:cubic-bounds-compendium,eqn:cubic-deriv-bounds-compendium}, the frequency localization in some sense plays no role, in that the bounds would remain true if we removed the frequency projection operator. The situation changes when we consider the cubic $Q_{\sim j}(u_1 u_2 u_3)$ in the region $|x| \ll t 2^{2j}$, since the frequency localization eliminates the worst contribution in this region. A straightforward paraproduct decomposition yields
\begin{equation*}\begin{split}
Q_{\sim j}\left(u_1 u_2 u_3\right) =& Q_{\sim j} \bigg(u_{1, < j-20}u_{2, < j-20 }u_{3, < j - 20} + u_{1,[j-20, j+20]} u_{2, < j - 20} u_{3, < j - 20} \\
&\qquad+ \sum_{\ell \geq j - 20} u_{1, \ell} u_{2,[\ell -20, \ell + 20]} u_{3, < \ell - 20} \\
&\qquad+ \sum_{\ell \geq j - 20} u_{1, \ell} u_{2,[\ell -20, \ell + 20]} u_{3, [\ell -20, \ell + 20]}\bigg)\\ &+ \{\text{similar terms}\}
\end{split}\end{equation*}
Note that the term $Q_{\sim j}(u_{1, < j-20}u_{2, < j-20 }u_{3, < j - 20})$ vanishes, since the product of three factors at frequencies $< 2^{j-20}$ is supported at frequencies $\lesssim 2^{j-18}$. If $k < j - 30$, then we can use~\eqref{eqn:pseudoloc-comm-est} to commute the physical- and frequency-space localizations and apply~\Cref{thm:lin-decay-lemma} to find that
\begin{equation*}\begin{split}
\Big\lVert \chi_{k} Q_{\sim j} \big(u_{1, [j -20, j + 20]} u_{2, < j-20} u_{3, < j - 20}\big) \Big\rVert_{L^\infty} \lesssim& t^{-11/6} 2^{-3/2 j-k}\prod_{i=1}^3 \lVert f_i \rVert_X\\
\Big\lVert \chi_{k} Q_{\sim j} \big(u_{1, \ell} u_{2, [\ell -20, \ell + 20]} u_{3, < \ell - 20}\big)\Big\rVert_{L^\infty} \lesssim& \Big(t^{-13/6} 2^{-3\ell-k/2} + t^{-7/3} 2^{-j-2k-\ell}\Big) \prod_{i=1}^3 \lVert f_i \rVert_X\\
\Big\lVert \chi_{k} Q_{\sim j} \big(u_{1, \ell} u_{2,[\ell -20, \ell + 20]} u_{3, [\ell -20, \ell + 20]}\big)\Big\rVert_{L^\infty} \lesssim & \Big(t^{-5/2} 2^{-9/2\ell} + t^{-5/2} 2^{-j-2k-3/2 \ell} \Big)\prod_{i=1}^3 \lVert f_i \rVert_X
\end{split}\end{equation*}
Summing in $\ell$, we find that
\begin{equation}\label{eqn:cubic-sim-with-k}
\lVert \chi_{k} Q_{\sim j}\left(u_1 u_2 u_3\right) \rVert_{L^\infty} \lesssim t^{-11/6}2^{-3/2 j}2^{-k}\prod_{i=1}^3 \lVert f_i \rVert_X
\end{equation}
Summing over $k < j - 30$ and combining the result with~\eqref{eqn:cubic-sim-larger}, we obtain the bound
\begin{equation}\label{eqn:cubic-sim}
\lVert Q_{\sim j}\left(u_1 u_2 u_3\right) \rVert_{L^\infty} \lesssim t^{-3/2} 2^{-3/2j} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{equation}
By a similar argument, we find that for $k < j - 30$,
\begin{equation}\label{eqn:cubic-sim-w-deriv-small}
\lVert \chi_k Q_{\sim j} (u_1 \partial_x u_2 u_3)\rVert_{L^\infty} \lesssim t^{-11/6} 2^{-j/2} 2^{-k}\prod_{i=1}^3 \lVert f_i \rVert_X
\end{equation}
so
\begin{equation}
\label{eqn:cubic-sim-w-deriv}
\lVert Q_{\sim j} (u_1 \partial_x u_2 u_3)\rVert_{L^\infty} \lesssim t^{-3/2} 2^{-j/2} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{equation}
It will also be important to bound the mean of terms like $|u|^2 \partial_x u$, since these terms will arise naturally when we consider $\partial_t \hat{u}(0,t)$.
\begin{thm}\label{thm:cubic-mean-thm}
Suppose $u_i = e^{-t\partial_x^3} f_i$ for $i=1,2,3$. Then,
\begin{equation}\label{eqn:cubic-mean-bound}
\left|\int u_1 \overline{u_2} \partial_x u_3 \,dx\right| \lesssim t^{-1} \prod_{i=1}^3\lVert f_i \rVert_{X}
\end{equation}
If instead $\hat{f}_k(0) = 0$ for some $k \in \{1,2,3\}$, then
\begin{equation}\label{eqn:cubic-mean-bound-mean-0}
\left|\int u_1 \overline{u_2} \partial_x u_3 \,dx\right| \lesssim t^{-7/6} \lVert Lu_k \rVert_{L^2}\prod_{i\in \{1,2,3\}\setminus\{k\}}\lVert f_i \rVert_{X}
\end{equation}
\end{thm}
\begin{proof}
Let us write
\begin{equation*}
\int u_1 \overline{u_2} \partial_x u_3 \,dx = \sum_j \left(I_j + \tilde{I}_j\right)
\end{equation*}
where
\begin{equation*}\begin{split}
I_j =& \int u_{1,j} \overline{u_{2,\leq j}} \partial_x u_3 \,dx\\
\tilde{I}_j =& \int u_{1,< j} \overline{u_{2,j}} \partial_x u_3 \,dx
\end{split}
\end{equation*}
We will show how to bound the $I_j$: the bounds for $\tilde{I}_j$ are analogous.
For $2^j \lesssim t^{-1/3}$, we note that we can replace the $u_3$ factor in the definition of $I_j$ with $u_{3, \lesssim j}$, so
\begin{equation*}\begin{split}
I_j \lesssim& \lVert u_{1,j} \rVert_{L^\infty} \lVert u_{2,\leq j} \rVert_{L^2} \lVert \partial_x u_{3, \lesssim j} \rVert_{L^2}\\
\lesssim& t^{-1/3} 2^{2j} \prod_{i=1}^3 \lVert f_i \rVert_X\\
\lesssim& t^{-1} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{split}\end{equation*}
which is acceptable, so it only remains to consider the case $2^j \gg t^{-1/3}$. There, we can write
\begin{equation*}
I_j = -\int e^{it\phi(0,\eta,\sigma)} (\eta + \sigma) \psi_j(\eta) \psi_{\leq j}(-\sigma) \hat{f_1}(\eta) \overline{\hat{f_2}(-\sigma)} \hat{f_3}(-\eta -\sigma)\,d\eta d\sigma
\end{equation*}
where $\phi$ is the four-wave mixing function given in~\eqref{eqn:4-wave-phase}. Since $\nabla_{\eta,\sigma} \phi(0,\eta, \sigma)$ vanishes only at $\eta = \sigma = 0$, we can integrate by parts using the identity
\begin{equation*}
e^{it \phi(0,\eta,\sigma)} = \frac{\nabla_{\eta,\sigma} \phi(0,\eta, \sigma)}{it|\nabla_{\eta,\sigma} \phi(0,\eta, \sigma)|^2} \cdot \nabla_{\eta,\sigma} e^{it \phi(0,\eta,\sigma)}
\end{equation*}
to find that
\begin{subequations}\begin{align}
I_j =& -\frac{i}{t} \int \nabla_{\eta,\sigma} \cdot m_j(\eta,\sigma) \hat{f_1}(\eta) \overline{\hat{f_2}(-\sigma)}\hat{f_3}(-\eta -\sigma) \,d\eta d\sigma \label{eqn:I-j-1}\\
&- \frac{i}{t} \int m_j^{\eta} \partial_\eta\hat{f_1}(\eta) \overline{\hat{f_2}(-\sigma)}\hat{f_3}(-\eta -\sigma) \,d\eta d\sigma \label{eqn:I-j-2}\\
&- \frac{i}{t} \int m_j^{\sigma} \hat{f_1}(\eta) \partial_\sigma\overline{\hat{f_2}(-\sigma)}\hat{f_3}(-\eta -\sigma) \,d\eta d\sigma \label{eqn:I-j-3}\\
&+ \frac{i}{t} \int (m_j^\eta + m_j^\sigma) \hat{f_1}(\eta) \overline{\hat{f_2}(-\sigma)}\partial_\eta\hat{f_3}(-\eta -\sigma) \,d\eta d\sigma \label{eqn:I-j-4}
\end{align}\end{subequations}
where $m_j$ is the (vector-valued) symbol
\begin{equation*}
m_j(\eta,\sigma) = \frac{(\eta + \sigma)\psi_j(\eta) \psi_{\leq j}(-\sigma) \nabla_{\eta,\sigma} \phi(0,\eta,\sigma)}{|\nabla_{\eta,\sigma} \phi(0,\eta,\sigma)|^2}
\end{equation*}
with components $m_j^\eta$ and $m_j^\sigma$. Let us first consider~\eqref{eqn:I-j-1}. We can rewrite this term as the Fourier transform of a pseudoproduct
\begin{equation*}
\eqref{eqn:I-j-1} = 2\pi i t^{-1} 2^{-2j} \hat{T}_{m^1_j}(u_{1,\sim j}, \overline{u_{2, \lesssim j}}, u_{3, \lesssim j})(0)
\end{equation*}
with symbol
\begin{equation*}
m^1_j = 2^{2j} \psi_j(\xi) \nabla_{\eta,\sigma} \cdot m_j
\end{equation*}
Now, $m^1_j$ obeys the symbol bounds
\begin{equation*}
|\partial^\alpha_{\xi,\eta,\sigma} m_j^1| \lesssim 2^{-|\alpha|j}, \qquad\qquad |\supp m_j^1| \lesssim 2^{3j}
\end{equation*}
so by~\Cref{rmk:freq-loc-symbol-bounds} the pseudoproduct $T_{m_j^1}(\cdot,\cdot,\cdot)$ satisfies H\"older-type bounds uniformly in $j$. Thus, the Hausdorff-Young inequality gives us the bound
\begin{equation*}\begin{split}
|\eqref{eqn:I-j-1}| \lesssim& t^{-1} 2^{-2j} \lVert T_{m_j^1}(u_{1,\sim j}, \overline{u_{2, \lesssim j}}, u_{3, \lesssim j}) \rVert_{L^1}\\
\lesssim& t^{-1} 2^{-2j} \lVert u_{1,\sim j} \rVert_{L^\infty} \lVert u_{2,\lesssim j} \rVert_{L^2} \lVert u_{3,\lesssim j} \rVert_{L^2}\\
\lesssim& t^{-3/2} 2^{-3/2 j} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{split}
\end{equation*}
Turning to~\eqref{eqn:I-j-2}, we write
\begin{equation*}\begin{split}
\eqref{eqn:I-j-2} =& 2\pi i t^{-1} 2^{-j} \hat{T}_{m^2_j}(Lu_1, \overline{u_{2, \sim j}}, u_{3, \lesssim j})(0) + 2\pi i t^{-1} 2^{-j} \hat{T}_{m^3_j}(Lu_1, \overline{u_{2, \ll j}}, u_{3, \sim j})(0)
\end{split}\end{equation*}
with
\begin{equation*}\begin{split}
m_j^2 =& 2^j \psi_j(\xi) \psi_{\sim j}(-\sigma) m_j^{\eta} \\
m_j^3 =& 2^j \psi_j(\xi) \psi_{\ll j}(-\sigma) m_j^{\eta}
\end{split}\end{equation*}
Using the fact that $m_j^2$ and $m_j^3$ satisfy the symbol bounds from~\Cref{rmk:freq-loc-symbol-bounds} uniformly in $j$, we find that
\begin{equation*}\begin{split}
|\eqref{eqn:I-j-2}| \lesssim& t^{-1} 2^{-j} \lVert T_{m_j^2}(Lu_1, \overline{u_{2, \sim j}}, u_{3, \lesssim j})\rVert_{L^1} + t^{-1} 2^{-j} \lVert T_{m_j^3}(Lu_1, \overline{u_{2, \ll j}}, u_{3, \sim j})\rVert_{L^1}\\
\lesssim& t^{-1} 2^{-j} \lVert Lu_1 \rVert_{L^2} \left(\lVert u_{2,\sim j} \rVert_{L^\infty} \lVert u_{3,\lesssim j} \rVert_{L^2} + \lVert u_{2, \ll j} \rVert_{L^2} \lVert u_{3, \sim j} \rVert_{L^\infty} \right)\\
\lesssim& t^{-4/3} 2^{-j} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{split}\end{equation*}
The estimates for~\eqref{eqn:I-j-3} and~\eqref{eqn:I-j-4} are analogous. Thus,
\begin{equation*}
\sum_{2^j \gg t^{-1/3}} I_j \lesssim t^{-1} \prod_{i=1}^3 \lVert f_i \rVert_X
\end{equation*}
which completes the proof of~\eqref{eqn:cubic-mean-bound}.
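For the reader's convenience, the geometric sums used in this last step are dominated by the lowest frequency $2^j \sim t^{-1/3}$:
\begin{equation*}
\sum_{2^j \gg t^{-1/3}} t^{-3/2} 2^{-3/2 j} \lesssim t^{-3/2} \cdot t^{1/2} = t^{-1}, \qquad\qquad \sum_{2^j \gg t^{-1/3}} t^{-4/3} 2^{-j} \lesssim t^{-4/3} \cdot t^{1/3} = t^{-1}
\end{equation*}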
To prove~\eqref{eqn:cubic-mean-bound-mean-0}, we observe that the only place we used the assumption $f_k \in X$ was to obtain bounds of the form
\begin{equation*}\begin{split}
\lVert Lu_k \rVert_{L^2} \lesssim& t^{1/6} \lVert f_k \rVert_X\\
\lVert u_{k, \lesssim j} \rVert_{L^2} \lesssim& 2^{j/2} \lVert f_k \rVert_X\\
\lVert u_{k, \sim j} \rVert_{L^\infty} \lesssim& t^{-1/2} 2^{-j/2} \lVert f_k \rVert_X
\end{split}\end{equation*}
If we instead estimate these quantities in terms of $\lVert Lu_k \rVert_{L^2}$, we gain a factor of $t^{1/6}$ when estimating terms containing $Lu_k$. For the other terms, we see from~\Cref{thm:w-lin-decay} that
\begin{equation*}\begin{split}
\lVert u_{k,\lesssim j} \rVert_{L^2} \lesssim& 2^j \lVert Lu_k \rVert_{L^2}\\
\lVert u_{k,\sim j} \rVert_{L^\infty} \lesssim& t^{-1/2} \lVert Lu_k \rVert_{L^2}\\
\end{split}\end{equation*}
so we gain a factor of $2^{j/2}$ in this case. Modifying the estimates in light of this, we obtain~\eqref{eqn:cubic-mean-bound-mean-0}.
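For example, taking $k = 2$, the estimate for~\eqref{eqn:I-j-1} becomes (a sketch of the modified bound)
\begin{equation*}
|\eqref{eqn:I-j-1}| \lesssim t^{-1} 2^{-2j} \lVert u_{1,\sim j} \rVert_{L^\infty} \lVert u_{2,\lesssim j} \rVert_{L^2} \lVert u_{3,\lesssim j} \rVert_{L^2} \lesssim t^{-3/2} 2^{-j} \lVert f_1 \rVert_X \lVert Lu_2 \rVert_{L^2} \lVert f_3 \rVert_X
\end{equation*}
which, after summing over $2^j \gg t^{-1/3}$, gives the $t^{-7/6}$ decay claimed in~\eqref{eqn:cubic-mean-bound-mean-0}.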
\end{proof}
\section{Bounds for the self-similar term}\label{sec:self-sim}
As discussed in the introduction, we are interested in the self-similar solutions $S$ given by
\begin{equation*}
S(x,t;p) = t^{-1/3}\sigma(t^{-1/3}x; p)
\end{equation*}
where $\sigma$ solves the third order ODE
\begin{equation}\label{eqn:cmkdv-self-sim-prof-expanded}
\partial_y^3 \sigma(y;p) - \frac{1}{3}y\partial_y \sigma(y;p) - \frac{1}{3}\sigma(y;p) = \pm|\sigma(y;p)|^2 \partial_y \sigma(y;p)
\end{equation}
subject to the (nonlocal) boundary condition $\hat{\sigma}(0;p) = p$. We will see that this condition uniquely specifies $\sigma$ once we restrict to the class of solutions which are compatible with the asymptotics of $u$.
Since the solutions $u$ to~\eqref{eqn:cmkdv} that we consider are bounded, we would like our self-similar solutions $\sigma$ to be bounded as well. This boundedness assumption allows us to treat the nonlinearity in~\eqref{eqn:cmkdv-self-sim-prof-expanded} perturbatively as $y \to \infty$, giving us the asymptotics
\begin{equation*}
\sigma(y;p) \sim c_1(p) \Ai(3^{-1/2}y) + c_2(p) \operatorname{Bi}(3^{-1/2}y) + c_3(p) \operatorname{Gi}(3^{-1/2}y)\qquad\qquad \text{as } y \to \infty
\end{equation*}
where $\operatorname{Ai}$ and $\operatorname{Bi}$ are the Airy functions of the first and second kind, $\operatorname{Gi}$ is Scorer's function~\cite{scorerNumericalEvaluationIntegrals1950}, and the $c_j(p)$'s are constants. Since $\operatorname{Bi}(y) \to \infty$ as $y \to \infty$, we immediately conclude that $c_2(p) = 0$. Our functional framework also requires that $Lu$ remain in $L^2$, so we must also impose the condition $LS \in L^2$. This condition translates to the requirement that $(\partial_y^2 - \frac{1}{3}y)\sigma(y;p) \in L^2_y$, and since $(\partial_y^2 - \frac{1}{3}y) \operatorname{Gi}(3^{-1/2}y) = -\frac{1}{3\pi} \neq 0$, we see that $c_3(p) = 0$. Thus, we are seeking self-similar solutions to complex mKdV which are asymptotic to some multiple of $\Ai(3^{-1/2}y)$ at infinity. This problem has received a good deal of study, see~\cite{deiftAsymptoticsPainleveII1995,hastingsBoundaryValueProblem1980,correiaAsymptoticsFourierSpace2020}. In particular, from~\cite{correiaAsymptoticsFourierSpace2020} we know that $\sigma$ is the solution to the Painlev\'e II equation:
\begin{equation}\label{eqn:phase-rot-Painleve-II}
\left\{\begin{array}{c}
\sigma''(y;p) = \frac{1}{3}y \sigma(y;p) \pm |\sigma(y;p)|^2 \sigma(y;p)\\
\hat{\sigma}(0;p) = p\\
\hat{\sigma}(\cdot;p) \text{ is continuous at } 0
\end{array}\right.
\end{equation}
We note that~\eqref{eqn:phase-rot-Painleve-II} is phase rotation invariant, so $\sigma(y;re^{i\theta}) = e^{i\theta} \sigma(y;r)$ for $r > 0$ and $\theta$ real.
We derived~\eqref{eqn:phase-rot-Painleve-II} under the relatively weak assumptions that $S$ was bounded and $LS \in L^2$. However, we can say much more about the pointwise behavior of $S$ and $LS$ using~\eqref{eqn:phase-rot-Painleve-II}. Let $\Phi = e^{\partial_x^3} \sigma$. Then, by~\cite[Theorem 1]{correiaAsymptoticsFourierSpace2020}, we have that
\begin{equation*}
\lVert \hat\Phi \rVert_{L^\infty} + \lVert x\Phi \rVert_{L^2} \lesssim |p|
\end{equation*}
It follows that $h(x, t;p) := e^{t\partial_x^3} S(x,t;p) = t^{-1/3}\Phi(t^{-1/3} x;p)$ satisfies $\lVert h \rVert_{X} \lesssim |p|$, so the pointwise behavior of $S$ is given by~\Cref{thm:lin-decay-lemma} (cf.~\cite{deiftAsymptoticsPainleveII1995, hastingsBoundaryValueProblem1980}). Turning to $LS$, we note that~\eqref{eqn:phase-rot-Painleve-II} implies that
\begin{equation}\label{eqn:self-similar-L-identity}
LS = \mp 3t|S|^2 S
\end{equation}
In particular, this gives us the weighted $L^2$ bound $\lVert LS \rVert_{L^2} \lesssim t \lVert S \rVert_{L^6}^3 \lesssim |p|^3 t^{1/6}$. Similarly, we can use~\eqref{eqn:cmkdv-self-sim-prof-expanded} to find that
\begin{equation}\label{eqn:self-similar-dx-L-identity}
\partial_x LS = \mp 3t |S|^2 \partial_x S
\end{equation}
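For the reader's convenience, here is a sketch of the scaling computation behind~\eqref{eqn:self-similar-L-identity}, assuming the convention $L = x - 3t\partial_x^2$ (which matches the profile operator $3\partial_y^2 - y$ appearing below): with $y = t^{-1/3}x$,
\begin{equation*}
LS = \left(x - 3t\partial_x^2\right) t^{-1/3}\sigma(t^{-1/3}x;p) = \left(y - 3\partial_y^2\right)\sigma(y;p) = \mp 3|\sigma(y;p)|^2 \sigma(y;p) = \mp 3t|S|^2 S
\end{equation*}
where the third equality uses the Painlev\'e II equation~\eqref{eqn:phase-rot-Painleve-II}.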
Since our argument involves modulating around $S$, we need $\sigma$ to have some smoothness in the $p$ parameter. The nonlinearity of~\eqref{eqn:phase-rot-Painleve-II} is not analytic, so we cannot expect $\sigma$ to be complex differentiable in $p$. However, writing $p = a + ib$, the function $\sigma(y; a + ib)$ is differentiable as a function of $(a,b) \in \mathbb{R}^2$, which is sufficient for our purposes. In particular, for $p: \mathbb{R} \to \mathbb{C}$ differentiable, we have that
\begin{equation}\label{eqn:d-p-sigma-def}
\partial_s \sigma(x;p(s)) = \left[\partial_{\Re p} \sigma\right] \Re p'(s) + \left[\partial_{\Im p} \sigma\right] \Im p'(s) =: D_p \sigma p'(s)
\end{equation}
where we have abused notation slightly by interpreting $D_p \sigma$ as the derivative of a function from $\mathbb{R}^2 \to \mathbb{C}$ and $p$ as a function from $\mathbb{R} \to \mathbb{R}^2$. Changing to polar coordinates $p = re^{i\theta}$ and exploiting the phase rotation invariance of the equation, we find that
\begin{equation}\label{eqn:phi-polar-derivs}\begin{split}
\partial_r \sigma(y;re^{i\theta}) =& e^{i\theta} \partial_r \sigma(y;r)\\
\partial_\theta \sigma(y;re^{i\theta}) =& ie^{i\theta} \sigma(y;r) = i\sigma(y; re^{i\theta})
\end{split}\end{equation}
Recall that by~\cite[Theorem 1]{correiaAsymptoticsFourierSpace2020} the profile $\Phi$ satisfies
\begin{equation*}
\hat{\Phi}(\xi;r) = \chi(\xi) e^{ia\ln|\xi|} \left(A + B e^{2ia\ln|\xi|} \frac{e^{-i \frac{8}{9}\xi^3}}{\xi^3} \right) + z(\xi; r)
\end{equation*}
where $\chi$ is a cut-off function supported on $|\xi| \geq 1$, $A, B,$ and $a$ are real-valued and have a Lipschitz dependence on $r$ (at least for $r$ sufficiently small), and $z$ is some function which has a Lipschitz dependence on $r$ with respect to the norm
\begin{equation*}
\lVert z \rVert_Z := \lVert z(\xi) \jBra{\xi}^k \rVert_{L^\infty} + \lVert z'(\xi) \jBra{\xi}^{k+1} \rVert_{L^\infty}
\end{equation*}
for $k \in \left( \frac{1}{2}, \frac{4}{7} \right)$. From this, we can see that the worst term in $\partial_r \hat{\Phi}$ occurs when the derivative hits the logarithmically oscillating phase, so
\begin{equation*}
|\partial_r \hat{\Phi}(\xi;r)| \lesssim \ln(2 + |\xi|)
\end{equation*}
Similarly, differentiating in $\xi$, we find that
\begin{equation*}
|\partial_\xi \partial_r \hat{\Phi}(\xi)| \lesssim \frac{\ln(2+|\xi|)}{\jBra{\xi}} + O\left(\jBra{\xi}^{-3/2}\right)
\end{equation*}
so $\partial_r h(x,t; p) = t^{-1/3} \partial_r \Phi(t^{-1/3}x;p)$ satisfies the bound
\begin{equation*}\begin{split}
\lVert \partial_r h \rVert_{X_j} =& \lVert \widehat{Q_{\sim j} \partial_r h} \rVert_{L^\infty} + t^{-1/6}\lVert x(Q_{\sim j} \partial_r h) \rVert_{L^2}\\
=& \lVert \widehat{Q_{\sim j} \partial_r\Phi} \rVert_{L^\infty} + \lVert \partial_{\xi}\widehat{Q_{\sim j} \partial_r \Phi} \rVert_{L^2}\\
\lesssim& \ln(2 + t^{1/3} 2^j)
\end{split}\end{equation*}
In particular, applying~\Cref{thm:lin-decay-lemma}, we find that
\begin{equation*}
|Q_{\sim j} \partial_r S | \lesssim t^{-1/2} 2^{-j/2} \ln(2 + t^{1/3} 2^j) \chi_{\sim j} + t^{-5/6}\ln(2 + t^{1/3} 2^j)\left(2^j + 2^{-j/3} |\xi_0(x)|^{4/3}\right)^{-3/2}
\end{equation*}
where $\xi_0 = \xi_0(x)$ is as in~\Cref{thm:lin-decay-lemma}. Since $\partial_\theta S = iS$ satisfies better estimates, we find that
\begin{equation}\label{eqn:D-p-S-freq-loc-L-infty}\begin{split}
\lVert Q_{\sim j} D_p S \rVert_{L^\infty} \lesssim& \frac{1}{|p|} \lVert Q_{\sim j} \partial_\theta S \rVert_{L^\infty} + \lVert Q_{\sim j} \partial_r S \rVert_{L^\infty} \\
\lesssim& t^{-1/2} 2^{-j/2} \ln(2 + t^{1/3} 2^j)
\end{split}\end{equation}
and, for $4 < q < \infty$
\begin{equation}\label{eqn:D-p-S-L-p-freq-loc}
\lVert Q_{\sim j}D_p S \rVert_{L^q} \lesssim t^{-\frac{1}{2} + \frac{1}{q}} 2^{\left(-\frac{1}{2} + \frac{2}{q}\right)j} \ln(2 + t^{1/3} 2^j)
\end{equation}
which can be summed to give
\begin{equation}\label{eqn:D-p-S-L-p}
\lVert D_p S \rVert_{L^q} \lesssim t^{-\frac{1}{3} + \frac{1}{3q}}
\end{equation}
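The summation behind~\eqref{eqn:D-p-S-L-p} is a geometric series: since $-\frac{1}{2} + \frac{2}{q} < 0$ for $q > 4$, it is dominated by the lowest block $2^j \sim t^{-1/3}$, where the logarithmic factor is $O(1)$:
\begin{equation*}
\sum_{2^j \gtrsim t^{-1/3}} t^{-\frac{1}{2} + \frac{1}{q}} 2^{\left(-\frac{1}{2} + \frac{2}{q}\right)j} \ln(2 + t^{1/3} 2^j) \lesssim t^{-\frac{1}{2} + \frac{1}{q}} \left(t^{-1/3}\right)^{-\frac{1}{2} + \frac{2}{q}} = t^{-\frac{1}{3} + \frac{1}{3q}}
\end{equation*}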
To perform the weighted $L^2$ estimates, we also need a bound on $L D_p S$ in $L^2$. As above, we have
\begin{equation*}
\lVert L D_p S \rVert_{L^2} = t^{1/6} \lVert L D_p \sigma \rVert_{L^2} \lesssim t^{1/6} \frac{1}{|p|} \lVert (3\partial_y^2 - y) \partial_\theta \sigma \rVert_{L^2} + t^{1/6} \lVert (3\partial_y^2 - y) \partial_r \sigma \rVert_{L^2}
\end{equation*}
Since $\partial_\theta \sigma = i \sigma$, we have from~\eqref{eqn:phase-rot-Painleve-II} that
\begin{equation*}
\frac{1}{|p|} \lVert L \partial_\theta \sigma \rVert_{L^2} = \frac{1}{|p|} \lVert (3\partial_y^2 - y) \sigma \rVert_{L^2} \lesssim \frac{1}{|p|} \lVert \sigma \rVert_{L^6}^3 \lesssim |p|^2
\end{equation*}
and the problem reduces to finding bounds on $(3\partial_y^2 - y)\partial_r \sigma(y;r)$ for $r$ real. Differentiating~\eqref{eqn:phase-rot-Painleve-II} shows that $\partial_r \sigma(y;r)$ satisfies
$(3\partial_y^2 - y)\partial_r \sigma = \pm 3\sigma^2 \partial_r\sigma$.
Using the $L^6$ estimates for $\sigma$ and $\partial_r \sigma$ (which follow immediately from the estimates for $S$ and $\partial_r S$ given above), we obtain the bound
\begin{equation*}
\lVert (3\partial_y^2 - y) \partial_r \sigma \rVert_{L^2} \lesssim \lVert \sigma \rVert_{L^6}^2 \lVert \partial_r \sigma \rVert_{L^6} \lesssim |p|^2
\end{equation*}
and so
\begin{equation}\label{eqn:param-deriv-LS}
\lVert L D_p S\rVert_{L^2} \lesssim t^{1/6}|p|^2
\end{equation}
\section{Reduction of the main theorem}\label{sec:reduction}
\subsection{Reduction of the main theorem to profile estimates}
Let
\begin{equation*}
w(x,t) = u(x,t) - S(x,t; \hat{u}(0,t))
\end{equation*}
and define $g = e^{t\partial_x^3} w$. The remainder of the paper will be devoted to proving the nonlinear bounds on $f$ and $g$ given in the following theorem:
\begin{thm}\label{thm:nonlinear-bounds-thm}
There exists an $\epsilon_0 > 0$ such that if $u_* \in H^2$ and $\lVert \hat{u}_* \rVert_{L^\infty} + \lVert x u_* \rVert_{L^2} \leq \epsilon \leq \epsilon_0$, then the solution $u$ to~\eqref{eqn:cmkdv-t-1} is global, and the following bounds hold for all $t \in [1,\infty)$
\begin{equation}\label{eqn:desired-g-bound}
\lVert xg(t) \rVert_{L^2} \lesssim \epsilon t^{1/6-\beta}
\end{equation}
\begin{equation}\label{eqn:desired-f-hat-bound}
\lVert \hat{f}(t) \rVert_{L^\infty} \lesssim \epsilon
\end{equation}
where $\beta = \beta(\epsilon) = \frac{1}{6} - C \epsilon^2$ for some constant $C$. Moreover,
\begin{equation}\label{eqn:desired-zero-mode-conv}
|\partial_t\hat{u}(0,t)| \lesssim \epsilon^3 t^{-1-\beta}
\end{equation}
and there exists a bounded function $f_\infty(\xi)$ such that \begin{equation}\label{eqn:phase-rot-dynamics-f-hat}\hat{f}(\xi,t) = \exp\left(\pm \frac{i}{6} \int_1^t \frac{|\hat{f}(\xi,s)|^2}{s}\,ds\right)f_\infty(\xi) + O(\epsilon^3 (t^{-1/3} |\xi|)^{-1/14})\end{equation}
\end{thm}
Assuming~\Cref{thm:nonlinear-bounds-thm}, we can prove~\Cref{thm:main-theorem}.
\begin{proof}[Proof of~\Cref{thm:main-theorem}]
We first show that $u$ satisfies the linear estimates from~\Cref{thm:lin-decay-lemma}. By~\eqref{eqn:desired-f-hat-bound}, $|\hat{u}(0,t)| = |\hat{f}(0,t)| \lesssim \epsilon$, so $\lVert LS \rVert_{L^2} \lesssim \epsilon^3t^{1/6}$ by~\eqref{eqn:self-similar-L-identity}. Combining this with the bound for $xg$, we see that
\begin{equation*}
\lVert xf \rVert_{L^2} \leq \lVert xg \rVert_{L^2} + \lVert LS \rVert_{L^2} \lesssim \epsilon t^{1/6}
\end{equation*}
Recalling the $\mathcal{F}L^\infty$ bound given in~\eqref{eqn:desired-f-hat-bound}, we see that $\lVert f \rVert_{X} \lesssim \epsilon$, which is enough to give the asymptotics~\eqref{eqn:positive-x-asymp} for $x > t^{1/3}$ using~\Cref{thm:lin-decay-lemma}. By using the more precise expression for $\hat{f}$ given in~\eqref{eqn:phase-rot-dynamics-f-hat}, we see that $u$ has the modified scattering asymptotics given by~\eqref{eqn:negative-x-asymp} in the region $x < -t^{1/3}$.
It only remains to verify that the asymptotics for $|x| \lesssim t^{1/3 + 4\beta}$ are given by~\eqref{eqn:small-x-asymptotics}. By~\Cref{thm:w-lin-decay} and the hypothesis~\eqref{eqn:desired-g-bound}, $\lVert w \rVert_{L^\infty} \lesssim \epsilon t^{-1/3 -\beta}$, so in the region $|x| \lesssim t^{1/3 +4\beta}$
\begin{equation}\label{eqn:u-S-error}
u(x,t) = S(x,t; \hat{u}(0,t)) + O(\epsilon t^{-1/3 -\beta})
\end{equation}
Moreover, since $t^{-1-\beta}$ is integrable over $[1,\infty)$,~\eqref{eqn:desired-zero-mode-conv} implies that $\alpha = \lim_{t\to\infty} \hat{u}(0,t)$ has size $O(\epsilon)$ and satisfies $|\hat{u}(0,t) - \alpha| = O(\epsilon^3 t^{-\beta})$.
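Concretely, integrating~\eqref{eqn:desired-zero-mode-conv} from $t$ to $\infty$ gives
\begin{equation*}
|\hat{u}(0,t) - \alpha| \leq \int_t^\infty |\partial_s \hat{u}(0,s)| \,ds \lesssim \epsilon^3 \int_t^\infty s^{-1-\beta}\,ds = \epsilon^3 \beta^{-1} t^{-\beta} \lesssim \epsilon^3 t^{-\beta}
\end{equation*}
since $\beta^{-1} \lesssim 1$ for $\epsilon \leq \epsilon_0$ small.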
Using~\eqref{eqn:D-p-S-freq-loc-L-infty} to bound the terms $\lVert Q_j D_p S \rVert_{L^\infty}$, we find that
\begin{equation*}\begin{split}
\lVert S(x,t; \hat{u}(0,t)) - S(x,t;\alpha)\rVert_{L^\infty} \lesssim& \sum_{2^j \geq t^{-1/3}} \lVert Q_{j} D_p S \rVert_{L^\infty} |\hat{u}(0,t) - \alpha|\\
\lesssim& \epsilon^3 t^{-1/3-\beta}
\end{split}\end{equation*}
which, combined with~\eqref{eqn:u-S-error}, gives the self-similar asymptotics~\eqref{eqn:small-x-asymptotics}.
\end{proof}
\subsection{Plan of the proof of~\texorpdfstring{\Cref{thm:nonlinear-bounds-thm}}{Theorem 9}}
We will use a bootstrap argument to prove~\Cref{thm:nonlinear-bounds-thm}. We begin with some qualitative observations about the local wellposedness of~\eqref{eqn:cmkdv-t-1}. By~\cite{katoCauchyProblemGeneralized1983},~\eqref{eqn:cmkdv-t-1} has a local solution on $[1,1+\delta]$ for some $\delta > 0$ such that $u(t)$ is continuous in $H^2_x$ and $xu(t)$ is continuous in $L^2_x$. It follows that $f(t)$ is continuous in $X$ over $[1, 1+\delta]$. \Cref{thm:cubic-mean-thm} then implies that
\begin{equation*}
\partial_t \hat{u}(0,t) = \pm\frac{1}{\sqrt{2\pi}} \int |u|^2 \partial_x u \,dx \in C([1, 1+\delta])
\end{equation*}
with
\begin{equation*}
|\partial_t \hat{u}(0,1)| = \frac{1}{\sqrt{2\pi}} \left|\int|u_*|^2\partial_x u_*\,dx\right| \lesssim \epsilon^3
\end{equation*}
Thus, for $T \leq 1+ \delta$ sufficiently close to $1$, the following bootstrap hypotheses hold:
\begin{equation}\label{eqn:bootstrap-hypotheses}\tag{BH}
\sup_{1 \leq t \leq T}\left(\lVert \hat f(t) \rVert_{L^\infty} + t^{-1/6}\lVert x f(t) \rVert_{L^2}\right) \leq M \epsilon, \qquad \sup_{1 \leq t \leq T}|\partial_t \hat{u}(0,t)| \leq M^3\epsilon^3 t^{-1-\beta}
\end{equation}
where $M \gg 1$ is a large constant independent of $u_*$, the value of which we will specify later. Let us fix an $\epsilon_0 \ll M^{-3/2}$ independent of $u_*$, and suppose that $\epsilon \leq \epsilon_0$. Using~\eqref{eqn:bootstrap-hypotheses}, we prove in~\Cref{sec:weighted-L2} that $\lVert xg \rVert_{L^2} \leq C\epsilon t^{1/6-\beta}$ for some $C$ independent of $M$. Then, by using this bound on $xg$ in addition to the bootstrap hypotheses, we verify that $| \partial_t \hat{u}(0,t)| \leq CM^2\epsilon^3 t^{-1-\beta}$ and that $\lVert \hat{f}(t) \rVert_{L^\infty} \leq C \epsilon$ in~\Cref{sec:L-infty-est}, where again the constants $C$ do not depend on $M$. These results imply that in fact we have the improved bounds
\begin{equation}\label{eqn:bootstrap-hypos-improved}
\sup_{1 \leq t \leq T}\left(\lVert \hat f(t) \rVert_{L^\infty} + t^{-1/6}\lVert x f(t) \rVert_{L^2}\right) \leq C \epsilon, \qquad \sup_{1 \leq t \leq T}|\partial_t \hat{u}(0,t)| \leq CM^2\epsilon^3 t^{-1-\beta} \tag{BH+}
\end{equation}
In particular, since we are free to choose $M$, we may choose $M > C$, so that~\eqref{eqn:bootstrap-hypos-improved} are stronger than the original bootstrap hypotheses in~\eqref{eqn:bootstrap-hypotheses}. Moreover, a simple energy estimate shows that
\begin{equation*}\begin{split}
\frac{d}{dt} \lVert \partial_x^2 u \rVert_{L^2}^2 \lesssim& \lVert u \partial_x u \rVert_{L^\infty} \lVert \partial_x^2 u \rVert_{L^2}^2 \lesssim M^2\epsilon^2 t^{-1} \lVert \partial_x^2 u \rVert_{L^2}^2
\end{split}\end{equation*}
so, by Gr\"onwall's inequality, $\lVert \partial_x^2 u \rVert_{L^2}$ grows at most at a polynomial rate in time. By using the $L^\infty$ bound on $\hat{f}$ to control the low frequencies, we see that $\lVert u \rVert_{H^2}$ does not blow up at time $T$. Since the results of~\cite{katoCauchyProblemGeneralized1983} imply that the solution $u$ can be continued until $\lVert xu \rVert_{L^2} + \lVert u \rVert_{H^2}$ blows up, we can extend to a solution to a longer time interval $[1, T']$ such that the bootstrap bounds~\eqref{eqn:bootstrap-hypotheses} hold up to time $T'$. By a standard continuity argument, this shows that the estimates~\eqref{eqn:bootstrap-hypos-improved} hold for all time. Moreover, in the course of proving the $\mathcal{F}L^\infty$ bound in~\Cref{sec:L-infty-est}, we obtain~\eqref{eqn:phase-rot-dynamics-f-hat}, which proves~\Cref{thm:nonlinear-bounds-thm} (and hence~\Cref{thm:main-theorem}, as well).
\subsection{Basic consequences of the bootstrap estimates}
We close this section by listing some basic estimates that follow from the bootstrap assumptions~\eqref{eqn:bootstrap-hypotheses} and the material in~\Cref{sec:linear-ests}.
We begin by discussing estimates for $u$. By the first bootstrap assumption, for $t \in [1, T]$,
\begin{equation*}
\lVert f(t) \rVert_X \lesssim M\epsilon
\end{equation*}
Thus,~\Cref{thm:lin-decay-lemma} gives us the following bounds:
\begin{equation}\label{eqn:u-lin-ests}\begin{split}
\lVert u_j \rVert_{L^\infty} \lesssim& M\epsilon t^{-1/2}2^{-j/2}\\
\lVert \chi_{\geq k} u \rVert_{L^\infty} \lesssim& M\epsilon t^{-1/2} 2^{-k/2}\\
\lVert \chi_{\geq j} u_{\ll j} \rVert_{L^\infty} \lesssim& M\epsilon t^{-5/6} 2^{-3/2 j}\\
\lVert (1 - \chi_{\sim j}) u_j \rVert_{L^\infty} \lesssim& M\epsilon t^{-5/6} 2^{-3/2 j}\\
\lVert u_{j} \rVert_{L^4} + \lVert \chi_{j} u\rVert_{L^4} \lesssim& M \epsilon t^{-1/4}\\
\lVert (1 - \chi_{\sim j}) u_j \rVert_{L^4} \lesssim& M \epsilon t^{-7/12} 2^{-j}\\
\lVert \chi_{> j} u_{\ll j} \rVert_{L^4} \lesssim& M\epsilon t^{-7/12} 2^{-j}
\end{split}\end{equation}
We now consider the bounds for the self-similar solution $S = S(x,t;\hat{u}(0,t))$. From the bootstrap assumptions and the fact that $\epsilon \ll M^{-3/2}$, we find that
\begin{equation*}
\sup_{1 \leq t \leq T} |\hat{u}(0,t)| \leq \epsilon + \int_1^\infty M^3\epsilon^3 s^{-1-\beta} \,ds \lesssim \epsilon
\end{equation*}
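Here the integral evaluates explicitly, and is $O(\epsilon)$ precisely because $\epsilon \ll M^{-3/2}$ (and $\beta^{-1} \lesssim 1$):
\begin{equation*}
\int_1^\infty M^3\epsilon^3 s^{-1-\beta}\,ds = \frac{M^3\epsilon^3}{\beta} \lesssim \left(M^{3/2}\epsilon\right)^2 \epsilon \ll \epsilon
\end{equation*}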
It follows from the work in~\Cref{sec:self-sim} that $S$ obeys all of the estimates in~\eqref{eqn:u-lin-ests} but without the bootstrap factor $M$. In addition, by combining the identity~\eqref{eqn:self-similar-L-identity} with the cubic estimates from~\Cref{sec:cubic-bounds}, we find that
\begin{equation}\label{eqn:LS-cubic-bounds}\begin{split}
\lVert (LS)_{\sim j} \rVert_{L^\infty} \lesssim& \epsilon^3 t^{-1/2} 2^{-j/2}\\
\lVert \chi_{k} (LS)_{\sim j} \rVert_{L^\infty} \lesssim& \epsilon^3 t^{-5/6}2^{-3/2 j} 2^{-k} \qquad k < j - 30\\
\lVert \chi_{k} (LS)_{\ll j} \rVert_{L^\infty} \lesssim& \epsilon^3 t^{-1/2} 2^{-3/2 k}\\
\lVert \chi_{\geq k} \partial_x (LS)_{\ll j} \rVert_{L^\infty} \lesssim& \epsilon^3 t^{-1/2} 2^{-3/2 k}\\
\lVert \partial_x (LS)_{\sim j} \rVert_{L^\infty} \lesssim& \epsilon^3 t^{-1/2} 2^{-j/2}\\
\lVert \partial_x LS \rVert_{L^\infty} \lesssim& \epsilon^3 t^{-1/3}
\end{split}\end{equation}
All but the last inequality are straightforward consequences of the cubic bounds \cref{eqn:cubic-sim-larger,eqn:cubic-bounds-compendium,eqn:cubic-deriv-bounds-compendium,eqn:cubic-sim-with-k,eqn:cubic-sim-w-deriv,eqn:cubic-sim-w-deriv-small}, and the last inequality follows from the second-to-last after summing in $j$.
Finally, we turn to the linear estimates for $w$. By~\Cref{thm:w-lin-decay}, these estimates can be given in terms of $\lVert xg \rVert_{L^2}$:
\begin{equation}\label{eqn:w-lin-ests-L-inf}\begin{split}
\lVert w_j \rVert_{L^\infty} \lesssim& t^{-1/2} \lVert xg \rVert_{L^2} c_j\\
\lVert \chi_k w \rVert_{L^\infty} \lesssim& t^{-1/2} \lVert xg \rVert_{L^2} c_k\\
\lVert \chi_k \partial_x w \rVert_{L^\infty} \lesssim& t^{-1/2} 2^k \lVert xg \rVert_{L^2} c_k\\
\lVert \chi_k \partial_x w_{j} \rVert_{L^\infty} \lesssim& t^{-5/6} \lVert xg \rVert_{L^2} c_j \qquad\qquad k < j - 30
\end{split}
\end{equation}
The first two equations are simple restatements of~\Cref{thm:w-lin-decay}, while the last two follow from applying~\Cref{thm:w-lin-decay} to $\partial_x g$. We will also often make use of $L^2$ estimates for $w$, so we record some below for future reference:
\begin{equation}\label{eqn:w-lin-ests-L-2}\begin{split}
\lVert w_{\leq j} \rVert_{L^2} \lesssim& 2^j \lVert xg \rVert_{L^2}\\
\lVert w_{j} \rVert_{L^2} \lesssim& 2^j \lVert xg \rVert_{L^2}c_j\\
\lVert \partial_x w_{\leq j} \rVert_{L^2} \lesssim& 2^{2j} \lVert xg \rVert_{L^2}\\
\lVert \partial_x w_{j} \rVert_{L^2} \lesssim& 2^{2j} \lVert xg \rVert_{L^2} c_j\\
\lVert \chi_{k} w \rVert_{L^2} \lesssim& 2^k \lVert xg \rVert_{L^2} c_k\\
\lVert \chi_k \partial_x w \rVert_{L^2} \lesssim& 2^{2k} \lVert xg \rVert_{L^2} c_k\\
\lVert \chi_k \partial_x w_{j} \rVert_{L^2} \lesssim& t^{-1/3}2^{k} \lVert xg \rVert_{L^2}c_j \qquad\qquad k < j - 30
\end{split}
\end{equation}
The first four estimates follow from Hardy's inequality, and the other inequalities are direct consequences of the $L^\infty$ estimates~\eqref{eqn:w-lin-ests-L-inf} and the estimate $\lVert \chi_{\sim k} \rVert_{L^2} \lesssim t^{1/2} 2^k$.
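For instance, the first estimate is Hardy's inequality on the Fourier side, using that $\hat{g}(0,t) = \hat{u}(0,t) - \hat{S}(0,t;\hat{u}(0,t)) = 0$ by the boundary condition $\hat{\sigma}(0;p) = p$:
\begin{equation*}
\lVert w_{\leq j} \rVert_{L^2} = \lVert \psi_{\leq j}\, \hat{g} \rVert_{L^2} \leq 2^j \left\lVert \frac{\hat{g}(\xi)}{\xi} \right\rVert_{L^2_\xi} \lesssim 2^j \lVert \partial_\xi \hat{g} \rVert_{L^2} = 2^j \lVert xg \rVert_{L^2}
\end{equation*}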
\begin{rmk}\label{rmk:proj-bdds-rmk} Observe that all of the above bounds are based on estimates for the linear propagator. Since the linear propagator is well-behaved under the Littlewood-Paley projectors, all of the above estimates continue to hold if we replace $u$, $S$, or $w$ with their Littlewood-Paley projections. For instance, we get the bound $\lVert \chi_k u_{\lesssim j} \rVert_{L^\infty} \lesssim M\epsilon t^{-1/2} 2^{-k/2}$ from the second bound in~\eqref{eqn:u-lin-ests}.
\end{rmk}
\section{The weighted energy estimate}\label{sec:weighted-L2}
In this section, we will show that $\lVert xg \rVert_{L^2} \lesssim \epsilon t^{1/6 - \beta}$, where $\beta$ is as in~\Cref{thm:nonlinear-bounds-thm}. To establish this bound, we show that the bootstrap hypotheses imply
\begin{equation}\label{eqn:xg-desired-bound}
\lVert xg(t) \rVert_{L^2}^2 \lesssim \epsilon^2 + \int_1^t\left[ M^2\epsilon^2s^{-1}\lVert xg(s) \rVert_{L^2}^2 + M^2\epsilon^3 s^{-5/6-\beta} \lVert xg(s) \rVert_{L^2}\right]\,ds
\end{equation}
for all $t \leq T$. After adding the term $\epsilon^2 t^{1/3 - 2\beta}$ to both sides,~\eqref{eqn:xg-desired-bound} implies that
\begin{equation*}
\lVert xg(t) \rVert_{L^2}^2 + \epsilon^2 t^{1/3 -2\beta} \lesssim \epsilon^2 + \int_1^t M^2\epsilon^2 s^{-1} \left( \lVert xg(s) \rVert_{L^2}^2 + \epsilon^2 s^{1/3 -2\beta} \right)\,ds
\end{equation*}
(recall that by the definition of $\beta$, $\frac{1}{6} - \beta = O(M^2\epsilon^2)$). Applying Gr\"onwall's inequality, we obtain the desired bound for $\lVert xg(t) \rVert_{L^2}$. To prove~\eqref{eqn:xg-desired-bound}, we write the inequality in differential form using the expansion
\begin{equation}\label{eqn:xg-division}
\begin{split}
x \partial_t g =& xe^{t\partial_x^3}(\partial_t + \partial_x^3) (u-S)\\
=& \pm x e^{t\partial_x^3} \left(|u|^2 \partial_x u - |S|^2 \partial_x S - D_p S \partial_t \hat u(0,t)\right)\\
=& \pm x e^{t\partial_x^3} \left(|u|^2 \partial_x w + (w \overline{u} + u \overline{w}) \partial_x S\right) - xe^{t\partial_x^3}D_p S \partial_t \hat u(0,t)\\
=& \mp \mathcal{F}^{-1} \frac{1}{2\pi}\partial_\xi \int e^{it\phi} (\xi - \eta - \sigma) \left(\hat{f}(\eta) \overline{\hat{f}(-\sigma)} \hat{g}(\xi-\eta-\sigma)\right.\\
&\qquad\qquad\qquad + \left.\left(\hat{f}(\eta) \overline{\hat{g}(-\sigma)}+ \hat{g}(\eta) \overline{\hat{f}(-\sigma)}\right) \hat{h}(\xi-\eta-\sigma)\right)\,d\eta d\sigma\\
& \mp e^{t\partial_x^3}LD_p S \partial_t \hat u(0,t)\\
= & \mp it\left(T_{\partial_\xi \phi e^{it\phi}} (f,\partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{it\phi}} (f,\partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{it\phi}} (g,\partial_x h, \overline{f})\right)\\
& \mp e^{t\partial_x^3} \left(|u|^2 w + (u\overline{w} + \overline{u}w) S\right) \mp e^{t\partial_x^3} \left( |u|^2 \partial_x e^{-t\partial_x^3} (xg) + (u\overline{w} + \overline{u}w) \partial_x e^{-t\partial_x^3} (xh) \right)\\
& \mp e^{t\partial_x^3}LD_p S \partial_t \hat u(0,t)
\end{split}
\end{equation}
where $h = e^{t\partial_x^3} S$. Thus,
\begin{subequations}
\begin{align}
\frac{1}{2}\partial_t \lVert xg \rVert_{L^2}^2 =& \Re \langle xg, \partial_t xg \rangle\nonumber\\
=& \pm t\Im \langle xg, T_{\partial_\xi \phi e^{it\phi}} (f,\partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{it\phi}} (f,\partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{it\phi}} (g,\partial_x h, \overline{f}) \rangle\label{eqn:xf-energy-estimate-paraprods}\\
& \mp \Re \langle e^{-t\partial_x^3} xg, |u|^2 w + (u\overline{w} + \overline{u}w) S \rangle\label{eqn:xf-energy-estimate-no-deriv}\\
&\mp \Re \langle e^{-t\partial_x^3} xg, |u|^2 \partial_x e^{-t\partial_x^3} (xg) \rangle\label{eqn:xf-energy-estimate-cancellation}\\
&\mp \Re \langle e^{-t\partial_x^3} xg, (u\overline{w} + \overline{u}w) \partial_x LS \rangle \label{eqn:xf-energy-estimate-S-ident}\\
& \mp \Re\langle e^{-t\partial_x^3} xg, LD_p S \partial_t \hat u(0,t) \rangle \label{eqn:xf-energy-estimate-modulation-term}
\end{align}
\end{subequations}
Since
\begin{equation}\label{eqn:xg-t-1-bd}
\lVert xg(x,1) \rVert_{L^2} \leq \lVert x u_* \rVert_{L^2} + \lVert LS(x,1; \hat{u_*}(0)) \rVert_{L^2} \lesssim \epsilon
\end{equation}
the desired inequality for $\partial_t \lVert xg \rVert_{L^2}^2$ will follow if we can show that~\cref{eqn:xf-energy-estimate-cancellation,eqn:xf-energy-estimate-modulation-term,eqn:xf-energy-estimate-no-deriv,eqn:xf-energy-estimate-paraprods,eqn:xf-energy-estimate-S-ident} satisfy bounds compatible with~\eqref{eqn:xg-desired-bound}. We will first show that the terms~\cref{eqn:xf-energy-estimate-no-deriv,eqn:xf-energy-estimate-cancellation,eqn:xf-energy-estimate-S-ident} decay in time like $M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}^2$, which is compatible with~\eqref{eqn:xg-desired-bound} after integrating in time. For~\eqref{eqn:xf-energy-estimate-no-deriv}, we use the bounds from~\eqref{eqn:u-lin-ests} and~\eqref{eqn:w-lin-ests-L-2} together with almost orthogonality in space to find that
\begin{equation*}\begin{split}
\left|\eqref{eqn:xf-energy-estimate-no-deriv}\right| \lesssim& \lVert xg \rVert_{L^2} \left(\sum_{2^k \gtrsim t^{-1/3}} \lVert \chi_k(|u|^2 w)\rVert_{L^2}^2\right)^{1/2} + \{\text{similar terms}\}\\
\lesssim& \lVert xg \rVert_{L^2} \left(\sum_{2^k \gtrsim t^{-1/3}} \left(\lVert \chi_{\sim k} u\rVert_{L^\infty}^2 \lVert \chi_{\sim k} w \rVert_{L^2}\right)^2\right)^{1/2} + \{\text{similar terms}\}\\
\lesssim& M^2 \epsilon^2 t^{-1}\lVert xg \rVert_{L^2}^2
\end{split}\end{equation*}
For~\eqref{eqn:xf-energy-estimate-cancellation}, we integrate by parts and use~\Cref{thm:simple-bilinear-decay} to find that
\begin{equation*}
\begin{split}
|\eqref{eqn:xf-energy-estimate-cancellation}| =& \left|\int \Re (e^{-t\partial_x^3} xg \partial_x \overline{e^{-t\partial_x^3} xg}) |u|^2\,dx\right|\\
=& \frac{1}{2} \left|\int \partial_x(|u|^2) |e^{-t\partial_x^3} xg|^2\,dx\right|\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}^2
\end{split}
\end{equation*}
Finally, for~\eqref{eqn:xf-energy-estimate-S-ident}, we use equation~\eqref{eqn:self-similar-dx-L-identity} to write $\partial_x LS = \mp 3t|S|^2\partial_x S$. Then, since $h \in X$, the bilinear bound given in~\Cref{thm:simple-bilinear-decay} and an argument similar to the one for~\eqref{eqn:xf-energy-estimate-no-deriv} give us the bound
\begin{equation*}\begin{split}
\left|\eqref{eqn:xf-energy-estimate-S-ident}\right| \lesssim& t\lVert xg \rVert_{L^2} \lVert uw|S|^2 \partial_x S \rVert_{L^2}\\
\lesssim& \epsilon^2 \lVert xg \rVert_{L^2} \left(\sum_{2^k \gtrsim t^{-1/3}} \lVert \chi_k(uwS) \rVert_{L^2}^2 \right)^{1/2}\\
\lesssim& \epsilon^2 \lVert xg \rVert_{L^2} \left(\sum_{2^k \gtrsim t^{-1/3}} \left(\lVert \chi_{\sim k} u \rVert_{L^\infty} \lVert \chi_{\sim k} S \rVert_{L^\infty} \lVert \chi_{\sim k} w \rVert_{L^2}\right)^2 \right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}^2
\end{split}\end{equation*}
which is better than required.
We now turn to the term~\eqref{eqn:xf-energy-estimate-modulation-term}. By~\eqref{eqn:bootstrap-hypotheses} and~\eqref{eqn:param-deriv-LS}
\begin{equation}\label{eqn:L-D-p-S-hat-u-bd}
\lVert LD_pS\, \partial_t\hat{u}(0,t) \rVert_{L^2} \lesssim M^5\epsilon^5 t^{-5/6-\beta}
\end{equation}
from which it easily follows that
\begin{equation*}
|\eqref{eqn:xf-energy-estimate-modulation-term}| \lesssim M^5\epsilon^5 t^{-5/6-\beta} \lVert xg \rVert_{L^2}
\end{equation*}
which is better than the bound required by~\eqref{eqn:xg-desired-bound} since $\epsilon \ll M^{-3/2}$.
It only remains to control~\cref{eqn:xf-energy-estimate-paraprods}. We will re-write this term to exploit the space-time resonance structure of the phase $\phi$. Recall that the space-time resonances are
\begin{align*}
\mathcal{S} &= \{\eta = \sigma = \xi / 3\}\\
\mathcal{T} &= \{\xi = \eta\} \cup \{\xi = \sigma\}\\
\mathcal{R} &= \{(0,0,0)\}
\end{align*}
Let $\chi^\mathcal{S}, \chi^\mathcal{T}, \chi^\mathcal{R}$ be a smooth partition of unity such that:
\begin{itemize}
\item $\chi^\mathcal{S}$ and $\chi^\mathcal{T}$ are supported away from the sets $\mathcal{S}$ and $\mathcal{T}$, respectively,
\item $\chi^\mathcal{S}$ and $\chi^\mathcal{T}$ are $0$-homogeneous outside a ball of radius $2$ and vanish within a ball of radius $1$,
\item $\chi^\mathcal{R}$ is supported inside the ball of radius $2$, and
\item the support of $\chi^{\mathcal{S}}$ is contained in the set
\begin{equation*}
\widetilde{\mathcal{T}} = \left\{(\xi,\eta,\sigma) : \frac{\xi}{\eta} \in \left[1-c, 1+c\right] \text{ or } \frac{\xi}{\sigma} \in \left[1-c, 1+c\right] \right\}
\end{equation*}
where $c \ll 1$ is a small constant. (Note that $\widetilde{\mathcal{T}} \cap \mathcal{S}$ is empty for small $c$, so this is possible.)
\end{itemize}
Define $\chi^\bullet_t = \chi^\bullet(t^{1/3} \cdot)$ for $\bullet = \mathcal{S}, \mathcal{T}, \mathcal{R}$. If we write
\begin{equation}\label{eqn:basic-STR-division}
T_{\partial_\xi \phi e^{is\phi}} = T_{\partial_\xi \phi e^{is\phi} \chi^\mathcal{T}_s} + T_{\partial_\xi \phi e^{is\phi} \chi^\mathcal{S}_s} + T_{\partial_\xi \phi e^{is\phi} \chi^\mathcal{R}_s}
\end{equation}
then we can naturally write~\eqref{eqn:xf-energy-estimate-paraprods} as
\begin{subequations}\begin{align}
\eqref{eqn:xf-energy-estimate-paraprods} =& \pm t\Im\langle xg, T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{R}_t}(f, \partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{R}_t}(f, \partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{R}_t}(g, \partial_x h, \overline{f}) \rangle \label{eqn:xg-str-term}\\
&\pm t\Im\langle xg, T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(f, \partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(f, \partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(g, \partial_x h, \overline{f}) \rangle \label{eqn:xg-space-non-res-term}\\
&\pm t\Im\langle xg, T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{T}_t}(f, \partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{T}_t}(f, \partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{T}_t}(g, \partial_x h, \overline{f}) \rangle \label{eqn:xg-time-non-res-term}
\end{align}\end{subequations}
In the following sections, we will show that the space-time resonant and space non-resonant terms satisfy the estimate
\begin{equation*}
|\eqref{eqn:xg-str-term}|, |\eqref{eqn:xg-space-non-res-term}| \lesssim M^2\epsilon^2 \lVert xg \rVert_{L^2}^2 t^{-1}\end{equation*}
pointwise in time, and that
\begin{equation*}
\left|\int_1^t \eqref{eqn:xg-time-non-res-term} \,ds\right| \lesssim \epsilon^2 + M^2\epsilon^2\lVert xg(t) \rVert_{L^2}^2 + \int_1^t\left[ M^2\epsilon^2s^{-1}\lVert xg(s) \rVert_{L^2}^2 + M^2\epsilon^3 s^{-5/6-\beta} \lVert xg(s) \rVert_{L^2}\right]\,ds
\end{equation*}
which is enough to prove~\eqref{eqn:xg-desired-bound}.
\subsection{The space-time resonant multiplier}\label{sec:xf-str}
Since the space-time resonant set is a single point, we can control its contribution using the $C^{1/2}$ bound on $\hat{g}$ and the $L^\infty$ bounds on $\hat{f}$ and $\hat{S}$. Define
\begin{equation*}
m^{\mathcal{R}}_t = i(\xi - \eta - \sigma)\chi^\mathcal{R}_t \partial_\xi \phi e^{it\phi}
\end{equation*}
Then, $m^\mathcal{R}_t$ is of size $O(t^{-1})$ and is supported within the region $|\xi| + |\eta| + |\sigma| \lesssim t^{-1/3}$, so
\begin{equation*}\begin{split}
\left|\eqref{eqn:xg-str-term} \right| &\leq t\lVert xg \rVert_{L^2}\left\lVert \hat T_{m^\mathcal{R}_t}(f,g,\overline{f}) \right\rVert_{L^2_\xi} + \{\text{similar terms}\}\\
&\leq t\lVert xg \rVert_{L^2}\left\lVert \int_{\mathbb{R}^2} m^\mathcal{R}_t\hat g(\xi-\eta-\sigma)\hat f(\eta)\overline{\hat f(-\sigma)} \,d\eta d\sigma \right\rVert_{L^2_\xi} + \{\text{similar terms}\}\\
&\leq \lVert xg \rVert_{L^2}\lVert \hat g\rVert_{C^{1/2}}\lVert \hat f\rVert_{L^\infty}^2\bigg\lVert \int_{|\xi| + |\eta| + |\sigma| \lesssim t^{-1/3}} |\xi|^{1/2} \,d\eta d\sigma \bigg\rVert_{L^2_\xi} + \{\text{similar terms}\}\\
&\lesssim M^2\epsilon^2t^{-1}\lVert xg \rVert_{L^2}^2
\end{split}\end{equation*}
as required.
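Here the $C^{1/2}$ bound on $\hat{g}$ is available because $\hat{g}(0,t) = 0$ (as noted above): by the fundamental theorem of calculus and Cauchy--Schwarz,
\begin{equation*}
|\hat{g}(\zeta)| = \left|\int_0^\zeta \partial_\xi \hat{g}(\xi)\,d\xi\right| \leq |\zeta|^{1/2} \lVert \partial_\xi \hat{g} \rVert_{L^2} = |\zeta|^{1/2} \lVert xg \rVert_{L^2}
\end{equation*}
so $\lVert \hat{g} \rVert_{C^{1/2}} \lesssim \lVert xg \rVert_{L^2}$.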
\subsection{The space non-resonant multiplier}\label{sec:xf-sr}
We will handle the terms supported away from the space resonant set in frequency space using integration by parts in $\eta$ and $\sigma$. To ease notation, let us write
\begin{equation*}
\mathcal{N}^\mathcal{S}(f,g,h) = tT_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(f, \partial_x g, \overline{f}) + tT_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(f, \partial_x h, \overline{g}) + tT_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(g, \partial_x h, \overline{f})
\end{equation*}
Then, the desired bound for~\eqref{eqn:xg-space-non-res-term} follows from showing that
\begin{equation}\label{eqn:N-s-desired-bound}
\lVert \mathcal{N}^\mathcal{S}(f,g,h) \rVert_{L^2} \lesssim M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{equation}
Let us first consider the term $tT_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(f,\partial_x g,\overline{f})$. Integrating by parts in frequency gives
\begin{align}
t\hat{T}_{\partial_\xi \phi e^{it\phi} \chi^\mathcal{S}_t}(f,\partial_x g,\overline{f}) =& i\int t\partial_\xi \phi \chi^\mathcal{S}_t e^{it\phi} (\xi - \eta -\sigma) \hat{g}(\xi-\eta-\sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)} \,d\eta d\sigma\notag\\
=& - \int e^{it\phi} \nabla_{\eta,\sigma} \cdot \Big((\xi-\eta-\sigma)m^\mathcal{S}_t \hat{g}(\xi-\eta-\sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)}\Big)\, d\eta d\sigma\notag\\
\begin{split}
=& -e^{-it\xi^3} \hat{T}_{\nabla_{\eta,\sigma} \cdot m^\mathcal{S}_t(\xi - \eta - \sigma)}(u,w, \overline{u})\\
&- e^{-it\xi^3} \hat{T}_{m^{\mathcal{S},\sigma}_t}(u, \partial_x w, \overline{Lu})\\
&- e^{-it\xi^3} \hat{T}_{m^{\mathcal{S},\sigma}_t}(u,w, \overline{u})\\
&+ e^{-it\xi^3} \hat{T}_{m^{\mathcal{S},\sigma}_t}(u, \partial_x Lw, \overline{u})\\
&+ \{\text{similar terms}\}
\end{split}\label{eqn:space-pseudoproduct-division}\end{align}
where $m^\mathcal{S}_t$ is the vector-valued symbol $m^{\mathcal{S}}_t = \frac{\partial_\xi \phi}{|\nabla_{\eta,\sigma}\phi|^2} \nabla_{\eta,\sigma} \phi\, \chi^\mathcal{S}_t$, and $m^{\mathcal{S},\eta}_t,m^{\mathcal{S},\sigma}_t$ are its components. Because $Lu$ obeys worse estimates than $LS$ and $Lw$, we rewrite the term containing the $Lu$ factor using
\begin{equation*}
{T}_{m^{\mathcal{S},\sigma}_t}(u, \partial_x w, \overline{Lu}) = {T}_{m^{\mathcal{S},\sigma}_t}(u, \partial_x w, \overline{LS}) + {T}_{m^{\mathcal{S},\sigma}_t}(u, \partial_x u, \overline{Lw}) - {T}_{m^{\mathcal{S},\sigma}_t}(u, \partial_x S, \overline{Lw})
\end{equation*}
Similar expressions hold for the other terms in~$\mathcal{N}^\mathcal{S}$. Note that each symbol $m$ occurring after the last equality in~\eqref{eqn:space-pseudoproduct-division} satisfies Coifman-Meyer type bounds
\begin{equation*}
\left|(|\xi| + |\eta| + |\sigma|)^{|\alpha|}\partial_{\xi,\eta,\sigma}^\alpha m\right| \lesssim_\alpha 1
\end{equation*}
and is supported on $\{\xi \sim \eta \gtrsim t^{-1/3}\} \cup \{\xi \sim \sigma \gtrsim t^{-1/3}\}$. Thus, by dividing dyadically in frequency, we can write
\begin{subequations}\begin{align}
\mathcal{N}^\mathcal{S}(f,g,h) =& \sum_{2^j \gtrsim t^{-1/3}} T_{m_j^{\mathcal{S}}}( u, w, \overline{u})\label{eqn:mS-dyadic-no-deriv}\\
&\qquad+ T_{m_j^{\mathcal{S}}}( u, \partial_x Lw, \overline{u})\label{eqn:mS-dyadic-Lw-deriv}\\
&\qquad+ T_{m_j^{\mathcal{S}}}(u,\partial_x w, \overline{LS})\label{eqn:mS-dyadic-LS}\\
&\qquad + T_{m_j^\mathcal{S}}(u, S, \overline{w}) \label{eqn:mS-dyadic-SSw}\\
&\qquad + T_{m_j^{\mathcal{S}}}(u, \partial_x L S, \overline{w})\label{eqn:mS-dyadic-deriv-on-LS}\\
&\qquad + T_{m_j^{\mathcal{S}}}(u, \partial_x u, \overline{Lw})\label{eqn:mS-dyadic-deriv-on-S-and-Lw}\\
&\qquad + T_{m_j^{\mathcal{S}}}(LS, \partial_x u, \overline{w})\label{eqn:mS-dyadic-LS-and-outside-w}\\
&+ \{\text{similar or easier terms}\}\nonumber
\end{align}\end{subequations}
where $m_j^{\mathcal{S}}$ denotes a generic symbol supported in the region
\begin{equation*}
\left\{|\xi| + |\eta| + |\sigma| \sim 2^j, |\xi - \eta| \ll 2^j\right\} \cup \left\{|\xi| + |\eta| + |\sigma| \sim 2^j, |\xi - \sigma| \ll 2^j\right\}
\end{equation*}
and satisfying
\begin{equation*}
|\partial_{\xi,\eta,\sigma}^\alpha m_j^\mathcal{S}| \lesssim 2^{-|\alpha| j}
\end{equation*}
Above and going forward, we allow the precise symbol represented by $m_j^\mathcal{S}$ to change from line to line. We now turn to the task of deriving estimates for~\cref{eqn:mS-dyadic-deriv-on-LS,eqn:mS-dyadic-LS,eqn:mS-dyadic-Lw-deriv,eqn:mS-dyadic-no-deriv,eqn:mS-dyadic-deriv-on-S-and-Lw,eqn:mS-dyadic-SSw,eqn:mS-dyadic-LS-and-outside-w}. Using the support condition on $m^\mathcal{S}_j$, we can decompose any pseudoproduct $T_{m_j^{\mathcal{S}}}$ as
\begin{equation}\label{eqn:space-non-res-pseudoproduct-expansion}\begin{split}
T_{m_j^{\mathcal{S}}}(p,q,r) =& T_{m_j^\mathcal{S}} (p_{\ll j}, q_{\lesssim j}, r_{\sim j}) + T_{m_j^\mathcal{S}}(p_{\sim j}, q_{\lesssim j}, r_{\ll j}) + Q_{\sim j}T_{m_j^{\mathcal{S}}}(p_{\sim j}, q_{\sim j}, r_{\sim j})
\end{split}\end{equation}
The term $T_{m_j^\mathcal{S}}(p_{\sim j}, q_{\ll j}, r_{\sim j})$ does not appear in the expansion because of the support assumption on $m^\mathcal{S}_j$. In particular, we always have $|\xi - \eta - \sigma| \lesssim \max\{|\eta|, |\sigma|\}$, which helps us control the derivative. We now consider each term~\cref{eqn:mS-dyadic-no-deriv,eqn:mS-dyadic-Lw-deriv,eqn:mS-dyadic-LS,eqn:mS-dyadic-deriv-on-LS,eqn:mS-dyadic-deriv-on-S-and-Lw,eqn:mS-dyadic-SSw,eqn:mS-dyadic-LS-and-outside-w} in turn and use the division~\eqref{eqn:space-non-res-pseudoproduct-expansion} and the decay estimates to obtain the bound~\eqref{eqn:N-s-desired-bound}.
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-no-deriv}} From~\eqref{eqn:space-non-res-pseudoproduct-expansion}, we can write
\begin{subequations}\begin{align}
T_{m_j^\mathcal{S}}(u, w, \overline{u}) =& T_{m_{j}^\mathcal{S}} (u_{\ll j}, w_{\lesssim j}, \overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq}\\
&+ Q_{\sim j} T_{m_j^\mathcal{S}}(u_{\sim j}, w_{\sim j}, \overline{u}_{\sim j})\label{eqn:mS-no-deriv-equal-freq}\\
&+\{\text{similar terms}\}\nonumber
\end{align}\end{subequations}
The hypotheses on the symbols $m^\mathcal{S}_j$ imply that they satisfy the hypotheses of~\Cref{thm:L1-symbol-bounds} uniformly in $j$. Thus, using the $L^2$ bound for $w_{\sim j}$ from~\eqref{eqn:w-lin-ests-L-2} and the dispersive decay of $u_{\sim j}$ given in~\eqref{eqn:u-lin-ests}, we find that
\begin{equation*}\begin{split}
\left\lVert \sum_{2^j \gtrsim t^{-1/3}} Q_{\sim j} T_{m_j^\mathcal{S}}(u_{\sim j}, w_{\sim j}, \overline{u}_{\sim j})\right\rVert_{L^2} \lesssim& \left(\sum_{2^j \gtrsim t^{-1/3}} \lVert u_{\sim j} \rVert_{L^\infty}^4\lVert w_{\sim j} \rVert_{L^2}^2 \right)^{1/2}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
as required. For~\eqref{eqn:mS-no-deriv-low-freq}, we introduce a further dyadic decomposition in space to write
\begin{subequations}\begin{align}
T_{m_{j}^\mathcal{S}}(u_{\ll j}, w_{\lesssim j}, \overline{u}_{\sim j}) =& T_{m_{j}^\mathcal{S}}(u_{\ll j}, w_{\lesssim j}, (1 - \chi_{[j - 20, j + 20]})\overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq-1}\\
&+ T_{m_{j}^\mathcal{S}}(u_{\ll j}, w_{\lesssim j}, \chi_{[j - 20, j + 20]}\overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq-2}
\end{align}\end{subequations}
The first term is straightforward to bound using~\eqref{eqn:u-lin-ests} and the $L^2$ estimates for $w$ from~\eqref{eqn:w-lin-ests-L-2}:
\begin{equation*}\begin{split}
\lVert\eqref{eqn:mS-no-deriv-low-freq-1}\rVert_{L^2} \lesssim& \lVert u_{\ll j} \rVert_{L^\infty} \lVert (1 - \chi_{[j - 20, j + 20]}) u_{\sim j} \rVert_{L^\infty} \lVert w_{\lesssim j} \rVert_{L^2}\\
\lesssim& M^2 \epsilon^2 t^{-7/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient. Turning to the second term~\eqref{eqn:mS-no-deriv-low-freq-2}, we write
\begin{subequations}\begin{align}
\eqref{eqn:mS-no-deriv-low-freq-2} =& \chi_{[j - 30, j + 30]} T_{m_{j}^\mathcal{S}}(\chi_{[j - 30, j + 30]} u_{\ll j}, \chi_{[j - 30 , j + 30]} w_{\lesssim j}, \chi_{[j - 20 , j + 20]}\overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq-3-main}\\
&+ (1-\chi_{[j - 30 , j + 30]}) T_{m_{j}^\mathcal{S}}(u_{\ll j}, w_{\lesssim j}, \chi_{[j - 20 , j + 20]}\overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq-3-sub-1}\\
&+ \chi_{[j - 30 , j + 30]} T_{m_{j}^\mathcal{S}}((1-\chi_{[j - 30 , j + 30]})u_{\ll j},w_{\lesssim j}, \chi_{[j - 20 , j + 20]}\overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq-3-sub-2}\\
&+ \chi_{[j - 30 , j + 30]} T_{m_{j}^\mathcal{S}}(\chi_{[j - 30 , j + 30]}u_{\ll j}, (1-\chi_{[j - 30 , j + 30]})w_{\lesssim j}, \chi_{[j - 20 , j + 20]}\overline{u}_{\sim j})\label{eqn:mS-no-deriv-low-freq-3-sub-3}
\end{align}\end{subequations}
The subterms~\cref{eqn:mS-no-deriv-low-freq-3-sub-1,eqn:mS-no-deriv-low-freq-3-sub-2,eqn:mS-no-deriv-low-freq-3-sub-3} are non-pseudolocal in the sense of~\Cref{thm:cm-paraprod-pseudolocality}, so they satisfy the bound
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-no-deriv-low-freq-3-sub-1} \rVert_{L^2} + \lVert \eqref{eqn:mS-no-deriv-low-freq-3-sub-2} \rVert_{L^2} + \lVert \eqref{eqn:mS-no-deriv-low-freq-3-sub-3} \rVert_{L^2} \lesssim& \left(t2^{3j}\right)^{-1}\lVert u_{\ll j} \rVert_{L^\infty}\lVert w_{\lesssim j} \rVert_{L^2} \lVert u_{\sim j} \rVert_{L^\infty}\\
\lesssim& M^2\epsilon^2 t^{-11/6} 2^{-5/2j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives the required bound after summing in $j$. To bound the leading order term~\eqref{eqn:mS-no-deriv-low-freq-3-main}, we use almost orthogonality and find that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-no-deriv-low-freq-3-main} \right\rVert_{L^2} \lesssim& \left(\sum_{2^j \geq t^{-1/3}} \left( \lVert \chi_{[j - 30 , j + 30]}u_{\ll j}\rVert_{L^\infty} \lVert \chi_{[j - 30 , j + 30]}w_{\lesssim j} \rVert_{L^2} \lVert {u}_{\sim j}\rVert_{L^\infty}\right)^2\right)^{1/2}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
Collecting the bounds for~\cref{eqn:mS-no-deriv-equal-freq,eqn:mS-no-deriv-low-freq}, we find
\begin{equation*}
\sum_{2^j \gtrsim t^{-1/3}} \lVert \eqref{eqn:mS-dyadic-no-deriv} \rVert_{L^2} \lesssim M^2 \epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{equation*}
as required.
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-Lw-deriv}}The estimates for~\eqref{eqn:mS-dyadic-Lw-deriv} are analogous to those for~\eqref{eqn:mS-dyadic-no-deriv} once we use the bounds
\begin{equation*}
\lVert \partial_x (Lw)_{j} \rVert_{L^2} \lesssim 2^j\lVert xg \rVert_{L^2}c_j
\end{equation*}
and
\begin{equation*}
\lVert \partial_x (Lw)_{\lesssim j} \rVert_{L^2} \lesssim 2^j \lVert xg \rVert_{L^2}
\end{equation*}
in place of the Hardy-type bounds on $w$.
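Both bounds are immediate from the identity $Lw = e^{-t\partial_x^3}(xg)$ together with Bernstein's inequality and Plancherel; for instance, with $c_j$ the frequency envelope used above,
\begin{equation*}
\lVert \partial_x (Lw)_{j} \rVert_{L^2} \lesssim 2^j \lVert (Lw)_{j} \rVert_{L^2} = 2^j \lVert \psi_j\, \widehat{xg} \rVert_{L^2} \lesssim 2^j \lVert xg \rVert_{L^2}\, c_j
\end{equation*}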
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-LS}} Applying~\eqref{eqn:space-non-res-pseudoproduct-expansion}, we find
\begin{subequations}\begin{align}
T_{m_j^{\mathcal{S}}}(u, \partial_x w, \overline{LS}) =& T_{m_j^{\mathcal{S}}}(u_{\ll j}, \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j}) \label{eqn:mS-dyadic-LS-ll-sim}\\
&+ T_{m_j^{\mathcal{S}}}(u_{\sim j}, \partial_x w_{\lesssim j}, (\overline{LS})_{\ll j}) \label{eqn:mS-dyadic-LS-sim-ll}\\
&+ Q_{\sim j}T_{m_j^{\mathcal{S}}}(u_{\sim j}, \partial_x w_{\sim j}, (\overline{LS})_{\sim j}) \label{eqn:mS-dyadic-LS-sim-sim}
\end{align}\end{subequations}
We first consider~\eqref{eqn:mS-dyadic-LS-sim-sim}. Using almost orthogonality and the estimates for $LS$ from~\eqref{eqn:LS-cubic-bounds}, we find that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-LS-sim-sim} \right\rVert_{L^2} \lesssim& \left(\sum_{2^j \gtrsim t^{-1/3}} \lVert u_{\sim j} \rVert_{L^\infty}^2 \lVert \partial_x w_{\sim j} \rVert_{L^2}^2 \lVert (LS)_{\sim j} \rVert_{L^\infty}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
where we have used the $L^2$ bounds for $\partial_x w_j$ from~\eqref{eqn:w-lin-ests-L-2}. For~\eqref{eqn:mS-dyadic-LS-ll-sim}, we divide dyadically in space to obtain
\begin{subequations}\begin{align}
T_{m_j^{\mathcal{S}}}(u_{\ll j}, \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j}) =& T_{m_j^{\mathcal{S}}}(\chi_{> j} u_{\ll j}, \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-1}\\
&+ \sum_{k < j - 40} T_{m_j^{\mathcal{S}}}(\chi_{k} u_{\ll j}, \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-3}\\
&+ T_{m_j^{\mathcal{S}}}(\chi_{[j - 40, j]} u_{\ll j}, \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-2}
\end{align}\end{subequations}
For the first term, we have that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-ll-sim-1} \rVert_{L^2} \lesssim& \lVert \chi_{> j} u_{\ll j} \rVert_{L^\infty} \lVert \partial_x w_{\lesssim j} \rVert_{L^2} \lVert (LS)_{\sim j} \rVert_{L^\infty}\\
\lesssim& M \epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives a better than required bound after summing in $j$. For the second term, we perform the further division
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-ll-sim-3} =& \sum_{k < j - 40} T_{m_j^{\mathcal{S}}}(\chi_{k} u_{\ll j}, \chi_{\sim k}\partial_x w_{\lesssim j}, \chi_{\sim k} (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-3-main}\\
&+ \sum_{k < j - 40} T_{m_j^{\mathcal{S}}}(\chi_{k} u_{\ll j}, (1-\chi_{\sim k})\partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-3-sub-1}\\
&+ \sum_{k < j - 40} T_{m_j^{\mathcal{S}}}(\chi_{k} u_{\ll j}, \chi_{\sim k}\partial_x w_{\lesssim j}, (1- \chi_{\sim k})(\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-3-sub-2}
\end{align}\end{subequations}
\Cref{thm:cm-paraprod-pseudolocality} applies to the pseudoproducts in the terms~\cref{eqn:mS-dyadic-LS-ll-sim-3-sub-2,eqn:mS-dyadic-LS-ll-sim-3-sub-1}, yielding the bound
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-ll-sim-3-sub-1} \rVert_{L^2} + \lVert\eqref{eqn:mS-dyadic-LS-ll-sim-3-sub-2} \rVert_{L^2} \lesssim& \sum_{t^{-1/3} \leq 2^k < 2^{j - 40}} (t2^{2k+j})^{-1} \lVert u_{\ll j} \rVert_{L^\infty} \lVert \partial_x w_{\lesssim j} \rVert_{L^2} \lVert (LS)_{\sim j} \rVert_{L^\infty}\\
\lesssim& M \epsilon^4 t^{-7/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient. To bound the remaining term~\eqref{eqn:mS-dyadic-LS-ll-sim-3-main}, we use the bounds for the localized terms $\chi_{\sim k} \partial_x w_{\lesssim j}$ and $\chi_{\sim k} (LS)_{\sim j}$ from~\eqref{eqn:w-lin-ests-L-2} and~\eqref{eqn:LS-cubic-bounds} to obtain
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-ll-sim-3-main} \rVert_{L^2} \lesssim& \sum_{t^{-1/3} \leq 2^k \leq 2^{j-40}} \lVert \chi_k u_{\ll j} \rVert_{L^\infty} \lVert \chi_{\sim k} \partial_x w_{\lesssim j} \rVert_{L^2} \lVert \chi_{\sim k} (LS)_{\sim j} \rVert_{L^\infty}\\
\lesssim& M\epsilon^4t^{-4/3}\sum_{t^{-1/3} \leq 2^k \leq 2^{j - 40}} 2^{k/2- 3/2 j} \lVert xg \rVert_{L^2}\\
\lesssim& M\epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
To bound~\eqref{eqn:mS-dyadic-LS-ll-sim-2}, we write
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-ll-sim-2} =& \chi_{[j - 50, j+10]} T_{m_j^\mathcal{S}}(\chi_{[j - 40, j]} u_{\ll j}, \chi_{[j - 50, j+10]} \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-2-main}\\
&+ (1-\chi_{[j - 50, j+10]})T_{m_j^\mathcal{S}}(\chi_{[j - 40, j]} u_{\ll j}, \chi_{[j - 50, j+10]} \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-2-sub-1}\\
&+ T_{m_j^\mathcal{S}}(\chi_{[j - 40, j]} u_{\ll j}, (1-\chi_{[j - 50, j+10]}) \partial_x w_{\lesssim j}, (\overline{LS})_{\sim j})\label{eqn:mS-dyadic-LS-ll-sim-2-sub-2}
\end{align}\end{subequations}
The terms~\cref{eqn:mS-dyadic-LS-ll-sim-2-sub-1,eqn:mS-dyadic-LS-ll-sim-2-sub-2} can be handled using~\Cref{thm:cm-paraprod-pseudolocality} in the same manner as~\cref{eqn:mS-dyadic-LS-ll-sim-3-sub-2,eqn:mS-dyadic-LS-ll-sim-3-sub-1}. The terms in~\eqref{eqn:mS-dyadic-LS-ll-sim-2-main} are almost orthogonal in physical space, so we can write
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-LS-ll-sim-2-main} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \gtrsim t^{-1/3}} \lVert \chi_{[j - 40, j]}u_{\ll j} \rVert_{L^\infty}^2 \lVert \chi_{[j - 50, j+10]} \partial_x w_{\lesssim j} \rVert_{L^2}^2 \lVert ({LS})_{\sim j}\rVert_{L^\infty}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1}\lVert xg \rVert_{L^2}
\end{split}\end{equation*}
Combining the estimates for~\cref{eqn:mS-dyadic-LS-ll-sim-1,eqn:mS-dyadic-LS-ll-sim-2,eqn:mS-dyadic-LS-ll-sim-3} shows that the bound for~\eqref{eqn:mS-dyadic-LS-ll-sim} holds.
We now turn to~\eqref{eqn:mS-dyadic-LS-sim-ll}. Dividing dyadically in space, we can write
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-sim-ll} =& \sum_{k < j - 30} T_{m_j^\mathcal{S}}(u_{\sim j}, \partial_x w_{\lesssim j}, \chi_k (\overline{LS})_{\ll j}) \label{eqn:mS-dyadic-LS-sim-ll-1}\\
&+ T_{m_j^\mathcal{S}}(u_{\sim j}, \partial_x w_{\lesssim j}, \chi_{[j - 30 , j + 30]} (\overline{LS})_{\ll j}) \label{eqn:mS-dyadic-LS-sim-ll-2}\\
&+ \sum_{k > j + 30} T_{m_j^\mathcal{S}}(u_{\sim j}, \partial_x w_{\lesssim j}, \chi_k (\overline{LS})_{\ll j}) \label{eqn:mS-dyadic-LS-sim-ll-3}
\end{align}\end{subequations}
For~\eqref{eqn:mS-dyadic-LS-sim-ll-1}, we have
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-sim-ll-1} =& \sum_{k < j - 30} T_{m_j^\mathcal{S}}(\chi_{\sim k} u_{\sim j}, \chi_{\sim k} \partial_x w_{\lesssim j}, \chi_k (\overline{LS})_{\ll j})\label{eqn:mS-dyadic-LS-sim-ll-1-main}\\
&+ \sum_{k < j - 30} T_{m_j^\mathcal{S}}((1 - \chi_{\sim k}) u_{\sim j}, \partial_x w_{\lesssim j}, \chi_k (\overline{LS})_{\ll j})\label{eqn:mS-dyadic-LS-sim-ll-1-sub-1}\\
&+ \sum_{k < j - 30} T_{m_j^\mathcal{S}}(\chi_{\sim k} u_{\sim j}, (1 - \chi_{\sim k}) \partial_x w_{\lesssim j}, \chi_k (\overline{LS})_{\ll j})\label{eqn:mS-dyadic-LS-sim-ll-1-sub-2}
\end{align}\end{subequations}
The bounds for~\cref{eqn:mS-dyadic-LS-sim-ll-1-sub-1,eqn:mS-dyadic-LS-sim-ll-1-sub-2} immediately follow from~\Cref{thm:cm-paraprod-pseudolocality}:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-sim-ll-1-sub-1} \rVert_{L^2} + \lVert \eqref{eqn:mS-dyadic-LS-sim-ll-1-sub-2} \rVert_{L^2} \lesssim& \sum_{k < j - 30} (t2^{2k+j})^{-2} \lVert u_{\sim j} \rVert_{L^\infty} \lVert (LS)_{\ll j} \rVert_{L^\infty} \lVert \partial_x w_{\lesssim j} \rVert_{L^2}\\
\lesssim& M\epsilon^2 t^{-5/2} \sum_{2^k \gtrsim t^{-1/3}} 2^{-j/2-4k} \lVert xg \rVert_{L^2}\\
\lesssim& M\epsilon^2 t^{-7/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient, since $\sum_{2^k \gtrsim t^{-1/3}} 2^{-4k} \lesssim t^{4/3}$. Turning to~\eqref{eqn:mS-dyadic-LS-sim-ll-1-main}, we have that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-sim-ll-1-main} \rVert_{L^2} \lesssim& \sum_{k < j - 30} \lVert \chi_{\sim k} u_{\sim j} \rVert_{L^\infty} \lVert \chi_{\sim k} \partial_x w_{\lesssim j} \rVert_{L^2} \lVert \chi_k (LS)_{\ll j} \rVert_{L^\infty}\\
\lesssim& M\epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is acceptable, completing the argument for~\eqref{eqn:mS-dyadic-LS-sim-ll-1}. Turning to~\eqref{eqn:mS-dyadic-LS-sim-ll-2}, we can write
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-sim-ll-2} =& \chi_{[j - 40 , j + 40]} T_{m_j^\mathcal{S}} (u_{\sim j}, \chi_{[j - 40 , j + 40]} \partial_x w_{\lesssim j}, \chi_{[j - 30 , j + 30]} (\overline{LS})_{\ll j})\label{eqn:mS-dyadic-LS-sim-ll-2-main}\\
&+ (1-\chi_{[j - 40 , j + 40]}) T_{m_j^\mathcal{S}} (u_{\sim j}, \chi_{[j - 40 , j + 40]} \partial_x w_{\lesssim j}, \chi_{[j - 30 , j + 30]} (\overline{LS})_{\ll j})\label{eqn:mS-dyadic-LS-sim-ll-2-sub-1}\\
&+ T_{m_j^\mathcal{S}} (u_{\sim j}, (1-\chi_{[j - 40 , j + 40]}) \partial_x w_{\lesssim j}, \chi_{[j - 30 , j + 30]} (\overline{LS})_{\ll j})\label{eqn:mS-dyadic-LS-sim-ll-2-sub-2}
\end{align}\end{subequations}
The terms~\cref{eqn:mS-dyadic-LS-sim-ll-2-sub-1,eqn:mS-dyadic-LS-sim-ll-2-sub-2} can be controlled using~\Cref{thm:cm-paraprod-pseudolocality}:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-sim-ll-2-sub-1}\rVert_{L^2} + \lVert \eqref{eqn:mS-dyadic-LS-sim-ll-2-sub-2}\rVert_{L^2} \lesssim& (t2^{3j})^{-1} \lVert u_{\sim j} \rVert_{L^\infty} \lVert \partial_x w_{\lesssim j} \rVert_{L^2} \lVert (\overline{LS})_{\ll j} \rVert_{L^\infty}\\
\lesssim& M\epsilon^4 t^{-3/2} 2^{-3j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is acceptable. For~\eqref{eqn:mS-dyadic-LS-sim-ll-2-main}, we use almost orthogonality to obtain
\begin{equation*}\begin{split}
\left\lVert \sum_{j}\eqref{eqn:mS-dyadic-LS-sim-ll-2-main} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j\gtrsim t^{-1/3}} \lVert u_{\sim j} \rVert_{L^\infty}^2 \lVert \chi_{[j - 40 , j + 40]} \partial_x w_{\lesssim j} \rVert_{L^2}^2 \lVert \chi_{[j - 30 , j + 30]} (LS)_{\ll j} \rVert_{L^\infty}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
as required.
Finally, for~\eqref{eqn:mS-dyadic-LS-sim-ll-3}, we have
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-sim-ll-3} =& T_{m_j^\mathcal{S}}(\chi_{> j + 20} u_{\sim j}, \partial_x w_{\lesssim j}, \chi_{> j + 30} (\overline{LS})_{\ll j}) \label{eqn:mS-dyadic-LS-sim-ll-3-main}\\
&+ T_{m_j^\mathcal{S}}(\chi_{\leq j + 20} u_{\sim j}, \partial_x w_{\lesssim j}, \chi_{> j + 30} (\overline{LS})_{\ll j}) \label{eqn:mS-dyadic-LS-sim-ll-3-sub}
\end{align}\end{subequations}
The second term is easily controlled using~\Cref{thm:cm-paraprod-pseudolocality}:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-sim-ll-3-sub} \rVert_{L^2} \lesssim& (t2^{3j})^{-1} \lVert u_{\sim j} \rVert_{L^\infty} \lVert \partial_x w_{\lesssim j} \rVert_{L^2}\lVert (\overline{LS})_{\ll j} \rVert_{L^\infty}\\
\lesssim& M\epsilon^4 t^{-3/2} 2^{-3j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
and for the main term~\eqref{eqn:mS-dyadic-LS-sim-ll-3-main} we have that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-sim-ll-3-main} \rVert_{L^2} \lesssim& \lVert \chi_{> j + 20} u_{\sim j}\rVert_{L^\infty}\lVert \partial_x w_{\lesssim j}\rVert_{L^2}\lVert \chi_{> j + 30} (\overline{LS})_{\ll j} \rVert_{L^\infty}\\
\lesssim& M^4\epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient to bound~\eqref{eqn:mS-dyadic-LS-sim-ll-3}, completing the estimate for~\eqref{eqn:mS-dyadic-LS}.
\begin{rmk}\label{rmk:non-pseudolocal-exclusion}
In estimating~\cref{eqn:mS-dyadic-no-deriv,eqn:mS-dyadic-Lw-deriv,eqn:mS-dyadic-LS}, we frequently encountered terms of the form
\begin{equation*}
T_{m_j^\mathcal{S}}(p,q,\chi_k r) = \chi_{\sim k} T_{m_j^\mathcal{S}}(\chi_{\sim k} p, \chi_{\sim k} q,\chi_k r) + \btext{non-pseudolocal terms}
\end{equation*}
where $\btext{non-pseudolocal terms}$ denotes terms which can be estimated using~\Cref{thm:cm-paraprod-pseudolocality}. The estimates for the non-pseudolocal remainder terms are routine: they do not require any refined linear or cubic estimates and are insensitive to the precise frequency localization of $p$, $q$, and $r$. Thus, to streamline the exposition, we will neither estimate these non-pseudolocal remainders nor write them explicitly in the following sections.
\end{rmk}
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-SSw}} Here, we have
\begin{subequations}\begin{align}
T_{m_j^\mathcal{S}}(u, S, \overline{w}) =& Q_{\sim j} T_{m_j^\mathcal{S}}(u_{\ll j}, S_{\ll j}, \overline{w}_{\sim j}) \label{eqn:mS-SSw-i}\\
&+ T_{m_j^\mathcal{S}}(u_{\sim j}, S_{\sim j}, \overline{w}_{\ll j})\label{eqn:mS-SSw-iv}\\
&+ T_{m_j^\mathcal{S}}(u_{\ll j}, S_{\sim j}, \overline{w}_{\sim j}) \label{eqn:mS-SSw-ii}\\
&+ T_{m_j^\mathcal{S}}(u_{\sim j}, S_{\ll j}, \overline{w}_{\ll j})\label{eqn:mS-SSw-iii}\\
&+ Q_{\sim j} T_{m_j^\mathcal{S}}(u_{\sim j}, S_{\sim j}, \overline{w}_{\sim j})\label{eqn:mS-SSw-v}
\end{align}\end{subequations}
Notice that the terms~\eqref{eqn:mS-SSw-ii} and~\eqref{eqn:mS-SSw-iii} can be controlled in the same way as~\eqref{eqn:mS-no-deriv-low-freq}, and that the term~\eqref{eqn:mS-SSw-v} can be controlled in the same way as~\eqref{eqn:mS-no-deriv-equal-freq}. Thus, it only remains to consider the contribution from the first two terms. For~\eqref{eqn:mS-SSw-i}, we write
\begin{subequations}\begin{align}
\eqref{eqn:mS-SSw-i} =& Q_{\sim j} T_{m_j^\mathcal{S}}(\chi_{< j - 30} u_{\ll j}, \chi_{< j - 20} S_{\ll j}, \chi_{< j - 20} \overline{w}_{\sim j}) \label{eqn:mS-SSw-i-1}\\
&+ Q_{\sim j} T_{m_j^\mathcal{S}}(\chi_{\geq j - 30} u_{\ll j}, \chi_{\geq j - 40} S_{\ll j}, \overline{w}_{\sim j}) \label{eqn:mS-SSw-i-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
The non-pseudolocal terms are easily handled (see~\Cref{rmk:non-pseudolocal-exclusion}). For~\eqref{eqn:mS-SSw-i-1}, we use the bound for $\chi_{< j - 20} w_{\sim j}$ from~\eqref{eqn:w-lin-ests-L-2} together with the fact that the terms in~\eqref{eqn:mS-SSw-i-1} are almost orthogonal to find that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-SSw-i-1} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \gtrsim t^{-1/3}} \lVert u_{\ll j}\rVert_{L^\infty}^2 \lVert S_{\ll j} \rVert_{L^\infty}^2 \lVert \chi_{< j - 20} w_{\sim j} \rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
as required. For~\eqref{eqn:mS-SSw-i-2}, the improved decay of $\chi_{\geq j - 30} u_{\ll j}$ and $\chi_{\geq j - 40} S_{\ll j}$ given by~\eqref{eqn:u-lin-ests} gives us the bound
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-SSw-i-2} \rVert_{L^2} \lesssim& \lVert \chi_{\geq j - 30} u_{\ll j}\rVert_{L^\infty} \lVert \chi_{\geq j - 40} S_{\ll j} \rVert_{L^\infty}\lVert w_{\sim j}\rVert_{L^2}\\
\lesssim& M \epsilon^2 t^{-5/3}2^{-2j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient after summing in $j$.
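Indeed, summing the geometric series gives
\begin{equation*}
\sum_{2^j \geq t^{-1/3}} t^{-5/3} 2^{-2j} \lesssim t^{-5/3} \cdot t^{2/3} = t^{-1}
\end{equation*}
which matches the decay required of the other terms.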
Turning to~\eqref{eqn:mS-SSw-iv}, we write
\begin{subequations}\begin{align}
\eqref{eqn:mS-SSw-iv} =& T_{m_j^\mathcal{S}}((1 - \chi_{[j-20, j+20]})u_{\sim j}, S_{\sim j}, \overline{w}_{\ll j})\label{eqn:mS-SSw-iv-1}\\
&+ \chi_{[j-30, j+30]} T_{m_j^\mathcal{S}}(\chi_{[j-20 , j+20]} u_{\sim j}, S_{\sim j}, \chi_{[j-30 , j+30]} \overline{w}_{\ll j}) \label{eqn:mS-SSw-iv-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
The first term is easily bounded using the linear estimates:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-SSw-iv-1} \rVert_{L^2} \lesssim& \lVert (1 - \chi_{[j-20 , j+20]})u_{\sim j} \rVert_{L^\infty}\lVert S_{\sim j} \rVert_{L^\infty} \lVert w_{\ll j} \rVert_{L^2}\\
\lesssim& M\epsilon^2 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives the desired result after summing in $j$. For~\eqref{eqn:mS-SSw-iv-2}, we use almost orthogonality to write
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-SSw-iv-2} \right\rVert_{L^2} \lesssim&\left( \sum_{2^j \geq t^{-1/3}} \lVert u_{\sim j}\rVert_{L^\infty}^2\lVert S_{\sim j}\rVert_{L^\infty}^2 \lVert \chi_{[j-30 , j+30]} \overline{w}_{\ll j}\rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
The bound for~\eqref{eqn:mS-dyadic-SSw} now follows.
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-deriv-on-LS}} Here, we decompose the pseudoproduct as
\begin{subequations}\begin{align}
T_{m_j^{\mathcal{S}}}(u, \partial_x LS, \overline{w}) =& T_{m_j^{\mathcal{S}}}(u_{\ll j}, \partial_x (LS)_{\sim j}, \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim}\\
&+ Q_{\sim j} T_{m_j^{\mathcal{S}}}(u_{\ll j}, \partial_x (LS)_{\ll j}, \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim}\\
&+ T_{m_j^{\mathcal{S}}}(u_{\sim j}, \partial_x (LS)_{\lesssim j}, \overline{w}_{\ll j})\label{eqn:mS-dyadic-deriv-on-LS-sim-ll}\\
&+ Q_{\sim j} T_{m_j^{\mathcal{S}}}(u_{\sim j}, \partial_x (LS)_{\sim j}, \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-all-sim}
\end{align}\end{subequations}
For~\eqref{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim}, we have
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim}
=& T_{m_j^{\mathcal{S}}}( \chi_{> j}u_{\ll j}, \partial_x (LS)_{\sim j}, \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim-1}\\
&+ \chi_{[j - 50 , j+10]}T_{m_j^{\mathcal{S}}}( \chi_{[j - 40 , j]}u_{\ll j}, \partial_x (LS)_{\sim j}, \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim-2}\\
&+ T_{m_j^{\mathcal{S}}}( \chi_{< j - 40}u_{\ll j}, \partial_x (LS)_{\sim j}, \chi_{< j - 30} \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim-3}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
The first and third terms are straightforward to bound:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim-1} \rVert_{L^2} \lesssim& \lVert \chi_{> j} u_{\ll j} \rVert_{L^\infty} \lVert \partial_x (LS)_{\sim j} \rVert_{L^\infty} \lVert w_{\sim j} \rVert_{L^2}\\
\lesssim& M\epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}\\
\lVert \eqref{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim-3} \rVert_{L^2} \lesssim& \lVert u_{\ll j} \rVert_{L^\infty} \lVert \partial_x (LS)_{\sim j} \rVert_{L^\infty} \lVert \chi_{< j - 30} w_{\sim j} \rVert_{L^2}\\
\lesssim& M\epsilon^4 t^{-7/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
For the second term, almost orthogonality implies that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim-2} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \geq t^{-1/3}} \lVert \chi_{[j - 40 , j]}u_{\ll j} \rVert_{L^\infty}^2 \lVert \partial_x (LS)_{\sim j}\rVert_{L^\infty}^2 \lVert {w}_{\sim j} \rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives the required bound for~\eqref{eqn:mS-dyadic-deriv-on-LS-ll-sim-sim}.
Turning to~\eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim}, we have
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim} =& Q_{\sim j} T_{m_j^{\mathcal{S}}}(u_{\ll j}, \partial_x (LS)_{\ll j}, \chi_{< j - 30} \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim-1}\\
&+ Q_{\sim j} T_{m_j^{\mathcal{S}}}(\chi_{\geq j - 40}u_{\ll j}, \chi_{\geq j - 40}\partial_x (LS)_{\ll j}, \chi_{\geq j - 30} \overline{w}_{\sim j})\label{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For~\eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim-1}, we have that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim-1} \right\rVert_{L^2}\lesssim& \left(\sum_{2^j \geq t^{-1/3}}\lVert u_{\ll j} \rVert_{L^\infty}^2 \lVert \partial_x (LS)_{\ll j} \rVert_{L^\infty}^2 \lVert \chi_{< j - 30} {w}_{\sim j}\rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
while for~\eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim-2}, we have the bound
\begin{equation*}\begin{split}
\left\lVert \eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim-2} \right\rVert_{L^2} \lesssim& \lVert \chi_{\geq j - 40} u_{\ll j} \rVert_{L^\infty} \lVert \chi_{\geq j - 40} \partial_x(LS)_{\ll j} \rVert_{L^\infty} \lVert w_{\sim j} \rVert_{L^2}\\
\lesssim& M\epsilon^4 t^{-4/3} 2^{-j}\lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives the bound for~\eqref{eqn:mS-dyadic-deriv-on-LS-ll-ll-sim}.
For~\eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll}, we have
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll} =& T_{m_j^{\mathcal{S}}}((1 - \chi_{[j - 20 , j + 20]})u_{\sim j}, \partial_x (LS)_{\lesssim j}, \overline{w}_{\ll j}) \label{eqn:mS-dyadic-deriv-on-LS-sim-ll-1}\\
&+ \chi_{[j - 30 , j + 30]} T_{m_j^{\mathcal{S}}}(\chi_{[j - 20 , j + 20]}u_{\sim j}, \chi_{[j - 30 , j + 30]}\partial_x (LS)_{\lesssim j}, \chi_{[j - 30 , j + 30]} \overline{w}_{\ll j}) \label{eqn:mS-dyadic-deriv-on-LS-sim-ll-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For the first term,
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll-1} \rVert_{L^2} \lesssim& \lVert (1 - \chi_{[j - 20 , j + 20]}) u_{\sim j} \rVert_{L^\infty} \lVert \partial_x(LS)_{\lesssim j} \rVert_{L^\infty} \lVert w_{\ll j} \rVert_{L^2}\\
\lesssim& M\epsilon^4 t^{-7/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient. Turning to~\eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll-2}, we find that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll-2} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \geq t^{-1/3}} \lVert u_{\sim j} \rVert_{L^\infty}^2 \lVert \chi_{[j - 30 , j + 30]}\partial_x (LS)_{\lesssim j} \rVert_{L^\infty}^2 \lVert \chi_{[j - 30 , j + 30]} w_{\ll j} \rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which completes the bound for~\eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll}.
Finally, for~\eqref{eqn:mS-dyadic-deriv-on-LS-all-sim}, we use almost orthogonality and~\eqref{eqn:cubic-deriv-bounds-compendium} to conclude that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-deriv-on-LS-all-sim} \right\rVert_{L^2} \lesssim& \left(\sum_{2^j \geq t^{-1/3}} \lVert u_{\sim j}\rVert_{L^\infty}^2 \lVert \partial_x(LS)_{\sim j}\rVert_{L^\infty}^2 \lVert w_{\sim j}\rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M\epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
completing the bound for~\eqref{eqn:mS-dyadic-deriv-on-LS}.
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-deriv-on-S-and-Lw}} Using the support condition for $m_j^{\mathcal{S}}$, we can decompose each summand as
\begin{subequations}\begin{align}
T_{m_j^\mathcal{S}}(u, \partial_x u, \overline{Lw}) =&
T_{m_j^\mathcal{S}}(u_{\sim j}, \partial_x u_{\lesssim j}, \overline{Lw})\label{eqn:mS-dyadic-deriv-and-Lw-i}\\
&+T_{m_j^\mathcal{S}}(u_{\ll j}, \partial_x u_{\sim j}, (\overline{Lw})_{\sim j})\label{eqn:mS-dyadic-deriv-and-Lw-ii}\\
&+ Q_{\sim j} T_{m_j^\mathcal{S}}(u_{\ll j}, \partial_x u_{\ll j}, (\overline{Lw})_{\sim j})\label{eqn:mS-dyadic-deriv-and-Lw-iii}
\end{align}\end{subequations}
For~\eqref{eqn:mS-dyadic-deriv-and-Lw-i}, we introduce the further decomposition
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-deriv-and-Lw-i} =& \chi_{[j - 40 , j + 40]}T_{m_j^\mathcal{S}}( \chi_{[j - 30 , j + 30]} u_{\sim j}, \partial_x u_{\lesssim j}, \chi_{[j - 40 , j + 40]} \overline{Lw})\label{eqn:mS-dyadic-deriv-and-Lw-i-1}\\
&+ T_{m_j^\mathcal{S}}((1 - \chi_{[j - 30 , j + 30]})u_{\sim j}, \partial_x u_{\lesssim j}, \overline{Lw})\label{eqn:mS-dyadic-deriv-and-Lw-i-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For the first term, we use almost orthogonality to obtain the bound
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-deriv-and-Lw-i-1} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \geq t^{-1/3}} \lVert u_{\sim j}\rVert_{L^\infty}^2 \lVert \partial_x u_{\lesssim j}\rVert_{L^\infty}^2 \lVert \chi_{[j - 40 , j + 40]} Lw \rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
For the second term, a direct application of the decay estimates yields
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-deriv-and-Lw-i-2} \rVert_{L^2} \lesssim& \lVert (1 - \chi_{[j - 30 , j + 30]}) u_{\sim j} \rVert_{L^\infty} \lVert \partial_x u_{\lesssim j} \rVert_{L^\infty} \lVert Lw \rVert_{L^2}\\
\lesssim& M^2\epsilon^2 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is acceptable, completing the argument for~\eqref{eqn:mS-dyadic-deriv-and-Lw-i}. The bound for~\eqref{eqn:mS-dyadic-deriv-and-Lw-ii} follows from similar reasoning. For the final term~\eqref{eqn:mS-dyadic-deriv-and-Lw-iii}, we see immediately that
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:mS-dyadic-deriv-and-Lw-iii} \right\rVert_{L^2} \lesssim&\left(\sum_{2^j \geq t^{-1/3}} \lVert u_{\sim j}\rVert_{L^\infty}^2\lVert \partial_x u_{\sim j} \rVert_{L^\infty}^2\lVert (Lw)_{\sim j} \rVert_{L^2}^2 \right)^{1/2}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which completes the argument for~\eqref{eqn:mS-dyadic-deriv-on-S-and-Lw}.
\subsubsection{The bound for~\eqref{eqn:mS-dyadic-LS-and-outside-w}} It only remains to consider~\eqref{eqn:mS-dyadic-LS-and-outside-w}. We can use the support restrictions on the $m_j^\mathcal{S}$ to write
\begin{subequations}\begin{align}
T_{m_j^\mathcal{S}}(LS, \partial_xS, \overline{w}) =& Q_{\sim j} T_{m_j^\mathcal{S}}( \partial_x(LS)_{\sim j}, S_{\sim j}, \overline{w}_{\sim j}) \label{eqn:mS-dyadic-LS-and-outside-w-i}\\
& + T_{m_j^\mathcal{S}}( \partial_x(LS)_{\sim j}, \partial_x S_{\sim j}, \overline{w}_{\ll j}) \label{eqn:mS-dyadic-LS-and-outside-w-ii}\\
& + T_{m_j^\mathcal{S}}( (LS)_{\ll j}, \partial_x S_{\sim j}, \partial_x\overline{w}_{\lesssim j}) \label{eqn:mS-dyadic-LS-and-outside-w-iii}\\
& + T_{m_j^\mathcal{S}}(\partial_x (LS)_{\sim j}, S_{\ll j}, \overline{w}_{\ll j})\label{eqn:mS-dyadic-LS-and-outside-w-iv}\\
&+ Q_{\sim j}T_{m_j^\mathcal{S}}( (LS)_{\ll j}, S_{\ll j}, \partial_x \overline{w}_{\sim j})\label{eqn:mS-dyadic-LS-and-outside-w-v}
\end{align}
\end{subequations}
Ignoring complex conjugation and permuting the arguments of the pseudoproducts, we see that the terms~\eqref{eqn:mS-dyadic-LS-and-outside-w-i},~\eqref{eqn:mS-dyadic-LS-and-outside-w-ii}, and~\eqref{eqn:mS-dyadic-LS-and-outside-w-iii} are essentially identical to~\eqref{eqn:mS-dyadic-LS-sim-sim},~\eqref{eqn:mS-dyadic-deriv-on-LS-sim-ll}, and~\eqref{eqn:mS-dyadic-LS-sim-ll}, respectively. Therefore, we will focus on the last two terms. For~\eqref{eqn:mS-dyadic-LS-and-outside-w-iv}, we write
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-and-outside-w-iv} =& T_{m_j^{\mathcal{S}}}( \chi_{> j + 20} \partial_x(LS)_{\sim j}, \chi_{> j + 10} S_{\ll j}, \overline{w}_{\ll j})\label{eqn:mS-dyadic-LS-and-outside-w-iv-1}\\
&+ \chi_{[j - 30, j+30]}T_{m_j^{\mathcal{S}}}( \chi_{[j-20, j+20]} \partial_x(LS)_{\sim j}, \chi_{[j - 30, j+30]} S_{\ll j}, \chi_{[j - 30, j+30]}\overline{w}_{\ll j})\label{eqn:mS-dyadic-LS-and-outside-w-iv-2}\\
&+ \sum_{k < j - 20} T_{m_j^{\mathcal{S}}}( \chi_{k} \partial_x(LS)_{\sim j}, \chi_{\sim k} S_{\ll j}, \chi_{\sim k} \overline{w}_{\ll j})\label{eqn:mS-dyadic-LS-and-outside-w-iv-3}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}
\end{subequations}
The first and last terms can be handled using decay estimates:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-and-outside-w-iv-1} \rVert_{L^2} \lesssim& \lVert \chi_{> j + 20} \partial_x(LS)_{\sim j} \rVert_{L^\infty} \lVert\chi_{> j + 10} S_{\ll j} \rVert_{L^\infty} \lVert \overline{w}_{\ll j} \rVert_{L^2}\\
\lesssim& \epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}\\
\lVert \eqref{eqn:mS-dyadic-LS-and-outside-w-iv-3} \rVert_{L^2} \lesssim& \sum_{k < j - 20} \lVert \partial_x(LS)_{\sim j}\rVert_{L^\infty} \lVert \chi_{\sim k} S_{\ll j}\rVert_{L^\infty} \lVert \chi_{\sim k} \overline{w}_{\ll j} \rVert_{L^2}\\
\lesssim& \epsilon^2 t^{-4/3} \lVert xg\rVert_{L^2} \sum_{2^k \gtrsim t^{-1/3}} 2^{-j/2 - k/2}\\
\lesssim& \epsilon^2 t^{-7/6} 2^{-j/2} \lVert xg\rVert_{L^2}
\end{split}
\end{equation*}
For the second term, we use almost orthogonality to get the bound
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:mS-dyadic-LS-and-outside-w-iv-2} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \geq t^{-1/3}} \lVert \partial_x (LS)_{\sim j} \rVert_{L^\infty}^2 \lVert \chi_{[j-30, j+30]} S_{\ll j} \rVert_{L^\infty}^2 \lVert \chi_{[j - 30, j+30]} w_{\ll j} \rVert_{L^2}^2\right)^{1/2}\\
\lesssim& \epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}
\end{equation*}
Turning to~\eqref{eqn:mS-dyadic-LS-and-outside-w-v}, we introduce the decomposition
\begin{subequations}\begin{align}
\eqref{eqn:mS-dyadic-LS-and-outside-w-v} =& Q_{\sim j} T_{m_j^\mathcal{S}}(\chi_{> j + 30} (LS)_{\ll j}, \chi_{> j + 20} S_{\ll j}, \partial_x w_{\sim j}) \label{eqn:mS-dyadic-LS-and-outside-w-v-1}\\
&+ Q_{\sim j} T_{m_j^\mathcal{S}}(\chi_{[j - 30, j+ 30]} (LS)_{\ll j}, \chi_{[j - 40, j+ 40]} S_{\ll j}, \partial_x w_{\sim j}) \label{eqn:mS-dyadic-LS-and-outside-w-v-2}\\
&+ \sum_{k < j - 30} Q_{\sim j} T_{m_j^\mathcal{S}}(\chi_{k} (LS)_{\ll j}, \chi_{\sim k} S_{\ll j}, \chi_{\sim k} \partial_x w_{\sim j}) \label{eqn:mS-dyadic-LS-and-outside-w-v-3}
\end{align}
\end{subequations}
The first term is easily handled:
\begin{equation*}\begin{split}
\lVert \eqref{eqn:mS-dyadic-LS-and-outside-w-v-1} \rVert_{L^2} \lesssim& \lVert \chi_{> j + 30} (LS)_{\ll j} \rVert_{L^\infty}\lVert \chi_{> j + 20} S_{\ll j}\rVert_{L^\infty}\lVert \partial_x w_{\sim j} \rVert_{L^2}\\
\lesssim& \epsilon^4 t^{-4/3} 2^{-j} \lVert xg \rVert_{L^2}
\end{split}
\end{equation*}
For the remaining terms, we take advantage of the almost orthogonality coming from the frequency projection to obtain the bounds
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:mS-dyadic-LS-and-outside-w-v-2} \right\rVert_{L^2} \lesssim& \left(\sum_{2^j \gtrsim t^{-1/3}} \lVert \chi_{[j - 30, j+ 30]} (LS)_{\ll j}\rVert_{L^\infty}^2 \lVert\chi_{[j - 40, j+ 40]} S_{\ll j}\rVert_{L^\infty}^2\lVert \partial_x w_{\sim j} \rVert_{L^2}^2\right)^{1/2}\\
\lesssim& \epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:mS-dyadic-LS-and-outside-w-v-3} \right\rVert_{L^2} \lesssim& \left(\sum_{2^j \gtrsim t^{-1/3}} \left(\sum_{k < j - 30} \lVert \chi_k (LS)_{\ll j}\rVert_{L^\infty} \lVert \chi_{\sim k} S_{\ll j} \rVert_{L^\infty} \lVert \chi_{\sim k} \partial_x w_{\sim j} \rVert_{L^2}\right)^2\right)^{1/2}\\
\lesssim& \epsilon^4 t^{-4/3} \lVert xg \rVert_{L^2} \left( \sum_{2^j \gtrsim t^{-1/3}} \left( \sum_{2^k \gtrsim t^{-1/3}} 2^{-k}c_j \right)^2 \right)^{1/2}\\
\lesssim& \epsilon^4 t^{-1} \lVert xg \rVert_{L^2}
\end{split}
\end{equation*}
which completes the estimate for~\eqref{eqn:mS-dyadic-LS-and-outside-w}.
\subsection{The time non-resonant multiplier}\label{sec:xf-tr}
We control the term~\eqref{eqn:xg-time-non-res-term} by integrating by parts in $s$. Defining $m_s = \frac{i(\xi-\eta-\sigma) \partial_\xi \phi \chi_s^\mathcal{T}}{\phi}$, we can integrate by parts to obtain
\begin{subequations}\begin{align}
\int_1^t\eqref{eqn:xg-time-non-res-term}\,ds =& \mp \left.s \Re\langle e^{-s\partial_x^3} xg, T(s)\rangle\right|_{s=1}^{s=t} \label{eqn:time-non-res-bdy}\\
&\pm \Re\int_1^t \langle e^{-s\partial_x^3} xg, T(s) \rangle \,ds\label{eqn:time-non-res-no-s}\\
&\pm \Re\int_1^t s\langle e^{-s\partial_x^3} xg, \tilde{T}(s) \rangle \,ds\label{eqn:time-non-res-symbol-deriv}\\
&\pm \Re\int_1^t s\langle e^{-s\partial_x^3} xg, \mathring{T}(s)\rangle \,ds \label{eqn:time-non-res-arg-deriv}\\
&\pm \Re\int_1^t s\langle e^{-s\partial_x^3} x\partial_s g, T(s) \rangle \,ds \label{eqn:time-non-inner-product-deriv}
\end{align}\end{subequations}
where, to ease notation, we have written
\begin{equation*}\begin{split}
T(s) =& T_{m_s}(u, w, \overline{u}) + T_{m_s}(u, S, \overline{w}) + T_{m_s}(w, S, \overline{u})\\
\tilde{T}(s) =& T_{\partial_s m_s}(u, w, \overline{u}) + T_{\partial_s m_s}(u, S, \overline{w}) + T_{\partial_s m_s}(w, S, \overline{u})\\
\mathring{T}(s) =& T_{m_s}(e^{-s\partial_x^3} \partial_s f, w, \overline{u}) + T_{m_s}(u, e^{-s\partial_x^3} \partial_s g, \overline{u}) + T_{m_s}(u, w, \overline{e^{-s\partial_x^3} \partial_s f}) \\
&+ T_{m_s}(e^{-s\partial_x^3} \partial_s f, S, \overline{w})+ T_{m_s}(u, e^{-s\partial_x^3} \partial_s h, \overline{w})+ T_{m_s}(u, S, \overline{e^{-s\partial_x^3} \partial_s g})\\
&+ T_{m_s}(e^{-s\partial_x^3} \partial_s g, S, \overline{u}) + T_{m_s}(w, e^{-s\partial_x^3} \partial_s h, \overline{u}) + T_{m_s}(w, S, \overline{e^{-s\partial_x^3} \partial_s f})
\end{split}\end{equation*}
Note that the $m_t$ satisfy Coifman-Meyer type bounds uniformly in time:
\begin{equation}\label{eqn:m-t-symbol-bounds}
\left|(\xi^2 + \eta^2 + \sigma^2)^{|\alpha|/2} \partial_{\xi,\eta,\sigma}^\alpha m_t\right| \lesssim_{\alpha} 1
\end{equation}
Thus, by dividing dyadically in frequency, we can decompose pseudoproducts involving $m_t$ as
\begin{equation} \label{eqn:m-t-pseudoprod-expansion}
T_{m_t}(p,q,r) = \sum_{2^j \geq t^{-1/3}} T_{m_j^\mathcal{T}}(p_{\lesssim j},q_{\lesssim j},r_{\lesssim j})
\end{equation}
where $m_j^\mathcal{T}$ stands for a generic symbol localized to $|\xi| + |\eta| + |\sigma| \sim 2^j$ and satisfying $|\partial_{\xi,\eta,\sigma}^{\alpha} m_j^{\mathcal{T}}| \lesssim_{\alpha} 2^{-|\alpha|j}$. By~\Cref{rmk:freq-loc-symbol-bounds}, all of the pseudoproducts on the right obey H\"older type bounds.
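Concretely, this means bounds of the form
\begin{equation*}
\lVert T_{m_j^\mathcal{T}}(p, q, r) \rVert_{L^2} \lesssim \lVert p \rVert_{L^\infty} \lVert q \rVert_{L^2} \lVert r \rVert_{L^\infty}
\end{equation*}
with the $L^2$ norm falling on whichever argument is most convenient, together with their $L^p$ variants; these are used repeatedly below without further comment.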
\subsubsection{The bound for~\eqref{eqn:time-non-res-bdy}}\label{sec:T-m-s-bounds} We will first give the argument for the boundary term at $s = t$. Since none of the frequencies $\eta$, $\sigma$ and $\xi - \eta - \sigma$ plays a distinguished role, and since $S$ and $u$ obey the same decay estimates, it suffices to obtain bounds for $T_{m_s}(u,w,\overline{u})$. Using~\eqref{eqn:m-t-pseudoprod-expansion}, we find that
\begin{subequations}\begin{align}
T_{m_t}(u,w,\overline{u}) =& \sum_{2^j \geq t^{-1/3}} T_{m_j^\mathcal{T}}(u_{\sim j}, w_{\lesssim j}, \overline{u}_{\sim j})\label{eqn:m-t-dyadic-sim-sim}\\
& \qquad+ T_{m_j^\mathcal{T}}(u_{\sim j}, w_{\lesssim j}, \overline{u}_{\ll j})\label{eqn:m-t-dyadic-sim-ll}\\
&\qquad+ T_{m_j^\mathcal{T}}(u_{\ll j}, w_{\lesssim j}, \overline{u}_{\sim j})\label{eqn:m-t-dyadic-ll-sim}\\
&\qquad+ Q_{\sim j} T_{m_j^\mathcal{T}}(u_{\ll j}, w_{\sim j}, \overline{u}_{\ll j})\label{eqn:m-t-dyadic-ll-ll}
\end{align}
\end{subequations}
The terms~\cref{eqn:m-t-dyadic-ll-sim,eqn:m-t-dyadic-sim-ll} are controlled using the same arguments as for~\eqref{eqn:mS-no-deriv-low-freq}:
\begin{equation*}
\lVert \eqref{eqn:m-t-dyadic-ll-sim} \rVert_{L^2} + \lVert \eqref{eqn:m-t-dyadic-sim-ll} \rVert_{L^2} \lesssim M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{equation*}
Similarly, using the same argument as for~\eqref{eqn:mS-SSw-i}, we find that
\begin{equation*}
\lVert \eqref{eqn:m-t-dyadic-ll-ll} \rVert_{L^2} \lesssim M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{equation*}
It only remains to bound~\eqref{eqn:m-t-dyadic-sim-sim}. Here, we have that
\begin{subequations}\begin{align}
\eqref{eqn:m-t-dyadic-sim-sim} =& \sum_{2^j \geq t^{-1/3}}\chi_{[j - 30 , j+ 30]} T_{m_j^\mathcal{T}}(\chi_{[j - 20 , j + 20]} u_{\sim j}, \chi_{[j - 30 , j+30]} w_{\lesssim j}, \overline{u}_{\sim j})\label{eqn:m-t-dyadic-sim-sim-1}\\
&\qquad+
T_{m_j^\mathcal{T}}((1- \chi_{[j - 20 , j + 20]}) u_{\sim j},w_{\lesssim j}, \overline{u}_{\sim j})\label{eqn:m-t-dyadic-sim-sim-2}\\
&\qquad + \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
Both terms can be estimated using the same types of arguments we have employed previously, yielding the bounds
\begin{equation*}\begin{split}
\lVert \eqref{eqn:m-t-dyadic-sim-sim-1} \rVert_{L^2} \lesssim& \left(\sum_{2^j \geq t^{-1/3}} \lVert u_{\sim j} \rVert_{L^\infty}^4 \lVert \chi_{[j - 30 , j+30]} w_{\lesssim j} \rVert_{L^2}^2 \right)^{1/2}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
\lVert \eqref{eqn:m-t-dyadic-sim-sim-2} \rVert_{L^2} \lesssim& \sum_{2^j \geq t^{-1/3}} \lVert (1 - \chi_{[j - 20 , j + 20]}) u_{\sim j} \rVert_{L^\infty} \lVert w_{\lesssim j} \rVert_{L^2} \lVert u_{\sim j} \rVert_{L^\infty}\\
\lesssim& M^2\epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
Combining these estimates, we see that
\begin{equation}\label{eqn:T-t-bound}
\lVert T(t) \rVert_{L^2} \lesssim M^2 \epsilon^2 t^{-1} \lVert xg \rVert_{L^2}
\end{equation}
so, using Cauchy-Schwarz, we see that
\begin{equation*}
| t \langle e^{-t\partial_x^3} xg, T(t)\rangle | \lesssim M^2\epsilon^2 \lVert xg \rVert_{L^2}^2
\end{equation*}
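In more detail, the Airy group $e^{-t\partial_x^3}$ is unitary on $L^2$, so Cauchy-Schwarz together with~\eqref{eqn:T-t-bound} gives
\begin{equation*}
|t \langle e^{-t\partial_x^3} xg, T(t)\rangle| \leq t \lVert e^{-t\partial_x^3} xg \rVert_{L^2} \lVert T(t) \rVert_{L^2} = t \lVert xg \rVert_{L^2} \lVert T(t) \rVert_{L^2} \lesssim M^2\epsilon^2 \lVert xg \rVert_{L^2}^2
\end{equation*}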
Repeating the above arguments at $s = 1$ and recalling~\eqref{eqn:xg-t-1-bd}, we see that
\begin{equation*}
|\langle xg(x, 1), T(1)\rangle| \lesssim \epsilon^4
\end{equation*}
so we conclude that
\begin{equation*}
|\eqref{eqn:time-non-res-bdy}| \lesssim M^2\epsilon^2 \lVert xg \rVert_{L^2}^2 + \epsilon^4
\end{equation*}
Since $M^2\epsilon^2 \ll 1$, the first term can be absorbed into the left-hand side of~\eqref{eqn:xg-desired-bound}, and the second term is better than required.
\subsubsection{The bound for~\eqref{eqn:time-non-res-no-s}}
By using the bound for $\lVert T(s) \rVert_{L^2}$ derived above, we have at once that
\begin{equation*}
|\eqref{eqn:time-non-res-no-s}| \lesssim \int_1^t M^2 \epsilon^2 s^{-1} \lVert xg(s) \rVert_{L^2}^2 \,ds
\end{equation*}
which is acceptable.
\subsubsection{The bound for~\eqref{eqn:time-non-res-symbol-deriv}} A simple computation shows that $s\partial_s m_s$ also obeys symbol bounds of the form~\eqref{eqn:m-t-symbol-bounds}, so $\tilde{T}(s)$ obeys the same estimates as $T(s)$, giving us the bound for~\eqref{eqn:time-non-res-symbol-deriv}.
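Explicitly, since only the cutoff $\chi_s^\mathcal{T}$ depends on $s$,
\begin{equation*}
s\partial_s m_s = \frac{i(\xi-\eta-\sigma)\, \partial_\xi \phi\, \left(s\partial_s \chi_s^\mathcal{T}\right)}{\phi}
\end{equation*}
so the claim reduces to the observation that $s\partial_s \chi_s^\mathcal{T}$ satisfies the same bounds as $\chi_s^\mathcal{T}$, which holds when the cutoff depends on $s$ only through rescaled frequency variables, as is standard for such decompositions.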
\subsubsection{The bound for ~\eqref{eqn:time-non-res-arg-deriv}} By differentiating $f$, $g$, and $h$ in time, we find that
\begin{equation*}\begin{split}
e^{-s\partial_x^3}\partial_s f =& |u|^2 \partial_x u\\
e^{-s\partial_x^3}\partial_s h =& |S|^2 \partial_x S + D_pS \partial_s \hat{u}(0,s)\\
e^{-s\partial_x^3} \partial_s g =& |u|^2 \partial_x w + (w\overline{u} + u\overline{w}) \partial_x S - D_p S \partial_s \hat{u}(0,s)
\end{split}\end{equation*}
Thus, we can write
\begin{subequations}\begin{align}
\mathring{T}(s) =& \sum_{j} T_{m_j^\mathcal{T}}((|u|^2 \partial_x u), w, \overline{u})\label{eqn:G-i}\\
&\qquad+ T_{m_j^\mathcal{T}}(u, D_pS \partial_s \hat{u}(0,s), \overline{u})\label{eqn:G-ii}\\
&\qquad + T_{m_j^\mathcal{T}}(u,|u|^2 \partial_x w, \overline{u}) \label{eqn:G-iii}\\
&\qquad + T_{m_j^\mathcal{T}}(u,u \partial_x S \overline{w}, \overline{u}) \label{eqn:G-iv}\\
&+ \{\text{similar terms}\} \nonumber
\end{align}\end{subequations}
The desired bound on~\eqref{eqn:time-non-res-arg-deriv} will follow once we show that
\begin{equation*}
\lVert \mathring{T}(s) \rVert_{L^2} \lesssim M^2\epsilon^2 s^{-2} \lVert xg \rVert_{L^2} + M^2\epsilon^3s^{-11/6 - \beta}
\end{equation*}
so, recalling the factor of $s$ in~\eqref{eqn:time-non-res-arg-deriv} and the unitarity of $e^{-s\partial_x^3}$ on $L^2$, it suffices to bound the quantities~\cref{eqn:G-i,eqn:G-ii,eqn:G-iii,eqn:G-iv} in $L^2$. For~\eqref{eqn:G-i}, we can write
\begin{subequations}\begin{align}
T_{m_j^\mathcal{T}}\left((|u|^2\partial_x u)_{\lesssim j}, w_{\lesssim j}, \overline{u}_{\lesssim j}\right) =& Q_{\sim j}T_{m_j^\mathcal{T}}\left((|u|^2\partial_x u)_{\ll j}, w_{\sim j}, \overline{u}_{\ll j}\right)\label{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll} \\
&+ T_{m_j^\mathcal{T}}\left((|u|^2\partial_x u)_{\lesssim j}, w_{\sim j}, \overline{u}_{\sim j}\right) \label{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim} \\
&+ T_{m_j^\mathcal{T}}\left((|u|^2\partial_x u)_{\lesssim j}, w_{\ll j}, \overline{u}_{\sim j}\right)\label{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim}\\
&+ T_{m_j^\mathcal{T}}\left((|u|^2\partial_x u)_{\sim j}, w_{\lesssim j}, \overline{u}_{\ll j}\right)\label{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll}
\end{align}\end{subequations}
The term~\eqref{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll} is similar to~\eqref{eqn:mS-SSw-i}:
we write
\begin{subequations}\begin{align}
\eqref{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll} =& Q_{\sim j}T_{m_j^\mathcal{T}}(\chi_{< j - 30} (|u|^2\partial_x u)_{\ll j}, \chi_{< j - 20} w_{\sim j}, \overline{u}_{\ll j})\label{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll-1}\\
&+ Q_{\sim j}T_{m_j^\mathcal{T}}(\chi_{\geq j - 30} (|u|^2\partial_x u)_{\ll j}, w_{\sim j}, \chi_{\geq j - 40} \overline{u}_{\ll j})\label{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
Using~\eqref{eqn:cubic-deriv-bounds-compendium} to control the cubic $(|u|^2 \partial_x u)_{\ll j}$, we find that
\begin{equation*}\begin{split}
\left\lVert \sum_{j}\eqref{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll-1} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j\gtrsim s^{-1/3}} \lVert (|u|^2\partial_x u)_{\ll j}\rVert_{L^\infty}^2 \lVert \chi_{< j - 20} w_{\sim j}\rVert_{L^2}^2 \lVert \overline{u}_{\ll j} \rVert_{L^\infty}^2\right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
as required. Similar reasoning also gives us the bound
\begin{equation*}\begin{split}
\left\lVert \sum_{j}\eqref{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll-2} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j\gtrsim s^{-1/3}} \lVert\chi_{\geq j - 30} (|u|^2\partial_x u)_{\ll j}\rVert_{L^\infty}^2 \lVert w_{\sim j}\rVert_{L^2}^2 \lVert \chi_{\geq j - 40} \overline{u}_{\ll j} \rVert_{L^\infty}^2\right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives us the required bound for~\eqref{eqn:time-non-res-arg-deriv-cubic-ll-sim-ll}. Turning to~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim}, we find that
\begin{subequations}\begin{align}
\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim} =& T_{m_j^\mathcal{T}}((|u|^2 \partial_x u)_{\lesssim j}, w_{\sim j}, (1 - \chi_{[j - 30 , j + 30]})\overline{u}_{\sim j})\label{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-1}\\
&+ \chi_{[j - 40 , j + 40]}T_{m_j^\mathcal{T}}(\chi_{[j - 40 , j + 40]}(|u|^2 \partial_x u)_{\lesssim j}, w_{\sim j}, \chi_{[j - 30 , j + 30]}\overline{u}_{\sim j})\label{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For~\cref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-1}, the linear and cubic dispersive estimates give us the bounds
\begin{equation*}\begin{split}
\lVert \eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-1} \rVert_{L^2} \lesssim& \lVert (|u|^2\partial_x u)_{\lesssim j} \rVert_{L^\infty} \lVert w_{\sim j} \rVert_{L^2} \lVert (1 - \chi_{[j - 30 , j + 30]})\overline{u}_{\sim j} \rVert_{L^\infty}\\
\lesssim& M^4\epsilon^4 s^{-13/6} 2^{-j/2} \lVert xg \rVert_{L^2}\\
\end{split}\end{equation*}
which gives the required bound after summing in $j$. For the second term, we have
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-2} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \gtrsim s^{-1/3}} \lVert \chi_{[j - 40 , j + 40]} (|u|^2 \partial_x u)_{\lesssim j}\rVert_{L^\infty}^2 \lVert w_{\sim j} \rVert_{L^2}^2\lVert {u}_{\sim j} \rVert_{L^\infty}^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}
\end{equation*}
which is sufficient, completing the bound for~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim}. Similarly, for~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim}, we have that
\begin{subequations}\begin{align}
\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim} =& T_{m_j^\mathcal{T}}\left( (|u|^2\partial_x u)_{\lesssim j}, w_{\ll j}, (1 - \chi_{[j - 30 , j + 30]})\overline{u}_{\sim j}\right)\label{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-1}\\
&+ \chi_{[j - 30 , j + 30]}T_{m_j^\mathcal{T}}\left( \chi_{[j - 40 , j + 40]}(|u|^2\partial_x u)_{\lesssim j}, \chi_{[j - 40 , j + 40]}w_{\ll j}, \chi_{[j - 30 , j + 30]}\overline{u}_{\sim j}\right)\label{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
The estimate for~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-1} is analogous to the one for~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-1}. For~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-2}, we find that
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-2} \right\rVert_{L^2} \lesssim& \left(\sum_{j} \lVert \chi_{[j - 40 , j + 40]}(|u|^2\partial_x u)_{\lesssim j}\rVert_{L^\infty}^2 \lVert \chi_{[j - 40 , j + 40]}w_{\ll j} \rVert_{L^2}^2 \lVert {u}_{\sim j}\rVert_{L^\infty}^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
completing the bound for~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim}. Finally, for~\eqref{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll} we find that
\begin{subequations}\begin{align}
\eqref{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll} =& \sum_{k < j - 30}\chi_{\sim k}T_{m_j^\mathcal{T}}(\chi_k(|u|^2\partial_x u)_{\sim j}, \chi_{\sim k} w_{\lesssim j}, \chi_{\sim k} \overline{u}_{\ll j})\label{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-1}\\
&+ \chi_{[j - 40, j + 40]}T_{m_j^\mathcal{T}}(\chi_{[j - 30, j + 30]}(|u|^2\partial_x u)_{\sim j}, \chi_{[j - 40, j + 40]}w_{\lesssim j}, \chi_{[j - 40, j + 40]}\overline{u}_{\ll j})\label{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-2}\\
&+ T_{m_j^\mathcal{T}}(\chi_{> j + 30}(|u|^2\partial_x u)_{\sim j}, w_{\lesssim j}, \chi_{> j + 20}\overline{u}_{\ll j})\label{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-3}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
The estimate for~\eqref{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-2} is essentially the same as the one for~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-2}, and the term~\eqref{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-3} can be bounded in essentially the same manner as~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-ll-sim-1} and~\eqref{eqn:time-non-res-arg-deriv-cubic-lesssim-sim-sim-1}, so it only remains to bound~\eqref{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-1}. For this term, the refined cubic estimate~\eqref{eqn:cubic-sim-w-deriv-small} gives us the bound
\begin{equation*}
\lVert \chi_k (|u|^2\partial_x u)_{\sim j} \rVert_{L^\infty} \lesssim M^3\epsilon^3 s^{-11/6} 2^{-j/2} 2^{-k}
\end{equation*}
from which we deduce that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:time-non-res-arg-deriv-cubic-sim-lesssim-ll-1} \rVert_{L^2} \lesssim& \sum_{k < j - 30} \lVert \chi_k(|u|^2\partial_x u)_{\sim j} \rVert_{L^\infty} \lVert \chi_{\sim k} w_{\lesssim j}\rVert_{L^2} \lVert \chi_{\sim k} {u}_{\ll j} \rVert_{L^\infty}\\
\lesssim& M^4\epsilon^4 s^{-13/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient, completing the estimate for~\eqref{eqn:G-i}.
The term~\eqref{eqn:G-ii} can be controlled using the $L^p$ estimates for $D_p S$. Let us write
\begin{subequations}\begin{align}
T_{m_j^\mathcal{T}}(u_{\lesssim j}, Q_{\lesssim j} D_p S \partial_s \hat{u}(0,s), \overline{u}_{\lesssim j}) =& T_{m_j^\mathcal{T}}(u, Q_{\sim j} D_p S \partial_s \hat{u}(0,s), \overline{u})\label{eqn:G-ii-1}\\
&+ T_{m_j^\mathcal{T}}(u_{\sim j}, Q_{\ll j} D_p S \partial_s \hat{u}(0,s), \overline{u})\label{eqn:G-ii-2}\\
&+ T_{m_j^\mathcal{T}}(u_{\ll j}, Q_{\ll j} D_p S \partial_s \hat{u}(0,s), \overline{u}_{\sim j})\label{eqn:G-ii-3}
\end{align}\end{subequations}
For~\eqref{eqn:G-ii-1}, we use the $L^6$ estimates for $u$ and $Q_{\sim j} (D_p S)$ (equations~\eqref{eqn:lp-decay} and~\eqref{eqn:D-p-S-L-p-freq-loc}, respectively) to obtain the bound
\begin{equation*}\begin{split}
\left\lVert \sum_j\eqref{eqn:G-ii-1} \right\rVert_{L^2} \lesssim & \sum_{2^j \gtrsim s^{-1/3}}\lVert u \rVert_{L^6}^2 \lVert Q_{\sim j} D_p S \rVert_{L^6} |\partial_s \hat{u}(0,s)|\\
\lesssim& M^5\epsilon^5 s^{-17/9 - \beta} \sum_{2^j \gtrsim s^{-1/3}} \ln(2 + s^{1/3} 2^j) 2^{-j/6}\\
\lesssim& M^5 \epsilon^5 s^{-11/6 - \beta}
\end{split}\end{equation*}
which is better than required, since $\epsilon \ll M^{-3/2}$; here we used $\sum_{2^j \gtrsim s^{-1/3}} \ln(2 + s^{1/3} 2^j)\, 2^{-j/6} \lesssim s^{1/18}$ together with $-17/9 + 1/18 = -11/6$. For~\cref{eqn:G-ii-2,eqn:G-ii-3}, similar reasoning using the $L^6$ bounds for $u_{\sim j}$ and $Q_{\ll j} D_p S$ (given in~\eqref{eqn:freq-loc-lp-decay} and~\eqref{eqn:D-p-S-L-p}) yields
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:G-ii-2} + \eqref{eqn:G-ii-3} \right\rVert_{L^2}
\lesssim& M^5 \epsilon^5 s^{-11/6 - \beta}
\end{split}\end{equation*}
completing the argument for~\eqref{eqn:G-ii}.
Let us now consider the term~\eqref{eqn:G-iii}. The estimates here require us to obtain bounds for cubic expressions of the form $|u|^2 \partial_x w$. Since $g \not\in X$, these bounds do not immediately follow from the work in~\Cref{sec:cubic-bounds}, so we instead argue directly. By combining estimates from~\eqref{eqn:u-lin-ests} and~\eqref{eqn:w-lin-ests-L-2}, we see that
\begin{equation}\label{eqn:cubic-u2-w-space-loc-bound}\begin{split}
\lVert \chi_k |u|^2 \partial_x w \rVert_{L^\infty} \lesssim M^2 \epsilon^2 s^{-3/2} \lVert xg \rVert_{L^2}c_k
\end{split}\end{equation}
which immediately implies that
\begin{equation}\label{eqn:cubic-u2-w-bound}\begin{split}
\lVert |u|^2 \partial_x w \rVert_{L^\infty} \lesssim& \sup_{k} \lVert \chi_k |u|^2 \partial_x w \rVert_{L^\infty} \\
\lesssim& M^2 \epsilon^2 s^{-3/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation}
since the supports of the $\chi_k$ have only finite overlap. We will also need refined estimates in the spirit of~\eqref{eqn:cubic-sim-with-k} for $\chi_k (|u|^2\partial_x w)_{\sim j}$ when $k < j - 30$. To obtain these bounds, we perform a Littlewood-Paley decomposition to write
\begin{equation*}\begin{split}
\chi_k (|u|^2\partial_x w)_{\sim j} =& \chi_k Q_{\sim j} \chi_{\sim k} \left( |u_{< j - 20}|^2 \partial_x w_{[j - 20 , j + 20]} + 2 \Re(u_{< j - 20} \overline{u}_{[j - 20 , j + 20]}) \partial_x w_{< j - 20} \right)\\
& + \btext{better}
\end{split}\end{equation*}
Here, $\btext{better}$ denotes more rapidly decaying terms which can be neglected. We estimate the leading-order terms as
\begin{equation*}
\lVert \chi_{\sim k} |u_{< j - 20}|^2 \partial_x w_{[j - 20 , j + 20]} \rVert_{L^\infty} \lesssim M^2 \epsilon^2 s^{-11/6} 2^{-k} \lVert xg \rVert_{L^2} c_j
\end{equation*}
and
\begin{equation*}
\lVert \chi_{\sim k} u_{< j - 20} \overline{u}_{[j - 20 , j + 20]} \partial_x w_{< j - 20} \rVert_{L^\infty} \lesssim M^2\epsilon^2 s^{-11/6} 2^{k/2 - 3j/2} \lVert xg \rVert_{L^2}
\end{equation*}
so
\begin{equation}\label{eqn:cubic-u2-w-refined-bound}
\lVert \chi_k (|u|^2\partial_x w)_{\sim j} \rVert_{L^\infty} \lesssim M^2 \epsilon^2 s^{-11/6} 2^{-k} \lVert xg \rVert_{L^2}c_j
\end{equation}
where we have used that $2^{k/2 - 3j/2} = 2^{-k}\, 2^{3(k - j)/2}$ and that $2^{3(k - j)/2}$ is $\ell^2$-summable in $j$ when $k < j - 30$. Now, let us write
\begin{subequations}\begin{align}
T_{m_j^\mathcal{T}}(u, |u|^2\partial_x w, \overline{u}) =& T_{m_j^\mathcal{T}}(u_{\sim j}, |u|^2\partial_x w, \overline{u}_{\lesssim j})\label{eqn:G-iii-1}\\
& + T_{m_j^\mathcal{T}}(u_{\ll j}, |u|^2\partial_x w, \overline{u}_{\sim j})\label{eqn:G-iii-2}\\
& + Q_{\sim j}T_{m_j^\mathcal{T}}(u_{\ll j}, (|u|^2\partial_x w)_{\sim j}, \overline{u}_{\ll j})\label{eqn:G-iii-3}
\end{align}\end{subequations}
For the term~\eqref{eqn:G-iii-1}, we write
\begin{subequations}\begin{align}
\eqref{eqn:G-iii-1} =& \chi_{[j - 40, j + 40]} T_{m_j^\mathcal{T}}(u_{\sim j}, \chi_{[j - 40, j + 40]} |u|^2\partial_x w, \chi_{[j - 30, j + 30]}\overline{u}_{\lesssim j})\label{eqn:G-iii-1-1}\\
&+ T_{m_j^\mathcal{T}}((1- \chi_{[j - 20, j + 20]})u_{\sim j}, |u|^2\partial_x w, (1- \chi_{[j - 30, j + 30]})\overline{u}_{\lesssim j})\label{eqn:G-iii-1-2}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For the subterm~\eqref{eqn:G-iii-1-2}, the $L^4$ estimates from~\eqref{eqn:u-lin-ests} together with~\eqref{eqn:cubic-u2-w-bound} imply
\begin{equation*}\begin{split}
\lVert \eqref{eqn:G-iii-1-2} \rVert_{L^2} \lesssim& \lVert (1- \chi_{[j - 20, j + 20]})u_{\sim j} \rVert_{L^4} \lVert |u|^2 \partial_x w \rVert_{L^\infty} \lVert (1- \chi_{[j - 30, j + 30]})u_{\lesssim j} \rVert_{L^4}\\
\lesssim& M^4\epsilon^4 s^{-13/6} 2^{-j/2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is acceptable. Turning to the subterm~\eqref{eqn:G-iii-1-1}, we use~\eqref{eqn:cubic-u2-w-space-loc-bound} to find that
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:G-iii-1-1} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \gtrsim s^{-1/3}} \lVert u_{\sim j} \rVert_{L^4}^2 \lVert \chi_{[j - 40 , j + 40]}|u|^2 \partial_x w \rVert_{L^\infty}^2 \lVert \chi_{[j - 30, j + 30]} u_{\lesssim j} \rVert_{L^4}^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which completes the bound for~\eqref{eqn:G-iii-1}. The bound for~\eqref{eqn:G-iii-2} is identical, since none of the estimates change if we replace $u_{\lesssim j}$ with $u_{\ll j}$ and permute the arguments of the pseudoproduct. For~\eqref{eqn:G-iii-3}, we have
\begin{subequations}\begin{align}
\eqref{eqn:G-iii-3} =& Q_{\sim j}\sum_{k < j - 30}T_{m_j^\mathcal{T}}(\chi_{\sim k}u_{\ll j}, \chi_k(|u|^2\partial_x w)_{\sim j}, \chi_{\sim k} \overline{u}_{\ll j})\label{eqn:G-iii-3-1}\\
&+ Q_{\sim j}T_{m_j^\mathcal{T}}(\chi_{[j - 40 , j + 40]}u_{\ll j}, \chi_{[j - 30 , j + 30]}(|u|^2\partial_x w)_{\sim j}, \chi_{[j - 40 , j + 40]} \overline{u}_{\ll j})\label{eqn:G-iii-3-2}\\
&+ Q_{\sim j}T_{m_j^\mathcal{T}}(\chi_{> j + 20}u_{\ll j}, \chi_{> j + 30}(|u|^2\partial_x w)_{\sim j}, \chi_{> j + 20}\overline{u}_{\ll j})\label{eqn:G-iii-3-3}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For the first subterm, we use the refined cubic bound~\eqref{eqn:cubic-u2-w-refined-bound} to conclude that
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:G-iii-3-1} \right\rVert_{L^2} \lesssim& \left( \sum_{j} \left( \sum_{k < j - 30} \lVert \chi_{\sim k}u_{\ll j}\rVert_{L^4}^2 \lVert \chi_k(|u|^2\partial_x w)_{\sim j}\rVert_{L^\infty}\right)^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-7/3} \lVert xg \rVert_{L^2} \left( \sum_{j} \left( \sum_{\substack{2^k \gtrsim s^{-1/3}\\ k < j - 30}} 2^{-k} c_j\right)^{2} \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
as required, using $\sum_{2^k \gtrsim s^{-1/3}} 2^{-k} \lesssim s^{1/3}$ and the square-summability of the $c_j$. For the second term, we use the bound~\eqref{eqn:cubic-u2-w-space-loc-bound} to control $\chi_{[j - 30 , j + 30]} |u|^2 \partial_x w$, yielding
\begin{equation*}\begin{split}
\left\lVert \sum_{j} \eqref{eqn:G-iii-3-2} \right\rVert_{L^2} \lesssim& \left( \sum_{2^j \gtrsim s^{-1/3}} \lVert \chi_{[j - 40 , j + 40]} u_{\ll j} \rVert_{L^4}^4 \lVert \chi_{[j - 30 , j + 30]}(|u|^2 \partial_x w)_{\sim j} \rVert_{L^\infty}^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
Finally, for the last term, we use the refined $L^4$ bounds for $\chi_{> j + 20} u_{\ll j}$ given in~\eqref{eqn:u-lin-ests} to conclude that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:G-iii-3-3} \rVert_{L^2} \lesssim& \lVert \chi_{> j + 20} u_{\ll j} \rVert_{L^4}^2 \lVert |u|^2 \partial_x w \rVert_{L^\infty}\\
\lesssim& M^4\epsilon^4 s^{-8/3} 2^{-2j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient, completing the bound for~\eqref{eqn:G-iii}.
The argument for~\eqref{eqn:G-iv} is quite similar. We have that
\begin{subequations}\begin{align}
T_{m_j^\mathcal{T}}(u, u\partial_x S \overline{w}, \overline{u}) =& T_{m_j^\mathcal{T}}(u_{\sim j}, u\partial_x S \overline{w}, \overline{u}_{\lesssim j})\label{eqn:G-iv-1}\\
& + T_{m_j^\mathcal{T}}(u_{\ll j}, u\partial_x S \overline{w}, \overline{u}_{\sim j})\label{eqn:G-iv-2}\\
& + Q_{\sim j}T_{m_j^\mathcal{T}}(u_{\ll j}, (u\partial_x S \overline{w})_{\sim j}, \overline{u}_{\ll j})\label{eqn:G-iv-3}
\end{align}\end{subequations}
Now, the decay estimates given in~\Cref{thm:lin-decay-lemma} and \Cref{thm:w-lin-decay} immediately imply that
\begin{equation*}
\lVert \chi_k u \partial_x S \overline{w} \rVert_{L^\infty} \lesssim M\epsilon^2 s^{-3/2} \lVert xg \rVert_{L^2} c_k
\end{equation*}
and
\begin{equation*}
\lVert u \partial_x S \overline{w} \rVert_{L^\infty} \lesssim M\epsilon^2 s^{-3/2} \lVert xg \rVert_{L^2}
\end{equation*}
which are analogous to the estimates~\eqref{eqn:cubic-u2-w-space-loc-bound} and~\eqref{eqn:cubic-u2-w-bound}. Thus, we can control~\cref{eqn:G-iv-1,eqn:G-iv-2} by the same arguments used for~\cref{eqn:G-iii-1,eqn:G-iii-2}, respectively. Moreover, we see that for $k < j - 30$, we can write
\begin{equation*}\begin{split}
\chi_k (u \partial_x S \overline{w})_{\sim j} =& \chi_k Q_{\sim j} \chi_{\sim k} \left(u_{< j - 20} \partial_x S_{< j - 20} w_{[j - 20 , j + 20]}\right)\\
&+\chi_k Q_{\sim j} \chi_{\sim k}\left( (u_{< j - 20} \partial_x S_{[j - 20, j + 20]} + u_{[j - 20 , j + 20]} \partial_x S_{< j - 20}) w_{< j - 20}\right)\\
&+ \btext{better}
\end{split}\end{equation*}
where $\btext{better}$ denotes terms which decay faster on the support of $\chi_k$. For the first term, the decay estimates from~\eqref{eqn:u-lin-ests} and~\eqref{eqn:w-lin-ests-L-inf} give the bound
\begin{equation*}\begin{split}
\lVert \chi_k u_{< j - 20} \partial_x S_{< j - 20} w_{[j - 20 , j + 20]} \rVert_{L^\infty} \lesssim& M\epsilon^2 s^{-11/6} 2^{-j} \lVert xg \rVert_{L^2} c_j
\end{split}\end{equation*}
and similarly,
\begin{equation*}\begin{split}
\lVert \chi_k u_{< j - 20} \partial_x S_{[j - 20, j + 20]} w_{< j - 20} \rVert_{L^\infty} \lesssim& M\epsilon^2 s^{-11/6} 2^{-(j+k)/2} \lVert xg \rVert_{L^2} c_k
\end{split}\end{equation*}
with an identical estimate holding for $u_{[j - 20 , j + 20]} \partial_x S_{< j - 20} w_{< j - 20}$. Thus, after noting that $2^{-(j+k)/2} = 2^{-k}\, 2^{(k-j)/2} = 2^{-k} c_j$ uniformly in $k$, since $2^{(k-j)/2}$ is $\ell^2$-summable in $j$ for $k < j - 30$, we find that
\begin{equation*}\begin{split}
\lVert \chi_k (u \partial_x S \overline{w})_{\sim j} \rVert_{L^\infty} \lesssim& M\epsilon^2 s^{-11/6} \lVert xg \rVert_{L^2} 2^{-k} c_j
\end{split}\end{equation*}
which matches~\eqref{eqn:cubic-u2-w-refined-bound}. Thus, the term~\eqref{eqn:G-iv-3} can be controlled in the same manner as~\eqref{eqn:G-iii-3}, which completes the argument for~\eqref{eqn:G-iv}. Combining all these bounds, we find that
\begin{equation*}
\lVert \mathring{T}(s) \rVert_{L^2} \lesssim M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2} + M^5\epsilon^5 s^{-11/6 - \beta}
\end{equation*}
which is better than required, since $\epsilon \ll M^{-3/2}$.
\subsubsection{The bound for~\eqref{eqn:time-non-inner-product-deriv}} The only remaining term is~\eqref{eqn:time-non-inner-product-deriv}. Expanding $x \partial_s g$ using~\eqref{eqn:xg-division} gives
\begin{equation*}\begin{split}
\eqref{eqn:time-non-inner-product-deriv} =& \pm \Im \int_1^t s^2 \Big\langle T_{\partial_\xi \phi}(u, \partial_x w, \overline{u}) + T_{\partial_\xi \phi}(u, \partial_x S, \overline{w}) + T_{\partial_\xi \phi}(w, \partial_x S, \overline{u}),T(s) \Big\rangle\,ds\\
&\mp\Re \int_1^t s \Big\langle |u|^2 \partial_x Lw, T(s) \Big\rangle\,ds\\
&\mp\Re \int_1^t s \Big\langle |u|^2 w + 2\Re(u\overline{w})S + \Re(u\overline{w}) \partial_x LS, T(s) \Big\rangle\,ds\\
&\mp \Re \int_1^t s \Big\langle LD_p S \partial_s \hat{u}(0,s), T(s) \Big\rangle\,ds
\end{split}\end{equation*}
If we expand the $T_{\partial_\xi \phi}$ pseudoproducts using~\eqref{eqn:basic-STR-division} and observe that
\begin{equation*}
i(\xi - \eta - \sigma) \partial_\xi \phi \chi^\mathcal{T}_s = \phi m_s
\end{equation*}
then we can write
\begin{subequations}\begin{align}
\begin{split}\eqref{eqn:time-non-inner-product-deriv} =& \pm \Im \int_1^t s^2 \Big\langle T_{\phi m_s}(u,w, \overline{u}) + T_{\phi m_s}(u, S, \overline{w}) + T_{\phi m_s}(w, S, \overline{u}), T(s) \Big\rangle\,ds\end{split}\label{eqn:TNR-IPD-time-pseudoprod-cancel}\\%
&\mp\Re \int_1^t s \Big\langle |u|^2 \partial_x Lw, T(s) \Big\rangle\,ds\label{eqn:TNR-IPD-bad-cubic}\\
&\mp\Re \int_1^t s \Big\langle H(s), T(s) \Big\rangle\,ds\label{eqn:TNR-IPD-H}\\
&\mp \Re \int_1^t s\Big\langle L D_p S \partial_s \hat{u}(0,s), T(s) \Big\rangle\,ds\label{eqn:TNR-IPD-LD-p-S}
\end{align}\end{subequations}
where
\begin{equation*}\begin{split}
H(s) =& e^{-s\partial_x^3} \left(T_{\partial_\xi \phi e^{is\phi}\chi^\mathcal{R}_s}(f, \partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{is\phi}\chi^\mathcal{R}_s}(f, \partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{is\phi} \chi^\mathcal{R}_s}(g, \partial_x h, \overline{f}) \right)\\
&+ e^{-s\partial_x^3} \left(T_{\partial_\xi \phi e^{is\phi}\chi^\mathcal{S}_s}(f, \partial_x g, \overline{f}) + T_{\partial_\xi \phi e^{is\phi}\chi^\mathcal{S}_s}(f, \partial_x h, \overline{g}) + T_{\partial_\xi \phi e^{is\phi} \chi^\mathcal{S}_s}(g, \partial_x h, \overline{f}) \right)\\
&+ |u|^2 w + 2\Re(u\overline{w})S + \Re(u\overline{w}) \partial_x LS
\end{split}\end{equation*}
We first consider the term~\eqref{eqn:TNR-IPD-H}. The arguments from the previous subsections let us conclude that
\begin{equation*}
\lVert H(s) \rVert_{L^2} \lesssim M^2\epsilon^2 s^{-1} \lVert xg(s) \rVert_{L^2}
\end{equation*}
Thus, using the bound~\eqref{eqn:T-t-bound} for $T(s)$, we immediately have that
\begin{equation*}
\left| \eqref{eqn:TNR-IPD-H} \right| \lesssim \int_1^t M^4\epsilon^4s^{-1} \lVert xg(s) \rVert_{L^2}^2\,ds
\end{equation*}
which is better than required. Similar reasoning using~\eqref{eqn:L-D-p-S-hat-u-bd} shows that
\begin{equation*}
\left| \eqref{eqn:TNR-IPD-LD-p-S} \right| \lesssim \int_1^t M^7\epsilon^7s^{-5/6-\beta} \lVert xg(s) \rVert_{L^2}\,ds
\end{equation*}
which is again better than required. Turning to the term~\eqref{eqn:TNR-IPD-bad-cubic}, we note that
\begin{equation*}
\left\langle |u|^2 \partial_x Lw, T(s)\right\rangle = - \left\langle Lw, \partial_x \left(|u|^2 T(s)\right)\right\rangle
\end{equation*}
so the desired bound will follow immediately if we can show that
\begin{equation*}
\lVert \partial_x \left(|u|^2\, T(s)\right) \rVert_{L^2} \lesssim M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{equation*}
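This reduction uses the Leibniz rule to split the derivative as
\begin{equation*}
\partial_x\left( |u|^2\, T(s) \right) = \left( \partial_x |u|^2 \right) T(s) + |u|^2\, \partial_x T(s)
\end{equation*}
and we estimate the two pieces separately.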
Using~\Cref{thm:simple-bilinear-decay} and the estimates in~\Cref{sec:T-m-s-bounds}, we see that
\begin{equation*}
\lVert \left(\partial_x |u|^2\right) T(s) \rVert_{L^2} \lesssim M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{equation*}
so it suffices to prove that
\begin{equation*}
\left\lVert |u|^2 \partial_x T(s) \right\rVert_{L^2} \lesssim M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{equation*}
We will show how to obtain the bound for $|u|^2 \partial_x T_{m_s}(u, w,\overline{u})$: the bounds for the other terms are similar. Recalling that $m_s = \frac{i(\xi-\eta-\sigma) \partial_\xi \phi \chi_s^\mathcal{T}}{\phi}$, we can write
\begin{subequations}\begin{align}
|u|^2\partial_x T_{m_s}(u,w,\overline{u}) =& \sum_{2^j \gtrsim s^{-1/3}} |u|^2 T_{m_j^\mathcal{T}}(u, \partial_x w, \overline{u})\notag\\
=&\sum_{2^j \gtrsim s^{-1/3}} |u|^2 T_{m_j^\mathcal{T}}(u_{\sim j}, \partial_x w_{\lesssim j}, \overline{u})\label{eqn:TNR-IPD-BC-1}\\
&\qquad+ |u|^2 T_{m_j^\mathcal{T}}(u_{\ll j}, \partial_x w_{\lesssim j}, \overline{u}_{\sim j})\label{eqn:TNR-IPD-BC-2}\\
&\qquad+ |u|^2 Q_{\sim j} T_{m_j^\mathcal{T}}(u_{\ll j}, \partial_x w_{\sim j}, \overline{u}_{\ll j})\label{eqn:TNR-IPD-BC-3}
\end{align}\end{subequations}
For~\eqref{eqn:TNR-IPD-BC-1}, we perform a further division in physical space
\begin{subequations}\begin{align}
|u|^2 T_{m_j^\mathcal{T}}(u_{\sim j}, \partial_x w_{\lesssim j}, \overline{u}) =& \sum_{k < j - 30} \chi_k |u|^2 T_{m_j^\mathcal{T}}(\chi_{\sim k} u_{\sim j}, \chi_{\sim k}\partial_x w_{\lesssim j}, \chi_{\sim k}\overline{u})\label{eqn:TNR-IPD-BC-1-1}\\
&+ \chi_{[j - 30 , j + 30]} |u|^2 T_{m_j^\mathcal{T}}( u_{\sim j}, \chi_{[j - 40 , j + 40]}\partial_x w_{\lesssim j}, \chi_{[j - 40 , j + 40]}\overline{u})\label{eqn:TNR-IPD-BC-1-2}\\
&+ \chi_{> j + 30} |u|^2 T_{m_j^\mathcal{T}}(\chi_{> j + 20} u_{\sim j}, \chi_{> j + 20} \partial_x w_{\lesssim j}, \chi_{> j + 20} \overline{u})\label{eqn:TNR-IPD-BC-1-3}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For the first sub-term, we have that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:TNR-IPD-BC-1-1} \rVert_{L^2} \lesssim& \sum_{k < j - 30} \lVert \chi_{\sim k} u \rVert_{L^\infty}^3 \lVert \chi_{\sim k} u_{\sim j} \rVert_{L^\infty} \lVert \chi_{\sim k} \partial_x w_{\lesssim j} \rVert_{L^2}\\
\lesssim& M^4\epsilon^4 s^{-7/3} \sum_{k < j - 30} 2^{k/2 - 3j/2} \lVert xg \rVert_{L^2}\\
\lesssim& M^4\epsilon^4 s^{-7/3}2^{-j}\lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which is sufficient. For the second sub-term, we use almost orthogonality to conclude that
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:TNR-IPD-BC-1-2} \right\rVert_{L^2} \lesssim& \left(\sum_{j} \lVert \chi_{[j - 40 , j + 40]} u \rVert_{L^\infty}^6\lVert u_{\sim j}\rVert_{L^\infty}^2 \lVert \chi_{[j - 40 , j + 40]}\partial_x w_{\lesssim j} \rVert_{L^2}^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
as required. For the final sub-term, we use the $L^\infty$ estimates from~\eqref{eqn:w-lin-ests-L-inf} to conclude that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:TNR-IPD-BC-1-3} \rVert_{L^2} \lesssim& \lVert \chi_{> j + 20} u \rVert_{L^6}^3 \lVert \chi_{> j + 20} u_{\sim j} \rVert_{L^\infty} \lVert \chi_{> j + 20} \partial_x w_{\lesssim j} \rVert_{L^\infty}\\
\lesssim& M^4\epsilon^4 s^{-8/3} 2^{-2j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which can be summed over $2^j \gtrsim s^{-1/3}$ to give the desired result. Collecting the bounds now gives the bound for~\eqref{eqn:TNR-IPD-BC-1}. Since $u_{\ll j}$ satisfies better decay estimates than $u$, we can bound~\eqref{eqn:TNR-IPD-BC-2} in the same way. For~\eqref{eqn:TNR-IPD-BC-3}, we write
\begin{subequations}\begin{align}
|u|^2 T_{m_j^\mathcal{T}}(u_{\ll j}, \partial_x w_{\sim j}, \overline{u}_{\ll j}) =& \sum_{k < j - 30} \chi_k |u|^2 Q_{\sim j} T_{m_j^\mathcal{T}}(\chi_{\sim k} u_{\ll j}, \chi_{\sim k} \partial_x w_{\sim j}, \chi_{\sim k} \overline{u}_{\ll j})\label{eqn:TNR-IPD-BC-3-1}\\
&+ \chi_{[j - 30 , j + 30]} |u|^2 T_{m_j^\mathcal{T}}(\chi_{[j - 40 , j + 40]} u_{\ll j}, \partial_x w_{\sim j}, \chi_{[j - 40 , j + 40]} \overline{u}_{\ll j})\label{eqn:TNR-IPD-BC-3-2}\\
&+ \chi_{> j + 30} |u|^2 T_{m_j^\mathcal{T}}(\chi_{> j + 20} u_{\ll j}, \chi_{> j - 20}\partial_x w_{\sim j}, \chi_{> j + 20} \overline{u}_{\ll j})\label{eqn:TNR-IPD-BC-3-3}\\
&+ \btext{non-pseudolocal terms}\notag
\end{align}\end{subequations}
For the first term, we note that we can interchange the order of summation to obtain
\begin{equation*}
\left\lVert \sum_j \eqref{eqn:TNR-IPD-BC-3-1} \right\rVert_{L^2} = \left\lVert \sum_k \chi_k |u|^2 \sum_{j > k + 30} Q_{\sim j} T_{m_j^\mathcal{T}}(\chi_{\sim k} u_{\ll j}, \chi_{\sim k} \partial_x w_{\sim j}, \chi_{\sim k} \overline{u}_{\ll j}) \right\rVert_{L^2}
\end{equation*}
Thus, using the almost orthogonality in $j$, we find that
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:TNR-IPD-BC-3-1} \right\rVert_{L^2} \lesssim& \sum_k \lVert \chi_{\sim k} u \rVert_{L^\infty}^2 \left(\sum_{j > k + 30} \lVert \chi_{\sim k} u_{\ll j} \rVert_{L^4}^4 \lVert \chi_{\sim k} \partial_x w_{\sim j} \rVert_{L^\infty}^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-7/6} \lVert xg \rVert_{L^2} \sum_k 2^{-k/2} \left(\sum_{j > k + 30} c_j^2 \right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
Turning to the next sub-term, we again use almost orthogonality to obtain
\begin{equation*}\begin{split}
\left\lVert \sum_j \eqref{eqn:TNR-IPD-BC-3-2} \right\rVert_{L^2} \lesssim& \left(\sum_{j} \lVert \chi_{[j-40, j+40]} u \rVert_{L^\infty}^4 \lVert \chi_{[j-40, j+40]} u_{\ll j} \rVert_{L^\infty}^4 \lVert \partial_x w_{\sim j}\rVert_{L^2}^2\right)^{1/2}\\
\lesssim& M^4\epsilon^4 s^{-2} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
For the final sub-term, we have that
\begin{equation*}\begin{split}
\lVert \eqref{eqn:TNR-IPD-BC-3-3} \rVert_{L^2} \lesssim& \lVert \chi_{> j - 20} u \rVert_{L^\infty}^2 \lVert \chi_{> j - 20} u_{\ll j} \rVert_{L^4}^2 \lVert \chi_{> j - 20} \partial_x w_{\sim j} \rVert_{L^\infty}\\
\lesssim& M^4\epsilon^4 s^{-3} 2^{-3j} \lVert xg \rVert_{L^2}
\end{split}\end{equation*}
which gives the desired bound after summing. This completes the argument for~\eqref{eqn:TNR-IPD-BC-3}.
It only remains to bound the contribution from~\eqref{eqn:TNR-IPD-time-pseudoprod-cancel}. Observe that
\begin{equation*}\begin{split}
\frac{1}{2}\partial_s \lVert T(s) \rVert_{L^2}^2 =& \Re \left\langle \partial_s T(s), T(s)\right\rangle\\
=& \Re \left\langle e^{-s\partial_x^3} \partial_s e^{s\partial_x^3}T(s), T(s)\right\rangle - \Re \left\langle e^{-s\partial_x^3} \partial_x^3 e^{s\partial_x^3}T(s), T(s)\right\rangle
\end{split}\end{equation*}
The second term on the last line vanishes because $\partial_x^3$ is a skew-adjoint operator. Noting that
\begin{equation*}
e^{s\partial_x^3}T(s) = T_{m_se^{is\phi}}(f,g,\overline{f}) + T_{m_se^{is\phi}}(f,h,\overline{g}) + T_{m_se^{is\phi}}(g,h,\overline{f})
\end{equation*}
we see that
\begin{equation}\label{eqn:time-double-pseudoprod-cancellation}\begin{split}
\frac{1}{2}\partial_s \lVert T(s) \rVert_{L^2}^2 =& \Re\left\langle T_{m_s\partial_s e^{is\phi}}(f,g,\overline{f}) + T_{m_s \partial_s e^{is\phi}}(f,h,\overline{g}) + T_{m_s \partial_s e^{is\phi}}(g,h,\overline{f}), T(s) \right\rangle\\
&+ \Re\left\langle \tilde{T}(s) + \mathring{T}(s), T(s) \right\rangle\\
\end{split}\end{equation}
Now,
\begin{equation*}
i\partial_\xi \phi (\xi - \eta - \sigma) e^{is\phi} \chi^\mathcal{T}_s = \phi m_s e^{is\phi} = -im_s \partial_s e^{is\phi}
\end{equation*}
so we can use~\eqref{eqn:time-double-pseudoprod-cancellation} to write
\begin{subequations}\begin{align}
\eqref{eqn:TNR-IPD-time-pseudoprod-cancel} =& \mp \Re \int_1^t s^2 \left\langle T_{m_s\partial_s e^{is\phi}}(f,g,\overline{f}) + T_{m_s \partial_s e^{is\phi}}(f,h,\overline{g}) + T_{m_s \partial_s e^{is\phi}}(g,h,\overline{f}), T(s)\right\rangle \,ds\notag\\
=& \mp \frac{1}{2} \int_1^t s^2 \partial_s\lVert T(s) \rVert_{L^2}^2 \,ds\label{eqn:TNR-IPD-C-1}\\
&\pm \Re \int_1^t s^2 \langle \tilde{T}(s) + \mathring{T}(s), T(s) \rangle\,ds\label{eqn:TNR-IPD-C-2}
\end{align}\end{subequations}
The bounds for $T(s)$, $\tilde{T}(s)$, and $\mathring{T}(s)$ imply that
\begin{equation*}\begin{split}
|\eqref{eqn:TNR-IPD-C-2}| \lesssim& \int_1^t s^2\lVert \tilde{T}(s) + \mathring{T}(s) \rVert_{L^2}\lVert T(s) \rVert_{L^2} \,ds\\
\lesssim& \int_1^t M^4\epsilon^4 s^{-1}\lVert xg \rVert_{L^2}^2 + M^4\epsilon^5 s^{-5/6-\beta}\lVert xg \rVert_{L^2}\,ds
\end{split}\end{equation*}
as required. Turning to~\eqref{eqn:TNR-IPD-C-1}, integration by parts shows that
\begin{equation*}\begin{split}
|\eqref{eqn:TNR-IPD-C-1}| =& \left|- \left.\frac{1}{2}s^2 \lVert T(s) \rVert_{L^2}^2 \right|_{s=1}^{s=t} + \int_1^t s \lVert T(s)\rVert_{L^2}^2 \,ds\right|\\
\lesssim& M^4\epsilon^4 \lVert xg \rVert_{L^2}^2 + \epsilon^6 + \int_1^t M^4\epsilon^4 s^{-1} \lVert xg (s) \rVert_{L^2}^2\,ds
\end{split}\end{equation*}
which is better than required, completing the bound for~\eqref{eqn:time-non-inner-product-deriv}.
\section{The pointwise bounds in Fourier space}\label{sec:L-infty-est}
In this section, we will show how to control $\hat{f}(\xi, t)$ and $\partial_t \hat{u}(0,t)$. For $|\xi| < t^{-1/3}$, the H\"older continuity of $\hat{f}$ in $\xi$ reduces the problem to estimating $\hat{f}(0,t)$, which in turn reduces to showing that~\eqref{eqn:desired-zero-mode-conv} holds for $\partial_t \hat{u}(0,t)$. We show this improved decay by taking advantage of the self-similar structure at low frequencies. On the other hand, when $|\xi| \geq t^{-1/3}$, we show that $\hat{f}(\xi,t)$ essentially has ODE dynamics, which produce a logarithmic phase correction given by~\eqref{eqn:phase-rot-dynamics-f-hat}.
\subsection{The low-frequency bounds}\label{sec:low-freq-bdds}
We first prove the bound~\eqref{eqn:desired-zero-mode-conv} on $|\partial_t\hat{u}(0,t)|$. Note that $\partial_t\hat{u}(0,t)$ satisfies
\begin{equation*}
\partial_t \hat{u}(0,t) = \pm\frac{1}{\sqrt{2\pi}}\int |u|^2 \partial_x u \,dx
\end{equation*}
Recalling that $S(x,t) = t^{-1/3} \sigma(t^{-1/3} x)$ and using the self-similar equation~\eqref{eqn:cmkdv-self-sim-prof-expanded}, we see that $|S|^2 \partial_x S$ is the derivative of an $L^2$ function. Thus, $\int |S|^2 \partial_x S\,dx = 0$, and we can write
\begin{equation*}\begin{split}
\partial_t \hat{u}(0,t) =& \pm\frac{1}{\sqrt{2\pi}}\int |u|^2 \partial_x u - |S|^2 \partial_x S\,dx\\
=& \pm\frac{1}{\sqrt{2\pi}} \int |u|^2 \partial_x w + (u \overline{w} + w\overline{u}) \partial_x S\,dx
\end{split}\end{equation*}
But, all of these terms can be controlled using~\Cref{thm:cubic-mean-thm}, giving us the bound
\begin{equation*}\begin{split}
|\partial_t \hat{u}(0,t)| \lesssim& M^2 \epsilon^2 t^{-7/6} \lVert xg \rVert_{L^2}\\
\lesssim& M^2 \epsilon^3 t^{-1-\beta}
\end{split}\end{equation*}
as required.
We now turn to the task of bounding $\hat{f}(\xi,t)$ in the low frequency region $|\xi| < t^{-1/3}$. Note that $\partial_t \hat{u}(0,t) = \partial_t \hat{f}(0,t)$, so integrating the above bound and recalling that $M^2\epsilon^2 \ll 1$ gives us the bound
\begin{equation*}\begin{split}
|\hat{f}(0,t)| \lesssim& |\hat{f}(0,1)| + \int_1^t |\partial_s \hat{u}(0,s)|\,ds\\
\lesssim& \epsilon + M^4\epsilon^5\\
\lesssim& \epsilon
\end{split}\end{equation*}
as required. For the other frequencies, we note that by the Sobolev-Morrey embedding, for $|\xi| < t^{-1/3}$,
\begin{equation*}\begin{split}
|\hat{f}(\xi,t) - \hat{f}(0,t)| \lesssim& |\xi|^{1/2} \lVert xf \rVert_{L^2}\\
\lesssim& \left( \lVert xg \rVert_{L^2} + \lVert LS \rVert_{L^2}\right)t^{-1/6}\\
\lesssim& \epsilon t^{-\beta} + M^3\epsilon^3\\
\lesssim& \epsilon
\end{split}\end{equation*}
allowing us to conclude that $\hat{f}$ is bounded in the region $|\xi| < t^{-1/3}$.
\subsection{The perturbed Hamiltonian dynamics}
In this subsection, we show that $\hat f(\xi,t)$ satisfies a perturbed Hamiltonian ODE for each $|\xi| \geq t^{-1/3}$, and as a consequence $\lVert \hat f \rVert_{L^\infty}$ is uniformly bounded in time. In particular, we will show that for $|\xi| \geq t^{-1/3}$, $\hat{f}(\xi,t)$ satisfies
\begin{equation}\label{eqn:hat-f-hamiltonian-dynamics}
\partial_t \hat{f}(\xi, t) = \pm \frac{i\sgn \xi}{6t} | \hat{f}(\xi, t)|^2 \hat{f}(\xi, t) + ce^{8it\xi^3/9}\frac{\sgn \xi}{t} | \hat{f}(\xi/3, t)|^2 \hat{f}(\xi/3, t) + R(\xi,t)
\end{equation}
for some constant $c$, where $|R(\xi, t)| \lesssim M^3\epsilon^3 t^{-1} (|\xi| t^{1/3})^{-1/14}$. From this, it will follow that if we define
\begin{equation*}
B(t,\xi) := \pm\frac{\sgn(\xi)}{6} \int_{1}^t \frac{|\hat f(\xi,s)|^2}{s}\,ds
\end{equation*}
then $v(\xi,t) = e^{iB(t,\xi)} \hat{f}(\xi,t)$ satisfies
\begin{equation*}
\partial_t v = ce^{8it\xi^3/9}e^{iB(t,\xi)}\frac{\sgn \xi}{t} | \hat{f}(\xi/3, t)|^2 \hat{f}(\xi/3, t) + e^{iB(t,\xi)}R(\xi,t)
\end{equation*}
Let us consider $|v(t_1) - v(t_2)|$ for $\max(1, |\xi|^{-3}) \leq t_1 < t_2 \leq T$, where $T$ is the time given in the bootstrap argument. Integrating by parts, we find (suppressing the $\xi$ dependence in the arguments):
\begin{equation*}\begin{split}
\left|\int_{t_1}^{t_2} e^{8is\xi^3/9}e^{iB(s)}\frac{\sgn \xi}{s} | \hat{f}(s)|^2 \hat{f}(s) \,ds\right| \lesssim& \left.\frac{1}{s|\xi|^3} | \hat{f}(s)|^3 \right|_{s=t_1}^{s=t_2} + \int_{t_1}^{t_2} |\hat{f}(s)|^3\,\frac{ds}{s^2|\xi|^3}\\
&+ \int_{t_1}^{t_2} |\partial_s B(s)| |\hat{f}(s)|^3\,\frac{ds}{s|\xi|^3}\\
&+ \int_{t_1}^{t_2} |\partial_s \hat{f}(s)||\hat{f}(s)|^2\,\frac{ds}{s|\xi|^3}\\
=:& \mathrm{I} + \mathrm{II} + \mathrm{III} + \mathrm{IV}
\end{split}\end{equation*}
Using the definition of $B$ and the bound on $\hat{f}$, we see that
\begin{equation}\label{eqn:v-bound-1}
|\mathrm{I}| + |\mathrm{II}| + |\mathrm{III}| \lesssim (M^3\epsilon^3 + M^5\epsilon^5) t_1^{-1} |\xi|^{-3}
\end{equation}
Moreover, substituting the expression given in~\eqref{eqn:hat-f-hamiltonian-dynamics} for $\partial_s \hat{f}(s)$, we find that
\begin{equation}\label{eqn:v-bound-2}
|\mathrm{IV}| \lesssim \int_{t_1}^{t_2} \frac{M^2\epsilon^2}{s|\xi|^3}\left(\frac{M^3\epsilon^3}{s} + |R(\xi,s)|\right) \,ds \lesssim M^5\epsilon^5 t_1^{-1} |\xi|^{-3}
\end{equation}
Taking $t_1 = \max(1, |\xi|^{-3})$ and recalling from~\Cref{sec:low-freq-bdds} that $|v(\xi, t_1)| = |\hat{f}(\xi, t_1)| \lesssim \epsilon$, we see that for $t \in (t_1, T)$,
\begin{equation*}
|\hat{f}(\xi, t)| = |v(\xi, t)| \lesssim \epsilon + M^3 \epsilon^3 t_1^{-1}|\xi|^{-3} \lesssim \epsilon
\end{equation*}
since $\epsilon \ll M^{-3/2}$. In particular, this closes the bootstrap for the $\mathcal{F}L^\infty$ component of the $X$ norm. Moreover, we have shown that $v(\xi,t)$ is Cauchy as $t\to \infty$ for $\xi \neq 0$, so $v(\xi,t)$ converges as $t \to \infty$ for each fixed nonzero $\xi$. If we write $f_\infty(\xi) = \lim_{t\to\infty} v(\xi,t)$, then
\begin{equation*}
\hat{f}(\xi,t) = \exp\left(\pm \frac{i\sgn \xi}{6} \int_1^t \frac{|\hat{f}(\xi,s)|^2}{s}\,ds\right) f_\infty(\xi) + O(M^3\epsilon^3(t^{1/3}|\xi|)^{-1/14})
\end{equation*}
so~\eqref{eqn:phase-rot-dynamics-f-hat} holds. Thus, the proof of the main theorem will be complete once we verify~\eqref{eqn:hat-f-hamiltonian-dynamics}.
\subsection{The stationary phase estimate}
We now prove~\eqref{eqn:hat-f-hamiltonian-dynamics}. Note that we can write $\partial_t \hat{f}(\xi,t)$ as
\begin{equation}
\partial_t \hat{f}(\xi,t) = \pm \frac{i}{2\pi} \int e^{-it\phi} (\xi -\eta - \sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)} \hat{f}(\xi - \eta - \sigma)\,d\eta d\sigma
\end{equation}
The stationary points for the phase $\phi$ are given by \begin{equation}\label{eqn:stationary-pts}\begin{split}
(\eta_1, \sigma_1) =& (\xi, \xi)\\
(\eta_2, \sigma_2) =& (-\xi, \xi)\\
(\eta_3, \sigma_3) =& (\xi, -\xi)\\
(\eta_4, \sigma_4) =& (\xi/3, \xi/3)
\end{split}
\end{equation}
We will now divide the integral dyadically in $\eta$ and $\sigma$, and use stationary phase to estimate each piece. Defining $j$ to be the integer with $2^{j-1} < |\xi| \leq 2^j$, let us write
\begin{equation*}
\partial_t \hat{f}(\xi,t) = \mathrm{I}_\text{lo} + \mathrm{I}_{\text{stat}} + \sum_{\ell > j + 10} \left(\mathrm{I}_{\ell} + \tilde{\mathrm{I}}_{\ell}\right)
\end{equation*}
where
\begin{equation*}\begin{split}
\mathrm{I}_{\text{lo}} =& \pm \frac{i}{2\pi} \int e^{-it\phi} \psi_{\ll j}(\eta) \psi_{\ll j}(\sigma) (\xi -\eta - \sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)} \hat{f}(\xi - \eta - \sigma)\,d\eta d\sigma\\
\mathrm{I}_{\text{stat}} =& \pm \frac{i}{2\pi} \int e^{-it\phi} \psi_{j}^{\text{med}}(\eta,\sigma) (\xi -\eta - \sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)} \hat{f}(\xi - \eta - \sigma)\,d\eta d\sigma\\
\mathrm{I}_{\ell} =& \pm \frac{i}{4\pi} \int e^{-it\phi} (\xi -\eta - \sigma) \psi_{\ell}(\eta) \psi_{\leq \ell}(\sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)} \hat{f}(\xi - \eta - \sigma)\,d\eta d\sigma\\
\tilde{\mathrm{I}}_{\ell} =& \pm \frac{i}{4\pi} \int e^{-it\phi} (\xi -\eta - \sigma) \psi_{< \ell}(\eta) \psi_{\ell}(\sigma) \hat{f}(\eta) \overline{\hat{f}(-\sigma)} \hat{f}(\xi - \eta - \sigma)\,d\eta d\sigma
\end{split}\end{equation*}
with
\begin{equation*}
\psi_{j}^{\text{med}}(\eta,\sigma) = \psi_{\lesssim j}(\eta)\psi_{\lesssim j}(\sigma) - \psi_{\ll j}(\eta)\psi_{\ll j}(\sigma)
\end{equation*}
We will show how to estimate the terms $\mathrm{I}_{\text{lo}}$, $\mathrm{I}_{\text{stat}}$ and $\mathrm{I}_{\ell}$: the estimate for the $\tilde{\mathrm{I}}_{\ell}$ terms follows from similar reasoning.
\subsubsection{The estimate for $\mathrm{I}_{\textup{lo}}$} Over the support of the integrand, $|\partial_\eta \phi| \sim 2^{2j},$ so we can integrate by parts with respect to $\eta$ to obtain
\begin{subequations}\begin{align}
\mathrm{I}_{\text{lo}} =& \mp\frac{i}{2\pi t} \int e^{it\phi} \frac{\xi -\eta - \sigma}{\partial_\eta \phi} \psi_{\ll j}(\eta) \psi_{\ll j}(\sigma) \partial_\eta \hat{f}(\eta) \hat{f}(\xi - \eta - \sigma) \overline{\hat{f}(-\sigma)}\,d\eta d\sigma\label{eqn:I-lo-1}\\
&\mp\frac{1}{4\pi t} \int e^{it\phi} \partial_\eta \left(\frac{(\xi - \eta - \sigma)}{\partial_\eta \phi} \psi_{\ll j}(\eta) \psi_{\ll j}(\sigma)\right) \hat{f}(\eta) \hat{f}(\xi - \eta - \sigma) \overline{\hat{f}(-\sigma)}\,d\eta d\sigma\label{eqn:I-lo-2}\\
&+ \{\text{similar terms}\}\notag
\end{align}\end{subequations}
For the first term, we note that $m^1_j = 2^{j}\frac{\xi - \eta - \sigma}{\partial_\eta \phi} \psi_{\ll j}(\eta) \psi_{\ll j}(\sigma)\psi_{\sim j}(\xi)$ is a smooth symbol supported on $|\xi|, |\eta|, |\sigma| \lesssim 2^j$ and satisfies the Coifman-Meyer type symbol bounds
\begin{equation*}
|\partial_{\xi,\eta,\sigma}^\alpha m^1_j| \lesssim_\alpha 2^{-j|\alpha|}
\end{equation*}
Thus, arguing as in~\Cref{rmk:freq-loc-symbol-bounds} and using the Hausdorff-Young inequality, we find that
\begin{equation*}\begin{split}
|\eqref{eqn:I-lo-1}| =& t^{-1}2^{-j} \left|\hat{T}_{m^1_j}(Lu_{\lesssim j}, u_{\lesssim j}, \overline{u}_{\lesssim j})\right|\\
\lesssim& t^{-1}2^{-j} \left\lVert T_{m^1_j}(Lu_{\lesssim j}, u_{\lesssim j}, \overline{u}_{\lesssim j}) \right\rVert_{L^1}\\
\lesssim& t^{-1}2^{-j} \lVert xf \rVert_{L^2} \lVert P_{\lesssim j} f \rVert_{L^2} \lVert u_{\lesssim j} \rVert_{L^\infty}\\
\lesssim& M^3\epsilon^3 t^{-7/6} 2^{-j/2}
\end{split}\end{equation*}
Similarly, defining the symbol $m^2_j = 2^{2j}\partial_\eta\left(\frac{\xi - \eta - \sigma}{\partial_\eta \phi} \psi_{\ll j}(\eta) \psi_{\ll j}(\sigma)\right) \psi_{\sim j}(\xi)$, we find that
\begin{equation*}\begin{split}
|\eqref{eqn:I-lo-2}| =& t^{-1}2^{-2j} \left|\hat{T}_{m^2_j}(u_{\lesssim j}, u_{\lesssim j}, \overline{u}_{\lesssim j})\right|\\
\lesssim& t^{-1}2^{-2j} \left\lVert T_{m^2_j}(u_{\lesssim j}, u_{\lesssim j}, \overline{u}_{\lesssim j}) \right\rVert_{L^1}\\
\lesssim& t^{-1}2^{-2j} \lVert P_{\lesssim j} f \rVert_{L^2}^2 \lVert u_{\lesssim j} \rVert_{L^\infty}\\
\lesssim& M^3\epsilon^3 t^{-4/3} 2^{-j}
\end{split}\end{equation*}
It follows that
\begin{equation*}
|\mathrm{I}_\text{lo}| \lesssim M^3\epsilon^3 t^{-1} \left(t^{1/3} 2^j\right)^{-1/2}
\end{equation*}
which is consistent with the estimate for the remainder term in~\eqref{eqn:hat-f-hamiltonian-dynamics}.
\subsubsection{The estimate for \texorpdfstring{$\mathrm{I}_{\ell}$}{I\_l}}
For these terms, $|\nabla_{\eta,\sigma} \phi| \sim 2^{2\ell}$. Integrating by parts using the identity $\frac{1}{it|\nabla_{\eta,\sigma} \phi|^2} \nabla_{\eta,\sigma}\phi \cdot \nabla_{\eta,\sigma} e^{it\phi} = e^{it\phi}$, we find that
\begin{subequations}\begin{align}
\mathrm{I}_{\ell} =& \mp\frac{1}{2\pi t} \int e^{it\phi} \frac{(\xi - \eta - \sigma)\partial_\eta \phi}{|\nabla_{\eta,\sigma} \phi|^2} \psi_{\ell}(\eta) \psi_{\leq \ell}(\sigma) \partial_\eta \hat{f}(\eta) \hat{f}(\xi - \eta - \sigma) \overline{\hat{f}(-\sigma)}\,d\eta d\sigma\label{eqn:I-k-hi-1}\\
&\mp \frac{1}{4\pi t} \int e^{it\phi} \nabla_{\eta,\sigma} \cdot \bigg(\frac{(\xi - \eta - \sigma) \psi_{\ell}(\eta) \psi_{\leq \ell}(\sigma)\nabla_{\eta,\sigma} \phi}{|\nabla_{\eta,\sigma} \phi|^2}\bigg)\hat{f}(\eta) \hat{f}(\xi - \eta - \sigma) \overline{\hat{f}(-\sigma)}\,d\eta d\sigma\label{eqn:I-k-hi-2}\\
&+ \{\text{similar terms}\}\notag
\end{align}\end{subequations}
The argument is now similar to the one for~\cref{eqn:I-lo-1,eqn:I-lo-2}. Writing
\begin{equation*}\begin{split}
m^1_{\ell} =& 2^{\ell} \frac{(\xi - \eta - \sigma)\partial_\eta \phi}{|\nabla_{\eta,\sigma} \phi|^2} \psi_{\ell}(\eta) \psi_{\leq \ell}(\sigma) \psi_{\sim \ell}(\xi - \xi_0)\\
m^2_{\ell} =& 2^{2\ell} \nabla_{\eta,\sigma} \cdot \left(\frac{(\xi - \eta - \sigma)\partial_\eta \phi}{|\nabla_{\eta,\sigma} \phi|^2} \psi_{\ell}(\eta) \psi_{\leq \ell}(\sigma)\right) \psi_{\sim \ell}(\xi - \xi_0)
\end{split}\end{equation*}
for some $\xi_0$ within distance $O(2^\ell)$ of $\xi$ and observing that $m^1_{\ell}$, $m^2_{\ell}$ satisfy the conditions given in~\Cref{rmk:freq-loc-symbol-bounds} uniformly in $\ell$, we find that
\begin{equation*}\begin{split}
|\eqref{eqn:I-k-hi-1}| \lesssim& t^{-1}2^{-\ell} \lVert T_{m^1_{\ell}}(Lu, \tilde{P}^{\xi_0}_{\lesssim \ell} u, \overline{u}_{\lesssim \ell}) \rVert_{L^1}\\
\lesssim& t^{-1}2^{-\ell} \lVert Lu \rVert_{L^2} \lVert \tilde{P}^{\xi_0}_{\lesssim \ell} f \rVert_{L^2} \lVert u \rVert_{L^\infty}\\
\lesssim& M^3\epsilon^3 t^{-7/6}2^{-\ell/2}\\
|\eqref{eqn:I-k-hi-2}| \lesssim& t^{-1}2^{-2\ell} \lVert T_{m^2_{\ell}}(u_{\lesssim \ell}, \tilde{P}^{\xi_0}_{\lesssim \ell} u, \overline{u}_{\lesssim \ell}) \rVert_{L^1}\\
\lesssim& t^{-1}2^{-2\ell} \lVert \tilde{P}^{\xi_0}_{\lesssim \ell} f \rVert_{L^2}^2 \lVert u \rVert_{L^\infty}\\
\lesssim& M^3\epsilon^3 t^{-4/3}2^{-\ell}
\end{split}\end{equation*}
where $\tilde{P}^{\zeta}_{\lesssim \ell} = \psi_{\lesssim \ell}(D - \zeta)$ for $\zeta \in \mathbb{R}$. An analogous argument holds for $\tilde{\mathrm{I}}_{\ell}$, so summing over $\ell > j + 10$, we find that
\begin{equation*}
\left| \sum_{\ell > j + 10} \mathrm{I}_{\ell} + \tilde{\mathrm{I}}_{\ell} \right| \lesssim M^3\epsilon^3 t^{-1} \left( t^{1/3} 2^j \right)^{-1/2}
\end{equation*}
which allows us to treat these terms as remainders in~\eqref{eqn:hat-f-hamiltonian-dynamics}.
\subsubsection{The estimate for \texorpdfstring{$\mathrm{I}_{\textup{stat}}$}{I\_stat}} The integral here contains the four stationary points given in~\eqref{eqn:stationary-pts}. Note that each of the stationary points is at a distance $\sim 2^j$ from the others. Using this, we can write
\begin{equation*}
\mathrm{I}_{\text{stat}} = \sum_{r=1}^4 \sum_{2^{\ell} \ll 2^j} \left(J^{(r)}_{\ell} + \tilde{J}^{(r)}_{\ell}\right) + \{\text{remainder}\}
\end{equation*}
where
\begin{equation*}\begin{split}
J^{(r)}_{\ell} =& \pm \frac{i}{2\pi} \int e^{-it\phi}\psi^{[\ell_0]}_{\ell}(\eta - \eta_r) \psi^{[\ell_0]}_{\leq \ell}(\sigma - \sigma_r) (\xi - \eta - \sigma) \hat{f}(\eta) \hat{f}(\xi - \eta - \sigma) \overline{\hat{f}(-\sigma)}\,d\eta d\sigma\\
\tilde{J}^{(r)}_{\ell} =& \pm \frac{i}{2\pi} \int e^{-it\phi}\psi^{[\ell_0]}_{< \ell}(\eta - \eta_r) \psi^{[\ell_0]}_{\ell}(\sigma - \sigma_r) (\xi - \eta - \sigma) \hat{f}(\eta) \hat{f}(\xi - \eta - \sigma) \overline{\hat{f}(-\sigma)}\,d\eta d\sigma\\
\end{split}\end{equation*}
with
\begin{equation*}
\psi^{[\ell_0]}_{\ell} = \begin{cases}
\psi_{\ell} & \ell > \ell_0\\
\psi_{\leq \ell} & \ell = \ell_0\\
0 & \ell < \ell_0
\end{cases}
\end{equation*}
for a parameter $\ell_0$ defined such that $2^{\ell_0} \sim t^{-1/3}(t^{1/3} 2^{j})^{-\gamma}$, where $\gamma > 0$ is a constant which will be specified later. The contribution from the remainder can be controlled using an argument similar to the one for $\mathrm{I}_{\ell}$, so we will focus on controlling the contribution from the $J^{(r)}_{\ell}$ terms. There are two cases to consider: either $\ell = \ell_0$ or $\ell > \ell_0$.
\paragraph{\indent \textbf{Case} $\ell > \ell_0$}
We first consider the bound for $J^{(r)}_{\ell}$. Integrating by parts gives
\begin{subequations}\begin{align}
J^{(r)}_{\ell} &= t^{-1} 2^{-\ell} e^{it\xi^3}\hat T_{m^1_\ell}(Lu, \tilde{P}^{\xi_0 - \eta_r - \sigma_r}_{\lesssim \ell} u, \overline{u}_{\sim j})\label{eqn:I-stat-away-1}\\
&+ t^{-1} 2^{-2\ell} e^{it\xi^3}\hat{T}_{m^2_{\ell}}(\tilde{P}^{\eta_r}_{\lesssim \ell} u,\tilde{P}^{\xi_0 - \eta_r - \sigma_r}_{\lesssim \ell} u, \overline{u}_{\sim j})\label{eqn:I-stat-away-2}\\
&+ \{\text{similar terms}\}\notag
\end{align}\end{subequations}
where $\xi_0$ is any point at a distance $\ll 2^{\ell}$ from $\xi$, and the symbols $m_{\ell}^1$ and $m_{\ell}^2$ are given by
\begin{equation*}\begin{split}
m_{\ell}^1 =& \pm 2^\ell (\xi - \eta - \sigma) \frac{\partial_\eta \phi}{|\nabla_{\eta,\sigma} \phi|^2} \psi_{\ell}(\eta - \eta_r) \psi_{\leq \ell}(\sigma - \sigma_r) \psi_{\leq \ell} (\xi - \xi_0)\\
m_{\ell}^2 =& \pm 2^{2\ell} \nabla_{\eta,\sigma} \cdot \left((\xi - \eta - \sigma) \frac{\nabla_{\eta,\sigma} \phi}{|\nabla_{\eta,\sigma} \phi|^2} \psi_{\ell}(\eta - \eta_r) \psi_{\leq \ell}(\sigma - \sigma_r)\right) \psi_{\leq \ell} (\xi - \xi_0)
\end{split}\end{equation*}
It is clear that these symbols are supported on a region of volume $\sim 2^{3\ell}$. Moreover, on the support of the integrand we have that
\begin{equation*}\begin{split}
|\xi - \eta - \sigma| \lesssim 2^j,\qquad
|\nabla_{\eta,\sigma} \phi| \sim 2^{j + \ell},\qquad |\partial^\alpha_{\xi,\eta,\sigma} \nabla_{\eta,\sigma} \phi| \lesssim 2^{(2 - |\alpha|) \ell} \qquad |\alpha| \geq 2
\end{split}\end{equation*}
where for the last inequality we have used the fact that $2^\ell \ll 2^j$. Thus, we see that $m^1_\ell$ and $m^2_\ell$ obey the Coifman-Meyer type bounds
\begin{equation*}
|\partial_{\xi,\eta,\sigma}^\alpha m^1_{\ell}| + |\partial_{\xi,\eta,\sigma}^\alpha m^2_{\ell}| \lesssim_\alpha 2^{-|\alpha|\ell}
\end{equation*}
Thus, we have the bounds
\begin{equation*}\begin{split}
\left|\eqref{eqn:I-stat-away-1}\right| \leq& t^{-1}2^{-\ell} \left\lVert T_{m_{\ell}^1}(Lu, \tilde{P}^{\xi - \eta_r - \sigma_r}_{\lesssim \ell}u,\overline{u}_{\sim j}) \right\rVert_{L^1}\\
\lesssim& t^{-1} 2^{-\ell} \lVert xf \rVert_{L^2} \lVert \tilde{P}^{\xi - \eta_r - \sigma_r}_{\lesssim \ell} f \rVert_{L^2} \lVert u_{\sim j} \rVert_{L^\infty}\\
\lesssim& M^3\epsilon^3 t^{-4/3} 2^{-j/2} 2^{-\ell/2}
\end{split}\end{equation*}
and
\begin{equation*}\begin{split}
\left|\eqref{eqn:I-stat-away-2} \right| \leq& t^{-1} 2^{-2\ell} \left\lVert T_{m_{\ell}^2}(\tilde{P}^{\eta_r}_{\lesssim \ell} u, \tilde{P}^{\xi - \eta_r - \sigma_r}_{\lesssim \ell} u, \overline{u}_{\sim j}) \right\rVert_{L^1}\\
\lesssim& t^{-1} 2^{-2\ell} \lVert \tilde{P}^{\eta_r}_{\lesssim \ell} f \rVert_{L^2} \lVert \tilde{P}^{\xi - \eta_r - \sigma_r}_{\lesssim \ell} f \rVert_{L^2} \lVert u_{\sim j} \rVert_{L^\infty}\\
\lesssim& M^3\epsilon^3 t^{-3/2} 2^{-j/2} 2^{-\ell}
\end{split}\end{equation*}
Summing over $\ell > \ell_0$ yields
\begin{equation}\label{eqn:J-hi-contrib}
\left|\sum_{\ell > \ell_0} J^{(r)}_{\ell}\right| \lesssim M^3\epsilon^3 t^{-1} (t^{1/3} 2^j)^{-1/2 + \gamma}
\end{equation}
A similar argument gives an identical bound for the $\tilde{J}^{(r)}_{\ell}$.
\paragraph{\indent \textbf{Case} $\ell = \ell_0$}
By performing the linear change of variables $\eta \to \eta + \eta_r$, $\sigma \to \sigma + \sigma_r$, we obtain
\begin{equation*}\begin{split}
J^{(r)}_{\ell_0} =& \pm\frac{i}{2\pi} \int e^{-it\phi} \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) (\xi - \eta_r - \sigma_r - \eta - \sigma) F_r(\xi,\eta,\sigma)\,d\eta d\sigma\\
\tilde{J}^{(r)}_{\ell_0} =& 0
\end{split}\end{equation*}
where
\begin{equation*}
F_r(\xi, \eta, \sigma) = \hat{f}(\eta+ \eta_r) \hat{f}(\xi - \eta - \eta_r - \sigma -\sigma_r) \overline{\hat{f}(-\sigma - \sigma_r)}
\end{equation*}
We can re-write this as
\begin{subequations}\begin{align}
J^{(r)}_{\ell_0} =& \pm\frac{i}{2\pi} \int e^{-it\phi} \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) (\xi - \eta_r - \sigma_r - \eta - \sigma) \left(F_r(\xi,\eta,\sigma) - F_r(\xi,0,0)\right)\,d\eta d\sigma\label{eqn:I-stat-1}\\
& \mp \frac{i}{2\pi}F_r(\xi,0,0) \int e^{-it\phi} \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) (\eta + \sigma) \,d\eta d\sigma\label{eqn:I-stat-2}\\
& \pm \frac{i}{2\pi}F_r(\xi,0,0)(\xi - \eta_r - \sigma_r) \int e^{-it\phi} \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) \,d\eta d\sigma\label{eqn:I-stat-3}
\end{align}\end{subequations}
For~\eqref{eqn:I-stat-1}, we recall that the $L^2$ bound on $xf$ implies that $\hat{f}$ is $1/2$-H\"older, so
\begin{equation*}
|F_r(\xi, \eta, \sigma) - F_r(\xi,0,0)| \lesssim M^3\epsilon^3 t^{1/6} (|\eta| + |\sigma|)^{1/2}
\end{equation*}
and
\begin{equation*}\begin{split}
|\eqref{eqn:I-stat-1}| \lesssim& M^3\epsilon^3 t^{1/6}\int \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) |\xi - \eta_r - \sigma_r - \eta - \sigma| (|\eta| + |\sigma|)^{1/2}\,d\eta d\sigma\\
\lesssim& M^3\epsilon^3 2^j t^{1/6} 2^{5\ell_0/2}\\
\lesssim& M^3\epsilon^3 t^{-1} \left(t^{1/3} 2^j\right)^{1 - 5\gamma/2}
\end{split}\end{equation*}
Similarly, the $L^\infty$ bound on $\hat{f}$ from~\eqref{eqn:bootstrap-hypotheses} shows that $|F_r(\xi,0,0)| \lesssim M^3\epsilon^3$, so
\begin{equation*}\begin{split}
\left|\eqref{eqn:I-stat-2}\right| \lesssim& M^3\epsilon^3 2^{3\ell_0}\\ \lesssim& M^3\epsilon^3 t^{-1} \left(t^{1/3} 2^j\right)^{-3\gamma}
\end{split}\end{equation*}
The term~\eqref{eqn:I-stat-3} contains the leading order contribution to~\eqref{eqn:hat-f-hamiltonian-dynamics}. We will extract this contribution using the method of stationary phase. By direct calculation, we find
\begin{equation*}
\phi(\xi,\eta+\eta_r, \sigma+\sigma_r) = \phi_r + Q_r(\eta,\sigma) + O(|\eta|^3 + |\sigma|^3)
\end{equation*}
where $\phi_r = \phi(\xi,\eta_r,\sigma_r)$ and $Q_r$ is the quadratic form associated to the Hessian matrix $\Hess_{\eta,\sigma} \phi(\xi,\eta_r,\sigma_r)$.
Thus, $\left|e^{-it\phi} - e^{-it(\phi_r + Q_r(\eta,\sigma))}\right| \lesssim t (|\eta|^3 + |\sigma|^3)$, so
\begin{equation*}\begin{split}
\left|(\xi - \eta_r - \sigma_r) F_r(\xi,0,0) \int \Big(e^{-it\phi} - e^{-it(\phi_r + Q_r(\eta,\sigma))}\Big) \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) \,d\eta d\sigma\right|
\lesssim& M^3\epsilon^3 2^j t 2^{5\ell_0}\\
\lesssim& M^3\epsilon^3 t^{-1}\big( t^{1/3} 2^j \big)^{1 - 5\gamma}
\end{split}\end{equation*}
By rescaling and using stationary phase, we find that
\begin{equation*}\begin{split}
\int e^{-it Q_r(\eta,\sigma)} \psi_{\leq \ell_0}(\eta) \psi_{\leq \ell_0}(\sigma) \,d\eta d\sigma =& 2^{2\ell_0}\int e^{-it2^{2\ell_0} Q_r(\eta,\sigma)} \psi_{\leq 0}(\eta) \psi_{\leq 0}(\sigma) \,d\eta d\sigma\\
=& \frac{2\pi e^{i\frac{\pi}{4} \signature \Hess_{\eta,\sigma} \phi(\xi,\eta_r,\sigma_r)}}{t \sqrt{|\det \Hess_{\eta,\sigma} \phi(\xi,\eta_r,\sigma_r)|}}\\
&+ O\left(t^{-2}2^{-2\ell_0}2^{-2j} \right)
\end{split}\end{equation*}
where on the last line we have used the fact that
$|\det \Hess_{\eta,\sigma} \phi(\xi,\eta_r,\sigma_r)| \sim 2^{2j}$ to obtain the error term. Collecting all these calculations, we find that
\begin{equation*}\begin{split}
\eqref{eqn:I-stat-3} =& \pm i F_r(\xi,0,0)\frac{(\xi - \eta_r - \sigma_r) e^{-it\phi_r + i\frac{\pi}{4} \signature Q_r}}{t \sqrt{|\det Q_r|}}\\
&+ O\left(M^3\epsilon^3 t^{-1} \left[\left(t^{1/3}2^j\right)^{1 - 5\gamma} + \left(t^{1/3} 2^j\right)^{-1 + 2\gamma}\right] \right)
\end{split}\end{equation*}
Collecting the results for~\cref{eqn:I-stat-1,eqn:I-stat-2,eqn:I-stat-3} and simplifying using the definition of $\ell_0$, we find that
\begin{equation}\label{eqn:J-r-stat-est} \begin{split}
J_{\ell_0}^{(r)} =& \pm i F_r(\xi,0,0)\frac{(\xi - \eta_r - \sigma_r) e^{-it\phi_r + i\frac{\pi}{4} \signature Q_r}}{t \sqrt{|\det Q_r|}}\\ &\qquad + O\left(M^3\epsilon^3 t^{-1} \left[\left(t^{1/3}2^j\right)^{1-5\gamma/2} + \left(t^{1/3}2^j\right)^{-3\gamma} + \left(t^{1/3}2^j\right)^{-1+2\gamma}\right]\right)
\end{split}\end{equation}
A quick calculation shows that
\begin{align*}
\phi_r =& 0,& \det Q_r =& -36 \xi^2,& \signature Q_r =& 0, \qquad r = 1,2,3,\\
\phi_4 =& \tfrac{8}{9}\xi^3,& \det Q_4 =& 12 \xi^2,&\signature Q_4 =& -2 \sgn \xi
\end{align*}
Thus, combining~\eqref{eqn:J-r-stat-est} with~\eqref{eqn:J-hi-contrib} and taking $\gamma = 3/7$, we find that
\begin{equation*}\begin{split}
\sum_{r=1}^4 \sum_{2^{\ell} \ll 2^j} \left(J^{(r)}_{\ell} + \tilde{J}^{(r)}_{\ell}\right) =& \pm \frac{i\sgn \xi}{6t} | \hat{f}(\xi, t)|^2 \hat{f}(\xi, t) \pm e^{8it\xi^3/9}\frac{\sgn \xi\, e^{- i\frac{\pi}{2}\sgn\xi}}{3\sqrt{12}\,t} | \hat{f}(\xi/3, t)|^2 \hat{f}(\xi/3, t) \\
&\qquad+ O(M^3\epsilon^3 t^{-1} (t^{1/3} 2^j)^{-1/14})
\end{split}\end{equation*}
which concludes the proof of~\eqref{eqn:hat-f-hamiltonian-dynamics} and gives~\Cref{thm:nonlinear-bounds-thm}.
| {
"timestamp": "2022-07-29T02:05:50",
"yymm": "2111",
"arxiv_id": "2111.00630",
"language": "en",
"url": "https://arxiv.org/abs/2111.00630",
"abstract": "We study the asymptotics of the complex modified Korteweg-de Vries equation\\begin{equation*} \\partial_t u + \\partial_x^3 u = \\pm |u|^2 \\partial_x u \\end{equation*} In the real valued case, it is known that solutions with small, localized initial data exhibit modified scattering for $|x| \\geq t^{1/3}$, and behave self-similarly for $|x| \\leq t^{1/3}$. We prove that the same asymptotics hold for complex mKdV. The major difficulty in the complex case is that the nonlinearity cannot be expressed as a derivative, which makes the low-frequency dynamics harder to control. To overcome this difficulty, we introduce the decomposition $u = S + w$, where $S$ is a self-similar solution with the same mean as $u$ and $w$ is a remainder that has better decay. By using the explicit expression for $S$, we are able to get better low-frequency behavior for $u$ than we could from dispersive estimates alone.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Long time decay and asymptotics for the complex mKdV equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9780517443658748,
"lm_q2_score": 0.72487026428967,
"lm_q1q2_score": 0.7089606264274645
} |
https://arxiv.org/abs/math/9307210 | Using sums of squares to prove that certain entire functions have only real zeros | It is shown how sums of squares of real valued functions can be used to give new proofs of the reality of the zeros of the Bessel functions $J_\alpha (z)$ when $\alpha \ge -1,$ confluent hypergeometric functions ${}_0F_1(c\/; z)$ when $c>0$ or $0>c>-1$, Laguerre polynomials $L_n^\alpha(z)$ when $\alpha \ge -2,$ and Jacobi polynomials $P_n^{(\alpha,\beta)}(z)$ when $\alpha \ge -1$ and $ \beta \ge -1.$ Besides yielding new inequalities for $|F(z)|^2,$ where $F(z)$ is one of these functions, the derived identities lead to inequalities for $\partial |F(z)|^2/\partial y$ and $\partial ^2 |F(z)|^2/\partial y^2,$ which also give new proofs of the reality of the zeros. | \section{Introduction}
In a 1975 survey paper \cite{gg75} on positivity and special functions
it was shown how sums of squares of special functions could be used to
prove the nonnegativity of the Fej\'er kernel, the positivity of
integrals of Bessel functions \cite{gg75B} and of the Cotes' numbers
for some Jacobi abscissas, a Tur\'an type inequality for Bessel
functions, the Askey-Gasper inequality (cf. \cite{ag76}, \cite{ag86},
\cite{gg86}, \cite{gg89})
\begin{equation}\label{1.1}
\sum_{k=0}^n P_k^{(\alpha,0)}(x)
\ge 0,\quad \alpha> -2,\quad -1\le x\le 1,
\end{equation}
which de Branges \cite{de} employed to complete his proof of the Bieberbach
conjecture, and to prove the more general inequalities \cite{gg77}
\begin{equation}\label{1.2}
\sum_{k=0}^n {(\lambda +1)_k\over k!}{(\lambda +1)_{n-k}\over (n-k)!}
{P_k^{(\alpha,\beta)}(x)\over P_k^{(\beta,\alpha)}(1)}
\ge 0, \quad -1\le x\le 1,
\end{equation}
when $0\le \lambda\le \alpha +\beta$ and $ \beta \ge -1/2$.
It was also pointed out in \cite{gg75} that, since one of
Jensen's necessary and sufficient conditions for the Riemann Hypothesis to
hold (given in P\'olya \cite{pol}) is the condition that
\begin{equation}\label{1.3}
\int^\infty_{-\infty}\int^\infty_{-\infty} \Phi(s)\Phi(t)
e^{i(s+t)x} (s- t)^{2n}dsdt\ge 0, \quad -\infty<x<\infty,
\end{equation}
\noindent
for $n = 0, 1, 2, 3, \dots ,$ where
\begin{equation}\label{1.4}
\Phi(t)=2\sum^\infty_{k=1} (2k^4\pi^2 e^{9t/2} - 3k^2\pi
e^{5t/2}) e^{-k^2\pi e^{2t}},
\end{equation}
\noindent
and the above integral is a square when $n = 0$, the method of sums of
squares is suggested for proving (1.3).
Another of Jensen's necessary and
sufficient conditions for the Riemann Hypothesis to hold is that
\begin{equation}\label{1.5}
\int^\infty_{-\infty}\int^\infty_{-\infty}
\Phi(s)\Phi(t) e^{i(s+t)x} e^{(s- t)y} (s-t)^2ds dt\ge0,
\quad -\infty<x,y<\infty,
\end{equation}
which can also be written in the equivalent
form
\begin{equation}\label{1.6}
{\partial^2 \over \partial y^2}|\Xi(x+iy)|^2 \ge 0,
\quad -\infty<x,y<\infty,
\end{equation}
with
\begin{equation}\label{1.7}
\Xi(z)=\int_{-\infty}^{\infty} \Phi(t) \exp(izt) dt =
2\int_0^{\infty} \Phi(t) \cos(zt) dt.
\end{equation}
That (1.6) is a sufficient condition for the Riemann $\Xi(z)$ function
to have only real zeros follows directly from the observation that, since
$|\Xi(x+iy)|^2= \Xi(x+iy)\Xi(x-iy)$ is a nonnegative even function of
y, (1.6) implies that $|\Xi(x+iy)|^2$ is a nonnegative even convex
function of $y$ with its unique minimum at $y=0,$ and hence $\Xi (x+iy)
\ne 0$ whenever $y \ne 0$. If the function $\Phi(t)$ in (1.3) and (1.5)
is replaced by a function $\Psi(t)$ such that the conditions stated in
\cite[\S1]{pol} are satisfied, then, by \cite[pp.~17,~18]{pol}, the
inequalities in (1.3) and (1.5) are necessary and sufficient conditions
for the Fourier (or cosine) transform of $\Psi(t)$ to have only real
zeros. In 1913 Jensen \cite{jen} proved that each of the inequalities
\begin{equation}\label{1.8}
\quad y{\partial \over \partial y }|F(x+i y)|^2 \ge 0, \quad
{\partial^2 \over \partial y^2}|F(x+i y)|^2
\ge 0,\quad -\infty<x, y<\infty,
\end{equation}
is necessary and sufficient for a real entire function $F(z) \not\equiv 0$ of
genus 0 or 1 (cf. Boas \cite[Chapter~2]{bo}) to have only real zeros.
Also see Titchmarsh \cite{titch} and Varga \cite[Chapter~3]{var}.
In view of these observations and the successes of the sums of squares
method (also see \cite{gg89a}, \cite[Chapter~8]{gr}),
since the early 1970's I have been investigating
how squares of real valued functions can
be used to prove that certain entire functions have only real zeros and to
prove inequalities of the form in (1.8).
In this paper I demonstrate how certain series expansions in sums of squares
of special functions give
new proofs of the reality of the zeros of the Bessel functions $J_\alpha (z)$
when $\alpha \ge -1,$ confluent hypergeometric functions ${}_0F_1(c\/; z)$
when $c>0$ or $0>c>-1$, Laguerre polynomials $L_n^\alpha(z)$ when
$\alpha \ge -2,$ and Jacobi polynomials $P_n^{(\alpha,\beta)}(z)$
when $\alpha \ge -1$ and $ \beta \ge -1.$ Here,
as elsewhere, $z = x + i y$ is a complex variable and
$x$ and $y$ are real variables. For
the definitions of these functions and their properties, see
Erd\'elyi \cite{erd} and Szeg\H o \cite{sz}. In addition, it will
be shown that besides yielding new inequalities for $|F(z)|^2,$
where $F(z)$ is one of these functions, the derived identities
lead to inequalities for $\partial |F(z)|^2/\partial y$ and
$\partial ^2 |F(z)|^2/\partial y^2,$ which also give new proofs of the reality
of the zeros.
\section{ Initial observations}
\setcounter{equation}{0}
In order to see how easily sums of squares
can be used to prove that all of the zeros of $\sin z$ and $\cos z$ are
real, it suffices to observe that we have the (easily verified) identities
\begin{equation}\label{2.1}
|\sin z|^2 = \sin^2 x + \sinh^2 y,
\end{equation}
\begin{equation}\label{2.2}
|\cos z|^2 = \cos^2 x +\sinh^2 y
\end{equation}
and to note that $\sinh y = (e^y - e^{-y})/2 > 0$ when
$y>0,$ and $\sinh y<0 $ when $y<0.$
One can also take partial derivatives of the identities
in (2.1) and (2.2) with respect to $y$ to obtain
\begin{equation}\label{2.3}
{\partial \over \partial y }|\sin z|^2=
{\partial \over \partial y}|\cos z|^2 = \sinh 2y
\end{equation}
which shows that $|\sin z|^2$ and $|\cos z|^2$ are increasing (decreasing)
functions of $y$ when $y>0$ ($y<0),$ and to obtain
\begin{equation}\label{2.4}
{\partial^2 \over \partial y^2}|\sin z|^2 =
{\partial^2 \over \partial y^2}|\cos z|^2 = 2\cosh 2y =
2(\cosh^2 y +\sinh^2 y) \ge 2,
\end{equation}
which shows that $|\sin z|^2$ and $|\cos z|^2$ are convex functions
of $y.$ Then, because $|\sin z|^2$ and $|\cos z|^2$ are nonnegative
even functions of $y$, it immediately follows from (2.3) and (2.4) that
$\sin z$ and $\cos z$ have only real zeros.
Observe that the reality of the
zeros of $\sin z$ and $\cos z$ also follows from the inequalities
\begin{equation}\label{2.5}
|\sin z|^2 > \sin^2 x,\quad |\cos z|^2 > \cos^2 x,\quad (y \ne 0)
\end{equation}
\begin{equation}\label{2.6}
|\sin z|^2 \ge \sinh^2 y, \quad |\cos z|^2 \ge \sinh^2 y,
\end{equation}
\begin{equation}\label{2.7}
y{\partial \over \partial y }|\sin z|^2=
y{\partial \over \partial y}|\cos z|^2 \ge 2 y^2,
\end{equation}
\begin{equation}\label{2.8}
{\partial^2 \over \partial y^2}|\sin z|^2 =
{\partial^2 \over \partial y^2}|\cos z|^2 \ge 2 \cosh^2 y
\end{equation}
\begin{equation}\label{2.9}
{\partial^2 \over \partial y^2}|\sin z|^2 =
{\partial^2 \over \partial y^2}|\cos z|^2 \ge 2 + 2 \sinh^2 y
\end{equation}
\noindent
which are consequences of (2.1)--(2.4).
\section{Bessel functions and ${}_0F_1(c\/; z)$ functions}
\setcounter{equation}{0}
Since the identities and inequalities in \S 2 give the reality of the
zeros of the Bessel functions \cite[(1.71.2)]{sz}
\begin{equation}\label{3.1}
J_{-{1\over2}}(z)=({2 \over{\pi z}})^{1 \over 2}\cos z,
\quad J_{1\over2}(z)=({2 \over {\pi z}})^{1 \over 2}\sin z,
\end{equation}
this suggests that it should be possible to use sums of squares
to prove Lommel's theorem (see Watson \cite[p.~482]{wat}) that all of
the zeros of the Bessel function \cite[7.2(3)]{erd}
\begin{equation}\label{3.2}
J_\alpha(z)={(z/2)^{\alpha} \over \Gamma(\alpha+1)}\;
{}_0F_1(\alpha+1;-z^2/4)
\end{equation}
\noindent
are real when $\alpha>-1$. With this aim in mind and in order to work
with entire functions, we set
\begin{equation}\label{3.3}
{\cal J}_\alpha(z)=z^{-\alpha}J_\alpha(z)={2^{-\alpha} \over
\Gamma(\alpha+1)}\; {}_0F_1(\alpha+1;-z^2/4),
\end{equation}
which is an even entire function of $z$ such that
${\overline{{\cal J}_\alpha(z)} = {\cal J}_\alpha(\overline z)}$
when $\alpha$ is real.
Let $\alpha>-1$. Then, from the product formula (37) in Carlitz \cite{car},
\begin{equation}\label{3.4}
|{\cal J}_\alpha(z)|^2 = \sum_{k=0}^\infty
\frac{(\alpha+\frac12)_k2^{k-\alpha}}{k!(2\alpha+1)_k\Gamma(\alpha+1)}
(x^2+y^2)^k{\cal J}_{\alpha+k}(2x).
\end{equation}
To express each of the Bessel functions on the right side of (3.4) as
a sum of squares of real valued Bessel functions,
observe that from the addition theorem for
Bessel functions \cite[7.15(30)]{erd} we have the expansion
\begin{equation}\label{3.5}
{\cal J}_{\alpha+k}(2x) = 2^{k+\alpha}\Gamma(k+\alpha)\sum_{j=0}^\infty
\frac{(j+k+\alpha)(2k+2\alpha)_j}{j!} (-1)^j x^{2j}
({\cal J}_{\alpha+j+k}(x))^2.
\end{equation}
Hence, substituting (3.5) into (3.4) and changing the order of summation
we find that
\begin{multline}\label{3.6}
|{\cal J}_\alpha(z)|^2
=\sum_{n=0}^\infty
{(n+\alpha)(2\alpha)_n \over{\alpha \; n!}} (-1)^n x^{2n} \\
\times {}_2F_1(-n,n+2\alpha\/;2\alpha+1;1+y^2/x^2)
({\cal J}_{\alpha+n}(x))^2.
\end{multline}
Now apply the Euler transformation formula \cite[2.9(3)]{erd}
\begin{equation}\label{3.7}
{}_2 F_1(a,b\/;c\/;z) = (1-z)^{-a} {}_2 F_1(a,c-b\/;c\/;z/(z-1))
\end{equation}
to the above ${}_2F_1$ series to obtain the desired sum of squares
expansion formula
\begin{align}\label{3.8}
|{\cal J}_\alpha(z)|^2 &= ({\cal J}_{\alpha}(x))^2
+2(\alpha+1) y^2 ({\cal J}_{\alpha+1}(x))^2 \\ \notag
&+ \sum_{n=2}^\infty{(2n+2\alpha)(2\alpha+1)_{n-1} \over{ n!}} y^{2n}\\
&\times {}_2F_1(-n,1-n\/;2\alpha+1;1+x^2/y^2)
({\cal J}_{\alpha+n}(x))^2. \notag
\end{align}
When $n \ge 2, \alpha>-1$ and $y \ne 0,$ the positivity of the
coefficients of $({\cal J}_{\alpha+n}(x))^2$ in (3.8) follows from
\begin{multline}\label{3.9}
(2\alpha+1)\> {}_2F_1(-n,1-n\/; 2\alpha+1; 1+x^2/y^2)
=(2\alpha+1+n^2-n) \\
+ n(n-1)x^2/y^2 + \sum^n_{k=2}
\frac{(-n)_k(1-n)_k}{k!\,(2\alpha+2)_{k-1}} (1+x^2/y^2)^k > 0.
\end{multline}
Hence, since the real zeros of ${\cal J}_\alpha(x)$ and ${\cal
J}_{\alpha+1}(x)$ are interlaced, (3.8) gives a sum of squares proof
that the entire functions ${\cal J}_\alpha(z)$, and thus the Bessel
functions $ J_\alpha(z),$ have only real zeros when $\alpha>-1$. Letting
$\alpha \to -1$ it follows that the Bessel function
${\cal J}_{-1}(z) = \lim_{\alpha\to-1}$
${\cal J}_{\alpha}(z) = -{\cal J}_1 (z)$ has only real zeros.
Notice that the inequality
\begin{equation}\label{3.10}
|{\cal J}_\alpha(z)|^2 \ge ({\cal J}_{\alpha}(x))^2
+2(\alpha+1) (y{\cal J}_{\alpha+1}(x))^2 >0, \quad y \ne0, \quad \alpha > -1,
\end{equation}
and in fact infinitely many inequalities follow from (3.8) by just
dropping terms from the right side of (3.8). Analogous to (2.7)--(2.9),
it follows by differentiating equation (3.6) with respect to $y$
and applying (3.7) that we also have the identities
\begin{multline}\label{3.11}
y {\partial\over \partial y}|{\cal J}_\alpha(z)|^2
= 4y^2 \sum_{n=0}^\infty
{(n+\alpha+1)(2\alpha+2)_n \over{ n!}} y^{2n} \\
\times {}_2F_1(-n,-n\/;2\alpha+2;1+x^2/y^2)
({\cal J}_{\alpha+n+1}(x))^2
\end{multline}
and
\begin{eqnarray}\label{3.12} \hskip3em
\frac{\partial^2}{\partial y^2}|{\cal J}_\alpha(z)|^2
&=& 4 \sum_{n=0}^\infty\frac{(n+\alpha+1)(2\alpha+2)_n}{n!}
y^{2n} \\
&\times& {}_2F_1(-n,-n\/;2\alpha+2;1+x^2/y^2)
({\cal J}_{\alpha+n+1}(x))^2 \notag \\ [2mm]
&+&8y^2 \sum_{n=0}^\infty
{(n+\alpha+2)(2\alpha+3)_{n+1} \over{ n!}} y^{2n} \notag \\
&\times& {}_2F_1(-n,-n-1;2\alpha+3;1+x^2/y^2)
({\cal J}_{\alpha+n+2}(x))^2, \notag
\end{eqnarray}
which give infinitely many inequalities, such as, e.g.,
\begin{equation}\label{3.13}
y{\partial\over \partial y}|{\cal J}_\alpha(z)|^2\ge 4(\alpha+1)
(y{\cal J}_{\alpha+1}(x))^2 \ge 0, \quad \alpha\ge-1,
\end{equation}
\begin{equation}\label{3.14}
{\partial^2\over \partial y^2}|{\cal J}_\alpha(z)|^2\ge
4(\alpha+1)({\cal J}_{\alpha+1}(x))^2 \ge 0, \quad \alpha\ge-1,
\end{equation}
each of which proves that $J_\alpha(z)$ has only real zeros when
$\alpha\ge-1.$
In view of (3.3) the reality of the zeros of ${\cal J}_\alpha(z)$ when
$\alpha > -1$ is equivalent to the statement that all of the zeros of
the confluent hypergeometric function ${}_0F_1(c\/;z)$ are real and
negative when $c>0.$ However, it is known \cite{hil} that the zeros of
${}_0F_1(c\/; z)$
are also real (but not necessarily negative) when
$-1<c<0.$ Because this fact does not follow from (3.8) or (3.11)--(3.14),
we will next show how it can also be proved by using sums of
squares of real valued functions.
From formulas (53) and (52) in Burchnall and Chaundy \cite{bc41} it follows
that if $c$ is real valued and $c \ne 0, -1, -2, \dots,$ then we have
the expansion formulas
\begin{equation}\label{3.15}
\left|{}_0F_1(c\/;z)\right|^2 = \sum^\infty_{k=0}
\frac{1}{k!\,(c)_k(c)_{2k}} (x^2+y^2)^k {}_0F_1(c+2k\/;2x)
\end{equation}
and
\begin{equation}\label{3.16}
{}_0F_1(c+2k\/;2x) = \sum^\infty_{j=0} \frac{(-1)^j}
{j!\, (c+2k+j-1)_j(c+2k)_{2j}} x^{2j}\left({}_0F_1(c+2k+2j\/;x)\right)^2.
\end{equation}
As in the Bessel function case, substitute (3.16) into (3.15) and
change the order of summation to get
\begin{multline}\label{3.17}
\left|{}_0F_1(c\/;z)\right|^2 = \sum^\infty_{n=0} \frac{(-1)^n}
{n!\, (c+n-1)_n (c)_{2n}} x^{2n} \\
\times {}_2F_1(-n,n+c-1; c\/; 1+y^2/x^2)({}_0F_1(c+2n\/;x))^2
\end{multline}
which, by applying the transformation formula (3.7), gives
\begin{multline}\label{3.18}
\left|{}_0F_1(c\/;z)\right|^2 = \sum^\infty_{n=0} \frac{1}
{n!\,(n+c-1)_n(c)_{2n}} y^{2n} \\
\times {}_2F_1(-n,1-n\,;c\/;1+x^2/y^2)({}_0F_1(c+2n\/;x))^2.
\end{multline}
When $c>0$ and $y \ne 0$ the coefficient of $({}_0F_1(c+2n\/; x))^2$ in
the series in (3.18) is obviously positive.
Hence, since ${}_0F_1(c\/; x) > 0$ when $c>0$ and $x\ge 0,$
(3.18) gives another proof that ${}_0F_1(c\/; z)$ has
only real negative zeros when $c>0.$
To handle the case $-1<c<0,$ differentiate equation (3.17) with respect to
$y$ and apply (3.7) to obtain
\begin{multline}\label{3.19}
y\frac{\partial}{\partial y}\left|c\>{}_0F_1(c\/;z)\right|^2 = 2y^2
\sum^\infty_{n=0} \frac{(c+1)_n}{n!\,(c+1)_{2n}(c+1)_{2n+1}}y^{2n} \\
\times {}_2F_1(-n,-n; c+1; 1+x^2/y^2)({}_0F_1(c+2n+2;x))^2
\end{multline}
and
\begin{align}\label{3.20}
\frac{\partial^2}{ \partial y^2}\left|c\>{}_0F_1(c\/;z)\right|^2
&= 2\sum^\infty_{n=0}
\frac{(c+1)_n}{n!\,(c+1)_{2n}(c+1)_{2n+1}}y^{2n} \\ \notag
&\times {}_2F_1(-n,-n\/;c+1;1+x^2/y^2)({}_0F_1(c+2n+2;x))^2 \\ \notag
&+ 4y^2 \sum^\infty_{n=0}
\frac{(c+2)_{n+1}}{n!\,(c+1)_{2n+2}(c+1)_{2n+3}} y^{2n} \\ \notag
&\times {}_2F_1(-n,-n-1;c+2;1+x^2/y^2)({}_0F_1(c+2n+4;x))^2
\end{align}
which, in particular, give the inequalities
\begin{equation}\label{3.21}
y\frac{\partial}{\partial y}\big|c(c+1)\>{}_0F_1(c\/;z)\big|^2 \ge
2(c+1)(y\>{}_0F_1(c+2;x))^2,\quad c\ge-1,
\end{equation}
and
\begin{equation}\label{3.22} \quad
\frac{\partial^2}{\partial y^2}\left|c(c+1)\>{}_0F_1(c;z)\right|^2 \ge
2(c+1)({}_0F_1(c+2;x))^2,\quad c\ge-1.
\end{equation}
Since the coefficients on the right hand sides of (3.19)--(3.22) are
clearly positive when $c > -1$ and $y \ne 0,$ these formulas prove that
the functions $c(c+1) \> {}_0F_1(c\/; z)$ have only real zeros when $c\ge
-1,$ where it is understood that $c(c+1) \> {}_0F_1(c\/; z)$ is to be
replaced by its $c \rightarrow 0$ limit case $z \> {}_0F_1(2; z)$ when
$c=0,$ and by its $c \rightarrow -1$ limit case $z^2 \> {}_0F_1(3;
z)/2$ when $c=-1.$
\section{Laguerre polynomials and ${}_1F_1(a\/; c\/; z)$ functions}
\setcounter{equation}{0}
When $\alpha > - 1$ the Laguerre polynomials
\begin{equation}\label{4.1}
L_n^\alpha(z) = \frac{(\alpha+1)_n}{n!}\>{}_1F_1(-n\/;\alpha+1; z)
\end{equation}
satisfy the orthogonality relation
\begin{equation}\label{4.2}
\int^\infty_0 L_n^\alpha(x) L_m^\alpha(x) x^\alpha e^{-x}\, dx =
\frac{\Gamma(n+\alpha+1)}{n!} \delta_{nm},\;\; n,m=0,1,2,\dots,
\end{equation}
from which it follows by a standard argument (cf. \cite[\S3.3]{sz})
that the zeros of $L_n^\alpha(z)$ are real and positive. Analogous to
the last part of the previous section, in this section we will derive
some sums of squares expansions which, besides proving the reality of
the zeros of these polynomials when $\alpha > - 1,$ also prove that
they have only real zeros (not necessarily positive) when $-1\ge\alpha
\ge -2,$ where $L_n^\alpha(z)$ is defined to be the $\alpha \rightarrow
-k$ limit case of (4.1) when $\alpha$ is a negative integer $-k.$ Thus
$L_1^\alpha(z) = \alpha + 1 - z,$ which has a negative zero when
$\alpha < -1,$ and $L_2^\alpha(z) = ((\alpha+1)(\alpha+2)-
2(\alpha+2)z+z^2)\big/2,$ which has non-real zeros when $\alpha < -2.$
Let $\alpha$ be real valued. Substituting the sum of squares of Laguerre
polynomials expansion (from \cite[(91)]{bc41})
\begin{align}\label{4.3}
L^{\alpha+2k}_{n-k} (2x) &= \sum^{n-k}_{j=0}
\frac{(n-k-j)!\, (2k+2j+\alpha)(2k+\alpha)_j}
{j!\, (2k+\alpha)(2k+\alpha+1)_{n+j-k}} \\ \notag
&\times (-1)^j x^{2j} \left( L^{\alpha+2k+2j}_{n-k-j} (x)\right)^2
\end{align}
into the special case of \cite[(5.4)]{ba}
\begin{equation}\label{4.4}
\left| L_n^\alpha(z)\right|^2 = \frac{(\alpha+1)_n}{n!}
\sum^n_{k=0}\frac{1}{k!\,(\alpha+1)_k} (x^2+y^2)^k
L^{\alpha+2k}_{n-k} (2x)
\end{equation}
and changing the order of summation yields
\begin{multline}\label{4.5}
\left| L_n^\alpha(z)\right|^2 = \frac{(\alpha+1)_n}{n!}\sum^n_{k=0}
\frac{(n-k)!\,(2k+\alpha)(\alpha)_k}{k!\,\alpha(\alpha+1)_{n+k}}
(-1)^k x^{2k} \\
\times {}_2F_1(-k,k+\alpha\/;\alpha+1;1+y^2/x^2)
\left(L^{\alpha+2k}_{n-k} (x)\right)^2.
\end{multline}
Then application of (3.7) gives
\begin{multline}\label{4.6}
\left| L_n^\alpha(z)\right|^2 = \frac{(\alpha+1)_n}{n!}\sum^n_{k=0}
\frac{(n-k)!\,(2k+\alpha)(\alpha)_k}{k!\,\alpha(\alpha+1)_{n+k}}
y^{2k} \\
\times {}_2F_1(-k,1-k\/;\alpha+1;1+x^2/y^2)
\left(L^{\alpha+2k}_{n-k} (x)\right)^2.
\end{multline}
Since $L_0^\alpha (x) \equiv 1$ and the coefficients on the right hand side
of (4.6) are clearly positive when $\alpha > -1$ and $y \ne 0,$
the expansion (4.6) proves that the Laguerre polynomials have only real zeros
when $\alpha > -1.$ This also follows, in particular, from the inequalities
\begin{equation}\label{4.7}
\left| L_n^\alpha(z)\right|^2 \ge
\frac{(\alpha+1)_n}{n!\, n!\,(n+\alpha)_n} y^{2n}
\>{}_2F_1(-n, 1-n\/;\alpha+1;1+x^2/y^2), \quad \alpha>-1,
\end{equation}
and
\begin{equation}\label{4.8}
\left| L_n^\alpha(z)\right|^2 \ge \left| L_n^\alpha(x)\right|^2 +
\frac{(\alpha+1)_n}{n!\, n!\,(n+\alpha)_n} y^{2n},
\quad \alpha>-1,\ n\ge1,
\end{equation}
which are consequences of (4.6).
Now differentiate equation (4.5) with respect to $y$ and apply (3.7) to
derive the expansions
\begin{multline}\label{4.9}
y\frac{\partial}{\partial y} \left| L_n^\alpha(z)\right|^2 =
2y^2 \sum^{n-1}_{k=0}
\frac{(n-k-1)!\,(2k+\alpha+2)(\alpha+2)_k}{n!\,k!\,(n+\alpha+1)_{k+1}}
y^{2k} \\
\times {}_2F_1(-k,-k\/;\alpha+2;1+x^2/y^2)
\left( L^{\alpha+2k+2}_{n-k-1} (x)\right)^2, \quad n\ge 1,
\end{multline}
and
\begin{align}\label{4.10}
\frac{\partial^2}{\partial y^2} \left| L_n^\alpha(z)\right|^2 &=
2 \sum^{n-1}_{k=0}
\frac{(n-k-1)!\,(2k+\alpha+2)(\alpha+2)_k}{n!\,k!\,(n+\alpha+1)_{k+1}}
\>y^{2k} \\ \notag
&\times {}_2F_1(-k,-k\/;\alpha+2;1+x^2/y^2)
\left( L^{\alpha+2k+2}_{n-k-1} (x)\right)^2 \\[2mm] \notag
&+ 4y^2 \sum^{n-2}_{k=0}
\frac{(n-k-2)!\,(2k+\alpha+4)(\alpha+3)_{k+1}}{n!\,k!\,(n+\alpha+1)_{k+2}}
\>y^{2k} \\ \notag
&\times {}_2F_1(-k,-k-1;\alpha+3;1+x^2/y^2)
\left( L^{\alpha+2k+4}_{n-k-2} (x)\right)^2\!, \quad n\ge1,
\end{align}
which yield, e.g., the inequalities
\begin{equation}\label{4.11}\quad
y\frac{\partial}{\partial y} \left| L_n^\alpha(z)\right|^2 \ge
\frac{2(\alpha+2)}{n(n+\alpha+1)}
\left(yL^{\alpha+2}_{n-1}(x)\right)^2, \quad \alpha>-2,\ n\ge1,
\end{equation}
and
\begin{equation}\label{4.12}\quad
\frac{\partial^2}{\partial y^2} \left| L_n^\alpha(z)\right|^2 \ge
\frac{2(\alpha+2)}{n(n+\alpha+1)}
\left(L^{\alpha+2}_{n-1}(x)\right)^2, \quad \alpha>-2,\ n\ge1,
\end{equation}
and prove (after letting $\alpha\to-2$) that the polynomials
$L_n^\alpha (z)$ have only real zeros when $\alpha \ge -2.$
For the confluent hypergeometric functions
${}_1F_1(a\/; c\/;z)$ with $a$ and $c$
real valued and $c \ne 0, -1, -2, \dots,$
use of the expansion formulas \cite[(42) and (43)]{bc41}
instead of (4.3) and (4.4)
yields the nonterminating extension of (4.5)
\begin{multline}\label{4.13}
|{}_1F_1(a\/; c\/;z)|^2 = \sum^\infty_{k=0}\frac{(a)_k(c-a)_k}
{k!\,(c)_{2k}(c+k-1)_k} x^{2k} \\
\times {}_2F_1(-k, c+k-1; c\/;1+y^2/x^2)({}_1F_1(a+k\/;c+2k\/;x))^2
\end{multline}
and hence, by (3.7),
\begin{multline}\label{4.14}
|{}_1F_1(a\/; c\/;z)|^2 = \sum^\infty_{k=0}\frac{(a)_k(c-a)_k}
{k!\,(c)_{2k}(c+k-1)_k} (-1)^k y^{2k} \\
\times {}_2F_1(-k,1-k\/;c\/;1+x^2/y^2)({}_1F_1(a+k\/;c+2k\/;x))^2.
\end{multline}
Then differentiation of equation (4.13) with respect to $y$ and application of
(3.7) gives the following extensions of (4.9) and (4.10)
(and also of (3.19) and (3.20)), respectively,
\begin{multline}\label{4.15}
y\frac{\partial}{\partial y}|c(c+1)\>{}_1F_1(a\/; c\/;z)|^2 =
2y^2\sum^\infty_{k=0}\frac{(a)_{k+1}(c-a)_{k+1}(c+1)}
{k!\,(c+2)_{2k}(c+k+1)_k} (-1)^{k+1} y^{2k} \\
\times {}_2F_1(-k,-k\/;c+1;1+x^2/y^2)({}_1F_1(a+k+1;c+2k+2;x))^2
\end{multline}
and
\begin{align}\label{4.16}
\frac{\partial^2}{\partial y^2} \, &|c(c+1)\>{}_1F_1(a\/;c\/;z)|^2
= 2\sum^\infty_{k=0} \frac{(a)_{k+1}(c-a)_{k+1}(c+1)}
{k!\,(c+2)_{2k}(c+k+1)_k} (-1)^{k+1} y^{2k} \\ \notag
&\times {}_2F_1(-k,-k; c+1;1+x^2/y^2)({}_1F_1(a+k+1;c+2k+2;x))^2 \\ \notag
&+ 4y^2\sum^\infty_{k=0}\frac{(a)_{k+2}(c-a)_{k+2}}
{k!\,(c+2)_{2k+2}(c+k+3)_k}(-1)^ky^{2k} \\ \notag
&\times {}_2F_1(-k,-k-1;c+2;1+x^2/y^2)({}_1F_1(a+k+2;c+2k+4;x))^2.
\end{align}
If $a = -n$ is a negative integer and $c = \alpha +1,$ then (4.13)--(4.16)
reduce to (4.5), (4.6), (4.9), (4.10), respectively. If $a = c+n$ with $n$
a nonnegative integer, then (4.15) and (4.16) reduce to terminating
sums of squares expansions with nonnegative coefficients which prove that
$c(c+1) \>{}_1F_1(c+n\/; c\/;z),$ as a
function of $z,$ has only real zeros when $c \ge -1,$ where this function
is to be replaced by its $c \rightarrow 0$ and $c \rightarrow -1$
limit cases when $c=0$ and $c=-1,$ respectively.
It should be noted that, in view of Kummer's transformation formula
\cite[6.3(7)]{erd}
\begin{equation}\label{4.17}
{}_1F_1(a\/;c\/;x) = e^x\>{}_1F_1(c-a\/;c\/;-x),
\end{equation}
these results on the zeros of $c(c+1)\> {}_1F_1(c+n\/; c\/;z)$
are equivalent to those obtained above for the Laguerre polynomials.
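Kummer's transformation (4.17) itself is easy to confirm numerically; the
following sketch (assuming mpmath; the parameter values are arbitrary test
points) does so.
\begin{verbatim}
# Numerical check of Kummer's transformation; assumes mpmath, and the
# values of a, c, x below are arbitrary test points.
import mpmath as mp

a, c, x = 0.7, 1.3, 2.5
lhs = mp.hyp1f1(a, c, x)
rhs = mp.exp(x) * mp.hyp1f1(c - a, c, -x)
print(lhs, rhs, mp.almosteq(lhs, rhs))   # expect True
\end{verbatim}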
\section{Jacobi polynomials}
\setcounter{equation}{0}
When $ \alpha > -1$ and $\beta > -1$ the Jacobi polynomials
\begin{equation}\label{5.1}
P^{(\alpha,\beta)}_n (z) = \frac{(\alpha+1)_n}{n!}\>
{}_2F_1(-n,n+\alpha+\beta+1;\alpha+1;(1-z)/2)
\end{equation}
satisfy the orthogonality relation
\begin{equation}\label{5.2}
\int^1_{-1} P^{(\alpha,\beta)}_n (x)P^{(\alpha,\beta)}_m(x)(1-x)^\alpha
(1+x)^\beta\>dx = 0,\quad n\ne m,
\end{equation}
for $n, m = 0, 1, 2, \dots,$ and hence, by \cite[Theorem 3.3.1]{sz}, these
polynomials have only real zeros. In our derivation of sums of squares
expansions which imply the reality of the zeros of these
polynomials we will start out by deriving sums of squares
expansions for nonterminating ${}_2F_1(a, b\/; c\/; z)$ hypergeometric series
with $|z| < 1$ (for convergence).
Let $a, b, c$ be real valued, $c \ne 0, -1, -2, \dots,$ and $|z| < 1.$
Then formula \cite[(51)]{bc40} gives the expansion
\begin{align}\label{5.3}
|{}_2F_1(a,b\/;c\/;z)|^2 &= \sum^\infty_{k=0}
\frac{(a)_k(b)_k(c-a)_k(c-b)_k}{k!\,(c)_k(c)_{2k}} (x^2+y^2)^k \\
&\times {}_2F_1(a+k,b+k\/;c+2k\/;2x-x^2-y^2). \notag
\end{align}
Unfortunately, application of the inversion \cite[(50)]{bc40} of
\cite[(51)]{bc40} to each of the ${}_2F_1(a+k, b+k\/; c+2k\/;
2x-x^2-y^2)$ functions on the right side of equation (5.3) just returns
one back to the function that is on the left side. Therefore, we use
formulas (44), (45), (50) in \cite{bc40} to obtain, respectively, the
expansions
\begin{multline}\label{5.4}
{}_2F_1(a+k,b+k\/;c+2k\/;2x-x^2-y^2) \\
= \sum^\infty_{j=0}\frac{(a+k)_j(b+k)_j}{j!\,(c+2k)_j}(-1)^j
(x^2+y^2)^j\> {}_2F_1(a+k+j,b+k+j\/;c+2k+j\/;2x),
\end{multline}
\begin{multline}\label{5.5}
{}_2F_1(a+k+j,b+k+j\/;c+2k+j\/;2x)
= \sum^\infty_{m=0}\frac{(a+k+j)_m(b+k+j)_m}{m!\,(c+2k+j)_m} x^{2m} \\
\times {}_2F_1(a+k+j+m,b+k+j+m\/;c+2k+j+m\/;2x-x^2),
\end{multline}
\begin{multline}\label{5.6}
{}_2F_1(a+k+j+m,b+k+j+m\/;c+2k+j+m\/;2x-x^2) \\
= \sum^\infty_{n=0}\frac{(a+k+j+m)_n(b+k+j+m)_n(c-a+k)_n(c-b+k)_n}
{n!\,(c+2k+j+m+n-1)_n(c+2k+j+m)_{2n}} (-1)^n x^{2n} \\
\times({}_2F_1(a+k+j+m+n,b+k+j+m+n\/;c+2k+j+m+2n\/;x))^2,
\end{multline}
and then substitute these expansions in turn into (5.3), change the order
of summation and use the binomial theorem to obtain
\begin{multline}\label{5.7}
|{}_2F_1(a,b\/;c\/;z)|^2
= \sum^\infty_{m=0}\sum^m_{j=0}\frac{(a)_m(b)_m(c-a)_j(c-b)_j}
{j!\, (m-j)!\,(c)_{m+j}(m+c-1)_j} (-1)^m x^{2j}y^{2m-2j} \\
\times{}_2F_1(-j,m+c-1;c\/;1+y^2/x^2)({}_2F_1(m+a,m+b\/;m+j+c\/;x))^2.
\end{multline}
Application of (3.7) to the first ${}_2F_1$ series on the right
side of (5.7) gives
\begin{multline}\label{5.8}
|{}_2F_1(a,b\/;c\/;z)|^2
= \sum^\infty_{m=0}\sum^m_{j=0}\frac{(a)_m(b)_m(c-a)_j(c-b)_j}
{j!\, (m-j)!\,(c)_{m+j}(m+c-1)_j} (-1)^{m+j} y^{2m} \\
\times{}_2F_1(-j,1-m\/;c\/;1+x^2/y^2)({}_2F_1(m+a,m+b\/;m+j+c\/;x))^2,
\end{multline}
which contains (4.14) as a limit case. When $a = -n$ is a negative
integer, $b=n+\alpha+\beta+1$ and $c=\alpha+1,$ it follows from (5.8) that
\begin{multline}\label{5.9}
\left|\frac{n!}{(\alpha+1)_n} P^{(\alpha,\beta)}_n (1-2z)\right|^2 \\
= \sum^n_{m=0}\sum^m_{j=0}\frac{(-n)_m(n+\alpha+\beta+1)_m
(n+\alpha+1)_j(-n-\beta)_j}{j!\,(m-j)!\,(\alpha+1)_{m+j}(m+\alpha)_j}
(-1)^{m+j} y^{2m} \\
\times{}_2F_1(-j,1-m\/;\alpha+1;1+x^2/y^2)
({}_2F_1(m-n,m+n+\alpha+\beta+1;m+j+\alpha+1;x))^2,
\end{multline}
which gives a sums of squares proof that the Jacobi polynomials
$P_n^{(\alpha, \beta)}(z)$ have only real zeros when $ \alpha, \beta > -1$
(since the coefficients in (5.9) are then clearly positive) and hence, by
continuity, when $ \alpha, \beta \ge -1.$ The restriction that $ \alpha,
\beta \ge -1$ cannot be extended to $ \alpha, \beta \ge -2$
because $P_2^{(\alpha, \beta)}(z)$
has non-real zeros when $ \alpha, \beta > -2$ and $\alpha+\beta<-3.$
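This counterexample is easily exhibited numerically; the sketch below
(assuming SymPy; the choice $\alpha=\beta=-9/5$ is an arbitrary point with
$\alpha,\beta>-2$ and $\alpha+\beta<-3$) displays the complex-conjugate pair of
zeros of $P_2^{(\alpha,\beta)}$.
\begin{verbatim}
# For alpha = beta = -9/5 we have alpha, beta > -2 and alpha + beta < -3,
# so P_2^{(alpha,beta)} should have non-real zeros.  Assumes SymPy.
from sympy import Symbol, Poly, Rational
from sympy.polys.orthopolys import jacobi_poly

z = Symbol('z')
a = b = Rational(-9, 5)
p = Poly(jacobi_poly(2, a, b, z), z)
print(p.nroots())   # expect a complex-conjugate pair
\end{verbatim}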
As in sections 3 and 4 one may repeatedly differentiate (5.7)
with respect to $y$ and apply (3.7) to obtain extensions of (4.15), (4.16), etc.
But, since the resulting identities are quite lengthy and do not yield
any additional $(\alpha, \beta)$ for which the Jacobi polynomials have only
real zeros, we will omit them and only point out that the first two
differentiations give identities that yield, in particular, the inequalities
\begin{equation}\label{5.10}
y\frac{\partial}{\partial y}\left|P^{(\alpha,\beta)}_n (1-2z)\right|^2
\ge \frac{2n(n+\alpha+\beta+1)_n(\alpha+1)_n}{n!\,n!} y^{2n}
\end{equation}
and
\begin{equation}\label{5.11}
\frac{\partial^2}{\partial y^2}\left|P^{(\alpha,\beta)}_n (1-2z)\right|^2
\ge \frac{2n(2n-1)(n+\alpha+\beta+1)_n(\alpha+1)_n}{n!\,n!} y^{2n-2}
\end{equation}
when $n\ge 1$ and $ \alpha, \beta \ge -1.$
In subsequent papers it will be shown that squares of real valued functions
can also be used to prove the reality of the zeros of some non-classical
families of orthogonal polynomials, of the cosine transforms
$$ \int_0^\infty e^{-a \cosh t} \cos zt \, dt, \qquad a > 0,$$
and of some other entire functions.
| {
"timestamp": "1998-09-01T22:23:58",
"yymm": "9307",
"arxiv_id": "math/9307210",
"language": "en",
"url": "https://arxiv.org/abs/math/9307210",
"abstract": "It is shown how sums of squares of real valued functions can be used to give new proofs of the reality of the zeros of the Bessel functions $J_\\alpha (z)$ when $\\alpha \\ge -1,$ confluent hypergeometric functions ${}_0F_1(c\\/; z)$ when $c>0$ or $0>c>-1$, Laguerre polynomials $L_n^\\alpha(z)$ when $\\alpha \\ge -2,$ and Jacobi polynomials $P_n^{(\\alpha,\\beta)}(z)$ when $\\alpha \\ge -1$ and $ \\beta \\ge -1.$ Besides yielding new inequalities for $|F(z)|^2,$ where $F(z)$ is one of these functions, the derived identities lead to inequalities for $\\partial |F(z)|^2/\\partial y$ and $\\partial ^2 |F(z)|^2/\\partial y^2,$ which also give new proofs of the reality of the zeros.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Using sums of squares to prove that certain entire functions have only real zeros",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877049262134,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7089594894185285
} |
https://arxiv.org/abs/math-ph/9911038 | Continuous analog of Gauss-Newton method | A continuous analog of Gauss-Newton method for solving nonlinear ill-posed problems is proposed. Its convergence is proved. A numerical example is presented to demonstrate efficiency of the proposed method. | \section{Introduction}
\vspace*{-0.5pt}
\noindent
\setcounter{equation}{0}
\renewcommand{\theequation}{1.\arabic{equation}}
Let $H_1$ and $H_2$ be real Hilbert spaces and $\varphi:H_1 \to H_2$
a nonlinear operator. Let us consider the equation:
\begin{equation}\label{eq1} \varphi(x)=0.\end{equation}
We assume that the following condition on $\varphi$ holds.
\medskip
\noindent
{\bf Condition A}:\
Problem (\ref{eq1}) has a solution $\hat{x}$, not necessarily unique.
In the well-known Newton method (\cite{or}) one constructs a sequence $\{x_n\}$,
$n=0,1,\dots$, which converges to a solution (in general non-unique) of
equation (\ref{eq1}). The first term $x_0$ is an initial
approximation point and the other terms are constructed by means of the
following iterative process:
\begin{equation}\label{nm}
x_{k+1}=x_k-\varphi^\pr(x_k)^{-1}\varphi(x_k),
\end{equation}
where $\varphi^\pr(x)$ is the Fr\'echet derivative of the operator $\varphi$. Recall that
$\varphi^\pr(x)$ is a linear operator from $H_1$ to $H_2$. The usual
necessary condition for the realization of the Newton method is the bounded
invertibility of $\varphi^\pr(x_k)$, that is, the existence of a bounded linear
operator $[\varphi^\pr(x_k)]^{-1}$ for all $k$. Actually in order to provide the convergence
of Newton iterations one needs bounded invertibility of $\varphi^\pr$ in a ball
$B(\hat{x},R):=\lbrace x:x \in H_1, ||x-\hat{x}||\leq R\rbrace$.
However this condition does not hold in many important applications. In order
to avoid this restriction several modifications of the Newton method have been
developed. In this paper we consider the Gauss-Newton procedure for
equation (\ref{eq1}) (see e.g. (\cite{or})):
\begin{equation}\label{eq2}
x_{k+1}=x_k-[\varphi^{\pr*}(x_k)\varphi^\pr(x_k)]^{-1}\varphi^{\pr*}(x_k)\varphi(x_k), \quad k=0,1,\dots.
\end{equation}
If the operator $\varphi^{\pr*}(x)\varphi^\pr(x)$ is not boundedly
invertible one needs some regularization procedure. In order to construct
such a procedure one can introduce a sequence of positive numbers
$\alpha_k$, $\alpha_k\to 0$, and replace iterative method~(\ref{eq2}) by the following one
(\cite{or,bak}):
\begin{equation}\label{eq3}
x_{k+1}=x_k-[\varphi^{\pr*}(x_k)\varphi^\pr(x_k)+\alpha_k I]^{-1}[\varphi^{\pr*}(x_k)\varphi(x_k)+\alpha_k(x_k-x_0)],
\end{equation}
where $I$ is the identity operator.
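To fix ideas, here is a minimal finite-dimensional sketch of iteration
(\ref{eq3}) (assuming NumPy; the toy map $\varphi$, the sequence
$\alpha_k=0.5/(k+1)$ and the starting point are illustrative choices, not taken
from this paper):
\begin{verbatim}
# Sketch of the regularized Gauss-Newton iteration in R^2 (assumes NumPy).
# The map phi, its Jacobian J, alpha_k and the starting point are
# illustrative choices; phi vanishes at (1, 1).
import numpy as np

def phi(x):
    return np.array([x[0]**2 - x[1], x[0]*x[1] - 1.0])

def J(x):
    return np.array([[2.0*x[0], -1.0], [x[1], x[0]]])

x0 = np.array([2.0, 2.0])
x = x0.copy()
for k in range(30):
    a_k = 0.5 / (k + 1)                       # alpha_k -> 0
    Jk = J(x)
    M = Jk.T @ Jk + a_k * np.eye(2)
    x = x - np.linalg.solve(M, Jk.T @ phi(x) + a_k * (x - x0))
print(x, phi(x))                              # x should approach (1, 1)
\end{verbatim}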
The methods constructed above can be also considered as discrete analogs
of some continuous methods (called sometimes continuation
methods). In (\cite{g}) the following Cauchy problem
has been considered as a continuous analog of (\ref{nm}):
\begin{equation}\label{cnm}
\dot{x}(t)=-[\varphi^\pr(x(t))]^{-1}\varphi(x(t)),\quad x(0)=x_0, \quad \dot{x}(t):=
\frac { dx }{dt}.
\end{equation}
A solution to problem (\ref{eq1}) can be obtained as the limit of the
function $x(t)$ as $t\to\infty$.
If one solves this Cauchy problem by means of Euler's
method with a stepsize $\tau=1$ one gets (\ref{nm})
with $x_k=x(k)$. Continuous
analogs of iterative methods have several advantages over
the discrete ones. Convergence theorems for continuous methods can usually
be obtained more easily. If a convergence theorem is proved for a continuous
method, that is, for the Cauchy problem for a differential equation, for
instance (\ref{cnm}), one can construct various finite difference schemes
for the solution of this Cauchy problem. These difference schemes give
discrete methods for the solution of equation (\ref{eq1}). For instance
the methods of Euler and Runge-Kutta can be used. More detailed information
about the applications and modifications of continuous Newton methods can
be found in (\cite{g,zp,ap}).
The aim of this paper is to construct a continuous analog of iterative
scheme ~(\ref{eq3}), to prove a convergence theorem for this continuous analog
of (\ref{eq3}), and to test the method numerically by applying it to a
practically interesting nonlinear inverse problem of gravimetry.
The paper is organized as follows. In section 2 a continuous analog of
method (\ref{eq3}) is described and a convergence theorem for this method
is formulated. In section 3 this convergence theorem is proved. In
section 4 an inverse gravimetry problem is considered and the proposed
method is numerically tested. In our numerical experiments comparison of
different regularization functions
is done. Based on the results of the numerical experiments some recommendations
are given for the choice of the regularization function.
\newpage
\bigskip
\noindent
{\bf 2.\ Continuous Gauss-Newton Method and Convergence Theorem}
\\$\left.\right.$
\setcounter{equation}{0}
\renewcommand{\theequation}{2.\arabic{equation}}
\vspace*{-0.5pt}
\noindent
In order to describe convergence rates we introduce the following
\medskip
\noindent
{\bf Definition 2.1.\ }
A positive function $\alpha(t)\in C^1 [0,\infty)$
is said to be a convergence rate function if $\alpha (t)$ decreases monotonically
to zero as $t\to \infty$ and $\ln \alpha (t)$ is convex,
that is, $\dot{\alpha}(t)/\alpha (t)$ is monotonically increasing.
\medskip
\noindent
{\bf Remark 2.2.\ }
The number $\alpha (0)$ can be chosen sufficiently large and simultaneously
the number $|\dot{\alpha}(0)/\alpha(0)|$ can be sufficiently small.
Here and below the over dot denotes the derivative with respect to time
$\dot{x}:=dx/dt$.
For example, one can choose $\alpha(t)=b/(t+a)$, where $a$ and $b$ are positive
constants such that $a$ and $b/a$ are sufficiently large.
\vspace*{12pt}
A continuous analog of iterative process~(\ref{eq3}) is the following
Cauchy problem:
\begin{equation}\label{eq4}
\dot{x}(t)=-[\varphi^{\pr*}(x(t))\varphi^\pr(x(t))+\alpha(t)I]^{-1}[\varphi^{\pr*}(x(t))\varphi(x(t))+
\alpha(t)(x(t)-x_0)],
\end{equation}
$$
x(0)=x_0.
$$
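A minimal sketch of integrating the Cauchy problem (\ref{eq4}) (assuming
SciPy; the same toy map $\varphi$ as in the sketch above, with the
regularization function $\alpha(t)=1/(t+1)$, both illustrative choices) looks
as follows:
\begin{verbatim}
# Sketch of the CAGNM flow for a toy phi in R^2 (assumes NumPy/SciPy).
# phi, J, x0 and alpha(t) = 1/(t+1) are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

phi = lambda x: np.array([x[0]**2 - x[1], x[0]*x[1] - 1.0])
J   = lambda x: np.array([[2.0*x[0], -1.0], [x[1], x[0]]])
x0  = np.array([2.0, 2.0])
alpha = lambda t: 1.0 / (t + 1.0)             # a convergence rate function

def rhs(t, x):
    Jx = J(x)
    M = Jx.T @ Jx + alpha(t) * np.eye(2)
    return -np.linalg.solve(M, Jx.T @ phi(x) + alpha(t) * (x - x0))

sol = solve_ivp(rhs, (0.0, 200.0), x0, rtol=1e-8)
print(sol.y[:, -1])                           # x(t) approaches a zero of phi
\end{verbatim}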
Denote by $\hbox{Ran}(L)$ the range of the linear operator $L$. The convergence
of the continuous analog of Gauss-Newton method (CAGNM) is established by
the following theorem, in which (and throughout this paper) the norms
$||\varphi^\pr(x)||$ and $||\varphi^{\pr\pr}(x)||$ are the norms of linear and bilinear operators
from $H_1$ to $H_2$ and from $H_1\times H_1$ to $H_2$ respectively.
\medskip
\noindent
{\bf Theorem 2.3.\ }
Let $\alpha(t)$ be a convergence rate function.
Assume that there exists a positive number $R$ for which Condition A
and the following conditions hold:
\begin{description}
\item[(i)] The Fr\'echet derivatives $\varphi^\pr(x)$ and $\varphi^{\pr\pr}(x)$ exist in the ball
$B(\hat{x},R)$ and satisfy the following inequalities:
\begin{equation}\label{bound}
||\varphi^\pr(x)|| \leq N_1, \quad ||\varphi^{\pr\pr}(x)|| \leq N_2 \quad \forall x \in B(\hat{x},R),
\end{equation}
where
$$
\frac{\alpha(0)}{N_1N_2}\left(1-2N_1N_2||v||+\frac{\dot{\alpha} (0)}{\alpha (0)}\right)\le R.
$$
\item[(ii)]
\begin{equation}\label{iop}
x_0\in B(\hat{x},R)\cap[\hat{x}+\hbox{Ran}(\varphi^{\pr*}(\hat{x})\varphi^\pr(\hat{x}))].
\end{equation}
\item[(iii)] For some $v$, such that $\hat{x}-x_0=\varphi^{\pr*}(\hat{x})\varphi^\pr(\hat{x})v$,
the following inequalities hold:
\begin{equation}
1-2N_1N_2||v||+\frac{\dot{\alpha} (0)}{\alpha (0)}>0,
\end{equation}
\begin{equation}\label{in2}
\left(1-2N_1N_2||v||+\frac{\dot{\alpha} (0)}{\alpha (0)}\right)^2-2N_1N_2||v||>0.
\end{equation}
\end{description}
Then the following conclusions hold:
\begin{description}
\item[(i)] The solution $x=x(t)$ of problem (\ref{eq4}) exists, and
$x(t)\in B(\hat{x},R)$ for $t\in[0,\infty)$,
\item[(ii)] $||x(t)-\hat{x}||=O(\alpha (t))$ for $t \to \infty$.
\end{description}
\medskip
\noindent
{\bf Remark 2.4.\ }
Condition (ii) in Theorem 2.3 gives some restriction on the choice of
an initial approximation point. It is not easy to verify this condition
algorithmically. However a condition of this kind is necessary if one works with
equation (\ref{eq1}) with the operator $\varphi^{\pr*}(x)\varphi^\pr(x)$, which is not boundedly
invertible. If the operator $\varphi^{\pr*}\varphi^\pr$ is injective but is not boundedly invertible,
then the image of the linear selfadjoint operator $\varphi^{\pr*}\varphi^\pr$ is dense in $B(\hat{x},R)$
and consequently the set of the suitable initial approximation points satisfying
condition (ii) is also dense in $B(\hat{x},R)$. As our numerical results show
(see section 4) the proposed method is practically efficient.
\bigskip
\noindent
{\bf 3.\ Proof of Theorem 2.3}
\\$\left. \right. $
\setcounter{equation}{0}
\renewcommand{\theequation}{3.\arabic{equation}}
\vspace*{-0.5pt}
\noindent
The main part of the proof is to show that the solution to problem (\ref{eq4}) does
not leave the ball $B(\hat{x},R)$ (Lemma 3.3).
In order to prove it, let us assume that
there exists a time $t_1 \in [0,\infty)$ at which $x(t)$ intersects the
boundary of $B(\hat{x},R)$ for the first time. Hence $x(t)$
belongs to the interior of $B(\hat{x},R)$ for $t \in [0,t_1)$ and
$||x(t_1)-\hat{x}||=R$. Let us introduce an auxiliary function
\begin{equation}\label{eq6}
w(t):=||x(t)-\hat{x}||/\alpha (t).
\end{equation}
First, in Lemma 3.1, we derive a nonlinear differential inequality
for $w(t)$. From
this differential inequality we get the estimate which shows that for all
$t \in [0,t_1]$ the points of the integral curve of problem (\ref{eq4}) belong
to the interior of the ball $B(\hat{x},R)$. This contradiction proves that
the integral curve of the solution does not leave the above ball, and consequently
problem (\ref{eq4}) has
a global solution for $t\in [0,\infty)$. We also show that $w(t)$ is bounded,
which implies, by formula (\ref{eq6}), strong convergence of
$x(t)$ to $\hat x$ as $t\to\infty$.
\vspace*{12pt}
\noindent
{\bf Lemma 3.1.\ }
If the assumptions of Theorem 2.3 hold then the
differential inequality
\begin{equation}\label{eq5}
\frac{dw}{dt}\leq C_1w^2-C_2w+C_3,
\end{equation}
is valid for $t \in [0,t_1]$, where
\begin{equation}\label{eq11}
C_1=\frac{N_1N_2}{2},\quad C_2=1-2N_1N_2||v||+\frac{\dot{\alpha}(0)}{\alpha (0)},\quad
C_3=||v||.
\end{equation}
\vspace*{12pt}
\noindent
{\bf Proof.}
The G\^ateaux derivative $\varphi^{\prime\pr}(x,\xi_1,\xi_2)$ is a bilinear
operator such that
$$\varphi^\pr(x+\xi_1)\xi_2-\varphi^\pr(x)\xi_2:=\varphi^{\prime\pr}(x,\xi_1,\xi_2) +\eta\xi_2,
\hbox{ and }||\eta||\cdot||\xi_1||^{-1}\to 0 \hbox{ for }\xi_1\to 0,
||\xi_1||>0.$$
Let us define operators $K,G:B(\hat{x},R)\times H_1\times H_1\to H_2$
by the formulas:
\begin{equation}\label{f1}
K(x,\xi_1,\xi_2)=\int\limits_0^1\int\limits_0^1\varphi^{\pr\pr}(x+st\xi_1,\xi_1,\xi_2)
tdtds
\end{equation}
and
\begin{equation}\label{f2}
G(x,\xi_1,\xi_2)=\int\limits_0^1\varphi^{\pr\pr}(x+t\xi_1,\xi_1,\xi_2)dt.
\end{equation}
Then from (\ref{bound}) we get
\begin{equation}\label{eq8}
||K(x,\xi_1,\xi_2)||\leq\frac{N_2}{2}||\xi_1||\cdot||\xi_2||,\quad
||G(x,\xi_1,\xi_2)||\leq N_2||\xi_1||\cdot||\xi_2||.
\end{equation}
The following formulas will be used:
\begin{equation}\label{eq7}
\varphi (\hat x)-\varphi (x)=\varphi^\pr (x)(\hat x -x)+K(x,\hat x -x,\hat x -x),
\end{equation}
and
\begin{equation}\label{eq9}
(\varphi^\pr (\hat x)- \varphi^\pr(x))\xi=G(x,\hat x -x,\xi),
\end{equation}
where $K$ and $G$ are defined by (\ref{f1}) and (\ref{f2})
respectively. Let us derive formulas (\ref{eq7}) and (\ref{eq9}).
One has
$$
\varphi(\hat x)-\varphi(x)=\int\limits_0^1\frac{d}{dt}\varphi(x+t(\hat x -x))dt
=\int\limits_0^1\varphi^\pr(x+t(\hat x -x))(\hat x -x)dt
$$
$$
=\varphi^\pr(x)(\hat x -x)+\int\limits_0^1[\varphi^\pr(x+t(\hat x -x))-\varphi^\pr(x)](\hat x -x)dt
$$
$$
=\varphi^\pr(x)(\hat x -x)+\int\limits_0^1 tdt\int\limits_0^1 ds[\varphi^{\pr\pr}(x+st(\hat x -x))](\hat x -x)
(\hat x -x)
$$
and
$$
[\varphi^\pr(\hat x)-\varphi^\pr(x)]\xi=\int\limits_0^1\frac{d}{dt}\varphi^\pr(x+t(\hat x -x))\xi dt
=\int\limits_0^1\varphi^{\pr\pr}(x+t(\hat x -x),\hat x -x,\xi)dt.
$$
Since $\hat x$ solves (\ref{eq1}), one can rewrite equation
~(\ref{eq4}) as
$$
\frac{dx}{dt}=-[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}[\varphi^{\pr*} (x)(\varphi (x)-\varphi(\hat
x))+\alpha (x-x_0)].
$$
From the condition (iii) of Theorem 2.3 and from (\ref{eq7})
we get
$$
\frac{dx}{dt}=-[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}[-\varphi^{\pr*}(x)(\varphi^\pr (x)(\hat x -x)+
K(x,\hat x -x,\hat x -x))+
$$
$$
\alpha (x-\hat x)+\alpha \varphi^{\pr*} (\hat x)\varphi^\pr (\hat x)v],
$$
and therefore
$$
\frac{dx}{dt}=
-(x-\hat x )-[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}[-\varphi^{\pr*} (x)K(x,\hat x -x,\hat x -x)+
\alpha\varphi^{\pr*} (x)\varphi^\pr (x)v+
$$
$$
\alpha (\varphi^{\pr*}(\hat x)\varphi^\pr (\hat x)-\varphi^{\pr*} (x)\varphi^\pr (x))v].
$$
Since $\hat x$ does not depend on $t$, it follows from (\ref{eq9}) that
$$
\frac{d(x(t)-\hat x)}{dt}=\frac{dx}{dt}=
-(x-\hat x )-[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}
[-\varphi^{\pr*} (x)K(x,\hat x -x,\hat x -x)
$$
$$
+\alpha\varphi^{\pr*} (x)\varphi^\pr (x)v+\alpha(\varphi^{\pr*}(\hat x)-\varphi^{\pr*} (x))\varphi^\pr(\hat x )v+
\alpha\varphi^{\pr*}(x)G(x,\hat x -x,v)].
$$
Now let us derive an inequality for $\frac{d}{dt} ||x-\hat x ||^2.$ One has
$$
\frac{d}{dt} ||x-\hat x ||^2=-2||x-\hat x ||^2+2([\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}
[\varphi^{\pr*}(x)K(x,\hat x -x,\hat x -x)], x-\hat x)-
$$
$$
2\alpha([\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}\varphi^{\pr*} (x)\varphi^\pr(x)v,x-\hat x )+
2\alpha(\varphi^\pr(\hat x)v,G(x,\hat x -x,[\varphi^{\pr*} (x)\varphi^\pr (x)
$$
$$
+\alpha I]^{-1}(x-\hat x)))+
2\alpha([\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}\varphi^{\pr*}(x)G(x,\hat x -x,v),x-\hat x ).
$$
Since the operator $\varphi^{\pr*} (x)\varphi^\pr (x)$ is selfadjoint and nonnegative
we have the following spectral representation:
$$
[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha I]^{-1}\varphi^{\pr*} (x)\varphi^\pr(x)=\int_0^\infty\frac{\lambda}{\lambda+\alpha}dE_\lambda,
$$
where $E_\lambda$ is the resolution of the identity of the selfadjoint operator
$\varphi^{\pr*}(x)\varphi^\pr(x)$.
Since $0\le\lambda/(\lambda+\alpha)\le 1$ for $\alpha>0$ and $\lambda\ge 0$, it follows that
\begin{equation}\label{spes}
||[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha(t)I]^{-1}\varphi^{\pr*} (x)\varphi^\pr(x)||\le 1.
\end{equation}
Also one has the following estimate:
\begin{equation}\label{es1}
||[\varphi^{\pr*} (x)\varphi^\pr (x)+\alpha(t)I]^{-1}||\le 1/\alpha(t).
\end{equation}
From (\ref{spes}),(\ref{es1}) and (\ref{eq8})
one gets the following differential
inequality for $A(t):=||x(t)-\hat x||:$
$$
\dot{A}\leq -A+\frac{N_1N_2}{2\alpha}A^2+\alpha ||v||+2N_1N_2 ||v|| A.
$$
In order to finish the proof of the Lemma 3.1, we
derive from the last inequality the inequality for $w(t)$ by taking into
account that $\dot{\alpha}(t)/\alpha (t)$ is monotonically increasing function.
$\Box $
The following lemma is a simple corollary of the more general results
established in (\cite{JS}).
\newpage
\vspace*{12pt}
\noindent
{\bf Lemma 3.2.\ }
Let $f(t,u)$ be a continuous function on
$[0,T]\times (-\infty,+\infty)$ such that the Cauchy problem
\begin{equation}\label{eq12}
\dot u = f(t,u(t)),\quad u(0)=u_0
\end{equation}
is uniquely solvable on $[0,T]$, and let $v(t)$ be a differentiable function
defined on $[0,T]$ satisfying the conditions
\begin{equation}\label{eq13}
\dot v \leq f(t,v(t)),\quad t\in[0,T],\quad v(0)=v_0.
\end{equation}
If $v_0\leq u_0$ then
$$v(t)\le u(t) \hbox{ for } t\in [0,T].$$
\vspace*{12pt}
It follows from inequality (\ref{in2}) that $c=\sqrt{C_2^2-4C_1C_3}>0$ for
constants $C_1,$ $C_2,$ $C_3$ defined in (\ref{eq11}).
Let $u_1$ and $u_2$ be, respectively, the smaller and the larger roots
of the equation $C_1u^2-C_2u+C_3=0$. For $u_0$ satisfying the
inequality $u_1<u_0<C_2/(2C_1)$ the solution of the Cauchy problem
\begin{equation}\label{eq15}
\dot{u}=C_1u^2-C_2u+C_3,\quad u(0)=u_0,
\end{equation}
is given by the formula:
$$
\left\vert \frac{u-u_2}{u-u_1}\right\vert
=\frac{u_2-u_0}{u_0-u_1}e^{ct}.
$$
Let us show that $u(t)$ is defined for all $t\in [0,\infty).$
Indeed, since $u_0\in(u_1,u_2)$, resolving the absolute value in the above formula gives
\begin{equation}\label{eq17}
u=u_1+\frac{u_2-u_1}{\frac{u_2-u_0}{u_0-u_1}e^{ct}+1}.
\end{equation}
Thus $u_1<u(t)<u(0)<u_2$.
This means that $u(t)$ does not leave the interval $(u_1,u_2)$ for all $t\in
[0,\infty)$
and $u(t)$ is well defined for all $t\in [0,\infty)$.
From condition (i) of Theorem 2.3 one obtains the following estimate:
$$
w(0)=\frac{||x_0-\hat x ||}{\alpha (0)}<\frac{1-2N_1 N_2 ||v||+\frac{\dot{\alpha} (0)}{\alpha(0)}}{N_1
N_2}=\frac{C_2}{2C_1}.
$$
Therefore from Lemmas 3.1 and 3.2 it follows that
\begin{equation}\label{estim}
\frac{||x(t_1)-\hat x ||}{\alpha (t_1)} \leq
u_1+\frac{u_2-u_1}{\frac{u_2-u_0}{u_0-u_1}e^{ct_1}+1}<u_0<\frac{C_2}{2C_1}.
\end{equation}
Thus
$$
|| x(t_1)-\hat x ||<\frac{C_2}{2C_1}\alpha(t_1)<\frac{C_2}{2C_1}\alpha (0)\le R.
$$
This contradicts the assumption $|| x(t_1)-\hat x ||=R$. So the following lemma is
proved.
\newpage
\vspace*{12pt}
\noindent
{\bf Lemma 3.3.\ }
If the assumptions of Theorem 2.3 hold and for an arbitrary positive $T$ the solution
of the problem (\ref{eq4}) exists on the interval $[0,T]$, then the integral curve of
the solution of (\ref{eq4}) lies in the interior of the ball $B(\hat{x},R)$ for all
$t$ from the interval $[0,T]$.
\vspace*{12pt}
Now let us show that there exists a unique solution of (\ref{eq4}) on
$[0,\infty)$ provided that $x_0$ satisfies conditions (ii) and (iii) of Theorem 2.3. The
Cauchy problem (\ref{eq4}) is equivalent
to the integral equation
\begin{equation}\label{eq18}
x(t)=x_0+\int\limits_0^t F(s,x(s))ds,
\end{equation}
where
$$
F(s,x(s)):=-[\varphi^{\pr*}(x(s))\varphi^\pr(x(s))+\alpha (s) I]^{-1}[\varphi^{\pr*}(x(s))\varphi(x(s))+\alpha (s) (x(s)-x_0)].
$$
Let us fix an arbitrary large positive number $T$ and use the successive
approximation method to solve equation (\ref{eq18}) on $[0,T]$:
\begin{equation}\label{eq19}
x_{n+1}(t)=x_0+\int\limits_0^t F(s,x_n (s))ds,\quad x_0(t)=x_0,\quad t\in [0,T].
\end{equation}
Since $\varphi^\pr(x)$ and $\varphi^{\pr\pr}(x)$ are assumed to be bounded in $B(\hat{x},R)$, see
(\ref{bound}), and $\alpha(t)$ is positive on $[0,T]$, for every $t\in [0,T]$ the
function $F(t,x)$ has bounded Fr\'echet derivative with respect to $x$ in
$B(\hat{x},R)$. So one has:
$$||F(t,x_1)-F(t,x_2)||\le K(T)||x_1-x_2||$$
for all $t\in [0,T]$ and $x_1$, $x_2$ belong to $B(\hat{x},R)$.
Thus, one easily gets the estimate
$$
||x_{n+1}(s)-x_n (s) || \leq ||F(x_0)|| K^n(T)\frac{T^n}{n!}
$$
valid on the maximal subinterval $[0,T_1]=\{t: t\in [0,T] \hbox{ and } x(t)\in
B(\hat{x},R)\}$. Therefore iterative process (\ref{eq19}) converges
uniformly and determines the unique solution of equation (\ref{eq18}) on $[0,T_1]$.
If $T_1<T$, it follows
from the maximality of the subinterval $[0,T_1]$ that $x(T_1)$ is a
boundary point
of $B(\hat{x},R)$. But this contradicts Lemma 3.3. So the solution
of the problem (\ref{eq4}) exists and belongs to the interior of the ball
$B(\hat{x},R)$ on every interval $[0,T]$ and consequently on $[0,\infty)$.
To finish the proof of Theorem 2.3 it is sufficient to note that inequality
(\ref{estim}) implies the estimate
$$
||x(t)-\hat x || \leq \frac{C_2}{2C_1}\alpha(t)
$$
for all $t\in[0,\infty)$.
$\Box$
\newpage
\bigskip
\noindent
{\bf 4.\ Numerical Results}
\\$\left. \right. $
\setcounter{equation}{0}
\renewcommand{\theequation}{4.\arabic{equation}}
\vspace*{-0.5pt}
\noindent
To test numerically the method described above, we chose the inverse gravimetry
problem (\cite{vas}). The goal of the numerical test is to illustrate the
choice of the regularization function $\alpha(t)$ and to compare two methods of
solving the Cauchy problem (\ref{eq4}): the Euler method, which corresponds to
the iterative scheme (\ref{eq3}), and the Runge-Kutta method.
Let the sources of a gravitational field with a constant density $\rho$ be
distributed in the domain
$$
D=\{-l\leq t \leq l,\quad -H\leq z \leq -H+x(t)\},
$$
where $x(t)$ is an interface between two media, $l$ and $H$ are
parameters of the domain.
The potential $V$ of such a field is given by the double integral:
$$
V(t,z)=\frac{1}{2\pi}\int_D\int \rho\ln\frac{1}{\sqrt{(t-s)^2+
(z-\tau )^2}}dS =
$$
$$
=-\frac{\rho}{4\pi}\int\limits_{-l}^l ds\int\limits_{-H}^{-H+x(s)}\ln [(t-s)^2+
(z-\tau)^2]d\tau.
$$
For the $z$-component of the gravitational field one has
$$
-\frac{\partial V(t,z)}{\partial z} = -\frac{\rho}{4\pi}\int\limits_{-l}^l
ds\int\limits_{-H}^{-H+x(s)}
\frac{\partial }{\partial \tau}\ln [(t-s)^2+(z-\tau)^2]d\tau =
$$
$$
=\frac{\rho}{4
\pi}\int\limits_{-l}^l\ln\frac{(t-s)^2+(z+H)^2}{(t-s)^2+(z+H-x(s))^2} ds.
$$
In particular, on the surface $z=0$ we obtain the following nonlinear
operator equation
\begin{equation}\label{eq26}
\varphi(x)\equiv\frac{\rho}{4\pi}\int\limits_{-l}^lK(t,s,x(s))ds-y(t)=0,
\end{equation}
where
$$
K(t,s,x(s))=\ln \frac{(t-s)^2+H^2}
{(t-s)^2+(H-x(s))^2}.
$$
The gravity strength anomaly $y(t)=-\frac{\partial V(t,0)}{\partial z}$
is given and the interface between two media (with and without the sources
of a gravimetry field) $x(s)$ is to be determined.
Let $\varphi$ act between the pair of Hilbert spaces $H_1$ and $H_2.$ Assume that
$H_1=H^1 [-l,l]$ or $L_2 [-l,l]$ and $H_2 = L_2 [-l,l].$
The Fr\'echet derivative of this operator is given by
\begin{equation}\label{eq27}
\varphi^\pr(x)h=\frac{\rho}{4\pi}\int\limits_{-l}^l \frac{2(H-x(s))h(s)}{(t-s)^2+(H-x(s))^2} ds.
\end{equation}
For any fixed $x\in \{x\in L_2 [-l,l],x\leq H-\varepsilon ,
\varepsilon > 0\}$ the kernel
$$
K'_x(t,s,x(s))\equiv \frac{2(H-x(s))}{(t-s)^2+(H-x(s))^2}
$$
is a square integrable function on $[-l,l]\times[-l,l]$, therefore $\varphi^\pr(x)$
in (\ref{eq27}) is a compact linear operator in $L_2[-l,l]$.
This means that the operators $\varphi^\pr(x)$ and $\varphi^{\pr*}(x)\varphi^\pr(x)$
are not boundedly invertible. So, one cannot use classical iterative
schemes such as the Newton or Gauss--Newton method in the case of
equation (\ref{eq26}).
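For orientation, a rough discretization of the operator (\ref{eq26}) and its
derivative (\ref{eq27}) can be written as follows. This is a sketch assuming
NumPy; trapezoid weights are used for brevity where the experiments below use
Simpson's rule, and the grid size is an arbitrary choice.
\begin{verbatim}
# Discretization sketch for the gravimetry operator and its Frechet
# derivative; assumes NumPy.  Trapezoid weights replace Simpson's rule
# for brevity, and m = 101 is an arbitrary grid size.
import numpy as np

l, H, rho, m = 1.0, 2.0, 1.0, 101
s = np.linspace(-l, l, m)
w = np.full(m, 2.0*l/(m - 1)); w[0] *= 0.5; w[-1] *= 0.5
T, S = np.meshgrid(s, s, indexing='ij')

def forward(x):                      # the integral term defining phi
    K = np.log(((T - S)**2 + H**2) / ((T - S)**2 + (H - x[None, :])**2))
    return rho/(4.0*np.pi) * (K @ w)

def frechet(x):                      # matrix of phi'(x) on the grid
    Kp = 2.0*(H - x[None, :]) / ((T - S)**2 + (H - x[None, :])**2)
    return rho/(4.0*np.pi) * Kp * w[None, :]

x_mod = (1.0 - s**2)**2              # model interface
y = forward(x_mod)                   # synthetic data: phi(x_mod) = 0
\end{verbatim}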
We solve the Cauchy problem (\ref{eq4}) for $\varphi$ given by (\ref{eq26})
with some regularization function $\alpha(t)$. The problem is numerically solved by means
of two finite difference methods, namely, Euler's method
\begin{equation}\label{eq28}
x_{k+1}=x_k +\tau F(t_k,x_k),\quad x_0=x(0)
\end{equation}
and the Runge--Kutta method
\begin{equation}\label{eq29}
x_{k+\frac{1}{2}} = x_k +\frac{\tau}{2} F(t_k,x_k),
\end{equation}
$$
x_{k+1} = x_k+\tau F(t_{k+\frac{1}{2}},x_{k+\frac{1}{2}}),\quad x_0 = x(0).
$$
Here
$$
F(t_k,x_k)\equiv -[\varphi^{\pr*} (x_k)\varphi^\pr (x_k)+\alpha(t_k)I]^{-1}(\varphi^{\pr*} (x_k)\varphi (x_k)
+\alpha(t_k)(x_k-x_0))
$$
and a uniform step size $\tau > 0$ defines the node points
$$
t_k=k\tau,\quad k=0,1,\dots
$$
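In code, the two steppers read as follows (a sketch; $F$ is the right-hand
side defined above, evaluated on NumPy arrays):
\begin{verbatim}
# The two time-steppers used below, for a generic right-hand side F(t, x)
# acting on NumPy arrays.  Sketch only.
def euler_step(F, t, x, tau):
    return x + tau * F(t, x)

def rk2_step(F, t, x, tau):                   # midpoint Runge-Kutta
    x_half = x + 0.5 * tau * F(t, x)
    return x + tau * F(t + 0.5 * tau, x_half)
\end{verbatim}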
For the successful realization of the continuous Gauss-Newton method
an appropriate regularization function should be chosen. At the beginning
of the process values of $\alpha(t)$ should not be very small for the operator
$\varphi^{\pr*}(x(t))\varphi^\pr(x(t))+\alpha(t)I$ to be stably invertible and at the
same time $\alpha(t)$ should tend to zero sufficiently fast to ensure
convergence of the function $x(t)$ to the solution of problem (\ref{eq26}).
In the numerical experiments the functions $\alpha_0/(\beta+t)^m$, $\alpha_0e^{-\beta t}$
and $\alpha_02^{-\beta t}$ were used. The experiments have shown that
for all the considered functions the numerical solution is evaluated with
appropriate accuracy for sufficiently large range of the parameters
$m$, $\alpha_0$ and $\beta$. The following tables illustrate the dependence of
the accuracy of the numerical results on parameters
for the following data $l=1,\quad H=2, \quad \rho =1,\quad x_0=1$. For the numerical
tests the function $y(t)$ in (\ref{eq26}) was chosen as the solution
of the direct problem for the model function $x_{mod}(t)=(1-t^2)^2$. The integral in
(\ref{eq26}) was calculated by Simpson's formula with 201 node
points and a step size of $0.01$.
In the tables below $\Delta_E$ and $\Delta_R$ are the absolute errors,
$\sigma_E$ and $\sigma_R$ are the discrepancies $||\varphi(x(t))||$, see (\ref{eq26}),
of the Euler and the Runge-Kutta methods respectively. The first table shows the
dependence of the absolute errors and the discrepancies on the
regularization function.
\newpage
\centerline{Table 1.}
\vspace{0.5cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{$\alpha_0=0.1,\quad\tau=0.1$}\\
\hline
$\alpha(t)$ & N & $\Delta_E $ & $\sigma_E$ &$\Delta_R $ & $\sigma_R$\\
\hline
$\alpha_0(1+t)^{-2}$ & 38 & $0.26$ & $3.75\cdot10^{-2}$ & $0.27$
&$4.09\cdot10^{-2}$ \\
$\alpha_0(1+t)^{-4}$ & 25 & $0.25$ &$0.25$ & $0.11$
&$0.11$ \\
$\alpha_0(1+t)^{-6}$ & 43 &$0.12$ &$0.12$ &$1.75\cdot10^{-2}$ &
$2.03\cdot10^{-2}$ \\
$\alpha_0(1+t)^{-8}$ & 61 & $5.30\cdot10^{-2}$ & $2.87\cdot10^{-3}$
&$5.33\cdot10^{-2}$
&$3.58\cdot10^{-3}$ \\
$\alpha_0(1+t)^{-10}$ & 79 & $2.25\cdot10^{-2}$ & $4.90\cdot10^{-4}$
&$2.23\cdot10^{-2}$ & $6.22\cdot10^{-4}$ \\
$\alpha_02^{-3.5t}$ & 127 & $1.07\cdot10^{-2}$ & $1.14\cdot10^{-5}$
&$1.08\cdot10^{-2}$ & $3.03\cdot10^{-5}$ \\
$\alpha_0e^{-3.5t}$ & 85 & $1.08\cdot10^{-2}$ & $2.84\cdot10^{-4}$
& $1.08\cdot10^{-2}$ & $3.60\cdot10^{-4}$ \\
\hline
\end{tabular}
\vspace{1cm}
Next, assuming that the type of the function $\alpha(t)$ has been chosen, we analyze
the dependence of the absolute errors and the discrepancies on the
parameters $\beta$ (Tables 2 and 3) and $\alpha_0$ (Table 4).
\vspace{1.cm}
\centerline{Table 2.}
\vspace{0.5cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{$\alpha(t)=\alpha_0e^{-\beta t},\quad\alpha_0=0.1,\quad\tau=0.1$}\\
\hline
$\beta$ & N & $\Delta_E $ & $\sigma_E$ &$\Delta_R $ & $\sigma_R$\\
\hline
1 & 29 & 0.26 & $7.60\cdot10^{-2}$ &0.26 &$9.10\cdot10^{-2}$ \\
2 & 156 & $1.09\cdot10^{-2}$ &$4.58\cdot10^{-6}$ &$1.10\cdot10^{-2}$
&$1.52\cdot10^{-5}$
\\
3 &100 &$1.06\cdot10^{-2}$ &$5.78\cdot10^{-5}$ &$1.07\cdot10^{-2}$ &
$9.30\cdot10^{-5}$ \\
4 &73 &$1.15\cdot10^{-2}$ &$ 7.54\cdot10^{-4}$ &$1.14\cdot10^{-2}$
&$1.10\cdot10^{-3}$
\\
5 &56 &$1.52\cdot10^{-2}$ &$4.50\cdot10^{-3} $ &$1.50\cdot10^{-2}$ &
$6.12\cdot10^{-3}$ \\
6 &45 &$2.08\cdot10^{-2}$&$1.40\cdot10^{-2}$ &$2.17\cdot10^{-2}$
&$1.84\cdot10^{-2}$ \\
7 &37 &$3.90\cdot10^{-2}$ &$3.06\cdot10^{-2}$ &$3.89\cdot10^{-2}$ &
$3.80\cdot10^{-2}$ \\
8 &31 &0.11 &$5.82\cdot10^{-2}$ &$8.59\cdot10^{-2}$ & $6.52\cdot10^{-2}$ \\
9 &26 & 0.18 &$0.10$ &0.15 &0.11 \\
10 &23 &0.24 &0.14 &0.20 & 0.16 \\
\hline
\end{tabular}
\newpage
\centerline{Table 3.}
\vspace{0.5cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{$\alpha(t)=\alpha_0e^{-\beta t},\quad\alpha_0=0.1,\quad\tau=0.6$}\\
\hline
$\beta$ & N & $\Delta_E $ & $\sigma_E$ &$\Delta_R $ & $\sigma_R$\\
\hline
1 & 4 & 0.23 & $5.35\cdot10^{-2}$ &0.26 &0.12 \\
2 & 20& $4.13\cdot10^{-2}$ &$1.00\cdot10^{-3}$ &$1.09\cdot10^{-2}$
&$6.11\cdot10^{-5}$
\\
3 &16 &$8.90\cdot10^{-2}$ &$5.04\cdot10^{-2}$ &$1.05\cdot10^{-2}$ &
$1.73\cdot10^{-4}$ \\
4 &12 &$1.46\cdot10^{-2}$ &$ 1.29\cdot10^{-4}$ &$1.25\cdot10^{-2}$
&$1.79\cdot10^{-3}$
\\
5 &9 &$1.76\cdot10^{-2}$ &$7.14\cdot10^{-4} $ &$1.84\cdot10^{-2}$ &
$1.08\cdot10^{-2}$ \\
6 &7 &0.16&$3.85\cdot10^{-3}$ &$2.98\cdot10^{-2}$
&$3.22\cdot10^{-2}$ \\
7 &6 &$0.38$ &$2.58\cdot10^{-2}$ &$0.12$ &
$5.61\cdot10^{-2}$ \\
8 &5 &0.83 &$1.48$ &$0.18$ & $8.60\cdot10^{-2}$ \\
9 &4 & 0.83 &$1.48$ &0.21 &0.15 \\
10 &3 &0.83 &$1.48$ &0.23 & 0.19 \\
\hline
\end{tabular}
\vspace{1.cm}
\centerline{Table 4.}
\vspace{0.5cm}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{$\alpha(t)=\alpha_0e^{-\beta t},\quad\beta=3.5,\quad\tau=0.1$}\\
\hline
$\alpha_0$ & N & $\Delta_E $ & $\sigma_E$ &$\Delta_R$ & $\sigma_R$\\
\hline
$10^{-3}$ & 71 & $1.62\cdot10^{-2}$ & $9.77\cdot10^{-4}$ &$1.61\cdot10^{-2}$
&$1.33\cdot10^{-2}$ \\
$10^{-2}$ & 79& $1.33\cdot10^{-2}$ &$4.55\cdot10^{-4}$
&$2.29\cdot10^{-2}$ &$2.90\cdot10^{-4}$
\\
$10^{-1}$ &85 &$1.07\cdot10^{-2}$ &$2.84\cdot10^{-4}$
&$1.08\cdot10^{-2}$ &
$3.60\cdot10^{-4}$ \\
\hline
\end{tabular}
\vspace{1cm}
Analyzing the results of the numerical experiments (a part of them is included
in the Tables) one concludes the following:
\begin{description}
\item[(i)] The Runge-Kutta method is more stable with respect to changes of
the regularization function $\alpha(t)$ and the step size $\tau$, than the Euler method;
\item[(ii)] for all the considered functions suitable parameters can be chosen,
however in the case when $\alpha(t)=\alpha_0e^{-\beta t}$ the accuracy with which the solution
$x(t)$ is calculated is higher;
\item[(iii)] an appropriate range of values of the parameter $\alpha_0$ is from
0.001 to 0.1, for larger values the accuracy is lower, and for smaller values
the processes do not converge;
\item[(iv)] the range of appropriate values of $\beta$ is large enough: from 2 to
6 for $\tau=0.6$ and from 2 to 7 for $\tau=0.1$.
\end{description}
\newpage
\medskip
\noindent
{\bf Remark 4.1.\ }
The reason why result (i), formulated above, is emphasized can
be understood if one remembers that problem (\ref{eq26}) is ill-posed.
If a problem is ill-posed then the usage of a higher-order accuracy difference scheme
(or quadrature formula) may lead to less accurate results, as was observed in the
literature (see e.g. (\cite{mg}), p. 155). On the other hand, if a problem is
well-posed, then the usage of a higher-order accuracy scheme should lead to more
accurate results.
\bigskip
\noindent
{\bf Acknowledgments}
\\$\left. \right. $
\vspace*{-0.5pt}
The authors thank Dr. V.Protopopescu for useful remarks.
\newpage
\nonumsection{References}
\noindent
\medskip
| {
"timestamp": "1999-11-26T18:46:39",
"yymm": "9911",
"arxiv_id": "math-ph/9911038",
"language": "en",
"url": "https://arxiv.org/abs/math-ph/9911038",
"abstract": "A continuous analog of Gauss-Newton method for solving nonlinear ill-posed problems is proposed. Its converegence is proved. A numerical example is presented to demonstrate efficiency of the propsed method.",
"subjects": "Mathematical Physics (math-ph); Analysis of PDEs (math.AP); Numerical Analysis (math.NA)",
"title": "Continuous analog of Gauss-Newton method",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982287697148445,
"lm_q2_score": 0.7217432182679956,
"lm_q1q2_score": 0.7089594838049769
} |
https://arxiv.org/abs/2006.02685 | Phase transitions in non-linear urns with interacting types | We investigate reinforced non-linear urns with interacting types, and show that where there are three interacting types there are phenomena which do not occur with two types. In a model with three types where the interactions between the types are symmetric, we show the existence of a double phase transition with three phases: as well as a phase with an almost sure limit where each of the three colours is equally represented and a phase with almost sure convergence to an asymmetric limit, which both occur with two types, there is also an intermediate phase where both symmetric and asymmetric limits are possible. In a model with anti-symmetric interactions between the types, we show the existence of a phase where the proportions of the three colours cycle and do not converge to a limit, alongside a phase where the proportions of the three colours can converge to a limit where each of the three is equally represented. | \section{Introduction and definitions}
In general terms, an urn model is a system containing a number of particles of different types
(often regarded as balls of different colours, for ease of visualisation).
At each time step, a set of particles is sampled from the system, whose contents are then altered depending on the sample which was drawn.
Pemantle \cite{pemantlesurvey} surveys several ways to approach this model framework.
This paper is limited to models with a single urn from which a single ball is drawn, its colour is noted and it is then returned to the urn along with one new ball of that same colour.
In addition, we introduce a graph-based interaction according to which the probability of choosing a ball of a given colour is reinforced not only by its own proportion in the urn, but also by the proportions of balls of other colours.
Therefore, the interaction arises among balls of different colours, as opposed to the so-called interacting urn models consisting of systems of multiple urns (e.g. Bena\"{i}m et al \cite{BBCL15}, Launay and Limic \cite {LL12}) in which different urns (each containing balls of different colours) interact with one another. Our model is also different from the graph-based competition described in van der Hofstad et al \cite{van2016strongly}, where the colours correspond to edges of the graph, which compete with, as opposed to being reinforced by, other edges incident on the same vertices.
We now formally define our model. Consider an urn containing balls of $d$ colours.
The vector $x(n) = (x_1(n),\ldots, x_d(n)) \in \mathbb N^d$ denotes the number of balls of each colour at time $n=0,1,2,...$.
The strength of the reinforcement is given by a positive real number $\beta>0$ and we denote by $x^{\beta}(n)$ the coordinate-wise $\beta$ power of the column vector $x(n)$.
The interaction is defined as follows.
Given a non-negative matrix $A = (a_{ij})_{i,j=1}^d$, define the column vector
\begin{equation}
\label{eq:potential}
u(n) := A x^{\beta}(n)
\end{equation}
Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by the $x(m)$ for $0\leq m\leq n$ and write $u_i(n)$ for the $i$-th component of $u(n)$.
The transition probabilities are then
\begin{equation}
\label{eq:transition}
\mathbb P(x(n+1)-x(n)=\mathbf{e}_i \> | \> \mathcal{F}_n)=\frac{u_i(n)}{\sum_{j=1}^{d} u_j(n)}, \quad i=1,\ldots,d,
\end{equation}
where $\mathbf{e}_i$ is the unit vector in direction $i$. That is, one ball is added to the urn at each time step, and the right hand side of \eqref{eq:transition} gives the probability that it is of colour $i$.
Now, let $n_0$ be the initial number of balls so that at time $n$ the urn contains $n+n_0$ balls.
Then, the proportion of balls of each colour is a process in the $(d-1)$-dimensional simplex $\Delta^{d-1}$ given by the vector
\begin{equation}
\label{eq:proportion}
\bar{x}(n)=x(n)/(n+n_0).
\end{equation}
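A direct simulation of the model is straightforward; the following sketch
(assuming NumPy; the matrix $A$, the value of $\beta$, the initial contents
and the run length are illustrative choices) samples the urn according to
\eqref{eq:transition} and returns the proportions \eqref{eq:proportion}.
\begin{verbatim}
# Simulation sketch of the urn (assumes NumPy).  A, beta, x0 and n_steps
# are illustrative choices.
import numpy as np

def simulate_urn(A, beta, x0, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        u = A @ x**beta                      # the preference vector u(n)
        i = rng.choice(len(x), p=u/u.sum())  # transition probabilities
        x[i] += 1.0                          # add one ball of colour i
    return x / x.sum()                       # proportions of each colour

A = np.array([[1.0, 0.2, 0.2], [0.2, 1.0, 0.2], [0.2, 0.2, 1.0]])
print(simulate_urn(A, beta=2.0, x0=[1, 1, 1], n_steps=100000))
\end{verbatim}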
Assuming that $A$ is a multiple of the identity matrix (so that there is no interaction between the colours), it is well-known (see Oliveira \cite{RO07}) that the process $\bar{x}$ undergoes a phase transition as follows.
For $\beta < 1$, the process converges almost surely to the `centre' of the simplex, that is, the asymptotic proportions of balls of each colour are all the same.
For $\beta = 1$, commonly referred to as the P\'olya urn model, the process converges almost surely to a non-trivial random variable supported in the interior of the simplex.
For $\beta > 1$, the process converges almost surely to one of the corners of the simplex.
That a single type dominates in this case was proved by Khanin and Khanin \cite{khanin2001}, following on from the two-type case, which can be covered using Rubin's Theorem in Davis \cite{davis_rrw}.
For a two-colour urn model $d=2$ and symmetric interaction,
\[ A =
\left(
\begin{matrix}
1 & a \\
a & 1
\end{matrix}
\right), \quad a>0, \]
it was proved by the first author in Theorem 2.2.1 of \cite{MCthesis} that there is a phase transition as follows.
\begin{align}
& (i) \quad \text { if } \quad \left(\tfrac{1-a}{1+a}\right) \beta \leq 1, \quad \text{then} \quad \bar{x}(n) \rightarrow (\tfrac{1}{2},\tfrac{1}{2}) \quad a.s. \nonumber\\
& (ii) \quad \text {if } \quad \left(\tfrac{1-a}{1+a}\right) \beta > 1, \quad \text{then} \quad \bar{x}(n) \rightarrow \Psi \quad a.s., \nonumber
\end{align}
where $\Psi$ is a random vector supported on $\left\{ \left( \frac{1}{1+r}, \frac{r}{1+r} \right), \left( \frac{r}{1+r}, \frac{1}{1+r}\right)\right\}$
and $r:= r(a, \beta)$ is the unique root in $(0,1)$ of $\mathcal{P}_{a,\beta}(z) = az^{\beta+1}- z^{\beta}+ z -a = 0.$
In case (\emph{ii}), $\mathbb P[\bar{x}(n) \rightarrow (\frac{1}{2},\frac{1}{2})] = 0.$
Note that for $\beta=1$, the process $(u(n))_{n \geq 0}$ is a Friedman urn model and statement (\emph{i}) yields $u(n)/(u_1(n)+u_2(n)) \rightarrow (\frac{1}{2}, \frac{1}{2})$ $a.s.$ as expected.
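The root $r(a,\beta)$ is easy to locate numerically; a sketch (assuming SciPy;
the values of $a$ and $\beta$ are illustrative, chosen so that
$\left(\tfrac{1-a}{1+a}\right)\beta>1$):
\begin{verbatim}
# Locating the root r(a, beta) in (0, 1) of a z^{beta+1} - z^beta + z - a;
# assumes SciPy.  a = 0.2, beta = 3 gives ((1-a)/(1+a)) beta = 2 > 1.
from scipy.optimize import brentq

def P(z, a, beta):
    return a * z**(beta + 1) - z**beta + z - a

a, beta = 0.2, 3.0
r = brentq(P, 1e-9, 1.0 - 1e-6, args=(a, beta))
print(r)
\end{verbatim}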
In this paper, we follow up on the results for $d=2$ in \cite{MCthesis}, with the aim of generalising from $d=2$ to larger values of $d$ and seeing whether more types of behaviour emerge when this is done. We show that this is indeed the case when $d=3$, where we consider two particular choices of $A$.
First of all, we consider a choice of $A$ with a symmetric interaction of the same strength $a$ for each pair of colours. The following theorem shows that in this system there are three phases, as opposed to two when $d=2$; there are phases where there is almost sure convergence to a symmetric limit and where there is almost sure convergence to one of a number of asymmetric limits, which are analogues of the phases when $d=2$, but there is also an intermediate phase where both symmetric and asymmetric limits are possible.
\renewcommand{\labelenumi}{(\roman{enumi})}
\begin{thm}\label{sym3main}
Let $A$ be the matrix
\begin{equation}
\begin{pmatrix} 1 & a & a \\ a & 1 & a \\ a & a & 1 \end{pmatrix}, \quad a > 0.
\end{equation}\begin{enumerate}
\item Fix $a<1$. Then there exists $\beta_1(a)$ satisfying $1<\beta_1(a)<\frac{1+2a}{1-a}$, with $\beta_1(a)$ an increasing function of $a$ satisfying $\beta_1(a)\to\infty$ as $a\to 1$, and we have the following three phases.
\begin{enumerate}
\item \label{symlim} \emph{Symmetric limit almost surely.} If $\beta<\beta_1(a)$, then almost surely $\bar{x}(n) \to \left(\frac13,\frac13,\frac13\right)$.
\item \label{bothlim} \emph{Symmetric or asymmetric limit.} If $\beta_1(a)<\beta<\frac{1+2a}{1-a}$ then there exists $r_2>1$ such that almost surely $\bar{x}(n)$ converges to one of the four points in $\Delta^2$ given by $\left(\frac13,\frac13,\frac13\right)$, $\left(\frac{r_{2}}{2+r_{2}},\frac{1}{2+r_{2}},\frac{1}{2+r_{2}}\right)$, $\left(\frac{1}{2+r_{2}},\frac{r_{2}}{2+r_{2}},\frac{1}{2+r_{2}}\right)$ and $\left(\frac{1}{2+r_{2}},\frac{1}{2+r_{2}},\frac{r_{2}}{2+r_{2}}\right)$. All of these points have positive probability of being limits.
\item \label{asymlim} \emph{Asymmetric limit almost surely.} If $\beta>\frac{1+2a}{1-a}$ then there exists $r_{+}>1$ such that almost surely $\bar{x}(n)$ converges to one of the three points in $\Delta^2$ given by $\left(\frac{r_{+}}{2+r_{+}},\frac{1}{2+r_{+}},\frac{1}{2+r_{+}}\right)$, $\left(\frac{1}{2+r_{+}},\frac{r_{+}}{2+r_{+}},\frac{1}{2+r_{+}}\right)$ and $\left(\frac{1}{2+r_{+}},\frac{1}{2+r_{+}},\frac{r_{+}}{2+r_{+}}\right)$.
\end{enumerate}
\item Fix $a\geq 1$. Then almost surely $\bar{x}(n) \to \left(\frac13,\frac13,\frac13\right)$.
\end{enumerate}
\end{thm}
Theorem \ref{sym3main} presents the results in terms of phase transitions in $\beta$ with $a$ fixed. However, because both $\beta_1(a)$ and $\frac{1+2a}{1-a}$ are increasing functions of $a$ which converge to $1$ as $a\to 0$ and to $\infty$ as $a\to 1$, it is also possible to see them as phase transitions in $a$ with $\beta>1$ fixed: if $a<\frac{\beta-1}{2+\beta}$ then we will be in case (c), if $\frac{\beta-1}{2+\beta}<a<\beta_1^{-1}(\beta)$ then we will be in case (b), and if $a>\beta_1^{-1}(\beta)$ we will be in case (a).
We also consider a system where each colour is reinforced by itself and by one other, in a cyclic way. For this system, the following theorem shows the existence of a phase transition between a phase with convergence with positive probability to a symmetric limit and a phase where there is no convergence to a limit and there is cycling behaviour.
\begin{thm}\label{cyclicmain}Let $A$ be the matrix
$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}.$$
\begin{itemize} \item When $\beta<4$ there is positive probability that $\bar{x}(n)\to (1/3,1/3,1/3)$. \item When $\beta>4$, almost surely $\bar{x}(n)$ fails to converge, and the limit set is either a periodic orbit or a connected union of periodic orbits.\end{itemize}\end{thm}
In Section \ref{stochapprox} we discuss the stochastic approximation methods we use in the proofs, while the proofs themselves are in Section \ref{symmetric} for Theorem \ref{sym3main} and Section \ref{cyclic} for Theorem \ref{cyclicmain}. In the final Section \ref{ex_sim}, we illustrate the results with some examples and simulations, including some examples beyond those covered by Theorems \ref{sym3main} and \ref{cyclicmain}.
\section{Stochastic approximation approach} \label{stochapprox}
In this section we introduce some of the stochastic approximation ideas which appear in our proofs.
For a general matrix $A$ and a given configuration of balls $x(n)$ at time $n$, let $i_{n+1} \in \{1,\ldots,d\}$ be the random colour of the ball to be added in the urn at time $n+1$.
Then, note that
\begin{align}
\bar{x}(n+1) & = \frac{x(n) + {\bf e}_{i_{n+1}}}{n_0 + n+1} = \frac{(n_0+n )\bar{x}(n) + {\bf e}_{i_{n+1}}}{n_0+n+1} \nonumber \\
& = \left( 1 -\frac{1}{n_0+n+1} \right)\bar{x}(n) + \frac{{\bf e}_{i_{n+1}}}{n_0+n+1},
\end{align}
implying
\begin{equation}
\label{eq:SAA}
\bar{x}(n+1) - \bar{x}(n) = \frac{1}{n_0+n+1}({\bf e}_{i_{n+1}} - \bar{x}(n)).
\end{equation}
The idea here is to rearrange the right hand side of \eqref{eq:SAA}
into a deterministic part and a zero mean ``noise''.
More specifically, let
\begin{equation}
\label{eq:VFF}
F(\bar{x}(n)) := \mathbb E[ {\bf e}_{i_{n+1}} \, | \, \mathcal{F}_n] - \bar{x}(n),
\end{equation}
and
\begin{equation}
\label{eq:NOISE}
\xi_{n+1} := {\bf e}_{i_{n+1}} - \mathbb E[ {\bf e}_{i_{n+1}} \, | \, \mathcal{F}_n].
\end{equation}
By setting $\gamma_n = 1/(n_0+n+1)$, we obtain
\begin{equation}
\label{eq:SAAX}
\bar{x}(n+1) - \bar{x}(n) = \gamma_n(F(\bar{x}(n)) + \xi_{n+1}).
\end{equation}
The above sequence can be thought of as a numerical approximation method with varying step size $\gamma_n$ for solving the ODE $dx/dt = F(x)$.
For small enough $\gamma_n$ and under mild conditions, the asymptotic behaviour of $(\bar{x}(n))_{n \in \mathbb N}$ and that of the underlying ODE are closely connected.
This is called the \emph{ODE method} or \emph{the dynamical system approach}, which alongside some probabilistic techniques, is applied to examine almost sure dynamics of stochastic approximation processes.
In case $F$ is a gradient-like vector field and in the presence of a strict Lyapunov function whose set of critical values has empty interior,
Bena\"im \cite{Ben96, benaim} shows that the limit set $l(\bar{x}, \omega) := \bigcap_{t \geq 0}\overline{\{\bar{x}(s, \omega) \> : \> s \geq t\}}$
is almost surely a connected subset of the equilibria for the flow induced by the underlying vector field $F:\mathbb R^d\to\mathbb R^d$.
In our model, we have
$$F_i(x)=\frac{u_i}{\sum_{j=1}^d u_j}-x_i, \quad i=1,\ldots,d,$$
where $u_i$ has the same relationship to $x$ as $u_i(n)$ to $x(n)$.
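In code, the ODE method amounts to integrating $dx/dt=F(x)$; a sketch
(assuming SciPy; the matrix, the value of $\beta$ and the starting point on
the simplex are illustrative choices):
\begin{verbatim}
# Integrating dx/dt = F(x) for the urn vector field; assumes NumPy/SciPy.
# A, beta and the starting point (on the simplex) are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def F(t, x, A, beta):
    u = A @ x**beta
    return u / u.sum() - x

A = np.array([[1.0, 0.2, 0.2], [0.2, 1.0, 0.2], [0.2, 0.2, 1.0]])
sol = solve_ivp(F, (0.0, 100.0), [0.5, 0.3, 0.2], args=(A, 2.0), rtol=1e-9)
print(sol.y[:, -1])   # an equilibrium of F; compare with the urn simulation
\end{verbatim}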
\section{Proofs for the symmetric case}\label{symmetric}
In this section we prove Theorem \ref{sym3main}.
Throughout this section we let $A$ be the matrix
\begin{equation}
\label{eq:matrix}
\begin{pmatrix} 1 & a & a \\ a & 1 & a \\ a & a & 1 \end{pmatrix}, \quad a > 0.
\end{equation}
In this case the vector field $F$ is given by
\begin{eqnarray}F_1(x_1,x_2,x_3) &=& \frac{x_1^{\beta}+ax_2^{\beta}+ax_3^{\beta}}{(1+2a)(x_1^{\beta}+x_2^{\beta}+x_3^{\beta})}-x_1 \label{eq:1}\\
F_2(x_1,x_2,x_3) &=& \frac{ax_1^{\beta}+x_2^{\beta}+ax_3^{\beta}}{(1+2a)(x_1^{\beta}+x_2^{\beta}+x_3^{\beta})}-x_2 \label{eq:2}\\
F_3(x_1,x_2,x_3) &=& \frac{ax_1^{\beta}+ax_2^{\beta}+x_3^{\beta}}{(1+2a)(x_1^{\beta}+x_2^{\beta}+x_3^{\beta})}-x_3 \label{eq:3}\end{eqnarray}
We will first prove almost sure convergence, and then characterise the possible limits.
\begin{lem}\label{point}
The limit set of the process $(\bar{x}(n))_{n\in\mathbb N}$, defined in \eqref{eq:proportion} with given matrix \eqref{eq:matrix}, will, almost surely, be a single point which is a stationary point of $F$.
\end{lem}
\begin{proof}
Let
\begin{equation}
\label{eq:lyapunov}
L(x_1,x_2,x_3) = -(x_1+x_2+x_3) + \frac{1}{2a+1} \left[ a\log(x_1x_2x_3) - \frac{1}{\beta}(a-1)\log(x_1^{\beta}+x_2^{\beta}+x_3^{\beta}) \right].
\end{equation}
Then $L$ is a strict Lyapunov function for $F$.
In fact, straightforward differentiation gives $$\frac{\partial L}{\partial x_i} = \frac{1}{x_i} F_i.$$
Then, denoting an integral curve of $F$ by $x(t) = (x_1(t), x_2(t), x_3(t))$, we obtain
\[ \frac{d (L \circ x)}{d t} = \sum_{i=1}^3\frac{\partial L }{\partial x_i}\frac{d x_i}{d t} = \sum_{i=1}^3x_i\left( \frac{\partial L}{\partial x_i}\right)^2 \geq 0,\]
where the equality holds in the above inequality if and only if $F(x)=0$.
From standard results on stochastic approximation in the presence of a strict Lyapunov function, for example Corollary 6.6 of Bena\"{i}m \cite{benaim}, the limit set of $(\bar{x}(n))_{n\in\mathbb N}$ will almost
surely be a connected set of stationary points of $F$.
In this case, $F$ has no connected sets of stationary points other than single points, so the limit set must be a single point, and these points are the stationary points of $F$.
\end{proof}
Note that this Lyapunov function generalises in an obvious way to more than three types, as long as all off-diagonal entries are equal.
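The identity $\partial L/\partial x_i = F_i/x_i$ used in the proof can be
spot-checked symbolically; a sketch (assuming SymPy):
\begin{verbatim}
# Symbolic spot-check (assumes SymPy) of dL/dx_1 = F_1 / x_1 for generic
# a > 0 and beta > 0.
import sympy as sp

x1, x2, x3, a, b = sp.symbols('x1 x2 x3 a beta', positive=True)
S = x1**b + x2**b + x3**b
L = -(x1 + x2 + x3) + (a*sp.log(x1*x2*x3) - (a - 1)/b*sp.log(S))/(2*a + 1)
F1 = (x1**b + a*x2**b + a*x3**b)/((1 + 2*a)*S) - x1
print(sp.simplify(sp.diff(L, x1) - F1/x1))   # expect 0
\end{verbatim}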
We now need to investigate the stationary points of the vector field $F$. Noting that the lines $x_1=x_2$, $x_1=x_3$ and $x_2=x_3$ are each invariant under $F$, define the function
\begin{equation}
\label{eq:pol}
\mathcal{P}_{a,\beta}(z) = az^{\beta+1} - z^{\beta} + (1+a)z -2a,
\end{equation}
which we will see is related to the dynamics restricted to one of these lines. The following result shows that all stationary points of $F$ are located on at least one of these lines and expresses them in terms of solutions to $\mathcal{P}_{a,\beta}(z)=0$.
\begin{prop}\label{eqcoord}
All stationary points $(x_1,x_2,x_3)$ of $F$ have at least two of $x_1, x_2, x_3$ equal, and are of one of the forms $(\frac{r}{r+2},\frac{1}{r+2},\frac{1}{r+2})$, $(\frac{1}{r+2},\frac{r}{r+2},\frac{1}{r+2})$ or $(\frac{1}{r+2},\frac{1}{r+2},\frac{r}{r+2})$, with $r$ a solution of $\mathcal{P}_{a,\beta}(z)=0$ in $\mathbb R^+$.
Furthermore, there are at most three possible values of $r$, one of which is always $1$, corresponding to the stationary point $(\frac13,\frac13,\frac13)$.
\end{prop}
\begin{proof}
We start off by showing that any stationary point has at least two co-ordinates equal. We do this by writing the stationary point in the form $(x,rx,sx)$ and showing that one of $r=1$, $s=1$ or $r=s$ must hold.
Rearranging \eqref{eq:1}, \eqref{eq:2} and \eqref{eq:3} at $(x,rx,sx)$ gives \begin{eqnarray}\label{rex} x &=& \frac{1+a(r^{\beta}+s^{\beta})}{(2a+1)(1+r^{\beta}+s^{\beta})} \\ \label{rey} rx &=& \frac{r^{\beta}+a(1+s^{\beta})}{(2a+1)(1+r^{\beta}+s^{\beta})} \\ \label{rez} sx &=& \frac{s^{\beta}+a(1+r^{\beta})}{(2a+1)(1+r^{\beta}+s^{\beta})}.\end{eqnarray}
It follows that \begin{eqnarray}\label{req} r^{\beta}+a(1+s^{\beta})=r(1+a(r^{\beta}+s^{\beta})) \\ \label{seq} s^{\beta}+a(1+r^{\beta})=s(1+a(r^{\beta}+s^{\beta})).\end{eqnarray}
Take the linear combination $(s+\frac{1}{a})\times$\eqref{req}$-(r+1)\times$\eqref{seq}. This eliminates $s^{\beta}$ and $s^{\beta+1}$, giving \begin{equation}\label{rearr3}(r^{\beta}+a)s-ar(1+r^{\beta})=(ar^{\beta}+1)s-\left(a(1+r^{\beta})+\frac{r}{a}-\frac{r^{\beta}}{a}+r^{\beta+1}-1\right),\end{equation} which can be rearranged to give \begin{equation}\label{rearr4}s(r^{\beta}-1)(a-1)=(r^{\beta+1}-1)(1-a)+(r^{\beta}-r)\left(a-\frac{1}{a}\right).\end{equation} Assuming $a\neq 1$, \eqref{rearr4} gives $r=1$ or
\begin{equation}\label{sfromr}s=\frac{a(r^{\beta}+1)(1-r)+r^{\beta}-r}{a(r^{\beta}-1)}=\frac{\mathcal{P}_{a,\beta}(r)-ar^{\beta}+a}{a(1-r^{\beta})}.\end{equation}
Using this form for $s$ in \eqref{seq} gives (if $r\neq 1$) \begin{equation}\label{sbfromr}s^{\beta}=\frac{-ar^{\beta+1}+r^{\beta}+a-r}{a(r-1)}=\frac{\mathcal{P}_{a,\beta}(r)}{a(1-r)}+1.\end{equation} Combining \eqref{sfromr} and \eqref{sbfromr} tells us that either $r=1$ or $s=1$ or $$\frac{s^{\beta}-1}{s-1}=\frac{r^{\beta}-1}{r-1},$$ and the latter case implies $r=s$. Hence any stationary point has two co-ordinates equal.
We now assume, without loss of generality, that the stationary point is of the form $(x,x,rx)$ or equivalently $\left(\frac{1}{r+2},\frac{1}{r+2},\frac{r}{r+2}\right)$. That the stationary point equations for a point of this form imply $\mathcal{P}_{a,\beta}(r)=0$ is easy to check, and it is also easy to check that $\mathcal{P}_{a,\beta}(1)=0$ for any $a,\beta>0$.
The function $\mathcal{P}_{a,\beta}$ satisfies $\mathcal{P}_{a,\beta}(0)<0$ and $\mathcal{P}_{a,\beta}(z)\to \infty$ as $z\to\infty$; furthermore it is concave for $z<\frac{\beta-1}{a(\beta+1)}$ and convex for $z>\frac{\beta-1}{a(\beta+1)}$, which indicates that it has either one root or three in $\mathbb R^{+}$, counting multiplicity. This completes the proof.
\end{proof}
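The positive roots of $\mathcal{P}_{a,\beta}$, which the next proposition
classifies, can also be located numerically; a rough sketch (assuming SciPy;
sign-change detection on a grid can miss double roots, and the sample values
of $a$ and $\beta$ are illustrative):
\begin{verbatim}
# Counting the positive roots of P_{a,beta}(z) = a z^{beta+1} - z^beta
# + (1+a) z - 2a by sign changes on a grid; assumes NumPy/SciPy.
# Double roots may be missed; a = 0.5 and the two betas are illustrative.
import numpy as np
from scipy.optimize import brentq

def P(z, a, beta):
    return a*z**(beta + 1) - z**beta + (1 + a)*z - 2*a

def positive_roots(a, beta, zmax=10.0, n=20000):
    z = np.linspace(1e-9, zmax, n)
    v = P(z, a, beta)
    return [brentq(P, z[i], z[i + 1], args=(a, beta))
            for i in range(n - 1) if v[i]*v[i + 1] < 0]

print(positive_roots(0.5, 2.0))  # one root, z = 1
print(positive_roots(0.5, 5.0))  # three roots: beta > (1+2a)/(1-a) = 4
\end{verbatim}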
\begin{prop}\label{Proots} Consider $\mathcal{P}_{a,\beta}(z) = az^{\beta+1} - z^{\beta} + (1+a)z -2a$ for $z\in \mathbb R^+$, with $a,\beta > 0$.
\begin{enumerate}
\item For a given value of $a>1$, $\mathcal{P}_{a,\beta}$ has only one root at $z=1$.\label{a>1}
\item For a given value of $a<1$, there exists $\beta_1(a)$ satisfying $\frac{1+2a}{1-a}>\beta_1(a)>1$ such that
\begin{enumerate}
\item If $\beta>\frac{1+2a}{1-a}$ then $\mathcal{P}_{a,\beta}'(1)<0$ and we have that $\mathcal{P}_{a,\beta}$ has three roots in $\mathbb R^+$, $1$, $r_{-}$ and $r_{+}$, labelled so that $r_{-}<1<r_{+}$. As functions of $\beta$ for fixed $a$, $r_{+}$ is increasing and $r_{-}$ is decreasing.
\item If $\beta_1(a)<\beta<\frac{1+2a}{1-a}$ then $\mathcal{P}_{a,\beta}'(1)>0$ and $\mathcal{P}_{a,\beta}$ has three roots in $\mathbb R^+$, $1$, $r_1$ and $r_2$, labelled so that $1<r_1<r_2$. As functions of $\beta$ for fixed $a$, $r_{2}$ is increasing and $r_{1}$ is decreasing.
\item If $\beta<\beta_1(a)$ then $\mathcal{P}_{a,\beta}'(1)>0$ and the only root of $\mathcal{P}_{a,\beta}$ in $\mathbb R^+$ is $1$.
\end{enumerate}
Furthermore $\beta_1(a)$ is an increasing function of $a$ with $\beta_1(a) \to\infty$ as $a\to 1$.
\end{enumerate}
\end{prop}
\begin{proof}
\emph{(i)} First, if $\beta<1$ we have that $\mathcal{P}_{a,\beta}''(z)>0$ in $\mathbb R^+$ implying that $\mathcal{P}_{a,\beta}'(z)$ is strictly increasing.
Moreover, $\mathcal{P}_{a,\beta}'(z)$ goes from $-\infty$ to $+\infty$ when $z$ ranges from $0$ to $+\infty$ and $\mathcal{P}_{a,\beta}'(1) > 0 $.
Then $\mathcal{P}_{a,\beta}'(z)$ changes sign only once at some $z^*<1$.
Now, since $\mathcal{P}_{a,\beta}(0) = -2a < 0$ and $\mathcal{P}_{a,\beta}(z)$ is decreasing for $z<z^*<1$ and increasing otherwise, it follows that $\mathcal{P}_{a,\beta}(z)$ crosses zero only once, at $z=1$.
Second, the same happens for $\beta > 1$ since $\mathcal{P}_{a,\beta}'(z) > 0$ in $\mathbb R^+$ and so $\mathcal{P}_{a,\beta}(z)$ is strictly increasing.
The case $\beta = 1 $ is trivial.
\emph{(ii)} If $\beta>\frac{1+2a}{1-a}$ we have $\mathcal{P}_{a,\beta}'(1)<0$; since $\mathcal{P}_{a,\beta}(0)<0$, $\mathcal{P}_{a,\beta}(1)=0$ and $\mathcal{P}_{a,\beta}(z)\to\infty$, this forces a root in $(0,1)$ and a root in $(1,\infty)$, so in this case $\mathcal{P}_{a,\beta}$ must have three roots.
Note that if $\beta=\frac{1+2a}{1-a}$ then $\mathcal{P}_{a,\beta}'(1)=0$ but that $\mathcal{P}_{a,\beta}''(1)<0$, showing that this is a double root, not a triple root, and so there must be another root in that case for larger $z$.
If $\beta<\frac{1+2a}{1-a}$ then $\mathcal{P}_{a,\beta}'(1)>0$ and $\mathcal{P}_{a,\beta}$ has no root less than $1$.
Then, there must be either none, one double, or two distinct additional roots greater than 1.
The derivative of $\mathcal{P}_{a,\beta}(z)$ with respect to $\beta$, for fixed $a$ and $z$, is $(az-1)(\log z)z^{\beta}$. Thus, for any $z\in(1,1/a)$, $\mathcal{P}_{a,\beta}(z)$ is decreasing in $\beta$, and it follows that if there are roots of $\mathcal{P}_{a,\beta}$ in this range for a particular value of $\beta$ there must also be roots there for any larger $\beta$. As $\mathcal{P}_{a,\beta}(z)>0$ if $z\geq 1/a$, this shows that there exists $\beta_1(a)\in[1,\frac{1+2a}{1-a}]$ such that there is one root of $\mathcal{P}_{a,\beta}$ when $\beta<\beta_1(a)$ and three when $\beta>\beta_1(a)$.
Let $\beta_0(a)>\frac{2}{1-a}-1>1$ be the unique solution to \begin{equation}\label{beta0}1+a=\left(\left(1-\frac{2}{\beta+1}\right)\frac{1}{a}\right)^{\beta-1}.\end{equation} (It can be seen that \eqref{beta0} has a unique solution for fixed $a<1$: in that case the right hand side equals $1$ at $\beta=1$, tends to $\infty$ as $\beta\to\infty$, and is increasing in $\beta$ wherever it exceeds $1$. That $\beta_0(a)>\frac{2}{1-a}-1$ can be seen by noting that if $\beta$ is a solution of \eqref{beta0} we must have $\left(1-\frac{2}{\beta+1}\right)\frac{1}{a}>1$.) Then if $\beta<\beta_0(a)$ we have $\mathcal{P}_{a,\beta}'(z)>0$ for all $z$ and hence $\mathcal{P}_{a,\beta}$ is increasing in $z$ and so $z=1$ is the only root. This shows that $\beta_1(a) \geq \beta_0(a)>1$.
Now, the fact that as mentioned above there is a root greater than $1$ when $\beta=\frac{1+2a}{1-a}$ together with the continuity of $\mathcal{P}_{a,\beta}(z)$ in $\beta$ ensures that there remains a root greater than $1$ for $\beta\in\left(\frac{1+2a}{1-a}-\epsilon,\frac{1+2a}{1-a}\right)$ for some $\epsilon>0$, so $\beta_1(a)<\frac{1+2a}{1-a}$.
The claims that $r_{+}$ and $r_2$ are increasing functions of $\beta$ and that $r_1$ is a decreasing function of $\beta$ also follow from the negative derivative of $\mathcal{P}_{a,\beta}(z)$ with respect to $\beta$ for $z\in(1,1/a)$. Similarly the claim that $r_{-}$ is a decreasing function of $\beta$ follows from the derivative of $\mathcal{P}_{a,\beta}(z)$ with respect to $\beta$ being positive on $(0,1)$.
To see that $\beta_1(a)$ is an increasing function of $a$, note that for fixed $z>1$ the derivative of $\mathcal{P}_{a,\beta}(z)$ with respect to $a$ is positive, meaning that if we are in case (c) for particular choices of $a$ and $\beta$ we will also be in case (c) for the same value of $\beta$ and any larger value of $a$. That $\beta_1(a)\to \infty$ as $a\to 1$ follows from $\beta_0(a)>\frac{2}{1-a}-1$.
\end{proof}
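The transition value $\beta_1(a)$ is defined only implicitly, but it can be bracketed numerically. The sketch below is ours (grid sizes are ad hoc, and accuracy degrades near the transition, where the two extra roots coalesce); it counts sign changes of $\mathcal{P}_{a,\beta}$ on $(0,1/a)$, where all positive roots lie by the proof above, and bisects in $\beta$ between the bounds $1$ and $\frac{1+2a}{1-a}$ from the proposition.
\begin{verbatim}
# Hedged numerical sketch: bracket beta_1(a) for a < 1 by bisecting on
# the number of positive roots of P_{a,beta} (one below beta_1(a),
# three above).

def count_roots(a, beta, n=100000):
    P = lambda z: a * z**(beta + 1) - z**beta + (1 + a) * z - 2 * a
    z_max = 1 / a                    # all positive roots lie in (0, 1/a)
    zs = [z_max * (i + 0.5) / n for i in range(n)]
    return sum(P(lo) * P(hi) < 0 for lo, hi in zip(zs, zs[1:]))

def beta1(a, iters=40):
    lo, hi = 1.0, (1 + 2 * a) / (1 - a)   # bounds from the proposition
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if count_roots(a, mid) >= 3:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(beta1(0.2))   # expect a value strictly between 1 and 1.75
\end{verbatim}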
We now investigate the stability of these roots. We will consider a stationary point $p$ of $F$ to be stable if it is a local maximum of the Lyapunov function $L$, and to be unstable if it is a saddle point or local minimum of $L$. Furthermore, if all eigenvalues of $DF(p)$ have negative real part, $p$ is said to be linearly stable, while if some eigenvalue has positive real part, $p$ is said to be linearly unstable.
\begin{prop}\label{stable}
If a stationary point for $F$ is of the form $(x,x,rx)$ or $(x,rx,x)$ or $(rx,x,x)$, then it is linearly stable if $\mathcal{P}_{a,\beta}'(r)>0$ and $\frac{r^{\beta}+2}{r+2}>\frac{\beta(1-a)}{2a+1}$, and linearly unstable if either $\mathcal{P}_{a,\beta}'(r)<0$ or $\frac{r^{\beta}+2}{r+2}<\frac{\beta(1-a)}{2a+1}$.\end{prop}
\begin{proof}
Without loss of generality we focus on the case $(x,x,rx)$.
We note that the differential equation driven by $F$ keeps the line $x_1=x_2$ invariant, so we consider it restricted to this line; the equation for $F_3$ gives $$F_3\left(\frac{1-x_3}{2},\frac{1-x_3}{2},x_3\right)=\frac{2a\left(\frac{1-x_3}{2}\right)^{\beta}+x_3^{\beta}}{(1+2a)\left(2\left(\frac{1-x_3}{2}\right)^{\beta}+x_3^{\beta}\right)}-x_3.$$ Let $x_3=z/(z+2)$ so that $x_1=x_2=1/(z+2)$. Then $$F_3\left(\frac{1-x_3}{2},\frac{1-x_3}{2},x_3\right)=\frac{-2\mathcal{P}_{a,\beta}(z)}{(1+2a)(2+z^{\beta})(z+2)},$$ and so is positive when $\mathcal{P}_{a,\beta}(z)$ is negative and vice versa. Hence a stationary point $(x,x,rx)$ is stable in this direction if $\mathcal{P}_{a,\beta}'(r)>0$ and unstable in this direction if $\mathcal{P}_{a,\beta}'(r)<0$.
Because $F$ is symmetric in $x_1$ and $x_2$, the other direction in which we need to consider stability will be perpendicular to this one. Hence we consider
\begin{equation}\label{estability}
\begin{split}
F_1(x+\epsilon,x-\epsilon,rx) &= \frac{(x+\epsilon)^{\beta}+a(x-\epsilon)^{\beta}+ar^{\beta}x^{\beta}}{(2a+1)((x+\epsilon)^{\beta}+(x-\epsilon)^{\beta}+r^{\beta}x^{\beta})}-x-\epsilon \\
&= F_1(x,x,rx)+\epsilon\left(-1+\frac{\beta(1-a)}{(2a+1)(2+r^{\beta})x}\right)+o(\epsilon) \\
&= F_1(x,x,rx)+\epsilon\left(-1+\frac{\beta(1-a)(r+2)}{(2a+1)(2+r^{\beta})}\right)+o(\epsilon).
\end{split}
\end{equation}
It follows that $(x,x,rx)$ is a stable stationary point in the direction perpendicular to the line $x_1=x_2$ if $\frac{r^{\beta}+2}{r+2}>\frac{\beta(1-a)}{2a+1}$ and unstable in that direction if the reverse inequality applies.
\end{proof}
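The two inequalities in Proposition \ref{stable} are straightforward to evaluate numerically, using $\mathcal{P}_{a,\beta}'(z)=a(\beta+1)z^{\beta}-\beta z^{\beta-1}+(1+a)$. The following sketch is ours and purely illustrative.
\begin{verbatim}
# Hedged sketch: classify a stationary point (x, x, rx), r a root of
# P_{a,beta}, via the two conditions of the proposition: P'(r) > 0
# (stability along x1 = x2) and
# (r^beta + 2)/(r + 2) > beta(1-a)/(2a+1) (perpendicular stability).
from math import sqrt

def classify(r, a, beta):
    dP = a * (beta + 1) * r**beta - beta * r**(beta - 1) + (1 + a)
    perp = (r**beta + 2) / (r + 2) - beta * (1 - a) / (2 * a + 1)
    if dP > 0 and perp > 0:
        return "linearly stable"
    if dP < 0 or perp < 0:
        return "linearly unstable"
    return "degenerate (zero eigenvalue)"

# Roots for a = 0.2, beta = 2 (in closed form later in the paper):
for r in [1.0, (0.8 - sqrt(0.32)) / 0.4, (0.8 + sqrt(0.32)) / 0.4]:
    print(r, classify(r, 0.2, 2))   # expect unstable, unstable, stable
\end{verbatim}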
We shall henceforth restrict ourselves to the case $a<1$ since Propositions \ref{eqcoord}, \ref{Proots}(\ref{a>1}) and \ref{stable} imply that if $a>1$, $(\frac13,\frac13,\frac13)$ is the only stable stationary point for $F$ and by Corollary \ref{point}, $(\bar{x}(n))_{n \in \mathbb N}$ must converge to it. The case $a=1$ has the probabilities of each colour being $1/3$ regardless of $\bar{x}(n)$ and so it is easily seen that $\bar{x}(n)\to (1/3,1/3,1/3)$ almost surely.
\begin{cor}\label{criteria}
Assume $a<1$.
\begin{enumerate}
\item If $\beta<\beta_1(a)$, then the stationary point $(\frac13,\frac13,\frac13)$ is stable, and is the limit with probability $1$.
\item If $\beta_1(a)<\beta<\frac{1+2a}{1-a}$, then the stationary points $(\frac13,\frac13,\frac13)$ and $\left(\frac{1}{r_{2}+2},\frac{1}{r_{2}+2},\frac{r_2}{r_{2}+2}\right)$ (and its permutations) are stable, while the stationary point $\left(\frac{1}{r_{1}+2},\frac{1}{r_{1}+2},\frac{r_1}{r_{1}+2}\right)$ and its permutations are linearly unstable.
\item If $\beta>\frac{1+2a}{1-a}$, then there are three stationary points of $F$ of the form $(x,x,rx)$ corresponding to the three solutions $r_{-}<1<r_{+}$ of $\mathcal{P}_{a,\beta}(z)=0$ in $\mathbb R^+$. The stationary points $(\frac13,\frac13,\frac13)$ and $\left(\frac{1}{r_{-}+2},\frac{1}{r_{-}+2},\frac{r_{-}}{r_{-}+2}\right)$ (and its permutations) are linearly unstable, while $\left(\frac{1}{r_{+}+2},\frac{1}{r_{+}+2},\frac{r_{+}}{r_{+}+2}\right)$ and its permutations are stable.
\end{enumerate}
\end{cor}
\begin{proof}
\emph{(i)}
Stability follows from Proposition \ref{stable}, and almost sure convergence from Corollary \ref{point}.
\emph{(ii)}
The shape of $\mathcal{P}_{a,\beta}$ ensures that $r_1,r_2>1$ and that $\mathcal{P}_{a,\beta}'(r_1)<0$ and $\mathcal{P}_{a,\beta}'(r_2)>0$, showing that $\left(\frac{1}{r_{1}+2},\frac{1}{r_{1}+2},\frac{r_1}{r_{1}+2}\right)$ is unstable, and that for the other two stationary points we just need to check the stability perpendicular to the line $x_1=x_2$. But $\frac{r_2^{\beta}+2}{r_2+2}>1>\frac{\beta(1-a)}{2a+1}$ by our assumption on $\beta$, so the condition from Proposition \ref{stable} is satisfied and so $(\frac13,\frac13,\frac13)$ and $\left(\frac{1}{r_{2}+2},\frac{1}{r_{2}+2},\frac{r_2}{r_{2}+2}\right)$ are stable.
\emph{(iii)}
That $\mathcal{P}_{a,\beta}(z)=0$ has three solutions in $\mathbb R^+$ follows from Proposition \ref{Proots}. As $\mathcal{P}_{a,\beta}'(1)<0$ it follows from Proposition \ref{stable} that $(\frac13,\frac13,\frac13)$ is unstable, and as $r_{-}<1$ we have $\frac{r_{-}^{\beta}+2}{r_{-}+2}<1<\frac{\beta(1-a)}{2a+1}$, so $\left(\frac{1}{r_{-}+2},\frac{1}{r_{-}+2},\frac{r_{-}}{r_{-}+2}\right)$ is also unstable. The global maximum of the Lyapunov function on $\Delta^2$ must be a stable stationary point of $F$, so the remaining stationary points, $\left(\frac{1}{r_{+}+2},\frac{1}{r_{+}+2},\frac{r_{+}}{r_{+}+2}\right)$ and its permutations, must be stable.
\end{proof}
Since our process $(\bar{x}(n))_{n \in \mathbb N}$ is in some sense a discrete version of the differential flow $dx(t)/dt = F(x(t))$, we would like to determine whether the stochastic process can converge to a given critical point, in terms of what type of critical point it is for the associated flow.
A subset $A$ of the phase space is called an attractor for a flow $\Phi$ if it is a nonempty, compact and invariant set having a neighbourhood $W$ such that the distance $d(\Phi_t(x), A) \rightarrow 0$ as $t \rightarrow \infty$ uniformly in $x \in W$.
Now, let $p$ be a stationary point of a smooth vector field $F$. If $p$ is a stable fixed point of $F$, then $p$ is an attractor for the flow induced by $F$.
The following result completes the proof of Theorem \ref{sym3main}.
\begin{prop}\label{convergence} Let $l(\bar{x})$ denote the limit set of the process $(\bar{x}(n))_{n \in \mathbb N}$ defined in \eqref{eq:SAAX} and recall the stability criteria in Proposition \ref{stable}. Then we have
\begin{enumerate}
\item $\mathbb P[l(\bar{x}) = \{p\}] > 0$ for stable points $p$ of $F$.
\item $\mathbb P[l(\bar{x}) = \{p\}] = 0$ for linearly unstable points $p$ of $F$.
\end{enumerate}
\end{prop}
\begin{proof}
Without loss of generality we focus on the case $(x,x,rx)$.
\emph{(i)} Let us now show that the process $\bar{x}$ in fact converges with positive probability toward a given attractor.
Of course, it is necessary that the process has positive probability of being arbitrarily close to the attractor at arbitrarily large times.
That is, a point $p$ is said to be attainable by a process $X$ if for each $t>0$ we have that $\mathbb P[\exists ~ s \geq t ~:~ X(s) \in N_p] > 0$ for every neighbourhood $N_p$ of $p$.
It turns out that if the function $F + Id$ associated with an urn process $X$ maps the simplex into its interior, it follows that every point of the simplex is attainable by $X$.
This is indeed the case for our process $\bar{x}$.
Finally, Bena\"{i}m \cite{benaim} Theorem 7.3 ensures that a given attainable attractor $p$ with non-empty basin of attraction is such that $\mathbb P[l(\bar{x}) = \{p\}] > 0$.
\emph{(ii)} Let $p$ be a linearly unstable critical point and $\mathcal{N}_p \subset \Delta^2$ a neighbourhood of $p$.
The simplex is considered as a differential manifold by identifying its tangent space at any point with the linear subspace
$T\Delta^2 = \{x \in \mathbb R^3 \> : \> \sum_i x_i = 0\}$.
In our case, the only non-trivial condition ensuring Pemantle's non-convergence criteria (see \cite{pemantle1990}) is that we have positive expectation of the positive part of the component of the noise in any given direction.
Formally, we need that whenever $\bar{x}_n \in \mathcal{N}_p$, there is a constant $\kappa$ such that
$\mathbb E[\max\{\xi_{n+1} \cdot \theta,0\} \> | \> \mathcal{F}_n] \geq \kappa$
for every unit vector $\theta \in T\Delta^2$.
For notational simplicity, write $\tilde{u}_i = u_i(n)/(\sum_j u_j(n))$ and note that
\begin{align}\label{eq:drift}
\mathbb E[\max\{\xi_{n+1} \cdot \theta,0\} \> | \> \mathcal{F}_n] =
& \tilde{u}_1 \max\{\theta_1(1-\tilde{u}_1)-\theta_2 \tilde{u}_2-\theta_3 \tilde{u}_3,0\} + \nonumber \\
& \tilde{u}_2 \max\{-\theta_1 \tilde{u}_1+\theta_2(1-\tilde{u}_2)-\theta_3 \tilde{u}_3,0\} + \nonumber \\
& \tilde{u}_3 \max\{-\theta_1 \tilde{u}_1-\theta_2 \tilde{u}_2+\theta_3(1-\tilde{u}_3),0\}.
\end{align}
Now, write $\theta = (\theta_1,\theta_2,\theta_3)$ and suppose that $\theta_i < 0$ for exactly two coordinates.
Then it is simple to find a positive term in \eqref{eq:drift}: for instance, if $\theta_2,\theta_3<0<\theta_1$, then the first term is positive.
On the other hand, suppose that $\theta_i < 0$ for exactly one coordinate, say $\theta_3 < 0$.
Then, depending on which of the inequalities $\theta_1 \geq \theta_2 \geq 0$ or $\theta_2 \geq \theta_1 \geq 0$ holds, the first or the second term in \eqref{eq:drift} is positive.
Finally, to prove that it is in fact uniformly positive, it is enough that $\theta$ is a unit vector in $T\Delta^2$ and that all stationary points $p$ are such that $\tilde{u}_i$ is uniformly positive in a neighbourhood $\mathcal{N}_p$ of $p$ for $i=1,2,3$.
\end{proof}
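The uniform positivity used above can be checked numerically at any fixed interior point. The sketch below is ours; the point $(\tilde{u}_1,\tilde{u}_2,\tilde{u}_3)=(0.5,0.3,0.2)$ is an arbitrary choice, and the sampled minimum estimates the constant $\kappa$.
\begin{verbatim}
# Hedged numerical sketch: sample the right-hand side of (eq:drift) over
# unit vectors theta in {theta in R^3 : sum_i theta_i = 0} at a fixed
# interior point u, and report the sampled minimum.
import math, random

def drift(u, theta):
    u1, u2, u3 = u
    t1, t2, t3 = theta
    return (u1 * max(t1 * (1 - u1) - t2 * u2 - t3 * u3, 0)
            + u2 * max(-t1 * u1 + t2 * (1 - u2) - t3 * u3, 0)
            + u3 * max(-t1 * u1 - t2 * u2 + t3 * (1 - u3), 0))

# Orthonormal basis of the tangent plane; theta runs over its unit circle.
e1 = (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0)
e2 = (1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6))
u = (0.5, 0.3, 0.2)                    # arbitrary interior point
vals = []
for _ in range(100000):
    phi = random.uniform(0, 2 * math.pi)
    theta = tuple(math.cos(phi) * a + math.sin(phi) * b
                  for a, b in zip(e1, e2))
    vals.append(drift(u, theta))
print(min(vals))                       # expect a strictly positive value
\end{verbatim}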
\section{Proofs for the cyclic case}\label{cyclic}
\subsection{Introduction}
In this section we prove Theorem \ref{cyclicmain}. Here $A$ is the matrix
$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix},$$ and we have \begin{eqnarray*}F_1(x_1,x_2,x_3) &=& \frac{x_1^{\beta}+x_2^{\beta}}{2(x_1^{\beta}+x_2^{\beta}+x_3^{\beta})}-x_1\\
F_2(x_1,x_2,x_3) &=& \frac{x_2^{\beta}+x_3^{\beta}}{2(x_1^{\beta}+x_2^{\beta}+x_3^{\beta})}-x_2\\
F_3(x_1,x_2,x_3) &=& \frac{x_3^{\beta}+x_1^{\beta}}{2(x_1^{\beta}+x_2^{\beta}+x_3^{\beta})}-x_3.\end{eqnarray*}
First, we note that for any choice of $\beta$, $(\frac13,\frac13,\frac13)$ is a stationary point of $F$.
The following two results give information on its stability and show that it is in fact the only stationary point.
\begin{lem}\label{stability}For the vector field $F$, the stationary point at $(\frac13,\frac13,\frac13)$ is stable if $\beta<4$, and a linearly unstable source if $\beta>4$. \end{lem}
\begin{proof}
As we are working with $x\in\Delta^2$, write $x_3=1-x_1-x_2$. Routine calculus then shows that the Jacobian matrix at $(\frac13,\frac13,\frac13)$ is $$\begin{pmatrix}\frac{\beta}{2}-1 & \frac{\beta}{2} \\ -\frac{\beta}{2} & -1 \end{pmatrix}.$$
The eigenvalues of this Jacobian are then the roots $\lambda$ of $$\lambda^2+\lambda\left(2-\frac{\beta}2\right)-\left(\frac{\beta}2-1\right)+\left(\frac{\beta}{2}\right)^2$$
which are $$\left(\frac{\beta}{4}-1\right)\pm \frac{i\sqrt{3}\beta }{4}.$$ The result follows.
\end{proof}
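The routine calculus above can also be verified symbolically. The following sketch is ours, assuming the standard \texttt{sympy} library; it computes the Jacobian in the chart $(x_1,x_2)\mapsto(x_1,x_2,1-x_1-x_2)$ at the symmetric point.
\begin{verbatim}
# Hedged symbolic check of the Jacobian and eigenvalues computed above.
import sympy as sp

x1, x2, b = sp.symbols('x1 x2 beta', positive=True)
x3 = 1 - x1 - x2
S = x1**b + x2**b + x3**b
F = sp.Matrix([(x1**b + x2**b) / (2 * S) - x1,
               (x2**b + x3**b) / (2 * S) - x2])
J = F.jacobian([x1, x2]).subs({x1: sp.Rational(1, 3),
                               x2: sp.Rational(1, 3)})
print(sp.simplify(J))      # expect [[beta/2 - 1, beta/2], [-beta/2, -1]]
print([sp.simplify(ev) for ev in J.eigenvals()])
                           # expect beta/4 - 1 +- i*sqrt(3)*beta/4
\end{verbatim}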
\begin{lem}\label{statpoint}The only stationary point of $F$ in $\Delta^2$ is $(\frac13,\frac13,\frac13)$.\end{lem}
\begin{proof}
For $x=(x_1,x_2,x_3)\in \Delta^2$ with $F_1(x)=F_2(x)=F_3(x)=0$, we have $x_1=\frac{x_1^{\beta}+x_2^{\beta}}{x_2^{\beta}+x_3^{\beta}}x_2$ and similarly $x_2=\frac{x_2^{\beta}+x_3^{\beta}}{x_3^{\beta}+x_1^{\beta}}x_3$ and $x_3=\frac{x_3^{\beta}+x_1^{\beta}}{x_1^{\beta}+x_2^{\beta}}x_1$. Using this, $$x_1-x_2=x_2\frac{x_1^{\beta}-x_3^{\beta}}{x_2^{\beta}+x_3^{\beta}},$$ indicating that (if $x_2>0$) if $x_1>x_2$ then also $x_1>x_3$, while if $x_1<x_2$ then $x_1<x_3$. Similarly, if $x_3>0$ then the signs of $x_2-x_3$ and $x_2-x_1$ are the same, and if $x_1>0$ then the signs of $x_3-x_1$ and $x_3-x_2$ are the same. Hence the only stationary point of $F$ in the interior of $\Delta^2$ is $(\frac13,\frac13,\frac13)$.
It is also easy to check that if $x_1=0$ then $x_2=0$, and similarly that if $x_2=0$ then $x_3=0$ and if $x_3=0$ then $x_1=0$. Hence there are no stationary points of $F$ on the boundary of $\Delta^2$.
\end{proof}
We can now complete the proof of Theorem \ref{cyclicmain}.
That we only have one stationary point, and that it is never a saddle, restricts the possibilities for chain transitive sets. In two dimensions, Theorem 6.12 of Bena\"{i}m \cite{benaim} states that chain transitive sets must be unions of stationary points, periodic orbits and cyclic orbit chains. However, with only one stationary point, which is not a saddle, cyclic orbit chains are impossible. We can thus conclude that the limit set must be a connected union of periodic orbits and stationary points.
If $\beta<4$ then by Lemma \ref{stability} the stationary point $(\frac13,\frac13,\frac13)$ is stable, and hence is an attractor for the flow given by $F$. By Theorem 7.3 of Bena\"{i}m \cite{benaim}, to show that there is positive probability of convergence to $(\frac13,\frac13,\frac13)$ it is enough to show that it is an attainable point, that is that the process has positive probability of being arbitrarily close to $(\frac13,\frac13,\frac13)$ at arbitrarily large times. This is straightforward to show: for any $\epsilon>0$ there will be points of the form $\left(\frac{n_1}{n},\frac{n_2}{n},\frac{n_3}{n}\right)$ with $n_1,n_2,n_3$ integers satisfying $n_1+n_2+n_3=n$ and $\max\left\{\left|\frac{n_1}{n}-\frac13\right|,\left|\frac{n_2}{n}-\frac13\right|,\left|\frac{n_3}{n}-\frac13\right|\right\}<\epsilon$ for arbitrarily large $n$. There will be positive probability of any such point being reached, so $(\frac13,\frac13,\frac13)$ is indeed attainable.
If $\beta>4$, then by Lemma \ref{stability} $(\frac13,\frac13,\frac13)$ is linearly unstable, so as in Proposition \ref{convergence} it will follow that it is a limit with probability zero if we have positive expectation of the positive part of the component of the noise in any given direction. In fact, the same argument as in Proposition \ref{convergence} will work here, so we can conclude that $\bar{x}(n)$ has probability zero of converging to a stationary point. It follows that the limit set must be a periodic orbit or a connected union of periodic orbits, completing the proof of Theorem \ref{cyclicmain}.
\section{Examples and simulations} \label{ex_sim}
In this section we consider some examples, including some where exact calculations are possible, and some simulations. We also consider some examples which go beyond the cases covered by Theorems \ref{sym3main} and \ref{cyclicmain}.
\subsection{The symmetric case with $\beta=2$}
In the case of Theorem \ref{sym3main} with $\beta=2$, the possible limits and the phase transitions can be explicitly identified. We find that $\mathcal{P}_{a,2}(z)=(z-1)(az^2+(a-1)z+2a)$, with roots given by $z=1$ and $z=\frac{1-a \pm\sqrt{1-2a-7a^2}}{2a}$. If $a< \frac{\sqrt{8}-1}{7}$, then these are real and positive, and letting $\lambda_1=\frac{3a+1-\sqrt{1-2a-7a^2}}{4(2a+1)}, \lambda_2=\frac{3a+1+\sqrt{1-2a-7a^2}}{4(2a+1)}, \lambda_3=\frac{a+1-\sqrt{1-2a-7a^2}}{2(2a+1)}$ and $\lambda_4=\frac{a+1+\sqrt{1-2a-7a^2}}{2(2a+1)}$, we obtain linearly stable stationary points $(\lambda_1,\lambda_1,\lambda_4)$, $(\lambda_1,\lambda_4,\lambda_1)$ and $(\lambda_4,\lambda_1,\lambda_1)$, and stationary points $(\lambda_2,\lambda_2,\lambda_3)$, $(\lambda_2,\lambda_3,\lambda_2)$, $(\lambda_3,\lambda_2,\lambda_2)$ which are linearly unstable (except at $a=\frac14$). In addition, the stationary point $(\frac13,\frac13,\frac13)$ is linearly stable if $a>\frac14$, and linearly unstable if $a<\frac14$.
If $a>\frac{\sqrt{8}-1}{7}$, then $(\frac13,\frac13,\frac13)$ is the only stationary point, and is stable. In the notation of Theorem \ref{sym3main}, we have $\beta_1\left(\frac{\sqrt{8}-1}{7}\right)=2$, and the three phases are as follows:
\begin{figure}[h]
\caption{20 simulations of the symmetric model for $\beta=2$}\label{fig:simulations-sym}
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=5cm, height=5cm]{Rplot11}
\caption{$a=0.2$}\label{ap2}
\label{fig:sub1}
\end{subfigure}%
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=5cm, height=5cm]{Rplot22}
\caption{$a=0.26$}\label{ap26}
\label{fig:sub2}
\end{subfigure}%
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=5cm, height=5cm]{Rplot33}
\caption{$a=0.5$}\label{ap5}
\label{fig:sub3}
\end{subfigure}
\end{figure}
\begin{itemize}
\item When $a<\frac14$, $(\frac13,\frac13,\frac13)$ is unstable; there are three other stationary points, $(\lambda_1,\lambda_1,\lambda_4)$ and permutations, placed symmetrically, which are stable. For example when $a=\frac15$, $(0.1847,0.1847,0.6306)$ and permutations are stable. Almost surely, one of these three points will be the limit. A simulation of 20 trajectories of the stochastic process in this case appears in Figure \ref{ap2}.
\item For $\frac14<a<\frac{\sqrt{8}-1}{7}$, $(\frac13,\frac13,\frac13)$ is now stable but there are also stable stationary points elsewhere, near $(\frac14,\frac14,\frac12)$. In this case, both symmetric and asymmetric limits have positive probability. For example, at $a=0.26$, there are stable stationary points at $(0.2792,0.2792,0.4416)$ and permutations as well as $(\frac13,\frac13,\frac13)$. A simulation of 20 trajectories of the stochastic process in this case appears in Figure \ref{ap26}.
\item For $a>\frac{\sqrt{8}-1}{7}$, $(\frac13,\frac13,\frac13)$ is the only stationary point, and is stable, and will be the limit almost surely. A simulation of 20 trajectories of the stochastic process in the $a=\frac12$ case appears in Figure \ref{ap5}.
\end{itemize}
At the critical value $a=\frac14$, $(\frac13,\frac13,\frac13)=(\lambda_2,\lambda_2,\lambda_3)$ has both eigenvalues of its Jacobian equal to zero and so is neither linearly stable nor linearly unstable, while there are stable stationary points at $(\frac14,\frac14,\frac12)$ and permutations; similarly at the critical value $a=\frac{\sqrt{8}-1}{7}$ the stationary point $(\lambda_2,\lambda_2,\lambda_3)=(\lambda_1,\lambda_1,\lambda_4)$ is neither linearly stable nor linearly unstable.
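As a quick numerical check of the closed forms above, the $\lambda_i$ are easy to evaluate directly; the following sketch is ours and purely illustrative.
\begin{verbatim}
# Hedged numerical check of the beta = 2 closed-form stationary points.
from math import sqrt

def lambdas(a):
    d = sqrt(1 - 2 * a - 7 * a**2)     # real only for a < (sqrt(8)-1)/7
    l1 = (3 * a + 1 - d) / (4 * (2 * a + 1))
    l2 = (3 * a + 1 + d) / (4 * (2 * a + 1))
    l3 = (a + 1 - d) / (2 * (2 * a + 1))
    l4 = (a + 1 + d) / (2 * (2 * a + 1))
    return l1, l2, l3, l4

for a in (0.2, 0.26):
    l1, l2, l3, l4 = lambdas(a)
    print(a, (l1, l1, l4), (l2, l2, l3))
# expect (0.1847, 0.1847, 0.6306) at a = 0.2
# and (0.2792, 0.2792, 0.4416) at a = 0.26
\end{verbatim}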
\subsection{The symmetric case with $\beta=3$}
It is also possible to do some explicit calculations when $\beta=3$. In this case $(\frac13,\frac13,\frac13)$ is linearly stable when $a>\frac25$ and linearly unstable when $a<\frac25$, and we have $$\mathcal{P}_{a,3}(z)=(z-1)(az^3+(a-1)z^2+(a-1)z+2a),$$ where the cubic factor has one real root (which is negative) when $a>a_c=\frac{1}{166}\left(3\sqrt{2}\,3^{1/4} + 24\sqrt{3} + 13\sqrt{2}\,3^{3/4}-20\right) \approx 0.4160306$ and three real roots (one of which is negative) when $a<a_c$. (In the notation of Theorem \ref{sym3main}, $\beta_1(a_c)=3$.) Hence for $a>a_c$ we get almost sure convergence to $(\frac13,\frac13,\frac13)$, for $a<\frac25$ we get almost sure convergence to one of three asymmetric stationary points, and for $\frac25<a<a_c$ the process may converge either to $(\frac13,\frac13,\frac13)$ or to an asymmetric stationary point, each with positive probability.
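The quoted value of $a_c$ can be checked numerically. The sketch below is ours, assuming the standard \texttt{numpy} library; it evaluates $a_c$ and counts the real roots of the cubic factor on either side of it.
\begin{verbatim}
# Hedged numerical check of a_c for beta = 3.
from math import sqrt
import numpy as np

a_c = (3 * sqrt(2) * 3**0.25 + 24 * sqrt(3)
       + 13 * sqrt(2) * 3**0.75 - 20) / 166
print(a_c)                                 # approx 0.4160306

# Count real roots of a z^3 + (a-1) z^2 + (a-1) z + 2a on either side of
# a_c (the tolerance on imaginary parts is an ad hoc choice).
for a in (0.41, 0.42):
    r = np.roots([a, a - 1, a - 1, 2 * a])
    print(a, sum(abs(z.imag) < 1e-6 for z in r))   # expect 3, then 1
\end{verbatim}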
\subsection{The cyclic model}
Figure \ref{cyclic3} shows 20 simulations of the cyclic model when $\beta=3$, showing convergence to $(\frac13,\frac13,\frac13)$.
Figure \ref{cyclic6} shows 20 simulations with $\beta=6$, showing apparent convergence to a single limit cycle.
\begin{figure}[h]
\caption{20 simulations of the cyclic model}\label{fig:simulations-cyclic}
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=5cm, height=5cm]{Rplot44}
\caption{$\beta=3$}\label{cyclic3}
\label{fig:sub1-cyclic}
\end{subfigure}%
\centering
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[width=5cm, height=5cm]{Rplot55}
\caption{$\beta=6$}\label{cyclic6}
\label{fig:sub2-cyclic}
\end{subfigure}%
\end{figure}
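For readers who wish to reproduce such pictures, the following is a minimal simulation sketch of ours. It assumes, consistently with the drift $F$ above (we do not restate \eqref{eq:SAAX} here), that at each step one ball of colour $i$ is added with probability proportional to $\sum_j A_{ij}x_j^{\beta}$; the initial counts are an arbitrary choice.
\begin{verbatim}
# Hedged simulation sketch of the cyclic urn (assumed dynamics: reinforce
# colour i with probability proportional to sum_j A[i][j] * x_j^beta).
import random

def simulate(beta, steps=200000, counts=(1.0, 1.0, 1.0)):
    A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   # cyclic interaction matrix
    counts = list(counts)
    for _ in range(steps):
        total = sum(counts)
        x = [c / total for c in counts]
        w = [sum(A[i][j] * x[j]**beta for j in range(3)) for i in range(3)]
        i = random.choices(range(3), weights=w)[0]
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

print(simulate(beta=3))   # expect proportions near (1/3, 1/3, 1/3)
print(simulate(beta=6))   # typically ends away from the centre (cycling)
\end{verbatim}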
\subsection{The symmetric case with more than three types}
It is natural to extend Section \ref{symmetric} to $d>3$ types, letting $A$ be the $d\times d$ matrix with $a_{ii}=1$ for each $i$ and $a_{ij}=a$ for $i\neq j$. It is straightforward to extend the Lyapunov function \eqref{eq:lyapunov} to this case, meaning that Corollary \ref{point} applies. However, the later calculations, starting with Proposition \ref{eqcoord}, do not apply. It may thus be interesting to investigate whether more complex patterns of phases can occur in this case than when $d=3$; however, numerical solution of the stationary point equations for particular values of $a$ when $d=4,5,6$ suggests that the behaviour is in fact very similar to the $d=3$ case, with three phases which parallel those found in Theorem \ref{sym3main}.
\subsection{A more general cyclic case}
It is natural to extend Section \ref{cyclic} by allowing the matrix $A$ to take the form $$\begin{pmatrix} 1 & a & 0 \\ 0 & 1 & a \\ a & 0 & 1\end{pmatrix},$$ allowing for different strengths of the cyclic reinforcement. It is straightforward to extend Lemma \ref{stability} to this case, showing that the stationary point at $(\frac13,\frac13,\frac13)$ is linearly stable if $a\geq 2$ or if $a<2$ and $\beta<\frac{2(1+a)}{2-a}$, and linearly unstable if $a<2$ and $\beta>\frac{2(1+a)}{2-a}$. However Lemma \ref{statpoint} does not apply for general $a$ and there may be other stationary points.
Numerical investigation when $\beta=2$ suggests that there are three phases in $a$: in addition to a phase with almost sure convergence to $(\frac13,\frac13,\frac13)$ and a phase with convergence to a limit cycle, there is a phase up to $a \approx 0.25057$ in which there are stable stationary points other than $(\frac13,\frac13,\frac13)$, and the process usually converges to one of these.
\section*{Acknowledgements}
M.C.'s research was supported by CONICET [grant number 10520170300561CO].
The authors are grateful to Andrew Wade for suggestions that improved the presentation.
\bibliographystyle{plain}
| {
"timestamp": "2020-06-05T02:09:39",
"yymm": "2006",
"arxiv_id": "2006.02685",
"language": "en",
"url": "https://arxiv.org/abs/2006.02685",
"abstract": "We investigate reinforced non-linear urns with interacting types, and show that where there are three interacting types there are phenomena which do not occur with two types. In a model with three types where the interactions between the types are symmetric, we show the existence of a double phase transition with three phases: as well as a phase with an almost sure limit where each of the three colours is equally represented and a phase with almost sure convergence to an asymmetric limit, which both occur with two types, there is also an intermediate phase where both symmetric and asymmetric limits are possible. In a model with anti-symmetric interactions between the types, we show the existence of a phase where the proportions of the three colours cycle and do not converge to a limit, alongside a phase where the proportions of the three colours can converge to a limit where each of the three is equally represented.",
"subjects": "Probability (math.PR)",
"title": "Phase transitions in non-linear urns with interacting types",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877049262134,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594835393414
} |
https://arxiv.org/abs/2102.02140 | Optimally reconnecting weighted graphs against an edge-destroying adversary | We introduce a model involving two adversaries Buster and Fixer taking turns modifying a connected graph, where each round consists of Buster deleting a subset of edges and Fixer responding by adding edges from a reserve set of weighted edges to leave the graph connected. With the weights representing the cost for Fixer to use specific reserve edges to reconnect the graph, we provide a reasonable definition for what should constitute an optimal strategy for Fixer to keep the graph connected for as long as possible as cheaply as possible, and prove that a greedy strategy for Fixer satisfies our conditions for optimality. | \section{\@startsection{section}{1}{\z@}
{-30pt \@plus -1ex \@minus -.2ex}
{2.3ex \@plus.2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{2}{\z@}
{-3.25ex\@plus -1ex \@minus -.2ex}
{1.5ex \@plus .2ex}
{\normalfont\normalsize\bfseries}}
\renewcommand{\@seccntformat}[1]{\csname the#1\endcsname. }
\makeatother
\begin{document}
\begin{center}
\uppercase{\bf Optimally reconnecting weighted graphs against an edge-destroying adversary}
\vskip 20pt
{\bf Daniel C. McDonald}\\
{\smallit Wolfram Research Inc., Champaign, Illinois}\\
{\tt daniel.cooper.mcdonald@gmail.com}\\
\end{center}
\vskip 30pt
\centerline{\bf Abstract}
\noindent
We introduce a model involving two adversaries Buster and Fixer taking turns modifying a connected graph, where each round consists of Buster deleting a subset of edges and Fixer responding by adding edges from a reserve set of weighted edges to leave the graph connected. With the weights representing the cost for Fixer to use specific reserve edges to reconnect the graph, we provide a reasonable definition for what should constitute an optimal strategy for Fixer to keep the graph connected for as long as possible as cheaply as possible, and prove that a greedy strategy for Fixer satisfies our conditions for optimality.
\pagestyle{myheadings}
\thispagestyle{empty}
\baselineskip=12.875pt
\vskip 30pt
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{case}[thm]{Case}
\theoremstyle{definition}
\newtheorem*{ack}{Acknowledgment}
\newcommand{\mc}[1]{\ensuremath{\mathbb{#1}}}
\newcommand{\es}[3]{\ensuremath{#1^{\mathbb{#2}}_{#3}}}
\section{Introduction}
\label{intro}
Suppose a network must stay connected in the face of some adversary that periodically destroys subsets of its edges, with the network being reconnected after each attack by adding replacement edges, each of which has its own cost. Beyond the requirement that each individual selection of replacement edges must reconnect the network, ideally taken together these selections should, in some sense, keep the network connected for as long as possible as cheaply as possible. In Subsection \ref{model} we introduce a graph-theoretic framework to formally model this situation, including a definition of optimality for selection of replacement edges, and in Subsection \ref{example} we give an example of the model in action. In Subsection \ref{statement} we state our main theorem, that a greedy strategy for selecting replacement edges is in fact optimal, and in Subsection \ref{future} we compare and contrast our model to the well-studied Maker-Breaker game as well as outline some directions for future research. Using tools developed in Section \ref{prelim}, we prove our main theorem in Section \ref{proof}.
\subsection{Description of Model}
\label{model}
To initialize an instance of our model, we are given a finite multigraph $G$ (for this paper, all vertex sets of multigraphs will be unambiguous, so we view multigraphs simply as multisets of edges) and a finite multiset $R$ of weighted ``reserve'' edges between vertices of $G$. Each edge $r\in R$ has some nonnegative, finite weight $w(r)$; for $R'\subseteq R$, define $w(R')=\sum _{r\in R'}w(r)$.
There are two parties in this model: a positive actor Fixer and an antagonist Buster. A multigraph is to be first ``busted'' by Buster and then ``fixed'' by Fixer in each of a series of rounds. Each series will be given a name such as \mc{S}. The $k$th round of series \mc{S} begins with a finite multigraph \es{G}{S}{k} on the vertex set $V$ and a multiset \es{R}{S}{k} of weighted ``reserve'' edges in the complete graph on $V$. Start with $\es{G}{S}{1}=G$ and $\es{R}{S}{1}=R$.
Buster begins the $k$th round by removing some nonempty multiset \es{B}{S}{k} of edges from the current multigraph \es{G}{S}{k}. If adding all of the remaining multiset \es{R}{S}{k} of reserve edges to $\es{G}{S}{k}-\es{B}{S}{k}$ does not result in a connected multigraph, then we say \emph{Buster wins in the $k$th round}, and for convenience set $\es{F}{S}{k}=\emptyset$ (Fixer cannot reconnect the graph no matter how much she spends on reserve edges, so she might as well not spend anything at all). Otherwise, Fixer responds by creating a connected multigraph from $\es{G}{S}{k}-\es{B}{S}{k}$ by adding a (potentially empty) multiset \es{F}{S}{k} of edges from the remaining multiset \es{R}{S}{k} of reserve edges. In either case, set $\es{G}{S}{k+1}=(\es{G}{S}{k}-\es{B}{S}{k})\cup\es{F}{S}{k}$ and $\es{R}{S}{k+1}=\es{R}{S}{k}-\es{F}{S}{k}$. If Buster does not win in the $k$th round, he has the option to quit (for instance, in a real-life scenario, external factors could prevent Buster from continuing), in which case we say \emph{Fixer wins in the $k$th round}. If either Fixer or Buster wins in the $k$th round, denote it by $|\mc{S}|=k$.
If \mc{S} is a series satisfying $|\mc{S}|\geq k$, then a \emph{Fixer strategy to continue \mc{S} after the $k$th round} is a set $\phi$ of series satisfying the following:
\begin{enumerate}
\item The series consisting solely of the first $k$ rounds of \mc{S} belongs to $\phi$.
\item If $\mc{T}\in\phi$, Fixer wins \mc{T} in the $j$th round, and $B\subseteq \es{G}{T}{j+1}$ is nonempty, then there exists exactly one series $\mc{U}\in\phi$ such that $\es{B}{U}{i}=\es{B}{T}{i}$ and $\es{F}{U}{i}=\es{F}{T}{i}$ for $1\leq i\leq j$, $\es{B}{U}{j+1}=B$, and $|\mc{U}|=j+1$.
\item All series in $\phi$ are identical to \mc{S} through the first $k$ rounds.
\item If \mc{T} and \mc{U} are in $\phi$, with $\es{B}{T}{i}=\es{B}{U}{i}$ for $1\leq i\leq j$ and $\es{F}{T}{i}=\es{F}{U}{i}$ for $1\leq i<j$, then $\es{F}{T}{j}=\es{F}{U}{j}$.
\end{enumerate}
Equivalently, $\phi$ can be defined as the set of series represented by directed paths originating from the root vertex in some decision tree $T$ with the following structure. The root vertex $r$ of $T$ represents the first $k$ rounds of \mc{S}, and a non-root vertex $v$ of $T$ at distance $j$ from $r$ represents the $(k+j)$th round of a series whose first $k$ rounds are represented by $r$, and whose $(k+i)$th round for $1\leq i< j$ is represented by the vertex on the path from $r$ to $v$ at distance $i$ from $r$; the path $P$ in $T$ from $r$ to $v$ represents the series \mc{T} whose $k+j$ rounds are represented by the vertices along $P$. Given a vertex $v$ in $T$ and a series \mc{T} whose first $k+j$ rounds are identical to the series represented by the path from $r$ to $v$, for each possible Buster move \es{B}{T}{k+j+1} in the $(k+j+1)$st round of \mc{T} (i.e. each nonempty subset of a connected \es{G}{T}{k+j+1}), there is a child of $v$ representing the $(k+j+1)$st round of \mc{T}; that is, each possible Buster move \es{B}{T}{k+j+1} is assigned a single Fixer response \es{F}{T}{k+j+1}. Note that each path in $T$ from $r$ to a leaf represents a series won by Buster, while each path in $T$ from $r$ to an interior vertex (including the path consisting solely of the root $r$, if $T$ has multiple vertices, i.e. Buster doesn't win \mc{S} in the $k$th round) represents a series won by Fixer.
Let \mc{S} and \mc{S'} be series such that $\es{G}{S}{1}=\es{G}{S'}{1}$ and $\es{R}{S}{1}=\es{R}{S'}{1}$. Say \mc{S} is \emph{Fixer-superior} to \mc{S'} if each of the following holds:
\begin{enumerate}
\item Fixer wins \mc{S} or Buster wins \mc{S'}.
\item $\sum _{j=1}^{|\mc{S}|}|\es{B}{S}{j}|\geq\sum _{j=1}^{|\mc{S'}|}|\es{B}{S'}{j}|$
\item $\sum _{j=1}^{|\mc{S}|} w(\es{F}{S}{j})\leq\sum _{j=1}^{|\mc{S'}|} w(\es{F}{S'}{j})$
\end{enumerate}
The first requirement means that in terms of winning or losing, Fixer does at least as well in \mc{S} as in \mc{S'}. The second requirement means that Buster works at least as hard deleting edges in \mc{S} as in \mc{S'}. The third requirement means that Fixer spends at least as much on reserve edges in \mc{S'} as in \mc{S}. In establishing the second and third requirements, it is often helpful to observe that $\sum _{j=1}^{|\mc{T}|}|\es{B}{T}{j}|=|\es{G}{T}{1}|+|\es{R}{T}{1}|-|\es{G}{T}{|\mc{T}|+1}|-|\es{R}{T}{|\mc{T}|+1}|$ and $\sum _{j=1}^{|\mc{T}|} w(\es{F}{T}{j})=w(\es{R}{T}{1})-w(\es{R}{T}{|\mc{T}|+1})$ for any series \mc{T}. Note that every series is Fixer-superior to itself.
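As a concrete rendering of this definition, the comparison can be phrased as a small predicate. The sketch below is ours; encoding a series by its winner and the two running totals is an assumption for illustration, not notation used elsewhere in this paper, and the values quoted come from the example tables in the next subsection.
\begin{verbatim}
# Hedged sketch: the Fixer-superiority predicate, with a series encoded
# as (winner, total edges Buster deleted, total weight Fixer spent).

def fixer_superior(S, S2):
    winner, busted, spent = S
    winner2, busted2, spent2 = S2
    return ((winner == 'Fixer' or winner2 == 'Buster')
            and busted >= busted2
            and spent <= spent2)

# T_1 = ('Fixer', 2, 1) is Fixer-superior to T'_1 = ('Fixer', 2, 2),
# but not conversely.
print(fixer_superior(('Fixer', 2, 1), ('Fixer', 2, 2)))   # True
print(fixer_superior(('Fixer', 2, 2), ('Fixer', 2, 1)))   # False
\end{verbatim}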
Fixer move \es{F}{S}{k} is \emph{optimal} if there exists a Fixer strategy $\phi$ to continue \mc{S} after the $k$th round, such that for any series \mc{S'} identical to \mc{S} though Buster's move of the $k$th round and any Fixer strategy $\phi'$ to continue \mc{S'} after the $k$th round, for any $\mc{T}\in\phi$ there exists $\mc{T'}\in\phi'$ for which \mc{T} is Fixer-superior to \mc{T'}. That is, for a series \mc{T} identical to \mc{S} through the first $k$ rounds, against any Buster moves Fixer can always continue play in \mc{T} with $\es{B}{T}{k},\es{F}{T}{k},\es{B}{T}{k+1},\es{F}{T}{k+1},\ldots,\es{B}{T}{|\mc{T}|},\es{F}{T}{|\mc{T}|}$, in such a way that for any series \mc{T'} identical to \mc{S} through Buster's move in the $k$th round, against any Fixer moves Buster can always continue play in \mc{T'} with $\es{B}{T'}{k},\es{F}{T'}{k},\es{B}{T'}{k+1},\es{F}{T'}{k+1},\ldots,\es{B}{T'}{|\mc{T'}|},\es{F}{T'}{|\mc{T'}|}$, in such a way that \mc{T} is Fixer-superior to \mc{T'}. It is not immediately obvious that an optimal move need exist.
\subsection{Example}
\label{example}
We consider a series \mc{S} starting with $\es{G}{S}{1}=\{e_1,e_2,e_3\}$, where these edges form a triangle, and $\es{R}{S}{1}=\{e_4,e_5\}$, where $e_4$ has the same endpoints as $e_1$ and satisfies $w(e_4)=1$, while $e_5$ has the same endpoints as $e_2$ and satisfies $w(e_5)=2$; see Figure \ref{exampleGraph}.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\Vertex[x=0,y=2,fontsize=\large]{a}
\Vertex[x=-1.7,y=-1,fontsize=\large]{b}
\Vertex[x=1.7,y=-1,fontsize=\large]{c}
\Edge[label=$e_1$,position={above},fontsize=\large](a)(b)
\Edge[label=$e_2$,position={left},fontsize=\large](b)(c)
\Edge[label=$e_3$,position={below},fontsize=\large](c)(a)
\Edge[label=$e_4$,bend=-45,style={dashed},fontsize=\large](a)(b)
\Edge[label=$e_5$,bend=-45,style={dashed},fontsize=\large](b)(c)
\end{tikzpicture}
\caption{The solid edges form the graph \es{G}{S}{1}, while the dashed edges form the reserve set \es{R}{S}{1}, with $w(e_4)=1$ and $w(e_5)=2$.}\label{exampleGraph}
\end{figure}
Suppose Buster's first move in \mc{S} is to play $\es{B}{S}{1}=\{e_1,e_2\}$, and Fixer responds with $\es{F}{S}{1}=\{e_4\}$. We detail a Fixer strategy $\phi=\{\mc{T_1},\ldots,\mc{T_{10}}\}$ to continue \mc{S} after the first round below. A series \mc{T_i} in the same row as the $j$th round means that the rounds of \mc{T_i} are given by the first $j$ rows of the given box (ignoring the ``Winner'' column until the $j$th round).
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\mc{T} & $j$ & \es{G}{T}{j} & \es{R}{T}{j} & \es{B}{T}{j} & \es{F}{T}{j} & $\sum |\es{B}{T}{i}|$ & $\sum w(\es{F}{T}{i})$ & Winner \tabularnewline
\hline
\mc{T_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4\}$ & 2 & 1 & Fixer \tabularnewline
\mc{T_2} & $2$ & $\{e_3,e_4\}$ & $\{e_5\}$ & $\{e_3\}$ & $\{e_5\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T_3}/\mc{T_4}/\mc{T_5} & $3$ & $\{e_4,e_5\}$ & $\{\}$ & $\{e_4\}$/$\{e_5\}$/$\{e_4,e_5\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4\}$ & 2 & 1 & Fixer \tabularnewline
\mc{T_6} & $2$ & $\{e_3,e_4\}$ & $\{e_5\}$ & $\{e_4\}$ & $\{e_5\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T_7}/\mc{T_8}/\mc{T_9} & $3$ & $\{e_3,e_5\}$ & $\{\}$ & $\{e_3\}$/$\{e_5\}$/$\{e_3,e_5\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4\}$ & 2 & 1 & Fixer \tabularnewline
\mc{T_{10}} & $2$ & $\{e_3,e_4\}$ & $\{e_5\}$ & $\{e_3,e_4\}$ & $\{\}$ & $4$ & $1$ & Buster \tabularnewline
\hline
\end{tabular}
\end{center}
Now consider an alternate series \mc{S'} identical to \mc{S} up through Buster's removal of a set of edges in the first round, but suppose Fixer responds in \mc{S'} with $\es{F}{S'}{1}=\{e_5\}$. We detail a Fixer strategy $\phi'=\{\mc{T'_1},\ldots,\mc{T'_{10}}\}$ to continue \mc{S'} after the first round below.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\mc{T'} & $j$ & \es{G}{T'}{j} & \es{R}{T'}{j} & \es{B}{T'}{j} & \es{F}{T'}{j} & $\sum |\es{B}{T'}{i}|$ & $\sum w(\es{F}{T'}{i})$ & Winner \tabularnewline
\hline
\mc{T'_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_5\}$ & 2 & 2 & Fixer \tabularnewline
\mc{T'_2} & $2$ & $\{e_3,e_5\}$ & $\{e_4\}$ & $\{e_3\}$ & $\{e_4\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T'_3}/\mc{T'_4}/\mc{T'_5} & $3$ & $\{e_4,e_5\}$ & $\{\}$ & $\{e_4\}$/$\{e_5\}$/$\{e_4,e_5\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T'_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_5\}$ & 2 & 2 & Fixer \tabularnewline
\mc{T'_6} & $2$ & $\{e_3,e_5\}$ & $\{e_4\}$ & $\{e_5\}$ & $\{e_4\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T'_7}/\mc{T'_8}/\mc{T'_9} & $3$ & $\{e_3,e_4\}$ & $\{\}$ & $\{e_3\}$/$\{e_4\}$/$\{e_3,e_4\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T'_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_5\}$ & 2 & 2 & Fixer \tabularnewline
\mc{T'_{10}} & $2$ & $\{e_3,e_5\}$ & $\{e_4\}$ & $\{e_3,e_5\}$ & $\{\}$ & $4$ & $2$ & Buster \tabularnewline
\hline
\end{tabular}
\end{center}
Observe that $\phi'$ is, in fact, the only Fixer strategy to continue \mc{S'} after the first round. Indeed, $\es{G}{S'}{2}=\{e_3,e_5\}$, which is a path graph, and $\es{R}{S'}{2}=\{e_4\}$, where $e_4$ spans the endpoints of that path. Thus Buster deleting any single edge from \es{G}{S'}{2} necessitates Fixer reconnecting the graph using $e_4$, from which point on any further deletions by Buster cannot be countered by Fixer, and Buster initially deleting both edges from \es{G}{S'}{2} also cannot be countered by Fixer. Furthermore, see that for $1\leq i\leq 10$, the series \mc{T_i} in $\phi$ is Fixer-superior to the series \mc{T'_i} in $\phi'$. Hence $\phi$ is a Fixer strategy to continue \mc{S} after the first round, such that for any Fixer strategy $\phi'$ to continue \mc{S'} after the first round, for any $\mc{T}\in\phi$ there exists $\mc{T'}\in\phi'$ for which \mc{T} is Fixer-superior to \mc{T'}.
Now consider another alternate series \mc{S''} identical to \mc{S} up through Buster's removal of a set of edges in the first round, but suppose Fixer responds in \mc{S''} with $\es{F}{S''}{1}=\{e_4,e_5\}$. We detail a Fixer strategy $\phi''=\{\mc{T''_1},\ldots,\mc{T''_{17}}\}$ to continue \mc{S''} after the first round below.
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\mc{T''} & $j$ & \es{G}{T''}{j} & \es{R}{T''}{j} & \es{B}{T''}{j} & \es{F}{T''}{j} & $\sum |\es{B}{T''}{i}|$ & $\sum w(\es{F}{T''}{i})$ & Winner \tabularnewline
\hline
\mc{T''_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4,e_5\}$ & 2 & 3 & Fixer \tabularnewline
\mc{T''_2} & $2$ & $\{e_3,e_4,e_5\}$ & $\{\}$ & $\{e_3\}$ & $\{\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T''_3}/\mc{T''_4}/\mc{T''_5} & $3$ & $\{e_4,e_5\}$ & $\{\}$ & $\{e_4\}$/$\{e_5\}$/$\{e_4,e_5\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T''_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4,e_5\}$ & 2 & 3 & Fixer \tabularnewline
\mc{T''_6} & $2$ & $\{e_3,e_4,e_5\}$ & $\{\}$ & $\{e_4\}$ & $\{\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T''_7}/\mc{T''_8}/\mc{T''_9} & $3$ & $\{e_3,e_5\}$ & $\{\}$ & $\{e_3\}$/$\{e_5\}$/$\{e_3,e_5\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T''_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4,e_5\}$ & 2 & 3 & Fixer \tabularnewline
\mc{T''_{10}} & $2$ & $\{e_3,e_4,e_5\}$ & $\{\}$ & $\{e_5\}$ & $\{\}$ & 3 & 3 & Fixer \tabularnewline
\mc{T''_{11}}/\mc{T''_{12}}/\mc{T''_{13}} & $3$ & $\{e_3,e_4\}$ & $\{\}$ & $\{e_3\}$/$\{e_4\}$/$\{e_3,e_4\}$ & $\{\}$ & $4$/$4$/$5$ & $3$ & Buster \tabularnewline
\hline
\mc{T''_1} & $1$ & $\{e_1,e_2,e_3\}$ & $\{e_4,e_5\}$ & $\{e_1,e_2\}$ & $\{e_4,e_5\}$ & 2 & 3 & Fixer \tabularnewline
\mc{T''_{14}}/\mc{T''_{15}}/\mc{T''_{16}}/\mc{T''_{17}} & $2$ & $\{e_3,e_4,e_5\}$ & $\{\}$ & $\{e_3,e_4\}$/$\{e_3,e_5\}$/$\{e_4,e_5\}$/$\{e_3,e_4,e_5\}$ & $\{\}$ & 4/4/4/5 & 3 & Buster \tabularnewline
\hline
\end{tabular}
\end{center}
Again, observe that $\phi''$ is the only Fixer strategy to continue \mc{S''} after the first round since \es{R}{S''}{2} is empty, leaving Fixer with no options besides playing the empty set when possible. See that for $1\leq i\leq 9$, the series \mc{T_i} in $\phi$ is Fixer-superior to the series \mc{T''_i} in $\phi''$, while \mc{T_{10}} is Fixer-superior to \mc{T''_{11}}. Hence $\phi$ is a Fixer strategy to continue \mc{S} after the first round, such that for any Fixer strategy $\phi''$ to continue \mc{S''} after the first round, for any $\mc{T}\in\phi$ there exists $\mc{T''}\in\phi''$ for which \mc{T} is Fixer-superior to \mc{T''}.
Since \es{F}{S}{1}, \es{F}{S'}{1}, and \es{F}{S''}{1} are Fixer's only choices for responding to Buster in the first round, we have shown that $\phi$ is a Fixer strategy to continue \mc{S} after the first round, such that for any Fixer strategy $\phi'$ or $\phi''$ to continue any alternate move \es{F}{S'}{1} or \es{F}{S''}{1} after the first round, for any $\mc{T}\in\phi$ there exists $\mc{T'}\in\phi'$ and $\mc{T''}\in\phi''$ for which \mc{T} is Fixer-superior to \mc{T'} and \mc{T''}. Hence \es{F}{S}{1} is optimal. Note that because \mc{T'_1} and \mc{T''_1} are part of $\phi'$ and $\phi''$, respectively (the only Fixer strategies to continue \mc{S'} and \mc{S''} after the first round), but are not Fixer-superior to any series in $\phi$, by definition \es{F}{S'}{1} and \es{F}{S''}{1} are not optimal.
\subsection{Statement of Main Theorem}
\label{statement}
Fixer move $\es{F}{S}{k}\subseteq \es{R}{S}{k}$ to create a connected multigraph \es{G}{S}{k+1} by adding \es{F}{S}{k} to $\es{G}{S}{k}-\es{B}{S}{k}$ is \emph{greedy} if, for any other series \mc{S'} identical to \mc{S} through Buster's move of the $k$th round, Fixer move $\es{F}{S'}{k}\subseteq \es{R}{S'}{k}$ to create a connected multigraph \es{G}{S'}{k+1} by adding \es{F}{S'}{k} to $\es{G}{S'}{k}-\es{B}{S'}{k}$ satisfies $w(\es{F}{S}{k})\leq w(\es{F}{S'}{k})$. That is, Fixer plays greedily in response to a move by Buster by adding no reserve edge if the graph remains connected, and otherwise adding some cheapest set of reserve edges that connects the graph.
Note that every optimal Fixer move is necessarily greedy. Indeed, if \mc{S} and \mc{S'} are identical through Buster's move of the $k$th round, but $w(\es{F}{S}{k}) > w(\es{F}{S'}{k})$, then for any Fixer strategy $\phi$ to continue \mc{S} after the $k$th round, the series $\mc{T}\in\phi$ identical to \mc{S} through the $k$th round but ending with $|\mc{T}|=k$ cannot be Fixer-superior to any series \mc{T'} in a Fixer strategy $\phi'$ to continue \mc{S'} after the $k$th round. This is because if $|\mc{T'}|=k$ then $\sum _{j=1}^{|\mc{T}|} w(\es{F}{T}{j}) > \sum _{j=1}^{|\mc{T'}|} w(\es{F}{T'}{j})$, while if $|\mc{T'}|>k$ then $\sum _{j=1}^{|\mc{T}|}|\es{B}{T}{j}|<\sum _{j=1}^{|\mc{T'}|}|\es{B}{T'}{j}|$. Our main theorem states that greediness is also a sufficient condition for optimality.
\begin{thm}\label{mainthm}
Every greedy move by Fixer is optimal.
\end{thm}
After establishing some facts about spanning trees, Fixer-superiority, and optimality in Section \ref{prelim}, we use them to prove Theorem \ref{mainthm} in Section \ref{proof}. Our proof will be by induction, split into cases by the number $c$ of components of $\es{G}{S}{1}-\es{B}{S}{1}$. We shall see that the case $c=1$ is mostly trivial, the case $c=2$ is the most difficult and requires case analysis of each move to verify that certain invariants are maintained, and the case $c\geq 3$ is better-suited for a more direct application of the inductive hypothesis.
\subsection{Past and Future Work}
\label{future}
For a family $\mathcal{F}$ of subgraphs of the complete graph $K_n$, the unbiased Maker-Breaker game on $\mathcal{F}$ consists of players Maker and Breaker taking turns claiming edges of $K_n$ (see \cite{L} and \cite{CE} for some notable early results, and \cite{HKSS} for a more recent survey). Maker wins by claiming all edges of some graph in $\mathcal{F}$, while Breaker wins if all edges are claimed before Maker wins (equivalently, Breaker wins by claiming an edge from each minimal member of $\mathcal{F}$). The family $\mathcal{F}$ most relevant for comparison of the Maker-Breaker game on $\mathcal{F}$ with our game is the family of connected spanning subgraphs of $K_n$. The gameplay of Maker-Breaker differs from that of Buster-Fixer in several obvious ways, including the following:
\begin{enumerate}
\item Maker only needs to end up with a connected graph, while Fixer must maintain connectedness after each turn.
\item Maker cannot replace edges claimed by Breaker, whereas the reserve edges Fixer may select could include edges with the same endpoints as the edges deleted by Buster.
\item The Maker-Breaker game does not typically include weighted edges, which are a consideration in the Buster-Fixer model.
\item Maker's objective is to directly beat Breaker (and vice-versa), but the relationship between Fixer and Buster is less symmetric; Buster is more of an agent of chaos than a goal-oriented player (e.g. Buster can simply choose to stop participating at any point), and Fixer's objective is to do her best to keep the graph connected for as long and cheaply as possible based on Buster's actions, regardless of how effective Buster's edge deletions actually are at disconnecting the graph in ways that are expensive to repair.
\end{enumerate}
We believe the last difference listed is the most important to take note of, as it provides a contrast in the fundamental structure of the models, which further dictates how results are stated for each model. A standard Maker-Breaker result (similar to many typical results in positional game theory) would be a statement of conditions on $n$ and $\mathcal{F}$ for Maker or Breaker to win, most likely constructively proven via an explicit strategy for Maker or Breaker. Our Theorem \ref{mainthm}, that every greedy move by Fixer is optimal, is of a different flavor though. Since Fixer can't even ``win'' if Buster plays for long enough, and has no way of forcing Buster to quit, we must compare Fixer strategies against each other, rather than use a single Fixer strategy to prove Fixer can achieve a certain goal. Hence the optimal strategy for Fixer \emph{is} the statement, proven by showing its superiority to all other Fixer strategies.
Many avenues exist for future research into variations on our Buster-Fixer model. In particular, we wonder about optimal Fixer strategies for alternative games, where the condition that Fixer must maintain on the graph through each round is changed from maintaining connectedness to one of the following conditions:
\begin{enumerate}
\item Two given vertices $s$ and $t$ must stay in the same component.
\item The graph must stay $k$-connected for a given $k>1$.
\item Instead of a simple graph, the graph is directed, and Fixer must maintain one of the following conditions:
\begin{enumerate}
\item The directed graph must stay strongly connected.
\item The directed graph must have directed paths from (or to) a given vertex $s$ to (or from) all other vertices.
\item The directed graph must have a directed path from a given vertex $s$ to a given vertex $t$.
\item The directed graph must have directed paths in both directions between given vertices $s$ and $t$.
\end{enumerate}
\end{enumerate}
\section{Preliminaries}
\label{prelim}
\subsection{Spanning Trees and Prim's Algorithm}
\label{graphTheory}
A \emph{bridge} in a multigraph $M$ is an edge $e$ such that $M-\{e\}$ has one more component than $M$; equivalently, $e$ is part of no cycle in $M$. A \emph{spanning tree} of a connected multigraph $M$ is a subgraph $T$ of $M$ such that $T$ is a tree (i.e. connected and acyclic) whose vertex set matches that of $M$. A \emph{minimum spanning tree} of an edge-weighted multigraph $M$ is a spanning tree of $M$ minimizing the total weight of the edges. Minimum spanning trees are of interest to us because \es{F}{S}{k} is greedy if and only if it is a minimum spanning tree of the multigraph whose vertices are the components of $\es{G}{S}{k}-\es{B}{S}{k}$ and whose edges are the edges of \es{R}{S}{k} (identifying each endpoint of the edges in \es{R}{S}{k} with the component of $\es{G}{S}{k}-\es{B}{S}{k}$ within which it lies).
\emph{Prim's Algorithm} (first discovered by Jarnik \cite{J} and later by Prim \cite{P} and Dijkstra \cite{D}) finds a minimum spanning tree $T$ of a weighted connected multigraph $M$ one edge at a time by the following construction: with $T$ initialized as any vertex, iteratively add to $T$ any cheapest edge of $M$ joining a vertex in $T$ to one not yet in $T$, until all vertices of $M$ are in $T$.
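A minimal implementation sketch of this construction follows; it is ours, and the edge encoding and vertex labels are assumptions for illustration. In the Buster--Fixer setting one would run it on the multigraph whose vertices are the components of $\es{G}{S}{k}-\es{B}{S}{k}$ and whose edges are the reserve edges, as described above.
\begin{verbatim}
# Hedged sketch of Prim's Algorithm on a weighted multigraph.  Edges are
# (u, v, w) triples over arbitrary hashable vertex labels.
import heapq

def prim(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    start = next(iter(vertices))          # any starting vertex works
    in_tree, tree = {start}, []
    heap = list(adj[start])
    heapq.heapify(heap)
    while heap and len(in_tree) < len(vertices):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue                      # edge no longer leaves the tree
        in_tree.add(v)
        tree.append((u, v, w))            # cheapest edge leaving the tree
        for e in adj[v]:
            heapq.heappush(heap, e)
    return tree       # a minimum spanning tree, if the input is connected

# Components C1, C2, C3 joined by three weighted reserve edges:
print(prim({'C1', 'C2', 'C3'},
           [('C1', 'C2', 1), ('C2', 'C3', 2), ('C1', 'C3', 2)]))
\end{verbatim}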
We require not just the fact that Prim's Algorithm successfully produces a minimum spanning tree, but also the fact that any minimum spanning tree can be constructed via Prim's Algorithm.
\begin{prop}\label{primProp}
A spanning tree of a weighted connected multigraph is a minimum spanning tree if and only if it can be constructed via Prim's Algorithm.
\end{prop}
\begin{proof}
Let $M$ be a weighted connected multigraph, let $P$ be a subgraph of $M$ constructed by applying Prim's Algorithm, and let $T$ be a minimum spanning tree of $M$. We complete the proof by showing that $P$ is in fact a minimum spanning tree of $M$, and $T$ can be constructed via Prim's Algorithm.
If $P$ is constructed via Prim's Algorithm, then see that $P$ is a spanning tree of $M$:
\begin{enumerate}
\item $P$ is connected because $P$ is initialized as a single component (its single starting vertex), and the connectedness of $P$ is maintained as each new vertex is added as the endpoint of an edge whose other endpoint was already in $P$.
\item $P$ spans $M$ because $M$ is connected, so if some vertex of $M$ is not yet in $P$, then some new vertex can always be added to $P$.
\item $P$ is acyclic because every edge added is a bridge in $P$.
\end{enumerate}
If $P=T$ then $P$ is a minimum spanning tree. Otherwise, let $e$ be the first edge added during the construction of $P$ that is not in $T$, let $V$ be the set of vertices connected by the edges added before adding $e$, and let $f$ be an edge in the path through $T$ between the endpoints of $e$ such that one endpoint of $f$ is in $V$ but the other is not. Let $T'$ be the spanning tree of $M$ constructed from $T$ by replacing $f$ with $e$.
Since $e$ and $f$ are each edges with exactly one endpoint in $V$ and $e$ was added to $P$ by Prim's Algorithm, $e$ cannot weigh more than $f$. Hence $T'$ cannot weigh more than $T$, so $T'$ must be a minimum spanning tree of $M$ as well. This process of constructing minimum spanning trees of $M$ each with one more edge in common with $P$ than the last can be continued until $P$ is the minimum spanning tree constructed.
Since $T$ and $T'$ are both minimum spanning trees of $M$, they must weigh the same. Since $T'$ was constructed from $T$ by replacing $f$ with $e$, $e$ and $f$ must weigh the same. Hence $f$ could have also been added by Prim's Algorithm to extend the construction of a minimum spanning tree of $M$. This process of growing by an edge the subtree of $T$ that can be shown to have been created according to Prim's Algorithm can be continued until all of $T$ is shown to have been created by Prim's Algorithm.
\end{proof}
\subsection{Facts about Fixer-superiority and Optimality}
\label{facts}
We first verify that Fixer-superiority is transitive.
\begin{prop}\label{fixSupTransitivityProp}
Suppose \mc{S}, \mc{S'}, and \mc{S''} are series such that $\es{G}{S}{1}=\es{G}{S'}{1}=\es{G}{S''}{1}$ and $\es{R}{S}{1}=\es{R}{S'}{1}=\es{R}{S''}{1}$. If \mc{S} is Fixer-superior to \mc{S'}, and \mc{S'} is Fixer-superior to \mc{S''}, then \mc{S} is Fixer-superior to \mc{S''}.
\end{prop}
\begin{proof}
We have
\begin{enumerate}
\item Fixer wins \mc{S}, or Buster wins \mc{S'} (since \mc{S} is Fixer-superior to \mc{S'}), in which case Buster wins \mc{S''} (since \mc{S'} is Fixer-superior to \mc{S''}).
\item $\sum _{j=1}^{|\mc{S}|}|\es{B}{S}{j}|\geq\sum _{j=1}^{|\mc{S'}|}|\es{B}{S'}{j}|\geq\sum _{j=1}^{|\mc{S''}|}|\es{B}{S''}{j}|$
\item $\sum _{j=1}^{|\mc{S}|} w(\es{F}{S}{j})\leq\sum _{j=1}^{|\mc{S'}|} w(\es{F}{S'}{j})\leq\sum _{j=1}^{|\mc{S''}|} w(\es{F}{S''}{j})$
\end{enumerate}
so \mc{S} is Fixer-superior to \mc{S''} by definition.
\end{proof}
We next show that Fixer playing only optimal moves past some round leads to a series that is Fixer-superior to certain other series, further justifying our definition of ``optimal''.
\begin{lem}\label{optLemFixer}
Let \mc{S} be identical to \mc{S'} up through Buster's removal of a set of edges in the $k$th round, and let $\phi'$ be a strategy for Fixer to continue \mc{S'} after the $k$th round. If \es{F}{S}{j} is optimal for $j\geq k$, then there exists $\mc{T'}\in\phi'$ such that \mc{S} is Fixer-superior to \mc{T'}.
\end{lem}
\begin{proof}
For $k\leq j\leq |\mc{S}|$, since \es{F}{S}{j} is optimal there exists a strategy $\phi_j$ for Fixer to continue \mc{S} after the $j$th round, such that for any series \mc{S_j} identical to \mc{S} up through Buster's move in the $j$th round, every series in $\phi_j$ is Fixer-superior to some series in any strategy for Fixer to continue \mc{S_j} after the $j$th round. For each $k\leq j<|\mc{S}|$, set $\hat{\phi}_j$ as the subset of $\phi_j$ consisting of its series which either end by the $j$th round or have Buster's move in the $(j+1)$st round match \es{B}{S}{j+1}, and set $\hat{\phi}_{k-1}=\phi'$ and $\hat{\phi}_{|\mc{S}|}=\phi_{|\mc{S}|}$. Note that $\mc{S}\in\hat{\phi}_{|\mc{S}|}$, and for $k\leq j\leq |\mc{S}|$ every series in $\hat{\phi}_j$ is Fixer-superior to some series in $\hat{\phi}_{j-1}$ (since $\hat{\phi}_j\subseteq\phi_j$ and for a series \mc{S_j} identical to \mc{S} up through Buster's move in the $j$th round, $\hat{\phi}_{j-1}$ is a strategy for Fixer to continue \mc{S_j} after the $j$th round).
We iteratively construct a sequence $\mc{T_{|\mc{S}|}},\mc{T_{|\mc{S}|-1}},\ldots,\mc{T_{k-1}}$ of series, with $\mc{T_j}\in\hat{\phi}_j$ for each $j$. First set $\mc{T_{|\mc{S}|}}=\mc{S}\in\hat{\phi}_{|\mc{S}|}$. Then, having already constructed $\mc{T_{|\mc{S}|}},\mc{T_{|\mc{S}|-1}},\ldots,\mc{T_j}$ for some $j\geq k$, select $\mc{T_{j-1}}\in\hat{\phi}_{j-1}$ so that \mc{T_j} is Fixer-superior to \mc{T_{j-1}}. Hence by Proposition \ref{fixSupTransitivityProp}, \mc{T_{|\mc{S}|}} is Fixer-superior to \mc{T_{k-1}}. Since $\mc{S}=\mc{T_{|\mc{S}|}}$ and $\mc{T_{k-1}}\in\hat{\phi}_{k-1}=\phi'$, the proof is complete.
\end{proof}
Finally, we show that to verify the optimality of some Fixer move, we need only compare it to alternate Fixer moves consisting solely of bridges.
\begin{prop}\label{subsetSupProp}
Suppose $\phi$ is a Fixer strategy to continue \mc{S} after the $k$th round with the following property: for every series \mc{S'} identical to \mc{S} through Buster's move of the $k$th round in which every edge of \es{F}{S'}{k} is a bridge in \es{G}{S'}{k+1}, every $\mc{T}\in\phi$, and every Fixer strategy $\phi'$ to continue \mc{S'} after the $k$th round, there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}. Then \es{F}{S}{k} is optimal.
\end{prop}
\begin{proof}
Let $\mc{T}\in\phi$, let \mc{S'} and \mc{S''} be series identical to \mc{S} through Buster's move of the $k$th round such that $\es{F}{S'}{k}\subseteq\es{F}{S''}{k}$ and every edge of \es{F}{S'}{k} is a bridge in \es{G}{S'}{k+1}, and let $\phi''$ be any Fixer strategy to continue \mc{S''} after the $k$th round. To complete the proof, we construct a Fixer strategy $\phi'$ to continue \mc{S'} after the $k$th round such that for every $\mc{T'}\in\phi'$ there exists $\mc{T''}\in\phi''$ for which \mc{T'} is Fixer-superior to \mc{T''}. Indeed, by the hypothesis of this proposition there would exist $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}, and if there exists $\mc{T''}\in\phi''$ such that \mc{T'} is Fixer-superior to \mc{T''}, then \mc{T} would be Fixer-superior to \mc{T''}, by Proposition \ref{fixSupTransitivityProp}; hence \es{F}{S}{k} would be optimal by definition since $\mc{T}\in\phi$, \es{F}{S''}{k}, and $\phi''$ were arbitrary. To construct $\phi'$, we define an arbitrary series $\mc{T'}\in\phi'$; that is, we let \mc{T'} be identical to \mc{S'} through the $k$th round, and for arbitrary plays \es{B}{T'}{j} from Buster in the $j$th round for $j>k$, we assign Fixer responses \es{F}{T'}{j} derived from $\phi''$ in such a way that \mc{T'} is Fixer-superior to some $\mc{T''}\in\phi''$.
First suppose $|\mc{T'}|=k$, in which case let \mc{T''} be the lone series in $\phi''$ satisfying $|\mc{T''}|=k$ (i.e. \mc{T''} consists of the first $k$ rounds of \mc{S''}). Note that
\begin{enumerate}
\item Either Fixer wins \mc{T'}, or Buster wins \mc{T'}, in which case $(\es{G}{T'}{k}-\es{B}{T'}{k})\cup\es{R}{T'}{k}$ is disconnected, leaving $(\es{G}{T''}{k}-\es{B}{T''}{k})\cup\es{R}{T''}{k}$ also disconnected since it's the same graph, meaning Buster also wins \mc{T''}
\item $\sum _{j=1}^{|\mc{T'}|}|\es{B}{T'}{j}|=\sum _{j=1}^{k}|\es{B}{S}{j}|=\sum _{j=1}^{|\mc{T''}|}|\es{B}{T''}{j}|$
\item $\sum _{j=1}^{|\mc{T'}|} w(\es{F}{T'}{j})=\sum _{j=1}^{k-1} w(\es{F}{S}{j})+w(\es{F}{T'}{k})\leq\sum _{j=1}^{k-1} w(\es{F}{S}{j})+w(\es{F}{T''}{k})=\sum _{j=1}^{|\mc{T''}|} w(\es{F}{T''}{j})$
\end{enumerate}
so \mc{T'} is Fixer-superior to \mc{T''}.
Now suppose $|\mc{T'}|>k$. Note that $\es{F}{S'}{k}\subseteq \es{F}{S''}{k}$ implies for any $\mc{T''}\in\phi''$ both that $\es{R}{T''}{k+1}\subseteq \es{R}{T'}{k+1}$ (since $\es{R}{T'}{k+1}$ and $\es{R}{T''}{k+1}$ were constructed from $\es{R}{S}{k}$ by removing \es{F}{S'}{k} and \es{F}{S''}{k}, respectively) as well as that \es{G}{T'}{k+1} is a subgraph of \es{G}{T''}{k+1} (since \es{G}{T'}{k+1} and \es{G}{T''}{k+1} were constructed from $\es{G}{S}{k}-\es{B}{S}{k}$ by adding \es{F}{S'}{k} and \es{F}{S''}{k}, respectively). By the latter of these observations, there exists $\mc{T''}\in\phi''$ such that $\es{B}{T''}{k+1}=\es{B}{T'}{k+1}$; we shall choose our $\mc{T''}\in\phi''$ for which \mc{T'} is Fixer-superior to \mc{T''} to satisfy $\es{B}{T''}{k+1}=\es{B}{T'}{k+1}$. Note that for $D=\es{F}{S''}{k}-\es{F}{S'}{k}$,
\begin{align*}
(\es{G}{T'}{k+1}-\es{B}{T'}{k+1})\cup\es{R}{T'}{k+1}
&=(((\es{G}{T'}{k}-\es{B}{T'}{k})\cup\es{F}{T'}{k})-\es{B}{T'}{k+1})\cup (\es{R}{T'}{k}-\es{F}{T'}{k}) \\
&=(((\es{G}{T''}{k}-\es{B}{T''}{k})\cup(\es{F}{T''}{k}-D))-\es{B}{T''}{k+1})\cup (\es{R}{T''}{k}-\es{F}{T''}{k})\cup D \\
&=(((\es{G}{T''}{k}-\es{B}{T''}{k})\cup\es{F}{T''}{k})-\es{B}{T''}{k+1})\cup (\es{R}{T''}{k}-\es{F}{T''}{k}) \\
&=(\es{G}{T''}{k+1}-\es{B}{T''}{k+1})\cup\es{R}{T''}{k+1}
\end{align*}
so $(\es{G}{T'}{k+1}-\es{B}{T'}{k+1})\cup\es{R}{T'}{k+1}$ is connected if and only if $(\es{G}{T''}{k+1}-\es{B}{T''}{k+1})\cup\es{R}{T''}{k+1}$ is connected.
If Buster wins \mc{T'} in the $(k+1)$st round, then
\begin{enumerate}
\item Buster also wins \mc{T''} in the $(k+1)$st round, as $(\es{G}{T''}{k+1}-\es{B}{T''}{k+1})\cup\es{R}{T''}{k+1}$ is disconnected since $(\es{G}{T'}{k+1}-\es{B}{T'}{k+1})\cup\es{R}{T'}{k+1}$ is disconnected due to Buster winning \mc{T'} in the $(k+1)$st round
\item $\sum _{j=1}^{|\mc{T'}|}|\es{B}{T'}{j}|=\sum _{j=1}^{k}|\es{B}{S}{j}|+|\es{B}{T'}{k+1}|=\sum _{j=1}^{k}|\es{B}{S}{j}|+|\es{B}{T''}{k+1}|=\sum _{j=1}^{|\mc{T''}|}|\es{B}{T''}{j}|$
\item $\sum _{j=1}^{|\mc{T'}|} w(\es{F}{T'}{j})=\sum _{j=1}^{k-1} w(\es{F}{S}{j})+w(\es{F}{T'}{k})+0\leq\sum _{j=1}^{k-1} w(\es{F}{S}{j})+w(\es{F}{T''}{k})+0=\sum _{j=1}^{|\mc{T''}|} w(\es{F}{T''}{j})$ since $\es{F}{T'}{k+1}=\es{F}{T''}{k+1}=\emptyset$
\end{enumerate}
so \mc{T'} is Fixer-superior to \mc{T''}.
Thus we may suppose either Fixer wins \mc{T'} in the $(k+1)$st round or $|\mc{T'}|\geq k+2$. Then $(\es{G}{T'}{k+1}-\es{B}{T'}{k+1})\cup\es{R}{T'}{k+1}$ is connected, so $(\es{G}{T''}{k+1}-\es{B}{T''}{k+1})\cup\es{R}{T''}{k+1}$ is also connected, meaning either Fixer wins \mc{T''} in the $(k+1)$st round or $|\mc{T''}|\geq k+2$. Suppose according to $\phi''$ that Fixer repairs $\es{G}{T''}{k+1}-\es{B}{T''}{k+1}$ with the set $\es{F}{T''}{k+1}\subseteq \es{R}{T''}{k+1}$ to create the connected graph \es{G}{T''}{k+2}. Define $\phi'$ so that Fixer repairs $\es{G}{T'}{k+1}-\es{B}{T'}{k+1}$ with the set $\es{F}{T'}{k+1}=\es{F}{T''}{k+1}\cup ((\es{F}{T''}{k}-\es{F}{T'}{k})-\es{B}{T'}{k+1})$ to create the graph $\es{G}{T'}{k+2}$.
First, note that $\es{F}{T'}{k+1}\subseteq \es{R}{T'}{k+1}$ since $\es{F}{T''}{k+1}\subseteq \es{R}{T''}{k+1}\subseteq \es{R}{T'}{k+1}$ and $\es{F}{T''}{k}-\es{F}{T'}{k}=\es{R}{T'}{k+1}-\es{R}{T''}{k+1}\subseteq \es{R}{T'}{k+1}$. Hence Fixer can play \es{F}{T'}{k+1} as long as it makes $\es{G}{T'}{k+2}$ connected, which is the case since \es{G}{T''}{k+2} is connected and $\es{G}{T'}{k+2}=\es{G}{T''}{k+2}$:
\begin{align*}
\es{G}{T'}{k+2}&=(\es{G}{T'}{k+1}-\es{B}{T'}{k+1})\cup\es{F}{T'}{k+1} \\
&=(\es{G}{T''}{k+1}-(\es{F}{T''}{k}-\es{F}{T'}{k})-\es{B}{T'}{k+1})\cup\es{F}{T''}{k+1}\cup ((\es{F}{T''}{k}-\es{F}{T'}{k})-\es{B}{T'}{k+1}) \\
&=(\es{G}{T''}{k+1}-\es{B}{T'}{k+1})\cup\es{F}{T''}{k+1} \\
&=(\es{G}{T''}{k+1}-\es{B}{T''}{k+1})\cup\es{F}{T''}{k+1} \\
&=\es{G}{T''}{k+2}
\end{align*}
Next, note that $\es{R}{T'}{k+2}=\es{R}{T''}{k+2}$ since
\begin{align*}
\es{R}{T'}{k+2}&=\es{R}{T'}{k}-(\es{F}{T'}{k}\cup \es{F}{T'}{k+1}) \\
&=\es{R}{T''}{k}-(\es{F}{T'}{k}\cup \es{F}{T''}{k+1}\cup ((\es{F}{T''}{k}-\es{F}{T'}{k})-\es{B}{T'}{k+1})) \\
&=\es{R}{T''}{k}-((\es{F}{T''}{k}-(\es{B}{T'}{k+1}-\es{F}{T'}{k}))\cup \es{F}{T''}{k+1}) \\
&=\es{R}{T''}{k}-(\es{F}{T''}{k}\cup \es{F}{T''}{k+1}) \\
&=\es{R}{T''}{k+2}
\end{align*}
(using the facts that $\es{R}{T'}{k}=\es{R}{T''}{k}$, that $\es{F}{T'}{k+1}=\es{F}{T''}{k+1}\cup ((\es{F}{T''}{k}-\es{F}{T'}{k})-\es{B}{T'}{k+1})$, and that $\es{F}{T''}{k}\cap (\es{B}{T'}{k+1}-\es{F}{T'}{k})=(\es{F}{T''}{k}-\es{F}{T'}{k})\cap \es{B}{T'}{k+1}=\emptyset$ since $\es{F}{T''}{k}-\es{F}{T'}{k}\subseteq\es{R}{T'}{k+1}$ and $\es{R}{T'}{k+1}\cap\es{B}{T'}{k+1}=\emptyset$).
Finally, see that since $\es{G}{T'}{k+2}=\es{G}{T''}{k+2}$ and $\es{R}{T'}{k+2}=\es{R}{T''}{k+2}$, $\phi'$ can continue to be defined by copying $\phi''$ in the following way. Assuming \mc{T'} has been defined up to the start of the $j$th round for some $j\geq k+2$ in such a way that $\es{G}{T'}{j}=\es{G}{T''}{j}$ and $\es{R}{T'}{j}=\es{R}{T''}{j}$ for some $\mc{T''}\in\phi''$, and Buster removes some set \es{B}{T'}{j} of edges from \es{G}{T'}{j} in \mc{T'}, set $\es{B}{T''}{j}=\es{B}{T'}{j}$ and let \es{F}{T''}{j} be Fixer's response in \mc{T''} prescribed by $\phi''$. Then set $\es{F}{T'}{j}=\es{F}{T''}{j}$, leaving $\es{G}{T'}{j+1}=\es{G}{T''}{j+1}$ and $\es{R}{T'}{j+1}=\es{R}{T''}{j+1}$. Continuing this process up through the final round $\ell$ of \mc{T'}, which we also let be the final round of \mc{T''} (either automatically if Buster wins, or by letting Buster quit if Fixer wins), we see that \mc{T'} is Fixer-superior to \mc{T''} because
\begin{enumerate}
\item Either Fixer wins \mc{T'}, or Buster wins \mc{T'}, in which case $(\es{G}{T'}{\ell}-\es{B}{T'}{\ell})\cup\es{R}{T'}{\ell}$ is disconnected, leaving $(\es{G}{T''}{\ell}-\es{B}{T''}{\ell})\cup\es{R}{T''}{\ell}$ also disconnected since it's the same graph, meaning Buster also wins \mc{T''}
\item $\sum _{j=1}^{|\mc{T'}|}|\es{B}{T'}{j}| = |\es{G}{S}{1}|+|\es{R}{S}{1}|-|\es{G}{T'}{\ell+1}|-|\es{R}{T'}{\ell+1}| = |\es{G}{S}{1}|+|\es{R}{S}{1}|-|\es{G}{T''}{\ell+1}|-|\es{R}{T''}{\ell+1}| = \sum _{j=1}^{|\mc{T''}|}|\es{B}{T''}{j}|$
\item $\sum _{j=1}^{|\mc{T'}|} w(\es{F}{T'}{j}) = w(\es{R}{S}{1})-w(\es{R}{T'}{\ell+1}) = w(\es{R}{S}{1})-w(\es{R}{T''}{\ell+1}) = \sum _{j=1}^{|\mc{T''}|} w(\es{F}{T''}{j})$
\end{enumerate}
and thus we have constructed $\phi'$ so that for every $\mc{T'}\in\phi'$ there exists $\mc{T''}\in\phi''$ for which \mc{T'} is Fixer-superior to \mc{T''}. Hence \es{F}{S}{k} is optimal.
\end{proof}
\section{Proof of Main Theorem}
\label{proof}
To prove our main theorem, that during any series \mc{S}, any greedy Fixer move \es{F}{S}{k} is optimal, we perform induction on $|\es{G}{S}{k}|+|\es{R}{S}{k}|$. To help with the base case, we use the following proposition.
\begin{prop}\label{baseCaseProp}
If Buster wins \mc{S} in the $k$th round, then \es{F}{S}{k} is greedy and optimal.
\end{prop}
\begin{proof}
If Buster wins \mc{S} in the $k$th round, then $(\es{G}{S}{k}-\es{B}{S}{k})\cup\es{R}{S}{k}$ is disconnected, and by convention $\es{F}{S}{k}=\emptyset$, which is greedy. Clearly the only series identical to \mc{S} through Buster's move in the $k$th round is \mc{S} itself, and the only strategy for Fixer to continue \mc{S} after the $k$th round is $\{\mc{S}\}$, so \es{F}{S}{k} is optimal because \mc{S} is Fixer-superior to itself.
\end{proof}
Let $V$ be the vertex set of \es{G}{S}{1}, with $|V|=n$, and without loss of generality assume $k=1$. Note that $|\es{G}{S}{1}|\geq n-1$ since \es{G}{S}{1} is connected, so for our base case we consider $|\es{G}{S}{1}|+|\es{R}{S}{1}|=n-1$. In this case, $|\es{G}{S}{1}|=n-1$, $\es{R}{S}{1}=\emptyset$, and $\es{B}{S}{1}\subseteq\es{G}{S}{1}$ is nonempty; then $(\es{G}{S}{1}-\es{B}{S}{1})\cup\es{R}{S}{1}$ is disconnected because it has at most $n-2$ edges, in which case Buster wins \mc{S} during the first round, and Proposition \ref{baseCaseProp} applies.
Hence we may suppose $|\es{G}{S}{1}|+|\es{R}{S}{1}|\geq n$, and inductively assume during any series \mc{T} such that \es{G}{T}{1} is a connected graph on $V$ and $|\es{G}{T}{k}|+|\es{R}{T}{k}|<|\es{G}{S}{1}|+|\es{R}{S}{1}|$, any greedy Fixer move \es{F}{T}{k} is optimal. Furthermore, by Proposition \ref{baseCaseProp}, we may assume that Buster does not win \mc{S} during the first round. Let $\phi$ be a greedy Fixer strategy to continue \mc{S} after Fixer's greedy move \es{F}{S}{1} of the first round; by the inductive hypothesis, all Fixer moves in $\phi$ past the first round are optimal. Let \mc{S'} be an arbitrary series identical to \mc{S} through Buster's move of the first round such that every edge of \es{F}{S'}{1} is a bridge in \es{G}{S'}{2}, and let $\phi'$ be an arbitrary strategy for Fixer to continue \mc{S'} after the first round. By Proposition \ref{subsetSupProp}, in order to show \es{F}{S}{1} is optimal, it suffices to show that for any $\mc{T}\in\phi$ there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}. Note that if Fixer wins \mc{T} in the first round then \mc{T} is clearly Fixer-superior to the only series $\mc{T'}\in\phi'$ satisfying $|\mc{T'}|=1$, so we may assume $|\mc{T}|>1$.
For the rest of this section, fix any greedy Fixer strategy $\phi$ to continue \mc{S} after Fixer's greedy move \es{F}{S}{1} of the first round, fix any series $\mc{T}\in\phi$ satisfying $|\mc{T}|>1$, and fix any Fixer strategy $\phi'$ to continue \mc{S'} after the first round. Let $c$ equal the number of components of $\es{G}{S}{1}-\es{B}{S}{1}$. Let $M$ be the multigraph whose vertices are the components of $\es{G}{S}{1}-\es{B}{S}{1}$ and whose edges are the edges of \es{R}{S}{1} (identifying each endpoint of the edges in \es{R}{S}{1} with the component of $\es{G}{S}{1}-\es{B}{S}{1}$ within which it lies), so \es{F}{S}{1} is a minimum spanning tree of $M$, and \es{F}{S'}{1} is a spanning tree of $M$. We complete the proof by showing for each value of $c$ that there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}, handling separately the cases $c=1$, $c=2$, and $c\geq 3$ in Subsections \ref{c1Proof}, \ref{c2Proof}, and \ref{c3Proof}, respectively.
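As an aside, the greedy move $\es{F}{S}{1}$ can be computed without building $M$ explicitly: a union-find structure contracts the components of $\es{G}{S}{1}-\es{B}{S}{1}$, and scanning the reserve edges in order of weight yields a minimum spanning tree of $M$. The sketch below (Python again; the function name and the $(w,u,v)$ edge representation are our own illustrative choices) uses Kruskal's algorithm rather than Prim's, which is harmless here: every minimum spanning tree of $M$ has the same total weight, and by Proposition \ref{primProp} each is also constructible via Prim's Algorithm.
\begin{verbatim}
def greedy_fixer_move(vertices, surviving_edges, reserve_edges):
    """Return a cheapest set of reserve edges reconnecting the
    components of the damaged graph, i.e. a minimum spanning
    tree of the component multigraph M."""
    parent = {v: v for v in vertices}

    def find(v):                         # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):                     # True iff u, v were separated
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
        return True

    for _, u, v in surviving_edges:      # components of G - B become
        union(u, v)                      # the vertices of M
    fix = []
    for w, u, v in sorted(reserve_edges):   # cheapest reserve edge first
        if union(u, v):                  # keep it iff it joins two
            fix.append((w, u, v))        # distinct components of M
    return fix
\end{verbatim}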
\subsection{The case $c=1$}
\label{c1Proof}
If $c=1$, then the only spanning tree of $M$ is edgeless, so $\es{F}{S}{1}=\es{F}{S'}{1}=\emptyset$. For our fixed series $\mc{T}\in\phi$, let \mc{T'} be any series in $\phi'$ identical to \mc{T} through Buster's move of the second round. Since \es{F}{T}{j} is optimal for $j\geq 2$, and the subset $\phi''$ of $\phi'$ consisting of all series in $\phi'$ identical to \mc{T'} through the second round forms a strategy for Fixer to continue \mc{T'} after the second round, by Lemma \ref{optLemFixer} there exists $\mc{T''}\in\phi''\subseteq\phi'$ such that \mc{T} is Fixer-superior to \mc{T''}.
\subsection{The case $c=2$}
\label{c2Proof}
If $c=2$, then the spanning trees of $M$ are the individual edges in \es{R}{S}{1} joining the two components of $\es{G}{S}{1}-\es{B}{S}{1}$. Hence for two such edges $s$ and $s'$, where no such edge is cheaper than $s$, we have $\es{F}{S}{1}=\{s\}$ and $\es{F}{S'}{1}=\{s'\}$. We establish Lemmas \ref{primSplit} and \ref{singleton} in order to prove Proposition \ref{proofStrategy}, which provides a strategy for proving that $\es{F}{S}{1}=\{s\}$ is optimal.
\begin{lem}\label{primSplit}
Suppose \mc{U} is a series identical to \mc{S} through the first round, with \es{F}{U}{j} greedy for $j>k$ for some $k$, which satisfies $1\leq k\leq |\mc{U}|-2$ if Buster wins \mc{U} and $1\leq k\leq |\mc{U}|-1$ if Fixer wins \mc{U}. Then there exists a series \mc{U(k+1)} identical to \mc{U} through the $k$th round such that $|\es{B}{U(k+1)}{k+1}|=1$, \es{F}{U(k+1)}{j} is greedy for $j>k$, and \mc{U} is Fixer-superior to \mc{U(k+1)}.
\end{lem}
\begin{proof}
If $|\es{B}{U}{k+1}|=1$, set $\mc{U(k+1)}=\mc{U}$, so \mc{U(k+1)} is identical to \mc{U} through the $k$th round, $|\es{B}{U(k+1)}{k+1}|=|\es{B}{U}{k+1}|=1$, $\es{F}{U(k+1)}{j}=\es{F}{U}{j}$ is greedy for $j>k$, and \mc{U} is Fixer-superior to \mc{U(k+1)} since every series is Fixer-superior to itself.
Thus we may assume $|\es{B}{U}{k+1}|>1$. Let \mc{U(k+1)} be the series identical to \mc{U} through the $k$th round, with the rest of \mc{U(k+1)} constructed as follows. We show that there exists an edge $b\in\es{B}{U}{k+1}$ such that if Buster plays $\es{B}{U(k+1)}{k+1}=\{b\}$ in \mc{U(k+1)}, then Fixer can respond with some greedy $\es{F}{U(k+1)}{k+1}\subseteq\es{F}{U}{k+1}$. If this is the case, then for the $(k+2)$nd round in \mc{U(k+1)} Buster could play $\es{B}{U(k+1)}{k+2}=\es{B}{U}{k+1}-\{b\}$ since
\begin{align*}
\es{B}{U(k+1)}{k+2}&=\es{B}{U}{k+1}-\{b\} \\
&\subseteq\es{G}{U}{k+1}-\{b\} \\
&=\es{G}{U(k+1)}{k+1}-\es{B}{U(k+1)}{k+1} \\
&\subseteq(\es{G}{U(k+1)}{k+1}-\es{B}{U(k+1)}{k+1})\cup\es{F}{U(k+1)}{k+1} \\
&=\es{G}{U(k+1)}{k+2}
\end{align*}
and Fixer could respond with $\es{F}{U(k+1)}{k+2}=\es{F}{U}{k+1}-\es{F}{U(k+1)}{k+1}$ since
\begin{align*}
\es{F}{U(k+1)}{k+2}&=\es{F}{U}{k+1}-\es{F}{U(k+1)}{k+1} \\
&\subseteq\es{R}{U}{k+1}-\es{F}{U(k+1)}{k+1} \\
&=\es{R}{U(k+1)}{k+1}-\es{F}{U(k+1)}{k+1} \\
&=\es{R}{U(k+1)}{k+2}
\end{align*}
and
\begin{align*}
\es{G}{U(k+1)}{k+3}&=(\es{G}{U(k+1)}{k+2}-\es{B}{U(k+1)}{k+2})\cup\es{F}{U(k+1)}{k+2} \\
&=(((\es{G}{U(k+1)}{k+1}-\{b\})\cup\es{F}{U(k+1)}{k+1})-(\es{B}{U}{k+1}-\{b\}))\cup(\es{F}{U}{k+1}-\es{F}{U(k+1)}{k+1}) \\
&=(((\es{G}{U}{k+1}-\{b\})\cup\es{F}{U(k+1)}{k+1})-(\es{B}{U}{k+1}-\{b\}))\cup(\es{F}{U}{k+1}-\es{F}{U(k+1)}{k+1}) \\
&=(\es{G}{U}{k+1}-\es{B}{U}{k+1})\cup\es{F}{U}{k+1} \\
&=\es{G}{U}{k+2}
\end{align*}
which is connected since Buster doesn't win \mc{U} in the $(k+1)$st round. Furthermore, Fixer's move $\es{F}{U(k+1)}{k+2}$ in \mc{U(k+1)} would be greedy, since otherwise Fixer's move $\es{F}{U}{k+1}=\es{F}{U(k+1)}{k+1}\cup\es{F}{U(k+1)}{k+2}$ in \mc{U} would not have been greedy, contradicting the hypotheses of this lemma. Then $\es{G}{U(k+1)}{k+3}=\es{G}{U}{k+2}$ and $\es{R}{U(k+1)}{k+3}=\es{R}{U(k+1)}{k+1}-(\es{F}{U(k+1)}{k+1}\cup\es{F}{U(k+1)}{k+2})=\es{R}{U}{k+1}-\es{F}{U}{k+1}=\es{R}{U}{k+2}$, so setting $\es{B}{U(k+1)}{j+1}=\es{B}{U}{j}$ and $\es{F}{U(k+1)}{j+1}=\es{F}{U}{j}$ for $j\geq k+2$ would be valid plays by Buster and Fixer in \mc{U(k+1)}, with Fixer's moves being greedy because they were greedy in \mc{U}. Thus we'd have
\begin{enumerate}
\item Fixer wins \mc{U}, or Buster wins \mc{U} and thus also wins \mc{U(k+1)}.
\item $\sum _{j=1}^{|\mc{U}|}|\es{B}{U}{j}| = \sum _{j=1}^{|\mc{U(k+1)}|}|\es{B}{U(k+1)}{j}|$
\item $\sum _{j=1}^{|\mc{U}|} w(\es{F}{U}{j}) = \sum _{j=1}^{|\mc{U(k+1)}|} w(\es{F}{U(k+1)}{j})$
\end{enumerate}
so \mc{U} would be Fixer-superior to \mc{U(k+1)}. We complete the proof by showing there exists $b\in\es{B}{U}{k+1}$ such that Fixer can respond to $\es{B}{U(k+1)}{k+1}=\{b\}$ with some greedy $\es{F}{U(k+1)}{k+1}\subseteq\es{F}{U}{k+1}$.
If there exists $b\in\es{B}{U}{k+1}$ such that $\es{G}{U}{k+1}-\{b\}$ is connected, let Buster play $\es{B}{U(k+1)}{k+1}=\{ b\}$ in \mc{U(k+1)}. Since $\es{G}{U(k+1)}{k+1}-\es{B}{U(k+1)}{k+1}=\es{G}{U}{k+1}-\{b\}$ and is therefore connected, Fixer can respond greedily in \mc{U(k+1)} with $\es{F}{U(k+1)}{k+1}=\emptyset\subseteq\es{F}{U}{k+1}$.
Finally, suppose $\es{G}{U}{k+1}-\{b\}$ is disconnected for every $b\in\es{B}{U}{k+1}$. Since \es{F}{U}{k+1} is greedy, it is therefore a minimum spanning tree of the multigraph $H$ whose vertices are the components of $\es{G}{U}{k+1}-\es{B}{U}{k+1}$ and whose edges are the edges of \es{R}{U}{k+1} (identifying each endpoint of the edges in $\es{R}{U}{k+1}$ with the component of $\es{G}{U}{k+1}-\es{B}{U}{k+1}$ within which it lies). By Proposition \ref{primProp} there exists an ordering of \es{F}{U}{k+1} where edges appear in the order they were added by Prim's Algorithm; let $f$ be the first edge in this ordering. Let $b$ be an edge in \es{B}{U}{k+1} such that $(\es{G}{U}{k+1}-\{b\})\cup\{f\}$ is connected; note that such a $b$ exists because otherwise $f$ would be a loop in $H$ and therefore couldn't be part of any minimum spanning tree. Let Buster play $\es{B}{U(k+1)}{k+1}=\{ b\}$ in \mc{U(k+1)}, and have Fixer respond with $\es{F}{U(k+1)}{k+1}=\{f\}$. Fixer's move is greedy because if there were some edge $r\in\es{R}{U(k+1)}{k+1}$ such that $w(r)<w(f)$ and $(\es{G}{U}{k+1}-\{b\})\cup\{r\}$ was connected, then $r$ would've been chosen before $f$ by Prim's Algorithm in constructing a minimum spanning tree of $H$, contradicting $f$ being the first edge chosen.
\end{proof}
\begin{lem}\label{singleton}
Suppose \mc{U(k)} is a series identical to \mc{S} through the first round, with \es{F}{U(k)}{j} greedy for $j\geq k$ for some $1<k<|\mc{U(k)}|$. If $F$ is a subset of \es{R}{U(k)}{k} such that $(\es{G}{U(k)}{k}-\es{B}{U(k)}{k})\cup F$ is connected, then there exists a series \mc{U(k+1)} identical to \mc{U(k)} through Buster's move in the $k$th round such that $\es{F}{U(k+1)}{k}=F$, $|\es{B}{U(k+1)}{k+1}|=1$ if Buster doesn't win \mc{U(k+1)} in the $(k+1)$st round, \es{F}{U(k+1)}{j} is greedy for $j>k$, and \mc{U(k)} is Fixer-superior to \mc{U(k+1)}.
\end{lem}
\begin{proof}
By the inductive hypothesis of this section, for $j\geq k$, \es{F}{U(k)}{j} is optimal since \es{F}{U(k)}{j} is greedy. Let $\hat{\phi}$ be a greedy strategy for Fixer to continue the series \mc{\hat{U}} after the $k$th round, where \mc{\hat{U}} is identical to \mc{U(k)} through Buster's move in the $k$th round, and $\es{F}{\hat{U}}{k}=F$. Since \mc{U(k)} is part of some Fixer strategy to continue \mc{U(k)} after the $k$th round where Fixer only makes optimal moves after the $k$th round, and \mc{\hat{U}} is a series identical to \mc{U(k)} through Buster's move in the $k$th round, by Lemma \ref{optLemFixer} \mc{U(k)} is Fixer-superior to some series $\mc{U}\in\hat{\phi}$. Note that $k<|\mc{U}|$, since otherwise
\begin{align*}
\sum _{j=1}^{|\mc{U(k)}|}|\es{B}{U(k)}{j}| &> \sum _{j=1}^{k}|\es{B}{U(k)}{j}| \\
&=\sum _{j=1}^{|\mc{U}|}|\es{B}{U}{j}|
\end{align*}
contradicting \mc{U(k)} being Fixer-superior to \mc{U}. If Buster wins \mc{U} in the $(k+1)$st round, then \es{F}{U}{k+1} is empty and greedy by convention, so we can set $\mc{U(k+1)}=\mc{U}$. If Buster doesn't win \mc{U} in the $(k+1)$st round, then by Lemma \ref{primSplit} there exists a series \mc{U(k+1)} identical to \mc{U} through the $k$th round such that $|\es{B}{U(k+1)}{k+1}|=1$, \es{F}{U(k+1)}{j} is greedy for $j>k$, and \mc{U} is Fixer-superior to \mc{U(k+1)}. Since \mc{U(k)} is Fixer-superior to \mc{U}, and \mc{U} is Fixer-superior to \mc{U(k+1)}, by Proposition \ref{fixSupTransitivityProp} \mc{U(k)} is Fixer-superior to \mc{U(k+1)}.
\end{proof}
Recall that in order to show \es{F}{S}{1} is optimal, we fixed a series $\mc{T}\in\phi$ that we must show is Fixer-superior to some $\mc{T'}\in\phi'$, where $\phi$ is a greedy Fixer strategy to continue \mc{S} after the first round, and $\phi'$ is any Fixer strategy to continue \mc{S'} after the first round.
\begin{prop}\label{proofStrategy}
Suppose there exists a Fixer strategy to continue \mc{S} after the first round such that for its subset $\phi_1$ consisting of each of its series $\mc{T_1}$ satisfying $|\es{B}{T_1}{j}|=1$ for $1<j<|\mc{T_1}|$, and also $|\es{B}{T_1}{j}|=1$ for $j=|\mc{T_1}|$ if Fixer wins \mc{T_1} (i.e. Buster is restricted to removing singletons after the first round, except for the final round if Buster wins), for every $\mc{T_1}\in\phi_1$ there exists a series $\mc{T'_g}$ identical to \mc{S'} through the first round such that \es{F}{T'_g}{j} is greedy for $j>1$ and \mc{T_1} is Fixer-superior to \mc{T'_g}. Then there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}.
\end{prop}
\begin{proof}
We first show that there exists a series $\mc{T'_g}$ identical to \mc{S'} through the first round such that \es{F}{T'_g}{j} is greedy for $j>1$ and \mc{T} is Fixer-superior to \mc{T'_g}.
If Buster wins \mc{T} in the second round, then $\mc{T}\in\phi_1$, so by hypothesis there exists a series $\mc{T'_g}$ identical to \mc{S'} through the first round such that \es{F}{T'_g}{j} is greedy for $j>1$ and \mc{T} is Fixer-superior to \mc{T'_g}.
If Buster doesn't win \mc{T} in the second round, then by Lemma \ref{primSplit}, there exists a series \mc{U(2)} such that \mc{U(2)} is identical to \mc{T} through the first round, $|\es{B}{U(2)}{2}|=1$, \es{F}{U(2)}{j} is greedy for $j>1$, and \mc{T} is Fixer-superior to \mc{U(2)}. We iteratively apply Lemma \ref{singleton} to construct a sequence $\mc{U(2)},\mc{U(3)},\ldots,\mc{U(\ell)}$ such that for $1<k<\ell$, \mc{U(k+1)} is a series identical to \mc{U(k)} through Buster's move in the $k$th round such that \es{F}{U(k+1)}{k} is the set $F$ prescribed by $\phi_1$, $|\es{B}{U(k+1)}{k+1}|=1$ if Buster doesn't win \mc{U(k+1)} in the $(k+1)$st round, \es{F}{U(k)}{j} is greedy for $j>k$, and \mc{U(k)} is Fixer-superior to \mc{U(k+1)}; the sequence terminates after we reach a series \mc{U(\ell)} in $\phi_1$ (that is, either Buster wins \mc{U(\ell)} in round $\ell$, or $|\es{B}{U(\ell)}{\ell}|=1$ and Fixer wins \mc{U(\ell)} in round $\ell$, or $|\es{B}{U(\ell)}{\ell}|=1$ and Buster wins \mc{U(\ell)} in round $\ell+1$). Since $\mc{U(\ell)}\in\phi_1$, by the hypothesis of this proposition there exists a series $\mc{T'_g}$ identical to \mc{S'} through the first round such that \es{F}{T'_g}{j} is greedy for $j>1$ and \mc{U(\ell)} is Fixer-superior to \mc{T'_g}. Since \mc{T} is Fixer-superior to \mc{U(2)}, \mc{U(k)} is Fixer-superior to \mc{U(k+1)} for $1<k<\ell$, and \mc{U(\ell)} is Fixer-superior to \mc{T'_g}, by Proposition \ref{fixSupTransitivityProp} \mc{T} is Fixer-superior to \mc{T'_g}.
Thus regardless of whether Buster wins \mc{T} in the second round, there exists a series $\mc{T'_g}$ identical to \mc{S'} through the first round such that \es{F}{T'_g}{j} is greedy for $j>1$ and \mc{T} is Fixer-superior to \mc{T'_g}. We now show that \mc{T'_g} is Fixer-superior to some $\mc{T'}\in\phi'$. Since all Fixer moves after the first round of \mc{T'_g} are greedy, by the inductive hypothesis of the section they are optimal. Let $\phi''$ be the subset of $\phi'$ consisting of its series for which Buster's move in the second round matches \es{B}{T'_g}{2}, and let \mc{T''} be any element of $\phi''$, so $\phi''$ is a strategy for Fixer to continue \mc{T''} after the second round. Since \es{F}{T'_g}{j} is optimal for $j\geq 2$, by Lemma \ref{optLemFixer} there exists $\mc{T'}\in\phi''\subseteq\phi'$ such that \mc{T'_g} is Fixer-superior to \mc{T'}.
Hence \mc{T} is Fixer-superior to \mc{T'_g}, which is Fixer-superior to \mc{T'}. By Proposition \ref{fixSupTransitivityProp}, \mc{T} is Fixer-superior to \mc{T'}, as desired since $\mc{T'}\in\phi'$.
\end{proof}
We use Proposition \ref{proofStrategy} to complete the proof for the case $c=2$ by showing there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}. We define a subset $\phi_1$ of a Fixer strategy to continue \mc{S} after the first round consisting of each of its series $\mc{T_1}$ satisfying $|\es{B}{T_1}{k}|=1$ for $1<k<|\mc{T_1}|$, and also $|\es{B}{T_1}{k}|=1$ for $k=|\mc{T_1}|$ if Fixer wins \mc{T_1}, by constructing an arbitrary member \mc{T_1} of $\phi_1$. \mc{T_1} will be constructed simultaneously alongside some series $\mc{T'_g}$ identical to \mc{S'} through the first round such that \es{F}{T'_g}{k} is greedy for $k>1$, in the following way. Let \mc{T_1} be identical to \mc{S} through the first round, and let \mc{T'_g} be identical to \mc{S'} through the first round. For a given round $k>1$, Buster will remove an arbitrary singleton set \es{B}{T_1}{k} of edges from \es{G}{T_1}{k} in \mc{T_1} (unless Buster wins \mc{T_1} in the $k$th round, in which case the singleton requirement is dropped for \es{B}{T_1}{k}), then based on that move in \mc{T_1} Buster will remove a set \es{B}{T'_g}{k} of edges from \es{G}{T'_g}{k} in \mc{T'_g} (or, in a particular case, have \mc{T'_g} skip a round with respect to \mc{T_1}, only to make it up later). Fixer will then respond in \mc{T'_g} with some greedy set \es{F}{T'_g}{k} of edges from \es{R}{T'_g}{k} to connect $\es{G}{T'_g}{k}-\es{B}{T'_g}{k}$, then based on that response in \mc{T'_g} Fixer will add a set \es{F}{T_1}{k} of edges from \es{R}{T_1}{k} to connect $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ in \mc{T_1}. Since each Buster move in \mc{T_1} is an arbitrary singleton after the first round and before the final round, and in the final round is an arbitrary singleton if Fixer wins and an arbitrary set if Buster wins, \mc{T_1} is an arbitrary member of $\phi_1$, so our procedure for defining \mc{T_1} fully defines $\phi_1$. Since \mc{T_1} is an arbitrary member of $\phi_1$, if we show \mc{T_1} is Fixer-superior to \mc{T'_g}, then by Proposition \ref{proofStrategy} there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}.
In order to analyze \mc{T_1} and \mc{T'_g}, we categorize the corresponding rounds of each series into Scenarios \ref{scenario1}, \ref{scenario2}, and \ref{scenario3}. Each scenario will include a list of conditions that must be satisfied by \mc{T_1} and \mc{T'_g}, plus round-by-round instructions for both Buster to make moves in \mc{T'_g} based on his moves in \mc{T_1} as well as for Fixer to respond in \mc{T_1} based on her responses in \mc{T'_g}. After each round we shall show either that \mc{T_1} and \mc{T'_g} are complete, with \mc{T_1} Fixer-superior to \mc{T'_g}, or that \mc{T_1} and \mc{T'_g} still satisfy the conditions of the current scenario, or that \mc{T_1} and \mc{T'_g} have advanced to a new scenario.
\subsubsection{Scenario where Buster has not used $s$ in \mc{T_1} and Fixer has not used $s$ in \mc{T'_g}}
\label{scenario1}
This scenario involves \mc{T_1} and \mc{T'_g} each starting the $k$th round with the following properties:
\begin{enumerate}
\item $s\in\es{G}{T_1}{k}$, $s'\in\es{G}{T'_g}{k}$, and $\es{G}{T_1}{k}-\{s\}=\es{G}{T'_g}{k}-\{s'\}$ (i.e. the only difference between graphs is $s$ in \es{G}{T_1}{k} being replaced by $s'$ in \es{G}{T'_g}{k})
\item $s'\in\es{R}{T_1}{k}$, $s\in\es{R}{T'_g}{k}$, and $\es{R}{T_1}{k}-\{s'\}=\es{R}{T'_g}{k}-\{s\}$ (i.e. the only difference between reserve sets is $s'$ in \es{R}{T_1}{k} being replaced by $s$ in \es{R}{T'_g}{k})
\item $s$ and $s'$ are bridges in \es{G}{T_1}{k} and \es{G}{T'_g}{k}, respectively, between the same subgraphs $X_k$ and $Y_k$, but perhaps in different spots (i.e. removing both edges from their respective graphs leaves the same graphs, each with two components); see Figure \ref{figsGT1GTgXkYk}
\item for every $r\in\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ such that $r$ joins $X_k$ to $Y_k$, $w(r)\geq w(s)$ (i.e. in either series, no reserve edge going between subgraphs $X_k$ and $Y_k$ can be cheaper than $s$)
\end{enumerate}
\begin{figure}[htb]
\centering
\subcaptionbox{\es{G}{T_1}{k} \label{figGT1XkYk}}[9cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=0,size=1.5,label=$X_k$,fontsize=\large]{xk}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s$,bend=45,position={below=.5mm},fontsize=\large](xk)(yk)
\end{tikzpicture}
}
\subcaptionbox{\es{G}{T'_g}{k} \label{figGTgXkYk}}[9cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=0,size=1.5,label=$X_k$,fontsize=\large]{xk}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s'$,bend=-45,position={above=.5mm},fontsize=\large](xk)(yk)
\end{tikzpicture}
}
\caption{Graphs \es{G}{T_1}{k} and \es{G}{T'_g}{k} from Scenario \ref{scenario1}.}\label{figsGT1GTgXkYk}
\end{figure}
We divide our analysis of this scenario in the following way. Proposition \ref{scenario1prop1} deals with the case that Fixer wins \mc{T_1} in the $(k-1)$st round (i.e. Buster decides to quit before the $k$th round of \mc{T_1}). Propositions \ref{scenario1prop2} and \ref{scenario1prop3} deal with the case that Buster wins \mc{T_1} in the $k$th round, each dealing with a subcase of whether $s\in\es{B}{T_1}{k}$. The remaining propositions in our analysis of this scenario deal with the remaining case that Buster makes a move in the $k$th round, and Fixer is able to reconnect the graph in response. Proposition \ref{scenario1prop4} deals with the subcase where $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ is connected, while Proposition \ref{scenario1prop5} deals with the subcase that $\es{B}{T_1}{k}=\{s\}$ (which would result in $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ being disconnected, since $s$ is a bridge in \es{G}{T_1}{k}). Propositions \ref{scenario1prop6}, \ref{scenario1prop7}, and \ref{scenario1prop8} deal with the remaining subcases in a manner described later on.
\begin{prop}\label{scenario1prop1}
If Fixer wins \mc{T_1} in the $(k-1)$st round, then Buster can quit after the $(k-1)$st round of \mc{T'_g}, resulting in Fixer winning \mc{T'_g} in the $(k-1)$st round, and \mc{T_1} being Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
We have
\begin{enumerate}
\item Fixer wins \mc{T_1}
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k}|-|\es{R}{T_1}{k}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k}|-|\es{R}{T'_g}{k}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k}) = w(\es{R}{T'_g}{1})-(w(\es{R}{T'_g}{k})-w(s)+w(s')) \leq w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k}) = \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
and thus \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{proof}
\begin{lem}\label{endgamelem}
If Buster wins \mc{T_1} in the $k$th round, and there exists $\ell$ such that \mc{T'_g} satisfies either $\bigcup_{j=k}^{\ell}\es{B}{T'_g}{j}\subseteq\es{B}{T_1}{k}$ with $(\es{G}{T'_g}{\ell}-\es{B}{T'_g}{\ell})\cup\es{R}{T'_g}{\ell}$ disconnected, or $\bigcup_{j=k}^{\ell}\es{B}{T'_g}{j}=\es{B}{T_1}{k}$, then Buster wins \mc{T'_g} in the $\ell$th round and \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{lem}
\begin{proof}
To begin, note that $\es{G}{T_1}{k}\cup\es{R}{T_1}{k}=(\es{G}{T'_g}{k}-\{s'\})\cup\{s\}\cup(\es{R}{T'_g}{k}-\{s\})\cup\{s'\}=\es{G}{T'_g}{k}\cup\es{R}{T'_g}{k}$. If $(\es{G}{T'_g}{\ell}-\es{B}{T'_g}{\ell})\cup\es{R}{T'_g}{\ell}$ is disconnected, then Buster wins \mc{T'_g} in the $\ell$th round by definition, so to show Buster wins \mc{T'_g} in the $\ell$th round in the case $\bigcup_{j=k}^{\ell}\es{B}{T'_g}{j}=\es{B}{T_1}{k}$ we show $(\es{G}{T'_g}{\ell}-\es{B}{T'_g}{\ell})\cup\es{R}{T'_g}{\ell}$ is a spanning subgraph of a disconnected graph and thus disconnected itself. Indeed
\begin{align*}
(\es{G}{T'_g}{\ell}-\es{B}{T'_g}{\ell})\cup\es{R}{T'_g}{\ell}&=((\es{G}{T'_g}{k}\cup(\bigcup_{j=k}^{\ell-1}\es{F}{T'_g}{j}))-\bigcup_{j=k}^{\ell}\es{B}{T'_g}{j})\cup(\es{R}{T'_g}{k}-\bigcup_{j=k}^{\ell-1}\es{F}{T'_g}{j}) \\
&=(\es{G}{T'_g}{k}-\es{B}{T_1}{k})\cup(\es{R}{T'_g}{k}-\es{B}{T_1}{k}) \\
&=(\es{G}{T'_g}{k}\cup\es{R}{T'_g}{k})-\es{B}{T_1}{k} \\
&=(\es{G}{T_1}{k}\cup\es{R}{T_1}{k})-\es{B}{T_1}{k} \\
&=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}
\end{align*}
which is disconnected since Buster wins $\mc{T_1}$ in the $k$th round. Furthermore, noting that the convention $\es{F}{T_1}{k}=\emptyset$ implies $\es{R}{T_1}{k+1}=\es{R}{T_1}{k}$, we have
\begin{enumerate}
\item Buster wins \mc{T'_g} in the $\ell$th round because $(\es{G}{T'_g}{\ell}-\es{B}{T'_g}{\ell})\cup\es{R}{T'_g}{\ell}$ is disconnected
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k}|-|\es{R}{T_1}{k}|+|\es{B}{T_1}{k}| \geq |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k}|-|\es{R}{T'_g}{k}|+\sum_{j=k}^{\ell}|\es{B}{T'_g}{j}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k}) = w(\es{R}{T'_g}{1})-(w(\es{R}{T'_g}{k})-w(s)+w(s')) \leq w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k}) = \sum _{j=1}^{k-1}w(\es{F}{T'_g}{j}) \leq \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
so \mc{T_1} is Fixer-superior to \mc{T'_g}, as desired.
\end{proof}
\begin{prop}\label{scenario1prop2}
If Buster wins \mc{T_1} in the $k$th round and $s\notin\es{B}{T_1}{k}$, then Buster can play $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ in \mc{T'_g}, resulting in Buster winning \mc{T'_g} in the $k$th round and \mc{T_1} Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
Buster can play $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ in \mc{T'_g} because $\es{B}{T_1}{k}\subseteq\es{G}{T_1}{k}-\{s\}\subseteq\es{G}{T'_g}{k}$, so Buster wins \mc{T'_g} in the $k$th round and \mc{T_1} is Fixer-superior to \mc{T'_g} by Lemma \ref{endgamelem}.
\end{proof}
\begin{prop}\label{scenario1prop3}
If Buster wins \mc{T_1} in the $k$th round and $s\in\es{B}{T_1}{k}$, then in \mc{T'_g} Buster can play $\es{B}{T'_g}{k}=\es{B}{T_1}{k}-\{s\}$, as well as $\es{B}{T'_g}{k+1}=\{s\}$ following any Fixer response if $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}$ is connected, resulting in Buster winning \mc{T'_g} in the $k$th or $(k+1)$st round and \mc{T_1} Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
Note that $\es{B}{T_1}{k}\neq\{s\}$, since otherwise setting $\es{F}{T_1}{k}=\{s'\}$ would contradict Buster winning \mc{T_1} in the $k$th round since $\es{G}{T_1}{k+1}=(\es{G}{T_1}{k}-\{s\})\cup\{s'\}=\es{G}{T'_g}{k}$, which is connected. Hence $\emptyset\neq\es{B}{T_1}{k}-\{s\}\subseteq\es{G}{T_1}{k}-\{s\}\subseteq\es{G}{T'_g}{k}$, so Buster can play $\es{B}{T'_g}{k}=\es{B}{T_1}{k}-\{s\}$.
If $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}$ is disconnected, then Buster wins \mc{T'_g} in the $k$th round and \mc{T_1} is Fixer-superior to \mc{T'_g} by Lemma \ref{endgamelem}.
If $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}$ is connected, then Fixer will respond with some greedy \es{F}{T'_g}{k} to create a connected graph \es{G}{T'_g}{k+1}. Note that $s\in\es{F}{T'_g}{k}$, since otherwise $\es{G}{T'_g}{k+1}=(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{F}{T'_g}{k}\subseteq(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}$, meaning $\es{G}{T'_g}{k+1}$ is a spanning subgraph of a disconnected graph and thus disconnected itself, a contradiction. Hence Buster can play $\es{B}{T'_g}{k+1}=\{s\}$, so $\es{B}{T'_g}{k}\cup\es{B}{T'_g}{k+1}=\es{B}{T_1}{k}$, so Buster wins \mc{T'_g} in the $(k+1)$st round and \mc{T_1} is Fixer-superior to \mc{T'_g} by Lemma \ref{endgamelem}.
\end{proof}
\begin{prop}\label{scenario1prop4}
If Buster plays in the $k$th round of \mc{T_1} and $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ is connected, then Buster can copy his move from \mc{T_1} to \mc{T'_g} by playing $\es{B}{T'_g}{k}=\es{B}{T_1}{k}=\{b\}$, and Fixer can respond greedily in both \mc{T'_g} and \mc{T_1} with $\es{F}{T'_g}{k}=\es{F}{T_1}{k}=\emptyset$ to maintain the conditions of this scenario.
\end{prop}
\begin{proof}
Note that $b\neq s$, since $s$ is a bridge and $\es{G}{T_1}{k}-\{b\}$ is connected. Hence Buster can play $\es{B}{T'_g}{k}=\{b\}\subseteq\es{G}{T_1}{k}-\{s\}\subset\es{G}{T'_g}{k}$. Both $X_k-\{b\}$ and $Y_k-\{b\}$ are connected (since $\es{G}{T_1}{k}-\{b\}$ is connected and the only edge in \es{G}{T_1}{k} with one endpoint in $X_k$ and the other in $Y_k$ is the bridge $s$, the edge $b$ lies entirely within $X_k$ or $Y_k$ and disconnects neither), and thus $\es{G}{T'_g}{k}-\es{B}{T'_g}{k}$ is connected, as the only edge in \es{G}{T'_g}{k} with one endpoint in $X_k$ and the other in $Y_k$ is $s'$, which is a bridge such that $b\neq s'$ (since $b\in\es{G}{T_1}{k}$ but $s'\in\es{R}{T_1}{k}$, so $s'\notin\es{G}{T_1}{k}$). Hence the only greedy response for Fixer is $\es{F}{T'_g}{k}=\emptyset$, which Fixer can copy in \mc{T_1} with $\es{F}{T_1}{k}=\emptyset$. Setting $X_{k+1}=X_k-\{b\}$ and $Y_{k+1}=Y_k-\{b\}$, we have
\begin{enumerate}
\item $s\in\es{G}{T_1}{k}-\{b\}=\es{G}{T_1}{k+1}$, $s'\in\es{G}{T'_g}{k}-\{b\}=\es{G}{T'_g}{k+1}$, and $\es{G}{T_1}{k+1}-\{s\}=\es{G}{T_1}{k}-\{b,s\}=\es{G}{T'_g}{k}-\{b,s'\}=\es{G}{T'_g}{k+1}-\{s'\}$
\item $s'\in\es{R}{T_1}{k}=\es{R}{T_1}{k+1}$, $s\in\es{R}{T'_g}{k}=\es{R}{T'_g}{k+1}$, and $\es{R}{T_1}{k+1}-\{s'\}=\es{R}{T_1}{k}-\{s'\}=\es{R}{T'_g}{k}-\{s\}=\es{R}{T'_g}{k+1}-\{s\}$
\item $s$ and $s'$ are bridges in \es{G}{T_1}{k+1} and \es{G}{T'_g}{k+1}, respectively, between $X_{k+1}$ and $Y_{k+1}$
\item for every $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}$ such that $r$ joins $X_{k+1}$ to $Y_{k+1}$, $w(r)\geq w(s)$, since $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}=\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ and $r$ would have also joined $X_k$ to $Y_k$ (because $X_k$ and $X_{k+1}$ share the same set of vertices, as do $Y_k$ and $Y_{k+1}$)
\end{enumerate}
and thus the conditions of this scenario are maintained.
\end{proof}
\begin{prop}\label{scenario1prop5}
If $\es{B}{T_1}{k}=\{s\}$, then Fixer can play $\es{F}{T_1}{k}=\{s'\}$ to advance to Scenario \ref{scenario2}.
\end{prop}
\begin{proof}
Letting \mc{T'_g} fall a round behind \mc{T_1} and setting $X_{k+1}=X_k$ and $Y_{k+1}=Y_k$, we have
\begin{enumerate}
\item $s'\in\es{G}{T_1}{k+1}=(\es{G}{T_1}{k}-\{s\})\cup\{s'\}=\es{G}{T'_g}{k}$
\item $\es{R}{T_1}{k+1}=\es{R}{T_1}{k}-\{s'\}=\es{R}{T'_g}{k}-\{ s\}$ and $s\in\es{R}{T'_g}{k}$
\item $s'$ and $s$ are bridges in \es{G}{T_1}{k+1} and $(\es{G}{T'_g}{k}-\{s'\})\cup\{s\}$, respectively, between $X_{k+1}$ and $Y_{k+1}$
\item for every $r\in\es{R}{T'_g}{k}$ such that $r$ joins $X_{k+1}$ to $Y_{k+1}$, $w(r)\geq w(s)$, since $r\in\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ and $r$ would have also joined $X_k$ to $Y_k$
\end{enumerate}
and thus the conditions of Scenario \ref{scenario2} are satisfied.
\end{proof}
Now suppose Fixer doesn't win \mc{T_1} in the $(k-1)$st round, Buster doesn't win \mc{T_1} in the $k$th round, $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ is disconnected, and $\es{B}{T_1}{k}=\{b\}\neq\{ s\}$. Since $s$ bridges $X_k$ and $Y_k$ in \es{G}{T_1}{k}, either $b\in X_k$ and $X_k-\{b\}$ is disconnected with two components, or $b\in Y_k$ and $Y_k-\{b\}$ is disconnected with two components. Without loss of generality, assume $b\in X_k$ and $X_k-\{b\}$ is disconnected with two components $X_k^1$ and $X_k^2$, with $s$ bridging $X_k^1$ and $Y_k$; see Figure \ref{figGT1Xk1Xk2Yk}. Note that Buster can copy his move from \mc{T_1} in \mc{T'_g} by playing $\es{B}{T'_g}{k}=\es{B}{T_1}{k}=\{b\}$, since $b\in\es{G}{T_1}{k}-\{s\}\subset\es{G}{T'_g}{k}$; the two possibilities for \es{G}{T'_g}{k} are shown in Figures \ref{figGTgXk1Yk} and \ref{figGTgXk2Yk}. Furthermore, if $s\notin\es{F}{T'_g}{k}$ and $(\es{G}{T_1}{k}-\es{B}{T'_g}{k})\cup\es{F}{T'_g}{k}$ is connected, then Fixer can copy her move from \mc{T'_g} in \mc{T_1} by playing $\es{F}{T_1}{k}=\es{F}{T'_g}{k}$ since $\es{F}{T'_g}{k}\subseteq\es{R}{T'_g}{k}-\{s\}\subseteq\es{R}{T_1}{k}$.
\begin{figure}[htb]
\centering
\subcaptionbox{\es{G}{T_1}{k}\label{figGT1Xk1Xk2Yk}}[6cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=1,size=1,label=$X^1_k$,fontsize=\large]{x1k}
\Vertex[x=0,y=-1,size=1,label=$X^2_k$,fontsize=\large]{x2k}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s$,bend=45,position={below=.5mm},fontsize=\large](x1k)(yk)
\Edge[label=$b$,position={left},fontsize=\large,style={dashed}](x1k)(x2k)
\end{tikzpicture}
}
\subcaptionbox{One \es{G}{T'_g}{k} possibility\label{figGTgXk1Yk}}[6cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=1,size=1,label=$X^1_k$,fontsize=\large]{x1k}
\Vertex[x=0,y=-1,size=1,label=$X^2_k$,fontsize=\large]{x2k}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s'$,bend=-45,position={above=.5mm},fontsize=\large](x1k)(yk)
\Edge[label=$b$,position={left},fontsize=\large,style={dashed}](x1k)(x2k)
\end{tikzpicture}
}
\subcaptionbox{The other \es{G}{T'_g}{k} possibility\label{figGTgXk2Yk}}[6cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=1,size=1,label=$X^1_k$,fontsize=\large]{x1k}
\Vertex[x=0,y=-1,size=1,label=$X^2_k$,fontsize=\large]{x2k}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s'$,bend=-45,position={above=.5mm},fontsize=\large](x2k)(yk)
\Edge[label=$b$,position={left},fontsize=\large,style={dashed}](x1k)(x2k)
\end{tikzpicture}
}
\caption{\es{G}{T_1}{k} and the two possibilities for \es{G}{T'_g}{k}.}\label{figsThreePossibilities}
\end{figure}
We separate into the following cases that together comprise every remaining possibility. Proposition \ref{scenario1prop6} deals with the case that $s'$ bridges $X_k^1$ and $Y_k$ in \es{G}{T'_g}{k} (see Figure \ref{figGTgXk1YkFixed}). Propositions \ref{scenario1prop7} and \ref{scenario1prop8} deal with the case that $s'$ bridges $X_k^2$ and $Y_k$ in \es{G}{T'_g}{k}, with the former dealing with the subcase that the cheapest connecting edge $f$ in \es{R}{T'_g}{k} is between $X_k^1$ and $X_k^2$ (see Figure \ref{figGTgXk1Xk2Fixed}) and the latter dealing with the subcase that the cheapest connecting edge $f$ in \es{R}{T'_g}{k} is between $X_k^1$ and $Y_k$ (see Figure \ref{figGTgXk2YkXk1YkFixed}).
\begin{figure}[htb]
\centering
\subcaptionbox{$s'$ bridges $X_k^1$ and $Y_k$\label{figGTgXk1YkFixed}}[6cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=1,size=1,label=$X^1_k$,fontsize=\large]{x1k}
\Vertex[x=0,y=-1,size=1,label=$X^2_k$,fontsize=\large]{x2k}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s'$,bend=-45,position={above=.5mm},fontsize=\large](x1k)(yk)
\end{tikzpicture}
}
\subcaptionbox{$s'$ bridges $X_k^2$ and $Y_k$, and cheapest connecting edge $f\in\es{R}{T'_g}{k}$ joins $X_k^1$ to $X_k^2$\label{figGTgXk1Xk2Fixed}}[6cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=1,size=1,label=$X^1_k$,fontsize=\large]{x1k}
\Vertex[x=0,y=-1,size=1,label=$X^2_k$,fontsize=\large]{x2k}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s'$,bend=-45,position={above=.5mm},fontsize=\large](x2k)(yk)
\Edge[label=$f$,position={left},fontsize=\large,style={loosely dashed}](x1k)(x2k)
\end{tikzpicture}
}
\subcaptionbox{$s'$ bridges $X_k^2$ and $Y_k$, and cheapest connecting edge $f\in\es{R}{T'_g}{k}$ joins $X_k^1$ to $Y_k$\label{figGTgXk2YkXk1YkFixed}}[6cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=1,size=1,label=$X^1_k$,fontsize=\large]{x1k}
\Vertex[x=0,y=-1,size=1,label=$X^2_k$,fontsize=\large]{x2k}
\Vertex[x=2,y=0,size=1.5,label=$Y_k$,fontsize=\large]{yk}
\Edge[label=$s'$,bend=-45,position={above=.5mm},fontsize=\large](x2k)(yk)
\Edge[label=$f$,position={above},fontsize=\large,style={loosely dashed}](x1k)(yk)
\end{tikzpicture}
}
\caption{The three remaining possibilities for $\es{G}{T'_g}{k}-\es{B}{T'_g}{k}$, where in the latter two $f$ is the cheapest edge in \es{R}{T'_g}{k} such that $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\{f\}$ is connected.}\label{figsThreePossibilitiesFixed}
\end{figure}
\begin{prop}\label{scenario1prop6}
If $s'$ bridges $X_k^1$ and $Y_k$ in \es{G}{T'_g}{k}, then for $\es{B}{T'_g}{k}=\es{B}{T_1}{k}=\{b\}$ Fixer can copy her greedy response from \mc{T'_g} in \mc{T_1} by playing $\es{F}{T_1}{k}=\es{F}{T'_g}{k}=\{f\}$ in order to maintain the conditions of this scenario.
\end{prop}
\begin{proof}
If $f$ bridges $X_k^1$ and $X_k^2$, set $X_{k+1}=(X_k-\{b\})\cup\{f\}$ and $Y_{k+1}=Y_k$. Then $s$ and $s'$ are bridges in \es{G}{T_1}{k+1} and \es{G}{T'_g}{k+1}, respectively, between $X_{k+1}$ and $Y_{k+1}$, as $s$ and $s'$ each bridged $X_k^1$ and $Y_k$, and the only new edge $f$ resides entirely inside $X_{k+1}$. For every $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}$ such that $r$ joins $X_{k+1}$ to $Y_{k+1}$, $w(r)\geq w(s)$, since $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}\subset\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ and $r$ would have also joined $X_k$ and $Y_k$ (because $X_k$ and $X_{k+1}$ share the same set of vertices, as do $Y_k$ and $Y_{k+1}$).
If $f$ bridges $X_k^2$ and $Y_k$, set $X_{k+1}=X_k^1$ and $Y_{k+1}=X_k^2\cup\{f\}\cup Y_k$. Then $s$ and $s'$ are bridges in \es{G}{T_1}{k+1} and \es{G}{T'_g}{k+1}, respectively, between $X_{k+1}$ and $Y_{k+1}$, as $s$ and $s'$ each bridged $X_k^1$ and $Y_k$, and the only new edge $f$ resides entirely inside $Y_{k+1}$. For every $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}$ such that $r$ joins $X_{k+1}$ to $Y_{k+1}$, $w(r)\geq w(s)$, since $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}\subset\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ and either $r$ joined $X_k^1$ to $Y_k$ (so $w(r)\geq w(s)$ by hypothesis of this scenario), or $r$ joined $X_k^1$ to $X_k^2$ (so $w(r)\geq w(f)$ because $\es{F}{T'_g}{k}=\{f\}$ was greedy, and $w(f)\geq w(s)$ by hypothesis of this scenario, since $f\in\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ and $f$ joined $X_k$ to $Y_k$).
Thus
\begin{enumerate}
\item $s\in\es{G}{T_1}{k}-\{b\}\subset\es{G}{T_1}{k+1}$, $s'\in\es{G}{T'_g}{k}-\{b\}\subset\es{G}{T'_g}{k+1}$, and $\es{G}{T_1}{k+1}-\{s\} = (\es{G}{T_1}{k}-\{b,s\})\cup\{f\} = (\es{G}{T'_g}{k}-\{b,s'\})\cup\{f\} = \es{G}{T'_g}{k+1}-\{s'\}$
\item $s'\in\es{R}{T_1}{k}-\{f\}=\es{R}{T_1}{k+1}$, $s\in\es{R}{T'_g}{k}-\{f\}=\es{R}{T'_g}{k+1}$, and $\es{R}{T_1}{k+1}-\{s'\} = \es{R}{T_1}{k}-\{f,s'\} = \es{R}{T'_g}{k}-\{f,s\} = \es{R}{T'_g}{k+1}-\{s\}$
\item $s$ and $s'$ are bridges in \es{G}{T_1}{k+1} and \es{G}{T'_g}{k+1}, respectively, between $X_{k+1}$ and $Y_{k+1}$
\item for every $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}$ such that $r$ joins $X_{k+1}$ to $Y_{k+1}$, $w(r)\geq w(s)$
\end{enumerate}
so the conditions of this scenario are maintained.
\end{proof}
\begin{prop}\label{scenario1prop7}
If $s'$ bridges $X_k^2$ and $Y_k$ in \es{G}{T'_g}{k}, and the cheapest edge $f\in\es{R}{T'_g}{k}$ such that $(\es{G}{T'_g}{k}-\{b\})\cup\{f\}$ is connected has one endpoint in $X_k^1$ and the other in $X_k^2$, then for $\es{B}{T'_g}{k}=\es{B}{T_1}{k}=\{b\}$ Fixer can copy her greedy response from \mc{T'_g} in \mc{T_1} by playing $\es{F}{T_1}{k}=\es{F}{T'_g}{k}=\{f\}$ in order to maintain the conditions of this scenario.
\end{prop}
\begin{proof}
By hypothesis of this proposition, Fixer can greedily play $\es{F}{T'_g}{k}=\{f\}$ in \mc{T'_g} to create a connected graph \es{G}{T'_g}{k+1}. Note that $f\in\es{R}{T_1}{k}$, since $f\in\es{R}{T'_g}{k}$ and $\es{R}{T'_g}{k}-\es{R}{T_1}{k}=\{s\}\neq\{f\}$, as $f$ has both endpoints in $X_k$ whereas $s$ has one in $Y_k$. Furthermore, $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ consists of two components, one being $X_k^2$ and the other consisting of $X_k^1$ and $Y_k$ bridged by $s$, so Fixer can also play $\es{F}{T_1}{k}=\{f\}$ in \mc{T_1} to create a connected graph \es{G}{T_1}{k+1}. Set $X_{k+1}=(X_k-\{b\})\cup\{f\}$ and $Y_{k+1}=Y_k$, so
\begin{enumerate}
\item $s\in\es{G}{T_1}{k}-\{b\}\subset\es{G}{T_1}{k+1}$, $s'\in\es{G}{T'_g}{k}-\{b\}\subset\es{G}{T'_g}{k+1}$, and $\es{G}{T_1}{k+1}-\{s\} = (\es{G}{T_1}{k}-\{b,s\})\cup\{f\} = (\es{G}{T'_g}{k}-\{b,s'\})\cup\{f\} = \es{G}{T'_g}{k+1}-\{s'\}$
\item $s'\in\es{R}{T_1}{k}-\{f\}=\es{R}{T_1}{k+1}$, $s\in\es{R}{T'_g}{k}-\{f\}=\es{R}{T'_g}{k+1}$, and $\es{R}{T_1}{k+1}-\{s'\} = \es{R}{T_1}{k}-\{f,s'\} = \es{R}{T'_g}{k}-\{f,s\} = \es{R}{T'_g}{k+1}-\{s\}$
\item $s$ and $s'$ are bridges in \es{G}{T_1}{k+1} and \es{G}{T'_g}{k+1}, respectively, between $X_{k+1}$ and $Y_{k+1}$
\item for every $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}$ such that $r$ joins $X_{k+1}$ to $Y_{k+1}$, $w(r)\geq w(s)$, since $r\in\es{R}{T_1}{k+1}\cup\es{R}{T'_g}{k+1}\subset\es{R}{T_1}{k}\cup\es{R}{T'_g}{k}$ and $r$ would have also joined $X_k$ to $Y_k$ (because $X_k$ and $X_{k+1}$ share the same set of vertices, as do $Y_k$ and $Y_{k+1}$)
\end{enumerate}
so the conditions of this scenario are maintained.
\end{proof}
\begin{prop}\label{scenario1prop8}
If $s'$ bridges $X_k^2$ and $Y_k$ in $\es{G}{T'_g}{k}$, and the cheapest edge $f\in\es{R}{T'_g}{k}$ such that $(\es{G}{T'_g}{k}-\{b\})\cup\{f\}$ is connected has one endpoint in $X_k^1$ and the other in $Y_k$, then for $\es{B}{T'_g}{k}=\es{B}{T_1}{k}=\{b\}$ Fixer can greedily play $\es{F}{T'_g}{k}=\{s\}$ in \mc{T'_g}, as well as play $\es{F}{T_1}{k}=\{ s'\}$ in \mc{T_1}, to advance to Scenario \ref{scenario3}.
\end{prop}
\begin{proof}
Since the cheapest connecting reserve edge in \mc{T'_g} is between $X_k^1$ and $Y_k$, $s$ is such an edge by hypothesis of this scenario, so Fixer can greedily play $\es{F}{T'_g}{k}=\{s\}$ in \mc{T'_g}. Since $\es{G}{T_1}{k}-\es{B}{T_1}{k}$ consists of two components, one being $X_k^2$ and the other consisting of $X_k^1$ and $Y_k$ bridged by $s$, and $s'\in\es{R}{T_1}{k}$ bridges $X_k^2$ and $Y_k$, Fixer can play $\es{F}{T_1}{k}=\{s'\}$ in \mc{T_1} to leave \mc{T_1} and \mc{T'_g} so that
\begin{enumerate}
\item $\es{G}{T_1}{k+1}=(\es{G}{T_1}{k}-\{b\})\cup\{s'\}=(\es{G}{T'_g}{k}-\{b\})\cup\{s\}=\es{G}{T'_g}{k+1}$
\item $\es{R}{T_1}{k+1}=\es{R}{T_1}{k}-\{s'\}=\es{R}{T'_g}{k}-\{s\}=\es{R}{T'_g}{k+1}$
\end{enumerate}
which are the conditions of Scenario \ref{scenario3}.
\end{proof}
\subsubsection{Scenario where Buster has used $s$ in \mc{T_1} but Fixer has not used $s$ in \mc{T'_g}, which is a round behind \mc{T_1}}
\label{scenario2}
This scenario involves \mc{T_1} starting the $k$th round and \mc{T'_g} starting the $(k-1)$st round with the following properties:
\begin{enumerate}
\item $s'\in\es{G}{T_1}{k}=\es{G}{T'_g}{k-1}$ (i.e. the graphs are identical and contain $s'$)
\item $\es{R}{T_1}{k}=\es{R}{T'_g}{k-1}-\{ s\}$ and $s\in\es{R}{T'_g}{k-1}$ (i.e. the only difference between reserve sets is $s$ being in \es{R}{T'_g}{k-1} but not in \es{R}{T_1}{k})
\item $s'$ and $s$ are bridges in \es{G}{T_1}{k} and $(\es{G}{T'_g}{k-1}-\{s'\})\cup\{s\}$, respectively, between the same connected subgraphs $X_k$ and $Y_k$, but perhaps in different spots (i.e. removing both edges from their respective graphs leaves the same graphs, each with two components)
\item for every $r\in\es{R}{T'_g}{k-1}$ such that $r$ bridges $X_k$ and $Y_k$, $w(r)\geq w(s)$ (i.e. in either series, no reserve edge bridging subgraphs $X_k$ and $Y_k$ can be cheaper than $s$)
\end{enumerate}
Note that in this scenario Buster can always copy his move from \mc{T_1} with $\es{B}{T'_g}{k-1}=\es{B}{T_1}{k}$ since $\es{B}{T_1}{k}\subseteq\es{G}{T_1}{k}=\es{G}{T'_g}{k-1}$. Furthermore, if Buster doesn't win \mc{T_1} in the $k$th round and $\es{B}{T'_g}{k-1}=\es{B}{T_1}{k}$, then Buster doesn't win \mc{T'_g} in the $(k-1)$st round, since if $(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}$ is connected, then so must be $(\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup\es{R}{T'_g}{k-1}$ because $\es{G}{T_1}{k}=\es{G}{T'_g}{k-1}$, $\es{B}{T_1}{k}=\es{B}{T'_g}{k-1}$, and $\es{R}{T_1}{k}\subseteq\es{R}{T'_g}{k-1}$.
We divide our analysis of this scenario in the following way. Proposition \ref{scenario2prop1} deals with the case that Fixer wins \mc{T_1} in the $(k-1)$st round (i.e. Buster decides to quit before the $k$th round of \mc{T_1}). Propositions \ref{scenario2prop2} and \ref{scenario2prop3} deal with the case that Buster wins \mc{T_1} in the $k$th round, each dealing with a subcase of whether $(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}\cup\{s\}$ is connected. Propositions \ref{scenario2prop4} and \ref{scenario2prop5} deal with the remaining case that Buster makes a move in the $k$th round, and Fixer is able to reconnect the graph in response; each deals with a subcase of whether Fixer can respond greedily in \mc{T'_g} to $\es{B}{T'_g}{k-1}=\es{B}{T_1}{k}$ with \es{F}{T'_g}{k-1} containing $s$.
\begin{prop}\label{scenario2prop1}
Suppose Fixer wins \mc{T_1} in the $(k-1)$st round. Then for $\es{B}{T'_g}{k-1}=\{s'\}$ Fixer can play $\es{F}{T'_g}{k-1}=\{s\}$ as a greedy response in \mc{T'_g}, and if Buster subsequently quits then Fixer wins \mc{T'_g} in the $(k-1)$st round and \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
Since $\es{G}{T'_g}{k-1}-\{s'\}$ is the graph consisting of the components $X_k$ and $Y_k$, and $s\in\es{R}{T'_g}{k-1}$ is a bridge between $X_k$ and $Y_k$, $\es{F}{T'_g}{k-1}=\{s\}$ is a valid move by Fixer in \mc{T'_g}. Furthermore, $\es{F}{T'_g}{k-1}=\{s\}$ is a greedy move because for every $r\in\es{R}{T'_g}{k-1}$ such that $r$ bridges $X_k$ and $Y_k$, $w(r)\geq w(s)$. Note that this leaves $s'\in\es{G}{T_1}{k}$, $s\in\es{G}{T'_g}{k}$, and $\es{G}{T'_g}{k}-\{s\}=\es{G}{T'_g}{k-1}-\{s'\}=\es{G}{T_1}{k}-\{s'\}$ (i.e. the only difference between graphs is $s'$ in \es{G}{T_1}{k} being replaced by $s$ in \es{G}{T'_g}{k}) as well as $\es{R}{T'_g}{k}=\es{R}{T'_g}{k-1}-\{s\}=\es{R}{T_1}{k}$. Hence
\begin{enumerate}
\item Fixer wins \mc{T_1}
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k}|-|\es{R}{T_1}{k}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k}|-|\es{R}{T'_g}{k}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k}) = w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k}) = \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
so \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{proof}
\begin{prop}\label{scenario2prop2}
Suppose Buster wins \mc{T_1} in the $k$th round and $(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}\cup\{s\}$ is disconnected. Then for $\es{B}{T'_g}{k-1}=\{s'\}$ Fixer can play $\es{F}{T'_g}{k-1}=\{s\}$ as a greedy response in \mc{T'_g}. Furthermore, Buster playing $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ in \mc{T'_g} if $s'\notin\es{B}{T_1}{k}$, or Buster playing $\es{B}{T'_g}{k}=(\es{B}{T_1}{k}-\{s'\})\cup\{s\}$ in \mc{T'_g} if $s'\in\es{B}{T_1}{k}$, both result in Buster winning \mc{T'_g} in the $k$th round and \mc{T_1} Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
Since $\es{G}{T'_g}{k-1}-\{s'\}$ is the graph consisting of the components $X_k$ and $Y_k$, and $s\in\es{R}{T'_g}{k-1}$ is a bridge between $X_k$ and $Y_k$, $\es{F}{T'_g}{k-1}=\{s\}$ is a valid move by Fixer in \mc{T'_g}. Furthermore, $\es{F}{T'_g}{k-1}=\{s\}$ is a greedy move because for every $r\in\es{R}{T'_g}{k-1}$ such that $r$ bridges $X_k$ and $Y_k$, $w(r)\geq w(s)$. Note that this leaves $s'\in\es{G}{T_1}{k}$, $s\in\es{G}{T'_g}{k}$, and $\es{G}{T'_g}{k}-\{s\}=\es{G}{T'_g}{k-1}-\{s'\}=\es{G}{T_1}{k}-\{s'\}$ (i.e. the only difference between graphs is $s'$ in \es{G}{T_1}{k} being replaced by $s$ in \es{G}{T'_g}{k}) as well as $\es{R}{T'_g}{k}=\es{R}{T'_g}{k-1}-\{s\}=\es{R}{T_1}{k}$.
Let Buster play $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ in \mc{T'_g} if $s'\notin\es{B}{T_1}{k}$, or play $\es{B}{T'_g}{k}=(\es{B}{T_1}{k}-\{s'\})\cup\{s\}$ in \mc{T'_g} if $s'\in\es{B}{T_1}{k}$. We first show in either case that Buster wins \mc{T'_g} in the $k$th round by showing that $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}$ is a spanning subgraph of $(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}\cup\{s\}$ and thus disconnected as well. If $s'\notin\es{B}{T_1}{k}$, then $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ and
\begin{align*}
(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}&=(((\es{G}{T_1}{k}-\{s'\})\cup\{s\})-\es{B}{T_1}{k})\cup\es{R}{T_1}{k} \\
&\subseteq(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}\cup\{s\}
\end{align*}
since $s\notin\es{B}{T_1}{k}$, as $s\notin\es{G}{T_1}{k}$ and $\es{B}{T_1}{k}\subseteq\es{G}{T_1}{k}$. If $s'\in\es{B}{T_1}{k}$, then $\es{B}{T'_g}{k}=(\es{B}{T_1}{k}-\{s'\})\cup\{s\}$ and
\begin{align*}
(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k} &= (((\es{G}{T_1}{k}-\{s'\})\cup\{s\})-((\es{B}{T_1}{k}-\{s'\})\cup\{s\}))\cup\es{R}{T_1}{k} \\
&= (\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}
\end{align*}
since $s'\in\es{G}{T_1}{k}\cap\es{B}{T_1}{k}$ and $s\notin\es{G}{T_1}{k}\cup\es{B}{T_1}{k}$.
In either case of final Buster moves, noting that the convention of $\es{F}{T'_g}{k}=\es{F}{T_1}{k}=\emptyset$ implies $|\es{G}{T_1}{k+1}| = |\es{G}{T_1}{k}|-|\es{B}{T_1}{k}| = |\es{G}{T'_g}{k-1}|-|\es{B}{T'_g}{k}| = |(\es{G}{T'_g}{k-1}-\{s'\})\cup\{s\}|-|\es{B}{T'_g}{k}| = |\es{G}{T'_g}{k+1}|$ as well as $\es{R}{T_1}{k+1} = \es{R}{T_1}{k} = \es{R}{T'_g}{k} = \es{R}{T'_g}{k+1}$,
\begin{enumerate}
\item Buster wins \mc{T'_g}
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k+1}|-|\es{R}{T_1}{k+1}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k+1}|-|\es{R}{T'_g}{k+1}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k+1}) = w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k+1}) = \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
so \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{proof}
\begin{prop}\label{scenario2prop3}
Suppose Buster wins \mc{T_1} in the $k$th round and $(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}\cup\{s\}$ is connected. Then for $\es{B}{T'_g}{k-1}=\es{B}{T_1}{k}$ Buster does not win \mc{T'_g} in the $(k-1)$st round, and any valid Fixer response in \mc{T'_g} must satisfy $s\in\es{F}{T'_g}{k-1}$. Furthermore, this allows Buster to play $\es{B}{T'_g}{k}=\{s\}$ in \mc{T'_g}, resulting in Buster winning \mc{T'_g} in the $k$th round and \mc{T_1} Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
First, see that Buster does not win \mc{T'_g} in the $(k-1)$st round, since $(\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup\es{R}{T'_g}{k-1}=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}\cup\{s\}$, which is connected by hypothesis of this proposition. Next, note that any valid Fixer response in \mc{T'_g} must satisfy $s\in\es{F}{T'_g}{k-1}$, since otherwise $(\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup\es{F}{T'_g}{k-1}\subseteq(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}$, which is disconnected because Buster wins \mc{T_1} in the $k$th round.
With $\es{B}{T'_g}{k}=\{s\}$, Buster wins \mc{T'_g} in the $k$th round since
\begin{align*}
(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}&=(((\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup\es{F}{T'_g}{k-1})-\{s\})\cup(\es{R}{T'_g}{k-1}-\es{F}{T'_g}{k-1}) \\
&=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup(\es{R}{T'_g}{k-1}-\{s\}) \\
&=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}
\end{align*}
which is disconnected because Buster wins \mc{T_1} in the $k$th round. Furthermore, noting that the convention of $\es{F}{T'_g}{k}=\es{F}{T_1}{k}=\emptyset$ implies $\es{R}{T_1}{k+1} = \es{R}{T_1}{k}$ and $\es{R}{T'_g}{k+1} = \es{R}{T'_g}{k}$,
\begin{enumerate}
\item Buster wins \mc{T'_g}
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k+1}|-|\es{R}{T_1}{k+1}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k+1}|-|\es{R}{T'_g}{k+1}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k}) = w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k-1}-\{s\}) \leq w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k-1}-\es{F}{T'_g}{k-1}) = w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k}) = \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
so \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{proof}
\begin{prop}\label{scenario2prop4}
Suppose Buster plays some set $\es{B}{T_1}{k}=\{e\}$ that does not win \mc{T_1} for Buster in the $k$th round, and when Buster copies that move in \mc{T'_g} with $\es{B}{T'_g}{k-1}=\{e\}$, Fixer responds greedily in \mc{T'_g} with a set \es{F}{T'_g}{k-1}, but no possible greedy response for Fixer in \mc{T'_g} contains $s$. Then Fixer can copy that move in \mc{T_1} with $\es{F}{T_1}{k}=\es{F}{T'_g}{k-1}$ to stay in this scenario.
\end{prop}
\begin{proof}
Fixer can validly play $\es{F}{T_1}{k}=\es{F}{T'_g}{k-1}$ in \mc{T_1} because $\es{F}{T'_g}{k-1}\subseteq\es{R}{T'_g}{k-1}-\{s\}=\es{R}{T_1}{k}$ and
\begin{align*}
\es{G}{T_1}{k+1}&=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{F}{T_1}{k} \\
&=(\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup\es{F}{T'_g}{k-1} \\
&=\es{G}{T'_g}{k}
\end{align*}
which is connected. In addition to $\es{G}{T_1}{k+1}=\es{G}{T'_g}{k}$, we also have
\begin{align*}
\es{R}{T_1}{k+1}&=\es{R}{T_1}{k}-\es{F}{T_1}{k} \\
&=(\es{R}{T'_g}{k-1}-\{ s\})-\es{F}{T'_g}{k-1} \\
&=\es{R}{T'_g}{k}-\{ s\}
\end{align*}
and $s\in\es{R}{T'_g}{k-1}-\es{F}{T'_g}{k-1}=\es{R}{T'_g}{k}$.
We show that $e\neq s'$ by showing that if $e=s'$ then Fixer could have contradicted the assumption that no possible greedy response in \mc{T'_g} contained $s$ by playing $\es{F}{T'_g}{k-1}=\{s\}$. Indeed, $\es{F}{T'_g}{k-1}=\{s\}$ would have been a valid Fixer response since $s$ is a bridge in $(\es{G}{T'_g}{k-1}-\{s'\})\cup\{s\}$ between connected subgraphs $X_k$ and $Y_k$. Furthermore, it would have been a greedy response since $s'$ being a bridge between $X_k$ and $Y_k$ implies that any greedy response would have to be a single edge in \es{R}{T'_g}{k-1} bridging $X_k$ and $Y_k$, and for every $r\in\es{R}{T'_g}{k-1}$ such that $r$ bridges $X_k$ and $Y_k$, $w(r)\geq w(s)$ by assumption of this scenario.
To complete the proof that Fixer playing $\es{F}{T_1}{k}=\es{F}{T'_g}{k-1}$ in \mc{T_1} maintains the conditions of this scenario, we show that $\es{G}{T_1}{k+1}-\{s'\}$ consists of two components $X_{k+1}$ and $Y_{k+1}$ such that $s'$ and $s$ are bridges in \es{G}{T_1}{k+1} and $(\es{G}{T'_g}{k}-\{s'\})\cup\{s\}$, respectively, between $X_{k+1}$ and $Y_{k+1}$, with every $r\in\es{R}{T'_g}{k}$ bridging $X_{k+1}$ and $Y_{k+1}$ also satisfying $w(r)\geq w(s)$. If $e$ is not a bridge in $X_k$ or $Y_k$, then both $X_k-\{e\}$ and $Y_k-\{e\}$ are connected, and furthermore $\es{G}{T'_g}{k-1}-\{e\}$ is connected because $s'$ is a bridge between $X_k-\{e\}$ and $Y_k-\{e\}$ and $e\neq s'$; hence $\es{F}{T_1}{k}=\es{F}{T'_g}{k-1}=\emptyset$ since that would be the only greedy move by Fixer in \mc{T'_g}, so we can set $X_{k+1}=X_k-\{e\}$ and $Y_{k+1}=Y_k-\{e\}$. Thus without loss of generality we may assume $e$ is a bridge in $X_k$ between the two components $X^1_k$ and $X^2_k$ of $X_k-\{e\}$, and $\es{F}{T_1}{k}=\es{F}{T'_g}{k-1}=\{f\}$ for some edge $f\in\es{R}{T_1}{k}$ either bridging $X^1_k$ and $X^2_k$, or bridging $Y_k$ and one of $X^1_k$ or $X^2_k$. If $f$ bridges $X^1_k$ and $X^2_k$, then we may set $X_{k+1}=(X_k-\{e\})\cup\{f\}$ and $Y_{k+1}=Y_k$. Thus without loss of generality we may assume $f$ bridges $X^1_k$ and $Y_k$, so $w(f)\geq w(s)$ by the assumptions of this scenario, and $w(f)\leq w(r)$ for any $r\in\es{R}{T'_g}{k-1}$ bridging $X^1_k$ and $X^2_k$, since otherwise $\es{F}{T'_g}{k-1}=\{r\}$ would have been a cheaper valid response for Fixer in \mc{T'_g}, contradicting $\es{F}{T'_g}{k-1}=\{f\}$ being greedy. Hence we can set $X_{k+1}=X^2_k$ and $Y_{k+1}=Y_k\cup X^1_k\cup\{f\}$, since for every $r\in\es{R}{T_1}{k+1}$ such that $r$ bridges $X_{k+1}$ and $Y_{k+1}$, either $r$ bridged $X_k$ and $Y_k$ in which case $w(r)\geq w(s)$ by the assumptions of this scenario, or $r$ bridged $X^1_k$ and $X^2_k$, in which case we've already shown $w(r)\geq w(f)\geq w(s)$.
\end{proof}
\begin{prop}\label{scenario2prop5}
Suppose Buster plays some set \es{B}{T_1}{k} that does not win \mc{T_1} for Buster in the $k$th round, and when Buster copies that move in \mc{T'_g} with $\es{B}{T'_g}{k-1}=\es{B}{T_1}{k}$, Fixer can respond greedily in \mc{T'_g} with a set \es{F}{T'_g}{k-1} containing $s$. Then Buster can play $\es{B}{T'_g}{k}=\{s\}$ and Fixer can create a connected graph \es{G}{T'_g}{k+1} with some greedy \es{F}{T'_g}{k} in \mc{T'_g}, and Fixer can play $\es{F}{T_1}{k}=(\es{F}{T'_g}{k-1}-\{s\})\cup\es{F}{T'_g}{k}$ in \mc{T_1}, to trigger Scenario \ref{scenario3} for the $(k+1)$st round.
\end{prop}
\begin{proof}
Buster can play $\es{B}{T'_g}{k}=\{s\}$ in \mc{T'_g} because $s\in\es{F}{T'_g}{k-1}\subseteq\es{G}{T'_g}{k}$. Fixer can respond with some $\es{F}{T'_g}{k}\subseteq\es{R}{T'_g}{k}$ such that $\es{G}{T'_g}{k+1}=(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{F}{T'_g}{k}$ is connected (so Buster doesn't win \mc{T'_g} in the $k$th round), because Buster's failure to win \mc{T_1} in the $k$th round implies $(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}$ is connected, and
\begin{align*}
(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}&=(\es{G}{T'_g}{k}\cup\es{R}{T'_g}{k})-\{s\} \\
&=((\es{G}{T'_g}{k-1}\cup\es{R}{T'_g}{k-1})-\es{B}{T'_g}{k-1})-\{s\} \\
&=(\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup(\es{R}{T'_g}{k-1}-\{s\}) \\
&=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}
\end{align*}
so $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}$ is connected as well. Fixer can play $\es{F}{T_1}{k}=(\es{F}{T'_g}{k-1}-\{s\})\cup\es{F}{T'_g}{k}$ in \mc{T_1} because
\begin{align*}
\es{F}{T_1}{k}&=(\es{F}{T'_g}{k-1}-\{s\})\cup\es{F}{T'_g}{k} \\
&\subseteq\es{R}{T'_g}{k-1}-\{s\} \\
&=\es{R}{T_1}{k}
\end{align*}
and
\begin{align*}
\es{G}{T_1}{k+1}&=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{F}{T_1}{k} \\
&=(\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup(\es{F}{T'_g}{k-1}-\{s\})\cup\es{F}{T'_g}{k} \\
&=(((\es{G}{T'_g}{k-1}-\es{B}{T'_g}{k-1})\cup\es{F}{T'_g}{k-1})-\{s\})\cup\es{F}{T'_g}{k} \\
&=(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{F}{T'_g}{k} \\
&=\es{G}{T'_g}{k+1}
\end{align*}
which we already showed was connected because there existed a valid Fixer move \es{F}{T'_g}{k} to prevent Buster from winning \mc{T'_g} in the $k$th round. Since we just showed $\es{G}{T_1}{k+1}=\es{G}{T'_g}{k+1}$, and
\begin{align*}
\es{R}{T_1}{k+1}&=\es{R}{T_1}{k}-\es{F}{T_1}{k} \\
&=(\es{R}{T'_g}{k-1}-\{s\})-((\es{F}{T'_g}{k-1}-\{s\})\cup\es{F}{T'_g}{k}) \\
&=(\es{R}{T'_g}{k-1}-\es{F}{T'_g}{k-1})-\es{F}{T'_g}{k} \\
&=\es{R}{T'_g}{k}-\es{F}{T'_g}{k} \\
&=\es{R}{T'_g}{k+1}
\end{align*}
Scenario \ref{scenario3} is triggered for the $(k+1)$st round.
\end{proof}
\subsubsection{Scenario where \mc{T_1} and \mc{T'_g} are in the same state}
\label{scenario3}
This scenario involves \mc{T_1} and \mc{T'_g} each starting the $k$th round with the following properties:
\begin{enumerate}
\item $\es{G}{T_1}{k}=\es{G}{T'_g}{k}$
\item $\es{R}{T_1}{k}=\es{R}{T'_g}{k}$
\end{enumerate}
Note that in this scenario, after Buster plays some \es{B}{T_1}{k} in \mc{T_1}, Buster can copy that move in \mc{T'_g} with $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ since $\es{G}{T_1}{k}=\es{G}{T'_g}{k}$.
We divide our analysis of this scenario in the following way. Proposition \ref{scenario3prop1} deals with the case that Fixer wins \mc{T_1} in the $(k-1)$st round (i.e. Buster decides to quit before the $k$th round of \mc{T_1}). Proposition \ref{scenario3prop2} deals with the case that Buster wins \mc{T_1} in the $k$th round. Proposition \ref{scenario3prop3} deals with the remaining case that Buster makes a move in the $k$th round, and Fixer is able to reconnect the graph in response.
\begin{prop}\label{scenario3prop1}
If Fixer wins \mc{T_1} in the $(k-1)$st round, then Buster can quit after the $(k-1)$st round of \mc{T'_g}, resulting in Fixer winning \mc{T'_g} in the $(k-1)$st round, and \mc{T_1} being Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
We have
\begin{enumerate}
\item Fixer wins \mc{T_1}
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k}|-|\es{R}{T_1}{k}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k}|-|\es{R}{T'_g}{k}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k}) = w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k}) = \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
and thus \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{proof}
\begin{prop}\label{scenario3prop2}
If Buster wins \mc{T_1} in the $k$th round, then Buster playing $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$ results in Buster winning \mc{T'_g} in the $k$th round and \mc{T_1} Fixer-superior to \mc{T'_g}.
\end{prop}
\begin{proof}
We have $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}$, which is disconnected since Buster wins \mc{T_1} in the $k$th round, so Buster wins \mc{T'_g} in the $k$th round. Hence $\es{F}{T'_g}{k}=\es{F}{T_1}{k}=\emptyset$ by convention, implying $\es{G}{T_1}{k+1}=\es{G}{T_1}{k}-\es{B}{T_1}{k}=\es{G}{T'_g}{k}-\es{B}{T'_g}{k}=\es{G}{T'_g}{k+1}$ and $\es{R}{T_1}{k+1}=\es{R}{T_1}{k}=\es{R}{T'_g}{k}=\es{R}{T'_g}{k+1}$, so
\begin{enumerate}
\item Buster wins \mc{T'_g}
\item $\sum _{j=1}^{|\mc{T_1}|}|\es{B}{T_1}{j}| = |\es{G}{T_1}{1}|+|\es{R}{T_1}{1}|-|\es{G}{T_1}{k+1}|-|\es{R}{T_1}{k+1}| = |\es{G}{T'_g}{1}|+|\es{R}{T'_g}{1}|-|\es{G}{T'_g}{k+1}|-|\es{R}{T'_g}{k+1}| = \sum_{j=1}^{|\mc{T'_g}|}|\es{B}{T'_g}{j}|$
\item $\sum _{j=1}^{|\mc{T_1}|}w(\es{F}{T_1}{j}) = w(\es{R}{T_1}{1})-w(\es{R}{T_1}{k+1}) = w(\es{R}{T'_g}{1})-w(\es{R}{T'_g}{k+1}) = \sum _{j=1}^{|\mc{T'_g}|} w(\es{F}{T'_g}{j})$
\end{enumerate}
and thus \mc{T_1} is Fixer-superior to \mc{T'_g}.
\end{proof}
\begin{prop}\label{scenario3prop3}
If Buster plays some set \es{B}{T_1}{k} that does not win \mc{T_1} for Buster in the $k$th round, then after Buster plays $\es{B}{T'_g}{k}=\es{B}{T_1}{k}$, Fixer can copy her greedy move from \mc{T'_g} in \mc{T_1} by playing $\es{F}{T_1}{k}=\es{F}{T'_g}{k}$ to stay in this scenario.
\end{prop}
\begin{proof}
Since Buster does not win \mc{T_1} in the $k$th round and $(\es{G}{T'_g}{k}-\es{B}{T'_g}{k})\cup\es{R}{T'_g}{k}=(\es{G}{T_1}{k}-\es{B}{T_1}{k})\cup\es{R}{T_1}{k}$, Buster also doesn't win \mc{T'_g} in the $k$th round, so Fixer can respond greedily in \mc{T'_g} with some \es{F}{T'_g}{k}. Since $\es{R}{T_1}{k}=\es{R}{T'_g}{k}$, Fixer can copy that move in \mc{T_1} with $\es{F}{T_1}{k}=\es{F}{T'_g}{k}$. Hence $\es{G}{T_1}{k+1}=\es{G}{T'_g}{k+1}$ and $\es{R}{T_1}{k+1}=\es{R}{T'_g}{k+1}$, leaving us in the same scenario.
\end{proof}
\subsection{The case $c\geq 3$}
\label{c3Proof}
If $c\geq 3$, then \es{F}{S}{1} and \es{F}{S'}{1} each have multiple edges. Let \mc{S''} be a series such that $\es{G}{S}{1}=\es{G}{S'}{1}=\es{G}{S''}{1}$, $\es{R}{S}{1}=\es{R}{S'}{1}=\es{R}{S''}{1}$, and $\es{B}{S}{1}=\es{B}{S'}{1}=\es{B}{S''}{1}$. To show that for our fixed series $\mc{T}\in\phi$, there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}, we first show (via Proposition \ref{intersectionSupProp1}) that there exists a Fixer move \es{F}{S''}{1} and strategy $\phi''$ for Fixer to continue \mc{S''} after the first round such that \es{F}{S''}{1} contains an edge $e\in\es{F}{S}{1}$ and for every $\mc{T''}\in\phi''$ there exists $\mc{T'}\in\phi'$ such that \mc{T''} is Fixer-superior to \mc{T'}. We then show (via Proposition \ref{intersectionSupProp2}) that for every $\mc{T}\in\phi$ there exists $\mc{T''}\in\phi''$ such that \mc{T} is Fixer-superior to \mc{T''}. Then for every $\mc{T}\in\phi$, we would have $\mc{T''}\in\phi''$ and $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T''}, and \mc{T''} is Fixer-superior to \mc{T'}. Since Fixer-superiority is transitive, \mc{T} would be Fixer-superior to \mc{T'}.
In order to prove Propositions \ref{intersectionSupProp1} and \ref{intersectionSupProp2}, we make use of the following observation. Suppose \mc{P} and \mc{U} are series such that for some subset $F$ of \es{F}{P}{1}, the situation facing Fixer during her first move in \mc{U} is the same situation she faced in \mc{P} after having partially fixed $\es{G}{P}{1}-\es{B}{P}{1}$ with $F$ from $\es{R}{P}{1}$ (i.e. $\es{G}{U}{1}-\es{B}{U}{1}=(\es{G}{P}{1}-\es{B}{P}{1})\cup F$ and $\es{R}{U}{1}=\es{R}{P}{1}-F$). Further suppose \mc{Q} is a series identical to \mc{P} up through Fixer partially fixing each graph with $F$ at the start of her first move, but Fixer finishes her first move in \mc{Q} by copying her entire first move in \mc{U} (i.e. $\es{G}{Q}{1}=\es{G}{P}{1}$, $\es{R}{Q}{1}=\es{R}{P}{1}$, $\es{B}{Q}{1}=\es{B}{P}{1}$, and $\es{F}{Q}{1}=F\cup\es{F}{U}{1}$). Finally, suppose \es{F}{U}{1} is optimal, $\phi_{\mc{U}}$ is a strategy of optimal moves for Fixer to continue \mc{U} after the first round, $\phi_{\mc{Q}}$ is the strategy for Fixer to continue \mc{Q} after the first round constructed by replacing the first round of each series in $\phi_{\mc{U}}$ with the first round of \mc{Q}, and $\phi_{\mc{P}}$ is a strategy for Fixer to continue \mc{P} after the first round. Then for every $\mc{Q'}\in\phi_{\mc{Q}}$, since Fixer copies the first part of her move from \mc{P} into \mc{Q'} before (in a sense) finishing that move optimally, and plays all subsequent moves optimally, we should expect that there exists $\mc{P'}\in\phi_{\mc{P}}$ such that \mc{Q'} is Fixer-superior to \mc{P'}. We formally verify below that this indeed holds.
\begin{lem}\label{equivalenceLem}
Let \mc{P} and \mc{U} be series such that for some subset $F$ of \es{F}{P}{1}, $\es{G}{U}{1}-\es{B}{U}{1}=(\es{G}{P}{1}-\es{B}{P}{1})\cup F$ and $\es{R}{U}{1}=\es{R}{P}{1}-F$. Let \mc{Q} be a series such that $\es{G}{Q}{1}=\es{G}{P}{1}$, $\es{R}{Q}{1}=\es{R}{P}{1}$, $\es{B}{Q}{1}=\es{B}{P}{1}$, and $\es{F}{Q}{1}=F\cup\es{F}{U}{1}$. If \es{F}{U}{1} is optimal, $\phi_{\mc{U}}$ is a strategy of optimal moves for Fixer to continue \mc{U} after the first round, $\phi_{\mc{Q}}$ is the strategy for Fixer to continue \mc{Q} after the first round constructed by replacing the first round of each series in $\phi_{\mc{U}}$ with the first round of \mc{Q}, and $\phi_{\mc{P}}$ is a strategy for Fixer to continue \mc{P} after the first round, then for every $\mc{Q'}\in\phi_{\mc{Q}}$ there exists $\mc{P'}\in\phi_{\mc{P}}$ such that \mc{Q'} is Fixer-superior to \mc{P'}.
\end{lem}
\begin{proof}
Fixer's strategy $\phi_{\mc{Q}}$ against any Buster strategy will be a translation of Fixer's strategy $\phi_{\mc{U}}$ against the same Buster strategy. Note that
\begin{align*}
\es{G}{Q}{2}&=(\es{G}{Q}{1}-\es{B}{Q}{1})\cup\es{F}{Q}{1} \\
&=(\es{G}{P}{1}-\es{B}{P}{1})\cup F\cup\es{F}{U}{1} \\
&=(\es{G}{U}{1}-\es{B}{U}{1})\cup\es{F}{U}{1} \\
&=\es{G}{U}{2}
\end{align*}
and
\begin{align*}
\es{R}{Q}{2}&=\es{R}{Q}{1}-\es{F}{Q}{1} \\
&=\es{R}{P}{1}-(F\cup\es{F}{U}{1}) \\
&=(\es{R}{P}{1}-F)-\es{F}{U}{1} \\
&=\es{R}{U}{1}-\es{F}{U}{1} \\
&=\es{R}{U}{2}
\end{align*}
so \mc{Q} and \mc{U} are equivalent starting in the second round.
Since \es{F}{U}{1} is optimal, and $\phi_{\mc{U}}$ is a strategy of optimal moves for Fixer to continue \mc{U} after the first round, by Lemma \ref{optLemFixer} for any series \mc{V} identical to \mc{U} through Buster's move of the first round, for any $\mc{U'}\in\phi_{\mc{U}}$ and any strategy $\phi_{\mc{V}}$ for Fixer to continue \mc{V} after the first round, there exists $\mc{V'}\in\phi_{\mc{V}}$ such that \mc{U'} is Fixer-superior to \mc{V'}. Let \mc{V} be identical to \mc{U} through Buster's move of the first round, but set $\es{F}{V}{1}=\es{F}{P}{1}-F$. Note that
\begin{align*}
\es{G}{P}{2}&=(\es{G}{P}{1}-\es{B}{P}{1})\cup\es{F}{P}{1} \\
&=(\es{G}{P}{1}-\es{B}{P}{1})\cup F\cup (\es{F}{P}{1}-F) \\
&=(\es{G}{U}{1}-\es{B}{U}{1})\cup(\es{F}{P}{1}-F) \\
&=(\es{G}{V}{1}-\es{B}{V}{1})\cup\es{F}{V}{1} \\
&=\es{G}{V}{2}
\end{align*}
and
\begin{align*}
\es{R}{P}{2}&=\es{R}{P}{1}-\es{F}{P}{1} \\
&=(\es{R}{P}{1}-F)-(\es{F}{P}{1}-F) \\
&=\es{R}{U}{1}-\es{F}{V}{1} \\
&=\es{R}{V}{1}-\es{F}{V}{1} \\
&=\es{R}{V}{2}
\end{align*}
so \mc{P} and \mc{V} are equivalent starting in the second round.
Let $\phi_{\mc{Q}}$ be the strategy for Fixer to continue \mc{Q} after the first round constructed by replacing the first round of each series in $\phi_{\mc{U}}$ with the first round of \mc{Q}, let $\phi_{\mc{P}}$ be a strategy for Fixer to continue \mc{P} after the first round, and let $\phi_{\mc{V}}$ be the strategy for Fixer to continue \mc{V} after the first round constructed by replacing the first round of each series in $\phi_{\mc{P}}$ with the first round of \mc{V}. Let $\mc{Q'}\in\phi_{\mc{Q}}$, and let $\mc{U'}\in\phi_{\mc{U}}$ be the series from which \mc{Q'} was constructed by replacing the first round with the first round of \mc{Q}. Let $\mc{V'}\in\phi_{\mc{V}}$ be a series such that \mc{U'} is Fixer-superior to \mc{V'}, and let $\mc{P'}\in\phi_{\mc{P}}$ be the series from which \mc{V'} was constructed by replacing the first round of \mc{P'} with the first round of \mc{V}. Then
\begin{enumerate}
\item Fixer wins \mc{Q'}, or Buster wins \mc{Q'}, in which case $(\es{G}{Q'}{|\mc{Q'}|}-\es{B}{Q'}{|\mc{Q'}|})\cup\es{R}{Q'}{|\mc{Q'}|}$ is disconnected, implying Buster wins \mc{U'} since $(\es{G}{U'}{|\mc{U'}|}-\es{B}{U'}{|\mc{U'}|})\cup\es{R}{U'}{|\mc{U'}|}=(\es{G}{Q'}{|\mc{Q'}|}-\es{B}{Q'}{|\mc{Q'}|})\cup\es{R}{Q'}{|\mc{Q'}|}$, implying Buster wins \mc{V'} since \mc{U'} is Fixer-superior to \mc{V'}, implying $(\es{G}{V'}{|\mc{V'}|}-\es{B}{V'}{|\mc{V'}|})\cup\es{R}{V'}{|\mc{V'}|}$ is disconnected, implying Buster wins \mc{P'} since $(\es{G}{P'}{|\mc{P'}|}-\es{B}{P'}{|\mc{P'}|})\cup\es{R}{P'}{|\mc{P'}|}=(\es{G}{V'}{|\mc{V'}|}-\es{B}{V'}{|\mc{V'}|})\cup\es{R}{V'}{|\mc{V'}|}$
\item $\sum _{j=1}^{|\mc{Q'}|}|\es{B}{Q'}{j}|=|\es{B}{Q}{1}|-|\es{B}{U}{1}|+\sum _{j=1}^{|\mc{U'}|}|\es{B}{U'}{j}|\geq |\es{B}{P}{1}|-|\es{B}{V}{1}|+\sum _{j=1}^{|\mc{V'}|}|\es{B}{V'}{j}|=\sum _{j=1}^{|\mc{P'}|}|\es{B}{P'}{j}|$
\item $\sum _{j=1}^{|\mc{Q'}|} w(\es{F}{Q'}{j})=w(F)+\sum _{j=1}^{|\mc{U'}|} w(\es{F}{U'}{j})\leq w(F)+\sum _{j=1}^{|\mc{V'}|} w(\es{F}{V'}{j})=\sum _{j=1}^{|\mc{P'}|} w(\es{F}{P'}{j})$
\end{enumerate}
so \mc{Q'} is Fixer-superior to \mc{P'}.
\end{proof}
\begin{prop}\label{intersectionSupProp1}
There exists a move \es{F}{S''}{1} and strategy $\phi''$ for Fixer to continue \mc{S''} after the first round such that \es{F}{S''}{1} contains an edge $e\in\es{F}{S}{1}$ and for every $\mc{T''}\in\phi''$ there exists $\mc{T'}\in\phi'$ such that \mc{T''} is Fixer-superior to \mc{T'}.
\end{prop}
\begin{proof}
Since every series is Fixer-superior to itself, if $\es{F}{S}{1}\cap\es{F}{S'}{1}\neq\emptyset$, then we could set $\es{F}{S''}{1}=\es{F}{S'}{1}$ and $\phi''=\phi'$. Hence we may assume $\es{F}{S}{1}\cap\es{F}{S'}{1}=\emptyset$.
Since $c\geq 3$, \es{F}{S}{1} has multiple edges. Let $e$ be the cheapest edge of \es{F}{S}{1}, and let $e'$ be an edge on the path through \es{F}{S'}{1} between the endpoints of $e$; see Figure \ref{fig1intersectionSupProp1}. Recalling that \es{F}{S}{1} is a minimum spanning tree of the multigraph $M$ whose vertices are the components of $\es{G}{S}{1}-\es{B}{S}{1}$ and whose edges are the edges of \es{R}{S}{1} (identifying each endpoint of the edges in \es{R}{S}{1} with the component of $\es{G}{S}{1}-\es{B}{S}{1}$ within which it lies), note that no non-loop edge in $M$ (i.e. edge in \es{R}{S}{1} joining two components of $\es{G}{S}{1}-\es{B}{S}{1}$) can be cheaper than $e$, or else by Proposition \ref{primProp} it would have been added to \es{F}{S}{1} by Prim's algorithm as the edge immediately after the first of its endpoints joined the tree.
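To make the greedy computation concrete, the following minimal Python sketch (with hypothetical edge data) builds a greedy Fixer response as a minimum spanning tree of the component multigraph $M$ just described. Kruskal's union-find variant is shown for brevity; the argument above reasons via Prim's algorithm, but both return a minimum spanning tree of $M$.
\begin{verbatim}
def greedy_fixer_move(surviving_edges, reserve, vertices):
    # reconnect the components of G - B with a minimum-weight set of
    # reserve edges, i.e. a minimum spanning tree of the multigraph M
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in surviving_edges:     # contract surviving components
        parent[find(u)] = find(v)
    move = []
    for w, u, v in sorted(reserve):  # cheapest reserve edges first
        if find(u) != find(v):       # a non-loop edge of M
            parent[find(u)] = find(v)
            move.append((w, u, v))
    return move

# toy example (hypothetical weights): Buster's deletions left the
# components {1,2}, {3} and {4}; reserve edges are (weight, u, v)
surviving = [(1, 2)]
reserve = [(5, 1, 3), (2, 2, 3), (7, 3, 4), (4, 2, 4), (9, 1, 2)]
print(greedy_fixer_move(surviving, reserve, [1, 2, 3, 4]))
# -> [(2, 2, 3), (4, 2, 4)]
\end{verbatim}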
Consider the series \mc{U}, initialized by the following constructions of \es{G}{U}{1} and \es{R}{U}{1}. Construct \es{G}{U}{1} by adding $\es{F}{S'}{1}-\{ e'\}$ to \es{G}{S'}{1} and then deleting from that graph all the edges in \es{B}{S'}{1} except for an edge $g$ joining the two components of $\es{G}{S'}{2}-e'$; see Figure \ref{fig2intersectionSupProp1}. Note that $g$ must exist, or else \es{F}{S'}{1} would not be a tree. Construct \es{R}{U}{1} by deleting $\es{F}{S'}{1}-\{ e'\}$ from \es{R}{S'}{1}.
Note that \es{G}{U}{1} is connected. Indeed, \es{G}{S'}{2} is connected, and \es{G}{U}{1} is \es{G}{S'}{2} with $e'$ replaced by $g$, where $e'$ and $g$ connect the same two components of $\es{G}{S'}{2}-e'$.
Furthermore, $|\es{G}{U}{1}\cup\es{R}{U}{1}|<|\es{G}{S}{1}\cup\es{R}{S}{1}|$. The graph \es{G}{U}{1} and reserve edge set \es{R}{U}{1} are obtained from the graph \es{G}{S}{1} and reserve edge set \es{R}{S}{1} by transferring the edges $\es{F}{S'}{1}-\{ e'\}$ from \es{R}{S}{1} to \es{G}{U}{1}, and then deleting a positive number of edges from the graph, since $|\es{B}{S}{1}|>1$ or else $c<3$.
Hence by the inductive hypothesis, against any set \es{B}{U}{1} of edges removed by Buster, any greedy choice of \es{F}{U}{1} by Fixer is optimal. Set $\es{B}{U}{1}=\{ g\}$.
We claim that $\es{F}{U}{1}=\{ e\}$ is optimal. First, see that $e\in\es{R}{U}{1}$:
\begin{align*}
e &\in\es{F}{S}{1}-\es{F}{S'}{1} \\
&\subseteq\es{R}{S}{1}-\es{F}{S'}{1} \\
&=\es{R}{S'}{1}-\es{F}{S'}{1} \\
&\subset\es{R}{S'}{1}-(\es{F}{S'}{1}-\{ e'\}) \\
&=\es{R}{U}{1}
\end{align*}
Next, see that adding $e$ would connect $\es{G}{U}{1}-\es{B}{U}{1}$, since $\es{G}{U}{1}-\es{B}{U}{1}=\es{G}{S'}{2}-\{e'\}$, \es{G}{S'}{2} is connected, and $e$ and $e'$ both join the two components of $\es{G}{S'}{2}-\{e'\}$. Finally, see that no edge $h$ in \es{R}{U}{1} that would connect $\es{G}{U}{1}-\es{B}{U}{1}$ can be cheaper than $e$. Indeed, $\es{R}{U}{1}\subset\es{R}{S}{1}$, and $h$ is a non-loop edge of $M$: because $\es{G}{U}{1}-\es{B}{U}{1}=\es{G}{S'}{2}-\{e'\}$, if $h$ joins the two components of $\es{G}{S'}{2}-\{e'\}$ and $e'\in\es{F}{S'}{1}$, then $h$ must join two components of $\es{G}{S'}{2}-\es{F}{S'}{1}=\es{G}{S'}{1}-\es{B}{S'}{1}=\es{G}{S}{1}-\es{B}{S}{1}$. Thus $h$ being cheaper than $e$ would contradict the fact that no non-loop edge of $M$ is cheaper than $e$. Hence $\es{F}{U}{1}=\{ e\}$ is optimal, by the inductive hypothesis.
Let $\phi_U$ be a strategy of greedy moves for Fixer to continue \mc{U} after the first round; by the inductive hypothesis, these greedy moves are optimal. Let \mc{S''} be a series such that $\es{G}{S''}{1}=\es{G}{S'}{1}$, $\es{R}{S''}{1}=\es{R}{S'}{1}$, $\es{B}{S''}{1}=\es{B}{S'}{1}$, and $\es{F}{S''}{1}=(\es{F}{S'}{1}-\{ e'\})\cup\{ e\}$, noting that for $F=\es{F}{S'}{1}-\{ e'\}$ we have $F\subset\es{F}{S'}{1}$, $\es{G}{U}{1}-\es{B}{U}{1}=(\es{G}{S'}{1}\cup F)-(\es{B}{S'}{1}-\{g\})-\{g\}=(\es{G}{S'}{1}-\es{B}{S'}{1})\cup F$ (since $F\subset\es{F}{S'}{1}$ and $\es{F}{S'}{1}\cap\es{B}{S'}{1}=\emptyset$ imply $F\cap\es{B}{S'}{1}=\emptyset$), $\es{R}{U}{1}=\es{R}{S'}{1}-F$, and $\es{F}{S''}{1}=F\cup\es{F}{U}{1}$. Let $\phi''$ be the strategy for Fixer to continue \mc{S''} after the first round constructed by replacing the first round of each series in $\phi_U$ with the first round of \mc{S''}. By Lemma \ref{equivalenceLem}, for every $\mc{T''}\in\phi''$ there exists $\mc{T'}\in\phi'$ such that \mc{T''} is Fixer-superior to \mc{T'}.
\end{proof}
\begin{figure}[htb]
\centering
\subcaptionbox{The dashed lines are the edges of \es{F}{S}{1}, while the dotted lines are the edges of \es{F}{S'}{1}.\label{fig1intersectionSupProp1}}[9cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=0,size=1]{a}
\Vertex[x=0,y=2,size=1]{b}
\Vertex[x=1.7,y=-1,size=1]{c}
\Vertex[x=-1.7,y=-1,size=1]{d}
\Edge[label=$e$,position={left},fontsize=\large,style={loosely dashed}](a)(b)
\Edge[style={loosely dashed}](b)(c)
\Edge[style={loosely dashed}](c)(d)
\Edge[style={densely dotted}](b)(d)
\Edge[label=$e'$,position={above},fontsize=\large,style={densely dotted}](d)(a)
\Edge[style={densely dotted}](a)(c)
\end{tikzpicture}
}
\subcaptionbox{The graph \es{G}{U}{1}, where dotted lines are the edges of $\es{F}{S'}{1}-\{ e'\}$.\label{fig2intersectionSupProp1}}[9cm]
{
\begin{tikzpicture}
\Vertex[x=0,y=0,size=1]{a}
\Vertex[x=0,y=2,size=1]{b}
\Vertex[x=1.7,y=-1,size=1]{c}
\Vertex[x=-1.7,y=-1,size=1]{d}
\Edge[style={densely dotted}](b)(d)
\Edge[label=$g$,position={right},fontsize=\large](b)(c)
\Edge[style={densely dotted}](a)(c)
\end{tikzpicture}
}
\caption{Two graphs from the proof of Proposition \ref{intersectionSupProp1}. The blobs are the components of $\es{G}{S}{1}-\es{B}{S}{1}$.}\label{figsForIntersectionSupProp1}
\end{figure}
\begin{prop}\label{intersectionSupProp2}
If \es{F}{S''}{1} contains an edge $e\in\es{F}{S}{1}$ and $\phi''$ is a strategy for Fixer to continue \mc{S''} after the first round, then for every $\mc{T}\in\phi$ there exists $\mc{T''}\in\phi''$ such that \mc{T} is Fixer-superior to \mc{T''}.
\end{prop}
\begin{proof}
Pick $h\in\es{B}{S}{1}$ that joins the two components of $\es{G}{S}{2}-\{e\}$, and consider the series \mc{U} satisfying $\es{G}{U}{1}=(\es{G}{S}{1}-\{h\})\cup\{e\}$, $\es{R}{U}{1}=\es{R}{S}{1}-\{ e\}$, and $\es{B}{U}{1}=\es{B}{S}{1}-\{h\}$. Then $\es{G}{U}{1}-\es{B}{U}{1}=(\es{G}{S''}{1}-\es{B}{S''}{1})\cup\{ e\}$ and $\es{R}{U}{1}=\es{R}{S''}{1}-\{ e\}$. Since $\es{G}{U}{1}\cup\es{R}{U}{1}=(\es{G}{S}{1}\cup\es{R}{S}{1})-\{h\}$ where $h\in\es{B}{S}{1}\subseteq\es{G}{S}{1}$, by the inductive hypothesis any greedy play by Fixer is optimal, including $\es{F}{U}{1}=\es{F}{S}{1}-\{ e\}$.
Let $\phi_U$ be the strategy for Fixer to continue \mc{U} after the first round constructed by replacing the first round of each series in $\phi$ with the first round of \mc{U}. Since all Fixer moves in $\phi$ are greedy, all Fixer moves in $\phi_U$ are also greedy, so all Fixer moves in $\phi_U$ are optimal by the inductive hypothesis. Furthermore, $\phi$ is the strategy for Fixer to continue \mc{S} after the first round constructed by replacing the first round of each series in $\phi_U$ with the first round of \mc{S}. Since \mc{S} is a series such that $\es{G}{S}{1}=\es{G}{S''}{1}$, $\es{R}{S}{1}=\es{R}{S''}{1}$, $\es{B}{S}{1}=\es{B}{S''}{1}$, and $\es{F}{S}{1}=\{ e\}\cup\es{F}{U}{1}$, where $\{e\}\subseteq\es{F}{S''}{1}$ satisfies $\es{G}{U}{1}-\es{B}{U}{1}=(\es{G}{S''}{1}-\es{B}{S''}{1})\cup\{ e\}$ and $\es{R}{U}{1}=\es{R}{S''}{1}-\{ e\}$, by Lemma \ref{equivalenceLem} for every $\mc{T}\in\phi$ there exists $\mc{T''}\in\phi''$ such that \mc{T} is Fixer-superior to \mc{T''}.
\end{proof}
Combining the previous two propositions with the transitivity of Fixer-superiority yields the following conclusion to this subsection.
\begin{cor}
For every $\mc{T}\in\phi$, there exists $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T'}.
\end{cor}
\begin{proof}
By Proposition \ref{intersectionSupProp1}, there exists a move \es{F}{S''}{1} and strategy $\phi''$ for Fixer to continue \mc{S''} after the first round such that \es{F}{S''}{1} contains an edge $e\in\es{F}{S}{1}$ and for every $\mc{T''}\in\phi''$ there exists $\mc{T'}\in\phi'$ such that \mc{T''} is Fixer-superior to \mc{T'}. By Proposition \ref{intersectionSupProp2}, for every $\mc{T}\in\phi$ there exists $\mc{T''}\in\phi''$ such that \mc{T} is Fixer-superior to \mc{T''}. Hence for every $\mc{T}\in\phi$, there exists $\mc{T''}\in\phi''$ and $\mc{T'}\in\phi'$ such that \mc{T} is Fixer-superior to \mc{T''}, and \mc{T''} is Fixer-superior to \mc{T'}. By Proposition \ref{fixSupTransitivityProp}, \mc{T} is Fixer-superior to \mc{T'}.
\end{proof}
\vskip 20pt
\begin{ack}
The author thanks R.T. Solo for the initial problem statement.
\end{ack}
| {
"timestamp": "2021-02-04T02:19:22",
"yymm": "2102",
"arxiv_id": "2102.02140",
"language": "en",
"url": "https://arxiv.org/abs/2102.02140",
"abstract": "We introduce a model involving two adversaries Buster and Fixer taking turns modifying a connected graph, where each round consists of Buster deleting a subset of edges and Fixer responding by adding edges from a reserve set of weighted edges to leave the graph connected. With the weights representing the cost for Fixer to use specific reserve edges to reconnect the graph, we provide a reasonable definition for what should constitute an optimal strategy for Fixer to keep the graph connected for as long as possible as cheaply as possible, and prove that a greedy strategy for Fixer satisfies our conditions for optimality.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Optimally reconnecting weighted graphs against an edge-destroying adversary",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877033706598,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594824166311
} |
https://arxiv.org/abs/1603.04154 | Impacts of Network Topology on the Performance of a Distributed Algorithm Solving Linear Equations | Recently a distributed algorithm has been proposed for multi-agent networks to solve a system of linear algebraic equations, by assuming each agent only knows part of the system and is able to communicate with nearest neighbors to update their local solutions. This paper investigates how the network topology impacts exponential convergence of the proposed algorithm. It is found that networks with higher mean degree, smaller diameter, and homogeneous degree distribution tend to achieve faster convergence. Both analytical and numerical results are provided. | \section{Introduction}
A major goal in studying networked systems is to understand the impact of network topology within the context of the application of interest, from epidemic spreading \cite{pastor2001epidemic,cohen2000resilience} to synchronization \cite{nishikawa2003heterogeneity,wang2005partial}, controllability \cite{liu2011controllability,jadbabaie2004stability,pasqualetti2014controllability}, observability \cite{liu2013observability}, flocking \cite{vicsek1995novel,jadbabaie2003coordination} and consensus \cite{tsitsiklis1984problems,tsitsiklis1986distributed,murray2003consensus,olfati2007consensus}.
Recently, Mou \emph{et al.} proposed a network-based distributed algorithm to solve for $x$ in the linear equation $\mathbf{A}x=b$ \cite{mou2013fixed,mou2014distributed}. In this algorithm it is assumed that each agent is located in a communication network and has partial knowledge of $\mathbf{A}$ and $b$. Under mild conditions on the connectivity of the underlying network, all the agents' states (or local solutions) converge to the exact solution $x=\mathbf{A}^{-1}b$ \cite{mou2013fixed,liu2013asynchronous,mou2014distributed,anderson2015decentralized,mou2015distributed}.
The proposed algorithm in \cite{mou2014distributed} is distributed, applicable for all linear equations as long as they have solutions, works for time-varying networks, converges exponentially fast, operates asynchronously, and does not involve any small step-size. The aim of this paper is to further characterize the relation between its exponential convergence and the network topology. The main contribution of this work is an analytical bound that connects the convergence rate of the algorithm to the network topology and the linear equation. Both theoretical and numerical results show that networks with higher mean degree, smaller diameter, and homogeneous degree distributions tend to speed up this distributed algorithm.
The following notation is used throughout the paper. The $\ell^{2}$-norm is denoted as $\|\cdot\|$. Matrices are denoted by upper case letters in bold such as $\mathbf{A}$ and $\mathbf{P}$. A partition of a matrix is denoted by an upper case letter with a subscript, i.e. $A_i$ is a partition of matrix $\mathbf{A}$, which can also be a row vector. Vectors are denoted by lower case italic letters, such as $x$, $y$, $z$. A network or graph is denoted as $\mathcal{G}(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the node (or vertex) set and $\mathcal{E}$ is the link (or edge) set. The network topology is represented by the adjacency matrix $\mathcal{A}=\{\alpha_{ij}\}$ of the network. This paper is organized as follows. The network-based distributed algorithm is briefly presented in Section \ref{sec_alg}. The theory of how the network topology impacts the algorithm performance is presented in Section \ref{sec_topo}. The main proof is presented in Section \ref{sec_proof}. Finally, the conclusion is presented in Section \ref{sec_conclu}.
\section{A Distributed Algorithm for Solving Linear Equations}
\label{sec_alg}
Consider a system of linear algebraic equations
\begin{equation}
\label{Ax=b}
\mathbf{A}x =b,
\end{equation} which has a unique solution $x^*$. Here $\mathbf{A} \in \mathbb{R}^{\mathit{n}\times\mathit{n}}$, $b \in \mathbb{R}^{\mathit{n}}$ and $x\in \mathbb{R}^{n}$. The partition of the matrix $\mathbf{A}$ is defined as $\mathbf{A}=\mathrm{col}\left\lbrace A_1, A_2,\cdots, A_m \right\rbrace$, where $\mathrm{col}\{\cdot\}$ is an operator that stacks elements into a column, $A_i\in \mathbb{R}^{n_i\times n}$, and the partition of the vector $b$ is defined as $b=\left[ b_1, b_2, \cdots, b_m \right]^{\mathrm{T}}$, $b_i\in \mathbb{R}^{n_i}$, where $\sum_{i=1}^{m}n_i=n$. Assume that the entire system $\left(\mathbf{A},b\right)$ is unavailable to a single agent; instead different partitions of the system $\left( A_i^{n_i \times n},b_i^{n_i}\right)$ are available to different agents. In this paper we consider the simplest case: $n_i=1$ and $m=n$, i.e. each agent knows exactly one row of the matrix $\mathbf{A}$ and one element of the vector $b$.
The distributed algorithm proposed in \cite{mou2014distributed} computes the solution of the linear equation \eqref{Ax=b} through a multi-agent network $\mathcal{G}(\mathcal{V}, \mathcal{E})$, where $\mathcal{V}=\{1,2,\cdots, n\}$ and $\mathcal{E}\subseteq \mathcal{V} \times \mathcal{V}$. The topology of this $n$-agent network is represented by its adjacency matrix $\mathcal{A}(\mathcal{G})=\left[ \alpha_{ij}\right]_{n\times n}$ with
\begin{equation*}
\alpha_{ij}= \left\lbrace
\begin{aligned}
& 1\quad \mathrm{if}\ (i,j)\in \mathcal{E} \\
& 0\quad \mathrm{otherwise.}
\end{aligned}\right.
\end{equation*} Agent $i$ in the network is synonymous with vertex $i$ in the graph $\mathcal{G}(\mathcal{V}, \mathcal{E})$. The topology of the multi-agent network is completely independent of the linear equation in \eqref{Ax=b}.
For simplicity we make the following assumption:
\begin{assumption}
The graph $\mathcal{G}$ is undirected and connected. Every vertex has a self loop and there are no multiple edges between two vertices.
\end{assumption}
Consider agent $i$ who knows $\left( A_i,b_i \right)$. It calculates its local solution $x_i\in \mathbb{R}^n$ to $A_ix_i=b_i$ and exchanges the solution $x_i$ with its neighbors, denoted as $\mathcal{N}_{i}=\{j\in \mathcal{V}|(i,j)\in \mathcal{E}\}$. In this work $t$ is the discrete time variable and takes values in $\{0,\ 1,\ 2,\cdots\}$. The exact (or global) solution to $\mathbf{A}x=b$ is obtained when all the local solutions $x_i$'s reach consensus through the following iteration procedure:
\begin{equation}
\label{x(t+1)=Mx(t)}
x_i(t+1) =x_i(t)-\frac{1}{d_i}\mathbf{P}_i\left(d_ix_i(t)-\sum_{j \in \mathcal{N}_{i}}x_j(t)\right),
\end{equation}
where $\mathbf{P}_i =\mathbf{I}-A_i^{\mathrm{T}}{\left(A_i\cdot A_i^{\mathrm{T}}\right)}^{-1}A_i$ is the orthogonal projection on the kernel of $A_i$, $i=1,\cdots,n$, and $d_i=\sum_{j=1}^n \alpha_{ij}$ is the degree of agent $i$.
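As a concrete illustration of \eqref{x(t+1)=Mx(t)}, the following minimal Python sketch runs the iteration on a small random system. The ring topology with self-loops, the system size, and the iteration count are illustrative assumptions only, not values taken from \cite{mou2014distributed}.
\begin{verbatim}
import numpy as np

def kernel_projection(a):
    # P_i = I - A_i^T (A_i A_i^T)^{-1} A_i for a single row a = A_i
    return np.eye(a.size) - np.outer(a, a) / (a @ a)

def distributed_solve(A, b, adj, T=3000):
    # x_i(t+1) = x_i(t) - (1/d_i) P_i (d_i x_i(t) - sum_{j in N_i} x_j(t))
    n = A.shape[0]
    P = [kernel_projection(A[i]) for i in range(n)]
    d = adj.sum(axis=1)          # self-loops included, so j = i counts
    # initialize each local solution on its own constraint: A_i x_i = b_i
    X = [A[i] * b[i] / (A[i] @ A[i]) for i in range(n)]
    for _ in range(T):
        X = [X[i] - P[i] @ (d[i] * X[i]
             - sum(X[j] for j in range(n) if adj[i, j])) / d[i]
             for i in range(n)]
    return X

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
x_star = rng.standard_normal(n)
b = A @ x_star
adj = np.eye(n, dtype=int)       # self-loops, per the assumption above
for i in range(n):               # plus an (arbitrary) ring topology
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
X = distributed_solve(A, b, adj)
print(max(np.linalg.norm(x - x_star) for x in X))  # shrinks as T grows
\end{verbatim}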
Let $x^*$ be the true solution to \eqref{Ax=b}; it must satisfy $A_ix^*=b_i$ for $i=1,\cdots,n$. Define the error between $x_i(t)$ and $x^*$ as
\begin{equation}
\label{y_i}
y_i(t)=x_i(t)-x^*,
\end{equation} which is in the kernel of $A_i$. In addition, note that $\mathbf{P}_i^2=\mathbf{P}_i$ and $\mathbf{P}_iy_i(t)=y_i(t)$. Replacing $x_i(t+1)$ and $x_i(t)$ by $y_i(t+1)$ and $\mathbf{P}_iy_i(t)$ in \eqref{x(t+1)=Mx(t)}, we get the {\em error updating equation}
\begin{equation}
\label{y+=my}
y_i(t+1) =\frac{1}{d_i}\mathbf{P}_i\sum_{j \in \mathcal{N}_{i}}\mathbf{P}_jy_j(t),
\end{equation}
for $i=1,\cdots,n$. These $n$ equations can be rewritten in the following compact form
\begin{equation}
\label{y=My}
y(t) =\left(\mathbf{P}_{\mathrm{diag}}\left[\left(\mathbf{D}^{-1}\mathcal{A}^{\mathrm{T}}\right)\otimes \mathbf{I}\right]\mathbf{P}_{\mathrm{diag}}\right)^t y(0) =\mathbf{M}^t y(0),
\end{equation}
where the matrix $\mathbf{M}$ is called the {\em updating matrix} and $y(t)=\mathrm{col}\left\lbrace y_1(t),y_2(t),\cdots,y_n(t)\right\rbrace$. The matrix $\mathbf{P}_{\mathrm{diag}}=\mathrm{diag}\{\mathbf{P}_1,\mathbf{P}_2,\cdots,\mathbf{P}_n\}\in \mathbb{R}^{n^2 \times n^2}$ is a block diagonal matrix with $\mathbf{P}_i \in \mathbb{R}^{n \times n}$ and $\mathbf{D}=\mathrm{diag}\{d_1,d_2,\cdots,d_n\}$ is a diagonal matrix. The operator $\otimes$ is the Kronecker product \cite{neudecker1969note}.
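The compact form \eqref{y=My} can also be assembled explicitly. The sketch below (a numeric sanity check under the same kind of toy setup, not part of the original analysis) builds $\mathbf{M}$ and confirms that its spectral radius lies below one, which is consistent with $y(t)=\mathbf{M}^t y(0)\to 0$.
\begin{verbatim}
import numpy as np

def updating_matrix(A, adj):
    # M = P_diag [ (D^{-1} A_adj^T) kron I ] P_diag
    n = A.shape[0]
    P_diag = np.zeros((n * n, n * n))
    for i in range(n):
        a = A[i]
        P_diag[i*n:(i+1)*n, i*n:(i+1)*n] = (
            np.eye(n) - np.outer(a, a) / (a @ a))
    D_inv = np.diag(1.0 / adj.sum(axis=1))
    return P_diag @ np.kron(D_inv @ adj.T, np.eye(n)) @ P_diag

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
adj = np.eye(n, dtype=int)           # self-loops plus a ring topology
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
M = updating_matrix(A, adj)
print(max(abs(np.linalg.eigvals(M))))  # spectral radius, below 1 here
\end{verbatim}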
This algorithm has been proven to converge by using the mixed norm \cite{russo2013contraction} \cite[Chapter 4.3.1]{mou2014distributed} of $\mathbf{M}$ defined as
\begin{equation*}
\|\mathbf{M}\|_{\mathrm{mix}}=\| \mathbf{Q} \|_\infty,
\end{equation*} where $\mathbf{Q}=\{q_{ij}\}$, $q_{ij}=\frac{\alpha_{ij}}{d_i}\|\mathbf{P}_i\mathbf{P}_j\|$. Indeed, $\mathbf{M}^t$ satisfies $\lim_{t\to \infty}\|\mathbf{M}^t\|_{\mathrm{mix}}=0$ if the undirected multi-agent network is connected \cite{mou2014distributed}. Therefore $y(t)=\mathbf{M}^ty(0)\to 0$ and thus $x_i\to x^*$ for all $i\in \mathcal{V}$.
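The mixed norm is also easy to evaluate numerically. The sketch below generalizes it to any power of $\mathbf{M}$ by taking the maximum block-row sum of spectral norms (for $\mathbf{M}$ itself this reduces to $\|\mathbf{Q}\|_\infty$ above); reusing $M$ and $n$ from the previous sketch, the printed sequence decays toward zero on a connected network, as the cited result asserts.
\begin{verbatim}
def mixed_norm(X, n):
    # max over block-rows of the summed spectral norms of the n-by-n blocks
    return max(sum(np.linalg.norm(X[i*n:(i+1)*n, j*n:(j+1)*n], 2)
                   for j in range(n))
               for i in range(n))

for t in (1, 5, 25, 125):
    print(t, mixed_norm(np.linalg.matrix_power(M, t), n))
\end{verbatim}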
Network properties play important roles in consensus problems. In particular, the second smallest eigenvalue $\lambda_2(\mathcal{L})$ of the graph Laplacian bounds the convergence rate of consensus \cite{fiedler1973algebraic,olfati2007consensus}. Given the fact that projection matrices $\mathbf{P}_i$'s are used in constructing the updating matrix $\mathbf{M}$, it is not clear how the network topology $\mathcal{A}$ impacts the convergence rate of this algorithm. Thus, in this work we approach the proof of convergence from a different angle.
\section{Impacts of Network Topology on the Distributed Algorithm}
\label{sec_topo}
\subsection{Theoretical Analysis}
In this section, we study how network topology impacts the performance of the network-based distributed algorithm. Before we state the main theorem, we introduce the following definitions.
\begin{definition}[Walk]
In a graph $\mathcal{G}$, a walk $w^{l}\in \mathcal{V}^{l+1}$ \cite{chung1997spectral} of length $l$ is a sequence of vertices $(v_0,v_1,\cdots,v_l)$ with $\{v_{i-1},v_{i}\}\in \mathcal{E}(\mathcal{G})$ for all $1\leqslant i \leqslant l$ when $l \geqslant 1$. If $l=0$, then $w^{0}$ is simply a vertex $v_0$. Specifically, we denote a walk of length $l$ starting at vertex $v_0$ and ending at vertex $v_{l}$ as $w_{v_0 v_l}^{l}$.
\end{definition}
\begin{definition}[$f(w^l,\beta)$ Product of a Walk]
Let $w^{l}$ be a walk of length $l$. Let $\beta_{v_i}\in U$ be a value associated with vertex $v_i$. We can define a function of the walk $w^l$ as
\begin{equation*}
f(w^l,\beta)=\prod_{i=0}^{l}\beta_{v_i},
\end{equation*} where $\beta$ is indexed by the walk $w^l = (v_0, v_1, \cdots, v_l)$ with values $\beta = (\beta_{v_0}, \beta_{v_1}, \cdots, \beta_{v_l})$. The function $f(w^l,\beta)\in U$ is called the {\em product of walk $w^{l}$}. In this work $U$ is either $\mathbb{R}$ or $\mathbb{R}^{n\times n}$.
\end{definition}
\begin{definition}[$\mathbb{S}(l)$ and $\mathbb{S}^{1}(l)$ Spaces]
In a graph $\mathcal{G}$, all the possible walks of length $l$ form the space $\mathbb{S}(l)$. A walk $w^{l}$ belongs to the subspace $\mathbb{S}^{1}(l)$ of $\mathbb{S}(l)$ if and only if
\begin{itemize}
\item the walk $w^{l}$ starts from an arbitrary vertex $v_0$ and ends at $v_{l}$ and visits all the vertices $v_i\in \mathcal{V}$ of $\mathcal{G}$,
\item there does not exist a vertex $v_j\in \mathcal{V}$ that divides $w^{l}$ into two sub-walks, one starting at $v_0$ and ending at $v_j$ and the other starting at $v_j$ and ending at $v_l$, such that both of them visit all the vertices $v_i\in \mathcal{V}$ of $\mathcal{G}$.
\end{itemize}
Note that when dividing a walk, the end vertex of the previous sub-walk is repeated as the starting vertex of the following sub-walk. It is trivial that a walk $w^{l}$ of length $l\leqslant n-1$ cannot be in the $\mathbb{S}^{1}(l)$ subspace.
\end{definition}
\begin{definition}[Order $r$]
If a walk $w^{l}$ can be divided into several walks $w^{l_1}$, $w^{l_2}$, $\cdots$, $w^{l_r}$, where $l_i\geqslant 1$ and $w^{l_i}\in \mathbb{S}^{1}(l_i)$, then all the walks sharing the same number $r$ of such sub-walks form a subspace $\mathbb{S}^{r}(l)$, where $r$ is called the {\em order} of the space. We also say that $r$ is the {\em order} of the walk $w^{l}$. Note that $\mathbb{S}^r(l) \subsetneq\mathbb{S}(l)$ for any order $r$.
If a walk $w^l$ does not visit all the vertices in a graph $\mathcal{G}$, then its order is $r=0$ and it is in $\mathbb{S}^{0}(l)$. This special case means that there exists at least one vertex $v_i\in \mathcal{V}$ which does not appear in the sequence of the walk $w^{l}$. The order of any $w^{l}$ walk is uniquely determined and non-negative, i.e. $r\geqslant 0$.
\end{definition}
Let $\varphi=\frac{1}{\left(\sqrt{n}\tau\|\mathbf{A}^{-1}\|\right)^2}$ with $\tau=\underset{i}{\max}\left(\|A_i\|\right)$, and let $\frac{1}{d}=\left(\frac{1}{d_{i}},\frac{1}{d_{v_1}},\cdots,\frac{1}{d_j}\right)$ be indexed by the walk $w_{ij}^{t}=\left(i,v_1,\cdots,v_{t-1},j\right)\in \mathcal{V}^{t+1}$, which starts at agent $i$ and ends at agent $j$. Then we have the following theorem.
\begin{theorem}[Convergence Bound]
\label{th_bound}
Given a linear equation $\mathbf{A}x=b$, $\mathbf{A}=\mathrm{col}\{A_i\}\in \mathbb{R}^{n \times n}$ and its unique solution $x^*$, let $x_{i}(t)$ be the local solution at agent $i$ located in an undirected network $\mathcal{G}(\mathcal{V},\mathcal{E})$ whose adjacency matrix is $\mathcal{A}=\{\alpha_{ij}\}$, then the error $y_{i}(t)$ defined in \eqref{y_i} is bounded as
\begin{equation}
\label{bound_of_th}
\|y_{i}(t+1)\| \leqslant \sum_{\mathcal{N}_j}\sum_{r=0}^{r_m(t)}\sum_{w_{ij}^{t}\in \mathbb{S}^{r}(t)} f(w_{ij}^t,\frac{1}{d})\left(1-\varphi \right)^{\frac{nr}{2}} \|y_j(0)\|
\end{equation} for $i=1,\cdots, n$. Here $r_m(t)\leqslant \lfloor\frac{t}{n}\rfloor$ is the maximum order of the product. Note that $w_{ij}^{0}=w_{ii}^{0}=\left(i\right)$ and $w_{ij}^{1}=\left(i,j\right)$.
\end{theorem}
Theorem \ref{th_bound} provides another method to prove that the distributed algorithm converges to the true solution $x^*$ besides the mixed norm method in \cite{mou2014distributed}, which is discussed at the end of this work. The bound in \eqref{bound_of_th} connects the network topology with the convergence rate of the algorithm, by the degree $d_i$ of agent $i$ explicitly, and by counting the number of $w^t\in \mathbb{S}^{r}(t)$ walks in every order $r\geqslant 0$ in the network implicitly. Before moving to the detailed proof of this theorem, we first discuss how topology impacts the performance of the algorithm. To illustrate the topology impacts, we start with the definition of a walk $w^{t}$, then we discuss the properties of the corresponding ${f}(w^t,\frac{1}{d})$ product.
Given a network $\mathcal{G}$ of size $n$, all the possible walks of length $t$ are determined by its adjacency matrix $\mathcal{A}=\{\alpha_{ij}\}$. Let $\frac{1}{d_i}$ be the inverse degree of agent $i$; then the product $\frac{1}{d_{i_0}}\frac{1}{d_{i_1}}\cdots\frac{1}{d_{i_t}}$ can be represented by ${f}(w_{i_0i_t}^t,\frac{1}{d})$, where we recall that $\frac{1}{d}$ is indexed by the walk $w_{i_0i_t}^{t}$. For simplicity, we let $i=i_0$ and $j=i_t$. Hence, given a starting agent $i$, the summation of all products of the walks $w^{1}$ from $i$ to all the agents $j=1,2,\cdots,n$ is represented as $\sum_{j=1}^{n}\frac{\alpha_{ij}}{d_{i}d_{j}}$. In general, we have
\begin{equation*}
\begin{aligned}
\sum_{r=0}^{r_m(t)}\sum_{w_{ij}^{t}} {f}(w_{ij}^t,\frac{1}{d}) = \sum_{l_{t-1}=1}^{n}\cdots\sum_{l_{1}=1}^{n}\frac{\alpha_{il_1}\alpha_{l_1l_2}}{d_id_{l_1}}\cdots\frac{\alpha_{l_{t-1}j}}{d_{l_{t-1}}}\frac{1}{d_{j}}.
\end{aligned}
\end{equation*} It is trivial that for any $r$, $i$, $j$ and any walk $w_{ij}^t$, $f(w_{ij}^t,\frac{1}{d})\in (0,1)$. We now explore a scenario in which the above-mentioned sum remains constant even as the walk length increases.
Given a network $\mathcal{G}$ and a starting agent $i$, if all walks $w_{ij}^{t}$, $j=1,2,\cdots,n$, are extended to walks $w_{ij^{\prime}}^{t+1}$ that visit one more agent $j^{\prime}$ after reaching agent $j$, then the summation of all ${f}(w_{ij^{\prime}}^{t+1},\frac{1}{d})$ products remains the same. This visit of agent $j^{\prime}$ generates $n$ products from each $f(w_{ij}^t,\frac{1}{d})$, each equal to $\frac{\alpha_{jj^{\prime}}}{d_{j^{\prime}}}f(w_{ij}^t,\frac{1}{d})$, $j^{\prime}=1,2,\cdots,n$. Only $d_{j}$ out of these $n$ products are nonzero, namely those with $\alpha_{jj^{\prime}}=1$. The summation of all newly generated products is unchanged, which is
\begin{equation}
\label{sum_visit}
\sum_{\mathcal{N}_{j^{\prime}}} \frac{1}{d_{j^{\prime}}} \sum_{\mathcal{N}_{j}} {f(w_{ij}^t,\frac{1}{d})} = \sum_{\mathcal{N}_{j}} f(w_{ij}^t,\frac{1}{d})
\end{equation} since $\sum_{\mathcal{N}_{j^{\prime}}}1 =d_{j^{\prime}}$. In general, the summation of all products of all walks of $t+1$ steps starting from a given agent $i$ to all the neighbors of all the agents $j$ is
\begin{equation}
\label{sum_walk}
\begin{aligned}
& \sum_{\mathcal{N}_j}\sum_{r=0}^{r_m(t)}\sum_{w_{ij}^{t}\in \mathbb{S}^{r}(t)} f(w_{ij}^t,\frac{1}{d}) \\
= & \sum_{j^{\prime}=1}^{n}\sum_{j=1}^{n}\sum_{i_{t-1}=1}^{n}\cdots \sum_{i_{1}=1}^{n}\frac{\alpha_{ii_{1}}}{d_{i}}\cdots \frac{\alpha_{i_{t-1}j}}{d_{i_{t-1}}}\frac{\alpha_{jj^{\prime}}}{d_{j}} = 1. \\
\end{aligned}
\end{equation}
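Equation \eqref{sum_walk} says that these products are exactly the path probabilities of a random walk on $\mathcal{G}$, which sum to one. A brute-force numerical check (a sketch that is feasible only for small $n$ and $t$, since it enumerates $n^{t+1}$ vertex sequences) is:
\begin{verbatim}
import numpy as np
from itertools import product

def walk_product_sum(adj, i, t):
    # sum over all walks of t+1 steps from agent i of the step
    # products alpha_{uv} / d_u, i.e. the left-hand side of (sum_walk)
    n = adj.shape[0]
    d = adj.sum(axis=1)
    total = 0.0
    for tail in product(range(n), repeat=t + 1):
        walk = (i,) + tail
        p = 1.0
        for u, v in zip(walk, walk[1:]):
            p *= adj[u, v] / d[u]
        total += p
    return total

adj = np.array([[1, 1, 0, 1],    # a 4-cycle with self-loops
                [1, 1, 1, 0],
                [0, 1, 1, 1],
                [1, 0, 1, 1]])
print(walk_product_sum(adj, 0, 3))   # -> 1.0 (up to rounding)
\end{verbatim}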
Given a network $\mathcal{G}$ and a starting agent $i$, the summation $\sum_{\mathcal{N}_j}\sum_{w_{ij}^{t}\in \mathbb{S}^{0}(t)}f(w_{ij}^t,\frac{1}{d})$ is never increasing and the order $r$ of the $f(w^t,\frac{1}{d})$ product is never decreasing as the walk length $t$ grows. Given an arbitrary product $f(w_{i_0i_t}^t,\frac{1}{d})$ of a walk $w_{i_0i_t}^t\in \mathbb{S}^{0}(t)$, when the walk $w_{i_0i_t}^t$ makes one more visit from agent $i_{t}$ to the next agent $i_{t+1}$, it forms $d_{i_{t}}$ new products, and the summation of all $d_{i_{t}}$ products is unchanged, as already shown in \eqref{sum_visit}. However, there may exist a length $t_1$ at which at least one walk changes from the $\mathbb{S}^{0}(t_1)$ subspace to the $\mathbb{S}^{1}(t_1+1)$ subspace. Conversely, a walk $w^{t_2}$ of order $r\geqslant 1$ never changes to a walk of order $r=0$. This holds for any walk $w^t\in \mathbb{S}^{0}(t)$, hence the summation of all products ${f}(w^t,\frac{1}{d})$ with $w^t\in \mathbb{S}^{0}(t)$ is never increasing, that is
\begin{equation*}
\sum_{\mathcal{N}_j}\sum_{w_{ij}^{t+1}\in \mathbb{S}^{0}(t+1)}f(w_{ij}^{t+1},\frac{1}{d}) \leqslant \sum_{\mathcal{N}_j} \sum_{w_{ij}^{t}\in \mathbb{S}^{0}(t)}f(w_{ij}^t,\frac{1}{d})
\end{equation*} and, given a walk of length $t$ and a starting agent $i$, the bound in \eqref{bound_of_th} decreases when the order of the walks increases, due to the exponential factor $\lim_{r\to \infty}\left(1-\varphi \right)^{\frac{nr}{2}}=0$. Since the summation of all $f$ products starting from a chosen agent $i$ is always $1$ by \eqref{sum_walk}, the bound in \eqref{bound_of_th} can only be decreased either i) for a fixed length $t$, by increasing the percentage of walks with higher $r$, or ii) by increasing the order $r$ of all walks as rapidly as possible.
With the above two observations we conclude that given any two networks $\mathcal{G}_1$ and $\mathcal{G}_2$, the distributed algorithm \eqref{x(t+1)=Mx(t)} tends to converge faster on network $\mathcal{G}_1$ if $\mathcal{G}_1$ and $\mathcal{G}_2$ have similar topological properties except for any combination of the following
\begin{itemize}
\item[1] $\mathcal{G}_1$ has a shorter diameter,
\item[2] $\mathcal{G}_1$ has a more homogeneous degree distribution,
\item[3] $\mathcal{G}_1$ has a higher mean degree.
\end{itemize}
Although Theorem \ref{th_bound} has $\frac{1}{d}$ as a factor in the products, it is not trivial to conclude that higher degree makes the products smaller, since higher degree decreases each product while increasing the number of products. The summation of all products remains a constant, as shown in \eqref{sum_walk}. However, the bound decreases when the order $r$ of the products increases. We address these three points in order.
\subsubsection{Diameter}
For two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ with the same degree distribution and hence the same mean degree, if $\mathcal{G}_1$ has a shorter diameter \cite{diestel2005graph} than $\mathcal{G}_2$, then for fixed $t$, walks from $\mathcal{G}_1$ will necessarily have a larger minimum order $r$ as compared to those from $\mathcal{G}_2$. This follows from the fact that all the agents can be visited with fewer steps in a network with shorter diameter. Thus, all things being equal between two graphs, if $r(t)$ increases more rapidly for one graph as opposed to another, the exponential factor $\left(1-\varphi \right)^{\frac{nr}{2}}$ will decrease more rapidly. Therefore networks with shorter diameter make the distributed algorithm converge faster.
\subsubsection{Degree Distribution}
Let $\mathcal{G}_1$ and $\mathcal{G}_2$ be two graphs with the same mean degree but different degree distributions, and let $\mathcal{G}_1$ have a more homogeneous degree distribution than $\mathcal{G}_2$. Walks in $\mathcal{G}_2$ typically have lower order $r$ than walks of the same length in $\mathcal{G}_1$. This is because walks on $\mathcal{G}_2$, unlike those on $\mathcal{G}_1$, have to pass through the high-degree vertices again and again to reach all the other low-degree vertices. Hence for a given walk length, the order $r$ of walks on $\mathcal{G}_1$ is higher. Therefore a homogeneous degree distribution makes the algorithm converge faster.
\subsubsection{Mean Degree}
Adding edges to a graph typically results in a shorter diameter. Given two graphs $\mathcal{G}_1$ and $\mathcal{G}_2$ with similar degree distributions where $\mathcal{G}_1$ has a higher mean degree, the diameter of $\mathcal{G}_1$ is typically no larger than that of $\mathcal{G}_2$. Hence the orders $r$ from $\mathcal{G}_1$ are typically higher than those from $\mathcal{G}_2$ for walks of fixed length. Adding a new edge can make the degree distribution either more homogeneous or more heterogeneous, depending on where the new edge is added. The overall change of degree distribution for each newly added edge is difficult to analyze. However, if multiple new edges are added uniformly to a graph, this will typically result in a more homogeneous degree distribution; thus increasing the mean degree of the network makes the distributed algorithm converge faster.
\subsection{Simulation Results}
To verify our theoretical predictions, we perform extensive numerical simulations. We first quantify the convergence rate of the network-based distributed algorithm. One measure is the solution accuracy of the algorithm, i.e. the Euclidean distance between each local solution and the exact (global) one:
\begin{equation*}
\epsilon_{i}(t)=\|x_i(t)-x^*\|,\ i=1,2,\cdots,n.
\end{equation*} A smaller $\epsilon_{i}$ means a faster convergence rate and hence better algorithm performance. The impact of different network topologies is measured by the statistical performance of the distributed algorithm, i.e. $E\left(\sum_{i=1}^n\epsilon_{i}\right)$, on an ensemble of linear equations. Note that the Euclidean distance defined above needs a reference scale. For example, if the true solutions of two cases satisfy $\|x^{*,1}\|=100$ and $\|x^{*,2}\|=0.1$, while the summed Euclidean distances of all local solutions to $x^{*,j}$ are both $\sum_{i=1}^{n}\epsilon_{i}^{j}=\sum_{i=1}^{n}\|x_i^{j}-x^{*,j}\|=1$, $j=1,2$, then the accuracy of the former iterative process is clearly much higher than that of the latter. Therefore the Euclidean distance should be scaled by the initial error $\sum_{i=1}^n \epsilon_{i}(0)$, yielding the relative error
\begin{equation}
\label{rel_err}
R(t)=\frac{\sum_{i=1}^n \epsilon_{i}(t)}{\sum_{i=1}^n \epsilon_{i}(0)}=\frac{\sum_{i=1}^n \|x_i(t)-x^*\|_2}{\sum_{i=1}^n \|x_i(0)-x^*\|_2}.
\end{equation}
In this way, convergence performance can be compared across different systems of linear equations.
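As an illustration, the following minimal sketch (ours, not the authors' code) tracks the relative error $R(t)$ for one plausible form of the projection--consensus update, in which each agent replaces its estimate by the projection of its neighbourhood average onto its own hyperplane $\{x: A_ix=b_i\}$; this choice is consistent with the block structure $m_{ij}^{(1)}=\frac{\alpha_{ij}}{d_i}\mathbf{P}_i\mathbf{P}_j$ used later, but the ring network, the sizes, and the inclusion of the agent itself in the average are illustrative assumptions.
```python
# Minimal sketch (not the authors' code) of the relative error R(t)
# in \eqref{rel_err} for a projection-consensus iteration on a ring.
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n))        # row A[i] is agent i's equation
x_star = rng.standard_normal(n)
b = A @ x_star

# ring network; each agent averages itself and its two neighbours
neighbors = [[i, (i - 1) % n, (i + 1) % n] for i in range(n)]

def project(i, v):
    """Orthogonal projection of v onto the hyperplane {x : A_i x = b_i}."""
    a = A[i]
    return v + (b[i] - a @ v) / (a @ a) * a

# each agent starts from a point satisfying its own equation
x = np.array([project(i, np.zeros(n)) for i in range(n)])
err0 = sum(np.linalg.norm(xi - x_star) for xi in x)

for t in range(1, 2001):
    avg = np.array([x[neighbors[i]].mean(axis=0) for i in range(n)])
    x = np.array([project(i, avg[i]) for i in range(n)])
    if t % 500 == 0:
        R = sum(np.linalg.norm(xi - x_star) for xi in x) / err0
        print(f"t={t:4d}  R(t)={R:.3e}")
```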
\begin{figure}
\includegraphics[angle=-90,width=0.5\textwidth]{CV100n20nvrg10000steps0915}
\caption{\label{fig_cv_box}Impact of network topology on the performance of the network-based distributed algorithm. Tens of different linear equations are solved by the distributed algorithm on six groups of networks of size $n=100$: (a-c) small-world (SW) networks; (d-f) scale-free (SF) networks, Erd\"{o}s-R\'{e}nyi (ER) random graphs, and random regular (RR) graphs. In each case, we show box-and-whisker plots and the median value of the relative error (convergence rate) $R(t)$ as a function of $t$; a box-and-whisker plot is drawn at each marked iteration step $t$. The mean degree of the networks is denoted by $\langle k \rangle$.}
\end{figure}
Figure~\ref{fig_cv_box} shows how the relative error changes with network topology, including small-world (SW) networks \cite{watts1998collective} with random rewiring probability $p$, scale-free (SF) networks \cite{barabasi1999emergence} with degree exponent $\gamma$, Erd\"{o}s-R\'{e}nyi (ER) random graphs \cite{erdos1960evolution} with connection probability $p$, and random regular (RR) graphs \cite{wormald1999models} with mean degree $\langle k \rangle$. The networks in each subfigure have the same mean degree and differ in only one parameter. The small-world networks (a-c) differ in their rewiring probabilities $p$, which determine the network diameters. The scale-free networks, ER graphs and RR graphs differ drastically in their degree distributions: scale-free networks are the most heterogeneous and random regular graphs are the most homogeneous.
\begin{figure}
\includegraphics[angle=-90,width=0.5\textwidth]{0313CDC_last2}
\caption{\label{fig_cv_last_all}Convergence rate at a chosen time step for complex networks with different topologies. The box-plot shows the relative errors at a given step $T_s=2000$. Networks with similar topological features are grouped together in a particular subfigure.}
\end{figure}
The numerical results shown in Figure~\ref{fig_cv_box} clearly verify our theoretical predictions: if two networks share similar topological properties, the one with the smaller diameter (or more homogeneous degree distribution, or higher mean degree) performs better than the other. To further demonstrate the impact of topology, consider $R(t)$ at $t=2000$, shown as box-and-whisker plots in Figure~\ref{fig_cv_last_all}; a smaller relative error $R(t)$ means a higher convergence rate. It is clear from Figure~\ref{fig_cv_last_all}a-c and Figure~\ref{fig_cv_last_all}d-f that the upper bound of the relative errors decreases as the mean degree increases for a given network model. In other words, a higher mean degree makes the algorithm reach the true solution faster, consistent with our theoretical analysis. Figure~\ref{fig_cv_last_all}a-c show that small-world networks with higher rewiring probability (and hence smaller diameter) have smaller relative errors $R$, confirming our theoretical prediction that a smaller diameter contributes to a higher convergence rate. As shown in Figure~\ref{fig_cv_last_all}d-f, for any given mean degree the random regular graphs have the smallest relative errors while the scale-free networks perform the worst. This means that degree heterogeneity degrades the performance of the network-based distributed algorithm in solving the linear equations \eqref{Ax=b}.
\section{Proof of the Bound Theorem}
\label{sec_proof}
Before the formal proof of Theorem \ref{th_bound}, we discuss the structure of the matrix $\mathbf{M}^t$ \eqref{y=My} and introduce some technical lemmas.
Let $m_{ij}^{(1)}\in \mathbb{R}^{n \times n}$ be the $(i,j)$-th partition matrix of $\mathbf{M}$; then
\begin{equation*}
m_{ij}^{(1)}=\frac{\alpha_{ij}}{d_i} \mathbf{P}_i \cdot \mathbf{P}_j,
\end{equation*} where we recall that $\mathbf{P}_i$ is the orthogonal projection matrix defined right after \eqref{x(t+1)=Mx(t)}. These block matrices $m_{ij}^{(1)}$ are the updating matrices of $y_i(t)$, in the sense that $y_i(t+1)=\sum_{j=1}^{n}m_{ij}^{(1)}y_j(t)$. Similarly, let $m_{ij}^{(t)}$ denote the partition matrix of $\mathbf{M}^t$; then
\begin{equation*}
\begin{aligned}
m_{ij}^{(t)}
& = \sum_{l_{t-1}=1}^n \cdots \sum_{l_{1}=1}^n m_{il_1}^{(1)} \cdots m_{l_{t-1}j}^{(1)} \\
& = \sum_{l_{t-1}=1}^n \frac{\alpha_{l_{{t-1}}j}}{d_{l_{t-1}}}\cdots \sum_{l_{1}=1}^n \frac{\alpha_{il_{1}}\cdot \alpha_{l_{1}l_{2}}}{d_i\cdot d_{l_{1}}} \mathbf{P}_i \cdots \mathbf{P}_{l_{t-1}} \mathbf{P}_j.
\end{aligned}
\end{equation*} Although the expression for $m_{ij}^{(t)}$ is long, it shows that $\mathbf{M}^t$ is simply a weighted sum of projection products. It follows that \eqref{y+=my} can be written as $y_i(t)=\sum_{j=1}^n m_{ij}^{(t)}y_j(0)$. Define $\mu_{ij}=\frac{\alpha_{ij}}{d_i}\in \left[0,0.5\right]$; then we have
\begin{equation}
\label{y=mu_p_y}
y_{i}(t) = \sum_{j=1}^n \cdots \sum_{l_{1}=1}^n \mu_{il_{1}} \cdots \mu_{l_{t-1}j} \mathbf{P}_{i} \cdots \mathbf{P}_{l_{t-1}} \mathbf{P}_j y_j(0).
\end{equation} Note that this is a summation of $n^t$ products. We now separate each term $\mu_{il_{1}} \mu_{l_{1}l_{2}} \cdots \mu_{l_{t-1}j} \mathbf{P}_{i} \mathbf{P}_{l_{1}} \cdots \mathbf{P}_{l_{t-1}} \mathbf{P}_jy_j(0)$ into a {\em $\mu$ product}
\begin{equation}
\label{lmdlmd}
\mu_{il_{1}} \mu_{l_{1}l_{2}} \cdots \mu_{l_{t-1}j}
\end{equation} and its corresponding projection product with $y_j(0)$, which is called {\em error sequence},
\begin{equation}
\label{pppy}
\mathbf{P}_{i} \mathbf{P}_{l_1} \cdots \mathbf{P}_{l_{t-1}} \mathbf{P}_j y_j(0).
\end{equation} From \eqref{sum_visit} the summation of all $\mu$ products \eqref{lmdlmd} satisfies the following equality
\begin{equation}
\label{sum_lmd}
\sum_{j=1}^n\sum_{l_{t-1}=1}^n \cdots \sum_{l_{1}=1}^n \mu_{il_{1}} \mu_{l_{1}l_{2}} \cdots \mu_{l_{t-1}j} = 1.
\end{equation}
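Equality \eqref{sum_lmd} simply reflects that the matrix $(\mu_{ij})$ is row-stochastic, so every power of it is row-stochastic as well. A quick numerical check (ours; it assumes $\alpha_{ij}$ is a $0$--$1$ adjacency and $d_i$ the corresponding degree, here on a ring so that each $\mu_{ij}=0.5$):
```python
# Quick check (ours) of \eqref{sum_lmd}: with mu_ij = alpha_ij / d_i the
# matrix mu is row-stochastic, so each row of mu^t sums to 1, i.e. the
# mu products over all walks of length t starting at agent i sum to 1.
import numpy as np

n = 6
alpha = np.zeros((n, n))
for i in range(n):                      # ring network, degree d_i = 2
    alpha[i, (i + 1) % n] = alpha[i, (i - 1) % n] = 1.0
mu = alpha / alpha.sum(axis=1, keepdims=True)
print(np.linalg.matrix_power(mu, 10).sum(axis=1))   # -> all ones
```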
The decomposition of $\mathbf{M}^t$ into $\mu$ products and error sequences of projections allows us to separate the topological features from the part of the algorithm that is specific to a particular linear equation. We first analyse each product in the error updating equation \eqref{y=mu_p_y} by bounding the error sequences \eqref{pppy}.
Define a sequence of vectors $z(t)\in \mathbb{R}^{n}$ as follows:
\begin{equation}
\label{kacz}
z^{(j)}(t+1)=z(t)+\frac{b_j-A_jz(t)}{\|A_j\|^2}A_j^{\mathrm{T}},
\end{equation} where $t\geqslant 0$ and the superscript $(j)$ indicates that the update uses the row vector $A_j$ and its scalar $b_j$. Then
\begin{equation*}
\begin{aligned}
\mathbf{P}_{i}\left(z(0)-x^*\right)
& = z(0)-\frac{A_iz(0)}{\|A_i\|^2}A_i^{\mathrm{T}}-x^*+ \frac{b_i}{\|A_i\|^2}A_i^{\mathrm{T}}\\
& = z(0)+\frac{b_i-A_iz(0)}{\|A_i\|^2}A_i^{\mathrm{T}}-x^* \\
& = z^{(i)}(1)-x^*.
\end{aligned}
\end{equation*} Let $z^{\left(j\right)}(0)=x_j(0)$, then each error sequence in \eqref{pppy} can be written as
\begin{equation}
\label{pppy=z-x}
\begin{aligned}
& \mathbf{P}_{i} \mathbf{P}_{l_1} \cdots \mathbf{P}_{l_{t-2}} \mathbf{P}_{l_{t-1}} \mathbf{P}_j y_j(0) \\
= & \mathbf{P}_{i} \mathbf{P}_{l_1} \cdots \mathbf{P}_{l_{t-2}} \mathbf{P}_{l_{t-1}} \left(z^{(j)}(0)-x^*\right) \\
= & \mathbf{P}_{i} \cdots \mathbf{P}_{l_{t-2}} \left(z^{(j)}(0)+\frac{b_{l_{t-1}}-A_{l_{t-1}}z^{(j)}(0)}{\|A_{l_{t-1}}\|^2}A_{l_{t-1}}^{\mathrm{T}}-x^* \right) \\
= & \mathbf{P}_{i} \cdots \mathbf{P}_{l_{t-2}} \left(z^{(jl_{t-1})}(1)-x^*\right) \\
= & z^{(il_{1}\cdots l_{t-2}l_{t-1}j)}(t)-x^*.
\end{aligned}
\end{equation}
Essentially, $z^{(il_{1}\cdots l_{t-2}l_{t-1}j)}(t)$ forms the sequence of $z(t)$ by taking different combinations of orthogonal projection $\mathbf{P}_i$ at different agents, $i=1,2,\cdots,n$. We now show that sequences $z(t)$ can be bounded, so that the error sequence is bounded as well.
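The recursion \eqref{kacz} can also be run directly; the following minimal sketch (ours, not part of the paper) uses a cyclic sweep over the rows of a random consistent system, so that $z(t)-x^*$ is exactly a projection product of the form \eqref{pppy=z-x}.
```python
# Minimal sketch (ours) of the recursion \eqref{kacz}: each step projects
# z onto the hyperplane of one row A_j, here chosen in a cyclic sweep.
import numpy as np

rng = np.random.default_rng(1)
n = 10
A = rng.standard_normal((n, n))
x_star = rng.standard_normal(n)
b = A @ x_star

z = np.zeros(n)
for t in range(50 * n):
    j = t % n                           # cyclic choice of the row
    z = z + (b[j] - A[j] @ z) / (A[j] @ A[j]) * A[j]
    if (t + 1) % (10 * n) == 0:
        print(f"{t + 1:4d} projections  ||z - x*|| = "
              f"{np.linalg.norm(z - x_star):.3e}")
```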
We now present two theorems bounding $z(t)-x^*$: first for the case when the walk $w^t$ is associated with a product $f\left(w^{t}, \mathbf{P}_i\right)$ with $w^{t}\in \mathbb{S}^0(t)$, and second for products $f\left(w^{t}, \mathbf{P}_i\right)$ with $w^{t}\in \mathbb{S}^{r}(t)$ and $r\geqslant 1$.
\begin{theorem}[${f}^{0}$ Bound]
\label{th_Sbar}
For any $w^t\in \mathbb{S}^{0}(t)$, each factor satisfies $\|\mathbf{P}_i\|\leqslant 1$ and thus $\|f(w^t,\mathbf{P})\|\leqslant 1$. Therefore the dynamics in \eqref{kacz} satisfy the following inequality:
\begin{equation}
\|z(t)-x^*\| \leqslant \|z(0)-x^*\|
\end{equation}
\end{theorem}
\begin{proof}
Given that each $\mathbf{P}_i$ is an orthogonal projection matrix, it follows that $\|\mathbf{P}_i\|=1$, and the claim follows from the submultiplicativity of the norm.
\end{proof}
\begin{theorem}[${f}$ Bound]
\label{th_S}
Suppose the projection product $\mathbf{P}_{i} \mathbf{P}_{l_1} \cdots \mathbf{P}_{l_{t-1}} \mathbf{P}_j$ of an error sequence is an ${f}\left(w^{t},\mathbf{P}\right)$ product with $w^{t}\in \mathbb{S}^{r}(t)$ and $r\geqslant 1$. Then, by \eqref{pppy=z-x}, the corresponding sequence can be written as
\begin{equation*}
z(t)-x^* = f\left(w^{t},\mathbf{P}\right)y_j(0),
\end{equation*} where $f\left(w^{t},\mathbf{P}\right)$ consists of $r$ products $f(w^{t_i},\mathbf{P})$ with $w^{t_i}\in \mathbb{S}^{1}(t_i)$. All such sequences $z(t)-x^*$ are bounded by
\begin{equation*}
\begin{aligned}
\|z(t)-x^*\|
& \leqslant \left( 1-\frac{1}{\left(\sqrt{n}\tau\|\mathbf{A}^{-1}\|\right)^2} \right)^{\frac{nr}{2}} \|z(0)-x^*\| \\
& < \left(1-\kappa(\mathbf{A})^{-2}\right)^{ \frac{nr}{2} } \|z(0)-x^*\|,
\end{aligned}
\end{equation*} where $\kappa(\mathbf{A})=\|\mathbf{A}\|\cdot\|\mathbf{A}^{-1}\|$ is the usual condition number of $\mathbf{A}$ and we recall the definition $\tau=\underset{i}{\max}\left(\|A_i\|\right)$.
\end{theorem}
The proof of Theorem \ref{th_S} requires several technical Lemmas.
\begin{lemma}[Orthogonal Projection]
\label{lm_OP}
Let $z(t)\in \mathbb{R}^{n}$, $\|z(0)\|=0$ be a sequence that follows
\begin{equation*}
z^{(j)}(t+1)=z(t)+\frac{b_j-A_jz(t)}{\|A_j\|^2}A_j^{\mathrm{T}},
\end{equation*} where $A_j$, $b_j$ are defined as in the linear equation \eqref{Ax=b}; this is the same recursion as \eqref{kacz}. Then the orthogonal projection matrix $\mathbf{P}_{i}^{\star}$ onto the solution hyperplane of the $i$th equation of \eqref{Ax=b}, given in \cite{strohmer2009randomized}, satisfies
\begin{equation*}
z(t+1)=\mathbf{P}_i^{\star}z(t).
\end{equation*}
Let $\langle z(t+1),z(t)\rangle$ denote the inner product of the two vectors $z(t+1)$ and $z(t)$; then, using the updating function \eqref{kacz}, the above equation can be written as
\begin{equation*}
\begin{aligned}
\mathbf{P}_i^{\star}z(t)
& =z(t)-\frac{A_iz(t)-b_i}{\|A_i\|^2}A_i^{\mathrm{T}}\\
& =z(t)-\frac{A_iz(t)-A_ix^*}{\|A_i\|}\frac{A_i^{\mathrm{T}}}{\|A_i\|}\\
& =z(t)-\langle z(t)-x^*,Z_i\rangle Z_i^{\mathrm{T}},
\end{aligned}
\end{equation*}
where $Z_i=\frac{A_i}{\|A_i\|}$, $i=1,2,\cdots, n$, with $\|Z_i\|=1$, is the unit normal vector of the hyperplane $\{z : \langle A_i, z\rangle=b_i\}$.
\end{lemma}
\begin{lemma}[Orthogonality]
\label{lm_orthogonality}
Consider the linear equation \eqref{Ax=b} and let $x^*$ be its unique solution. The difference $z(t+1)-z(t)$ lies in the kernel of $\mathbf{P}_i^{\star}$ by the Orthogonal Projection Lemma \ref{lm_OP}, which means that it is orthogonal to the solution hyperplane; therefore it is also orthogonal to $z(t+1)-x^*$. In other words, the orthogonality of the two vectors $z(t+1)-z(t)$ and $z(t+1)-x^*$ yields
\begin{equation*}
\|z(t+1)-z(t)\|^2+\|z(t+1)-x^*\|^2=\|z(t)-x^*\|^2.
\end{equation*}
\end{lemma}
\begin{lemma}[Inequality]
\label{lm_inequal}
Let $\mathbf{A}=\mathrm{col}\{A_i\}$, $\mathbf{A}\in \mathbb{R}^{n\times n}$, be full rank. Then the following inequality holds:
\begin{equation*}
\sum_{i=1}^n \|\langle \frac{A_i}{\|A_i\|}, x \rangle\|^2 \geqslant \frac{1}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\|x\|^2.
\end{equation*} where $\langle A_i, x \rangle$ denotes the inner product of vector $A_i$ and $x$ and we recall the definition $\tau=\underset{i}{\max}\left(\|A_i\|\right)$.
\end{lemma}
\begin{proof}[Proof of Inequality Lemma \ref{lm_inequal}]
Consider the linear equation in \eqref{Ax=b}. Using the submultiplicative property of the $\ell^2$-norm, the following holds:
\begin{equation*}
\|\mathbf{A}^{-1}\|^2\cdot\|\mathbf{A}x\|^2 \geqslant \|\mathbf{A}^{-1}\mathbf{A}x\|^2 = \|x\|^2,\ \forall\ x\in\mathbb{R}^{n},
\end{equation*} where $\mathbf{A}^{-1}$ exists because $x^*$ is the unique solution of the linear equation in \eqref{Ax=b}.
Considering the matrix partition $\mathbf{A}=\mathrm{col}\{A_i\}$, we have
\begin{equation*}
\sum_{i=1}^n \|\langle A_i, x \rangle\|^2
= \sum_{i=1}^n \|A_i\|^2\|\langle \frac{A_i}{\|A_i\|}, x \rangle\|^2
\geqslant \frac{\|x\|^2}{\|\mathbf{A}^{-1}\|^2}.
\end{equation*} Moreover,
\begin{equation*}
\begin{aligned}
\sum_{i=1}^n \tau^2\|\langle \frac{A_i}{\|A_i\|}, x \rangle\|^2
& \geqslant \sum_{i=1}^n \|A_i\|^2\|\langle \frac{A_i}{\|A_i\|}, x \rangle\|^2 \\
& \geqslant \frac{1}{\|\mathbf{A}^{-1}\|^2}\|x\|^2,
\end{aligned}
\end{equation*} where $\tau>0$ since $\mathbf{A}$ is full rank. Dividing by $\tau^2$ we arrive at the following inequality:
\begin{equation*}
\sum_{i=1}^n \|\langle \frac{A_i}{\|A_i\|}, x \rangle\|^2 \geqslant \frac{1}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\|x\|^2.
\end{equation*}
\end{proof}
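The Inequality Lemma \ref{lm_inequal} is easy to sanity-check numerically. The sketch below (ours) compares both sides for random full-rank matrices, taking $\|\mathbf{A}^{-1}\|$ to be the spectral norm (an assumption on the norm convention):
```python
# Numerical check (ours) of the Inequality Lemma: for random full-rank A
# and random x, sum_i <A_i/||A_i||, x>^2 >= ||x||^2 / (tau ||A^{-1}||)^2.
import numpy as np

rng = np.random.default_rng(2)
n = 8
for trial in range(5):
    A = rng.standard_normal((n, n))
    x = rng.standard_normal(n)
    Z = A / np.linalg.norm(A, axis=1, keepdims=True)   # unit rows Z_i
    lhs = np.sum((Z @ x) ** 2)
    tau = np.linalg.norm(A, axis=1).max()
    rhs = x @ x / (tau * np.linalg.norm(np.linalg.inv(A), 2)) ** 2
    print(f"lhs = {lhs:8.4f}  rhs = {rhs:8.4f}  ok = {lhs >= rhs}")
```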
\begin{proof}[Proof of ${f}$ Bound \ref{th_S}]
Let $x^*$ denote the unique solution to the linear equation \eqref{Ax=b}. Let $z(t)-x^*$ be a vector sequence from the $f(w^{t},\mathbf{P}_i)$, $w^{t}\in \mathbb{S}^{r}(t)$, product part of the error sequence \eqref{pppy=z-x}, where $r\geqslant 1$. Substituting $z(t+1)$ by the updating function \eqref{kacz} in the Orthogonality Lemma \ref{lm_orthogonality}, we have
\begin{equation*}
\begin{aligned}
& \|z(t+1)-x^*\|^2 \\
= & -\|z(t+1)-z(t)\|^2+\|z(t)-x^*\|^2 \\
= & -\|\frac{\langle A_i,z(t)-x^*\rangle}{\|A_i\|}\frac{A_i^{\mathrm{T}}}{\|A_i\|}\|^2+\|z(t)-x^*\|^2 \\
= & -\|\langle z(t)-x^*,Z_i \rangle\|^2 +\|z(t)-x^*\|^2,
\end{aligned}
\end{equation*} where $Z_i=\frac{A_i}{\|A_i\|}$. Since the walk $w^{t}\in \mathbb{S}^{r}(t)$, $r\geqslant 1$, the subscript $i$ in $Z_i=\frac{A_i}{\|A_i\|}$ takes every value $1,2,\cdots,n$ at least once. There exist $\theta_{i(t)}\geqslant 0$ such that
\begin{equation*}
\|\langle z(t)-x^*,Z_{i} \rangle\|^2 \geqslant \frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\|z(t)-x^*\|^2
\end{equation*} for ${i(t)}=1,2,\cdots,n$, by the Inequality Lemma \ref{lm_inequal}. Note that
\begin{equation*}
\begin{aligned}
\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\|z(t)-x^*\|^2
& \leqslant \|\langle z(t)-x^*,Z_{i} \rangle\|^2 \\
& \leqslant \|z(t)-x^*\|^2,
\end{aligned}
\end{equation*} where $\|Z_{i}\|=1$. Therefore $\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\leqslant 1$ for $i(t)=1,\cdots, n$, and then $\|z(t)-x^*\|^2$ is bounded as
\begin{equation*}
\begin{aligned}
& \|z(t)-x^*\|^2 \\
\leqslant & \left(1-\frac{\theta_{i(1)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \cdots \left(1-\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \|z(0)-x^*\|^2,
\end{aligned}
\end{equation*} where
\begin{equation}
\label{>=0}
0 \leqslant \left(1-\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \leqslant 1.
\end{equation}
Note that the sequence $z(t)-x^*$ forms the $f(w^{t},\mathbf{P})$, $w^{t}\in \mathbb{S}^{r}(t)$, $r\geqslant 1$, product part. Because of the fact that $\mathbf{P}_{i}^{r}=\mathbf{P}_{i}$, all indices ${i(t)}=1,2,\cdots,n$ are present at least once in each sub-walk of the original walk, by definition. Hence the walk $w^{t}$ corresponding to $\left(1-\frac{\theta_{i(1)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \cdots \left(1-\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right)$ can be divided into $r$ sub-walks $w^{t_i}\in \mathbb{S}^{1}(t_i)$, and each sub-walk corresponds to an $f\left(w^{t_i}, 1-\frac{\theta}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right)$ product, where $\theta=(\theta_i)$ collects the values at the agents indexed by the walk $w^{t_i}$ and every agent $i=1,2,\cdots,n$ appears in $w^{t_i}$ at least once. Each sub-part of the product corresponding to the walk $w^{t_i}$ is then denoted as
\begin{equation*}
\Pi_{w^{t_i}}\left(1-\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) = f\left(w^{t_i},1-\frac{\theta}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right)
\end{equation*} where the subscript $w^{t_i}$ denotes the consecutive product corresponding to the walk $w^{t_i}$. Furthermore each product corresponding to a $w^{t_i}\in \mathbb{S}^{1}(t_i)$ is bounded as
\begin{equation*}
f\left(w^{t_i},1-\frac{\theta}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \leqslant \Pi_{i=1}^{n} \left(1-\frac{\theta_i}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right)
\end{equation*} since we can always pick $n$ agents $i=1,2,\cdots,n$ in the walk $w^{t_i}$, keep their values unchanged, and set the remaining $\theta_i=0$. Since $\left(1-\frac{\theta_{i(t)}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right)\geqslant 0$ by \eqref{>=0}, and the arithmetic--geometric mean inequality
\begin{equation*}
{\Pi_{l=1}^{n}\theta_l} \leqslant \left(\frac{1}{n}{\sum_{l=1}^{n}\theta_l}\right)^{n}
\end{equation*} holds when $\theta_l\geqslant 0$, the quantity $\|z(t)-x^*\|^2$ is bounded as
\begin{equation*}
\begin{aligned}
& \|z(t)-x^*\|^2 \\
\leqslant & \Pi_{i=1}^{r} f\left(w^{t_i},1-\frac{\theta}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \|z(0)-x^*\|^2 \\
\leqslant & \Pi_{l=1}^{r} \Pi_{i=1}^{n}\left(1-\frac{\theta_{i}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2}\right) \|z(0)-x^*\|^2 \\
\leqslant & \Pi_{l=1}^{r} \left( \frac{1}{n} \sum_{i=1}^{n}\left( \ 1-\frac{\theta_{i}}{\left(\tau\|\mathbf{A}^{-1}\|\right)^2} \right) \right)^n \|z(0)-x^*\|^2 \\
= & \left( 1-\frac{1}{\left(\sqrt{n}\tau\|\mathbf{A}^{-1}\|\right)^2} \right)^{nr} \|z(0)-x^*\|^2,
\end{aligned}
\end{equation*} where $\sum_{{i}=1}^{n} \theta_{i} = 1$ by the Inequality Lemma \ref{lm_inequal}.
A looser bound, in terms of the condition number $\kappa(\mathbf{A})=\|\mathbf{A}\|\cdot\|\mathbf{A}^{-1}\|$, is obtained as follows. Since $\tau=\underset{i}{\mathrm{max}}\left(\|A_i\|\right)$ and $\mathbf{A}$ is full rank,
\begin{equation*}
\tau=\underset{i}{\mathrm{max}}\left(\|A_i\|\right) < \|\mathbf{A}\|_F,
\end{equation*} where $\|\mathbf{A}\|_F=\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}^{2}}$ is the Frobenius norm. The scaled condition number \cite{demmel1988probability} $\kappa_s(\mathbf{A})=\|\mathbf{A}\|_{F}\|\mathbf{A}^{-1}\|$ and the condition number $\kappa(\mathbf{A})$ satisfy the inequality $1 \leqslant \frac{\kappa_s(\mathbf{A})}{\sqrt{n}} \leqslant \kappa(\mathbf{A})$; then
\begin{equation*}
\sqrt{n}\tau\|\mathbf{A}^{-1}\| < \sqrt{n}\|\mathbf{A}\|_F\|\mathbf{A}^{-1}\| \leqslant \kappa(\mathbf{A})
\end{equation*} and therefore the loose bound is
\begin{equation*}
\|z(t)-x^*\|^2 < \left( 1-\kappa(\mathbf{A})^{-2} \right)^{nr} \|z(0)-x^*\|^2.
\end{equation*} This concludes the proof of Theorem \ref{th_S}.
\end{proof}
\begin{remark}
The ${f}$ Bound Theorem \ref{th_S} is important since it also bounds the convergence rate of Kaczmarz's algorithm \cite{kaczmarz1937angenaherte}, which was not well resolved in the literature \cite{gower2015randomized}. It gives a tight bound in terms of the matrix inverse $\|\mathbf{A}^{-1}\|$ and a loose bound in terms of the condition number $\kappa(\mathbf{A})$. The bounds can be computed easily when the iterative sequence of Kaczmarz's algorithm is given, in contrast to the known estimate \cite{galantai2005rate}. Furthermore, the ${f}$ Bound Theorem \ref{th_S} clearly explains why Kaczmarz's algorithm is slower than a randomized Kaczmarz algorithm \cite{strohmer2009randomized,dai2014randomized,gower2015randomized}.
\end{remark}
\begin{remark}
With the help of the ${f}^{0}$ Bound Theorem \ref{th_Sbar} and the ${f}$ Bound Theorem \ref{th_S}, each product in the error updating equation \eqref{y=mu_p_y} can be divided into two parts and bounded separately: one corresponding to the ${f}$ product part, where $r\geqslant 1$ and all indices $1\leqslant i\leqslant n$ are present, and the other corresponding to the ${f}^{0}$ product part, where not all indices $1\leqslant i\leqslant n$ are present. We can now prove the Bound Theorem \ref{th_bound}.
\end{remark}
\begin{proof}[Proof of Bound Theorem \ref{th_bound}]
Let $f(w^{t},\mu_{ij})$ denote the $\mu$ product corresponding to the error sequence \eqref{pppy} in \eqref{y=mu_p_y}. According to the ${f}^{0}$ Bound Theorem \ref{th_Sbar} and the ${f}$ Bound Theorem \ref{th_S}, the error updating equation \eqref{y=mu_p_y} is bounded as follows:
\begin{equation*}
\begin{aligned}
& \|y_i(t+1)\| \\
\leqslant & \left( \sum_{j=1}^n \cdots \sum_{l_{1}=1}^n \|\mu_{il_{1}} \cdots \mu_{l_{t-1}j} \mathbf{P}_i \cdots \mathbf{P}_j y_j(0)\| \right) \\
\leqslant & \sum_{j=1}^{n}\sum_{r=1}^{r_m(t)}\sum_{w_{ij}^{t}\in \mathbb{S}^{r}(t)} f(w_{ij}^{t},\frac{1}{d}) \left(1-\varphi\right)^{\frac{nr}{2}} \|y_j(0)\| \\
& + \sum_{j=1}^{n}\sum_{w_{ij}^{t}\in \mathbb{S}^{0}(t)} f(w_{ij}^{t},\frac{1}{d}) \left(1-\varphi\right)^{\frac{0}{2}} \|y_j(0)\| \\
= & \sum_{j=1}^{n}\sum_{r=0}^{r_m(t)}\sum_{w_{ij}^{t}\in \mathbb{S}^{r}} f(w_{ij}^{t},\frac{1}{d}) \left(1-\varphi\right)^{\frac{nr}{2}} \|y_j(0)\|
\end{aligned}
\end{equation*} where $\mu_{ij}=\frac{\alpha_{ij}}{d_i}$. This completes the proof.
\end{proof}
The bound in \eqref{bound_of_th} gives another proof that the distributed algorithm studied in this paper converges to $x^*$ for connected undirected networks, as shown below.
\subsection*{Discussion of the Algorithm Convergence}
Note that the order of a walk $w^t$ typically increases as the length of the walk grows, since $\mathcal{G}$ is a connected network. This implies that for any given order $r$, the total number of walks $w^{t}\in \mathbb{S}^{r}(t)$ is limited, and hence the summation of all corresponding $f(w^{t},\frac{1}{d})$ products is bounded, because the summation over all walks is 1 \eqref{sum_walk}. The number of walks starting at a vertex $v_i$ with given order $r$ and length $t$ can be estimated by combinatorics. We demonstrate this when the network topology is a complete graph. For an arbitrary network the number of walks can be bounded similarly, but the computation can become quite involved.
For walks of length $t$ starting at a fixed vertex $v_0$ in a complete network $\mathcal{G}$ with $n$ vertices, the total number of walks is $n^{t}$. Let $t\gg n$. In order to count the maximum number of walks $w^{t}\in \mathbb{S}^{0}(t)$, we first choose subsets of vertices $\mathcal{V}_{k}^{0}\subsetneq \mathcal{V}$ by picking $k\leqslant n-2$ vertices out of $n$, and $\mathcal{V}_{n-1}^{0}\subsetneq \mathcal{V}$ by picking $n-1$ vertices, excluding the case in which $v_0$ is not picked (which results in walks in $\mathbb{S}^{1}(t)$ rather than $\mathbb{S}^{0}(t)$). There are in total $C_{n}^{k}=\frac{n!}{k!\left(n-k\right)!}$ sets $\mathcal{V}_{k}^{0}$ for each $k\leqslant n-2$, and $C_{n}^{n-1}-1$ sets $\mathcal{V}_{n-1}^{0}$. We then choose a vertex with replacement from $\mathcal{V}_{k}^{0}$ or $\mathcal{V}_{n-1}^{0}$ at each step and append it to the walk sequence, generating all possible walks. The total number $c^{0}(t)$ of $w^{0}(t)$ walks is
\begin{equation*}
c^{0}(t)=\sum_{k=1}^{n-1}C_{n}^{k}k^{t}-\left(n-1\right)^{t}.
\end{equation*} Walks $w^{t}\in \mathbb{S}^{1}(t)$ can be regarded as combinations of $w^{n}\in \mathbb{S}^{1}(n)$ walks and $w^{t-n}\in \mathbb{S}^{0}(t-n)$ walks. We first choose $n$ positions out of $t$ in the walk sequence and let these $n$ positions form a $w^{n}\in \mathbb{S}^{1}(n)$ walk. There are $C_{t}^{n}$ ways to choose the $n$ positions, and the number of $w^{1}(n)$ walks is exactly $n!$ for each choice, so the number of different sub-sequences in $w^{1}(t)$ walks is $P_{t}^{n}=n!C_{t}^{n}$. The number of $w^{0}(t-n)$ walks is simply $c^{0}(t-n)$. Hence the number of $w^{1}(t)$ walks is bounded by
\begin{equation*}
c^{1}(t)= P_{t}^{n}c^{0}(t-n).
\end{equation*} In the general case $r\geqslant 2$, we pick $n$ positions out of locations $1,\cdots,t-(r-1)n$ to form the first $w^{n}\in \mathbb{S}^{1}(n)$ sub-walk, pick from the left-over locations up to $t-(r-2)n$ to form the second $w^{n}\in \mathbb{S}^{1}(n)$ sub-walk, and so on. Let $t_1$ denote the start position and $t_2-1$ the end position picked out by the first $w^{n}\in \mathbb{S}^{1}(n)$ sub-walk; then the total number of its sub-sequences is $P_{t_2-t_1}^{n}$. Defining $t_3,\cdots, t_r$ similarly, the second $w^{n}\in \mathbb{S}^{1}(n)$ sub-walk can pick from position $t_2$ through $t_3-1$ of the original $w^{t}\in \mathbb{S}^{r}(t)$ sequence, so the total number of sub-sequences for the second sub-walk is $P_{t_3-t_2}^{n}$. The total number of $w^{r}(t)$ walks is therefore bounded by
\begin{equation*}
c^{r}(t)=\Pi_{i=1}^{r} P_{t_{i+1}-t_i}^{n}c^{0}\left(t-rn\right).
\end{equation*} The fraction of such walks among all walks then satisfies
\begin{equation*}
\lim_{t\to \infty} \frac{c^{r}(t)}{n^{t}} = \lim_{t\to \infty} \frac{t^{rn}c^{0}(t-rn)}{n^{rn}n^{t-rn}}= 0
\end{equation*} since $\frac{c^{0}(t-rn)}{n^{t-rn}}$ reduces exponentially to 0. This means for any given order $r$, the corresponding walks account for only a minor portion of all walks. In other words, the order of all products keeps growing when the length of walks increases.
Since the summation of all $f\left(w^{t},\frac{1}{d}\right)$ products is 1 \eqref{sum_walk}, and the fraction of walks with order at most a given $r_c$ tends to $0$, the following two limits exist:
\begin{equation*}
\begin{aligned}
& \lim_{t\to \infty} \rho_1=\lim_{t\to \infty} \sum_{j=1}^{n}\sum_{r=r_c+1}^{r_m(t)}\sum_{w_{ij}^{t}\in \mathbb{S}^{r}(t)}f(w_{ij}^{t},\mu) = 1,\\
& \lim_{t\to \infty} \rho_2=\lim_{t\to \infty} \sum_{j=1}^{n}\sum_{r=0}^{r_c}\sum_{w_{ij}^{t}\in \mathbb{S}^{r}(t)}f(w_{ij}^{t},\mu) = 0.
\end{aligned}
\end{equation*} for any finite $r_c$. Furthermore, the limit of the error in \eqref{y_i} satisfies the following
\begin{equation*}
\begin{aligned}
& \lim_{t\to \infty} \|y_i(t+1)\| \\
\leqslant & \lim_{t\to \infty} \rho_1\left(1-\varphi\right)^{\frac{nr}{2}} \|y_j(0)\| +\lim_{t\to \infty} \rho_2 \left(1-\varphi\right)^{\frac{nr}{2}} \|y_j(0)\| \\
= & 0,
\end{aligned}
\end{equation*} where $\lim_{t\to \infty}r_m(t) = \infty$. Therefore the algorithm converges, i.e. $\lim_{t\to \infty}x_i(t) = x^*$ for all agents $i$, for regular networks.
Future work will look to analyze the combinatorics of more general network topologies.
\section{CONCLUSIONS}
\label{sec_conclu}
In this work, we systematically study the impact of network topology on the performance of a network-based distributed algorithm for solving linear algebraic equations. Both theoretical analysis and simulation results show that networks with higher mean degree, smaller diameter, and more homogeneous degree distribution make the algorithm converge faster. Interestingly, $k$-regular random networks with small mean degree can achieve performance comparable to that of degree-heterogeneous networks with very high mean degree. Hence, it is possible to reduce the communication cost (i.e. by designing sparser networks) while keeping a fast convergence rate.
Besides classical consensus problems, we expect that more complicated problems can also be solved with network-based distributed algorithms. The results presented here provide a method to analyse the impact of topology on a network-based distributed algorithm. They may shed light on the design of better network topologies to improve the performance of general multi-agent distributed algorithms for solving more challenging real-world problems.
\section*{Acknowledgement}
This work was partially supported by the John Templeton Foundation (award number 51977).
\bibliographystyle{IEEEtran}
| {
"timestamp": "2016-03-15T01:12:57",
"yymm": "1603",
"arxiv_id": "1603.04154",
"language": "en",
"url": "https://arxiv.org/abs/1603.04154",
"abstract": "Recently a distributed algorithm has been proposed for multi-agent networks to solve a system of linear algebraic equations, by assuming each agent only knows part of the system and is able to communicate with nearest neighbors to update their local solutions. This paper investigates how the network topology impacts exponential convergence of the proposed algorithm. It is found that networks with higher mean degree, smaller diameter, and homogeneous degree distribution tend to achieve faster convergence. Both analytical and numerical results are provided.",
"subjects": "Optimization and Control (math.OC)",
"title": "Impacts of Network Topology on the Performance of a Distributed Algorithm Solving Linear Equations",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9822877028521421,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594820423945
} |
https://arxiv.org/abs/1406.3361 | Scaling-rotation distance and interpolation of symmetric positive-definite matrices | We introduce a new geometric framework for the set of symmetric positive-definite (SPD) matrices, aimed to characterize deformations of SPD matrices by individual scaling of eigenvalues and rotation of eigenvectors of the SPD matrices. To characterize the deformation, the eigenvalue-eigenvector decomposition is used to find alternative representations of SPD matrices, and to form a Riemannian manifold so that scaling and rotations of SPD matrices are captured by geodesics on this manifold. The problems of non-unique eigen-decompositions and eigenvalue multiplicities are addressed by finding minimal-length geodesics, which gives rise to a distance and an interpolation method for SPD matrices. Computational procedures to evaluate the minimal scaling--rotation deformations and distances are provided for the most useful cases of $2 \times 2$ and $3 \times 3$ SPD matrices. In the new geometric framework, minimal scaling--rotation curves interpolate eigenvalues at constant logarithmic rate, and eigenvectors at constant angular rate. In the context of diffusion tensor imaging, this results in better behavior of the trace, determinant and fractional anisotropy of interpolated SPD matrices in typical cases. | \section{Introduction}
The analysis of symmetric positive-definite (SPD) matrices as data objects arises in many contexts. A prominent example is diffusion tensor imaging (DTI), which is a widely-used technique that measures the diffusion of water molecules in a biological object \cite{Basser1994,LeBihan2001,Alexander2005}. The diffusion of water is characterized by a 3D tensor, which is a $3 \times 3$ SPD matrix. The SPD matrices also appear in other contexts of tensor computing \cite{Pennec2006}, tensor-based morphometry \cite{Lepore2008} and as covariance matrices \cite{vemulapalli2015riemannian}.
In recent years, statistical analyses of SPD matrices have received great attention \cite{Zhu2009,Schwartzman2008,Schwartzman2008a,Schwartzman2010,Moakher2011,Yuan2012,Osborne2013}.
The main challenge in the analysis of SPD matrices is that the set of $p \times p$ SPD matrices, $\mbox{Sym}^+(p)$, is a proper open subset of a real matrix space, so it is not a vector space. This has led researchers to consider alternative geometric frameworks to handle analytic and statistical tasks for SPD matrices.
The most popular framework is a Riemannian framework, where the set of SPD matrices is endowed with an affine-invariant Riemannian metric \cite{Moakher2005,Pennec2006,Lenglet2006,Fletcher2007}. The Log-Euclidean metric, discussed in \cite{Arsigny2007}, is also widely used, because of its simplicity. \cite{Dryden2009} lists these popular approaches including the Cholesky decomposition-based approach of \cite{Wang2004} and their own approach which they call the Procrustes distance. \cite{Bonnabel2009} proposed a different Riemannian approach for symmetric positive {semidefinite} matrices of fixed rank.
Although these approaches are powerful in generalizing statistics to SPD matrices, they are not easy to interpret in terms of SPD matrix deformations. In particular, in the context of DTI, tensor changes are naturally characterized by changes in diffusion orientation and intensity, but the above frameworks do not provide such an interpretation.
\cite{Schwartzman2006} proposed a scaling--rotation curve in $\mbox{Sym}^+(p)$, which is interpretable as rotation of diffusion directions and scaling of the main modes of diffusivity.
In this paper we develop a novel framework to formally characterize scaling--rotation deformations between SPD matrices and introduce a new distance, called here the scaling--rotation distance, defined by the minimum amount of rotation and scaling needed to deform one SPD matrix into another.
To this aim, an alternative representation of $\mbox{Sym}^+(p)$, obtained from the decomposition of each SPD matrix into an eigenvalue matrix and eigenvector matrix, is identified as a Riemannian manifold.
This manifold, a generalized cylinder embedded in a higher-dimensional matrix space, is easy to endow with a Riemannian geometry. A careful analysis is provided to handle the case of equal eigenvalues and, more generally, the non-uniqueness of the eigen-decomposition.
We show that the scaling--rotation curve corresponds to geodesics in the new geometry, and characterize the family of geodesics. A minimal deformation of SPD matrices in terms of the smallest amount of scaling and rotation is then found by a minimal scaling--rotation curve, through a minimal-length geodesic. Sufficient conditions for the uniqueness of minimal curves are given.
The proposed framework not only provides a minimal deformation, but also yields a distance between SPD matrices. This distance function is a semi-metric on $\mbox{Sym}^+(p)$ and invariant to simultaneous rotation, scaling and inversion of SPD matrices. The invariance to matrix inversion is particularly desirable in analysis of DTI data, where both large and small diffusions are unlikely \cite{Arsigny2007}. While these invariance properties are also found in other frameworks \cite{Moakher2005,Pennec2006,Lenglet2006,Fletcher2007,Arsigny2007}, the proposed distance is directly interpretable in terms of the relative scaling of eigenvalues and rotation angle between eigenvector frames of two SPD matrices.
For $\mbox{Sym}^+(3)$, other authors \cite{collard2012anisotropy,yang2012feature} have proposed dissimilarity-measures and interpolation schemes based on the same general idea as ours, \emph{i.e.}, separating the scaling and rotation of SPD matrices. Their deformations of SPD matrices can be similar to ours in many cases, thus enjoying similar interpretability.
But while \cite{collard2012anisotropy,yang2012feature} mainly focused on the $p =3$ case, our work is more flexible by allowing \emph{unordered} and \emph{equal} eigenvalues. We discuss the importance of this later in Section~\ref{sec:minimalscrotcurve}.
The proposed geometric framework for analysis of SPD matrices is viewed as an important first step to develop statistical tools for SPD matrix data that will inherit the interpretability and the advantageous regular behavior of the scaling--rotation curve. Development of tools similar to those already existing for other geometric framework, such as bi- or tri-linear interpolations \cite{Arsigny2007}, weighted geometric means and spatial smoothing \cite{Moakher2005,Dryden2009,Carmichael2013}, principal geodesic analysis \cite{Fletcher2007}, regression and statistical testing \cite{Zhu2009,Schwartzman2008a, Schwartzman2010,Yuan2012}, will also be needed in the new framework, but we do not address them here.
The proposed framework also has potential future applications beyond diffusion tensor study such as high-dimensional factor models \cite{forni2000generalized} and classification among SPD matrices \cite{Jung2014,vemulapalli2015riemannian}. Algorithms allowing fast computation or approximation of the proposed distance may be needed, but we will leave this as a subject of future work.
The current paper focuses only on analyzing minimal scaling--rotation curves and the distance defined by them.
The main advantage of the new geometric framework for SPD matrices is that minimal scaling--rotation curves interpolate eigenvalues at constant logarithmic rate, and eigenvectors at constant angular rate, with a minimal amount of scaling and rotation.
These are desirable characteristics in fiber-tracking in DTI \cite{Batchelor2005}. Moreover, scaling--rotation curves exhibit regular evolution of determinant, and in typical cases, of fractional anisotropy and mean diffusivity.
Linear interpolation of two SPD matrices by the usual vector operations is known to have a \emph{swelling} effect: the determinants of the interpolated SPD matrices are larger than those of the two ends. This is physically unrealistic in DTI \cite{Arsigny2007}. The Riemannian frameworks in \cite{Moakher2005,Pennec2006,Arsigny2007} do not suffer from the {swelling} effect, which was in part the rationale for favoring the more sophisticated geometry. However, all of these exhibit a \emph{fattening} effect: interpolated SPD matrices are more isotropic than the two ends \cite{Chao2009681}. The Riemannian frameworks also produce an unpleasant \emph{shrinking} effect: the trace of the interpolated SPD matrices is smaller than those of the two ends \cite{Batchelor2005}. The scaling--rotation framework, on the other hand, does not suffer from the fattening effect and produces a smaller shrinking effect, with no shrinking at all in the case of pure rotations.
The rest of the paper is organized as follows.
Scaling--rotation curves are formally defined in Section~\ref{sec:ScalingRotation}.
Section~\ref{sec:minimalscrotcurve} is devoted to precisely characterizing minimal scaling--rotation curves between two SPD matrices and the distance obtained accordingly. The cylindrical representation of $\mbox{Sym}^+(p)$ is introduced to handle the non-uniqueness of the eigen-decomposition and repeated eigenvalue cases.
Section~\ref{sec:computation} provides details for the computation of the distance and curves for the special but most commonly useful cases of $2\times 2$ and $3 \times 3$ SPD matrices.
In Section~\ref{sec:interpolation}, we highlight the advantageous regular evolution of the scaling--rotation interpolations of SPD matrices.
Technical details including proofs of theorems are contained in Appendix.
\section{Scaling--rotation curves in $\mbox{Sym}^+(p)$}\label{sec:ScalingRotation}
An SPD matrix $M \in \mbox{Sym}^+(p)$ can be identified with an ellipsoid in $\mathbb R^p$ (ellipse if $p = 2$). In particular, the surface coordinates $x \in \mathbb R^p$ of the ellipsoid corresponding to $M$ satisfy $x'M^{-1}x = 1$. The semi-principal axes of the ellipsoid are given by eigenvector and eigenvalue pairs of $M$.
Fig.~\ref{fig:scarotdeformationsex} illustrates some SPD matrices in $\mbox{Sym}^+(3)$ as ellipsoids in $\mathbb R^3$.
Any deformation of the SPD matrix $X$ to another SPD matrix can be achieved by the combination of two operations:
\begin{remunerate}
\item individual scaling of the eigenvalues, or stretching (shrinking) the ellipsoid along principal axes;
\item rotation of the eigenvectors, or rotation of the ellipsoid.
\end{remunerate}
Denote an eigen-decomposition of $X$ by $X = UDU'$, where the columns of $U \in \mbox{SO}(p)$ consist of orthonormal eigenvectors of $X$, and $D \in \mbox{Diag}^+(p)$ is the diagonal matrix of positive eigenvalues, which need not be ordered. Here, $\mbox{SO}(p)$ denotes the set of $p\times p$ real rotation matrices.
To parameterize scaling and rotation, the matrix exponential and logarithm, defined in Appendix~\ref{sec:preliminaries}, are used.
A continuous scaling of the eigenvalues in $D$ at a constant proportionality rate can be described by a curve $D(t) = \exp(L t) D$ in $\mbox{Diag}^+(p)$ for some $L=\mbox{diag}(l_1,\ldots,l_p) \in \mbox{Diag}(p)$, $t \in \mathbb R$, where $\mbox{Diag}(p)$ is the set of all $p\times p$ real diagonal matrices.
Since $\frac{d}{dt}D(t) = LD(t)$, we call $L$ the \emph{scaling velocity}. Each element $l_i$ of $L$ provides the scaling factor for the $i$th coordinate $d_i$ of $D$.
A rotation of the eigenvectors in the ambient space at a constant ``angular rate'' is described by a curve $U(t) = \exp(At)U$ in $\mbox{SO}(p)$, where $A \in \mathfrak{so}(p)$, the set of antisymmetric matrices (the Lie algebra of $\mbox{SO}(p)$). Since $\frac{d}{dt}U(t) = AU(t)$, we call $A$ the \emph{angular velocity}.
Incorporating the scaling and rotation together results in the general scaling--rotation curve (introduced in \cite{Schwartzman2006}),
\begin{equation}\label{eq:sc-rot-curve}
\chi(t) = \chi(t; U,D,A,L) = \exp(At)U D\exp(Lt) U'\exp(A't) \in \mbox{Sym}^+(p), \quad t \in \mathbb R.
\end{equation}
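As a concrete illustration (ours, not part of the paper), the curve \eqref{eq:sc-rot-curve} can be evaluated directly with a matrix exponential; the parameters below are arbitrary choices for $p=3$.
```python
# Minimal sketch (ours) of the scaling--rotation curve: A is the angular
# velocity (antisymmetric), L the scaling velocity (diagonal); the
# eigenvalues of chi(t) are d_i * exp(l_i t).
import numpy as np
from scipy.linalg import expm

U = np.eye(3)                                   # eigenvector frame
D = np.diag([3.0, 1.0, 0.5])                    # eigenvalues, unordered
A = (np.pi / 4) * np.array([[0.0, -1.0, 0.0],   # rotation about the z-axis
                            [1.0,  0.0, 0.0],
                            [0.0,  0.0, 0.0]])
L = np.diag([0.2, -0.1, 0.4])

def chi(t):
    R = expm(A * t)                             # exp(At); R.T = exp(A't)
    return R @ U @ D @ expm(L * t) @ U.T @ R.T

for t in [0.0, 0.5, 1.0]:
    vals = np.linalg.eigvalsh(chi(t))
    print(f"t = {t:.1f}   eigenvalues = {np.round(vals, 3)}")
```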
\begin{figure}[tb!]
\begin{center}
\includegraphics[width=0.65\textwidth]{Scaling_Rotation_Paper_Fig1.png}
\end{center}
\caption{Scaling--rotation curves in $\mbox{Sym}^+(3)$: (top) pure rotation with rotation axis normal to the screen, (middle) individual scaling along principal axes without any rotation, and (bottom) simultaneous scaling and rotation. The rotation axis is shown as a black line segment. The ellipsoids are colored by the direction of principal axes, to help visualize the effect of rotation.
\label{fig:scarotdeformationsex}}
\end{figure}
The scaling--rotation curve characterizes deformations of $X = \chi(0) \in \mbox{Sym}^+(p)$ so that the ellipsoid corresponding to $X$ is smoothly rotated, and each principal axis stretched and shrunk, as a function of $t$. For $p = 2, 3$, the matrix $A$ gives the axis and angle of rotation (\emph{cf}. Appendix~\ref{sec:preliminaries}).
Fig.~\ref{fig:scarotdeformationsex} illustrates discretized trajectories of scaling--rotation curves in $\mbox{Sym}^+(3)$, visualized by the corresponding ellipsoids.
These curves in general do not coincide with straight lines or geodesics in other geometric frameworks such as \cite{Wang2004,Moakher2005,Pennec2006,Lenglet2006,Fletcher2007,Arsigny2007,Dryden2009,collard2012anisotropy,yang2012feature}.
In section \ref{sec:minimalscrotcurve}, we introduce a Riemannian metric which reproduces these scaling--rotation curves as images of geodesics.
Given two points $X,Y \in \mbox{Sym}^+(p)$, we will define the distance between them as the length of a scaling--rotation curve $\chi(t)$ that joins $X$ and $Y$. Thus it is of interest to identify the parameters of the curve $\chi(t)$ that starts at $X= \chi(0)$ and meets $Y = \chi(1)$ at $t=1$.
From eigen-decompositions of $X$ and $Y$, $X = UDU'$, $Y = V\Lambda V'$, we could equate $\chi(1)$ and $V\Lambda V'$,
and naively solve for eigenvector matrix and eigenvalue matrix separately, leading to
$ A = \log(VU') \in \mathfrak{so}(p)$, $L = \log(D^{-1}\Lambda) \in \mbox{Diag}(p).$
This solution is generally correct if the eigen-decompositions of $X$ and $Y$ are chosen carefully (see Theorem~\ref{thm:main_result}). The difficulty is that there are many other scaling--rotation curves that also join $X$ and $Y$, due to the non-uniqueness of the eigen-decomposition. Thus it is required to consider a \emph{minimal} scaling--rotation curve among all such curves.
\section{Minimal scaling--rotation curves in $\mbox{Sym}^+(p)$ }\label{sec:minimalscrotcurve}
\subsection{Decomposition of SPD matrices into scaling and rotation components}
An SPD matrix $X$ can be eigen-decomposed into a matrix of eigenvectors $U \in \mbox{SO}(p)$ and a diagonal matrix $D \in \mbox{Diag}^+(p)$ of eigenvalues. In general, there are many pairs $(U,D)$ such that $X = UDU'$. Denote the set of all pairs $(U,D)$ by
$$(\mbox{SO} \times \mbox{Diag}^+)(p) = \mbox{SO}(p) \times \mbox{Diag}^+(p).$$
We use the following notations:
\begin{definition} For all pairs $(U,D) \in (\mbox{SO} \times \mbox{Diag}^+)(p)$ such that $X = UDU'$,
\begin{romannum}
\item An eigen-decomposition $(U,D)$ of $X$ is called an (unobservable) \emph{version} of $X$ in $(\mbox{SO} \times \mbox{Diag}^+)(p)$;
\item $X$ is the \emph{eigen-composition} of $(U,D)$, defined by a mapping $c: (\mbox{SO} \times \mbox{Diag}^+)(p) \to \mbox{Sym}^+(p)$, $c(U,D) = UDU' = X$.
\end{romannum}
\end{definition}
The many-to-one mapping $c$ from $(\mbox{SO} \times \mbox{Diag}^+)(p)$ to $\mbox{Sym}^+(p)$ is surjective. (The symbol $c$ stands for \emph{composition}.)
Fig.~\ref{fig:representations} illustrates the relationship between an SPD matrix and its many versions (eigen-decompositions). While $\mbox{Sym}^+(p)$ is an open cone, the set $(\mbox{SO} \times \mbox{Diag}^+)(p)$ can be understood as the boundary of a generalized cylinder, \emph{i.e.}, $(\mbox{SO} \times \mbox{Diag}^+)(p)$ forms the shape of a cylinder whose cross-section is ``spherical'' ($\mbox{SO}(p)$) and whose cross-section centers lie in the positive orthant of $\mathbb R^p$, \emph{i.e.}, $\mbox{Diag}^+(p)$. The set $(\mbox{SO} \times \mbox{Diag}^+)(p)$ is a complete Riemannian manifold, as described below in Section~\ref{sec:RiemannianFramework}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\textwidth]{representations.pdf}
\caption{An SPD matrix $X$ and its versions in $(\mbox{SO} \times \mbox{Diag}^+)(p)$. The eigen-composition of $(U,D)$ is depicted as a many-to-one mapping from $(\mbox{SO} \times \mbox{Diag}^+)(p)$ to $\mbox{Sym}^+(p)$.\label{fig:representations}}
\end{figure}
Note that considering $(\mbox{SO} \times \mbox{Diag}^+)(p)$ as the set of all possible eigen-decompositions is an important relaxation of the usual ordered eigenvalue assumption.
We will see in the subsequent sections that this is necessary to describe the desired family of deformations. As an example, the scaling--rotation curve depicted at the middle row of Fig.~\ref{fig:scarotdeformationsex} is made possible by allowing \emph{unordered} eigenvalues. Moreover, our manifold $(\mbox{SO} \times \mbox{Diag}^+)(p)$ has no boundaries, which not only allows us to handle \emph{equal} eigenvalues but also makes the applied Riemannian geometry simple.
We first discuss which elements of $(\mbox{SO} \times \mbox{Diag}^+)(p)$ are the versions of any given SPD matrix $X$.
\begin{definition}\label{defn:permutation&signchange}
Let $S_p$ denote the symmetric group, i.e., the group of permutations of the set $\{1,\ldots,p\}$, for $p \ge 2$. A permutation $\pi \in S_p$ is a bijection $\pi: \{1,\ldots,p\} \to \{1,\ldots,p\}$.
Let $\boldsymbol{\sigma}_p = \{(\epsilon_1,\ldots,\epsilon_p) \in \mathbb R^p : \epsilon_i \in \{ \pm 1\}, 1\le i \le p \}$ and $\boldsymbol{\sigma}_p^+ = \{ (\epsilon_1,\ldots,\epsilon_p) \in \boldsymbol{\sigma}_p : \prod_{i=1}^p \epsilon_i = 1\}.$
\begin{romannum}
\item
For a permutation $\pi \in S_p$, its \emph{permutation matrix} is the $p \times p$ matrix $P_\pi^0$ whose entries are all 0 except that, in column $i$, the entry in row $\pi(i)$ equals $1$. Moreover, define
$P_\pi = P_\pi^0$ if $\det(P_\pi^0) = 1$,
$P_\pi = \begin{bmatrix}
-1 &\mathbf 0' \\
\mathbf 0 & I_{p-1} \\
\end{bmatrix}P_\pi^0$ if $\det(P_\pi^0) = -1$.
\item
For $\sigma = (\epsilon_1,\ldots,\epsilon_p) \in \boldsymbol{\sigma}_p$, its associated \emph{sign-change matrix} is the $p \times p$ diagonal matrix $I_\sigma$ whose $i$th diagonal element is $\epsilon_i$. If $\sigma \in \boldsymbol{\sigma}_p^+$, we call $I_\sigma$ an \emph{even sign-change matrix}.
\item For any $D \in \mbox{Diag}(p)$, the \emph{stabilizer subgroup} of $D$ is $G_D = \{R \in \mbox{SO}(p) : RDR' = D \}$.
\end{romannum}
\end{definition}
For any $\sigma \in \boldsymbol{\sigma}_p^+$ and $\pi \in S_p$, we have $P_\pi, I_\sigma \in \mbox{SO}(p)$. The number of different permutations (or sign-changes) is $p!$ (or $2^{p-1}$, respectively). These two types of matrices provide the operations of permutation and sign-change in the eigenvalue decomposition. In particular,
for $U \in \mbox{SO}(p)$, the column-permuted $U$, by a permutation $\pi \in S_p$, is $U P_\pi' \in \mbox{SO}(p)$, and the sign-changed $U$, by $\sigma \in \boldsymbol{\sigma}_p^+$, is $U I_\sigma \in \mbox{SO}(p)$.
For $D = \mbox{diag}(d_1,\ldots,d_p)$, define $\pi \cdot D = \mbox{diag} (d_{\pi^{-1}(1)},\ldots,d_{\pi^{-1}(p)}) \in \mbox{Diag}(p)$ as the diagonal matrix whose elements are permuted by $\pi \in S_p$. $D_\pi:= P_\pi D P_\pi'$ is exactly the diagonal matrix $\pi \cdot D$. The same is true if $P_\pi$ is replaced by $I_\sigma P_\pi$, for any $\sigma \in \boldsymbol{\sigma}_p$. Finally, for any $\sigma \in \boldsymbol{\sigma}_p^+$ and $\pi \in S_p$, there exists $\sigma^0 \in \boldsymbol{\sigma}_p$ such that $I_{\sigma^0} P_\pi^0 = I_\sigma P_\pi$.
\begin{theorem}\label{thm:versions}
Every version of $X = UDU'$ is of the form
$(U^*,D^*) = (URP_\pi', D_\pi),$
for $R \in G_D$, $\pi \in S_p$, and $D_\pi = P_\pi D P_\pi'$.
Moreover, if the eigenvalues of $X$ are all distinct, every $R \in G_D$ is an even sign-change matrix $I_\sigma$, $\sigma \in \boldsymbol{\sigma}_p^+$.
\end{theorem}
\begin{remark}
{\rm
If the eigenvalues of $X$ are all distinct, there are exactly $p! 2^{p-1}$ eigen-decompositions of $X$. In such a case, all versions of $X$ can be explicitly obtained by application of permutations and sign-changes to any version $(U,D)$ of $X$.
}\end{remark}
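The count $p!\,2^{p-1}$ is easy to verify numerically. The sketch below (ours, not part of the paper) enumerates all sign-change/permutation combinations of one eigen-decomposition and keeps those that stay in $\mbox{SO}(p)$; every one of them reproduces $X$.
```python
# Minimal sketch (ours): enumerate all p! * 2^{p-1} versions of an SPD
# matrix with distinct eigenvalues (cf. Theorem thm:versions).
import itertools, math
import numpy as np

p = 3
rng = np.random.default_rng(3)
M = rng.standard_normal((p, p))
X = M @ M.T + p * np.eye(p)            # generic SPD, distinct eigenvalues
D0, U0 = np.linalg.eigh(X)
if np.linalg.det(U0) < 0:
    U0[:, 0] *= -1                     # put U0 into SO(p)

count = 0
for perm in itertools.permutations(range(p)):
    for signs in itertools.product([1.0, -1.0], repeat=p):
        U = (U0 * signs)[:, list(perm)]   # sign-change, then permute columns
        if np.linalg.det(U) < 0:          # keep rotations only
            continue
        D = np.diag(D0[list(perm)])
        assert np.allclose(U @ D @ U.T, X)
        count += 1
print(f"{count} versions, expected {math.factorial(p) * 2 ** (p - 1)}")
```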
\begin{remark} \label{remark:RinRDR=R}
{\rm
If the eigenvalues of $X$ are not all distinct, there are infinitely many eigen-decompositions of $X$ due to the arbitrary rotation $R$ of eigenvectors. The stabilizer group of $D$, $G_D$, to which $R$ belongs in Theorem~\ref{thm:versions}, does not depend on particular eigenvalues but only on which eigenvalues are equal. More precisely,
for $D = \mbox{diag}(d_1,\ldots,d_p) \in \mbox{Diag}^+(p)$, let $\mathcal{J}_D$ be the partition of coordinate indices $\{1,\ldots,p\}$ determined by $D$, \emph{i.e.}, for which $i$ and $j$ are in the same block if and only if $d_i = d_j$. A block can consist of non-consecutive numbers.
For a partition $\mathcal{J} = \{J_1,\ldots, J_r\}$ with $r$ blocks, let $\{W_1,\ldots, W_r\} = \{\mathbb R^{J_1},\ldots, \mathbb R^{J_r}\}$ denote the corresponding subspaces of $\mathbb R^p$; $x \in \mathbb R^{J_i}$ if and only if the $j$th coordinate of $x$ is $0$ for all $j \notin J_i$.
The stabilizer $G_D$ depends only on the partition $\mathcal{J}_D$.
Define $G_\mathcal{J} \subset \mbox{SO}(p)$ by
\begin{equation}\label{eq:Liesubgroup}
G_\mathcal{J} = \{R \in \mbox{SO}(p) : RW_i = W_i, 1 \le i \le r \}.
\end{equation}
Then $G_D = G_{\mathcal{J}_D}$.
As an illustration, let $D = \mbox{diag}(1,1,2)$. Then $\mathcal{J}_D = \{\{1,2\},\{3\}\}$. An example of $R \in G_D$ is a $3 \times 3$ block-diagonal matrix where the first $2 \times 2$ block is any $R_1 \in \mbox{SO}(2)$ and the last diagonal element is $r_2 = 1$. Intuitively, $RDR'$ with this choice of $R$ behaves as if the first $2 \times 2$ block of $D$, $D_1$, is arbitrarily rotated. Since $D_1 = I_2$, rotation makes no difference. Another example is given by setting $R_1 \in \mbox{O}(2)$ with $\det (R_1) = -1$ and $r_2 = -1$.
}\end{remark}
\subsection{A Riemannian framework for scaling and rotation of SPD matrices}\label{sec:RiemannianFramework}
The set of rotation matrices $\mbox{SO}(p)$ is a $p(p-1)/2$-dimensional smooth Riemannian manifold equipped with the usual Riemannian inner product for the tangent space \cite[Ch. 18]{Gallier2011}. The set of positive diagonal matrices $\mbox{Diag}^+(p)$ is also a $p$-dimensional smooth Riemannian manifold. The set $(\mbox{SO} \times \mbox{Diag}^+)(p)$, being a direct product of two smooth and complete manifolds, is a complete Riemannian manifold \cite{Small1996,Absil2009}. We state some geometric facts necessary to our discussion.
\begin{lemma}\label{lem:SR1}
\begin{romannum}
\item $(\mbox{SO} \times \mbox{Diag}^+)(p)$ is a differentiable manifold of dimension $p + p(p-1)/2$.
\item $(\mbox{SO} \times \mbox{Diag}^+)(p)$ is the image of $ \mathfrak{so}(p) \times \mbox{Diag}(p)$ under the exponential map
$ \mathrm{Exp}((A,L)) = (\exp(A),\exp(L))$, $(A,L) \in \mathfrak{so}(p) \times \mbox{Diag}(p).$
\item The tangent space $\tau(I,I)$ to $(\mbox{SO} \times \mbox{Diag}^+)(p)$ at the identity $(I,I)$ can be naturally identified as a copy of $\mathfrak{so}(p) \times \mbox{Diag}(p)$.
\item The tangent space $\tau(U,D)$ to $(\mbox{SO} \times \mbox{Diag}^+)(p)$ at an arbitrary point $(U,D)$ can be naturally identified as the set $\tau(U,D) = \{(AU,LD): A \in \mathfrak{so}(p), L \in \mbox{Diag}(p)\}$.
\end{romannum}
\end{lemma}
Our choice of Riemannian inner product at $(U,D)$ for two tangent vectors $(A_1U,L_1D)$ and $(A_2U,L_2D)$ is
\begin{align}
\langle (A_1U,L_1D), (A_2U,L_2D) \rangle_{(U,D)}
&= \frac{k}{2} \langle U'A_1U, U'A_2U \rangle +
\langle D^{-1}L_1D, D^{-1}L_2D \rangle \nonumber\\
& = \frac{k}{2}\mbox{trace}(A_1A_2') + \mbox{trace}(L_1L_2),\quad k>0, \label{eq:RiemmanianMetric}
\end{align}
where $\langle X,Y \rangle$ for $X,Y \in \mbox{GL}(p)$ denotes the Frobenius inner product $\langle X,Y \rangle = \mbox{trace} (XY')$.
\cite{collard2012anisotropy} used a structure similar to (\ref{eq:RiemmanianMetric}), with the scaling factor $k$ being a function of $D$, to motivate their distance function. We use $k = 1$ for all of our illustrations in this paper. The practical effect of using different values of $k$ is discussed in the supplementary material.
For any fixed $k$, we show that this choice of Riemannian inner product leads to interpretable distances with invariance properties (\emph{cf}. Proposition~\ref{prop:invariance_geoddist} and Theorem~\ref{thm:1alternative}).
The exponential map from a tangent space $\tau(U,D)$ to $(\mbox{SO} \times \mbox{Diag}^+)(p)$ is
$\mathrm{Exp}_{(U,D)}: \tau(U,D) \to (\mbox{SO} \times \mbox{Diag}^+)(p)$,
$$
\mathrm{Exp}_{(U,D)}((AU,LD)) = (U\exp(U'AU), D \exp(D^{-1}LD))
= ( \exp(A)U, \exp(L)D ).
$$
The inverse of exponential map is $\mathrm{Log}_{(U,D)}: (\mbox{SO} \times \mbox{Diag}^+)(p) \to \tau(U,D)$,
$$
\mathrm{Log}_{(U,D)}((V,\Lambda)) = (U\log(U'V), D \log(D^{-1} \Lambda ))
= ( \log(VU')U , \log(\Lambda D^{-1}) D) .
$$
A geodesic in $(\mbox{SO} \times \mbox{Diag}^+)(p)$ starting at $(U,D)$ with initial direction $(AU,LD) \in \tau(U,D)$ is parameterized as
\begin{equation}\label{eq:geodesicformula}
\gamma(t) = \gamma(t; U,D, A, L) = \mathrm{Exp}_{(U,D)}( (AUt,LDt) ).
\end{equation}
The inner product (\ref{eq:RiemmanianMetric}) provides the geodesic distance function on $(\mbox{SO} \times \mbox{Diag}^+)(p)$.
Specifically, the squared geodesic distance from $(U,D)$ to $(V,\Lambda)$ is
\begin{align}
d^2\left((U,D),(V,\Lambda) \right)
& = \langle (AU,LD), (AU,LD) \rangle_{(U,D)} \label{eq:geoddist}\\
& = k d_{{\rm SO}(p)}(U,V)^2 + d_{\mathcal{D}}(D, \Lambda)^2, \quad k>0, \nonumber
\end{align}
where $A = \log(VU')$ , $L = \log(\Lambda D^{-1})$, $d_{{\rm SO}(p)}(U_1,U_2)^2 = \frac{1}{2}\norm{\log(U_2U_1')}_F^2$,
$d_{\mathcal{D}} (D_1,D_2)^2 = \norm{\log(D_2 D_1^{-1})}_F^2$, and $\| \ \|_F$ is the Frobenius norm.
The geodesic distance (\ref{eq:geoddist}) is a metric, well-defined for any $(U,D)$ and $(V,\Lambda) \in (\mbox{SO} \times \mbox{Diag}^+)(p)$, and is the length of the minimal geodesic curve $\gamma(t)$ that joins the two points.
Note that for any two points $(U,D)$ and $(V,\Lambda)$, there are infinitely many geodesics that connect the two points, just like there are many ways of wrapping a cylinder with a string.
There is, however, a unique minimal-length geodesic curve that connects $(U,D)$ and $(V,\Lambda)$ if $VU'$ is not an involution \cite{Moakher2002}. (A rotation matrix $R$ is an \emph{involution} if $R \neq I$ and $R^2 = I$.)
For $p = 2,3$, $R$ is an involution if it is a rotation through angle $\pi$, in which case there are exactly two shortest-length geodesic curves.
If $VU'$ is an involution, then $V$ and $U$ are said to be antipodal in $\mbox{SO}(p)$, and the matrix logarithm of $VU'$ is not unique (there is no principal logarithm), but as discussed in Appendix~\ref{sec:preliminaries} $\log(VU')$ means any solution $A$ of $\exp(A) = VU'$ whose Frobenius norm is the smallest among all such $A$.
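For concreteness, a direct Python sketch of the squared distance (\ref{eq:geoddist}) is given below; it assumes $VU'$ is not an involution, so that \texttt{logm} returns the principal logarithm, and the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def geodesic_distance_sq(U, D, V, Lam, k=1.0):
    """k * d_SO(U,V)^2 + d_D(D,Lam)^2, cf. eq. (geoddist)."""
    A = np.real(logm(V @ U.T))
    L = np.real(logm(Lam @ np.linalg.inv(D)))
    d_SO_sq = 0.5 * np.linalg.norm(A, 'fro') ** 2
    d_D_sq = np.linalg.norm(L, 'fro') ** 2
    return k * d_SO_sq + d_D_sq
\end{verbatim}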
\begin{proposition}\label{prop:invariance_geoddist}
The geodesic distance (\ref{eq:geoddist}) on $ (\mbox{SO} \times \mbox{Diag}^+)(p)$ is invariant under simultaneous left or right multiplication by orthogonal matrices, permutations and scaling: For any $R_1,R_2 \in \mbox{O}(p)$, $\pi \in S_p$ and $S \in \mbox{Diag}^+(p)$, and for any $(U,D),(V,\Lambda) \in (\mbox{SO} \times \mbox{Diag}^+)(p)$,
$ d\left((U,D),(V,\Lambda) \right)
= d\left((R_1U R_2,S D_\pi),(R_1V R_2 ,S\Lambda_\pi) \right).$
\end{proposition}
\subsection{Scaling--rotation curves as images of geodesics}
We can give a precise characterization of scaling--rotation curves using the Riemannian manifold $(\mbox{SO} \times \mbox{Diag}^+)(p)$.
In particular, any geodesic in $(\mbox{SO} \times \mbox{Diag}^+)(p)$ determines a scaling--rotation curve in $\mbox{Sym}^+(p)$.
The geodesic (\ref{eq:geodesicformula}) gives rise to the scaling--rotation curve $\chi(t) = \chi(t; U,D,A,L) \in \mbox{Sym}^+(p)$ (\ref{eq:sc-rot-curve}), by the eigen-composition $c \circ \gamma = \chi$.
On the other hand, a scaling--rotation curve $\chi$ corresponds to many geodesics in $(\mbox{SO} \times \mbox{Diag}^+)(p)$.
To characterize the family of geodesics corresponding to a single curve $\chi(t)$, the following notations are used. For a partition $\mathcal{J}$ of the set $\{1,\ldots,p\}$, $G_\mathcal{J}$ denotes the Lie subgroup of $\mbox{SO}(p)$ defined in (\ref{eq:Liesubgroup}).
Let $\mathfrak g_\mathcal{J}$ denote the Lie algebra of $G_\mathcal{J}$. Then,
$$\mathfrak g_\mathcal{J} = \{A \in \mathfrak{so}(p) : A_{ij} = 0 \mbox{ for } i \not\sim j\} \subset \mathfrak{so}(p), $$
where $i \not\sim j$ if $i$ and $j$ are in different blocks of $\mathcal{J}$.
For $D \in \mbox{Diag}(p)$, recall from Remark~\ref{remark:RinRDR=R} that $\mathcal{J}_D$ is the partition determined by eigenvalues of $D$, $G_D = G_{\mathcal{J}_D}$ and define
$\mathfrak g_D = \mathfrak g_{\mathcal{J}_D}$.
For $D, L \in \mbox{Diag}(p)$, let $\mathcal{J}_{D,L} $ be the common refinement of $\mathcal{J}_D $ and $ \mathcal{J}_L$ so that $i$ and $j$ are in the same block of $\mathcal{J}_{D,L}$ if and only if $d_i = d_j$ and $l_i = l_j$.
Define $G_{D,L} = G_{\mathcal{J}_{D,L}} = G_D \cap G_L$, and let $\mathfrak g_{D,L} = \mathfrak g_{\mathcal{J}_{D,L}} = \mathfrak g_D \cap \mathfrak g_L$ be the Lie algebra of $G_{D,L}$.
Finally, for $B \in \mathfrak{so}(p)$, let $\mbox{ad}_B: \mathfrak{so}(p) \to \mathfrak{so}(p)$ be the linear map defined by $\mbox{ad}_B(C) = [B,C] = BC - CB$.
\begin{theorem}\label{thm:1alternative}
Let $(U,D,A,L)$ be the parameters of a scaling--rotation curve $\chi(t)$ in $\mbox{Sym}^+(p)$. Let $I$ be a positive-length interval containing $0$.
Then a geodesic $\gamma: I \to (\mbox{SO} \times \mbox{Diag}^+)(p)$ is identified with $\chi$, \emph{i.e.}, $\chi(t) = c (\gamma(t)),$ for all $t \in I$, if and only if $\gamma(t) = \gamma(t; URP_\pi' ,D_\pi, B, L_\pi)$ for some $\pi \in S_p$, $R \in G_{D,L}$, and $B \in \mathfrak{so}(p)$ satisfying both (i) $\tilde{B} - \tilde{A} \in \mathfrak g_{D,L}$, where $\tilde{B} = U'BU$ and $\tilde{A} = U'AU$, and (ii) $(\mbox{ad}_{\tilde{B}})^j(\tilde{A}) \in \mathfrak g_{D,L}$ for all $j \ge 1$.
\end{theorem}
Note that the conjugation $\tilde{A} = U'AU$ expresses the infinitesimal rotation parameter $A$ in the coordinate system determined by $U$.
If $A$ in Theorem~\ref{thm:1alternative} is such that $\tilde{A} \in \mathfrak g_{D,L}$, then the conditions (i) and (ii) are equivalent to $\tilde{B} \in \mathfrak g_{D,L}$. If $p = 2$ or $3$ and $\tilde{A} \not\in \mathfrak g_{D,L}$, then the condition is $\tilde{B} = \tilde{A}$.
It is worth emphasizing a special case where there are only finitely many geodesics corresponding to a scaling--rotation curve $\chi(t)$.
\begin{corollary}\label{cor:them3.8}
Suppose, for some $t$, $\chi(t) = \chi(t; U,D,A,L)$ is an SPD matrix with distinct eigenvalues. Then $\chi$ corresponds to only finitely many $(p!2^{p-1})$ geodesics
$\gamma(t) = \gamma(t ; U I_\sigma P_\pi',D_\pi, A, L_\pi)$, where $\pi \in S_p$ and $\sigma \in \boldsymbol{\sigma}_p^+$.
\end{corollary}
\subsection{Scaling--rotation distance between SPD matrices}
In $(\mbox{SO} \times \mbox{Diag}^+)(p)$, consider the set of all elements whose eigen-composition is $X$:
$$\mathcal{E}_X = \{(U,D)\in (\mbox{SO} \times \mbox{Diag}^+)(p) : X = UDU'\}.$$
Since the eigen-composition is a surjective mapping, the collection of these sets $\mathcal{E}_X$ partitions the manifold $(\mbox{SO} \times \mbox{Diag}^+)(p)$. The set $\mathcal{E}_X = c^{-1}(X)$ is called the fiber over $X$.
Theorem~\ref{thm:versions} above characterizes all members of $\mathcal{E}_X$ for any $X$.
It is natural to define a distance between $X$ and $Y$ $\in \mbox{Sym}^+(p)$ to be the length of the shortest geodesic connecting $\mathcal{E}_X$ and $\mathcal{E}_Y \subset (\mbox{SO} \times \mbox{Diag}^+)(p)$.
\begin{definition}
For $X,Y \in \mbox{Sym}^+(p)$, the scaling--rotation distance is defined as
\begin{equation}\label{eq:quotientdistance}
d_{\mathcal{S}\mathcal{R}} (X,Y)
:= \inf_{ \substack{
(U,D) \in \mathcal{E}_X, \\
(V,\Lambda) \in \mathcal{E}_Y }
} d( (U,D), (V,\Lambda) ),
\end{equation}
where $d(\cdot,\cdot)$ is the geodesic distance function (\ref{eq:geoddist}).
\end{definition}
The geodesic distance $d( (U,D), (V,\Lambda) )$ measures the length of the shortest geodesic segment connecting $(U,D)$ and $(V,\Lambda)$.
Any geodesic, mapped to $\mbox{Sym}^+(p)$ by the eigen-composition, is a scaling--rotation curve connecting $X = UDU'$ and $Y = V\Lambda V'$. In this sense, the scaling--rotation distance $d_{\mathcal{S}\mathcal{R}}$ measures the minimum amount of smooth deformation from $X$ to $Y$ (or vice versa) only by the rotation of eigenvectors and individual scaling of eigenvalues.
Note that $d_{\mathcal{S}\mathcal{R}}$ on $\mbox{Sym}^+(p)$ is well-defined and the infimum is actually achieved, as both $\mathcal{E}_X$ and $\mathcal{E}_Y$ are non-empty and compact.
It has desirable invariance properties, and is a semi-metric on $\mbox{Sym}^+(p)$.
\begin{theorem}\label{thm:properties}
For any $X,Y \in \mbox{Sym}^+(p)$, the scaling--rotation distance $d_{\mathcal{S}\mathcal{R}}$ is
\begin{romannum}
\item invariant under matrix inversion, i.e., $d_{\mathcal{S}\mathcal{R}}(X,Y) = d_{\mathcal{S}\mathcal{R}}(X^{-1},Y^{-1})$,
\item invariant under simultaneous uniform scaling and conjugation by a rotation matrix, i.e., $d_{\mathcal{S}\mathcal{R}}(X,Y) = d_{\mathcal{S}\mathcal{R}}(s RXR' ,s RYR' )$ for any $s > 0$, $R \in \mbox{SO}(p)$,
\item a semi-metric on $\mbox{Sym}^+(p)$. That is,
$d_{\mathcal{S}\mathcal{R}}(X,Y) \ge 0$,
$d_{\mathcal{S}\mathcal{R}}(X,Y) = 0$ if and only if $X =Y$, and
$d_{\mathcal{S}\mathcal{R}}(X,Y) = d_{\mathcal{S}\mathcal{R}}(Y,X)$.
\end{romannum}
\end{theorem}
Although $d_{\mathcal{S}\mathcal{R}}$ is not a metric on the entire set $\mbox{Sym}^+(p)$, it is a metric on an important subset of $\mbox{Sym}^+(p)$.
\begin{theorem}\label{thm:properties2}
$d_{\mathcal{S}\mathcal{R}}$ is a metric on the set of SPD matrices whose eigenvalues are all distinct.
\end{theorem}
\subsection{Minimal scaling--rotation curves in $\mbox{Sym}^+(p)$}
To evaluate the scaling--rotation distance (\ref{eq:quotientdistance}), it is necessary to find a shortest-length geodesic in $(\mbox{SO} \times \mbox{Diag}^+)(p)$ between the fibers $\mathcal{E}_X$ and $ \mathcal{E}_Y$. There are multiple geodesics connecting two fibers, because each fiber contains at least $p!2^{p-1}$ elements (Theorem~\ref{thm:versions}), as depicted in Fig.~\ref{fig:horizontalgeodesics}.
We think of fibers $\mathcal{E}_X$ arranged vertically in $(\mbox{SO} \times \mbox{Diag}^+)(p)$ with the mapping $c$ (eigen-composition) as downward projection.
It is clear that there exists a geodesic that joins the two fibers with the minimal distance.
We call such a geodesic a \emph{minimal geodesic} for the two fibers $\mathcal{E}_X$ and $\mathcal{E}_Y$.
A necessary, but generally not sufficient, condition for a geodesic to be minimal for $\mathcal{E}_X$ and $\mathcal{E}_Y$ is that it is perpendicular to $\mathcal{E}_X$ and $\mathcal{E}_Y$ at its endpoints.
A pair $((U,D), (V,\Lambda)) \in \mathcal{E}_X \times \mathcal{E}_Y$ is called a \emph{minimal pair} if $(U,D)$ and $(V,\Lambda)$ are connected by a minimal geodesic.
The distance $d_{\mathcal{S}\mathcal{R}} (X,Y)$ is the length of any minimal geodesic segment connecting the fibers $\mathcal{E}_X$ and $ \mathcal{E}_Y$.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\textwidth]{horizontalgeodesics.pdf}
\caption{(left) $(\mbox{SO} \times \mbox{Diag}^+)(2)$ is drawn as a curved manifold. In this picture, the four versions of $X$ (and of $Y$) are displayed vertically. For a fixed version $(U_3,D_3)$ of $X$, there are four geodesics $\gamma_i$ joining $(U_3,D_3)$ and the $i$th version of $Y$. A minimal geodesic ($\gamma_3$ in this figure) has the shortest length among these geodesics.
(right) The fiber $\mathcal{E}_X$ contains infinitely many versions, shown as a vertical dotted curve in $(\mbox{SO} \times \mbox{Diag}^+)(2)$. There exist multiple minimal geodesics $\gamma_i$ with the shortest length, all of which meet the vertical fiber $\mathcal{E}_X$ at a right angle. Here, $\gamma^*$ is an example of a non-minimal geodesic, which does not meet $\mathcal{E}_X$ orthogonally. \label{fig:horizontalgeodesics}}
\end{figure}
\begin{definition}\label{def:minimalSCROTcurve}
Let $X, Y \in \mbox{Sym}^+(p)$.
A scaling--rotation curve $\chi: [0,1] \to \mbox{Sym}^+(p)$, as defined in (\ref{eq:sc-rot-curve}), with $\chi(0) = X$ and $\chi(1) = Y,$ is called \emph{minimal} if $\chi = c\circ\gamma$ for some minimal geodesic segment $\gamma$ connecting $\mathcal{E}_X$ and $\mathcal{E}_Y$.
\end{definition}
\begin{theorem}\label{thm:main_result}
Let $X, Y \in \mbox{Sym}^+(p)$.
Let $((U,D),(V,\Lambda))$ be a minimal pair for $X$ and $Y$, and
let $A = \log(VU'), L = \log(D^{-1} \Lambda)$.
Then the scaling--rotation curve $\chi(t; U,D,A,L)$, $0 \le t \le 1$, is minimal.
\end{theorem}
The above theorem tells us that for any two points $X,Y \in \mbox{Sym}^+(p)$, a minimal scaling--rotation curve is determined by a minimal pair of $\mathcal{E}_X$ and $\mathcal{E}_Y$.
Procedures to evaluate the parameters of the minimal scaling--rotation curve and to compute the scaling--rotation distance are provided for the special cases $p = 2,3$ in Section~\ref{sec:computation}.
The minimal scaling--rotation curve may not be unique.
The following theorem gives sufficient conditions for uniqueness.
\begin{theorem}\label{thm:horizontal_geodesic_and_scarotcurve}
Let $((U,D),(V,\Lambda))$ be a minimal pair for $\mathcal{E}_X$ and $\mathcal{E}_Y$, and let
$\chi_o(t) = \chi(t; U,D, \log(VU'), \log(D^{-1}\Lambda))$ be the corresponding minimal scaling--rotation curve.
\begin{enumerate}
\item[(i)] If either all eigenvalues of $D$ are distinct or $\Lambda$ has only one distinct eigenvalue, and if $(V,\Lambda)$ is the unique minimizer of $d((U,D),(V_0,\Lambda_0))$ among all $(V_0,\Lambda_0) \in \mathcal{E}_Y$, then all minimal geodesics between $\mathcal{E}_X$ and $\mathcal{E}_Y$ are mapped by $c$ to the unique $\chi_o(t)$ in $\mbox{Sym}^+(p)$.
\item[(ii)] If there exists $(V_1,\Lambda_1) \in \mathcal{E}_Y$ such that $(V_1,\Lambda_1) \neq(V,\Lambda)$ and the pair $((U,D),(V_1,\Lambda_1))$ is also minimal, then $\chi_1(t) = \chi(t; U,D, \log(V_1U'), \log(D^{-1}\Lambda_1))$ is also minimal and $\chi_1(t) \neq \chi_o(t)$ for some $t$.
\end{enumerate}
\end{theorem}
The following example shows a case with a unique minimal scaling--rotation curve, and two cases exhibiting non-uniqueness.
\emph{Example}.
{\rm
Consider $X = \mbox{diag}(e,e^{-1})$ and $Y = R_\theta (2X) R_\theta'$, where $R_\theta$ is the $2 \times 2$ rotation matrix
by counterclockwise angle $\theta$.
\emph{(i)} If $\theta = \pi/3$, then there exists a unique minimal scaling--rotation curve between $X,Y$. This ideal case is depicted in Fig.~\ref{fig:example_paper}, where among the four scaling--rotation curves, the red curve $\chi_4$ is minimal as indicated by the length of the curves. In the upper right panel, a version $(I,X)$ of $X$, depicted as a diamond, and a version of $Y$ are joined by the red minimal geodesic segment.
\begin{figure}[tb!]
\centering
\includegraphics[width=1\textwidth]{Scaling_Rotation_Paper_Fig4.png}
\vspace{-0.8in}
\caption{ Two SPD matrices $X$ (blue) and $Y$ (green) in the cone of $\mbox{Sym}^+(2)$ (top left), and their four versions in a flattened $(\mbox{SO} \times \mbox{Diag}^+)(2)$ (top right). The eigen-composition of each shortest geodesic connecting versions of $X$ and $Y$ is a scaling--rotation curve in $\mbox{Sym}^+(2)$. Different colors represent four different such curves. The red scaling--rotation curve has the shortest geodesic distance in $(\mbox{SO} \times \mbox{Diag}^+)(2)$, and thus is minimal. Its trajectory is shown as the deformation of ellipses in the bottom panel (from leftmost $X$ to rightmost $Y$). \label{fig:example_paper}}
\end{figure}
\emph{(ii)} Suppose $\theta = \pi/2$. There are two minimal scaling--rotation curves, one by uniform scaling and counterclockwise rotation, the other by the same uniform scaling but by clockwise rotation.
\emph{(iii)} Let $X = \mbox{diag}(e^{\epsilon/2},e^{-\epsilon/2})$ and $Y = R_\theta X R_\theta'$.
For $0\le\theta \le \pi/2$,
\begin{align*}
d_{\mathcal{S}\mathcal{R}}(X,Y)& = \min\left\{\theta, \sqrt{(\frac{\pi}{2}-\theta)^2 + 2\epsilon^2} \right\}
= \left\{
\begin{array}{ll}
\theta, & \hbox{if } \theta \le \frac{\pi}{4} + \frac{2\epsilon^2}{\pi}, \\
\sqrt{(\frac{\pi}{2}-\theta)^2 + 2\epsilon^2}, & \hbox{otherwise.}
\end{array}
\right.
\end{align*}
If the rotation angle is less than 45 degrees or the SPD matrices are highly anisotropic (large $\epsilon$), then the minimal scaling--rotation curve is a pure rotation (leading to the distance $\theta$). On the other hand, if the matrices are close to being isotropic (eigenvalues $\approx 1$), the minimal scaling--rotation curve is given by simultaneous rotation and scaling. An exceptional case arises when $\theta = \frac{\pi}{4} + \frac{2\epsilon^2}{\pi}<\frac{\pi}{2}$, where both curves have the same length, and there are two minimal scaling--rotation curves.
}
\section{Computation of the minimal scaling--rotation curve and scaling--rotation distance}\label{sec:computation}
We provide computation procedures for the scaling--rotation distance $d_{\mathcal{S}\mathcal{R}}(X,Y)$ for $X,Y \in \mbox{Sym}^+(2)$ or $\mbox{Sym}^+(3)$. Theorems~\ref{thm:minimaldistancep=2} and \ref{thm:minimaldistancep=3} below provide the minimal pair(s), from which the minimal scaling--rotation curve is obtained via Theorem~\ref{thm:main_result} above.
\subsection{Scaling--rotation distance for $2 \times 2$ SPD matrices}
Let $(d_1,d_2)$ be the eigenvalues of $X$, $(\lambda_1,\lambda_2)$ the eigenvalues of $Y$.
\begin{theorem}\label{thm:minimaldistancep=2}
Given any $2 \times 2$ SPD matrices $X$ and $Y$, the distance (\ref{eq:quotientdistance}) is computed as follows.
\begin{romannum}
\item If $d_1 \neq d_2$ and $\lambda_1 \neq \lambda_2$, then there are exactly four versions of $X$, denoted by $(U_i, D_i), i=1,\ldots,4$, and for any version $(V,\Lambda)$ of $Y$,
\begin{equation} \label{eq:thm:minimaldistancep=2}
d_{\mathcal{S}\mathcal{R}}(X,Y) = \min_{i=1,\ldots,4} d((U_i, D_i), (V,\Lambda) ).
\end{equation}
These versions are given by permutations and sign changes.
\item If $d_1 = d_2$, then for any version $(V,\Lambda)$ of $Y$,
$d_{\mathcal{S}\mathcal{R}}(X,Y) = d((V, D), (V,\Lambda) ),$
regardless of whether the eigenvalues of $Y$ are distinct or not.
\end{romannum}
\end{theorem}
Therefore, the minimizer $(U_o, D_o)$ of (\ref{eq:thm:minimaldistancep=2}) and $(V,\Lambda)$ form a minimal pair in case (\emph{i}); $((V, D), (V,\Lambda))$ is a minimal pair in case (\emph{ii}).
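Case (\emph{i}) lends itself to a short numerical sketch. The helper names below are ours, and the code reuses \texttt{geodesic\_distance\_sq} from the earlier sketch; the version count $2!\,2^{2-1} = 4$ matches Theorem~\ref{thm:minimaldistancep=2}(i).
\begin{verbatim}
import itertools
import numpy as np

def versions_2x2(X):
    """All (U, D) with U in SO(2), D diagonal, U D U' = X
    (X is assumed to have distinct eigenvalues)."""
    w, Q = np.linalg.eigh(X)
    out = []
    for perm in itertools.permutations([0, 1]):
        for signs in itertools.product([1.0, -1.0], repeat=2):
            U = Q[:, list(perm)] * np.array(signs)
            if np.isclose(np.linalg.det(U), 1.0):  # keep rotations only
                out.append((U, np.diag(w[list(perm)])))
    return out  # exactly four versions

def d_SR_2x2(X, Y, k=1.0):
    V, Lam = versions_2x2(Y)[0]  # any fixed version of Y
    return min(np.sqrt(geodesic_distance_sq(U, D, V, Lam, k))
               for U, D in versions_2x2(X))
\end{verbatim}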
\subsection{Scaling--rotation distance for $3 \times 3$ SPD matrices}
Let $X,Y \in \mbox{Sym}^+(3)$.
Let $(d_1,d_2,d_3)$ be the eigenvalues of $X$, $(\lambda_1,\lambda_2,\lambda_3)$ the eigenvalues of $Y$, without any given ordering.
In order to separately analyze and catalogue all cases of eigenvalue multiplicities in Theorem~\ref{thm:minimaldistancep=3} below, we will use the following details for the case where an eigenvalue of $X$ is of multiplicity 2.
For any version ($U,D$) with $D = \mbox{diag}(d_1,d_1,d_3)$, $d_1=d_2$, all other versions of $X$ are of the form $(UR_1P_\pi', D_\pi)$ for permutation $\pi$ and rotation matrix $R_1 \in G_D$ (Theorem~\ref{thm:versions}).
We can take $R_1 = RI_\sigma$ for some $\sigma \in \boldsymbol{\sigma}_p^+$ and some block-diagonal rotation matrix $R$ with $+1$ in the lower right corner.
For fixed $(U,D)$, $(V,\Lambda)$, $\sigma \in \boldsymbol{\sigma}_p^+$ and $\pi \in S_p$, one can find a \textit{minimal rotation} $\hat{R}_{\sigma,\pi}$ satisfying
$$d((U \hat{R}_{\sigma,\pi} I_\sigma P_\pi', D_\pi),(V,\Lambda)) \le d((UR I_\sigma P_\pi', D_\pi),(V,\Lambda)),$$
for all such $R$, as the following lemma states.
\begin{lemma}\label{lem:minimalrotation}
Let $\Gamma = I_\sigma P_\pi' V'U =
\begin{bmatrix}
\Gamma_{11} & \Gamma_{12} \\
\Gamma_{21} & \gamma_{22} \\
\end{bmatrix}$, where $\Gamma_{11}$ is the first $2 \times 2$ block of $\Gamma$.
The minimal rotation matrix $\hat{R}_{\sigma,\pi} = \hat{R} $ is given by
$ \hat{R} = \begin{bmatrix}
E_2E_1' & 0 \\
0 & 1 \\
\end{bmatrix},$
where $E_1\Lambda_\Gamma E_2' $ is the ``semi-singular values'' decomposition of $\Gamma_{11}$. (In semi-singular values decomposition, we require $E_1, E_2 \in \mbox{SO}(2)$ and that the diagonal entries $\lambda_1$ and $\lambda_2$ of $\Lambda_\Gamma$ satisfy $\lambda_1 \ge |\lambda_2| \ge 0 $.)
\end{lemma}
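In code, the minimal rotation can be sketched via an ordinary SVD with sign corrections (our construction; only $\hat{R}$ is returned, so the semi-singular values themselves are not formed explicitly):
\begin{verbatim}
import numpy as np

def minimal_rotation(Gamma):
    """R-hat = blockdiag(E2 E1', 1), where Gamma[:2,:2] = E1 diag(l1,l2) E2'
    with E1, E2 in SO(2) and l1 >= |l2| (l2 may be negative)."""
    W, s, Zt = np.linalg.svd(Gamma[:2, :2])  # ordinary SVD: W diag(s) Zt
    E1, E2 = W.copy(), Zt.T.copy()
    # Flip one column of each factor, if needed, so both are rotations;
    # this only changes the sign of the second semi-singular value.
    if np.linalg.det(E1) < 0:
        E1[:, 1] *= -1.0
    if np.linalg.det(E2) < 0:
        E2[:, 1] *= -1.0
    Rhat = np.eye(3)
    Rhat[:2, :2] = E2 @ E1.T
    return Rhat
\end{verbatim}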
Each choice of $\sigma$ and $\pi$ produces a {\em minimally rotated version} $(\hat{U}_{\sigma,\pi}, D_\pi) = (U \hat{R}_{\sigma,\pi} I_\sigma P'_\pi, D_\pi)$.
To provide a minimal pair as needed in Theorem~\ref{thm:main_result}, a combinatorial problem involving the $3! 2^{3-1} = 24$ choices of $(\sigma, \pi)$ needs to be solved, since the version of $X$ closest to $ (V,\Lambda)$ is found by comparing distances between $(\hat{U}_{\sigma,\pi}, D_\pi) $ and $(V,\Lambda)$.
Fortunately, there are only six such minimally rotated versions corresponding to six choices of $(\sigma, \pi)$.
In particular, we need only
$ \pi_1 : (1,2,3) \to (1,2,3)$,
$ \pi_2 : (1,2,3) \to (3,1,2)$,
$ \pi_3 : (1,2,3) \to (1,3,2)$,
and ${\sigma_1} =(1,1,1)$, ${\sigma_2} = (-1,1,-1),$ and $\hat{R}_{{\sigma_j}, \pi_i}$ can be found for each
$({\sigma_j}, \pi_i)$, $i=1,2,3,j=1,2$.
The other pairs of permutations and sign-changes do not need to be considered because
each of them will produce one of the six minimally rotated versions, with the same distance from $(V,\Lambda)$.
\begin{theorem} \label{thm:minimaldistancep=3}
Given any $3 \times 3$ SPD matrices $X$ and $Y$, the distance (\ref{eq:quotientdistance}) is computed as follows.
\begin{enumerate}
\item[(i)] If the eigenvalues of $X$ (and also of $Y$) are all distinct, then there are exactly twenty-four versions of $X$, denoted by $(U_i, D_i), i=1,\ldots,24$, and for any version $(V,\Lambda)$ of $Y$,
$d_{\mathcal{S}\mathcal{R}}(X,Y) = \min_{i=1,\ldots,24} d((U_i, D_i), (V,\Lambda) ).$
\item[(ii)] If $d_1 = d_2 \neq d_3$ and $\{\lambda_1,\lambda_2,\lambda_3\}$ are distinct, then for any version $(V,\Lambda)$ of $Y$ and a version $(U,D)$ of $X$ satisfying $D = \mbox{diag}(d_1,d_1,d_3)$,
$$d_{\mathcal{S}\mathcal{R}}(X,Y) = \min_{i=1,2,3, j = 1,2} d((\hat{U}_{\sigma_j,\pi_i}, D_{\pi_i}) , (V,\Lambda) ),$$
where $(\hat{U}_{\sigma_j,\pi_i}, D_{\pi_i})$, $i=1,2,3, j = 1,2$ are the six minimally rotated versions.
\item[(iii)] If $d_1 = d_2 \neq d_3$ and $\lambda_1 = \lambda_2 \neq \lambda_3$, choose $D = \mbox{diag}(d_1,d_2,d_3)$ and $\Lambda = \mbox{diag}(\lambda_1,\lambda_2,\lambda_3)$. For any versions $(U,D)$, $(V,\Lambda)$ of $X$ and $Y$,
$$d_{\mathcal{S}\mathcal{R}}(X,Y) = \min_{i=1,2,3, j = 1,2} d(( U R_{\theta_{ij}} I_{\sigma_i}P_{\pi_j}', D_{\pi_j}), (VR_{\phi_{ij}},\Lambda) ),$$
where $R_\theta = \exp([a]_\times)$, $a = (0,0,\theta)'$ (cf. Appendix~\ref{sec:preliminaries}),
and $({\theta_{ij}},\phi_{ij})$ simultaneously maximizes
$G(\theta, \phi) = \mbox{trace} (U R_\theta I_{\sigma_i}P_{\pi_j}' R_\phi' V').$
\item[(iv)] If $d_1 = d_2 = d_3$, then for any version $(V,\Lambda)$ of $Y$,
$d_{\mathcal{S}\mathcal{R}}(X,Y)= d((V, D), (V,\Lambda) ),$
regardless of whether the eigenvalues of $Y$ are distinct or not.
\end{enumerate}
\end{theorem}
The minimizer $({\theta_{ij}},\phi_{ij})$ of $G(\theta,\phi)$ in Theorem~\ref{thm:minimaldistancep=3}(iii)
is found by a numerical method. Specifically, given the $m$th iterates $\theta^{(m)}, \phi^{(m)}$, the $(m+1)$th iterate $\theta^{(m+1)}$ is the solution $\theta$ in Lemma~\ref{lem:minimalrotation}, treating $VR_{\phi^{(m)}}$ as $V$. We then find $\phi^{(m+1)}$ similarly by using Lemma~\ref{lem:minimalrotation}, with the roles of $U$ and $V$ switched. In our experiments, convergence to the unique maximum was fast, typically within a few iterations.
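A hedged sketch of this alternating scheme follows. For a rotation about the third axis, the coordinate update $\arg\max_\theta \mbox{trace}(K R_\theta)$ has the elementary closed form $\theta = \mathrm{atan2}(K_{12}-K_{21},\, K_{11}+K_{22})$, which we use below in place of an explicit appeal to Lemma~\ref{lem:minimalrotation}; here $M$ stands for the fixed factor $I_{\sigma_i}P_{\pi_j}'$, and the function names are ours.
\begin{verbatim}
import numpy as np

def Rz(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def best_angle(K):
    """argmax over theta of trace(K @ Rz(theta))."""
    return np.arctan2(K[0, 1] - K[1, 0], K[0, 0] + K[1, 1])

def maximize_G(U, V, M, iters=50):
    """Alternating maximization of
    G(theta, phi) = trace(U Rz(theta) M Rz(phi)' V')."""
    theta, phi = 0.0, 0.0
    for _ in range(iters):
        # trace(U R M Rp' V') = trace((M Rp' V' U) R), cyclically
        theta = best_angle(M @ Rz(phi).T @ V.T @ U)
        # trace(K2 Rp') = trace(K2' Rp) with K2 = V' U R M
        phi = best_angle((V.T @ U @ Rz(theta) @ M).T)
    return theta, phi
\end{verbatim}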
\section{Scaling--rotation interpolation of SPD matrices}\label{sec:interpolation}
For $X,Y \in \mbox{Sym}^+(p)$, \emph{ a scaling--rotation interpolation from $X$ to $Y$} is defined as any minimal scaling--rotation curve $f_{SR}(t) := \chi_o(t)$, $t \in [0,1]$, such that $f_{SR}(0) = X$, $f_{SR}(1) = Y$.
By definition, every scaling--rotation curve $\chi(t; U,D,A,L)$, and hence every scaling--rotation interpolation, has a log-constant scaling velocity $L$ and constant angular velocity $A$. The scalar $\mbox{trace}(L)$ gives the (constant) speed at which log-determinant evolves: $\log(\det\chi(t)) = \log(\det(D)) + \mbox{trace}(L) t$. Analogously, we view the scalar quantity $\norm{A}_F/\sqrt{2}$ as a constant {\em speed of rotation}, and for all $t\geq 0$ we define the \emph{amount of rotation} applied from time 0 to time $t$ to be $\theta_t := t \norm{A}_F/\sqrt{2}$. For a minimal pair $((U,D),(V,\Lambda))$ of $X$ and $Y$, and the corresponding scaling--rotation interpolation $f_{SR}$, we have
\begin{equation}\label{eq:logdet}
\log(\det f_{SR}(t) ) = (1-t) \log(\det(X)) + t \log(\det(Y)),
\end{equation}
and we define the \emph{ amount of rotation applied by $f_{SR}$ from $X$ to $Y$} to be
$\theta := \norm{\log(VU')}_F/\sqrt{2}$.
For $p = 2,3$, $\theta$ is equal to the angle of rotation.
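Given a minimal pair, the interpolant and the amount of rotation admit short numerical sketches (assumptions as in the earlier sketches: the relevant principal logarithms exist, and the function names are ours):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm

def f_SR(U, D, V, Lam, t):
    """chi(t; U, D, A, L) = U(t) D(t) U(t)' for a minimal pair."""
    A = np.real(logm(V @ U.T))
    L = np.real(logm(np.linalg.inv(D) @ Lam))
    Ut = expm(t * A) @ U
    Dt = expm(t * L) @ D
    return Ut @ Dt @ Ut.T

def rotation_amount(U, V):
    """theta = ||log(VU')||_F / sqrt(2)."""
    return np.linalg.norm(np.real(logm(V @ U.T)), 'fro') / np.sqrt(2.0)
\end{verbatim}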
\subsection{An application to diffusion tensor computing}\label{sec:applications}
This work provides an interpretable geometric framework for the analysis of diffusion tensor magnetic resonance images \cite{LeBihan2001}, where diffusion tensors are given by $3\times 3$ SPD matrices. Interpolation of tensors is important for fiber tracking, registration and spatial normalization of diffusion tensor images \cite{Batchelor2005,Chao2009681}.
The scaling--rotation curve can be understood as a deformation path from one diffusion tensor to another, and is nicely interpreted as scaling of diffusion intensities and rotation of diffusion directions. This advantage in interpretation has not been found in popular geometric frameworks such as \cite{Pennec2006,Fletcher2007,Arsigny2007,Dryden2009,Bonnabel2009}. The approaches in \cite{Batchelor2005,Chao2009681,collard2012anisotropy,yang2012feature} also explicitly use rotation of directions and many scaling--rotation curves are very similar to the deformation paths given in \cite{collard2012anisotropy,yang2012feature}. We defer the discussion on the difference between our framework and those in \cite{collard2012anisotropy,yang2012feature} to Section~\ref{sec:discussion}.
As an example, consider interpolating from $X = \mbox{diag}(15,2,1)$ to $Y$, whose eigenvalues are $(100,2,1)$ and whose principal axes are different from those of $X$.
The first row of Fig.~\ref{fig:linearinterpolation} presents the corresponding evolution of ellipsoids under the scaling--rotation interpolation $f_{SR}$. This evolution is consistent with human perception when deforming $X$ to $Y$. As shown in the two left bottom panels of Fig.~\ref{fig:linearinterpolation}, the interpolation exhibits a constant angular rate of rotation and a log-constant rate of change of the determinant.
\begin{figure}[tb!]
\centering
\includegraphics[width=1\textwidth]{Scaling_Rotation_Paper_Fig2.png}\\
\vskip -0.3in
\includegraphics[width=1\textwidth]{Scaling_Rotation_Paper_Fig3new.png}
\vspace{-2in}
\caption{(Top) Interpolations of two $ 3\times 3$ SPD matrices. Row 1: Scaling--rotation interpolation by the minimal scaling--rotation curve. Row 2: (Euclidean) linear interpolation on coefficients. Row 3: Log-Euclidean geodesic interpolation. Row 4: Affine-invariant Riemannian interpolation. The pointy shape of ellipsoids on both ends is well-preserved in the scaling--rotation interpolation. \label{fig:linearinterpolation}
(Bottom) Evolution of rotation angle, determinant, FA and MD for these four interpolations. Only the scaling--rotation interpolation provides a monotone pattern.}
\end{figure}
By way of comparison, the Euclidean interpolation in row 2 is defined by $f_E(t) = (1-t) X + t Y$. The log-Euclidean and affine-invariant Riemannian interpolation in rows 3 and 4 are defined by $f_{LE}(t) = \exp( (1-t) \log(X)+ t \log(Y))$ and $f_{AI}(t) = X^{\frac{1}{2}} \exp( t \log( X^{-\frac{1}{2}} Y X^{-\frac{1}{2}})) X^{\frac{1}{2}}$, respectively; see \cite{Arsigny2007}. For these interpolations, we define the rotation angle at time $t$ by the angle of swing from the major axis at time 0 to that at time $t$. These rotation angles are not in general linear in $t$, as the bottom left panel illustrates.
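For reproducibility, these three comparison interpolants can be sketched directly from the formulas above (our code; \texttt{np.real} discards the negligible imaginary parts returned by \texttt{logm}/\texttt{sqrtm} for SPD inputs):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def f_E(X, Y, t):                     # Euclidean
    return (1.0 - t) * X + t * Y

def f_LE(X, Y, t):                    # log-Euclidean
    return expm((1.0 - t) * np.real(logm(X)) + t * np.real(logm(Y)))

def f_AI(X, Y, t):                    # affine-invariant Riemannian
    Xh = np.real(sqrtm(X))
    Xh_inv = np.linalg.inv(Xh)
    return Xh @ expm(t * np.real(logm(Xh_inv @ Y @ Xh_inv))) @ Xh
\end{verbatim}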
The log-Euclidean $f_{LE}$ and affine-invariant interpolations $f_{AI}$ are log-linear in determinant, and in fact (\ref{eq:logdet}) holds exactly for $f_{LE}$ and $f_{AI}$.
On the other hand, the Euclidean interpolation is known to suffer from the \emph{swelling effect}: $\det(f_E(t)) > \max(\det(X),\det(Y))$ for some $t \in [0,1]$ \cite{Arsigny2007}. This is shown in the second panel of the bottom row of Fig.~\ref{fig:linearinterpolation} for the same example. The other interpolations, $f_{SR}$, $f_{LE}$ and $f_{AI}$, do not suffer from the swelling effect.
Minimal scaling--rotation curves not only provide regular evolution of rotation angles and the determinant, but also minimize the combined amount of scaling and rotation, as in Definition \ref{def:minimalSCROTcurve}. This results in a particularly desirable property: in many examples, the fractional anisotropy (FA) and mean diffusivity (MD) evolve monotonically. FA measures a degree of anisotropy that is zero if all eigenvalues are equal, and approaches 1 if one eigenvalue is held constant and the other two approach zero; see \cite{LeBihan2001}. MD is the average of the eigenvalues, $\mbox{MD}(X) = \mbox{trace}(X)/3$.
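In code, MD is immediate, while for FA we use the standard formula from the diffusion tensor imaging literature (the text above describes FA only qualitatively, citing \cite{LeBihan2001}), so the expression below is the conventional definition rather than one introduced in this paper:
\begin{verbatim}
import numpy as np

def mean_diffusivity(X):
    """MD(X) = trace(X) / 3."""
    return np.trace(X) / 3.0

def fractional_anisotropy(X):
    """Standard DTI formula: sqrt(3/2) * ||lam - mean(lam)|| / ||lam||."""
    lam = np.linalg.eigvalsh(X)
    return np.sqrt(1.5 * np.sum((lam - lam.mean()) ** 2) / np.sum(lam ** 2))
\end{verbatim}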
In the example of Fig.~\ref{fig:linearinterpolation}, FA$(f_{SR}(t))$ increases monotonically.
In contrast, other interpolations of the highly anisotropic $X$ and $Y$ become less anisotropic.
This phenomenon may be called a \emph{fattening effect}: interpolated SPD matrices are more isotropic than the two ends \cite{Chao2009681}.
Moreover, log-Euclidean and affine-invariant Riemannian interpolations can suffer from a \emph{shrinking effect}: the MD of interpolated SPD matrices is smaller than that of the two ends \cite{Batchelor2005}, as shown in the bottom right panel. In this example, the scaling--rotation interpolation does not suffer from the fattening and shrinking effects. These adverse effects are less severe in $f_{SR}$ than in $f_{LE}$ or $f_{AI}$ in most typical examples, as shown in the online supplementary material.
The advantageous regular evolution results from the rotational part of the interpolation. To see this, consider, as in \cite{Batchelor2005}, a case where the interpolation by $f_{SR}$ consists only of rotation (a precise example is shown in Figure 1 in the online supplementary material). The scaling--rotation interpolation preserves the determinant, FA and MD, while the other modes of interpolation exhibit irregular behavior in some of these measurements.
On the other hand, when $f_{SR}(t)$ consists of pure scaling, then $f_{SR}(t) = f_{LE}(t) = f_{AI}(t)$ for all $t$, and there is no guarantee that MD or FA evolves monotonically for any of these curves. The equality of these three curves in this special case is a consequence of the geometric scaling of eigenvalues in the scaling--rotation curve (\ref{eq:sc-rot-curve}), which in turn is a consequence of our use of the Riemannian inner product (\ref{eq:RiemmanianMetric}).
In summary, while the three most popular methods suffer from swelling, fattening, or shrinking effects, the scaling--rotation interpolation provides regular evolution of all three summary statistics, and is the only one that provides a constant angular rate of rotation. More examples illustrating these effects in various scenarios are given in the online supplementary material.
\subsection{Comparison to other rotation--scaling schemes}\label{sec:discussion}
Geometric frameworks for $3 \times 3$ SPD matrices that decouple rotation from scaling have also been developed by \cite{collard2012anisotropy,yang2012feature}. However, our framework differs in two major ways. First, we allow unordered and equal eigenvalues in any dimension, while \cite{collard2012anisotropy,yang2012feature} considered only dimension 3 and only the case of distinct, ordered eigenvalues. In our framework, every scaling--rotation curve corresponds to geodesics in a smooth manifold, which is not possible if eigenvalues are ordered. This leads to a more flexible family of interpolations than those of \cite{collard2012anisotropy,yang2012feature}, as we illustrate in the online supplementary material.
Another difference lies in the choice of the metric for $\mbox{SO}(3)$, and the weight $k$ in (\ref{eq:RiemmanianMetric}).
While we use geodesic distance and interpolation determined by the standard Riemannian metric on $\mbox{SO}(3)$, \cite{collard2012anisotropy} used a chordal distance and extrinsic interpolation.
As a consequence, the interpolation in \cite{collard2012anisotropy} is close to, but not equal to, a special case of minimal scaling--rotation curves, in particular when $k$ in (\ref{eq:RiemmanianMetric}) is small. An example illustrating this effect of $k$ is given in Section 2 of the online supplementary material.
\Appendix
| {
"timestamp": "2015-06-02T02:20:10",
"yymm": "1406",
"arxiv_id": "1406.3361",
"language": "en",
"url": "https://arxiv.org/abs/1406.3361",
"abstract": "We introduce a new geometric framework for the set of symmetric positive-definite (SPD) matrices, aimed to characterize deformations of SPD matrices by individual scaling of eigenvalues and rotation of eigenvectors of the SPD matrices. To characterize the deformation, the eigenvalue-eigenvector decomposition is used to find alternative representations of SPD matrices, and to form a Riemannian manifold so that scaling and rotations of SPD matrices are captured by geodesics on this manifold. The problems of non-unique eigen-decompositions and eigenvalue multiplicities are addressed by finding minimal-length geodesics, which gives rise to a distance and an interpolation method for SPD matrices. Computational procedures to evaluate the minimal scaling--rotation deformations and distances are provided for the most useful cases of $2 \\times 2$ and $3 \\times 3$ SPD matrices. In the new geometric framework, minimal scaling--rotation curves interpolate eigenvalues at constant logarithmic rate, and eigenvectors at constant angular rate. In the context of diffusion tensor imaging, this results in better behavior of the trace, determinant and fractional anisotropy of interpolated SPD matrices in typical cases.",
"subjects": "Metric Geometry (math.MG)",
"title": "Scaling-rotation distance and interpolation of symmetric positive-definite matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877023336243,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594816681577
} |
https://arxiv.org/abs/1907.10857 | Weak maximum principle for biharmonic equations in quasiconvex Lipschitz domains | In dimension two or three, the weak maximum principle for the biharmonic equation is valid in any bounded Lipschitz domain. In higher dimensions (greater than three), it was only known that the weak maximum principle holds in convex domains or $C^1$ domains, and may fail in general Lipschitz domains. In this paper, we prove the weak maximum principle in higher dimensions in quasiconvex Lipschitz domains, which is a sharp condition in some sense and recovers both convex and $C^1$ domains. | \section{Introduction}
\subsection{Background}
In this paper, we are interested in the weak maximum principle (also known as the Agmon-Miranda maximum principle) for biharmonic equations, i.e., whether, if $\Delta^2 u = 0$ in a bounded Lipschitz domain $\Omega \subset \mathbb{R}^d$, then
\begin{equation}\label{est.MP}
\norm{\nabla u}_{L^\infty(\Omega)} \le C\norm{\nabla u}_{L^\infty(\partial \Omega)}.
\end{equation}
We first briefly recall the history of this problem. The classical maximum principle (for second order elliptic equations) was first extended by Miranda and Agmon \cite{ADN59,A60} to biharmonic equations (or general higher order elliptic equations) in smooth domains (e.g., of class $C^4$) in any dimension. In particular, a related maximum principle was proved in \cite{M48} for very general domains in $\mathbb{R}^2$, including Lipschitz domains. In \cite{PV93}, by using the regularity theory in Lipschitz domains \cite{V90,PV92}, Pipher and Verchota established the weak maximum principle in Lipschitz domains in $\mathbb{R}^3$ and in $C^1$ domains in any dimension. They also gave a counterexample showing that the maximum principle may fail for $d\ge 4$ in some Lipschitz domains containing the exterior of a cone with small aperture (so the Lipschitz constant is large). The result was then extended in \cite{PV95} to polybiharmonic equations in Lipschitz domains in $\mathbb{R}^3$. On the other hand, the weak maximum principle for the biharmonic equation in arbitrary convex domains in any dimension was established by Shen in \cite{KS11}. Note that a convex domain must be Lipschitz but need not be $C^1$; consider, e.g., convex polyhedrons. We summarize the above known results for biharmonic equations as follows:
\begin{itemize}
\item[(i)] For $d = 2$ or $3$, the weak maximum principle holds in arbitrary bounded Lipschitz domains;
\item[(ii)] For $d \ge 4$, the weak maximum principle holds in bounded $C^1$ domains or convex domains;
\item[(iii)] For $d \ge 4$, the weak maximum principle fails in some Lipschitz domain containing the exterior of a cone with small aperture.
\end{itemize}
Note that $C^1$ (smoothness) and convexity seem to be completely different geometric properties. A natural question then arises: for $d\ge 4$, can we extend the weak maximum principle to a unified class of Lipschitz domains, covering both $C^1$ and convex domains? The purpose of this paper is to give a positive answer to this question.
\subsection{Statement of main results}
In this paper, we will prove the weak maximum principle for biharmonic equations in the so-called quasiconvex Lipschitz domains for $d\ge 4$, which covers both cases in (ii). Moreover, the quasiconvexity condition is sharp in the sense that it exactly rules out the counterexamples in (iii); in other words, the exterior of a cone with sufficiently large aperture is allowed in quasiconvex domains.
The notion of quasiconvex domains was introduced in \cite{JLW10} by Jia, Li and Wang to study the boundary regularity of the elliptic equations. Roughly speaking, a quasiconvex domain is a domain whose local boundary is close to be convex at small scales. We give an equivalent definition below which seems more natural and convenient for our application.
Let $A,B\subset \mathbb{R}^d$ be two non-empty sets. The Hausdorff distance between $A$ and $B$ is given by
\begin{equation*}
d_H(A,B) = \max \{ \sup_{x\in A} \inf_{y\in B} |x-y|, \sup_{y\in B} \inf_{x\in A} |x-y| \}.
\end{equation*}
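For intuition, the Hausdorff distance between two finite point clouds (a discrete stand-in for the boundary sets used below; the sampling-based reduction and the function name are ours) can be computed directly:
\begin{verbatim}
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between point clouds A (n x d) and B (m x d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
\end{verbatim}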
\begin{definition}[Quasiconvex domains]\label{def.quasiconvex}
A bounded domain $\Omega$ is said to be $(\delta,\sigma,R)$-quasiconvex if for any $0<r<R$ and $Q \in \partial\Omega$, the following conditions hold:
\begin{itemize}
\item[(i)] Non-degeneracy: $\Omega\cap B_r(Q)$ is connected and there exists $\sigma\in (0,1)$ so that
\begin{equation}\label{est.reg.cond}
|\Omega \cap B_r(Q)| \ge \sigma |B_r(Q)|.
\end{equation}
\item[(ii)] Quasiconvexity: there exists a convex domain $V = V(Q,r)$ such that $(B_r(Q)\cap \Omega) \subset V$ and $d_{H}(\partial(B_r(Q)\cap \Omega), \partial V) \le \delta r$.
\end{itemize}
\end{definition}
\begin{remark}
(1) The non-degeneracy condition (i) is satisfied automatically for Lipschitz domains and $\sigma$ depends only on the Lipschitz constant.
(2) We may always assume the convex domain $V$ in condition (ii) is the convex hull of $B_r(Q)\cap \Omega$, which is the smallest convex domain containing $B_r(Q)\cap \Omega$. (3) Note that any $C^1$ domain ($\delta$ arbitrarily small) or convex domain ($\delta = 0$) is quasiconvex, while a quasiconvex domain with $\delta>0$ is not necessarily $C^1$ or convex.
\end{remark}
The following is the main result of the paper.
\begin{theorem}\label{thm.MP}
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$ with $d\ge 4$. There exists $\delta_0 >0$, depending only on $d$ and the Lipschitz constant, such that the weak maximum principle (\ref{est.MP}) is true, if $\Omega$ is $(\delta,\sigma,R)$-quasiconvex with $\delta<\delta_0$.
\end{theorem}
Up to now we have not explained how we define the boundary value of the biharmonic equation, and in what sense we may understand $\nabla u$, restricted to $\partial\Omega$, on the right-hand side of (\ref{est.MP}). This question is essentially related to the solvability of Dirichlet problems in Lipschitz domains \cite{DKV86,PV91}.
Let $\Omega$ be a bounded Lipschitz domain. For each $Q\in \partial\Omega$, there is a non-tangential ``cone'' $\Gamma(Q) = \{ x\in \Omega: |x-Q|\le (1+\alpha)\text{dist}(x,\partial\Omega) \}$ where $\alpha>0$. If $v$ is a function on $\Omega$, the non-tangential maximal function of $v$ is defined by
\begin{equation*}
(v)^*(Q) = \sup_{x\in \Gamma(Q)} |v(x)|, \qquad Q\in \partial\Omega.
\end{equation*}
We will use $W\!\!A^{1,p}(\partial \Omega)$ to denote the completion of the set of arrays of functions
\begin{equation*}
\{(f,g): \phi \in C_0^\infty(\mathbb{R}^d), f = \phi|_{\partial\Omega}, g = \nabla \phi|_{\partial\Omega} \}
\end{equation*}
under the scale-invariant norm on $\partial\Omega$
\begin{equation*}
\norm{(f,g)}_{W\!\!A^{1,p}(\partial\Omega)} := |\partial\Omega|^{\frac{1}{1-d}} \norm{f}_{L^p(\partial\Omega)} + \norm{g}_{L^p(\partial\Omega)}.
\end{equation*}
Define $W\!\!A^{2,p}(\partial\Omega) = \{(f,g)\in W\!\!A^{1,p}(\partial\Omega): g\in W^{1,p}(\partial\Omega;\mathbb{R}^d) \}.$
We say that the $L^p$ Dirichlet problem, denoted by $(D)_p$, is uniquely solvable if given any $(f,g)\in W\!\!A^{1,p}(\partial\Omega)$, there exists a unique function $u$ so that
\begin{equation}\label{eq.Dp}
\left\{
\begin{aligned}
&\Delta^2 u = 0, \quad \txt{in } \Omega, \\
& \lim_{\Gamma(Q) \ni x\to Q} u(x) = f(Q), \quad \txt{a.e. } Q\in \partial\Omega,\\
& \lim_{\Gamma(Q) \ni x\to Q} \nabla u(x) = g(Q), \quad \txt{a.e. } Q\in \partial\Omega.
\end{aligned}
\right.
\end{equation}
Moreover, the solution $u$ satisfies
\begin{equation}\label{est.Dp}
\norm{(\nabla u)^*}_{L^p(\partial\Omega)} \le C \norm{g}_{L^p(\partial\Omega)}.
\end{equation}
Observe that the maximum principle (\ref{est.MP}) in fact corresponds to $(D)_\infty$, and for a.e. $Q\in \partial\Omega$, $\nabla u(Q)$ should be understood as the non-tangential limit of $\nabla u(x)$ as $\Gamma(Q)\ni x\to Q$.
\begin{remark}\label{rmk.D2Dn}
We mention that since $\Omega$ is a Lipschitz domain, the normal $n(Q)$ exists for a.e. $Q\in \partial\Omega$. Thus, the second boundary value condition $\nabla u(Q) = g(Q)$ in (\ref{eq.Dp}) actually may be replaced by the normal derivative $\frac{\partial}{\partial n} u(Q) = h(Q):= n(Q)\cdot g(Q)$ in the sense of non-tangential limit. This can be seen by noticing the orthogonal decomposition $g(Q) = \nabla_{\tan} f(Q) + n(Q) h(Q)$ and $|g(Q)|^2 = |\nabla_{\tan} f(Q)|^2 + |h(Q)|^2$. Hence, the tangential information of $f$ is contained in $g$ and therefore $f$ is not needed on the right-hand side of (\ref{est.Dp}). For the same reason, there is no $f$ on the right-hand side of (\ref{est.Rp}).
\end{remark}
We say that the $L^p$ regularity problem, denoted by $(R)_p$, is uniquely solvable if given any $(f,g)\in W^{2,p}(\partial\Omega)$, there exists a unique function $u$ satisfying (\ref{eq.Dp}) and
\begin{equation}\label{est.Rp}
\norm{(\nabla^2 u)^*}_{L^p(\partial\Omega)} \le C \norm{\nabla_{\tan} g}_{L^p(\partial\Omega)}.
\end{equation}
The solvability of $(D)_p$ and $(R)_p$ has been studied in many references; see, e.g., \cite{DKV86,V87,V90,PV91,PV92,DKPV97,S06,S06-2,KS11,KS11-2}. We mention a few results here. The solvability of $(D)_p$ for $2-\varepsilon < p<2+\varepsilon$ was discovered by Dahlberg, Kenig and Verchota in \cite{DKV86} for all dimensions. The left endpoint $p = 2-\varepsilon$ is sharp in the sense that for any $p<2$, there exists a Lipschitz domain so that $(D)_p$ is not solvable, even in dimension two \cite[Lemma 5.1]{DKV86}. The optimal range for $p>2$ in general Lipschitz domains is a much more complicated problem (sharp ranges are known for $d \le 7$) and is still open for $d\ge 8$ \cite{S06}.
For the regularity problem, Verchota \cite{V90} first proved the solvability of $(R)_p$ with $2-\varepsilon < p<2+\varepsilon$ for any $d\ge 2$, while the right endpoint $2+\varepsilon$ is sharp in general Lipschitz domains. Among others, Kilty and Shen \cite{KS11} then established the duality relation between $(D)_p$ and $(R)_q$ in any Lipschitz domain, namely, $(D)_p$ is solvable if and only if $(R)_q$ is solvable for $1/p+1/q = 1$, $1<p<\infty$. In particular, this property, combined with Theorem \ref{thm.MP}, leads to
\begin{corollary}
Let $\Omega$ be a bounded Lipschitz domain in $\mathbb{R}^d$ with $d\ge 4$. There exists $\delta_0 >0$, depending only on $d$ and the Lipschitz constant, such that if $\Omega$ is $(\delta, \sigma,R)$-quasiconvex with $\delta < \delta_0$, then $(D)_p$ and $(R)_q$ are uniquely solvable, for all $2-\varepsilon < p \le \infty, 1<q<2+\varepsilon$.
\end{corollary}
\subsection{Main idea of proof}
The proof of Theorem \ref{thm.MP} follows the generic framework contained in \cite{PV93,S95}, but takes a new path which may apply uniformly to other equations or systems. For $C^1$ \cite{PV93} or convex domains \cite{KS11}, the proof of the weak maximum principle relies essentially on the solvability of $(R)_p$ for some $p>d-1$, which may fail for $d\ge 4$. However, our new path is based only on $(R)_2$, which always holds, and a reverse H\"{o}lder inequality (a Calder\'{o}n-Zygmund estimate).
Heuristically, if the weak maximum principle holds for a Lipschitz domain $\Omega$, we may expect the following local $L^\infty$ property: given any $Q\in \partial\Omega, 0<r<R$, if $u\in W^{2,2}(D_r)$ is a weak solution of
\begin{equation}\label{eq.local}
\left\{
\begin{aligned}
&\Delta^2 u = 0, \quad \txt{in } D_r, \\
&u = 0,\ \nabla u = 0, \quad \txt{on } \Delta_r,
\end{aligned}
\right.
\end{equation}
where $D_r = \Omega \cap B_r(Q)$ and $\Delta_r = \partial\Omega \cap B_r(Q)$, then $|\nabla u|$ is uniformly bounded in $D_{r/2}$. In analysis, oftentimes we will ask for a slightly stronger property, namely, for any $x,y\in D_{r/2}$,
\begin{equation}\label{est.C1a}
|\nabla u(x) - \nabla u(y)| \le C \bigg( \frac{|x-y|}{r} \bigg)^{\varepsilon} \bigg( \fint_{D_{r}} |\nabla u|^2 \bigg)^{1/2}.
\end{equation}
Note that this $C^{1,\varepsilon}$ continuity can be compared to the boundary De Giorgi-Nash estimate for the second-order elliptic equations. In this paper, we will develop a general scheme to show that the property (\ref{est.C1a}), together with $(R)_2$ regularity, implies the weak maximum principle. The main idea of this scheme is to make use of the Poisson integral for biharmonic functions
\begin{equation}\label{eq.Poisson}
\begin{aligned}
u(x) & = \int_{\partial \Omega} \Delta_Q G(Q,x) \frac{\partial}{\partial n}u(Q) d\sigma(Q) \\
&\qquad + \int_{\partial \Omega}\frac{\partial}{\partial n} \Delta_Q G(Q,x) u(Q) d\sigma(Q),
\end{aligned}
\end{equation}
where $G$ is the Green's function, and combine (\ref{est.C1a}) and $(R)_2$ to estimate the Green's function near the boundary. Some classical techniques from \cite{PV93,S95} will also be critical to get rid of the higher-order derivatives of $G$.
We point out that, with suitable adjustments, the above scheme may be applied to all dimensions (see Remark \ref{rmk.d23}) and to elliptic systems, Stokes systems\footnote{The behavior of biharmonic functions is very similar to that of solutions of Stokes systems. The weak maximum principle for the Stokes system in Lipschitz domains in $\mathbb{R}^3$ was proved in \cite{S95}. But this problem is quite open for $d\ge 4$; as far as we know, not even counterexamples are known. Our approach in this paper may be used to show the weak maximum principle for the Stokes system with $d\ge 4$ in Lipschitz domains with sufficiently small Lipschitz constant.}, etc.
The problem now is reduced to the property (\ref{est.C1a}). In view of the Sobolev embedding theorem, it is sufficient to show the following boundary reverse H\"{o}lder inequality for some $p>d$
\begin{equation}\label{est.reverse}
\bigg( \fint_{D_{r/2}} |\nabla^2 u|^p \bigg)^{1/p} \le C\bigg( \fint_{D_r} |\nabla^2 u|^2 \bigg)^{1/2},
\end{equation}
where $u\in W^{2,2}(D_r)$ is the weak solution of (\ref{eq.local}). For lower dimensions ($d = 2,3$), this can be shown in any Lipschitz domain; see Remark \ref{rmk.d23}. For higher dimensions ($d\ge 4$), (\ref{est.reverse}) is not true for $p>d$ in general Lipschitz domains. Therefore, some additional geometric condition on the domain is necessary, and it turns out that quasiconvexity is exactly the right condition (for (\ref{est.reverse}), the domain does not even need to be Lipschitz).
In order to prove (\ref{est.reverse}) for some $p>d$ in quasiconvex domains, we develop a new boundary perturbation approach, of independent interest, using Meyers' estimate (see Theorem \ref{thm.Meyer}) and a real variable method of Shen \cite{S05,S06-2,S07,S18} (see Theorem \ref{thm.RealVar} for a simplified version). The philosophy of the perturbation approach is as follows: (1) First, Mayboroda and Maz'ya proved in \cite{MM08} that in any convex domain, the Hessian $|\nabla^2 u|$ of a solution of (\ref{eq.local}) is uniformly bounded (see Theorem \ref{thm.Hessian.Bound}). (2) Then it is possible to derive the weaker estimate (\ref{est.reverse}) when the boundary of a domain can be locally viewed as a perturbation of convex domains at all small scales. We mention that another well-known boundary perturbation approach, applied to Reifenberg flat domains and originating from Caffarelli and Peral \cite{CP98}, was developed by Wang and Byun in \cite{BW04} (more applications may be found in, e.g., \cite{BW05,BW08,BW08-2,BW10,MP12}); it uses compactness and a proof by contradiction (as a result, the dependence of the constants cannot be specified). Their approach was also used in \cite{JLW10,JLW11,BKSW15} in the setting of quasiconvex domains. Our new approach in this paper, quite different from theirs, is direct and does not use compactness or proof by contradiction. Therefore, all the constants in the estimates can be computed explicitly, if necessary. In particular, with a careful examination, one can actually determine how $\delta_0$ in Theorem \ref{thm.MP} depends on the parameters, and how small it needs to be. We would also like to say a few words about Shen's real variable method, a key ingredient in our proof. This method was also inspired by \cite{CP98} and can be viewed as a refined and dual version of the Calder\'{o}n-Zygmund lemma \cite{S18}. Previously, it has been used to deal with problems with different motivations. In this paper, for the first time, we show that Shen's real variable method is also a powerful (simple, well-organized and quantitative) tool for boundary perturbation problems, and we expect many applications of our new approach to regularity theory in non-smooth domains.
Finally, as we have indicated, the property (\ref{est.C1a}) is slightly stronger than the boundedness of $|\nabla u|$, which means it may imply the $C^{1,\alpha}$ continuity of the solution in an appropriate setting. Actually, we will show that under the same condition as Theorem \ref{thm.MP}, a biharmonic function is indeed in $C^{1,\alpha}(\overline{\Omega})$ ($0\le \alpha<\varepsilon$, $C^1 = C^{1,0}$), i.e., $\nabla u$ is $C^{\alpha}$-H\"{o}lder continuous up to the boundary in the classical sense, if the boundary value is $C^{1,\alpha}$ in a proper sense. Again, the approach we use here carries over to the lower-dimensional cases in arbitrary Lipschitz domains, as well as to elliptic systems, Stokes systems, etc.
The organization of the paper is as follows. In Section 2, we introduce weak solutions, the Caccioppoli inequality and Meyers' estimate. In Section 3, we prove the reverse H\"{o}lder inequality in quasiconvex domains. The pointwise estimates of the Green's function are established in Section 4. The main result, Theorem \ref{thm.MP}, is proved in Section 5. Finally, classical solutions in $C^{1,\alpha}(\overline{\Omega})$ are obtained in Section 6.
\section{Preliminaries}
\subsection{Weak solutions}
We first give a proper interpretation for the weak solution of (\ref{eq.local}). Let $\Omega$ be a bounded domain and $Q\in \partial \Omega$. Let $D_r = D_r(Q) = \Omega\cap B_r(Q)$ and $\Delta_r = \Delta_r(Q) = \partial\Omega \cap B_r(Q)$ (these notations will be used throughout). We say $u\in W^{2,2}(D_r)$ is a weak solution of
\begin{equation}\label{eq.weaksol}
\left\{
\begin{aligned}
&\Delta^2 u = 0, \quad \txt{in } D_r \\
&u = 0,\ \nabla u = 0, \quad \txt{on } \Delta_r,
\end{aligned}
\right.
\end{equation}
if for any $\phi \in C_0^\infty(D_r)$,
\begin{equation*}
\int_{D_r} \Delta u \Delta \phi = 0,
\end{equation*}
and for any $\psi\in C_0^\infty(B_r(Q))$, $u\psi \in W_0^{2,2}(D_r)$. Recall that for any open set $E$, $W_0^{2,2}(E)$ is the closure of $C_0^\infty(E)$ under the norm of $W^{2,2}(E)$.
By the above definition, it is not difficult to see that the solution $u$ can be extended to a function $\widetilde{u}$ in $W^{2,2}(B_r(Q))$ by zero-extension, namely
\begin{equation*}
\widetilde{u}(x) =
\left\{
\begin{aligned}
&u(x), \qquad &x&\in D_r, \\
&0, \qquad & x&\in B_r(Q)\setminus D_r.
\end{aligned}
\right.
\end{equation*}
This property will be useful for us.
\subsection{Caccioppoli inequality}
For the Caccioppoli inequality and the following Meyers' estimate, we require that the domain $\Omega$ satisfies the exterior non-degeneracy condition: there exists a constant $c>0$ such that for any $Q\in \partial\Omega$ and $0<r<\text{diam}(\Omega)$, $|B_r(Q)\setminus \Omega| \ge c|B_r(Q)|$.
Before we proceed, we first show that quasiconvex domains always satisfy the exterior non-degeneracy condition.
\begin{lemma}
A $(\delta,\sigma,R)$-quasiconvex domain with $\delta<1$ must satisfy the exterior non-degeneracy condition.
\end{lemma}
\begin{proof}
Let $\Omega$ be a $(\delta,\sigma,R)$-quasiconvex domain with $\delta<1$. By definition, for any $Q\in \partial\Omega$ and $0<r<R$, there exists a convex domain $V$ so that $d_H(\partial(\Omega\cap B_r(Q)), \partial V) \le \delta r$. Hence, there exists a point $z\in \partial V$ so that $|z-Q|\le \delta r$ and thus $\text{dist}(z,\partial B_r(Q)) \ge (1-\delta)r$. Because $V$ is convex, we can find a ``tangent'' plane at $z\in \partial V$ so that $V$ lies on one side of it, namely, $V\subset \{ x\in \mathbb{R}^d: n\cdot (x-z) > 0 \}$ for some unit vector $n$. Since $\Omega\cap B_r(Q) \subset V$, we have $B_r(Q)\setminus \Omega \supset B_r(Q)\cap \{x\in \mathbb{R}^d: n\cdot (x-z) < 0 \}$. Now, the desired estimate $|B_r(Q)\setminus \Omega| \ge c|B_r(Q)|$ follows from a simple geometric observation and the facts that $z\in B_r(Q)$ and $\text{dist}(z,\partial B_r(Q)) \ge (1-\delta)r$.
\end{proof}
\begin{theorem}[Caccioppoli inequality] \label{lem.Caccioppoli}
Let $\Omega\subset \mathbb{R}^d$ satisfy the exterior non-degeneracy condition. Let $r \in (0,\text{\em diam}(\Omega))$ and $Q\in \partial\Omega$. Suppose $u\in W^{2,2}(D_r)$ is a weak solution of (\ref{eq.weaksol}).
Then,
\begin{equation}\label{est.Caccioppoli}
\int_{D_{r/4}(Q)} |\nabla^2 u|^2 \le \frac{C}{r^2} \int_{D_{r/2}(Q)} |\nabla u|^2 \le \frac{C}{r^4} \int_{D_{r}(Q)} |u|^2,
\end{equation}
where $C$ depends only on the dimension $d$ and $c$ only.
\end{theorem}
This theorem has been proved, for example, in \cite[Corollary 23]{B16}. The exterior non-degeneracy condition is only necessary for the first inequality of (\ref{est.Caccioppoli}), in order to use the Poincar\'{e} inequality.
\subsection{Meyers' estimate}
Meyers' estimate for second order elliptic equations is well known. In this paper, we will use a version of Meyers' estimate for biharmonic functions.
\begin{theorem}[Meyers' estimate]\label{thm.Meyer}
Let $\Omega$ satisfy the exterior non-degeneracy condition. Let $u \in W^{2,2}(D_r)$ be the weak solution of (\ref{eq.weaksol}).
Then there exists some $p_0>2$, depending only on $d$ and $c$, so that
\begin{equation*}
\bigg( r^{-d} \int_{D_{r/2}} |\nabla^2 u|^{p_0} \bigg)^{1/{p_0}} \le C\bigg(r^{-d} \int_{D_r} |\nabla^2 u|^2 \bigg)^{1/2},
\end{equation*}
where $C$ depends only on $d$ and $c$.
\end{theorem}
\begin{proof}
A more general version may be found in \cite[Theorem 24]{B16}. We give an outline of the proof for the reader's convenience. First of all, extend $u$ to a function in $W^{2,2}(B_r)$ by zero-extension, which will still be denoted by $u$. Let $Q\in \Delta_{r/2}$ and $\rho \in (0,cr)$. By the Caccioppoli inequality, one has
\begin{equation}\label{est.reverse.BQ}
\begin{aligned}
\bigg( \fint_{B_\rho(Q)} |\nabla^2 u|^2 \bigg)^{1/2} & \le \frac{C}{\rho^4} \bigg( \fint_{B_{4\rho}(Q)} |u|^2 \bigg)^{1/2} \\
& \le \bigg( \frac{C\rho^2 |B_{4\rho}(Q)|}{|B_{4\rho}(Q) \setminus D_r|} \bigg)^2 \bigg( \fint_{B_{4\rho}(Q)} |\nabla^2 u|^q \bigg)^{1/q}\\
& \le C \bigg( \fint_{B_{4\rho}(Q)} |\nabla^2 u|^q \bigg)^{1/q},
\end{aligned}
\end{equation}
where
\begin{equation*}
\frac{1}{q} =\min \Big\{ \frac{1}{2} + \frac{2}{d},1 \Big\},
\end{equation*}
and in the second inequality we also used (twice) a Riesz potential representation \cite[Lemma 7.16]{GT01}
\begin{equation*}
|u(x)| \le \frac{C\rho^2 |B_{4\rho}|}{|B_{4\rho} \setminus D_r|} \int_{B_{4\rho}} \frac{|\nabla u(y)|}{|x-y|^{d-1}}dy \qquad \text{for } x\in B_{4\rho},
\end{equation*}
and the well-known Hardy-Littlewood-Sobolev inequality. In view of the zero-extension, the estimate actually holds for any $Q\in B_r$. Hence, by a generalized Gehring's inequality (i.e., the self-improving property of reverse H\"{o}lder inequality; see \cite{G73} or \cite[Chapter 12, Theorem 4.1]{CW98}), there exists some $p_0>2$ depending only on $d$ and $c$ so that
\begin{equation*}
\bigg( \fint_{B_{r/2}} |\nabla^2 u|^{p_0} \bigg)^{1/{p_0}} \le C\bigg(\fint_{B_r} |\nabla^2 u|^2 \bigg)^{1/2}.
\end{equation*}
Finally, replacing $B_r$ by $D_r$ leads to the desired estimate.
\end{proof}
\section{Reverse H\"{o}lder inequality}
In this section, we are interested in the boundary reverse H\"{o}lder inequality for the Hessian
\begin{equation*}
\bigg( \fint_{D_{r/2}} |\nabla^2 u|^p \bigg)^{1/p} \le C\bigg( \fint_{D_r} |\nabla^2 u|^2 \bigg)^{1/2}.
\end{equation*}
Note that Theorem \ref{thm.Meyer} gives such an estimate for some $p_0>2$ close to $2$. We will improve this estimate to large $p$ (particularly to $p>d$), provided the domain is $(\delta,\sigma,R)$-quasiconvex with small $\delta>0$. To apply a boundary perturbation approach, we need the following a priori estimate proved by Mayboroda and Maz'ya.
\begin{theorem}[\cite{MM08}]\label{thm.Hessian.Bound}
Let $\Omega$ be a convex domain in $\mathbb{R}^d$. Fix some $r\in (0,\text{\em diam}(\Omega))$ and $Q\in \partial\Omega$. Suppose $u$ is a weak solution of (\ref{eq.weaksol}) in $D_r = D_{r}(Q)$. Then
\begin{equation*}
|\nabla^2 u(x)| \le \frac{C}{r^2 } \bigg( \fint_{D_r} |u|^2 \bigg)^{1/2}, \qquad \text{for any } x\in D_{r/2},
\end{equation*}
where $C$ depends only on the dimension $d$.
\end{theorem}
Our boundary perturbation approach is based on Shen's real variable argument. The following is a simplified version of \cite[Theorem 4.2.3]{S18}.
\begin{theorem}[Shen's real variable argument]\label{thm.RealVar}
Let $B_0$ be a ball in $\mathbb{R}^d$ and $F\in L^2(4B_0)$. Let $q>2$. Suppose that for each ball $B \subset 2B_0$ with $|B|\le c_0|B_0|$, there exist two measurable functions $F_B$ and $R_B$ on $2B$ such that $|F|\le |F_B| + |R_B|$ on $2B$, and
\begin{equation}\label{est.RealCond}
\begin{aligned}
\bigg( \fint_{2B} |R_B|^q \bigg)^{1/q} & \le C_1 \bigg( \fint_{4B} |F|^2 \bigg)^{1/2},\\
\bigg( \fint_{2B} |F_B|^2 \bigg)^{1/2} &\le \eta \bigg( \fint_{4B} |F|^2 \bigg)^{1/2},
\end{aligned}
\end{equation}
where $C_1 >1$ and $0<c_0<1$. Then for any $2<p<q$ there exists $\eta_0>0$, depending only on $C_1,c_0,p,q$, with the property that if $0\le \eta \le \eta_0$, then $F\in L^p(B_0)$ and
\begin{equation*}
\bigg( \fint_{B_0} |F|^p \bigg)^{1/p} \le C\bigg( \fint_{4B_0} |F|^2 \bigg)^{1/2},
\end{equation*}
where $C$ depends at most on $C_1,c_0,p$ and $q$.
\end{theorem}
We point out that the balls $4B$ and $2B$ in (\ref{est.RealCond}) are not essential and may be replaced by $\alpha B$ for any fixed $\alpha>0$.
We also need the following lemmas regarding geometric properties of the quasiconvex domains.
\begin{lemma}\label{lem.LipConst}
Let $V$ be a convex domain contained in $B_1 = B_1(0)$ with $|V| \ge \sigma |B_1|$. Then $V$ is a Lipschitz domain with Lipschitz constant depending only on $d$ and $\sigma$.
\end{lemma}
\begin{proof}
First of all, $V$ is a Lipschitz domain since it is convex. Since $V$ is contained in $B_1$, we have
\begin{equation*}
\mathbb{H}^{d-1}(\partial V) \le \mathbb{H}^{d-1}(\partial B_1).
\end{equation*}
This can be shown by approximating $\partial V$ by convex polyhedrons and taking the limit. Now, consider the set $V_t = \{ x\in V: \text{dist}(x,\partial V) < t \}$. Again, by an approximation argument, we see
\begin{equation*}
|V_t| \le t \mathbb{H}^{d-1}(\partial V) \le t \mathbb{H}^{d-1}(\partial B_1).
\end{equation*}
If
\begin{equation*}
t < r_0: = \frac{\sigma |B_1|}{\mathbb{H}^{d-1}(\partial B_1)},
\end{equation*}
then $|V_t| < |V|$ and thus $V\setminus V_t \neq \emptyset$. Letting $t \uparrow r_0$ and passing to a limit point, we find some point $x_0 \in V$ so that $B_{r_0}(x_0) \subset V$.
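Note that, since $\mathbb{H}^{d-1}(\partial B_1) = d\, |B_1|$, the radius above is explicitly $r_0 = \sigma/d$.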
Now, by the convexity of $V$, for each $Q \in \partial V$, the cone connecting $Q$ and $B_{r_0}(x_0)$,
\begin{equation*}
\mathcal{C}_Q := \{ (1-t)Q+t x: x\in B_{r_0}(x_0), 0<t<1 \}
\end{equation*}
is contained in $V$, since $Q$ is on the boundary and $B_{r_0}(x_0)$ is contained strictly in $V$. Note that all these cones $\{ \mathcal{C}_Q: Q\in \partial V\}$ have height and aperture bounded below in terms of $r_0$. This implies that the Lipschitz constant of $\partial V$ is controlled in terms of $r_0$. Actually, if we consider the localized boundary $\partial V \cap B_{r_0/4}(P)$ for some $P\in \partial V$, there exists a fixed cone $\mathcal{C}_0 = \mathcal{C}_0(P)$ whose vertex is the origin, whose axis is parallel to $\overrightarrow{Px_0}$, and whose aperture is smaller but still comparable to $r_0$, so that $Q+ \mathcal{C}_0 \subset V$ for all $Q\in \partial V \cap B_{r_0/4}(P)$. This shows that the Lipschitz constant of $\partial V \cap B_{r_0/4}(P)$ is controlled in terms of $r_0$, and hence only in terms of $d$ and $\sigma$.
\end{proof}
\begin{lemma}\label{lem.flatness}
Let $\Omega$ be a $(\delta,\sigma,R)$-quasiconvex domain. Let $Q\in \partial \Omega, r\in (0,R/4), B_r = B_r(Q)$ and $V_{4r}$ be the convex hull of $\Omega\cap B_{4r}(Q)$. Then for $0<t<1$,
\begin{equation}\label{est.flat}
\begin{aligned}
&\{ x\in \Omega: \text{\em dist}(x,\partial\Omega) < tr \} \cap B_r \\
& \qquad \subset W_{r,t}:= \{ x\in V_{4r}: \text{\rm dist}(x,\partial V_{4r} \cap B_{3r}) \le (t+\delta)r \}.
\end{aligned}
\end{equation}
Moreover, $|W_{r,t}| \le C(t+\delta) r^d$, where $C$ depends only on $d$ and $\sigma$.
\end{lemma}
\begin{proof}
Let $w\in \{ x\in \Omega: \text{dist}(x,\partial\Omega) < tr \} \cap B_r$. Then, there exists a point $y\in \partial\Omega$ so that $|w-y|< tr$. Thus, $|y-Q| \le |w-Q| + |w-y| < r+tr < 2r$. Hence $y\in B_{2r}(Q) \cap \partial\Omega$. Now, by the definition of $V_{4r}$ and the $(\delta,\sigma,R)$-quasiconvexity, there exists a point $z\in \partial V_{4r}$ so that $|y-z| \le \delta r$. Since $\delta \in (0,1)$, $|z-Q| \le |y-Q| + |z-y| < 3r$ and $|w-z| \le |w-y| + |y-z| < tr + \delta r = (t+\delta) r$. This implies $z\in \partial V_{4r} \cap B_{3r}(Q)$ and therefore
\begin{equation*}
w \in \{ x\in V_{4r}: \text{dist}(x, \partial V_{4r} \cap B_{3r} ) \le (t+\delta )r \}.
\end{equation*}
This gives (\ref{est.flat}) since $w$ is an arbitrary point in $\{ x\in \Omega: \text{dist}(x,\partial\Omega) < tr \} \cap B_r$.
To estimate $|W_{r,t}|$, note that $V_{4r} \subset B_{4r}$ is a convex set and $|V_{4r}| \ge \sigma |B_{4r}|$, by the quasiconvexity. By rescaling and Lemma \ref{lem.LipConst}, the Lipschitz constant of $r^{-1} V_{4r}$ depends only on $d$ and $\sigma$. This actually implies $|W_{r,t}| \le C(t+\delta) r \mathbb{H}^{d-1}(\partial V_{4r}) \le C(t+\delta) r^d$.
\end{proof}
\begin{lemma}\label{lem.newPoincare}
Let $\Omega$ be a $(\delta,\sigma,R)$-quasiconvex domain. Let $Q\in \partial \Omega, r\in (0,R/4), B_r = B_r(Q)$ and $V_{4r}$ be the convex hull of $B_{4r} \cap \Omega$. Suppose $u\in W^{1,2}(B_{4r})$ and $u = 0$ on $B_{4r} \setminus V_{4r}$. Then, if $0<t+\delta<1$,
\begin{equation}\label{est.newPoincare}
\int_{B_r \cap \Omega_{tr}} u^2 \le C(t+\delta)^2 r^2 \int_{W_{r,t}} |\nabla u|^2,
\end{equation}
where $\Omega_{tr} = \{ x\in \Omega: \text{\rm dist}(x,\partial\Omega) < tr \}$, $W_{r,t}$ is given as in (\ref{est.flat}) and $C$ depends only on $d$ and $\sigma$.
\end{lemma}
\begin{proof}
Note that Lemma \ref{lem.flatness} implies $B_r \cap \Omega_{tr} \subset W_{r,t}$. By our assumption, we see $u = 0$ on $\partial V_{4r} \cap B_{4r}$. Then, (\ref{est.newPoincare}) follows from the Poincar\'{e} inequality in $W_{r,t}$, i.e.,
\begin{equation*}
\int_{W_{r,t}} u^2 \le C(t+\delta)^2 r^2 \int_{W_{r,t}} |\nabla u|^2,
\end{equation*}
where $C$ depends only on $d$ and $\sigma$. We point out that the last inequality is valid because the Lipschitz constant of $V_{4r}$ depends only on $d$ and $\sigma$ (after rescaling), and $W_{r,t}$ is a boundary layer with thickness $(t+\delta) r$.
\end{proof}
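For the reader's convenience, we recall the elementary one-dimensional mechanism behind the Poincar\'{e} inequality on a thin layer: if $w\in C^1([0,h])$ with $w(0) = 0$, then by the fundamental theorem of calculus and the Cauchy-Schwarz inequality,
\begin{equation*}
w(s)^2 = \Big( \int_0^s w'(\tau)\, d\tau \Big)^2 \le h \int_0^h |w'(\tau)|^2\, d\tau \qquad \text{for } 0\le s \le h,
\end{equation*}
and integrating in $s$ gives $\int_0^h w^2 \le h^2 \int_0^h |w'|^2$. In $W_{r,t}$, whose thickness is $(t+\delta)r$, one applies this along directions transversal to $\partial V_{4r}$, with uniform constants thanks to the controlled Lipschitz character of $\partial V_{4r}$.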
The following is the main theorem of this section.
\begin{theorem}\label{thm.Reverse Holder}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex domain. Then for any $p\in (2,\infty)$, there exists $\delta_0>0$, depending only on $d,p$ and $\sigma$, such that if $\delta\in (0,\delta_0)$, $r\in (0,R)$, and $u \in W^{2,2}(D_{2r})$ is the weak solution of
\begin{equation}\label{est.weak.D2r}
\left\{
\begin{aligned}
&\Delta^2 u = 0, \quad \txt{in } D_{2r}, \\
&u = 0,\ \nabla u = 0, \quad \txt{on } \Delta_{2r},
\end{aligned}
\right.
\end{equation}
then
\begin{equation}\label{est.reverseLp}
\bigg( \fint_{D_{r/2}} |\nabla^2 u|^p \bigg)^{1/p} \le C\bigg( \fint_{D_{2r}} |\nabla^2 u|^2 \bigg)^{1/2},
\end{equation}
where $C$ depends only on $d$ and $\Omega$\footnote{Here and hereafter, for convenience, we simply say $C$ depends on $\Omega$ if $C$ depends on $(\delta,\sigma,R)$ and/or the Lipschitz constant.}.
\end{theorem}
\begin{proof}
Since $u$ and $\nabla u$ vanish on $\Delta_{2r} = \partial\Omega \cap B_{2r}$, we may first extend $u\in W^{2,2}(D_{2r})$ to $\tilde{u} \in W^{2,2}(B_{2r})$ by the zero-extension. Fix $\rho\in (0,r/16)$ and $D_\rho = D_\rho(Q)\subset D_r = D_r(Q)$. Since $\Omega$ is $(\delta,\sigma,R)$-quasiconvex, by Definition \ref{def.quasiconvex}, the convex hull of $D_\rho$, denoted by $V_\rho$, is a convex domain so that
\begin{equation*}
V_\rho \cap \Omega = D_\rho \quad \txt{and} \quad d_H(\partial V_\rho, \partial D_\rho) \le \delta \rho.
\end{equation*}
We now construct an approximation of $\tilde{u}$ in $V_\rho$. Let $v$ be the weak solution of
\begin{equation*}
\left\{
\begin{aligned}
&\Delta^2 v = 0, \quad \txt{in } V_\rho, \\
&v = \tilde{u},\ \nabla v = \nabla \tilde{u}, \quad \txt{on } \partial V_\rho.
\end{aligned}
\right.
\end{equation*}
Note that $\tilde{u} = 0$ and $\nabla \tilde{u} = 0$ on $\partial V_\rho\cap B_\rho$.
We claim that
\begin{equation}\label{est.claim1}
\norm{\nabla^2 v}_{L^\infty(D_{\rho/2})} \le C \bigg( \fint_{D_{\rho}} |\nabla^2 u|^2 \bigg)^{1/2},
\end{equation}
and
\begin{equation}\label{est.claim2}
\bigg( \fint_{D_{\rho}} |\nabla^2 (v-u)|^2 \bigg)^{1/2} \le C\delta^{ \varepsilon } \bigg( \fint_{D_{8\rho}} |\nabla^2 u|^{2} \bigg)^{1/2},
\end{equation}
for some $\varepsilon>0$.
Due to the zero-extension, $D_\rho$ may be replaced by $B_\rho$ in the above estimates. Also, the claim may be applied to $B_\rho(Q')$ for any $Q'\in B_{r/2}(Q)$ and $\rho\le r/2$. Thanks to Theorem \ref{thm.RealVar}, this implies the desired estimate (\ref{est.reverseLp}). Actually, for any given $p\in (2,\infty)$, choose $\delta_0>0$ such that $C\delta_0^{\varepsilon}< \eta_0$. Then for any $\delta\in (0,\delta_0)$, we apply Theorem \ref{thm.RealVar} to $F = |\nabla^2 u|$, $R_B = |\nabla^2 v|$, and $F_B = |\nabla^2(v- u)|$ and obtain (\ref{est.reverseLp}).
It suffices to show the claims (\ref{est.claim1}) and (\ref{est.claim2}). First of all, (\ref{est.claim1}) follows from Theorem \ref{thm.Hessian.Bound} applied to $v$ in the convex domain $V_\rho$. Actually,
\begin{equation*}
\begin{aligned}
\norm{\nabla^2 v}_{L^\infty(D_{\rho/2})} &\le \norm{\nabla^2 v}_{L^\infty(V_{\rho/2})} \\
& \le C\bigg( \fint_{V_\rho} |\nabla^2 v|^2 \bigg)^{1/2} \\
& \le C\bigg( \fint_{D_\rho} |\nabla^2 u|^2 \bigg)^{1/2},
\end{aligned}
\end{equation*}
where we have used the energy estimate in the last inequality and the fact $|V_\rho|\simeq |D_\rho| \simeq \rho^d$ (due to the non-degeneracy condition (\ref{est.reg.cond})). Here and hereafter, we say $A\simeq B$ if there exist positive constants $c$ and $C$ (depending only on the parameters of the domain) so that $cB \le A \le CB$.
To see (\ref{est.claim2}), integrating by parts, we have
\begin{equation}\label{eq.DeltaU-V}
\begin{aligned}
\int_{V_\rho} |\Delta (v-\tilde{u})|^2 & = -\int_{V_\rho} \Delta \tilde{u} \Delta (v - \tilde{u}) \\
& = -\int_{D_\rho} \Delta u \Delta (v - u).
\end{aligned}
\end{equation}
Let $\Omega_t = \{ x\in \Omega: \text{dist}(x,\partial\Omega) < t \}$. Let $\theta_{\delta \rho}$ be a cut-off function such that $\theta_{\delta \rho} = 1$ in $\Omega\setminus \Omega_{2\delta \rho}$, $\theta_{\delta \rho} = 0$ in $\Omega_{\delta \rho}$, and $|\nabla^k \theta_{\delta \rho}| \le C(\delta \rho)^{-k}$ for $k = 1,2$. Write
\begin{equation}\label{est.DeltaUL2}
\int_{V_\rho} |\Delta (v-\tilde{u})|^2 = -\int_{D_\rho} \Delta (\theta_{\delta \rho} u) \Delta (v - u) - \int_{D_\rho} \Delta ((1-\theta_{\delta \rho}) u) \Delta (v - u).
\end{equation}
By integration by parts and the fact $\Delta^2 u =0$ in $D_\rho$, we write the first integral of (\ref{est.DeltaUL2}) as
\begin{equation}\label{est.DeltaU1}
\begin{aligned}
& \int_{D_\rho} \Delta (\theta_{\delta \rho} u) \Delta (v - u) \\
&\qquad = \int_{D_\rho} \Delta \theta_{\delta \rho} u \Delta(v-u) + 2\int_{D_\rho} \nabla \theta_{\delta \rho}\cdot \nabla u \Delta(v-u) \\
& \qquad\qquad -\int_{D_\rho} \Delta \theta_{\delta \rho} \Delta u (v-u) - 2\int_{D_\rho} \nabla \theta_{\delta \rho} \cdot \nabla(v-u) \Delta u.
\end{aligned}
\end{equation}
Note that $\nabla \theta_{\delta \rho}$ is supported in $\Omega_{2\delta \rho}$, and Lemma \ref{lem.flatness} implies
\begin{equation*}
D_\rho \cap \Omega_{2\delta \rho} \subset W_{\rho,2\delta}:= \{ x\in V_{4\rho}: \text{dist}(x,\partial V_{4\rho} \cap B_{3\rho}) < 3\delta \rho \}
\end{equation*}
and $|W_{\rho,2\delta}| \le C\delta \rho^d$.
Hence, using the vanishing boundary conditions, Lemma \ref{lem.newPoincare} and the Meyers' estimate, we have
\begin{equation}\label{est.uL2}
\begin{aligned}
\bigg( \int_{D_\rho \cap \Omega_{2\delta \rho}} |u|^2 \bigg)^{1/2} + \delta \rho \bigg( \int_{D_\rho \cap \Omega_{2\delta \rho}} |\nabla u|^2 \bigg)^{1/2} &\le C\delta \rho \bigg( \int_{W_{\rho,2\delta}} |\nabla u|^2 \bigg)^{1/2}\\
& \le C(\delta \rho)^2 \bigg( \int_{W_{\rho,2\delta}} |\nabla^2 u|^2 \bigg)^{1/2} \\
& \le C(\delta \rho)^{2}(\delta \rho^d)^\varepsilon \bigg( \int_{D_{4\rho}} |\nabla^2 u|^{p_0} \bigg)^{1/p_0},
\end{aligned}
\end{equation}
where $\varepsilon = \frac{1}{2} - \frac{1}{p_0}>0$. In the last inequality, we also use the facts $W_{\rho,2\delta} \subset B_{4\rho}$ and $\nabla^2 u = 0$ in $B_{4\rho}\setminus D_{4\rho}$.
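We also note that the last inequality above is simply H\"{o}lder's inequality: since $p_0 > 2$,
\begin{equation*}
\bigg( \int_{W_{\rho,2\delta}} |\nabla^2 u|^2 \bigg)^{1/2} \le |W_{\rho,2\delta}|^{\frac{1}{2} - \frac{1}{p_0}} \bigg( \int_{W_{\rho,2\delta}} |\nabla^2 u|^{p_0} \bigg)^{1/p_0},
\end{equation*}
combined with the bound $|W_{\rho,2\delta}| \le C\delta \rho^d$ from Lemma \ref{lem.flatness}.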
Similarly, since $v-u$ and $\nabla(v-u)$ vanish on $\partial V_\rho$, and $D_\rho \cap \Omega_{2\delta\rho} \subset U_\rho:= \{x\in V_{\rho}: \text{dist}(x,\partial V_\rho) \le 3\delta \rho \}$, we may apply the Poincar\'{e} inequality on the layer $U_\rho$ to obtain
\begin{equation}\label{est.uvL2}
\begin{aligned}
&\bigg( \int_{D_\rho \cap \Omega_{2\delta \rho} } |v - \tilde{u}|^2 \bigg)^{1/2} + \delta \rho\bigg( \int_{D_\rho \cap \Omega_{2\delta \rho}} |\nabla( v - \tilde{u})|^2 \bigg)^{1/2} \\
&\qquad \le C(\delta \rho)^2 \bigg( \int_{ U_\rho} |\nabla^2 (v-\tilde{u})|^{2} \bigg)^{1/2} \\
&\qquad \le C(\delta \rho)^2 \bigg( \int_{V_{\rho}} |\nabla^2 (v-\tilde{u})|^{2} \bigg)^{1/2},
\end{aligned}
\end{equation}
where $C$ depends only on $d$ and $\sigma$, due to Lemma \ref{lem.LipConst}.
Combining (\ref{est.uL2}) and (\ref{est.uvL2}), we obtain from (\ref{est.DeltaU1}) that
\begin{equation}\label{est.DeltaUV}
\begin{aligned}
&\bigg| \int_{D_\rho} \Delta (\theta_{\delta \rho} u) \Delta (v - u) \bigg| \\
&\quad \le C(\delta \rho^d)^{\varepsilon } \bigg( \int_{D_{4\rho}} |\nabla^2 u|^{p_0} \bigg)^{1/p_0} \bigg( \int_{V_\rho} |\nabla^2 (v-u)|^{2} \bigg)^{1/2}.
\end{aligned}
\end{equation}
Similarly, the second integral on the right-hand side of (\ref{est.DeltaUL2}) admits the same bound as (\ref{est.DeltaUV}). It follows that
\begin{equation}\label{est.uVr}
\begin{aligned}
&\int_{V_\rho} |\Delta (v-\tilde{u})|^2 \\
&\quad \le C(\delta \rho^d)^{\varepsilon } \bigg( \int_{D_{4\rho}} |\nabla^2 u|^{p_0} \bigg)^{1/p_0} \bigg( \int_{V_\rho} |\nabla^2 (v-u)|^{2} \bigg)^{\frac{1}{2}}.
\end{aligned}
\end{equation}
Note that
\begin{equation*}
\int_{V_\rho} |\Delta (v-\tilde{u})|^2 = \int_{V_\rho} |\nabla^2 (v-\tilde{u})|^2.
\end{equation*}
Consequently, we obtain by the Meyers' estimate
\begin{equation*}
\begin{aligned}
\bigg( \fint_{D_\rho} |\nabla^2 (v-\tilde{u})|^2 \bigg)^{1/2} &\le C\delta^{\varepsilon } \bigg( \fint_{D_{4\rho}} |\nabla^2 u|^{p_0} \bigg)^{1/p_0} \\
& \le C\delta^{\varepsilon } \bigg( \fint_{D_{8\rho}} |\nabla^2 u|^{2} \bigg)^{1/2},
\end{aligned}
\end{equation*}
which proves (\ref{est.claim2}) and hence completes the proof.
\end{proof}
\begin{corollary}\label{cor.DuCe}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex domain. Then for any given $\varepsilon \in (0,1)$, there exists $\delta_0>0$, depending only on $d,\varepsilon$ and $\sigma$, such that if $\delta\in (0,\delta_0)$, $r\in (0,R)$, and $u \in W^{2,2}(D_{2r})$ is the weak solution of (\ref{est.weak.D2r}), then for any $x,y\in D_r$
\begin{equation}\label{est.Du.Holder}
|\nabla u(x) - \nabla u(y)| \le C \bigg( \frac{|x-y|}{r} \bigg)^{\varepsilon} \bigg( \fint_{D_{2r}} |\nabla u|^2 \bigg)^{1/2},
\end{equation}
where $C$ depends only on $d,\varepsilon$ and $\Omega$.
\end{corollary}
\begin{proof}
This follows readily from Theorem \ref{thm.Reverse Holder} (with $p = d/(1-\varepsilon)$), Theorem \ref{lem.Caccioppoli} and the Sobolev embedding theorem.
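More precisely, Theorem \ref{thm.Reverse Holder} with $p = d/(1-\varepsilon) > d$ controls $\big( \fint |\nabla^2 u|^p \big)^{1/p}$; Morrey's embedding $W^{1,p}\hookrightarrow C^{0,1-d/p} = C^{0,\varepsilon}$ (valid since $p>d$), applied to $\nabla u$, then gives (up to adjusting the radii)
\begin{equation*}
|\nabla u(x) - \nabla u(y)| \le C|x-y|^{\varepsilon}\, r^{1-\varepsilon} \bigg( \fint_{D_{r}} |\nabla^2 u|^{p} \bigg)^{1/p};
\end{equation*}
finally, the Caccioppoli inequality of Theorem \ref{lem.Caccioppoli} allows one to replace $r \big( \fint |\nabla^2 u|^2 \big)^{1/2}$ by $C\big( \fint |\nabla u|^2 \big)^{1/2}$ on a slightly larger domain.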
\end{proof}
\begin{remark}\label{rmk.d23}
As mentioned in the introduction, the reverse H\"{o}lder inequality (\ref{est.reverseLp}) with $p>d$ is critical for us. We emphasize here that this can be shown in general Lipschitz domains in the lower dimensions $d=2$ or $3$.
For $d = 2$, Meyers' estimate gives (\ref{est.reverseLp}) with $p = 2+\varepsilon > d$. For $d = 3$, Meyers' estimate alone is not enough; the sharp $(R)_{2+\varepsilon}$ regularity in Lipschitz domains is also needed. To see this, let $u\in W^{2,2}(D_{2r})$ be the weak solution of (\ref{est.weak.D2r}). Without loss of generality, assume $D_{2r}$ is a Lipschitz domain. By the coarea formula and Meyers' estimate, there exists $t\in (1,2)$ so that
\begin{equation}\label{est.Meyers.3d}
\bigg( \fint_{\partial D_{tr} \cap \Omega } |\nabla^2 u|^{2+\varepsilon} d\sigma \bigg)^{1/(2+\varepsilon)} \le C \bigg( \fint_{D_{2r}} |\nabla^2 u|^2 \bigg)^{1/2}.
\end{equation}
Due to the fact that $u =0 $ and $\nabla u = 0$ on $\Delta_{2r} = \partial\Omega \cap B_{2r}$, we use the $(R)_{2+\varepsilon}$ regularity to obtain
\begin{equation}\label{est.R2.3d}
\norm{(\nabla^2 u)^*}_{L^{2+\varepsilon}(\partial D_{tr})} \le C\norm{\nabla^2 u}_{L^{2+\varepsilon}(\partial D_{tr} \cap \Omega)} \le Cr^{\frac{2}{2+\varepsilon} - \frac{3}{2}} \norm{\nabla^2 u}_{L^2(D_{2r})}.
\end{equation}
Now, recall a general inequality: $\norm{F}_{L^q(D_r)} \le C\norm{(F)^*}_{L^p(\partial D_r)}$ with $q = dp/(d-1)$; see e.g., \cite[Remark 9.3]{KLS13}. Hence, if $d =3$, combining (\ref{est.Meyers.3d}) and (\ref{est.R2.3d}) gives
\begin{equation*}
\norm{\nabla^2 u}_{L^{3+\frac{3\varepsilon}{2}}(D_{tr})} \le Cr^{\frac{2}{2+\varepsilon} - \frac{3}{2}} \norm{\nabla^2 u}_{L^2(D_{2r})}.
\end{equation*}
This implies
\begin{equation*}
\bigg( \fint_{D_{tr}} |\nabla^2 u|^{3+\frac{3\varepsilon}{2}}\bigg)^{1/(3+\frac{3\varepsilon}{2})} \le C\bigg( \fint_{D_{2r}} |\nabla^2 u|^2 \bigg)^{1/2},
\end{equation*}
which yields the desired estimate as $t\in (1,2)$.
\end{remark}
\section{Green's function}
This section is devoted to constructing the Green's function in quasiconvex domains and obtaining some global pointwise estimates.
For each $y\in \mathbb{R}^d$, let $\Gamma^y(x) = \Gamma(x,y)$ be the fundamental solution of $\Delta^2$ in $\mathbb{R}^d$ \cite{S07}:
\begin{equation*}
\Gamma(x,y) = \left\{
\begin{aligned}
&\frac{1}{8\pi} |x-y|^2 (\ln|x-y|-1), \quad &d=2, \\
& \frac{-1}{2\omega_3} |x-y|, \quad &d =3, \\
& \frac{-1}{4\omega_4} \ln |x-y|, \quad & d = 4, \\
& \frac{1}{2(d-2)(d-4)\omega_d|x-y|^{d-4}},\quad & d\ge 5,
\end{aligned}
\right.
\end{equation*}
where $\omega_d$ is the surface area of the unit sphere in $\mathbb{R}^d$.
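One may check directly that each of these functions is biharmonic away from the pole; for instance, for $d\ge 5$, the identity $\Delta |x|^{s} = s(s+d-2)|x|^{s-2}$ gives $\Delta_x |x-y|^{4-d} = 2(4-d)|x-y|^{2-d}$, which is a constant multiple of the fundamental solution of the Laplacian and hence harmonic in $\mathbb{R}^d\setminus \{y\}$.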
To construct the Green's function in $\Omega$, we assume $\Omega$ is a bounded quasiconvex domain. For any $y\in \Omega$, let $\gamma(x,y)$ be the solution of
\begin{equation}\label{eq.gamma}
\left\{
\begin{aligned}
&\Delta_x^2 \gamma(x,y) = 0, \quad \txt{in } \Omega, \\
&\gamma(x,y) = \Gamma(x,y),\ \nabla_x \gamma(x,y) = \nabla_x \Gamma(x,y), \quad \txt{on } \partial\Omega.
\end{aligned}
\right.
\end{equation}
Since $\Gamma(\cdot,y)$ is smooth in $\mathbb{R}^d\setminus \{y \}$, the variational solution of (\ref{eq.gamma}) exists and $\gamma(\cdot,y) \in W^{2,2}(\Omega)$. Define the Green's function
\begin{equation*}
G(x,y) = \Gamma(x,y) - \gamma(x,y), \qquad x,y\in \Omega.
\end{equation*}
Then, $G(x,y)$ satisfies
\begin{equation*}
\left\{
\begin{aligned}
&\Delta_x^2 G(x,y) = \delta_y(x), \quad \txt{in } \Omega, \\
&G(x,y) = 0,\ \nabla_x G(x,y) = 0, \quad \txt{on } \partial\Omega,
\end{aligned}
\right.
\end{equation*}
and $G(\cdot, y)\in W^{2,2}(\Omega\setminus B_r(y) )$ for any $r>0$. Moreover, by a standard argument, one may show the symmetry property:
\begin{equation*}
G(x,y) = G(y,x), \qquad \text{for any } x,y\in \Omega, x\neq y.
\end{equation*}
\begin{lemma}\label{lem.EstG}
Let $d\ge 4$ and $\Omega$ be a $(\delta,\sigma,R)$-quasiconvex Lipschitz domain. There exists $\delta_0>0$, depending only on $d$ and $\sigma$, so that if $\delta\in (0,\delta_0)$, the Green's function $G(x,y)$ satisfies, for any $x,y\in \Omega$,
\begin{equation}\label{est.G}
|G(x,y)| \le \left\{
\begin{aligned}
&C\ln \frac{d_\Omega}{|x-y|}, \quad &d = 4, \\
&\frac{C}{|x-y|^{d-4}}, \quad &d \ge 5,
\end{aligned}
\right.
\end{equation}
\begin{equation}\label{est.DxG.DyG}
|\nabla_x G(x,y)| + |\nabla_y G(x,y)| \le \frac{C}{|x-y|^{d-3}},
\end{equation}
and
\begin{equation}\label{est.DxDyG}
|\nabla_x \nabla_y G(x,y)| \le \frac{C}{|x-y|^{d-2}}.
\end{equation}
In (\ref{est.G}), $d_\Omega$ is the diameter of $\Omega$.
\end{lemma}
\begin{proof}
Fix $x_0, y_0\in \Omega$ and $r_0 = \frac{1}{2}|x_0 - y_0|$.
Let $L$ be the straight line through $x_0$ and $y_0$, and $\bar{x} = (x_0+y_0)/2$. Let $x_k$ be points on $L$ such that $|x_k - \bar{x}| = \frac{5}{3}|x_{k-1} - \bar{x}|$ for any $k = 1,2,\cdots$, and $|x_k - x_0|<|x_k - y_0|$. Then, it is not hard to see that $r_k := |x_k - \bar{x}| = (\frac{5}{3})^{k} r_0$. Note that only finitely many of the points $\{ x_k \}$ are contained in $\Omega\cap L$. Moreover, $\{ B_{r_k/4} (x_k) \}$ forms a sequence of balls connecting $x_0$ to the boundary $\partial\Omega$. Similarly, we may let $y_k$ be the points on $L$ such that $|y_k -\bar{x}| = (\frac{5}{3})^{k} r_0$ and $|y_k - y_0|<|y_k-x_0|$.
Let $M, N$ be the smallest natural numbers such that the ball $B_{r_M/4}(x_M)$ and $B_{r_N/4}(y_N)$ intersect $\partial\Omega$.
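We note that, since $r_k = (5/3)^k r_0$ and $r_k \le \text{diam}(\Omega)$ as long as $x_k\in \Omega$, both $M$ and $N$ are bounded by $C\log(\text{diam}(\Omega)/r_0)$; only this finiteness matters below, as the relevant sums are convergent geometric series.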
We will use a duality argument. Let $f_k = (f_{k,ij}) \in C_0^\infty(B_{r_k}(y_k)\cap \Omega;\mathbb{R}^{d\times d})$ and $u\in W_0^{2,2}(\Omega)$ be the weak solution of
\begin{equation*}
\left\{
\begin{aligned}
&\Delta^2 u = \nabla^2\cdot f_k, \quad \txt{in } \Omega, \\
&u = 0,\ \nabla u = 0, \quad \txt{on } \partial\Omega.
\end{aligned}
\right.
\end{equation*}
Testing the equation with $u$ and integrating by parts, we have
\begin{equation*}
\int_{\Omega} |\nabla^2 u|^2
= \int_{\Omega} |\Delta u|^2 = \int_{B_{r_k}(y_k)} f_k\cdot \nabla^2 u,
\end{equation*}
which yields
\begin{equation}\label{est.Du2.f}
\bigg( \int_{\Omega} |\nabla^2 u|^2 \bigg)^{1/2} \le C \bigg( \int_{B_{r_k}(y_k)} |f_k|^2 \bigg)^{1/2}.
\end{equation}
It follows that, for any $\ell\ge 0$,
\begin{equation}\label{est.D2u.fk}
\bigg( \fint_{B_{r_\ell}(x_\ell)} |\nabla^2 u|^2 \bigg)^{1/2} \le Cr_\ell^{-d/2} \bigg( \int_{B_{r_k}(y_k)} |f_k|^2 \bigg)^{1/2}.
\end{equation}
Observe that $B_{r_\ell}(x_\ell) \cap \txt{supp}(f_k) = \emptyset$. By Corollary \ref{cor.DuCe}, there exists $\delta_0>0$, depending only on $d$ and $\sigma$, so that if $\delta<\delta_0$,
\begin{equation*}
\begin{aligned}
\underset{B_{r_\ell/4}(x_\ell)}{\text{osc}} [\nabla u] &\le Cr_\ell \bigg( \fint_{B_{r_\ell}(x_\ell)} |\nabla^2 u|^2 \bigg)^{1/2} \\
& \le Cr_\ell^{1-d/2} \bigg( \int_{B_{r_k}(y_k)} |f_k|^2 \bigg)^{1/2},
\end{aligned}
\end{equation*}
where $\underset{U}{\text{osc}} [F] = \sup_{x,y\in U} |F(x) - F(y)|$ and we have used the Poincar\'{e} inequality and (\ref{est.D2u.fk}). By noting that $\nabla u = 0$ on $\partial\Omega$ and $x_0$ is connected to the boundary through the sequence of balls $B_{r_\ell/4}(x_\ell)$ with $1\le \ell \le M$, we know that the above estimate actually implies
\begin{equation}\label{est.Du.fL2}
\begin{aligned}
|\nabla u(x_0)| &\le \sum_{\ell = 1}^{M} Cr_\ell^{1-d/2} \bigg( \int_{B_{r_k}(y_k)} |f_k|^2 \bigg)^{1/2} \\
&\le Cr_0^{1-d/2} \bigg( \int_{B_{r_k}(y_k)} |f_k|^2 \bigg)^{1/2}.
\end{aligned}
\end{equation}
Now, recall the representation formula
\begin{equation*}
\nabla u(x) = \int_{\Omega} \nabla_x \nabla_y^2 G(x,y) \cdot f_k(y)dy.
\end{equation*}
Thus, (\ref{est.Du.fL2}) implies
\begin{equation*}
\bigg| \int_{\Omega} \nabla_x \nabla_y^2 G(x_0,y) \cdot f_k(y)dy \bigg| \le \frac{C}{r_0^{d/2-1}} \bigg( \int_{B_{r_k}(y_k)} |f_k|^2 \bigg)^{1/2}.
\end{equation*}
By duality, it follows that
\begin{equation}\label{est.DxDDyG}
\bigg( \fint_{B_{r_k}(y_k)} |\nabla_x \nabla_y^2 G(x_0,y)|^2 dy \bigg)^{1/2} \le \frac{C}{r_0^{d/2-1} r_k^{d/2} }.
\end{equation}
Note that, by the symmetry of the Green's function, $y\mapsto \nabla_x G(x,y)$ is biharmonic in $\Omega\setminus \{x \}$ with vanishing boundary conditions. Again, by (\ref{est.Du.Holder}), (\ref{est.DxDDyG}) and the reverse H\"{o}lder inequality, we have
\begin{equation*}
\underset{B_{r_k/4}(y_k)}{\text{osc}} [\nabla_x \nabla_y G(x_0,\cdot)] \le \frac{C}{r_0^{d/2-1} r_k^{d/2-1} }.
\end{equation*}
Due to the boundary condition $\nabla_x \nabla_y G(x_0, y) = 0$ for $y\in \partial\Omega$, by connecting $y_0$ to the boundary through the sequence of balls $B_{r_k/4}(y_k)$, the above estimate implies
\begin{equation*}
| \nabla_x \nabla_y G(x_0 ,y_0) | \le
\sum_{k=1}^{N} \frac{C}{r_0^{d/2-1} r_k^{d/2-1}} \le
\frac{C}{r_0^{d - 2}} = \frac{C}{|x_0 - y_0|^{d-2}},
\end{equation*}
which yields (\ref{est.DxDyG}). Then (\ref{est.DxG.DyG}) and (\ref{est.G}) follow easily from the fundamental theorem of calculus and the vanishing boundary conditions of $G(x,y)$.
\end{proof}
\begin{theorem}\label{thm.Gxy}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex domain. Then for any given $\varepsilon \in (0,1)$, there exists $\delta_0>0$, depending only on $d,\varepsilon$ and $\sigma$, such that if $\delta\in (0,\delta_0)$, the Green's function satisfies
\begin{equation}\label{est.G.Holder}
| G(x,y)| \le \frac{C\delta(x)^{1+ \varepsilon} \delta(y)^{1+\varepsilon} }{|x-y|^{d-2+2\varepsilon}},
\end{equation}
\begin{equation}\label{est.DxG.Holder}
|\nabla_x G(x,y)| \le \frac{C\delta(x)^\varepsilon \delta(y)^{1+\varepsilon} }{|x-y|^{d-2+2\varepsilon}},
\end{equation}
\begin{equation}\label{est.DyG.Holder}
|\nabla_y G(x,y)| \le \frac{C\delta(x)^{1+\varepsilon} \delta(y)^{\varepsilon} }{|x-y|^{d-2+2\varepsilon}},
\end{equation}
and
\begin{equation}\label{est.DxDy.Holder}
|\nabla_x \nabla_y G(x,y)| \le \frac{C\delta(x)^\varepsilon \delta(y)^\varepsilon }{|x-y|^{d-2+2\varepsilon}},
\end{equation}
where $\delta(x) = \text{\rm dist}(x,\partial\Omega)$ and $C$ depends only on $d,\varepsilon$ and $\Omega$.
\end{theorem}
\begin{proof}
We show (\ref{est.DxDy.Holder}) first. Observe that $H(x,\cdot) := \nabla_x G(x,\cdot)$ is a solution in $\Omega\setminus \{x\}$. Fix $x,y\in \Omega$ and $r = |x-y|$. Without loss of generality, assume $\delta(y) < \frac{1}{3} r$ and let $Q_y\in \partial\Omega$ be a point such that $\delta(y) = |y - Q_y|$. We apply (\ref{est.Du.Holder}) and (\ref{est.DxDyG}) to $H(x,\cdot)$ in $D_{2r}(Q_y)$ and obtain
\begin{equation*}
\begin{aligned}
&|\nabla_y H(x,y) - \nabla_y H(x,Q_y)| \\
&\qquad \le C\bigg( \frac{|y-Q_y|}{r} \bigg)^\varepsilon \bigg( \fint_{D_{2r}(Q_y)} |\nabla_z H(x,z)|^2 dz \bigg)^{1/2} \\
& \qquad \le C \frac{\delta(y)^\varepsilon }{|x-y|^{d-2+\varepsilon} }.
\end{aligned}
\end{equation*}
Recall that $\nabla_y H(x,Q_y) = 0$. Hence,
\begin{equation}\label{est.DxDyG.dy}
|\nabla_y\nabla_x G(x,y)| \le C \frac{\delta(y)^\varepsilon }{|x-y|^{d-2+\varepsilon} }.
\end{equation}
Now, consider $H_1(\cdot,y) = \nabla_y G(\cdot, y)$. Using the same argument as before and (\ref{est.DxDyG.dy}), we have
\begin{equation*}
|\nabla_x H_1(x,y)| \le C \bigg( \frac{\delta(x)}{|x-y|} \bigg)^\varepsilon \frac{\delta(y)^\varepsilon }{|x-y|^{d-2+\varepsilon} },
\end{equation*}
which implies (\ref{est.DxDy.Holder}).
Next, (\ref{est.DxG.Holder}) and (\ref{est.DyG.Holder}) follow by integrating (\ref{est.DxDy.Holder}) in $y$ or in $x$, respectively, and using the boundary condition $\nabla_x G(x,\cdot) = \nabla_y G(\cdot,y) = 0$ on $\partial\Omega$. Finally, (\ref{est.G.Holder}) follows readily by integrating (\ref{est.DxG.Holder}) in $x$ and using the fact $G(\cdot, y) = 0$ on $\partial\Omega$.
\end{proof}
\section{Maximum principle}
Let $(f,g) \in W\!\!A^{1,\infty}(\partial\Omega)$. In view of Remark \ref{rmk.D2Dn}, we may rewrite the equation (\ref{eq.Dp}) as
\begin{equation}\label{eq.bi.fg}
\left\{
\begin{aligned}
&\Delta^2 u = 0, \quad \txt{in } \Omega, \\
&u = f,\ \frac{\partial}{\partial n}u = h, \quad \txt{on } \partial\Omega,
\end{aligned}
\right.
\end{equation}
where $h = n\cdot g$ on $\partial\Omega$. Due to the $(D)_2$ solvability in Lipschitz domains \cite{DKV86}, we know there is a unique solution of (\ref{eq.bi.fg}) so that
\begin{equation*}
\norm{(\nabla u)^*}_{L^2(\partial\Omega)} \le C\norm{g}_{L^2(\partial\Omega)} \simeq C\big( \norm{\nabla_{\tan} f}_{L^2(\partial\Omega)} +\norm{h}_{L^2(\partial\Omega)} \big).
\end{equation*}
The purpose of this section is to prove Theorem \ref{thm.MP}. Precisely, we would like to show that the solution of (\ref{eq.bi.fg}) satisfies
\begin{equation}\label{est.Du.Linfty}
\norm{\nabla u}_{L^\infty(\Omega)} \le C\norm{g}_{L^\infty(\partial\Omega)} \simeq C\big( \norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)} +\norm{h}_{L^\infty(\partial\Omega)} \big).
\end{equation}
The starting point of the proof is the integral representation for the solution of (\ref{eq.bi.fg}) in terms of the Green's function
\begin{equation}\label{eq.IntRepre}
u(x) = \int_{\partial \Omega} \frac{\partial}{\partial n} \Delta_Q G(Q,x) f(Q) d\sigma(Q) - \int_{\partial \Omega} \Delta_Q G(Q,x) h(Q) d\sigma(Q).
\end{equation}
Since $u$ is biharmonic in $\Omega$, the interior estimate implies
\begin{equation*}
|\nabla u(x)| \le \frac{C \sup \{ |u(y)|: y\in B_{\delta(x)/2}(x) \} }{\delta(x)}.
\end{equation*}
Consequently, to show (\ref{est.Du.Linfty}), it suffices to establish pointwise estimates of $u(x)$ for $x\in \Omega$. This will be done by considering the two integrals in (\ref{eq.IntRepre}) separately. For convenience, throughout this section, we fix a small absolute constant $\varepsilon \in (0,1)$ and let $\delta_0 >0$ be given by Corollary \ref{cor.DuCe} (or Theorem \ref{thm.Gxy}). Since $\Omega$ is a Lipschitz domain and $\sigma$ can be determined in terms of the Lipschitz constant, $\delta_0$ actually depends only on $d$ and the Lipschitz constant.
\begin{lemma}\label{lem.DeltaG}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. Then
\begin{equation*}
\int_{\partial\Omega} |\Delta_Q G(Q,x)| d\sigma(Q) \le C\delta(x),
\end{equation*}
where $C$ depends only on $d$ and $\Omega$.
\end{lemma}
\begin{proof}
Let $\{ B_{k\ell} = B_{2^k \delta(x)}(Q_{k\ell}) : 1\le \ell \le N, k\ge 0, Q_{k\ell}\in \partial\Omega\}$ be a family of balls that cover $\partial\Omega$ with finite overlaps, where $N$ is a finite number depending only on the dimension $d$. Moreover,
\begin{equation*}
\txt{dist}(x, 4B_{k\ell} ) \simeq 2^k \delta(x).
\end{equation*}
Let $D_{k\ell} = B_{k\ell}\cap \Omega$ and $\Delta_{k\ell} = B_{k\ell}\cap \partial\Omega$. For each $k$ and $\ell$, we estimate
\begin{equation*}
J_{k\ell} = \int_{\Delta_{k\ell}} |\Delta_Q G(Q,x)| d\sigma(Q).
\end{equation*}
Using the $(R)_2$ regularity in the Lipschitz domain $D_{k\ell}$, we have
\begin{equation*}
\begin{aligned}
J_{k\ell} &\le |\Delta_{k\ell}|^{1/2} \norm{(\nabla^2 G)^*_{D_{k\ell}}(\cdot,x)}_{L^2(\Delta_{k\ell})} \\
& \le C|\Delta_{k\ell}|^{1/2} \big(\norm{\nabla_{\tan}\nabla G(\cdot,x)}_{L^2(\partial B_{k\ell}\cap \Omega)} + \norm{\nabla_{\tan} \nabla G(\cdot,x) }_{L^2(\Delta_{k\ell})} \big).
\end{aligned}
\end{equation*}
Since $G(\cdot,x) = \nabla G(\cdot,x) = 0$ on $\partial\Omega$, we see that the last term in the above inequality vanishes. It follows that
\begin{equation*}
(J_{k\ell})^2 \le C|\Delta_{k\ell}| \int_{\partial B_{k\ell}\cap \Omega} |\nabla^2_y G(y,x)|^2 d\sigma(y).
\end{equation*}
At this point, one may notice that the right-hand side of the above inequality does not admit a good estimate. To overcome this difficulty, we will use the coarea formula to reduce the surface integral to a volume integral.
Precisely, for $t\in [1,2]$, let $D_{k\ell}^t = (tB_{k\ell})\cap \Omega$ and $\Delta_{k\ell}^t = (tB_{k\ell}) \cap \partial\Omega$. By a similar argument as above, one can show
\begin{equation}\label{est.Jkl.t}
(J_{k\ell}^t )^2 \le C|\Delta_{k\ell}| \int_{\partial (t B_{k\ell})\cap \Omega} |\nabla^2_y G(y,x)|^2 d\sigma(y).
\end{equation}
Observing $J_{k\ell} \le J_{k\ell}^t$ and integrating (\ref{est.Jkl.t}) in $t$ over $[1,2]$, one has
\begin{equation*}
\begin{aligned}
(J_{k\ell})^2 &\le \int_1^2 (J_{k\ell}^t)^2 dt \le \frac{C|\Delta_{k\ell}|}{2^{k} \delta(x)} \int_{D_{k\ell}^2} |\nabla_y^2 G(y,x)|^2 dy \\
& \le \frac{C|\Delta_{k\ell}|}{2^{3k} \delta(x)^3} \int_{D_{k\ell}^4} |\nabla_y G(y,x)|^2 dy \\
& \le \frac{C|\Delta_{k\ell}|}{2^{3k} \delta(x)^3} \int_{D_{k\ell}^4} \Big( \frac{C\delta(y)^\varepsilon \delta(x)^{1+\varepsilon} }{|x-y|^{d-2+2\varepsilon}} \Big)^2 dy \\
& \le C\frac{\delta(x)^2}{2^{2k\varepsilon}},
\end{aligned}
\end{equation*}
where we have used the coarea formula in the second inequality, the Caccioppoli inequality (\ref{est.Caccioppoli}) in the third inequality, and (\ref{est.DxG.Holder}) in the fourth inequality.
Consequently,
\begin{equation*}
\int_{\partial\Omega} |\Delta_Q G(Q,x)| d\sigma(Q) \le \sum_{k = 0}^{\infty} \sum_{\ell = 1}^N J_{k\ell} \le C\delta(x).
\end{equation*}
The proof is complete.
\end{proof}
Next, we need to deal with the second integral of (\ref{eq.IntRepre}).
\begin{lemma}\label{lem.DnDeltaG}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. Let
\begin{equation*}
u(x) = \int_{\partial \Omega} \frac{\partial}{\partial n} \Delta_Q G(Q,x) f(Q) d\sigma(Q).
\end{equation*}
Then
\begin{equation*}
|u(x) - u(P)| \le C\delta(x) \norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)},
\end{equation*}
where $P$ is a point on $\partial\Omega$ such that $x\in \Gamma(P)$.
\end{lemma}
\begin{proof}
The key idea is to use integration by parts on the boundary. For this purpose, we need to decompose $\partial\Omega$ into the boundaries of finitely many starlike Lipschitz subdomains and convert the normal derivative into tangential derivatives.
We construct a partition of unity. Let $\varphi_{k\ell}$ be a family of smooth functions, with $k \in \mathbb{N}$ and $1\le \ell\le N $, where $N$ depends only on the dimension $d$. Moreover, $\varphi_{k\ell}$ is supported in $B_{k\ell}:= B_{2^k \delta(x)}(Q_{k\ell})$, where $Q_{k\ell} \in \partial\Omega$, $\txt{dist}(x, 2B_{k\ell} ) \simeq 2^k \delta(x)$,
\begin{equation*}
\sum_{k,\ell} \varphi_{k\ell}(Q) = 1, \qquad \txt{for any } Q\in \partial\Omega,
\end{equation*}
and
\begin{equation*}
|\nabla \varphi_{k\ell}| \le \frac{C}{2^k \delta(x)}.
\end{equation*}
Now write
\begin{equation*}
\begin{aligned}
&u(x) - u(P) \\
&\qquad = \sum_{k,\ell} \int_{\partial \Omega \cap B_{k\ell} } \frac{\partial}{\partial n} \Delta_Q G(Q,x) (f(Q) - f(P))\varphi_{k\ell}(Q) d\sigma(Q).
\end{aligned}
\end{equation*}
For any Lipschitz domain, there exists some $R_0>0$ (depending only on $\Omega$) so that if $r<R_0$, $B_{r}(Q) \cap \Omega$ is a starlike Lipschitz domain for any $Q\in \partial\Omega$. Without loss of generality, we assume all the subdomains $D_{k\ell}: = B_{k\ell} \cap \Omega$ are starlike. Now fix $k$ and $\ell$, and let $x^{k\ell} $ be a suitable ``center'' of the starlike domain $D_{k\ell}$.
Since $D_{k\ell}$ is a starlike Lipschitz domain with ``center'' $x^{k\ell}$, for any given harmonic function $\psi$ in $D_{k\ell}$, we may define a transformation
\begin{equation*}
\Psi(y) = \int_0^1 \big( \psi(t(y-x^{k\ell}) + x^{k\ell}) - \psi(x^{k\ell}) \big) \frac{dt}{t}.
\end{equation*}
Then, one may directly verify that
\begin{equation*}
\psi(y) - \psi(x^{k\ell}) = (y-x^{k\ell})\cdot \nabla \Psi(y).
\end{equation*}
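Indeed, $\nabla \Psi(y) = \int_0^1 (\nabla \psi)(t(y-x^{k\ell}) + x^{k\ell})\, dt$, so that
\begin{equation*}
(y-x^{k\ell})\cdot \nabla \Psi(y) = \int_0^1 \frac{d}{dt}\, \psi(t(y-x^{k\ell}) + x^{k\ell})\, dt = \psi(y) - \psi(x^{k\ell}).
\end{equation*}
The subtraction of $\psi(x^{k\ell})$ in the definition of $\Psi$ guarantees that the integral converges near $t=0$.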
For any $Q\in \partial(D_{k\ell})$, a direct computation shows that
\begin{equation*}
\begin{aligned}
\frac{\partial \psi(Q)}{\partial n} &= \frac{\partial}{\partial n}((y-x^{k\ell})\cdot \nabla \Psi(y)) \Big|_{y=Q} \\
& = \sum_{i,j} (n_i \partial_j - n_j \partial_i )((y - x^{k\ell})_j \partial_i \Psi) \Big|_{y=Q} + (2-d) \frac{\partial \Psi}{\partial n}(Q).
\end{aligned}
\end{equation*}
Observe that for any fixed $i$ and $j$, $n_i \partial_j - n_j \partial_i$ is a tangential derivative on the boundary $\partial(D_{k\ell})$. Hence, since $\psi(y) = \Delta_y G(y,x)$ is harmonic in $D_{k\ell}$, we have
\begin{equation*}
\begin{aligned}
&\int_{\Delta_{k\ell} } \frac{\partial}{\partial n} \psi(Q) (f(Q) - f(P))\varphi_{k\ell}(Q) d\sigma \\
& = \sum_{i,j} \int_{\Delta_{k\ell} } (n_i \partial_j - n_j \partial_i )((Q - x^{k\ell})_j \partial_i \Psi) (f(Q) - f(P))\varphi_{k\ell}(Q) d\sigma \\
& \qquad + (2-d) \int_{\Delta_{k\ell}} \frac{\partial \Psi}{\partial n} (f(Q) - f(P))\varphi_{k\ell}(Q) d\sigma \\
& = -\sum_{i,j} \int_{\Delta_{k\ell} } (Q - x^{k\ell})_j \partial_i \Psi (n_i \partial_j - n_j \partial_i )\Big( (f(Q) - f(P))\varphi_{k\ell}(Q) \Big) d\sigma \\
& \qquad + (2-d) \int_{\Delta_{k\ell}} \frac{\partial \Psi}{\partial n} (f(Q) - f(P))\varphi_{k\ell}(Q) d\sigma,
\end{aligned}
\end{equation*}
where $\Delta_{k\ell} = B_{k\ell}\cap \partial\Omega$.
Now using the result in \cite{V87}, for $1<p<\infty$,
\begin{equation*}
\norm{(\nabla \Psi)^*_{D_{k\ell}}}_{L^p(\partial (D_{k\ell}))} \le C \text{diam}(D_{k\ell}) \norm{(\psi)_{D_{k\ell}}^*}_{L^p(\partial (D_{k\ell}))}.
\end{equation*}
It follows that
\begin{equation*}
\begin{aligned}
&\bigg| \int_{\Delta_{k\ell} } \frac{\partial}{\partial n} \psi(Q) (f(Q) - f(P))\varphi_{k\ell}(Q) d\sigma \bigg| \\
& \qquad \le C \norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)} |D_{k\ell}|^{1/2} \bigg( \int_{\partial (D_{k\ell})} |(\psi)_{D_{k\ell}}^*|^2 d\sigma\bigg)^{1/2},
\end{aligned}
\end{equation*}
where we have used the mean value theorem and the fact $|Q-P| \le C\,\text{diam}(D_{k\ell})$ for $Q\in \Delta_{k\ell}$.
Now recall that $\psi(y) = \Delta_y G(y,x)$. Similarly to $J_{k\ell}$ in the proof of Lemma \ref{lem.DeltaG}, by the $(R)_2$ regularity and the coarea formula, we may derive
\begin{equation*}
|D_{k\ell}|^{1/2} \bigg( \int_{\partial (D_{k\ell})} |(\psi)_{D_{k\ell}}^*|^2 d\sigma\bigg)^{1/2} \le C\frac{\delta(x)}{2^{k\varepsilon}}.
\end{equation*}
Hence
\begin{equation*}
|u(x) - u(P)| \le \sum_{k = 0}^{\infty} \sum_{\ell = 1}^{N} \frac{C\delta(x)}{2^{k\varepsilon}} \norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)} \le C\delta(x)\norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)},
\end{equation*}
which ends the proof.
\end{proof}
Now, Theorem \ref{thm.MP} follows readily from the previous two lemmas.
\begin{proof}[Proof of Theorem \ref{thm.MP}]
For any $x\in \Omega$ with $x\in \Gamma(P)$ for some $P\in \partial\Omega$, Lemma \ref{lem.DeltaG} and Lemma \ref{lem.DnDeltaG} imply
\begin{equation}\label{est.uxxx}
\begin{aligned}
|u(x) - u(P)| &\le C\delta(x) \big( \norm{g}_{L^\infty(\partial\Omega)} + \norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)}\big) \\
& \le C\delta(x) \norm{g}_{L^\infty(\partial\Omega)}.
\end{aligned}
\end{equation}
The term $\norm{\nabla_{\tan} f}_{L^\infty(\partial\Omega)}$ can be absorbed in the last inequality, due to Remark \ref{rmk.D2Dn}. Now, note that if $x\in \Gamma(P)$, then for any $y\in B_{\delta(x)/2}(x)$, $y$ is contained in a larger non-tangential cone $\Gamma'(P) \supset \Gamma(P)$, for which (\ref{est.uxxx}) still holds. Since $u(\cdot) - u(P)$ is also a solution, by the interior estimate, one arrives at
\begin{equation*}
|\nabla u(x)| \le \frac{C}{\delta(x)} \bigg(\fint_{B_{\delta(x)/2}(x)} |u(y) - u(P)|^2 dy\bigg)^{1/2} \le C\norm{g}_{L^\infty(\partial\Omega)}.
\end{equation*}
Since $x\in \Omega$ is arbitrary and $\nabla u = g$ on $\partial\Omega$ in the sense of non-tangential limit, we obtain (\ref{est.MP}) as desired.
\end{proof}
\begin{remark}
Due to Remark \ref{rmk.d23}, our approach for Theorem \ref{thm.MP} provides an alternative proof of the weak maximum principle in arbitrary Lipschitz domains for $d =2$ or $3$.
\end{remark}
\section{Classical Solutions}
In this section, we will show the existence of the classical solution for (\ref{eq.Dp}) if the boundary value $(f,g)\in W\!\!A^{1,2}(\partial\Omega)$ is continuous. For $0\le \alpha < 1$, let
\begin{equation*}
C\!A^{1,\alpha}(\partial\Omega) = \{ (f,g) = (\phi,\nabla \phi)|_{\partial\Omega}: \phi\in C^{1,\alpha}_0(\mathbb{R}^d) \}.
\end{equation*}
We stipulate $C^{1,0} = C^1, C\!A^{1,0} = C\!A^{1}$ and so forth. For $\alpha\in (0,1)$ and $U \subset \mathbb{R}^d$, define
\begin{equation*}
[F]_{C^{\alpha}(U)} := \sup_{x,y\in U} \frac{|F(x) - F(y)|}{|x-y|^\alpha}, \qquad [F]_{C^{1,\alpha}(U)} := \sup_{x,y\in U} \frac{|\nabla F(x) - \nabla F(y)|}{|x-y|^\alpha}.
\end{equation*}
Throughout this section, we assume $\varepsilon\in (0,1)$ and $\delta_0>0$ is given by Corollary \ref{cor.DuCe} (or Theorem \ref{thm.Gxy}). Note that $\delta_0$ depends only on $d,\varepsilon$ and the Lipschitz constant.
Let $(f,g)\in C\!A^{1,\alpha}(\partial\Omega)$ and $h = n\cdot g$. Consider the equation
\begin{equation}\label{eq.biharmonic}
\left\{
\begin{aligned}
&\Delta^2 u = 0, \quad \txt{in } \Omega, \\
&u = f,\ \frac{\partial}{\partial n}u = h, \quad \txt{on } \partial\Omega.
\end{aligned}
\right.
\end{equation}
In this section, we will show that if $\Omega$ is a quasiconvex domain with $\delta<\delta_0$, $0\le \alpha<\varepsilon$ and $(f,g)\in C\!A^{1,\alpha}(\partial\Omega)$, then $u\in C^{1,\alpha}(\overline{\Omega})$.
To this end, we fix an arbitrary point $P_0\in \partial\Omega$ and would like to subtract a linear function from $u$ so that the resulting function and its gradient both vanish at $P_0$. We first assume $\alpha>0$. By our definition, $(f,g) = (\phi,\nabla \phi)$ on $\partial\Omega$ for some $\phi\in C^{1,\alpha}_0(\mathbb{R}^d)$. Then, define a linear function
\begin{equation*}
L(x) = \phi(P_0) + \nabla \phi(P_0) \cdot (x-P_0).
\end{equation*}
Now, if we set
\begin{equation*}
\widetilde{u}(x) = u(x) - L(x),
\end{equation*}
then $\widetilde{u}$ satisfies
\begin{equation*}
\left\{
\begin{aligned}
&\Delta^2 \widetilde{u} = 0, \quad \txt{in } \Omega, \\
&\widetilde{u} = \widetilde{f},\ \frac{\partial}{\partial n}\widetilde{u} = \widetilde{h}, \quad \txt{on } \partial\Omega,
\end{aligned}
\right.
\end{equation*}
where $\widetilde{f} = \phi-L$ and $\widetilde{h} = n \cdot ( \nabla \phi -\nabla L)$ on $\partial\Omega$. Note that $\widetilde{h}$ is well-defined as the normal $n(Q)$ exists for a.e. $Q\in \partial\Omega$. Using the observations $(\phi-L)(P_0) = 0$ and $\nabla(\phi-L)(P_0) = 0$, we have
\begin{equation*}
|\widetilde{f}(Q)| \le C|Q-P_0|^{1+\alpha} [\phi]_{C^{1,\alpha}(\mathbb{R}^d)},
\end{equation*}
and
\begin{equation}\label{est.ha}
|\widetilde{h}(Q)| \le C|Q-P_0|^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{equation}
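Indeed, since $\nabla L = \nabla \phi(P_0)$ is constant, $|\widetilde{h}(Q)| \le |\nabla \phi(Q) - \nabla \phi(P_0)| \le |Q-P_0|^{\alpha} [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}$, and the estimate for $\widetilde{f}$ follows by writing $\widetilde{f}(Q) = \int_0^1 (Q-P_0)\cdot \nabla(\phi - L)(P_0 + t(Q-P_0))\, dt$ and applying the same bound to the integrand.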
\begin{theorem}\label{thm.tu.C1a}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. Let $\widetilde{u}$ be defined as above. Then, for any $x\in \Gamma(P_0)$
\begin{equation*}
|\nabla \widetilde{u}(x)| + \delta(x)^{-1} |\widetilde{u}(x)| \le C\delta(x)^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)},
\end{equation*}
where $\alpha\in (0,\varepsilon)$ and $C$ depends only on $d, \varepsilon, \alpha$ and $\Omega$.
\end{theorem}
Recall the integral representation for the solution $\widetilde{u}$
\begin{equation}\label{eq.tu.int}
\widetilde{u}(x) = \int_{\partial \Omega} \frac{\partial}{\partial n} \Delta_Q G(Q,x) \widetilde{f}(Q) d\sigma(Q) - \int_{\partial \Omega} \Delta_Q G(Q,x) \widetilde{h}(Q) d\sigma(Q).
\end{equation}
The following are improved estimates of Lemma \ref{lem.DeltaG} and Lemma \ref{lem.DnDeltaG}.
\begin{lemma}\label{lem.DG.alpha}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. Then
\begin{equation*}
\int_{\partial\Omega} |\Delta_Q G(Q,x)| |x-Q|^{\alpha}d\sigma(Q) \le C\delta(x)^{1+\alpha},
\end{equation*}
where $\alpha\in (0,\varepsilon)$ and $C$ depends only on $d, \varepsilon, \alpha$ and $\Omega$.
\end{lemma}
\begin{proof}
For each $x\in \Omega$, let $D_{k\ell}$ and $\Delta_{k\ell}$ be the same as in Lemma \ref{lem.DeltaG}. Observe that if $Q\in \Delta_{k\ell}$ and $y\in D_{k\ell}$,
\begin{equation*}
|x-Q| \simeq |x-y| \simeq 2^k \delta(x).
\end{equation*}
Let
\begin{equation*}
\widetilde{J}_{k\ell} = \int_{\Delta_{k\ell}} |\Delta_Q G(Q,x)| |x-Q|^{\alpha}d\sigma(Q).
\end{equation*}
By the same argument as in Lemma \ref{lem.DeltaG}, we have
\begin{equation*}
\begin{aligned}
(\widetilde{J}_{k\ell})^2 &\le \frac{C|\Delta_{k\ell}|}{2^{3k} \delta(x)^3} \int_{D_{k\ell}^4} \Big( \frac{C\delta(y)^\varepsilon \delta(x)^{1+\varepsilon} }{|x-y|^{d-2+2\varepsilon}} \Big)^2 (2^k\delta(x))^{2\alpha} dy \\
& \le C\frac{\delta(x)^{2+2\alpha} }{2^{2k(\varepsilon-\alpha)}}.
\end{aligned}
\end{equation*}
Now if $\alpha<\varepsilon$, we may obtain the desired estimate by taking square roots and summing over $k$ and $\ell$.
\end{proof}
\begin{lemma}\label{lem.DnDG.alpha}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. Let
\begin{equation*}
u(x) = \int_{\partial \Omega} \frac{\partial}{\partial n} \Delta_Q G(Q,x) \widetilde{f}(Q) d\sigma(Q).
\end{equation*}
Then, if $x\in \Gamma(P_0)$,
\begin{equation*}
|u(x)| \le C\delta(x)^{1+\alpha} [\phi]_{C^{1,\alpha}(\mathbb{R}^d)},
\end{equation*}
where $\alpha\in (0,\varepsilon)$ and $C$ depends only on $d, \varepsilon, \alpha$ and $\Omega$.
\end{lemma}
\begin{proof}
By our construction, $\widetilde{f} = \phi - L$ is defined on the entire $\mathbb{R}^d$; using the facts $(\phi-L)(P_0) = 0$ and $\nabla(\phi-L)(P_0) = 0$, we know
\begin{equation}\label{est.tf}
|\nabla \widetilde{f}(Q) |+ |Q-P_0|^{-1} |\widetilde{f}(Q)| \le C|Q-P_0|^{\alpha} [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{equation}
For each $x\in \Gamma(P_0)$, let $D_{k\ell}$, $\Delta_{k\ell}$ and $\varphi_{k\ell}$ be the same as in Lemma \ref{lem.DnDeltaG}. Note that for $Q\in \Delta_{k\ell}$, $|Q-P_0| \le C 2^{k} \delta(x)$. Therefore,
by (\ref{est.tf}) and the same argument as in Lemma \ref{lem.DnDeltaG}, one has
\begin{equation*}
\begin{aligned}
&\bigg| \int_{\Delta_{k\ell} } \frac{\partial}{\partial n} \psi(Q) \widetilde{f}(Q) \varphi_{k\ell}(Q) d\sigma \bigg| \\
& \qquad \le C (2^k \delta(x))^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)} |D_{k\ell}|^{1/2} \bigg( \int_{\partial (D_{k\ell})} |(\psi)_{D_{k\ell}}^*|^2 d\sigma\bigg)^{1/2} \\
& \qquad \le C\frac{\delta(x)^{1+\alpha}}{2^{k(\varepsilon-\alpha)}},
\end{aligned}
\end{equation*}
where $\psi(Q) = \Delta_Q G(Q,x)$. This implies the desired estimate for $u(x)$ by summing over $k$ and $\ell$, provided $\alpha<\varepsilon$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm.tu.C1a}]
The estimate of $|\widetilde{u}(x)|$ follows readily from the representation formula (\ref{eq.tu.int}), Lemma \ref{lem.DG.alpha}, Lemma \ref{lem.DnDG.alpha}, and (\ref{est.ha}). Now, note that if $x\in \Gamma(P_0)$, then $B_{\delta(x)/4}(x)$ is contained in a larger non-tangential cone $\Gamma'(P_0)$, in which the estimate for $\widetilde{u}$ still holds. Then, the interior estimate gives
\begin{equation*}
|\nabla \widetilde{u}(x)| \le C\delta(x)^{-1} \sup_{y\in B_{\delta(x)/4} (x)} |\widetilde{u}(y)| \le C\delta(x)^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{equation*}
This completes the proof.
\end{proof}
The following is the main result of this section.
\begin{theorem}\label{thm.C1a}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. Let $u$ be the solution of (\ref{eq.biharmonic}). If $0<\alpha<\varepsilon$ and $ (f,g) = (\phi,\nabla\phi)\in C\!A^{1,\alpha}(\partial\Omega)$ with $\phi\in C^{1,\alpha}(\mathbb{R}^d)$, then $u\in C^{1,\alpha}(\overline{\Omega})$. Moreover,
\begin{equation}\label{est.C1a.phi}
[\nabla u]_{C^\alpha(\Omega)} \le C[\phi]_{C^{1,\alpha}(\mathbb{R}^d)},
\end{equation}
where $C$ depends only on $d,\alpha$ and $\Omega$.
\end{theorem}
\begin{proof}
We first show that $\nabla u$ is $C^\alpha$-H\"{o}lder continuous on the boundary along the non-tangential directions. Let $\widetilde{u}$ be defined as before with some $P_0\in \partial\Omega$. Note that $\nabla \widetilde{u}(P_0) = \nabla u(P_0) - \nabla L(P_0) = 0$ and that $\nabla L(x)$ is constant. Thus,
\begin{equation*}
|\nabla \widetilde{u}(x)| = |\nabla u(x) - \nabla L(x) - \nabla u(P_0) + \nabla L(P_0)| = |\nabla u(x) - \nabla u(P_0)|.
\end{equation*}
Hence, Theorem \ref{thm.tu.C1a} implies
\begin{equation}\label{est.C1a.nontan}
|\nabla u(x) - \nabla u(P_0)| \le C\delta(x)^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{equation}
Clearly, (\ref{est.C1a.nontan}) holds for any $P_0\in \partial\Omega$ and $x\in \Gamma(P_0)$.
On the other hand, by our assumption on $\phi$, $\nabla u(Q) = \nabla \phi(Q)$ is obviously $C^\alpha$-H\"{o}lder continuous for $Q\in \partial\Omega$, i.e., for any $P,Q\in \partial\Omega$
\begin{equation}\label{est.bdry.C1a}
|\nabla u(P) - \nabla u(Q)| \le C|P-Q|^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{equation}
Now, consider any two points $x,y\in \Omega$ with $x\in \Gamma(P)$ and $y\in \Gamma(Q)$. Without loss of generality, we assume $\delta(x) \ge \delta(y)$. It suffices to estimate $|\nabla u(x) - \nabla u(y)|$, which will be done in two cases as follows.
Case 1: $|x-y| \le \delta(x)/4$. Let $\widetilde{u}$ be constructed as previously with respect to the point $P$ (instead of $P_0$). Since $x\in \Gamma(P)$, $B_{\delta(x)/2}(x)$ is contained in a larger non-tangential cone $\Gamma'(P)$ for which Theorem \ref{thm.tu.C1a} still holds. Thus, using the interior $C^{1,\alpha}$ estimate for $\widetilde{u}$ and Theorem \ref{thm.tu.C1a}, we obtain
\begin{equation}\label{est.tuxy}
\begin{aligned}
|\nabla \widetilde{u}(x) - \nabla \widetilde{u}(y) | &\le C\bigg( \frac{|x-y|}{\delta(x)} \bigg)^{\alpha} \bigg( \fint_{B_{\delta(x)/2} (x) } |\nabla \widetilde{u}|^2 \bigg)^{1/2} \\
& \le C|x-y|^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{aligned}
\end{equation}
Recall that $\widetilde{u} = u - L$ and $\nabla L$ is constant. Thus, (\ref{est.tuxy}) gives
\begin{equation*}
|\nabla u(x) - \nabla u(y)| \le C|x-y|^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{equation*}
Case 2: $|x-y| \ge \delta(x)/4$. In this case, $\delta(y) \le \delta(x) \le 4|x-y|$, and hence
\begin{equation*}
\begin{aligned}
|P-Q| &\le |P-x| + |x-y| + |y-Q| \\
&\le (1+\beta) \delta(x) + |x-y| + (1+\beta)\delta(y) \\
&\le (8(1+\beta)+1) |x-y|.
\end{aligned}
\end{equation*}
It follows from (\ref{est.C1a.nontan}) and (\ref{est.bdry.C1a}) that
\begin{equation*}
\begin{aligned}
|\nabla u(x) - \nabla u(y)| &\le |\nabla u(x) - \nabla u(P)| + |\nabla u(P) - \nabla u(Q)| + |\nabla u(Q) - \nabla u(y)| \\
& \le C\delta(x)^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)} + C|P-Q|^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)} + C\delta(y)^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)} \\
& \le C|x-y|^\alpha [\phi]_{C^{1,\alpha}(\mathbb{R}^d)}.
\end{aligned}
\end{equation*}
The proof is hence complete.
\end{proof}
As a corollary, we can deal with the endpoint case $\alpha=0$ by the weak maximum principle and an approximation argument.
\begin{corollary}
Let $d\ge 4$ and $\Omega$ be a bounded $(\delta,\sigma,R)$-quasiconvex Lipschitz domain with $\delta<\delta_0$. If $u$ is the solution of (\ref{eq.biharmonic}) with $ (f,g) \in C\!A^{1}(\partial\Omega)$, then $u\in C^1(\overline{\Omega})$. Moreover, for any $Q\in \partial\Omega$,
\begin{equation*}
\lim_{\Omega \ni x\to Q } u(x) = f(Q), \quad \text{and} \quad \lim_{\Omega \ni x\to Q } \nabla u(x) = g(Q).
\end{equation*}
\end{corollary}
\begin{proof}
Let $(f,g) = (\phi,\nabla \phi)|_{\partial\Omega} \in C\!A^{1}(\partial\Omega)$ for some $\phi\in C_0^1(\mathbb{R}^d)$. For any given $\rho>0$, we may construct $\phi_\rho \in C^{1,\alpha}_0(\mathbb{R}^d)$ for some $\alpha\in (0,\varepsilon)$ so that
\begin{equation}\label{est.phi.rho}
\norm{\phi - \phi_\rho}_{L^\infty(\mathbb{R}^d)} \le \rho, \qquad \norm{\nabla \phi - \nabla \phi_\rho}_{L^\infty(\mathbb{R}^d)} \le \rho.
\end{equation}
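Such an approximation can be obtained, for instance, by mollification: if $\eta_s$ is a standard mollifier, then $\phi * \eta_s$ is smooth (in particular, it belongs to $C^{1,\alpha}_0(\mathbb{R}^d)$ after multiplying by a suitable cut-off), and since $\phi\in C^1_0(\mathbb{R}^d)$, both $\phi * \eta_s \to \phi$ and $\nabla (\phi * \eta_s) = (\nabla \phi) * \eta_s \to \nabla \phi$ uniformly as $s \to 0$.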
Let $(f_\rho,g_\rho) = (\phi_\rho,\nabla \phi_\rho)|_{\partial\Omega} \in C\!A^{1,\alpha}(\partial\Omega)$.
Thus, (\ref{est.phi.rho}) implies
\begin{equation*}
\norm{f - f_\rho}_{L^\infty(\partial\Omega)} \le \rho, \qquad \norm{g - g_\rho}_{L^\infty(\partial\Omega)} \le \rho.
\end{equation*}
Consider the biharmonic equation with data $(f_\rho,g_\rho)$
\begin{equation}
\left\{
\begin{aligned}
&\Delta^2 u_\rho = 0, \quad \txt{in } \Omega, \\
&u_\rho = f_\rho,\ \frac{\partial}{\partial n}u_\rho = h_\rho, \quad \txt{on } \partial\Omega,
\end{aligned}
\right.
\end{equation}
where $h_\rho = n\cdot g_\rho$. Then, obviously, the difference $u - u_\rho$ satisfies
\begin{equation}
\left\{
\begin{aligned}
&\Delta^2 (u-u_\rho) = 0, \quad \txt{in } \Omega, \\
&u - u_\rho = f - f_\rho,\ \frac{\partial}{\partial n}(u-u_\rho) = n\cdot (g - g_\rho), \quad \txt{on } \partial\Omega,
\end{aligned}
\right.
\end{equation}
where $(f-f_\rho, g-g_\rho) \in W\!\!A^{1,\infty}(\partial\Omega)$. Now, by the weak maximum principle, we have
\begin{equation*}
\norm{\nabla u - \nabla u_\rho}_{L^\infty(\Omega)} \le C\norm{g - g_\rho}_{L^\infty(\partial\Omega)} \le C\rho.
\end{equation*}
Since the domain $\Omega$ is bounded, the above estimate also gives
\begin{equation*}
\norm{u-u_\rho}_{L^\infty(\Omega)} \le C\rho,
\end{equation*}
where $C$ is independent of $\rho$.
On the other hand, by Theorem \ref{thm.C1a}, for any given $Q\in \partial\Omega$,
\begin{equation*}
\lim_{\Omega \ni x\to Q } u_\rho(x) = f_\rho(Q), \quad \text{and} \quad \lim_{\Omega \ni x\to Q } \nabla u_\rho(x) = g_\rho(Q).
\end{equation*}
Therefore,
\begin{equation*}
\begin{aligned}
\limsup_{\Omega \ni x\to Q} |u(x) - f(Q)| & \le \norm{u - u_\rho}_{L^\infty(\Omega)} + \limsup_{\Omega \ni x\to Q} |u_\rho(x) - f_\rho(Q)| + |f(Q) - f_\rho(Q)| \\
& \le C\rho.
\end{aligned}
\end{equation*}
Since $\rho>0$ is arbitrary, we may let $\rho \to 0$ and obtain
\begin{equation*}
\lim_{\Omega \ni x\to Q } u(x) = f(Q).
\end{equation*}
The proof for $\nabla u(x)$ is the same.
\end{proof}
\begin{remark}
In view of Remark \ref{rmk.d23}, the above classical solutions may also be obtained in arbitrary Lipschitz domains for $d = 2$ or $3$. In particular, for $d=2$, we may even show (\ref{est.C1a.phi}) with $\alpha \in (0,\frac{1}{2} +\varepsilon)$, instead of $\alpha\in (0,\varepsilon)$. This is due to a better reverse H\"{o}lder inequality. Actually, by the same argument as in Remark \ref{rmk.d23}, we may use the $(R)_{2+\varepsilon}$ regularity to show the reverse H\"{o}lder inequality with $p = 4+\varepsilon$. By the Sobolev embedding theorem, this implies that the Green's function is actually $C^{1,\frac{1}{2} + \varepsilon}$ (away from the poles), which yields Lemma \ref{lem.DG.alpha} and Lemma \ref{lem.DnDG.alpha} with $\alpha\in (0,\frac{1}{2}+\varepsilon)$.
\end{remark}
\textbf{Acknowledgment.} The author is supported in part by National Science Foundation grant DMS-1600520.
\bibliographystyle{plain}
| {
"timestamp": "2019-07-26T02:06:13",
"yymm": "1907",
"arxiv_id": "1907.10857",
"language": "en",
"url": "https://arxiv.org/abs/1907.10857",
"abstract": "In dimension two or three, the weak maximum principal for biharmonic equation is valid in any bounded Lipschitz domains. In higher dimensions (greater than three), it was only known that the weak maximum principle holds in convex domains or $C^1$ domains, and may fail in general Lipschitz domains. In this paper, we prove the weak maximum principle in higher dimensions in quasiconvex Lipschitz domains, which is a sharp condition in some sense and recovers both convex and $C^1$ domains.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Weak maximum principle for biharmonic equations in quasiconvex Lipschitz domains",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877007780707,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594805454474
} |
https://arxiv.org/abs/2006.04849 | The length of the shortest closed geodesic on positively curved 2-spheres | We show that the shortest closed geodesic on a 2-sphere with non-negative curvature has length bounded above by three times the diameter. We prove a new isoperimetric inequality for 2-spheres with pinched curvature; this allows us to improve our bound on the length of the shortest closed geodesic in the pinched curvature setting. | \section{Introduction}
Gromov \cite{Gro83} has asked if there exist constants $c(n)$ such that the length of the shortest closed geodesic $L(M^n)$ on a closed Riemannian manifold $M^n$ is bounded above by $c(n) D(M^n)$, where $D(M^n)$ is the diameter of the manifold. On non-simply connected manifolds the shortest non-contractible closed curve is a geodesic with length bounded above by $2D(M^n)$. On manifolds homeomorphic to the 2-sphere Croke \cite{croke1} provided the first bound of $L(S^2,g) \leq 9D(S^2,g)$, which was improved by Maeda \cite{maeda}, and finally by Nabutovsky and Rotman \cite{NR2002} and independently Sabourau \cite{Sab} to $L(S^2,g) \leq 4D(S^2,g)$. Rotman \cite{rot5} has further proved that $L(S^2,g) \leq 4R(S^2,g)$ where $$R(S^2,g)= \min_{x \in S^2} \max_{y \in S^2} d(x,y) \leq D(S^2,g) = \max_{x \in S^2} \max_{y \in S^2} d(x,y)$$ is the radius of the Riemannian sphere.
Gromov's question is open for simply connected manifolds in dimensions $n \geq 3$.
An attractive conjecture is that $L(M^n) \leq 2D(M^n)$ for all closed Riemannian manifolds $M^n$. To explore this bound one might consider Zoll spheres:~metrics on the 2-sphere all of whose geodesics are closed and of the same length. The conjecture turns out to be overly optimistic, as Balacheff, Croke, and Katz \cite{bck} have produced Zoll spheres with $L(S^2, Zoll) > 2D(S^2, Zoll)$. These examples are not constructive, and it is unknown how much longer than $2D(S^2, g)$ the shortest closed geodesic could be. In this paper we prove the following:
\begin{theorem}\label{main} Non-negatively curved 2-spheres have $L(S^2, g) \leq 3R(S^2, g) \leq 3D(S^2, g)$.
\end{theorem}
While one does not expect the inequality $L(S^2, g) \leq 3D(S^2, g)$ to be sharp, we note that the inequality $L(S^2, g) \leq 3R(S^2, g)$ is realized by the metric space formed when gluing two equilateral triangles along their common boundaries, the so-called Calabi-Croke sphere.
The centers of the triangles realize the radius whereas the vertices realize the diameter. The shortest closed geodesic is a doubled altitude which has length exactly 3 times the radius and $\sqrt{3} $ times the diameter.
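For concreteness, these ratios can be checked by elementary arithmetic; the following sketch (ours, in Python, not from the literature) uses the fact that the centroid-to-vertex distance of a unit equilateral triangle is $1/\sqrt{3}$, which is where the value of the radius comes from.
\begin{verbatim}
# Arithmetic check of the Calabi-Croke ratios for unit side length.
# Assumption (ours): R equals the circumradius of a unit equilateral
# triangle, i.e. the flat distance from a face center to a vertex.
import math

side = 1.0
D = side                       # diameter, realized by a pair of vertices
L = 2 * (math.sqrt(3) / 2)     # doubled altitude = sqrt(3)
R = side / math.sqrt(3)        # radius, realized at the face centers
print(L / R, L / D)            # 3.0 and 1.732... = sqrt(3)
\end{verbatim}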
We should note that the results cited above are curvature free bounds, whereas our bounds require non-negative curvature. Calabi and Cao \cite{calabi} studied simple closed geodesics on non-negatively curved 2-spheres, and showed that any closed geodesic of shortest length must be simple. We can therefore improve our Theorem~\ref{main} to the following:~on non-negatively curved $(S^2,g)$ the shortest closed geodesic is simple and has length bounded above by three times the radius.
Additional work in the positive curvature setting includes \cite{recent} where pinched metrics on the 2-sphere are studied and an upper bound for the length of the shortest closed geodesic is provided in terms of the area. We combine this result with a new isoperimetric inequality for pinched metrics on the 2-sphere (Theorem~\ref{iso}) to yield the following:
\begin{theorem}\label{pinched}
Let $(S^2,g)$ be a $\delta$-pinched metric with $\delta > \frac{4+\sqrt{7}}{8} \approx 0.83$. Then
\[L(S^2,g) \leq \frac{2}{\sqrt{\delta}}D(S^2,g)
\]
with equality if and only if the sphere is round.
\end{theorem}
For $\delta = .83$ our theorem yields the bound $L(S^2,g) \leq 2.19 D(S^2,g)$. The theorem is optimal in the sense that the constant $\frac{2}{\sqrt{\delta}}$ converges to $2$ as $\delta$ approaches $1$. Finally, we note that this theorem can be used to further comment on the Zoll spheres due to Balacheff, Croke, and Katz \cite{bck} where we can now say that $$2D(S^2,Zoll) < L(S^2, Zoll) < \frac{2}{\sqrt{\delta}}D(S^2,Zoll).$$
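The numerical values quoted here follow directly from the statement of the theorem; a two-line check (ours, purely illustrative):
\begin{verbatim}
# Threshold pinching constant and the resulting bound at the threshold.
import math

delta0 = (4 + math.sqrt(7)) / 8
print(delta0)                  # 0.8307..., the pinching threshold
print(2 / math.sqrt(delta0))   # 2.1943..., hence L <= 2.19... D near threshold
\end{verbatim}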
The paper proceeds as follows. In Section~\ref{rot} we present a proof due to Rotman \cite{video} of the fact that $L(S^2,g) \leq 4D(S^2,g)$. A crucial step in the proof uses a weighted length shortening to avoid stationary theta-graphs (critical points of the length functional on nets). This weighted flow increases the bound on the shortest closed geodesic from $3D(S^2,g)$ to $4D(S^2,g)$. In Section~\ref{deets} we show for positive curvature metrics on the 2-sphere that non-trivial stationary theta-graphs are never local minima. The weighted flow in Rotman's proof can therefore be avoided, and Theorem~\ref{main} follows. In Section~\ref{pinch} we prove a new isoperimetric inequality for pinched metrics on the 2-sphere and combine this with the main result of \cite{recent} to prove Theorem~\ref{pinched}. The new isoperimetric inequality is further refined in the Appendix by studying a Sturm-Liouville problem (SLP) related to lower bounds on the first eigenvalue of the Laplace-Beltrami operator.
\textbf{Acknowledgements:}~The authors would like to recognize Christina Sormani and her NSF grant DMS-1612049. The grant made possible a conference at Yale where the authors learned of many of the results cited in this paper and had productive conversations with Alexander Nabutovsky, Regina Rotman, and Frank Morgan. The authors would also like to thank Stéphane Sabourau for helpful suggestions, as well as Ben Andrews for his helpful advice on how to linearly approximate eigenvalues of the related SLP. We also acknowledge Wolfgang Ziller for helpful discussions about short closed geodesics in the pinched curvature setting, the fruits of which will appear in a forthcoming paper. Finally, we want to acknowledge the generous contribution of the referee who indicated how our proof of the inequality $L(S^2,g) \leq 3D(S^2,g)$ could be extended to the sharp bound $L(S^2,g) \leq 3R(S^2,g)$.
\begin{comment}
\section{Background on Zoll spheres}\label{background}
Zoll spheres are metrics on the 2-sphere all of whose geodesics are closed. It was shown by Gromoll and Grove \cite{GG} that these geodesics all have the same length. The first examples were given by Zoll in 1903 who produced surfaces of revolution with the Zoll property. By the triangle inequality one can show that the poles of these surfaces of revolution are diameter realizing, hence $L=2D$ for these examples. See \cite{besse} for a comprehensive treatment of manifolds all of whose geodesics are closed.
Another class of Zoll spheres are due to Funk and Guillemin. In 1913 Funk showed that a smooth conformal variation $g_t = \Phi_t g_0$ of the round metric through smooth Zoll metrics must have $(d \Phi_t / dt) |_{t=0} \in C^{\infty}_{odd}(S^2)$. Guillemin in 1976 \cite{gui} proved the converse, namely that for any odd function $f \in C^{\infty}_{odd}(S^2)$ there exists a smooth conformal variation $g_t = \Phi^f_t g_0$ where $(d \Phi^f_t / dt) |_{t=0} =f $ and $\Phi^f_0=1$ is through smooth Zoll metrics.
The Zoll spheres a la Guillemin can have $L(S^2, Zoll) > 2D(S^2, Zoll)$ as was shown by Balacheff, Croke, and Katz \cite{bck}. They define and show existence of a class of \emph{amply negative} odd functions with the following property:~for sufficiently small $t>0$, every direction from every point on $(S^2, g_t )= (S^2, \Phi^f_t g_0)$ decreases distance to the antipode. The diameter is therefore strictly less than $\pi$, whereas the odd condition guarantees that the length of closed geodesics remains $2\pi$. As their proof is not constructive it is unknown how much longer than twice the diameter the length of a Zoll geodesic can be; our main result demonstrates that $L(S^2, Zoll) \leq 3D(S^2, Zoll)$.
\end{comment}
\section{Proof that $L(S^2,g) \leq 4D(S^2,g)$}\label{rot}
We start with some preliminaries that will be used both in the proof of the bound $L(S^2,g) \leq 4D(S^2,g)$ and throughout the remainder of the paper.
\begin{definition}
A \emph{geodesic net} is a finite graph immersed in a Riemannian manifold such that each edge is a geodesic segment. A geodesic net is said to be \emph{stationary} if at each vertex the sum of the unit vectors tangent to the incident edges equals zero.
\end{definition}
As such, stationary geodesic nets are critical points of the length functional on the space of nets. Closed geodesics are the first examples of stationary geodesic nets. A figure eight curve is a stationary geodesic net if each loop is geodesic and if the stationarity condition is satisfied at the vertex. In dimension two the stationarity condition implies that a geodesic net based on the figure eight curve will be a self-intersecting closed geodesic.
An example of a stationary geodesic net that is not a closed geodesic is the stationary theta-graph. A theta-graph is a net consisting of exactly two vertices joined by exactly three edges. The stationarity condition ensures that these edges pairwise meet at angle $\frac{2\pi}{3}$ at each vertex. Hass and Morgan \cite{morgan} gave one of the few known existence results for geodesic nets, demonstrating that convex metrics on the 2-sphere nearby the round metric admit stationary theta-graphs.
Nabutovsky and Rotman \cite{NR2002} and independently Sabourau \cite{Sab} gave the original proofs that $L(S^2,g) \leq 4D(S^2,g)$. In working to improve this bound Rotman obtained alternate unpublished proofs, one of which we present here \cite{NR2005, rot2, video, rot3}. This proof uses a pseudo-filling technique, analogous to the technique introduced by Gromov \cite{Gro83} in proving bounds for essential manifolds on the length of the shortest closed geodesic in terms of volume.
\begin{theorem}[\cite{NR2002, Sab}]\label{4d} Riemannian 2-spheres have $L(S^2,g) \leq 4D(S^2,g)$.
\end{theorem}
\begin{proof}[Proof \cite{video}.]
Let $M=(S^2,g)$ be a Riemannian 2-sphere and $f \colon (S^2, std) \to M$ a diffeomorphism. We attempt to extend $f$ to a map $\tilde{f} \colon (D^3, std) \to M$. As $M$ is a 2-sphere such a map should not exist, and as an obstruction to this extension we obtain a periodic geodesic on $M$ with length $\leq 4D(M)$.
First triangulate $(S^2, std)$ such that the diameter of the triangulation on $M$ induced by $f$ is less than $\delta$. Next triangulate $(D^3, std)$ as a cone over the triangulated $(S^2, std)$, i.e.~add a single vertex $p \in D^3$ at the center of the ball and the corresponding 1, 2, and 3-simplexes. We attempt to extend the map $f$ inductively over this skeleton.
{\bf 0-skeleton}:~We need only choose a point $\tilde{p} \in M$ with $\tilde{f}(p) = \tilde{p}$.
{\bf 1-skeleton}:~Let $v_i $ be the vertices of the triangulation of $S^2$ and $f(v_i)=\tilde{v}_i$ the corresponding vertices of the induced triangulation of $M$. We send the 1-simplex between $p$ and $v_i$ on $D^3$ to a minimizing geodesic between $\tilde{p}$ and $\tilde{v}_i$ on $M$ with length less than the diameter of $M$.
{\bf 2-skeleton}:~We attempt to send the 2-simplex on $D^3$ associated to the triple $(p, v_i, v_j)$ to a 2-simplex on $M$ associated to the triple $(\tilde{p}, \tilde{v}_i, \tilde{v}_j)$. The triple on $M$ is already connected by 1-simplexes that form a piecewise smooth closed curve with length less than $2D(M)+\delta$. We use the Birkhoff curve shortening process to deform this closed curve without increasing its length, either to a closed geodesic with length less than $2D(M)+\delta$, or to a point in which case we have swept out the desired 2-simplex.
{\bf 3-skeleton}:~We attempt to send the 3-simplex on $D^3$ associated to the tuple $(p, v_i, v_j, v_k)$ to a 3-simplex on $M$ associated to the tuple $(\tilde{p}, \tilde{v}_i, \tilde{v}_j , \tilde{v}_k)$. By the previous steps we know where the boundary of this 3-simplex is sent; call this boundary 2-sphere $S^2_0 \subset M$. If we are able to contract $S^2_0$ to a point, i.e.~construct a homotopy $S^2_t$ with $S^2_1 = \{x\}$, then we will have succeeded in sending 3-complexes to 3-complexes, thus extending the map $f$ to $\tilde{f}$. Such an extension is not possible, and as an obstruction we obtain a short periodic geodesic on $M$.
We first contract the small 2-simplex associated to the triple $(\tilde{v}_i, \tilde{v}_j , \tilde{v}_k)$ to a point which we call $\tilde{v} \in M$ (c.f.~\cite{rot3}, Remark ending Section 1). We then have a theta-graph between the pair of points $(\tilde{p}, \tilde{v})$ consisting of three 1-simplexes, which we call $e_1, e_2, e_3$. As before, the Birkhoff curve shortening process on each pair of 1-simplexes $\{ e_i , e_j \}$ yields the boundary 2-sphere $S^2_0$.
In order to construct the homotopy $S^2_t$ we first use length shortening flow for nets to deform the theta-graph to a point, c.f.~\cite[Section 3]{NR2005}. At each time in this deformation we apply the Birkhoff curve shortening process to each pair of edges, sweeping out the desired $S^2$. The continuity of the Birkhoff curve shortening process (with respect to the initial pair $\{ e_i , e_j \}$ of edges) in the absence of short closed geodesics is what allows us to extend the homotopy which contracts the theta-graph to the desired homotopy $S^2_t$.
We therefore need only study the situation in which the theta-graph gets stuck on a stationary geodesic net before contracting to a point during the length shortening process. There are three cases to consider:
Case 1: The theta-graph degenerates to a periodic geodesic; this geodesic will have length less than $3D(M)$.
Case 2: One of the edges disappears during the length shortening yielding a stationary figure eight with length less than $3D(M)$. The stationarity condition in dimension two implies that this is a (self-intersecting) periodic geodesic.
Case 3: The theta-graph gets stuck on a stationary theta-graph. In this situation we apply a weighted length shortening process. Let $(w_1, w_2, w_3)$ be the triple of unit direction vectors at a vertex of a theta-graph. We consider a weighted length shortening flow where we double the weight of the third vector. The stationarity condition is then $| w_1 + w_2 + 2w_3 | = 0$ which implies that stationary theta-graphs are not critical points of the weighted flow. Critical points occur when $w_1$ and $w_2$ collapse to a single edge or one of these edges disappears, which means we are in one of the two previous cases. Because we doubled the weight of one of the edges we now produce a (potentially self-intersecting) periodic geodesic with length bounded above by $4D(M)$.
\end{proof}
\section{Positive metrics on the 2-sphere} \label{deets}
In this section we indicate how the proof of Theorem~\ref{4d} adapts in the positive curvature setting to yield our Theorem~\ref{main}. Under the positive curvature assumption we show that stationary theta-graphs are never local minima of the length functional on nets, allowing us to avoid weighted length shortening. Once we prove the theorem in the positive setting, we show how it extends to the non-negative setting by considering conformally close positive metrics.
We begin by recalling the first and second variations of length, which can be found for instance in \cite[Section 5.1]{Jost}, see also \cite{morgan2} and \cite{morgan3} for formulas that apply more directly in the setting of stationary nets.
\begin{proposition}[First variation of length, Lemma 5.1.1 \cite{Jost}]\label{1stvar} Given a smooth curve $\gamma:[a,b]\rightarrow M$ parametrized by arc-length and a vector field $V$ on $\gamma$, let $H$ be a variation of $\gamma$ in the direction of $V$ so that $H:[a,b]_t\times[-\epsilon,\epsilon]_s \rightarrow M$ is smooth, $H(\cdot,0)=\gamma,\, \frac{dH}{ds}|_{s=0}(\cdot,0)=V$. If we denote by $L(s):= \ell(H(\cdot,s))$ then
\[L'(0) = \langle V, \gamma'\rangle|_a^b - \int_a^b \langle V(t),\nabla_{\gamma'}\gamma'(t) \rangle dt
\]
\end{proposition}
\begin{proposition}[Second variation of length, Theorem 5.1.1 \cite{Jost}]\label{2ndvar} Given a smooth geodesic $\gamma:[a,b]\rightarrow M$ parametrized by arc-length and a vector field $V$ on $\gamma$, let $H$ be a variation of $\gamma$ in the direction of $V$ so that $H:[a,b]_t\times[-\epsilon,\epsilon]_s \rightarrow M$ is smooth, $H(\cdot,0)=\gamma,\, \frac{d}{ds}|_{s=0}H(\cdot,0)=V$. If we denote by $V^\perp$ the perpendicular projection of $V$ with respect to $\gamma'$ and by $L(s):= \ell(H(\cdot,s))$ then
\[L''(0) = \biggl\langle\frac{D}{ds}\frac{dH}{ds}, \gamma'\biggr\rangle\bigg|_{(a,0)}^{(b,0)} + \int_a^b \Vert \nabla_{\gamma'}V^\perp(t)\Vert^2 - \langle R(V^\perp(t),\gamma'(t))\gamma'(t), V^\perp(t) \rangle dt
\]
\end{proposition}
\begin{lemma}\label{lem} Any stationary theta-graph on a positively curved 2-sphere admits directions of decrease (within the space of nets) for the length shortening flow.
\end{lemma}
\begin{proof}
We simply demonstrate a variation with negative second variation of length. Give each edge a unit speed parametrization $\gamma_i \colon [a_i,b_i] \to (S^2,g)$ and define vector fields $V_i$ so that $V_1^\perp$ and $V_2^\perp$ are of constant size $1$ (hence parallel), $V_3^\perp\equiv0$, and the $V_i$ all agree at the vertices of the theta-graph. For example: \\
$V_1(t) = \frac{1}{\sqrt{3}}\cos{(\frac{t-a_1}{b_1-a_1}\pi)}\dot{\gamma}_1+1\dot{\gamma}_1^{\perp}$ \\
$V_2(t) = \frac{1}{\sqrt{3}}\cos{(\frac{t-a_2}{b_2-a_2}\pi)}\dot{\gamma}_2-1\dot{\gamma}_2^{\perp}$ \\
$V_3(t) = \frac{-2}{\sqrt{3}}\cos{(\frac{t-a_3}{b_3-a_3}\pi)}\dot{\gamma}_3+0\dot{\gamma}_3^{\perp}$ \\
For the given variational fields $V_i$, we choose variations $H_i(\cdot,s)$ which agree at the vertices; for example, one could set $H_i(t,s)=\exp_{\gamma_i(t)}sV_i(t)$. The fact that the $H_i(\cdot,s)$ agree at the vertices ensures that we are deforming through theta-graphs. Moreover, as the variations keep each edge embedded, and maintain angles close to the initial $\frac{2\pi}{3}$ angles, we are guaranteed the theta-graphs remain embedded during the deformation.
If we denote by $L(s)$ the sum of lengths of $H_1(\cdot,s), H_2(\cdot,s), H_3(\cdot,s)$, then by the first variation formula (Proposition \ref{1stvar}) we have that
\begin{equation}\label{eq:length} L'(0)= \sum_{i=1}^3\langle V_i, \gamma_i'\rangle|_{a_i}^{b_i},
\end{equation}\noindent
since $\lbrace\gamma_i\rbrace_{i=1}^3$ are geodesics. Because the vector fields $\lbrace V_i\rbrace_{i=1}^3$ agree at the endpoints and the geodesics meet at angles $\frac{2\pi}{3}$, the summands in Equation~\ref{eq:length} cancel out at each vertex. Hence $L'(0)=0$.
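This cancellation is elementary to see in coordinates: the three unit tangents at a vertex of a stationary theta-graph sum to zero, so pairing them against the common value of the variation field gives zero. A small numeric illustration (ours, not part of the proof):
\begin{verbatim}
# Three unit vectors at pairwise angle 2*pi/3 sum to zero, so the boundary
# terms <V, gamma_i'> cancel at a vertex where the V_i share a common value.
import numpy as np

angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
tangents = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(tangents.sum(axis=0))      # ~ [0, 0]
V = np.array([0.3, -0.7])        # an arbitrary common value of V at the vertex
print((tangents @ V).sum())      # ~ 0: the three summands cancel
\end{verbatim}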
For the second variation, note that because $\lbrace V^\perp_i\rbrace_{i=1}^3$ are parallel and $M$ has positive curvature we have that for $1\leq i\leq3$
\[\int_{a_i}^{b_i} \Vert \nabla_{\gamma_i'}V_i^\perp(t)\Vert^2 - \langle R(V_i^\perp(t),\gamma_i'(t))\gamma_i'(t), V^\perp_i(t) \rangle dt < 0
\]
Applying the second variation formula (Proposition \ref{2ndvar}) we have then
\begin{equation}\label{eq:2ndvar}
L''(0)< \sum_{i=1}^3\biggl\langle\frac{D}{ds}\frac{dH_i}{ds}, \gamma_i'\biggr\rangle\bigg|_{(a_i,0)}^{(b_i,0)}
\end{equation}
Given our choice of $H_i(t,s)=\exp_{\gamma_i(t)}sV_i(t)$ we see that $\frac{D}{ds}\frac{dH_i}{ds}=0$ for each $i \in \{1,2,3\}$ and therefore that the right hand side of Equation~\ref{eq:2ndvar} vanishes. Thus under the described variation we have $L''(0)<0$, which together with $L'(0)=0$, implies that length decreases for small values of $s$ in $\lbrace H_i(\cdot,s)\rbrace_{i=1}^3$.
\end{proof}
We show by example that Lemma~\ref{lem} is sharp in the sense that there exist stationary theta-graphs on non-negatively curved 2-spheres which do not admit directions of decrease. Consider the metric space formed by gluing two equilateral triangles along their common boundaries, so that a geodesic on the top face billiards around an edge to the bottom face. This \emph{doubled triangle} is a 2-sphere with flat metric and three conical singularities; it is sometimes called the Calabi-Croke sphere \cite{Sab2}.
The doubled triangle admits a degenerate stationary theta-graph. Connect the center of the top face with the center of the bottom face via three geodesic segments which pass perpendicularly through each edge of the triangular boundary (see Figure~\ref{graph}). Such an arrangement ensures that the edges of the graph meet at the vertices at angle $2\pi /3$. By moving the vertices of the graph towards one of the vertices of the triangle, and keeping the edges of the graph perpendicular to the edges of the triangle, one produces a degenerate family of stationary theta-graphs, all having the same total length, and failing to admit directions of decrease.
\begin{figure}[h]
\includegraphics[scale=.25]{theta2}
\centering
\caption{A degenerate theta-graph on the doubled triangle, where edges on the top face are solid and edges on the bottom face are dashed.}
\label{graph}
\end{figure}
Because these theta-graphs on the doubled triangle avoid the vertices of the triangle, this degenerate family also exists on smooth non-negative 2-sphere metrics close to the doubled triangle metric. This example illustrates that the proof of Theorem~\ref{main} in the non-negative setting cannot rely directly on Lemma~\ref{lem}, and instead will follow by first proving the theorem in the positive setting, and then extending by considering conformally close positive metrics.
\begin{proof}[Proof of Theorem~\ref{main}]
Assume first that $M = (S^2, g)$ is a metric of positive curvature. We follow the proof of Theorem~\ref{4d}, trying to extend the map $f \colon (S^2, std) \to M$ to a map $\tilde{f} \colon (D^3, std) \to M$. When extending the 0-skeleton we now choose a point $\tilde{p} \in M$ that realizes the radius, i.e.~such that $B(\tilde{p},R)$ covers $M$. This choice ensures that $\tilde{p}$ is at distance at most $R(M)$ from the points in the triangulation; it is the same choice that Rotman \cite{rot5} makes when improving her bound from $4D(S^2, g)$ to $4R(S^2, g)$.
Next we follow the proof of Theorem~\ref{4d} until Case 3 where the theta-graph gets stuck on a stationary theta-graph, a critical point of the length shortening flow for nets. By Lemma~\ref{lem} this critical point admits a direction of decrease, allowing us to continue the contraction past the stationary theta-graph, and eliminating the necessity of the weighted length shortening.
While we are required to make a choice about the direction we deform from a stationary theta-graph, this choice can be made independently in each $3$-cell; we do not expect this extended (restarted) flow to depend continuously on the initial theta-graph. Indeed, we only need the fact \cite[Lemma 4]{NR2005} that the space of theta-graphs with length less than $3R(M)+\delta$ is connected in order to contract the initial theta-graph to a point. The continuity of the Birkhoff curve shortening process (with respect to an initial pair of edges) in the absence of short closed geodesics allows us to extend this contraction to the entire 3-cell.
Upon resuming the flow, it is possible that we encounter another stationary theta-graph. Note that this stationary theta-graph will be distinct from the first, as the length shortening flow is strictly decreasing (preventing us from visiting the same sequence of theta-graphs repeatedly). It is possible that, running the flow in this extended (restarted) manner, we encounter a sequence of stationary theta-graphs accumulating to some limit object. In this case an application of transfinite induction ensures that this limit object is again a stationary theta-graph from which we can continue the flow.
As it is impossible to contract all theta-graphs to a point (this would extend the map $f \colon (S^2, std) \to M$ to a map $\tilde{f} \colon (D^3, std) \to M$) we must end in Case 1 or 2. By avoiding the weighted length shortening we produce a (potentially self-intersecting) closed geodesic with length bounded above by $3R(M) \leq 3D(M)$.
The theorem is thus proved in the positive case. In the case where the sphere is non-negatively curved, we proceed as follows. Choose a smooth function $\varphi \colon S^2 \to \mathbb{R}$ such that $\Delta\varphi < 0$ on the set $K_{0} = \lbrace x\in S^2 \,|\, K_x=0 \rbrace$. Such a $\varphi$ exists because $K_0$ is a proper subset of the sphere (via Gauss-Bonnet). We consider the metrics $g_t = e^{2t\varphi}g$ which have strictly positive curvature $K_t = e^{-2t\varphi}(K-t\Delta\varphi)$ for $t>0$ small. Applying the result for positively curved metrics to $(S^2,g_t)$, and letting $t\rightarrow 0^+$, we use that the convergence $(S^2,g_t)\rightarrow(S^2,g)$ is smooth and that $L(S^2,\cdot)$ is lower semicontinuous (this last property follows from the fact that in smooth convergence closed geodesics converge to closed geodesics) to conclude that the inequality $L(S^2,g) \leq 3R(S^2,g)$ holds in the non-negative setting.
\end{proof}
Finally, note that in \cite{calabi} it is proved that the shortest closed geodesic on a non-negatively curved $(S^2,g)$ is simple. Therefore, if the obstruction in the above proof yields a self-intersecting periodic geodesic (Case 2), then there also exists a simple closed geodesic with length bounded above by three times the radius. In short, we can use the main result of \cite{calabi} to improve our Theorem~\ref{main} to the following: on non-negatively curved $(S^2,g)$ the shortest closed geodesic is simple and has length bounded above by three times the radius.
\section{Pinched metrics on the 2-sphere}\label{pinch}
The main goal of this section is to prove the following new isoperimetric inequality for pinched metrics on the 2-sphere. A positive metric $(S^2,g)$ is said to be $\delta$-pinched if $K_{\min}/K_{\max}\geq \delta$.
\begin{theorem}\label{iso}
Let $(S^2,g)$ be a $\delta$-pinched metric. Then $$\pi A \leq \frac{4D^2}{\delta}$$ with equality if and only if the sphere is round.
\end{theorem}
We combine this inequality with the main result from \cite{recent} in order to prove Theorem~\ref{pinched}. The main result of \cite{recent} is achieved via a combination of techniques from Riemannian and symplectic geometry.
\begin{theorem}[\cite{recent}]\label{thm:L2Aineq}
Let $(S^2,g)$ be a $\delta$-pinched metric with $\delta > \frac{4+\sqrt{7}}{8} \approx 0.83$. Then $L^2(S^2, g) \leq \pi A(S^2, g)$ where $A(S^2,g)$ is the area.
\end{theorem}
By combining Theorem~\ref{iso} with Theorem~\ref{thm:L2Aineq}, we have for $(S^2,g)$ a $\delta$-pinched metric with $\delta > \frac{4+\sqrt{7}}{8} \approx 0.83$, that $$L^2(S^2, g) \leq \pi A(S^2, g) \leq \frac{4D^2(S^2,g)}{\delta} $$ and therefore that $$L(S^2,g) \leq \frac{2}{\sqrt{\delta}}D(S^2,g).$$
Equality here implies equality in the isoperimetric inequality, and therefore that the sphere is round by Theorem~\ref{iso}. We have thus proved Theorem~\ref{pinched} and all that remains in this section is the proof of Theorem~\ref{iso}.
We first note that Theorem~\ref{iso} is a curvature pinched version of the following result due to Calabi and Cao. Moreover, the proof techniques we use are adaptations of theirs to the pinched curvature setting.
\begin{theorem}[\cite{calabi}, Theorem C]
Let $(S^2,g)$ have non-negative curvature. Then $A \leq \frac{8}{\pi} D^2$.
\end{theorem}
Calabi and Cao proved the above inequality by combining an upper bound on the first eigenvalue $\lambda_1(S^2,g) \leq \frac{8\pi}{A(S^2,g)}$ due to Hersch \cite{hersch, YangYau} with a lower bound $ \frac{\pi^2}{D^2(M^n,g)} \leq \lambda_1 (M^n,g)$ due to Zhong and Yang \cite{zhongyang} which holds in the non-negative Ricci setting. This lower bound on $\lambda_1(M^n,g)$ has been improved many times in the setting of positive lower bound on Ricci (see \cite{he} for a survey or \cite{AndrewsClutterbuck} for the optimal bound). We use the version from \cite{AndrewsClutterbuck}, which we state for $\lambda_1(S^2,g)$.
\begin{theorem}[\cite{AndrewsClutterbuck}]\label{thm:evlowerbound}
The quantity
\[\lambda_1(d,k) = \inf\lbrace \lambda_1(S^2,g)\,|\, Diam(S^2,g)\leq d,\,K(S^2,g)\geq k\rbrace
\]
is equal to the first eigenvalue $\mu$ of the following Sturm-Liouville problem (SLP) with Neumann boundary conditions
\[ (y'\cos(\sqrt{k}x))' + \mu\cos(\sqrt{k}x)y = 0,\quad y'(\pm d/2)=0
\]
\end{theorem}
We can apply this directly to the case $d=\pi,\, k=1$ to obtain the case ${\rm dim}=2$ of the classical result of Lichnerowicz \cite{Lichnerowicz}.
\begin{lemma}\label{two} For $(S^2,g)$ with $Diam(S^2,g)\leq \pi$ and $\,K(S^2,g)\geq 1$ we have $\lambda_1 \geq 2.$
\end{lemma}
\begin{proof}
The first eigenfunction of $(y'\cos(x))' + \mu\cos(x)y = 0,~ y'(\pm \pi/2)=0$ is $y(x)=\sin(x)$ which has eigenvalue $2$.
\end{proof}
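This one-line computation can be confirmed symbolically; the following check (ours, assuming SymPy is available) verifies both the equation and the Neumann conditions:
\begin{verbatim}
# Verify that y = sin(x) solves (y' cos(x))' + 2 cos(x) y = 0 with
# Neumann conditions y'(+-pi/2) = 0, i.e. mu = 2 for d = pi, k = 1.
import sympy as sp

x = sp.symbols('x')
y = sp.sin(x)
lhs = sp.diff(sp.diff(y, x) * sp.cos(x), x) + 2 * sp.cos(x) * y
print(sp.simplify(lhs))                                  # 0
print(sp.diff(y, x).subs(x, sp.pi / 2),
      sp.diff(y, x).subs(x, -sp.pi / 2))                 # 0 0
\end{verbatim}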
We can now provide a proof of Theorem~\ref{iso}. The main insight is that when considering diameter bounds, we can use Klingenberg's injectivity radius estimate to translate between the positive and the pinched curvature settings.
\begin{proof}[Proof of Theorem~\ref{iso}]
For simplicity we will drop $(S^2,g)$ from our notation. Since the inequality is scale invariant, let us rescale the metric so that $K_{min}=1$; by Myers' theorem $D \leq \pi$. Lemma~\ref{two} then gives the bound $ \lambda_1\geq2$. Following the ideas of \cite{calabi} we combine this lower bound on $\lambda_1$ with the upper bound due to Hersch $\lambda_1 \leq \frac{8\pi}{A}$ (see \cite{hersch}, \cite{YangYau}) to yield $A \leq 4\pi$.
Klingenberg's injectivity radius estimate for positive metrics on 2-spheres says that $$D \geq \frac{\pi}{\sqrt{K_{max}}} = \pi \sqrt{\delta}.$$ We have therefore related both area and diameter to $\pi$, and conclude that $$\pi A \leq 4\pi^2 \leq \frac{4D^2}{\delta}.$$
In the case of equality we have $4\pi^2 = \frac{4D^2}{\delta}$ and therefore $D=\frac{\pi}{\sqrt{K_{max}}}$. This is the limiting case of Klingenberg's injectivity radius estimate, and therefore diameter equals injectivity radius. This is the so called Blaschke condition, and for Blaschke metrics on the 2-sphere Green \cite{green} gives an elementary proof using classical surface geometry that the sphere must be round.
\end{proof}
We finish by observing that the isoperimetric inequality in Theorem~\ref{iso} is sharp in the sense that the estimate $A/D^2$ approaches $4/\pi$ as we move towards the round metric, i.e.~as $\delta$ approaches $1$. We do not recover the Calabi-Cao inequality as $\delta$ approaches 0, i.e.~for $(S^2,g)$ with non-negative curvature. Note that the Calabi-Cao inequality is not sharp for convex metrics on $(S^2,g)$; Alexandrov has conjectured that the sharp inequality is realized by the doubled disk metric where $A/D^2 = \pi/2$.
\section{Appendix}
Using more advanced techniques from Sturm-Liouville theory we can improve the isoperimetric inequality in Theorem~\ref{iso} when the sphere is not round. This improved version of the inequality is not needed for the proof of Theorem~\ref{pinched} that was presented in Section~\ref{pinch}.
\begin{theorem}\label{ap_thm}
Let $(S^2,g)$ be a $\delta$-pinched metric and denote by $\eta=D\sqrt{K_{min}}$. Then
\begin{equation*}\label{eq:CCpinched}
\pi A \leq \frac{4D^2}{\delta(2-\sin(\eta/2))}.
\end{equation*}
\end{theorem}
First note that $2- \sin(\eta/2) \geq 1$ so that this isoperimetric inequality is indeed an improvement on that of Theorem~\ref{iso}. Moreover, because $2- \sin(\eta/2) > 1$ when $\eta \neq \pi$ we note that Theorem~\ref{ap_thm} together with Cheng's \cite{cheng} rigidity result when $D=\pi/\sqrt{K_{min}}$ provide an alternate proof of the equality case of Theorem~\ref{pinched} that does not depend on the Blaschke ideas from Section~\ref{pinch}.
This new inequality follows as before by combining the inequality due to Hersch with an improved lower bound on the first eigenvalue. We therefore set out to calculate the linear approximation of $\lambda_1(d,1)$ at $d=\pi$. The main idea is that because the coefficients of the SLP are constant while the domain varies, we can restrict and extend eigenfunctions to compare the values $\lambda_1(d,1)$ as $d\rightarrow\pi$. Let us then denote by $\mu(d)$ the SLP eigenvalue $\lambda_1(d,1)$. We then have
\begin{lemma}\label{lem:1sttaylorlambda} For $\pi\geq d >0$
\[2 + 2(1-\sin(d/2)) \leq \mu(d).
\]
\end{lemma}
\begin{proof}
Apply the change of variable $\tanh(u)=\sin(x)$ to the SLP (helpfully suggested to us by Ben Andrews). The problem becomes
\begin{equation}\label{eq:modSLP}
y'' + \mu\,\sech^2(u)\,y = 0,\quad y'(\pm T)=0, \quad \tanh(T)=\sin(d/2)
\end{equation}
In particular, for $d=\pi$, $T=+\infty$, the eigenvalue $\mu=2$ is realized by $y_{+\infty}=\tanh(u)$.
Recall that the first eigenvalue for a SLP can be written as a Rayleigh quotient
\begin{equation*}
\mu(d) = \inf_{y\neq 0, y'(\pm T)=0} R[y,T]
\end{equation*}
where
\[R[y,T]= \frac{\int_{-T}^T -y\,y''\,du}{\int_{-T}^T y^2\sech^2(u)\,du}
\]
Denote by $\hat{y}_T$ the eigenfunction of the SLP (\ref{eq:modSLP}) normalized so $\int_{-T}^T \hat{y}_T^2\sech^2(u)du=1$. Extend $\hat{y}_T$ as a constant to the entire real line (keeping the same notation):
\[ \hat{y}_T(u) =
\begin{cases}
\hat{y}_T(u) & \text{if $|u|\leq T$} \\
\hat{y}_T(T) & \text{if $u>T$}\\
\hat{y}_T(-T) & \text{if $u<-T$}
\end{cases}
\]
In order to use $\mu(\pi)\leq R[\hat{y}_T,+\infty]$ we will need to bound $\hat{y}_T(\pm T)$. Observe first that given the symmetries of the SLP (\ref{eq:modSLP}) the function $\hat{y}_T$ is an odd function. Since $\mu(d)$ is the first eigenvalue, we know that $\hat{y}'_T$ does not vanish in the open interval $(-T,T)$. Hence $|\hat{y}_T(\pm T)| = \max_{-T\leq u \leq T} |\hat{y}_T(u)|$, and therefore
\begin{equation*}
1=\int_{-T}^T \hat{y}^2_T\sech^2(u)du \leq |\hat{y}_T(\pm T)|^2 \int_{-T}^T \sech^2(u)du \leq 2|\hat{y}_T(\pm T)|^2
\end{equation*}
Computing $R[\hat{y}_T,+\infty]$ now gives
\begin{equation*}
2=\mu(\pi)\leq R[\hat{y}_T,+\infty] = \frac{\mu(d)}{1 + 2|\hat{y}_T(T)|^2\int_T^{\infty} \sech^2(u)du}
\end{equation*}
Hence we obtain
\begin{equation*}
2 + 2(1-\tanh(T)) \leq \mu(d)
\end{equation*}
Substituting $\tanh(T)=\sin(d/2)$ we finally get
\begin{equation*}
2 + 2(1-\sin(d/2)) \leq \mu(d).
\end{equation*}\end{proof}
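The lemma can also be probed numerically. The following sketch (ours; the discretization parameters are illustrative choices and assume NumPy/SciPy) solves the transformed problem (\ref{eq:modSLP}) by finite differences with Neumann conditions and compares the first nonzero eigenvalue with the claimed lower bound:
\begin{verbatim}
# Finite-difference check of 2 + 2(1 - sin(d/2)) <= mu(d) for the problem
# y'' + mu sech^2(u) y = 0, y'(+-T) = 0, tanh(T) = sin(d/2).
import numpy as np
from scipy.linalg import eigh

def mu_of_d(d, n=800):
    T = np.arctanh(np.sin(d / 2))
    u, h = np.linspace(-T, T, n, retstep=True)
    main = np.full(n, 2.0); main[0] = main[-1] = 1.0  # symmetric Neumann -y''
    A = (np.diag(main) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    w = 1.0 / np.cosh(u)**2
    w[0] *= 0.5; w[-1] *= 0.5                         # trapezoid end weights
    vals = eigh(A, np.diag(w), eigvals_only=True)
    return vals[1]               # vals[0] ~ 0 (constant eigenfunction)

for d in (2.5, 2.8, 3.0, 3.1):
    print(d, mu_of_d(d), 2 + 2 * (1 - np.sin(d / 2)))
\end{verbatim}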
We are now ready to prove Theorem~\ref{ap_thm}.
\begin{proof}
The proof proceeds exactly as the proof of Theorem~\ref{iso} and we begin by rescaling the metric so that $K_{min}=1$; by Myers' theorem $\eta = D \leq \pi$. Lemma~\ref{lem:1sttaylorlambda} then gives the bound $ 2 + 2(1-\sin(d/2)) \leq \mu(d) = \lambda_1(d,1)$. Following the ideas of \cite{calabi} we combine this lower bound on $\lambda_1$ with the upper bound due to Hersch $\lambda_1 \leq \frac{8\pi}{A}$ (see \cite{hersch}, \cite{YangYau}) to yield
\begin{equation*}
2 + 2(1-\sin(\eta/2)) \leq \frac{8\pi}{A}
\end{equation*}
which we can manipulate to
\begin{equation*}
\pi A \leq \frac{8\pi^2}{2 + 2(1-\sin(\eta/2))}.
\end{equation*}
Klingenberg's injectivity radius estimate for positive metrics on 2-spheres says that $$D \geq \frac{\pi}{\sqrt{K_{max}}} = \pi \sqrt{\delta}.$$
We have therefore related both area and diameter to $\pi$, and conclude that
\begin{equation*}\label{eq:sdependance}
\pi A \leq \frac{8\pi^2}{2 + 2(1-\sin(\eta/2))} = \frac{4\pi^2}{K_{min}(2-\sin(\eta/2))} \leq \frac{4D^2}{\delta(2-\sin(\eta/2))}.
\end{equation*} \end{proof}
Finally, while not necessary for the proof of Theorem \ref{ap_thm}, the following Lemma of independent interest expands on the approach of Lemma \ref{lem:1sttaylorlambda} and gives an idea of the sharpness of its inequality, both in general and at the limiting case of $d=\pi$.
\begin{lemma}
For $\pi\geq d \geq 2\arcsin(\tanh(3/2)) \approx2.263$
\[\mu(d) \leq 2 + \cos^2(d/2) A(d),
\]
where $$A(d) = \frac{6[\sin(d/2) - \arctanh(\sin(d/2))\cos^2(d/2)]}{\sin^3(d/2) - 3\cos^2(d/2)[\sin(d/2) - \arctanh(\sin(d/2))\cos^2(d/2)]}.$$
Moreover, at the limiting case $d=\pi$ we have $\mu'(\pi)=0$ and $\frac14\leq\mu''(\pi)\leq\frac32$.
\end{lemma}
\begin{proof}
Define the test function $y_T = \tanh(u) - u\sech^2(T)$, which by design satisfies the Neumann boundary conditions of (\ref{eq:modSLP}). By direct calculation
\begin{eqnarray*}
\begin{split}
\int_{-T}^T -y_T\,y_T''\,du &= \int_{-T}^T 2\tanh^2(u)\sech^2(u)du - \sech^2(T)\int_{-T}^T 2u\tanh(u)\sech^2(u)du\\
&= \frac23[2\tanh^3(T)] - \sech^2(T)[2\tanh(T) - 2T\sech^2(T)]
\end{split}
\end{eqnarray*}
\begin{eqnarray*}
\begin{split}
\int_{-T}^T y_T^2\sech^2(u)du &= \int_{-T}^T \left(\tanh^2(u) - 2u\tanh(u)\sech^2(T) + u^2\sech^4(T)\right)\sech^2(u)du\\
&= \int_{-T}^T \tanh^2(u)\sech^2(u)du -\sech^2(T)\int_{-T}^T 2u\tanh(u)\sech^2(u)du \\& + \sech^4(T)\int_{-T}^T u^2\sech^2(u)du\\
&= \frac13[2\tanh^3(T)] - \sech^2(T)[2\tanh(T) - 2T\sech^2(T)] \\&+ \sech^4(T) \int_{-T}^T u^2\sech^2(u)du.
\end{split}
\end{eqnarray*}
Hence our test function $y_T$ gives us the inequality
\begin{eqnarray*}
\begin{split}
\mu(d) &\leq \frac{\int_{-T}^T 2\tanh^2(u)\sech^2(u)du - \sech^2(T)\int_{-T}^T 2u\tanh(u)\sech^2(u)du}{\int_{-T}^T \tanh^2(u)\sech^2(u)du -\sech^2(T)\int_{-T}^T 2u\tanh(u)\sech^2(u)du + \sech^4(T)\int_{-T}^T u^2\sech^2(u)du}\\
&\leq \frac{\frac23[2\tanh^3(T)] - \sech^2(T)[2\tanh(T) - 2T\sech^2(T)]}{\frac13[2\tanh^3(T)] - \sech^2(T)[2\tanh(T) - 2T\sech^2(T)]}
\end{split}
\end{eqnarray*}
where the last inequality is valid if $T\geq\frac32$, so for $d\geq2\arcsin(\tanh(3/2))\approx 2.263$.
Using that $\sech^2(T)=1-\tanh^2(T) = 1-\sin^2(d/2)=\cos^2(d/2)$ we have the desired inequality:
\begin{eqnarray*}
\begin{split}
\mu(d) &\leq \frac{2\sin^3(d/2) - 3\cos^2(d/2)[\sin(d/2) - \arctanh(\sin(d/2))\cos^2(d/2)]}{\sin^3(d/2) - 3\cos^2(d/2)[\sin(d/2) - \arctanh(\sin(d/2))\cos^2(d/2)]}\\
&= 2 + \cos^2(d/2) \frac{6[\sin(d/2) - \arctanh(\sin(d/2))\cos^2(d/2)]}{\sin^3(d/2) - 3\cos^2(d/2)[\sin(d/2) - \arctanh(\sin(d/2))\cos^2(d/2)]}
\end{split}
\end{eqnarray*}
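The two closed-form integrals used in this computation are easy to spot-check; the following SymPy snippet (ours) compares them numerically at $T=3/2$:
\begin{verbatim}
# Numerical spot check at T = 3/2 of the closed forms
#   int_{-T}^{T} 2 tanh^2(u) sech^2(u) du = (2/3)[2 tanh^3(T)]
#   int_{-T}^{T} 2 u tanh(u) sech^2(u) du = 2 tanh(T) - 2 T sech^2(T)
import sympy as sp
from sympy import tanh, sech

u = sp.symbols('u')
T = sp.Rational(3, 2)
I1 = sp.integrate(2 * tanh(u)**2 * sech(u)**2, (u, -T, T))
I2 = sp.integrate(2 * u * tanh(u) * sech(u)**2, (u, -T, T))
print(sp.N(I1 - sp.Rational(4, 3) * tanh(T)**3))         # ~ 0
print(sp.N(I2 - (2 * tanh(T) - 2 * T * sech(T)**2)))     # ~ 0
\end{verbatim}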
Turning our attention to the limiting case of $d=\pi$, we Taylor expand cosine at $\pi/2$ to yield
\begin{equation*}\label{eq:upper2deriv}
\mu(d) \leq 2 + \frac32(\pi-d)^2 + O((\pi-d)^3).
\end{equation*}
Similarly, we combine Lemma \ref{lem:1sttaylorlambda} and the Taylor expansion of sine at $\pi/2$ to yield
\begin{equation*}\label{eq:lower2deriv}
2 + \frac14(\pi-d)^2 + O((\pi-d)^3) \leq \mu(d).
\end{equation*}
From this pair of Taylor expansions we calculate that
\begin{equation*}
\mu'(\pi)=0,\quad\frac14 \leq \mu''(\pi) \leq \frac32 .
\end{equation*}
\end{proof}
| {
"timestamp": "2021-09-08T02:03:57",
"yymm": "2006",
"arxiv_id": "2006.04849",
"language": "en",
"url": "https://arxiv.org/abs/2006.04849",
"abstract": "We show that the shortest closed geodesic on a 2-sphere with non-negative curvature has length bounded above by three times the diameter. We prove a new isoperimetric inequality for 2-spheres with pinched curvature; this allows us to improve our bound on the length of the shortest closed geodesic in the pinched curvature setting.",
"subjects": "Differential Geometry (math.DG)",
"title": "The length of the shortest closed geodesic on positively curved 2-spheres",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877007780707,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594805454474
} |
https://arxiv.org/abs/0901.1511 | Braid presentation of spatial graphs | We define braid presentation of edge-oriented spatial graphs as a natural generalization of braid presentation of oriented links. We show that every spatial graph has a braid presentation. For an oriented link it is known that the braid index is equal to the minimal number of Seifert circles. We show that an analogy does not hold for spatial graphs. | \section{Introduction}
Throughout this paper we work in the piecewise linear category. Let $G$ be a finite edge-oriented graph. Namely, $G$ consists of finitely many vertices and finitely many edges, and each edge has a fixed orientation. An edge-oriented graph is called a digraph in graph theory. We consider a graph as a topological space in the usual way. Let ${\mathbb S}^3$ be the unit 3-sphere in the $xyzw$-space ${\mathbb R}^4$ centered at the origin of ${\mathbb R}^4$. An embedding of $G$ into ${\mathbb S}^3$ is called a {\it spatial embedding} of $G$. Then the image is also called a spatial embedding or a {\it spatial graph}.
Let $A$ (resp. $C$) be the intersection of ${\mathbb S}^3$ and the $zw$-plane (resp. $xy$-plane). Then the union $A\cup C$ is a Hopf link in ${\mathbb S}^3$. We call $A$ the {\it axis} and $C$ the {\it core}.
Let $\pi:{\mathbb S}^3-A\to C$ be a natural projection defined by
$\displaystyle{\pi(x,y,z,w)=(\frac{x}{\sqrt{x^2+y^2}},\frac{y}{\sqrt{x^2+y^2}},0,0)}$.
We give a counter-clockwise orientation to $C$ on the $xy$-plane and fix it. We say that a continuous map $\varphi:G\to C$ is {\it locally orientation preserving} if for any edge $e$ of $G$ and any point $p$ on $e$ there is a neighbourhood $U$ of $p$ in $e$ such that the restriction map of $\varphi$ to $U$ is an orientation preserving embedding. Let $f:G\to {\mathbb S}^3$ be a spatial embedding. We say that $f$ or its image $f(G)$ is a {\it braid presentation} if $f(G)$ is disjoint from $A$ and the composition map $\pi\circ f':G\to C$ is locally orientation preserving where $f':G\to {\mathbb S}^3-A$ is the map defined by $f'(p)=f(p)$ for any $p$ in $G$. Note that this generalizes the braid presentation defined for $\theta_m$-curves in \cite{S-T}.
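Concretely, $\pi$ is just normalization of the first two coordinates; a minimal sketch (ours, in Python; the paper itself contains no code):
\begin{verbatim}
# The projection pi : S^3 - A -> C, normalizing the (x, y) part of a point.
import numpy as np

def pi_projection(p):
    """Send (x, y, z, w) in S^3 - A to the core circle C in the xy-plane."""
    x, y, z, w = p
    r = np.hypot(x, y)            # nonzero exactly when p misses the axis A
    return np.array([x / r, y / r, 0.0, 0.0])

p = np.array([0.5, 0.5, 0.5, 0.5])      # a point on the unit 3-sphere
print(pi_projection(p))                  # its image on the core C
\end{verbatim}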
The following theorem shows that every edge-oriented spatial graph has a braid presentation up to ambient isotopy of ${\mathbb S}^3$. This generalizes Alexander's theorem that every oriented link can be expressed by a closed braid \cite{Alexander}, as well as the corresponding result for $\theta_m$-curves proved in \cite{S-T}.
\vskip 3mm
\begin{Theorem}\label{main-theorem1}
Let $G$ be a finite edge-oriented graph and $f:G\to {\mathbb S}^3$ a spatial embedding. Then there is a braid presentation $g:G\to {\mathbb S}^3$ that is ambient isotopic to $f$ in ${\mathbb S}^3$.
\end{Theorem}
\vskip 3mm
In \cite{Yamada} it is shown that the minimal number of Seifert circles of an oriented link is equal to the braid index of the link. We consider an analogy to spatial graphs. Let $g:G\to {\mathbb S}^3$ be a braid presentation. Let $P_p=\pi^{-1}(p)$ for $p\in C$. Let $\tilde{b}(g)=\tilde{b}(g(G))$ be the maximum of $|g(G)\cap P_p|$ where $|X|$ denotes the cardinality of the set $X$ and $p$ varies over all points in $C$. Let $f:G\to {\mathbb S}^3$ be a spatial embedding. Let $b(f)=b(f(G))$ be the minimum of $\tilde{b}(g)$ where $g$ varies over all braid presentations that are ambient isotopic to $f$.
We call $b(f)=b(f(G))$ the {\it braid index} of $f$ or $f(G)$.
Let ${\mathbb S}^2$ be the intersection of ${\mathbb S}^3$ and the $xyz$-space. Then any spatial embedding $f:G\to {\mathbb S}^3$ has a diagram on ${\mathbb S}^2$ up to ambient isotopy of ${\mathbb S}^3$. Let $D$ be a diagram of $f$ on ${\mathbb S}^2$. Let $S(D)$ be the plane graph in ${\mathbb S}^2$ obtained from $D$ by smoothing every crossing of $D$. Here smoothing respects the orientations of the edges. See Figure \ref{smoothing}.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.4}{\includegraphics*{smoothing.eps}}
\end{center}
\caption{}
\label{smoothing}
\end{figure}
Let $\mu(X)$ be the number of connected components of a space $X$. Let $s(f)=s(f(G))$ be the minimum of $\mu(S(D))$ where $D$ varies over all diagrams of $f$ up to ambient isotopy of ${\mathbb S}^3$. We call $s(f)=s(f(G))$ the {\it smoothing index} of $f$ or $f(G)$. Note that our $s(f)$ is different from $s(G)$ defined for $\theta_m$-curves in \cite{S-T}.
We will show in Proposition \ref{proposition1} that, provided $G$ contains a cycle as an unoriented graph, for any natural number $n$ there is a spatial embedding $f$ of $G$ with $b(f)\geq n$. In contrast, we will show in the next theorem that $s(f)=s(g)$ for any two spatial embeddings $f$ and $g$ of $G$ unless $G$ satisfies certain conditions.
By ${\rm indeg}(v,G)={\rm indeg}(v)$ (resp. ${\rm outdeg}(v,G)={\rm outdeg}(v)$) we denote the number of the edges whose head (resp. tail) is the vertex $v$ of $G$. Then ${\rm deg}(v,G)={\rm deg}(v)={\rm indeg}(v,G)+{\rm outdeg}(v,G)$ is called the {\it degree} of $v$ in $G$.
We say that an edge-oriented graph $G$ is {\it circulating} if ${\rm indeg}(v,G)={\rm outdeg}(v,G)$ for any vertex $v$ of $G$. Namely each component of a circulating graph is Eulerian. Let $\chi(X)$ be the Euler characteristic of a space $X$.
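The circulating condition is straightforward to test on a finite digraph; a minimal sketch (ours, using networkx, which is an implementation choice not taken from the paper):
\begin{verbatim}
# Test indeg(v) == outdeg(v) at every vertex of an edge-oriented graph.
import networkx as nx

def is_circulating(G):
    return all(G.in_degree(v) == G.out_degree(v) for v in G.nodes)

C3 = nx.DiGraph([(0, 1), (1, 2), (2, 0)])       # oriented 3-cycle: Eulerian
print(is_circulating(C3))                        # True
print(is_circulating(nx.DiGraph([(0, 1)])))      # False: a source and a sink
\end{verbatim}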
Then we have the following theorem.
\vskip 3mm
\begin{Theorem}\label{main-theorem2}
Let $G$ be a finite edge-oriented graph without isolated vertices.
(1) Suppose that $G$ is not circulating. Then for any spatial embedding $f:G\to {\mathbb S}^3$, $s(f)={\rm max}\{1,\chi(G)\}$.
(2) Suppose that $G$ is circulating. Then for any natural number $n$ there is a spatial embedding $f:G\to {\mathbb S}^3$ such that $s(f)\geq n$.
\end{Theorem}
\vskip 3mm
\begin{Remark}\label{remark}
{\rm
Another choice for the smoothing index as a generalization of the number of Seifert circles of an oriented link is the use of the first Betti number instead of the number of connected components. Let $s'(f)$ be the minimum of $\beta_1(S(D))$ among all diagrams $D$ of $f$. By the Euler-Poincar\'{e} formula we have $\beta_1(S(D))=\mu(S(D))-\chi(S(D))$. Since smoothing does not change the Euler characteristic we have $\chi(S(D))=\chi(G)$. Then we have $s'(f)=s(f)-\chi(G)$. Thus we have that $s'(f)$ is determined by $s(f)$ after all.
}
\end{Remark}
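The bookkeeping in this remark is easy to verify on examples; the following sketch (ours, using networkx) checks $\chi(G)=\mu(G)-\beta_1(G)$ on a sample graph:
\begin{verbatim}
# Euler-Poincare check: chi = V - E = mu - beta_1 on a sample graph,
# so s'(f) = s(f) - chi(G) once chi is known.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 0), (3, 4)])   # triangle plus a segment
V, E = G.number_of_nodes(), G.number_of_edges()
mu = nx.number_connected_components(G)
chi = V - E
beta1 = mu - chi                                  # first Betti number
print(chi, mu, beta1)                             # 1 2 1
\end{verbatim}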
\section{Proof of Theorem \ref{main-theorem1}}
The following proof is a natural extension of a proof of Alexander's theorem by Cromwell \cite{Cromwell} using {\it rectangular diagrams} of oriented links, which appear in \cite{Brunn}, \cite{Cromwell}, \cite{M-M} etc.
In this section we regard ${\mathbb S}^2$ as a one-point compactification of the $xy$-plane. Thus we may suppose that all diagrams are on the $xy$-plane.
In the following we sometimes do not distinguish an abstract vertex or edge from its image in ${\mathbb S}^3$ or on ${\mathbb S}^2$.
\vskip 3mm
\noindent{\bf Proof of Theorem \ref{main-theorem1}.}
Let $D$ be a diagram of the spatial embedding $f:G\to {\mathbb S}^3$. We will deform $D$ step by step so that it is still a diagram of $f$ up to ambient isotopy in ${\mathbb S}^3$ as follows. First we move $D$ if necessary so that $D$ lies to the left of the $y$-axis. Namely $D$ is contained in the region of the $xy$-plane defined by $x<0$.
By a local deformation near each vertex we may suppose that all edges go down with respect to the $y$-coordinate in each small neighbourhood of a vertex of $G$. Then we further deform $D$ so that it satisfies the following conditions.
(1) $D$ is a union of finitely many line segments.
(2) Each vertex $v$ has a small disk neighbourhood $N_v$ such that the diagram $D$ on $N_v$ is a union of ${\rm indeg}(v,G)+{\rm outdeg}(v,G)$ line segments each of which has $v$ as one of its end points, and each of which goes down with respect to the $y$-coordinate.
(3) A line segment that is not contained in any $N_v$ is parallel to $x$-axis or $y$-axis.
Then we have that each crossing of $D$ is a crossing between a horizontal line segment (parallel to $x$-axis) and a vertical line segment (parallel to $y$-axis). Then by a local deformation as illustrated in Figure \ref{manji} we may further assume that a horizontal line segment is over a vertical line segment at each crossing. Note that the disk neighbourhood $N_v$ can be taken to be arbitrarily small. Then by a slight deformation we have that a straight line that contains a vertical line segment going up with respect to the $y$-coordinate contains no other vertical line segments and is disjoint from any $N_v$.
Let $s_1,\cdots,s_n$ be the vertical line segments that go up with respect to the $y$-coordinate. We may suppose without loss of generality that the $x$-coordinate of $s_i$ is less than that of $s_j$ if $i<j$. Let $R_1,\cdots,R_n$ be sufficiently large upright rectangles with $R_1\supset\cdots\supset R_n$ and $s_i\subset \partial R_i$ for each $i$. We replace each $s_i$ by $\partial R_i-{\rm int}(s_i)$ so that it crosses under the horizontal line segments at every crossing. Finally we tilt the horizontal line segments other than $\partial R_i$ slightly so that they go down with respect to the $y$-coordinate. We then have a diagram $D$ of $f$ that winds entirely around the origin of the $xy$-plane, so $D$ represents a braid presentation. See for example Figure \ref{braiding}. This completes the proof. $\Box$
\begin{figure}[htbp]
\begin{center}
\scalebox{0.4}{\includegraphics*{manji.eps}}
\end{center}
\caption{}
\label{manji}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.4}{\includegraphics*{braiding.eps}}
\end{center}
\caption{}
\label{braiding}
\end{figure}
\section{Proof of Theorem \ref{main-theorem2}}
A vertex $v$ of an edge-oriented graph $G$ is called a {\it source} (resp. {\it sink}) of $G$ if ${\rm indeg}(v,G)=0$ (resp. ${\rm outdeg}(v,G)=0$).
\begin{Proposition}\label{proposition1}
Let $G$ be a finite edge-oriented graph. Suppose that $G$ contains a cycle as an unoriented graph.
Then for any natural number $n$ there is a spatial embedding $f:G\to {\mathbb S}^3$ such that $b(f)\geq n$.
\end{Proposition}
\vskip 3mm
\noindent{\bf Proof.} Let $\gamma$ be a cycle of $G$. Note that $\gamma$ may not be an oriented cycle as an edge-oriented subgraph of $G$. Let $\alpha$ be the number of the sources of $\gamma$. Let $f:G\to {\mathbb S}^3$ be a spatial embedding of $G$ such that the bridge index ${\rm bridge}(f(\gamma))$ of the knot $f(\gamma)$ is greater than or equal to $n+\alpha$. By the definition we have $b(f(\gamma))\leq b(f(G))$. We may suppose that $f(\gamma)$ is a braid presentation with $\tilde{b}(f(\gamma))=b(f(\gamma))$. Then $f(\gamma)$ has at most $2(\tilde{b}(f(\gamma))+\alpha)$ critical points with respect to the $y$-coordinate. Therefore we have ${\rm bridge}(f(\gamma))\leq\tilde{b}(f(\gamma))+\alpha$. Therefore we have $n+\alpha\leq{\rm bridge}(f(\gamma))\leq\tilde{b}(f(\gamma))+\alpha=b(f(\gamma))+\alpha$. Thus we have $n\leq b(f(\gamma))\leq b(f(G))$ as desired. $\Box$
\vskip 3mm
\begin{Proposition}\label{proposition2}
Let $G$ be a finite edge-oriented graph and $f:G\to {\mathbb S}^3$ a spatial embedding. Then $s(f)\geq\chi(G)$.
\end{Proposition}
\vskip 3mm
\noindent{\bf Proof.} Let $D$ be a diagram of $f$. It is sufficient to show that $\mu(S(D))\geq\chi(G)$.
Since smoothing does not change the Euler characteristic we have that $\chi(G)=\chi(S(D))$.
Let $\beta_1(X)$ be the first Betti number of a space $X$.
By the Euler-Poincar\'{e} formula we have $\chi(S(D))=\mu(S(D))-\beta_1(S(D))$. Therefore we have $\chi(S(D))\leq\mu(S(D))$. Thus we have $\mu(S(D))\geq\chi(G)$. $\Box$
\vskip 3mm
\noindent{\bf Proof of Theorem \ref{main-theorem2} (1).} First we show that $s(f)\geq{\rm max}\{1,\chi(G)\}$. Let $D$ be a diagram of $f$. By the definition we have $\mu(S(D))\geq1$. By Proposition \ref{proposition2} we have $\mu(S(D))\geq\chi(G)$. Thus $\mu(S(D))\geq{\rm max}\{1,\chi(G)\}$ holds for any diagram $D$ of $f$. This implies $s(f)\geq{\rm max}\{1,\chi(G)\}$.
Next we show that $f(G)$ has a diagram $D$ with $\mu(S(D))={\rm max}\{1,\chi(G)\}$. Let $H$ be the maximal subgraph of $G$ that has no vertices of degree less than 2. If $H$ is not an empty graph and $H$ is not circulating then we set $G'=H$. Suppose that $H$ is an empty graph or a circulating graph. Suppose that there is a component $I$ of $H$ that is not a component of $G$. Let $J$ be the component of $G$ containing $I$. Let $e$ be an edge of $J$ that is not an edge of $I$ but incident to a vertex of $I$. Let $G'$ be the minimal subgraph of $G$ that contains $H$ and $e$. Suppose that every component of $H$ is also a component of $G$. Let $e$ be an edge of $G$ that is not an edge of $H$. Let $G'$ be the minimal subgraph of $G$ that contains $H$ and $e$. Note that in any case we have $\chi(G')\leq1$.
Let $f'$ be the restriction map of the spatial embedding $f:G\to {\mathbb S}^3$ to $G'$. We will construct a diagram $D'$ of $f'$ with $\mu(S(D'))={\rm max}\{1,\chi(G')\}=1$. Namely we will construct $D'$ such that $S(D')$ is connected. We start from a diagram $D'$ of $f'$ and deform it step by step and finally have $D'$ with $S(D')$ connected.
By Theorem \ref{main-theorem1} we may suppose that $f'$ is a braid presentation. By deforming the braid presentation if necessary we have that $f'$ has a diagram $D'$ on the $xy$-plane with the following properties.
(1) There exists a rectangle $B=[-3,-1]\times[-2,2]$ in the $xy$-plane such that at every point on $D'$ in $B$ the edge-orientation goes down with respect to the $y$-coordinate.
(2) Outside of $B$ the diagram $D'$ consists of some parallel arcs turning around the origin of the $xy$-plane.
See for example Figure \ref{proof1} (a). In the following deformations we always keep the condition that everything goes down with respect to the $y$-coordinate inside $B$.
Suppose that $G'$ has some sources and/or sinks. Then by pulling up the sources and moving them to the right as illustrated in Figure \ref{proof1-2} and pulling down the sinks and moving them to the left we have that all sources are in $[-3,-1]\times[1,2]$ and all sinks are in $[-3,-1]\times[-2,-1]$ and all parallel arcs go left to the sources and go right to the sinks outside of $B'=[-3,-1]\times[-1,1]$. See for example Figure \ref{proof1} (b).
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof1.eps}}
\end{center}
\caption{}
\label{proof1}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.4}{\includegraphics*{proof1-2.eps}}
\end{center}
\caption{}
\label{proof1-2}
\end{figure}
Then we deform the diagram $D'$ in $B'$ such that the following conditions hold.
(1) If a vertex $v$ of $G'$ satisfies $1\leq{\rm indeg}(v,G')\leq{\rm outdeg}(v,G')$ then it is rightmost in $B'$.
(2) If a vertex $v$ of $G'$ satisfies ${\rm indeg}(v,G')>{\rm outdeg}(v,G')\geq1$ then it is leftmost in $B'$.
See for example Figure \ref{proof2} (a).
Now we perform smoothing for the crossings in $B'$ and obtain $S(D')$ on $B'$. See for example Figure \ref{proof2} (b) and (c).
We do not deform $D'$ inside $B'$ any more. We will deform $D'$ only outside of $B'$. However, to keep the situation simple, we further perform the following replacement of $S(D')$ on $B'$. For each vertex $v$ with $2\leq{\rm indeg}(v,G')\leq{\rm outdeg}(v,G')$ we replace a neighbourhood of it on $B'$ by ${\rm indeg}(v,G')-1$ parallel arcs and a vertex $u$ with ${\rm indeg}(u)=1$ and ${\rm outdeg}(u)={\rm outdeg}(v,G')-{\rm indeg}(v,G')+1$. Similarly for each vertex $v$ with ${\rm indeg}(v,G')>{\rm outdeg}(v,G')\geq2$ we replace a neighbourhood of it on $B'$ by ${\rm outdeg}(v,G')-1$ parallel arcs and a vertex $u$ with ${\rm outdeg}(u)=1$ and ${\rm indeg}(u)={\rm indeg}(v,G')-{\rm outdeg}(v,G')+1$. See Figure \ref{proof3} and Figure \ref{proof2} (d). Then we perform edge contractions if necessary so that there exists at most one vertex, say $v_+$, with $1={\rm indeg}(v_+)<{\rm outdeg}(v_+)$ and at most one vertex, say $v_-$, with ${\rm indeg}(v_-)>{\rm outdeg}(v_-)=1$. See for example Figure \ref{proof2} (e). Note that these replacements never decrease the number of connected components. Therefore it is sufficient to show that $S(D')$ is connected after these replacements.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.7}{\includegraphics*{proof2.eps}}
\end{center}
\caption{}
\label{proof2}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof3.eps}}
\end{center}
\caption{}
\label{proof3}
\end{figure}
Now suppose that there is just one sink of $G'$. Let $P$ be any point of $S(D')$. We start from $P$ along the flow of edge orientations of $S(D')$. If we come across the vertex $v_+$ then we choose the leftmost way. Namely we turn to the right at $v_+$. Then we see that as we turn around the origin of ${\mathbb R}^2$ we move to the left and we finally reach the sink. Thus we have that $S(D')$ is arcwise connected. Similarly if there are no sinks of $G'$ then starting from any point of $S(D')$ we finally reach the outermost circle turning around the origin. Thus $S(D')$ is arcwise connected. Suppose that there is at most one source of $G'$. Then we see by going against the flow that $S(D')$ is arcwise connected.
Therefore it is sufficient to consider the case that there are at least two sinks and two sources of $G'$. Then by the definition of $G'$ we have that $G'$ has no vertices of degree one.
Let ${\mathcal B}$ be a disk in $B-B'$ containing all sinks in its interior. Let $s_1,\cdots,s_k$ be the sinks and $P_{1,1}$, $\cdots$, $P_{1,{\rm indeg}(s_1,G')}$, $\cdots$,$P_{k,1}$, $\cdots$, $P_{k,{\rm indeg}(s_k,G')}$ the points of intersection of $S(D')$ and $\partial{\mathcal B}$ such that they appear in this order on $\partial{\mathcal B}$, $P_{1,1}$ is adjacent to $v_-$ and $P_{i,j}$ is adjacent to $s_i$ for each $i$ and $j$.
See for example Figure \ref{proof1} and Figure \ref{proof4}.
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof4.eps}}
\end{center}
\caption{}
\label{proof4}
\end{figure}
We will deform $D'$ only on ${\mathcal B}$. We divide the points $P_{1,1}$, $\cdots$, $P_{1,{\rm indeg}(s_1,G')}$, $\cdots$,$P_{k,1}$, $\cdots$, $P_{k,{\rm indeg}(s_k,G')}$ into some sets of points ${\mathcal S}_1,\cdots,{\mathcal S}_\alpha$ such that for each $i$ the points in ${\mathcal S}_i$ are consecutive on $\partial{\mathcal B}$ and any two points in ${\mathcal S}_i$ can be connected by an arc in $S(D')$ outside ${\rm int}{\mathcal B}$. We may suppose without loss of generality that $P_{1,1}\in{\mathcal S}_1$ and for each $i$ there is a pair of consecutive points on $\partial{\mathcal B}$ such that one is contained in ${\mathcal S}_i$ and the other is contained in ${\mathcal S}_{i+1}$ where we consider $\alpha+1=1$.
We will show that ${\mathcal S}_i$ contains two or more points, except possibly for $i=1$. To see this we start from $P_{i,j}\neq P_{1,1}$ and trace $S(D')$ against the flow. Then we reach $v_+$ or a source. There we choose an edge next to the edge along which we arrived, and trace $S(D')$ along the flow. If we come across $v_+$ then we choose the leftmost way.
Then we must reach to $P_{i,j+1}$ or $P_{i,j-1}$ where we consider $P_{i,0}=P_{i-1,{\rm indeg}(s_{i-1},G')}$, $P_{i,{\rm indeg}(s_i,G')+1}=P_{i+1,1}$ and $P_{k+1,1}=P_{1,1}$.
Let ${\mathcal B}={\mathcal B}_1\supset{\mathcal B}_2\supset\cdots\supset{\mathcal B}_k$ be a sequence of disks such that their boundaries $\partial{\mathcal B}_1,\partial{\mathcal B}_2,\cdots,\partial{\mathcal B}_k$ form concentric circles in ${\mathcal B}$.
Let ${\mathcal A}_i={\mathcal B}_{i}-{\rm int}{\mathcal B}_{i+1}$ be the annulus.
Suppose that the set $\{P_{1,1},\cdots,P_{1,{\rm indeg}(s_1,G')}\}$ is contained in ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_i$ but not contained in ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_{i-1}$.
First suppose that the set $\{P_{1,1},\cdots,P_{1,{\rm indeg}(s_1,G')}\}$ is a proper subset of ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_i$.
Then we leave $s_1$ in ${\mathcal A}_1$, rename the sinks $s_2,\cdots,s_k$ as $s_1,\cdots,s_{k-1}$, and rename the points of intersection of $S(D')$ and $\partial{\mathcal B}_2$ as illustrated in Figure \ref{proof5}. Then we redivide the points $P_{1,1}$, $\cdots$, $P_{1,{\rm indeg}(s_1,G')}$, $\cdots$,$P_{k-1,1}$, $\cdots$, $P_{k-1,{\rm indeg}(s_{k-1},G')}$ into some sets of points, still denoted by ${\mathcal S}_1,\cdots,{\mathcal S}_\alpha$, such that for each $i$ the points in ${\mathcal S}_i$ are consecutive on $\partial{\mathcal B}_2$ and any two points in ${\mathcal S}_i$ can be connected by an arc in $S(D')$ outside ${\rm int}{\mathcal B}_2$. We may suppose without loss of generality that $P_{1,1}\in{\mathcal S}_1$ and that for each $i$ there is a pair of consecutive points on $\partial{\mathcal B}_2$ such that one is contained in ${\mathcal S}_i$ and the other is contained in ${\mathcal S}_{i+1}$, where we consider $\alpha+1=1$.
Then by the construction we have that ${\mathcal S}_i$ contains two or more points possibly except $i=1$, or except $i=\alpha$.
If ${\mathcal S}_\alpha$ contains just one point then we reverse the cyclic order for the next step. Namely, we rename $s_1,s_2,\cdots,s_{k-1}$ as $s_{k-1},s_{k-2},\cdots,s_{1}$, rename ${\mathcal S}_1,{\mathcal S}_2,\cdots,{\mathcal S}_\alpha$ as ${\mathcal S}_\alpha,{\mathcal S}_{\alpha-1},\cdots,{\mathcal S}_1$, and rename the points $P_{i,j}$ along the new cyclic order on $\partial{\mathcal B}_2$.
Next suppose that the set $\{P_{1,1},\cdots,P_{1,{\rm indeg}(s_1,G')}\}$ is equal to the set ${\mathcal S}_1\cup\cdots\cup{\mathcal S}_i$.
Then we deform $D'$ as illustrated in Figure \ref{proof6} and consider $S(D')$.
Note that new $P_{1,1}$ and new $P_{k-1,{\rm indeg}(s_{k-1},G')}$ can be connected by an arc in $S(D')$ outside ${\rm int}{\mathcal B}_2$.
Therefore we have that each new ${\mathcal S}_i$ contains at least two points.
Next we deform $D'$ inside ${\mathcal B}_2$ and leave new $s_1$ in ${\mathcal A}_2$ in a similar way.
We continue this deformation and finally have the desired $S(D')$.
Now we return to the whole graph $G$. Let $G''$ be the maximal subgraph of $G$ that contains $G'$ and satisfies $\mu(G'')=\mu(G')$. Let $T_1,\cdots,T_n$ be the tree components of $G$ that are disjoint from $G'$. Then $G=G''\cup T_1\cup\cdots\cup T_n$.
Let $f''$ be the restriction of the spatial embedding $f:G\to{\mathbb S}^3$ to $G''$. Let $D''$ be a diagram of $f''$ whose subdiagram for $f'$ is $D'$ and which has no more crossings than $D'$. Then $S(D'')$ and $S(D')$ have the same homotopy type; in particular, $S(D'')$ is connected. Let $m={\rm min}(\beta_1(S(D'')),n)$. Let $Q_1,\cdots,Q_m$ be points on $S(D'')$, other than the vertices, such that $S(D'')-\{Q_1,\cdots,Q_m\}$ is still connected. We may suppose that these points are away from the neighbourhoods of the crossings of $D''$ where the smoothings are performed. Let $D$ be a diagram of $f$ whose subdiagram for $f''$ is $D''$ and such that the crossings of $D$ other than those of $D''$ are exactly the points $Q_1,\cdots,Q_m$, where the crossing $Q_i$ is between an edge of $G'$ and an edge of $T_i$.
Then we see that $\mu(S(D))=1+n-m$. See for example Figure \ref{proof7}.
Note that we have the following equality.
\[
\chi(G)=\chi(G'')+n=\chi(S(D''))+n=\mu(S(D''))-\beta_1(S(D''))+n=1-\beta_1(S(D''))+n.
\]
Therefore if $m={\rm min}(\beta_1(S(D'')),n)=n$ then we have $\chi(G)\leq1$ and $\mu(S(D))=1+n-m=1$ as desired.
If $m={\rm min}(\beta_1(S(D'')),n)=\beta_1(S(D''))$ then we have $\chi(G)\geq1$ and $\mu(S(D))=1+n-\beta_1(S(D''))=\mu(S(D''))-\beta_1(S(D''))+n=\chi(S(D''))+n=\chi(G'')+n=\chi(G)$ as desired. This completes the proof.
$\Box$
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof5.eps}}
\end{center}
\caption{}
\label{proof5}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof6.eps}}
\end{center}
\caption{}
\label{proof6}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof7.eps}}
\end{center}
\caption{}
\label{proof7}
\end{figure}
\vskip 3mm
\noindent{\bf Proof of Theorem \ref{main-theorem2} (2).} Let $\gamma$ be an oriented cycle of $G$. Let $f:G\to {\mathbb S}^3$ be a spatial embedding of $G$ such that the braid index $b(f(\gamma))$ of the knot $f(\gamma)$ is greater than or equal to $n-\chi(G)$.
Let $D$ be any diagram of $f$. It is sufficient to show that $\mu(S(D))\geq n$. We replace each neighbourhood of a vertex $v$ of $D$ by ${\rm indeg}(v,G)$ oriented arcs as follows. For a vertex $v$ that is not on $\gamma$ we replace its neighbourhood by mutually disjoint oriented arcs. See for example Figure \ref{proof8}.
Let $v$ be a vertex of $G$ that is on $\gamma$. Let $N$ be a small neighbourhood of $v$ on $D$. Suppose that there is a pair of edges not contained in $\gamma$, say $e_i$ and $e_o$, such that the head of $e_i$ is $v$, the tail of $e_o$ is $v$ and they are next to each other in $N$. Then we take them away from $v$ and connect them. We do this for all such pairs. Then every edge in $N$ not on $f(\gamma)$ goes from the right of $f(\gamma)$ to the left of $f(\gamma)$ or from the left of $f(\gamma)$ to the right of $f(\gamma)$. Then we split them off and let $f(\gamma)$ go over them. Let $D'$ be the result of these replacements.
See for example Figure \ref{proof9}. Then we have that $D'$ is a diagram of some oriented link, say $L$. Since $L$ contains the knot $f(\gamma)$, we have that the braid index $b(L)$ of $L$ is greater than or equal to $n-\chi(G)$. By the result in \cite{Yamada} we have that $\mu(S(D'))\geq b(L)$. Therefore we have $\mu(S(D'))\geq n-\chi(G)$. Note that, up to homotopy, $S(D)$ is obtained from $S(D')$ by adding $\sum_{v}({\rm indeg}(v,G)-1)$ edges, where the summation is taken over all vertices $v$ of $G$. Therefore we have that
$\mu(S(D))\geq\mu(S(D'))-\sum_{v}({\rm indeg}(v,G)-1)$. By the handshaking lemma and by the assumption that $G$ is circulating we have that ${\displaystyle \sum_{v}({\rm indeg}(v,G)-1)=-\chi(G)}$.
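Spelled out, with the convention $\chi(G)=|V(G)|-|E(G)|$ and noting that every edge of $G$ contributes exactly one to the total indegree, this computation reads
\[
\sum_{v}({\rm indeg}(v,G)-1)=\sum_{v}{\rm indeg}(v,G)-|V(G)|=|E(G)|-|V(G)|=-\chi(G).
\]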
Thus we have $\mu(S(D))\geq n$ as desired. $\Box$
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof8.eps}}
\end{center}
\caption{}
\label{proof8}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{proof9.eps}}
\end{center}
\caption{}
\label{proof9}
\end{figure}
The following example shows that even for the circulating graphs the difference $b(f)-s(f)$ depends on the spatial embedding $f$.
\vskip 3mm
\begin{Example}\label{example}
{\rm
Let $G$ be a circulating graph on two vertices and eight edges joining them.
Let $f:G\to {\mathbb S}^3$ be a trivial embedding of $G$. Then we have that $b(f)=4$ and $s(f)=1$.
Let $g:G\to {\mathbb S}^3$ be the spatial embedding of $G$ illustrated in Figure \ref{example1}. Note that $g(G)$ contains a knot $K$ that is a connected sum of three figure eight knots. Then we have ${\rm bridge}(K)=4$. Suppose that $K$ is given a braid presentation compatible with its edge orientations. Then we may suppose that $K$ is as illustrated in Figure \ref{example2}, where the box represents some $n$-braid. Then we have ${\rm bridge}(K)\leq n-1$. Therefore we have $n\geq5$. Since $G$ has two more oriented cycles other than $K$, we have $b(g)\geq7$. Since $g$ has a braid presentation with $\tilde{b}(g)=7$, we have $b(g)=7$. However, we have $s(g)=1$, as illustrated in Figure \ref{example1}.
}
\end{Example}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.4}{\includegraphics*{example1.eps}}
\end{center}
\caption{}
\label{example1}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\scalebox{0.6}{\includegraphics*{example2.eps}}
\end{center}
\caption{}
\label{example2}
\end{figure}
\section*{Acknowledgments}
The authors are grateful to Professor Shin'ichi Suzuki for his constant guidance and encouragement. The authors are also grateful to Dr. Ryuzo Torii for his helpful comments.
{\normalsize
| {
"timestamp": "2009-01-12T08:50:47",
"yymm": "0901",
"arxiv_id": "0901.1511",
"language": "en",
"url": "https://arxiv.org/abs/0901.1511",
"abstract": "We define braid presentation of edge-oriented spatial graphs as a natural generalization of braid presentation of oriented links. We show that every spatial graph has a braid presentation. For an oriented link it is known that the braid index is equal to the minimal number of Seifert circles. We show that an analogy does not hold for spatial graphs.",
"subjects": "Geometric Topology (math.GT)",
"title": "Braid presentation of spatial graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877002595527,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594801712106
} |
https://arxiv.org/abs/2212.00302 | An analysis of the Rayleigh-Ritz and refined Rayleigh-Ritz methods for nonlinear eigenvalue problems | We establish a general convergence theory of the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for computing some simple eigenpair ($\lambda_{*},x_{*}$) of a given analytic nonlinear eigenvalue problem (NEP). In terms of the deviation $\varepsilon$ of $x_{*}$ from a given subspace $\mathcal{W}$, we establish a priori convergence results on the Ritz value, the Ritz vector and the refined Ritz vector, and present sufficient convergence conditions for them. The results show that, as $\varepsilon\rightarrow 0$, there is a Ritz value that unconditionally converges to $\lambda_*$ and the corresponding refined Ritz vector does so too but the Ritz vector may fail to converge and even may not be unique. We also present an error bound for the approximate eigenvector in terms of the computable residual norm of a given approximate eigenpair, and give lower and upper bounds for the error of the refined Ritz vector and the Ritz vector as well as for that of the corresponding residual norms. These results nontrivially extend some convergence results on these two methods for the linear eigenvalue problem to the NEP. Examples are constructed to illustrate some of the results. | \section{Introduction}
Given the nonlinear eigenvalue problem (NEP)
\begin{equation}\label{NEP}
T(\lambda)x=0,
\end{equation}
where $T(\cdot): \Omega\subseteq\mathbb{C}\rightarrow\mathbb{C}^{n\times n}$ is a
nonlinear analytic matrix-valued function with respect to the complex number $\lambda\in\Omega$
with $\Omega$ a connected open set,
the spectrum of $T(\cdot)$ is defined by the set \cite{Guttel,Meerbergen}:
\begin{equation}\label{spect}
\Lambda(T(\cdot)):=\bigg\{\lambda\in \mathbb{C}: \text{det}(T(\lambda))=0\bigg\}.
\end{equation}
We call $\lambda\in \Lambda(T(\cdot))$ an eigenvalue of $T(\cdot)$
and the corresponding $x$ a right eigenvector corresponding to $\lambda$.
Throughout the paper,
we normalize $x$ so that $\|x\|=1$ with $\|\cdot\|$ the 2-norm of a matrix or vector.
Assume that $T(\lambda)$ is regular, that is, $T(\lambda)$ does not have
identically zero determinant for all $\lambda\in \Omega$. Then every eigenvalue $\lambda$ is isolated, that is, there exists
an open neighborhood $\mathcal{B}\subset \Omega$ of $\lambda$ so that $\lambda$ is the unique eigenvalue of $T(\cdot)$
in $\mathcal{B}$ (cf. \cite{Neumaier} and Theorem 2.1 of \cite{Guttel}). There are many differences between NEP (\ref{NEP}) and the regular linear eigenvalue problem $(A-\lambda B)x=0$.
For example, the number of eigenvalues of NEP can be arbitrary, and eigenvectors associated with distinct eigenvalues may be linearly dependent. More theoretical background on NEP can be found in \cite{Guttel}.
NEP (\ref{NEP}) arises in many applications, such as dynamic analysis of damped structural systems \cite{app4}, nonlinear ordinary differential eigenvalue problems \cite{app3}, stability analysis of linear delayed systems \cite{app2}, acoustic problems with absorbing boundary conditions \cite{Day}, electronic structure calculation for quantum dots \cite{Betcke} and stability analysis in fluid mechanics \cite{app1}. More NEPs can be found in, e.g.,
\cite{Higham2,Guttel,Higham1,Mehrmann,Mehrmann2001,Porzio}
and the references therein.
Notice that \eqref{NEP} can be written as
\begin{equation}\label{NEP1}
T(\lambda)=\sum_{i=0}^{k}f_{i}(\lambda)A_{i}
\end{equation}
for some $k\leq n^2$, where $f_{i}: \Omega\subseteq \mathbb{C}\rightarrow \mathbb{C}$ are nonlinear analytic functions and $A_{i}\in \mathbb{C}^{n\times n}$ are constant coefficient matrices. The polynomial eigenvalue problem (PEP) \cite{Przemieniecki,Rothe,Voss0} corresponds to $f_{i}(\lambda)=\lambda^{i}, \ i=0, 1, \ldots,k$,
in which the quadratic eigenvalue problem (QEP) for $k=2$ is a special case. The QEP has very wide applications and has been intensively studied \cite{Hilliges,Lancaster,Leguillon,Mackey,Mehrmann2001,tisseur2002}. Another common class of \eqref{NEP1} is the rational eigenvalue problem (REP), where the functions $f_{i}(\lambda),\ i=0, 1, \ldots,k$, are rational functions of $\lambda$. Very importantly, from the perspectives of both mathematics and numerical computation, NEPs of the form (\ref{NEP}) on a given subset $\Omega\subseteq\mathbb{C}$ can be accurately approximated by rational matrix-valued functions \cite{dopico,guttel2022,Guttel,Saad}, so that one can solve the resulting PEPs and REPs to obtain approximate eigenvalues and eigenvectors.
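As a concrete illustration of the split form above, the following Python sketch builds $T(\cdot)$ as a callable from scalar functions $f_i$ and coefficient matrices $A_i$; the quadratic example at the end uses random matrices and is purely illustrative.
\begin{verbatim}
import numpy as np

def make_nep(fs, As):
    # T(lam) = sum_i f_i(lam) * A_i  (the split form above)
    return lambda lam: sum(f(lam) * A for f, A in zip(fs, As))

# Illustrative QEP: T(lam) = K + lam*C + lam^2*M with random matrices
n = 4
rng = np.random.default_rng(1)
K, C, M = (rng.standard_normal((n, n)) for _ in range(3))
T = make_nep([lambda lam: 1.0, lambda lam: lam, lambda lam: lam**2],
             [K, C, M])
\end{verbatim}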
Suppose that we are interested in finding a simple eigenpair ($\lambda_{*},x_{*}$) of NEP \eqref{NEP}, that is, $\text{det}(T(\lambda))=0$ has a simple root at $\lambda=\lambda_{*}$. Several methods have been proposed for solving small and medium sized NEPs. For instance, Newton's methods \cite{N1,N2,Ruhe} are popular choices, and the QR or QZ method is used when NEPs are transformed into linear eigenvalue problems \cite{dopico,Saad}. For large-scale NEPs, projection methods are the most commonly used numerical methods; they include the nonlinear Lanczos method, the nonlinear Arnoldi method, the nonlinear Jacobi--Davidson method, and some others \cite{Jacobi,Guttel,Voss}. We will focus on some popular projection methods and establish their general convergence theory in this paper.
A general projection method for \eqref{NEP} generates a sequence of $m$-dimensional subspaces $\mathcal{W}_{m}\subset \mathbb{C}^n$ that contain increasingly accurate approximations to $x_{*}$, and projects NEP \eqref{NEP} onto the $\mathcal{W}_{m}$ to obtain small projected NEPs, which are solved to yield approximations to the desired $(\lambda_*,x_*)$. The Rayleigh--Ritz method is a widely used projection method for extracting approximations to $\lambda_{*}$ and $x_{*}$ with respect to $\mathcal{W}_{m}$. We describe one iteration of the method as Algorithm \ref{alg-RR}, where the subscript $m$ is omitted for brevity and the superscript $H$ denotes the conjugate transpose of a matrix or vector. $\mu$ and $\widetilde{x}$ are called a Ritz value and a Ritz vector of $T(\lambda)$ with respect to $\mathcal{W}$, respectively.
\begin{algorithm}
\caption{The Rayleigh--Ritz method}
\label{alg-RR}
\begin{algorithmic}
\STATE{Compute an orthonormal basis $W$ of $\mathcal{W}$}.
\STATE{Form $B(\lambda)=W^{H}T(\lambda)W\in \mathbb{C}^{m\times m}$}.
\STATE{Solve $B(\lambda) z=0$, and let ($\mu,z$) be an eigenpair of $B(\cdot)$ with $\|z\|=1$ satisfying
\begin{equation}\label{ritzvalue}
\mu=\text{argmin}_{\zeta}\{|\zeta-\lambda_{*}|: \zeta\in\Lambda(B(\cdot))\}.
\end{equation}}
\STATE{Take ($\mu,\widetilde{x}$)=($\mu,Wz$) as an approximation to ($\lambda_{*},x_{*}$). }
\end{algorithmic}
\end{algorithm}
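For concreteness, the following Python sketch implements one pass of Algorithm \ref{alg-RR}. The projected NEP $B(\lambda)z=0$ is solved here by nonlinear inverse iteration started at a user-supplied target, with $B^{\prime}(\lambda)$ approximated by central differences; both are illustrative choices, and any dense solver for small NEPs could be substituted.
\begin{verbatim}
import numpy as np

def rayleigh_ritz_step(T, W, target, maxit=50, tol=1e-12):
    # One Rayleigh-Ritz extraction for T(lam) x = 0 over span(W).
    # T: callable lam -> (n, n) matrix; W: (n, m) orthonormal basis.
    B = lambda lam: W.conj().T @ T(lam) @ W
    h = 1e-7
    dB = lambda lam: (B(lam + h) - B(lam - h)) / (2 * h)  # approx. B'(lam)
    mu = target
    z = np.linalg.svd(B(mu))[2][-1].conj()  # smallest right singular vector
    for _ in range(maxit):                  # nonlinear inverse iteration
        if np.linalg.norm(B(mu) @ z) < tol:
            break
        w = np.linalg.solve(B(mu), dB(mu) @ z)
        mu = mu - 1.0 / (z.conj() @ w)      # Newton update, z has unit norm
        z = w / np.linalg.norm(w)
    return mu, W @ z                        # Ritz value and Ritz vector
\end{verbatim}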
We point out that, for NEP \eqref{NEP1}, we can
always compute $B(\lambda)$ accurately in exact arithmetic.
We may solve the projected NEP $B(\lambda)z=0$ by a method for small to medium sized problems,
e.g., the afore-mentioned Newton's methods or the more robust QZ algorithm for the matrix pencils resulting from PEPs, REPs themselves and REP approximations of NEPs \cite{dopico,guttel2022,Saad,vandooren79}.
Regarding the choice of $\mu$ defined by \eqref{ritzvalue} in computations, since $\lambda_*$ is to be sought and unknown, for a specified $\lambda_*$, such as the largest one in magnitude, the one with the largest (or smallest) real part, or the one closest to a given target, we label all the Ritz values in the corresponding prescribed order and pick the Ritz value $\mu$ as an approximation to $\lambda_*$ according to this labeling rule. This is exactly what the method does for linear eigenvalue problems in computations.
How to construct $\mathcal{W}$ effectively is not our concern in this paper.
Instead, we focus on the convergence of the Ritz pair $(\mu,\widetilde{x})$
under the hypothesis that $\mathcal{W}$ is sufficiently accurate for
computing the desired eigenpair $(\lambda_*,x_*)$.
To this end, we define the distance between the desired normalized $x_*$ and $\mathcal{W}$ or the deviation of $x_*$ from $\mathcal{W}$ by the quantity
\begin{equation}\label{devia}
\varepsilon=\sin\angle(x_{*},\mathcal{W})=\| W_{\bot}^{H}x_{*}\|=\|(I-P_{\mathcal{W}})x_{*}\|,
\end{equation}
where the columns of $W_{\bot}$ form an orthonormal basis for the orthogonal complement of $\mathcal{W}$ with respect to $\mathbb{C}^{n}$ and $P_{\mathcal{W}}$ is the orthogonal projector onto $\mathcal{W}$.
Since $\sin\angle(x_*,y)\geq\varepsilon$ for any $y\in\mathcal{W}$,
any projection method, e.g., the above Rayleigh--Ritz method, cannot find an accurate approximation to $x_*$ unless $\varepsilon$ tends to zero. In other words, $\varepsilon\rightarrow 0$ is a necessary condition for
$\widetilde{x}\rightarrow x_*$. Therefore, in the sequel we naturally assume that $\varepsilon\rightarrow 0$; only under this assumption is it meaningful to speak of the convergence of the Rayleigh--Ritz method and of the refined Rayleigh--Ritz method to be introduced later.
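In computations, $\varepsilon$ is directly available from an orthonormal basis $W$ of $\mathcal{W}$; a minimal Python sketch of \eqref{devia}:
\begin{verbatim}
import numpy as np

def deviation(W, x_star):
    # sin angle(x_*, W) = ||(I - W W^H) x_*||
    # for unit-norm x_star and W with orthonormal columns
    return np.linalg.norm(x_star - W @ (W.conj().T @ x_star))
\end{verbatim}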
For the linear eigenvalue problem $(A-\lambda I)x=0$, Jia \cite{Jia1995} and Jia and Stewart \cite{Jia1999,Jia2001} prove that as $\varepsilon\rightarrow 0$, the Rayleigh quotient $W^{H}AW$ contains a Ritz value that converges to $\lambda_{*}$ but the Ritz vector may fail to converge even when $x_*\in \mathcal{W}$, i.e., $\varepsilon=0$. The refined Rayleigh--Ritz method extracts a normalized refined Ritz vector or refined eigenvector approximation $\widehat{x}$ that minimizes the residual formed with $\mu$ over $\mathcal{W}$ and uses it to approximate $x_*$ \cite{Jia1997,Jia1999,Jia1999-1,Jia2000}. It is shown in \cite{Jia1999,Jia2001} that the convergence of $\widehat{x}$ is ensured provided that $\varepsilon\rightarrow 0$; see also \cite{Stewart2001,vorst2002}. Numerous theoretical comparisons of refined Ritz vectors and Ritz vectors for linear eigenvalue problems are made in \cite{Jia2004}. Huang, Jia and Lin \cite{Huang} generalize the
Rayleigh--Ritz method and the refined Rayleigh--Ritz method to the QEP. By transforming the QEP into a generalized linear eigenvalue problem, they extend the main convergence results in \cite{Jia1999,Jia2001} on the Rayleigh--Ritz method and the refined Rayleigh--Ritz method to the
QEP case. Hochstenbach and Sleijpen \cite{Hoch} generalize the harmonic and refined Rayleigh--Ritz methods to the PEP. Schwetlick and Schreiber \cite{Schwetlick} study the nonlinear Rayleigh functionals for NEP (\ref{NEP}). They estimate the quality of the Rayleigh functional, i.e., the error of the Ritz value, in terms of the angle between the desired eigenvector and the Ritz vector, and present a number of results. For NEP (\ref{NEP}) derived from finite element analysis of Maxwell's equation with waveguide boundary conditions, Liao et al. \cite{Liao} propose a nonlinear Rayleigh--Ritz method, but they do not analyze the convergence of Ritz values and Ritz vectors.
In this paper, we propose a refined Rayleigh--Ritz method that solves NEP (\ref{NEP}). For a given $m$-dimensional subspace $\mathcal{W}\subset\mathbb{C}^n$, the method
seeks for a unit-length vector $\widehat{x}\in \mathcal{W} $ such that
\begin{equation}\label{Refin}
\| T(\mu)\widehat{x}\|=\min_{v\in \mathcal{W},\| v\|=1}\| T(\mu)v\|
\end{equation}
with $\mu$ being the Ritz value defined by \eqref{ritzvalue}. We call
$\widehat{x}$ a refined eigenvector approximation to $x_*$ or simply the refined Ritz vector corresponding to $\mu$. By definition, $\widehat{x}=Wy$, where $y$ is the right singular vector of $T(\mu)W$ associated with its smallest singular value.
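Computationally, the minimization \eqref{Refin} is a small dense singular value problem; a Python sketch (with $T$ and $W$ as in the earlier sketches):
\begin{verbatim}
import numpy as np

def refined_ritz_vector(T, W, mu):
    # Minimize ||T(mu) W y|| over unit y: take the right singular vector
    # of T(mu) W belonging to its smallest singular value.
    U, s, Vh = np.linalg.svd(T(mu) @ W, full_matrices=False)
    y = Vh[-1].conj()
    return W @ y, s[-1]   # refined Ritz vector and its residual norm
\end{verbatim}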
We will study the convergence of Ritz values and Ritz vectors obtained by Algorithm~\ref{alg-RR} and that of refined Ritz vectors defined by \eqref{Refin} under the hypothesis that $\varepsilon\rightarrow 0$. We will extend the main convergence results in \cite{Jia2004,Jia1999} to the NEP case and get insight into similarities and dissimilarities of the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for the linear eigenvalue problem and NEP.
We will show that, unlike the linear eigenvalue problem, the hypothesis $\varepsilon\rightarrow 0$ cannot unconditionally ensure that $\mu\rightarrow\lambda_{*}$. We will present a sufficient condition
that ensures the convergence. Similarly to the linear case, we will prove that the Ritz vector $\widetilde{x}$ may fail to converge to
$x_*$ even if $\mu$ converges. We will present an approach to identify the convergence of $\widetilde{x}$ associated with the converging $\mu$. For the refined Rayleigh--Ritz method,
we prove that the refined Ritz vector $\widehat{x}$ converges unconditionally as $\varepsilon\rightarrow 0$. We also prove that $\widehat{x}$ is unique as $\varepsilon\rightarrow 0$, an
important property that is different from that of $\widetilde{x}$. Furthermore, we establish lower and upper bounds for the error of $\widehat{x}$ and
$\widetilde{x}$, and study the residual norms
$\|T(\mu)\widetilde{x}\|$ and $\|T(\mu)\widehat{x}\|$, showing that the latter is strictly smaller, and
can be much smaller, than the former whenever the Ritz pair
$(\mu,\widetilde{x})\not=(\lambda_*,x_*)$. This indicates that the refined Ritz vector $\widehat{x}$ is generally more accurate, and
can be much more accurate, than the Ritz vector $\widetilde{x}$.
In Section \ref{sec-con-Rv}, we study the convergence of $\mu$. In Section \ref{sec:RR}, we consider the convergence of $\widetilde{x}$ and
$\widehat{x}$. In Section \ref{sec:bound}, we derive lower and upper bounds for $\sin\angle(\widetilde{x},\widehat{x})$, and construct examples to illustrate our results. In Section~\ref{sec:concl}, we conclude
the paper.
\section{Convergence of the Ritz value}\label{sec-con-Rv}
In this section, we study the convergence of the Ritz values obtained
by Algorithm \ref{alg-RR}. The following result shows that $\lambda_{*}$
is an exact eigenvalue of
$\widetilde{B}(\cdot)$ that is a slight perturbation of $B(\cdot)$ in Algorithm~\ref{alg-RR}, which extends Theorem 4.1 of \cite{Jia2001} to the Rayleigh--Ritz method for NEP~\eqref{NEP}.
\begin{theorem}\label{l1}
For a given subspace $\mathcal{W}$, let $\varepsilon$ be defined by \eqref{devia}. Then for the projected matrix-valued function $B(\lambda)=W^HT(\lambda) W$, there exists a matrix-valued function
$E(\cdot):\Omega\subseteq \mathbb{C}\rightarrow\mathbb{C}^{m\times m}$ satisfying
\begin{equation}\label{normE}
\| E(\lambda_{*})\|\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}}\| T(\lambda_{*})\|
\end{equation}
such that $\lambda_{*}$ is an eigenvalue of the perturbed $B(\lambda)+E(\lambda)$.
\end{theorem}
\begin{proof}
Recall that the columns of $W$ and $W_{\bot}$ form orthonormal bases of $\mathcal{W}$ and its orthogonal complement with respect to $\mathbb{C}^n$. Let $u=W^{H}x_{*}$ and $u_{\bot}=W_{\bot}^{H}x_{*}$. Then $\|u_{\bot}\|=\varepsilon$.
Notice that
$$
1= \|x_{*}\|^{2}=x_{*}^{H}(WW^{H}+W_{\bot}W_{\bot}^{H})x_{*}=\|u\|^{2}+\|u_{\bot}\|^{2}.
$$
We have $\|u\|=\sqrt{1-\varepsilon^{2}}$. Since $T(\lambda_{*})x_{*}=0$, we obtain
$$
W^{H}T(\lambda_{*})(WW^{H}+W_{\bot}W_{\bot}^{H})x_{*}=0.
$$
Hence
\begin{equation}\label{Bpert}
B(\lambda_{*})u+W^{H}T(\lambda_{*})W_{\bot}u_{\bot}=0.
\end{equation}
Let $\widehat{u}=\dfrac{u}{\sqrt{1-\varepsilon^{2}}}$ be the normalized $u$,
and denote $r=B(\lambda_{*})\widehat{u}$, which is the residual of $(\lambda_*,\widehat{u})$ as an approximate eigenpair of the projected NEP $B(\lambda)z=0$. Then we have
\begin{equation}\label{err}
\| r\|=\bigg\| -\frac{W^{H}T(\lambda_{*})W_{\bot}u_{\bot}}{\sqrt{1-\varepsilon^{2}}}
\bigg\|\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}}\|T(\lambda_{*})\|.
\end{equation}
Define
$E(\lambda)=\dfrac{1}{\sqrt{1-\varepsilon^{2}}}W^{H}T(\lambda)
W_{\bot}u_{\bot}\widehat{u}^{H}$.
Then (\ref{normE}) holds, and it follows from \eqref{Bpert} that $(B(\lambda_{*})+E(\lambda_{*}))\widehat{u}=0$.
\end{proof}
Theorem~\ref{l1} shows that $B(\lambda_*)+E(\lambda_*)$ is singular and that $E(\lambda_*)\rightarrow 0$ as $\varepsilon\rightarrow 0$. Therefore, $B(\lambda_*)$ tends to a singular matrix as
$\varepsilon\rightarrow 0$, while $B(\mu)$ is singular for $\mu$ defined in Algorithm~\ref{alg-RR}. It is therefore expected that $\mu\rightarrow \lambda_*$ as $\varepsilon\rightarrow 0$. Later on, we will investigate if and when this is true.
We present the following result, which, in terms of $\varepsilon$, establishes an upper bound for the smallest singular value $\sigma_{\min}(B(\lambda_{*}))$ of $B(\lambda_{*})$ and quantitatively
shows how near $B(\lambda_*)$ is to singularity.
\begin{theorem}\label{C1} The smallest singular value $\sigma_{\min}(B(\lambda_{*}))$
of $B(\lambda_{*})$ satisfies
\begin{equation}\label{sigmB}
\sigma_{\min}(B(\lambda_{*}))\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}}\| T(\lambda_{*})\|.
\end{equation}
\end{theorem}
\begin{proof}
By definition and (\ref{err}), we have
$$\sigma_{\min}(B(\lambda_{*}))=\min_{\|u\|=1}\|B(\lambda_{*})u\|\leq
\| B(\lambda_{*})\widehat{u}\|\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}}\| T(\lambda_{*})\|.$$
~
\end{proof}
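As a numerical sanity check of \eqref{sigmB}, the following Python sketch verifies the bound in the linear special case $T(\lambda)=A-\lambda I$ with a random subspace nearly containing $x_*$; all data are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lams, X = np.linalg.eig(A)
lam_s = lams[0]
x_s = X[:, 0] / np.linalg.norm(X[:, 0])
# subspace spanned by a perturbed x_* plus random directions
S = np.column_stack([x_s + 1e-3 * rng.standard_normal(n),
                     rng.standard_normal((n, m - 1))])
W, _ = np.linalg.qr(S)
eps = np.linalg.norm(x_s - W @ (W.conj().T @ x_s))
Tstar = A - lam_s * np.eye(n)
smin = np.linalg.svd(W.conj().T @ Tstar @ W, compute_uv=False)[-1]
bound = eps / np.sqrt(1 - eps**2) * np.linalg.norm(Tstar, 2)
assert smin <= bound * (1 + 1e-10)   # the bound of the theorem holds
\end{verbatim}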
Theorem \ref{C1} shows that the distance of $B(\lambda_*)$ to singularity is no more than the upper bound \eqref{sigmB}. Furthermore, by the analyticity of $B(\cdot)$ and a continuity argument, there must be some eigenvalue $\mu$ of $B(\cdot)$, defined in Algorithm~\ref{alg-RR}, that converges to $\lambda_*$ as $\varepsilon\rightarrow 0$. In order to establish a quantitative convergence result on $\mu$, we need the following lemma \cite[p.~208]{Stewart2001}, which gives the first-order expansion of a simple singular value of a matrix and trivially extends to complex matrices.
\begin{lemma}\label{singular-pertur} Let $(\sigma,u,v)$ be a simple singular
triplet of $X\in\mathbb{C}^{m\times n}$, and $\widetilde{\sigma}$ be the corresponding singular value of the perturbed matrix $\widetilde{X}=X+E$. Then
$$\widetilde{\sigma}=\sigma+u^{H}Ev+O(\|E\|^{2}).$$
\end{lemma}
\begin{theorem}\label{T1} Assume that $(\sigma(B(\lambda_*)),u_{\lambda_*}, v_{\lambda_*})$ is a simple singular triplet of $B(\lambda_*)$, and
let $\sigma(B(\lambda))$ be the corresponding singular value of
the perturbed analytic matrix $B(\lambda)=B(\lambda_*)+E(\lambda)$.
Then $\sigma(B(\lambda))$ is analytic at $\lambda_*$ and
\begin{equation}\label{div}
\sigma^{\prime}(B(\lambda_*))=u_{\lambda_*}^{H}B^{\prime}(\lambda_*)v_{\lambda_*}.
\end{equation}
\end{theorem}
\begin{proof} Let $\Delta\lambda=\lambda-\lambda_*$. By the definition
of $B(\lambda)$, we have $E(\lambda_*)=0$. Therefore,
\begin{equation}\label{T1-1}
\lim_{\Delta\lambda\rightarrow 0}\frac{E(\lambda)}{\Delta\lambda}=\lim_{\Delta\lambda\rightarrow 0}\frac{E(\lambda)-E(\lambda_*)}{\Delta\lambda}=
\lim_{\Delta\lambda\rightarrow 0}\frac{B(\lambda)-B(\lambda_*)}{\Delta\lambda}
=B^{\prime}(\lambda_*).
\end{equation}
Since $\sigma(B(\lambda_*))$ is simple, it follows
from Lemma \ref{singular-pertur} that
\begin{equation}\label{T1-2}
\sigma(B(\lambda))=\sigma(B(\lambda_*))+
u_{\lambda_*}^{H}E(\lambda)v_{\lambda_*}+O(\|E(\lambda)\|^{2}).
\end{equation}
From (\ref{T1-1}) and (\ref{T1-2}), we have
\begin{eqnarray*}
\sigma^{\prime}(B(\lambda_*)) &=& \lim_{\Delta\lambda\rightarrow 0}\frac{\sigma(B(\lambda))-\sigma(B(\lambda_*))}{\Delta\lambda} \\
&=& \lim_{\Delta\lambda\rightarrow 0}u_{\lambda_*}^{H}\frac{E(\lambda)}{\Delta\lambda}v_{\lambda_*}+\lim_{\Delta\lambda\rightarrow 0}O\bigg(\frac{\|E(\lambda)\|^{2}}{\Delta\lambda}\bigg) \\
&=& u_{\lambda_*}^{H}B^{\prime}(\lambda_*)v_{\lambda_*},
\end{eqnarray*}
where we have used
$$\lim_{\Delta\lambda\rightarrow 0}\frac{\|E(\lambda)\|^{2}}{|\Delta\lambda|}=
\|B^{\prime}(\lambda_*)\|\lim_{\Delta\lambda\rightarrow 0}\|E(\lambda)\|= \|B^{\prime}(\lambda_*)\|\|E(\lambda_*)\|=0.
$$
Hence, \eqref{div} holds.
\end{proof}
By Theorem \ref{T1}, we can now give a sufficient condition
for the convergence of the Ritz value $\mu$.
\begin{theorem}\label{appro}
Assume that the matrix-valued function $B(\lambda)$ generated by Algorithm \ref{alg-RR} is analytic for $\lambda\in \Omega$ and that $\sigma_{\min}(B(\lambda_*))$ is simple. Then the smallest singular value $\sigma_{\min}(B(\lambda))$
is simple for $\lambda$ lying in a sufficiently small disc
$\mathcal{B}=\{\lambda\in\mathbb{C}:|\lambda-\lambda_{*}|\leq|\mu-\lambda_{*}|\}
\subset \Omega$, and there exists a constant $\alpha>0$ such that $|\sigma_{\min}^{\prime}(B(\lambda))|>\alpha$ for $\lambda\in \mathcal{B}$ and
\begin{equation}\label{apppro1}
|\mu-\lambda_{*}|\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}\alpha}
\| T(\lambda_{*})\|.
\end{equation}
\end{theorem}
\begin{proof}
By Theorem \ref{T1}, $\sigma_{\min}(B(\lambda))$ is analytic for $\lambda\in \mathcal{B}$. By the assumption that $\sigma_{\min}(B(\lambda_*))$ is simple,
we have $\sigma^{\prime}_{\min}(B(\lambda_*))\not=0$.
For $\lambda$ lying in a sufficiently small disc $\mathcal{B}$,
we have $\sigma^{\prime}_{\min}(B(\lambda))\not=0$, and
there exists a positive constant $\alpha$ such that
$|\sigma^{\prime}_{\min}(B(\lambda))|>\alpha$ for all $\lambda\in\mathcal{B}$.
Since $\sigma_{\min}(B(\lambda))$ is real-valued and differentiable in $\Omega$,
it follows from \cite{Qazi} that
$$
\sigma_{\min}(B(\mu))-\sigma_{\min}(B(\lambda_{*}))
=\sigma^{\prime}_{\min}(B(\xi))(\mu-\lambda_{*})$$
for some $\xi$ in the segment connecting $\lambda_*$ and $\mu$. Notice that
$\xi\in\mathcal{B}\subset \Omega$. By \eqref{sigmB} and $\sigma_{\min}(B(\mu))=0$, we obtain
$$|\mu-\lambda_{*}|=\frac{|\sigma_{\min}(B(\lambda_{*}))|}{|\sigma^{\prime}_{\min}(B(\xi))|}
\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}\alpha}\| T(\lambda_{*})\|.$$
~
\end{proof}
This theorem shows that, under the assumptions described,
$\mu\rightarrow \lambda_*$ as fast as $\varepsilon\rightarrow 0$.
\section{Convergence of the Ritz vector and the refined Ritz vector}\label{sec:RR}
In this section, we consider the convergence of the Ritz vector
$\widetilde{x}$ and the refined Ritz vector $\widehat{x}$.
Let $X_{\bot}$ be an orthonormal basis for the orthogonal complement of
${\rm span}\{x_{*}\}$ with respect to $\mathbb{C}^n$, and define $L(\lambda_{*})=X_{\bot}^{H}T(\lambda_{*})X_{\bot}$.
Then from $T(\lambda_{*})x_{*}=0$ we obtain the Schur-like decomposition
\begin{equation}\label{schurlike}
\begin{pmatrix}
x_{*}^{H} \\
X_{\bot}^{H} \\
\end{pmatrix}T(\lambda_{*})\begin{pmatrix}
x_{*} & X_{\bot} \\
\end{pmatrix}=\begin{pmatrix}
0 & x_{*}^{H}T(\lambda_{*})X_{\bot}\\
0 & L(\lambda_{*}) \\
\end{pmatrix}.
\end{equation}
Since $\lambda_{*}$ is a simple eigenvalue of $T(\cdot)$, Proposition $1$ in \cite{Neumaier} shows that $\text{rank}(T(\lambda_{*}))=n-1$. As a result,
$L(\lambda_{*})$ is nonsingular, and
$\sigma_{\min}(L(\lambda_{*}))>0$.
In terms of the computable residual norm, the following theorem derives an error bound for a general approximate eigenpair $(\mu,\widetilde{x})$, which is not necessarily a Ritz pair. The theorem extends Theorem 3.1
of \cite{Jia1999} and Theorem 6.1 of \cite{Jia2001} to the NEP case
and applies to the Rayleigh--Ritz method
and the refined Rayleigh--Ritz method.
\begin{theorem}\label{con-eigenpair}
Let $(\mu,\widetilde{x})$ with $\|\widetilde{x}\|=1$ be an approximate eigenpair of $T(\lambda)x=0$, and denote the corresponding residual norm by $$\rho=\| T(\mu)\widetilde{x}\|.$$
Denote $L(\mu)=X_{\bot}^{H}T(\mu)X_{\bot}$. If $\sigma_{\min}(L(\mu))>0$, then
\begin{equation}\label{sin-x}
\sin\angle(x_*,\widetilde{x})\leq\frac{\rho}{\sigma_{\min}(L(\mu))}+O(|\mu-\lambda_{*}|).
\end{equation}
\end{theorem}
\begin{proof}
By the unitary invariance of 2-norm, we have
\begin{eqnarray*}
\rho &=& \bigg\|\begin{pmatrix}
x_{*}^{H} \\
X_{\bot}^{H} \\
\end{pmatrix}T(\mu)\widetilde{x}\bigg\| \\
&=& \bigg\|\begin{pmatrix}
x_{*}^{H}T(\mu)\widetilde{x} \\
X_{\bot}^{H}T(\mu)(x_{*}x_{*}^{H}\widetilde{x}+X_{\bot}X_{\bot}^{H}\widetilde{x})\\
\end{pmatrix}\bigg\|
\\
&=& \bigg\|\begin{pmatrix}
x_{*}^{H}T(\mu)\widetilde{x} \\
X_{\bot}^{H}T(\mu)x_{*}x_{*}^{H}\widetilde{x}+L(\mu)X_{\bot}^{H}\widetilde{x} \\
\end{pmatrix} \bigg\|.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\rho&\geq& \|X_{\bot}^{H}T(\mu)x_{*}x_{*}^{H}\widetilde{x}+L(\mu)X_{\bot}^{H}\widetilde{x}\|\\
&\geq& \|L(\mu)X_{\bot}^{H}\widetilde{x}\|-\|X_{\bot}^{H}T(\mu)x_{*}x_{*}^{H}\widetilde{x}\|,
\end{eqnarray*}
i.e.,
\begin{equation}\label{p}
\|L(\mu)X_{\bot}^{H}\widetilde{x}\|\leq\rho+
\|X_{\bot}^{H}T(\mu)x_{*}x_{*}^{H}\widetilde{x}\|.
\end{equation}
Since $T(\lambda)$ is analytic with respect to $\lambda$, we have
\begin{equation}\label{expension}
T(\mu)=T(\lambda_{*})+T^{\prime}(\lambda_{*})(\mu-\lambda_{*})+O((\mu-\lambda_{*})^2).
\end{equation}
Therefore,
$$T(\mu)x_{*}=(\mu-\lambda_{*})T^{\prime}(\lambda_{*})x_{*}+O((\mu-\lambda_{*})^2),$$
which proves
\begin{equation}\label{inest}
\|X_{\bot}^{H}T(\mu)x_{*}x_{*}^{H}\widetilde{x}\|=O(|\mu-\lambda_{*}|).
\end{equation}
Notice that $\sin\angle(x_{*},\widetilde{x})=\|X_{\bot}^H\widetilde{x}\|$
and $\|L(\mu)X_{\bot}^{H}\widetilde{x}\|\geq \sigma_{\min}(L(\mu))\|X_{\bot}^{H}\widetilde{x}\|$.
Therefore, from (\ref{p}) and (\ref{inest}), we obtain
$$\sin\angle(x_{*},\widetilde{x})\sigma_{\min}(L(\mu))
\leq\rho+O(|\mu-\lambda_{*}|),$$
proving that
\begin{equation*}
\sin\angle(x_{*},\widetilde{x})\leq\dfrac{\rho+O(|\mu-\lambda_{*}|)}
{\sigma_{\min}(L(\mu))}\leq\frac{\rho}{\sigma_{\min}(L(\mu))}+O(|\mu-\lambda_{*}|).
\end{equation*}
\end{proof}
We remark that the second term in the right-hand side of \eqref{p} vanishes for the linear eigenvalue problem $T(\lambda)x=(A-\lambda I)x=0$ since $X_{\bot}^HT(\mu)x_*=(\lambda_*-\mu)X_{\bot}^Hx_*=0$. In this case, the term $O(|\mu-\lambda_{*}|)$ disappears in \eqref{sin-x}, and the theorem reduces to Theorem 3.1 of \cite{Jia1999}.
The smallest singular value $\sigma_{\min}(L(\mu))>0$ is ensured, provided that $\lambda_{*}$ is a simple eigenvalue of $T(\lambda)$ and $\mu$ is sufficiently close to $\lambda_*$. We will investigate more on $\sigma_{\min}(L(\mu))$ later. Theorem~\ref{con-eigenpair} indicates that if the residual associated with a converging Ritz value $\mu$ converges to zero then the corresponding Ritz vector converges to $x_*$. This residual norm can be used to
check if the approximate eigenpair converges.
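When $x_*$ is known, as in synthetic tests, both sides of \eqref{sin-x} can be evaluated directly; the following Python sketch does so, dropping the $O(|\mu-\lambda_{*}|)$ term, so the comparison is only indicative.
\begin{verbatim}
import numpy as np

def check_sin_bound(T, mu, x_tilde, x_star):
    # Compare sin angle(x_*, x_tilde) with rho / sigma_min(L(mu)),
    # where L(mu) = X_perp^H T(mu) X_perp; x_tilde, x_star unit vectors.
    Q, _ = np.linalg.qr(x_star.reshape(-1, 1), mode='complete')
    Xp = Q[:, 1:]                      # orthonormal basis of span{x_*}^perp
    L = Xp.conj().T @ T(mu) @ Xp
    rho = np.linalg.norm(T(mu) @ x_tilde)
    sin_angle = np.linalg.norm(Xp.conj().T @ x_tilde)
    return sin_angle, rho / np.linalg.svd(L, compute_uv=False)[-1]
\end{verbatim}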
The following result gives an upper bound for the error of the Ritz vector $\widetilde{x}$ in terms of $\varepsilon$. We will show that $\widetilde{x}$ converges conditionally and reveal why $\widetilde{x}$ may fail to converge to $x_{*}$ even though $\varepsilon\rightarrow 0$, and even when $\varepsilon=0$.
Recall that $B(\mu)z=0$, and let $(z,Z_{\bot})$ be unitary. Then
\begin{equation}\label{schurB}
\begin{pmatrix}
z^{H} \\
Z_{\bot}^{H} \\
\end{pmatrix}B(\mu)\begin{pmatrix}
z & Z_{\bot} \\
\end{pmatrix}=\begin{pmatrix}
0 & z^{H}B(\mu)Z_{\bot}\\
0 & C(\mu) \\
\end{pmatrix},
\end{equation}
where $C(\mu)=Z_{\bot}^HB(\mu)Z_{\bot}$.
If $\mu$ is a simple eigenvalue of $B(\cdot)$, then $C(\mu)$ is nonsingular.
However, if $\mu$ is a multiple eigenvalue or is close to some other eigenvalues
of $B(\cdot)$, then $C(\mu)$ is singular or nearly singular. In these two cases, since $\sigma_{\min}(C(\mu))$ is zero or close to zero, the eigenvector $z$ may not be unique, or $\widetilde{x}$ is unique but may have very poor accuracy or even no accuracy at all, as shown below.
In either case, by a continuity argument, $\sigma_{\min}(C(\lambda_*))$ can be
arbitrarily close to zero as $\mu\rightarrow \lambda_*$.
The following theorem shows that the Ritz vector
$\widetilde{x}$ converges conditionally, and extends Theorem 3.2 of \cite{Jia1999} to the NEP, in which
the term $O(|\mu-\lambda_{*}|)$ vanishes when the NEP becomes a linear one.
\begin{theorem}\label{con-Ritz}
Assume that $\mu$ is simple. Then $\widetilde{x}$ is unique; if $\sigma_{\min}(C(\lambda_*))>0$, we have
\begin{equation}\label{sin-x1}
\sin\angle(x_{*},\widetilde{x})\leq\bigg(1+\frac{\| T(\lambda_{*})\|}{\sqrt{1-\varepsilon^{2}}\sigma_{\min}(C(\lambda_*))}\bigg)
\varepsilon+O(|\mu-\lambda_{*}|).
\end{equation}
\end{theorem}
\begin{proof} Note that the residual
$r=B(\lambda_{*})\widehat{u}$ in (\ref{err}) satisfies
$$\| r\|\leq\frac{\varepsilon}{\sqrt{1-\varepsilon^{2}}}\| T(\lambda_{*})\|.$$
Then applying Theorem \ref{con-eigenpair} to $B(\mu)z=0$ and the residual
norm $\|r\|=\|B(\lambda_{*})\widehat{u}\|$ with $(\lambda_*,\widehat{u})$ being an
approximate eigenpair of $B(\cdot)$, we obtain
$$
\sin\angle(z,\widehat{u})\leq\frac{\| T(\lambda_{*})\|\varepsilon}{\sqrt{1-\varepsilon^{2}}\sigma_{\min}(C(\lambda_*))}
+O(|\mu-\lambda_{*}|).
$$
Since $\widetilde{x}=Wz$, $\widehat{u}=W^Hx_*/\sqrt{1-\varepsilon^2}$
and $P_{\mathcal{W}}=WW^H$, the above relation means
$$
\sin\angle(\tilde{x},P_{\mathcal{W}}x_{*})\leq\dfrac{\| T(\lambda_{*})\|\varepsilon}{\sqrt{1-\varepsilon^{2}}\sigma_{\min}(C(\lambda_*))}
+O(|\mu-\lambda_{*}|)
$$
due to the orthonormality of $W$. Since
$$
\angle(x_{*},\tilde{x})\leq\angle(x_{*},P_{\mathcal{W}}x_{*})
+\angle(P_{\mathcal{W}}x_{*},\tilde{x})
=\angle(x_{*},\mathcal{W})+\angle(P_{\mathcal{W}}x_{*},\tilde{x}),
$$
by \eqref{devia} we obtain
\begin{eqnarray*}
\sin\angle(x_{*},\tilde{x})&\leq&
\sin\angle(x_{*},\mathcal{W})+\sin\angle(P_{\mathcal{W}}x_{*},\tilde{x})\\
&\leq& \bigg(1+\dfrac{\| T(\lambda_{*})\|}{\sqrt{1-\varepsilon^{2}}\sigma_{\min}(C(\lambda_*))}\bigg)
\varepsilon+O(|\mu-\lambda_{*}|).
\end{eqnarray*}
~
\end{proof}
This theorem provides sufficient conditions for the convergence of
the Ritz vector $\widetilde{x}$, which require that (i) the
Ritz value $\mu\rightarrow\lambda_{*}$ and (ii) $\sigma_{\min}(C(\lambda_*))$
be uniformly bounded away from zero as $\varepsilon\rightarrow 0$.
As we have commented before the theorem, there is no theoretical guarantee that condition (ii) is satisfied. If $\sigma_{\min}(C(\lambda_*))$ is small, say
no larger than $O(\varepsilon)$, then $\widetilde{x}$ may have no accuracy at all.
Next we investigate the convergence of the refined Ritz vector $\widehat{x}$,
and prove that it unconditionally converges to $x_*$ provided that
$\mu\rightarrow\lambda_*$ as $\varepsilon\rightarrow 0$.
For a converging Ritz value $\mu$, we make a Schur-like decomposition of $T(\mu)$ similar to \eqref{schurlike}, where the $(1,1)$ entry is small but nonzero, the $(2,2)$ block is written as $L(\mu)$, and the first column of the unitary transformation is $x_{*}$, so that $L(\mu)\rightarrow L(\lambda_*)$ as $\mu\rightarrow\lambda_*$. Since $T(\lambda)$ is analytic with respect to $\lambda\in \Omega$, the matrix $L(\lambda)$ is analytic with respect to $\lambda$ too. Therefore,
for $\mu$ sufficiently close to $\lambda_*$, by the Taylor expansion we obtain
\begin{equation}\label{sig-0}
L(\mu)=L(\lambda_{*})+L^{\prime}(\lambda_{*})(\mu-\lambda_{*})+(\mu-\lambda_{*})^{2}R_1(\mu,\lambda_{*}),
\end{equation}
where $R_1(\mu,\lambda_{*})$ is a matrix whose norm is uniformly bounded
with respect to $\mu$:
$\| R_1(\mu,\lambda_{*})\|\leq \beta$ with $\beta>0$ being a constant.
\begin{theorem}\label{con-Refined-vector} Assume that $|\mu-\lambda_{*}|$ is small such that
\begin{equation}\label{assumr}
\sigma_{\min}(L(\lambda_{*}))-\| L^{\prime}(\lambda_{*})\| |\mu-\lambda_{*}|-\beta |\mu-\lambda_{*}|^{2}>0.
\end{equation}
Then
\begin{eqnarray}
\| T(\mu)\widehat{x}\|&\leq&\frac{\| T(\mu)\|\varepsilon+\|T^{\prime}(\lambda_*)\| |\mu-\lambda_{*}|}{\sqrt{1-\varepsilon^{2}} }+O(|\mu-\lambda_*|^2),
\label{rres}\\
\sin\angle(x_{*},\widehat{x})&\leq&\frac{\| T(\mu)\|\varepsilon+\|T^{\prime}(\lambda_*)\||\mu-\lambda_*|}
{\sqrt{1-\varepsilon^{2}}\sigma_{\min}(L(\mu))}+O(|\mu-\lambda_{*}|^2).
\label{errorrefine}
\end{eqnarray}
\end{theorem}
\begin{proof}
Decompose $x_*$ as the orthogonal direct sum:
$$
x_{*}=\frac{P_{\mathcal{W}}x_{*}}{\| P_{\mathcal{W}}x_{*}\|}\cos\angle(x_{*},\mathcal{W})+\frac{(I-P_{\mathcal{W}})x_{*}}{\| (I-P_{\mathcal{W}})x_{*}\|}\sin\angle(x_{*},\mathcal{W}),
$$
and note that $\sin\angle(x_{*},\mathcal{W})=\varepsilon$ and
$\cos\angle(x_{*},\mathcal{W})=\sqrt{1-\varepsilon^2}$.
Then
\begin{eqnarray*}
\bigg\|T(\mu)\frac{P_{\mathcal{W}}x_{*}}{\| P_{\mathcal{W}}x_{*}\|}\bigg\| &=& \bigg\| \frac{T(\mu)}{\sqrt{1-\varepsilon^{2}}}\bigg(x_{*}-\frac{(I-P_{\mathcal{W}})x_{*}}{\| (I-P_{\mathcal{W}})x_{*}\|}\varepsilon\bigg)\bigg\| \\
&\leq& \frac{1}{\sqrt{1-\varepsilon^{2}}}\big(\| T(\mu)x_{*}\|+\| T(\mu)\|\varepsilon\big).
\end{eqnarray*}
On the other hand, since $T(\lambda)$ is analytic with respect to $\lambda\in \Omega$, for a converging Ritz value $\mu$, by the Taylor expansion we obtain
\begin{equation}\label{sig-1}
T(\mu)=T(\lambda_{*})+T^{\prime}(\lambda_{*})(\mu-\lambda_{*})+(\mu-\lambda_{*})^{2}R_2(\mu,\lambda_{*}),
\end{equation}
where $R_2(\mu,\lambda_{*})$ is a matrix whose norm is uniformly bounded with respect to $\mu$, i.e.,
$\| R_2(\mu,\lambda_{*})\|\leq \gamma$ for a constant $\gamma>0$.
From $T(\lambda_{*})x_{*}=0$ and \eqref{sig-1}, we obtain
\begin{eqnarray*}
\| T(\mu)x_{*}\| &=& \|( T(\mu)-T(\lambda_{*}))x_{*} \|\\
&\leq& |\mu-\lambda_*|\|T^{\prime}(\lambda_*)\|+O(|\mu-\lambda_*|^2).
\end{eqnarray*}
Therefore, by definition (\ref{Refin}) of the refined Ritz vector
$\widehat{x}$, we obtain
\begin{equation*}
\| T(\mu)\widehat{x}\|\leq\bigg\| T(\mu)\frac{P_{\mathcal{W}}x_{*}}{\| P_{\mathcal{W}}x_{*}\|}\bigg\|\leq\frac{\| T(\mu)\|\varepsilon+\|T^{\prime}(\lambda_*)\| |\mu-\lambda_{*}|}{\sqrt{1-\varepsilon^{2}} }+O(|\mu-\lambda_*|^2),
\end{equation*}
which proves \eqref{rres}.
From (\ref{sig-0}), by the standard perturbation theory of singular values,
we obtain
$$
\sigma_{\min}(L(\mu))\geq\sigma_{\min}(L(\lambda_{*}))-\| L^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|-\beta|\mu-\lambda_{*}|^{2}.
$$
As a result, under assumption \eqref{assumr}, applying
Theorem \ref{con-eigenpair} to $\|T(\mu)\widehat{x}\|$, from
\eqref{rres} we obtain
\begin{eqnarray*}
\sin\angle(x_{*},\widehat{x}) &\leq& \frac{\| T(\mu)\|\varepsilon+\|T^{\prime}(\lambda_*)\|
|\mu-\lambda_{*}|}{\sqrt{1-\varepsilon^{2}}\sigma_{\min}
(L(\mu))}+O(|\mu-\lambda_{*}|^2),
\end{eqnarray*}
which is \eqref{errorrefine}.
\end{proof}
Theorem \ref{con-Refined-vector} indicates that the
residual norm $\|T(\mu)\widehat{x}\|\rightarrow 0$ and $\widehat{x}\rightarrow x_{*}$ provided that $\mu\rightarrow \lambda_{*}$ as $\varepsilon\rightarrow 0$. Recall from Theorem~\ref{con-Ritz} that the convergence of the Ritz vector $\widetilde{x}$ requires that $\sigma_{\min}(C(\lambda_*))$ be uniformly bounded away from zero, which can be arbitrarily small whenever $\mu$ is a multiple Ritz value or is close to some other Ritz values. In contrast,
the situation is fundamentally different for the refined Ritz vector
$\widehat{x}$ since
$\sigma_{\min}(L(\mu))$ must be uniformly positive for a simple $\lambda_*$ as
$\sigma_{\min}(L(\mu))\rightarrow\sigma_{\min}(L(\lambda_*))>0$ when $\mu\rightarrow\lambda_*$.
Next we further compare the Ritz vector and the refined Ritz vector, and get more insight into them.
For the simple $\lambda_*$, the following result shows that $\sigma_{\min}(T(\mu))$ is a simple singular value of $T(\mu)$ when $\mu$ is close enough to $\lambda_{*}$, which will be exploited to
establish the uniqueness of the refined Ritz vector $\widehat{x}$.
\begin{lemma}\label{unique-rR-value} Assume that
$\lambda_*$ is a simple eigenvalue of $T(\lambda)x=0$. If $\mu$ is sufficiently close to $\lambda_{*}$ such that
\begin{equation}\label{assums}
\| T^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|+\gamma|\mu-\lambda_{*}|^{2}
<\dfrac{1}{2}\sigma_{2}(T(\lambda_{*})),
\end{equation}
where $\sigma_{2}(T(\lambda_{*}))$ is the second smallest singular value of
$T(\lambda_{*})$, then $\sigma_{\min}(T(\mu))$ is simple.
\end{lemma}
\begin{proof}
Under the assumption, $\sigma_{\min}(T(\lambda_{*}))=0$ is a simple singular value of $T(\lambda_{*})$, meaning that
$$\sigma_{2}(T(\lambda_{*}))>\sigma_{\min}(T(\lambda_{*}))=0.$$
From (\ref{sig-1}), we obtain
$$\| T(\mu)-T(\lambda_{*})\|\leq \| T^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|+\gamma|\mu-\lambda_{*}|^{2}.$$
Therefore, by the standard perturbation theory,
the $i$th smallest singular values $\sigma_i(\cdot)$ of $T(\mu)$ and $T(\lambda_*)$ satisfy
$$|\sigma_{i}(T(\mu))-\sigma_{i}(T(\lambda_{*}))|\leq\| T^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|+\gamma|\mu-\lambda_{*}|^{2},$$
from which it follows that
\begin{equation}\label{rR1}
\sigma_{2}(T(\mu))\geq\sigma_{2}(T(\lambda_{*}))-\| T^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|-\gamma|\mu-\lambda_{*}|^{2}
\end{equation}
and
\begin{equation}\label{sigma1}
\sigma_{\min}(T(\mu))=\sigma_1(T(\mu))\leq\| T^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|+\gamma|\mu-\lambda_{*}|^{2}
<\frac{1}{2}\sigma_{2}(T(\lambda_{*})).
\end{equation}
On the other hand, from (\ref{assums}) and (\ref{rR1}) we obtain
$$\sigma_{2}(T(\mu))>\frac{1}{2}\sigma_{2}(T(\lambda_{*})),$$
which, together with \eqref{sigma1}, shows that
$\sigma_{\min}(T(\mu))$ is a simple singular value of $T(\mu)$.
\end{proof}
Based on Lemma \ref{unique-rR-value}, we can prove that, unlike the Ritz vector, the refined Ritz vector is unique, provided that $\mu\rightarrow\lambda_*$ as $\varepsilon\rightarrow 0$.
\begin{theorem}\label{unique-rR-vector}
Let $\widehat{\sigma}_1\leq\widehat{\sigma}_2\leq\cdots\leq \widehat{\sigma}_m$
be the singular values of $T(\mu)W$ with $m$ the number of columns of $W$, and
assume that $\lambda_*$ is a simple eigenvalue of $T(\lambda)x=0$ and that $\mu$ is sufficiently close to $\lambda_*$ such that $\widehat{\sigma}_{1}<\dfrac{1}{2}\sigma_{2}(T(\lambda_{*}))-
\|T^{\prime}(\lambda_{*})\| |\mu-\lambda_{*}|$ and $\sigma_2(T(\lambda_{*}))>2\gamma|\mu-\lambda_{*}|^{2}$.
Then $\widehat{\sigma}_{1}$ is simple, and $\widehat{x}$ is unique.
\end{theorem}
\begin{proof}
In terms of the eigenvalue interlacing property of the
Hermitian matrix
$$
\begin{pmatrix}
W^{H} \\
W_{\bot}^{H} \\
\end{pmatrix}T^{H}(\mu)T(\mu)\begin{pmatrix}
W & W_{\bot} \\
\end{pmatrix}=\begin{pmatrix}
W^{H}T(\mu)^{H}T(\mu)W & W^{H}T(\mu)^{H}T(\mu)W_{\bot}\\
W_{\bot}^{H}T(\mu)^{H}T(\mu)W & W_{\bot}^{H}T(\mu)^{H}T(\mu)W_{\bot} \\
\end{pmatrix},$$
we have $\widehat{\sigma}_{2}\geq\sigma_{2}(T(\mu))$. Then from (\ref{rR1}) and the assumptions we obtain
\begin{align*}
\widehat{\sigma}_{2}-\widehat{\sigma}_{1}&\geq\sigma_{2}(T(\mu))-\widehat{\sigma}_{1}\\
&\geq\sigma_{2}(T(\lambda_{*}))
-\|T^{\prime}(\lambda_{*})\|\,|\mu-\lambda_{*}|-\gamma |\mu-\lambda_*|^2\\
&\ \ \ -\frac{1}{2}\sigma_{2}(T(\lambda_{*}))+\|T^{\prime}(\lambda_{*})\|\,
|\mu-\lambda_{*}|\\
&\geq \frac{1}{2}\sigma_{2}(T(\lambda_{*}))-\gamma |\mu-\lambda_*|^2>0.
\end{align*}
Therefore, $\widehat{\sigma}_{1}$ is simple, and $\widehat{x}$ is unique.
\end{proof}
Notice that, by definition, $\widehat{\sigma}_1=\|T(\mu)\widehat{x}\|$.
Relation~\eqref{rres} shows that $\widehat{\sigma}_1\rightarrow 0$ as
$\mu\rightarrow\lambda_*$. Therefore, the conditions on $\widehat{\sigma}_1$
and $\sigma_2(T(\lambda_*))$ are definitely met for $\mu$ sufficiently close
to the simple eigenvalue $\lambda_*$. This theorem generalizes Theorem 2.2 of \cite{Jia2004} to the NEP case.
We now construct an example to illustrate our results on the Ritz and refined Ritz vectors.
\begin{example}
Consider the REP $T(\lambda)x=0$ with
$$T(\lambda)=\begin{pmatrix}
\lambda & 1 & \lambda^{2} \\
1 & \lambda & 0 \\
0 & 0 & \frac{\lambda}{\lambda-1} \\
\end{pmatrix}.
$$
\end{example}
Since $\text{det}(T(\lambda))=\lambda(\lambda+1)$, $T(\cdot)$ has two
eigenvalues, $-1$ and $0$, each of which has algebraic and geometric multiplicity one (cf. \cite{Guttel}
for the definition of the algebraic and geometric multiplicities of an eigenvalue).
Suppose that we want to compute the eigenvalue $\lambda_{*}=0$ and the associated eigenvector $x_*=e_3=(0,0,1)^T$. We generate the subspace $\mathcal{W}$ by the orthonormal matrix
$$W=\begin{pmatrix}
0 & 1 \\
0 & 0\\
1 & 0 \\
\end{pmatrix},
$$
which contains the desired $x_*$ exactly, i.e., $\varepsilon=0$.
The resulting projected matrix-valued function is
$$B(\lambda)=W^{H}T(\lambda)W=\begin{pmatrix}
\frac{\lambda}{\lambda-1} & 0 \\
\lambda^{2} & \lambda \\
\end{pmatrix}.
$$
Since $\text{det}(B(\lambda))=\frac{\lambda^{2}}{\lambda-1}$, the only eigenvalue of $B(\cdot)$, i.e.,
the only Ritz value, is $\mu=0$, whose
algebraic and geometric multiplicities are both two.
Clearly, $\mu$ equals the desired simple eigenvalue $\lambda_{*}=0$ of $T(\cdot)$. However,
observe that any nonzero vector $z$ satisfies $B(0)z=0$, meaning that any nonzero vector is an eigenvector of $B(0)$. Therefore, the Ritz vector $\widetilde{x}=Wz$ is not unique, and there are two linearly independent ones, each of which is formally an approximation to $x_*=e_3$, causing the
failure of the Rayleigh--Ritz method because $B(0)$ itself does not give us any clue to which vector should be chosen. For instance, we might choose $z=(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})^{T}$, in which case we get a Ritz vector $\widetilde{x}=(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}})^{T}$, a completely meaningless approximation to $e_3$, and the residual norm
$\|T(0)\widetilde{x}\|=\frac{1}{\sqrt{2}}$.
In contrast, the matrix
$$T(0)W=\begin{pmatrix}
0 & 0 \\
0 & 1 \\
0 & 0 \\
\end{pmatrix}
$$
is column rank deficient and has rank one. It is easily verified that the right singular vector associated with its smallest singular value zero is uniquely $y=(1,0)^{T}$. Therefore, the refined Ritz vector $\widehat{x}=Wy$ is unique and precisely equals the desired eigenvector $e_{3}$. Naturally, the residual norm $\|T(0)\widehat{x}\|=0$. This example demonstrates the superiority of the refined Ritz vector over the Ritz vector.
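The claims of this example are easy to check numerically; a short Python verification (self-contained and illustrative):
\begin{verbatim}
import numpy as np

def T(lam):
    return np.array([[lam, 1.0, lam**2],
                     [1.0, lam, 0.0],
                     [0.0, 0.0, lam / (lam - 1.0)]])

W = np.array([[0.0, 1.0],
              [0.0, 0.0],
              [1.0, 0.0]])
mu = 0.0
print(W.T @ T(mu) @ W)      # B(0) is the 2x2 zero matrix
U, s, Vh = np.linalg.svd(T(mu) @ W)
print(s)                    # singular values 1 and 0
print(W @ Vh[-1])           # refined Ritz vector: e_3 up to sign
\end{verbatim}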
\section{Bounds for the error of the Ritz vector and the refined Ritz vector}\label{sec:bound}
In this section, we continue to explore the Ritz vector and the refined Ritz vector,
derive lower and upper bounds for $\sin\angle(\widetilde{x},\widehat{x})$, and shed more light on these two vectors and the implications of the bounds on the residual norms obtained by the Rayleigh--Ritz method and the refined
Rayleigh--Ritz method.
Recall that the refined Ritz vector $\widehat{x}=Wy$ with $y$ being the right singular vector of $T(\mu)W$ associated with its smallest singular value $\widehat{\sigma}_{1}$, and keep in mind the Ritz vector $\widetilde{x}=Wz$ and \eqref{schurB}. We can establish the following results.
\begin{theorem}\label{bound-R} With decomposition \eqref{schurB}, if $\sigma_{\min}(C(\mu))>0$, i.e., $\mu$ is a simple eigenvalue
of $B(\cdot)$, then
\begin{equation}\label{bound-Sin}
\frac{\widehat{\sigma}_{1}\| (WZ_{\bot})^{H}s\|}{\sigma_{\max}(C(\mu))}\leq
\sin\angle(\widetilde{x},\widehat{x})\leq\frac{\widehat{\sigma}_{1}\| W^{H}s\|}{\sigma_{\min}(C(\mu))},
\end{equation}
where $\sigma_{\max}(C(\mu))$ is the largest singular value of $C(\mu)$ and
$s$ is the left singular vector of $T(\mu)W$ associated with its smallest singular value $\widehat{\sigma}_{1}$.
\end{theorem}
\begin{proof}
Since $\widehat{\sigma}_{1}$ is the smallest singular value of $T(\mu)W$, we have
\begin{equation}\label{By}
B(\mu)y=W^{H}T(\mu)Wy=\widehat{\sigma}_{1}W^{H}s.
\end{equation}
By \eqref{schurB} and \eqref{By} we obtain
\begin{eqnarray*}
\begin{pmatrix}
z^{H} \\
Z_{\bot}^{H} \\
\end{pmatrix}B(\mu)\begin{pmatrix}
z & Z_{\bot} \\
\end{pmatrix}\begin{pmatrix}
z^{H} \\
Z_{\bot}^{H} \\
\end{pmatrix}y
&=& \begin{pmatrix}
0 & z^{H}B(\mu)Z_{\bot} \\
0 & C(\mu) \\
\end{pmatrix}\begin{pmatrix}
z^{H} \\
Z_{\bot}^{H} \\
\end{pmatrix}y
\\
&=& \begin{pmatrix}
z^{H}B(\mu)Z_{\bot}Z_{\bot}^Hy \\
C(\mu)Z_{\bot}^{H}y \\
\end{pmatrix}\\
&=&\begin{pmatrix}
\widehat{\sigma}_{1}z^{H}W^{H}s \\
\widehat{\sigma}_{1}Z_{\bot}^{H}W^{H}s \\
\end{pmatrix},
\end{eqnarray*}
where the last equality follows from \eqref{By}. Therefore,
$$
C(\mu)Z_{\bot}^{H}y=\widehat{\sigma}_{1}Z_{\bot}^{H}W^{H}s.
$$
By the orthonormality of $W$ and the above relation, we obtain
\begin{equation}\label{l-u-s}
\sin\angle(\widetilde{x},\widehat{x})=\sin\angle(z,y)=\| Z_{\bot}^{H}y\|=\widehat{\sigma}_{1}\| C^{-1}(\mu)(WZ_{\bot})^{H}s\|,
\end{equation}
from which it follows that
\begin{equation}\label{l-u}
\frac{\widehat{\sigma}_{1}\| (WZ_{\bot})^{H}s\|}{\sigma_{\max}(C(\mu))}\leq
\sin\angle(\widetilde{x},\widehat{x})\leq \frac{\widehat{\sigma}_{1}\| (WZ_{\bot})^{H}s\|}{\sigma_{\min}(C(\mu))}\leq
\frac{\widehat{\sigma}_{1}\| W^{H}s\|}{\sigma_{\min}(C(\mu))}.
\end{equation}
~
\end{proof}
Since $\widehat{\sigma}_1\rightarrow 0$ as $\mu\rightarrow \lambda_*$ for $\varepsilon\rightarrow 0$, the upper bound in \eqref{bound-Sin} shows that the uniform positivity of $\sigma_{\min}(C(\mu))$ is a sufficient condition for $\widetilde{x}\rightarrow\widehat{x}$. This uniform condition is essentially in accordance with that in Theorem~\ref{con-Ritz}, which ensures the convergence of $\widetilde{x}$.
The lower bound tends to zero provided that $\widehat{\sigma}_1\rightarrow 0$. Therefore, generally $\sin\angle(\widetilde{x},\widehat{x})=0$ if and only if $\widehat{\sigma}_{1}=0$; in other words, if $\widehat{x}\not=x_*$, then generally $\widetilde{x}\neq\widehat{x}$. This theorem is an extension of Theorem 3.1 in \cite{Jia2004}.
Denote $\widetilde{r}=T(\mu)\widetilde{x}$ and $\widehat{r}=T(\mu)\widehat{x}$. By definition, we trivially have $\|\widehat{r}\|\leq\|\widetilde{r}\|$.
The following theorem gives more insightful relationships between them.
\begin{theorem}\label{normR}
Let $\widehat{\sigma}_{1}\leq\widehat{\sigma}_{2}\leq\cdots\leq\widehat{\sigma}_{m}$ be the
singular values of $T(\mu) W$. The following results hold:
\begin{equation}\label{rr1}
\cos^{2}\angle(\widetilde{x},\widehat{x})+
\bigg(\frac{\widehat{\sigma}_{2}}{\widehat{\sigma}_{1}}\bigg)^{2}\sin^{2}\angle(\widetilde{x},\widehat{x})\leq\frac{\| \widetilde{r}\|^{2}}{\| \widehat{r}\|^{2}}\leq\cos^{2}\angle(\widetilde{x},\widehat{x})+
\bigg(\frac{\widehat{\sigma}_{m}}{\widehat{\sigma}_{1}}\bigg)^{2}\sin^{2}\angle(\widetilde{x},\widehat{x}).
\end{equation}
\end{theorem}
\begin{proof} Note $\| \widehat{r}\|=\widehat{\sigma}_{1}$. The proof is completely similar to that of Theorem 4.1 in \cite{Jia2004} and is thus omitted.
\end{proof}
By Theorem \ref{normR}, we may have $\|\widehat{r}\|\ll\|\widetilde{r}\|$ because the upper bound for $\dfrac{\| \widetilde{r}\|}{\| \widehat{r}\|}$
in (\ref{rr1}) may be much bigger than one, as the previous example has illustrated. This is because $\sin\angle(\widetilde{x},\widehat{x})$ may tend to zero much more slowly than $\widehat{\sigma}_{1}$; indeed, it may not be small and can even be arbitrarily close to one, as Theorem~\ref{bound-R} indicates.
In practice it is rare that the subspace $\mathcal{W}$ contains the desired
eigenvector exactly. Let us further investigate Example \ref{ex1} and show that the Rayleigh--Ritz method may fail and that
$\dfrac{\| \widetilde{r}\|}{\| \widehat{r}\|}\gg 1$ for $\varepsilon$ sufficiently small.
\begin{example} In Example \ref{ex1}, perturb $W$ by a random normal deviation matrix
with standard deviation of $10^{-4}$ and orthonormalize $W$.
Then $\varepsilon=O(10^{-4})$, and
the projected matrix-valued function is
$$B(\lambda)=\frac{1}{\lambda-1}\begin{pmatrix}
b_{11}(\lambda) & b_{12}(\lambda) \\
b_{21}(\lambda) & b_{22}(\lambda) \\
\end{pmatrix}
$$
with $b_{11}(\lambda)=(8.8849e-05)\lambda^{3}-(8.8828e-05)\lambda^{2}+\lambda+2.0385e-08$,
$b_{12}(\lambda)=(7.8972e-09)\lambda^{3}-(8.8891e-05)\lambda^{2}+(2.9251e-04)\lambda-1.1475e-04$,
$b_{21}(\lambda)=-\lambda^{3}+\lambda^{2}+(2.9251e-04)\lambda-1.1475e-04$, and $b_{22}(\lambda)=-(8.8883e-05)\lambda^{3}+1.0001\lambda^{2}-1.0006\lambda+5.8885e-04$.
Notice that, in the numerator of $\det(B(\lambda))$,
the coefficients of $\lambda^6$ and $\lambda^5$ are exactly zero.
The eigenvalues of $B(\lambda)$ are $7.3993e+03$, $-4.0016e+03$, $5.6570e-04$ and $2.3256e-05$.
According to Algorithm 1, we take $2.3256e-05$, the smallest in magnitude,
to approximate the desired eigenvalue zero. Then the associated Ritz vector is
$$\widetilde{x}=(1.9873e-01,5.3892e-05,-9.8005e-01)^{T},$$
which is an approximation to the desired $x_*=e_{3}$ with little accuracy, since
$\sin\angle(\widetilde{x},e_{3})=1.9873e-01=O(1)$; the corresponding residual norm is $\|\widetilde{r}\|=\|T(\mu)\widetilde{x}\|=1.9873e-01=O(1)$.
In contrast, the refined Ritz vector is
$$\widehat{x}=(-2.8435e-08,-1.1469e-04,1)^{T},$$
which is an excellent approximation to the eigenvector $e_{3}$ with
accuracy $\sin\angle(\widehat{x},e_{3})=1.1469e-04=O(\varepsilon)$, and the
residual norm is $\|\widehat{r}\|=\|T(\mu)\widehat{x}\|=1.1703e-04=O(\varepsilon)$.
Based on the above results, we have $\sin\angle(\widetilde{x},\widehat{x})=1.9872e-01$ and the ratio
$$
\dfrac{\|\widehat{r}\|}{\|\widetilde{r}\|}=5.8887e-04=O(\varepsilon).
$$
Note that $\widehat{\sigma}_{1}=1.1703e-04$. Then we see that the lower and upper
bounds in (\ref{l-u}) are very close to each other and estimate $\sin\angle(\widetilde{x},\widehat{x})$ very accurately.
As a matter of fact, for this example, if we perturb $W$ by a random matrix with normally distributed entries of an arbitrarily small standard deviation $\varepsilon$, then we always have $\sin\angle(\widetilde{x},e_{3})=O(1)$ and $\|\widetilde{r}\|=O(1)$ but $\sin\angle(\widehat{x},e_{3})=O(\varepsilon)$ and $\|\widehat{r}\|=O(\varepsilon)$. This demonstrates that the refined Rayleigh--Ritz method may work much better than the Rayleigh--Ritz method, and that the latter may produce very poor approximate
eigenvectors.
\end{example}
\section{Conclusion}\label{sec:concl}
In this paper, we have studied the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for computing a simple eigenpair of \eqref{NEP}. We have established
a priori error bounds for the Ritz value, the Ritz vector and the refined Ritz vector, and given sufficient conditions for their convergence, respectively. We have also derived lower and upper bounds for the errors of the Ritz vector and the refined Ritz vector, as well as for the ratio of the residual norms obtained by the two methods. We have constructed a few examples to illustrate the theoretical results and the merits of the refined Ritz vector. These results nontrivially extend those in \cite{Jia2004,Jia1999,Jia2001} for the linear eigenvalue problem to the NEP. They show that the refined Ritz vector is unique and can be much more accurate than the Ritz vector;
consequently, the residual norm obtained by the refined Rayleigh--Ritz method can be much smaller than that obtained by the Rayleigh--Ritz method.
The results provide the necessary theoretical support for these two kinds of projection methods for the numerical solution of large NEPs.
| {
"timestamp": "2022-12-02T02:08:35",
"yymm": "2212",
"arxiv_id": "2212.00302",
"language": "en",
"url": "https://arxiv.org/abs/2212.00302",
"abstract": "We establish a general convergence theory of the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for computing some simple eigenpair ($\\lambda_{*},x_{*}$) of a given analytic nonlinear eigenvalue problem (NEP). In terms of the deviation $\\varepsilon$ of $x_{*}$ from a given subspace $\\mathcal{W}$, we establish a priori convergence results on the Ritz value, the Ritz vector and the refined Ritz vector, and present sufficient convergence conditions for them. The results show that, as $\\varepsilon\\rightarrow 0$, there is a Ritz value that unconditionally converges to $\\lambda_*$ and the corresponding refined Ritz vector does so too but the Ritz vector may fail to converge and even may not be unique. We also present an error bound for the approximate eigenvector in terms of the computable residual norm of a given approximate eigenpair, and give lower and upper bounds for the error of the refined Ritz vector and the Ritz vector as well as for that of the corresponding residual norms. These results nontrivially extend some convergence results on these two methods for the linear eigenvalue problem to the NEP. Examples are constructed to illustrate some of the results.",
"subjects": "Numerical Analysis (math.NA)",
"title": "An analysis of the Rayleigh-Ritz and refined Rayleigh-Ritz methods for nonlinear eigenvalue problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822876992225169,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7089594794227371
} |
https://arxiv.org/abs/2106.10023 | Spanning $F$-cycles in random graphs | We extend a recent argument of Kahn, Narayanan and Park (Proceedings of the AMS, to appear) about the threshold for the appearance of the square of a Hamilton cycle to other spanning structures. In particular, for any spanning graph, we give a sufficient condition under which we may determine its threshold. As an application, we find the threshold for a set of cyclically ordered copies of $C_4$ that span the entire vertex set, so that any two consecutive copies overlap in exactly one edge and all overlapping edges are disjoint. This answers a question of Frieze. We also determine the threshold for edge-overlapping spanning $K_r$-cycles. | \section{Introduction}\label{sec:intro}
The study of threshold functions for the appearance of spanning structures plays an important role in the theory of random graphs.
Unlike in the case of small subgraphs, which was resolved by \citet{ErdRen60} (for balanced graphs) and by \citet{Bol81} (for general graphs), in the case of general spanning structures only sufficient conditions are known.
These lead to upper bounds for the threshold of a general spanning graph, although the expectation threshold conjecture of \citet{KK07}, if true, predicts the threshold for \emph{any} graph up to a logarithmic factor.
Apart from particular structures where the thresholds are known, such as perfect matchings~\cite{ErdRen66}, $F$-factors~\cite{JohKahVu08}, Hamilton cycles~\cite{Kor77,Pos76} or spanning trees~\cite{Montgomery19} (to name a few), the most general result providing upper bounds was, until recently, due to \citet{Ri00}, giving in some cases asymptotically optimal upper bounds (lattices, hypercubes, $k$-th powers of Hamilton cycles for $k\ge 3$~\cite{KO12}). An excellent survey by \citet{Boe17} provides references to many other results, in particular algorithmic ones.
The recent breakthrough work by \citet{FKNP19} established the fractional expectation threshold conjecture of \citet{Tal10}, providing in many cases optimal thresholds or being off by at most a logarithmic factor.
The subsequent work by \citet{KNP20} exploited the proof approach of~\cite{FKNP19} in a more efficient way, making it possible to remove the logarithmic factor in the case of the square of a Hamilton cycle, and thus proving that the threshold for its appearance is $n^{-1/2}$.
In a recent paper, \citet{Frieze20} studied thresholds for the containment of spanning $K_r$-cycles, i.e., cyclically ordered edge-disjoint copies of $K_r$ with two consecutive copies sharing a vertex.
He proved that the optimal threshold is of the form $n^{-2/r}\log^{1/\binom{r}{2}}n$ by reducing this problem to another result of Riordan about coupling the random graph with the random $r$-uniform hypergraph~\cite{Ri18} (see also the work of \citet{Hec18} for the triangle case).
Frieze also raised the question about the threshold for the containment of a spanning $C_4$-cycle, where the copies of $C_4$ are ordered cyclically and two consecutive cycles overlap in exactly one edge, whereby each cycle $C_4$ overlaps with two copies of $C_4$ in opposite edges (there are some possible variations, but this would be a canonically defined structure).
Such a $C_4$-cycle is referred to in~\cite{Frieze20} as a $C_4$-cycle with overlap $2$; it is also observed there that the threshold for its appearance is at most $ n^{-2/3} \log n$, which follows from~\cite{FKNP19}.
The purpose of this paper is to contribute to the large body of work on thresholds for spanning structures by establishing thresholds for spanning $2$-overlapping $C_4$-cycles (which we denote by $C^{e}_{4,n}$), thus answering the question of Frieze~\cite{Frieze20}, and also for $2$-overlapping $K_r$-cycles (defined below) for $r\geq4$.
Both structures cannot be handled directly by the results in~\cite{FKNP19, Ri00}.
In order to obtain these results, we generalise the approach of \citet{KNP20}.
As a result, we establish the following thresholds.
The first theorem answers the question of Frieze~\cite{Frieze20}.
\begin{theorem}\label{thm:Ctwo-threshold}
The threshold for the appearance of $C^{e}_{4,n}$ in $G(2n,p)$ is $\Theta(n^{-2/3})$.
\end{theorem}
Our second result generalises the recent work of \citet{KNP20} on the threshold for the square of a Hamilton cycle.
The square of a Hamilton cycle can be seen as the particular case $r=3$ of a structure which we call $2$-overlapping (or edge-overlapping) spanning $K_r$-cycle and denote by $K_{r,2,n}$, for $r\geq3$.
This consists of a set of cyclically ordered copies of $K_r$, where consecutive cliques share exactly one edge and, if $r\geq4$, any two non-consecutive cliques are vertex-disjoint\COMMENT{In the case $r=3$, this is impossible, and we enforce instead that each clique shares exactly one vertex with the clique following its consecutive one; this precisely defines the square of a Hamilton cycle.}.
\begin{theorem}\label{thm:Kr-cycle}
Let $r\ge 3$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Then, the threshold for the appearance of $K_{r,2,n}$ in $G(n,p)$ is $\Theta(n^{-2/(r+1)})$.
\end{theorem}
To prove \cref{thm:Ctwo-threshold,thm:Kr-cycle}, we state and prove a general lemma (the fragmentation lemma, \cref{lem:fragmentation}), which has potential to handle more spanning structures.
This lemma is a generalisation of the work of Kahn, Narayanan and Park on the square of a Hamilton cycle~\cite[Lemma~3.1]{KNP20} to handle structures for which constantly many rounds of exposure may be necessary, in contrast to~\cite{KNP20}, where only two rounds are used, and to~\cite{FKNP19}, where logarithmically many rounds are necessary.
The organisation of the paper is as follows.
In the next section, \cref{sec:fragmentation}, we provide the main definitions, state a general lemma (the fragmentation lemma, \cref{lem:fragmentation}), and use it to establish a general theorem (\cref{thm:main}) about thresholds for certain spanning graphs.
\Cref{thm:main} is actually the main general result of the paper, and \cref{thm:Ctwo-threshold,thm:Kr-cycle} are two of its applications.
We prove these two applications in \cref{sec:special_cases}.
Finally, in \cref{sec:conclude} we collect a few remarks, and in the Appendix we provide the proof of \cref{lem:fragmentation}.
\section{A general theorem for thresholds}\label{sec:fragmentation}
Given any real numbers $a$ and $b$, we write $[a,b]$ to refer to the set $\{n\in\mathbb{Z}:a\leq n\leq b\}$.
For an integer $n$, we often abbreviate $[n]\coloneqq[1,n]$.
We use standard $O$ notation for asymptotic statements.
A hypergraph $\mathcal{H}$ on the vertex set $V\coloneqq V(\mathcal{H})$ is a subset of the power set $2^V$.
The elements of $\mathcal{H}$ are referred to as edges.
The hypergraph $\mathcal{H}$ is said to be $r$-bounded if all its edges have cardinality at most $r$, and $r$-uniform if all the edges have exactly $r$ vertices.
Oftentimes, we will consider multihypergraphs $\mathcal{H}$ on $V$, where we view $\mathcal{H}$ as a multiset with elements from $2^V$.
To ease readability, we will often refer to multihypergraphs as hypergraphs.
We also omit floor and ceiling signs whenever they do not affect our asymptotic computations.
Following \cite{KNP20}, we say that a (multi-)hypergraph $\mathcal{H}$ is \emph{$q$-spread} if, for every $I\subseteq V(\mathcal{H})$, we have
\begin{equation*}\label{eq:spread-def}
|\mathcal{H}\cap \langle I\rangle|\le q^{|I|}|\mathcal{H}|,
\end{equation*}
where $\langle I\rangle\coloneqq \{J\subseteq V(\mathcal{H}):I\subseteq J\}$ and $\mathcal{H}\cap \langle I\rangle$ is the set of edges of $\mathcal{H}$ in $\langle I\rangle$ (with multiplicities if $\mathcal{H}$ is a multihypergraph).
The \emph{spreadness} of $\mathcal{H}$ is the minimum $q$ such that $\mathcal{H}$ is $q$-spread.
Let $S\in \mathcal{H}$ and $X\subseteq V(\mathcal{H})$.
For any $J\in \mathcal{H}$ such that $J\subseteq S\cup X$, we call the set $J\setminus X$ an \emph{$(S, X)$-fragment}.
Given some $k\in \mathbb{N}$, we say that the pair $(S,X)$ is \emph{$k$-good} if some $(S,X)$-fragment has size at most $k$, and we say it is \emph{$k$-bad} otherwise.
More generally, let $\mathcal{H}_0$ be some $k_0$-bounded (multi-)hypergraph.
Let $k_0\geq k_1\geq \ldots\geq k_t$ be a sequence of integers and $X_1,\ldots,X_t$ be a sequence of subsets of $V(\mathcal{H}_0)$.
Then, we define a sequence of $k_i$-bounded multihypergraphs $\mathcal{H}_1,\ldots,\mathcal{H}_{t}$ inductively as follows.
Let $i\in[t]$, and assume the hypergraph $\mathcal{H}_{i-1}$ is already defined.
Then, consider each $S\in \mathcal{H}_{i-1}$ such that $(S,X_i)$ is a $k_i$-good pair, and let $\mathcal{H}_i$ be the multihypergraph which consists of one (arbitrary) $(S,X_i)$-fragment of size at most $k_i$ for each such $k_i$-good pair $(S,X_i)$.
That is, we define $\mathcal{G}_i\coloneqq\{S\in\mathcal{H}_{i-1}:(S,X_i)\text{ is }k_i\text{-good}\}$ and, for each $S\in\mathcal{G}_i$, $\mathcal{J}_i(S)\coloneqq\{J\setminus X_i:J\in\mathcal{H}_{i-1},J\subseteq S\cup X_i, |J\setminus X_i|\leq k_i\}$.
We then fix an arbitrary function $f_i\colon\mathcal{G}_i\to\bigcup_{S\in\mathcal{G}_i}\mathcal{J}_i(S)$ such that $f_i(S)\in\mathcal{J}_i(S)$ for every $S\in\mathcal{G}_i$ (for instance, we may simply pick the lexicographically smallest element in the set) and define
\[
\mathcal{H}_{i}\coloneqq \{f_i(S):S\in\mathcal{G}_i\}.
\]
We will refer to the sequence $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_t)$ as a \emph{fragmentation process} with respect to $(k_1,\ldots, k_t)$ and $(X_1,\ldots,X_t)$.
In our applications, we will let $X_1,\ldots,X_t$ be random subsets of $V(\mathcal{H}_0)$ and choose a suitable sequence $k_0,\ldots,k_t$ which will guarantee that the hypergraphs in the sequence do not become very small (with high probability).
Observe that the fragments at the $i$-th step of this process (that is, the edges of $\mathcal{H}_i$) correspond to subsets of the edges of $\mathcal{H}_0$ which have not been covered by the sets $X_1,\ldots,X_i$.
In particular, for all $i\in[t]$ and all $I\subseteq V(\mathcal{H}_0)$ we have that\COMMENT{Let $S\in\mathcal{H}_i\cap\langle I\rangle$, so in particular $S\in\mathcal{H}_i$. But then, as $\mathcal{H}_i$ was obtained by a fragmentation process, $S$ is a subset of some uniquely defined $S'\in\mathcal{H}_0$ .
Since $\langle I\rangle$ contains all supersets of $I$, we have that $S$ is a superset of $I$ and, thus, so is $S'$. Hence $S'\in\mathcal{H}_0\cap\langle I\rangle$.\\
Note about $S'$: say that $i=1$ (for larger $i$, the same holds in an iterative way). Then, $S'$ is \emph{not} chosen so that $S=S'\setminus X_1$, but rather as the set $S'$ from which we have chosen $S$ as an $(S',X_1)$-fragment. This guarantees that, for each $S\in\mathcal{H}_i$, the chosen $S'$ is distinct (possibly as an element of the multiset), and thus the above really yields the desired bound. The reason why we are still guaranteed that $S\subseteq S'$ is that $S=J\setminus X_1$ for some $J\subseteq S'\cup X_1$, so $S=J\setminus X_1\subseteq S'\setminus X_1\subseteq S'$.}
\begin{equation}\label{equa:fragmentationProperty}
|\mathcal{H}_i\cap \langle I\rangle|\leq|\mathcal{H}_0\cap \langle I\rangle|.
\end{equation}
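To make the mechanics of a fragmentation step concrete, the following small Python sketch (ours, purely illustrative and not part of the argument) performs a single step; hypergraph edges are modelled as frozensets of pairs, and the fixed choice function $f_i$ is realised as picking the lexicographically smallest admissible fragment:
\begin{verbatim}
import random
from itertools import combinations

def fragmentation_step(H, X, k):
    """One step of the fragmentation process: for every edge S of H,
    collect the (S, X)-fragments, i.e. the sets J - X with J in H,
    J contained in S | X and |J - X| <= k; if the pair (S, X) is
    k-good, keep one fixed fragment of size at most k."""
    H_next = []
    for S in H:
        frags = [J - X for J in H if J <= (S | X) and len(J - X) <= k]
        if frags:  # (S, X) is k-good
            H_next.append(min(frags, key=sorted))  # fixed, arbitrary choice
    return H_next

# Tiny demo: H0 = the four triangles of K_4, X = three random pairs.
V = [(u, v) for u in range(4) for v in range(u + 1, 4)]
H0 = [frozenset(T) for T in combinations(V, 3)
      if len({x for e in T for x in e}) == 3]
X = set(random.sample(V, 3))
print(fragmentation_step(H0, X, k=2))
\end{verbatim}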
While the general framework developed in \cite{FKNP19,KNP20} works for arbitrary hypergraph thresholds, here we focus on graphs.
Let $F$ be some (possibly spanning) subgraph of the complete graph $K_n$, and let $\mathcal{F}$ denote the set of all copies of $F$ in $K_n$\COMMENT{In more generality, we could define $\mathcal{F}$ as the subgraph of all minimal elements of an increasing property $\mathcal{P}$, in the same way as in \cite{FKNP19}; I believe our methods would transfer as long as each element of $\mathcal{F}$ has the same size, so $\mathcal{F}$ is uniform; they should also work if it is not uniform, but we might need to be a bit more careful.}.
We will identify copies of $F$ from $\mathcal{F}$ with their edge sets, and we thus view $\mathcal{F}$ as a $k$-uniform hypergraph, where $k=|E(F)|$, on the vertex set $M\coloneqq \binom{[n]}{2}$.
We now define a strengthening of the notion of spreadness of hypergraphs which is key for our results.
For $q,\alpha,\delta\in(0,1)$, we say that a $k$-bounded hypergraph $\mathcal{F}$ on vertex set $M$ is \emph{$(q,\alpha,\delta)$-superspread} if it is $q$-spread and, for any $I\subseteq M$ with $|I|\le \delta k$, we have
\begin{equation*}\label{eq:superspread-def}
|\mathcal{F}\cap \langle I\rangle|\le q^{|I|} k^{-\alpha c_I} |\mathcal{F}|,
\end{equation*}
where $c_I$ is the number of components of $I$ (when $I$ is viewed as a subgraph of $K_n$).
The role of the term $k^{-\alpha c_I}$ will become clear later, but, roughly speaking, it will be responsible for bounding the threshold by $O(q/\alpha)$.
The value of the constant $\delta$ actually plays no role in the result, but we do need it to be bounded away from $0$ for our approach to work.
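For very small hypergraphs, the spreadness can be brute-forced straight from the definition. The following throwaway Python sketch (ours) does this; the superspread refinement would additionally discount each count $|\mathcal{F}\cap\langle I\rangle|$ by the factor $k^{\alpha c_I}$, which requires tracking the components of $I$:
\begin{verbatim}
from itertools import combinations

def spreadness(H):
    """Brute-force the smallest q with |H n <I>| <= q^{|I|} |H| for all
    nonempty I; only sets I contained in some edge of H can give a
    nonzero count, so scanning subsets of edges suffices."""
    H = [frozenset(S) for S in H]
    Is = {frozenset(I) for S in H
          for r in range(1, len(S) + 1) for I in combinations(S, r)}
    return max((sum(I <= S for S in H) / len(H)) ** (1 / len(I)) for I in Is)

# The four triangles of K_4: each single pair lies in two of them, and the
# binding set turns out to be a whole triangle, giving q = (1/4)^(1/3).
V = [(u, v) for u in range(4) for v in range(u + 1, 4)]
H = [T for T in combinations(V, 3) if len({x for e in T for x in e}) == 3]
print(spreadness(H))  # ~0.63
\end{verbatim}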
The following result is the main lemma of the paper.
It will be used to iteratively build a spanning copy of $F$ in $G(n,p)$ through a fragmentation process.
\begin{lemma}\label{lem:fragmentation}
Let $d,\alpha,\delta>0$ with $\alpha,\delta<1$.
Then, there is a fixed constant $C_0$ such that, for all $C\ge C_0$ and $n\in\mathbb{N}$, the following holds.
Let $F$ be some subgraph of $K_n$ with $\Delta(F)\le d$\COMMENT{Note: this condition cannot be relaxed because we use \cref{lem:num_subgraphs}.} and $k_0\coloneqq|E(F)|=\omega(1)$, and let $\mathcal{F}$ be the set of all copies of $F$ in $K_n$.
Assume that $\mathcal{F}$ is $(q,\alpha,\delta)$-superspread with $q\geq4k_0/(Cn^2)$ and that $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_i)$ is some fragmentation process with $\mathcal{H}_0\coloneqq \mathcal{F}$ such that, for each $j\in[i]$, $\mathcal{H}_j$ is $k_j$-bounded and $|\mathcal{H}_j|\ge |\mathcal{H}_{j-1}|/2$, and $k_i=\omega(k_0^{\alpha})$.
Then, for $w\coloneqq Cq \binom{n}{2}$, $k\coloneqq k_ik_0^{-\alpha}$ and $X$ chosen uniformly at random from $\binom{M}{w}$, we have
\begin{equation}\label{eq:fragmentation}
\mathbb{E}\left[\left\lvert\left\{ (S,X) : S\in \mathcal{H}_i, (S,X)\text{ is }k\text{-bad}\right\}\right\rvert\right]\le 2C^{-k/3} |\mathcal{H}_i|.
\end{equation}
\end{lemma}
The proof of \cref{lem:fragmentation} closely follows the proofs of Lemma~3.1 in~\cite{KNP20} and Lemma~3.1 in~\cite{FKNP19}.
For the sake of completeness, we give it in \cref{app:Fragmentation} for the convenience of the interested reader.
Equipped with \cref{lem:fragmentation} we can now establish the following.
\begin{theorem}\label{thm:main}
Let $d,\alpha,\delta,\varepsilon>0$ with $\alpha,\delta<1$.
Then, there is a fixed constant $C_0$ such that, for all $C\ge C_0$ and $n\in\mathbb{N}$, the following holds.
If $F$ is a subgraph of $K_n$ with $\Delta(F)\le d$ and $k_0\coloneqq|E(F)|=\omega(1)$ and the hypergraph $\mathcal{F}$ of all copies of $F$ is $(q,\alpha,\delta)$-superspread with $q\geq4k_0/(Cn^2)$, then, for $p\ge C q$,
\[
\mathbb{P}\left[F\subseteq G(n,p)\right]\geq1-\varepsilon.
\]
\end{theorem}
This result immediately provides an upper bound of $Cq$ for the threshold for the appearance of $F$ as a subgraph of $G(n,p)$\COMMENT{We do not need to use the results of Friedgut: the original result of Bollobás and Thomason (see the proof in Frieze-Karonski) already shows that $p$ as above is an upper bound on the threshold. Perhaps we should cite some of these.}.
If a matching lower bound can be found (say, by the standard first moment method\COMMENT{Can we always show that the spreadness of a hypergraph is a lower bound for the threshold?}), then this establishes the threshold for the appearance of any graph $F$ which satisfies the conditions in the statement.
The proof of \cref{thm:main} follows along similar lines as the proofs in~\cite{FKNP19,KNP20}: one proceeds in rounds of sprinkling random edges by showing that, after each round of exposure (which corresponds to a step of the fragmentation process), the random graph contains larger pieces of the desired structure (or, conversely, the missing fragments become smaller).
In the general proof in~\cite{FKNP19}, the authors show that the progress in each round shrinks the percentage of the edges from the desired structure by a factor of $0.9$, which results in logarithmically many steps and, thus, a $\log n$ factor with respect to the fractional expectation threshold of the structure (this result is quite general, though, and oftentimes a logarithmic factor is indeed needed, as in the case of spanning trees, Hamilton cycles and $K_r$-factors in random graphs, or of perfect matchings and loose Hamilton cycles in random hypergraphs).
The threshold for the square of a Hamilton cycle $K_{3,2,n}$ happens to be $n^{-1/2}$, and in this case, as shown in~\cite{KNP20}, two rounds suffice: the shrinkage factor there is $n^{-1/2}$, so that after the first round a second moment computation suffices in the second round of exposure/sprinkling.
We show that the threshold for the appearance of $F$ is at most $O(q)$ and for this we will need $1/\alpha$ rounds (exposing each time edges with probability $Cq$, for some constant $C$): the shrinkage factor in all but the last round will be $n^{-\alpha}$, so that we can apply the second moment method in the last round.
In the proof of \cref{thm:main} we make use of the following auxiliary lemma.
Again, its proof follows similarly as the proof of Proposition~2.2 in~\cite{KNP20}, and we thus defer it to \cref{app:Fragmentation}.
\begin{lemma}\label{lem:num_subgraphs}
Let $F$ be a graph with $f$ edges and maximum degree $d$.
Then, the number of subgraphs $I$ of $F$ with $\ell$ edges and $c$ components is at most
\[
(4ed)^{\ell}\binom{f}{c}.
\]
\end{lemma}
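As a quick sanity check (ours, not needed for the proofs), the bound of \cref{lem:num_subgraphs} can be verified exhaustively on a small instance, here a $3$-regular graph on $8$ vertices (a copy of $C^{e}_{4,8}$, written out explicitly) with $f=12$ and $d=3$:
\begin{verbatim}
from itertools import combinations
from math import comb, e

def components(edges):
    """Number of connected components of an edge set (union-find)."""
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(x) for x in parent})

F = [(0, 1), (2, 3), (4, 5), (6, 7),   # perfect matching
     (0, 2), (2, 4), (4, 6), (6, 0),   # cycle on odd-indexed vertices
     (1, 3), (3, 5), (5, 7), (7, 1)]   # cycle on even-indexed vertices
f, d = len(F), 3

counts = {}
for l in range(1, f + 1):
    for E in combinations(F, l):
        key = (l, components(E))
        counts[key] = counts.get(key, 0) + 1

assert all(cnt <= (4 * e * d) ** l * comb(f, c)
           for (l, c), cnt in counts.items())
\end{verbatim}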
\begin{proof}[Proof of \cref{thm:main}]
We first note that, by adjusting the value of $C_0$, we may assume that $n$ is sufficiently large, and therefore $k_0$ is sufficiently large too.
We may also assume that $q<C_0^{-1}$.
In the beginning, we will switch and work with the $G(n,m)$ model instead of $G(n,p)$.
This can be done easily since these models are essentially equivalent for $m=p\binom{n}{2}$ (see, e.g.,~\cite[Proposition~1.12]{JLR00}).
We proceed as follows.
We consider $G(n,m_1)\cup G(n,m_2)\cup\ldots\cup G(n,m_t)$ with $t=\lceil1/\alpha\rceil-1$ and $m_i=Kq\binom{n}{2}$ for each $i\in[t]$, where $K$ is assumed to be sufficiently large throughout (and $C_0$ will be defined as $2(t+1)K$).
We then define a fragmentation process on $\mathcal{F}$ with respect to $(k_1,\ldots,k_t)$ and $(G(n,m_1),\ldots,G(n,m_t))$, where the integers $k_1,\ldots,k_t$ will be defined shortly.
We prove that a.a.s.~each step of this fragmentation process satisfies the conditions of \cref{lem:fragmentation}, so that we may iteratively apply it and conclude that each of the subsequent hypergraphs is not `too small'.
At the end of this process, we will be sufficiently `close' to a copy of $F$ that a second moment argument will yield the result.
To be precise, we first consider the hypergraph $\mathcal{H}_0\coloneqq \mathcal{F}$ and take $X_1\coloneqq G(n,m_1)$ and $k_1\coloneqq k_0^{1-\alpha}$.
We consider a first step in the fragmentation process.
We obtain a multihypergraph $\mathcal{H}_1$ of $(S,X_1)$-fragments which is $k_1$-bounded, where each $S$ is an edge of $\mathcal{H}_0$.
In particular, by the assertion~\eqref{eq:fragmentation} of \cref{lem:fragmentation} and Markov's inequality, we have that
\begin{equation}\label{eq:successful}
\mathbb{P}\left[|\mathcal{H}_1|\ge |\mathcal{H}_0|/2\right]\ge 1-4K^{-k_1/3}.
\end{equation}
Suppose now that we have already run the fragmentation process $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_i)$, for some $i\in[t-1]$, and that $|\mathcal{H}_j|\ge |\mathcal{H}_{j-1}|/2$ for all $j\in[i]$.
We run one further step of the fragmentation process with $X_{i+1}\coloneqq G(n,m_{i+1})$ and $k_{i+1}\coloneqq k_ik_0^{-\alpha}$ to obtain a $k_{i+1}$-bounded hypergraph $\mathcal{H}_{i+1}$ of $(S,X_1\cup\ldots\cup X_{i+1})$-fragments (where, again, each $S$ is an edge of $\mathcal{H}_0$).
By another application of \cref{lem:fragmentation} and Markov's inequality, we obtain that
\begin{equation}\label{eq:successful2}
\mathbb{P}\left[|\mathcal{H}_{i+1}|\ge |\mathcal{H}_i|/2\right]\ge 1-4K^{-k_{i+1}/3}.
\end{equation}
We say that the fragmentation process $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_t)$ is \emph{successful} if $|\mathcal{H}_j|\ge |\mathcal{H}_{j-1}|/2$ for all $j\in[t]$.
Let $\beta\coloneqq1-t\alpha$, and note that, by the definition of $t$, we have $0<\beta\leq\alpha$.
By \eqref{eq:successful} and \eqref{eq:successful2}, we conclude that the probability that the fragmentation process $(\mathcal{H}_0,\mathcal{H}_1,\ldots,\mathcal{H}_{t-1})$ which we run is successful is\COMMENT{Note we are assuming $k_0\to\infty$ and, since $\beta$ is a positive constant (but $\beta-\alpha\leq0$), we also have $k_0^\beta\to\infty$.
This is clearly the leading term, as in all other cases the exponent is a smaller constant.}
\[
1-4\sum_{i=1}^{t}K^{-k_i/3}\ge1-4\sum_{i=1}^{\lceil{1}/{\alpha}\rceil-1}K^{-k_0^{1-i\alpha}/3}=1-O\left(K^{-k_0^{\beta}/3}\right).
\]
To summarise, a.a.s.~the fragmentation process is successful and, thus, yields a $k_{t}$-bounded multihypergraph $\mathcal{H}_t$ of $(S,X_1\cup\ldots\cup X_t)$-fragments, where $k_{t}=k_0^{\beta}$, $|\mathcal{H}_t|\geq2^{-t}|\mathcal{H}_0|$ and each $S$ is an element of $\mathcal{H}_0$.
We now apply one more round of sprinkling.
In this final round we switch and work with the random set $X\coloneqq G(n,p)$ with $p=Kq$.
We may also assume that $\mathcal{H}_t$ is $k_t$-uniform, since every set $S\in\mathcal{H}_t$ is contained in some $S'\in\mathcal{F}$, so we can add $k_t-|S|$ arbitrary vertices from $S'\setminus S$ to~$S$.
The proof now will proceed along the same lines as the proof in~\cite[Theorem~1.2]{KNP20}.
Define the random variable $Y\coloneqq |\{S\in\mathcal{H}_t:S\subseteq G(n,p)\}|$.
Our aim is to estimate the variance of~$Y$ and to show that $\mathbb{P}[Y=0]\le \varepsilon$.
This would mean that the random graph $G(n,p)\cup\bigcup_{i=1}^{t} G(n,m_i)$ contains a copy of $F$ with probability at least $1-\varepsilon$ (by~\cite[Proposition~1.12]{JLR00}, this also applies to $G\left(n,C_0q\right)$).
We estimate the variance of $Y$ as follows (recall that we work in $G(n,p)$ now).
Let $R\in \mathcal{H}_t$, so $|R|=k_t=k_0^{\beta}$.
Then, using the fact that $\mathcal{F}$ is $(q,\alpha,\delta)$-superspread and \eqref{equa:fragmentationProperty}, for each $\ell\in[k_t]$ we have that
\begin{align*}
|\{S\in \mathcal{H}_t: |S\cap R|=\ell\}|&\le \sum_{L\subseteq R, |L|=\ell} |\mathcal{H}_t\cap\langle L\rangle|\le \sum_{L\subseteq R, |L|=\ell} |\mathcal{F}\cap\langle L\rangle|
\le \sum_{L\subseteq R, |L|=\ell} q^{|L|} k_0^{-\alpha c_L} |\mathcal{F}|\\
&=\sum_{c=1}^\ell \sum_{L\subseteq R, |L|=\ell, c_L=c} q^{\ell} k_0^{-\alpha c} |\mathcal{F}|
\overset{\text{\cref{lem:num_subgraphs}}}{\le} \sum_{c=1}^\ell (4ed)^\ell \binom{k_t}{c} q^{\ell} k_0^{-\alpha c} |\mathcal{F}|\\
&=q^{\ell} |\mathcal{F}| (4ed)^\ell \sum_{c=1}^\ell \binom{k_t}{c} k_0^{-\alpha c} \le q^{\ell} |\mathcal{F}| (4ed)^\ell \sum_{c=1}^\ell \left(\frac{ ek_t k_0^{-\alpha}}{c}\right)^c
=q^{\ell} |\mathcal{F}| e^{O(\ell)},
\end{align*}
where the implicit constants in the $O$ notation are independent of $K$.
We therefore get the following bound on the variance:
\[
\mathrm{Var}[Y]\le p^{2k_t}\sum_{R,S\in \mathcal{H}_t,\, R\cap S\neq\varnothing} p^{-|R\cap S|}\overset{|\mathcal{H}_t|\le |\mathcal{F}|}{\le} |\mathcal{F}|^2p^{2k_t}\sum_{\ell=1}^{k_t} e^{O(\ell)}p^{-\ell}q^{\ell}=O\left(\mathbb{E}[Y]^2/K\right),
\]
where we use the facts that $p=Kq$, $\mathbb{E}[Y]=p^{k_t}|\mathcal{H}_t|$ and $|\mathcal{H}_t|\ge 2^{-t}|\mathcal{F}|=\Theta(|\mathcal{F}|)$.
By letting $K$ be sufficiently large, the result follows by Chebyshev's inequality.
\end{proof}
\section{Applications of Theorem~\ref{thm:main}}\label{sec:special_cases}
We use this section to prove \cref{thm:Ctwo-threshold,thm:Kr-cycle} as applications of \cref{thm:main}.
\subsection{Spanning \texorpdfstring{$C_4$}{C4}-cycles}\label{sect31}
Throughout this section, we assume that $n$ is even.
Observe that the graph $C^{e}_{4,n}$ has $3n/2$ edges.
Let $\mathcal{C}$ be the $(3n/2)$-uniform hypergraph on the vertex set $M=\binom{[n]}{2}$ where we see (the set of edges of) each copy of $C^{e}_{4,n}$ as an edge of $\mathcal{C}$.
We write $|\mathcal{C}|$ for the number of its edges and notice that $|\mathcal{C}|=(n-1)!/2$.
Indeed, consider an arbitrary labelling $v_1,\ldots,v_n$ of the vertices.
We define a copy of $C^{e}_{4,n}$ uniquely based on this ordering: we consider a perfect matching between the odd-indexed and the even-indexed vertices (where we add the edge $v_{2i-1}v_{2i}$ for each $i\in[n/2]$), and then we define a cycle of length $n/2$ on the set of even-indexed vertices and another one on the set of odd-indexed vertices (where the edges of these cycles join the vertices which are closest in the labelling, seen cyclically).
In this way, each of the $C_4$'s which form the copy of $C^{e}_{4,n}$ is given by four consecutive vertices in the labelling, starting with an odd-indexed vertex $v_{2i-1}$, so that its edges are $\{v_{2i-1}v_{2i},v_{2i+1}v_{2i+2},v_{2i-1}v_{2i+1},v_{2i}v_{2i+2}\}$.
Now one can easily verify that there are $2n$ different labellings which yield the same copy of $C^{e}_{4,n}$ (there are $n/2$ possible starting points while maintaining the same cyclic ordering; if the ordering is reversed, the resulting graph is the same; and if all pairs of vertices $\{v_{2i-1},v_{2i}\}$ are swapped simultaneously, the resulting graph is also the same).
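Both counts can be confirmed by brute force for the smallest even case $n=6$ (where the two cycles degenerate to triangles). The following throwaway Python sketch (ours) enumerates all $6!$ labellings, builds the corresponding copies exactly as described above, and checks that $(n-1)!/2$ distinct copies arise, each from exactly $2n$ labellings:
\begin{verbatim}
from itertools import permutations
from math import factorial

def Ce4(order):
    """Copy of C^e_{4,n} determined by the labelling order = (v_1,...,v_n)
    (0-indexed): the matching v_{2i-1}v_{2i} plus one cycle on the
    odd-indexed and one on the even-indexed vertices."""
    n = len(order)
    E = {frozenset((order[2*i], order[2*i + 1])) for i in range(n // 2)}
    E |= {frozenset((order[2*i], order[(2*i + 2) % n]))
          for i in range(n // 2)}
    E |= {frozenset((order[2*i + 1], order[(2*i + 3) % n]))
          for i in range(n // 2)}
    return frozenset(E)

n = 6
copies = {}
for sigma in permutations(range(n)):
    G = Ce4(sigma)
    copies[G] = copies.get(G, 0) + 1

assert len(copies) == factorial(n - 1) // 2   # |C| = (n-1)!/2
assert set(copies.values()) == {2 * n}        # 2n labellings per copy
\end{verbatim}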
Recall that a hypergraph $\mathcal{C}$ is $q$-spread, for some $q\in(0,1)$, if $|\mathcal{C}\cap \langle I\rangle|\le q^{|I|}|\mathcal{C}|$ for all $I\subseteq M$, where $\langle I\rangle$ denotes the set of all supersets of $I$.
Moreover, for $\alpha,\delta\in(0,1)$, we say $\mathcal{C}$ is $(q,\alpha,\delta)$-superspread if it is $q$-spread and, for every $I\subseteq M$ with $|I|\le 3\delta n/2$, we have
\begin{equation*}
|\mathcal{C}\cap \langle I\rangle|\le q^{|I|} (3n/2)^{-\alpha c_I} |\mathcal{C}|,
\end{equation*}
where $c_I$ is the number of components of $I$.
Our main goal now is to establish that the hypergraph $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread.
\begin{lemma}\label{obs:small-edge-spread}
Let $I\subseteq C^{e}_{4,n}$ be a graph with $\ell\le n/10$ edges and $c$ components.
Then, we have
\[
|V(I)|-c\geq\frac{2}{3}\ell+\frac{c}{3}.
\]
\end{lemma}
\begin{proof}
Let $I_1,\ldots,I_c$ be the components of $I$ with at least one edge, and let $v_1,\ldots,v_c$ be the number of vertices spanned by $I_1,\ldots,I_c$, respectively.
Since for all $j\in[c]$ we have $|I_j|\le|I|\le n/10$, we conclude the following easy bound on any component:
\begin{equation}\label{eq:edge-bound}
|I_j|\le \frac{1}{2}\left(4\cdot 2+(v_j-4)\cdot 3\right)=\frac{3}{2}v_j-2.
\end{equation}
Indeed, this holds since the maximum degree of $I$ is at most $3$ and in every component $I_j$ with $4\leq v_j\leq n/10$ there are four vertices whose sum of degrees is at most $8$.
For $v_j\in[2,3]$ we have $|I_j|= v_j-1$, hence the bound given in~\eqref{eq:edge-bound} holds in these cases as well.
Summing over all $j\in[c]$, we obtain that \[\ell=|I|\leq\frac32|V(I)|-2c,\]
which yields the desired result by rearranging the terms.
\end{proof}
\begin{lemma}\label{obs:large-edge-spread}
Let $I\subseteq C^{e}_{4,n}$ be a graph with $\ell$ edges and $c$ components.
Then, we have
\[
|V(I)|-c\geq\frac{2}{3}\ell-1.
\]
\end{lemma}
\begin{proof}
Since every vertex of $C^{e}_{4,n}$ has degree $3$, the bound in the statement holds trivially if $I$ has only one component.
We may thus assume that $I$ contains at least two components with at least one edge each.
But then, one can directly check that \eqref{eq:edge-bound} must hold, and we can argue exactly as in \cref{obs:small-edge-spread}, which leads to a better bound than claimed in the statement.
\end{proof}
\begin{lemma}\label{obs:general-spread}
The hypergraph $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread.
\end{lemma}
\begin{proof}
Let $I\subseteq M$.
We need to obtain upper bounds for $|\mathcal{C}\cap \langle I\rangle|$.
If $I$ is not contained in any copy of $C^{e}_{4,n}$, then $|\mathcal{C}\cap \langle I\rangle|=0$, so we may assume $I$ is a subgraph of some copy of $C^{e}_{4,n}$.
Recall that a copy of $C^{e}_{4,n}$ can be defined by an ordering of $[n]$ and that exactly $2n$ such orderings define the same copy of $C^{e}_{4,n}$.
Thus, it suffices to bound the number of orderings of $[n]$ which define a copy of $C^{e}_{4,n}$ containing $I$.
Let $I_1,\ldots,I_c$ be the components of $I$ which contain at least one edge.
For each $j\in[c]$, choose a vertex $x_j\in V(I_j)$ (note that there are $v_j$ possible choices for this, which leads to a total of
\begin{equation}\label{equa:bound1C4}
\prod_{j=1}^cv_j\leq 2^{|I|}
\end{equation}
choices for $\{x_1,\ldots,x_c\}$).
Now, each ordering $\sigma$ of $[n]$ (recall this defines a copy of $C^{e}_{4,n}$) induces an ordering on the set consisting of the vertices $x_1,\ldots,x_c$ as well as all isolated vertices in $I$.
Let us denote this induced ordering as $\tau=\tau(\sigma)$.
We now want to bound the total number of possible orderings $\sigma$ by first bounding the number of orderings $\tau$ (which depend on the choice of $x_1,\ldots,x_c$) and then the number of orderings $\sigma$ with $\tau=\tau(\sigma)$.
After the choice of $x_1,\ldots,x_c$, the number of possible orderings $\tau$ is \begin{equation}\label{equa:bound2C4}
(n-|V(I)|+c)!.
\end{equation}
Now, in order to obtain some $\sigma$ such that $\tau=\tau(\sigma)$, it suffices to `insert' the vertices which are missing into the ordering, and this must be done in a way which is consistent with the structure of the components $I_j$.
For each $j\in[c]$, consider a labelling of the vertices of $I_j$ starting with $x_j$ and such that each subsequent vertex has at least one neighbour with a smaller label.
Then, we insert the vertices of $I_j$ into the ordering following this labelling, and note that, for each vertex, there are at most three choices, as $\Delta(I_j)\leq3$.
This implies there are at most $3^{|V(I_j)|-1}\leq3^{|I_j|}$ possible ways to fix the ordering of the vertices of $I_j$.
By considering all $j\in[c]$, we conclude that there are at most
\begin{equation}\label{equa:bound3C4}
\prod_{j=1}^c 3^{|I_j|}\le 3^{|I|}
\end{equation}
possible orderings $\sigma$ which result in the same $\tau$.
Combining \eqref{equa:bound1C4}, \eqref{equa:bound2C4} and \eqref{equa:bound3C4} with the fact that there are $2n$ distinct orderings $\sigma$ which result in the same copy of $C^{e}_{4,n}$, we conclude that
\begin{equation}\label{equa:bound4C4}
|\mathcal{C}\cap \langle I\rangle|\le \frac{6^{|I|}}{2n}(n-|V(I)|+c)!\le 6^{|I|}(n-|V(I)|+c-1)!.
\end{equation}
We can now estimate the spreadness of $\mathcal{C}$.
Consider first any $I\subseteq M$ with $|I|\leq n/10=|C^{e}_{4,n}|/15$, and let $c$ be its number of components.
Then, by substituting the bound given by \cref{obs:small-edge-spread} into \eqref{equa:bound4C4}, we conclude that
\[|\mathcal{C}\cap \langle I\rangle|\leq 6^{|I|}\left(n-\frac23|I|-\frac{c}{3}-1\right)!.\]
By using the bound on $|I|$ and taking into account that $|\mathcal{C}|=(n-1)!/2$ and $|C^{e}_{4,n}|=3n/2$, we conclude that $|\mathcal{C}\cap \langle I\rangle|\leq q^{|I|}|C^{e}_{4,n}|^{-c/3}|\mathcal{C}|$ for $q\geq250n^{-2/3}$\COMMENT{Let $\ell\coloneqq |I|$.
It suffices to check that
\[6^{\ell}\left(n-\frac{2}{3}\ell-\frac{c}{3}-1\right)!\leq q^\ell\left(\frac32n\right)^{-c/3}\frac{(n-1)!}{2}.\]
By Stirling's approximation, it suffices to check that (for sufficiently large $n$)
\[q^\ell\geq4\cdot6^{\ell}\left(\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{1/2}e^{2\ell/3}\left(e\frac32\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{c/3}\left(\frac{n-\frac{2}{3}\ell-\frac{c}{3}}{n}\right)^n\left(n-\frac{2}{3}\ell-\frac{c}{3}\right)^{-2\ell/3}.\]
By taking roots, we want to have
\[q\geq\left(4\left(\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{1/2}\right)^{1/\ell}6e^{2/3}\left(e\frac32\frac{n}{n-\frac{2}{3}\ell-\frac{c}{3}}\right)^{c/(3\ell)}\left(\frac{n-\frac{2}{3}\ell-\frac{c}{3}}{n}\right)^{n/\ell}\left(n-\frac{2}{3}\ell-\frac{c}{3}\right)^{-2/3}.\]
Now consider each term in the expression above.
The term inside the first big parenthesis tends to $1$ as $\ell$ goes to infinity, and it is always bounded by some constant (say, it is at most $8$, since we have an upper bound on $\ell$ and, thus, also on $c$, which leads to the expression inside the second parenthesis being a constant smaller than $2$).
The term $6e^{2/3}$ remains as is.
The next term can be bounded by $(3e/2)^{1/3}$ since $c\leq\ell$.
The next term can be bounded simply by $1$.
The final term, then, is at most $(n/2)^{-2/3}$.
Putting all of these bound together, it follows that it suffices to have
\[q\geq8\cdot6e^{2/3}(3e/2)^{1/3}(n/2)^{-2/3}=2^{13/3}3^{4/3}e n^{-2/3}.\]
In particular, the constant above is at most $250$.}.
Similarly, assume $I\subseteq M$ has $|I|>n/10$ edges and $c$ components.
By substituting the bound given by \cref{obs:large-edge-spread} into \eqref{equa:bound4C4}, we now have that
\[|\mathcal{C}\cap \langle I\rangle|\leq 6^{|I|}\left(n-\frac23|I|\right)!.\]
Now, as above, we conclude that $|\mathcal{C}\cap \langle I\rangle|\leq q^{|I|}|\mathcal{C}|$ for $q\geq12n^{-2/3}$.\COMMENT{Let $\ell\coloneqq|I|$.
It suffices to show that
\[6^\ell\left(n-\frac{2}{3}\ell\right)!\leq q^\ell\frac{(n-1)!}{2}.\]
By making use of Stirling's approximation, we have that $(n-2\ell/3)!\geq\sqrt{2\pi(n-2\ell/3)}((n-2\ell/3)/e)^{n-2\ell/3}$, and using the same approximation for $n!$, we conclude that it suffices to have
\[q^\ell\geq4n\frac{\sqrt{n-2\ell/3}}{\sqrt{n}}\cdot6^\ell e^{2\ell/3}\frac{\left(n-\frac{2}{3}\ell\right)^{n-2\ell/3}}{n^n}.\]
Now note that
\[4n\frac{\sqrt{n-2\ell/3}}{\sqrt{n}}\cdot6^\ell e^{2\ell/3}\frac{\left(n-\frac{2}{3}\ell\right)^{n-2\ell/3}}{n^n}\leq4n6^\ell e^{2\ell/3}\frac{\left(\left(1-\frac{2\ell}{3n}\right)n\right)^{n-2\ell/3}}{n^n}\leq4n6^\ell e^{2\ell/3}n^{-2\ell/3}.\]
Therefore, it suffices to have
\[q\geq(4n)^{1/\ell}6 e^{2/3}n^{-2/3}.\]
But now, by the bound $\ell\geq n/10$, we have that $(4n)^{1/\ell}\to1$, so in particular the inequality holds (for sufficiently large $n$) if $q=6.1e^{2/3}n^{-2/3}\leq12n^{-2/3}$.}
Combining the two statements above, it follows by definition that $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread, as we wanted to see.
\end{proof}
\begin{proof}[Proof of \cref{thm:Ctwo-threshold}]
\Cref{obs:general-spread,obs:small-edge-spread} establish that $\mathcal{C}$ is $(250n^{-2/3},1/3,1/15)$-superspread.
By \cref{thm:main} we have that, if $p\ge Cn^{-2/3}$, where $C$ is a sufficiently large constant, then
\[
\mathbb{P}\left[C^{e}_{4,n}\subseteq G(n,p)\right]\ge 1/2.
\]
To finish the argument, one can employ a general result of Friedgut~\cite{Fri05} (see, e.g., a recent paper of \citet{NS20}), which allows one to establish that
\[\mathbb{P}\left[C^{e}_{4,n}\subseteq G(n,(1+o(1))p)\right]=1-o(1).\qedhere\]
\end{proof}
\subsection{Spanning \texorpdfstring{$K_r$}{Kr}-cycles}
In the following we study copies of $K_r$ arranged in a cyclic way.
Since there are several ways in which two consecutive copies of $K_r$ can overlap, we provide a precise definition of what will be called an $s$-overlapping $K_r$-cycle.
\begin{definition}\label{def:Krsn}
Let $r> s\ge 0$ and $n\in \mathbb{N}$ with $(r-s)\mid n$ be integers.
A $K_{r,s,n}$-cycle is a graph on vertex set $\mathbb{Z}_n=[0,n-1]$ whose edge set is the union of the edge sets of $n/(r-s)$ copies of $K_r$, where for each $i\in[0,n/(r-s)-1]$ there is a copy of $K_r$ on the vertices $[i(r-s),i(r-s)+r-1]$ (modulo $n$).
\end{definition}
In other words, the $n/(r-s)$ copies of $K_r$ are arranged cyclically on the vertex set $\mathbb{Z}_n$, so that two consecutive copies of $K_r$ intersect in exactly $s$ vertices and two non-consecutive cliques intersect in as few vertices as possible.
The case $s=0$ corresponds to a $K_r$-factor.
The threshold for the property of containing a $K_r$-factor was famously determined by \citet{JohKahVu08}.
For the case $s=1$, the copies of $K_r$ in $K_{r,1,n}$ are edge-disjoint.
As mentioned in the introduction, the threshold for the appearance of $K_{r,1,n}$ in $G(n,p)$ was recently determined by \citet{Frieze20}.
When $s=r-1$, the $s$-overlapping $K_r$-cycles are usually referred to as the $(r-1)$-th power of a Hamilton cycle $C_n$, where the $k$-th power of some arbitrary graph $G$ is obtained by connecting any two vertices of $G$ which are at distance at most $k$ with an edge.
The threshold for the appearance of $K_{r,r-1,n}$ is known to be $n^{-1/r}$.
This was observed by \citet{KO12} for $r\ge 4$, while the case $r=3$ was solved recently by \citet{KNP20}.
We determine the threshold for the appearance of $K_{r,s,n}$ for all the remaining values of $r$ and~$s$.
Whenever $s\geq3$, the result follows from a general result of \citet{Ri00}; see \cref{sec:conclude}.
Our main focus here is on the cases when $s=2$.
The overall strategy follows the same structure as in \cref{sect31}.
We denote the set of all unlabelled copies of $K_{r,s,n}$ on $[n]$ by $\mathcal{C}_{r,s,n}$.
When talking about subgraphs of $K_{r,s,n}$, we refer to sets of consecutive vertices as \emph{segments}.
The \emph{length} of a segment is the number of vertices it contains.
First, we observe several simple facts about $K_{r,s,n}$.
\begin{fact}\label{fact:prop_Krsn}
Let $r> s\ge 0$ and $n\in \mathbb{N}$ with $(r-s)\mid n$.
Then, the number of edges in $K_{r,s,n}$ is exactly
\[\left(\binom{r}{2}-\binom{s}{2}\right)\frac{n}{r-s}=\frac{1}{2}(r+s-1)n.\]
In particular, for $s\le r/2$, the number of vertices of degree $2r-s-1$ is exactly ${sn}/({r-s})$ (such vertices belong to two copies of $K_r$), whereas the remaining ${(r-2s)n}/{(r-s)}$ vertices have degree $r-1$ (these vertices belong to exactly one copy of $K_r$).\qed
\end{fact}
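The counts in \cref{fact:prop_Krsn} are easy to confirm mechanically. The following Python sketch (ours, purely illustrative) builds $K_{r,s,n}$ directly from \cref{def:Krsn} and checks the edge count and the degree statistics for $r=5$, $s=2$, $n=30$:
\begin{verbatim}
from itertools import combinations

def Krsn_edges(r, s, n):
    """Edge set of K_{r,s,n}: n/(r-s) copies of K_r, the i-th placed on
    the vertices [i(r-s), i(r-s)+r-1] modulo n."""
    assert r > s >= 0 and n % (r - s) == 0
    E = set()
    for i in range(n // (r - s)):
        clique = [(i * (r - s) + j) % n for j in range(r)]
        E |= {frozenset(e) for e in combinations(clique, 2)}
    return E

r, s, n = 5, 2, 30
E = Krsn_edges(r, s, n)
assert len(E) == (r + s - 1) * n // 2          # edge count from the fact

deg = {v: 0 for v in range(n)}
for edge in E:
    for v in edge:
        deg[v] += 1
assert sum(d == 2*r - s - 1 for d in deg.values()) == s * n // (r - s)
assert sum(d == r - 1 for d in deg.values()) == (r - 2*s) * n // (r - s)
\end{verbatim}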
We will call vertices of $K_{r,2,n}$ with degree $2r-3$ \emph{heavy} and those with degree $r-1$ \emph{light}.
\begin{fact}\label{fact:number-cCrsn}
Let $r> s\ge 1$ and $n\in \mathbb{N}$ with $(r-s)\mid n$ and $s\le r/2$.
We have
\[|\mathcal{C}_{r,s,n}|=\frac{(n-1)! (r-s)}{2((r-2s)!)^{n/(r-s)}(s!)^{n/(r-s)}}=\frac{r-s}{2}d_{r,s}^n(n-1)!,\]
where $d_{r,s}\in (0,1]$ is some absolute constant that depends on $s$ and $r$ only.\COMMENT{There are $n!$ permutations of the vertices.
Given this, there are $2n/(r-s)$ equal cyclic orders (the starting point matters up to $r-s$ choices, and after that the order would repeat itself, and we can go in either of the two directions).
Furthermore, for each set of consecutive vertices of degree $r-1$, we can reorder them in any way and obtain the same graph; there are $n/(r-s)$ sets of $r-2s$ such consecutive vertices.
Similarly, for each set of consecutive vertices of degree $2r-s-1$ contained in the same edges, we can reorder them in any way and obtain the same graph; there are $n/(r-s)$ sets of $s$ such consecutive vertices.} \qed
\end{fact}
\begin{fact}\label{fact:consec_Krsn}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Let $V\subseteq V(K_{r,2,n})$ be a segment starting in the first vertex of some clique $K_r$ with $|V|\le n/(2r)+1$\COMMENT{This bound (or a weaker form) is necessary in the sense that, if the graph wraps around, then the number of edges can be slightly larger than described below.}.
Then,
\begin{equation}\label{eq:max-subgraph}
e(K_{r,2,n}[V])=\left(\binom{r}{2}-1\right)a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2),
\end{equation}
where $|V|=(r-2)a+b$ with $a,b\in \mathbb{N}_0$ and $0\le b< r-2$.\COMMENT{
\begin{proof}
Let $v=(r-2)a+b$ with $a,b\in \mathbb{N}_0$ and $0\le b< r-2$.
We may assume that $V=[0,v-1]$.
We distinguish the following cases: $b\in\{0, 1\}$ and $2\le b< r-2$.
Assume first that $b\in\{0, 1\}$.
Then, we have
\[
e(I)=\left(\binom{r}{2}-1\right)a+\binom{b}{2}-(2-b)(r-2)=\left(\binom{r}{2}-1\right)(a-1)+\binom{r-2+b}{2}
\]
(we have $a$ contributions of $\binom{r}{2}-1$, where we count edges ($a$ times) on a set of size $r-2$ and between this set and the next two vertices of $V$ (note this guarantees that we do not count the edge induced by said pair of vertices twice); additionally we get $\binom{b}{2}$ edges, but we have to correct this for the last `full' clique in $V$, since it will miss exactly $2-b$ vertices and, therefore, $(2-b)(r-2)$ edges).\\
Assume next that $2\le b< r-2$.
By the same argument as above, we have $e(I)=\left(\binom{r}{2}-1\right)a+\binom{b}{2}$.
\end{proof}
} \qed
\end{fact}
Instead of using \eqref{eq:max-subgraph}, we will make use of the following estimate to streamline our calculations.
\begin{proposition}\label{prop:easy-bound}
Let $r\ge 4$ and $v\in \mathbb{N}$ with $v=(r-2)a+b$, where $a,b\in \mathbb{N}_0$ and $0\le b< r-2$.
Then,
\begin{equation}\label{eq:easy-bound}
\left(\binom{r}{2}-1\right)a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\le \frac{r+1}{2}v -\frac{r+2}{2}.
\end{equation}
\end{proposition}
\begin{proof}
We can rewrite the LHS of~\eqref{eq:easy-bound} as
\[
\frac{(r+1)(r-2)}{2}a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2).
\]
By substituting $v=(r-2)a+b$ in the RHS, we see that~\eqref{eq:easy-bound} is equivalent to\COMMENT{We want to prove
\[\frac{(r+1)(r-2)}{2}a+\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\le\frac{r+1}{2}((r-2)a+b) -\frac{r+2}{2}=\frac{(r+1)(r-2)}{2}a+\frac{r+1}{2}b-\frac{r+2}{2},\]
and the leftmost term cancels out.}
\[\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\le\frac{r+1}{2}b-\frac{r+2}{2}.\]
To verify this, we consider
\begin{align*}
f(b)\coloneqq&\, 2\left(\frac{r+1}{2}b-\frac{r+2}{2}-\left(\binom{b}{2}-\max\{2-b,0\}\cdot(r-2)\right)\right)\\
=&\,(r+2-b)b-(r+2)+2\max\{2-b,0\}\cdot(r-2).
\end{align*}
For all $b\neq2$ we have $f'(b)=(r+2)-2b-2(r-2)\cdot \mathds{1}_{\{b<2\}}$ and $f''(b)=-2$.
Since $f$ is concave in $(-\infty,2)$ and $(2,\infty)$, in order to verify that $f(b)\ge 0$ for all $b\in[0,r-3]$ (which is then equivalent to~\eqref{eq:easy-bound}) it suffices to check the value of $f(b)$ at $0$, $1$, $2$ and $r-3$ (assuming this is larger than $2$)\COMMENT{$1$ is necessary for the case $r=4$.}.
Indeed,
\begin{align*}
f(0)&=-(r+2)+4(r-2)=3r-10> 0,\\
f(1)&=(r+1)-(r+2)+2(r-2)=2r-5>0,\\
f(2)&=2r-(r+2)=r-2> 0,\\
\intertext{and, if $r-3> 2$,}
f(r-3)&=5(r-3)-(r+2)=4r-17>0.\qedhere
\end{align*}
\end{proof}
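Alternatively to the concavity argument, for each fixed $r$ the inequality \eqref{eq:easy-bound} is a finite integer check; a throwaway sketch (ours) verifying it exhaustively for small parameters:
\begin{verbatim}
from math import comb

def lhs(r, v):
    """Left-hand side of the inequality, with v = (r-2)a + b."""
    a, b = divmod(v, r - 2)
    return (comb(r, 2) - 1) * a + comb(b, 2) - max(2 - b, 0) * (r - 2)

for r in range(4, 20):
    for v in range(1, 12 * (r - 2)):
        assert 2 * lhs(r, v) <= (r + 1) * v - (r + 2)   # exact integers
\end{verbatim}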
Our goal now is to establish that the densest subgraphs of $K_{r,2,n}$ are precisely those described in \cref{fact:consec_Krsn}.
The next lemma establishes that, among all subgraphs of $K_{r,2,n}$ induced by segments of length at most $n/(2r)+1$ (i.e., there is no `wrapping around the cycle'), the densest ones are those where the segment starts in a `new' $K_r$ or ends in a `full' $K_r$.
\begin{lemma}\label{lem:densest-case-segment}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Let $V\subseteq V(K_{r,2,n})$ be a segment with $r\le |V|\le n/(2r)+1$.
Then, the number of edges induced by $V$ is
maximised when $V$ starts in the first vertex of some clique $K_r$ or ends in the last vertex of some clique $K_r$.
\end{lemma}
\begin{proof}
Let $V$ be a segment which induces the maximum possible number of edges from $K_{r,2,n}$.
By the symmetries of $K_{r,2,n}$, we may assume that $V\cap([0,r-1]\cup[n-r,n-1])=\varnothing$\COMMENT{This assumption is in place so there are no issues when talking about last and first cliques.}.
Assume that $V$ is not of the form described in the claim (i.e., it neither begins in the first vertex nor ends in the last vertex of some clique $K_r$).
Let $i_1$ and $i_2$ be the first and last vertices of $V$, and let $j_1$ and $j_2$ be the number of vertices which $V$ contains in the (last) clique $K_r$ which contains $i_1$ and in the (first) clique which contains $i_2$, respectively.
We may assume, without loss of generality, that $j_1\le j_2$ (and recall that $j_1,j_2<r$).
However, then the set $(V\setminus\{i_1\})\cup\{i_2+1\}=[i_1+1,i_2+1]$ induces more edges than $V$\COMMENT{We remove $j_1-1$ edges from one side and add $j_2$ edges to the other.}.
But this contradicts our choice of $V$ as a set of consecutive vertices which induces the maximum possible number of edges.
\end{proof}
Now we prove that no subgraph of $K_{r,2,n}$ is denser than the subgraphs induced by segments.
\begin{lemma}\label{lem:densest-case-general}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Let $V\subseteq V(K_{r,2,n})$ with $r\le v\coloneqq|V|\le n/(2r)+1$.
Then, the number of edges induced by $V$ is at most
the number induced by a segment of length $v$.
\end{lemma}
\begin{proof}
Let $V\subseteq V(K_{r,2,n})$ be a set of cardinality $v$ inducing the maximum possible number of edges.
Let $I$ be the graph induced by $V$.
Of course, we may assume $I$ has no isolated vertices.
Let $S$ be a smallest segment containing $V$.
By the symmetries of $K_{r,2,n}$, we may assume that $S\cap([0,r-1]\cup[n-r,n-1])=\varnothing$.
We use a compression-type argument to show that we can modify $V$ into a segment which induces at least as many edges as $V$.
We achieve this by consecutively creating new sets $V'$ which are contained in shorter segments but induce at least as many edges as the previous set.
Let $i_1$ and $i_2$ be the first and last vertices of $V$ (i.e., $S=[i_1,i_2]$) and notice that $i_1$ and $i_2$ do not form an edge (since $v\le n/(2r)+1$).
Observe, then, that $\deg_I(i_1),\deg_I(i_2)\in[r-1]$.
By an argument as in \cref{lem:densest-case-segment}, we may assume, without loss of generality, that $\deg_I(i_1)\ge \deg_I(i_2)$ and that $i_1$ is the first vertex of some clique $K_r$ completely contained in $I$:
otherwise, we could replace the vertex $i_2$ with some missing vertex from such a clique and increase the number of edges\COMMENT{Assuming that $\deg_I(i_1)\ge \deg_I(i_2)$ can be done by symmetry.
Now, assume $i_1$ is not the first vertex of a `full' copy of $K_r$.
By deleting $i_2$, we lose $\deg_I(i_2)$ edges. By then adding a new vertex to the clique containing $i_1$, since this clique must already contain $\deg_I(i_1)+1$ vertices, we gain $\deg_I(i_1)+1>\deg_I(i_2)$ new edges, so the total number of edges goes up.
But this contradicts the assumption that $I$ has the maximum possible number of edges.}.
Let $K^{(1)}$ be the copy of $K_r$ contained in $I$ with the smallest indices (in particular, it contains $i_1$).
Assume that $V$ is not a segment.
Consider the vertex $i'\in S\setminus V$ with the smallest index.
Observe that $i'\notin V(K^{(1)})$.
We distinguish two cases, depending on whether $i'$ is heavy or light.
If $i'$ is heavy, let $K'$ and $K''$ be the two copies of $K_r$ from $K_{r,2,n}$ with $i'\in V(K')\cap V(K'')$ and $K'$ containing smaller indices than $K''$.
If $E(K'')\cap E(I)=\varnothing$, then we can shift all edges of $I$ induced by $V\cap[i'+1,i_2]$ to the left $r$ positions, yielding a graph contained in a segment of length $i_2-i_1+1-r$ with the same number of edges as $I$.
Hence, we assume that $E(K'')\cap E(I)\neq\varnothing$, which implies $V$ contains at least two of the vertices of $K''$.
Observe that $K^{(1)}\neq K'$.
Then, replace $i_1$ by $i'$.
In this way, since $i_1$ is the first vertex from $K^{(1)}$, we remove $r-1$ edges, but at the same time we add at least $r$ edges\COMMENT{All vertices in $K'$ before $i'$ must be in the graph, so at least $r-2$ in $V(K')\setminus V(K'')$. But also, since $|V\cap V(K'')|\geq2$, we are adding at least two more edges.\\
We could ignore the fact that $i'\notin K^{(1)}$ and still obtain the desired result (we would replace $e-1$ edges by $r-1$ new edges and obtain a shorter segment).}.
But this contradicts our choice of $V$.
Assume now that $i'$ is light and let $K$ be the unique clique $K_r$ from $K_{r,2,n}$ with $i'\in V(K)$.
Since $i'$ is light, by its definition we must have $|V(K)\setminus V|\le r-2$\COMMENT{There have to be at least two heavy vertices in $K$ which lie in $V$ (the last two with indices below $i'$).}.
We remove the first $t\coloneqq |V(K)\setminus V|\le r-2$ vertices of $K^{(1)}$ from $V$ and instead add the $t$ vertices of $V(K)\setminus V$ to $V$.
In this way, we remove $\sum_{i=1}^{t} (r-i)$ edges, but at the same time we add at least $\sum_{i=1}^{t} (r-i)$ edges.
Since $i_2>i'$, this means the new set lies in a shorter segment but induces at least as many edges.
In all described situations we managed to make the segment containing $V$ shorter while not decreasing the number of edges.
Hence, we eventually find a segment $V'$ inducing at least as many edges as $V$.
\end{proof}
The following now follows directly by combining \cref{fact:consec_Krsn,prop:easy-bound,lem:densest-case-segment,lem:densest-case-general} (and noting that the bound holds trivially if $|V|\leq r$).
\begin{corollary}\label{coro:Krdensity_bound}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Let $V\subseteq V(K_{r,2,n})$ with $|V|\le n/(2r)+1$.
Then,
\[e(K_{r,2,n}[V])\leq \frac{r+1}{2}|V|-\frac{r+2}{2}.\]
\end{corollary}
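\Cref{coro:Krdensity_bound} can also be stress-tested numerically. The sketch below (ours, purely illustrative) builds $K_{r,2,n}$ for $r=5$ and $n=60$ and checks the bound on every segment of admissible size as well as on a batch of random vertex sets:
\begin{verbatim}
from itertools import combinations
from random import sample

def Kr2n_edges(r, n):
    """Edge set of K_{r,2,n} (the case s = 2 of the definition)."""
    E = set()
    for i in range(n // (r - 2)):
        clique = [(i * (r - 2) + j) % n for j in range(r)]
        E |= {frozenset(e) for e in combinations(clique, 2)}
    return E

def induced_edges(E, V):
    V = set(V)
    return sum(e <= V for e in E)

r, n = 5, 60
E = Kr2n_edges(r, n)
for size in range(r, n // (2 * r) + 2):         # sizes r, ..., n/(2r)+1
    bound = ((r + 1) * size - (r + 2)) / 2
    for start in range(n):                      # every segment of this size
        V = [(start + j) % n for j in range(size)]
        assert induced_edges(E, V) <= bound
    for _ in range(500):                        # plus random vertex sets
        assert induced_edges(E, sample(range(n), size)) <= bound
\end{verbatim}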
Using what we have proved so far, we can obtain estimates which will be crucial for studying the spreadness of $\mathcal{C}_{r,2,n}$.
\begin{lemma}\label{lemma:small-spread-new}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Let $I\subseteq K_{r,2,n}$ be a subgraph with $\ell\le n/(2r)$ edges and $c$ components.
Then,
\[|V(I)|-c\ge \frac{2}{r+1}\ell+\frac{c}{r+1}.\]
\end{lemma}
\begin{proof}
Let $I_1,\ldots,I_c$ be the components of $I$ with at least one edge, and let $v_1,\ldots,v_c$ be the number of vertices spanned by $I_1,\ldots,I_c$, respectively.
For each $j\in[c]$, since $|I_j|\le|I|\le n/(2r)$, by \cref{coro:Krdensity_bound} we have the following easy bound:
\[|I_j|\le \frac{r+1}{2}v_j-\frac{r+2}{2}.\]
Summing over all $j\in[c]$, we obtain that
\[\ell=|I|\le \frac{r+1}{2}|V(I)| -\frac{r+2}{2}c=\frac{r+1}{2}\left(|V(I)|-c\right)-\frac{c}{2},\]
and the claim follows by reordering.
\end{proof}
\begin{lemma}\label{lemma:large-spread-new}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Let $I\subseteq K_{r,2,n}$ be a subgraph with $\ell$ edges and $c$ components.
Then,
\[|V(I)|\ge \frac{2}{r+1}\ell.\]
\end{lemma}
\begin{proof}
The vertex set of $K_{r,2,n}$ consists of $t\coloneqq n/(r-2)$ segments of heavy vertices and $t$ segments of light vertices, which alternate as we traverse the vertex set.
The segments of heavy vertices have length $2$, and the segments of light vertices have length $r-4$ (the case $r=4$ is special: here the segments of light vertices are empty).
For each $i\in[t]$, let $h_i$ and $\ell_i$ denote the number of heavy vertices and light vertices of $I$ in the $i$-th segment of heavy or light vertices, respectively.
For notational purposes, let $h_{t+1}\coloneqq h_1$.
Then, we can bound the number of edges of $I$ as follows:
\begin{align*}
|I|&\le \sum_{i=1}^t \left(\binom{h_i}{2}+\binom{\ell_i}{2}+h_i\ell_i+h_{i+1}\ell_i+h_{i+1}h_i\right)\\
&\le \sum_{i=1}^t \left(\binom{h_i}{2}+\binom{\ell_i}{2}+h_i\ell_i+2\ell_i+2h_i\right)=\sum_{i=1}^t \left(\binom{h_i+\ell_i}{2}+2(\ell_i+h_i)\right).
\end{align*}
Next, observe that, for each $i\in[t]$,
\[\binom{h_i+\ell_i}{2}+2(\ell_i+h_i)=(h_i+\ell_i)\left(\frac{h_i+\ell_i-1}{2}+2\right)\le \frac{r+1}{2}(h_i+\ell_i),\]
where the inequality holds since $h_i+\ell_i\le r-2$.
The conclusion follows by adding over all $i\in[t]$.
\end{proof}
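Both lemmas, and the basic parameters of $K_{r,2,n}$, are easy to sanity-check by computer. The following is a minimal Python sketch, assuming our reading of the segment structure described above; the builder, the test parameters $r=5$, $n=150$ and the random sampling are illustrative choices of ours, not part of the formal argument.
\begin{verbatim}
from itertools import combinations
import random

def build_Kr2n(r, n):
    # t = n/(r-2) blocks, each a heavy pair followed by r-4 light
    # vertices; the i-th copy of K_r spans block i together with the
    # heavy pair of block i+1 (cyclically).
    assert r >= 4 and n % (r - 2) == 0
    t = n // (r - 2)
    blocks = [list(range(i * (r - 2), (i + 1) * (r - 2)))
              for i in range(t)]
    edges = set()
    for i in range(t):
        clique = blocks[i] + blocks[(i + 1) % t][:2]
        edges.update(tuple(sorted(e)) for e in combinations(clique, 2))
    return sorted(edges)

def num_components(edge_list):
    # union-find over the vertices spanned by edge_list
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        root = v
        while parent[root] != root:
            root = parent[root]
        parent[v] = root
        return root
    for u, v in edge_list:
        parent[find(u)] = find(v)
    return len({find(v) for v in list(parent)})

r, n = 5, 150                       # any r >= 4 with (r-2) | n
E = build_Kr2n(r, n)
assert len(E) == (r + 1) * n // 2   # e(K_{r,2,n}) = (r+1)n/2
deg = {v: 0 for v in range(n)}
for u, v in E:
    deg[u] += 1
    deg[v] += 1
assert max(deg.values()) < 2 * r    # Delta(K_{r,2,n}) < 2r

for _ in range(300):                # small subgraphs: first lemma
    I = random.sample(E, random.randint(1, n // (2 * r)))
    V = {v for e in I for v in e}
    c = num_components(I)
    assert (r + 1) * (len(V) - c) >= 2 * len(I) + c
for _ in range(300):                # arbitrary subgraphs: second lemma
    I = random.sample(E, random.randint(1, len(E)))
    V = {v for e in I for v in e}
    assert (r + 1) * len(V) >= 2 * len(I)
\end{verbatim}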
Combining the previous two lemmas, we show that $\mathcal{C}_{r,2,n}$ is a $(O(n^{-2/(r+1)}),1/(r+1),1/(r(r+1)))$-superspread hypergraph.
\begin{lemma}\label{lem:spread}
Let $r\ge 4$ and $n\in \mathbb{N}$ with $(r-2)\mid n$.
Then, the hypergraph $\mathcal{C}_{r,2,n}$ of all copies of $K_{r,2,n}$ in $M=\binom{[n]}{2}$ is $(O(n^{-2/(r+1)}),1/(r+1),1/(r(r+1)))$-superspread.
\end{lemma}
\begin{proof}
Let $I\subseteq M$.
Our first aim is to obtain a general upper bound on $|\mathcal{C}_{r,2,n}\cap \langle I\rangle|$.
If $I$ is not contained in any copy of $K_{r,2,n}$, we automatically have $|\mathcal{C}_{r,2,n}\cap \langle I\rangle|=0$, so we may assume $I$ is a subgraph of some copy of $K_{r,2,n}$.
Recall that each copy of $K_{r,2,n}$ can be defined by an ordering of the $n$ vertices, so it suffices to bound the number of orderings which yield a copy of $K_{r,2,n}$ containing $I$.
Let $I_1,\ldots,I_c$ be the components of $I$ with at least one edge.
For each $j\in[c]$, let $x_j\in V(I_j)$ (there are $v_j$ such possible choices, which leads to a total of
\begin{equation}\label{equa:Krtnboundspread1}
\prod_{j=1}^cv_j\leq 2^{|I|}
\end{equation}
choices for $\{x_1,\ldots,x_c\}$\COMMENT{We have $v_j\leq|I_j|+1\leq 2^{|I_j|}$.}).
Then, each ordering $\sigma$ of $[n]$ which defines a copy of $K_{r,2,n}$ containing $I$ induces a unique ordering $\tau=\tau(\sigma)$ on the set consisting of $x_1,\ldots,x_c$ and all vertices of $[n]\setminus V(I)$.
The total number of such orderings $\tau$ is
\begin{equation}\label{equa:Krtnboundspread2}
(n-|V(I)|+c)!
\end{equation}
so now it suffices to bound, for each such $\tau$, the number of orderings $\sigma$ with $\tau=\tau(\sigma)$.
Given an ordering $\tau$, in order to obtain an ordering $\sigma$ with $\tau=\tau(\sigma)$, it suffices to `insert' the missing vertices into the ordering.
That is, for each $j\in[c]$, we need to `insert' the other vertices of $V(I_j)$ into the ordering.
By considering a labelling of the vertices of $I_j$ in such a way that each subsequent vertex is a neighbour of at least one previously included vertex (and taking into account that $x_j$ is already included), we note that there are at most $2r$ choices for each vertex (recall that $\Delta(K_{r,2,n})<2r$).
This leads to a total of at most $(2r)^{|V(I_j)|-1}\leq(2r)^{|I_j|}$ possible ways to include the component $I_j$.
By considering all $j\in[c]$, we conclude that there are at most
\begin{equation}\label{equa:Krtnboundspread3}
\prod_{j=1}^c(2r)^{|I_j|}=(2r)^{|I|}
\end{equation}
orderings $\sigma$ with $\tau=\tau(\sigma)$.
Combining \eqref{equa:Krtnboundspread1}, \eqref{equa:Krtnboundspread2} and \eqref{equa:Krtnboundspread3} with the fact that each copy of $K_{r,2,n}$ is given by $d_{r,2}^{-n}2n/(r-s)$ distinct orderings (see \cref{fact:number-cCrsn}), we conclude that
\begin{equation}\label{equa:spreadboundKr2n}
|\mathcal{C}_{r,2,n}\cap \langle I\rangle|\le \frac{r-s}{2}d_{r,2}^n(4r)^{|I|}(n-|V(I)|+c-1)!.
\end{equation}
We can now estimate the spreadness of $\mathcal{C}_{r,2,n}$.
Consider first any $I\subseteq M$ with $|I|>n/(2r)$.
Note that, since $|V(I)|\le n$, at most $2r$ components $I_j$ of $I$ span more than $n/(2r)$ vertices.
For each of these components, we use \cref{lemma:large-spread-new} to bound $|V(I_j)|$.
For each of the remaining components $I_j$ (which span at most $n/(2r)$ vertices), \cref{coro:Krdensity_bound} yields $|V(I_j)|-1\geq2|I_j|/(r+1)$, as in the proof of \cref{lemma:small-spread-new}.
By substituting these bounds into \eqref{equa:spreadboundKr2n}, we conclude that
\[|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq\frac{r-s}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|+2r\right)!.\]
By comparing this with the expression given in \cref{fact:number-cCrsn} (and taking into account the bound on $|I|$), we conclude that $|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq q^{|I|}|\mathcal{C}_{r,2,n}|$ whenever $q\geq c_1 n^{-2/(r+1)}$\COMMENT{By \cref{fact:number-cCrsn}, we have that
\[|\mathcal{C}_{r,2,n}|=\frac{r-2}{2}d_{r,2}^n(n-1)!.\]
Now it suffices to satisfy
\[\frac{r-2}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|+2r\right)!\leq q^{|I|}\frac{r-2}{2}d_{r,2}^n(n-1)!.\]
By Stirling's approximation, for sufficiently large $n$, it suffices to satisfy
\[2(4r)^{|I|}\left(n-\frac{2}{r+1}|I|+2r\right)_{2r}\sqrt{2\pi\left(n-\frac{2}{r+1}|I|\right)}\left(\frac{n-\frac{2}{r+1}|I|}{e}\right)^{n-\frac{2}{r+1}|I|}\leq q^{|I|}\frac{1}{n}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n.\]
Rearranging,
\[q\geq\left(2\sqrt{n\left(n-\frac{2}{r+1}|I|\right)}\left(n-\frac{2}{r+1}|I|+2r\right)_{2r}\right)^{1/|I|}4re^{\frac{2}{r+1}}\left(\frac{n-\frac{2}{r+1}|I|}{n}\right)^{n/|I|}\left(n-\frac{2}{r+1}|I|\right)^{-2/(r+1)}.\]
Now, using the bound $|I|\geq n/(2r)$, as $n$ goes to infinity, the first term tends to $1$, so we have that, if $n$ is sufficiently large, it suffices to have
\[q\geq8re^{\frac{2}{r+1}}\left(1-\frac{1}{r(r+1)}\right)^{2r}\left(1-\frac{1}{r(r+1)}\right)^{-2/(r+1)}n^{-2/(r+1)}=c_1n^{-2/(r+1)}.\]}, where $c_1$ is a constant that depends only on $r$.
Consider now some $I\subseteq M$ with $|I|\leq n/(2r)=|K_{r,2,n}|/(r(r+1))$ (see \cref{fact:prop_Krsn}), and let $c$ be its number of components.
By making use of \cref{lemma:small-spread-new} and \eqref{equa:spreadboundKr2n}, we have that
\[|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq\frac{r-s}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}-1\right)!.\]
As above, by comparing this with the expression given in \cref{fact:number-cCrsn}, and taking into account also \cref{fact:prop_Krsn}, for $q\geq c_2 n^{-2/(r+1)}$, where $c_2$ depends only on $r$, we have that $|\mathcal{C}_{r,2,n}\cap\langle I\rangle|\leq q^{|I|}|K_{r,2,n}|^{-c/(r+1)}|\mathcal{C}_{r,2,n}|$\COMMENT{As before, by \cref{fact:number-cCrsn},
\[|\mathcal{C}_{r,2,n}|=\frac{r-s}{2}d_{r,2}^n(n-1)!,\]
and recall from \cref{fact:prop_Krsn} that
\[|K_{r,2,n}|=\frac{r+1}{2}n.\]
Now it suffices to satisfy
\[\frac{r-s}{2}d_{r,2}^n(4r)^{|I|}\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}-1\right)!\leq q^{|I|}\left(\frac{r+1}{2}n\right)^{-c/(r+1)}\frac{r-s}{2}d_{r,2}^n(n-1)!.\]
By Stirling's approximation, for sufficiently large $n$, it suffices to satisfy
\begin{align*}
2(4r)^{|I|}\frac{1}{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}\sqrt{2\pi\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)}\left(\frac{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}{e}\right)^{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}\\
\leq q^{|I|}\left(\frac{r+1}{2}n\right)^{-c/(r+1)}\frac{1}{n}\sqrt{2\pi n}\left(\frac{n}{e}\right)^n.
\end{align*}
Rearranging,
\begin{align*}
q\geq\left(2\sqrt{n/\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)}\right)^{1/|I|}4re^{\frac{2}{r+1}}\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)^{-2/(r+1)}\\
\cdot\left(\frac{e(r+1)n}{2\left(n-\frac{2}{r+1}|I|-\frac{c}{r+1}\right)}\right)^{\frac{c}{(r+1)|I|}}\left(\frac{n-\frac{2}{r+1}|I|-\frac{c}{r+1}}{n}\right)^{n/|I|}.
\end{align*}
Now, consider each term here.
The first term tends to $1$ as $|I|$ goes to infinity and is at most $2$ in any case, so we simply bound it by $2$.
The term $4re^{\frac{2}{r+1}}$ is also just a constant.
By the bound on $|I|$, the next term is always bounded from above by $(n/2)^{-2/(r+1)}$.
By the bound on $|I|$, the next term (the first in the second line) can be bounded from above by $(e(r+1))^{c/((r+1)|I|)}\leq(e(r+1))^{1/(r+1)}$, where the large parenthesis in the denominator is always at least $n/2$ and $c\leq|I|$.
The last term can be bounded from above by $1$.
Combining all of these, it suffices to have
\[q\geq c_2n^{-2/(r+1)},\]
where $c_2$ depends only on $r$.}.
\end{proof}
\begin{proof}[Proof of \cref{thm:Kr-cycle}]
By \cref{lem:spread}, we have that $\mathcal{C}_{r,2,n}$ is $(O(n^{-2/(r+1)}),1/(r+1),1/(r(r+1)))$-superspread.
By \cref{thm:main} it follows that, if $C$ is sufficiently large, for $p\ge Cn^{-2/(r+1)}$ we have
\[
\mathbb{P}\left[K_{r,2,n}\subseteq G(n,p)\right]\ge 1/2.
\]
To finish the argument, one can employ a general result of Friedgut~\cite{Fri05} (see also~\cite{NS20}) which allows one to establish that
$\mathbb{P}\left[K_{r,2,n}\subseteq G(n,(1+o(1))p)\right]=1-o(1)$.
\end{proof}
\section{Concluding remarks}\label{sec:conclude}
\subsection{Dense overlapping \texorpdfstring{$K_r$}{Kr}-cycles}
As mentioned in the introduction, a general result of \citet{Ri00} provides a sufficient condition for a spanning graph to be contained in $G(n,p)$.
For a graph $H=(V,E)$, let $v(H)\coloneqq|V|$ and $e(H)\coloneqq|E|$.
For each integer $v$, let $e_H(v)\coloneqq\max \{ e(F) : F \subseteq H, v(F)=v \}$.
Then, the following parameter will be responsible for the upper bound on the threshold for the property that $H\subseteq G(n,p)$:
\begin{align*}
\gamma(H) \coloneqq \max_{3 \le v \le n} \left\{ \frac{e_H(v)}{v-2} \right\}.
\end{align*}
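For small graphs, $\gamma(H)$ can be computed by exhaustive search. The sketch below is an illustrative brute-force computation of ours (exponential in $v(H)$, so only feasible for tiny examples); for $H=K_5$ it returns $\max\{3/1,\,6/2,\,10/3\}=10/3$.
\begin{verbatim}
from itertools import combinations

def gamma(nodes, edges):
    # max over 3 <= v <= n of e_H(v)/(v - 2), where e_H(v) is found by
    # scanning all v-subsets of the vertex set
    best = 0.0
    for v in range(3, len(nodes) + 1):
        e_v = max(sum(1 for a, b in edges if a in S and b in S)
                  for S in map(set, combinations(nodes, v)))
        best = max(best, e_v / (v - 2))
    return best

K5_nodes = list(range(5))
K5_edges = list(combinations(K5_nodes, 2))
print(gamma(K5_nodes, K5_edges))    # 10/3 = 3.333...
\end{verbatim}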
Riordan proved the following (see also~\cite{PP15} for its generalization to hypergraphs).
\begin{theorem}\label{thm:Riordan}
Let $H=H^{(i)}$ be a sequence of graphs with $n=n(i)$ vertices (where $n$ tends to infinity with $i$), $e(H)=\alpha \binom{n}{2}$ edges, where $\alpha=\alpha(n)$, and maximum degree $\Delta=\Delta(H)$.
Let $p = p(n)\colon \mathbb{N}\to [0,1)$.
If $H$ has a vertex of degree at least $2$ and $n p^{\gamma(H)} \Delta^{-4} \rightarrow \infty$, then a.a.s.~the random graph $G(n,p)$ contains a copy of $H$.
\end{theorem}
From \cref{fact:prop_Krsn} it follows that $\gamma(K_{r,s,n})\ge \frac{r+s-1}{2}$ and, since $\gamma(K_r)>\frac{r+s-1}{2}$ for $s\in[2]$, Riordan's theorem does not provide optimal bounds on the threshold in the case of our \cref{thm:Kr-cycle} (nor for $K_{r,1,n}$, for which the threshold was determined by \citet{Frieze20}).
However, in the cases for $s\ge 3$, Riordan's theorem suffices and yields the correct threshold $n^{-2/(r+s-1)}$ for the property that $G(n,p)$ contains a copy of $K_{r,s,n}$.
\subsection{Extensions: hypergraphs and rainbow thresholds}
Throughout this paper, for simplicity, we have focused on properties of random graphs.
However, we believe that \cref{thm:main} extends to random hypergraphs without much issue.
Very recently, Frieze and Marbach~\cite{FM21} extended the results from~\cite{FKNP19,KNP20} to rainbow versions, where the vertices of some $r$-uniform hypergraph are colored randomly with $r$ colors.
It is then shown in~\cite{FM21} that the upper bounds on the thresholds proved in~\cite{FKNP19,KNP20} remain asymptotically the same when one asks for a rainbow hyperedge or a rainbow copy of a spanning structure (e.g., a bounded degree spanning tree or the square of a Hamilton cycle), and the result is also extended to a rainbow version of the containment of the $k$-th power of a Hamilton cycle.
We believe that the fragmentation lemma, \cref{lem:fragmentation}, and \cref{thm:main} also admit rainbow versions.
\bibliographystyle{mystyle}
https://arxiv.org/abs/2010.07716 | Balanced Colorings and Bifurcations in Rivalry and Opinion Networks | Balanced colorings of networks classify robust synchrony patterns -- those that are defined by subspaces that are flow-invariant for all admissible ODEs. In symmetric networks the obvious balanced colorings are orbit colorings, where colors correspond to orbits of a subgroup of the symmetry group. All other balanced colorings are said to be exotic. We analyze balanced colorings for two closely related types of network encountered in applications: trained Wilson networks, which occur in models of binocular rivalry, and opinion networks, which occur in models of decision making. We give two examples of exotic colorings which apply to both types of network, and prove that Wilson networks with at most two learned patterns have no exotic colorings. We discuss how exotic colorings affect the existence and stability of branches for bifurcations of the corresponding model ODEs. | \section{Introduction}
We work in the `coupled cell' network formalism of~\cite{GS06, GST05, SGP03},
which should be consulted for precise definitions and proofs. Section~\ref{S:BCA} provides
a short summary. Networks
consist of {\em nodes} connected by {\em arrows} (directed edges), both of
which are partitioned into {\em types}. Each node represents a dynamical
system, and arrows indicate couplings between these systems. Identical
types determine identical dynamics or couplings. It is sometimes convenient to interpret
a node as an `internal arrow' from that node to itself. Multiple arrows and
self-loops are permitted; indeed, they are required to simplify the theory.
Robust patterns of synchrony in networks are classified by {\em balanced colorings} of
the nodes. A coloring assigns a color to each node, and it is balanced if nodes of the same color
have color-isomorphic input sets --- that is, the same number of arrows of each type with
tail nodes of a given color. To each coloring is associated a subspace of the state space,
called a {\em synchrony subspace},
in which the coordinates of nodes of a given color are all equal.
This synchrony subspace is invariant for the dynamics of any ODE whose
structure is compatible with the network architecture --- that is, for
all {\em admissible ODEs} --- if and only if the coloring is balanced.
The existence of these universal flow-invariant subspaces has strong implications
for bifurcations of admissible ODEs, because states lying in such a subspace
have the synchrony pattern specified by the coloring, and can be found by
restricting the ODE to the subspace. Such restrictions are precisely the admissible ODEs for
the {\em quotient network}, obtained by identifying nodes with the same color and preserving
the colored input structure. Sets of synchronous nodes are often called {\em clusters} \cite{PSHMR13},
and the restricted ODE on the quotient network describes the dynamics of the clusters.
Synchrony for all admissible ODEs may seem a strong condition,
but weaker forms of synchrony that persist
under small perturbations of the ODE also correspond to balanced colorings.
This has been proved for hyperbolic equilibria (\cite[Theorem 7.6]{GST05}, \cite[Theorem 6.1]{S20rigideq}), and for hyperbolic periodic states (\cite[Theorem 6.1]{GRW10})
with a technical assumption on the network architecture:
see~\cite[Appendix]{S20rigideq}.
It is plausibly conjectured to be true for all
hyperbolic periodic states~\cite[Section 10]{GS06}, and for more complex dynamic trajectories under some
kind of assumption about persistence of the underlying attractor under small perturbations.
A central issue in this case is to find a suitable definition of this type of persistence.
In networks with symmetry, every subgroup $\Sigma$ of the symmetry group $\Gamma$ defines a
balanced coloring, where colors correspond to the orbits of $\Sigma$. We call such
a coloring an {\em orbit coloring}. The synchrony
space is then the fixed-point space of $\Sigma$. However, {\em exotic} balanced colorings that
are not of this form can exist for some symmetric networks.
This was pointed out in~\cite{GNS04} for a 12-node bidirectional ring
with $\mathbb{D}_{12}$ symmetry and for certain $2$-color lattice patterns. Other examples are discussed
in~\cite{AS07, AS08}, and for planar square and hexagonal lattices
in~\cite{S19, SGo19,SGo20}.
\begin{figure}[htb]
\centerline{
\includegraphics[height=1.8in]{5x4groupnet.pdf}\qquad\qquad
\includegraphics[height=1.97in]{5x4inputs.pdf}
}
\caption{{\em Left}: Group network for $\mathbb{S}_4 \times \mathbb{S}_5$ acting on a $4 \times 5$
array. Only a representative set of connections shown. {\em Right}:
The four arrow-types. Arrowheads not shown; only inputs to node $(1,1)$ depicted. Colors of tail nodes and forms of edges indicate arrow-type: (a) red/internal arrow, (b) blue/solid, (c) yellow/dashed, (d) green/gray.}
\label{F:5x3gpnet}
\end{figure}
Here we study networks whose nodes form an $m \times n$ array.
We assume the symmetry group is $\mathbb{S}_m \times \mathbb{S}_n$, where
$\mathbb{S}_m$ permutes the $m$ rows and $\mathbb{S}_n$ permutes the $n$ columns, so
all nodes have the same type.
Figure~\ref{F:5x3gpnet} (left) is a schematic illustration of the corresponding
{\em group network} ${\mathcal G}_{mn}$ in the sense of~\cite[Section 2]{AS07}.
There is one arrow for each ordered pair $(i,j)$ of nodes, with
head $i$ and tail $j$. The type of the arrow is given by the orbits of such pairs
under the group action; arrows $(i,i)$ correspond to the `internal arrow' of node $i$
and are represented by the node symbol, not an arrow as such.
Figure~\ref{F:5x3gpnet} (right) is a schematic illustration of the inputs
to node $(1,1)$. There are four types of arrow:
(a) Internal node arrow: circles.
(b) Row arrows: solid black lines.
(c) Column arrows: dashed black lines.
(d) Diagonal arrows between nodes in neither the same row nor the same column: grey lines.
Admissible maps for a symmetric network are always equivariant under its symmetry group,
but the converse is false in general, \cite[Section 3.1]{AS07}. In equivariant
dynamical systems, every subspace that is flow-invariant for all equivariant ODEs is a
fixed-point subspace~\cite[Theorem 4.1]{AS06}. The possibility
of exotic colorings can therefore
be attributed to the difference between equivariant and admissible maps.
The synchrony subspace of an exotic balanced coloring supports a
pattern that is not an orbit coloring, hence not detected by the usual
methods of equivariant dynamics.
The main point of this paper is the existence of exotic colorings
in the networks ${\mathcal G}_{36}$ and ${\mathcal G}_{55}$, shown below in
Figures~\ref{F:3x6wilson} and \ref{F:5x5latin} respectively. (Similar methods
can presumably extend this result to many larger values of $m$ and $n$.)
To complement these results, we also investigate networks arising in
a model of binocular rivalry, showing that when these networks are trained
on only one or two images, no exotic colorings exist.
We postpone discussion to Section~\ref{S:WRN},
after the models concerned have been introduced.
Finally, in Section~\ref{S:BEP} we discuss implications for bifurcations of admissible
ODEs on the networks ${\mathcal G}_{mn}$. Exotic colorings indicate the presence of
flow-invariant subspaces that are not detected by the usual method for
finding bifurcating branches of symmetric dynamical systems, namely
the Equivariant Branching Lemma. This proves the existence of
certain symmetry-breaking patterns. The analogous phenomenon of synchrony-breaking
patterns in networks can lead to exotic patterns, not predicted by their symmetries.
Taking network constraints into account, as well as
symmetry, gives a broader picture of the dynamics and bifurcations.
\section{Networks Occurring in Applications}
Our results apply to
two closely related classes of symmetric networks occurring
in applications. Both are variations on networks proposed by Wilson~\cite{W03, W07, W09}
to model interocular grouping, and more generally, high-level
decision-making in the brain, and are called {\em Wilson networks}.
An {\em untrained} Wilson network is a rectangular $m\times n$ array of
identical nodes whose columns are all-to-all connected by identical inhibitory arrows. The symmetry
group of this network is not $\mathbb{S}_m\times\mathbb{S}_n$, but the
larger wreath product $\mathbb{S}_m \wr \mathbb{S}_n$ (permute the nodes in each column in any manner, and permute
the columns setwise). Figure~\ref{F:wilson_net_un} shows (schematically)
an untrained $5 \times 8$ Wilson network. For clarity, the dotted lines indicate
sets of nodes --- the columns --- with identical all-to-all coupling.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=3in]{wilson_net_un.pdf}}
\caption{Untrained $5 \times 8$ Wilson network. Dotted lines indicate
sets of nodes with identical all-to-all coupling. Columns are decoupled from each other.}
\label{F:wilson_net_un}
\end{figure}
\subsection{Binocular Rivalry}
The discovery of (binocular) rivalry is credited to Giambattista della Porta~\cite{P1593} in 1593.
He placed two books so that each eye saw only one, and stated that he was able to
read from either book at will by moving the `visual virtue' (attention) from one eye
to the other. Closely related but distinct phenomena are visual illusions,
where a single image can be perceived
in more than one way; the classic example is the Necker cube of 1832, Necker~\cite{N32}, which alternates between two apparent orientations. Numerous authors have modeled
rivalry and illusions using a network with two nodes, representing the two percepts \cite{C10a,CSRR08,KM00,LC02,L88,M84,M90,NVNV07,SC11}.
Modern experiments on rivalry show a different image to each eye and ask the subject
to report what they perceive.
In simple cases, these percepts alternate between those two images.
In some studies, however,
some subjects perceive not only the original two images,
but images that combine aspects of those images in new ways~\cite{KPYF96,SSH08,SG02,TMB06}.
The occurrence of these {\em derived images} can be modelled using trained
Wilson networks~\cite{DGMW12, DGW13, DG14}, also called rivalry networks. Experimental evidence supporting such models
has been obtained in~\cite{GZWL19}, which reports data on
the 4-location experiment proposed in~\cite{TMB06},
obtaining results consistent with those predicted by a trained Wilson network model.
The model for a given pair of images begins with an untrained Wilson network.
Each column represents an
{\em attribute} of the image, such as the orientation or color of some feature. The nodes
in a given column represent {\em levels} of that attribute, such as specific orientations or colors.
The initial Wilson network is {\em trained} by adding
excitatory arrows representing the {\em learned patterns}
shown to the two eyes. A learned pattern is represented by a choice
of one node from each column --- the appropriate level of that attribute --- connected all-to-all by identical excitatory arrows.
In standard model ODEs, the state of each node $i$ is defined by two variables $(x_i^A, x_i^H) \in \mathbb{R}^2$.
The first is an {\em activity} variable, the second a {\em fatigue} variable.
Derived patterns emerge from the dynamics of an admissible ODE for the trained Wilson network.
A balanced coloring of a Wilson network defines a {\em fusion state} in which the
percept is ambiguous. If the fusion state becomes unstable,
symmetry-breaking states can bifurcate.
Such bifurcations are usually analysed using equivariant (symmetric)
bifurcation theory. In the models under discussion, the relevant bifurcating branches
consist of oscillatory states, created through an equivariant version of Hopf bifurcation.
The main existence theorem here is the
Equivariant Hopf Theorem~\cite[Chapter XVI Theorem 4.1]{GSS88},
which guarantees the existence of branches with certain spatiotemporal symmetries ---
namely, $\mathbb{C}$-axial subgroups, which are those having 2-dimensional fixed-point spaces
in a Liapunov-Schmidt reduction. Spatiotemporal symmetries describe patterns of
synchrony and phase relations between nodes.
The interpretation of an oscillatory state
associates the perceived image with the node or nodes whose activity variables
are largest among all nodes. Changes in the ordering of activity levels
indicate transitions between different percepts.
However, the relation of symmetry to the network structure is
subtle. In particular, exotic colorings predict behavior
in such models that is not deducible from symmetries alone. So, potentially,
exotic colorings can lead to branches with new types of spatiotemporal symmetry.
The main results of this paper for trained Wilson networks are:
\begin{itemize}
\item
Every balanced coloring is an orbit coloring for the automorphism group
of the resulting network when the network has
been trained on 0, 1, or 2 learned patterns.
\item
The automorphism group concerned
depends on the attribute levels of the images used to train the network.
\item There exist exotic colorings when the network has
been trained on 3 or more learned patterns.
\end{itemize}
Thus there can exist spatiotemporal patterns --- in particular, synchrony
patterns of oscillatory states and associated phase patterns --- that are not predicted by
the Equivariant Hopf Theorem. These patterns have not yet been investigated
in any detail.
\subsection{Decision Making}
The notion of decision making is very broad.
Virtually any collective system of active agents must make decisions.
The agents can, for instance, be humans, birds in a flock, fish in a school,
bees, bacteria, neurons, or autonomous robots.
As a preliminary step, agents form `opinions' about a set of possible options;
that is, preferences that affect the system's behavior. These opinions collectively
determine the eventual decision at system level. Examples of
decision making systems include:
\begin{itemize}
\item
Social and economic decisions by humans based on
different types of electoral and information-sharing systems.
\item Hunting strategies in past and present hunter-gatherer communities.
\item Bee colonies collectively deciding on a new nest site.
\item
Animal groups collectively deciding when, and in which direction, to move --- for example when
approaching two possible food sources.
\item
Neurons in lower brain areas integrating sensory inputs to perform perceptual and
motor behavior decision making.
\item
Neurons in higher brain areas integrating
sensorimotor information to make higher-level decisions.
\item
Bacteria and other social microorganisms collectively deciding --- for instance, by
quorum sensing --- how and when to undergo phenotypic differentiation in response
to environmental signals.
\item
Swarms of autonomous robots settling on a collective strategy to accomplish some task.
\end{itemize}
The same mathematical models often apply to a variety of such
real-world decision-making systems.
Here we focus on
networks very similar to Wilson networks that have been introduced into
models of decision making~\cite{FGBL20}. Again
the nodes of the network form an $m \times n$ array, but now the
columns represent options, the rows represent agents, and the state of
a node represents the agent's opinion about that option.
Connections between nodes are of three types: distinct nodes in the same row,
distinct nodes in the same column, and all other pairs of distinct nodes.
The network has $\mathbb{S}_m\times\mathbb{S}_n$ symmetry where $\mathbb{S}_m$ permutes
the $m$ rows and $\mathbb{S}_n$ permutes the $n$ columns.
The analysis of~\cite{FGBL20} focuses on steady-state symmetry-breaking bifurcation
from a fully symmetric equilibrium, using the
Equivariant Branching Lemma~\cite{Ci81} (see also~\cite[Lemma 1.3.1]{GS02} or \cite[Chapter XIII Theorem 3.3]{GSS88}). This result states that, subject
to certain genericity conditions, branches exist for all {\em axial} subgroups of the symmetry
group --- those with 1-dimensional fixed-point spaces. Classifying the axial subgroups
therefore leads to a list of branches breaking symmetry in various ways. In this
application the corresponding orbit colorings classify patterns of equal opinions
about various options for various agents. These patterns govern agreements
(consensus) and disagreements (dissensus) among the agents.
The states obtained using the Equivariant Branching Lemma need not
exhaust the possible bifurcating branches, even for equivariant dynamical
systems, but they provide useful information on symmetry-breaking states
that are guaranteed to occur. In networks, the difference between
equivariant and admissible ODEs can create branches that do not occur
in general equivariant systems. In particular, exotic balanced colorings
predict new branches that do not appear in any classification by isotropy subgroups.
The main result of this paper for opinion networks is:
\begin{itemize}
\item The examples of exotic colorings for trained Wilson networks can
also be interpreted as exotic colorings for opinion networks.
\end{itemize}
\section{Balanced Colorings and Automorphisms}
\label{S:BCA}
We begin by recalling some basic concepts of the `coupled cell' network
formalism~\cite{GST05, SGP03}.
\begin{definition} \label{D:network_diag} \rm
A {\em network diagram} is a labelled directed graph that has four ingredients:
\begin{itemize}
\item[\rm (1)]
A finite set of {\em nodes} (or {\em cells}) ${\mathcal C} = \{1, 2, \ldots, n\}$.
\item[\rm (2)]
A {\em node symbol} assigned to each node. Nodes with the same symbol are said to be {\em identical}.
\item[\rm (3)]
A finite set $\AA$ of {\em edges} or {\em arrows}. Each arrow $e$ points from its
{\em tail node} $\mathcal{T}(e)$ to its {\em head node} $\mathcal{H}(e)$.
\item[\rm (4)]
An {\em arrow symbol} assigned to each arrow. Arrows with the same symbol are said to be {\em identical}.
\end{itemize}
\end{definition}
The formal theory of networks goes on to define a class of {\em admissible maps}
and associated {\em admissible ODEs} associated with the network. Essentially,
these are the maps or ODEs that respect the network structure.
The {\em automorphism group} $\mathrm{aut}\,({\mathcal G})$ of a network ${\mathcal G}$
consists of all permutations of the nodes ${\mathcal C}$ that preserve the number and type
of arrows between each pair of nodes.
The {\em input set} $I(c)$ consists of all input arrows to node $c$.
An {\em input isomorphism} $\beta:I(c) \rightarrow I(d)$ is a bijection
that preserves arrow-type.
A {\em coloring} of ${\mathcal G}$ is a partition of ${\mathcal C}$, or equivalently
a map $K:{\mathcal C} \rightarrow {\mathcal K}$ where ${\mathcal K}$ is a set of colors.
A coloring is {\em balanced} if, whenever nodes $c, d$ have the same color,
there is an input isomorphism $\beta:I(c) \rightarrow I(d)$ such that
$\mathcal{T}(e)$ and $\mathcal{T}(\beta(e))$ have the same color for all arrows $e \in \AA$.
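Concretely, balance amounts to requiring that nodes of the same color receive the same multiset of pairs (arrow-type, tail color). A minimal Python sketch of this check, under our own (assumed) encoding of arrows as triples (tail, head, type), is:
\begin{verbatim}
from collections import Counter

def is_balanced(arrows, coloring):
    # arrows: iterable of (tail, head, arrow_type) triples;
    # coloring: dict node -> color.  The coloring is balanced iff nodes
    # of equal color have identical input multisets of the pairs
    # (arrow_type, tail_color), i.e. color-isomorphic input sets.
    profile = {node: Counter() for node in coloring}
    for tail, head, typ in arrows:
        profile[head][(typ, coloring[tail])] += 1
    seen = {}
    for node, color in coloring.items():
        if color in seen and seen[color] != profile[node]:
            return False
        seen.setdefault(color, profile[node])
    return True
\end{verbatim}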
As already remarked, a coloring is balanced if and only if it
defines a synchrony subspace that is flow-invariant for
all admissible ODEs, and the dynamics of the resulting clusters is given by
the restricted admissible ODE on the {\em quotient network} whose nodes
represent the clusters.
A subgroup $\Sigma \subseteq \mathrm{aut}\,({\mathcal G})$ defines a coloring
$K^\Sigma$, where colors are determined by $\Sigma$-orbits on ${\mathcal C}$.
That is, nodes have the same color if and only if they are in the same
$\Sigma$-orbit. We call this the {\em orbit coloring} for $\Sigma$.
\begin{proposition}
Every orbit coloring is balanced.
\end{proposition}
\begin{proof}
This is~\cite[Proposition 3.3]{AS06}, stated using different terminology.
In that paper, `orbit coloring' is replaced by `fixed-point coloring'.
\end{proof}
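Orbit colorings can likewise be computed mechanically: the orbits of the generated group are the connected components of the graph joining each node to its images under the generators. A small union-find sketch of ours (an illustration, with generators encoded as dictionaries):
\begin{verbatim}
def orbit_coloring(nodes, generators):
    # generators: list of dicts node -> node, each a permutation of
    # `nodes`.  Two nodes get the same color iff they lie in the same
    # orbit of the generated group; orbits are the connected components
    # of the graph joining v to g(v), computed here by union-find.
    parent = {v: v for v in nodes}
    def find(v):
        root = v
        while parent[root] != root:
            root = parent[root]
        parent[v] = root
        return root
    for g in generators:
        for v in nodes:
            ru, rv = find(v), find(g[v])
            if ru != rv:
                parent[ru] = rv
    return {v: find(v) for v in nodes}
\end{verbatim}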
The {\em isotropy subgroup} $\Sigma^K$ of a coloring $K$
is the subgroup of $\mathrm{aut}\,({\mathcal G})$ that leaves every part of $K$ setwise
invariant, where $K$ is considered as a partition of ${\mathcal C}$.
The next proposition is simple but useful:
\begin{proposition}
\label{P:iso_col}
A coloring is an orbit coloring if and only if it is the
same (up to choice of colors) as the orbit coloring for
$K^\Sigma$, where $\Sigma$ is its isotropy subgroup.
\end{proposition}
\begin{proof}
If $K= K^\Sigma$ then $K$ is an orbit coloring. Conversely,
let $K$ be the orbit coloring for a subgroup $\Omega$.
Then $\Omega \subseteq \Sigma$ so $\mathrm{Fix}(\Sigma) \subseteq \mathrm{Fix}(\Omega)$.
Let $c,d$ be in the same $\Sigma$-orbit. Since $\Sigma$ fixes
the colors, $c$ and $d$ have the same color. Therefore
$\mathrm{Fix}(\Sigma) = \mathrm{Fix}(\Omega)$ and $K^\Sigma = K$.
\end{proof}
The next result is useful because some of the applications we discuss
use networks like ${\mathcal G}_{mn}$ but with arrows of type (d) deleted.
\begin{lemma}
\label{L:delete}
A coloring $K$ is balanced for ${\mathcal G}_{mn}$ provided it is balanced
when type (d) arrows are deleted and type (a) `arrows' are ignored.
\end{lemma}
\begin{proof}
By~\cite[Lemma 5.4]{S07} a coloring is balanced if and only if it is balanced for each type
of arrow separately --- that is, in the network where arrows of all other types are deleted.
This follows by counting arrows with tail node of a given color, and noting that
input isomorphisms preserve arrow-type. If an input isomorphism also
preserves colors of tail nodes, it must preserve colors of tail nodes of arrows of
each type separately.
For ${\mathcal G}_{mn}$, balance for type (a) internal arrows is trivial.
Moreover, balance for type (d) arrows is a consequence of balance for
both types (b) and (c), because there is a unique arrow for each pair $(i,j)$ and
it therefore has a unique type.
\end{proof}
\section{Existence of Exotic Colorings}
We now give two examples of exotic balanced colorings in networks
${\mathcal G}_{mn}$ when $(m,n) = (3,6)$ and $(5,5)$.
\subsection{The Network ${\mathcal G}_{36}$}
The first example is ${\mathcal G}_{36}$, Figure~\ref{F:3x6wilson}, where we
omit type (d) arrows and show types (b) and (c) schematically.
By Lemma~\ref{L:delete} the omission of type (d) arrows does not affect balance.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2.7in]{3x6wilson.pdf}}
\caption{Exotic balanced coloring of ${\mathcal G}_{36}$. Type (d) arrows omitted,
without loss of generality.}
\label{F:3x6wilson}
\end{figure}
\begin{theorem}
The coloring in Figure~{\em \ref{F:3x6wilson}} is balanced, but is not an orbit coloring.
\end{theorem}
\begin{proof}
It is easy to prove that the pattern in Figure~\ref{F:3x6wilson} is balanced.
Each row contains the same colors with the same multiplicities, so
it is balanced with respect to the type (b) connections.
Columns either have the same colors with the same multiplicities,
or a disjoint set of colors. So the pattern is balanced with respect to the type (c)
connections. Therefore it is balanced.
We claim it is not an orbit coloring.
Suppose for a contradiction that it is the orbit coloring of a subgroup $\Sigma \subseteq \mathbb{S}_3 \times \mathbb{S}_6$. Because the colors in regions $A$ and $B$ are disjoint, we must have
\[
\Sigma \subseteq \mathbb{S}_3^R \times \mathbb{S}_3^A \times \mathbb{S}_3^B
\]
where $\mathbb{S}_3^R$ permutes the rows, $\mathbb{S}_3^A$ permutes
columns $1,2, 3$ and $\mathbb{S}_3^B$ permutes columns $4,5,6$. We identify these groups
with the same group $\mathbb{S}_3$ and use $R, A, B$ to indicate
which direct factor is intended.
We ask when an element $(\alpha,\beta,\gamma) \in \mathbb{S}_3^R \times \mathbb{S}_3^A \times \mathbb{S}_3^B$
fixes the pattern. This gives its isotropy subgroup $\Sigma$. Then we show that
the pattern is not given by the orbits of $\Sigma$.
Regions $A$ and $B$ in the figure (columns 1--3 and 4--6)
have disjoint sets of colors. The isotropy subgroup $\Sigma$
must preserve these colors, hence it must preserve regions
$A$ and $B$. Therefore $\Sigma$ is the intersection of
the isotropy subgroups $\Sigma^A$ and $\Sigma^B$ of the patterns in regions $A$ and $B$.
The isotropy group $\Sigma^A$ clearly consists of all elements of the form
\[
(\alpha, \alpha, \gamma) \quad \alpha \in \mathbb{Z}_3, \gamma \in \mathbb{S}_3
\]
The isotropy group $\Sigma^B$ clearly consists of all elements of the form
\[
(\alpha, \beta, \alpha) \quad \alpha \in \mathbb{S}_3, \beta \in \mathbb{S}_3
\]
(Relabelling the columns of region $B$ as $1,2,3$, this action has two orbits: the set of nodes in positions $(i,i)\ (1 \leq i \leq 3)$ and
the set of nodes in positions $(i,j)\ (1 \leq i\neq j \leq 3)$. This is the 2-color
pattern in region $B$.)
Their intersection therefore consists of all elements of the form
\[
(\alpha, \alpha, \alpha) \quad \alpha \in \mathbb{Z}_3
\]
This group has order 3. It has six orbits, not five, so Proposition~\ref{P:iso_col}
implies that Figure~\ref{F:3x6wilson} is not an orbit coloring.
In fact, the orbit coloring for this group splits the six purple nodes in region $B$
into two sets of three, with different colors, such that the same color occurs
along each broken diagonal.
\end{proof}
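Although the figure is not reproduced here, the proof determines the coloring: region $A$ carries a cyclic Latin-square pattern (the broken diagonals are the $\mathbb{Z}_3$-orbits) and region $B$ the diagonal/off-diagonal pattern. Under this (assumed) reconstruction, balance can be confirmed computationally, reusing the \texttt{is\_balanced} sketch of Section~\ref{S:BCA}:
\begin{verbatim}
# Reconstruction of the coloring (assumed, but forced by the proof):
# region A is the cyclic Latin square (j - i) mod 3, region B gives the
# diagonal one color and the six off-diagonal nodes another.
coloring = {}
for i in range(3):
    for j in range(6):
        coloring[(i, j)] = (j - i) % 3 if j < 3 else \
                           (3 if i == j - 3 else 4)

# Arrows of G_{36}: type (b) row, (c) column, (d) everything else.
arrows = []
for c in coloring:
    for d in coloring:
        if c == d:
            continue
        typ = 'b' if c[0] == d[0] else ('c' if c[1] == d[1] else 'd')
        arrows.append((d, c, typ))      # arrow with tail d and head c

print(is_balanced(arrows, coloring))    # True: the 5-coloring is balanced
\end{verbatim}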
\subsection{The Network ${\mathcal G}_{55}$}
The example in the previous section involves two disjoint `blocks' of colors
in regions $A$ and $B$. By passing to more learned patterns --- five, to be precise --- we
can find an example with one block.
We use a coloring of ${\mathcal G}_{55}$ based on a Latin square to give an example of an
exotic balanced coloring. Again we
omit type (d) arrows and show types (b) and (c) schematically, appealing to
Lemma~\ref{L:delete}.
\begin{remark}\em
The classification of $3\times 3$ and $4 \times 4$ Latin squares~\cite{wiki}
shows that no analogous examples exist with 3 or 4 learned patterns.
\end{remark}
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2in]{5x5latin.pdf}}
\caption{Exotic balanced coloring of ${\mathcal G}_{55}$. Type (d) arrows omitted,
without loss of generality.}
\label{F:5x5latin}
\end{figure}
\begin{theorem}
The coloring in Figure~{\em \ref{F:5x5latin}} is balanced, but is not an orbit coloring.
\end{theorem}
\begin{proof}
This coloring is balanced because every node receives exactly one
type (b) connection and one type (c) connection from
a node of any different color.
To prove that this coloring is not an orbit coloring,
observe that the subgroup that fixes the black pattern is clearly
the diagonal subgroup
\[
\Delta = \{ (\sigma, \sigma) : \sigma \in \mathbb{S}_5\} \subseteq \mathbb{S}_5 \times\mathbb{S}_5
\]
For $\sigma \in \mathbb{S}_5$ define the permutation matrix $P_\sigma$ by
\[
(P_\sigma)_{ij} = \left\{ \begin{array}{rcl}
1 & \mbox{if} & j = \sigma(i) \\
0 & \mbox{if} & j \neq \sigma(i)
\end{array} \right.
\]
Let $M$ be a $5 \times 5$ matrix. Then $P_\sigma M$ permutes
the rows of $M$, sending row $i$ to row $\sigma^{-1}(i)$.
Similarly $MP_\sigma$ permutes
the columns of $M$, sending column $j$ to column $\sigma(j)$.
Thus an element $(\sigma, \sigma) \in \Delta$ can be considered
as acting on the space of $5 \times 5$ matrices by conjugation:
\[
M \mapsto P_\sigma MP_\sigma^{-1}
\]
This permutes both rows and columns simultaneously, via the same permutation.
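These index conventions are easy to get backwards, so we include a small numerical check (an illustration of ours using NumPy):
\begin{verbatim}
import numpy as np

def perm_matrix(sigma):
    # (P_sigma)_{ij} = 1 iff j = sigma(i)
    n = len(sigma)
    P = np.zeros((n, n), dtype=int)
    for i, j in enumerate(sigma):
        P[i, j] = 1
    return P

sigma = (1, 2, 0)                # the 3-cycle 0 -> 1 -> 2 -> 0
A = np.arange(9).reshape(3, 3)
P = perm_matrix(sigma)
# Row i of P A is row sigma(i) of A (so row k of A moves to row
# sigma^{-1}(k)), and column k of A moves to column sigma(k) in A P.
assert ((P @ A)[0] == A[1]).all()
assert ((A @ P)[:, 1] == A[:, 0]).all()
\end{verbatim}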
Define five matrices $M_j$ corresponding to the positions of nodes of a given color:
\[
M_1 = \Matrix{1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1} \quad
M_2 = \Matrix{0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0} \quad
M_3 = \Matrix{0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0}
\]
\[
M_4 = \Matrix{0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0} \quad
M_5 = \Matrix{0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0}
\]
\noindent
We already know that the permutations $(\sigma,\tau)$ that fix the
black nodes are those in $\Delta$. So $(\sigma,\sigma) \in \Delta$ fixes
all $M_j$ (which is equivalent to fixing the coloring) if and only if
\[
P_\sigma M_jP_\sigma^{-1} = M_j \quad 2 \leq j \leq 5
\]
or equivalently
\[
M_jP_\sigma = P_\sigma M_j \quad 2 \leq j \leq 5
\]
Solving these equations for a general $5\times 5$ matrix $P_\sigma$ using Mathematica, we find that
\[
P_\sigma = \Matrix{a & b & b & b & b \\ b & a & b & b & b \\ b & b & a & b & b \\
b & b & b & a & b \\ b & b & b & b & a }
\]
for $a, b \in \mathbb{R}$. This is a permutation matrix if and only if $a=1, b=0$,
so the isotropy subgroup of the coloring is the identity.
However, the orbit coloring of the trivial group assigns a different color to every node, so by Proposition~\ref{P:iso_col}
the pattern in Figure~\ref{F:5x5latin} is not an orbit coloring.
\end{proof}
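The matrices $M_1,\ldots,M_5$ determine the coloring completely, so the whole argument can be double-checked by brute force over the $120$ elements of $\Delta$. A sketch of ours, assuming the \texttt{is\_balanced} function from Section~\ref{S:BCA} is in scope:
\begin{verbatim}
from itertools import permutations

# C[i][j] = k whenever (M_k)_{ij} = 1, read off from the matrices above.
C = [[1, 2, 3, 4, 5],
     [4, 1, 2, 5, 3],
     [5, 3, 1, 2, 4],
     [3, 5, 4, 1, 2],
     [2, 4, 5, 3, 1]]
coloring = {(i, j): C[i][j] for i in range(5) for j in range(5)}

arrows = []
for c in coloring:
    for d in coloring:
        if c != d:
            typ = 'b' if c[0] == d[0] else ('c' if c[1] == d[1] else 'd')
            arrows.append((d, c, typ))
print(is_balanced(arrows, coloring))    # True: the coloring is balanced

# Elements (sigma, sigma) of Delta that fix the coloring:
fixers = [s for s in permutations(range(5))
          if all(C[s[i]][s[j]] == C[i][j]
                 for i in range(5) for j in range(5))]
print(fixers)                           # only the identity (0,1,2,3,4)
\end{verbatim}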
\section{Decision Making}
Networks ${\mathcal G}_{mn}$ occur in models of decision making~\cite{FGBL20}.
We summarize the salient features. The model assumes that a group
of $m$ equal agents is trying to decide among a set of $n$ options, initially
assumed to be of equal value to all agents.
This case is mathematically tractable because of its symmetry, and it is also important
for examining more complex real-world settings.
The equality assumptions are formalized as symmetries.
Equation (2.11) of~\cite{FGBL20} defines a specific class of admissible ODEs,
which is an equivariant
version of a more general model~\cite{Bizyaeva2020}.
The authors use the Equivariant Branching Lemma to prove the existence
of various symmetry-breaking equilibria.
\subsection{Exotic Colorings of Opinion Networks}
\label{S:exotic_decision}
We now discuss the difference between admissible maps for the network
${\mathcal G}_{mn}$ and equivariant maps for $\mathbb{S}_m\times\mathbb{S}_n$. Recall
from~\cite{GST05, SGP03} that the {\em vertex group} $B(i,i)$
at a node $i$ of a network is the group of all input automorphisms
on the set $I(i)$ of all input arrows of node $i$.
The proof of \cite[Theorem 4.4]{AS06} can easily be
adapted to show that for a symmetric network,
admissible maps are the same as equivariant maps
if and only if:
\begin{itemize}
\item[\rm (a)]
The network is all-to-all connected.
\item[\rm (b)]
Every vertex symmetry extends to an element of the symmetry group.
\end{itemize}
\noindent
When the network is ${\mathcal G}_{45}$, for example, Figure~\ref{F:5x3gpnet} (right)
shows that the vertex group of any node $c=(i,j)$ of ${\mathcal G}_{mn}$ is
\[
B(c,c) = \mathbb{S}_4 \times \mathbb{S}_3 \times \mathbb{S}_{12}
\]
Condition (a) holds in this case, but
\[
|B(c,c)| = 4!\, 3!\, 12! = 68976230400 \qquad |\mathbb{S}_4 \times \mathbb{S}_5| = 2880
\]
so condition (b) does not.
Therefore admissible maps are not the same as equivariant maps for ${\mathcal G}_{45}$.
\begin{remark}\em
It can easily be shown that the linear admissible maps for ${\mathcal G}_{mn}$ are
the same as the linear equivariant maps, so the usual conditions for local bifurcation
from a fully symmetric state are the same in both cases. However, the
detailed nonlinear structure can be different, affecting the bifurcations: see
Section~\ref{S:BEP}.
\end{remark}
The same calculation and a little extra effort proves:
\begin{theorem}
\label{T:equiv_not_admiss}
The equivariant maps for $\mathbb{S}_m\times\mathbb{S}_n$ are the same as the admissible maps
for ${\mathcal G}_{mn}$ if and only if $(m,n) = (1,n), (2,2), (2,3), (3,2)$.
\end{theorem}
\begin{proof}
The vertex group of any node has order $v = (m-1)!(n-1)![(m-1)(n-1)]!$.
This must be less than or equal to $|\mathbb{S}_m\times\mathbb{S}_n| = g = m!n!$. It is easy to show that this holds if
and only if $m = 1$ (with $n$ arbitrary) or $(m,n) = (2,2), (2,3), (2,4), (3,2), (4,2)$. We
rule out $(2,4)$ and $(4,2)$ by considering the vertex group structure, not just its order.
If $m = 1$ then $v = (n-1)!$ and we must embed $\mathbb{S}_{n-1}$ in $\mathbb{S}_n$, which is possible.
If $m=2, n=2$ we must embed $\mathbb{S}_1\times\mathbb{S}_1\times\mathbb{S}_1$ in $\mathbb{S}_2\times\mathbb{S}_2$,
which is possible.
If $m=2, n=3$ or the other way round we must embed $\mathbb{S}_1\times\mathbb{S}_2\times\mathbb{S}_2$ in $\mathbb{S}_2\times\mathbb{S}_3$, which is possible.
If $m=2, n=4$ or the other way round we must embed $\mathbb{S}_3\times\mathbb{S}_3$ in $\mathbb{S}_2\times\mathbb{S}_4$.
This is {\em not} possible: $|\mathbb{S}_3\times\mathbb{S}_3| = 36$ does not divide $|\mathbb{S}_2\times\mathbb{S}_4| = 48$, so by Lagrange's theorem no such embedding exists.
It remains to show that in the cases listed, all equivariants are admissible.
This is trivial for $m = 1$ since it holds for an $\mathbb{S}_n$ group network.
For $(2,2)$ the vertex groups are trivial. For $(2,3)$, hence also $(3,2)$, the
vertex groups are $\mathbb{Z}_2 \times \mathbb{Z}_2$ with each factor acting independently on
two arrows of a given type (solid, gray). These elements extend to
automorphisms. The networks are all-to-all connected, so conditions (a) and (b) hold,
which completes the proof.
\end{proof}
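The order inequality in the first paragraph of the proof is easily confirmed by a brute-force scan (an illustrative check of ours):
\begin{verbatim}
from math import factorial as fact

def vertex_group_order(m, n):
    # |B(c,c)| = (m-1)! (n-1)! ((m-1)(n-1))!
    return fact(m - 1) * fact(n - 1) * fact((m - 1) * (n - 1))

ok = [(m, n) for m in range(1, 8) for n in range(1, 8)
      if vertex_group_order(m, n) <= fact(m) * fact(n)]
print(ok)   # (1,n) for every n, plus (2,2),(2,3),(2,4),(3,2),(4,2)
\end{verbatim}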
This theorem suggests that there is scope for
exotic colorings of opinion networks to exist. We confirm this by giving
two examples of such colorings.
\begin{example}\em
In this example $m=3, n = 6$ or $m=6, n = 3$.
The $3 \times 6$ array in Figure~\ref{F:3x6wilson} can be interpreted as an opinion network with
$3$ agents and $6$ options --- or, alternatively, $6$ agents and $3$ options. The trained Wilson network includes arrows of types (a), (b), (c) above, and we have seen that it is balanced for
each type separately. The opinion network has a fourth type of arrow (d).
By Lemma~\ref{L:delete}, the same coloring is
also balanced for type (d) arrows, so it is balanced for the opinion network.
This can also be checked directly by counting all input nodes of a given color.
\end{example}
\begin{example}\em
In the second example $m=5, n = 5$.
The $5\times 5$ network of Figure~\ref{F:5x5latin} can be interpreted as
an opinion network, and again gives an exotic coloring.
\end{example}
It seems highly likely that similar constructions provide exotic patterns
(both for trained Wilson networks and opinion networks) where
$(m,n) = (k, 2k)$ for $k \geq 4$ and $(m,n) = (k, k)$ for $k \geq 6$.
We have not investigated this question.
\section{Wilson and Rivalry Networks}
\label{S:WRN}
Recall that an untrained Wilson network ${\mathcal W}$ consists of $n$ disjoint columns,
as in Figure~\ref{F:wilson_net_un}.
Each column is an all-to-all connected network (each pair of distinct nodes is connected
in each direction by an arrow) in which all arrows are equivalent
and inhibitory. Arrows in distinct columns are equivalent, and each column contains
the same number $m$ of nodes. The nodes form an $m \times n$ array, and we
label nodes by pairs $(i,j)$ where $1 \leq i \leq m, 1\leq j \leq n$.
\begin{lemma}
The automorphism group of ${\mathcal W}$ is the wreath product
\[
\Gamma = \mathbb{S}_m \wr \mathbb{S}_n
\]
\end{lemma}
\proof
The assertion is a formal statement that automorphisms can permute the nodes
in each column, independently, and also permute columns.
\qed
\subsection{Colorings of Untrained Wilson Networks}
We will prove that Wilson networks trained on 0, 1, or 2 patterns
do not have exotic colorings. To do so, we require the following definition:
\begin{definition}\em
A partition of nodes is {\em color-disjoint} if nodes of the same color
belong to the same part. That is, the colors refine the partition.
\end{definition}
We also need the following proposition, which has some interest in its own right:
\begin{proposition}
\label{P:rectangles}
Let ${\mathcal W}$ be an untrained Wilson network.
Every balanced coloring of ${\mathcal W}$ is the orbit coloring for
an isotropy subgroup of $\Gamma$.
\end{proposition}
\proof
All nodes of ${\mathcal W}$ are input equivalent.
Consider a coloring and let its isotropy subgroup be $\Sigma$.
We claim that the coloring is the orbit coloring of $\Sigma$;
that is, colors are determined by the $\Sigma$-orbits.
If two columns contain a node with the same color, balance implies that every
color occurs in each column with the same multiplicity. Using permutations
of the columns, we can arrange all columns that share a color so that they are next
to each other. This defines a partition of the set of columns. Call its parts `blocks'.
Within any given block, we can permute the nodes in those columns
in any manner using $\Gamma$. Since colors occur with equal multiplicities,
we can use such permutations to arrange the colors in rectangles,
as in Figure~\ref{F:wilson_net_0}.
Call this the {\em standard form} of the coloring. It lies in the
$\Gamma$-orbit of the original coloring because
the permutations concerned lie in $\Gamma$. Their effect produces a coloring
whose isotropy subgroup $\Sigma'$ is conjugate to $\Sigma$, so
without loss of generality we can consider a coloring in standard form.
We claim that the orbit coloring of $\Sigma'$
is the standard coloring. By definition the standard coloring is
fixed by $\Sigma'$. We must show that no finer coloring is fixed
by $\Sigma'$.
Since different blocks of columns
are color-disjoint, it is enough to consider one block.
Within that block, the rectangles are color-disjoint,
so it is enough to consider a single rectangle.
This rectangle is a Wilson network in its own right. Since all nodes have the same color,
this is the orbit coloring for a subgroup of $\Sigma$ isomorphic to
$\mathbb{S}_p \wr \mathbb{S}_q$, where the rectangle has $p$ rows and $q$ columns.
\qed
\begin{figure}[htb]
\centerline{%
\includegraphics[width=3in]{wilson_net_0.pdf}}
\caption{Partitioning an untrained Wilson network by a balanced coloring. Dotted lines indicate
inhibitory connections within columns. Connections along each line
are all-to-all, but for clarity they are shown only for adjacent nodes.}
\label{F:wilson_net_0}
\end{figure}
\subsection{Training with Patterns}
Define
\[
M = \{ 1, \ldots, m\} \qquad
N = \{ 1, \ldots, n\}
\]
so that nodes are labelled by $M \times N$. Here $M$ parametrizes the rows
and $N$ the columns.
A {\em pattern} is a function
\[
f:N \rightarrow M \times N
\]
such that for all $n \in N$ we have $f(n) = (g(n), n)$ for some $g(n) \in M$.
Then $g$ is a {\em section} of $M \times N$. We define ${\mathcal C}^f$ to be
the set of nodes of the form $(f(n), n)$ for $n \in N$.
We represent the learning of pattern $f$ by ${\mathcal W}$ as the introduction of new
excitatory arrows, leading to a network ${\mathcal W}^f$. The new arrows are all
identical, and one arrow connects each ordered pair of distinct nodes of ${\mathcal C}^f$.
Learning a set of patterns $F=\{f_i\}$ is represented by adding one set of such
arrows for each $f_i \in F$. The resulting network, denoted by ${\mathcal W}^F$, is a
{\em trained Wilson network} or {\em rivalry network}.
\begin{remark}\em
This notion of learning implies that when two (or more) learned patterns
have some nodes in common,
those nodes receive two (or more) new arrows, not one.
In Wilson network models of visual illusions, learned patterns are replaced by patterns
representing pre-learned geometric consistency conditions~\cite{SG19}.
\end{remark}
We assume that all of these excitatory arrows are identical for different $f_i$.
It is easy to see that if $f$ is a single pattern then every balanced coloring
of ${\mathcal W}^f$ is an orbit coloring. We prove the same is true for two learned patterns.
(The method easily specializes to a single pattern.) First, we set
up some useful observations.
Suppose that $F= \{f_1, f_2\}$ consists of two distinct patterns. Define the
{\em input-type} of a node $c$ to be the set of all nodes $d$ such that $I(d)$ is
input-isomorphic to $I(c)$. By~\cite[Section 6]{SGP03}, any
balanced coloring refines input-type. The network ${\mathcal W}^F$ has precisely
three distinct input-types:
\begin{itemize}
\item[]
Type $A$: Nodes that do not appear in a learned pattern. These have no excitatory connections
and $m-1$ inhibitory ones.
\item[]
{Type $B$}: Nodes that appear in a single learned pattern: these have $n-1$ excitatory connections
and $m-1$ inhibitory ones.
\item[]
{Type $C$}: Nodes that appear in both learned patterns: these have $2n-2$
excitatory connections and $m-1$ inhibitory ones.
\end{itemize}
We also use $A, B, C$ for the corresponding sets of nodes, Figure~\ref{F:wilson_net_1}.
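To make the input-type counts concrete, here is a small builder for a trained Wilson network, under our own illustrative encoding: a pattern is a dictionary column $\mapsto$ chosen row, all excitatory arrows share a single type, and shared nodes receive parallel arrows as in the remark above. The two patterns \texttt{f1} and \texttt{f2} below are hypothetical.
\begin{verbatim}
from collections import Counter

def trained_wilson_arrows(m, n, patterns):
    # A pattern is a dict column -> chosen row.  Inhibitory arrows join
    # distinct nodes within each column; every learned pattern adds its
    # own all-to-all excitatory arrows, so nodes shared by two patterns
    # receive parallel arrows.
    arrows = []
    for j in range(n):
        for i1 in range(m):
            for i2 in range(m):
                if i1 != i2:
                    arrows.append(((i1, j), (i2, j), 'inhib'))
    for g in patterns:
        chosen = [(g[j], j) for j in range(n)]
        for c in chosen:
            for d in chosen:
                if c != d:
                    arrows.append((c, d, 'excit'))
    return arrows

m, n = 5, 8
f1 = {j: 0 for j in range(n)}                   # hypothetical patterns
f2 = {j: 0 if j < 3 else 1 for j in range(n)}   # sharing columns 0,1,2
excit = Counter(head for tail, head, typ
                in trained_wilson_arrows(m, n, [f1, f2])
                if typ == 'excit')
assert excit[(0, 0)] == 2 * n - 2   # type C: node in both patterns
assert excit[(1, 4)] == n - 1       # type B: node in one pattern
assert (2, 0) not in excit          # type A: node in no pattern
\end{verbatim}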
\begin{figure}[htb]
\centerline{%
\includegraphics[width=5in]{wilson_net_1.pdf}}
\caption{Partitioning a Wilson network with two learned patterns.
Solid lines indicate learned excitatory connections,
dotted lines indicate inhibitory connections within columns. Connections along each line
are all-to-all, but for clarity these are shown only for adjacent nodes in each learned pattern.}
\label{F:wilson_net_1}
\end{figure}
Since a balanced coloring refines input equivalence, the partition
$\{A,B,C\}$ is color-disjoint.
These three types determine the group $\mathrm{aut}\,({\mathcal W}^F)$.
It is a subgroup of $\mathrm{aut}\,({\mathcal W})$ (delete excitatory connections).
It is generated by:
\begin{itemize}
\item[\rm (1)]
Permutations of the nodes in any given column that fix all nodes in $B, C$.
These permute the nodes of type $A$ in the column.
\item[\rm (2)]
The permutation $\kappa$ that fixes every node of type $B$ and
swaps the two nodes of type $C$ in all columns that contain
such nodes.
\item[\rm (3)]
All permutations of those columns that contain nodes of type $B$.
\item[\rm (4)]
All permutations of those columns that contain nodes of type $C$.
\end{itemize}
These permutations generate the subgroup
\begin{equation}
\label{E:Omega}
\Omega = \mathbb{Z}_2^\kappa \times (\mathbb{S}_{m-2}\wr \mathbb{S}_k) \times (\mathbb{S}_{m-1}\wr \mathbb{S}_{n-k}) \subseteq \mathbb{S}_m \times \mathbb{S}_n
\end{equation}
\begin{theorem}
\label{T:2pattorbit}
Suppose that $F$ consists of one or two distinct patterns. Then every balanced coloring
of ${\mathcal W}^F$ is an orbit coloring.
\end{theorem}
\begin{proof}
We give the proof for two patterns; then we specialize to get the proof for one pattern.
So let $F = \{f_1,f_2\}$ where $f_1 \neq f_2$.
We can use permutations within each column to arrange for nodes
in learned patterns to appear as the top node, or top two nodes, of each column.
Moreover, nodes in the same $f_j$ appear in the same row
when the patterns are distinct in that column.
Permutations between columns can position all the nodes that belong to two patterns
first (reading along rows from left to right), followed by those that belong to only one,
as in Figure~\ref{F:wilson_net_1}.
The network ${\mathcal W}^F$ has precisely
three distinct input-types $A, B, C$, defined above.
Since a balanced coloring refines input equivalence, the partition
$\{A,B,C\}$ is color-disjoint.
Suppose that the top two rows of region $C$ have a color
in common. The all-to-all excitatory connections, plus balance, then
imply that every color appearing in the top row must also appear
in the bottom row, with the same multiplicity.
There are thus two cases:
\begin{itemize}
\item[] Case 1:
Every color appearing in the top row must also appear
in the bottom row, with the same multiplicity.
\item[] Case 2:
The two rows of region $C$ are color-disjoint.
\end{itemize}
First, we deal with Case 1.
We refine the partition $\{A,B,C\}$ in two further stages. First we split regions $C$
into two parts $C_1, C_2$ as follows.
If a node in row 1 has an inhibitory connection to
a node with the same color in row 2 (necessarily in the same column),
place those nodes in set $C_1$. Otherwise place them in $C_2$.
We claim that each vertical connection in $C_2$ links the same
two (distinct) colors. Suppose a node in row 1 has color $\alpha$
and is connected to color $\beta$ in row 2. Balance implies
that every node of color $\alpha$ in either row must be
paired vertically with color $\beta$ in the other row. Moreover,
both rows contain the same number of nodes with any given color,
so colors $\alpha$ and $\beta$ occur with the same multiplicity
in row 1 and in row 2. (In particular, $C_2$ meets an even number of columns.)
Clearly $C_1$ and $C_2$ are color-disjoint, by balance. A color anywhere
in $C_1$ cannot connect vertically both to itself and to a different color.
(Bear in mind that regions $C$ and $A$ are color-disjoint, so the
discrepancy cannot be repaired by coloring another node in the same column.)
We now split region $A$ into $A_1, A_2, A_3$, according to whether
the top node of the column lies in $C_1, C_2,$ or $B$. Balance clearly
implies that all six regions $A_1, A_2, A_3, B, C_1, C_2$ are color-disjoint.
See Figure~\ref{F:wilson_net_2}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=5in]{wilson_net_2_col.pdf}}
\caption{Refining the partition of the trained Wilson network by splitting
regions $A, C$.}
\label{F:wilson_net_2}
\end{figure}
Finally, we partition regions $C_1$ and $B$ according to the color
of the top node. Region $C_2$ is similarly partitioned using the colors
of the vertical pairs in the top two rows. (Only one such subset is
drawn but there could be many.) Without loss of generality we can
use elements of $\Omega$,
defined in~\eqref{E:Omega}, to group nodes of a given color (or color pair)
into blocks of adjacent nodes, Figure~\ref{F:wilson_net_3}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=5in]{wilson_net_3_col.pdf}}
\caption{Refining the partition of the trained Wilson network even further
by splitting regions according to colors appearing in the learned patterns.}
\label{F:wilson_net_3}
\end{figure}
As in the proof of Proposition~\ref{P:rectangles}, we can use elements of $\Omega$
to arrange colors in the blocks of $A_j$ into rectangles, each rectangle containing
a single color.
Recall that $\kappa$ flips the two learned patterns (acting
trivially on $B$). Define a permutation $\rho$ of the columns, of order 2,
which acts trivially on $B, C_1$ and transposes pairs of adjacent columns in $C_2$
with matching colors in their top nodes. Then $\kappa\rho$ fixes all the colors
of nodes in $B \cup C$. Further permutations of the columns
force equalities of colors along the top row.
We now construct, for any rectangle $R$ in any region, a subgroup
$\Sigma_R \subseteq \Sigma$ whose fixed-point set determines
the coloring in that rectangle. The subgroup generated by all such
permutations, together with $\kappa\rho$ and any permutations of columns
already defined, has the same fixed-point set as $\Sigma$ because
$\Sigma$ is the isotropy group of the coloring. The construction of
$\Sigma_R$ is the same as that in the proof of Proposition~\ref{P:rectangles},
giving a group $\mathbb{S}_q \wr \mathbb{S}_p$ where the rectangle has $p$ rows and $q$ columns.
This completes Case 1.
Case 2 is similar, but now the two rows of $C$ are color-disjoint,
so $\kappa\rho$ is not required.
This completes the proof for two learned patterns.
When there is only one learned pattern, the same proof applies taking $C$ to be the empty set.
Now every arrow in $B$ occurs twice, but this is effectively the same as a single
arrow of a new type.
\end{proof}
The network of Figure~\ref{F:3x6wilson} can be viewed
as a trained Wilson network with three (disjoint) learned patterns.
Thus the analog of Theorem~\ref{T:2pattorbit} is not valid for
Wilson networks with three learned patterns. Similarly,
The network of Figure~\ref{F:5x5latin} can be viewed
as a trained Wilson network with five (disjoint) learned patterns.
\section{Bifurcations to Exotic Patterns}
\label{S:BEP}
We comment briefly on the implications of Figure~\ref{F:5x5latin} for bifurcations.
The corresponding analysis for Figure~\ref{F:3x6wilson} is less straightforward
when colors are amalgamated to apply the Equivariant Branching Lemma,
and we have not carried it out.
Associated with any balanced coloring $K$ of a network ${\mathcal G}$ is a {\em quotient network} ${\mathcal G}_K$
with one node for each color, and input arrows copied from the original
network according to the colors of head and tail nodes. It is shown in~\cite{GST05,SGP03}
that with canonical identifications of state spaces, the restriction of
an admissible map for ${\mathcal G}$ to the synchrony subspace of $K$ is admissible for ${\mathcal G}_K$,
and all admissible maps for ${\mathcal G}_K$ can be obtained in this manner.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2.3in]{5colQuot.pdf}
}
\caption{The quotient network for Figure~\ref{F:5x5latin}. Lines without arrows
are bidirectional. Numbers beside gray arrows indicate multiplicities.}
\label{F:5x5latinQuot}
\end{figure}
The quotient network for the coloring in Figure~\ref{F:5x5latin} is shown in
Figure~\ref{F:5x5latinQuot}. It has five nodes and $\mathbb{S}_5$ symmetry.
It is easy to see that {\em any} coloring of this quotient network
is balanced; indeed, it is an orbit coloring for the subgroup of $\mathbb{S}_5$ that preserves
the colors of nodes.
Moreover, balanced colorings lift from quotient networks to
the original network, and remain balanced. However, the lift need not
be an orbit coloring on the full network, as we now demonstrate.
We can apply the Equivariant Branching Lemma
to this quotient, and then lift the resulting pattern to ${\mathcal G}_{55}$,
to prove the generic existence of branches with (for instance) two colors, obtained
by amalgamating colors in Figure~\ref{F:5x5latin} in any manner. The question is
whether any of these amalgamated colorings is exotic for ${\mathcal G}_{55}$.
Some are not, but a plausible candidate is
Figure~\ref{F:5x5latin2col} (left), obtained by coloring black and yellow the same,
and red, green, and blue the same. We claim that this pattern is indeed exotic
in ${\mathcal G}_{55}$. The proof uses Proposition~\ref{P:iso_col}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2in]{5x5latin2col.pdf} \qquad\qquad
\includegraphics[width=2in]{5x5latin2colFix.pdf}
}
\caption{{\em Left}: This balanced 2-coloring is not an orbit coloring for $\mathbb{S}_5 \times \mathbb{S}_5$.
{\em Right}: The fixed-point subspace of the isotropy subgroup of the 2-color pattern is a 3-color pattern.}
\label{F:5x5latin2col}
\end{figure}
Calculations with Mathematica show that the
isotropy subgroup $\Sigma \subseteq \mathbb{S}_5 \times \mathbb{S}_5$ of
Figure~\ref{F:5x5latin2col} (left) is isomorphic
to a dihedral group $\mathbb{D}_5$. Since this acts on 25 nodes, and its orbits have
size at most 10, there must be at least 3 orbits, so its fixed-point subspace
must have dimension at least 3. Therefore Figure~\ref{F:5x5latin2col} is not
an orbit coloring for $\mathbb{S}_5 \times \mathbb{S}_5$.
In more detail: the group $\Sigma$ is generated by
the pairs of permutations (in cycle notation)
\[
g = ((13245),(13245)) \qquad h = ((1)(24)(35), (14)(23)(5))
\]
which satisfy the relations
\[
g^5 = h^2 = 1, \ hgh = g^{-1}
\]
for $\mathbb{D}_5$. It is a twisted $\mathbb{D}_5$ subgroup of $\mathbb{S}_5$; that is, it consists of
elements $(\gamma, \theta(\gamma))$ where $\gamma \in \mathbb{D}_5 \subseteq \mathbb{S}_5 \times \ONE$
and $\theta: \mathbb{D}_5 \to \mathbb{D}_5$ is an isomorphism. Up to conjugacy we can make
$\theta$ the identity on $\mathbb{Z}_5$; it then maps an order-2 element to a different order-2 element.
The fixed-point subspace of $\Sigma$ comprises the patterns in
Figure~\ref{F:5x5latin2col} (right). There are two orbits of size 10 (black, red)
and one of size 5 (orange). By Lemma~\ref{L:delete}, this $2$-coloring is not an orbit coloring.
(The orange nodes do {\em not} correspond to
a specific color in Figure~\ref{F:5x5latin} (left): the reason is that
this $5$-coloring is not an orbit coloring.)
The quotient network for the 3-coloring of Figure~\ref{F:5x5latin2col} (right),
which we do not draw, has
an obvious $\mathbb{Z}_2$ symmetry that swaps red and black but fixes orange.
It has an orbit coloring that merges red and black.
It also has a balanced coloring that merges red and orange, which is
not a $\mathbb{Z}_2$ orbit coloring, and this lifts to Figure~\ref{F:5x5latin2col} (right).
\subsection{Stability}
As well as existence of particular states, it is also important to study their stability.
This is a more delicate issue. In particular, whenever the critical eigenspace supports
a nontrivial equivariant quadratic, all axial bifurcating branches of equilibria are {\em unstable}
near the bifurcation point; see~\cite[Chapter XIII Theorem 4.4 ]{GSS88}.
When the symmetry group is $\mathbb{S}_n$ in its standard
permutation representation, and we are considering a symmetry-breaking bifurcation, such a quadratic equivariant exists for all $n \geq 3$. This problem also arises
for $\mathbb{S}_m\times\mathbb{S}_n$.
It is discussed in the evolutionary models of~\cite{CS00,GS02,SEC03},
for $\mathbb{S}_n$, and in the decision models of~\cite{FGBL20} for $\mathbb{S}_m\times\mathbb{S}_n$. In both cases
the remedy is to include suitable cubic equivariant terms to `compactly' the bifurcation.
This allows an unstable transcritical branch to turn around at a saddle-node (fold) point
and regain stability. This creates a jump bifurcation.
(If $n = 2k$ there is an exception: the isotropy
subgroup $\mathbb{S}_k \times\mathbb{S}_k$ gives a pitchfork bifurcation.)
The same issue arises when considering the stability of the pattern in
Figure~\ref{F:5x5latin2col} (left), because the symmetry group of the quotient
network is $\mathbb{S}_5$, and on the relevant synchrony subspace the pattern
is given by an axial subgroup $\mathbb{S}_2 \times \mathbb{S}_3$. The same remedy also
applies: include suitable cubic equivariants so that the branch turns round.
It can then regain stability within the synchrony subspace. This still leaves open
whether it is also stable transverse to the synchrony subspace, that is, to
perturbations that break the synchrony of the balanced $5$-coloring.
We have not yet investigated this question, but the problem is likely to be complicated.
Numerical simulations in~\cite{FGBL20} suggest that states arising from axial subgroups
of $\mathbb{S}_5 \times \mathbb{S}_5$ can be stabilized in this manner in the whole
of state space $\mathbb{R}^{25}$. However, this particular state has
not yet been explored numerically.
\section{Conclusions}
We have studied the formation of synchrony patterns in networks ${\mathcal G}_{mn}$ whose nodes form
$m \times n$ arrays, and whose connections imply $\mathbb{S}_m \times \mathbb{S}_n$
symmetry. Symmetry methods --- namely, the Equivariant Branching Lemma
and the Equivariant Hopf Theorem --- prove the generic existence of bifurcating
branches that break symmetry to, respectively, axial subgroups and $\mathbb{C}$-axial subgroups.
Network structure is stronger than equivariance, and it can create
exotic colorings, not predicted by symmetry considerations. Such colorings
exist in particular when $(m,n) = (3,6)$ and $(5,5)$.
Applying the Equivariant Branching Lemma to a quotient network of
${\mathcal G}_{55}$, we have proved the existence of an exotic 2-coloring of
${\mathcal G}_{55}$ itself. This pattern is unstable near bifurcation. It can regain
stability in the quotient network, but may still be unstable to perturbations
that break the synchrony pattern defining that quotient.
Networks ${\mathcal G}_{mn}$ have been used to model decision making,
and related networks to which the same results apply occur in models
of binocular rivalry and visual illusions. The occurrence of exotic
synchrony patterns predicts states of such models that have not been
derived using equivariant methods, and whose description is not
given by isotropy subgroups.
We have also shown that in rivalry models, networks trained on one or
two images do not have exotic colorings, so all synchrony patterns
(hence also all phase patterns) can be characterized by isotropy subgroups
of the symmetry group.
\section{Introduction}
We work in the `coupled cell' network formalism of~\cite{GS06, GST05, SGP03},
which should be consulted for precise definitions and proofs. Section~\ref{S:BCA} provides
a short summary. Networks
consist of {\em nodes} connected by {\em arrows} (directed edges), both of
which are partitioned into {\em types}. Each node represents a dynamical
system, and arrows indicate couplings between these systems. Identical
types determine identical dynamics or couplings. It is sometimes convenient to interpret
a node as an `internal arrow' from that node to itself. Multiple arrows and
self-loops are permitted; indeed, they are required to simplify the theory.
Robust patterns of synchrony in networks are classified by {\em balanced colorings} of
the nodes. A coloring assigns a color to each node, and it is balanced if nodes of the same color
have color-isomorphic input sets --- that is, the same number of arrows of each type with
tail nodes of a given color. To each coloring is associated a subspace of the state space,
called a {\em synchrony subspace},
in which the coordinates of nodes of a given color are all equal.
This synchrony subspace is invariant for the dynamics of any ODE whose
structure is compatible with the network architecture --- that is, for
all {\em admissible ODEs} --- if and only if the coloring is balanced.
The existence of these universal flow-invariant subspaces has strong implications
for bifurcations of admissible ODEs, because states lying in such a subspace
have the synchrony pattern specified by the coloring, and can be found by
restricting the ODE to the subspace. Such restrictions are precisely the admissible ODEs for
the {\em quotient network}, obtained by identifying nodes with the same color and preserving
the colored input structure. Sets of synchronous nodes are often called {\em clusters} \cite{PSHMR13},
and the restricted ODE on the quotient network describes the dynamics of the clusters.
Synchrony for all admissible ODEs may seem a strong condition,
but weaker forms of synchrony that persist
under small perturbations of the ODE also correspond to balanced colorings.
This has been proved for hyperbolic equilibria (\cite[Theorem 7.6]{GST05}, \cite[Theorem 6.1]{S20rigideq}), and for hyperbolic periodic states (\cite[Theorem 6.1]{GRW10})
with a technical assumption on the network architecture:
see~\cite[Appendix]{S20rigideq}.
It is plausibly conjectured to be true for all
hyperbolic periodic states~\cite[Section 10]{GS06}, and for more complex dynamic trajectories under some
kind of assumption about persistence of the underlying attractor under small perturbations.
A central issue in this case is to find a suitable definition of this type of persistence.
In networks with symmetry, every subgroup $\Sigma$ of the symmetry group $\Gamma$ defines a
balanced coloring, where colors correspond to the orbits of $\Sigma$. We call such
a coloring an {\em orbit coloring}. The synchrony
space is then the fixed-point space of $\Sigma$. However, {\em exotic} balanced colorings that
are not of this form can exist for some symmetric networks.
This was pointed out in~\cite{GNS04} for a 12-node bidirectional ring
with $\mathbb{D}_{12}$ symmetry and for certain $2$-color lattice patterns. Other examples are discussed
in~\cite{AS07, AS08}, and for planar square and hexagonal lattices
in~\cite{S19, SGo19,SGo20}.
\begin{figure}[htb]
\centerline{
\includegraphics[height=1.8in]{5x4groupnet.pdf}\qquad\qquad
\includegraphics[height=1.97in]{5x4inputs.pdf}
}
\caption{{\em Left}: Group network for $\mathbb{S}_4 \times \mathbb{S}_5$ acting on a $4 \times 5$
array. Only a representative set of connections shown. {\em Right}:
The four arrow-types. Arrowheads not shown; only inputs to node $(1,1)$ depicted. Colors of tail nodes and forms of edges indicate arrow-type: (a) red/internal arrow, (b) blue/solid, (c) yellow/dashed, (d) green/gray.}
\label{F:5x3gpnet}
\end{figure}
Here we study networks whose nodes form an $m \times n$ array.
We assume the symmetry group is $\mathbb{S}_m \times \mathbb{S}_n$, where
$\mathbb{S}_m$ permutes the $m$ rows and $\mathbb{S}_n$ permutes the $n$ columns, so
all nodes have the same type.
Figure~\ref{F:5x3gpnet} (left) is a schematic illustration of the corresponding
{\em group network} ${\mathcal G}_{mn}$ in the sense of~\cite[Section 2]{AS07}.
There is one arrow for each ordered pair $(i,j)$ of nodes, with
head $i$ and tail $j$. The type of the arrow is given by the orbits of such pairs
under the group action; arrows $(i,i)$ correspond to the `internal arrow' of node $i$
and are represented by the node symbol, not an arrow as such.
Figure~\ref{F:5x3gpnet} (right) is a schematic illustration of the inputs
to node $(1,1)$. There are four types of arrow:
(a) Internal node arrow: circles.
(b) Row arrows: solid black lines.
(c) Column arrows: dashed black lines.
(d) Diagonal arrows between nodes in neither the same row nor the same column: grey lines.
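To fix notation used in computational sketches later, the following minimal Python sketch (ours; the encoding of nodes as index pairs is an illustrative assumption) enumerates the arrows of ${\mathcal G}_{mn}$ by type:
\begin{verbatim}
# Python sketch (ours): enumerate the arrows of G_mn by type.
def gmn_arrows(m, n):
    nodes = [(i, j) for i in range(m) for j in range(n)]
    def arrow_type(c, d):
        if c[0] == d[0]:
            return 'row'        # type (b): same row
        if c[1] == d[1]:
            return 'column'     # type (c): same column
        return 'diagonal'       # type (d): neither
    # type (a) internal arrows (c, c) are carried by the node symbol
    return [(c, d, arrow_type(c, d))
            for c in nodes for d in nodes if c != d]
\end{verbatim}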
Admissible maps for a symmetric network are always equivariant under its symmetry group,
but the converse is false in general~\cite[Section 3.1]{AS07}. In equivariant
dynamical systems, every subspace that is flow-invariant for all equivariant ODEs is a
fixed-point subspace~\cite[Theorem 4.1]{AS06}. The possibility
of exotic colorings can therefore
be attributed to the difference between equivariant and admissible maps.
The synchrony subspace of an exotic balanced coloring supports a
pattern that is not an orbit coloring, hence not detected by the usual
methods of equivariant dynamics.
The main point of this paper is the existence of exotic colorings
in the networks ${\mathcal G}_{36}$ and ${\mathcal G}_{55}$, shown below in
Figures~\ref{F:3x6wilson} and \ref{F:5x5latin} respectively. (Similar methods
can presumably extend this result to many larger values of $m$ and $n$.)
To complement these results, we also investigate networks arising in
a model of binocular rivalry, showing that when these networks are trained
on only one or two images, no exotic colorings exist.
We postpone discussion to Section~\ref{S:WRN},
after the models concerned have been introduced.
Finally, in Section~\ref{S:BEP} we discuss implications for bifurcations of admissible
ODEs on the networks ${\mathcal G}_{mn}$. Exotic colorings indicate the presence of
flow-invariant subspaces that are not detected by the usual method for
finding bifurcating branches of symmetric dynamical systems, namely
the Equivariant Branching Lemma. That lemma proves the generic existence of
certain symmetry-breaking patterns; the analogous phenomenon of synchrony-breaking
in networks can lead to exotic patterns, not predicted by symmetry alone.
Taking network constraints into account, as well as
symmetry, gives a broader picture of the dynamics and bifurcations.
\section{Networks Occurring in Applications}
Our results apply to
two closely related classes of symmetric networks occurring
in applications. Both are variations on networks proposed by Wilson~\cite{W03, W07, W09}
to model interocular grouping, and more generally, high-level
decision-making in the brain, and are called {\em Wilson networks}.
An {\em untrained} Wilson network is a rectangular $m\times n$ array of
identical nodes whose columns are all-to-all connected by identical inhibitory arrows. The symmetry
group of this network is not $\mathbb{S}_m\times\mathbb{S}_n$, but the
larger wreath product $\mathbb{S}_m \wr \mathbb{S}_n$ (permute the nodes in each column in any manner, and permute
the columns setwise). Figure~\ref{F:wilson_net_un} shows (schematically)
an untrained $5 \times 8$ Wilson network. For clarity, the dotted lines indicate
sets of nodes --- the columns --- with identical all-to-all coupling.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=3in]{wilson_net_un.pdf}}
\caption{Untrained $5 \times 8$ Wilson network. Dotted lines indicate
sets of nodes with identical all-to-all coupling. Columns are decoupled from each other.}
\label{F:wilson_net_un}
\end{figure}
\subsection{Binocular Rivalry}
The discovery of (binocular) rivalry is credited to Giambattista della Porta~\cite{P1593} in 1593.
He placed two books so that each eye saw only one, and stated that he was able to
read from either book at will by moving the `visual virtue' (attention) from one eye
to the other. Closely related but distinct phenomena are visual illusions,
where a single image can be perceived
in more than one way; the classic example is the Necker cube of 1832~\cite{N32}, which alternates between two apparent orientations. Numerous authors have modeled
rivalry and illusions using a network with two nodes, representing the two percepts \cite{C10a,CSRR08,KM00,LC02,L88,M84,M90,NVNV07,SC11}.
Modern experiments on rivalry show a different image to each eye and ask the subject
to report what they perceive.
In simple cases, these percepts alternate between those two images.
In some studies, however,
some subjects perceive not only the original two images,
but images that combine aspects of those images in new ways~\cite{KPYF96,SSH08,SG02,TMB06}.
The occurrence of these {\em derived images} can be modelled using trained
Wilson networks~\cite{DGMW12, DGW13, DG14}, also called rivalry networks. Experimental evidence supporting such models
has been obtained in~\cite{GZWL19}, which reports data on
the 4-location experiment proposed in~\cite{TMB06},
obtaining results consistent with those predicted by a trained Wilson network model.
The model for a given pair of images begins with an untrained Wilson network.
Each column represents an
{\em attribute} of the image, such as the orientation or color of some feature. The nodes
in a given column represent {\em levels} of that attribute, such as specific orientations or colors.
The initial Wilson network is {\em trained} by adding
excitatory arrows representing the {\em learned patterns}
shown to the two eyes. A learned pattern is represented by a choice
of one node from each column --- the appropriate level of that attribute --- connected all-to-all by identical excitatory arrows.
In standard model ODEs, the state of each node $i$ is defined by two variables $(x_i^A, x_i^H) \in \mathbb{R}^2$.
The first is an {\em activity} variable, the second a {\em fatigue} variable.
Derived patterns emerge from the dynamics of an admissible ODE for the trained Wilson network.
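For orientation, here is a minimal sketch of a single node in activity-fatigue form; this is an illustrative choice of equations and parameters, not necessarily those of the models cited above:
\begin{verbatim}
from math import exp

def gain(x):                     # sigmoidal firing-rate function
    return 1.0 / (1.0 + exp(-x))

def node_rhs(xA, xH, excit, inhib, I=1.0, g=2.0, tau=20.0):
    # Activity xA is driven by external input I and learned excitation,
    # and suppressed by within-column inhibition and the node's own
    # fatigue xH; fatigue slowly tracks activity.  Illustrative only.
    dxA = -xA + gain(I + excit - inhib - g * xH)
    dxH = (xA - xH) / tau
    return dxA, dxH
\end{verbatim}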
A balanced coloring of a Wilson network defines a {\em fusion state} in which the
percept is ambiguous. If the fusion state becomes unstable,
symmetry-breaking states can bifurcate.
Such bifurcations are usually analysed using equivariant (symmetric)
bifurcation theory. In the models under discussion, the relevant bifurcating branches
consist of oscillatory states, created through an equivariant version of Hopf bifurcation.
The main existence theorem here is the
Equivariant Hopf Theorem~\cite[Chapter XVI Theorem 4.1]{GSS88},
which guarantees the existence of branches with certain spatiotemporal symmetries ---
namely, $\mathbb{C}$-axial subgroups, which are those having 2-dimensional fixed-point spaces
in a Liapunov-Schmidt reduction. Spatiotemporal symmetries describe patterns of
synchrony and phase relations between nodes.
The interpretation of an oscillatory state
associates the perceived image with the node or nodes whose activity variables
are largest among all nodes. Changes in the ordering of activity levels
indicate transitions between different percepts.
However, the relation of symmetry to the network structure is
subtle. In particular, exotic colorings predict behavior
in such models that is not deducible from symmetries alone. So, potentially,
exotic colorings can lead to branches with new types of spatiotemporal symmetry.
The main results of this paper for trained Wilson networks are:
\begin{itemize}
\item
Every balanced coloring is an orbit coloring for the automorphism group
of the resulting network when the network has
been trained on 0, 1, or 2 learned patterns.
\item
The automorphism group concerned
depends on the attribute levels of the images used to train the network.
\item There exist exotic colorings when the network has
been trained on 3 or more learned patterns.
\end{itemize}
Thus there can exist spatiotemporal patterns --- in particular, synchrony
patterns of oscillatory states and associated phase patterns --- that are not predicted by
the Equivariant Hopf Theorem. These patterns have not yet been investigated
in any detail.
\subsection{Decision Making}
The notion of decision making is very broad.
Virtually any collective system of active agents must make decisions.
The agents can, for instance, be humans, birds in a flock, fish in a school,
bees, bacteria, neurons, or autonomous robots.
As a preliminary step, agents form `opinions' about a set of possible options;
that is, preferences that affect the system's behavior. These opinions collectively
determine the eventual decision at system level. Examples of
decision making systems include:
\begin{itemize}
\item
Social and economic decisions by humans based on
different types of electoral and information-sharing systems.
\item Hunting strategies in past and present hunter-gatherer communities.
\item Bee colonies collectively deciding on a new nest site.
\item
Animal groups collectively deciding when, and in which direction, to move --- for example when
approaching two possible food sources.
\item
Neurons in lower brain areas integrating sensory inputs to perform perceptual and
motor behavior decision making.
\item
Neurons in higher brain areas integrating
sensorimotor information to make higher-level decisions.
\item
Bacteria and other social microorganisms collectively deciding --- for instance, by
quorum sensing --- how and when to undergo phenotypic differentiation in response
to environmental signals.
\item
Swarms of autonomous robots settling on a collective strategy to accomplish some task.
\end{itemize}
The same mathematical models often apply to a variety of such
real-world decision-making systems.
Here we focus on
networks very similar to Wilson networks that have been introduced into
models of decision making~\cite{FGBL20}. Again
the nodes of the network form an $m \times n$ array, but now the
columns represent options, the rows represent agents, and the state of
a node represents the agent's opinion about that option.
Connections between nodes are of three types: distinct nodes in the same row,
distinct nodes in the same column, and all other pairs of distinct nodes.
The network has $\mathbb{S}_m\times\mathbb{S}_n$ symmetry where $\mathbb{S}_m$ permutes
the $m$ rows and $\mathbb{S}_n$ permutes the $n$ columns.
The analysis of~\cite{FGBL20} focuses on steady-state symmetry-breaking bifurcation
from a fully symmetric equilibrium, using the
Equivariant Branching Lemma~\cite{Ci81} (see also~\cite[Lemma 1.3.1]{GS02} or \cite[Chapter XIII Theorem 3.3]{GSS88}). This result states that, subject
to certain genericity conditions, branches exist for all {\em axial} subgroups of the symmetry
group --- those with 1-dimensional fixed-point spaces. Classifying the axial subgroups
therefore leads to a list of branches breaking symmetry in various ways. In this
application the corresponding orbit colorings classify patterns of equal opinions
about various options for various agents. These patterns govern agreements
(consensus) and disagreements (dissensus) among the agents.
The states obtained using the Equivariant Branching Lemma need not
exhaust the possible bifurcating branches, even for equivariant dynamical
systems, but they provide useful information on symmetry-breaking states
that are guaranteed to occur. In networks, the difference between
equivariant and admissible ODEs can create branches that do not occur
in general equivariant systems. In particular, exotic balanced colorings
predict new branches that do not appear in any classification by isotropy subgroups.
The main result of this paper for opinion networks is:
\begin{itemize}
\item The examples of exotic colorings for trained Wilson networks can
also be interpreted as exotic colorings for opinion networks.
\end{itemize}
\section{Balanced Colorings and Automorphisms}
\label{S:BCA}
We begin by recalling some basic concepts of the `coupled cell' network
formalism~\cite{GST05, SGP03}.
\begin{definition} \label{D:network_diag} \rm
A {\em network diagram} is a labelled directed graph that has four ingredients:
\begin{itemize}
\item[\rm (1)]
A finite set of {\em nodes} (or {\em cells}) ${\mathcal C} = \{1, 2, \ldots, n\}$.
\item[\rm (2)]
A {\em node symbol} assigned to each node. Nodes with the same symbol are said to be {\em identical}.
\item[\rm (3)]
A finite set $\AA$ of {\em edges} or {\em arrows}. Each arrow $e$ points from its
{\em tail node} $\mathcal{T}(e)$ to its {\em head node} $\mathcal{H}(e)$.
\item[\rm (4)]
An {\em arrow symbol} assigned to each arrow. Arrows with the same symbol are said to be {\em identical}.
\end{itemize}
\end{definition}
The formal theory of networks goes on to define a class of {\em admissible maps}
and {\em admissible ODEs} associated with the network. Essentially,
these are the maps or ODEs that respect the network structure.
The {\em automorphism group} $\mathrm{aut}\,({\mathcal G})$ of a network ${\mathcal G}$
consists of all permutations of the nodes ${\mathcal C}$ that preserve the number and type
of arrows between each pair of nodes.
The {\em input set} $I(c)$ consists of all input arrows to node $c$.
An {\em input isomorphism} $\beta:I(c) \rightarrow I(d)$ is a bijection
that preserves arrow-type.
A {\em coloring} of ${\mathcal G}$ is a partition of ${\mathcal C}$, or equivalently
a map $K:{\mathcal C} \rightarrow {\mathcal K}$ where ${\mathcal K}$ is a set of colors.
A coloring is {\em balanced} if, whenever nodes $c, d$ have the same color,
there is an input isomorphism $\beta:I(c) \rightarrow I(d)$ such that
$\mathcal{T}(e)$ and $\mathcal{T}(\beta(e))$ have the same color for all arrows $e \in \AA$.
As already remarked, a coloring is balanced if and only if it
defines a synchrony subspace that is flow-invariant for
all admissible ODEs, and the dynamics of the resulting clusters is given by
the restricted admissible ODE on the {\em quotient network} whose nodes
represent the clusters.
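The balance condition can be tested mechanically. The following sketch (ours; the encodings are illustrative) compares, for nodes of equal color, the multisets of (arrow-type, tail-color) pairs over their inputs:
\begin{verbatim}
from collections import Counter

def is_balanced(nodes, arrows, color):
    # arrows: iterable of (tail, head, arrow_type) triples;
    # color:  dict node -> color.  Balanced iff same-colored nodes
    # receive identical multisets of (arrow_type, tail_color) pairs.
    profile = {c: Counter() for c in nodes}
    for tail, head, typ in arrows:
        profile[head][(typ, color[tail])] += 1
    seen = {}
    return all(seen.setdefault(color[c], profile[c]) == profile[c]
               for c in nodes)
\end{verbatim}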
A subgroup $\Sigma \subseteq \mathrm{aut}\,({\mathcal G})$ defines a coloring
$K^\Sigma$, where colors are determined by $\Sigma$-orbits on ${\mathcal C}$.
That is, nodes have the same color if and only if they are in the same
$\Sigma$-orbit. We call this the {\em orbit coloring} for $\Sigma$.
\begin{proposition}
Every orbit coloring is balanced.
\end{proposition}
\begin{proof}
This is~\cite[Proposition 3.3]{AS06}, stated using different terminology.
In that paper, `orbit coloring' is replaced by `fixed-point coloring'.
\end{proof}
The {\em isotropy subgroup} $\Sigma^K$ of a coloring $K$
is the subgroup of $\mathrm{aut}\,({\mathcal G})$ that leaves every set of $K$ setwise
invariant, where $K$ is considered as a partition of ${\mathcal C}$.
The next proposition is simple but useful:
\begin{proposition}
\label{P:iso_col}
A coloring is an orbit coloring if and only if it is the
same (up to choice of colors) as the orbit coloring $K^\Sigma$ of its isotropy subgroup $\Sigma$.
\end{proposition}
\begin{proof}
If $K= K^\Sigma$ then $K$ is an orbit coloring. Conversely,
let $K$ be the orbit coloring for a subgroup $\Omega$.
Then $\Omega \subseteq \Sigma$, since $\Omega$ preserves its own orbits, hence the colors of $K$.
Let $c,d$ be in the same $\Sigma$-orbit. Since $\Sigma$ fixes
the colors, $c$ and $d$ have the same color, so they lie in the same $\Omega$-orbit.
Thus every $\Sigma$-orbit is contained in an $\Omega$-orbit, and the reverse inclusion
follows from $\Omega \subseteq \Sigma$. Therefore the $\Sigma$-orbits coincide with the
color classes of $K$, and $K^\Sigma = K$.
\end{proof}
The next result is useful because some of the applications we discuss
use networks like ${\mathcal G}_{mn}$ but with arrows of type (d) deleted.
\begin{lemma}
\label{L:delete}
A coloring $K$ is balanced for ${\mathcal G}_{mn}$ provided it is balanced
when type (d) arrows are deleted and type (a) `arrows' are ignored.
\end{lemma}
\begin{proof}
By~\cite[Lemma 5.4]{S07} a coloring is balanced if and only if it is balanced for each type
of arrow separately --- that is, in the network where arrows of all other types are deleted.
This follows by counting arrows with tail node of a given color, and noting that
input isomorphisms preserve arrow-type. If an input isomorphism also
preserves colors of tail nodes, it must preserve colors of tail nodes of arrows of
each type separately.
For ${\mathcal G}_{mn}$, balance for type (a) internal arrows is trivial.
Moreover, balance for type (d) arrows is a consequence of balance for
both types (b) and (c): there is a unique arrow for each ordered pair of distinct nodes,
so the type (d) inputs to a node are precisely the nodes lying in neither its row nor its column.
Their color multiset is therefore the multiset of all node colors minus the node's own color and
the colors of its row and column inputs, and by balance for types (b) and (c) these agree for
same-colored nodes.
\end{proof}
\section{Existence of Exotic Colorings}
We now give two examples of exotic balanced colorings in networks
${\mathcal G}_{mn}$ when $(m,n) = (3,6)$ and $(5,5)$.
\subsection{The Network ${\mathcal G}_{36}$}
The first example is ${\mathcal G}_{36}$, Figure~\ref{F:3x6wilson}, where we
omit type (d) arrows and show types (b) and (c) schematically.
By Lemma~\ref{L:delete} the omission of type (d) arrows does not affect balance.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2.7in]{3x6wilson.pdf}}
\caption{Exotic balanced coloring of ${\mathcal G}_{36}$. Type (d) arrows omitted,
without loss of generality.}
\label{F:3x6wilson}
\end{figure}
\begin{theorem}
The coloring in Figure~{\em \ref{F:3x6wilson}} is balanced, but is not an orbit coloring.
\end{theorem}
\begin{proof}
It is easy to prove that the pattern in Figure~\ref{F:3x6wilson} is balanced.
Each row contains the same colors with the same multiplicities, so
it is balanced with respect to the type (b) connections.
Columns either have the same colors with the same multiplicities,
or a disjoint set of colors. So the pattern is balanced with respect to the type (c)
connections. Therefore it is balanced.
We claim it is not an orbit coloring.
Suppose for a contradiction that it is the orbit coloring of a subgroup $\Sigma \subseteq \mathbb{S}_3 \times \mathbb{S}_6$. Because the colors in regions $A$ and $B$ are disjoint, we must have
\[
\Sigma \subseteq \mathbb{S}_3^R \times \mathbb{S}_3^A \times \mathbb{S}_3^B
\]
where $\mathbb{S}_3^R$ permutes the rows, $\mathbb{S}_3^A$ permutes
columns $1,2, 3$ and $\mathbb{S}_3^B$ permutes columns $4,5,6$. We identify these groups
with the same group $\mathbb{S}_3$ and use $R, A, B$ to indicate
which direct factor is intended.
We ask when an element $(\alpha,\beta,\gamma) \in \mathbb{S}_3^R \times \mathbb{S}_3^A \times \mathbb{S}_3^B$
fixes the pattern. This gives its isotropy subgroup $\Sigma$. Then we show that
the pattern is not given by the orbits of $\Sigma$.
Regions $A$ and $B$ in the figure (columns 1--3 and 4--6)
have disjoint sets of colors. The isotropy subgroup $\Sigma$
must preserve these colors, hence it must preserve regions
$A$ and $B$. Therefore $\Sigma$ is the intersection of
the isotropy subgroups $\Sigma^A$ and $\Sigma^B$ of the patterns in regions $A$ and $B$.
The isotropy group $\Sigma^A$ clearly consists of all elements of the form
\[
(\alpha, \alpha, \gamma) \quad \alpha \in \mathbb{Z}_3, \gamma \in \mathbb{S}_3
\]
The isotropy group $\Sigma^B$ clearly consists of all elements of the form
\[
(\alpha, \beta, \alpha) \quad \alpha \in \mathbb{S}_3, \beta \in \mathbb{S}_3
\]
(This has two orbits: the set of nodes in positions $(i,i)\ (1 \leq i \leq 3)$ and
the set of nodes in positions $(i,j)\ (1 \leq i\neq j \leq 3)$. This is the 2-color
pattern in region $B$.)
Their intersection therefore consists of all elements of the form
\[
(\alpha, \alpha, \alpha) \quad \alpha \in \mathbb{Z}_3
\]
This group has order 3. It has six orbits, not five, so Proposition~\ref{P:iso_col}
implies that Figure~\ref{F:3x6wilson} is not an orbit coloring.
In fact, the orbit coloring for this group splits the six purple nodes in region $B$
into two sets of three, with different colors, such that the same color occurs
along each broken diagonal.
\end{proof}
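The figure itself is not reproduced in the text, so the following check (ours) uses a reconstruction consistent with the proof: region $A$ carries the cyclic $3$-coloring $(i-j) \bmod 3$, and region $B$ has one diagonal color plus purple off-diagonal nodes. Under these assumptions it confirms balance for types (b) and (c), an isotropy subgroup of order $3$, and six orbits:
\begin{verbatim}
from itertools import permutations
from collections import Counter

m, n = 3, 6
nodes = [(i, j) for i in range(m) for j in range(n)]

def color(v):                          # hypothetical reconstruction
    i, j = v
    if j < 3:
        return ('a', (i - j) % 3)      # region A: cyclic 3-coloring
    return 'd' if i == j - 3 else 'p'  # region B: diagonal / purple

def balanced():                        # types (b) and (c) separately
    seen = {}
    for i, j in nodes:
        row = Counter(color((i, k)) for k in range(n) if k != j)
        col = Counter(color((k, j)) for k in range(m) if k != i)
        if seen.setdefault(color((i, j)), (row, col)) != (row, col):
            return False
    return True

def image(v, a, b, g):                 # action of S_3^R x S_3^A x S_3^B
    i, j = v
    return (a[i], b[j]) if j < 3 else (a[i], g[j - 3] + 3)

S3 = list(permutations(range(3)))
iso = [(a, b, g) for a in S3 for b in S3 for g in S3
       if all(color(image(v, a, b, g)) == color(v) for v in nodes)]
orbits = {frozenset(image(v, *s) for s in iso) for v in nodes}
print(balanced(), len(iso), len(orbits))   # expect: True 3 6
\end{verbatim}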
\subsection{The Network ${\mathcal G}_{55}$}
The example in the previous section involves two disjoint `blocks' of colors
in regions $A$ and $B$. By passing to more learned patterns --- five, to be precise --- we
can find an example with one block.
We use a coloring of ${\mathcal G}_{55}$ based on a Latin square to give an example of an
exotic balanced coloring. Again we
omit type (d) arrows and show types (b) and (c) schematically, appealing to
Lemma~\ref{L:delete}.
\begin{remark}\em
The classification of $3\times 3$ and $4 \times 4$ Latin squares~\cite{wiki}
shows that no analogous examples exist with 3 or 4 learned patterns.
\end{remark}
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2in]{5x5latin.pdf}}
\caption{Exotic balanced coloring of ${\mathcal G}_{55}$. Type (d) arrows omitted,
without loss of generality.}
\label{F:5x5latin}
\end{figure}
\begin{theorem}
The coloring in Figure~{\em \ref{F:5x5latin}} is balanced, but is not an orbit coloring.
\end{theorem}
\begin{proof}
This coloring is balanced because every node receives exactly one
type (b) connection and one type (c) connection from
a node of any different color.
To prove that this coloring is not an orbit coloring,
observe that the subgroup that fixes the black pattern is clearly
the diagonal subgroup
\[
\Delta = \{ (\sigma, \sigma) : \sigma \in \mathbb{S}_5\} \subseteq \mathbb{S}_5 \times\mathbb{S}_5
\]
For $\sigma \in \mathbb{S}_5$ define the permutation matrix $P_\sigma$ by
\[
(P_\sigma)_{ij} = \left\{ \begin{array}{rcl}
1 & \mbox{if} & j = \sigma(i) \\
0 & \mbox{if} & j \neq \sigma(i)
\end{array} \right.
\]
Let $M$ be a $5 \times 5$ matrix. Then $P_\sigma M$ permutes
the rows of $M$, sending row $i$ to row $\sigma^{-1}(i)$.
Similarly $MP_\sigma$ permutes
the columns of $M$, sending column $j$ to column $\sigma(j)$.
Thus an element $(\sigma, \sigma) \in \Delta$ can be considered
as acting on the space of $5 \times 5$ matrices by conjugation:
\[
M \mapsto P_\sigma MP_\sigma^{-1}
\]
This permutes both rows and columns simultaneously, via the same permutation.
Define five matrices $M_j$ corresponding to the positions of nodes of a given color:
\[
M_1 = \Matrix{1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1} \quad
M_2 = \Matrix{0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0} \quad
M_3 = \Matrix{0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0}
\]
\[
M_4 = \Matrix{0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0} \quad
M_5 = \Matrix{0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0}
\]
\noindent
We already know that the permutations $(\sigma,\tau)$ that fix the
black nodes are those in $\Delta$. So $(\sigma,\sigma) \in \Delta$ fixes
all $M_j$ (which is equivalent to fixing the coloring) if and only if
\[
P_\sigma M_jP_\sigma^{-1} = M_j \quad 2 \leq j \leq 5
\]
or equivalently
\[
M_jP_\sigma = P_\sigma M_j \quad 2 \leq j \leq 5
\]
Solving these equations using Mathematica, we find that
\[
P_\sigma = \Matrix{a & b & b & b & b \\ b & a & b & b & b \\ b & b & a & b & b \\
b & b & b & a & b \\ b & b & b & b & a }
\]
for $a, b \in \mathbb{R}$. This is a permutation matrix if and only if $a=1, b=0$,
so the isotropy subgroup of the coloring is trivial.
The orbit coloring of the trivial subgroup assigns every node its own color, so
Proposition~\ref{P:iso_col} implies that the pattern in Figure~\ref{F:5x5latin} is not an orbit coloring.
\end{proof}
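This computation is easy to reproduce. The sketch below (ours) encodes each $M_j$ by the column of the $1$ in each row and searches $\mathbb{S}_5$ for elements $(\sigma,\sigma)$ preserving every color class:
\begin{verbatim}
from itertools import permutations

# Column (0-indexed) of the 1 in each row of M_1,...,M_5,
# transcribed from the matrices above.
M = [[0, 1, 2, 3, 4],
     [1, 2, 3, 4, 0],
     [2, 4, 1, 0, 3],
     [3, 0, 4, 2, 1],
     [4, 3, 0, 1, 2]]

def fixes(s, col):
    # (sigma, sigma) preserves the class {(i, col[i])} iff
    # col[sigma[i]] == sigma[col[i]] for all i, which is the
    # commutation condition M P = P M above.
    return all(col[s[i]] == s[col[i]] for i in range(5))

iso = [s for s in permutations(range(5))
       if all(fixes(s, col) for col in M)]
print(iso)   # expect only the identity: [(0, 1, 2, 3, 4)]
\end{verbatim}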
\section{Decision Making}
Networks ${\mathcal G}_{mn}$ occur in models of decision making~\cite{FGBL20}.
We summarize the salient features. The model assumes that a group
of $m$ equal agents is trying to decide among a set of $n$ options, initially
assumed to be of equal value to all agents.
This case is mathematically tractable because of its symmetry, and it is also important
for examining more complex real-world settings.
The equality assumptions are formalized as symmetries.
Equation (2.11) of~\cite{FGBL20} defines a specific class of admissible ODEs,
which is an equivariant
version of a more general model~\cite{Bizyaeva2020}.
The authors use the Equivariant Branching Lemma to prove the existence
of various symmetry-breaking equilibria.
\subsection{Exotic Colorings of Opinion Networks}
\label{S:exotic_decision}
We now discuss the difference between admissible maps for the network
${\mathcal G}_{mn}$ and equivariant maps for $\mathbb{S}_m\times\mathbb{S}_n$. Recall
from~\cite{GST05, SGP03} that the {\em vertex group} $B(i,i)$
at a node $i$ of a network is the group of all input automorphisms
on the set $I(i)$ of all input arrows of node $i$.
The proof of \cite[Theorem 4.4]{AS06} can easily be
adapted to show that for a symmetric network,
admissible maps are the same as equivariant maps
if and only if:
\begin{itemize}
\item[\rm (a)]
The network is all-to-all connected.
\item[\rm (b)]
Every vertex symmetry extends to an element of the symmetry group.
\end{itemize}
\noindent
When the network is ${\mathcal G}_{45}$, for example, Figure~\ref{F:5x3gpnet} (right)
shows that the vertex group of any node $c=(i,j)$ of ${\mathcal G}_{mn}$ is
\[
B(c,c) = \mathbb{S}_4 \times \mathbb{S}_3 \times \mathbb{S}_{12}
\]
Condition (a) holds in this case, but
\[
|B(c,c)| = 4!\, 3!\, 12! = 68976230400 \qquad |\mathbb{S}_4 \times \mathbb{S}_5| = 2880
\]
so condition (b) does not.
Therefore admissible maps are not the same as equivariant maps for ${\mathcal G}_{45}$.
\begin{remark}\em
It can easily be shown that the linear admissible maps for ${\mathcal G}_{mn}$ are
the same as the linear equivariant maps, so the usual conditions for local bifurcation
from a fully symmetric state are the same in both cases. However, the
detailed nonlinear structure can be different, affecting the bifurcations: see
Section~\ref{S:BEP}.
\end{remark}
The same calculation and a little extra effort prove:
\begin{theorem}
\label{T:equiv_not_admiss}
The equivariant maps for $\mathbb{S}_m\times\mathbb{S}_n$ are the same as the admissible maps
for ${\mathcal G}_{mn}$ if and only if $(m,n) = (1,n), (m,1), (2,2), (2,3), (3,2)$.
\end{theorem}
\begin{proof}
The vertex group of any node has order $v = (m-1)!(n-1)![(m-1)(n-1)]!$.
This must be less than or equal to $|\mathbb{S}_m\times\mathbb{S}_n| = g = m!n!$. It is easy to show that this is the case if
and only if $m = 1$ or $n = 1$ (with the other index arbitrary), or $(m,n) = (2,2), (2,3), (2,4), (3,2), (4,2)$. We
rule out $(2,4)$ and $(4,2)$ by considering the vertex group structure, not just its order.
If $m = 1$ then $v = (n-1)!$ and we must embed $\mathbb{S}_{n-1}$ in $\mathbb{S}_n$, which is
possible; the case $n = 1$ is symmetric.
If $m=2, n=2$ we must embed $\mathbb{S}_1\times\mathbb{S}_1\times\mathbb{S}_1$ in $\mathbb{S}_2\times\mathbb{S}_2$,
which is possible.
If $m=2, n=3$, or the other way round, we must embed $\mathbb{S}_1\times\mathbb{S}_2\times\mathbb{S}_2$ in $\mathbb{S}_2\times\mathbb{S}_3$, which is possible.
If $m=2, n=4$, or the other way round, we must embed $\mathbb{S}_3\times\mathbb{S}_3$ in $\mathbb{S}_2\times\mathbb{S}_4$.
This is {\em not} possible: the order-3 elements $(c,\mathrm{id})$ and $(\mathrm{id},c)$ of
$\mathbb{S}_3\times\mathbb{S}_3$, where $c$ is a 3-cycle, commute and generate a subgroup of
order 9, but 9 does not divide $|\mathbb{S}_2\times\mathbb{S}_4| = 48$.
It remains to show that in the cases listed, all equivariants are admissible.
This is trivial for $m = 1$ since it holds for an $\mathbb{S}_n$ group network.
For $(2,2)$ the vertex groups are trivial. For $(2,3)$, hence also $(3,2)$, the
vertex groups are $\mathbb{Z}_2 \times \mathbb{Z}_2$ with each factor acting independently on
two arrows of a given type (solid, gray). These elements extend to
automorphisms. The networks are all-to-all connected, so conditions (a) and (b) hold,
which completes the proof.
\end{proof}
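The order condition in the first paragraph is easily enumerated; a quick sketch (ours), over a small range for illustration:
\begin{verbatim}
from math import factorial as f

ok = [(m, n) for m in range(1, 8) for n in range(1, 8)
      if f(m-1) * f(n-1) * f((m-1)*(n-1)) <= f(m) * f(n)]
print(ok)  # (1,n) and (m,1) for all m, n in range, together with
           # (2,2), (2,3), (2,4), (3,2), (4,2)
\end{verbatim}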
This theorem suggests that there is scope for
exotic colorings of opinion networks to exist. We confirm this by giving
two examples of such colorings.
\begin{example}\em
In this example $m=3, n = 6$ or $m=6, n = 3$.
The $3 \times 6$ array in Figure~\ref{F:3x6wilson} can be interpreted as an opinion network with
$3$ agents and $6$ options --- or, alternatively, $6$ agents and $3$ options. The trained Wilson network includes arrows of types (a), (b), (c) above, and we have seen that the coloring is balanced for
each type separately. The opinion network has a fourth type of arrow, type (d).
By Lemma~\ref{L:delete}, the same coloring is
also balanced for type (d) arrows, so it is balanced for the opinion network.
This can also be checked directly by counting all input nodes of a given color.
\end{example}
\begin{example}\em
In the second example $m=5, n = 5$.
The $5\times 5$ network of Figure~\ref{F:5x5latin} can be interpreted as
an opinion network, and again gives an exotic coloring.
\end{example}
It seems highly likely that similar constructions provide exotic patterns
(both for trained Wilson networks and opinion networks) where
$(m,n) = (k, 2k)$ for $k \geq 4$ and $(m,n) = (k, k)$ for $k \geq 6$.
We have not investigated this question.
\section{Wilson and Rivalry Networks}
\label{S:WRN}
Recall that an untrained Wilson network ${\mathcal W}$ consists of $n$ disjoint columns,
as in Figure~\ref{F:wilson_net_un}.
Each column is an all-to-all connected network (each pair of distinct nodes is connected
in each direction by an arrow) in which all arrows are equivalent
and inhibitory. Arrows in distinct columns are equivalent, and each column contains
the same number $m$ of nodes. The nodes form an $m \times n$ array, and we
label nodes by pairs $(i,j)$ where $1 \leq i \leq m, 1\leq j \leq n$.
\begin{lemma}
The automorphism group of ${\mathcal W}$ is the wreath product
\[
\Gamma = \mathbb{S}_m \wr \mathbb{S}_n
\]
\end{lemma}
\proof
The assertion is a formal statement that automorphisms can permute the nodes
in each column, independently, and also permute columns.
\qed
\subsection{Colorings of Untrained Wilson Networks}
We will prove that Wilson networks trained on 0, 1, or 2 patterns
do not have exotic colorings. To do so, we require the following definition:
\begin{definition}\em
A partition of nodes is {\em color-disjoint} if nodes of the same color
belong to the same part. That is, the colors refine the partition.
\end{definition}
We also need the following proposition, which has some interest in its own right:
\begin{proposition}
\label{P:rectangles}
Let ${\mathcal W}$ be an untrained Wilson network.
Every balanced coloring of ${\mathcal W}$ is the orbit coloring for
an isotropy subgroup of $\Gamma$.
\end{proposition}
\proof
All nodes of ${\mathcal W}$ are input equivalent.
Consider a coloring and let its isotropy subgroup be $\Sigma$.
We claim that the coloring is given by the fixed-point set of $\Sigma$;
that is, colors are determined by the $\Sigma$-orbits.
If two columns contain a node with the same color, balance implies that every
color occurs in each column with the same multiplicity. Using permutations
of the columns, we can arrange all columns that share a color so that they are next
to each other. This defines a partition of the set of columns. Call its parts `blocks'.
Within any given block, we can permute the nodes in those columns
in any manner using $\Gamma$. Since colors occur with equal multiplicities,
we can use such permutations to arrange the colors in rectangles,
as in Figure~\ref{F:wilson_net_0}.
Call this the {\em standard form} of the coloring. It lies in the
$\Gamma$-orbit of the original coloring because
the permutations concerned lie in $\Gamma$. Their effect produces a coloring
whose isotropy subgroup $\Sigma'$ is conjugate to $\Sigma$, so
without loss of generality we can consider a coloring in standard form.
We claim that the orbit coloring of $\Sigma'$
is the standard coloring. By definition the standard coloring is
fixed by $\Sigma'$. We must show that no finer coloring is fixed
by $\Sigma'$.
Since different blocks of columns
are color-disjoint, it is enough to consider one block.
Within that block, the rectangles are color-disjoint,
so it is enough to consider a single rectangle.
This rectangle is a Wilson network in its own right. Since all nodes have the same color,
this is the orbit coloring for a subgroup of $\Sigma$ isomorphic to
$\mathbb{S}_p \wr \mathbb{S}_q$, where the rectangle has $p$ rows and $q$ columns.
\qed
\begin{figure}[htb]
\centerline{%
\includegraphics[width=3in]{wilson_net_0.pdf}}
\caption{Partitioning an untrained Wilson network by a balanced coloring. Dotted lines indicate
inhibitory connections within columns. Connections along each line
are all-to-all, but for clarity they are shown only for adjacent nodes.}
\label{F:wilson_net_0}
\end{figure}
\subsection{Training with Patterns}
Define
\[
M = \{ 1, \ldots, m\} \qquad
N = \{ 1, \ldots, n\}
\]
so that nodes are labelled by $M \times N$. Here $M$ parametrizes the rows
and $N$ the columns.
A {\em pattern} is a function
\[
f:N \rightarrow M \times N
\]
such that for all $n \in N$ we have $f(n) = (g(n), n)$ for some $g(n) \in M$.
Then $f$ is a {\em section} of the projection $M \times N \rightarrow N$. We define ${\mathcal C}^f$ to be
the set of nodes of the form $(f(n), n)$ for $n \in N$.
We represent the learning of pattern $f$ by ${\mathcal W}$ as the introduction of new
excitatory arrows, leading to a network ${\mathcal W}^f$. The new arrows are all
identical, and one arrow connects each ordered pair of distinct nodes in ${\mathcal C}^f \times {\mathcal C}^f$.
Learning a set of patterns $F=\{f_i\}$ is represented by adding one set of such
arrows for each $f_i \in F$. The resulting network, denoted by ${\mathcal W}^F$, is a
{\em trained Wilson network} or {\em rivalry network}.
\begin{remark}\em
This notion of learning implies that when two (or more) learned patterns
have some nodes in common,
those nodes receive two (or more) new arrows, not one.
In Wilson network models of visual illusions, learned patterns are replaced by patterns
representing pre-learned geometric consistency conditions~\cite{SG19}.
\end{remark}
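A minimal sketch (ours) of the construction: each learned pattern is represented by its section $g$, and one excitatory arrow is added per ordered pair per pattern, so nodes shared by patterns receive doubled arrows, as in the remark:
\begin{verbatim}
from itertools import product

def trained_wilson(m, n, patterns):
    # patterns: list of dicts g with g[j] = row of the pattern node
    # in column j, i.e. f(j) = (g[j], j).
    arrows = []
    for j in range(n):              # inhibitory all-to-all in columns
        for a, b in product(range(m), repeat=2):
            if a != b:
                arrows.append(((a, j), (b, j), 'inhibitory'))
    for g in patterns:              # one excitatory set per pattern
        cells = [(g[j], j) for j in range(n)]
        for c, d in product(cells, repeat=2):
            if c != d:
                arrows.append((c, d, 'excitatory'))
    return arrows
\end{verbatim}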
We assume that all of these excitatory arrows are identical for different $f_i$.
It is easy to see that if $f$ is a single pattern then every balanced coloring
of ${\mathcal W}^f$ is an orbit coloring. We prove the same is true for two patterns.
(The method easily specializes to a single pattern.) First, we set
up some useful observations.
Suppose that $F= \{f_1, f_2\}$ consists of two distinct patterns. Define the
{\em input-type} of a node $c$ to be the set of all nodes $d$ such that $I(d)$ is
input-isomorphic to $I(c)$. By~\cite[Section 6]{SGP03}, any
balanced coloring refines input-type. The network ${\mathcal W}^F$ has precisely
three distinct input-types:
\begin{itemize}
\item[]
Type $A$: Nodes that do not appear in a learned pattern. These have no excitatory connections
and $m-1$ inhibitory ones.
\item[]
Type $B$: Nodes at which the two learned patterns coincide, so that they lie in both patterns:
these have $2n-2$ excitatory connections (counted with multiplicity) and $m-1$ inhibitory ones.
\item[]
Type $C$: Nodes that appear in exactly one learned pattern: these have $n-1$ excitatory connections
and $m-1$ inhibitory ones.
\end{itemize}
We also use $A, B, C$ for the corresponding sets of nodes, Figure~\ref{F:wilson_net_1}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=5in]{wilson_net_1.pdf}}
\caption{Partitioning a Wilson network with two learned patterns.
Solid lines indicate learned excitatory connections,
dotted lines indicate inhibitory connections within columns. Connections along each line
are all-to-all, but for clarity these are shown only for adjacent nodes in each learned pattern.}
\label{F:wilson_net_1}
\end{figure}
Since a balanced coloring refines input equivalence, the partition
$\{A,B,C\}$ is color-disjoint.
These three types determine the group $\mathrm{aut}\,({\mathcal W}^F)$.
It is a subgroup of $\mathrm{aut}\,({\mathcal W})$ (delete excitatory connections).
It is generated by:
\begin{itemize}
\item[\rm (1)]
Permutations of the nodes in any given column that fix all nodes in $B, C$.
These permute the nodes of type $A$ in the column.
\item[\rm (2)]
The permutation $\kappa$ that fixes every node of type $B$ and
swaps the two nodes of type $C$ in all columns that contain
such nodes.
\item[\rm (3)]
All permutations of those columns that contain nodes of type $B$.
\item[\rm (4)]
All permutations of those columns that contain nodes of type $C$.
\end{itemize}
These permutations generate the subgroup
\begin{equation}
\label{E:Omega}
\Omega = \mathbb{Z}_2^\kappa \times (\mathbb{S}_{m-2}\wr \mathbb{S}_k) \times (\mathbb{S}_{m-1}\wr \mathbb{S}_{n-k}) \subseteq \mathbb{S}_m \wr \mathbb{S}_n
\end{equation}
where $k$ is the number of columns containing nodes of type $C$ and $\mathbb{Z}_2^\kappa$ denotes the group generated by $\kappa$.
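Since $|\mathbb{S}_a \wr \mathbb{S}_b| = (a!)^b\, b!$, the order of this group is
\[
|\Omega| = 2\, ((m-2)!)^k\, k!\, ((m-1)!)^{n-k}\, (n-k)!
\]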
\begin{theorem}
\label{T:2pattorbit}
Suppose that $F$ consists of one or two distinct patterns. Then every balanced coloring
of ${\mathcal W}^F$ is an orbit coloring.
\end{theorem}
\begin{proof}
We give the proof for two patterns; then we specialize to get the proof for one pattern.
So let $F = \{f_1,f_2\}$ where $f_1 \neq f_2$.
We can use permutations within each column to arrange for nodes
in learned patterns to appear as the top node, or top two nodes, of each column.
Moreover, nodes in the same $f_j$ appear in the same row
when the patterns are distinct in that column.
Permutations between columns can position all the nodes that belong to two patterns
first (reading along rows from left to right), followed by those that belong to only one,
as in Figure~\ref{F:wilson_net_1}.
Recall that the network ${\mathcal W}^F$ has precisely three distinct input-types $A, B, C$,
defined above, and that the partition $\{A,B,C\}$ is color-disjoint, since a balanced
coloring refines input equivalence.
Consider the colors appearing in the two rows of region $C$. If these rows have a color
in common, the all-to-all excitatory connections, plus balance,
imply that every color appearing in the top row must also appear
in the bottom row, with the same multiplicity.
There are thus two cases:
\begin{itemize}
\item[] Case 1:
Every color appearing in the top row of region $C$ also appears
in the bottom row, with the same multiplicity.
\item[] Case 2:
The two rows of region $C$ are color-disjoint.
\end{itemize}
First, we deal with Case 1.
We refine the partition $\{A,B,C\}$ in two further stages. We begin by splitting region $C$
into two parts $C_1, C_2$, as follows.
If a node in row 1 has an inhibitory connection to
a node with the same color in row 2 (necessarily in the same column),
place those nodes in set $C_1$. Otherwise place them in $C_2$.
We claim that each vertical connection in $C_2$ links the same
two (distinct) colors. Suppose a node in row 1 has color $\alpha$
and is connected to color $\beta$ in row 2. Balance implies
that every node of color $\alpha$ in either row must be
paired vertically with color $\beta$ in the other row. Moreover,
both rows contain the same number of nodes with any given color,
so colors $\alpha$ and $\beta$ occur with the same multiplicity
in row 1 and in row 2. (In particular, $C_2$ meets an even number of columns.)
Clearly $C_1$ and $C_2$ are color-disjoint, by balance: a color anywhere
in $C_1$ cannot connect vertically both to itself and to something different.
(Bear in mind that regions $C$ and $A$ are color-disjoint, so the
discrepancy cannot be repaired by coloring another node in the same column.)
We now split region $A$ into $A_1, A_2, A_3$, according to whether
the top node of the column lies in $C_1, C_2,$ or $B$. Balance clearly
implies that all six regions $A_1, A_2, A_3, B, C_1, C_2$ are color-disjoint.
See Figure~\ref{F:wilson_net_2}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=5in]{wilson_net_2_col.pdf}}
\caption{Refining the partition of the trained Wilson network by splitting
regions $A, C$.}
\label{F:wilson_net_2}
\end{figure}
Finally, we partition regions $C_1$ and $B$ according to the color
of the top node. Region $C_2$ is similarly partitioned using the colors
of the vertical pairs in the top two rows. (Only one such subset is
drawn but there could be many.) Without loss of generality we can
use elements of $\Omega$,
defined in~\eqref{E:Omega}, to group nodes of a given color (or color pair)
into blocks of adjacent nodes, Figure~\ref{F:wilson_net_3}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=5in]{wilson_net_3_col.pdf}}
\caption{Refining the partition of the trained Wilson network even further
by splitting regions according to colors appearing in the learned patterns.}
\label{F:wilson_net_3}
\end{figure}
As in the proof of Proposition~\ref{P:rectangles}, we can use elements of $\Omega$
to arrange colors in the blocks of $A_j$ into rectangles, each rectangle containing
a single color.
Recall that $\kappa$ flips the two learned patterns (acting
trivially on $B$). Define a permutation $\rho$ of the columns, of order 2,
which acts trivially on $B, C_1$ and transposes pairs of adjacent columns in $C_2$
whose vertical color pairs are interchanged (one column has $\alpha$ above $\beta$,
the other $\beta$ above $\alpha$). Then $\kappa\rho$ fixes all the colors
of nodes in $B \cup C$. Further permutations of the columns
force equalities of colors along the top row.
We now construct, for any rectangle $R$ in any region, a subgroup
$\Sigma_R \subseteq \Sigma$ whose fixed-point set determines
the coloring in that rectangle. The subgroup generated by all such
permutations, together with $\kappa\rho$ and any permutations of columns
already defined, has the same fixed-point set as $\Sigma$ because
$\Sigma$ is the isotropy group of the coloring. The construction of
$\Sigma_R$ is the same as that in the proof of Proposition~\ref{P:rectangles},
giving a group $\mathbb{S}_p \wr \mathbb{S}_q$ where the rectangle has $p$ rows and $q$ columns.
This completes Case 1.
Case 2 is similar, but now the two rows of $C$ are color-disjoint,
so $\kappa\rho$ is not required.
This completes the proof for two learned patterns.
When there is only one learned pattern, the same proof applies taking $C$ to be the empty set.
Now every arrow in $B$ occurs twice, but this is effectively the same as a single
arrow of a new type.
\end{proof}
The network of Figure~\ref{F:3x6wilson} can be viewed
as a trained Wilson network with three (disjoint) learned patterns.
Thus the analog of Theorem~\ref{T:2pattorbit} is not valid for
Wilson networks with three learned patterns. Similarly,
the network of Figure~\ref{F:5x5latin} can be viewed
as a trained Wilson network with five (disjoint) learned patterns.
\section{Bifurcations to Exotic Patterns}
\label{S:BEP}
We comment briefly on the implications of Figure~\ref{F:5x5latin} for bifurcations.
The corresponding analysis for Figure~\ref{F:3x6wilson} is less straightforward
when colors are amalgamated to apply the Equivariant Branching Lemma,
and we have not carried it out.
Associated with any balanced coloring $K$ of a network ${\mathcal G}$ is a {\em quotient network} ${\mathcal G}_K$
with one node for each color, and input arrows copied from the original
network according to the colors of head and tail nodes. It is shown in~\cite{GST05,SGP03}
that with canonical identifications of state spaces, the restriction of
an admissible map for ${\mathcal G}$ to the synchrony subspace of $K$ is admissible for ${\mathcal G}_K$,
and all admissible maps for ${\mathcal G}_K$ can be obtained in this manner.
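To make the balance condition and the quotient construction concrete, here is a minimal Python sketch (our illustration, not taken from~\cite{GST05,SGP03}). A network is given as a list of typed arrows `(tail, head, arrow_type)` and a coloring as a dict from nodes to colors; all names are ours.

```python
from collections import Counter

def input_multiset(node, arrows, color):
    """Multiset of (arrow type, tail color) over the arrows entering `node`."""
    return Counter((typ, color[tail]) for tail, head, typ in arrows if head == node)

def is_balanced(nodes, arrows, color):
    """Balanced: same-colored nodes see the same multiset of colors
    along each arrow type."""
    seen = {}
    for v in nodes:
        m = input_multiset(v, arrows, color)
        if seen.setdefault(color[v], m) != m:
            return False
    return True

def quotient(nodes, arrows, color):
    """One quotient node per color; input arrows copied from any
    representative node, with tails replaced by their colors."""
    rep = {}
    for v in nodes:
        rep.setdefault(color[v], v)
    q_arrows = [(color[tail], c, typ)
                for c, v in rep.items()
                for tail, head, typ in arrows if head == v]
    return sorted(rep), q_arrows
```

By the results cited above, restricting an admissible map to the synchrony subspace of a balanced coloring then descends to this quotient network.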
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2.3in]{5colQuot.pdf}
}
\caption{The quotient network for Figure~\ref{F:5x5latin}. Lines without arrows
are bidirectional. Numbers beside gray arrows indicate multiplicities.}
\label{F:5x5latinQuot}
\end{figure}
The quotient network for the coloring in Figure~\ref{F:5x5latin} is shown in
Figure~\ref{F:5x5latinQuot}. It has five nodes and $\mathbb{S}_5$ symmetry.
It is easy to see that {\em any} coloring of this quotient network
is balanced; indeed, it is an orbit coloring for the subgroup of $\mathbb{S}_5$ that preserves
the colors of nodes.
Moreover, balanced colorings lift from quotient networks to
the original network, and remain balanced. However, the lift need not
be an orbit coloring on the full network, as we now demonstrate.
We can apply the Equivariant Branching Lemma
to this quotient, and then lift the resulting pattern to ${\mathcal G}_{55}$,
to prove the generic existence of branches with (for instance) two colors, obtained
by amalgamating colors in Figure~\ref{F:5x5latin} in any manner. The question is
whether any of these amalgamated colorings is exotic for ${\mathcal G}_{55}$.
Some are not, but a plausible candidate is
Figure~\ref{F:5x5latin2col} (left), obtained by coloring black and yellow the same,
and red, green, and blue the same. We claim that this pattern is indeed exotic
in ${\mathcal G}_{55}$. The proof uses Proposition~\ref{P:iso_col}.
\begin{figure}[htb]
\centerline{%
\includegraphics[width=2in]{5x5latin2col.pdf} \qquad\qquad
\includegraphics[width=2in]{5x5latin2colFix.pdf}
}
\caption{{\em Left}: This balanced 2-coloring is not an orbit coloring for $\mathbb{S}_5 \times \mathbb{S}_5$.
{\em Right}: The fixed-point subspace of the isotropy subgroup of the 2-color pattern is a 3-color pattern.}
\label{F:5x5latin2col}
\end{figure}
Calculations with Mathematica show that the
isotropy subgroup $\Sigma \subseteq \mathbb{S}_5 \times \mathbb{S}_5$ of
Figure~\ref{F:5x5latin2col} (left) is isomorphic
to a dihedral group $\mathbb{D}_5$. Since this acts on 25 nodes, and its orbits have
size at most 10, there must be at least 3 orbits, so its fixed-point subspace
must have dimension at least 3. Therefore Figure~\ref{F:5x5latin2col} is not
an orbit coloring for $\mathbb{S}_5 \times \mathbb{S}_5$.
In more detail: the group $\Sigma$ is generated by
the pairs of permutations (in cycle notation)
\[
g = ((13245),(13245)) \qquad h = ((1)(24)(35), (14)(23)(5))
\]
which satisfy the relations
\[
g^5 = h^2 = 1, \ hgh = g^{-1}
\]
for $\mathbb{D}_5$. It is a twisted $\mathbb{D}_5$ subgroup of $\mathbb{S}_5 \times \mathbb{S}_5$; that is, it consists of
elements $(\gamma, \theta(\gamma))$ where $\gamma \in \mathbb{D}_5 \subseteq \mathbb{S}_5$
and $\theta: \mathbb{D}_5 \to \mathbb{D}_5$ is an isomorphism. Up to conjugacy we can make
$\theta$ the identity on $\mathbb{Z}_5$; it then maps an order-2 element to a different order-2 element.
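The stated presentation can also be checked by machine, independently of Mathematica. The following minimal Python sketch (ours; permutations stored as dictionaries on $\{1,\dots,5\}$) verifies that each coordinate of the generators satisfies $g^5 = h^2 = 1$ and $hgh = g^{-1}$, so the pair acts coordinatewise as a $\mathbb{D}_5$.

```python
def compose(p, q):
    """(p o q)(x) = p(q(x)); permutations as dicts on {1,...,5}."""
    return {x: p[q[x]] for x in q}

def power(p, k):
    r = {x: x for x in p}
    for _ in range(k):
        r = compose(p, r)
    return r

ident = {x: x for x in range(1, 6)}
g  = {1: 3, 3: 2, 2: 4, 4: 5, 5: 1}    # the 5-cycle (13245), both coordinates
h1 = {1: 1, 2: 4, 4: 2, 3: 5, 5: 3}    # (1)(24)(35), first coordinate of h
h2 = {1: 4, 4: 1, 2: 3, 3: 2, 5: 5}    # (14)(23)(5), second coordinate of h
g_inv = {v: k for k, v in g.items()}

for h in (h1, h2):
    assert power(g, 5) == ident and compose(h, h) == ident
    assert compose(h, compose(g, h)) == g_inv   # h g h = g^{-1}
print("both coordinates satisfy the D5 presentation")
```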
The fixed-point subspace of $\Sigma$ comprises the patterns in
Figure~\ref{F:5x5latin2col} (right). There are two orbits of size 10 (black, red)
and one of size 5 (orange). By Lemma~\ref{L:delete}, this $2$-coloring is not an orbit coloring.
(The orange nodes do {\em not} correspond to
a specific color in Figure~\ref{F:5x5latin} (left): the reason is that
this $5$-coloring is not an orbit coloring.)
The quotient network for the 3-coloring of Figure~\ref{F:5x5latin2col} (right),
which we do not draw, has
an obvious $\mathbb{Z}_2$ symmetry that swaps red and black but fixes orange.
It has an orbit coloring that merges red and black.
It also has a balanced coloring that merges red and orange, which is
not a $\mathbb{Z}_2$ orbit coloring, and this lifts to Figure~\ref{F:5x5latin2col} (right).
\subsection{Stability}
As well as existence of particular states, it is also important to study their stability.
This is a more delicate issue. In particular, whenever the critical eigenspace supports
a nontrivial equivariant quadratic, all axial bifurcating branches of equilibria are {\em unstable}
near the bifurcation point; see~\cite[Chapter XIII, Theorem 4.4]{GSS88}.
When the symmetry group is $\mathbb{S}_n$ in its standard
permutation representation, and we are considering a symmetry-breaking bifurcation, such a quadratic equivariant exists for all $n \geq 3$. This problem also arises
for $\mathbb{S}_m\times\mathbb{S}_n$.
It is discussed in the evolutionary models of~\cite{CS00,GS02,SEC03},
for $\mathbb{S}_n$, and in the decision models of~\cite{FGBL20} for $\mathbb{S}_m\times\mathbb{S}_n$. In both cases
the remedy is to include suitable cubic equivariant terms to `compactify' the bifurcation.
This allows an unstable transcritical branch to turn around at a saddle-node (fold) point
and regain stability. This creates a jump bifurcation.
(If $n = 2k$ there is an exception: the isotropy
subgroup $\mathbb{S}_k \times\mathbb{S}_k$ gives a pitchfork bifurcation.)
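A one-dimensional caricature of this mechanism (a sketch for illustration only; the equivariant problem has many more dimensions) is
\[
\dot{x} \;=\; \lambda x + a x^{2} - x^{3}, \qquad a > 0,
\]
whose nontrivial equilibria satisfy $\lambda = x^{2} - a x$. The branch bifurcates transcritically at $\lambda = 0$; on it the linearization is $a x - 2 x^{2}$, so the branch is unstable for $0 < x < a/2$, turns around at the fold $(\lambda, x) = (-a^{2}/4,\, a/2)$, and is stable for $x > a/2$. As $\lambda$ increases through $0$ the trivial state loses stability and the system jumps to the stable upper part of the branch.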
The same issue arises when considering the stability of the pattern in
Figure~\ref{F:5x5latin2col} (left), because the symmetry group of the quotient
network is $\mathbb{S}_5$, and on the relevant synchrony subspace the pattern
is given by an axial subgroup $\mathbb{S}_2 \times \mathbb{S}_3$. The same remedy also
applies: include suitable cubic equivariants so that the branch turns round.
It can then regain stability within the synchrony subspace. This still leaves open
whether it is also stable transverse to the synchrony subspace, that is, to
perturbations that break the synchrony of the balanced $5$-coloring.
We have not yet investigated this question, but the problem is likely to be complicated.
Numerical simulations in~\cite{FGBL20} suggest that states arising from axial subgroups
of $\mathbb{S}_5 \times \mathbb{S}_5$ can be stabilized in this manner in the whole
of state space $\mathbb{R}^{25}$. However, this particular state has
not yet been explored numerically.
\section{Conclusions}
We have studied the formation of synchrony patterns in networks ${\mathcal G}_{mn}$ whose nodes form
$m \times n$ arrays, and whose connections imply $\mathbb{S}_m \times \mathbb{S}_n$
symmetry. Symmetry methods --- namely, the Equivariant Branching Lemma
and the Equivariant Hopf Theorem --- prove the generic existence of bifurcating
branches that break symmetry to, respectively, axial subgroups and $\mathbb{C}$-axial subgroups.
Network structure is stronger than equivariance, and it can create
exotic colorings, not predicted by symmetry considerations. Such colorings
exist in particular when $(m,n) = (3,6)$ and $(5,5)$.
Applying the Equivariant Branching Lemma to a quotient network of
${\mathcal G}_{55}$, we have proved the existence of an exotic 2-coloring of
${\mathcal G}_{55}$ itself. This pattern is unstable near bifurcation. It can regain
stability in the quotient network, but may still be unstable to perturbations
that break the synchrony pattern defining that quotient.
Networks ${\mathcal G}_{mn}$ have been used to model decision making,
and related networks to which the same results apply occur in models
of binocular rivalry and visual illusions. The occurrence of exotic
synchrony patterns predicts states of such models that have not been
derived using equivariant methods, and whose description is not
given by isotropy subgroups.
We have also shown that in rivalry models, networks trained on one or
two images do not have exotic colorings, so all synchrony patterns
(hence also all phase patterns) can be characterized by isotropy subgroups
of the symmetry group.
https://arxiv.org/abs/math/9904037 | Geometric Knot Spaces and Polygonal Isotopy | The space of n-sided polygons embedded in three-space consists of a smooth manifold in which points correspond to piecewise linear or ``geometric'' knots, while paths correspond to isotopies which preserve the geometric structure of these knots. The topology of these spaces for the case n = 6 and n = 7 is described. In both of these cases, each knot space consists of five components, but contains only three (when n = 6) or four (when n = 7) topological knot types. Therefore ``geometric knot equivalence'' is strictly stronger than topological equivalence. This point is demonstrated by the hexagonal trefoils and heptagonal figure-eight knots, which, unlike their topological counterparts, are not reversible. Extending these results to the cases n \ge 8 is also discussed. |
\section{Introduction}
Consider the sorts of configurations that can be constructed out of a
sequence of line segments, glued end to end to end to form an embedded
loop in $ \Field{R}^{3}. $ The line segments might represent bonds between
atoms in a polymer, segments in the base-pair sequence of a circular
DNA macromolecule, or simply thin wooden sticks attached with flexible
rubber joints. Thus, a spatial polygon of this kind serves as a
mathematical model for some object which is physically knotted yet
retains some of the rigidity inherited from the materials from which
it is built.
It is a classical result of three-dimensional topology that knotted
loops made out of flexible string can always be approximated by
polygonal loops consisting of many thin, rigid segments. Furthermore,
any deformation performed on the string can always be approximated by
a deformation of the polygon, as long as the number of edges is
allowed to increase. However if we insist that the number of edges
remain constant, then we clearly restrict the types of knots that we
can construct. For instance, if we use five or fewer edges, every
loop we build is topologically unknotted; on the other hand we can
build a trefoil or a figure-eight knot if we use six or seven edges,
respectively. See Figure ~\ref{fig:polygons}.
What is not clear is whether we can always mimic a topological
deformation by a deformation of polygons when we place restrictions on
the number of edges. For instance, it is unknown whether we can build
a really complicated polygon which, if it were made out of flexible
string, could be topologically deformed into a round unknot but, if it
were built out of rigid sticks with flexible joints, could not be
flattened out into a planar polygon. In other words, it is an open
question whether there exist topological
unknots which are geometrically knotted.
\begin{figure}[t]
\insertfig{polygons.eps}{1.25in}
\caption{A hexagonal trefoil knot and a heptagonal figure-eight knot.}
\label{fig:polygons}
\end{figure}
As it turns out, it is not always possible to find a geometric isotopy
({\em i.e.} one which keeps the number of edges fixed) between two
polygonal configurations which are topologically equivalent. In fact,
even the case of hexagonal trefoils is nontrivial, as there are
distinct geometric isotopy types, or {\em isotopes}, of this knot. As
a consequence, familiar properties such as reversibility behave
differently when dealing with geometric knots.
One convenient formulation, due to Dick Randell \cite{Randell:conform1,
Randell:conform2}, is obtained by observing the correspondence between
$ n $-sided polygonal loops in Euclidean three-space and points in $
\Field{R}^{3n}. $ Suppose that $ P $ is an $ n $-sided polygon in $ \Field{R}^{3},
$ together with a choice of a ``first vertex'' $ v_{1} $ and an
orientation. By listing the coordinates of each vertex in sequence,
we obtain a point $ (x_{1}, y_{1}, z_{1}, x_{2}, y_{2}, z_{2}, \ldots,
x_{n}, y_{n}, z_{n}) \in \Field{R}^{3n} $ which we associate with $ P =
\langle v_{1}, v_{2}, \ldots, v_{n} \rangle. $ As in the theory of
Vassiliev invariants, let the {\em discriminant} $ \Sigma^{(n)} $ be
the set of all points in $ \Field{R}^{3n} $ which correspond to polygons with
self-intersections. If $ n > 3, $ this discriminant is the union of $
\frac{1}{2} n (n - 3) $ pieces, each of which corresponds to the set
of polygons with an intersecting pair of non-adjacent edges. For
instance, the subset in $ \Sigma^{(n)} $ consisting of polygons for
which the edges $ v_{1}v_{2} $ and $ v_{3}v_{4} $ intersect can be
described as the collection of polygons for which:
\begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})}
\item the vertices $ v_{1}, v_{2}, v_{3}, $ and $ v_{4} $ are coplanar,
\item the line determined by $ v_{1} $ and $ v_{2} $ separates $ v_{3} $
from $ v_{4} , $ and
\item the line determined by $ v_{3} $ and $ v_{4} $ separates $ v_{1} $
from $ v_{2} . $
\end{enumerate}
Note that this set corresponds to the closure of the locus in $
\Field{R}^{3n} $ of the system
\begin{gather*}
(v_{2}-v_{1}) \times (v_{3}-v_{1}) \cdot (v_{4}-v_{1}) = 0 , \\
\bigl[ (v_{2}-v_{1}) \times (v_{3}-v_{1}) \bigr] \cdot
\bigl[ (v_{2}-v_{1}) \times (v_{4}-v_{1}) \bigr] < 0 , \\
\bigl[ (v_{4}-v_{3}) \times (v_{1}-v_{3}) \bigr] \cdot
\bigl[ (v_{4}-v_{3}) \times (v_{2}-v_{3}) \bigr] < 0 .
\end{gather*}
Therefore, each of these pieces is the closure of a codimension one
cubic semi-algebraic variety, {\em i.e.} a hypersurface with boundary.
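In coordinates this description is easy to test numerically. The following Python sketch (ours, with a tolerance standing in for the exact coplanarity condition) transcribes the three conditions for the edge pair $v_1 v_2$, $v_3 v_4$:

```python
import numpy as np

def on_discriminant(v1, v2, v3, v4, eps=1e-9):
    """Conditions for intersecting edges v1v2 and v3v4:
    coplanarity plus the two mutual-separation inequalities."""
    v1, v2, v3, v4 = (np.asarray(v, float) for v in (v1, v2, v3, v4))
    coplanar = abs(np.dot(np.cross(v2 - v1, v3 - v1), v4 - v1)) < eps
    sep12 = np.dot(np.cross(v2 - v1, v3 - v1),
                   np.cross(v2 - v1, v4 - v1)) < 0
    sep34 = np.dot(np.cross(v4 - v3, v1 - v3),
                   np.cross(v4 - v3, v2 - v3)) < 0
    return coplanar and sep12 and sep34

# a crossing pair of segments in the plane z = 0:
print(on_discriminant((0, 0, 0), (2, 0, 0), (1, -1, 0), (1, 1, 0)))  # True
```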
We define the space of geometric knots to be the complement of this
discriminant, $ \mathfrak{Geo}^{(n)} = \Field{R}^{3n} - \Sigma^{(n)}. $ Therefore $
\mathfrak{Geo}^{(n)} $ is a dense open submanifold of $ \Field{R}^{3n} . $ In this
space, points correspond to embedded polygons or {\em geometric
knots}, paths correspond to {\em geometric isotopies}, and
path-components correspond to {\em geometric knot types}.
By a theorem of Whitney \cite{Whitney:varieties}, for any given $ n $
there are only finitely many path-components in $ \mathfrak{Geo}^{(n)} .$ It is
also a well-known ``folk theorem,'' due perhaps to Kuiper, that the
spaces $ \mathfrak{Geo}^{(3)}, \mathfrak{Geo}^{(4)}, $ and $ \mathfrak{Geo}^{(5)} $ are connected.
In \cite{Calvo:thesis, Calvo:hexagons}, I showed that the spaces $
\mathfrak{Geo}^{(6)} $ and $ \mathfrak{Geo}^{(7)} $ have five components each. Contrast
this with the fact that only three topological knot types are
represented in $ \mathfrak{Geo}^{(6)} , $ and that only four topological knot
types are present in $ \mathfrak{Geo}^{(7)} .$ When $ n \ge 8, $ the exact number
of path-components remains unknown. In fact, even the number of
topological knot types represented in the different components of $
\mathfrak{Geo}^{(n)} $ is known only when $ n < 9 .$ The following theorem
summarizes the current status of the classification of geometric knots
with a small number of edges.
\begin{Thm} \label{thm:classification}
(Calvo \cite{Calvo:thesis, Calvo:hexagons})
\begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})}
\item The spaces $ \mathfrak{Geo}^{(3)}, \mathfrak{Geo}^{(4)}, $ and $ \mathfrak{Geo}^{(5)} $ are
path-connected and consist only of unknots.
\item The space $ \mathfrak{Geo}^{(6)} $ of hexagonal knots contains five
path-components. These consist of a single component of unknots, two
components of right-handed trefoils, and two components of left-handed
trefoils.
\item The space $ \mathfrak{Geo}^{(7)} $ of heptagonal knots contains five
path-components. These consist of a single component of unknots and
of each type of trefoil knot, and two components of figure-eight knots.
\item The space $ \mathfrak{Geo}^{(8)} $ of octagonal knots contains at least
twenty path-components. However, the only knots represented in this
space are the unknot, the trefoil knot, the figure-eight knot, every
five and six crossing prime knot $ (5_1, 5_2, 6_1, 6_2, $ and $ 6_3) ,
$ the square and granny knots $ (3_1 \pm 3_1) , $ the (3, 4)-torus
knot $ (8_{19}) , $ and the knot $ 8_{20} . $
\end{enumerate} \end{Thm}
It is important to note that although the deformations obtained as
paths in $ \mathfrak{Geo}^{(n)} $ preserve the polygonal structure of the knot
in question, in general they will not preserve edge length. Let $ f:
\mathfrak{Geo}^{(n)} \rightarrow \Field{R}^{n} $ be the map taking $ P = \langle v_{1},
v_{2}, \ldots, v_{n} \rangle $ to the $ n $-tuple \[
(\norm{v_{1}-v_{2}}, \norm{v_{2}-v_{3}}, \ldots, \norm{v_{n-1}-v_{n}},
\norm{v_{n}-v_{1}}) . \] Then points in the preimage $ \mathfrak{Equ}^{(n)} =
f^{-1}(1, 1, \ldots, 1) $ correspond to {\em equilateral knots} with
unit length edges. Since the point $ (1, 1, \ldots, 1) $ is a regular
value for $ f, $ the space $ \mathfrak{Equ}^{(n)} $ is a $ 2n $-dimensional
submanifold (in fact, a codimension $ n $ quadric hypersurface)
intersecting a number of the components of $ \mathfrak{Geo}^{(n)} , $ some
perhaps more than once. Paths in this submanifold correspond to
geometric isotopies which do preserve edge length, so the
path-components of this space offer yet another notion of knottedness.
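In coordinates the map $ f $ is simply the vector of consecutive edge lengths; a two-line sketch:

```python
import numpy as np

def f(P):
    """Edge-length map on polygons <v1,...,vn>; Equ(n) = f^{-1}(1,...,1)."""
    P = [np.asarray(v, float) for v in P]
    return [float(np.linalg.norm(P[(i + 1) % len(P)] - P[i]))
            for i in range(len(P))]
```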
In his original papers on molecular conformation spaces
\cite{Randell:conform1, Randell:conform2}, Randell shows that if $ n
\le 5 $ then $ \mathfrak{Equ}^{(n)} $ is connected. The case when $ n = 6 $ had
remained virtually untouched for ten years, except for work by Kenneth
Millett and Rosa Orellana showing that $ \mathfrak{Equ}^{(6)} $ contains a
single component of topological unknots.~\!\!\footnote{\, Their
unpublished result is mentioned in Proposition 1.2 of
\cite{Millett:random.knot}.} By focusing attention on a special case
of singular ``almost knotted'' hexagons, \cite{Calvo:hexagons} shows
that two hexagons are equilaterally equivalent exactly when they are
geometrically equivalent. Thus $ \mathfrak{Equ}^{(6)} $ intersects each
component of $ \mathfrak{Geo}^{(6)} $ exactly once. Furthermore,
\cite{Calvo:hexagons} shows that this correspondence of
path-components is not uninteresting, as the inclusion $ \mathfrak{Equ}^{(6)}
\hookrightarrow \mathfrak{Geo}^{(6)} $ has a nontrivial kernel at the level of
fundamental group. In fact, if $ \mathcal{T} $ is a component of
trefoils in $ \mathfrak{Geo}^{(6)} ,$ then $ \pi_{1}(\mathcal{T}) = \Field{Z}_{2} $
while $ \pi_{1}(\mathcal{T} \cap \mathfrak{Equ}^{(6)}) $ contains an infinite
cyclic subgroup.
This paper presents two key ingredients from \cite{Calvo:thesis,
Calvo:hexagons} used to obtain Theorem~\ref{thm:classification}. In
Section~\ref{sec:stratify}, we discuss a method of decomposing $
\mathfrak{Geo}^{(n)} $ into three-dimensional fibres or ``strata.'' This method
proved particularly useful in the analysis of $ \mathfrak{Geo}^{(6)} $ and $
\mathfrak{Geo}^{(7)} .$ Then Section ~\ref{sec:project} describes an upper bound
on the minimal crossing number of the knot realized by an $ n $-sided
polygon. This bound, which is obtained by looking at a particular
projection of the polygon into a sphere, improves the one previously known
by a linear term and provides enough control to classify the
topological knot types present in $ \mathfrak{Geo}^{(8)} . $
\section{A Stratification of Geometric Knot Spaces} \label{sec:stratify}
Consider the map $ g $ with domain $ \mathfrak{Geo}^{(n)} $ which ``forgets''
the last vertex of a polygon, mapping
\[ P = \langle v_{1}, v_{2}, v_{3}, \ldots, v_{n-1}, v_{n} \rangle
\mapsto g(P) = \langle v_{1}, v_{2}, v_{3}, \ldots, v_{n-1} \rangle .\]
Notice that a generic polygon in $ \mathfrak{Geo}^{(n)} $ will map to an
embedded polygon in $ \mathfrak{Geo}^{(n-1)} ; $ the only polygons which do not
are the ones for which some part of the linkage $ v_{1} v_{2} \ldots
v_{n-1} $ passes through the line segment between $ v_{1} $ and $
v_{n-1}, $ and these polygons form a codimension one subset of $
\mathfrak{Geo}^{(n)} .$ In particular, since $ \mathfrak{Geo}^{(n)} $ is a manifold, every
$ n $-sided polygon can be perturbed by just a tiny amount so that its
image under $ g $ lies in $ \mathfrak{Geo}^{(n-1)} . $
Suppose that $ Q $ is an $ (n-1) $-sided polygon in $ \mathfrak{Geo}^{(n-1)}. $
Then the preimage $ g^{-1}(Q) $ will be a three-dimensional manifold,
homeomorphic to the set of valid $ n $th vertices for $ Q. $ This
divides $ \mathfrak{Geo}^{(n)} $ into three-dimensional slices or {\em strata}.
As $ Q $ varies over $ \mathfrak{Geo}^{(n-1)}, $ the corresponding
three-dimensional stratum $ g^{-1}(Q) $ will vary.  By observing how
these strata change, we can obtain useful information about $
\mathfrak{Geo}^{(n)} .$
\begin{figure}[t]
\insertfig{frontview.eps}{2.6in}
\caption{One possible sixth vertex for the pentagon $ Q .$}
\label{fig:frontview}
\end{figure}
For example, consider the pentagon $ Q = \langle v_1, v_2, v_3, v_4,
v_5 \rangle $ with coordinates
\begin{gather*}
\langle (0, 0, 0), (.886375, .276357, .371441), \\
(.125043, -.363873, .473812), \\
(.549367, .461959, .845227), (.818041, 0, 0) \rangle
\end{gather*}
shown in Figure ~\ref{fig:frontview}. Suppose that we replace the
edge between $ v_5 $ and $ v_1 $ with a pair of new edges, from $ v_5
$ to some new vertex $ v_{6} \in \Field{R}^3 $ and from this vertex back to $
v_1 .$ This creates a hexagon which, with a bit of care in choosing $
v_{6}, $ will also be embedded in $ \Field{R}^3 . $ For instance, if we
place the new vertex at $ (.4090205, 0, -.912525), $ we obtain an
unknotted hexagon. On the other hand, placing $ v_{6} $ at $
(.4090205, -.343939, .845227), $ gives a hexagon which is knotted as a
right-handed trefoil. See Figure ~\ref{fig:frontview}. The preimage
$ g^{-1}(Q) \subset \mathfrak{Geo}^{(6)} $ is homeomorphic to the dense open subset
of $ \Field{R}^3 $ consisting of ``valid'' sixth vertices for $ Q .$
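For numerical experiments with such strata one needs a test for whether a candidate sixth vertex yields an embedded hexagon. Exact edge intersections have measure zero in floating point, so a practical sketch (ours, not the paper's) instead thresholds the minimum distance between non-adjacent edges, using the standard clamped-projection formula for segment-segment distance:

```python
import numpy as np

def seg_dist(p1, q1, p2, q2):
    """Minimum distance between segments p1q1 and p2q2 (non-degenerate)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = float(np.clip((b * f - c * e) / denom, 0, 1)) if denom > 1e-15 else 0.0
    t = (b * s + f) / e
    if t < 0:
        t, s = 0.0, float(np.clip(-c / a, 0, 1))
    elif t > 1:
        t, s = 1.0, float(np.clip((b - c) / a, 0, 1))
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def is_embedded(P, tol=1e-6):
    """True if no two non-adjacent edges of the closed polygon P come
    within `tol` of each other."""
    P = [np.asarray(v, float) for v in P]
    n = len(P)
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:      # these edges are adjacent around the loop
                continue
            if seg_dist(P[i], P[(i + 1) % n], P[j], P[(j + 1) % n]) < tol:
                return False
    return True
```

Both placements of $ v_6 $ given above should pass this test, yielding the unknotted and trefoil hexagons respectively.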
To examine which points in $ \Field{R}^3 $ correspond to embedded hexagons
obtained from $ Q, $ we will think of the $ x $-axis as a ``central
axis'' in this space and consider the collection of half-planes
radiating from this axis. We refer to these as {\em standard
half-planes}. These half-planes appear as rays from the origin in
Figure ~\ref{fig:sideview}, which shows the projection of $ Q $ into
the $ yz $-plane.
Let $ \mathcal{P}_{2}, \mathcal{P}_{3}, $ and $ \mathcal{P}_{4} $ be
the standard half-planes containing $ v_{2}, v_{3}, $ and $ v_{4}, $
respectively. Thus
\begin{gather*}
\mathcal{P}_{2} = \{ y = \tfrac{276357}{371441} z \approx .744 z,
\; z > 0 \} , \\
\mathcal{P}_{3} = \{ y = - \tfrac{363873}{473812} z \approx -.768 z,
\; z > 0 \} , \\
\mathcal{P}_{4} = \{ y = \tfrac{461959}{845227} z \approx .547 z,
\; z > 0 \} ,
\end{gather*}
as shown in Figure ~\ref{fig:sideview}.
\begin{figure}[t]
\insertfig{sideview.eps}{2.6in}
\caption{Projection of pentagon $ Q $ into $ yz $-plane.}
\label{fig:sideview}
\end{figure}
Notice that the interior of any standard half-plane to the left of $
\mathcal{P}_{2} $ and $ \mathcal{P}_{3} $ will miss $ Q $ altogether.
Thus, any point in the interior of any of these half-planes may be used
as a sixth vertex for a hexagon. Every other standard half-plane,
however, does intersect $ Q $ at one or more interior points, so these
half-planes will contain some points which correspond to hexagons with
self-intersections.
The interior of any standard half-plane between $ \mathcal{P}_{2} $
and $ \mathcal{P}_{4} $ will intersect $ Q $ only once, in its second
edge. Depending on which point of this plane we choose for the new
vertex $ v_{6} , $ the two-edge linkage $ v_{5} v_{6} v_{1} $ will
either dip underneath or jump over this edge. If $ v_{5} v_{6} v_{1}
$ goes under $ v_{2} v_{3}, $ then $ v_{6} $ can be dragged back to
the $ x $-axis, say to the midpoint of edge $ v_{1}v_{5}, $ giving an
isotopy of the resulting hexagon back to the unknotted loop realized
by the pentagon $ Q .$ However, if $ v_{5} v_{6} v_{1} $ loops above
the edge $ v_{2}v_{3} , $ then this edge will obstruct any isotopy of
the hexagon which attempts to push $ v_{6} $ down towards the $ x
$-axis in this plane. For instance, $ Q $ crosses the half-plane $ \{
y = .6z, \; z > 0 \} $ at the point $ (.828333, .227547, .379246) . $
Vertices collinear with $ (.828333, .227547, .379246) $ and $ v_{1} $
correspond to embedded hexagons only when they lie between these
points; otherwise the second and sixth edges of the resulting hexagon
will cross each other. Similarly, vertices collinear with $ (.828333,
.227547, .379246) $ and $ v_{5} $ which do not lie between these two
points correspond to hexagons with intersecting second and fifth
edges. Therefore, points in the rays beginning at $ (.828333,
.227547, $ $ .379246) $ and radiating away from either $ v_{1} $ or $
v_{5} $ do not correspond to embedded hexagons, and the half-plane is
cut into two regions by a ``V''-shaped discriminant. See Figure
~\ref{fig:v.discrim}(a). If $ v_{6} $ is placed in the region of this
half-plane labelled {\em i}, then the pair of new edges will dip under
the edge $ v_{2}v_{3} $ and the resulting hexagon will be isotopic to
$ Q .$ Alternatively, if $ v_{6} $ is placed in the region labelled
{\em ii}, then $ v_{5} v_{6} v_{1} $ will jump over this edge.
\begin{figure}[t]
\insertfig{vdiscrim.eps}{1.7in}
\centering{{\small (a) $ \{ y = .6z, \; z > 0 \} $ \hspace{46mm}
(b) $ \{ y = 0, \; z > 0 \} $ \hspace{4mm}}}
\caption{$ Q $ separates each half-plane by ``V''-shaped discriminants.}
\label{fig:v.discrim}
\end{figure}
Now, the interior of every standard half-plane between $
\mathcal{P}_{4} $ and $ \mathcal{P}_{3} $ intersects $ Q $ in two
points, in the interior of its second and third edges. As before,
these edges will form obstructions to a homotopy moving $ v_{6} $ in
this plane. Therefore, for each of the points through which these
edges cross the half-plane, there will be a ``V''-shaped discriminant
as above. For example, in the half-plane $ \{ y = 0, \; z > 0 \} , $
which intersects $ Q $ at the points $ (.557744, 0, .415630) $ and $
(.312006, 0, .637463) , $ vertices in the four rays beginning at
either of these points and radiating away from $ v_{1} $ and $ v_{5} $
correspond to hexagons with self-intersections. These two
``V''-shaped discriminants separate the half-plane into four regions,
arranged as in Figure ~\ref{fig:v.discrim}(b). As before, placing the
new vertex in each of these regions corresponds to looping the new
edges of the hexagon over either the second ({\em vi}) or third ({\em
iv}) edge of $ Q , $ or both ({\em v}), or neither ({\em iii}) of
these.
We can show that the arrangement of the ``V''-shaped discriminants
remains relatively unchanged for standard half-planes in each of these
intervals. In fact, the connected components of the half-planes in
Figure ~\ref{fig:v.discrim} are only cross-sectional slices of
``cylindrical sectors'' of $ g^{-1}(Q) $ which wrap around the $ x
$-axis. Denote these sectors as {\em i}, {\em ii}, {\em iii}, {\em
iv}, {\em v}, and {\em vi}, using the notation in Figure
~\ref{fig:v.discrim}. Furthermore, let {\em o} denote the sector of $
g^{-1}(Q) $ corresponding to vertices in half-planes which do not
intersect $ Q $ at all. Then the way in which these sectors are glued
together depends on the behavior of the discriminants at the three
``critical level'' standard half-planes $ \mathcal{P}_{2} ,
\mathcal{P}_{3} , $ and $ \mathcal{P}_{4} .$
\begin{figure}[b]
\insertfig{p2discrim.eps}{1.54in}
\caption{Critical level $ \mathcal{P}_{2} = \{ y =
\tfrac{276357}{371441} z \approx .744 z, \; z > 0 \}.$ }
\label{fig:chp.discrim.a}
\end{figure}
The first of these half-planes, $ \mathcal{P}_{2}, $ contains the
first edge of $ Q , $ which connects $ v_{1} = (0,0,0) $ to $ v_{2} =
(.886375, .276357, .371441) . $ Vertices in rays beginning at any
point in this edge and radiating away from $ v_{5} $ correspond to
hexagons with intersecting first and fifth edges. Hence, for each
point in this edge there is a ``V''-shaped discriminant. The union of
these discriminants forms a two-dimensional discriminant corresponding
to an obstruction in the space $ g^{-1}(Q) .$ See Figure
~\ref{fig:chp.discrim.a}. However, this obstruction only
partially blocks access to sector {\em i}. Therefore both {\em i} and
{\em ii} are glued to {\em o} at this half-plane.
A similar two-dimensional discriminant occurs for $ \mathcal{P}_{4} .$
This half-plane contains the fourth edge of $ Q , $ which joins $
v_{4} = (.549367, .461959, .845227) $ and $ v_{5} = (.818041, $ $ 0,
0).$ Vertices collinear with the origin and any point $ p $ on this
edge correspond to embedded hexagons only if they lie between $ (0, 0,
0) $ and $ p .$ See Figure ~\ref{fig:chp.discrim.b}. This
discriminant completely closes off sector {\em vi}, and obstructs
parts of sectors {\em i}, {\em ii}, and {\em iii}. Thus, at this
level, {\em i} is attached to {\em iii} and {\em iv}, {\em ii} is
attached to {\em v}, and {\em vi} is abruptly terminated.
\begin{figure}[h]
\insertfig{p4discrim.eps}{2.13in}
\caption{Critical level $ \mathcal{P}_{4} = \{ y =
\tfrac{461959}{845227} z \approx .547 z, \; z > 0 \}.$}
\label{fig:chp.discrim.b}
\end{figure}
The third critical level half-plane, $ \mathcal{P}_{3}, $ presents a
different situation, as it intersects $ Q $ only at the vertex $ v_{3}
= (.125043, -.363873, .473812) .$ In this case, the ``V''-shaped
discriminants corresponding to the second and third edges of $ Q $
come together as the two edges become incident at their common vertex.
As the discriminants merge, sectors {\em iv} and {\em vi} are
terminated, while both of the sectors {\em iii} and {\em v} merge with
sector {\em o}.
Figure ~\ref{fig:cylinder} presents a cylindrical section of $ \Field{R}^3 $
about the $ x $-axis, showing the sectors of $ g^{-1}(Q) $ and the
connections between them. In particular, it shows that $ g^{-1}(Q) $
consists of two disjoint path-components, corresponding to the two
knot types possible for hexagons in the stratum $ g^{-1}(Q) $: the
unknot and the right-handed trefoil.
\begin{figure}[t]
\insertfig{cylinder.eps}{2in}
\caption{Cylindrical section of $ \Field{R}^{3} $ showing the different
interconnecting sectors of $ g^{-1}(Q) .$}
\label{fig:cylinder}
\end{figure}
The key feature characterizing a pentagon's corresponding stratum in $
\mathfrak{Geo}^{(6)} $ is the relative position of the second, third, and fourth
vertices with respect to the axis through the other two vertices.
Suppose that $ Q = \langle v_1, v_2, v_3, v_4, v_5 \rangle $ is an
arbitrary pentagon in $ \mathfrak{Geo}^{(5)}, $ and that $ \mathcal{L} $ is the
line determined by $ v_1 $ and $ v_5 .$ Since $ \mathfrak{Geo}^{(5)} $ is a
manifold, we can perturb $ Q $ slightly, if necessary, to ensure that
$ v_1 v_5 $ is the only edge of $ Q $ which intersects $ \mathcal{L}.$
As above, let $ \mathcal{P}_{2}, \mathcal{P}_{3}, $ and $
\mathcal{P}_{4} $ be the half-planes with boundary $ \mathcal{L} $
which contain $ v_2, v_3, $ and $ v_4 , $ respectively. Again, a
slight deformation will make $ Q $ a generic pentagon, guaranteeing
that the three $ \mathcal{P}_{i} $'s are distinct.
As in the example above, the $ \mathcal{P}_{i} $'s will divide $
\Field{R}^{3} $ into three open regions, with $ Q $ intersecting two of these
and completely missing the third. As we rotate in a right-handed
fashion about the axis $ \mathcal{L} , $ beginning in the region which
misses $ Q , $ we will encounter each of the $ \mathcal{P}_{i} $'s in
one of six orders. For example, in the pentagon shown in Figures
~\ref{fig:frontview} and \ref{fig:sideview}, these half-planes appear
in the order $ \mathcal{P}_2 - \mathcal{P}_4 - \mathcal{P}_3 , $ or
simply, 2-4-3.
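For a pentagon normalized as in this example (with $ v_1 $ and $ v_5 $ on the $ x $-axis and $ v_2, v_3, v_4 $ projecting into the upper half $ z > 0 $ of the $ yz $-plane, so that the region missing $ Q $ contains $ z < 0 $), the label can be read off by sorting angles; a small sketch of ours:

```python
import numpy as np

def region_label(Q):
    """Q = [v1,...,v5], normalized as described in the lead-in: the
    half-planes P2, P3, P4 are then met in order of increasing angle
    from the positive y-axis."""
    angles = {i: np.arctan2(Q[i - 1][2], Q[i - 1][1]) for i in (2, 3, 4)}
    return "-".join(str(i) for i in sorted(angles, key=angles.get))

Q = [(0, 0, 0), (.886375, .276357, .371441), (.125043, -.363873, .473812),
     (.549367, .461959, .845227), (.818041, 0, 0)]
print(region_label(Q))   # -> 2-4-3
```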
Let $ H \in \mathfrak{Geo}^{(6)} $ be a generic hexagon embedded in $ \Field{R}^{3} .$
By considering the order in which the $ \mathcal{P}_{i} $'s associated
with $ g(H) $ occur, we divide $ \mathfrak{Geo}^{(6)} $ into six open regions
meeting along codimension one sets where either:
\begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})}
\item two of the $ \mathcal{P}_{i} $'s coincide, or
\item an edge of $ H $ crosses $ \mathcal{L} .$
\end{enumerate}
By analyzing the behavior of the strata $ g^{-1}(g(H)) $ as they
interconnect, we obtain Table ~\ref{tab:hexagons}, which indicates the
number of path-components in each of the six regions of $ \mathfrak{Geo}^{(6)} ,
$ arranged by the topological knot type they represent.
\begin{table}[ht]
\caption{Number of components in each region of $ \mathfrak{Geo}^{(6)} .$ }
\label{tab:hexagons}
\begin{center}
\begin{tabular}{|ccccc|}
\hline
$ \mathbf{Region \, of \, \mathfrak{Geo}^{(6)}} $ & \; &
$ \quad \mathbf{0} \quad $ &
$ \quad \mathbf{3_1} \quad $ &
$ \; \, \mathbf{- 3_1} \; \, $ \\
\hline
2-3-4 && 1 & - & - \\
2-4-3 && 1 & 1 & - \\
3-2-4 && 1 & 1 & - \\
3-4-2 && 1 & - & 1 \\
4-2-3 && 1 & - & 1 \\
4-3-2 && 1 & - & - \\
\hline
\end{tabular}
\end{center}
\end{table}
As noted above, the six regions of $ \mathfrak{Geo}^{(6)} $ meet along
codimension one subsets consisting of hexagons for which two of the $
\mathcal{P}_i $'s coincide. For instance, regions 2-4-3 and 4-2-3
meet along a subset consisting of hexagons with $ \mathcal{P}_2 =
\mathcal{P}_4 .$ The six regions also meet along codimension one
subsets consisting of hexagons that intersect line $ \mathcal{L} . $
For example, regions 2-4-3 and 4-3-2 meet along a set of hexagons for
which edge $ v_2 v_3 $ intersects this line. These connections are
shown schematically in Figure~\ref{fig:connections}; here solid lines
represent hexagons with two coinciding $ \mathcal{P}_i $'s while gray
lines represent hexagons for which some edge intersects line $ v_1 v_5
.$
\begin{figure}[t]
\insertfig{connect.eps}{2in}
\caption{Codimension one connections between regions.}
\label{fig:connections}
\end{figure}
Consider a hexagon $ H $ in the common boundary between two regions of
$ \mathfrak{Geo}^{(6)} .$ Since $ H $ can be perturbed slightly to make generic
hexagons of either type, $ H $ must be of a topological knot type
common to both regions. However, the only knot type common to
adjacent regions in Figure~\ref{fig:connections} is the unknot.
Therefore hexagons in these codimension one subsets must be unknotted
and, in particular, the topological unknots form a single component of
geometric unknots in $ \mathfrak{Geo}^{(6)} .$
On the other hand, suppose that $ h: [0, 1] \rightarrow \mathfrak{Geo}^{(6)} $
is a path from some trefoil of type 2-4-3 to some trefoil of type
3-2-4. Since $ \mathfrak{Geo}^{(6)} $ is an open subset of $ \Field{R}^{18} , $ there
is a small open 18-ball contained in $ \mathfrak{Geo}^{(6)} $ about each point
in this path. Thus we can assume that whenever $ h $ passes through a
boundary of one of the six regions, it does so through a generic point
in one of the codimension one subsets above. But then $ h $ must pass
through either 2-3-4 or 4-3-2; see Figure~\ref{fig:connections}. This
is a contradiction since only unknots live in these regions. Thus
there is no path connecting the trefoils of type 2-4-3 and those of
type 3-2-4. Similarly, there is no path between the type 4-2-3 and
3-4-2 trefoils. This proves that $ \mathfrak{Geo}^{(6)} $ consists of five
path-components: one consisting of unknots, two of right-handed
trefoils, and two of left-handed trefoils.
The geometric knot types in $ \mathfrak{Geo}^{(6)} $ are completely
characterized by a pair of combinatorial invariants which capture a
hexagon's topological chirality ({\em i.e.} right- or left-handedness)
and geometric curl ({\em i.e.} ``upward'' or ``downward'' twisting),
and are easily computed from the coordinates of a hexagon's vertices.
To define these invariants, let $ H = \langle v_{1}, v_{2}, v_{3},
v_{4}, v_{5}, v_{6} \rangle $ be an embedded hexagon in $ \Field{R}^{3} ,$
and consider the open triangular disc determined by vertices $ v_{1},
v_{2}, $ and $ v_{3}.$ This disc inherits an orientation from $ H $
via the ``right hand rule.'' Let $ \Delta_{2} $ be the algebraic
intersection number of the hexagon and this triangle. Notice that the
triangular disc can only be pierced by edges $ v_{4}v_{5} $ and $
v_{5}v_{6}.$ Furthermore, if both of these edges intersect the disc,
they will do so in opposite directions, with their contributions to $
\Delta_{2} $ canceling out. Thus $ \Delta_{2} $ takes on a value of
0, 1, or $ -1 .$ Similarly, define $ \Delta_{4} $ and $ \Delta_{6} $
to be the intersection numbers of $ H $ with the triangles $ \triangle
v_{3}v_{4}v_{5} $ and $ \triangle v_{5}v_{6}v_{1}, $ respectively. By
considering the possible values for the $ \Delta_{i} $'s (see Lemma 8
in \cite{Calvo:hexagons}), we can show that
\begin {enumerate} \renewcommand {\labelenumi} {(\roman{enumi})}
\item $ H $ is a right-handed trefoil if and only if $ \Delta_2 =
\Delta_4 = \Delta_6 = 1 , $
\item $ H $ is a left-handed trefoil if and only if $ \Delta_2 =
\Delta_4 = \Delta_6 = -1 , $ and
\item $ H $ is an unknot if and only if $ \Delta_i = 0 $ for some $ i
\in \{2, 4, 6 \} ,$
\end{enumerate}
implying that the product
\begin{equation} \label{eq:chirality}
\Delta(H) = \Delta_{2}\Delta_{4}\Delta_{6},
\end{equation}
which we call the {\em chirality} of $ H $, is an invariant under
geometric deformations.
Next, we define the {\em curl} of $ H $ as
\begin{equation} \label{eq:curl}
\Curl H = \Sign \bigl( (v_3 - v_1) \times (v_5 - v_1) \cdot (v_2 -
v_1) \bigr) .
\end{equation}
This gives the sign of the $ z $-coordinate of $ v_2 $ when we rotate
$ H $ so that $ v_1, v_3, $ and $ v_5 $ are placed on the $ xy $-plane
in a counterclockwise fashion, and therefore measures in some sense
whether a hexagon twists up or down. Consider a path $ h: [0, 1]
\rightarrow \mathfrak{Geo}^{(6)} $ which changes the curl of a hexagonal trefoil
from +1 to -1. Then there must be some point on this path for which
the vector triple product in (\ref{eq:curl}) is equal to zero. At
this point, the vertices $ v_1, v_2, v_3, $ and $ v_5 $ are all
coplanar. However, we can show such a hexagon must be unknotted,
giving us a contradiction. In particular, the product $ \Delta^{2}(H)
\, \Curl H $ is also an invariant under geometric deformations. A
simple calculation then shows that every trefoil of type 2-4-3 or
4-2-3 has positive curl, while every one of type 3-2-4 or 3-4-2 has
negative curl.
\begin{Thm} \label{thm:jcc}
Define the joint chirality-curl of a hexagon $ H $ as the ordered pair
$ \mathcal{J} (H) = (\Delta(H), \Delta^{2}(H) \, \Curl H) .$ Then
\begin{equation} \label{eq:jcc.values}
\mathcal{J} (H) = \begin{cases} ( 0, 0 ) & \text{iff} \, H \, \text{is an
unknot,} \\
( +1, c ) & \text{iff} \, H \, \text{is a right-handed trefoil with}
\, \Curl H = c, \\
( -1, c ) & \text{iff} \, H \, \text{is a left-handed trefoil with} \,
\Curl H = c.
\end{cases}
\end{equation}
Therefore the geometric knot type of a hexagon $ H $ is completely
determined by the value of its chirality and curl.
\end{Thm}
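The joint chirality-curl is readily computed from coordinates. The following Python sketch (ours; the signed crossings are computed with a standard segment-triangle intersection test) implements $ \Delta(H) $ and $ \Curl H $:

```python
import numpy as np

def signed_crossing(p, q, a, b, c, eps=1e-12):
    """Signed intersection number of segment pq with the open triangular
    disc abc, oriented by the normal (b - a) x (c - a)."""
    p, q, a, b, c = (np.asarray(u, float) for u in (p, q, a, b, c))
    n = np.cross(b - a, c - a)
    sp, sq = np.dot(n, p - a), np.dot(n, q - a)
    if sp * sq >= 0:                        # no transversal crossing of the plane
        return 0
    x = p + (sp / (sp - sq)) * (q - p)      # point where pq meets the plane
    for u, w in ((a, b), (b, c), (c, a)):   # strict inside-the-triangle test
        if np.dot(np.cross(w - u, x - u), n) <= eps:
            return 0
    return 1 if sq > 0 else -1

def joint_chirality_curl(H):
    """H = [v1,...,v6].  Returns (Delta(H), Delta(H)^2 * Curl(H))."""
    v = [np.asarray(u, float) for u in H]
    edges = [(v[i], v[(i + 1) % 6]) for i in range(6)]
    def Delta(i, j, k):                     # triangle on 1-based vertex indices
        return sum(signed_crossing(p, q, v[i - 1], v[j - 1], v[k - 1])
                   for p, q in edges)
    D = Delta(1, 2, 3) * Delta(3, 4, 5) * Delta(5, 6, 1)
    curl = int(np.sign(np.dot(np.cross(v[2] - v[0], v[4] - v[0]), v[1] - v[0])))
    return D, D * D * curl
```

For the trefoil hexagon constructed earlier (the pentagon $ Q $ with sixth vertex $ (.4090205, -.343939, .845227) $), the theorem together with the positive curl of region 2-4-3 predicts the output $ (+1, +1) $.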
Before leaving the world of hexagons behind, let us make one last
observation. Recall that the construction of $ \mathfrak{Geo}^{(n)} $ depends
on a choice of a ``first vertex'' $ v_1 $ and an orientation. This
amounts to choosing a sequential labeling $ v_1, v_2, \ldots, v_n $
for the vertices of each polygon. A different choice of labels will
lead to a different point in $ \mathfrak{Geo}^{(n)} $ corresponding to the same
underlying polygon. Thus the dihedral group $ \mathbf{D}_n $ of order
$ 2n $ acts on $ \mathfrak{Geo}^{(n)} $ by shifting or reversing the order of
these labels, and this action preserves topological knot type.
Observation of the effects on $ \Curl H $ by the group action of $
\mathbf{D}_6 $ on $ \mathfrak{Geo}^{(6)} $ reveals that the same statement does not
hold true for geometric knot type. In particular, if the group action
is defined by the automorphisms
\begin{align*}
r : \langle v_1, v_2, v_3, v_4, v_5, v_6 \rangle & \mapsto \langle v_1, v_6,
v_5, v_4, v_3, v_2 \rangle , \\
s : \langle v_1, v_2, v_3, v_4, v_5, v_6 \rangle & \mapsto \langle v_2, v_3,
v_4, v_5, v_6, v_1 \rangle ,
\end{align*}
then
\[ \Curl r H = \Curl s H = - \Curl H . \]
This shows that the hexagonal trefoil knot is not reversible: In
contrast with trefoils in the topological setting, reversing the
orientation on a hexagonal trefoil yields a different geometric knot.
Furthermore, shifting the labels over by one vertex also changes the
knot type of a trefoil, so that taking quotients under this action
we can see that the spaces $ \mathfrak{Geo}^{(6)} / \langle s \rangle $
of non-based, oriented hexagons, and $ \mathfrak{Geo}^{(6)} / \langle r, s \rangle $
of non-based, non-oriented hexagons consist of only three
components each.
A similar decomposition can be made for the space $ \mathfrak{Geo}^{(7)} $ of
heptagons. In this case we consider the relative ordering of the
half-planes $ \mathcal{P}_{2}, \mathcal{P}_{3}, \mathcal{P}_{4}, $ and
$ \mathcal{P}_{5} $ bounded by the line through $ v_{1} $ and $ v_{6}.
$ This defines 24 open regions which meet along codimension one
subsets where two of the $ \mathcal{P}_i $'s coincide. These
junctions can be schematically described as switches in the indices
denoting the regions. For instance, regions 2-4-3-5 and 4-2-3-5 meet
along a subset consisting of heptagons with $ \mathcal{P}_2 =
\mathcal{P}_4 . $ We can build a model for these connections by
taking a vertex for each of the 24 regions and an edge for each
codimension-1 subset joining them. The result is a valence-3 graph
which forms the 1-dimensional skeleton of a solid zonotope called a
{\em permutahedron}, shown in Figure ~\ref{fig:permutahedron}. Each
vertex of the permutahedron is part of a unique square face
corresponding to the order-4 sequence of index switches in which the
first two indices and the last two indices are switched in an
alternating fashion. In addition, each vertex is part of two distinct
hexagonal faces which correspond to the order-6 switch sequences in
which either the first or last index is fixed while the other three
indices are permuted through all six possible orderings. Therefore
the valence-3 permutahedron has six square faces and eight hexagonal
faces. Extending the edges shared by any two hexagonal faces shows
that this is nothing more than a truncated octahedron, also known in
crystallography as a Fedorov cubo-octahedron. ~\!\!\footnote{\, The
cubo-octahedron is a {\em parallelohedron}, that is, a crystalline
shape having parallel opposite faces with which three-space can be tiled.
One should not confuse Fedorov's cubo-octahedron with Kepler's
cuboctahedron, which is built from an octahedron by truncating at the
midpoint (rather than at the one- and two-third points) of each edge
and thus consists of six squares and eight triangular faces. See
pp.17 -- 18 in \cite{Ziegler:polytopes} and pp. 722 -- 723 in
\cite{Tutton:crystals}.}
\begin{figure}[t]
\insertfig{permuta.eps}{2in}
\caption{The valence 3 permutahedron.}
\label{fig:permutahedron}
\end{figure}
With a few additional considerations, ~\!\!\footnote{\, The interested
reader is referred to pp. 53 -- 56 in \cite{Calvo:thesis}.} the
analysis of the strata over each of these regions shows that $
\mathfrak{Geo}^{(7)} $ has a single path-component of unknots and of each
topological type of trefoil, and two containing figure-eight knots.
Again, these figure-eight knots are new examples of distinct geometric
isotopes of the same topological knot, which can be distinguished by a
geometric invariant $ \Xi, $ defined as follows.
Suppose that $ H $ is the heptagon $ \langle v_1, v_2, v_3, v_4, v_5,
v_6, v_7 \rangle. $ Define the functions $ \Theta_3 (H) $ and $
\Theta_6 (H) $ as
\begin{equation}
\begin{aligned}
\Theta_3 &= \Sign \bigl( (v_7 - v_1) \times (v_2 - v_1) \cdot (v_3 - v_1) \bigr), \\
\Theta_6 &= \Sign \bigl( (v_6 - v_1) \times (v_7 - v_1) \cdot (v_2 - v_1) \bigr).
\end{aligned}
\end{equation}
Then $ \Theta_3 = \Theta_6 $ if the vertices $ v_3 $ and $ v_6 $ lie
on the same side of the plane $ \mathcal{P} $ determined by $ v_7,
v_1, $ and $ v_2 , $ and $ \Theta_3 = - \Theta_6 $ if $ v_3 $ and $
v_6 $ lie on different sides of $ \mathcal{P} . $ Notice that
for a generic heptagon, exactly one of the functions $ \frac{1}{2}
(\Theta_3 + \Theta_6) $ and $ \frac{1}{2} (\Theta_3 - \Theta_6) $ is
zero, while the other is $ \pm 1 . $
Let $ I_{34} $ denote the algebraic intersection number of edge $ v_3
v_4 $ with the triangular disc $ \triangle v_{7}v_{1}v_{2}, $ using
the usual orientations induced by $ H . $ Similarly define $ I_{45} $
and $ I_{56} $ as the intersection numbers of the triangle $ \triangle
v_{7}v_{1}v_{2} $ with the edges $ v_4 v_5 $ and $ v_5 v_6 , $
respectively.
If $ H $ has $ \Theta_3 = \Theta_6 , $ then $ v_3 $ and $ v_6 $ lie on
the same side of the plane $ \mathcal{P} $ so that the three-edge
linkage $ v_3 v_4 v_5 v_6 $ will intersect $ \mathcal{P} $ at most
twice. Furthermore, if both of these intersections happen in the
interior of $ \triangle v_{7}v_{1}v_{2}, $ they occur with opposite
orientations. Thus the sum $ I_{34} + I_{45} + I_{56} $ only takes on
values -1, 0, or 1.
On the other hand, suppose that $ H $ is a figure-eight knot with $
\Theta_3 = - \Theta_6 . $ Then $ v_3 $ and $ v_6 $ lie on the
opposite sides of the plane $ \mathcal{P} $ so that the three-edge
linkage $ v_3 v_4 v_5 v_6 $ intersects $ \mathcal{P} $ an odd number
of times. First, suppose that there is only one intersection; then
the linkage $ v_7 v_1 v_2 $ can be piecewise linearly isotoped into a
straight line segment. We can think of this isotopy as either pushing
$ v_{1} $ in a straight line path towards the midpoint of the line
segment $ v_{2} v_{7}, $ or (in the case that the intersection occurs
inside $ \triangle v_{7}v_{1}v_{2} $) as stretching $ v_7 v_1 v_2 $
into a large loop, swinging it like a ``jump rope'' around and to the
other side of the heptagon, and then pushing it in until it coincides
with the line segment $ v_{2} v_{7} . $ See Figure
~\ref{fig:jumprope}. In either case we get a hexagonal realization of
a figure-eight knot. Since this is impossible, the linkage $ v_3 v_4
v_5 v_6 $ has to cross the plane $ \mathcal{P} $ three times, and in
particular, $ v_3 v_4 $ and $ v_5 v_6 $ must do so with the same
orientation. Therefore the quantity $ I_{34} - I_{56} $ will either
be zero (when both edges intersect $ \triangle v_{7}v_{1}v_{2}, $ or
when neither of the two do) or $ \pm 1 $ (when only one of these
intersections occurs inside the triangle).
\begin{figure}[t]
\insertfig{jumprope.eps}{2.75in}
\caption{A piecewise linear isotopy of the linkage $ v_7 v_1 v_2 .$}
\label{fig:jumprope}
\end{figure}
A quick look at the possible configurations shows that:
\begin{enumerate} \renewcommand {\labelenumi} {(\roman{enumi})}
\item if $ H $ is a heptagonal figure-eight knot with $ \Theta_3 =
\Theta_6 , $ then exactly one of the intersection numbers $ I_{34},
I_{45}, $ or $ I_{56} $ is non-zero; thus $ I_{34} + I_{45} + I_{56} =
\pm 1 $ (Lemma 4.2 in \cite{Calvo:thesis}), and
\item if $ H $ is a heptagonal figure-eight knot with $ \Theta_3 = -
\Theta_6 , $ then exactly one of the intersection numbers $ I_{34} $
or $ I_{56} $ is non-zero; in particular $ I_{34} - I_{56} = \pm 1 $
(Lemma 4.3 in \cite{Calvo:thesis}).
\end{enumerate}
Now consider the function
\begin{equation} \label{eq:xi}
\Xi (H) = \frac{1}{2} \biggl( \Theta_3 + \Theta_6 \biggr)
\biggl( I_{34} + I_{45} + I_{56} \biggr) +
\frac{1}{2} \biggl( \Theta_3 - \Theta_6 \biggr)
\biggl( I_{34} - I_{56} \biggr) .
\end{equation}
By (i) and (ii) above, $ \Xi $ can only take values of 1 or -1.
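Computationally, $ \Xi $ reduces to the same signed-crossing primitive used in the hexagon sketch above; a minimal sketch of ours, assuming `signed_crossing` from that block is in scope:

```python
import numpy as np  # signed_crossing as in the hexagon sketch above

def Xi(H):
    """H = [v1,...,v7], a generic heptagon; implements (eq:xi)."""
    v = [np.asarray(u, float) for u in H]
    sgn = lambda x: int(np.sign(x))
    T3 = sgn(np.dot(np.cross(v[6] - v[0], v[1] - v[0]), v[2] - v[0]))
    T6 = sgn(np.dot(np.cross(v[5] - v[0], v[6] - v[0]), v[1] - v[0]))
    tri = (v[6], v[0], v[1])                      # triangle v7 v1 v2
    I34 = signed_crossing(v[2], v[3], *tri)
    I45 = signed_crossing(v[3], v[4], *tri)
    I56 = signed_crossing(v[4], v[5], *tri)
    # the numerator is always even, since T3 +/- T6 lies in {-2, 0, 2}
    return ((T3 + T6) * (I34 + I45 + I56) + (T3 - T6) * (I34 - I56)) // 2
```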
Suppose that the value of $ \Xi $ changes along some path $ h: [0,1]
\rightarrow \mathfrak{Geo}^{(7)} .$ Since $ \mathfrak{Geo}^{(7)} $ is a manifold, we can
assume that, in this path, only one vertex passes through the interior
of $ \triangle v_{7}v_{1}v_{2} $ at any one time, and that only one
edge intersects the line segment $ v_7 v_2 $ at a time, and that these
two things happen at different times. Note that each of these events
will change the values of $ I_{34} + I_{45} + I_{56} $ and $ I_{34} -
I_{56} $ by at most one. However, $ \Xi $ can only change in
increments of two, so if the values of $ \Theta_3 $ and $ \Theta_6 $
remain constant throughout $ h, $ then $ \Xi $ must also remain unchanged.
By reversing orientations if necessary, we can assume then that the
deformation changes the sign of $ \Theta_3 . $ In particular, let $
H_0 $ be a heptagonal figure-eight knot with $ \Theta_3 = 0 . $ By
pushing $ v_3 $ slightly towards $ v_6 , $ we get a heptagon $ H_0^+ $
with $ \Theta_3 = \Theta_6 ; $ let $ I_{34}^+ $ be the appropriate
intersection number for this heptagon. On the other hand, we obtain a
heptagon $ H_0^- $ with $ \Theta_3 = - \Theta_6 $ by pushing $ v_3 $
away from $ v_6 ; $ let $ I_{34}^- $ be the corresponding intersection
number for this heptagon. By picking $ H_{0}^{+} $ and $ H_{0}^{-} $
close enough to $ H_{0}, $ we can assume that the values of the
intersection numbers $ I_{45} $ and $ I_{56} $ coincide for all three
knots. This leaves two cases to consider.
First, suppose that $ I_{34}^- = 0 . $ Then $ I_{56} = \pm 1 $ by
(ii), and hence $ I_{34}^+ = I_{45} = 0 $ by (i). Therefore
\[ \biggl( I_{34}^+ + I_{45} + I_{56} \biggr) = I_{56} =
- \biggl( I_{34}^- - I_{56} \biggr) . \]
The extra negative sign in the right hand term of this equation
neutralizes the change of sign in $ \Theta_{3}, $ so that $ \Xi $
remains unchanged.
Next, suppose that $ I_{34}^- = \pm 1, $ in which case $ I_{34}^+ = 0
.$ Then $ I_{56} = 0 $ by (ii) and $ I_{45} = \pm 1 $ by (i).
Furthermore, the edges $ v_3 v_4 $ and $ v_4 v_5 $ must intersect the
interior of $ \triangle v_{7}v_{1}v_{2} $ from opposite directions, so
$ I_{34}^- = - I_{45} .$ Therefore
\[ \biggl( I_{34}^+ + I_{45} + I_{56} \biggr) = I_{45} = - I_{34}^- =
- \biggl( I_{34}^- - I_{56} \biggr) , \]
so that, as before, $ \Xi $ does not change. This proves the
following result.
\begin{Thm}
$ \Xi $ is an invariant of heptagonal figure-eight knots under geometric
deformations.
\end{Thm}
It is interesting to note that $ \Xi $ is also invariant under mirror
reflections, since the resulting sign changes in the functions $
\Theta_{3}, \Theta_{6}, I_{34}, I_{45}, $ and $ I_{56} $ cancel out in
(\ref{eq:xi}). This reflects the fact that heptagonal figure-eight
knots are achiral, {\em i.e.} equivalent to their mirror images.
Figure ~\ref{fig:achiral} shows one such isotopy. Starting with the
diagram at the top of Figure ~\ref{fig:achiral} and proceeding in a
clockwise fashion, we first push $ v_1 $ through the interior of the
triangular disc $ \triangle v_2 v_3 v_4 .$ Note that in doing so, we
may need to change the lengths of one or more of the edges. Although
it is difficult to see from the perspective of Figure
~\ref{fig:achiral}, this motion actually defines an isotopy from the
heptagon $ \langle v_1, v_2, v_3, v_4, v_5, v_6, v_7 \rangle $ to the
heptagon $ \langle -v_6, -v_7, -v_1, -v_2, -v_3, -v_4, -v_5 \rangle .$
We continue by repeating similar moves, passing $ v_3 $ through $
\triangle v_4 v_5 v_6 , $ then $ v_5 $ through $ \triangle v_6 v_7 v_1
, $ and so on. After seven steps, when we move $ v_6 $ past $
\triangle v_7 v_1 v_2 , $ we arrive at the diagram at the bottom of
Figure ~\ref{fig:achiral}. At this point, the figure-eight knot is
the mirror image of the starting position.
\begin{figure}[p]
\vspace{.5in}
\insertfig{achiral.eps}{6.4in}
\caption{Heptagonal figure-eight knots are achiral.}
\label{fig:achiral}
\end{figure}
Finally, consider the $ \mathbf{D}_{7} $ action on $ \mathfrak{Geo}^{(7)} $ defined by
the automorphisms
\begin{align*}
r \langle v_1, v_2, v_3, v_4, v_5, v_6, v_7 \rangle &=
\langle v_1, v_7, v_6, v_5, v_4, v_3, v_2 \rangle \\
s \langle v_1, v_2, v_3, v_4, v_5, v_6, v_7 \rangle &=
\langle v_2, v_3, v_4, v_5, v_6, v_7, v_1 \rangle .
\end{align*}
Reversing the orientation on $ H $ via the map $ r $ will reverse the
orientations on both the edges of $ H $ and the triangular discs that
they define. In particular,
\[ I_{34}(rH) = I_{56}(H) \qquad \qquad I_{45}(rH) = I_{45}(H)
\qquad \qquad I_{56}(rH) = I_{34}(H) .
\]
On the other hand, $ r $ not only switches the roles of $ \Theta_3 $ and $ \Theta_6, $
but also changes their signs:
\begin{align*}
\Theta_3 (rH) &= \Sign \bigl( (v_2 - v_1) \times (v_7 - v_1) \cdot (v_6 - v_1) \bigr), \\
&= - \Sign \bigl( (v_6 - v_1) \times (v_7 - v_1) \cdot (v_2 - v_1) \bigr), \\
&= - \Theta_6 (H), \\
\Theta_6 (rH) &= \Sign \bigl( (v_3 - v_1) \times (v_2 - v_1) \cdot (v_7 - v_1) \bigr), \\
&= - \Sign \bigl( (v_7 - v_1) \times (v_2 - v_1) \cdot (v_3 - v_1) \bigr), \\
&= - \Theta_3 (H).
\end{align*}
Therefore
\begin{align*}
\Xi (rH) &= \frac{1}{2} \biggl( \Theta_3 (rH) + \Theta_6 (rH) \biggr)
\biggl( I_{34} (rH) + I_{45} (rH) + I_{56} (rH) \biggr) \\ & \hspace{25mm} +
\frac{1}{2} \biggl( \Theta_3 (rH) - \Theta_6 (rH) \biggr)
\biggl( I_{34} (rH) - I_{56} (rH) \biggr) \\
&= \frac{1}{2} \biggl( - \Theta_6 (H) - \Theta_3 (H) \biggr)
\biggl( I_{56} (H) + I_{45} (H) + I_{34} (H) \biggr) \\ & \hspace{30mm} +
\frac{1}{2} \biggl( - \Theta_6 (H) + \Theta_3 (H) \biggr)
\biggl( I_{56} (H) - I_{34} (H) \biggr) \\
&= - \Xi (H) .
\end{align*}
This shows that, like hexagonal trefoil knots, figure-eight knots in $
\mathfrak{Geo}^{(7)} $ are irreversible, in contrast with their topological
counterparts. However, recall that the irreversibility of trefoils in
$ \mathfrak{Geo}^{(6)} $ depended strongly on our choice of a ``first'' vertex $
v_1 .$ In that case, a cyclic permutation of its six vertices would
change the trefoil's geometric knot type. This is not the case for
the figure-eight knots in $ \mathfrak{Geo}^{(7)}, $ for consider the group
action induced by the automorphism $ s $ on the set of geometric
isotopes of the figure-eight knot. This is an order 7 action on a two
element set, and must therefore be trivial. In other words, we must
have
\[ \Xi (sH) = \Xi (H). \]
Hence the distinction in the two figure-eight knot types is an effect
of ``true'' geometric knotting, which goes beyond a simple relabeling
of the vertices or our arbitrary choice of first vertex.
\section{Knot Projections and Minimal Polygon Index} \label{sec:project}
In Section ~\ref{sec:stratify}, we were concerned with the question of
determining, for a given integer $ n, $ the number of path-components
present in the space $ \mathfrak{Geo}^{(n)} $ of $ n $-sided polygons. In other
words, ``how many geometric knot types are there for a particular
value of $ n $?'' However, as $ n $ increases, the space $ \mathfrak{Geo}^{(n)}
$ becomes more and more combinatorially intricate. As this happens,
we turn to the question of understanding the number of represented
topological (rather than geometric) knot types, and in particular, of
how complicated a knot can be realized by an $ n $-sided polygon. The
answer to this question is only known when $ n \le 8 .$ For example,
we know there are 9-sided polygonal embeddings of every seven crossing
prime knot $ ( 7_{1}, \ldots , 7_{7}) $ as well as the knots $ 8_{16},
8_{17}, 8_{18}, 8_{21}, 9_{40}, 9_{41}, 9_{42}, $ and $ 9_{46}, $ but
presumably this list could be much bigger, and include some of the
knots for which we have so far only found 10- or 11-sided
realizations.~\!\!\footnote{\, See Table 1 in
\cite{Calvo:montecarlo}.} In this section, we give one of several
known bounds on the complexity of an $ n $-sided polygon.
Recall that the {\em minimal crossing number} of a knot is the
smallest number of crossings present in any general position
projection of the knot into a plane or sphere. This is the
conventional measure of a knot's complexity, used in the standard
notation for knots and links as well as in the knot tables in the
appendices of \cite{Adams:book}, \cite{Kauffman:book},
\cite{Livingston:book}, and \cite{Rolfsen:book}. We similarly define
the {\em minimal polygon index} of a knot as the smallest number of
edges present in any polygonal embedding of the knot. This invariant,
which is elsewhere known as the {\em stick number} \cite{Adams:book,
Adams:sticks, Furstenberg:sticks}, the {\em broken line number}
\cite{Negami:s.bounds}, or simply the {\em edge number}
\cite{Meissen:sticks, Randell:sticks}, serves as the corresponding
measure of complexity for polygonal knots. These two invariants are
traditionally related by the following construction.
~\!\!\footnote{\, This construction appears in Theorem 7 in
\cite{Negami:s.bounds}, and as Exercise 1.38 in \cite{Adams:book}.}
Let $ P = \langle v_1, v_2, \ldots, v_{n-1}, v_n \rangle \in
\mathfrak{Geo}^{(n)} $ be an $ n $-sided polygon embedded in $ \Field{R}^{3} .$ We
project the points in $ P $ orthogonally onto a plane perpendicular to
one of its edges, say $ v_{1} v_{2} .$ This amounts to looking at the
polygon from a viewpoint in which we see the edge $ v_{1}v_{2} $
``head on,'' so that the image of $ P $ on our retina ({\em i.e.} the
plane) is an $ (n-1) $-sided polygon. An edge in this image cannot
cross either of its two neighbors, or itself, so each edge will
intersect at most $ n-4 $ other edges. Thus for generic polygons in $
\mathfrak{Geo}^{(n)}, $ this method gives a knot projection with no more than $
\tfrac{1}{2} (n-1)(n-4) $ crossings. This leads to the conclusion
that if a knot $ K $ has minimal crossing number $ c(K) $ and minimal
polygon index $ s(K) ,$ then \[ c(K) \le \frac{\bigl( s(K) - 1 \bigr)
\bigl( s(K) - 4 \bigr)}{2} , \] or equivalently (by completing the square
and solving for $ s $) \[ s(K) \ge \frac{5 \, + \, \sqrt{9 \, + \, 8
c(K)}}{2} .\]
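Explicitly, the computation parallels the one carried out for
Theorem ~\ref{thm:new.c.bound} below:
\[ 2c \, \le \, s^2 - 5s + 4 \, = \, \biggl( s - \frac{5}{2} \biggr)^2 - \frac{9}{4} , \]
so that $ s \ge \tfrac{5}{2} + \sqrt{2c + \tfrac{9}{4}} =
\tfrac{1}{2} \bigl( 5 + \sqrt{9 + 8c} \, \bigr) .$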
For hexagons and heptagons, the bound on crossing number becomes 5 and
9, respectively, well over the actual values of 3 and 4 obtained in
Section ~\ref{sec:stratify}. In fact, the estimated $ \tfrac{1}{2}
(n-1)(n-4) $ crossings in the image of an $ n $-sided polygon can
never be achieved when $ n $ is odd. Here we present an improvement
on the bounds above.
First suppose that we relabel the vertices of $ P $ in sequence so
that $ v_{1} $ is a point on the boundary of the convex hull spanned
by the vertices of $ P .$ Therefore, we can find a plane $
\mathcal{P}_{1} $ which intersects $ P $ only at the vertex $ v_{1}, $
with $ P $ lying entirely on one side of $ \mathcal{P}_{1} .$
Let $ \mathcal{S} $ be a large sphere centered at $ v_1 $ and
enclosing all of $ P , $ and consider the image of the radial
projection $ p: P - \{v_1\} \rightarrow \mathcal{S} .$ By our choice
of $ v_{1} ,$ this image lies entirely in a hemisphere of $
\mathcal{S} $ cut by the equator $ \mathcal{S} \cap \mathcal{P}_{1} .$
Furthermore, note that the interiors of edges $ v_1 v_2 $ and $ v_1
v_n $ are respectively mapped to the single points $ p(v_2) $ and $
p(v_n) .$ Thus, by picking a generic $ P $ in $ \mathfrak{Geo}^{(n)} , $ we can
assume that $ \Gamma = p (P - \{v_1\}) $ consists of a chain of $ n-2
$ great circular arcs on $ \mathcal{S} $ intersecting in four-valent
crossings.
Suppose that $ \Gamma $ has $ c $ crossings. Since $ \Gamma $ is
contained in a single hemisphere of $ \mathcal{S}, $ a pair of arcs
will intersect at most once. Furthermore adjacent arcs cannot
intersect, so each one of the $ n-4 $ interior arcs $ p(v_3 v_4),
\ldots, p(v_{n-2} v_{n-1}) $ can intersect at most $ n-5 $ other arcs,
while each of the extreme arcs $ p(v_2 v_3) $ and $ p(v_{n-1} v_n) $
can intersect at most $ n-4 $ other arcs. Hence
\[
c \le \frac{1}{2}\biggl( (n-4)(n-5) + 2(n-4) \biggr) = \frac{(n-3)(n-4)}{2}.
\]
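Note that this spherical count already improves the planar projection
bound $ \tfrac{1}{2}(n-1)(n-4) $ obtained earlier by exactly $ n-4 $
crossings:
\[
\frac{(n-1)(n-4)}{2} - \frac{(n-3)(n-4)}{2} = n-4 .
\]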
\begin{figure}[t]
\insertfig{epsilon1.eps}{2in}
\centering{{\small (a) \hspace{48mm} (b) \hspace{4mm}}}
\insertfig{epsilon2.eps}{2in}
\centering{{\small (c) \hspace{48mm} (d) \hspace{4mm}}}
\caption{Deformation of $ P $ inside a small $ \epsilon $-ball
about $ v_{1}.$}
\label{fig:epsilon.ball}
\end{figure}
Let $ \epsilon > 0 $ be small enough that the closed $ \epsilon $-ball
$ \mathcal{B}_{\epsilon} $ centered at $ v_{1} $ intersects the
polygon $ P $ in exactly two small segments of the edges $ v_{1}v_{2}
$ and $ v_{n}v_{1} , $ as shown in Figure ~\ref{fig:epsilon.ball}(a).
Suppose that the edge $ v_{1}v_{2} $ intersects the sphere $ \partial
\mathcal{B}_{\epsilon} $ at the point $ q_{1} .$ Furthermore, let $
q_{2} $ be the point where the equator $ \partial
\mathcal{B}_{\epsilon} \cap \mathcal{P}_{1} $ intersects the
half-plane containing $ v_{2} $ and bounded by the line determined by
$ v_{1} $ and $ v_{3} .$ Then we can deform the segment $ q_{1}v_{1} $
so that it curves along a great circle path $ \alpha_{1} $ from $
q_{1} $ to $ q_{2}, $ and then in a straight line path to $ v_{1} .$
See Figure ~\ref{fig:epsilon.ball}(b). Note that since the arc $
\alpha_{1} $ lies on the same plane as $ v_{2}v_{3}, $ then $
p(\alpha_{1}) \cup p(v_{2}v_{3}) $ forms a single great circle
trajectory on $ \mathcal{S} $ from $ p(v_{3}) $ to $ p(q_{2}) .$ Thus,
after this deformation, the upper bound on the total number of
crossings given above still holds.
Similarly, let $ q_{3} $ be the point of intersection between the
equator $ \partial \mathcal{B}_{\epsilon} \cap \mathcal{P}_{1} $ and
the half-plane containing $ v_{n} $ and bounded by the line determined
by $ v_{1} $ and $ v_{n-1}, $ and let $ q_{4} $ be the point at which
the edge $ v_{n}v_{1} $ intersects $ \partial \mathcal{B}_{\epsilon}
.$ Then the segment $ v_{1}q_{4} $ can be deformed so that it travels
in a straight line path from $ v_{1} $ to $ q_{3} $ and then curves
along a great circle path $ \alpha_{3} $ from $ q_{3} $ to $ q_{4}. $
See Figure ~\ref{fig:epsilon.ball}(c). As before, the arc $
\alpha_{3} $ lies on the same plane as $ v_{n-1}v_{n}, $ so $
p(v_{n-1}v_{n}) \cup p(\alpha_{3}) $ forms a single great circle
trajectory on $ \mathcal{S} $ from $ p(v_{n-1}) $ to $ p(q_{3}). $
Therefore the upper bound on the number of crossings given above still
holds after this deformation.
Finally, isotope $ P $ by moving $ v_1 $ into the interior of the
triangle $ \triangle q_{2}v_{1}q_{3} $ while curving the segments $
q_{2}v_{1} $ and $ v_{1}q_{3} $ until they coincide with an arc along
the equator $ \partial \mathcal{B}_{\epsilon} \cap \mathcal{P}_{1} ,$
as in Figure ~\ref{fig:epsilon.ball}(d). This final transformation
turns $ P $ into a non-polygonal embedding of the same (topological)
knot type; this new embedding agrees with $ P $ outside of the ball $
\mathcal{B}_{\epsilon} $ but completely avoids its interior. In the
meanwhile, the image under $ p $ of this embedding is simply a
(spherical) knot projection $ \Gamma' $ consisting of the $ n-2 $ arcs
of $ \Gamma $ (with its ends extended by $ p(\alpha_{1}) $ and $
p(\alpha_{3}) $), together with an $ (n-1) $th arc $ \alpha_{2} $
running along the equator $ \mathcal{S} \cap \mathcal{P}_{1} $ and
joining the endpoints $ p(q_2) $ and $ p(q_3) .$ Since $ \Gamma $ is
contained entirely on one side of the equator, $ \alpha_{2} $ does not
cross any other arcs. Hence the new projection has no more crossings
than it did before the last deformation, proving the following
theorem.
\begin{Thm} \label{thm:new.c.bound}
Suppose that a knot $ K $ has minimal crossing number $ c(K) $ and
minimal polygon index $ s(K) .$ Then
\begin{equation} \label{eq:crossing}
c(K) \le \frac{ \bigl(s(K) - 3 \bigr) \bigl(s(K) - 4 \bigr)}{2} .
\end{equation}
Completing the square in (\ref{eq:crossing}) shows that
\[ 2c \le s^2 - \, 7 s \, + \, 12 =
\biggl(s - \frac{7}{2}\biggr)^2 - \frac{1}{4} , \]
so that
\[ s(K) \ge \frac{7 \, + \, \sqrt{8c(K) \, + \, 1}}{2} . \]
\end{Thm}
Note that Theorem ~\ref{thm:new.c.bound} correctly predicts that the
trefoil is the only non-trivial knot which can be realized with six
edges.
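Indeed, setting $ s(K) = 6 $ in (\ref{eq:crossing}) gives
\[ c(K) \, \le \, \frac{(6-3)(6-4)}{2} \, = \, 3 , \]
and the trefoil is the only nontrivial knot with at most three
crossings. Likewise $ s(K) = 7 $ gives $ c(K) \le 6 ,$ consistent with
the appearance of the figure-eight knot $ 4_1 $ (with $ c = 4 $) in $
\mathfrak{Geo}^{(7)} .$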
\begin{figure}[tp]
\insertfig{diagram1.eps}{1.9in}
\centering{{\small \hspace{5mm} (a) \hspace{60mm} (b) }}
\insertfig{diagram2.eps}{1.9in}
\centering{{\small \hspace{5mm} (c) \hspace{60mm} (d) }}
\caption{A knot universe and several choices in over- and
under-crossings.}
\label{fig:diagrams}
\end{figure}
In the case of octagons, the new bound on crossing number becomes 10.
However, in \cite{Calvo:thesis} we systematically look at the possible
knot projections $ \Gamma' $ resulting from the deformation described
by Figure ~\ref{fig:epsilon.ball} and thereby enumerate the
topological knots which appear in $ \mathfrak{Geo}^{(8)} .$ For example,
consider the ten-crossing knot universe shown in Figure
~\ref{fig:diagrams}(a). By appropriately choosing at each crossing
which strand goes ``over'' and which one goes ``under,'' we will
obtain a knot projection $ \Gamma' $ corresponding, as above, to some
octagon $ P .$ As we make choices in ``over'' and ``under'' crossings
we need to keep a few points in mind:
\begin{enumerate} \renewcommand {\labelenumi} {(\roman{enumi})}
\item If $ v_2 v_3 $ passes under every one of its crossings, then the
interior of the triangular disc $ \triangle v_1 v_2 v_3 $ does not
intersect the rest of $ P .$ In this case, $ P $ can be isotoped by
pushing $ v_2 $ in a straight line path to the midpoint of the line
segment $ v_1 v_3 $ until $ P $ coincides with a heptagon. A similar
isotopy exists if $ v_{7}v_{8} $ contributes only ``under'' crossings.
Therefore we need not consider these diagrams.
\item If the edges $ v_{5}v_{6} $ and $ v_{6}v_{7} $ both go under $
v_{3}v_{4}, $ as in Figure ~\ref{fig:diagrams}(b), then we can isotope
$ P $ so that the corresponding $ \Gamma' $ has two fewer crossings.
For instance, we can shrink the lengths of $ v_{5}v_{6} $ and $
v_{6}v_{7}, $ in essence performing a Reidemeister 2 move. We can
therefore ignore crossing choices which permit a reducing isotopy of
this type, delaying their analysis until we examine the resulting
reduced diagram.
\item Some choices of ``over'' and ``under'' crossings will lead to
configurations which are impossible to create with straight edges.
For instance, consider the three crossing choices made in Figure
~\ref{fig:diagrams}(c). Let $ \mathcal{P} $ be the plane containing $
v_{4}, v_{5}, $ and $ v_{6} .$ Note that the interior of edge $ v_6
v_7 $ lies entirely above the plane $ \mathcal{P}, $ since it starts
on the plane at $ v_{6} $ and then crosses over $ v_4 v_5 .$
Similarly, the interior of edge $ v_{3} v_{4} $ lies below the plane $
\mathcal{P} $ since it crosses under $ v_{5} v_{6} $ and then meets
the plane at $ v_{4} .$ This means that $ v_{3} v_{4} $ cannot cross
over $ v_6 v_7, $ as in Figure ~\ref{fig:diagrams}(c), unless one of
the two edges is bent.
\item A particularly tricky example of a bad ``over'' and ``under''
crossing choice is shown in Figure ~\ref{fig:diagrams}(d). This
diagram corresponds to an octagonal realization of the knot $ 8_{18}
,$ shown in Figure ~\ref{fig:eight.18}. Here the problem is not as
obvious as before. In fact, among all of the projections
corresponding to impossible configurations which we encounter in
\cite{Calvo:thesis}, this is the only one whose impossibility is not
readily apparent. Nonetheless, through a delicate balance between
introducing self-intersections and counting dimensions, we can show
that there is no way to construct this configuration. The details of
this argument will appear in a forthcoming paper.
\end{enumerate}
\begin{figure}[t]
\insertfig{eight18.eps}{2in}
\caption{This octagonal embedding of the knot $ 8_{18} $ cannot be
constructed with straight edges.}
\label{fig:eight.18}
\end{figure}
After considering all possible projections $ \Gamma' $ with more than
six crossings, we find that the only knots with polygon index 8 and
crossing number greater than 6 are $ 8_{19} $ and $ 8_{20} .$ Since it
is known that there are octagonal realizations of every knot $ K $
with crossing number $ c(K) \le 6, $ we obtain a complete list of the
topological knots present in $ \mathfrak{Geo}^{(8)}, $ as indicated in Theorem
~\ref{thm:classification}(iv). With the exception of $ 6_{3} , $ the
square knot $ 3_{1} - 3_{1}, $ the figure-eight knot $ 4_{1}, $ and
the unknot, every knot type in this list is chiral and therefore must
contribute at least two path-components in $ \mathfrak{Geo}^{(8)} .$ Therefore $
\mathfrak{Geo}^{(8)} $ will contain at least twenty path-components.
\section*{Acknowledgments}
I would like to thank Ken Millett, who first led me into this
wonderful subject, and who has always been happy to give me his
advice, insights, and toughest questions. I would also like to thank
Janis Cox Millett for her hospitality this summer, as the three of us
traveled through Paris, Athens, Delphi, Berlin, and Aix-en-Provence.
\bibliographystyle{amsplain}
| {
"timestamp": "1999-04-09T05:46:42",
"yymm": "9904",
"arxiv_id": "math/9904037",
"language": "en",
"url": "https://arxiv.org/abs/math/9904037",
"abstract": "The space of n-sided polygons embedded in three-space consists of a smooth manifold in which points correspond to piecewise linear or ``geometric'' knots, while paths correspond to isotopies which preserve the geometric structure of these knots. The topology of these spaces for the case n = 6 and n = 7 is described. In both of these cases, each knot space consists of five components, but contains only three (when n = 6) or four (when n = 7) topological knot types. Therefore ``geometric knot equivalence'' is strictly stronger than topological equivalence. This point is demonstrated by the hexagonal trefoils and heptagonal figure-eight knots, which, unlike their topological counterparts, are not reversible. Extending these results to the cases n \\ge 8 is also discussed.",
"subjects": "Geometric Topology (math.GT)",
"title": "Geometric Knot Spaces and Polygonal Isotopy",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877002595528,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7089594742920234
} |
https://arxiv.org/abs/1806.09242 | Homotopy classification of Leavitt path algebras | In this paper we address the classification problem for purely infinite simple Leavitt path algebras of finite graphs over a field $\ell$. Each graph $E$ has associated a Leavitt path $\ell$-algebra $L(E)$. There is an open question which asks whether the pair $(K_0(L(E)), [1_{L(E)}])$, consisting of the Grothendieck group together with the class $[1_{L(E)}]$ of the identity, is a complete invariant for the classification, up to algebra isomorphism, of those Leavitt path algebras of finite graphs which are purely infinite simple. We show that $(K_0(L(E)), [1_{L(E)}])$ is a complete invariant for the classification of such algebras up to polynomial homotopy equivalence. To prove this we develop the bivariant algebraic $K$-theory of Leavitt path algebras and obtain several results of independent interest. | \section{Introduction}
A directed graph $E$ consists of a set $E^0$ of vertices and a set $E^1$ of edges together with range and source functions $r,s:E^1\to E^0$. This article is concerned with the Leavitt path algebra $L(E)$ of a directed graph $E$ over a field $\ell$ (\cite{libro}). When $\ell=\C$, $L(E)$ is a normed algebra; its completion is the graph $C^*$-algebra $C^*(E)$. A graph $E$ is called finite (resp. countable) if both $E^0$ and $E^1$ are finite (resp. countable). A result of Cuntz and R{\o}rdam (\cite{ror}*{Theorem 6.5}) says that the purely infinite simple graph algebras associated to finite graphs, i.e. the purely infinite simple Cuntz-Krieger algebras, are classified up to (stable) isomorphism by the Grothendieck group $K_0$. It is an open question whether a similar result holds for Leavitt path algebras \cite{alps}. Here we prove that $K_0$ classifies purely infinite simple Leavitt path algebras of finite graphs up to ($M_2$-)homotopy equivalence. In the following theorem and elsewhere, we use the following notations. We write
$\iota_2:R\to M_2R$ for the inclusion of an algebra into the upper left hand corner of the matrix algebra, $\phi\approx \psi$ to indicate that two algebra homomorphisms $\phi$ and $\psi$ are (polynomially) homotopic and $\phi\approx_{M_2}\psi$ to mean that $\iota_2\phi\approx\iota_2\psi$. We also put $[1_R]$ for the $K_0$-class of the identity of a unital algebra $R$. In Theorem \ref{thm:main2} we prove the following.
\begin{thm}\label{intro:main}
Let $E$ and $F$ be finite graphs. Assume that $L(E)$ and $L(F)$ are purely infinite simple. Let
$\xi:K_0(L(E))\to K_0(L(F))$ be an isomorphism of groups. Then there exist nonzero algebra homomorphisms $\phi:L(E)\leftrightarrow L(F):\psi$ such that
$K_0(\phi)=\xi$, $K_0(\psi)=\xi^{-1}$, $\psi\phi\approx_{M_2} \operatorname{id}_{L(E)}$ and $\phi\psi\approx_{M_2}\operatorname{id}_{L(F)}$. If moreover $\xi([1_{L(E)}])=[1_{L(F)}]$ then $\phi$ and $\psi$ can be chosen to be unital homomorphisms such that $\psi\phi\approx \operatorname{id}_{L(E)}$ and $\phi\psi\approx \operatorname{id}_{L(F)}$.
\end{thm}
We also prove other results which we think are of independent interest. For example we have the following embedding theorem, proved in Corollary \ref{coro:tododentro}.
\begin{thm}\label{thm:embed} Let $E$ be a graph such that $L(E)$ is simple and let $R$ be a unital purely infinite algebra.
\item[i)] If $E$ is countable then $L(E)$ embeds as a subalgebra of $M_\infty R$.
\item[ii)] If $E$ is finite and $[1_R]=0$ in $K_0(R)$, then $L(E)$ embeds as a unital subalgebra of $R$.
\item[iii)] If $E$ is finite, then $L(E)$ embeds as a subalgebra of $R$.
\end{thm}
For particular $R$, we have the following result on uniqueness up to homotopy for embeddings into $R$. In the next theorem and elsewhere, we write $[A,R]$ and
$[A,R]_{M_2}$ for the set of homotopy classes and $M_2$-homotopy classes of homomorphisms $A\to R$. If moreover, $A$ and $R$ are unital, we write $[A,R]_1$ for the set of homotopy classes of unital homomorphisms $A\to R$. In the next theorem and elsewhere we use the notion of regular supercoherent ring from \cite{gersten}. For example, $L(E)$ is regular supercoherent for every finite graph $E$ (\cite{libro}*{Lemma 6.4.16}). We write $L_n$ for the Leavitt path algebra of the one-vertex graph with $n$ loops.
\begin{thm}\label{thm:embel2}
Let $E$ be a finite graph such that $L(E)$ is simple and $R$ a purely infinite simple, regular supercoherent unital algebra. Then $[L(E),L_2]_1= [L(E),L_2]_{M_2}\setminus\{0\}$, $[L(E),R\otimes L_2]_1=[L(E),R\otimes L_2]_{M_2}\setminus\{0\}$, and both sets have exactly one element each.
\end{thm}
In particular, Theorem \ref{thm:embel2} implies that if $d:L_2\to L_2\otimes L_2$, $d(x)=1\otimes x$ and $\phi: L_2\to L_2\otimes L_2$ is a nonzero homomorphism, then $\phi\approx_{M_2}d$ and that if $\phi$ is unital then $\phi\approx d$.
In \cite{dwkk} we introduced, for an algebra $A$ and a unital algebra $R$, an abelian monoid of homotopy classes of extensions of $A$ by $M_\infty R$, and considered its group completion $\mathcal{E}xt(A,R)$. We showed in \cite{dwkk}*{Remark 5.8} that if $E$ is a graph such that $E^0$ is finite and $E^1$ is countable, then for the bivariant algebraic $K$-theory group $kk_n(A,R)$ of \cite{kkwt}, there is a natural map
\begin{equation}\label{intro:mapextkk}
\mathcal{E}xt(L(E),R)\to kk_{-1}(L(E), R).
\end{equation}
Recall that a ring $R$ is \emph{$K_n$-regular} if the canonical map $K_n(R)\to K_n(R[t_1,\dots,t_m])$ is an isomorphism for every $m$. For example, every Leavitt path algebra is $K_n$-regular for all $n\in\Z$, by \cite{dwkk}*{Example 5.5}.
\begin{thm}\label{intro:ext}
Let $E$ be a finite graph such that $L(E)$ is simple. Let $R$ be either a division algebra or a $K_0$-regular purely infinite simple unital algebra. Then the natural map \eqref{intro:mapextkk} is an isomorphism
\[
\mathcal{E}xt(L(E),R)\overset{\sim}{\lra} kk_{-1}(L(E),R).
\]
\end{thm}
Combining Theorem \ref{intro:ext} with results from our previous paper \cite{dwkk} we are able to compute $\mathcal{E}xt(L(E), R)$ in some cases. For example, we get that if $E$ is as in Theorem \ref{intro:ext}, $\operatorname{reg}(E)=E^0\setminus\operatorname{sink}(E)$ and $I$ and $A_E\in\Z^{\operatorname{reg}(E)\times E^0}$ are the identity and the incidence matrices with the rows corresponding to sinks removed, then
\begin{equation}\label{intro:ckext}
\mathcal{E}xt(L(E),\ell)={\rm Coker}(I-A_E).
\end{equation}
If moreover $K_0(L(E))$ is torsion, then for every $R$ as in Theorem \ref{intro:ext} (in particular, for $R=\ell$ and for every purely infinite simple unital Leavitt path algebra $R$), we have
\begin{equation}\label{intro:extext}
\mathcal{E}xt(L(E),R)=\operatorname{Ext}^1_\Z(K_0(L(E)),K_0(R)).
\end{equation}
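To illustrate \eqref{intro:ckext} and \eqref{intro:extext} with a standard example: let $\mathcal R_n$ denote the graph with one vertex and $n\ge 2$ loops, so that $L(\mathcal R_n)=L_n$. Then $A_{\mathcal R_n}=(n)$ and
\[
\mathcal{E}xt(L_n,\ell)=\operatorname{Coker}(1-n)=\Z/(n-1).
\]
Since $K_0(L_n)\cong\Z/(n-1)$ is torsion, \eqref{intro:extext} gives the consistent answer $\operatorname{Ext}^1_\Z(\Z/(n-1),\Z)\cong\Z/(n-1)$.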
The following theorem is the main technical result of the paper; it is key for the proof of Theorems \ref{intro:main}, \ref{thm:embel2} and \ref{intro:ext}.
\begin{thm}\label{intro:kklift}
Let $E$ be a finite graph such that $L(E)$ is simple and $R$ a purely infinite simple unital algebra. Assume that $R$ is $K_1$-regular. Then
the canonical map
\[
[L(E),R]_{M_2}\setminus\{0\}\to kk(L(E),R)
\]
is an isomorphism of monoids.
\end{thm}
Thanks to Remark \ref{rem:gagp}, we may view Theorem \ref{intro:kklift} as a generalization of the theorem of Ara, Goodearl and Pardo \cite{agp} which says that if $R$ is as in the theorem and $\mathcal V(R)$ is the monoid of Murray-von Neumann equivalence classes of idempotent matrices in $M_\infty R$, then $K_0(R)=\mathcal V(R)\setminus\{0\}$. In fact, the latter result is used in the proof of Theorem \ref{intro:kklift}.
\iffalse
In part ii) of Theorem \ref{intro:kklift} we use Proposition \ref{prop:vle}, which says that for $F$ as above, the
monoid $\mathcal V_1(L(F))$ of Murray-von Neumann equivalence classes of idempotent elements of $L(F)$ is isomorphic to $\mathcal V(L(F))$.
\fi
The proof of Theorem \ref{intro:kklift} also uses results from our previous paper on $kk$ of Leavitt path algebras and an adaptation to the purely algebraic setting, developed in Sections \ref{sec:k0k1lift} and \ref{sec:kklift}, of several results proved for the $C^*$-algebra setting in R\o rdam's article \cite{ror}.
\goodbreak
\bigskip
The rest of this paper is organized as follows. In Section \ref{sec:v1} we prove (Corollary \ref{coro:kv1pis}) that if $R$ is a $K_1$-regular, purely infinite simple and unital algebra, then $K_1(R)$ is isomorphic to the group $\pi_0(U(R))$ of polynomially connected components of the group of invertible elements of $R$. The case of Theorem \ref{intro:kklift} when $L(E)$ is not purely infinite is contained in Proposition \ref{prop:homotopis} (see Remark \ref{rem:simpnopis}). Section \ref{sec:k0lift} considers the problem of whether a given group homomorphism $K_0(L(E))\to K_0(R)$ can be lifted to an algebra homomorphism. We show in Theorem \ref{thm:k0liftr} that if $R$ is purely infinite simple and unital and $E$ is countable, then any group homomorphism $\xi:K_0(L(E))\to K_0(R)$ is induced by an algebra homomorphism $\psi:L(E)\to M_\infty R$, that if moreover $E^0$ is finite and $\xi$ is unital (i.e. $\xi([1_{L(E)}])=[1_R])$ then $\xi$ is also induced by a unital homomorphism
$\phi:L(E)\to R$, and that if $E$ is finite then any group homomorphism
$\xi:K_0(L(E))\to K_0(R)$ is induced by a nonzero algebra homomorphism $\phi:L(E)\to R$. Theorem \ref{thm:embed} is Corollary \ref{coro:tododentro}. If $E$ is a finite graph with reduced incidence matrix $A_E$ as above, we shall abuse notation and write
$I-A_E^t$ for the transpose of the matrix of \eqref{intro:ckext}. Section \ref{sec:k0k1lift} is concerned with the question of whether, given a finite graph $E$ with reduced incidence matrix $A_E$, an algebra $R$ and a pair $(\xi_0,\xi_1)$ of group homomorphisms $\xi_0:K_0(L(E))\to K_0(R)$ and $\xi_1:\ker(I-A_E^t)\to K_1(R)$, there is an algebra homomorphism simultaneously inducing $\xi_0$ and $\xi_1$. We prove in Theorem \ref{thm:ror} that if $L(E)$ is simple and $R$ is purely infinite, unital and $K_1$-regular, then there is an algebra homomorphism $\phi:L(E)\to R$
which induces both $\xi_0$ and $\xi_1$, and that if $\xi_0$ is unital, then $\phi$ can be chosen to be a unital homomorphism $L(E)\to R$.
Section \ref{sec:kklift} is devoted to the proof of Theorem \ref{thm:kkliftr}, which contains the case of Theorem \ref{intro:kklift} when $L(E)$ is purely infinite simple. Theorem \ref{intro:main} is proved in Section \ref{sec:main} (Theorem \ref{thm:main2}). Section \ref{sec:ext} is devoted to algebra extensions. Theorem \ref{intro:ext} is contained in Theorem \ref{thm:ext}; formulas \eqref{intro:ckext} and \eqref{intro:extext} are proved in Corollary \ref{coro:ckext} and Example \ref{ex:cute}. Section \ref{sec:tenso} is concerned with maps to $L_2$ and $R \otimes L_2$; Theorem \ref{thm:embel2} is proved in Theorem \ref{thm:mapl2}.
\goodbreak
\medskip
\noindent{\it Acknowledgements. } A previous version of this article contained a proof of Proposition \ref{prop:vle} for the particular case when $R$ is a Leavitt path algebra. We are indebted to Pere Ara for pointing out that in fact the result holds for every unital purely infinite ring $R$, and that the proof is immediate from results in \cite{agp}.
\section{Idempotents, units and the groups \texorpdfstring{$K_0$}{K0} and \texorpdfstring{$K_1$}{K1} in the purely infinite simple unital case}\label{sec:v1}
Let $R$ be a ring; write $\operatorname{Idem}(R)$ for the set of idempotent elements. Let $p,q\in\operatorname{Idem}(R)$. We write $p\sim q$ if $p$ and $q$ are \emph{Murray-von Neumann equivalent} \cite{agp}; that is, if there exist elements $x\in pRq$
and $y\in qRp$ such that $xy=p$ and $yx=q$. We call such a pair $(x,y)$ an \emph{MvN equivalence} from $p$ to $q$ and write $(x,y):p\sim q$.
Put $\operatorname{Idem}_n(R)=\operatorname{Idem}(M_n(R))$, $1\le n\le \infty$. If $R$ is unital, we write
\[
\mathcal V_n(R)=\operatorname{Idem}_n(R)/\sim \qquad (1\le n<\infty),\quad \mathcal V(R)=\operatorname{Idem}_\infty(R)/\sim.
\]
\begin{rem}\label{rem:vr} One may also define $\mathcal V(R)$ as the set of isomorphism classes of finitely generated projective right modules. The equivalence between the two definitions follows from \cite{rosen}*{Theorem 1.2.3} and \cite{black}*{Propositions 4.2.5 and 4.3.1}. One checks that if $f:R\to S$ is a homomorphism and $f(1)=p$, then
under the identification, the map $\mathcal V(R)\to \mathcal V(S)$ induced by $M_\infty R\to M_\infty S$ corresponds to the scalar extension functor $\otimes_RpS$.
\end{rem}
\iffalse
\begin{lem}\label{lem:vninjectsv}
The map $\mathcal V_n(R)\to\mathcal V(R)$ is injective.
\end{lem}
\begin{proof} If $p,q\in\operatorname{Idem}_n(R)$ and $x\in pM_\infty(R)q$ and $y\in qM_\infty(R)q$, then $x,y\in M_n(R)$. In particular if $p\sim q$ in $\operatorname{Idem}_\infty(R)$, then $p\sim q$ in $\operatorname{Idem}_n(R)$.
\end{proof}
\fi
If $p,q\in\operatorname{Idem}(R)$ and $pq=qp=0$ we say that $p$ and $q$ are \emph{orthogonal} and write $p\perp q$ to indicate this. An idempotent $p$ in a ring $R$ is \emph{infinite}
if there exist orthogonal idempotents $q,r\in R$ such that $p= q+r$,
$p\sim q$ and $r\ne 0$. A ring $R$ is said to be \emph{purely infinite simple} if for every nonzero element $x\in R$ there exist $s,t\in R$ such that $sxt$ is an infinite idempotent. If $R$ is unital this is equivalent to asking that $R$ not be a division ring and that for every nonzero $x\in R$ there are $a,b\in R$ such that $axb=1$.
\iffalse
Recall that a vertex $v\in E^0$ is \emph{singular} if it is either a sink or an infinite emitter, and that it is \emph{regular} otherwise. We write $\operatorname{reg}(E)$, $\operatorname{sink}(E)$, $\operatorname{sour}(E)$ and $\operatorname{inf}(E)$ for the sets of regular vertices, sinks, sources, and infinite emitters, and put $\operatorname{sing}(E)=\operatorname{sink}(E)\cup\operatorname{inf}(E)$.
The graphs $E$ such that $L(E)$ is purely infinite simple are completely characterized by \cite{libro}*{Theorem 3.1.10}. We wish to express this result using the notion of cofinality; we recall the definition from \cite{libro}*{Definitions 2.9.4}. Let $\mathfrak{X}_E$ be the set whose elements are the infinite paths of $E$ and also the finite paths which end at a singular vertex of $E$. The graph $E$ is called \emph{cofinal} if for every vertex $v\in E^0$ and every $\gamma\in\mathfrak{X}_E$ there exists a path from $v$ to some vertex $w$ in $\gamma$.
\begin{thm}\label{thm:pist}\cite{libro}*{Lemma 2.9.6 and Theorem 3.1.10}
$L(E)$ is purely infinite simple if and only if $E$ is cofinal, has at least one cycle and every cycle of $E$ has an exit.
\end{thm}
\begin{lem}\label{lem:nosources} Let $E$ be a graph with finitely many vertices and such that $L(E)$ is purely infinite. Then there is a full subgraph $E'$ without sources such $L(E')$ is also purely infinite and such the map $\mathcal V(L(E'))\to\mathcal V(L(E))$ induced by the inclusion $L(E')\subset L(E)$ is an isomorphism.
\end{lem}
\begin{proof} The case when $\operatorname{inf}(E)=\emptyset$ follows from \cite{alps}*{Corollary 1.5} which in turn uses \cite{alps}*{Proposition 1.4}. Both results hold for arbitrary graphs with finitely many vertices, with essentially the same proofs, taking Remark \ref{rem:vr} into account.
\end{proof}
\begin{thm}\label{thm:vle} Let $E$ be a graph with finitely many vertices such that $L(E)$ is simple and purely infinite. Then the map
$\iota:\mathcal V_1(L(E))\to \mathcal V(L(E))$ is an isomorphism. Moreover, for every $n\ge 1$ and every $(q_1,\dots,q_n)\in \operatorname{Idem}_\infty(L(E))^n$ there exists $(p_1,\dots,p_n)\in\operatorname{Idem}_1(R)^n$, such that $p_i\sim q_i$ in $\operatorname{Idem}_\infty(L(E))$ and such that $p_i\perp p_j$ for $i\ne j$.
\end{thm}
\begin{proof} Injectivity of $\iota$ is a particular case of Lemma \ref{lem:vninjectsv}. Observe that the case $n=1$ of the last assertion of the theorem implies that $\iota$ is surjective. To prove the rest of the assertions, we may assume that $E$ has no sources, by Lemma \ref{lem:nosources}. By \cite{libro}*{Theorem 3.2.5}
, $\mathcal V(L(E))$ is generated by the set of Murray-von Neumann equivalence classes $\{[v]:v\in E^0\}$ of the vertices of $E$.
Hence it suffices to show that if $v_1,\dots,v_r\in E^0$ are distinct,
$1\le n_i$ and $1\le i\le r$, then there exist orthogonal idempotents $p_{i,j}\in L(E)$ ($1\le i\le r$, $1\le j\le n_i$) such that $[p_{i,j}]=v_i$. We know from Theorem \ref{thm:pist} that $E$ is cofinal. This together with the hypothesis that $E^0$ is finite and the assumption that $E$ has no sources imply that for every vertex $v\in E^0$ there exists a cycle $\alpha_{v,1}$ based at $v$. Using in addition that (again by Theorem \ref{thm:pist}) every cycle has an exit, we obtain that there is a closed path $\alpha_{v,2}$, also based at $v$, such that $\alpha_{v,1}^*\alpha_{v,2}=\alpha_{v,2}^*\alpha_{v,1}=0$. Let $\mathcal R_2$ be the graph with one vertex and two edges $e_1,e_2$, and let $C_2=C(\mathcal R_2)$ be the Cohn algebra (\cite{dwkk}*{Section 4}). Let $f_v:C_2\to L(E)$ be the $*$-homomorphism that sends $e_i\mapsto \alpha_{v,i}$ ($i=1,2$). Let $n=\max\{n_i:1\le i\le r\}$; choose $l>0$ such that $2^l>n$ and let $\{\omega_j: 1\le j\le 2^l\}$ be the set of all words of length $l$ on $e_1$ and $e_2$. Observe that $v\ne w$ implies $f_v(x) f_w(y)=0$ for all $x,y\in C_2$ and that $\omega_j^*\omega_k=\delta_{j,k}$. It follows that the idempotents $p_{i,j}=f_{v_i}(w_jw_j^*)\in \operatorname{Idem}_1(L(E))$ ($1\le i\le r$, $1\le j\le n_i$) are orthogonal to each other and that $[p_{i,j}]=v_i$, as wanted.
\end{proof}
\fi
The following theorem describing $K_0$ and $K_1$ of purely infinite simple unital rings is due to Ara, Goodearl and Pardo.
If $R$ is a unital ring, write $U(R)$ for the group of invertible elements of $R$.
\begin{thm}\label{thm:kpis}\cite{agp}*{Corollary 2.3 and Theorem 2.4}
If $R$ is a purely infinite simple unital ring, then $$K_0(R) = \mathcal{V}(R) \backslash \{ [0] \}$$
$$K_1(R) = U(R)^{ab}.$$
\end{thm}
\begin{prop}\label{prop:vle} Let $R$ be a purely infinite simple unital ring. Then the map
$\iota:\mathcal V_1(R)\to \mathcal V(R)$ is an isomorphism. Moreover, for every $n\ge 1$ and every element $(q_1,\dots,q_n)\in \operatorname{Idem}_\infty(R)^n$ there exists $(p_1,\dots,p_n)\in\operatorname{Idem}_1(R)^n$, such that $p_i\sim q_i$ in $\operatorname{Idem}_\infty(R)$ and such that $p_i\perp p_j$ for $i\ne j$.
\end{prop}
\begin{proof}
This is straightforward from \cite{agp}*{Proposition 1.5 and Lemma 1.1}.
\end{proof}
Combining Proposition \ref{prop:vle} and Theorem \ref{thm:kpis} we obtain the following.
\begin{coro}\label{coro:k0pis}
Let $R$ be a purely infinite simple unital ring. Then
$$ K_0(R) \cong \mathcal V_1(R)\backslash \{[0]\}. $$
\end{coro}
\begin{coro}\label{coro:equiv} Let $R$ be a purely infinite simple unital ring and let $e, f\in R$ be nonzero idempotents.
Then the following are equivalent:
\item[(1)] $e \sim f$.
\item[(2)] $[e] = [f] $ in $K_0(R)$.
\noindent If furthermore $e,f\in\operatorname{Idem}_1(R)\backslash\{0,1\}$ then the above conditions are also equivalent to the following.
\item[(3)] There exists $ u \in U(R)$ such that $ f= u e u^{-1}$.
\item[(4)] There exists $ u $ in the commutator subgroup $ [U(R):U(R)] $ such that $ f= u e u^{-1}$.
\end{coro}
\begin{proof}
The equivalence of (1) and (2) follows from Corollary \ref{coro:k0pis}. By \cite{black}*{Proposition 4.2.5}, (3) is equivalent to having simultaneously $e \sim f$ and $1-e \sim 1-f$. Hence to prove that (1) implies (3) it only remains to show that $1-e \sim 1-f$. But
$$ [e] + [1-e] = [1]= [f]+[1-f] $$
in $K_0(R)$, which together with $[e]= [f]$ implies $[1-e] = [1-f]$ in $K_0(R)$ and therefore in $\mathcal V_1(R)$. Hence $1-e \sim 1-f$.
Next we show that (3) implies (4). Because $R$ is simple and $f\ne 1$, $1-f$ is a full idempotent. Hence $ (1-f) R (1-f)$ is purely infinite simple (by \cite{agp}*{Corollary 1.7}) and the inclusion induces an isomorphism $ K_1((1-f) R (1-f)) \overset{\sim}{\lra} K_1(R)$. By Theorem \ref{thm:kpis}, this implies that the induced map $U((1-f) R (1-f))^{ab}\to U(R)^{ab}$ is an isomorphism. Since the latter map sends $[\xi]\mapsto [\xi+f]$, there is an element $\omega\in U((1-f) R (1-f))$ such that $ [\omega+f]= [u^{-1}]$. Then $(\omega+f) u \in [U(R):U(R)]$ and $(\omega+f) u e u^{-1} (\omega^{-1} +f ) = f$.
To prove that (4) implies (1) take $x = e u^{-1} f$ and $y = f u e$; we have $xy=e$ and $yx=f$.
\end{proof}
Let $G:{{\rm Alg}_\ell}\to\mathfrak{Grp}$ be a functor from algebras to groups and let $A\in{{\rm Alg}_\ell}$. The \emph{connected component of the identity} in $G(A)$ is the subgroup
\[
G(A)\supset G(A)^0=\{g\mid (\exists u(t)\in G(A[t]))\quad u(0)=1, u(1)=g\}.
\]
Observe that $G(A)^0$ is a normal subgroup. We write
\[
\pi_0G(A)=G(A)/G(A)^0.
\]
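For instance, if $A$ is unital and $n\in A$ is nilpotent, then $u(t)=1+tn$ is a unit of $A[t]$, with inverse the finite sum $\sum_{k\ge 0}(-tn)^k$; since $u(0)=1$ and $u(1)=1+n$, every unipotent element of $U(A)$ lies in $U(A)^0$.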
The \emph{Karoubi-Villamayor} $K_1$-group (\cite{kv}) is
\[
KV_1(A)=\pi_0(\Gl(A)).
\]
Observe that every elementary matrix $1+a\epsilon_{i,j}$ ($i\ne j$) is in $\Gl(A)^0$, via the path $t\mapsto 1+ta\epsilon_{i,j}$. It follows that we have a surjective homomorphism
\begin{equation}\label{map:k1kv1}
K_1(A)\twoheadrightarrow KV_1(A).
\end{equation}
By \cite{weih}*{Proposition 1.5}, the map \eqref{map:k1kv1} is an isomorphism whenever $A$ is $K_1$-regular.
\begin{lem}\label{lem:kv1pis}
Let $R$ be a unital ring.
\item[i)] If $p\in \operatorname{Idem}(R)$ and $u\in U(pRp)^0$, then $u+1-p\in U(R)^0$.
\item[ii)] Let $x_1,\dots,x_n,y_1,\dots,y_n\in R$ such that $y_ix_jy_i=\delta_{i,j}y_i$, $x_iy_jx_i=\delta_{i,j}x_i$. Set $p_i=x_iy_i$, $q_i=y_ix_i$, $P=\bigoplus_{i=1}^np_iR$, $Q=\bigoplus_{i=1}^nq_iR$. Then the map
\begin{gather*}
c_{y,x}:\operatorname{End}_R(P)=\bigoplus_{i,j}p_jRp_i\to \bigoplus_{i,j}q_jRq_i=\operatorname{End}_R(Q),\\
a\mapsto \operatorname{diag}(y_1,\dots,y_n)a\operatorname{diag}(x_1,\dots,x_n)
\end{gather*}
is an isomorphism which sends $U(\operatorname{End}_R(P))^0$ isomorphically onto $U(\operatorname{End}_R(Q))^0$.
\end{lem}
\begin{proof} Straightforward. \end{proof}
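To spell out part i) (a sketch): if $w(t)\in U((pRp)[t])$ satisfies $w(0)=p$ and $w(1)=u$, then $w(t)+1-p$ is a unit of $R[t]$, with inverse $w(t)^{-1}+1-p$, and it connects $1$ to $u+1-p$. Part ii) is a direct computation with the stated relations.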
\begin{prop}\label{prop:kv1pis}
Let $R$ be a unital purely infinite simple ring. Then the canonical map $\pi_0(U(R))\to \pi_0(\Gl(R))=KV_1(R)$ is an isomorphism.
\end{prop}
\begin{proof}
We know from Theorem \ref{thm:kpis} and \eqref{map:k1kv1} that $U(R)\to KV_1(R)$ is surjective. The kernel of this map is $U(R)\cap \Gl(R)^0$; it is clear that it contains $U(R)^0$. We have to show that
\begin{equation}\label{u0gl0}
U(R)\cap\Gl(R)^0\subset U(R)^0.
\end{equation}
We claim that the argument of the proof that $[\Gl(R):\Gl(R)]\cap U(R)\subset [U(R):U(R)]$ in \cite{agp}*{Theorem 2.3} can be adapted to prove \eqref{u0gl0}.
The proof in \emph{loc.cit.} has two parts. The first part shows that if $0\ne p\in \operatorname{Idem}(R)$ and $u\in [\Gl(R):\Gl(R)]\cap U(R)$ satisfies
\begin{equation}\label{upup}
u=p+(1-p)u(1-p)
\end{equation}
then $u\in [U(R):U(R)]$. Using the same argument and taking Lemma \ref{lem:kv1pis} into account, one shows that if an element $u$ of the form \eqref{upup} lies in $\Gl(R)^0$, then it must lie in $U(R)^0$. In the second part of the proof of \cite{agp}*{Theorem 2.3} it is observed that for adequately chosen idempotents $e$ and $f\in T=eRe$ and elements $x_1,y_1,\dots,x_n,y_n\in R$, the assignment $a\mapsto \operatorname{diag}(y_1,\dots,y_n)a\operatorname{diag}(x_1,\dots,x_n)$ induces an isomorphism between $R$ and the subring
\[
M_n(T)\supset S=\{(a_{i,j}):a_{i,n}\in Tf, a_{n,i}\in fT \text{ for all } 1\le i\le n\}.
\]
Let $\mathcal E\subset U(R)$ be the image under the isomorphism $U(S)\overset{\sim}{\lra} U(R)$ of the subgroup generated by the set of those elementary matrices $1+a\epsilon_{i,j}$ $i\ne j$ which are elements of $S$. The authors then proceed, using the argument of the proof of \cite{m&m}*{Theorem 2.2}, to show that any $u\in U(R)$ is congruent modulo $\mathcal E$ to one of the form of \eqref{upup}. In view of Lemma \ref{lem:kv1pis} and of the fact that elementary matrices above are in $U(S)^0$, this shows that any $u\in U(R)$ is congruent modulo $U(R)^0$ to one of the form \eqref{upup}. This finishes the proof.
\end{proof}
\begin{coro}\label{coro:kv1pis}
If $R$ is unital, purely infinite simple and $K_1$-regular then $K_1(R)=\pi_0(U(R))$.
\end{coro}
Let $A$ be an algebra. Identify $\operatorname{Hom}_{{\rm Alg}_\ell}(\ell,A)=\operatorname{Idem}_1(A)$ via the bijection $\phi\mapsto \phi(1)$. We say that two idempotents
$p,q\in\operatorname{Idem}_1(A)$ are \emph{homotopic}, and write $p\approx q$, if the corresponding homomorphisms $\ell\to A$ are homotopic.
\begin{lem}\label{lem:idemzero}
Let $A$ be an algebra and $p\in \operatorname{Idem}_1(A)$. Then $p\approx 0$ if and only if $p=0$. If $A$ is unital, then $p\approx 1$ if and only if
$p=1$.\end{lem}
\begin{proof} The if part of both assertions is clear. One checks that if $x\in\{0,1\}$ and $p(t)\in\operatorname{Idem}_1(A[t])$ satisfies $p(0)=x$,
then $p=x$. The only if part of both assertions follows from this.
\end{proof}
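To spell out the ``one checks'' step (a sketch): if $p(t)=\sum_{k\ge m}a_kt^k$ with $a_m\ne 0$ and $m\ge 1$, then every monomial of $p(t)^2$ has degree at least $2m>m$, so comparing coefficients of $t^m$ in $p(t)^2=p(t)$ gives $a_m=0$, a contradiction; hence $p(0)=0$ forces $p=0$. For the unital case apply this argument to $1-p(t)$.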
Recall (see \cite{dwkk}*{Section 2}) that a \emph{$C_2$-algebra} is a unital algebra $R$ together with a unital homomorphism from the Cohn algebra $C_2$ to $R$. Thus a $C_2$-algebra is a unital algebra together with elements $x_1,x_2,y_1,y_2\in R$ such that $y_ix_j=\delta_{i,j}$. For example, if $R$ is a purely infinite simple unital algebra then $R$ is a $C_2$-algebra (see \cite{agp}*{Proposition 1.5}). Put
\begin{equation}\label{map:boxplus}
\boxplus:R\oplus R\to R, \quad a\boxplus b=x_1ay_1+x_2by_2.
\end{equation}
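Observe that $\boxplus$ is indeed an algebra homomorphism: additivity is clear, and the Cohn relations $y_ix_j=\delta_{i,j}$ kill the cross terms in
\[
(a\boxplus b)(a'\boxplus b')=x_1a(y_1x_1)a'y_1+x_1a(y_1x_2)b'y_2+x_2b(y_2x_1)a'y_1+x_2b(y_2x_2)b'y_2=(aa')\boxplus(bb').
\]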
\iffalse
For example, if $R$ is unital, then any partition of $\N$ into two disjoint infinite subsets induces a unital homomorphism $L_2\to\Gamma'R$ which makes $\Gamma'R$, $\Gamma R$, $\Sigma R$ and $\Sigma'R$ into sum algebras.
\fi
\begin{lem}\label{lem:tensosum}
Let $R_1$ and $R_2$ be $C_2$-algebras and let $A_1\vartriangleleft R_1$ and $A_2\vartriangleleft R_2$ ideals. Let $\boxplus_i:A_i\oplus A_i\to A_i$ be the sum operation \eqref{map:boxplus}. Then the maps
\[
\boxplus_1\otimes\operatorname{id}_{A_2},\operatorname{id}_{A_1}\otimes\boxplus_2:A_1\otimes A_2\oplus A_1\otimes A_2\to A_1\otimes A_2
\]
are $M_2$-homotopic.
\end{lem}
\begin{proof} Straightforward from \cite{dwkk}*{Lemma 2.3}.
\end{proof}
Let $C$ be an algebra, $A,B\subset C$ subalgebras and $x,y\in C$ satisfying
$xAy\subset B$ and $ayxa'=aa'$ $(a,a'\in A)$; then the following map is an algebra homomorphism
\begin{equation}\label{map:adxy}
\operatorname{ad}(x,y):A\to B,\quad \operatorname{ad}(x,y)(a)=xay.
\end{equation}
If $C$ is unital and $y=x^{-1}$, then $\operatorname{ad}(x,y)=\operatorname{ad}(x)$ is the usual conjugation map.
\begin{lem}\label{lem:adhomo}
Let $A$ and $R$ be algebras, with $A$ finitely generated. Then:
\item[i)]
The canonical map
\[
[A,M_\infty R]\to [A,M_\infty R]_{M_2}
\]
is bijective.
\item[ii)] If furthermore $R$ is a $C_2$-algebra then the canonical map
\[
[A,R]_{M_2}\to [A,M_\infty R]_{M_2}
\]
is an isomorphism of monoids.
\end{lem}
\begin{proof}
\item[i)] Because $A$ is finitely generated,
\[
[A,M_\infty R]=\colim_n[A, M_{2^n}R]=\colim_n[A, M_{2^n}R]_{M_2}=[A,M_\infty R]_{M_2}.
\]
\item[ii)]
Because $R$ is a $C_2$-algebra, the map $[A,R]_{M_2}\to [A,M_\infty R]_{M_2}$ is a monoid homomorphism by Lemma \ref{lem:tensosum}. We have to prove that it is bijective. Observe that $M_2R$ is again a $C_2$-algebra. Hence in view of the proof of part i), it suffices to show that
$[A,R]_{M_2}\to [A,M_2R]_{M_2}$ is bijective. Let $x=\epsilon_{1,1}\otimes x_1+\epsilon_{1,2}\otimes x_2$ and $y=\epsilon_{1,1}\otimes y_1
+\epsilon_{2,1}\otimes y_2$. By \cite{dwkk}*{Lemma 2.3}, the following diagram is $M_2$-homotopy commutative
\[
\xymatrix{M_2R\ar@{=}[dr]\ar[r]^{\operatorname{ad}(x,y)}&\iota_2(R)\ar[d]^{\operatorname{inc}}\\
& M_2R.}
\]
It follows that the map of ii) is surjective. Injectivity follows similarly.
\end{proof}
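Concretely, for $x$ and $y$ as in the proof above, one computes $x(\epsilon_{i,j}\otimes a)y=\epsilon_{1,1}\otimes x_iay_j$, so that
\[
\operatorname{ad}(x,y)\bigl(\textstyle\sum_{i,j}\epsilon_{i,j}\otimes a_{i,j}\bigr)=\epsilon_{1,1}\otimes\bigl(x_1a_{1,1}y_1+x_1a_{1,2}y_2+x_2a_{2,1}y_1+x_2a_{2,2}y_2\bigr),
\]
and $\operatorname{ad}(x,y)$ compresses $M_2R$ into the corner $\iota_2(R)$.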
\begin{lem}\label{lem:adxy}
Let $\phi,\psi:A\to R$ be algebra homomorphisms with $R$ unital. Assume that there are $n\ge 1$ and $u\in\operatorname{GL}_n(R)$ such that $\operatorname{ad}(u)\iota_n\phi=\iota_n\psi$.
Then there are elements $x,y\in R$ such that $\operatorname{ad}(x,y)\phi=\psi$. If moreover $A$, $\phi$ and $\psi$ are unital, then we may choose $x$ invertible and $y=x^{-1}$.
\end{lem}
\begin{proof} Put $v=u^{-1}$. It follows from the identity $\operatorname{ad}(u)\iota_n\phi=\iota_n\psi$ that for every $a\in A$, $u_{1,1}\phi(a)v_{1,1}=\psi(a)$
and $u_{i,1}\phi(a)=\phi(a)u_{1,i}=0$ if $i\ne 1$. Hence $x=u_{1,1}$ and $y=v_{1,1}$ satisfy $\operatorname{ad}(x,y)\phi=\psi$ and if $\phi$
and $\psi$ are unital, then $xy=yx=1$.
\end{proof}
\begin{prop}\label{prop:homotopis}
Let $R$ be a unital, purely infinite simple, $K_0$-regular algebra and $n\ge 1$. Then the natural monoid maps
\[
[M_n,R]_{M_2}\setminus\{0\}\to [M_n,M_\infty R]\setminus\{0\}\to kk(M_n,R)\cong kk(\ell,R)\cong K_0(R)
\]
are bijective. Moreover, for nonzero algebra homomorphisms $M_n\to M_\infty R$ as well as for unital algebra homomorphisms $M_n\to R$, being homotopic is the same as being conjugate.
\end{prop}
\begin{proof} Because, as explained above, any purely infinite simple unital algebra is a $C_2$-algebra, the map $[M_n,R]_{M_2}\to [M_n,M_\infty R]$ is an isomorphism of monoids by Lemma \ref{lem:adhomo}. Since $(\iota_n)^*:kk(M_n,R)\to kk(\ell,R)=K_0R$ is an isomorphism, to prove that the map $[M_n,M_\infty R]\setminus\{0\}\overset{\sim}{\lra} kk(M_n,R)$ is surjective, it suffices, by Corollary \ref{coro:k0pis}, to show that the image of its composite with $\iota_n^*$ contains the class of every nonzero idempotent in $R$. Let $p\in \operatorname{Idem}_1 R\setminus\{0\}$; by Proposition \ref{prop:vle} we may choose $q\in \operatorname{Idem}_1R$ with $q\sim p$, and an embedding $\theta:M_n\to R$ sending $\epsilon_{1,1}\mapsto q$. Hence the map of the proposition is surjective. If two homomorphisms $\phi,\psi\in\operatorname{Hom}_{{{\rm Alg}_\ell}}(M_n,M_\infty R)$ induce the same $K_0$-element then they are conjugate by the argument of the proof of \cite{goodearl}*{Lemma 15.23(b)}, and therefore homotopic by Lemma \ref{lem:adhomo} and \cite{dwkk}*{Lemma 2.3}. From what we have just proved and Lemma \ref{lem:adxy}, it follows that if two unital homomorphisms $M_n\to R$ are homotopic then they are conjugate. This finishes the proof.
\end{proof}
\begin{rem}\label{rem:simpnopis} Let $E$ be a finite graph such that $L(E)$ is simple. If $L(E)$ is not purely infinite, then it follows from
\cite{libro}*{Lemma 2.9.5} and source elimination \cite{libro}*{Definition 6.3.26} that $L(E)\cong M_n$ for some $1\le n<\infty$. Hence, since $K_n$-regularity implies $K_{n-1}$-regularity \cite{vorst},
Proposition \ref{prop:homotopis} implies Theorem \ref{intro:kklift} in the case when $L(E)$ is simple but not purely infinite.
\end{rem}
\section{Lifting \texorpdfstring{$K$}{K}-theory maps to algebra maps: \texorpdfstring{$K_0$}{K0}}\label{sec:k0lift}
Recall that a vertex $v\in E^0$ is \emph{singular} if it is either a sink or an infinite emitter, and that it is \emph{regular} otherwise. We write $\operatorname{reg}(E)$, $\operatorname{sink}(E)$, $\operatorname{sour}(E)$ and $\operatorname{inf}(E)$ for the sets of regular vertices, sinks, sources, and infinite emitters, and put $\operatorname{sing}(E)=\operatorname{sink}(E)\cup\operatorname{inf}(E)$.
Let $R$ and $S$ be unital algebras and $\xi:K_0(R)\to K_0(S)$. We call $\xi$ \emph{unital} if $\xi([1_R])=[1_S]$.
\begin{thm}\label{thm:k0liftr}
Let $E$ be a graph, $R$ a purely infinite simple unital algebra, and $\xi:K_0(L(E))\to K_0(R)$ a group homomorphism. Set $\iota:R\to M_\infty(R)$, $\iota(a)=\epsilon_{1,1}\otimes a$.
\item[i)] If $E$ is countable, then there exists a nonzero algebra homomorphism $\psi:L(E)\to M_\infty R$ such that $K_0(\psi)=K_0(\iota)\xi$.
\item[ii)] If $E$ is finite, then there exists a nonzero algebra homomorphism $\psi:L(E)\to R$ such that $K_0(\psi)=\xi$.
\item[iii)] If $E^0$ is finite, $E^1$ countable and $\xi$ unital, then there is a unital homomorphism $\phi:L(E)\to R$ such that
$K_0(\phi)=\xi$.
\end{thm}
\begin{proof}
Assume first that $E$ is countable and row-finite. By Theorem \ref{thm:kpis} there are orthogonal idempotents $\{p_e:e\in E^1\}\cup\{p_v:v\in\operatorname{sing}(E)\}\subset \operatorname{Idem}_\infty(R)\setminus\{0\}$ such that $[p_v]=\xi[v]$ and $[p_e]=\xi[ee^*]$ in $K_0(R)$ $(v\in \operatorname{sink}(E)$, $e\in E^1$). If $e\in E^1$ and $r(e)\in\operatorname{reg}(E)$ then
\[
[p_e]=[\sum_{f\in E^1, s(f)=r(e)}p_f].
\]
Hence for $\sigma_e=\sum_{f\in E^1, s(f)=r(e)}p_f$ there are elements $x_e,y_e\in M_\infty(R)$ implementing an MvN equivalence $p_e\sim\sigma_e$. Similarly if $e\in E^1$ and $r(e)=v\in\operatorname{sink}(E)$, then there is an MvN equivalence $(x_e,y_e):p_e\sim p_v$ with $x_e, y_e\in M_\infty R$. One checks that the prescriptions
\[
\psi(e)=x_e,\psi(e^*)=y_e\quad (e\in E^1),\quad \psi(v)=p_v\quad (v\in\operatorname{sink}(E))
\]
define a nonzero algebra homomorphism $\psi:L(E)\to M_\infty R$. Let $\tau:M_\infty\otimes M_\infty\to M_\infty\otimes M_\infty$, $\tau(x\otimes y)=y\otimes x$; one checks that $\tau\otimes\operatorname{id}_R$ induces the identity of $K_0(M_\infty R)$. By construction $K_0(\psi)$ agrees with $K_0(\tau\otimes 1)K_0(\iota)\xi=K_0(\iota)\xi$ on the classes of those vertices which are sinks and on those of elements of the form $ee^*$ $(e\in E^1)$. Since the latter generate $K_0(L(E))$ (by \cite{libro}*{Theorem 3.2.5}), we have $K_0(\psi)=K_0(\iota)\xi$.
For general countable $E$, let $E_\delta$ be a desingularization and $f:L(E)\to L(E_\delta)$ the canonical homomorphism \cite{aap}*{Section ~5}; then $K_0(f)$ is an isomorphism. Hence by what we have just proved, there exists an algebra homomorphism $\psi':L(E_\delta)\to M_\infty(R)$ such that $K_0(\psi')=K_0(\iota)\xi K_0(f)^{-1}$. Then
$\psi=\psi'f$ satisfies $K_0(\psi)=K_0(\iota)\xi$. This proves i). Next assume that $E^1$ is countable, that $E^0$ is finite and that $\xi([1_{L(E)}])=[1_R]$. Let $\psi:L(E)\to M_\infty(R)$ be a homomorphism such that $K_0(\iota)\xi=K_0(\psi)$. Set $p=\psi(1)$; then $\psi(L(E))\subset pM_\infty (R)p$ and there is an MvN equivalence $(x,y):p\sim \epsilon_{1,1}$. It follows that there is a unique unital homomorphism $\phi:L(E)\to R$ such that $\iota\phi=\operatorname{ad}(y,x)\psi$. By \cite{dwkk}*{Lemma 2.3}, $\phi$ satisfies the requirements of iii). Finally assume that $E$ is finite. By Corollary \ref{coro:k0pis} and Proposition \ref{prop:vle} there are orthogonal idempotents $\{p_e:e\in E^1\}\cup\{p_v:v\in\operatorname{sink}(E)\}\subset \operatorname{Idem}_1(R)\setminus\{0\}$ such that $[p_v]=\xi[v]$ and $[p_e]=\xi[ee^*]$ $(v\in \operatorname{sink}(E)$, $e\in E^1)$. If $e\in E^1$ and $r(e)\notin\operatorname{sink}(E)$ then by Corollary \ref{coro:k0pis}, for $\sigma_e$ as above there are elements $x_e\in p_e R \sigma_e$ and $y_e\in\sigma_e R p_e$ such that $p_e=x_ey_e$ and $\sigma_e=y_ex_e$. Similarly, if $e\in E^1$ and $r(e)=v\in\operatorname{sink}(E)$, then there are $x_e\in p_e R p_v$ and $y_e\in p_v R p_e$ such that $y_ex_e=p_v$ and $x_ey_e=p_e$. One checks that the prescriptions
\[
\psi(e)=x_e,\psi(e^*)=y_e\quad (e\in E^1),\quad \psi(v)=p_v\quad (v\in\operatorname{sink}(E))
\]
define a nonzero algebra homomorphism $\psi:L(E)\to R$ such that $K_0(\psi)=\xi$.
\end{proof}
\begin{coro}\label{coro:tododentro}
Let $R$ be a unital purely infinite algebra and $E$ a graph such that $L(E)$ is simple.
\item[i)] If $E$ is countable, then $L(E)$ embeds as a subalgebra of $M_\infty R$.
\item[ii)] If $E^1$ is countable, $E^0$ is finite and $[1_R]=0$ in $K_0(R)$, then $L(E)$ embeds as a unital subalgebra
of $R$.
\item[iii)] If $E$ is finite then $L(E)$ embeds as a subalgebra of $R$.
\end{coro}
\begin{proof} Apply Theorem \ref{thm:k0liftr} to the trivial homomorphism $\xi=0$; the resulting homomorphisms are injective because they are nonzero and $L(E)$ is simple.
\end{proof}
\begin{rem}\label{rem:brosore}
It follows from Corollary \ref{coro:tododentro} that any purely infinite algebra $R$ such that $[1_R]=0$ contains $L_2$ as a unital subalgebra. Hence by \cite{brosore}*{Theorem 4.1}, if $E$ is countable (resp. finite), then $L(E)$ embeds as a subalgebra (resp. a unital subalgebra) of $R$, independently of whether $L(E)$ is simple or not.
\end{rem}
\begin{coro}\label{coro:sospe1} Let $E$ be a countable graph with finite $E^0$. Assume that $K_0(L(E))$ is finite and let $d_1,\dots,d_n$, with $d_i\mid d_{i+1}$, be its invariant factors. Let $j:{{\rm Alg}_\ell}\to kk$ be the canonical functor (\cite{kkwt}). Then there is an algebra homomorphism $\psi:L(E)\to M_\infty(\bigoplus_{i=1}^nL_{d_i+1})$ such that $j(\psi)$ is an isomorphism in $kk$. If moreover $L(E)$ is purely infinite simple then there is an algebra homomorphism $\phi:\bigoplus_{i=1}^nL_{d_i+1}\to M_\infty L(E)$
such that $\iota^{-1}j(\phi)$ and $\iota^{-1}j(\psi)$ are inverse isomorphisms in $kk$. If $E$ is finite then the same holds with $L(E)$ substituted for $M_\infty(L(E))$.
\end{coro}
\begin{proof}
Assume that $E$ is countable with finite $E^0$. By part i) of Theorem \ref{thm:k0liftr}, for each $1\le i\le n$, there is a homomorphism $\psi_{i}:L(E)\to M_\infty L_{d_i+1}$ such that $K_0(\psi_i)$ is the projection from $K_0(L(E))=\bigoplus_{j=1}^n\Z/d_j$ onto the copy of $\Z/d_i$.
The map
\[
\psi=(\psi_1,\dots,\psi_{n}):L(E)\to M_\infty(\bigoplus_{i=1}^nL_{d_i+1})
\]
then induces an isomorphism in $K_0$. In view of \cite{dwkk}*{Lemma 7.2} and of the fact that, since $K_0(L(E))$ is finite, $\ker(I-A_E^t)=0$, this implies that $K_1(\psi)$ is an isomorphism too. Hence $j(\psi)$ is an isomorphism by \cite{dwkk}*{Proposition 5.10}. Assume furthermore that $L(E)$ is purely infinite simple. Consider the graph
\[
F=\coprod_{i=1}^n\mathcal R_{d_i+1}.
\]
Then $L(F)=\bigoplus_{j=1}^nL_{d_j+1}$. The homomorphism $\phi$ of the corollary is obtained by applying Theorem \ref{thm:k0liftr} to $\xi=K_0(\psi)^{-1}\iota:K_0(L(F))\to K_0(L(E))$. This proves the first assertion of the corollary; the second, for finite $E$, is proved similarly, using part iii) of Theorem \ref{thm:k0liftr}.
\end{proof}
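To fix ideas: if, say, $K_0(L(E))\cong\Z/2\oplus\Z/4$, then $n=2$, $d_1=2$ and $d_2=4$, and Corollary \ref{coro:sospe1} yields a homomorphism $\psi:L(E)\to M_\infty(L_3\oplus L_5)$ inducing an isomorphism in $kk$.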
Let $E$ be a finite graph; if $X\subset L(E)$, write $\operatorname{span}(X)$ for the subspace generated by $X$. In the following proposition and elsewhere we consider the following ``diagonal" subalgebra of $L(E)$
\[
DL(E)=\operatorname{span}(\operatorname{sink}(E)\cup\{ee^*: e\in E^1\})\subset L(E).
\]
Proposition \ref{prop:k0agreeepsi} below will be needed in the next section.
\begin{prop}\label{prop:k0agreeepsi}
Let $E$ and $R$ be as in part iii) of Theorem \ref{thm:k0liftr}. Assume that $L(E)$ is simple and let $\phi, \psi : L(E) \to R$ be nonzero algebra homomorphisms such that $K_0(\phi) = K_0 (\psi)$. Then there exists an algebra homomorphism $\psi':L(E) \to R$ such that $j(\psi) = j(\psi')$ in $kk$ and $\psi'_{|DL(E)}=\phi_{|DL(E)}$.
\end{prop}
\begin{proof}
First assume that $ \phi(1) = \psi(1)=p$. For each $e\in E^1$ and each $v\in\operatorname{sink}(E)$ choose MvN equivalences $(x_e,y_e):\phi(ee^\ast)\sim \psi(e e^\ast)$ and $(x_v,y_v):\phi(v)\sim \psi(v)$. Define $x = \sum_{e\in E^1} x_e+\sum_{v\in\operatorname{sink}(E)}x_v$ and $y = \sum_{e\in E^1} y_e+\sum_{v\in\operatorname{sink}(E)}y_v$. Then $x,y\in p R p$ and $x y = p= y x$. Hence $\psi':L(E)\to R$, $\psi'(a) = x \psi(a) y$ satisfies $ \psi'_{|DL(E)} = \phi_{|DL(E)}$. Moreover $j(\psi)=j(\psi')$ by \cite{dwkk}*{Lemma 2.3}.
Next assume that $\phi(1) \neq \psi(1)$ and that none of them is equal to $1$. Then by Corollary \ref{coro:equiv}, there is an element $u \in U(R)$ such that $u \phi (1) u^{-1} = \psi(1)$. Hence we can replace $\psi$ by $a\mapsto u \psi(a) u^{-1}$ and we are in the above case.
Finally, if $\phi(1) \neq \psi(1)$ and one of them, say $\psi(1)$, is $1$, we can replace $\phi $ by a unital homomorphism by Theorem \ref{thm:k0liftr} and we are again in the first case.
\end{proof}
\section{Lifting \texorpdfstring{$K$}{K}-theory maps to algebra maps: \texorpdfstring{$K_0$}{K0} and \texorpdfstring{$K_1$}{K1}}\label{sec:k0k1lift}
Let $E$ be a finite graph; below we will give a right inverse of the surjective map
\begin{equation}\label{map:elonto}
\partial: K_1(L(E))\twoheadrightarrow \ker(I-A_E^t).
\end{equation}
Observe that the analogue of the map \eqref{map:elonto} in the $C^*$-algebra setting is an isomorphism; an explicit formula for its inverse was given by R\o rdam
in \cite{ror}*{page 33} in the case when $E$ is regular. We shall show that in the purely algebraic case considered here, the same formula gives a right inverse of \eqref{map:elonto}, even for singular $E$.
Let $I-B^t_E$ be as in \cite{dwkk}*{Remark 5.7}. Let
\[
s^*:\Z^{E^0}\to \Z^{(E^1)\coprod\operatorname{sink}(E)}, s^*(\chi_{v})=\left\{\begin{matrix}\sum_{s(e)=v}\chi_e& v\in\operatorname{reg}(E)\\ \chi_v& v\in\operatorname{sink}(E)\end{matrix}\right.
\]
By \cite{abc}*{formula 4.1}, we have a commutative diagram
\[
\xymatrix{\Z^{E^1}\ar[r]^{I-B^t_E}&\Z^{E^1\coprod\operatorname{sink}(E)}\\
\Z^{\operatorname{reg}(E)}\ar[u]^{s^*}\ar[r]_{I-A_E^t}&\Z^{E^0}\ar[u]^{s^*}
}
\]
In particular, $s^*$ maps $\ker(I-A^t_E)\to \ker(I-B^t_E)$. Furthermore it is an isomorphism by the dual of \cite{abc}*{Lemma 4.3}. Let $ x = (x_v) \in \ker(I-A_E^t) \subseteq \Z^{\operatorname{reg}(E)}$. Set $y=s^*(x)\in \ker(I-B_E^t)$. Let
\begin{equation}\label{elS}
S=\{(e,j): y_e\ne 0, 1\le j\le |y_e|\}
\end{equation}
Consider the diagonal matrix $V=V(x)\in M_S(L(E))$,
\[
V_{(e,j),(e,j)}=\left\{\begin{matrix} e& \text{ if } y_e>0\\ e^* & \text{ if } y_e<0\end{matrix}\right.
\]
Let $p=1-VV^*$, $q=1-V^*V$. Observe that $p,q\in M_S(DL(E))$. Moreover, for $\Lambda=E^1\coprod\operatorname{sink}(E)$, $DL(E)\cong \ell^{\Lambda}$ and we may regard
$p=(p_{\alpha})$ and $q=(q_{\alpha})$ as $\Lambda$-tuples of diagonal matrices in $M_S$ whose entries are in $\{0,1\}$. One checks, using that $y\in\ker(I-B_E^t)$, that for each $\alpha\in\Lambda$, $p_{\alpha}$ and $q_{\alpha}$ have the same number of nonzero coefficients. Hence we may choose for each $\alpha$ a matrix
$W_{\alpha}\in p_{\alpha}M_Sq_{\alpha}$ with coefficients in $\{0,1\}$ such that $W_{\alpha} W_{\alpha}^t=p_{\alpha}$ and $W_{\alpha}^tW_{\alpha}=q_{\alpha}$. Further, we may even require that
\begin{equation}\label{condiW}
(W_{\alpha})_{(e,i),(f,j)}=1\Rightarrow (p_{\alpha})_{(e,i),(e,i)}=(q_{\alpha})_{(f,j),(f,j)}=1.
\end{equation}
We shall use \eqref{condiW} in the proof of Lemma \ref{lem:aux} below.
Let $W=W(x)\in M_S(DL(E))$ be the matrix corresponding to $(W_\alpha)$; then
\begin{equation}\label{vw}
WW^*=1-VV^*,\quad W^*W=1-V^*V \text{ and }W^*V=V^*W=0.
\end{equation}
Put
\begin{equation}\label{elU}
U(x)=V(x)+W(x).
\end{equation}
It follows from \eqref{vw} that $U(x)U(x)^*=U(x)^*U(x)=1$.
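For illustration, consider for instance the graph $E$ with two vertices $v,w$, two loops $e_1,e_2$ at $v$, two loops $f_1,f_2$ at $w$, and edges $e_3:v\to w$, $f_3:w\to v$ (this graph is chosen here only as a sample computation). Then
\[
I-A_E^t=\begin{pmatrix}-1&-1\\-1&-1\end{pmatrix},
\]
so $x=\chi_v-\chi_w\in\ker(I-A_E^t)$; the element $y=s^*(x)$ has $y_{e_i}=1$ and $y_{f_i}=-1$ for $1\le i\le 3$, whence $S=\{(e_i,1),(f_i,1):1\le i\le 3\}$ and
\[
V(x)=\operatorname{diag}(e_1,e_2,e_3,f_1^*,f_2^*,f_3^*)\in M_S(L(E)).
\]
One checks that every diagonal entry of $p=1-VV^*$ and of $q=1-V^*V$ lies in $DL(E)$ and that, for each $\alpha\in\Lambda=E^1$, the matrices $p_\alpha$ and $q_\alpha$ have exactly four nonzero entries each; thus a matching $W(x)$ satisfying \eqref{condiW} exists, and $U(x)=V(x)+W(x)$ satisfies $U(x)U(x)^*=U(x)^*U(x)=1$.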
\begin{prop}\label{prop:section}
Let $x\in \ker(I-A_E^t)$, $[U(x)]\in K_1(L(E))$ the class of the element \eqref{elU} and $\partial : K_1 (L(E)) \to \ker(I-A_E^t)$ as in \eqref{map:elonto}.
Then $\partial([U(x)])=-x$.
\end{prop}
\begin{proof}
We keep the notation of the paragraph preceding the proposition. Let $C(E)$ be the Cohn path algebra; consider the subalgebra
\[
DC(E)=\operatorname{span}(\{q_v:v\in\operatorname{reg}(E)\}\cup\operatorname{sink}(E)\cup\{ee^*:e\in E^1\})\subset C(E).
\]
Consider the diagonal matrix $\hat{V}$ defined by the same prescription as $V$ but regarded now as an element of $M_S(C(E))$. Let $\hat{W}\in M_S(DC(E))$ be the image of $W$ under the map induced by the obvious inclusion $DL(E)\subset DC(E)$; put $\hat{U}=\hat{V}+\hat{W}$.
Consider the matrix
$$ h = \begin{bmatrix}
2 \hat{U} - \hat{U}\hat{U}^\ast\hat{U} & \hat{U}\hat{U}^\ast-1 \\
1-\hat{U}^\ast\hat{U} & \hat{U}^\ast
\end{bmatrix}\in M_{S\coprod S}(C(E)). $$
By \cite{www}*{Section 2.4} (see also \cite{robook}*{Definition 9.1.3}), $h$ is invertible and
$$\partial ( [U]) = [ h 1_{S} h^{-1}]-[1_{S}]. $$
Here $1_{S}$ is the $S\times S$ identity matrix, located in the upper left corner.
One checks that $\hat{U}=\hat{U}\hat{U}^\ast\hat{U}$, and that
\begin{equation}\label{deltU}
\partial ([U]) = [1- \hat{U}^\ast\hat{U}]-[1 - \hat{U}\hat{U}^\ast]\in K_0(\bigoplus_{v\in \operatorname{reg}(E)}\ell q_v)\cong\Z^{\operatorname{reg}(E)}.
\end{equation}
One checks, using \eqref{deltU} and the fact that $x\in \ker (I-A_E^t)$, that $$\partial([U])=-[\sum_{v \in \operatorname{reg}(E)} x_v q_v].$$
This finishes the proof.
\end{proof}
In principle, the assignment $\ker(I-A_E^t)\to K_1(L(E))$, $[x]\mapsto [U(x)]$ is just a set theoretic map. A group homomorphism with similar properties is obtained as follows. Choose a basis $\mathfrak{B}=\{x_i\}$ of the free abelian group $\ker(I-A_E^t)$; let
\begin{equation}\label{map:gamma}
\gamma=\gamma_{\mathfrak{B}}:\ker(I-A_E^t)\to K_1(L(E)), \quad \gamma(\sum_i n_ix_i)=\sum_in_i[U(x_i)].
\end{equation}
Let $E$ be a finite graph such that $L(E)$ is purely infinite simple. Then $\operatorname{sink}(E)=\emptyset$, by \cite{libro}*{Lemma 3.1.10 and Theorem 3.1.10}. Let $\phi : L(E) \to R$ be a unital algebra homomorphism with $R$ purely infinite simple. Set
\begin{equation}\label{rphi}
R_\phi = \{ x \in R \ : \phi (e e^\ast)x=x\phi(ee^\ast),\quad \ \text{ for all } e \in E^1\}.
\end{equation}
Note that
\[
R_\phi = \oplus_{e \in E^1} \phi (e e^\ast) R \phi (e e^\ast).
\]
Because $L(E)$ is simple, $\phi(\alpha)\ne 0$ $(\alpha\in E^1)$, whence each of the inclusions $\phi(\alpha\alpha^*)R\phi(\alpha\alpha^*)\subset R$ induces an isomorphism in $K_1$. Hence the map $R_\phi\to R^{E^1}$ given by the direct sum of those inclusions induces an isomorphism
\begin{equation}\label{map:adjoint}
K_1(R_\phi) = \bigoplus_{e \in E^1} K_1( \phi (e e^\ast) R \phi (e e^\ast))\overset{\sim}{\lra} K_1(R)^{E^1}.
\end{equation}
Let $\iota:K_1(R_\phi)\to K_1(R)$ be the map induced by the inclusion $R_\phi\subset R$. Consider the bilinear map
\begin{equation}\label{pairing}
\langle \cdot , \cdot \rangle: \Z^{E^1}\times K_1(R_\phi)\to K_1(R), \quad \langle x,y\rangle=\sum_{e\in E^1}
x_e\iota(y_e).
\end{equation}
Observe that $\langle \cdot , \cdot \rangle$ is a perfect pairing; indeed the adjoint homomorphism $K_1(R_\phi)\to K_1(R)^{E^1}$ is the isomorphism
\eqref{map:adjoint}.
\begin{lema}\label{lem:aux} (cf.\ \cite{ror}*{Lemma 3.5})
Let $E$ be a finite graph such that $L(E)$ is purely infinite simple, $R$ a purely infinite simple unital algebra, and $\phi$ and $\psi : L(E) \to R$ unital homomorphisms. Assume that $\phi$ and $\psi$ agree on $DL(E)$. Let
\[
u=\sum_{\alpha \in E^1} \psi(\alpha) \phi(\alpha^\ast)\in R_\phi= R_\psi.
\]
Then
\begin{equation}\label{efsdfas}
K_1(\psi) (\gamma(x ))= \langle x , [u] \rangle + K_1(\phi) (\gamma(x )) \text{ for all } x \in \ker(I-A_E^t).
\end{equation}
\end{lema}
\begin{proof}
Observe that $\psi(e)\phi(e^*)\in U(\phi(e)R\phi(e^*))$ $(e\in E^1)$, whence $u\in U(R_\phi)$.
Let $\{\chi_e:e\in E^1\}$ be the canonical basis of $\Z^{E^1}$. One checks that
\begin{equation}\label{paircano}
\langle \chi_e , [u] \rangle = [\psi(e) \phi(e^\ast) +1 -\phi(ee^\ast)].
\end{equation}
To prove the lemma, we may assume that $x$ is an element of the basis $\mathfrak{B}$ of $\ker(I-A_E^t)$ used in \eqref{map:gamma} to define $\gamma$. Then taking
\eqref{paircano} into account and adopting the notations and conventions used in the definition of $U(x)$, one computes that the right hand side of equation \eqref{efsdfas} is
\begin{equation}\label{rhs:aux}
\sum_{y_e > 0} y_e [\psi(e) \phi(e^\ast) +1 -\phi(e e^\ast)] + [\phi ( U(x))] - \sum_{y_e < 0} y_e [\phi(e) \psi(e^\ast) +1 -\phi(e e^\ast)] .
\end{equation}
Let $S$ be as in \eqref{elS}. Consider the diagonal matrices $P,Q\in M_S(L(E))$ with diagonal entries as follows:
\begin{gather*}
P_{(e,j),(e,j)}=\left\{\begin{matrix}\psi(e)\phi(e^*)+1-\phi(ee^*)& \text{ if }y_e>0\\ 1 & \text{ if } y_e<0\end{matrix}\right.\\
Q_{(e,j),(e,j)}=\left\{\begin{matrix}1 & \text{ if }y_e>0\\ \phi(e)\psi(e^*)+1-\phi(ee^*)& \text{ if } y_e<0\end{matrix}\right.
\end{gather*}
Observe that \eqref{rhs:aux} is $[P\phi(U(x))Q]$. Hence it suffices to show that $K_1(\psi)(U(x))=[P\phi(U(x))Q]$; we shall show that in fact $\psi(U(x))=P\phi(U(x))Q$. Recall that $U(x)=V(x)+W(x)$. It is immediate from the definition of $V(x)$ that $\psi(V(x))=P\phi(V(x))Q$. Hence since $W$ has coefficients in $DL(E)$, it only remains to show that $\phi(W(x))=P\phi(W(x))Q$. A tedious but straightforward calculation, using \eqref{condiW} shows that
\[
\phi(W(x)_{\alpha})_{(e,i),(f,j)}=(P\phi(W(x)_\alpha)Q)_{(e,i),(f,j)}\qquad\forall (e,i),(f,j)\in S,\quad \alpha\in\Lambda.
\]
This completes the proof.
\end{proof}
\begin{rem}\label{rem:xi0detxi1}
Recall that if $L(E)$ is unital, we have an exact sequence
\[
0\to K_0(L(E))\otimes K_1(\ell)\to K_1(L(E))\to \ker(I-A_E^t)\to 0.
\]
It follows from \cite{dwkk}*{Lemma 7.2} that if $R\in{{\rm Alg}_\ell}$ is $K_1$-regular and $\xi\in kk(L(E),R)$, then the restriction of $K_1(\xi)$ to $K_0(L(E))\otimes K_1(\ell)$ is the composite of $K_0(\xi)\otimes \operatorname{id}$ with the cup product $K_0(R)\otimes K_1(\ell)\to K_1(R)$.
\end{rem}
\begin{teo}\label{thm:ror}
Let $E$ be a finite graph and $S$ an algebra. Assume that $L(E)$ is simple and that $S$ is unital, purely infinite simple and $K_1$-regular. Let $\xi_0 : K_0 (L(E)) \to K_0(S)$ and $\xi_1 : \ker(I-A_E^t) \to K_1(S)$ be group homomorphisms.
Then there exists a nonzero algebra homomorphism $\phi : L(E) \to S$ such that $K_0(\phi) = \xi_0$ and such that $K_1(\phi)\gamma= \xi_1$. If moreover $\xi_0$ is unital then we can choose $\phi$ to be a unital homomorphism $L(E)\to S$.
\end{teo}
\begin{proof}
By Theorem \ref{thm:k0liftr}, there exists a nonzero algebra homomorphism \goodbreak
$ \phi_0 : L(E) \to S$ such that $K_0(\phi_0) = \xi_0$, and if $\xi_0$ is unital then we may choose $\phi_0$ unital. If $L(E)$ is not purely infinite, then by Remark \ref{rem:simpnopis}, $L(E)\cong M_n$ for some $1\le n<\infty$. Hence
$\ker(I-A_E^t)=0$ and $K_1(L(E))=K_0(L(E))\otimes U(\ell)$; in this case the condition $K_1(\phi)\gamma=\xi_1$ holds trivially (both sides vanish), so we may take $\phi=\phi_0$. Assume now that $L(E)$ is purely infinite simple. Let $R = \phi_0(1) S \phi_0(1)$ and let $\bar{\phi}_0 : L(E) \to R$ be the corestriction of $\phi_0$ and $\operatorname{inc} : R \to S$ the inclusion. Since $\ker(I-A_E^t)$ is a direct summand of $\Z^{\operatorname{reg}(E)}$ and $\langle\cdot,\cdot\rangle$ is a perfect pairing, there exists $\theta\in K_1(R_{\bar{\phi}_0})$ such that
$$ \langle - , \theta \rangle = K_1(\operatorname{inc})^{-1} \xi_1 - K_1(\bar{\phi}_0) \gamma. $$
Because $R_{\bar{\phi}_0}$ is a direct sum of purely infinite simple algebras, by Theorem \ref{thm:kpis} there exists $g \in U(R_{\bar{\phi}_0})$ such that $[g]= \theta$. Define $\bar{\phi} : L(E) \to R$ by setting $\bar{\phi}_{|E^0}=(\bar{\phi}_0)_{|E^0}$, $\bar{\phi}(e)=g\bar{\phi}_0(e)$,
$\bar{\phi}(e^\ast)=\bar{\phi}_0(e^\ast) g^{-1}$. Observe that $\bar{\phi}$ and $\bar{\phi}_0$ agree on $DL(E)$; in particular, $\bar{\phi}$ is unital. Hence by Lemma \ref{lem:aux}, we have
$$ K_1(\bar{\phi}) \gamma = K_1(\bar{\phi}_0 ) \gamma + \langle - , [u]\rangle. $$
But it follows from the formula defining $u$ in Lemma \ref{lem:aux} and the definition of $\bar{\phi}$ that $u=g$. Hence
$$ K_1(\bar{\phi}) \gamma = K_1(\operatorname{inc})^{-1} \xi_1. $$
Set $\phi=\operatorname{inc}\bar{\phi}$; then $ K_1(\phi)\gamma= \xi_1$. Further, $K_0(\phi) = K_0(\phi_0) = \xi_0$ because $\phi$ and $\phi_0$ agree on $E^0$.
It is clear by construction that if $\phi_0$ is a unital homomorphism, then $\phi$ is also unital.
\end{proof}
\section{Lifting \texorpdfstring{$kk$}{kk}-maps to algebra maps}\label{sec:kklift}
Let $\phi,\psi:A\to B$ be algebra homomorphisms. Put
\[
C_{\phi,\psi}=\{(a,f)\in A\oplus B[t]:f(0)=\phi(a), f(1)=\psi(a)\}.
\]
Let $\pi:C_{\phi,\psi}\to A$, $\pi(a,f)=a$; we have an algebra extension
\begin{equation}\label{seq:cylinder}
\Omega B\to C_{\phi,\psi}\overset{\pi}{\longrightarrow} A.
\end{equation}
\begin{lem}\label{lem:triaresta}
Let $j:{{\rm Alg}_\ell}\to kk$ be the canonical functor. The sequence \eqref{seq:cylinder} induces the following distinguished triangle in $kk$
\[
\xymatrix{j(\Omega B)\ar[r]& j(C_{\phi,\psi})\ar[r]^{j(\pi)}& j(A)\ar[r]^{j(\phi)-j(\psi)}&j(B).}
\]
\end{lem}
\begin{proof}
By definition of $C_{\phi,\psi}$, we have a map of extensions
\begin{equation}\label{map:cylseq}
\xymatrix{\Omega B\ar@{=}[d]\ar[r]&C_{\phi,\psi}\ar[r]^\pi\ar[d]&A\ar[d]^{(\phi,\psi)}\\
\Omega B\ar[r]& B[t]\ar[r]^{(\operatorname{ev}_0,\operatorname{ev}_1)}& B\oplus B}
\end{equation}
Let $\Delta:B\to B\oplus B$, $\Delta(b)=(b,b)$. One checks that the $kk$-triangle associated to the bottom row of \eqref{map:cylseq} is isomorphic to
\[
\xymatrix{j(\Omega B)\ar[r]^0&j(B)\ar[r]^{j(\Delta)}&j(B\oplus B)\ar[r]^(.6){[\operatorname{id},-\operatorname{id}]}&j(B).}
\]
Let $\xi:j(A)\to j(B)$ be the boundary map in the triangle induced by \eqref{seq:cylinder}. It follows from \eqref{map:cylseq} that there is a commutative diagram
\[
\xymatrix{j(A)\ar[r]^\xi\ar[d]^{(j(\phi),j(\psi))}&j(B)\ar@{=}[d]\\
j(B)\oplus j(B)\ar[r]_{[\operatorname{id},-\operatorname{id}]}& j(B).}
\]
Hence $\xi=j(\phi)-j(\psi)$.
\end{proof}
Let $R$ be a unital, purely infinite simple algebra, let $E$ be a finite graph such that $L(E)$ is simple and let $\phi,\psi:L(E)\to R$ be nonzero algebra homomorphisms which agree on $DL(E)$. Let $R_\phi$ be as
in \eqref{rphi}. Put $p=\phi(1)$ and let
\[
B=pRp.
\]
By corestriction, we may consider $\phi$ and $\psi$ as homomorphisms $L(E)\to B$. Let
\[
C=\{f\in B[t]\mid (\exists a\in L(E))\ \ \phi(a)=f(0),\ \ \psi(a)=f(1)\}.
\]
Since $L(E)$ is simple and $\phi\neq 0$, $\phi$ is injective, and so the map
\[
C_{\phi,\psi}\to C,\quad (a,f)\mapsto f
\]
is an isomorphism. We shall identify $C=C_{\phi,\psi}$. Assume that $R$ is $K_1$-regular. Then $B$ is $K_1$-regular also, whence $K_0(\Omega B)=KV_1(B)=K_1(B)$. Hence the extension \eqref{seq:cylinder} induces an exact sequence
\begin{equation}\label{seq:kcyl}
\xymatrix{K_1(B)\ar[r]^{\partial'}&K_0(C)\ar[r]^\pi& K_0(L(E))\ar[r]^{\phi-\psi}&K_0(B)}
\end{equation}
The following two lemmas adapt \cite{ror}*{Lemmas 3.2 and 3.3} to the purely algebraic case.
\begin{lem}\label{lem:3.2}
Let $u$ be as in Lemma \ref{lem:aux}, $\partial'$ as in \eqref{seq:kcyl} and $\langle \cdot,\cdot\rangle$ as in \eqref{pairing}. Let $\sigma\in K_0(C)^{E^1}$, $\sigma_e=[\phi(ee^*)]$. Then for every $x\in \Z^{E^1}$ we have
\[
\langle x,[u]\rangle=-\langle(I-A^t_E)x,\sigma\rangle
\]
\end{lem}
\begin{proof} Let $u_e=u\phi(ee^*)+1-\phi(ee^*)$ ($e\in E^1$). By Whitehead's lemma there is $U_e(t)\in \Gl(B[t])$ such that $U_e(0)=1$ and
$U_e(1)=\operatorname{diag}(u_e,u_e^{-1})$. Set $V_e(t)=U_e(t)\operatorname{diag}(\phi(e),0)$, $W_e(t)=\operatorname{diag}(\phi(e^*),0)U_e(t)^{-1}$. Now proceed as in the proof of
\cite{ror}*{Lemma 3.2}, substituting $U_e(t)^{-1}$ and $W_e(t)$ for $U_e(t)^*$ and $V_e(t)^*$.
\end{proof}
\begin{lem}\label{lem:3.3}
Let $\lambda:R_\phi\to R_\phi$, $\lambda(a)=\sum_{e\in E^1}\phi(e)a\phi(e^*)$. If $j(\phi)=j(\psi)\in kk(L(E),B)$ then there is $\nu\in U(R_\phi)$
such that $[u]=[\nu^{-1}\lambda(\nu)]\in K_1(R_\phi)$.
\end{lem}
\begin{proof} The proof is the same as that of \cite{ror}*{Lemma 3.3}.
\end{proof}
Let $S$ be an algebra, $E$ a finite graph, and $\phi,\psi:L(E)\to S$ algebra homomorphisms. We say that $\phi$ and $\psi$ are \emph{$1$-step $\operatorname{ad}$-homotopic} if either
\begin{itemize}
\item[a)] there is an MvN equivalence $(u,u'):\psi(1)\sim\phi(1)$ such that $\operatorname{ad}(u,u')\phi=\psi$,
\goodbreak
or
\goodbreak
\item[b)] $\phi$ and $\psi$ agree on $DL(E)$ and for $B=\phi(1)S\phi(1)$ there is $U(t)\in \Gl(B_{\phi}[t])$ such that $U(0)=1$, $\psi(e)=U(1)\phi(e)$ and $\psi(e^*)=\phi(e^*)U(1)^{-1}$.
\end{itemize}
We say that $\phi$ and $\psi$ are \emph{$n$-step $\operatorname{ad}$-homotopic} if there is a sequence of algebra homomorphisms $\phi_i:L(E)\to S$, $1\le i\le n$, such that $\phi_1=\phi$, $\phi_n=\psi$, and $\phi_{i}$ and $\phi_{i+1}$ are $1$-step $\operatorname{ad}$-homotopic for $1\le i\le n-1$. Two unital homomorphisms $\phi$ and $\psi$ are \emph{$n$-step unitally $\operatorname{ad}$-homotopic} if they are $n$-step $\operatorname{ad}$-homotopic and the $\phi_i$ can be chosen to be unital for all $1\le i\le n$. Call $\phi$ and $\psi$ (unitally) \emph{$\operatorname{ad}$-homotopic} if they are $n$-step (unitally) $\operatorname{ad}$-homotopic for some $n$.
\begin{rem}\label{rem:adhomo}
Observe that if in a) above $\phi$ and $\psi$ are unital, then $u\in U(S)$, so that $\phi$ and $\psi$ are conjugate in the usual, unital sense. Note also that in the situation b) above, $\phi$ and $\psi$ are homotopic. It follows that a unital homomorphism $\phi:L(E)\to L(E)$ is unitally $\operatorname{ad}$-homotopic to the identity if and only if it is homotopic to $\operatorname{ad}(u)$ for some $u\in U(L(E))$.
\end{rem}
\begin{thm}\label{thm:kkliftr} Let $E$ be a finite graph and $R$ a unital algebra. Assume that $L(E)$ and $R$ are purely infinite simple and that $R$ is $K_1$-regular. Then the canonical map
\begin{equation}\label{map:kkliftr}
j : [L(E), R]_{M_2}\setminus\{0\} \to kk(L(E),R)
\end{equation}
is an isomorphism of monoids. In particular, $[L(E), R]_{M_2} \setminus\{0\}$ is the group completion of $[L(E),R]_{M_2}$. Moreover, we have the following:
\begin{itemize}
\item[i)] If $\xi\in kk(L(E),R)$, then there is a nonzero algebra homomorphism $\phi:L(E)\to R$ such that $j(\phi)=\xi$. Moreover, $\phi$ may be chosen to be unital if $\xi$ is.
\item[ii)] Two nonzero (unital) algebra homomorphisms $\phi,\psi: L(E)\to R$ satisfy $j(\phi)=j(\psi)$ if and only if they are $M_2$-homotopic if and only if they are (unitally) $\operatorname{ad}$-homotopic if and only if they are $3$-step (unitally) $\operatorname{ad}$-homotopic.
\end{itemize}
\end{thm}
\begin{proof} The map $[L(E),R]_{M_2} \to kk(L(E),R)$ is a monoid homomorphism by the same argument as in Proposition \ref{prop:homotopis}.
Let $\xi\in kk(L(E),R)$ and let $\gamma:\ker(I-A_E^t)\to K_1(L(E))$ be as in \eqref{map:gamma}. By Theorem \ref{thm:ror} there exists
a nonzero algebra homomorphism $\psi:L(E)\to R$ such that $K_0(\xi)=K_0(\psi)$ and $K_1(\xi)\gamma=K_1(\psi)\gamma$. Let $B=\psi(1) R\psi(1)$, $\operatorname{inc} : B \to R $ the inclusion and $\bar{\psi}:L(E)\to B$ the corestriction of $\psi$. Then $j(\operatorname{inc})$ is an isomorphism, and
for $\eta=j(\operatorname{inc})^{-1}\xi$ we have $\eta-j(\bar{\psi})\in kk(L(E),B)^2\cong \operatorname{Ext}_\Z^1(K_0(L(E)),K_1(B))$, by \cite{dwkk}*{Theorem 7.11}. To prove that the map of the theorem is surjective, it suffices to show that there exists $u\in U(R_\psi)$ such that for $\phi:L(E)\to B$, $\phi(e)=u\psi(e)$, $\phi(e^*)=\psi(e^*)u^{-1}$, we have $\eta-j(\bar{\psi})=j(\phi)-j(\bar{\psi})$. The argument of the proof of \cite{ror}*{Theorem 3.1} shows this. Next we show that \eqref{map:kkliftr} is injective, and that the different notions of homotopy agree. It follows from Lemma \ref{lem:adhomo}, \cite{dwkk}*{Lemma 2.3} and the definition of $\operatorname{ad}$-homotopy that $\operatorname{ad}$-homotopic homomorphisms $L(E)\to R$ are $M_2$-homotopic, and from the universal property of $kk$ that $j$ sends homotopic maps to equal maps. Conversely, let $\phi,\psi:L(E)\to R$ be algebra homomorphisms such that $j(\phi)=j(\psi)$. Then $K_0(\phi)=K_0(\psi)$, whence there exist
for each $e\in E^1$ elements $u_e\in \phi(ee^*) R\psi(ee^*)$ and $u'_e\in \psi(ee^*) R\phi(ee^*)$ such that $u_eu'_e=\phi(ee^*)$ and $u'_eu_e=\psi(ee^*)$. Thus $u=\sum_{e\in E^1}u_e\in \phi(1)R\psi(1)$, $u'=\sum_{e\in E^1}u'_e\in\psi(1) R\phi(1)$, and $\psi'=\operatorname{ad}(u,u')\psi$ agrees with $\phi$ on $DL(E)$. Hence, at the cost of a $1$-step $\operatorname{ad}$-homotopy from $\psi$ to $\psi'$ if necessary, we may assume that $\phi$ and $\psi$ agree on $DL(E)$. Let $B=\phi(1) R\phi(1)$ and let $u\in B_\phi$ be as in Lemma \ref{lem:aux}; we have
\begin{equation}\label{phiu}
\psi(e)=u\phi(e), \quad \psi(e^*)=\phi(e^*)u^{-1}.
\end{equation}
Observe that, because $R$ is purely infinite and $K_1$-regular, the same is true of $B$. By Lemma \ref{lem:3.3} and $K_1$-regularity of $B$, there are $\nu\in\Gl(B_\phi)$ and
$U(t)\in \Gl(B[t])$ such that $U(0)=1$ and $U(1)=\nu^{-1}\lambda(\nu)u^{-1}$. Hence, upon using a second $1$-step $\operatorname{ad}$-homotopy, we may assume that $u=\nu^{-1}\lambda(\nu)$. A calculation shows that $\psi=\operatorname{ad}(\nu)\phi$. Thus a third $1$-step $\operatorname{ad}$-homotopy concludes the proof of the nonunital part of the theorem. If $\xi$ is unital, then by Theorem \ref{thm:ror} there is a unital algebra homomorphism $\psi:L(E)\to R$ such that $K_0(\xi)=K_0(\psi)$ and $K_1(\xi)\gamma=K_1(\psi)\gamma$. The argument used above to prove the surjectivity of \eqref{map:kkliftr}, substituting $\xi$ for $\eta$, shows that there is a unital algebra homomorphism $\phi:L(E)\to R$ such that $j(\phi)=\xi$. Finally, the same argument used above for nonunital homomorphisms shows that two unital homomorphisms $L(E)\to R$ go to the same element in $kk$ if and only if they are unitally $3$-step $\operatorname{ad}$-homotopic.
\end{proof}
\begin{rem}\label{rem:gagp}
By Lemma \ref{lem:adhomo}, we have that if $R$ and $L(E)$ are as in Theorem \ref{thm:kkliftr}, then $[L(E), M_\infty R]$ is an abelian monoid, with operation induced by the map \eqref{map:boxplus}, and the canonical homomorphism $[L(E), M_\infty R] \setminus \{ 0 \}\to kk(L(E),R)$ is an isomorphism of monoids.
\end{rem}
\section{Homotopy classification theorem}\label{sec:main}
\begin{thm}\label{thm:main2}
Let $E$ and $F$ be finite graphs such that $L(E)$ and $L(F)$ are purely infinite simple.
Let $\xi_0:K_0(L(E))\overset{\sim}{\lra} K_0(L(F))$ be an isomorphism. Then there exist nonzero algebra homomorphisms $\phi:L(E)\to L(F)$ and $\psi:L(F)\to L(E)$ such that $K_0(\phi)=\xi_0$, $K_0(\psi)=\xi_0^{-1}$, $\psi\phi\approx_{M_2}\operatorname{id}_{L(E)}$ and $\phi\psi\approx_{M_2}\operatorname{id}_{L(F)}$. If $\xi_0$ is unital then we may choose $\phi$ and $\psi$ to be unital homomorphisms such that $\phi\psi$ and $\psi\phi$ are homotopic to the respective identity maps.
\end{thm}
\begin{proof}
Because $\ker(I-A_E^t)$ and $\ker(I-A_F^t)$ are isomorphic to the quotients of $K_0(L(E))$ and $K_0(L(F))$ modulo torsion, the assumed isomorphism $\xi_0$ induces an isomorphism $\xi_1 : \ker(I-A_E^t) \overset{\sim}{\lra} \ker(I-A_F^t)$. By \cite{dwkk}*{Corollary 7.19}, there exists
$\xi\in kk(L(E),L(F))$ such that for the injective homomorphism $\gamma_F:\ker(I-A_F^t)\to K_1(L(F))$ of \eqref{map:gamma}, we have $K_0(\xi)=\xi_0$ and $K_1(\xi)\gamma_E=\gamma_F\xi_1$. Hence $\xi\in kk(L(E),L(F))$ is an isomorphism by \cite{dwkk}*{Proposition 5.10}. By Theorem \ref{thm:kkliftr} there are algebra homomorphisms $\phi:L(E)\to L(F)$ and $\psi:L(F)\to L(E)$ such that $j(\phi)=\xi$ and $j(\psi)=\xi^{-1}$, which may be chosen unital if $\xi_0$ is. Again by Theorem \ref{thm:kkliftr}, $\phi\psi$ and $\psi\phi$ are $M_2$-homotopic to the respective identity maps. If moreover $\phi$ and $\psi$ are unital, then by Theorem \ref{thm:kkliftr}, $\phi\psi$ and $\psi\phi$ are unitally $\operatorname{ad}$-homotopic to identity maps. Hence by Remark \ref{rem:adhomo} there are $u\in U(L(E))$ and $v\in U(L(F))$ such that $\operatorname{ad}(v)\phi\psi$ and $\psi\phi\operatorname{ad}(u)$ are homotopic to identity maps. Hence $\psi$ is a homotopy equivalence. Upon replacing $\phi$ by the homotopy inverse of $\psi$, we get the last statement of the theorem.
\end{proof}
Recall from \cite{black}*{Chapter III, Section 6.2} that a \emph{scaled ordered group} is an ordered group together with a choice of order unit. If $R$ is a unital algebra, then $K_0(R)$
has a natural structure of scaled ordered group whose positive cone is the image of $\mathcal V(R)$ and whose order unit is $[1_R]$.
We say that two unital algebras $R$ and $S$ are \emph{unitally homotopy equivalent} if there are unital homomorphisms $\phi:R\to S$ and $\psi:S\to R$ such that $\psi\phi$ and $\phi\psi$ are homotopic to the respective identity maps.
\begin{coro}\label{coro:main2} Let $E$ and $F$ be finite graphs such that $L(E)$ and $L(F)$ are simple. Assume that $K_0(L(E))$ and $K_0(L(F))$ are isomorphic as scaled ordered groups. Then either
\begin{itemize}
\item[i)] there is $1\le n$ such that $L(E)\cong L(F)\cong M_n$
\goodbreak
or
\goodbreak
\item[ii)] $L(E)$ and $L(F)$ are purely infinite and unitally homotopy equivalent.
\end{itemize}
\end{coro}
\begin{proof} By Remark \ref{rem:simpnopis} if $L(E)$ is simple but not purely infinite, then there is $n\ge 1$ such that
$L(E)\cong M_n$. In this case $K_0(L(E))\cong \Z$ with the usual order and $[1_{L(E)}]$ corresponds to $n$. On the other hand if $R$ is a purely infinite simple unital algebra, then every element of $K_0(R)$ is nonnegative, by Theorem \ref{thm:kpis}. The proof is concluded using Theorem \ref{thm:main2} and observing that the identity is the only automorphism of $\Z$ as an ordered group.
\end{proof}
\section{Algebra extensions}\label{sec:ext}
Let $R$ be an algebra. For $x\in R^\N$, let $\operatorname{supp} (x)=\{n\in\N:x_n\ne 0\}$. For a matrix $a\in R^{\N\times\N}$ and $i\in\N$, put $a_{i,*}$ and $a_{*,i}$ for the $i^{th}$ row and column, and set
\begin{gather*}
\Im (a)=\{a_{i,j}:i,j\in\N\}\subset R,\\
N(a)=\sup\{\#\operatorname{supp}(a_{i,*}), \#\operatorname{supp}(a_{*,i}):i\in\N\}.
\end{gather*}
Consider the algebras
\begin{gather*}
\Gamma R=\{a\in R^{\N\times\N}: \text{each row and column of $a$ is finitely supported}\}, \\
\Gamma' R=\{a\in\Gamma R: \#\Im(a)<\infty \text{ and } N(a)<\infty\},\\
\Sigma R=\Gamma R/M_\infty R,\quad\Sigma'R=\Gamma'R/M_\infty R.
\end{gather*}
The algebras $\Sigma R$ and $\Sigma'R$ are the \emph{Wagoner} \cite{wagoner} and \emph{Karoubi} \cite{kv} suspensions.
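For instance, the shift matrix $T\in R^{\N\times\N}$, defined by $T_{i,i+1}=1$ and $T_{i,j}=0$ otherwise, satisfies $\#\Im(T)\le 2$ and $N(T)=1$; thus $T\in\Gamma'R\setminus M_\infty R$, and its class in $\Sigma' R$ is nonzero.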
\begin{prop}\label{prop:wagopis}
Let $R$ be either a division algebra or a purely infinite simple unital algebra. Then $\Sigma R$ and $\Sigma' R$ are purely infinite simple.
\end{prop}
\begin{proof}
It suffices to show that if $M\in \Gamma R\setminus M_\infty R$ then there are $A,B\in \Gamma' R$ such that $AMB=1$. The conditions defining
$\Gamma'$ and $\Gamma$ imply that there are infinite, strictly increasing sequences $Y=\{y_1, y_2,\dots\}, N=\{N_1=1,N_2,\dots\}\subset\N$ such that for each $j$, $\emptyset\ne\operatorname{supp}(M_{*,y_j})\subset[N_j+1,N_{j+1}]$. Let $B_1$ be the matrix whose $n^{th}$ column is the canonical basis element $\epsilon_{y_n}$. The support of the $j^{th}$ column of the matrix $MB_1$ is contained in $[N_j+1,N_{j+1}]$. Choose, for each $j$, an element $x_j\in [N_j+1,N_{j+1}]$ such that $(MB_1)_{x_j,j}\ne 0$. Let $A_1$ be the matrix whose $j^{th}$ row is the basis element $\epsilon_{x_j}$. The matrix $A_1MB_1$ is diagonal, and all its diagonal entries are nonzero. Hence by our hypothesis on $R$ there are diagonal matrices $A_2$ and $B_2$ such that $A_2A_1MB_1B_2=1$.
\end{proof}
Recall from \cite{dwkk}*{Lemma 2.8 and the paragraph below} that when $R$ is unital, every extension of an algebra $A$ by $M_\infty R$ is classified by a homomorphism $A\to \Sigma R$. By \cite{dwkk}*{Lemma 2.5}, the sets $[A,\Sigma R]_{M_2}$ and
$[A,\Sigma'R]_{M_2}$ are abelian monoids with the sum induced by \eqref{map:boxplus}. Put
\[
\mathcal{E}xt(A,R)=[A,\Sigma R]_{M_2},\quad \mathcal{E}xt(A,R)_f=[A,\Sigma'R]_{M_2}.
\]
By definition, there is a canonical map $\mathcal{E}xt(A,R)_f\to\mathcal{E}xt(A,R)$; by \cite{dwkk}*{Remark 5.8} there is also a natural map $\mathcal{E}xt(A,R)\to kk_{-1}(A,R)$.
\begin{thm}\label{thm:ext}
Let $R$ be either a division algebra or a $K_0$-regular purely infinite simple unital algebra and $E$ a finite graph such that $L(E)$ is simple. Then the canonical maps
\[
\mathcal{E}xt(L(E),R)_f\to\mathcal{E}xt(L(E),R)\to kk_{-1}(L(E),R)
\]
are isomorphisms. Moreover, every nonzero element of each of these groups represents the $M_2$-homotopy class of a nontrivial extension of $L(E)$ by $M_\infty(R)$.
\end{thm}
\begin{proof}
Since $\ell$ is a field, $\Sigma$ and $\Sigma'$ are models for the suspension functor. By Proposition \ref{prop:wagopis}, $\Sigma R$ and $\Sigma ' R$ are purely infinite simple. Now apply Theorem \ref{thm:kkliftr} to prove the first assertion. The second assertion follows from Theorem \ref{thm:kkliftr} and \cite{dwkk}*{Lemma 2.8}.
\end{proof}
\begin{coro}[cf.\ \cite{ck}*{Theorem 5.3}]\label{coro:ckext}
For $E$ as in the theorem above, we have
\[
\mathcal{E}xt(L(E),\ell)={\rm Coker}(I-A_E).
\]
\end{coro}
\begin{proof} Immediate from Theorem \ref{thm:ext} and the fact that $KH^1(L(E))={\rm Coker}(I-A_E)$ \cite{dwkk}*{Formula 6.4}.
\end{proof}
\begin{coro}\label{coro:cute}
Let $E$ and $R$ be as in Theorem \ref{thm:ext}. Then there is an exact sequence
\begin{multline*}
0\to\operatorname{Ext}^1_\Z(K_0(L(E)),K_0(R))\to \mathcal{E}xt(L(E),R)\to\\
\operatorname{Hom}_\Z(\ker(I-A_E^t),K_0(R))\oplus\operatorname{Hom}_\Z(K_0(L(E)),K_{-1}R)\to 0.
\end{multline*}
\end{coro}
\begin{proof} Immediate from Theorem \ref{thm:ext} and \cite{dwkk}*{Corollary 7.19}.
\end{proof}
\begin{ex}\label{ex:cute}
If $R$ is either $\ell$ or a purely infinite simple unital Leavitt path algebra, then $K_{-1}R=0$, so the exact sequence
of Corollary \ref{coro:cute} becomes
\[
0\to\operatorname{Ext}^1_\Z(K_0(L(E)),K_0(R))\to \mathcal{E}xt(L(E),R)\to
\operatorname{Hom}_\Z(\ker(I-A_E^t),K_0(R))\to 0.
\]
If furthermore $K_0(L(E))$ is torsion, then $\ker(I-A_E^t)=0$, and we get a canonical isomorphism
\[
\mathcal{E}xt(L(E),R)=\operatorname{Ext}^1_\Z(K_0(L(E)),K_0(R)).
\]
\end{ex}
\section{Maps into tensor products with \texorpdfstring{$L_2$}{L2}}\label{sec:tenso}
\begin{lem}\label{lem:homozero} Let $E$ be a graph and let $\phi:L(E)\to R$ be an algebra homomorphism. Then $\phi\approx 0\iff \phi=0$.
\end{lem}
\begin{proof} It suffices to show that if $H:L(E)\to R[t]$ satisfies $\operatorname{ev}_0H=0$, and $v\in E^0$, then $H(v)=0$. This follows from Lemma \ref{lem:idemzero}.
\end{proof}
A unital algebra $R$ is \emph{regular supercoherent} if for every $n\ge 0$, $R[t_1, \dots, t_n]$ is regular coherent in the sense of \cite{gersten}.
\begin{lem}\label{lem:kreg}
Let $E$ be a graph and $R$ a regular supercoherent algebra. Then $L(E)\otimes R$ is $K$-regular. In particular, $L(E) \otimes L(F)$ is $K$-regular for every finite graph $F$.
\end{lem}
\begin{proof}
By definition, $R_n= R[t_1,\dots,t_n]$ is regular supercoherent for every $n\ge 0$. Hence by \cite{dwkk}*{Example 5.5} the canonical map $K_*(R_n\otimes L(E))\overset{\sim}{\lra} KH_*(R_n\otimes L(E))=KH_*(R_0\otimes L(E))$ is an isomorphism for every $n$, whence also $K_*(R_0\otimes L(E))\to K_*(R_n\otimes L(E))$ is an isomorphism for all $n$. The second assertion follows from the first, using \cite{libro}*{Lemma 6.4.16}.
\end{proof}
Let $R,S$ be unital algebras. Put
\[
[R,S]\supset [R,S]_1=\{[f]: f \text{ unital }\}.
\]
\begin{thm}\label{thm:mapl2}
Let $E$ be a finite graph such that $L(E)$ is simple and $R$ a purely infinite simple regular supercoherent algebra. Then $[L(E),L_2]_1=[L(E),L_2]_{M_2}\setminus\{0\}$, $[L(E),R\otimes L_2]_1=[L(E),R\otimes L_2]_{M_2}$, and both sets have exactly one element each.
\end{thm}
\begin{proof} By Remark \ref{rem:simpnopis}, Proposition \ref{prop:homotopis} and Theorem \ref{intro:kklift}, $[L(E),L_2]_{M_2}\setminus\{0\}$ has exactly one element, since $j(L_2)=0$ in $kk$; by Corollary \ref{coro:tododentro} this element is the class of a unital homomorphism. Next let $\phi,\psi:L(E)\to L_2$ be unital homomorphisms. If $L(E)$ is not purely infinite, then by Proposition \ref{prop:homotopis}, $\phi$ and $\psi$ are conjugate, and therefore homotopic, since by Corollary \ref{coro:kv1pis}, $\pi_0(U(L_2)) = K_1(L_2) = 0$. If $L(E)$ is purely infinite, then by part ii) of Theorem \ref{thm:kkliftr}, $\phi$ and $\psi$ are $3$-step unitally $\operatorname{ad}$-homotopic. Hence by Remark \ref{rem:adhomo} and the argument we have just used, $\phi\approx\psi$. Thus the assertions about homomorphisms $L(E)\to L_2$ are proved. It is well-known that the tensor product of a unital simple algebra with a unital central simple algebra is again simple. By \cite{apcrow}*{Theorem 4.2}, $L_2$ is central, so $R\otimes L_2$ is simple. Moreover, $R\otimes L_2$ is purely infinite by \cite{apgpsm}*{Theorem 7.9}. Hence using that $j(R\otimes L_2)=0$ in $kk$ and applying Lemmas \ref{lem:homozero} and \ref{lem:kreg}, Proposition \ref{prop:homotopis} and Theorem \ref{thm:kkliftr}, we obtain
\[
[L(E),R\otimes L_2]_{M_2}\setminus\{0\} = kk(L(E),R\otimes L_2)=0.
\]
By Corollary \ref{coro:tododentro} there is a unital homomorphism $\phi:L(E)\to R\otimes L_2$. If $\psi$ is another, then $\phi\approx\psi$ by Lemma \ref{lem:adxy} and the argument above.
\end{proof}
\begin{ex}\label{ex:otimesl2}
Let $R$ be as in Theorem \ref{thm:mapl2}, let $d:L_2\to R \otimes L_2$, $a\mapsto 1\otimes a$ and let $\phi:L_2\to R\otimes L_2$ be any homomorphism. Setting $L(E)=L_2$ in Theorem \ref{thm:mapl2} we get that if $\phi$ is nonzero then it is $M_2$-homotopic to $d$ and that if $\phi$ is unital then it is homotopic to $d$.
\end{ex}
\begin{bibdiv}
\begin{biblist}
\bib{libro}{book}{
author={Abrams, Gene},
author={Ara, Pere},
author={Siles Molina, Mercedes},
title={Leavitt path algebras},
date={2017},
series={Lecture Notes in Math.},
volume={2191},
publisher={Springer},
doi={10.1007/978-1-4471-7344-1},
}
\bib{aap}{article}{
author={Abrams, G.},
author={Aranda Pino, G.},
title={The Leavitt path algebras of arbitrary graphs},
journal={Houston J. Math.},
volume={34},
date={2008},
number={2},
pages={423--442},
issn={0362-1588},
review={\MR{2417402}},
}
\bib{alps}{article}{
author={Abrams, Gene},
author={Louly, Adel},
author={Pardo, Enrique},
author={Smith, Christopher},
title={Flow invariants in the classification of Leavitt path algebras},
journal={J. Algebra},
volume={333},
date={2011},
pages={202--231},
issn={0021-8693},
review={\MR{2785945}},
}
\bib{abc}{article}{
author={Ara, Pere},
author={Brustenga, Miquel},
author={Corti\~nas, Guillermo},
title={$K$-theory of Leavitt path algebras},
journal={M\"unster J. Math.},
volume={2},
date={2009},
pages={5--33},
issn={1867-5778},
review={\MR{2545605}},
}
\bib{agp}{article}{
author={Ara, P.},
author={Goodearl, K.},
author={Pardo, E.},
title={$K_0$ of purely infinite simple regular rings},
journal={$K$-Theory},
volume={26},
date={2002},
number={1},
pages={69--100},
}
\bib{apcrow}{article}{
author={Aranda Pino, Gonzalo},
author={Crow, Kathi},
title={The center of a Leavitt path algebra},
journal={Rev. Mat. Iberoam.},
volume={27},
date={2011},
number={2},
pages={621--644},
issn={0213-2230},
review={\MR{2848533}},
doi={10.4171/RMI/649},
}
\bib{apgpsm}{article}{
author={Aranda Pino, G.},
author={Goodearl, K. R.},
author={Perera, F.},
author={Siles Molina, M.},
title={Non-simple purely infinite rings},
journal={Amer. J. Math.},
volume={132},
date={2010},
number={3},
pages={563--610},
issn={0002-9327},
review={\MR{2666902}},
doi={10.1353/ajm.0.0119},
}
\bib{black}{book}{
author={Blackadar, Bruce},
title={{$K$}-theory for operator algebras},
series={Mathematical Sciences Research Institute Publications},
volume={5},
publisher={Springer-Verlag, New York},
date={1986},
pages={viii+338},
isbn={0-387-96391-X},
review={\MR{859867}},
url={http://dx.doi.org/10.1007/978-1-4613-9572-0},
}
\bib{brosore}{article}{
author={Brownlowe, Nathan},
author={S\o rensen, Adam Peder Wie},
title={Leavitt $R$-algebras over countable graphs embed into $L_{2,R}$},
journal={J. Algebra},
volume={454},
date={2016},
pages={334--356},
issn={0021-8693},
review={\MR{3473431}},
}
\bib{www}{article}{
author={Corti\~nas, Guillermo},
title={Algebraic v. topological $K$-theory: a friendly match},
conference={
title={Topics in algebraic and topological $K$-theory},
},
book={
series={Lecture Notes in Math.},
volume={2008},
publisher={Springer, Berlin},
},
date={2011},
pages={103--165},
review={\MR{2762555}},
}
\bib{dwkk}{article}{
author={Corti\~nas, Guillermo},
author={Montero, Diego},
title={Algebraic bivariant $K$-theory and Leavitt path algebras},
eprint={arXiv:1806.09204},
}
\bib{kkwt}{article}{
author={Corti\~nas, Guillermo},
author={Thom, Andreas},
title={Bivariant algebraic $K$-theory},
journal={J. Reine Angew. Math.},
volume={610},
date={2007},
pages={71--123},
issn={0075-4102},
review={\MR{2359851}},
}
\bib{ck}{article}{
author={Cuntz, Joachim},
author={Krieger, Wolfgang},
title={A class of $C^*$-algebras and topological Markov chains},
journal={Invent. Math.},
volume={56},
date={1980},
number={3},
pages={251--268},
}
\comment{
\bib{crr}{book}{
author={Cuntz, Joachim},
author={Meyer, Ralf},
author={Rosenberg, Jonathan M.},
title={Topological and bivariant $K$-theory},
series={Oberwolfach Seminars},
volume={36},
publisher={Birkh\"auser Verlag, Basel},
date={2007},
pages={xii+262},
isbn={978-3-7643-8398-5},
review={\MR{2340673}},
}
}
\comment{
\bib{DT}{article}{
author={Drinen, D.},
author={Tomforde, M.},
title={The $C^*$-algebras of arbitrary graphs},
journal={Rocky Mountain J. Math.},
volume={35},
date={2005},
number={1},
pages={105--135},
issn={0035-7596},
review={\MR{2117597}},
doi={10.1216/rmjm/1181069770},
}
}
\bib{gersten}{article}{
author={Gersten, Stephen M.},
title={{$K$}-theory of free rings},
journal={Comm. Algebra},
volume={1},
date={1974},
pages={39--64},
issn={0092-7872},
review={\MR{0396671}},
}
\bib{goodearl}{book}{
author={Goodearl, K. R.},
title={von Neumann regular rings},
series={Monographs and Studies in Mathematics},
volume={4},
publisher={Pitman (Advanced Publishing Program), Boston, Mass.-London},
date={1979},
pages={xvii+369},
isbn={0-273-08400-3},
review={\MR{533669}},
}
\bib{kv}{article}{
author={Karoubi, Max},
author={Villamayor, Orlando},
title={$K$-th\'eorie alg\'ebrique et $K$-th\'eorie topologique. I},
language={French},
journal={Math. Scand.},
volume={28},
date={1971},
pages={265--307 (1972)},
issn={0025-5521},
review={\MR{0313360}},
doi={10.7146/math.scand.a-11024},
}
\comment{
\bib{leav}{article}{
author={Leavitt, W. G.},
title={The module type of a ring},
journal={Trans. Amer. Math. Soc.},
volume={103},
date={1962},
pages={113--130},
issn={0002-9947},
review={\MR{0132764}},
}
}
\bib{m&m}{article}{
author={Menal, Pere},
author={Moncasi, Jaume},
title={On regular rings with stable range $2$},
journal={J. Pure Appl. Algebra},
volume={24},
date={1982},
number={1},
pages={25--40},
issn={0022-4049},
review={\MR{647578}},
doi={10.1016/0022-4049(82)90056-1},
}
\comment{
\bib{neeman}{book}{
author={Neeman, Amnon},
title={Triangulated categories},
series={Annals of Mathematics Studies},
volume={148},
publisher={Princeton University Press, Princeton, NJ},
date={2001},
pages={viii+449},
isbn={0-691-08685-0},
isbn={0-691-08686-9},
review={\MR{1812507}},
}
\bib{emathesis}{thesis}{
author={Rodr{\'\i }guez Cirone, Emanuel},
title={Bivariant algebraic $K$-theory categories and a spectrum for $G$-equivariant bivariant algebraic $K$-theory},
type={PhD thesis},
address={Buenos Aires},
date={2017},
eprint={http://cms.dm.uba.ar/academico/carreras/doctorado/tesisRodriguez.pdf}
}
}
\bib{ror}{article}{
author={R{\o}rdam, Mikael},
title={Classification of Cuntz-Krieger algebras},
journal={$K$-Theory},
volume={9},
date={1995},
number={1},
pages={31--58},
}
\bib{robook}{book}{
author={R\o rdam, M.},
author={Larsen, F.},
author={Laustsen, N.},
title={An introduction to $K$-theory for $C^*$-algebras},
series={London Mathematical Society Student Texts},
volume={49},
publisher={Cambridge University Press, Cambridge},
date={2000},
pages={xii+242},
isbn={0-521-78334-8},
isbn={0-521-78944-3},
review={\MR{1783408}},
}
\bib{rosen}{book}{
author={Rosenberg, Jonathan},
title={Algebraic $K$-theory and its applications},
series={Graduate Texts in Mathematics},
volume={147},
publisher={Springer-Verlag, New York},
date={1994},
pages={x+392},
isbn={0-387-94248-3},
review={\MR{1282290}},
}
\comment{
\bib{rt}{article}{
author={Ruiz, Efren},
author={Tomforde, Mark},
title={Classification of unital simple Leavitt path algebras of infinite
graphs},
journal={J. Algebra},
volume={384},
date={2013},
pages={45--83},
issn={0021-8693},
review={\MR{3045151}},
}
}
\bib{vorst}{article}{
author={Vorst, Ton},
title={Localization of the $K$-theory of polynomial extensions},
journal={Math. Ann.},
volume={244},
date={1979},
pages={33--43},
review={\MR{0550060}},
}
\bib{wagoner}{article}{
author={Wagoner, J. B.},
title={Delooping classifying spaces in algebraic $K$-theory},
journal={Topology},
volume={11},
date={1972},
pages={349--370},
issn={0040-9383},
review={\MR{0354816}},
}
\bib{weih}{article}{
author={Weibel, Charles A.},
title={Homotopy algebraic $K$-theory},
conference={
title={Algebraic $K$-theory and algebraic number theory},
address={Honolulu, HI},
date={1987},
},
book={
series={Contemp. Math.},
volume={83},
publisher={Amer. Math. Soc., Providence, RI},
},
date={1989},
pages={461--488},
review={\MR{991991}},
}
\end{biblist}
\end{bibdiv}
\end{document}
| {
"timestamp": "2018-08-07T02:13:30",
"yymm": "1806",
"arxiv_id": "1806.09242",
"language": "en",
"url": "https://arxiv.org/abs/1806.09242",
"abstract": "In this paper we address the classification problem for purely infinite simple Leavitt path algebras of finite graphs over a field $\\ell$. Each graph $E$ has associated a Leavitt path $\\ell$-algebra $L(E)$. There is an open question which asks whether the pair $(K_0(L(E)), [1_{L(E)}])$, consisting of the Grothendieck group together with the class $[1_{L(E)}]$ of the identity, is a complete invariant for the classification, up to algebra isomorphism, of those Leavitt path algebras of finite graphs which are purely infinite simple. We show that $(K_0(L(E)), [1_{L(E)}])$ is a complete invariant for the classification of such algebras up to polynomial homotopy equivalence. To prove this we develop the bivariant algebraic $K$-theory of Leavitt path algebras and obtain several results of independent interest.",
"subjects": "Rings and Algebras (math.RA); K-Theory and Homology (math.KT); Operator Algebras (math.OA)",
"title": "Homotopy classification of Leavitt path algebras",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877002595527,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7089594742920233
} |
https://arxiv.org/abs/1811.04600 | New Theoretical Bounds and Constructions of Permutation Codes under Block Permutation Metric | Permutation codes under different metrics have been extensively studied due to their potential in various applications. Generalized Cayley metric is introduced to correct generalized transposition errors, including previously studied metrics such as Kendall's $\tau$-metric, Ulam metric and Cayley metric as special cases. Since the generalized Cayley distance between two permutations is not easily computable, Yang et al. introduced a related metric of the same order, named the block permutation metric. Given positive integers $n$ and $d$, let $\mathcal{C}_{B}(n,d)$ denote the maximum size of a permutation code in $S_n$ with minimum block permutation distance $d$. In this paper, we focus on the theoretical bounds of $\mathcal{C}_{B}(n,d)$ and the constructions of permutation codes under block permutation metric. Using a graph theoretic approach, we improve the Gilbert-Varshamov type bound by a factor of $\Omega(\log{n})$, when $d$ is fixed and $n$ goes to infinity. We also propose a new encoding scheme based on binary constant weight codes. Moreover, an upper bound beating the sphere-packing type bound is given when $d$ is relatively close to $n$. | \section{Introduction}
Let $S_n$ be the symmetric group on $n$ elements. A permutation code is a subset of $S_n$ satisfying certain constraints. Permutation codes under several
different metrics are widely used due to their various applications. Especially in recent years, permutation codes under Kendall's $\tau$-metric, Ulam
metric and Cayley metric have been extensively studied in cloud storage systems, genome resequencing and the rank modulation scheme of flash memories
\cite{2015BuzagloKendall},\cite{2016BuzagloMulti},\cite{2013FarnoudUlam},\cite{Gologlu2015New},\cite{2014Multipermutation},\cite{1990KendallRankMethod},\cite{2016WangPerms},\cite{2016ZhangSnake}.
Under these metrics, codes are designed to correct transposition errors or translocation errors. In \cite{2014CheeBreakpoint}, Chee and Vu introduced the
generalized Cayley metric, which includes the aforementioned metrics as special cases. However, the generalized Cayley distance between two permutations
is in general not easily computable and thus the construction of codes is difficult. In \cite{Yang2018Theoretical}, Yang et al. introduced the block
permutation metric, which can be computed easily and is of the same order as the generalized Cayley metric. By the metric embedding method, the problem
of constructing codes in the generalized Cayley metric is transformed into constructing codes in the block permutation metric. Several theoretical
bounds (Gilbert-Varshamov type and sphere-packing type) and constructions of codes under block permutation metric are shown in \cite{Yang2018Theoretical}.
In this paper we further consider permutation codes in $S_n$ under the block permutation metric. We first establish a connection between permutation
codes and independent sets in a corresponding graph and then study the bounds of the independence number of the graph. By this graph theoretic approach,
we improve the Gilbert-Varshamov type bound asymptotically by a factor of $\Omega(\log{n})$, when the minimum distance $d$ is fixed while $n$ goes to
infinity. We also propose a new encoding scheme based on certain constructions of binary constant weight codes. Compared with the known constructions,
we improve the size of codes by a factor of $\Theta(n^{2d-4})$. As for the upper bound, each permutation can be represented as a corresponding characteristic set
and then we apply some methods from extremal set theory to obtain an upper bound of a new type, which beats the sphere-packing type bound when $d$ is relatively close to $n$.
The rest of this paper is organized as follows. In Section \ref{sec:pre}, we review some basic backgrounds about block permutation metric. In Section
\ref{sec:graph}, we introduce some relevant terminologies and results from extremal graph theory and then establish the correspondence between
permutation codes and independent sets in some certain graph. The asymptotic improvement of the Gilbert-Varshamov type bound is presented in Section \ref{sec:TheoBound}.
Section \ref{sec:encoding} contains a new encoding scheme based on binary constant weight codes. The upper bound based on extremal set theory is presented in Section \ref{sec:UpperBound}.
We conclude in Section \ref{sec:conclusion}.
\section{Block permutation metric}\label{sec:pre}
In this section, we give some definitions and notations for permutation codes under block permutation metric.
Let $[n]$ denote $\{1,2,3,\ldots,n\}$. A permutation $\pi$ over $[n]$ is written in vector notation as $\pi=(\pi(1),\pi(2),\ldots,\pi(n))$.
The symbol $\circ$ denotes the composition of permutations. Specifically, for two permutations $\sigma$ and $\tau$, their composition, denoted by
$\sigma \circ \tau$, is the permutation with $\sigma \circ \tau(i)=\sigma(\tau(i))$ for all $i \in [n]$. All the permutations under this operation form
the noncommutative group $S_n$ known as the symmetric group on $[n]$ of size $\left|S_n\right|=n!$. The subsequence of $\sigma$ from indices $i$ to
$j$ is written as $\sigma \left[i:j\right] \triangleq \left(\sigma(i),\sigma(i+1),\ldots,\sigma(j)\right)$.
\begin{definition}
A permutation $\pi \in S_n$ is called {\it minimal} if and only if no consecutive elements in $\pi$ are also consecutive in the identity permutation
$e=(1,2,\ldots,n)$, i.e., for all $1 \leqslant i \leqslant n-1, \pi(i+1)\neq \pi(i)+1$. Denote the set of all the minimal permutations in $S_n$ as
$\mathcal{D}_{n}$.
\end{definition}
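For instance, $(2,4,1,3)\in\mathcal{D}_{4}$, while $(2,3,1,4)\notin\mathcal{D}_{4}$ since $\pi(2)=\pi(1)+1$; the smallest nontrivial case is $\mathcal{D}_{2}=\{(2,1)\}$.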
\begin{definition}\label{def:BPM}
The block permutation distance $d_{B}(\pi_{1},\pi_{2})$ between two permutations $\pi_{1},\pi_{2} \in S_n$ is equal to $d$ if
\begin{equation*}
\pi_{1}=\left(\psi_{1},\psi_{2},\ldots,\psi_{d+1}\right), \pi_{2}=\left(\psi_{\sigma(1)},\psi_{\sigma(2)},\ldots,\psi_{\sigma(d+1)}\right),
\end{equation*}
where $\sigma\in\mathcal{D}_{d+1}$, $\psi_{k}=\pi_1\left[i_{k-1}+1:i_{k}\right]$ for $0=i_{0}<i_{1}<\cdots<i_{d}<i_{d+1}=n$ and $1\leqslant k\leqslant
d+1.$
\end{definition}
The definition suggests that in order to turn $\pi_{1}$ into $\pi_{2}$, one way is to first divide $\pi_{1}$ into $d+1$ segments
$\pi_{1}=\left(\psi_{1},\psi_{2},\ldots,\psi_{d+1}\right)$ and then perform a block level permutation of these segments according to a permutation
$\sigma\in\mathcal{D}_{d+1}$. The constraint of $\sigma$ being minimal indicates that $d_{B}(\pi_{1},\pi_{2})=d$ if and only if $d+1$ is the minimum
number of segments that $\pi_{1}$ needs to be divided into for such an operation. Since this definition is not very intuitive, Yang et al.
\cite{Yang2018Theoretical} found another way to characterize the block permutation distance explicitly by the {\it characteristic set} of a permutation.
\begin{definition}\label{def:char}
The characteristic set $A\left(\pi\right)$ for any $\pi \in S_n$ is defined as the set of all consecutive pairs in $\pi$, i.e.,
\begin{equation*}
A\left(\pi\right) \triangleq \left\{\left(\pi\left(i\right),\pi\left(i+1\right)\right)\mid 1\leqslant i < n\right\}.
\end{equation*}
\end{definition}
Note that the characteristic set of a permutation amounts to representing the permutation as a directed Hamiltonian path on $n$ vertices: the Hamiltonian path corresponding to $\pi$ is the one on vertex set $[n]$ with edge set $A(\pi)$. The following idea will be frequently used throughout the paper.
Given a subset of $A(\pi)$, the directed edges corresponding to the subset constitute a disjoint union of several directed paths
(an isolated vertex $v$ will also be regarded as a path starting and ending with $v$). Then $\pi$ can be recovered by
concatenating these directed paths into a directed Hamiltonian path.
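For example, if $n=4$ and the given subset of $A(\pi)$ is $\{(2,3)\}$, the corresponding paths are $2\to 3$, $1$ and $4$, and $\pi$ must be one of the $3!=6$ permutations obtained by concatenating these three paths, such as $(1,2,3,4)$ or $(2,3,4,1)$.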
Let $\mathcal{P}_{n}$ be the set $\{(i,j)|i\neq j, i\in [n], j\in[n]\}$; then $\left|\mathcal{P}_{n}\right|=n(n-1)$. For each permutation $\pi \in S_n$, the
corresponding characteristic set $A\left(\pi\right)$ is then a subset of $\mathcal{P}_{n}$ of cardinality $\left|A\left(\pi\right)\right|=n-1$. The
block permutation metric can be characterized by the characteristic set and then some basic properties of the metric can be derived.
These are summarized in the following two lemmas proposed in \cite{Yang2018Theoretical}.
\begin{lemma}\label{lem:db}
For all $\pi_{1},\pi_{2}\in S_n$,
\begin{equation*}
d_{B}\left(\pi_{1},\pi_{2}\right)=\left|A\left(\pi_{1}\right) \setminus A\left(\pi_{2}\right)\right|.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lem:pro} For all $\pi_{1},\pi_{2},\pi_{3}\in S_n$, the block permutation distance $d_{B}$ satisfies the following properties:
\begin{enumerate}
\item (Symmetry) $d_{B}\left(\pi_{1},\pi_{2}\right)=d_{B}\left(\pi_{2},\pi_{1}\right)$.
\item (Left-invariance) $d_{B}\left(\pi_{3}\circ \pi_{1},\pi_{3} \circ \pi_{2}\right)=d_{B}\left(\pi_{1},\pi_{2}\right)$.
\item (Triangle Inequality) $d_{B}\left(\pi_{1},\pi_{3}\right)\leqslant d_{B}\left(\pi_{1},\pi_{2}\right)+d_{B}\left(\pi_{2},\pi_{3}\right).$
\end{enumerate}
\end{lemma}
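For instance, the left-invariance property is immediate from Lemma \ref{lem:db}: one has $A(\pi_{3}\circ\pi)=\{(\pi_{3}(a),\pi_{3}(b)):(a,b)\in A(\pi)\}$, and since $\pi_{3}$ is a bijection of $[n]$ it follows that $\left|A(\pi_{3}\circ\pi_{1})\setminus A(\pi_{3}\circ\pi_{2})\right|=\left|A(\pi_{1})\setminus A(\pi_{2})\right|$.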
The following example shows how to compute the block permutation distance between two permutations following the terminologies above.
\begin{example}
Let $\pi_{1} = (4,8,3,2,6,7,5,1,9)$, $\pi_{2}=(6,7,8,3,2,5,1,9,4).$ Their characteristic sets are
\begin{flalign}
A(\pi_{1})&=\{(4,8),(8,3),(3,2),(2,6),(6,7),(7,5),(5,1),(1,9)\}, \nonumber\\
A(\pi_{2})&=\{(6,7),(7,8),(8,3),(3,2),(2,5),(5,1),(1,9),(9,4)\}, \nonumber
\end{flalign}
and thus we have $$d_{B}(\pi_{1},\pi_{2})=\left|A\left(\pi_{1}\right) \setminus A\left(\pi_{2}\right)\right|=|\{(4,8),(2,6),(7,5)\}|=3.$$
On the other hand, to compute $d_{B}(\pi_{1},\pi_{2})$ by Definition \ref{def:BPM}, we should find $\psi_{i}, 1\leqslant i \leqslant 4$
and $\sigma\in\mathcal{D}_{4}$ as follows:
$$\psi_{1}=(4), \psi_{2} =(8,3,2), \psi_{3}=(6,7), \psi_{4}=(5,1,9), \sigma=(3,2,4,1).$$
Then we have
\begin{flalign}
\pi_{1} & =(\psi_{1},\psi_{2},\psi_{3},\psi_{4}), \nonumber \\
\pi_{2} & =(\psi_{\sigma(1)},\psi_{\sigma(2)},\psi_{\sigma(3)},\psi_{\sigma(4)}), \nonumber
\end{flalign}
and thus $d_{B}(\pi_{1},\pi_{2})=3.$
\end{example}
Note that it is usually not easy to find such $\psi_{i}$ and $\sigma$ to compute the block permutation distance between two permutations, while finding the
difference between two characteristic sets is relatively easy. Next we introduce the permutation code under block permutation metric.
\begin{definition}
Given positive integers $n$ and $d$, $\mathcal{C}\subseteq S_n$ is called an $(n,d)$-permutation code under block permutation metric, if
$d_B(\sigma,\pi)\geqslant d$ for any two distinct permutations $\sigma,\pi\in \mathcal{C}$. Let $\mathcal{C}_{B}(n,d)$ denote the maximum size of an $(n,d)$-permutation code
$\mathcal{C}$.
\end{definition}
The best known upper and lower bounds on $\mathcal{C}_{B}(n,d)$, namely the sphere-packing type bound and the Gilbert-Varshamov type bound, were proposed in \cite{Yang2018Theoretical}. Both bounds are derived from an estimate of the size of a block permutation ball.
\begin{definition}\label{def:ball}
For given integers $n$, $t$ and a given center point $\pi \in S_n$, the $t$-block permutation ball centered at $\pi$ is defined as the set of all permutations $\sigma \in S_n$ such that $d_{B}\left(\pi,\sigma \right)\leqslant t$. We denote the $t$-block permutation ball centered at $\pi$ as $b_{B}\left(n,t,\pi\right)$.
\end{definition}
Note that by the left-invariance property of $d_{B}$, the size of $b_{B}\left(n,t,\pi\right)$ is independent of the center $\pi$ and thus we can
denote the size of the ball as $|b_{B}\left(n,t\right)|$.
\begin{lemma}\label{lem:ballsize}\textup{\cite{Yang2018Theoretical}}
For given integers $n$ and $t$ with $t\leqslant n-\sqrt{n}-1$, denote the size of a $t$-block
permutation ball by $\left |b_{B}\left(n,t\right)\right |$. Then we have
\begin{equation*}
\prod\limits_{i=1}^{t}\left(n-i\right)\leqslant \left |b_{B}\left(n,t\right)\right| \leqslant \prod\limits_{i=0}^{t}\left(n-i\right).
\end{equation*}
\end{lemma}
\begin{lemma}\label{lem:GVSP}\textup{\cite{Yang2018Theoretical}}
For given integers $n$ and $t$, let $d=2t+1$. Then we can bound $\mathcal{C}_{B}(n,d)$ as
\begin{equation*}
\frac{n!}{\left |b_{B}\left(n,2t\right)\right|} \leqslant \mathcal{C}_{B}(n,d) \leqslant \frac{n!}{\left |b_{B}\left(n,t\right)\right|}.
\end{equation*}
\end{lemma}
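For instance, for $n=10$ and $t=1$ (so $d=3$), Lemma \ref{lem:ballsize} yields $\left|b_{B}(10,1)\right|\geqslant 9$ and $\left|b_{B}(10,2)\right|\leqslant 10\cdot 9\cdot 8=720$, and hence Lemma \ref{lem:GVSP} gives
\[
5040=\frac{10!}{720}\leqslant \mathcal{C}_{B}(10,3)\leqslant \frac{10!}{9}=403200.
\]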
In \cite{Yang2018Theoretical} several constructions of $(n,d)$-permutation codes with $d=2t+1$ were presented, including a code of size
$\frac{n!}{q^{2d-3}}$, where $n(n-1) \leqslant q \leqslant 2n(n-1)$ is a prime number. Moreover, \cite{Yang2018Theoretical} contains some explicit systematic constructions and decoding algorithms.
\section{Graph models}\label{sec:graph}
We use the standard terminologies and notations in graph theory. A graph $G$ consists of a set of vertices $V(G)$ and a set of edges $E(G)$.
Each edge is a pair of vertices. Two vertices $u$ and $v$ are called adjacent if there is an edge $\{u,v\}\in E\left(G\right)$.
We say that $H$ is a subgraph of $G$ if $V\left(H\right)\subset V(G)$ and $E(H)\subset E(G)$.
Furthermore if $H$ contains all edges of $G$ joining two vertices in $V(H)$, then $H$ is said
to be the subgraph of $G$ induced by $V(H)$. The neighborhood of a vertex $v$ is the set of all vertices adjacent to $v$, denoted by $\Gamma(v)$.
The neighborhood graph of $v$ is the subgraph induced by $\Gamma(v)$. The size $|\Gamma(v)|$ of the neighborhood is called the degree of the vertex $v$.
Let $\Delta(G)$ denote the maximum vertex degree. An independent set in a graph is a set of vertices where every
pair is nonadjacent. The size of the largest independent set in $G$ is called the independence number, denoted as $\alpha(G)$.
In this section we introduce a natural relationship between codes and independent sets of a corresponding graph. Take the set of all candidate codewords as the
vertex set of a graph, and connect two words via an edge whenever their distance is less than $d$. Then in an independent set of this graph, every two distinct
words have distance at least $d$. Thus we have a correspondence between independent sets and codes with minimum distance $d$. The problem of
estimating the maximal size of a code turns into analyzing the independence number of the corresponding graph. This well-known approach has already been
shown to be powerful in studying several kinds of codes. Take the permutation code under Hamming metric as an example. Gao et al. \cite{2013GaoImprovementofGV}
improved the Gilbert-Varshamov bound by a factor of $\Omega(\log n)$, when the minimum distance $d$ is fixed and $n$ goes to infinity. Tait et
al. \cite{vardy} improved the Gilbert-Varshamov bound by a factor of $\Omega(n)$, when $\frac{d}{n}$ is fixed and $n$ goes to infinity. Recently, Wang et
al. \cite{2016WangPerms} used a coloring approach to analyze the independence number and improved the Gilbert-Varshamov bound by a factor of $\Omega(n)$
when the minimum distance $d$ is fixed and $n$ goes to infinity.
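In our setting, the relevant graph can be made explicit as follows (the notation $G_{n,d}$ is introduced here for concreteness): let $G_{n,d}$ be the graph with
\[
V(G_{n,d})=S_n,\qquad E(G_{n,d})=\big\{\{\sigma,\pi\}\,:\,1\leqslant d_{B}(\sigma,\pi)\leqslant d-1\big\},
\]
so that an $(n,d)$-permutation code is exactly an independent set of $G_{n,d}$ and $\mathcal{C}_{B}(n,d)=\alpha(G_{n,d})$.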
Here we introduce some results about the independence number of locally sparse graphs. A graph is called triangle-free if and only if the neighborhood
of every vertex is an independent set.
Ajtai et al. \cite{1980AjtaiNoteRamsey} showed the relationship between the triangle-free property and the independence number in the following lemma.
\begin{lemma} Let $G$ be a graph with maximum degree $\Delta$. If $G$ is triangle-free, then we have
\begin{equation*}
\alpha(G)\geqslant \frac{|V(G)|}{8\Delta} \log_{2}\Delta.
\end{equation*}
\end{lemma}
\par In \cite{1985BollobasRandomgraphs} the lemma above was extended from triangle-free graphs to graphs with relatively few triangles.
\begin{lemma}
Let $G$ be a graph with maximum degree $\Delta$. If $G$ has at most $T$ triangles, then we have
\begin{equation*}
\alpha(G)\geqslant \frac {|V(G)|}{10\Delta}(\log_{2}\Delta- \frac{1}{2}\log_{2}(\frac{T}{|V(G)|})).
\end{equation*}
\end{lemma}
Note that a graph has relatively few triangles when the neighborhoods of its vertices are relatively sparse. Jiang and Vardy \cite{2004JiangGVbound}
generalized the results above to locally sparse graphs as follows.
\begin{lemma}\label{lem:JVsparse}
Let $G$ be a graph with maximum degree $\Delta$. Suppose for any vertex $v\in V(G)$, the subgraph induced by the neighborhood of $v$ has at most
$P$ edges, then we have
\begin{equation*}
\alpha(G)\geqslant \frac{|V(G)|}{10\Delta}(\log_{2}\Delta-\frac{1}{2}\log_{2}(\frac{P}{3})).
\end{equation*}
\end{lemma}
\section{An asymptotic improvement of the lower bound}\label{sec:TheoBound}
Before presenting the main results of this section, it should be noted that $\mathcal{C}_{B}(n,d)$ can be determined in some special cases.
\begin{theorem}
$\mathcal{C}_{B}(n,1)=n!$. $\mathcal{C}_{B}(n,2)=(n-1)!$. $\mathcal{C}_{B}(n,n-1)\leqslant n$ and equality holds if $n$ is not 3 or 5.
\end{theorem}
\begin{proof}
\begin{enumerate}
\item Trivially take all the permutations in $S_n$ and we have $\mathcal{C}_{B}(n,1)=n!$.
\item It is easy to check that for any two permutations $\pi$ and $\sigma$, $d_B(\pi,\sigma)=1$ if and only if $\sigma$ is a cyclic shift of $\pi$.
That is, if $\pi=(\pi(1),\dots,\pi(n))$ and $d_B(\pi,\sigma)=1$, then $\sigma$ is of the form $\sigma=(\pi(t),\dots,\pi(n),\pi(1),\dots,\pi(t-1))$ for some $2\leqslant t \leqslant n$.
Under the operation of cyclic shifting, $S_n$ is divided into $(n-1)!$ equivalence classes, each of which is known as a circular permutation.
By picking an arbitrary permutation from each equivalence class we obtain an $(n,2)$-permutation code of cardinality $(n-1)!$.
\item For any two distinct permutations $\pi$ and $\sigma$ in an $(n,n-1)$-permutation code, their characteristic sets are disjoint according to Lemma \ref{lem:db}.
Since each characteristic set is a subset of $\mathcal{P}_{n}$ of cardinality $n-1$ and $|\mathcal{P}_{n}|=n(n-1)$, the number of codewords is at most $n$.
\begin{enumerate}
\item Suppose $n$ is even, $n=2p$. Define $a_{2i-1}=2i-1$ for $1\leqslant i \leqslant p$ and $a_{2i}=2p-2i$ for $1\leqslant i \leqslant p-1$, i.e., $(a_{1},a_{2},\ldots,a_{n-1})=(1,2p-2,3,2p-4,\ldots,p,\ldots,4,2p-3,2,2p-1)$.
For every $1\leqslant i \leqslant n$, let the $i$-th codeword be $(i,i+a_{1},i+a_{1}+a_{2},\dots,i+\sum\limits_{j=1}^{k}a_{j},\ldots,i+\sum\limits_{j=1}^{n-1} a_{j})$, where each entry is taken modulo $n$ (and note that we use `$n$' instead of `$0$' for some entry).
It is routine to check that $\sum\limits_{j=1}^{2i}a_{j}\equiv -i \pmod{n}$ for $1\leqslant i \leqslant p-1$ and $\sum\limits_{j=1}^{2i-1}a_{j} \equiv i \pmod{n}$ for $1\leqslant i \leqslant p$. Therefore the partial sums $\sum\limits_{j=1}^{k}a_{j}$, $0\leqslant k\leqslant n-1$, are distinct modulo $n$, so the $n$ words defined above are indeed permutations in $S_{n}$.
Moreover, every ordered pair $(u,v)$ with $v-u\equiv a_{k} \pmod{n}$ appears exactly once among the characteristic sets, namely in the $i$-th codeword with $u\equiv i+\sum\limits_{j=1}^{k-1} a_j$ and $v\equiv i+\sum\limits_{j=1}^{k} a_j$; hence the characteristic sets of distinct codewords are disjoint and the code has minimum distance $n-1$.
\item Suppose $n$ is odd. To construct an $(n,n-1)$-permutation code of size $n$, consider the complete directed graph on the $n+1$ vertices $[n]\cup\{\infty\}$.
For each $\pi$, its characteristic set $A(\pi)$ represents a directed Hamiltonian path on the vertex set $[n]$; further add the edges $(\infty,\pi(1))$ and $(\pi(n),\infty)$ to $A(\pi)$.
Then each permutation corresponds to a directed Hamiltonian cycle on $[n]\cup\{\infty\}$, and an $(n,n-1)$-permutation code of size $n$ is equivalent
to a Hamiltonian decomposition of the complete directed graph on $[n]\cup\{\infty\}$. Hamiltonian decompositions are a well-studied topic; see, e.g., \cite{1980TimothyDecomposition}.
It has been shown that for odd integers $n \geqslant 7$, the edges of the complete directed graph on $n+1$ vertices can be partitioned into $n$ directed Hamiltonian circuits.
\end{enumerate}
Therefore, $\mathcal{C}_{B}(n,n-1)\leqslant n$ and equality holds if $n$ is even or $n\geqslant 7$ is odd. Moreover, it can be easily checked that $\mathcal{C}_{B}(3,2)=2$ and $\mathcal{C}_{B}(5,4)=4$.
\end{enumerate}
\end{proof}
\begin{remark}
When $n+1$ is prime, there is another construction of an $(n,n-1)$-permutation code of size $n$, different from the one in the proof above. Consider the code $\{(i,2i,\dots,(n-1)i,ni):1\leqslant i \leqslant n\}$, with each entry taken modulo $n+1$. It is straightforward to check that every ordered pair $(a,b)$ appears exactly once, namely in the $i$-th codeword with $i\equiv b-a \pmod{n+1}$.
\end{remark}
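As a quick sanity check of this remark, the following script (our own; it uses the identity $d_B(\pi,\sigma)=|A(\pi)\setminus A(\sigma)|$ recalled later from Lemma \ref{lem:db}) verifies the construction for $n=6$, where $n+1=7$ is prime.
\begin{verbatim}
# Verify the remark for n = 6: the codewords (i, 2i, ..., ni) mod 7
# have pairwise disjoint characteristic sets, hence pairwise block
# permutation distance n - 1 = 5.
from itertools import combinations

n = 6
code = [tuple(k * i % (n + 1) for k in range(1, n + 1))
        for i in range(1, n + 1)]

def char_set(pi):
    # characteristic set A(pi) = {(pi(j), pi(j+1)) : 1 <= j < n}
    return {(pi[j], pi[j + 1]) for j in range(len(pi) - 1)}

for pi, sigma in combinations(code, 2):
    assert len(char_set(pi) - char_set(sigma)) == n - 1  # d_B = n - 1
print("all", len(code), "codewords at pairwise distance", n - 1)
\end{verbatim}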
After settling these special cases, the rest of this section is devoted to improving the asymptotic lower bound on $\mathcal{C}_{B}(n,d)$ for a fixed constant $d\geqslant 3$, as $n$ approaches infinity.
The idea is to analyze the independence number of the corresponding block permutation graph, defined as follows.
\begin{definition}
For given positive integers $n$ and $d\geqslant 3$, the $(n,d)$-block permutation graph $\mathcal{G}_{n,d}$ is the graph with vertex set $S_n$ and edge set
$\{(\pi,\sigma):\pi\neq\sigma,d_B(\pi,\sigma)<d\}$.
\end{definition}
The codewords of an $(n,d)$-permutation code under the block permutation metric form an independent set in $\mathcal{G}_{n,d}$. Conversely, any
independent set in $\mathcal{G}_{n,d}$ is an $(n,d)$-permutation code. To obtain a lower bound on $\mathcal{C}_{B}(n,d)$ via the graph-theoretic approach of Lemma \ref{lem:JVsparse},
we need to calculate some parameters of the graph $\mathcal{G}_{n,d}$.
Let $\mathcal{H}_{n,d}$ be the subgraph induced by the neighborhood of the identity permutation $(1,2,3,\ldots,n)$, and let $R(n,k)$ be the set of all
permutations in $S_n$ at distance exactly $k$ from the identity, i.e.,
\begin{equation*}
R(n,k)=\{\sigma\in S_n : d_{B}(\sigma,id)=k \}.
\end{equation*}
Then the induced subgraph $\mathcal{H}_{n,d}$ has vertex set $V(\mathcal{H}_{n,d})=\bigcup\limits_{k=1}^{d-1} R(n,k)$. The size of $R(n,k)$ was determined in \cite{2002MyersCounting}.
\begin{lemma}\textup{\cite{2002MyersCounting}} For all integers $1\leqslant k\leqslant n-1$,
\begin{equation*}
|R(n,k)|=k!\binom{n-1}{k}\sum_{i=0}^{k}(-1)^{k-i}\frac{(i+1)}{(k-i)!}.
\end{equation*}
\end{lemma}
Since $\binom{n}{a}=\Theta(n^{a})$ when $a$ is a fixed positive integer and $n$ goes to infinity, we have asymptotically
$|R(n,k)|=\Theta(n^{k})$ for $1\leqslant k \leqslant d-1$, and thus the ball size $b_{B}(n,d-1)=\sum\limits_{k=0}^{d-1}|R(n,k)|=\Theta(n^{d-1})$ when $d$ is fixed and $n$ goes to infinity.
To apply Lemma \ref{lem:JVsparse}, we already have $|V(\mathcal{G}_{n,d})|=n!$, and $\mathcal{G}_{n,d}$ is a regular graph of degree $\Delta=b_{B}\left(n,d-1\right)-1=\Theta(n^{d-1})$.
The remaining parameter to compute is $P(n,d)$, the number of edges in the induced subgraph $\mathcal{H}_{n,d}$.
\begin{lemma} \label{lem:2d-3}
For a fixed positive integer $d \geqslant 3$, $P(n,d)=O(n^{2d-3})$ when $n$ goes to infinity.
\end{lemma}
\begin{proof}
The number of vertices in $R(n,k)$ is asymptotically $\Theta(n^{k})$. Thus the number of edges connecting some $\pi\in R(n,k_1)$ to some $\sigma\in R(n,k_2)$ is $O(n^{k_1+k_2})=O(n^{2d-3})$ whenever $k_1+k_2 \leqslant 2d-3$. Therefore, to prove the lemma it suffices to bound the number of edges connecting two vertices $\pi,\sigma\in R(n,d-1)$.
Consider the characteristic sets of such $\pi$ and $\sigma$. $|A(id)\setminus A(\pi)|=|A(id)\setminus A(\sigma)|=d-1$. Let $x(\pi,\sigma)$ be the number of consecutive pairs in $A(id)$ contained in neither $A(\pi)$ nor $A(\sigma)$, i.e.,
$$x(\pi,\sigma)=|\big(A(id)\setminus A(\pi)\big)\cap \big(A(id)\setminus A(\sigma)\big)|.$$
For a fixed $\pi\in R(n,d-1)$, the number of permutations $\sigma\in R(n,d-1)$ with $x(\pi,\sigma)=x$ is at most ${d-1 \choose x}{n-d\choose {d-1-x}}=O(n^{d-1-x})$, since $A(id)\setminus A(\sigma)$ contains exactly $x$ pairs out of the $d-1$ pairs in $A(id)\setminus A(\pi)$ and $d-1-x$ pairs out of the $n-d$ pairs in $A(id)\cap A(\pi)$. Recalling that $|R(n,d-1)|=\Theta(n^{d-1})$, the number of edges connecting $\pi,\sigma \in R(n,d-1)$ with $1\leqslant x(\pi,\sigma)\leqslant d-1$ is $O(n^{2d-3})$. Therefore, it remains to bound the number of edges connecting some $\pi\in R(n,d-1)$ to some $\sigma\in R(n,d-1)$ with $x(\pi,\sigma)=0$. We claim that in fact no such edges exist.
Since $x(\pi,\sigma)=0$, we have $\big(A(id)\setminus A(\sigma)\big)\subset \big(A(\pi)\setminus A(\sigma)\big)$ and thus $d_B(\pi,\sigma)\geqslant d-1$. If $\pi$ and $\sigma$ are connected, then necessarily $d_B(\pi,\sigma)=d-1$ and
$$A(id)\setminus A(\sigma)~=~A(\pi)\setminus A(\sigma)$$
and simultaneously
$$A(id)\setminus A(\pi)~=~A(\sigma)\setminus A(\pi).$$
Now consider the $n-d$ pairs in $A(\pi)\cap A(\sigma)$. Form the graph $\mathcal{G}$ on the vertex set $[n]$ whose directed edges are the pairs $(x,y)\in A(\pi)\cap A(\sigma)$. The union of $A(\pi)\cap A(\sigma)$ and $A(id)\setminus A(\sigma)$ is $A(\pi)$, the directed Hamiltonian path corresponding to $\pi$. Therefore $\mathcal{G}$ is a union of $d$ vertex-disjoint directed paths (isolated vertices may exist, and each isolated vertex also counts as a directed path), where the $j$-th path is denoted $P_j=(x_j\rightarrow\cdots\rightarrow y_j)$, starting at $x_j$ and ending at $y_j$, $1\leqslant j \leqslant d$. The directed Hamiltonian path corresponding to $\pi$ is then a concatenation of these paths, and without loss of generality it can be written as $P_1\rightarrow P_2\rightarrow \cdots \rightarrow P_d$. Since the edges connecting the $P_j$'s come from $A(id)\setminus A(\sigma)$, it follows that $x_{j+1}=y_j+1$ for $1\leqslant j \leqslant d-1$.
Since the directed Hamiltonian path corresponding to $\sigma$ is also formed by using the $d-1$ edges in $A(id)\setminus A(\pi)$ to connect the $P_j$'s, there are only two cases. If $x_1\neq y_d+1$, then there is only one way to connect the $P_j$'s via edges corresponding to consecutive pairs, i.e., $\sigma=\pi$, contradicting that $\pi$ and $\sigma$ are distinct. Otherwise $x_1=y_d+1$ and the directed Hamiltonian path corresponding to $\sigma$ is of the form $P_t\rightarrow P_{t+1}\rightarrow \cdots \rightarrow P_d \rightarrow P_1 \rightarrow \cdots \rightarrow P_{t-1}$. But then, since $d\geqslant 3$, $\sigma$ would contain the $d-2$ connecting edges $\{(y_j,x_{j+1}): j\neq t-1, 1\leqslant j \leqslant d-1\}$ of $\pi$; these edges belong to $A(\pi)\setminus A(\sigma)=A(id)\setminus A(\sigma)$ and hence cannot lie in $A(\sigma)$, a contradiction.
Therefore such edges do not exist at all, and the total number of edges in the graph $\mathcal{H}_{n,d}$ is $P(n,d)=O(n^{2d-3})$.
\end{proof}
Now we are ready to apply Lemma \ref{lem:JVsparse} to obtain the new lower bound of $\mathcal{C}_{B}(n,d)$.
\begin{theorem}
When $d \geqslant 3$ is fixed and $n$ goes to infinity, there exists an $(n,d)$-permutation code under the block permutation metric of size
$$\mathcal{C}_{B}(n,d)=\alpha(\mathcal{G}_{n,d})\geqslant \frac{n!}{10\Delta}(\log_{2}\Delta-\frac{1}{2}\log_{2}(\frac{P(n,d)}{3})) = \Omega(\frac{n!\log{n}}{n^{d-1}}).$$
In particular, this improves the Gilbert-Varshamov bound by a factor of $\Omega(\log(n)).$
\end{theorem}
\begin{proof}
Using our graph notation, the Gilbert-Varshamov bound is
\begin{equation*}
A_{GV}(n,d):=\frac{n!}{1+\Delta(n,d)}=\Theta(\frac{n!}{n^{d-1}}).
\end{equation*}
By Lemma \ref{lem:JVsparse} and Lemma \ref{lem:2d-3}, we have
\begin{flalign}
\frac{\alpha(\mathcal{G}_{n,d})}{A_{GV}(n,d)} & \geqslant
\frac{\frac{n!}{10\Delta(n,d)}(\log_{2}\Delta(n,d)-\frac{1}{2}\log_{2}(\frac{P(n,d)}{3}))}{\frac{n!}{1+\Delta(n,d)}} \nonumber \\
& \geqslant \frac{1}{10} \log_{2}\left(\frac{\Delta(n,d)}{\sqrt{P(n,d)/3}}\right) \geqslant \frac{1}{10} \log_{2}\left(\frac{c_{b}n^{d-1}}{c_{s}n^{d-\frac{3}{2}}}\right)
=c\log(n), \nonumber
\end{flalign}
where $c_{b}$, $c_{s}$ and $c$ are positive constants independent of $n$. Hence
$$\frac{\alpha(\mathcal{G}_{n,d})}{A_{GV}(n,d)}=\Omega(\log(n)).$$
\end{proof}
\section{Construction} \label{sec:encoding}
In this section, we propose a new construction of permutation codes under the block permutation metric. The main idea is inspired by constructions of constant-weight binary codes under the Hamming metric.
Recall that $\mathcal{P}_n=\{(x,y):x\neq y,\; x,y\in[n]\}$ and $|\mathcal{P}_n|=n(n-1)$. Let $q\geqslant n(n-1)/2$ be a prime number;
by \emph{Bertrand's postulate}, such a $q$ always exists with $n(n-1)/2\leqslant q \leqslant n(n-1)$.
Let $\mathcal{V}:\mathcal{P}_n\rightarrow \mathbb{F}_q$ be a map from $\mathcal{P}_n$ to the finite field $\mathbb{F}_q$ such that for distinct pairs $(x,y)$ and $(x',y')$, $\mathcal{V}(x,y)=\mathcal{V}(x',y')$ if and only if
$x'=y$ and $y'=x$. The range of $\mathcal{V}$ then has size $n(n-1)/2$, so such a map exists precisely because $q\geqslant n(n-1)/2$.
Then for any permutation $\pi\in S_n$, $\mathcal{V}$ maps its characteristic set $A(\pi)=\{(\pi(i),\pi(i+1))\mid 1\leqslant i < n\}$ to $\{\mathcal{V}(\pi(i),\pi(i+1))\mid 1\leqslant i < n\}$, a subset of $\mathbb{F}_q$ of cardinality $n-1$. Denote these $n-1$ elements by $\gamma_{1},\gamma_{2},\dots,\gamma_{n-1}$.
We then define a map $F$ from $S_n$ to $\mathbb{F}_{q}^{d-1}$ as follows:
\begin{equation*}
F(\pi)=(F_{1}(\pi),F_{2}(\pi),...,F_{d-1}(\pi)),
\end{equation*}
where
\begin{flalign*}\label{map:F}
F_{1}(\pi)&=\sum_{1\leqslant i \leqslant n-1}\gamma_{i}, \\
F_{2}(\pi)&=\sum_{1\leqslant i < j \leqslant n-1}\gamma_{i}\gamma_{j}, \\
F_{3}(\pi)&=\sum_{1\leqslant i < j < k \leqslant n-1}\gamma_{i}\gamma_{j}\gamma_{k}, \\
&\;\;\vdots \nonumber \\
F_{d-1}(\pi)&=\sum_{1\leqslant i_{1} < \cdots < i_{d-1} \leqslant n-1}\gamma_{i_{1}}\cdots\gamma_{i_{d-1}}.
\end{flalign*}
That is, $F_{k}(\pi)$ is the $k$-th elementary symmetric polynomial of $\gamma_{1},\dots,\gamma_{n-1}$ over $\mathbb{F}_q$.
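The map $F$ is cheap to evaluate: the elementary symmetric polynomials are the coefficients of $\prod_i(t+\gamma_i)$ and can be accumulated by the standard recurrence, truncated at degree $d-1$. The following sketch (our own; the particular pairing map $\mathcal{V}$ below is just one valid choice satisfying the requirement above) computes $F(\pi)$.
\begin{verbatim}
# Sketch: evaluate F(pi) = (F_1, ..., F_{d-1}) over F_q.
def make_V(n, q):
    # V(x, y) = V(y, x), with distinct unordered pairs getting distinct
    # values: enumerate the n(n-1)/2 unordered pairs (q >= n(n-1)/2)
    pairs = [(x, y) for x in range(1, n + 1) for y in range(x + 1, n + 1)]
    idx = {p: j for j, p in enumerate(pairs)}
    return lambda x, y: idx[(min(x, y), max(x, y))]

def F(pi, d, q, V):
    gammas = [V(pi[j], pi[j + 1]) % q for j in range(len(pi) - 1)]
    e = [1] + [0] * (d - 1)            # e[k] = e_k of the gammas seen so far
    for m, g in enumerate(gammas, 1):
        for k in range(min(m, d - 1), 0, -1):  # truncate at degree d - 1
            e[k] = (e[k] + g * e[k - 1]) % q
    return tuple(e[1:])                # (F_1, ..., F_{d-1})
\end{verbatim}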
\begin{theorem}
For any two distinct permutations $\pi,\sigma \in S_n$, if $F(\pi)=F(\sigma)$, then $d_B(\pi,\sigma)\geqslant d$.
\end{theorem}
\begin{proof}
Suppose on the contrary that there exist two distinct permutations $\pi,\sigma \in S_n$ such that $F(\pi)=F(\sigma)$ and $d_B(\pi,\sigma)=\delta<d$.
Recall that $d_B(\pi,\sigma)=|A(\pi)\setminus A(\sigma)|= |A(\sigma)\setminus A(\pi)|$.
Thus $\mathcal{V}$ maps the set $A(\pi)\setminus A(\sigma)$ to a set $\{\alpha_{1},\alpha_{2},\dots,\alpha_{\delta}\}$ and, similarly,
maps the set $A(\sigma)\setminus A(\pi)$ to a set $\{\beta_{1},\beta_{2},\dots,\beta_{\delta}\}$.
The condition $F(\pi)=F(\sigma)$ implies the following equalities, by induction on $k$ after factoring out the contributions of the $n-1-\delta$ common elements $\mathcal{V}(A(\pi)\cap A(\sigma))$:
\begin{flalign*}
\zeta_{1} &=\sum_{1\leqslant i \leqslant \delta}\alpha_{i}=\sum_{1\leqslant i \leqslant \delta}\beta_{i}, \\
\zeta_{2} &=\sum_{1\leqslant i < j \leqslant \delta}\alpha_{i}\alpha_{j}=\sum_{1\leqslant i < j \leqslant \delta}\beta_{i}\beta_{j}, \\
& \ldots \\
\zeta_{d-1} &=\sum_{i_{1}<\ldots<i_{d-1}}\alpha_{i_{1}}\ldots\alpha_{i_{d-1}}=\sum_{i_{1}<\ldots<i_{d-1}}\beta_{i_{1}}\ldots\beta_{i_{d-1}}.
\end{flalign*}
Consider the polynomial $x^{\delta}-\zeta_{1}x^{\delta -1}+\zeta_{2}x^{\delta -2}-\cdots +(-1)^{\delta}\zeta_{\delta}=\prod_{1\leqslant i \leqslant \delta}(x-\alpha_i)=\prod_{1\leqslant i \leqslant \delta}(x-\beta_i)$.
Then $\{\alpha_{1},\alpha_{2},\dots,\alpha_{\delta}\}$ and $\{\beta_{1},\beta_{2},\dots,\beta_{\delta}\}$ are both the multiset of roots of this polynomial, and thus the two sets are identical.
Consider the complete directed graph with vertex set $[n]$, where each permutation corresponds to the directed Hamiltonian path given by its characteristic set. The path corresponding to $\pi$ and the path corresponding to $\sigma$ share the $n-1-\delta$ directed edges in $A(\pi)\cap A(\sigma)$. By the defining property of the map $\mathcal{V}$, the set $\mathcal{E}$ of edges (ignoring directions for the moment) corresponding to the pairs $\{\alpha_{1},\alpha_{2},\dots,\alpha_{\delta}\}=\{\beta_{1},\beta_{2},\dots,\beta_{\delta}\}$ is uniquely determined. Given the directions of the edges in $A(\pi)\cap A(\sigma)$, there is a unique way to choose the directions of the edges in $\mathcal{E}$ so as to obtain a Hamiltonian path. Therefore $\pi$ must equal $\sigma$, a contradiction.
\end{proof}
Therefore, we can construct $(n,d)$-permutation codes under block permutation metric as follows.
\begin{theorem}\label{thm:constructcode}
For every $\mathbf{f}\in\mathbb{F}_{q}^{d-1}$, $C_{\mathbf{f}}(n,d)=\{\pi|\pi\in S_{n}, F(\pi)=\mathbf{f}\}$ is an $(n,d)$-permutation code under block permutation metric.
\end{theorem}
Considering all vectors $\mathbf{f}\in\mathbb{F}_{q}^{d-1}$, the collection $\{C_{\mathbf{f}}(n,d): \textbf{f}\in \mathbb{F}_{q}^{d-1}\}$ is a partition of $S_{n}$ in which each
part $C_{\emph{\textbf{f}}}(n,d)$ is a permutation code under the block permutation metric. Let $C_{\mathbf{f}_{\max}}(n,d)$ be a part of
maximal size; by the pigeonhole principle, $|C_{\mathbf{f}_{\max}}(n,d)| \geqslant \frac{n!}{|\mathbb{F}_{q}^{d-1}|} =
\frac{n!}{q^{d-1}} \geqslant \frac{n!}{n^{2d-2}}$, since $q\leqslant n(n-1)<n^2$.
In \cite{Yang2018Theoretical}, Yang et al. constructed a permutation code of size $\frac{n!}{q^{2d-3}}=\Theta(\frac{n!}{n^{4d-6}})$, where $q$ is a prime number
with $n(n-1) \leqslant q \leqslant 2n(n-1)$. Our construction therefore improves the size of permutation codes by a factor of $\Theta(n^{2d-4})$.
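For small parameters the pigeonhole argument can be carried out explicitly. The following self-contained script (our own end-to-end sketch of Theorem \ref{thm:constructcode}, with an illustrative choice of $\mathcal{V}$) partitions $S_5$ by $F$ for $d=3$ and $q=11$ and checks the minimum distance of the largest class; at this toy scale the pigeonhole guarantee $n!/q^{d-1}<1$ is vacuous, but the minimum-distance property of each class is already visible.
\begin{verbatim}
# End-to-end check of the construction for n = 5, d = 3, q = 11
# (a prime with q >= n(n-1)/2 = 10).
from itertools import combinations, permutations
from collections import defaultdict

n, d, q = 5, 3, 11
pairs = [(x, y) for x in range(1, n + 1) for y in range(x + 1, n + 1)]
V = {p: j for j, p in enumerate(pairs)}   # V(x,y) = V(y,x)

def F(pi):
    gammas = [V[tuple(sorted((pi[j], pi[j + 1])))] % q for j in range(n - 1)]
    e = [1] + [0] * (d - 1)
    for m, g in enumerate(gammas, 1):
        for k in range(min(m, d - 1), 0, -1):
            e[k] = (e[k] + g * e[k - 1]) % q
    return tuple(e[1:])

def d_B(pi, sigma):                        # = |A(pi) \ A(sigma)|
    A = lambda p: {(p[j], p[j + 1]) for j in range(n - 1)}
    return len(A(pi) - A(sigma))

classes = defaultdict(list)
for pi in permutations(range(1, n + 1)):
    classes[F(pi)].append(pi)
code = max(classes.values(), key=len)
assert all(d_B(pi, sigma) >= d for pi, sigma in combinations(code, 2))
print("largest class has", len(code), "codewords, min distance >=", d)
\end{verbatim}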
\section{An upper bound} \label{sec:UpperBound}
In this section, we obtain a new upper bound by analyzing the characteristic sets of the codewords.
Recall that for each permutation $\pi\in S_n$, its characteristic set $A(\pi)=\{ (\pi(i),\pi(i+1))|1\leqslant i <n \}$ is a subset of
$\mathcal{P}_{n}$ of cardinality $|A(\pi)|=n-1$.
Denote $I(\pi_{1},\pi_{2})=|A(\pi_{1})\cap A(\pi_{2})|$; then we have
\begin{lemma}\label{lem:intersection}
For any $\pi_{1},\pi_{2}\in S_n$, $d_{B}(\pi_{1},\pi_{2})\geqslant d$ if and only if $I(\pi_{1},\pi_{2}) \leqslant n-d-1$.
\end{lemma}
Given an $(n,d)$-permutation code $\mathcal{C}$, let $\mathcal{F}$ be the collection of the characteristic sets of its codewords, i.e.,
$\mathcal{F}=\{ A(\pi)\mid\pi\in\mathcal{C} \}$. We thus translate the problem of bounding the code size into the following extremal set theory problem:
find the maximal size of a family $\mathcal{F}$ of $(n-1)$-subsets of $\mathcal{P}_{n}$ such that any two members intersect in at most $n-d-1$ elements.
Then we can obtain an upper bound of a new type as follows.
\begin{theorem}\label{thm:upperbound}
For given integers $n$ and $d$,
\begin{equation*}
|\mathcal{F}|\leqslant \frac{\binom{n}{d}\binom{n}{d}(n-d)!}{\binom{n-1}{n-d}}.
\end{equation*}
\end{theorem}
\begin{proof}
Let $T(n,d)$ be the family of all possible $(n-d)$-subsets of some $A(\pi)$, $\pi\in S_n$. Each $A(\pi)\in\mathcal{F}$ contains $\binom{n-1}{n-d}$
such subsets. By Lemma \ref{lem:intersection}, any $(n-d)$-subset in $T(n,d)$ is contained in the characteristic set of at most one codeword.
Therefore $|\mathcal{F}|\binom{n-1}{n-d}\leqslant |T(n,d)|$.
The remaining problem is to estimate $|T(n,d)|$. For each set $A\in T(n,d)$, consider the $n\times n$ matrix $M=(m_{i,j})$ where
\begin{equation*}
m_{i,j}=
\begin{cases}
1, & \mbox{if pair } (i,j)\in A, \\
0, & \mbox{otherwise}.
\end{cases}
\end{equation*}
Since $A$ is an $(n-d)$-subset of some $A(\pi)$, $\pi\in S_n$, the matrix contains exactly $n-d$ ones, with at most one `1' in each row and each column.
The number of distinct sets $A$ is therefore upper bounded by the number of ways to select $n-d$ rows and $n-d$ columns and place a permutation matrix on the chosen sub-matrix.
Hence $|T(n,d)|\leqslant \binom{n}{n-d}\binom{n}{n-d}(n-d)!=\binom{n}{d}\binom{n}{d}(n-d)!$, and therefore
$|\mathcal{F}|\leqslant\frac{\binom{n}{d}\binom{n}{d}(n-d)!}{\binom{n-1}{n-d}}$.
\end{proof}
Denote the sphere-packing bound by $A_{SP}(n,2t+1)$. By Lemma \ref{lem:ballsize} and Lemma \ref{lem:GVSP}, if $t \leqslant n-\sqrt{n}-1$, it falls in the
range
$$\frac{n!}{\prod\limits_{i=0}^{t}(n-i)}\leqslant A_{SP}(n,2t+1) \leqslant \frac{n!}{\prod\limits_{i=1}^{t}(n-i)}.$$
Denote our new upper bound $\frac{\binom{n}{d}\binom{n}{d}(n-d)!}{\binom{n-1}{n-d}}$ by $A_{new}(n,d)$.
\begin{corollary}
Given $n$ and $d=2t+1$, if $t\leqslant n-\sqrt{n}-1$, $n\cdot\prod\limits_{i=0}^{t}\left(n-i\right)\leqslant d\cdot d!$ and $d\leqslant n-1$, then
$A_{new}(n,d)\leqslant A_{SP}(n,d)$.
\end{corollary}
In Table \ref{table:UP} we list several small parameter settings as supporting evidence that the new bound in Theorem \ref{thm:upperbound} outperforms the
sphere-packing bound when $d$ is relatively close to $n$. Note that the sphere-packing entries in this table are themselves only lower bounds on the sphere-packing bound: each entry asserts that the code size is upper bounded by some value $x$, where $x$ is at least the value shown. For example, the size of a $(13,9)$-code is upper bounded by some $x\geqslant 40320$; this does not by itself imply an upper bound of $40320$. Our new result shows that the size of a $(13,9)$-code is at most $24787$, which is therefore a genuine improvement over the sphere-packing bound.
\begin{table}[!h]
\centering
\caption{A comparison of the new bound and the sphere-packing bound for some small parameters} \label{table:UP}
\begin{tabular}{cccc|cccc}
\toprule
n & d & Sphere-packing bound & Theorem \ref{thm:upperbound} &n & d & Sphere-packing bound & Theorem \ref{thm:upperbound}\\
\midrule
13 & 9 & $\geqslant$ 40320 & {\bfseries 24787} &18 & 11 & $\geqslant$ 479001600 & {\bfseries 262461363} \\
15 & 11 & $\geqslant$ 362880 & {\bfseries 44672} &18 & 13 & $\geqslant$ 39916800 & {\bfseries 1423607}\\
16 & 11 & $\geqslant$ 3628800 & {\bfseries 762415} & 19 & 11 & $\geqslant 6227020800$ & {\bfseries 5263805324}\\
17 & 11 & $\geqslant$ 39916800 & {\bfseries 13771113} &19 & 13 & $\geqslant$ 479001600 & {\bfseries 28551213}\\
17 & 13 & $\geqslant$ 3628800 & {\bfseries 74696} & 20 & 13 & $\geqslant 6227020800$ & {\bfseries 601078154}\\
\bottomrule
\end{tabular}
\end{table}
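The last columns of Table \ref{table:UP} can be reproduced directly from the formula in Theorem \ref{thm:upperbound}. The short script below (ours) evaluates $A_{new}(n,d)$ with exact rational arithmetic; the printed table entries agree with these exact values up to rounding to a nearby integer.
\begin{verbatim}
# Evaluate the new upper bound A_new(n, d) from Theorem thm:upperbound.
from fractions import Fraction
from math import comb, factorial

def A_new(n, d):
    return Fraction(comb(n, d) ** 2 * factorial(n - d), comb(n - 1, n - d))

for n, d in [(13, 9), (15, 11), (16, 11), (17, 11), (17, 13),
             (18, 11), (18, 13), (19, 11), (19, 13), (20, 13)]:
    print(n, d, float(A_new(n, d)))
\end{verbatim}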
\section{Conclusion}\label{sec:conclusion}
In this paper, we establish a correspondence between permutation codes and independent sets of block permutation graphs. Using this approach,
we improve the Gilbert-Varshamov bound asymptotically by a factor of $\Omega(\log n)$ when the minimum distance $d$ is fixed and $n$ goes to infinity.
As for the upper bound, we clarify the relationship between the block permutation distance of two permutations and the intersection of their characteristic
sets. Via counting arguments, we derive an upper bound of a new type, which beats the sphere-packing bound when $d$ is relatively close to $n$.
Moreover, we prove the existence of a permutation code improving the size of the best known construction by a factor of $\Theta(n^{2d-4})$.
Explicit encoding schemes achieving this size are left for future research.
\bibliographystyle{plain}
| {
"timestamp": "2018-11-13T02:17:16",
"yymm": "1811",
"arxiv_id": "1811.04600",
"language": "en",
"url": "https://arxiv.org/abs/1811.04600",
"abstract": "Permutation codes under different metrics have been extensively studied due to their potentials in various applications. Generalized Cayley metric is introduced to correct generalized transposition errors, including previously studied metrics such as Kendall's $\\tau$-metric, Ulam metric and Cayley metric as special cases. Since the generalized Cayley distance between two permutations is not easily computable, Yang et al. introduced a related metric of the same order, named the block permutation metric. Given positive integers $n$ and $d$, let $\\mathcal{C}_{B}(n,d)$ denote the maximum size of a permutation code in $S_n$ with minimum block permutation distance $d$. In this paper, we focus on the theoretical bounds of $\\mathcal{C}_{B}(n,d)$ and the constructions of permutation codes under block permutation metric. Using a graph theoretic approach, we improve the Gilbert-Varshamov type bound by a factor of $\\Omega(\\log{n})$, when $d$ is fixed and $n$ goes into infinity. We also propose a new encoding scheme based on binary constant weight codes. Moreover, an upper bound beating the sphere-packing type bound is given when $d$ is relatively close to $n$.",
"subjects": "Information Theory (cs.IT); Combinatorics (math.CO)",
"title": "New Theoretical Bounds and Constructions of Permutation Codes under Block Permutation Metric",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982287698185481,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7089594727950762
} |
https://arxiv.org/abs/2210.17070 | Private optimization in the interpolation regime: faster rates and hardness results | In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones; we show that generally similar improvements are impossible in the private setting. However, when the functions exhibit quadratic growth around the optimum, we show (near) exponential improvements in the private sample complexity. In particular, we propose an adaptive algorithm that improves the sample complexity to achieve expected error $\alpha$ from $\frac{d}{\varepsilon \sqrt{\alpha}}$ to $\frac{1}{\alpha^\rho} + \frac{d}{\varepsilon} \log\left(\frac{1}{\alpha}\right)$ for any fixed $\rho >0$, while retaining the standard minimax-optimal sample complexity for non-interpolation problems. We prove a lower bound that shows the dimension-dependent term is tight. Furthermore, we provide a superefficiency result which demonstrates the necessity of the polynomial term for adaptive algorithms: any algorithm that has a polylogarithmic sample complexity for interpolation problems cannot achieve the minimax-optimal rates for the family of non-interpolation problems. | \section{Introduction} \label{sec:intro}
We study differentially private stochastic convex optimization (DP-SCO), where given a dataset $\mc{S} = S_1^n \simiid P$ we wish to solve
\begin{equation}
\label{eqn:objective}
\begin{split}
\minimize ~ & f(x) = \E_P[F(x;\statrv)]
= \int_\statdomain F(x; \statval) dP(\statval) \\
\subjectto ~ & x \in \mc{X},
\end{split}
\end{equation}
while guaranteeing differential privacy. In problem~\eqref{eqn:objective}, $\xd \subset \R^d$ is the parameter space, $\statdomain$ is a sample space, and $\{F(\cdot;\statval):
\statval\in\statdomain \}$ is a collection of convex losses.
We study the interpolation setting, where there exists a solution that simultaneously minimizes all of the sample losses.
Interpolation problems are ubiquitous in machine learning applications: for example, least squares problems with consistent solutions~\cite{StrohmerVe09,NeedellWaSr14}, and problems with over-parametrized models where a perfect predictor exists~\cite{MaBaBe18,BelkinHsMi18,BelkinRaTs19}. This has led to a great deal of work on the advantages and implications of interpolation~\cite{SrebroSrTe10,CotterShSrSr11,BelkinHsMi18,BelkinRaTs19}.
For non-private SCO, interpolation problems allow significant improvements in convergence rates over generic problems~\cite{SrebroSrTe10,CotterShSrSr11,MaBaBe18,VaswaniBaSc19,WoodworthSr21}. For general convex functions, \citet{SrebroSrTe10} develop algorithms that obtain $O(\frac{1}{n})$ sub-optimality, improving over the minimax-optimal rate $O(\frac{1}{\sqrt{n}})$ for non-interpolation problems. Even more dramatic improvements are possible when the functions exhibit growth around the minimizer, as~\citet{VaswaniBaSc19} show that SGD achieves exponential rates in this setting compared to polynomial rates without interpolation. \cite{AsiDu19siopt, AsiChChDu20, ChadhaChDu22} extend these fast convergence results to model-based optimization methods.
Despite this recent progress and the increased interest in interpolation problems, the private setting remains poorly understood. Although substantial progress has been made in characterizing tight convergence guarantees for a variety of settings in DP optimization~\cite{BassilySmTh14,BassilyFeTaTh19,FeldmanKoTa20,AsiFeKoTa21,AsiLeDu21}, we have little understanding of private optimization in the growing class of interpolation problems.
Given (i) the importance of differential privacy and interpolation problems in modern machine learning, (ii) the often painfully slow rates of private optimization algorithms, and (iii) the faster rates possible for non-private interpolation problems, the interpolation setting presents a real opportunity for significant speedups in the private setting. This motivates the following two questions: first, is it possible to improve the rates for DP-SCO in the interpolation regime? And if so, what are the optimal rates?
\subsection{Our contributions}
We answer both questions. In particular, we show that
\begin{enumerate}
\item \textbf{No improvements in general} (\Cref{sec:no-growth}): our first result is a hardness result demonstrating that the rates cannot be improved for DP-SCO in the interpolation regime with general convex functions. More precisely, we prove a lower bound of $\Omega(\frac{d}{n \diffp})$ on the excess loss for pure differentially private algorithms. This shows that existing algorithms achieve optimal private rates for this setting.
\item \textbf{Faster rates with growth} (\Cref{sec:growth}): when the functions exhibit quadratic growth around the minimizer, that is, $f(x) - f(x^\star) \ge \lambda \norms{x-x^\star}_2^2$ for some $\lambda >0$, we propose an algorithm that achieves near-exponentially small excess loss, improving over the polynomial rates in the non-interpolation setting. Specifically, we show that the sample complexity to achieve expected excess loss $\alpha>0$ is $O(\frac{1}{\alpha^\rho} + \frac{d}{\diffp} \log\paren{\frac{1}{\alpha}})$ for pure DP and $O(\frac{1}{\alpha^\rho} + \frac{\sqrt{d \log(1/\delta)}}{\diffp} \log\paren{\frac{1}{\alpha}})$ for \ed-DP, for any fixed $\rho>0$. This improves over the sample complexity for non-interpolation problems with growth which is $O(\frac{1}{\alpha} + \frac{d}{\diffp \sqrt{\alpha}})$.
We also present new algorithms that improve the rates for interpolation problems with the weaker $\kappa$-growth assumption~\cite{AsiLeDu21} for $\growth > 2$ where we achieve excess loss $O( ( \frac{1}{\sqrt{n}} + \frac{d}{n \diffp} )^{\frac{\kappa}{\kappa - 2}} )$, compared to the previous bound $O( ( \frac{1}{\sqrt{n}} + \frac{d}{n \diffp} )^{\frac{\kappa}{\kappa - 1}} )$ without interpolation.
\item \textbf{Adaptivity to interpolation} (\Cref{sec:growth-adap}):
While these improvements in the interpolation regime are important, practitioners typically cannot tell whether the dataset they are working with is an interpolating one.
Thus, it is crucial that these algorithms do not fail when given a non-interpolating dataset.
We show that our algorithms are adaptive to interpolation, obtaining these better rates for interpolation while simultaneously retaining the standard minimax optimal rates for non-interpolation problems.
\item \textbf{Tightness} (\Cref{sec:super}): finally, we provide a lower bound and a super-efficiency result that demonstrate the (near) tightness of our upper bounds showing sample complexity $\Omega(\frac{d}{\diffp} \log\paren{\frac{1}{\alpha}}) $ is necessary for interpolation problems with pure DP. Moreover, our super-efficiency result shows that the polynomial dependence on $1/\alpha$ in the sample complexity is necessary for adaptive algorithms: any algorithm that has a polylogarithmic sample complexity for interpolation problems cannot achieve minimax-optimal rates for non-interpolation problems.
\end{enumerate}
\subsection{Related work}
Over the past decade, many works~\cite{ChaudhuriMoSa11,DuchiJoWa13_focs,SmithTh13lasso, BassilySmTh14,
AbadiChGoMcMiTaZh16, BassilyFeTaTh19, FeldmanKoTa20,AsiFeKoTa21,AsiDuFaJaTa21,BassilyFeGuTa20} have studied private convex optimization.
\citet{ChaudhuriMoSa11} and \citet{BassilySmTh14} study the closely related problem of differentially private empirical risk minimization (DP-ERM), where the goal is to minimize the empirical loss, and obtain minimax-optimal rates of $d/n\diffp$ for pure DP and ${\sqrt{d \log(1/\delta)}}/{n \diffp}$ for $(\diffp,\delta)$-DP. More recent papers have moved beyond DP-ERM to privately minimizing the population loss (DP-SCO)~\cite{BassilyFeTaTh19,FeldmanKoTa20,AsiFeKoTa21,AsiDuFaJaTa21,BassilyGuNa21,AsiLeDu21}. \citet{BassilyFeTaTh19} were the first to obtain the optimal rate $1/\sqrt{n}+ {\sqrt{d \log(1/\delta)}}/{n \diffp}$ for \ed-DP, and subsequent papers developed more efficient algorithms achieving the same rates~\cite{FeldmanKoTa20,BassilyFeGuTa20}. Other papers study DP-SCO in further settings, including non-Euclidean geometry~\cite{AsiFeKoTa21,AsiDuFaJaTa21}, heavy-tailed data~\cite{WangXiDeXu20}, and functions with growth~\cite{AsiLeDu21}. However, to the best of our knowledge, no prior work in private optimization has studied the interpolation regime.
On the other hand, the optimization literature has seen numerous papers on the interpolation regime~\cite{SrebroSrTe10,CotterShSrSr11,MaBaBe18,VaswaniBaSc19,LiuBe20,WoodworthSr21}. \citet{SrebroSrTe10} propose algorithms that roughly achieve the rate $1/n + \sqrt{f\opt/n}$ for smooth convex functions, where $f\opt = \min_{x \in \xd} f(x)$. In the interpolation regime with $f\opt=0$, this yields loss $1/n$, improving over the standard $1/\sqrt{n}$ rate for non-interpolation problems. Moreover, \citet{VaswaniBaSc19} studied the interpolation regime for functions with growth and showed that SGD enjoys linear convergence (exponential rates). More recently, several papers have investigated acceleration-based algorithms in the interpolation regime~\cite{LiuBe20,WoodworthSr21}.
\section{Preliminaries}\label{sec:prelim}
We begin with notation that will be used throughout the paper and provide some standard definitions from convex analysis and differential privacy.
\paragraph{Notation}
We let $n$ denote the sample size and $d$ the dimension. We let $\param$ denote the optimization variable and $\paramdomain \subset \R^d$ the constraint set. Samples $\statval$ are drawn from $\statdomain$, and $\statrv$ is an $\statdomain$-valued random variable. For each sample $\statval \in
\statdomain$, $F(\cdot; \statval): \R^d \rightarrow \R \cup \{+\infty\}$ is
a closed convex function. Let $\partial \risksamp(\param; \statval)$ denote the subdifferential of $\risksamp(\cdot; \statval)$ at $\param$. We let $\statdomain^n$ denote the collection of datasets $\statvalset= (\statval_1, \ldots, \statval_n)$ with $n$ data points from $\statdomain$. We let $\risk_\statvalset(\param) \defeq \frac{1}{n}\sum_{\statval \in \statvalset} \risksamp(\param; \statval)$ denote the empirical loss and $\risk(\param) \defeq \E[\risksamp(\param;\statrv)]$ the population loss. The distance of a point to a set is $\dist\paren{x,Y} = \min_{y \in Y}\ltwo{x - y}$. We use ${\rm Diam}(\paramdomain) = \sup_{x, y \in \paramdomain}\ltwo{x - y}$ to denote the diameter of the parameter space $\paramdomain$ and use $D$ as a bound on this diameter.
We recall the definition of $\ed$-differential privacy.
\begin{definition}
A randomized mechanism $M$ is $\ed$-differentially private ($\ed$-DP) if for all datasets $\statvalset, \statvalset' \in \statdomain^n$ that differ in a single data point and for all events $\mc{O}$ in the output space of $M$, we have
\begin{align*}
P(M(\statvalset)\in \mc{O}) \leq e^\diffp P(M(\statvalset') \in \mc{O}) + \delta.
\end{align*}
We define $\diffp$-differential privacy ($\diffp$-DP) to be $(\diffp, 0)$-differential privacy.
\end{definition}
We now recall a couple of standard convex analysis definitions.
\begin{definition}\label{def:lipschitz}
~\\
\begin{enumerate}
\item A function $h: \xd \to \R$ is \emph{$\lip$-Lipschitz} if for all $x,y\in\paramdomain$
\begin{align*}
|{h(x) - h(y)}| \leq \lip \ltwo{x - y}.
\end{align*}
Equivalently, a differentiable function $h$ is \emph{$\lip$-Lipschitz} if and only if $\ltwo{\nabla h(x)} \le \lip$ for all $x \in \xd$.
\item A function $h$ is \emph{$\smooth$-smooth} if it has $\smooth$-Lipschitz gradient: for all $x, y \in \paramdomain$
\begin{align*}
\ltwo{\nabla h(x) - \nabla h(y)} \leq \smooth \ltwo{x - y}.
\end{align*}
\item A function $h$ is \emph{$\growthcoef$-strongly convex} if for all $x, y \in \paramdomain$
\begin{align*}
h(y) \ge h(x) + \grad h(x)^T(y - x) + \frac{\growthcoef}{2}\ltwo{y - x}^2.
\end{align*}
\end{enumerate}
\end{definition}
We formally define interpolation problems:
\begin{definition}[Interpolation Problem]\label{def:interpolation}
Let $\paramdomain\opt \defeq \argmin_{\param \in \paramdomain} \risk(\param)$. Then problem \eqref{eqn:objective} is an interpolation problem if there exists $\param\opt \in \paramdomain\opt$ such that for $P$-almost all $\statval \in \statdomain$, we have $0 \in \partial F(\param\opt;\statval)$.
\end{definition}
Interpolation problems are common in modern machine learning, where models are overparameterized. One simple example is noiseless overparameterized linear regression, where there exists a solution that simultaneously minimizes every individual sample loss; classification problems with margin are another example.
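As a concrete toy instance (our own numerical sketch), in noiseless least squares every per-sample gradient vanishes at the common solution, so $0$ lies in every $\partial F(\cdot\,;\statval)$ at $\param\opt$:
\begin{verbatim}
# Consistent least squares interpolates: with b = A @ x_star, each
# per-sample loss 0.5 * (a_i' x - b_i)^2 has zero gradient at x_star.
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star                         # noiseless labels

per_sample_grads = (A @ x_star - b)[:, None] * A
assert np.allclose(per_sample_grads, 0.0)
\end{verbatim}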
Crucial to our results is the following quadratic growth assumption:
\begin{definition}\label{ass:growth}
We say that the population function $\risk$ satisfies the quadratic growth condition if for all $\param \in \paramdomain$
\begin{align*}
\risk(\param) - \inf_{\param' \in \paramdomain\opt}\risk(\param') \geq \frac{\growthcoef}{2}\dist\paren{\param,\paramdomain\opt}^2.
\end{align*}
\end{definition}
This assumption is natural with interpolation and holds for many important applications, including noiseless linear regression~\cite{StrohmerVe09,NeedellWaSr14}. Past work~\cite{VaswaniBaSc19,WoodworthSr21} uses this assumption together with interpolation to obtain faster convergence rates for non-private optimization.
Finally, the adaptivity of our algorithms crucially depends on Lipschitzian extensions, defined as follows.
\begin{definition}[Lipschitzian extension \cite{HiriartUrrutyLe93ab}] \label{def:lip-ext}
The Lipschitzian extension with Lipschitz constant $L$ of a function $f$ is defined as the infimal convolution
\begin{equation}
\label{eqn:lip-ext}
f_L(x) \coloneqq \inf_{y \in \R^d} \{f(y) + L\ltwo{x - y}\}.
\end{equation}
\end{definition}
\noindent The Lipschitzian extension~\eqref{eqn:lip-ext} essentially transforms a general convex function into an $\lip$-Lipschitz convex function.
We now present a few properties of the Lipschitzian extension that are relevant to our development.
\begin{lemma}\label{lem:lip-ext}
Let $f:\paramdomain \to \R$ be convex. Then its Lipschitzian extension satisfies the following:
\begin{enumerate}
\item $f_L$ is $L$-Lipschitz.
\item $f_L$ is convex.
\item If $f$ is $L$-Lipschitz, then $f_L(x) = f(x)$, for all $x$.
\item Let $y(x) = \argmin_{y \in \R^d}\{f(y) + L\ltwo{x - y}\}$. If $y(x)$ is at a finite distance from $x$, we have
\begin{equation*}
\grad f_L(x) = \begin{cases}
\grad f(x), &\text{if $\ltwo{\grad f(x)} \le L$}\\
L\frac{x - y(x)}{\ltwo{x - y(x)}}, &\text{otherwise}.
\end{cases}
\end{equation*}
\end{enumerate}
\end{lemma}
We use the Lipschitzian extension as a substitute for gradient clipping to ensure differential privacy. Unlike gradient clipping, which can turn a convex problem into a non-convex one, the Lipschitzian extension of a convex function remains convex and thus retains other nice properties that we leverage in our algorithms in~\Cref{sec:growth}.
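To see the extension concretely, the following one-dimensional numerical sketch (ours, with an arbitrary test function; a brute-force discretization of the infimal convolution~\eqref{eqn:lip-ext}) illustrates items 1 and 3 of \Cref{lem:lip-ext}: $f_L$ agrees with $f$ wherever $f$ is locally $L$-Lipschitz, and $f_L$ is globally $L$-Lipschitz.
\begin{verbatim}
# Discretized Lipschitzian extension f_L(x) = inf_y { f(y) + L|x - y| }.
import numpy as np

def lipschitzian_extension(f_vals, grid, L):
    # brute-force infimal convolution over the grid
    return np.min(f_vals[None, :]
                  + L * np.abs(grid[:, None] - grid[None, :]), axis=1)

grid = np.linspace(-3.0, 3.0, 601)
f = grid ** 2                  # convex; its slope 2x exceeds L for |x| > L/2
L = 2.0
fL = lipschitzian_extension(f, grid, L)

# f_L = f on the region where f is locally L-Lipschitz (|x| <= L/2) ...
inside = np.abs(grid) <= L / 2
assert np.allclose(fL[inside], f[inside])
# ... and f_L is L-Lipschitz everywhere (linear growth outside).
assert np.all(np.abs(np.diff(fL) / np.diff(grid)) <= L + 1e-9)
\end{verbatim}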
\section{Hardness of private interpolation}\label{sec:no-growth}
In non-private stochastic convex optimization, it is well known that for smooth functions interpolation problems enjoy the fast rate $O(1/n)$~\cite{SrebroSrTe10}, compared to the minimax-optimal $O(1/\sqrt{n})$ without interpolation~\cite{Duchi18}. In this section, we show that no analogous improvement is generally possible with privacy: the same lower bound as for private non-interpolation problems, $\Omega(d/n\diffp)$, holds for interpolation problems.
To state our lower bounds, we introduce some notation that we will use throughout the paper.
We let $\funcsetfamily$ denote the family of pairs $(\risksamp,\statvalset)$ of a sample function and a dataset such that $\risksamp: \paramdomain \times \statdomain \rightarrow \R$ is convex and $\smooth$-smooth in its first argument, $|\statvalset| = n$, and $\risk_{\statvalset}(y) = \fracnsamp \sum_{\statval \in \statvalset}\risksamp(y, \statval)$ defines an interpolation problem (\Cref{def:interpolation}).
We define the constrained minimax risk to be
\iftoggle{arxiv}{
\begin{align*}
\minimax(\paramdomain, \funcsetfamily, \varepsilon, \delta)
\defeq
&\inf_{M \in \edfamily} \sup_{(\risksamp, \statvalset^n) \in \funcsetfamily} \E[\risk_{\statvalset^n}(M(\statvalset^n))] - \inf_{\param'\in\paramdomain}\risk_{\statvalset^n}(\param').
\end{align*}
}{
\begin{align*}
&\minimax(\paramdomain, \funcsetfamily, \varepsilon, \delta)
\defeq \\
&\inf_{M \in \edfamily} \sup_{(\risksamp, \statvalset^n) \in \funcsetfamily} \E[\risk_{\statvalset^n}(M(\statvalset^n))] - \inf_{\param'\in\paramdomain}\risk_{\statvalset^n}(\param').
\end{align*}
}
where $\edfamily$ is the collection of $\ed$-differentially private mechanisms from $\statdomain^n$ to $\paramdomain$; we write $\efamily$ for the collection of $\diffp$-DP mechanisms from $\statdomain^n$ to $\paramdomain$. The expectation is taken over the randomness of the mechanism, while the dataset $\statvalset^n$ is fixed.
We have the following lower bound for private interpolation problems; the proof is deferred to \Cref{proof:thm:lb-private-interp-nogrowth}.
\begin{theorem}\label{thm:lb-private-interp-nogrowth}
Suppose $\paramdomain\subset \R^d$ contains a $d$-dimensional $\ell_2$ ball of diameter $\diam$.
Then the following lower bound holds for $\delta=0$
\begin{align*}
\minimaxe
\geq \frac{\smooth \diam^2 d}{96e^2 n\diffp }.
\end{align*}
Moreover, if $0<\delta < \diffp/6$ and $d=1$, the following lower bound holds
\begin{align*}
\minimaxed
\geq \frac{\smooth \diam^2 }{16(e + 1)n \diffp }.
\end{align*}
\end{theorem}
Recall that the optimal rate for pure DP optimization problems without interpolation is $O(\frac{1}{\sqrt{n}} + \frac{d}{n\varepsilon})$. The first term is the non-private rate: it is the rate one obtains as the privacy constraint is removed ($\varepsilon \to \infty$). The second term is the private rate, the price algorithms must pay for privacy. In modern machine learning, problems are often high dimensional, so we typically think of the dimension $d$ as scaling with some function of the number of samples $n$; the private rate is then often the dominant term. For this reason, we focus on the private rate in this section. The lower bounds of~\Cref{thm:lb-private-interp-nogrowth} show that it is not possible to improve the private rate for interpolation problems in general. Similarly, for approximate \ed-DP, the lower bound rules out improvements for $d=1$.
For completeness, as we alluded to earlier, we note that our results do not preclude the possibility of improving the non-private rate from $O(1/\sqrt{n})$ to $O(1/n)$. We leave this as an open problem of independent interest for future work.
Despite this pessimistic result, in the next section we show that substantial improvements are possible for private interpolation problems with additional growth conditions.
\section{Faster rates for interpolation with growth}\label{sec:growth}
Having established our hardness result for general interpolation problems, in this section we show that when the functions satisfy additional growth conditions, we get (nearly) exponential improvements in the rates of convergence for private interpolation.
Our algorithms use recent localization techniques that yield optimal algorithms for DP-SCO~\cite{FeldmanKoTa20,AsiLeDu21}, in which the algorithm iteratively shrinks the diameter of the domain. To obtain faster rates for interpolation, however, we crucially build on the observation that the norm of the gradients decreases as we approach the optimal solution: by smoothness and interpolation, $\ltwo{\nabla F(x;s)} \le \smooth \ltwo{x-x\opt}$. Hence, by carefully localizing the domain and shrinking the Lipschitz constant accordingly, our algorithms improve the rates on interpolating datasets.
However, this technique alone yields an algorithm that may not be private for non-interpolation problems, whereas privacy must hold for all inputs: the reduction in the Lipschitz constant may fail for non-interpolation problems, and the amount of noise added may then be insufficient to ensure privacy.
To resolve this issue, we use the Lipschitzian extension (\Cref{def:lip-ext}) to transform our potentially non-Lipschitz sample functions into Lipschitz ones, guaranteeing privacy even for non-interpolation problems.
We begin in~\Cref{sec:lip-ext} by presenting our Lipschitzian extension based algorithm, which recovers the standard optimal rates for (non-interpolation) $L$-Lipschitz functions while still guaranteeing privacy when the function is not Lipschitz. Then in~\Cref{sec:growth-non-adap} we build on this algorithm to develop a localization-based algorithm that obtains faster rates for interpolation-with-growth problems.
Finally, in~\Cref{sec:growth-adap} we present our final adaptive algorithm, which obtains fast rates for interpolation-with-growth problems while achieving optimal rates for non-interpolation growth problems.
\subsection{Lipschitzian-extension based algorithms }
\label{sec:lip-ext}
\newcommand{\mathbf{M}^L_{(\diffp,\delta)}}{\mathbf{M}^L_{(\diffp,\delta)}}
Existing algorithms for DP-SCO with $L$-Lipschitz functions may not be private if the input functions are not $\lip$-Lipschitz~\cite{BassilyFeGuTa20,FeldmanKoTa20,AsiLeDu21}.
Given any DP-SCO algorithm $\mathbf{M}^L_{(\diffp,\delta)}$ that is private for $\lip$-Lipschitz functions, we present a framework that transforms $\mathbf{M}^L_{(\diffp,\delta)}$ into an algorithm that is (i) private for all functions, even ones that are not $\lip$-Lipschitz, and (ii) has the same utility guarantees as $\mathbf{M}^L_{(\diffp,\delta)}$ for $\lip$-Lipschitz functions. In short, our algorithm feeds $\mathbf{M}^L_{(\diffp,\delta)}$ the Lipschitzian extensions of the sample functions as inputs. \Cref{alg:lip-ext} describes this framework.
\begin{algorithm}
\caption{Lipschitzian-Extension Algorithm}
\label{alg:lip-ext}
\begin{algorithmic}[1]
\REQUIRE Dataset $\statvalset=(\ds_1, \ldots, \ds_n)\in \domain^n$;
\STATE Let $F_L(x;s_i)$ be the Lipschitzian extension of $F(x;s_i)$ for all $i$.
\begin{equation*}
F_L(x;s_i) = \inf_{y} \{F(y;s_i) + L\ltwo{x - y}\}
\end{equation*}
\STATE Run $\mathbf{M}^L_{(\diffp,\delta)}$ over the functions $F_L(\cdot;s_i)$.
\STATE Let $x_{\rm priv}$ denote the output of $\mathbf{M}^L_{(\diffp,\delta)}$.
\RETURN $x_{\rm priv}$
\end{algorithmic}
\end{algorithm}
For this paper we consider $\mathbf{M}^L_{(\diffp,\delta)}$ to be Algorithm 2 of \cite{AsiLeDu21} (reproduced in \Cref{appen:asiledu-alg} as \Cref{alg:loc-growth}). The following proposition summarizes our guarantees for~\Cref{alg:lip-ext}.
\begin{proposition}
\label{prop:lip-ext-alg}
Let $\mc{L}_L$ denote the set of sample function-dataset pairs $(F,\statvalset)$ such that $F$ is $L$-Lipschitz, and let $\mc{F}$ denote a set of sample function-dataset pairs such that $\mathbf{M}^L_{(\diffp,\delta)}$ is $(\diffp,\delta)$-DP on every pair in $\mc{L}_L \cap \mc{F}$. Then
\begin{enumerate}
\item For any $(F,\statvalset) \in \mc{F}$, \Cref{alg:lip-ext} is $(\diffp,\delta)$-DP.
\item For any $(F,\statvalset) \in \mc{L}_L \cap \mc{F}$, \Cref{alg:lip-ext} achieves the same optimality guarantees as $\mathbf{M}^L_{(\diffp,\delta)}$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the first item, note that~\Cref{lem:lip-ext} implies that $F_L$ is $L$-Lipschitz, i.e. $(F_L,\statvalset) \in \mc{L}_L \cap \mc{F}$. Since $\mathbf{M}^L_{(\diffp,\delta)}$ is $(\diffp,\delta)$-DP when applied over Lipschitz functions in $\mc{F}$, we have that \Cref{alg:lip-ext} is $(\diffp,\delta)$-DP.
For the second item, \Cref{lem:lip-ext} implies that $F_L = F$ when $F$ is $L$-Lipschitz. Thus, in \Cref{alg:lip-ext}, we apply $\mathbf{M}^L_{(\diffp,\delta)}$ over $F$ itself.
\end{proof}
While clipped DP-SGD does ensure privacy for input functions that are not $\lip$-Lipschitz,
our algorithm has some advantages over clipped DP-SGD: first, clipping does not yield optimal rates for pure DP, and second, clipped DP-SGD has time complexity $O(n^{3/2})$. In contrast, our Lipschitzian extension approach is compatible with existing linear-time algorithms~\cite{FeldmanKoTa20}, allowing almost-linear-time algorithms for interpolation problems.
Finally, while both gradient clipping and the Lipschitzian extension alter the effective function being optimized, only the Lipschitzian extension preserves the convexity of this effective function (see item 2 of~\Cref{lem:lip-ext}).
We conclude with a note on the computational efficiency of \Cref{alg:lip-ext}. When the objective is in fact $L$-Lipschitz, computing gradients of the Lipschitzian extension (say, in a first-order method) is only as expensive as computing gradients of the original function: one first computes the gradient of the original function and applies item 4 of \Cref{lem:lip-ext}; since $\ltwo{\nabla f(x)} \le L$ everywhere for an $L$-Lipschitz problem, the gradient of the Lipschitzian extension is simply the gradient of the original function.
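As a small numerical illustration (ours; the quadratic test function is chosen so that the minimizer $y(x)$ in \Cref{lem:lip-ext} has a closed form, namely $y(x)=(L/\beta)\,x/\ltwo{x}$ once $\ltwo{\nabla f(x)}>L$), the gradient rule of item 4 reads:
\begin{verbatim}
# Item 4 of the Lipschitzian-extension lemma for f(x) = (beta/2)||x||^2.
import numpy as np

def grad_f(x, beta):
    return beta * x

def grad_fL(x, beta, L):
    g = grad_f(x, beta)
    if np.linalg.norm(g) <= L:
        return g                              # gradient of f_L equals grad f
    y = (L / beta) * x / np.linalg.norm(x)    # closed-form y(x) for this f
    return L * (x - y) / np.linalg.norm(x - y)

beta, L = 4.0, 1.0
x = np.array([2.0, 0.0])                      # here ||grad f(x)|| = 8 > L
g = grad_fL(x, beta, L)
print(g)                                      # -> [1. 0.], a vector of norm L
assert np.linalg.norm(g) <= L + 1e-12
\end{verbatim}
For this radially symmetric example the extension gradient happens to coincide with the clipped gradient; in general the two differ, and only the former corresponds to a convex surrogate.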
\subsection{Faster non-adaptive algorithm}
\label{sec:growth-non-adap}
Building on the Lipschitzian-extension framework of the previous section, in this section we present our epoch-based algorithm, which obtains faster rates in the interpolation-with-growth regime. It uses \Cref{alg:lip-ext}, with $\mathbf{M}^L_{(\diffp,\delta)}$ instantiated as \Cref{alg:loc-growth} (reproduced in \Cref{appen:asiledu-alg}), as a subroutine
in each epoch to localize and shrink the domain as the iterates get closer to the true minimizer. Simultaneously, the algorithm also reduces the Lipschitz constant, as the interpolation assumption implies that the gradient norms decrease for iterates near the minimizer.
The detailed algorithm is given in \Cref{alg:priv-interpol-quad} where $\diam_i$ denotes the effective diameter and $\lip_i$ denotes the effective Lipschitz constant in epoch $i$.
\begin{algorithm}[tb]
\caption{Domain and Lipschitz Localization algorithm}
\label{alg:priv-interpol-quad}
\begin{algorithmic}[1]
\REQUIRE
Dataset $\statvalset=(\ds_1, \ldots, \ds_n)\in \domain^n$,
Lipschitz constant $L$,
domain $\paramdomain$,
probability parameter $\tailprob$,
initial point $\param_{0}$
\STATE Set $L_1 = L$, $D_1 = {\rm Diam}(\paramdomain)$ and $\paramdomain_1 = \paramdomain$
\STATE Partition the dataset into T partitions (denoted by $\{\statvalset_k\}_{k = 1}^T$) of size $m$ each; $\statvalset_k = (s_{(k-1)m + 1},\dots,s_{km})$
\FOR{$i=1$ to $T$\,}
\STATE $\param_i \leftarrow $ Run~\Cref{alg:lip-ext} with dataset $\statvalset_i$, constraint set $\paramdomain_i$,
Lipschitz constant $L_i$,
probability parameter $\tailprob/T$,
privacy parameters $(\diffp,\delta)$,
initial point $\param_{i-1}$,
\STATE Shrink the diameter
\iftoggle{arxiv}
{
\begin{align*}
D_{i+1} = 256 \left(\frac{L_i}{\growthcoef}\max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}}\right.\right., \left.\left.\frac{\min(d,\sqrt{d \log(1/\delta)})\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)&
\end{align*}
}
{
\begin{align*}
D_{i+1} = 256 \left(\frac{L_i}{\growthcoef}\max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}}\right.\right.,&\\ \left.\left.\frac{\min(d,\sqrt{d \log(1/\delta)})\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)&
\end{align*}
}
\STATE Set $\paramdomain_{i+1} = \{\param : \ltwo{\param - \param_i} \le \diam_{i+1}/2\}$
\STATE Set $L_{i+1} = \smooth \diam_{i+1}$
\ENDFOR
\RETURN the final iterate $\param_T$
\end{algorithmic}
\end{algorithm}
The following theorem provides our upper bounds for~\Cref{alg:priv-interpol-quad}, demonstrating near-exponential rates for interpolation problems; we present the proof in \Cref{appen:proof-ub}.
\begin{restatable}{theorem}{ubquadtheorem}
\label{thm:ub-quad}
Assume each sample function $\risksamp$ is $L$-Lipschitz and $\smooth$-smooth, and let the population function $f$ satisfy quadratic growth (\Cref{ass:growth}). Let Problem \eqref{eqn:objective} be an interpolation problem. Then \Cref{alg:priv-interpol-quad} is $(\diffp,\delta)$-DP.
For $\delta = 0$,
$\tailprob = \frac{1}{n^\mu}$, $\sampround = 256\log^2 n\frac{\smooth \log(1/\beta)}{\growthcoef}\max\left\{\frac{256\smooth }{\growthcoef},\frac{d}{\diffp \sqrt{\log n}}\right\}$, $T = n/\sampround$ and any $\mu > 0$, \Cref{alg:priv-interpol-quad} returns $x_T$ such that
\iftoggle{arxiv}{
\begin{align}
\E[\risk(\param_T) - \risk(\param\opt)] \le L\diam&\left(\frac{1}{n^\mu} + \exp\left(-\wt \Theta \paren{\frac{n \growthcoef^2}{\smooth^2}}\right) + \exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\smooth d}}\right)\right).\label{eqn:pure-ub}
\end{align}
}
{
\begin{align}
\nonumber \E[\risk(\param_T) - \risk(\param\opt)] \le L\diam&\left(\frac{1}{n^\mu} + \exp\left(-\wt \Theta \paren{\frac{n \growthcoef^2}{\mu\smooth^2}}\right) + \right. \\
& \left.\exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\mu\smooth d}}\right)\right).\label{eqn:pure-ub}
\end{align}
}
For $\delta > 0$,
$\tailprob = \frac{1}{n^\mu}$, $\sampround = 256\log^2 n \frac{\smooth \log(1/\beta)}{\growthcoef}\max\left\{\frac{256\smooth }{\growthcoef},\frac{\sqrt{d}\log(1/\delta)}{\diffp \sqrt{\log n}}\right\}$, $T = n/\sampround$ and any $\mu > 0$, \Cref{alg:priv-interpol-quad} returns $x_T$ such that
\iftoggle{arxiv}{
\begin{align}
\E[\risk(\param_T) - \risk(\param\opt)] \le L\diam\left(\frac{1}{n^\mu} + \exp\left(-\wt \Theta \paren{\frac{n \growthcoef^2}{\smooth^2}}\right) + \exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\smooth \sqrt{d \log(1/\delta)}}}\right)
\right).\label{eqn:appr-ub}
\end{align}
}
{
\begin{align}
\nonumber \E[\risk(\param_T) - \risk(\param\opt)] &\le L\diam\left(\frac{1}{n^\mu} + \exp\left(-\wt \Theta \paren{\frac{n \growthcoef^2}{\smooth^2}}\right) + \right.\\
&\left.\exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\smooth \sqrt{d \log(1/\delta)}}}\right)
\right).\label{eqn:appr-ub}
\end{align}}
\end{restatable}
The exponential rates in~\Cref{thm:ub-quad} show a significant improvement in the interpolation regime over the minimax-optimal $O( ( \frac{1}{\sqrt{n}} + \frac{d}{n \diffp} )^2)$ without interpolation~\cite{FeldmanKoTa20,AsiLeDu21}.
To achieve the linear convergence rates, we run roughly $n/\log n$ epochs with $\log n$ samples each; each call of the subroutine thus uses only a logarithmic number of samples, while the number of epochs is nearly linear in $n$.
Intuitively, the growth condition improves the performance of the sub-algorithm, while growth and interpolation together shrink the search space; in tandem, this leads to faster rates.
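The recursion driving these rates can be summarized in a few lines (a schematic of ours, with all constants and the subroutine's per-epoch error collapsed into a single factor $r$): setting $L_{i+1}=\smooth D_{i+1}$ makes the diameter contract geometrically, so $T$ epochs yield $\exp(-\Theta(T))$ error.
\begin{verbatim}
# Schematic of the diameter/Lipschitz localization recursion:
#   D_{i+1} = c * (L_i / lam) * r,   L_{i+1} = beta * D_{i+1},
# so D_{i+1} = (c * beta * r / lam) * D_i: geometric decay whenever
# the per-epoch factor c * beta * r / lam is below 1.
beta, lam, c, r = 1.0, 1.0, 0.5, 0.9   # illustrative constants only
D, L = 1.0, 1.0
for i in range(1, 11):
    D = c * (L / lam) * r              # shrink the domain diameter
    L = beta * D                       # interpolation: gradients shrink too
    print(f"epoch {i:2d}: D = {D:.3e}")
\end{verbatim}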
To better illustrate the improvement in rates compared to the non-private setting, the next corollary states the private sample complexity required to achieve error $\alpha$ in the interpolation regime.
\begin{corollary}\label{cor:ub-quad}
Let the conditions of \Cref{thm:ub-quad} hold. For $\delta = 0$ , \Cref{alg:priv-interpol-quad} is $\diffp$-DP and requires
\begin{align*}
n = \wt O \paren{\frac{1}{\alpha^{\rho}} + \frac{d}{\rho \diffp}\log\paren{\frac{1}{\alpha}}}
\end{align*}
samples to ensure $\E[\risk(\param_T) - \risk(\param\opt)] \le \alpha$
for any fixed $\rho > 0$, where $\wt O$ hides only poly-log-log factors in $1/\alpha$. \\
Moreover, for $\delta > 0$, \Cref{alg:priv-interpol-quad} is \ed-DP and requires
\begin{align*}
n = \wt O \paren{\frac{1}{\alpha^{\rho}} + \frac{\sqrt{d\log(1/\delta)}}{\rho\diffp}\log\paren{\frac{1}{\alpha}}}
\end{align*}
samples to ensure $\E[\risk(\param_T) - \risk(\param\opt)] \le \alpha$, for any fixed $\rho > 0$, where $\wt O$ hides poly-log-log factors in $1/\alpha$.
\end{corollary}
As the sample complexity of DP-SCO to achieve expected error $\alpha$ on general quadratic growth problems is~\cite{AsiLeDu21}
\begin{equation*}
\Theta \left( \frac{1}{\alpha} + \frac{d}{\diffp \sqrt{\alpha}} \right),
\end{equation*}
\Cref{cor:ub-quad} shows that we are able to improve the polynomial dependence on $1/\alpha$ in the sample complexity to (nearly) logarithmic for interpolation problems.
\begin{rem}
In contrast to \Cref{cor:ub-quad}, we can tune the failure probability parameter $\beta$ to obtain sample complexity $\frac{d}{\diffp}\log^2\paren{\frac{1}{\alpha}}$. Even though this sample complexity has no polynomial factor, it may be worse than $\frac{1}{\alpha^{\rho}} + \frac{d}{\diffp}\log\paren{\frac{1}{\alpha}}$, because the dimension-dependent term is generally the dominant one.
\end{rem}
We end this section by considering growth conditions that are weaker than quadratic growth.
\begin{rem}(interpolation with $\kappa$-growth)
We can extend our algorithms to work for
the weaker $\growth$-growth condition \cite{AsiLeDu21}, i.e., $f(x) - f(x^\star) \ge \frac{\lambda}{\growth} \norms{x-x^\star}_2^\growth$. We present the full details of these algorithms in \Cref{appen:gen-growth} (see \Cref{alg:priv-interpol-kappa}). In this setting, we obtain excess loss
\begin{equation*}
O\left( \left( \frac{1}{\sqrt{n}} + \frac{d}{n \diffp} \right)^{\frac{\kappa}{\kappa - 2}} \right),
\end{equation*}
for interpolation problems, improving over the minimax-optimal loss for non-interpolation problems which is
\begin{equation*}
O\left( \left( \frac{1}{\sqrt{n}} + \frac{d}{n \diffp} \right)^{\frac{\kappa}{\kappa - 1}} \right).
\end{equation*}
As an example, when $\kappa=3$, this corresponds to an improvement from roughly $(d/n\diffp)^{3/2}$ to $(d/n\diffp)^{3}$. Like our previous results, we are again able to show similar improvements for \ed-DP with better dependence on the dimension. Finally, we note that we have not provided lower bounds for the interpolation-with-$\kappa$-growth setting for $\kappa>2$. We leave this question as a direction for future research.
\end{rem}
\subsection{Adaptive algorithm}
\label{sec:growth-adap}
Though \Cref{alg:priv-interpol-quad} is private and enjoys faster rates of convergence in the interpolation regime, it is not necessarily adaptive to interpolation, i.e.~it may perform poorly given a non-interpolation problem. In fact, since the shrinkage of the diameter and Lipschitz constants at each iteration hinges squarely on the interpolation assumption,
the new domain may not include the optimizing set $\paramdomain\opt$ in the non-interpolation setting, so our algorithm may not even converge. Since in general we do not know a priori whether a dataset is interpolating, it is important to have an algorithm which adapts to interpolation.
To that end, we present an adaptive algorithm that achieves faster rates for interpolation-with-growth problems while simultaneously obtaining the standard optimal rates for general growth problems.
The algorithm consists of two steps. In the first step, our algorithm privately minimizes the objective without assuming it is an interpolation problem. Next, we run our non-adaptive interpolation algorithm of~\Cref{sec:growth-non-adap} over the localized domain returned by the first step. If our problem was an interpolating one, the second step recovers the faster rates in \Cref{sec:growth-non-adap}. If our problem was not an interpolating one,
the first localization step ensures that we at least recover the non-interpolating convergence rate. We stress that the privacy of \Cref{alg:priv-adapt-interpol} requires that the call to \Cref{alg:priv-interpol-quad} remain private even if the problem is non-interpolating; this is ensured by using our Lipschitzian-extension-based algorithm with $\mathbf{M}^L_{(\diffp,\delta)}$ as \Cref{alg:loc-growth}, since the Lipschitzian extension allows us to continue preserving privacy. We present the full details of this algorithm in~\Cref{alg:priv-adapt-interpol}.
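As a structural sketch (not the formal listing below), the two phases can be expressed as follows; \texttt{localize} and \texttt{interpolation\_solver} are hypothetical stand-ins for \Cref{alg:lip-ext} and \Cref{alg:priv-interpol-quad}, respectively, and \texttt{r\_int} plays the role of $\diam_{\rm int}$.
\begin{verbatim}
def adaptive_solver(data, localize, interpolation_solver, r_int):
    # Phase 1: standard private optimization on the first half of
    # the data localizes the optimum whether or not the problem
    # interpolates.
    half = len(data) // 2
    x1 = localize(data[:half])
    # Phase 2: run the non-adaptive interpolation routine on the
    # second half, restricted to a ball of radius r_int around x1.
    # If the problem interpolates we recover the fast rate;
    # otherwise the small domain still caps the excess loss.
    return interpolation_solver(data[half:], center=x1, radius=r_int)
\end{verbatim}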
\begin{algorithm}[tb]
\caption{Algorithm that adapts to interpolation}
\label{alg:priv-adapt-interpol}
\begin{algorithmic}[1]
\REQUIRE
Dataset $\statvalset=(\ds_1, \ldots, \ds_n)\in \domain^n$,
Lipschitz constant $L$,
domain $\paramdomain$,
probability parameter $\tailprob$,
initial point $\param_{0}$
\STATE Partition the dataset into two halves $S_1 = (s_{1},\dots,s_{n/2})$ and $S_2 = (s_{(n/2)+1},\dots,s_{n})$
\STATE $\param_1 \leftarrow $ Run \Cref{alg:lip-ext} with dataset $S_1$,
constraint set $\paramdomain$,
Lipschitz constant $L$,
probability parameter $\tailprob/2$,
privacy parameters $(\diffp,\delta)$,
initial point $\param_{0}$
\STATE Shrink the diameter
\iftoggle{arxiv}{\begin{align*}
\diam_{\rm int} = \frac{128 L}{\lambda} \cdot &\left( \frac{ \sqrt{\log(2/\beta) } \log^{3/2} n}{\sqrt{n}} + \frac{\min\{d,\sqrt{d\log(1/\delta)}\}\log(2/\beta)\log n}{n \diffp} \right)
\end{align*}
}
{\begin{align*}
\diam_{\rm int} = \frac{128 L}{\lambda} \cdot &\left( \frac{ \sqrt{\log(2/\beta) } \log^{3/2} n}{\sqrt{n}}\right.+\\
&\left.\frac{\min\{d,\sqrt{d\log(1/\delta)}\}\log(2/\beta)\log n}{n \diffp} \right)
\end{align*}}
\STATE $\paramdomain_{\rm int} = \{\param : \ltwo{\param - \param_1} \le \diam_{\rm int}/2\}$
\STATE $\param_{\rm adapt} \leftarrow $ Run \Cref{alg:priv-interpol-quad} with dataset $S_2$,
diameter $\diam_{\rm int}$,
Lipschitz constant $L$,
domain $\paramdomain_{\rm int}$,
smoothness parameter $\smooth$,
tail probability parameter $\tailprob/2$,
growth parameter $\growthcoef$,
initial point $\param_{1}$
\RETURN the final iterate $\param_{\rm adapt}$.
\end{algorithmic}
\end{algorithm}
The following theorem (\Cref{thm:adapt-conv}) states the convergence guarantees of our adaptive algorithm (\Cref{alg:priv-adapt-interpol}) in both the interpolation and non-interpolation regimes for the pure DP setting. The results for approximate DP are similar and can be obtained by replacing $d$ with $\sqrt{d\log(1/\delta)}$; we give the full details in \Cref{appen:proof-ub}.
\begin{restatable}{theorem}{ubadapttheorem}
\label{thm:adapt-conv}
Let each sample function $\risksamp$ be $L$-Lipschitz and $\smooth$-smooth, and let the population function $f$ satisfy quadratic growth (\Cref{ass:growth}) with coefficient $\growthcoef$. Let $\param_{\rm adapt}$ be the output of \Cref{alg:priv-adapt-interpol}. Then
\begin{enumerate}
\item \Cref{alg:priv-adapt-interpol} is $\diffp$-DP.
\item Without any additional interpolation assumption, $\param_{\rm adapt}$ satisfies
\begin{align*}
\E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam \cdot \wt O \left( \frac{ 1}{\sqrt{n}} + \frac{d}{n \diffp} \right)^2.
\end{align*}
\item Let problem \eqref{eqn:objective} be an interpolation problem. Then $\param_{\rm adapt}$ satisfies
\iftoggle{arxiv}{
\begin{align*}
\nonumber \E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam&\left(\frac{1}{n^\mu} + \exp\left(- \wt \Theta \paren{\frac{n \growthcoef^2}{\smooth^2}}\right) + \exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\smooth d}}\right)\right).
\end{align*}}
{
\begin{align*}
\nonumber \E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam&\left(\frac{1}{n^\mu} + \exp\left(- \wt \Theta \paren{\frac{n \growthcoef^2}{\smooth^2}}\right) \right. \\
& + \left.\exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\smooth d}}\right)\right).
\end{align*}
}
\end{enumerate}
\end{restatable}
\begin{proof}
The privacy of \Cref{alg:priv-adapt-interpol} follows from the privacy of \Cref{alg:lip-ext,alg:priv-interpol-quad} and post-processing.
To prove the convergence guarantees, we first need to show that the optimal set $\paramdomain\opt$ lies in the shrunken domain $\paramdomain_{\rm int}$. Using the high probability guarantees of \Cref{alg:lip-ext}, we know that with probability $1 - \beta/2$, we have
\iftoggle{arxiv}{
\begin{align*}
f(x_1) - f(x\opt) \le \frac{2^{12} L}{\lambda} \cdot &\left( \frac{ \sqrt{\log(2/\beta) } \log^{3/2} n}{\sqrt{n}} + \frac{d\log(2/\beta)\log n}{n \diffp} \right).
\end{align*}}
{
\begin{align*}
f(x_1) - f(x\opt) &\le \frac{2^{12} L}{\lambda} \cdot \left( \frac{ \sqrt{\log(2/\beta) } \log^{3/2} n}{\sqrt{n}}\right.+\\
&\quad\left.\frac{d\log(2/\beta)\log n}{n \diffp} \right).
\end{align*}
}
Using the quadratic growth condition, we immediately have $\ltwo{\param\opt - \param_1} \le \diam_{\rm int}/2$ and hence $\paramdomain\opt \subset \paramdomain_{\rm int}$.
Using smoothness, we have that for any $x \in \paramdomain_{\rm int}$,
\begin{align*}
f(x) - f(x\opt) \le \frac{\smooth\diam_{\rm int}^2}{2}.
\end{align*}
Since \Cref{alg:priv-interpol-quad} always outputs a point in its input domain (in this case $\paramdomain_{\rm int}$), even in the non-interpolation setting we have
\begin{align*}
\E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam \cdot \wt O \left( \frac{ 1}{\sqrt{n}} + \frac{d}{n \diffp} \right)^2.
\end{align*}
\noindent In the interpolation setting, the guarantees of \Cref{alg:priv-interpol-quad} hold and the result is immediate.
\end{proof}
\section{Optimality and Superefficiency}\label{sec:super}
We conclude this paper by providing a lower bound and a super-efficiency result that demonstrate the tightness of our upper bounds. Recall that our upper bound from~\Cref{sec:growth} is roughly (up to constants)
\begin{equation}
\label{eq:ub}
\frac{1}{n^c} + \exp\left(-\wt \Theta \left(\frac{n \diffp}{d} \right) \right),
\end{equation}
for any arbitrarily large $c$. We begin with an exponential lower bound showing that the second term in~\eqref{eq:ub} is tight.
We then prove a super-efficiency result demonstrating that any private algorithm which avoids the first term in~\eqref{eq:ub} cannot be adaptive to interpolation, that is, it cannot achieve the minimax-optimal rate for the family of non-interpolation problems.
\Cref{thm:lb-private-interp-growth} below presents our exponential lower bounds for private interpolation problems with growth.
We use the notation and proof structure of~\Cref{thm:lb-private-interp-nogrowth}. We let $\funcsetfamilygrowth\subset\funcsetfamily$ be the subcollection of (function, dataset) pairs whose objectives $\risk_{\statvalset^n}$ also satisfy $\lambda$-quadratic growth (\Cref{ass:growth}).
The proof of \Cref{thm:lb-private-interp-growth} is found in \Cref{proof:thm:lb-private-interp-growth}.
\begin{theorem}\label{thm:lb-private-interp-growth}
Let $\paramdomain\subset \R^d$ contain a $d$-dimensional $\ell_2$-ball of diameter $\diam$.
Then
\begin{align*}
\minimaxegrowth
\geq \frac{\lambda \diam^2}{96}\exp\left(-\frac{2\lambda n\varepsilon}{\smooth d}\right).
\end{align*}
\end{theorem}
This lower bound addresses the second term of~\eqref{eq:ub}; we now turn to our superefficiency results to lower bound the first term of~\eqref{eq:ub}. We start with defining some notation and making some simplifying assumptions.
For a fixed function $\risksamp: \paramdomain \times \statdomain \rightarrow \R$ which is convex and $\smooth$-smooth with respect to the first argument, let $\funcsetfamily_\lambda^L(\risksamp)$ be the set of datasets $\statvalset$ of $n$ data points sampled from $\statdomain$ such that $\risk_{\statvalset}(\param) \defeq \frac{1}{n}\sum_{\statval \in \statvalset^n} \risksamp(\param, \statval)$ is $\lip$-Lipschitz and $\lambda$-strongly convex.
For simplicity, we will assume that (1) $\inf_{\param\in\paramdomain}\risksamp(\param; \statval) = 0$ for all $\statval \in \statdomain$, (2) $\paramdomain = [-\diam, \diam] \subset \R$, and (3) the codomain of $F$ is $\R_+$.
With this setup, we present the formal statement of our result; the proof of \Cref{thm:superefficiency} is found in \Cref{proof:thm:superefficiency}.
\begin{theorem}\label{thm:superefficiency}
Suppose we have some $\statvalset \in \funcsetfamily_\lambda^L(\risksamp)$ with $\lipschitz = 2\smooth\diam$ such that $(\risksamp, \statvalset)$ satisfy \Cref{def:interpolation}.
Suppose there is an $\diffp$-DP estimator $M$ such that
\begin{align*}
\E[\risk_\statvalset(M(\statvalset)) ]- \inf_{\param \in \xd}\risk_\statvalset(\param) \leq c\diam^2
e^{-\Theta((n\diffp)^t)}
\end{align*}
for some $t > 0 $ and absolute constant $c$.
Then, for sufficiently large $n$, there exists another dataset $\statvalset' \in \funcsetfamily_\lambda^L(\risksamp)$, where $(\risksamp, \statvalset')$ may \textbf{not} satisfy \Cref{def:interpolation}, such that
\begin{align*}
\E[\risk_{\statvalset'}(M(\statvalset'))] - \inf_{\param \in \xd}\risk_{\statvalset'}(\param) =
\Omega\left(\frac{\diam^2}{(n\diffp)^{2(1-t)}}\right)
\end{align*}
\end{theorem}
To better contextualize this result, suppose there exists an algorithm which attains a $\exp(-\wt \Theta \left(n \diffp/d \right) )$ convergence rate on interpolation problems; i.e., the algorithm is able to avoid the $1/n^c$ term in~\eqref{eq:ub}. Then~\Cref{thm:superefficiency} states that there exists some strongly convex, non-interpolation problem on which the aforementioned algorithm optimizes very poorly; in particular, the algorithm will only be able to return a solution that attains, on average, constant error on this ``hard'' problem. More generally, recall that in the non-interpolation quadratic growth setting, the optimal error rate is on the order of $1/(n\diffp)^2$~\cite{AsiLeDu21}. \Cref{thm:superefficiency} shows that attaining better-than-polynomial error on quadratic growth interpolation problems implies that the algorithm cannot be minimax optimal in the non-interpolation quadratic growth setting. Thus, the rates our adaptive algorithms attain are the best we can hope for if we want an algorithm to perform well on both interpolation and non-interpolation quadratic growth problems.
\section{Implications and applications}
\label{sec:implications}
\subsection{Noiseless least squares}
\subsection{Classification with margin}
\section{Results from previous work}
\subsection{Proof of \Cref{lem:lip-ext}}
\begin{enumerate}
\item Follows from Proposition IV.3.1.4 of \cite{HiriartUrrutyLe93ab}.
\item Follows from Proposition IV.3.1.4 of \cite{HiriartUrrutyLe93ab}.
\item Follows since for $L$-Lipschitz functions $ 0 \in \grad f(x) + L\mathbb{B}_2$.
\item Follows from Section VI.4.5 of \cite{HiriartUrrutyLe93ab}.
\end{enumerate}
\subsection{Algorithms from \cite{AsiLeDu21}}
\label{appen:asiledu-alg}
\begin{algorithm}
\caption{Localization based Algorithm}
\label{alg:pure-erm}
\begin{algorithmic}[1]
\REQUIRE Dataset $D=(\ds_1, \ldots, \ds_n)\in \domain^n$,
constraint set $\xdomain$,
step size $\ss$, initial point $x_0$,
Lipschitz (clipping) constant $L$,
privacy parameters $(\diffp,\delta)$;
\STATE Set $k = \ceil{\log n}$ and $n_0 = n/k$
\FOR{$i=1$ to $k$\,}
\STATE Set
$\ss_i = 2^{-4i} \ss$
\STATE Solve the following ERM over
$ \mc{X}_i= \{x\in \xdomain: \norm{x - x_{i-1}}_2 \le {2\lip \ss_i n_0} \}$:
\begin{equation*}
F_i(x) = \frac{1}{n_0} \sum_{j=1 + (i-1)n_0}^{in_0} \f(x;\ds_j) + \frac{1}{\ss_i n_0 } \norm{x - x_{i-1}}_2^2
\end{equation*}
\STATE Let $\hat x_i$ be the output of the optimization algorithm.
\IF{$\delta=0$}
\STATE Set
$\noise_i \sim \laplace_d(\sigma_i)$ where $\sigma_i = 4 \lip \ss_i \sqrt{d}/\diffp$
\ELSIF{$\delta>0$}
\STATE Set $\noise_i \sim \normal(0,\sigma_i^2)$ where $\sigma_i = 4 \lip \ss_i \sqrt{\log(1/\delta)}/\diffp$
\ENDIF
\STATE Set $x_i = \hat x_i + \noise_i$
\ENDFOR
\RETURN the final iterate $x_k$
\end{algorithmic}
\end{algorithm}
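The following NumPy sketch mirrors the structure of \Cref{alg:pure-erm}, under the simplifying assumption that \texttt{solve\_erm} is a black-box solver for the regularized ERM in each phase; the noise calibration follows the displayed $\sigma_i$, with per-coordinate scales that are schematic rather than a verified implementation.
\begin{verbatim}
import numpy as np

def localization(data, x0, eta, L, eps, delta, solve_erm, rng=None):
    # Schematic version of the localization-based algorithm above.
    # solve_erm(batch, center, radius, reg) is an assumed black box
    # that minimizes F_i over the stated ball.
    rng = rng or np.random.default_rng()
    n = len(data)
    k = int(np.ceil(np.log(n)))
    n0 = n // k
    x = np.array(x0, dtype=float)
    d = x.size
    for i in range(1, k + 1):
        eta_i = 2.0 ** (-4 * i) * eta
        batch = data[(i - 1) * n0 : i * n0]
        x_hat = solve_erm(batch, center=x, radius=2 * L * eta_i * n0,
                          reg=1.0 / (eta_i * n0))
        if delta == 0:   # pure DP: Laplace noise
            sigma_i = 4 * L * eta_i * np.sqrt(d) / eps
            x = x_hat + rng.laplace(scale=sigma_i, size=d)
        else:            # approximate DP: Gaussian noise
            sigma_i = 4 * L * eta_i * np.sqrt(np.log(1 / delta)) / eps
            x = x_hat + rng.normal(scale=sigma_i, size=d)
    return x
\end{verbatim}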
\begin{algorithm}
\caption{Epoch-based algorithms for $\kappa$-growth}
\label{alg:loc-growth}
\begin{algorithmic}[1]
\REQUIRE
Dataset $\Ds=(\ds_1, \ldots, \ds_n)\in \domain^n$,
constraint set $\xdomain$,
Lipschitz (clipping) constant $L$,
initial point $x_0$,
number of iterations $T$,
probability parameter $\tailprob$,
privacy parameters $(\diffp,\delta)$;
\STATE Set $n_0 = n/T$ and $\rad_0 = {\rm diam}(\xdomain)$
\IF{$\delta=0$}
\STATE Set
$\ss_0 = \frac{\rad_0}{2\lip} \min \left(\frac{1}{\sqrt{n_0 \log(n_0) \log(1/\beta)}} ,\frac{ \diffp}{ d \log(1/\beta)} \right)$
\ELSIF{$\delta>0$}
\STATE Set
\begin{align*}
\ss_0 = \frac{\rad_0}{2\lip} \min \left(\frac{1}{\sqrt{n_0 \log(n_0) \log(1/\beta)}} , \frac{ \diffp}{ \sqrt{d \log(1/\delta)} \log(1/\beta)} \right)
\end{align*}
\ENDIF
\FOR{$i=0$ to $T - 1$\,}
\STATE Let $\Ds_i = (\ds_{1+i n_0}, \dots, \ds_{(i+1) n_0})$
\STATE Set $\rad_i = 2^{-i} \rad_0$ and $\ss_i = 2^{-i} \ss_0$
\STATE Set $\xdomain_i = \{x \in \xdomain : \ltwo{x - x_i} \le \rad_i \}$
\STATE Run~\Cref{alg:pure-erm} on dataset $\Ds_i$ with starting point $x_i$, Lipschitz (clipping) constant $L$,
privacy parameter $(\diffp,\delta)$, domain $\xdomain_i$ (with diameter $\rad_i$), step size $\ss_i$
\STATE Let $x_{i+1}$ be the output of the private procedure
\ENDFOR
\RETURN $x_T$
\end{algorithmic}
\end{algorithm}
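As a quick orientation for the choice $T = \left\lceil \frac{2 \log n}{\lkappa - 1}\right\rceil$ in the results below: the halving schedule in \Cref{alg:loc-growth} gives $\rad_i = 2^{-i}\rad_0$, so (reading $\log$ as base $2$)
\begin{align*}
\rad_T = 2^{-T}\rad_0 \le \rad_0 \, n^{-\frac{2}{\lkappa - 1}},
\end{align*}
which is the localization scale underlying the $(\cdot)^{\frac{\kappa}{\kappa - 1}}$ rates reproduced below.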
\subsection{Theoretical results from \cite{AsiLeDu21}}
We first reproduce the high probability guarantees of \Cref{alg:pure-erm} as proved in \cite{AsiLeDu21}.
\begin{restatable}{proposition}{restatePureSCOHb}
\label{thm:sco-hb-pure}
Let $\beta \le 1/(n+d)$, $\diam_2(\xdomain)\le \rad$ and $\f(x;\ds)$ be convex, $\lip$-Lipschitz for all $\ds \in \domain$. Setting
\begin{equation*}
\ss = \frac{\rad}{\lip} \min \left(\frac{1}{\sqrt{n \log(1/\beta)}} ,\frac{ \diffp}{ d \log(1/\beta)} \right)
\end{equation*}
then for $\delta=0$, \Cref{alg:pure-erm} is $\diffp$-DP and has with probability $1-\beta$
\begin{equation*}
f(x) - f(x\opt)
\le 128\lip \rad \cdot \left( \frac{\sqrt{\log(1/\beta) } \log^{3/2} n}{\sqrt{n}} + \frac{ d \log(1/\beta) \log n}{n \diffp} \right).
\end{equation*}
\end{restatable}
\begin{restatable}{proposition}{restateApproxSCOHb}
\label{thm:sco-hb-appr}
Let $\beta \le 1/(n+d)$, $\diam_2(\xdomain)\le \rad$ and $\f(x;\ds)$ be convex, $\lip$-Lipschitz for all $\ds \in \domain$. Setting
\begin{equation*}
\ss = \frac{\rad}{\lip} \min \left(\frac{1}{\sqrt{n \log(1/\beta)}} ,\frac{ \diffp}{ \sqrt{d \log(1/\delta)} \log(1/\beta)} \right),
\end{equation*}
then for $\delta > 0 $, \Cref{alg:pure-erm} is $(\diffp,\delta)$-DP and has with probability $1-\beta$
\begin{equation*}
f(x) - f(x\opt)
\le 128 \lip \rad \cdot \left( \frac{\sqrt{\log(1/\beta) } \log^{3/2} n}{\sqrt{n}} + \frac{ \sqrt{d \log(1/\delta)} \log(1/\beta) \log n}{n \diffp} \right).
\end{equation*}
\end{restatable}
Now, we reproduce the high probability convergence guarantees of \Cref{alg:loc-growth}.
\begin{restatable}{theorem}{restateSCOGrowthPure}
\label{thm:sco-growth-pure}
Let $\beta \le 1/(n+d)$, $\diam_2(\paramdomain)\le \diam$ and $\f(x;\ds)$ be convex, $\lip$-Lipschitz for all $\ds \in \statdomain$.
Assume that $\pf$ has $\kappa$-growth (Assumption~\ref{ass:growth}) with $\kappa \ge \lkappa >1$.
Setting $T = \left\lceil \frac{2 \log n}{\lkappa-1} \right\rceil$,
\Cref{alg:loc-growth} is $\diffp$-DP and has with probability $1-\beta$
\begin{equation*}
\pf(x_T) - \min_{x \in \xdomain} \pf(x)
\le \frac{4032}{\lambda^{\frac{1}{\kappa - 1}}} \cdot \left( \frac{\lip \sqrt{\log(1/\beta) }\log^{3/2} n }{\sqrt{n}} + \frac{ \lip d \log(1/\beta)\log n}{n \diffp (\lkappa -1)} \right)^{\frac{\kappa}{\kappa-1}}.
\end{equation*}
\end{restatable}
\begin{restatable}{theorem}{restateSCOGrowthApprox}
\label{thm:sco-growth-appr}
Let $\beta \le 1/(n+d)$, $\diam_2(\xdomain)\le \rad$ and $\f(x;\ds)$ be convex, $\lip$-Lipschitz for all $\ds \in \statdomain$.
Assume that $\pf$ has $\kappa$-growth (Assumption~\ref{ass:growth}) with $\kappa \ge \lkappa >1$.
Setting $T = \left\lceil \frac{2 \log n}{\lkappa-1} \right\rceil$ and $\delta>0$, \Cref{alg:loc-growth} is $(\diffp,\delta)$-DP and has with probability $1-\beta$
\begin{equation*}
\pf(x_T) - \min_{x \in \xdomain} \pf(x)
\le \frac{4032}{\lambda^{\frac{1}{\kappa - 1}}} \cdot \left( \frac{\lip \sqrt{\log(1/\beta) } \log^{3/2} n}{\sqrt{n}} + \frac{ \lip \sqrt{d \log(1/\delta)} \log(1/\beta)\log n}{n \diffp (\lkappa -1)} \right)^{\frac{\kappa}{\kappa-1}}.
\end{equation*}
\end{restatable}
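Note that the guarantees in \Cref{thm:sco-growth-pure,thm:sco-growth-appr} differ only in the privacy term, with the $d$ of the pure DP bound replaced by $\sqrt{d \log(1/\delta)}$ under approximate DP.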
\section{Proofs from \Cref{sec:no-growth}}
\label{appen:proof-lb}
\subsection{Proof of \Cref{thm:lb-private-interp-nogrowth}}\label{proof:thm:lb-private-interp-nogrowth}
Consider the sample risk function
\begin{align*}
\risksamp(\param; \statval) \defeq \frac{\smooth}{2} \ltwo{\param - \statval}^2 \cdot \ind{\statval \neq 0}.
\end{align*}
We define the datasets $\statvalset_v^n \defeq \{0\}^{n - k}\cup \{v\}^{k}$. We define the corresponding population risk to be $\risk_v(\param) \defeq \frac{1}{n} \sum_{\statval \in \statvalset_v} \risksamp(\param; \statval) = \frac{k\smooth}{2n} \ltwo{\param - v}^2 $. We select $\mc{V}$ to be a $\gamma$-packing (with respect to the $\ell_2$ norm) of a ball of diameter $\diam$ contained in $\paramdomain$. Define the separation between $v, v' \in \mc{V}$ with respect to the losses $\risk_v$ and $\risk_{v'}$ by
\begin{align*}
d_{\rm opt}(f_v,f_{v'}) \coloneqq \inf_{\param \in \paramdomain} \frac{\risk_v(\param)}{2} + \frac{\risk_{v'}(\param)}{2} \geq c \defeq \frac{k \smooth}{8n}\gamma^2.
\end{align*}
For the sake of contradiction, suppose that $\E[\risk_v(M(\statvalset_v))] \leq \tau$ with $\tau < \frac{k \smooth \gamma^2}{8n(1 + e^{k\varepsilon}2^d\gamma^d/\diam^d)}$ for all $v \in \mc{V}$. Then by Markov's inequality, $\P(\risk_v(M(\statvalset_{v})) > c) \leq \frac{\tau}{c}$ and $\P(\risk_v(M(\statvalset_{v})) \leq c) \geq 1 - \frac{\tau}{c}$ for all $v$, and so
\begin{align*}
\frac{\tau}{c}
& \stackrel{(i)} {\geq} \P(\risk_v(M(\statvalset_{v})) > c) \\
& \stackrel{(ii)}{\geq} \P(\cup_{v' \in \mc{V} \setminus \{v\}}\risk_{v'}(M(\statvalset_{v})) \leq c) \\
& \stackrel{(iii)}{\geq} e^{-k\varepsilon}\sum_{v' \in \mc{V} \setminus \{v\}} \P(\risk_{v'}(M(\statvalset_{v'})) \leq c) \\
& \stackrel{(i)}{\geq} e^{-k\varepsilon} (|\mc{V}| - 1)\left(1 - \frac{\tau}{c}\right),
\end{align*}
where inequality $(ii)$ follows from the definition of the separation, and $(iii)$ follows from privacy and the disjoint nature of the events in the union. Rearranging, we get that
\begin{align*}
\tau \geq \frac{k \smooth \gamma^2}{8n(1 + e^{k\varepsilon}(|\mc{V}| -1)\inv)},
\end{align*}
which is a contradiction. By standard packing inequalities~\cite{Wainwright19}, we know that $|\mc{V}| \geq (\diam/2\gamma)^d$. Setting $k = d/\varepsilon$ and $\gamma = \diam/2e$ and using the fact that $x/ (x-1)$ is decreasing in $x$ gives
\begin{align*}
\tau \geq \frac{ d \smooth \diam^2}{32n\diffp e^2(1 + e^{d}(e^d -1)\inv)} \geq \frac{\smooth \diam^2 d}{96e^2 n\diffp }.
\end{align*}
We now prove the $\ed$-DP lower bound. Consider the following sample risk function
\begin{align*}
\risksamp(\param; \statval) \defeq \frac{\smooth}{2} (\param - \statval)^2 \ind{\statval \neq 0}.
\end{align*}
We define the datasets $\statvalset_v^n \defeq \{0\}^{n - k}\cup \{v\}^{k}$, inducing the corresponding population risk $\risk_v(\param) \defeq \frac{1}{n} \sum_{\statval \in \statvalset_v} \risksamp(\param; \statval) = \frac{k\smooth}{2n} (\param - v)^2 $. We select two points $v, v'$ contained within the ball of diameter $\diam$ contained in $\paramdomain$ such that $|v - v'| = \diam$. Define the separation between $v$ and $v'$ with respect to the losses $\risk_v$ and $\risk_{v'}$ as
\begin{align*}
d_{\rm opt}(f_v,f_{v'}) \coloneqq \inf_{\param \in \paramdomain} \frac{\risk_v(\param)}{2} + \frac{\risk_{v'}(\param)}{2} \geq c \defeq \frac{k \smooth}{8n}\diam^2.
\end{align*}
For the sake of contradiction, suppose that $\E[\risk_v(M(\statvalset_v))] \leq \tau$ with $\tau < \frac{k \smooth \diam^2}{8n}\left\lparen\frac{e^{-k\diffp} - k e^{-\diffp} \delta}{1 + e^{-k\varepsilon}}\right\rparen$ for both $v$ and $v'$. Then by Markov's inequality, $\P(\risk_v(M(\statvalset_{v})) > c) \leq \frac{\tau}{c}$ and $\P(\risk_v(M(\statvalset_{v})) \leq c) \geq 1 - \frac{\tau}{c}$, and so
\begin{align*}
\frac{\tau}{c}
& \stackrel{(i)} {\geq} \P(\risk_v(M(\statvalset_{v})) > c) \\
& \stackrel{(ii)}{\geq} \P(\risk_{v'}(M(\statvalset_{v})) \leq c) \\
& \stackrel{(iii)}{\geq} e^{-k\varepsilon} \P(\risk_{v'}(M(\statvalset_{v'})) \leq c) - ke^{-\diffp} \delta \\
& \stackrel{(i)}{\geq} e^{-k\varepsilon} \left(1 - \frac{\tau}{c}\right) - ke^{-\diffp} \delta,
\end{align*}
where inequality $(ii)$ follows from the definition of the separation, and $(iii)$ follows from group privacy of $\ed$-privacy \cite{DworkRo14}. Rearranging, we get that
\begin{align*}
\tau \geq \frac{k \smooth \diam^2}{8n}\left\lparen\frac{e^{-k\diffp} - k e^{-\diffp} \delta}{1 + e^{-k\varepsilon}}\right\rparen,
\end{align*}
which is a contradiction. Setting $k = 1/\diffp$ and using the fact that $\delta \leq \diffp e^{\diffp - 1} / 2$ gives the first result.
\section{Proofs from \Cref{sec:growth}}
\label{appen:proof-ub}
We first prove a lemma showing that each time we shrink the domain, the set of interpolating solutions still lies in the new domain with high probability, and that the new Lipschitz constant we define is a valid Lipschitz constant for the loss on the new domain. We prove it in full generality for $\growth$-growth.
\begin{lemma}\label{lem:valid-subalg}
Let $\paramdomain\opt$ denote the set of interpolating solutions of problem \eqref{eqn:objective}. Then $\paramdomain\opt \subset \paramdomain_{i}$ for all $i \in [T]$ with probability $1 - \beta$, and $\ltwo{\grad \risksamp(y;\statval)} \le L_i$ for all $y \in \paramdomain_i$.
\end{lemma}
\begin{proof}
We prove this lemma for the case $\delta = 0$; the case $\delta > 0$ follows similarly. For epoch $i$, using Theorem 2 of \cite{AsiLeDu21}, we have with probability $1 - \tailprob/T$,
\begin{align*}
\risk(\hparam_i) - \risk(\param\opt) \le \frac{C_\kappa}{\growthcoef^{\frac{1}{\growth - 1}}}\max\left\{\frac{L_i\sqrt{\log(T/\tailprob)}\log^{3/2} \sampround}{ \sqrt{\sampround}},\frac{L_i{d}\log(T/\tailprob)\log \sampround}{\sampround \diffp}\right\}^{\frac{\growth}{\growth - 1}}.
\end{align*}
Using the growth condition on $f(\cdot)$, we have
\begin{align*}
\ltwo{\hparam_i - \param\opt} \le \sqrt[\growth]{\frac{\growth(\risk(\hparam_i) - \risk(\param\opt))}{\growthcoef}} \le (C_\growth \growth)^{1/\growth}\max\left\{\frac{L_i\sqrt{\log(T/\tailprob)}\log^{3/2} \sampround}{\growthcoef \sqrt{\sampround}},\frac{L_i{d}\log(T/\tailprob)\log \sampround}{\growthcoef \sampround \diffp}\right\}^{\frac{1}{\growth - 1}}.
\end{align*}
Using $c_\growth = 2 (C_\growth \growth)^{1/\growth}$, we get $\ltwo{\hparam_i - \param\opt} \le D_{i+1}/2$ with probability $1 - \tailprob/T$. Thus, for each epoch $i$, with probability $1 - \tailprob/T$, each point in the set $\paramdomain\opt$ of optimizers lies in the domain $\paramdomain_{i}$. Using a union bound on all epochs, we have $\paramdomain\opt \subset \paramdomain_{i}$ for all $i \in [T]$ with probability $1 - \tailprob$.
We now prove the second part of the lemma. Using the smoothness of $\risksamp(\cdot;\statval)$ and the fact that $\grad \risksamp(\param\opt;\statval) = 0$ for all $\param\opt \in \paramdomain\opt$, we have
\begin{align*}
\ltwo{\grad\risksamp(y;\sampval)} = \ltwo{\grad\risksamp(y;\sampval) - \grad\risksamp(\param\opt;\sampval)} \le \smooth \ltwo{y - \param\opt} \leq \smooth \left(\ltwo{y - \hparam_i} + \ltwo{\param\opt - \hparam_i}\right) \le \smooth D_{i} = L_i,
\end{align*}
as desired.
\end{proof}
We now restate and prove the convergence rate of \Cref{alg:priv-interpol-quad}.
\ubquadtheorem*
\begin{proof}
First we prove the privacy guarantee of the algorithm. Each sample impacts only one of the iterates $\hparam_i$; thus \Cref{alg:priv-interpol-quad} satisfies the same privacy guarantee as \Cref{alg:loc-growth} by post-processing.
We divide the utility proof into two main parts: the first is to check the validity of the assumptions made when applying \Cref{alg:loc-growth}, and the second is to use its high probability convergence guarantees to obtain the final rates. For the first part, we must ensure that the optimum set lies in the new domain $\paramdomain_i$ at step $i$ and that the Lipschitz constant $L_i$ defined with respect to this domain is a valid Lipschitz constant. This follows from \Cref{lem:valid-subalg}.
Next, we use the high probability convergence guarantees of the subalgorithm \Cref{alg:loc-growth} to get convergence rates for \Cref{alg:priv-interpol-quad}.
We prove it for the case $\delta = 0$; the case $\delta > 0$ is similar. We know that
\begin{align*}
L_i & = \smooth D_i \\
& = c_2\frac{\smooth L_{i-1}}{\growthcoef} \max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}},\frac{d\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}.
\end{align*}
Thus we have
\begin{align*}
L_T = \left(c_2\frac{\smooth}{\growthcoef} \max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}},\frac{d\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)^{T-1} L_1.
\end{align*}
Using Theorem 2 of \cite{AsiLeDu21} on the last epoch, we have with probability $1 - \tailprob$ that
\begin{align*}
\risk(\hparam_T) - \risk(\param\opt) &\le C_2\frac{L^2_T}{\growthcoef}\max\left\{\frac{\log(T/\tailprob)\log^{3/2}\sampround}{ \sampround},\frac{{d^2}\log^2(T/\tailprob) \log\sampround}{ \sampround^2 \diffp^2}\right\} \\
& =\left(c^2_2\frac{\smooth^2}{\growthcoef^2} \max\left\{\frac{\log(T/\tailprob)\log^{3/2}\sampround}{ \sampround},\frac{{d^2}\log^2(T/\tailprob) \log\sampround}{ \sampround^2 \diffp^2}\right\}\right)^{T} \frac{C_2 L_1^2\growthcoef}{\smooth^2c_2^2}\\
& =\left(c^2_2\frac{\smooth^2}{\growthcoef^2} \max\left\{\frac{\log(T/\tailprob)\log^{3/2}\sampround}{ \sampround},\frac{{d^2}\log^2(T/\tailprob) \log\sampround}{ \sampround^2 \diffp^2}\right\}\right)^{T} \frac{ L_1^2\growthcoef}{8\smooth^2}.
\end{align*}
Let $\sampround = k \log^2 n$ and $T = n/\sampround$ for some $k$ such that
$$\left(c^2_2\frac{\smooth^2}{\growthcoef^2} \max\left\{\frac{\log(n/(\tailprob k\log^2 n))\log^{3/2}(k \log^2 n)}{k \log^2 n}, \frac{{d^2}\log^2(n/(\tailprob k\log^2 n)) \log(k \log^2 n)}{ (k \log^2 n)^2 \diffp^2}\right\}\right) \le \frac{1}{e} .$$
This holds for example for
\begin{align*}
k = 256\frac{\smooth \log(1/\beta)}{\growthcoef}\max\left\{\frac{256\smooth }{\growthcoef},\frac{d}{\diffp \sqrt{\log n}}\right\},
\end{align*}
for sufficiently large $n$. Using these values of $\sampround$ and $T$, we have
\begin{align}\label{eqn:good-conv}
\risk(\hparam_T) - \risk(\param\opt) \le \frac{C_2 L_1^2\growthcoef}{\smooth^2c_2^2} \exp\left(-\frac{n}{k \log^2 n}\right) = \frac{L_1^2\growthcoef}{8\smooth^2} \exp\left(-\frac{n}{k \log^2 n}\right).
\end{align}
To get the convergence results in expectation, let $A$ denote the ``bad'' event, which has probability at most $\tailprob$, in which $ \risk(\hparam_T) - \risk(\param\opt) > \frac{L_1^2\growthcoef}{8\smooth^2} \exp\left(-\frac{n}{k \log^2 n}\right)$. Now,
\begin{align*}
\E[\risk(\hparam_T) - \risk(\param\opt)] &\le \beta\frac{\smooth\diam^2}{2} + (1 - \beta)\E[\risk(\hparam_T) - \risk(\param\opt) \mid A^c]\\
&\le \beta\frac{\smooth\diam^2}{2} + \E[\risk(\hparam_T) - \risk(\param\opt) \mid A^c].
\end{align*}
Substituting $\tailprob = \frac{1}{n^\mu}$ and using \Cref{eqn:good-conv}, together with the observation that the above choice of $k$ gives $\frac{n}{k \log^2 n} = \wt \Theta\paren{\min\left\{\frac{n \growthcoef^2}{\smooth^2}, \frac{n \growthcoef \diffp}{\smooth d}\right\}}$, we get the result.
\end{proof}
\subsection{Algorithm for general $\growth$}
\label{appen:gen-growth}
\begin{algorithm}[ht]
\caption{Epoch-based clipped-GD for general $\growth$-growth}
\label{alg:priv-interpol-kappa}
\begin{algorithmic}[1]
\REQUIRE
number of epochs $\epochs$, samples in each round $\sampround = n/T$, initial diameter $\diam_1$, initial Lipschitz constant $L_1$, domain $\paramdomain_1$, initial point $\hparam_{0}$
\FOR{$i=1$ to $T$\,}
\STATE $\hparam_i \leftarrow $ Output of \Cref{alg:loc-growth} when run on domain $\paramdomain_i$ (diameter $\diam_i$), with Lipschitz constant $L_i$, using $\sampround$ samples.
\IF{$\delta = 0$}
\STATE \begin{align*}
\text{Set} \ D_{i+1} = c_\growth \left(\frac{L_i}{\growthcoef}\max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}},\frac{d\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)^{\frac{1}{\growth - 1}}
\end{align*}
\ELSIF{$\delta > 0$}
\STATE \begin{align*}
\text{Set} \ D_{i+1} = c_\growth \left(\frac{L_i}{\growthcoef}\max\left\{\frac{\sqrt{\log(T/\tailprob)}\log^{3/2} \sampround}{\sqrt{\sampround}},\frac{\sqrt{d \log(1/\delta)}\log(T/\tailprob)\log \sampround}{\sampround \diffp}\right\}\right)^{\frac{1}{\growth - 1}}
\end{align*}
\ENDIF
\STATE Set $\paramdomain_{i+1} = \{\hparam : \ltwo{\hparam - \hparam_i} \le \diam_{i+1}/2\}$
\STATE Set $L_{i+1} = \smooth \diam_{i+1}$
\ENDFOR
\RETURN the final iterate $\hparam_T$
\end{algorithmic}
\end{algorithm}
\begin{remark}
$c_\growth$ is an absolute constant depending on the high probability performance guarantees of \Cref{alg:loc-growth}. We can calculate that $C_\growth$ is at most $2^{12} \, (\approx 4000)$ and hence $c_\growth \le 2(2^{12}\growth)^{1/\growth} \le 4 \cdot 2^{12/\growth}$.
\end{remark}
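For instance, at $\growth = 3$ this gives $c_3 \le 4 \cdot 2^{12/3} = 64$, and since $2^{12/\growth} \to 1$, the bound on $c_\growth$ tends to $4$ as $\growth$ grows.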
\begin{restatable}{theorem}{ubkappatheorem}
\label{thm:ub-kappa}
Let each sample function $\risksamp$ be $L$-Lipschitz and $\smooth$-smooth, and let the population function $f$ satisfy $\growth$-growth (\Cref{ass:growth}) with coefficient $\growthcoef$. Let problem \eqref{eqn:objective} be an interpolation problem. Then \Cref{alg:priv-interpol-kappa} is $(\diffp,\delta)$-DP.
For $\delta = 0$,
running \Cref{alg:priv-interpol-kappa} with $T = \log n$ and $\sampround = \frac{n}{\log n}$, we have
\begin{align*}
\risk(\hparam_T) - \risk(\param\opt) \le \widetilde{O}\paren{\frac{1}{\sqrt{n}} + \frac{d}{n\diffp}}^{\frac{\growth}{\growth - 2}},
\end{align*}
with probability $1 - \tailprob$. For $\delta > 0$, \Cref{alg:priv-interpol-kappa} when run using $T = \log n$ and $\sampround = n/\log n$ achieves error
\begin{align*}
\risk(\hparam_T) - \risk(\param\opt) \le \widetilde{O}\paren{\frac{1}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}}{n\diffp}}^{\frac{\growth}{\growth - 2}},
\end{align*}
with probability $1 - \tailprob$.
\end{restatable}
\begin{proof}
The privacy guarantee follows from the proof of \Cref{thm:ub-quad}. We divide the utility proof into two main parts: the first is to check the validity of the assumptions made when applying \Cref{alg:loc-growth}, and the second is to use its high probability convergence guarantees to obtain the final rates. For the first part, we must ensure that the optimum set lies in the new domain defined at every step and that the Lipschitz constant defined with respect to the domain is a valid Lipschitz constant. This follows from \Cref{lem:valid-subalg}.
Next, we use the high probability convergence guarantees of the subroutine \Cref{alg:loc-growth} to obtain convergence rates for \Cref{alg:priv-interpol-kappa}.
We prove it for the case $\delta = 0$; the case $\delta > 0$ is similar. We know that
\begin{align*}
L_i & = \smooth D_i \\
& = c_\growth\smooth\left(\frac{ L_{i-1}}{\growthcoef} \max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}},\frac{d\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)^{\frac{1}{\growth - 1}}.
\end{align*}
Thus, we have
\begin{align*}
L_T & = (c_\growth\smooth)^{\frac{\growth - 1}{\growth - 2}\left(1 - \frac{1}{(\growth - 1)^{T-1}}\right)}\left(\frac{1}{\growthcoef} \max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}},\frac{d\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)^{\frac{1}{\growth - 2}\left(1 - \frac{1}{(\growth - 1)^{T-1}}\right)} L_1^\frac{1}{(\growth - 1)^{T-1}}.
\end{align*}
We note that for $T\sim \log n$, $\frac{1}{(\growth - 1)^{T-1}} \approx \frac{1}{n^{\log(\growth - 1)}}$, and thus for large $n$ we may ignore terms of the form $a^{\pm 1/n^{\log(\growth - 1)}}$, since they are $\approx 1$. Ignoring these terms by including an additional constant $C'$, we can write
\begin{align*}
L_T & = C'(c_\growth\smooth)^{\frac{\growth - 1}{\growth - 2}}\left(\frac{1}{\growthcoef} \max\left\{\frac{\sqrt{\log(T/\tailprob)} \log^{3/2} \sampround}{\sqrt{\sampround}},\frac{d\log(T/\tailprob) \log \sampround}{\sampround \diffp}\right\}\right)^{\frac{1}{\growth - 2}} L_1^\frac{1}{(\growth - 1)^{T-1}}.
\end{align*}
Using Theorem 2 of \cite{AsiLeDu21} on the last epoch, we have with probability $1 - \tailprob$ that
\begin{align*}
\risk(\hparam_T) - \risk(\param\opt) &\le \frac{C_\kappa }{\growthcoef^{\frac{1}{\growth - 1
}}}\max\left\{\frac{L_T\sqrt{\log(T/\tailprob)}\log^{3/2} \sampround}{ \sqrt{\sampround}},\frac{L_T{d}\log(T/\tailprob)\log \sampround}{\sampround \diffp}\right\}^{\frac{\growth}{\growth - 1}} \\
& = \frac{(C')^{\frac{\growth}{\growth - 1}}C_\kappa (c_\growth\smooth)^{\frac{\growth}{\growth - 2}}}{\growthcoef^{\frac{2}{\growth - 2
}}}\max\left\{\frac{\sqrt{\log(T/\tailprob)}\log^{3/2} \sampround}{ \sqrt{\sampround}},\frac{{d}\log(T/\tailprob)\log \sampround}{\sampround \diffp}\right\}^{\frac{\growth}{\growth - 2}} L_1^{{\frac{\growth}{(\growth - 1)^T}}}.
\end{align*}
Choosing $T = \log n$ and $\sampround = n/\log n$, we have
\begin{align*}
\risk(\hparam_T) - \risk(\param\opt) &\le \frac{(C')^{\frac{\growth}{\growth - 1}}C_\kappa (c_\growth\smooth)^{\frac{\growth}{\growth - 2}}}{\growthcoef^{\frac{2}{\growth - 2
}}}\max\left\{\frac{\sqrt{\log(\log n/\tailprob)}\log^{3/2} (n/\log n)}{ \sqrt{n/\log n}},\frac{{d}\log(\log n/\tailprob)\log (n/\log n)}{\diffp n/\log n }\right\}^{\frac{\growth}{\growth - 2}} L_1^{{\frac{\growth}{n}}}.
\end{align*}
We now state the results in terms of the sample complexity required to achieve a given error. To ensure $\risk(\hparam_T) - \risk(\param\opt) < \alpha$, it is sufficient to ensure
$$ \frac{(C')^{\frac{\growth}{\growth - 1}}C_\kappa (c_\growth\smooth)^{\frac{\growth}{\growth - 2}}}{\growthcoef^{\frac{2}{\growth - 2
}}}\max\left\{\frac{\sqrt{\log(T/\tailprob)}\log^{3/2} \sampround}{ \sqrt{\sampround}},\frac{{d}\log(T/\tailprob)\log \sampround}{\sampround \diffp}\right\}^{\frac{\growth}{\growth - 2}} L_1^{{\frac{\growth}{(\growth - 1)^T}}} < \alpha .$$
Choosing $n = \tilde{O}\left(\max\{(\frac{1}{\alpha^2})^\frac{\growth - 2}{\growth},(\frac{d}{\diffp\alpha})^\frac{\growth - 2}{\growth}\}\right)$ ensures error $\le \alpha$.
\end{proof}
\begin{restatable}{corollary}{ubkappaexp}
\label{cor:ub-kappa}
Under the conditions of \Cref{thm:ub-kappa}, for $\delta = 0$, the expected error of the output of the algorithm is upper bounded by
\begin{align*}
\E[\risk(\hparam_T) - \risk(\param\opt)] \le \widetilde{O}\paren{\frac{1}{\sqrt{n}} + \frac{d}{n\diffp}}^{\frac{\growth}{\growth - 2}},
\end{align*}
for arbitrarily large $\mu$. For $\delta > 0$, the expected error of the output of the algorithm is upper bounded by
\begin{align*}
\E[\risk(\hparam_T) - \risk(\param\opt)] \le \widetilde{O}\paren{\frac{1}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}}{n\diffp}}^{\frac{\growth}{\growth - 2}},
\end{align*}
for arbitrarily large $\mu$.
\end{restatable}
\subsection{$(\diffp,\delta)$ version of \Cref{thm:adapt-conv}}
\begin{restatable}{theorem}{ubadapttheoremed}
\label{thm:adapt-conv-appr}
Let each sample function $\risksamp$ be $L$-Lipschitz and $\smooth$-smooth, and let the population function $f$ satisfy quadratic growth (\Cref{ass:growth}) with coefficient $\growthcoef$. Let $\param_{\rm adapt}$ be the output of \Cref{alg:priv-adapt-interpol}. Then,
\begin{enumerate}
\item \Cref{alg:priv-adapt-interpol} is $\ed$-DP.
\item Without any additional interpolation assumption, the expected error of $\param_{\rm adapt}$ is upper bounded by
\begin{align*}
\E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam \cdot \wt O \left( \frac{ 1}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}}{n \diffp} \right)^2.
\end{align*}
\item Let problem \eqref{eqn:objective} be an interpolation problem. Then the expected error of $\param_{\rm adapt}$ is upper bounded by
\begin{align*}
\nonumber \E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam&\left(\frac{1}{n^\mu} + \exp\left(- \wt \Theta \paren{\frac{n \growthcoef^2}{\smooth^2}}\right) \right. \\
& + \left.\exp\left(- \wt \Theta \paren{\frac{\growthcoef n \diffp}{\smooth \sqrt{d\log(1/\delta)}}}\right)\right).
\end{align*}
\end{enumerate}
\end{restatable}
\begin{proof}
First, we note that the privacy of \Cref{alg:priv-adapt-interpol} follows from the privacy of \Cref{alg:priv-interpol-quad} and \Cref{alg:lip-ext} and post-processing.
To prove the convergence guarantees, we first need to show that the optimal set $\paramdomain\opt$ is included in the shrunken domain $\paramdomain_{\rm int}$. Using the high probability guarantees of \Cref{alg:lip-ext}, we know that with probability $1 - \beta/2$, we have
\begin{align*}
f(x_1) - f(x\opt) \le \frac{2^{12} L}{\lambda} \cdot \left( \frac{ \sqrt{\log(2/\beta) } \log^{3/2} n}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}\log(2/\beta)\log n}{n \diffp} \right).
\end{align*}
Using the quadratic growth condition, we immediately have $\ltwo{\param\opt - \param_1} \le \diam_{\rm int}/2$ and hence $\paramdomain\opt \subset \paramdomain_{\rm int}$.
Using smoothness, we have that for any $x \in \paramdomain_{\rm int}$,
\begin{align*}
f(x) - f(x\opt) \le \frac{\smooth\diam_{\rm int}^2}{2}.
\end{align*}
Since \Cref{alg:priv-interpol-quad} always outputs a point in its input domain (in this case $\paramdomain_{\rm int}$), even in the non-interpolation setting we have that
\begin{align*}
\E[\risk(\param_{\rm adapt}) - \risk(\param\opt)] \le L\diam \cdot \wt O \left( \frac{ 1}{\sqrt{n}} + \frac{\sqrt{d\log(1/\delta)}}{n \diffp} \right)^2.
\end{align*}
In the interpolation setting, the guarantees of \Cref{alg:priv-interpol-quad} hold and the result is immediate.
\end{proof}
\section{Proofs from \Cref{sec:super}}
\subsection{Proof of \Cref{thm:lb-private-interp-growth}}\label{proof:thm:lb-private-interp-growth}
The proof is exactly the same as that of \Cref{thm:lb-private-interp-nogrowth}, except we set $k = \frac{\lambda n}{\smooth}$ to ensure that $\risk_v(\param)$ for any $v \in \paramdomain$ has $\lambda$-quadratic growth. Finally, we set $\gamma = \frac{\diam}{2}\exp(\frac{-\lambda n\varepsilon}{\smooth d})$ and use the facts that $e^{\frac{\lambda n\diffp}{\smooth}}\geq 2$ and that $x/ (x-1)$ is decreasing in $x$ to obtain the desired lower bound.
\subsection{Proof of \Cref{thm:superefficiency}}\label{proof:thm:superefficiency}
The proof of this result hinges on the following two supporting propositions. We first restate Proposition 2.2 from \cite{AsiDu20} (listed as \Cref{proposition:super-efficiency} below) in our notation for convenience. We then state \Cref{thm:modulus-of-continuity}, which gives upper and lower bounds on the modulus of continuity (defined in \Cref{proposition:super-efficiency}). We note that the lower bound presented in \Cref{thm:modulus-of-continuity} is one of the novel contributions of this paper; the proof of \Cref{thm:modulus-of-continuity} can be found in \Cref{proof:thm:modulus-of-continuity}. We first assume it to be true and prove \Cref{thm:superefficiency}, before returning to prove its correctness.
\begin{proposition}
\label{proposition:super-efficiency}
For some fixed $\risksamp: \paramdomain \times \statdomain \rightarrow \R$ which is convex and $\smooth$-smooth with respect to its first argument,
let
$\statvalset \in \funcsetfamily_\lambda^L(\risksamp)$ for $\lipschitz = 2\smooth\diam$. Let $\param_\statvalset\opt = \argmin_{\param' \in \paramdomain} \risk_\statvalset(\param')$. Define the corresponding modulus of continuity
\begin{align*}
\lmod(\statvalset, 1/\varepsilon) \defeq \sup_{\statvalset'\in \funcsetfamily_\lambda^L(\risksamp)} \{|\param_\statvalset\opt - \param_{\statvalset'}\opt| : \dham(\statvalset, \statvalset') \leq 1/\varepsilon \}.
\end{align*}
Assume the mechanism $M$ is $\diffp$-DP and
for some $\gamma \le \frac{1}{2e}$ achieves
\begin{equation*}
\E[|M(\statvalset) - \param_\statvalset\opt|]
\le \gamma \left(\frac{\lmod(\statvalset; 1 / \diffp)}{2}\right).
\end{equation*}
Then there exists a sample $\statvalset' \in \funcsetfamily_\lambda^L(\risksamp)$ where
$\dham(\statvalset, \statvalset') \le \frac{\log(1/2\gamma)}{2 \diffp}$ such that
\begin{align*}
\E[|M(\statvalset') - \param_{\statvalset'}\opt|]
\ge \frac{1}{4} \ell\left(\frac{1}{4} \lmod\left(\statvalset';
\frac{\log(1 / 2\gamma)}{2 \diffp}\right)\right).
\end{align*}
\end{proposition}
\begin{proposition}\label{thm:modulus-of-continuity}
Let $\risksamp: \paramdomain \times \statdomain \rightarrow \R$ be convex and $\smooth$-smooth in its first argument, and satisfy $\inf_{\param\in\paramdomain}\risksamp(\param; \statval) = 0$ for all $\statval \in \statdomain$. Suppose we have some $\statvalset \in \funcsetfamily_\lambda^L(\risksamp)$ with $\lipschitz = 2\smooth\diam$ which also induces an interpolation problem (a problem which satisfies \Cref{def:interpolation}).
With respect to the dataset $\statvalset$, the modulus of continuity $\lmod(\statvalset, 1/\varepsilon)$
satisfies
\begin{align*}
\frac{\diam}{ n \varepsilon} \leq \lmod(\statvalset, 1/\varepsilon) \leq \frac{8\smooth \diam}{\lambda n \varepsilon}.
\end{align*}
\end{proposition}
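Before applying these bounds, a toy sanity check may be helpful. In one dimension with $\risksamp(\param;\statval) = \frac{\smooth}{2}(\param - \statval)^2$, the empirical minimizer is the sample mean, so swapping $k$ of the $n$ points of an identically-zero dataset for the boundary value $\diam$ moves the minimizer by exactly $k\diam/n$, consistent with the $\diam/(n\varepsilon)$ scaling at $k = 1/\varepsilon$. The following snippet (illustrative values only) checks this numerically.
\begin{verbatim}
import numpy as np

# Toy check of the modulus scaling: each sample loss is
# f(x; s) = (H/2) * (x - s)^2, so the empirical minimizer is mean(S).
n, D, k = 1000, 1.0, 10          # illustrative sizes; k plays 1/eps
S = np.zeros(n)                  # interpolation dataset: all samples agree
S_swapped = S.copy()
S_swapped[:k] = D                # move k points to the boundary value D
shift = abs(S_swapped.mean() - S.mean())
assert np.isclose(shift, k * D / n)   # minimizer moves by k*D/n = 0.01
\end{verbatim}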
With these two results, we can now prove \Cref{thm:superefficiency}. Restating the conditions of the theorem formally, suppose for some constants $c_0$ and $c_1$ there is an $\diffp$-DP estimator $M$ such that
\begin{align*}
\E[\risk_\statvalset(M(\statvalset)) ]- \inf_{\param \in \xd}\risk_\statvalset(\param) \leq c_0\diam^2
e^{-c_1(n\diffp)^t}.
\end{align*}
If $t > 1$, we may replace $t$ by $\min(1, t)$; the bound certainly still holds for large enough $n$.
If we let $\param_\statvalset\opt = \argmin_{\param\in\paramdomain} \risk_\statvalset(\param)$, using the definition of strong convexity, we have that there exists some $c_2$ and $c_3$ such that
\begin{align*}
\E[|M(\statvalset) - \param_\statvalset\opt|] \leq c_2\diam e^{- c_3(n\diffp)^t}
\end{align*}
To satisfy the expression from \Cref{proposition:super-efficiency}, we select $\gamma$ such that
\begin{align*}
\frac{\gamma\lmod(\statvalset; 1/\varepsilon)}{2} = c_2 \diam e^{-c_3 (n\diffp)^t}.
\end{align*}
Using \Cref{thm:modulus-of-continuity}
we must have $\frac{ \lambda n \varepsilon}{4 \smooth} c_2 \exp(-c_3 (n\diffp)^t) \leq \gamma \leq 2 n\varepsilon c_2 \exp(-c_3 (n\diffp)^t)$. Using \Cref{proposition:super-efficiency}, we have that
\begin{align*}
\E[|M(\statvalset') - \param_{\statvalset'}\opt|] \geq \lmod\left(\statvalset' ; \frac{\log(1/2\gamma)}{2\varepsilon}\right)
\end{align*}
Before lower bounding this quantity further, we first verify that $\frac{\log(1/2\gamma)}{2\varepsilon}$ does not exceed the total size of the dataset, $n$. Using our bounds on $\gamma$, we see that
\begin{align*}
\frac{\log(1/2\gamma)}{2\varepsilon} \leq \frac{1}{2\varepsilon}\left( c_3 (n\diffp)^t - \log c_2 - \log\left(\frac{\lambda n \varepsilon}{2 \smooth}\right)\right)
\end{align*}
For any $t\in(0,1]$, for sufficiently large $n$, this quantity is less than $n$. We now lower bound the modulus of continuity by using the fact that it is a non-decreasing function in its second argument:
\begin{align*}
\E[|M(\statvalset') - \param\opt_{\statvalset'}|] &\geq \lmod\left(\statvalset' ; \frac{\log(1/2\gamma)}{2\diffp}\right) \geq \lmod\left(\statvalset' ; \frac{ c_3 (n\diffp)^t - \log c_2 - \log(4n\diffp)}{2\diffp}\right)\\
&\geq \frac{\diam}{2n\diffp}\left[c_3(n\diffp)^t - \log c_2 - \log(4n\diffp) \right].
\end{align*}
This is the desired result; the last inequality comes from another application of \Cref{thm:modulus-of-continuity} but with $\frac{ c_3 (n\diffp)^t - \log c_2 - \log(4n\varepsilon)}{2\diffp}$ in place of $1/\diffp$.
\subsubsection{Proof of \Cref{thm:modulus-of-continuity}}\label{proof:thm:modulus-of-continuity}
\noindent\fbox{%
\parbox{\textwidth}{%
\textbf{Proof Outline}\\~\\
At a high level, starting with a function $\riskn$, we first remove an arbitrary set of $1/\varepsilon$ samples to create a function $\riskneps$. We then replace the sample functions we removed with $1/\varepsilon$ copies of $\frac{\smooth}{2}(\param-\diam)^2$ and argue about how far the minimizer of $\riskneps +\frac{\smooth}{2n\varepsilon}(\param-\diam)^2$ is from the minimizer of $\riskn$. We will need several supporting lemmas to complete this proof; we quickly outline how we use them.
\begin{enumerate}
\item We use \Cref{lem:growth-minimizer-stability} to argue that the minimizers of $\riskneps$ are no different than those of $\riskn$.
\item We use \Cref{lem:sc-closure-removal} to argue about the growth of $\riskneps$.
\item We use \Cref{lem:sc-closure-add}, \Cref{lem:growth-smooth-consequence}, and \Cref{lem:control-of-opt-2} to lower bound how far the minimizer of $\riskneps +\frac{\smooth}{2n\varepsilon}(\param-\diam)^2$ has moved from the minimizer of $\riskn$.
\item We use \Cref{lem:stability} to upper bound how far the minimizer of $\riskneps +\frac{\smooth}{2n\varepsilon}(\param-\diam)^2$ has moved from the minimizer of $\riskn$.
\end{enumerate}
}}\\~\\
We now formally introduce several supporting lemmas which will aid our proof of \Cref{thm:modulus-of-continuity}. The first ensures that the minimizing set does not change upon the removal of a constant number of samples.
\begin{lemma}\label{lem:growth-minimizer-stability}
Assume that $\inf_{\param\in\paramdomain}\risksamp(\param; \statval) = 0$ for all $\statval \in \statdomain$.
Suppose $\riskn$ satisfies \Cref{def:interpolation} and has $\lambda$-quadratic growth. Let $\paramdomain\opt\defeq \argmin_{\param\in\paramdomain} \riskn(\param)$. Let $\statvalset_\varepsilon \subset \statvalset$ consist of any $1/\varepsilon > 0$ data points (a constant number, not scaling with $n$). Then, for $\riskneps \defeq \fracnsamp \sum_{\statval \in \statvalset \setminus \statvalset_\varepsilon} \risksamp(\param; \statval)$, we have that $\paramdomainneps \defeq \argmin_{\param\in\paramdomain} \riskneps(\param) = \paramdomain\opt$.
\end{lemma}
\begin{proof}
Suppose for the sake of contradiction that $\paramdomain\opt \neq \paramdomainneps$.
Since $\riskn$ is an interpolation problem, the removal of samples can only increase the size of $\paramdomainneps$. Suppose that $\paramdomainneps \setminus \paramdomain\opt \neq \emptyset$. There exist at most $1/\varepsilon$ points in $\statvalset$ that have non-zero error on $\paramdomainneps \setminus \paramdomain\opt$. However, by smoothness of each sample function (and the fact that $\risksamp(\param\opt;\statval) = 0$ and $\grad\risksamp(\param\opt;\statval) = 0$ for every $\statval$, by construction), we have that for $\param \in \paramdomainneps \setminus \paramdomain\opt$
\begin{align*}
\riskn(\param) \leq \frac{\smooth }{n\varepsilon}\dist(\param, \paramdomain\opt)^2.
\end{align*}
Since $\lim_{n\to\infty}\frac{\smooth}{n\varepsilon} = 0$, this contradicts $\lambda$-quadratic growth.
\end{proof}
This second lemma ensures that deleting a constant number of samples does not affect the growth or strong convexity of the population function by too much.
\begin{lemma}\label{lem:sc-closure-removal}
Assume that $\inf_{\param\in\paramdomain}\risksamp(\param; \statval) = 0$ for all $\statval \in \statdomain$.
Suppose $\riskn$ satisfies \Cref{def:interpolation} and has $\lambda$-quadratic growth (respectively $\lambda$-strong convexity). Let $\riskneps$ be defined as in \Cref{lem:growth-minimizer-stability}. Then $\riskneps$ has $\gamma$-quadratic growth (respectively $\gamma$-strong convexity) for any $\gamma \leq \lambda - \frac{\smooth}{n \varepsilon}$.
\end{lemma}
\begin{proof}
By \Cref{lem:growth-minimizer-stability}, the minimizing set of $\riskneps$ is the same as that of $\riskn$.
Suppose for the sake of contradiction that $\riskneps$ does not have $\gamma$-quadratic growth. Then there must exist $\param_1$ such that
\begin{align*}
\riskneps(\param_1) - \riskneps(\param\opt) < \frac{\gamma}{2}\ltwo{\param_1 - \param\opt}^2.
\end{align*}
By smoothness and growth we have
\begin{align*}
\frac{\smooth}{2 n \varepsilon} \ltwo{\param_1 - \param\opt}^2 + \frac{\gamma}{2}\ltwo{\param_1 - \param\opt}^2 > \riskn(\param_1) - \riskn(\param\opt) \geq \frac{\lambda}{2}\ltwo{\param_1 - \param\opt}^2.
\end{align*}
This implies that $\gamma > \lambda - \frac{\smooth}{n\varepsilon}$, a contradiction.
Suppose for the sake of contradiction that $\riskneps$ does not have $\gamma$-strong convexity. Then there must exist $\param_1$ and $\param_2$ such that
\begin{align*}
\riskneps(\param_1) - \riskneps(\param_2) < \frac{\gamma}{2}\ltwo{\param_1 - \param_2}^2 + \langle \nabla \riskneps(\param_2), \param_1 - \param_2 \rangle.
\end{align*}
By smoothness and strong convexity we have
\begin{align*}
\frac{\smooth}{2 n \varepsilon} \ltwo{\param_1 - \param_2}^2 + \frac{\gamma}{2}\ltwo{\param_1 - \param_2}^2 + \langle \nabla \riskn(\param_2), \param_1 - \param_2 \rangle > \riskn(\param_1) - \riskn(\param_2) \geq \frac{\lambda}{2}\ltwo{\param_1 - \param_2}^2 + \langle \nabla \riskn(\param_2), \param_1 - \param_2 \rangle.
\end{align*}
However, this implies that $\gamma > \lambda - \frac{\smooth}{n\varepsilon}$ which is a contradiction.
\end{proof}
The next lemma is a standard result on the closure under addition of strongly convex functions.
\begin{lemma}\label{lem:sc-closure-add}
Let functions $h_1$ and $h_2$ be $\lambda$- and $\gamma$-strongly convex, respectively; then $h_1 + h_2$ is $(\lambda + \gamma)$-strongly convex.
\end{lemma}
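(For completeness: this follows by summing the two defining inequalities $h_j(\param_1) \ge h_j(\param_2) + \langle \nabla h_j(\param_2), \param_1 - \param_2\rangle + \frac{\mu_j}{2}\ltwo{\param_1 - \param_2}^2$ for $j \in \{1,2\}$, with $\mu_1 = \lambda$ and $\mu_2 = \gamma$.)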
This lemma provides bounds on the gradient under smoothness, strong convexity, and quadratic growth.
\begin{lemma}\label{lem:growth-smooth-consequence}
Let $g: \paramdomain \rightarrow \R_+$ be a convex function with $\paramdomain\opt = \argmin_{\param \in \paramdomain}g(\param)$ such that for $\param\opt \in \paramdomain\opt$, $g(\param\opt) =0$. If $g$ has $\lambda$-quadratic growth, then
\begin{align*}
|g'(\param)| \geq \frac{\lambda}{2} \dist(\param, \paramdomain\opt).
\end{align*}
If instead $g$ has $\lambda$-strong convexity, then
\begin{align*}
|g'(\param)| \geq \lambda \dist(\param, \paramdomain\opt).
\end{align*}
Alternatively, if $g$ is $\smooth$-smooth, then
\begin{align*}
|g'(\param)| \leq \smooth \dist(\param, \paramdomain\opt).
\end{align*}
\end{lemma}
\begin{proof}
We note that by first order optimality conditions, for all $\param\opt \in\paramdomain\opt$, $\nabla g(\param\opt) = 0$.
To prove the first inequality, we have that for any $\param\opt \in \paramdomain\opt$, the following is true:
\begin{align*}
\frac{\lambda}{2} \dist(\param, \paramdomain\opt)^2 \leq g(\param) - g\opt \leq |g'(\param)||\param - \param\opt|.
\end{align*}
In particular, minimizing over $\param\opt$ on the right hand side and rearranging gives the desired result.
To prove the second result, we know that by strong convexity for any $\param\opt \in \paramdomain\opt$
\begin{align*}
|g'(\param)|= |g'(\param) - g'(\param\opt)| \geq \lambda |\param - \param\opt|.
\end{align*}
To prove the last result, we know that by smoothness for any $\param\opt \in \paramdomain\opt$
\begin{align*}
|g'(\param)|= |g'(\param) - g'(\param\opt)| \leq \smooth |\param - \param\opt|.
\end{align*}
Minimizing over $\param\opt$ on the right hand side gives the desired result.
\end{proof}
This lemma controls how much the minimizers of a function can change if another function is added. This will directly be useful in lower bounding the modulus of continuity.
\begin{lemma}\label{lem:control-of-opt-2}
Suppose $h: [-\diam, \diam] \to \R_+$ and $g: [-\diam, \diam] \to \R_+$. Let $\param_h\opt$ be the largest minimizer of $h$ and $\param_g\opt$ be the smallest minimizer of $g$, and assume that $\param_h\opt \leq \param_g\opt$. Let $\param\opt$ be any minimizer of $h +g$. Assume that $h(\param_h\opt) = 0$ and $g(\param_g\opt)= 0$.
If $h$ has $\lambda_h$-quadratic growth and $g$ is $\smooth_g$-smooth, then
\begin{align*}
\param\opt -\param_h\opt \leq \frac{\smooth_g (\param_g\opt-\param_h\opt)}{\frac{\lambda_h}{2} + \smooth_g}.
\end{align*}
If $h$ is $\smooth_h$-smooth and $g$ has $\lambda_g$-quadratic growth, then
\begin{align*}
\frac{\frac{\lambda_g}{2} (\param_g\opt-\param_h\opt)}{\frac{\lambda_g}{2} + \smooth_h}
\leq\param\opt -\param_h\opt.
\end{align*}
The same relation holds with $\lambda_g / 2$ and $\lambda_h / 2$ replaced with $\lambda_g$ and $\lambda_h$ respectively if the above statement is modified such that $g$ and $h$ are $\lambda_g$ and $\lambda_h$ strongly convex instead.
\end{lemma}
\begin{proof}
If $\param_h\opt \neq \diam$, then the first order condition for optimality implies
\begin{align*}
h'(\param_h\opt) + g'(\param_h\opt) = g'(\param_h\opt) < 0 \quad \text{and} \quad h'(\param_g\opt) + g'(\param_g\opt) = h'(\param_g\opt) > 0.
\end{align*}
Thus, $\param\opt \in (\param_h\opt, \param_g\opt)$. By the monotonicity of the first derivative of convex functions, for $\param\opt \in (\param_h\opt, \param_g\opt)$ we have $g'(\param\opt) < 0$ and $h'(\param\opt) > 0$. Combining this with \Cref{lem:growth-smooth-consequence}, we get
\begin{align*}
\frac{\lambda_h}{2}(\param\opt - \param_h\opt) \leq h'(\param\opt) \leq \smooth_h(\param\opt - \param_h\opt)\\
\smooth_g (\param\opt - \param_g\opt) \leq g'(\param\opt)\leq \frac{\lambda_g}{2}(\param\opt - \param_g\opt).
\end{align*}
Combining these facts gives
\begin{align*}
\frac{\lambda_h}{2}(\param\opt - \param_h\opt) + \smooth_g (\param\opt -\param_g\opt) \leq h'(\param\opt) + g'(\param\opt) = 0 \leq \smooth_h(\param\opt - \param_h\opt) + \frac{\lambda_g}{2} (\param\opt -\param_g\opt)
\end{align*}
Rearranging these two inequalities gives the desired result. We note that the lower bound only requires that $h$ is $\smooth_h$-smooth and $g$ has $\lambda_g$-quadratic growth, and the upper bound only requires $h$ has $\lambda_h$-quadratic growth and $g$ is $\smooth_g$-smooth. The last statement about strong convexity follows from the same reasoning, except using the strong convexity inequality in \Cref{lem:growth-smooth-consequence} instead of the quadratic growth inequality.
\end{proof}
The following lemma is a slight modification of Claim 6.1 from \cite{ShalevShSrSr09} and will be helpful for us to upper bound the modulus of continuity.
\begin{lemma}\label{lem:stability}
Let $\statvalset'$ consist of $n$ data points where $|\statvalset \triangle \statvalset'| = k$. Suppose that $\riskn$ is $\lambda$-strongly convex and satisfies \Cref{def:interpolation}. Assume the sample function $\risksamp: \paramdomain \times \statdomain \to \R_+$ is $\lipschitz$-Lipschitz in its first argument and that $\inf_{\param\in\paramdomain}\risksamp(\param; \statval) = 0$ for all $\statval \in \statdomain$. For $\paramopts \in \argmin_{\param \in \paramdomain} \riskn(\param)$ and $\paramoptsp \in \argmin_{\param \in \paramdomain} \risknp(\param)$, we have that
\begin{align*}
\ltwo{\paramopts - \paramoptsp} \leq \frac{4k\lipschitz}{\lambda n}.
\end{align*}
\end{lemma}
\begin{proof}
By strong convexity, we have that
\begin{align*}
\riskn(\paramoptsp) - \riskn(\paramopts) \geq \frac{\lambda}{2} \ltwo{\paramoptsp - \paramopts}^2,
\end{align*}
since by first order optimality conditions, we know that $\nabla \riskn(\paramopts) = 0$ as a consequence of \Cref{def:interpolation}. We also have
\begin{align*}
\riskn(\paramoptsp) - \riskn(\paramopts) &= \fracnsamp \sum_{\statval \in \statvalset \setminus \statvalset'} \left[ \risksamp(\paramoptsp; \statval) - \risksamp(\paramopts; \statval)\right] + \fracnsamp \sum_{\statval \in \statvalset \cap \statvalset'} \left[ \risksamp(\paramoptsp; \statval) - \risksamp(\paramopts; \statval)\right]\\
&=\fracnsamp \sum_{\statval \in \statvalset \setminus \statvalset'} \left[ \risksamp(\paramoptsp; \statval) - \risksamp(\paramopts; \statval)\right]
- \fracnsamp \sum_{\statval \in \statvalset' \setminus \statvalset} \left[ \risksamp(\paramoptsp; \statval) - \risksamp(\paramopts; \statval)\right]
+\risknp(\paramoptsp) - \risknp(\paramopts)\\
&\leq \frac{2 k \lipschitz}{n} \ltwo{\paramoptsp - \paramopts},
\end{align*}
where the last inequality follows from the Lipschitzness of $\risksamp$ and the fact that $\paramoptsp \in \argmin_{\param \in \paramdomain} \risknp(\param)$.
\end{proof}
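To see that the $1/n$ scaling in \Cref{lem:stability} is of the right order, here is a minimal one-dimensional sketch (our example, chosen to satisfy the lemma's assumptions): take $\paramdomain = [-\diam, \diam]$, let $\risksamp(\param; \statval) = \frac{\lambda}{2}(\param - \statval)^2$, and let $\statvalset$ consist of $n$ copies of a single point $c$, so that $\riskn$ interpolates with $\paramopts = c$. Replacing one copy of $c$ by $c'$ (so $|\statvalset \triangle \statvalset'| = 2$) gives
\begin{align*}
\paramoptsp = \frac{(n-1)c + c'}{n}, \quad \text{so} \quad \ltwo{\paramopts - \paramoptsp} = \frac{|c - c'|}{n} \leq \frac{2\diam}{n},
\end{align*}
consistent with (and within a constant factor of) the bound $\frac{4k\lipschitz}{\lambda n}$, taking $k = 2$ and $\lipschitz = 2\lambda\diam$ on this domain.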
Armed with these supporting lemmas, we can now bound the modulus of continuity. Following the steps of the proof outline, let $\param_0\opt$ be the largest minimizer of $\riskn$.
Without loss of generality, we assume that $\param_0\opt \leq 0$. If $\param_0\opt >0$, by symmetry, it suffices to consider the problem replacing $\frac{\smooth}{2}(\param -\diam)^2$ with $\frac{\smooth}{2}(\param +\diam)^2$ in the following proof.
By \Cref{lem:growth-minimizer-stability}, $\riskneps$ has the same minimizing set as $\riskn$. By \Cref{lem:sc-closure-removal}, $\riskneps$ is $(\lambda - \frac{\smooth}{n\varepsilon})$-strongly convex. Replace the $1/\varepsilon$ removed datapoints with samples whose loss function is $\frac{\smooth}{2}(\param-\diam)^2$; it is clear that $\frac{\smooth}{2}(\param-\diam)^2$ satisfies the desired Lipschitz condition. Our constructed non-interpolation population function is
\begin{align*}
\riskneps(\param) + \frac{\smooth}{2 n\varepsilon}(\param - \diam)^2,
\end{align*}
which is $\lambda$-strongly convex by \Cref{lem:sc-closure-removal} and \Cref{lem:sc-closure-add} and is $2\smooth\diam$-Lipschitz. This means that the dataset $\statvalset'$ to which this function corresponds belongs to $\funcsetfamily_\lambda^L(\risksamp)$. Let $\param\opt$ be the minimizer of $\riskneps(\param) + \frac{\smooth}{2 n\varepsilon}(\param - \diam)^2$.
By the triangle inequality, $\riskneps$ is $\left(\frac{n-1/\varepsilon}{n}\right)\smooth$-smooth, and $\frac{\smooth}{2n\varepsilon}(\param - \diam)^2$ is $\frac{\smooth}{n\varepsilon}$-strongly convex. Thus, applying \Cref{lem:control-of-opt-2} with $h(\param)\defeq \riskneps(\param)$ and $g(\param)\defeq \frac{\smooth}{2n\varepsilon}(\param - \diam)^2$, we have that
\begin{align*}
|\param\opt - \param_0\opt| = \param\opt - \param_0\opt \geq \frac{\frac{\smooth}{n\varepsilon} (\diam - \param_0\opt)}{\left(\frac{n-1/\varepsilon}{n}\right)\smooth + \frac{\smooth}{n\varepsilon}} = \frac{\diam - \param_0\opt}{n \varepsilon} = \frac{\diam + |\param_0\opt|}{n \varepsilon} \geq \frac{\diam}{n \varepsilon}.
\end{align*}
Here, implicitly, we are using the fact that $\param_0\opt$ is also a minimizer of $\riskneps$ by \Cref{lem:sc-closure-removal}. This completes the proof of the lower bound.
The upper bound follows from \Cref{lem:stability} with $k = 1/\varepsilon$ and $\lipschitz = 2\smooth\diam$.
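Putting the two pieces together (a summary in our words): for the constructed pair of datasets, the minimizers satisfy
\begin{align*}
\frac{\diam}{n\varepsilon} \leq |\param\opt - \param_0\opt| \leq \frac{4 (1/\varepsilon) \cdot 2\smooth\diam}{\lambda n} = \frac{8\smooth\diam}{\lambda n \varepsilon},
\end{align*}
so the modulus of continuity at this instance is of order $\frac{\diam}{n\varepsilon}$, up to a factor of the condition number $\smooth/\lambda$.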
| {
"timestamp": "2022-11-01T01:21:20",
"yymm": "2210",
"arxiv_id": "2210.17070",
"language": "en",
"url": "https://arxiv.org/abs/2210.17070",
"abstract": "In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems -- problems where there exists a solution that simultaneously minimizes all of the sample losses -- than on non-interpolating ones; we show that generally similar improvements are impossible in the private setting. However, when the functions exhibit quadratic growth around the optimum, we show (near) exponential improvements in the private sample complexity. In particular, we propose an adaptive algorithm that improves the sample complexity to achieve expected error $\\alpha$ from $\\frac{d}{\\varepsilon \\sqrt{\\alpha}}$ to $\\frac{1}{\\alpha^\\rho} + \\frac{d}{\\varepsilon} \\log\\left(\\frac{1}{\\alpha}\\right)$ for any fixed $\\rho >0$, while retaining the standard minimax-optimal sample complexity for non-interpolation problems. We prove a lower bound that shows the dimension-dependent term is tight. Furthermore, we provide a superefficiency result which demonstrates the necessity of the polynomial term for adaptive algorithms: any algorithm that has a polylogarithmic sample complexity for interpolation problems cannot achieve the minimax-optimal rates for the family of non-interpolation problems.",
"subjects": "Machine Learning (cs.LG); Cryptography and Security (cs.CR); Optimization and Control (math.OC); Machine Learning (stat.ML)",
"title": "Private optimization in the interpolation regime: faster rates and hardness results",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.982287697666963,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7089594724208393
} |
https://arxiv.org/abs/1310.2275 | A pointwise inequality for the fourth order Lane-Emden equation | We prove that the following pointwise inequality holds\begin{equation*} -\Delta u \ge \sqrt\frac{2}{(p+1)-c_n} |x|^{\frac{a}{2}} u^{\frac{p+1}{2}} + \frac{2}{n-4} \frac{|\nabla u|^2}{u} \ \ \text{in}\ \ \mathbb{R}^n\end{equation*}where $c_n:=\frac{8}{n(n-4)}$, for positive bounded solutions of the fourth order Hénon equation that is \begin{equation*} \Delta^2 u = |x|^a u^p \ \ \ \ \text {in }\ \ \mathbb{R}^n \end{equation*} for some $a\ge0$ and $p>1$. Motivated by the Moser's proof of the Harnack's inequality as well as Moser iteration type arguments in the regularity theory, we develop an iteration argument to prove the above pointwise inequality. As far as we know this is the first time that such an argument is applied towards constructing pointwise inequalities for partial differential equations. An interesting point is that the coefficient $\frac{2}{n-4}$ also appears in the fourth order $Q$-curvature and the Paneitz operator. This in particular implies that the scalar curvature of the conformal metric with conformal factor $u^\frac{4}{n-4}$ is positive. | \section{Introduction}
We are interested in proving an a priori pointwise estimate for
positive solutions of the following fourth order H\'{e}non equation
\begin{equation}\label{4henon}
\Delta^2 u = |x|^a u^p \ \ \ \ \text {in }\ \ \mathbb{R}^n
\end{equation}
where $p>1$ and $a\ge 0$. Let us first mention that for the case
$a=0$, it is known that (\ref{4henon}) only admits $u=0$ as a
nonnegative solution when $p$ is a subcritical exponent, that is,
$1<p<\frac{n+4}{n-4}$ when $n\ge 5$, and any $p>1$ when $n\le 4$.
Moreover, for the critical case $p=\frac{n+4}{n-4}$ all entire
positive solutions are classified. See \cite{l,wx}. This is a
counterpart of the standard Liouville theorem of Gidas-Spruck in
\cite{gs,gs2} for the second order Lane-Emden equation
\begin{equation}\label{2lane}
-\Delta u = u^p \ \ \ \ \text {in }\ \ \mathbb{R}^n
\end{equation}
stating that $u=0$ is the only nonnegative solution for
(\ref{2lane}) when $p$ is a subcritical exponent that is
$1<p<\frac{n+2}{n-2}$ when $n\ge 3$. Note also that for the
fourth order H\'{e}non equation, it is conjectured that $u=0$ is
the only nonnegative solution of (\ref{4henon}) when $p$ is a
subcritical exponent that is when $1<p<\frac{n+4+2a}{n-4}$ and $n\ge
5$, see \cite{fg}. Therefore, throughout this note, when we are
dealing with (\ref{4henon}), we assume that $p > \frac{n+4+2a}{n-4}$
and $n\ge 5$. For more information, see \cite{fg,so} and references
therein.
Pointwise estimates have had tremendous impact on the theory of
elliptic partial differential equations. In what follows we list
some of the celebrated pointwise inequalities for certain semilinear
elliptic equations and systems. These inequalities have been used to
tackle well-known conjectures and open problems. The following
inequality by Modica \cite{m} has been one of the main techniques
to solve De Giorgi's conjecture (1978) for the Allen-Cahn
equation and to analyze various semilinear equations and problems.
\begin{thm} (Modica \cite{m}, 1985) Let $F\in C^2(\mathbb {R})$ be a nonnegative function and
$u$ be a bounded entire solution of
\begin{equation}\label{mod}
\Delta u=F'(u) \ \ \text{in} \ \ \mathbb{R}^n.
\end{equation}
Then
\begin{equation}\label{pointmod}
|\nabla u|^2\le 2 F(u) \ \ \text{in}\ \ \mathbb{R}^n.
\end{equation}
\end{thm}
For the specific case $F(u)=\frac{1}{4}(1-u^2)^2$, equation (\ref{mod}) is known as the Allen-Cahn equation.
Note also that Caffarelli et al. in \cite{cgs} extended this
inequality to quasilinear equations. We refer interested readers to \cite{fv1,fv2,fv3,fv4,cfv,fsv} regarding pointwise gradient estimates and certain improvements of (\ref{pointmod}). For the fourth order counterpart of (\ref{mod}) with an arbitrary
nonlinearity, a general inequality of the form (\ref{pointmod}) is
not known. However, for a particular nonlinearity known as the
fourth order Lane-Emden equation, i.e.
\begin{equation}\label{4lane}
\Delta^2 u = u^p \ \ \ \ \text {in }\ \ \mathbb{R}^n
\end{equation}
it is shown by Wei and Xu, in Theorem 3.1 of \cite{wx}, that the negative Laplacian of positive solutions is nonnegative,
that is, $-\Delta u\ge 0$ in $\mathbb{R}^n$. Setting $v=-\Delta u$, the fact that $-\Delta u\ge 0$ allows us to consider \eqref{4lane} as a special case (when $q=1$) of the Lane-Emden system, that is
\begin{eqnarray}\label{lane-emden}
\left\{ \begin{array}{lcl}
\hfill -\Delta u&=& v^q \ \ \text{in}\ \ \mathbb{R}^n,\\
\hfill -\Delta v&=& u^p \ \ \text{in}\ \ \mathbb{R}^n,
\end{array}\right.
\end{eqnarray}
where $p\ge q \ge 1$. Note that there is a significant difference between system (\ref{lane-emden}) and equation (\ref{4lane}) in the sense that this system has Hamiltonian structure while the equation has gradient structure, see \cite{dff,dfm,sz} and references therein. This system has been of great interest at least in the past two decades. In particular, the Lane-Emden conjecture stating that $u=v=0$ is the only nonnegative solution for this system where $\frac{1}{p+1} +\frac{1}{q+1}>\frac{n-2}{n}$ has been studied extensively and various methods and techniques are developed to tackle this conjecture. Among these methods, Souplet \cite{so} proved the following pointwise inequality for solutions of (\ref{lane-emden}) and then used it to prove the Lane-Emden conjecture in four dimensions. Note that the particular case $1<p<2$ is done by Phan in \cite{phan}.
\begin{thm} (Souplet \cite{so}, 2009) Let $u$ and $v$ be nonnegative solutions of (\ref{lane-emden}). Then the following inequality holds
\begin{equation}\label{pointlane}
\frac{u^{p+1}}{p+1}\le \frac{v^{q+1}}{q+1} \ \ \text{in}\ \ \mathbb{R}^n.
\end{equation}
\end{thm}
Applying this theorem, the following pointwise inequality holds for nonnegative solutions of \eqref{4lane}
\begin{equation}\label{point4lane}
-\Delta u \ge \sqrt\frac{2}{p+1} u^{\frac{p+1}{2}} \ \ \text{in}\ \ \mathbb{R}^n.
\end{equation}
Note also that Phan \cite{phan}, using methods similar to those provided in \cite{so}, extended the pointwise inequality (\ref{pointlane}) to nonnegative solutions of the H\'{e}non-Lane-Emden system, that is
\begin{eqnarray}\label{systemhenon}
\left\{ \begin{array}{lcl}
\hfill -\Delta u&=& |x|^b v^q \ \ \text{in}\ \ \mathbb{R}^n,\\
\hfill -\Delta v&=& |x|^a u^p \ \ \text{in}\ \ \mathbb{R}^n,
\end{array}\right.
\end{eqnarray}
where $p\ge q \ge 1$. Suppose that $0\le a -b\le (n-2)(p-q)$; then
\begin{equation}\label{pointsystemhenon}
|x|^a \frac{u^{p+1}}{p+1}\le |x|^b \frac{v^{q+1}}{q+1} \ \ \text{in}\ \ \mathbb{R}^n.
\end{equation}
The standard method to prove a pointwise inequality, as used to prove (\ref{pointlane}) and (\ref{pointmod}), is to derive an appropriate equation, call it an auxiliary equation, for the difference of the right-hand and left-hand sides of the inequality. Then, whenever enough decay estimates on solutions of the auxiliary equation are available, maximum principles can be applied to prove that the difference function has a fixed sign. The key point is therefore to construct a suitable auxiliary equation.
In a more technical framework, when constructing an auxiliary equation to prove (\ref{pointlane}) and (\ref{point4lane}), a few positive terms, including a gradient term of the form $|\nabla u|^2 u^{t-2}$ for some number $t$, are discarded in \cite{so}. To be more explicit, in order to prove (\ref{point4lane}), which is a particular case of (\ref{pointlane}), one considers the difference function $w(x):=\Delta u+\sqrt\frac{2}{p+1} u^{\frac{p+1}{2}}$. Straightforward calculations show that the following auxiliary equation holds
\begin{equation}\label{auxw}
\left( \sqrt \frac{2}{p+1} u^{\frac{1-p}{2}}\right) \Delta w= \Delta u+ \sqrt\frac{2}{p+1} u^{\frac{p+1}{2}}+ \frac{p-1}{2} \frac{|\nabla u|^2}{u}.
\end{equation}
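For the reader's convenience, here is the short computation behind (\ref{auxw}). Writing $q=\frac{p+1}{2}$ and using $\Delta (u^q) = q u^{q-1}\Delta u + q(q-1) u^{q-2}|\nabla u|^2$ together with $\Delta^2 u = u^p$, we get
\begin{equation*}
\Delta w = u^p + \sqrt\frac{2}{p+1}\, \frac{p+1}{2}\, u^{\frac{p-1}{2}} \left( \Delta u + \frac{p-1}{2} \frac{|\nabla u|^2}{u} \right),
\end{equation*}
and multiplying both sides by $\sqrt\frac{2}{p+1}\, u^{\frac{1-p}{2}}$ yields (\ref{auxw}).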
To show, via maximum principles for the above equation, that $\Delta w$ is nonnegative wherever $w$ is nonnegative, the gradient term $\frac{|\nabla u|^2}{u}$ is discarded in \cite{so}. Note, however, that equation (\ref{auxw}) suggests that the gradient term $\frac{|\nabla u|^2}{u}$ should have an impact on the inequality, just like the Laplacian and the power term $u^{\frac{p+1}{2}}$. This motivates us to include the gradient term in the inequality (\ref{point4lane}) that gives a lower bound on the Laplacian. Let us briefly mention that Modica, in his proof of (\ref{pointmod}), took advantage of similar gradient terms to construct an auxiliary equation. Following ideas provided by Modica \cite{m} and Souplet \cite{so}, as we shall see in the proof of Proposition \ref{propwk}, we manage to keep most of the positive terms when constructing our auxiliary equation.
In this paper, we develop a Moser iteration type argument to
prove a lower bound for the negative Laplacian of positive bounded solutions of (\ref{4henon})
that involves powers of $u$ and the new term $\frac{|\nabla u|^2}{u}$ with $\frac{2}{n-4}$ as the coefficient.
The remarkable point is that the coefficient $\frac{2}{n-4}$ is exactly what we need in the estimate of the scalar curvature for the conformal metric $g = u^{\frac{4}{n-4}}g_0$.
Here is our main result.
\begin{thm}\label{mainres} Let $u$ be a bounded positive solution of (\ref{4henon}). Then the following pointwise inequality holds
\begin{equation}\label{newpoint4henon}
-\Delta u \ge \sqrt\frac{2}{(p+1)-c_n} |x|^{\frac{a}{2}} u^{\frac{p+1}{2}} + \frac{2}{n-4} \frac{|\nabla u|^2}{u} \ \ \text{in}\ \ \mathbb{R}^n
\end{equation}
where $c_n:=\frac{8}{n(n-4)}$ and $0\le a\le \inf_{k\ge 0} A_k$ (defined at (\ref{ak})).
\end{thm}
\begin{remark} A natural question here is: what are the best constants in the inequality (\ref{newpoint4henon})?
\end{remark}
Let us now put the inequality (\ref{newpoint4henon}) in a more geometric context. Under the conformal change $g = u^{\frac{4}{n-4}} g_0$, where $g_0$ is the usual Euclidean metric, the new scalar curvature becomes
$$ S_g= - \frac{4(n-1)}{n-2} u^{-\frac{n+2}{n-4}} \Delta \left( u^{\frac{n-2}{n-4}} \right). $$
An immediate consequence of (\ref{newpoint4henon}) is that the conformal scalar curvature is positive; see the computation below. Note that this cannot be deduced from the inequality (\ref{point4lane}).
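To make this implication explicit: with $m=\frac{n-2}{n-4}$, so that $m-1=\frac{2}{n-4}$, we have
\begin{equation*}
\Delta \left( u^{\frac{n-2}{n-4}} \right) = m\, u^{m-1} \left( \Delta u + \frac{2}{n-4} \frac{|\nabla u|^2}{u} \right) \le - m\, u^{m-1} \sqrt\frac{2}{(p+1)-c_n}\, |x|^{\frac{a}{2}} u^{\frac{p+1}{2}} \le 0
\end{equation*}
by (\ref{newpoint4henon}), with strict inequality for $x\neq 0$; hence $S_g > 0$.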
The idea of proving a lower bound for the negative Laplacian is also used in the context of nonlinear eigenvalue problems to prove certain regularity results; see, e.g., \cite{ceg}. Similar pointwise inequalities are also used to prove Liouville theorems in the context of stability; see \cite{wxy,wy} and references therein. We would like to mention that Gui in \cite{gui} proved a very interesting Hamiltonian identity for elliptic systems that may be regarded as a generalization of Modica's inequality. He used this identity to rigorously analyze the structure of level curves of saddle solutions of the Allen-Cahn equation, as well as Young's law for the contact angles in triple junction formation. Note also that, as shown by Farina in \cite{far} for the Ginzburg-Landau system, the analog of Modica's estimate is false for systems in general. We refer interested readers to \cite{ali} for a review of this topic and to \cite{fg2} for De Giorgi type results for systems.
Here is the organization of the paper. In Section \ref{secEst}, we provide certain standard elliptic estimates that are consequences of Sobolev embeddings and regularity theory. Then, in Section \ref{secIter}, we develop a Moser iteration type argument, following ideas provided by Modica \cite{m} and Souplet \cite{so}. Finally, in Section \ref{secapp}, we first give a maximum principle type argument for a quasilinear equation that arises in the Moser iteration process, and then apply the estimates and methods developed in the preceding sections. We suggest ignoring the weight function $|x|^a$ in (\ref{4henon}) on a first reading.
\section{Technical elliptic estimates}\label{secEst}
In this section, we provide some elliptic decay estimates that we use frequently later in the proofs. Deriving the right decay estimates for solutions of (\ref{4henon}) plays a fundamental role in most of our proofs. Similar estimates have also been used in the literature to establish Liouville theorems and regularity results. We refer interested readers to \cite{f,fg,phan,so,ps}. We start with the following standard estimate.
\begin{lemma}\label{1bound} ($L^p$-estimate on $B_R$) Suppose that $u$ is a nonnegative solution of (\ref{4henon}); then for any $R>1$ we have
\begin{equation*}
\int_{B_R} |x|^a u^p \le C \ R^{n-\frac{4p+a}{p-1}},
\end{equation*}
where $C=C(n,p,a)>0$ is independent from $R$.
\end{lemma}
\noindent\textbf{Proof:} Consider the following test function $\phi_R\in C^4_c(\mathbb{R}^n)$ with $0\le\phi_R\le1$;
$$\phi_R(x)=\left\{
\begin{array}{ll}
1, & \hbox{if $|x|<R$;} \\
0, & \hbox{if $|x|>2R$;}
\end{array}
\right.$$
where $|| D^{i} \phi_R||_{\infty} \le \frac{C}{R^{i}}$ for $1\le i \le 4$. For fixed $m\ge 4$, we have
$$|\Delta^2 \phi^m_R(x)|\le \left\{
\begin{array}{ll}
0, & \hbox{if $|x|<R$ or $|x|>2R$;} \\
C R^{-4} \phi^{m-4}_R , & \hbox{if $R<|x|<2R$;}
\end{array}
\right.$$
where $C>0$ is independent from $R$. Multiplying the equation by $\phi^m_R$ and integrating, we get
\begin{eqnarray*}
\int_{B_{2R}} |x|^{a} u^p \phi^m_R &= & \int_{B_{2R}} \Delta^2 u \phi^m_R\\
&=& \int_{B_{2R}} u \Delta^2 \phi^m_R \le C R^{-4}\int_{B_{2R}\setminus B_{R}} u\phi^{m-4}_R.
\end{eqnarray*}
Applying H\"{o}lder's inequality we get
\begin{eqnarray*}
\int_{B_{2R}} |x|^{a} u^p \phi^m_R & \le & C \ R^{-4} \left( \int_{B_{2R}\setminus B_{R}} |x|^{\frac{-a}{p }p' } \right)^{\frac{1}{p'}} \left( \int_{B_{2R}\setminus B_{R}} |x|^{a} u^p \phi^{(m-4)p}_R \right)^{1/p}\\
& \le & C\ R^{ (n-\frac{a}{p}p')\frac{1}{p'} -4 } \left( \int_{B_{2R}\setminus B_{R}} |x|^{a} u^p \phi^{(m-4)p}_R \right)^{1/p},
\end{eqnarray*}
where $p'=\frac{p}{p-1}$. Setting $m=(m-4)p$, that is, $m=\frac{4p}{p-1}$, we get
\begin{equation*}
\int_{B_{2R}} |x|^{a} u^p \phi^m_R \le C\ R^{ (n-\frac{a}{p}p')\frac{1}{p'} -4 } \left( \int_{B_{2R}} |x|^{a} u^p \phi^{m}_R \right)^{1/p}.
\end{equation*}
Therefore,
\begin{equation*}
\int_{B_{2R}} |x|^{a} u^p \phi^m_R \le C\ R^{ (n-\frac{a}{p}p') -4p'} .
\end{equation*}
This finishes the proof.
\hfill $\Box$
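For the record, the exponent obtained above matches the statement of the lemma:
\begin{equation*}
\left( n-\frac{a}{p}p' \right) - 4p' = n - \frac{a}{p}\cdot\frac{p}{p-1} - \frac{4p}{p-1} = n - \frac{4p+a}{p-1}.
\end{equation*}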
From the H\"{o}lder's inequality we get the following.
\begin{cor}\label{u} Under the same assumptions as Lemma \ref{1bound}. The following estimate holds
$$ \int_{B_R\setminus B_{R/2}} u \le C R^{n -\frac{a+4}{p-1}}$$
where $C=C(n,p,a)>0$ is independent from $R$.
\end{cor}
We now show that the operator $-\Delta u$ has a sign. Then, we apply this to provide various elliptic estimates for derivatives of $u$. In addition, later on this will help us start the iteration argument.
\begin{prop}\label{prop0}
Let $u$ be a positive solution of (\ref{4henon}). Then, $-\Delta u\ge 0$ in $\mathbb {R}^n$.
\end{prop}
\noindent\textbf{Proof:} Let $v=-\Delta u$. Ideas and methods applied in this proof are strongly motivated by those given in \cite{wx}. Suppose that there is $x_0\in\mathbb R^n$ such that $v(x_0)<0$. Without loss of generality we take $x_0=0$; in case $x_0\neq 0$, set $\omega(x)=v(x+x_0)$ and apply the same argument. We use the notation $\bar f(r)=\frac{1}{|\partial B_r|} \int_{\partial B_r} f dS$ for the average of a function $f(x)$ on the boundary of $B_r$. We refer interested readers to \cite{n} regarding the average function. Applying H\"{o}lder's inequality,
\begin{eqnarray}\label{bar}
\left\{ \begin{array}{lcl}
\hfill -\Delta_r \bar u (r)&=& \bar v (r) \ \ \text{in}\ \ \mathbb{R},\\
\hfill -\Delta_r \bar v(r) &\ge & r^a (\bar u)^p \ \ \text{in}\ \ \mathbb{R},
\end{array}\right.
\end{eqnarray}
where $\Delta_r$ is the Laplacian operator in the polar coordinates, i.e. $$\Delta_r \bar f(r)= r^{1-n} ( r^{n-1} \bar f'(r) )'.$$ It is straightforward to see that
$$\bar v'(r)=\frac{1}{|\partial B_r|} \int_{B_r} \Delta v=-\frac{1}{|\partial B_r|} \int_{B_r} |x|^a u^p \le 0.$$
Therefore, $\bar v(r)\le \bar v(0)<0$ for $r>0$. Similarly for $\bar u'(r)$ we have
\begin{eqnarray*}
\bar u'(r) &=&- \frac{1}{|\partial B_r|} \int_{B_r} v = - r^{1-n} \int_0^r s^{n-1} \bar v(s) ds
\\&\ge& - \bar v(0) r^{1-n} \int_0^r s^{n-1} ds = - \frac{\bar v(0)}{n} r.
\end{eqnarray*}
From this, for any $r\ge r_0$ we get
\begin{eqnarray}
\bar u(r) \ge \alpha r^2 ,
\end{eqnarray}
where $\alpha=- \frac{\bar v(0)}{2n} >0$. We now have a lower bound on $\bar u(r) $.
Suppose, more generally, that the following lower bound holds on $\bar u(r) $,
\begin{eqnarray}\label{bar2}
\bar u(r) \ge \frac{\alpha^{p^k}}{\beta^{s_k}} r^{t_k} \ \ \text{for} \ \ r\ge r_k,
\end{eqnarray}
where $s_0:=0$, $t_0:=2$, $\alpha:=- \frac{\bar v(0)}{2n}>0$ and $\beta:=2p+a+n+4>0$. Note that system (\ref{bar}) couples the two functions $\bar u(r) $ and $\bar v(r) $: a lower bound on $\bar u(r) $ forces an upper bound on $\bar v(r) $ and vice versa. In the light of this fact, we can construct an iteration argument to improve the bound (\ref{bar2}). Integrating the second equation of (\ref{bar}) over $[r_k,r]$ for $r\ge r_k$ we get
\begin{eqnarray*}
r^{n-1} \bar v'(r)& \le& r_k^{n-1} \bar v'(r_k)- \frac{\alpha^{p^{k+1}}}{\beta^{p s_k}} \int_{r_k}^{r} s^{n-1+a+pt_k} ds\\&\le&- \frac{\alpha^{p^{k+1}}}{\beta^{p s_k} (pt_k+n+a)} (r^{pt_k+n+a}-r_k^{pt_k+n+a}) \ \ \text{since} \ \ \bar v' <0.
\end{eqnarray*}
Therefore $\bar v'(r) \le - \frac{\alpha^{p^{k+1}}}{\beta^{p s_k} (pt_k+n+a)} (r^{pt_k+a+1}-r_k^{pt_k+a+1})$ for all $r \ge r_k$; that is,
\begin{equation*}
\bar v'(r) \le - \frac{\alpha^{p^{k+1}}}{2\beta^{p s_k} (pt_k+n+a)} r^{pt_k+a+1} \ \ \text{for all} \ \ r \ge 2^{\frac{1}{pt_k+a+1}}r_k.
\end{equation*}
Integrating the last inequality over $[2^{\frac{1}{pt_k+a+1}}r_k,r]$ when $r\ge 2^{\frac{1}{pt_k+a+1}}r_k=\tilde r_k$, we obtain
\begin{equation*}
\bar v(r) \le \bar v(\tilde r_k) - \frac{\alpha^{p^{k+1}}}{2\beta^{p s_k} T_{k,n,a,p}} (r^{pt_k+a+2}-\tilde r_k^{pt_k+a+2} ),
\end{equation*}
where $T_{k,n,a,p}:=(pt_k+n+a)(pt_k+2+a)$. By similar discussions and by taking $r $ large enough, that is $r\ge 2^{\frac{1}{pt_k+a+1}} 2^{\frac{1}{pt_k+a+2}} r_k ={\tilde {\tilde r}}_k$, we end up with
\begin{equation}\label{vr}
\bar v(r) \le - \frac{\alpha^{p^{k+1}}}{4\beta^{p s_k} T_{k,n,a,p}} r^{pt_k+a+2}.
\end{equation}
Applying (\ref{vr}) and integrating the first equation of (\ref{bar}) over $[{\tilde {\tilde r}}_k,r]$ for $r\ge {\tilde {\tilde r}}_k$, we have
\begin{eqnarray*}
r^{n-1} \bar u'(r)&=& {\tilde {\tilde r}}_k^{n-1} \bar u'({\tilde {\tilde r}}_k) - \int_{{\tilde {\tilde r}}_k}^{r} s^{n-1} \bar v(s) ds
\\&\ge& \frac{\alpha^{p^{k+1}}}{4\beta^{p s_k} T_{k,n,a,p}} \int_{{\tilde {\tilde r}}_k}^r s^{pt_k+a+n+1} ds,
\end{eqnarray*}
where in the last step we dropped the first term, which is nonnegative since $\bar v<0$ implies $\bar u'\ge 0$.
Therefore, the following new lower bound on $\bar u(r)$ holds
\begin{eqnarray*}
\bar u(r)&\ge& \frac{\alpha^{p^{k+1}}}{2^4 \beta^{p s_k} \tilde T_{k,n,a,p}} r^{pt_k+a+4},
\end{eqnarray*}
where $$r\ge 2^{\frac{1}{pt_k+a+3}} 2^{\frac{1}{pt_k+a+4}} {\tilde {\tilde r}}_k=2^{ \sum_{i=1}^4 \frac{1}{pt_k+a+i} }r_k , $$
and
\begin{eqnarray*}
\tilde T_{k,n,a,p}&=& (pt_k+n+a+2) (pt_k+4+a) T_{k,n,a,p}\\&=&(pt_k+n+a)(pt_k+2+a) (pt_k+n+a+2) (pt_k+4+a)\\&\le& (pt_k+n+a+4)^4 .
\end{eqnarray*}
We now modify this estimate so that the coefficients match (\ref{bar2}). After simplifying we get
\begin{equation}\label{bar3}
\bar u(r) \ge \frac{\alpha^{p^{k+1}}}{\beta^{p s_k} M_k} r^{pt_k+a+4} \ \ \ \ \text{for} \ \ r\ge 2^{ \frac{4}{pt_k+a+1}} r_k ,
\end{equation}
where $M_k:=2^4 (pt_k+n+a+4)^4$. In what follows, we put an upper bound on $M_k$ that is expressed as a power of $\beta$. Note that
\begin{eqnarray*}
\frac{1}{2}\sqrt[4]{M_{k+1}}&=&pt_{k+1}+n+a+4= p(pt_{k}+a+4)+n+a+4 \\&\le& (pt_{k}+n+a+4)(p+1)=\frac{p+1}{2}\sqrt[4]{M_{k}}.
\end{eqnarray*}
From this we have $M_{k+1} \le (p+1)^4M_k$ and therefore $M_k \le (p+1)^{4k}M_0$ where $M_0=2^4(2p+n+a+4)^4$ because $t_0=2$. Since the constant $\beta$ is defined as $\beta=2p+n+a+4$, we get the following bound
\begin{equation}\label{mk}
M_k \le \beta^{4k+4}.
\end{equation}
From this, (\ref{bar2}) and (\ref{bar3}) and to complete the iteration process, we set
\begin{eqnarray}\label{tk}
t_{k+1}&:=&pt_k+a+4 \ \ \text{for } \ t_0=2,\\ \label{sk}
s_{k+1}&:=& p s_k+ 4k+4 \ \ \text{for} \ s_0=0,
\end{eqnarray}
and therefore,
\begin{eqnarray}\label{bar4}
\bar u(r) \ge \frac{\alpha^{p^{k+1}}}{\beta^{s_{k+1}}} r^{t_{k+1}}
\ \ \text{for} \ \ r\ge r_{k+1},
\end{eqnarray}
where $ r_{k+1}:=2^{ \frac{4}{pt_k+a+1}} r_k \ge 2^{ \sum_{i=1}^4 \frac{1}{pt_k+a+i} }r_k .$
By direct calculations on these recursive sequences we get the explicit sequences
\begin{eqnarray*}
t_k&=& \frac{2p^{k+1} +(a+2) p^k-(a+4)}{p-1}, \\
s_k&=&\frac{4p^{k+1}-4p(k+1)+4k}{(p-1)^2} , \\
r_{k}&=&2^{ \sum_{i=0}^{k-1} \frac{4}{pt_i+a+1}} r_0 \le 2^{ \sum_{i=0}^{\infty} \frac{4}{pt_i+a+1}} r_0 =:r^* <\infty.
\end{eqnarray*}
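As a quick check, the closed form of $t_k$ indeed satisfies (\ref{tk}):
\begin{equation*}
t_{k+1}-p\, t_k = \frac{2p^{k+2}+(a+2)p^{k+1}-(a+4)-2p^{k+2}-(a+2)p^{k+1}+p(a+4)}{p-1} = \frac{(a+4)(p-1)}{p-1} = a+4,
\end{equation*}
and similarly the closed form of $s_k$ satisfies $s_{k+1}-p\, s_k=4k+4$, as required by (\ref{sk}).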
Set $R:=\beta^{\frac{2}{p-1}} M$, where $M=\max\{\alpha^{-1},m\}$ and $m>1$ is chosen large enough to ensure $m \beta^{\frac{2}{p-1}} \ge r^* $. Therefore, $R \ge r^* \ge r_k$ for any $k$ and we have
$$
\bar u(R) \ge M^{t_k-p^k} \beta^{ \frac{2t_k}{p-1} -s_k }.
$$
If we take $k$ large enough, e.g. $k \ge \frac{\ln (a+4)-\ln(a+2)}{\ln p}$, then $t_k>p^k$. The fact that $M>1$, gives us
\begin{equation*}
\bar u(R) \ge \beta^{ \frac{2t_k}{p-1} -s_k } = \beta^{ \frac{ 2(a+2) p^k +4k(p-1) +4p-2(a+4) }{(p-1)^2} } .
\end{equation*}
Since $a\ge 0$ we have $a+2>0$, and $\beta>1$; hence $\bar u(R)\to\infty$ as $k\to\infty$. Note that $0<R<\infty$ is independent from $k$. This finishes the proof.
\hfill $\Box$
By Proposition \ref{prop0} we have $-\Delta u \ge 0$, and therefore we can consider equation (\ref{4henon}) as a special case of the H\'{e}non-Lane-Emden system.
\begin{lemma}\label{1boundLap} ($L^1$-estimates on $B_R$) Suppose that $u$ is a nonnegative solution of (\ref{4henon}); then for any $R>1$ we have
\begin{equation*}
\int_{B_R} |\Delta u| \le C R^{n-\frac{2p+2+a}{p-1}} ,
\end{equation*}
\end{equation*}
where $C=C(n,p,a)>0$ is independent from $R$.
\end{lemma}
\noindent\textbf{Proof:} Set $v=-\Delta u$. From Proposition \ref{prop0} we know that $v\ge0$. Therefore the pair $(u,v)$ satisfies the following system
\begin{eqnarray}
\label{mainbound}
\left\{ \begin{array}{lcl}
\hfill -\Delta u&=& v \ \ \text{in}\ \ \mathbb{R}^n,\\
\hfill -\Delta v&=& |x|^{a}u^p \ \ \text{in}\ \ \mathbb{R}^n,
\end{array}\right.
\end{eqnarray}
that is a particular case of the H\'{e}non-Lane-Emden system. From the estimates provided in \cite{fg} as Lemma 2.1 we get the desired result.
\hfill $\Box$
\begin{lemma}\label{interp}
(An interpolation inequality on $B_R$) Let $R>1$ and $z\in W^{2,1}(B_{2R})$. Then
$$\int_{B_R\setminus B_{R/2}} | D z |\le C R \int_{B_{2R}\setminus B_{R/4}} |\Delta z| + C R^{-1} \int_{B_{2R}\setminus B_{R/4}} |z| ,$$
where $C=C(n)>0$ is independent from $R$.
\end{lemma}
\begin{cor}\label{corgrad} Under the same assumptions as in Lemma \ref{1bound}, the following estimate holds:
$$\int_{B_R\setminus B_{R/2}} | D u |\le C R^{n-\frac{p+3+a}{p-1}} ,$$
where $C=C(n,p,a)>0$ is independent from $R$.
\end{cor}
\begin{lemma} \label{ellip}
($L^\tau$-estimate on $B_R$) Let $1< \tau<\infty$ and $z\in W^{2,\tau}(B_{2R})$. Then,
$$\int_{B_R\setminus B_{R/2}} | D^{2} z |^\tau\le C \int_{B_{2R}\setminus B_{R/4}} |\Delta z|^\tau + C R^{-2\tau} \int_{B_{2R}\setminus B_{R/4}} |z|^\tau ,$$
where $C=C(n,\tau)>0$ does not depend on $R$.
\end{lemma}
\begin{lemma}\label{2bound} ($L^2$-estimates on $B_R$) Suppose that $u$ is a bounded nonnegative solution of (\ref{4henon}) then for any $R>1$ we have
\begin{equation}\label{stepfin}
\int_{B_R} |\Delta u|^2 \le C \int_{B_{2R}} |x|^a u^{p+1} + C R^{-2} \int_{B_{2R}} |\Delta u| + C R^{-4} \int_{B_{2R}\setminus B_{R}} u,
\end{equation}
where $C=C(n,p,a)>0$ does not depend on $R$.
\end{lemma}
\noindent\textbf{Proof:} We proceed in two steps.
\noindent Step 1. Multiply both sides of equation (\ref{4henon}) by $u \phi^2$, where $\phi\in C_c^\infty(\mathbb{R}^n)$ with $0\le\phi\le1$ is a test function. Then, integrating by parts, we get
\begin{eqnarray*}
\int_{\mathbb{R}^n} |\Delta u|^2 \phi^2 &=& \int_{\mathbb{R}^n} |x|^a u^{p+1} \phi^2 - 4 \int_{\mathbb{R}^n} \Delta u \nabla u\cdot \nabla \phi \phi - \int_{\mathbb{R}^n} u \Delta u \left(2 |\nabla\phi|^2+2 \phi\Delta \phi\right)
\\&\le& \int_{\mathbb{R}^n} |x|^a u^{p+1} \phi^2 + \delta \int_{\mathbb{R}^n} |\Delta u|^2 \phi^2 + C(\delta) \int_{\mathbb{R}^n} |\nabla u|^2 |\nabla \phi|^2 \\&&+ C \int_{\mathbb{R}^n} |\Delta u | \left( |\nabla\phi|^2+|\Delta \phi|\right),
\end{eqnarray*}
for some constant $C>0$. Here we have used Cauchy's inequality with $0<\delta <1$. Therefore, if we let $\phi$ be the standard cut-off function, namely $\phi=1$ in $B_R$ and $\phi=0$ in $\mathbb{R}^n\setminus B_{2R}$ with $||D_x^i \phi ||_{L^{\infty}(B_{2R}\setminus B_R)} \le C R^{-i}$ for $i=1,2$, then we get
\begin{equation}\label{step1}
\int_{B_R} |\Delta u|^2 \le \int_{B_{2R}} |x|^a u^{p+1} + C R^{-2} \int_{B_{2R}\setminus B_{R}} |\nabla u|^2 + C R^{-2} \int_{B_{2R}\setminus B_{R}} |\Delta u|,
\end{equation}
where $C=C(n,p,a)>0$ does not depend on $R$.
\noindent Step 2. Multiply both sides of $-\Delta u=v$ by $u \phi^2$, where $\phi$ is the same test function as in Step 1. Again integrating by parts we get
\begin{eqnarray*}
\int_{\mathbb{R}^n} |\nabla u|^2 \phi^2 &=& \int_{\mathbb{R}^n} uv \phi^2 - 2 \int_{\mathbb{R}^n} u\nabla u\cdot \nabla \phi \phi
\\&\le& \int_{\mathbb{R}^n} uv \phi^2 + \delta \int_{\mathbb{R}^n} |\nabla u|^2 \phi^2 + C(\delta) \int_{\mathbb{R}^n} |\nabla \phi|^2 u^2 ,
\end{eqnarray*}
where we have also used Cauchy's inequality with $0<\delta <1$. So,
\begin{equation}\label{step2}
\int_{B_R} |\nabla u|^2 \le C \int_{B_{2R}} |\Delta u| + C R^{-2} \int_{B_{2R}\setminus B_{R}} u,
\end{equation}
where we have used the boundedness of $u$. From (\ref{step1}) and (\ref{step2}) we get
\begin{equation*}
\int_{B_R} |\Delta u|^2 \le \int_{B_{2R}} |x|^a u^{p+1} + C R^{-2} \int_{B_{2R}} |\Delta u| + C R^{-4} \int_{B_{2R}\setminus B_{R}} u.
\end{equation*}
This completes the proof.
\hfill $\Box$
We now apply Lemma \ref{1bound}, Lemma \ref{2bound} and Corollary \ref{u} to get the following.
\begin{cor}\label{Delta2} Suppose that the assumptions of Lemma \ref{1bound} hold. Moreover, let $u$ be bounded then
\begin{equation}\label{Delta2R}
\int_{B_R} |\Delta u|^2 \le C R^{n-\frac{4p+a}{p-1}},
\end{equation}
where $C=C(n,p,a)>0$ is independent from $R$.
\end{cor}
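For the record, all three terms on the right-hand side of (\ref{stepfin}) decay at the same rate: by Lemma \ref{1bound} and the boundedness of $u$, $\int_{B_{2R}} |x|^a u^{p+1} \le C R^{n-\frac{4p+a}{p-1}}$; by Lemma \ref{1boundLap}, $R^{-2} \int_{B_{2R}} |\Delta u| \le C R^{n-2-\frac{2p+2+a}{p-1}} = C R^{n-\frac{4p+a}{p-1}}$; and by Corollary \ref{u}, $R^{-4} \int_{B_{2R}\setminus B_{R}} u \le C R^{n-4-\frac{a+4}{p-1}} = C R^{n-\frac{4p+a}{p-1}}$. This is how (\ref{Delta2R}) follows.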
\begin{lemma} (Sobolev inequalities on the sphere $S^{n-1}$) \label{sobolev}
Let $n\ge2$, integer $i\ge 1$ and $1<t<\tau\le\infty$. For $z\in W^{i,t}(S^{n-1})$, the following estimate holds
$$||z||_{L^\tau(S^{n-1})}\le C || D_\theta^i z||_{L^t(S^{n-1})} + C || z ||_{L^1(S^{n-1})} ,$$ where
$$\left\{
\begin{array}{ll}
\frac{1}{\tau}= \frac{1}{t}-\frac{i}{n-1}, & \hbox{if $it+1<n$,} \\
\tau=\infty, & \hbox{if $it+1>n$,}
\end{array}
\right.$$
and $C=C(i,t,n,\tau)>0$.
\end{lemma}
\section{Developing the iteration argument}\label{secIter}
In this section, we develop a counterpart of the Moser iteration argument \cite{mos} for solutions of (\ref{4henon}). We define a sequence of functions $(w_k)_{k\ge-1}$ of the form $$ w_k:=\Delta u+\alpha_k |\nabla u|^2 (u+\epsilon)^{-1} + \beta_k |x|^{\frac{a}{2}} u^{\frac{p+1}{2}}
$$ where $(\alpha_k)$ and $(\beta_k)$ are certain nondecreasing sequences of nonnegative numbers with $\alpha_{-1}=\beta_{-1}=0$.
Assuming that $w_k\le 0$ holds, which is essentially a lower bound on the negative Laplacian, we construct a differential inequality for $w_{k+1}$, where $\alpha_{k+1}\ge \alpha_k$ and $\beta_{k+1}\ge \beta_k$. Then, applying certain maximum principle type arguments, we show that $w_{k+1}\le 0$. Note that $w_{k+1}\le 0$ is stronger than $w_k\le 0$, because it forces a stronger lower bound on the negative Laplacian.
We start by proving that $w_{-1}$, which is the Laplacian of $u$, is nonpositive; see Proposition \ref{prop0}. Then, using this fact and applying (\ref{systemhenon}) and (\ref{pointsystemhenon}) with $q=1$ and $b=0$, we get the following inequality for nonnegative solutions of the fourth order H\'{e}non equation (\ref{4henon})
\begin{equation}\label{point4henon}
-\Delta u \ge \sqrt\frac{2}{p+1} |x|^{\frac{a}{2}} u^{\frac{p+1}{2}} \ \ \text{in}\ \ \mathbb{R}^n,
\end{equation}
where $0\le a\le (n-2)(p-1)$. Inequality (\ref{point4henon}) is the first step of the iteration argument meaning that $w_0 \le 0$ for $\alpha_0=0$ and $\beta_0=\sqrt\frac{2}{p+1}$.
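Explicitly, with $q=1$ and $b=0$, inequality (\ref{pointsystemhenon}) applied to the pair $(u,v)$ with $v=-\Delta u$ reads
\begin{equation*}
|x|^a \frac{u^{p+1}}{p+1} \le \frac{v^2}{2},
\end{equation*}
which, after taking square roots, is exactly (\ref{point4henon}), and the condition $0\le a-b\le (n-2)(p-q)$ becomes $0\le a\le (n-2)(p-1)$.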
We now perform the iteration argument.
\begin{prop}\label{propwk}
Let $u$ be a positive classical solution of (\ref{4henon}). Suppose that $(\alpha_k)_{k\ge0}$ and $(\beta_k)_{k\ge0}$ are sequences of numbers. Define the following sequence of functions
\begin{equation}\label{wk}
w_k:=\Delta u+\alpha_k |\nabla u|^2 (u+\epsilon)^{-1} + \beta_k |x|^{\frac{a}{2}} u^{\frac{p+1}{2}},
\end{equation}
where $\epsilon=\epsilon(k)$ is a positive constant. Suppose that $w_k\le 0$; then $w_{k+1}$ satisfies the following differential inequality
\begin{eqnarray}\label{wk+1}
&&\Delta w_{k+1}-2 \alpha_{k+1} (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1}\\ &&\nonumber +\alpha_{k+1} w_{k+1} (u+\epsilon)^{-2} |\nabla u|^2 - \frac{\beta_{k+1}(p+1)}{2} u^{\frac{p-1}{2}} |x|^{\frac{a}{2}} w_{k+1}\\ &&\nonumber \ge
I^{(1)}_{\epsilon,\alpha_{k},\beta_{k} } |x|^a u^p + \alpha_{k+1} I^{(2)}_{\alpha_{k}} |\nabla u|^4 (u+\epsilon)^{-3} + I^{(4)}_{a,\epsilon,\alpha_{k},\beta_{k}} |x|^{a-2} u^{\frac{p+1}{2}}
\\&&\nonumber+ I^{(3)}_{\epsilon,\alpha_{k},\beta_{k}} |x|^a u^{\frac{p+1}{2}} \left| \frac{\nabla u}{u} +\frac{ a\beta_{k+1} \left(\frac{p+1}{2}- \alpha_{k+1} \frac{u}{u+\epsilon}\right) }{ 2 I^{(3)}_{\epsilon,\alpha_{k},\beta_{k}} }\frac{x}{|x|^2} \right|^2,
\end{eqnarray}
where
\begin{eqnarray*}
I^{(1)}_{\epsilon, \alpha_k,\beta_{k}}&: =& 1- \frac{p+1}{2} \beta_{k+1}^2 +\frac{2}{n} \alpha_{k+1} \beta_k^2 \frac{u}{u+\epsilon},
\\
I^{(2)}_{\alpha_{k}} &:= &\frac{2}{n} (\alpha_{k+1} +\alpha_k+1)^2 -2\alpha_{k+1} (\alpha_{k+1}+1)+\alpha_{k+1},
\\
I^{(3)}_{\epsilon,\alpha_{k},\beta_{k}}&: = & \frac{4}{n} \alpha_{k+1} \beta_k (\alpha_{k+1} +\alpha_k+1) \frac{u^2}{(u+\epsilon)^2} +\beta_{k+1} \alpha_{k+1} \frac{u^2}{(u+\epsilon)^2}
\\&& -(p+1) \beta_{k+1} \alpha_{k+1} \frac{u}{(u+\epsilon)} +\frac{p+1}{2}\left(\frac{p-1}{2}-\alpha_{k+1} \frac{u}{(u+\epsilon)} \right ) \beta_{k+1},
\\
I^{(4)}_{a,\epsilon, \alpha_{k},\beta_{k}}&: = & \frac{a}{2}\beta_{k+1} (n+\frac{a}{2} -2) - \frac{a^2 \beta^2_{k+1} \left(\frac{p+1}{2}- \alpha_{k+1} \frac{u}{u+\epsilon}\right) ^2 }{ 4 I^{(3)}_{\epsilon,\alpha_{k},\beta_{k}} } .
\end{eqnarray*}
\end{prop}
\noindent\textbf{Proof:} For the sake of simplicity in calculations, set $b:=\frac{a}{2}$ and $q:=\frac{p+1}{2}$. From (\ref{wk}) the function $w_{k+1}$ is defined as $$ w_{k+1}:=\Delta u+\alpha_{k+1} |\nabla u|^2 (u+\epsilon)^{-1} + \beta_{k+1} |x|^{b} u^q.$$
Taking Laplacian of $w_{k+1}$ and using equation (\ref{4henon}) we get
\begin{eqnarray}\label{wk++1}
\Delta w_{k+1}&=& \Delta^2 u+ \alpha_{k+1} \Delta (|\nabla u|^2 (u+\epsilon)^{-1} ) +\beta_{k+1} \Delta (|x|^b u^q)
\\ \nonumber &=&|x|^a u^p+I+J,
\end{eqnarray}
where $I:= \alpha_{k+1} \Delta (|\nabla u|^2(u+\epsilon)^{-1}) $ and $J:=\beta_{k+1} \Delta (|x|^b u^q)$. In what follows, we simplify $I$ and $J$ as well as finding lower bounds for these terms. We start with $J$ that is
\begin{eqnarray*}
\frac{J}{\beta_{k+1}}&=& \Delta (|x|^b u^q)= \Delta |x|^b u^q +\Delta u^q |x|^b +2\nabla |x|^b\cdot \nabla u^q \\ &=& b (n+b-2) |x|^{b-2} u^q + q(q-1) |x|^b u^{q-2} |\nabla u|^2\\& &+q |x|^b u^{q-1} \Delta u +2b q |x|^{b-2} u^{q-1} \nabla u\cdot x.
\end{eqnarray*}
From the definition of $w_{k+1}$, we have
\begin{equation}\label{uwk+1}
\Delta u=w_{k+1}-\alpha_{k+1} |\nabla u|^2 (u+\epsilon)^{-1} -\beta_{k+1} |x|^{b} u^q.
\end{equation}
Substitute this into the last equation to simplify $J$ as
\begin{eqnarray}\label{J}
\frac{J}{\beta_{k+1}}&=& q u^{q-1} |x|^b w_{k+1}-q\beta_{k+1} u^{2q-1} |x|^{2b} \\&& \nonumber +
\left(q(q-1)-q\alpha_{k+1}\frac{u}{u+\epsilon}\right) |x|^b u^{q-2} |\nabla u|^2
\\&& \nonumber +b (n+b-2) |x|^{b-2} u^q +2b q |x|^{b-2} u^{q-1} \nabla u\cdot x.
\end{eqnarray}
We now simplify $I$ as what follows,
\begin{eqnarray*}
\frac{I}{\alpha_{k+1}}&=& \Delta (|\nabla u|^2 (u+\epsilon)^{-1} )= \sum_{i,j} \partial_{jj}(u^2_i (u+\epsilon)^{-1} )\\&=& 2 (u+\epsilon)^{-1} \sum_{i,j} (\partial_{ij}u)^2 +2 (u+\epsilon)^{-1} \nabla u\cdot \nabla \Delta u -4 (u+\epsilon)^{-2} \sum_{i,j} \partial_i u \partial_j u \partial_{ij} u \\&&- |\nabla u|^2 (u+\epsilon)^{-2} \Delta u + 2 |\nabla u|^4 (u+\epsilon)^{-3} .
\end{eqnarray*}
Again substituting (\ref{uwk+1}) into the term $2(u+\epsilon)^{-1} \nabla u\cdot \nabla \Delta u$ appeared above, we get
\begin{eqnarray*}
\frac{I}{\alpha_{k+1}}&=&2 (u+\epsilon)^{-1} \sum_{i,j} (\partial_{ij}u)^2 -4 (u+\epsilon)^{-2} \sum_{i,j} \partial_i u \partial_j u \partial_{ij} u \\&&+ 2 |\nabla u|^4 (u+\epsilon)^{-3} - |\nabla u|^2 (u+\epsilon)^{-2} \Delta u \\&&+2 (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1}-2\alpha_{k+1} (u+\epsilon)^{-1} \nabla u\cdot\nabla\left(|\nabla u|^2(u+\epsilon)^{-1} \right) \\&& -2 \beta_{k+1} (u+\epsilon)^{-1} \nabla u\cdot \nabla\left( |x|^b u^q \right).
\end{eqnarray*}
Then, collecting the similar terms we obtain
\begin{eqnarray*}
&& \frac{I}{\alpha_{k+1}} -2 (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1}
= 2(u+\epsilon)^{-1} \sum_{i,j} (\partial_{ij}u)^2
\\&& -4(\alpha_{k+1}+1) (u+\epsilon)^{-2} \sum_{i,j} \partial_i u \partial_j u \partial_{ij} u
\\&&+ 2(\alpha_{k+1}+1) |\nabla u|^4 (u+\epsilon)^{-3}- |\nabla u|^2 (u+\epsilon)^{-2} \Delta u\\&& -2 \beta_{k+1}b |x|^{b-2} (u+\epsilon)^{-1} u^{q} \nabla u\cdot x
\\&&-2\beta_{k+1} q |x|^{b} u^{q-1} (u+\epsilon)^{-1} |\nabla u|^2.
\end{eqnarray*}
Completing the square we get
\begin{eqnarray}\label{Isquare}
&& \frac{I}{\alpha_{k+1}}-2 (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1}
\\&&\nonumber= 2(u+\epsilon)^{-1} \sum_{i,j} \left( \partial_{ij}u -(\alpha_{k+1}+1) (u+\epsilon)^{-1} \partial_i u \partial_j u \right)^2
\\&& \nonumber -2 \alpha_{k+1} (\alpha_{k+1}+1) |\nabla u|^4 (u+\epsilon)^{-3} - |\nabla u|^2 (u+\epsilon)^{-2} \Delta u
\\&& \nonumber -2 \beta_{k+1}b |x|^{b-2} (u+\epsilon)^{-1} u^{q} \nabla u\cdot x -2\beta_{k+1} q |x|^{b} u^{q-1} (u+\epsilon)^{-1} |\nabla u|^2.
\end{eqnarray}
Note that for any $n\times n$ matrix $A=(a_{i,j})$ the Hilbert-Schmidt norm is defined by $|| A||_2=\sqrt {\sum_{i,j} | a_{i,j} |^2} =\sqrt {\text{trace} (A A^*)}$, where $A^*$ denotes the conjugate transpose of $A$. From the Cauchy-Schwarz inequality, the following inequality holds,
\begin{equation}\label{matrix}
|\text{trace} \ A|^2=|(A,I)|^2 \le ||A||_2^2 ||I||_2^2=n \sum_{i,j} | a_{i,j} |^2 .
\end{equation}
Set $a_{i,j}:= \partial_{ij}u -(\alpha_{k+1}+1) (u+\epsilon)^{-1} \partial_i u \partial_j u $ in (\ref{matrix}) to get
$$\sum_{i,j=1}^{n} \left( \partial_{ij}u -(\alpha_{k+1}+1) (u+\epsilon)^{-1} \partial_i u \partial_j u \right)^2 \ge \frac{1}{n} \left( \Delta u -(\alpha_{k+1}+1) (u+\epsilon)^{-1} |\nabla u|^2 \right)^2 .
$$
From this lower bound for the Hessian and (\ref{Isquare}), we get
\begin{eqnarray}\label{Ialpha}
&&\frac{I}{\alpha_{k+1}} -2 (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1} \\ & &\ge \nonumber \frac{2}{n}(u+\epsilon)^{-1} \left( \Delta u -(\alpha_{k+1}+1)(u+\epsilon)^{-1} |\nabla u|^2 \right)^2 \\ && \nonumber -2 \alpha_{k+1} (\alpha_{k+1}+1) |\nabla u|^4 (u+\epsilon)^{-3} - |\nabla u|^2 (u+\epsilon)^{-2} \Delta u +T_{k},
\end{eqnarray}
where $$T_k:=-2 \beta_{k+1}b |x|^{b-2} (u+\epsilon)^{-1} u^{q} \nabla u\cdot x -2\beta_{k+1} q |x|^{b} u^{q-1} (u+\epsilon)^{-1} |\nabla u|^2.$$
Note also that from the assumption $w_k \le 0$ we have this upper bound on the Laplacian operator, $\Delta u\le -\alpha_k |\nabla u|^2 (u+\epsilon)^{-1} - \beta_k |x|^{b} u^q$. Elementary calculations show that if $t\le t_*\le 0$ and $s\ge 0$ then $(t-s)^2\ge t_*^2-2t_*s+s^2$. Set the parameters as $t=\Delta u$, $t_*=-\alpha_k |\nabla u|^2 (u+\epsilon)^{-1} - \beta_k |x|^{b} u^q$ and $s=(\alpha_{k+1}+1) (u+\epsilon)^{-1} |\nabla u|^2 $ to get the following lower bound on the square term that appears in (\ref{Ialpha}),
\begin{eqnarray}\label{square}
&& \left( \Delta u -(\alpha_{k+1}+1)(u+\epsilon)^{-1} |\nabla u|^2 \right)^2 \ge \left(\alpha_k |\nabla u|^2 (u+\epsilon)^{-1}+ \beta_k |x|^{b} u^q\right)^2 \\&& \nonumber+2\left(\alpha_k |\nabla u|^2 (u+\epsilon)^{-1} +\beta_k |x|^{b} u^q\right) (\alpha_{k+1}+1) (u+\epsilon)^{-1} |\nabla u|^2 \\&&\nonumber +(\alpha_{k+1}+1)^2 (u+\epsilon)^{-2} |\nabla u|^4.
\end{eqnarray}
Substitute (\ref{uwk+1}) into the term $ - |\nabla u|^2 (u+\epsilon)^{-2} \Delta u$ that appears in \eqref{Ialpha} to eliminate the Laplacian operator. Then, apply inequality (\ref{square}) to simplify \eqref{Ialpha} as
\begin{eqnarray*}
&&\frac{I}{\alpha_{k+1}}-2 (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1} \ge \frac{2}{n}(u+\epsilon)^{-1} \{ (\alpha_{k+1}+\alpha_k+1)^2 |\nabla u|^4 (u+\epsilon)^{-2} \\&&+\beta_k ^2 |x|^{2b} u^{2q} + 2\beta_k(\alpha_{k+1}+\alpha_k+1)|x|^b u^{q} (u+\epsilon)^{-1} |\nabla u|^2 \} - w_{k+1} (u+\epsilon)^{-2} |\nabla u|^2 \\&& -\alpha_{k+1} (2\alpha_{k+1}+1) |\nabla u|^4 (u+\epsilon)^{-3}
+ \beta_{k+1} |x|^b u^q (u+\epsilon)^{-2} |\nabla u|^2 + T_{k}.
\end{eqnarray*}
Collecting similar terms and using the value of $ T_{k}$, we end up with
\begin{eqnarray*}
&&\frac{I}{\alpha_{k+1}}-2 (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1} +w_{k+1} (u+\epsilon)^{-2} |\nabla u|^2 \\&& \ge \frac{2}{n} \beta_k^2|x|^{2b} u^{2q} (u+\epsilon)^{-1}+
I^{(2)}_{\alpha_k} |\nabla u|^4 (u+\epsilon)^{-3} + S_{\epsilon,\alpha_k,\beta_k} |\nabla u|^2 u^{q-2} |x|^b \\&&-2 \beta_{k+1}b |x|^{b-2} (u+\epsilon)^{-1} u^{q} \nabla u\cdot x,
\end{eqnarray*}
where
\begin{eqnarray*}
I^{(2)}_{\alpha_k}&:=& \frac{2}{n}(\alpha_{k+1}+\alpha_k+1)^2 -2\alpha_{k+1}(\alpha_{k+1}+1) +\alpha_{k+1},\\
S_{\epsilon,\alpha_k,\beta_k} &:=& \frac{4}{n} \beta_k (\alpha_{k+1}+\alpha_k+1) \frac{u^2}{(u+\epsilon)^2}+ \beta_{k+1} \frac{u^2}{(u+\epsilon)^2} -2\beta_{k+1}q \frac{u}{u+\epsilon}.
\end{eqnarray*}
Therefore, the following lower bound for $I$ holds,
\begin{eqnarray}\label{lowerI}
I &\ge& 2 \alpha_{k+1} (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1}
\\&& \nonumber - \alpha_{k+1} w_{k+1} (u+\epsilon)^{-2} |\nabla u|^2
\\&& \nonumber + \frac{2}{n} \alpha_{k+1} \beta_k^2|x|^{2b} u^{2q} (u+\epsilon)^{-1} + I^{(2)}_{\alpha_k} |\nabla u|^4 (u+\epsilon)^{-3}
\\&& \nonumber + S_{\epsilon,\alpha_k,\beta_k} |\nabla u|^2 u^{q-2} |x|^b -2 \beta_{k+1}b |x|^{b-2} (u+\epsilon)^{-1} u^{q} \nabla u\cdot x.
\end{eqnarray}
Finally, applying this lower bound for $I$ together with the expression for $J$ in \eqref{J}, from (\ref{wk++1}) we get
\begin{eqnarray*}
&&\Delta w_{k+1}-2 \alpha_{k+1} (u+\epsilon)^{-1} \nabla u\cdot\nabla w_{k+1}+ \alpha_{k+1} (u+\epsilon)^{-2} |\nabla u|^2 w_{k+1} -\beta_{k+1} q u^{q-1} |x|^b w_{k+1}\\
&& \ge |x|^a u^p \left( 1- q \beta_{k+1}^2 + \frac{2}{n} \alpha_{k+1} \beta_k^2 \frac{u}{u+\epsilon}\right) + \alpha_{k+1} I^{(2)}_{\alpha_k} |\nabla u|^4 (u+\epsilon)^{-3} \\&&+ \left( \alpha_{k+1} S_{\epsilon,\alpha_k,\beta_k} + \left(q(q-1)-\alpha_{k+1} q \frac{u}{u+\epsilon} \right)\beta_{k+1}\right)|\nabla u|^2 u^{q-2} |x|^b
\\&& +2 b\beta_{k+1} \left(q- \alpha_{k+1} \frac{u}{u+\epsilon}\right) |x|^{b-2} u^{q-1} \nabla u\cdot x + b\beta_{k+1} (n+b-2) |x|^{b-2} u^q.
\end{eqnarray*}
Completing the square finishes the proof.
\hfill $\Box$
\section{Proof of Theorem \ref{mainres} via Iteration Arguments}\label{secapp}
To apply the iteration argument, we need to develop a maximum principle argument for the following equation
\begin{equation}\label{win}
\Delta w-2 \alpha (u+\epsilon)^{-1} \nabla u\cdot\nabla w +\alpha w (u+\epsilon)^{-2} |\nabla u|^2 - \frac{\beta(p+1)}{2} |x|^{\frac{a}{2}} u^{\frac{p-1}{2}} w = f(x)\ge 0 \ \ \ \text{in } \mathbb{R}^n
\end{equation}
that appears in Proposition \ref{propwk}, where $\alpha,\beta$ are positive constants, $u$ is a solution of (\ref{4henon}) and $w,f\in C^{\infty}(\mathbb R^n)$.
\begin{lemma}\label{boundwt} Suppose that $w$ is a solution of the differential inequality (\ref{win}) where $u$ is a solution of (\ref{4henon}) and
\begin{equation}\label{w}
w=\Delta u+\alpha (u+\epsilon)^{-1} |\nabla u|^2 + \beta |x|^{\frac{a}{2}} u^{\frac{p+1}{2}}
\end{equation}
for positive constants $\epsilon$, $\alpha$ and $\beta$. Then, assuming that $p+1>2\alpha$ the following holds
\begin{equation}\label{w+}
\Delta \tilde w \ge 0 \ \ \ \text{on } \{w\ge 0\} \subset \mathbb{R}^n
\end{equation}
where $ \tilde w= (u+\epsilon)^{t} w$ for $t=-\alpha$.
\end{lemma}
\noindent\textbf{Proof:} Straightforward calculations show that
\begin{eqnarray*}
\Delta \tilde w &=& (u+\epsilon)^{t} \Delta w+ 2t (u+\epsilon)^{t-1} \nabla u\cdot \nabla w
\\&&+ t (u+\epsilon)^{t-1} w\Delta u+ t(t-1) w (u+\epsilon)^{t-2} |\nabla u|^2
\end{eqnarray*}
We now add and subtract the two terms $\frac{\beta(p+1)}{2} |x|^{\frac{a}{2}} u^{\frac{p-1}{2}} (u+\epsilon)^{t} w$ and $t w (u+\epsilon)^{t-2} |\nabla u|^2$ in the above identity and collect similar terms to get
\begin{eqnarray*}
\Delta \tilde w &=& (u+\epsilon)^{t} \left( \Delta w+ 2t (u+\epsilon)^{-1} \nabla u\cdot \nabla w - t w (u+\epsilon)^{-2} |\nabla u|^2 - \frac{\beta(p+1)}{2} |x|^{\frac{a}{2}} u^{\frac{p-1}{2}} w \right) \\&&+
\frac{\beta(p+1)}{2} |x|^{\frac{a}{2}} u^{\frac{p-1}{2}} (u+\epsilon)^{t} w + t w (u+\epsilon)^{t-2} |\nabla u|^2 + t (u+\epsilon)^{t-1} w \Delta u
\\&& + t (t-1) w (u+\epsilon)^{t-2} |\nabla u|^2.
\end{eqnarray*}
From the fact that $t=-\alpha$ and $w$ satisfies (\ref{win}) we get
\begin{eqnarray*}
\Delta \tilde w \ge \frac{\beta(p+1)}{2} |x|^{\frac{a}{2}} u^{\frac{p-1}{2}} (u+\epsilon)^{t} w + t (u+\epsilon)^{t-1} w \Delta u + t^2 w (u+\epsilon)^{t-1} \frac{|\nabla u|^2}{u+\epsilon}
\end{eqnarray*}
Note that we can eliminate the gradient term using (\ref{w}), that is, $\alpha (u+\epsilon)^{-1} |\nabla u|^2=w-\Delta u- \beta |x|^{\frac{a}{2}} u^{\frac{p+1}{2}}$. Therefore, after collecting similar terms we get
\begin{eqnarray*}
\Delta \tilde w & \ge & \frac{t^2}{\alpha} w^2 (u+\epsilon)^{t-1} + (u+\epsilon)^{t-1} w t \left(1-\frac{t}{\alpha} \right) \Delta u
\\&& + \beta (u+\epsilon)^{t-1} |x|^{\frac{a}{2}} u^{\frac{p-1}{2}} w \left( \frac{(p+1)\epsilon}{2}+ u\left(\frac{p+1}{2} -\frac{t^2}{\alpha}\right) \right)
\\&=:& R_1+R_2+R_3.
\end{eqnarray*}
We claim that the above three terms $R_1,R_2,R_3$ are nonnegative when $w \ge 0$. From the fact that $\alpha>0$, one can see that $R_1$ is nonnegative. From the definition $t=-\alpha<0$ we have $t(1-\frac{t}{\alpha})=-2\alpha <0$. This, together with Proposition \ref{prop0}, that is, $\Delta u \le 0$, confirms that $R_2$ is nonnegative. Nonnegativity of $R_3$ is an immediate consequence of the assumptions: $\beta$ is positive, and $\frac{p+1}{2}-\frac{t^2}{\alpha}=\frac{p+1}{2}-\alpha$ is also positive by the assumption $p+1>2\alpha$. This finishes the proof.
\hfill $\Box$
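For the record, the hypothesis $p+1>2\alpha$ is harmless in our setting: the iteration uses $\alpha\le\frac{2}{n-4}$ by Lemma \ref{alphak}, and since we assume $p>\frac{n+4+2a}{n-4}$ with $a\ge 0$ and $n\ge 5$, we have
\begin{equation*}
p+1 > \frac{n+4+2a}{n-4}+1 \ge \frac{2n}{n-4} > \frac{4}{n-4} \ge 2\alpha.
\end{equation*}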
We now apply Lemma \ref{boundwt} to show that any solution $w$ of (\ref{win}) of the form (\ref{w}) is nonpositive.
\begin{lemma}\label{boundw} Suppose that $\tilde w$ and $w$ are the same as Lemma \ref{boundwt}. Let $u$ be a bounded solution of (\ref{4henon}) then $ w \le 0$.
\end{lemma}
\noindent\textbf{Proof:} The methods and ideas applied in this proof are motivated by those provided by Souplet in \cite{so}. Multiply (\ref{w+}) by $\tilde w^s_+$, where $s> 0$ is a parameter to be determined later. Then, integration by parts over $B_R$ gives
\begin{equation}\label{eqws}
0 \le \int_{B_R} \Delta \tilde w \tilde w^s_+= - s \int_{B_R} |\nabla \tilde w_+|^2 \tilde w^{s-1}_+ + R^{n-1} \int_{S^{n-1}} \tilde w_r \tilde w^s_+.
\end{equation}
Therefore,
\begin{equation}\label{eqws2}
\int_{B_R} |\nabla \tilde w_+|^2 \tilde w^{s-1}_+ \le \frac{1}{s(s+1)} R^{n-1} \int_{S^{n-1}} (\tilde w^{s+1}_+)_r = C(s) R^{n-1} I'(R),
\end{equation}
where $$I(R):= \int_{S^{n-1}} \tilde w^{s+1}_+= \int_{S^{n-1}} (u+\epsilon)^{-(s+1)\alpha} w^{s+1}_+
$$
and $C(s)$ is a constant independent from $R$. Note that $w$ given as $w=\Delta u+\alpha |\nabla u|^2 (u+\epsilon)^{-1} +\beta |x|^{\frac{a}{2}} u^{\frac{p+1}{2}}$ satisfies $w\ge 0$ if and only if $-\Delta u \le \alpha |\nabla u|^2 (u+\epsilon)^{-1} + \beta |x|^{\frac{a}{2}} u^{\frac{p+1}{2}}$. Therefore,
\begin{equation}\label{eq1}
w^{s+1}_+ \le C |\nabla u|^{2(s+1)} (u+\epsilon)^{-(s+1)} +C |x|^{(s+1)a/2} u^{ (s+1)(p+1)/2}
\end{equation}
where $C=C(\alpha,\beta,s)$. Applying this upper bound for $w_+$, we get an upper bound for $I(R)$ as follows.
\begin{eqnarray}\label{I(R)}
I(R) &\le& C \int_{S^{n-1}} (u+\epsilon)^{-(s+1)(\alpha+1)} |\nabla u|^{2(s+1)}
\\&& \nonumber + C R^{\frac{s+1}{2}a} \int_{S^{n-1}} (u+\epsilon)^{-\alpha(s+1)} u ^{(s+1)(p+1)/2} \\&\le & \nonumber C(\epsilon) \int_{S^{n-1}} |\nabla u|^{2(s+1)} + C(\epsilon) R^{\frac{s+1}{2}a} \int_{S^{n-1}} u ^{\frac{s+1}{2}(p+1)}
\\&=: & \nonumber C(\epsilon) (I_1(R)+I_2(R)).
\end{eqnarray}
In what follows we show that, for fixed $\epsilon$, both terms $I_1(R)$ and $I_2(R)$ decay to zero along a sequence of radii. We start with $I_2(R)$, which involves an integral of a positive power of $u$ over the sphere. Due to the boundedness assumption on $u$, it is straightforward to relate this term to $L^p$ estimates of $u$ over the sphere. As a matter of fact, if $(s+1)(p+1)>2p$, then from the boundedness of $u $ we have
\begin{equation}\label{firsts}
\int_{S^{n-1}} u ^{\frac{s+1}{2}(p+1)} \le C(n) ||u ||_{L^p(S^{n-1})}^p
\end{equation}
and for the case $(s+1)(p+1) \le 2p$ we can apply H\"{o}lder's inequality to get
\begin{equation}\label{firstb}
\int_{S^{n-1}} u ^{\frac{s+1}{2}(p+1)} \le C(n,p) ||u ||_{L^p(S^{n-1})}^ {\frac{(p+1)(s+1)}{2}}. \end{equation}
So, to prove a decay estimate for $I_2(R)$ we need to construct a decay estimate for $||u ||_{L^p(S^{n-1})}$. On the other hand, we apply Lemma \ref{sobolev} to get an upper bound for the first term in (\ref{I(R)}), that is, $I_1(R)$. In fact, from Lemma \ref{sobolev} with $i=1$, $\tau=2(s+1)$ and $t=2$ we have
\begin{eqnarray}\label{du2s}
||D_x u||_{L^{2(s+1)}(S^{n-1})} &\le& C || D_\theta D_x u ||_{L^2(S^{n-1})} + C || D_x u ||_{L^1(S^{n-1})} \\&\le & \nonumber C R || D^2_x u ||_{L^2(S^{n-1})} + C || D_x u ||_{L^1(S^{n-1})}
\end{eqnarray}
for $s=\frac{2}{n-3}$. In order to get a decay estimate for $I_1(R)$, we need decay estimates for the two terms on the right-hand side of (\ref{du2s}), namely $|| D^2_x u ||_{L^2(S^{n-1})}$ and $|| D_x u ||_{L^1(S^{n-1})}$.
We now apply the elliptic estimates given in Section \ref{secEst} to provide decay estimates for $||u||_{{L^p(S^{n-1})}}$, $||D_x u||_{{L^1(S^{n-1})}}$ and $ ||D^2_x u||_{L^2(S^{n-1})}$. To do so, we first find appropriate upper bounds for these terms on the ball of radius $R$; then we use certain measure comparison arguments to construct decay estimates over the sphere. From Lemma \ref{ellip} with $\tau=2$, we get
\begin{equation}\label{d2u}
\int_{R/2}^{R} || D_{x}^{2} u ||_{ L^{2}(S^{n-1}) }^2 r^{n-1} dr \le C \int_{B_{2R}\setminus B_{R/4}} |\Delta u|^2 + C R^{-4} \int_{B_{2R}\setminus B_{R/4}} u^2 .
\end{equation}
We now apply Corollary \ref{Delta2} and Corollary \ref{u} to get a decay estimate for the right-hand side of (\ref{d2u}) that is
\begin{eqnarray*}
R^{-4}\int_{B_{2R}\setminus B_{R/4}} u^2&\le& C R^{-4}\int_{B_{2R}\setminus B_{R/4}} u \le C R^{-4} R^{n -\frac{a+4}{p-1}} = C R^{n-\frac{a+4p}{p-1}},\\
\int_{B_{2R}\setminus B_{R/4}} |\Delta u|^2 & \le& C R^{n-\frac{a+4p}{p-1}},
\end{eqnarray*}
where $C$ is independent from $R$. From this and (\ref{d2u}) we obtain the following desired decay estimate on the Hessian operator of $u$
\begin{equation}\label{d2ufinfin}
\int_{R/2}^{R} || D_{x}^{2} u ||_{ L^{2}(S^{n-1}) }^2 r^{n-1} dr \le C R^{n-\frac{4p+a}{p-1}}.
\end{equation}
Similarly, from Corollary \ref{corgrad} and Lemma \ref{1bound} we have
\begin{eqnarray}\label{d2ufinfin1}
\int_{R/2}^{R} || D_{x}u ||_{ L^{1}(S^{n-1}) } r^{n-1} dr &\le & C R^{n-\frac{p+3+a}{p-1} },
\\ \label{d2ufinfin2}
\int_{R/2}^{R} || u ||^p_{ L^{p}(S^{n-1}) } r^{n-1} dr &\le & C R^{n-\frac{a+4}{p-1}p }.
\end{eqnarray}
We now define the following sets, which facilitate our arguments towards the construction of decay estimates for $||u||_{{L^p(S^{n-1})}}$, $||D_x u||_{{L^1(S^{n-1})}}$ and $ ||D^2_x u||_{L^2(S^{n-1})}$. For a large number $M$, to be determined later, define
\begin{eqnarray*}
\label{mu}\Gamma_{1}(R) &:=&\left\{r\ \in(R/2,R); \ ||u||^p_{{L^p(S^{n-1})}}> M R^{-\frac{a+4}{p-1}p}\right\},\\
\label{mdu}\Gamma_{2}(R) &:=&\left\{r\ \in(R/2,R); \ ||D_x u||_{{L^1(S^{n-1})}} > M R^{-\frac{p+3+a}{p-1} } \right\},\\
\label{md4u}\Gamma_{3}(R) &:=&\left\{r\ \in(R/2,R); \ ||D^2_x u||^2_{L^2(S^{n-1})}> M R^{-\frac{a+4p}{p-1}} \right\}.
\end{eqnarray*}
We claim that $|\Gamma_{i}(R)|\le R/8$ for $1\le i\le 3$. Using (\ref{d2ufinfin}), we get
\begin{eqnarray*}
C &\ge & R^{-n+\frac{a+4p}{p-1}} \int_{R/2}^{R} ||D_{x}^{2} u||^{2}_{L^2(S^{n-1})} r^{n-1} dr
\\& \ge & N R^{-n+\frac{a+4p}{p-1}} R^{n-1} \int_{R/2}^{R} ||D_{x}^{2} u||^{2}_{L^2(S^{n-1})} dr
\\& \ge& N M R^{-n+\frac{a+4p}{p-1}} R^{n-1} \int_{\Gamma_3(R)} R^{-\frac{a+4p}{p-1}} dr
\\&=& N M R^{-n+\frac{a+4p}{p-1}} R^{n-1} |\Gamma_3(R)| R^{-\frac{a+4p}{p-1}}
\\&=& N M |\Gamma_3(R)| R^{-1} ,
\end{eqnarray*}
where $N=(1/2)^{n-1}$. Therefore, $|\Gamma_3(R)|\le \frac{C}{NM} R$. Now choosing $M$ large enough, that is, $M>\frac{8C}{N}$, we get $|\Gamma_{3}(R)|\le R/8$. Similarly, applying (\ref{d2ufinfin1}) and (\ref{d2ufinfin2}), one can show that $|\Gamma_{i}(R)|\le R/8$ for $i=1,2$. Hence, $|\Gamma_{i}(R)|\le R/8$ for $1\le i\le 3$, while each $\Gamma_{i}(R)$ is a subset of the interval $(R/2,R)$, which has length $R/2$. Since $3 \cdot R/8 < R/2$, for every $R$ the set
\begin{equation} \label{hatr}
(R/2,R)\setminus \bigcup_{i=1}^{3}\Gamma_{i}(R)
\end{equation}
is nonempty, and we may pick $\tilde R$ in it; letting $R\to\infty$ produces a sequence of such radii.
Therefore, for such radii $\tilde R$, we obtain
\begin{eqnarray}
\label{upsu} ||u||^p_{{L^p(S^{n-1})}} &\le& M { R}^{-\frac{a+4}{p-1}p},\\
\label{upsdu} ||D_x u||_{{L^1(S^{n-1})}} &\le& M { R}^{-\frac{p+3+a}{p-1} } , \\
\label{upsd2u} ||D^2_x u||^2_{L^2(S^{n-1})} &\le & M { R}^{-\frac{a+4p}{p-1}}.
\end{eqnarray}
Substituting (\ref{upsu}) into (\ref{firsts}) and (\ref{firstb}), we get the following decay estimate on $I_2(R)$:
\begin{eqnarray}\label{I2}
I_2(R) &\le& C \chi \{(s+1)(p+1)>2p \} R^{\frac{s+1}{2}a -\frac{a+4}{p-1}p}
\\ && \nonumber
+ C \chi \{(s+1)(p+1)\le 2p \} R^{\frac{s+1}{2}a -\frac{a+4}{p-1}(p+1)\frac{s+1}{2}}
\\ &=& \nonumber C \chi \{(s+1)(p+1)>2p \} R^{-\eta_1}
\\ && \nonumber + C \chi \{(s+1)(p+1)\le 2p \} R^{-\eta_2},
\end{eqnarray}
where $\chi$ is the characteristic function, $\eta_1:=a\left( \frac{p}{p-1}-\frac{s+1}{2} \right)+ \frac{4p}{p-1}>0$ and $\eta_2:=\frac{(s+1)(a+2p+2)}{p-1}>0$. Note that we have used the fact that $\frac{p}{p-1}-\frac{s+1}{2}>0$, because $0< s=\frac{2}{n-3}\le 1 $ when $n \ge 5$. On the other hand, substituting (\ref{upsdu}) and (\ref{upsd2u}) into the Sobolev embedding (\ref{du2s}) we get
\begin{equation}\label{du4fin}
||D_x u||_{L^{2(s+1)}(S^{n-1})} \le C { R}^{1-\frac{a+4p}{p-1}}+ C { R}^{-\frac{p+3+a}{p-1}} \le 2 C { R}^{-\frac{p+3+a}{p-1}},
\end{equation}
where we used that $1-\frac{a+4p}{p-1}=-\frac{3p+1+a}{p-1}\le -\frac{p+3+a}{p-1}$ for $p>1$ and $R>1$. From this and the definition of $I_1(R)$ we end up with the following decay estimate on $I_1(R)$:
\begin{equation}\label{I1}
I_1(R)= \int_{S^{n-1}} |\nabla u|^{2(s+1)} \le C { R}^{-\frac{2(p+3+a)(s+1)}{p-1}}= C R^{-\eta_3},
\end{equation}
where $\eta_3:= \frac{2(p+3+a)(s+1)}{p-1}>0$. Finally from (\ref{I1}) and (\ref{I2}) we observe that $$I(R) \le C R^{-\eta} \ \ \ \text{for all\ \ } R>1,$$
where $\eta:=\min\{\eta_1,\eta_2,\eta_3\}>0$. So,
$I(R)\to 0 $ as $R\to\infty$. Since $I(R)$ is a positive function converging to zero, there is a sequence of radii along which $I'(R)$ is nonpositive. Therefore, (\ref{eqws2}) yields
\begin{equation}\label{eqws33}
\int_{B_R} |\nabla \tilde w_+|^2 \tilde w^{s-1}_+ \le 0.
\end{equation}
Hence, $\tilde w_+$ has to be constant. By continuity of $\tilde w$, this constant cannot be strictly positive, so $\tilde w_+=0$ and therefore $w_+=0$. This finishes the proof.
\hfill $\Box$
Note that Lemma \ref{boundwt} and Lemma \ref{boundw} enable an iteration argument for the following sequence of functions, for $k\ge -1$,
\begin{equation}\label{wkkk}
w_k=\Delta u+\alpha_k (u+\epsilon)^{-1} |\nabla u|^2 + \beta_k |x|^{\frac{a}{2}} u^{\frac{p+1}{2}}
\end{equation}
as long as the right-hand side of (\ref{wk+1}) stays nonnegative. For the rest of this section, we construct sequences $\{\alpha_k\}_{k\ge-1}$ and $\{\beta_k\}_{k\ge-1}$ such that the right-hand side of (\ref{wk+1}) is nonnegative.
\subsection{Constructing sequences $\alpha_k$ and $\beta_k$}
In this part, we define sequences $\alpha_k$ and $\beta_k$ needed for the iteration argument.
\begin{lemma}\label{alphak} Suppose $\alpha_0=0$ and define
\begin{equation}
\alpha_{k+1}:= \frac{ 4(\alpha_k+1)-n+\sqrt{n(16 \alpha_k^2+24 \alpha_k +n +8)} }{4(n-1)}.
\end{equation}
Then $(\alpha_k)_k$ is a nonnegative, bounded and increasing sequence that converges to $\alpha:=\frac{2}{n-4}$ provided $n>4$ and $p>1$. Moreover, for this choice of $(\alpha_k)_k$, one of the sequences of coefficients defined in Proposition \ref{propwk} vanishes, i.e. $ I^{(2)}_{ \alpha_k}=0$.
\end{lemma}
\noindent\textbf{Proof:} It is straightforward to show that $\alpha_k>0$ for every $k\ge 1$. Also, direct calculations show that if $(\alpha_k)_k$ converges, then its limit must be $\alpha:=\frac{2}{n-4}$. Note that $\alpha_1=\frac{4-n+\sqrt{n^2+8n}}{4(n-1)}<\frac{2}{n-4}$, and by induction one can see that $\alpha_k \le \alpha$ for all $k\ge 0$. In what follows we show that $(\alpha_k)_k$ is increasing. For any $k$, the difference between $\alpha_{k+1}$ and $\alpha_{k}$ is
\begin{eqnarray*}
\alpha_{k+1}- \alpha_{k} &=& \frac{ \sqrt{n(16 \alpha_k^2+24 \alpha_k +n +8)} - \left( (n-4)+4\alpha_k(n-2)\right) }{4(n-1)} \\&=& \frac{2(n-4)(2\alpha_k+1)}{S_{n,k}} \left( \frac{2}{n-4} -\alpha_k \right)
\end{eqnarray*}
where $S_{n,k}=\sqrt{n(16 \alpha_k^2+24 \alpha_k +n +8)} + (n-4)+4\alpha_k(n-2) >0 $. Therefore, from the fact that $\alpha_k\le \alpha=\frac{2}{n-4}$, we get the desired result.
\hfill $\Box$
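The fixed point of this recursion is easy to test numerically. The following is a quick sanity check (ours, an illustration only, not part of the argument) of Lemma \ref{alphak} in Julia, for the arbitrary sample value $n=10$:
\begin{code}
# Sanity check (illustration only): iterate the recursion for alpha_k
# with alpha_0 = 0 and compare with the claimed limit 2/(n-4).
function alpha_limit(n; iters = 50)
    a = 0.0
    for _ in 1:iters
        a = (4*(a + 1) - n + sqrt(n*(16a^2 + 24a + n + 8))) / (4*(n - 1))
    end
    return a
end
println(alpha_limit(10))   # 0.33333..., approached monotonically from below
println(2 / (10 - 4))      # 0.33333...
\end{code}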
Similarly, in what follows we provide an explicit formula for the sequence $\beta_k$.
\begin{lemma}\label{betak} Suppose $\beta_0=\sqrt{\frac{2}{p+1}}$ and define
\begin{equation}
\beta_{k+1}:= \sqrt{ \frac{2}{p+1} +\frac{4}{(p+1)n} \alpha_{k} \beta_k^2},
\end{equation}
where $(\alpha_k)_k$ is as in Lemma \ref{alphak}. Then $(\beta_k)_k$ is a positive, bounded and nondecreasing sequence that converges to $\beta:=\sqrt\frac{2}{(p+1)-c_n}$ where $c_n=\frac{8}{n(n-4)}$ provided $n >4$ and $p>1$. Moreover, for this choice of $(\alpha_k)_k$ and $(\beta_k)_k$, one of the sequences of coefficients defined in Proposition \ref{propwk} is strictly positive, i.e. $I^{(1)}_{0,\alpha_k,\beta_k} > 0$.
\end{lemma}
\noindent\textbf{Proof:}
The sequence $(\beta_k)_k$ is positive for all $k\ge 0$. Moreover, boundedness of $(\alpha_k)_k$ forces boundedness of $(\beta_k)_k$: since $\alpha_k\le\alpha$, we have $\beta_{k+1} \le \sqrt{\frac{2}{p+1} +\frac{4\alpha}{(p+1)n} \beta_{k}^2} $ for any $k$. Unrolling this recursion, we get
\begin{eqnarray*}
\beta_{k+1}^2 \le \frac{2}{p+1} \sum_{i=0}^{k+1} \left( \frac{4\alpha}{n(p+1)} \right)^i .
\end{eqnarray*}
Note that $ \frac{4\alpha}{n(p+1)}= \frac{8}{n(n-4)(p+1)}<1$ provided $n>4$ and $p>1$. Therefore, $\sum_{i=0}^{\infty} \left( \frac{4\alpha}{n(p+1)} \right)^i <\infty$. This proves the boundedness of $(\beta_k)_k$.
Since $(\alpha_k)_{k\ge0}$ is an increasing sequence, the sequence $(\beta_k)_{k\ge0}$ is nondecreasing, by induction. Note that $\beta_1=\beta_0$ and $\beta_2=\sqrt{\frac{2}{p+1}+\frac{8}{(p+1)^2n}
\frac{4-n+\sqrt{n^2+8n}}{4n-4}} > \beta_1=\sqrt{\frac{2}{p+1}}$. Suppose that $\beta_{k-1}\le \beta_k$ for a certain index $k \ge 2$; then we apply the fact that $\alpha_{k} \ge \alpha_{k-1}$ to show $\beta_{k}\le \beta_{k+1}$. This is a consequence of the following:
\begin{eqnarray*}
\beta_{k+1}- \beta_{k} &=& \frac{\beta^2_{k+1}- \beta^2_{k}}{ \beta_{k+1}+ \beta_{k} } = \frac{4}{(p+1)n(\beta_{k+1}+ \beta_{k})} (\beta_k^2\alpha_{k} - \beta_{k-1}^2 \alpha_{k-1} )
\\&\ge & \frac{4 \alpha_{k-1} (\beta_k + \beta_{k-1}) }{(p+1)n (\beta_{k+1}+ \beta_{k})} (\beta_k - \beta_{k-1}).
\end{eqnarray*}
So, $(\beta_k)_k$ is convergent, with limit $\beta=\sqrt\frac{2n(n-4)}{(p+1)n(n-4)-8}=\sqrt\frac{2}{(p+1)-c_n}$. Note that $(p+1)n(n-4)>8$ for $p>1$ and $n> 4$, so $\beta$ is well-defined.
\hfill $\Box$
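As with $(\alpha_k)_k$, the limit of $(\beta_k)_k$ can be checked numerically; a minimal sketch (ours, an illustration only) with the arbitrary sample values $n=10$ and $p=3$:
\begin{code}
# Sanity check (illustration only): jointly iterate the alpha_k and
# beta_k recursions and compare beta_k with the claimed limit.
function beta_limit(n, p; iters = 50)
    a, b = 0.0, sqrt(2 / (p + 1))                       # alpha_0, beta_0
    for _ in 1:iters
        b = sqrt(2/(p + 1) + 4/((p + 1)*n) * a * b^2)   # beta_{k+1}
        a = (4*(a + 1) - n + sqrt(n*(16a^2 + 24a + n + 8))) / (4*(n - 1))
    end
    return b
end
n, p = 10, 3
println(beta_limit(n, p))                            # ~0.71918
println(sqrt(2n*(n - 4) / ((p + 1)*n*(n - 4) - 8)))  # ~0.71918
\end{code}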
Note that, based on the definitions of the sequences $\{\alpha_k\}_{k\ge-1}$ and $\{\beta_k\}_{k\ge-1}$, we concluded that $I^{(1)}_{0,\alpha_k,\beta_k} > 0$ and $ I^{(2)}_{ \alpha_k}=0$. In the next two lemmata we investigate the positivity of the sequences $I^{(3)}_{\epsilon,\alpha_{k},\beta_{k}}$ and $I^{(4)}_{a,\epsilon,\alpha_{k},\beta_{k}}$ appearing in (\ref{wk+1}) in Proposition \ref{propwk}.
\begin{lemma} Set $\epsilon=0$ in $I^{(3)}_{\epsilon,\alpha_{k},\beta_{k}}$ that is defined in Proposition \ref{propwk}. Then,
\begin{equation}\label{I}
I^{(3)}_{0,\alpha_{k},\beta_{k}}\to I^{(3)}_{0,\alpha,\beta}:=\frac{4}{n}\alpha\beta(2\alpha+1)+\alpha\beta+\beta q(q-3\alpha-1)
\end{equation}
as $k\to\infty$, where $q:=\frac{p+1}{2}$. The constant $I^{(3)}_{0,\alpha,\beta} $ is positive provided $p> \frac{n+4}{n-4}$ and $n>4$.
\end{lemma}
\noindent\textbf{Proof:} Note that when $p> \frac{n+4}{n-4}$ and $n>4$, we have $q=\frac{p+1}{2} > \frac{n}{n-4}$. As $k\to\infty$, by Lemma \ref{alphak} and Lemma \ref{betak}, the sequences $\alpha_k\to\alpha:=\frac{2}{n-4}$ and $\beta_k\to\beta:=\sqrt\frac{2}{(p+1)-c_n}$. Therefore,
\begin{eqnarray*}
\frac{I^{(3)}_{0,\alpha,\beta}}{\beta} &=& \frac{4}{n}\left(\frac{2}{n-4}\right) \left(\frac{4}{n-4}+1\right)+\frac{2}{n-4}+\frac{p+1}{2}\left(\frac{p-1}{2}-\frac{6}{n-4}\right) \\&=&\left(\frac{p+1}{2}\right)^2-\left(\frac{p+1}{2} \right)\left(\frac{n+2}{n-4}\right) + \frac{2n}{(n-4)^2}
\\&=& \left( \frac{p+1}{2}-\frac{n}{n-4} \right)\left(\frac{p+1}{2}-\frac{2}{n-4} \right) > 0.
\end{eqnarray*}
\hfill $\Box$
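The factorization used in the last step is easy to double-check numerically; here is a small sketch (ours, an illustration only) at the sample values $n=10$ and $p=3$, which satisfy $p>\frac{n+4}{n-4}$:
\begin{code}
# Check (illustration only) that I3/beta = (q - n/(n-4))(q - 2/(n-4))
# with q = (p+1)/2 and alpha = 2/(n-4).
n, p = 10, 3
q, alpha = (p + 1) / 2, 2 / (n - 4)
lhs = 4/n * alpha * (2alpha + 1) + alpha + q * (q - 3alpha - 1)
rhs = (q - n/(n - 4)) * (q - 2/(n - 4))
println((lhs, rhs))   # both equal 5/9 > 0
\end{code}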
Note that $I^{(4)}_{a,\epsilon,\alpha_{k},\beta_{k}}$ appears in (\ref{wk+1}) mainly because of the weight function $|x|^a$; indeed, $I^{(4)}_{0,\epsilon,\alpha_{k},\beta_{k}}=0$ when $a=0$.
\begin{lemma} For any $k\ge 0$,
\begin{equation}
I^{(3)}_{0,\alpha_{k},\beta_{k}} < \beta_{k+1} (\frac{p+1}{2}-\alpha_{k+1})^2,
\end{equation}
provided $p>\frac{n+4}{n-4}$ and $n>4$. Therefore, for any $a\ge0$ that satisfies the following upper bound
\begin{equation}\label{ak}
a \le A_k:=\frac{2(n-2) I^{(3)}_{0,\alpha_{k},\beta_{k}}}{ \beta_{k+1} (\frac{p+1}{2}-\alpha_{k+1})^2 -I^{(3)}_{0,\alpha_{k},\beta_{k}}}
\end{equation}
the sequence $I^{(4)}_{a,0,\alpha_{k},\beta_{k}}$ is positive for any $k$.
\end{lemma}
\noindent\textbf{Proof:}
Basic calculations show that
\begin{eqnarray*}
&& \beta_{k+1} (\frac{p+1}{2}-\alpha_{k+1})^2 -I^{(3)}_{0,\alpha_{k},\beta_{k}}\\ &=&
\beta_{k+1}(\frac{p+1}{2}-\alpha_{k+1})^2 - \frac{4}{n}\alpha_{k+1} \beta_{k} (\alpha_{k+1}
\\ &&+\alpha_k+1) - \alpha_{k+1} \beta_{k+1} - \beta_{k+1} \frac{p+1}{2}(\frac{p+1}{2}-3\alpha_{k+1}-1)\\&\ge&
\beta_{k+1} ( (\frac{p+1}{2}-\alpha_{k+1})^2 - \frac{4}{n}\alpha_{k+1} (\alpha_{k+1} +\alpha_k+1) - \alpha_{k+1} \\&& - \frac{p+1}{2}(\frac{p+1}{2}-3\alpha_{k+1}-1) )
\\&\ge& \beta_{k+1} \left ( \frac{n-4}{n} \alpha_{k+1}^2 - \frac{4}{n} \alpha_{k+1}^2 - \frac{4}{n} \alpha_{k+1} + \frac{(p-1)\alpha_{k+1}}{2} +\frac{p+1}{2} \right)
\end{eqnarray*}
where we have used that $(\beta_k)_k$ is nondecreasing in the first inequality and that $(\alpha_k)_k$ is increasing in the second. Therefore,
\begin{eqnarray*}
&& \beta_{k+1} (\frac{p+1}{2}-\alpha_{k+1})^2 -I^{(3)}_{0,\alpha_{k},\beta_{k}}
\\&\ge&
\beta_{k+1} ( \frac{n-4}{n} \alpha_{k+1}^2 + \alpha_{k+1}( \frac{p-1}{2} - \frac{4}{n} \alpha_{k+1} ) +\frac{p+1}{2} - \frac{4}{n} \alpha_{k+1} )
\\&\ge& \beta_{k+1} \left( \frac{n-4}{n} \alpha_{k+1}^2 + (\alpha_{k+1}+1)( \frac{p-1}{2} - \frac{4}{n} \alpha) \right)
\\&>&0.
\end{eqnarray*}
Note that in the last inequality we have used the fact that $ \frac{p-1}{2} - \frac{4}{n} \alpha= \frac{p-1}{2} - \frac{4}{n} \frac{2}{n-4}> \frac{4}{(n-4)n} (n-2)>0$, since $p>\frac{n+4}{n-4}$ and $n>4$.
\hfill $\Box$
\begin{remark}
It would be interesting if a counterpart of (\ref{newpoint4henon}) could be proved for bounded solutions of the fourth order semilinear equation $\Delta^2 u=f(u)$ under certain assumptions on the arbitrary nonlinearity $f\in C^1(\mathbb R)$. We expect that such an inequality could be established for some convex nonlinearity $f$.
\end{remark}
\section{Appendix}
We would like to mention that, given the estimates in Lemma \ref{1bound} and Lemma \ref{1boundLap}, one can give a somewhat simpler proof of Proposition \ref{prop0}, as follows.
\\
\\
\noindent\textbf{Second Proof for Proposition \ref{prop0}:} From Lemma \ref{1bound}, we have
$\int_{{\mathbb{R}}^n} |x|^{2-n+a} u^p dx < \infty$. Hence we define
the following function
$$w(x) = \frac{1}{n(n-2)\omega_n}\int_{{\mathbb{R}}^n}
\frac{|y|^a u^p(y)}{|x - y|^{n-2}} dy.$$
It is clear that $w(x) \ge
0$ and $\Delta w = - |x|^a u^p$. This implies that for a solution $u$ of (\ref{4henon}), the function $h(x):= w(x) + \Delta u(x)$ is a
well defined harmonic function on ${\mathbb{R}}^n$. Thus for any
$x_0 \in {\mathbb {R}}^n$ and any $R > 0$, by the mean value theorem for
harmonic functions (where $d\sigma$ denotes the normalized surface measure on $\partial B_R(x_0)$), we have
\begin{eqnarray}\label{hx0}
h(x_0)& = & \int_{\partial B_R(x_0)} h \, d\sigma
\\
& = &\int_{\partial B_R(x_0)} (w + \Delta u) d\sigma\nonumber\\
& \le & \nonumber\int_{\partial B_R(x_0)}w d\sigma + \int_{\partial B_R(x_0)}
|\Delta u| d\sigma.
\end{eqnarray}
Since $w(x_0) < \infty$, Tonelli's theorem allows us to change
the order of integration and see that the first integral on the
right-hand side of (\ref{hx0}) tends to zero as $R \to \infty$.
To be more precise, notice that, up to a constant multiple, the
first integral can be written as
$$ \int_{{\mathbb {R}}^n} \int_{\partial B_R(x_0)} \frac{ d\sigma_x}{|x -
y|^{n-2}} |y|^a u^p(y) dy.$$
Then we use the fact that $\int_{\partial B_R(x_0)} \frac{d\sigma_x}{|x -
y|^{n-2}}$ equals $|y-x_0|^{2-n}$ if $| y -x_0| > R$ and equals $ R^{2-n}$
if $|y - x_0| < R$. Thus the integral splits into two parts.
The outer part tends to zero as $R \to \infty$ because
$w(x_0) < \infty$, while the inner part tends to zero because,
by Lemma \ref{1bound},
\begin{eqnarray*}
R^{2-n}\int_{B_R(x_0)} |y|^a
u^p(y) dy &\le& R^{2-n} \int_{B_{R + |x_0|}(0)} |y|^a u^p dy \\ &\le& C
R^{2-n} (R+|x_0|)^{n -\frac{4p + a}{p-1}}
\end{eqnarray*}
tends to zero as $R \to \infty$. The second integral tends to zero along
some sequence of radii $R$, by Lemma \ref{1boundLap}. Applying the above
along this sequence, we see that $h(x_0) \le 0$. Since $x_0$ is arbitrary
and $w \ge 0$, we have $-\Delta u = w - h \ge 0$.
\hfill $\Box$
| {
"timestamp": "2015-08-21T02:12:01",
"yymm": "1310",
"arxiv_id": "1310.2275",
"language": "en",
"url": "https://arxiv.org/abs/1310.2275",
"abstract": "We prove that the following pointwise inequality holds\\begin{equation*} -\\Delta u \\ge \\sqrt\\frac{2}{(p+1)-c_n} |x|^{\\frac{a}{2}} u^{\\frac{p+1}{2}} + \\frac{2}{n-4} \\frac{|\\nabla u|^2}{u} \\ \\ \\text{in}\\ \\ \\mathbb{R}^n\\end{equation*}where $c_n:=\\frac{8}{n(n-4)}$, for positive bounded solutions of the fourth order Hénon equation that is \\begin{equation*} \\Delta^2 u = |x|^a u^p \\ \\ \\ \\ \\text {in }\\ \\ \\mathbb{R}^n \\end{equation*} for some $a\\ge0$ and $p>1$. Motivated by the Moser's proof of the Harnack's inequality as well as Moser iteration type arguments in the regularity theory, we develop an iteration argument to prove the above pointwise inequality. As far as we know this is the first time that such an argument is applied towards constructing pointwise inequalities for partial differential equations. An interesting point is that the coefficient $\\frac{2}{n-4}$ also appears in the fourth order $Q$-curvature and the Paneitz operator. This in particular implies that the scalar curvature of the conformal metric with conformal factor $u^\\frac{4}{n-4}$ is positive.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "A pointwise inequality for the fourth order Lane-Emden equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877044076955,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.70895947140673
} |
https://arxiv.org/abs/1801.04548 | Frame Moments and Welch Bound with Erasures | The Welch Bound is a lower bound on the root mean square cross correlation between $n$ unit-norm vectors $f_1,...,f_n$ in the $m$ dimensional space ($\mathbb{R} ^m$ or $\mathbb{C} ^m$), for $n\geq m$. Letting $F = [f_1|...|f_n]$ denote the $m$-by-$n$ frame matrix, the Welch bound can be viewed as a lower bound on the second moment of $F$, namely on the trace of the squared Gram matrix $(F'F)^2$. We consider an erasure setting, in which a reduced frame, composed of a random subset of Bernoulli selected vectors, is of interest. We extend the Welch bound to this setting and present the {\em erasure Welch bound} on the expected value of the Gram matrix of the reduced frame. Interestingly, this bound generalizes to the $d$-th order moment of $F$. We provide simple, explicit formulae for the generalized bound for $d=2,3,4$, which is the sum of the $d$-th moment of Wachter's classical MANOVA distribution and a vanishing term (as $n$ goes to infinity with $\frac{m}{n}$ held constant). The bound holds with equality if (and for $d = 4$ only if) $F$ is an Equiangular Tight Frame (ETF). Our results offer a novel perspective on the superiority of ETFs over other frames in a variety of applications, including spread spectrum communications, compressed sensing and analog coding. |
\section{Introduction}
Design of frames or over-complete bases with favorable properties is a thoroughly studied subject in communications, signal processing and harmonic analysis. In various applications, one is interested in finding over-complete bases
where the favorable properties hold for a random subset of the frame vectors, rather than for the entire frame.
Here are a few examples. In code-division multiple access (CDMA), spreading sequences with low cross-correlation are preferred; when only a random subset of the users is active, the quantity of interest is the expected cross-correlation within a random subset of the spreading sequences \cite{rupf1994optimum}.
In sparse signal reconstruction from undersampled measurements, the ability to reconstruct the signal crucially depends on
properties of a subset of the measurement matrix, which corresponds to the non-zero entries of the sparse signal;
for example, if the extreme eigenvalues of the submatrix are bounded, stable recovery is guaranteed \cite{candes2008restricted}. When the support of the sparse vector is random, one is interested in extreme eigenvalues of a random frame subset \cite{calderbank2010construction}.
In analog coding, various schemes of interest require frames, for which the first inverse moment of the
covariance matrix of a randomly chosen frame subset is as small as possible. This occurs, for example,
in the presence of source erasures at the encoder \cite{haikin2016analog},
in channels with impulses \cite{wolf1983redundancy} or with erasures \cite{ITA17}
and in multiple description source coding \cite{mashiach2013sampling}.
A famous result by Welch \cite{welch1974lower} provides a universal lower bound on the mean and maximum value
of powers of absolute values of inner products (a.k.a cross-correlations) of frame vectors.
Frames which achieve the Welch lower bound on the maximal absolute cross-correlation
are known as equiangular tight frames (ETFs).
Motivated by frame design for various applications, in this paper we show that the Welch bound naturally extends to random frame subsets, such that
the lower bound is achieved by (and sometimes only by) ETFs. We term this new
universal lower bound the
{\em Erasure Welch Bound} (EWB) and generalize it to higher-order covariances as well.
As a universal, tight lower bound in frame theory, the EWB is essentially a geometric quantity. Surprisingly,
the EWB itself coincides with a quantity appearing elsewhere in mathematics, namely in random matrix theory.
Below, we prove that the EWB matches the moments of Wachter's classical limiting MANOVA distribution \cite{wachter1980limiting}.
In a recent paper \cite{haikin2017random} we reported overwhelming empirical evidence
that the covariance matrix of a random frame subset from many well-known ETFs in fact follows
Wachter's classical limiting MANOVA distribution. To the best of our knowledge,
the results of this paper are the first theoretical confirmation of the empirical predictions of \cite{haikin2017random}, relating ETFs to Wachter's classical limiting MANOVA distribution and random matrix phenomena.
\section{Notation and Setup} \label{definitions}
We consider a unit-norm frame, i.e. an over-complete basis comprising $n$ unit-norm vectors $f_1,\dots,f_n$.
Let $F=\{F_{j,i}\}$ denote the $m$-by-$n$ frame matrix whose columns are the frame vectors, $F=\left[f_1|\cdots|f_n\right]$.
Let us define the vector cross correlation:
\begin{equation} \label{corr}
c_{i_1,i_2}\triangleq \langle f_{i_1},f_{i_2}\rangle = f_{i_1}'f_{i_2}=\sum_{j=1}^{m}F_{j,i_1}^*F_{j,i_2}
\end{equation}
where
\begin{equation} \label{c_ii}
c_{i,i} = \|f_i\|^2=1
\end{equation}
by the unit norm property.
The {\em Welch bound} \cite{welch1974lower} lower bounds the root-mean-square (rms) absolute cross correlation:
\begin{equation} \label{rms WB}
I^2_{rms}(F) \triangleq \frac{1}{n(n-1)}\sum_{i_1=1}^{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\ge \frac{n-m}{(n-1)m},
\end{equation}
and it is achieved with equality iff $F$ is a Uniform Tight Frame (UTF), i.e.
\begin{equation} \label{UTF}
FF'=\frac{n}{m}I_m.
\end{equation}
The Welch bound \cite{welch1974lower} implies a bound on the maximum absolute cross correlation:
\begin{equation} \label{max WB}
I^2_{max}(F) \triangleq \max_{1\le i_1< i_2\le n}|c_{i_1,i_2}|^2\ge \frac{n-m}{(n-1)m}.
\end{equation}
This stronger lower bound is achieved with equality iff the frame is an Equiangular Tight Frame (ETF), namely, it is UTF \eqref{UTF} and satisfies
\begin{equation} \label{ETF2}
|c_{i_1,i_2}|^2={\rm constant}=\frac{n-m}{(n-1)m} \ \ \forall i_1 \neq i_2.
\end{equation}
This unique configuration, which exists only for certain combinations of dimension $m$ and number of vectors $n$, achieves a whole family of lower bounds derived below.
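As a concrete illustration (ours, not part of the original text), a classical example of an ETF is the ``Mercedes-Benz'' frame: $n=3$ unit vectors in $\mathbb{R}^2$ at mutual angle $120^\circ$, so $m=2$. A short Julia snippet verifying \eqref{UTF} and \eqref{ETF2} for it:
\begin{code}
# Illustration (not from the text): the m = 2, n = 3 "Mercedes-Benz"
# frame of three unit vectors at mutual angle 120 degrees is an ETF.
using LinearAlgebra
F = hcat([[cos(t), sin(t)] for t in (pi/2) .+ (0:2) .* (2pi/3)]...)
println(F * F')                                  # = (n/m) I = 1.5 I (UTF)
G = F' * F                                       # Gram matrix of the c's
println([abs2(G[i, j]) for i in 1:3, j in 1:3])  # off-diagonal all = 1/4
println((3 - 2) / ((3 - 1) * 2))                 # Welch bound = 1/4, met
\end{code}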
Our main object of interest is a submatrix composed of a random subset of the frame vectors, or columns of $F$.
Define the following $m$-by-$n$ matrix
\begin{equation} \label{X}
X = FP,
\end{equation}
where $P$ is a diagonal matrix with independent Bernoulli($p$) elements on the diagonal.
In other words, each of the vectors $f_1,...,f_n$ is replaced by a zero vector with probability $1-p$.
%
The empirical moment,
\begin{equation} \label{empirical moment d}
\frac{1}{n}\tr \left((X'X)^d\right)
\end{equation}
is the $d$-th moment of the empirical eigenvalue distribution of $X'X$.
We define the expected $d$-th moment of a random subset of $F$ as:
\begin{equation} \label{moment d}
m_d \triangleq \frac{1}{n}\Ev\left[ \tr \left((X'X)^d\right)\right]=\frac{1}{n}\Ev\left[ \tr \left((FPF')^d\right)\right]
\end{equation}
where we applied $\tr \left((X'X)^d\right)=\tr \left((XX')^d\right)$ and $P^2=P$.
The first moment ($d=1$) of a frame is constant since
\begin{equation} \label{m1}
\begin{split}
&m_1 = \frac{1}{n}\Ev\left[ \tr \left(X'X\right)\right] = \frac{1}{n}\Ev\left[ \sum_{i=1}^{n}f_i'f_iP_{i,i}\right]\\&=\frac{1}{n}\Ev\left[ \sum_{i=1}^{n}P_{i,i}\right]=\frac{1}{n}\sum_{i=1}^{n}\Ev\left[ P_{i,i}\right]=\frac{1}{n}\sum_{i=1}^{n}p = p
\end{split}
\end{equation}
where the second equality is due to \eqref{c_ii}.
A useful result for obtaining bounds for $d>1$ is the special case $p=1$, i.e. a bound on the moments of the whole frame, without taking subsets.
\begin{lemma} \label{lemma1}
For any unit-norm frame,
\begin{equation} \label{md bound_p=1}
\frac{1}{n}\tr \left((FF')^d \right)\ge\left(\frac{n}{m}\right)^{d-1}
\end{equation}
with equality iff $F$ is a UTF.
\end{lemma}
\begin{proof}
The trace of the square matrix $FF'$ is equal to the sum of its eigenvalues $\{\lambda_j\}_{j=1}^m$. Furthermore, the eigenvalues of $(FF')^d$ are $\{\lambda_j^d\}_{j=1}^m$.
Using Jensen's inequality for a convex function of $\{\lambda_j\}_{j=1}^m$:
\begin{equation} \label{Jensen}
\frac{1}{m}\sum_{j=1}^{m}\lambda^d_{j}\ge \left( \frac{1}{m}\sum_{j=1}^{m}\lambda_{j}\right)^d
\end{equation}
with equality iff all eigenvalues are equal, i.e. $FF'\propto I_m$. Hence,
\begin{equation} \label{Jensen tr}
\frac{1}{m}\tr \left((FF')^d \right)\ge \left(\frac{1}{m}\tr(FF')\right)^d,
\end{equation}
while
\begin{equation} \label{trace FF'}
\frac{1}{m}\tr(FF')=\frac{1}{m}\tr(F'F)=\frac{1}{m}\sum_{i=1}^{n}c_{i,i}=\frac{n}{m}
\end{equation}
From \eqref{Jensen tr}, \eqref{trace FF'} and proper normalization, \eqref{md bound_p=1} follows, with equality iff \eqref{UTF} is satisfied, i.e. $F$ is a UTF.
\end{proof}
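For instance, for the Mercedes-Benz UTF from the snippet above, both sides of \eqref{md bound_p=1} agree for every $d$ (a numerical illustration, not part of the proof):
\begin{code}
# Illustration: Lemma 1 holds with equality for the Mercedes-Benz UTF,
# i.e. (1/n) tr((FF')^d) = (n/m)^(d-1) for every d.
using LinearAlgebra
F = hcat([[cos(t), sin(t)] for t in (pi/2) .+ (0:2) .* (2pi/3)]...)
for d in 1:4
    println((tr((F * F')^d) / 3, (3/2)^(d - 1)))  # equal for each d
end
\end{code}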
Note that for $d=2$,
\begin{equation} \label{Lemma d=2}
\frac{1}{n}\tr \left((FF')^2\right)=\frac{1}{n}\sum_{i_1,i_2=1}^{n}|c_{i_1,i_2}|^2=1+\frac{1}{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2,
\nonumber
\end{equation}
so \eqref{md bound_p=1} becomes
\begin{equation} \label{WB x}
\frac{1}{n}\sum_{i_1=1}^{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\ge \frac{n}{m}-1\triangleq x
\end{equation}
which is the Welch bound \eqref{rms WB}.
Therefore, a lower bound on $m_d$ in \eqref{moment d} generalizes the Welch bound in two senses: first, as a bound on random subsets of $F$ (for $p=1$, it reduces to the rms Welch bound); second, as a bound on higher-order moments $d\ge 2$\footnote{Our definition is different from that of the Welch bound on the powers of the absolute cross-correlations in \cite{welch1974lower}.}.
\section{Main Result}\label{main}
To state our main theorem, let us define the $d$-th moment of the MANOVA$(\gamma,p)$ density as \cite{dubbs2015infinite}
\begin{equation} \label{moment d MANOVA}
m^{\rm MANOVA}(\gamma,p,d)\triangleq \min (p,\gamma)\int t^d \,\rho_{p,\gamma}(t)dt
\end{equation}
where $\gamma = \frac{m}{n}$ is the aspect ratio of the frame,
the factor $\min (p,\gamma)$ is due to normalization by the full dimension $n$,
and
\begin{equation} \label{ManovaDensity}
\begin{split}
&\rho_{p,\gamma}(t)=\frac{\gamma \sqrt{(t-r_-)(r_+-t)}}{2\pi t(1-\gamma t)\min (p,\gamma)}\cdot
I_{(r_-,r_+)}(t) \\&+\left(p+\gamma-1\right)^+/\min (p,\gamma)\cdot \delta(t-\frac{1}{\gamma})
\end{split}
\end{equation}
is Wachter's classical MANOVA density \cite{wachter1980limiting}, compactly supported on $[r_-,r_+]$ with
\begin{equation}
\label{ManovaDensityExtrimalValues}
r_\pm=\bigg(\sqrt{\frac{p}{\gamma}(1-\gamma)}\pm\sqrt{1-p}\bigg)^2\,.
\end{equation}
With $x=\frac{1}{\gamma}-1$ as in \eqref{WB x}, define:
\begin{equation} \label{delta EWBd}
\Delta(\gamma,p,d,n)\triangleq
\begin{cases}
0,& d=2,3 \\
p^2(1-p)^2\frac{x^2}{n-1},& d=4 \\
\end{cases}.
\end{equation}
\begin{thm}[Erasure Welch Bound of order $d$] \label{th1}
For any $m$-by-$n$ unit-norm frame and $d=2,3,4$, the $d$-th moment \eqref{moment d} is lower bounded by
\begin{equation} \label{moment d bound}
m_d \ge
m^{\rm MANOVA}(\gamma,p,d)+\Delta(\gamma,p,d,n),
\end{equation}
with equality for $d=2,3$ iff $F$ is a UTF, and for $d=4$ iff $F$ is an ETF.
\end{thm}
~\\
The Erasure Welch Bound admits a simple closed form.
We can write the first term in \eqref{moment d bound} for $d=2,3,4$ as
\begin{align} \label{Manova moments}
m^{\rm MANOVA}(\gamma,p,2) &= p+p^2x\\
\nonumber
m^{\rm MANOVA}(\gamma,p,3) &= p+p^23x+p^3(x^2-x)\\
\nonumber
m^{\rm MANOVA}(\gamma,p,4) &= p+p^26x+p^3(6x^2-4x)
\nonumber
\\&+p^4(x^3-3x^2+x)
\nonumber
\end{align}
where $x$ is defined in \eqref{WB x}.
As for the second term, note that $\Delta(\gamma,p,4,n)\to 0$ as $n\to \infty$. Therefore, the lower bound is asymptotically $m^{\rm MANOVA}(\gamma,p,d)$ for $d=2,3$ and $4$.
This is in line with the empirical results in \cite{haikin2017random}, where we showed that random subsets of ETFs have MANOVA spectra.
We can see from \eqref{delta EWBd} that $\Delta(\gamma,p=1,d,n)=0$, and from \eqref{Manova moments} that $m^{\rm MANOVA}(\gamma,p=1,d)=(x+1)^{d-1}$. Thus for $p=1$ the bound \eqref{moment d bound} becomes $\left(\frac{n}{m}\right)^{d-1}$ and coincides with Lemma \ref{lemma1}.
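For small frames, the expectation in \eqref{moment d} can be evaluated exactly by enumerating all $2^n$ erasure patterns. The following sketch (ours, not from the text) does so for the Mercedes-Benz ETF of Section \ref{definitions} ($m=2$, $n=3$) and confirms the equality cases of Theorem \ref{th1} for $d=2,3,4$; the value $p=0.3$ is arbitrary:
\begin{code}
# Exhaustive check (illustration only) of the Erasure Welch Bound on the
# Mercedes-Benz ETF: enumerate all 2^n erasure patterns, compute m_d
# exactly, and compare with the MANOVA moments plus the d = 4 correction.
using LinearAlgebra
F = hcat([[cos(t), sin(t)] for t in (pi/2) .+ (0:2) .* (2pi/3)]...)
n, m, p = 3, 2, 0.3
x = n / m - 1
md = zeros(4)
for bits in 0:(2^n - 1)                           # all erasure patterns
    sel  = [(bits >> i) & 1 for i in 0:n-1]
    prob = prod(s == 1 ? p : 1 - p for s in sel)  # P(this pattern)
    M    = F * Diagonal(sel) * F'                 # X X' for this pattern
    for d in 1:4
        md[d] += prob * tr(M^d) / n
    end
end
manova = [p, p + p^2*x, p + 3p^2*x + p^3*(x^2 - x),
          p + 6p^2*x + p^3*(6x^2 - 4x) + p^4*(x^3 - 3x^2 + x)]
delta4 = p^2 * (1 - p)^2 * x^2 / (n - 1)
println(md[1] - manova[1])              # ~0: m_1 = p for any frame
println(md[2] - manova[2])              # ~0: equality for d = 2 (UTF)
println(md[3] - manova[3])              # ~0: equality for d = 3 (UTF)
println(md[4] - (manova[4] + delta4))   # ~0: equality for d = 4 (ETF)
\end{code}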
For $d=2$, the bound of Theorem \ref{th1} strengthens the Welch bound in the following sense. Let $k=pn$; in contrast to our general setting of constant aspect ratio $\gamma$, in this discussion the subset size $k$ and the dimension $m$ are held constant. Let us consider the expected average cross correlation \eqref{rms WB} of a random subset from $F$,
\begin{equation} \label{rms FP def}
I^2_{rms}(FP) = \frac{1}{(k-1)k}\Ev \left[{\sum_{i_1\neq i_2 \in S} |c_{i_1,i_2}|^2 }\right]
\end{equation}
where $S \subseteq \{1,\dots,n\}$ is the subset of selected indices
(the $i$'s for which $P_{i,i}=1$).
Note that we normalize by the expected subset size ($k$).
In view of the definition of the second moment $m_2$ in \eqref{moment d},
\begin{equation} \label{rms FP}
I^2_{rms}(FP) = \frac{\frac{m_2}{p} - 1}{k-1} \ge \frac{\frac{k}{m} - \frac{k}{n}}{k-1},
\end{equation}
where the lower bound follows from Theorem \ref{th1}.
Note that the Welch bound \eqref{rms WB} corresponds to the case $n=k$ and equals $\frac{\frac{k}{m} - 1}{k-1}$, while for $n>k$ the lower bound above {\em increases}, approaching $\frac{k}{(k-1)m}$ as $n\to \infty$. Thus, the new bound accounts for the penalty in the rms cross correlation due to randomly choosing the vectors from a fixed larger set of vectors \cite{rupf1994optimum}.
As $n\to \infty$ (i.e. $p\to0$), this bound amounts to choosing the $k$ vectors uniformly over the unit sphere.
Another interesting point of view is provided by random matrix theory. The penalty in the erasure Welch bound corresponds to the increase in the MANOVA second moment $m^{\rm MANOVA}(\gamma,p,2)$ as $p$ decreases from 1 to 0; in the limit as $p\to 0$, it becomes the second moment of the Mar\v{c}enko-Pastur distribution of an i.i.d. matrix \cite{tulino2004random}.
\section{Proof of Theorem \ref{th1}}
We show by induction that
\begin{equation} \label{XX'k}
\begin{split}
&\left((XX')^k\right)_{j_1,j_{k+1}} = \sum_{j_2,\dots,j_k}^{m}\sum_{i_1,\dots,i_k}^{n}F_{j_1,i_1}P_{i_1,i_1}F'_{i_1,j_2}\cdot\\&
F_{j_2,i_2}P_{i_2,i_2}F'_{i_2,j_3}\cdots F_{j_k,i_k}P_{i_k,i_k}F'_{i_k,j_{k+1}} \,\,;\,\, \text{for}\,\, k=1,2,\dots
\end{split}
\end{equation}
The induction basis ($k=1$) trivially holds:
\begin{equation} \label{XX'1}
\begin{split}
&\left(XX'\right)_{j_1,j_2} = \sum_{i_1=1}^{n}F_{j_1,i_1}P_{i_1,i_1}F'_{i_1,j_2}
\end{split}
\end{equation}
For the induction step, let us assume that \eqref{XX'k} holds for $k=d$ and show it for $k=d+1$:
\begin{equation} \label{XX'd+1}
\begin{split}
&\left((XX')^{d+1}\right)_{j_1,j_{d+2}} = \sum_{j_{d+1}}^{m}\left((XX')^d\right)_{j_1,j_{d+1}}\left(XX'\right)_{j_{d+1},j_{d+2}}\\&= \sum_{j_2,\dots,j_{d+1}}^{m}\sum_{i_1,\dots,i_{d+1}}^{n}F_{j_1,i_1}P_{i_1,i_1}F'_{i_1,j_2}\cdot\\&
F_{j_2,i_2}P_{i_2,i_2}F'_{i_2,j_3}\cdots F_{j_{d+1},i_{d+1}}P_{i_{d+1},i_{d+1}}F'_{i_{d+1},j_{d+2}}
\end{split}
\end{equation}
From \eqref{XX'k} it follows that
\begin{equation} \label{tr XX'k}
\begin{split}
&\tr\left((XX')^k\right) = \sum_{j_1}\left((XX')^k\right)_{j_1,j_1}\\&=\sum_{j_1,\dots,j_k}^{m}\sum_{i_1,\dots,i_k}^{n}F_{j_1,i_1}P_{i_1,i_1}F'_{i_1,j_2}\cdot\\&
F_{j_2,i_2}P_{i_2,i_2}F'_{i_2,j_3}\cdots F_{j_k,i_k}P_{i_k,i_k}F'_{i_k,j_1}\\&=\sum_{j_1,\dots,j_k}^{m}\sum_{i_1,\dots,i_k}^{n}F_{j_1,i_1}F^*_{j_2,i_1}\cdot\\&
F_{j_2,i_2}F^*_{j_3,i_2}\cdots F_{j_k,i_k}F^*_{j_1,i_k}P_{i_1,i_1}P_{i_2,i_2}\cdots P_{i_k,i_k}.
\end{split}
\end{equation}
For the $d$-th order, we can sum over $j_1,\dots,j_d$ (row indices) and use \eqref{corr}, to obtain the following chain of correlations:
\begin{equation} \label{tr XX'd}
\begin{split}
&\frac{1}{n}\tr\left((XX')^d\right)
=\frac{1}{n}
\sum_{i_1,\dots,i_d}^{n}c_{i_1,i_2}c_{i_2,i_3}\cdots c_{i_d,i_1}P_{i_1,i_1}\cdots P_{i_d,i_d}
\end{split}
\end{equation}
In order to take the expectation, we break the sum into cases according to the possible combinations of distinct or equal indices. When the number of distinct values in $i_1,\dots,i_d$ is $k$, $\Ev \left[P_{i_1,i_1}\cdots P_{i_d,i_d}\right]=p^k$. The sum of $\frac{1}{n}c_{i_1,i_2}c_{i_2,i_3}\cdots c_{i_d,i_1}$ over all such combinations is denoted by $a_{d,k}(F)$.
Note that for $k=1$, $a_{d,1}(F)=\frac{1}{n}\sum_{i_1=\dots =i_d=i}^{n}c_{i,i}^d=1$. Hence, $m_d$ can be written in the following form:
\begin{equation} \label{md poly of p}
\begin{split}
m_d = p+p^2a_{d,2}(F)+p^3a_{d,3}(F)+\cdots+p^da_{d,d}(F)
\end{split}
\end{equation}
where $a_{d,d}(F)$ is of special interest, and corresponds to the cycle of correlations over all-distinct indices:
\begin{equation} \label{a_dd(F)}
\begin{split}
a_{d,d}(F) = \frac{1}{n} \sum_{i_1\neq i_2\neq i_3\neq \cdots \neq i_d}^{n}c_{i_1,i_2}c_{i_2,i_3}\cdots c_{i_d,i_1}
\end{split}
\end{equation}
We now turn to consider each of the special cases $d=2,3,4$.
\\ \textbf{Second moment}:
According to \eqref{md poly of p} we have
\begin{equation} \label{m2}
\begin{split}
&m_2 =p+p^2a_{2,2}(F)
\end{split}
\end{equation}
where $a_{2,2}(F)$ corresponds to the cases with $i_1\neq i_2$:
\begin{equation} \label{a22}
\begin{split}
&a_{2,2}(F)=\frac{1}{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\ge x
\end{split}
\end{equation}
where the inequality is due to the rms Welch bound \eqref{WB x}, and satisfied with equality iff $F$ is a UTF.
From \eqref{m2} and \eqref{a22},
\begin{equation} \label{m2 bound}
\begin{split}
&m_2 \ge p+p^2x=m^{\rm MANOVA}(\gamma,p,2).
\end{split}
\end{equation}
\\
\textbf{Third moment}:
According to \eqref{md poly of p},
\begin{equation} \label{m3 calc}
\begin{split}
m_3 &= p + p^2a_{3,2}(F) + p^3a_{3,3}(F).
\end{split}
\end{equation}
$a_{3,2}(F)$ includes all combinations of 2 distinct values for $i_1,i_2,i_3$:
\begin{equation} \label{a32}
\begin{split}
a_{3,2}(F)&=\frac{1}{n}3\sum_{i_1=i_2}^{n}\sum_{i_3\neq i_1}^{n}c_{i_1,i_1}c_{i_1,i_3}c_{i_3,i_1}\\&=\frac{1}{n}3\sum_{i_3\neq i_1}^{n}|c_{i_1,i_3}|^2=3a_{2,2}(F),
\end{split}
\end{equation}
where we used $c_{i,i}=1$ and \eqref{a22}.
Since \eqref{m3 calc} holds for every $p$, we can set $p=1$ and use \eqref{a32} and Lemma \ref{lemma1} for $d=3$ to obtain:
\begin{equation} \label{m3_p=1}
\begin{split}
1 + 3a_{2,2}(F)+a_{3,3}(F)\ge \left(\frac{n}{m}\right)^2=(x+1)^2
\end{split}
\end{equation}
From \eqref{m3 calc}, \eqref{a32} and \eqref{m3_p=1},
\begin{equation} \label{m3 bound}
\begin{split}
&m_3\ge p + p^23a_{2,2}(F)+p^3((x+1)^2-1-3a_{2,2}(F)) \\&= (p-p^3)+(p^2-p^3)3a_{2,2}(F)+p^3(x+1)^2
\end{split}
\end{equation}
Since $p\le 1$, we have $p^2-p^3\ge 0$, and we can use \eqref{a22} to get a lower bound on the third moment of a unit norm frame:
\begin{equation} \label{m3 bound2}
\begin{split}
&m_3\ge(p-p^3)+(p^2-p^3)3x+p^3(x+1)^2\\&=p + p^2 3x + p^3 (x^2-x) =m^{\rm MANOVA}(\gamma,p,3)
\end{split}
\end{equation}
and the condition for equality in both \eqref{a22} and \eqref{m3_p=1} is that the frame is a UTF.
\\ \textbf{Fourth moment}:
According to \eqref{md poly of p},
\begin{equation} \label{m4 poly of p}
\begin{split}
m_4 &= p + p^2a_{4,2}(F) + p^3a_{4,3}(F) + p^4a_{4,4}(F).
\end{split}
\end{equation}
Denote $h(\{i_l\}_{l=1}^4)=c_{i_1,i_2}c_{i_2,i_3}c_{i_3,i_4}c_{i_4,i_1} $.
Considering all partitions of $\{i_l\}_{l=1}^4$ into 2 groups (2 distinct values), we get:
\[
a_{4,2}=\underbrace{4\frac{1}{n}\sum_{i_2=i_3=i_4\neq i_1}^{n}h}_{a^{(1)}_{4,2}}
+\underbrace{2\frac{1}{n}\sum_{i_1=i_2\neq i_3=i_4}^{n}h}_{a^{(2)}_{4,2}}
+\underbrace{\frac{1}{n}\sum_{i_1=i_3\neq i_2=i_4}^{n}h}_{a^{(3)}_{4,2}}
\]
where $a^{(1)}_{4,2}$ corresponds to partitions consisting of 3 identical indices and 1 different one ($i_1$, $i_2$, $i_3$ or $i_4$); $a^{(2)}_{4,2}$ corresponds to partitions consisting of 2 distinct, non-crossing pairs of indices ($i_1=i_2,i_3=i_4$ or $i_2=i_3,i_4=i_1$); and $a^{(3)}_{4,2}$ corresponds to the partition consisting of 2 distinct, crossing pairs of indices ($i_1=i_3,i_2=i_4$).
We now derive the three components:
\begin{align} \label{a42 components}
&a^{(1)}_{4,2}=4\frac{1}{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2=4a_{2,2}(F)\\
&a^{(2)}_{4,2}=2\frac{1}{n}\sum_{i_3\neq i_1}^{n}|c_{i_1,i_3}|^2=2a_{2,2}(F)\\
\label{a42_3}
&a^{(3)}_{4,2}=\frac{1}{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^4
\end{align}
We lower bound $a^{(3)}_{4,2}$ by Jensen's inequality:
\begin{equation} \label{a42_3_bound}
\begin{split}
\frac{1}{n(n-1)}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^4\ge \left(\frac{1}{n(n-1)}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\right)^2
\end{split}
\end{equation}
which is achieved with equality iff all absolute correlations are equal, as is the case when $F$ is an ETF. Hence, from \eqref{a42_3} and \eqref{a42_3_bound}:
\begin{equation} \label{a42_3 bound2}
a^{(3)}_{4,2}\ge \frac{1}{n-1}\left(\frac{1}{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\right)^2\ge \frac{x^2}{n-1}
\end{equation}
where the second inequality follows from Welch bound \eqref{WB x}.
Considering all partitions of $\{i_l\}_{l=1}^4$ into 3 groups, i.e. 3 distinct values, we get:
\[
a_{4,3}=\underbrace{4\frac{1}{n}\sum_{i_1=i_2\neq i_3\neq i_4}^{n}h}_{a^{(1)}_{4,3}}
+\underbrace{2\frac{1}{n}\sum_{i_1=i_3\neq i_2\neq i_4}^{n}h}_{a^{(2)}_{4,3}}
\]
where $a^{(1)}_{4,3}$ corresponds to partitions consisting of one pair of adjacent identical indices and 2 other distinct values ($i_1=i_2$, $i_2=i_3$, $i_3=i_4$ or $i_4=i_1$), and $a^{(2)}_{4,3}$ corresponds to partitions consisting of one pair of opposite identical indices and 2 other distinct values ($i_1=i_3$ or $i_2=i_4$).
We now derive these two components:
\begin{align} \label{a43 components}
&a^{(1)}_{4,3}=4\frac{1}{n}\sum_{i_2\neq i_3\neq i_4}^{n}c_{i_2,i_3}c_{i_3,i_4}c_{i_4,i_2}=4a_{3,3}(F)\\
&a^{(2)}_{4,3}=2\frac{1}{n}\sum_{i_1\neq i_2\neq i_4}^{n}|c_{i_1,i_2}|^2|c_{i_1,i_4}|^2
\end{align}
Denote by $C_{i_1}$ the sum of all squared absolute correlations between $f_{i_1}$ and the other frame vectors:
\begin{equation} \label{C_i}
\begin{split}
C_{i_1} = \sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2
\end{split}
\end{equation}
We derive a lower bound on the sum $a^{(3)}_{4,2}+\frac{1}{2}a^{(2)}_{4,3}$:
\begin{equation} \label{b2_bound}
\begin{split}
&\frac{1}{2}a^{(2)}_{4,3}
=\frac{1}{n}\sum_{i_1}^{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\sum_{i_4\neq i_2,i_1}^{n}|c_{i_1,i_4}|^2\\&=\frac{1}{n}\sum_{i_1}^{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2\left[C_{i_1}-|c_{i_1,i_2}|^2\right]\\&=\frac{1}{n}\sum_{i_1}^{n}C_{i_1}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2-\frac{1}{n}\sum_{i_1}^{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^4\\&=\frac{1}{n}\sum_{i_1}^{n}C_{i_1}^2-a^{(3)}_{4,2}
\end{split}
\end{equation}
so that
\begin{equation} \label{b1+b2}
\begin{split}
a^{(3)}_{4,2}+\frac{1}{2}a^{(2)}_{4,3} &= \frac{1}{n}\sum_{i_1}^{n}C_{i_1}^2 \ge \left(\frac{1}{n}\sum_{i_1}^{n}C_{i_1}\right)^2
\ge x^2
\end{split}
\end{equation}
where the first inequality is again due to Jensen, achieved with equality iff the $C_i$ are equal for all $i$, and the second inequality is the Welch bound \eqref{WB x}.
Combining all terms we have
\begin{equation} \label{m4 total}
\begin{split}
&m_4=p+p^2(6a_{2,2}+a^{(3)}_{4,2})+p^3(4a_{3,3}+a^{(2)}_{4,3})+p^4a_{4,4}
\end{split}
\end{equation}
Now we repeat the procedure from the bound on $m_3$, sequentially substituting the bounds above and gathering similar terms.
We set $p=1$ in \eqref{m4 total} and use Lemma \ref{lemma1}:
\begin{equation} \label{a44 bound}
\begin{split}
&a_{4,4}\ge (x+1)^3-1-6a_{2,2}-4a_{3,3}-a^{(3)}_{4,2}-a^{(2)}_{4,3}
\end{split}
\end{equation}
Substituting \eqref{a44 bound} into \eqref{m4 total} we get:
\begin{equation} \label{m4 bound2}
\begin{split}
&m_4\ge p-p^4+p^4(x+1)^3+(p^2-p^4)6a_{2,2}\\&+(p^2-p^4)a^{(3)}_{4,2}+(p^3-p^4)a^{(2)}_{4,3}+(p^3-p^4)4a_{3,3}.
\nonumber
\end{split}
\end{equation}
Since $(p^3-p^4)\ge 0$, we can substitute \eqref{m3_p=1} in the form $4a_{3,3}\ge 4(x+1)^2-4-12a_{2,2}$ and get:
\begin{equation} \label{m4 bound3}
\begin{split}
&m_4\ge p-p^4+p^4(x+1)^3+(p^3-p^4)\left(4(x+1)^2-4\right)\\&+p^2(1-p)^2 6a_{2,2}+(p^2-p^4)a^{(3)}_{4,2}+(p^3-p^4)a^{(2)}_{4,3}
\end{split}
\end{equation}
and now we use the bound \eqref{a22} on $a_{2,2}$.
The last two terms can be reorganized as a combination of $a^{(3)}_{4,2}$ and $a^{(3)}_{4,2}+\frac{1}{2}a^{(2)}_{4,3}$, for which we have bounds:
\begin{equation} \label{m4 bound4}
\begin{split}
&(p^2-p^4)a^{(3)}_{4,2}+(p^3-p^4)a^{(2)}_{4,3} \\&=(p^3-p^4)2(a^{(3)}_{4,2}+\frac{1}{2}a^{(2)}_{4,3})+p^2(1-p)^2a^{(3)}_{4,2}
\end{split}
\end{equation}
Now we can apply \eqref{a42_3 bound2} and \eqref{b1+b2}:
\begin{equation} \label{m4 bound5}
\begin{split}
&m_4\ge p+p^2(6x+\frac{1}{n-1}x^2)+p^3(6x^2-4x-2\frac{1}{n-1}x^2)\\&+p^4(x^3-3x^2+x+\frac{1}{n-1}x^2)\\&=m^{\rm MANOVA}(\gamma,p,4)+p^2(1-p)^2\frac{x^2}{n-1}
\end{split}
\end{equation}
with equality iff $F$ is an ETF.
Note that the asymptotic lower bound $\lim\limits_{n\to \infty}m_4\ge m^{\rm MANOVA}(\gamma,p,4)$ holds with equality under the weaker conditions that $F$ is a UTF, that $C_{i_1} = \sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^2$ is the same for all $i_1$, and that $\frac{1}{n}\sum_{i_2\neq i_1}^{n}|c_{i_1,i_2}|^4\to 0$ as $n\to \infty$; i.e., being an ETF is sufficient but not necessary.
$\Box$
\section{Discussion and Future Work}\label{discussion}
We are currently working to extend our results to higher-order moments $d$. For example, we already found the asymptotic form of the moments of orders $d=5,6$ of subsets of ETFs, and verified that they agree with those of MANOVA. Furthermore, we developed a recursive procedure which allows one to continue to higher-order moments. A complete computation of all the moments would provide formal validation of some of the empirical results reported in \cite{haikin2017random}, and specifically, that the singular values of random subsets of an ETF asymptotically follow Wachter's MANOVA distribution.
The performance of analog coding \cite{haikin2016analog,ITA17} relies on yet another figure of merit of frame subsets, namely the harmonic-to-arithmetic means ratio of the singular values of the subframe covariance matrix. In our standing notation, this quantity is equivalent to the first inverse moment, $d=-1$. Extension of the Erasure Welch Bound to higher-order moments and to $d=-1$ would establish that an ETF is the most robust frame under inversion of subsets.
A more complete description of these extensions will
appear elsewhere.
\section*{Acknowledgment}
We would like to thank Ofer Zeitouni for proposing the moment method for analyzing subsets of ETF. We also thank Benny Zaidel for a helpful discussion. This work has been partially supported by the Israeli Science Foundation grants no. 1523/16, 676/15.
\bibliographystyle{IEEEtran}
| {
"timestamp": "2018-01-16T02:08:24",
"yymm": "1801",
"arxiv_id": "1801.04548",
"language": "en",
"url": "https://arxiv.org/abs/1801.04548",
"abstract": "The Welch Bound is a lower bound on the root mean square cross correlation between $n$ unit-norm vectors $f_1,...,f_n$ in the $m$ dimensional space ($\\mathbb{R} ^m$ or $\\mathbb{C} ^m$), for $n\\geq m$. Letting $F = [f_1|...|f_n]$ denote the $m$-by-$n$ frame matrix, the Welch bound can be viewed as a lower bound on the second moment of $F$, namely on the trace of the squared Gram matrix $(F'F)^2$. We consider an erasure setting, in which a reduced frame, composed of a random subset of Bernoulli selected vectors, is of interest. We extend the Welch bound to this setting and present the {\\em erasure Welch bound} on the expected value of the Gram matrix of the reduced frame. Interestingly, this bound generalizes to the $d$-th order moment of $F$. We provide simple, explicit formulae for the generalized bound for $d=2,3,4$, which is the sum of the $d$-th moment of Wachter's classical MANOVA distribution and a vanishing term (as $n$ goes to infinity with $\\frac{m}{n}$ held constant). The bound holds with equality if (and for $d = 4$ only if) $F$ is an Equiangular Tight Frame (ETF). Our results offer a novel perspective on the superiority of ETFs over other frames in a variety of applications, including spread spectrum communications, compressed sensing and analog coding.",
"subjects": "Information Theory (cs.IT)",
"title": "Frame Moments and Welch Bound with Erasures",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877033706601,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7089594706582566
} |
https://arxiv.org/abs/2208.04490 | Homotopy techniques for analytic combinatorics in several variables | We combine tools from homotopy continuation solvers with the methods of analytic combinatorics in several variables to give the first practical algorithm and implementation for the asymptotics of multivariate rational generating functions not relying on a non-algorithmically checkable `combinatorial' non-negativity assumption. Our homotopy implementation terminates on examples from the literature in three variables, and we additionally describe heuristic methods that terminate and correctly predict asymptotic behaviour in reasonable time on examples in even higher dimension. Our results are implemented in Julia, through the use of the HomotopyContinuation.jl package, and we provide a selection of examples and benchmarks. | \section*{}
Let $(f_n)_{n\in\N}=f_0,f_1,\dots$ be a complex-valued sequence with \emph{generating function} $F(z) = \sum_{n\geq0}f_nz^n$. Although $F$ is a priori only a \emph{formal} power series, in a wide variety of applications (in fact, whenever $f_n$ has at most exponential growth) it represents an analytic function in a neighbourhood of the origin. The field of \emph{analytic combinatorics} creates effective techniques to determine the asymptotic behaviour of $f_n$ through a study of the analytic behaviour of $F(z)$. Most classical methods in analytic combinatorics take as input an algebraic or differential equation satisfied by $F(z)$ and, when successful, return the leading terms in an asymptotic expansion of $f_n$ (see~\cite{FlajoletSedgewick2009} or~\cite[Chapter 2]{Melczer2021}).
More recently, a theory of \emph{analytic combinatorics in several variables (ACSV)}~\cite{Melczer2021,PemantleWilson2013} has been developed to translate the analytic behaviour of a $d$-variate generating function
\[ F(\mathbf{z}) = \sum_{\mathbf{i}\in\N^d}f_\mathbf{i}\mathbf{z}^\mathbf{i} := \sum_{\mathbf{i}\in\N^d}f_{i_1,\dots,i_d}z_1^{i_1}\cdots z_d^{i_d} \]
into asymptotic information about its coefficient sequence $(f_{\mathbf{i}})_{\mathbf{i}\in\N^d}$. In this paper we focus on the case of a power series expansion of a multivariate rational function $F(\mathbf{z})=G(\mathbf{z})/H(\mathbf{z})$ and attempt to determine asymptotics of the \emph{$\mathbf{r}$-diagonal} sequence $(f_{n\mathbf{r}})_{n\in\N}$ for a fixed \emph{direction vector} $\mathbf{r}\in\Z_{>0}^d$. The most common situation to arise in practice is the \emph{main diagonal}, when $\mathbf{r}=\one$.
\begin{remark}
If $\mathbf{r}$ has some zero coordinates then we can reduce to the above situation by setting some of the variables equal to zero and working in a lower dimension. For instance, the $(0,r_2,r_3)$-diagonal of any series $F(x,y,z)$ is the $(r_2,r_3)$-diagonal of $F(0,y,z)$. Furthermore, our asymptotic statements continue to hold for directions $\mathbf{r}\in\Q_{>0}^d$ if they are interpreted to be valid only when $n\mathbf{r}\in\N^d$. In fact, the methods of ACSV show that asymptotics of the $\mathbf{r}$-diagonal usually vary smoothly with $\mathbf{r}$, allowing one to give a natural interpretation of asymptotics in irrational directions and derive central limit theorems~\cite[Section 5.3.3]{Melczer2021}.
\end{remark}
\begin{remark}
Because the methods of ACSV hold in any dimension, our requirement that $F(\mathbf{z})$ be rational is less restrictive than it may seem. For instance, the $\mathbf{r}$-diagonal of an algebraic function in $d$ variables can be represented~\cite[Section 3.2.2]{Melczer2021} as the diagonal of a rational function in $2d$ variables (and a `skew-diagonal' of a rational function in $d+1$ variables). The theoretical results discussed here also hold for \emph{meromorphic} functions, when $F(\mathbf{z})$ is (locally) the ratio of analytic functions, however our restriction to rational functions allows us to stay in the realm of algebraic quantities and polynomial systems, which we use for our explicit algorithms.
\end{remark}
There are many factors making ACSV more complicated than its univariate counterpart. Although a univariate rational function has a finite number of singularities, meaning one can determine the `asymptotic contribution' of each and simply sum those with the fastest growth, any (non-polynomial) rational function in at least two variables must have an infinite number of singularities. In addition to obscuring which singularities contribute to asymptotics, this also means that the singular set can have \emph{non-trivial geometry}, for instance by self-intersecting. The difficulties that arise mean that, unlike the univariate case, which relies on standard complex-analytic results going back hundreds of years, the most advanced ACSV results rely on sophisticated techniques from areas of mathematics as diverse as complex analysis in several variables, the study of singular integrals, algebraic geometry, differential geometry, and topology.
The starting point of an ACSV analysis expresses the $\mathbf{r}$-diagonal of $F(\mathbf{z})$ as a $d$-dimensional complex integral. In the simplest cases, asymptotic behaviour is determined by the behaviour of $F$ near two types of points: \emph{critical points}, defined by an explicit polynomial system, and \emph{minimal points}, which are singularities that are coordinate-wise closest to the origin. Critical points satisfy a square polynomial system, and generically form a finite set that can be manipulated in a computer algebra system. In contrast, there are always an infinite number of minimal points, which are defined by inequalities involving the moduli of coordinates and are thus trickier and more expensive to manipulate in computations.
\subsection{Previous Work and Our Contributions}
From the beginning of its modern period in work of Pemantle and Wilson~\cite{PemantleWilson2002}, the goal of ACSV has always been to develop methods explicit enough to be implemented in a computer algebra system. The `surgery' approach of~\cite{PemantleWilson2002}, which applies to generating functions with \emph{smooth} singular sets that form manifolds, essentially computes a residue in one variable to obtain a $(d-1)$-dimensional integral that is approximated using the saddle-point method. Although this surgery method does not require much theory beyond univariate analytic combinatorics, it requires strong conditions on the locations of minimal points that can be computationally expensive to verify. Later techniques, using \emph{cones of hyperbolicity}~\cite{BaryshnikovPemantle2011} and multivariate residue and homology computations~\cite{BaryshnikovMelczerPemantle2022}, rely on more advanced theory but simplify the assumptions that need to be verified for the results to hold. In the simplest cases, which hold for the majority of examples encountered in combinatorial applications, it suffices to determine which of the critical points are minimal and then add explicit asymptotic contributions corresponding to the (finite number of) minimal critical points. The most expensive step in such an analysis is almost always checking minimality.
The first systematic algorithmic study of ACSV methods was conducted by Melczer and Salvy~\cite{MelczerSalvy2021}, who encoded critical points using a symbolic-numeric data structure known as a \emph{Kronecker} or \emph{rational univariate} representation and then reduced checking minimality to rigorously approximating the roots of certain univariate polynomials to sufficiently high accuracy. Those authors created a preliminary implementation of their work, which does not certify numeric computations to provide rigorous proofs and requires \emph{combinatorial} rational functions, in the Maple computer algebra system. A rational function $F(\mathbf{z})$ is \emph{combinatorial} if all of its power series coefficients are non-negative: although this condition is satisfied for any multivariate generating function, in many combinatorial examples only one diagonal of $F$ enumerates a combinatorial class and the non-diagonal entries have negative coefficients. It is an open problem, even in the univariate case, whether combinatoriality of a rational function can be decided algorithmically (see~\cite{OuaknineWorrell2014} for some open problems in this area). Although Melczer and Salvy~\cite{MelczerSalvy2021} detail a method that, in principle, yields an algorithm for asymptotics that does not require combinatoriality, in practice an implementation in Maple would not halt in reasonable time beyond low degree examples in two or three variables.
Instead of continuing with the Kronecker representation approach of Melczer and Salvy, in this paper we exploit homotopy continuation methods to certify minimality of critical points, and ultimately determine asymptotics of $\mathbf{r}$-diagonals of rational functions. Using the \textsc{HomotopyContinuation.jl} Julia package~\cite{breiding2018homotopycontinuation} for polynomial system solving, we provide the first implementation of ACSV methods under assumptions that often hold in practice. Our implementation is efficient enough to work even without the assumption of combinatoriality, although when the user knows a priori that their input rational function is combinatorial then the computation is greatly reduced. In addition, we describe two heuristic methods to classify minimal critical points using numerical approximations that are extremely efficient, and are the only implemented algorithms we currently know of that can aid in the search for minimal points in more than three variables.
\begin{example}
The main diagonal of the power series expansion of
\[ F(x,y,z)=\frac{1}{1-(1+z)(x+y-xy)}\]
is related to a result of Apéry~\cite{Apery1983} on the irrationality measure of $\zeta(2)$. After importing our package we define the denominator polynomial in Julia using
\begin{code}
@polyvar x y z
H = 1-(1+z)*(x+y-x*y)
\end{code}
If we know that this power series expansion is combinatorial, then we can get the (truncated for clarity) minimal critical point
\begin{code}
min_cp = find_min_crits_comb(H)
\end{code}
\begin{codeout}
Out: 1-element Vector{Vector{ComplexF64}}:
[0.38 + e-39im,0.38 + e-38im,0.61 - e-38im]
\end{codeout}
and print out the leading asymptotic term of the diagonal with
\begin{code}
leading_asymptotics(1,H,min_cp)
\end{code}
\begin{codeout}
Out: "(0.09+6.2e-39im)^(-n)n^(-1)(0.47-5.7e-40im)"
\end{codeout}
It is not obvious from the definition that $F$ is combinatorial. If we don't know our function is combinatorial then we can determine minimality by running \lstinline[style=code]{find_min_crits(H)}, which returns the same point but requires approximately \emph{15 minutes} of computation. If we want to heuristically check for minimal critical points, but don't know that $F$ is combinatorial and don't want to wait for the full algorithm, we can run the algorithms \lstinline[style=code]{find_min_crits(H; approx_crit=true)} or \lstinline[style=code]{find_min_crits(H; monodromy=true)}, described below, which also find the correct point and finish in seconds.
\end{example}
\begin{remark}
Because we use numeric methods, asymptotic behaviour is returned with numeric approximations of constants. If the user wants to determine the algebraic quantities involved exactly, we recommend solving for the critical points symbolically (a relatively cheap operation) using another computer algebra system, such as Sage or Maple, and then using the results of this package to filter out the minimal ones (the most expensive operation).
\end{remark}
The rest of this paper proceeds as follows. Section~\ref{sec:ACSV} gives a quick recap of the methods of ACSV and the high-level problems that need to be decided to find asymptotics, with a description of numerical algebraic geometry methods for polynomial system solving given in Section~\ref{sec:NAG}. Section~\ref{sec:Julia} uses this background material to detail our \textsc{ACSVHomotopy.jl} Julia package, while Section~\ref{sec:examples} illustrates the package on a wide variety of combinatorial examples, including benchmarks between different algorithms. Although our algorithms always terminate, due to the nature of homotopy continuation methods they may not always provide a rigorous proof of asymptotics -- Section~\ref{sec:certification} discusses this issue and describes situations in which the algorithms do give rigorous proofs. Finally, Section~\ref{sec:future} concludes with some extensions that we believe should be addressed next.
\section{Smooth ACSV}
\label{sec:ACSV}
From now on, $F(\mathbf{z})=G(\mathbf{z})/H(\mathbf{z})$ denotes a ratio of $d$-variate coprime polynomials $G,H \in \Z[\mathbf{z}]$ with power series expansion $F(\mathbf{z}) = \sum_{\mathbf{i}\in\N^d}f_{\mathbf{i}}\mathbf{z}^{\mathbf{i}}$ converging around the origin, and $\mathbf{r} \in \Z_{>0}^d$ is a fixed direction vector.
\begin{definition}[minimal critical points]
A point $\mathbf{w} \in \C_*^d$ is a \emph{(simple) smooth critical point} of $F$ if $(\nabla H)(\mathbf{w})\neq\zero$ and
\begin{equation}
\left\{\begin{array}{l}
H(\mathbf{w})=0\\
r_kw_1H_{z_1}(\mathbf{w}) - r_1w_kH_{z_k}(\mathbf{w}) = 0 \;~(2\leq k\leq d).
\end{array}\right. \label{eq:CP}
\end{equation}
We call $\mathbf{w} \in \C_*^d$ a \emph{minimal point} if $H(\mathbf{w})=0$ and there does not exist $\mathbf{y}\in\C^d$ such that $H(\mathbf{y})=0$ and $|y_j|<|w_j|$ for all $j=1,\dots,d$.
\end{definition}
\begin{remark}
If $(\nabla H)(\mathbf{w})=\zero$ then~\eqref{eq:CP} is trivially satisfied. If the gradient vanishes because $H$ has a higher-order pole (for instance, if $H=P^2$ for some polynomial $P$) then our analysis of minimal critical points can be performed on the square-free part of $H$ (the product of its irreducible factors) to obtain an asymptotic expansion of $f_{n\mathbf{r}}$ with minor modifications. On the other hand, if the gradient vanishes because the zero set of $H$ self-intersects then more advanced techniques are required~\cite[Part III]{Melczer2021}.
\end{remark}
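To make \eqref{eq:CP} concrete, here is a minimal sketch (ours, independent of the package; it assumes the \lstinline[style=code]{@var}/\lstinline[style=code]{System} interface of \textsc{HomotopyContinuation.jl} v2, whereas the package examples above declare \lstinline[style=code]{@polyvar} polynomials) that assembles and solves the smooth critical point system for the Apéry example of the introduction in the main diagonal direction $\mathbf{r}=\one$.
\begin{code}
# Sketch (ours, not the package internals): solve the smooth critical
# point system for H = 1-(1+z)(x+y-xy) with r = (1,1,1).
using HomotopyContinuation
@var x y z
H = 1 - (1 + z)*(x + y - x*y)
Hx, Hy, Hz = differentiate(H, [x, y, z])
sys = System([H, x*Hx - y*Hy, x*Hx - z*Hz]; variables = [x, y, z])
crits = solutions(solve(sys))
println(crits)  # the minimal point (0.38..., 0.38..., 0.61...) from the
                # introduction appears among the finitely many solutions
\end{code}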
We will be able to determine asymptotics in the presence of smooth minimal critical points, assuming a nondegeneracy condition on the zero set of $H$.
\begin{definition}[phase Hessian matrix]
If $\mathbf{w}$ is a smooth critical point then the \emph{phase Hessian matrix} $\mathcal{H}$ at $\mathbf{w}$ is the $(d-1)\times(d-1)$ matrix defined by
\[
\mathcal{H}_{i,j} =
\begin{cases}
V_iV_j + U_{i,j} - V_jU_{i,d} - V_iU_{j,d} + V_iV_jU_{d,d} &: i \neq j \\[+3mm]
V_i + V_i^2 + U_{i,i} - 2V_iU_{i,d} + V_i^2U_{d,d} &: i=j
\end{cases}
\]
where
\[ U_{i,j} = \frac{w_iw_j H_{z_iz_j}(\mathbf{w})}{w_dH_{z_d}(\mathbf{w})} \qquad \text{and} \qquad V_i = \frac{r_i}{r_d}.\]
\end{definition}
\begin{theorem}[{Melczer~\cite[Theorem 5.1]{Melczer2021}}]
\label{thm:ACSV}
Suppose that the system of polynomial equations~\eqref{eq:CP} admits a finite number of solutions, exactly one of which, $\mathbf{w}\in\C_*^d$, is minimal. Suppose further that $H_{z_d}(\mathbf{w})\neq0$, that the phase Hessian matrix $\mathcal{H}$ at $\mathbf{w}$ has non-zero determinant, and that $G(\mathbf{w}) \neq 0$. Then, as $n\rightarrow\infty$,
{\small
\[
f_{n\mathbf{r}} = \mathbf{w}^{-n\mathbf{r}} n^{(1-d)/2} \frac{(2\pi r_d)^{(1-d)/2}}{\sqrt{\det(\mathcal{H})}} \frac{-G(\mathbf{w})}{w_d\, H_{z_d}(\mathbf{w})}\left(1 + O\left(\frac{1}{n}\right)\right).
\]
}
When the zero set of $H$ contains a finite number of points with the same coordinate-wise modulus as $\mathbf{w}$, all of which satisfy the same conditions as $\mathbf{w}$, then an asymptotic expansion of $f_{n\mathbf{r}}$ is obtained by summing the right hand side of this expansion at each point.
\end{theorem}
\begin{remark}
The condition that $G(\mathbf{w})\neq0$ means that the leading asymptotic term in Theorem~\ref{thm:ACSV} doesn't vanish. When $G(\mathbf{w})=0$ asymptotics can usually still be determined by computing higher-order terms using (increasingly complicated) explicit formulas.
\end{remark}
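\begin{example}
To illustrate how the quantities in Theorem~\ref{thm:ACSV} combine (a worked computation included here for orientation; the same function is analyzed with our package in Section~\ref{sec:Julia}), take $F(x,y)=1/(1-x-y)$ and $\mathbf{r}=(1,1)$. The system~\eqref{eq:CP} reads $1-x-y=0$ and $y-x=0$, so the unique critical point is $\mathbf{w}=(1/2,1/2)$, which is easily seen to be minimal. Since $H$ has no second-order derivatives, every $U_{i,j}$ vanishes, $V_1=1$, and the $1\times1$ phase Hessian is $\mathcal{H}=(V_1+V_1^2)=(2)$. Theorem~\ref{thm:ACSV} then gives
\[
f_{n,n} = 4^n\, n^{-1/2}\, \frac{(2\pi)^{-1/2}}{\sqrt{2}} \cdot \frac{-1}{(1/2)\cdot(-1)}\left(1 + O\left(\frac{1}{n}\right)\right) = \frac{4^n}{\sqrt{\pi n}}\left(1 + O\left(\frac{1}{n}\right)\right),
\]
the classical asymptotics of the central binomial coefficient $\binom{2n}{n}$.
\end{example}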
\subsection{Minimality Tests}
The hardest work in applying Theorem~\ref{thm:ACSV} is computing the critical points, defined implicitly by~\eqref{eq:CP}, and determining which, if any, are minimal.
\textbf{(Combinatorial Case)}
Recall that a function is called \emph{combinatorial} if its power series expansion contains only a finite number of negative coefficients. When $F$ is combinatorial there is a simple test for minimal critical points.
\begin{lemma}[Melczer and Salvy~\cite{MelczerSalvy2021}]
\label{lem:combmin}
Suppose $F$ has only a finite number of negative power series coefficients $f_\mathbf{i}$. If $\mathbf{y}\in\C_*^d$ is a minimal critical point then so is $(|y_1|,\dots,|y_d|)$. Furthermore, $\mathbf{w} \in \R_{>0}^d$ is a minimal critical point if and only if the system
\begin{equation}
\begin{split}
H(\mathbf{z}) = H(tz_1,\dots,tz_d) &= 0 \\
z_1H_{z_1}(\mathbf{z})-r_1\lambda = \cdots = z_dH_{z_d}(\mathbf{z})-r_d\lambda &= 0
\end{split}
\label{eq:extendedSys}
\end{equation}
has a solution $(\mathbf{z},\lambda,t) \in \R^{d+2}$ with $\mathbf{z}=\mathbf{w}$ and $t=1$ \emph{and} no solution with $\mathbf{z}=\mathbf{w}$ and $0 < t < 1$.
\end{lemma}
In the combinatorial case we first use Lemma~\ref{lem:combmin} to characterize the minimal critical points with positive coordinates by studying the solutions to~\eqref{eq:extendedSys}. From these we then find the solutions to~\eqref{eq:CP} with the same coordinate-wise moduli. The following algorithm summarizes this approach.
\begin{algorithm} [H]
Minimal Critical Points in the Combinatorial Case
\begin{enumerate}
\item Determine the set $S$ of zeros of the polynomial system (\ref{eq:extendedSys}) in the variables $\mathbf{z},\lambda,t$. If $S$ is not finite, FAIL.
\item Find $\boldsymbol{\zeta}\in \mathbb{R}^d_{>0}$ such that there exists $(\boldsymbol{\zeta},\lambda,t)\in S$ and for all such triples, $t\not\in (0,1)$. If the number of such $\boldsymbol{\zeta}$'s is not exactly $1$ or if there are such points with $\lambda=0$, FAIL.
\item Identify $\boldsymbol{\zeta}$ among the elements of the set $\mathcal{C}$ of zeros to (\ref{eq:CP}).
\item Return $$\{\mathbf{z}\in \mathbb{C}^d\mid \exists (\mathbf{z},\lambda)\in \mathcal{C},~|z_1|=|\zeta_1|,\cdots,|z_d|=|\zeta_d|\}.$$
\end{enumerate}
\end{algorithm}
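For illustration (a minimal sketch, independent of the \textsc{ACSVHomotopy.jl} implementation described in Section~\ref{sec:Julia}, using only the documented interface of \textsc{HomotopyContinuation.jl}), the system~\eqref{eq:extendedSys} for $H=1-x-y$ and $\mathbf{r}=(1,1)$ can be set up and solved directly as follows.
\begin{code}
# Sketch: the extended system (eq:extendedSys) for H = 1-x-y, r = (1,1)
using HomotopyContinuation
@var x y lambda t
H = 1 - x - y
sys = System([
    H,                                # H(z) = 0
    subs(H, x => t*x, y => t*y),      # H(tz) = 0
    x*differentiate(H, x) - lambda,   # z_1 H_{z_1} - r_1*lambda = 0
    y*differentiate(H, y) - lambda    # z_2 H_{z_2} - r_2*lambda = 0
])
real_solutions(solve(sys))
\end{code}
The only real solution has $\mathbf{z}=(1/2,1/2)$ and $t=1$; since no solution with $0<t<1$ occurs, Lemma~\ref{lem:combmin} shows that $(1/2,1/2)$ is minimal. Our implementation additionally certifies all solutions and excludes $t=1$ via an extra equation, as described in Section~\ref{sec:Julia}.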
\textbf{(General Case)}
If $F$ is not combinatorial, or if we don't know a priori that $F$ is combinatorial, then it is no longer sufficient to consider only the critical points with positive real coordinates to check minimality. In order to express the moduli of coordinates as algebraic equations, we write $H(\mathbf{x}+i\mathbf{y}) = H^{\mathfrak{R}}(\mathbf{x},\mathbf{y}) + iH^{\mathfrak{I}}(\mathbf{x},\mathbf{y})$ for real variables $\mathbf{x},\mathbf{y}\in\R^d$ and polynomials $H^{\mathfrak{R}},H^{\mathfrak{I}} \in \R[\mathbf{x},\mathbf{y}]$. Translating the smooth critical point equations~\eqref{eq:CP} into these new coordinates gives that $\mathbf{z}=\mathbf{a}+i\mathbf{b}$ with $\mathbf{a},\mathbf{b}\in\R^d$ is critical if and only if
\begin{align}
H^{\mathfrak{R}}(\mathbf{a},\mathbf{b}) = H^{\mathfrak{I}}(\mathbf{a},\mathbf{b}) &= 0 \label{eq:GenSys1} \\
a_j H^{\mathfrak{R}}_{x_j}(\mathbf{a},\mathbf{b}) + b_jH^{\mathfrak{R}}_{y_j}(\mathbf{a},\mathbf{b}) - r_j\lambda_R&=0 \label{eq:GenSys2} \\
a_j H^{\mathfrak{I}}_{x_j}(\mathbf{a},\mathbf{b}) + b_jH^{\mathfrak{I}}_{y_j}(\mathbf{a},\mathbf{b}) - r_j\lambda_I&=0 \label{eq:GenSys3}
\end{align}
for some $\lambda_R,\lambda_I \in \R$, where $1 \leq j \leq d$ in each equation. To test minimality of these critical points we add the equations
\begin{align}
H^{\mathfrak{R}}(\mathbf{x},\mathbf{y}) = H^{\mathfrak{I}}(\mathbf{x},\mathbf{y}) &= 0 \label{eq:GenSys4} \\
x_j^2 + y_j^2 - t(a_j^2+b_j^2) &= 0 \label{eq:GenSys5}
\end{align}
for $1 \leq j \leq d$, and verify there is no real solution to~\eqref{eq:GenSys1}-\eqref{eq:GenSys5} with $0 < t < 1$. Generically~\eqref{eq:GenSys1}-\eqref{eq:GenSys5} have a finite set of \emph{real} solutions, corresponding to the generically finite number of critical points of $F$, but because this system contains $3d+4$ equations in $4d+3$ variables it will never have a (non-zero) finite number of solutions over the complex numbers. By considering critical values of the projection map onto the $t$ coordinate, Melczer and Salvy~\cite{MelczerSalvy2021} proved that minimality can be tested by adding the additional equations
\[ (\nu_1y_j-\nu_2 x_j)H^{\mathfrak{R}}_{x_j}(\mathbf{x},\mathbf{y}) - (\nu_1x_j+\nu_2 y_j)H^{\mathfrak{R}}_{y_j}(\mathbf{x},\mathbf{y}) =0 \]
for $1 \leq j \leq d$. When $\nu_1 \neq 0$ then we can scale by $\nu_1$ and introduce the equations
\begin{equation}
(y_j-\nu x_j)H^{\mathfrak{R}}_{x_j}(\mathbf{x},\mathbf{y}) - (x_j+\nu y_j)H^{\mathfrak{R}}_{y_j}(\mathbf{x},\mathbf{y}) =0
\label{eq:GenSys6}
\end{equation}
to~\eqref{eq:GenSys1}-\eqref{eq:GenSys5}, resulting in a square system with $4d+4$ variables and equations. The case when $\nu_1=0$ is dealt with separately by adding the equations
\begin{equation*}
\hspace{0.65in}
-x_jH^{\mathfrak{R}}_{x_j}(\mathbf{x},\mathbf{y}) - y_jH^{\mathfrak{R}}_{y_j}(\mathbf{x},\mathbf{y}) = 0
\hspace{0.65in}
(\ref{eq:GenSys6}')
\end{equation*}
for $1 \leq j \leq d$. We determine the minimal critical points by finding $\mathbf{p} + i\mathbf{q}$ such that equations \eqref{eq:GenSys1}-\eqref{eq:GenSys3} have a real solution with $(\mathbf{a},\mathbf{b})=(\mathbf{p},\mathbf{q})$ but neither \eqref{eq:GenSys1}-\eqref{eq:GenSys6} nor \eqref{eq:GenSys1}-(\ref{eq:GenSys6}') have a real solution with $(\mathbf{a},\mathbf{b})=(\mathbf{p},\mathbf{q})$ and $0<t<1$.
This process provides the following algorithm.
\begin{algorithm}[H]
Minimal Critical Points in the Non-Combinatorial Case
\begin{enumerate}
\item Determine the set $S$ of zeros of the polynomial system (\ref{eq:GenSys1})-(\ref{eq:GenSys6}) in the variables $\mathbf{a},\mathbf{b},\mathbf{x},\mathbf{y},\lambda_R,\lambda_I,\nu,t$. If $S$ is not finite, FAIL.
\item Construct a set $\mathcal{U}$ of minimal critical points $\mathbf{a}+i\mathbf{b}\in \mathbb{C}^d$ such that there exists $(\mathbf{a},\mathbf{b},\mathbf{x},\mathbf{y},\lambda_R,\lambda_I,\nu,t)\in S\cap \mathbb{R}^{4d+4}$ and for all such tuples, $t\not\in (0,1)$. If either $\mathcal{U}$ is empty or one of its elements has $\lambda_I=\lambda_R=0$, or if the elements of $\mathcal{U}$ do not all belong to the same torus, FAIL.
\item Identify the elements of $\mathcal{U}$ within the set $\mathcal{C}$ of zeros to (\ref{eq:CP}) and return them.
\item Do the same for the polynomial system (\ref{eq:GenSys1})-(\ref{eq:GenSys6}') in the variables $\mathbf{a},\mathbf{b},\mathbf{x},\mathbf{y},\lambda_R,\lambda_I,t$.
\end{enumerate}
\end{algorithm}
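To make the change of coordinates concrete, consider the small example $H = 1-x-y$ with $\mathbf{r}=(1,1)$ (an illustration added here; it reproduces the critical point found in the combinatorial case above). Writing $z_j = a_j + ib_j$ gives $H^{\mathfrak{R}}(\mathbf{x},\mathbf{y}) = 1 - x_1 - x_2$ and $H^{\mathfrak{I}}(\mathbf{x},\mathbf{y}) = -y_1 - y_2$, so~\eqref{eq:GenSys2} and~\eqref{eq:GenSys3} reduce to $-a_j = \lambda_R$ and $-b_j = \lambda_I$ for $j=1,2$. Combined with~\eqref{eq:GenSys1} this forces $a_1=a_2=1/2$ and $b_1=b_2=0$, so the only critical point is $(1/2,1/2)$.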
Unfortunately, moving to $4d+4$ variables makes verifying minimality much less practical than in the combinatorial case. In essence, Lemma~\ref{lem:combmin} states that to prove minimality in the combinatorial case it is sufficient to consider specific line segments in $\R^d$, while to prove minimality in the general case one must consider a much larger set of points in $\C^d$ whose coordinate-wise moduli lie on specific line segments in $\R^d$.
\begin{remark}
Melczer and Salvy~\cite{MelczerSalvy2021} incorrectly state that $\nu_1$ and $\nu_2$ must both be non-zero: at least one is non-zero at the solutions of interest, but the other may vanish. This is why we introduce~(\ref{eq:GenSys6}').
Melczer and Salvy~\cite{MelczerSalvy2021} also require an extra condition that a certain Jacobian matrix is non-singular, however this is mainly required for their complexity analysis. If this condition fails then the system~\eqref{eq:GenSys1}-\eqref{eq:GenSys6} can have extra solutions that are irrelevant to detecting minimality, but the presence of such solutions does not affect correctness of the minimality test.
\end{remark}
\section{Numerical Algebraic Geometry}
\label{sec:NAG}
Having reduced the ACSV analysis to questions about polynomial systems, we now recall some methods in computational algebraic geometry for the study of such systems. Although the theory of Gr\"obner bases is, by now, the basis of much work in this area, more recently \emph{numerical algebraic geometry} has emerged as a practical alternative. In this section, we discuss several topics in numerical algebraic geometry that will be used for our techniques.
\subsection{Homotopy Continuation}
\label{sec:homotopy}
\emph{Homotopy continuation} is one method to find numerical approximations of solutions to an $n\times n$ square system $\mathcal{F}=(f_1,\dots, f_n)$ of polynomial equations with $n$ variables. From the system $\mathcal{F}$ we construct an $n\times n$ polynomial system $\mathcal{G}$ whose solutions are known a priori. The system $\mathcal{G}$ is called a \emph{start system} and the system $\mathcal{F}$ is called the \emph{target system}: connecting $\mathcal{F}$ and $\mathcal{G}$ using a homotopy $\mathcal{H}(x,t)$ such that $\mathcal{H}(x,0)=\mathcal{G}$ and $\mathcal{H}(x,1)=\mathcal{F}$, we obtain solutions of $\mathcal{F}$ by tracking homotopy paths from $t=0$ to $t=1$.
To track the homotopy paths, a numerical ordinary differential equation solving technique called the \emph{Davidenko equation} and Newton iteration are used. These tracking techniques are typically referred to as \emph{predictor-corrector} methods. For details, see \cite[Chapter 2]{SommeseWampler2005}. Homotopy continuation is implemented in \textsc{Bertini} \cite{BHSW06}, \textsc{HomotopyContinuation.jl} \cite{breiding2018homotopycontinuation}, and \textsc{NAG4M2} \cite{leykin2011numerical}.
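As a minimal illustration (a generic toy system, not tied to any ACSV computation), the following \textsc{HomotopyContinuation.jl} session solves a square system by constructing a start system and tracking its homotopy paths.
\begin{code}
using HomotopyContinuation
@var x y
F = System([x^2 + y^2 - 2, x - y])
result = solve(F)   # builds a start system and tracks its paths to F
solutions(result)   # the two solutions (1,1) and (-1,-1)
\end{code}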
\subsection{Polyhedral Homotopy Continuation}
\label{sec:phc}
The complexity of solving a polynomial system using homotopy continuation is determined by the number of homotopy paths to track. It is thus important to track at least as many paths as the system has solutions (so that all solutions can be found), but not too many more (to save computation). For polytopes $Q_1,\dots, Q_n$ the Euclidean volume $\text{Vol}(a_1Q_1+\cdots +a_nQ_n)$ of the \emph{Minkowski sum} $a_1Q_1+\cdots +a_nQ_n$ is a homogeneous polynomial in the $n$ variables $a_1,\dots, a_n$, whose coefficient of $a_1a_2\cdots a_n$ is the \emph{mixed volume} $\text{MVol}(Q_1,\dots, Q_n)$ of $Q_1,\dots, Q_n$.
\begin{theorem}[{Bernstein's theorem~\cite[Theorem A]{bernshtein1975number}}]
Let $\mathcal{F}$ be a system of polynomials $f_1,\dots, f_n$ in $\C[x_1,\dots, x_n]$. The number of isolated solutions of $\mathcal{F}$ over $\C_*^n$ is at most $\text{MVol}(Q_{f_1},\dots, Q_{f_n})$, where $Q_{f_i}$ is the Newton polytope of $f_i$. Furthermore, for polynomials $f_1,\dots, f_n$ with generic coefficients, the number of solutions for $\mathcal{F}$ over the torus is exactly $\text{MVol}(Q_{f_1},\dots, Q_{f_n})$.
\end{theorem}
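For a concrete instance (a small computation included for illustration), take $f_1 = 1+x+y$ and $f_2 = 1+xy$, so that $Q_{f_1}$ is the triangle with vertices $(0,0),(1,0),(0,1)$ and $Q_{f_2}$ is the segment from $(0,0)$ to $(1,1)$. In two dimensions the mixed volume is
\[ \text{MVol}(Q_{f_1},Q_{f_2}) = \text{Vol}(Q_{f_1}+Q_{f_2}) - \text{Vol}(Q_{f_1}) - \text{Vol}(Q_{f_2}) = \tfrac{5}{2} - \tfrac{1}{2} - 0 = 2, \]
matching the two torus solutions of $f_1 = f_2 = 0$: substituting $y=-1-x$ gives $1-x-x^2=0$.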
The \emph{polyhedral homotopy continuation} method established by Huber and Sturmfels \cite{huber1995polyhedral} is one common way to construct a start system whose number of solutions equals the mixed volume of the target system. Consider a polynomial
\[f(\mathbf{x}) = \sum\limits_{\mathbf{a}\in A}c_\mathbf{a}\mathbf{x}^\mathbf{a}\in \C[x_1,\dots, x_n]\]
where $A$ is a collection of integer lattice points. Multiplying each monomial $\mathbf{x}^\mathbf{a}$ of $f$ by some term $t^{w(\mathbf{a})}$ for a \emph{lifting function} $w:A\rightarrow \Z$, we obtain the \emph{lifted polynomial}
\[\overline{f}(\mathbf{x},t)=\sum\limits_{\mathbf{a}\in A}c_\mathbf{a}\mathbf{x}^\mathbf{a} t^{w(\mathbf{a})}.\]
Suppose that a target system $\mathcal{F}$ consists of polynomials $f_1,\dots, f_n$ supported on $A_{f_1},\dots, A_{f_n}$, respectively. Lifting all polynomials $f_1,\dots, f_n$ in $\mathcal{F}$ gives a lifted system $\overline{\mathcal{F}}(\mathbf{x},t)$ satisfying $\overline{\mathcal{F}}(\mathbf{x},1)=\mathcal{F}$. The solutions of $\overline{\mathcal{F}}$ can be expressed by Puiseux series $\mathbf{x}(t)=(x_1(t),\dots, x_n(t))$ where
\[x_i(t)=t^{\alpha_i}y_i+ \text{ higher order terms}\]
for some $\alpha_i\in\Q$ and nonzero constant $y_i$, and substituting $\mathbf{x}(t)$ back into our polynomials gives
\[\overline{f}_j(\mathbf{x}(t),t) =\sum\limits_{\mathbf{a}\in A_{f_j}}c_\mathbf{a} \mathbf{y}^\mathbf{a} t^{\langle \mathbf{a},\boldsymbol{\alpha}\rangle+w(\mathbf{a})}+\text{ higher order terms}.\]
For a suitable choice of $w$, the constants $\mathbf{y}$ and exponents $\boldsymbol{\alpha}$ can be computed for each branch of $\overline{\mathcal{F}}$, ultimately describing a start system $\mathcal{G}=\overline{\mathcal{F}}(\mathbf{x},0)$ with the right number of solutions. Polyhedral homotopy continuation is implemented in \textsc{HOM4PS2} \cite{lee2008hom4ps}, \textsc{HomotopyContinuation.jl} \cite{breiding2018homotopycontinuation}, and \textsc{PHCpack} \cite{verschelde1999algorithm}.
\subsection{Monodromy}
As seen in Section~\ref{sec:ACSV}, we typically have some solutions of a polynomial system representing critical points and want to determine additional solutions to rule out those that are non-minimal. This `bootstrapping' can be accomplished by monodromy.
For $m,n\in \mathbb{N}$, consider the complex linear space of $n\times n$ square systems $\mathcal{F}_p=(f_p^1,\dots, f_p^n)$ depending on some coefficient parameters $p\in \C^m$, where the monomial support of each polynomial $f_p^i$ is fixed. If we consider an affine linear map $\varphi:p\mapsto \mathcal{F}_p$ for $p\in\mathbb{C}^m$ then we can write $\varphi(\mathbb{C}^m)=B$, where $B$ is a parametrized linear variety of systems, and we define the \emph{solution variety} $V=\{(\mathcal{F}_p,x)\in B\times \mathbb{C}^n\mid \mathcal{F}_p(x)=0\}$ and \emph{projection map} $\pi:V\rightarrow B$.
Assume that the fiber $\pi^{-1}(\mathcal{F}_p)$ has only finitely many points for a generic choice of $p$. The set $D$ of systems in $B$ with non-generic fiber is called the \emph{branch locus} of $\pi$. Each element of the \emph{fundamental group} $\pi_1(B\setminus D)$ of loops in $B\setminus D$ modulo homotopy equivalence induces a permutation of the fiber $\pi^{-1}(\mathcal{F}_p)$, called a \emph{monodromy action}. To find all solutions of a system $\mathcal{F}_p\in \pi(V)$ with generic $p$, one can first find a seed solution $(p_0,x_0)\in V$ and numerically compute the monodromy action to obtain the remaining solutions of $\mathcal{F}_p$. When the solution variety $V$ is irreducible, the monodromy action is transitive. This method for finding solutions of polynomial systems is studied and implemented in \cite{breiding2018homotopycontinuation,duff2019solving}.
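The following sketch (a toy parametrized family with our own variable names, using the \textsc{monodromy\_solve} interface of \textsc{HomotopyContinuation.jl}) illustrates the idea: starting from one seed solution at one parameter value, monodromy loops recover the remaining points of the fiber.
\begin{code}
using HomotopyContinuation
@var x y a b
F = System([x^2 + y^2 - a, x + y - b]; parameters = [a, b])
# (x,y) = (1,0) is a seed solution at the parameters (a,b) = (1,1)
result = monodromy_solve(F, [[1.0, 0.0]], [1.0, 1.0])
solutions(result)   # both intersection points of the circle and line
\end{code}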
\subsection{Certification}
By construction, numerical methods return approximations, so some kind of certification is necessary for rigorous results. Specifically, a user needs a certificate that an approximation obtained by the homotopy method genuinely approximates a solution of the system. A numerical approximation is called \emph{certified} if it can be refined to an actual solution of the system to arbitrary precision by applying iterative operators (such as Newton iteration). Software providing such certification includes \textsc{alphaCertified} \cite{hauenstein2011alphacertified}, the function \textsc{certify} implemented in \textsc{HomotopyContinuation.jl} \cite{breiding2020certifying} and \textsc{NumericalCertification} \cite{https://doi.org/10.48550/arxiv.2208.01784}. In our implementation, we use the function \textsc{certify} in \textsc{HomotopyContinuation.jl}, exploiting Krawczyk's method via interval arithmetic \cite[Chapter 8]{moore2009introduction}.
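For instance (a toy system included for illustration), certification can be run as follows; \lstinline[style=code]{certify} produces interval boxes guaranteed to contain distinct true solutions.
\begin{code}
using HomotopyContinuation
@var x y
F = System([x^2 + y^2 - 2, x - y])
result = solve(F)
cert = certify(F, result)    # Krawczyk's method with interval arithmetic
ndistinct_certified(cert)    # here 2: both solutions are certified
\end{code}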
\section{The ACSVHomotopy Package}
\label{sec:Julia}
We now combine the theory of ACSV presented in Section~\ref{sec:ACSV} with the techniques described in Section~\ref{sec:NAG} to create effective and practical algorithms for the asymptotics of multivariate rational functions. Our algorithms are implemented in the \textsc{Julia} package \textsc{ACSVHomotopy.jl}, using the \textsc{HomotopyContinuation.jl} package for our homotopy and monodromy computations.
The package is available at
\begin{center}
{github.com/ACSVMath/ACSVHomotopy}
\end{center}
and our example worksheet can be viewed at
\begin{center}
{github.com/ACSVMath/ACSVHomotopy/blob/main/ExampleWorksheet.ipynb}
\end{center}
\subsection{Combinatorial Case}
For the combinatorial case we first compute the distinct solutions to~\eqref{eq:CP} using a polyhedral homotopy with certification by Krawczyk's method. We then solve and certify~(\ref{eq:extendedSys}) with the added equation $(1-t)\mu-1=0$ to eliminate all solutions with $t=1$ (there are never any solutions with $t=0$ as this would imply $H(\zero)=0$, contradicting $F$ having a power series expansion). Since we no longer have solutions where $t=1$, by refining the solutions to sufficient precision we can determine the solutions with positive real coordinates where $0<t<1$, match the projection onto the $\mathbf{z}$ variables of each to a distinct solution of~\eqref{eq:CP}, and thus rule out all non-minimal critical points with positive coordinates. We then find all critical points with the same coordinate-wise moduli and return that set.
\begin{example}
As a simple example, we can find the minimal critical point
\begin{code}
@polyvar x y
find_min_crits_comb(1-x-y)
\end{code}
\begin{codeout}
Out:1-element Vector{Vector{ComplexF64}}:
[0.5 + 0.0im, 0.5 + 0.0im]
\end{codeout}
controlling asymptotics for the central binomial coefficient
$\binom{2n}{n}$ which forms the main diagonal sequence of
\[ F(x,y) = \frac{1}{1-x-y}.\]
Similarly, we can compute the approximations
\begin{code}
@polyvar x y z
find_min_crits_comb(1-z*(x^2*y+y+x*y^2+x))
\end{code}
\begin{codeout}
Out:2-element Vector{Vector{ComplexF64}}:
[1.0 + e-35im, 1.0 - e-35im, 0.25 - e-37im]
[-1.0 - e-36im, -1.0 + e-36im, -0.25 - e-36im]
\end{codeout}
for the two minimal critical points $\pm(1,1,1/4)$ determining
asymptotics for the main diagonal of
\[F(x,y,z) = \frac{(1+x)(1+y)}{1-zxy(x+1/x+y+1/y)}.\]
This diagonal enumerates walks on the cardinal directions $\{N,S,E,W\}=\{(\pm1,0),(0,\pm1)\}$ that start at the origin and stay in $\N^2$.
\end{example}
\subsection{General Case}
In general we must consider the extended systems~\eqref{eq:GenSys1}-\eqref{eq:GenSys6} and~\eqref{eq:GenSys1}-(\ref{eq:GenSys6}'), which essentially doubles the number of variables under consideration. Mirroring the combinatorial case, we can solve~\eqref{eq:GenSys1}-\eqref{eq:GenSys6} and~\eqref{eq:GenSys1}-(\ref{eq:GenSys6}') using a polyhedral homotopy, certify the results using Krawczyk's method, and refine to a sufficient precision to determine when $0 < t < 1$ to rule out non-minimal points.
\begin{remark}
The system~\eqref{eq:GenSys1}-(\ref{eq:GenSys6}') is overdetermined, with $4d+4$ equations and $4d+3$ variables. In order to use \textsc{HomotopyContinuation.jl} we drop one of the equations in~(\ref{eq:GenSys6}') to obtain a square system: this can introduce additional solutions which are irrelevant to determining minimality, but does not affect the correctness of our test for minimality.
\end{remark}
\begin{example}
Straub and Zudilin~\cite{StraubZudilin2015}, following Gillis, Reznick, and Zeilberger~\cite{GillisReznickZeilberger1983}, study families of rational functions connected to special function theory. For instance, in three dimensions they study the constants $c$ for which
\[ F_c(x,y,z) = \frac{1}{1-(x+y+z)+cxyz}\]
has non-negative power series coefficients on its main diagonal (which turns out to imply non-negativity of all power series coefficients). Running the code (for $c=5$)
\begin{code}
@polyvar x y z
find_min_crits(1 - (x+y+z) + 5*x*y*z)
\end{code}
\begin{codeout}
Out:2-element Vector{Vector{ComplexF64}}:
[0.45 - 0.12im, 0.45 - 0.12im, 0.45 - 0.12im]
[0.45 + 0.12im, 0.45 + 0.12im, 0.45 + 0.12im]
\end{codeout}
gives the minimal critical points controlling asymptotics of the main diagonal. Since this is a complex conjugate pair, the resulting asymptotic expansion implies that $F_c$ has an infinite number of negative coefficients on its main diagonal when $c=5$ (in fact, it has an infinite number of negative coefficients whenever $c>4$).
\end{example}
\subsection{Faster Heuristics}
As seen in the examples of Section \ref{sec:examples}, the large number of variables in the extended system, even in low-dimensional examples, means that solving it does not terminate within reasonable time for polynomials in four or more variables. In order to speed up our solvers, we can numerically approximate the distinct solutions to the smaller system~\eqref{eq:GenSys1}-\eqref{eq:GenSys3} and then substitute each of these solutions as parameters into the extended equations~\eqref{eq:GenSys4}-\eqref{eq:GenSys6} and~\eqref{eq:GenSys4}-(\ref{eq:GenSys6}'). In our implementation this is done by running the function \lstinline[style=code]{find_min_crits} with the flag \lstinline[style=code]{approx_crit = true}.
\begin{remark}
This approach approximates the solutions to the extended system~\eqref{eq:GenSys1}-\eqref{eq:GenSys6} if the solutions vary smoothly with $\mathbf{a}$ and $\mathbf{b}$, which happens whenever the Jacobian of~\eqref{eq:GenSys4}-\eqref{eq:GenSys6} with respect to $\mathbf{a}$ and $\mathbf{b}$ is full rank at all values of $\mathbf{a}$ and $\mathbf{b}$ solving \eqref{eq:GenSys1}-\eqref{eq:GenSys3}. Unfortunately, verifying this condition is usually about as costly as solving the extended system, so we do not do this in our computations and refer to this method only as an efficient heuristic that correctly identifies minimal critical points in a large variety of cases.
\end{remark}
\begin{example}
To stress-test our algorithms we generate a random polynomial $p(x,y,z)$ with six terms in three variables having coefficients in $\{1,\dots,100\}$ and then set $H(x,y,z)=1-p(x,y,z)$. Running
\begin{code}
@polyvar x y z
H=1-(72*x^3*z+97*y*z^3+53*x*z^2+47*x*y+39*z^2+71*x)
find_min_crits(H; approx_crit = true)
\end{code}
\begin{codeout}
Out:1-element Vector{Vector{ComplexF64}}:
[0.001+5.5e-40im, 6.2-7.5e-37im, 0.06+0.0im]
\end{codeout}
returns the unique minimal critical point in about three minutes. This example does not terminate without the \lstinline[style=code]{approx_crit = true} flag.
\end{example}
It is also possible to use the approximations to the critical points as a start system to solve~\eqref{eq:GenSys4}-\eqref{eq:GenSys6} using the monodromy method. More precisely, for any $(\mathbf{a},\mathbf{b})$ solving \eqref{eq:GenSys1}-\eqref{eq:GenSys3} we set $(\mathbf{x},\mathbf{y})=(\mathbf{a},\mathbf{b})$ and $t=1$ in~\eqref{eq:GenSys4}-\eqref{eq:GenSys6} and then compute a corresponding start value of $\nu$ by computing the left kernel of the Jacobian matrix of \eqref{eq:GenSys4}-\eqref{eq:GenSys5} with respect to the variables $\mathbf{x},\mathbf{y}$ and $t$. From a given parameter value $(\mathbf{a},\mathbf{b})$ and the initial solution $(\mathbf{x},\mathbf{y},\nu,t)$, we collect real solutions from the monodromy method and check if $t\in (0,1)$: if it is, then we remove the parameter value $(\mathbf{a},\mathbf{b})$ as it is non-minimal. Interestingly, it appears that monodromy cannot detect solutions where $\nu=0$ when starting with a non-zero value of $\nu$, and vice versa, suggesting that the solution variety is the union of components corresponding to these cases. We thus repeat this process separately for the cases where $\nu=0$ and when~(\ref{eq:GenSys6}') replaces~\eqref{eq:GenSys6}. Finally, we return the values of $(\mathbf{a},\mathbf{b})$ that are not disregarded.
\begin{example}
Melczer and Salvy~\cite{MelczerSalvy2021} introduce the rational function
\[ F(x,y) = \frac{1}{(1-x-y)(20-x-40y)-1} \]
because it has two critical points with positive coordinates, one of which is smaller in the first coordinate and the other of which is smaller in the second coordinate (so it is not clear which, if any, should be minimal). Running
\begin{code}
@polyvar x y
H = (1-x-y)*(20-x-40*y)-1
find_min_crits(H; monodromy=true)
\end{code}
\begin{codeout}
Out:1-element Vector{Vector{ComplexF64}}:
[0.54 - 9.18e-41im, 0.31 + 1.83e-40im]
\end{codeout}
returns the correct minimal critical point.
\end{example}
We believe that further study of the geometric properties of the extended system~\eqref{eq:GenSys4}-\eqref{eq:GenSys6} could help make this monodromy approach a powerful tool for ACSV analysis.
\section{Examples and Benchmarks}
\label{sec:examples}
Tables~\ref{table1} and~\ref{table2} list benchmarks of our implementation against a selection of combinatorial and algebraic examples, executed on a {Macbook pro, 2 GHz Quad-Core Intel Core i5, 16 GB RAM}. The package supports arbitrary $\mathbf{r}$-diagonal sequences, but examples in this section were done with $\mathbf{r}=\one$. See our supplementary notebook for the full details on the rational functions involved.
\begin{remark}
The \textsc{HomotopyContinuation.jl} package converts input polynomials into compiled straight-line programs for fast evaluation. In order to better see the differences between examples as they grow in degree and dimension, we have removed compilation time from our benchmarks (compilation takes the majority of the runtime on small examples but is a small part of larger ones). This removes several seconds on small examples, and up to tens of seconds on larger examples, from the reported timings. In particular, the (non-certified) package of Melczer and Salvy beats our package in the combinatorial case on most examples in Table~\ref{table1} when compilation time is added (except for the two high degree examples where the Maple package takes much longer).
\end{remark}
\begin{table}
\centering
\begin{tabular}{c|c|c}
Example & Comb. & Maple Comb. \\
\hline
$1-x-y$ & 0.0052 & 0.143 \\
Two positive CPs & 0.029 & 0.292 \\
square-root & 0.01 & 0.06 \\
Apéry $\zeta(2)$ & 0.025 & 0.06 \\
Apéry $\zeta(3)$ & 0.7 & 0.3 \\
random poly & 0.9 & 840 \\
2D Walk & 0.03 & 0.06 \\
3D Walk & 0.08 & 2.7 \\
$1-x-y^2-w^3-z^4$ & 0.06 & 509
\end{tabular}
\vspace{0.1in}
\caption{Time, in seconds, of running our Julia implementations in the combinatorial case,
compared to the Kronecker representation approach of Melczer and Salvy. The time to compile Julia functions is not included.
}
\label{table1}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
Example & HSolve & HSolve Approx & Monodromy \\
\hline
$1-x-y$ & 0.04 & 0.02 & 2.3 \\
Two positive CPs & 4.1 & 0.33 & 2.84 \\
Apéry $\zeta(2)$ & 670 & 3.8 & 8.5 \\
square-root & 29.5 & 0.72 & 14.9 \\
random poly & INC & 189.4 & 583.1 \\
2D Walk & INC & 15.3 & 31.9 \\
GRZ & 236 & 3.6 & 3.8
\end{tabular}
\vspace{0.1in}
\caption{Time, in seconds, of running our Julia implementations that do not assume combinatoriality. The first column is the time to solve the extended critical point systems, the second column is the time to solve the smaller systems after solving the critical point system separately, and the final column is the time to run the monodromy method. INC indicates the code did not complete after running for an hour.}
\label{table2}
\end{table}
\section{Rigor of Homotopy Results}
\label{sec:certification}
Because we certify our solutions, we never attempt to approximate a point that is not actually a solution of the polynomial systems under consideration. However, by the way they are designed, it is possible for homotopy computations to \emph{miss} solutions, which could result in a point being deemed minimal when it is not. There are some exceptions: when the number of solutions found matches the upper bound on the number of solutions given by the mixed volume, for instance, then we can be sure we have found all solutions. Tables~\ref{tab:mv1} and~\ref{tab:mv2} show a comparison between the mixed volumes of several systems studied here, compared to the actual number of solutions found. It can be observed that we often reach the upper bound in the combinatorial case, but this usually does not happen in the non-combinatorial case.
\begin{table}
\centering
\begin{tabular}{c|c|c}
Example & Mixed volume & $\#$ Solutions found \\
\hline
$1-x-y$ &1 & 1 \\
$1 - xy - xy^2 - 2x^2y
$ & 9 & 9 \\
$1 - x - y^2 - w^3 - z^4$ & 96 & 96
\end{tabular}
\vspace{0.1in}
\caption{Mixed volume for the system (\ref{eq:extendedSys}) and the actual number of solutions found for several combinatorial examples}
\label{tab:mv1}
\end{table}
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c}
Example & (\ref{eq:GenSys1})-(\ref{eq:GenSys6}) & $\#$ Sols & (\ref{eq:GenSys1})-(\ref{eq:GenSys6}') & $\#$ Sols \\
\hline
$1-x-y$ &4 & 1 & 2 & 0 \\
$1 - xy - xy^2 - 2x^2y
$ & 3276 & 99 & 1638 & 126 \\
$1 - (x+y+z) + \frac{81}{8}xyz
$ & 13068 & 162 & 4356 & 216 \\
$1 - x - y^2 - w^3 - z^4$ & FAIL &N/A & 442368 & 442368
\end{tabular}
\vspace{0.1in}
\caption{Mixed volumes for the systems (\ref{eq:GenSys1})-(\ref{eq:GenSys6}) and (\ref{eq:GenSys1})-(\ref{eq:GenSys6}') and the number of solutions found for several examples. FAIL means that the code did not compute the mixed volume due to an out of memory error.}
\label{tab:mv2}
\end{table}
We can also conclude we know minimality rigorously when there is another way to determine that a minimal critical point must exist, and all but one point is ruled out by our algorithms. For instance, in the combinatorial case it can be shown that any polynomial whose support contains the terms $1,z_1,z_2,\dots,z_d$ must have at least one minimal critical point with positive coordinates.
\section{Conclusion}
\label{sec:future}
Despite the high computational cost associated to many of the computations required to determine asymptotics using the methods of ACSV, the continued development of efficient computer algebra packages in Julia and other languages has made it feasible to automate the analysis beyond the simplest cases. There are many natural extensions still to be made, perhaps chief among them extending to the non-smooth case by incorporating algorithms for the Whitney stratification of algebraic varieties. Other interesting avenues for exploration include the development of better start systems for homotopy computations, to better match the number of critical points, and a theoretical study of the solution variety and its irreducible components for the monodromy approach (which could help the monodromy approach become competitive with or even surpass the polyhedral homotopy approach).
\section*{Acknowledgments}
KL and SM acknowledge the support of the AMS Math Research Community \emph{Combinatorial Applications of Computational Geometry and Algebraic Topology}, which was funded by the National Science Foundation under Grant Number DMS 1641020. SM and JS's work was partially supported by NSERC Discovery Grant RGPIN-2021-02382.
\bibliographystyle{abbrv}
| {
"timestamp": "2022-08-10T02:05:23",
"yymm": "2208",
"arxiv_id": "2208.04490",
"language": "en",
"url": "https://arxiv.org/abs/2208.04490",
"abstract": "We combine tools from homotopy continuation solvers with the methods of analytic combinatorics in several variables to give the first practical algorithm and implementation for the asymptotics of multivariate rational generating functions not relying on a non-algorithmically checkable `combinatorial' non-negativity assumption. Our homotopy implementation terminates on examples from the literature in three variables, and we additionally describe heuristic methods that terminate and correctly predict asymptotic behaviour in reasonable time on examples in even higher dimension. Our results are implemented in Julia, through the use of the HomotopyContinuation.jl package, and we provide a selection of examples and benchmarks.",
"subjects": "Combinatorics (math.CO); Symbolic Computation (cs.SC); Algebraic Geometry (math.AG)",
"title": "Homotopy techniques for analytic combinatorics in several variables",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877023336244,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7089594699097832
} |
https://arxiv.org/abs/1005.1919 | On the complement of the dense orbit for a quiver of type $\Aa$ | Let $\Aa_t$ be the directed quiver of type $\Aa$ with $t$ vertices. For each dimension vector $d$ there is a dense orbit in the corresponding representation space. The principal aim of this note is to use just rank conditions to define the irreducible components in the complement of the dense orbit. Then we compare this result with already existing ones by Knight and Zelevinsky, and by Ringel. Moreover, we compare with the fan associated to the quiver $\Aa$ and derive a new formula for the number of orbits using nilpotent classes. In the complement of the dense orbit we determine the irreducible components and their codimension. Finally, we consider several particular examples. | \section{Introduction}\label{Sintro}
The principal aim of this note is to describe the complement of the
generic orbit in the representation space of a directed quiver of type
${\Bbb A}_t$ with vertices $\{1,2,\ldots,t \}$ and arrows $\alpha_i: i+1 \longrightarrow
i$. For a dimension vector $d = (d_1,\ldots,d_t)$ and a
representation $A = (A_{i,i+1}) = (A_{1,2}, A_{2,3}, \ldots,
A_{t-1,t})$ with $A_{i,i+1}: V_{i+1} \longrightarrow V_i$ we define
$$
r_{i,j} := \min\{d_l\mid i \leq l \leq j\} \mbox{ and } A_{(i,j)} :=
A_{i,i+1}A_{i+1,i+2} \ldots A_{j-1,j}.
$$
Let $Y$ be defined as the complement in
$$
{\cal{R}}(Q,d) = \bigoplus_{i = 1}^{t-1} \operatorname {Hom}(\ck^{d_{i+1}}, \ck^{d_i}) \mbox{
with action of } \mbox{Gl}(d) = \prod_{i=1}^t \mbox{Gl}(d_i)
$$
of the generic orbit
$$
{\cal{O}}(d) := \{ A = (A_{1,2},\ldots,A_{i,i+1},\ldots,A_{t-1,t}) \in {\cal{R}}(Q,d)\mid
\operatorname {rk} A_{(i,j)} =
r_{i,j} \}.
$$
In the complement $Y$ we define closed (not necessarily irreducible) varieties
$$
Y_{i,j} := \{(A_{i,i+1})_{i=1}^{t-1} \in {\cal{R}}(Q,d) \mid \operatorname {rk}
A_{(i,j)} \leq r_{i,j} - 1\}.
$$
We claim in our main result that all irreducible components of $Y$ are
among the $Y_{i,j}$ and at most $t-1$ of the $Y_{i,j}$ occur as
irreducible components in $Y$.
For the formulation of the main result we need to define a set of pairs
$$
J(d) := \{ (i,j) \mid 1 \leq i < j \leq t \mbox{ and } \mbox{ for all } i < l < j, d_l
> \max \{ d_i, d_j \} \}.
$$
and a subset
$I(d)$ consisting of all elements in $J(d)$ satisfying one of the
following properties (where we define $d_0 = d_{t+1} = 0$) \\
(i) $d_i = d_j$; \\
(ii) $d_i < d_j$ and we define $a$ to be the minimal index $a > j$ with
$d_a < d_i$. Then $d_l \geq d_j$ for all $j < l < a$.
\\
(iii) $d_i > d_j$ and we define $b$ to be the maximal index $b < i$
with $d_b < d_j$. Then $d_l \geq d_i$ for all $b < l < i$.
If $(i,j) \in J(d)$ then we show that $Y_{i,j}$ is irreducible (Theorem
\ref{Tmain}) and there exists a unique representation $M(i,j)$ whose
orbit is dense in $Y_{i,j}$. For any irreducible component of $Y$
there exists a representation $M$ whose orbit is dense in this
component. Such a representation $M$ is called {\sl almost generic}.
Then we prove the following theorem.
\begin{Thm}\label{Tmain}
Assume $d_i > 0$ for each $i=1,\ldots,t$. \\
(i)
$$Y = \cup_{(i,j) \in I(d)} Y_{i,j} $$
is the decomposition of $Y$ into pairwise different irreducible components.
\\
(ii) Each component is the closure of an orbit corresponding to an
almost generic representation $M(i,j)$. \\
(iii) For any $(i,j) \in J(d)$ the codimension of $Y_{i,j}$ is $|d_j - d_i| + 1$. \\
(iv) The irreducible components $Y_{i,j}$ in $Y$ of codimension $1$
are in
bijection with the pairs $(i,j)$ with $d_i = d_j$ and $d_l > d_i$
for all $i < l < j$.
\end{Thm}
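Before turning to the proofs we illustrate the index sets in a toy case (a small example added here for orientation; more substantial examples appear in Section \ref{Sexample}). Let $d = (1,2,1)$. All three pairs $(1,2)$, $(2,3)$, $(1,3)$ lie in $J(d)$, but only $(1,3)$ lies in $I(d)$: it satisfies (i) since $d_1 = d_3$, whereas $(1,2)$ fails (ii) because $d_3 = 1 < d_2 = 2$, and $(2,3)$ fails (iii) symmetrically. Hence $Y = Y_{1,3} = \{A \mid A_{1,2}A_{2,3} = 0\}$ is irreducible of codimension $|d_3 - d_1| + 1 = 1$, in accordance with part (iv) of the theorem.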
In fact we prove the following stronger results. First of all,
$Y_{i,j}$ is irreducible precisely when $(i,j)$ is in $J(d)$ (Prop.
\ref{Pirred}, Cor. \ref{Cirred}). Then
$Y$ obviously decomposes into the union of all possible $Y_{i,j}$
(Lemma \ref{Ldecomp}). Next we show that any $Y_{k,l}$ for $(k,l)
\notin I(d)$ is already contained in a union of some of the other $Y_{i,j}$
(Prop. \ref{Pcontain}). Moreover, we
interpret our result in terms of multisegments and nilpotent
classes in Section \ref{Sfurther}.
Note that the techniques are similar to the ones in \cite{BH1}; our
case corresponds to ${\mathfrak p}_u(d)/{\mathfrak p}_u(d)'$ therein.
However, the index sets are different and no case follows from the
other. With some technical modifications on the index sets
one can also handle the case ${\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$ for the
remaining values of $l$ in a similar way (Section \ref{Sparabol}).
\medskip
The paper is organized as follows. In Section \ref{Sorbits} we only collect the
details we need for the proof of the main result in Section \ref{Sproof}. Then
we proceed in Section \ref{Sfurther} with some further descriptions related to
tilting modules and trees, the structure of the fan associated to
tilting modules and other
combinatorial descriptions. The associated simplicial complex of the
fan coincides with the simplicial complex considered by Riedtmann and
Schofield (\cite{RiedtmannSchofield2}). Then, in Section
\ref{Sexample} we consider
several examples that are
of interest: convex and concave dimension vectors, pure and generic dimension
vectors, and symmetric ones. In the last section we compare with the
results in \cite{BH1} and mention some generalizations without proofs.
\medskip
We always work over an infinite field $k$; the results here do not depend
on the ground field. For finite fields, one needs to modify the
definition of a dense orbit slightly: an orbit is dense, if it is
dense over the algebraic closure. For a partition $\lambda =
(\lambda_1,\ldots,\lambda_n)$ we denote
by $C(\lambda)$ the corresponding nilpotent class defined by
$$
C(\lambda) = \{ A \in \operatorname {End}(V) \mid \dim A^l(V) = \sum_j \mbox{max\,}\{
\lambda_j - l, 0 \} \mbox{ for all } l \}.
$$
All varieties are considered over the
algebraic closure and might be reducible. Also the action of the group
should be understood over the algebraic closure. We will always
identify isomorphism classes of representations of ${\Bbb A}_t$ (with
directed orientation) with so-called multisegments defined below. With
$\sharp [i,j]$ we denote the number $j-i+1$ of integers in the interval
$[i,j]$.
{\sc Acknowledgment:} This work started during a stay of both authors in Oberwolfach. We are
indebted to the Institut for the perfect working conditions. The second author was supported by the DFG
priority program SPP 1388 representation theory.
\section{Description of the Orbits} \label{Sorbits}
In this section we recall some of the various descriptions of the isomorphism
classes of representations of ${\Bbb A}_t$ with the directed
orientation that we need in the proof. Moreover, we recall some
well-known facts from the
classification of tilting modules and compute the extension
groups. We proceed with these descriptions in Section
\ref{Sfurther}. Further related results can be found in
\cite{KnightZelevinsky} and in the classical papers
\cite{AbeasisdelFra} and \cite{AbeasisdelFraKraft}.
\subsection{Multisegments}
A multisegment $M$ consists of a union of
intervals $[i,j]$ with $1 \leq i \leq j \leq t$, written as $M = \oplus
[i,j]^{a_{i,j}}$ (since a multisegment represents an isomorphism
class of representations we write 'direct sum' instead of
'union'). The dimension vector of such a multisegment is
defined as
$$
\mathrm{\underline{dim}} M = (d(M)_1,\ldots,d(M)_t); \quad d(M)_l := \sum_{(i,j) \mid i \leq
l \leq j} a_{i,j}.
$$
There are natural bijections between the multisegments of dimension
vector $d$, the isomorphism classes of representations of ${\Bbb A}_t$,
and the orbits of the $\mbox{Gl}(d)$--action on ${\cal{R}}(Q,d)$. Moreover, for
any dimension vector $d$ there exists a unique multisegment $M(d)$
corresponding to the dense orbit. This multisegment can be constructed
recursively as follows: Define $a_{1,t}$ to be the minimum of the
entries $d_i$ in $d$. Then we consider $d^1 := d -
a_{1,t}(1,\ldots,1)$ and consider the longest interval $[i,j]$ in
$d^1$ with minimal $i$. Then
$$
d^2 := d^1 - a_{i,j} \mathrm{\underline{dim}} [i,j] = d^1 - a_{i,j}
(0,\ldots,0,1,\ldots,1,0,\ldots0)
$$
is nonnegative for some
maximal $a_{i,j}$ and we proceed with $d^2$ instead of $d^1$ in the same
way. Eventually, we obtain a multisegment $M(d)$ with at most $t$
different direct summands. A second way to obtain this multisegment is
described in \cite{Hvol}, Section 8 (this is a similar, but not the
same, construction as in \cite{BHRR}) as follows: consider the unique
diagram with $d_i$
vertices in the $i$th column and connect each vertex in the $i$th column
and the $k$th row with the vertex in the $(i+1)$th column and the
$k$th row (if it exists). Roughly speaking, one connects all neighbouring
vertices in the same row. The connected components of this diagram are
the direct summands and this diagram represents the multisegment
$M(d)$ (see Section \ref{Sexample} for examples).
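For instance (a small example worked out here), for $d = (2,1,2)$ the recursion gives $a_{1,3} = 1$ and $d^1 = (1,0,1)$, whose support consists of the two intervals $[1,1]$ and $[3,3]$, so $M(d) = [1,3] \oplus [1,1] \oplus [3,3]$. The diagram construction yields the same answer: the first row is connected across all three columns, while the second-row vertices in columns $1$ and $3$ remain isolated.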
{\sc Definition. }
A dimension vector $d$ is {\sl generic} if $M(d)$ contains precisely
$t$ pairwise different segments. A dimension vector is {\sl pure} if
$d_1 = d_t$, $d_l \geq d_1 = d_t$ for each $1 \leq l \leq t$ and this
condition holds recursively for each connected component in the
support of $d(\geq a) := (\max\{d_1-a, 0\}, \ldots, \max\{d_t-a,
0\})$. Examples can be found in Section \ref{Sexample}, see
\ref{Sexample}.1 and \ref{Sexample}.4.
\subsection{Extensions and homomorphisms}
The category of finite dimensional representations of ${\Bbb A}_t$ is a
hereditary category. The Euler characteristic $\langle- ,- \rangle
= \dim \operatorname {Hom}(-,-) - \dim \operatorname {Ext}(-,-)$, the Hom-spaces,
and the Ext-spaces of segments are given by
$$
\begin{array}{l}
\langle [i,j], [k,l] \rangle = \sharp( [i,j] \cap [k,l] ) -
\sharp ([i+1,j+1] \cap [k,l]) \\[+2mm]
\operatorname {Hom}([i,j], [k,l]) = \left\{
\begin{array}{ll}
k & \mbox{ if } k \leq i \leq l \leq j \\
0 & \mbox{ otherwise}
\end{array} \right. \\[+2mm]
\operatorname {Ext}([i,j], [k,l]) = \left\{
\begin{array}{ll}
k & \mbox{ if } i < k \leq j + 1 < l + 1 \\
0 & \mbox{ otherwise}
\end{array}
\right.
\end{array}
$$
All this follows from direct calculations using a projective or
an injective resolution
$$
0 \longrightarrow [j + 1,t] \longrightarrow [i,t] \longrightarrow [i,j] \longrightarrow 0 \qquad \mbox{ or } \qquad 0 \longrightarrow [i,j] \longrightarrow
[1,j] \longrightarrow [1,i-1] \longrightarrow 0.
$$
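For instance, $\operatorname {Ext}([1,2],[2,3]) = k$ since $1 < 2 \leq 3 < 4$: the non-split extension has middle term $[1,3] \oplus [2,2]$, while the split extension gives $[1,2] \oplus [2,3]$.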
\begin{Prop}\label{Pextgroups}
a) A multisegment $M$ has no selfextension precisely when for each
pair of direct summands $[i,j]$ and $[k,l]$ of $M$ one of the
following conditions hold \\
(i) $[i,j] \subseteq [k,l]$, \quad
(ii) $j < k-1$, \quad
(iii) $[k,l] \subseteq [i,j]$, or \quad
(iv) $l < i-1$. \\
b) The multisegment $M(d)$ has no selfextension and any other
multisegment $M$ of dimension vector $d$ satisfies $\operatorname {Ext}(M,M) \not=
0$. \\
c) A multisegment $M$ satisfies $\operatorname {Ext}(M,M) = k$ precisely when it
contains two segments
$[i,j]$ and $[k,l]$ with $j \geq k-1$, $i < k$, and $j < l$ as a
direct summand and the
complement of $[i,j]$ equals $M(d')$, where $d' = \mathrm{\underline{dim}} M - \mathrm{\underline{dim}}
[i,j]$ and the complement of $[k,l]$ equals $M(d'')$, where $d'' =
\mathrm{\underline{dim}} M - \mathrm{\underline{dim}} [k,l]$. \\
d) A multisegment $M = \oplus [i,j]^{a_{i,j}}$ is almost generic
precisely when the direct sum of the pairwise non-isomorphic direct
summands $N = \oplus_{(i,j) \mid a_{i,j} > 0} [i,j]^{}$ satisfies
$\operatorname {Ext}^1(N,N) = k$ and one of the direct summands with non-trivial
Ext--group occurs with multiplicity one in $M$.
\end{Prop}
{\sc Proof. }
a) and the first claim of b) are direct consequences of the formula
for the extension groups above. The uniqueness in b) follows either
directly from the construction, or since ${\cal{R}}(Q,d)$ is irreducible (it
can contain at most one dense orbit). To prove c) one uses that
$\operatorname {Ext}^1$ is additive, thus there is at most one non-vanishing
extension group. Finally, to prove d) we note that for $d = \mathrm{\underline{dim}} N$
we have $\dim \operatorname {End}(N,N) =
\dim \operatorname {End}(M(d),M(d)) +1 $ by a simple computation of the Euler characteristic
$\langle M(d),M(d) \rangle = \langle N,N \rangle.$ Thus, the stabilizer of
the orbit of $M(d)$ and $N$ differ by one and then the dimension of the
orbits also differ by one. The closure of the orbit of $M(d)$ obviously
contains the orbit of $N$. Now assume $M$ is a multisegment as in the
claim and it is neither generic nor almost generic. Take two direct
summands $[a,b]$ and $[c,d]$ of $M$ with $\operatorname {Ext}([a,b],[c,d]) \not=
0$. We define a new multisegment $M'$ of the same dimension vector by
deleting $[a,b]$ and $[c,d]$ and replacing it by $[c,b] \oplus
[a,d]$ (if $c = b+1$ we replace it just by $[a,d]$). Then $M$ is in the closure of the orbit of $M'$ and $M$ is
almost generic precisely when $M'$ is already generic. This in turn is
equivalent to the second condition in d), proving one direction of the
claim.\\
Now assume $N$ satisfies $\operatorname {Ext}(N,N) = k$. Then the closure of the
orbit of $N$ is of codimension one in the space of all representations
of dimension vector $\mathrm{\underline{dim}} N$. Thus it is
some irreducible component in the complement of the dense
orbit. Now we add the remaining segments to $N$ so that we obtain $M =
M' \oplus N$. The multisegment $M'$ is then also a direct summand of
$M(d)$, since one can get $M(d)$ from $N \oplus M'$ by extending only
two segments in $N$.
Assume $M$ contains two indecomposable direct summands $[a,b]$
and $[c,d]$, both occurring with multiplicity at least two and
$\operatorname {Ext}([a,b],[c,d]) \not= 0$, then we can again (using extensions)
construct an orbit that is not generic and contains $M$ in its
closure. Consequently, such an $M$ is not almost generic.
\hfill $\Box$
The proof also follows directly from Zwara's result \cite{Zwara} that the partial
order of the
Ext-degeneration and the partial order for the geometric degeneration coincide. In the
proof above we only used the trivial direction.
\subsection{Rank conditions}
To any representation $A$ of $Q$ one can associate the ranks of the
compositions of the corresponding matrices. Consider $A = (A_{i,i+1})
\in {\cal{R}}(Q,d)$. Then we define the {\sl rank triangle}
$$
r(A) = (r_{i,j}(A))_{1 \leq i < j \leq t}, \mbox{ with } r_{i,j}(A) = \operatorname {rk}
A_{i,i+1}\cdot \ldots \cdot A_{j-1,j} = \operatorname {rk} A_{(i,j)}.
$$
Moreover, it is convenient to define the {\sl extended rank triangle}
with $r_{i,i} := d_i$ and to define $r_{i,j} = 0$ whenever $i \leq
0$ or $j > t$.
Obviously, we must have $r_{i,j}(A) \leq r_{i,j} := \mbox{min\,}\{d_l \mid i
\leq l \leq
j\}$ and (using generic matrices) the set
$$
{\cal{O}}(d) := \{ A \in {\cal{R}}(Q,d) \mid r_{i,j}(A) = r_{i,j} \}
$$
is open and dense in ${\cal{R}}(Q,d)$. In fact, the set ${\cal{O}}(d)$ consists of
all representations isomorphic to $M(d)$, since $r_{i,j}(M(d)) =
r_{i,j}$ by construction.
We fix a dimension vector $d$ and
consider any triangle $s = (s_{i,j})$ of non-negative
integers $s_{i,j}$ satisfying $s_{i,j} \leq r_{i,j}$. Then
$$
X_s^0 := \{ A \in {\cal{R}}(Q,d) \mid r_{i,j}(A) = s_{i,j} \} \subseteq X_s
:= \{ A \in {\cal{R}}(Q,d) \mid r_{i,j}(A) \leq s_{i,j} \}
$$
defines an open (possibly empty) subvariety $X_s^0$ in a closed,
non-empty algebraic subvariety $X_s$ (not necessarily
irreducible) of ${\cal{R}}(Q,d)$. The rank triangles are partially ordered
by $s \leq u$ iff $u - s$ has only non-negative entries. It turns out
that some of the $X_s$ are irreducible (we determine which ones) and
the rank conditions are very useful for determining the components in
the orbit closures. Moreover, one can reconstruct the multisegment $M$
from the rank condition $s$, where the orbit of $M$ is dense in $X_s$
with $s$ minimal: A direct sum $[i,j]^{a}$ is a
direct summand of $M$ (with maximal possible $a$) if and only if $a =
r_{i,j} - r_{i-1,j} - r_{i,j+1} + r_{i-1,j+1}$. Consequently, $X_s^0$
is empty if some $r_{i,j} - r_{i-1,j} - r_{i,j+1} + r_{i-1,j+1}$ is
negative. Otherwise $X_s^0$ is dense in $X_s$.
Conversely, given a multisegment $M$ we can easily determine its rank
vector $r(M)
=(r(M)_{i,j})$ as follows
$$
r(M)_{i,j} = \sharp \left\{ [k,l] \in M \mid k \leq i \leq j \leq l
\right\}.
$$
In the particular case of a segment $[k,l]$, we obtain just the
characteristic function of a triangle as the rank triangle
$$
r([k,l])_{i,j} =
\left\{
\begin{array}{l}
1 \mbox{ if } [i,j] \subseteq [k,l] \\
0 \mbox{ else }.
\end{array}
\right.
$$
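For instance, for the multisegment $M = [1,3] \oplus [1,1] \oplus [3,3]$ of dimension vector $d = (2,1,2)$ considered above, only the segment $[1,3]$ contains an interval $[i,j]$ with $i < j$, hence $r(M)_{1,2} = r(M)_{2,3} = r(M)_{1,3} = 1$; these are exactly the generic ranks $r_{i,j} = \min\{d_l \mid i \leq l \leq j\}$, confirming that $M = M(d)$.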
\begin{Prop}
a) If $s \leq u$ then $X_s \subseteq X_u$. In particular, $X_r$
contains each $X_s$ and $X_0$ (consisting of the zero matrix) is
contained in each $X_u$. \\
b) The variety $X_s$ is irreducible precisely when it is the closure
of one $\mbox{Gl}(d)$--orbit. \\
c) $X_s^0$ is non-empty precisely when $s$ is a sum of functions of
the form $r([i,j])$ and this is equivalent to $s_{i,j} - s_{i-1,j}
- s_{i,j+1} + s_{i-1,j+1} \geq 0$ for all pairs $(i,j)$.
\end{Prop}
{\sc Proof. }
Assertion a) is obvious, since $\operatorname {rk} A_{(i,j)} \leq a$ implies $\operatorname {rk}
A_{(i,j)} \leq b$ for any $b > a$.
To prove b) we decompose $X_s$ in a disjoint union of
$\mbox{Gl}(d)$--orbits. This is possible, since $X_s$ is
$\mbox{Gl}(d)$--invariant. Thus we obtain a set of multisegments ${\cal{M}}_s$ with
$$
X_s = \bigsqcup_{M \in {\cal{M}}_s} \mbox{Gl}(d) M.
$$
Consequently, $X_s$ is the union of a finite number of orbit closures
$\overline{\mbox{Gl}(d) M}$ for a finite number of multisegments $M$. We
can assume this set is minimal. Thus $X_s$ is irreducible precisely
when $X_s = \overline{\mbox{Gl}(d) M}$ for some maximal $M$ in ${\cal{M}}_s$.
For c), note that $X_s^0$ is nonempty precisely when there exists a
multisegment $M$ with $s = \operatorname {rk} M$. This is also equivalent to $s =
\sum_{(i,j)} a_{(i,j)} \operatorname {rk} [i,j]$ being a sum of rank functions of
segments. To prove the last characterization we note that for $s =
\sum_{(i,j)} a_{(i,j)} \operatorname {rk} [i,j]$ we obtain $s_{i,j} - s_{i-1,j}
- s_{i,j+1} + s_{i-1,j+1} = a_{(i,j)} \geq 0$. Conversely, if
$s_{i,j} - s_{i-1,j} - s_{i,j+1} + s_{i-1,j+1} \geq 0$ then we
define $ a_{(i,j)} = s_{i,j} - s_{i-1,j} - s_{i,j+1} +
s_{i-1,j+1}$.
\hfill $\Box$
\section{Proof of the main theorem}\label{Sproof}
We start this section by showing that some of the $Y_{i,j}$ are
irreducible and compute their dimension. Then we show that all
$Y_{i,j}$ for $(i,j)$ not in $I(d)$ are already contained in some union
of other ones. This allows a reduction to the case $Y_{i,j}$ for
$(i,j) \in I(d)$. Finally we show that $Y$ is already contained in the
union of all $Y_{i,j}$.
\subsection{Irreducible varieties}
\begin{Prop}\label{Pirred}
Assume $(i,j) \in J(d)$, then $Y_{i,j}$ is irreducible of
codimension $|d_j - d_i| + 1$ in ${\cal{R}}(Q,d)$.
\end{Prop}
{\sc Proof. }
We consider the projection of a representation of $Q$ to the quiver
$Q'$ with vertices $i,i+1,\ldots, j-1,j$ and its subvarieties
$Y_{i,j}$ in ${\cal{R}}(Q,d)$ and $Y'_{i,j}$ in ${\cal{R}}(Q',d')$ defined by
$\operatorname {rk} A_{(i,j)} < r_{i,j}$. Then ${\cal{R}}(Q,d)$ is a direct product of
${\cal{R}}(Q',d')$ with some affine space and $Y_{i,j}$ is a product of
$Y'_{i,j}$ with some affine space. Thus $Y_{i,j}$ is irreducible
precisely when $Y'_{i,j}$ is irreducible. Consequently, it is
sufficient to prove the claim for $Y_{1,t}$ in ${\cal{R}}(Q,d)$.
We now assume $(i,j) = (1,t)$ and $d_l > d_1, d_t$ for any $1 < l < t$.
Now we consider a multisegment $M$ consisting of $[1,t-1] \oplus
[2,t]$ and $M(e)$ for $e = d - (1,2,2,\ldots,2,2,1)$. A computation of
the ranks $r_{i,j}(M)$ yields $r_{1,t}(M) = r_{1,t}-1 $ and
$r_{i,j}(M) = r_{i,j} $ for all $(i,j) \not= (1,t)$. Thus the equation
$s_{1,t} = r_{1,t}-1$ and $s_{i,j} = r_{i,j}$ for $(i,j) \not= (1,t)$
defines an orbit $X_s$ and
$X_s$ is the closure of this orbit containing $M$. Consequently it is
irreducible, and it coincides with $Y_{1,t}$.
Finally, we need to compute the codimension of the orbit closure
$Y_{1,t}$. For this we compute the dimension of the stabilizer of $M(d)$
and of $M$ constructed above. To make the computation easier, we
delete the common direct summands that contribute with the same
dimension to the stabilizer and assume without loss of generality $d_1
\geq d_t$. Then we need to compute
$$
\begin{array}{c}
\dim \operatorname {End}([2,t-1] \oplus [1,t]^a \oplus [1,t-1]^b ) = a^2 + b^2 +
ab + b + 1\\
\dim \operatorname {End}([2,t] \oplus [1,t]^{a-1} \oplus [1,t-1]^{b+1}) = a^2 +
b^2 + ab + 2b + 2.
\end{array}
$$
Back to $M(d)$, we decompose it into $M(d) =
[2,t-1] \oplus [1,t]^a \oplus [1,t-1]^b \oplus M'$ with maximal $a$ and
$b$. Then $M$ is $[2,t] \oplus [1,t]^{a-1} \oplus [1,t-1]^{b+1} \oplus
M'$ and
$$
\begin{array}{l}
-(b + 1) = \dim \operatorname {End}(M(d)) - \dim \operatorname {End}(M) = \\ \dim \operatorname {End}([2,t-1] \oplus [1,t]^a
\oplus [1,t-1]^b) - \dim \operatorname {End}([2,t] \oplus [1,t]^{a-1} \oplus
[1,t-1]^{b+1}).
\end{array}
$$
Consequently, the codimension of the orbit of $M$ equals $b+1 = d_1
-d_t +1$ and
this equals the codimension of $Y_{1,t}$. Finally, note that under the
reduction from arbitrary $Y_{i,j}$ to $Y_{1,t}$ the codimension does
not change.
\hfill $\Box$
\subsection{The reduction process}
\begin{Prop} \label{Pcontain}
a) Assume $(i,j) \notin J(d)$ then there exists some $l$ with $i < l
< j$ and $d_l \leq d_k$ for all $i < k < j$. In particular, $d_l \leq
\mbox{max\,}\{d_i,d_j\}$. In this case we have an inclusion
$Y_{i,j} \subseteq Y_{i,l} \cup Y_{l,j}$. \\
b) If $(i,j) \in J(d) \setminus I(d)$ with $d_i \leq d_j$ then
there exists an $l$ with $l > j$ and $d_i \leq d_l < d_j$. In this
case we obtain $Y_{i,j} \subset Y_{i,l}$. \\
c) If $(i,j) \in J(d) \setminus I(d)$ with $d_i \geq d_j$ then
there exists an $l$ with $l < i$ and $d_j \leq d_l < d_i$. In this
case we obtain $Y_{i,j} \subset Y_{l,j}$.
\end{Prop}
{\sc Proof. }
Without loss of generality we may assume $d_i \leq d_j$ in the proof. \\
a) Consider the maps $A_{(i,j)}: V_j \longrightarrow V_i$, $A_{(i,l)}: V_l \longrightarrow
V_i$, and $A_{(l,j)}: V_j \longrightarrow V_l$. We consider two cases. \\
{\sl Case $d_i \geq d_l$}: Assume $\operatorname {rk} A_{(i,j)} < d_l$. Then
$\operatorname {rk} A_{(i,l)} < d_l$ or $\operatorname {rk} A_{(l,j)} < d_l$, since $A_{(i,j)} =
A_{(i,l)}A_{(l,j)}$ factors through $V_l$ with $\dim V_l \leq \dim
V_i, \dim V_j$; if both factors had rank $d_l$, then so would the composition. \\
{\sl Case $d_i < d_l$}: Assume $\operatorname {rk} A_{(i,j)} < d_i$. Then $\operatorname {rk}
A_{(i,l)} < d_i$ or $\operatorname {rk} A_{(l,j)} < d_l$, since otherwise
$A_{(i,l)}$ and $A_{(l,j)}$ were both surjective and the composition
$A_{(i,j)}$ would have rank $d_i$.
b) Consider the maps $A_{(i,j)}: V_j \longrightarrow V_i$ and $A_{(i,l)}: V_l
\longrightarrow V_i$. Since $(i,j) \in J(d) \setminus I(d)$ there exists some
$l>j$ with $d_i \leq d_l < d_j$ and $(i,l) \in J(d)$. Then from $\operatorname {rk}
A_{(i,j)} < d_i$ follows $\operatorname {rk} A_{(i,l)} < d_i$, since $A_{(i,l)}$
factors through $A_{(i,j)}$.
c) This case is opposite to case b). \hfill $\Box$
\begin{Lemma}\label{Ldecomp}
$$Y = \bigcup_{1 \leq i < j \leq t} Y_{i,j} = \bigcup_{(i,j) \in
J(d)} Y_{i,j} = \bigcup_{(i,j) \in
I(d)} Y_{i,j}.
$$
\end{Lemma}
{\sc Proof. }
The dense orbit is defined by the condition $\operatorname {rk} A_{(i,j)} =
r_{i,j}$. Thus, the complement satisfies $\operatorname {rk} A_{(i,j)} < r_{i,j}$ for
at least one pair $(i,j)$ with $r_{i,j} > 0$. Since $r_{i,j} =
\mbox{min\,}\{d_l\mid i \leq l \leq j \} > 0$ for every pair $(i,j)$, we finish the proof of the
first equality.
To prove the second one we use the proposition above. From Proposition
\ref{Pcontain} a) we obtain $\bigcup_{1 \leq i < j \leq t} Y_{i,j}
\subseteq \bigcup_{(i,j) \in J(d)}Y_{i,j}$ and from part b) and c) $
\bigcup_{(i,j) \in J(d)}Y_{i,j} \subseteq \bigcup_{(i,j) \in
I(d)}Y_{i,j}$.
\hfill $\Box$
\begin{Cor}\label{Cirred}
The variety $Y_{i,j}$ is irreducible precisely when $(i,j) \in J(d)$.
\end{Cor}
{\sc Proof. }
Thanks to Proposition \ref{Pirred}, we only need to prove that
$Y_{i,j}$ is not irreducible for $(i,j)$ not in $J(d)$. Take $(i,j)$
not in $J(d)$; thus there
exists an $l$ with $i < l < j$ such that $d_l$ is minimal among the
entries $d_{i+1},\ldots,d_{j-1}$ and satisfies $d_l \leq \mbox{max\,}\{d_i, d_j\}$. Then
by Proposition \ref{Pcontain} a) we have $Y_{i,j} \subseteq Y_{i,l}
\cup Y_{l,j}$. Assume first that $d_l \leq \min\{d_i,d_j\}.$ We claim that $Y_{i,j} = Y_{i,l}
\cup Y_{l,j}$ and that this is a proper decomposition: neither variety contains the
other. To see the equality, consider any element $A$ in $ Y_{i,l}
\cup Y_{l,j}$. Then $\operatorname {rk} A_{(i,l)} < r_{i,l}$ or $\operatorname {rk} A_{(l,j)} <
r_{l,j}$. Each of these inequalities implies $\operatorname {rk} A_{(i,j)} <
r_{i,j}$. On the other hand, there exists a representation with
$\operatorname {rk} A_{(i,l)} = r_{i,l}$ and $\operatorname {rk} A_{(l,j)} <
r_{l,j}$ and vice versa, proving also the last claim. \\
In the second case $\max\{d_i,d_j\} \geq d_l > \min\{d_i,d_j\}.$ We
construct two different subvarieties that contain $Y_{i,j}$ and none
contains the other. To
simplify the arguments, we assume without loss of generality $d_i \geq
d_l > d_j$ and, using the first case, $d_l$ is the minimal entry of
$d$ between $d_i$ and $d_j$. The
first variety is just the orbit closure $Y_{l,j}$, the second one is defined by
$\operatorname {rk} A_{(i,l)} < r_{i,l}$ and $\operatorname {rk} A_{(i,j)} < r_{i,j}$. Using
multisegments (or rank conditions) one can show that we obtain at
least two irreducible components in this way: $Y_{l,j}$ is
irreducible, while the other variety need not be.
\hfill $\Box$
\section{Further descriptions}\label{Sfurther}
In this section we proceed with the various descriptions of the
irreducible components and the tilting modules started in Section
\ref{Sorbits}. In particular, we use trees and fans to describe the
irreducible components
and we relate our description to the nilpotent
class representations defined in \cite{Hhabil}.
\subsection{Trees and tilting modules}\label{Strees}
Let $T$ be a $3$--regular tree with one root and $t+1$ leaves, where
the leaves are enumerated by $0,1,2,\ldots,t-1,t$. We
denote the set of those trees by ${\cal{T}}_t$. With ${\cal{T}}^1_t$ we denote all
trees that have precisely one vertex with four neighbours, while all other
vertices have three neighbours, and that admit one root and $t+1$
leaves. There is a natural map from ${\cal{T}}^1_t$ to the set of unordered
pairs ${\cal{P}}^2({\cal{T}}_t)$ of trees in ${\cal{T}}_t$, given by ``resolving'' the vertex with four
neighbours and replacing it by two $3$--regular vertices (see Figure 1).
\medskip
\centerline{\includegraphics[height=2cm]{trees.eps}}
\centerline{{\bf Figure 1.} A tree in ${\cal{T}}^1_4$ and the two associated
$3$--regular trees in ${\cal{T}}_4$.}
\medskip
We always draw a tree in the plane and fix the numbering of the leaves
$0,\ldots,t$ from left to right. Two trees are considered to be equal,
if the abstract graphs are isomorphic and the numbering of the leaves
is preserved under the isomorphism. Each vertex $v$ then determines the set of
leaves above it, which is in fact an interval $\{ i_T(v)-1, i_T(v),
\ldots,j_T(v) \}$. In this way, each vertex $v$ defines a segment $[i_T(v),j_T(v)]$.
To any tree in ${\cal{T}}$ or ${\cal{T}}^1$ we can associate multisegments as
follows. Assume $T \in {\cal{T}}_t$ and denote by $T_0$ the set of vertices of $T$; then we define
$$
M_T = \bigoplus_{v \in T_0} [i_T(v),j_T(v)]
$$
to be the direct sum of the segments $[i_T(v),j_T(v)]$ attached to
the vertices $v$. If $S \in {\cal{T}}^1$ and $T^+$ and $T^-$ are the two
associated $3$--regular trees with unique vertex $v^+ \in T^+$ and
$v^- \in T^-$ (these are the only vertices defining a segment that is
not obtained from the other tree), we define
$$
\begin{array}{rl}
M_S & = \bigoplus_{v \in T^+_0}([i_{T^+}(v),j_{T^+}(v)]) \oplus
[i_{T^-}(v^-),j_{T^-}(v^-)] \\
& = \bigoplus_{v \in T^-_0}([i_{T^-}(v),j_{T^-}(v)]) \oplus
[i_{T^+}(v^+),j_{T^+}(v^+)], \mbox{ and } \\
\underline M_S & = \bigoplus_{v \in T_0}[i_{T}(v),j_{T}(v)]
\end{array}
$$
The module $M_S$ for $S \in {\cal{T}}^1$ has $t+1$ pairwise nonisomorphic
direct summands and the module $\underline M_S$ has $t-1$ pairwise
nonisomorphic direct summands.
\begin{Thm}
a) If $T$ is a $3$--regular tree, then $M_T = M(d(T))$ for $d(T) =
\mathrm{\underline{dim}} M_T$. In
particular, $\operatorname {Ext}(M_T,M_T) = 0 $ for any tree $T$ in ${\cal{T}}_t$. \\
b) If $S$ is in ${\cal{T}}^1$ then $\operatorname {Ext}(M_S,M_S) = k$. In particular,
$M_S$ is almost generic. \\
c) If $d$ is generic, then there exists a unique $T \in {\cal{T}}_t$ so that
$M(d)$ and $M_T$ have the same indecomposable direct summands. Thus
$M(d)$ is a direct summand of several copies of $M_T$.\\
d) If $M$ defines the open dense subset in an irreducible component
$Y_{i,j}$ of
$Y$, then there exists some tree $S \in {\cal{T}}^1_t$ so
that $M$ is a direct summand of copies of $M_S$. \\
e) For each multisegment $M$ with $\operatorname {Ext}(M,M) = k$ there exists some
$S \in {\cal{T}}^1$ with $[i_{T^-}(v^-),j_{T^-}(v^-)]$ and
$[i_{T^+}(v^+),j_{T^+}(v^+)]$ as direct summands of $M$.
\end{Thm}
{\sc Proof. }
Using Prop. \ref{Pextgroups} a) one sees immediately that the
segments in $M_T$ satisfy the vanishing condition for the extension
groups. Thus, the only non-vanishing extension groups for $M_S$ come from
the complement of $\underline M_S$, which consists of two
segments. This proves a) and b) (see also the proof in
\cite{Hvol}).
Part c) also follows from the arguments in loc.~cit.:
Each multisegment with vanishing extension group can be completed
to one with $t$ non-isomorphic direct summands. Finally, any
multisegment with precisely $t$ indecomposable summands, all pairwise
non-isomorphic and with vanishing extension group, is isomorphic to
$M_T$ for some $T$ in ${\cal{T}}_t$ and the segments determine $T$ uniquely. \\
Using the description of an almost generic multisegment $M$ in
Prop. \ref{Pextgroups} d) we find two segments $[a,b]$ and $[c,d]$
in $M$ with non-vanishing extension group. Moreover, we can assume
that $M$ has $t+1$ non-isomorphic direct summands (otherwise we add
further ones to $M$). Deleting all summands of the form
$[a,b]$ defines a tree $T^+$, while deleting the other direct summand
$[c,d]$ defines a different tree $T^-$, by c). By construction, both
trees come from a common $S$ in ${\cal{T}}^1_t$ so that $M_S$ and $M$ contain
the same indecomposable direct summands up to isomorphism. This proves
d). The two
summands in e) are just $[a,b]$, respectively $[c,d]$.
\hfill $\Box$
\subsection{Nilpotent class representations}
There is an obvious formula for the number $N(d)$ of orbits in ${\cal{R}}(Q,d)$. We
just count the number of multisegments $\oplus [i,j]^{a_{i,j}}$
defined by a non-strict triangle $a = (a_{i,j})_{1 \leq i\leq j \leq
t}$
$$
N(d) = \sharp \{ a = (a_{i,j})_{1 \leq i \leq j \leq t}\mid a_{i,j}
\in {\Bbb Z}_{\geq 0}, \quad \sum_{i,j} a_{i,j} \mathrm{\underline{dim}} [i,j] = d \}.
$$
This function is also called {\sl Kostant's partition function} for
type ${\Bbb A}$.
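As a sanity check, $N(d)$ can be evaluated by brute force directly from this definition. The following Python sketch (illustrative only, and feasible only for small $d$) enumerates the multiplicity triangles $a$:
\begin{verbatim}
# Sketch: brute-force evaluation of N(d), enumerating the
# multiplicities a_{i,j} of the segments [i,j] (0-based here).
def N(d):
    t = len(d)
    segs = [(i, j) for i in range(t) for j in range(i, t)]

    def rec(k, rem):
        if k == len(segs):
            return 1 if all(r == 0 for r in rem) else 0
        i, j = segs[k]
        total = 0
        for a in range(min(rem[i:j + 1]) + 1):  # copies of [i,j] that still fit
            total += rec(k + 1,
                         rem[:i] + [r - a for r in rem[i:j + 1]] + rem[j + 1:])
        return total

    return rec(0, list(d))

# N([1, 1]) == 2: the two orbits for A_2 and d = (1,1), of rank 1 and rank 0.
\end{verbatim}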
For large $d$ it is not efficiently computable, so a simpler formula is
desirable. To this end we define numbers $\operatorname{NA}(\lambda, \mu)$ for any two
partitions $\lambda$ of $b > 0$ and $\mu$ of $c > 0$:
$$
\operatorname{NA}(\lambda, \mu) = \left\{
\begin{array}{ll}
\prod_{l=1}^{\infty} (\sharp \{i\mid \lambda_i = \mu_i = l \} + 1) &
\mbox{ if } |\lambda_i - \mu_i| \leq 1 \mbox{ for all } i \\
0 & \mbox{ if } |\lambda_i - \mu_i| \geq 2 \mbox{ for some } i.
\end{array}
\right.
$$
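In Python, $\operatorname{NA}(\lambda,\mu)$ can be written down directly from this definition. Here is a minimal sketch, assuming partitions are given as lists of parts that we sort decreasingly and pad with zeros before the index-wise comparison:
\begin{verbatim}
# Sketch: NA(lambda, mu) as defined above; parts are compared
# index-wise after sorting decreasingly and padding with zeros.
def NA(lam, mu):
    n = max(len(lam), len(mu))
    lam = sorted(lam, reverse=True) + [0] * (n - len(lam))
    mu = sorted(mu, reverse=True) + [0] * (n - len(mu))
    if any(abs(a - b) >= 2 for a, b in zip(lam, mu)):
        return 0
    prod = 1
    for l in range(1, max(lam) + 1):
        prod *= sum(1 for a, b in zip(lam, mu) if a == b == l) + 1
    return prod
\end{verbatim}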
\begin{Prop}\label{PNA}
The number of multisegments coincides with the sum, taken over all
sequences of partitions $(\lambda^1, \ldots, \lambda^t)$ with
$\lambda^i$ a partition of $d_i$ and with $\lambda^1 = (1)^{d_1}$ and
$\lambda^t = (1)^{d_t}$ both trivial, of the products of the
numbers $\operatorname{NA}(\lambda^i, \lambda^{i+1})$:
$$
N(d) = \sum_{(\lambda^1, \ldots, \lambda^t)} \prod_{i=1}^{t-1}\operatorname{NA}(\lambda^i, \lambda^{i+1}).
$$
\end{Prop}
{\sc Proof. }
We only mention the idea of the proof, the details can be found in
\cite{Hhabil}, Section 4.2. First we consider the preprojective
algebra $\Pi_t$ of ${\Bbb A}_t$ and the cyclic quiver $\widetilde {\Bbb A}_1$
with two vertices together with the natural projection maps
$$
\times_{i=1}^{t-1} {\cal{R}}(\widetilde {\Bbb A}_1,(d_i,d_{i+1})) \stackrel{p_1}{\longleftarrow} {\cal{R}}(\Pi_t,d)
\stackrel{p_2}{\longrightarrow} \times_{i=1}^t {\cal{R}}(k[T]/T^{d_i}).
$$
If we denote an element in ${\cal{R}}(\Pi_t,d)$ by $(A,B)$, then it satisfies
$B_1A_1 = A_1B_1 - B_2A_2 = \ldots = A_{t-2}B_{t-2} - B_{t-1}A_{t-1} = A_{t-1}B_{t-1}
= 0$, where $A_i:V_i \longrightarrow V_{i+1}$ and $B_i: V_{i+1} \longrightarrow V_i$ for $i = 1,\ldots,t-1$. The
projections are defined by
$$
p_1:(A,B) \mapsto ((A_1,B_1), \ldots,(A_{t-1},B_{t-1})) \mbox{ and } $$
$$p_2: (A,B) \mapsto
(B_1A_1, A_1B_1, \ldots,A_{t-1}B_{t-1}).
$$
In particular, each element $(A,B)$ defines a sequence of partitions
$(\lambda^1,\ldots, \lambda^t)$, given by the nilpotent classes of
$B_1A_1,A_1B_1,\ldots,A_{t-1}B_{t-1}$. By
definition, $\lambda^1$ and $\lambda^t$ are always the trivial ones
corresponding to the zero matrix. It is known (see \cite{KnightZelevinsky} or
\cite{Piasetsky}) that ${\cal{R}}(\Pi_t,d)$ is equidimensional and the irreducible
components are in bijection with the $\mbox{Gl}(d)$--orbits on
${\cal{R}}({\Bbb A}_t,d)$. Thus $N(d)$ is just the number of irreducible
components in ${\cal{R}}(\Pi_t,d)$. Now we determine the irreducible
components in a
different way using the projection maps above. First note that
$\operatorname{NA}(\mu, \lambda)$ is the number of irreducible components of
$$
{\cal{R}}(\widetilde {\Bbb A}_1,(\mu, \lambda)) = \{(A,B)\mid AB \in C(\mu), BA
\in C(\lambda)\}.
$$
For a fixed sequence of
partitions $(\lambda^1,\ldots,\lambda^t)$, the number of irreducible
components lying over it is the product
$\operatorname{NA}(\lambda^1, \lambda^2)\cdot \ldots \cdot \operatorname{NA}(\lambda^{t-1}, \lambda^{t})$; summing over all
such sequences of partitions with $\lambda^1$ and $\lambda^t$ trivial yields the total number.
\hfill $\Box$
The advantage of the formula above is twofold. First, it is
independent of the orientation of the quiver. We can, for any
orientation of the quiver of type ${\Bbb A}_t$ define such a sequence of
partitions. Secondly, the formula in Prop. \ref{PNA} is much more
efficient than a naive enumeration. \\
Note that for the generic
representation $M$ of a quiver of type ${\Bbb A}_t$ with an arbitrary
orientation the corresponding sequence of partitions is just the
trivial one (all $\lambda^i$ correspond to the zero matrix).
\subsection{The fan and the volume}
The sets of trees ${\cal{T}}_t$ and ${\cal{T}}^1_t$ define a graph $\Gamma_t$ that is the
dual graph of the simplicial complex of tilting modules defined in
\cite{RiedtmannSchofield2}. This simplicial complex has a natural
realisation as a fan $\Sigma$ in the positive quadrant $K_{{\Bbb R}}^+$ of the real
Grothendieck group $K_0$, where $K_{{\Bbb R}}^+: = {\Bbb R}_{\geq 0}^t \subset {\Bbb R}^t \simeq
K_0 \otimes
{\Bbb R}$. This fan is described in \cite{Hvol}. From the fan, one can
again determine the irreducible components in a simple way.
\medskip
We start to define the graph $\Gamma = \Gamma_t$. The vertices
$\Gamma_0$ are just the trees in ${\cal{T}}_t$. The set of edges is
${\cal{T}}^1_t$. The end points of the edge $S$ consists of the two
resolutions $T^+$ and $T^-$ of $S$.
\medskip
Then we recall the definition of the fan $\Sigma$. For a precise
definition of a fan,
some first properties and applications we refer to
\cite{Fultontoric}. Note first, that a fan
$\Sigma$ is a
finite collection of rational, strongly convex, polyhedral
cones that satisfy two conditions: \\
F1) each face of a cone in $\Sigma$ is in $\Sigma$ and \\
F2) the intersection of two cones in $\Sigma$ is a face of both.
\medskip
Note that we only need finite fans;
for tame and wild quivers one needs to allow infinite ones as well.
For each $T \in {\cal{T}}_t$ we define a cone $\sigma_T \subset K_{{\Bbb R}}^+$
as the cone spanned by the dimension vectors of the indecomposable
direct summands of $M_T$
$$
\sigma_T := \sum \left\{ {\Bbb R}_{\geq 0}\,\mathrm{\underline{dim}} [i,j] \mid [i,j] \mbox{ is a direct
summand of } M_T \right\}.
$$
Two cones $\sigma_{T^+}$ and $\sigma_{T^-}$ have a common facet,
precisely when there exists a tree $S \in {\cal{T}}^1_t$ with corresponding
trees $S^+ = T^+$ and $S^- = T^-$. The fan $\Sigma$ consists of all
cones, generated by dimension vectors of indecomposable direct
summands of a rigid multisegment $M$ (that is $\operatorname {Ext}(M,M) = 0$).
Already the cones $\sigma_T$ determine the fan
$\Sigma$ consisting of all the cones $\sigma$ that are faces of a cone
$\sigma_T$ (including the cones $\sigma_T$ themselves). We recall the
main result from \cite{Hvol} together with some
easy consequences.
\begin{Thm}\label{Tfan}
a) The cones $\sigma \in \Sigma$ are all generated by a part of a
${\Bbb Z}$--basis (they are {\sl smooth} cones). \\
b) The union of the cones $\sigma_T$ (that is the same as the union of
all cones in $\Sigma$) covers $K_{{\Bbb R}}^+$. For two cones in $\Sigma$
their intersection is a face of both (and it is also in
$\Sigma$). Each cone is a face of a $t$--dimensional cone and each
$t$--dimensional cone equals $\sigma_T$ for some $T \in {\cal{T}}_t$. \\
c) A dimension vector $d$ is generic, precisely when it is in the
interior of some cone $\sigma_T$. Consequently, for $d$ generic, $T$
is uniquely determined by $d$. \\
d) For each dimension vector $d$ there exists a unique cone $\sigma
\in \Sigma$ with $d \in \sigma$ such that no proper face of $\sigma$ contains
$d$. This is equivalent to saying that $d$ is an element of the relative
interior of the cone $\sigma$. Moreover, the cone $\sigma$ is
generated as a cone by the dimension vectors of the indecomposable
direct summands of $M(d)$. In particular, the dimension of $\sigma$ is
the number of pairwise non-isomorphic indecomposable direct summands
of $M(d)$.\\
e) The dual graph of the $t$--dimensional cones and the
$(t-1)$--dimensional cones, occurring as an intersection of two
$t$--dimensional cones, is $\Gamma_t$. \\
f) Each $t$--dimensional cone has precisely $t-1$ neighbours, that is
$\Gamma_t$ is $(t-1)$--regular.
\end{Thm}
The proof can be found in \cite{Hvol}.
\subsection{Irreducible components and the fan $\Sigma$}
Using the fan, we can again determine the irreducible components in
$Y$. Note that $d$ may be contained in several maximal cones. We
denote the set of trees $T \in {\cal{T}}_t$ with $d \in \sigma_T$ by ${\cal{T}}(d)$.
Assume $d$ is in the relative interior of a
facet $\sigma_S$ that is the intersection of the two $t$--dimensional
cones $\sigma_{T^+}$ and $\sigma_{T^-}$. Note that $S$ is a tree in
${\cal{T}}^1_t$ (compare with section \ref{Strees}). Then $S$ determines two
segments $[a_S,b_S]$ and $[c_S,d_S]$, namely the unique segments
not in $\sigma_S$, with one in $\sigma_{T^+}$ and the other in $\sigma_{T^-}$.
Then we obtain one component
just by adding to $[a_S,b_S] \oplus [c_S,d_S]$ the unique generic
complement: $M(d') \oplus [a_S,b_S] \oplus [c_S,d_S].$ In this way, each inner
facet $\sigma_S$
defines a unique irreducible component $Y_S$.
If $d$ is generic, that is ${\cal{T}}(d)$ consists of just one tree $T$,
then the components in $Y$ correspond to the $t-1$ neighbouring
cones. In fact, each neighbouring cone of $\sigma_T$ has a common facet
$\sigma_S$ with
$\sigma_T$, and the component constructed above also defines a
component for the dimension vector $d$. In this way, we obtain
precisely $t-1$ components. It remains to show that they are pairwise
different. Decompose $M(d) = [1,t]^a \oplus \bigoplus [i,j]^{a_{i,j}}$
into $t$ indecomposable, pairwise nonisomorphic, direct
summands. Since there is a unique cone $\sigma_T$ containing $d$,
$M(d)$ has precisely $t$ pairwise nonisomorphic direct
summands by Theorem \ref{Tfan}. Then for each segment $[k,l] \not = [1,t]$
with $a_{k,l} \not= 0$ there exists a unique $S$ so that $[k,l]$
coincides with one of the segments, say $[a_S,b_S]$, constructed
above. Then we obtain a component $Y_S$ just by adding to $[a_S,b_S]
\oplus [c_S,d_S]$ the unique generic complement. Since $[a_S,b_S]$ and
$ [c_S,d_S]$ determine $S$, we obtain the desired $t-1$ irreducible components.
\begin{Thm} \label{ThmGeneric}
Let $d$ be generic with $d$ in the interior of a $t$--dimensional
cone $\sigma \in \Sigma$. Then the irreducible components $Y_{i,j}$ of
$Y$ are in bijection with the set of trees $T$ that define a cone with
a common facet $\sigma_S$ with $\sigma$. For such a tree, take the
two unique segments $[a_S,b_S]$ and $[c_S,d_S]$ (with $a_S < c_S \leq
b_S + 1 < d_S + 1$) in $\sigma_T \cup
\sigma$ with nontrivial extension. Then $(i,j) = (a_S,d_S)$. \\
\end{Thm}
\section{Examples} \label{Sexample}
In general, $M(d)$ can have fewer than $t$ pairwise non-isomorphic
direct summands. Also, the codimension of the components and the number
of components in $Y$ can vary. We discuss several examples, where we
have more precise results. This includes the pure case (all
components have codimension one), the generic case (which contains all
dimension vectors that do not lie on a proper face of a cone in the
fan $\Sigma$),
the concave case, and, finally, the convex case.
\subsection{Generic dimension vectors}
For $d$ generic, we always have $t$
indecomposable direct summands and $d$ lies in the interior of a cone
$\sigma_T$ in the fan $\Sigma$ for some $T \in {\cal{T}}_t$.
\begin{Prop}
Assume $d$ is a generic dimension vector. Then $Y$ consists of $t-1$
irreducible components, all have codimension at least $2$.
\end{Prop}
{\sc Proof. } This follows directly from Theorem \ref{ThmGeneric}.
\hfill $\Box$
\subsection{Concave dimension vectors}
In the concave case (that is $d_1 \geq d_2 \geq \ldots \geq d_{a-1} \geq
d_a \leq d_{a+1} \leq \ldots \leq d_t$) there are also always $t-1$
components. Moreover $I(d)$ can easily be described.
\begin{Prop} Assume $d$ is a concave dimension vector with $d_i > 0$ for all $i$. \\
a) The sets $I(d)$ and $J(d)$ both coincide with $\{(1,2),(2,3),\ldots,
(t-2,t-1),(t-1,t)\}.$\\
b) There are precisely $t-1$ irreducible components $Y_{i,i+1}$ for
$i=1,\ldots, t-1$. The
codimension of $Y_{i,i+1}$ equals $|d_i - d_{i+1}| + 1$.\\
c) The variety $Y_{i,j}$ is irreducible precisely when $j = i + 1$. \\
d) The dimension vector $d$ is generic precisely when for all $i < j$
with $d_i = d_j$ there exists some $l$ with $i < l < j$ and $d_l < d_i
= d_j$. \\
e) The components in $Y$ of codimension one correspond to pairs
$(i,i+1)$ with $d_i = d_{i+1}$.
\end{Prop}
{\sc Proof. } We use our main theorem together with
the methods obtained in Section 4. \hfill$\Box$
{\sc Example. }
We consider $d = (d_1,\ldots,d_7) =(5,4,3,1,2,4,6)$ and get $I(d) = \{
(1,2),(2,3), \ldots ,(5,6),(6,7) \}$. In this case all components have
codimension at least $2$ and $d$ is generic. The corresponding
multisegment $M(d)$ is $[1,7] \oplus [1,3]^2 \oplus [1,2] \oplus [1,1]
\oplus [5,7] \oplus [6,7]^2 \oplus [7,7]^2.$
\medskip
\centerline{\includegraphics[height=2cm]{11.eps}}
\centerline{ {\bf Figure 2.} $M(5,4,3,1,2,4,6)$}
\medskip
and the almost generic multisegments (corresponding to minimal
degenerations) look like (all are different and minimal)
\medskip
\centerline{\includegraphics[width=10cm]{11deg.eps}}
\centerline{ {\bf Figure 3.} The minimal degenerations of $M(5,4,3,1,2,4,6)$}
\medskip
Thus we get the following irreducible components
$Y_{i,i+1} := \{A \mid \operatorname {rk} A_{(i,i+1)} < \mbox{min\,} \{ d_i,d_{i+1}
\} \}$ of codimension $2$ and $3$, respectively.
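The multisegment $M(d)$ in this example can also be recovered mechanically from the generic ranks $r_{i,j} = \mbox{min\,}\{d_i,\ldots,d_j\}$: for a direct sum of segments one has $\operatorname {rk} A_{(i,j)} = \sum_{[a,b] \supseteq [i,j]} a_{a,b}$, so inclusion-exclusion yields the multiplicities. The following Python sketch is our illustration of this inversion, not a construction taken from the paper:
\begin{verbatim}
# Sketch: multiplicities a_{i,j} of M(d) by Moebius inversion of the
# generic ranks r_{i,j} = min(d_i, ..., d_j), with r = 0 out of range.
def generic_multisegment(d):
    t = len(d)

    def r(i, j):  # 1-based
        return min(d[i - 1:j]) if 1 <= i <= j <= t else 0

    def a(i, j):
        return r(i, j) - r(i - 1, j) - r(i, j + 1) + r(i - 1, j + 1)

    return {(i, j): a(i, j) for i in range(1, t + 1)
            for j in range(i, t + 1) if a(i, j) > 0}

# generic_multisegment((5, 4, 3, 1, 2, 4, 6)) reproduces M(d) above:
#   {(1,1): 1, (1,2): 1, (1,3): 2, (1,7): 1, (5,7): 1, (6,7): 2, (7,7): 2},
# i.e. [1,7] + [1,3]^2 + [1,2] + [1,1] + [5,7] + [6,7]^2 + [7,7]^2;
# d is generic precisely when t distinct pairs occur (here t = 7).
\end{verbatim}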
\subsection{Unimodular (convex) dimension vectors}
For a unimodular dimension vector (or a convex one) we have the
opposite inequalities $d_1 \leq
d_2 \leq \ldots \leq d_{a-1} \leq d_a \geq d_{a+1} \geq \ldots \geq d_t$.
In the unimodular case there can be fewer than $t-1$ components. To
illustrate this
we give two examples, one with $t-1$ components and one with $(t-1)/2$
components, which is the minimal number of components. Moreover, it is
convenient to exclude the previous case of a concave dimension vector
in what follows. We denote by
$d^+$ the maximal entry in $d$.
\begin{Prop} Let $d$ be a unimodular, sincere dimension vector that
is not concave. \\
a) $I(d) \not= J(d)$. \\
b) There are precisely $t-1$ irreducible components in $Y$ if and only
if $d_i = d_j$ implies $i = j-1$ or $i = j+1$.
\end{Prop}
{\sc Proof. }
Under the assumptions we have a maximal entry $d^+$ in $d$ with $d_1
\not= d^+$ and $d_t \not= d^+$. Then $(1,2),(2,3), \ldots,(t-1,t)$ are
all in $J(d)$. If they are also in $I(d)$, then $I(d)$ is just the set
of all pairs $(i,i+1)$. Then we get $d_i = d_{i+1}$
for all $i$, a contradiction to our assumption.
\hfill $\Box$
\medskip
{\sc Example 1.}
We consider $d = (d_1,\ldots,d_7) =(1,3,5,7,6,4,2)$. The corresponding
multisegment is
\medskip
\centerline{\includegraphics[height=2cm]{exuni1.eps}}
\centerline{ {\bf Figure 4.} $M(1,3,5,7,6,4,2)$}
\medskip
and its minimal degenerations look like (all are different and irreducible)
\medskip
\centerline{\includegraphics[width=10cm]{exuni1a.eps}}
\centerline{ {\bf Figure 5.} The minimal degenerations of $M(1,3,5,7,6,4,2)$}
\medskip
{\sc Example 2.}
The other extreme case is to have only $(t-1)/2$ components. This
forces many entries of $d$ to coincide, with all entries between
two equal entries being larger. Thus, in the convex case we may consider
$d = (1,2,4,5,4,2,1)$ with $I(d)=\{ (1,7), (2,6), (3,5) \}$ and generic
multisegment $[1,7]\oplus[2,6]\oplus[3,5]^2\oplus[4,4]$.
\medskip
\centerline{\includegraphics[height=2cm]{exuni2.eps}}
\centerline{ {\bf Figure 6.} $M(1,2,4,5,4,2,1)$}
\medskip
and its minimal degenerations look like (all are different and irreducible)
\bigskip
\centerline{\includegraphics[width=10cm]{exuni2a.eps}}
\centerline{ {\bf Figure 7.} The minimal degenerations of $M(1,2,4,5,4,2,1)$}
\medskip
\subsection{Pure dimension vectors}
A characterization of all dimension vectors $d$ with $Y$
equidimensional seems to be quite technical. So we restrict the result
to the codimension one case. We define $d$ to be pure if for all
elements $(i,j)$ in $I(d)$ we have $d_i = d_j$. The following
proposition is easy to check.
\begin{Prop}
a) The complement $Y$ of the dense orbit in ${\cal{R}}(Q,d)$ is equidimensional
of codimension $1$ precisely when $d$ is pure. \\
b) Assume $d$ is pure; then $J(d)$ contains all pairs $(i,i+1)$ and
$I(d)$ only consists of the pairs $(i,j)$ with $d_i = d_j$ and $d_l >
d_i = d_j$ for all $i < l < j$.
\end{Prop}
{\sc Example. }
We have already seen a pure example that is also convex in Section
5.3, Example 2. So we
consider a pure one that is not convex. Let $d$ be $(1,2,3,5,3,2,3,2,1)$.
\medskip
\centerline{\includegraphics[height=2cm]{expur1.eps}}
\centerline{{\bf Figure 8.} $M(1,2,3,5,3,2,3,2,1)$ }
\medskip
The components are given by $I(1,2,3,5,3,2,3,2,1) =
\{(1,9),(2,6),(6,8),(3,5)\}$.
\medskip
\centerline{\includegraphics[height=2cm]{expur1a.eps}}
\centerline{{\bf Figure 9.} The minimal degenerations of
$M(1,2,3,5,3,2,3,2,1)$ }
\medskip
\section{Parabolic group actions}\label{Sparabol}
The results in this note are inspired by the description of the
complement of the Richardson orbit (the dense orbit) for the action of
a parabolic subgroup in $\mbox{Gl}_N$ on its unipotent radical as
considered recently in \cite{BH1}. We explain the common idea and some
generalizations.
\subsection{The Richardson orbit}
Given a dimension vector $d$ as above, then we define a group
$$
P(d) := \{ f \in \mbox{Aut\,}(\oplus_{i=1}^t V_i) \mid f(V_j) \subseteq
\oplus_{i=1}^j V_i \mbox{ for all } j=1,\ldots,t \}
$$
and a vector space
$$
{\mathfrak p}_u(d) := \{ f \in \operatorname{End}(\oplus_{i=1}^t V_i) \mid f(V_j) \subseteq
\oplus_{i=1}^{j - 1} V_i \mbox{ for all } j=1,\ldots,t \}.
$$
The group $P(d)$ is a standard parabolic subgroup in the General
Linear Group and ${\mathfrak p}_u(d)$ is the Lie algebra of the unipotent radical
of $P(d)$. The group $P(d)$ acts on ${\mathfrak p}_u(d)$, on its derived Lie
algebras
$$
{\mathfrak p}_u(d)^{(l)} := \{ f \in \operatorname{End}(\oplus_{i=1}^t V_i) \mid f(V_j) \subseteq
\oplus_{i=1}^{j - l - 1} V_i \mbox{ for all } j=1,\ldots,t \},
$$
and also on the quotients ${\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$ (and
${\mathfrak p}_u(d)^{(k)}/{\mathfrak p}_u(d)^{(l)}$ for $k < l$) via conjugation. By a
classical result of Richardson (\cite{Rich}) the group $P(d)$ acts
with a dense orbit on ${\mathfrak p}_u(d)$ and consequently also with a dense
orbit on
${\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$ for all $l > 1$. In \cite{BH1} we describe
the complement of the dense orbit explicitly using certain rank
conditions on the matrix $A \in {\mathfrak p}_u(d)$. Thus it is desirable to
obtain a similar elementary description of the complement of the dense
orbit in the case $k = l -1$. In fact this case corresponds to a
disjoint union of equioriented Dynkin quivers of type ${\Bbb A}$.
The case ${\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$ can be handled using a variation
of the line diagrams introduced in \cite{BH1}.
In contrast, the case ${\mathfrak p}_u(d)^{(l)}$ is still open in general. It
is not even known for general $d$ whether there exists a dense
orbit. A first idea to attack the problem can be found in
\cite{Hvol}.
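For concreteness, the block pattern of ${\mathfrak p}_u(d)^{(l)}$ can be read off directly from the definition above. The following Python sketch (an illustration of the block structure only, not of the group action) builds a boolean mask of the allowed matrix entries:
\begin{verbatim}
# Sketch: boolean mask of p_u(d)^{(l)} inside End(V_1 + ... + V_t).
# Block (i,j) is allowed iff i <= j - l - 1, so l = 0 gives p_u(d).
import numpy as np

def pu_mask(d, l=0):
    t, n = len(d), sum(d)
    starts = np.cumsum([0] + list(d))
    mask = np.zeros((n, n), dtype=bool)
    for j in range(t):          # block column j (0-based)
        for i in range(j - l):  # block rows i with i <= j - l - 1
            mask[starts[i]:starts[i + 1], starts[j]:starts[j + 1]] = True
    return mask
\end{verbatim}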
\subsection{Irreducible components for the quotients
${\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$}
The combinatorics of line diagrams also allows one to describe the
components of the complement of the dense orbit in all quotients
${\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$. For this, we define subvarieties $Z_{i,j}
\subset {\mathfrak p}_u(d)/{\mathfrak p}_u(d)^{(l)}$ by certain rank conditions as in
\cite{BH1}. We claim that there are sets $I^{(l)}(d)
\subseteq J^{(l)}(d)$ so that \\
i) $Z_{i,j}$ is irreducible precisely when $(i,j) \in J^{(l)}(d)$ and
\\
ii) $Z_{i,j}$ is an irreducible component in the complement of the
Richardson orbit, if and only if $(i,j) \in I^{(l)}(d)$.
\medskip
The definition of $Z_{i,j}$ and the construction of the sets $I^{(l)}(d)
\subseteq J^{(l)}(d)$ can be read off from line diagrams with
connections of length at most $l$. Note that this specializes to the
situation in this note for $l=1$ and to the construction in
\cite{BH1} for $l$ sufficiently large.
| {
"timestamp": "2010-05-12T02:02:19",
"yymm": "1005",
"arxiv_id": "1005.1919",
"language": "en",
"url": "https://arxiv.org/abs/1005.1919",
"abstract": "Let $\\Aa_t$ be the directed quiver of type $\\Aa$ with $t$ vertices. For each dimension vector $d$ there is a dense orbit in the corresponding representation space. The principal aim of this note is to use just rank conditions to define the irreducible components in the complement of the dense orbit. Then we compare this result with already existing ones by Knight and Zelevinsky, and by Ringel. Moreover, we compare with the fan associated to the quiver $\\Aa$ and derive a new formula for the number of orbits using nilpotent classes. In the complement of the dense orbit we determine the irreducible components and their codimension. Finally, we consider several particular examples.",
"subjects": "Representation Theory (math.RT)",
"title": "On the complement of the dense orbit for a quiver of type $\\Aa$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877023336243,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7089594699097831
} |
https://arxiv.org/abs/2206.10745 | Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning | We propose derivative-informed neural operators (DINOs), a general family of neural networks to approximate operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest. After discretizations both inputs and outputs are high-dimensional. We aim to approximate not only the operators with improved accuracy but also their derivatives (Jacobians) with respect to the input function-valued parameter to empower derivative-based algorithms in many applications, e.g., Bayesian inverse problems, optimization under parameter uncertainty, and optimal experimental design. The major difficulties include the computational cost of generating derivative training data and the high dimensionality of the problem leading to large training cost. To address these challenges, we exploit the intrinsic low-dimensionality of the derivatives and develop algorithms for compressing derivative information and efficiently imposing it in neural operator training yielding derivative-informed neural operators. We demonstrate that these advances can significantly reduce the costs of both data generation and training for large classes of problems (e.g., nonlinear steady state parametric PDE maps), making the costs marginal or comparable to the costs without using derivatives, and in particular independent of the discretization dimension of the input and output functions. Moreover, we show that the proposed DINO achieves significantly higher accuracy than neural operators trained without derivative information, for both function approximation and derivative approximation (e.g., Gauss-Newton Hessian), especially when the training data are limited. | \section{Introduction}
The so-called ``neural operators'' have gained significant attention in recent years due to their ability to approximate high-dimensional parametric maps between function spaces, and have become a major research topic in scientific machine learning (SciML) \cite{BhattacharyaHosseiniKovachkiEtAl20,FrescaManzoni2022,KovachkiLiLiuEtAl2021,LiKovachkiAzizzadenesheliEtAl2020a,LiKovachkiAzizzadenesheliEtAl2020b,OLeary-RoseberryVillaChenEtAl22,OLearyRoseberryDuChaudhuriEtAl2021}. In this work we develop efficient algorithms for the construction and deployment of ``derivative-informed neural operators'' (DINOs), which are faithful not only for the approximation of the parametric maps but also for their derivatives with respect to the input parameters. The inclusion of derivative information into the training of neural operators can improve the generalization accuracy of the underlying parametric map by enforcing that the neural operator respect the smoothness of the map at training data points, as in Hermite basis interpolation. Additionally the incorporation of this information improves the accuracy of the parametric derivative operator approximation, which increases the scope of their deployment to algorithmic contexts that require accurate derivative information. We use the term neural operator here in a broader context than the approximation of maps between function spaces and allow approximation of maps between any high-dimensional input and output. In this context, the derivatives are not directly approximated as matrices or tensors, but are instead accessed ``matrix-free'' through differentiation of the neural operator.
In the typical setting, a model is parameterized by parameters $m \in \mathbb{R}^{d_M}$ with
probability distribution $m \sim \nu$, which are mapped to outputs $q(m) \in \mathbb{R}^{d_Q}$ through an implicit dependence on the solution of
a model equation for the state variable $u \in \mathbb{R}^{d_U}$:
\begin{equation}\label{eq:state}
\underbrace{q(m) = q(u(m))}_\text{Q.o.I. mapping} \text{ where } u \text{ depends on } m \text{ through } \underbrace{R(u,m) = 0}_\text{State model},
\end{equation}
where the dimensions $d_M, d_Q, d_U \in \mathbb{N}$ can be very high.
These parametric maps usually arise via the discretization of partial differential equations, where $m,u$ represent high-dimensional approximations of infinite dimensional field quantities. We consider quantities of interest (Q.o.I.s) that are generic functions of the state variable. Examples include parametric state maps (in the case that the full state is taken to be the Q.o.I.), as well as low-dimensional quantities such as sparse observations of the state, or other derived quantities.
The neural operator $f_w(m) = f(m,w):\mathbb{R}^{d_M}\times \mathbb{R}^{d_W}\rightarrow \mathbb{R}^{d_Q}$ parametrized by both $m$ and the weights variable $w \in \mathbb{R}^{d_W}$ is a fast-to-evaluate surrogate for the map $m\mapsto q$. The neural operator ``learns'' an approximation of the map by minimizing a data and/or physics informed loss function via nonlinear stochastic optimization with respect to $w$.
When the learning yields a sufficiently accurate approximation, these surrogates offer the possibility to accelerate and extend scientific inquiry and engineering problem solving by making tractable the solution of so-called ``outer-loop'' problems. Outer-loop problems, including forward uncertainty quantification, (Bayesian) inverse problems, optimal design and control, and optimal experimental design, often require repeated evaluations of the map $m\mapsto q$ as well as its derivatives. The scalability of most algorithms to solve outer-loop problems is limited by the costs of repeatedly computing the high-dimensional map for complex models whose solution is expensive to obtain. Outer-loop problems offer an ideal venue for the deployment of neural operators, since in this setting it is possible to amortize the offline costs of training data generation through repeated evaluations of the comparatively inexpensive surrogate.
The development of parametric surrogate modeling in SciML has focused on approximating the functional relationship $m \mapsto q$ over the distribution $\nu$. Since the derivative approximation is not addressed directly in this context, it is generally unreasonable to expect derivatives $\nabla_m f_w$ to be reliable approximations of $\nabla q$\footnote{Consider for example the Weierstrass function which is everywhere continuous and nowhere differentiable.}. This shortcoming limits the deployment of the neural operators in outer-loop algorithms, since it restricts one to the use of ``derivative-free'' methods. The scalable solution of high-dimensional outer-loop problems often \emph{requires} derivative information, since this information detects map sensitivities that can make problems effectively low-dimensional. This property has been observed and used in many outer-loop problems such as model reduction for sampling and deep learning
\cite{AlgerChenGhattas20,BashirWillcoxGhattasEtAl08,ChenGhattas19a,OLearyRoseberry2020,OLeary-RoseberryVillaChenEtAl22},
optimization under uncertainty
\cite{AlexanderianPetraStadlerEtAl17,ChenHabermanGhattas21,ChenVillaGhattas19}, Bayesian inverse problems
\cite{BeskosGirolamiLanEtAl17,BrennanBigoniZahmEtAl20,Bui-ThanhBursteddeGhattasEtAl12,Bui-ThanhGhattas14,
Bui-ThanhGhattasMartinEtAl13,ChenGhattas20,ChenVillaGhattas17,ChenWuChenEtAl19,CuiLawMarzouk16,FlathWilcoxAkcelikEtAl11,IsaacPetraStadlerEtAl15,ZahmCuiLawEtAl18},
and Bayesian optimal experimental design
\cite{AlexanderianPetraStadlerEtAl16,CrestelAlexanderianStadlerEtAl17,WuChenGhattas20,WuChenGhattas21,WuOLearyRoseberryChenEtAl22}.
In this work we address the computational challenges associated with including derivative information in the neural operator training. We first formulate a derivative-informed training problem over the parameter distribution $\nu$, where the loss function includes a term for the neural operator approximation error of the (vector) output $q$ and a term for the neural operator approximation error of the (matrix) derivative of the output $\nabla q $.
The major computational issues for constructing derivative-informed neural operators arise due to the offline costs of generating Jacobian training data and the online computational and memory complexity of incorporating the derivative information into the optimization problem, since this formally requires evaluating the difference of two high-dimensional matrices of size $d_Q \times d_M$ at training points.
We propose to overcome these computational challenges by using a compressed representation of the Jacobian $\nabla q$. We target a broad class of problems for which the Jacobian at a point has low rank $r$; in this case generating Jacobian training data requires $O(r)$ linear Jacobian matrix-vector products. For highly nonlinear high-dimensional maps this computation is asymptotically cheap in comparison to the evaluation of $m \mapsto q$. This dimension reduction additionally allows us to reduce the online computational and memory costs of including derivative information in the optimization problem from $O(d_Q d_M)$ to $O(r^2)$, where $r$ is the intrinsic dimension of the derivative information.
We begin by proposing the use of the reduced SVD of the Jacobian, which allows us to impose the dominant derivative curvature information in the $r$ dimensional subspaces spanned by the left and right singular vectors of the Jacobian. This strategy is highly scalable, and improves the parametric map approximation by imposing dominant curvature conditions, but is itself insufficient to achieve accurate Jacobian approximations since it leaves the Jacobian nullspaces untouched. To overcome this issue we investigate random sketching techniques for computing the Jacobian error term, or, in the case that the reduced SVD objective is used, for sketching complementary nullspace constraints. The proposed sketching based approach however incurs significant online computational costs, since it must resolve the neural operator's nullspace errors and continually re-sketch the high-dimensional Jacobian data at every optimization epoch.
We ultimately propose the use of reduced basis ridge function neural operators, which only parametrize the nonlinear mapping $m\mapsto q$ in low-dimensional informed subspaces of the inputs and outputs. In this case, the model can only represent Jacobian information in the reduced subspaces of the inputs and outputs with dimensions $r_M \leq d_M, r_Q \leq d_Q$, which allows us to generate and impose Jacobian information in between only these two subspaces. This significantly reduces the online linear algebraic and memory costs to $O(r_Qr_M)$. In order to adequately resolve parametric Jacobian information, we advocate for the use of derivative-informed projected neural networks (DIPNets)\cite{OLeary-RoseberryVillaChenEtAl22}, which naturally incorporate dominant Jacobian information into their architectures and are thus well suited to learn the parametric derivative operator, in addition to the parametric map.
We investigate the effects of various different optimization formulations and neural network architectures on the accuracy of the approximation of the parametric map and its Jacobian by several numerical experiments. We consider parametric PDE problems where the parameters are given by the discretization of random coefficient fields that parametrize the PDE, and quantities of interest $q$ given by sparse observation of the PDE state variable. Numerical results demonstrate that the incorporation of derivative information in the neural operator training significantly improves the parametric map approximation, especially when training data are limited, as is often the case in SciML. Additionally numerical results demonstrate that these training formulations can result in significantly improved parametric derivative operator approximations. In particular, our numerical results imply that DIPNets perform better than generic networks in both parametric map and Jacobian approximation, showing that parametric surrogates with reliable derivative information can be constructed and trained in an efficient way by making use of derivative-revealed intrinsic low-dimensionality of the map.
\subsection{Related works}
The idea of incorporating derivative information into neural network training has been studied in \cite{CzarneckiOsinderoJaderbergEtAl2017} in a different setting, which did not address the computational challenges associated with high-dimensional maps or the use of derivative information extracted from the trained neural network, and whose experiments were limited to learning analytic functions or supplementing ``synthetic derivatives'' in order to improve the smoothness of the trained function. Our work differs from this work in scope and focuses on neural operators and efficient and scalable algorithms necessary for high-dimensional derivative training. We also motivate the need for derivative information in SciML outer-loop problems. A more recent paper \cite{BigoniMarzoukPrieurEtAl2021} investigates using derivative information for constructing dimension reduced surrogates for scalar valued high-dimensional parametric maps via minimizing a gradient based objective function. The context of this paper is similar to ours; in our case we consider vector-valued high-dimensional maps with the focus on efficient algorithms and the approximation by neural operators.
There are many other papers on learning parametric PDE maps in a SciML context \cite{BhattacharyaHosseiniKovachkiEtAl20,FrescaManzoni2022,KovachkiLiLiuEtAl2021,LiKovachkiAzizzadenesheliEtAl2020a,LiKovachkiAzizzadenesheliEtAl2020b,NelsenStuart2020,OLeary-RoseberryVillaChenEtAl22,OLearyRoseberryDuChaudhuriEtAl2021,RaissiPerdikarisKarniadakis2019}. The algorithms advanced in this work are suitable to help any of these methods to learn high-dimensional derivative information and additionally improve approximations of high-dimensional parametric maps. This work concerns \emph{parametric derivatives}, not to be confused with \emph{spatial derivatives} which have been addressed in surrogate construction in various works \cite{LiKovachkiAzizzadenesheliEtAl2020a,YuLuMengEtAl2022}.
\subsection{Contributions}
Herein, we develop DINO, an efficient computational framework for encoding high-dimensional parametric derivative information into the training of neural operators to approximate high-dimensional parametric maps.
We first consider techniques based on reduced SVD to impose only the dominant curvature condition in the reduced SVD basis, reducing the computational costs from $O(d_Qd_M)$ to $O(r^2)$ where $r$ is the rank of the Jacobian. We demonstrate that this computational cost can be reduced to $O(k^2)$ by making use of matrix subsampling techniques that randomly choose $k$ rows and columns of the reduced Jacobian, and prove that this provides an asymptotically equivalent objective function in Proposition \ref{prop:submatrix_svd_convergence}. The techniques based on reduced SVD are highly scalable and can improve approximation of the parametric map by imposing slope constraints in the directions where the map is changing the most, but alone are not sufficient to guarantee parametric Jacobian accuracy. To address this concern we consider techniques based on random matrix-sketching in order to impose the entire Jacobian information, or just the nullspace conditions if they are to be used in tandem with the reduced SVD techniques. Ultimately we do not favor this strategy due to slow asymptotics and online computational overhead during training.
We then investigate how reduced-basis surrogates can be used to provide a highly efficient framework for accurate parametric derivative learning. With regard to this task, the fundamentally important feature of these surrogates is that they cannot represent Jacobian information that is orthogonal to their input and output reduced basis, as we prove in Proposition \ref{prop:jacobian_ridge_orthogonality}. This feature has two important consequences. First, it allows us to compute and impose full Jacobian information only in these reduced bases, again reducing the complexity from $O(d_Qd_M)$ to $O(r_Qr_M)$. Second, these surrogates can be architected to attempt to ``build-out'' the Jacobian nullspaces a priori, effectively using the architecture as a form of regularization for the derivative learning problem.
In three numerical experiments we demonstrate that the efficient incorporation of derivative information via DINO can improve the parametric map generalization when training data are limited. Additionally DINO leads to significantly improved approximation of parametric derivative operators over traditional parametric map regression. Among all the tested training strategies, the best overall strategy is to use the DIPNet network with full Jacobian information in the training. This strategy is scalable since the Jacobian information need only be represented in between the input and output reduced basis (which are often discretization independent), and accurate since the architecture is designed to resolve dominant Jacobian information, and nothing else.
At the time of writing we are unaware of any techniques for the efficient training of high-dimensional parametric surrogates on derivative information. We restrict ourselves to learning the first order parametric derivative (i.e.\ matrix free Jacobian), but the ideas are sufficiently general to be extended to higher derivative tensor actions. The approach we propose relies on the compressibility of high dimensional Jacobian information; DINO may not be suitable for problems that have high-dimensional Jacobian information across parameter space, an example of this may be full parametric state estimation for non-dissipative hyperbolic PDE problems.
\section{Derivative-informed neural operator}
Let $H^1_\nu(\mathbb{R}^{d_M}; \mathbb{R}^{d_Q})$ denote a (discrete Bochner) space for the map $q:\mathbb{R}^{d_M} \rightarrow \mathbb{R}^{d_Q}$ with the norm given by
\begin{equation}
\begin{split}
\|q\|^2_{H^1_\nu(\mathbb{R}^{d_M}; \mathbb{R}^{d_Q})} & = \mathbb{E}_\nu \left[\|q\|_{H^1(\mathbb{R}^{d_Q})}^2 \right] \\
& = \int_{\mathbb{R}^{d_M}} \left(\|q(m) \|^2_{\ell^2(\mathbb{R}^{d_Q})} + \|\nabla q(m)\|_{F(\mathbb{R}^{d_Q\times d_M})}^2 \right)d\nu(m),
\end{split}
\end{equation}
where $\|q(m)\|_{\ell^2(\mathbb{R}^{d_Q})}$ is the Euclidean norm of the output $q$ evaluated at $m$ and $\|\nabla q(m)\|_{F(\mathbb{R}^{d_Q\times d_M})}$ is the Frobenius norm of the Jacobian $\nabla q$ evaluated at $m$, which we refer to as the \emph{$H^1$ semi-norm} in the rest of the paper. The derivative-informed neural operator $f_w = f(\cdot,w) :\mathbb{R}^{d_M}\times \mathbb{R}^{d_W}\rightarrow \mathbb{R}^{d_Q}$ is a parametric neural network surrogate parametrized by weights $w\in \mathbb{R}^{d_W}$, which is constructed by attempting to solve the following expected risk minimization problem:
\begin{equation}\label{h1_exp_risk}
\min_w \frac{1}{2}\mathbb{E}_\nu \left[\|q - f_w\|_{H^1(\mathbb{R}^{d_Q})}^2\right].
\end{equation}
Note that each evaluation of the output $q$ requires solution of the state equation \eqref{eq:state}, which is often very expensive, and makes it prohibitively expensive to accurately approximate the integral with respect to $\nu$ in \eqref{h1_exp_risk}. This issue is addressed via Monte Carlo sample average approximation of \eqref{h1_exp_risk}: given a finite (affordably small) number of training samples $\{(m_i,q(m_i),\nabla q(m_i))|m_i \sim \nu\}_{i=1}^N$ we can bypass direct integration by instead formulating the following empirical risk minimization:
\begin{equation}\label{h1_emp_risk}
\min_w \frac{1}{2N}\sum_{i=1}^N \|q(m_i) - f_w(m_i)\|_{H^1(\mathbb{R}^{d_Q})}^2.
\end{equation}
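As an illustration, the empirical $H^1$ loss \eqref{h1_emp_risk} can be prototyped in a few lines of JAX for a generic surrogate; the following is a minimal sketch that ignores the scalability issues discussed below, where \texttt{apply\_fn}, \texttt{Ms}, \texttt{Qs} and \texttt{Js} (the precomputed training data) are assumed given:
\begin{verbatim}
import jax
import jax.numpy as jnp

def h1_empirical_risk(w, apply_fn, Ms, Qs, Js):
    # Sketch of the empirical H^1 risk: apply_fn(w, m) -> R^{d_Q};
    # Ms: (N, d_M); Qs: (N, d_Q); Js: (N, d_Q, d_M) holds the
    # precomputed Jacobians of q at the samples m_i.
    f = lambda m: apply_fn(w, m)

    def per_sample(m, q, J):
        misfit = jnp.sum((q - f(m)) ** 2)                  # l2 misfit
        jac_misfit = jnp.sum((J - jax.jacrev(f)(m)) ** 2)  # Frobenius misfit
        return misfit + jac_misfit

    return 0.5 * jnp.mean(jax.vmap(per_sample)(Ms, Qs, Js))
\end{verbatim}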
The empirical risk objective function in \eqref{h1_emp_risk} can be differentiated with respect to the weights $w$ by automatic differentiation, which allows for the neural operator to be trained via derivative-based nonlinear stochastic optimization methods. The critical computational challenge is to compute the Jacobian term
\begin{equation} \label{full_jacobian_error_f_norm}
\|\nabla q(m) - \nabla f_w(m)\|^2_{F(\mathbb{R}^{d_Q\times d_M})},
\end{equation}
which is the focus of this work. Formally the offline costs of computing full Jacobian training data are expensive per datum, and the online memory and linear algebra costs of evaluating the Frobenius norm of the Jacobian misfit incurs $O(d_Qd_M)$ complexity. This makes the derivative training task formally intractable when $d_Q$ and $d_M$ are large.
In practice, however, the Jacobian matrices are often low rank because of (1) the correlation in both the input parameters and the output Q.o.I.s, and (2) the smoothness of the map; they can then be efficiently approximated by low rank matrices with rank $r \ll d_Q, d_M$, or $r = O(d_Q) \ll d_M$ as is the case in inverse problems where observations are sparse. This low-rank property of the Jacobian matrix has been observed and used in many inverse problems
\cite{BeskosGirolamiLanEtAl17,BrennanBigoniZahmEtAl20,Bui-ThanhBursteddeGhattasEtAl12,
Bui-ThanhGhattasMartinEtAl13,Bui-ThanhGhattas14,ChenGhattas20,ChenVillaGhattas17,ChenWuChenEtAl19,CuiLawMarzouk16,FlathWilcoxAkcelikEtAl11,IsaacPetraStadlerEtAl15,ZahmCuiLawEtAl18}. When this is the case, Jacobian training data can be computed for $O(r)$ linear Jacobian matrix-vector products at each data point, e.g. using randomized SVD \cite{HalkoMartinssonTropp11}, and the online computational and memory costs of training can be made to scale with $O(r^2)$ instead of $O(d_Qd_M)$. In this work, we discuss strategies to achieve this complexity in training, by making use of different dimension reduction techniques.
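To illustrate the data-generation step, the following Python sketch computes the rank-$r$ factors $(U_r,\Sigma_r,V_r)$ of a Jacobian that is available only through matrix-vector products, following the randomized SVD of \cite{HalkoMartinssonTropp11}. The functions \texttt{jvp} ($v \mapsto \nabla q\, v$) and \texttt{vjp} ($u \mapsto \nabla q^T u$) are assumed to be provided by the model, e.g.\ via tangent and adjoint solves:
\begin{verbatim}
import numpy as np

def jacobian_rsvd(jvp, vjp, d_M, r, p=10, seed=0):
    # Sketch: rank-r SVD of an implicit Jacobian J from O(r) products
    # J v (jvp) and J^T u (vjp), with oversampling parameter p.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((d_M, r + p))    # random probes
    Y = np.column_stack([jvp(Omega[:, k]) for k in range(r + p)])
    Q, _ = np.linalg.qr(Y)                       # approximate range of J
    B = np.column_stack([vjp(Q[:, k]) for k in range(Q.shape[1])]).T  # Q^T J
    Uh, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Uh)[:, :r], S[:r], Vt[:r, :].T   # U_r, Sigma_r, V_r
\end{verbatim}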
\subsection{Approximation of $H^1$ semi-norm via reduced SVD} \label{section:jacobian_svd}
At a parameter sample we can decompose the Jacobian into its rank $r$ reduced SVD and the contributions of the orthogonal complements:
\begin{subequations}
\begin{equation}\label{eq:qSVD}
\nabla q = U_r \Sigma_r V_r^T + U_r^\perp 0 (V_r^\perp)^T.
\end{equation}
All of the information about the Jacobian is contained in the reduced SVD, since the orthogonal complements to the dominant left and right singular vectors are accessible via orthogonal projectors
\begin{align}
\text{span}(U_r^\perp) &= \text{span}(I_{d_Q} - U_rU_r^T), \\
\text{span}(V_r^\perp) &= \text{span}(I_{d_M} - V_rV_r^T).
\end{align}
\end{subequations}
A first idea that we consider is to decompose the Jacobian information into the reduced SVD basis, thus separating local dominant curvature conditions from nullspace conditions. In this approach, the goal is to get the derivatives of the neural operator to match in the dominant subspaces defined by the reduced SVD, which can improve the parametric map approximation by enforcing smooth interpolation of the map between training data. Additionally this approach can be used to achieve accurate approximations of the full Jacobian, if it is used in tandem with an efficient approach to penalize learning curvature in the nullspaces of the Jacobian. We start with a proposition for the decomposition of the Jacobian error into low-dimensional curvature conditions and complementary nullspace conditions.
\begin{proposition}{Decomposing the Jacobian error via reduced SVD for $\nabla q$} \label{prop:decompose_h1_seminorm}
At every parameter sample there hold
\begin{subequations}
\begin{align}
\Sigma_r - U_r^T\nabla_m f_wV_r = 0 \in \mathbb{R}^{r\times r}, \label{dominant_curvature_condition} \\
(I_{d_Q} - U_rU_r^T)\nabla_m f_w (I_{d_M}- V_rV_r^T) = 0 \in \mathbb{R}^{d_Q\times d_M}, \label{left_null_right_null}\\
(I_{d_Q} - U_rU_r^T)\nabla_m f_w V_r = 0 \in \mathbb{R}^{d_Q\times r}, \label{left_null_right}\\
U_r^T\nabla_m f_w (I_{d_M} - V_rV_r^T) = 0 \in \mathbb{R}^{r\times d_M} \label{left_right_null}
\end{align}
if and only if
\begin{equation}
\nabla q = \nabla_m f_w.
\end{equation}
\end{subequations}
\end{proposition}
\begin{proof}
We begin with a simple auxiliary result. Let $A \in \mathbb{R}^{m\times n}$ be an arbitrary matrix and $Q_r \in \mathbb{R}^{n\times r}$ have orthonormal columns (i.e.\ $Q_r^TQ_r = I_r \in \mathbb{R}^{r\times r}$); then we have
\begin{align}\label{orth_reduction_identity}
\|AQ_rQ_r^T \|_{F(\mathbb{R}^{m\times n})} = \sqrt{\text{tr}(AQ_rQ_r^TQ_rQ_r^TA^T)} = \sqrt{\text{tr}(AQ_rQ_r^TA^T)} = \|AQ_r\|_{F(\mathbb{R}^{m\times r})}.
\end{align}
Decomposing $\nabla q - \nabla_m f_w$ with respect to the orthogonal projectors $U_rU_r^T$ and $V_rV_r^T$ into four pairwise orthogonal terms (the Pythagorean theorem for the Frobenius inner product), we have
\begin{align}
\|\nabla q - \nabla_m f_w\|^2_{F(\mathbb{R}^{d_Q \times d_M})} = &\|U_rU_r^T(\nabla q - \nabla_m f_w)V_rV_r^T\|^2_{F(\mathbb{R}^{d_Q \times d_M})} \nonumber \\
+&\|(I_{d_Q} - U_rU_r^T)(\nabla q - \nabla_m f_w) (I_{d_M}- V_rV_r^T)\|^2_{F(\mathbb{R}^{d_Q \times d_M})} \nonumber \\
+&\|(I_{d_Q} - U_rU_r^T)(\nabla q - \nabla_m f_w)V_rV_r^T\|^2_{F(\mathbb{R}^{d_Q \times d_M})}\nonumber\\
+&\|U_rU_r^T(\nabla q - \nabla_m f_w) (I_{d_M}- V_rV_r^T)\|^2_{F(\mathbb{R}^{d_Q \times d_M})}.
\end{align}
Note that $\nabla q$ disappears in each term involving the left application of $(I_{d_Q} - U_rU_r^T)$ or the right application of $(I_{d_M}- V_rV_r^T)$, by the definition of the reduced SVD. Applying this result with multiple applications of \eqref{orth_reduction_identity} we get the following:
\begin{align}
\|\nabla q - \nabla_m f_w\|^2_{F(\mathbb{R}^{d_Q \times d_M})} = &\|\Sigma_r - U_r^T\nabla_m f_wV_r\|^2_{F(\mathbb{R}^{r \times r})} \nonumber \\
+&\|(I_{d_Q} - U_rU_r^T)\nabla_m f_w (I_{d_M}- V_rV_r^T)\|^2_{F(\mathbb{R}^{d_Q \times d_M})} \nonumber \\
+&\|(I_{d_Q} - U_rU_r^T)\nabla_m f_wV_r\|^2_{F(\mathbb{R}^{d_Q \times r})}\nonumber\\
+&\|U_r^T\nabla_m f_w (I_{d_M}- V_rV_r^T)\|^2_{F(\mathbb{R}^{r \times d_M})}.
\end{align}
Substituting the assumptions of the proposition sets the right hand side to zero, and the result follows.
\end{proof}
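The orthogonal decomposition used in the proof is easy to verify numerically. The following sketch checks the squared-Frobenius identity for a random rank-$r$ ``true'' Jacobian and an arbitrary surrogate Jacobian (a stand-in for $\nabla_m f_w$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
dQ, dM, r = 7, 9, 3
J_true = rng.standard_normal((dQ, r)) @ rng.standard_normal((r, dM))
J_surr = rng.standard_normal((dQ, dM))       # stand-in for grad_m f_w
U, S, Vt = np.linalg.svd(J_true)
Ur, Vr = U[:, :r], Vt[:r, :].T
Pq = np.eye(dQ) - Ur @ Ur.T                  # projector onto span(U_r)-perp
Pm = np.eye(dM) - Vr @ Vr.T                  # projector onto span(V_r)-perp
lhs = np.linalg.norm(J_true - J_surr, "fro") ** 2
rhs = (np.linalg.norm(np.diag(S[:r]) - Ur.T @ J_surr @ Vr, "fro") ** 2
       + np.linalg.norm(Pq @ J_surr @ Pm, "fro") ** 2
       + np.linalg.norm(Pq @ J_surr @ Vr, "fro") ** 2
       + np.linalg.norm(Ur.T @ J_surr @ Pm, "fro") ** 2)
assert np.isclose(lhs, rhs)
\end{verbatim}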
Proposition \ref{prop:decompose_h1_seminorm} establishes that dominant curvature conditions can be imposed at sample points for only $O(r^2)$ work. For certain problems however, depending on how large $r$ is, penalizing a Frobenius norm in $\mathbb{R}^{r\times r}$ may still be untenable. The reason is that automatic differentiation incurs significant memory and operational overhead, while stochastic optimization requires additional memory for the arrays used in evaluating the finite-sum gradients and Hessians that arise in training the neural operator. We propose to further ease this computational burden by randomly subsampling left and right singular vectors at each epoch of training. In the following proposition we discuss the approximation of the true Frobenius curvature condition by row and column subsampling.
\begin{proposition}{Approximation of submatrix truncated SVD $H^1$ semi-norm in expectation}\label{prop:submatrix_svd_convergence}
Let $\kappa$ be a distribution of $k$ indices subsampled from $1,\dots,r$ with uniform probability. Denote samples of these indices by $[\widehat{k}],[\widetilde{k}]$, and the subsampled matrices by $U_{[\widehat{k}]}\in\mathbb{R}^{d_Q \times k}, V_{[\widetilde{k}]} \in \mathbb{R}^{d_M \times k}$, etc. When $k$ columns of $U_r$ and $V_r$ are subsampled independently, then
\begin{equation}
\mathbb{E}_{[\widehat{k}],[\widetilde{k}] \sim \kappa} \left[\|\Sigma_{[\widehat{k}],[\widetilde{k}]} - U_{[\widehat{k}]}^T\nabla_m f_w V_{[\widetilde{k}]}\|^2_{F(\mathbb{R}^{k \times k})}\right] = \frac{k^2}{r^2}\|\Sigma_r - U_r^T\nabla_m f_w V_r\|^2_{F(\mathbb{R}^{r \times r})}.
\end{equation}
Here $\Sigma_{[\widehat{k}],[\widetilde{k}]} = U_{[\widehat{k}]}^TU_r\Sigma_r V_r^TV_{[\widetilde{k}]}$. When the same $k$ columns are subsampled for both $U_r$ and $V_r$, then
\begin{align}
\mathbb{E}_{[\widehat{k}] \sim \kappa} \left[\|\Sigma_{[\widehat{k}],[\widehat{k}]} - U_{[\widehat{k}]}^T\nabla_m f_w V_{[\widehat{k}]}\|^2_{F(\mathbb{R}^{k \times k})}\right] = \frac{k}{r}\|\text{diag}(\Sigma_r - U_r^T\nabla_m f_w V_r)\|^2_{F(\mathbb{R}^{r \times r})}\nonumber \\
+ \frac{k(k-1)}{r(r-1)}\|\text{offdiag}(\Sigma_r - U_r^T\nabla_m f_w V_r)\|^2_{F(\mathbb{R}^{r \times r})}.\label{eq:dependent_samples}
\end{align}
\end{proposition}
\begin{proof}
Let $B = U_r\Sigma_rV_r^T - \nabla_m f_w$. We begin with the case of independent left and right column samples. Let $\chi_{i\in [\widetilde{k}]}$ denote the indicator function that the index $i$ is in the index subset $[\widetilde{k}]$, which implies $\mathbb{E}_{[\widetilde{k}] \sim \kappa} [\chi_{i\in [\widetilde{k}]}] = \frac{k}{r}$ for any $i = 1, \dots, r$. We have then that
\begin{align}
\|U_{[\widehat{k}]}^TBV_{[\widetilde{k}]}\|_{F(k\times k)}^2 = \sum_{i=1}^{r}\sum_{j=1}^r \chi_{i \in [\widehat{k}]}\chi_{j \in [\widetilde{k}]}(U_r^TBV_r)_{ij}^2,
\end{align}
where we used $\chi_{i\in[\widetilde{k}]} = \chi_{i\in[\widetilde{k}]}^2$. Taking expectation we have
\begin{align}
\mathbb{E}_{[\widehat{k}],[\widetilde{k}] \sim \kappa}\left[\|U_{[\widehat{k}]}^TBV_{[\widetilde{k}]}\|_{F(k\times k)}^2\right] = \mathbb{E}_{[\widehat{k}],[\widetilde{k}] \sim \kappa}\left[\sum_{i=1}^{r}\sum_{j=1}^r \chi_{i \in [\widehat{k}]}\chi_{j \in [\widetilde{k}]}(U_r^TBV_r)_{ij}^2 \right] \nonumber \\
= \sum_{i=1}^{r}\sum_{j=1}^r\mathbb{E}_{[\widehat{k}],[\widetilde{k}] \sim \kappa}\left[ \chi_{i \in [\widehat{k}]}\chi_{j \in [\widetilde{k}]}\right](U_r^TBV_r)_{ij}^2 = \frac{k^2}{r^2}\sum_{i=1}^{r}\sum_{j=1}^r(U_r^TBV_r)_{ij}^2.
\end{align}
For the case that the left and right samples are the same we have
\begin{align}
\|U_{[\widehat{k}]}^TBV_{[\widehat{k}]}\|_{F(k\times k)}^2 &= \sum_{i=1}^{r}\sum_{j=1}^r \chi_{i \in [\widehat{k}]}\chi_{j \in [\widehat{k}]}(U_r^TBV_r)_{ij}^2 \nonumber \\
&= \sum_{i=1}^{r} \chi_{i \in [\widehat{k}]}(U_r^TBV_r)_{ii}^2 +\sum_{i=1}^{r}\sum_{i \neq j=1}^r \chi_{i \in [\widehat{k}]}\chi_{j \in [\widehat{k}]}(U_r^TBV_r)_{ij}^2. \nonumber
\end{align}
Taking the expectation, and using that $\mathbb{E}_{[\widehat{k}]\sim\kappa}\left[\chi_{i \in [\widehat{k}]}\chi_{j \in [\widehat{k}]}\right] = \frac{k(k-1)}{r(r-1)}$ for $i \neq j$, we have
\begin{align}
\mathbb{E}_{[\widehat{k}]\sim \kappa}\left[\|U^T_{[\widehat{k}]}BV_{[\widehat{k}]}\|_{F(k\times k)}^2\right] =
\frac{k}{r}\sum_{i=1}^{r} (U_r^TBV_r)_{ii}^2 + \frac{k}{r}\frac{(k-1)}{(r-1)}\sum_{i=1}^{r}\sum_{i \neq j=1}^r (U_r^TBV_r)_{ij}^2, \nonumber
\end{align}
which is precisely \eqref{eq:dependent_samples}.
\end{proof}
This proposition establishes that, whether independent or identical samples are used for the rows and columns, the subsampled approximation of the Frobenius norm serves as an asymptotically equivalent optimization objective, which can be incorporated scalably into training regardless of how large $r$ is. When the same samples are used for both the rows and columns, the diagonal of the matrix is over-emphasized. Since $\Sigma_r$ is by definition a diagonal matrix, this can be a beneficial property, as it guarantees that $k$ nonzero curvature conditions are imposed at each sample. When independent samples are used, it is possible that only off-diagonal elements of $\Sigma_r$ show up in the loss function, which in our experience can make the optimization problem harder. Proposition \ref{prop:submatrix_svd_convergence} demonstrates that the curvature conditions can be imposed in a memory-efficient way, even when $r$ is large. A minimal sketch of the subsampled penalty is given below.
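Concretely, under assumed array shapes and illustrative names (this is a sketch, not the implementation used for the numerical results), the dependent-sample (``MS'') penalty can be evaluated as:
\begin{verbatim}
import numpy as np

def subsampled_curvature_penalty(sigma_r, U_r, V_r, J_w, k, rng):
    # sigma_r: (r,) truncated singular values of the true Jacobian at a
    # sample point; U_r: (d_Q, r), V_r: (d_M, r) singular vectors;
    # J_w: (d_Q, d_M) surrogate Jacobian at the same sample point.
    r = sigma_r.shape[0]
    idx = rng.choice(r, size=k, replace=False)  # k of r indices, uniform
    U_k, V_k = U_r[:, idx], V_r[:, idx]
    # With identical row/column samples, Sigma_{[k],[k]} = diag(sigma_r[idx]).
    target = np.diag(sigma_r[idx])
    return np.linalg.norm(target - U_k.T @ J_w @ V_k, ord="fro") ** 2
\end{verbatim}
Resampling \texttt{idx} at each epoch imposes the curvature conditions in expectation, per Proposition \ref{prop:submatrix_svd_convergence}.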
The imposition of low-dimensional curvature conditions is generic and can be carried out in a matrix-free way. The low-dimensional curvature information can be extracted from generic differentiable surrogates / neural operators via automatic differentiation; a schematic for this is shown in Figure \ref{derivative_schematic}, and a minimal autodiff sketch follows the figure.
\begin{figure}[H]
\center
\begin{tikzpicture}[scale = 0.6, transform shape,every node/.style={draw,outer sep=0pt,thick}]
\node[bag] (Input) at (0,0) [minimum width=1cm,minimum height=2cm] {Input\\ Model Parameter\\
$ m \in \mathbb{R}^{d_M}$};
\node[bag] (InputUk) at (1,-2.5) [minimum width=1cm,minimum height=2cm] {Input\\ Left Singular Vectors\\
$ U_k \in \mathbb{R}^{d_Q \times k}$};
\node[bag] (InputVk) at (3,-7.5) [minimum width=1cm,minimum height=2cm] {Input\\ Right Singular Vectors\\
$ V_k \in \mathbb{R}^{d_M \times k}$};
\node[bag] (NN) at (5,0) [minimum width=4cm,minimum height=1.5cm] {Nonlinear parametric surrogate\\
(e.g.\ Neural Operator)\\
Weights $w \in \mathbb{R}^{d_W}$};
\draw (Input.north east) -- (NN.north west) (Input.south east) -- (NN.south west);
\node[bag] (Output) at (10,0)[minimum width=1cm,minimum height=2cm] {Surrogate Output\\ $f(m,w) \in \mathbb{R}^{d_Q}$};
\draw (NN.north east) -- (Output.north west) (NN.south east) -- (Output.south west);
\node[bag] (fTU) at (11,-2.5)[minimum width=1cm,minimum height=2cm] {Tensor Product\\ $U_k^Tf(m,w) \in \mathbb{R}^{k}$};
\draw (Output.south west) -- (fTU.north west) (Output.south east) -- (fTU.north east);
\draw[-stealth] (InputUk.east) -- (fTU.west);
\node[bag] (DfTU) at (12,-5)[minimum width=1cm,minimum height=2cm] {Differentiation\\ $\nabla_m(U_k^Tf(m,w)) \in \mathbb{R}^{k\times d_M}$};
\draw (fTU.south west) -- (DfTU.north west) (fTU.south east) -- (DfTU.north east);
\node[bag] (DfTUV) at (13,-7.5)[minimum width=1cm,minimum height=2cm] {Tensor Product\\ $U_k^T\nabla_mf(m,w)V_k \in \mathbb{R}^{k\times k}$};
\draw(DfTU.south west) -- (DfTUV.north west) (DfTU.south east) -- (DfTUV.north east);
\draw[-stealth] (InputVk.east) -- (DfTUV.west);
\end{tikzpicture}
\caption{Schematic operation chart for extracting low-dimensional sensitivities from neural operators. All operations can be vectorized over an additional data dimension.}\label{derivative_schematic}
\end{figure}
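The chain of operations in Figure \ref{derivative_schematic} can be realized directly with reverse-mode automatic differentiation. The following minimal \texttt{tensorflow} sketch (illustrative names and shapes, not the exact implementation accompanying this work) projects the output onto $U_k$ first, so that only a $k \times d_M$ Jacobian is ever formed before the right projection onto $V_k$.
\begin{verbatim}
import tensorflow as tf

def reduced_jacobian(f, m, U_k, V_k):
    # f: differentiable surrogate m -> f(m) in R^{d_Q}
    # m: (d_M,) input; U_k: (d_Q, k); V_k: (d_M, k)
    with tf.GradientTape() as tape:
        tape.watch(m)
        g = tf.linalg.matvec(U_k, f(m), transpose_a=True)  # U_k^T f(m), (k,)
    J = tape.jacobian(g, m)        # (k, d_M): one VJP per reduced output
    return tf.matmul(J, V_k)       # U_k^T (df/dm) V_k, shape (k, k)
\end{verbatim}
As the caption notes, the same operations can be vectorized over an additional data (batch) dimension.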
Numerical results in Section \ref{section:numerical_results} demonstrate that the inclusion of the reduced Jacobian information, particularly with matrix subsampling, improves the parametric map approximation. Similar to Hermite interpolation methods, we suppose that the inclusion of dominant curvature information at training data points leads to more accurate interpolation between training data points in the directions in which the parametric map changes the most. This makes the strategy desirable in contexts where one cares most about accurate parametric map approximation. Imposing the reduced Jacobian curvature constraints alone does not guarantee an accurate Jacobian; in order to achieve an accurate parametric Jacobian, the nullspace conditions still need to be addressed, either directly via the optimization formulation or indirectly through the design of the architecture. We propose strategies for these tasks in the following sections.
\subsection{Random Sketching of Jacobian Information} \label{section:sketching}
The full Jacobian error condition $\nabla q - \nabla f_w = 0$, as well as the nullspace conditions (\ref{left_null_right_null},~\ref{left_null_right},~\ref{left_right_null}), formally require $O(d_Qd_M)$ computation for their evaluation. In this section we discuss strategies for reducing this computational burden via the use of random matrix sketching. All three of these constraints can be stated generically as $A=0$. Via randomized matrix sketching, we can approximate the constraint error $\|A\|_{F(\mathbb{R}^{d_Q\times d_M})}$ by the use of random rank-$r$ left and right sketches, as $\|Q_\text{left}^TAQ_\text{right}\|_{F(\mathbb{R}^{r\times r})}$, thus reducing the computational complexity from $O(d_Qd_M)$ to $O(r^2)$, where $r$ here denotes the sketching dimension. By resampling different random matrices at different optimization epochs, the constraints are imposed \emph{in expectation} with respect to the underlying distribution of random matrices. A minimal sketch of this construction follows.
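As a concrete illustration (assumed shapes and names only), random orthonormal sketching matrices can be drawn by QR factorization of Gaussian matrices, exactly as in the proof of the proposition below:
\begin{verbatim}
import numpy as np

def sketched_constraint_norm(A, r, rng):
    # A: (d_Q, d_M) generic constraint residual; r: sketching dimension.
    d_Q, d_M = A.shape
    Q_left, _ = np.linalg.qr(rng.standard_normal((d_Q, r)))   # (d_Q, r)
    Q_right, _ = np.linalg.qr(rng.standard_normal((d_M, r)))  # (d_M, r)
    return np.linalg.norm(Q_left.T @ A @ Q_right, ord="fro")
\end{verbatim}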
\begin{proposition}{Approximation of matrix equation constraints via sketching} \label{sketching_proposition}
Given distributions of random orthonormal matrices (e.g.\ obtained from Gaussian matrices via QR factorization) $\mathbb{R}^{d_Q \times r} \ni Q_\text{left} \sim \rho_\text{left}$ and $\mathbb{R}^{d_M \times r} \ni Q_\text{right} \sim \rho_\text{right}$, the approximation of the full matrix constraint equation by the sketch satisfies the following bound in expectation with respect to the sampling distributions:
\begin{align}
\|A\|_{F(\mathbb{R}^{d_Q \times d_M})} \leq &\mathbb{E}_{\rho_\text{left},\rho_\text{right}} [\|Q_\text{left}^TA Q_\text{right}\|_{F(\mathbb{R}^{r \times r})}] \nonumber \\ &+ C(\rho_\text{left},\rho_\text{right},d_M,d_Q)\sum_{j = r+1}^{\min(d_M,d_Q)}\sigma_{j}(A),
\end{align}
where $\sigma_j(A)$ is the $j^{th}$ singular value of $A$.
\end{proposition}
\begin{proof}
We can sample random orthonormal matrices by first sampling a random matrix from an existing distribution, $\Omega \sim \rho$ and then performing a reduced QR decomposition to compute an orthonormal basis $Q$. By applying Theorem 15.3 in \cite{MartinssonTropp2020} to the left of $A$,
\begin{equation}
\mathbb{E}_{\rho_\text{left}}[ \|A - Q_\text{left}Q_\text{left}^TA\|_{F(\mathbb{R}^{d_Q \times d_M})}] \leq C_1(\rho_\text{left},d_M,d_Q) \sum_{j = r+1}^{\min(d_M,d_Q)}\sigma_{j}(A),
\end{equation}
and then a second time to the transpose of $Q_\text{left}Q_\text{left}^TA$, we obtain
\begin{align}
&\mathbb{E}_{\rho_\text{left},\rho_\text{right}}[ \|Q_\text{left}Q_\text{left}^TA - Q_\text{left}Q_\text{left}^TAQ_\text{right}Q_\text{right}^T\|_{F(\mathbb{R}^{d_Q \times d_M})}] \nonumber \\
&\leq C_2(d_M,d_Q) \sum_{j = r+1}^{\min(d_M,d_Q)}\sigma_{j}(Q_\text{left}Q_\text{left}^TA) \leq C_2(d_M,d_Q) \sum_{j = r+1}^{\min(d_M,d_Q)}\sigma_{j}(A),
\end{align}
Combining the two error terms via the triangle inequality, and applying the triangle inequality once more to $\|A\|$, yields
\begin{align}
\|A\|_{F(\mathbb{R}^{d_Q \times d_M})} &\leq \mathbb{E}_{\rho_\text{left},\rho_\text{right}} [\|Q_\text{left}Q_\text{left}^TAQ_\text{right}Q_\text{right}^T\|_{F(\mathbb{R}^{d_Q \times d_M})}] \nonumber \\ &+ C(\rho_\text{left},\rho_\text{right},d_M,d_Q)\sum_{j = r+1}^{\min(d_M,d_Q)}\sigma_{j}(A),
\end{align}
where the constants $C_1,C_2$ have been combined into a single constant $C$. By making use of an auxiliary result in Proposition \ref{prop:decompose_h1_seminorm} we have that $\|Q_\text{left}Q_\text{left}^TAQ_\text{right}Q_\text{right}^T\|_{F(\mathbb{R}^{d_Q \times d_M})} = \|Q_\text{left}^TAQ_\text{right}\|_{F(\mathbb{R}^{r\times r})}$, which yields the result.
\end{proof}
If the trailing singular values are small, and the sketched constraint can be minimized in expectation, then this inequality demonstrates that the constraint $A=0$ will be approximately satisfied as well. In practice, however, this is a difficult task, since the singular value decay of $A$ depends not only on $\nabla q$ but also on $\nabla f_w$. If $f_w$ has significant curvature in the nullspaces of $\nabla q$, then a large sketching dimension $r$ may be required to adequately penalize these terms. There is also significant computational overhead to this sketching approach, since at each epoch new sketching matrices must be computed, along with the sketched Jacobian information, in the online training procedure. The benefit of this approach is that it is generic: it can be used to impose the full Jacobian constraints without making any assumptions on the underlying structure of the architecture, or it can be used in tandem with the reduced curvature approach delineated in Section \ref{section:jacobian_svd} by sketching the nullspace constraints in a complementary low-dimensional subspace.
In what follows we show that, by making assumptions on the architecture of the neural operator, a much better overall strategy is available. By using reduced basis architectures we can scalably handle either the full Jacobian information or the nullspace conditions, due to properties of these neural operators.
\subsection{Reduced Basis Derivative Learning} \label{section:arch_constraints}
Consider an input-output ridge function surrogate
\begin{equation}\label{ridge_function}
f^\mathbf{r}(m,w) = \mathbf{U}_{\mathbf{r}_Q }\phi_r(\mathbf{V}_{\mathbf{r}_M}^Tm,w) + b,
\end{equation}
where $\phi_r$ parametrizes a reduced dimensional nonlinear operator between the two reduced bases \cite{BhattacharyaHosseiniKovachkiEtAl20,OLearyRoseberryDuChaudhuriEtAl2021,OLeary-RoseberryVillaChenEtAl22}, and $\mathbf{U}_{\mathbf{r}_Q} \in \mathbb{R}^{d_Q\times \mathbf{r}_Q}$ and $\mathbf{V}_{\mathbf{r}_M} \in \mathbb{R}^{d_M \times \mathbf{r}_M}$ are adequate reduced bases for the inputs and outputs of the map $m \mapsto q$ and its derivative. A key property of these functions is that their Jacobian with respect to $m$ cannot represent information in the orthogonal complements of the input and output reduced bases; a minimal architectural sketch is given below.
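For concreteness, the following \texttt{tensorflow} sketch assembles a ridge function surrogate of the form \eqref{ridge_function} with fixed (non-trainable) reduced bases; the widths, depth, and activation mirror the networks used in Section \ref{section:numerical_results}, but the sketch is illustrative rather than the exact implementation.
\begin{verbatim}
import tensorflow as tf

class RidgeSurrogate(tf.keras.Model):
    # f(m) = U_rQ phi_r(V_rM^T m, w) + b, with fixed reduced bases.
    def __init__(self, V_rM, U_rQ, width=50, depth=6):
        super().__init__()
        self.V = tf.constant(V_rM, dtype=tf.float32)   # (d_M, r_M)
        self.U = tf.constant(U_rQ, dtype=tf.float32)   # (d_Q, r_Q)
        self.hidden = [tf.keras.layers.Dense(width, activation="softplus")
                       for _ in range(depth)]
        self.out = tf.keras.layers.Dense(U_rQ.shape[1])
        self.b = tf.Variable(tf.zeros(U_rQ.shape[0]))  # full-space bias b

    def call(self, m):                                 # m: (batch, d_M)
        z = tf.matmul(m, self.V)                       # V_rM^T m (row form)
        for layer in self.hidden:
            z = layer(z)
        return tf.matmul(self.out(z), self.U, transpose_b=True) + self.b
\end{verbatim}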
\begin{proposition}{Orthogonality Conditions for Ridge Function Jacobian}\label{prop:jacobian_ridge_orthogonality}
Without loss of generality take $\mathbf{r}=\mathbf{r}_Q = \mathbf{r}_M$.
\begin{enumerate}
\item If $X_{r_i} \perp \mathbf{V}_\textbf{r}$, then $\nabla_m f^\mathbf{r}_w X_{r_i} = 0$.
\item If $Y_{r_i} \perp \mathbf{U}_\textbf{r}$, then $Y_{r_i}^T \nabla _m f^\mathbf{r}_w = 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
Consider an arbitrary column $x_j \in \text{col}(X_{r_i})$ indexed by $j$, and an arbitrary component of the output $(f_w^\mathbf{r})_k$ indexed by $k$. This gives the $(k,j)$ entry of the matrix $\nabla_mf^\mathbf{r}_w X_{r_i}$. In the limit definition this entry is
\begin{equation}
\nabla_m (f_w^\mathbf{r})_k x_j = \lim_{h\rightarrow 0} \frac{(f^\mathbf{r}(m_i+hx_j,w)_k - f^\mathbf{r}(m_i,w)_k )}{h}.
\end{equation}
Due to the orthogonality condition
\begin{equation}
\mathbf{V}_\mathbf{r}^T(m_i + hx_j) = \mathbf{V}_\mathbf{r}^Tm_i + h\cancelto{0}{\mathbf{V}_\mathbf{r}^Tx_j } = \mathbf{V}_\mathbf{r}^Tm_i,
\end{equation}
and thus $f^\mathbf{r}(m_i+hx_j,w)_k = f^\mathbf{r}(m_i,w)_k$, so the directional derivative is zero. This proves the first claim.
For the second claim consider the $j,k$ entry of $Y_{r_i}^T \nabla _m f^\mathbf{r}_w$. This corresponds to the quantity
\begin{equation}
y_j^T \frac{\partial f_w^\mathbf{r}}{\partial m^{(k)}},
\end{equation}
where $y_j$ is the $j^{th}$ column of $Y_{r_i}$ and $m^{(k)}$ is the $k^{th}$ Cartesian basis vector in $\mathbb{R}^{d_M}$. In the limit definition we have
\begin{align}
&\lim_{h\rightarrow 0}\frac{1}{h}y_j^T\left[f^\mathbf{r}(m_i + hm^{(k)},w) - f^\mathbf{r}(m_i,w) \right] =\nonumber \\
&\lim_{h\rightarrow 0}\frac{1}{h}\cancelto{0}{y_j^T\mathbf{U}_\mathbf{r}}\left[\phi_r(\mathbf{V}_\mathbf{r}^T(m_i + hm^{(k)}),w) - \phi_r(\mathbf{V}_\mathbf{r}^Tm_i,w) \right] = 0.
\end{align}
\end{proof}
We emphasize two consequences of this proposition. First, if the important Jacobian information is well resolved between the reduced bases, i.e. $\mathbb{E}_\nu[\|\nabla q - \mathbf{U}_\mathbf{r}\mathbf{U}_\mathbf{r}^T\nabla q \mathbf{V}_\mathbf{r}\mathbf{V}_\mathbf{r}^T\|_{F(\mathbb{R}^{d_Q\times d_M})}] < \epsilon$ for sufficiently small $\epsilon > 0$, then one can still use the total Jacobian information for $O(\mathbf{r}^2)$ compute. In this case, the cost of the Jacobian training data construction is reduced, since the surrogate can only represent $\mathbf{U}_\mathbf{r}^T \nabla q \mathbf{V}_\mathbf{r}$, which can be computed at sample points for significantly reduced cost and need only be imposed in training as $\|\mathbf{U}_\mathbf{r}^T \nabla q \mathbf{V}_\mathbf{r} - \mathbf{U}_\mathbf{r}^T \nabla f_w \mathbf{V}_\mathbf{r}\|_{F(\mathbb{R}^{\mathbf{r}\times \mathbf{r}})}$. Second, if the Jacobian nullspaces are precluded from $\mathbf{U}_\mathbf{r}$ and $\mathbf{V}_\mathbf{r}$, then one can use the reduced SVD objective delineated in Section \ref{section:jacobian_svd} and have the architecture approximately satisfy the nullspace constraints passively.
In either case, the key issue is constructing reduced bases $\mathbf{U}_\mathbf{r}$ and $\mathbf{V}_\mathbf{r}$ that adequately resolve the dominant left and right singular subspaces of the parametric Jacobian \emph{in expectation}. In the second case it is specifically important to attempt to preclude the Jacobian nullspaces. For this task, we consider two bases that attempt to capture these subspaces in expectation. The right singular vectors of the Jacobian (weighted by the squares of the singular values) are captured by the ``active subspace'' basis (AS), i.e.\ the dominant eigenvectors of
\begin{equation}
\mathbb{E}_\nu[\nabla q^T \nabla q] \in \mathbb{R}^{d_M \times d_M},
\end{equation}
in expectation. The AS basis is a powerful derivative-based dimension reduction technique that resolves the dominant sensitivity information of a parametric map in expectation; it has been useful in dimension reduction and surrogate modeling \cite{ConstantineDowWang2014,OLeary-RoseberryVillaChenEtAl22,ZahmConstantinePrieurEtAl2020}. A related basis can be constructed to capture the dominant information contained in the left singular vectors of the Jacobian (again weighted by the squares of the singular values):
\begin{equation}
\mathbb{E}_\nu[\nabla q \nabla q^T] \in \mathbb{R}^{d_Q \times d_Q}.
\end{equation}
We refer to this basis as the derivative outer product basis. Approximation bounds for ridge function approximations are well known for the active subspace basis \cite{ZahmConstantinePrieurEtAl2020}; bounds for the approximation of the Jacobian in the derivative outer product basis are outside the scope of this work. We conjecture that the corresponding bound is analogous to that for proper orthogonal decomposition of the output \cite{ManzoniNegriQuarteroni2016}, but here for the first-order derivative of the output. These bases can be estimated by Monte Carlo, as sketched below.
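The estimation is by simple Monte Carlo: sample Jacobians at draws $m_i \sim \nu$, accumulate the two second-moment matrices, and take their dominant eigenvectors. A minimal \texttt{numpy} sketch (illustrative names, dense Jacobians assumed for clarity) is:
\begin{verbatim}
import numpy as np

def derivative_bases(jacobians, r):
    # jacobians: list of (d_Q, d_M) arrays, nabla q(m_i) with m_i ~ nu
    n = len(jacobians)
    H_as = sum(J.T @ J for J in jacobians) / n   # approx E[grad q^T grad q]
    H_op = sum(J @ J.T for J in jacobians) / n   # approx E[grad q grad q^T]
    _, V = np.linalg.eigh(H_as)                  # eigenvalues ascending
    _, U = np.linalg.eigh(H_op)
    return U[:, ::-1][:, :r], V[:, ::-1][:, :r]  # dominant r eigenvectors
\end{verbatim}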
Reduced basis neural networks based on active subspaces are referred to as derivative-informed projected neural networks (DIPNets) \cite{OLeary-RoseberryVillaChenEtAl22}; given limited training data they can achieve high accuracy in approximating parametric functions by focusing on the most sensitive directions of parameter space. In light of the preceding discussion, these networks are well suited to derivative approximation. The combination of DIPNets with the efficient approximation of the Jacobian in the combined reduced bases creates an accurate and scalable approach to approximating parametric maps and their derivatives, as demonstrated by the numerical results in the next section.
\section{Numerical Experiments} \label{section:numerical_results}
In this section we demonstrate the accuracy and efficiency of four different optimization formulations and three different network architectures for approximating parametric maps with PDE constraints, all for varying availability of training data. In general our numerical results show that Jacobian information has positive effects on the $L^2$ generalization accuracy of the parametric map approximation and leads to more accurate approximations of derivatives. Additionally, the use of derivative-informed reduced basis networks leads to efficient strategies for learning accurate parametric map and Jacobian approximations simultaneously. The code for these numerical results can be found at \url{https://github.com/tomoleary/dino}.
\subsection{Definition of Parametric PDE Maps}
We consider PDE problems where the mapping $m \mapsto q$ represents a mapping from a random coefficient field that parametrizes the PDE to observations of the PDE state variable at points inside the domain of the PDE. We use a centered Gaussian distribution $\nu = \mathcal{N}(0,\mathcal{C})$ for the random field with a trace-class Mat\'{e}rn covariance $\mathcal{C} = \mathcal{A}^{-2}$, where
\begin{equation}\label{eq:matern_covariance}
\mathcal{A} = (\delta I - \gamma \Delta),
\end{equation}
defined on the physical domain $\Omega$ and equipped with homogeneous Neumann boundary conditions. The correlation structure is parametrized by $\delta,\gamma> 0$, with the correlation length dictated by the ratio $\delta/\gamma$ and the pointwise variance of the field decreasing as $\gamma$ and $\delta$ are made larger. A minimal sampling sketch is given below.
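To make the prior concrete, the following \texttt{numpy} sketch draws a sample from a one-dimensional analogue of $\mathcal{N}(0,\mathcal{A}^{-2})$ using a finite-difference Neumann Laplacian; this is a toy illustration of the covariance \eqref{eq:matern_covariance}, not the finite element construction used in the experiments.
\begin{verbatim}
import numpy as np

def sample_matern_1d(n=200, delta=0.5, gamma=0.1, seed=0):
    # Draw m ~ N(0, (delta I - gamma Lap)^{-2}) on a uniform grid of (0, 1).
    h = 1.0 / (n - 1)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
    lap[0, 0] = lap[-1, -1] = -1.0 / h**2   # symmetric Neumann closure
    A = delta * np.eye(n) - gamma * lap     # symmetric positive definite
    L = np.linalg.cholesky(np.linalg.inv(A @ A))   # C = A^{-2} = L L^T
    return L @ np.random.default_rng(seed).standard_normal(n)
\end{verbatim}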
We use a uniform mesh of size $64 \times 64$ and piecewise linear finite elements for the parameter discretization, leading to the input dimension $d_M = 4,225$. We use a pointwise observation operator on the state at locations $\{\mathbf{x}_i \in \Omega\}_{i=1}^{50}$, so that $d_Q = 50$. Such problems are of particular relevance to Bayesian inverse problems and Bayesian optimal experimental design problems. We consider three PDE problems as follows.
\subsubsection{Poisson Problem}
For the first test case, we consider a lognormal diffusion (Poisson) problem in $\Omega = (0,1)^2$ with diffusion coefficient $e^m$, where $m \sim \mathcal{N}(0,\mathcal{C})$. We take $\delta=0.5, \gamma = 0.1$ in \eqref{eq:matern_covariance}. The observable $q(m)$ represents the observation of the PDE state $u(\mathbf{x}_i)$ on a grid in the lower half of the domain (see Figure \ref{fig:poisson_state}). The example is taken from the \texttt{hIPPYlib} tutorial \cite{VillaPetraGhattas20}; the only difference is that in the tutorial the observation locations are random instead of grid-conforming.
\begin{minipage}{0.35\textwidth}
\begin{align}
-\nabla \cdot(e^{m}\nabla u) = 0 \text{ in } \Omega \nonumber\\
u = 1 \text{ on } \Gamma_\text{top} \nonumber \\
e^m\nabla u \cdot n = 0 \text{ on } \Gamma_\text{sides} \nonumber\\
u = 0 \text{ on } \Gamma_\text{bottom}
\end{align}
\end{minipage}%
\begin{minipage}{0.6\textwidth}
\begin{figure}[H]
\center
\includegraphics[width = \textwidth]{figures/poisson_state_observables.pdf}
\caption{An instance of Poisson state and observables}
\label{fig:poisson_state}
\end{figure}
\end{minipage}
\subsubsection{Reaction-Diffusion Problem}
For the second test case, we consider a nonlinear reaction-diffusion problem similar to the Poisson problem stated above. In this PDE formulation the right-hand side forcing term, $\mathbf{f}$, involves $25$ point sources located on a Cartesian grid in the center of $\Omega$, and additionally there is a cubic nonlinearity in the reaction term. We take $\delta = 1.0,\gamma =0.1$. In this case there is less variance in the prior distribution, but more sensitivity to the diffusion operator due to the forcing terms and the cubic nonlinearity.
\begin{minipage}{0.35\textwidth}
\begin{align}
-\nabla \cdot(e^{m}\nabla u) + u^3 = \mathbf{f} \text{ in } \Omega \nonumber\\
u = 1 \text{ on } \Gamma_\text{top} \nonumber \\
e^m\nabla u \cdot n = 0 \text{ on } \Gamma_\text{sides} \nonumber\\
u = 0 \text{ on } \Gamma_\text{bottom}
\end{align}
\end{minipage}%
\begin{minipage}{0.6\textwidth}
\begin{figure}[H]
\center
\includegraphics[width = \textwidth]{figures/rdiff_state_observables.pdf}
\caption{An instance of reaction-diffusion state and observables}
\label{fig:rdiff_state}
\end{figure}
\end{minipage}
\subsubsection{Convection-Reaction-Diffusion Problem}
The third test case is a convection-reaction-diffusion (CRD) problem where the parameter appears in a nonlinear reaction term \cite{OLeary-RoseberryVillaChenEtAl22,WuOLearyRoseberryChenEtAl22}. In this case the parameters for the distribution are $\delta = 1.0,\gamma = 0.1$. The quantity of interest is again a grid of observations of the PDE solution $u(\mathbf{x}_i)$ in the lower half of the domain (see Figure \ref{fig:crd_state}). The right-hand side of the PDE is given by a Gaussian bump centered at $(0.7,0.7)$, and the velocity field $\mathbf{v}$ is given by a solution to a steady-state Navier--Stokes equation with walls driving the flow; for more information see the Appendices of \cite{OLeary-RoseberryVillaChenEtAl22}.
\begin{minipage}{0.35\textwidth}
\begin{align}
- \nabla \cdot (k \nabla u) + \mathbf{v} \cdot \nabla u \nonumber \\ + e^m u^3 =
f \quad \text{in } \Omega \nonumber \\
u = 0 \text{ on } \partial \Omega
\end{align}
\end{minipage}%
\begin{minipage}{0.6\textwidth}
\begin{figure}[H]
\center
\includegraphics[width = \textwidth]{figures/crd_state_observables.pdf}
\caption{An instance of CRD state and observables}
\label{fig:crd_state}
\end{figure}
\end{minipage}
\subsection{Overview of Training and Accuracy Metrics}\label{section:training_and_accuracies}
We consider three different network architectures: one generic encoder-decoder network, which we label ``Generic'', and two different derivative-informed projected networks (DIPNets) \cite{OLeary-RoseberryVillaChenEtAl22}, one with an input reduced basis dimension of $\mathbf{r}_M = 50$ and the other with a reduced basis dimension of $\mathbf{r}_M=100$; we refer to these networks as ``DIPNet 50-50'' and ``DIPNet 100-50'' respectively, as they both have a full $\mathbf{r}_Q = 50$ dimensional output reduced basis representation. All three networks have six hidden layers (with latent dimension $50$) and use softplus activation functions. The trainable weight dimensions are $d_W = 20,400$ for DIPNet 50-50, $d_W = 30,450$ for DIPNet 100-50, and $d_W = 226,600$ for Generic. Together, these three networks demonstrate the main effects that architecture can have on the quality of the approximations put forth in this work.
For each network we consider four different formulations of the loss function. The first is generic $L^2$ training, in which no derivative information is included. The second is full $H^1$ parametric regression, in which the entire $H^1$ semi-norm loss term is trained with equal weighting to the $L^2$ loss (emphasizing again that in the case of DIPNet training the online costs of evaluating the $H^1$ semi-norm are reduced to $O(\mathbf{r}_Q\mathbf{r}_M)$). The third formulation is the reduced $H^1$ parametric regression problem, in which the reduced $H^1$ semi-norm term is substituted for the entire $H^1$ Jacobian semi-norm. The fourth is the same, but using matrix subsampling of the Jacobian with dependent row and column samples, as in Proposition \ref{prop:submatrix_svd_convergence}; we denote it ``MS''. Due to limited space we do not include results for the nullspace sketching and full Jacobian sketching delineated in Section \ref{section:sketching}, as they performed worse than the aforementioned four approaches.
\begin{subequations}
\begin{align}
L^2: \quad &\min_w \mathbb{E}_\nu\left[\|q - f_w\|^2_{\ell^2(\mathbb{R}^{d_Q})}\right]\\
H^1: \quad &\min_w \mathbb{E}_\nu\left[\|q - f_w\|^2_{\ell^2(\mathbb{R}^{d_Q})} + \|\nabla q - \nabla f_w\|^2_{F(\mathbb{R}^{d_Q\times d_M})}\right]\\
\text{Reduced } H^1: \quad &\min_w \mathbb{E}_\nu\left[\|q - f_w\|^2_{\ell^2(\mathbb{R}^{d_Q})} + \|U_r^T(\nabla q - \nabla f_w)V_r\|^2_{F(\mathbb{R}^{r\times r})}\right]\\
\text{Reduced } H^1 \text{ MS}: \quad &\min_w \mathbb{E}_\nu\left[\|q - f_w\|^2_{\ell^2(\mathbb{R}^{d_Q})} + \mathbb{E}_{[\widehat{k}] \sim \kappa}\left[\|U_k^T(\nabla q - \nabla f_w)V_k\|^2_{F(\mathbb{R}^{k\times k})}\right]\right]
\end{align}
\end{subequations}
We train for $100$ epochs using the Adam optimizer \cite{KingmaBa2014} with \texttt{tensorflow} \cite{AbadiAgarwalBarhamEtAl2016} default hyperparameters (learning rate $\alpha = 10^{-3}$ and batch size $32$) for simplicity. For each problem we consider training with up to $4,096$ data and use $1,024$ data for generalization tests. We employ three different generalization metrics: $L^2$ accuracy, $H^1$ semi-norm accuracy, and reduced $H^1$ semi-norm accuracy, as defined below. The discrepancy between the $H^1$ semi-norm accuracy and the reduced $H^1$ semi-norm accuracy indicates issues with resolving the nullspace of the Jacobian over parameter space.
\begin{subequations}
\begin{align}
&L^2 \text{ accuracy:} \quad & \left(1 - \mathbb{E}_\nu\left[\frac{\|q - f_w\|_{\ell^2(\mathbb{R}^{d_Q})}}{\|q\|_{\ell^2(\mathbb{R}^{d_Q})}}\right]\right) \\
&\text{$H^1$ semi-norm accuracy:} \quad & \left(1 - \mathbb{E}_\nu\left[\frac{\|\nabla q - \nabla f_w\|_{F(\mathbb{R}^{d_Q\times d_M})}}{\|\nabla q\|_{F(\mathbb{R}^{d_Q\times d_M})}}\right]\right)\\
&\text{Reduced $H^1$ semi-norm accuracy:} \quad & \left(1 - \mathbb{E}_\nu\left[\frac{\|\Sigma_r - U_r^T\nabla_mf_wV_r\|_{F(\mathbb{R}^{r\times r})}}{\|\Sigma_r\|_{F(\mathbb{R}^{r\times r})}}\right]\right).
\end{align}
\end{subequations}
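These metrics are evaluated as sample averages over the test set; the following \texttt{numpy} sketch (assumed shapes, illustrative names) computes the $L^2$ and $H^1$ semi-norm accuracies:
\begin{verbatim}
import numpy as np

def l2_accuracy(q_true, q_pred):
    # q_true, q_pred: (n_test, d_Q) test observables and predictions
    rel = (np.linalg.norm(q_true - q_pred, axis=1)
           / np.linalg.norm(q_true, axis=1))
    return 1.0 - rel.mean()

def h1_seminorm_accuracy(J_true, J_pred):
    # J_true, J_pred: (n_test, d_Q, d_M) sampled Jacobians
    rel = (np.linalg.norm(J_true - J_pred, axis=(1, 2))
           / np.linalg.norm(J_true, axis=(1, 2)))
    return 1.0 - rel.mean()
\end{verbatim}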
We begin by examining the effects of derivative information on the $L^2$ accuracy, i.e.\ the approximation accuracy of the parametric map. Overall, the inclusion of derivative information in the training improved the $L^2$ accuracy when the number of training data is limited, as shown in Figures \ref{poisson_l2_comps} and \ref{rdiff_l2_comps}. The addition of derivative information improved the accuracy of the DIPNet 100-50 significantly; in particular, the matrix subsampled reduced $H^1$ loss function significantly improved the $L^2$ accuracies for limited data. The generic encoder-decoder also benefited from additional derivative information. The results are similar for the DIPNet 50-50 network.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/poisson_l2_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/poisson_l2_allaccsgeneric.pdf}
\end{subfigure}
\caption{Poisson $L^2$ training effects for the DIPNet 100-50 and Generic encoder-decoder for the four different training formulations.}
\label{poisson_l2_comps}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/rdiff_l2_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/rdiff_l2_allaccsgeneric.pdf}
\end{subfigure}
\caption{Reaction-Diffusion $L^2$ training effects for the DIPNet 100-50 and Generic encoder-decoder for the four different training formulations.}
\label{rdiff_l2_comps}
\end{figure}
Figure \ref{crd_l2_comps} demonstrates that the CRD parametric map was easier to learn than the Poisson and reaction-diffusion maps. For both architectures on this easier problem, plain $L^2$ training performed best by a small margin when the largest amounts of training data were present, but as in Figures \ref{poisson_l2_comps} and \ref{rdiff_l2_comps} the additional derivative information improved the accuracy substantially in the limited training data regime.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/crd_l2_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/crd_l2_allaccsgeneric.pdf}
\end{subfigure}
\caption{Convection-Reaction-Diffusion $L^2$ training effects for the DIPNet 100-50 and Generic encoder-decoder for the four different training formulations.}
\label{crd_l2_comps}
\end{figure}
We note that the training data axes of these plots do not provide a fair comparison of computational cost, since the additional Jacobian information requires extra computation. For high-dimensional nonlinear problems, however, the cost of the additional (linear) derivative information can be made marginal relative to the cost of evaluating the nonlinear forward map $m \mapsto q$; for a discussion see Appendix A of \cite{OLeary-RoseberryVillaChenEtAl22}. Taking this into account, these results make a strong case for the inclusion of derivative information in $L^2$ regression problems as an economical means of improving approximation accuracy for highly nonlinear problems where Jacobian matrix-vector products are inexpensive in comparison to the evaluation of $m \mapsto q$. In particular, the matrix-subsampled reduced SVD loss was highly effective in improving parametric map accuracies.
We proceed by investigating the accuracy of the Jacobian predictions for the various architectural and optimization strategies. Starting again with the Poisson and reaction-diffusion problems, Figures \ref{poisson_h1_comps} and \ref{rdiff_h1_comps} demonstrate that learning the entire Jacobian over parameter space is quite difficult: for these problems the best overall strategy was the DIPNet 100-50 with the full $H^1$ objective, and it was still only able to achieve $42.1\%$ and $41.5\%$ accuracy in the Poisson and reaction-diffusion problems, respectively. It is worth emphasizing that this is an extremely difficult metric to satisfy, since it requires that $d_Q\times d_M = 211,250$ entries of the Jacobian matrix agree for every point $m \sim \nu$. Figures \ref{poisson_h1_comps} and \ref{rdiff_h1_comps} also demonstrate that networks not trained using Jacobian information give poor approximations of Jacobian information; only the DIPNets trained with the $L^2$ loss gave Jacobian predictions that were better than $0\%$ in some regime (i.e.\ better than the zero map). In no case was the generic $L^2$ network capable of producing reasonable full Jacobian predictions.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/poisson_h1_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/poisson_h1_allaccsh1_opt.pdf}
\end{subfigure}
\caption{Comparison of $H^1$ semi-norm accuracies for the Poisson problem.}
\label{poisson_h1_comps}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/rdiff_h1_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/rdiff_h1_allaccsh1_opt.pdf}
\end{subfigure}
\caption{Comparison of $H^1$ semi-norm accuracies for the reaction-diffusion problem.}
\label{rdiff_h1_comps}
\end{figure}
As with the $L^2$ accuracy results, Figure \ref{crd_h1_comps} shows that the convection-reaction-diffusion parametric Jacobian map was easier to learn than those of the Poisson and reaction-diffusion problems: significantly better accuracy was achievable for the parametric Jacobian ($74.7\%$ with the full $H^1$ trained DIPNet 100-50 given $4,096$ data). As with the previous results, the only $L^2$ trained networks that gave greater than $0\%$ accuracy were the DIPNets; we conjecture this is due to the derivative reduced basis construction discussed in Section \ref{section:arch_constraints}.
\begin{figure}[H]
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/crd_h1_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.5\textwidth}
\includegraphics[width = \textwidth]{figures/crd_h1_allaccsh1_opt.pdf}
\end{subfigure}
\caption{Comparison of $H^1$ semi-norm accuracies for the convection-reaction-diffusion problem.}
\label{crd_h1_comps}
\end{figure}
Figures \ref{poisson_rh1_comps}, \ref{rdiff_rh1_comps} and \ref{crd_rh1_comps} demonstrate the accuracy of the Jacobian predictions in the \emph{informed} directions of the Jacobian. The discrepancies in performance between the $H^1$ and reduced $H^1$ accuracies are due to the effects of the reduced $H^1$ objective function on the nullspaces of the Jacobian. As one might expect, for this less aggressive metric the reduced $H^1$ formulations produced superior accuracies. The DIPNet 100-50 network again produced the best overall results, in particular with the reduced $H^1$ optimization losses (with and without matrix subsampling). These figures demonstrate that remarkably high reduced Jacobian accuracy can be achieved using the reduced $H^1$ objective functions; in the case of the generic encoder-decoder, however, this did not lead to good overall approximation of the Jacobian. In all of these cases the accuracies have not plateaued as in the $L^2$ accuracy plots, which suggests that superior accuracy may be attained given more training data and more sophisticated optimization algorithms.
\begin{figure}[H]
\begin{subfigure}{0.3925\textwidth}
\includegraphics[width = \textwidth]{figures/poisson_rh1_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.6075\textwidth}
\includegraphics[width = \textwidth]{figures/poisson_rh1_allaccsrh1_opt.pdf}
\end{subfigure}
\caption{Comparison of reduced $H^1$ semi-norm accuracies for the Poisson problem.}
\label{poisson_rh1_comps}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.3925\textwidth}
\includegraphics[width = \textwidth]{figures/rdiff_rh1_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.6075\textwidth}
\includegraphics[width = \textwidth]{figures/rdiff_rh1_allaccsrh1_opt.pdf}
\end{subfigure}
\caption{Comparison of reduced $H^1$ semi-norm accuracies for the reaction-diffusion problem.}
\label{rdiff_rh1_comps}
\end{figure}
\begin{figure}[H]
\begin{subfigure}{0.3925\textwidth}
\includegraphics[width = \textwidth]{figures/crd_rh1_allaccsdipnet_10050.pdf}
\end{subfigure}%
\begin{subfigure}{0.6075\textwidth}
\includegraphics[width = \textwidth]{figures/crd_rh1_allaccsrh1_opt.pdf}
\end{subfigure}
\caption{Comparison of reduced $H^1$ semi-norm accuracies for the convection-reaction-diffusion problem.}
\label{crd_rh1_comps}
\end{figure}
Overall these results support the use of DIPNet reduced basis architectures for simultaneously learning the parametric map and its Jacobian. The computational complexity of this approach is significantly lower than that of full $H^1$ learning via generic encoder-decoder architectures. Additionally, the numerical results demonstrate that this strategy led to more accurate networks in addition to being more efficient. The reduced $H^1$ optimization formulations did not generally lead to Jacobian approximations as good as those of the full $H^1$ formulation, while the full $H^1$ formulation was less accurate in the \emph{informed} modes of the parametric Jacobian than networks trained using the reduced loss. The combination of these two approaches may lead to a better overall strategy.
\section{Conclusion}
To the best of our knowledge, at present neural operators aim at, and are only capable of, learning parametric maps themselves, and not their derivatives. In this work we present several efficient strategies for incorporating formally high-dimensional Jacobian information by leveraging dimension reduction techniques. When the Jacobian has low rank $r$, the offline costs of generating training data require only $O(r)$ linear Jacobian matrix-vector products, and by our proposed methods, the inclusion of this information in neural operator training requires only $O(r^2)$ memory and compute, instead of $O(d_Qd_M)$. This is achieved by making use of reduced SVD representations of the Jacobian, matrix sketching, and ultimately reduced basis architectures for neural operators that exploit derivative sensitivities of the map. We term neural operators trained using these frameworks derivative-informed neural operators (DINOs).
Numerical results demonstrate that the inclusion of derivative information in the optimization problem leads to superior approximation of the parametric map, especially when only limited training data are available, in particular when using the highly efficient reduced $H^1$ optimization formulation. This is an important point, as it demonstrates that additional derivative information can be substituted for parametric map evaluations to improve the accuracy of the training when only few training data are available. This is useful for highly nonlinear maps where the costs of (linear) derivatives are marginal in relation to the cost of the nonlinear map itself.
Additionally, the numerical results demonstrate that the high-dimensional parametric Jacobian is hard to learn to typical accuracy requirements using neural operators. Our results show that the Jacobian accuracy from simple $L^2$ training was poor, and that the inclusion of derivative information in the training significantly improved the Jacobian approximations. The best strategy for learning the parametric Jacobian accurately was to impose the full Jacobian error loss. This strategy worked well for all the networks we investigated; however, as discussed in Section \ref{section:arch_constraints}, it is only scalable by making use of reduced-basis architectures, in particular DIPNets, in which case the computational complexity of learning the full Jacobian is reduced from $O(d_Qd_M)$ to $O(r^2)$, where $r$ is the dimension of the reduced bases.
The reduced $H^1$ strategies that we proposed are generally efficient independent of the neural operator's architecture, but gave mixed results in Jacobian accuracy. We note that the reduced $H^1$ accuracy may nevertheless be a more suitable metric for the reliable deployment of these networks in derivative-based inference methods. In ongoing experiments we have observed that DINOs with good reduced $H^1$ approximations but less accurate $H^1$ approximations, as in Figures \ref{poisson_h1_comps}--\ref{crd_h1_comps}, were able to provide accurate computations in Bayesian inverse problems, where the prior information plays a dominant role in the nullspaces. This will be further explored in an upcoming paper.
In this work the multi-objective optimization formulation was equally weighted, and made use of simple optimization algorithms. In future work we expect the performance of the models could be improved by the use of Pareto fronts and more sophisticated optimization routines.
All in all, DINO provides a framework for scalably incorporating derivative information into the training of neural operators. This not only improves parametric map approximations when only limited training data are available, but also provides a path to the deployment of DINOs in outer-loop algorithms that make use of parametric derivative information.
\bibliographystyle{siamplain}
| {
"timestamp": "2022-06-24T02:18:43",
"yymm": "2206",
"arxiv_id": "2206.10745",
"language": "en",
"url": "https://arxiv.org/abs/2206.10745",
"abstract": "We propose derivative-informed neural operators (DINOs), a general family of neural networks to approximate operators as infinite-dimensional mappings from input function spaces to output function spaces or quantities of interest. After discretizations both inputs and outputs are high-dimensional. We aim to approximate not only the operators with improved accuracy but also their derivatives (Jacobians) with respect to the input function-valued parameter to empower derivative-based algorithms in many applications, e.g., Bayesian inverse problems, optimization under parameter uncertainty, and optimal experimental design. The major difficulties include the computational cost of generating derivative training data and the high dimensionality of the problem leading to large training cost. To address these challenges, we exploit the intrinsic low-dimensionality of the derivatives and develop algorithms for compressing derivative information and efficiently imposing it in neural operator training yielding derivative-informed neural operators. We demonstrate that these advances can significantly reduce the costs of both data generation and training for large classes of problems (e.g., nonlinear steady state parametric PDE maps), making the costs marginal or comparable to the costs without using derivatives, and in particular independent of the discretization dimension of the input and output functions. Moreover, we show that the proposed DINO achieves significantly higher accuracy than neural operators trained without derivative information, for both function approximation and derivative approximation (e.g., Gauss-Newton Hessian), especially when the training data are limited.",
"subjects": "Numerical Analysis (math.NA); Machine Learning (cs.LG); Optimization and Control (math.OC)",
"title": "Derivative-Informed Neural Operator: An Efficient Framework for High-Dimensional Parametric Derivative Learning",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877002595527,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.708959468412836
} |
https://arxiv.org/abs/1309.6968 | Subspaces of $C^\infty$ invariant under the differentiation | Let $L$ be a proper differentiation invariant subspace of $C^\infty(a,b)$ such that the restriction operator $\frac{d}{dx}\bigl{|}_L$ has a discrete spectrum $\Lambda$ (counting with multiplicities). We prove that $L$ is spanned by functions vanishing outside some closed interval $I\subset(a,b)$ and monomial exponentials $x^ke^{\lambda x}$ corresponding to $\Lambda$ if its density does not exceed the critical value $\frac{|I|}{2\pi}$, and moreover, we show that the result is not necessarily true when the density of $\Lambda$ equals the critical value. This answers a question posed by the first author and B. Korenblum. Finally, if the residual part of $L$ is trivial, then $L$ is spanned by the monomial exponentials it contains. | \section{Introduction} Consider the space $C^\infty(a,b)$
equipped with the usual topology of uniform convergence on compacta of
each derivative $f^{(k)}, k=0,1,\ldots$; more specifically, the
topology given by any of the translation invariant metrics given
below. Consider a sequence $(I_j)$ of compact intervals with $\cup_j
I_j=(a,b)$, denote by $\|\cdot\|_j$ the sup-norm over $I_j$ and set
$$d(f,g)=\sum_{j,k=0}^\infty 2^{-j-k}
\frac{\|f^{(k)}-g^{(k)}\|_j}{1+\|f^{(k)}-g^{(k)}\|_j}.
$$
The present paper concerns the structure of closed subspaces of $C^\infty(a,b)$ which are invariant for the differentiation operator $D=\frac{d}{dx}$. Our investigation follows the classical line, namely we are going to consider an appropriate version of {\it spectral synthesis} for these subspaces, which we now explain.
Strictly speaking, a continuous operator has the property of spectral synthesis if any non-trivial invariant subspace is generated by the root-vectors contained in it. The definition extends in an obvious way to families of commuting operators. The prime examples are the translation invariant subspaces of the locally
convex space of continuous functions on the real line. These are now well
understood due to the work of J. Delsarte \cite{Del}, J.-P. Kahane
\cite{Kahane} and L. Schwartz \cite{Schw}. In the setting of entire
functions translation-invariance is equivalent to complex differentiation invariance and the spectral synthesis property has been proved by L. Schwartz \cite{Schw1}.
The structure of differentiation-invariant subspaces of $C^\infty(a,b)$ is more complicated and was only investigated recently in \cite{alkor}. The reason for the additional complication is the presence of the following subspaces: Given a closed set $S\subset (a,b)$ let
\begin{equation}\label{complication}L_S=\{f\in C^\infty(a,b):~f^{(k)}(S)=\{0\},\, k\ge 0\}\,.\end{equation}
In many cases these subspaces are nontrivial and, obviously, they contain no root-vector of $D$, since such root-vectors are monomial exponentials, i.e. they have the form $x\mapsto x^ne^{\lambda x}\,,n\in \mathbb{N}\,, \lambda \in \mathbb{C}\,.$ According to \cite{alkor} the $D$-invariant subspaces of $C^\infty(a,b)$ can be classified in terms of the spectrum of the restriction of this operator. More precisely, given such a closed subspace $L$ of $C^\infty(a,b)$ we have the following three alternatives:
(i) $\sigma(D|_L)=\mathbb{C}$,
(ii) $\sigma(D|_L)=\emptyset$,
(iii) $\sigma(D|_L)$ is a nonvoid discrete subset of $\mathbb{C}$ consisting of eigenvalues of $D$.
Very little is known about the structure of subspaces of the form
(i). A concrete example is obtained by choosing the set $S$ in
\eqref{complication} to consist of finitely many points, or
disjoint intervals. The subspaces of type (ii) are called residual
and are completely characterized in \cite{alkor}. The main
result of that paper asserts that such a subspace has the form
$$L_I=\{f\in C^\infty(a,b): f^{(k)}(I)=0, k\geq0\}$$ for some interval $I\subset(a,b)$ which is {\it relatively closed} in $(a,b)$. The interval $I$ may reduce to one point. We will call $I$ {\it the residual interval}.
The subspaces of type (iii) are the main concern of this paper. Such a subspace $L$ might have a nontrivial residual part (see again \cite{alkor} for the details) given by
$$L_{res}=\bigcap\{p(D)L:~p\ \text{a polynomial}\,\}.$$
The natural question which arises and has been formulated in \cite{alkor} is:
{\it Is every $D$-invariant subspace of type {\rm (iii)}
generated by its residual part and the monomial exponentials it contains?}
In the case when the spectrum of the restriction of $D$ is a
finite set an affirmative answer has been given in
\cite[Proposition 6.1]{alkor}. The general case
is quite subtle and, surprisingly enough, our results reveal an interplay between the length of the residual interval and the uniform upper density of the set $\sigma(D|_L)$. This is defined as usual by
$$
\mathcal{D}_+(\sigma(D|_L)):=\lim_{r\rightarrow\infty}\sup_{x\in\mathbb{R}}\frac{\card\{\lambda\in \sigma(D|_L): \Re\lambda\in[x,x+r]\}}{r}\,,
$$
where multiplicities are counted.
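For orientation, if $\Lambda=\alpha\mathbb{Z}$ for some $\alpha>0$, then every window $[x,x+r]$ contains $r/\alpha+O(1)$ points of $\Lambda$, so that $\mathcal{D}_+(\alpha\mathbb{Z})=1/\alpha$; the density hypothesis of Theorem \ref{main1} below then reads $2\pi/\alpha<|I|$.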
Our main results are given below. Throughout the statements, $L$ is a $D$-invariant subspace of $C^\infty(a,b)$ satisfying alternative (iii) above, and $\mathcal{E}(L)$ denotes the set of monomial exponential functions $x\mapsto x^ke^{\lambda x}$ contained in $L$.
\begin{theorem}
\label{main1}
Assume that the $D$-invariant subspace $L$ has a compact residual interval $I$. If
$$
2\pi \mathcal{D}_+(\sigma(D|_L))<|I|\,,
$$
then
$$
L= \overline{L_{res}+\mathcal{E}(L)}.
$$
\end{theorem}
The restriction on the density of $\sigma(D|_L)$ turns out to be essential: if $2\pi \mathcal{D}_+(\sigma(D|_L)) = |I|$, then spectral synthesis may fail, as the next theorem shows.
\begin{theorem}
\label{main2}
There exists a $D$-invariant subspace $L$ as above
such that
$$
L_{res} = \{f\in C^\infty(-2\pi,2\pi):
f|_{[-\pi, \pi]} \equiv 0\}\,,$$ $$\mathcal{D}_+(\sigma(D|_L)) = 1\,,
$$
but
$$
L \ne \overline{L_{res}+\mathcal{E}(L)}.
$$
\end{theorem}
Our methods pertain also to the case when the residual interval is not compact. Note that in this case we have either $I=(a,b)$, in which case the residual subspace is trivial, $L_{res}=\{0\}$, or there exists $c\in (a,b)$ such that $I=(a,c]$ or $I=[c,b)$.
It turns out that in these cases spectral synthesis always holds, as the following theorem shows.
\begin{theorem}
If the residual interval $I$ of the $D$-invariant subspace $L$ is non-compact, then
$$L= \overline{L_{res}+\mathcal{E}(L)}.$$
\label{main3}
\end{theorem}
Our approach is a substantial improvement of the method used in \cite{alkor} and is inspired by the ideas in \cite{BBB1}.
It is well known that the spectral synthesis problem for linear operators in a Hilbert space is closely related to the {\it hereditary completeness} property for systems of vectors (see e.g. \cite{markus}). A complete and minimal system of vectors $\{x_n\}_{n\in\mathbb{N}}$ with complete biorthogonal system $\{y_n\}_{n\in\mathbb{N}}$ is said to be {\it hereditarily complete} if any mixed system $\{x_n\}_{n\in \mathbb{N}\setminus N_1}\cup\{y_n\}_{n\in N_1}$ is also complete. This property for exponentials in the space $L^2(I)$ was extensively studied in \cite{BBB2}.
As we will see later, we need to prove completeness of {\it some} mixed system (the whole system $\{x_n\}$ in our case is not even complete). This was done in \cite{BBB2} using the results of \cite{BBB1} and a sharp density result of Beurling--Malliavin type. Proposition \ref{mainprop} is an adapted version of the main result of \cite{BBB2}. Thus the proof of Theorem \ref{main1} consists of two steps: we reduce the problem to a Hilbert space problem (Section \ref{step1}) and solve the appropriate mixed completeness problem (Section \ref{step2}).
The idea of the construction of the counterexample in Theorem \ref{main2}
goes back to \cite[Theorem 1.3]{BBB1}.
\smallskip
\textbf{Organization of the paper.} Theorem \ref{main1} is proved in Sections \ref{step1} and \ref{step2}. An example of the absence of spectral synthesis is given in Section \ref{examp}. In Section \ref{nores} we prove Theorem \ref{main3}.
\section{Proof of Theorem \ref{main1}
\label{step1}}
\subsection{Preliminaries}
The continuous linear functionals on $C^\infty(a,b)$ are compactly supported
distributions of finite order on the interval $(a, b)$ (that is, distributions of the form
$\phi=f^{(n)}$, $n\in\mathbb{N}$, where $f\in L^1_{loc}(a,b)$ and differentiation
is understood in the sense of distributions). We denote this class by $S(a,b)$.
As usual we can define the Fourier transform of $\varphi\in S(a,b)$ by the formula
$$
\hat{\varphi}(z)=\varphi(e^{izt}),
$$
the functional being applied with respect to the variable $t$.
Since $\varphi$ has finite order, the function $\hat{\varphi}$ is an entire function
of finite exponential
type with at most polynomial growth on the real line. Put
$$
\mathcal{H}_I=\biggl{\{}f: f = \sum_{k=0}^n z^k(f_k(z)+c_k),
\quad f_k\in\mathcal{PW}_I, \ c_k \in \mathbb {C} \biggr{\}},
$$
where $\mathcal{PW}_I$ is the Fourier image of the space $L^2(I)$.
For every functional $\varphi\in S(a,b)$ we have $\hat{\varphi}\in\mathcal{H}_I$,
where $I=I(\varphi)$ is a closed interval such that $\supp\varphi\subset I$.
Under the Fourier transform the duality between $S(a,b)$ and $C^\infty(a,b)$ becomes
the duality between the space $\mathcal{H}=\cup_{I\subset(a,b)}\mathcal{H}_I$ and
the space of entire functions $\mathcal{U}$ with conjugate indicator diagrams
in $(a,b)$ and decreasing faster than any polynomial along the real line.
Let $F\in\cup_{I\subset(a,b)}\mathcal{H}_I$, $G\in\mathcal{U}$. If $F$ has an infinite number of zeros, then the bilinear form $(F,G)_{\mathcal{H},\mathcal{U}}=(\check{F},\check{G})_{S(a,b), C^{\infty}(a,b)}$ can be viewed as the usual inner product in $L^2(\mathbb{R})$ (or $\mathcal{PW}_I$) of the functions $F(z)\slash P(z)$ and $G(z)\overline{P(\overline{z})}$, where $P$ is a polynomial of sufficiently large degree whose zero set is a subset of the zero set of $F$. This explains how the spaces $\mathcal{PW}_I$ come into the picture.
For an entire function $F$ we will denote its zero set by $\mathcal{Z}_F$.
\subsection{The associated Hilbert space problem}
Let us denote $\Lambda=\sigma(D|_L)$,
$$
L_0= \overline{L_{res}+\mathcal{E}(L)},
$$
and let $I$ be the {\it residual interval} given by
$$
L_{res}= L_I =\{f\in C^\infty(a,b): f^{(k)}(I)=0\,,k\ge 0\}.
$$
If $I$ reduces to one point, it is easy to see that
$L=C^\infty(a,b)$. Thus we shall assume throughout that
$I$ does not reduce to a point.
Let us consider the annihilators $L^\perp$ and $L_0^\perp$
of $L$ and $L_0$ respectively in the dual space $S(a,b)$.
It is easy to see that
$$
\widehat{L_0^{\perp}}=\{F: F\in \mathcal{H}_I, F\bigl{|}_\Lambda=0\}.
$$
It will be sufficient to prove that $L^{\perp}$ is dense in $L^\perp_0$ in the weak-star topology.
Let us consider a distribution $\phi\in L^\perp$ for which the length of $\conv\supp\phi$ is maximal. It is clear that $\hat{\phi}\in \mathcal{H}_I$ but $\hat{\phi}\notin \mathcal{H}_J$ for any subinterval $J\subset I$, $J\neq I$. We can write
$$
\hat{\phi}(z)=G_\Lambda(z)E(z),
$$
where $G_\Lambda$ is a canonical product corresponding to the sequence $\Lambda$ and $E$ is some entire function. In fact, we shall choose $G_\Lambda$ such that $G_\Lambda(z)/\overline{G_\Lambda(\bar{z})}$ is a quotient of two Blaschke products. Moreover, without loss of generality we can assume that $E$ has no multiple zeros and $E(\lambda)\neq0$ for $\lambda\in\Lambda$.
From \cite[Proposition 3.1]{alkor} we know that
$$
\frac{G_\Lambda(z)E(z)}{z-w}\in \widehat{L^\perp},\quad w\in\mathcal{Z}_E.
$$
Therefore, we can further assume that $G_\Lambda E\in\mathcal{PW}_I$; otherwise we can start with the function $\frac{E(z)}{(z-w_1)\cdots(z-w_n)}$ in place of $E$.
We argue by contradiction. Suppose that $L^{\perp}$ is not dense in $L_0^\perp$.
Then there exists an entire function $T$ such that
$G_\Lambda T\in \mathcal{H}_I$ and $G_\Lambda T\notin \widehat{L^\perp}$.
We fix such a function $T$ and a number $N$ such that $G_\Lambda(x) T(x) = O(1+|x|^N)$.
From the Hahn--Banach
Theorem we know that there exists a non-trivial function $f\in C^\infty(I)$ such that
$$
\biggl{(}\frac{G_\Lambda(z)E(z)}{z-w},\hat{f}\biggr{)}=0,
\quad w\in\mathcal{Z}_E \quad \text {and}\quad (G_\Lambda T, \hat{f})\neq 0.
$$
In order to arrive at a Hilbert space setting we assume first that $G_\Lambda T\in \mathcal{PW}_I$. In this case both conditions may be understood in terms of the usual inner product in $L^2(\mathbb{R})$. The general case ($G_\Lambda T$ grows polynomially) will be reduced to this special situation in the next subsection.
This leads to the following system of equations for $F=\hat{f}$:
\begin{equation}
\begin{cases}
\int_\mathbb{R}\frac{G_\Lambda(x)E(x)}{x-w}\overline{F(x)}dx=0,\quad w\in\mathcal{Z}_E,\\
\int_\mathbb{R}G_\Lambda(x)T(x)\overline{F(x)} dx \neq 0.
\label{inteq}
\end{cases}
\end{equation}
The contradiction we seek is given by the following proposition. The reproducing kernel at $\lambda$ in $\mathcal{PW}_I$ will be denoted by $k_\lambda$.
\begin{proposition}
\label{pw}
Let $G\in \mathcal{PW}_I$ be a function with simple zeros and such that $G\notin \mathcal{PW}_J$ for any proper subinterval $J$ of $I$. If we have a partition of its zero set $\mathcal{Z}_G=\Lambda_1\cup\Lambda_2$, $\Lambda_1\cap\Lambda_2=\emptyset$ such that $2\pi \mathcal{D}_+(\Lambda_2)<|I|$,
then the mixed system
\begin{equation}
\{k_\lambda\}_{\lambda\in\Lambda_2}\cup\biggl{\{}\frac{G(z)}
{z-\lambda}\biggr{\}}_{\lambda\in\Lambda_1}
\label{mixed}
\end{equation}
is complete in $\mathcal{PW}_I$.
\label{mainprop}
\end{proposition}
Let us assume for the moment that Proposition \ref{mainprop} is proved, and that $G_\Lambda T\in \mathcal{PW}_I$.
From \eqref{inteq} it follows that $F$
is orthogonal (in $\mathcal{PW}_I$) to the family
$\bigl{\{}\frac{G_\Lambda(z)E(z)}{z-w}\bigr{\}}_{w\in\mathcal{Z}_E}$.
Apply Proposition \ref{pw} to the function $G=G_\Lambda E$,
with $\Lambda_2=\Lambda, \Lambda_1=\mathcal{Z}_E$,
to conclude that $F$ belongs to the closed span of $
\{k_\lambda\}_{\lambda\in\Lambda}$, which obviously contradicts the second equation in \eqref{inteq}.
\subsection{Reduction of the general case to the Hilbert space setting}
In the general case we only know that $G_\Lambda T(x)=O(1+|x|^N)$ on the real line. Put
$$
M=\{G_\Lambda H: G_\Lambda H\in \mathcal{PW}_I\}.
$$
By the previous argument $\widehat{L^{\perp}}$ is dense in $M$; hence, it remains to prove that $M$ is dense in $\widehat{L^{\perp}_0}$. Assume the contrary.
Then there exists a function $f\in C^{\infty}(a,b)$ such that $\hat{f}\perp M$ but $(G_\Lambda T,\hat{f})\neq0$.
Let us fix a finite set $W$, $W\cap\Lambda=\emptyset$,
such that there exists a function $g$ of the form $\sum_{w\in W}c_we^{iw t}$ with the property that for $0\leq k\leq 2N+2$, $g^{(k)}-f^{(k)}$ vanishes at the endpoints of $I$. Moreover, it is clear that there exists $G_\Lambda F_0\in M$ such that $G_\Lambda(T+F_0)$ vanishes on $W$. Thus $G_\Lambda(T+F_0)=G_\Lambda P_W T_1$, where $P_W$ is a polynomial vanishing on $W$. Obviously,
$(G_\Lambda P_W T_1, \hat{f})\neq 0$.
Now let $\tilde{F}=\widehat{f-g}$, and note that
$\tilde{F}(x)=O((1+|x|)^{-2N})$;
in particular, $\tilde{F}Q\in L^2(\mathbb{R})$, for every polynomial $Q$ of degree strictly less than $2N$.
For every entire function $U$ with $G_\Lambda P_W U\in\mathcal{PW}_I$,
we have the system
\begin{equation}
\begin{cases}
\int_\mathbb{R}G_\Lambda(x)P_W(x)U(x)\overline{\tilde{F}(x)}dx=0,\\
\int_\mathbb{R}G_\Lambda(x)P_W(x)T_1(x)\overline{\tilde{F}(x)}dx\neq0.
\end{cases}
\end{equation}
Now fix a function $U$ such that $T_1$ and $U$ have at least $N+2$ common zeros and $G_\Lambda U\notin \mathcal{PW}_J$ for any proper subinterval $J\subset I$. Then write $U=QU_1$, $T_1=QT_2$, where $Q$ is a polynomial of degree $N+2$ and let $Q^*(z)=\overline{Q(\overline{z})}$. We then have
$$
(G_\Lambda P_W T_2, \tilde{F}Q^*)\neq 0, \qquad \biggl{(}\frac{G_\Lambda P_W U_1}{z-u}, \tilde{F}Q^*\biggr{)}= 0,
\quad \text{if} \quad u\in\mathcal{Z}_{U_1},
$$
which is a system of the form \eqref{inteq}
with the set $\Lambda\cup W$ instead of $\Lambda$ and with $U_1$
instead of $E$. As we have seen in the previous subsection
this system leads to a contradiction. \qed
\section{Proof of Proposition \ref{pw}\label{step2}}
In order to complete the proof of Theorem \ref{main1} it remains to prove Proposition \ref{mainprop}. To simplify notation we assume without loss of generality that $I=[-\pi,\pi]$ and write $\mathcal{PW}_\pi$ instead of $\mathcal{PW}_{[-\pi,\pi]}$. We denote by $k_\lambda$ the reproducing kernel of $\mathcal{PW}_\pi$
corresponding to the point $\lambda$, that is,
$$
k_\lambda(z) = \frac{\sin \pi (z-\overline \lambda)}{\pi(z-\overline
\lambda)}, \qquad \text{and}\quad f(\lambda) = (f,k_\lambda).
$$
Recall that for any $\gamma\in \mathbb{R}$ the system
$\{k_{n+\gamma}\}_{n\in \mathbb{Z}}$
is an orthogonal basis of $\mathcal{PW}_\pi$.
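Indeed, since $(f,k_\lambda)=f(\lambda)$ for every $f\in \mathcal{PW}_\pi$, we have for integers $m\neq n$
$$
(k_n,k_m)=k_n(m)=\frac{\sin \pi (m-n)}{\pi(m-n)}=0,\qquad \|k_n\|^2=k_n(n)=1,
$$
and the same computation applies to any shifted system $\{k_{n+\gamma}\}_{n\in\mathbb{Z}}$; in particular, the coefficients of an expansion $h=\sum_n \overline{a_n}k_n$ are recovered as $\overline{a_n}=(h,k_n)=h(n)$.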
\subsection{Equation for the function of zero exponential type}
The proof is similar to the proof of the main theorem in \cite{BBB1}.
Assume the contrary. Then there exists a nonzero $h\in\mathcal{PW}_\pi$
such that
\begin{equation}
\biggl{(}h,\frac{G(z)}{z-\lambda}\biggr{)}=0,\quad \lambda\in\Lambda_1,\qquad (h,k_\lambda)=0,\quad \lambda\in\Lambda_2.
\label{inn}
\end{equation}
We expand $h$ with respect to the orthogonal basis $\{k_n\}_{n\in\mathbb Z}$:
$$
h(z)=\sum_n\overline{a_n}k_n(z).
$$
Then the equations \eqref{inn} can be rewritten as
\begin{equation}
\begin{cases}
\sum_n\frac{a_nG(n)}{\lambda-n}=0,\quad \lambda\in\Lambda_1,\\
\sum_n\frac{(-1)^n\overline{a_n}}{\lambda-n}=0,\quad \lambda\in\Lambda_2.
\end{cases}
\end{equation}
Hence there exist entire functions $S_1$ and $S_2$ such that
\begin{equation}
\begin{cases}
\sum_n\frac{a_nG(n)}{z-n}=\frac{G_1(z)S_1(z)}{\sin \pi z}\\
\sum_n\frac{(-1)^n\overline{a_n}}{z-n}=\frac{G_2(z)S_2(z)}{\sin \pi z}=\frac{h(z)}{\sin \pi z},
\label{main}
\end{cases}
\end{equation}
where $G_1$ and $G_2$ are canonical products corresponding to $\Lambda_1,\Lambda_2$, respectively. The functions $S_1$, $S_2$ satisfying \eqref{main} parametrize all functions orthogonal to the mixed system \eqref{mixed}.
Put $V=S_1S_2$. Comparing the residues in equations \eqref{main} at the points $n$ we get
$$V(n)=(-1)^n|a_n|^2.$$
Therefore we have the representation
$$V(z)=Q(z)+R(z) \sin \pi z,$$
where
$$
Q(z)=\sin \pi z \sum_n\frac{|a_n|^2}{z-n}
$$
and $R$ is a function of zero exponential type. Without loss of generality we can assume
that $S_1$ and $S_2$ are real on the real line (similar formulae hold for $S_1+S^*_1$ and $S_2+S_2^*$, see \cite{BBB1}).
We know that
\begin{equation}
R(\lambda)+\sum_n\frac{|a_n|^2}{\lambda-n}=0,\quad \lambda\in\mathcal{Z}_{S_2}.
\label{R0}
\end{equation}
This is a very restrictive condition because we can start with
a basis $\{k_{n+\gamma}\}_{n\in\mathbb{Z}}$, $\gamma\in[0,1)$, which is sufficiently far from the real zeros of $S_2$, so that the Cauchy transform of the sequence $|a_n|^2$ is not too large at the real zeros of $S_2$. On the other hand, $S_2$ has many real zeros, because the function $V$ has at least one zero in each interval $[n, n+1)$. In the next subsection we present this idea in detail.
\subsection{Choice of the basis.} Choosing amongst the bases
$\{k_{n+\gamma}\}$, $\gamma\in \mathbb{R}$, is of course equivalent
to applying the corresponding translation to the functions involved. For
simplicity, we shall keep the same notation for these. Then we can find
a sufficiently small $\delta>0$ for which there exist two subsets
$\Sigma, \Sigma_1$ of the zero set $\mathcal{Z}(S_2)$ of the function
$S_2$ with the following properties:
\begin{itemize}
\item
$\Sigma$ has exactly one point in each of those intervals $[n,n+1)$
for which $\mathcal{Z}(S_2)\cap[n, n+1)\neq\emptyset$,
and
$$
\dist(x,\mathbb{Z})>\frac{\delta}{1+x^2}, \qquad x\in\Sigma;
$$
\item
$\Sigma_1$ has positive upper density,
and $\dist(x, \mathbb{Z})>\delta$, $x\in\Sigma_1$.
\end{itemize}
We need to consider three cases.
If $R$ is a nonzero polynomial, then the zeros of the
function in \eqref{R0} approach $\mathbb{Z}$ and we obtain
a contradiction to the existence of $\Sigma_1$.
If $R=0$, then it is known that the density of $\Sigma_1$ is zero
\cite[Proposition 3.1]{BBB1}, again a contradiction. Finally, if $R$ is not a polynomial,
we can divide it by $(z-z_1)(z-z_2)$, where $z_1$ and $z_2$
are two arbitrary zeros of $R$, $z_1,z_2\not\in\Sigma$,
to get a function $R_1$ of zero exponential type which is
bounded on $\Sigma$.
Next, we obtain some information on $\Sigma$.
For a discrete set $X=\{x_n\} \subset \mathbb{R}$ we
consider its counting function
$n_X(t) = {\rm card}\, \{n: x_n \in [0, t)\}$, $t\ge 0$,
and $n_X(t) = -{\rm card}\, \{n: x_n \in (-t, 0)\}$, $t<0$.
If $f$ is an entire function and $X$ is the set of its real zeros
(counted according to multiplicities), then there exists a branch
of the argument of $f$ on the real axis,
which is of the form $\arg f(t) = \pi n_X(t) +\psi(t)$, where $\psi$
is a smooth function. Such a choice of the argument is unique
up to an additive constant, and in what follows we always assume that the argument is chosen
to be of this form.
Denote by $\tilde u$ the conjugate function (the Hilbert
transform) of $u$,
$$
\tilde u (x)=\frac{1}{\pi} v.p.
\int_\mathbb{R} \bigg(\frac{1}{x-t} +\frac{t}{t^2+1}\bigg) u(t)dt.
$$
We use the fact that for every function $f\in\mathcal{P}W_\pi$ with the
conjugate indicator diagram $[-\pi,\pi]$ and all zeros
in $\overline{\mathbb{C}_+}$, one has
\begin{equation}
\label{argum}
\arg f = \pi x +\tilde{u}+c,
\end{equation}
where $u\in L^1((1+x^2)^{-1}dx)$, $c\in\mathbb{R}$.
Indeed, the function $g=e^{-i\pi z}\overline{f(\overline{z})}$ is an {\it outer} function in $\mathbb{C}^+$ and, hence, it can be represented
in the form $e^{\log|g|+i\widetilde{\log|g|}}$. Taking arguments, we
get \eqref{argum}.
It follows from \eqref{main} that $GV\in\mathcal{P}W_{2\pi}$. For any $f\in\mathcal{PW}_\pi$ we put
$$f^{\#}(z)=f(z)B^-(z),$$
where $B^-(z)$ is a Blaschke product in $\mathbb{C}^-$ with zero set $\mathcal{Z}_f\cap\mathbb{C}^-$. The function $f^{\#}$ has no zeros in $\mathbb{C}^-$ and is also in $\mathcal{PW}_\pi$. So, we have
$$h^{\#}=G^{\#}_2S^{\#}_2\in\mathcal{P}W_\pi,\quad G^{\#}V^{\#}\in\mathcal{P}W_{2\pi}.$$
Using these inclusions and the fact that $V^{\#}$ has at least one zero in each
interval $(n,n+1)$ we find an equation
on the counting function of $\Sigma$.
In the next subsection we will show that this contradicts the existence of a
nonconstant entire function of zero exponential type which is bounded on $\Sigma$.
Let us consider its representation $V^\# = V_0 H$, where the zeros
of $V_0$ are simple, interlacing with $\mathbb{Z}$ and
$V_0|_\Sigma=0$. It is clear that $\arg V_0 = \pi x+ O(1)$. Since
$$
\arg(G^\# V^\#)= 2\pi x + \tilde{u_1}+c,
$$
and
$$\arg(G^\#)= \pi x + \tilde{u_2}+c,$$
we conclude that
$$
\arg(H)= \tilde{u_3}+O(1).
$$
Consider the equality $h^\# = G_2^\# H S_2^\#/H$ and note that
$$
\arg\Big(\frac{S_2^\#}{H}\Big) = \pi n_\Sigma -\alpha,
$$
where $\alpha$ is some nondecreasing function on $\mathbb{R}$.
This follows from the fact that
$S_2^\#/H$ vanishes only on a subset of the real axis which contains $\Sigma$.
Applying the representation (\ref{argum}) to $h^\#$, we conclude that
\begin{equation}
\label{n15}
\arg G_2^\# + \pi n_\Sigma(x) = \pi x + \tilde{u}+ v + \alpha,
\end{equation}
where $u\in L^1((1+x^2)^{-1}dx)$, $v\in L^\infty(\mathbb{R})$,
and $\alpha$ is nondecreasing.
Using the fact that the upper density of $\Lambda_2$ is less than $\pi$ we get an equation
\begin{equation}
\label{n16}
\pi n_\Sigma(x) = \pi \varepsilon x + \tilde{u}+ v + \alpha_1,\quad \varepsilon >0,
\end{equation}
where $\alpha_1$ is nondecreasing.
Summing up, we have an entire function $R_1$ of zero exponential type which is not a polynomial, and which is bounded
on a set $\Sigma\subset\mathbb R$ satisfying \eqref{n16}.
\subsection{Beurling--Malliavin meets P\'olya}
To deduce a contradiction from \eqref{n16}, we use some information on the classical P\'olya problem
and on the second Beurling--Malliavin theorem.
We say that a sequence $X=\{x_n\} \subset \mathbb{R}$ is a {\it P\'olya sequence}
if any entire function of zero exponential type which is bounded on $X$
is a constant. We say that a disjoint sequence of intervals
$\{I_n\}$ on the real line is a {\it long sequence of intervals}
if
$$
\sum_n\frac{|I_n|^2}{1+\dist^2(0,I_n)}=+\infty.
$$
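For example, the intervals $I_n=[2^n, \tfrac{3}{2}\cdot 2^n]$, $n\in\mathbb{N}$, form a long sequence, since every summand above is comparable to $1$; on the other hand, the intervals $I_n=[n^2, n^2+n]$ do not, the corresponding series being comparable to $\sum_n n^{-2}<\infty$.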
A complete solution of the P\'olya problem was obtained by
Mishko Mitkovski and Alexei Poltoratski \cite{mp}. In particular, a separated sequence
$X\subset\mathbb{R}$ is not a P\'olya sequence if and only
if there exists a long sequence of intervals $\{I_n\}$ such that
$$
\frac{\card(X \cap I_n)}{|I_n|}\rightarrow 0.
$$
Applying this result to our $R_1$ and $\Sigma$ (formally speaking, $\Sigma$ is not a separated sequence, but by construction it is a union of two separated
sequences which are interlacing), we find a long system
of intervals $\{I_n\}$ such that
$
\frac{\card(\Sigma\cap I_n)}{|I_n|}\rightarrow0.
$
Given $I=[a,b]$, denote $I^-=[a,(2a+b)/3]$, $I^+=[(a+2b)/3,b]$, and for a function $\gamma$ put
$$
\Delta^*_I[\gamma] = \inf_{I^+}\gamma -
\sup_{I^-}\gamma;
$$
below we write $\Delta^*_I$ for this quantity with $\gamma(x)=\pi \varepsilon x -\pi n_\Sigma(x) +v(x)$.
Now, for a long system of intervals $\{I_n\}$ and for some $c>0$
we have
$$
\Delta^*_{I_n}\geq c|I_n|.
$$
Next we use a version of the second Beurling--Malliavin theorem
given by N. Makarov and A. Poltoratski in \cite{pm}.
Combining Proposition 3.13 and Theorem 5.9 from \cite{pm} we get
\begin{proposition} Suppose $\gamma\in C(\mathbb{R})$.
If there exists $c>0$ and a long system of intervals $I_n$ such that
\begin{equation}
\Delta^*_{I_n}[\gamma]\geq c|I_n|,
\label{delta}
\end{equation}
then $\gamma$ cannot be represented as $\alpha+\widetilde{h}$,
where $\alpha$ is decreasing and $h\in L^1((1+x^2)^{-1}dx)$.
\label{deltaprop}
\end{proposition}
Applying this to the function $\pi\varepsilon x - \pi n_\Sigma(x) +v$,
we arrive at a contradiction. Proposition \ref{mainprop}
is completely proved.
\qedsymbol
\begin{remark}
We close this subsection with an additional explanation of the
result in Proposition \ref{deltaprop}. Assume that
$|I_n|=o(\dist(0, I_n))$. Let $h_n=h\chi_{10I_n}$ be the restriction
of the function $h$ to the interval $10I_n$ (the interval of length
$10|I_n|$ with the same center as $I_n$). If the inequality
\eqref{delta} holds for $h_n$, then Kolmogorov's weak-type inequality for the conjugate function states
that there exists $c$ such that
$$\int_{|\widetilde{h_n}|>A}\frac{dx}{1+x^2}\leq \frac{c}{A}\int_{\mathbb{R}}\frac{|h_n(x)|dx}{1+x^2},\quad A>1.$$
Choosing $A=\varepsilon|I_n|$ with $\varepsilon>0$, we have
$$\frac{|I_n|^2}{1+\dist^2(0,I_n)}\leq \frac{3c}{\varepsilon} \int_{\mathbb{R}}\frac{|h_n(x)|dx}{1+x^2}.$$
Summing up over $n$ we get the contradiction
$$\infty=\sum_n\frac{|I_n|^2}{1+\dist^2(0,I_n)}\leq C_1\sum_n\int_{\mathbb{R}}\frac{|h_n(x)|dx}{1+x^2}\leq C_2\int_{\mathbb{R}}\frac{|h(x)|dx}{1+x^2}<\infty.$$
We refer to \cite{pm} for the details.
\end{remark}
\subsection{A reformulation in terms of an approximation result.} Given a distribution
$\varphi$ and $w\in \mathbb{C}$ which is a zero of order $n$ of
its Fourier transform $\hat{\varphi}$, we denote by
$\varphi_{w,k}$, $1\le k\le n$, the distributions with Fourier
transforms
$$\hat{\varphi}_{w,k}(z)=\frac{\hat{\varphi}(z)}{(z-w)^k}.$$ As a
consequence of the proof of Theorem \ref{main1} we have the
following result.
\begin{corollary}\label{approxres}
Let $\varphi$ be a compactly supported distribution, let $I$ be
the convex hull of its support, and let $\Lambda$ be a subset of
the zero set of $\hat{\varphi}$ (counting multiplicities).
If $$D^+(\Lambda)<\frac{|I|}{2\pi},$$ then every distribution
$\psi$ with support in $I$ whose Fourier transform vanishes on
$\Lambda$, lies in the weak-star closure of the linear span of
$\{\varphi_{w,k}:~w\notin \Lambda\}$.
\end{corollary}
\section{Proof of Theorem \ref{main2}\label{examp}}
Let $\mathcal{P}$ be the set of all polynomials.
\begin{lemma}
\label{debr}
There exists a sequence $\Lambda \subset \mathbb{R}$ of density $\pi$
with the generating function $G_\Lambda$ and an entire function $S$ such that
the following three conditions hold\textup:
$(i)$ $G_\Lambda \mathcal{P} \subset \mathcal{P}W_\pi$ and $G_\Lambda S \in \mathcal{P}W_\pi$\textup;
$(ii)$ $G_\Lambda S = \sum_{n\in \mathbb{Z}} a_n k_n$ and for any $N>0$
we have $a_n = o(n^{-N})$, $|n|\to\infty$\textup;
$(iii)$ $G_\Lambda S$ is orthogonal to $G_\Lambda \mathcal{P}$ in $\mathcal{P}W_\pi$.
\end{lemma}
Assume for the moment that Lemma \ref{debr} is proved. We first show how to deduce the
counterexample of Theorem \ref{main2} from this lemma.
\bigskip
\\
{\it Proof of Theorem \ref{main2}.}
Assume that $\Lambda$ and $S$ are constructed. Put
$$
M = \big\{f\in L^2(-\pi, \pi):\ \hat f \in G_\Lambda \mathcal{P} \big\}
$$
(recall that any function $F\in \mathcal{P}W_\pi$ is of the form $F = \hat f$
for some $f\in L^2(-\pi, \pi)$).
Of course, each element $f\in M$ defines a continuous linear functional on
$C^\infty(-2\pi, 2\pi)$ by
$$
\phi_f(h) = \int_{-\pi}^\pi h(t) \overline{f(t)} dt, \qquad h \in C^\infty(-2\pi, 2\pi).
$$
The functional $\phi_f$ is well defined since $f(t)\equiv 0$, $|t|\ge \pi$.
Now let
$$
L = M^\perp = \{h\in C^\infty(-2\pi, 2\pi):\ \phi_f(h) = 0, \, f\in M\}.
$$
By the construction, $L$ is a closed subspace of
$C^\infty(-2\pi, 2\pi)$ and
$$
\{f\in C^\infty(-2\pi,2\pi): f|_{[-\pi, \pi]} \equiv 0\} \subset L.
$$
Also, since the set of common zeros of $\widehat{L^\perp}$ coincides with $\Lambda$
we have $\sigma(D|_L) = \Lambda$.
Let us show that $L$ is $D$-invariant. Let $h\in L$.
We need to show that $\int_{-\pi}^\pi h'(t)\overline{f(t)}dt =0$ for any $f\in M$.
Since $f$ vanishes outside $[-\pi, \pi]$, the integral depends only on the values of
$h$ inside this interval. Thus we may assume without loss of generality that
$\supp h \subset (-\pi-\varepsilon, \pi+\varepsilon)$ for some small $\varepsilon>0$.
Therefore both $F = \hat f$ and $H = \hat h$ are rapidly decaying functions and we have
$$
\int_{-\pi}^\pi h'(t) \overline{f(t)} dt = \int_\mathbb{R} x H(x)\overline{F(x)}dx.
$$
We have $F = G_\Lambda P$ for some polynomial $P$. Then $xF(x) = xP(x)G_\Lambda(x)
= \hat f_1$ for some $f_1\in M$. Hence,
$$
\int_\mathbb{R} x H(x) \overline{F(x)} dx = \int_{-\pi}^\pi h(t) \overline{f_1(t)}dt = 0,
$$
since $h\in M^\perp$. We have seen that $L$ is $D$-invariant.
Now we construct a continuous functional $\phi$ on $L$ such that
$\phi|_{L_0} = 0$, but $\phi|_L \ne 0$, where, as in the proof of the first theorem,
$$L_0= \overline{L_{res}+\mathcal{E}(L)}.$$
Let $h_0\in L^2(-\pi, \pi)$ be such that $\widehat{\overline{h_0}} = G_\Lambda S$.
Recall that $G_\Lambda S = \sum_{n\in \mathbb{Z}} a_n k_n$ where $a_n = o(n^{-N})$ for any $N>0$.
Hence, $\overline{h_0(t)} = \sum_{n\in \mathbb{Z}} a_n e^{int}$ and, by
the fast decay of $a_n$ we conclude
that $h_0$ is a $C^\infty$ function in the {\it closed} interval $[-\pi, \pi]$.
Denote by $h$ some function in $C^\infty(-2\pi, 2\pi)$ such that
$h|_{[-\pi, \pi]} = h_0$.
Consider the functional $\phi$ on $L$,
$\phi(g) = \int_{-\pi}^\pi g(t)\overline {h_0(t)} dt$. It is clear that $\phi$
annihilates the set $\{g\in C^\infty(-2\pi,2\pi): g|_{[-\pi, \pi]} \equiv 0\}$.
Also, $\phi(e^{i\lambda t}) = \widehat{\overline{h_0}}(\lambda) = 0$ for $\lambda\in\Lambda$.
Thus, $\phi$ annihilates $L_0$.
Let us show that $h \in L$. Since $\phi(h) = \int_{-\pi}^\pi |h_0(t)|^2 dt >0$,
this will imply that $L\ne L_0$. Indeed, for any $f\in M$
we have
$$
\begin{aligned}
\phi_f(h) & = \int_{-\pi}^\pi h(t)\overline{f(t)} dt
= \int_{-\pi}^\pi h_0(t)\overline{f(t)} dt \\
& = \int_\mathbb{R} \hat h_0(x)\overline{\hat{f}(x)} dx = 0,
\end{aligned}
$$
since $\hat h_0 = G_\Lambda S$, $\hat f = G_\Lambda P$ for some polynomial $P$,
and $G_\Lambda S \perp G_\Lambda \mathcal{P}$ in $L^2(\mathbb{R})$
by the construction of Lemma \ref{debr}.
Thus we have constructed a functional which separates $L_0$ and $L$.
\qed
\bigskip
\\
{\it Proof of Lemma \ref{debr}.}
Recall that we denote by $\mathcal{Z}_F$ the zero set of an entire
function $F$ (all functions involved will have simple zeros). Let
$$
U(z) = \frac{\sin (\pi \sqrt{z})}{\pi \sqrt{z}} = \prod_{n\in \mathbb{N}}
\bigg(1-\frac{z}{n^2}\bigg),
$$
and let $V$ be some product with very lacunary zeros, say
$$
V(z) = \prod_{n\in \mathbb{N}} \bigg(1-\frac{z}{2^{2n}+1}\bigg).
$$
Put
$$
G_\Lambda(z) = \frac{\sin \pi z}{U(z)V(z)}.
$$
Note that $UV$ tends to infinity faster than any power along $\mathbb{R}$,
except in some small neighborhoods of the zeros of $UV$. Therefore, using, e.g.,
the Plancherel--P\'olya theorem, one can easily show that $G_\Lambda P \in \mathcal{P}W_\pi$
for any polynomial $P$.
Let us introduce two more entire functions
$$
T(z) = \frac{\sin (\pi \sqrt{z+1/2}/10)}{W(z) \sqrt{z+1/2}}, \qquad
W(z) = \prod_{n\in \mathbb{N}} \bigg(1-\frac{z}{100\cdot 2^{2n}-1/2}\bigg)
$$
(note that the zeros of the numerator are exactly of the form $100n^2-1/2$, $n\in \mathbb{N}$).
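Indeed, writing $w=\sqrt{z+1/2}$, we have $\sin(\pi w/10)=0$ precisely when $w=10k$ with an integer $k\geq 0$, i.e., when $z=100k^2-1/2$; the zero corresponding to $k=0$ is cancelled by the factor $\sqrt{z+1/2}$ in the denominator of $T$, while the zeros $100\cdot 2^{2n}-1/2$ of $W$ correspond to $k=2^n$ and cancel as well, so that $T$ is indeed an entire function.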
Now we define the sequence $a_n$ by $a_n :=0$ for $n\notin \mathcal{Z}_U\cup\mathcal{Z}_V$ and
$$
a_n:= (-1)^n T(n), \qquad n \in \mathcal{Z}_U\cup\mathcal{Z}_V.
$$
It is easy to see that $T(n) = o(n^{-N})$ for any $N$
when $n\to +\infty$, since $\sin(\pi \sqrt{x+1/2}/10)$ is bounded for $x>0$ and $|W(n)|$
grows faster than any power along $\mathbb{N}$.
Now we define the function $S$ by the formula
\begin{equation}
\label{nb}
\frac{G_\Lambda(z)S(z)}{\sin \pi z} = \frac{1}{\pi}
\sum_{n\in \mathbb{Z}}\frac{(-1)^n a_n}{z-n}.
\end{equation}
Thus $G_\Lambda S = \sum_{n\in \mathbb{Z}} a_n k_n$, where $k_n$ are the elements of the
orthogonal basis of reproducing kernels of $\mathcal{P}W_\pi$ (the Shannon--Kotelnikov formula),
and so $G_\Lambda S \in \mathcal{P}W_\pi$. This proves $(i)$ and $(ii)$, since
$\{a_n\}$ has a fast decay.
Note that the summation goes only along $n \in \mathcal{Z}_U\cup\mathcal{Z}_V$, and we can rewrite
\eqref{nb} as the following interpolation formula:
$$
\frac{S(z)}{U(z)V(z)} = \frac{1}{\pi} \sum_{n \in \mathcal{Z}_U\cup\mathcal{Z}_V} \frac{T(n)}{z-n}.
$$
Next we put $S_1 = G_\Lambda T$. We need to show that the following
interpolation formula also holds:
$$
\frac{S_1(z)}{\sin \pi z} = \frac{1}{\pi}\sum_{n\in \mathbb{Z}} \frac{a_n G_\Lambda(n)}{z-n}.
$$
This formula can be rewritten in the following way using the fact that
$a_n = (-1)^n T(n)$, and $G_\Lambda(n) = \pi (-1)^n \big((UV)'(n)\big)^{-1}$,
$n \in \mathcal{Z}_U\cup\mathcal{Z}_V$:
\begin{equation}
\label{nb1}
\frac{T(z)}{U(z)V(z)} = \sum_{n\in \mathcal{Z}_U\cup\mathcal{Z}_V} \frac{T(n)}{(UV)'(n)(z-n)}.
\end{equation}
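The identity $G_\Lambda(n) = \pi (-1)^n \big((UV)'(n)\big)^{-1}$ used above is immediate: every $n\in\mathcal{Z}_U\cup\mathcal{Z}_V$ is an integer and a simple zero of $UV$, so
$$
G_\Lambda(n)=\lim_{z\to n}\frac{\sin \pi z}{U(z)V(z)}=\frac{\pi\cos \pi n}{(UV)'(n)}=\frac{\pi(-1)^n}{(UV)'(n)}.
$$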
We have already mentioned that $T(n)$ decays faster than any power
when $n\to\infty$. It is also easy to see that
$|(UV)'(n)|\to \infty$ when $n\to\infty$ and $n\in \mathcal{Z}_U\cup\mathcal{Z}_V$.
Since $V$ is a lacunary product it
is clear that $|V(n)|$ is large for $n \in \mathcal{Z}_U$ and $|V'(n)|$ is large for
$n\in \mathcal{Z}_V$, while $|U(n)|$, $n\in \mathcal{Z}_V$, and $|U'(n)|$, $n\in \mathcal{Z}_U$,
admit power-type lower bounds in $n$. Thus, the series on the right-hand side
of \eqref{nb1} converges uniformly on compact sets separated from the poles.
Clearly the residues coincide, and so the difference between the left
and the right-hand side of \eqref{nb1} (let us denote it by $H$)
is an entire function. Obviously, $H$ is of zero exponential type. It remains
to notice that the function $T/(UV)$
in the left-hand side tends to zero along the imaginary axis
(for the function on the right this is obvious), thus $|H(iy)|\to 0$, $|y|\to \infty$,
whence $H\equiv 0$.
By exactly the same arguments we may show that for any $P\in \mathcal{P}$,
$$
\frac{P(z) S_1(z)}{\sin \pi z} = \frac{1}{\pi}\sum_{n\in \mathbb{Z}}
\frac{P(n) a_n G_\Lambda(n)}{z-n}.
$$
It remains to verify that the function $G_\Lambda S = \sum_{n\in \mathbb{Z}} a_n k_n$
is orthogonal to all functions of the form $z^k G_\Lambda $, $k\in \mathbb{Z}_+$.
Let $a\notin \mathbb{Z}$ and let $P(z) =(z-a)z^k$. Since $k_n$
are the reproducing kernels of $\mathcal{P}W_\pi$, we have
$$
(z^k G_\Lambda, G_\Lambda S) =
\frac{1}{\pi}\sum_{n\in \mathbb{Z}} n^k a_n G_\Lambda(n) =
\frac{1}{\pi} \sum_{n\in \mathbb{Z}} \frac{P(n) a_n G_\Lambda(n)}{n-a} =
\frac{P(z)S_1(z)}{\sin \pi z}\bigg|_{z=a} = 0.
$$
This proves $(iii)$ and completes the proof of the lemma.
\qed
\section{Subspaces with non-compact residual interval \label{nores}}
In this section we prove Theorem \ref{main3}.
We begin with a simple observation related to the absence
of the residual part. Given a distribution $\varphi$ compactly
supported on $(a,b)$ we denote throughout this section by
$I(\varphi)$ the convex hull of its support
and by $|I(\varphi)|$ its length. Note that if $L$ has a residual interval $I$ strictly contained in $(a,b)$ then all distributions in the annihilator of $L$ are supported on this interval, and clearly,
$$
\sup \{|I(\varphi)|:~\varphi\in L^\perp\}=|I|.
$$
The next lemma shows that this observation remains true when $I=(a,b)$ as well.
\begin{lemma}
If $\sigma(D|_L)$ is an infinite discrete subset of $\mathbb{C}$
and $L_{res}=\{0\}$, then
$$
\sup \{|I(\varphi)|:~\varphi\in L^\perp\}=b-a,
$$
where $b-a=\infty$ if $(a,b)$ has infinite length.
\label{lemma1}
\end{lemma}
\begin{proof}
Let $s$ be the supremum in the statement. Under the assumption on $\sigma(D|_L)$
it follows easily that $s>0$.
Also note that if $|I(\varphi)|,|I(\psi)|>s/2$,
then these intervals must have a nonempty intersection,
since otherwise $|I(\varphi+\psi)|>s$. Thus,
$$
I=\bigcup\{I(\varphi):~\varphi\in L^\perp,~|I(\varphi)|>s/2\}
$$
is an interval with $|I|\ge s$. It is easy to see that
we must have $|I|= s$. Indeed, if $c,d\in I$ with $d-c>s$ such that
$c\in I(\varphi),~d\in I(\psi)$, then clearly, $|I(\varphi+\psi)|>s$,
which is a contradiction.
We claim that
$I$ contains the interior of any interval $I(\varphi)$, with $\varphi\in L^\perp$.
Assume the contrary, i.e., that
there exists $\varphi\in L^\perp$ such that the
interior of $I(\varphi)$ is not contained in $I$.
If $J$ is a nontrivial interval in $I(\varphi)\setminus I$,
we choose $\psi\in L^\perp$ with
$$
|I(\psi)|> \max\{s-|J|, s/2\}
$$
and note again that in this case $|I(\varphi+\psi)|>s$, which is a contradiction.
The claim implies that all distributions $\varphi\in L^\perp$
are supported on the closure of $I$. If $s=|I|<b-a$,
then $L$ contains $L_I$, which contradicts our assumption.
It follows that $s=b-a$, and the lemma is proved.
\end{proof}
\subsection{Radius of completeness} Let $\Lambda$ be a discrete subset
of $\mathbb{C}$. Put
$$
R(\Lambda)=\sup\{a: \mathcal{E}(\Lambda) \text{ is complete in } L^2[0,a]\}.
$$
The number $R(\Lambda)$ is called the {\itshape radius of
completeness of} $\Lambda$. It is well known that $R(\Lambda)$ is
equal to the Beurling--Malliavin (effective) density of $\Lambda$
\cite{BM2}, but we will not use this remarkable fact. The
following observation is important for our purposes.
\begin{remark}
The conclusions of Theorem \ref{main1}, Proposition \ref{mainprop}, and Corollary \ref{approxres} continue to hold if we replace in the hypothesis the upper density by the radius of completeness.
\label{rem1}
\end{remark}
\begin{proof}
The condition $D_+(\Lambda)<\frac{|I|}{2\pi}$ is used only in the
proof of Proposition \ref{mainprop}, in order to show that
equation \eqref{n15} implies equation \eqref{n16}. We claim that
this implication remains true under the assumption that
$R(\Lambda)<|I|$. To see this, suppose again that $I=[-\pi,\pi]$.
If $R(\Lambda)<2\pi$, then for any $\varepsilon>0$ there exists a
nontrivial entire function $T$ such that
$G_2T\in\mathcal{PW}_{\pi-\varepsilon}$, where $G_2$ is a
canonical product corresponding to the sequence $\Lambda$. Without
loss of generality we can assume that the zeros of $T$ are in
$\mathbb{C}^+$. Then
$$
\arg G_2^\#+\arg T=\pi(1-\varepsilon)x+\tilde{u}+c,\quad u\in L^1((1+x^2)^{-1}),\quad c\in\mathbb{R},
$$
and this implies \eqref{n16}.
\end{proof}
\subsection{Proof of Theorem \ref{main3}.} Let $I$ be the residual interval of $L$.
If $R(\Lambda)\geq |I|$, then $\mathcal{E}(\Lambda)$ is complete in $L^2(J)$ for any compact subinterval $J$ of $I$, and consequently,
$L$ cannot be annihilated by any
distribution with compact support contained in $I$, i.e., $L=C^\infty(a,b)$.
If $R(\Lambda)<|I|$, we can use the considerations at the beginning of this section together with Lemma \ref{lemma1} to conclude
that given $\varepsilon >0$, there exists $\varphi\in L^\perp$
with $|I(\varphi)|>|I|-\varepsilon$. Then by the modified version
of Corollary \ref{approxres} stated in the remark above,
$L^\perp$ contains all distributions with support in
$I(\varphi)$, whose Fourier transform vanishes at the points of
$\sigma(D|_L)$ with the appropriate multiplicities. Since
$\varepsilon$ is arbitrary, the result follows. $\square$
| {
"timestamp": "2013-12-31T02:06:54",
"yymm": "1309",
"arxiv_id": "1309.6968",
"language": "en",
"url": "https://arxiv.org/abs/1309.6968",
"abstract": "Let $L$ be a proper differentiation invariant subspace of $C^\\infty(a,b)$ such that the restriction operator $\\frac{d}{dx}\\bigl{|}_L$ has a discrete spectrum $\\Lambda$ (counting with multiplicities). We prove that $L$ is spanned by functions vanishing outside some closed interval $I\\subset(a,b)$ and monomial exponentials $x^ke^{\\lambda x}$ corresponding to $\\Lambda$ if its density does not exceed the critical value $\\frac{|I|}{2\\pi}$, and moreover, we show that the result is not necessarily true when the density of $\\Lambda$ equals the critical value. This answers a question posed by the first author and B. Korenblum. Finally, if the residual part of $L$ is trivial, then $L$ is spanned by the monomial exponentials it contains.",
"subjects": "Complex Variables (math.CV)",
"title": "Subspaces of $C^\\infty$ invariant under the differentiation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822876997410349,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7089594680385994
} |
https://arxiv.org/abs/1502.00457 | Bounds for Jacobian of harmonic injective mappings in n-dimensional space | Using normal family arguments, we show that the degree of the first nonzero homogenous polynomial in the expansion of $n$ dimensional Euclidean harmonic $K$-quasiconformal mapping around an internal point is odd, and that such a map from the unit ball onto a bounded convex domain, with $K< 3^{n-1}$, is co-Lipschitz. Also some generalizations of this result are given, as well as a generalization of Heinz's lemma for harmonic quasiconformal maps in $\mathbb R^n$ and related results. | \section{Introduction}
In his seminal paper, Olli Martio \cite{OM1} observed that every
quasiconformal harmonic mapping of the unit planar disk onto
itself is co-Lipschitz. Later, the subject of quasiconformal
harmonic mappings was intensively studied by the participants of
the Belgrade Analysis Seminar, see for example \cite{kalaj.thesis, mama,
mm.fil12a,topic, rckm0,MP}. Harmonic quasiconformal maps have found applications in Teichm\"uller theory, among other things.
Recently, V. Markovi\'c \cite{mar} proved that a quasiconformal map of
the sphere $\mathbb{S}^2$ admits a harmonic quasi-isometric
extension to the hyperbolic space $\mathbb{H}^3$, thus confirming
the well known Schoen Conjecture in dimension $3$.
Related questions of bi-Lipschitzity and bounds of Jacobian have been studied in a sequence of papers by Kalaj and Mateljevi\'c; see also a recent paper of
Iwaniec-Onninen \cite{IwOn}. The corresponding results for
harmonic maps between surfaces were obtained previously by Jost
and Jost-Karcher \cite{jost2,jost}. In the planar case, the
complex harmonic function $h$ on a simply connected planar domain
can be written in the form $h=f+\overline{g}$, where $f$ and
$g$ are holomorphic, so that $|f'| $ satisfies the minimum
principle and Lewy's theorem. There is no appropriate analogy in
higher dimensions; if $h$ is a harmonic mapping from a domain in
$\mathbb{R}^{n}$ to $\mathbb{R}^n$ then $\|\partial_j h\| $ is
subharmonic, but it does not satisfy the minimum principle in
general. In fact, Lewy's theorem is false in dimensions higher
than two (see \cite{wood}, also \cite{dur} pp. 25-27 for Wood's
counterexample). In the very special case of gradients of harmonic
functions in $\mathbb{R}^3$, for which Lewy's theorem is true,
maps that in addition send the unit ball onto a convex domain are
known to be co-Lipschitz (see Astala-Manojlovi\'c \cite{ast.ma}, Mateljevi\'c \cite{rckm0}). However, it seems
that in general, one needs a different approach in higher
dimensions.
For example, in \cite{KaMpacific} the following general theorem was proved:
\begin{thm}
A $K$-quasiconformal harmonic mapping $f$ of the unit $n$-dimensional ball
($n>2$) onto itself is Euclidean
bi-Lipschitz, provided that $f(0) = 0$ and that $K<2^{n-1}$,
where $n$ is the dimension of the space.
\end{thm}
It is an extension of a similar result for hyperbolic harmonic
mappings with respect to the hyperbolic metric (see Tam and Wan,
\cite{tw}, 1998). The proof makes use of M\"obius transformations
in the space, and of a recent result of Kalaj \cite{Ka.janal13},
which states that harmonic quasiconformal self-mappings of the
unit ball are Lipschitz continuous.
Among other things, in this paper we prove that the above result holds if $K<3^{n-1}$ and when the codomain is only assumed to be convex. A suitable application of a normal family
argument allows us to take a conceptually simpler approach than
in \cite{KaMpacific}.
The proof is based on Theorem \ref{t.locJac}, showing that the degree of the first nonzero homogeneous polynomial in the expansion of an $n$-dimensional Euclidean harmonic quasiconformal mapping around an internal point is odd. We combine this with a distortion property of quasiconformal maps to
prove that for $n$-dimensional Euclidean harmonic quasiconformal mappings with
$K_O(f) < 3^{n-1}$, the Jacobian is never zero.
Our approach gives motivation to define non-zero Jacobian closed families (see Definition \ref{dfn3.1}), for which a generalization of Heinz's lemma is shown; we also prove bounds for the Jacobian from above for arbitrary harmonic quasiconformal maps. The generalization of Heinz's lemma allows us to prove Theorem \ref{thm.space2}, namely that harmonic quasiconformal maps from the unit ball onto a bounded convex domain, belonging to non-zero Jacobian closed families, are co-Lipschitz. Essentially, we show that if a map is not co-Lipschitz, then one can get a map of the same type whose Jacobian vanishes at some internal point. Several applications are also given.
The content of the paper is as follows. In section \ref{sec2} we
collect some known definitions and results which we use in the
paper. Proofs that the Jacobians of quasiconformal harmonic maps cannot vanish when $K_O<3^{n-1}$ or in the case of gradients of harmonic functions, and that the Taylor expansions of quasiconformal harmonic maps have odd lowest degree, are
the subject of section \ref{sec4}.
In section \ref{sec3} we prove the generalization of Heinz's lemma and the co-Lipschitz properties described above, and related results.
\section{Background and Auxiliary results}\label{sec2}
Throughout this paper, we will consider maps from domains, i.e. open and connected regions, of $\mathbb{R}^n$, usually denoted by $\Omega,\Omega'$, to $\mathbb{R}^n$. We will use notation $\mathbb{B}^n$ for the unit ball in $\mathbb{R}^n$, and for $x\in \mathbb{R}^n$ its norm will be denoted by $\|x\|$. For $x\in \mathbb{R}^n$, $a>0$ and $A \subseteq \mathbb{R}^n$, by $d(x,A)$ we will denote the Euclidean distance of point $x$ from the set $A$, by $aA$ the set $\{a y\,| \,y\in A\}$ and by $x+A$ the set $\{x+y\,|\, y\in A\}$.
By $J_f(x)$ we will denote the Jacobian of $f$ at point $x$, $\partial_j f$ will stand for $\frac{\partial f}{\partial x_j}$, and $\partial_{ij}^2 f$ for $\frac{\partial^2 f}{\partial x_i \partial x_j}$, where $x=(x_1, x_2, \ldots, x_n)$ is the vector argument of $f$.
We will consider Euclidean harmonic maps, also called harmonic maps in this paper, i.e. those with zero Laplacian of each coordinate function. Also, we will deal with quasiconformal maps.
For a domain $\Omega$ in $\mathbb{R}^{n}$, a map $f:\Omega \mapsto \mathbb{R}^{n}$ is quasiconformal if it is a homeomorphism of $\Omega$ to $f(\Omega)$, and if $f$ belongs to Sobolev space $W_{1,\,loc}^{n}(\Omega )$ and there exists $K$, $1\leq K<\infty $, such that
\begin{equation}
\|Df(x)\|^{n}\leq K\,J_{f}(x)\,\,\,\textrm{a.e. on }\Omega, \label{outer.qr0}
\end{equation}
where $\|Df(x)\|$ denotes the operator norm of the Jacobian matrix of $f$ at $x$. The smallest $K$ in (\ref{outer.qr0}) is called the outer dilatation $K_{O}(f)$. The inner dilatation is $K_{I}(f)=K_{O}(f^{-1})$, and the map $f$ is $K$-quasiconformal if $\max (K_O,K_I) \leq K$.
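As a toy numerical illustration of these notions (a sketch of ours, not taken from the cited literature), consider a linear map $f(x)={\bf A}x$ with $\det {\bf A}>0$: then $Df\equiv {\bf A}$, $J_f\equiv\det {\bf A}$, and the smallest admissible $K$ in (\ref{outer.qr0}) is $\|{\bf A}\|^{n}/\det {\bf A}$.
\begin{verbatim}
import numpy as np

# Outer dilatation of the linear map f(x) = A x, for which
# Df = A everywhere and J_f = det A.
A = np.diag([2.0, 1.0, 1.0])     # stretch by a factor 2 along the first axis
n = A.shape[0]
op_norm = np.linalg.norm(A, 2)   # operator norm ||Df|| = largest singular value
K_O = op_norm**n / np.linalg.det(A)
print(K_O)                       # 4.0; here K_I = 2, so f is 4-quasiconformal
\end{verbatim}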
We will need the following proposition concerning a distortion
property of quasiconformal mappings (see \cite{vu}):
\begin{prop}\label{p-dist} If $g:\mathbb{B}^n \mapsto \mathbb{B}^n$ is quasiconformal,
$g(0)=0$ and $1/\alpha =K_I(g^{-1})^{1/(n-1)}$, then for some $m>0$, $\|g(x)\|\geq m \|x\|^{1/\alpha}.$
\end{prop}
The next theorem concerns harmonic maps onto a convex domain. For the planar version of Theorem \ref{thm.space1} cf.
\cite{revroum01,napoc1}, also \cite{topic}, pp.~152-153. The space version was communicated at the
International Conference on Complex Analysis and Related Topics
(Xth Romanian-Finnish Seminar, August 14-19, 2005, Cluj-Napoca,
Romania), by Mateljevi\'c, cf. also \cite{rckm0}. For convenience of the reader, we repeat the proof.
\begin{thm}\label{thm.space1}
Suppose that $h$ is an Euclidean harmonic mapping
from the unit ball $\mathbb{B}^n$ onto a
bounded convex domain $D=h(\mathbb{B}^n)$, which
contains the ball $h(0)+R_0 \mathbb{B}^n$. Then for any $x\in \mathbb{B}^n$
$$d(h(x),\partial D) \geq (1-\|x\|) R_0/2^{n-1}.$$
\end{thm}
\begin{proof}
To every $a\in \partial D$ we associate
a nonnegative harmonic function $u=u_a$. Since $D$ is convex, for
$a\in
\partial D$, there is a supporting hyper-plane $\Lambda_a$, defined as the set of all $y$ for which $(y-a,n_a)=0$, where
$n_a$ is a unit vector such that
$(y-a,n_a)\geq 0$ for every $y\in \overline{D}$.
Define $u(x)=(h(x)-a,n_a)$. Since $n_a$ is a unit vector, $u(x)\leq \|h(x) -a\|$. Then
$u(0)=(h(0)-a,n_a)= d(h(0),\Lambda_a)$. From the geometric
interpretation it is clear that $d(h(0),\Lambda_a) \geq R_0$.
By Harnack's inequality (cf. \cite{gtrudi}, p. 29), ${c}_n (1-\|x\|) u(0) \leq u(x)$,
where ${c}_n=2^{1-n}$. In particular,
${c}_n (1-\|x\|) R_0 \leq u(x)\leq \|h(x) -a\|$ for every $a \in \partial D$. Hence, for a
fixed $x\in \mathbb{B}^n$, $d(h(x),\partial D)=\inf_{a \in \partial D}\|h(x) -a\| \geq
{c}_n (1-\|x\|) R_0$ and therefore we obtain the required inequality.
\end{proof}
To apply normal family arguments, we need the following
results; see Vaisala \cite{vaisala}.
\begin{thm}
Suppose that $\Omega$ is a domain in $\overline{\mathbb{R}^n}$, that $K\geq 1$ and that $r>0$. If $\mathcal{F}$ is a family of $K$-quasiconformal mappings of $\Omega$ (not necessarily onto a fixed domain), such that each $f\in \mathcal{F}$ omits two points $a_f, b_f$ with spherical distance in $\overline{\mathbb{R}^n}$ at least $r$, then $\mathcal{F}$ is a normal family.
\end{thm}
\begin{thm}\label{nrm}
Let $(f_{j})$, $f_j: \Omega \mapsto \overline{\mathbb{R}^n}$,
be a sequence of $K$-quasiconformal maps, which converges pointwise
to a mapping $f: \Omega \mapsto \overline{\mathbb{R}^n}$.
Then there are three
possibilities:\\
A. $f$ is a homeomorphism and the convergence is uniform on compact sets.\\
B. $f$ assumes exactly two values, one of which at exactly one
point; convergence is not uniform on compact sets in that case.\\
C. $f$ is constant.
\end{thm}
Note that case B does not occur in our applications, since normal families yield subsequences that converge uniformly on compact sets.
\section{Interior zeros of the Jacobian}\label{sec4}
In this section we prove that the lowest-degree homogeneous polynomials in the Taylor expansion of a quasiconformal harmonic map cannot have even degree.
Because harmonic functions are real analytic, in a neighborhood of any point there is a power series expansion in the coordinates. The following proposition follows directly
from the quasiconformality condition:
\begin{prop}\label{p.same}
Suppose that $h$ is a real analytic quasiconformal mapping
from a domain $\Omega \subset\mathbb{R}^n$ to
$\mathbb{R}^n$, such that the Jacobian is zero at some point $x_0$. Then the
degrees of the first non-zero homogeneous polynomials in the Taylor expansions of the coordinate functions of $h$
around $x_0$ are all the same.
\end{prop}
Now the following theorem holds:
\begin{thm}\label{t.locJac}
Suppose that $h$ is an Euclidean harmonic quasiconformal mapping
from a domain $\Omega \subset\mathbb{R}^n$ to
$\mathbb{R}^n$, such that the Jacobian is zero at $x_0\in \Omega$. Then the
degree of the first non-zero homogeneous polynomials in the Taylor expansion of $h$
around $x_0$ is odd, and the corresponding homogeneous polynomial map, obtained by taking the lowest degree homogeneous polynomials in the Taylor expansion of the coordinates, is harmonic and quasiconformal.
\end{thm}
\begin{proof}
Without loss of generality, by restricting to a ball neighbourhood
and a suitable change of variable, we may assume that $x_0=0$ and
that $\Omega=\mathbb{B}^n$. Suppose, on the contrary, that the lowest
degree of the first non-zero homogeneous polynomials in the Taylor
expansion of the coordinates of $h$ is even, say equal to $2m$.
Then consider the sequence of harmonic quasiconformal maps, $h_j:
\mathbb{B}^n \mapsto \mathbb{R}^n$, $h_j(x)=j^{2m} h(x/j)$. Note
that the first non-zero homogeneous polynomials in the expansion of
all the maps $h_j$ are the same as for $h$: if $h(x)=P_{2m}(x)+O(\|x\|^{2m+1})$
near the origin, where $P_{2m}$ collects the lowest-degree homogeneous polynomials,
then $h_j(x)=P_{2m}(x)+O(\|x\|^{2m+1}/j)$ uniformly on $\mathbb{B}^n$ for $j\geq 2$, by the Taylor formula.
In particular, the maps $h_j$, for $j>1$, are uniformly bounded on the unit ball.
Therefore, $\{h_j \,|\, j\in \mathbb{N}\}$ is a normal family, and
a subsequence of our sequence converges to a harmonic mapping $f$
uniformly on compact sets. By elliptic regularity theory (cf. H\"older
and Schauder a priori estimates, \cite{gtrudi}, pp. 60, 90),
all the derivatives of the subsequence will converge to the
corresponding derivatives of $f$. It follows that the coordinates of
$f$ are homogeneous polynomials of degree $2m$, since the
higher degree homogeneous polynomials in the expansions of $h_j$
tend to zero. In particular, the limit function $f$ is not
constant, and by Theorem \ref{nrm}, $f$ is quasiconformal, and hence injective. But the
limit map $f$ satisfies $f(-x)=f(x)$, which contradicts injectivity. The same procedure in the case of odd lowest degree gives the claimed
homogeneous polynomial harmonic quasiconformal map.
\end{proof}
Next, we will combine this theorem with a distortion property of quasiconformal maps.
\begin{prop}\label{p-tom2}
Suppose that $h: \Omega\mapsto \mathbb{R}^n$ is a harmonic quasiconformal map. If $\partial_j h(x_0)=0$ and $\partial^2_{ij}h(x_0)=0$ for all $i,j$ at some $x_0\in \Omega$,
then $K_O(h) \geq 3^{n-1}$.
\end{prop}
\begin{proof} Without loss of generality, by restricting to a ball neighbourhood of $x_0$ whose closure is in the domain, and a change of variable which does not change the quasiconformal distortion, we can suppose that $x_0=0$, $h(x_0)=0$, and that, by the Taylor formula, there is $M>0$ such that $\|h(x)\|
\leq M \|x\|^3$ on $\mathbb{B}^n$. Let $g=(h|_{\mathbb{B}^n})/M$.
If $1/\alpha =K_I(g^{-1})^{1/(n-1)}$, then by Proposition \ref{p-dist}, $m \|x\|^{1/\alpha}\leq \|g(x)\|\leq
\|x\|^3$; letting $\|x\|\to 0$ forces $1/\alpha\geq 3$. Hence $K_O^{1/(n-1)} \geq 3$, and therefore $K_O
\geq 3^{n-1}$, where $K_O(g)=K_I(g^{-1})$. Since $g$ arises from $h$ by restriction and scaling, $K_O(h)\geq K_O(g)$, which proves the claim.
\end{proof}
\begin{thm}\label{p-tom3}
Suppose that $h: \Omega\mapsto \mathbb{R}^n$ is a harmonic quasiconformal map. If $K_O(h) <
3^{n-1}$, then its Jacobian has no zeros.
\end{thm}
\begin{proof}
Suppose, on the contrary, that $J_h(x_0)=0$ for some $x_0\in \Omega$. Since $h$ is smooth and quasiconformal, \eqref{outer.qr0} forces $Dh(x_0)=0$, so the lowest degree in the Taylor expansion of $h$ around $x_0$ is at least two;
by Theorem \ref{t.locJac} it is odd, hence at least three, and therefore $\partial_j h(x_0)=0$ and $\partial^2_{ij}h(x_0)=0$ for all $i,j$.
Now, by Proposition \ref{p-tom2}, $K_O(h) \geq 3^{n-1}$ and this
yields a contradiction.
\end{proof}
\section{Bounds
for the Jacobian}\label{sec3}
Heinz's lemma-type results can be obtained for quasiconformal harmonic maps, using normal families. To state our results clearly and in their generality, it is useful to give some definitions.
\begin{dfn}\label{dfn3.2}
We say that a family $\mathcal F$ of maps from domains in $\mathbb R^n$ to $\mathbb R^n$ is RHTC-closed if the following holds:
\begin{itemize}
\item (Restrictions) If $f: \Omega\mapsto \mathbb{R}^n$ is in $\mathcal{F}$, $\Omega'\subset \Omega$ is open, connected and nonempty, then $f|_{\Omega'}\in \mathcal{F}$.
\item (Homothety) If $f: \Omega\mapsto \mathbb{R}^n$ is in $\mathcal{F}$, $a\in \mathbb R$, $a>0$ then $g: \Omega\mapsto \mathbb{R}^n$ and $h: a \Omega\mapsto \mathbb{R}^n$ are in $\mathcal{F}$, where $g(x)=a f(x)$ and $h(x)=f(x/a)$.
\item (Translations) If $f: \Omega\mapsto \mathbb{R}^n$ is in $\mathcal{F}$, $t\in \mathbb R^n$, then $g: \Omega\mapsto \mathbb{R}^n$ and $h: t+ \Omega\mapsto \mathbb{R}^n$ are in $\mathcal{F}$, where $g(x)=t+ f(x)$ and $h(x)=f(x-t)$.
\item (Completeness) If $f_j: \Omega\mapsto \mathbb{R}^n$, $j\in \mathbb{N}$ are in $\mathcal{F}$, $(f_j)$ converges uniformly on compact sets to $g: \Omega\mapsto \mathbb{R}^n$, where $g$ is non-constant, then $g\in \mathcal{F}$.
\end{itemize}
\end{dfn}
For instance, families of harmonic maps and of gradients of harmonic functions are RHTC-closed. Also, due to Theorem $\ref{nrm}$, for any given $K\geq 1$, the subfamily of $K$-quasiconformal members of an RHTC-closed family is also RHTC-closed.
\begin{dfn}\label{dfn3.1}
We say that a family $\mathcal F$ of harmonic maps from domains in $\mathbb R^n$ to $\mathbb R^n$ is non-zero Jacobian closed if it is RHTC-closed and the Jacobians of all maps in the family have no zeros.
\end{dfn}
Note that uniform convergence on compact sets in the case of harmonic maps implies convergence of higher order derivatives, via H\"older and Schauder a priori estimates (see \cite{gtrudi}, pp. 60, 90). This is related to elliptic regularity and holds for more general elliptic operators, not just the Laplacian, so this method applies in that more general setting too.
\begin{thm}\label{ghall}
For every non-zero Jacobian closed family of $K$-quasiconformal harmonic maps, there is a constant $c>0$, such that if $f:\mathbb{B}^n \mapsto \mathbb{R}^n$ is from the family, $d(0,\partial f(\mathbb{B}^n)) \geq 1$ and $f(0)=0$, then
$$J_f(0)\geq c.$$
\end{thm}
\begin{proof}
Suppose the contrary, i.e. that the family contains a sequence
$(f_j)$ of maps from the unit ball satisfying
$\mathbb{B}^n\subseteq f_j(\mathbb{B}^n)$ and $f_j(0)=0$, such
that $J_{f_j}(0) \rightarrow 0$ as $j\rightarrow \infty$.
Multiplying functions by constants less than $1$ if necessary, we
may assume, without loss of generality, that the boundary of the
image $f_j(\mathbb{B}^n)$ always contains a point on the unit
spere, and thus use a normal family argument (since infinity and
point on the unit sphere are on a fixed spherical distance) to
pass to a convergent subsequence. Now note that because of the
Gehring distortion property
(see \cite{vaisala} p. 63, \cite{aaa}), $f_j(\frac{1}{2}\mathbb{B}^n)$ will contain a ball around zero of fixed radius, so the limit cannot degenerate to a constant function. But then the limit, say $g$, is in the family. However, by a priori estimates of elliptic regularity theory, the derivatives of $f_j$ will also converge to the derivatives of $g$, and hence $J_g(0)=0$, contradicting the non-zero Jacobian assumption.
\end{proof}
Note that the same normal family argument gives upper bound for Jacobian, but for general quasiconformal harmonic maps. Namely, the following theorem holds:
\begin{thm}\label{upperl}
There is a constant $c>0$, depending only on $K$, such that if $f:\mathbb{B}^n \mapsto \mathbb{R}^n$ is $K$-quasiconformal harmonic, $d(0,\partial f(\mathbb{B}^n))\leq 1$ and $f(0)=0$, then
$$J_f(0)\leq c.$$
\end{thm}
\begin{proof}
The proof is essentially the same as for Theorem \ref{ghall}: we take a sequence of $K$-quasiconformal harmonic maps $(f_j)$ from the unit ball, such that $J_{f_j}(0) \rightarrow \infty$ as $j\rightarrow \infty$, $f_j(0)=0$ and $d(0,\partial f_j(\mathbb{B}^n))=1$, multiplying by constants now greater than one if necessary. This provides a subsequence with a limit mapping whose Jacobian at zero is finite, a contradiction.
\end{proof}
Applying Theorem \ref{thm.space1}, we get the following result regarding co-Lipschitz condition for maps from ball to a convex domain:
\begin{thm}\label{thm.space2}
Suppose that $h$ is a harmonic quasiconformal mapping
from the unit ball $\mathbb{B}^n$ onto a
bounded convex domain $D=h(\mathbb{B}^n)$, and that $h$ belongs to a non-zero Jacobian closed family of harmonic maps. Then $h$ is co-Lipschitz on $\mathbb{B}^n$.
\end{thm}
\begin{proof}
Let $x_0$ be a point in $\mathbb{B}^n$. Define $f:\mathbb{B}^n\mapsto \mathbb{R}^n$ by
$$f(x)=\frac{h(x_0+(1-\|x_0\|)x)-h(x_0)}{d(h(x_0),\partial D)}.$$
Applying Theorem \ref{ghall} to $f$, using the fact that the norm of the derivative of a $K$-quasiconformal map is bounded from below by a constant times the $n$-th root of its Jacobian, and using the uniform estimate $(1-\|x_0\|)/d(h(x_0),\partial D)\leq 2^{n-1}/d(h(0),\partial D)$ from Theorem \ref{thm.space1}, we get a uniform lower bound for the norm of the derivative of $h$, and hence conclude that the map is co-Lipschitz.
\end{proof}
A special case of interest is obtained by applying Theorem \ref{thm.space2} in combination with Theorem \ref{p-tom3}.
\begin{thm}
Suppose $h$ is a harmonic $K$-quasiconformal mapping
from the unit ball $\mathbb{B}^n$ onto a
bounded convex domain $D=h(\mathbb{B}^n)$, with $K< 3^{n-1}$.
Then $h$ is co-Lipschitz on $\mathbb{B}^n$.
\end{thm}
| {
"timestamp": "2015-02-13T02:10:55",
"yymm": "1502",
"arxiv_id": "1502.00457",
"language": "en",
"url": "https://arxiv.org/abs/1502.00457",
"abstract": "Using normal family arguments, we show that the degree of the first nonzero homogenous polynomial in the expansion of $n$ dimensional Euclidean harmonic $K$-quasiconformal mapping around an internal point is odd, and that such a map from the unit ball onto a bounded convex domain, with $K< 3^{n-1}$, is co-Lipschitz. Also some generalizations of this result are given, as well as a generalization of Heinz's lemma for harmonic quasiconformal maps in $\\mathbb R^n$ and related results.",
"subjects": "Analysis of PDEs (math.AP); Complex Variables (math.CV)",
"title": "Bounds for Jacobian of harmonic injective mappings in n-dimensional space",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822876997410349,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7089594680385993
} |
https://arxiv.org/abs/1703.02009 | Learning across scales - A multiscale method for Convolution Neural Networks | In this work we establish the relation between optimal control and training deep Convolution Neural Networks (CNNs). We show that the forward propagation in CNNs can be interpreted as a time-dependent nonlinear differential equation and learning as controlling the parameters of the differential equation such that the network approximates the data-label relation for given training data. Using this continuous interpretation we derive two new methods to scale CNNs with respect to two different dimensions. The first class of multiscale methods connects low-resolution and high-resolution data through prolongation and restriction of CNN parameters. We demonstrate that this enables classifying high-resolution images using CNNs trained with low-resolution images and vice versa and warm-starting the learning process. The second class of multiscale methods connects shallow and deep networks and leads to new training strategies that gradually increase the depths of the CNN while re-using parameters for initializations. |
\section{Introduction}
In this work we consider the problem of designing and training Convolutional Neural Networks (CNNs). The topic has been a major field of research over the last years, after CNNs have shown remarkable success, e.g., in classifying images of handwriting, natural images, and videos; see, e.g.,~\cite{LeCunBengio1995,KrizhevskySutskeverHinton2012,LeCunKavukcuogluFarabet2010}
and references within. This success has generated thousands of research papers and a few celebrated software packages.
However, the success of CNNs is not fully understood and in fact, tuning network architecture and parameters is very hard in practice.
Typically, many trial-and-error experiments are required to find a CNN that is effective for a specific class of data.
In addition to the computational costs associated with those experiments, in many cases, small changes to the network can yield large changes in the learning performance.
To overcome the difficulty of learning, systematic approaches using Bayesian optimization have recently been proposed to infer the best architecture~\cite{GeneralFrameworkBayesOpt2016}.
Currently used training methods depend, e.g., on the architecture of the network as well as the resolution of the image data.
Changing any of those parameters in the training or prediction phase can severely affect the performance of the CNN.
For example, CNNs are typically trained using images of a fixed resolution, and classifying images of a different resolution requires interpolation.
Such a process can be computationally expensive, particularly if the data represents videos or high-resolution 3D images, as is common in applications, e.g., in medical imaging and geosciences~\cite{JiangTrundleRen2010,KarpathyCVPR14}.
In this paper we derive a framework that allows scaling CNNs across image resolution and depths and thus enables multiscale learning.
As a backbone of our methods we present an interpretation of deep CNNs as an optimal control problem involving a nonlinear time-dependent differential equation. This understanding leads to a very common structure that is used in fields such as path planning, data assimilation, and nonlinear Kalman filtering; see~\cite{PDEOptBook} and references within.
We present new methods for scaling CNNs from low- to high-resolution image data and vice versa.
We propose an algebraic multigrid approach to adapt the coefficients of the convolution kernel for different scales and demonstrate the importance of this step.
Our method allows multiscale learning using image pyramids, where the network is trained at different resolutions.
Such a process is known to be very efficient in other fields, both from a computational point of view and for avoiding local minima~\cite{Modersitzki2004,HabModMG04,WarnerEtAt2013}.
The method also allows for the classification of low-resolution images by networks that have been designed for and trained using high-resolution images {\em without} interpolation of the coarse scale image to finer scales.
We also present a method for scaling the number of layers in CNNs. Our method is based on the interpretation of the forward propagation in CNNs as a discretization of a time-dependent nonlinear differential equation. In that framework the number of layers corresponds to the number of time steps. Our observation motivates the use of multi-level learning algorithms that accelerate the training of deep networks by solving a series of learning problems from shallow to deep architectures.
The rest of this paper is structured as follows. In the next section, we show the connection between time dependent differential equations and CNNs. This connection allows us to introduce the optimization problem as a dynamic control problem. In Sec.~\ref{sec3} we present multiscale methods connecting CNNs across image resolutions and depths. In Sec.~\ref{sec4} we demonstrate the potential of our methods using image classification benchmarks. Finally, in Sec.~\ref{sec5} we summarize the paper.
\section{An Optimal Control Perspective on CNNs}
\label{sec:optControl}
In this section we derive a fully continuous formulation of deep convolution neural networks in the framework of optimal control.
In Sec.~\ref{sub:fwd} we give a continuous interpretation of the spatial convolution and the forward propagation. In Sec.~\ref{sub:optContr} we discuss the remaining components of CNNs and present the continuous optimal control problem.
We focus on image classification and assume we are given training data consisting of discrete $d$-dimensional images ${\bf x}^{(1)}, {\bf x}^{(2)}, \ldots, {\bf x}^{(m)} \in \R^n$ and corresponding labels ${\bf c}^{(1)}, {\bf c}^{(2)}, \ldots, {\bf c}^{(m)} \in \R^\ell$.
In this paper, we consider $d = 2$ and thus $n$ corresponds to the number of pixels in the data.
As common in image processing, we interpret the image data as discretization of a continuous function $x : \Omega \to \R$ at the cell-centers of a rectangular grid with $n$ equally sized pixels. Here, $\Omega\subset\R^d$ denotes the image domain and for simplicity we assume square pixels of edge length $h>0$. We denote the number of layers in the deep CNN by $N$.
\subsection{Forward Propagation as a Nonlinear Differential Equation}
\label{sub:fwd}
In this section we establish the interpretation of the forward propagation in Residual Neural Networks (ResNets)~\cite{he2016identity} as a nonlinear differential equation. A simple way to write the forward propagation of a discrete image ${\bf x}\in\R^n$ through a ResNet is
\begin{eqnarray}
\label{cnn}
{\bf y}_{k+1} = {\bf y}_k + \delta t F({\bf y}_{k},\bftheta_k), \quad \quad {\bf y}_0 = {\bf L} {\bf x}, \quad \forall k=0,1,\ldots,N.
\end{eqnarray}
Here, ${\bf y}_0 \in \R^{n_f}$ are the input features, ${\bf y}_1,\ldots,{\bf y}_N$ are the hidden layers, and ${\bf y}_{N+1}$ are the output layers. The matrix ${\bf L}$ maps the input image into the feature space $\R^{n_f}$. This matrix can be ``learned'' or fixed.
The parameters
$\bftheta_k$ need to be determined by the ``learning'' process.
We generalize the original ResNet model by adding the parameter $\delta t>0$, which helps to derive the continuous interpretation below (the original formulation is obtained for $\delta t = 1$).
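A minimal sketch of this propagation in code (our illustration, not code from the original papers; the function {\tt F}, the parameter list, and the step size are placeholders) could read:
\begin{verbatim}
def forward_propagation(y0, thetas, F, dt):
    """Forward Euler sketch of y_{k+1} = y_k + dt * F(y_k, theta_k)."""
    y = y0
    for theta in thetas:         # one parameter set per layer k = 0, ..., N
        y = y + dt * F(y, theta)
    return y                     # output layer y_{N+1}
\end{verbatim}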
A common choice in CNNs is to take the function $F$
to be a convolution, with parameters $\bftheta$ that represent the
convolution weights and bias, leading to the explicit
expression
\begin{equation}
\label{conv}
F({\bf y},{\bf s},{\bf b})= \sigma_{\alpha}\left( {\bf K}({\bf s}) {\bf y} + {\bf b} \right).
\end{equation}
Here ${\bf K}({\bf s})$ is a convolution matrix, which is a circulant matrix that represents the convolution and depends on the stencil or convolution kernel ${\bf s}\in\R^{n_s}$; ${\bf b}$ is a bias vector, and $\sigma_{\alpha}$ is an activation function.
The size of the stencil is typically much smaller than the number of pixels in the image and thus ${\bf K}({\bf s})$ is a sparse matrix.
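A single layer of the form \eqref{conv} can be sketched as follows (again our illustration, under stated assumptions: a 2D single-channel image, a small stencil, periodic boundary conditions realizing the circulant structure, and a leaky ReLU as the activation $\sigma_\alpha$ for concreteness):
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def layer(y, s, b, alpha=0.1):
    """F(y, s, b) = sigma_alpha(K(s) y + b) for a 2D image y.
    mode='wrap' imposes periodic boundaries, i.e. K(s) is circulant."""
    z = convolve(y, s, mode='wrap') + b
    return np.where(z > 0, z, alpha * z)   # leaky ReLU activation
\end{verbatim}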
Next, we interpret the depth of the network in a continuous framework.
We start by rewriting the forward propagation~\eqref{cnn} as
\begin{eqnarray}
\label{cnn1}
\frac{{\bf y}_{k+1} - {\bf y}_k}{\delta t}= \sigma_{\alpha}({\bf K}({\bf s}_k) {\bf y}_{k} + {\bf b}_k).
\end{eqnarray}
The left-hand side of the above equation is a finite difference approximation of the differential operator $\partial_t {\bf y}$ with step size $\delta t$. While the approximation used in the original ResNet (using $\delta t = 1$) is valid if the features change sufficiently slowly, the obtained dynamical system can be chaotic if the features change quickly.
Having a chaotic system as a forward problem implies that one can expect difficulties when considering the learning problem. Therefore, our first goal is to stabilize the forward propagation process.
To obtain a fully continuous formulation of the forward propagation we note that the convolution weights ${\bf s}$ can be seen as a discretization of a continuous function $s: \Omega \to \R$ (whose support is limited to a small region around the origin). This allows us to interpret ${\bf K}({\bf s}) {\bf y}$ as a discretization of $s * y$.
Upon taking the limit $\delta t \to 0$ in~\eqref{cnn1} we obtain the {\em continuous} forward propagation process
\begin{equation}
\label{cnncont}
\frac{d y}{d t}(t) = \sigma_{\alpha}\left( s(t) * y(t) + b(t) \right), \quad y(0) = L x,
\end{equation}
for all $t \in [0,T]$, where $T$ is the final time corresponding to the output layer.
General stability theory for ordinary differential equations (ODEs) applies to this process; in particular, it is easy to verify that the system of ODEs is stable as long as the real parts of the eigenvalues of the convolution are non-positive.
A second well-known problem is vanishing gradients~\cite{BengioEtAl1994}, which occur when the eigenvalues of the convolution have strongly negative real parts. Note that if the eigenvalues of ${\bf K}$ are imaginary, then the signal does not decay and no vanishing gradients are expected. This observation can aid in choosing and initializing the network parameters.
Given the continuous forward propagation in~\eqref{cnncont} we interpret~\eqref{cnn1} as a forward Euler discretization with a fixed time step size of $\delta t$. Thus, the forward propagation is stable as long as the real parts of the eigenvalues of the convolution are non-positive and the time step is sufficiently small. We note that there are numerous methods for time integration, some of which provide superior stability of the forward propagation.
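Since the eigenvalues of a periodic convolution matrix are given by the two-dimensional DFT of the centered stencil, the stability criterion can be checked numerically. A sketch under the same single-channel, periodic assumptions as above (the flip convention of the convolution does not affect the real parts):
\begin{verbatim}
import numpy as np

def conv_eigenvalues(s, n):
    # eigenvalues of the circulant matrix K(s) on an n-by-n periodic
    # grid: the 2D DFT of the stencil, zero-padded and centered at (0,0)
    pad = np.zeros((n, n))
    pad[:3, :3] = s
    pad = np.roll(pad, (-1, -1), axis=(0, 1))
    return np.fft.fft2(pad)

s = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])  # Laplacian
lam = conv_eigenvalues(s, 28)
print(lam.real.max())  # non-positive here, so the dynamics do not blow up
\end{verbatim}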
\subsection{Optimal Control Formulation of Supervised Learning}
\label{sub:optContr}
Having discussed forward propagation, we now briefly review classification and give continuous and discrete formulations of the learning problem.
The hypothesis or classification function, which predicts the label for each example using the values at the output layer, ${\bf y}_{N+1}$, can be written as
\begin{eqnarray}
\label{classifier}
{\bf c}_{\rm pred} = g(h^d {\bf W}^\top{\bf y}_{N+1} + \mu),
\end{eqnarray}
where the columns of ${\bf W} \in \R^{n_f \times \ell}$ are classification weights and $\mu \in \R^\ell$ are biases for the respective classes.
Commonly used choices for the classification function include softmax, least-squares, logistic regression, and support vector machines.
We have generalized the common notation by adding the factor $h^d$, which allows us to interpret ${\bf W}^\top {\bf y}_{N+1}$ as a midpoint rule applied to the standard $L_2$ inner product $(w_j,y)_{L_2} = \int_{\Omega} w_j(r) y(r) dr$ for a sufficiently regular function $w_j : \Omega \to \R$. The $j$th column of ${\bf W}$ is the discretization of $w_j$ at the cell-centers of the grid. This generalization allows us to adjust the weights across image resolutions.
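To illustrate the resolution-aware hypothesis function, a sketch with a softmax $g$ (the function names are ours; ${\bf y}_{N+1}$ is assumed flattened into a vector):
\begin{verbatim}
import numpy as np

def predict(yN, W, mu, h, d=2):
    # c_pred = softmax(h^d * W^T y_{N+1} + mu); the factor h^d makes
    # W^T y a midpoint-rule approximation of the L2 inner products
    z = (h ** d) * (W.T @ yN) + mu
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()
\end{verbatim}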
For training data consisting of continuous functions $x^{(1)}, \ldots, x^{(m)}$ and labels ${\bf c}^{(1)}, \ldots, {\bf c}^{(m)}$, learning consists of solving the optimal control problem
\begin{subequations}
\begin{eqnarray}
\min_{w, \mu,s, b} & \frac{1}{m} \sum_{j=1}^m S(g((w,y^{(j)}(T))_{L_2} + \mu) ,{\bf c}^{(j)}) + R(w,\mu,s, b)\\
{\rm subject\ to} &
\frac{d y^{(j)}}{d t}(t) = \sigma_{\alpha}\left( s(t) * y^{(j)}(t) + b(t) \right), \quad y^{(j)}(0) = L x^{(j)}, \quad \forall j=1,\ldots,m.
\end{eqnarray}
\end{subequations}
Here $S$ is a loss function measuring the mismatch between the predicted and known label and $R$ is a regularization function that penalizes undesired features in the parameters and avoids overfitting.
Typically, the problem is not solved to a high accuracy and low accuracy solutions are sufficient. A validation set is often used to determine the stopping criteria for the optimization algorithm.
For completeness we note that a discrete version of the optimal control problem is
\begin{subequations}
\label{opt}
\begin{eqnarray}
\min_{{\bf W}, \mu,{\bf s}_{1,2,\ldots,N}, {\bf b}_{1,2,\ldots,N}} & \frac{1}{m} \sum_{j=1}^m S(g(h^d {\bf W}^\top{\bf y}_{N+1}^{(j)} + \mu) ,{\bf c}^{(j)}) + R({\bf W},\mu,{\bf s}_{1,2,\ldots,N}, {\bf b}_{1,2,\ldots,N})\\
{\rm subject\ to} &
{\bf y}^{(j)}_{k+1} = {\bf y}^{(j)}_k + \delta t F({\bf y}^{(j)}_{k},\bftheta_k), \quad \quad {\bf y}^{(j)}_0 = {\bf L} {\bf x}^{(j)}, \quad \forall j=1,\ldots,m.
\end{eqnarray}
\end{subequations}
Note that for simplicity we have ignored pooling layers, although they can be added in general (the necessity of pooling has been debated in~\cite{DBLP:journals/corr/SpringenbergDBR14}).
\section{Multiscale Methods}\label{sec3}
In this section we present new methods for scaling deep CNNs along two dimensions. In Sec.~\ref{sub:restriction} and Sec.~\ref{sub:prolongation} we discuss restriction and prolongation of convolution operators as a way to scale CNNs along image resolution. In Sec.~\ref{sub:timeProlongation} we scale the depth of the network to simplify initialization and accelerate training.
\subsection{From High-Resolution to Low-Resolution: Restricting Convolution Operators} \label{sub:restriction}
Assume first that we are given some image data, ${\bf y}_h$, on a mesh with pixel size $h$ and a stencil, ${\bf s}_h$, that operates on this image. Assume also that we would like to apply the fine mesh convolution to an image, ${\bf y}_H$, given on a coarser mesh with pixel size $H > h$. In other words, the goal is to find a stencil ${\bf s}_H$ whose coarse mesh convolution is equivalent to prolonging the coarse image to the fine mesh, applying the fine mesh convolution with ${\bf s}_h$, and restricting the result. This problem is well studied in the multigrid literature~\cite{tos}.
Our method for restricting the stencil follows the algebraic multigrid approach; see, e.g.,~\cite{tos} for details and alternative approaches using re-discretization. We assume that the following connection holds between the fine mesh image, ${\bf y}_h$, and the coarse mesh image ${\bf y}_H$
\begin{eqnarray}\label{pro-rest}
{\bf y}_H = {\bf R} {\bf y}_h \quad {\rm and} \quad \tilde{{\bf y}}_h = {\bf P} {\bf y}_H.
\end{eqnarray}
Here, ${\bf P}$ is a prolongation matrix and ${\bf R}$ is a restriction matrix.
$\tilde{{\bf y}}_h$ is an interpolated coarse scale image on the fine mesh
and typically, ${\bf R} {\bf P} = \gamma{\bf I}$ for some $\gamma$ that depends on the dimensionality of the problem. The interpretation is that the coarse scale image is obtained using some linear transformation from the fine scale image (e.g. averaging). Conversely, an approximate fine scale image can be obtained from the coarse scale image by interpolation. This interpretation can easily be extended to 3D data to allow, e.g., classification of videos.
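For concreteness, one simple admissible pair (a sketch; any standard multigrid transfer operators could be substituted) takes ${\bf P}$ to be piecewise-constant interpolation and ${\bf R}$ to be averaging over $2\times 2$ blocks, for which ${\bf R}{\bf P} = {\bf I}$, i.e., $\gamma = 1$:
\begin{verbatim}
import numpy as np

def prolong(yH):
    # piecewise-constant interpolation: coarse pixel -> 2x2 fine pixels
    return np.kron(yH, np.ones((2, 2)))

def restrict(yh):
    # average each 2x2 block of fine pixels into one coarse pixel
    return 0.25 * (yh[0::2, 0::2] + yh[1::2, 0::2]
                   + yh[0::2, 1::2] + yh[1::2, 1::2])

yH = np.random.rand(14, 14)
assert np.allclose(restrict(prolong(yH)), yH)  # R P = I for this pair
\end{verbatim}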
Let ${\bf K}_h({\bf s}_h)$ be the sparse matrix that represents the convolution on the fine scale. This matrix operates on a vectorized image and is equivalent to convolving the vector ${\bf y}_h$ with the stencil, ${\bf s}_h$. The matrix is circulant and sparse with a few non-zero diagonals. Our goal is to build a coarse scale convolution, ${\bf K}_H$ that operates on a vector ${\bf y}_H$
and is consistent with the operation of ${\bf K}_h$ on a fine scale vector ${\bf y}_h$. Using the prolongation and restriction we obtain that
\begin{equation}
\label{KH1}
{\bf K}_H {\bf y}_H = {\bf R} {\bf K}_h {\bf P} {\bf y}_H.
\end{equation}
That is, given ${\bf y}_H$ we first prolong it to the mesh $h$, then operate
on it with the matrix ${\bf K}_h$, which yields a vector on mesh $h$. Finally, we restrict the result to the mesh $H$. This implies that the coarse scale convolution matrix can be expressed as
${\bf K}_H = {\bf R} {\bf K}_h {\bf P}$.
This definition of the operator can be built directly using any interpolation/restriction operators. Furthermore, assuming that the stencil, ${\bf s}_H$, is constant on the coarse mesh (that is, it does not change across the mesh, as is commonly assumed in CNNs), it is straightforward to evaluate it without generating the matrix ${\bf K}_H$, as commonly done in algebraic multigrid.
\begin{example}
To demonstrate how the convolution changes with resolution,
we use the simple example shown in Figure~\ref{fig1}.
We select an image from the MNIST data set (bottom left) and convolve it
with the fine mesh convolution parameterized by the stencil
$$ {\bf s}_h = \begin{pmatrix}
-0.89 & -2.03 & 4.30 \\
-2.07 & 0.00 & -2.07 \\
4.39 & -2.03 & 1.28
\end{pmatrix} $$
obtaining the image in the bottom right panel of Figure~\ref{fig1}. Now, by restricting the weights using the algebraic multigrid approach, we obtain that on a coarse mesh the weights are,
$$ {\bf s}_H = \begin{pmatrix}
-0.48 & -0.17 & 0.82 \\
-0.15 & -0.80 & 0.37 \\
0.84 & 0.40 & 0.07
\end{pmatrix}. $$
These weights are used on the coarse scale image (top left panel of Figure~\ref{fig1}) to construct the filtered image on the top right panel of Figure~\ref{fig1}.
\begin{figure}
\begin{center}
\includegraphics[width=.35\textwidth]{convDiagram}
\end{center}
\caption{Illustration of a fine mesh vs. coarse mesh convolution. \label{fig1}}
\end{figure}
Looking at the weights obtained on the coarse mesh, it is evident that they are significantly different from the fine scale weights. It is also evident from the fine and coarse images, that adjusting the weights on the coarse mesh is necessary if we are to keep a faithful transformation between the images.
\end{example}
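The coarse stencil in the example above can be computed without ever forming ${\bf K}_H$: apply ${\bf R}{\bf K}_h{\bf P}$ to a coarse delta image and read off the response around its center, which by translation invariance is the effective coarse stencil (up to the flip convention of the convolution). A sketch, reusing prolong and restrict from the sketch above; the exact numbers depend on the chosen transfer operators:
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def coarse_stencil(sh, n=16):
    # response of R K_h P to a coarse delta image, read off near the
    # center; prolong/restrict are the transfer operators sketched above
    delta = np.zeros((n, n))
    delta[n // 2, n // 2] = 1.0
    out = restrict(convolve(prolong(delta), sh, mode='wrap'))
    c = n // 2
    return out[c - 1:c + 2, c - 1:c + 2]
\end{verbatim}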
The interpretation of images and convolution weights as continuous functions allows us to work with different image resolutions. This has two important consequences. First, assume that we have trained our network on some fine scale images and that we are given a coarse scale image. Rather than interpolating the image to a fine mesh (which can be memory intensive and computationally expensive), we transform the {\bf stencils} to a coarse mesh and use the coarse mesh stencils to classify the coarse scale image. Such a process can be particularly efficient when considering the classification of videos on mobile devices, where expanding the video to high resolution can be computationally prohibitive. A second consequence is that we are able to train the network on a coarse mesh and then interpolate the result to a fine mesh. As we see next, this allows us to use an image pyramid, or multi-resolution, process for the solution of the optimization problem that is at the heart of the training process.
\subsection{From Low-Resolution to High-Resolution: Prolongating Convolution Operators}
\label{sub:prolongation}
Understanding how to move between different scales allows us to construct efficient algorithms that use inexpensive coarse mesh representations of the problem in order to initialize the problem on finer scales. This is similar to the classical image pyramid process~\cite{HabModMG04} and multilevel methods that are used in applications that range from full waveform inversion to shape from shading~\cite{ZTCM99}. The idea is to solve the optimization problem on a coarse mesh first in order to initialize fine grid parameters.
The algorithm is summarized in Algorithm \ref{alg1}.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\STATE{ Restrict the images $n_c$ times}
\STATE{ Initialize the stencils, biases, and classification weights}
\FOR{$i=n_c:-1:1$}
\STATE{Solve the optimization problem \eqref{opt} on mesh $i$ from its initial point}
\STATE{Prolong the stencils to level $i-1$}
\STATE{Update the classifier weights}
\ENDFOR
\end{algorithmic}
\caption{ \label{alg1} Multigrid Prolongation}
\end{algorithm}
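In code, a driver for Algorithm~\ref{alg1} might look as follows (a sketch only: train, prolong_params, and restrict_image are hypothetical placeholders for the optimization routine, the stencil prolongation of Sec.~\ref{sub:prolongation}, and the image restriction of Sec.~\ref{sub:restriction}):
\begin{verbatim}
def multilevel_train(images, labels, params0, n_c,
                     train, prolong_params, restrict_image):
    # Algorithm 1: restrict the images n_c times, then train from the
    # coarsest to the finest level, prolonging the learned stencils to
    # warm-start each finer level
    data = [images]
    for _ in range(n_c):
        data.append([restrict_image(x) for x in data[-1]])
    params = params0
    for level in range(n_c, 0, -1):
        params = train(data[level], labels, params)
        params = prolong_params(params)  # move stencils to level - 1
    return train(data[0], labels, params)
\end{verbatim}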
Solving each optimization problem on coarser meshes is cheaper than solving the problem on finer meshes. In fact, when an image is coarsened by a factor of 2, each convolution step is 4 times cheaper in 2D and 8 times cheaper in 3D. In some cases, such a process leads to linear complexity of the problem~\cite{tos}.
In order to apply such algorithms in our context, we need to address the transformation of the coarse scale operator to a fine scale one. This process is different from classical multigrid, where the operator on a fine mesh is given and a coarse scale representation is desired. As previously discussed, we use the classical multigrid result to transform a fine mesh operator to a coarse one
\begin{eqnarray}
\label{KhH}
{\bf K}_H = {\bf R} {\bf K}_h {\bf P}.
\end{eqnarray}
In the classical multigrid implementation, one has access to the {\em fine scale} operator ${\bf K}_h$, and the goal is to compute the coarse scale operator ${\bf K}_H$. In our application, throughout the mesh continuation method, we are given the {\em coarse mesh} operator and we are to compute the fine mesh operator. In principle, there is no unique fine scale operator given a coarse scale one; however, assuming that the fine scale operator is a convolution with a fixed stencil (as in the case of CNNs), there is a unique solution. This is a classical result in Fourier analysis of multigrid methods~\cite{tos}. Since \eqref{KhH} represents a linear connection between ${\bf K}_h$ and ${\bf K}_H$, we extract $n_K^2$ equations (where $n_K$ is the size of each convolution stencil)
that connect the fine scale convolutions to the coarse scale ones. For a convolution stencil of size $3^2$ and linear prolongation/restriction, this is a simple $9 \times 9$ linear system that can be easily solved to obtain the fine scale convolution. A classical multigrid result is that this linear system is well-posed. Assuming that the coarse mesh stencil is an $n_K \times n_K$ stencil and that the interpolation is linear, the fine mesh stencil is also an $n_K \times n_K$ stencil, which is uniquely determined from the coarse mesh stencil.
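Since the relation between the two stencils is linear, the $9 \times 9$ system can be assembled numerically by probing with the nine unit stencils and then solved for the fine mesh stencil. A sketch, reusing coarse_stencil from the sketch in Sec.~\ref{sub:restriction} (and hence the same transfer-operator assumptions):
\begin{verbatim}
import numpy as np

def fine_stencil(sH):
    # assemble the linear map s_h -> s_H column by column, then solve
    A = np.zeros((9, 9))
    for i in range(9):
        e = np.zeros(9)
        e[i] = 1.0
        A[:, i] = coarse_stencil(e.reshape(3, 3)).ravel()
    return np.linalg.solve(A, sH.ravel()).reshape(3, 3)
\end{verbatim}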
\subsection{From Shallow to Deep Networks}
\label{sub:timeProlongation}
In this section we consider scaling the number of layers in the network as another way to use the continuous framework.
In our case, we gradually increase the number of layers while keeping the final time $T$ constant, in order to accelerate learning by re-using parameters from shallow networks to initialize the learning problem for the deeper architecture. Note that the number of layers in the network corresponds to the number of discretization points in the discrete forward propagation. Similar ideas have been used in multigrid~\cite{BornemannDeuflhard1996} and image processing; see, e.g.,~\cite{ModSiamBook}.
To solve a learning task in practice, we first solve the learning problem using a network with only a few layers. Subsequently, we prolongate the estimated parameters of the forward propagation to initialize the optimization problem for the next network that features, e.g., twice as many layers. To this end, simple linear interpolation can be used. We repeat this process until the desired network depth is reached.
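A sketch of this prolongation in depth, assuming the per-layer parameters are stacked row-wise in an array and interpolated linearly in the artificial time variable:
\begin{verbatim}
import numpy as np

def prolong_in_depth(thetas):
    # interpolate per-layer parameters from N to 2N layers; T is kept
    # fixed, so the time step size dt is halved by the caller
    N, p = thetas.shape
    t_coarse = np.linspace(0.0, 1.0, N)
    t_fine = np.linspace(0.0, 1.0, 2 * N)
    cols = [np.interp(t_fine, t_coarse, thetas[:, j]) for j in range(p)]
    return np.stack(cols, axis=1)  # shape (2N, p)
\end{verbatim}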
Besides realizing some obvious computational savings on shallower networks, the main motivation behind our approach is to obtain good starting guesses for the next level. This is key since, while deeper architectures offer more flexibility to model complicated data-label relations, deeper networks are notoriously difficult to initialize. Another advantage when using second-order learning algorithms is the faster convergence rate obtained by warm starting.
\section{Experiments}\label{sec4}
In this section, we demonstrate the benefits of the proposed multiscale algorithms for CNN using two supervised image classification problems.
\subsection{Classification of Images Across Resolutions}\label{sub:exp1}
We demonstrate that using the continuous formulation we are able to classify low-resolution images using CNNs trained on high-resolution images and vice versa. Here, no additional learning is performed and, for example, classifying the low-resolution images requires neither interpolation nor high-resolution convolutions. This is important for efficient classification on mobile devices and similar hardware.
We consider the MNIST dataset and independently train two networks with two layers each using the coarse and fine data, respectively. The MNIST dataset consists of 60,000 labeled images, each with $28\times 28$ pixels. Since the images are rather coarse, we use only two levels. To obtain coarse scale images, the fine scale images are convolved with a Gaussian and restricted to a coarse mesh using the operator introduced above. This yields coarse mesh data consisting of $14\times 14$ images. We randomly divide the datasets into a training set consisting of $50,000$ images and a validation set consisting of $10,000$ images.
In all experiments, we choose a CNN with identical layers, $\tanh$ activation function, and a softmax classifier. For optimization we use a Block-Coordinate-Descent (BCD) method. Each iteration consists of one Gauss-Newton step with sub-sampled Hessian to update the forward propagation parameters and 5 Newton steps to update the weights and biases of the classifier. To avoid overfitting and stabilize the process we enforce spatial smoothness of the classification weights and smoothness across layers for the propagation parameters through derivative-based regularization.
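The smoothness across layers can, for instance, be enforced by penalizing a finite difference approximation of the time derivative of the propagation parameters (a sketch; the regularizer actually used in the experiments is only described qualitatively above, and the weight lam is an assumed placeholder):
\begin{verbatim}
import numpy as np

def smoothness_regularizer(thetas, dt, lam=1e-2):
    # R = (lam/2) * sum_k dt * ||(theta_{k+1} - theta_k) / dt||^2
    d = np.diff(thetas, axis=0) / dt
    return 0.5 * lam * dt * np.sum(d ** 2)
\end{verbatim}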
The validation accuracy for the networks on the resolutions they are trained on is around 98.28\% and 98.18\% for the coarse and fine scale network, respectively. Next, we prolongate the classification weights and apply the multigrid prolongation to the convolution kernels from the coarse network to the fine resolution. Using only the results from the coarse level and no training on the fine level, we get a validation accuracy of $91.02$\%. For comparison, using the original convolution kernels gives a validation accuracy of $61.02$\%.
Next, we restrict the classification weights and convolution kernels of the network trained on fine data. Using these gives a validation accuracy of $94.92$\% (compared to $84.09$\% without restricting the kernels). Again, we note that no training is performed on the coarse resolution.
\subsection{Shallow to Deep Training}\label{sub:exp2}
We show the benefit of the multilevel training strategy using the MNIST example. We solve a sequence of training problems for CNNs whose depths increase in powers of two from 2 layers up to 64. For each CNN, we estimate the parameters using 20 iterations of the BCD method. Except for the number of layers, all parameters are chosen to be identical to the previous experiment.
We compare the convergence properties of the learning algorithm using random initialization and the proposed multiscale initialization, which uses the prolongated network parameters from
the previous level. The validation accuracy and the value of the loss function can be seen in Fig.~\ref{fig:mlconv}.
It can be seen that the initial guesses provided by the multiscale process have a lower value of the loss function and higher validation accuracy. For the deeper networks where training is most costly, the optimal accuracy is reached after only a few iterations using the multiscale method while for random initialization more iterations are needed to achieve a comparable accuracy.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.9\textwidth]{mlConv}
\end{center}
\caption{Multilevel convergence for MNIST problem.}
\label{fig:mlconv}
\end{figure}
\subsection{Multiscale CNN Training on ImageNet}\label{sub:exp3}
We demonstrate the computational benefit of coarse mesh training compared with only training the fine mesh CNN. To this end, we select ten categories from the ImageNet dataset~\cite{ILSVRC15}. The ImageNet images are of varying dimensions and hence we pre-process the images to be of dimension $224 \times 224$, as also proposed in~\cite{he2016deep}. On this resolution level, the image quality is sufficiently high to visually recognize the objects in the images and the discrete images appear smooth, i.e., are free of block artifacts. This choice leads to a non-trivial training problem, where one can expect to reduce training time using our multigrid approach. For each category, there are 1,300 images for a total of 13,000 images. We randomly divide the data into 10,000 images used for training and 3,000 images used for validation. We use the ResNet-34 architecture of~\cite{he2016deep} with two differences: first, the first CNN kernel is of dimension $3 \times 3 \times 64$ rather than $7 \times 7 \times 64$; second, we do not use the fully connected layer, but rather connect the output of the average pooling directly to the classification layer with softmax activation. Note that the average pooling ensures that the dimension of the penultimate layer is identical regardless of the dimension of the input layer.
The first step is coarse mesh training. To this end, we restrict the $224 \times 224$ images to a $112 \times 112$ mesh and train ResNet-34 on the coarsened images. Then, we obtain the weights for the finer scale using the method described in Section~\ref{sub:prolongation}. For comparison, we also train ResNet-34 using the $224 \times 224$ image data directly. We show in the left subplot of Fig.~\ref{fig:imagenet10-results} that the multiscale approach results in a considerable reduction in the number of epochs before reaching convergence. We adopt early stopping and stop training if the loss does not improve for 10 epochs. In the right subplot of Fig.~\ref{fig:imagenet10-results}, we show the total training time for the multiscale approach, i.e., the sum of the coarse scale and fine scale training times, compared to directly training on the $224 \times 224$ data. Note that the multiscale approach not only requires fewer epochs and lower runtime, but also gives lower training and validation errors. To ensure that this is not a phenomenon specific to the given train-test data split, we report the average over 5 splits; additional plots are provided in Figure~\ref{fig:imagenet10-additional-results}. In all train-test data splits, the multiscale approach produces lower training and validation errors in a smaller number of epochs.
\begin{figure}[t!]
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[width=0.5\textwidth]{imagenet10_multiscale_112_224_0.pdf} &
\includegraphics[width=0.5\textwidth]{imagenet10_timing_results2.pdf} \\
\end{tabular}
\caption{Results on ImageNet-10. Left: Comparison of training and validation accuracy for fine mesh training using $224 \times 224$ images (red) and our coarse-to-fine multiscale approach using weights trained from $112 \times 112$ (blue). Note the multiscale method achieves superior training and validation accuracy in fewer epochs. Right: Slight computational savings are achieved using our multiscale approach compared to independent learning on each resolution. }
\label{fig:imagenet10-results}
\end{figure}
\begin{figure}[t!]
\centering
\begin{tabular}{@{}c@{}c@{}}
\includegraphics[width=0.45\textwidth]{imagenet10_multiscale_112_224_1.pdf} &
\includegraphics[width=0.45\textwidth]{imagenet10_multiscale_112_224_2.pdf} \\
\includegraphics[width=0.45\textwidth]{imagenet10_multiscale_112_224_3.pdf} &
\includegraphics[width=0.45\textwidth]{imagenet10_multiscale_112_224_4.pdf} \\
\end{tabular}
\caption{Additional results on ImageNet-10. Refer to Figure~\ref{fig:imagenet10-results} (a) for the legend.}
\label{fig:imagenet10-additional-results}
\end{figure}
\section{Conclusions}
\label{sec5}
In this work, we explore the connection between optimal control and the training of deep Convolution Neural Networks (CNNs), which enables learning across scales.
The foundation of our approach is a continuous image model and the interpretation of forward propagation in CNNs as the discretization of a time-dependent nonlinear differential equation.
We showed how this mathematical framework can be used to scale deep CNNs along two dimensions: image resolution and depth.
While the obtained multiscale approaches are new in deep learning, they are commonly used for the numerical solution of related optimal control problems, e.g., in image processing and parameter estimation.
Our method for connecting low- and high-resolution images is unique in that it scales the parameters of the network rather than interpolating the image data to different resolutions.
To this end, we present an algebraic multigrid approach to computing convolution operators that are consistent with coarse and fine scale images. We exemplified the benefit of our approach in two ways. In Sec.~\ref{sub:exp1}, we show that CNNs trained on fine resolution images can be adapted and used to classify coarse resolution images and vice versa.
Our method is advantageous when memory is limited and interpolation of images or videos is not practical.
In Sec.~\ref{sub:exp3}, we demonstrate that it is possible to use coarse representations of the images to learn the convolution kernels and, after prolongation, use the weights to classify high-resolution images.
Our example in Sec.~\ref{sub:exp2} shows that scaling the number of layers of the CNN can improve and accelerate the training of deep networks through initialization using results from shallow ones. In our experiment, this drastically reduces the number of iterations for the deep networks, where the cost per iteration is high.
Casting CNNs as a continuous optimal control problem of differential equations provides new insights into the field and motivates new ways to solve and regularize the learning problem.
\section{Acknowledgements}
\label{sec:acknowledgements}
This work is supported in part by the US National Science Foundation (NSF) award DMS 1522599.
\input{multigridNIPS.bbl}
\end{document}
| {
"timestamp": "2017-06-23T02:08:06",
"yymm": "1703",
"arxiv_id": "1703.02009",
"language": "en",
"url": "https://arxiv.org/abs/1703.02009",
"abstract": "In this work we establish the relation between optimal control and training deep Convolution Neural Networks (CNNs). We show that the forward propagation in CNNs can be interpreted as a time-dependent nonlinear differential equation and learning as controlling the parameters of the differential equation such that the network approximates the data-label relation for given training data. Using this continuous interpretation we derive two new methods to scale CNNs with respect to two different dimensions. The first class of multiscale methods connects low-resolution and high-resolution data through prolongation and restriction of CNN parameters. We demonstrate that this enables classifying high-resolution images using CNNs trained with low-resolution images and vice versa and warm-starting the learning process. The second class of multiscale methods connects shallow and deep networks and leads to new training strategies that gradually increase the depths of the CNN while re-using parameters for initializations.",
"subjects": "Neural and Evolutionary Computing (cs.NE); Computer Vision and Pattern Recognition (cs.CV)",
"title": "Learning across scales - A multiscale method for Convolution Neural Networks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822876997410348,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7089594680385992
} |
https://arxiv.org/abs/1206.6249 | The Surgery Unknotting Number of Legendrian Links | The surgery unknotting number of a Legendrian link is defined as the minimal number of particular oriented surgeries that are required to convert the link into a Legendrian unknot. Lower bounds for the surgery unknotting number are given in terms of classical invariants of the Legendrian link. The surgery unknotting number is calculated for every Legendrian link that is topologically a twist knot or a torus link and for every positive, Legendrian rational link. In addition, the surgery unknotting number is calculated for every Legendrian knot in the Legendrian knot atlas of Chongchitmate and Ng whose underlying smooth knot has crossing number 7 or less. In all these calculations, as long as the Legendrian link of $j$ components is not topologically a slice knot, its surgery unknotting number is equal to the sum of $(j-1)$ and twice the smooth 4-ball genus of the underlying smooth link. | \section{Introduction}
\label{sec:intro}
A classical invariant for smooth knots is the
unknotting number:
the unknotting number of a diagram of
a knot $K$ is the minimum number of crossing changes required to change the diagram into a
diagram of the unknot; the unknotting number of $K$ is the minimum of the unknotting numbers of
all diagrams of $K$.
In the following, we will define a surgery unknotting number for
Legendrian knots and links.
Legendrian links are smooth links that satisfy
an additional geometric condition imposed by a contact structure. We will focus on Legendrian links in $\mathbb R^3$ with
its standard contact structure.
The notion
of Legendrian equivalence is more refined than smooth equivalence: there is only one smooth unknot, but there
are an infinite number of Legendrian unknots. Figure~\ref{fig:3-unknots} shows the front projections
of three
different Legendrian unknots; the entire infinite ``tree" representing all Legendrian unknots can be found in Figure~\ref{fig:unknots}.
The act of changing a crossing (smoothly passing a knot through itself) is not a natural operation in a contact manifold. Instead,
given a Legendrian link, we will attempt to arrive at a Legendrian unknot through a Legendrian ``surgery" operation in which
two oppositely oriented strands in
a Legendrian $0$-tangle are replaced by an oriented, Legendrian $\infty$-tangle as illustrated in Figure~\ref{fig:0-surgery}. It is shown
in Proposition~\ref{prop:unknottable} that every Legendrian link can become a Legendrian unknot after a finite number of surgeries.
The surgery unknotting number of a Legendrian link $\Lambda$, $\sigma_0(\Lambda)$, measures the minimal number of these surgeries that are required to convert
$\Lambda$ to a Legendrian unknot; see Definitions~\ref{defn:strings} and \ref{defn:sun}.
\begin{figure}
\centerline{\includegraphics[height=1.3in]{0-surgery}}
\caption{Oriented Legendrian surgeries: a basic, compatibly oriented $0$-tangle is replaced by a
basic, compatibly oriented $\infty$-tangle. }
\label{fig:0-surgery}
\end{figure}
In the following, our goal is to study and calculate this Legendrian invariant $\sigma_0(\Lambda)$.
\subsection{Main Results}
Lower bounds on $\sigma_0(\Lambda)$ exist in terms of the classical invariants of $\Lambda$. These invariants include
invariants of the underlying smooth link type $L_\Lambda$
and the classical Legendrian invariants of $\Lambda$: the Thurston-Bennequin, $tb(\Lambda)$, and rotation number, $r(\Lambda)$,
as defined in Section~\ref{sec:background}.
\begin{thm} \label{thm:lso-lb} Let $\Lambda$ be a Legendrian link. Then:
\begin{enumerate}
\item $tb(\Lambda) + |r(\Lambda)| + 1 \leq \sigma_0(\Lambda);$
\item if $\Lambda$ has $j$ components, $L_\Lambda$
denotes the underlying smooth
link type of $\Lambda$, and $g_4(L_\Lambda)$ denotes the smooth $4$-ball genus of $L_\Lambda$
\footnote{$g_4(L_\Lambda)$ denotes the minimal genus of a smooth, compact, connected, oriented surface $\Sigma
\subset B^4$ with
$\partial \Sigma = L_\Lambda \subset \mathbb R^3 \subset S^3 = \partial B^4$.},
then $$2 g_4(L_\Lambda) + (j-1) \leq \sigma_0(\Lambda).$$
\end{enumerate}
\end{thm}
\begin{rem} \label{rem:sl-b} In parallel to Theorem~\ref{thm:lso-lb} (1), when $\Lambda$ is a
Legendrian knot with underlying smooth knot type $K_\Lambda$,
the well-known Slice-Bennequin Inequality
says that:
\begin{equation} \label{ineq:sl-b}
tb(\Lambda) + |r(\Lambda)| + 1 \leq 2g_4(K_\Lambda).
\end{equation}
There are now a number of proofs of this result, but all use deep theory.
In \cite{lisca-matic}, Lisca and Mati\'c prove this using
their adjunction inequality obtained by Seiberg-Witten theory. See also \cite{akbulut-matveyev} and \cite{rudolph}.
In contrast, the proof of Theorem~\ref{thm:lso-lb} is elementary and is given in
Lemmas~\ref{lem:tb-lb} and \ref{lem:g4-lb}.
\end{rem}
When $\Lambda$ is a knot, combining Theorem~\ref{thm:lso-lb}(2) and the Slice-Bennequin Inequality~(\ref{ineq:sl-b}), we find:
\begin{cor} \label{cor:realize-g4}
For any Legendrian knot
$\Lambda$, if $K_\Lambda$ denotes the smooth knot type of $\Lambda$ then
$$tb(\Lambda) + |r(\Lambda)| + 1 \leq 2g_4(K_\Lambda) \leq \sigma_0(\Lambda).$$
Thus when
$\sigma_0(\Lambda) = tb(\Lambda) + |r(\Lambda)| + 1$, $\sigma_0(\Lambda) = 2g_4(K_\Lambda)$.
\end{cor}
As we will see below, this corollary sometimes allows us to calculate the smooth $4$-ball genus of a knot.
Using the established lower bounds, we can calculate $\sigma_0(\Lambda)$ when the underlying smooth link type of $\Lambda$ falls within some
important families.
\begin{thm} \label{thm:family-sum}
\begin{enumerate}
\item If $\Lambda$ is a Legendrian knot that is topologically a non-trivial twist knot,
then $\sigma_0(\Lambda) = 2$.
\item If $\Lambda$ is a $j$-component Legendrian link that is topologically a $(jp,jq)$-torus link,
$|p| > q > 1$ and $\gcd(p,q) = 1$, then
$$\sigma_0(\Lambda) = (|jp|-1)(jq-1).$$
\end{enumerate}
\end{thm}
Theorem~\ref{thm:family-sum} is proved in Section~\ref{sec:families} as Theorems~\ref{thm:twist} and \ref{thm:torus}.
The proof of this theorem relies heavily on the classification of Legendrian twist knots given by Etnyre, Ng, and V\'ertesi, \cite{etnyre-ng-vertesi},
and the classification of Legendrian torus knots by Etnyre and Honda, \cite{etnyre-honda:knots}, which was extended to a classification of
Legendrian torus links by Dalton, \cite{dalton}. When $\Lambda$ is topologically a positive torus link, $p > 0$, of maximal Thurston-Bennequin invariant,
the calculation of $\sigma_0(\Lambda)$ is obtained by realizing the lower bound given in Theorem~\ref{thm:lso-lb} with the Legendrian
invariants of $\Lambda$.
Thus by Corollary~\ref{cor:realize-g4}, we are able to reprove the Milnor conjecture about torus knots, originally proved by Kronheimer and Mrowka:
\begin{cor}[\cite{kronheimer-mrowka}] \label{cor:Milnor} If $T(p,q)$ is a $(p,q)$-torus knot, $|p| > q > 1$, then
$$2g_4(T(p,q)) = (|p| - 1)(q-1).$$
\end{cor}
By comparing $\sigma_0$ of the Legendrian and $g_4$ of the underlying smooth link type,
we can rephrase the conclusions of Theorem~\ref{thm:family-sum} as:
\begin{cor}\label{cor:twist-torus} If $\Lambda$ is a Legendrian link that is topologically a non-slice twist knot
\footnote{Casson and Gordon proved that the only twist knots that are slice are the unknot, $6_1$, and $m(6_1)$; \cite{casson-gordon}. }
or a $j$-component torus link, $L_\Lambda$,
then $$\sigma_0(\Lambda) = 2g_4(L_\Lambda) + (j-1).$$
\end{cor}
As an additional family of Legendrian links, we consider ``positive, Legendrian rational links". These links are defined as
Legendrian numerator closures of the Legendrian rational tangles
studied, for example, by the second author, \cite{traynor:strat}, and Schneider, \cite{schneider}. These links are positive in the sense that
an orientation is chosen on the components so that all the crossings have a
positive sign. Such Legendrian links are specified by a vector $(c_n, \dots, c_1)$ of positive
integers; see Definition~\ref{defn:pos-rat} and Figure~\ref{fig:rat-gen}.
Lemma~\ref{lem:pos} gives conditions on the $c_i$
that guarantee that the link is positive.
\begin{thm} \label{thm:rat-sum}
If $\Lambda(c_n, \dots, c_2, c_1)$ is a positive, Legendrian rational link, then
$$\sigma_0(\Lambda(c_n, \dots, c_2, c_1)) = \sum_{i \text{ odd }} c_{i} - p(n),$$
where $p(n) $ equals $1$ when $n$ is odd and equals $0$ when $n$ is even.
\end{thm}
This is proved in Section~\ref{sec:families}; see Theorem~\ref{thm:pos-rat}.
\begin{rem} When $\Lambda$ is a positive, Legendrian rational link,
the calculation of $\sigma_0(\Lambda)$ is obtained by realizing the lower bound of Theorem~\ref{thm:lso-lb} given
by the classical Legendrian invariants of $\Lambda$.
Thus by Corollary~\ref{cor:realize-g4}, when
$\Lambda(c_n, \dots, c_1)$ is a positive, Legendrian rational knot,
Theorem~\ref{thm:rat-sum} gives a formula for twice the smooth $4$-ball genus of the underlying smooth knot.
This can be used to get formulas for the smooth $4$-ball genus of a knot in terms of its rational
notation. In particular,
$$\begin{aligned}
&g_4(5_2) = g_4(N(3,2)) = \frac12 \sigma_0(\Lambda(3,2)) = \frac12 (2) = 1, \\
&g_4(7_5) = g_4(N(3,2,2)) = \frac12 \sigma_0(\Lambda(3,2,2)) = \frac12 (2 + 3 -1) = 2, \\
&g_4(N(5, 244, 4, 16, 3, 104, 2, 12, 1)) = \frac12(1 + 2 + 3 + 4 + 5-1).
\end{aligned}$$
This is an alternative to formulas for calculating the smooth $4$-ball genus in terms of crossings and Seifert circles
as given by Nakamura in \cite{nakamura}. In turn, using Nakamura's formula, we see that when the underlying link type
of $\Lambda(c_n, \dots, c_2, c_1)$ is
a $2$-component link $L_\Lambda$, $$\sigma_0(\Lambda(c_n, \dots, c_2, c_1)) = 2 g_4(L_\Lambda)+ 1;$$
see Remark~\ref{rem:g4-rats}.
\end{rem}
Given the above calculations, it is natural to ask:
\begin{ques} \label{ques:sun-g4} If $\Lambda$ is a Legendrian knot that is topologically a non-slice knot $K_\Lambda$, is
$\sigma_0 (\Lambda) = 2 g_4(K_\Lambda)?$
More generally, if $\Lambda$ is a Legendrian link of $j \geq 2$ components that is topologically the link $L_\Lambda$,
is
$\sigma_0 (\Lambda) = 2 g_4(L_\Lambda) + (j-1)?$
\end{ques}
To investigate the knot portion of this question, we examined Legendrian representatives of knots with crossing
number $7$ or less. There is not yet
a Legendrian classification of all these knot types, but a conjectured
classification is given by Chongchitmate and Ng in \cite{chongchitmate-ng}.
\begin{prop}\label{prop:small-cross} Assuming the conjectured classification of Legendrian knots in \cite{chongchitmate-ng} \footnote{Potential duplications
in their atlas will not affect the statement.}, if $\Lambda$ is a Legendrian knot that is topologically a
non-slice knot $K_\Lambda$ with crossing number $7$ or
less,
$\sigma_0(\Lambda) = 2 g_4 (K_\Lambda).$
\end{prop}
The only non-torus and non-twist
knots with crossing number at most $7$ are $6_2, m(6_2)$, $6_3 = m(6_3)$, $7_3$, $m(7_3)$, $7_4$, $m(7_4)$, $7_5$, $m(7_5)$,
$7_6$, $m(7_6)$, $7_7$, and $m(7_7)$.
While doing the calculations for Legendrians with these knot types, in general we found that for a Legendrian $\Lambda$
whose underlying smooth knot type $K_\Lambda$
satisfies $g_3(K_\Lambda) = g_4(K_\Lambda)$, where $g_3(K_\Lambda)$ denotes the ($3$-dimensional)
genus of the knot,
it is fairly straightforward to show that
$\sigma_0(\Lambda) = 2 g_4 (K_\Lambda)$. Legendrians that are topologically $7_3, m(7_3), 7_4, m(7_4), 7_5$, and $m(7_5)$
fall into this category.
For the remaining knot types under consideration, the calculation of the smooth $4$-ball genus follows from
the fact that the topological unknotting number of these knots is equal to $1$. We show
that in a front projection of a Legendrian knot, it is possible to locally change
any negative crossing to a positive one by $2$ surgeries; see Lemma~\ref{lem:neg-cross-unknot}. This allowed us to prove
Proposition~\ref{prop:small-cross} in the cases where $\Lambda$ is topologically $6_2, 6_3 = m(6_3), 7_6$, or $7_7$.
For the remaining cases of $m(6_2)$, $m(7_6)$, and $m(7_7)$, results of \cite{signed-unknot} show that it is not possible to find
a front projection that can be
unknotted at a negative crossing. However, we found front projections that could
be unknotted at a positive crossing in a special ``S" or ``hooked-X" form: a positive crossing in
one of these special forms can be locally changed to a negative crossing by $2$ surgeries; see Lemma~\ref{lem:pos-cross-unknot}.
\subsection{The Lagrangian Motivation and Discussion}
All of our calculations indicate that $\sigma_0(\Lambda)$ is measuring an invariant of the underlying
smooth link type and that this invariant will be the same for $\Lambda$ and $\Lambda'$ when they represent
smooth knots that differ by the topological mirror operation. Below is an explanation for why this may be true.
Although the definition of the surgery unknotting number has been formulated above combinatorially,
the motivation comes from trying to understand the flexibility and rigidity of Lagrangian submanifolds of
a symplectic manifold. From theory developed by Bourgeois, Sabloff, and the second author in
\cite{bourgeois-sabloff-traynor},
the existence
of an unknotting surgery string $\left( \Lambda_n, \dots, \Lambda_0\right)$, as defined in Definition~
\ref{defn:strings}, implies the existence of an oriented Lagrangian
cobordism $\Sigma$ in $\ensuremath{\mathbb{R}} \times \ensuremath{\mathbb{R}}^3 = \{ (s, x, y, z) \} \cap \{ 0 \leq s \leq n \}$ so that $(\Sigma \cap \{ s = i \} ) = \Lambda_i$, for $i = n, \dots, 0$.
Furthermore, if $\Lambda_0$ is the Legendrian unknot with maximal Thurston-Bennequin invariant,
this cobordism can be ``filled in" with a Lagrangian $\overline \Sigma \subset \{ s \leq n\}$ so that $\partial \overline \Sigma = \Lambda_n$.
In fact, it is shown by Chantraine in \cite{chantraine} that if $\Lambda_0$ is not the Legendrian unknot with maximal Thurston-Bennequin
invariant, then the cobordism $\Sigma$ cannot be filled in to $\overline \Sigma$; moreover, when there does exist the filling
to $\overline \Sigma$
and the smooth underlying knot type of $\Lambda_n$ is $K_n$, then the genus of $\overline \Sigma$ agrees with the smooth $4$-ball genus
of $K_n$.
From this Lagrangian perspective, it is a bit more natural to consider surgery strings $(\Lambda_n, \dots, \Lambda_0)$
where $\Lambda_0$ is a Legendrian unlink (a trivial link of Legendrian unknots), and define a corresponding ``surgery unlinking number"; this is a project
that the second author has begun to pursue with other undergraduates.
A Lagrangian analogue of Question~\ref{ques:sun-g4} is:
\begin{ques} \label{ques:lag-g4} If $\Lambda$ is a Legendrian knot with underlying smooth knot type
$K_\Lambda$,
does there exist a Lagrangian cobordism constructed from Legendrian isotopy and oriented Legendrian surgeries
between $\Lambda$ and $\Lambda_0$, a Legendrian that is a smooth unlink,
that realizes $g_4(K_\Lambda)$?
\end{ques}
Any Lagrangian constructed from Legendrian isotopy and oriented Legendrian surgeries would be in ribbon form; this means that
the restriction of the height function, given by the $s$ coordinate, to the cobordism would not have any local maxima in the interior of the cobordism.
So a positive answer to Question~\ref{ques:lag-g4} would imply that the slice genus
agrees with the ribbon genus; for some background on this and related problems, see, for example, \cite{livingston-survey}.
\subsection*{Acknowledgements} The ideas of this project were inspired by joint work of
Josh Sabloff and the second author. We are extremely grateful for the many fruitful discussions we have had
with Sabloff throughout this project. We also gained much by discussions with
Chuck Livingston about the smooth $4$-ball genus; we are very thankful for his clear
explanations. We also thank Paul Melvin and other members of our PACT (Philadelphia Area
Contact/Topology) seminar for useful comments
during a series of presentations on this work.
\section{Background Information on Legendrian Links} \label{sec:background}
Below is some basic background on Legendrian links. More information can be found, for example, in
\cite{etnyre:knot-survey}.
The {\bf standard contact
structure} on $\mathbb R^3$ is the field of hyperplanes $\xi$ where
$\xi_p = \ker(dz - ydx)_p$. A {\bf Legendrian link} is a submanifold, $L$, of $\mathbb R^3$
diffeomorphic to a disjoint union of circles so that for all $p \in L$,
$T_pL \subset \xi_p$.
It is common to examine Legendrian links from their
$xz$-projections, known as their {\bf front projections}. A Legendrian link will generically have
an immersed front projection with semi-cubical cusps and no vertical tangents; conversely, any such projection can
be uniquely lifted to a Legendrian link using $y = dz/dx$. Figure~\ref{fig:trefoils} shows
Legendrian versions of the trefoils $3_1$ and $m(3_1)$.
\begin{figure}
\centerline{\includegraphics[height=.8in]{trefoils}}
\caption{Front projections of (a) a Legendrian knot that is topologically the (negative/left) trefoil $3_1$ and
(b) a Legendrian knot that is topologically the mirror trefoil $m(3_1)$. At crossings,
it is not necessary to specify which strand is the overstrand: the strand with lesser slope will always be on top.}
\label{fig:trefoils}
\end{figure}
$\Lambda_0$ and $ \Lambda_1$ are {\bf equivalent Legendrian links}
if there exists a $1$-parameter family of Legendrian links $\Lambda_t$ joining $\Lambda_0$ and $\Lambda_1$.
In fact, Legendrian links $\Lambda_0, \Lambda_1$ are equivalent if and only if their front projections
are equivalent by planar isotopies that do not introduce vertical tangents and
the {\bf Legendrian Reidemeister moves} as shown in Figure~\ref{fig:L-R-moves}.
\begin{figure}
\centerline{\includegraphics[height=1.2in]{L-R-moves}}
\caption{The three Legendrian Reidemeister moves: there is another type $1$ move obtained by flipping the
planar figure about a horizontal line, and there are three additional type $2$ moves obtained by flipping the
planar figure about a vertical, a horizontal, and both a vertical and horizontal line.}
\label{fig:L-R-moves}
\end{figure}
Every smooth knot and link type has a Legendrian representative. In fact, every smooth knot
and link type has an infinite number of different Legendrian representatives. For example, Figure~\ref{fig:3-unknots}
shows three different Legendrians that are all topologically the unknot. These unknots can be distinguished
by classical Legendrian invariants, the Thurston-Bennequin and rotation number.
These invariants can easily be computed from a front projection of the Legendrian link once we understand how to
assign a $\pm$ sign to each crossing and an up/down direction to each cusp.
\begin{figure}
\centerline{\includegraphics[height=1.2in]{3-unknots}}
\caption{Three different Legendrian knots that are all topologically the unknot. }
\label{fig:3-unknots}
\end{figure}
A {\bf positive (negative) crossing} of a front projection of an oriented Legendrian link is a crossing where the strands point to the same side (opposite sides) of a
vertical line passing through the crossing point; see Figure~\ref{fig:positive_negative_crossings}.
Each cusp can also be assigned an {\bf up} or {\bf down} direction; see Figure~\ref{fig:up-down-cusps}.
Then for an oriented Legendrian link $\Lambda$,
we have the following formulas for the {\bf Thurston-Bennequin}, $tb(\Lambda)$, and {\bf rotation number}, $r(\Lambda)$, invariants:
\begin{equation} \label{eqn:tb-r}
tb(\Lambda) = P - N - R, \qquad r(\Lambda) = \frac12(D - U),
\end{equation}
where
$P$ is the number of positive crossings,
$N$ is the number of negative crossings,
$R$ is the number of right cusps,
$D$ is the number of down cusps,
and $U$ is the number of up cusps in a front projection of $\Lambda$.
Given that two front projections of equivalent Legendrian links
differ by the Legendrian Reidemeister moves described in Figure~\ref{fig:L-R-moves}, it is easy to verify that
$tb(\Lambda)$ and $r(\Lambda)$ are Legendrian
link invariants.
\begin{figure}
\centerline{\includegraphics[height=.8in]{positive_negative_crossings}}
\caption{(a) Negative crossings; (b) Positive crossings. }
\label{fig:positive_negative_crossings}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=.8in]{up-down-cusps}}
\caption{(a) Right and left down cusps; (b) Right and left up cusps. }
\label{fig:up-down-cusps}
\end{figure}
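As a quick check of the formulas in Equation~(\ref{eqn:tb-r}), consider the topmost Legendrian unknot of Figure~\ref{fig:3-unknots}: its front has no crossings, one right cusp, one up cusp, and one down cusp, so
$$tb(\Lambda_0) = 0 - 0 - 1 = -1, \qquad r(\Lambda_0) = \tfrac12(1 - 1) = 0.$$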
The two unknots in the second line of Figure~\ref{fig:3-unknots} are obtained from the one at the top
by adding an up or down {\bf zig-zag} (also known as a {\bf $\mp$ stabilization}).
In general, this stabilization procedure will not change the underlying smooth knot type but will
decrease the Thurston-Bennequin number by $1$; adding an up (down) zig-zag will decrease (increase)
the rotation number by $1$.
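Both effects follow directly from Equation~(\ref{eqn:tb-r}): a zig-zag adds one right cusp and one left cusp to the front, while a down (up) zig-zag adds two down (up) cusps. For a down zig-zag, for instance,
$$tb \longmapsto P - N - (R+1) = tb - 1, \qquad r \longmapsto \tfrac12\big((D+2) - U\big) = r + 1.$$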
If $\Lambda$ is a Legendrian knot,
we will use the notation $S_{\pm}(\Lambda)$ to denote the {\bf double stabilization} of $\Lambda$, the Legendrian knot obtained by adding
both a positive and negative zig-zag.
In fact, as discovered by Eliashberg and Fraser, all Legendrian unknots are classified by their Thurston-Bennequin and
rotation numbers:
\begin{thm}[\cite{eliashberg-fraser}, \cite{etnyre-honda:knots}] \label{thm:unknot-classification}
Suppose $\Lambda_0$ and $\Lambda_0'$ are oriented Legendrian knots that are both topologically
the unknot. Then $\Lambda_0$ is equivalent to $\Lambda_0'$ if and only if $tb(\Lambda_0) = tb(\Lambda_0')$ and
$r(\Lambda_0) = r(\Lambda_0')$.
\end{thm}
\begin{figure}
\centerline{\includegraphics[height=1in]{unknots}}
\caption{The tree of all Legendrian unknots. }
\label{fig:unknots}
\end{figure}
Figure~\ref{fig:unknots} describes all the Legendrian unknots. Notice that any Legendrian unknot is equivalent to one that is
obtained by adding up and/or down zig-zags to the unknot with Thurston-Bennequin number equal to $-1$ and rotation number equal to
$0$ shown in Figure~\ref{fig:3-unknots}.
In general, it is an important question to understand the ``geography" of other knot types. By work of Etnyre and Honda, \cite{etnyre-honda:knots} and
Etnyre, Ng, and V\'ertesi, \cite{etnyre-ng-vertesi}, we understand the trees/mountain ranges for all torus and twist knots. The Legendrian knot atlas
of Chongchitmate and Ng, \cite{chongchitmate-ng}, gives the known and conjectured mountain ranges for all Legendrian knots with arc index at most $9$; this
includes all knot types with crossing number at most $7$ and all non-alternating knots with crossing number at most $9$.
\section{The Surgery Unknotting Number}
In this section, we define the surgery operation, show that every Legendrian link can be unknotted by surgeries, define the
surgery unknotting number, and give some basic properties of the surgery unknotting number.
The surgery operation can be viewed as a ``tangle surgery": the replacement
of one Legendrian tangle by another.
A {\bf basic, compatibly-oriented Legendrian $0$-tangle} is a Legendrian tangle that is topologically the 0-tangle where the strands
are oppositely oriented and each strand has neither crossings nor cusps; the two basic, compatibly-oriented Legendrian $0$-tangles
can be seen on the left side of Figure~\ref{fig:0-surgery}.
A {\bf basic, compatibly-oriented Legendrian $\infty$-tangle}
is a Legendrian tangle that is topologically the $\infty$-tangle where the strands are oppositely oriented and each strand has precisely
$1$ cusp and no crossings; the two basic, compatibly-oriented Legendrian $\infty$-tangles
can be seen on the right side of Figure~\ref{fig:0-surgery}.
\begin{defn} \label{defn:strings}
An {\bf oriented, Legendrian surgery} of an oriented, Legendrian link is the Legendrian link obtained by replacing a
basic, compatibly-oriented Legendrian
$0$-tangle with a basic, compatibly-oriented Legendrian $\infty$-tangle;
see Figure~\ref{fig:0-surgery}. An {\bf oriented surgery string} consists of a vector of oriented, Legendrian links
$\left( \Lambda_n, \Lambda_{n-1}, \dots, \Lambda_0\right)$ where, for all $j \in \{n-1, \dots, 0\}$, $\Lambda_{j}$ is obtained from
$\Lambda_{j+1}$ by an oriented, Legendrian surgery. An {\bf oriented, unknotting surgery string of length $n$
for $\Lambda$} consists of an oriented surgery string
$\left(\Lambda_n, \Lambda_{n-1}, \dots, \Lambda_0\right)$ where $\Lambda_n = \Lambda$ and $\Lambda_0$ is topologically an unknot.
\end{defn}
To start, we have the following relationships between the classic invariants of two Legendrian links related by surgery:
\begin{lem} \label{lem:parity+tb} If $\Lambda$ is an oriented, Legendrian link and $\Lambda'$ is obtained from $\Lambda$ by an oriented, Legendrian surgery, then:
\begin{enumerate}
\item the parities of the numbers of components of $\Lambda$ and $\Lambda'$ differ;
\item $tb(\Lambda') = tb(\Lambda) - 1$, and $r(\Lambda') = r(\Lambda)$.
\end{enumerate}
\end{lem}
\begin{proof} The statements about the Thurston-Bennequin and rotation numbers are easily verified using
Equation~(\ref{eqn:tb-r}).
Regarding the parity,
one surgery to a knot will always produce a link of two components, while doing a surgery to a link will increase or decrease the number of components
by $1$ depending on whether or not the strands in the $0$-tangle belong to the same component of the link.
\end{proof}
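Explicitly, a surgery leaves the crossings of a front unchanged, while the $\infty$-tangle of Figure~\ref{fig:0-surgery} contributes one right cusp and one left cusp, one up and one down since its strands are oppositely oriented; hence
$$tb(\Lambda') = P - N - (R+1) = tb(\Lambda) - 1, \qquad r(\Lambda') = \tfrac12\big((D+1) - (U+1)\big) = r(\Lambda).$$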
Recall that for any Legendrian knot $\Lambda$, the Legendrian knot $\Lambda' = S_\pm(\Lambda)$ obtained as the
double $\pm$ stabilization of $\Lambda$ will have $r(\Lambda') = r(\Lambda)$ and
$tb(\Lambda') = tb(\Lambda) -2$. Thus it is potentially possible that $\Lambda'$ can be obtained from $\Lambda$ by two oriented Legendrian
surgeries. In fact, it is possible.
\begin{lem} \label{lem:2-steps-down} For any oriented, Legendrian knot $\Lambda$
there exists an oriented surgery string
$(\Lambda_2, \Lambda_1, \Lambda_0)$ with $\Lambda_2 = \Lambda$ and $\Lambda_0 = S_\pm(\Lambda)$.
\end{lem}
\begin{proof} These surgeries are illustrated in Figure~\ref{fig:2-steps-down}.
Every Legendrian link $\Lambda$ must have a right cusp. By a Legendrian isotopy, we can pull a right cusp far to
the right and perform one surgery near this right cusp. This produces a link consisting of the original
link and a Legendrian unknot. After a Legendrian isotopy, a second surgery can be done using one
strand near the same cusp of the original link and a strand from the unknot. The result is $S_\pm(\Lambda)$.
\end{proof}
\begin{figure}[htbp]
\begin{center}
\centerline{\includegraphics[height=1.5in]{2-steps-down}}
\caption{Two oriented, Legendrian surgeries produce $S_\pm(\Lambda)$ from $\Lambda$.}\label{fig:2-steps-down}
\end{center}
\end{figure}
In the chart of Legendrian unknots given in Figure~\ref{fig:unknots}, we see that any two unknots
with the same rotation number are related by a sequence of double $\pm$ stabilizations. Thus we get:
\begin{cor} If $\Lambda$ and $\Lambda'$ are oriented, Legendrian unknots with $r(\Lambda) = r(\Lambda')$ and
$tb(\Lambda) = tb(\Lambda') + 2m$, then there exists an oriented surgery string \newline
$\left(\Lambda_{2m}, \Lambda_{2m-1}, \dots, \Lambda_{0}\right)$,
where $\Lambda_{2m} = \Lambda$, and $\Lambda_{0} = \Lambda'$.
\end{cor}
Thus if we can reach a Legendrian unknot by surgeries, then we can reach an infinite number of Legendrian
unknots by surgery. The basis for our new invariant is the fact that every Legendrian link can be ``unknotted" by a string of
surgeries:
\begin{prop} \label{prop:unknottable}
For any oriented, Legendrian link $\Lambda$, there exists an oriented, unknotting surgery string
$\left( \Lambda = \Lambda_{u}, \Lambda_{u-1}, \dots, \Lambda_0\right)$.
Moreover, if $\Lambda$ has $j$ components
and there exists a front projection of $\Lambda$ with $m$ crossings, then $u \leq 2m + j - 1$.
\end{prop}
\begin{proof} Assume that there is a front projection of $\Lambda$ with $m$ crossings. We will first show that
there is an oriented surgery string $\left(\widetilde \Lambda_m, \widetilde \Lambda_{m-1}, \dots, \widetilde \Lambda_0\right)$,
where $\widetilde \Lambda_m = \Lambda$ and
$\widetilde \Lambda_0$ is a trivial link of Legendrian unknots. If $\widetilde \Lambda_0$ has $c$ components, we will then show that
it is possible to do an additional $c-1$ surgeries to get this into a single unknot.
Given the initial Legendrian link $\Lambda$ having a projection with $m$ crossings, assume that $n$ of these crossings
are negative. It is then possible to construct a surgery string $\left(\widetilde \Lambda_m, \widetilde \Lambda_{m-1}, \dots, \widetilde \Lambda_{m-n}\right)$ where
$\widetilde \Lambda_m = \Lambda$ and $\widetilde \Lambda_{m-n}$ has a front projection with $m - n$ crossings, all of which are positive.
This surgery string is obtained by doing a surgery
to the right of each negative crossing as shown in Figure~\ref{fig:neg-cross}, and then doing a Legendrian isotopy to
remove the positive crossing introduced by the surgery.
Next, by applying a
planar Legendrian isotopy, it is possible to assume that all the crossings of $\widetilde \Lambda_{m-n}$ have distinct $x$-coordinates.
The left cusps associated to the leftmost positive crossing are either nested or stacked and fall into one of the $6$ cases
listed in Figure~\ref{fig:positive-cross}.
For each case, it is possible to do a surgery immediately to the right of this leftmost crossing. After Legendrian Reidemeister moves, the crossing is eliminated
and the number of crossings of the projection of the resulting link has decreased by 1; see Figure~\ref{fig:positive-cross}. What was
the second leftmost positive crossing is now the leftmost positive crossing and the procedure can be repeated.
In this way, we obtain a surgery string of Legendrian links
$\left(\widetilde \Lambda_m, \dots, \widetilde \Lambda_{m-n}, \widetilde \Lambda_{m-n-1}, \dots, \widetilde \Lambda_0\right)$
where $\widetilde \Lambda_0$ has a front projection with no crossings. It follows that $\widetilde \Lambda_0$ is topologically a trivial link of unknots.
By applying a Legendrian isotopy, we can assume that $\widetilde \Lambda_0$ consists of $c$ Legendrian unknots which are
vertically stacked and where each unknot
is oriented ``clockwise"; an example of this is shown in Figure~\ref{fig:unknot-stack}. It is then easy to see
that after applying $c-1$ additional surgeries, we can obtain a Legendrian unknot. Thus there is an unknotting surgery string
for $\Lambda$ of length $u = m + c-1$.
By Lemma~\ref{lem:parity+tb}, if $\Lambda = \widetilde \Lambda_m$ has
$j$ components, $\widetilde \Lambda_0$ has at most $c= j+m$ components. Thus we see that $u \leq 2m +j - 1$, as claimed. \end{proof}
\begin{figure}
\centerline{\includegraphics[height=0.5in]{neg-cross}}
\caption{A negative crossing can be removed by an oriented Legendrian surgery and then Legendrian isotopy.}
\label{fig:neg-cross}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=2in]{positive_cross}}
\caption{Three cases for the leftmost positive crossing and their associated left cusps; three additional cases
are obtained by reversing the orientations on both strands.}
\label{fig:positive-cross}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=1.2in]{unknot-stack}}
\caption{After all crossings are eliminated, a Legendrian isotopy can be applied so that $\Lambda_0$ is a stack
of $c$ Legendrian unknots oriented clockwise. After $c-1$ additional surgeries, a Legendrian unknot is obtained.}
\label{fig:unknot-stack}
\end{figure}
\begin{defn} \label{defn:sun} Given a Legendrian link $\Lambda$, the (oriented) {\bf Legendrian surgery unknotting number of $\Lambda$}, $\sigma_0(\Lambda)$, is
defined as the minimal length of an oriented, unknotting surgery string for $\Lambda$.
\end{defn}
\begin{rem} \label{rem:basics} Here are some basic properties of $\sigma_0(\Lambda)$:
\begin{enumerate}
\item By Lemma~\ref{lem:parity+tb},
for any Legendrian link $\Lambda$, the parity of $\sigma_0(\Lambda)$ is opposite the parity of the number of components of $\Lambda$;
\item
For any oriented, Legendrian link $\Lambda$ with $j$ components, $j-1 \leq \sigma_0(\Lambda) < \infty$, and $\sigma_0(\Lambda) = 0$ if and only if $\Lambda$ is topologically an unknot;
\item If $\Lambda$ is a topologically non-trivial Legendrian knot and there exists an oriented unknotting surgery string for $\Lambda$
of length $2$, then $\sigma_0(\Lambda) = 2$;
\item If $\Lambda'$ is obtained from $\Lambda$ by stabilization(s), then $\sigma_0(\Lambda') \leq \sigma_0(\Lambda)$.
\end{enumerate}
\end{rem}
Proposition~\ref{prop:unknottable} and, more importantly,
explicit calculations will give upper bounds for $\sigma_0(\Lambda)$.
Now we turn to examining some lower bounds for $\sigma_0(\Lambda)$.
First, by Theorem~\ref{thm:unknot-classification},
if $\Lambda'$ is a Legendrian unknot, then $tb(\Lambda') + |r(\Lambda')| \leq -1$. Thus if $\Lambda$ is a Legendrian link with a
``large" Thurston-Bennequin and/or rotation number, one is forced to do a certain number of Legendrian surgeries. More precisely,
Lemma~\ref{lem:parity+tb} implies:
\begin{lem}\label{lem:tb-lb} For any Legendrian link $\Lambda$,
$$tb(\Lambda) + |r(\Lambda)| + 1 \leq \sigma_0(\Lambda).$$
\end{lem}
Lemma~\ref{lem:tb-lb} gives us improved lower bounds over those given in Remark~\ref{rem:basics} when $2 \leq tb(\Lambda) + |r(\Lambda)|$.
\footnote{The parity of $tb(\Lambda) + |r(\Lambda)|$ agrees with the parity of the number of components of $\Lambda$, so for knots, we get
interesting new bounds when $3 \leq tb(\Lambda) + |r(\Lambda)|$.}
For example, there exists a Legendrian knot $\Lambda$ whose underlying smooth knot type is $m(5_1)$ and whose classical
invariants satisfy $tb(\Lambda) + |r(\Lambda)| = 3$; see, for example, \cite{chongchitmate-ng}. Thus
Lemma~\ref{lem:tb-lb} implies
$4 \leq \sigma_0(\Lambda)$. However,
for many links, $tb(\Lambda) + |r(\Lambda)| \leq 2$. For example, for any Legendrian $\Lambda$ that is topologically the $5_1$ knot,
$tb(\Lambda) + |r(\Lambda)| \leq -5$. Although Lemma~\ref{lem:tb-lb} will not help us, in this case we can make use of:
\begin{lem}\label{lem:g4-lb} For a Legendrian link $\Lambda$ with $j$ components, let $L_\Lambda$ denote the underlying smooth
link type of $\Lambda$,
and let $g_4(L_\Lambda)$ denote the smooth $4$-ball genus of $L_\Lambda$.
Then $$2 g_4(L_\Lambda) + (j-1) \leq \sigma_0(\Lambda).$$
\end{lem}
\begin{proof} From a Legendrian surgery string of length $n$ that ends at an unknot, one can construct
a smooth, orientable, compact, and connected $2$-dimensional surface in $B^4$ with boundary equal to $L_\Lambda$
and Euler characteristic equal to
$1 - n$; the genus, $g$, of this surface satisfies $1 - n= 2 - 2g - j$.
Thus, by the definition of the smooth $4$-ball genus,
$$(j-1) + 2g_4(L_\Lambda) \leq (j-1) + 2g = n.$$
Since $\sigma_0(\Lambda)$ is the minimum length of a surgery unknotting string, the claim follows.
\end{proof}
A convenient table of smooth $4$-ball genera of knots can be found at Cha and Livingston's KnotInfo website,
\cite{knot-info}.
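Both lower bounds are elementary to evaluate once the invariants are known. The following Python sketch is illustrative only (the helper name is ours, and the inputs $tb$, $r$, $g_4$, and $j$ must be supplied by the reader, e.g.\ from KnotInfo); it returns the best of the lower bounds discussed so far.
\begin{verbatim}
# Illustrative helper: best known lower bound for sigma_0(Lambda)
# from Lemma tb-lb, Lemma g4-lb, and the trivial bound j - 1.
def sigma0_lower_bound(tb, r, g4, j=1):
    bound_classical = tb + abs(r) + 1      # tb + |r| + 1 <= sigma_0
    bound_genus = 2 * g4 + (j - 1)         # 2 g_4 + (j - 1) <= sigma_0
    return max(bound_classical, bound_genus, j - 1)

# The m(5_1) example above, with the assumed split tb = 3, r = 0 of
# tb + |r| = 3, and g4(m(5_1)) = 2: both bounds give 4 <= sigma_0.
print(sigma0_lower_bound(tb=3, r=0, g4=2))  # -> 4
\end{verbatim}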
\section{The Surgery Unknotting Number for Families of Knots}
\label{sec:families}
In this section we will calculate the surgery unknotting numbers for Legendrian twist knots, Legendrian torus links,
and positive, Legendrian rational links. The fact that we can precisely calculate these numbers for the first two families rests upon
classification results of \cite{etnyre-ng-vertesi}, \cite{etnyre-honda:knots}, and \cite{dalton}.
\begin{subsection}{Legendrian twist knots}
A twist knot is a knot that is smoothly equivalent to a knot $K_m$ in the form
of Figure~\ref{fig:top-twist}. In other words, a twist knot is a twisted Whitehead double of the unknot.
\begin{figure}
\centerline{\includegraphics[height=.7in]{top-twist}}
\caption{The twist knot $K_m$; the box contains $m$ right-handed half twists if
$m \geq 0$, and $|m|$ left-handed half twists if $m < 0$. Notice that $K_0$ and $K_{-1}$ are unknots.}
\label{fig:top-twist}
\end{figure}
\begin{thm}\label{thm:twist} If $\Lambda$ is a Legendrian knot that is topologically a non-trivial twist knot,
then $\sigma_0(\Lambda) = 2$.
\end{thm}
\begin{proof}
Etnyre, Ng, and V\'ertesi have classified all
Legendrian twist knots, \cite{etnyre-ng-vertesi}. In particular, every Legendrian knot $\Lambda$ with maximal Thurston-Bennequin
invariant that is topologically $K_m$, for some $m \leq -2$, is Legendrian isotopic to one of the form
in Figure~\ref{fig:gen-neg-twist}, and every Legendrian knot $\Lambda$ with maximal Thurston-Bennequin
invariant that is topologically $K_m$, for some $m \geq 1$,
is Legendrian isotopic to one of the form in Figure~\ref{fig:gen-pos-twist}. \footnote{We omit $m = 0, -1$ since
those correspond to unknots.}
Every Legendrian knot $\Lambda$ that is topologically a non-trivial twist knot
is obtained by stabilization of one of these with maximal Thurston-Bennequin invariant.
By Remark~\ref{rem:basics}, it suffices to show that $\sigma_0(\Lambda^+) = 2$ for any Legendrian knot $\Lambda^+$ that is topologically
a non-trivial twist knot and has maximal Thurston-Bennequin invariant.
For $\Lambda^+$, we can do the two unknotting surgeries near the ``clasp". The sign of
the crossings in the clasp will depend on whether $m$ is even or odd:
Figure~\ref{fig:twist-surg} shows the positions
of two surgeries that result in an unknot.
\end{proof}
\begin{figure}
\centerline{\includegraphics[height=.7in]{gen-neg-twist}}
\caption{Any Legendrian knot that is topologically a negative twist knot, $K_m$ with $m \leq -2$, and has maximal Thurston-Bennequin
invariant
is Legendrian isotopic to one of the form in
(a) where the box contains $|m+2|$ half twists, each of form $S$ as shown in (b) or of form $Z$ as
shown in (c).}
\label{fig:gen-neg-twist}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=.7in]{gen-pos-twist}}
\caption{Any Legendrian knot that is topologically a positive twist knot, $K_m$ with $m \geq 0$, and has maximal Thurston-Bennequin
invariant
is Legendrian isotopic to one of the form in
(a) where the box contains $m$ half twists, each of form $X$ as shown in (b).}
\label{fig:gen-pos-twist}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=1.8in]{twist-surg}}
\caption{For a Legendrian knot with maximal Thurston-Bennequin invariant that is topologically $K_m$,
(a) gives the surgery points when $m$ is even, and (b) gives the surgery points when $m$ is odd.}
\label{fig:twist-surg}
\end{figure}
\end{subsection}
\begin{subsection}{Legendrian Torus Links}
A torus link is a link that can be smoothly isotoped so that it lies on the surface of an unknotted torus in $\ensuremath{\mathbb{R}}^3$. Every torus
knot can be specified by a pair $(p,q)$ of coprime integers: we will use the convention that
the $(p,q)$-torus knot, $T(p,q)$, winds $p$ times around a meridional curve of the torus and $q$ times in the longitudinal direction.
See, for example, \cite{adams}.
In fact, $T(p, q)$ is equivalent to $T(q, p)$ and to $T(-p, -q)$. So we will
always assume that $|p| > q > 0$; in addition we will assume $q > 1$ since we are interested in non-trivial torus knots.
For $j\geq 2$, $T(jp, jq)$, with $|p| > q > 1$ and $\gcd(p,q) = 1$, will be a $j$-component link where each component is
a $T(p,q)$ torus knot. We will only consider torus links with non-trivial components.
\begin{thm}\label{thm:torus} %
If $\Lambda$ is a $j$-component Legendrian link that is topologically the $(jp, jq)$-torus link, $|p| > q > 1$,
then $\sigma_0(\Lambda) = (|jp|-1)(jq-1)$.
\end{thm}
\begin{proof} First consider the case where $\Lambda$ is topologically a positive torus knot, $T(p, q)$ with $p > 0$. As shown by
Etnyre and Honda, \cite{etnyre-honda:knots},
the list of different Legendrian representations of a positive torus knot can be represented as a ``tree" in parallel
to the tree of unknots shown in Figure~\ref{fig:unknots}. Namely, for
fixed $p > q>1$, there is a unique Legendrian knot $\Lambda^+$ that is topologically $T(p,q)$ with
maximal Thurston-Bennequin invariant $tb(\Lambda^+) = pq-p-q$ and $r (\Lambda^+)= 0$; any Legendrian knot $\Lambda$ that is topologically
$T(p,q)$ is obtained
by stabilizations of $\Lambda^+$.
By Remark~\ref{rem:basics}, it suffices
to show that if $\Lambda^+$ is a Legendrian knot that is topologically
$T(p,q)$ and has maximal Thurston-Bennequin invariant, then $\sigma_0(\Lambda^+) = (p-1)(q-1)$.
By Lemma~\ref{lem:tb-lb},
$$tb(\Lambda^+) + |r(\Lambda^+)| + 1 = (p-1)(q-1) \leq \sigma_0(\Lambda^+).
$$
In fact, it is possible to unknot with $(p-1)(q-1)$ surgeries. Starting from the leftmost string of crossings,
do $(q-1)$ successive surgeries as illustrated for the $(5,3)$-torus knot in Figure~\ref{fig:5-3-torus-unknot}. In general,
this takes the $(p,q)$-torus knot to the $(p-1,q)$-torus link. Repeating this $(p-1)$ times results in the
$(1,q)$-torus knot, which is an unknot.
\footnote{By Corollary~\ref{cor:realize-g4}, we can now reprove the Milnor conjecture as mentioned in Corollary~\ref{cor:Milnor}.}
The above proof easily generalizes to positive torus links with non-trivial components. Dalton showed in \cite{dalton} that
there is a unique Legendrian link $\Lambda^+$ that is topologically $T(jp,jq)$ with
maximal Thurston-Bennequin invariant $tb(\Lambda^+) = (jp)(jq)-jp-jq$. The construction of this
one exactly parallels the construction in Figure~\ref{fig:5-3-torus-unknot}, and so the same pattern of $(jp-1)(jq-1)$ surgeries
will produce a Legendrian unknot.
\begin{figure}
\centerline{\includegraphics[height=1.5in]{5-3-torus-unknot}}
\caption{The Legendrian $(5,3)$-torus knot with maximal tb invariant. The general,
positive Legendrian $(p,q)$-torus knot with maximal Thurston-Bennequin invariant is constructed using
$q$ strands and a length $p$ string of crossings. Shown are the $(p-1)(q-1)$ oriented Legendrian surgeries that unknot the Legendrian positive
$(p,q)$-torus knot with maximal tb.}
\label{fig:5-3-torus-unknot}
\end{figure}
Next consider the case where $\Lambda$ is topologically a negative torus knot, $T(p, q)$ with $p < 0$. In this case,
Etnyre and Honda have shown that
the list of different Legendrian representations of negative torus knots, $T(p, q)$ for $p < 0$ and $|p| > q > 1$,
can be represented as a ``mountain range" where the number of representatives with maximal Thurston-Bennequin
invariant depends on the divisibility of $p$ by $q$. Namely, if
$|p| = m q + e$, $0 < e < q$, then
there will be $2m$
Legendrian representatives of $T(p, q)$
with maximal Thurston-Bennequin
invariant $tb = pq < 0$. Half of these different representatives with maximal
Thurston-Bennequin invariant are obtained by writing $m = 1 + n_1 + n_2$, where $ n_1, n_2 \geq 0$,
and then $\Lambda_{(n_1, n_2)}^+$ is constructed using the form shown in Figure~\ref{fig:neg-torus-unknot} with $n_1$ and $n_2$ copies of the tangle
$B$ inserted as indicated: $r(\Lambda_{(n_1, n_2)}^+) = q (n_2 - n_1) + e$.
The other $m$ Legendrian versions of $T(p,q)$ with maximal Thurston-Bennequin invariant
are obtained by reversing the orientation. For negative torus knots, Lemma~\ref{lem:tb-lb} will not be a useful lower bound.
However, since the calculation of the $4$-ball genus is the same for both the knot and its mirror,
the calculations in the positive torus knot case
and Corollary~\ref{cor:realize-g4}, (or \cite{kronheimer-mrowka}), show that for a negative torus knot $T(p,q)$, $2g_4(T(p, q)) = (|p| - 1)(q-1)$. Thus, by Lemma~\ref{lem:g4-lb},
$$(|p| - 1)(q-1) \leq \sigma_0(\Lambda).$$ In fact, it is possible to arrive at an unknot with $(|p| - 1)(q-1)$ surgeries.
Figure~\ref{fig:neg-torus-unknot} shows the claimed surgeries: a surgery is done to the right of all crossings
in the $L$, $R$, and $B$ regions (contributing $\frac12 q(q-1) + \frac12 q(q-1) + (n_1 + n_2)q(q-1)$ surgeries),
and between each ``$Z$" in the $e$ string one does $(q-1)$ successive surgeries (contributing $(e-1)(q-1)$ surgeries). Thus the total
number of surgeries is:
$$(1 + n_1 + n_2) q (q-1) + (e-1)(q-1) = (mq + e - 1)(q-1) = (|p| - 1)(q-1).$$
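This count is easy to confirm symbolically; the following sympy check (ours, not part of the argument) substitutes $m = 1 + n_1 + n_2$ and $|p| = mq + e$.
\begin{verbatim}
# Symbolic sanity check of the surgery count for negative torus knots.
from sympy import symbols, expand

q, e, n1, n2 = symbols('q e n1 n2')
m = 1 + n1 + n2                        # m = 1 + n_1 + n_2
p_abs = m * q + e                      # |p| = mq + e

total = (1 + n1 + n2) * q * (q - 1) + (e - 1) * (q - 1)
print(expand(total - (p_abs - 1) * (q - 1)) == 0)   # True
\end{verbatim}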
\begin{figure}
\centerline{\includegraphics[height=2in]{neg-torus-unknot}}
\caption{The $(|p|-1)(q-1)$ oriented Legendrian surgeries that unknot a Legendrian negative
$(p,q)$-torus knot with maximal Thurston-Bennequin invariant.}
\label{fig:neg-torus-unknot}
\end{figure}
The above proof easily generalizes to negative torus links. It follows from \cite{nakamura} that
$2 g_4(T(jp, jq)) + (j-1) = (j|p|-1)(jq-1)$; see Remark~\ref{rem:g4-torus-links}. Dalton showed in \cite{dalton} that
there are $2m$ Legendrian links $\Lambda^+$ that are topologically $T(jp,jq)$ with
maximal Thurston-Bennequin invariant, and all Legendrians that are topologically $T(jp, jq)$ are obtained by
stabilizations of one of these. Each of these with maximal Thurston-Bennequin invariant
can be constructed as in Figure~\ref{fig:neg-torus-unknot},
and so the same pattern of $(j|p|-1)(jq-1)$ surgeries
will produce a Legendrian unknot.
\end{proof}
\begin{rem} \label{rem:g4-torus-links} Nakamura's formula, \cite{nakamura}, for the smooth $4$-ball genus of a $j$-component positive link $L$ is that
$$2 g_4(L) = 2 - j - s(D) + c(D),$$
where $s(D)$ is the number of Seifert circles and $c(D)$ is the number of crossings in a non-split positive diagram $D$ for $L$.
It is straightforward to see that when $L$ is the positive torus link $T(jp, jq)$, using the diagram $D$ corresponding to Figure~\ref{fig:5-3-torus-unknot},
$s(D) = jq$ and $c(D) = jp(jq-1)$. So,
$$2 g_4(T(jp, jq)) = 2 - j - jq + jp(jq-1) = (1-j) + (jp-1) (jq-1).$$
Thus for any Legendrian link $\Lambda$ that is topologically $T(jp, jq)$, for either $p$ positive or negative,
$$2g_4(T(jp, jq)) + (j-1) = \sigma_0(\Lambda).$$
\end{rem}
\end{subsection}
\begin{subsection}{Positive, Legendrian Rational Links}
\begin{defn}\label{defn:pos-rat} Given a vector of integers $(c_n, \dots, c_2, c_1)$, where $c_n \geq 2$,
and $n \geq 2$ implies $c_i \geq 1$ for $i = 1, \dots, n-1$,
we construct the {\bf rational Legendrian link}
$\Lambda(c_n, \dots, c_2, c_1)$ to be the Legendrian numerator closure of the Legendrian tangle
$(c_n, \dots, c_2, c_1)$ as demonstrated in Figure~\ref{fig:rat-gen};
see also \cite{adams}, \cite{traynor:strat}, \cite{schneider}. The rational Legendrian link $\Lambda(c_n, \dots, c_2, c_1)$ is {\bf positive} if all crossings are positive.
\end{defn}
This Legendrian link $\Lambda(c_n, \dots, c_1)$ is topologically the numerator closure of the rational tangle associated to the rational number $q$
with continued fraction expansion $q = c_1 + 1/(c_2 + 1/(c_3 + \dots ))$; \cite{conway}.
\begin{figure}
\centerline{\includegraphics[height=2in]{rat-gen}}
\caption{(a) The general form of $\Lambda(c_0)$, (b) the general form of $\Lambda(c_1, c_0)$, (c) the general form of $\Lambda(c_2, c_1, c_0)$, and
(d) the general form of $\Lambda(c_3, c_2, c_1, c_0)$.}
\label{fig:rat-gen}
\end{figure}
The ``even" entries $c_2, c_4, \dots$ of the vector $(c_n, \dots, c_2, c_1)$ denote the strings of vertical crossings. It is straightforward
to verify that the parity of these
vertical entries determine when $\Lambda(c_n, \dots, c_1)$ is a positive link:
\begin{lem} \label{lem:pos}
\begin{enumerate}
\item When $n$ is odd, there exists an orientation on the components of $\Lambda(c_n, \dots, c_1)$ so that it
is a positive link if and only if
$c_i$ is even, for all $i$ even.
Moreover,
$\Lambda(c_n, \dots, c_1)$ is a knot when $\sum_{i \text{ odd}} c_i$ is odd.
\item When $n$ is even, there exists an orientation on the components of $\Lambda(c_n, \dots, c_1)$ so it
is a positive link if and only if $c_n$ is odd and $c_{n-2}, c_{n-4}, \dots, c_2$ are all even.
Moreover,
$\Lambda(c_n, \dots, c_1)$ is a knot when $\sum_{i \text{ odd}} c_i$ is even.
\end{enumerate}
\end{lem}
The Legendrian surgery unknotting number of a positive, Legendrian rational link has a convenient formula in terms of the
``odd" entries, which correspond to the strings of horizontal crossings.
There will be some differences in the following formulas depending on whether $\Lambda$ is constructed from
an odd or an even length vector. Define
$$p(n) =
\begin{cases}
1, & n \text{ odd} \\
0, & n \text{ even};
\end{cases}
$$
$p(n)$ measures the parity of the ``length" of the vector $(c_n, \dots, c_1)$.
\begin{thm} \label{thm:pos-rat} If $\Lambda(c_n, \dots, c_2, c_1)$ is a positive, Legendrian rational link, then
$$\sigma_0(\Lambda(c_n, \dots, c_2, c_1)) = \sum_{i \text{ odd }} c_{i} - p(n).$$
\end{thm}
\begin{proof} This will be proved using the lower bound on $\sigma_0(\Lambda)$ provided by
Lemma~\ref{lem:tb-lb}, and explicit calculations.
We will first show that
$$r(\Lambda(c_n, \dots, c_2, c_1)) = 0, \text{ and }
tb(\Lambda(c_n, \dots, c_2, c_1)) = \sum_{i \text{ odd}} c_{i}- p(n) - 1.$$
It is easy to verify that when all the crossings are positive, the up and down cusps cancel in pairs and thus
the rotation number vanishes. To calculate $tb(\Lambda(c_n, \dots, c_1))$, notice that when $n$ is odd
the number of right cusps
is $2$ more than the number of vertical crossings, $\sum_{i \text{ even}} c_i$, while when $n$ is even, the number of right cusps
is $1$ more than the number of vertical crossings. Thus:
$$tb(\Lambda(c_n, \dots, c_2, c_1)) = \sum_{i=1}^n c_i - \left( \sum_{i \text{ even}} c_i + 1 + p(n) \right) = \sum_{i \text{ odd}} c_i - 1 - p(n). $$
Thus, by Lemma~\ref{lem:tb-lb},
$$ \sum_{i \text{ odd}} c_{i} - p(n) \leq \sigma_0(\Lambda(c_n, \dots, c_2, c_1)).$$
In fact, it is possible to unknot $\Lambda(c_n, \dots, c_2, c_1)$ by doing
$c_i - 1$ surgeries in each horizontal segment
and $1$ surgery in each vertical segment; Figure~\ref{fig:pos-rat-unknot} illustrates some examples of this.
When $n=1$, there are no vertical segments; for other odd $n$, the number of vertical segments is one less than the number
of horizontal segments, and when $n$ is even,
the number of vertical segments agrees with the number
of horizontal segments. Thus
$$\sigma_0(\Lambda(c_n, \dots, c_1)) \leq \sum_{i \text{ odd}} c_i - p(n),$$ and the desired
calculation of $\sigma_0(\Lambda(c_n, \dots, c_1))$ follows.
\begin{figure}
\centerline{\includegraphics[height=1.4in]{pos-rat-unknot}}
\caption{Two positive, Legendrian rational knots of odd and even lengths. In both cases, it is possible to
unknot by doing $c_i - 1$ surgeries in each horizontal segment ($i$ odd) and $1$ surgery in
each vertical segment.}
\label{fig:pos-rat-unknot}
\end{figure}
\end{proof}
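Since the formula of Theorem~\ref{thm:pos-rat} depends only on the vector $(c_n, \dots, c_1)$, it is easy to tabulate; the following Python sketch (illustrative only) computes $\sigma_0$ for a positive, Legendrian rational link directly from its vector.
\begin{verbatim}
# sigma_0 of a positive, Legendrian rational link Lambda(c_n,...,c_1),
# following Theorem thm:pos-rat; cs lists the entries (c_n, ..., c_1).
def sigma0_rational(cs):
    n = len(cs)
    p = n % 2                           # p(n): 1 if n is odd, 0 if even
    # sum the "odd" entries c_1, c_3, ..., counted from the right
    odd_sum = sum(c for k, c in enumerate(reversed(cs), 1) if k % 2 == 1)
    return odd_sum - p

# Lambda(2, 2, 3) is positive (n = 3 is odd and c_2 = 2 is even), and
# sigma_0 = (3 + 2) - 1 = 4.
print(sigma0_rational([2, 2, 3]))       # -> 4
\end{verbatim}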
\begin{rem} \label{rem:g4-rats} In the above proof, $\sigma_0(\Lambda(c_n, \dots, c_1))$ is obtained by realizing
the lower bound given by the classical Legendrian invariants. Thus, by
Corollary~\ref{cor:realize-g4}, we see that when $\Lambda(c_n, \dots, c_1)$ has an underlying
topological type of the knot $K_\Lambda$,
$\sigma_0(\Lambda(c_n, \dots, c_1)) = 2 g_4(K_\Lambda)$. Moreover, when $\Lambda(c_n, \dots, c_1)$ has an underlying
topological type of a $2$-component link $L_\Lambda$, we can compare $\sigma_0(\Lambda(c_n, \dots, c_1))$
to the smooth $4$-ball genus of $L_\Lambda$ using Nakamura's formula (see Remark~\ref{rem:g4-torus-links}) for the smooth $4$-ball
genus of a positive link.
When $n$ is odd, the number of Seifert circles is $s(D) = 2 + \sum_{i \text{ even}} c_i$, while when
$n$ is even, $s(D) = 1 + \sum_{i \text{ even}} c_i$. Thus we find that for a $2$-component, positive, Legendrian rational link $\Lambda(c_n, \dots, c_1)$,
$$2g_4(L_\Lambda) + 1 = c(D) - s(D) + 1 = \sum_{i \text{ odd}} c_i - p(n) = \sigma_0(\Lambda(c_n, \dots, c_1)).$$
\end{rem}
\end{subsection}
\section{The Surgery Unknotting Number for Small Crossing Knots}
Given the calculations of the previous section, it is natural to
ask Question~\ref{ques:sun-g4} in the Introduction.
To investigate the knot portion of this question, we examined Legendrian representatives of low-crossing knots.
There is not a Legendrian classification of all these knot types, but a conjectured
classification can be found in \cite{chongchitmate-ng}.
In the following, we prove Proposition~\ref{prop:small-cross}, which says that
the surgery unknotting number of the Legendrian agrees with twice the smooth $4$-ball genus
of the underlying smooth knot for all Legendrians that are topologically
a non-slice knot with crossing number at most $7$.
In Section~\ref{sec:families}, Proposition~\ref{prop:small-cross} is verified for all torus and twist knots.
The only non-torus and non-twist
knots with $7$ or fewer crossings are $6_2, m(6_2)$, $6_3 = m(6_3)$, $7_3$, $m(7_3)$, $7_4$, $m(7_4)$, $7_5$, $m(7_5)$,
$7_6$, $m(7_6)$, $7_7$, and $m(7_7)$. The needed calculations fall into three categories as described below.
\begin{ex} For the smooth knots $7_3, m(7_3), 7_4, m(7_4), 7_5$, and $m(7_5)$, the genus, $g_3$, agrees with
the smooth $4$-ball genus $g_4$. \footnote{This is also the situation for the torus and non-slice twist knots studied in Section~\ref{sec:families}.}
In general, we find that for a Legendrian $\Lambda$
whose underlying knot type $K_\Lambda$
satisfies $g_3(K_\Lambda) = g_4(K_\Lambda)$,
it is fairly straightforward to show that
$\sigma_0(\Lambda) = 2 g_4 (K_\Lambda)$. For example, Figure~\ref{fig:g3-g4} shows all
conjectured representatives of $7_3$, $m(7_3)$, $7_4$, $m(7_4)$, $7_5$, and $m(7_5)$ with
maximal Thurston-Bennequin invariant (after perhaps selecting alternate orientations and/or
performing the Legendrian mirror operation). For each of these with maximal Thurston-Bennequin invariant,
it is possible to unknot with $2 g_4 (K_\Lambda)$ surgeries as indicated.
\begin{figure}
\centerline{\includegraphics[height=2.4in]{Pg3-g4}}
\caption{Front projections representing all conjectured Legendrian representatives of $7_3$, $m(7_3)$, $7_4$, and $m(7_4)$ with maximal Thurston-Bennequin invariant. For all of these knot types, $g_3(K_\Lambda) = g_4(K_\Lambda)$; the indicated surgery points realize $\sigma_0(\Lambda) = 2 g_4(K_\Lambda)$.}
\label{fig:g3-g4}
\end{figure}
\end{ex}
In general, we found that for a Legendrian $\Lambda$
whose underlying knot type $K_\Lambda$
satisfies $g_4(K_\Lambda) < g_3(K_\Lambda)$, it is more difficult to calculate
$\sigma_0(\Lambda)$. To do calculations for our remaining cases, we made use of the well-known fact that the unknotting number of a knot, $u(K)$, gives
an upper bound for the smooth $4$-ball genus:
\begin{equation} \label{ineq:g4-u}
g_4(K) \leq u(K).
\end{equation}
Figure~\ref{fig:g4-u} demonstrates two topological surgeries that produce a crossing change; an argument as in the proof of
Lemma~\ref{lem:g4-lb} then proves Inequality~\ref{ineq:g4-u}. Notice that the topological Reidemeister moves used in the
equivalence are not Legendrian Reidemeister moves. However, near a negative crossing, it is possible to
``Legendrify" this construction:
\begin{figure}
\centerline{\includegraphics[height=.5in]{g4-u}}
\caption{ A sequence of two topological surgeries in a neighborhood of a negative crossing that
topologically change the crossing. An analogous picture shows that a positive crossing can be changed into
a negative crossing by two topological surgeries.}
\label{fig:g4-u}
\end{figure}
\begin{lem} \label{lem:neg-cross-unknot} If the Legendrian knot $\Lambda$ has a front projection that can
be topologically unknotted by changing a negative crossing, then
$$\sigma_0(\Lambda) \leq 2.$$
\end{lem}
\begin{proof} Figure~\ref{fig:neg-unknot} demonstrates how two surgeries can locally produce a topological crossing
change.
\begin{figure}
\centerline{\includegraphics[height=1.4in]{neg-unknot}}
\caption{ A sequence of two oriented surgeries in a neighborhood of a negative crossing that
topologically change the crossing.}
\label{fig:neg-unknot}
\end{figure}
\end{proof}
\begin{ex} Using Lemma~\ref{lem:neg-cross-unknot}, it is possible to show that for any conjectured Legendrian representative $\Lambda$
of $6_2$, $6_3$, $7_6$, or $7_7$, $\sigma_0(\Lambda) = 2 g_4(K_\Lambda)$. Figure~\ref{fig:neg-pt-unknot} shows the conjectured
Legendrian representatives of these knot types with maximal Thurston-Bennequin invariant
(after perhaps selecting alternate orientations and/or
performing a mirror operation) and the negative crossing that when topologically changed produces an unknot.
\begin{figure}
\centerline{\includegraphics[height=2in]{neg-pt-unknot}}
\caption{ Front projections representing all conjectured Legendrian representatives of $6_2$, $6_3$, $7_6$, and $7_7$ with maximal Thurston-Bennequin invariant. These projections can be topologically unknotted at the indicated negative crossing.}
\label{fig:neg-pt-unknot}
\end{figure}
\end{ex}
We were not able to find front projections of the conjectured maximal Thurston-Bennequin representatives
of $m(6_2)$, $m(7_6)$, or $m(7_7)$ that could be topologically unknotted by changing a negative crossing;
in fact, by \cite{signed-unknot}, it is {\it not} possible to do this even in the smooth setting.
Luckily, sometimes we can topologically change a positive crossing when it has a special form.
\begin{defn} A positive crossing is of {\bf S form}, {\bf Z form}, or {\bf hooked-X form} if it takes the form
as shown in Figure~\ref{fig:pos-S-Z}.
\end{defn}
\begin{figure}
\centerline{\includegraphics[height=.8in]{pos-S-Z-X}}
\caption{A positive crossing of (a) S form, (b) Z form, and (c) Hooked-X form. Reversing the orientations
on both strands keeps the respective forms. Also reflecting the planar figure in (c) about a horizontal line produces
another Hooked-X form.}
\label{fig:pos-S-Z}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3in]{pos-cross-unknot}}
\caption{ A positive crossing of S form can be transformed into a negative crossing
with $2$ surgeries. Similarly, a positive crossing of $Z$ form can be transformed into a negative
crossing with $2$ surgeries.}
\label{fig:pos-cross-unknot}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=5in]{hook-X}}
\caption{A positive crossing of hooked-X form can be transformed into a negative crossing with
$2$ Legendrian surgeries. }
\label{fig:hook-X}
\end{figure}
\begin{lem}\label{lem:pos-cross-unknot} If $\Lambda$ is a non-trivial Legendrian knot that has a projection that can be topologically unknotted by
changing a positive crossing in S, Z, or hooked-X form, then
$$\sigma_0(\Lambda) \leq 2.$$
\end{lem}
\begin{ex} Using Lemma~\ref{lem:pos-cross-unknot}, it is possible to show that for any conjectured Legendrian representative $\Lambda$
of $m(6_2)$, $m(7_6)$, or $m(7_7)$, $\sigma_0(\Lambda) = 2 g_4(K_\Lambda)$. Figure~\ref{fig:pos-pt-unknot} shows the conjectured
Legendrian representatives of these knot types with maximal Thurston-Bennequin invariant (after perhaps selecting alternate orientations and/or
performing a mirror operation). These projections differ from those in \cite{chongchitmate-ng} by Legendrian Reidemeister moves of type II and III.
The black dot indicates a positive
crossing that when topologically changed produces an unknot.
\begin{figure}
\centerline{\includegraphics[width=5in]{pos-pt-unknot}}
\caption{Front projections representing all conjectured Legendrian representatives of $m(6_2)$, $m(7_6)$ and $m(7_7)$ with maximal Thurston-Bennequin
invariant.
Each of these can be topologically unknotted by changing the indicated positive crossing in S or Hooked-X form. }
\label{fig:pos-pt-unknot}
\end{figure}
\end{ex}
The proofs of Lemmas~\ref{lem:neg-cross-unknot} and~\ref{lem:pos-cross-unknot} in fact show
that if the Legendrian knot $\Lambda$ has a front projection that can
be topologically unknotted by changing $\nu$ negative crossings and $\rho$ crossings
in S, Z, or hooked-X form, then
$\sigma_0(\Lambda) \leq 2\nu + 2\rho.$
However, for our calculations we did not need this more general form.
\bibliographystyle{plain}
| {
"timestamp": "2012-06-28T02:03:27",
"yymm": "1206",
"arxiv_id": "1206.6249",
"language": "en",
"url": "https://arxiv.org/abs/1206.6249",
"abstract": "The surgery unknotting number of a Legendrian link is defined as the minimal number of particular oriented surgeries that are required to convert the link into a Legendrian unknot. Lower bounds for the surgery unknotting number are given in terms of classical invariants of the Legendrian link. The surgery unknotting number is calculated for every Legendrian link that is topologically a twist knot or a torus link and for every positive, Legendrian rational link. In addition, the surgery unknotting number is calculated for every Legendrian knot in the Legendrian knot atlas of Chongchitmate and Ng whose underlying smooth knot has crossing number 7 or less. In all these calculations, as long as the Legendrian link of $j$ components is not topologically a slice knot, its surgery unknotting number is equal to the sum of $(j-1)$ and twice the smooth 4-ball genus of the underlying smooth link.",
"subjects": "Symplectic Geometry (math.SG); Geometric Topology (math.GT)",
"title": "The Surgery Unknotting Number of Legendrian Links",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822876992225169,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7089594676643625
} |
https://arxiv.org/abs/1310.1442 | Binary Cyclic Codes from Explicit Polynomials over $\gf(2^m)$ | Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, monomials and trinomials over finite fields with even characteristic are employed to construct a number of families of binary cyclic codes. Lower bounds on the minimum weight of some families of the cyclic codes are developed. The minimum weights of other families of the codes constructed in this paper are determined. The dimensions of the codes are flexible. Some of the codes presented in this paper are optimal or almost optimal in the sense that they meet some bounds on linear codes. Open problems regarding binary cyclic codes from monomials and trinomials are also presented. | \section*{\bibname\markright{\MakeUppercase{\bibname}}}}
\usepackage{latexsym,amssymb,amsmath,amsthm,amssymb}
\usepackage{color}
\newcommand{{\rm Re}}{{\rm Re}}
\newcommand{{\rm Rank}}{{\rm Rank}}
\newcommand{{\rm ord}}{{\rm ord}}
\newcommand{{\rm rord}}{{\rm rord}}
\newcommand{{\bf Z}}{{\bf Z}}
\newcommand{{\cal S}}{{\cal S}}
\newcommand{{\rm Tr}}{{\rm Tr}}
\newcommand{{\rm GF}}{{\rm GF}}
\newcommand{{\rm PG}}{{\rm PG}}
\newcommand{{\rm Range}}{{\rm Range}}
\newcommand{{\cal C}}{{\cal C}}
\newcommand{{\cal D}}{{\cal D}}
\newcommand{{\bf c}}{{\bf c}}
\newcommand{{\bf g}}{{\bf g}}
\newcommand{{\bf u}}{{\bf u}}
\newcommand{{\bf 0}}{{\bf 0}}
\newcommand{\mathbb {Z}}{\mathbb {Z}}
\newcommand{\mathbb {N}}{\mathbb {N}}
\newcommand{\mathbb {M}}{\mathbb {M}}
\newcommand{{\mathcal C}}{{\mathcal C}}
\newcommand{{\mathbb L}}{{\mathbb L}}
\newcommand{{\rm Image}}{{\rm Image}}
\newcommand{{\mathbb{F}}}{{\mathbb{F}}}
\newcommand{{\mathrm{wt}}}{{\mathrm{wt}}}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{rp}{Research Problem}
\newtheorem{open}{Open Problem}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\journal{Discrete Mathematics}
\begin{document}
\begin{frontmatter}
\title{Binary Cyclic Codes from Explicit Polynomials over ${\rm GF}(2^m)$ \tnoteref{fn1}}
\tnotetext[fn1]{C. Ding's research was supported by
The Hong Kong Research Grants Council, Proj. No. 601013. Z. Zhou's research was supported by
the Natural Science Foundation of China, Proj. No. 61201243, and also The Hong Kong Research
Grants Council, Proj. No. 601013.}
\author[cding]{Cunsheng Ding}
\ead{cding@ust.hk}
\author[zcz]{Zhengchun Zhou}
\ead{zzc@home.swjtu.edu.cn}
\address[cding]{Department of Computer Science
and Engineering, The Hong Kong University of Science and Technology,
Clear Water Bay, Kowloon, Hong Kong, China}
\address[zcz]{School of Mathematics, Southwest Jiaotong University, Chengdu, 610031, China}
\begin{abstract}
Cyclic codes are a subclass of linear codes and have applications in consumer electronics,
data storage systems, and communication systems as they have efficient encoding and
decoding algorithms. In this paper, monomials and trinomials over finite fields with
even characteristic are employed
to construct a number of families of binary cyclic codes. Lower bounds on the minimum
weight of some families of the cyclic codes are developed. The minimum weights of other
families of the codes constructed in this paper are determined. The dimensions of the
codes are flexible. Some of the codes presented in this paper are optimal or almost optimal
in the sense that they meet some bounds on linear codes. Open problems regarding binary cyclic
codes from monomials and trinomials are also presented.
\end{abstract}
\begin{keyword}
Polynomials \sep permutation polynomials \sep cyclic codes \sep linear span \sep sequences.
\MSC 94B15\sep 11T71
\end{keyword}
\end{frontmatter}
\section{Introduction}
Let $q$ be a power of a prime $p$.
A linear $[n,k, d]$ code over ${\rm GF}(q)$ is a $k$-dimensional subspace of ${\rm GF}(q)^n$
with minimum nonzero (Hamming) weight $d$.
A linear $[n,k]$ code ${\cal C}$ over the finite field ${\rm GF}(q)$ is called {\em cyclic} if
$(c_0,c_1, \cdots, c_{n-1}) \in {\cal C}$ implies $(c_{n-1}, c_0, c_1, \cdots, c_{n-2})
\in {\cal C}$.
By identifying any vector $(c_0,c_1, \cdots, c_{n-1}) \in {\rm GF}(q)^n$
with
$
\sum_{i=0}^{n-1} c_ix^i \in {\rm GF}(q)[x]/(x^n-1),
$
any code ${\cal C}$ of length $n$ over ${\rm GF}(q)$ corresponds to a subset of ${\rm GF}(q)[x]/(x^n-1)$.
The linear code ${\cal C}$ is cyclic if and only if the corresponding subset in ${\rm GF}(q)[x]/(x^n-1)$
is an ideal of the ring ${\rm GF}(q)[x]/(x^n-1)$.
It is well known that every ideal of ${\rm GF}(q)[x]/(x^n-1)$ is principal. Let ${\cal C}=(g(x))$ be a
cyclic code, where $g(x)$ has the smallest degree and constant term 1. Then $g(x)$ is
called the {\em generator polynomial} and
$h(x)=(x^n-1)/g(x)$ is referred to as the {\em parity-check} polynomial of
${\cal C}$.
The error correcting capability of cyclic codes may not be as good as that of some other linear
codes in general. However, cyclic codes have wide applications in storage and communication
systems because they have efficient encoding and decoding algorithms
\cite{Chie,Forn,Pran}.
For example, Reed-Solomon codes have found important applications from deep-space
communication to consumer electronics. They are prominently used in consumer
electronics such as CDs, DVDs, Blu-ray Discs, in data transmission technologies
such as DSL \& WiMAX, in broadcast systems such as DVB and ATSC, and in computer
applications such as RAID 6 systems.
Cyclic codes have been studied for decades and a lot of progress has been made
(see for example, \cite{CLP,CDY05,Dinh,DL06,Feng,GO08,HDLA,HT09,HPbook,JLX11,LF08,LintW,MK93,Mois,PM09,RP10,ZHJ}). The total number of cyclic codes
over ${\rm GF}(q)$ and their constructions are closely related to $q$-cyclotomic cosets
modulo $n$, and thus to many topics in number theory. One way of
constructing cyclic codes over ${\rm GF}(q)$ with length $n$ is to use the generator polynomial
\begin{eqnarray}\label{eqn-defseqcode}
\frac{x^n-1}{\gcd(S^n(x), x^n-1)}
\end{eqnarray}
where
$$
S^n(x)=\sum_{i=0}^{n-1} s_i x^i \in {\rm GF}(q)[x]
$$
and $s^{\infty}=(s_i)_{i=0}^{\infty}$ is a sequence of period $n$ over ${\rm GF}(q)$.
Throughout this paper, we call the cyclic code ${\cal C}_s$ with the generator polynomial
of (\ref{eqn-defseqcode}) the {\em code defined by the sequence} $s^{\infty}$,
and the sequence $s^{\infty}$ the {\em defining sequence} of the cyclic code ${\cal C}_s$.
One basic question is whether good cyclic codes can be constructed with
this approach. It turns out that the code ${\cal C}_s$ could
be an optimal or almost optimal linear code if the sequence $s^\infty$ is properly
designed \cite{Ding121}.
In this paper, a number of types of monomials and trinomials over ${\rm GF}(2^m)$ will be
employed to construct a number of classes of binary cyclic codes. Lower bounds on
the minimum weight of some classes of the cyclic codes are developed. The minimum
weights of some other classes of the codes constructed in this paper are determined.
The dimensions of the codes
of this paper are flexible. Some of the codes obtained in this paper are optimal
or almost optimal as they meet certain bounds on linear codes. Several open problems
regarding cyclic codes from monomials and trinomials are also presented in this
paper.
The first motivation of this study is that some of the codes constructed in this paper
could be optimal or almost optimal. The second motivation is the simplicity of the
constructions of the cyclic codes that may lead to efficient encoding and decoding
algorithms.
\section{Preliminaries}
In this section, we present basic notations and results of $q$-cyclotomic cosets,
highly nonlinear functions, and sequences that will be employed in subsequent sections.
\subsection{Some notations fixed throughout this paper}\label{sec-notations}
Throughout this paper, we adopt the following notations unless otherwise stated:
\begin{itemize}
\item $q=2$, $m$ is a positive integer, $r=q^m$, and $n=r-1$.
\item $\mathbb {Z}_n=\{0,1,2,\cdots, n-1\}$ associated with the integer addition modulo $n$ and
integer multiplication modulo $n$ operations.
\item $\alpha$ is a generator of ${\rm GF}(r)^*$, and
$m_a(x)$ is the minimal polynomial of $a \in {\rm GF}(r)$ over ${\rm GF}(q)$.
\item $\mathbb {N}_q$ is the function defined by $\mathbb {N}_q(i) =0$ if $i \equiv 0 \pmod{q}$ and $\mathbb {N}_q(i) =1$ otherwise, where $i$
is any nonnegative integer.
\item ${\rm Tr}(x)$ is the trace function from ${\rm GF}(r)$ to ${\rm GF}(q)$.
\item By the Database we mean the collection of the tables of best linear codes known maintained by
Markus Grassl at http://www.codetables.de/.
\end{itemize}
\subsection{The linear span and minimal polynomial of periodic sequences}\label{sec-sequences}
Let $s^\infty=(s_i)_{i=0}^{\infty}$ be a sequence of period $L$ over ${\rm GF}(q)$.
The polynomial $c(x)= \sum_{i=0}^{\ell} c_ix^i$ over ${\rm GF}(q)$,
where $c_0=1$, is called a {\em characteristic polynomial} of $s^\infty$ if
\begin{eqnarray*}
-c_0s_i=c_1s_{i-1}+c_2s_{i-2}+\cdots +c_{\ell}s_{i-\ell} \mbox{ for all } i \ge \ell.
\end{eqnarray*}
The characteristic polynomial with the smallest degree is called the {\em minimal
polynomial} of $s^\infty$. The degree of the minimal polynomial is referred to as
the {\em linear span} or {\em linear complexity} of $s^\infty$.
Since we require that the constant term of any characteristic polynomial
be 1, the minimal polynomial of any periodic sequence $s^{\infty}$ must
be unique. In addition, any characteristic polynomial must be a multiple
of the minimal polynomial.
For periodic sequences, there are a few ways to determine their linear
span and minimal polynomials. One of them is given in the following
lemma \cite{LN97}.
\begin{lemma}\label{lem-ls1}
Let $s^{\infty}$ be a sequence of period $L$ over ${\rm GF}(q)$.
Define
$
S^{L}(x)= \sum_{i=0}^{L-1} s_i x^i \in {\rm GF}(q)[x].
$
Then the minimal polynomial $\mathbb {M}_s(x)$ of $s^{\infty}$ is given by
\begin{eqnarray}\label{eqn-base1}
\frac{x^{L}-1}{\gcd(x^{L}-1, S^{L}(x))}
\end{eqnarray}
and the linear span ${\mathbb L}_s$ of $s^{\infty}$ is given by
$
L-\deg(\gcd(x^{L}-1, S^{L}(x))).
$
\end{lemma}
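Lemma \ref{lem-ls1} is directly computable. In the following self-contained Python sketch (illustrative only), polynomials over ${\rm GF}(2)$ are encoded as integer bitmasks, with bit $i$ holding the coefficient of $x^i$.
\begin{verbatim}
# Linear span of a binary sequence of period L via Lemma lem-ls1:
# L_s = L - deg(gcd(x^L - 1, S^L(x))) over GF(2).
def deg(a):
    return a.bit_length() - 1

def polymod(a, b):                  # remainder of a divided by b
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def polygcd(a, b):
    while b:
        a, b = b, polymod(a, b)
    return a

def linear_span(s):                 # s: one period as a list of 0/1
    L = len(s)
    SL = sum(bit << i for i, bit in enumerate(s))   # S^L(x)
    return L - deg(polygcd((1 << L) | 1, SL))       # x^L - 1 = x^L + 1

# A binary m-sequence of period 7 has linear span 3.
print(linear_span([1, 0, 0, 1, 0, 1, 1]))           # -> 3
\end{verbatim}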
The other one is given in the following lemma \cite{Antweiler}.
\begin{lemma} \label{lem-ls2}
Any sequence $s^{\infty}$ over ${\rm GF}(q)$ of period $q^m-1$ has a unique expansion of the form
\begin{equation*}
s_t=\sum_{i=0}^{q^m-2}c_{i}\alpha^{it}, \mbox{ for all } t\ge 0,
\end{equation*}
where $\alpha$ is a generator of ${\rm GF}(q^m)^*$ and $c_i \in {\rm GF}(q^m)$.
Let the index set be $I=\{i \mid c_i\neq 0\}$. Then the minimal polynomial $\mathbb {M}_s(x)$ of $s^{\infty}$ is
$\mathbb {M}_s(x)=\prod_{i\in I}(1-\alpha^i x),$
and the linear span of $s^{\infty}$ is $|I|$.
\end{lemma}
It should be noticed that in some references the reciprocal of $\mathbb {M}_s(x)$ is called the minimal polynomial
of the sequence $s^\infty$. So Lemma \ref{lem-ls2} is a modified version of the original one in \cite{Antweiler}.
\subsection{The $2$-cyclotomic cosets modulo $2^m-1$}\label{sec-cpsets}
Let $n=2^m-1$. The 2-cyclotomic coset containing $j$ modulo $n$ is defined by
$$
C_j=\{j, 2j, 2^2j, \cdots, 2^{\ell_j-1}j\} \subset \mathbb {Z}_n,
$$
where $\ell_j$ is the smallest positive integer such that $2^{\ell_j}j \equiv j \pmod{n}$,
and is called the size of $C_j$. It is known that $\ell_j$ divides $m$. The smallest integer
in $C_j$ is called the {\em coset leader} of $C_j$. Let $\Gamma$ denote the set of all
coset leaders and $\Gamma^*=\Gamma\setminus \{0\}$. By definition, we have
$$
\bigcup_{j \in \Gamma} C_j =\mathbb {Z}_n:=\{0,1,2, \cdots, n-1\}
$$
and
$$
C_i \bigcap C_j =\emptyset ~\textrm{for~any}~i\neq j\in \Gamma.
$$
For any integer $j$ with $0\leq j\leq 2^m-1$, the { 2-weight} of $j$, denoted by ${\mathrm{wt}}(j)$, is defined to be the number of
nonzero coefficients in its 2-adic expansion:
\begin{align*}
j=j_0+j_1\cdot 2+\cdots+j_{m-1}\cdot 2^{m-1},~~j_i\in \{0,1\}.
\end{align*}
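For concreteness, the following Python sketch (illustrative only) enumerates the $2$-cyclotomic cosets modulo $2^m-1$ together with their leaders, sizes, and the $2$-weights of the leaders.
\begin{verbatim}
# 2-cyclotomic cosets modulo n = 2^m - 1: leaders, sizes, 2-weights.
def cyclotomic_cosets(m):
    n = 2 ** m - 1
    seen, cosets = set(), []
    for j in range(n):
        if j in seen:
            continue
        coset, x = [], j
        while x not in coset:       # orbit of j under doubling mod n
            coset.append(x)
            x = (2 * x) % n
        seen.update(coset)
        cosets.append((min(coset), len(coset)))
    return cosets

# For m = 4 this prints leaders 0, 1, 3, 5, 7 with sizes 1, 4, 4, 2, 4;
# note that every size divides m.
for leader, size in cyclotomic_cosets(4):
    print(leader, size, bin(leader).count("1"))   # leader, l_j, wt
\end{verbatim}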
The following lemmas will be useful in the sequel.
\begin{lemma}\cite{SiDing}\label{Lemma_for_coset1}
For any coset leader $j \in \Gamma^*$, $j$ is odd and $1\leq j<2^{m-1}$.
\end{lemma}
\begin{lemma}\label{lemma_odd_number}
For any $j\in \Gamma^*$ with $\ell_j=m$, the numbers of odd and even integers in the 2-cyclotomic coset $C_j$
are equal to ${\mathrm{wt}}(j)$ and $m-{\mathrm{wt}}(j)$, respectively.
\end{lemma}
\begin{proof}
By Lemma \ref{Lemma_for_coset1}, $j$ is odd.
Then we can assume that $j=1+2^{i_1}+2^{i_2}+\cdots+2^{i_{k-1}}$ where $k={\mathrm{wt}}(j)$ and
$1\leq i_1<i_2<\cdots<i_{k-1}\leq m-1$.
It is easy to check that the odd integers in $C_j$ are given by
\begin{align}\label{eqn_odd_integers}
j,~ j2^{m-i_{k-1}} \bmod{n}, ~j2^{m-i_{k-2}} \bmod{n}, \cdots, ~j2^{m-i_1} \bmod{n}
\end{align}
which are pairwise distinct due to $\ell_j=m$. Thus the number of odd integers in the 2-cyclotomic coset $C_j$ is equal to $k$ and the
number of even ones is equal to $m-k$.
\end{proof}
\begin{lemma}\label{Lemma_equivalent}
All integers $1\leq j\leq 2^m-2$ with ${\mathrm{wt}}(j)=m-1$ are in the same $2$-cyclotomic coset.
\end{lemma}
\begin{proof}
It is clear that the number of integers $1\leq j\leq 2^m-2$ with ${\mathrm{wt}}(j)=m-1$ is equal to ${m\choose m-1}=m$. On the other hand, all the integers in the same $2$-cyclotomic coset have the same $2$-weight.
The conclusion then follows from the facts that ${\mathrm{wt}}(2^{m-1}-1)=m-1$ and $\ell_{2^{m-1}-1}=m$.
\end{proof}
\begin{lemma}\label{Lemma_coset}
Let $h$ be an integer with $1\leq h\leq \lceil{m+1\over 2}\rceil$ and $\Gamma_1=\{1\leq j\leq 2^h-1: j ~\textrm{is~odd}\}$.
Then for any $j\in \Gamma_1$,
\begin{itemize}
\item $j$ is the coset leader of $C_j$;
\item $\ell_j=m$ except that $\ell_{2^{m/2}+1}=m/2$ for even $m$.
\end{itemize}
\end{lemma}
\begin{proof}
We first prove the first assertion. For any $j\in \Gamma_1$, let ${\mathrm{wt}}(j)=k$ and $j=1+2^{i_1}+2^{i_2}+\cdots+2^{i_{k-1}}$,
where $1\leq i_1<i_2<\cdots<i_{k-1}\leq h-1$. It follows from Lemma \ref{Lemma_for_coset1} that the coset leader must be odd.
By Lemma \ref{lemma_odd_number}, all the odd integers in
the $2$-cyclotomic coset containing $j$ are listed in (\ref{eqn_odd_integers}) in which
the least one is exactly $j$ due to $i_t\leq m/2$ for all $1\leq t\leq k-1$. This finishes the proof of the first
assertion.
We now prove the second one. Note that for each $j\in \Gamma_1$, $\ell_j| m$
and $j$ is divisible by $(2^m-1)/(2^{\ell_j}-1)$. When $m$ is odd, if $\ell_j<m$, then $\ell_j\leq {m/3}$ and thus
$(2^m-1)/(2^{\ell_j}-1)>2^{2m/3}$ which means that $j> 2^{2m/3}$. This is impossible since $j<2^{(m+1)/2}$. Thus
$\ell_j=m$ for odd $m$.
Similarly, when $m$ is even, if $\ell_j< {m}$, then $\ell_j\leq {m/2}$ and
$(2^m-1)/(2^{\ell_j}-1)>2^{m-\ell_j}$. It is easy to check that $j\in \Gamma_1$ is divisible by $(2^m-1)/(2^{\ell_j}-1)$ if and only
if $j=2^{m/2}+1$ and $\ell_j=m/2$.
\end{proof}
\subsection{PN and APN functions}
A polynomial $f(x)$ over ${\rm GF}(r)$ is called {\em almost perfect nonlinear (APN)} if
$$
\max_{a \in {\rm GF}(r)^*} \max_{b \in {\rm GF}(r)} |\{x \in {\rm GF}(r): f(x+a)-f(x)=b\}| =2,
$$
and is referred to as
{\em perfect
nonlinear or planar} if
$$
\max_{a \in {\rm GF}(r)^*} \max_{b \in {\rm GF}(r)} |\{x \in {\rm GF}(r): f(x+a)-f(x)=b\}| =1.
$$
In subsequent sections, we need the notion of PN and APN functions.
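For small fields, these properties can be verified by brute force. The Python sketch below (illustrative only; realizing ${\rm GF}(2^5)$ with the primitive polynomial $x^5+x^2+1$ is our choice for the example) confirms that the Gold monomial $f(x)=x^3$ is APN on ${\rm GF}(2^5)$.
\begin{verbatim}
# Brute-force APN test over GF(2^5); field elements are 5-bit masks
# and multiplication reduces modulo x^5 + x^2 + 1 (mask 0b100101).
M, POLY = 5, 0b100101

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M:                  # reduce once degree reaches M
            a ^= POLY
        b >>= 1
    return r

def f(x):                           # the Gold monomial x^3
    return gf_mul(gf_mul(x, x), x)

best = 0
for a in range(1, 1 << M):          # a ranges over GF(r)^*
    counts = {}
    for x in range(1 << M):
        b = f(x ^ a) ^ f(x)         # addition in GF(2^m) is XOR
        counts[b] = counts.get(b, 0) + 1
    best = max(best, max(counts.values()))
print(best)                         # -> 2, so x^3 is APN on GF(2^5)
\end{verbatim}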
\section{Codes defined by polynomials over finite fields ${\rm GF}(r)$}\label{sec-sequence}
\subsection{A generic construction of cyclic codes with polynomials}\label{sec-gconstruct}
Given any polynomial $f(x)$ over ${\rm GF}(r)$, we define its associated sequence
$s^\infty$ by
\begin{eqnarray}\label{eqn-sequence}
s_i={\rm Tr}\left(f\left(\alpha^i+1\right)\right)
\end{eqnarray}
for all $i \ge 0$.
The objective of this paper is to consider the codes ${\cal C}_s$ defined by some monomials and
trinomials over ${\rm GF}(2^m)$.
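The whole construction fits in a few lines of code. The sketch below (illustrative only; it assumes the primitive polynomial $x^5+x^2+1$ for ${\rm GF}(2^5)$, takes $\alpha$ to be the class of $x$, and uses $f(x)=x^7$) generates the defining sequence $s_i={\rm Tr}(f(\alpha^i+1))$ and reports the length and dimension of ${\cal C}_s$; the output should agree with the $[31,15,8]$ code in the first example of Section~\ref{sec-Welch}.
\begin{verbatim}
# End-to-end sketch of the generic construction over GF(2^5) with
# f(x) = x^7: build s_i = Tr(f(alpha^i + 1)), then apply Lemma lem-ls1.
M, POLY = 5, 0b100101               # assumed primitive: x^5 + x^2 + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def trace(x):                       # Tr(x) = x + x^2 + ... + x^(2^(M-1))
    t = 0
    for _ in range(M):
        t ^= x
        x = gf_mul(x, x)
    return t                        # 0 or 1, an element of GF(2)

def deg(a):
    return a.bit_length() - 1

def polygcd(a, b):                  # gcd of polynomials over GF(2)
    while b:
        while a and deg(a) >= deg(b):
            a ^= b << (deg(a) - deg(b))
        a, b = b, a
    return a

n = 2 ** M - 1
s = [trace(gf_pow(gf_pow(2, i) ^ 1, 7)) for i in range(n)]  # alpha = x
SL = sum(bit << i for i, bit in enumerate(s))               # S^n(x)
Ls = n - deg(polygcd((1 << n) | 1, SL))                     # linear span
print(n, n - Ls)                    # expected: 31 15
\end{verbatim}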
\subsection{How to choose the polynomial $f(x)$}\label{sec-howto}
Regarding the generic construction of Section \ref{sec-gconstruct}, the
following two questions are essential.
\begin{itemize}
\item Is it possible to construct optimal cyclic codes meeting some bound on
linear codes, or at least cyclic codes with good parameters?
\item If the answer to the question above is positive, how should we select the polynomial $f(x)$
over ${\rm GF}(r)$?
\end{itemize}
It will be demonstrated in the sequel that the answer to the first question above is indeed
positive. However, it seems hard to answer the second question.
Any method of constructing an $[n, k]$ cyclic code over ${\rm GF}(q)$ amounts to the selection
of a divisor $g(x)$ over ${\rm GF}(q)$ of $x^n-1$ with degree $n-k$, which is employed as the
generator polynomial of the cyclic code. The minimum weight $d$ and other parameters
of this cyclic code are determined by the generator polynomial $g(x)$.
Suppose that an optimal $[n, k]$ cyclic code over ${\rm GF}(q)$ exists. The question is how to
find out the divisor $g(x)$ of $x^n-1$ that generates the optimal cyclic code. Note that
$x^n-1$ may have many divisors of small degrees. If the construction method is not well
designed, optimal cyclic codes cannot be produced even if they exist.
The construction of Section \ref{sec-gconstruct} may produce cyclic codes with bad
parameters. For example, let $(q,m)=(2,6)$, let $\alpha$ be a generator of ${\rm GF}(2^6)^*$
with $\alpha^6 + \alpha^4
+ \alpha^3 + \alpha + 1=0$, and let $f(x)=x^e$. When $e \in
\{7, 14, 28, 35, 49, 56\}$, the binary code ${\cal C}_s$ defined by the monomial $f(x)$
has parameters $[63, 45, 3]$. These codes are very bad as there are binary linear
codes with parameters $[63, 45, 8]$ and binary cyclic codes with parameters $[63, 57, 3]$.
On the other hand, the construction of Section \ref{sec-gconstruct} may produce
optimal cyclic codes. For example, let $(q,m)=(2,6)$ and let $f(x)=x^e$. When
$$e \in
\{1, 2, 4, 5, 8, 10, 16, 17, 20, 32, 34, 40\},$$
the binary code ${\cal C}_s$ defined by the
monomial $f(x)$ has parameters $[63, 57, 3]$ and should be equivalent to the binary
Hamming code with the same parameters. These cyclic codes are optimal with respect
to the sphere packing bound.
Hence, a monomial may give good or bad cyclic codes within the framework of the
construction of Section \ref{sec-gconstruct}. Now the question is how to choose
a monomial $f(x)$ over ${\rm GF}(r)$ so that the cyclic code ${\cal C}_s$ defined by $f(x)$
has good parameters.
In this paper, we employ monomials and trinomials $f(x)$ over ${\rm GF}(r)$ that are
either permutations on ${\rm GF}(r)$ or such that $|f({\rm GF}(r))|$ is very close to $r$. Most of the
monomials and trinomials $f(x)$ employed in this paper are either almost perfect
nonlinear or planar functions on ${\rm GF}(r)$.
It is unnecessary to require that $f(x)$ be highly nonlinear in order to obtain cyclic codes ${\cal C}_s$
with good parameters. Both linear and highly nonlinear polynomials $f(x)$ could give
optimal cyclic codes ${\cal C}_s$ when they are plugged into the generic construction of
Section \ref{sec-gconstruct}.
\section{Binary cyclic codes from the permutation monomial $f(x)=x^{2^{t}+3}$}\label{sec-Welch}
In this section we study the code ${\cal C}_s$ defined by the permutation monomial $f(x)=x^{2^{t}+3}$ over ${\rm GF}(2^{2t+1})$.
Before doing this, we need to prove the following lemma.
\begin{lemma}\label{lem-Welch}
Let $m =2t+1 \ge 7$.
Let $s^{\infty}$ be the sequence of (\ref{eqn-sequence}), where $f(x)=x^{2^{t}+3}$.
Then the linear span ${\mathbb L}_s$ of $s^{\infty}$ is equal to $5m+1$ and the minimal polynomial $\mathbb {M}_s(x)$
of $s^{\infty}$ is given by
\begin{eqnarray}\label{eqn-Welch}
\mathbb {M}_s(x)= (x-1) m_{\alpha^{-1}}(x) m_{\alpha^{-3}}(x) m_{\alpha^{-(2^t+1)}}(x)m_{\alpha^{-(2^t+2)}}(x) m_{\alpha^{-(2^t+3)}}(x).
\end{eqnarray}
\end{lemma}
\begin{proof}
By definition, we have
\begin{eqnarray}\label{eqn-Welch2}
s_i &=& {\rm Tr}\left((\alpha^i+1)^{2^t+3}\right) \nonumber \\
&=& {\rm Tr}\left( (\alpha^i)^{2^t+3} + (\alpha^i)^{2^t+2} + (\alpha^i)^{2^t+1} +(\alpha^i)^{3} +\alpha^i +1 \right) \nonumber \\
&=& \sum_{j=0}^{m-1} (\alpha^i)^{(2^t+3)2^j} + \sum_{j=0}^{m-1} (\alpha^i)^{(2^{t-1}+1)2^j} + \sum_{j=0}^{m-1} (\alpha^i)^{(2^t+1)2^j} \nonumber\\
&& + \sum_{j=0}^{m-1} (\alpha^i)^{3\times2^j} + \sum_{j=0}^{m-1} (\alpha^i)^{2^j} + 1.
\end{eqnarray}
By Lemma \ref{Lemma_coset}, the following $2$-cyclotomic cosets are pairwise disjoint and have size $m$:
\begin{eqnarray}\label{eqn-fourcosets}
C_1, \ C_3, \ C_{2^t+1}, \ C_{2^{t-1}+1}.
\end{eqnarray}
It is clear that the $2$-cyclotomic coset $C_{2^t+3}$ is
disjoint from all the cosets in (\ref{eqn-fourcosets}).
We now prove $C_{2^t+3}$ has size $m$. It is sufficient to prove that
$
\gcd:=\gcd(2^{2t+1}-1, 2^t+3)=1.
$
The conclusion is true for all $1 \le t \le 4$. So we consider only the case that $t \ge 5$.
Note that
$
2^{2t+1}-1=(2^{t+1}-6)(2^{t}+3) + 17.
$
We have $\gcd=\gcd(2^{t}+3, 17)$. Since
$
2^{t}+3=2^{t-4}(2^{4}+1)-(2^{t-4}-3),
$
we obtain that $\gcd=\gcd(2^{t-4}-3, 17)$.
Let $t_1=\lfloor t/4\rfloor$. Using the Euclidean division recursively, one gets
\begin{eqnarray*}
\gcd &=& \gcd(2^{t-4t_1}+3 (-1)^{t_1}, 17)\\
&=& \left\{ \begin{array}{ll}
\gcd(2^0+(-1)^{t_1}3, 17)=1 & \mbox{if } t \equiv 0 \pmod{4}, \\
\gcd(2^1+(-1)^{t_1}3, 17)=1 & \mbox{if } t \equiv 1 \pmod{4}, \\
\gcd(2^2+(-1)^{t_1}3, 17)=1 & \mbox{if } t \equiv 2 \pmod{4}, \\
\gcd(2^3+(-1)^{t_1}3, 17)=1 & \mbox{if } t \equiv 3 \pmod{4}.
\end{array}
\right.
\end{eqnarray*}
Therefore $\ell_{2^t+3}=|C_{2^t+3}|=\ell_{-(2^t+3)}=m$.
The desired conclusions on the linear span and the minimal polynomial $\mathbb {M}_s(x)$ then follow from Lemma \ref{lem-ls2},
(\ref{eqn-Welch2}) and the conclusions on the five cyclotomic cosets and their sizes.
\end{proof}
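The chain of Euclidean reductions above is easily confirmed numerically (our sanity check, not part of the proof).
\begin{verbatim}
# Numerical confirmation that gcd(2^(2t+1) - 1, 2^t + 3) = 1.
from math import gcd
assert all(gcd(2 ** (2 * t + 1) - 1, 2 ** t + 3) == 1
           for t in range(1, 200))
print("gcd check passed for t = 1, ..., 199")
\end{verbatim}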
The following theorem provides information on the code ${\cal C}_{s}$.
\begin{theorem}\label{thm-Welch}
Let $m \ge 7$ be odd.
The binary code ${\cal C}_{s}$ defined by the sequence of Lemma \ref{lem-Welch} has parameters
$[2^m-1, 2^{m}-2-5m, d]$ and generator polynomial $\mathbb {M}_s(x)$ of (\ref{eqn-Welch}), where $d \ge 8$.
\end{theorem}
\begin{proof}
The dimension of ${\cal C}_{s}$ follows from Lemma \ref{lem-Welch} and the definition of the
code ${\cal C}_s$. We need to prove the conclusion on the minimum distance $d$ of ${\cal C}_{s}$.
To this end, let ${{\cal D}_{s}}$ denote the cyclic code with generator polynomial
$m_{\alpha^{-1}}(x) m_{\alpha^{-(2^{j}+1)}}(x) m_{\alpha^{-(2^{2j}+1)}}(x)$, and let $\overline{{\cal D}_{s}}$
be the even-weight subcode of ${{\cal D}_{s}}$.
Then ${{\cal D}_{s}}$ is a triple-error-correcting code for any $j$ with $\gcd(j,m)=1$ \cite{Kasa}.
This means that the minimum distance of ${{\cal D}_{s}}$ is equal to 7 and that of $\overline{{\cal D}_{s}}$ is 8.
Take $j=t+1$; then ${{\cal D}_{s}}$ has generator polynomial
$m_{\alpha^{-1}}(x) m_{\alpha^{-3}}(x) m_{\alpha^{-(2^{t}+1)}}(x)$ and ${\cal C}_{s}$ is a subcode of $\overline{{\cal D}_{s}}$.
The conclusion then follows from the fact that $x-1$ is a factor of $\mathbb {M}_s(x)$.
\end{proof}
\begin{example}
Let $m=5$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^5 + \alpha^2 + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^{16} + x^{15} + x^{13} + x^{12} + x^8 + x^6 + x^3 + 1,
$
and ${\cal C}_s$ is a $[31, 15, 8]$ binary cyclic code. Its dual is a $[31,16,7]$ cyclic code. Both codes are optimal
according to the Database.
\end{example}
\begin{example}\label{ex-welch2}
Let $m=7$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^7 + \alpha + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x) = x^{36} + x^{34} + x^{33} + x^{32} + x^{29} + x^{28} + x^{27} +
x^{26} + x^{25} +
x^{24} + x^{21} + x^{12} + x^{11} + x^9 + x^7 + x^6 + x^5 + x^3 + x + 1
$
and ${\cal C}_s$ is a $[127, 91, 8]$ binary cyclic code.
\end{example}
It can be seen from Example \ref{ex-welch2}
that the bound on the minimum distance of ${\cal C}_s$ in
Theorem \ref{thm-Welch} is tight in certain cases.
\section{Binary cyclic codes from the permutation monomial $f(x)=x^{2^h-1}$}\label{sec-2hminus1}
Consider monomials over ${\rm GF}(2^m)$ of the form $f(x)=x^{2^h-1}$, where $h$ is a positive integer with $1\leq h\leq {\lceil {m\over 2} \rceil}$.
In this section, we deal with the binary code ${\cal C}_s$ defined by the sequence $s^{\infty}$
of (\ref{eqn-sequence}), where $f(x)=x^{2^h-1}$.
We need to do some preparations before presenting and proving the main results
of this section. Let $t$ be a positive integer. We define $T=2^t-1$. For any
odd $a \in \{1,2,3,\cdots,T\}$, define
\begin{equation*}
\epsilon_a^{(t)} =\left\{
\begin{array}{ll}
1, &\textrm{if~} a=2^t-1\\
\left\lceil {\log_2{T\over a}}\right\rceil,&\textrm{if~} 1\leq a<2^t-1.
\end{array} \right.\ \
\end{equation*}
and
\begin{equation}\label{eqn-def-kappa}
\kappa_a^{(t)} = \epsilon_a^{(t)}~\bmod 2.
\end{equation}
Let
$$
B_a^{(t)} =\left\{2^ia: i =0,1,2, \cdots, \epsilon_a^{(t)} -1 \right\}.
$$
Then it can be verified that
$$
\bigcup_{1 \le 2j+1 \le T} B_{2j+1}^{(t)} =\{1,2,3,\cdots, T\}
$$
and
$$
B_a^{(t)} \cap B_b^{(t)} = \emptyset
$$
for any pair of distinct odd numbers $a$ and $b$ in $\{1,2,3,\cdots, T\}$.
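This partition is easy to verify by machine. In the sketch below (ours, for illustration), $B_a^{(t)}$ is realized as the set of doublings of $a$ that stay within $\{1,\dots,T\}$, which matches the definition of $\epsilon_a^{(t)}$ above.
\begin{verbatim}
# Check that the sets B_a^(t) partition {1, 2, ..., T}, T = 2^t - 1.
def B(a, t):
    T, out = 2 ** t - 1, []
    while a <= T:                   # a, 2a, 4a, ... up to T
        out.append(a)
        a *= 2
    return out

for t in range(1, 12):
    T = 2 ** t - 1
    union = sorted(x for a in range(1, T + 1, 2) for x in B(a, t))
    assert union == list(range(1, T + 1))
print("partition check passed for t = 1, ..., 11")
\end{verbatim}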
The following lemma follows directly from the definitions of $\epsilon_a^{(t)}$
and $B_a^{(t)}$.
\begin{lemma}\label{lem-f263}
Let $a$ be an odd integer in $\{1,2,3, \cdots, T\}$. Then
\begin{eqnarray*}
& & B_a^{(t+1)} = B_a^{(t)} \cup \{a 2^{\epsilon_a^{(t)}}\} \mbox{ if } 1 \le a \le 2^t-1, \\
& & B_a^{(t+1)} = \{a\} \mbox{ if } 2^t+1 \le a \le 2^{t+1}-1, \\
& & \epsilon_a^{(t+1)} = \epsilon_a^{(t)} +1 \mbox{ if } 1 \le a \le 2^t-1, \\
& & \epsilon_a^{(t+1)} = 1 \mbox{ if } 2^t+1 \le a \le 2^{t+1}-1.
\end{eqnarray*}
\end{lemma}
\begin{lemma}\label{lem-f264}
Let $N_t$ denote the total number of odd $\epsilon_a^{(t)}$ when $a$ ranges over all
odd numbers in the set $\{1,2,\cdots, T\}$. Then $N_1=1$ and
$$
N_t = \frac{2^t+(-1)^{t-1}}{3}
$$
for all $t \ge 2$.
\end{lemma}
\begin{proof}
It is easily checked that $N_2=1$, $N_3=3$ and $N_4=5$.
It follows from Lemma \ref{lem-f263} that
$$
N_t= 2^{t-2} + (2^{t-2} -N_{t-1}).
$$
Hence
$$
N_t - 2^{t-2} = 2^{t-3} - (N_{t-1}-2^{t-3})= 2^{t-4} + (N_{t-2}-2^{t-4}).
$$
Applying this recurrence formula repeatedly, one obtains the desired
formula for $N_t$.
\end{proof}
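As a quick numerical confirmation of the lemma, the following Python sketch (ours; \texttt{epsilon} is as in the previous sketch, repeated for self-containment) counts the odd $\epsilon_a^{(t)}$ directly and compares the count with the closed formula:
\begin{verbatim}
from math import ceil, log2

def epsilon(a, t):                 # as in the previous sketch
    T = 2**t - 1
    return 1 if a == T else ceil(log2(T / a))

def N(t):                          # number of odd epsilon_a^(t), a odd
    return sum(epsilon(a, t) % 2 for a in range(1, 2**t, 2))

assert N(1) == 1
for t in range(2, 13):
    assert N(t) == (2**t + (-1)**(t - 1)) // 3
\end{verbatim}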
\begin{lemma}\label{lem-22mm1}
Let $s^{\infty}$ be the sequence of (\ref{eqn-sequence}), where $f(x)=x^{2^h-1}$, $2\leq h\leq {\lceil {m\over 2} \rceil}$. Then the linear span ${\mathbb L}_s$ of $s^{\infty}$ is given by
\begin{eqnarray}\label{eqn-22m0}
{\mathbb L}_s =\left\{ \begin{array}{l}
\frac{m(2^h+(-1)^{h-1})}{3},~ \mbox{ if $m$ is even} \\
\frac{m(2^h+(-1)^{h-1}) +3}{3},~ \mbox{ if $m$ is odd.}
\end{array}
\right.
\end{eqnarray}
We have then
\begin{equation}\label{eqn-2m31}
\mathbb {M}_s(x) =
(x-1)^{\mathbb {N}_2(m)} \prod_{1 \le 2j+1 \le 2^h-1 \atop \kappa_{2j+1}^{(h)} =1} m_{\alpha^{-(2j+1)}}(x).
\end{equation}
\end{lemma}
\begin{proof}
We have
\begin{eqnarray}\label{eqn-22m11}
{\rm Tr}(f(x+1))
&= & {\rm Tr}\left( (x+1)^{\sum_{i=0}^{h-1} 2^{i}} \right)
= {\rm Tr}\left( \prod_{i=0}^{h-1} \left(x^{2^{i}}+1\right) \right)
= {\rm Tr}\left( \sum_{i=0}^{2^h-1} x^{i} \right) \nonumber \\
&=&{\rm Tr}(1) + {\rm Tr}\left( \sum_{i=1}^{2^h-1} x^{i} \right)
= {\rm Tr}(1) + {\rm Tr}\left( \sum_{1 \le 2i+1 \le 2^h-1 \atop \kappa_{2i+1}^{(h)} =1} x^{2i+1} \right)
\end{eqnarray}
where the last equality follows from Lemma \ref{Lemma_coset}.
By definition, the sequence of (\ref{eqn-sequence}) is given by $s_t={\rm Tr}(f(\alpha^t+1))$ for all $t \ge 0$.
The desired conclusions on the linear span and the minimal polynomial $\mathbb {M}_s(x)$ then follow from Lemmas
\ref{Lemma_coset}, \ref{lem-f264} and Equation
(\ref{eqn-22m11}).
\end{proof}
The following theorem provides information on the code ${\cal C}_{s}$.
\begin{theorem}\label{thm-38}
Let $h \ge 2$.
The binary code ${\cal C}_{s}$ defined by the binary sequence of Lemma \ref{lem-22mm1} has parameters
$[2^m-1, 2^{m}-1-{\mathbb L}_s, d]$ and generator polynomial $\mathbb {M}_s(x)$ of (\ref{eqn-2m31}),
where ${\mathbb L}_s$ is given in (\ref{eqn-22m0}) and
\begin{eqnarray*}
d \ge \left\{ \begin{array}{l}
2^{h-2}+2 \mbox{ if $m$ is odd and $h>2$} \\
2^{h-2}+1 \mbox{ otherwise.}
\end{array}
\right.
\end{eqnarray*}
\end{theorem}
\begin{proof}
The dimension of ${\cal C}_{s}$ follows from Lemma \ref{lem-22mm1} and the definition of the
code ${\cal C}_s$. We now derive the lower bounds on the minimum weight $d$ of the code. It is
well known that the codes generated by $\mathbb {M}_s(x)$ and its reciprocal have the same weight
distribution. It follows from Lemmas \ref{lem-22mm1} and \ref{lem-f263} that the reciprocal
of $\mathbb {M}_s(x)$ has zeros $\alpha^{2j+1}$ for all $j$ in $\{2^{h-2}, 2^{h-2}+1, \cdots, 2^{h-1}-1\}$.
By the Hartman-Tzeng bound, we have $d \ge 2^{h-2}+1$. If $m$ is odd, ${\cal C}_s$ is an
even-weight code. In this case, $d \ge 2^{h-2}+2$.
\end{proof}
\begin{example}
Let $(m,h)=(7,2)$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^7 + \alpha + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x) = x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1,
$
and ${\cal C}_s$ is a $[127, 119, 4]$ binary cyclic code and optimal according to the Database.
\end{example}
\begin{example}
Let $(m, h)=(7,3)$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^7 + \alpha + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x) = x^{22} + x^{21} + x^{20} + x^{18} + x^{17} + x^{16} + x^{14} +
x^{13} + x^8 + x^7 + x^6 + x^5 + x^4 + 1
$
and ${\cal C}_s$ is a $[127, 105, d]$ binary cyclic code, where $4 \le d \le 8$.
\end{example}
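The linear span of Lemma \ref{lem-22mm1}, and hence the dimensions in the two examples above, can be spot-checked numerically. The following Python sketch is ours: it realizes ${\rm GF}(2^7)$ via the primitive polynomial $x^7+x+1$ of the examples, generates $s_t={\rm Tr}((\alpha^t+1)^{2^h-1})$, and recovers ${\mathbb L}_s=22$ for $(m,h)=(7,3)$ with the Berlekamp-Massey algorithm; the helper names are hypothetical.
\begin{verbatim}
def make_field(m, p):         # GF(2^m) = GF(2)[x]/(p); elements as bitmasks
    def mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a >> m:
                a ^= p
            b >>= 1
        return r
    def trace(a):             # Tr(a) = a + a^2 + ... + a^(2^(m-1))
        t, x = 0, a
        for _ in range(m):
            t ^= x
            x = mul(x, x)
        return t              # equals 0 or 1
    return mul, trace

def fpow(mul, a, e):          # a^e by square-and-multiply
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def linear_complexity(s):     # Berlekamp-Massey over GF(2)
    C, B, L, k = 1, 1, 0, 1   # connection polynomials as bitmasks
    for i in range(len(s)):
        d = 0
        for j in range(L + 1):
            d ^= ((C >> j) & 1) & s[i - j]
        if d:
            T = C
            C ^= B << k
            if 2 * L <= i:
                L, B, k = i + 1 - L, T, 1
            else:
                k += 1
        else:
            k += 1
    return L

m, h = 7, 3
mul, trace = make_field(m, (1 << m) | 0b11)   # x^7 + x + 1
n = 2**m - 1
at, s = 1, []                                 # at holds alpha^t
for _ in range(2 * n):
    s.append(trace(fpow(mul, at ^ 1, 2**h - 1)))
    at = mul(at, 2)
# formula of Lemma lem-22mm1 for odd m:
assert linear_complexity(s) == (m * (2**h + (-1)**(h - 1)) + 3) // 3  # = 22
\end{verbatim}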
\begin{remark}
The code ${\cal C}_s$ of Theorem \ref{thm-38} may be bad when $\gcd(h, m) \ne 1$. In this case the monomial
$f(x)=x^{2^h-1}$ is not a permutation of ${\rm GF}(2^m)$. For example, when $(m, h)=(6,3)$, ${\cal C}_s$ is a
$[63, 45, 3]$ binary cyclic code, while the best known linear code in the Database has parameters $[63, 45, 8]$. Hence,
we are interested in this code only for the case that $\gcd(h, m)=1$, which guarantees that $f(x)=x^{2^h-1}$ is a
permutation of ${\rm GF}(2^m)$.
\end{remark}
\section{Binary cyclic codes from the permutation monomial $f(x)=x^e$, $e=2^{(m-1)/2}+2^{(m-1)/4}-1$ and $m \equiv 1 \pmod{4}$}\label{sec-1Niho}
Let $f(x)=x^e$, where $e=2^{(m-1)/2}+2^{(m-1)/4}-1$ and $m \equiv 1 \pmod{4}$.
It can be proved that $f(x)$ is a permutation of ${\rm GF}(r)$.
Define $h=(m-1)/4$. We have then
\begin{eqnarray}\label{eqn-1Niho}
{\rm Tr}(f(x+1))
&=& {\rm Tr}\left( (x^{2^{2h}}+1) (x+1) ^{\sum_{i=0}^{h-1} 2^{i}} \right)
= {\rm Tr}\left( (x^{2^{2h}}+1) \prod_{i=0}^{h-1} \left(x^{2^{i}}+1\right) \right) \nonumber \\
&=& {\rm Tr}\left( (x^{2^{2h}}+1) \sum_{i=0}^{2^h-1} x^{i} \right)
= 1+ {\rm Tr}\left(\sum_{i=0}^{2^h-1} x^{i+2^{2h}} + \sum_{i=1}^{2^h-1} x^{i} \right).
\end{eqnarray}
The sequence $s^{\infty}$ of (\ref{eqn-sequence}) defined by the monomial $f(x)=x^e$ is then
given by
\begin{eqnarray}\label{eqn-1Nihoseq}
s_t= 1+ {\rm Tr}\left(\sum_{i=0}^{2^h-1} (\alpha^t)^{i+2^{2h}} + \sum_{i=1}^{2^h-1} (\alpha^t)^{i} \right)
\end{eqnarray}
for all $t \ge 0$, where $\alpha$ is a generator of ${\rm GF}(2^m)^*$.
In this section, we deal with the code ${\cal C}_s$ defined by the sequence $s^{\infty}$ of
(\ref{eqn-1Nihoseq}). To this end, we need to prove a number of auxiliary results on
$2$-cyclotomic cosets.
We define the following two sets for convenience:
\begin{eqnarray*}
A=\{0,1,2, \cdots, 2^h-1\}, \ B=2^{2h}+A=\{i+2^{2h}: i \in A\}.
\end{eqnarray*}
\begin{lemma}\label{lem-1Nf261}
For any $j \in B$, the size $\ell_j=|C_j|=m$.
\end{lemma}
\begin{proof}
Let $j = i + 2^{2h}$, where $i \in A$. For any $u$ with $1 \le u \le m-1$, define
\begin{eqnarray*}
\Delta_1(j, u) = j(2^{u}-1)=(i+2^{2h}) (2^{u}-1), \ \
\Delta_2(j, u) = j(2^{m-u}-1)=(i+2^{2h}) (2^{m-u}-1).
\end{eqnarray*}
If $\ell_j <m$, there would be an integer $1 \le u \le m-1$ such that $\Delta_t(j,u) \equiv 0 \pmod{n}$
for all $t \in \{1,2\}$.
Note that $1 \le u \le m-1$. We have that $\Delta_1(j, u) \ne 0$ and $\Delta_2(j, u) \ne 0$.
When $u \le m-2h-1$, we have
$$
2^h \le \Delta_1(j, u) \le (2^{2h}+2^h-1)(2^{m-2h-1}-1) <n.
$$
In this case, $\Delta_1(j,u) \not\equiv 0 \pmod{n}$.
When $u \ge m-2h$, we have $m-u \le 2h$ and
$$
2^h \le \Delta_2(j, u) \le (2^{2h}+2^h-1)(2^{2h}-1) <n.
$$
In this case, $\Delta_2(j,u) \not\equiv 0 \pmod{n}$.
Combining the conclusions of the two cases above completes the proof.
\end{proof}
\begin{lemma}\label{lem-1Nf262}
For any pair of distinct $i$ and $j$ in $B$,
$C_i \cap C_j = \emptyset$, i.e., they cannot be in the same $2$-cyclotomic
coset modulo $n$.
\end{lemma}
\begin{proof}
Let $i=i_1+2^{2h}$ and $j=j_1+2^{2h}$, where $i_1 \in A$ and $j_1 \in A$. Define
\begin{eqnarray*}
& & \Delta_1(i, j, u) = i2^{u}-j= (i_1+2^{2h})2^u- (j_1+2^{2h}), \\
& & \Delta_2(i, j, u) = j2^{m-u}-i= (j_1+2^{2h})2^{m-u}- (i_1+2^{2h}).
\end{eqnarray*}
If $C_i=C_j$, there would be an integer $1 \le u \le m-1$ such that $\Delta_t(i,j,u) \equiv 0 \pmod{n}$
for all $t \in \{1,2\}$.
We first prove that $\Delta_1(i, j, u) \ne 0$. When $u=0$, $\Delta_1(i, j, u)=i_1-j_1 \ne 0$. When
$1 \le u \le m-1$, we have
$$
\Delta_1(i, j, u) \ge 2i_1 + 2^{2h+1}-2^{2h}-j_1 >0.
$$
Since $1 \le u \le m-1$, one can similarly prove that $\Delta_2(i, j, u) >0$.
When $u \le m-2h-1$, we have
$$
-n < -2^{2h} \le \Delta_1(i,j, u) \le (2^{2h}+2^h-1)(2^{m-2h-1}-1) <n.
$$
In this case, $\Delta_1(i,j,u) \not\equiv 0 \pmod{n}$.
When $u \ge m-2h$, we have $m-u \le 2h$ and
$$
0< \Delta_2(i, j, u) \le (2^{2h}+2^h-1)2^{2h}-i_1-2^h <n.
$$
In this case, $\Delta_2(i,j,u) \not\equiv 0 \pmod{n}$.
Combining the conclusions of the two cases above completes the proof.
\end{proof}
\begin{lemma}\label{lem-feb281}
For any $i+2^{2h} \in B$ and odd $j \in A$,
\begin{eqnarray}
C_{i+2^{2h}} \cap C_j = \left\{ \begin{array}{l}
C_j \mbox{ if } (i,j)=(0,1) \\
\emptyset \mbox{ otherwise.}
\end{array}
\right.
\end{eqnarray}
\end{lemma}
\begin{proof}
Define
\begin{eqnarray*}
\Delta_1(i, j, u) = j2^{u}-(i+2^{2h}), \ \
\Delta_2(i, j, u) = (i+2^{2h})2^{m-u}- j.
\end{eqnarray*}
If $C_{i+2^{2h}}=C_j$, there would be an integer $0 \le u \le m-1$ such that $\Delta_t(i,j,u) \equiv 0 \pmod{n}$
for all $t \in \{1,2\}$.
If $u=2h$, then
\begin{eqnarray*}
0 \equiv \Delta_1(i, j, u) & \equiv & 2^{2h+1}(j2^{2h}-(i+2^{2h})) \pmod{n} \\
& \equiv & j2^{m}-i2^{2h+1}-2^{m} \pmod{n} \\
& \equiv & j -1 -i 2^{2h+1} \pmod{n} \\
& = & j -1 -i 2^{2h+1}.
\end{eqnarray*}
Hence, the only solution of $\Delta_1(i,j,2h) \equiv 0 \pmod{n}$ is $(i,j)=(0,1)$.
We now consider the case that $0 \le u <2h$. We claim that $\Delta_1(i, j, u) \ne 0$.
Suppose on the contrary that $\Delta_1(i, j, u) = 0$. We would then have
$
j 2^u -i - 2^{2h} =0.
$
Because $u<2h$ and $j$ is odd, there is an odd $i_1$ such that $i=2^u i_1$. It then
follows from $i < 2^h$ that $u <h$. We obtain then
$$
j=i_1+2^{2h-u}>i_1 + 2^{h}>2^h-1.
$$
This is contrary to the assumption that $j \in A$. This proves that $\Delta_1(i, j, u) \ne 0$.
Finally, we deal with the case that $2h+1 \le u \le 4h=m-1$. We prove that $\Delta_2(i, j, u) \not\equiv 0 \pmod{n}$
in this case. Since $j$ is odd, $\Delta_2(i, j, u) \ne 0$. We have also
\begin{eqnarray*}
\Delta_2(i, j, u)
= i2^{m-u}+2^{m+2h-u} -j
\le (2^h-1)2^{m-u} + 2^{m-1} -j
\le 2^{m-(h-1)}+2^{m-1}-j
< n.
\end{eqnarray*}
Clearly, $\Delta_2(i, j, u) >-j >-n$. Hence in this case we have $\Delta_2(i, j, u) \not\equiv 0 \pmod{n}$.
Summarizing the conclusions above proves this lemma.
\end{proof}
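For small $m$, Lemmas \ref{lem-1Nf261}, \ref{lem-1Nf262} and \ref{lem-feb281} can be confirmed by a direct coset computation. The following Python sketch of ours (for $m=9$, so $h=2$ and $B=\{16,17,18,19\}$) does exactly that:
\begin{verbatim}
def coset(j, m):              # 2-cyclotomic coset of j modulo n = 2^m - 1
    n = 2**m - 1
    c, x = set(), j % n
    while x not in c:
        c.add(x)
        x = (2 * x) % n
    return c

m = 9                         # m = 1 (mod 4)
h = (m - 1) // 4
B = [i + 2**(2 * h) for i in range(2**h)]
assert all(len(coset(j, m)) == m for j in B)        # Lemma lem-1Nf261
assert all(coset(i, m).isdisjoint(coset(j, m))
           for i in B for j in B if i < j)          # Lemma lem-1Nf262
for i in range(2**h):                               # Lemma lem-feb281
    for j in range(1, 2**h, 2):
        expect = coset(1, m) if (i, j) == (0, 1) else set()
        assert coset(i + 2**(2 * h), m) & coset(j, m) == expect
\end{verbatim}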
\begin{lemma}\label{lem-1N2m1}
Let $m \ge 9$ be odd.
Let $s^{\infty}$ be the sequence of (\ref{eqn-1Nihoseq}). Then the linear span ${\mathbb L}_s$ of $s^{\infty}$ is given by
\begin{eqnarray}\label{eqn-1N2m0}
{\mathbb L}_s =\left\{ \begin{array}{l}
\frac{m\left(2^{(m+7)/4}+(-1)^{(m-5)/4}\right) +3}{3}, ~\mbox{ if $m \equiv 1 \pmod{8}$} \\
\frac{m\left(2^{(m+7)/4}+(-1)^{(m-5)/4}-6\right) +3}{3}, ~\mbox{ if $m \equiv 5 \pmod{8}$.}
\end{array}
\right.
\end{eqnarray}
We have also
\begin{equation*}\label{eqn-1N2m21}
\mathbb {M}_s(x) = (x-1) \prod_{i=0}^{2^{\frac{m-1}{4}}-1} m_{\alpha^{-i-2^{\frac{m-1}{2}} }}(x)
\prod_{1 \le 2j+1 \le 2^{\frac{m-1}{4}}-1 \atop \kappa_{2j+1}^{((m-1)/4)} =1} m_{\alpha^{-2j-1}}(x)
\end{equation*}
if $m \equiv 1 \pmod{8}$; and
\begin{equation*}\label{eqn-1N2m31}
\mathbb {M}_s(x) =(x-1) \prod_{i=1}^{2^{\frac{m-1}{4}}-1} m_{\alpha^{-i-2^{\frac{m-1}{2}} }}(x)
\prod_{3 \le 2j+1 \le 2^{\frac{m-1}{4}}-1 \atop \kappa_{2j+1}^{((m-1)/4)} =1} m_{\alpha^{-2j-1}}(x)
\end{equation*}
if $m \equiv 5 \pmod{8}$,
where $\kappa_{2j+1}^{(h)}$ was
defined in Section \ref{sec-2hminus1}.
\end{lemma}
\begin{proof}
By Lemma \ref{lem-1Nf262}, the monomials in the function
\begin{equation}\label{eqn-feb28111}
{\rm Tr}\left(\sum_{i=0}^{2^h-1} x^{i+2^{2h}} \right)
\end{equation}
will not cancel each other. Lemmas \ref{lem-f264} and \ref{lem-22mm1} say that after cancellation, we have
\begin{equation}\label{eqn-feb28121}
{\rm Tr}\left(\sum_{i=1}^{2^h-1} x^{i} \right) =
{\rm Tr}\left(\sum_{1 \le 2j+1 \le 2^h-1 \atop \kappa_{2j+1}^{(h)}=1} x^{2j+1} \right).
\end{equation}
By Lemma \ref{lem-feb281}, the monomials in the function of (\ref{eqn-feb28111}) will not cancel the monomials in
the function in the right-hand side of (\ref{eqn-feb28121}) if $m \equiv 1 \pmod{8}$, and only the term
$x^{2^{2h}}$ in the function of (\ref{eqn-feb28111}) cancels the monomial $x$ in the function in the right-hand
side of (\ref{eqn-feb28121}) if $m \equiv 5 \pmod{8}$.
The desired conclusions on the linear span and the minimal polynomial $\mathbb {M}_s(x)$ then follow from Lemmas \ref{lem-ls2},
\ref{lem-1Nf261}, and Equation
(\ref{eqn-1Niho}).
\end{proof}
The following theorem provides information on the code ${\cal C}_{s}$.
\begin{theorem}\label{thm-yue}
Let $m \ge 9$ be odd.
The binary code ${\cal C}_{s}$ defined by the sequence of (\ref{eqn-1Nihoseq}) has parameters
$[2^m-1, 2^{m}-1-{\mathbb L}_s, d]$ and generator polynomial $\mathbb {M}_s(x)$,
where ${\mathbb L}_s$ and $\mathbb {M}_s(x)$ are given in Lemma \ref{lem-1N2m1} and the minimum weight $d$ has the following
bounds:
\begin{eqnarray}\label{eqn-niho1b}
d \ge \left\{ \begin{array}{ll}
2^{(m-1)/4} + 2 & \mbox{if } m \equiv 1 \pmod{8} \\
2^{(m-1)/4} & \mbox{if } m \equiv 5 \pmod{8}.
\end{array}
\right.
\end{eqnarray}
\end{theorem}
\begin{proof}
The dimension and the generator polynomial of ${\cal C}_{s}$ follow from Lemma \ref{lem-1N2m1} and the
definition of the code ${\cal C}_s$. We now derive the lower bounds on the minimum weight $d$. It is well
known that the codes generated by $\mathbb {M}_s(x)$ and its reciprocal have the same weight distribution. The
reciprocal of $\mathbb {M}_s(x)$ has the zeros $\alpha^{i+2^{2h}}$ for all $i$ in $\{0,1,2, \cdots, 2^h-1\}$ if
$m \equiv 1 \pmod{8}$, and for all $i$ in $\{1,2, \cdots, 2^h-1\}$ if $m \equiv 5 \pmod{8}$.
Note that ${\cal C}_s$ is an even-weight code. Then the desired bounds on $d$ follow from the BCH bound.
\end{proof}
\begin{example}
Let $m=5$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^5 + \alpha^2 + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^6 + x^3 + x^2 + 1,
$
and ${\cal C}_s$ is a $[31, 25, 4]$ binary cyclic code and optimal according to the Database.
\end{example}
\begin{example}
Let $m=9$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^9 + \alpha^4 + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x) = x^{46} + x^{45} + x^{41} + x^{40} + x^{39} + x^{36} + x^{35} + x^{33} + x^{28} + x^{27} + x^{26} + x^{25} + x^{24} + x^{22} + x^{21} + x^{20} + x^{19} + x^{14} + x^{12} + x^7 + x^4 +
x^2 + x + 1
$
and ${\cal C}_s$ is a $[511, 465, d]$ binary cyclic code, where $d \ge 6$. The actual minimum weight
may be larger than 6.
\end{example}
\section{Binary cyclic codes from the monomials $f(x)=x^{2^{2h}-2^h+1}$, where $\gcd(m,h)=1$}\label{sec-Kasami}
Define $f(x)=x^e$, where $e=2^{2h}-2^h+1$ and $\gcd(m,h)=1$.
In this section, we have the following additional restrictions on $h$:
\begin{eqnarray}\label{eqn-Hcondition}
1 \le h \le \left\{ \begin{array}{l}
\frac{m-1}{4} \mbox{ if } m \equiv 1 \pmod{4}, \\
\frac{m-3}{4} \mbox{ if } m \equiv 3 \pmod{4}, \\
\frac{m-4}{4} \mbox{ if } m \equiv 0 \pmod{4}, \\
\frac{m-2}{4} \mbox{ if } m \equiv 2 \pmod{4}.
\end{array}
\right.
\end{eqnarray}
Note that
\begin{eqnarray}\label{eqn-Kasami}
{\rm Tr}(f(x+1))
&=& {\rm Tr}\left( (x+1) (x+1) ^{\sum_{i=0}^{h-1} 2^{h+i}} \right)\\
&=& {\rm Tr}\left( (x+1) \prod_{i=0}^{h-1} \left(x^{2^{h+i}}+1\right) \right) \nonumber \\
&=& {\rm Tr}\left(\sum_{i=0}^{2^h-1} x^{2^{h}i+1} + \sum_{i=0}^{2^h-1} x^{i} \right) \nonumber \\
&=& {\rm Tr}\left(\sum_{i=0}^{2^h-1} x^{i+2^{m-h}} + \sum_{i=1}^{2^h-1} x^{i} \right) +1.
\end{eqnarray}
The sequence $s^{\infty}$ of (\ref{eqn-sequence}) defined by $f(x)$ is then
\begin{eqnarray}\label{eqn-Kasamiseq}
s_t= {\rm Tr}\left(\sum_{i=0}^{2^h-1} (\alpha^t)^{i+2^{m-h}} + \sum_{i=1}^{2^h-1} (\alpha^t)^{i} \right) +1
\end{eqnarray}
for all $t \ge 0$, where $\alpha$ is a generator of ${\rm GF}(2^m)^*$.
In this section, we deal with the code ${\cal C}_s$ defined by the sequence $s^{\infty}$ of
(\ref{eqn-Kasamiseq}).
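Before turning to the results, the trace identity behind (\ref{eqn-Kasamiseq}) can be checked numerically. The following Python sketch is ours; it uses the same bitmask field arithmetic as in the earlier sketch (repeated here for self-containment), takes $(m,h)=(7,2)$, and relies on ${\rm Tr}(1)=1$, i.e., on $m$ being odd. It compares the closed form with ${\rm Tr}(f(\alpha^t+1))$ for every $t$:
\begin{verbatim}
def make_field(m, p):         # as in the earlier sketch
    def mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a >> m:
                a ^= p
            b >>= 1
        return r
    def trace(a):
        t, x = 0, a
        for _ in range(m):
            t ^= x
            x = mul(x, x)
        return t
    return mul, trace

def fpow(mul, a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

m, h = 7, 2
mul, trace = make_field(m, (1 << m) | 0b11)    # x^7 + x + 1
e = 2**(2 * h) - 2**h + 1                      # e = 2^(2h) - 2^h + 1
at = 1                                         # at holds alpha^t
for t in range(2**m - 1):
    direct = trace(fpow(mul, at ^ 1, e))       # Tr(f(alpha^t + 1))
    closed = 1
    for i in range(2**h):
        closed ^= trace(fpow(mul, at, i + 2**(m - h)))
    for i in range(1, 2**h):
        closed ^= trace(fpow(mul, at, i))
    assert direct == closed                    # matches (eqn-Kasamiseq)
    at = mul(at, 2)
\end{verbatim}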
It is noticed that the final expression of the function of (\ref{eqn-Kasami}) is of the same format
as that of the function of (\ref{eqn-1Niho}). The proofs of the lemmas and theorems in this section
are very similar to those of Section \ref{sec-1Niho}. Hence, we present only the main results without
providing proofs.
We define the following two sets for convenience:
\begin{eqnarray*}
A=\{0,1,2, \cdots, 2^h-1\}, \ B=2^{m-h}+A=\{i+2^{m-h}: i \in A\}.
\end{eqnarray*}
\begin{lemma}\label{lem-K1Nf261}
Let $h$ satisfy the conditions of (\ref{eqn-Hcondition}).
For any $j \in B$, the size $\ell_j=|C_j|=m$.
\end{lemma}
\begin{proof}
The proof of Lemma \ref{lem-1Nf261} is easily modified into a proof for this lemma.
The details are omitted.
\end{proof}
\begin{lemma}\label{lem-K1Nf262}
Let $h$ satisfy the conditions of (\ref{eqn-Hcondition}).
For any pair of distinct $i$ and $j$ in $B$,
$C_i \cap C_j = \emptyset$, i.e., they cannot be in the same $2$-cyclotomic
coset modulo $n$.
\end{lemma}
\begin{proof}
The proof of Lemma \ref{lem-1Nf262} is easily modified into a proof for this lemma.
The details are omitted.
\end{proof}
\begin{lemma}\label{lem-Kfeb281}
Let $h$ satisfy the conditions of (\ref{eqn-Hcondition}).
For any $i+2^{m-h} \in B$ and odd $j \in A$,
\begin{eqnarray}
C_{i+2^{m-h}} \cap C_j = \left\{ \begin{array}{l}
C_j \mbox{ if } (i,j)=(0,1) \\
\emptyset \mbox{ otherwise.}
\end{array}
\right.
\end{eqnarray}
\end{lemma}
\begin{proof}
The proof of Lemma \ref{lem-feb281} is easily modified into a proof for this lemma.
The details are omitted here.
\end{proof}
\begin{lemma}\label{lem-K1N2m1}
Let $h$ satisfy the conditions of (\ref{eqn-Hcondition}).
Let $s^{\infty}$ be the sequence of (\ref{eqn-Kasamiseq}). Then the linear span ${\mathbb L}_s$ of $s^{\infty}$ is given by
\begin{eqnarray}\label{eqn-K1N2m0}
{\mathbb L}_s =\left\{ \begin{array}{l}
\frac{m\left(2^{h+2}+(-1)^{h-1}\right) +3}{3} \mbox{ if $h$ is even} \\
\frac{m\left(2^{h+2}+(-1)^{h-1}-6\right) +3}{3} \mbox{ if $h$ is odd.}
\end{array}
\right.
\end{eqnarray}
We have also
\begin{equation*}\label{eqn-K1N2m21}
\mathbb {M}_s(x) = (x-1) \prod_{i=0}^{2^{h}-1} m_{\alpha^{-i-2^{m-h} }}(x)
\prod_{1 \le 2j+1 \le 2^{h}-1 \atop \kappa_{2j+1}^{(h)} =1} m_{\alpha^{-2j-1}}(x)
\end{equation*}
if $h$ is even; and
\begin{equation*}\label{eqn-K1N2m31}
\mathbb {M}_s(x) =(x-1) \prod_{i=1}^{2^{h}-1} m_{\alpha^{-i-2^{m-h} }}(x)
\prod_{3 \le 2j+1 \le 2^{h}-1 \atop \kappa_{2j+1}^{(h)} =1} m_{\alpha^{-2j-1}}(x)
\end{equation*}
if $h$ is odd,
where $\kappa_{2j+1}^{(h)}$ was
defined in Section \ref{sec-2hminus1}.
\end{lemma}
\begin{proof}
The proof of Lemma \ref{lem-1N2m1} is easily modified into a proof for this lemma.
The details are omitted.
\end{proof}
The following theorem provides information on the code ${\cal C}_{s}$.
\begin{theorem} \label{thm-Kyue}
Let $h$ satisfy the conditions of (\ref{eqn-Hcondition}).
The binary code ${\cal C}_{s}$ defined by the sequence of (\ref{eqn-Kasamiseq}) has parameters
$[2^m-1, 2^{m}-1-{\mathbb L}_s, d]$ and generator polynomial $\mathbb {M}_s(x)$,
where ${\mathbb L}_s$ and $\mathbb {M}_s(x)$ are given in Lemma \ref{lem-K1N2m1} and the minimum weight $d$ has the following
bounds:
\begin{eqnarray}\label{eqn-Kniho1b}
d \ge \left\{ \begin{array}{ll}
2^{h} + 2 & \mbox{if $h$ is even} \\
2^{h} & \mbox{if $h$ is odd.}
\end{array}
\right.
\end{eqnarray}
\end{theorem}
\begin{proof}
The proof of Theorem \ref{thm-yue} is easily modified into a proof for this theorem with the help of the
lemmas presented in this section.
The details are omitted here.
\end{proof}
\begin{example}
Let $(m,h)=(5,2)$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^5 + \alpha^2 + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x)= x^{16} + x^{14} + x^{10} + x^{9} + x^8 + x^7 + x^5 + x^4 + x^3 + x^2 + x+ 1
$
and ${\cal C}_s$ is a $[31, 15, 8]$ binary cyclic code. Its dual is a $[31,16,7]$ cyclic code. Both codes are optimal according to the Database.
In this example, the condition of (\ref{eqn-Hcondition}) is not satisfied. So the conclusions on the code of
this example do not agree with the conclusions of Theorem \ref{thm-Kyue}.
\end{example}
\begin{example}
Let $(m,h)=(7,2)$ and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^7 + \alpha + 1=0$. Then
the generator polynomial of the code ${\cal C}_s$ is
$
\mathbb {M}_s(x) = x^{36} + x^{28} + x^{27} + x^{23} + x^{21} + x^{20} + x^{18} +
x^{13} + x^{12} + x^9 + x^7 + x^6 + x^5 + 1
$
and ${\cal C}_s$ is a $[127, 91, 8]$ binary cyclic code.
\end{example}
In this section, we obtained interesting results on the code ${\cal C}_s$ under the conditions
of (\ref{eqn-Hcondition}). When $h$ falls outside these ranges, it may be hard to determine the
dimension of the code ${\cal C}_s$, let alone the minimum weight $d$ of the code. Hence,
it would be nice if the following open problem could be solved.
\begin{open}
Determine the dimension and the minimum weight of the code ${\cal C}_s$ of this section when $h$ satisfies
\begin{eqnarray}\label{eqn-Hcondition2}
\left\{ \begin{array}{l}
\frac{m-1}{2} \ge h > \frac{m-1}{4} \mbox{ if } m \equiv 1 \pmod{4}, \\
\frac{m-3}{2} \ge h > \frac{m-3}{4} \mbox{ if } m \equiv 3 \pmod{4}, \\
\frac{m-4}{2} \ge h > \frac{m-4}{4} \mbox{ if } m \equiv 0 \pmod{4}, \\
\frac{m-2}{2} \ge h > \frac{m-2}{4} \mbox{ if } m \equiv 2 \pmod{4}.
\end{array}
\right.
\end{eqnarray}
\end{open}
\section{Binary cyclic codes from a trinomial over ${\rm GF}(2^m)$}
In this section, we study the code $\mathcal{C}_s$ from the trinomial $x+x^{r}+x^{2^h-1}$ where
${\mathrm{wt}}(r)=m-1$ and $0\leq h\leq \lceil {m\over 2}\rceil$. Before doing this, we first introduce some
notations and lemmas which will be used in the sequel. Let $\rho_i$ denote the number of even integers in the 2-cyclotomic coset $C_i$. For each
$i\in \Gamma$, define
\begin{align}\label{eqn-def-v}
v_i={m\rho_i\over \ell_i} \bmod 2
\end{align}
where $\ell_i=|C_i|$.
\begin{lemma}\label{Lemma_inverse}\cite{SiDing}
With the notations as before,
\begin{align}\label{eqn-invese}
{\rm Tr}((1+\alpha^t)^{2^m-2})=\sum_{j\in \Gamma}v_j\left(\sum_{i\in C_j}(\alpha^t)^i\right).
\end{align}
Furthermore, the total number of nonzero coefficients of $\alpha^{it}$ in (\ref{eqn-invese})
is equal to $2^{m-1}$.
\end{lemma}
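The count $2^{m-1}$ asserted in Lemma \ref{Lemma_inverse} is easily confirmed by computer. A small Python sketch of ours computes $\rho_j$, $\ell_j$ and $v_j$ coset by coset and checks the total for $2\leq m\leq 10$:
\begin{verbatim}
def coset(j, m):              # 2-cyclotomic coset of j modulo n = 2^m - 1
    n = 2**m - 1
    c, x = set(), j % n
    while x not in c:
        c.add(x)
        x = (2 * x) % n
    return c

for m in range(2, 11):
    n = 2**m - 1
    seen, total = set(), 0
    for j in range(n):        # run over all 2-cyclotomic cosets modulo n
        if j in seen:
            continue
        c = coset(j, m)
        seen |= c
        rho = sum(1 for x in c if x % 2 == 0)   # number of even integers
        if (m * rho // len(c)) % 2 == 1:        # v_j = 1
            total += len(c)
    assert total == 2**(m - 1)
\end{verbatim}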
\begin{lemma}\label{Lemma_thirdclass}
Let $m\geq 4$, $r$ be an integer with $1\leq r\leq 2^m-2$ and ${\mathrm{wt}}(r)=m-1$, and $h$ an integer with $0\leq h\leq \lceil {m\over 2}\rceil$.
Let $s^{\infty}$ be the sequence of (\ref{eqn-sequence}), where
$
f(x)=x+x^{r}+x^{2^h-1}.
$
Then
the linear span of $s^{\infty}$ is given by
\begin{align}\label{eqn-linspan-thirdclass}
{\mathbb L}_s=\left\{
\begin{array}{ll}
2^{m-1}+m, &\textrm{if~} m\textrm{~is~odd~and~}h=0 \\
2^{m-1}-m, &\textrm{if~} m\textrm{~is~even~and~}h=0 \\
2^{m-1}, &\textrm{if~} h\neq 0
\end{array} \right.\ \
\end{align}
and the minimal polynomial of $s^{\infty}$ is given by
\begin{align}\label{eqn-minimalpoly-thirdclass}
\mathbb {M}_s(x)=\left\{
\begin{array}{ll}
m_{\alpha^{-1}}(x)\prod_{j\in \Gamma\setminus \{1\},v_j =1}m_{\alpha^{-j}}(x), &\textrm{if~} m\textrm{~is ~odd~and~}h=0 \\
\prod_{j\in \Gamma\setminus \{1\},v_j =1}m_{\alpha^{-j}}(x), &\textrm{if~} m\textrm{~is ~even~and~}h=0 \\
\prod_{j\in \Gamma,u_j =1}m_{\alpha^{-j}}(x), &\textrm{if~} h\neq 0
\end{array} \right.\ \
\end{align}
where $m_{\alpha^{-j}}(x)$ is the minimal polynomial of $\alpha^{-j}$ over ${\rm GF}(2)$, $u_1=(v_1+\kappa^{(h)}_1+1) \bmod{2}$, $u_{2j+1}=(v_{2j+1}+\kappa^{(h)}_{2j+1}) \bmod{2}$ for $3\leq 2j+1\leq 2^h-1$, and $u_j=v_j$
for~ $j\in \Gamma\setminus \{2i+1: 1\leq 2i+1\leq 2^h-1\}$. Herein, $\kappa^{(h)}_{2i+1}$ is given by (\ref{eqn-def-kappa}).
\end{lemma}
\vspace{2mm}
\begin{proof}
Note that ${\mathrm{wt}}(r)={\mathrm{wt}}(2^{m}-2)=m-1$. It then follows from Lemma \ref{Lemma_equivalent} and properties of the trace function that
$
{\rm Tr}(x^r)={\rm Tr}(x^{2^m-2}) \textrm{~~for~all~~} x\in {\rm GF}(2^m).
$
We first deal with the case of $h=0$ where $f(x)=1+x+x^{r}$. According to Lemma \ref{Lemma_inverse}, one has
\begin{align}\label{eqn-s-class3_1}
{\rm Tr}(f(1+\alpha^t))&={\rm Tr}(1)+{\rm Tr}(1+\alpha^t)+{\rm Tr}((1+\alpha^t)^{2^{m}-2})\nonumber \\
&=\sum_{i\in C_1}(\alpha^t)^i+\sum_{j\in \Gamma}v_j\left(\sum_{i\in C_j}(\alpha^t)^i\right)\nonumber\\
&=\sum_{i\in C_1}(1+v_1)(\alpha^t)^i+\sum_{j\in \Gamma\setminus \{1\}}v_j\left(\sum_{i\in C_j}(\alpha^t)^i\right)
\end{align}
where $1+v_1$ is performed modulo 2. By the definition of $v_i$, $v_1=(m-1) \bmod 2$. The desired conclusion on the linear span and minimal polynomial of $s^{\infty}$ for the case $h=0$ then follows
from Equation (\ref{eqn-s-class3_1}) and Lemma \ref{lem-ls2}.
Now we assume that $h\neq 0$. From the proof of Lemma \ref{lem-22mm1}, we know that
\begin{equation*}
{\rm Tr}\left(\sum_{i=1}^{2^h-1}x^i\right)=\sum_{\scriptstyle 1\leq 2j+1\leq 2^h-1 \atop \scriptstyle \kappa^{(h)}_{2j+1}=1}{\rm Tr}(x^{2j+1})
\end{equation*}
It then follows from Lemma \ref{Lemma_inverse} that
\begin{align}\label{eqn-s-general}
{\rm Tr}(f(1+\alpha^t))&={\rm Tr}(1+\alpha^t)+{\rm Tr}((1+\alpha^t)^{2^{m}-2})+{\rm Tr}((1+\alpha^t)^{2^h-1})\nonumber \\
&=\sum_{i\in C_1}(\alpha^t)^i+\sum_{j\in \Gamma}v_j\left(\sum_{i\in C_j}(\alpha^t)^i\right)+\sum_{j\in \Gamma_1}\kappa^{(h)}_j\left(\sum_{i\in C_j}(\alpha^t)^i\right)\nonumber\\
&=\sum_{j\in \Gamma}u_j\left(\sum_{i\in C_j}(\alpha^t)^i\right)
\end{align}
where $\Gamma_1=\{2i+1: 1\leq 2i+1\leq 2^h-1\}$, $u_1=(v_1+\kappa^{(h)}_1+1) \bmod{2}$,
$u_j=(v_j+\kappa^{(h)}_j) \bmod{2}$ for $j\in \Gamma_1 \setminus \{1\}$ and $u_j=v_j$
for $j\in \Gamma \setminus \Gamma_1$.
The minimal polynomial in (\ref{eqn-minimalpoly-thirdclass}) then follows from Equation (\ref{eqn-s-general}).
Finally, we show that the linear span of $s^{\infty}$ is equal to $2^{m-1}$ when $h\neq 0$, i.e.,
the total number of nonzero coefficients of $\alpha^{it}$ in (\ref{eqn-s-general}) is equal to $2^{m-1}$.
According to Lemmas \ref{Lemma_inverse} and \ref{Lemma_coset},
it is sufficient to prove that the number of $j\in \Gamma_1$ such that $u_j\neq 0$ is equal to the number
of $j\in \Gamma_1$ such that $v_j\neq 0$. By the definition of $v_j$ and Lemma \ref{lemma_odd_number}, we have
$v_1=(m-1) \bmod{2}$, $v_3=(m-2) \bmod{2}$. According to the definition of $\kappa^{(h)}_j$ in (\ref{eqn-def-kappa}), we have $\kappa^{(h)}_1=h \bmod{2}$ and $\kappa^{(h)}_3=(h-1) \bmod{2}$.
Thus, $u_1=(m+h) \bmod{2}$ and $u_3=(m+h-1) \bmod{2}$. Hence, regardless of the parity of $m$, exactly one of $u_1, u_3$ is nonzero, and exactly one of $v_1, v_3$ is nonzero.
Note that, for any integer $a$ with $2\leq a\leq h-1$, the number of odd integers $j$ satisfying
$2^a<j<2^{a+1}$ is equal to $2^{a-1}$. It is clear that
when $2j+1$ ranges over the odd integers between $2^a$ and $2^{a+1}$, the number of $2j+1$
such that ${\mathrm{wt}}(2j+1)$ is even is equal to $2^{a-2}$. It then follows from Lemma \ref{lemma_odd_number} that
\begin{align*}
|\{2^a<2j+1<2^{a+1}: v_{2j+1}=1\}|=|\{2^a<2j+1<2^{a+1}: v_{2j+1}=0\}|=2^{a-2}.
\end{align*}
On the other hand, when $2j+1$ runs over the odd integers from $2^a$ to $2^{a+1}$,
$\kappa^{(h)}_{2j+1}$ has the same value for these $j$'s due to the definition of $\kappa^{(h)}_{2j+1}$ in (\ref{eqn-def-kappa}). If $\kappa^{(h)}_j=0$, $u_j=v_j$, and otherwise $u_j=v_j+1$.
Thus we have
\begin{align*}
|\{2^a<2j+1<2^{a+1}: u_{2j+1}=1\}|=|\{2^a<2j+1<2^{a+1}: u_{2j+1}=0\}|=2^{a-2}.
\end{align*}
It then follows that
\begin{align*}
|\{j\in \Gamma_1: u_j=1\}|=|\{j\in \Gamma_1: v_j=1\}|.
\end{align*}
By Lemma \ref{Lemma_coset}, $|C_j|=m$ for each $j\in \Gamma_1$. The conclusion then follows
from the analysis above.
\end{proof}
\vspace{2mm}
\begin{theorem}\label{Theorem_third}
The code $\mathcal{C}_s$ defined by the sequence of Lemma \ref{Lemma_thirdclass}
has parameters $[n,n-{\mathbb L}_s,d]$ and generator polynomial $\mathbb {M}_s(x)$ of
(\ref{eqn-minimalpoly-thirdclass}), where ${\mathbb L}_s$ is given by (\ref{eqn-linspan-thirdclass}) and
\begin{align*}
d\geq \left\{
\begin{array}{ll}
8, &\textrm{if~} m\textrm{~is ~odd~and~}h=0 \\
3, &\textrm{if~} m\textrm{~is ~even~and~}h=0. \\
\end{array} \right.\ \
\end{align*}
\end{theorem}
\begin{proof}
The dimension of ${\cal C}_s$ follows from Lemma \ref{Lemma_thirdclass} and the definition
of this code. We only need to prove the conclusion on the minimum distance $d$ of ${\cal C}_s$. It is known that
codes generated by any polynomial $g(x)$ and its reciprocal have the same weight distribution. When $m$ is odd and $h=0$,
since $v_3=v_5=1$, the reciprocal of $\mathbb {M}_s(x)$ has zeros $\alpha^t$ for $t\in \{0,1,2,3,4,5,6\}$. It then follows from the BCH bound
that $d\geq 8$. When $m$ is even and $h=0$, since $v_7=v_{13}=1$, the reciprocal of $\mathbb {M}_s(x)$ has zeros $\alpha^t$ for $t\in \{13,14\}$.
By the BCH bound, $d\geq 3$.
\end{proof}
\begin{open}
Develop a tight lower bound
on the minimum distance of the code $\mathcal{C}_s$ in Theorem
\ref{Theorem_third} for the case that $h>0$.
\end{open}
\vspace{2mm}
\begin{remark}
When $r=2^m-2$ and $h=1$, $f(x)$ becomes the monomial $x^{2^m-2}$, which is the inverse APN function.
It was pointed out in \cite{Ding56} that the code $\mathcal{C}_s$ from the inverse APN function may have poor minimum
distance when $m$ is even. However, when we choose some other $h$, the corresponding codes may have excellent
minimum distance. This is demonstrated by some of the examples below.
\end{remark}
\begin{example}
Let $m=4$, $r=2^m-2$, $h=1$, and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^4+\alpha+1=0$.
Then the generator polynomial of ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^8 + x^7 + x^5 + x^4 + x^3 + x + 1
$
and ${\cal C}_s$ is a $[15, 7, 3]$ binary cyclic code. It is not optimal.
\end{example}
\begin{example}
Let $m=4$, $r=2^m-2$, $h=0$, and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^4+\alpha+1=0$.
Then the generator polynomial of ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^4+x+1
$
and ${\cal C}_s$ is a $[15, 11, 3]$ optimal binary cyclic code. The optimal binary linear code with the
same parameters in the Database is not cyclic.
\end{example}
\begin{example}
Let $m=4$, $r=2^m-2$, $h=2$, and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^4+\alpha+1=0$.
Then the generator polynomial of ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^8+x^7+x^6+x^4+1
$
and ${\cal C}_s$ is a $[15, 7, 5]$ optimal binary cyclic code. The optimal binary linear code with the
same parameters in the Database is not cyclic.
\end{example}
\begin{example}
Let $m=5$, $r=2^m-2$, $h=0$, and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^5+\alpha^2+1=0$.
Then the generator polynomial of ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^{21} + x^{18} + x^{17} + x^{15} + x^{13} + x^{10} + x^5 + x^4 + x^3 + x^2 + x + 1
$
and ${\cal C}_s$ is a $[31,10,12]$ optimal binary cyclic code.
\end{example}
\begin{example}
Let $m=5$, $r=2^m-2$, $h=1$, and $\alpha$ be a generator of ${\rm GF}(2^m)^*$ with $\alpha^5+\alpha^2+1=0$.
Then the generator polynomial of ${\cal C}_s$ is
$
\mathbb {M}_s(x)=x^{16} + x^{14} + x^{13} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^2 + x + 1
$
and ${\cal C}_s$ is a $[31,15,8]$ optimal binary cyclic code. The optimal binary linear code with the
same parameters in the Database is not cyclic.
\end{example}
\section{Open problems regarding binary cyclic codes from monomials}
In the previous sections, we investigated binary cyclic codes defined by some monomials.
It would be good if the following open problems could be solved.
\begin{open}
Determine the dimension and the minimum weight of the code ${\cal C}_s$ defined by the monomials
$x^e$, where $e=2^{(m-1)/2}+2^{(3m-1)/4}-1$ and $m \equiv 3 \pmod{4}$.
\end{open}
\begin{open}
Determine the dimension and the minimum weight of the code ${\cal C}_s$ defined by the monomials
$x^e$, where $e=2^{4i}+2^{3i}+2^{2i}+2^i-1$ and $m=5i$.
\end{open}
\section{Concluding remarks and summary}
In this paper, we constructed a number of families of cyclic codes with monomials and trinomials
of special types. The dimension of some of the codes is flexible. We determined the minimum
weight for some families of cyclic codes, and developed tight lower bounds for other
families of cyclic codes. The main results of this paper showed that the approach of
constructing cyclic codes with polynomials is promising. While it is rare to see optimal
cyclic codes constructed with tools in algebraic geometry and algebraic function fields,
the simple construction of cyclic codes with monomials and trinomials over ${\rm GF}(r)$ employed
in this paper is impressive in the sense that it has produced optimal and almost
optimal cyclic codes.
The binary sequences defined by some of the monomials and trinomials have large linear
span. These sequences also have reasonable autocorrelation properties. They could be
employed as keystreams in certain stream ciphers. Hence, the contribution of this paper to cryptography
is the computation of the linear spans of these sequences.
It is known that long BCH codes are bad \cite{LinWeldon}. However, it was indicated in
\cite{BJ74,MartW} that there may be good cyclic codes. The cyclic codes presented in
this paper show that some families of cyclic codes are in fact very good.
Four open problems regarding binary cyclic codes were proposed in this paper. The reader is cordially
invited to attack them.
| {
"timestamp": "2013-10-08T02:01:48",
"yymm": "1310",
"arxiv_id": "1310.1442",
"language": "en",
"url": "https://arxiv.org/abs/1310.1442",
"abstract": "Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, monomials and trinomials over finite fields with even characteristic are employed to construct a number of families of binary cyclic codes. Lower bounds on the minimum weight of some families of the cyclic codes are developed. The minimum weights of other families of the codes constructed in this paper are determined. The dimensions of the codes are flexible. Some of the codes presented in this paper are optimal or almost optimal in the sense that they meet some bounds on linear codes. Open problems regarding binary cyclic codes from monomials and trinomials are also presented.",
"subjects": "Information Theory (cs.IT)",
"title": "Binary Cyclic Codes from Explicit Polynomials over $\\gf(2^m)$",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877033706601,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7089594647790692
} |
https://arxiv.org/abs/0708.2871 | Sharpness of the Finsler-Hadwiger inequality | In this paper we shall prove a sharpened version of the Finsler-Hadwiger inequality which is a strong generalization of Weitzenbock inequality. After that we give another refinement of this inequality and in the final part we provide some basic applications. | \section{Introduction \& Preliminaries}\selabel{0}
The Hadwiger-Finsler inequality is known in the mathematical literature
as a generalization of the
following
\begin{te}\label{t1}
In any triangle $ABC$ with the side lengths $a, b, c$ and $S$ its area,
the following inequality
is valid
$$a^{2}+b^{2}+c^{2}\geq 4S\sqrt{3}.$$
\end{te} This inequality is due to Weitzenbock (Math. Z., 137-146, 1919),
but it has also appeared at the International Mathematical Olympiad
in 1961.
In [7.], one can find eleven proofs. In fact, in any triangle $ABC$ the
following sequence of
inequalities is valid:
$$a^2+b^2+c^2\geq ab+bc+ca\geq a\sqrt{bc}+b\sqrt{ca}+c\sqrt{ab}\geq
3\sqrt [3] {a^2b^2c^2}\geq
4S\sqrt{3}.$$
A stronger version is the one found by Finsler and Hadwiger in
1938, which states that ([2.]) \begin{te} \label{t2_cezar}
In any triangle $ABC$ with the side lengths $a, b, c$ and $S$ its area,
the following inequality
is valid
$$a^{2}+b^{2}+c^{2}\geq 4S\sqrt{3}+(a-b)^{2}+(b-c)^{2}+(c-a)^{2}.$$
\end{te} In [8.] the first author of this note gave a simple proof using
only AM-GM and the following inequality due to Mitrinovic: \begin{te}
\label{t3}
In any triangle $ABC$ with the side lengths $a, b, c$ and $s$ its
semiperimeter and $R$ its
circumradius, the following inequality holds
$$s\leq\frac{3\sqrt{3}}{2}R.$$
\end{te}
This inequality also appears in [3.].
A nice inequality, sharper than Mitrinovic's and equivalent to the first
theorem, is the following:
\begin{te} \label{t4}
In any triangle $ABC$ with sides of lengths $a, b, c$ and with inradius
of $r$, circumradius of
$R$ and $s$ its semiperimeter the following inequality holds
$$4R+r\geq s\sqrt{3}.$$
\end{te}
In [4.], Wu gave a nice sharpening and a generalization of the
Finsler-Hadwiger inequality.
Now, we give an algebraic inequality due to I. Schur ([5.]), namely
\begin{te} \label{t5}
For any positive real numbers $\displaystyle x, y, z$ and
$t\in\mathbb{R}$
the following inequality holds
$$x^{t}(x-y)(x-z)+y^{t}(y-x)(y-z)+z^{t}(z-y)(z-x)\geq 0.$$
\end{te}
The most common case is $\displaystyle t=1$, which has the following
equivalent form:
$$x^3+y^3+z^3+3xyz\geq xy(x+y)+yz(y+z)+zx(z+x)$$
which is equivalent to
$$x^3+y^3+z^3+6xyz\geq (x+y+z)(xy+yz+zx).$$
Now, using the identity $\displaystyle
x^3+y^3+z^3-3xyz=(x+y+z)(x^2+y^2+z^2-xy-yz-zx)$ one can
easily deduce that
$$2(xy+yz+zx)-(x^2+y^2+z^2)\leq\frac{9xyz}{x+y+z}.(*)$$
Another interesting case is $\displaystyle t=2$. We have
$$x^4+y^4+z^4+xyz(x+y+z)\geq xy(x^2+y^2)+yz(y^2+z^2)+zx(z^2+x^2)$$
which is equivalent to
$$x^4+y^4+z^4+2xyz(x+y+z)\geq (x^2+y^2+z^2)(xy+yz+zx).(**)$$
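Both $(*)$ and $(**)$ are easy to spot-check numerically. The following Python sketch of ours samples random positive triples; the small tolerance guards against floating-point round-off:
\begin{verbatim}
import random

for _ in range(10**4):
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    lhs = 2*(x*y + y*z + z*x) - (x*x + y*y + z*z)
    assert lhs <= 9*x*y*z/(x + y + z) + 1e-9                  # checks (*)
    assert (x**4 + y**4 + z**4 + 2*x*y*z*(x + y + z)
            >= (x*x + y*y + z*z)*(x*y + y*z + z*x) - 1e-9)    # checks (**)
\end{verbatim}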
Now, let's rewrite theorem 1.2. as $$2(ab+bc+ca)-(a^2+b^2+c^2)\geq
4S\sqrt{3}.(***)$$
By squaring $(***)$ and using Heron formula we obtain
$$4\left(\sum_{cyc}ab\right)^{2}+\left(\sum_{cyc}a^{2}\right)^{2}-4\left(\sum_{cyc}ab\right)\left(\sum_{cyc}a^{2}\right)\geq 3(a+b+c)\prod (b+c-a)$$
which is equivalent to
$$6\sum_{cyc}{a^2b^2} + 4\sum_{cyc}{a^2bc} + \sum_{cyc}{a^4} -
4\sum_{cyc}{ab(a^2+b^2)} \geq
3(a+b+c)\prod (b+c-a).$$
By making some elementary calculations we get
$$6\sum_{cyc}{a^2b^2} + 4\sum_{cyc}{a^2bc} + \sum_{cyc}{a^4} -
4\sum_{cyc}{ab(a^2+b^2)} \geq
3(a+b+c)\left(\sum_{cyc}{ab(a+b)} - \sum_{cyc}{a^3} - 2abc\right).$$
We obtain the equivalent inequalities
$$\sum_{cyc}{a^4} +\sum_{cyc}{a^2bc} \geq \sum_{cyc}{ab(a^2+b^2)}$$
$$a^2(a-b)(a-c)+b^2(b-a)(b-c)+c^2(c-a)(c-b)\geq 0,$$
which is nothing else than Schur's inequality in the particular case
$\displaystyle t=2$.
In what follows we will give another form of Schur's inequality. That
is
\begin{te} \label{t6} For any positive reals $\displaystyle m, n, p$, the
following inequality holds
$$\frac{mn}{p}+\frac{np}{m}+\frac{pm}{n}+\frac{9mnp}{mn+np+pm}\geq
2(m+n+p).$$
\end{te}
\it Proof. \normalfont We denote $\displaystyle x=\frac{1}{m},
y=\frac{1}{n}$ and $\displaystyle
z=\frac{1}{p}$. We obtain the equivalent inequality
$$\displaystyle\frac{x}{yz}+\frac{y}{zx}+\frac{z}{xy}+\frac{9}{x+y+z}\geq\frac{2(xy+yz+zx)}{xyz}
\Leftrightarrow$$
$$\displaystyle
2(xy+yz+zx)-(x^{2}+y^{2}+z^{2})\leq\frac{9xyz}{x+y+z},$$
which is $(*)$.
\section{Main results}\selabel{1}
In the previous section we stated a sequence of inequalities stronger
than the Weitzenbock inequality.
In fact, one can prove that the following sequence of inequalities
holds
$$a^2+b^2+c^2\geq ab+bc+ca\geq a\sqrt{bc}+b\sqrt{ca}+c\sqrt{ab}\geq
3\sqrt [3] {a^2b^2c^2}\geq
18Rr,$$
where $\displaystyle R$ is the circumradius and $\displaystyle r$ is
the inradius of the triangle
with sides of lengths $\displaystyle a, b, c$. At this point, one
expects to have a stronger
Finsler-Hadwiger inequality with $\displaystyle 18Rr$ instead of
$\displaystyle 4S\sqrt{3}$.
Unfortunately, the following inequality holds true
$$a^{2}+b^{2}+c^{2}\leq 18Rr+(a-b)^{2}+(b-c)^{2}+(c-a)^{2},$$
because it is equivalent to
$$2(ab+bc+ca)-(a^{2}+b^{2}+c^{2})\leq 18Rr=\frac{9abc}{a+b+c},$$
which is $(*)$ again. Now, we are ready to prove the first
refinement of the Finsler-Hadwiger inequality:
\begin{te}\label{t7}
In any triangle $ABC$ with the side lengths $a, b, c$, with $S$ its
area, $\displaystyle R$ the
circumradius and $\displaystyle r$ the inradius of the triangle
$\displaystyle ABC$ the following
inequality is valid
$$a^{2}+b^{2}+c^{2}\geq
2S\sqrt{3}+2r(4R+r)+(a-b)^{2}+(b-c)^{2}+(c-a)^{2}.$$
\end{te}
\it Proof. \normalfont We rewrite the inequality as
$$\displaystyle 2(ab+bc+ca)-(a^{2}+b^{2}+c^{2})\geq
2S\sqrt{3}+2r(4R+r).$$
Since $\displaystyle ab+bc+ca=s^{2}+r^{2}+4Rr$, it follows immediately
that $\displaystyle
a^{2}+b^{2}+c^{2}=2(s^{2}-r^{2}-4Rr)$.
The inequality is equivalent to
$$\displaystyle 16Rr+4r^{2}\geq 2S\sqrt{3}+2r(4R+r).$$
We finally obtain
$$\displaystyle 4R+r\geq s\sqrt{3},$$ which is exactly theorem 1.4.
$\hfill\Box$
The second refinement of the Finsler-Hadwiger inequality is the
following
\begin{te}\label{t2}
In any triangle $ABC$ with the side lengths $a, b, c$, with $S$ its
area, $\displaystyle R$ the
circumradius and $\displaystyle r$ the inradius of the triangle
$\displaystyle ABC$ the following
inequality is valid
$$\displaystyle a^{2}+b^{2}+c^{2}\geq
4S\sqrt{3+\displaystyle\frac{4(R-2r)}{4R+r}}+(a-b)^{2}+(b-c)^{2}+(c-a)^{2}.$$
\end{te}
\it Proof. \normalfont In theorem 1.6 we put $\displaystyle
m=\frac{1}{2}(b+c-a),
n=\frac{1}{2}(c+a-b)$ and $\displaystyle p=\frac{1}{2}(a+b-c)$. We get
$$\displaystyle\sum_{cyc}\frac{(b+c-a)(c+a-b)}{(a+b-c)}+
\frac{9(b+c-a)(c+a-b)(a+b-c)}{\displaystyle\sum_{cyc}(b+c-a)(c+a-b)}\geq
2(a+b+c).$$
Since $\displaystyle ab+bc+ca=s^{2}+r^{2}+4Rr$ $(1)$ and
$a^{2}+b^{2}+c^{2}=2(s^{2}-r^{2}-4Rr)$
$(2)$, we deduce
$$\displaystyle\sum_{cyc}(b+c-a)(c+a-b)=4r(4R+r).$$
On the other hand, by Heron's formula we have $\displaystyle
(b+c-a)(c+a-b)(a+b-c)=8sr^{2}$, so
our inequality is equivalent to
$$\displaystyle\sum_{cyc}\frac{(b+c-a)(c+a-b)}{(a+b-c)} +
\frac{18sr}{4R+r} \geq 4s
\Leftrightarrow$$
$$\displaystyle\sum_{cyc}\frac{(s-a)(s-b)}{(s-c)} + \frac{9sr}{4R+r}
\geq 2s \Leftrightarrow \displaystyle\sum_{cyc}{(s-a)^2(s-b)^2} +
\frac{9s^2r^3}{4R+r} \geq 2s^2r^2.$$
Now, according to the identity
$$\sum_{cyc}{(s-a)^2(s-b)^2} = \left(\sum_{cyc}{(s-a)(s-b)}\right)^2 -
2s^2r^2,$$
we have
$$\displaystyle\left(\sum_{cyc}{(s-a)(s-b)}\right)^2-2s^2r^2 +
\frac{9s^2r^3}{4R+r} \geq 2s^2r^2.$$
And since
$$\displaystyle\sum_{cyc}{(s-a)(s-b)}=r(4R+r),$$
it follows that
$$\displaystyle r^2(4R+r)^2 + \frac{9s^2r^3}{4R+r} \geq 4s^2r^2,$$
which rewrites as
$$ \left(\frac{4R+r}{s}\right)^{2} + \frac{9r}{4R+r} \geq 4. $$
From the identities mentioned in $(1)$ and $(2)$ we deduce that
$$ \frac{4R+r}{s}=\frac{2(ab+bc+ca)-(a^2+b^2+c^2)}{4S}. $$
The inequality rewrites as
$$ \left(\frac{2(ab+bc+ca)-(a^2+b^2+c^2)}{4S}\right)^2 \geq 4 -
\frac{9r}{4R+r} \Leftrightarrow
$$
$$ \left(\frac{(a^2+b^2+c^2) - \left( (a-b)^2+ (b-c)^2 + (c-a)^2
\right)}{4S}\right)^2 \geq 3 +
\frac{4(R-2r)}{4R+r} \Leftrightarrow $$
$$ \displaystyle a^{2}+b^{2}+c^{2}\geq
4S\sqrt{3+\displaystyle\frac{4(R-2r)}{4R+r}}+(a-b)^{2}+(b-c)^{2}+(c-a)^{2}.$$
$\hfill\Box$
{\bf{Remark.}} From Euler's inequality $\displaystyle R\geq 2r$, we
obtain theorem 1.2.
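Theorems 2.1 and 2.2 can be spot-checked in the same spirit. The following Python sketch of ours samples random triangles and verifies both refinements:
\begin{verbatim}
import random
from math import sqrt

checked = 0
while checked < 10**4:
    a, b, c = sorted(random.uniform(0.1, 10.0) for _ in range(3))
    if a + b <= c:
        continue                                # not a triangle
    s = (a + b + c) / 2
    S = sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    r, R = S / s, a * b * c / (4 * S)
    Q = (a - b)**2 + (b - c)**2 + (c - a)**2
    assert a*a + b*b + c*c >= 2*S*sqrt(3) + 2*r*(4*R + r) + Q - 1e-9
    assert a*a + b*b + c*c >= 4*S*sqrt(3 + 4*(R - 2*r)/(4*R + r)) + Q - 1e-9
    checked += 1
\end{verbatim}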
\section{Applications}
In this section we illustrate some basic applications of the second
refinement of the Finsler-Hadwiger
inequality. We begin with\\
{\bf{Application 1.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$ the following inequality holds\normalfont
$$\displaystyle\frac{1}{b+c-a}+\frac{1}{c+a-b}+\frac{1}{a+b-c}\geq\frac{1}{2r}\sqrt{4-\frac{9r}{4R+r}}.$$
\it Solution.\normalfont From $$\displaystyle
(b+c-a)(c+a-b)(a+b-c)=8sr^{2}\quad\textrm{and}\quad\sum_{cyc}(b+c-a)(c+a-b)=4r(4R+r),$$ it is quite easy to observe that
$$\displaystyle\frac{1}{b+c-a}+\frac{1}{c+a-b}+\frac{1}{a+b-c}=\frac{4R+r}{2sr}.$$
Now, applying the inequality $$\displaystyle
\left(\frac{4R+r}{s}\right)^{2}+\frac{9r}{4R+r}\geq
4,$$ we get
$$\displaystyle\left(\frac{1}{b+c-a}+\frac{1}{c+a-b}+\frac{1}{a+b-c}\right)^{2}=\frac{1}{4r^{2}}
\left(\frac{4R+r}{s}\right)^{2}\geq\frac{1}{4r^{2}}\left(4-\frac{9r}{4R+r}\right).$$
The given inequality follows immediately. $\hfill\Box$
{\bf{Application 2.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$ the following inequality holds\normalfont
$$\displaystyle\frac{1}{a(b+c-a)}+\frac{1}{b(c+a-b)}+\frac{1}{c(a+b-c)}\geq\frac{1}{8Rr}\left(5-
\frac{9r}{4R+r}\right).$$
\it Solution.\normalfont We start from the following identity:
$$\displaystyle\sum_{cyc}\frac{(s-a)(s-b)}{c}=\frac{r(s^{2}+(4R+r)^{2})}{4sR}=\frac{S}{4R}\left(1+
\left(\frac{4R+r}{s}\right)^{2}\right).$$
Using the inequality
$$\displaystyle\left(\frac{4R+r}{s}\right)^{2}+\frac{9r}{4R+r}\geq 4,$$
we have
$$\displaystyle\sum_{cyc}\frac{(s-a)(s-b)}{c}\geq\frac{S}{4R}\left(5-\frac{9r}{4R+r}\right).$$
Since $\displaystyle\sum_{cyc}\frac{1}{a(b+c-a)}=\frac{1}{2sr^{2}}\sum_{cyc}\frac{(s-a)(s-b)}{c}$ and $S=rs$, the problem follows easily. $\hfill\Box$
{\bf{Application 3.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$ the following inequality holds\normalfont
$$\displaystyle\frac{1}{(b+c-a)^{2}}+\frac{1}{(c+a-b)^{2}}+\frac{1}{(a+b-c)^{2}}\geq
\frac{1}{r^{2}}\left(\frac{1}{2}-\frac{9r}{4(4R+r)}\right).$$\\
\it Solution.\normalfont
From the identities $(1)$, $(2)$ and $\displaystyle (b+c-a)(c+a-b)(a+b-c)=8sr^{2},$ it follows that
$$(b+c-a)^{2}+(c+a-b)^{2}+(a+b-c)^{2}=4(s^2-2r^2-8Rr)$$
and
$$(b+c-a)^{2}(c+a-b)^{2}+(a+b-c)^{2}(c+a-b)^{2}+(b+c-a)^{2}(a+b-c)^{2}=16r^{2}\left((4R+r)^{2}-2s^{2}\right).$$
We get
$$\displaystyle\frac{1}{(b+c-a)^{2}}+\frac{1}{(c+a-b)^{2}}+\frac{1}{(a+b-c)^{2}}=\frac{1}{4}\left(\frac{(4R+r)^{2}}{s^{2}r^{2}}-\frac{2}{r^{2}}\right).$$
Now, applying the inequality
$$\displaystyle\left(\frac{4R+r}{s}\right)^{2}+\frac{9r}{4R+r}\geq 4,$$
we have
$$\displaystyle\frac{1}{(b+c-a)^{2}}+\frac{1}{(c+a-b)^{2}}+\frac{1}{(a+b-c)^{2}}\geq\frac{1}{4r^{2}}\left(2-\frac{9r}{4R+r}\right)=\frac{1}{r^{2}}\left(\frac{1}{2}-\frac{9r}{4(4R+r)}\right).$$
$\hfill\Box$
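Applications 1 and 3 can be spot-checked numerically as well; the following Python sketch of ours verifies both on random triangles (notation as in the earlier sketch):
\begin{verbatim}
import random
from math import sqrt

checked = 0
while checked < 10**4:
    a, b, c = sorted(random.uniform(0.1, 10.0) for _ in range(3))
    if a + b <= c:
        continue
    s = (a + b + c) / 2
    S = sqrt(s * (s - a) * (s - b) * (s - c))
    r, R = S / s, a * b * c / (4 * S)
    inv1 = 1/(b + c - a) + 1/(c + a - b) + 1/(a + b - c)
    assert inv1 >= sqrt(4 - 9*r/(4*R + r)) / (2*r) - 1e-9      # Application 1
    inv2 = 1/(b + c - a)**2 + 1/(c + a - b)**2 + 1/(a + b - c)**2
    assert inv2 >= (0.5 - 9*r/(4*(4*R + r))) / r**2 - 1e-9     # Application 3
    checked += 1
\end{verbatim}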
{\bf{Application 4.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$ the following inequality holds\normalfont
$$\displaystyle\frac{a^2}{b+c-a}+\frac{b^2}{c+a-b}+\frac{c^2}{a+b-c}\geq
3R\sqrt{4-\frac{9r}{4R+r}}.$$\\
\it Solution.\normalfont
Without loss of generality, we assume that $\displaystyle a\leq b\leq
c$. It follows quite easily
that $\displaystyle a^2\leq b^2\leq c^2$ and
$\displaystyle\frac{1}{b+c-a}\leq\frac{1}{c+a-b}\leq\frac{1}{a+b-c}$.
Applying Chebyshev's
inequality, we have
$$\displaystyle\frac{a^2}{b+c-a}+\frac{b^2}{c+a-b}+\frac{c^2}{a+b-c}\geq\left(\frac{a^2+b^2+c^2}{3}
\right)\left(\frac{1}{b+c-a}+\frac{1}{c+a-b}+\frac{1}{a+b-c}\right).$$
Now, the first application and the inequality $\displaystyle
a^2+b^2+c^2\geq 18Rr$ solve the
problem. $\hfill\Box$
{\bf{Application 5.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$ and with the exradii $\displaystyle r_{a},
r_{b}, r_{c}$ corresponding to the triangle $ABC$, the following inequality
holds\normalfont
$$\displaystyle\frac{a}{r_{a}}+\frac{b}{r_{b}}+\frac{c}{r_{c}}\geq
2\sqrt{3+\frac{4(R-2r)}{4R+r}}.$$
\it Solution.\normalfont
From the well-known relations $\displaystyle r_{a}=\frac{S}{s-a}$ and
the analogues, the inequality is equivalent to
$$\displaystyle\frac{a}{r_{a}}+\frac{b}{r_{b}}+\frac{c}{r_{c}}=\frac{2(ab+bc+ca)-(a^2+b^2+c^2)}{2S}\geq 2\sqrt{3+\frac{4(R-2r)}{4R+r}}.$$
The last inequality follows from theorem 2.2 immediately. $\hfill\Box$
{\bf{Application 6.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$, with the exradii $\displaystyle r_{a},
r_{b}, r_{c}$ corresponding to the triangle $ABC$ and with $\displaystyle
h_{a}, h_{b}, h_{c}$ being the altitudes of the triangle $\displaystyle
ABC$, the following inequality holds\normalfont
$$\displaystyle\frac{1}{h_{a}r_{a}}+\frac{1}{h_{b}r_{b}}+\frac{1}{h_{c}r_{c}}\geq\frac{1}{S}\sqrt{3+\frac{4(R-2r)}{4R+r}}.$$
\it Solution.\normalfont From the well-known relations in triangle
$ABC$, $\displaystyle h_{a}=\frac{2S}{a},\displaystyle
r_{a}=\frac{S}{s-a}$ we have
$\displaystyle\frac{1}{h_{a}r_{a}}=\frac{a(s-a)}{2S^{2}}$. Doing
the same thing for the analogues and adding them up we get
$$\displaystyle\frac{1}{h_{a}r_{a}}+\frac{1}{h_{b}r_{b}}+\frac{1}{h_{c}r_{c}}=\frac{1}{2S^{2}}\left(a(s-a)+b(s-b)+c(s-c)\right).$$
On the other hand by using theorem 2.2 in the form
$$\displaystyle a(s-a)+b(s-b)+c(s-c)\geq
2S\sqrt{3+\frac{4(R-2r)}{4R+r}}$$
we obtain the desired inequality. $\hfill\Box$
{\bf{Application 7.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths $\displaystyle a, b, c$ the following inequality holds
true
$$\displaystyle\tan\frac{A}{2}+\tan\frac{B}{2}+\tan\frac{C}{2}\geq\sqrt{3+\frac{4(R-2r)}{4R+r}}.$$
\it Solution.\normalfont
From the cosine law we get $\displaystyle a^{2}=b^{2}+c^{2}-2bc\cos A$.
Since $S=\frac{1}{2}bc\sin A$ it follows that
$$\displaystyle a^{2}=(b-c)^{2}+4S\cdot\frac{1-\cos A}{\sin A}.$$
On the other hand by the trigonometric formulae $\displaystyle 1-\cos
A=2\sin^{2}\frac{A}{2}$ and $\displaystyle\sin
A=2\sin\frac{A}{2}\cos\frac{A}{2}$ we get
$$\displaystyle a^{2}=(b-c)^{2}+4S\tan\frac{A}{2}.$$
Doing the same for all sides of the triangle $ABC$ and adding up we
obtain
$$a^{2}+b^{2}+c^{2}=(a-b)^{2}+(b-c)^{2}+(c-a)^{2}+4S\left(\tan\frac{A}{2}+\tan\frac{B}{2}+\tan\frac{C}{2}\right).$$
Now, by theorem 2.2 the inequality follows. $\hfill\Box$
{\bf{Application 8.}} \it In any triangle $\displaystyle ABC$ with the
sides of lengths
$\displaystyle a, b, c$ and with the exradii $\displaystyle r_{a},
r_{b}, r_{c}$ corresponding to the triangle $ABC$, the following inequality
holds\normalfont
$$\displaystyle\frac{r_{a}}{a}+\frac{r_{b}}{b}+\frac{r_{c}}{c}\geq
\frac{s(5R-r)}{R(4R+r)}.$$
\it Solution.\normalfont It is well-known that the following identity
is valid in any triangle ABC
$$\displaystyle\frac{r_{a}}{a}+\frac{r_{b}}{b}+\frac{r_{c}}{c}=\frac{(4R+r)^2+s^2}{4Rs}.$$
So, the inequality rewrites as
$$\displaystyle\frac{(4R+r)^2}{s^2} + 1 \geq \frac{4(5R-r)}{4R+r},$$
which is equivalent to
$$\displaystyle\left(\frac{4R+r}{s}\right)^{2}+\frac{9r}{4R+r}\geq 4.$$
$\hfill\Box$
\paragraph*{Acknowledgment.}
The authors would like to thank Nicolae Constantinescu from the
University of Craiova and
Marius Ghergu from the Institute of Mathematics of the Romanian
Academy for useful suggestions. This paper was completed while the first
author participated in the summer school on \it Critical Point theory
and its applications \normalfont organized in Cluj-Napoca. We are
grateful to Professors Vicen\c tiu R\u adulescu from the
Institute of Mathematics of the Romanian Academy and Csaba Varga from
Babe\c s-Bolyai University, Cluj-Napoca. \\
| {
"timestamp": "2007-09-19T20:09:40",
"yymm": "0708",
"arxiv_id": "0708.2871",
"language": "en",
"url": "https://arxiv.org/abs/0708.2871",
"abstract": "In this paper we shall prove a sharpened version of the Finsler-Hadwiger inequality which is a strong generalization of Weitzenbock inequality. After that we give another refinement of this inequality and in the final part we provide some basic applications.",
"subjects": "Metric Geometry (math.MG); History and Overview (math.HO)",
"title": "Sharpness of the Finsler-Hadwiger inequality",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877018151064,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.708959463656359
} |
https://arxiv.org/abs/2211.06606 | A Lower Bound on the List-Decodability of Insdel Codes | For codes equipped with metrics such as Hamming metric, symbol pair metric or cover metric, the Johnson bound guarantees list-decodability of such codes. That is, the Johnson bound provides a lower bound on the list-decoding radius of a code in terms of its relative minimum distance $\delta$, list size $L$ and the alphabet size $q.$ For study of list-decodability of codes with insertion and deletion errors (we call such codes insdel codes), it is natural to ask the open problem whether there is also a Johnson-type bound. The problem was first investigated by Wachter-Zeh and the result was amended by Hayashi and Yasunaga where a lower bound on the list-decodability for insdel codes was derived.The main purpose of this paper is to move a step further towards solving the above open problem. In this work, we provide a new lower bound for the list-decodability of an insdel code. As a consequence, we show that unlike the Johnson bound for codes under other metrics that is tight, the bound on list-decodability of insdel codes given by Hayashi and Yasunaga is not tight. Our main idea is to show that if an insdel code with a given Levenshtein distance $d$ is not list-decodable with list size $L$, then the list decoding radius is lower bounded by a bound involving $L$ and $d$. In other words, if the list decoding radius is less than this lower bound, the code must be list-decodable with list size $L$. At the end of the paper we use such bound to provide an insdel-list-decodability bound for various well-known codes, which has not been extensively studied before. | \section{Introduction}\label{sec:intro}
The Johnson bound is a benchmark for the study of list-decodability of codes. The Johnson bounds for Hamming, symbol-pair and cover metric codes were derived in \cite{Joh62,Joh63}, \cite{LXY18} and \cite{LXY19}, respectively. The usual way to derive the Johnson bound with a given minimum distance is to show that the intersection of a code of the given minimum distance with any ball of radius $t$ contains at most $L$ elements. However, for some metrics such as rank-metric, a Johnson bound does not exist \cite{Wac13}.
When considering the Levenshtein distance, which is measured by the minimum number of insertion and deletion operations required to transform one word into another, the classical approach does not work well. This is due to the fact that, in contrast to the Hamming distance and other distances that are invariant under translation, the Levenshtein distance is not. This causes the size of a Levenshtein ball of any positive radius to depend on its centre. Hence, one wonders whether there is also a Johnson-type bound on the list-decodability of insdel codes. On the other hand, the Johnson bounds for Hamming, symbol-pair and cover metric codes obey the following two properties:
\begin{itemize}
\item[(i)] every code that satisfies such a bound is guaranteed to be list-decodable with polynomial list size;
\item[(ii)] for any decoding radius exceeding the bound, there exists a code that is not list-decodable up to such radius for any polynomial list size.
\end{itemize}
In this paper, we define the bound satisfying the above two properties to be the Johnson bound. For a more formal definition of the Johnson bound, interested readers can refer to \cite[Chapter $4$]{Gur05}. In particular, by \cite{Wac13}, the Johnson bound for rank-metric codes is just half of the minimum distance, which is also the unique decoding radius. It is clear that for the study of list-decodability, it is of great importance to derive a Johnson bound satisfying the above properties. Thus, we propose an open problem.
\noindent\textbf{Open problem:} Does a Johnson-type bound satisfying the two requirements discussed above exist for insdel codes? If it exists, find the bound.
The main purpose of this paper is to investigate lower bounds on the list-decodability of insdel codes in an effort to provide a clearer picture of the Johnson-type bound we have previously discussed.
\subsection{Informal definition and brief literature review}
Insertion and deletion (Insdel for short) errors are synchronization errors~\cite{HS17,HSS18} in communication systems caused by the loss of positional information of the message. They have recently attracted much attention due to their applicability in many interesting fields such as DNA storage and DNA analysis~\cite{JHS+17, XW05}, race-track memory error correction \cite{CKV+17} and language processing \cite{BM00, Och03}.
The study of codes with insertion and deletion errors was pioneered by Levenshtein, Varshamov and Tenengolts in the 1960s~\cite{VT65, Lev65, Lev67, Ten84}. This study was then further developed by Brakensiek, Guruswami and Zbarsky \cite{BGZ18}. There have also been different directions for the study of insdel codes such as the study of some special forms of the insdel errors \cite{SWG+17, CSF+14, LWY17, Mit08} as well as their relations with Weyl groups \cite{Hag18}.
Given two words defined over the same alphabet (not necessarily of the same length), we define their Levenshtein distance to be the minimum number of insertion and deletion operations required to transform one word to the other\footnote{Originally, Levenshtein distance is defined to be the minimum number of synchronization operation, including substitution, required to transform one word to the other. In this work, we abuse the term for ease of notation, as remarked in Remark~\ref{rmk:Levenshtein}.}. As in the other metrics, the unique decoding radius of an insdel code is completely determined by its Levenshtein distance. Precisely speaking, the total number of insertion and deletion errors that can be uniquely corrected is $\lfloor(d-1)/2\rfloor$ if Levenshtein distance is $d$.
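For concreteness: since only insertions and deletions are allowed, this distance can be computed from the longest common subsequence as $d(u,v)=|u|+|v|-2\,{\rm LCS}(u,v)$. A short Python sketch of ours implements this by dynamic programming:
\begin{verbatim}
def insdel_distance(u, v):
    # L[i][j] = length of an LCS of u[:i] and v[:j]
    L = [[0] * (len(v) + 1) for _ in range(len(u) + 1)]
    for i, x in enumerate(u):
        for j, y in enumerate(v):
            L[i + 1][j + 1] = (L[i][j] + 1 if x == y
                               else max(L[i][j + 1], L[i + 1][j]))
    return len(u) + len(v) - 2 * L[len(u)][len(v)]

assert insdel_distance("10101", "0101") == 1    # one deletion
assert insdel_distance("10101", "01010") == 2   # one deletion + one insertion
\end{verbatim}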
However, in the case of list-decoding, insertion and deletion errors have to be treated separately, as explained in Subsection~\ref{sec:IntPrevRes}.
Thus, we call a code of length $n$ equipped with the Levenshtein distance $(\tau_I,\tau_D,L)$-insdel-list-decodable if it can list-decode against up to $\tau_I n$ insertion and $\tau_D n$ deletion errors with list size at most $L.$ The formal definitions of the Levenshtein distance and insdel-list-decodability can be found in Definitions~\ref{def:insdeldist} and~\ref{def:LD} respectively.
\subsection{Previous Results}\label{sec:IntPrevRes}
There have been several works on the list-decoding of codes under the Levenshtein distance. Guruswami and Wang~\cite{GW17} studied list-decoding of binary codes against deletion errors only and constructed binary codes with decoding radius close to $\frac 12.$ Based on indexing schemes and concatenated codes, further works provided efficient encoding and decoding algorithms by concatenating an inner code achieving a previously derived bound with an outer list-recoverable Reed-Solomon code achieving the classical Johnson bound.
In 2018, Haeupler~\textit{et al.}~\cite{HSS18} provided an explicit construction of a family of list-decodable insdel codes through the use of synchronization strings, with large list-decoding radius for sufficiently large alphabet size, and designed an efficient list-decoding algorithm for it. Furthermore, they derived some upper bounds on list-decodability against insertion or deletion errors. They also established that, in contrast to the unique decoding scenario where insertion and deletion errors have equivalent effects, in the list-decoding scenario insertion errors affect the insdel-list-decodability less than deletion errors. Indeed, the maximum fraction of deletion errors that can be list-decoded by a code of rate $R$ is at most $1-R<1$~\cite[Theorem 1.3]{HSS18}, whereas any amount of insertion errors can be tolerated if the code is defined over a sufficiently large alphabet. This observation shows that in an investigation of list-decoding of insdel codes, the number of insertion errors should be separated from the number of deletion errors. Lastly, they considered the list-decodability of random codes against insertion or deletion errors only. Their results reveal a gap between the upper bound on the list-decodability of insertion (or deletion) codes and the list-decodability of a random insertion (or deletion) code.
Liu~\textit{et al.}~\cite{LTX21} investigated the list-decodability of a random code of a given rate. Furthermore, with the help of concatenation and indexing schemes, they also provided a Monte-Carlo construction of an insdel-list-decodable code. Haeupler, Rubinstein and Shahrasbi~\cite{HRS19} introduced probabilistic fast-decodable indexing schemes for the insdel distance which reduce the computational complexity of the list-decoding algorithm in~\cite{HSS18}. In 2020, Guruswami~\textit{et al.}~\cite{GHS20} established the zero-rate threshold of insdel-list-decodable codes. More specifically, for any alphabet size $q,$ they determined the set of all pairs of insertion and deletion fractions for which there exists a code of positive rate that can list-decode those amounts of errors. Recently, Haeupler and Shahrasbi~\cite{HS20} extended the bound derived in~\cite{GHS20} to provide an upper bound on the list-decodability of insdel codes.
In general, the works on list-decoding of insdel codes discussed above follow one of two directions. The first is the construction of specific insdel-list-decodable codes with a given rate. The second is the derivation of upper bounds on the insdel-list-decodability of a code, that is, necessary conditions for a code to be insdel-list-decodable. Although such bounds provide the largest possible parameters of a code that can list-decode a given number of insertion and deletion errors, they do not guarantee that every code with given parameters is insdel-list-decodable; an additional scheme is still needed to check whether a particular code is. Hence, such works do not provide a clearer picture of the Johnson-type bound we are aiming to obtain.
On the other hand, there have also been a few works on lower bounds on the insdel-list-decodability of a code. Such works provide lower bounds on the Johnson-type bound; more specifically, they satisfy the first condition of the Johnson-type bound discussed above. However, they do not guarantee that a code failing those bounds cannot be list-decoded with any polynomial list size.
In 2017, Wachter-Zeh~\cite{Wac18} first considered the list-decoding of insdel codes and provided a lower bound on the insdel-list-decodability of a code. Hayashi and Yasunaga~\cite{HY20} provided some amendments to the result in~\cite{Wac18} and derived a lower bound which is only meaningful when insertions occur. Unlike the Johnson bound for Hamming-metric codes, the bound given in~\cite{HY20} is not tight (see Subsection~\ref{sec:IntComp}).
Thus, it is interesting to see whether such lower bounds can be improved to obtain a closer estimate of the true Johnson-type bound satisfying both conditions discussed above.
\subsection{Our Main Contribution}\label{sec:IntOurRes}
Following the work of Wachter-Zeh~\cite{Wac18}, later amended by Hayashi and Yasunaga~\cite{HY20}, we focus on lower bounds on insdel-list-decodability, namely bounds on the parameters that guarantee that any code with the given parameters is insdel-list-decodable. More specifically, we focus on a sufficient condition on the relative minimum Levenshtein distance of a code for it to be insdel-list-decodable. First, we provide an informal restatement of our main result.
\begin{theorem}[Informal Restatement of Theorem~\ref{thm:genl}]
Let $\mathcal{C}$ be a code of length $n$ over an alphabet of size $q$ with minimum Levenshtein distance $d=2\delta n,$ and let $L\ge 2$ be a positive integer. Let $\tau_I<q-1$ and $\tau_D<\frac{q-1}{q}$ be non-negative real numbers. If $\tau_D<\delta$ and $\tau_I<\rho^{(\delta,L)}(1-\tau_D)$ where
\[\rho^{(\delta,L)}(x)=\max_{r=1,\cdots, L}\left\{\frac{2L-r+1}{L+1} x -\frac{L}{r}(1-\delta)\right\},\]
then $\mathcal{C}$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
Here, $\rho^{(\delta,L)}(1-\tau_D)$ is a piecewise linear function with $L$ linear pieces; it coincides with the unique decoding bound $\tau_I<\delta-\tau_D$ when $\tau_D\ge 1-\frac{L+1}{L-1}(1-\delta).$
This result provides a lower bound on the insdel-list-decodability of a code given the values of $\delta, \tau_I, \tau_D$ and $L.$ As illustrated in Section~\ref{sec:inscons}, Theorem~\ref{thm:genl} provides insdel-list-decodability results for various constructions of codes, including some Reed-Solomon codes~\cite{DLTX21, LT21, CZ22, LX21, CST21}, Varshamov-Tenengolts (VT for short) codes~\cite{VT65, Lev65, Ten84} and Helberg codes~\cite{HF02, LN16}. This equips these constructions with a stronger property on their insdel-list-decodability, as can be observed in Theorems~\ref{thm:RSLD}, \ref{thm:VTLD} and \ref{thm:HCLD} respectively. As the insdel-list-decodability of these codes has not been studied, except for a brief analysis of VT codes in~\cite{Wac18}, an interesting open question is to provide an efficient insdel-list-decoding algorithm for all these codes for a general list size $L.$
Another potential application of this result is in the construction of insdel-list-decodable codes with efficient list-decoding algorithms (see for example~\cite{GL16, GW17, HSS18, LTX21}). One common method for constructing such codes is to use a short insdel-list-decodable inner code, equipped with either an indexing scheme or synchronization strings, concatenated with an outer code that is either list-recoverable under the classical Hamming metric (see for example~\cite{GR08,GX13,HRW17}) or list-decodable against erasure errors (see for example~\cite{Gur03,GR06,BDT18}). Such concatenation allows the decoding algorithm to list-decode the inner code by exhaustive search, transforming the problem into either a list-recovery problem or list-decoding against erasures, which can then be solved using the list-recovery algorithm of the outer code. In such constructions, the inner code is in general either specifically constructed or sampled at random, subject to the condition that the sampled code is list-decodable with high probability. These approaches thus require either a specific inner code or an additional scheme to verify that the sampled code is indeed list-decodable. With the result presented in Theorem~\ref{thm:genl}, we may instead sample any insdel code with a given minimum Levenshtein distance and guarantee that it is insdel-list-decodable.
\subsection{Our Technique}\label{sec:IntOurTec}
In order to prove our main claim, we suppose that the conditions on $\tau_D$ and $\tau_I$ are satisfied and that $\mathcal{C}$ is not $(\tau_I,\tau_D,L)$-insdel-list-decodable. Then there exist an integer $N\in[n-\tau_D n, n+\tau_I n]$ and a word $\mathbf{y}$ of length $N$ such that $L+1$ codewords $\mathbf{c}_0,\cdots, \mathbf{c}_{L}$ can be obtained from $\mathbf{y}$ by at most $\tau_D n$ insertion and $\tau_I n$ deletion operations. For $i=0,\cdots, L,$ we define $Y_i\subseteq\{1,\cdots, N\}$ to be the set of indices corresponding to a longest common subsequence of $\mathbf{y}$ and $\mathbf{c}_i.$ It is then easy to see that $Y_i\cap Y_j$ is a subset of the indices corresponding to a common subsequence of $\mathbf{c}_i$ and $\mathbf{c}_j,$ seen as entries of $\mathbf{y}.$ Then, due to the relation of the $Y_i$ to $\tau_I$ and $\tau_D,$ as well as the relation of $Y_i\cap Y_j$ to $\delta,$ any relation we may derive on the sets $Y_0,\cdots, Y_L$ and their intersections or unions also provides a relation between the code parameters $\delta, \tau_I,\tau_D$ and $L.$ More concretely, we examine the sizes of the unions of sets obtained by intersecting a fixed number of the $Y_i.$ That is, for the values $r=1,\cdots, L,$ we consider $\left|\bigcup_{0\le i_1<\cdots<i_r\le L} \bigcap_{j=1}^r Y_{i_j}\right|.$ With the help of the Inclusion-Exclusion principle, as well as some carefully chosen linear combinations of the analyses for different values of $r,$ we obtain the relation $\tau_I\ge\rho^{(\delta,L)}(1-\tau_D),$ which contradicts our condition on $\tau_I.$
We conclude this section by briefly discussing our strategy for choosing the linear combinations yielding the $L$ bounds that define $\rho^{(\delta,L)}(x).$ Following our notation in Section~\ref{sec:InsL}, for any positive integer $L\ge 2$ and $j\le L+1,$ define $\Sigma_j^{(L+1)}\triangleq \sum_{0\le i_1<i_2<\cdots<i_j\le L}\left|\bigcap_{\ell=1}^{j} Y_{i_\ell}\right|$ and $\Psi_{j}^{(L+1)}\triangleq \left|\bigcup_{0\le i_1<i_2<\cdots<i_j\le L}\bigcap_{\ell=1}^{j} Y_{i_\ell}\right|.$ By the Inclusion-Exclusion principle, for any $v=1,\cdots, L+1$ we can express $\Psi_v^{(L+1)}$ as a linear combination of $\Sigma_v^{(L+1)},\cdots, \Sigma_{L+1}^{(L+1)};$ i.e., there exist constants $A_{v,v},\cdots, A_{L+1,v}$ such that $\Psi_v^{(L+1)}=\sum_{j=v}^{L+1} A_{j,v} \Sigma_j^{(L+1)}.$ Hence, for any real numbers $c_1,\cdots, c_{L+1},$ we have $\sum_{v=1}^{L+1} c_v \Psi_v^{(L+1)}=\sum_{v=1}^{L+1} c_v\sum_{j=v}^{L+1} A_{j,v}\Sigma_j^{(L+1)} = \sum_{j=1}^{L+1} \Phi_{j}^{(L+1)} \Sigma_j^{(L+1)}$ for some constants $\Phi_1^{(L+1)},\cdots, \Phi_{L+1}^{(L+1)}.$ On one hand, $\Psi_v^{(L+1)}\leq N$ for every $v=1,\cdots,L+1,$ so $\sum_{v=1}^{L+1} c_v \Psi_v^{(L+1)}$ is upper bounded by $\sum_{v=1}^{L+1} c_v N.$ On the other hand, for $j=1,2,$ we can bound $\Sigma_j^{(L+1)}$ using, respectively, the assumption that $\mathcal{C}$ is not insdel-list-decodable and its relative minimum Levenshtein distance. Hence, provided that $\sum_{j=3}^{L+1} \Phi_j^{(L+1)} \Sigma_j^{(L+1)}\ge 0,$ the sum $\sum_{j=1}^{L+1} \Phi_{j}^{(L+1)} \Sigma_j^{(L+1)}$ can be lower bounded by a function of $\tau_I, \tau_D, \delta, L$ and $n.$ In conclusion, for any choice of $c_1,\cdots, c_{L+1}$ such that $\sum_{j=3}^{L+1} \Phi_j^{(L+1)} \Sigma_j^{(L+1)}\ge 0,$ recalling that $n-\tau_D n\le N\le n+\tau_I n,$ we obtain an inequality relating $\tau_I,\tau_D,\delta$ and $L.$

Here, we choose $c_1,\cdots, c_{L+1}$ so as to eliminate the terms corresponding to $\Sigma_j^{(L+1)}$ for $j\ge 3.$ Note that by definition, $\Sigma_j^{(L+1)}\ge \Sigma_{j+1}^{(L+1)}$ for $j\le L.$ Hence, to reduce the loss incurred by using the inequality $\sum_{j=3}^{L+1} \Phi_j^{(L+1)} \Sigma_j^{(L+1)}\ge 0,$ we choose $c_1,\cdots, c_{L+1}$ that eliminate the $\Sigma_j^{(L+1)}$ in ascending order of $j.$ We are then left with finding the values of $A_{j,v}$ for the different values of $j$ and $v.$ We utilize the notion of a $v$-cover of a set to express each coefficient as a function of the numbers of $v$-covers of different sets, denoted by $A_{j,\ell,v}.$ Intuitively, a $v$-cover of a set is a covering of the set by its subsets of size $v.$ Utilizing a recursive relation we derive for $A_{j,\ell,v},$ we find an explicit formula for $A_{j,v}.$ A more detailed discussion of the value of $A_{j,v},$ along with a formal definition of $v$-cover and the choice of the constants $c_i,$ can be found in Section~\ref{sec:InsL}. The general strategy is also discussed in more detail in Remark~\ref{rmk:strat}.
In our work, we only consider $L$ specific linear combinations of the sizes of such unions. It would be interesting to see whether a better bound can be derived by considering different combinations of the unions. Similarly, such analysis could also be used to analyse the sizes of other combinations of the $Y_i$'s, beyond unions of sets obtained by intersecting the same number of sets.
\subsection{Comparison}\label{sec:IntComp}
In this work, we compare our result with the unique decoding bound $\tau_I+\tau_D<\delta$ and the bound by Hayashi and Yasunaga~\cite{HY20}, which we call the HY bound. The comparison with the unique decoding bound shows that, due to the condition $\tau_D<\delta,$ our bound, like the HY bound, is only meaningful when $\tau_I>0.$ Furthermore, based on the form of our bound, when $\tau_D<1-\frac{L+1}{L-1}(1-\delta),$ our bound outperforms the unique decoding bound. On the other hand, we also show that for sufficiently large $\delta,$ there exists a range of $\tau_D$ on which our bound also outperforms the HY bound. A formal discussion of this comparison can be found in Section~\ref{sec:Inscom}. An illustration of this result for $L=2$ and $\delta=0.9$ can be found in Figure~\ref{fig:rhodelta2exalt}. Note that the value of $q$ only affects the endpoints of the curves in Figure~\ref{fig:rhodelta2exalt}, while the curves themselves do not change; the figure is therefore drawn without considering the effect of $q.$ We define $P_1$ to be the point with the smallest value of $\tau_D$ such that our bound outperforms the HY bound, and $P_2$ to be the point where our bound coincides with the unique decoding bound.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figure/rhodelta2exalt.png}
\caption{Comparison between Our Bound, HY Bound~\cite{HY20} and Unique Decoding Bound when $L=2$ and $\delta=0.9.$}
\label{fig:rhodelta2exalt}
\end{figure}
\subsection{Organization}\label{sec:IntOrg}
The rest of this paper is organized as follows. In Section~\ref{sec:prelim}, we briefly introduce some basic definitions and results that are essential to our discussion. In Section~\ref{sec:Insdel2}, we consider the special case of our main result when the list size is $2,$ as a simple illustration of our approach. This approach is then generalized to any list size in Section~\ref{sec:InsL}. In Section~\ref{sec:Inscom}, we prove that our result improves on a previously established lower bound on insdel-list-decodability. Lastly, we provide insdel-list-decodability results for various codes in Section~\ref{sec:inscons}.
\section{Preliminaries}\label{sec:prelim}
For positive integers $q$ and $n,$ let $\mathbb{\Sigma}_q$ be a finite alphabet of size $q$ and $\mathbb{\Sigma}_q^n$ be the set of all vectors of length $n$ over $\mathbb{\Sigma}_q.$ For any positive real number $i,$ we denote by $[i]$ the set of integers $\{1,\cdots, \lfloor i\rfloor\}.$ Given a vector $\mathbf{v}=(v_1,\cdots, v_n)\in \mathbb{\Sigma}_q^n$ and a set $S\subseteq [n],$ we define the projection of $\mathbf{v}$ to $S,$ denoted by $\left.\mathbf{v}\right|_S,$ as the vector of length $|S|$ containing the coordinates of $\mathbf{v}$ with indices in $S,$ that is, $\left.\mathbf{v}\right|_S=(v_i)_{i\in S}.$ Given $\alpha\in \mathbb{\Sigma}_q$ and any non-negative integer $m,$ we define ${\boldsymbol \alpha}^m\in \mathbb{\Sigma}_q^m$ to be the vector obtained by repeating $\alpha$ exactly $m$ times. Furthermore, for any two vectors $\mathbf{u}\in \mathbb{\Sigma}_q^{n_1}$ and $\mathbf{v}\in\mathbb{\Sigma}_q^{n_2}$ over $\mathbb{\Sigma}_q,$ we define $(\mathbf{u}\|\mathbf{v})\in \mathbb{\Sigma}_q^{n_1+n_2}$ to be the vector obtained by concatenating $\mathbf{v}$ to the right of $\mathbf{u}.$
\begin{defn}\label{def:insdeldist}
Let $\mathbf{a}\in \mathbb{\Sigma}_q^{n_1}$ and $\mathbf{b}\in\mathbb{\Sigma}_q^{n_2}$ be two words over $\mathbb{\Sigma}_q$ not necessarily of the same length.
\begin{enumerate}
\item We define the Levenshtein distance between $\mathbf{a}$ and $\mathbf{b}$ to be $d_{L}(\mathbf{a},\mathbf{b})=d$ if there exist non-negative integers $t_I$ and $t_D$ such that $t_I+t_D=d$ and we may obtain $\mathbf{b}$ from $\mathbf{a}$ by inserting $t_I$ symbols and deleting $t_D$ symbols. Furthermore, we also require that for any $t_I'$ and $t_D'$ such that $t_I'+t_D'<d,$ it is impossible to obtain $\mathbf{b}$ from $\mathbf{a}$ by inserting $t_I'$ symbols and deleting $t_D'$ symbols.
\item For a non-negative integer $m\le \min\{n_1,n_2\},$ we say $\mathbf{v}=(v_1,\cdots, v_{m})$ is a common subsequence of $\mathbf{a}$ and $\mathbf{b}$ if there exists $1\le i_1<i_2<\cdots<i_m\le n_1$ and $1\le j_1<j_2<\cdots<j_m\le n_2$ such that $\mathbf{v}=\mathbf{a}|_{\{i_1,\cdots,i_m\}}=\mathbf{b}|_{\{j_1,\cdots,j_m\}}.$ Furthermore, we say that $\mathbf{v}$ is a longest common subsequence of $\mathbf{a}$ and $\mathbf{b}$ if there does not exist $\mathbf{u}$ of length $m'>m$ such that $\mathbf{u}$ is also a common subsequence of $\mathbf{a}$ and $\mathbf{b}.$ We denote by $\ell_{\mathtt{LCS}}(\mathbf{a},\mathbf{b})$ the length of a longest common subsequence of $\mathbf{a}$ and $\mathbf{b}.$ It is then easy to see that taking $\mathbf{v},$ a longest common subsequence of $\mathbf{a}$ and $\mathbf{b}$ of length $\ell=\ell_{\mathtt{LCS}}(\mathbf{a},\mathbf{b}),$ we can obtain $\mathbf{b}$ from $\mathbf{a}$ by deleting all $n_1- \ell$ symbols in $\mathbf{a}$ outside $\mathbf{v}$ and inserting all $n_2-\ell$ symbols of $\mathbf{b}$ outside $\mathbf{v}$ and it can also be verified that $d_L(\mathbf{a},\mathbf{b})=n_1+n_2-2\ell.$ Since $0\le \ell\le \min\{n_1,n_2\},$ we have $|n_1-n_2|\le d_L(\mathbf{a},\mathbf{b})\le n_1+n_2.$
\end{enumerate}
\end{defn}
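For concreteness, the identity $d_L(\mathbf{a},\mathbf{b})=n_1+n_2-2\ell_{\mathtt{LCS}}(\mathbf{a},\mathbf{b})$ can be transcribed directly into code. The following Python sketch is our own illustration and is not part of the formal development.
\begin{verbatim}
# Hedged sketch: insdel-only Levenshtein distance via the identity
# d_L(a, b) = n1 + n2 - 2 * LCS(a, b) from the definition above.

def lcs_len(a, b):
    """Length of a longest common subsequence of sequences a and b."""
    n1, n2 = len(a), len(b)
    dp = [[0] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n1][n2]

def d_L(a, b):
    """Insdel-only Levenshtein distance between sequences a and b."""
    return len(a) + len(b) - 2 * lcs_len(a, b)

# LCS((0,1,1,0), (1,1,0,1)) = 3, so d_L = 4 + 4 - 2 * 3 = 2.
assert d_L((0, 1, 1, 0), (1, 1, 0, 1)) == 2
\end{verbatim}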
\begin{rmk}\label{rmk:Levenshtein}
We note that the Levenshtein distance of two words was originally defined to be the minimum number of insertions, deletions and substitutions required to transform one word into the other, as first defined by Levenshtein~\cite{Lev65,Lev67}. In this manuscript, to simplify the notation, we abuse the term and call $d_L(\mathbf{a},\mathbf{b})$ the Levenshtein distance. This also differentiates it from the term ``insdel,'' which we use when we want to separate the number of insertions from the number of deletions.
\end{rmk}
\begin{rmk}\label{rmk:insdeldistsamelen}
Note that if $d_L(\mathbf{a},\mathbf{b})=d$ where $\mathbf{a}$ and $\mathbf{b}$ have the same length, then $d$ must be an even integer.
\end{rmk}
In the following, we provide an alternative characterization of the Levenshtein distance in terms of the numbers of insertions and deletions required.
\begin{prop}\label{prop:2insdeldist}
Let $\mathbf{a}\in \mathbb{\Sigma}_q^{n_1}$ and $\mathbf{b}\in\mathbb{\Sigma}_q^{n_2}$ be two words over $\mathbb{\Sigma}_q.$ Then $d_L(\mathbf{a},\mathbf{b})=d$ if and only if there exist $t_I$ and $t_D$ such that \begin{enumerate}
\item $t_I+t_D=d.$
\item We can obtain $\mathbf{b}$ from $\mathbf{a}$ by inserting $t_I$ symbols and deleting $t_D$ symbols.
\item For any pair $(t_I',t_D')$ such that $\mathbf{b}$ can be obtained from $\mathbf{a}$ by inserting $t_I'$ symbols and deleting $t_D'$ symbols, we have $t_I'\ge t_I$ and $t_D'\ge t_D.$
\end{enumerate}
\end{prop}
\begin{proof}
First, suppose that $d_L(\mathbf{a},\mathbf{b})=d.$ Then by definition, there exist non-negative integers $t_I$ and $t_D$ such that $t_I+t_D=d$ and we may obtain $\mathbf{b}$ from $\mathbf{a}$ by inserting $t_I$ symbols and deleting $t_D$ symbols. This directly proves the first two claims. Now suppose that there exists a pair $(t_I',t_D')$ with $t_I'< t_I$ or $t_D'< t_D$ such that we may obtain $\mathbf{b}$ from $\mathbf{a}$ by inserting $t_I'$ symbols and deleting $t_D'$ symbols. Note that we must have $t_I-t_D=n_2-n_1=t_I'-t_D'.$ Hence $t_I'<t_I$ if and only if $t_D'<t_D.$ In either case, $t_I'+t_D'<d,$ contradicting the assumption that $d_L(\mathbf{a},\mathbf{b})=d.$ Hence we must have $t_I'\ge t_I$ and $t_D'\ge t_D.$
Now suppose that there exist $t_I$ and $t_D$ satisfying the three conditions. We aim to show that $d_L(\mathbf{a},\mathbf{b})=t_I+t_D.$ By the second assumption, we have $d_L(\mathbf{a},\mathbf{b})\le t_I+t_D.$ Now suppose that there exist $t_I'$ and $t_D'$ such that $t_I'+t_D'<t_I+t_D$ and we can obtain $\mathbf{b}$ from $\mathbf{a}$ by inserting $t_I'$ symbols and deleting $t_D'$ symbols. By the third assumption, we must have $t_I'\ge t_I$ and $t_D'\ge t_D,$ which implies $t_I'+t_D'<t_I+t_D\le t_I'+t_D',$ a contradiction. Hence every such pair satisfies $t_I'+t_D'\ge t_I+t_D,$ completing the proof.
\end{proof}
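As a small companion to Proposition~\ref{prop:2insdeldist}, the componentwise-minimal pair $(t_I,t_D)$ can be read off directly from a longest common subsequence. The sketch below reuses \texttt{lcs\_len} and \texttt{d\_L} from the previous sketch and is again only illustrative.
\begin{verbatim}
# Hedged sketch: with l = LCS(a, b), the componentwise-minimal pair is
# t_D = n1 - l deletions and t_I = n2 - l insertions, whose sum is d_L.

def minimal_insdel_pair(a, b):
    l = lcs_len(a, b)
    t_D, t_I = len(a) - l, len(b) - l
    assert t_I + t_D == d_L(a, b)
    return t_I, t_D
\end{verbatim}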
A code $\mathcal{C}$ over $\mathbb{\Sigma}_q$ of length $n$ is a non-empty subset of $\mathbb{\Sigma}_q^n.$ We define the rate of $\mathcal{C}$ to be $\mathcal{R}(\mathcal{C})\triangleq \frac{\log_q|\mathcal{C}|}{n}.$ The minimum Levenshtein distance of $\mathcal{C}$ is defined to be $d_L(\mathcal{C})\triangleq \min_{\mathbf{a},\mathbf{b}\in \mathcal{C},\mathbf{a}\neq \mathbf{b}}\{d_L(\mathbf{a},\mathbf{b})\},$ and the relative minimum Levenshtein distance of $\mathcal{C}$ is $\delta_L(\mathcal{C})\triangleq\frac{d_L(\mathcal{C})}{2n}.$ A code considered under the Levenshtein distance is called an insdel code.
Next, we discuss the definitions of the Levenshtein and insdel balls.
\begin{defn}\label{def:balls}
Let $q\ge 2,n\ge 2$ and $d$ be positive integers and $t_I$ and $t_D$ be non-negative integers such that $t_D\le n.$
\begin{enumerate}
\item For any $\mathbf{x}\in \mathbb{\Sigma}_q^n,$ we define the Levenshtein ball with centre $\mathbf{x}$ and radius $d$ as $\mathcal{B}_L(\mathbf{x},d)\triangleq\{\mathbf{y}\in \mathbb{\Sigma}_q^\ast: d_L(\mathbf{x},\mathbf{y})\le d\}.$
\item For any $\mathbf{x}\in \mathbb{\Sigma}_q^n,$ we define the insdel ball with centre $\mathbf{x},$ insertion radius $t_I$ and deletion radius $t_D$ as $\mathcal{B}_{ID}(\mathbf{x},t_I,t_D)\triangleq\{\mathbf{y}\in \mathbb{\Sigma}_q^\ast: \mathbf{y}\text{ can be obtained from }\mathbf{x}\text{ by inserting at most }t_I\text{ symbols and deleting at most }t_D\text{ symbols}\}.$
\end{enumerate}
\end{defn}
We can then use the Levenshtein and insdel balls to define the list-decodability of a code.
\begin{defn}\label{def:LD}
Let $\mathcal{C}\subseteq \mathbb{\Sigma}_q^n$ be a code, let $\tau,\tau_I,\tau_D\ge 0$ be non-negative real numbers and let $L\ge 1$ be an integer. We say $\mathcal{C}$ is $(\tau,L)$-Levenshtein-list-decodable if for any non-negative integer $N$ such that $n-\tau n\le N\le n+\tau n$ and every $\mathbf{r}\in\mathbb{\Sigma}_q^N,$ we have $|\mathcal{B}_L(\mathbf{r},\tau n)\cap \mathcal{C}|\le L.$ Furthermore, $\mathcal{C}$ is said to be $(\tau_I,\tau_D,L)$-insdel-list-decodable if for any non-negative integer $N$ such that $n-\tau_D n\le N\le n+\tau_I n$ and every $\mathbf{r}\in \mathbb{\Sigma}_q^N,$ we have $|\mathcal{B}_{ID}(\mathbf{r},\tau_D n,\tau_I n)\cap \mathcal{C}|\le L.$
\end{defn}
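To illustrate Definition~\ref{def:LD}, the following sketch (ours, reusing \texttt{lcs\_len} from the earlier sketch) tests membership in an insdel ball via Proposition~\ref{prop:2insdeldist} and counts the codewords of an explicitly listed code inside the ball.
\begin{verbatim}
# Hedged sketch: by the proposition above, y lies in B_ID(x, t_I, t_D)
# if and only if the minimal pair is dominated, i.e. len(x) - LCS <= t_D
# deletions and len(y) - LCS <= t_I insertions suffice.

def in_insdel_ball(x, y, t_I, t_D):
    l = lcs_len(x, y)
    return len(x) - l <= t_D and len(y) - l <= t_I

def list_size(code, r, t_I, t_D):
    # |B_ID(r, t_I, t_D) cap C|: codewords within the given radii of r.
    return sum(in_insdel_ball(r, c, t_I, t_D) for c in code)
\end{verbatim}
In this notation, checking that $\mathcal{C}$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable amounts to verifying \texttt{list\_size(code, r, tau\_D * n, tau\_I * n) <= L} for every admissible received word \texttt{r}.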
In general, we want $\lim_{n\rightarrow \infty} \mathcal{R}(\mathcal{C})>0$ and $L=\mathrm{poly}(n).$ Note that by allowing $\tau_D n\ge\frac{(q-1)n}{q}$ deletions, any codeword $\mathbf{c}$ can be transformed into a word of length $\frac{n}{q}$ all of whose entries equal the most frequently occurring element of $\mathbf{c}.$ Hence, if $\mathcal{C}$ is list-decodable against $\tau_D n$ deletions with list size $L=\mathrm{poly}(n),$ there can be at most $Lq$ codewords, which forces $\mathcal{R}(\mathcal{C})$ to tend to $0.$ Thus, to have positive rate and polynomial list size, we need $\tau_D<\frac{q-1}{q}.$ Similarly, if we allow $\tau_I n\ge (q-1)n$ insertions then, writing $\mathbb{\Sigma}_q=\{a_1,\cdots, a_q\},$ any codeword in $\mathcal{C}$ can be transformed into the word of length $qn$ obtained by repeating $(a_1,\cdots, a_q)$ exactly $n$ times. So if $\mathcal{C}$ is list-decodable against $(q-1)n$ insertions with polynomial list size, its size is at most $L=\mathrm{poly}(n),$ which causes its rate to be asymptotically zero. Hence, to have positive rate and polynomial list size, we must have $\tau_I<q-1.$ In the remainder of this work, we always assume $\tau_D<\frac{q-1}{q}$ and $\tau_I<q-1.$
Next, we provide a relation between the list-decodability of codes under the two distances considered.
\begin{lemma}\label{lem:LLDtoILD}
A code $\mathcal{C}$ of length $n$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable whenever it is $(\tau_I+\tau_D,L)$-Levenshtein-list-decodable.
\end{lemma}
\begin{proof}
It is sufficient to show that for any $\mathbf{y}\in \mathbb{\Sigma}_q^N$ such that $N\in[n-\tau_D n,n+\tau_I n],$ we have $\mathcal{B}_{ID}(\mathbf{y},\tau_D n,\tau_I n)\cap\mathcal{C} \subseteq \mathcal{B}_L(\mathbf{y},(\tau_D+\tau_I)n)\cap\mathcal{C}.$ Suppose that $\mathbf{c}\in \mathcal{B}_{ID}(\mathbf{y},\tau_D n,\tau_I n)\cap\mathcal{C}.$ Then there exist non-negative integers $t'\le \tau_D n$ and $t''\le \tau_I n$ such that $\mathbf{c}$ can be obtained from $\mathbf{y}$ through $t'$ insertions and $t''$ deletions. By Proposition~\ref{prop:2insdeldist}, we have $d_L(\mathbf{y},\mathbf{c})\le t'+t''\le (\tau_I+\tau_D)n,$ directly implying that $\mathbf{c}\in \mathcal{B}_{L}(\mathbf{y},(\tau_D+\tau_I)n)\cap\mathcal{C},$ completing the proof.
\end{proof}
In this work, we are mainly interested in lower bounds on the list-decodability of a code with a given Levenshtein distance. More specifically, we are interested in finding a relation between $\tau_I,\tau_D,L$ and $d$ that ensures that any code of length $n$ and minimum Levenshtein distance $d$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
Lastly, we define a function that will be important in our analysis in later sections.
\begin{defn}\label{def:rhodeltaLx}
For a relative minimum Levenshtein distance $\delta\in[0,1]$ and list size $L\ge 2,$ we define $\rho^{(\delta,L)}:[1-\delta,1]\rightarrow\mathds{R}$ such that
\[\rho^{(\delta,L)}(x)=\max_{r=1,\cdots, L}\left\{\frac{2L-r+1}{L+1} x -\frac{L}{r}(1-\delta)\right\}.\]
\end{defn}
It is easy to see that $\rho^{(\delta,L)}(x)$ is a piecewise linear function for $x\in[1-\delta,1].$ We refer to Figure~\ref{fig:rhodeltalex} for an illustration of the function $\rho^{(\delta, L)}(x).$ Here, a turning point of the function $\rho^{(\delta,L)}(x)$ is a point where $\rho^{(\delta,L)}(x)$ moves from one linear piece to another. A more complete analysis of $\rho^{(\delta, L)}(x)$ can be found in Appendix~\ref{app:rhodeltaL}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figure/rhodeltalex.png}
\caption{Piecewise linear function $\rho^{(\delta,L)}(x)$ for small $L$}
\label{fig:rhodeltalex}
\end{figure}
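For concreteness, $\rho^{(\delta,L)}$ can be transcribed directly; the following Python sketch is our own illustration.
\begin{verbatim}
# Hedged sketch: direct transcription of the definition of rho above.

def rho(delta, L, x):
    return max((2 * L - r + 1) / (L + 1) * x - L / r * (1 - delta)
               for r in range(1, L + 1))

# For L = 2 the two linear pieces are (4/3) x - 2 (1 - delta) and
# x - (1 - delta), and the turning point lies at x = 3 (1 - delta).
\end{verbatim}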
\section{Insdel List-decodability with List Size 2}\label{sec:Insdel2}
First, we consider the case when $L=2.$
\begin{lemma}\label{lem:L=2}
Let $\mathcal{C}\subseteq \mathbb{\Sigma}_q^n$ be a $q$-ary code with length $n$ and minimum Levenshtein distance $d=2\delta n.$ Given $\tau_I$ and $\tau_D,$ if $\mathcal{C}$ is not $(\tau_I,\tau_D, 2)$-insdel-list-decodable, then either $\tau_D\ge\delta$ or
\[\tau_I\ge \rho^{(\delta, 2)}\left(1-\tau_D\right).\]
\end{lemma}
\begin{proof}
Suppose that $\tau_D<\delta.$ For simplicity, we write $t_I=\tau_I n$ and $t_D=\tau_D n.$ Since $\mathcal{C}$ is not $(\tau_I,\tau_D, 2)$-insdel-list-decodable, there exist an integer $N\in[n-t_D,n+t_I],$ a vector ${\bf y}\in\mathbb{\Sigma}_q^N$ and $3$ distinct codewords ${{\bf c}_0}, {{\bf c}_1}, {{\bf c}_{2}}\in \mathcal{B}_{ID}({\bf y}, t_D,t_I)\cap\mathcal{C}.$ For any $0\le i\le 2,$ we define $Y_i\subseteq [N]$ to be a set such that $\mathbf{y}|_{Y_i}$ is a longest common subsequence of $\mathbf{y}$ and $\mathbf{c}_i.$ Hence we have $t_D+t_I\ge d_L(\mathbf{y},\mathbf{c}_i)=N+n-2|Y_i|,$ or equivalently, $|Y_i|\ge \frac{1}{2}(N+n-t_D-t_I).$ For any $0\le i_1<i_2\le 2,$ we define $Y_{i_1i_2}\triangleq Y_{i_1}\cap Y_{i_2}.$ Similarly, we define $Y_{012}\triangleq Y_0\cap Y_1\cap Y_2.$ It is easy to see that $\mathbf{y}|_{Y_{i_1i_2}}$ is a common subsequence of $\mathbf{c}_{i_1}$ and $\mathbf{c}_{i_2}.$ Hence we must have
$d\le d_L(\mathbf{c}_{i_1},\mathbf{c}_{i_2})\le 2n-2|Y_{i_1i_2}|$
which implies $|Y_{i_1i_2}|\le n-\frac{d}{2}.$ By Inclusion-Exclusion principle, we have
\begin{equation}\label{eq:Yi3}
\left|\bigcup_{i=0}^2 Y_i\right|= \sum_{i=0}^2 |Y_i|-\sum_{0\le i_1<i_2\le 2} |Y_{i_1 i_2}| + |Y_{012}|.
\end{equation}
Since $\bigcup_{i=0}^2 Y_i\subseteq [N]$ and $|Y_{012}|\ge 0,$ we have
\begin{equation*}
N\ge \frac{3}{2}\left(N+n-t_D-t_I\right)-3\left(n-\frac{d}{2}\right).
\end{equation*}
Noting that $N\ge n-t_D,$ we obtain $0\ge 2n-2t_D-\frac{3}{2}t_I-3\left(n-\frac{d}{2}\right)$ which implies
\begin{equation}\label{eq:Zi3}
t_I\ge \frac{4}{3}(n-t_D)-2\left(n-\frac{d}{2}\right).
\end{equation}
Here we again consider the Inclusion-Exclusion principle for the union of $Y_{i_1i_2}.$ We have
\begin{small}
\begin{eqnarray}\label{eq:Yij3}
\nonumber \left|\bigcup_{0\le i_1<i_2\le 2}Y_{i_1i_2}\right|&=& \sum_{0\le i_1<i_2\le 2} |Y_{i_1i_2}|- |Y_{01}\cap Y_{02}|- |Y_{01}\cap Y_{12}|-|Y_{02}\cap Y_{12}|+|Y_{012}|\\
&=&\sum_{0\le i_1<i_2\le 2}|Y_{i_1i_2}|-2|Y_{012}|.
\end{eqnarray}
\end{small}
Noting that $\bigcup_{0\le i_1<i_2\le 2}Y_{i_1i_2}\subseteq [N],$ adding two times Equation~\eqref{eq:Yi3} to Equation~\eqref{eq:Yij3}, we obtain $3N\ge 2\left|\bigcup_{i=0}^2Y_i\right|+\left|\bigcup_{0\le i_1<i_2\le 2}Y_{i_1i_2}\right| = 2\sum_{i=0}^2|Y_i|-\sum_{0\le i_1<i_2\le 2}|Y_{i_1i_2}|\ge 3(N+n-t_I-t_D)-3\left(n-\frac{d}{2}\right).$ Hence we have
\begin{equation}\label{eq:Zij3}
t_I\ge (n-t_D)-\left(n-\frac{d}{2}\right).
\end{equation}
So by Inequalities~\eqref{eq:Zi3} and~\eqref{eq:Zij3}, if $\mathcal{C}$ is not $(\tau_I,\tau_D,2)$-insdel-list-decodable, then
\begin{equation*}
\tau_I\ge \max\left\{\frac{4}{3}(1-\tau_D)-2\left(1-\delta\right),(1-\tau_D)-\left(1-\delta\right)\right\}=\rho^{(\delta,2)}(1-\tau_D).
\end{equation*}
\end{proof}
Taking the contrapositive of Lemma~\ref{lem:L=2}, we directly have the following result.
\begin{theorem}\label{thm:L=2}
Let $\mathcal{C}\subseteq \mathbb{\Sigma}_q^n$ be a $q$-ary code with length $n$ and minimum Levenshtein distance $d=2\delta n.$ If $\tau_D<\delta$ and $\tau_I <\rho^{(\delta,2)}(1-\tau_D),$ then $\mathcal{C}$ is $(\tau_I,\tau_D,2)$-insdel-list-decodable.
\end{theorem}
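As a numerical illustration of Theorem~\ref{thm:L=2} (using the \texttt{rho} sketch above; the chosen values are ours), take $\delta=0.9$ and $\tau_D=0.3<\delta$:
\begin{verbatim}
# Any code with relative minimum Levenshtein distance delta = 0.9 is
# (tau_I, 0.3, 2)-insdel-list-decodable whenever tau_I < rho(0.9, 2, 0.7).
print(rho(0.9, 2, 1 - 0.3))  # max{(4/3)*0.7 - 0.2, 0.7 - 0.1} = 0.7333...
\end{verbatim}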
\section{Insdel List-decodability for General List Size}\label{sec:InsL}
In this section, we generalize the result from Section~\ref{sec:Insdel2} to any $L\ge 2.$ In the remainder of this section, for any positive integer $v\le L+1,$ sets $Y_0,\cdots, Y_L$ and indices $0\le i_1<i_2<\cdots<i_v\le L,$ we define $Y_{i_1i_2\cdots i_v}\triangleq \bigcap_{j=1}^v Y_{i_j}.$ Before we consider the insdel-list-decodability of codes, we first consider the size of the union of such $Y_{i_1i_2\cdots i_v}$ for a fixed $v.$
Let $S,S_1,\cdots, S_\ell$ be subsets of $\{0,\cdots, L\}.$ We say that $S_1,\cdots, S_\ell$ cover $S$ if $\bigcup_{i=1}^\ell S_i=S.$ Furthermore, if $|S_i|=v$ for all $i=1,\cdots, \ell,$ we say that $\{S_1,\cdots, S_\ell\}$ is a $v$-cover of $S.$ Note that the number of such $v$-covers depends only on the size of $S,$ the common size $v$ of the $S_i$ and the number $\ell$ of sets. For any positive integers $j,\ell$ and $v,$ we define $A_{j,\ell,v}$ to be the number of $v$-covers of size $\ell$ of a set of size $j.$
To obtain the extension of the discussion in Section~\ref{sec:Insdel2}, we apply the Inclusion-Exclusion principle to $\left|\bigcup_{0\le i_1<\cdots<i_v\le L} Y_{i_1i_2\cdots i_v}\right|$ for increasing values of $v,$ so as to eliminate the terms $|Y_{i_1i_2\cdots i_b}|$ for $b\ge 3$ starting from the smaller values of $b,$ which in general correspond to larger sets. Expanding the size of the set $\bigcup_{0\le i_1<\cdots<i_v\le L} Y_{i_1i_2\cdots i_v}$ using the Inclusion-Exclusion principle, we can reorder the terms into the following form
\begin{equation}\label{eq:YvL}
\left|\bigcup_{0\le i_1<\cdots<i_v\le L} Y_{i_1i_2\cdots i_v}\right|=\sum_{j=v}^{L+1} A_{j,v} \Sigma_j^{(L+1)}
\end{equation}
for some integers $A_{j,v},$ where we define $\Sigma_j^{(L+1)}\triangleq \sum_{0\le i_1<i_2<\cdots<i_{j}\le L} |Y_{i_1i_2\cdots i_{j}}|.$ To calculate $A_{j,v},$ by the symmetry over the choices of the $j$-tuple $(i_1,\cdots,i_j),$ we only consider the case $i_t=t-1$ for $t=1,\cdots, j,$ and we are interested in the number of times $|Y_{012\cdots(j-1)}|$ is counted via covers by its subsets of size $v.$ Note that any $v$-cover of size $\ell$ contributes $(-1)^{\ell-1}$ to $A_{j,v}.$ This is because such a $v$-cover of size $\ell$ is obtained by taking the intersection of $\ell$ different sets of size $v,$ so in the Inclusion-Exclusion formula for the size of $\bigcup_{0\le i_1<\cdots<i_v\le L} Y_{i_1i_2\cdots i_v},$ it appears exactly once with coefficient $(-1)^{\ell-1}.$ Observe that $A_{j,v}$ is independent of the value of $L+1,$ which justifies the omission of the parameter $L$ in the notation $A_{j,v}.$ Furthermore, we have
\begin{equation}\label{eq:AjvtoAjlv}
A_{j,v}=\sum_{\ell=1}^{\binom{j}{v}} (-1)^{\ell-1} A_{j,\ell,v}
\end{equation}
where $\ell$ is at most $\binom{j}{v}$ since there are only $\binom{j}{v}$ subsets of $\{0,\cdots, j-1\}$ of size $v.$
So to simplify Equation~\eqref{eq:YvL}, we are interested in the value of $A_{j,\ell,v}.$ The following result provides a recursive relation for $A_{j,\ell,v},$ together with some base cases.
\begin{lemma}\label{lem:Ajlv}
\begin{enumerate}
\item $A_{j,\ell,v}=0$ whenever $\ell> \binom{j}{v}.$ In particular, if $j<v, A_{j,\ell,v}=A_{j,v}=0.$
\item $A_{j,\ell,v}=0$ whenever $\ell v<j.$
\item Suppose that $\ell\le \binom{j}{v}.$ Then
\begin{equation}\label{eq:AjlvRec}
A_{j,\ell,v}=\binom{\binom{j}{v}}{\ell}-\sum_{t=1}^{j-1} \binom{j}{t} A_{t,\ell,v}.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Note that a set of size $j$ has exactly $\binom{j}{v}$ pairwise distinct subsets of size $v.$ Hence at most $\binom{j}{v}$ pairwise distinct sets of size $v$ can have their union equal to a given set of size $j.$ So if $\ell>\binom{j}{v},$ it is impossible for the union of $\ell$ distinct sets of size $v$ to have size $j,$ implying that $A_{j,\ell,v}=0.$
\item Note that the union of $\ell$ sets, each of size $v,$ has size at most $\ell v.$ Hence, if $\ell v<j,$ the union cannot have size $j,$ implying that $A_{j,\ell,v}=0.$
\item Let $S_v=\{A\subseteq \{0,1,\cdots, j-1\}:|A|=v\}$ and $\mathcal{T}_{\ell,v}=\{S\subseteq S_v:|S|=\ell\}.$ Then $|\mathcal{T}_{\ell,v}|=\binom{\binom{j}{v}}{\ell}.$ However, note that not every $S\in \mathcal{T}_{\ell,v}$ satisfies $\bigcup_{A\in S}A=\{0,\cdots, j-1\}.$ So from $\mathcal{T}_{\ell,v},$ we need to exclude those families whose union has size less than $j.$ For any $t<j,$ there are $\binom{j}{t}$ subsets of $\{0,\cdots, j-1\}$ of size $t,$ and for each subset $S'\subseteq\{0,\cdots, j-1\}$ of size $t,$ there are $A_{t,\ell,v}$ elements of $\mathcal{T}_{\ell,v}$ that cover $S'.$ This directly yields Equation~\eqref{eq:AjlvRec}.
\end{enumerate}
\end{proof}
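Since the recursion in Lemma~\ref{lem:Ajlv} is easy to mis-transcribe, the following brute-force sketch (ours) checks it against a direct enumeration of $v$-covers for small parameters.
\begin{verbatim}
# Hedged sketch: A[j, l, v] counts families of l distinct v-subsets of
# {0, ..., j-1} whose union is the whole set; the recursion of the lemma
# above is checked against direct enumeration.
from itertools import combinations
from math import comb

def A_brute(j, l, v):
    v_subsets = list(combinations(range(j), v))
    return sum(1 for fam in combinations(v_subsets, l)
               if set().union(*fam) == set(range(j)))

def A_rec(j, l, v):
    if l > comb(j, v) or l * v < j:
        return 0
    return comb(comb(j, v), l) - sum(comb(j, t) * A_rec(t, l, v)
                                     for t in range(1, j))

assert all(A_brute(j, l, v) == A_rec(j, l, v)
           for j in range(1, 6)
           for v in range(1, j + 1)
           for l in range(1, comb(j, v) + 1))
\end{verbatim}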
Next, we also provide the values of $A_{j,\ell,v}$ in some special cases, which can be easily verified.
\begin{rmk}\label{rmk:Ajlv}
\begin{enumerate}
\item When $v=1,$
\[A_{j,\ell,1}=\left\{
\begin{array}{cc}
1,&\mathrm{~if~}\ell=j\\
0,&\mathrm{~otherwise}.
\end{array}
\right.\]
In particular, $A_{j,1}=(-1)^{j-1}.$
\item For any $j$ and $v,$ we have $A_{j,\binom{j}{v},v}=1.$ In particular, $A_{j,j}=1.$
\end{enumerate}
\end{rmk}
Next, we provide an explicit value of $A_{j,v}.$ Throughout the remainder of the section, for simplicity of notation, we denote $\Psi_{v}^{(L+1)}\triangleq \left|\bigcup_{0\le i_1<i_2<\cdots<i_v\le L}Y_{i_1i_2\cdots i_v}\right|.$
\begin{prop}\label{prop:Ajv}
For any positive integers $j,v$ such that $\max\{2,v\}\le j,$ we have
\begin{equation}\label{eq:Ajv}
A_{j,v}=(-1)^{j-v}\binom{j-1}{v-1}.
\end{equation}
\end{prop}
\begin{proof}
We will prove Equation~\eqref{eq:Ajv} by induction on $j.$ Firstly, we show the claim for $j=v.$ By Remark~\ref{rmk:Ajlv}, $A_{v,v}=1=(-1)^{v-v}\binom{v-1}{v-1}.$
Now assume that Equation~\eqref{eq:Ajv} holds for all $j'=v,v+1,\cdots, j-1,$ that is, $A_{j',v}=(-1)^{j'-v}\binom{j'-1}{v-1}.$ Then, by Equation~\eqref{eq:AjvtoAjlv} and Lemma~\ref{lem:Ajlv}, it can be verified that
\begin{equation}\label{eq:WTS1}
A_{j,v}
=1-\sum_{t=1}^j(-1)^{t-v}\binom{j}{t}\binom{t-1}{v-1}+(-1)^{j-v}\binom{j-1}{v-1}.
\end{equation}
For completeness, the proof of Equation~\eqref{eq:WTS1} can be found in Appendix~\ref{app:WTS}.
So to complete our proof, it is sufficient to show the following claim.
\begin{claim}\label{claim:tailgenlv}
For any $v\ge 1$ and $j\ge \max\{v,2\},$
\[\sum_{t=1}^j (-1)^{t-v}\binom{j}{t}\binom{t-1}{v-1}=1.\]
\end{claim}
\begin{proof}
We prove this by induction on both $v$ and $j.$ First, when $v=1,$ the sum simplifies to $\sum_{t=1}^j(-1)^{t-1}\binom{j}{t}=-\sum_{t=0}^j (-1)^t\binom{j}{t}+1=1,$ proving the case $v=1.$ Next, we show the claim in the case $j=v$ for any $v\ge 2$: since $\binom{t-1}{v-1}=0$ for $t<v,$ we have $\sum_{t=1}^v(-1)^{t-v}\binom{v}{t}\binom{t-1}{v-1}=\sum_{t=v}^v(-1)^{t-v}\binom{v}{t}\binom{t-1}{v-1}=1.$
Now assume that the claim is true for $(v',j')\in\{(v-1,j-1),(v-1,j),(v,j-1)\},$ that is, for any such $v'$ and $j',$ $\sum_{t=1}^{j'}(-1)^{t-v'}\binom{j'}{t}\binom{t-1}{v'-1}=1,$ and consider the claim for $(v,j).$ Using the fact that $\binom{a}{b}=\binom{a-1}{b-1}+\binom{a-1}{b}$ for any $1\le b\le a,$ it can be verified that
\begin{eqnarray}\label{eq:WTS2}
\nonumber&&\sum_{t=1}^j (-1)^{t-v}\binom{j}{t}\binom{t-1}{v-1}\\
&=&1-\sum_{t=1}^{j-1}(-1)^{t-v}\binom{j-1}{t}\binom{t-1}{v-1}+\sum_{t=1}^{j-1} (-1)^{t-(v-1)}\binom{j-1}{t}\binom{t-1}{(v-1)-1}=1.
\end{eqnarray}
For completeness, the proof of Equation~\eqref{eq:WTS2} can be found in Appendix~\ref{app:WTS}.
\end{proof}
This completes the proof that $A_{j,v}=(-1)^{j-v}\binom{j-1}{v-1}.$
\end{proof}
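Combining Proposition~\ref{prop:Ajv} with Equation~\eqref{eq:AjvtoAjlv}, the closed form can also be sanity-checked numerically; the sketch below (ours) reuses \texttt{A\_brute} from the previous sketch.
\begin{verbatim}
# Hedged sketch: check A_{j,v} = sum_l (-1)^(l-1) * A_{j,l,v}
#                             = (-1)^(j-v) * C(j-1, v-1) for small j, v.
from math import comb

def A_jv(j, v):
    return sum((-1) ** (l - 1) * A_brute(j, l, v)
               for l in range(1, comb(j, v) + 1))

assert all(A_jv(j, v) == (-1) ** (j - v) * comb(j - 1, v - 1)
           for j in range(2, 6) for v in range(1, j + 1))
\end{verbatim}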
We then utilize the explicit expressions of the coefficients of $\Sigma_j^{(L+1)}$ in $\Psi_v^{(L+1)}$ for various $j$ and $v$ to obtain $L$ different lower bounds for $t_I$ in terms of the minimum Levenshtein distance $d,$ the deletion bound $t_D$ and the list size $L.$ In the following remark, we first discuss the overall strategy by which these $L$ lower bounds are derived.
\begin{rmk}\label{rmk:strat}
For $r=1,\cdots, L,$ our strategy for finding the $r$-th lower bound for $t_I$ is to consider the sum $\Phi_r^{(L+1)}=\sum_{j=1}^{L+1} \Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)}=\sum_{u=1}^rc_{r,u}\Psi_u^{(L+1)},$ a linear combination of $\Psi_1^{(L+1)},\cdots, \Psi_r^{(L+1)}$ such that $\Phi_{r,3}^{(L+1)}=\cdots=\Phi_{r,r+1}^{(L+1)}=0.$ We then show that $\Phi_{r,1}^{(L+1)}>0>\Phi_{r,2}^{(L+1)}$ and that $\sum_{j=r+2}^{L+1} \Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)}\ge 0.$ Note that to show that this last sum is non-negative, it is sufficient to show that for any $j\in\{r+2,\cdots, L\}$ such that $j-r$ is even, $\Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)} + \Phi_{r,j+1}^{(L+1)}\Sigma_{j+1}^{(L+1)}\ge 0,$ and, when $L-r$ is odd, $\Phi_{r,L+1}^{(L+1)}\Sigma_{L+1}^{(L+1)}\ge 0.$ Having shown these claims, we have
\begin{equation}\label{eq:Phirgen}
\sum_{u=1}^r c_{r,u} \Psi_u^{(L+1)}\ge \Phi_{r,1}^{(L+1)}\Sigma_1^{(L+1)} + \Phi_{r,2}^{(L+1)}\Sigma_2^{(L+1)}.
\end{equation}
Noting that $\bigcup_{0\le i_1<i_2<\cdots<i_v\le L}Y_{i_1i_2\cdots i_v}\subseteq [N]$ for any $v=1,\cdots, L+1,$ that $N\ge n-t_D,$ and that $\Sigma_1^{(L+1)}\ge \frac{L+1}{2}(N+n-t_I-t_D)$ and $\Sigma_2^{(L+1)}\le \frac{(L+1)L}{2}\left(n-\frac{d}{2}\right),$ we have the inequality
\begin{equation*}
\left(\sum_{u=1}^r c_{r,u}\right)N \ge \Phi_{r,1}^{(L+1)}\frac{L+1}{2}(N+n-t_I-t_D)+ \Phi_{r,2}^{(L+1)}\frac{(L+1)L}{2}\left(n-\frac{d}{2}\right)
\end{equation*}
or equivalently
\begin{equation}\label{eq:Phirins}
t_I\ge
\left(\frac{2\Phi_{r,1}^{(L+1)}(L+1)-2\sum_{u=1}^r c_{r,u}}{\Phi_{r,1}^{(L+1)}(L+1)}\right)(n-t_D)
+\frac{\Phi_{r,2}^{(L+1)}}{\Phi_{r,1}^{(L+1)}}L\left(n-\frac{d}{2}\right)
\end{equation}
which provides us with the $r$-th lower bound for $t_I.$
\end{rmk}
As discussed in Remark~\ref{rmk:strat}, for a fixed $L$ and each $r=1,\cdots, L,$ we calculate $\Phi_r^{(L+1)},$ which we use to obtain the $r$-th lower bound for $t_I.$ In the following lemma, we provide a choice of $c_{r,1},\cdots, c_{r,r}$ that satisfies all the requirements stated in Remark~\ref{rmk:strat}.
\begin{lemma}\label{lem:cruchoice}
For $u=1,\cdots, r,$ define $c_{r,u}=r+1-u$ and
\[\Phi_r^{(L+1)}=\sum_{j=1}^{L+1} \Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)}=\sum_{u=1}^rc_{r,u}\Psi_u^{(L+1)}.\]
Then
\begin{enumerate}
\item $\Phi_{r,1}^{(L+1)}=r>0>-1=\Phi_{r,2}^{(L+1)}.$
\item For $j=3,\cdots, r+1, \Phi_{r,j}^{(L+1)}=0.$
\item For any $j\in\{r+2,\cdots, L\}$ such that $j-r$ is even, $\Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)} + \Phi_{r,j+1}^{(L+1)}\Sigma_{j+1}^{(L+1)}\ge 0.$ Furthermore, if $L-r$ is odd, $\Phi_{r,L+1}^{(L+1)}\Sigma_{L+1}^{(L+1)}\ge 0.$ This directly implies that $\sum_{j=r+2}^{L+1} \Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)}\ge 0.$
\end{enumerate}
\end{lemma}
\begin{proof}
Recall that for any $u=1,\cdots, r, \Psi_u^{(L+1)}=\sum_{j=u}^{L+1} (-1)^{j-u}\binom{j-1}{u-1} \Sigma_j^{(L+1)}.$ Hence for $j=1,\cdots, L+1, \Phi_{r,j}^{(L+1)}=\sum_{u=1}^r (r+1-u)(-1)^{j-u}\binom{j-1}{u-1}=\sum_{u=1}^{\min\{r,j\}}(r+1-u)(-1)^{j-u}\binom{j-1}{u-1}.$
\begin{enumerate}
\item When $j=1,$ the sum has only one term, corresponding to $u=1,$ hence $\Phi_{r,1}^{(L+1)} = r>0.$ When $j=2,$ the sum has one or two terms depending on whether $r=1$ or $r\ge 2.$ If $r=1,$ we again only have the term with $u=1,$ and hence $\Phi_{1,2}^{(L+1)}=-1<0.$ If $r\ge 2,$ the sum has two terms, corresponding to $u=1$ and $u=2,$ and hence $\Phi_{r,2}^{(L+1)}=-r+(r-1)=-1<0.$ This shows that $\Phi_{r,1}^{(L+1)}=r>0>-1=\Phi_{r,2}^{(L+1)},$ proving the first claim.
\item For the proofs of the last two claims, we simplify the expression of $\Phi_{r,j}^{(L+1)}$ by shifting the summation index from $u$ to $u-1.$ Noting that for any $0<b\le a,$ $b\binom{a}{b}=a\binom{a-1}{b-1},$ this gives
\begin{small}
\begin{equation*}
\Phi_{r,j}^{(L+1)}
= r\sum_{u=0}^{\min\{r-1,j-1\}}(-1)^{j-1-u}\binom{j-1}{u}-(j-1)\sum_{u=0}^{\min\{r-2,j-2\}}(-1)^{j-2-u}\binom{j-2}{u}.
\end{equation*}
\end{small}
Recall that for any positive integer $b, \sum_{a=0}^b (-1)^{b-a}\binom{b}{a}=0.$ Hence for any $c<b,$ we have
$\sum_{a=0}^c (-1)^{b-a}\binom{b}{a}=-\sum_{a=c+1}^b (-1)^{b-a}\binom{b}{a}.$ Now we are ready to prove the last two claims. First, we consider the case when $j\le r.$ Then in this case, both sums in $\Phi_{r,j}^{(L+1)}$ equal zero, which directly implies $\Phi_{r,j}^{(L+1)}=0$ for $3\le j\le r.$ Next, assume that $j=r+1.$ It is then easy to see that $\Phi_{r,j}^{(L+1)}=-r-(-r)=0.$ This completes the proof for the second claim.
\item Lastly, assume that $r+2\le j\le L+1.$ Then we have
\begin{equation*}
\Phi_{r,j}^{(L+1)}=-r\sum_{u=r}^{j-1} (-1)^{j-1-u}\binom{j-1}{u}+(j-1)\sum_{u=r-1}^{j-2}(-1)^{j-u}\binom{j-2}{u}.
\end{equation*}
We denote the first sum by $A$ and the second sum by $B.$ We consider $A$ and $B$ separately using the fact that $\binom{a}{b}=\binom{a-1}{b-1}+\binom{a-1}{b}.$
\begin{itemize}
\item First, we consider $A.$ Then
\begin{equation*}
-\frac{A}{r}=\sum_{u=r}^{j-1}(-1)^{j-1-u}\binom{j-2}{u-1}+\sum_{u=r}^{j-2}(-1)^{j-1-u}\binom{j-2}{u}= (-1)^{j-r-1}\binom{j-2}{r-1}
\end{equation*}
which implies that $A=r(-1)^{j-r}\binom{j-2}{r-1}.$
\item Next, we consider $B.$ Then
\begin{equation*}
\frac{B}{j-1}=\sum_{u=r-1}^{j-2}(-1)^{j-u}\binom{j-3}{u-1}+\sum_{u=r-1}^{j-3}(-1)^{j-u}\binom{j-3}{u}=(-1)^{j-r-1}\binom{j-3}{r-2}
\end{equation*}
which implies that $B=-(j-1)(-1)^{j-r}\binom{j-3}{r-2}.$
\end{itemize}
Hence
\begin{equation*}
\Phi_{r,j}^{(L+1)}=(-1)^{j-r}\cdot\binom{j-3}{r-2}\cdot\left(\frac{r(j-2)}{r-1}-(j-1)\right).
\end{equation*}
It is easy to see that, since $j> r+1,$ we have $\frac{r(j-2)}{r-1}-(j-1)> 0.$ Hence we must have $\Phi_{r,j}^{(L+1)}>0$ if $j-r$ is even and $\Phi_{r,j}^{(L+1)}<0$ if $j-r$ is odd. In particular, when $L-r$ is odd, taking $j=L+1$ makes $L+1-r$ even, implying $\Phi_{r,L+1}^{(L+1)}>0.$
Lastly, suppose that $r+2\le j\le L$ and $j-r$ is even. We consider $\Delta=\Phi_{r,j}^{(L+1)}\Sigma_j^{(L+1)}+\Phi_{r,j+1}^{(L+1)}\Sigma_{j+1}^{(L+1)}.$ Then we have
\begin{equation*}
\Delta=\binom{j-3}{r-2}\left(\frac{r(j-2)}{r-1}-(j-1)\right)\Sigma_j^{(L+1)}-\binom{j-2}{r-2}\left(\frac{r(j-1)}{r-1}-j\right)\Sigma_{j+1}^{(L+1)}.
\end{equation*}
Now consider $Y_{i_1\cdots i_{j+1}}$ for some $0\le i_1<i_2<\cdots<i_{j+1}\le L.$ Note that in the second term, we are deducting $\binom{j-2}{r-2}\left(\frac{r(j-1)}{r-1}-j\right)$ copies of $|Y_{i_1\cdots i_{j+1}}|.$ On the other hand, there are exactly $j+1$ choices of indices $0\le i'_1<i'_2<\cdots<i'_j\le L$ with $\{i'_1,\cdots,i'_j\}\subseteq\{i_1,\cdots,i_{j+1}\},$ and for each such choice $Y_{i_1\cdots i_{j+1}}\subseteq Y_{i'_1\cdots i'_j}.$ Hence in the first term, we are adding at least $(j+1)\binom{j-3}{r-2}\left(\frac{r(j-2)}{r-1}-(j-1)\right)$ copies of $|Y_{i_1\cdots i_{j+1}}|.$ Hence
\[
\Delta\ge C\sum_{0\le i_1<\cdots<i_{j+1}\le L+1} |Y_{i_1\cdots i_{j+1}}|
\]
where
\begin{eqnarray*}
C&=&(j+1)\binom{j-3}{r-2}\left(\frac{r(j-2)}{r-1}-(j-1)\right)-\binom{j-2}{r-2}\left(\frac{r(j-1)}{r-1}-j\right)\\
&=&\binom{j-3}{r-1}\left(j-\frac{r-1}{j-r-1}\right)\ge 3>0
\end{eqnarray*}
where the inequalities are due to the fact that $j\ge r+2.$
\end{enumerate}
This completes the proof of Lemma~\ref{lem:cruchoice}.
\end{proof}
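The coefficient computations in Lemma~\ref{lem:cruchoice} can likewise be verified numerically. The following sketch (ours) evaluates the expansion $\Phi_{r,j}^{(L+1)}=\sum_{u=1}^{\min\{r,j\}}(r+1-u)(-1)^{j-u}\binom{j-1}{u-1}$ derived at the start of the proof.
\begin{verbatim}
# Hedged sketch: numerical check of the first two claims of the lemma.
from math import comb

def Phi(r, j):
    return sum((r + 1 - u) * (-1) ** (j - u) * comb(j - 1, u - 1)
               for u in range(1, min(r, j) + 1))

for r in range(1, 8):
    assert Phi(r, 1) == r and Phi(r, 2) == -1            # claim 1
    assert all(Phi(r, j) == 0 for j in range(3, r + 2))  # claim 2
\end{verbatim}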
Lemma~\ref{lem:cruchoice} shows that by taking $c_{r,u}=r+1-u$ for $u=1,\cdots, r,$ we obtain the desired linear combination for the $r$-th lower bound on $t_I.$
We are now ready to prove the insdel-list-decodability result for a general list size.
\begin{lemma}\label{lem:genl}
Let $\mathcal{C}\subseteq\mathbb{\Sigma}_q^n$ be a $q$-ary code with length $n$ and minimum Levenshtein distance $d=2\delta n.$ We further let $L\ge 2$ be an integer and $\tau_I,\tau_D$ be non-negative real numbers. If $\mathcal{C}$ is not $(\tau_I,\tau_D,L)$-insdel-list-decodable, then either $\tau_D\ge \delta$ or
\[\tau_I\ge \rho^{(\delta,L)}(1-\tau_D).\]
\end{lemma}
\begin{proof}
Suppose that $\tau_D<\delta.$ For simplicity, we write $t_I=\tau_I n$ and $t_D=\tau_D n.$ For $r=1,\cdots, L,$ utilizing the linear combination $\Phi_r^{(L+1)}$ defined in Lemma~\ref{lem:cruchoice}, and noting that $\sum_{u=1}^{r} c_{r,u}=\frac{r(r+1)}{2},$ $\Phi_{r,1}^{(L+1)}=r$ and $\Phi_{r,2}^{(L+1)}=-1,$ Equation~\eqref{eq:Phirins} yields
\[t_I\ge\left(\frac{2L-r+1}{L+1}\right)(n-t_D)-\frac{L}{r}\left(n-\frac{d}{2}\right).\]
Hence
\begin{equation}\label{eq:genlmax}
\tau_I\ge \max_{r=1,\cdots, L}\left\{
\left(\frac{2L-r+1}{L+1}\right)(1-\tau_D)-\frac{L}{r}\left(1-\delta\right)
\right\}=\rho^{(\delta,L)}(1-\tau_D).
\end{equation}
This completes the proof.
\end{proof}
The next theorem directly follows from Lemma~\ref{lem:genl}.
\begin{theorem}\label{thm:genl}
Let $\mathcal{C}\subseteq\mathbb{\Sigma}_q^n$ be a $q$-ary code of length $n$ with minimum Levenshtein distance $d=2\delta n$ and $L\ge 2$ be a positive integer. We further let $\tau_I$ and $\tau_D$ be two non-negative real numbers such that $\tau_D<\delta$ and $\tau_I<\rho^{(\delta,L)}(1-\tau_D).$
Then $\mathcal{C}$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
\begin{rmk}\label{rmk:ourboundgood}
Note that our bound requires $\tau_D<\delta.$ Recall that the unique decoding bound is $\tau_I+\tau_D<\delta.$ Hence our bound is only meaningful when some insertion errors occur, that is, when $\tau_I>0.$ Furthermore, when $1-\tau_D\le \frac{L+1}{L-1}\left(1-\delta\right),$ our requirement becomes $\tau_I+\tau_D<\delta,$ which coincides with the unique decoding bound. Hence, for our result to improve on unique decoding, we also need $\tau_D$ to satisfy $1-\tau_D>\frac{L+1}{L-1}\left(1-\delta\right),$ or equivalently, $1<\frac{L+1}{2}\delta-\frac{L-1}{2}\tau_D.$ In particular, for a code to be insdel-list-decodable with list size $L$ in this regime, its relative minimum Levenshtein distance must be larger than $\frac{2}{L+1}.$ Hence, if $\delta=o(1),$ we require $L=\omega(1).$
\end{rmk}
\section{Comparison with the HY Bound}\label{sec:Inscom}
In this section, we compare the lower bound on the list-decodability of an insdel code $\mathcal{C}\subseteq\mathbb{\Sigma}_q^n$ derived in Theorem~\ref{thm:genl} with the best known lower bound, namely the bound derived in~\cite{HY20}, which we call the HY bound. We note that the result presented in \cite[Lemma $1$]{HY20} assumes that the length $N$ of the received word is fixed, so we focus on the comparison with \cite[Theorem $1$]{HY20}, which we first restate.
\begin{theorem}[Restatement of Theorem $1$ of \cite{HY20}]\label{thm:HY20}
Let $\mathcal{C}\subseteq \mathbb{\Sigma}_q^n$ be a code of minimum Levenshtein distance $d=2\delta n.$ Define non-negative integers $t_I=\tau_I n$ and $t_D=\tau_D n<n.$ If $\tau_I<\frac{(\delta-\tau_D)(1-\tau_D)}{(1-\delta)}$ and $L= \left\lfloor\frac{\delta(1+\tau_I)}{(\delta-\tau_D)(1-\tau_D)-(1-\delta)\tau_I}\right\rfloor,$ the code $\mathcal{C}$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
Define $\phi_1^{(\delta, L)}(x)\triangleq \frac{x^2}{1-\delta}-x.$ Note that, by the value assigned to $L,$ we have $L>\frac{\delta(1+\tau_I)}{(\delta-\tau_D)(1-\tau_D)-(1-\delta)\tau_I}-1,$ or equivalently, $\tau_I<\phi_2^{(\delta,L)}(1-\tau_D)$
where
\begin{small}
\[\phi_2^{(\delta,L)}(x)\triangleq\frac{(L+1)x^2-(L+1)(1-\delta)x+(1-\delta)-1}{L(1-\delta)+1}.\]
\end{small}
Hence Theorem~\ref{thm:HY20} can be reformulated as requiring $\tau_I<\min\{\phi_1^{(\delta,L)}(1- \tau_D),\phi_2^{(\delta,L)}(1- \tau_D)\}.$ Now, for given $\delta$ and $L,$ we are interested in showing that our bound outperforms the HY bound for some values of $\tau_D,$ that is, that for such values of $\tau_D$ we have $\min\{\phi_1^{(\delta,L)}(1-\tau_D),\phi_2^{(\delta,L)}(1-\tau_D)\}<\rho^{(\delta,L)}(1-\tau_D).$ Recall that by Remark~\ref{rmk:ourboundgood}, our bound coincides with the unique decoding bound when $\tau_D\ge 1-\frac{L+1}{L-1}(1-\delta).$ Hence, in addition, we require that $\tau_D<1-\frac{L+1}{L-1}(1-\delta).$ For this requirement to make sense, we need $1-\delta<\frac{L-1}{L+1},$ or equivalently, $\delta>\frac{2}{L+1}.$
The existence of such $\tau_D$ is established in Theorem~\ref{thm:oursbettergen}.
\begin{theorem}\label{thm:oursbettergen}
Let $L\ge 2$ be a fixed integer. Then there exists $0<\delta_1<1$ such that for any $\delta$ with $\delta_1<\delta<1,$ there exists an open interval $\mathcal{I}^{(\delta,L)}\subseteq \left[0,1-\frac{L+1}{L-1}(1-\delta)\right)$ such that for any $\tau_D\in \mathcal{I}^{(\delta,L)},$ $\min\{\phi_1^{(\delta,L)}(1-\tau_D),\phi_2^{(\delta,L)}(1-\tau_D)\}<\rho^{(\delta,L)}(1-\tau_D).$
\end{theorem}
\begin{proof}
Note that $\phi_2^{(\delta,L)} (1-\tau_D)<\phi_1^{(\delta,L)}(1- \tau_D)$ if and only if
\[1-\tau_D<\frac{1-\delta+\delta^2+\sqrt{\delta^4+2\delta^3-5\delta^2+2\delta+1}}{2(1-\delta)}.\]
It can be verified that $\frac{1-\delta+\delta^2+\sqrt{\delta^4+2\delta^3-5\delta^2+2\delta+1}}{2(1-\delta)}\ge 1.$ This shows that for any $\tau_D<\delta,$ we have $\phi_2^{(\delta,L)} (1-\tau_D)<\phi_1^{(\delta,L)} (1-\tau_D).$ Hence, Theorem~\ref{thm:HY20} can be further simplified to $\tau_I<\phi_2^{(\delta,L)} (1-\tau_D)$ and we only aim to show that $\phi_2^{(\delta,L)}(1-\tau_D)<\rho^{(\delta,L)}(1-\tau_D).$
It is easy to see that as $\tau_D\rightarrow 1-\frac{L+1}{L-1}(1-\delta),$ we have $\rho^{(\delta,L)}(1-\tau_D)\rightarrow\rho^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)=\frac{2}{L-1}(1-\delta),$ while $\phi_2^{(\delta,L)}(1-\tau_D)$ approaches
\begin{equation*}
\phi_2^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)= \frac{1}{L(1-\delta)+1}\left[\frac{2(L+1)^2}{(L-1)^2}(1-\delta)^2+(1-\delta)-1\right].
\end{equation*}
It can be verified that $\phi_2^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)<\rho^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)$ if and only if $(6L+2)(1-\delta)^2+(L^2-4L+3)(1-\delta)-(L^2-2L+1)<0.$ As a quadratic in $1-\delta,$ the left-hand side has one positive real root $\beta_2>0$ and one negative real root $\beta_1<0.$ Hence $\phi_2^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)<\rho^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)$ if and only if $\beta_1<1-\delta<\beta_2,$ or equivalently, $1-\beta_2<\delta<1-\beta_1.$ Noting that $\beta_1<0$ and that we require $\delta>\frac{2}{L+1},$ we set $\delta_1=\max\left\{\frac{2}{L+1},1-\beta_2\right\};$ then for any $\delta_1<\delta<1,$
\[\phi_2^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right)<\rho^{(\delta,L)}\left(\frac{L+1}{L-1}(1-\delta)\right).\]
Since both functions are continuous, it is easy to see that for any such $L$ and $\delta\in(\delta_1,1),$ there exists an open interval $\mathcal{I}^{(\delta,L)}\subseteq \left[0,1-\frac{L+1}{L-1}(1-\delta)\right)$ such that for any $\tau_D\in \mathcal{I}^{(\delta,L)},$ $\rho^{(\delta,L)}(1-\tau_D)>\phi_2^{(\delta,L)}(1-\tau_D),$ proving the claim.
\end{proof}
\begin{rmk}\label{rmk:valueofbeta}
A simple algebraic manipulation yields
\[\beta_2=\frac{L-1}{4(3L+1)}\left(-(L-3)+\sqrt{L^2+18L+17}\right).\] It is also easy to verify that
\[\frac{L-1}{4(3L+1)}\left(-(L-3)+\sqrt{L^2+18L+17}\right)<\frac{L-1}{L+1}.\]
Hence, we have \[\delta_1=1-\beta_2=\frac{L^2+8L+7-(L-1)\sqrt{L^2+18L+17}}{4(3L+1)}.\]
\end{rmk}
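As a quick sanity check (our own sketch), $\delta_1$ can be evaluated numerically; for $L=2$ it recovers the threshold $\frac{27-\sqrt{57}}{28}$ appearing in Example~\ref{ex:oursbettergen} below.
\begin{verbatim}
# Hedged sketch: evaluate delta_1 = 1 - beta_2 from the remark above.
from math import sqrt

def delta_1(L):
    return (L**2 + 8*L + 7
            - (L - 1) * sqrt(L**2 + 18*L + 17)) / (4 * (3*L + 1))

assert abs(delta_1(2) - (27 - sqrt(57)) / 28) < 1e-12
print(delta_1(2))  # approximately 0.6947
\end{verbatim}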
To better illustrate the region claimed in Theorem~\ref{thm:oursbettergen}, we provide an example when we set $L=2,$ which can be found in Example~\ref{ex:oursbettergen}.
\begin{ex}\label{ex:oursbettergen}
Note that when $L=2$ and $x>3(1-\delta),$ we have $\rho^{(\delta,2)}(x)=\frac{4}{3}x-2(1-\delta),$ while $\phi_2^{(\delta,2)}(x)=\frac{3}{2(1-\delta)+1}x^2-\frac{3(1-\delta)}{2(1-\delta)+1}x+\frac{(1-\delta)-1}{2(1-\delta)+1}.$
It can be verified that $\rho^{(\delta,2)}(1-\tau_D)>\phi_2^{(\delta,2)}(1-\tau_D)$ if and only if $\delta>\frac{27-\sqrt{57}}{28}$ and $1-\tau_D\in\left(3(1-\delta),\alpha\right)$
where
\begin{tiny}
\[\alpha=\frac{17(1-\delta)+4+\sqrt{-143(1-\delta)^2-188(1-\delta)+124}}{18}.\]
\end{tiny}
As presented in Section~\ref{sec:IntComp}, we illustrate this example in Figure~\ref{fig:rhodelta2exalt}.
\end{ex}
\section{List-decodability of Various Insdel Codes}\label{sec:inscons}
In this section, we utilize the result derived in Theorem~\ref{thm:genl} to determine lower bounds on the insdel-list-decodability of various families of insdel codes.
\subsection{Reed-Solomon Codes}\label{sec:RScodes}
The first family we consider is the family of Reed-Solomon codes, one of the most commonly used families of codes under the Hamming metric. First, we recall the definition of a Reed-Solomon code.
\begin{defn}\label{def:RS}
Let $q$ be a prime power and $\mathds{F}_q$ be the finite field of $q$ elements. Let $n\le q$ be a positive integer and $\alpha_1,\cdots, \alpha_n$ be $n$ distinct elements of $\mathds{F}_q.$ Denote ${\boldsymbol \alpha}=(\alpha_1,\cdots, \alpha_n).$ For a positive integer $k\in\{1,\cdots, n\},$ we denote by $\mathds{F}_q[x]_{<k}$ the set of all polynomials over $\mathds{F}_q$ of degree less than $k.$ We define the Reed-Solomon code of dimension $k$ with evaluation vector ${\boldsymbol\alpha},$ denoted $\mathtt{RS}_{\boldsymbol\alpha}(n,k),$ as follows:
\[\mathtt{RS}_{\boldsymbol\alpha}(n,k)\triangleq\left\{(f(\alpha_1),\cdots, f(\alpha_n)):f(x)\in \mathds{F}_q[x]_{<k}\right\}.\]
\end{defn}
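For concreteness, the following minimal Python sketch enumerates a small Reed-Solomon code over a prime field, where arithmetic modulo $p$ suffices; the parameters and function names are ours and purely illustrative:
\begin{verbatim}
# A minimal sketch of Definition def:RS over a prime field F_p (so q = p and
# arithmetic modulo p suffices); all names and parameters are illustrative.
from itertools import product

def rs_codeword(f, alphas, p):
    """Evaluate f(x) = sum_i f[i] x^i (deg < k) at each evaluation point."""
    return tuple(sum(c * pow(a, i, p) for i, c in enumerate(f)) % p
                 for a in alphas)

p, n, k = 7, 5, 2
alphas = (0, 1, 2, 3, 4)                      # n distinct elements of F_7
code = {rs_codeword(f, alphas, p) for f in product(range(p), repeat=k)}
assert len(code) == p ** k                    # q^k distinct codewords
\end{verbatim}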
It can be shown that regardless of the choice of ${\boldsymbol \alpha},$ the minimum Hamming distance of $\mathtt{RS}_{\boldsymbol\alpha}(n,k)$ is always $n-k+1.$ In contrast, the minimum Levenshtein distance of $\mathtt{RS}_{\boldsymbol \alpha}(n,k)$ is not invariant with respect to ${\boldsymbol \alpha}.$ On one hand, if ${\boldsymbol \alpha}$ is chosen such that the corresponding Reed-Solomon code is cyclic, then its minimum Levenshtein distance is $2.$ On the other hand, it can also be shown that for some choices of ${\boldsymbol \alpha},$ the Reed-Solomon code of dimension $k$ can correct up to $n-2k+1$ Levenshtein errors~\cite{CST21}. This implies that for such a choice of evaluation points, the minimum Levenshtein distance is at least $2n-4k+4.$ Although there have been numerous works on the insdel-correcting capabilities of Reed-Solomon codes (see, for example,~\cite{DLTX21, LT21, CZ22, LX21, CST21}), there has been no study of their list-decodability against insdel errors. It is easy to see from Theorem~\ref{thm:genl} that the upper bound for $t_I$ increases as the minimum Levenshtein distance increases. By Theorem~\ref{thm:genl}, we obtain the following result.
\begin{theorem}[List-Decodability of some Reed-Solomon codes]\label{thm:RSLD}
Let $q$ be a prime power, $n\le q$ and $k=Rn$ be positive integers for some $R\in(0,1).$ Furthermore, let $\delta=1-2R.$ Then if $\tau_D<1-2R$ and $\tau_I<\rho^{(\delta,L)}(1-\tau_D),$ there exists ${\boldsymbol \alpha}=(\alpha_1,\cdots, \alpha_n)\in \mathds{F}_q^n,$ a vector of $n$ distinct elements of $\mathds{F}_q$ such that $\mathtt{RS}_{\boldsymbol \alpha}(n,k)$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
An illustration of the region of $(\tau_D,\tau_I)$ that can be list-decoded with list size $25$ by Reed-Solomon codes of various rate $R$ can be observed in Figure~\ref{fig:rhodeltalRS}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{Figure/rhodeltalRS.png}
\caption{Region of relative insertion and deletion errors list-decodable by Reed-Solomon codes of various rates $R$ with list size $L=25$}
\label{fig:rhodeltalRS}
\end{figure}
\subsection{Varshamov-Tenengolts Codes}\label{sec:VTcodes}
In the remainder of this section, for any integer $q\ge 2,$ we define $\mathbb{\Sigma}_q\triangleq\{0,1,\cdots, q-1\}\subseteq\mathds{Z}.$
The next family we consider is the family of Varshamov-Tenengolts codes, or VT codes for short. It was first constructed over an alphabet of size $2$ in~\cite{VT65} and shown to correct a single insertion or deletion error in~\cite{Lev65}. Here we recall the definition of a binary VT code.
\begin{defn}\label{def:VT2}
For $a\in\{0,\cdots, n\},$ the binary VT code $\mathcal{VT}_a(n)$ is defined to be
\[\mathcal{VT}_a(n)\triangleq \left\{
\begin{array}{c}
\mathbf{c}=(c_1,\cdots, c_n)\in \mathbb{\Sigma}_2^n:\\
\sum_{i=1}^n i\cdot c_i\equiv a\pmod{n+1}
\end{array}
\right\}.\]
\end{defn}
This construction was later extended by Tenengolts in~\cite{Ten84} to any alphabet of size at least $2.$ We recall the definition of a non-binary VT code.
\begin{defn}\label{def:VTq}
Let $q>2$ be a positive integer. For any $q$-ary vector $\mathbf{s}=(s_0,\cdots, s_{n-1})\in \mathbb{\Sigma}_q^n,$ define a corresponding length $(n-1)$ binary vector $\mathbf{a}_{\mathbf{s}}=(\alpha_1,\cdots, \alpha_{n-1})\in \mathbb{\Sigma}_2^{n-1}$ such that for $1\le i\le n-1,$
\[\alpha_i=\left\{
\begin{aligned}
1,&\mathrm{~if~} s_i\ge s_{i-1}\\
0,&\mathrm{~if~} s_i<s_{i-1}.
\end{aligned}
\right.\]
Then for any $a=0,\cdots, n-1$ and $b\in\mathbb{\Sigma}_q,$ the $q$-ary VT code $\mathcal{VT}^{(q)}_{a,b}(n)$ is defined to be
\[\mathcal{VT}^{(q)}_{a,b}(n)\triangleq \left\{
\begin{array}{c}
\mathbf{s}=(s_0,\cdots, s_{n-1}):\\
\sum_{i=1}^{n-1} i\alpha_i\equiv a\pmod{n}\\
\mathrm{~and~}\sum_{i=0}^{n-1} s_i\equiv b\pmod{q}
\end{array}
\right\}.\]
\end{defn}
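The defining congruences translate directly into membership tests. The following Python sketch (function names are ours and purely illustrative) implements Definitions~\ref{def:VT2} and~\ref{def:VTq}:
\begin{verbatim}
# Membership tests for Definitions def:VT2 and def:VTq (illustrative sketch;
# function names are ours).
from itertools import product

def in_vt_binary(c, a):
    """c = (c_1, ..., c_n) is in VT_a(n) iff sum_i i*c_i = a (mod n+1)."""
    n = len(c)
    return sum((i + 1) * ci for i, ci in enumerate(c)) % (n + 1) == a

def in_vt_qary(s, a, b, q):
    """s = (s_0, ..., s_{n-1}) is in VT^{(q)}_{a,b}(n)."""
    n = len(s)
    alpha = [1 if s[i] >= s[i - 1] else 0 for i in range(1, n)]
    return (sum((i + 1) * ai for i, ai in enumerate(alpha)) % n == a
            and sum(s) % q == b)

# E.g., the codewords of the binary code VT_0(4):
vt_0_4 = [c for c in product((0, 1), repeat=4) if in_vt_binary(c, 0)]
\end{verbatim}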
Since both $\mathcal{VT}_a(n)$ and $\mathcal{VT}^{(q)}_{a,b}(n)$ can correct a single insertion/deletion error, their minimum Levenshtein distance must be at least $3.$ Furthermore, since the minimum Levenshtein distance must also be even, it must in fact be at least $4.$
Furthermore, it can also be verified that when $n$ is sufficiently large, $\mathcal{VT}_a(n)$ and $\mathcal{VT}^{(q)}_{a,b}(n)$ have minimum Levenshtein distance of exactly $4.$
Hence, by Theorem~\ref{thm:genl}, we obtain the following result.
\begin{theorem}[List-Decodability of VT code]\label{thm:VTLD}
Let $q,n\ge 2$ be positive integers. If $\tau_D<\frac{2}{n}$ and $\tau_I<\rho^{\left(\frac{4}{n},L\right)}(1-\tau_D),$ then for any $a=0,\cdots, n, a'=0,\cdots, n-1$ and $b\in \mathbb{\Sigma}_q,$ both $\mathcal{VT}_a(n)$ and $\mathcal{VT}_{a',b}^{(q)}(n)$ are $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
To the best of our knowledge, there has only been one work on the list-decoding of binary VT codes~\cite{Wac18}, and it only considers deletion errors. Recall that for our upper bound $\rho^{\left(\frac{4}{n},L\right)}$ to be meaningful, we require $\tau_I>0$ and $n<L+1-\frac{L-1}{2}\tau_D n\le L+1$ where $\tau_D\le \frac{1}{n}.$ However, when this requirement is satisfied, both codes are insdel-list-decodable against up to $\tau_I n>0$ insertions and $\tau_D n=1$ deletion.
\subsection{Helberg Code}\label{sec:HC}
Note that VT codes are constructed to correct only a single insertion/deletion error. The construction was extended in~\cite{HF02} to what is now commonly known as the family of binary Helberg codes, which is shown to be capable of correcting up to $s$ insertion/deletion errors, where $s$ is one of its parameters. The family of Helberg codes was then generalized to non-binary alphabets in~\cite{LN16}, and the generalization is again shown to be capable of correcting up to $s$ insertion/deletion errors for some parameter $s.$ Here we recall the definition of a Helberg code.
\begin{defn}\label{def:HC}
Let $s<n$ and $q\ge 2$ be positive integers. Define a sequence of non-negative integers $\{v_i(q,s)\}_{i\in \mathbb{Z}}$ where
\[v_i(q,s)=\left\{
\begin{aligned}
0,&\mathrm{~if~}i\le 0\\
1+(q-1)\sum_{j=1}^s v_{i-j}(q,s),&\mathrm{~if~}i\ge 1.
\end{aligned}
\right.
\]
Define $m\ge v_{n+1}(q,s)=1+(q-1)\sum_{j=0}^{s-1}v_{n-j}(q,s)$ and $a\in\{0,\cdots, m-1\}.$ Then we define the $q$-ary Helberg code $\mathcal{C}_H(q,n,s,a)$ as follows:
\begin{small}
\[\mathcal{C}_H(q,n,s,a)\triangleq \left\{
\begin{array}{c}
(x_1,\cdots, x_n)\in \mathbb{\Sigma}_q^n:\\
\sum_{i=1}^n v_i(q,s) x_i\equiv a\pmod{m}
\end{array}
\right\}.\]
\end{small}
\end{defn}
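The weight sequence $v_i(q,s)$ and the defining congruence translate directly into code. The following Python sketch of Definition~\ref{def:HC} is purely illustrative; all names are ours:
\begin{verbatim}
# Illustrative sketch of Definition def:HC: the weight sequence v_i(q, s)
# and the resulting membership test (all names are ours).
def helberg_weights(q, s, n):
    """v_1, ..., v_n with v_i = 1 + (q-1)(v_{i-1} + ... + v_{i-s}), v_{<=0} = 0."""
    v = []
    for i in range(n):
        v.append(1 + (q - 1) * sum(v[max(0, i - s):i]))
    return v

def in_helberg(x, q, s, a, m):
    v = helberg_weights(q, s, len(x))
    return sum(vi * xi for vi, xi in zip(v, x)) % m == a

q, n, s = 2, 6, 2
m = helberg_weights(q, s, n + 1)[-1]          # the smallest allowed m = v_{n+1}(q, s)
\end{verbatim}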
Note that since $\mathcal{C}_H(q,n,s,a)$ can correct up to $s$ insertion/deletion errors, its minimum Levenshtein distance must be at least $2s+1.$ Furthermore, since the minimum Levenshtein distance must also be even, it must in fact be at least $2s+2.$ We note that here $s$ is a constant with respect to $n.$ Hence our lower bound for $d_L(\mathcal{C}_H(q,n,s,a))$ is also a constant with respect to $n,$ and we require $n<\frac{L+1}{4}(2s+2)-\frac{L-1}{2} t_D$ where $t_D<s+1.$ In other words, for our bound to be meaningful, we again require $L=\Omega(n).$ In this case, we have the following result.
\begin{theorem}[List-Decodability of the Helberg code]\label{thm:HCLD}
Let $s<n$ and $q,L\ge 2$ be positive integers, and let $a\in\{0,\cdots, m-1\}$ for some $m\ge v_{n+1}(q,s).$ Then if $\tau_D<\frac{s+1}{n}$ and $\tau_I<\rho^{\left(\frac{2s+2}{n},L\right)}(1-\tau_D),$ the Helberg code $\mathcal{C}_H(q,n,s,a)$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
To the best of our knowledge, there has not been any study on the list-decoding of Helberg code.
\subsection{Optimal Codes} The next codes we consider are the $s$-deletion-correcting codes proposed by Sima and Bruck~\cite{SB21}, which we denote by SB codes, and the two-deletion-correcting codes proposed by Guruswami and H{\aa}stad~\cite{GH21}, which we denote by GH codes. The constructions are rather involved (SB codes use synchronization vectors, dense sequences and hash functions, while GH codes use sketches and regular strings), and since our analysis only requires their minimum Levenshtein distances, we omit their definitions. As discussed before, in either case we use the deletion-correcting capability shown for the codes to lower bound their minimum Levenshtein distance. Since both codes correct only a constant number of deletions with respect to their lengths ($s$ for some constant $s$ in the case of SB codes and $2$ in the case of GH codes), for our bound to be meaningful we again require $L=\Omega(n).$ We then have the following two results.
\begin{theorem}[List-Decodability of SB code~\cite{SB21}]\label{thm:SBCLD}
Let $s<N$ and $n=N+8s\log N+o(\log N),$ and let $\mathcal{C}$ be the $s$-deletion correcting SB code of length $n$ constructed in~\cite[Theorem $1$]{SB21}. Let $L=\Omega(n)$ be the considered list size. Then if $\tau_D<\frac{s+1}{n}$ and $\tau_I<\rho^{\left(\frac{2s+2}{n},L\right)}(1-\tau_D),$ the SB code $\mathcal{C}$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
\begin{theorem}[List-Decodability of GH code~\cite{GH21}]\label{thm:GHCLD}
Let $\mathcal{C}$ be the $2$-deletion correcting code of length $n$ constructed in~\cite[Theorem $1.1$]{GH21}, and let $L=\Omega(n)$ be the list size we consider. Then if $\tau_D<\frac{3}{n}$ and $\tau_I<\rho^{\left(\frac{6}{n},L\right)}(1-\tau_D),$ the GH code $\mathcal{C}$ is $(\tau_I,\tau_D,L)$-insdel-list-decodable.
\end{theorem}
\begin{appendices}
\section{Discussion on Our Bound}\label{app:rhodeltaL}
In the following lemma, we show that $\rho^{(\delta,L)}(x)$ consists of at most $L$ linear pieces and that $\rho^{(\delta,L)}(x)>0.$
\begin{lemma}\label{lem:rhovalact}
Let $\delta$ and $L$ be given and let $\rho^{(\delta,L)}$ be as defined above. Let $r_{\min}\in[L]$ be such that
\[r_{\min}\triangleq\min\left\{
r\in\{1,2,\cdots, L\}:
\frac{L(L+1)}{r(r+1)}\left(1-\delta\right)<1
\right\}.\]
Then $\rho^{(\delta,L)}(x)>0$ and
$\rho^{(\delta,L)}(x)$ is a piecewise linear function in $[1-\delta,1]$ with $L-r_{\min}+1$ pieces where
\begin{enumerate}
\item When $1-\delta\le x\le \frac{L+1}{L-1}(1-\delta),$ we have $\rho^{(\delta,L)}(x)=x-(1-\delta).$
\item For $r=r_{\min}+1,\cdots, L-1,$ when $\frac{L(L+1)}{r(r+1)}(1-\delta)<x\le \frac{L(L+1)}{r(r-1)}(1-\delta),$ we have $\rho^{(\delta,L)}(x) = \frac{2L-r+1}{L+1}x -\frac{L}{r}(1-\delta).$
\item Lastly, when $\frac{L(L+1)}{r_{\min}(r_{\min}+1)}(1-\delta)<x\le 1,$ we have $\rho^{(\delta,L)}(x)=\frac{2L-r_{\min}+1}{L+1} x -\frac{L}{r_{\min}}(1-\delta).$
\end{enumerate}
\end{lemma}
\begin{proof}
First, we consider the definition of $\rho^{(\delta,L)}(x)$ with the whole non-negative real line as its domain.
For $r=1,\cdots, L,$ define $u_r(x)=\frac{2L-r+1}{L+1}x-\frac{L}{r}\left(1-\delta\right).$ By simple algebraic manipulation, for $r=1,\cdots, L-1,$ we obtain $u_r(x)>u_{r+1}(x)$ if and only if $x>\frac{L(L+1)}{r(r+1)}\left(1-\delta\right).$ This directly implies that
\begin{enumerate}
\item When $0\le x\le \frac{L+1}{L-1}(1-\delta),$ we have $\rho^{(\delta,L)}(x)=x-(1-\delta).$
\item For $r=2,\cdots, L-1,$ when $\frac{L(L+1)}{r(r+1)}(1-\delta)<x\le \frac{L(L+1)}{r(r-1)}(1-\delta),$ we have $\rho^{(\delta,L)}(x)=\frac{2L-r+1}{L+1}x-\frac{L}{r}(1-\delta).$
\item Lastly, when $x>\frac{L(L+1)}{2}(1-\delta),$ we have $\rho^{(\delta,L)}(x)=\frac{2L}{L+1}x - L(1-\delta).$
\end{enumerate}
Now we show the positivity of $\rho^{(\delta,L)}$ on the intervals above.
\begin{enumerate}
\item Note that when $x>\frac{L+1}{L-1}\left(1-\delta\right),$ we have $\rho^{(\delta,L)}(x)\ge \frac{L+2}{L+1}x-\frac{L}{L-1}\left(1-\delta\right)>0.$ This shows that $\rho^{(\delta,L)}(x)>0$ whenever $x>\frac{L+1}{L-1}\left(1-\delta\right).$
\item Next, suppose that $0\le x\le \frac{L+1}{L-1}\left(1-\delta\right).$ Then $\rho^{(\delta,L)}(x)=x-\left(1-\delta\right),$ and hence $\rho^{(\delta,L)}(x)>0$ if and only if $x>1-\delta.$
\end{enumerate}
Note that since $1-\delta<1,$ we have $r_{\min}\le L.$ Hence we obtain the claimed form of $\rho^{(\delta,L)}(x),$ completing the proof.
\end{proof}
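The proof above shows that, on $[0,\infty)$, $\rho^{(\delta,L)}$ is precisely the upper envelope of the $L$ lines $u_1,\ldots,u_L$ (with $u_L(x)=x-(1-\delta)$). This gives a direct way to evaluate $\rho^{(\delta,L)}$ numerically, as in the following illustrative Python sketch (the function name is ours):
\begin{verbatim}
# rho^{(delta,L)}(x) as the upper envelope of the lines u_r from the proof of
# Lemma lem:rhovalact (an illustrative sketch; the function name is ours).
def rho(x, delta, L):
    return max((2 * L - r + 1) / (L + 1) * x - (L / r) * (1 - delta)
               for r in range(1, L + 1))

# E.g., on the first piece rho(x) = x - (1 - delta):
delta, L = 0.9, 4
x = 1.1 * (1 - delta)            # 1 - delta <= x <= (L+1)/(L-1) (1 - delta)
assert abs(rho(x, delta, L) - (x - (1 - delta))) < 1e-12
\end{verbatim}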
\section{Proofs of Equations}\label{app:WTS}
Here, for completeness, we provide proofs of some computational claims made earlier.
\begin{enumerate}
\item \textbf{Proof of Equation~\eqref{eq:WTS1}.} Recall that $A_{j,v}=\sum_{\ell=1}^{\binom{j}{v}} (-1)^{\ell-1} A_{j,\ell,v}, A_{j,\ell,v}=\binom{\binom{j}{v}}{\ell}-\sum_{t=1}^{j-1}\binom{j}{t} A_{t,\ell,v}$ and we assumed that for $t\le j-1, A_{t,v}=(-1)^{t-v}\binom{t-1}{v-1}.$ Then
\begin{eqnarray*}
A_{j,v}&=&\sum_{\ell=1}^{\binom{j}{v}} (-1)^{\ell-1} A_{j,\ell,v}=-\sum_{\ell=1}^{\binom{j}{v}}(-1)^{\ell}\binom{\binom{j}{v}}{\ell} -\sum_{t=1}^{j-1}\binom{j}{t} \sum_{\ell=1}^{\binom{j}{v}}(-1)^{\ell-1} A_{t,\ell,v}\\
&=&-\left(\sum_{\ell=0}^{\binom{j}{v}}(-1)^\ell\binom{\binom{j}{v}}{\ell}-1\right)-\sum_{t=1}^{j-1}\binom{j}{t}\sum_{\ell=1}^{\binom{t}{v}}(-1)^{\ell-1} A_{t,\ell,v}\\
&=&1-\sum_{t=1}^{j-1}\binom{j}{t}A_{t,v}=1-\sum_{t=1}^{j-1}\binom{j}{t}(-1)^{t-v}\binom{t-1}{v-1}\\
&=&1-\sum_{t=1}^j\binom{j}{t}(-1)^{t-v}\binom{t-1}{t-v}+(-1)^{j-v}\binom{j-1}{v-1}
\end{eqnarray*}
\noindent where the third equality is due to the fact that $A_{t,\ell,v}=0$ for any $\ell>\binom{t}{v},$ while the equalities in the subsequent line are due to Equation~\eqref{eq:AjvtoAjlv} and the induction hypothesis. \qed
\item \textbf{Proof of Equation~\eqref{eq:WTS2}.} Equation~\eqref{eq:WTS2} can be verified using the fact that $\binom{a}{b}=\binom{a-1}{b-1}+\binom{a-1}{b};$ the computation can be found below, and a small numerical check is given after this list.
\begin{eqnarray*}
\sum_{t=1}^j (-1)^{t-v}\binom{j}{t}\binom{t-1}{v-1}&=&\sum_{t=v}^j(-1)^{t-v}\binom{j}{t}\binom{t-1}{v-1}\\
&=&\sum_{t=v}^j (-1)^{t-v}\binom{j-1}{t}\binom{t-1}{v-1}+\sum_{t=v}^j (-1)^{t-v} \binom{j-1}{t-1}\binom{t-1}{v-1}\\
&=&\sum_{t=1}^{j-1} (-1)^{t-v}\binom{j-1}{t}\binom{t-1}{v-1}+\sum_{t=v}^j(-1)^{t-v}\binom{j-1}{t-1}\binom{t-2}{v-1}\\
&&+\sum_{t=v}^j (-1)^{t-v}\binom{j-1}{t-1}\binom{t-2}{v-2}\\
&=&1-\sum_{t=v-1}^{j-1}(-1)^{t-v}\binom{j-1}{t}\binom{t-1}{v-1}\\
&&+\sum_{t=v-1}^{j-1} (-1)^{t-(v-1)}\binom{j-1}{t}\binom{t-1}{(v-1)-1}\\
&=&1-\sum_{t=1}^{j-1}(-1)^{t-v}\binom{j-1}{t}\binom{t-1}{v-1}\\
&&+\sum_{t=1}^{j-1} (-1)^{t-(v-1)}\binom{j-1}{t}\binom{t-1}{(v-1)-1}=1,
\end{eqnarray*}
\noindent which completes the proof.
\end{enumerate}
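As a sanity check (not a proof), the identity underlying Equation~\eqref{eq:WTS2} can also be verified by brute force for small parameters, e.g., with the following Python sketch:
\begin{verbatim}
# Brute-force check of the identity underlying Equation eq:WTS2 for small
# parameters (an illustrative sanity check, not a proof).
from math import comb

for j in range(1, 13):
    for v in range(1, j + 1):
        total = sum((-1) ** (t - v) * comb(j, t) * comb(t - 1, v - 1)
                    for t in range(v, j + 1))
        assert total == 1
\end{verbatim}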
\end{appendices}
| {
"timestamp": "2022-11-15T02:05:44",
"yymm": "2211",
"arxiv_id": "2211.06606",
"language": "en",
"url": "https://arxiv.org/abs/2211.06606",
"abstract": "For codes equipped with metrics such as Hamming metric, symbol pair metric or cover metric, the Johnson bound guarantees list-decodability of such codes. That is, the Johnson bound provides a lower bound on the list-decoding radius of a code in terms of its relative minimum distance $\\delta$, list size $L$ and the alphabet size $q.$ For study of list-decodability of codes with insertion and deletion errors (we call such codes insdel codes), it is natural to ask the open problem whether there is also a Johnson-type bound. The problem was first investigated by Wachter-Zeh and the result was amended by Hayashi and Yasunaga where a lower bound on the list-decodability for insdel codes was derived.The main purpose of this paper is to move a step further towards solving the above open problem. In this work, we provide a new lower bound for the list-decodability of an insdel code. As a consequence, we show that unlike the Johnson bound for codes under other metrics that is tight, the bound on list-decodability of insdel codes given by Hayashi and Yasunaga is not tight. Our main idea is to show that if an insdel code with a given Levenshtein distance $d$ is not list-decodable with list size $L$, then the list decoding radius is lower bounded by a bound involving $L$ and $d$. In other words, if the list decoding radius is less than this lower bound, the code must be list-decodable with list size $L$. At the end of the paper we use such bound to provide an insdel-list-decodability bound for various well-known codes, which has not been extensively studied before.",
"subjects": "Information Theory (cs.IT)",
"title": "A Lower Bound on the List-Decodability of Insdel Codes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877012965886,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7089594632821222
} |
https://arxiv.org/abs/1902.06720 | Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent | A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions. | \section{Introduction}
Machine learning models based on deep neural networks have achieved unprecedented performance across a wide range of tasks \cite{Krizhevsky2012, he2016deep, devlin2018bert}.
Typically, these models are regarded as complex systems for which many types of theoretical analyses are intractable.
Moreover, characterizing the gradient-based training dynamics of these models is challenging owing to the typically high-dimensional non-convex loss surfaces governing the optimization. As is common in the physical sciences, investigating the extreme limits of such systems can often shed light on these hard problems. For neural networks, one such limit is that of infinite width, which refers either to the number of hidden units in a fully-connected layer or to the number of channels in a convolutional layer.
Under this limit, the output of the network at initialization is a draw from a Gaussian process~(GP); moreover, the network output remains governed by a GP after exact Bayesian training using squared loss~\citep{neal,lee2018deep,matthews2018,novak2018bayesian, garriga2018deep}. Aside from its theoretical simplicity, the infinite-width limit is also of practical interest as wider networks have been found to generalize better~\cite{lee2018deep, novak2018bayesian,neyshabur2014search, novak2018sensitivity, neyshabur2018the}.
In this work, we explore the learning dynamics of wide neural networks under gradient descent and find that the weight-space description of the dynamics becomes surprisingly simple: as the width becomes large, the neural network can be effectively replaced by its first-order Taylor expansion with respect to its parameters at initialization. For this
linear model, the dynamics of gradient descent become \emph{analytically tractable}. While the linearization is only exact in the infinite width limit, we nevertheless find excellent agreement between the predictions of the original network and those of the linearized version even for finite width configurations. The agreement persists across different architectures, optimization methods, and loss functions.
For squared loss, the exact learning dynamics admit a closed-form solution that allows us to characterize the evolution of the predictive distribution in terms of a GP. This result can be thought of as an extension of ``sample-then-optimize" posterior sampling~\cite{matthews2017sample} to the training of deep neural networks. Our empirical simulations confirm that the result accurately models the variation in predictions across an ensemble of finite-width models with different random initializations.
Here we summarize our contributions:
\begin{itemize}[leftmargin=*]
\item \textbf{Parameter space dynamics}:
We show that wide network training dynamics in parameter space are equivalent to the training dynamics of a model which is affine in the collection of all network parameters, the weights and biases. This result holds regardless of the choice of loss function. For squared loss, the dynamics admit a closed-form solution as a function of time.
\item \textbf{Sufficient conditions for linearization}: We formally prove that there exists a threshold learning rate $\eta_{{\rm critical}} $ (see Theorem \ref{thm:main}), such that
gradient descent training trajectories
with learning rate smaller than $\eta_{{\rm critical}} $ stay in an $\mathcal O\left(n^{-1/ 2}\right)$-neighborhood of the trajectory of the linearized network when $n$, the width of the hidden layers, is sufficiently large.
\item \textbf{Output distribution dynamics}:
We formally show that the predictions of a neural network throughout gradient descent training are described by a GP as the width goes to infinity (see Theorem \ref{thm:distribution}), extending results from \citet{Jacot2018ntk}.
We further derive explicit time-dependent expressions for the evolution of this GP during training.
Finally, we provide a novel interpretation of the result. In particular, it offers a quantitative understanding of the mechanism by which gradient descent differs from Bayesian posterior sampling of the parameters: while both methods generate draws from a GP, gradient descent does not generate samples from the posterior of any probabilistic model.
\item \textbf{Large scale experimental support}: We empirically investigate the applicability of the theory in the finite-width setting and find that it gives an accurate characterization of both learning dynamics and posterior function distributions across a variety of conditions, including some practical network architectures such as the wide residual network~\cite{zagoruyko2016wide}.
\item \textbf{Parameterization independence}: We note that the linearization result holds both in standard and NTK parameterization (defined in \sref{sec:notation}), while previous work assumed the latter, emphasizing that the effect is due to the increase in width rather than to the particular parameterization.
\item \textbf{Analytic {$\operatorname{ReLU}$}{} and $\operatorname{erf}$ {} neural tangent kernels}: We compute the analytic neural tangent kernel corresponding to fully-connected networks with {$\operatorname{ReLU}$}{} or $\operatorname{erf}$ {} nonlinearities.
\item \textbf{Source code}:
Example code investigating both function space and parameter space linearized learning dynamics described in this work is released as open source code within~\cite{neuraltangents2019}.\footnote{Note that the open source library has been expanded since initial submission of this work.}
We also provide accompanying interactive Colab notebooks for both \href{https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/weight_space_linearization.ipynb}{\bf parameter space}\footnote{\scriptsize \href{https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/weight_space_linearization.ipynb}{colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/weight\_space\_linearization.ipynb}} and \href{https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/function_space_linearization.ipynb}{\bf function space}\footnote{\scriptsize \href{https://colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/function_space_linearization.ipynb}{colab.sandbox.google.com/github/google/neural-tangents/blob/master/notebooks/function\_space\_linearization.ipynb}} linearization.
\end{itemize}
\subsection{Related work}
We build on recent work by~\citet{Jacot2018ntk} that characterizes the exact dynamics of network outputs throughout gradient descent training in the infinite width limit. Their results establish that full batch gradient descent in parameter space corresponds to kernel gradient descent in function space with respect to a new kernel, the Neural Tangent Kernel (NTK).
We examine
what this implies about
dynamics in parameter space, where training updates are actually made.
\citet{daniely2016} study the relationship between neural networks and kernels at initialization. They bound the difference between the infinite width kernel and the empirical kernel at finite width $n$, which diminishes as $\mathcal{O}(1/\sqrt{n})$. \citet{daniely2017sgd} uses the same kernel perspective to study stochastic gradient descent (SGD) training of neural networks.
\citet{saxe2013exact} study the training dynamics of deep linear networks, in which the nonlinearities are treated as identity functions. Deep linear networks are linear in their inputs, but not in their parameters.
In contrast, we show that the outputs of sufficiently wide neural networks are linear in the updates to their parameters during gradient descent, but usually not in their inputs.
\citet{du2018gradient, allen2018convergence-fc, allen2018convergence-rnn, zou2018stochastic} study the convergence of gradient descent to global minima.
They proved that for i.i.d. Gaussian initialization, the parameters of sufficiently wide networks move little from their initial values during SGD.
This small motion of the parameters is crucial to the effect we present, where wide neural networks behave linearly in terms of their parameters throughout training.
\citet{mei2018mean, chizat2018global,rotskoff2018neural,sirignano2018mean} analyze the mean field SGD dynamics of training neural networks in the large-width limit. Their mean field analysis describes distributional dynamics of network parameters via a PDE. However, their analysis is restricted to one hidden layer networks with a scaling limit $\left(1/n\right)$ different from ours $\left(1/\sqrt{n}\right)$, which is commonly used in modern networks~\cite{he2016deep, glorot2010understanding}.
\citet{chizat2018note}\footnote{We note that this is a concurrent work and an expanded version of this note is presented in parallel at NeurIPS 2019.} argued that infinite width networks are in a `lazy training' regime and may be too simple to be applicable to realistic neural networks. Nonetheless, we empirically investigate the applicability of the theory in the finite-width setting and find that it gives an accurate characterization of both the learning dynamics and posterior function distributions across a variety of conditions, including some practical network architectures such as the wide residual network~\cite{zagoruyko2016wide}.
\section{Theoretical results}
\label{sec:TheoryResults}
\subsection{Notation and setup for architecture and training dynamics}
\label{sec:notation}
Let $\mathcal{D} \subseteq \mathbb R^{n_0} \times \mathbb R^{k}$ denote the training set and $\mathcal{X}=\left\{x: (x,y)\in \mathcal{D}\right\}$ and $\mathcal{Y}=\left\{y: (x,y)\in \mathcal{D}\right\}$ denote the inputs and labels, respectively. Consider a fully-connected feed-forward network with $L$
hidden
layers with widths $n_{l}$, for $l = 1, ..., L$ and a readout layer with $n_{L+1} = k$. For each $x\in\mathbb R^{n_0}$, we use $h^l(x), x^l(x)\in\mathbb R^{n_l}$ to represent the pre- and post-activation functions at layer $l$ with input $x$. The recurrence relation for a feed-forward network is defined as
\begin{align}
\label{eq:recurrence}
\begin{cases}
h^{l+1}&=x^l W^{l+1} + b^{l+1}
\\
x^{l+1}&=\phi\left(h^{l+1}\right)
\end{cases}
\,\, \textrm{and}
\,\,
\begin{cases}
W^{l}_{i, j}& = \frac {\sigma_\omega} {\sqrt{n_l}} \omega_{ij}^l
\\
b_j^l &= \sigma_b \beta_j^l
\end{cases}
,
\end{align}
where $\phi$ is a point-wise activation function, $W^{l+1}\in \mathbb R^{n_l\times n_{l+1}}$ and $b^{l+1}\in\mathbb R^{n_{l+1}}$ are the weights and biases, $\omega_{ij}^l$ and $ b_j^l $ are the trainable variables, drawn i.i.d. from a standard Gaussian $ \omega_{ij}^l, \beta_{j}^l\sim \mathcal N(0, 1)$ at initialization, and $\sigma_\omega^2$ and $\sigma_b^2$ are weight and bias variances. Note that this parametrization
is non-standard,
and we will refer to it as the NTK parameterization. It has already been adopted in several recent works~\cite{van2017l2, karras2018progressive, Jacot2018ntk, du2018gradient, parkoptimal}. Unlike the standard parameterization that only normalizes the forward dynamics of the network, the NTK-parameterization also normalizes its backward dynamics.
We note that the predictions and training dynamics of NTK-parameterized networks are identical to those of standard networks, up to a width-dependent scaling factor in the learning rate for each parameter tensor.
As we derive, and support experimentally, in Supplementary Material (SM) \sref{sec:compare-parameterization} and \sref{sec: converge proof},
our results (linearity in weights, GP predictions) also hold for
networks with a standard parameterization.
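For concreteness, the recurrence \eqref{eq:recurrence} under the NTK parameterization can be written in a few lines of Python; the following sketch is purely illustrative (widths, variances, and function names are ours):
\begin{verbatim}
# Illustrative numpy sketch of the recurrence above under the NTK
# parameterization (widths, variances, and names are ours).
import numpy as np

def init_params(widths, seed=0):
    """omega^l, beta^l ~ N(0, 1), i.i.d., for each layer."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)), rng.standard_normal(n))
            for m, n in zip(widths[:-1], widths[1:])]

def forward(params, x, sigma_w=1.5, sigma_b=0.1, phi=np.tanh):
    """h^{l+1} = x^l W^{l+1} + b^{l+1}, W = (sigma_w/sqrt(n_l)) omega, b = sigma_b beta."""
    h = x
    for l, (omega, beta) in enumerate(params):
        h = h @ (sigma_w / np.sqrt(omega.shape[0]) * omega) + sigma_b * beta
        if l < len(params) - 1:        # the readout h^{L+1} stays pre-activation
            h = phi(h)
    return h

f0 = forward(init_params([3, 1024, 1024, 1]), np.ones((5, 3)))
\end{verbatim}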
We define
$\theta^l\equiv \operatorname{vec}\pp{\{ W^l, b^l\} }$,
the $\pp{(n_{l-1}+1) n_l} \times 1$ vector of all parameters for layer $l$.
$\theta = \operatorname{vec}\pp{\cup_{l=1}^{L+1}{\theta^l}}$ then indicates the vector of all network parameters, with similar definitions for $\theta^{\leq l}$ and $\theta^{>l}$. Denote by $\theta_t$ the time-dependence of the parameters and by $\theta_0$ their initial values. We use $f_t(x) \equiv h^{L+1}(x)\in \mathbb R^{k}$ to denote the output (or logits) of the neural network at time $t$.
Let $\ell (\hat y, y):\mathbb R^{k}\times \mathbb R^{k}\to\mathbb{R}$ denote the loss function where the first argument is the prediction and the second argument the true label. In supervised learning, one is interested in learning a $\theta$ that minimizes the empirical loss\footnote{To simplify the notation for later equations, we use the \emph{total} loss here instead of the {\it average} loss, but for all plots in \sref{sec:experiments}, we show the \emph{average} loss.}, $\mathcal L = \sum_{(x, y)\in\mathcal{D}} \ell (f_t(x, \theta), y).$
Let $\eta$ be the learning rate\footnote{Note that compared to the conventional parameterization, $\eta$ is larger by factor of width~\cite{parkoptimal}. The NTK parameterization allows usage of a universal learning rate scale irrespective of network width.}.
Via continuous time gradient descent,
the evolution of the parameters $\theta$ and the logits $f$ can be written as
\begin{align}
\label{eq:nn-gradient-descent-weights}
&\dot \theta_t =
- \eta {\nabla_\theta f_t(\mathcal{X})}^T
\nabla_{f_t(\mathcal{X})} \mathcal L
\\
&\dot f_t(\mathcal{X}) = {\nabla_\theta f_t(\mathcal{X})}\, \dot \theta_t
= - \eta \, \hat\Theta_t (\mathcal{X}, \mathcal{X}) \nabla_{f_t(\mathcal{X})} \mathcal L
\label{eq:nn-gradient-descent-outputs}
\end{align}
where
$f_t(\mathcal{X}) = \operatorname{vec}\pp{\left[ f_t\pp{x} \right]_{x\in\mathcal{X}}}$, the $k|\mathcal{D}|\times 1$ vector of concatenated logits for all examples, and
$\nabla_{f_t(\mathcal{X})} \mathcal L$ is the gradient of the loss with respect to the model's output, $f_t(\mathcal{X})$.
$\hat\Theta_t \equiv \hat\Theta_t(\mathcal{X}, \mathcal{X}) $ is the tangent kernel at time $t$, which is a $k|\mathcal{D}|\times k|\mathcal{D}|$ matrix
\begin{align}\label{eq:tangent-kernel}
\hat\Theta_t &=
{\nabla_\theta f_t(\mathcal{X})} {\nabla_\theta f_t(\mathcal{X})}^{T}= \sum_{l=1}^{L+1} {\nabla_{\theta^l} f_t(\mathcal{X})} {\nabla_{\theta^l} f_t(\mathcal{X})}^{T}.
\end{align}
One can define the tangent kernel for general arguments, e.g. $\hat\Theta_t(x, \mathcal{X})$ where $x$ is test input. At finite-width, $\hat{\Theta}$ will depend on the specific random draw of the parameters and in this context we refer to it as the \emph{empirical} tangent kernel.
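A minimal sketch of computing the empirical tangent kernel via automatic differentiation is given below; it is illustrative only (the toy \texttt{apply\_fn} and all names are ours), and a full implementation is available in the open-source library~\cite{neuraltangents2019} discussed in \sref{sec:experiments}:
\begin{verbatim}
# Illustrative JAX sketch of the empirical tangent kernel Theta_hat = J J^T,
# with J the Jacobian of the flattened logits w.r.t. the parameters.
import jax
import jax.numpy as jnp

def empirical_ntk(apply_fn, theta, X):
    """apply_fn(theta, X) -> (|D|, k) logits; theta is a flat parameter vector."""
    J = jax.jacobian(lambda t: apply_fn(t, X).reshape(-1))(theta)  # (k|D|, P)
    return J @ J.T                                                 # (k|D|, k|D|)

# Toy usage with a linear "network" (purely illustrative):
X = jnp.ones((4, 3))
apply_fn = lambda t, X: X @ t.reshape(3, 1)
Theta0 = empirical_ntk(apply_fn, jnp.zeros(3), X)      # here equals X X^T
\end{verbatim}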
The dynamics of discrete gradient descent
can be obtained by replacing $\dot \theta_t$ and $\dot f_t(\mathcal{X})$ with $(\theta_{i+1} - \theta_i)$ and $(f_{i+1}(\mathcal{X}) -f_i(\mathcal{X}))$ above, and replacing $e^{-\eta\hat\Theta_0t}$ with $(1 - (1-\eta\hat\Theta_0)^i)$ below.
\subsection{Linearized networks have closed form training dynamics for parameters and outputs}
In this section, we consider the training dynamics of the linearized network. Specifically, we replace the outputs of the neural network by their first order Taylor expansion,
\begin{align}
f^{\textrm{lin}}_{t}(x)\equiv f_{0}(x) + \left.{\nabla_\theta f_0(x)}\right\vert_{\theta=\theta_0}\, \omega_t\,, %
\end{align}
where $\omega_t \equiv \theta_t-\theta_0$ is the change in the parameters from their initial values.
Note that $f^{\textrm{lin}}_{t}$ is the sum of two terms: the first term is the initial output of the network, which remains unchanged during training, and the second term captures the change to the initial value during training. The dynamics of gradient flow using this linearized function are governed by,
\begin{align}
&\dot \omega_t
= - \eta {\nabla_\theta f_0(\mathcal{X})}^T
\nabla_{f^{\textrm{lin}}_t(\mathcal{X})} \mathcal L
\label{eq:lin-nn-gradient-descent-weights}
\\
&{{\dot f}_t^{\textrm {lin}}}(x)
= - \eta \, \hat\Theta_0 (x, \mathcal{X}) \nabla_{f^{\textrm{lin}}_t(\mathcal{X})} \mathcal L\,.
\label{eq:lin-nn-gradient-descent-outputs}
\end{align}
As ${\nabla_\theta f_0(x)}$ remains constant throughout training, these dynamics are often quite simple.
In the case of an MSE loss, i.e., $\ell(\hat y , y) = \frac 1 2 \|\hat y -y\|_2^2$, the ODEs have closed form solutions
\begin{align}
&\omega_t = - {\nabla_\theta f_0(\mathcal{X})}^T \hat\Theta_0^{-1}\left(I - e^{- \eta \hat\Theta_0 t}\right)\left(f_{0}(\mathcal{X}) - \mathcal{Y}\right)\,, \label{eq:lin-dynamics-weights}
\\
&f^{\textrm{lin}}_{t}(\mathcal{X})=(I - e^{- \eta\hat\Theta_0 t})\mathcal{Y} + e^{-\eta \hat\Theta_0 t}f_{0}(\mathcal{X}) \,. \label{eq:lin-dynamics-outputs}
\end{align}
For an arbitrary point $x$, $f^{\textrm{lin}}_t(x) = \mu_t(x) +\gamma_t(x)$, where
\begin{align}
\label{eq:flin-x}
&\mu_t(x) = \hat\Theta_0(x, \mathcal{X})\hat\Theta_0^{-1}\left(I- e^{-\eta\hat\Theta_0 t}\right)\mathcal{Y}
\\
\label{eq:flin-x-2}
&\gamma_t(x) = f_{0}(x)-\hat\Theta_0\left(x, \mathcal{X}\right)\hat\Theta_0^{-1}\left(I\!-\!e^{- \eta \hat\Theta_0 t}\right)f_{0}(\mathcal{X}).
\end{align}
Therefore, we can obtain the time evolution of the linearized neural network without
running gradient descent.
We only need to compute the tangent kernel $\hat\Theta_0$ and the outputs $f_0$ at initialization and use Equations \ref{eq:lin-dynamics-weights}, \ref{eq:flin-x}, and \ref{eq:flin-x-2} to compute the dynamics of the weights and the outputs.
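In code, Equations \ref{eq:flin-x} and \ref{eq:flin-x-2} amount to a few linear-algebra operations once $\hat\Theta_0$ and $f_0$ are available. The following numpy sketch (assuming MSE loss and precomputed kernels; all names are ours) evaluates $f^{\textrm{lin}}_t(x)=\mu_t(x)+\gamma_t(x)$ for a batch of test points:
\begin{verbatim}
# Illustrative numpy sketch of the closed-form linearized prediction
# f_lin_t(x) = mu_t(x) + gamma_t(x) for MSE loss; kernels are assumed
# precomputed and the names are ours.
import numpy as np

def f_lin_t(theta_xX, theta_XX, f0_x, f0_X, Y, eta, t):
    w, V = np.linalg.eigh(theta_XX)                      # Theta_0 symmetric PSD
    decay = (V * np.exp(-eta * w * t)) @ V.T             # e^{-eta Theta_0 t}
    op = theta_xX @ np.linalg.solve(theta_XX, np.eye(len(Y)) - decay)
    return f0_x + op @ (Y - f0_X)                        # = mu_t(x) + gamma_t(x)
\end{verbatim}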
\subsection{Infinite width limit yields a Gaussian process}
As the width of the hidden layers approaches infinity, the Central Limit Theorem (CLT) implies that the outputs at initialization $\left\{f_{0}(x)\right\}_{x\in\mathcal{X}}$ converge to a multivariate Gaussian in distribution. Informally, this occurs because the pre-activations at each layer are a sum of Gaussian random variables (the weights and bias), and thus become a Gaussian random variable themselves.
See
\cite{poole2016exponential,schoenholz2016, lee2018deep, xiao18a, yang2017} for more details, and \cite{matthews2018b_arxiv, novak2018bayesian}
for a formal treatment.
Therefore, randomly initialized neural networks are in correspondence with a certain class of GPs (hereinafter referred to as NNGPs), which facilitates a fully Bayesian treatment of neural networks \citep{lee2018deep,matthews2018}. More precisely, let $f_t^{i}$ denote the $i$-th output dimension and $\mathcal K$ denote the sample-to-sample kernel function (of the pre-activation) of the outputs in the infinite width setting,
\begin{align}
\mathcal K^{i, j}(x,x') =
\lim_{\min\pp{n_{1}, \dots, {n_L}}\to\infty}
\mathbb E \left[ f_0^i(x)\cdot f_0^j(x')\right],
\end{align}
then $f_{0}(\mathcal{X}) \sim \mathcal{N}(0, \mathcal K(\mathcal{X}, \mathcal{X}))$, where $\mathcal K^{i, j}(x, x')$
denotes the covariance between the $i$-th output of $x$ and $j$-th output of $x'$,
which can be computed recursively (see \citet[\S 2.3]{lee2018deep} and SM \sref{sec:KernelDerivation}).
For a test input $x\in \mathcal{X}_T$, the joint output distribution $f\left([x, \mathcal{X}]\right)$ is also multivariate Gaussian.
Conditioning on the training samples\footnote{
This imposes that $h^{L+1}$ directly corresponds to the network predictions.
In the case of softmax readout,
variational or sampling methods are required to marginalize over $h^{L+1}$.
}, $f(\mathcal{X})=\mathcal{Y}$, the
distribution of $\left.f(x)\right\vert \mathcal{X}, \mathcal{Y}$ is also a Gaussian $\mathcal N \left(\mu(x), \Sigma(x)\right)$,
\begin{align}
\label{eq:nngp-exact-posterior}
\mu(x) = \mathcal K(x, \mathcal{X}) \mathcal K^{-1}\mathcal{Y}, \quad
\Sigma(x) = \mathcal K(x, x) - \mathcal K(x, \mathcal{X}) \mathcal K^{-1}\mathcal K(x, \mathcal{X})^T
,
\end{align}
and where $\mathcal K = \mathcal K(\mathcal{X}, \mathcal{X})$.
This is the posterior predictive distribution resulting from exact Bayesian inference in an infinitely wide neural network.
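For reference, \eqref{eq:nngp-exact-posterior} is the standard GP posterior and can be evaluated directly; the following numpy sketch is purely illustrative (kernels assumed precomputed, names ours):
\begin{verbatim}
# Illustrative numpy sketch of the exact NNGP posterior of Equation
# eq:nngp-exact-posterior (kernels assumed precomputed, names ours).
import numpy as np

def nngp_posterior(K_xx, K_xX, K_XX, Y, jitter=1e-6):
    K = K_XX + jitter * np.eye(len(Y))     # small jitter for numerical stability
    mean = K_xX @ np.linalg.solve(K, Y)
    cov = K_xx - K_xX @ np.linalg.solve(K, K_xX.T)
    return mean, cov
\end{verbatim}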
\subsubsection{Gaussian processes from gradient descent training}
If we freeze the variables $\theta^{\leq L}$ after initialization and only optimize $\theta^{L+1}$, the original network and its linearization are identical. Letting the width approach infinity, this particular tangent kernel $\hat\Theta_0$ will converge to $\mathcal K$ in probability
and \eqref{eq:flin-x} will converge to the posterior \eqref{eq:nngp-exact-posterior} as $t\to\infty$ (for further details see SM \sref{sec:gradient-readout-layer}).
This is a realization of the ``sample-then-optimize" approach for evaluating the posterior of a Gaussian process proposed in \citet{matthews2017sample}.
If none of the variables are frozen, in the infinite width setting, $\hat\Theta_0$ also converges in probability
to a deterministic kernel $\Theta$ \cite{Jacot2018ntk, yang2019scaling}, which we sometimes refer to as the analytic kernel, and which can also be computed recursively (see SM \sref{sec:KernelDerivation}). For {$\operatorname{ReLU}$}{} and $\operatorname{erf}$ {} nonlinearity, $\Theta$ can be exactly computed (SM \sref{sec:analytic_kernel}) which we use in \sref{sec:experiments}. Letting the width go to infinity, for any $t$, the output $f^{\textrm{lin}}_t(x)$ of the linearized network is also Gaussian distributed because Equations \ref{eq:flin-x} and \ref{eq:flin-x-2} describe an affine transform of the Gaussian $[f_0(x), f_0(\mathcal{X})]$. Therefore
\begin{corollary}\label{cor:lin-distribution}
For every test points in $x \in \mathcal{X}_T$, %
and $t \geq 0$, $f^{\textrm{lin}}_t(x)$ converges in distribution as width goes to infinity to a Gaussian with mean and covariance given by\footnote{Here {``\it+h.c.'' } is an abbreviation for ``plus the Hermitian conjugate''.}
\small
\begin{align}
&\mu(\mathcal{X}_T) =\Theta\left(\mathcal{X}_T, \mathcal{X}\right)\Theta^{-1}\left(I -e^{- \eta \Theta t}\right)\mathcal{Y} \,,
\label{eq:lin-exact-dynamics-mean}
\\
&\Sigma(\mathcal{X}_T, \mathcal{X}_T) =\mathcal K\left(\mathcal{X}_T, \mathcal{X}_T\right) +\Theta(\mathcal{X}_T, \mathcal{X})\Theta^{-1}\left(I-e^{- \eta \Theta t}\right) \mathcal K \left(I - e^{-\eta \Theta t}\right) \Theta^{-1} \Theta\left(\mathcal{X}, \mathcal{X}_T \right)\nonumber \\
&\phantom{\Sigma(\mathcal{X}_T, \mathcal{X}_T) =\mathcal K\left(\mathcal{X}_T, \mathcal{X}_T\right)} -\left(\Theta(\mathcal{X}_T, \mathcal{X})\Theta^{-1}\left(I-e^{- \eta \Theta t}\right) \mathcal K\left(\mathcal{X}, \mathcal{X}_T \right) + h.c. \right).
\label{eq:lin-exact-dynamics-var}
\end{align}
\normalsize
Therefore, over random initialization, $\lim_{t\to\infty}\lim_{n\to\infty}f^{\textrm{lin}}_t(x)$ has distribution
\begin{align}\label{eq:lin-exact-dynamics-var_inf}
\mathcal N\big(&\Theta\left(\mathcal{X}_T, \mathcal{X}\right)\Theta^{-1}\mathcal{Y}, \nonumber\\
&\mathcal K\left(\mathcal{X}_T, \mathcal{X}_T\right) +\Theta(\mathcal{X}_T, \mathcal{X})\Theta^{-1}\mathcal K \Theta^{-1} \Theta\left(\mathcal{X}, \mathcal{X}_T \right)- \left(\Theta(\mathcal{X}_T, \mathcal{X})\Theta^{-1}\mathcal K\left(\mathcal{X}, \mathcal{X}_T\right) + h.c. \right)\big).
\end{align}
\end{corollary}
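The $t\to\infty$ mean and covariance in \eqref{eq:lin-exact-dynamics-var_inf} can likewise be evaluated directly from the kernels; the following numpy sketch is purely illustrative (all names are ours):
\begin{verbatim}
# Illustrative numpy sketch of the t -> infinity mean and covariance in
# Equation eq:lin-exact-dynamics-var_inf (kernels precomputed, names ours).
import numpy as np

def ntkgp_infty(K_TT, K_TX, K_XX, Theta_TX, Theta_XX, Y):
    A = Theta_TX @ np.linalg.inv(Theta_XX)        # Theta(X_T, X) Theta^{-1}
    mean = A @ Y
    cov = K_TT + A @ K_XX @ A.T - (A @ K_TX.T + K_TX @ A.T)   # "+ h.c." term
    return mean, cov
\end{verbatim}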
Unlike the case when only $\theta^{L+1}$ is optimized, Equations~\ref{eq:lin-exact-dynamics-mean} and \ref{eq:lin-exact-dynamics-var}
do not admit an interpretation corresponding to the posterior sampling of a probabilistic model.\footnote{One possible exception is when the NNGP kernel and NTK are the same up to a scalar multiplication. This is the case when the activation function is the identity function and there is no bias term.}
We contrast the predictive distributions from the NNGP, NTK-GP (i.e. Equations \ref{eq:lin-exact-dynamics-mean} and \ref{eq:lin-exact-dynamics-var}) and ensembles of NNs in Figure~\ref{fig:posterior-dynamics}.
Infinitely-wide neural networks open up ways to study deep neural networks both under fully Bayesian training through the Gaussian process correspondence, and under GD training through the linearization perspective. The resulting distributions over functions are inconsistent (the distribution resulting from GD training does not generally correspond to a Bayesian posterior). We believe understanding the biases over learned functions induced by different training schemes and architectures is a fascinating avenue for future work.
\subsection{Infinite width networks are linearized networks}
\label{sec:Justification}
Equations~\ref{eq:nn-gradient-descent-weights} and \ref{eq:nn-gradient-descent-outputs} of the original network are intractable in general, since $\hat\Theta_t$ evolves with time. However, for the mean squared loss, we are able to prove formally that, as long as the learning rate $\eta< \eta_{{\rm critical}} :=2({\lambda_{\rm{min}}(\Theta) + \lambda_{\rm{max}}(\Theta)})^{-1}$, where ${\lambda_{\textrm{min/max}}}(\Theta)$ is the smallest/largest eigenvalue of $\Theta$, the gradient descent dynamics of the original neural network fall into the linearized dynamics regime.
\begin{theorem}[Informal]\label{thm:main}
Let $n_1 =\dots =\ n_L=n$ and assume $\lambda_{\rm{min}}(\Theta)>0$. Applying gradient descent with learning rate $\eta < \eta_{{\rm critical}} $ (or gradient flow), for every $x\in \mathbb R^{n_0}$ with $\|x\|_2\leq 1$, with probability arbitrarily close to 1 over random initialization,
\begin{align}\label{eq:discrepancy-training}
\sup_{t\geq 0}\left\|f_t(x) - f^{\textrm{lin}}_t(x)\right\|_2,
\,\,\sup_{t\geq 0}\frac{\left\|\theta_t -\theta_0\right\|_2}{\sqrt n},
\,\,
\sup_{t\geq 0}\left\|\hat\Theta_t - \hat\Theta_0\right\|_F = \mathcal O(n^{-\frac 1 2}), \,\, {\rm as }\quad n\to \infty\,.
\end{align}
\end{theorem}
Therefore, as $n\to\infty$, the distributions of $f_t(x)$ and $f^{\textrm{lin}}_t(x)$ become the same. Coupling with Corollary \ref{cor:lin-distribution}, we have
\begin{theorem}\label{thm:distribution}
If $\eta < \eta_{\rm critical}$, then for every $x\in\mathbb{R}^{n_0}$ with $\|x\|_2\leq 1$,
as $n\to\infty$, $f_t(x)$ converges in distribution to the Gaussian with mean and variance given by \eqref{eq:lin-exact-dynamics-mean} and \eqref {eq:lin-exact-dynamics-var}.
\end{theorem}
We refer the reader to Figure~\ref{fig:posterior-dynamics} for empirical verification of this theorem.
The proof of Theorem \ref{thm:main} consists of two steps. The first step is to prove the global convergence of overparameterized neural networks \citep{du2018gradient, allen2018convergence-fc, allen2018convergence-rnn, zou2018stochastic} and stability of the NTK under gradient descent (and gradient flow);
see SM \sref{sec: converge proof}. This stability was first observed and proved in \cite{Jacot2018ntk} in the gradient flow and sequential limit (i.e. letting $n_1\to\infty$, \dots, $n_L\to \infty$ sequentially) setting under certain assumptions about global convergence of gradient flow.
In \sref{sec: converge proof}, we show how to use the NTK to provide a self-contained (and cleaner) proof of such global convergence and the stability of NTK simultaneously.
The second step is to couple the stability of NTK with Gr\"{o}nwall's type arguments~\cite{dragomir2003some} to upper bound the
discrepancy between $f_t$ and $f^{\textrm{lin}}_t$, i.e. the first norm in \eqref{eq:discrepancy-training}.
Intuitively, the ODE of the original network (\eqref{eq:nn-gradient-descent-outputs}) can be considered as a $\|\hat\Theta_t - \hat\Theta_0\|_F$-fluctuation from the linearized ODE (\eqref{eq:lin-nn-gradient-descent-outputs}). One expects the difference between the solutions of these two ODEs to be upper bounded by some functional of $\|\hat\Theta_t - \hat\Theta_0\|_F$; see SM \sref{sec:sup-discrepancy}.
Therefore, for a large width network, the training dynamics can be well approximated by linearized dynamics.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/ntk_vs_step_3_layers_relu.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/weight_change_vs_width_3_layers_relu.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/ntk_change_vs_width_3_layers_relu.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/nngp_change_vs_width_3_layers_relu.pdf}
\end{subfigure}
\caption{\textbf{Relative Frobenius norm change during training.} Three hidden layer {$\operatorname{ReLU}$}{} networks trained with $\eta=1.0$ on a subset of MNIST ($|\mathcal{D}|=128$). We measure changes of (input/output/intermediary) weights, empirical $\hat\Theta$, and empirical $\hat {\mathcal K}$
after $T=2^{17}$ steps of gradient descent updates for varying width. We see that the relative change in input/output weights scales as $1/\sqrt{n}$ while that in intermediate weights scales as $1/n$; this is because the dimension of the input/output does not grow with $n$. The change in $\hat\Theta$ and $\hat {\mathcal K}$
is upper bounded by $\mathcal{O}\left(1/\sqrt{n}\right)$ but is closer to $\mathcal{O}\left(1/n\right)$.
See Figure \ref{fig_sm:Weight-NTK-vs-width} for the same experiment with 3-layer $\tanh$ and 1-layer {$\operatorname{ReLU}$}{} networks.
See Figures \ref{fig:convergence-vs-width-d3} and \ref{fig:convergence-vs-width2} for additional comparisons of finite width empirical and analytic kernels.}
\label{fig:Weight-NTK-vs-width}
\vspace{-0.5cm}
\end{figure}
Note that the updates for individual weights in \eqref{eq:lin-nn-gradient-descent-weights} vanish in the infinite width limit, which for instance can be seen from the explicit width dependence of the gradients in the NTK parameterization. Individual weights move by a vanishingly small amount for wide networks in this regime of dynamics, as do hidden layer activations, but they collectively conspire to provide a finite change in the final output of the network, as is necessary for training.
An additional insight gained from linearization of the network is that the individual instance dynamics derived in \cite{Jacot2018ntk} %
can be viewed as a random features method,\footnote{We thank Alex Alemi for pointing out a subtlety on correspondence to a random features method.} where the features are the gradients of the model with respect to its weights.
\subsection{Extensions to other optimizers, architectures, and losses}
Our theoretical analysis thus far has focused on fully-connected single-output architectures trained by full batch gradient descent.
In SM \sref{sec extensions} we derive corresponding results for: networks with multi-dimensional outputs, training against a cross entropy loss, and gradient descent with momentum.
In addition to these generalizations, there is good reason to suspect the results to extend to much broader class of models and optimization procedures.
In particular, a wealth of recent literature suggests that the mean field theory governing the wide network limit of fully-connected models~\cite{poole2016exponential,schoenholz2016} extends naturally to residual networks~\cite{yang2017}, CNNs~\cite{xiao18a}, RNNs~\cite{chen2018rnn}, batch normalization~\cite{yang2018a}, and to broad architectures~\cite{yang2019scaling}.
We postpone the development of these additional theoretical extensions in favor of an empirical investigation of linearization for a variety of architectures.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{figure/nngp_ntk_posterior_neurips}
\caption{\textbf{Dynamics of mean and variance of trained neural network outputs follow analytic dynamics from linearization}. Black lines indicate the time evolution of the predictive output distribution from an ensemble of 100 trained neural networks (NNs). The blue region indicates the analytic prediction of the output distribution throughout training (Equations \ref{eq:lin-exact-dynamics-mean}, \ref{eq:lin-exact-dynamics-var}). Finally, the red region indicates the prediction that would result from training only the top layer, corresponding to an NNGP (Equations \ref{eq:NNGP-exact-dynamics-mean}, \ref{eq:NNGP-exact-dynamics-var}).
The trained network has 3 hidden layers of width 8192, $\operatorname{tanh}$ activation functions, $\sigma_w^2 = 1.5$, no bias, and $\eta = 0.5$. The output is computed for inputs interpolated between two training points (denoted with black dots) $x(\alpha) = \alpha x^{(1)} + (1-\alpha)x^{(2)}$. The shaded region and dotted lines denote 2 standard deviations ($\sim 95\%$ quantile) from the mean denoted in solid lines. Training was performed with full-batch gradient descent with dataset size $|\mathcal{D}|=128$. For dynamics for individual function initializations, see SM Figure~\ref{fig:posterior-dynamics-nn-samples}.}
\label{fig:posterior-dynamics}
\vspace{-0.5cm}
\end{figure}
\section{Experiments}
\label{sec:experiments}
In this section, we provide empirical support showing that the training dynamics of wide neural networks are well captured by linearized models. We consider fully-connected, convolutional, and wide ResNet architectures trained with full- and mini-batch gradient descent using learning rates sufficiently small so that the continuous time approximation holds well. We consider two-class classification on CIFAR-10 (horses and planes) as well as ten-class classification on MNIST and CIFAR-10. When using MSE loss, we treat the binary classification task as regression with one class regressing to $+1$ and the other to $-1$.
Experiments in Figures \ref{fig:Weight-NTK-vs-width}, \ref{fig:NTK-dynamics-wresnet}, \ref{fig:NTK-dynamics-cnn}, \ref{fig:NTK-dynamics-xent-mom}, \ref{fig:logit-deviation-xent}, \ref{fig:error-vs-width} and \ref{fig_sm:Weight-NTK-vs-width}, were done in JAX \citep{jaxrepo}. The remaining experiments used TensorFlow \citep{abadi2016tensorflow}.
An open source implementation of this work providing tools to investigate linearized learning dynamics is available at \href{https://www.github.com/google/neural-tangents}{\texttt{\textbf www.github.com/google/neural-tangents}} ~\cite{neuraltangents2019}.
\begin{figure}%
\centering
\includegraphics[width=.85\columnwidth]{figure/dynamics_fc_d5_neurips}
\caption{{\bf Full batch gradient descent on a model behaves similarly to analytic dynamics on its linearization, both for network outputs, and also for individual weights.}
A binary CIFAR classification task with MSE loss and a $\operatorname{ReLU}$ fully-connected network with 5 hidden layers of width $n=2048$, $\eta = 0.01$, $|\mathcal{D}|=256$, $k=1$, $\sigma_w^2=2.0$, and $\sigma_b^2=0.1$. Left two panes show dynamics for a randomly selected subset of datapoints or parameters. Third pane shows that the dynamics of loss for training and test points agree well between the original and linearized model. The last pane shows the dynamics of RMSE between the two models on test points. We observe that the empirical kernel $\hat\Theta$ gives more accurate dynamics for finite width networks.
}
\label{fig:NTK-dynamics}
\end{figure}
\textbf{Predictive output distribution}:
In the case of an MSE loss, the output distribution remains Gaussian throughout training.
In Figure~\ref{fig:posterior-dynamics}, the predictive output distribution for input points interpolated between two training points is shown for an ensemble of neural networks and their corresponding GPs. The interpolation is given by $x(\alpha) = \alpha x^{(1)} + (1-\alpha)x^{(2)}$ where $x^{(1,2)}$ are two training inputs with different classes.
We observe that the mean and variance dynamics of neural network outputs during gradient descent training follow the analytic dynamics from linearization well (Equations \ref{eq:lin-exact-dynamics-mean}, \ref{eq:lin-exact-dynamics-var}).
Moreover the NNGP predictive distribution which corresponds to exact Bayesian inference, while similar, is noticeably \emph{different} from the predictive distribution at the end of gradient descent training. For dynamics for individual function draws see SM Figure~\ref{fig:posterior-dynamics-nn-samples}.
\textbf{Comparison of training dynamics of linearized network to original network}:
For a particular realization of a finite width network, one can analytically predict the dynamics of the weights and outputs over the course of training using the empirical tangent kernel at initialization. In Figures ~\ref{fig:NTK-dynamics}, \ref{fig:NTK-dynamics-wresnet} (see also \ref{fig:NTK-dynamics-cnn}, \ref{fig:NTK-dynamics-xent-mom}), we compare these linearized dynamics (Equations \ref{eq:lin-dynamics-weights},~\ref{eq:lin-dynamics-outputs}) with the result of training the actual network. In all cases we see remarkably good agreement. We also observe that for finite networks, dynamics predicted using the empirical kernel $\hat\Theta$ better match the data than those obtained using the infinite-width, analytic, kernel $\Theta$. To understand this we note that $\|\hat\Theta^{(n)}_T -\hat\Theta^{(n)}_0\|_F = \mathcal O(1 /n) \leq \mathcal O( 1/ {\sqrt n})=\|\hat\Theta^{(n)}_0 - \Theta\|_F$,
where $\hat\Theta^{(n)}_0$ denotes the empirical tangent kernel of width $n$ network, as plotted in Figure~\ref{fig:Weight-NTK-vs-width}.
One can directly optimize parameters of $f^{\textrm{lin}}$ instead of solving the ODE induced by the tangent kernel $\hat\Theta$. Standard neural network optimization techniques such as mini-batching, weight decay, and data augmentation can be directly applied. In Figure~\ref{fig:NTK-dynamics-wresnet} (\ref{fig:NTK-dynamics-cnn}, \ref{fig:NTK-dynamics-xent-mom}), we compared the training dynamics of the linearized and original network while directly training both networks.
With direct optimization of the linearized model, we tested full ($|\mathcal{D}|= 50,000$) MNIST digit classification with cross-entropy loss, trained with a momentum optimizer (Figure~\ref{fig:NTK-dynamics-xent-mom}).
For cross-entropy loss with softmax output, some logits at late times grow indefinitely, in contrast to MSE loss where logits converge to target value. The error between original and linearized model for cross entropy loss becomes much worse at late times if the two models deviate significantly before the logits enter their late-time steady-growth regime (See Figure~\ref{fig:logit-deviation-xent}).
Linearized dynamics successfully describe the training of networks beyond vanilla fully-connected models. To demonstrate the generality of this procedure, we show that we can predict the learning dynamics of a subclass of Wide Residual Networks (WRNs)~\cite{zagoruyko2016wide}. WRNs are a class of models popular in computer vision that leverage convolutions, batch normalization, skip connections, and average pooling. In Figure~\ref{fig:NTK-dynamics-wresnet}, we show a comparison between the linearized dynamics and the true dynamics for a wide residual network trained with MSE loss and SGD with momentum, \emph{trained on the full CIFAR-10 dataset}. We slightly modified the block structure described in Table~\ref{tab:wide_resnet_config} so that each layer has a constant number of channels (1024 in this case), and otherwise followed the original implementation.
As elsewhere, we see strong agreement between the predicted dynamics and the result of training.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figure/dynamics_wresnet_n1_mom_mse_full_neurips_modified}
\caption{{\bf A wide residual network and its linearization behave similarly when both are trained by SGD with momentum on MSE loss on CIFAR-10.}
We adopt the network architecture from~\citet{zagoruyko2016wide}. We use $N=1$, channel size $1024$, $\eta = 1.0$, $\beta=0.9$, $k=10$, $\sigma_w^2=1.0$, and $\sigma_b^2=0.0$. See Table~\ref{tab:wide_resnet_config} for details of the architecture. Both the linearized and original model are trained directly on full CIFAR-10 ($|\mathcal{D}|= 50,000$), using SGD with batch size 8.
Output dynamics for a randomly selected subset of train and test points are shown in the first two panes.
Last two panes show training and accuracy curves for the original and linearized networks.
}
\label{fig:NTK-dynamics-wresnet}
\vspace{-0.5cm}
\end{figure}
\textbf{Effects of dataset size}:
The training dynamics of a neural network match those of its linearization when the width is infinite and the dataset is finite. In previous experiments, we chose networks sufficiently wide to achieve small error between neural networks and their linearization for smaller datasets.
Overall, we observe that the error decreases as the width grows (Figure~\ref{fig:error-vs-width}).
Additionally, we see that the error grows with the size of the dataset. Thus, although the error grows with dataset size, this can be counterbalanced by a corresponding increase in the model size.
\section{Discussion}
We showed theoretically that the learning dynamics in parameter space of deep nonlinear neural networks are exactly described by a linearized model in the infinite width limit. Empirical investigation
revealed that this agrees well with actual training dynamics and predictive distributions across fully-connected, convolutional, and even wide residual network architectures, as well as with different optimizers (gradient descent, momentum, mini-batching) and loss functions (MSE, cross-entropy). Our results suggest that a surprising number of realistic neural networks may be operating in the regime we studied.
This is further consistent with recent experimental work showing that neural networks are often robust to re-initialization but not re-randomization of layers (\citet{zhang2019all}).
In the regime we study, since the learning dynamics are fully captured by the kernel $\hat\Theta$ and the target signal, studying the properties of $\hat\Theta$ to determine trainability and generalization is an interesting future direction. Furthermore, the infinite width limit gives us a simple characterization of both gradient descent and Bayesian inference.
By studying properties of the NNGP kernel $\mathcal K$ and the tangent kernel $\Theta$, we may shed light on the inductive bias of gradient descent.
Some layers of modern neural networks may be operating far from the linearized regime. Preliminary observations in~\citet{lee2018deep} showed that wide neural networks trained with SGD perform similarly to the corresponding GPs as width increases, while GPs still outperform trained neural networks for both small and large dataset sizes.
Furthermore, in~\citet{novak2018bayesian}, it is shown that the comparison of performance between finite- and infinite-width networks is highly architecture-dependent. In particular, it was found that infinite-width networks perform as well as or better than their finite-width counterparts for many fully-connected or locally-connected architectures.
However, the opposite was found in the case of convolutional networks without
pooling.
It remains an open research question to identify the main factors behind these performance gaps. We believe that examining the behavior of infinitely wide networks provides a strong basis from which to build up a systematic understanding of finite-width networks (and/or networks trained with large learning rates).
\section*{Acknowledgements}
We thank Greg Yang and Alex Alemi for useful discussions and feedback. We are grateful to Daniel Freeman, Alex Irpan and anonymous reviewers for providing valuable feedback on the draft. We thank the JAX team for developing a language which makes model linearization and NTK computation straightforward. We would like to especially thank Matthew Johnson for support and debugging help.
\section{Additional figures}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figure/nngp_ntk_posterior_nn_samples_neurips}
\caption{\textbf{Sample of neural network outputs.} The lines correspond to the functions learned for 100 different initializations. The configuration is the same as in Figure~\ref{fig:posterior-dynamics}.}
\label{fig:posterior-dynamics-nn-samples}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figure/linear_cnn_mse_mom_cifar}
\caption{{\bf
A convolutional network and its linearization behave similarly when trained using full batch gradient descent with a momentum optimizer.}
Binary CIFAR classification task with MSE loss, $\operatorname{tanh}$ convolutional network with 3 hidden layers of channel size $n=512$, $3\times3$ size filters, average pooling after last convolutional layer, $\eta = 0.1$, $\beta=0.9$, $|\mathcal{D}|= 128$, $\sigma_w^2=2.0$ and $\sigma_b^2=0.1$.
The linearized model is trained directly by full batch gradient descent with momentum, rather than by integrating its continuous time analytic dynamics.
Panes are the same as in Figure \ref{fig:NTK-dynamics}.
}
\label{fig:NTK-dynamics-cnn}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figure/fc_full_mnist_bs64}
\caption{{\bf A neural network and its linearization behave similarly when both are trained
via SGD with momentum
on cross entropy loss on MNIST.}
Experiment is for 10 class MNIST classification using a $\operatorname{ReLU}$ fully connected network with 2 hidden layers of width $n=2048$, $\eta = 1.0$, $\beta=0.9$, $|\mathcal{D}|=50,000$, $k=10$, $\sigma_w^2=2.0$, and $\sigma_b^2=0.1$. Both models are trained using stochastic minibatching with batch size 64.
Panes are the same as in Figure \ref{fig:NTK-dynamics}, except that the top row shows all ten logits for a single randomly selected datapoint.
}
\label{fig:NTK-dynamics-xent-mom}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figure/xent-test-vs-width}
\caption{\textbf{Logit deviation for cross entropy loss.} Logits for models trained with cross entropy loss diverge at late times. If the deviation between the logits of the linearized model and the original model is large early in training, as shown for the narrower networks (first row), the logit deviation at late times can be significantly large. As the network becomes wider (second row), the logits deviate at a later point in training. Fully connected $\operatorname{tanh}$ network with $L=4$ trained on the binary CIFAR classification problem. }
\label{fig:logit-deviation-xent}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figure/RMSE-vs-width_neurips}
\caption{\textbf{Error dependence on depth, width, and dataset size.} Final value of the RMSE for fully-connected, convolutional, and wide residual networks as they become wider, for varying depth and dataset size.
Error in fully connected networks as the depth is varied from 1 to 16 (first) and the dataset size is varied from 32 to 4096 (last). Error in convolutional networks as the depth is varied between 1 and 32 (second), and in WRNs for depths 10 and 16, corresponding to $N=1,2$ as described in Table~\ref{tab:wide_resnet_config} (third). Networks are critically initialized ($\sigma_w^2=2.0$, $\sigma_b^2=0.1$) and trained with gradient descent on MSE loss. Experiments in the first three panes used $|\mathcal{D}|=128$.}
\label{fig:error-vs-width}
\end{figure}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/1_layer/ntk_vs_width}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/1_layer/weight_change_vs_width}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/1_layer/ntk_change_vs_width}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/1_layer/nngp_change_vs_width}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/ntk_vs_step_3_layers_tanh.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/weight_change_vs_width_3_layers_tanh.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/ntk_change_vs_width_3_layers_tanh.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\includegraphics[width=\textwidth]{figure/3_layers/nngp_change_vs_width_3_layers_tanh.pdf}
\end{subfigure}
\caption{\textbf{Relative Frobenius norm change during training.} \emph{(top)} One hidden layer, {$\operatorname{ReLU}$}{} networks trained with $\eta=1.0$, on a 2-class CIFAR10 subset of size $|\mathcal{D}|=128$. We measure changes of (read-out/non read-out) weights, empirical $\hat\Theta$ and empirical $\hat {\mathcal K}$
after $T=2^{16}$ steps of gradient descent updates for varying width.
\emph{(bottom)} Networks with three hidden layers and $\tanh$ nonlinearity; other details are identical to Figure \ref{fig:Weight-NTK-vs-width}.}
\label{fig_sm:Weight-NTK-vs-width}
\end{figure}
\newpage
\newpage
\section{Extensions}\label{sec extensions}
\subsection{Momentum}
One direction is to go beyond vanilla gradient descent dynamics. We
consider momentum updates\footnote{Combining the usual two stage update into a single equation.}
\begin{align}
\theta_{i+1} = \theta_i + \beta(\theta_i - \theta_{i-1}) - \eta \nabla_\theta \mathcal L|_{\theta = \theta_i} \,.
\end{align}
The discrete update to the function output becomes
\begin{align}
f^{\textrm{lin}}_{i+1}(x)
=f^{\textrm{lin}}_{i}(x)
-\eta \hat\Theta_0(x, \mathcal{X})\nabla_{f^{\textrm{lin}}_i(\mathcal{X})} \mathcal L + \beta (f^{\textrm{lin}}_{i}(x) - f^{\textrm{lin}}_{i-1}(x))
\end{align}
where $f^{\textrm{lin}}_{i}(x)$ is the output of the linearized network after $i$ steps.
One can take the continuous time limit as in \citet{qian1999momentum,su2014differential} and obtain
\begin{align}
\ddot \omega_t &= \tilde \beta \dot \omega_t - \nabla_\theta f^{\textrm{lin}}_0(\mathcal{X})^T \nabla_{f^{\textrm{lin}}_t(\mathcal{X})} \mathcal L \\
\ddot f_t {} ^{\textrm{lin}}(x) &= \tilde \beta \dot f_t ^{\textrm{lin}}(x) - \hat\Theta_0(x, \mathcal{X}) \nabla_{f^{\textrm{lin}}_t(\mathcal{X})} \mathcal L
\label{eq:lin-mom-output}
\end{align}
where continuous time relates to steps via $t=i \sqrt{\eta}$ and $\tilde \beta = (\beta - 1)/\sqrt{\eta}$. These equations are also amenable to analytic treatment for MSE loss. See Figures~\ref{fig:NTK-dynamics-cnn},~\ref{fig:NTK-dynamics-xent-mom} and~\ref{fig:NTK-dynamics-wresnet} for experimental agreement.
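For concreteness, the discrete update above can also be iterated directly; the following is a small sketch (our own notation; MSE loss and precomputed empirical-kernel blocks are assumed):
\begin{verbatim}
# Sketch: iterate f_{i+1} = f_i - eta*Theta0(.,X)(f_i(X) - Y) + beta*(f_i - f_{i-1})
# for MSE loss; Theta_XX and Theta_xX are precomputed empirical-NTK blocks.
import numpy as np

def momentum_dynamics(Theta_XX, Theta_xX, f0_X, f0_x, Y, eta, beta, steps):
    fX_prev, fX = f0_X.copy(), f0_X.copy()   # initialize f_{-1} = f_0
    fx_prev, fx = f0_x.copy(), f0_x.copy()
    for _ in range(steps):
        grad = fX - Y                        # MSE: dL/df(X) = f(X) - Y
        fX, fX_prev = fX - eta * Theta_XX @ grad + beta * (fX - fX_prev), fX
        fx, fx_prev = fx - eta * Theta_xX @ grad + beta * (fx - fx_prev), fx
    return fX, fx   # train and test outputs after `steps` updates
\end{verbatim}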
\subsection{Multi-dimensional output and cross-entropy loss}
\label{sec:CrossEntropy}
One can extend the dynamics to losses other than squared error and to functions with multiple output dimensions.
Unlike for squared error, we do not have a closed form solution to the dynamics equations; however, they can be solved with an ODE solver as an initial value problem. Consider the cross-entropy loss with softmax output,
\begin{equation}
\ell(f, y) = - \sum_i y^i \log \sigma(f^i), \qquad \sigma(f^i) \equiv \frac{\exp(f^i)}{\sum_j \exp(f^j)}\,.
\end{equation}
Recall that $\frac{\partial \ell}{\partial \hat y^i} = \sigma(\hat y^i) - y^i$.
For a general input point $x$ and an arbitrary function $f ^i (x)$ parameterized by $\theta$, the gradient flow dynamics are given by
\begin{align}
\dot {f}_t^i(x)= \nabla_\theta f_t^i(x) \frac {d\theta} {dt} &= - \eta \nabla_\theta f_t^i(x) \sum_j\sum_{(z, y)\in\mathcal{D}} \left[ \nabla_\theta f_t^j(z)^T\frac {\partial \ell(f_t, y) }{\partial \hat y^j} \right] \\
&= - \eta \sum_{(z, y)\in\mathcal{D}}\sum_j \nabla_\theta f_t^i(x) \nabla_\theta f_t^j(z) ^T \left(\sigma(f_t^j(z)) - y^j\right)
\end{align}
Let $\hat\Theta^{ij}(x, \mathcal{X}) = \nabla_\theta f^i(x) \nabla_\theta f^j(\mathcal{X}) ^T$. The above is
\begin{align}
\dot {f_t} (\mathcal{X}) &= -\eta \hat\Theta_t (\mathcal{X}, \mathcal{X}) \left( \sigma(f_t(\mathcal{X})) - \mathcal{Y} \right)\\
\dot {f_t} (x) &= - \eta \hat\Theta_t (x, \mathcal{X}) \left( \sigma(f_t(\mathcal{X})) - \mathcal{Y} \right)\,.
\end{align}
The linearization is
\begin{align}
\label{eq:xent-ode-train}
\dot {f_t}^{\textrm{lin}} (\mathcal{X}) &= - \eta \hat\Theta_0 (\mathcal{X}, \mathcal{X}) \left( \sigma(f^{\textrm{lin}}_t(\mathcal{X})) - \mathcal{Y} \right)\\
\label{eq:xent-ode-test}
\dot {f_t}^{\textrm{lin}} (x) &= - \eta \hat\Theta_0 (x, \mathcal{X}) \left( \sigma(f_t^{\textrm{lin}}(\mathcal{X})) - \mathcal{Y} \right)\,.
\end{align}
For a general loss, e.g. cross-entropy with softmax output, we need to rely on solving the ODEs in Equations \ref{eq:xent-ode-train} and \ref{eq:xent-ode-test}. We use the \texttt{dopri5} method for ODE integration, which is the default integrator in TensorFlow (\texttt{tf.contrib.integrate.odeint}).
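For illustration, a minimal sketch of this numerical integration is given below; we use SciPy's \texttt{RK45} (a Dormand-Prince method from the same family as \texttt{dopri5}) rather than the TensorFlow integrator, and we assume the kernel acts as the identity on class indices (a property of the wide-network kernel):
\begin{verbatim}
# Sketch: integrate df/dt = -eta * Theta0 @ (softmax(f) - Y) as an
# initial value problem on the training set.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import softmax

def integrate_xent_dynamics(Theta0, f0, Y, eta, t_final):
    # f0, Y: (num_examples, num_classes); Theta0: (n, n) on the example axis.
    n, k = f0.shape
    def rhs(t, f_flat):
        f = f_flat.reshape(n, k)
        return (-eta * Theta0 @ (softmax(f, axis=1) - Y)).ravel()
    sol = solve_ivp(rhs, (0.0, t_final), f0.ravel(), method="RK45")
    return sol.y[:, -1].reshape(n, k)   # logits at time t_final
\end{verbatim}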
\section{Neural Tangent kernel for {$\operatorname{ReLU}$}{} and $\operatorname{erf}$ {}}
\label{sec:analytic_kernel}
For {$\operatorname{ReLU}$}{} and $\operatorname{erf}$ {} activation functions, the tangent kernel can be computed analytically. We begin with the case $\phi = $ {$\operatorname{ReLU}$}{}; using the formula from \citet{cho2009}, we can compute $\mathcal{T}$ and $\dot{\mathcal{T}}$ in closed form.
Let $\Sigma $ be a $2\times 2$ PSD matrix.
We will use
\begin{align}
k_n(x , y) = \int \phi^n(x \cdot w) \phi^n(y \cdot w) e^{-\|w\|^2/2} dw \cdot (2\pi)^{-d/2} = \frac 1 {2\pi} \|x\|^{n} \|y\|^{n} J_n(\theta)
\end{align}
where
\begin{align}
\phi(x) &= \max(x, 0), \quad \theta(x, y) = \arccos \left(\frac{x\cdot y} {\|x\|\|y\|} \right)\,,
\nonumber \\
J_0(\theta) &= \pi - \theta\,, \quad
J_1(\theta) = \sin \theta + (\pi - \theta) \cos \theta
= \sqrt{1 - \left(\frac{x\cdot y} {\|x\|\|y\|} \right)^2 } + (\pi - \theta) \left(\frac{x\cdot y} {\|x\|\|y\|} \right)\,.
\end{align}
Let $d=2$ and $u = (x\cdot w, y\cdot w)^T$. Then $u$ is a mean-zero Gaussian vector with covariance $\Sigma = [[x\cdot x, x\cdot y]; [x\cdot y, y\cdot y]]$, and
\begin{align}
\mathcal{T}(\Sigma) &= k_1(x, y) = \frac 1 {2\pi} \|x\| \|y\| J_1(\theta) \\
\dot{\mathcal{T}}(\Sigma) &= k_0(x, y) = \frac 1 {2\pi} J_0(\theta)
\end{align}
For $\phi= \operatorname{erf}$, let $\Sigma$ be the same as above. Following \citet{williams1997}, we get
\begin{align}
\mathcal{T}(\Sigma) &= \frac 2 \pi \sin^{-1} \left(\frac {2x\cdot y} {\sqrt {(1 + 2 x \cdot x) (1 + 2 y \cdot y)} }\right)
\\
\dot{\mathcal{T}}(\Sigma) &= \frac 4 \pi \det (I + 2\Sigma)^{-1/2}
\end{align}
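These closed forms are straightforward to evaluate numerically; a short sketch (our own helper names, operating on the entries $\Sigma_{11}, \Sigma_{12}, \Sigma_{22}$) is:
\begin{verbatim}
# Sketch: evaluate T(Sigma) and Tdot(Sigma) from the formulas above;
# Sxx = Sigma_11, Sxy = Sigma_12, Syy = Sigma_22.
import numpy as np

def T_relu(Sxx, Sxy, Syy):
    c = np.clip(Sxy / np.sqrt(Sxx * Syy), -1.0, 1.0)
    theta = np.arccos(c)
    J1 = np.sin(theta) + (np.pi - theta) * np.cos(theta)
    return np.sqrt(Sxx * Syy) * J1 / (2 * np.pi), (np.pi - theta) / (2 * np.pi)

def T_erf(Sxx, Sxy, Syy):
    T = (2 / np.pi) * np.arcsin(2 * Sxy / np.sqrt((1 + 2 * Sxx) * (1 + 2 * Syy)))
    det = (1 + 2 * Sxx) * (1 + 2 * Syy) - 4 * Sxy**2   # det(I + 2 Sigma)
    return T, (4 / np.pi) / np.sqrt(det)
\end{verbatim}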
\section{Gradient flow dynamics for training only the readout-layer}
\label{sec:gradient-readout-layer}
The connection between Gaussian processes and Bayesian wide neural networks can be extended to the setting when only the readout layer parameters are being optimized.
More precisely, we show that when training only the readout layer, the outputs of the network form a Gaussian process (over an ensemble of draws from the parameter prior) throughout training, whose mean and covariance interpolate between the GP prior and the GP posterior.
Note that for any $x, x'\in\mathbb R^{n_0}$, in the infinite width limit
$\bar x (x) \cdot \bar x (x') =\hat {\mathcal K}(x, x') \to \mathcal K(x, x')$ in probability,
where for notational simplicity we assign $\bar {x}(x) = \left[\frac{\sigma_w x^{L}(x)}{\sqrt {n_L}}, {\sigma_b}\right]$.
The regression problem is specified with mean-squared loss
\begin{align}
\mathcal L = \frac 1 2 \|f(\mathcal{X}) - \mathcal{Y}\|_2^2
=\frac 1 2 \|{\bar x}(\mathcal{X}) \theta^{L+1} - \mathcal{Y}\|_2^2,
\end{align}
and applying gradient flow to optimize the readout layer (and freezing all other parameters),
\begin{align}
\dot \theta^{L+1} = - {\eta} {\bar x(\mathcal{X})}^T \left( {\bar x}(\mathcal{X}) \theta^{L+1} - \mathcal{Y} \right)\, ,
\end{align}
where $\eta$ is the learning rate. The solution to this ODE gives the evolution of the output at an arbitrary point $x^*$.
So long as the empirical kernel $\bar x(\mathcal{X})\bar x(\mathcal{X})^T$ is invertible, it is
\begin{align}
\label{eq:NNGP-exact-dynamics}
f_t(x^*) =
f_0(x^*)+ \hat {\mathcal K}(x^*, \mathcal{X})\hat {\mathcal K}(\mathcal{X}, \mathcal{X})^{-1}
\left(\exp\left(-\eta t \hat {\mathcal K}(\mathcal{X}, \mathcal{X} )\right)-I \right)(f_0(\mathcal{X}) - \mathcal{Y})
\end{align}
For any $x, x'\in\mathbb R^{n_0}$, letting $n_l\to \infty$ for $l=1, \dots, L$, one has convergence in probability and in distribution, respectively:
\begin{align}
\bar x(x) \bar x(x') \to \mathcal K(x, x')
\quad \textrm{and}\quad
\bar x(\mathcal{X}) \theta_0^{L+1}\to \mathcal N(0, \mathcal K(\mathcal{X}, \mathcal{X})).
\end{align}
Moreover, $\bar x(\mathcal{X})\theta_0^{L+1}$ and the term containing $f_0(\mathcal{X})$ are the only stochastic terms over the ensemble of network initializations;
therefore, for any $t$ the output $f_t(x^*)$ converges throughout training to a Gaussian distribution in the infinite width limit, with
\begin{align}
\mathbb E [f_t(x^*)] &= \mathcal K(x^*, \mathcal{X})\mathcal K^{-1}(I -e^{-\eta \mathcal K t})\mathcal{Y} \,,
\label{eq:NNGP-exact-dynamics-mean}
\\
{\rm Var}[f_t(x^*) ]
&= \mathcal K(x^*, x^*) - \mathcal K(x^*, \mathcal{X})\mathcal K^{-1}(I-e^{-2\eta \mathcal K t})\mathcal K(x^*, \mathcal{X})^T \,.
\label{eq:NNGP-exact-dynamics-var}
\end{align}
Thus the output of the neural network is also a GP, and the asymptotic solution (i.e. $t\to\infty$) is identical to the posterior of the NNGP (\eqref{eq:nngp-exact-posterior}). Therefore, in the infinite width limit, a neural network whose readout layer alone is optimized performs posterior sampling. This result is a realization of the sample-then-optimize equivalence identified in~\citet{matthews2017sample}.
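As an illustration, Equations \ref{eq:NNGP-exact-dynamics-mean} and \ref{eq:NNGP-exact-dynamics-var} can be evaluated directly; the following is a small sketch (our own names; matrix exponentials via SciPy):
\begin{verbatim}
# Sketch: time-dependent GP mean and variance at a test point x*,
# per the two equations above; KXX = K(X,X), KxX = K(x*,X), Kxx = K(x*,x*).
import numpy as np
from scipy.linalg import expm

def gp_readout_dynamics(Kxx, KxX, KXX, Y, eta, t):
    KXX_inv = np.linalg.inv(KXX)
    E1 = np.eye(len(KXX)) - expm(-eta * t * KXX)
    E2 = np.eye(len(KXX)) - expm(-2 * eta * t * KXX)
    mean = KxX @ KXX_inv @ E1 @ Y
    var = Kxx - KxX @ KXX_inv @ E2 @ KxX.T
    return mean, var   # as t -> infinity this recovers the NNGP posterior
\end{verbatim}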
\section{Computing NTK and NNGP Kernel}
\label{sec:KernelDerivation}
For completeness, we reproduce, informally, the recursive formulas for the NNGP kernel and the tangent kernel from \cite{lee2018deep} and \cite{Jacot2018ntk}, respectively.
Let the activation function $\phi:\mathbb R\to\mathbb R$ be absolutely continuous. Let $\mathcal{T}$ and $\dot {\mathcal T}$ be functions from $2\times 2$ positive semi-definite matrices $\Sigma$ to $\mathbb R$ given by
\begin{align}
\begin{cases}
\mathcal{T}(\Sigma) = \mathbb E [\phi(u)\phi(v)]\,\,\,\,
\\
\dot{\mathcal{T}}(\Sigma) = \mathbb E [\phi'(u)\phi'(v)]\,\,\,\,
\end{cases}
(u, v)\sim \mathcal N(0, \Sigma) \,.
\end{align}
In the infinite width limit, the NNGP and tangent kernel can be computed recursively.
Let $x, x'$ be two inputs in $\mathbb{R}^{n_0}$.
Then $h^l(x)$ and $h^l(x')$ converge in distribution to a joint Gaussian as $\min\{n_1, \dots, n_{l-1}\}\to\infty$. The mean is zero and the covariance $\mathcal K^{l}(x,x')$ is
\begin{align}
\mathcal K^{l}(x,x') = \tilde{\mathcal K}^l(x, x')\otimes {\Id}_{n_l}
\end{align}
\begin{align}
\label{eq:sigma-map}
\tilde{\mathcal K}^{l}(x, x') = \sigma_\omega^2 \mathcal{T}\left(\begin{bmatrix}
\tilde{\mathcal K}^{l-1}(x, x) & \tilde{\mathcal K}^{l-1}(x, x')
\\
\tilde{\mathcal K}^{l-1}(x, x') & \tilde{\mathcal K}^{l-1}(x', x')
\end{bmatrix}\right ) + \sigma_b^2 \,
\end{align}
with base case
\begin{align}
\mathcal K^1(x,x') = \sigma^2_{\omega} \cdot \frac{1}{n_0} x^T x' + \sigma^2_b.
\end{align}
Using this one can also derive the tangent kernel for gradient descent training.
We will use induction to show that
\begin{align}\Theta^l (x,x') = \tilde \Theta^l(x,x') \otimes {\Id}_{n_l}
\end{align}
where
\begin{align}\label{eq:tangent-kernel-recursive}
\tilde \Theta^l(x,x') = \tilde{\mathcal K}^l(x,x') + \sigma_\omega^2 \tilde\Theta^{l-1}(x, x') \dot{\mathcal{T}} \left(\begin{bmatrix}
\tilde{\mathcal K}^{l-1}(x, x) & \tilde{\mathcal K}^{l-1}(x, x')
\\
\tilde{\mathcal K}^{l-1}(x, x') & \tilde{\mathcal K}^{l-1}(x', x')
\end{bmatrix}\right )
\end{align}
with $\tilde\Theta^1 = \tilde{\mathcal K}^1$.
Let
\begin{align}
J^l(x) = \nabla_{\theta^{\leq l}} h^l_0(x) = [\nabla_{\theta^{l}} h^l_0(x), \nabla_{\theta^{< l}} h^l_0(x) ] .
\end{align}
Then
\begin{align}
J^l(x)J^l(x')^T &= \nabla_{\theta^{l}} h^l_0(x) \nabla_{\theta^{l}} h^l_0(x')^T + \nabla_{\theta^{< l}} h^l_0(x) \nabla_{\theta^{< l}} h^l_0(x')^T
\end{align}
Letting $n_1, \dots, n_{l-1}\to\infty$ sequentially, the first term converges to the NNGP kernel $\mathcal K^l(x, x') $.
By applying the chain rule and the induction step (letting $n_1, \dots, n_{l-2} \to\infty$ sequentially), the second term is
\begin{align}
\nabla_{\theta^{< l}} h^l_0(x) \nabla_{\theta^{< l}} h^l_0(x')^T
&=
\frac{\partial h^l_0(x)}{\partial h^{l-1}_0(x)}
\nabla_{\theta^{\leq l-1}} h^{l-1}_0(x) \nabla_{\theta^{\leq l-1}} h^{l-1}_0(x')^T
\frac{\partial h^l_0(x')}{\partial h^{l-1}_0(x')}^T
\\
&\to \frac{\partial h^l_0(x)}{\partial h^{l-1}_0(x)}
\tilde\Theta^{l-1}(x, x')\otimes {\Id}_{n_{l-1}}
\frac{\partial h^l_0(x')}{\partial h^{l-1}_0(x')}^T \quad \quad (n_1, \dots, n_{l-2} \to\infty)
\\
&\to
\sigma_\omega^2 \left(\mathbb{E} \phi'(h_{0, i}^{l-1}(x)) \phi'(h_{0, i}^{l-1}(x')) \tilde\Theta^{l-1}(x, x')\right)\otimes {\Id}_{n_l} \quad \quad ( n_{l-1} \to\infty)
\\
&=
\left(\sigma_\omega^2 \tilde\Theta^{l-1}(x, x') \dot{\mathcal{T}} \left(\begin{bmatrix}
\tilde{\mathcal K}^{l-1}(x, x) & \tilde{\mathcal K}^{l-1}(x, x')
\\
\tilde{\mathcal K}^{l-1}(x, x') & \tilde{\mathcal K}^{l-1}(x', x')
\end{bmatrix}\right ) \right)\otimes {\Id}_{n_l}
\end{align}
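Putting the two recursions together, the scalar NNGP kernel and tangent kernel for a pair of inputs can be computed layer by layer. The following sketch (our own function names; ReLU case, using the closed forms of \sref{sec:analytic_kernel}) illustrates this:
\begin{verbatim}
# Sketch: analytic NNGP kernel K^L and NTK Theta^L for inputs x1, x2
# of an L-layer ReLU network, iterating the two recursions above.
import numpy as np

def _t_relu(Kxx, Kxy, Kyy):
    c = np.clip(Kxy / np.sqrt(Kxx * Kyy), -1.0, 1.0)
    th = np.arccos(c)
    T = np.sqrt(Kxx * Kyy) * (np.sin(th) + (np.pi - th) * np.cos(th)) / (2 * np.pi)
    return T, (np.pi - th) / (2 * np.pi)   # T(Sigma), Tdot(Sigma)

def nngp_ntk_relu(x1, x2, L, sw2=2.0, sb2=0.1):
    n0 = len(x1)
    K11 = sw2 * x1 @ x1 / n0 + sb2
    K22 = sw2 * x2 @ x2 / n0 + sb2
    K12 = sw2 * x1 @ x2 / n0 + sb2
    Theta = K12                            # base case: Theta^1 = K^1
    for _ in range(L - 1):
        T12, Td12 = _t_relu(K11, K12, K22)
        T11, _ = _t_relu(K11, K11, K11)
        T22, _ = _t_relu(K22, K22, K22)
        K12_new = sw2 * T12 + sb2          # Eq. (sigma-map)
        Theta = K12_new + sw2 * Theta * Td12   # Eq. (tangent-kernel-recursive)
        K11, K22, K12 = sw2 * T11 + sb2, sw2 * T22 + sb2, K12_new
    return K12, Theta
\end{verbatim}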
\section{Results in function space for NTK parameterization transfer to standard parameterization}
\label{sec:compare-parameterization}
\begin{figure}[t]
\vskip 0.2in
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{figure/mnist-ntk-vs-standard}
\caption{MNIST}
\label{fig:mnist-ntk-vs-standard}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{figure/cifar-ntk-vs-standard}
\caption{CIFAR}
\label{fig:cifar-ntk-vs-standard}
\end{subfigure}
\caption{{\bf NTK vs Standard parameterization.} Across different choices of dataset, activation function and loss function, models obtained from (S)GD training with both parameterizations (circles and triangles denote NTK and standard parameterization, respectively) achieve similar performance.}
\label{fig:ntk-vs-standarad}
\vskip -0.2in
\end{figure}
In this Section we present a sketch for why the function space linearization results, derived in \cite{Jacot2018ntk} for NTK parameterized networks, also apply to networks with a standard parameterization. We follow this up with a formal proof in \sref{sec: converge proof} of the convergence of standard parameterization networks to their linearization in the limit of infinite width. A network with standard parameterization is described as:
\begin{align}
\label{eq:recurrence-std}
\begin{cases}
h^{l+1}&=x^l W^{l+1} + b^{l+1}
\\
x^{l+1}&=\phi\left(h^{l+1}\right)
\end{cases}
\,\, \textrm{and}
\,\,
\begin{cases}
W^{l}_{i, j}& = \omega_{ij}^l \sim \mathcal{N}\left(0, \frac {\sigma_\omega^2}{n_l}\right)
\\
b_j^l &= \beta_j^l \sim \mathcal N\left(0, \sigma_b^2\right)
\end{cases} \,.
\end{align}
The NTK parameterization in \eqref{eq:recurrence} is not commonly used for training neural networks. While the function that the network represents is the same for both NTK and standard parameterization, training dynamics under gradient descent are generally different for the two parameterizations. However, for a particular choice of layer-dependent learning rates, the training dynamics become identical. Let $\eta^l_{\text{NTK},w}$ and $\eta^l_{\text{NTK},b}$ be the layer-dependent learning rates for $W^l$ and $b^l$ in the NTK parameterization, and let $\eta_\text{std} = \frac{1}{n_\text{max}} \eta_0$ be the learning rate for all parameters in the standard parameterization, where $n_\text{max} = \max_l n_l$.
Recall that gradient descent training in standard neural networks requires a learning rate that scales with width like $\frac{1}{n_\text{max}}$, so $\eta_0$ defines a width-invariant learning rate \citep{parkoptimal}.
If we choose
\begin{align}
\label{eq lr equiv}
\eta^l_\text{NTK, w} = \frac{n_l}{n_\text{max} \sigma_\omega^2 }\eta_0, \qquad \text{and} \qquad
\eta^l_\text{NTK, b} = \frac{1}{n_\text{max}\sigma_b^2} \eta_0
,
\end{align}
then learning dynamics are identical for networks with NTK and standard parameterizations.
With only extremely minor modifications, consisting of incorporating the multiplicative factors in Equation \ref{eq lr equiv}
into the per-layer contributions to the Jacobian, the arguments in \sref{sec:Justification} go through for an NTK network with learning rates defined in Equation \ref{eq lr equiv}.
Since an NTK network with these learning rates exhibits identical training dynamics to a standard network with learning rate $\eta_\text{std}$, the result in \sref{sec:Justification} that sufficiently wide NTK networks are linear in their parameters throughout training also applies to standard networks.
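The mapping in Equation \ref{eq lr equiv} is mechanical to apply; a tiny sketch (our own function name) reads:
\begin{verbatim}
# Sketch: per-layer NTK learning rates that reproduce standard-
# parameterization dynamics at width-normalized rate eta_0 / n_max.
def ntk_learning_rates(layer_widths, eta0, sw2, sb2):
    n_max = max(layer_widths)
    eta_w = [n_l * eta0 / (n_max * sw2) for n_l in layer_widths]
    eta_b = [eta0 / (n_max * sb2) for _ in layer_widths]
    return eta_w, eta_b
\end{verbatim}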
We can verify this property of networks with the standard parameterization experimentally.
In Figure~\ref{fig:ntk-vs-standarad}, we see that for different choices of dataset, activation function and loss function, the final performance of the two parameterizations leads to models of similar quality for similar values of the normalized learning rate $\eta_{\textrm{std}} = \eta_{\textrm{NTK}} / n$. Also, in Figure~\ref{fig:NTK-dynamics-standard}, we observe that our results are not due to the parameterization choice and hold for wide networks using the standard parameterization.
\begin{figure}%
\centering
\includegraphics[width=\columnwidth]{figure/dynamics_fc_standard_d5}
\caption{{\bf Exact and experimental dynamics are nearly identical for network outputs, and are similar for individual weights (Standard parameterization).}
Experiment is for an MSE loss, $\operatorname{ReLU}$ network with 5 hidden layers of width $n=2048$, $\eta = 0.005/2048$, $|\mathcal{D}|=256$, $k=1$, $\sigma_w^2=2.0$, and $\sigma_b^2=0.1$. All three panes in the first row show dynamics for a randomly selected subset of datapoints or parameters. The first two panes in the second row show that the dynamics of loss and accuracy for training and test points agree well between the original and linearized models. The bottom right pane shows the dynamics of the RMSE between the two models on test points using the empirical kernel.}
\label{fig:NTK-dynamics-standard}
\end{figure}
\section{Convergence of neural network to its linearization, and stability of NTK under gradient descent}
\label{sec: converge proof}
In this section, we show how to use the NTK to provide a simple proof of the global convergence of a neural network under (full-batch) gradient descent and of the stability of the NTK under gradient descent.
We present the proof for the standard parameterization; with some minor changes, the proof also applies to the NTK parameterization. To lighten the notation, we only consider asymptotic bounds here.
The neural networks are parameterized as in \eqref{eq:recurrence-std}.
We make the following assumptions:
\\
\\
{\bf Assumptions [1-4]:}
\begin{enumerate}
\item The widths of the hidden layers are identical, i.e. $n_1 = \dots = n_L = n$ (our proof extends naturally to the setting $\frac {n_l}{n_{l'}}\to \alpha_{l, l'}\in(0, \infty)$
as $\min\{n_1, \dots , n_L\}\to \infty$.)
\item The analytic NTK $\Theta$ (defined in \eqref{eq:ntk-standard}) is full-rank, i.e. $0<\lambda_{\rm{min}} := \lambda_{\rm{min}}(\Theta) \leq \lambda_{\rm{max}} :=\lambda_{\rm{max}}(\Theta)<\infty .$ We set $\eta_{{\rm critical}} = 2 (\lambda_{\rm{min}} + \lambda_{\rm{max}})^{-1}$ .
\item The training set $(\mathcal{X}, \mathcal{Y})$ is contained in some compact set and $x\neq \tilde x$ for all $x, \tilde x \in \mathcal{X}$.
\item The activation function $\phi$ satisfies
\begin{align}
|\phi(0)|,\quad \|\phi'\|_\infty, \quad \sup_{x\neq \tilde x} |\phi'(x) - \phi'(\tilde x)|/|x-\tilde x| < \infty. \label{eq:activation-assumption}
\end{align}
\end{enumerate}
Assumption 2 indeed holds when $\mathcal{X}\subseteq \{x\in \mathbb R^{n_0}: \|x\|_2=1\}$ and
$\phi(x)$ grows non-polynomially for large $x$ \cite{Jacot2018ntk}.
Throughout this section, we use $C>0$ to denote some constant whose value may depend on
$L$, $|\mathcal{X}|$ and $(\sigma_w^2, \sigma_b^2)$ and may change from line to line, but is always independent of $n$.
Let $\theta_t$ denote the parameters at time step $t$. We use the following short-hand
\begin{align}
f(\theta_t) &= f(\mathcal{X}, \theta_t) \in\mathbb{R}^{|\mathcal{X}|\times k}
\\
g(\theta_t) &= f(\mathcal{X}, \theta_t) - \mathcal{Y} \in\mathbb{R}^{|\mathcal{X}|\times k}
\\
J(\theta_t) &= \nabla_{\theta}f(\theta_t) \in\mathbb{R}^{(|\mathcal{X}| k)\times |\theta|}
\end{align}
where $|\mathcal{X}|$ is the cardinality of the training set and $k$ is the output dimension of the network.
The empirical and analytic NTK of the standard parameterization is defined as
\begin{align}\label{eq:ntk-standard}
\begin{cases}
\hat\Theta_t &:=\hat\Theta_t(\mathcal{X}, \mathcal{X}) = \frac 1 n J(\theta_t)J(\theta_t)^T
\\
\Theta &:= \lim_{n\to\infty} \hat\Theta_0 \quad {\rm in\quad probability}.
\end{cases}
\end{align}
Note that the convergence of the empirical NTK in probability is proved rigorously in \cite{yang2019scaling}.
We consider the MSE loss
\begin{align}
\mathcal L(t) = \frac 1 2 \|g(\theta_t)\|_2^2.
\end{align}
Since $f(\theta_0)$ converges in distribution to a mean zero Gaussian with covariance $\mathcal K$, one can show that for arbitrarily small $\delta_0>0$, there are constants $R_0>0$ and $n_0$ (both may depend on $\delta_0$, $|\mathcal{X}|$ and $\mathcal{K}$) such that for every $n \geq n_0$, with probability at least $(1 - \delta_0)$ over random initialization,
\begin{align}
\|g(\theta_0)\|_2 < R_0. \label{eq:base-loss}
\end{align}
The gradient descent update with learning rate $\eta$ is
\begin{align}
\theta_{t+1} = \theta_t - \eta J(\theta_t)^Tg(\theta_t)
\end{align}
and the gradient flow equation is
\begin{align}
\dot\theta_{t} = - \eta J(\theta_t)^Tg(\theta_t)
.
\end{align}
We prove convergence of neural network training and the stability of NTK for both discrete gradient descent and gradient flow.
Both proofs rely on the local Lipschitzness of the Jacobian $J(\theta)$.
\begin{lemma}[{\bf Local Lipschitzness of the Jacobian}] \label{lemma:stability-jacobian}
There is a $K>0$ such that for every $C>0$, with high probability over random initialization (w.h.p.o.r.i.) the following holds
\begin{align}\label{eq:jacobian-lip}
\begin{cases}
\frac 1 {\sqrt n}\|J(\theta) - J(\tilde \theta)\|_{F} &\leq K\|\theta - \tilde \theta\|_2
\\
\\
\frac 1 {\sqrt n} \|J(\theta)\|_{F} & \leq K
\end{cases}
, \quad \quad \forall \theta, \, \tilde \theta \in B(\theta_0, C n^{-\frac 1 2})
\end{align}
where
\begin{align}
B(\theta_0, R) := \{\theta: \|\theta-\theta_0\|_2 < R\}.
\end{align}
\end{lemma}
The following are the main results of this section.
\begin{theorem}[{\bf Gradient descent}]\label{thm:convergence}
Assume {\bf Assumptions [1-4]}.
For $\delta_0>0$ and $\eta_0< \eta_{{\rm critical}} $, there exist $R_0>0$, $N\in\mathbb N$ and $K>1$, such that for every $n\geq N$, the following holds with probability at least $(1 - \delta_0)$ over random initialization when applying
gradient descent with learning rate $\eta = \frac {\eta_0}{n}$,
\begin{align} \label{eq:exp-decay}
\begin{cases}
&\|g(\theta_{t})\|_2 \leq \left(1 - \frac {\eta_0 \lambda_{\rm{min}}}{3}\right)^t R_0 %
\\
\\
&\sum_{j=1}^{t}\|\theta_j - \theta_{j-1}\|_2 \leq \frac{\eta_0KR_0}{\sqrt n} \sum_{j=1}^{t} (1 - \frac {\eta_0 \lambda_{\rm{min}}}{3})^{j-1}
\leq \frac {3K R_0}{\lambda_{\rm{min}}} n^{-\frac 1 2}
\end{cases}
\end{align}
and
\begin{align}\label{eq:convergence-ntk}
\sup_{t} \| \hat\Theta_0 - \hat\Theta_t\|_F \leq \frac {6K^3R_0}{\lambda_{\rm{min}}} n^{-\frac 1 2}\, .
\end{align}
\end{theorem}
\begin{theorem}[{\bf Gradient Flow}]\label{thm:convergence-flow}
Assume {\bf Assumptions[1-4]}.
For $\delta_0>0$, there exist $R_0>0$, $N\in\mathbb N$ and $K>1$, such that for every $n\geq N$, the following holds with probability at least $(1 - \delta_0)$ over random initialization when applying gradient flow with ``learning rate" $\eta = \frac {\eta_0}{n}$
\begin{align} \label{eq:exp-decay-flow}
\begin{cases}
&\|g(\theta_{t})\|_2 \leq e^{- \frac { \eta_0 \lambda_{\rm{min}}}{3}t} R_0
\\
\\
&\|\theta_t - \theta_{0}\|_2 \leq \frac {3K R_0}{\lambda_{\rm{min}}}(1 - e^{-\frac 1 3 \eta_0 \lambda_{\rm{min}} t}) n^{-\frac 1 2}
\end{cases}
\end{align}
and
\begin{align}\label{eq:convergence-ntk-flow}
\sup_{t} \| \hat\Theta_0 - \hat\Theta_t\|_F \leq \frac {6K^3R_0}{\lambda_{\rm{min}}} n^{-\frac 1 2}\, .
\end{align}
\end{theorem}
See the following two subsections for the proof.
\begin{remark}
One can extend the results in Theorem \ref{thm:convergence} and Theorem \ref{thm:convergence-flow} to other architectures or functions as long as \begin{enumerate}
\item The empirical NTK converges in probability and the limit is positive definite.
\item Lemma \ref{lemma:stability-jacobian} holds, i.e. the Jacobian is locally Lipschitz.
\end{enumerate}
\end{remark}
\subsection{Proof of Theorem \ref{thm:convergence}}\label{subsection:convergence-descent}
As discussed above, there exist $R_0$ and $n_0$ such that for every $n \geq n_0$, with probability at least $(1 - \delta_0/10)$ over random initialization,
\begin{align}
\|g(\theta_0)\|_2 < R_0 \label{eq:base-loss-1} \, .
\end{align}
Let $C = \frac {3K R_0}{\lambda_{\rm{min}}}$ in Lemma \ref{lemma:stability-jacobian}.
We first prove \eqref{eq:exp-decay} by induction.
Choose $n_1>n_0$ such that for every $n\geq n_1$ \eqref{eq:jacobian-lip} and \eqref{eq:base-loss-1} hold with probability at least $(1 - \delta_0/5)$ over random initialization.
The $t=0$ case is obvious, and we assume \eqref{eq:exp-decay} holds up to step $t$.
Then by induction and the second estimate of \eqref{eq:jacobian-lip}
\begin{align}
\|\theta_{t+1} - \theta_t\|_2 \leq \eta \|J(\theta_t)\|_{\rm{op}} \|g(\theta_t)\|_2 \leq \frac {K\eta_0}{\sqrt n} \left(1 - \frac {\eta_0 \lambda_{\rm{min}}}{3}\right)^t
R_0,
\end{align}
which gives the second estimate of \eqref{eq:exp-decay} for $t+1$ and which also implies $\|\theta_{j} - \theta_0\|_2\leq \frac {3K R_0}{\lambda_{\rm{min}}}n^{-\frac 1 2}$ for $j=0, \dots, t+1$. To prove the first estimate, we apply the mean value theorem and the formula for the gradient descent update at step $t+1$
\begin{align}
\|g(\theta_{t+1})\|_2 &= \|g(\theta_{t+1}) - g(\theta_{t}) + g(\theta_{t})\|_2
\\
&= \|J(\tilde \theta_t)(\theta_{t+1} - \theta_t) + g(\theta_{t})\|_2
\\
&= \|-\eta J(\tilde \theta_t)J(\theta_t)^T g(\theta_{t}) + g(\theta_{t})\|_2
\\
&\leq \|1 - \eta J(\tilde \theta_t)J(\theta_t)^T\|_{\rm{op}} \|g(\theta_t)\|_2
\\
&\leq \|1 - \eta J(\tilde \theta_t)J(\theta_t)^T\|_{\rm{op}} \left(1 - \frac {\eta_0 \lambda_{\rm{min}}}{3}\right)^t R_0,
\end{align}
where $\tilde \theta_t$ is some linear interpolation between $ \theta_t$ and $ \theta_{t+1}$. It remains to show with probability at least $(1-\delta_0/2)$,
\begin{align}
\|1 - \eta J(\tilde \theta_t)J(\theta_t)^T\|_{\rm{op}} \leq 1 - \frac {\eta_0 \lambda_{\rm{min}}}{3}.
\end{align}
This can be verified by Lemma \ref{lemma:stability-jacobian}. Because $\hat\Theta_0\to \Theta$ \cite{yang2019scaling} in probability, one can find $n_2$ such that the event
\begin{align}
\|\Theta - \hat\Theta_0\|_F \leq \frac {\lambda_{\rm{min}}}{3}
\end{align}
has probability at least $(1-\delta_0/5)$ for every $n\geq n_2$.
The assumption $\eta_0 < \frac 2{\lambda_{\rm{min}}+ \lambda_{\rm{max}}}$ implies
\begin{align}
\|1 - \eta _0\Theta\|_{\rm{op}} \leq 1 - \eta_0\lambda_{\rm{min}}.
\end{align}
Thus
\begin{align}
&\|1 - \eta J(\tilde \theta_t)J(\theta_t)^T\|_{\rm{op}}
\\ \leq&
\|1 - \eta _0\Theta\|_{\rm{op}} + \eta_0\|\Theta - \hat\Theta_0\|_{\rm{op}} + \eta\|J(\theta_0)J(\theta_0)^T-J(\tilde \theta_t)J(\theta_t)^T\|_{\rm{op}}
\\
\leq & 1 - \eta_0\lambda_{\rm{min}} + \frac {\eta_0\lambda_{\rm{min}}}{3} + \eta_0 K^2 (\|\theta_t - \theta_0\|_2 + \|\tilde \theta_t - \theta_0\|_2)
\\
\leq & 1 - \eta_0\lambda_{\rm{min}} + \frac {\eta_0\lambda_{\rm{min}}}{3} + 2 \eta_0 K^2 \frac {3K R_0}{\lambda_{\rm{min}}} \frac 1 {\sqrt n} \leq 1 - \frac {\eta_0 \lambda_{\rm{min}}}{3}
\end{align}
with probability at least $(1 -\delta_0/2)$ if
\begin{align}
n \geq \left(\frac {18K^3 R_0}{\lambda_{\rm{min}}^2 }\right)^2.
\end{align}
Therefore, we only need to set
\begin{align}
N = \max\left\{n_0, n_1, n_2, \left(\frac {18K^3 R_0}{\lambda_{\rm{min}}^2 }\right)^2 \right\}.
\end{align}
To verify \eqref{eq:convergence-ntk}, notice that
\begin{align}
\| \hat\Theta_0 - \hat\Theta_t\|_F &= \frac 1 n \| J(\theta_0) J(\theta_0) ^T - J(\theta_t) J(\theta_t)^T \|_F
\\
&\leq \frac 1 n \left(\| J(\theta_0)\|_{\rm{op}} \|J(\theta_0) ^T - J(\theta_t)^T \|_F
+ \| J(\theta_t) - J(\theta_0)\|_{\rm{op}} \|J(\theta_t)^T\|_F\right)
\\
&\leq 2K^2 \|\theta_0 - \theta_t\|_2
\\
& \leq \frac {6K^3R_0}{\lambda_{\rm{min}}} \frac 1 {\sqrt n},
\end{align}
where we have applied the second estimate of \eqref{eq:exp-decay} and \eqref{eq:jacobian-lip}.
\subsection{Proof of Theorem \ref{thm:convergence-flow}} \label{subsection:proof-gradient-flow}
The first step is the same. There exist $R_0$ and $n_0$ such that for every $n \geq n_0$, with probability at least $(1 - \delta_0/10)$ over random initialization,
\begin{align}
\|g(\theta_0)\|_2 < R_0 \label{eq:base-loss-1-flow} \, .
\end{align}
Let $C = \frac {3K R_0}{\lambda_{\rm{min}}}$ in Lemma \ref{lemma:stability-jacobian}. Using the same arguments as in Section \ref{subsection:convergence-descent}, one can show that there exists $n_1$ such that for all $n\geq n_1$, with probability at least
$(1 - \delta_0/10) $
\begin{align}
\frac 1 n J(\theta)J(\theta)^T \succ \frac 1 3 \lambda_{\rm{min}} {\Id} \quad \forall \theta \in B(\theta_0, Cn^{-\frac 1 2})
\end{align}
Let
\begin{align}
t_1 = \inf \left\{t: \|\theta_t - \theta_{0}\|_2 \geq \frac {3K R_0}{\lambda_{\rm{min}}} n^{-\frac 1 2} \right\}
\end{align}
We claim $t_1=\infty$. If not, then for all $t\leq t_1$, $\theta_t\in B(\theta_0, Cn^{-\frac 1 2})$ and
\begin{align}
\hat\Theta_t \succ \frac 1 3 \lambda_{\rm{min}} \Id .
\end{align}
Thus
\begin{align}
\frac d {dt} \left ( \|g(t)\|_2^2\right) = - 2\eta_0g(t)^T \hat\Theta_t g(t) \leq - \frac 2 3 \eta_0 \lambda_{\rm{min}} \|g(t)\|_2^2
\end{align}
and
\begin{align} \label{eq:useful-2}
\|g(t)\|_2^2 \leq e^{-\frac 2 3 \eta_0\lambda_{\rm{min}} t} \|g(0)\|_2^2 \leq e^{-\frac 2 3 \eta_0\lambda_{\rm{min}} t} R_0^2 .
\end{align}
Note that
\begin{align}
\frac d {dt} \|\theta_t - \theta_0\|_2 \leq \left\| \frac d {dt} \theta_t\right\|_2 = \frac {\eta_0} {n}\|J(\theta_t)g(t)\|_2 \leq {\eta_0} KR_0 e^{-\frac 1 3 \eta_0 \lambda_{\rm{min}} t} n^{-1/2}
\end{align}
which implies, for all $t\leq t_1$
\begin{align}
\|\theta_t - \theta_0\|_2 \leq \frac {3K R_0}{\lambda_{\rm{min}}}(1 - e^{-\frac 1 3 \eta_0 \lambda_{\rm{min}} t}) n^{-\frac 1 2}
\leq \frac {3K R_0}{\lambda_{\rm{min}}}(1 - e^{-\frac 1 3 \eta_0 \lambda_{\rm{min}} t_1}) n^{-\frac 1 2} < \frac {3K R_0}{\lambda_{\rm{min}}} n^{-\frac 1 2} \,.
\end{align}
This contradicts the definition of $t_1$, and thus $t_1=\infty$. Note that \eqref{eq:useful-2} is the same as the first equation of \eqref{eq:exp-decay-flow}.
\subsection{Proof of Lemma \ref{lemma:stability-jacobian}}
The proof relies on upper bounds of operator norms of random Gaussian matrices.
\begin{theorem}[Corollary 5.35 \cite{vershynin2010introduction}]\label{thm:operator-bound-random-gaussian}
Let $A = A_{N, n}$ be an $N \times n$ random matrix whose
entries are independent standard normal random variables. Then for every $t \geq 0$,
with probability at least $1-2 \exp(-t^2/2)$, the extreme singular values of $A$ satisfy
\begin{align}
\sqrt N - \sqrt n - t \leq s_{\rm{min}}(A) \leq s_{\rm{max}} (A) \leq \sqrt N + \sqrt n + t.
\end{align}
\end{theorem}
For $l\geq 1$, let
\begin{align}
& \delta^l(\theta, x) := \nabla_{h^l(\theta, x)} f^{L+1}(\theta, x)\in \mathbb{R}^{kn}
\\
&\delta^l(\theta, \mathcal{X}) := \nabla_{h^l(\theta, \mathcal{X})} f^{L+1}(\theta, \mathcal{X})\in \mathbb{R}^{(k\times |\mathcal{X}|)\times (n\times \mathcal{X})}
\end{align}
Let $\theta = \{W^l, b^l\}$ and $\tilde \theta = \{\tilde W^l, \tilde b^l\}$ be any two points in $B(\theta_0, \frac C {\sqrt n})$.
By the above theorem and the triangle inequality, w.h.p. over random initialization,
\begin{align}
\| W^1\|_{\rm{op}}, \quad \|\tilde W^1\|_{\rm{op}} \leq 3\sigma_\omega \frac{\sqrt n} { \sqrt {n_0}}, \quad \| W^l\|_{\rm{op}}, \quad \|\tilde W^l\|_{\rm{op}} \leq 3\sigma_\omega \quad {\rm for }\quad 2\leq l \leq L+1 %
\end{align}
Using this and the assumption on $\phi$ \eqref{eq:activation-assumption}, it is not difficult to show that there is a constant $K_1$, depending on $\sigma_\omega^2, \sigma_b^2, |\mathcal{X}|$ and $L$ such that with high probability over random initialization\footnote{These two estimates can be obtained via induction. To prove bounds relating to $x^l$ and $\delta^l$, one starts with $l=1$ and $l=L$, respectively.}
\begin{align}
n^{-\frac 1 2} \|x^l(\theta, \mathcal{X})\|_{2}, \quad \|\delta^l(\theta, \mathcal{X} )\|_{2} &\leq K_1,
\\
n^{-\frac 1 2} \|x^l(\theta, \mathcal{X})- x^l(\tilde \theta, \mathcal{X}) \|_{2}, \quad
\|\delta^l(\theta, \mathcal{X} ) - \delta^l(\tilde \theta, \mathcal{X} )\|_{2} &\leq K_1\|\tilde \theta - \theta\|_2
\end{align}
Lemma \ref{lemma:stability-jacobian} follows from these two estimates. Indeed, with high probability over random initialization
\begin{align}
\|J(\theta)\|_{F}^2 &= \sum_l \|J(W^l)\|_F^2 + \|J(b^l)\|_F^2
\\
&= \sum_l \sum_{x\in \mathcal{X}}\|x^{l-1}(\theta, x)\delta^l(\theta, x)^T\|_F^2 + \|\delta^l(\theta, x)^T\|_F^2
\\
&\leq \sum_l \sum_{x\in \mathcal{X}}
(1+ \|x^{l-1} (\theta, x)\|_F^2)\|\delta^l(\theta, x)^T\|_F^2
\\
&\leq \sum_l (1+ K_1^2n)\sum_x \|\delta^l(\theta, x)^T\|_F^2
\\ &\leq \sum_l K_1^2(1+ K_1^2 n)
\\
& \leq 2(L+1)K_1^4 n,
\end{align}
and similarly
\begin{align}
&\|J(\theta)- J(\tilde\theta)\|_{F}^2
\\
=& \sum_l \sum_{x\in \mathcal{X}}\|x^{l-1}(\theta, x)\delta^l(\theta, x)^T
- x^{l-1}(\tilde\theta, x)\delta^l(\tilde\theta, x)^T\|_F^2 + \|\delta^l(\theta, x)^T- \delta^l(\tilde\theta, x)^T\|_F^2
\\
\leq &\left( \sum_l \left(K_1^4n + K_1^4n \right) + K_1^2 \right) \| \theta - \tilde{\theta} \|_2^2
\\ \leq & 3(L+1)K_1^4 n \, \| \theta - \tilde{\theta} \|_2^2.
\end{align}
\subsection{Remarks on NTK parameterization}
For completeness, we also include analogues of Theorem \ref{thm:convergence} and Lemma \ref{lemma:stability-jacobian} with NTK parameterization.
\begin{theorem}[NTK parameterization]
Assume {\bf Assumptions [1-4]}.
For $\delta_0>0$ and $\eta_0< \eta_{{\rm critical}} $, there exist $R_0>0$, $N\in\mathbb N$ and $K>1$, such that for every $n\geq N$, the following holds with probability at least $(1 - \delta_0)$ over random initialization when applying
gradient descent with learning rate $\eta = {\eta_0}$,
\begin{align}
\begin{cases}
&\|g(\theta_{t})\|_2 \leq \left(1 - \frac {\eta_0 \lambda_{\rm{min}}}{3}\right)^t R_0 %
\\
\\
&\sum_{j=1}^{t}\|\theta_j - \theta_{j-1}\|_2 \leq {K\eta_0} \sum_{j=1}^{t} (1 - \frac {\eta_0 \lambda_{\rm{min}}}{3})^{j-1}
R_0 \leq \frac {3K R_0}{\lambda_{\rm{min}}}
\end{cases}
\end{align}
and
\begin{align}
\sup_{t} \| \hat\Theta_0 - \hat\Theta_t\|_F \leq \frac {6K^3R_0}{\lambda_{\rm{min}}} n^{-\frac 1 2}\, .
\end{align}
\end{theorem}
\begin{lemma}[NTK parameterization: Local Lipschitzness of the Jacobian]
There is a $K>0$ such that for every $C>0$, with high probability over random initialization the following holds
\begin{align}
\begin{cases}
\|J(\theta) - J(\tilde \theta)\|_{F} &\leq K\|\theta - \tilde \theta\|_2
\\
\\
\|J(\theta)\|_{F} & \leq K
\end{cases}
, \quad \quad \forall \theta, \, \tilde \theta \in B(\theta_0, C)
\end{align}
\end{lemma}
\section{Bounding the discrepancy between the original and the linearized network: MSE loss}
\label{sec:sup-discrepancy}
We provide the proof for the gradient flow case. The proof for gradient descent can be obtained similarly.
To simplify the notation, let $g^{\textrm{lin}}(t) \equiv f^{\textrm{lin}}_t(\mathcal{X}) - \mathcal{Y}$ and $ g(t) \equiv f_t(\mathcal{X}) - \mathcal{Y}$. The theorem and proof apply to both standard and NTK parameterization. We use the notation $\lesssim$ to hide the dependence on uninteresting constants.
\begin{theorem}
Assume the same setting as in Theorem \ref{thm:convergence-flow}. Then for every $x\in\mathbb R^{n_0}$ with $\|x\|_2\leq 1$ and for $\delta_0>0$ arbitrarily small, there exist $ R_0>0$ and $N\in\mathbb N$ such that for every $n\geq N$, with probability at least $(1-\delta_0)$ over random initialization,
\begin{align}
\sup_{t}\left\|g^{\textrm{lin}}(t) - g(t)\right\|_2\, ,\quad \sup_{t}\left\|g^{\textrm{lin}}(t, x) - g(t, x)\right\|_2 \lesssim n^{-\frac 1 2} R_0^2.
\end{align}
\end{theorem}
\begin{proof}
\begin{align}
&\frac {d }{dt}\left( \exp(\eta_0 \hat\Theta_0 t)(g^{\textrm{lin}}(t) - g(t) )\right)
\\
= &\eta_0 \left(\hat\Theta_0 \exp(\eta_0 \hat\Theta_0 t)(g^{\textrm{lin}}(t) -g(t) )
+ \exp(\eta_0\hat\Theta_0 t)(-\hat\Theta_0 g^{\textrm{lin}}(t) + \hat\Theta_t g(t) )\right)
\\
= & \eta_0\left(\exp( \eta_0 \hat\Theta_0 t)(\hat\Theta_t - \hat\Theta_0)g(t) \right)
\end{align}
Integrating both sides and using the fact $g^{\textrm{lin}}(0) = g(0)$,
\begin{align}
(g^{\textrm{lin}}(t) - g(t) )
=
-&\int_{0}^t \eta_0\left(\exp(\eta_0\hat\Theta_0 (s-t))(\hat\Theta_s - \hat\Theta_0)(g^{\textrm{lin}}(s) - g(s))\right) ds
\\
+&\int_{0}^t \eta_0\left(\exp(\eta_0\hat\Theta_0 (s-t))(\hat\Theta_s - \hat\Theta_0)g^{\textrm{lin}}(s) \right) ds
\end{align}
Let $\lambda_0>0$ be the smallest eigenvalue of $\hat\Theta_0$ (with high probability $\lambda_0 >\frac 1 3\lambda_{\rm{min}} $). Taking the norm gives
\begin{align}
\|g^{\textrm{lin}}(t) - g(t)\|_2
\leq
&
\eta_0 \Big(\int_{0}^t \|\exp(\hat\Theta_0 \eta_0(s-t))\|_{op}\|(\hat\Theta_s - \hat\Theta_0)\|_{op}\|g^{\textrm{lin}}(s) - g(s)\|_2 ds
\\
&+\int_{0}^t\|\exp(\hat\Theta_0 \eta_0(s-t))\|_{op}\|(\hat\Theta_s - \hat\Theta_0)\|_{op} \|g^{\textrm{lin}}(s)\|_2 ds
\Big)
\\
\leq
&\eta_0 \Big(\int_{0}^t e^{\eta_0\lambda_0(s-t)}\|(\hat\Theta_s - \hat\Theta_0)\|_{op}\|g^{\textrm{lin}}(s) - g(s)\|_2 ds
\\
&+\int_{0}^t e^{\eta_0\lambda_0(s-t)} \|(\hat\Theta_s - \hat\Theta_0)\|_{op} \|g^{\textrm{lin}}(s)\|_2 ds \Big)
\end{align}
Let
\begin{align}
u(t) &\equiv e^{\lambda_0 \eta_0 t} \|g^{\textrm{lin}}(t) - g(t)\|_2
\\
\alpha(t) &\equiv \eta_0 \int_{0}^t e^{\lambda_0 \eta_0 s } \|(\hat\Theta_s - \hat\Theta_0)\|_{op} \|g^{\textrm{lin}}(s)\|_2 ds
\\
\beta(t) &\equiv \eta_0\|(\hat\Theta_t - \hat\Theta_0)\|_{op}
\end{align}
The above can be written as
\begin{align}
u(t) \leq \alpha(t) + \int_{0}^{t}\beta(s) u(s)ds
\end{align}
Note that $\alpha(t)$ is non-decreasing. Applying an integral form of Gr\"{o}nwall's inequality (see Theorem 1 in \cite{dragomir2003some}) gives
\begin{align}
u(t) \leq \alpha(t) \exp \left({\int_{0}^t \beta(s)ds} \right)
\end{align}
Note that
\begin{align}
\|g^{\textrm{lin}}(t)\|_2 = \|\exp\left( -\eta_0 \hat\Theta_0 t\right) g^{\textrm{lin}}(0)\|_2 \leq \|\exp\left( -\eta_0 \hat\Theta_0 t\right)\|_{op} \|g^{\textrm{lin}}(0)\|_2 = e^{-\lambda_0 \eta_0 t} \|g^{\textrm{lin}}(0)\|_2 \,.
\end{align}
Then
\begin{align}
\|g^{\textrm{lin}}(t)-g(t)\|_2 &\leq
\eta_0 e^{-\lambda_0 \eta_0 t}\int_{0}^t e^{\lambda_0 \eta_0 s }
\|\hat\Theta_s - \hat\Theta_0\|_{op} \|g^{\textrm{lin}}(s)\|_2 ds
\exp\left({\int_{0}^t \eta_0 \|\hat\Theta_s - \hat\Theta_0\|_{op}ds}\right)
\\
&\leq
\eta_0 e^{-\lambda_0 \eta_0 t}\|g^{\textrm{lin}}(0)\|_2 \int_{0}^t
\|(\hat\Theta_s - \hat\Theta_0)\|_{op}ds
\exp \left({\int_{0}^t \eta_0 \|\hat\Theta_s - \hat\Theta_0\|_{op}ds}\right)
\end{align}
Let $\sigma_t = \sup_{0\leq s\leq t}\|\hat\Theta_s - \hat\Theta_0\|_{op}$. Then
\begin{align}\label{eq:useful}
\|g^{\textrm{lin}}(t)-g(t)\|_2 \lesssim \left( \eta_0 t {\sigma_t} e^{-\lambda_0 \eta_0 t + \sigma_t\eta_0 t}\right) \|g^{\textrm{lin}}(0)\|_2
\end{align}
As it is proved in Theorem \ref{thm:convergence}, for every $\delta_0 >0$, with probability at least $(1-\delta_0)$ over random initialization,
\begin{align}
\sup_{t} \sigma_t \leq \sup_{t} \| \hat\Theta_0 - \hat\Theta_t\|_F \lesssim n^{-1/2}R_0 \to 0 \, \label{eq: sigma-decay}
\end{align}
when $n_1=\dots =n_L=n\to\infty$.
Thus for large $n$ and any polynomial $P(t)$ (we use $P(t) = t$ here)
\begin{align}
\sup_{t} e^{-\lambda_0 \eta_0 t + \sigma_t \eta_0 t } \eta_0 P(t) =\mathcal O(1)
\end{align}
Therefore
\begin{align}
\label{eq:discrepancy-training-appendix}
\sup_{t }\|g^{\textrm{lin}}(t)-g(t)\|_2 \lesssim \sup_t \sigma_t R_0 \lesssim n^{-1/2} R_0^2\to 0 \,,
\end{align}
as $n \to \infty$.
Now we control the discrepancy on a test point $x$. Let $y$ be its true label. Similarly,
\begin{align}
\frac d {dt} \left(g^{\textrm{lin}}(t, x) - g(t, x)\right) = - \eta_0 \left(\hat\Theta_0(x, \mathcal{X})- \hat\Theta_t(x, \mathcal{X})\right) g^{\textrm{lin}}(t)
+ \eta_0 \hat\Theta_t(x, \mathcal{X})(g(t) - g^{\textrm{lin}}(t)).
\end{align}
Integrating over $[0,t]$ and taking the norm imply
\begin{align}
&\left\|g^{\textrm{lin}}(t, x) - g(t, x)\right\|_2
\\ \leq& \eta_0 \int_0^t
\left\|\hat\Theta_0(x, \mathcal{X})- \hat\Theta_s(x, \mathcal{X})\right\|_2
\| g^{\textrm{lin}}(s)\|_2 ds
+ \eta_0 \int_0^t\|\hat\Theta_s(x, \mathcal{X}) \|_2 \|g(s) - g^{\textrm{lin}}(s)\|_2ds
\\
\leq& \eta_0
\|g^{\textrm{lin}}(0)\|_2 \int_0^t \left\|\hat\Theta_0(x, \mathcal{X})- \hat\Theta_s(x, \mathcal{X})\right\|_2
e^{-\eta_0 \lambda_0 s} ds \label{eq: first-bound}
\\
& + \eta_0 \int_0^t(\|\hat\Theta_0(x, \mathcal{X})\|_2 + \|\hat\Theta_s(x, \mathcal{X}) - \hat\Theta_0(x,\mathcal{X})\|_2)
\|g(s) - g^{\textrm{lin}}(s)\|_2ds \label{eq: second bound}
\end{align}
Similarly, Lemma \ref{lemma:stability-jacobian} implies
\begin{align}
\sup_{t}\left\|\hat\Theta_0(x, \mathcal{X})- \hat\Theta_t(x, \mathcal{X})\right\|_2 \lesssim n^{-\frac 1 2} R_0
\end{align}
This gives
\begin{align}
\textrm{ }(\ref{eq: first-bound}) \lesssim n^{-\frac 1 2} R_0^2.
\end{align}
Using \eqref{eq:useful} and \eqref{eq: sigma-decay},
\begin{align}
\textrm{(\ref{eq: second bound})} \lesssim
\|\hat\Theta_0(x, \mathcal{X})\|_2 \int_0^t \left( \eta_0 s {\sigma_s} e^{-\lambda_0 \eta_0 s + \sigma_s\eta_0 s}\right) \|g^{\textrm{lin}}(0)\|_2 ds \lesssim n^{-\frac 1 2}\,.
\end{align}
\end{proof}
\section{Convergence of empirical kernel}
\label{sec kernel converge}
As in \citet{novak2018bayesian}, we can use Monte Carlo estimates of the tangent kernel (\eqref{eq:tangent-kernel}) to probe convergence to the infinite width kernel (analytically computed using Equations \ref{eq:sigma-map}, \ref{eq:tangent-kernel-recursive}).
For simplicity, we consider random inputs drawn from ${\mathcal N}(0, 1)$ with $n_0=1024$. In Figure~\ref{fig:convergence-vs-width-d3}, we observe convergence as both the width $n$ and the number of Monte Carlo samples $M$ increase.
For both NNGP and tangent kernels we observe $\|\hat\Theta^{(n)} - \Theta\|_F = \mathcal O\left(1/\sqrt{n}\right)$
and $\|\hat {\mathcal K}^{(n)} - \mathcal K\|_F = \mathcal O\left({1}/\sqrt{n}\right)$, as predicted by a CLT in \citet{daniely2016}.
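A minimal sketch of such a Monte Carlo probe is given below (our own helper names; any JAX apply function with scalar outputs per example and any initializer can be substituted, and the $1/n$ normalization of the standard parameterization is omitted):
\begin{verbatim}
# Sketch: empirical NTK of one network, and its Monte Carlo average
# over M random initializations, for comparison with the analytic kernel.
import jax
import jax.numpy as jnp

def empirical_ntk(apply_fn, params, X):
    J = jax.jacobian(lambda p: apply_fn(p, X))(params)  # pytree of Jacobians
    leaves = jax.tree_util.tree_leaves(J)
    return sum(jnp.tensordot(l, l, axes=(list(range(1, l.ndim)),
                                         list(range(1, l.ndim))))
               for l in leaves)

def mc_average_ntk(apply_fn, init_fn, key, X, M):
    keys = jax.random.split(key, M)
    return sum(empirical_ntk(apply_fn, init_fn(k), X) for k in keys) / M
\end{verbatim}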
\begin{figure}%
\centering
\includegraphics[width=\columnwidth]{figure/ntk-convergence-vs-width_neurips}
\caption{\textbf{Kernel convergence.} Kernels computed from randomly initialized {$\operatorname{ReLU}$}{} networks with one and three hidden layers converge to the corresponding analytic kernel as width $n$ and number of Monte Carlo samples $M$ increases. Colors indicate averages over different numbers of Monte Carlo samples.}
\label{fig:convergence-vs-width-d3}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{figure/ntk-convergence-d1}
\caption{\textbf{Kernel convergence.} Kernels from a randomly initialized single-hidden-layer {$\operatorname{ReLU}$}{} network converge to the analytic kernel as the number of Monte Carlo samples $M$ increases. See \sref{sec kernel converge} for additional discussion.}
\label{fig:convergence-vs-width2}
\end{figure}
\section{Details on Wide Residual Network}
\begin{table}[ht]
\caption{{\bf Wide Residual Network architecture from~\citet{zagoruyko2016wide}}. In the residual block, we follow Batch Normalization-ReLU-Conv ordering.}
\label{tab:wide_resnet_config}
\centering
\begin{tabular}{ccc}
\toprule
group name & output size & block type \\
\midrule
conv1 & 32 $\times$ 32 & [3$\times$3, \textrm{channel size}] \\
conv2 & 32 $\times$ 32 & $\begin{bmatrix} 3\times3,& \textrm{channel size}\\ 3\times3,& \textrm{channel size} \end{bmatrix}$ $\times$ N\\
conv3 & 16 $\times$ 16 & $\begin{bmatrix} 3\times3,& \textrm{channel size}\\ 3\times3,& \textrm{channel size} \end{bmatrix}$ $\times$ N\\
conv4 & 8 $\times$ 8 & $\begin{bmatrix} 3\times3,& \textrm{channel size}\\ 3\times3,& \textrm{channel size} \end{bmatrix}$ $\times$ N\\
avg-pool & 1 $\times$ 1 & [8 $\times$ 8]\\
\bottomrule
\end{tabular}
\end{table}
| {
"timestamp": "2019-12-10T02:12:09",
"yymm": "1902",
"arxiv_id": "1902.06720",
"language": "en",
"url": "https://arxiv.org/abs/1902.06720",
"abstract": "A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)",
"title": "Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9822877007780706,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7089594629078854
} |
https://arxiv.org/abs/1304.7340 | Systems of quotients of Lie triple systems | In this paper, we introduce the notion of system of quotients of Lie triple systems and investigate some properties which can be lifted from a Lie triple system to its systems of quotients. We relate the notion of Lie triple system of Martindale-like quotients with respect to a filter of ideals and the notion of system of quotients, and prove that the system of quotients of a Lie triple system is equivalent to the algebra of quotients of a Lie algebra in some sense, and these allow us to construct the maximal system of quotients for nondegenerate Lie triple systems. | \section{Introduction}
Lie triple systems arose initially in Cartan's study of Riemannian geometry; the concept was introduced by Nathan Jacobson in 1949 to study
subspaces of associative algebras closed under triple commutators $[[u,v],w]$ (cf. \cite{J1}). The role played by Lie triple systems in the theory of symmetric
spaces is parallel to that of Lie algebras in the theory of Lie groups: the tangent space at every point of a symmetric space has the structure of a Lie triple system.
The notion of ring of quotients was introduced by Utumi in 1956 (cf. \cite{U}). He proved that a ring without right zero divisors has a maximal left quotient ring
and constructed it. Inspired by \cite{U}, Siles Molina studied the algebras of quotients of Lie algebras (cf. \cite{S}). The notion of Martindale ring of quotients
was introduced by Martindale in 1969 for prime rings (cf. \cite{M2}). In \cite{GG}, E. Garc\'{i}a and M. G\'{o}mez defined Martindale-like quotients for Lie triple
systems with respect to power filters of sturdy ideals and constructed the maximal system in the nondegenerate cases.
In this paper we introduce the notion of system of quotients of Lie triple systems and prove that some properties such as semiprimeness, primeness
or nondegeneracy can be lifted from a Lie triple system to its systems of quotients. We answer the question about the relation between $S$ being
a Lie triple system of Martindale-like quotients with respect to a filter and $S$ being a system of quotients of a Lie triple system $T$. We also prove that if $S$ is
a system of quotients of a semiprime Lie triple system $T$, then
$L(S)=S\oplus\LL(S,S)$ is an algebra of quotients of $L(T)=T\oplus \LL(T,T)$ in Theorem \ref{S/T implies L(S)/L(T)}. Finally, we construct the maximal system of
quotients for a nondegenerate Lie triple system, and show that the maximal system of quotients of a finite dimensional semisimple Lie triple system over an algebraically closed field of characteristic 0 is the system itself.
Throughout this paper, we let $\F$ be a field of arbitrary
characteristic. For background material on Lie triple systems the
reader is referred to \cite{GG,GGN,J,J1,L}. Our notation and terminology
are standard as may be found in \cite{G,S,S1}.
\section{Preliminaries}
\begin{defn}{\rm \supercite{M}}
A vector space $T$ together with a trilinear map $(x, y, z)\mapsto[x,y,z]$ is called a Lie triple system (LTS) if
\begin{enumerate}[(1)]
\item $[x,x,z]=0$,
\item $[x,y,z]+[y,z,x]+[z,x,y]=0$,
\item $[u,v,[x,y,z]]=[[u,v,x],y,z]+[x,[u,v,y],z]+[x,y,[u,v,z]]$,
\end{enumerate}
for all $x,y,z,u,v\in T$.
\end{defn}
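Let us record for later use a consequence of axiom (1): applying (1) to the element $x+y$ and expanding by trilinearity gives
\begin{equation*}
0=[x+y,x+y,z]=[x,y,z]+[y,x,z],
\end{equation*}
so the triple product is antisymmetric in its first two arguments, $[x,y,z]=-[y,x,z]$.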
\begin{defn}{\rm\supercite{L}}
A subsystem of a LTS $T$ is a subspace $I$ for which $[I,I,I]\subseteq I$. An ideal of a LTS $T$ is a subspace $I$ for which $[I,T,T]\subseteq I$;
in this case $[T,I,T]=-[I,T,T]\subseteq I$ by the antisymmetry noted above, and $[T,T,I]\subseteq[I,T,T]+[T,I,T]\subseteq I$ by (2), so both are contained in $I$.
\end{defn}
If $L$ is a Lie algebra, then $L$ is a LTS relative to $[a,b,c]\equiv[[a,b],c]$. Conversely, it was shown in \cite{CF} that if $T$ is a LTS, then the standard embedding of $T$ is the $\Z_2$-graded Lie algebra $L(T) =
L_0 \oplus L_1$, $L_0$ being the $\F$-span of $\{\LL(x, y): x, y \in T \}$, denoted by $\LL(T,T)$, where $\LL(x, y)$ denotes the left
multiplication operator in $T$, $\LL(x, y)(z) := [x, y, z]$; $L_1 := T$ and where the product is given by
$$[\left(\LL(x, y), z\right), \left(\LL(u, v),w\right)]:=\left(\LL([u, v, y], x) - \LL([u, v, x], y) + \LL(z,w), [x, y, w] - [u, v, z]\right).$$
Let us observe that $L_0$ with the product induced by the one in $L(T) = L_0 \oplus L_1$ becomes a Lie algebra. Moreover, for a $\Z_2$-graded Lie algebra $L=L_0\oplus L_1$, $L_1$
has the structure of a LTS, and every LTS $T$ is the $1$-component of a $\Z_2$-graded Lie algebra, since its standard embedding $L(T)$ is a
$\Z_2$-graded Lie algebra with $L(T)_0=\LL(T,T), L(T)_1=T$.
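As a quick numerical sanity check of this correspondence, the following Python sketch (our illustration, not part of the original text) verifies axioms (1)--(3) for the triple product $[a,b,c]=[[a,b],c]$ induced by the matrix commutator; axiom (2) is the Jacobi identity, and axiom (3) expresses that $\mathrm{ad}([u,v])$ acts as a derivation of the bracket.
\begin{verbatim}
# Numerical check that [a,b,c] := [[a,b],c], with [a,b] = ab - ba,
# satisfies the three Lie triple system axioms (illustrative sketch).
import numpy as np

def br(a, b):            # Lie bracket (matrix commutator)
    return a @ b - b @ a

def triple(a, b, c):     # induced triple product [[a,b],c]
    return br(br(a, b), c)

rng = np.random.default_rng(0)
x, y, z, u, v = (rng.standard_normal((3, 3)) for _ in range(5))

# (1) [x,x,z] = 0
assert np.allclose(triple(x, x, z), 0)
# (2) [x,y,z] + [y,z,x] + [z,x,y] = 0  (Jacobi identity)
assert np.allclose(triple(x, y, z) + triple(y, z, x) + triple(z, x, y), 0)
# (3) [u,v,[x,y,z]] = [[u,v,x],y,z] + [x,[u,v,y],z] + [x,y,[u,v,z]]
lhs = triple(u, v, triple(x, y, z))
rhs = (triple(triple(u, v, x), y, z)
       + triple(x, triple(u, v, y), z)
       + triple(x, y, triple(u, v, z)))
assert np.allclose(lhs, rhs)
print("axioms (1)-(3) hold for the commutator-induced triple product")
\end{verbatim}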
\begin{defn}{\rm\supercite{S}}
A Lie algebra $L$ is said to be semiprime if $[I, I]\neq0$ for every nonzero ideal $I$ of $L$. $L$ is said to be prime if every two nonzero ideals $I, J$
of $L$ give $[I, J]\neq0$. $L$ is said to be nondegenerate if $[[L, x], x]\neq0$ for every nonzero element $x$ of $L$. For a nonempty subset $I$ of $L$ the
set $C_L(I)=\{x\in L|[x,I]=0\}$ is called the centralizer of $I$ in $L$.
\end{defn}
\begin{defn}{\rm\supercite{CF}}
A LTS $T$ is semiprime if $[T, I, I]\neq0$ for every nonzero ideal $I$ of $T$. It is prime if every two nonzero ideals $I, J$ of $T$ give $[T, I, J]\neq0$.
$T$ is called nondegenerate if $[T, x, x]\neq 0$ for every nonzero element $x$ of $T$. Let $I$ be a nonempty subset of $T$. $C_{T}(I)=\{x\in T|[x,I,T]=[T,I,x]=0\}$ is
called the centralizer of $I$ in $T$. In particular, $C_{T}(T)=\{x\in T|[x,T,T]=0\}$ is called the center of $T$, and is denoted by $C(T)$.
\end{defn}
\begin{defn}
An ideal $I$ of a LTS $T$ is said to be essential if it meets every nonzero ideal of $T$ nontrivially ($I\cap J\neq 0$ for every nonzero ideal $J$ of $T$).
\end{defn}
Clearly, if $I$ and $J$ are essential ideals, then $I\cap J$ is an
essential ideal.
The following results are analogues of the corresponding
ones for Lie algebras in \cite{S}, and their proofs are similar.
Note that we will always consider a LTS as the $1$ component
of its standard embedding.
\begin{prop}{\rm\supercite{CF}}\label{center and essential}
Let $I$ be an ideal of a LTS $T$. Then
\begin{enumerate}[(1)]
\item $C_{T}(I)$ is an ideal of $T$.
\item\label{center implies essential} If~~$C_{T}(I)=0$, then $I$ is essential. Moreover, if $T$ is semiprime, then $I\cap C_{T}(I)=0$ and $I$
is essential if and only if $C_T(I)=0$.
\end{enumerate}
\end{prop}
\begin{prop}\label{converse not true}
If $T$ is a LTS, then
\begin{center}
$T$ is nondegenerate $\Rightarrow$ $T$ is semiprime $\Rightarrow$ $C(T)=0$.
\end{center}
\end{prop}
However, the converse implications in Proposition \ref{converse not true} do not hold. Inspired by the counterexample in \cite{S}, we consider the vector space
$A=\left\{\left( \begin{array}{cc}
a & b \\
0 & 0
\end{array} \right)
| ~a,b\in\R\right\}$. Then $A$ becomes a LTS $A^{-}$ by putting
$[E,F,G]=EFG-FEG-GEF+GFE, \forall E, F, G\in A$. It is easy to prove $C(A^{-})=0$. Note that
$I^-=\left\{\left( \begin{array}{cc}
0 & b \\
0 & 0
\end{array} \right)
|~ b\in\R\right\}$ is an ideal of $A^{-}$ which satisfies $[A^{-}, I^-, I^-]=0$, hence $A^{-}$ is not semiprime.
In \cite{G}, the author constructed a semiprime degenerate Lie algebra, which we denote by $L$; regarding $L$ as a LTS $\tilde{L}$ with $[a,b,c]=[[a,b],c], \forall a,b,c\in L$, we obtain a semiprime degenerate LTS.
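The claimed properties of $A^{-}$ can be checked by a short symbolic computation; in the following sketch (our illustration), the assertion verifies $[A^{-},I^-,I^-]=0$ for generic elements, and the two printed products vanish simultaneously only for $x_1=x_2=0$, which gives $C(A^{-})=0$.
\begin{verbatim}
# Symbolic check of the counterexample A^- (illustrative sketch):
# triple product [E,F,G] = EFG - FEG - GEF + GFE on matrices
# of the form [[a, b], [0, 0]].
import sympy as sp

a, b, c, d, x1, x2 = sp.symbols('a b c d x1 x2', real=True)

def M(p, q):
    return sp.Matrix([[p, q], [0, 0]])

def triple(E, F, G):
    return (E*F*G - F*E*G - G*E*F + G*F*E).expand()

# [A^-, I^-, I^-] = 0 for generic elements, so A^- is not semiprime:
assert triple(M(a, b), M(0, c), M(0, d)) == sp.zeros(2, 2)

# Center: for X = [[x1, x2], [0, 0]] the two products below vanish
# simultaneously only if x1 = x2 = 0, hence C(A^-) = 0.
X = M(x1, x2)
print(triple(X, M(0, 1), M(1, 0)))   # = [[0, -x1], [0, 0]]
print(triple(X, M(1, 0), M(1, 0)))   # = [[0,  x2], [0, 0]]
\end{verbatim}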
\begin{re}
A LTS $T$ is prime if $C_{T}(I)=0$ for every nonzero ideal $I$ of $T$.
\end{re}
\section{Systems of quotients of a Lie triple system}
Inspired by the notion of algebra of quotients of Lie algebras in \cite{S}, we introduce the notion of system of quotients of Lie triple systems.
Suppose that $T$ and $S$ are two LTS such that $T\subseteq S$, and for $x,y\in S$ define $R(x,y):S\rightarrow S$ by $R(x,y)(z)=[z,x,y], \forall z\in S$. For
every $s\in S$, set
$$_T(s)=\F s+\left\{\sum_{i=1}^n R(x^i_{1},y^i_{1}) \cdots R(x^i_{k_i},y^i_{k_i})(s)
\ \vert\ x^i_j,y^i_j\in T \text{ with } n, k_i\in \mathbb{N} \right\}.$$
That is, $_T(s)$ is the linear span in $S$ of the elements of the
form $R(x_1,y_1)\cdots R(x_n,y_n)(s)$ and $s$, where $n\in\N$ and
$x_1, \cdots, x_n, y_1, \cdots, y_n\in T$. It is clear that $[{_T}(s), T, T]\subseteq {_T}(s)$, $[T, {_T}(s), T]\subseteq {_T}(s)$ and $[ T, T, {_T}(s)]\subseteq {_T}(s)$. In particular, in the case that $s\in T$, $_T(s)$ is the
ideal of $T$ generated by $s$. Moreover, we define
$$(T:s)=\{x\in T|~[x,T,{ _T(s)}]+[x,{ _T(s)},T]\subseteq T\}.$$
Obviously, if $s\in T$, then $(T:s)=T$. By definition, for $x\in (T:s)$, $[x,T, {_T}(s)]$ and $[x,{_T}(s), T]$ are contained in $T$, moreover, by identity (1) in the definition of Lie triple systems, $[T,x, {_T}(s)]$ and $[{_T}(s),x, T]$ are contained in $T$. Finally by (2),
$[T,{_T}(s), x] \subseteq [{_T}(s), x,T] +[x, T, {_T}(s)] \subseteq T$.
\begin{prop}\label{(T:s) is an ideal}
Let $T$ be a subsystem of a LTS $S$ and take $s\in S$. Then $(T:s)$
is an ideal of $T$. Moreover, it is maximal among the ideals $I$ of $T$ such that $[I,T,s]+[I,s,T]\subseteq T$.
\end{prop}
\begin{proof} It is clear that $(T:s)$ is a subspace of $T$. Now, for any
$x\in (T:s)$, we have \Bea
[T,{_{T}(s)},[x,T,T]]&\subseteq&[[T,{_{T}(s)},x],T,T]+[x,[T,{_{T}(s)},T],T]+[x,T,[T,{_{T}(s)},T]]\\
&\subseteq& [T,T,T]+[x,{_{T}(s)},T]+[x,T,{_{T}(s)}]\subseteq T
\Eea
and
\Bea
[[x,T,T],{_{T}(s)},T]&\subseteq&[x,T,[T,{_{T}(s)},T]]+[T,[x,T,{_{T}(s)}],T]+[T,{_{T}(s)},[x,T,T]]\\
&\subseteq&[x,T,{_{T}(s)}]+[T,T,T]+T\subseteq T.
\Eea
It follows that $[[x,T,T],T,{_{T}(s)}]\subseteq [T,{_{T}(s)},[x,T,T]]+[[x,T,T],{_{T}(s)},T]\subseteq T$, and so $[x,T,T]\subseteq (T:s)$, i.e., $(T:s)$ is an ideal of $T$.
Suppose that $I$ is an ideal of $T$ such that $[I,T,s]+[I,s,T]\subseteq T$. We will show that for all $x_1, \cdots, x_n, y_1, \cdots, y_n\in T, n\in \N$,
$$[I,T,R(x_{1},y_{1})\cdots R(x_{n},y_{n})(s)]\subseteq T, ~~~[I,R(x_{1},y_{1})\cdots R(x_{n},y_{n})(s),T]\subseteq T.$$
We prove it by induction on $n\geq1$. The base step holds since
\Bea
[I,T,[s,x_{1},y_{1}]]&\subseteq&[[I,T,s],x_{1},y_{1}]+[s,[I,T,x_{1}],y_{1}]+[s,x_{1},[I,T,y_{1}]]\\
&\subseteq &[T,T,T]+[s,I,T]+[s,T,I]\subseteq T
\Eea
and
\Bea
[I,[s,x_{1},y_{1}],T]&\subseteq&[s,x_{1},[I,y_{1},T]]+[[s,x_{1},I],y_{1},T]+[I,y_{1},[s,x_{1},T]]\\
&\subseteq & [s,T,I]+[T,T,T]+T\subseteq T.
\Eea
For the inductive step, assume $$[I,T,R(x_{2},y_{2})\cdots R(x_{n},y_{n})(s)]\subseteq T \text{ and } [I,R(x_{2},y_{2})\cdots R(x_{n},y_{n})(s),T]\subseteq T.$$
Applying the base step to the element $R(x_{2},y_{2})\cdots R(x_{n},y_{n})(s)$, we obtain $[I,T,{_{T}(s)}]+[I,{_{T}(s)},T]\subseteq T$, and so $I\subseteq (T:s)$.
\end{proof}
\begin{defn}
Let $T$ be a subsystem of a LTS $S$. Then $S$ is called a system of quotients of $T$, if given $s,s'\in S$ with $s\neq0$, there exist $x,y\in T$ such that
\beq\label{[sxy]neq0} [s,x,y]\neq0, \eeq
\beq\label{x or y in(T:s)} x\in (T:s') ~or~ y\in (T:s'). \eeq
\end{defn}
\begin{prop}
Let $T$ be a subsystem of a LTS $S$.
\begin{enumerate}[(1)]
\item If $C(T)=0$, then $T$ is a system of quotients of itself.
\item If $S$ is a system of quotients of $T$, then $C_S(T)=C(T)=0$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Given any $s,s'\in T$ with $s\ne 0$, since $s\not\in C(T)$ there exist $x,y\in T(=(T:s'))$ such that $[s,x,y]\ne 0$.
(2) Since $C(T)\subseteq C_S(T)$, we only need to prove $C_S(T)=0$. Given any $0\neq s\in S$, there exist $x,y\in T$ such that $[s,x,y]\ne 0$, so $s\not\in C_S(T)$.
\end{proof}
The above proposition shows that, for a LTS $T$, the condition $C(T)=0$ is necessary and sufficient for $T$ to have a system of quotients.
We will show that some properties of a LTS $T$ are inherited by its systems of quotients $S$; in fact, a weaker condition on $S$ suffices.
\begin{defn}
Let $T$ be a subsystem of a LTS $S$. Then $S$ is called a weak system of quotients of $T$, if for every $0\neq s\in S$, there exist $x,y\in T$ such that $0\neq[s,x,y]\in T$.
\end{defn}
\begin{re}
Every system of quotients of a LTS $T$ is a weak system of quotients.
\end{re}
\begin{proof}
Suppose that $S$ is a system of quotients of $T$. Then for all nonzero $s\in S$, there exist $x,y\in T$ such that $[s,x,y]\neq0$ and at least one of $x$ and
$y$ belongs to $(T:s)$. Thus $0\neq[s,x,y]\in T$.\end{proof}
However, the converse is false. Adapting the examples of Utumi \cite{U} and Siles Molina \cite{S}, we consider the set $\C[x]$ of all polynomials in
$x$ with complex coefficients. Let $\alpha^{\sigma}$ denote the complex conjugate of a complex number $\alpha$. Then the following ternary product makes $\C[x]$
into a LTS, denoted by $P$:
$$\left[\sum_{r=0}^{m}\alpha_rx^r, \sum_{s=0}^{n}\beta_sx^s, \sum_{t=0}^{l}\gamma_tx^t\right] = \sum_{r=0}^{m} \sum_{s=0}^{n} \sum_{t=0}^{l}
(\alpha_r\beta_s^\sigma - \alpha_r^\sigma\beta_s)(\gamma_t + \gamma_t^\sigma)x^{r+s+t}.$$
Let $I=\langle x^4\rangle$ be the ideal of $P$ generated by $x^4$ and let $S=P/I$. Let $T=\{\overline{\alpha_0}+\overline{\alpha_2x^2}+\overline{\alpha_3x^3}\}\subseteq S$.
Then $T$ is a subsystem of $S$ and $S$ is a weak system of quotients of $T$.
Indeed, $S=\overline{\C x}\dotplus T$ and $[\overline{1}, \overline{i}, \overline{1+i}]=\overline{-4i}\neq\overline{0}$. It follows that for all
$\overline{0}\neq\overline{t}\in T$, $\overline{0}\neq[\overline{t}, \overline{i}, \overline{1+i}]=\overline{-4it}\in T$. Take
$\overline{0}\neq\overline{\alpha_1x}+\overline{t}=
\overline{\alpha_0}+\overline{\alpha_1x}+\overline{\alpha_2x^2}+\overline{\alpha_3x^3}\in S$. If $\overline{\alpha_1x}=\overline{0}$, then
$\overline{0}\neq[\overline{t}, \overline{i}, \overline{1+i}]\in T$; if $\overline{\alpha_1x}\neq\overline{0}$, then $\overline{\alpha_1}\neq\overline{0}$,
and hence $\overline{0}\neq[\overline{\alpha_1x}+\overline{t},\overline{i}, \overline{(1+i)x^2}]=\overline{-4i}(\overline{\alpha_0x^2}+\overline{\alpha_1x^3})\in T$.
Suppose that $S$ is a system of quotients of $T$, then for $\overline{x}, \overline{x^3}\in S$, there must be $\overline{t}, \overline{t'}\in T$ such
that $[\overline{x^3}, \overline{t}, \overline{t'}]\neq\overline{0}$ and at least one of $\overline{t}$ and $\overline{t'}$ belongs to $(T:\overline{x})$.
Since $[\overline{x^3}, \overline{t}, \overline{t'}]\neq\overline{0}$, it follows that $\overline{t}=\overline{\alpha_0}$, $\overline{t'}=\overline{\alpha_0'}$,
and $[\overline{1}, \overline{\alpha_0}, \overline{\alpha_0'}]\neq\overline{0}$, for some $\alpha_0, \alpha_0'\in \C$. Thus
$[\overline{x}, \overline{t}, \overline{t'}]=[\overline{1}, \overline{\alpha_0}, \overline{\alpha_0'}]\overline{x}\notin T$. But
if $\overline{t}\in(T:\overline{x})$, then $[\overline{x}, \overline{t}, \overline{t'}]=-[\overline{t}, \overline{x}, \overline{t'}]\in T$;
if $\overline{t'}\in(T:\overline{x})$, then $[\overline{x}, \overline{t}, \overline{t'}]=[\overline{t'}, \overline{t}, \overline{x}]-[\overline{t'},
\overline{x}, \overline{t}]\in T$. This contradiction shows that $S$ is not a system of quotients of $T$.
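The computations with this ternary product are mechanical and can be checked symbolically; the following sketch (our illustration) encodes elements of $\C[x]/\langle x^4\rangle$ as coefficient lists and reproduces, e.g., $[\overline{1},\overline{i},\overline{1+i}]=\overline{-4i}$.
\begin{verbatim}
# Symbolic check of the ternary product on P = C[x] modulo <x^4>
# (illustrative sketch; elements are coefficient lists [c0,c1,c2,c3]).
import sympy as sp

def triple(p, q, r):
    out = [sp.Integer(0)] * 4
    for i, ai in enumerate(p):
        for j, bj in enumerate(q):
            for k, ck in enumerate(r):
                if i + j + k < 4:          # work modulo x^4
                    out[i + j + k] += ((ai * sp.conjugate(bj)
                                        - sp.conjugate(ai) * bj)
                                       * (ck + sp.conjugate(ck)))
    return [sp.simplify(c) for c in out]

one, i_, one_plus_i = [1, 0, 0, 0], [sp.I, 0, 0, 0], [1 + sp.I, 0, 0, 0]
print(triple(one, i_, one_plus_i))   # -> [-4*I, 0, 0, 0]

# [x^3, t, t'] is nonzero only through the constant terms of t and t':
x3 = [0, 0, 0, 1]
print(triple(x3, i_, one))           # -> [0, 0, 0, -4*I]
\end{verbatim}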
\begin{re}
Let $S$ be a weak system of quotients of a LTS $T$.
\begin{enumerate}[(1)]
\item If $I$ is a nonzero ideal of $S$, then $I\cap T$ is a nonzero ideal of $T$.
\item If $T$ is semiprime (prime), then $S$ is semiprime (prime).
\end{enumerate}
\end{re}
\begin{defn}
Let $T$ be a subsystem of a LTS $S$. Then $S$ is called ideally absorbed into $T$ if for every $0\neq s\in S$, there exists an ideal $I$ of $T$ with $C_{T}(I)=0$
such that $0\neq [s,I,T]+[s,T,I]\subseteq T$.
\end{defn}
\begin{lem}\label{C(T:s)=0}
Let $T$ be a subsystem of a LTS $S$ and take $s\in S$.
\begin{enumerate}[(1)]
\item If $S$ is a system of quotients of $T$, then $(T:s)$ is an essential ideal of $T$. Moreover, $C_{T}((T:s))=0$.
\item If $S$ is ideally absorbed into $T$, then $(T:s)$ is an essential ideal of $T$. Moreover, $C_{T}((T:s))=0$.
\end{enumerate}
\end{lem}
\begin{proof}
(1) Let $I$ be a nonzero ideal of $T$. Take $0\neq x\in I\subseteq T$. Since $S$ is a system of quotients of $T$, there exist $y,z\in T$ such
that $[x,y,z]\neq 0$ and at least one of $y$ and $z$ belongs to $(T:s)$. By Proposition \ref{(T:s) is an ideal}, $(T:s)$ is an ideal of $T$, and
since $I$ is also an ideal of $T$, it follows that $0\neq[x,y,z]\in (T:s)\cap I$, and hence $(T:s)$ is an essential ideal of $T$.
Now, suppose $C_{T}((T:s))\neq 0$. Since $(T:s)$ is an essential ideal of $T$, there exists $0\neq t\in C_{T}((T:s))\cap (T:s)$. Since $S$ is a
system of quotients of $T$ (applied to the pair $(t,s)$), there exist $t',t''\in T$ with $t'\in(T:s)$ or $t''\in(T:s)$ such that $[t,t',t'']\neq 0$. However, $t\in C_{T}((T:s))$ forces $[t,t',t'']=0$. This
contradiction proves that $C_{T}((T:s))=0$.
(2) Since $S$ is ideally absorbed into $T$, there is an ideal $I$ of $T$ with $C_{T}(I)=0$ such that $0\neq [s,I,T]+[s,T,I]\subseteq T$. Then
$I\subseteq (T:s)$ by the maximality of $(T:s)$ (Proposition \ref{(T:s) is an ideal}), and so $C_{T}((T:s))\subseteq C_{T}(I)=0$. Thus $(T:s)$ is an essential ideal of $T$ by Proposition
\ref{center and essential}(\ref{center implies essential}).
\end{proof}
We will show the relation between $S$ being a system of quotients of $T$ and $S$ being ideally absorbed into $T$, and the following result is needed.
\begin{lem}\label{CS(I)=0}
Let $S$ be a weak system of quotients of a LTS $T$ and let $I$ be an ideal of $T$ with $C_T(I)=0$. Then there is no nonzero $x\in S$ such that $[x,I,T]=[x,T,I]=0$.
\end{lem}
\begin{proof}
Suppose that there is $0\neq x\in S$ such that $[x,I,T]=[x,T,I]=0$. Since $S$ is a weak system of quotients of $T$, we can find $y,z\in T$ such
that $0\neq[x,y,z]\in T$. Since $C_T(I)=0$, $[[x,y,z],I,T]+[T,I,[x,y,z]]\neq0$. However, note that
\begin{equation*}
\begin{split}
[[x,y,z],I,T]&\subseteq[x,y,[z,I,T]]+[z,[x,y,I],T]+[z,I,[x,y,T]]\\
&\subseteq[x,T,I]+0+[[T,I,x],T,T]+[x,[T,I,T],T]+[x,T,[T,I,T]]\\
&\subseteq0+0+[x,I,T]+[x,T,I]=0,
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
[T,I,[x,y,z]]&\subseteq[[T,I,x],T,T]+[x,[T,I,T],T]+[x,T,[T,I,T]]\\
&\subseteq0+[x,I,T]+[x,T,I]=0.
\end{split}
\end{equation*}
This is a contradiction.
\end{proof}
\begin{thm}\label{main thm}
Let $T$ be a subsystem of a LTS $S$. Then $S$ is a system of quotients of $T$ if and only if $S$ is ideally absorbed into $T$.
\end{thm}
\begin{proof}
Suppose that $S$ is a system of quotients of $T$, and take $0\neq s\in S$. By Proposition \ref{(T:s) is an ideal} and Lemma \ref{C(T:s)=0}(1), $(T:s)$ is
an ideal of $T$ with $C_T((T:s))=0$. Then $[s,(T:s),T]+ [T,(T:s),s]\neq0$, and so $[s,(T:s),T]+[s,T,(T:s)]\neq0$. Using the definition of $(T:s)$, we obtain
$$[s,(T:s),T]+[s,T,(T:s)]\subseteq[(T:s),T,s]+[(T:s),s,T]\subseteq T.$$
That is, $S$ is ideally absorbed into $T$.
Conversely, suppose that $S$ is ideally absorbed into $T$; then $S$ is a weak system of quotients of $T$. Take $0\neq s,s'\in S$. By Lemma \ref{C(T:s)=0}(2),
we have $C_{T}((T:s'))=0$, and hence $[s,(T:s'),T]+[s,T,(T:s')]\neq0$ by Lemma \ref{CS(I)=0}. Therefore there are $x\in(T:s')$ and $y\in T$ such that
$[s,x,y]\neq0$ or $[s,y,x]\neq0$, which proves that $S$ is a system of quotients of $T$.
\end{proof}
From the proof of Theorem \ref{main thm} we conclude that $S$ is a system of quotients of $T$ if and only if $S$ is a weak system of quotients of $T$
satisfying $C_T((T:s))=0, \forall s\in S$.
We now recall the notion of Martindale-like quotients of a LTS $T$ from \cite{GG}, and investigate its connection with the notion of system of quotients
defined above.
\begin{defn}{\rm\supercite{GG}}
A filter $\mF$ on a Lie algebra is a nonempty family of nonzero ideals such that for any $I_1, I_2 \in \mF$ there exists $I\in \mF$ such that $I\subseteq I_1\cap I_2$.
Moreover, $\mF$ is a power filter if for any $I\in \mF$ there exists $K\in \mF$ such that $K\subseteq[I,I]$.
A Lie algebra $Q$ is a Lie algebra of Martindale-like quotients of a Lie algebra $L$ with respect to a power filter of sturdy ideals $\mF$ if $L\subseteq Q$ and for every
nonzero element $q\in Q$ there exists an ideal $I_q\in \mF$ such that $0\neq [q,I_q]\subseteq L.$
\end{defn}
\begin{defn}{\rm\supercite{GG}}
A filter $\mF$ on a LTS is a nonempty family of nonzero ideals such that for any $I_1, I_2 \in \mF$ there exists $I\in \mF$ such that $I\subseteq I_1\cap I_2$.
Moreover, $\mF$ is a power filter if for any $I\in \mF$ there exists $K\in \mF$ such that $K\subseteq[I,T,I]$.
Let $T$ be a LTS and let $\mF$ be a filter on $T$. A LTS $S$ is a LTS of Martindale-like quotients of $T$ with respect to $\mF$ if $S$ is $\mF$-absorbed into $T$, i.e.,
for each $0\neq s\in S$ there exists an ideal $I_s\in \mF$ such that
$$0\neq [s,I_s,T]+[s,T,I_s]\subseteq T.$$
\end{defn}
Note that if $T$ is a semiprime LTS, then the notion of being ideally absorbed (i.e., of being a system of quotients) coincides with the notion of Martindale-like
quotients with respect to the filter of all essential ideals.
We now show the relationship between a system of quotients of a LTS and an algebra of quotients of a Lie algebra.
\begin{defn}{\rm\supercite{S}}
Let $L\subseteq Q$ be an extension of Lie algebras. We say that $Q$ is an algebra of quotients of $L$ if the following equivalent conditions are satisfied:
\begin{enumerate}[(i)]
\item Given $p$ and $q$ in $Q$ with $p\neq0$, there exists $x$ in $L$ such that
$$[x,p]\neq0 \text{ and } [x, {_L(q)}]\subseteq L,$$
where $_L(q)$ is the linear span in $Q$ of $q$ and the elements of the form $\ad x_1\cdots \ad x_nq$ with $n\in\N$ and $x_1,\cdots,x_n\in L$.
\item For every nonzero element $q$ in $Q$ there exists an ideal $I$ of $L$ with $C_L(I)=0$ such that $0\neq[I,q]\subseteq L$.
\end{enumerate}
\end{defn}
\begin{prop}
Let $Q$ be an algebra of quotients of a Lie algebra $L$, and regard $Q$ and $L$ as LTS $\tilde{Q},\tilde{L}$, respectively, with $[a,b,c]=[[a,b],c]$.
Then $\tilde{Q}$ is a system of quotients of $\tilde{L}$.
\end{prop}
\begin{proof}
For any $0\neq q\in \tilde{Q}=Q$, since $Q$ is an algebra of quotients of a Lie algebra $L$, there exists a nonzero ideal $I$ of $L$ such that $C_L(I)=0$ and
$0\neq[q,I]\subseteq L$. Let $\tilde{I}$ denote the induced ideal of $\tilde{L}$ by $I$ with $[a,b,c]=[[a,b],c]$. Suppose $x\in C_{\tilde{L}}(\tilde{I})$,
then $[[x,I],L]=[x,\tilde{I},\tilde{L}]=0$, and so $[x,I]=0$. Then $x\in C_L(I)=0$, and hence $C_{\tilde{L}}(\tilde{I})=0$.
Note that $0\neq[q,I]\subseteq L$, and it follows that
$$0\neq[q,\tilde{I},\tilde{L}]=[[q,I],L]\subseteq[L,L]\subseteq L=\tilde{L}.$$
Then
$$[q,\tilde{L},\tilde{I}]\subseteq[\tilde{I},q,\tilde{L}]+[\tilde{L},\tilde{I},q]
\subseteq\tilde{L}+[[L,I],q]\subseteq\tilde{L}.$$
Therefore, $\tilde{Q}$ is a system of quotients of $\tilde{L}$.
\end{proof}
\begin{prop}\label{C(L(I))=0 iff C(I)=0}
Let $T$ be a LTS and $L(T)=T\oplus \LL(T,T)$ be its standard embedding. Let $I$ be an ideal of $T$.
\begin{enumerate}[(1)]
\item $C_{L(T)}(I\oplus \LL(I,T))\cap T=C_T(I)$.
\item $C_{L(T)}(I\oplus \LL(I,T))=0$ if and only if $C_T(I)=0$.
\end{enumerate}
\end{prop}
\begin{proof}
(1) Suppose that $x\in C_T(I)\subseteq T$; then $[x,I,T]=[T,I,x]=0$. Note that any element $\sum\LL(a,b)\in \LL(T,T)$ with $[\sum\LL(a,b),T]=0$ must itself be zero.
It follows that $\LL(x,I)=0$. The relation
$$[x,I\oplus \LL(I,T)]=\LL(x,I)\oplus[I,T,x]=0$$
shows that $x\in C_{L(T)}(I\oplus \LL(I,T))\cap T$.
Conversely, let $x\in C_{L(T)}(I\oplus \LL(I,T))\cap T$. Then $[x,I\oplus \LL(I,T)]=0$, and so $\LL(x,I)=0, [I,T,x]=0$. Thus $[x,I,T]=0$ and $[T,I,x]=-[I,T,x]=0$, and it follows that $x\in C_T(I)$.
(2) Only sufficiency needs proof. Suppose $C_T(I)=0$; we prove that $C_{L(T)}(I\oplus \LL(I,T))=0$. By (1), $C_{L(T)}(I\oplus \LL(I,T))\cap T=0$. Then for any $\LL(y,z)\in C_{L(T)}(I\oplus \LL(I,T))$,
the equation $[\LL(y,z), I\oplus \LL(I,T)]=0$ implies that $[y,z,I]=0$ and $[\LL(y,z),\LL(I,T)]=0$. Thus, $\LL([y,z,T],I)\subseteq [\LL(y,z),\LL(T,I)]+\LL([y,z,I],T)=0,$ and so $[[y,z,T],I,T]=0$.
Notice that
\begin{equation*}
\begin{split}
[T,I,[y,z,T]]\subseteq &[y,z,[T,I,T]]+[[y,z,T],I,T]+[T,[y,z,I],T]\\
\subseteq &[y,z,I]+0+0=0.
\end{split}
\end{equation*}
Then $[y,z,T]\subseteq C_T(I)=0$, and this proves $\LL(y,z)=0$. Therefore, $C_{L(T)}(I\oplus \LL(I,T))=0$.
\end{proof}
\begin{prop}{\rm\supercite{GG}}\label{S/T-F iff L(S)/L(T)-L(F)}
Let $T$ be a subsystem of a LTS $S$ and let $L(T)$ and $L(S)$ be standard embeddings of $T$ and $S$, respectively. Then $S$ is a LTS of Martindale-like
quotients of $T$ with respect to a power filter $\mF$ of ideals with zero centralizer on $T$ if and only if $L(S)$ is a Lie algebra of quotients of $L(T)$
with respect to the power filter $L(\mF)=\{I\oplus \LL(I,T)| I\in \mF\}$ of ideals with zero centralizer on $L(T)$.
\end{prop}
\begin{prop}{\rm\supercite{GGN}}\label{T is nondege. iff L(T) is nondege.}
A LTS $T$ is nondegenerate if and only if $L(T)$ is a nondegenerate
Lie algebra.
\end{prop}
\begin{prop}{\rm\supercite{S}}\label{L nondege. implies Q nondege.}
If $Q$ is an algebra of quotients of a nondegenerate Lie algebra $L$ and the characteristic of the base field is not 2 or 3, then $Q$ is nondegenerate.
\end{prop}
\begin{thm}\label{S/T implies L(S)/L(T)}
Let $S$ be a system of quotients of a semiprime LTS $T$. Then $L(S)$ is an algebra of quotients of $L(T)$. Moreover, if $T$ is nondegenerate and $\ch{\bf F}\neq2,3$,
then $S$ is also nondegenerate.
\end{thm}
\begin{proof}
Since $T$ is semiprime, the set $\mF$ of all ideals of $T$ with zero centralizer is a power filter on $T$ such that $S$ is a LTS of Martindale-like quotients of $T$
with respect to $\mF$. It follows from Proposition \ref{S/T-F iff L(S)/L(T)-L(F)} that $L(S)$ is a Lie algebra of Martindale-like quotients of $L(T)$ with respect to $L(\mF)$.
Then for each $0\neq x\in L(S)$, there exists an ideal $I\oplus \LL(I,T)\in L(\mF)$ such that $0\neq [x, I\oplus \LL(I,T)]\subseteq L(T)$. Note that $C_{L(T)}(I\oplus \LL(I,T))=0$, since
$C_T(I)=0$ and by Proposition \ref{C(L(I))=0 iff C(I)=0}. Therefore $L(S)$ is an algebra of quotients of $L(T)$.
Suppose that $T$ is nondegenerate. Then by Proposition \ref{T is nondege. iff L(T) is nondege.}, $L(T)$ is nondegenerate, and so $L(S)$ is nondegenerate by
Proposition \ref{L nondege. implies Q nondege.}. Therefore, $S$ is nondegenerate again by Proposition \ref{T is nondege. iff L(T) is nondege.}.
\end{proof}
At the end of this paper, we will construct the maximal system of quotients for nondegenerate Lie triple systems.
In \cite{GGN}, the authors constructed the maximal Lie algebra of quotients $Q$ of a Lie algebra $L$ with respect to a filter $\mF$ and showed that if $L$ is
a nondegenerate Lie algebra with finite $\Z$-grading and $\mF$ is a power filter of ideals with zero centralizer on $L$, then $Q$ has a finite $\Z$-grading.
Moreover, $L$ and $Q$ have the same support. It is also proved that if $S$ is another Lie algebra of quotients of $L$ with respect to $\mF$, then there is a Lie monomorphism
of $S$ into $Q$ which is the identity on $L$.
Let $T$ be a nondegenerate LTS and $\mF$ be the power filter of $T$ consisting of all ideals with zero centralizer. Then $L(T)$ is a nondegenerate Lie algebra by
Proposition \ref{T is nondege. iff L(T) is nondege.} and $L(\mF)$ is the power filter of $L(T)$ consisting of ideals with zero centralizer by Proposition
\ref{S/T-F iff L(S)/L(T)-L(F)}. Suppose that the $\Z_2$-graded Lie algebra $Q=Q_0\oplus Q_1$ is the maximal Lie algebra of quotients of $L(T)$ with respect
to $L(\mF)$. Remember that $Q_1$ is a LTS by putting $[q,q',q'']=[[q,q'],q''], \forall q,q',q''\in Q_1$.
For any nonzero $q\in Q_1$, there exists $I\oplus \LL(I,T)\in L(\mF)$ such that $0\neq[q,I\oplus \LL(I,T)]\subseteq T\oplus \LL(T,T)$. Then $\LL(q,I)\subseteq \LL(T,T)$ and $[T,I,q]\subseteq T$,
and so $[q,I,T]\subseteq [T,T,T]\subseteq T$, and it follows that $[q,I,T]+[q,T,I]\subseteq T$. Since $C_{L(T)}(I\oplus \LL(I,T))=0$, $C_T(I)=0$ by Proposition \ref{C(L(I))=0
iff C(I)=0}, which implies $[q,I,T]+[q,T,I]\neq0$. Therefore, $Q_1$ is a system of quotients of $T$.
Moreover, $Q_1$ is maximal in the sense that if $S$ is another system of quotients of $T$, then there exists a Lie monomorphism $\phi: S\rightarrow Q_1$ which is
the identity on $T$. Now assume that $S$ is a system of quotients of $T$. Then $L(S)$ is an algebra of quotients of $L(T)$ by Proposition \ref{S/T implies L(S)/L(T)}.
Note that $Q=Q_0\oplus Q_1$ is the maximal Lie algebra of quotients of $L(T)$ with respect to $L(\mF)$. Then there exists a Lie monomorphism $\phi: L(S)\rightarrow Q$
which is the identity on $L(T)$, and hence $\phi|_S: S\rightarrow Q_1$ is a Lie monomorphism which is the identity on $T$. This implies that $Q_1$ is the maximal
system of quotients of $T$.
\begin{lem}{\rm\supercite{S}}\label{semisimple Lie alg}
If $L$ is a finite dimensional semisimple Lie algebra over an algebraically closed field of characteristic 0, i.e., the solvable radical of $L$ is zero, then the maximal Lie algebra of quotients of $L$
is $L$.
\end{lem}
\begin{prop}
Let $T$ be a finite dimensional semisimple LTS over an algebraically closed field of characteristic 0 and $S$ be the maximal system of quotients of $T$. Then $S=T$.
\end{prop}
\begin{proof}
Since $T$ is semisimple, its solvable radical satisfies $R(T)=0$; hence $R(L(T))=R(T)\oplus\LL(R(T),T)=0$ (cf. \cite{L}), which implies that $L(T)$ is semisimple. Hence the maximal
Lie algebra of quotients of $L(T)$ is itself by Lemma \ref{semisimple Lie alg}, then the maximal system of quotients of $T$ is $T$.
\end{proof}
{\bf ACKNOWLEDGMENTS}\quad The authors would like to thank the referee for valuable comments and
suggestions on this article.
The work was supported by NNSF of China (No. 11171055, No. 11226054), NSF of Jilin Province (No. 201115006), Scientific Research Foundation for
Returned Scholars Ministry of Education of China and the Fundamental
Research Funds for the Central Universities (No. 12SSXT139).
| {
"timestamp": "2013-04-30T02:00:39",
"yymm": "1304",
"arxiv_id": "1304.7340",
"language": "en",
"url": "https://arxiv.org/abs/1304.7340",
"abstract": "In this paper, we introduce the notion of system of quotients of Lie triple systems and investigate some properties which can be lifted from a Lie triple system to its systems of quotients. We relate the notion of Lie triple system of Martindale-like quotients with respect to a filter of ideals and the notion of system of quotients, and prove that the system of quotients of a Lie triple system is equivalent to the algebra of quotients of a Lie algebra in some sense, and these allow us to construct the maximal system of quotients for nondegenerate Lie triple systems.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Systems of quotients of Lie triple systems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982287698185481,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7089594610367015
} |
https://arxiv.org/abs/2103.09168 | Geometric invariance of determining and resonating centers: Odd- and any-number limitations of Pyragas control | In the spirit of the well-known odd-number limitation, we study failure of Pyragas control of periodic orbits and equilibria. Addressing the periodic orbits first, we derive a fundamental observation on the invariance of the geometric multiplicity of the trivial Floquet multiplier. This observation leads to a clear and unifying understanding of the odd-number limitation, both in the autonomous and the non-autonomous setting. Since the presence of the trivial Floquet multiplier governs the possibility of successful stabilization, we refer to this multiplier as the determining center. The geometric invariance of the determining center also leads to a necessary condition on the gain matrix for the control to be successful. In particular, we exclude scalar gains. Application of Pyragas control on equilibria does not only imply a geometric invariance of the determining center, but surprisingly also on centers which resonate with the time delay. Consequently, we formulate odd- and any-number limitations both for real eigenvalues together with arbitrary time delay as well as for complex conjugated eigenvalue pairs together with a resonating time delay. The very general nature of our results allows for various applications. | \section{\label{sec:introduction}Introduction}
In a dynamical system given by the ordinary differential equation $\dot{x}(t)=f(x(t))$, $x \in \mathbb{R}^{N}$, unstable periodic orbits can be stabilized using additive control terms of the form
\begin{equation}\label{pyragas}
K\big(x(t)-x(t-T)\big).
\end{equation}
Here $T>0$ is the time delay, and $K \in \mathbb{R}^{N\times N}$ is the weight of the control term, which we call the \emph{gain matrix}.
Such control terms were first introduced by Kestutis Pyragas in his work from 1992 \cite{PYR92}.
The control term by Pyragas uses the difference between the delayed state $x(t-T)$ and the current state $x(t)$ of the system.
Frequently, the time delay $T$ is chosen to be an integer multiple of the period of the periodic orbit $x_\ast(t)$ of the uncontrolled system.
In this case, the control vanishes on the orbit itself, and $x_\ast(t)$ is also a solution of the controlled system.
We call such a control term \emph{noninvasive} because it does not change the periodic orbit itself, but only affects its stability properties.
In the case of equilibria, the time delay $T$ can be chosen arbitrarily to achieve noninvasiveness.
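The following minimal sketch (our illustration, not part of the original text) demonstrates noninvasiveness numerically: for the Stuart--Landau normal form $\dot{z}=(0.5+i)z-|z|^{2}z$, the periodic orbit $z_\ast(t)=\sqrt{0.5}\,e^{it}$ has period $T=2\pi$, and when the history is initialized on this orbit, the Pyragas term remains at the level of the discretization error.
\begin{verbatim}
# Illustrative sketch (not from the paper): the Pyragas term
# K (x(t) - x(t-T)) is noninvasive on a T-periodic orbit.  Test case:
# Stuart-Landau normal form dz/dt = (0.5 + 1j) z - |z|^2 z, with
# periodic orbit z(t) = sqrt(0.5) exp(1j t) of period T = 2 pi.
import numpy as np

T = 2 * np.pi
n_T = 2000                         # time steps per delay interval
dt = T / n_T
K = -0.3                           # illustrative scalar gain

# history buffer initialized exactly on the periodic orbit
z = list(np.sqrt(0.5) * np.exp(1j * dt * np.arange(-n_T, 1)))
max_ctrl = 0.0
for _ in range(3 * n_T):           # integrate over three periods
    zn, z_delayed = z[-1], z[-1 - n_T]
    ctrl = K * (zn - z_delayed)
    max_ctrl = max(max_ctrl, abs(ctrl))
    z.append(zn + dt * ((0.5 + 1j) * zn - abs(zn) ** 2 * zn + ctrl))

print("max |control term| over three periods:", max_ctrl)  # ~ O(dt)
print("final |z| (orbit radius sqrt(0.5) = 0.7071):", abs(z[-1]))
\end{verbatim}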
The main advantage of the Pyragas control scheme, also for experimental realizations, is its model-independence; no expensive calculations are needed for its implementation, and the only information needed is the period of the targeted periodic orbit.
As a consequence, Pyragas control has many different successful applications, e.g., in atomic force microscopes \cite{YAM09}, unmanned helicopters \cite{OMA12}, complex robots \cite{STE10}, semiconductor lasers \cite{SCH06,SCH11a}, and the enzymatic peroxidase-oxidase reaction \cite{LEK95}, among others.
The success of Pyragas control has been verified for a large number of specific theoretical models as well, including spiral break-up in cardiac tissues \cite{RAP99}, flow alignment in sheared liquid
crystals \cite{STR13}, near Hopf bifurcation \cite{FIE07} and unstable foci \cite{HOE05,YAN06}, synchrony in networks of coupled Stuart-Landau oscillators \cite{SCH13,SCH16}, delay equations \cite{FIE15,FIE17}, in quantum systems \cite{HEI15,DRO19}, the Duffing oscillator \cite{FIE20} and Turing patterns \cite{KUS18}.
General conditions on the success or failure of Pyragas control are hard to obtain because the time delay renders the dynamical system infinite-dimensional.
In fact, there has been serious confusion in the literature on the so-called \emph{``odd-number limitation''}, which was correctly proven for non-autonomous systems in 1997 \cite{NAK97}; see also Corollary \ref{thm: nakajima} in Section \ref{sec: odd number}. The odd-number limitation states that in non-autonomous periodic ordinary differential equations, hyperbolic periodic orbits with an odd number of real Floquet multipliers larger than one cannot be stabilized using Pyragas control.
In a footnote, Nakajima formulated the conjecture that the odd-number limitation also holds in the autonomous case and this was subsequently often wrongly cited as a proven fact.
However, in 2007 Fiedler et al.~found a counter-example: It is possible to stabilize a periodic orbit near a subcritical Hopf bifurcation with one real positive Floquet multiplier \cite{FIE07}.
A correct version of the odd-number limitation for autonomous equations was subsequently presented by Hooton and Amann in 2012 \cite{HOO12}.
In the present paper we clarify the confusion on the limitations of Pyragas control.
In contrast to previous works, we focus on the \emph{geometric}, rather than the algebraic, multiplicity of Floquet multipliers (for periodic orbits, see Section II A for a precise definition) and eigenvalues (for equilibria).
That is, our interest lies in the dimension of the eigenspace and not in the number of solutions of the characteristic equation.
Our main results, Theorems \ref{lem: preservation} and \ref{lem: preservation eq}, show that the geometric multiplicity of the Floquet multiplier 1, or of the eigenvalue zero, is invariant under control. Whether or not such a Floquet multiplier 1, or eigenvalue 0, is present in the uncontrolled system decides whether the periodic orbit can in principle be stabilized via Pyragas control. Therefore we refer to the geometric eigenspace of the Floquet multiplier 1, or of the eigenvalue 0, as the \emph{determining center}.
From the main results Theorem \ref{lem: preservation} and Theorem \ref{lem: preservation eq} we obtain several corollaries, among others the odd-number limitation and an any-number limitation for commuting control matrices.
Moreover, and rather surprisingly, for steady states, the geometric multiplicities of the \emph{resonating centers} $2 \pi n i/T$, with $n \in \mathbb{Z}$ and $T$ the time delay, are also preserved under control. As a corollary, we obtain that not only real eigenvalues are impossible to stabilize using commuting gain matrices, but also those of the form $\lambda\pm 2 \pi i n /T$.
All results are of a qualitative nature, and apply to any ordinary differential equation (ODE) subject to Pyragas control. They do not give any quantitative restrictions on the Floquet multipliers \cite{FIE08,JUS99} or on the time delay \cite{YAN06}.
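To give these limitations a concrete flavor already here, consider the scalar caricature $\dot{y}(t)=\lambda y(t)+k\left[y(t)-y(t-T)\right]$ with $\lambda>0$ and scalar gain $k$; after diagonalization, the same equation also governs a resonant eigenvalue pair $\lambda\pm 2\pi i n/T$, because $e^{-i\omega T}=e^{-2\pi i n}=1$ removes the oscillation from the delayed term. The ansatz $y(t)=e^{zt}$ yields the real characteristic equation $z=\lambda+k(1-e^{-zT})$; writing $h(z)=\lambda+k(1-e^{-zT})-z$, we have $h(0)=\lambda>0$ and $h(z)\to-\infty$ as $z\to\infty$, so an unstable root $z_\ast>0$ persists for \emph{every} scalar gain $k$. The following sketch (our illustration, not part of the paper) confirms this numerically.
\begin{verbatim}
# Illustrative sketch (not from the paper): for the scalar caricature
#   dy/dt = lam * y(t) + k * (y(t) - y(t-T)),   lam > 0,
# the real characteristic equation z = lam + k*(1 - exp(-z*T)) has a
# positive root for every scalar gain k ("any-number" obstruction).
import numpy as np
from scipy.optimize import brentq

lam, T = 0.5, 2 * np.pi

def h(z, k):
    return lam + k * (1.0 - np.exp(-z * T)) - z

for k in np.linspace(-5.0, 5.0, 11):
    # h(0) = lam > 0 and h(z) <= lam + |k| - z for z > 0, so the
    # interval [0, lam + |k| + 1] brackets a positive root.
    z_star = brentq(h, 0.0, lam + abs(k) + 1.0, args=(k,))
    print(f"k = {k:+.1f}:  unstable characteristic root z* = {z_star:.4f}")
\end{verbatim}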
This paper is organized as follows:
In Section II, we investigate Pyragas control of periodic orbits.
We formulate the fundamental principle of the invariance of the geometric multiplicity of the trivial, yet determining Floquet multiplier 1.
As corollaries, we prove the odd-number limitation as well as any-number limitations for commuting gain matrices, both with real and complex spectrum.
In Section III, we study Pyragas control of equilibria.
The main invariance principle here concerns the determining as well as resonating centers.
From this, we derive an odd-number limitation for equilibria as well as any-number limitations for commuting gain matrices with either real or complex spectrum. We conclude the paper with a discussion on applications and generalizations in Section IV.
\section{Geometric invariance of the determining center for periodic orbits}
In this section, we focus on feedback stabilization of periodic orbits. The main result concerns invariance of the geometric multiplicity of the Floquet multiplier $1$ under Pyragas control (Theorem \ref{lem: preservation}). From this invariance we deduce several limitations on feedback stabilization. Since the presence of the Floquet multiplier $1$ determines whether a periodic solution can in principle be stabilized, we refer to it as the \emph{determining center}.
The geometric invariance of the determining center is all the more striking since we compare a center in a finite-dimensional system, given by eigenvectors, to a center in an infinite-dimensional system, given by eigenfunctions.
Still, there is a one-to-one correspondence between these eigenvectors and eigenfunctions,
which we explain in the following.
\subsection{Main result concerning periodic orbits } \label{sec:invariance periodic}
Throughout we consider the ODE
\begin{align} \label{eq: time periodic ode}
\dot{x}(t) = f(x(t), t), \qquad t \geq 0,
\end{align}
with $f: \mathbb{R}^N \times \mathbb{R} \to \mathbb{R}^N$ a $C^1$-function.
We make the following, very general, assumptions on system \eqref{eq: time periodic ode}:
\begin{assumption} \hfill \label{assumption}
\begin{enumerate}
\item The function $f$ is periodic with (not necessarily minimal) period $T > 0$ in its time-argument, i.e., $f(x, t+T) = f(x, t)$ for all $x \in \mathbb{R}^{N}$ and $t \in \mathbb{R}$;
\item System \eqref{eq: time periodic ode} has a periodic solution $x_\ast(t)$ with (again not necessarily minimal) period $T$.
\end{enumerate}
\end{assumption}
Note that besides periodic non-autonomous ODE, Assumption 1 also includes autonomous ODE which possess a periodic orbit of period $T$. In this case, Assumption 1.1 is trivially fulfilled. Indeed, our results in the rest of the section give a unifying approach to both the autonomous and the non-autonomous case.
Linearizing around the periodic orbit $x_\ast(t)$, we obtain the system
\begin{equation} \label{eq: linvareq ode 1}
\dot{y}(t) = \partial_x f (x_\ast(t), t) y(t).
\end{equation}
By $Y_0(t) \in \mathbb{R}^{N \times N}, \ t \geq 0$ we denote the fundamental solution of the linear matrix differential equation:
\begin{align}
\begin{cases}
\frac{d}{dt} Y_0(t) &= \partial_x f(x_\ast(t), t) Y_0(t), \qquad t > 0; \\
Y_0(0) &= I,
\end{cases}
\end{align}
where $I: \mathbb{R}^N \to \mathbb{R}^N$ denotes the identity matrix. If $y_0 \in \mathbb{R}^N$, then $y(t) : = Y_0(t) y_0$ solves \eqref{eq: linvareq ode 1} with initial condition $y(0)= y_0$, and hence the fundamental solution can be viewed as `summarizing' the solution information to \eqref{eq: linvareq ode 1}.
We refer to the matrix $Y_0(T): \mathbb{R}^N \to \mathbb{R}^N$ as the \textbf{monodromy operator} of system \eqref{eq: linvareq ode 1}. We define the \textbf{Floquet multipliers} of system \eqref{eq: linvareq ode 1} as the eigenvalues of the monodromy operator $Y_0(T)$.
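For a concrete system, the monodromy matrix, and with it the Floquet multipliers, can be computed by integrating the matrix initial value problem above over one period. The following sketch (our illustration; the Mathieu-type coefficients are chosen arbitrarily) does so in Python.
\begin{verbatim}
# Illustrative sketch: Floquet multipliers as eigenvalues of the
# monodromy matrix Y_0(T), obtained by integrating dY/dt = A(t) Y,
# Y(0) = I, over one period (Mathieu-type example).
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi                # period of t -> A(t)
delta, eps = 1.0, 0.3        # illustrative Mathieu parameters

def A(t):
    return np.array([[0.0, 1.0],
                     [-(delta + eps * np.cos(t)), 0.0]])

def rhs(t, y):               # flatten the 2x2 matrix ODE to a vector ODE
    return (A(t) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(),
                rtol=1e-10, atol=1e-12)
Y0T = sol.y[:, -1].reshape(2, 2)          # monodromy matrix Y_0(T)
print("Floquet multipliers:", np.linalg.eigvals(Y0T))
# Liouville: det Y_0(T) = exp(int tr A dt) = 1 here, since tr A(t) = 0
print("product of multipliers:", np.linalg.det(Y0T))
\end{verbatim}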
Let $\mu \in \mathbb{C}$ be an eigenvalue of $Y_0(T)$, i.e., $\mu$ is a Floquet multiplier of \eqref{eq: linvareq ode 1}. Then the linear space
\begin{equation} \label{eq: geometric mp}
\mathcal{N}\left(\mu I - Y_0(T) \right) : = \{ x \in \mathbb{C}^N \mid \mu x - Y_0(T) x = 0 \}
\end{equation}
has dimension at least $1$; and moreover, $\mu$ satisfies
\begin{equation} \label{eq: ce ode}
0 = \det \left( \mu I - Y_0(T) \right).
\end{equation}
The \textbf{geometric multiplicity} of $\mu$ is defined as the dimension of the linear space \eqref{eq: geometric mp} and the \textbf{algebraic multiplicity} is defined as the order of $\mu$ as a zero of the function $z \mapsto \det\left( z I - Y_0(T) \right)$. The geometric and algebraic multiplicity of a Floquet multiplier can in general be different; however, the algebraic multiplicity is always larger than or equal to the geometric multiplicity. In Theorem \ref{lem: preservation}, we show that the geometric multiplicity of the Floquet multiplier $1$ plays a fundamental role in stabilization. Therefore, we will refer to the geometric eigenspace of the Floquet multiplier $1$, i.e., to the space
\begin{equation}
\mathcal{N}\left(1 - Y_0(T) \right)
\end{equation}
as the \textbf{determining center}. In this definition, we do \emph{not} assume that the space $\mathcal{N}\left(1 - Y_0(T) \right)$ contains more than the zero vector (i.e. we do \emph{not} assume that $1$ is a Floquet multiplier). If the space $\mathcal{N}\left(1 - Y_0(T) \right)$ is indeed non-trivial, it is a subspace of the center \emph{eigenspace}, but we refer to it simply as a `center' for brevity of notation.
We now apply Pyragas control to the system \eqref{eq: time periodic ode} and write the controlled system as
\begin{equation} \label{eq: time periodic pyragas}
\dot{x}(t) = f(x(t), t) + K \left[x(t) - x(t-T) \right]
\end{equation}
with nonzero gain matrix $K \in \mathbb{R}^{N \times N}$. For $t \geq 0$, denote by
\begin{equation}
Y_1(t): C \left([-T, 0], \mathbb{R}^N\right) \to C \left([-T, 0], \mathbb{R}^N\right)
\end{equation}
the fundamental solution of the linearized equation
\begin{equation} \label{eq: linvareq dde}
\dot{y}(t) = \partial_x f(x_\ast(t), t) y(t)+ K \left[y(t) - y(t-T) \right].
\end{equation}
The map
\begin{equation}
Y_1(T): C \left([-T, 0], \mathbb{R}^N \right) \to C \left([-T, 0], \mathbb{R}^N \right) \end{equation}
is a bounded linear operator, which is also compact (see appendix for a proof). Compactness of the operator implies that all non-zero spectral points are eigenvalues of finite algebraic multiplicity. In particular, if $\mu \neq 0$ is an eigenvalue of $Y_1(T)$, then the linear space
\begin{equation} \label{eq: geom mp dde}
\mathcal{N}\left( \mu I - Y_1(T) \right) : = \{ \phi \in C \left([-T, 0], \mathbb{C}^N \right) \mid \mu \phi - Y_1(T) \phi = 0 \}
\end{equation}
is finite dimensional and the geometric multiplicity of $\mu$ equals the dimension of the space \eqref{eq: geom mp dde}. The novelty of our approach is that we initially focus on the geometric, rather than the algebraic, multiplicity of the Floquet multipliers. We show that the geometric multiplicity of the Floquet multiplier $1$ is preserved under control. This then serves as a determining principle which decides whether the targeted periodic solution can, in principle, be stabilized.
The following main result compares the geometric multiplicity of the eigenvalue $1$ of $Y_0(T)$ with the geometric multiplicity of the eigenvalue $1$ of $Y_1(T)$. By convention, if $1$ is \emph{not} an eigenvalue of $Y_0(T)$ (resp., $Y_1(T)$), we say that the geometric multiplicity of the eigenvalue $1$ of $Y_0(T)$ (resp., $Y_1(T)$) is zero.
\begin{theorem}[Geometric invariance of the determining center under Pyragas control] \label{lem: preservation}
The geometric multiplicity of the Floquet multiplier $1$ is preserved under Pyragas control.
That is, for any gain matrix $K \in \mathbb{R}^{N\times N}$, the geometric multiplicity of the eigenvalue 1 of $Y_0(T)$ without control is equal to the geometric multiplicity of the eigenvalue 1 of $Y_1(T)$ with control.
\end{theorem}
\begin{proof}
We show that there is a one-to-one correspondence between eigenvectors to the eigenvalue $1$ for $Y_0(T)$ and eigenfunctions to the eigenvalue $1$ of $Y_1(T)$. The statement of the claim then follows.
On the one hand, the vector $y_0 \in \mathbb{C}^{N} \backslash \{0 \}$ is an eigenvector of $Y_0(T)$ with eigenvalue $1 \in \mathbb{C}$ if and only if \eqref{eq: linvareq ode 1} has a solution $y(t)$ that satisfies
\begin{align} \label{eq: eigenvector ode}
\begin{cases}
y(t+T) = y(t), \quad t \in \mathbb{R} \\
y(0) = y_0;
\end{cases}
\end{align}
see also Appendix \ref{sec:appendix ode}. On the other hand, $\phi \in C \left([-T,0], \mathbb{C}^N \right) \backslash \{0 \}$ is an eigenfunction of $Y_1(T)$ with eigenvalue $1 \in \mathbb{C}$ if and only if \eqref{eq: linvareq dde} has a solution $y(t)$ that satisfies
\begin{align} \label{eq: eigenvector dde}
\begin{cases}
y(t+T) = y(t), \quad t \geq 0 \\
y(t) = \phi(t), \quad t \in [-T, 0];
\end{cases}
\end{align}
see also Appendix \ref{sec:appendix dde}. But since the control term $K \left[y(t) - y(t-T) \right]$ vanishes on $T$-periodic functions, \eqref{eq: linvareq dde} has a solution of the form \eqref{eq: eigenvector dde} \emph{if and only if} \eqref{eq: linvareq ode 1} has a solution of the form \eqref{eq: eigenvector ode}.
We conclude that there is a one-to-one correspondence between eigenvectors of the eigenvalue $1$ of $Y_0(T)$ and eigenfunctions of the eigenvalue 1 of $Y_1(T)$, which proves the claim.
\end{proof}
\subsection{Corollary: The odd-number limitation} \label{sec: odd number}
Strictly speaking, our main result does not make any statement about stabilization via Pyragas control, only addressing the seemingly unimportant center.
However, we can use it to deduce a number of restrictions, that is, necessary conditions, on Pyragas control.
We start with the long-known odd-number limitation, which follows easily and can now be fully understood in this context of geometric multiplicities.
Previous statements of the odd-number limitation are formulated for either autonomous or non-autonomous systems. In contrast, we formulate the odd-number limitation for \emph{non-degenerate periodic orbits}, i.e., periodic orbits that do not have a Floquet multiplier $1$. By shifting the focus from (non)-autonomous systems to (non)-degenerate periodic orbits, we clarify the confusion in the literature regarding the odd-number limitation.
For non-degenerate periodic orbits, the absence of a Floquet multiplier $1$ in the uncontrolled system forbids stabilization: the point $1$ is the only gate through which a real Floquet multiplier could enter the unit circle, and this gate is closed by the invariance of the determining center.
In autonomous systems, every periodic orbit (provided it is not an equilibrium) is degenerate, since translation along the periodic orbit leads to a trivial Floquet multiplier. A clever choice of the gain matrix can allow for a change of the algebraic multiplicity while leaving the geometric multiplicity invariant, thus achieving stabilization.
As a technical prerequisite, we first state and prove the odd-number limitation on the linear level. This has the advantage that, when block-diagonalizing an ODE, the linear statement can be applied to the individual blocks.
\begin{prop} \label{prop: odd number technical}
Consider the linear system
\begin{equation} \label{eq: A}
\dot{y}(t) = A(t) y(t)
\end{equation}
with $y(t) \in \mathbb{R}^N$ and $A(t) \in \mathbb{R}^{N \times N}$. Assume that there exists a time $T > 0$ such that $A(t+T) = A(t)$ for all $t \in \mathbb{R}$. Moreover, assume that system \eqref{eq: A} does not have a Floquet multiplier equal to 1 and that it possesses an odd number (counting algebraic multiplicities) of real Floquet multipliers strictly larger than 1.
Then, for all gain matrices $K \in \mathbb{R}^{N \times N}$, the controlled system
\begin{equation} \label{eq: A control}
\dot{y}(t) = A(t) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has at least one real Floquet multiplier larger than 1.
\end{prop}
\begin{proof}
We first give an intuitive argument: By assumption, the linearized system \eqref{eq: A} has an odd number of real Floquet multipliers on the line $(1, \infty)$.
But, since we use a real control matrix $K \in \mathbb{R}^{N \times N}$, non-real Floquet multipliers of the controlled system appear in complex conjugated pairs. Therefore, the parity of the number of eigenvalues on the line $(1, \infty)$ cannot change by eigenvalues leaving the real axis, so at least one Floquet multiplier stays on the line $(1, \infty)$. This Floquet multiplier can only move into the unit circle by crossing the point $1 \in \mathbb{C}$. However, this is forbidden by the invariance of the determining center (Theorem \ref{lem: preservation}).
To make this argument precise, fix a gain matrix $K \in \mathbb{R}^{N \times N}$ and introduce the homotopy parameter $\alpha \in [0,1]$:
\begin{equation}\label{eq: parameter}
\dot{y}(t) = A(t) y(t) + \alpha K \left[y(t) - y(t-T) \right].
\end{equation}
The monodromy operator $Y_\alpha(T)$ for \eqref{eq: parameter} is compact for all $\alpha \in [0,1]$, and the map $\alpha \mapsto Y_\alpha(T)$ is continuous; therefore the Floquet multipliers of \eqref{eq: parameter} (i.e., the eigenvalues of $Y_\alpha(T)$) depend continuously on $\alpha$ (in the sense of \cite{Kato95}).
We first show that the number of Floquet multipliers outside the unit circle cannot change by multipliers ``coming from infinity''. Indeed, let $\mu$ be an eigenvalue of $Y_\alpha(T)$ and let $\phi$ be such that
\begin{equation}
Y_\alpha(T) \phi = \mu \phi, \qquad \norm{\phi} = 1,
\end{equation}
i.e., $\phi$ is an eigenfunction with norm equal to $1$. Then
\begin{equation}
\left| \mu \right| = \norm{\mu \phi} = \norm{Y_\alpha(T) \phi} \leq \norm{Y_\alpha(T)}.
\end{equation}
Thus, if $\mu$ is an eigenvalue of $Y_\alpha(T)$, then $\left| \mu \right| \leq \norm{Y_\alpha(T)}$, i.e., the norm of $\mu$ can be bounded by the operator norm of $Y_\alpha(T)$. For every $\alpha \in [0,1]$, the operator $Y_\alpha(T)$ is bounded and the map $ \alpha \mapsto Y_\alpha(T)$ is continuous. Therefore, there exists a $0 < C < \infty$ such that
\begin{equation}
\sup_{\alpha \in [0,1]} \norm{Y_\alpha(T)} < C.
\end{equation}
We conclude that if $\mu$ is an eigenvalue of $Y_\alpha(T)$, then $\left| \mu \right| < C$. Hence the number of eigenvalues of $Y_\alpha(T)$ that lie outside the unit circle cannot change by an eigenvalue ``coming from infinity''; it can only change by an eigenvalue crossing the unit circle.
Now, for $\alpha \in [0,1]$, define
\begin{equation}
n_\alpha = \# \{ \mu \in \sigma_{pt}(Y_\alpha(T)) \mid \mu \in (1, \infty) \}
\end{equation}
i.e., $n_\alpha$ is the number of eigenvalues of $Y_\alpha(T)$ lying on the half-line $(1, \infty)$.
Note that the parity of $n_\alpha$ can only change by an eigenvalue crossing the point $1 \in \mathbb{C}$:
Indeed, if $\mu$ is an eigenvalue of $Y_\alpha(T)$, then $\overline{\mu}$ is an eigenvalue as well, so non-real eigenvalues appear in pairs which do not affect the parity of $n_\alpha$.
The only way left is through the real line, i.e., through $1 \in \mathbb{C}$.
However, by assumption, $1 \in \mathbb{C}$ is not a Floquet multiplier of \eqref{eq: parameter} for $\alpha = 0 $ and, by Theorem \ref{lem: preservation}, it will not be a Floquet multiplier of \eqref{eq: parameter} for any $\alpha \in (0, 1]$.
Hence it is impossible to change the parity of $n_\alpha$ through Pyragas control. Since by assumption $n_{\alpha = 0}$ is odd, we conclude that $n_{\alpha}$ is odd for all $\alpha \in [0,1]$, and \eqref{eq: A control} has at least one Floquet multiplier larger than $1$.
\end{proof}
Note how the proof combines the continuous dependence of the Floquet multipliers on parameters with the geometric invariance of the center to conclude that stabilization is impossible.
The assumption that $1 \in \mathbb{C}$ is not a multiplier of the uncontrolled system is crucial here. To facilitate terminology on this essential assumption, we say that $x_\ast$ is \emph{non-degenerate} as a solution of \eqref{eq: time periodic ode} if the linearization \eqref{eq: linvareq ode 1} does \emph{not} have a Floquet multiplier 1.
For non-degenerate periodic solutions of non-autonomous ODE, we recover the well-known odd-number limitation \cite{NAK97}:
\begin{cor}[Nakajima\cite{NAK97}, '97] \label{thm: nakajima} Consider the system \eqref{eq: time periodic ode} satisfying Assumption \ref{assumption}. Assume that $x_\ast$ is non-degenerate as a solution of \eqref{eq: time periodic ode} and that the linearized equation \eqref{eq: linvareq ode 1} has an odd number (counting algebraic multiplicities) of real Floquet multipliers larger than $1$.
Then, for every gain matrix $K \in \mathbb{R}^{N \times N}$, the periodic solution $x_\ast$ is unstable as a solution of the controlled system \eqref{eq: time periodic pyragas}.
\end{cor}
\begin{proof}
We apply Proposition \ref{prop: odd number technical} with $A(t) = \partial_x f(x_\ast(t), t)$: By assumption,
\begin{equation}
\dot{y}(t) = \partial_x f(x_\ast(t), t) y(t)
\end{equation}
does not have a Floquet multiplier 1 and has an odd number of Floquet multipliers larger than 1. Thus Proposition \ref{prop: odd number technical} implies that
\begin{equation}
\dot{y}(t) = \partial_x f(x_\ast(t), t) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has a Floquet multiplier larger than 1 for any gain $K$. Therefore $x_\ast$ is unstable as a solution of \eqref{eq: time periodic pyragas}.
\end{proof}
For any linear, time-periodic DDE, the eigenvalues of the monodromy operator are also captured by a finite-dimensional matrix-valued function called the characteristic matrix function \cite{KaashoekVL92, Sieber11, KaashoekVL20}. For system \eqref{eq: A control}, where the time delay is equal to the period, the expression for this characteristic matrix function is relatively explicit. In fact, the original proof in \cite{NAK97} relies heavily on its explicit form. As the argument here shows, this is unnecessary: the odd-number limitation follows directly from the invariance principle in Theorem \ref{lem: preservation}; it does \emph{not} rely on the fact that in system \eqref{eq: A control} the delay is equal to the period.
Let us briefly reflect on the situation for autonomous systems \eqref{eq: time periodic ode}, i.e., if
\begin{equation} \label{eq: autonomous}
\partial_t f(x, t) = 0 \qquad \mbox{for all } x \in \mathbb{R}^N \mbox{ and } t \in \mathbb{R}.
\end{equation}
In this case, if the periodic orbit $x_\ast$ is not an equilibrium, we differentiate the relation
\begin{equation}
\dot{x}_\ast(t) = f(x_\ast(t), t)
\end{equation}
with respect to $t$ to see that $\dot{x}_\ast(t)$ is a non-zero, $T$-periodic solution of \eqref{eq: linvareq ode 1}.
It follows that \eqref{eq: linvareq ode 1} has a Floquet multiplier $1$ (called the \emph{trivial Floquet multiplier}).
Thus, if system \eqref{eq: time periodic ode} is in fact autonomous, the solution $x_\ast$ is degenerate and the assumptions of Corollary \ref{thm: nakajima} are not satisfied.
Moreover, if \eqref{eq: time periodic ode} is autonomous, Theorem \ref{lem: preservation} implies that the geometric multiplicity of the trivial Floquet multiplier is preserved.
However, its \emph{algebraic multiplicity} is not fixed under control, and changing the algebraic multiplicity is necessary for successful stabilization. Indeed, the results from Hooton and Amann \cite{HOO12} on stabilization in autonomous systems have a natural interpretation in terms of the algebraic multiplicity of the trivial Floquet multiplier (as will be discussed in more detail in upcoming work by the first author \cite{deWolff21}). Also the positive stabilization result \cite{FIE07} for an autonomous system shows stabilization through the center generated by the trivial Floquet multiplier 1.
Some of the results up to this point -- most notably, the invariance principle and the odd-number limitation -- \emph{might} (!) apply to a more general class of noninvasive control terms than `only' Pyragas control.
However, before drawing conclusions on different control terms, one should carefully consider the functional analytical framework, in particular, whether the monodromy operator is still a Riesz operator and whether eigenvalues depend continuously on parameters.
\subsection{Corollary: Any-number limitation for commuting gain matrices with real spectrum} \label{sec:real periodic}
In addition to the odd-number limitation, we obtain direct restrictions on the choice of the gain matrix, again directly from the geometric invariance of the determining center.
This will be explored in this subsection:
In summary, in combination with real Floquet multipliers, stabilization is impossible if the gain matrix commutes with the linearization.
In particular, scalar gains are excluded.
This statement is independent of the actual number of real Floquet multipliers larger than 1; we therefore call it the \emph{any-number limitation for commuting gain matrices}.
We first formulate our results on a linear level. We use Floquet theory (see also Appendix \ref{sec:appendix ode}--\ref{sec:appendix dde}) to transform the linear, time-periodic system $\dot{y}(t) = A(t) y(t)$ into an autonomous one.
Assume that the map $t \mapsto A(t)$ is $T$-periodic; denote by $Y_0(T)$ its monodromy operator.
Since $Y_0(T)$ is invertible (i.e., $0 \not\in \sigma(Y_0(T))$), there exists a matrix $B \in \mathbb{C}^{N \times N}$ such that
\begin{subequations}
\begin{equation} \label{eq: B defn}
Y_0(T) = e^{BT}.
\end{equation}
Floquet theory for ODE (see Appendix \ref{sec:appendix ode}) gives that the map
\begin{equation} \label{eq: P defn}
P(t): = Y_0(t) e^{- Bt}
\end{equation}
\end{subequations}
is $T$-periodic; moreover, the coordinate transformation $y(t) = P(t) v(t)$ transforms the time-periodic system $\dot{y}(t)=A(t)y(t)$ into the linear, autonomous system
\begin{equation} \label{eq: B ode}
\dot{v}(t) = B v(t).
\end{equation}
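Numerically, such a matrix $B$ is furnished by a branch of the matrix logarithm, $B=\frac{1}{T}\log Y_0(T)$. The following sketch (our illustration, reusing the Mathieu-type coefficient matrix of the earlier monodromy sketch) computes $B$ and confirms the $T$-periodicity of $P(t)$, using that $P(T)=Y_0(T)e^{-BT}=I=P(0)$.
\begin{verbatim}
# Illustrative sketch: B = log(Y_0(T)) / T and the T-periodic
# transformation P(t) = Y_0(t) exp(-B t).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

T, delta, eps = 2 * np.pi, 1.0, 0.3

def A(t):
    return np.array([[0.0, 1.0],
                     [-(delta + eps * np.cos(t)), 0.0]])

def rhs(t, y):
    return (A(t) @ y.reshape(2, 2)).ravel()

ts = np.linspace(0.0, T, 201)
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(),
                t_eval=ts, rtol=1e-10, atol=1e-12)
Y = sol.y.T.reshape(-1, 2, 2)             # Y_0(t) along one period
B = logm(Y[-1]) / T                       # one branch of log Y_0(T)

P = np.array([Yt @ expm(-B * t) for Yt, t in zip(Y, ts)])
print("||P(T) - P(0)|| =", np.linalg.norm(P[-1] - P[0]))   # ~ 0
print("||P(0) - I||    =", np.linalg.norm(P[0] - np.eye(2)))
\end{verbatim}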
In the next proposition, we consider gain matrices $K$ that are \emph{commutative} in the sense that
\begin{equation}\label{commutative}
K P(t) = P(t) K \quad \mbox{and} \quad K B = BK
\end{equation}
for all $t \in \mathbb{R}$.
This seemingly restrictive assumption can easily be fulfilled, as it is trivially true for scalar gains, or, in the case of symmetric systems, for any gain matrix which leaves the periodic orbit invariant pointwise.
In fact, in this case, gain matrices fulfilling assumption \eqref{commutative} seem to be a very natural choice.
In the next proposition, we provide an analogous statement to Proposition \ref{prop: odd number technical}, but with the above assumptions on the gain matrix.
Regarding the Floquet theory of the uncontrolled system, we assume that there \emph{exists} a real Floquet multiplier strictly larger than 1, but we make no assumptions on the number or parity of such multipliers. Moreover, we do not make assumptions on the presence of a Floquet multiplier 1, and thus the result can be applied in both autonomous and non-autonomous settings.
\begin{prop} \label{prop: even number technical}
Consider the linear, non-autonomous system
\begin{equation} \label{eq: A 2}
\dot{y}(t) = A(t) y(t)
\end{equation}
and assume that there exists $T > 0$ such that $A(t+T) = A(t)$ for all $t \in \mathbb{R}$. Assume that system \eqref{eq: A 2} has at least one real Floquet multiplier larger than $1$.
Assume that the gain matrix $K \in \mathbb{R}^{N \times N}$ satisfies $\sigma(K) \subseteq \mathbb{R}$. Moreover, with $P(t)$ and $B$ as in \eqref{eq: B defn}--\eqref{eq: P defn}, assume that
\begin{subequations}
\begin{align}
K P(t) = & \ P(t) K \quad\mbox{ for all }t \in \mathbb{R};\label{eq: commute P} \\
K B = & \ B K. \label{eq: commute B}
\end{align}
\end{subequations}
Then the controlled system
\begin{equation} \label{eq: A control 2}
\dot{y}(t) = A(t) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has at least one real Floquet multiplier larger than $1$.
\end{prop}
\begin{proof}
We divide the proof into three steps:
\emph{Step 1:} We first transform the DDE \eqref{eq: A control 2} into an autonomous DDE. Recall that the coordinate transform $y(t) =P(t) v(t)$ transforms solutions of \eqref{eq: A 2} into solutions of \eqref{eq: B ode}. Therefore, the coordinate transformation $y(t) = P(t) v(t)$ transforms solutions of \eqref{eq: A control 2} into solutions of
\begin{align}
\dot{v}(t) &= B v(t) + P(t)^{-1} K \left[P(t) v(t) - P(t) v(t-T) \right] \\
&=B v(t) + K \left[v(t) - v(t-T)\right],
\end{align}
where we have used that $P(t+T) = P(t)$ and identity \eqref{eq: commute P}. Thus, the invertible, time-periodic coordinate transformation $y(t) =P(t) v(t)$ transforms solutions of \eqref{eq: A control 2} into solutions of
\begin{equation} \label{eq: transformed}
\dot{v}(t) = B v(t) + K \left[v(t) - v(t-T)\right].
\end{equation}
\emph{Step 2}: Next we use the commutativity property \eqref{eq: commute B} to find a common eigenvector for the unstable eigenvalue of the uncontrolled system and the gain matrix. Since by assumption $Y_0(T) = e^{BT}$ has an eigenvalue $\mu_\ast > 1$, we can choose $B$ such that $B$ has an eigenvalue $\lambda_\ast > 0$. Now let $y \in \mathbb{C}^N$ be such that $\lambda_\ast y- B y = 0$. Then
\begin{equation}
(\lambda_\ast I -B) Ky = K (\lambda_\ast I - B) y = 0
\end{equation}
since, by \eqref{eq: commute B}, $B$ and $K$ commute. Therefore the space
\begin{equation} \label{eq:space}
\mathcal{N} \left( \lambda_\ast I - B \right) : = \{y \in \mathbb{C}^N \mid (\lambda_\ast I - B) y = 0 \}
\end{equation}
is invariant under $K$.
Hence we can find a non-zero $y_\ast \in \mathcal{N} \left( \lambda_\ast I - B \right)$ and a $k_\ast \in \sigma(K)$ such that $K y_\ast = k_\ast y_\ast$. We conclude that there exists a $y_\ast \in \mathbb{C}^N \backslash \{0 \}$ such that
\begin{equation} \label{eq: eigenvector}
B y_\ast = \lambda_\ast y_\ast, \quad K y_\ast = k_\ast y_\ast,
\end{equation}
i.e., $y_\ast$ is a simultaneous eigenvector for $B$ and $K$.
\emph{Step 3:} We reduce to a 1-dimensional, real valued DDE using the common eigenvector from Step 2. Consider the real-valued, scalar DDE
\begin{equation} \label{eq: scalar dde}
\dot{w}(t) = \lambda_\ast w(t) + k_\ast \left[w(t) - w(t-T)\right].
\end{equation}
Since $\lambda_\ast > 0$, the ODE
\begin{equation}
\dot{w}(t) = \lambda_\ast w(t)
\end{equation}
has one Floquet multiplier $e^{\lambda_\ast T} >1$ and no trivial Floquet multiplier. Therefore, Proposition \ref{prop: odd number technical} implies that \eqref{eq: scalar dde} has at least one Floquet multiplier $\mu$ larger than $1$, i.e. \eqref{eq: scalar dde} has a solution $w_\mu(t)$ with $w_\mu(t+T) = \mu w_\mu(t)$.
If $w(t) \in \mathbb{R}$ is a solution of \eqref{eq: scalar dde} and $y_\ast$ is as in \eqref{eq: eigenvector}, then $v(t) = w(t) y_\ast$ solves
\begin{align}
\dot{v}(t) &= \lambda_\ast w(t) y_\ast + k_\ast \left[w(t) y_\ast - w(t-T) y_\ast \right] \\
&= B v(t) + K \left[v(t) - v(t-T) \right].
\end{align}
Thus, if $w(t)$ solves \eqref{eq: scalar dde}, then $v(t) = w(t) y_\ast$ solves \eqref{eq: transformed}. In particular, $v_\mu(t) : = w_\mu(t) y_\ast$ is a solution of \eqref{eq: transformed} with $v_\mu(t+T) = \mu v_\mu(t)$. This implies that $y_\mu(t) := P(t) v_\mu(t) $ is a solution of \eqref{eq: A control 2} with $y_\mu(t+T) = \mu y_\mu(t)$, i.e., $\mu > 1$ is a Floquet multiplier of \eqref{eq: A control 2}.
\end{proof}
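The persistence argument of Step 3 is easy to check numerically. The following Python sketch (an illustration only; the parameter values $\lambda_\ast = 0.05$, $T = 2\pi$ match those used in the figures below) brackets the guaranteed positive real root of the characteristic function $g(s) = s - \lambda_\ast - k_\ast(1-e^{-sT})$ for several real gains:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

lam, T = 0.05, 2 * np.pi
for k in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    g = lambda s, k=k: s - lam - k * (1 - np.exp(-s * T))
    s_hi = 1.0
    while g(s_hi) <= 0.0:      # g(s) -> +infinity, so a bracket always exists
        s_hi *= 2.0
    s = brentq(g, 0.0, s_hi)   # g(0) = -lam < 0: a positive real root persists
    print(f"k = {k:+.2f}: root s = {s:.6f}, multiplier exp(sT) = {np.exp(s*T):.3f}")
\end{verbatim}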
Remarkably, Proposition \ref{prop: even number technical} does not make any assumptions on the multiplicity of the unstable Floquet multiplier in the uncontrolled system. Therefore Proposition \ref{prop: even number technical} has a wide applicability (see also Corollary \ref{cor:any number} below). However, if in Proposition \ref{prop: even number technical} we additionally assume that system \eqref{eq: A 2} has a Floquet multiplier larger than 1 \emph{with odd geometric multiplicity}, we can drop the assumption that $K$ has real spectrum.
Indeed, as in the proof of Proposition \ref{prop: even number technical}, the space \eqref{eq:space} is invariant under $K$. Hence, if this space is odd-dimensional, the real matrix $K$ has at least one real eigenvalue in this space. From there, Step 3 of the proof of Proposition \ref{prop: even number technical} implies instability of the controlled system.
\medskip
If the gain matrix $K$ satisfies the conditions \eqref{eq: commute P}--\eqref{eq: commute B}, then in particular $K A(t) = A(t) K$ for all $t$, i.e., the gain matrix commutes with the linearization. Under this weaker assumption, a different proof yields the same any-number limitation; see the upcoming work \cite{deWolff21} by the first author.
As an application of Proposition \ref{prop: even number technical}, we consider the case $K = k I$ with $k \in \mathbb{R}$. Then the conditions \eqref{eq: commute P}--\eqref{eq: commute B} are trivially satisfied. Therefore Proposition \ref{prop: even number technical} leads to the following corollary:
\begin{cor}[Any-number limitation for scalar gain matrices] \label{cor:any number}
Consider the system \eqref{eq: time periodic ode} satisfying Assumption \ref{assumption}. Suppose that the linearized equation \eqref{eq: linvareq ode 1} has at least one real Floquet multiplier larger than $1$.
Then, for every $k \in \mathbb{R}$, $x_\ast$ is unstable as a solution of the controlled system \begin{equation} \label{eq: scalar control}
\dot{x}(t) = f(x(t), t) + k \left[x(t) - x(t-T)\right].
\end{equation}
\end{cor}
\subsection{Corollary: Any-number limitation for commuting gain matrices with complex spectrum} \label{sec:complex periodic}
In this section, we derive further restrictions on the choice of the gain matrix. We again consider commutative gain matrices, but in contrast to the results in Section \ref{sec:real periodic}, we do not make any assumptions on the spectrum of the matrix $K$. The main point here is that a limitation on control similar to the formulation in Proposition \ref{prop: even number technical} still holds, but the reasoning behind the limitation is different: the result in this section is no longer a direct corollary of the odd-number limitation, but requires an explicit analysis of the relevant Floquet multipliers.
\begin{prop} \label{prop:complex gain periodic}
Consider the linear, non-autonomous system
\begin{equation} \label{eq: A 3}
\dot{y}(t) = A(t) y(t)
\end{equation}
and assume that there exists $T > 0$ such that $A(t+T) = A(t)$ for all $t \in \mathbb{R}$. Assume that system \eqref{eq: A 3} has at least one real Floquet multiplier larger than $1$.
Let $P(t), \ B$ be as in \eqref{eq: B defn}--\eqref{eq: P defn} and assume that the gain matrix $K \in \mathbb{R}^{N \times N}$ satisfies
\begin{subequations}
\begin{align}
K P(t) = & \ P(t) K \quad\mbox{ for all }t \in \mathbb{R};\label{eq: commute P 2} \\
K B = & \ B K. \label{eq: commute B 2}
\end{align}
\end{subequations}
Then the system
\begin{equation} \label{eq: A control 3}
\dot{y}(t) = A(t) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has at least one Floquet multiplier outside the unit circle.
\end{prop}
\begin{proof}
We divide the proof into five steps. Since the first two steps are nearly identical to the first two steps in the proof of Proposition \ref{prop: even number technical}, we only summarize them here.
\emph{Step 1:} Let $y(t)$ be a solution of \eqref{eq: A control 3} and let $P(t),\ B$ be as in \eqref{eq: B defn}--\eqref{eq: P defn}. Then the periodic coordinate transformation $y(t) =P(t) v(t)$ transforms solutions of \eqref{eq: A control 3} into solutions of
\begin{align}
\dot{v}(t) =B v(t) + K \left[v(t) - v(t-T)\right].
\end{align}
\emph{Step 2}: Since by assumption $Y_0(T) = e^{BT}$ has an eigenvalue $\mu_\ast > 1$, we can choose $B$ such that $B$ has an eigenvalue $\lambda_\ast > 0$. Since by assumption \eqref{eq: commute B 2} the matrices $B$ and $K$ commute, the space
\begin{equation}
\mathcal{N} \left( \lambda_\ast I - B \right) : = \{y \in \mathbb{C}^N \mid (\lambda_\ast I - B) y = 0 \}
\end{equation}
is invariant under $K$. Therefore we can find a non-zero $y_\ast \in \mathcal{N} \left( \lambda_\ast I - B \right)$ and a (possibly complex!) $k_\ast \in \mathbb{C}$ such that
\begin{equation}
B y_\ast = \lambda_\ast y_\ast, \quad K y_\ast = k_\ast y_\ast,
\end{equation}
i.e., $y_\ast$ is a simultaneous eigenvector for $B$ and $K$.
\emph{Step 3:} We now consider the reduced, scalar-valued (possibly complex!) DDE
\begin{equation} \label{eq: scalar dde 2}
\dot{w}(t) = \lambda_\ast w(t) + k_\ast \left[w(t) - w(t-T)\right].
\end{equation}
The DDE \eqref{eq: scalar dde 2} has a solution of the form $w(t) = e^{\lambda t}$ if and only if $\lambda \in \mathbb{C}$ satisfies
\begin{equation} \label{eq:ce complex}
\lambda = \lambda_\ast + k_\ast \left(1-e^{-\lambda T} \right).
\end{equation}
If $\lambda$ satisfies \eqref{eq:ce complex}, then $y(t) = P(t) e^{\lambda t} y_\ast$ is a solution of \eqref{eq: A control 3} with $y(t+T) = e^{\lambda T} y(t)$. Therefore, to prove that system \eqref{eq: A control 3} has a Floquet multiplier outside the unit circle, it suffices to prove that equation \eqref{eq:ce complex} has a solution in the right half of the complex plane. To do so, we distinguish between the case where $k_\ast \in \mathbb{R}$ (Step 4) and the case where $k_\ast \in \mathbb{C} \backslash \mathbb{R}$ (Step 5).
\emph{Step 4:} For $k_\ast = 0$, equation \eqref{eq: scalar dde 2} becomes
\begin{equation} \dot{w}(t) = \lambda_\ast w(t)
\end{equation}
which has one Floquet multiplier $e^{\lambda_\ast T} >1$. If $k_\ast \in \mathbb{R}$, then Proposition \ref{prop: even number technical} on real gain matrices implies that \eqref{eq: scalar dde 2} has a Floquet multiplier larger than one for all real gains $k_\ast$. This proves the proposition in the case $k_\ast \in \mathbb{R}$.
\emph{Step 5:} We now consider the case $k_\ast \in \mathbb{C} \backslash \mathbb{R}$. If $\lambda \in \mathbb{C}$ satisfies \eqref{eq:ce complex}, then $\overline{\lambda}$ satisfies
\begin{equation}
\overline{\lambda} = \lambda_\ast + \overline{k}_\ast \left(1 - e^{- \overline{\lambda} T} \right)
\end{equation}
and both $e^{\lambda T}$ and $e^{\overline{\lambda}T}$ are Floquet multipliers of \eqref{eq: A control 3}. Hence, for non-real $k_\ast$, non-real solutions of \eqref{eq:ce complex} in the right half of the complex plane lead to \emph{two} unstable Floquet multipliers of \eqref{eq: A control 3}; see also Figure \ref{Hopfcurvesa005}.
If $\lambda$ is a solution of \eqref{eq:ce complex} on the imaginary axis, then $e^{\lambda T}$ is a Floquet multiplier of \eqref{eq: A control 3} on the unit circle. So to search for stability changes of \eqref{eq: A control 3}, we search for solutions of \eqref{eq:ce complex} on the imaginary axis. Equation \eqref{eq:ce complex} has a solution of the form $\lambda = i \omega$ if and only if $k_\ast$ is on the curve
\begin{equation}\label{Hopfcurves}
k_\ast(\omega)=\frac{i \omega-\lambda_\ast}{1-e^{-i \omega T}}.
\end{equation}
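As a numerical illustration (ours, with the parameter values of Figure \ref{Hopfcurvesa005}), the following Python sketch traces the first few branches of \eqref{Hopfcurves} and confirms that the real part of $k_\ast$ is strictly decreasing in $\omega$ along each branch:
\begin{verbatim}
import numpy as np

lam, T = 0.05, 2 * np.pi
k_star = lambda w: (1j * w - lam) / (1 - np.exp(-1j * w * T))

for m in range(3):   # one curve per branch: omega in (2 pi m / T, 2 pi (m+1) / T)
    w = np.linspace(2 * np.pi * m / T, 2 * np.pi * (m + 1) / T, 1001)[1:-1]
    k = k_star(w)
    print(m, np.all(np.diff(k.real) < 0))   # True: Re k_* decreases with omega
\end{verbatim}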
\begin{figure}
\includegraphics[width=\columnwidth]{Hopfcurvesa005_2.pdf}
\caption{\label{Hopfcurvesa005} Hopf curves (red) from eq. \eqref{Hopfcurves}; horizontal axis: Re$(k_\ast)$, vertical axis: Im$(k_\ast)$. Increasing $\omega$ is indicated by arrows. The unstable dimension is given in brackets and is higher by two to the right of the curves. No control is possible. Parameter values: $\lambda_\ast=0.05$, $T= 2 \pi$.}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{Floquet.pdf}
\caption{\label{circle} Unit circle and Floquet multipliers in the complex plane, parametrized by different real and complex $k_\ast$. Yellow dot: geometrically invariant determining center. Parameter values: $\lambda_\ast=0.05$ (magenta dot), $T= 2 \pi$. Blue: real $k_\ast$ (note the invariance of the real line!), red (dotted): purely imaginary $k_\ast$, green: $\arg(k_\ast)=0.8\pi$.}
\end{figure}
Note how we can write these curves in one single equation but actually obtain a family of curves through the exponential term in the denominator.
More precisely, there is one curve for each $\omega \in (2 \pi m/T,2 \pi (m+1)/T)$, $m \in \mathbb{Z}$.
Each of these curves defines a continuous graph over the real axis:
indeed, the real part is strictly monotone along each curve.
Moreover, we remark that \eqref{Hopfcurves} is symmetric with respect to the real axis.
It therefore suffices to consider $\omega>0$, without loss of generality.
Next, notice that the curves, parametrized by $\omega$, are oriented in the direction of decreasing real part of $k_\ast$.
Moreover, since all the segments are complex differentiable (see also \cite{SCH16}), complex orientation is preserved, and we conclude that the number of solutions in the right half plane is higher by $2$ to the right of each curve.
Therefore, the unstable dimension can only increase if any of the curves \eqref{Hopfcurves} is crossed, proving the proposition.
\end{proof}
Note that in the above proof, in contrast to Proposition \ref{prop: even number technical}, the real line does not stay invariant; see Figure \ref{circle}.
In Section \ref{sec:complex equilibria}, we will give an application of Proposition \ref{prop:complex gain periodic} to the stabilization of equilibria.
\section{Geometric invariance of resonating centers of equilibria}
In this section, we focus on Pyragas control of equilibria. The main result concerns invariance of the geometric multiplicity not only of the determining center 0, but also of the resonating centers $\pm 2 \pi i n/ T$ (Theorem \ref{lem: preservation eq}).
From this invariance we deduce several limitations on feedback stabilization. These limitations explicitly depend on the time delay.
If the time delay resonates with the eigenvalues, stabilization becomes impossible. We expect these results to be particularly important at (equivariant) Hopf bifurcation, where the time delay $T$ is in resonance with the purely imaginary eigenvalues $\pm 2 \pi i / T$ of the equilibrium.
\subsection{Main result concerning equilibria}
Throughout this section, consider the autonomous ODE
\begin{align} \label{eq: autonomous ode}
\dot{x}(t) = f(x(t))
\end{align}
with $f: \mathbb{R}^N \to \mathbb{R}^N$ a $C^1$-function. We assume that there exists an $x_\ast \in \mathbb{R}^N$ such that $f(x_\ast) = 0$, i.e., $x_\ast$ is an equilibrium of \eqref{eq: autonomous ode}.
The linearization
\begin{equation} \label{eq: linear ode eq}
\dot{y}(t) = f'(x_\ast) y(t)
\end{equation}
has an eigenvalue $\lambda$ (or, more precisely, the generator of the semigroup associated to \eqref{eq: linear ode eq} has an eigenvalue $\lambda$) if and only if
\begin{equation} \label{eq: ce ode 1}
\det \left( \lambda I - f'(x_\ast) \right) = 0.
\end{equation}
The geometric multiplicity of the eigenvalue $\lambda$ is the dimension of the linear space
\begin{equation}
\begin{aligned}
\mathcal{N}\left( \lambda I - f'(x_\ast) \right) = \{x \in \mathbb{C}^N \mid \lambda x - f'(x_\ast) x = 0 \},
\end{aligned}
\end{equation}
and its algebraic multiplicity is given by the order of $\lambda$ as a zero of the function $z \mapsto \det \left(z I - f'(x_\ast) \right)$.
For a fixed time delay $T > 0$, the geometric multiplicity of the eigenvalues $\frac{2 \pi n i}{T}$ of \eqref{eq: linear ode eq} plays an important role in the stabilization results in the rest of this section. To reflect this in the notation, and to emphasize the connection to the time delay, we define the \textbf{resonating centers} to be the geometric eigenspaces of the eigenvalues $\frac{2 \pi n i}{T}$, i.e., the spaces
\begin{equation}
\mathcal{N}\left( \frac{2 \pi n i}{T}I - f'(x_\ast) \right).
\end{equation}
For $T > 0$, $x_\ast$ is again an equilibrium of the controlled system
\begin{equation} \label{eq: pyragas fixed point}
\dot{x}(t) = f(x(t)) + K \left[x(t) - x(t-T)\right]
\end{equation}
with gain matrix $K \in \mathbb{R}^{N \times N}$.
Surprisingly, we can find the relevant eigenvalues from a finite-dimensional characteristic equation. Indeed, $\lambda$ is an eigenvalue of the linearization
\begin{equation} \label{eq: linear dde eq}
\dot{y}(t) = f'(x_\ast) y(t) + K \left[y(t) -y(t-T)\right],
\end{equation}
(or, more precisely, of the generator of the semigroup associated to \eqref{eq: linear dde eq}) if and only if $\lambda$ satisfies
\begin{equation}
\det \left( \lambda I - f'(x_\ast) - K \left[1-e^{-\lambda T}\right] \right) = 0.
\end{equation}
The geometric multiplicity of $\lambda$ equals the dimension of the linear space
\begin{equation}
\begin{aligned}
\mathcal{N}\left( \lambda I - f'(x_\ast) - K \left[1-e^{-\lambda T}\right] \right) \subseteq \mathbb{C}^N
\end{aligned}
\end{equation}
and its algebraic multiplicity is the order of $\lambda$ as a zero of the function
\begin{equation} \label{eq: ce dde 2}
z \mapsto \det \left( z I - f'(x_\ast) - K \left[1-e^{-z T} \right] \right);
\end{equation}
see also \cite[Chapter IV]{DIE95}.
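For a concrete feel of \eqref{eq: ce dde 2}, the following Python sketch (an illustration only; the Jacobian, an unstable focus, and the scalar gain are assumed examples) evaluates the characteristic function at the eigenvalues of the uncontrolled ODE:
\begin{verbatim}
import numpy as np

T = 2 * np.pi
J = np.array([[0.05, -1.0], [1.0, 0.05]])    # assumed Jacobian f'(x_*)

def d(z, K):
    # characteristic function det(zI - J - K(1 - e^{-zT}))
    return np.linalg.det(z * np.eye(2) - J - K * (1 - np.exp(-z * T)))

K0 = np.zeros((2, 2))
K1 = 0.3 * np.eye(2)                         # a scalar gain
for z in np.linalg.eigvals(J):               # eigenvalues 0.05 +/- i of the ODE
    print(abs(d(z, K0)), abs(d(z, K1)))      # ~0 without control; shifted with K1
\end{verbatim}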
\medskip
In the following theorem, we compare the geometric multiplicity of the resonating eigenvalues of \eqref{eq: linear ode eq} with the geometric multiplicity of the resonating eigenvalues of \eqref{eq: linear dde eq}. By convention, if $\lambda \in \mathbb{C}$ is \emph{not} an eigenvalue of \eqref{eq: linear ode eq} (resp. \eqref{eq: linear dde eq}), we say that the geometric multiplicity of the eigenvalue $\lambda$ of \eqref{eq: linear ode eq} (resp. \eqref{eq: linear dde eq}) is zero.
\begin{theorem}[Geometric invariance of resonating centers] \label{lem: preservation eq}
For each $n \in \mathbb{Z}$, the geometric multiplicity of the eigenvalue $\frac{2 \pi i n}{T}$ is preserved under control of Pyragas type. That is, for $K \in \mathbb{R}^{N \times N}$ and for every $n \in \mathbb{Z}$, the geometric multiplicity of $\frac{2 \pi n i}{T}$ as an eigenvalue of the ODE
\begin{equation} \label{eq: linear ode eq 1}
\dot{y}(t) = f'(x_\ast) y(t)
\end{equation}
equals the geometric multiplicity of $\frac{2 \pi n i}{T}$ as an eigenvalue of the DDE
\begin{equation} \label{eq: linear dde eq 1}
\dot{y}(t) = f'(x_\ast) y(t) + K \left[y(t) -y(t-T)\right].
\end{equation}
\end{theorem}
\begin{proof}
Fix $n \in \mathbb{Z}$. The geometric multiplicity of $\frac{2 \pi i n}{T}$ as an eigenvalue of \eqref{eq: linear ode eq 1} is given by
\begin{equation} \dim \mathcal{N} \left( \frac{2 \pi i n}{T} I - f'(x_\ast) \right). \end{equation}
Conversely, the geometric multiplicity of $\frac{2 \pi i n}{T}$ as an eigenvalue of \eqref{eq: linear dde eq 1} is given by
\begin{align}
&\dim \mathcal{N} \left( \frac{2 \pi i n}{T} I - f'(x_\ast) - K \left[1 - e^{-\frac{2 \pi i n}{T} T} \right] \right) \\
&\quad = \dim \mathcal{N} \left( \frac{2 \pi i n}{T} I - f'(x_\ast) \right).
\end{align}
Note that the delay and the resonant eigenvalue cancel in the exponent, causing the contribution of Pyragas control to vanish.
We conclude that the geometric multiplicity of $\frac{2 \pi i n}{T}$ as an eigenvalue of \eqref{eq: linear ode eq 1} equals the geometric multiplicity of $\frac{2 \pi i n}{T}$ as an eigenvalue of \eqref{eq: linear dde eq 1}.
\end{proof}
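The cancellation in the proof can also be observed numerically: at $z = 2\pi i n/T$ the factor $1 - e^{-zT}$ vanishes, so the characteristic matrix is independent of $K$. A minimal Python check (ours, with an assumed random Jacobian and random gains):
\begin{verbatim}
import numpy as np

T, n = 2 * np.pi, 1
z = 2j * np.pi * n / T                 # resonating center (here z = i)
rng = np.random.default_rng(0)
J = rng.standard_normal((4, 4))        # assumed Jacobian f'(x_*)
for _ in range(3):
    K = rng.standard_normal((4, 4))    # arbitrary real gain matrix
    M_ode = z * np.eye(4) - J
    M_dde = z * np.eye(4) - J - K * (1 - np.exp(-z * T))
    print(np.allclose(M_ode, M_dde))   # True: the control term vanishes exactly
\end{verbatim}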
We briefly reflect on the connection between the two main results on determining and resonating centers. We can interpret the equilibrium $x_\ast$ of \eqref{eq: autonomous ode} as a periodic solution of arbitrary period $T>0$.
Theorem \ref{lem: preservation eq} then implies Theorem \ref{lem: preservation} for equilibria of autonomous systems, but Theorem \ref{lem: preservation eq} on resonating centers is \emph{finer} in the following sense: If we view
\begin{equation} \label{eq:comparison}
\dot{y}(t) = f'(x_\ast) y(t)
\end{equation}
as a linear, time-periodic equation with trivial time-dependence, its monodromy operator is given by
\begin{equation}
Y_0(T) = e^{f'(x_\ast) T}.
\end{equation}
Therefore, $1 \in \sigma(Y_0(T))$, i.e., 1 is a Floquet multiplier, if and only if there is (at least one) $n \in \mathbb{Z}$ such that $\frac{2 \pi i n}{T}$ as an eigenvalue of \eqref{eq:comparison}.
In this case the geometric multiplicity of $1 \in \sigma(Y_0(T))$ is given by
\begin{equation}
\sum_{n \in \mathbb{Z}} \dim \mathcal{N} \left( \frac{2 \pi i n}{T} I - f'(x_\ast) \right).
\end{equation}
Note that all the resonating eigenvalues $\frac{2 \pi n i}{T}$ for the equilibrium together form the determining Floquet multiplier $1$ for the periodic orbit. Theorem \ref{lem: preservation} only gives information on the geometric multiplicity of the determining Floquet multiplier, thereby losing information on the geometric multiplicity of the individual resonating eigenvalues.
At equivariant bifurcation points, we can expect a higher geometric multiplicity of a resonating center. Note that the invariance principle in Theorem \ref{lem: preservation eq} in this case forbids \emph{asymptotic} stabilization.
\subsection{Corollary: Odd-number limitation for equilibria}
In this section, we use Theorem \ref{lem: preservation eq} to prove limitations for stabilization of equilibria of autonomous systems.
Again, the absence of a determining center forbids stabilization. To make this essential point precise, we say that the equilibrium $x_\ast$ of the ODE $\dot{x}(t) =f(x(t))$ is \emph{non-degenerate} if $0$ is not an eigenvalue of the Jacobian $f'(x_\ast)$.
The following proposition gives an odd-number limitation for equilibria, analogous to the odd-number limitation for periodic orbits in Corollary \ref{thm: nakajima}.
\begin{cor}[Odd-number limitation for equilibria] \label{prop: odd number equilibrium}
Consider the system
\begin{equation} \label{eq: odd number eq}
\dot{x}(t) = f(x(t)), \qquad t \geq 0
\end{equation}
with $f: \mathbb{R}^N \to \mathbb{R}^N$ a $C^1$-function. Suppose that $x_\ast \in \mathbb{R}^N$ is a non-degenerate equilibrium of \eqref{eq: odd number eq}. Moreover, assume that $f'(x_\ast)$ has an odd number of eigenvalues (counting algebraic multiplicities) in the strict right half plane.
Then, for all $K \in \mathbb{R}^{N \times N}$ and all $T > 0$, $x_\ast$ is unstable as a solution of the controlled system
\begin{equation} \label{eq: odd number eq control}
\dot{x}(t) = f(x(t)) + K \left[x(t) - x(t-T)\right].
\end{equation}
\end{cor}
\begin{proof}
We again start by giving an intuitive argument. Since the matrix $f'(x_\ast): \mathbb{R}^N \to \mathbb{R}^N$ is real, its non-real eigenvalues appear in pairs. Therefore, if $f'(x_\ast)$ has an odd number of eigenvalues in the right half plane, an odd number of these eigenvalues is real. So the assumptions of the statement imply that $f'(x_\ast)$ has an odd number of eigenvalues on the positive real axis.
Since the gain matrix $K \in \mathbb{R}^{N \times N}$ is real, non-real eigenvalues of the controlled system appear in pairs as well. Therefore, at least one eigenvalue stays on the positive real axis as control is applied. This eigenvalue can only move into the left half of the complex plane by crossing the point $0 \in \mathbb{C}$. However, this is forbidden, since by Theorem \ref{lem: preservation eq} the controlled system never has an eigenvalue $0$.
\medskip
To make the argument more precise, we fix a matrix $K \in \mathbb{R}^{N \times N}$ and a time delay $T > 0$. Moreover, we introduce the homotopy parameter $\alpha \in [0, 1]$:
\begin{equation}
\dot{x}(t) = f(x(t)) + \alpha K \left[x(t) - x(t-T) \right].
\end{equation}
We first show that the number of eigenvalues with positive real part can only change via a crossing of the imaginary axis:
The linearization
\begin{equation}\label{eq: parameter equilibrium}
\dot{y}(t) = f'(x_\ast) y(t) + \alpha K \left[y(t) - y(t-T) \right]
\end{equation}
has an eigenvalue $\lambda \in \mathbb{C}$ if and only if $\lambda$ is a zero of the function
\begin{equation}
d (\lambda, \alpha) := \det \left( \lambda I - f'(x_\ast) - \alpha K \left[1-e^{- \lambda T}\right] \right).
\end{equation}
Since the function $d(\lambda, \alpha)$ is analytic in $\lambda$ and continuous in $\alpha$, the zeros of $\lambda \mapsto d(\lambda, \alpha)$ depend continuously on $\alpha$ (in the sense of \cite{Kato95}).
Moreover, we can find a $C > 0$ such that
\begin{equation}
\sup \{ \mbox{Re}\, \lambda \mid d(\lambda, \alpha) = 0 \} < C
\end{equation}
for all $\alpha \in [0, 1]$ (see \cite[Section I.4]{DIE95}).
Therefore, when varying $\alpha$, we cannot change the number of zeros of $\lambda \mapsto d(\lambda, \alpha)$ in the strict right half plane by roots ``coming from infinity''. Thus the number of roots of $\lambda \mapsto d(\lambda, \alpha)$ in the strict right half plane can only change by a root crossing the imaginary axis. Moreover, if $d(\lambda, \alpha) = 0$, then also $d(\overline{\lambda}, \alpha) = 0$, so non-real roots appear in complex conjugated pairs.
In particular, the \emph{parity} of the number of roots with positive real parts can only change if a real root crosses the imaginary axis, i.e., if a root crosses the point $0 \in \mathbb{C}$.
Now, for $\alpha \in [0,1]$, define
\begin{equation} n_\alpha = \# \{ \lambda \mid d(\lambda, \alpha) = 0 \mbox{ and } \mbox{Re}\, \lambda > 0\}
\end{equation}
i.e., $n_\alpha$ is the number of roots of $\lambda \mapsto d(\lambda, \alpha)$ in the strict right half of the complex plane.
By the previous remarks, the parity of $n_\alpha$ can only change if a root of $\lambda \mapsto d(\lambda, \alpha)$ crosses the point $0 \in \mathbb{C}$. However, by assumption, $0 \not \in \sigma(f'(x_\ast))$; so Theorem \ref{lem: preservation eq} implies that
\begin{equation}
d(0, \alpha) \neq 0
\end{equation}
for all $\alpha \in [0, 1]$.
It follows that the parity of $n_\alpha$ cannot change. Since by assumption, $n_{\alpha = 0}$ is odd, we conclude that $n_{\alpha}$ is odd for all $\alpha \in [0,1]$. In particular,
\begin{equation}
\dot{y}(t) = f'(x_\ast) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has an odd number (and thus at least one) of eigenvalues in the right half of the complex plane. This implies that $x_\ast$ is unstable as a solution of \eqref{eq: odd number eq control}.
\end{proof}
We briefly compare the definition of a non-degenerate periodic orbit with the definition of a non-degenerate equilibrium. We can view the equilibrium $x_\ast$ of \eqref{eq: odd number eq} as a periodic solution of arbitrary period $p > 0$. The invariance result of Theorem \ref{lem: preservation} relies on the fact that the time-delay in the controlled system is the same as the period of the periodic orbit. Therefore, to compare the results in this section with the results on periodic orbits, we view the equilibrium $x_\ast$ as a periodic orbit of period $p = T$. In this case, the monodromy operator is given by
\begin{equation}
Y_0(T) = e^{f'(x_\ast) T}.
\end{equation}
Therefore, $x_\ast$ is non-degenerate as a periodic orbit of period $T$ if and only if $\frac{2 \pi n i}{T}$ is not an eigenvalue of $f'(x_\ast)$ for every $n \in \mathbb{Z}$. In contrast, for $x_\ast$ to be non-degenerate as an equilibrium, we only require that $0$ is not an eigenvalue of $f'(x_\ast)$. So the assumption that $x_\ast$ is non-degenerate as an equilibrium is milder than the assumption that $x_\ast$ is non-degenerate as a periodic orbit of period $T > 0$.
\subsection{Corollary: Any-number-resonance limitation for commuting gain matrices with real spectrum}
The next proposition provides an `Any-number limitation' for equilibria, analogous to the statement in Proposition \ref{prop: even number technical} for time-periodic systems. Here we highlight the case in which the linearization has eigenvalues in the right half plane which are in `resonance' with the delay.
In the case of commuting gain matrices with real spectrum, we find that the eigenvalues stay on invariant lines parallel to the real axis.
\begin{cor}[Any-number resonance limitation for equilibria and commuting gain matrices with real spectrum] \label{prop: any number eq}
Consider the system
\begin{equation} \label{eq: resonance diagonal ode}
\dot{x}(t) = f(x(t)), \qquad t \geq 0
\end{equation}
with $f: \mathbb{R}^N \to \mathbb{R}^N$ a $C^1$-function. Let $x_\ast \in \mathbb{R}^N$ be an equilibrium of \eqref{eq: resonance diagonal ode} and fix a time delay $T > 0$. Suppose that the time delay is in resonance with one of the unstable eigenvalues, that is, there exist a real $\lambda_\ast > 0$ and $n \in \mathbb{Z}$ such that
\begin{equation} \label{eq: resonant ev any number}
\lambda_\ast + \frac{2 \pi n i}{T} \in \sigma(f'(x_\ast)).
\end{equation}
Assume that $K \in \mathbb{R}^{N \times N}$ satisfies $\sigma(K) \subseteq \mathbb{R}$ and that the gain matrix commutes with the Jacobian, i.e.,
\begin{equation}
f'(x_\ast) K = K f'(x_\ast). \label{eq: commute fixed point}
\end{equation}
Then $x_\ast$ is unstable as a solution of the controlled system
\begin{equation} \label{eq: resonance diagonal dde}
\dot{x}(t) =f(x(t)) + K \left[x(t) - x(t-T) \right].
\end{equation}
\end{cor}
\begin{proof}
We apply Proposition \ref{prop: even number technical} with
\begin{equation}
A(t) := f'(x_\ast).
\end{equation}
The fundamental solution of the linear equation
\begin{equation}
\dot{y}(t) = f'(x_\ast) y(t)
\end{equation}
is given by
\begin{equation}
Y_0(t) = e^{f'(x_\ast) t}
\end{equation}
and hence $B, P(t)$ in \eqref{eq: B defn}--\eqref{eq: P defn} are given by
\begin{equation} B: = f'(x_\ast), \qquad P(t) \equiv I.
\end{equation}
Therefore, $P(t) K = K P(t)$ is trivially satisfied and \eqref{eq: commute fixed point} implies that $KB = BK$. Moreover, \eqref{eq: resonant ev any number} implies that the monodromy operator $Y_0(T) = e^{f'(x_\ast) T}$ has an eigenvalue $e^{\lambda_\ast T} > 1$. Thus, Proposition \ref{prop: even number technical} implies that
\begin{equation} \dot{y}(t) = f'(x_\ast) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has at least one real Floquet multiplier larger than $1$ (or, equivalently, at least one eigenvalue in the strict right half plane). Therefore $x_\ast$ is unstable as a solution of \eqref{eq: resonance diagonal dde}.
\end{proof}
We briefly comment on the connection between Corollary \ref{prop: any number eq} and Proposition \ref{prop: even number technical}. In Corollary \ref{prop: any number eq}, we assume that the Jacobian $f'(x_\ast)$ has an eigenvalue $\lambda_\ast + \frac{2 \pi n i}{T}$ with $\lambda_\ast > 0$. Therefore, if we view $x_\ast$ as a periodic orbit of period $T > 0$, the monodromy operator
\begin{equation}
Y_0(T) = e^{f'(x_\ast) T}
\end{equation}
has a Floquet multiplier $e^{\lambda_\ast T} > 1$. By Proposition \ref{prop: even number technical}, this Floquet multiplier stays on the real line in the controlled system, provided that assumption \eqref{eq: commute fixed point} holds. Therefore, the corresponding \emph{eigenvalue} should stay on one of the lines
\begin{equation}\label{lines}
\ell_k := \Big \{ z \in \mathbb{C} \mid \mbox{Im}\, z = \frac{2 \pi k}{T} \Big \},
\end{equation}
with $k \in \mathbb{Z}$.
By continuity of the eigenvalues, the eigenvalue cannot `jump' between lines, and thus it stays on the line $\ell_k$ with $k = n$. So the eigenvalue of the controlled system stays on the same line as the eigenvalue of the uncontrolled system.
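This line invariance is easy to verify numerically. The following Python sketch (ours; a Newton continuation in the gain, with the parameter values of Figure \ref{EigenvalueCurve2}) tracks the resonant eigenvalue as a real scalar gain $k$ is switched on, and confirms that its imaginary part stays at $2\pi n/T$:
\begin{verbatim}
import numpy as np

lam, T, n = 0.05, 2 * np.pi, 1
mu = lam + 2j * np.pi * n / T          # resonant eigenvalue of the Jacobian
z = mu                                 # start at the uncontrolled eigenvalue
for k in np.linspace(0.0, 0.7, 8):     # switch on a real scalar gain gradually
    for _ in range(50):                # Newton iteration, analytic derivative
        F = z - mu - k * (1 - np.exp(-z * T))
        z = z - F / (1 - k * T * np.exp(-z * T))
    print(f"k = {k:.2f}: Re z = {z.real:+.6f}, Im z = {z.imag:.6f}")
# Im z remains 1.000000 = 2*pi*n/T for every real k: the eigenvalue stays on l_1
\end{verbatim}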
\subsection{Corollary: Any-number-resonance limitation on commuting gain matrices with complex spectrum} \label{sec:complex equilibria}
For the case of commuting gain matrices with \emph{complex} spectrum, we prove an `Any-number limitation' analogous to the statement in Proposition \ref{prop:complex gain periodic} for periodic orbits. The limitation also applies when the uncontrolled system has unstable eigenvalues whose imaginary part is in resonance with the time-delay.
See also the result by Hövel and Schöll \cite{HOE05}, where the case of an unstable focus was addressed in detail.
Note that in contrast to real control gains, but in agreement with Proposition \ref{prop:complex gain periodic} for periodic orbits, the lines \eqref{lines} are not invariant, see Figure \ref{EigenvalueCurve2}.
\begin{figure}
\includegraphics[width=\columnwidth]{Eigenvalues.pdf}
\caption{\label{EigenvalueCurve2} Imaginary axis and eigenvalues in the complex plane, depending on $k_\ast$. Yellow dot: geometrically invariant resonating center at $i$. Parameter values: $\lambda_\ast=0.05$ (magenta dot), $T= 2 \pi$, $n=1$. Blue: real $k_\ast=\kappa$ (note the invariant line!), red (dotted): purely imaginary $k_\ast$, green (thin): $\arg(k_\ast)=0.8\pi$. Note that the picture is seemingly asymmetric, because we have not drawn the complex conjugated resonating center.}
\end{figure}
\begin{cor} [Any-number-resonance limitation for equilibria and commuting gain matrices with complex spectrum]\label{thm: resonance} Consider the system
\begin{align} \label{eq: ode 2}
\dot{x}(t) = f(x(t)), \qquad t \geq 0
\end{align}
with $f: \mathbb{R}^N \to \mathbb{R}^N$ a $C^1$-function. Let $x_\ast \in \mathbb{R}^N$ be an equilibrium of \eqref{eq: ode 2} and fix $T > 0$. Suppose there exists a $\lambda_\ast > 0$ and $n \in \mathbb{Z}$ such that
\begin{equation} \label{eq: ev complex resonance}
\lambda_\ast \pm\frac{2 \pi n i}{T} \in \sigma(f'(x_\ast)).
\end{equation}
Assume that $K \in \mathbb{R}^{N \times N}$ commutes with the Jacobian, i.e.
\begin{equation} \label{eq: commute fixed point 2}
f'(x_\ast) K = K f'(x_\ast).
\end{equation}
Then $x_\ast$ is unstable as a solution of the controlled system
\begin{equation} \label{eq: pyragas2}
\dot{x}(t) = f(x(t)) + K \left[x(t) - x(t-T) \right].
\end{equation}
\end{cor}
\begin{proof}
We apply Proposition \ref{prop:complex gain periodic} with
\begin{equation}
A(t) := f'(x_\ast).
\end{equation}
The fundamental solution of the linear equation
\begin{equation}
\dot{y}(t) = f'(x_\ast) y(t)
\end{equation}
is given by
\begin{equation}
Y_0(t) = e^{f'(x_\ast) t}
\end{equation}
and hence $B, P(t)$ in \eqref{eq: B defn}--\eqref{eq: P defn} are given by
\begin{equation} B: = f'(x_\ast), \qquad P(t) \equiv I.
\end{equation}
Therefore, $P(t) K = K P(t)$ is trivially satisfied and \eqref{eq: commute fixed point 2} implies that $KB = BK$. Moreover, \eqref{eq: ev complex resonance} implies that the monodromy operator $Y_0(T) = e^{f'(x_\ast) T}$ has an eigenvalue $e^{\lambda_\ast T} > 1$. Thus, Proposition \ref{prop:complex gain periodic} implies that
\begin{equation} \dot{y}(t) = f'(x_\ast) y(t) + K \left[y(t) - y(t-T) \right]
\end{equation}
has at least one Floquet multiplier outside the unit circle (or, equivalently, at least one eigenvalue in the strict right half plane). Therefore $x_\ast$ is unstable as a solution of \eqref{eq: pyragas2}.
\end{proof}
As an application, we consider 2-dimensional systems where both the Jacobian and the control matrix have two complex conjugated eigenvalues. In this case, stabilization is impossible if the time delay is chosen in resonance with the eigenvalues of the Jacobian, see also \cite{HOE05}.
\begin{cor}
Consider the system
\begin{align} \label{eq: ode 3}
\dot{x}(t) = f(x(t)), \qquad t \geq 0
\end{align}
with $f: \mathbb{R}^2 \to \mathbb{R}^2$ a $C^1$-function. Let $x_\ast \in \mathbb{R}^2$ be an equilibrium of \eqref{eq: ode 3}. Suppose that the Jacobian $f'(x_\ast)$ is given by
\begin{equation}
f'(x_\ast) = \begin{pmatrix}
\lambda & - \omega \\
\omega & \lambda
\end{pmatrix}
\end{equation}
with $\lambda > 0$ and $\omega > 0$. Moreover, let $K$ be given by
\begin{equation} \label{eq:K ev}
K = \begin{pmatrix} \alpha & - \beta \\
\beta & \alpha
\end{pmatrix}
\end{equation}
with $\alpha, \beta \in \mathbb{R}$. Then, for $n \in \mathbb{N}$, $x_\ast$ is unstable as a solution of the controlled system
\begin{equation}
\dot{x}(t) = f(x(t)) + K \left[x(t) - x\left(t- \frac{2 \pi n}{\omega} \right) \right].
\end{equation}
\end{cor}
\begin{proof}
The spectrum of $f'(x_\ast)$ is given by
\begin{equation}
\sigma(f'(x_\ast)) = \{ \lambda \pm i \omega \}.
\end{equation}
Moreover, with $K$ as in \eqref{eq:K ev}, we have that $f'(x_\ast) K =K f'(x_\ast)$. Thus, if we apply Corollary \ref{thm: resonance} with $T = \frac{2 \pi n}{\omega}$ the claim follows.
\end{proof}
Another application is given by a ring of $n$ coupled Stuart-Landau oscillators, which exhibits a multitude of ponies-on-a-merry-go-round solutions. It has been proven \cite{SCH16} that only the fully synchronized orbit can be stabilized via Pyragas control. This is a direct consequence of Corollary \ref{thm: resonance}: the time delay $2\pi$ is prescribed by the periodic orbit and is in resonance with the imaginary part $1$ of all complex eigenvalues near the Hopf bifurcation. In the linearization, the system decouples into $n$ complex equations. For all periodic orbits except the synchronized one, at least one of these equations has an empty resonating center at $i$ together with an eigenvalue with positive real part \cite{SCH16}. Therefore, Corollary \ref{thm: resonance} directly forbids stabilization via Pyragas control at the equivariant Hopf bifurcation. The result in \cite{SCH16} shows in addition that this failure of Pyragas control persists close to the Hopf bifurcation point.
\section{Conclusion}
We proved two fundamental invariance principles of Pyragas control for periodic orbits and equilibria. A number of limitations on Pyragas stabilization of periodic solutions and equilibria follow.
Compared to previous literature on this subject \cite{NAK97,JUS99,HOO12}, our approach provides a new perspective on two crucial points. First of all, in our analysis, we emphasized the geometric rather than the algebraic aspects of the Floquet or eigenvalue problem.
Instead of purely calculating algebraic multiplicities, we count the geometric multiplicity, that is, the dimension of the eigenspace.
The geometric multiplicities of the determining and resonating centers stay invariant under Pyragas control, while the algebraic count of eigenvalues does not \cite{FIE07,JUS07}. Moreover, in our formulation of the limitations to control, we emphasized properties of the unstable object itself, rather than properties of the uncontrolled dynamical system. For controllability, it matters whether there exists a determining or resonating center, but its provenance is irrelevant. These two shifts of perspective provide a clear and unifying understanding of the previously often misinterpreted odd-number limitation, and moreover lead naturally to `any-number limitations' for commutative gain matrices.
\begin{acknowledgments}
This work was partially supported by SFB 910 “Control of self-organizing nonlinear systems: Theoretical methods and concepts of application”, project A4: “Spatio-temporal patterns: observation, control, and design”. The work of BdW was supported by the Berlin Mathematical School (BMS). We are grateful to Prof. Dr. Bernold Fiedler and Prof. Dr. Sjoerd Verduyn Lunel for their constant support and encouragement. We thank Dr. Jia-Yuan Dai and Alejandro L\'opez Nieto for many fruitful discussions and helpful remarks.
\end{acknowledgments}
https://arxiv.org/abs/2204.08596 | Data driven soliton solution of the nonlinear Schrödinger equation with certain $\mathcal{PT}$-symmetric potentials via deep learning | We investigate the physics informed neural network method, a deep learning approach, to approximate soliton solution of the nonlinear Schrödinger equation with parity time symmetric potentials. We consider three different parity time symmetric potentials, namely Gaussian, periodic and Rosen-Morse potentials. We use physics informed neural network to solve the considered nonlinear partial differential equation with the above three potentials. We compare the predicted result with actual result and analyze the ability of deep learning in solving the considered partial differential equation. We check the ability of deep learning in approximating the soliton solution by taking squared error between real and predicted values. Further, we examine the factors that affect the performance of the considered deep learning method with different activation functions, namely ReLU, sigmoid and tanh. We also use a new activation function, namely sech which is not used in the field of deep learning and analyze whether this new activation function is suitable for the prediction of soliton solution of nonlinear Schrödinger equation for the aforementioned parity time symmetric potentials. In addition to the above, we present how the network's structure and the size of the training data influence the performance of the physics informed neural network. Our results show that the constructed deep learning model successfully approximates the soliton solution of the considered equation with high accuracy. | \section{Introduction}
\par For the past four decades, solitons and their applications have been studied in depth in several branches of optics. In particular, the prospect of harnessing the fruitfulness of solitons in nonlinear fiber optics and communication systems has attracted a plethora of interest \cite{malomed}. Mathematically, the dynamics of such optical soliton pulses can be described by the nonlinear Schr\"odinger (NLS) equation. By properly managing the dispersion and nonlinearity parameters in the NLS equation, one can generate a stable soliton.
\par It has also been shown that by introducing a proper complex parity-time ($\mathcal{PT}$) symmetric potential in the NLS equation, one can gain more access on the optical soliton pulse propagation. Even though the complex $\mathcal{PT}$-symmetric potential is non-Hermitian in nature, the underlying system admits real eigenspectra \cite{bender1998} and it also supports continuous range of stable optical solitons \cite{musslimani2008optical,kominis2019continuous,zyan}. The dynamical behaviours of $\mathcal{PT}$-symmetric optical solitons have been investigated in many optical experiments and theoretical models \cite{guo2009observation,Ruter,regensburger,Makris,Chen2,wen,hari,
manikandan2018deformation,manikandan2021nonlinear}.
\par Nowadays Machine Learning (ML) and Deep Learning (DL) approaches have become important tools in the prediction task in various fields of physics \cite{Carleo2019,sudhe1,sudhe3,santo1}. In the field of nonlinear dynamics, ML methods have been used for the replication of chaotic attractors \cite{Pathak2017}, prediction of chaotic laser pulses amplitude \cite{Amil2019}, detection of unstable periodic orbits \cite{zhu2019}, chaotic signals separation \cite{Krishnagopal2020}, network classification from symbolic time series \cite{Panday2021}, identification of chimera states \cite{BARMPARIS2020,ganaie2020,kushwaha2020} and also in the study of extreme events \cite{lellep2020, meiysudha1, dibak1, ray2021optimized, meiysudha2}.
\par The rapid growth in the field of DL enables us to solve linear and nonlinear partial differential equations (PDEs) by an approximation technique, namely Physics Informed Neural Network (PINN) which was introduced by Raissi {et al.} \cite{raissi2019physics}. For the past couple of years PINNs have been widely used to solve NLS equation and its generalizations \cite{pu2021data, wang2021data, zhou2021deep, wang2021data2, mo2022data}. In this direction, quite recently, the logarithmic NLS equation with $\mathcal{PT}$-symmetric harmonic potential and Scarf-II potential have been solved through PINN approach \cite{zhou2021solving,li2021solving}. In the present work, we consider NLS equation with three different $\mathcal{PT}$-symmetric potentials, namely Gaussian, periodic and Rosen-Morse potentials and approximate the soliton solution of all three cases with the PINN approach. In our study, we introduce a new activation function, namely $\sech$ and test the ability of this new function by comparing it with the other activation functions that are being used in the literature. To the best of our knowledge, this is the first time wherein the PINN approach is being used to solve the NLS equation with the above said $\mathcal{PT}$-symmetric potentials and this is also the first time $\sech$ is used as an activation function in this approach.
\par We organize our presentation as follows. In Sec. II, we present the methodologies involved in the PINN approach and the general way of solving the considered NLS equation with the PINN method. The data driven soliton solutions of the NLS equation with all three considered potentials, a comparison with the exact solutions and the error occurring in this approximation are given in Sec. III. A comparative study on the factors that affect the performance of the PINN is given in Sec. IV. We present our conclusions in Sec. V.
\section{PINN and the NLS equation with $\mathcal{PT}$-symmetric potential}
\subsection{The scheme of PINN}
\par Usually, PINNs are used for solving nonlinear PDEs of the general form \cite{raissi2019}
\begin{equation}
u_t - \mathcal{N}[u(x,t);\lambda] = 0, \quad x \in \Omega,\quad t\in[0,T]. \label{main_eq}
\end{equation}
In this work, we consider complex nonlinear PDEs with the following initial and boundary conditions:
\begin{equation}
\begin{cases}\label{pdeibc}
iu_t=\mathcal{N}[u(x,t);\lambda_0], \quad &x \in \Omega,\quad t\in[0,T],\\
I[u(x,t)]|_{t=0}=u_I(x), \quad &x\in\Omega\; (\textrm{initial condition}),\\
B[u(x,t)]|_{x\in\partial \Omega}=u_B(t), \quad &t\in [0,T]\;(\textrm{boundary conditions}),
\end{cases}
\end{equation}
where $u(x,t)$ is the solution of the PDE, $\mathcal{N}[.,\lambda]$ is the combination of linear and nonlinear operators which are parametrized by the initial vector $\lambda_0$, $[0,T]$ represents the lower and upper boundary of the time variable $t$, $\Omega$ and $\partial\Omega$ denote the spatial variable range and the boundary of that domain respectively, $I$ and $B$ are operators corresponding to initial and boundary values, $I[u(x,t)]|_{t=0}=u_I(x)$ and $B[u(x,t)]|_{x\in\partial \Omega}=u_B(t)$ respectively represent the initial and boundary conditions. We define a complex-valued physics model $f(x,t)$ as follows
\begin{equation}
f(x,t) := iu_t-\mathcal{N}[u;\lambda_0].
\end{equation}
\par We can differentiate the latent solution $u(x,t)$ with respect to the time variable $t$ and the spatial variable $x$ using Automatic Differentiation (AD) \cite{baydin2018,margossian2019review}, a chain-rule based technique which also underlies Back Propagation (BP) \cite{rumelhart1986} in Artificial Neural Networks (ANN). For the implementation of BP, AD and other optimization steps involved in the complex-valued PINN, we use Tensorflow \cite{abadi2016}, which is a well known open-source software library for AD and DL computations. We use four different kinds of activation functions for the activation of neurons in the ANN (a comparative study on the activation functions is given in Sec.~\ref{a_fn_sec}). However, for the main study, we choose $\tanh$ as the nonlinear activation function, which is commonly used in the current literature in the form \cite{raissi2019}
\begin{equation}
Z_l = \tanh(w_l.Z_{l-1}+b_l), \qquad l = 1,2,3,...n, \label{tanh}
\end{equation}
where $w_l$ is the dim($Z_l$)$\times$dim($Z_{l-1}$) weight matrix and $b_l$ is the dim($Z_l$) bias vector.
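For illustration, a single evaluation of \eqref{tanh} for one hidden layer of 100 neurons can be sketched in Python as follows (the random parameters are placeholders only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Z_prev = rng.standard_normal(2)        # previous layer output, dim(Z_{l-1}) = 2
w = rng.standard_normal((100, 2))      # weight matrix, dim(Z_l) x dim(Z_{l-1})
b = rng.standard_normal(100)           # bias vector, dim(Z_l)
Z = np.tanh(w @ Z_prev + b)            # layer output Z_l, cf. eq. (tanh)
\end{verbatim}
We define the loss function for the training process as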
\begin{equation}
\begin{aligned}[b]\label{loss}
L_{Train} = &\frac{1}{N_I}\sum_{j=1}^{N_I} \left|I[u(x_I^j,t)]|_{t=0}-u_I(x_I^j)\right|^2\\&+\frac{1}{N_B}\sum_{j=1}^{N_B} \left|B[u(x_B,t_B^j)]|_{x_B\in\partial D}-u_B(t_B^j)\right|^2\\
&+\frac{1}{N_C}\sum_{j=1}^{N_C}\left|f(x_C^j,t^j_C)\right|^2,
\end{aligned}
\end{equation}
where $\left\{x_I^j,u_I^j\right\}_{j=1}^{N_I}$ and $\left\{t_B^j,u_B^j\right\}_{j=1}^{N_B}$ respectively represent the initial and boundary conditions and $\left\{x_C^j, t_C^j, f(x_C^j,t_C^j)\right\}_{j=1}^{N_C}$ denote the collocation points of $f(x,t)$. We create the sample points using the Latin Hypercube Sampling (LHS) algorithm \cite{stein1987} and perform the optimization of the loss function with the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm \cite{liu1989}.
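As an illustration of the sampling step, the following Python sketch uses SciPy's quasi-Monte Carlo module as one possible LHS implementation (the reference \cite{stein1987} does not prescribe a particular library; the domain bounds anticipate the values used in Sec. III):
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

L, T = 10.0, 5.0                            # assumed domain half-width, final time
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=20000)              # N_C points in the unit square
xt_C = qmc.scale(unit, [-L, 0.0], [L, T])   # collocation points (x, t)

x_I = np.linspace(-L, L, 50)                # N_I initial-condition points
t_B = np.linspace(0.0, T, 50)               # boundary times, used at x = -L, x = L
\end{verbatim}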
\par The major steps involved in solving the PDE \eqref{main_eq}, with initial and boundary conditions \eqref{pdeibc}, using the PINN method, are given below:
\begin{enumerate}[label=(\roman*)]
\item Defining the structure of the ANN, i.e., fixing the number of layers and the number of neurons in each layer.
\item Preparing three training sets, namely (i) the initial condition set, (ii) boundary conditions sets and (iii) the random collocation points using the LHS technique \cite{stein1987}.
\item Getting the training loss function $L_{Train}$ given in \eqref{loss} by adding weighted $\mathbb{L}^2$-norm errors of the initial, boundary condition residuals and $f(x,t)$.
\item Training the ANN to obtain parameter values $\{\mathbf{\hat{w}},\mathbf{\hat{b}}\}$ that minimize $L_{Train}$, using the L-BFGS algorithm.
\end{enumerate}
Using these four steps we approximate the solution of the considered PDE.
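A minimal TensorFlow sketch of the collocation residual may clarify the above steps. It is our illustration only: it assumes a network of 4 hidden layers with 100 tanh neurons, implements as an example the residual of the cubic NLS equation without potential (i.e., $V = W = 0$, $\sigma = 1$; cf. Sec. II.B below), and all names are placeholders:
\begin{verbatim}
import tensorflow as tf

net = tf.keras.Sequential(
    [tf.keras.layers.Dense(100, activation="tanh") for _ in range(4)]
    + [tf.keras.layers.Dense(2)]           # two outputs: (u, v)
)

def residuals(x, t):
    # f_u, f_v for i psi_t + psi_xx + |psi|^2 psi = 0, psi = u + i v
    with tf.GradientTape(persistent=True) as g2:
        g2.watch([x, t])
        with tf.GradientTape(persistent=True) as g1:
            g1.watch([x, t])
            uv = net(tf.stack([x, t], axis=1))
            u, v = uv[:, 0], uv[:, 1]
        u_x = g1.gradient(u, x)
        v_x = g1.gradient(v, x)
        u_t = g1.gradient(u, t)
        v_t = g1.gradient(v, t)
    u_xx = g2.gradient(u_x, x)
    v_xx = g2.gradient(v_x, x)
    sq = u ** 2 + v ** 2
    return u_t + v_xx + sq * v, v_t - u_xx - sq * u

x = tf.Variable(tf.random.uniform([256], -10.0, 10.0))
t = tf.Variable(tf.random.uniform([256], 0.0, 5.0))
f_u, f_v = residuals(x, t)
L_C = tf.reduce_mean(f_u ** 2 + f_v ** 2)  # collocation part of L_Train
\end{verbatim}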
\subsection{PINN method for NLS equation with $\bm{\mathcal{PT}}$-symmetric potential}
\par We consider the NLS equation with a $\mathcal{PT}$-symmetric potential in the form,
\begin{equation}
i\psi_t+\psi_{xx}+P(x)\psi+\sigma|\psi|^2\psi=0, \label{nls_main}
\end{equation}
where $\psi=\psi(x,t)$ is a complex field, $\sigma$ is the nonlinear coefficient corresponding to focusing and defocusing interactions and $P(x)$ is the $\mathcal{PT}$-symmetric potential which has the form,
\begin{equation}
P(x) = [V(x)+iW(x)],
\end{equation}
where $V(x)$ and $W(x)$ are real and imaginary parts of the $\mathcal{PT}$-symmetric potential and they should satisfy the following two conditions:
\begin{equation}
V(-x) = V(x), \qquad W(-x) = -W(x). \label{con1}
\end{equation}
\par In this work, to obtain the soliton solution of \eqref{nls_main} using the above mentioned PINN method \cite{raissi2019}, we define the equation, initial and boundary conditions respectively as follows:
\begin{equation}
i\psi_t = -\psi_{xx}-P(x)\psi-\sigma|\psi|^2\psi, \quad x\in(-L,L), \quad t\in(0,T),\label{nls2}
\end{equation}
\begin{subequations}
\begin{equation}
\psi(x,0)=\psi_0(x), \quad x\in [-L,L],
\end{equation}
\begin{equation}
\psi(-L,t) = \psi(L,t), \quad t \in [0,T].
\end{equation}
\end{subequations}
\par Since the solution $\psi(x,t)$ of Eq. \eqref{nls2} is complex, we consider it in the form $\psi(x,t)=u(x,t)+iv(x,t)$, where $u(x,t)$ and $v(x,t)$ are two real functions denoting real and imaginary parts of the solution $\psi$, respectively. Now we use the associated complex-valued PINN as $f(x,t)=if_u(x,t)-f_v(x,t)$ with $-f_v(x,t)$ and $f_u(x,t)$ being the real and imaginary parts of $f(x,t)$ respectively. The explicit form of the functions read
\begin{subequations}
\begin{equation}
f(x,t) = i\psi_t+\psi_{xx}+[V(x)+iW(x)]\psi+\sigma|\psi|^2\psi,
\end{equation}
\begin{equation}
f_u(x,t) = u_t+v_{xx}+V(x)v+W(x)u+\sigma (u^2+v^2)v,
\end{equation}
\begin{equation}
f_v(x,t) = v_t-u_{xx}-V(x)u+W(x)v-\sigma (u^2+v^2)u,
\end{equation}
\end{subequations}
and using this, we approximate $\psi(x,t)$ by a complex-valued deep neural network. The shared parameters between $\psi(x,t)$ and $f(x,t)$ can be trained by minimizing $L_{Train}$ which is the combination of three mean squared errors as given below
\begin{equation}
L_{Train} = L_I + L_B + L_C,
\end{equation}
where the mean squared errors are taken in the form
\begin{equation}
\begin{aligned}[b]\label{losses}
L_I = &\frac{1}{N_I}\sum_{j=1}^{N_I} \left(\left|u(x_I^j,0)-u_0^j\right|^2+\left|v(x_I^j,0)-v_0^j\right|^2\right),\\
L_B = &\frac{1}{N_B}\sum_{j=1}^{N_B} \left(\left|u(-L,t_B^j)-u(L,t_B^j)\right|^2+\left|v(-L,t_B^j)-v(L,t_B^j)\right|^2\right),\\
L_C = &\frac{1}{N_C}\sum_{j=1}^{N_C}\left(\left|f_u(x_C^j,t^j_C)\right|^2+\left|f_v(x_C^j,t^j_C)\right|^2\right),
\end{aligned}
\end{equation}
with $\{x_I^j,u_0^j,v_0^j\}_{j=1}^{N_I}$ denoting the initial data, $\{t_B^j,u(\pm L,t_B^j), v(\pm L,t_B^j)\}_{j=1}^{N_B}$ denoting the boundary data and $\{x_C^j,t_C^j,f_u(x_C^j,t_C^j),f_v(x_C^j,t_C^j)\}_{j=1}^{N_C}$ denoting the collocation points on $f(x,t)$. The losses $L_I$, $L_B$ and $L_C$ respectively represent the $\mathbb{L}^2$-norm errors at the initial time, on the boundary, and inside the spatio-temporal domain. Figure \ref{fig:pinn} shows the schematic diagram of the PINN. The left panel of the figure corresponds to the ANN, where we have two input neurons for space and time and two output neurons for the real and imaginary parts of the solution. The right panel shows the physics information, which we encode in the form of the training loss function $L_{Train}$.
\begin{figure*}[!ht]
\includegraphics[width=0.9\linewidth]{full_fig.jpg}
\caption{\label{fig:pinn} Schematic diagram of PINN. The left panel shows the input and output layers with two neurons and $n$ hidden layers. Each neuron is activated by the activation function $A$. The right panel corresponds to the physics information of the PINN given as a loss function of the optimization problem.}
\end{figure*}
\section{Data driven solutions of the NLS equation with $\mathcal{PT}$-symmetric potentials}
\par In this section, we present the outcomes of DL while solving the focusing ($\sigma=1$) NLS equation with three $\mathcal{PT}$-symmetric potentials, namely (i) Gaussian, (ii) periodic and (iii) Rosen-Morse potentials. To make the PINN solve the considered problem, we need training data. The training set consists of $N_I = 50$ data points on the initial condition, $N_B=100$ data points on the periodic boundary conditions (50 on the upper boundary and another 50 on the lower boundary) and $N_C=20000$ collocation points which are chosen randomly using the LHS method \cite{stein1987}. We choose a 6-layer ANN in which the first and the last layers have two neurons each, used for the input $(x,t)$ and the output $(u(x,t),v(x,t))$. The other 4 hidden layers have 100 neurons each. The hyperbolic tangent function given in Eq. \eqref{tanh} is used for the activation of the neurons. The space and time intervals are set by $L=10$ and $T=5$, so the spatial and temporal domains are $[-10,10]$ and $[0,5]$, respectively. The PINN model has been run for 40,000 optimization steps to minimize the loss function $L_{Train}$. To verify the outcome of the PINN, we compare the solutions obtained from the PINN with numerical solutions. To generate the latter data, we solve the NLS equation \eqref{nls2} with the Fourier spectral method \cite{yang2010}, using a spectral Fourier discretization with 256 space modes and a fourth-order explicit Runge-Kutta temporal integrator with 201 points on the same space/time interval. So $\psi(x,t)$ is a $256\times 201$ matrix. We note here that the solution obtained using the above numerical method is used only to assess the accuracy of the PINN solution; training of the PINN itself does not require a numerical solution.
The above setup is kept the same for all three $\mathcal{PT}$-symmetric potentials throughout this work, except for the activation function, which we vary in each case in order to study its influence on the accuracy of solving the NLS equation with $\mathcal{PT}$-symmetric potentials using the PINN method.
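\par As an illustration of this setup, the sketch below builds a network of the stated shape (two input neurons, four hidden layers of 100 $\tanh$ units each, two output neurons) and draws the 20000 collocation points by Latin hypercube sampling over $[-10,10]\times[0,5]$. The use of PyTorch and SciPy here is an assumption for concreteness, not a description of our code.
\begin{verbatim}
import torch.nn as nn
from scipy.stats import qmc

# 6-layer ANN: 2 -> 100 -> 100 -> 100 -> 100 -> 2 with tanh activations.
layers = [2, 100, 100, 100, 100, 2]
modules = []
for n_in, n_out in zip(layers[:-1], layers[1:]):
    modules += [nn.Linear(n_in, n_out), nn.Tanh()]
net = nn.Sequential(*modules[:-1])  # no activation on the output layer

# 20000 collocation points via Latin hypercube sampling on [-10,10]x[0,5].
sampler = qmc.LatinHypercube(d=2, seed=0)
pts = qmc.scale(sampler.random(n=20000), [-10.0, 0.0], [10.0, 5.0])
x_C, t_C = pts[:, 0], pts[:, 1]
\end{verbatim}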
\subsection{NLS equation with $\mathcal{PT}$-symmetric Gaussian potential}
\par To begin, we consider the NLS equation \eqref{nls2} with $\mathcal{PT}$-symmetric Gaussian potential \cite{hu2011}
\begin{equation}
P(x) = V(x) + i W(x) = e^{-x^2}+iW_0 xe^{-x^2}, \label{gauss}
\end{equation}
where $W_0=0.1$ is the strength of the imaginary part. The real ($V(x)$) and imaginary ($W(x)$) parts of the potential $P(x)$ given in \eqref{gauss} satisfy the conditions given in \eqref{con1}. The Gaussian profile is taken as the initial profile to solve the NLS equation \eqref{nls2} with the above potential \eqref{gauss}. After training the PINN with the above-mentioned setup for the Gaussian potential, the approximated solution is shown in Fig. \ref{fig:gauss_main}.
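\par For concreteness, the potential \eqref{gauss} and a numerical check of the usual $\mathcal{PT}$-symmetry parity requirements ($V$ even, $W$ odd), which we take to be the content of \eqref{con1}, can be sketched as follows.
\begin{verbatim}
import numpy as np

W0 = 0.1
V = lambda x: np.exp(-x**2)           # real part, even in x
W = lambda x: W0 * x * np.exp(-x**2)  # imaginary part, odd in x

x = np.linspace(-10.0, 10.0, 101)
# PT-symmetry parity checks: V(-x) = V(x) and W(-x) = -W(x).
assert np.allclose(V(-x), V(x)) and np.allclose(W(-x), -W(x))
\end{verbatim}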
\begin{figure}[!ht]
\includegraphics[width=0.5\linewidth]{gauss_tanh_40k.jpg}
\caption{\label{fig:gauss_main} Results of the PINN in approximating the soliton solution of the NLS equation \eqref{nls2} with the Gaussian potential \eqref{gauss}. (a) The values of $|\psi(x,t)|$ predicted by the PINN; the stars represent randomly selected data points on the initial and boundary conditions. (b) The exact values of $|\psi(x,t)|$. (c) The squared error between the predicted and exact results. (d)-(f) Comparison of the PINN approximation with the exact soliton solution at the time instants $t=0.65$, $t=2.64$ and $t=4.38$.}
\end{figure}
Figures \ref{fig:gauss_main} (a) and (b) respectively show the predicted and exact magnitudes of the soliton solution $|\psi(x,t)|=\sqrt{u^2(x,t)+v^2(x,t)}$ of the NLS equation \eqref{nls2} with the $\mathcal{PT}$-symmetric Gaussian potential \eqref{gauss}. The star markers in Fig. \ref{fig:gauss_main} (a) denote the randomly chosen data points on the initial (50 points) and boundary (100 points) conditions. From Fig. \ref{fig:gauss_main} (a) we can see that the predicted soliton solution is very close to the exact solution shown in Fig. \ref{fig:gauss_main} (b). To examine the discrepancy between the two, we plot their squared error in Fig. \ref{fig:gauss_main} (c), which shows that the error between the predicted and exact solutions is of the order of $10^{-6}$. The relative $\mathbb{L}^2$-norm errors of $u(x,t)$, $v(x,t)$ and $\psi(x,t)$ are $2.1856 \times 10^{-2}$, $2.8822\times 10^{-2}$ and $3.1912\times 10^{-3}$ respectively. These results indicate that the considered PINN is able to approximate the soliton solution of the NLS equation with the Gaussian potential with low error values. Figures \ref{fig:gauss_main} (d)-(f) compare the exact and predicted soliton solutions at the times $t=0.65$, $t=2.64$ and $t=4.38$. The predicted solitons fit the exact soliton solutions well at all three time instants, which further confirms the ability of the PINN to solve the NLS equation for the given Gaussian $\mathcal{PT}$-symmetric potential.
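\par The relative $\mathbb{L}^2$-norm errors quoted here and below are assumed to be the usual normalized Euclidean norms of the discrepancy over the space-time grid, as in the following sketch (the normalization convention is our assumption).
\begin{verbatim}
import numpy as np

def rel_l2(pred, exact):
    # Relative L2-norm error: ||pred - exact||_2 / ||exact||_2.
    return np.linalg.norm(pred - exact) / np.linalg.norm(exact)
\end{verbatim}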
\subsection{NLS equation with $\mathcal{PT}$-symmetric periodic potential}
\par Let us now consider the potential $P(x)$ in \eqref{nls2} in the form \cite{musslimani2008optical}
\begin{equation}
P(x)=V(x)+iW(x) = \cos^2 x+i W_0\sin 2x, \label{periodic}
\end{equation}
where the strength of the imaginary part is $W_0=0.45$. Since the potential $P(x)$ is $\mathcal{PT}$-symmetric, its real ($V(x)$) and imaginary ($W(x)$) parts satisfy the conditions mentioned in \eqref{con1}. Here also the Gaussian profile is taken as the initial profile to solve the NLS equation \eqref{nls2} with the $\mathcal{PT}$-symmetric periodic potential \eqref{periodic}. With the common training setup described before, we obtain the soliton solution from the PINN. The obtained results are reported in Fig.~\ref{fig:periodic_main}.
\begin{figure}[!ht]
\includegraphics[width=0.5\linewidth]{periodic_tanh_40k.jpg}
\caption{\label{fig:periodic_main} Results of the PINN in approximating the soliton solution of the NLS equation \eqref{nls2} with the periodic potential \eqref{periodic}. (a) The values of $|\psi(x,t)|$ predicted by the PINN; the stars represent randomly selected data points on the initial and boundary conditions. (b) The exact values of $|\psi(x,t)|$. (c) The squared error between the predicted and exact results. (d)-(f) Comparison of the PINN approximation with the exact soliton solution at the time instants $t=0.65$, $t=2.64$ and $t=4.38$.}
\end{figure}
The predicted and exact magnitudes of the soliton solution $|\psi(x,t)|$ are shown in Figs.~\ref{fig:periodic_main} (a) and (b) respectively. The data points randomly chosen on the initial and boundary conditions for training are denoted by stars in Fig.~\ref{fig:periodic_main} (a). The squared error between the predicted and exact solutions is shown in Fig.~\ref{fig:periodic_main} (c); the error values are of the order of $10^{-5}$, which confirms that our PINN model succeeds in approximating the soliton solution of the NLS equation with the $\mathcal{PT}$-symmetric periodic potential \eqref{periodic} with high accuracy. Further, to check the correctness of the solution, we plot $|\psi(x,t)|$ at the times $t=0.65$, $t=2.64$ and $t=4.38$ in Figs.~\ref{fig:periodic_main} (d), (e) and (f) respectively; these figures also confirm that the solution obtained through the PINN is accurate, since the exact and predicted solutions coincide with each other. The relative $\mathbb{L}^2$-norm errors in $u(x,t)$, $v(x,t)$ and $\psi(x,t)$ in this case are found to be $5.0925\times 10^{-2}$, $4.9897\times 10^{-2}$ and $4.272\times 10^{-3}$ respectively.
\subsection{NLS equation with $\mathcal{PT}$-symmetric Rosen-Morse potential}
\par Next, we consider another $\mathcal{PT}$-symmetric potential, namely the Rosen-Morse potential, which is given by \cite{midya2013}
\begin{equation}
P(x)=V(x)+iW(x) = -a(a+1)\sech^2 x+i\; 2b\tanh x, \label{rose}
\end{equation}
where $a$ and $b$ are parameters, which we take as $0.1$ and $0.03$ respectively. The potential \eqref{rose} also satisfies the conditions given in \eqref{con1}. We take the initial profile in the form \cite{midya2013}
\begin{equation}
\psi(x) = \sqrt{a^2+a+2} \sech x e^{ibx}, \label{ini_rosean}
\end{equation}
which satisfies the stationary part of \eqref{nls2}. We use the same preliminary setup and train the PINN as in the cases of the Gaussian and periodic potentials; the outcome is presented in Fig.~\ref{fig:rosen_main}. Figures~\ref{fig:rosen_main} (a) and (b) respectively correspond to the predicted and exact magnitudes of the soliton solution of the NLS equation \eqref{nls2} with the Rosen-Morse potential \eqref{rose}. The stars at the boundary of Fig.~\ref{fig:rosen_main} (a) denote the data points taken on the initial and boundary conditions. The squared error between the exact and predicted solutions is of the order of $10^{-4}$, as reported in Fig.~\ref{fig:rosen_main} (c). Figures~\ref{fig:rosen_main} (a)-(c) reveal that the PINN succeeds in approximating the soliton solution for the Rosen-Morse potential as well. The relative $\mathbb{L}^2$-norm errors of $u(x,t)$, $v(x,t)$ and $\psi(x,t)$ for this case are found to be $3.7277\times 10^{-2}$, $3.2468\times 10^{-2}$ and $5.9112\times 10^{-3}$ respectively. In Figs.~\ref{fig:rosen_main} (d)-(f), the exact and predicted solitons are plotted on top of each other at the time instants $t=0.65$, $t=2.64$ and $t=4.38$ in order to check the accuracy of the prediction. From these figures we can see that the exact and predicted solutions overlap well, indicating that the predicted result is accurate.
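\par As an illustration, the initial profile \eqref{ini_rosean} can be evaluated on a spatial grid as in the sketch below; the grid size is illustrative.
\begin{verbatim}
import numpy as np

a, b = 0.1, 0.03
x = np.linspace(-10.0, 10.0, 256)
# psi(x) = sqrt(a^2 + a + 2) * sech(x) * exp(i*b*x), Eq. (ini_rosean).
psi0 = np.sqrt(a**2 + a + 2.0) / np.cosh(x) * np.exp(1j * b * x)
u0, v0 = psi0.real, psi0.imag  # real and imaginary parts fed to the PINN
\end{verbatim}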
\begin{figure}[!ht]
\includegraphics[width=0.5\linewidth]{rosen_tanh_40k.jpg}
\caption{\label{fig:rosen_main} Results of the PINN in approximating the soliton solution of the NLS equation \eqref{nls2} with the Rosen-Morse potential \eqref{rose}. (a) The values of $|\psi(x,t)|$ predicted by the PINN; the stars represent randomly selected data points on the initial and boundary conditions. (b) The exact values of $|\psi(x,t)|$. (c) The squared error between the predicted and exact results. (d)-(f) Comparison of the PINN approximation with the exact soliton solution at the time instants $t=0.65$, $t=2.64$ and $t=4.38$.}
\end{figure}
\subsection{Non-stationary solution of NLS equation with $\mathcal{PT}$-symmetric Rosen-Morse potential}
\par Finally, we analyze the ability of the PINN to predict non-stationary solutions of the NLS equation. Let us consider an initial profile of a non-stationary solution of the NLS equation with the potential \eqref{rose} in the form
\begin{equation}
\psi(x) = \sqrt{a^2+1} \sech x e^{-ibx}, \label{ini_rosean_non}
\end{equation}
which does not satisfy the stationary part of \eqref{nls2}. We fix the parameters as $a=1.75$ and $b=0.35$. The results obtained after training the PINN with the same preliminary setup as before are reported in Fig.~\ref{fig:rosen_non_stat}. The predicted and exact magnitudes of $\psi(x,t)$ are presented in Figs.~\ref{fig:rosen_non_stat} (a) and (b) respectively. From these two figures we observe that the PINN successfully predicts the non-stationary solution as well.
\begin{figure}[!ht]
\includegraphics[width=0.5\linewidth]{rosen_non_stat.jpg}
\caption{\label{fig:rosen_non_stat} Results of the PINN in approximating the non-stationary solution of the NLS equation \eqref{nls2} with the Rosen-Morse potential \eqref{rose}. (a) The values of $|\psi(x,t)|$ predicted by the PINN; the stars represent randomly selected data points on the initial and boundary conditions. (b) The exact values of $|\psi(x,t)|$. (c) The squared error between the predicted and exact results. (d)-(f) Comparison of the PINN approximation with the exact solution at the time instants $t=0.39$, $t=1.58$ and $t=2.78$.}
\end{figure}
We have examined the error between the predicted and exact results and plot the outcome in Fig.~\ref{fig:rosen_non_stat} (c). The relative $\mathbb{L}^2$-norm errors of $u(x,t)$, $v(x,t)$ and $\psi(x,t)$ for this case are $5.4182\times 10^{-1}$, $5.5958\times 10^{-1}$ and $2.8065\times 10^{-1}$ respectively. To verify the obtained solution we also plot it at the times $t=0.39$, $t=1.58$ and $t=2.78$ in Figs.~\ref{fig:rosen_non_stat} (d), (e) and (f) respectively. The exact and predicted magnitudes fit well in Figs. (d) and (e), but in Fig.~\ref{fig:rosen_non_stat}~(f) the predicted magnitude no longer fits the exact one well. This is due to the time-dependent nature of the solution. Our investigations thus reveal that the PINN can solve the considered non-stationary problem, although with reduced accuracy.
\section{Factors affecting the performance of PINN}
\subsection{Effect of activation functions} \label{a_fn_sec}
\par In our main study, we have chosen $\tanh$ as the activation function (see Eq.~\eqref{tanh}) because it gives the solution with a low error value. To study the effect of other activation functions in approximating the soliton solution for the $\mathcal{PT}$-symmetric potentials, we consider three other functions, namely the Rectified Linear Unit (ReLU), sigmoid and $\sech$. Since the problem under consideration is the approximation of the soliton solution of the NLS equation with various $\mathcal{PT}$-symmetric potentials, we also introduce $\sech$ as a new activation function, which, to our knowledge, has not been used in the field of DL. We consider the general form of the activation functions as given below:
\begin{subequations}
\begin{equation}
\textrm{(i)}\; Z_j = \mathrm{ReLU}(M) = \max(0, M),
\end{equation}
\begin{equation}
\textrm{(ii)}\;Z_j = \mathrm{sigmoid}(M) = \frac{1}{1+e^{-M}},
\end{equation}
\begin{equation}
\textrm{(iii)}\;Z_j = \sech(M),\qquad\qquad\qquad
\end{equation}
\begin{equation}
\textrm{(iv)}\;Z_j = \tanh(M),\qquad\qquad\qquad
\end{equation}
\end{subequations}
where $M = w_j\cdot Z_{j-1}+b_j$, as in \eqref{tanh}. The behaviour of each activation function can be visualized with the help of Fig.~\ref{fig:afun}.
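\par In code, the four candidates take the following elementary forms (a NumPy sketch; $\sech$ is simply the reciprocal of $\cosh$).
\begin{verbatim}
import numpy as np

relu    = lambda M: np.maximum(0.0, M)
sigmoid = lambda M: 1.0 / (1.0 + np.exp(-M))
sech    = lambda M: 1.0 / np.cosh(M)
tanh    = np.tanh
\end{verbatim}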
\begin{figure}[!ht]
\includegraphics[width=0.5\linewidth]{a_fn.pdf}
\caption{\label{fig:afun} Different kinds of activation functions}
\end{figure}
\begin{figure*}[!ht]
\includegraphics[width=0.85\linewidth]{gauss_afun_40k.jpg}
\caption{\label{fig:afun_gauss} PINN results for the Gaussian potential case with different activation functions. Rows one to four, respectively, correspond to the activation functions ReLU, sigmoid, sech and tanh. Figs.~(a), (e), (i) and (m) show the predicted magnitude of the soliton solutions; Figs.~(b), (f), (j) and (n) show the corresponding error values; Figs.~(c), (g), (k) and (o) in the third column and Figs.~(d), (h), (l) and (p) in the fourth column show the soliton solution at the time instants $t=0.65$ and $t=4.38$ respectively.}
\end{figure*}
\par The predicted values and the error values for the NLS equation with the $\mathcal{PT}$-symmetric Gaussian potential, for all four activation functions, are reported in Fig.~\ref{fig:afun_gauss}. Figure \ref{fig:afun_gauss} (a) reveals that ReLU fails in approximating the soliton solution: the squared error between the predicted and actual magnitudes of the soliton solution is only of the order of $10^{-1}$ (Fig.~\ref{fig:afun_gauss} (b)). The comparisons between the predicted and exact solutions at the times $t=0.65$ and $t=4.38$, presented in Figs.~\ref{fig:afun_gauss} (c) and (d), also confirm that a good approximation cannot be obtained for the Gaussian potential with ReLU as the activation function. The results for the other three activation functions, namely sigmoid, sech and tanh, are plotted in Figs.~\ref{fig:afun_gauss} (e)-(h), (i)-(l) and (m)-(p) respectively. From the outcome, we infer that the prediction made by the PINN with $\tanh$ as the activation function is the most accurate of the four. For the Gaussian potential, the squared error values are of the order of $10^{-1}$, $10^{-4}$, $10^{-5}$ and $10^{-6}$ for the ReLU, sigmoid, $\sech$ and $\tanh$ activation functions respectively.
\begin{figure*}[!ht]
\includegraphics[width=0.8\linewidth]{periodic_afun_40k.jpg}
\caption{\label{fig:afun_periodic} PINN results for the periodic potential case with different activation functions. Rows one to four, respectively, correspond to the activation functions ReLU, sigmoid, sech and tanh. Figs.~(a), (e), (i) and (m) show the predicted magnitude of the soliton solutions; Figs.~(b), (f), (j) and (n) show the corresponding error values; Figs.~(c), (g), (k) and (o) in the third column and Figs.~(d), (h), (l) and (p) in the fourth column show the soliton solution at the time instants $t=0.65$ and $t=4.38$ respectively.}
\end{figure*}
\par The results obtained from the PINN with the four different activation functions for the $\mathcal{PT}$-symmetric periodic potential are reported in Fig.~\ref{fig:afun_periodic}. Rows one to four show the results for the activation functions ReLU, sigmoid, sech and tanh respectively. From Figs.~\ref{fig:afun_periodic} (c) and (d) we can see that the approximation made by the PINN with ReLU as the activation function does not fit the exact result well. In the case of sech, the prediction is better than with the ReLU and sigmoid functions, since the squared error values come out lower. From the plots in the second column of Fig.~\ref{fig:afun_periodic} we infer that the squared error values for ReLU, sigmoid, $\sech$ and $\tanh$ are of the order of $10^{0}$, $10^{-3}$, $10^{-4}$ and $10^{-5}$ respectively.
\begin{figure*}[!ht]
\includegraphics[width=0.8\linewidth]{rosen_afun_40k.jpg}
\caption{\label{fig:afun_rosen} PINN results for the Rosen-Morse potential with different activation functions. Rows one to four, respectively, correspond to the activation functions ReLU, sigmoid, sech and tanh. Figs.~(a), (e), (i) and (m) show the predicted magnitude of the soliton solutions; Figs.~(b), (f), (j) and (n) show the corresponding error values; Figs.~(c), (g), (k) and (o) in the third column and Figs.~(d), (h), (l) and (p) in the fourth column show the soliton solution at the time instants $t=0.65$ and $t=4.38$ respectively.}
\end{figure*}
\begin{figure*}[!ht]
\includegraphics[width=0.75\linewidth]{scatter.jpg}
\caption{\label{fig:scatt} Scatter plots showing the performance of the PINN. (a)-(c) correspond to the results for the NLS equation with the Gaussian, periodic and Rosen-Morse potentials respectively.}
\end{figure*}
\begin{table*}[!ht]
\caption{\label{tab:a_fun} $\mathbb{L}^2$-norm error values in $u,v$ and $\psi$ of the PINN results approximated with different activation functions for all three considered potentials.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
& & \multicolumn{4}{c}{Activation functions} \\
$\mathcal{PT}$-symmetric \\Potentials & $\mathbb{L}^2$-norm Error& ReLU & sigmoid & $\sech$ & $\tanh$\\
\hline\hline
& u & $8.2281\times 10^{-1}$ & $1.6449\times 10^{-2}$ & $1.6841\times 10^{-2}$ & $2.1856\times 10^{-2}$ \\
Gaussian & v & $7.2510\times 10^{-1}$ & $2.0385\times 10^{-2}$ & $1.8659\times 10^{-2}$ & $2.8822\times 10^{-2}$ \\
& $\psi$ & $6.4185\times 10^{-1}$ & $1.1915\times 10^{-2}$ & $5.8949\times 10^{-3}$ &$3.1912\times 10^{-3}$\\
\hline
& u & $1.0433\times 10^{0}$ & $4.7092\times 10^{-2}$ & $2.9961\times 10^{-2}$ & $5.0925\times 10^{-2}$ \\
Periodic & v & $1.0236\times 10^{0}$ & $4.7094\times 10^{-2}$ & $2.9019\times 10^{-2}$ & $4.9897\times 10^{-2}$ \\
& $\psi$ & $6.9093\times 10^{-1}$ & $1.3379\times 10^{-2}$ & $5.2880\times 10^{-3}$ & $4.2720\times 10^{-3}$ \\
\hline
& u & $1.1185\times 10^{0}$ & $1.1149\times 10^{-1}$ & $2.4371\times 10^{-2}$ & $3.7277\times 10^{-2}$ \\
Rosen-Morse & v & $1.1529\times 10^{0}$ & $8.2902\times 10^{-2}$ & $2.4424\times 10^{-2}$ & $3.2468\times 10^{-2}$\\
& $\psi$ & $4.8206\times 10^{-1}$ & $3.3237\times 10^{-2}$ & $9.8817\times 10^{-3}$ & $5.9112\times 10^{-3}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\par The prediction results of the PINN with the different activation functions for the NLS equation with the Rosen-Morse potential are shown in Fig.~\ref{fig:afun_rosen}. The results for the PINN with ReLU as the activation function are shown in Figs.~\ref{fig:afun_rosen} (a)-(d); from Fig.~\ref{fig:afun_rosen} (a) we can see that this approximation is not accurate. From the squared error plots in the second column of Fig.~\ref{fig:afun_rosen}, we can see that the error value is lowest for the $\tanh$ function. The plots in the third and fourth columns of Fig.~\ref{fig:afun_rosen} confirm that the predicted and exact results fit each other best for the $\tanh$ activation function when compared with the other activation functions.
\par The overall outcome is presented in Table \ref{tab:a_fun}, where, for a systematic comparison, we collect the $\mathbb{L}^2$-norm errors in approximating $u$, $v$ and $\psi$ for all three potentials and all four activation functions. The $\mathbb{L}^2$-norm errors of the PINN with ReLU as the activation function are very high for all three potentials when compared with the other three activation functions. This is due to the piecewise linearity of the ReLU function. In the case of the sigmoid activation function, the $\mathbb{L}^2$-norm error is of the order of $10^{-2}$. Comparing the outcomes of the PINN with the $\sech$ and $\tanh$ functions, the error value is slightly lower in the case of $\tanh$ when approximating $\psi$, but for the approximation of the real ($u$) and imaginary ($v$) parts of the solution the PINN with the sech function has lower error values for all three considered potentials. Finally, we note that the time taken for training the PINN with the $\sech$ activation function is longer than for the other three activation functions.
\subsection{Effect of structure of the network}
\par ANNs have a large number of parameters, namely the weight and bias matrices, which are adjusted during the optimization process to minimize the given loss function. The structure of the ANN therefore influences the accuracy of the PINN on the considered task. The hyper-parameters that describe the structure of the ANN are the depth of the network (the number of hidden layers) and the width of the network (the number of units in each layer). We test the impact of these hyper-parameters for all three potentials with $\tanh$ as the activation function and present the outcomes in Tables~\ref{tab:layers}, \ref{tab:nodes1} and \ref{tab:nodes2}.
\begin{table*}[!ht]
\caption{\label{tab:layers} $\mathbb{L}^2$-norm error values in $u,v$ and $\psi$ of the PINN results approximated with different number of hidden layers of ANN for all three considered potentials.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
& & \multicolumn{4}{c}{Number of hidden layers} \\
$\mathcal{PT}$-symmetric \\Potentials & $\mathbb{L}^2$-norm Error& 1 & 2 & 3 & 4\\
\hline\hline
& u & $4.0425\times 10^{-2}$ & $2.0164\times 10^{-2}$ & $2.3489\times 10^{-2}$ & $2.1856\times 10^{-2}$ \\
Gaussian & v & $6.9994\times 10^{-2}$ & $2.4169\times 10^{-2}$ & $3.1864\times 10^{-2}$ & $2.8822\times 10^{-2}$ \\
& $\psi$ & $2.8418\times 10^{-2}$ & $5.5559\times 10^{-3}$ & $3.5918\times 10^{-3}$ & $3.1912\times 10^{-3}$\\
\hline
& u & $7.9002\times 10^{-1}$ & $3.2177\times 10^{-2}$ & $4.8171\times 10^{-2}$ & $5.0925\times 10^{-2}$ \\
Periodic & v & $9.7382\times 10^{-1}$ & $3.1851\times 10^{-2}$ & $4.7245\times 10^{-2}$ & $4.9897\times 10^{-2}$ \\
& $\psi$ & $1.8630\times 10^{-1}$ & $7.6633\times 10^{-3}$ & $4.0443\times 10^{-3}$ & $4.2720\times 10^{-3}$\\
\hline
& u & $3.9728\times 10^{-1}$ & $2.1559\times 10^{-2}$ & $3.7352\times 10^{-2}$ & $3.7277\times 10^{-2}$\\
Rosen-Morse & v & $3.7073\times 10^{-1}$ & $2.1047\times 10^{-2}$ & $3.2467\times 10^{-2}$ & $3.2468\times 10^{-2}$\\
& $\psi$ & $8.3272\times 10^{-2}$ & $6.4215\times 10^{-3}$ & $5.8233\times 10^{-3}$ & $5.9112\times 10^{-3}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
Table~\ref{tab:layers} gives the $\mathbb{L}^2$-norm errors of PINNs with the number of hidden layers varying from 1 to 4; in this study, we fix the number of units per layer at 100. It is clear from this table that the performance of the PINN with a single hidden layer is much worse than that of the PINNs with more hidden layers. Further, as the number of layers increases, the performance of the PINN improves for all three potentials, i.e. the $\mathbb{L}^2$-norm errors decrease. In the cases of the periodic and Rosen-Morse potentials the errors of the PINN with 4 hidden layers are slightly higher than those of the PINN with 3 hidden layers, but the difference is small. Since we need a model that performs well in finding the solution of the NLS equation for all three considered potentials, we fix the number of hidden layers at 4.
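\par A hypothetical sweep of the kind behind Table~\ref{tab:layers} is sketched below; \texttt{train} stands for the 40,000-step optimization described earlier and is not shown.
\begin{verbatim}
import torch.nn as nn

def build_net(layers):
    # Fully connected tanh network with a linear output layer.
    mods = []
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        mods += [nn.Linear(n_in, n_out), nn.Tanh()]
    return nn.Sequential(*mods[:-1])

for n_hidden in range(1, 5):  # 1 to 4 hidden layers, 100 units each
    net = build_net([2] + [100] * n_hidden + [2])
    # train(net) and record the relative L2 errors in u, v and psi.
\end{verbatim}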
\begin{table*}[!ht]
\caption{\label{tab:nodes1} $\mathbb{L}^2$-norm error values in $u,v$ and $\psi$ of the PINN results approximated with different number of neurons (10-50) in each hidden layer of ANN for all three considered potentials.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
& & \multicolumn{5}{c}{Number of neurons in the hidden layers} \\
$\mathcal{PT}$-symmetric \\Potentials & $\mathbb{L}^2$-norm Error& 10 & 20 & 30 & 40 & 50\\
\hline\hline
& u & $2.1027\times 10^{-2}$ & $1.9056\times 10^{-2}$ & $2.1658\times 10^{-2}$ & $2.0601\times 10^{-2}$ & $1.9279\times 10^{-2}$ \\
Gaussian & v & $2.7296\times 10^{-2}$ & $2.2187\times 10^{-2}$ & $2.7912\times 10^{-2}$ & $2.5865\times 10^{-2}$ & $2.3295\times 10^{-2}$\\
& $\psi$ & $4.4595\times 10^{-3}$ & $5.7220\times 10^{-3}$ & $3.8554\times 10^{-3}$ & $4.1573\times 10^{-3}$ & $4.5314\times 10^{-3}$\\
\hline
& u & $5.3167\times 10^{-1}$ & $3.7814\times 10^{-2}$ & $4.2063\times 10^{-2}$ & $4.5505\times 10^{-2}$ & $4.6104\times 10^{-2}$\\
Periodic & v & $7.0910\times 10^{-1}$ & $3.6862\times 10^{-2}$ & $4.1328\times 10^{-2}$ & $4.4600\times 10^{-2}$ & $4.5129\times 10^{-2}$\\
& $\psi$ & $1.3475\times 10^{-1}$ & $4.3729\times 10^{-3}$ & $4.1834\times 10^{-3}$ & $4.0019\times 10^{-3}$ & $3.7271\times 10^{-3}$\\
\hline
& u & $1.4408\times 10^{-1}$ & $2.5117\times 10^{-2}$ & $3.9776\times 10^{-2}$ & $2.7235\times 10^{-2}$ & $3.1777\times 10^{-2}$\\
Rosen-Morse & v & $1.0567\times 10^{-1}$ & $2.3972\times 10^{-2}$ & $3.3787\times 10^{-2}$ & $2.5332\times 10^{-2}$ & $2.7809\times 10^{-2}$\\
& $\psi$ & $3.2571\times 10^{-2}$ & $7.2073\times 10^{-3}$ & $5.2887\times 10^{-3}$ & $5.1823\times 10^{-3}$ & $5.0052\times 10^{-3}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[!ht]
\caption{\label{tab:nodes2} $\mathbb{L}^2$-norm error values in $u,v$ and $\psi$ of the PINN results approximated with different number of neurons (60-100) in each hidden layer of ANN for all three considered potentials.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
& & \multicolumn{5}{c}{Number of neurons in the hidden layers} \\
$\mathcal{PT}$-symmetric \\Potentials & $\mathbb{L}^2$-norm Error& 60 & 70 & 80 & 90 & 100\\
\hline\hline
& u & $1.8949\times 10^{-2}$ & $1.9171\times 10^{-2}$ & $1.8510\times 10^{-2}$ & $2.0515\times 10^{-2}$ & $2.1856\times 10^{-2}$ \\
Gaussian & v & $2.2847\times 10^{-2}$ & $2.3018\times 10^{-2}$ & $2.1814\times 10^{-2}$ & $2.5669\times 10^{-2}$ & $2.8822\times 10^{-2}$ \\
& $\psi$ & $4.3099\times 10^{-3}$ & $4.6188\times 10^{-3}$ & $4.9689\times 10^{-3}$ & $4.0189\times 10^{-3}$ & $3.1912\times 10^{-3}$\\
\hline
& u & $4.8924\times 10^{-2}$ & $4.3401\times 10^{-2}$ & $5.2175\times 10^{-2}$ & $5.0627\times 10^{-2}$ & $5.0925\times 10^{-2}$\\
Periodic & v & $4.7821\times 10^{-2}$ & $4.2629\times 10^{-2}$ & $5.0985\times 10^{-2}$ & $4.9554\times 10^{-2}$ & $4.9897\times 10^{-2}$ \\
& $\psi$ & $3.8299\times 10^{-3}$ & $3.8624\times 10^{-3}$ & $4.0359\times 10^{-3}$ & $3.9781\times 10^{-3}$ & $4.2720\times 10^{-3}$\\
\hline
& u & $3.0926\times 10^{-2}$ & $3.7381\times 10^{-2}$ & $4.4708\times 10^{-2}$ & $4.1246\times 10^{-2}$ & $3.7277\times 10^{-2}$\\
Rosen-Morse & v & $2.7566\times 10^{-2}$ & $3.2622\times 10^{-2}$ & $3.7069\times 10^{-2}$ & $3.5090\times 10^{-2}$ & $3.2468\times 10^{-2}$\\
& $\psi$ & $4.9355\times 10^{-3}$ & $5.8375\times 10^{-3}$ & $5.3917\times 10^{-3}$ & $5.6393\times 10^{-3}$ & $5.9112\times 10^{-3}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\par Next we examine how the performance of the PINN is affected by the number of neurons in the hidden layers. For this we consider the PINN with the $\tanh$ activation function and four hidden layers, and vary the number of neurons from 10 to 100; the results for 10-50 neurons are presented in Table~\ref{tab:nodes1} and those for 60-100 neurons in Table~\ref{tab:nodes2}. From these two tables, we can see that the $\mathbb{L}^2$-norm errors are very high for the PINN with ten neurons. As the number of neurons increases, the errors generally decrease, although in some cases they oscillate between low and high values, since increasing the number of neurons enlarges the weight and bias matrices and the model has to optimize a larger number of parameters. The error for the solution of the NLS equation with the Gaussian potential reaches a low value only when the PINN is trained with 100 neurons, so we fix the number of neurons in each hidden layer at 100.
\begin{table*}[!ht]
\caption{\label{tab:c_points} $\mathbb{L}^2$-norm error values in $u,v$ and $\psi$ of the PINN results approximated with different number of collocation points for all three considered potentials.}
\begin{ruledtabular}
\begin{tabular}{cccccc}
& & \multicolumn{4}{c}{Number of collocation points} \\
$\mathcal{PT}$-symmetric \\Potentials & $\mathbb{L}^2$-norm Error& 5000 & 10000 & 15000 & 20000\\
\hline\hline
& u & $2.0919\times 10^{-2}$ & $1.9999\times 10^{-2}$ & $1.9238\times 10^{-2}$ & $2.1856\times 10^{-2}$ \\
Gaussian & v & $2.6675\times 10^{-2}$ & $2.4926\times 10^{-2}$ & $2.3326\times 10^{-2}$ & $2.8822\times 10^{-2}$ \\
& $\psi$ & $3.6122\times 10^{-3}$ & $4.0438\times 10^{-3}$ & $4.1420\times 10^{-3}$ & $3.1912\times 10^{-3}$\\
\hline
& u & $4.5870\times 10^{-2}$ & $5.1518\times 10^{-2}$ & $4.2062\times 10^{-2}$ & $5.0925\times 10^{-2}$ \\
Periodic & v & $4.4983\times 10^{-2}$ & $5.0452\times 10^{-2}$ & $4.1056\times 10^{-2}$ & $4.9897\times 10^{-2}$ \\
& $\psi$ & $3.8983\times 10^{-3}$ & $3.8867\times 10^{-3}$ & $3.8140\times 10^{-3}$ & $4.2720\times 10^{-3}$\\
\hline
& u & $3.8685\times 10^{-2}$ & $3.7610\times 10^{-2}$ & $4.6277\times 10^{-2}$ & $3.7277\times 10^{-2}$\\
Rosen-Morse & v & $3.3381\times 10^{-2}$ & $3.2416\times 10^{-2}$ & $3.8593\times 10^{-2}$ & $3.2468\times 10^{-2}$\\
& $\psi$ & $6.0374\times 10^{-3}$ & $5.6554\times 10^{-3}$ & $5.8951\times 10^{-3}$ & $5.9112\times 10^{-3}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\begin{table*}[!ht]
\caption{\label{tab:ini_bound} $\mathbb{L}^2$-norm error values in $u,v$ and $\psi$ of the PINN results approximated with different number of initial and boundary points of the considered domain for all three considered potentials.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
& & \multicolumn{5}{c}{Number of initial and boundary points} \\
$\mathcal{PT}$-symmetric \\Potentials & $\mathbb{L}^2$-norm Error& 10 & 20 & 30 & 40 & 50\\
\hline\hline
& u & $1.7453\times 10^{-2}$ & $1.7899\times 10^{-2}$ & $1.6184\times 10^{-2}$ & $2.1619\times 10^{-2}$ & $2.1856\times 10^{-2}$ \\
Gaussian & v & $2.0003\times 10^{-2}$ & $2.0321\times 10^{-2}$ & $1.7222\times 10^{-2}$ & $2.8040\times 10^{-2}$ & $2.8822\times 10^{-2}$ \\
& $\psi$ & $5.5589\times 10^{-3}$ & $4.9169\times 10^{-3}$ & $6.3697\times 10^{-3}$ & $3.6132\times 10^{-3}$ & $3.1912\times 10^{-3}$\\
\hline
& u & $2.5785\times 10^{-2}$ & $4.5283\times 10^{-2}$ & $3.3628\times 10^{-2}$ & $4.9036\times 10^{-2}$ & $5.0925\times 10^{-2}$ \\
Periodic & v & $2.4970\times 10^{-2}$ & $4.4399\times 10^{-2}$ & $3.2921\times 10^{-2}$ & $4.8044\times 10^{-2}$ & $4.9897\times 10^{-2}$ \\
& $\psi$ & $6.9655\times 10^{-3}$ & $4.0064\times 10^{-3}$ & $4.3703\times 10^{-3}$ & $3.9586\times 10^{-3}$ & $4.2720\times 10^{-3}$\\
\hline
& u & $2.1818\times 10^{-2}$ & $3.6917\times 10^{-2}$ & $3.4343\times 10^{-2}$ & $3.9939\times 10^{-2}$ & $3.7277\times 10^{-2}$\\
Rosen-Morse & v & $2.2776\times 10^{-2}$ & $3.1811\times 10^{-2}$ & $2.9756\times 10^{-2}$ & $3.4444\times 10^{-2}$ & $3.2468\times 10^{-2}$\\
& $\psi$ & $1.0650\times 10^{-2}$ & $5.2218\times 10^{-3}$ & $4.9235\times 10^{-3}$ & $5.0712\times 10^{-3}$ & $5.9112\times 10^{-3}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Effect of sampling points}
\par We use sampling points drawn by LHS~\cite{stein1987} as the input to the PINN model. These training data points also influence the performance of the PINN in solving the considered problem. The results with different numbers of collocation points are presented in Table~\ref{tab:c_points}. For this study we use the $\tanh$ activation function, four hidden layers with 100 neurons each, 50 initial points and 50 points on each of the upper and lower boundaries. From Table~\ref{tab:c_points} it is clear that the $\mathbb{L}^2$-norm errors vary with the number of collocation points; with 20000 collocation points the errors are consistently low for all cases, especially for the Gaussian potential. Since our aim is to construct a DL model that solves the NLS equation well for all three considered potentials, it is preferable to use a larger number of collocation points inside the domain, so that the model trains on more points and yields a more accurate solution.
\par Finally, we experiment with the PINN by varying the number of initial and boundary sampling points; the resulting $\mathbb{L}^2$-norm errors in $u,v$ and $\psi$ for all three potentials are presented in Table~\ref{tab:ini_bound}. We vary the number of points from 10 to 50 in both the initial and boundary regions. The quoted boundary number denotes the number of points taken on each of the upper and lower boundaries; for example, the entry 10 means 10 points on the initial region and 10 points each on the upper and lower boundaries, i.e. 20 boundary points in total. Here also we use the PINN with the $\tanh$ activation function, 4 hidden layers of 100 neurons each and 20000 collocation points inside the domain. From the results shown in Table~\ref{tab:ini_bound} we observe that in most cases the errors become lower as the number of points increases, although in some cases they oscillate between low and high values; in particular, for the NLS equation with the Rosen-Morse potential the error in $\psi$ is noticeably higher when the PINN is trained with only 10 initial and boundary points. As in the earlier cases, we fix the number of initial and boundary points at 50, since the PINN with this setup yields consistently low $\mathbb{L}^2$-norm errors for all three considered potentials.
\par It is worth noting that all the results presented above may vary between repeated training runs because of the stochastic nature of the sampling technique and of the optimization algorithm.
\section{Conclusion}
\par In this work, we have considered the NLS equation with three $\mathcal{PT}$-symmetric potentials, namely the Gaussian, periodic and Rosen-Morse potentials, and approximated the soliton solution of the NLS equation with the help of a DL approach, the so-called PINN. For this purpose, we have considered a complex-valued PINN with $\tanh$ as the activation function. The PINN solves the given equation for the prescribed initial and boundary conditions by minimizing the mean squared error loss. We have used 20000 collocation points sampled by LHS \cite{stein1987}, together with 50 initial and 100 boundary data points. The predicted and exact magnitudes of the soliton solution, and the squared error between them, are evaluated and plotted for the three considered potentials. Further, we have also plotted the exact and predicted magnitudes of the soliton solution on top of each other at various instants of time. From the results, we conclude that our constructed PINN can precisely approximate the soliton solution of the given NLS equation for all three potentials. The squared errors are found to be very low, of the order of $10^{-6}$, $10^{-5}$ and $10^{-4}$ for the Gaussian, periodic and Rosen-Morse potentials respectively.
\par To visualize the performance of the PINN with tanh as the activation function, we also present scatter plots of the actual versus the predicted data for all three considered potentials in Fig.~\ref{fig:scatt}; the scatter plots for the NLS equation with the Gaussian, periodic and Rosen-Morse potentials are shown in Figs.~\ref{fig:scatt} (a)-(c) respectively. They confirm that the considered PINN accurately predicts the soliton solution in all three cases. Further, to analyse the factors that influence the performance of the PINN, we tested the effects of the activation function, the network structure and the sampling points. First, we considered three standard functions, namely ReLU, sigmoid and $\tanh$, along with a new activation function, $\sech$. The PINNs with ReLU and sigmoid as activation functions approximated the soliton solution with lower accuracy than the PINNs with $\sech$ and $\tanh$. We also quantified the ability of these different PINNs by calculating the $\mathbb{L}^2$-norm errors of the real ($u$) and imaginary ($v$) parts of the solution ($\psi$) for all three considered potentials. From the results, we conclude that the PINN can approximate the soliton solution of the NLS equation for the considered $\mathcal{PT}$-symmetric potentials with $\tanh$ and $\sech$ as activation functions. We have also examined the effect of the depth and width of the PINN on its performance; from the obtained results we fixed the number of hidden layers at four, with 100 neurons in each layer. Finally, we experimented with the number of sampling points in the collocation, initial and boundary regions. From the outcomes we found that the training data should comprise 20000 collocation points, 50 initial points and 50 points on each boundary in order to obtain a highly accurate solution for the considered problem. The considered DL model, namely the PINN, can thus be used for solving the NLS equation with $\mathcal{PT}$-symmetric potentials.
\begin{acknowledgments}
JM thanks MoE - RUSA 2.0 Physical Sciences, Government of India for providing a fellowship to carry out this work. KM and JBS are funded by the Center for Nonlinear Systems, Chennai Institute of Technology, India, vide funding number CIT/CNS/2021/RP-015. The work of MS forms part of a research project sponsored by NBHM, Government of India, under the Grant No. 02011/20/2018 NBHM (R.P)/R\&D II/15064. MS also acknowledges MoE - RUSA 2.0 Physical Sciences, Government of India for providing financial support in procuring a high-performance GPU server which greatly assisted this work.
\end{acknowledgments}
\section*{Data Availability Statement}
The data that support the findings of this study are available within the article.
| {
"timestamp": "2022-04-20T02:07:22",
"yymm": "2204",
"arxiv_id": "2204.08596",
"language": "en",
"url": "https://arxiv.org/abs/2204.08596",
"abstract": "We investigate the physics informed neural network method, a deep learning approach, to approximate soliton solution of the nonlinear Schrödinger equation with parity time symmetric potentials. We consider three different parity time symmetric potentials, namely Gaussian, periodic and Rosen-Morse potentials. We use physics informed neural network to solve the considered nonlinear partial differential equation with the above three potentials. We compare the predicted result with actual result and analyze the ability of deep learning in solving the considered partial differential equation. We check the ability of deep learning in approximating the soliton solution by taking squared error between real and predicted values. {Further, we examine the factors that affect the performance of the considered deep learning method with different activation functions, namely ReLU, sigmoid and tanh. We also use a new activation function, namely sech which is not used in the field of deep learning and analyze whether this new activation function is suitable for the prediction of soliton solution of nonlinear Schrödinger equation for the aforementioned parity time symmetric potentials. In addition to the above, we present how the network's structure and the size of the training data influence the performance of the physics informed neural network. Our results show that the constructed deep learning model successfully approximates the soliton solution of the considered equation with high accuracy.",
"subjects": "Pattern Formation and Solitons (nlin.PS)",
"title": "Data driven soliton solution of the nonlinear Schrödinger equation with certain $\\mathcal{PT}$-symmetric potentials via deep learning",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9539660989095221,
"lm_q2_score": 0.74316801430083,
"lm_q1q2_score": 0.7089570914368988
} |
https://arxiv.org/abs/0808.0555 | Pairing Functions, Boolean Evaluation and Binary Decision Diagrams in Prolog | A "pairing function" J associates a unique natural number z to any two natural numbers x,y such that for two "unpairing functions" K and L, the equalities K(J(x,y))=x, L(J(x,y))=y and J(K(z),L(z))=z hold. Using pairing functions on natural number representations of truth tables, we derive an encoding for Binary Decision Diagrams with the unique property that its boolean evaluation faithfully mimics its structural conversion to a natural number through recursive application of a matching pairing function. We then use this result to derive {\em ranking} and {\em unranking} functions for BDDs and reduced BDDs. The paper is organized as a self-contained literate Prolog program, available at this http URL. Keywords: logic programming and computational mathematics, pairing/unpairing functions, encodings of boolean functions, binary decision diagrams, natural number representations of truth tables |
\section{Introduction}
This paper is an exploration with logic programming tools of {\em ranking} and
{\em unranking} problems on Binary Decision Diagrams. The practical
expressiveness of logic programming languages (in particular Prolog)
is put to the test in the process. The paper is part
of a larger effort to cover in a declarative programming
paradigm, arguably more elegantly, some fundamental combinatorial generation
algorithms along the lines of \cite{knuth06draft}.
However, our main focus is by no means ``yet another implementation of BDDs in
Prolog''. The paper is more about fundamental
isomorphisms between logic functions
and their natural number representations, in the tradition of \cite{Goedel:31},
with the unusual twist that everything is expressed as a literate Prolog program,
and therefore automatically testable by the reader.
One could put such efforts under the generic umbrella of an emerging research
field that we would like to call {\em executable theoretical computer
science}. Nevertheless, we also hope that the more practically oriented reader
will be able to benefit from this approach by being able to experiment with,
and reuse our Prolog code in applications.
The paper is organized as follows:
Sections \ref{bits} and \ref{bdds} overview efficient evaluation of boolean
formulae in Prolog using bitvectors represented as arbitrary length integers
and Binary Decision Diagrams (BDDs).
Section \ref{pairings} discusses classic pairing and unpairing
operations and introduces pairing/unpairing
predicates acting directly on bitlists.
Section \ref{encbdd} introduces a novel BDD encoding (based on our unpairing
functions) and discusses the surprising equivalence between boolean evaluation of BDDs
and the inverse of our encoding, the main result of the paper.
Section \ref{rank} describes {\em ranking} and {\em unranking}
functions for BDDs and reduced BDDs.
Sections \ref{related} and \ref{concl} discuss related work,
future work and conclusions.
The code in the paper, embedded in a literate programming LaTeX
file, is entirely self-contained and has been tested under {\em SWI-Prolog}.
\section{Parallel Evaluation of Boolean
Functions with Bitvector Operations}\label{bits}
Evaluation of a boolean function can be performed one
value at a time as in the predicate {\tt if\_then\_else/4}
\begin{code}
if_then_else(X,Y,Z,R):-
bit(X),bit(Y),bit(Z),
( X==1->R=Y
; R=Z
).
bit(0).
bit(1).
\end{code}
\noindent resulting in a {\em truth table}\footnote{One can see that if the
number of variables is fixed, we can ignore the bitstrings in the brackets.
Thus, the truth table can be identified with the natural number, represented in
binary form by the last column.}
\begin{codex}
?- if_then_else(X,Y,Z,R),write([X,Y,Z]:R),nl,fail;nl.
[0, 0, 0]:0
[0, 0, 1]:1
[0, 1, 0]:0
[0, 1, 1]:1
[1, 0, 0]:0
[1, 0, 1]:0
[1, 1, 0]:1
[1, 1, 1]:1
\end{codex}
Clearly, this does not take advantage of the ability of modern hardware to
perform such operations one word a time - with the instant benefit of a
speed-up proportional to the word size.
An alternate representation, adapted
from \cite{knuth06draft} uses integer encodings
of $2^n$ bits for each boolean variable $X_0,\ldots,X_{n-1}$.
Bitvector operations evaluate all
value combinations at once.
\begin{prop}
Let $x_k$ be a variable for $0 \leq k<n$
where $n$ is the number of distinct variables in a
boolean expression. Then column $k$ in the matrix representation
of the inputs in the truth table
represents, as a bitstring, the natural number:
\begin{equation} \label{var}
x_k={(2^{2^n}-1)}/{(2^{2^{n-k-1}}+1)}
\end{equation}
\end{prop}
\noindent For instance, if $n=2$, the formula computes
$x_0=3=[0,0,1,1]$ and $x_1=5=[0,1,0,1]$.
The following predicates, working with arbitrary length bitstrings, are
used to evaluate
variables $x_k$ with $k \in [0..n-1]$ with formula \ref{var}
and map the
constant boolean function {\tt 1}
to the bitstring of length $2^n$, {\tt 111..1},
representing ${2^{2^n}}-1$
\begin{code}
var_to_bitstring_int(NbOfBits,K,Xk):-
all_ones_mask(NbOfBits,Mask),
var_to_bitstring_int(NbOfBits,Mask,K,Xk).
var_to_bitstring_int(NbOfBits,Mask,K,Xk):-
NK is NbOfBits-(K+1),
D is (1<<(1<<NK))+1,
Xk is Mask//D.
all_ones_mask(NbOfBits,Mask):-Mask is (1<<(1<<NbOfBits))-1.
\end{code}
We have used in {\tt var\_to\_bitstring\_int} an adaptation of the efficient
bitstring-integer encoding described in the Boolean Evaluation
section of \cite{knuth06draft}. Intuitively, it is based on the idea that one
can look at $n$ variables as bitstring representations of the $n$ columns
of the truth table.
Variables representing such bitstring-truth tables
(seen as {\em projection functions})
can be combined with the usual bitwise integer operators,
to obtain new bitstring truth tables,
encoding all possible value combinations of their arguments.
Note that the constant $0$ is represented as $0$ while the constant $1$
is represented as $2^{2^n}-1$, corresponding to a column in
the truth table containing ones exclusively.
\section{Binary Decision Diagrams} \label {bdds}
We have seen that Natural Numbers in $[0..2^{2^n}-1]$ can be used as
representations of truth tables defining $n$-variable boolean functions.
A binary decision diagram (BDD)
\cite{bryant86graphbased} is an ordered binary tree obtained from
a boolean function, by assigning its variables, one at a time,
to {\tt 0} (left branch) and {\tt 1} (right branch).
In virtually all practical applications BDDs are represented as DAGs after
detecting shared nodes. We safely ignore this here
as they represent the same logic
function, which is all we care about at this point.
Typically in the early literature, the acronym
ROBDD is used to denote reduced ordered BDDs. Because this
optimization is now so prevalent,
the term BDD is frequently used to refer to
ROBDDs. Strictly speaking, BDD in this paper will stand for {\em ordered BDD
with reduction of identical branches but without node sharing}.
The construction deriving a BDD of a boolean function $f$ is known as Shannon
expansion \cite{shannon_all}, and is expressed as
\begin{equation}
f(x)= (\bar{x} \wedge f[x \leftarrow 0]) \vee (x \wedge f[x \leftarrow 1])
\end{equation}
\noindent where $f[x \leftarrow a]$ is computed
by uniformly substituting $a$ for $x$ in $f$. Note that by using the more
familiar boolean if-then-else function, Shannon expansion can also
be expressed as:
\begin{equation}
f(x) = if~x~then~f[x \leftarrow 1]~else~f[x \leftarrow 0]
\end{equation}
We represent a $BDD$ in Prolog as a binary tree with constants {\tt 0} and {\tt
1} as leaves, marked with the function symbol {\tt c/1}. Internal
{\em if-then-else} nodes marked with {\tt ite/3} are controlled by
variables, ordered identically in each branch, as first arguments of {\tt
ite/1}. The two other arguments are subtrees representing the {\tt Then}
and {\tt Else} branches. Note that, in practice, reduced,
canonical DAG representations are used instead of
binary tree representations.
Alternatively, we observe that the Shannon expansion
can be directly derived from a $2^n$ size truth table,
using bitstring operations on encodings of its $n$ variables.
Assuming that the first column of a truth table corresponds to
variable $x$, $x=0$ and $x=1$ mask out, respectively,
the upper and lower half of the truth table.
\begin{code}
shannon_split(NV,X, Hi,Lo):-
all_ones_mask(NV,M),
NV1 is NV-1,
all_ones_mask(NV1,LM),
HM is xor(M,LM),
Lo is /\(LM,X),
H is /\(HM,X),
Hi is H>>(1<<NV1).
\end{code}
Note that the operation {\tt shannon\_split} can be reversed as follows:
\begin{code}
shannon_fuse(NV,Hi,Lo, X):-
NV1 is NV-1,
H is Hi<<(1<<NV1),
X is \/(H,Lo).
\end{code}
\begin{codex}
?- shannon_split(2, 7, X,Y),shannon_fuse(2, X,Y, Z).
X = 1,
Y = 3,
Z = 7.
?- shannon_split(3, 42, X,Y),shannon_fuse(3, X,Y, Z).
X = 2,
Y = 10,
Z = 42.
\end{codex}
Another way to look at these two operations (for a fixed value of NV) is
as bijections associating a pair of natural numbers to a
natural number, i.e. as {\em pairing} functions.
\begin{comment}
\begin{code}
shannon_tree(NV,TT, st(NV,BDD)):-
Max is (1<<(1<<NV)),
TT<Max,
shannon_unfold(NV,NV,TT, BDD).
shannon_unfold(0,_,TT,c(TT)).
shannon_unfold(N,NV,TT,mux(X,H,L)):-N>0,
N1 is N-1,
X is NV-N,
shannon_split(N,TT,Hi,Lo),
shannon_unfold(N1,NV,Hi,H),
shannon_unfold(N1,NV,Lo,L).
\end{code}
\end{comment}
\section{Pairing and Unpairing Functions} \label{pairings}
\begin{df}
A {\em pairing function} is a bijection $f : Nat \times Nat \rightarrow
Nat$. An {\em unpairing function} is a bijection $g : Nat \rightarrow
Nat \times Nat$.
\end{df}
Following Julia Robinson's notation \cite{robinson50},
given a pairing function $J$, its left and right inverses $K$ and $L$
are such that
\begin{equation}
J(K(z),L(z))=z
\end{equation}
\begin{equation}
K(J(x,y))=x
\end{equation}
\begin{equation}
L(J(x,y))=y
\end{equation}
We refer to \cite{DBLP:journals/tcs/CegielskiR01} for a typical use
in the foundations of mathematics and to \cite{DBLP:conf/ipps/Rosenberg02a}
for an extensive study of various pairing functions and their computational properties.
\subsection{Cantor's Pairing Function}
Starting from Cantor's pairing function
\begin{code}
cantor_pair(K1,K2,P):-P is (((K1+K2)*(K1+K2+1))//2)+K2.
\end{code}
bijections from $Nat \times Nat$ to $Nat$ have been used for various proofs
and constructions of mathematical objects
\cite{robinson50,DBLP:journals/tcs/CegielskiR01}.
For $X,Y \in \{0,1,2,3\}$ the sequence of values of this pairing function is:
\begin{codex}
?- findall(R,(between(0,3,A),between(0,3,B),cantor_pair(A,B,R)),Rs).
Rs = [0, 2, 5, 9, 1, 4, 8, 13, 3, 7, 12, 18, 6, 11, 17, 24]
\end{codex}
\noindent Note, however, that the inverse of Cantor's pairing function involves
potentially expensive floating point operations that are also likely to lose precision
for arbitrary length integers.
\begin{comment}
\begin{code}
cantor_unpair(Z,K1,K2):-
I is floor((sqrt(8*Z+1)-1)/2),
K1 is ((I*(3+I))//2)-Z,
K2 is Z-((I*(I+1))//2).
\end{code}
\end{comment}
\subsection{The Pepis-Kalmar Pairing Function}
Another pairing function that can be implemented using only
elementary integer operations is the following:
\begin{equation}
f(x,y)=2^x(2y+1)-1
\end{equation}
\noindent The predicates {\tt pepis\_pair/3} and {\tt pepis\_unpair/3} are
derived from the function {\bf pepis\_J} and its left and right unpairing
companions {\bf pepis\_K} and {\bf pepis\_L} that have been used by Pepis,
Kalmar and Robinson
in some fundamental work on recursion
theory, decidability and Hilbert's Tenth Problem
in \cite{pepis,kalmar1,robinson67}:
\begin{code}
pepis_pair(X,Y,Z):-pepis_J(X,Y,Z).
pepis_unpair(Z,X,Y):-pepis_K(Z,X),pepis_L(Z,Y).
pepis_J(X,Y, Z):-Z is ((1<<X)*((Y<<1)+1))-1.
pepis_K(Z, X):-Z1 is Z+1,two_s(Z1,X).
pepis_L(Z, Y):-Z1 is Z+1,no_two_s(Z1,N),Y is (N-1)>>1.
two_s(N,R):-even(N),!,H is N>>1,two_s(H,T),R is T+1.
two_s(_,0).
no_two_s(N,R):-two_s(N,T),R is N // (1<<T).
even(X):- 0 =:= /\(1,X).
odd(X):- 1 =:= /\(1,X).
\end{code}
This pairing function is asymmetrically growing
(faster growth on the first argument).
It works as follows:
\begin{codex}
?- pepis_pair(1,10,R).
R = 41.
?- pepis_pair(10,1,R).
R = 3071.
?- findall(R,(between(0,3,A),between(0,3,B),pepis_pair(A,B,R)),Rs).
Rs = [0, 2, 4, 6, 1, 5, 9, 13, 3, 11, 19, 27, 7, 23, 39, 55]
\end{codex}
\subsection{Pairing/Unpairing
operations acting directly on bitlists} \label{BitMerge}
We describe here pairing operations, {\tt bitmerge\_pair} and its
inverse {\tt bitmerge\_unpair}, that are expressed exclusively as bitlist
transformations and are therefore likely to be easily implementable in hardware.
As we have found out recently, they turn out to be the same as the functions
defined in Steven Pigeon's PhD thesis on Data Compression (\cite{pigeon}, page 114).
The predicate {\tt bitmerge\_pair} implements a bijection from $Nat \times
Nat$ to $Nat$ that works by blending the bits of its two arguments into the
even and odd bit positions of the result, while its inverse
{\tt bitmerge\_unpair} splits a number's bitstring representation back into
its even and odd bits. The helper predicates {\tt to\_rbits} and {\tt from\_rbits},
given in the Appendix, convert between integers and bitlists.
\begin{code}
bitmerge_pair(X,Y,P):-
to_rbits(X,Xs),
to_rbits(Y,Ys),
bitmix(Xs,Ys,Ps),!,
from_rbits(Ps,P).
bitmerge_unpair(P,X,Y):-
to_rbits(P,Ps),
bitmix(Xs,Ys,Ps),!,
from_rbits(Xs,X),
from_rbits(Ys,Y).
bitmix([X|Xs],Ys,[X|Ms]):-!,bitmix(Ys,Xs,Ms).
bitmix([],[X|Xs],[0|Ms]):-!,bitmix([X|Xs],[],Ms).
bitmix([],[],[]).
\end{code}
The transformation of the bitlists, done by the bidirectional predicate
{\tt bitmix/3}, is shown in the following example:
\begin{codex}
?- bitmerge_unpair(2008,X,Y),bitmerge_pair(X,Y,Z).
X = 60,
Y = 26,
Z = 2008
\end{codex}
Note that we represent numbers with bits in reverse order (least significant on
the left). As in the case of Cantor's pairing function, we can see
similar growth in both arguments:
\begin{codex}
?- between(0,15,N),bitmerge_unpair(N,A,B),
write(N:(A,B)),write(' '),fail;nl.
0: (0, 0) 1: (1, 0) 2: (0, 1) 3: (1, 1)
4: (2, 0) 5: (3, 0) 6: (2, 1) 7: (3, 1)
8: (0, 2) 9: (1, 2) 10: (0, 3) 11: (1, 3)
12: (2, 2) 13: (3, 2) 14: (2, 3) 15: (3, 3)
?- between(0,3,A),between(0,3,B),bitmerge_pair(A,B,N),
write(N:(A,B)),write(' '),fail;nl.
0: (0, 0) 2: (0, 1) 8: (0, 2) 10: (0, 3)
1: (1, 0) 3: (1, 1) 9: (1, 2) 11: (1, 3)
4: (2, 0) 6: (2, 1) 12: (2, 2) 14: (2, 3)
5: (3, 0) 7: (3, 1) 13: (3, 2) 15: (3, 3)
\end{codex}
It is also convenient sometimes to see pairing/unpairing as one-to-one
functions from/to the underlying language's ordered pairs, i.e. {\tt X-Y} in
Prolog:
\begin{code}
bitmerge_pair(X-Y,Z):-bitmerge_pair(X,Y,Z).
bitmerge_unpair(Z,X-Y):-bitmerge_unpair(Z,X,Y).
\end{code}
\section{Encodings of Binary Decision Diagrams} \label{encbdd}
We will build a $BDD$ by applying {\tt bitmerge\_unpair}
recursively to a Natural Number {\tt TT},
seen as an $N$-variable $2^N$ bit truth table.
This results in a complete binary tree of depth $N$.
As we will show later, this binary tree represents
a $BDD$ that returns {\tt TT} when its boolean operations are evaluated.
\begin{code}
plain_bdd(NV,TT, bdd(NV,BDD)):-
Max is (1<<(1<<NV)),
TT<Max,
isplit(NV,TT, BDD).
isplit(0,TT,c(TT)).
isplit(NV,TT,R):-NV>0,
NV1 is NV-1,
bitmerge_unpair(TT,Hi,Lo),
isplit(NV1,Hi,H),
isplit(NV1,Lo,L),
ite(NV1,H,L)=R.
\end{code}
The following examples
show the results returned by {\tt plain\_bdd}
for all $2^{2^k}$ truth tables associated to $k$ variables, with $k=2$.
\begin{codex}
?- between(0,15,TT),plain_bdd(2,TT,BDD),write(TT:BDD),nl,fail;nl.
0:bdd(2, ite(1, ite(0, c(0), c(0)), ite(0, c(0), c(0))))
1:bdd(2, ite(1, ite(0, c(1), c(0)), ite(0, c(0), c(0))))
2:bdd(2, ite(1, ite(0, c(0), c(0)), ite(0, c(1), c(0))))
...
13:bdd(2, ite(1, ite(0, c(1), c(1)), ite(0, c(0), c(1))))
14:bdd(2, ite(1, ite(0, c(0), c(1)), ite(0, c(1), c(1))))
15:bdd(2, ite(1, ite(0, c(1), c(1)), ite(0, c(1), c(1))))
\end{codex}
\subsection{Reducing the $BDDs$}
The predicate {\tt bdd\_reduce} reduces a $BDD$ by trimming identical
left and right subtrees, and the predicate {\tt bdd}
associates this reduced form to a truth table $TT \in Nat$.
\begin{code}
bdd_reduce(BDD,bdd(NV,R)):-nonvar(BDD),BDD=bdd(NV,X),bdd_reduce1(X,R).
bdd_reduce1(c(TT),c(TT)).
bdd_reduce1(ite(_,A,B),R):-A==B,bdd_reduce1(A,R).
bdd_reduce1(ite(X,A,B),ite(X,RA,RB)):-A\==B,
bdd_reduce1(A,RA),bdd_reduce1(B,RB).
bdd(NV,TT, ReducedBDD):-
plain_bdd(NV,TT, BDD),
bdd_reduce(BDD,ReducedBDD).
\end{code}
Note that we omit here the reduction step that shares common
subtrees, as it is easily obtained by replacing
trees with DAGs. The process is facilitated by the fact
that our unique encoding provides a perfect hashing
key for each subtree. The following examples
show the results returned by {\tt bdd} for {\tt NV=2}.
\begin{codex}
?- between(0,15,TT),bdd(2,TT,BDD),write(TT:BDD),nl,fail;nl
0:bdd(2, c(0))
1:bdd(2, ite(1, ite(0, c(1), c(0)), c(0)))
2:bdd(2, ite(1, c(0), ite(0, c(1), c(0))))
3:bdd(2, ite(0, c(1), c(0)))
...
13:bdd(2, ite(1, c(1), ite(0, c(0), c(1))))
14:bdd(2, ite(1, ite(0, c(0), c(1)), c(1)))
15:bdd(2, c(1))
\end{codex}
\subsection{From BDDs to Natural Numbers}
One can ``evaluate back'' the binary tree representing the BDD
by using the pairing function {\tt bitmerge\_pair}.
The inverse of {\tt plain\_bdd} is implemented as follows:
\begin{code}
plain_inverse_bdd(bdd(_,X),TT):-plain_inverse_bdd1(X,TT).
plain_inverse_bdd1(c(TT),TT).
plain_inverse_bdd1(ite(_,L,R),TT):-
plain_inverse_bdd1(L,X),
plain_inverse_bdd1(R,Y),
bitmerge_pair(X,Y,TT).
\end{code}
\begin{codex}
?- plain_bdd(3,42, BDD),plain_inverse_bdd(BDD,N).
BDD = bdd(3,
ite(2,
ite(1,
ite(0, c(0), c(0)),
ite(0, c(0), c(0))),
ite(1,
ite(0, c(1), c(1)),
ite(0, c(1), c(0))))),
N = 42
\end{codex}
\noindent Note however that {\tt plain\_inverse\_bdd/2} does not act as an
inverse of {\tt bdd/3}, given that the {\em structure} of the $BDD$ tree
is changed by reduction.
\subsection{Boolean Evaluation of BDDs}
This raises the obvious question: how can we recover the original truth
table from a reduced BDD? The obvious answer is: by evaluating it as a
boolean function! The predicate {\tt ev/2} describes the $BDD$ evaluator:
\begin{code}
% ev(+BDD,-TT): evaluate a (plain or reduced) BDD back to its truth table
ev(bdd(NV,B),TT):-
all_ones_mask(NV,M),   % M: the all-ones bitvector of length 2^NV
eval_with_mask(NV,M,B,TT).
% the constants map to the all-zeros / all-ones bitvectors
evc(0,_,0).
evc(1,M,M).
eval_with_mask(_,M,c(X),R):-evc(X,M,R).
eval_with_mask(NV,M,ite(X,T,E),R):-
eval_with_mask(NV,M,T,A),
eval_with_mask(NV,M,E,B),
var_to_bitstring_int(NV,M,X,V), % V: truth table bitstring of variable X
ite(V,A,B,R).
\end{code}
The predicate {\tt ite/4} used in {\tt eval\_with\_mask}
implements the boolean function {\tt if X then T else E}
using arbitrary length bitvector operations:
\begin{code}
ite(X,T,E, R):-R is xor(/\(X,xor(T,E)),E).
\end{code}
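As a hand-computed sanity check (this example is ours, not from the original code base): with {\tt X=12} ({\tt 1100}), {\tt T=10} ({\tt 1010}) and {\tt E=3} ({\tt 0011}), each 1 bit of {\tt X} selects the corresponding bit of {\tt T} and each 0 bit selects the bit of {\tt E}, giving {\tt 1011}:
\begin{codex}
?- ite(12,10,3,R).
R = 11
\end{codex}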
Note that this equivalent formula for {\tt ite} is slightly more
efficient than the obvious one built with $\wedge$ and $\vee$, as it
requires only $3$ bitwise operations. We will use {\tt ite/4} as the
basic building block for implementing a boolean evaluator for BDDs.
\subsection{The Equivalence}
A surprising result
is that boolean evaluation and structural transformation through
repeated application of
{\em pairing}
produce the same result, i.e.,
the predicate {\tt ev/2} also acts as an inverse
of {\tt bdd/3} and {\tt plain\_bdd/3}.
\noindent {\em
As the following example shows, boolean evaluation {\tt ev/2}
faithfully emulates {\tt plain\_inverse\_bdd/2}
on both plain and reduced BDDs.
}
\begin{codex}
?- plain_bdd(3,42,BDD),ev(BDD,N).
BDD = bdd(3,
ite(2,
ite(1,
ite(0, c(0), c(0)),
ite(0, c(0), c(0))),
ite(1,
ite(0, c(1), c(1)),
ite(0, c(1), c(0))))),
N = 42
?- bdd(3,42,BDD),ev(BDD,N).
BDD = bdd(3,
ite(2,
c(0),
ite(1,
c(1),
ite(0, c(1), c(0))))),
N = 42
\end{codex}
The main result of this subsection can now be summarized as follows:
\begin{prop} \label{tt}
Let $B$ be the complete binary tree of depth $N$, obtained by recursive
applications of {\tt bitmerge\_unpair} on a truth table $T$, as described
by the predicate {\tt plain\_bdd(N,T,B)}.
Then for any $N$ and any $T$, when $B$ is interpreted as an (unreduced) BDD,
the result $V$ of its boolean evaluation using the predicate {\tt ev(B,V)}
and the result $R$ obtained by applying {\tt plain\_inverse\_bdd(B,R)}
are both identical to $T$. Moreover, the operation {\tt ev(B,V)}
reverses the effects of both {\tt plain\_bdd} and {\tt bdd} with an
identical result.
\end{prop}
\noindent {\em Proof:} The predicate {\tt plain\_bdd} builds a binary
tree by splitting the bitstring of the truth table $tt \in [0..2^{2^N}-1]$ up to depth $N$.
Observe that this corresponds to the Shannon expansion \cite{shannon_all} of the
formula associated to the truth table, using variable order $[N-1,\ldots,0]$.
Observe that the effect of {\tt bitmerge\_unpair} is the same as
\begin{itemize}
\item the effect of {\tt var\_to\_bitstring\_int(N,M,(N-1),R)}
acting as a mask selecting the left branch
\item
and the effect of its complement, acting as a mask selecting the right
branch.
\end{itemize}
Given that $2^N$ is twice $2^{N-1}$, the same invariant holds at each step,
as the bitstring length of the truth table is halved. On the other hand,
it is clear that {\tt ev} reverses the action of both {\tt plain\_bdd} and
{\tt bdd}, as BDDs and reduced BDDs represent
the same boolean function \cite{bryant86graphbased}.
This result can be seen as yet another intriguing isomorphism between
boolean, arithmetic and symbolic computations.
\section{Ranking and Unranking of BDDs} \label{rank}
One more step is needed to extend the mapping between $BDDs$ with $N$
variables and $Nat$ to a bijective mapping from/to $Nat$:
we will have to ``shift toward infinity''
the starting point of each new block of
BDDs in $Nat$ as BDDs of larger and larger sizes are enumerated.
First, we need to know by how much, so we compute the sum of the
counts of boolean functions with up to $N$ variables.
\begin{code}
bsum(0,0).
bsum(N,S):-N>0,N1 is N-1,bsum1(N1,S).
bsum1(0,2).
bsum1(N,S):-N>0,N1 is N-1,bsum1(N1,S1),S is S1+(1<<(1<<N)).
\end{code}
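For instance, the first few sums, computed by hand from the recurrence ($2$, $2+4$, $6+16$, $22+256$), should print as:
\begin{codex}
?- between(0,4,N),bsum(N,S),write(N:S),write(' '),fail;nl.
0:0 1:2 2:6 3:22 4:278
\end{codex}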
The stream of all such sums can now be generated as usual:
\begin{code}
bsum(S):-nat(N),bsum(N,S).
nat(0).
nat(N):-nat(N1),N is N1+1.
\end{code}
What we are really interested in is decomposing {\tt N} into
the index {\tt K} of the first {\tt bsum} that exceeds {\tt N},
and the distance {\tt N\_M} from the previous sum {\tt bsum(K-1)} to {\tt N}.
\begin{code}
% to_bsum(+N,-K,-N_M): K is the first index with bsum(K)>N,
% and N_M is the offset of N past bsum(K-1)
to_bsum(N, K,N_M):-
nat(K),bsum(K,S),S>N,!,
K1 is K-1,
bsum(K1,M),
N_M is N-M.
\end{code}
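For example, with {\tt N=42} (revisited in the unranking examples below), $bsum(3)=22$ is the last sum not exceeding $42$ and $bsum(4)=278$ is the first to exceed it, so the expected answer is:
\begin{codex}
?- to_bsum(42,K,N_M).
K = 4,
N_M = 20
\end{codex}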
{\em Unranking} of an arbitrary BDD is now easy: the index {\tt K}
determines the number of variables and {\tt N\_M} determines
the rank. Together they select the right BDD
with {\tt plain\_bdd/3} and {\tt bdd/3}.
\begin{code}
nat2plain_bdd(N,BDD):-to_bsum(N, K,N_M),plain_bdd(K,N_M,BDD).
nat2bdd(N,BDD):-to_bsum(N, K,N_M),bdd(K,N_M,BDD).
\end{code}
{\em Ranking} of a BDD is even easier: we first compute
its {\tt NumberOfVars} and its rank {\tt Nth}, then we shift the rank by
the {\tt bsum} value for {\tt NumberOfVars}$-1$, accounting for the
ranks previously assigned.
\begin{code}
plain_bdd2nat(bdd(NumberOfVars,BDD),N) :-
B=bdd(NumberOfVars,BDD),
plain_inverse_bdd(B,Nth),
K is NumberOfVars-1,
bsum(K,S),N is S+Nth.
bdd2nat(bdd(NumberOfVars,BDD),N) :-
B=bdd(NumberOfVars,BDD),
ev(B,Nth),
K is NumberOfVars-1,
bsum(K,S),N is S+Nth.
\end{code}
As the following example shows, {\tt nat2plain\_bdd/2}
and {\tt plain\_bdd2nat/2} implement inverse functions.
\begin{codex}
?- nat2plain_bdd(42,BDD),plain_bdd2nat(BDD,N).
BDD = bdd(4,
ite(3,
ite(2,
ite(1,
ite(0, c(0), c(0)),
ite(0, c(1), c(0))),
ite(1,
ite(0, c(1), c(0)),
ite(0, c(0), c(0)))),
ite(2,
ite(1,
ite(0, c(0), c(0)),
ite(0, c(0), c(0))),
ite(1, ite(0, c(0), c(0)),
ite(0, c(0), c(0)))))),
N = 42
\end{codex}
\noindent The same applies to {\tt nat2bdd/2} and its
inverse {\tt bdd2nat/2}.
\begin{codex}
?- nat2bdd(42,BDD),bdd2nat(BDD,N).
BDD = bdd(4,
ite(3,
ite(2,
ite(1, c(0),
ite(0, c(1), c(0))),
ite(1,
ite(0, c(1),c(0)), c(0))),
c(0))),
N = 42
\end{codex}
\noindent We can now generate infinite streams of BDDs as follows:
\begin{code}
plain_bdd(BDD):-nat(N),nat2plain_bdd(N,BDD).
bdd(BDD):-nat(N),nat2bdd(N,BDD).
\end{code}
\begin{codex}
?- plain_bdd(BDD).
BDD = bdd(1, ite(0, c(0), c(0))) ;
BDD = bdd(1, ite(0, c(1), c(0))) ;
BDD = bdd(2, ite(1, ite(0, c(0), c(0)), ite(0, c(0), c(0)))) ;
BDD = bdd(2, ite(1, ite(0, c(1), c(0)), ite(0, c(0), c(0)))) ;
...
?- bdd(BDD).
BDD = bdd(1, c(0)) ;
BDD = bdd(1, ite(0, c(1), c(0))) ;
BDD = bdd(2, c(0)) ;
BDD = bdd(2, ite(1, ite(0, c(1), c(0)), c(0))) ;
BDD = bdd(2, ite(1, c(0), ite(0, c(1), c(0)))) ;
BDD = bdd(2, ite(0, c(1), c(0))) ;
...
\end{codex}
\section{Related work} \label{related}
Pairing functions have been used in work on decision problems as early
as \cite{pepis,kalmar1,robinson50}.
{\em Ranking} functions can be traced back to G\"{o}del numberings
\cite{Goedel:31,conf/icalp/HartmanisB74} associated to formulae.
Together with their inverse {\em unranking} functions they are also
used in combinatorial generation
algorithms \cite{conf/mfcs/MartinezM03,knuth06draft}.
Binary Decision Diagrams are the dominant boolean function representation in
the field of circuit design automation
\cite{DBLP:journals/tcad/DrechslerSF04}.
BDDs have been used in a Genetic Programming context
\cite{DBLP:conf/ices/SakanashiHIK96,DBLP:journals/heuristics/ChenLHW04}
as a representation of evolving individuals subject to crossovers and mutations expressed as
structural transformations, and recently in a machine learning context for
compressing probabilistic Prolog programs \cite{DBLP:journals/ml/RaedtKKRT08}
representing candidate theories.
Other interesting uses of BDDs in a
logic and constraint programming context are
related to representations of
finite domains. In \cite{DBLP:conf/padl/HawkinsS06} an algorithm for
finding minimal reasons for inferences is given.
\section{Conclusion and Future Work} \label{concl}
The surprising connection between pairing/unpairing functions and BDDs
is the indirect result of implementation
work on a number of practical applications.
Our initial interest has been triggered by applications of the
encodings to combinational circuit synthesis in a logic
programming framework \cite{cf08,iclp07}.
We have also found them interesting as uniform
building blocks for Genetic Programming applications of Logic Programming.
In a Genetic Programming context \cite{koza92},
the bijections between bitvectors/natural numbers
on one side, and trees/graphs representing BDDs on the other side,
suggest exploring the mapping and its action on various
transformations as a phenotype-genotype connection.
Given the connection between BDDs and
boolean and finite-domain constraint solvers,
it would be interesting to explore, in that context,
efficient succinct data representations
derived from our BDD encodings.
\bibliographystyle{INCLUDES/splncs}
| {
"timestamp": "2009-02-04T04:25:22",
"yymm": "0808",
"arxiv_id": "0808.0555",
"language": "en",
"url": "https://arxiv.org/abs/0808.0555",
"abstract": "A \"pairing function\" J associates a unique natural number z to any two natural numbers x,y such that for two \"unpairing functions\" K and L, the equalities K(J(x,y))=x, L(J(x,y))=y and J(K(z),L(z))=z hold. Using pairing functions on natural number representations of truth tables, we derive an encoding for Binary Decision Diagrams with the unique property that its boolean evaluation faithfully mimics its structural conversion to a a natural number through recursive application of a matching pairing function. We then use this result to derive {\\em ranking} and {\\em unranking} functions for BDDs and reduced BDDs. The paper is organized as a self-contained literate Prolog program, available atthis http URLKeywords: logic programming and computational mathematics, pairing/unpairing functions, encodings of boolean functions, binary decision diagrams, natural number representations of truth tables",
"subjects": "Logic in Computer Science (cs.LO); Symbolic Computation (cs.SC)",
"title": "Pairing Functions, Boolean Evaluation and Binary Decision Diagrams in Prolog",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9539660923657093,
"lm_q2_score": 0.7431680143008301,
"lm_q1q2_score": 0.7089570865737465
} |
https://arxiv.org/abs/2209.11012 | Bypassing the quadrature exactness assumption of hyperinterpolation on the sphere | This paper focuses on the approximation of continuous functions on the unit sphere by spherical polynomials of degree $n$ via hyperinterpolation. Hyperinterpolation of degree $n$ is a discrete approximation of the $L^2$-orthogonal projection of degree $n$ with its Fourier coefficients evaluated by a positive-weight quadrature rule that exactly integrates all spherical polynomials of degree at most $2n$. This paper aims to bypass this quadrature exactness assumption by replacing it with the Marcinkiewicz--Zygmund property proposed in a previous paper. Consequently, hyperinterpolation can be constructed by a positive-weight quadrature rule (not necessarily with quadrature exactness). This scheme is referred to as unfettered hyperinterpolation. This paper provides a reasonable error estimate for unfettered hyperinterpolation. The error estimate generally consists of two terms: a term representing the error estimate of the original hyperinterpolation of full quadrature exactness and another introduced as compensation for the loss of exactness degrees. A guide to controlling the newly introduced term in practice is provided. In particular, if the quadrature points form a quasi-Monte Carlo (QMC) design, then there is a refined error estimate. Numerical experiments verify the error estimates and the practical guide. | \section{Introduction}
Let $\Sd:=\{x\in\mathbb{R}^{d+1}:\|x\|_2=1\}$ be the unit sphere in the Euclidean space $\mathbb{R}^{d+1}$ for $d\geq 2$, endowed with the surface measure $\omega_d$; that is, $\lvert\mathbb{S}^d\rvert:=\int_{\Sd}\text{d}\omega_d$ denotes the surface area of the unit sphere $\Sd$. Many real-world applications can be modeled as spherical problems. A critical task of spherical modeling is to find an effective data fitting strategy to approximate the underlying mapping between input and output data. Hyperinterpolation, introduced by Sloan in \cite{sloan1995polynomial}, is a simple yet powerful method for fitting spherical data, and it has received a great deal of interest since its birth, see, e.g., \cite{an2021lasso,MR2274179,le2001uniform,MR4226998,MR1761902,reimer2002generalized,zbMATH01421286,sloan2012filtered,MR1845243}. Given sampled data $\{(x_j,y_j)\}_{j=1}^m\subset \Sd\times \mathbb{R}$, the underlying mapping can be modeled as a spherical hyperinterpolant of degree $n$ in the form of
\begin{equation}\label{equ:map}
x\in\Sd\mapsto \sum_{j=1}^mw_jy_jG_n(x,x_j)\in \mathbb{R},
\end{equation}
where $w_j>0$, $j=1,2,\ldots,m$, are some prescribed weights,
$$G_n(x,y) = \sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}Y_{\ell,k}(x)Y_{\ell,k}(y)$$
is a kernel generated by the spherical harmonics $\{Y_{\ell,k}\}$ of degree at most $n$, and the precise number $Z(d,\ell)$ of spherical harmonics of exact degree $\ell$ is given in \eqref{equ:numberZ}.
The simplicity of spherical hyperinterpolation is manifested in the modeled mapping \eqref{equ:map}. Unlike many other fitting techniques, e.g., least squares, that usually need to solve a system of linear equations to obtain the modeled mapping, the spherical hyperinterpolation \eqref{equ:map} can be written down directly and immediately generates the output from any input $x\in\Sd$ with no mathematical manipulation beyond addition and multiplication. Moreover, adding a new data pair or withdrawing an existing one can be achieved directly, without a new computation from scratch.
However, the construction of hyperinterpolation of degree $n$ requires a positive-weight quadrature rule
\begin{equation}\label{equ:quad}
\sum_{j=1}^mw_jf(x_j)\approx \int_{\Sd}f\text{d}\omega_d
\end{equation}
to be exact for polynomials up to degree $2n$, that is,
\begin{equation}\label{equ:quadexactness}
\sum_{j=1}^mw_jf(x_j)= \int_{\Sd}f\text{d}\omega_d\quad \forall f\in\mathbb{P}_{2n}(\Sd),
\end{equation}
where $\Pn(\Sd)$ denotes the space of spherical polynomials of degree at most $n$. A convenient $\Lt$-orthonormal basis (with respect to $\omega_d$) for $\Pn(\Sd)$ is provided by the spherical harmonics $\{Y_{\ell,k}:k=1,2,\ldots, Z(d,\ell);\ell=0,1,2,\ldots,n\}$. The hyperinterpolation operator $\mathcal{L}_n:\mathcal{C}(\Sd)\rightarrow\Pn(\Sd)$ maps a continuous function $f\in \mathcal{C}(\Sd)$ to
\begin{equation}\label{equ:hyper}
\mathcal{L}_nf := \sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\left\langle f,Y_{\ell,k}\right\rangle_mY_{\ell,k}\in\Pn(\Sd),
\end{equation}
where $\left\langle f,g\right\rangle_m:=\sum_{j=1}^mw_jf(x_j)g(x_j)$ is the numerical evaluation of the inner product $\left\langle f,g\right\rangle:=\int_{\Sd}f(x)g(x)\text{d}\omega_d$ by the quadrature rule \eqref{equ:quad} with the exactness assumption \eqref{equ:quadexactness}.
In other words, the hyperinterpolation \eqref{equ:hyper} of $f\in \mathcal{C}(\Sd)$ can be regarded as a discrete version of the famous $\Lt$-orthogonal projection
\begin{equation}\label{equ:proj}
\mathcal{P}_nf := \sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\left\langle f,Y_{\ell,k}\right\rangle Y_{\ell,k}\in\Pn(\Sd)
\end{equation}
of $f$ from $\mathcal{C}(\Sd)$ onto $\Pn(\Sd)$. Sometimes we may consider equal-weight quadrature rules of the form
\begin{equation}\label{equ:equalweightquad}
\frac{1}{m}\sum_{j=1}^mf(x_j)\approx \int_{\Sd}f\text{d}\omega_d.
\end{equation}
Given the very restrictive nature of \eqref{equ:quadexactness}, namely that it is impractical and sometimes impossible to obtain data at the desired quadrature points in practice, our aim in this paper is to bypass this quadrature exactness assumption by replacing it with the \emph{Marcinkiewicz--Zygmund property} (see \cite{an2022quadrature}):
\begin{assumption}
We assume that there exists an $\eta\in[0,1)$ such that
\begin{equation}\label{equ:etaassumption}
\left\lvert\sum_{j=1}^mw_j\chi(x_j)^2-\int_{\Sd}\chi^2\text{d}\omega_d\right\rvert\leq \eta \int_{\Sd}\chi^2\text{d}\omega_d\quad \forall \chi\in\mathbb{P}_{n}(\Sd).
\end{equation}
If the quadrature rule has exactness degree $n+n'$ with $n'=n$, i.e., the exactness assumption \eqref{equ:quadexactness} is not relaxed, then \eqref{equ:quadexactness} implies $\eta =0$.
\end{assumption}
Then the construction of hyperinterpolation is feasible with many more quadrature rules than the traditional candidates. Traditionally, quadrature rules using spherical $t$-designs are used to construct hyperinterpolation. As we can see in this paper, quadrature rules using scattered points, equal-area points, minimal energy points, maximal determinant points, and many other kinds of points are also feasible for constructing hyperinterpolation. The Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} is equivalent to
\begin{equation*}
(1-\eta) \int_{\Sd}\chi^2\text{d}\omega_d\leq \sum_{j=1}^mw_j\chi(x_j)^2\leq (1+\eta) \int_{\Sd}\chi^2\text{d}\omega_d\quad \forall \chi\in\mathbb{P}_{n}(\Sd),
\end{equation*}
which can be regarded as the Marcinkiewicz--Zygmund inequality \cite{filbir2011marcinkiewicz,Marcinkiewicz1937,mhaskar2001spherical} applied to polynomials $\chi^2$ of degree at most $2n$ with $\chi\in\Pn(\Sd)$; it was utilized in our recent work \cite{an2022quadrature}, in which quadrature rules are assumed to have exactness degree $n+n'$ with $0<n'\leq n$ for the construction of hyperinterpolation.
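A simple consequence worth noting: taking the constant polynomial $\chi\equiv 1\in\Pn(\Sd)$ in \eqref{equ:etaassumption} shows that the property already controls the total quadrature weight,
\begin{equation*}
\left\lvert\sum_{j=1}^m w_j-\lvert\Sd\rvert\right\rvert\leq\eta\,\lvert\Sd\rvert,
\qquad\text{and hence}\qquad \sum_{j=1}^m w_j\leq(1+\eta)\,\lvert\Sd\rvert,
\end{equation*}
which bounds the factor $\left(\sum_{j=1}^mw_j\right)^{1/2}$ appearing in the error estimate \eqref{equ:error2} below.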
To tell the difference between the original hyperinterpolation $\mathcal{L}_n$ and the hyperinterpolation relying only on the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}, we refer to the latter as the \emph{unfettered hyperinterpolation}, indicating that the application of hyperinterpolation is no longer limited by the quadrature exactness assumption, and denote it by \begin{equation}\label{equ:unfetteredhyper}
\U_nf:=\sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\langle f,Y_{\ell,k}\rangle_mY_{\ell,k}\in\Pn(\Sd),
\end{equation}
where the quadrature rule \eqref{equ:quad} for evaluating $\langle f,Y_{\ell,k}\rangle_m$ is only assumed to satisfy the property \eqref{equ:etaassumption}.
We derive in this paper that
\begin{equation}\label{equ:error2}
\|\U_nf-f\|_{\Lt}\leq\left(\sqrt{1+\eta}\left(\sum_{j=1}^mw_j\right)^{1/2}+\lvert\Sd\rvert^{1/2}\right)E_n(f)+\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt},
\end{equation}
where $E_n(f)$ denotes the best uniform error of $f\in \mathcal{C}(\Sd)$ by a polynomial in $\mathbb{P}_n(\Sd)$, that is, $E_n(f):=\inf_{\chi\in \mathbb{P}_n(\Sd)}\|f-\chi\|_{\infty}$, and $\chi^*\in\Pn(\Sd)$ is the best approximation polynomial of $f$ in $\Pn(\Sd)$ in the sense of $\|f-\chi^*\|_{\infty}=E_n(f)$. Thus, no matter what kind of point distribution is adopted, it is sufficient for a reasonable approximation error bound to control the numerical integration error so that the constant $\eta$ in the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} is reasonably small.
The $\Lt$ error estimate \eqref{equ:error2} reduces to the classical result $\|\mathcal{L}_nf-f\|_{\Lt}\leq 2\lvert\Sd\rvert^{1/2}E_n(f)$ of hyperinterpolation derived in \cite{sloan1995polynomial} when the quadrature exactness degree is assumed to be $2n$, because such an assumption leads to $\eta=0$ and $\sum_{j=1}^mw_j = \int_{\Sd}\text{d} \omega_d=\lvert\Sd\rvert$. If the quadrature exactness degree is assumed to be $n+n'$ with $0<n'\leq n$, then the estimate \eqref{equ:error2} can be refined as
\begin{equation*}
\|\U_nf-f\|_{\Lt} \leq \left(\sqrt{1+\eta}+1\right)\lvert\Sd\rvert^{1/2}E_{n'}(f),
\end{equation*}
and this convergence rate in terms of $E_{n'}(f)$ coincides with the result in our recent work \cite{an2022quadrature} that
\begin{equation}\label{equ:BIT}
\|\mathcal{L}_nf-f\|_{\Lt}\leq \left(\frac{1}{\sqrt{1-\eta}}+1\right)\lvert\Sd\rvert^{1/2}E_{n'}(f)
\end{equation}
under the same assumption. A Sobolev analog to the error estimate \eqref{equ:error2}, i.e., the error measured by a Sobolev norm, is also established in this paper.
We also highlight the connection between the unfettered hyperinterpolation and QMC designs. Historically, quadrature exactness is often a starting point in designing quadrature rules. Nevertheless, this trend has recently drawn growing concern regarding whether exactness is a reliable design principle, see, e.g., \cite{trefethen2022exactness}. The concept of QMC designs, introduced by Brauchart, Saff, Sloan, and Womersley in \cite{MR3246811}, is an important quadrature-designing principle against this historical trend. QMC designs include many point distributions that are easy to obtain numerically, and quadrature rules using QMC designs provide the same asymptotic order of convergence as rules with quadrature exactness when the integrand belongs to the Sobolev space $H^s(\Sd)$ with $s>d/2$. Moreover, quadrature exactness is not a necessary assumption for QMC designs. If the quadrature points form a QMC design, then we show quadrature rules using them also satisfy the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}. Hence hyperinterpolation using QMC designs is a special case in the general framework of unfettered hyperinterpolation. However, the general error estimate \eqref{equ:error2} may not be sharp for hyperinterpolation using QMC designs, and we can refine it. Regarding the particularity of QMC designs, we may refer to the hyperinterpolation of $f\in\Hs(\Sd)$ using QMC designs, though a special case of unfettered hyperinterpolation, as the \emph{QMC hyperinterpolation}, and denote it by
\begin{equation}\label{equ:QMChyper}
\mathcal{Q}_nf:=\sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\left\langle f,Y_{\ell,k}\right\rangle_mY_{\ell,k}\in\Pn(\Sd),
\end{equation}
where the quadrature rule \eqref{equ:quad} for evaluating $\langle f,Y_{\ell,k}\rangle_m$ adopts a QMC design for $\Hs(\Sd)$ as the set of quadrature points. We show in this paper that for $f\in \Hs(\Sd)$,
\begin{equation*}
\|\mathcal{Q}_nf-f\|_{\Lt}\leq c''(s,d)\left(n^{-s}+\frac{1}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}}\right)\|f\|_{\Hs},
\end{equation*}
where $c''(s,d)>0$ is some constant depending only on $s$ and $d$, and $a_n^{(s)}$ is of order $(1+n)^{-2s}$.
\textbf{Organization.} The paper is organized as follows. Section \ref{sec:background} collects some technical facts regarding spherical
harmonics, our Sobolev space setting, spherical $t$-designs, and QMC designs. Section \ref{sec:unfetteredtheory} gives the approximation theory of the unfettered hyperinterpolation under the only assumption of the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}. Section \ref{sec:QMCtheory} develops the approximation theory of the QMC hyperinterpolation under the only assumption that $\{x_j\}_{j=1}^m$ is a QMC design. Section \ref{sec:numerical} contains numerical experiments that validate our theory.
\section{Background}\label{sec:background}
We are concerned with real-valued functions on the sphere $\Sd$ in the Euclidean space $\mathbb{R}^{d+1}$ for $d\geq 2$.
\subsection{Spherical harmonics and hyperinterpolation}
Let $\Lt(\Sd)$ denote the Hilbert space of all square-integrable functions on $\Sd$ with the inner product
\begin{equation*}
\left\langle f,g\right\rangle:=\int_{\Sd}f(x)g(x)\text{d}\omega_d(x)
\end{equation*}
and the induced norm $\|f\|_{\Lt}: = \sqrt{\left\langle f,f\right\rangle}$. By $\mathcal{C}(\Sd)$ we denote the space of continuous functions on $\Sd$, endowed with the uniform norm $\|f\|_{\infty}:=\sup_{x\in\Sd}|f(x)|$.
The restriction to $\Sd$ of a homogeneous and harmonic polynomial of total degree $\ell$ defined on $\mathbb{R}^{d+1}$ is called a \emph{spherical harmonic of degree $\ell$} on $\Sd$. We denote, as usual, by $\{Y_{\ell,k}:k = 1,2,\ldots,Z(d,\ell)\}$ a collection of $\Lt$-orthonormal real-valued spherical harmonics of exact degree $\ell$, where
\begin{equation}\label{equ:numberZ}
Z(d,0) = 1, \quad Z(d,\ell) = (2\ell +d-1)\frac{\Gamma(\ell+d-1)}{\Gamma(d)\Gamma(\ell+1)}\sim \frac{2}{\Gamma(d)}\ell^{d-1}\quad\text{as }\ell\rightarrow\infty,
\end{equation}
where $\Gamma(z)$ is the gamma function and $f(x)\sim g(x)$ as $x\rightarrow c$ means $f(x)/g(x)\rightarrow 1$ as $x\rightarrow c$. The spherical harmonics of degree $\ell\in \{0,1,2,\ldots\}$ satisfy the addition theorem \cite[Theorem 2]{MR0199449}, that is,
\begin{equation*}
\sum_{k=1}^{Z(d,\ell)}Y_{\ell,k}(x)Y_{\ell,k}(y)= \frac{Z(d,\ell)}{\lvert\Sd\rvert}P_{\ell}^{(d)}(x\cdot y),
\end{equation*}
where $P_{\ell}^{(d)}$ is the normalized Gegenbauer polynomial on $[-1,1]$, orthogonal with respect to the weight function $(1-t^2)^{d/2-1}$ and normalized such that $P^{(d)}_{\ell}(1)=1$. As an immediate application of the addition theorem, we have
\begin{equation}\label{equ:sphericalharmincsbound}
\|Y_{\ell,k}\|_{\infty}\leq \left(Z(d,\ell)/\lvert\Sd\rvert\right)^{1/2}\quad \forall \ell=0,1,2,\ldots\text{ and }k=1,2,\ldots,Z(d,\ell).
\end{equation}
Indeed, for any spherical harmonic $Y_{\ell,k}$, suppose $\lvert Y_{\ell,k}(x)\rvert$ attains $\|Y_{\ell,k}\|_{\infty}$ at the point $x^*\in\Sd$, then
\begin{equation*}
\|Y_{\ell,k}\|_{\infty} = \lvert Y_{\ell,k}(x^*)\rvert \leq \left(\sum_{k=1}^{Z(d,\ell)}\lvert Y_{\ell,k}(x^*)\rvert^2\right)^{1/2} = (Z(d,\ell)P^{(d)}_{\ell}(1)/\lvert\Sd\rvert)^{1/2} = \left(Z(d,\ell)/ \lvert\Sd\rvert\right)^{1/2}.
\end{equation*}
Besides, it is well known (see, e.g., \cite[pp. 38–39]{MR0199449}) that each spherical harmonic $Y_{\ell,k}$ of exact degree $\ell$ is an eigenfunction of the negative Laplace--Beltrami operator $-\Delta^*_d$ for $\Sd$ with eigenvalue
\begin{equation}\label{equ:LBeigenvale}
\lambda_{\ell}:=\ell(\ell+d-1).
\end{equation}
The family $\{Y_{\ell,k}:k=1,\ldots,Z(d,\ell);\ell = 0,1,2,\ldots\}$ of spherical harmonics forms a complete $\Lt$-orthonormal (with respect to $\omega_d$) system for the Hilbert space $\Lt(\Sd)$. Thus, for any $f\in\Lt(\Sd)$, it can be represented by a Laplace--Fourier series
\begin{equation*}
f(x)=\sum_{\ell=0}^{\infty}\sum_{k=1}^{Z(d,\ell)}\hat{f}_{\ell,k} Y_{\ell,k}(x)
\end{equation*}
with coefficients
\begin{equation}\label{equ:LFcoefficient}
\hat{f}_{\ell,k}:=\left\langle f,Y_{\ell,k}\right\rangle=\int_{\Sd}f(x)Y_{\ell,k}(x)\text{d}\omega_d(x),\quad \ell=0,1,2,\ldots \text{ and }k = 1,2,\ldots,Z(d,\ell).
\end{equation}
The space $\Pn(\Sd)$ of all spherical polynomials of degree at most $n$ (i.e., the restriction to $\Sd$ of all polynomials in $\mathbb{R}^{d+1}$ of degree at most $n$) coincides with the span of all spherical harmonics up to (and including) degree $n$, and its dimension satisfies $\dim(\Pn(\Sd))=Z(d+1,n)$. The space $\Pn(\Sd)$ is also a reproducing kernel Hilbert space with the reproducing kernel
\begin{equation}\label{equ:kernel}
G_n(x,y) = \sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}Y_{\ell,k}(x)Y_{\ell,k}(y)
\end{equation}
in the sense that
\begin{equation}\label{equ:reproducingproperty}
\left\langle \chi,G_n(\cdot,x) \right\rangle = \chi(x)\quad\forall \chi\in\Pn(\Sd),
\end{equation}
see, e.g., \cite{MR1115901}. Given $f\in \mathcal{C}(\Sd)$, it is often simpler in practice to express the hyperinterpolant $\mathcal{L}_nf$ using the reproducing kernel $G_n(\cdot,\cdot)$ defined by \eqref{equ:kernel}. By rearranging the summation,
\begin{equation*}
\mathcal{L}_nf(x) = \sum_{\ell=0}^{n}\sum_{k=1}^{Z(d,\ell)}\left(\sum_{j=1}^mw_j f(x_j)Y_{\ell,k}(x_j)\right)Y_{\ell,k}(x) = \sum_{j=1}^mw_jf(x_j)G_n(x,x_j).
\end{equation*}
Since such a summation-rearranging procedure does not depend on the quadrature exactness, such an expression also applies to $\U_nf$ and $\mathcal{Q}_nf$. What makes the above three expressions different is the quadrature rules used for constructing different kinds of hyperinterpolants.
\subsection{Sobolev spaces}
The study of hyperinterpolation in a Sobolev space setting can be traced back to the work \cite{MR2274179} by Hesse and Sloan. The Sobolev space $\Hs(\Sd)$ on the sphere $\Sd$ may be defined for $s\geq 0$ as the set of all functions $f\in\Lt(\Sd)$ whose Laplace--Fourier coefficients \eqref{equ:LFcoefficient} satisfy
\begin{equation*}
\sum_{\ell=0}^{\infty}\sum_{k=1}^{Z(d,\ell)}(1+\lambda_{\ell})^s\lvert \hat{f}_{\ell,k}\rvert^2<\infty,
\end{equation*}
where $\lambda_{\ell}$ is given as \eqref{equ:LBeigenvale}. When $s=0$, we have $H^0(\Sd)=\Lt(\Sd)$. The norm in $\Hs(\Sd)$ may be defined as the square root of the expression on the left-hand side of the last inequality; however, in this paper, we shall take advantage of the freedom to define equivalent Sobolev space norms. Let $s>d/2$ be fixed and suppose we are given a sequence of positive real numbers $(a^{(s)}_{\ell})_{\ell\geq 0}$ satisfying
\begin{equation}\label{equ:asl}
a^{(s)}_{\ell} \asymp (1+\lambda_{\ell})^{-s} \asymp (1+\ell)^{-2s},
\end{equation}
where $a_n\asymp b_n$ denotes that there exist $c_1,c_2>0$ independent of $n$ such that $c_1a_n\leq b_n\leq c_2a_n$. Then we can define a norm in $\Hs(\Sd)$ by
\begin{equation*}
\|f\|_{\Hs}:=\left(\sum_{\ell=0}^{\infty}\sum_{k=1}^{Z(d,\ell)}\frac{1}{a^{(s)}_{\ell}}\lvert \hat{f}_{\ell,k}\rvert^2\right)^{1/2}.
\end{equation*}
The norm $\|\cdot\|_{\Hs}$ therefore depends on the particular choice of the sequence $(a^{(s)}_{\ell})_{\ell\geq 0}$, but a change to this sequence merely leads to an \emph{equivalent} Sobolev norm.
The following lemmas are necessary for our analysis.
\begin{lemma}\label{lem:hsl2}
For any $f\in\Pn(\Sd)$, $\|f\|_{\Hs}\leq \tilde{c} \left(n+1\right)^s\|f\|_{\Lt}$, where $\tilde{c}>0$ is a constant.
\end{lemma}
\begin{proof}
It is straightforward that
\begin{equation*}
\|f\|_{\Hs}=\left(\sum_{\ell=0}^{n}\sum_{k=1}^{Z(d,\ell)}\frac{1}{a^{(s)}_{\ell}}\lvert\hat{f}_{\ell,k}\rvert^2\right)^{1/2}\leq \left(\frac{1}{a_n^{(s)}}\|f\|_{\Lt}^2\right)^{1/2}\leq \tilde{c} \left(n+1\right)^s\|f\|_{\Lt}\quad \forall f\in\Pn(\Sd),
\end{equation*}
where we used the order \eqref{equ:asl} of $(a^{(s)}_{\ell})_{\ell\geq 0}$.
\end{proof}
\begin{lemma}\label{lem:sobolevclosed}
If $s>d/2$, then $\|fg\|_{\Hs}\leq \check{c}\|f\|_{\Hs}\|g\|_{\Hs}$, where $\check{c}>0$ is some constant.
\end{lemma}
\begin{proof}
For any Lipschitz domain $\Omega$, let $W^{s,2}(\Omega)$ be the Sobolev space of those functions in $L^2(\Omega)$ whose distributional derivatives up to (and including) order $s$ are in $L^2(\Omega)$. Note that the Sobolev spaces $\Hs(\Sd)$ can also be defined with the help of charts (that is, the so-called Sobolev spaces over boundaries), giving the space $W^{s,2}(\Sd)$ with an \emph{equivalent} norm, that is,
\begin{equation}\label{equ:sobolevlem1}
c_1\|f\|_{\Hs}\leq \|f\|_{W^{s,2}(\Sd)}\leq c_2\|f\|_{\Hs},
\end{equation}
where $c_1,c_2>0$ are some constants; see \cite[Chapter 7.3]{MR0350177} or \cite[Chapter 7.2.3]{MR2511061}. If $s>d/2$, then the Sobolev space $W^{s,2}(\Sd)$ is a Banach algebra, that is, for any $f,g\in W^{s,2}(\Sd)$,
\begin{equation}\label{equ:sobolevlem2}
\|fg\|_{W^{s,2}(\Sd)} \leq c_3 \|f\|_{W^{s,2}(\Sd)}\|g\|_{W^{s,2}(\Sd)},
\end{equation}
where $c_3>0$ is some constant; we refer to \cite[Theorem 5.23]{MR0450957} or \cite[Section 6.1]{MR785568} for this result. Together with \eqref{equ:sobolevlem1} and \eqref{equ:sobolevlem2}, we have the desired estimate.
\end{proof}
\begin{remark}
The norm equivalence \eqref{equ:sobolevlem1} is also identified and utilized in some other spherical approximation schemes, see, e.g., \cite{MR3712286,MR2271729}.
\end{remark}
\subsection{Spherical $t$-designs and QMC designs}\label{sec:designs}
A spherical $t$-design, introduced in the remarkable paper \cite{delsarte1991geometriae} by Delsarte, Goethals, and Seidel, is a set of points $\{x_j\}_{j=1}^m\subset \Sd$ with the characterizing property that an equal-weight quadrature rule in these points exactly integrates all polynomials of degree at most $t$, that is,
\begin{equation}\label{equ:stdexactness}
\frac{1}{m}\sum_{j=1}^m\chi(x_j)=\int_{\Sd}\chi(x)\text{d}\omega_d(x)\quad\forall \chi\in\mathbb{P}_t(\Sd).
\end{equation}
A majority of studies in the literature on spherical designs concern the relation between $m$ and $t$ in \eqref{equ:stdexactness}. It was shown by Seymour and Zaslavsky \cite{MR744857} that a spherical $t$-design always exists if $m$ is sufficiently large, but no quantitative results on the size of $m$ were established. In the original manuscript \cite{delsarte1991geometriae} on spherical $t$-designs, lower bounds on $m$ of exact order $t^d$ were derived in the sense that
\begin{equation*}
m\geq\begin{dcases}
\binom{d+t/2}{d}+\binom{d+t/2-1}{d}&\text{for even }t,\\
2\binom{d+\lfloor t/2\rfloor}{d}&\text{for odd }t;
\end{dcases}
\end{equation*}
but according to Bannai and Damerell \cite{MR519045,MR576179}, the number $m$ of quadrature points can achieve these lower bounds only for a few small values of $t$. Bondarenko, Radchenko, and Viazovska proved in \cite{MR3071504} that for each $m\geq ct^d$ with some positive but unknown constant $c>0$, there exists a spherical $t$-design in $\Sd$ consisting of $m$ points.
Quadrature rules \eqref{equ:quad} using spherical $t$-designs are known to have a fast-convergence property when the integrand belongs to the Sobolev space $\Hs$; namely, given $s>d/2$, there exists $C(s,d)>0$ depending only on $s$ and $d$ such that for every $m$-point spherical $t$-design $\{x_j\}_{j=1}^m$ on $\Sd$, there holds
\begin{equation}\label{equ:stderror}
\sup_{\substack{f\in\Hs(\Sd),\\ \|f\|_{\Hs}\leq 1}} \left\lvert\frac{1}{m}\sum_{j=1}^mf(x_j)-\int_{\Sd}f(x)\text{d}\omega_d\right\rvert\leq\frac{C(s,d)}{t^s}.
\end{equation}
The estimate \eqref{equ:stderror} was established gradually: It was first proved for the particular case $s=3/2$ and $d=2$ in \cite{MR2127668}, then extended to all $s>1$ for $d=2$ in \cite{MR2252093}, and finally extended to all $s>d/2$ and all $d\geq 2$ in \cite{MR2263736}. The condition $s>d/2$ is a natural one because functions to be approximated in this paper are assumed to be continuous, and by the Sobolev embedding theorem, $\Hs(\Sd)$ is continuously embedded in $\mathcal{C}(\Sd)$ if $s>d/2$.
If only spherical $t$-designs with $m\asymp t^d$ are considered, then the upper bound on the error \eqref{equ:stderror} is of order $m^{-s/d}$. This motivates the concept of QMC designs, introduced by Brauchart, Saff, Sloan, and Womersley in \cite{MR3246811}: Given $s>d/2$, a sequence $\{x_j\}_{j=1}^m$ of $m$-point configurations on $\Sd$ with $m\rightarrow\infty$ is said to be a sequence of \emph{QMC designs} for $\Hs(\Sd)$ if there exists $c(s,d)>0$ independent of $m$ such that
\begin{equation}\label{equ:QMCerror}
\sup_{\substack{f\in\Hs(\Sd),\\ \|f\|_{\Hs}\leq 1}} \left\lvert\frac{1}{m}\sum_{j=1}^mf(x_j)-\int_{\Sd}f(x)\text{d}\omega_d\right\rvert\leq\frac{c(s,d)}{m^{s/d}}.
\end{equation}
In a nutshell, quadrature rules using QMC designs provide the same asymptotic order of convergence as exact rules (e.g., rules using spherical $t$-designs) when the integrand belongs to the Sobolev space $H^s$, but are easier to obtain numerically. For more studies on numerical integration on the sphere with the integrand belonging to a Sobolev space, we refer the reader to \cite{MR2929076,MR3365840,hesse2010numerical,MR2123223}. Equal-weight numerical integration rules with integrands belonging to many other smoothness spaces have also attracted much interest, see, e.g., \cite{MR2558691,MR2391005,MR3038697,MR4438170,MR3614894,MR2059741,MR3530965,MR1617765}, to name a few.
An important notion related to QMC designs $\{x_j\}_{j=1}^m$ is the \emph{QMC strength}, denoted by $s^*$. For every sequence of QMC designs $\{x_j\}_{j=1}^m$, there is some number $s^*$ such that $\{x_j\}_{j=1}^m$ is a sequence of QMC designs for all $s$ satisfying $d/2<s\leq s^*$ and is not a QMC design for $s>s^*$. Even if the integrand $f$ is infinitely differentiable, the convergence rate of the numerical integration error \eqref{equ:QMCerror} using a QMC design with strength $s^*$ is controlled by $m^{-s^*/d}$.
\section{General framework of unfettered hyperinterpolation}\label{sec:unfetteredtheory}
With the aid of the reproducing property \eqref{equ:reproducingproperty}, the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} implies the following lemma.
\begin{lemma}\label{lem}
For any $\chi\in\Pn(\Sd)$, we have
{\rm{(a)}} $(1-\eta)\|\chi\|_{\Lt}^2\leq \left\langle \U_n\chi,\chi\right\rangle\leq(1+\eta)\|\chi\|_{\Lt}^2.$
{\rm{(b)}} $(1-\eta)\|\chi\|_{\Lt}\leq\|\U_n\chi\|_{\Lt}\leq(1+\eta)\|\chi\|_{\Lt}$.
{\rm{(c)}} $\|\U_n\chi-\chi\|_{\Lt}^2\leq(\eta^2+4\eta)\|\chi\|_{\Lt}^2.$
\end{lemma}
\begin{proof} (a) The reproducing property \eqref{equ:reproducingproperty} of $G_n(\cdot,\cdot)$ implies
\begin{equation*}\begin{split}
\left\langle \U_n\chi,\chi\right\rangle &= \left\langle \sum_{j=1}^mw_j\chi(x_j)G_n(x,x_j),\chi(x)\right\rangle=\sum_{j=1}^mw_j\chi(x_j)\left\langle G_n(x,x_j),\chi(x)\right\rangle=\sum_{j=1}^mw_j\chi(x_j)^2.
\end{split}\end{equation*}
Thus by the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption},
\begin{equation*}
(1-\eta)\|\chi\|_{\Lt}^2=(1-\eta)\int_{\Sd}\chi^2\text{d}\omega_d\leq\sum_{j=1}^mw_j\chi(x_j)^2\leq(1+\eta)\int_{\Sd}\chi^2\text{d}\omega_d=(1+\eta)\|\chi\|_{\Lt}^2.
\end{equation*}
(b) By part (a), we have $(1-\eta)\|\chi\|_{\Lt}^2\leq\left\langle \U_n\chi,\chi\right\rangle\leq\|\U_n\chi\|_{\Lt}\|\chi\|_{\Lt}$, leading to $(1-\eta)\|\chi\|_{\Lt}\leq\|\U_n\chi\|_{\Lt}$. We also have
\begin{equation*}\begin{split}
\|\U_n\chi\|_{\Lt}^2&=\left\langle \U_n\chi,\U_n\chi\right\rangle
=\left\langle \sum_{j=1}^mw_j\chi(x_j)G_n(x,x_j),\U_n\chi(x)\right\rangle=\sum_{j=1}^mw_j\chi(x_j)\U_n\chi(x_j)\\
&\leq \left(\sum_{j=1}^mw_j\chi(x_j)^2\right)^{1/2}\left(\sum_{j=1}^mw_j\left(\U_n\chi(x_j)\right)^2\right)^{1/2}
\leq (1+\eta)\|\chi\|_{\Lt}\|\U_n\chi\|_{\Lt},
\end{split}\end{equation*}
where the first inequality is due to the Cauchy--Schwarz inequality, and the second one is ensured by the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}. Thus part (b) is proved.
(c) Using parts (a) and (b) above, it is straightforward that
\begin{equation*}\begin{split}
\|\U_n\chi-\chi\|_{\Lt}^2
&=\|\U_n\chi\|_{\Lt}^2-2\left\langle \U_n\chi,\chi\right\rangle+\|\chi\|_{\Lt}^2
\leq (1+\eta)^2\|\chi\|_{\Lt}^2-2(1-\eta)\|\chi\|_{\Lt}^2+\|\chi\|_{\Lt}^2\\
&=(\eta^2+4\eta)\|\chi\|_{\Lt}^2.
\end{split}\end{equation*}
Hence this lemma is proved.
\end{proof}
We are now ready to state our main theorem.
\begin{theorem}\label{thm}
Given $f\in \mathcal{C}(\Sd)$, let $\U_nf\in\mathbb{P}_n$ be its unfettered hyperinterpolant defined by \eqref{equ:unfetteredhyper}, where the $m$-point positive-weight quadrature rule \eqref{equ:quad} is only assumed to have the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} with $\eta\in[0,1)$. Then
\begin{equation}\label{equ:stability}
\|\U_nf\|_{\Lt}\leq\sqrt{1+\eta}\left(\sum_{j=1}^mw_j\right)^{1/2}\|f\|_{\infty},
\end{equation}
and
\begin{equation}\label{equ:error}
\|\U_nf-f\|_{\Lt}\leq \left(\sqrt{1+\eta}\left(\sum_{j=1}^mw_j\right)^{1/2}+\lvert\Sd\rvert^{1/2}\right)E_n(f)+\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt},
\end{equation}
where $E_n(f)$ denotes the best uniform error of $f$ by a polynomial in $\Pn(\Sd)$ and $\chi^*\in\Pn(\Sd)$ denotes the best approximation polynomial of $f$ in $\Pn(\Sd)$ in the sense of $\|f-\chi^*\|_{\infty}=E_n(f)$.
\end{theorem}
\begin{proof}
For any $f\in \mathcal{C}(\Sd)$, we have $\U_nf\in\mathbb{P}_n$ and hence $\left\langle G_n(x,x_j),\U_nf(x)\right\rangle=\U_nf(x_j)$. Thus,
\begin{equation*}\begin{split}
\left\langle \U_nf,\U_nf\right\rangle
& = \left\langle \sum_{j=1}^mw_jf(x_j)G_n(x,x_j),\U_nf(x)\right\rangle = \sum_{j=1}^mw_jf(x_j)\U_nf(x_j)\\
& \leq \left(\sum_{j=1}^mw_jf(x_j)^2\right)^{1/2}\left(\sum_{j=1}^mw_j\left(\U_nf(x_j)\right)^2\right)^{1/2}\leq \left(\sum_{j=1}^mw_j\right)^{1/2}\|f\|_{\infty}\sqrt{1+\eta}\|\U_nf\|_{\Lt},
\end{split}\end{equation*}
where the first inequality is due to the Cauchy--Schwarz inequality and the second one holds by using $\sum_{j=1}^mw_jf(x_j)^2\leq\|f\|_{\infty}^2\sum_{j=1}^mw_j$ and the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}. This estimate immediately implies the stability result \eqref{equ:stability}.
The error bound \eqref{equ:error} is obtained by the following argument. For any $\chi\in\mathbb{P}_n$, we have
\begin{equation*}\begin{split}
\|\U_nf-f\|_{\Lt}
& = \|\U_n(f-\chi)+(\chi-f)+(\U_n\chi-\chi)\|_{\Lt} \leq \|\U_n(f-\chi)\|_{\Lt} + \|f-\chi\|_{\Lt}+\|\U_n\chi-\chi\|_{\Lt}\\
& \leq \sqrt{1+\eta}\left(\sum_{j=1}^mw_j\right)^{1/2}\|f-\chi\|_{\infty}+\lvert\Sd\rvert^{1/2}\|f-\chi\|_{\infty} + \|\U_n\chi-\chi\|_{\Lt}.
\end{split}\end{equation*}
It follows, since this estimate holds for all polynomials in $\Pn(\Sd)$, that
\begin{equation*}
\|\U_nf-f\|_{\Lt} \leq \left(\sqrt{1+\eta}\left(\sum_{j=1}^mw_j\right)^{1/2}+\lvert\Sd\rvert^{1/2}\right)E_n(f) + \|\U_n\chi^*-\chi^*\|_{\Lt}.
\end{equation*}
By part (c) of Lemma \ref{lem}, we have $\|\U_n\chi^*-\chi^*\|_{\Lt}\leq \sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$.
\end{proof}
\subsection{Connections in the literature}
If the quadrature rule \eqref{equ:quad} is additionally assumed to integrate all constant functions (polynomials of degree zero) exactly, that is, $\sum_{j=1}^mw_j=\lvert\Sd\rvert$,
then we have $\|\U_nf\|_{\Lt}\leq\sqrt{1+\eta}\lvert\Sd\rvert^{1/2}\|f\|_{\infty}$ and
\begin{equation*}
\|\U_nf-f\|_{\Lt}\leq \left(\sqrt{1+\eta}+1\right)\lvert\Sd\rvert^{1/2}E_n(f)+\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}.
\end{equation*}
If the quadrature rule \eqref{equ:quad} exactly integrates all polynomials of degree at most $2n$, i.e., the constant $\eta$ is zero, then the stability result \eqref{equ:stability} and error bound \eqref{equ:error} reduce to the classical results of hyperinterpolation in \cite{sloan1995polynomial}; namely, $\|\U_nf\|_{\Lt}\leq \lvert\Sd\rvert^{1/2}\|f\|_{\infty}$ and
\begin{equation*}
\|\U_nf-f\|_{\Lt}\leq 2\lvert\Sd\rvert^{1/2}E_n(f).
\end{equation*}
If the quadrature rule \eqref{equ:quad} has exactness degree $n+n'$ with $0<n'\leq n$, then $\U_n\chi = \chi$ for all $\chi\in\mathbb{P}_{n'}(\Sd)$, see \cite[Lemma 2.1]{an2022quadrature}. By the stability result \eqref{equ:stability}, we have for any $\chi\in\mathbb{P}_{n'}(\Sd)$,
\begin{equation*}
\|\U_nf-f\|_{\Lt} =\|\U_n(f-\chi) - (f-\chi)\|_{\Lt} \leq \|\U_n(f-\chi)\|_{\Lt}+\|f-\chi\|_{\Lt}.
\end{equation*}
As this estimate holds for all $\chi\in\mathbb{P}_{n'}(\Sd)$, it is straightforward that
\begin{equation}\label{equ:estimatenn'}
\|\U_nf-f\|_{\Lt} \leq \left(\sqrt{1+\eta}+1\right)\lvert\Sd\rvert^{1/2}E_{n'}(f),
\end{equation}
which has the same convergence rate in terms of $E_{n'}(f)$ as our previous estimate \eqref{equ:BIT} in \cite{an2022quadrature}. In \cite{an2022quadrature}, we make use of the discrete orthogonal projection property (see \cite[Lemma 3.1]{an2022quadrature}) to obtain the estimate \eqref{equ:BIT}, while in this paper we utilize the reproducing property \eqref{equ:reproducingproperty} for the estimate \eqref{equ:estimatenn'}.
Moreover, in light of Theorem \ref{thm} and the study on spherical hyperinterpolation in a Sobolev space setting by Hesse and Sloan in \cite{MR2274179}, we have the following Sobolev estimates, which reduce to their results in \cite{MR2274179} when the exactness degree $2n$ is assumed. For simplicity and without loss of generality, we assume $\sum_{j=1}^mw_j=\lvert\Sd\rvert$ in Corollary \ref{cor:sobolev}. Note that $\Hs(\Sd)\subset \Lt(\Sd)$.
\begin{corollary}\label{cor:sobolev}
Let $d\geq 2$, and let $t$ and $s$ be fixed real numbers with $s\geq t \geq 0$ and $s\geq d/2$. Under the conditions of Theorem \ref{thm}, for any unfettered hyperinterpolation operator $\U_n:\Hs(\Sd)\rightarrow H^t(\Sd)$, there hold
\begin{equation}\label{equ:stabilitysob}
\|\U_nf\|_{H^t} \leq \tilde{c} \left[ \left(\sqrt{1+\eta}\lvert\Sd\rvert^{1/2}+1\right)(n+1)^{d/2+t-s}\|f\|_{H^s} + (n+1)^t\sqrt{\eta^2+4\eta}\|f\|_{\Lt}\right] + \|f\|_{\Hs}
\end{equation}
and
\begin{equation}\label{equ:errorsob}
\|\U_nf-f\|_{H^t} \leq \tilde{c} \left[ \left(\sqrt{1+\eta}\lvert\Sd\rvert^{1/2}+1\right)(n+1)^{d/2+t-s} E_n(f;\Hs(\Sd)) + (n+1)^t\sqrt{\eta^2+4\eta}\|f\|_{\Lt}\right],
\end{equation}
where $\tilde{c}>0$ is some constant that may vary line to line, and $E_n(f;\Hs(\Sd))$ is the best $\Hs$ approximation of $f\in\Hs(\Sd)$ by a polynomial in $\Pn(\Sd)$, that is, $E_n(f;\Hs(\Sd)):=\inf_{\chi\in\Pn(\Sd)}\|f-\chi\|_{\Hs}$.
\end{corollary}
\begin{remark}
When the exactness degree of the rule \eqref{equ:quad} is assumed to be $2n$, $\eta =0$ and the results \eqref{equ:stabilitysob} and \eqref{equ:errorsob} reduce to the respective results of the original hyperinterpolation (some constants may be different) derived by Hesse and Sloan in \cite{MR2274179}.
\end{remark}
\begin{proof}
Similar to the decomposition of $\|\U_nf-f\|_{\Lt}$ in the proof of Theorem \ref{thm}, we have
\begin{equation}\label{equ:sobolevdecomp}
\|\U_nf-f\|_{H^t} \leq \|\U_n(f-\mathcal{P}_nf)\|_{H^t} + \|\mathcal{P}_nf-f\|_{H^t} + \|\U_n(\mathcal{P}_nf)-\mathcal{P}_nf\|_{H^t}.
\end{equation}
The first term on the right-hand side of \eqref{equ:sobolevdecomp} can be bounded by
\begin{equation*}\begin{split}
\|\U_n(f-\mathcal{P}_nf)\|_{H^t} &\leq \tilde{c}(n+1)^t\|\U_n(f-\mathcal{P}_nf)\|_{\Lt} \leq \tilde{c}(n+1)^t\sqrt{1+\eta}\lvert\Sd\rvert^{1/2}\|f-\mathcal{P}_nf\|_{\infty}\\
& \leq \tilde{c} (n+1)^t\sqrt{1+\eta}\lvert\Sd\rvert^{1/2} (n+1)^{d/2-s} \|f-\mathcal{P}_nf\|_{\Hs},
\end{split}
\end{equation*}
where the first inequality is due to Lemma \ref{lem:hsl2}, the second is due to the stability result \eqref{equ:stability}, and the third is due to \cite[Lemma 3.5]{MR2274179}. This lemma also guarantees that
\begin{equation*}
\|\mathcal{P}_nf-f\|_{H^t} \leq \tilde{c} (n+1)^{t-s}\|\mathcal{P}_nf-f\|_{H^s}.
\end{equation*}
The third term can be estimated as
\begin{equation*}\begin{split}
\|\U_n(\mathcal{P}_nf)-\mathcal{P}_nf\|_{H^t}
& \leq \tilde{c}(n+1)^t\|\U_n(\mathcal{P}_nf)-\mathcal{P}_nf\|_{\Lt} \leq \tilde{c}(n+1)^t\sqrt{\eta^2+4\eta}\|\mathcal{P}_nf\|_{\Lt}\\
& \leq \tilde{c}(n+1)^t\sqrt{\eta^2+4\eta}\|f\|_{\Lt}
\end{split}
\end{equation*}
where the first inequality is due to Lemma \ref{lem:hsl2}, the second is due to part (c) of Lemma \ref{lem}, and the third is due to the fact that the norm of $\mathcal{P}_n$ as an operator from $\Lt(\Sd)$ onto $\Lt(\Sd)$ is 1. Thus we have
\begin{equation*}
\|\U_nf-f\|_{H^t} \leq \tilde{c} \left[ \left(\sqrt{1+\eta}\lvert\Sd\rvert^{1/2}+1\right)(n+1)^{d/2+t-s} E_n(f;\Hs(\Sd)) + (n+1)^t\sqrt{\eta^2+4\eta}\|f\|_{\Lt}\right],
\end{equation*}
where $E_n(f;\Hs(\Sd)) = \|f-\mathcal{P}_nf\|_{\Hs}$ is verified by \cite[Equ. (3.22)]{MR2274179}.
As $\|f-\mathcal{P}_nf\|_{\Hs}\leq \|f\|_{H^s}$ and $\|f\|_{H^t}\leq \|f\|_{H^s}$, we have
\begin{equation*}\begin{split}
\|\U_nf\|_{H^t}& \leq \|\U_nf-f\|_{H^t} + \|f\|_{H^t}\\
&\leq \tilde{c} \left[ \left(\sqrt{1+\eta}\lvert\Sd\rvert^{1/2}+1\right)(n+1)^{d/2+t-s}\|f\|_{H^s} + (n+1)^t\sqrt{\eta^2+4\eta}\|f\|_{\Lt}\right] + \|f\|_{\Hs},
\end{split}\end{equation*}
which completes the proof of this corollary.
\end{proof}
\subsection{Scattered data}
Now together with the work \cite{MR2475947} of Le Gia and Mhaskar, we can obtain a probabilistic description of Theorem \ref{thm}.
\begin{lemma}[{\cite[p. 463]{MR2475947}}]\label{lem:legiamhaskar}
Let the quadrature rule for constructing the unfettered hyperinterpolants be an equal-weight rule \eqref{equ:equalweightquad} with an independent random sample of $m$ points drawn from the distribution $\omega_d$, and let $\gamma>0$ and $\eta\in(0,1)$. Then there exists a constant $\bar{c}:=\bar{c}(\gamma)$ such that if $m\geq \bar{c} n ^d\log{n}/\eta^2$, then the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} holds with probability exceeding $1-\bar{c}n^{-\gamma}$.
\end{lemma}
\begin{corollary}\label{cor:unfettered}
Adopt the conditions of Theorem \ref{thm} and Lemma \ref{lem:legiamhaskar}, where the quadrature rule for constructing $\U_nf$ takes the form of \eqref{equ:equalweightquad} and uses $m\geq \bar{c}(\gamma) n ^d\log{n}/\eta^2$ quadrature points. Then the stability result \eqref{equ:stability} and error bound \eqref{equ:error} are valid with probability exceeding $1-\bar{c}n^{-\gamma}$.
\end{corollary}
As we can see, having bypassed the quadrature exactness assumption of the original hyperinterpolation, Theorem \ref{thm}
provides a general framework for analyzing the behavior of the unfettered hyperinterpolation. What we need to do in practice is to control the constant $\eta$ occurring in the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}. As a practical guide, if the quadrature points are independent random samples from the distribution $\omega_d$, then Corollary \ref{cor:unfettered} suggests a simple way to decrease $\eta$ by increasing the number $m$ of quadrature points.
\section{Unfettered hyperinterpolation with QMC designs}\label{sec:QMCtheory}
If $\{x_j\}_{j=1}^m$ is a QMC design for $\Hs(\Sd)$, the corresponding quadrature rule can be made to satisfy the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}, as shown in Section \ref{subsec:QMCinferior}. Hence the unfettered hyperinterpolation using QMC designs is a special case of the general framework analyzed in Theorem \ref{thm}. Recall that we refer to such an approximation as the QMC hyperinterpolation, denoted by $\mathcal{Q}_n$. However, the resulting error estimate may not be optimal due to the generality
of Theorem \ref{thm}, and we can find a sharper estimate customized for the unfettered hyperinterpolation using QMC designs.
\subsection{QMC hyperinterpolation in the general framework of unfettered hyperinterpolation}\label{subsec:QMCinferior}
It is critical to note that the numerical integration error \eqref{equ:QMCerror} of the QMC design-based quadrature rule and the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} cannot imply each other. On the one hand, the error \eqref{equ:QMCerror} applies to all functions in $\Hs(\Sd)$, while the property \eqref{equ:etaassumption} only concerns polynomials $\chi^2$ with $\chi\in\Pn(\Sd)$. On the other hand, if the integrand in the quadrature rule \eqref{equ:equalweightquad} is $\chi^2$ with $\chi\in\Pn(\Sd)$, the error bound \eqref{equ:QMCerror} suggests
\begin{equation}\label{equ:QMCchi2}
\left\lvert\frac{1}{m}\sum_{j=1}^m\chi(x_j)^2-\int_{\Sd}\chi^2\text{d}\omega_d\right\rvert\leq \frac{c(s,d)}{m^{s/d}}\|\chi^2\|_{\Hs}.
\end{equation}
This error \eqref{equ:QMCchi2} is not compatible with the Marcinkiewicz--Zygmund property \eqref{equ:etaassumption} because the controlling term is $\|\chi^2\|_{\Hs}$ instead of $\int_{\Sd}\chi^2\text{d}\omega_d$. Nevertheless, we can find an upper bound of $\|\chi^2\|_{\Hs}$ in terms of $\int_{\Sd}\chi^2\text{d}\omega_d$ to transform the error \eqref{equ:QMCchi2} into a Marcinkiewicz--Zygmund property \eqref{equ:etaassumption}. With the aid of Lemma \ref{lem:hsl2}, we have
\begin{equation*}\begin{split}
\|\chi^2\|_{\Hs}
&\leq \tilde{c}(2n+1)^s\|\chi^2\|_{\Lt}\leq \tilde{c}(2n+1)^s\|\chi\|_{\infty}\|\chi\|_{\Lt} \leq \tilde{c}(2n+1)^s\frac{\|\chi\|_{\infty}}{\|\chi\|_{\Lt}}\int_{\Sd}\chi^2\text{d}\omega_d.
\end{split}\end{equation*}
For any $\chi = \sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\alpha_{\ell,k}Y_{\ell,k}\in\Pn(\Sd)$, we have
\begin{equation*}
\frac{\|\chi\|_{\infty}}{\|\chi\|_{\Lt}}\leq \frac{\sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\lvert\alpha_{\ell,k}\rvert\|Y_{\ell,k}\|_{\infty}}{\sqrt{\sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\lvert\alpha_{\ell,k}\rvert^2}} \leq \sqrt{\frac{Z(d,n)}{\lvert\Sd\rvert}Z(d+1,n)},
\end{equation*}
where we used the estimate \eqref{equ:sphericalharmincsbound} on the uniform norm of $Y_{\ell,k}$ and regard $\{\alpha_{\ell,k}\}$ as a vector of size $Z(d+1,n)$. Then we can let
\begin{equation}\label{equ:order1}
\eta = \frac{c(s,d)\tilde{c}}{m^{s/d}}(2n+1)^s\sqrt{\frac{Z(d,n)}{\lvert\Sd\rvert}Z(d+1,n)}
\end{equation}
and enforce it to be in $(0,1)$. Thus in this case, with the asymptotic result \eqref{equ:numberZ} of the size of $Z(d,\ell)$, the number $m$ should have a lower bound of order $n^{d+\frac{d^2}{s}-\frac{d}{2s}}$ as $n\rightarrow\infty$.
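To sketch how this order arises (a short derivation added for the reader's convenience): by \eqref{equ:numberZ}, $\sqrt{Z(d,n)\,Z(d+1,n)}\asymp n^{d-\frac{1}{2}}$, so \eqref{equ:order1} gives
\begin{equation*}
\eta \asymp \frac{(2n+1)^s\,n^{d-\frac{1}{2}}}{m^{s/d}} \asymp \frac{n^{s+d-\frac{1}{2}}}{m^{s/d}},
\end{equation*}
and requiring $\eta<1$ amounts to $m\gtrsim n^{\frac{d}{s}\left(s+d-\frac{1}{2}\right)}=n^{d+\frac{d^2}{s}-\frac{d}{2s}}$.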
Moreover, regarding the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ in the error estimate \eqref{equ:error} in Theorem \ref{thm}, for a fixed degree, the convergence rate of this term with respect to $m$ is $m^{-s/(2d)}$.
\subsection{Approximation theory of QMC hyperinterpolation}\label{subsec:QMC}
We then show that the QMC hyperinterpolation has a sharper error estimate than the general estimate \eqref{equ:error} in Theorem \ref{thm}.
\begin{theorem}\label{QMCthm}
Given $f\in \Hs(\Sd)\subset \Lt(\Sd)$, let $\mathcal{Q}_nf\in\mathbb{P}_n$ be its QMC hyperinterpolant defined by \eqref{equ:QMChyper}, where the $m$-point equal-weight quadrature rule \eqref{equ:equalweightquad} adopts a QMC design for $\Hs(\Sd)$ as quadrature points. Then
\begin{equation}\label{equ:stabilityqmc}
\|\mathcal{Q}_nf\|_{\Lt}\leq \|f\|_{\Lt}+\frac{c'(s,d)}{m^{s/d}}(n+1)^s\|f\|_{\Hs},
\end{equation}
where $c'(s,d)>0$ is some constant depending only on $s$ and $d$, and
\begin{equation}\label{equ:errorqmc}
\|\mathcal{Q}_nf-f\|_{\Lt}\leq c''(s,d)\left(n^{-s} +\frac{1}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}}\right) \|f\|_{\Hs},
\end{equation}
where $c''(s,d)>0$ is some constant depending only on $s$ and $d$.
\end{theorem}
\begin{proof}
For $f\in \Hs(\Sd)$, we have
\begin{equation*}\begin{split}
\|\mathcal{Q}_nf\|_{\Lt}^2&=\left\langle \mathcal{Q}_nf,\mathcal{Q}_nf\right\rangle
= \left\langle \sum_{j=1}^mw_jf(x_j)G_n(x,x_j),\mathcal{Q}_nf(x)\right\rangle = \sum_{j=1}^mw_jf(x_j)\mathcal{Q}_nf(x_j)\\
& \leq \int_{\Sd}(\mathcal{Q}_n f)f\text{d}\omega_d + \frac{c(s,d)}{m^{s/d}}\|(\mathcal{Q}_nf)f\|_{\Hs} \leq \|f\|_{\Lt}\|\mathcal{Q}_n f\|_{\Lt} + \frac{c(s,d)\check{c}}{m^{s/d}}\|f\|_{\Hs}\|\mathcal{Q}_nf\|_{\Hs}\\
& \leq \|f\|_{\Lt}\|\mathcal{Q}_n f\|_{\Lt} + \frac{c(s,d)\check{c}}{m^{s/d}}\|f\|_{\Hs}(n+1)^s\|\mathcal{Q}_nf\|_{\Lt},
\end{split}\end{equation*}
where the first inequality is due to the integration error \eqref{equ:QMCerror} using QMC designs, the second one is due to the Cauchy--Schwarz inequality and Lemma \ref{lem:sobolevclosed} with $\check{c}$ given there, and the last one is due to Lemma \ref{lem:hsl2}. Hence we have the stability result \eqref{equ:stabilityqmc}.
For the error estimate \eqref{equ:errorqmc}, we have
\begin{equation*}
\|\mathcal{Q}_nf - f\|_{\Lt} \leq \|\mathcal{Q}_nf - \mathcal{P}_n f\|_{\Lt} + \|\mathcal{P}_nf-f\|_{\Lt},
\end{equation*}
where $\mathcal{P}_n$ is the $\Lt$-orthogonal projection operator \eqref{equ:proj}. For the term $ \|\mathcal{P}_nf-f\|_{\Lt}$, we have
\begin{equation*}\begin{split}
\|\mathcal{P}_nf-f\|_{\Lt}^2
& = \sum_{\ell=n+1}^{\infty}\sum_{k=1}^{Z(d,\ell)}\lvert\langle f,Y_{\ell,k}\rangle\rvert^2 = \sum_{\ell=n+1}^{\infty}\sum_{k=1}^{Z(d,\ell)}\lvert\langle f,Y_{\ell,k}\rangle\rvert^2\frac{a_{\ell}^{(s)}}{a_{\ell}^{(s)}} \leq a_n^{(s)} \|f\|_{\Hs}^2 \lesssim n^{-2s}\|f\|_{\Hs}^2.
\end{split}\end{equation*}
For the term $\|\mathcal{Q}_nf - \mathcal{P}_n f\|_{\Lt}$, we have
\begin{equation*}
\|\mathcal{Q}_nf - \mathcal{P}_n f\|_{\Lt}^2 = \sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}\left\lvert\left\langle f,Y_{\ell,k}\right\rangle_m - \left\langle f,Y_{\ell,k}\right\rangle \right\rvert^2
\end{equation*}
and
\begin{equation*}
\left\lvert\left\langle f,Y_{\ell,k}\right\rangle_m - \left\langle f,Y_{\ell,k}\right\rangle \right\rvert^2\leq \left(\frac{c(s,d)}{m^{s/d}}\|fY_{\ell,k}\|_{\Hs}\right)^2\leq \left(\frac{c(s,d)\check{c}}{m^{s/d}}\|f\|_{\Hs}\|Y_{\ell,k}\|_{\Hs}\right)^2,
\end{equation*}
where the first inequality follows from the integration error \eqref{equ:QMCerror} using QMC designs, and the second is due to Lemma \ref{lem:sobolevclosed}. Note that
\begin{equation*}
\|Y_{\ell,k}\|_{\Hs}^2 = \sum_{\ell'=0}^{\infty}\sum_{k'=1}^{Z(d,\ell')}\frac{1}{a_{\ell'}^{(s)}}\lvert\left\langle Y_{\ell,k},Y_{\ell',k'}\right\rangle\rvert^2 = \frac{1}{a_{\ell}^{(s)}}.
\end{equation*}
Thus
\begin{equation*}
\|\mathcal{Q}_nf - \mathcal{P}_n f\|_{\Lt}^2 \leq \left(\frac{c(s,d)\check{c}}{m^{s/d}}\|f\|_{\Hs}\right)^2 \frac{1}{a_n^{(s)}}\sum_{\ell=0}^n\sum_{k=1}^{Z(d,\ell)}1 =\left(\frac{c(s,d)\check{c}}{m^{s/d}}\|f\|_{\Hs}\right)^2 \frac{Z(d+1,n)}{a_n^{(s)}},
\end{equation*}
leading to the error estimate \eqref{equ:errorqmc}.
\end{proof}
The estimate \eqref{equ:errorqmc} consists of two terms: one representing the error of the original hyperinterpolation, and the other newly introduced in terms of $m$. In addition to hyperinterpolation, the fully discrete needlet approximation \cite{MR3668040}, which uses spherical needlets \cite{MR2253732,MR2237162} and quadrature rules without an exactness assumption, admits error estimates of the same type; see the recent contribution \cite{brauchart2022needlets}.
\begin{corollary}\label{cor:QMCpoly}
If $f\in \Hs(\Sd)\cap \Pn(\Sd)$, then $ \|\mathcal{P}_nf-f\|_{\Lt} = 0$ and
\begin{equation*}
\|\mathcal{Q}_nf-f\|_{\Lt}\leq \frac{c(s,d)\check{c}}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}} \|f\|_{\Hs}.
\end{equation*}
\end{corollary}
\begin{remark}\label{rem:asymp}
If the number $m$ of quadrature points has a lower bound of order $n^d$, then $\|\mathcal{Q}_n f\|_{\Lt}$ is uniformly bounded by some constant. Recall from \eqref{equ:asl} that $a_{n}^{(s)}\asymp (1+n)^{-2s}$ and from \eqref{equ:numberZ} that $Z(d+1,n) \sim \frac{2}{\Gamma(d+1)}n^{d}$ as $n\rightarrow\infty$. Thus if $m$ has a lower bound of order $n^{d+\frac{d^2}{2s}}$, then $\|\mathcal{Q}_n f-f\|_{\Lt}$ is uniformly bounded by some constant as $n\rightarrow\infty$. Moreover, if $m$ has a lower bound of order
\begin{equation}\label{equ:order2}
(n+1)^{d+\varepsilon_1}n^{\frac{d^2}{2s}+\varepsilon_2}
\end{equation}
where $\varepsilon_1,\varepsilon_2>0$, then $\|\mathcal{Q}_n f-f\|_{\Lt}\rightarrow 0$ as $n\rightarrow\infty$.
\end{remark}
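For the reader's convenience, the computation behind the orders in this remark is as follows: by \eqref{equ:asl} and \eqref{equ:numberZ},
\begin{equation*}
\frac{1}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}} \asymp \frac{n^{d/2}(1+n)^{s}}{m^{s/d}},
\end{equation*}
so this term remains bounded once $m^{s/d}\gtrsim n^{s+\frac{d}{2}}$, i.e. $m\gtrsim n^{d+\frac{d^2}{2s}}$, and it tends to zero under the slightly stronger condition \eqref{equ:order2}.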
If the QMC hyperinterpolation is regarded as a special case of the unfettered hyperinterpolation, then the expression \eqref{equ:order1} for $\eta$ requires $m$ to have a lower bound of order
\begin{equation}\label{equ:order3}
(2n+1)^{d+\varepsilon_1}n^{\frac{2d^2-d}{2s}+\varepsilon_2}
\end{equation}
so that $\eta\rightarrow 0$ and hence $\|\mathcal{Q}_n f-f\|_{\Lt}\rightarrow 0$ as $n\rightarrow\infty$. For the same values of $\varepsilon_1$ and $\varepsilon_2$, the order \eqref{equ:order3} derived from regarding the QMC hyperinterpolation as a special case of the unfettered hyperinterpolation is never smaller than the order \eqref{equ:order2} derived from Theorem \ref{QMCthm}: indeed $\frac{d^2}{2s}\leq\frac{d^2}{s}-\frac{d}{2s}=\frac{2d^2-d}{2s}$ holds for any $d\geq 1$, with strict inequality for $d\geq 2$. Moreover, as the term $E_n(f)$ in the estimate \eqref{equ:error} in Theorem \ref{thm} also has a convergence rate of $n^{-s}$, what essentially distinguishes the general estimate \eqref{equ:error} from the refined estimate \eqref{equ:errorqmc} is the remaining term in each: the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ in the estimate \eqref{equ:error} and the term $\frac{1}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}} \|f\|_{\Hs}$ in the refined estimate \eqref{equ:errorqmc}. For a fixed degree $n$, we have seen in Section \ref{subsec:QMCinferior} that the convergence rate of the term in \eqref{equ:error} with respect to $m$ is $m^{-s/(2d)}$, whereas the rate of the term in \eqref{equ:errorqmc} is $m^{-s/d}$.
\begin{corollary}\label{cor:QMChyper}
If $E_n(f)\lesssim{n^{-s}}$, then, with the aid of Remark \ref{rem:asymp}, letting $m\gtrsim (n+1)^dn^{\frac{d^2}{2s}}n^{d}$ gives
\begin{equation*}
\|\mathcal{Q}_nf - f\|_{\Lt}\lesssim {n^{-s}}.
\end{equation*}
\end{corollary}
\begin{remark}
For the above results, we assume $f\in \Hs(\Sd)$ and that $\{x_j\}_{j=1}^m$ is a QMC design for $\Hs(\Sd)$. Recall the concept of QMC strength: if $f\in H^{s'}$ and $\{x_j\}_{j=1}^m$ is a QMC design with strength $s^*$, then $s$ in the above results should be replaced by $s = \min\{s',s^*\}$.
\end{remark}
\section{Numerical experiments}\label{sec:numerical}
\subsection{Point sets and test functions}
Many different sequences of point sets on the sphere have been introduced in the literature. In the following experiments, we use point sets including
\begin{itemize}
\item[$\circ$] Random scattered points generated by the following MATLAB commands:\\
\texttt{rvals = 2*rand(m,1)-1;}\\
\texttt{elevation = asin(rvals); }\texttt{\% calculate an elevation angle for each point}\\
\texttt{azimuth = 2*pi*rand(m,1); }\texttt{\% create an azimuth angle for each point}\\
\texttt{\% convert to Cartesian coordinates}\\
\texttt{[x1,x2,x3] = sph2cart(azimuth,elevation,ones(m,1));}
\item[$\circ$] Equal area points \cite{MR1306011} based on an algorithm given in \cite{MR2582801};
\item[$\circ$] Fekete points which maximize the determinant for polynomial interpolation \cite{MR2065291};
\item[$\circ$] Coulomb energy points, which minimize $\sum_{1\le i<j\le m}1/\|x_i-x_j\|_2$;
\item[$\circ$] Spherical $t$-designs.
\end{itemize}
Random scattered points are directly generated in MATLAB, equal area points are generated with the Recursive Zonal Equal Area (EQ) Sphere Partitioning Toolbox by Leopardi, Fekete points and Coulomb energy points were computed in advance by Womersley and are available on his website\footnote{Robert Womersley, \emph{Interpolation and Cubature on the Sphere}, \url{http://www.maths.unsw.edu.au/~rsw/Sphere/}; accessed in August, 2022.}, and spherical $t$-designs are generated as the so-called well conditioned spherical $t$-designs in \cite{MR2763659}.
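To fix ideas, the following MATLAB sketch assembles the discrete Fourier coefficients of a QMC hyperinterpolant \eqref{equ:QMChyper} of degree $n$ on $\mathbb{S}^2$ from any of the above point sets. It is a minimal illustration rather than the code used in our experiments, and it assumes a hypothetical routine \texttt{sphharm(l,k,X)} that evaluates an $L^2$-orthonormal real spherical harmonic $Y_{l,k}$ at the rows of \texttt{X}:\\
\texttt{\% X: m-by-3 points on the sphere; f: function handle accepting m-by-3 input}\\
\texttt{\% sphharm(l,k,X) is an assumed helper, not a built-in}\\
\texttt{m = size(X,1); w = 4*pi/m; \% equal weights |S\textasciicircum{}2|/m}\\
\texttt{fx = f(X); \% sampled function values}\\
\texttt{coef = zeros((n+1)\textasciicircum{}2,1); idx = 0; \% dim of P\_n on S\textasciicircum{}2 is (n+1)\textasciicircum{}2}\\
\texttt{for l = 0:n}\\
\texttt{\ \ for k = 1:2*l+1}\\
\texttt{\ \ \ \ idx = idx + 1;}\\
\texttt{\ \ \ \ coef(idx) = w * sum(fx .* sphharm(l,k,X)); \% discrete coefficient of Y\_\{l,k\}}\\
\texttt{\ \ end}\\
\texttt{end}\\
The QMC hyperinterpolant is then $\mathcal{Q}_nf=\sum_{\ell,k}\texttt{coef}_{\ell,k}\,Y_{\ell,k}$.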
Moreover, we consider four kinds of test functions:
\begin{itemize}
\item[$\circ$] A polynomial $f_1(x)=(x_1+x_2+x_3)^2\in\mathbb{P}_6(\Sd)$;
\item[$\circ$] $f_2(x_1,x_2,x_3):= \lvert x_1+x_2+x_3\rvert+ \sin^2(1+\lvert x_1+x_2+x_3 \rvert)$, which is continuous but non-smooth;
\item[$\circ$] The Franke function for the sphere \cite[p. 146]{MR946761}
\begin{equation*}\begin{split}
f_3(x_1,x_2,x_3) :=& 0.75\exp(-((9x_1-2)^2)/4-((9x_2-2)^2)/4-((9x_3-2)^2)/4)\\
& +0.75\exp(-((9x_1+1)^2)/49-((9x_2+1))/10-((9x_3+1))/10)\\
& +0.5\exp(-((9x_1-7)^2)/4-((9x_2-3)^2)/4-((9x_3-5)^2)/4)\\
& -0.2\exp(-((9x_1-4)^2)-((9x_2-7)^2)-((9x_3-5)^2)),
\end{split}\end{equation*}
which is in $C^{\infty}(\Sd)$;
\item[$\circ$] A sum of six compactly supported Wendland radial basis functions \cite{MR3668040}
\begin{equation*}
f_{4,\sigma}(x):=\sum_{i=1}^6\phi_{\sigma}\left(\lVert z_i-x\rVert_2\right),\quad \sigma\geq 0,
\end{equation*}
where $z_1=[1,0,0]^{\text{T}}$, $z_2 = [-1,0,0]^{\text{T}}$, $z_3 = [0,1,0]^{\text{T}}$, $z_4=[0,-1,0]^{\text{T}}$, $z_5 = [0,0,1]^{\text{T}}$, and $z_6=[0,0,-1]^{\text{T}}$. The original Wendland functions
\begin{equation*}
\tilde{\phi}_{\sigma}(r):=
\begin{cases}
(1-r)_+^2,& \sigma =0,\\
(1-r)^4_+(4r+1), & \sigma = 1,\\
(1-r)_+^6(35r^2+18r+3)/3,& \sigma=2,\\
(1-r)_+^8(32r^3+25r^2+8r+1),&\sigma=3,\\
(1-r)_+^{10}(429r^4+450r^3+210r^2+50r+5)/5,&\sigma=4,
\end{cases}
\end{equation*}
are defined in \cite{wendland1995piecewise}, where $(r)_+:=\max\{r,0\}$ for $r\in\mathbb{R}$, and the normalized Wendland functions (used as test functions below), as defined in \cite{MR3822234}, are
\begin{equation*}
\phi_{\sigma}(r) : = \tilde{\phi}_{\sigma}\left(\frac{r}{\delta_{\sigma}}\right),\quad \delta_{\sigma}:=\frac{3(\sigma+1)\Gamma(\sigma+1/2)}{2\Gamma(\sigma+1)},\quad \sigma \geq 0.
\end{equation*}
The normalized Wendland functions converge pointwise to a Gaussian as $\sigma\rightarrow\infty$, see \cite{chernih2014wendland}; moreover, $f_{4,\sigma}\in H^{\sigma+3/2}(\Sd)$, see \cite{MR2740542,MR1920637}. A MATLAB sketch of these test functions is given after this list.
\end{itemize}
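As a concrete aid, the test functions $f_{4,\sigma}$ can be evaluated in MATLAB by a direct transcription of the formulas above (a minimal sketch of ours; the vectorization choices are assumptions, not the experiment code):\\
\texttt{function y = wendland(r,sigma) \% original Wendland functions}\\
\texttt{rp = max(1-r,0); \% truncated power (1-r)\_+}\\
\texttt{switch sigma}\\
\texttt{\ \ case 0, y = rp.\textasciicircum{}2;}\\
\texttt{\ \ case 1, y = rp.\textasciicircum{}4.*(4*r+1);}\\
\texttt{\ \ case 2, y = rp.\textasciicircum{}6.*(35*r.\textasciicircum{}2+18*r+3)/3;}\\
\texttt{\ \ case 3, y = rp.\textasciicircum{}8.*(32*r.\textasciicircum{}3+25*r.\textasciicircum{}2+8*r+1);}\\
\texttt{\ \ case 4, y = rp.\textasciicircum{}10.*(429*r.\textasciicircum{}4+450*r.\textasciicircum{}3+210*r.\textasciicircum{}2+50*r+5)/5;}\\
\texttt{end}\\
\texttt{end}\\
\texttt{delta = 3*(sigma+1)*gamma(sigma+1/2)/(2*gamma(sigma+1)); \% delta\_sigma}\\
\texttt{phi = @(r) wendland(r/delta,sigma); \% normalized Wendland function}\\
\texttt{Z = [1 0 0; -1 0 0; 0 1 0; 0 -1 0; 0 0 1; 0 0 -1]; \% the six centers z\_i}\\
\texttt{f4 = @(x) sum(arrayfun(@(i) phi(norm(Z(i,:)-x)), 1:6)); \% f\_\{4,sigma\} at a point x}\\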
\subsection{Unfettered hyperinterpolation and scattered data}
We start with an instructive example of the unfettered hyperinterpolation with scattered data. As discussed in Theorem \ref{thm} and Corollary \ref{cor:unfettered}, the performance (i.e., the $\Lt$ error) of the unfettered hyperinterpolation depends heavily on the constant $\eta$, which we must therefore control. In particular, if the degree $n$ and the number $m$ of quadrature points are fixed, Corollary \ref{cor:unfettered} suggests that $\eta$ has a lower bound of order $\sqrt{n^2\log{n}/m}$; it is immediate that $\eta$ increases with $n$ and decreases with $m$. Moreover, the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ in the error bound \eqref{equ:error} has a lower bound of order
$$\sqrt{\frac{n^2\log{n}}{m}+4\sqrt{\frac{n^2\log{n}}{m}}}.$$
That is, for a given $n$, the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ has a lower bound of order $m^{-1/4}$.
We first investigate in isolation the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$, which arises as an artifact when the quadrature exactness assumption is discarded and which may cause the divergence of the unfettered hyperinterpolation, by examining the test function $f_1\in\mathbb{P}_6(\Sd)$. As $E_n(f_1)=0$ for all $n\geq 6$, we can focus on the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ by letting $n\geq 6$. The $\Lt$ errors are depicted in Figure \ref{fig:unfettered1}: for each pair $(n,m)$, we run ten tests and report the average as solid lines with markers; the maximal and minimal errors among these ten tests give the upper and lower boundaries of the filled region. We make at least three observations. Firstly, a larger degree $n$ of the unfettered hyperinterpolation, counterintuitively but rigorously asserted by our theory, leads to a larger value of $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$, because Corollary \ref{cor:unfettered} suggests that $\eta$ is positively related to $n$. Secondly, as $n$ increases, the unfettered hyperinterpolation becomes more stable in the sense that the gap between the maximal and minimal errors among the ten tests for each pair $(n,m)$ shrinks; this is also asserted by Corollary \ref{cor:unfettered}, which states that the error bound \eqref{equ:error} is valid with probability exceeding $1-\bar{c}n^{-\gamma}$. Thirdly, as $m$ increases, the decay rate of the unfettered hyperinterpolation error with respect to $m$ for each $n$ coincides with the rate $m^{-1/4}$. This observation is partially covered by our theory, which gives the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ a lower bound of order $m^{-1/4}$ (see the discussion in the previous paragraph), and we conjecture that $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}\asymp m^{-1/4}$ may hold.
\begin{figure}[htbp]
\centering
\includegraphics[width=10cm]{unfettered1.eps}
\caption{Convergence of the unfettered hyperinterpolation in the approximation of $f_1$.}\label{fig:unfettered1}
\end{figure}
After characterizing the behavior of the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$, we then consider the $\Lt$ error of the unfettered hyperinterpolation. If $E_n(f)$ is not zero, then the error estimate \eqref{equ:error} is controlled by two terms, $E_n(f)$ and $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$. We repeat the above procedure for the non-polynomial functions $f_2$ and $f_3$; the $\Lt$ errors are displayed in Figure \ref{fig:unfettered2}, in which we only report the average errors. We see that when $m$ is relatively small, the term $\sqrt{\eta^2+4\eta}\|\chi^*\|_{\Lt}$ dominates the error bound, so a smaller $n$ leads to a smaller $\eta$ and hence a smaller error bound; when $m$ is relatively large, $\eta$ becomes tiny, and the term $E_n(f)$ dominates the error bound, so a larger $n$ leads to a smaller error bound.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{unfettered2.eps}
\caption{Convergence of the unfettered hyperinterpolation in the approximation of $f_2$ and $f_3$.}\label{fig:unfettered2}
\end{figure}
Thus, we may state a \emph{rule of thumb} for determining the degree $n$ of the unfettered hyperinterpolation in real-world applications: if the number of samples is limited, choose a small $n$; if, on the other hand, the samples are relatively plentiful, choose a large $n$.
\subsection{QMC hyperinterpolation and QMC designs}
We then investigate the QMC hyperinterpolation, using equal area points, Coulomb energy points, Fekete points, and spherical $t$-designs. We first consider the approximation of $f_1\in\mathbb{P}_6$ by the QMC hyperinterpolation using equal area points, and we show that the refined error estimate \eqref{equ:errorqmc} in Theorem \ref{QMCthm} is indeed sharper than the estimate \eqref{equ:error} in Theorem \ref{thm}. A convergence result of quadrature rules using equal area points can be found in \cite[Section 6.1]{hesse2010numerical}. For any $n\geq 6$, we have
\begin{equation}\label{equ:errorf1}
\|\mathcal{Q}_nf_1-f_1\|_{\Lt}\leq \frac{c''(s,d)}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}} \|f_1\|_{\Hs},
\end{equation}
in light of Corollary \ref{cor:QMCpoly}. As the QMC strength $s^*$ of equal area points is conjectured in \cite{MR3246811} to be $2$, we may expect the decay rate of $\|\mathcal{Q}_nf_1-f_1\|_{\Lt}$ with respect to $m$ to be $m^{-1}$ on the 2-sphere $\mathbb{S}^2$. However, from the general framework of the unfettered hyperinterpolation, we can only expect the decay rate $m^{-1/2}$; see the discussion in Section \ref{subsec:QMCinferior}. The $\Lt$ errors are depicted in Figure \ref{fig:QMC1} and coincide with these deductions from our theory. We see that although the QMC hyperinterpolation can be regarded as a special case of the general framework of unfettered hyperinterpolation, the general estimate may not be sharp. Moreover, we find that a smaller $n$ leads to a smaller error, as suggested by the error bound \eqref{equ:errorf1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{QMC1.eps}
\caption{Convergence of the QMC hyperinterpolation in the approximation of $f_{1}$ using equal area points.}\label{fig:QMC1}
\end{figure}
We then consider the approximation of the normalized Wendland function $f_{4,2}$ by QMC hyperinterpolation, in which the term $n^{-s}\|f_{4,2}\|_{\Hs}$ cannot be ignored. Thus, the terms $n^{-s}$ and $m^{-s/2}$ jointly determine the convergence rate of $\|\mathcal{Q}_nf_{4,2}-f_{4,2}\|_{\Lt}$. It is conjectured in \cite{MR3246811} that the strength of Fekete points, equal area points, and Coulomb energy points is 1.5, 2, and 2, respectively. The $\Lt$ errors are depicted in Figure \ref{fig:QMC2}. Similarly to the unfettered hyperinterpolation using scattered data, the term $\frac{1}{m^{s/d}}\sqrt{\frac{Z(d+1,n)}{a_n^{(s)}}}\|f\|_{\Hs}$ dominates the error bound when $m$ is relatively small, so a smaller $n$ leads to a smaller error; the term $n^{-s}\|f\|_{\Hs}$ dominates the error bound when $m$ is relatively large. We observe that each error curve flattens as $m$ increases, and the curve for $n=6$ lies above the others once $m$ is large enough. Note that each curve corresponds to a fixed degree $n$; thus the rule of thumb for determining the degree $n$ of the unfettered hyperinterpolation also applies to the QMC hyperinterpolation. The error curves of the QMC hyperinterpolation using spherical $t$-designs flatten quickly once the number $m$ of points in the spherical $t$-design attains the required quadrature exactness degree. The convergence of the QMC hyperinterpolation using Fekete points is not monotonic; in light of Womersley's caveat on his website, this is possibly because all computed Fekete points are only approximate local maximizers of the determinant for polynomial interpolation.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{QMC2.eps}
\caption{Convergence of the QMC hyperinterpolation in the approximation of $f_{4,2}$ using different kinds of point sets.}\label{fig:QMC2}
\end{figure}
We then study the performance of the QMC hyperinterpolation in the approximation of functions with different levels of smoothness. As mentioned above, the normalized Wendland function $f_{4,\sigma}$ belongs to $H^{\sigma+3/2}(\Sd)$. The $\Lt$ errors of the QMC hyperinterpolation of degree $n=5$ in the approximation of $f_{4,\sigma}$ with $\sigma = 0,1,\ldots,4$ are displayed in Figure \ref{fig:QMC3}; the degree is intentionally set this small so that the error curves corresponding to different $\sigma$ can be distinguished. As expected, the QMC hyperinterpolation achieves smaller $\Lt$ errors for smoother functions.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{QMC3.eps}
\caption{Convergence of the QMC hyperinterpolation in the approximation of $f_{4,\sigma}$ with $\sigma = 0,1,2,3,4$.}\label{fig:QMC3}
\end{figure}
Finally, we give a numerical example related to Remark \ref{rem:asymp} and Corollary \ref{cor:QMChyper} by considering the approximation of $f_{4,\sigma}$. As mentioned in Section \ref{sec:designs}, to form a spherical $t$-design, $m$ should satisfy $m\asymp t^d$. Thus, constructing an original hyperinterpolant $\mathcal{L}_nf$ of degree $n$ on the 2-sphere $\mathbb{S}^2$ requires $m$ to be of order $n^2$, and we have $\|\mathcal{L}_nf-f\|_{\Lt}\rightarrow0$ as $n\rightarrow\infty$. According to Remark \ref{rem:asymp}, $m$ should have a lower bound of order $(n+1)^{d+\varepsilon_1}n^{\frac{d^2}{2s}+\varepsilon_2}$ for some $\varepsilon_1,\varepsilon_2>0$ to imply $\|\mathcal{Q}_nf-f\|_{\Lt}\rightarrow0$ as $n\rightarrow\infty$. The $\Lt$ errors with respect to the degree $n$ are depicted in Figure \ref{fig:QMC4}, where we let $m=(n+1)^2$ and $m=\lceil(n+1)^2n^{\frac{2}{\sigma+3/2}}\rceil$. The choice $m=(n+1)^2$, which suffices to ensure the convergence of the original hyperinterpolation as $n\rightarrow\infty$, fails to imply the \emph{monotonic} convergence of the QMC hyperinterpolation. The choice $m=\lceil(n+1)^2n^{\frac{2}{\sigma+3/2}}\rceil$, according to our theory, ensures the convergence of $\mathcal{Q}_nf$ as $n\rightarrow \infty$, as shown in Figure \ref{fig:QMC4}. It may seem strange that a larger $\sigma$ leads to a larger error level; this is due to the choice $m=\lceil(n+1)^2n^{\frac{2}{\sigma+3/2}}\rceil$: a larger $\sigma$ implies a smaller $m$.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{QMC4.eps}
\caption{Performance of the QMC hyperinterpolation in the approximation of $f_{4,\sigma}$ with $m = (n+1)^2$ and $m=\lceil(n+1)^2n^{\frac{2}{\sigma+3/2}}\rceil$.}\label{fig:QMC4}
\end{figure}
By Corollary \ref{cor:QMChyper}, if we let $m\gtrsim (n+1)^2n^{\frac{2}{s}+2}$, then we can expect $\|\mathcal{Q}_nf-f\|_{\Lt}\lesssim n^{-s}$. This corollary is corroborated by Figure \ref{fig:QMC5}, in which we investigate the approximation of $f_{4,2}$. We know that $f_{4,2}\in H^{2+3/2}(\Sd)$; thus we test five choices of the number $m$, namely $m = \beta\lceil(n+1)^2n^{2+\frac{2}{2+3/2}}\rceil$ with $\beta = 1,2,\ldots,5$. We see that the decay rates of all five choices coincide with $n^{-(2+3/2)}$. This observation suggests $\|\mathcal{Q}_nf_{4,2}-f_{4,2}\|_{\Lt}\lesssim n^{-(2+3/2)}$ and, more importantly, verifies our theory on the QMC hyperinterpolation.
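For instance, the sampling schedule of this experiment can be generated as follows (an illustrative sketch; the tested degrees are our choice, not necessarily those of the figure):\\
\texttt{n = 6:2:20; s = 2 + 3/2; \% f\_\{4,2\} lies in H\textasciicircum{}\{3.5\}(S\textasciicircum{}2)}\\
\texttt{m0 = ceil((n+1).\textasciicircum{}2 .* n.\textasciicircum{}(2/s + 2)); \% base schedule from Corollary \ref{cor:QMChyper}}\\
\texttt{M = (1:5)' * m0; \% row beta holds the schedule m = beta*m0, beta = 1,...,5}\\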
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{QMC5.eps}
\caption{Convergence of the QMC hyperinterpolation in the approximation of $f_{4,2}$ with $m$ being a multiple $\beta$ of $\lceil(n+1)^2n^{2+\frac{2}{\sigma+3/2}}\rceil$ for $\beta = 1,2,\ldots,5$.}\label{fig:QMC5}
\end{figure}
\section{Concluding Remarks}
In this paper, we investigate the approximation scheme of hyperinterpolation on the sphere. The quadrature rules used in the construction of hyperinterpolation are not required to be exact for polynomials, but only to satisfy the Marcinkiewicz--Zygmund property, and we give the corresponding error estimate. Such an approximation scheme without the quadrature exactness assumption is referred to as the unfettered hyperinterpolation. If the quadrature rules use QMC designs, then the error estimate can be refined. To emphasize the particularity of QMC designs, we refer to the hyperinterpolation using QMC designs as quadrature points as the QMC hyperinterpolation; note that the QMC hyperinterpolation can be regarded as a special case of the general framework of the unfettered hyperinterpolation. The general and refined estimates both split into two terms: a term representing the error estimate of the original hyperinterpolation of full quadrature exactness, and another term introduced as compensation for the loss of exactness degrees. The newly introduced term may not converge to zero as the degree of hyperinterpolation tends to $\infty$, and we need to control it in practice. The numerical experiments show that the construction of hyperinterpolation using quadrature rules without exactness is feasible, and they verify the error estimates given in Sections \ref{sec:unfetteredtheory} and \ref{sec:QMCtheory}. The general framework of the unfettered hyperinterpolation on the sphere may be extended to hyperinterpolation on other regions, such as a disk \cite{hansen2009norm}, a square \cite{caliari2007hyperinterpolation}, a cube \cite{caliari2008hyperinterpolation,wang2014norm}, a spherical triangle \cite{sommariva2021numerical}, and a spherical shell \cite{MR3554421,MR3739962}.
\section*{Acknowledgements}
We would like to thank Dr. Yoshihito Kazashi for his comment on our manuscript.
\bibliographystyle{siamplain}
| {
"timestamp": "2022-10-05T02:11:09",
"yymm": "2209",
"arxiv_id": "2209.11012",
"language": "en",
"url": "https://arxiv.org/abs/2209.11012",
"abstract": "This paper focuses on the approximation of continuous functions on the unit sphere by spherical polynomials of degree $n$ via hyperinterpolation. Hyperinterpolation of degree $n$ is a discrete approximation of the $L^2$-orthogonal projection of degree $n$ with its Fourier coefficients evaluated by a positive-weight quadrature rule that exactly integrates all spherical polynomials of degree at most $2n$. This paper aims to bypass this quadrature exactness assumption by replacing it with the Marcinkiewicz--Zygmund property proposed in a previous paper. Consequently, hyperinterpolation can be constructed by a positive-weight quadrature rule (not necessarily with quadrature exactness). This scheme is referred to as unfettered hyperinterpolation. This paper provides a reasonable error estimate for unfettered hyperinterpolation. The error estimate generally consists of two terms: a term representing the error estimate of the original hyperinterpolation of full quadrature exactness and another introduced as compensation for the loss of exactness degrees. A guide to controlling the newly introduced term in practice is provided. In particular, if the quadrature points form a quasi-Monte Carlo (QMC) design, then there is a refined error estimate. Numerical experiments verify the error estimates and the practical guide.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Bypassing the quadrature exactness assumption of hyperinterpolation on the sphere",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98657174604767,
"lm_q2_score": 0.7185944046238981,
"lm_q1q2_score": 0.708944936469885
} |
https://arxiv.org/abs/math/0505276 | Nilfactors of R^m-actions and configurations in sets of positive upper density in R^m | We use ergodic theoretic tools to solve a classical problem in geometric Ramsey theory. Let E be a measurable subset of R^m, with positive upper density. Let V={0,v_1,...,v_k} be a subset of R^m. We show that for r large enough, we can find an isometric copy of rV arbitrarily close to E. This is a generalization of a theorem of Furstenberg, Katznelson and Weiss showing a similar property for m=k=2. | \section{Introduction}
Let $E$ be a measurable subset of $\mathbb{R}^m$.
We set
\[
\bar{D}(E):= \limsup_{l(S) \rightarrow \infty} \frac{m(S \cap E)}{m(S)},
\]
where $S$ ranges over all cubes in $\mathbb{R}^m$, and $l(S)$ denotes
the length of a side of $S$.
$\bar{D}(E)$ is the upper density of $E$.
We are interested in configurations which are necessarily contained in $E$.
Furstenberg, Katznelson, and Weiss ~\cite{FuKaW} showed, using methods from ergodic theory, that if $E \subset \mathbb{R}^2$ with $\bar{D}(E)>0$, then all large distances
in $E$ are attained. More precisely:
\begin{thm}[FuKaW]\label{T:Distances}
If $E \subset \mathbb{R}^2$ with $\bar{D}(E)>0$, there exists $l_0$ such that for any
$l>l_0$ one can find a pair of points $x,y \in E$ with $\|x-y\|=l$.
\end{thm}
This result was also proved, using different methods, by Bourgain ~\cite{Bo}, and by Falconer and
Marstrand ~\cite{FaMa}.
It is natural to ask if the same is valid for larger configurations.
Bourgain has shown by an example that this cannot be done ~\cite{Bo}.
As some configurations may not be found in the set itself,
we try to find the configurations
arbitrarily close to the set.
In the same paper Furstenberg, Katznelson, and Weiss ~\cite{FuKaW}
show that with this weaker condition,
one can find triangles in the plane:
\begin{thm}[FuKaW]
Let $E \subset \mathbb{R}^2$ with $\bar{D}(E)>0$, and let $E_{\delta}$ denote
the points at distance $<\delta$ from $E$. Let $v,u \in \mathbb{R}^2$, then
there exists $l_0$ such that for $l>l_0$ and any $\delta>0$ there exists a triple
$(x,y,z) \in E_{\delta}^3$ forming a triangle congruent to $(0,lu,lv)$.
\end{thm}
The idea of the proof is to translate the geometric problem to a dynamical
problem, where $E$ corresponds to some measurable set $\ti{E}$, with
positive measure, in a measure preserving system $(X^0,\mathcal{B},\mu,\mathbb{R}^2)$.
The statement that $E_{\delta}$ contains a certain configuration,
corresponds to a
recurrence condition on the set $\ti{E}$. In the case of triangles
(configurations formed by $2$ vectors), the recurrence phenomenon in question
is reduced to the
case where $(X^0,\mathcal{B},\mu,\mathbb{R}^2)$ is a Kronecker action.
The problem for a general configuration reduces to the study of
pro-nilsystems (defined later). We prove the following theorem:
\begin{thm}\label{thm:main}
Let $E \subset \mathbb{R}^m$ have positive upper density,
and let $E_{\delta}$ denote the points of distance $< \delta$ from $E$.
Let $(u_1,\ldots,u_k) \in (\mathbb{R}^m)^k$.
Then there exists $l_0$ such that for any $l > l_0$, and any $\delta>0$
there exists $(x_1,x_2,\ldots,x_{k+1}) \in E_{\delta}^{k+1}$
forming a configuration congruent to $\{0,lu_1,\ldots,lu_{k}\}$.
\end{thm}
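Two special cases may help orient the reader. For $m=k=2$ this recovers the triangle theorem above, while already for $k=1$ it asserts that for every $\delta>0$ all sufficiently large rescalings of a single vector are realized in $E_{\delta}$:
\[
x_1,x_2 \in E_{\delta}, \qquad \|x_1-x_2\|=l\|u_1\| \quad \text{for all } l>l_0,
\]
a thickened analogue of Theorem \ref{T:Distances}.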
{\bf Acknowledgment}
I thank Sasha Leibman and Hillel Furstenberg for helpful comments.
\section{Translation of the Geometric Problem \\
to a Dynamical Problem.}
We start by translating the geometric problem
to a dynamical problem. The translation as shown here was done
in ~\cite{FuKaW}. We bring it here for the sake of completeness.\vspace{.25in}
Let $E \subset \mathbb{R}^m$, such that $\bar{D}(E)>0$. Define
\[
\varphi(u):=\min \{1,\operatorname{dist}(u,E)\}.
\]
The functions $\varphi_v(u)=\varphi(u+v)$ form an equicontinuous, uniformly
bounded family, and thus have compact closure in the topology of
uniform convergence over bounded sets in $\mathbb{R}^m$. Denote this
closure by $X^0$. $\mathbb{R}^m$ acts on $X^0$ by $T_v\psi(u)=\psi(u+v)$ for
$\psi \in X^0, u,v\in \mathbb{R}^m$. $X^0$ is a compact metrizable space and
we can identify Borel measures on $X^0$ with functionals on $C(X^0)$.
Since $\bar{D}(E)>0$, there exists a sequence of cubes $S_n$
such that
\[
\frac{m(S_n \cap E)}{m(S_n)} \longrightarrow \bar{D}(E)>0.
\]
We define a probability measure $\mu$ on $X^0$ as follows.
For $f \in C(X^0)$, let
\[
\mu_n(f)=\frac{1}{m(S_n)}\int_{S_n} f(T_v\varphi) dm(v)
\]
We have for some subsequence $\{n_k\}$
\[
\mu_{n_k} \overset{w*}{\longrightarrow} \mu.
\]
Set $f_0(\psi)=\psi(0)$, then $f_0$ is a continuous function on $X^0$.
We define $\tilde{E} \subset X^0$ by
\[
\psi \in \tilde{E} \iff f_0(\psi)=0 \iff \psi(0)=0.
\]
$\tilde{E}$ is a closed subset of $X^0$; since $0\le f_0\le 1$ and $(1-f_0(\psi))^l$ decreases pointwise to the indicator function of $\tilde{E}$ as $l\rightarrow\infty$, dominated convergence gives:
\[
\mu(\tilde{E})= \lim_{l \rightarrow \infty} \int_X (1-f_0(\psi))^l d\mu(\psi).
\]
\begin{lma}
$\mu(\tilde{E})>0$.
\end{lma}
\begin{proof}
It suffices to show that for any $l$,
\[
\int_X (1-f_0(\psi))^l d\mu(\psi) \ge \bar{D}(E).
\]
Indeed
\begin{equation*}
\begin{split}
\int_X (1-f_0(\psi))^l d\mu(\psi)
&= \lim_{k \rightarrow \infty} \frac{1}{m(S_{n_k})}\int_{S_{n_k}}
(1-f_0(T_v\varphi))^l dm(v) \\
&= \lim_{k \rightarrow \infty} \frac{1}{m(S_{n_k})}\int_{S_{n_k}}
(1-\varphi(v))^l dm(v) \\
&\ge \lim_{k \rightarrow \infty} \frac{m(S_{n_k} \cap E)}{m(S_{n_k})}
= \bar{D}(E) > 0,
\end{split}
\end{equation*}
since $\varphi(v) = 0$ for $v \in E$.
\end{proof}
The next proposition establishes the correspondence between $E$ and
$\tilde{E}$.
\begin{pro}
Let $E \subset \mathbb{R}^m$ and $\ti{E}$ be as above. If for
$\veC{u}{l} \in (\mathbb{R}^m)^l$ we have
\begin{equation}\label{eq:intersection}
\mu(\ti{E} \cap T_{u_1}^{-1}\ti{E} \cap \ldots \cap T_{u_l}^{-1}\ti{E}) >0 ,
\end{equation}
then for all $\delta>0$,
\[
E_{\delta} \cap (E_{\delta}-u_1) \cap \ldots \cap (E_{\delta}-u_l)
\ne \emptyset .
\]
\end{pro}
\begin{proof}
Define the function $g$ on $X^0$ by
\begin{equation*}
g(\psi)=
\begin{cases}
\delta-f_0(\psi)& \text{if $f_0(\psi)< \delta$}, \\
0 & \text{if $f_0(\psi) \ge \delta$}.
\end{cases}
\end{equation*}
Since $g(\psi)$ is positive for $\psi \in \ti{E}$, equation
(\ref{eq:intersection})
implies that
\[
\int g(\psi)g(T_{u_1}\psi) \ldots g(T_{u_l}\psi) d\mu > 0 .
\]
In particular, since the integrand is continuous and $\mu$ is supported on the closure of $\{T_v\varphi: v \in \mathbb{R}^m\}$, the integrand is positive at some $\psi=T_w\varphi$. As
\[
g(T_w \varphi) > 0 \iff \varphi(w) < \delta \iff w \in E_{\delta}
\]
we have
\[
w \in E_{\delta}, w+u_1 \in E_{\delta}, \ldots , w+u_l \in E_{\delta}.
\]
\end{proof}
We now forget the original set $E$, and the geometric problem takes the
following dynamical form:
\begin{thm}[Dynamical Version]\label{DynamicalVersion}
Let $(X,\mathcal{B},\mu,\mathbb{R}^m)$ be an $\mathbb{R}^m$ action, and let $T_u$ denote the action
of $u \in \mathbb{R}^m$. Let $\veC{u}{k} \in (\mathbb{R}^m)^k$,
and let $A \subset X$, with $\mu(A) > 0$. There exists $t_0 \in \mathbb{R}^{+}$ s.t.
for all
$t>t_0$, there exists a rotation $P \in SO(m)$ such that
\[
\mu(A \cap T_{tP u_1}^{-1}A \cap \ldots \cap T_{tP u_{k}}^{-1}A) > 0 .
\]
(Here $SO(m)$ is the special orthogonal group acting on $\mathbb{R}^m$).
\end{thm}
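To see how this dynamical statement feeds back into the geometric problem, note that applying the preceding proposition to the vectors $tPu_1,\ldots,tPu_k$ yields, for every $\delta>0$, a point $w$ with
\[
w \in E_{\delta},\ w+tPu_1 \in E_{\delta},\ \ldots,\ w+tPu_k \in E_{\delta},
\]
and the set $\{w,w+tPu_1,\ldots,w+tPu_k\}$ is congruent to $\{0,tu_1,\ldots,tu_k\}$, since $P \in SO(m)$ is an isometry.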
\section{Preliminaries.}
In this section we give some measure-theoretic and
ergodic-theoretic preliminaries. The theorems are stated without proofs.
For the proofs see ~\cite{Fu1}, ~\cite{Pe}.
A \emph{measure preserving system} (m.p.s) is a system
$X=(X^0,\mathcal{B},\mu,G)$ where $X^0$ is an arbitrary space, $\mathcal{B}$ is a
$\sigma$-algebra of subsets of $X^0$, $\mu$ is a $\sigma$-additive probability
measure on the sets of $\mathcal{B}$, and
$G$ is a locally compact group acting on $X^0$ by measure preserving
transformations. We denote the action of the element $g \in G$ by $T_g$.
If the group $G =\mathbb{Z}$, and $T$ is the generator of the $\mathbb{Z}$ action, we
denote the system $(X^0,\mathcal{B}, \mu,T)$.
We say that the action of $G$ is \emph{ergodic}, if for any $A \in \mathcal{B}$,
$T_g^{-1}A=A$
$\forall g \in G$, implies $\mu(A)=0$ or $\mu(A)=1$.
In this case we also say that $\mu$ is ergodic with respect to the
action of $G$.
Each $T_g$ induces a natural operator on $L^2(X)$ by
$T_gf=f \circ T_g$, and the ergodicity of the action of $G$ is equivalent
to the assertion that there are no non-constant $G$-invariant functions.
We have:
\begin{thm}[Mean Ergodic Theorem]
Let $X=(X^0,\mathcal{B},\mu,T)$ be a m.p.s., \\
and $f \in L^2(X)$. Then
\[
\Avr{N}{n} f \circ T^n \overset{L^2(X)}{\longrightarrow} \mathbb{P} f,
\]
where $\mathbb{P} f$ is the orthogonal projection of $f$ on the subspace of the
$T$-invariant functions.
\end{thm}
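In particular, when $T$ is ergodic the only $T$-invariant functions are the constants, so the limit is the constant function
\[
\mathbb{P} f=\int f \, d\mu,
\]
a fact used repeatedly below.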
Let ${X}=(X^0,\mathcal{B},\mu,G)$
be a measure preserving system (m.p.s).
Let ${Y}=(Y^0,\mathcal{D},\nu,G)$
be a homomorphic image of $X$; i.e., we have a map $\pi:X^0 \rightarrow Y^0$
with $\pi^{-1}\mathcal{B}_Y \subset \mathcal{B}_X$, $\pi \mu_X =\mu_Y$ and $\pi$ commutes with the $G$ action.
Then ${Y}$ is a {\em factor} of ${X}$, ${X}$
is an {\em extension} of ${Y}$, and abusing the notation we write
$\pi:X \rightarrow Y$ for the factor map. A factor of $X$ is determined by
a $G$-invariant subalgebra of $L^{\infty}(X)$.
The map $\pi$ induces two natural maps
$\pi^*: L^2(Y) \rightarrow L^2(X)$ given by $\pi^*f=f\circ \pi$, and
$\pi_*: L^2(X) \rightarrow L^2(Y)$ given by $\pi_*f=E(f|\mathcal{B}_{Y})$
(the orthogonal projection of $f$ onto $\pi^*L^2(Y)$). The two measure
preserving systems are \emph{equivalent} if the homomorphism
of one to the other is invertible. We shall simplify the notation, writing $E(f|Y)$ for $E(f|\mathcal{B}_Y)$.
A m.p.s. $X$ is \emph{regular} if $X^0$ is a compact
metric space, $\mathcal{B}$ the Borel algebra of $X^0$, $\mu$ a measure on $\mathcal{B}$.
A m.p.s. is \emph{separable} if $\mathcal{B}$ is generated by a countable subset.
As every separable m.p.s. is equivalent to a regular m.p.s.,
we will confine our attention to regular m.p.s.\vspace{.10in}
\subsection{Disintegration of Measure}
Let $(X^0,\mathcal{B},\mu)$ be a regular measure space, and let
$\alpha : (X^0,\mathcal{B},\mu)\rightarrow (Y^0,\mathcal{D},\nu)$ be a homomorphism to
another measure space (not necessarily regular). Suppose $\alpha$ is induced
by a map $\varphi: X^0 \rightarrow Y^0$. In this case the measure $\mu$ has a
{\em disintegration} in terms of fiber measures $\mu_y$, where $\mu_y$ is concentrated
on the fiber $\varphi^{-1}(y)=X_y$.
We denote by $\mathcal{M}(X^0)$ the compact
metric space of probability measures on $X^0$.
\begin{thm}\label{T:Disintegration}
There exists a measurable map from $Y^0$ to $\mathcal{M}(X^0)$, $y \rightarrow \mu_y$
which satisfies:
\begin{enumerate}
\item For every $f \in L^1(X^0,\mathcal{B},\mu)$, $f \in L^1(X^0,\mathcal{B},\mu_y)$ \\
for a.e. $y \in Y^0$, and $E(f|Y)(y) = \int f d\mu_y \quad$ for
a.e. $y \in Y^0$
\item $\int \{ \int f d\mu_y \} d \nu(y) = \int f d\mu \quad$
for every $f \in L^1(X^0,\mathcal{B},\mu)$.
\end{enumerate}
\end{thm}
The map $y \rightarrow \mu_y$ is characterized by condition (1).
We shall write $\mu= \int \mu_y d\nu$ and refer to this as the
disintegration of the measure $\mu$ with respect to $\mathcal{D}$.
If $(X^0,\mathcal{B},\mu,G)$ is a m.p.s., $\mathcal{D}$ the algebra of all $G$-invariant sets,
$\mu= \int \mu_x d\mu(x)$ the disintegration of $\mu$ with respect to $\mathcal{D}$,
then $\mu_x$ is $G$-invariant and ergodic for a.e. $x$.
\begin{dsc}{\bf Nilsystems and Characteristic Factors}
A {\em $k$-step nilflow} is a system $X=(N/\Gamma,\mathcal{B},m,G)$ where
$N$ is a $k$-step nilpotent Lie group, $\Gamma$ a cocompact lattice,
$\mathcal{B}$ the (completed) Borel algebra, $m$ the Haar measure, and the action
of $G$ is by
translation by elements of $N$: $T_gn\Gamma=a_gn\Gamma$ where $g \rightarrow a_g$ is a
homomorphism of $G$ to $N$.
We will sometimes denote this system by $(N/\Gamma,G)$, or $(N/\Gamma, a)$ if
$G=\mathbb{Z}$ and $1 \rightarrow a$. If $G$ is connected
and $(N/\Gamma,G)$ is an ergodic nilflow, then we may assume that $N$ is connected
so that $X^0 = N/\Gamma$ is connected and is a homogeneous space of the identity
component of $N$.
A {\em $k$-step pro-nilflow} is an inverse limit of $k$-step nilflows.
\end{dsc}
\begin{thm}[Cf.\cite{Pa1}] \label{thm:uniform}
Let $X=(N/\Gamma,a)$ be an ergodic nilflow; then
$X$ is uniquely ergodic. Let $f$ be a continuous
function on $N/\Gamma$. Then the averages $\Avr{N}{n} f(a^nx)$ converge
uniformly to $\int f(x) dm$.
\end{thm}
\begin{dsc}\label{dsc:sasha}
Let $N$ be a connected simply connected nilpotent Lie group, $\Gamma$ a cocompact
lattice
in $N$, and $X^0=N/\Gamma$. Let $\pi:N \rightarrow X^0$ be the natural projection,
and let $M$ be a closed connected subgroup of $N$ such that $\pi(M)$ is a
closed submanifold of $X^0$. Let $G=\mathbb{R}^k$ and let $\varphi:G \rightarrow N$ be
a homomorphism. For $x \in X^0$ let $O(x)=\overline{Gx}$, and for $x \in X^0$,
$g \in G$ let
$O_g(x)=\overline{\{\varphi(ng)x\}}_{n \in \mathbb{Z}}$; these are subnilmanifolds of
$X^0$ (see for example \cite{Le}).
Introducing Malcev coordinates on $N$ and $M$ (\cite{Ma})
we can identify these groups topologically with, say, $\mathbb{R}^l$ and $\mathbb{R}^m$,
$l \ge m$.
Call a subset of $\mathbb{R}^d$
{\em countably linear} if it is contained in a countable union of proper
affine subspaces. Call a subset of $\mathbb{R}^d$ {\em polynomial} if it is
the set of zeroes of some nonzero polynomial in $\mathbb{R}^d$ (i.e. an algebraic
variety of co-dimension $1$), and {\em countably
polynomial} if it is contained in a countable union of proper polynomial
subsets. The following proposition is due to Sasha Leibman:
\begin{pro}\label{pro:sasha} There exists a connected subnilmanifold $V$ of
$X^0$ such that
$O(\pi(a)) \subseteq aV$ for all $a \in M$, and there exists a countably linear
set $B \subset G$ such that for every $g \in G \setminus B$ there is a countably
polynomial set $A_g \subset M$ such that $O_g(\pi(a))=aV$ for all
$a \in M \setminus A_g$.
\end{pro}
\begin{proof} Define a mapping $\eta:G \times M \rightarrow N$ by
$\eta(g,a)=a^{-1}\varphi(g)a$. In Malcev coordinates on $M$ and $N$,
$\eta$ is a polynomial mapping $\mathbb{R}^{k+m} \rightarrow \mathbb{R}^l$ (see \cite{Ma}). Moreover
for each $a \in M$, $\eta(\cdot,a)$ is a homomorphism $G \rightarrow N$.
Let $H$ be the closure of the subgroup generated by $\eta(G \times M)$.
Let $V$ be the closure of $\pi(H)$ in $N/\Gamma$. Then $V$ is a subnilmanifold
$V=\pi(K)$ for some closed
subgroup $K$ of $N$ (\cite{Sh}) ($\pi(H)$ itself is not necessarily closed).
We then have $a^{-1}\varphi(g)\pi(a)=\pi(a^{-1}\varphi(g)a) \in V$, thus
$\varphi(g)\pi(a) \in aV$ for any $a \in M$, and $g \in G$. So,
$O(\pi(a))\subseteq aV$ for all $a \in M$.
Let $\ti{L}$ be the set $\{ l \in N: lV=V\}$. Then $\ti{L}$ is a group,
$\eta(G \times M)$ and $K$ are subsets of $\ti{L}$, and $V=\pi(\ti{L})$.
Let $L$ be the identity component of $\ti{L}$, then
$\eta(G \times M) \subseteq L$, and therefore $H \subseteq L$.
$V$ is connected, and therefore a homogeneous subspace of $L$; $V=L/L\cap\Gamma$.
Let $W$ be the maximal torus factor of $V$,
$W=L/([L,L](L\cap\Gamma))$, and let $p:V \rightarrow W$ be the natural projection.
Let $\hat{W}$ be the group of characters of $W$, and let $\chi \in \hat{W}$.
The character $\chi$ can be lifted to a homomorphism $\zeta_{\chi}:L \rightarrow \mathbb{R}$.
For each $\chi \in \hat{W}$, let $\psi_{\chi}:=\zeta_{\chi} \circ \eta$.
Then $\psi_{\chi}$ are polynomials
on $G \times M$, which for each $a \in M$ are linear with respect to $G$. Moreover, each $\psi_{\chi}$ is a nonzero polynomial; otherwise
$\eta(G \times M)$ would be contained in the kernel of the corresponding
homomorphism $\chi \circ p \circ \pi : N \rightarrow S^1$. This is a closed
subgroup of $N$ containing $\eta(G \times M)$, thus contains the subgroup $H$.
Therefore $\chi \circ p \circ \pi (H)=1$, but this implies that
$\chi \circ p (V) =1$, i.e. $\chi$ is the trivial character.
Let $C_{\chi} \subset G \times M$ be the set of zeros of $\psi_{\chi}$, and let
$C=\bigcup_{\chi \in \hat{W}} C_{\chi}$. Then
$C$ is a countably polynomial subset of $G \times M$.
For any $(g,a) \notin C$ one has
$\psi_{\chi}(g,a)=\zeta_{\chi}(a^{-1}\varphi(g)a)\ne 0$
for all $\chi \in \hat{W}$, so the projection of
$a^{-1}\varphi(g)\pi(a) \in V$ to $W$ is not
contained in any proper
subtorus of $W$. Consider the following $\mathbb{Z}$ action on $V$: for $n \in \mathbb{Z}$,
$v \rightarrow a^{-1}\varphi(ng)av$. Since the projection of
$a^{-1}O_g(\pi(a)) \subseteq V$
to $W$ is a closed subgroup of $W$, i.e. a subtorus of $W$,
it is equal to $W$. By Parry (\cite{Pa1}) this implies the $\mathbb{Z}$ action is
minimal and therefore
$a^{-1}O_g(\pi(a))=V$, and so $O_g(\pi(a))=aV$.
Now let
\[
B=\{ g \in G:\{g\} \times M \subseteq C\}, \ \
M_{\chi}(g)= \{a \in M: \psi_{\chi}(g,a)=0 \}.
\]
If $\{g\} \times M \subseteq C$, then
$M= \bigcup_{\chi \in \hat{W}} M_{\chi}(g)$. As $M$ is connected,
if $M_{\chi}(g)$ has non-empty interior, then $M_{\chi}(g)=M$.
By the Baire category theorem $M_{\chi}(g)$
has non-empty interior for some $\chi \in \hat{W}$, and hence $M_{\chi}(g)=M$ for that $\chi$. Therefore
\[
B =\bigcup_{\chi\in \hat{W}} \{ g \in G: \psi_{\chi}(g,a)=0 \
\text{for all} \ a \in M\}.
\]
Then $B$ is a countably linear subset of $G$, and for each
$g \in G \setminus B$,
\[
A_g=C \cap (\{g\} \times M) = \bigcup_{\chi \in \hat{W}}
\{a \in M: \psi_{\chi}(g,a)=0\}
\]
is a countably polynomial subset of $M$.
\end{proof}
\end{dsc}
\begin{thm}\label{thm:abelian_structure} Let $X=(X^0,\mathcal{B},\mu,\mathbb{R}^m)$ be an ergodic $\mathbb{R}^m$ action. We can associate with
$X$ an inverse sequence of factors
$\ldots \rightarrow Y_k(X) \rightarrow Y_{k-1}(X) \rightarrow \ldots \rightarrow Y_1(X)$, where $Y_k(X)$ is
a $(k-1)$-step
pro-nilflow such that the following holds:
If $u_1,\ldots,u_k \in \mathbb{R}^m$ are such that the actions
of $T_{u_i}$ and $T_{u_i-u_j}$ for $i \ne j$ are ergodic,
then for any bounded measurable functions $f_1,\ldots,f_k$ the limits
in $L^2(X)$
\begin{equation}\label{eq:FK1}
\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^k T_{nu_j} f_j(x), \
\lim_{N \rightarrow \infty} \Avr{N}{n} \pi^*\prod_{j=1}^k T_{nu_j} E(f_j|Y_k)(x)
\end{equation}
exist and are equal. The factor $Y_k(X)$ is called the {\em $k$-universal
characteristic factor ($k$-u.c.f) of $X$}.
Let
\[ \begin{split}
\tau_{\vec{u}}(T):=& T_{u_1} \times \ldots \times T_{u_k},\\
\end{split}
\]
let $\triangle_{k}(\mu)$ be the diagonal measure on $X^k$, then
\[ \bar{\triangle}_{\vec{u}}(\mu):=
\lim_{N \rightarrow \infty} \Avr{N}{n} \tau_{\vec{u}}^n \triangle_{k}(\mu)
\]
is well defined. If $F$ is a function
invariant under $\tau_{\vec{u}}$ with respect to the measure
$\bar{\triangle}_{\vec{u}}(\mu)$
and if $E(f_j|\mathcal{B}_{Y_k})=0$ for some $1\le j \le k$, then
\[
\int f_1(x_1)\ldots f_k(x_k)F(x_1,\ldots,x_k)d\bar{\triangle}_{\vec{u}}=0.
\]
\end{thm}
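For small $k$ these factors are classical objects: $Y_1(X)$ is the trivial (one-point) factor, so for $k=1$ the first limit in \eqref{eq:FK1} reduces to the Mean Ergodic Theorem,
\[
\lim_{N \rightarrow \infty} \Avr{N}{n} T_{nu_1} f_1(x)=\int f_1 \, d\mu \qquad (T_{u_1} \ \text{ergodic}),
\]
and $Y_2(X)$ is the Kronecker factor of $X$.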
The factors $Y_k(X)$ were constructed for an ergodic m.p.s
$X=(X^0,\mathcal{B},\mu,T)$ by Host and Kra \cite{HKr} and independently by Ziegler
\cite{Z}. Frantzikinakis and Kra \cite{FrKr} showed that if
$X_i=(X^0,\mathcal{B},\mu,T_i)$ are ergodic measure
preserving systems on the same space $X^0$, where $T_i$ commute, then
$\mathcal{B}_{Y_k(X_i)}=\mathcal{B}_{Y_k(X_j)}=\mathcal{B}_{Y_k}$ for any $i,j$, and if the action of
$T_i^{-1}T_j$ is ergodic for all $i \ne j$, then
equation (\ref{eq:FK1}) holds (replacing $T_{nu_i}$ with
$T^n_i$). Thus if we have an
$\mathbb{R}^m$ action, then the systems $X_u=(X^0,\mathcal{B},\mu,T_u)$, $u \in \mathbb{R}^m$,
for which the action of $T_u$ is ergodic, share the same sequence
of factors $Y_k(X)=Y_k(X_u)$. The fact that $Y_k(X)$ is a factor of the
$\mathbb{R}^m$ action follows from \cite{Z} corollary $2.4$. We will show that the
action of $\mathbb{R}^m$ on $Y_k(X)$ preserves the pro-nil structure:
\begin{dfn}
Let $Y=(Y^0,\mathcal{B}_Y,\mu_Y,\mathbb{R}^m)$ be a $j$-step pro-nilflow;
$Y=\lim_{\leftarrow} N_i/\Gamma_i$. We say that the action of $\mathbb{R}^m$ on $Y$
preserves the pro-nil structure
if the action of $\mathbb{R}^m$ on $Y$ induces an $\mathbb{R}^m$ action on each
$N_i/\Gamma_i$ by group rotations.
\end{dfn}
\begin{pro} Let $Y=(Y^0,\mathcal{B},\mu,T)$ be a $j$-step ergodic pro-nilflow;
$Y=\lim_{\leftarrow} N_i/\Gamma_i$. Let $\{T_c\}_{c \in \mathbb{R}^m}$ be an
$\mathbb{R}^m$ action on $(Y^0,\mathcal{B},\mu)$ that
commutes with the action of $T$. Then the action of $\mathbb{R}^m$ on $Y$
preserves the pro-nil structure.
\end{pro}
\begin{proof}
For $j=1$, $Y$ is a Kronecker action, and any factor of $Y$ is a
Kronecker action. Thus it is enough to check that eigenfunctions of the
$T$ action are also eigenfunctions of the $\mathbb{R}^m$ action.
If $\psi$ is an eigenfunction, $T\psi(y)=\lambda\psi(y)$, then as $T$ and $T_c$
commute $\psi(TT_cy)=\lambda \psi(T_cy)$. Combining the two equations we get
\[
T\left( \frac{\psi(T_cy)}{\psi(y)}\right)=1.
\]
By ergodicity of $T$ we get $\psi(T_cy)=\delta_c \psi(y)$.
We proceed by induction on $j$. Let $Y$ be a $j$-step ergodic pro-nilflow;
$Y= \lim_{\leftarrow} M_i/\Lambda_i$. We first show that the $\mathbb{R}^m$ action
on $Y$ induces an $\mathbb{R}^m$ action on $M_i/\Lambda_i$. Let $\pi:Y \rightarrow M_i/\Lambda_i$ be
the projection. Let $p: Y \rightarrow Y_j(M_i/\Lambda_i)$ be the projection onto the
$j$-u.c.f of $M_i/\Lambda_i$. $Y_j(M_i/\Lambda_i)$ is a $(j-1)$-step nilflow, which we denote
by $N_i/\Gamma_i$.
The space $L^2(M_i/\Lambda_i)\circ \pi \subset L^2(Y)$ is
spanned by functions $f$ satisfying the following condition:
\[
Tf(y)=g(y)f(y)
\]
where $g=\ti{g} \circ p$ with $\ti{g}$ of type $j$
(see \cite{Z} theorem $6.1$).
As $T,T_c$ commute for any $c \in \mathbb{R}^m$
\[
TT_cf(y)=T_cTf(y)=T_cg(y)T_cf(y).
\]
Thus
\[
T\left( \frac{f(T_cy)}{f(y)}\right)=\frac{T_cg(y)}{g(y)}\frac{f(T_cy)}{f(y)}.
\]
By the induction hypothesis the action of $\mathbb{R}^m$ on $Y$ induces an action
on $Y_j(M_i/\Lambda_i)=N_i/\Gamma_i$, and this action is given by rotation
by an element $a_i(c) \in N_i$.
By proposition $6.37$ in \cite{Z}, as $\mathbb{R}^m$ commutes with the
action of $T$ on $N_i/\Gamma_i$ given by rotation by $a \in N_i$,
there exists a family of measurable functions
$\{ f_{c}:N_i/\Gamma_i \rightarrow S^1 \}_{c \in \mathbb{R}^m}$ and a family of constants
$\{ \lambda_c \}_{c \in \mathbb{R}^m}$ such that
\[
\frac{T_c \ti{g}(p(y))}{\ti{g}( p(y))}
=\lambda_{c}\frac{Tf_{c}(p(y))}{f_{c}(p(y))}.
\]
We get
\[
T\left( \frac{f(T_cy)}{f(y)f_c(p(y))}\right)=\lambda_c
\frac{f(T_cy)}{f(y)f_c(p(y))}.
\]
This implies that $\lambda_c$ is an eigenvalue of $T$, but as it is
multiplicative in $c \in \mathbb{R}^m$, $\lambda_c \equiv 1$.
Therefore by ergodicity of the $T$ action
\[
\frac{f(T_cy)}{f(y)f_c(p(y))}=\delta_c'
\]
or
\[
f(T_cy)=\delta_c' f(y) f_c(p(y)) \in L^2(M_i/\Lambda_i)\circ \pi.
\]
This shows that the $\mathbb{R}^m$ action on $Y$ induces an $\mathbb{R}^m$ action
on $M_i/\Lambda_i$. The fact that this action is given by group rotations
was shown by Parry \cite{Pa2} in the case where $M_i$ is connected.
Alternatively, $M_i/\Lambda_i$ can be presented as a torus extension of
$Y_j(M_i/\Lambda_i)=N_i/\Gamma_i$ with
$g:N_i/\Gamma_i \rightarrow \mathbb{T}^n$ a cocycle of type $j$. Without loss of generality
we can assume $n=1$. Now the tuples $(a,g)$, $(a_i(c),f_c)$ belong to
the group
$\mathcal{G}$ defined in \cite{Z} proposition $6.37$ and this group acts
transitively and effectively on $M_i/\Lambda_i$.
\end{proof}
\begin{pro}[\cite{PS}]\label{P:ergodic}
If $(X,\mathcal{B},\mu,\mathbb{R})$ is an ergodic action of $\mathbb{R}$, then for all but a countable
set of $u \in \mathbb{R}$, $T_u$ acts ergodically.
If $(X,\mathcal{B},\mu,\mathbb{R}^m)$ is an ergodic action of $\mathbb{R}^m$, then all but
a countable set of the $(m-1)$-dimensional hyperplanes through the origin
act ergodically.
\end{pro}
The following is a version of the van der Corput Lemma
(see \cite{FuKaW}).
\begin{lma}\label{L:hilbert}
Let $H$ be a Hilbert space, $\xi \in \Xi$ some index set,
and let $u_n(\xi) \in H$ for $n \in \mathbb{N}$ be uniformly bounded in $n,\xi$.
Assume that for each $r$ the limit
\[
\gamma_r(\xi)= \Lim{N} \Avr{N}{n} \ip{u_n(\xi)}{u_{n+r}(\xi)}
\]
exists uniformly and
\begin{equation}
\Lim{R} \Avr{R}{r} \gamma_r(\xi)=0
\end{equation}
uniformly. Then
\[
\Avr{N}{n} u_n(\xi) \overset{H}{\longrightarrow} 0
\]
uniformly in $\xi$.
\end{lma}
\begin{dsc}{\bf Multidimensional Szemer\'edi.}
The following generalization of Szemer\'edi's theorem was proved
by Furstenberg and Katznelson ~\cite{FuKa}:
\begin{thm}\label{thm:FuK}
Let $X=(X^0,\mathcal{B},\mu,\mathbb{Z}^k)$ be a m.p.s., and let $T_1,\ldots,T_k$ be the
generators of the $\mathbb{Z}^k$ action.
Let $f\ge 0$ be a bounded measurable function on $X$ with $\int f d\mu >0$. Then
\[
\liminf_{N \rightarrow \infty} \Avr{N}{n} \int f(x)T^n_1f(x) \ldots T^n_kf(x)
d\mu(x) >0.
\]
\end{thm}
\end{dsc}
\section{The Main Theorem}
Denote by $M_m(\mathbb{R})$ the $m \times m$ matrices over $\mathbb{R}$, and by $SO(m)$
the special orthogonal group. Recall that if $(N/\Gamma,G)$ is a nilflow
the action of $T_g$ for $g \in G$ is given by
$T_gn\Gamma=a_gn\Gamma$ where $a_g \in N$.
\begin{lma} Let $(N/\Gamma,\mathbb{R}^m)$ be an ergodic
measure preserving action of $\mathbb{R}^m$ on a
nilmanifold $N/\Gamma$, where $N$ is connected.
Let $f_j$ be continuous functions on $N/\Gamma$.
Let $(u_1,\ldots, u_l) \in (\mathbb{R}^m)^l$. Then there exists a countably linear
set (see \ref{dsc:sasha}) $\mathcal{S} \subset M_m(\mathbb{R})$ such that for any $F \in M_m(\mathbb{R}) \setminus \mathcal{S}$
the function
\begin{equation}\label{eq:independence}
g_{F,L}(x):=\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^l T_{(nF+L)u_j} f_j(x)
\end{equation}
is independent of $L \in M_m(\mathbb{R})$ for a.e. $x \in N/\Gamma$.
Furthermore for any such $F$ the convergence is uniform in $L$.
\end{lma}
\begin{proof} Let $M$ be the diagonal of $N^l$ and let $G=M_m(\mathbb{R}) =\mathbb{R}^{m^2}$
(thought of as an additive group). Let $\varphi: M_m(\mathbb{R}) \rightarrow N^l$
be given by
\[
\varphi(F)=(a_{Fu_1},\ldots,a_{Fu_l}).
\]
By proposition
\ref{pro:sasha}, there exists a submanifold $V$ of $(N/\Gamma)^l$, and there
exists a countably linear set $\mathcal{S} \subset M_m(\mathbb{R}) $ such that for every
$F \in M_m(\mathbb{R}) \setminus \mathcal{S}$ there is a countably polynomial set
$A_F \subset M$
such that for $(a,\ldots,a) \notin A_F$,
\[
\overline{\{\varphi(nF)\pi(a,\ldots,a)\}}_{n \in \mathbb{Z}}=(a,\ldots,a)V,
\]
and
\[
\overline{G\pi (a,\ldots,a)} \subseteq (a,\ldots,a)V \ \ \
(\text{therefore} = (a,\ldots,a)V).
\]
For any $F \in M_m(\mathbb{R}) \setminus \mathcal{S}$, and $(a,\ldots,a) \notin A_F$
we have
\[
T_{Lu_1}\times \ldots \times T_{Lu_l} \pi(a,\ldots,a) \in
(a,\ldots,a)V.
\]
The action of $\varphi(F)$ on $(a,\ldots,a)V$ is ergodic, and by
theorem \ref{thm:uniform} it is uniquely ergodic. The point
$(T_{Lu_1}a\Gamma,\ldots,T_{Lu_l}a\Gamma) \in (a,\ldots,a)V$.
By theorem \ref{thm:uniform} the convergence in equation
(\ref{eq:independence}) is uniform in $L$, and $g_{F,L}(a\Gamma)$ is
independent of $L$.
\end{proof}
\begin{cor}\label{cor:same_limit} Let $Y=(Y^0,\mathcal{B},\mu,\mathbb{R}^m)$
be an ergodic pro-nilflow.
Let $f_j$ be bounded measurable functions on $Y^0$.
Let $(u_1,\ldots, u_l) \in (\mathbb{R}^m)^l$.
Then there exists a countably linear
set $\mathcal{S} \subset M_m(\mathbb{R})$ such that for any $F \in M_m(\mathbb{R}) \setminus \mathcal{S}$,
and all $L \in M_m(\mathbb{R})$
the function
\[
g_{F,L}(y):=\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^l T_{(Fn+L)u_j} f_j(y)
\]
where the limit is in $L^2(Y)$, is independent of $L \in M_m(\mathbb{R})$,
and the convergence
is uniform in $L$.
\end{cor}
\begin{proof} If $Y^0=\lim_{\leftarrow} N_j/\Gamma_j$, the continuous functions on
$N_j/\Gamma_j$ lifted to $Y^0$, for all $j$, are dense in $C(Y^0)$.
\end{proof}
The next proposition will enable us to evaluate averages of functions
on $X$ by evaluating the averages of the projections of the functions
on the factor $Y_k(X)$ described in \ref{thm:abelian_structure}.
\begin{pro}\label{P:AbelianReduction}
Let $X=(X^0,\mathcal{B},\mu,\mathbb{R}^m)$ be an ergodic action of $\mathbb{R}^m$,
and let $\veC{u}{k} \in (\mathbb{R}^m)^k$.
Let
$Y_k$ be the factor described in theorem \ref{thm:abelian_structure},
and let $\pi:X \rightarrow Y_k$ be the factor map.
Let $f_1,\ldots,f_k$ be bounded measurable functions on $X$.
Then there exists a countably linear subset $\mathcal{S} \subset M_m(\mathbb{R})$
such that for any $M \in M_m(\mathbb{R}) \setminus \mathcal{S}$ for which
$T_{Mu_i}$ and $T_{M(u_i-u_j)}$,
$i,j=1,\ldots,k$, $i \ne j$, are ergodic, and for all $P \in M_m(\mathbb{R})$,
we have
\[
\Avr{N}{n} \prod_{j=1}^k T_{(nM+P)u_j} f_j(x) -
\Avr{N}{n} \prod_{j=1}^k T_{(nM+P)u_j} \pi^*E(f_j|Y_k)(x)
\overset{L^2(X)}{\longrightarrow} 0
\]
uniformly in $P$.
\end{pro}
\begin{proof}
We prove this inductively. For $k=1$, let $u \in \mathbb{R}^m$, $u \ne 0$.
If $T_{Mu}$ is ergodic then
\[
\Avr{N}{n} T_{nMu+Pu} f(x)= T_{Pu}(\Avr{N}{n}
T_{nMu}f(x))\longrightarrow \int f(x) d\mu
\]
uniformly in $P$, by the Mean Ergodic Theorem.
Assume the statement holds for $k$: i.e., for $M$ outside a countably linear set,
such that $T_{Mu_i}$ and $T_{M(u_i-u_j)}$, $i,j=1,\ldots,k$,
$i \ne j$, are ergodic, and for all $P \in M_m(\mathbb{R})$, we have
\[
\Avr{N}{n} \prod_{j=1}^k T_{(nM+P)u_j} f_j(x) -
\Avr{N}{n} \prod_{j=1}^k T_{(nM+P)u_j} \pi^*E(f_j|Y_k)(x)
\overset{L^2(X)}{\longrightarrow} 0
\]
uniformly in $P$.
We show this for $k+1$.
Let $\mathcal{S} \subset M_m(\mathbb{R})$ be the set from corollary \ref{cor:same_limit}
corresponding to $Y_k$ and $u_1, \ldots, u_{k+1}$. For
$M\in M_m(\mathbb{R}) \setminus \mathcal{S}$ the $L^2$ limit
\[
\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^{k+1} T_{(nM+P)u_j} E(f_j|Y_k)(y)
\]
is independent of $P$, and the convergence to the limit is uniform in $P$.
Let $M \in M_m(\mathbb{R})\setminus \mathcal{S}$ be such that $T_{Mu_i}$ and $T_{M(u_i-u_j)}$
are ergodic for $i,j=1,\ldots,k+1$, $i \ne j$.
It is enough to show that if
for some $1 \le j \le k+1$, $E(f_j|Y_{k+1})=0$ then
\[
\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^{k+1} T_{(nM+P)u_j} f_j(x)=0
\]
uniformly in $P$.
We use the van der Corput Lemma (Lemma \ref{L:hilbert}).
Let $v_n(M,P):= \prod_{j=1}^{k+1} T_{nMu_j+Pu_j} f_j(x)$.
Then
\[
\ip{v_n(M,P)}{v_{n+r}(M,P)}=\int \prod_{j=1}^{k+1} T_{nMu_j+Pu_j} f_j(x)T_{(n+r)Mu_j+Pu_j} \bar{f}_j(x) \ d\mu,
\]
and
\begin{equation*}
\begin{split}
\gamma_r &(M,P) = \lim_{N \rightarrow \infty} \Avr{N}{n}\ip{v_n(M,P)}{v_{n+r}(M,P)} \\
= & \lim_{N \rightarrow \infty} \int f_1(x)T_{rMu_1}\bar{f}_1(x)
\Avr{N}{n}\prod_{j=2}^{k+1}T_{nM(u_j-u_1)+P(u_j-u_1)}
(f_j(x)T_{rMu_j}\bar{f}_j(x))\ d\mu. \\
\end{split}
\end{equation*}
By the induction hypothesis this limit is equal (uniformly in $P$)
to the following limit
\begin{equation}\label{eq:Y_K}
\lim_{N \rightarrow \infty}\int f_1(x)T_{rMu_1}\bar{f}_1(x)
\Avr{N}{n}\prod_{j=2}^{k+1}T_{nM(u_j-u_1)+P(u_j-u_1)}
\pi^*E(f_jT_{rMu_j}\bar{f}_j|Y_{k})(x)\ d\mu,
\end{equation}
which equals
\begin{equation}\label{eq:Y_K_1}
\lim_{N \rightarrow \infty} \Avr{N}{n}\int \prod_{j=1}^{k+1}
 T_{nMu_j+Pu_j}
\pi^*E(f_jT_{rMu_j}\bar{f}_j|Y_k)(x)\ d\mu.
\end{equation}
The limit in equation (\ref{eq:Y_K_1}) is a limit on a
$(k-1)$-step pro-nilflow; as $M \in M_m(\mathbb{R}) \setminus \mathcal{S}$, it is the same for
all $P$, and the convergence is uniform in $P$.
By \ref{thm:abelian_structure}, the limit in equation (\ref{eq:Y_K_1}) is equal to
\[
\int \prod_{j=1}^{k+1} T_{Pu_j}f_j(x_j)T_{rMu_j+Pu_j}\bar{f}_j(x_j)
\ d\bar{\triangle}_{M\vec{u}}(\mu)(x_1,\ldots,x_{k+1}),
\]
where $\bar{\triangle}_{M\vec{u}}(\mu)$ is a measure on $X^{k+1}$.
Now
\[
\lim_{R \rightarrow \infty}\Avr{R}{r} \prod_{j=1}^{k+1} T_{rMu_j+Pu_j}\bar{f}_j(x_j)
\]
converges uniformly in $P$ to a function $F$ in
$L^2(\bar{\triangle}_{M\vec{u}}(\mu))$ which is invariant under
$T_{Mu_1} \times \ldots \times T_{Mu_{k+1}}$
(by the Mean Ergodic Theorem, as in the case $k=1$).
Finally by \ref{thm:abelian_structure}, if $E(f_j|Y_{k+1})=0$ for some
$1 \le j\le k+1$, then
\[
\lim_{R \rightarrow \infty}\Avr{R}{r} \gamma_r(M,P) =0
\]
uniformly in $P$.
\end{proof}
\begin{rmr}\label{rmr:complex}
Corollary \ref{cor:same_limit} and proposition \ref{P:AbelianReduction} remain
valid if we replace $M_m(\mathbb{R})$ by a linear subspace of $M_m(\mathbb{R})$. We apply
this for the case $m=2$, replacing $M_2(\mathbb{R})$ by the embedding
$\mathbb{C} \hookrightarrow M_2(\mathbb{R})$. Then, thinking of $u_1, \ldots, u_k$ as points in $\mathbb{C}$
we can replace the matrices $F,L \in M_2(\mathbb{R})$ with
$c,d \in \mathbb{C}$ where $c$ is outside countably many lines in $\mathbb{C}$.
\end{rmr}
\begin{lma}\label{Bantisymetric}
For each $r=1,2,\ldots$, let $\{s^r_{l}\}_{l=1}^m \subset \mathbb{R}^m$ be
such that for each $r$, $s_l^r\neq \vec{0}$ for some $1\le l \le m$.
Let $\{e_l\}_{l=1}^m$ be the standard basis for $\mathbb{R}^m$, and let
$u_1,\ldots,u_k \in \mathbb{R}^m$ be nonzero and pairwise distinct.
There exists an antisymmetric matrix $B \in M_m(\mathbb{R})$ s.t.
$Bu_i,B(u_i-u_j)\neq \vec{0}$ for $1\le i,j \le k$, $i \neq j$, and
\[
\forall r : f_{r,B}(M) \stackrel{\te{def}}{=} \Sum{l}{m} \ip{s^r_{l}}{MBe_l} \not \equiv 0.
\]
\end{lma}
\begin{proof}
Let $\mathcal{B}$ be the subspace of antisymmetric matrices. Since $f_{r,B}(M)$
is linear in $M$, we have $f_{r,B}(M) \equiv 0$ if and only if $B$ satisfies
the $m^2$ linear equations obtained by letting $M$ range over the standard
basis of $M_m(\mathbb{R}) \cong \mathbb{R}^{m^2}$. Hence for each $r$,
the `bad' $B$ form a linear subspace of $\mathcal{B}$. Since we have only
a countable number of conditions, it suffices to show that each such
linear subspace is a proper subspace of $\mathcal{B}$. So without loss of
generality, we may assume there is only one condition.
Assume, towards a contradiction, that
\[
\forall B \in \mathcal{B} : \Sum{l}{m} \ip{Ms_l}{Be_l} \equiv 0.
\]
Without loss of generality $s_{11} \neq 0$.
Let $E_{21}$ be the $m \times m$ matrix with $1$ at index $(2,1)$ and $0$
elsewhere.
Then
\[
\begin{split}
\Sum{l}{m} \ip{E_{21}s_l}{Be_l}&=s_{11} b_{21} +s_{21} b_{22}+\ldots+s_{m1} b_{2m}\\
&=-s_{11} b_{12} +s_{31} b_{23}+\ldots+s_{m1} b_{2m}=0
\end{split}
\]
As $s_{11}\neq 0$, this is a nontrivial linear condition on antisymmetric matrices.
Finally, the conditions $Bu_i=\vec{0}$ and $B(u_i-u_j)=\vec{0}$ are nontrivial
linear conditions on antisymmetric matrices.
\end{proof}
\begin{lma}\label{lma:antisymmetric}
Let $\mathcal{S}$ be a countably linear set in $M_m(\mathbb{R})$, $m \ge 3$.
Let $u_1,\ldots,u_k \in \mathbb{R}^m$.
There exist matrices
$M \in M_m(\mathbb{R}) \setminus \mathcal{S}$ and $P \in SO(m)$ such that
$M^tP$ is antisymmetric, and $T_{Mu_i}$, $T_{M(u_i-u_j)}$
for $i,j=1,\ldots,k$, $i \ne j$ are ergodic.
\end{lma}
\begin{proof}
The set $\mathcal{S}$ is countably linear, and therefore it is a countable union of sets of
the form
\[
\mathcal{S}_r=\{ N \in M_m(\mathbb{R}): \sum_{l=1}^m \ip{s^r_l}{Ne_l}=c_r\},
\]
where $\{e_l\}$ is the standard basis of $\mathbb{R}^m$, $s^r_l \in \mathbb{R}^m$, and $c_r \in \mathbb{R}$.
By Lemma \ref{Bantisymetric} there exists an antisymmetric matrix $B$ such that
$Bu_i,B(u_i-u_j)\neq \vec{0}$ for $1\le i,j \le k$, $i \neq j$, and for all $r$
\[
f_r(M)=\sum_{l=1}^m \ip{s^r_l}{MBe_l} \not \equiv 0.
\]
For each $r$, the set of matrices $Q$ with $f_r(Q)=c_r$ is an affine hyperplane in $M_m(\mathbb{R})$,
and it intersects $SO(m)$ in a proper algebraic subvariety of $SO(m)$.
Since $M=PB$ belongs to $\mathcal{S}_r$ exactly when $f_r(P)=c_r$,
for a.e. $P \in SO(m)$ (with respect to the Haar measure on $SO(m)$)
the matrix $M=PB$ avoids the bad set $\mathcal{S}$. Moreover, if $M=PB$ avoids $\mathcal{S}$,
then $tM=tPB$ avoids $\mathcal{S}$ for a.e. $t>0$.
By Proposition \ref{P:ergodic}, for a.e. $P \in SO(m)$ and a.e. $t \in \mathbb{R}$
$T_{tPBu_i}$, $T_{tPB(u_i-u_j)}$ act ergodically.
\end{proof}
\begin{dsc}{\em Proof of Theorem \ref{DynamicalVersion}}.
Without loss of generality, we may assume, by disintegration of $\mu$,
that the action of $\mathbb{R}^m$ is ergodic.
Let $f=1_A$ be the characteristic function of the set $A$, and $\mu(A)=\lambda$.
Let $Y_k$ be the factor described in theorem \ref{thm:abelian_structure}, and let
$E(f|Y_k)$ be the projection of $f$ on $L^2(Y_k)$.
We first prove the theorem for $m>2$.
By Corollary \ref{cor:same_limit}, Proposition \ref{P:AbelianReduction},
and Lemma \ref{lma:antisymmetric}, there exist matrices $M\in M_m(\mathbb{R})$ and $P \in SO(m)$ such that
$M^{t}P$ is antisymmetric, and for all $t \in \mathbb{R}$ we have
\[
\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^k T_{nMu_j+tPu_j} E(f|Y_k)(y)=g(y)
\]
in $L^2(Y_k)$, and
\[
\Avr{N}{n} \prod_{j=1}^k T_{nMu_j+tPu_j} f(x)-
\Avr{N}{n} \prod_{j=1}^k T_{nMu_j+tPu_j} \pi^*E(f|Y_k)(x)\rightarrow 0
\]
in $L^2(X)$, where the convergence is uniform in $t$. Then
\[
\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^k T_{nMu_j+tPu_j} f(x)=
\lim_{N \rightarrow \infty}\Avr{N}{n} \prod_{j=1}^k T_{nMu_j}f(x) = \pi^*g(x),
\]
and the convergence is uniform in $t$.
By Theorem \ref{thm:FuK}, there exists a constant $C>0$ with
\[
\int f(x) \pi^*g(x)\, d\mu_X > C.
\]
Uniform convergence implies that there exists $N_0$, such that for all $t$
\[
\left|\Avr{N_0}{n} \int f(x) \prod_{j=1}^k T_{nMu_j+tPu_j} f(x) d\mu_X
- \int f(x) \pi^*g(x) d\mu_X \right|< \frac{C}{2}.
\]
Therefore for $N_0$, and for all $t \in \mathbb{R}$
\[
\Avr{N_0}{n} \int f(x) \prod_{j=1}^k T_{nMu_j+tPu_j} f(x) d\mu_X >\frac{C}{2}.
\]
This implies that for all $t \in \mathbb{R}$ there exists $n\le N_0$
with
\begin{equation*}
\begin{split}
\mu( A\cap & T_{(nM+tP)u_1}A\cap \ldots \cap T_{(nM+tP)u_k}A) = \\
&\int f(x)\prod_{j=1}^k T_{(nM+tP)u_j} f(x) d\mu_X
>\frac{C}{2}.
\end{split}
\end{equation*}
Now the $T_u$ satisfy the following continuity condition:
\begin{equation} \label{eq:cont}
\forall \varepsilon \exists \delta : \|u-u'\| \le \delta \Rightarrow
| \mu(A \cap T_uA) - \mu(A \cap T_{u'}A) | \le \varepsilon.
\end{equation}
As $M^tP$ is antisymmetric, $M \in T_P(SO(m))$, the tangent space of $SO(m)$
at $P$. Thus
\[
P':=P\exp(\epsilon nP^{-1}M)=P(I+\epsilon nP^{-1}M+o(\epsilon))=
P+\epsilon n M + o(\epsilon)
\]
belongs to $SO(m)$.
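To spell out the step used here: since $P \in SO(m)$ we have $P^{-1}=P^t$, and $M^tP$ being antisymmetric gives
\[
(P^{t}M)^{t} = M^{t}P = -(M^{t}P)^{t} = -P^{t}M,
\]
so $P^{-1}M$ is antisymmetric as well; hence $\exp(\epsilon n P^{-1}M) \in SO(m)$, and therefore $P' \in SO(m)$.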
But
\[
(\frac{1}{\epsilon}P+nM)- \frac{1}{\epsilon}P'=o(1),
\]
and if $t=\frac{1}{\epsilon}$ is large enough, then by equation
(\ref{eq:cont})
\[
\mu(A \cap T_{tP'u_1}A \cap \ldots \cap T_{tP'u_k}A) >\frac{C}{4}.
\]
For $m=2$ the proof is similar.
By Remark \ref{rmr:complex} there exists $c \in \mathbb{C}$
such that for all $t \in \mathbb{R}$ we have
\[
\lim_{N \rightarrow \infty} \Avr{N}{n} \prod_{j=1}^k T_{ncu_j+itcu_j} E(f|Y_k)(y)=g(y)
\]
in $L^2(Y)$, and
\[
\Avr{N}{n} \prod_{j=1}^k T_{ncu_j+itcu_j} f(x)-
\Avr{N}{n} \prod_{j=1}^k T_{ncu_j+itcu_j} \pi^*E(f|Y_k)(x)\rightarrow 0
\]
in $L^2(X)$, where the convergence is uniform in $t$.
As in the proof for $m>2$, there exists $N_0$, such that for all $t$
\[
\Avr{N_0}{n} \int f(x) \prod_{j=1}^k T_{ncu_j+itcu_j} f(x) d\mu_X >\frac{C}{2}.
\]
This implies that for all $t \in \mathbb{R}$ there exists $n\le N_0$
with
\begin{equation*}
\begin{split}
\mu( A\cap & T_{(n+it)cu_1}A\cap \ldots \cap T_{(n+it)cu_k}A) = \\
&\int f(x)\prod_{j=1}^k T_{(n+it)cu_j} f(x) d\mu_X
>\frac{C}{2}.
\end{split}
\end{equation*}
If $t$ is large enough, then
\[
(n+it)cu_j \sim \frac{t}{|n+it|}(n+it)cu_j,
\]
and $|\frac{t}{|n+it|}(n+it)|=t$, so that $\frac{t}{|n+it|}(n+it)c$ has modulus exactly $t|c|$.
That is, the configuration $\{(n+it)cu_j\}_j$ is arbitrarily close to a rotated copy of
$\{t|c|\,u_j\}_j$, i.e., to an isometric copy of $t|c|\{u_j\}_j$, which completes the proof.
\end{dsc}
| {
"timestamp": "2005-05-12T22:37:11",
"yymm": "0505",
"arxiv_id": "math/0505276",
"language": "en",
"url": "https://arxiv.org/abs/math/0505276",
"abstract": "We use ergodic theoretic tools to solve a classical problem in geometric Ramsey theory. Let E be a measurable subset of R^m, with positive upper density. Let V={0,v_1,...,v_k} be a subset of R^m. We show that for r large enough, we can find an isometric copy of rV arbitrarily close to E. This is a generalization of a theorem of Furstenberg, Katznelson and Weiss showing a similar property for m=k=2.",
"subjects": "Dynamical Systems (math.DS); Combinatorics (math.CO)",
"title": "Nilfactors of R^m-actions and configurations in sets of positive upper density in R^m",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717440735736,
"lm_q2_score": 0.7185944046238982,
"lm_q1q2_score": 0.7089449350513105
} |
https://arxiv.org/abs/1705.02280 | The Stochastic Matching Problem: Beating Half with a Non-Adaptive Algorithm | In the stochastic matching problem, we are given a general (not necessarily bipartite) graph $G(V,E)$, where each edge in $E$ is realized with some constant probability $p > 0$ and the goal is to compute a bounded-degree (bounded by a function depending only on $p$) subgraph $H$ of $G$ such that the expected maximum matching size in $H$ is close to the expected maximum matching size in $G$. The algorithms in this setting are considered non-adaptive as they have to choose the subgraph $H$ without knowing any information about the set of realized edges in $G$. Originally motivated by an application to kidney exchange, the stochastic matching problem and its variants have received significant attention in recent years.The state-of-the-art non-adaptive algorithms for stochastic matching achieve an approximation ratio of $\frac{1}{2}-\epsilon$ for any $\epsilon > 0$, naturally raising the question that if $1/2$ is the limit of what can be achieved with a non-adaptive algorithm. In this work, we resolve this question by presenting the first algorithm for stochastic matching with an approximation guarantee that is strictly better than $1/2$: the algorithm computes a subgraph $H$ of $G$ with the maximum degree $O(\frac{\log{(1/ p)}}{p})$ such that the ratio of expected size of a maximum matching in realizations of $H$ and $G$ is at least $1/2+\delta_0$ for some absolute constant $\delta_0 > 0$. The degree bound on $H$ achieved by our algorithm is essentially the best possible (up to an $O(\log{(1/p)})$ factor) for any constant factor approximation algorithm, since an $\Omega(\frac{1}{p})$ degree in $H$ is necessary for a vertex to acquire at least one incident edge in a realization. |
\section{The Optimality of the $b$-Matching Lemma} \label{app:b-limitation}
In this section, we establish that our $b$-matching lemma is essentially optimal, in the sense that it is impossible in general to find a $b$-matching
with at least $b\cdot \ensuremath{\mbox{\sc opt}}\xspace(G)$ edges for $b$ much larger than $1/p$. In particular, we show that,
\begin{claim*}
For any constant $0 < p < 1$, there exist bipartite graphs $G$ where $G_p$ has a matching of size $n-o(n)$ in expectation, but for any
$b \geq {2 \over p}$, there is no $b$-matching in $G$ with (at least) $b\cdot 0.99n$ edges; here $n$ is the number of
vertices on each side of $G$.
\end{claim*}
\begin{proof}
For any integer $N$, let $\ensuremath{\mathcal{G}}_{N,\frac{1}{N}}$ be the family of bipartite random graphs with $N$ vertices on each side and probability of picking each edge being $1/N$.
Let $\ensuremath{c^{\star}} \in (0,1)$ be the constant such that any bipartite graph sampled from $\ensuremath{\mathcal{G}}_{N,\frac{1}{N}}$ has a matching of size at least $\ensuremath{c^{\star}} \cdot N$ w.p. $1-o(1)$.
By a result of Karp and Sipser~\cite{KarpS81} on \emph{sparse random graphs} (see also~\cite{AronsonFP98}, Theorem~4), we have $\ensuremath{c^{\star}} \approx 0.56$.
Consider bipartite graphs $G(L,R,E)$ where the set $L$ consists of two disjoint sets $L_1$ and $L_2$ with $\card{L_1} = N$ and
$\card{L_2} = (1-\ensuremath{c^{\star}}) \cdot N$ for the parameter $N = \frac{n}{2-\ensuremath{c^{\star}}}$. Similarly, $R$ consists of two sets $R_1$ and $R_2$ with $\card{R_1} = N$
and $\card{R_2} = (1-\ensuremath{c^{\star}})\cdot N$.
The set of edges in $G$ can be partitioned into two parts. First, there is a complete bipartite graph between $L_1$ and $R_2$, and a complete bipartite graph
between $L_2$ and $R_1$. Second, there is a sparse graph between $L_1$ and $R_1$ defined through the following random process: each edge
between $L_1$ and $R_1$ is independently chosen w.p. ${1 \over pN}$.
In the following, we show that for a graph $G$ created through the above process, w.p. $1 - o(1)$, $G_p$ has a matching of size $n-o(n)$
in expectation, and w.p. $1 - o(1)$, there is no $b$-matching in $G$ with $b\cdot 0.99n$ edges, for $b \geq {2 \over p}$. Hence, by applying a
union bound, the above process finds a graph with both properties w.p. $1 - o(1)$, proving the claim.
To see that $G_p$ has a matching of size $n-o(n)$ in expectation, we realize the edges in $G$ in two steps: first realize the edges
between $L_1$ and $R_1$, and then the other edges (i.e., the two complete graphs between $L_1, R_2$ and between $L_2, R_1$,
respectively). For the subgraph between $L_1$ and $R_1$, notice that each edge between $L_1$ and $R_1$ is realized w.p.
${1 \over pN} \cdot p = {1 \over N}$ (chosen w.p. ${1 \over pN}$ in the above process and then realized w.p. $p$). Since
$\card{L_1} = \card{R_1} = N$, the subgraph between $L_1$ and $R_1$ is sampled from $\ensuremath{\mathcal{G}}_{N,\frac{1}{N}}$ and hence w.p. $1-o(1)$,
there is a matching of size $\ensuremath{c^{\star}} N$ between $L_1$ and $R_1$. Now for the remaining $(1-\ensuremath{c^{\star}}) N$ unmatched vertices in $L_1$ (resp. in $R_1$), since there is a complete bipartite graph between
$L_1$ and $R_2$ (resp. $R_1$ and $L_2$), w.p. $1 - o(1)$, a perfect matching is realized between the unmatched vertices in $L_1$ and the vertices in $R_2$
(resp. between the unmatched vertices in $R_1$ and the vertices in $L_2$). We conclude that any realization $G_p$ has a perfect matching w.p. $1-o(1)$ and hence the expected maximum matching
size in $G_p$ is at least $(1 - o(1)) n + o(1) \cdot 0 = n - o(n)$.
It remains to show that w.p. $1 - o(1)$, $G$ has no $b$-matching with $b\cdot 0.99n$ edges for $b\geq{2 \over p}$. For any $b$-matching in $G$,
the number of edges incident on $L_2$ and $R_2$ is at most $b \cdot (\card{L_2} + \card{R_2}) = 2 b (1-\ensuremath{c^{\star}}) N$. The remaining edges of this
$b$-matching must be between $L_1$ and $R_1$. Each edge between $L_1$ and $R_1$ is chosen w.p. ${1 \over pN}$, and there are $N^2$
possible edges between $L_1$ and $R_1$. By the Chernoff bound, w.p. $1 - o(1)$, the number of chosen edges between $L_1$ and $R_1$ is at
most $(1 +o(1)) {N \over p}$. Therefore, the total number of edges of any $b$-matching in
$G$ is at most
\begin{align*}
2 b (1-\ensuremath{c^{\star}}) N + (1 + o(1)){N \over p} &= b\cdot n - \paren{\ensuremath{c^{\star}} b \cdot N - \frac{N}{p}} + o(n) \tag{$(2-\ensuremath{c^{\star}}) \cdot N = n$}\\
& \leq b\cdot n - \paren{0.56 b N - 0.5bN} + o(n) \tag{$b \geq 2/p$ and hence $1/p \leq b/2$; $\ensuremath{c^{\star}} \approx 0.56$}\\
& < b\cdot 0.99 n
\end{align*}
\end{proof}
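As a sanity check on this construction, the following minimal simulation sketch (assuming Python with \texttt{networkx}; the parameters are arbitrary test choices) samples $G$ and one realization $G_p$, and verifies empirically that $G_p$ has a near-perfect matching while the sparse part of $G$ contributes only about $N/p$ edges:
\begin{verbatim}
import random
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

p, c_star, N = 0.2, 0.56, 400          # arbitrary test parameters
n = int((2 - c_star) * N)              # vertices per side of G

L1 = [('L1', i) for i in range(N)]
L2 = [('L2', i) for i in range(int((1 - c_star) * N))]
R1 = [('R1', i) for i in range(N)]
R2 = [('R2', i) for i in range(int((1 - c_star) * N))]

G = nx.Graph()
G.add_nodes_from(L1 + L2 + R1 + R2)
G.add_edges_from((u, w) for u in L1 for w in R2)  # complete L1 x R2
G.add_edges_from((u, w) for u in L2 for w in R1)  # complete L2 x R1
sparse = [(u, w) for u in L1 for w in R1
          if random.random() < 1 / (p * N)]       # sparse L1 x R1
G.add_edges_from(sparse)

# One realization G_p: each edge of G survives independently w.p. p.
Gp = nx.Graph()
Gp.add_nodes_from(G.nodes())
Gp.add_edges_from(e for e in G.edges() if random.random() < p)

matching = hopcroft_karp_matching(Gp, top_nodes=set(L1 + L2))
print('matching size:', len(matching) // 2, 'out of n =', n)
print('sparse edges :', len(sparse), 'vs N/p =', N / p)
\end{verbatim}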
\section{Open Problems}\label{app:conc}
We presented the first non-adaptive algorithm with an approximation ratio that is strictly better than half for stochastic matching. In
particular, we showed that there exists an algorithm that given any graph $G$, computes a subgraph $H$ of $G$ with maximum degree $O(\frac{\log{(1/p)}}{p})$ such
that the ratio of expected size of a maximum matching in realizations of $H$ and $G$ is at least $0.52$ when $p$ is sufficiently small, and
$0.5+\delta_0$ (for an absolute constant $\delta_0 > 0$) for any values of $ 0 < p < 1$.
The main open problem left by our work is to determine the best approximation ratio that a non-adaptive algorithm for stochastic matching can achieve.
In particular, can non-adaptive algorithms match the performance of adaptive algorithms
by achieving a $(1-\varepsilon)$-approximation for any $\varepsilon > 0$?
\section{An Algorithm for Large Values of $p$}\label{app:large-p}
In this section, we provide an algorithm, namely, Algorithm~\ref{alg:p-approximation}, with approximation ratio strictly better than $1/2$ when $p$ is bounded away from zero. In particular, this algorithm computes a
matching of size $\paren{1/2 + \Theta(p^2)}\cdot \ensuremath{\mbox{\sc opt}}\xspace$. Algorithm~\ref{alg:p-approximation} is required to handle the case when $p$ is not small enough for
Algorithm~\ref{alg:small-p} to perform well. Using a combination of both of these algorithms, we
can prove the second part of Theorem~\ref{thm:main}.
Let $p_0$ be any fixed constant independent of $n$, and set $\delta = \frac{p^2}{4}$ and $\varepsilon =\frac{\ensuremath{p_0}^2}{10^{4}}$.
The new algorithm (i.e., Algorithm~\ref{alg:p-approximation}) is similar to Algorithm~\ref{alg:small-p}, with the only difference being that instead of a $\ensuremath{\floor{{1 \over p}}}$-matching, here we simply pick a single maximum
matching in $G$.
\begin{algorithm}[h!]
\textnormal
\SetAlgoNoLine
\KwIn{A graph $G(V,E)$ and an edge realization probability $p_0 \leq p < 1$.}
\KwOut{A subgraph $H(V,Q)$ of $G(V,E)$.}
\begin{enumerate}
\item Let $M$ be a maximum matching in $G$.
\item Let $(M_1, M_2, \ldots, M_R) := \ensuremath{\textsf{MatchingCover}}\xspace(G(V,E\setminus M),\varepsilon)$ (recall that $\varepsilon = \frac{\ensuremath{p_0}^2}{10^{4}}$), and
$E_{MC} = M_1 \cup \ldots \cup M_R$.
\item Return $H(V,Q)$ where $Q:= M \cup E_{MC}$.
\end{enumerate}
\caption{A $\paren{0.5+\Theta(p^2)}$-Approximation Algorithm for Stochastic Matching}
\label{alg:p-approximation}
\end{algorithm}
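To make the two steps concrete, here is a minimal Python sketch (assuming \texttt{networkx}). The helper \texttt{matching\_cover} is only a plausible stand-in for \ensuremath{\textsf{MatchingCover}}\xspace that peels off maximum matchings one at a time; it is not necessarily the exact subroutine used in the analysis.
\begin{verbatim}
import networkx as nx

def matching_cover(G, eps, rounds=None):
    """Peel off maximum matchings M_1, ..., M_R (a stand-in)."""
    H = G.copy()
    if rounds is None:
        rounds = max(1, min(int(1 / eps), 100))  # capped for practicality
    layers = []
    for _ in range(rounds):
        M = nx.max_weight_matching(H, maxcardinality=True)
        if not M:
            break
        layers.append(set(M))
        H.remove_edges_from(M)
    return layers

def p_approximation_subgraph(G, p0):
    eps = p0 ** 2 / 1e4
    M = set(nx.max_weight_matching(G, maxcardinality=True))  # step 1
    rest = G.copy()
    rest.remove_edges_from(M)
    layers = matching_cover(rest, eps)                       # step 2
    Q = M.union(*layers) if layers else M                    # step 3
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(Q)
    return H
\end{verbatim}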
The following lemma proves the approximation ratio of Algorithm~\ref{alg:p-approximation}.
\begin{lemma}\label{lem:p-approximation}
For any constant $\ensuremath{p_0} > 0$, any realization probability $p \geq \ensuremath{p_0}$, and any graph $G(V,E)$ the expected maximum matching size
in the graph $H$ computed by Algorithm~\ref{alg:p-approximation} is at least
$\paren{\frac{1}{2}+\frac{p^2}{4} - \frac{\ensuremath{p_0}^2}{10^{4}}} \cdot {\ensuremath{\mbox{\sc opt}}\xspace(G)}$.
\end{lemma}
Before proving Lemma~\ref{lem:p-approximation}, we show how to combine Algorithm~\ref{alg:small-p} and Algorithm~\ref{alg:p-approximation}
to prove Part~(\ref{part:main2}) of Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}, Part~(\ref{part:main2})]
Let $p_0$ be the constant such that Algorithm~\ref{alg:small-p} achieves an approximation ratio of $0.52$ for any $p \leq p_0$. The algorithm
for Part~(\ref{part:main2}) is simply as follows. If the realization probability $p \leq p_0$, run Algorithm~\ref{alg:small-p} and
otherwise run Algorithm~\ref{alg:p-approximation}. By Lemma~\ref{lem:p-approximation}, the approximation ratio of this algorithm is
$\min\set{0.52 , \frac{1}{2} + \frac{p^2}{4} - \frac{\ensuremath{p_0}^2}{10^4}} = 0.5 + \delta_0$ for some absolute constant $\delta_0$ (since
$p_0$ is an absolute constant and $p \geq p_0$).
We note that by optimizing the choice of $p_0$ and a more careful analysis of Algorithm~\ref{alg:small-p} (to account for the many constants
involved), one can show that $\delta_0 \approx 0.001$ suffices. We omit the tedious details of this calculation as it is not the main
contribution of this paper.
\end{proof}
We now prove Lemma~\ref{lem:p-approximation}.
\begin{proof}[Proof of Lemma~\ref{lem:p-approximation}]
Recall that $\ensuremath{\mbox{\sc opt}}\xspace$ (resp. $\ensuremath{\mbox{\sc alg}}\xspace$) is the expected maximum matching size in a realization $G_p$ of $G$ (resp. a realization $H_p$ of $H$).
Firstly, by Claim~\ref{clm:small-L}, with the parameters $\varepsilon$, $\delta$, and $X = M$, we have that if $\card{M_R}$ in $E_{MC}$ is
smaller than $({1 \over 2} - \delta)\cdot\ensuremath{\mbox{\sc opt}}\xspace$, then the expected matching size in $G(V,Q)$ is at least
$({1 \over 2} + \delta - \varepsilon)\cdot\ensuremath{\mbox{\sc opt}}\xspace = ({1 \over 2} + {p^2 \over 4} - \frac{\ensuremath{p_0}^2}{10^{4}}) \cdot \ensuremath{\mbox{\sc opt}}\xspace$, which proves the lemma. We now
consider the case where $\card{M_R} \geq ({1 \over 2} - \delta)\cdot\ensuremath{\mbox{\sc opt}}\xspace$.
Let $M'$ be the random variable denoting a maximum matching in a realization of $E_{MC}$ (breaking ties arbitrarily). By
Lemma~\ref{lem:basic-alg}, w.h.p., $\card{M'} \ge (1-\varepsilon)\card{M_R} \geq (\frac{1}{2} - \delta - \varepsilon)\cdot\ensuremath{\mbox{\sc opt}}\xspace$. For
simplicity, in the following, we always assume this event happens\footnote{This assumption can be removed while losing a negligible factor
of $o(1)$ in the size of final matching.} and further remove any extra edges in $M'$ so that
$\card{M'} = (\frac{1}{2} - \delta - \varepsilon)\cdot\ensuremath{\mbox{\sc opt}}\xspace$. We now use the matching $M$ chosen in the first step of the algorithm (which is
a maximum matching of $G$) to augment the matching $M'$. We should point out that at this point, $M'$ refers to a fixed realized matching, while
the set of realized edges of $M$ is still random (and independent of $M'$, since $M$ and $E_{MC}$ are edge-disjoint).
Let $\alpha_1,\alpha_3$ and $\alpha_{\geq 5}$ denote, respectively, the number of augmenting paths (w.r.t. $M'$) of length $1$, $3$, and
at least $5$ in $M \bigtriangleup M'$. We have the following claim. The proof uses standard facts about the augmenting paths (see,
e.g.,~\cite{HopcroftK73}).
\begin{claim}\label{clm:augmenting-paths}
For $\alpha_1,\alpha_3$, and $\alpha_{\geq 5}$, defined as above:
\begin{align*}
\alpha_3 + 2\alpha_{\geq 5} &\leq \card{M'} \\
\alpha_1 + \alpha_3 + \alpha_{\geq 5} &= \card{M} - \card{M'}
\end{align*}
\end{claim}
\begin{proof}
Any augmenting path of length $3$ has one edge in $M'$ and any augmenting path of length at least $5$ has at least two edges
in $M'$. Since the augmenting paths are edge-disjoint, the first inequality follows. The second relation, an equality, follows from the
fact that $M$ is a maximum matching in $G$ and each augmenting path in $M \bigtriangleup M'$ increases the size of $M'$ by
$1$.
\end{proof}
As stated earlier, each edge in $M$ is realized w.p. $p$ (independent of the choice of $M'$). Since an augmenting path of
length $1$ (resp. of length $3$) realizes in $M' \bigtriangleup M_p$ w.p. $p$ (resp. $p^2$), we have that the expected number
of times that $M'$ can be augmented using realized edges of $M$ is at least $\alpha_1 p + \alpha_3 p^2$, implying that the
final matching size is at least $(\frac{1}{2} - \delta - \varepsilon)\cdot\ensuremath{\mbox{\sc opt}}\xspace + \alpha_1 p + \alpha_3 p^2 $ in
expectation. Combining this with Claim~\ref{clm:augmenting-paths}, the minimum expected gain $\alpha_1 p + \alpha_3 p^2$ of these augmentations can be
lower bounded by the following linear program (denoted by \ensuremath{\textnormal{LP-(\ref{eq:lp})}}\xspace):
\begin{tbox}
\begin{equation}
\begin{array}{ll@{}ll}
\text{minimize} & \alpha_1 p + \alpha_3 p^2 &\\
\text{subject to}& \alpha_3 + 2\alpha_{\geq 5} \leq (\frac{1}{2} - \delta)\ensuremath{\mbox{\sc opt}}\xspace - \varepsilon\cdot\ensuremath{\mbox{\sc opt}}\xspace & \label{eq:lp}\\
& \alpha_1 + \alpha_3 + \alpha_{\geq 5} \geq (\frac{1}{2} + \delta)\ensuremath{\mbox{\sc opt}}\xspace + \varepsilon\cdot\ensuremath{\mbox{\sc opt}}\xspace & \\%\label{eq:lp-constraint2} \\
& \alpha_1,\alpha_3,\alpha_{\geq 5} \geq 0
\end{array}
\end{equation}
\end{tbox}
where in the second constraint, we use the fact that $M$ is a maximum matching in $G$ and hence $\card{M} \geq \ensuremath{\mbox{\sc opt}}\xspace$. We have the following claim.
\begin{claim}\label{clm:lp-minimizer}
The minimum value of $\ensuremath{\textnormal{LP-(\ref{eq:lp})}}\xspace$ is at least ${\frac{p^2}{2}} \cdot \ensuremath{\mbox{\sc opt}}\xspace$.
\end{claim}
\begin{proof}
The two constraints of \ensuremath{\textnormal{LP-(\ref{eq:lp})}}\xspace imply that,
\begin{align}
2\alpha_1 + \alpha_3 \geq \paren{\frac{1}{2} + 3\delta + 3\varepsilon} \cdot \ensuremath{\mbox{\sc opt}}\xspace \label{eq:constraint-both}
\end{align}
Suppose we want to minimize $\alpha_1 p + \alpha_3 p^2$ subject to the constraint in Eq~(\ref{eq:constraint-both}) (this is clearly a lower bound for the value of \ensuremath{\textnormal{LP-(\ref{eq:lp})}}\xspace). In this case,
since the contribution of $\alpha_3$ to the objective value is $p$ times the contribution of $\alpha_1$, while its contribution to the constraint is $\frac{1}{2}$ times the contribution of $\alpha_1$, it is
straightforward to verify that for $p \leq 1/2$, there is an optimal solution with $\alpha_1 = 0$, and for $p > 1/2$, there is an optimal solution with $\alpha_3 = 0$. We can now compute the value of
the solution in each case:
\textbf{\bm{$p \leq \frac{1}{2}$} case.} In this case $\alpha_1 = 0$ and $\alpha_3 = \paren{\frac{1}{2} + 3\delta + 3\varepsilon} \cdot \ensuremath{\mbox{\sc opt}}\xspace$ minimizes $\alpha_1 p + \alpha_3 p^2$. Hence, the objective value is
\begin{align*}
\alpha_3 \cdot p^2 = \paren{\frac{1}{2} + 3\delta + 3\varepsilon} \cdot \ensuremath{\mbox{\sc opt}}\xspace \cdot p^2 \geq {\frac{p^2}{2}} \cdot \ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
\textbf{\bm{$p > \frac{1}{2}$} case.} In this case $\alpha_1 = \paren{\frac{1}{4} + \frac{3}{2}\delta + \frac{3}{2} \varepsilon} \cdot \ensuremath{\mbox{\sc opt}}\xspace$ and $\alpha_3 = 0$ minimizes $\alpha_1 p + \alpha_3 p^2$. Hence, the objective value is
\begin{align*}
\alpha_1 \cdot p &= \paren{\frac{1}{4} + \frac{3}{2}\delta + \frac{3}{2}\varepsilon}p \cdot \ensuremath{\mbox{\sc opt}}\xspace \geq \paren{\frac{1}{4} + \frac{3}{2}\delta}p \cdot \ensuremath{\mbox{\sc opt}}\xspace \\
&= \paren{\frac{1}{4} + \frac{3p^2}{8}}p \cdot \ensuremath{\mbox{\sc opt}}\xspace \geq {\frac{p^2}{2}} \cdot \ensuremath{\mbox{\sc opt}}\xspace \tag{$\delta = \frac{p^2}{4}$
and $\frac{1}{4} + \frac{3p^2}{8} \ge 2\sqrt{\frac{1}{4} \cdot \frac{3p^2}{8}} \ge \frac{p}{2}$}
\end{align*}
The claim now follows since in the above calculation we \emph{relaxed} the constraints of \ensuremath{\textnormal{LP-(\ref{eq:lp})}}\xspace to the single constraint in Eq~(\ref{eq:constraint-both}).
\end{proof}
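Claim~\ref{clm:lp-minimizer} is also easy to spot-check numerically. The following sketch (assuming Python with \texttt{scipy}; \ensuremath{\mbox{\sc opt}}\xspace is normalized to $1$ and $p_0$ is an arbitrary test choice) solves \ensuremath{\textnormal{LP-(\ref{eq:lp})}}\xspace directly:
\begin{verbatim}
from scipy.optimize import linprog

def lp_min(p, p0):
    delta, eps = p ** 2 / 4, p0 ** 2 / 1e4
    c = [p, p ** 2, 0.0]       # objective: alpha1*p + alpha3*p^2
    A_ub = [[0, 1, 2],         # alpha3 + 2*alpha5 <= 1/2 - delta - eps
            [-1, -1, -1]]      # alpha1 + alpha3 + alpha5 >= 1/2 + delta + eps
    b_ub = [0.5 - delta - eps, -(0.5 + delta + eps)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds: x >= 0
    assert res.status == 0
    return res.fun

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    assert lp_min(p, p0=0.1) >= p ** 2 / 2 - 1e-9
print('LP minimum is at least (p^2/2)*opt for all tested p')
\end{verbatim}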
By plugging in the bound from Claim~\ref{clm:lp-minimizer}, we obtain that the final matching size is at least:
\begin{align*}
\frac{\ensuremath{\mbox{\sc opt}}\xspace}{2} - \delta\cdot\ensuremath{\mbox{\sc opt}}\xspace - \varepsilon \cdot\ensuremath{\mbox{\sc opt}}\xspace + \alpha_1 p + \alpha_3 p^2 &\geq \paren{{1 \over 2} - \delta -\varepsilon +
\frac{p^2}{2}} \cdot \ensuremath{\mbox{\sc opt}}\xspace \\
&= \paren{\frac{1}{2}+\frac{p^2}{4} - \frac{\ensuremath{p_0}^2}{10^4} }\cdot \ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
where the last equality plugs in $\delta = \frac{p^2}{4}$ and $\varepsilon = \frac{\ensuremath{p_0}^2}{10^4}$.
\end{proof}
\section{Omitted Proofs from Section~3}\label{app:prelim}
\subsection{Proof of Proposition~\ref{prop:upper-exp}}\label{app:upper-exp}
\begin{proof}
We first show that $f(x)$ is \emph{monotonically decreasing}, since
\begin{align*}
\frac{d f}{d x} = \frac{e^{-x} \cdot x - (1-e^{-x})}{x^2} = \frac{(x+1)\cdot e^{-x} - 1}{x^2} \leq \frac{e^{x}\cdot e^{-x} - 1}{x^2} = 0
\end{align*}
where we used the inequality $(1+x) \leq e^{x}$.
Consequently, since $x \le c$,
\begin{align*}
f(c) \le f(x) = {1 - e^{-x} \over x}
\end{align*}
which implies $e^{-x} \le 1 - f(c) \cdot x$.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:upper-exp-2}}\label{app:upper-exp-2}
\begin{proof}
We first derive equivalent forms of the target inequality.
\begin{align*}
& &(1-x)^{{1\over x}} \ge {1 - x \over e} \\
&\Longleftrightarrow & (1-x)^{{1\over x} - 1} \ge {1 \over e} \\
&\Longleftrightarrow & ({1\over x} - 1)\ln(1-x) \ge -1 \tag{by taking natural log of both sides}
\end{align*}
Now, since $\ln(1 - x) \ge -x - {x^2 \over 2} - {x^3 \over 2}$ when $x \in (0,0.43]$, we have
\begin{align*}
({1\over x} - 1)\ln(1-x) &\ge ({1\over x} - 1) (-x - {x^2 \over 2} - {x^3 \over 2}) \tag{since $({1\over x} - 1) > 0$} \\
& = -1 - {x \over 2} - {x^2 \over 2} + x + {x^2 \over 2} + {x^3 \over 2} \\
& = - 1 + {x \over 2} + {x^3 \over 2} \\
& \ge -1
\end{align*}
which completes the proof.
\end{proof}
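Both propositions are also easy to verify numerically on a grid; a quick check (assuming Python with \texttt{numpy}, and writing $f(x) = {1 - e^{-x} \over x}$ as in Proposition~\ref{prop:upper-exp}):
\begin{verbatim}
import numpy as np

def f(x):
    return (1 - np.exp(-x)) / x

# Proposition 1: e^{-x} <= 1 - f(c)*x for 0 < x <= c (here c = 1/e).
c = 1 / np.e
ys = np.linspace(1e-6, c, 2000)
assert np.all(np.exp(-ys) <= 1 - f(c) * ys + 1e-12)

# Proposition 2: (1-x)^(1/x) >= (1-x)/e on (0, 0.43].
xs = np.linspace(1e-6, 0.43, 2000)
assert np.all((1 - xs) ** (1 / xs) >= (1 - xs) / np.e - 1e-12)

print('both inequalities hold on the tested grids')
\end{verbatim}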
\section{Main Algorithm and Analysis}\label{sec:main-alg}
We provide our main algorithm for the stochastic matching problem (when $p$ is sufficiently small) in this section and prove
Part~(\ref{part:main1}) of Theorem~\ref{thm:main}. We assume throughout this section that the edge realization probability
$p \le p_0$ for some sufficiently small constant $p_0$. In this case, $\floor{{1 \over p}} - 1 \geq (1 - O(p_0)) \cdot {1 \over p}$ and we
use this inequality frequently in the proof. Indeed, throughout this section, one should view $p_0$ as a negligible constant and hence the
term $(1 - O(p_0))$ can essentially be ignored.
Let $\delta_0 = 0.02$, and $\varepsilon_0 = 0.02001$. Our algorithm is stated as Algorithm~\ref{alg:small-p} below:
\smallskip
\begin{algorithm}[H]
\textnormal
\SetAlgoNoLine
\KwIn{A graph $G(V,E)$ and an edge realization probability $p \leq p_0$.}
\KwOut{A subgraph $H(V,Q)$ of $G(V,E)$.}
\begin{enumerate}
\item Let $B$ be a maximum $\ensuremath{\floor{{1 \over p}}}$-matching in $G$.
\item Let $(M_1, M_2, \ldots, M_R) := \ensuremath{\textsf{MatchingCover}}\xspace(G(V,E\setminus B),\varepsilon_1)$ for $\varepsilon_1 = (\varepsilon_0 - \delta_0)/2$, and $E_{MC} = M_1 \cup \ldots \cup M_R$.
\item Return $H(V,Q)$ where $Q:= B \cup E_{MC}$.
\end{enumerate}
\caption{A $0.52$-Approximation Algorithm for Stochastic Matching}
\label{alg:small-p}
\end{algorithm}
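For concreteness, a minimal sketch of the algorithm's skeleton follows (assuming Python with \texttt{networkx}). The greedy routine below is only a maximal stand-in for the maximum $\ensuremath{\floor{{1 \over p}}}$-matching of step 1, and \texttt{matching\_cover} can be taken to be the peeling stand-in from the sketch following Algorithm~\ref{alg:p-approximation}:
\begin{verbatim}
import networkx as nx

def greedy_b_matching(G, b):
    """Maximal b-matching: keep an edge iff both of its endpoints
    still have residual capacity (a stand-in for the maximum one)."""
    load = {v: 0 for v in G}
    B = []
    for u, v in G.edges():
        if load[u] < b and load[v] < b:
            B.append((u, v))
            load[u] += 1
            load[v] += 1
    return B

def small_p_subgraph(G, p, eps1, matching_cover):
    b = int(1 / p)                       # floor(1/p)
    B = greedy_b_matching(G, b)          # step 1 (stand-in)
    rest = G.copy()
    rest.remove_edges_from(B)
    layers = matching_cover(rest, eps1)  # step 2
    Q = set(B).union(*layers) if layers else set(B)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(Q)
    return H
\end{verbatim}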
Each vertex in $H$ has degree $\bigO{{\log(1/p) \over p}}$ -- this follows immediately from Lemma~\ref{lem:basic-alg}. In what
follows, we prove that a realization of $H$ has a matching of size at least $(0.5+\delta_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace = 0.52 \cdot \ensuremath{\mbox{\sc opt}}\xspace $ in expectation, which will complete the
proof of Part~(\ref{part:main1}) of Theorem~\ref{thm:main}.
First notice that if $\card{M_R} < ({1 \over 2} - {\varepsilon_0 + \delta_0 \over 2})\ensuremath{\mbox{\sc opt}}\xspace$ where $M_R$ is the smallest matching in the matching
cover $E_{MC}$ found by Algorithm~\ref{alg:small-p}, then by Claim~\ref{clm:small-L}, the expected matching size in $Q$ is at least
$({1 \over 2} + {\varepsilon_0 + \delta_0 \over 2} - {\varepsilon_0 - \delta_0 \over 2})\ensuremath{\mbox{\sc opt}}\xspace = ({1 \over 2} + \delta_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace$. Therefore, from now
on we focus on the case that $\card{M_R} \ge ({1 \over 2} - {\varepsilon_0 + \delta_0 \over 2})\ensuremath{\mbox{\sc opt}}\xspace$.
In this case, by Lemma~\ref{lem:basic-alg}, w.p. $1-o(1)$, there exists a matching $M$ among the realized edges in $E_{MC}$ with size at
least
\begin{align*}
\paren{1 - {\varepsilon_0 - \delta_0 \over 2}} \paren{{1 \over 2} - {\varepsilon_0 + \delta_0 \over 2}} \ensuremath{\mbox{\sc opt}}\xspace &\ge \paren{{1 \over 2} - {\varepsilon_0 + \delta_0 \over 2} - {\varepsilon_0 - \delta_0 \over 4}} \ensuremath{\mbox{\sc opt}}\xspace \\
&= \paren{{1 \over 2} - {3\varepsilon_0 \over 4} - {\delta_0 \over 4}} \ensuremath{\mbox{\sc opt}}\xspace \ge \paren{{1 \over 2} - \varepsilon_0}\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
In the following, we assume this event happens\footnote{This assumption can be removed while losing a negligible factor of $o(1)$ in the
size of final matching.} and prove that the set of edges realized in the $\ensuremath{\floor{{1 \over p}}}$-matching $B$ can be used to augment the matching $M$
to create a matching of size $({1 \over 2} + \delta_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace$ in expectation. To simplify the analysis, we assume w.l.o.g. that
$\card{M} = ({1\over2} - \varepsilon_0)\ensuremath{\mbox{\sc opt}}\xspace$ (i.e., we only keep $({1\over2} - \varepsilon_0)\ensuremath{\mbox{\sc opt}}\xspace$ edges of $M$ and remove any additional edges if there
is any). By the $b$-matching lemma (Lemma~\ref{lem:b-matching}),
$\card{B} \ge \paren{\ensuremath{\floor{{1 \over p}}}-1} \ensuremath{\mbox{\sc opt}}\xspace \ge (1 - O(p_0)) \cdot {\ensuremath{\mbox{\sc opt}}\xspace \over p}$, and hence, to prove Part~(\ref{part:main1}) of Theorem~\ref{thm:main}, it
suffices to prove the following statement.
\begin{lemma}\label{lem:b-matching-augmentation-raw}
Let $M$ be a matching of size $\paren{{1 \over 2} - \varepsilon_0} \ensuremath{\mbox{\sc opt}}\xspace$, and $B$ be a $\ensuremath{\floor{{1 \over p}}}$-matching of size at least $(1 - O(p_0)) \cdot {\ensuremath{\mbox{\sc opt}}\xspace \over p}$; then
the expected maximum matching size in $M \cup \ensuremath{B_{\overline{M}}}$ is at least $({1 \over 2} + \delta_0) \ensuremath{\mbox{\sc opt}}\xspace$.
\end{lemma}
\begin{proof}
Let $\ensuremath{B_M}$ be the set of edges in $B$ that are incident on the vertices in the matching $M$, and let $\ensuremath{B_{\overline{M}}} = B \setminus \ensuremath{B_M}$. Let
$\ensuremath{s_{\overline{M}}}$ be the random variable denoting the maximum matching size of a realization of $\ensuremath{B_{\overline{M}}}$. By Lemma~\ref{lem:outside-edges},
\begin{equation}
\label{eq:bout}
\expect{\ensuremath{s_{\overline{M}}}} \ge {\card{\ensuremath{B_{\overline{M}}}} \over \ensuremath{\floor{{1 \over p}}}} \cdot {1 - 3p \over 3} \ge {p \card{\ensuremath{B_{\overline{M}}}}} \cdot {1 - 3p \over 3}
\end{equation}
Therefore, if $\card{\ensuremath{B_{\overline{M}}}} \ge 6\varepsilon_0 \cdot {\ensuremath{\mbox{\sc opt}}\xspace \over p}$, then
\begin{align*}
\expect{\ensuremath{s_{\overline{M}}}} \ge 6\varepsilon_0 \cdot {\ensuremath{\mbox{\sc opt}}\xspace} \cdot {1 - 3p \over 3} = 2\varepsilon_0 (1 - 3p) \cdot \ensuremath{\mbox{\sc opt}}\xspace \ge (\varepsilon_0 + \delta_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace \tag{assuming $p_0 \le { \varepsilon_0 - \delta_0 \over 6\varepsilon_0}$}
\end{align*}
and since no edge in $\ensuremath{B_{\overline{M}}}$ is incident on the vertices in $M$, the expected matching size in $M \cup B_p$ is at least
\begin{align*}
\paren{{1 \over 2} -\varepsilon_0} \ensuremath{\mbox{\sc opt}}\xspace + (\varepsilon_0 + \delta_0) \ensuremath{\mbox{\sc opt}}\xspace = \paren{{1 \over 2} + \delta_0} \ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
as asserted by Lemma~\ref{lem:b-matching-augmentation-raw}. In the following, we assume $\card{\ensuremath{B_{\overline{M}}}} \le 6\varepsilon_0 \cdot {\ensuremath{\mbox{\sc opt}}\xspace \over p}$.
Furthermore, we fix a realization of $\ensuremath{B_{\overline{M}}}$ and fix a maximum matching $M'$ in the realization of $\ensuremath{B_{\overline{M}}}$ (whose size is $\ensuremath{s_{\overline{M}}}$ by
definition). In other words, we will lower bound the expected maximum matching size in $M \cup B_p$ conditioned on \emph{any} realization of
$\ensuremath{B_{\overline{M}}}$. The lower bound we obtain would be a linear function of $\ensuremath{s_{\overline{M}}}$, and by linearity of expectation, we can simply replace $\ensuremath{s_{\overline{M}}}$
with $\expect{\ensuremath{s_{\overline{M}}}}$, use Eq~(\ref{eq:bout}) to lower bound $\expect{\ensuremath{s_{\overline{M}}}}$, and obtain the desired lower bound of
$(1/2 + \delta_0)\ensuremath{\mbox{\sc opt}}\xspace$ on the expected maximum matching size.
Denote by $\ensuremath{M^+}$ the matching $M \cup M'$ (since the matchings $M$ and $M'$ are vertex-disjoint, $M \cup M'$ is indeed a valid matching of
size $\card{M} + \ensuremath{s_{\overline{M}}}$). We can focus on the realizations of $\ensuremath{B_{\overline{M}}}$ where $\ensuremath{s_{\overline{M}}} \le 2\varepsilon_0 \cdot \ensuremath{\mbox{\sc opt}}\xspace$, since otherwise the matching $\ensuremath{M^+}$
already has size $(1/2 + \varepsilon_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace > (1/2 + \delta_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace$. Therefore, we have $\ensuremath{s_{\overline{M}}} \le 2\varepsilon_0 \cdot \ensuremath{\mbox{\sc opt}}\xspace = O(\ensuremath{\mbox{\sc opt}}\xspace)$
and $\card{\ensuremath{M^+}} = O(\ensuremath{\mbox{\sc opt}}\xspace)$, which will be useful in simplifying the presentation.
Now consider the edges in $\ensuremath{B_M}$. We further denote by $C$ the set of edges in $\ensuremath{B_M}$ that are incident on \emph{exactly} one vertex in
$\ensuremath{M^+}$. In the following, we first show that $\card{C}$ must be large (Claim~\ref{clm:C-size}) and then show that many edges in $C$ can be
used to augment the matching $\ensuremath{M^+}$, which leads to an increment on the matching size as a function of $\card{C}$
(Lemma~\ref{lem:1-neighbor} and Lemma~\ref{lem:allocation}). Combining these two statements completes the proof of
Lemma~\ref{lem:b-matching-augmentation-raw}.
\begin{claim}\label{clm:C-size}
$\card{C} \ge 2\card{\ensuremath{B_M}} - {2\card{\ensuremath{M^+}} \over p}$.
\end{claim}
\begin{proof}
Let $x$ denote the number of edges in $\ensuremath{B_M}$ that have degree $2$ to $V(\ensuremath{M^+})$ (i.e., are incident on two vertices in $\ensuremath{M^+}$). By
definition, every edge in $\ensuremath{B_M}$ is incident on $M$, and hence every edge in $\ensuremath{B_M}$ is also incident on $\ensuremath{M^+} ( = M \cup M')$.
Consequently, there are $\card{\ensuremath{B_M}} - x$ edges in $\ensuremath{B_M}$ that have degree $1$ to $V(\ensuremath{M^+})$ (i.e., belong to $C$). Therefore, the total
degree of the vertices in $V(\ensuremath{M^+})$ provided by $\ensuremath{B_M}$ is at least:
$ 2 \cdot x + 1 \cdot \paren{\card{\ensuremath{B_M}} - x} = x + \card{\ensuremath{B_M}}$.
On the other hand, since $\card{V(\ensuremath{M^+})} = 2\card{\ensuremath{M^+}}$ and $B$ (hence $\ensuremath{B_M}$) is a $\ensuremath{\floor{{1 \over p}}}$-matching, the total degree of the
vertices $V(\ensuremath{M^+})$ provided by $\ensuremath{B_M}$ is at most ${2\card{\ensuremath{M^+}} \over p}$. Therefore,
$ x +\card{\ensuremath{B_M}} \le {2\card{\ensuremath{M^+}} \over p}$,
which implies $x \le {2\card{\ensuremath{M^+}} \over p} - \card{\ensuremath{B_M}}$. Therefore, the number of edges in $\ensuremath{B_M}$ incident on exactly one
vertex in $V(\ensuremath{M^+})$ (i.e., $\card{C}$) is at least
\begin{align*}
\card{\ensuremath{B_M}} - \paren{{2\card{\ensuremath{M^+}} \over p} - \card{\ensuremath{B_M}} } = 2\card{\ensuremath{B_M}} - {2\card{\ensuremath{M^+}} \over p}
\end{align*}
completing the proof.
\end{proof}
The following two lemmas are dedicated to showing that the edges in a realization of $C$, $C_p$, form many vertex-disjoint length-three augmenting
paths for the matching $\ensuremath{M^+}$ in expectation, whose number is a lower bound on the expected increment of the matching size. We first define some
notation. Let $W:= V \setminus V(\ensuremath{M^+})$, i.e., $W$ is the set of vertices \emph{not} matched by $\ensuremath{M^+}$. Denote the edges in $\ensuremath{M^+}$ by
$\set{(u_i, v_i) \mid i \in [\card{\ensuremath{M^+}}]}$, and denote by $d(u_i)$ (resp. $d(v_i)$) the number of edges in $C$ incident on $u_i$
(resp. $v_i$). Since, by definition, the edges in $C$ are only incident on one vertex in $V(\ensuremath{M^+})$, $d(u_i)$ (resp. $d(v_i)$) is also the
number of edges in $C$ between $u_i$ (resp. $v_i$) and $W$. In the following, whenever we say ``neighbors'' or ``degrees'', they are only
w.r.t. the edges in $C$. Let $f(x)$ be the function defined in Proposition~\ref{prop:upper-exp}.
\begin{lemma}\label{lem:1-neighbor}
For any edge $(u_i,v_i) \in \ensuremath{M^+}$, w.p. at least
\[(1 - O(p_0)) {f({1 / e})\cdot p \over e^2} \paren{1 - e^{-p \cdot d(v_i)}} \cdot \max\set{d(u_i) - 1, 0},\]
there exists a length-three augmenting path $a_i - u_i - v_i - b_i$ in the realization $C_p$ of $C$, such that $a_i, b_i$ have no neighbors
other than $u_i$ and $v_i$.
\end{lemma}
Note that we can use all edges $(u_i,v_i)$ in $\ensuremath{M^+}$ with such an augmenting path $a_i-u_i-v_i-b_i$ to (simultaneously) augment $\ensuremath{M^+}$, since
these augmenting paths are vertex-disjoint ($a_i$ and $b_i$ are neighbors only of $u_i$ and $v_i$). Therefore, the expected number of edges
in $\ensuremath{M^+}$ that have such an augmenting path is a lower bound on the expected increment of the matching size.
\begin{proof}[Proof of Lemma~\ref{lem:1-neighbor}]
We consider three disjoint subsets of edges in $C$ one by one: $(i)$ the edges between $v_i$ and $W$, $(ii)$ the edges incident on the
vertex $b_i \in W$ chosen below (excluding the edge $(v_i, b_i)$), and $(iii)$ the edges incident on the neighbors of $u_i$ other than $b_i$ (excluding
the edges incident on $v_i$).
First, consider the edges between $v_i$ and $W$. The prob. that none of these $d(v_i)$ edges are realized is at most
\begin{align*}
(1 - p)^{d(v_i)} \le e^{-p \cdot d(v_i)}
\end{align*}
Therefore, w.p. at least $1 - e^{-p \cdot d(v_i)}$, at least one edge between $v_i$ and $W$ is realized. We condition on this event and fix any such edge, denoted by $(v_i, b_i)$.
Second, consider the edges incident on $b_i$ (excluding the edge $(v_i, b_i)$). There are at most $1/p$ such edges, and the prob. that
none of them is realized is at least
\begin{align*}
(1-p)^{1/p} \ge {1 - p \over e} \ge {1 - p_0 \over e} \tag{$ p \leq p_0 \le 0.43$}
\end{align*}
where the first inequality is by Proposition~\ref{prop:upper-exp-2}. In the following, we
further condition on the event that no other edge incident on $b_i$ is realized.
Third, consider all neighbors of $u_i$ other than $b_i$ (there are at least $\max\set{d(u_i) - 1, 0}$ such neighbors) and the edges
incident on these neighbors (excluding the edges incident on $v_i$). For each one of these neighbors $w$ of $u_i$, the prob. that the edge
$(u_i, w)$ is realized (w.p. $p$) and $w$ does not have any neighbor other than $u_i$ (and possibly $v_i$) (w.p. at least
${1 - p_0 \over e}$ by Proposition~\ref{prop:upper-exp-2}) is at least $p \cdot {1 - p_0 \over e}$. Therefore, the prob. that at least
one neighbor of $u_i$ satisfies these two properties is at least
\begin{align*}
1 - \paren{1 - p \cdot {1 - p_0 \over e}}^{\max\set{d(u_i) - 1, 0}} \ge & 1 - e^{- (1 - p_0) \cdot p\cdot \max\set{d(u_i) - 1,0}/e}\\
\ge &f({1 \over e})\cdot (1 - p_0) \cdot p \cdot \max\set{d(u_i) - 1, 0}/e
\end{align*}
where $f(x) = {1 - e^{-x} \over x}$ and the second inequality is by Proposition~\ref{prop:upper-exp}, using the fact that
\begin{align*}
{ (1 - p_0) \cdot p \max\set{d(u_i) - 1, 0} \over e} \le {1\over e} \tag{since $d(u_i) \le {1\over p}$}
\end{align*}
Putting the three steps together, the prob. that there is an augmenting path $a_i-u_i-v_i-b_i$ where $a_i$ and $b_i$ have no neighbors
other than $u_i$ and $v_i$ is at least
\begin{align*}
&\paren{1 - e^{-p \cdot d(v_i)}} \cdot {1 - p_0 \over e} \cdot {f({1 \over e})\cdot (1 - p_0) \cdot p \cdot \max\set{d(u_i) - 1, 0} \over e} \\
=& \paren{1 - O(p_0)} \cdot f({1 \over e}) \cdot {p \over e^2} \paren{1 - e^{-p \cdot d(v_i)}} \cdot \max\set{d(u_i) - 1, 0}
\end{align*}
\end{proof}
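The bound of Lemma~\ref{lem:1-neighbor} can also be checked by simulation in a stylized worst case. The sketch below (plain Python; $p_0$ is set to $p$, and the comparison constant is assembled explicitly from the proof rather than hidden in $1-O(p_0)$) assumes the neighborhoods of $u_i$ and $v_i$ in $W$ are disjoint and that every $W$-vertex carries $\ensuremath{\floor{{1 \over p}}}-1$ further edges:
\begin{verbatim}
import math
import random

def trial(p, b, d_u, d_v):
    # For each center edge: realized (w.p. p) AND its W-endpoint has
    # none of its b-1 other edges realized (w.p. (1-p)^(b-1)); these
    # events use distinct edges, so each neighbor succeeds w.p. q.
    q = p * (1 - p) ** (b - 1)
    hit = lambda d: any(random.random() < q for _ in range(d))
    return hit(d_u) and hit(d_v)

p, d_u, d_v, T = 0.1, 10, 10, 100_000
b = int(1 / p)
freq = sum(trial(p, b, d_u, d_v) for _ in range(T)) / T

f = lambda x: (1 - math.exp(-x)) / x
bound = ((1 - math.exp(-p * d_v)) * ((1 - p) / math.e)
         * f(1 / math.e) * (1 - p) * p * max(d_u - 1, 0) / math.e)
print(f'empirical {freq:.4f} vs proof bound {bound:.4f}')
\end{verbatim}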
As we pointed out after the statement of Lemma~\ref{lem:1-neighbor}, we need to lower bound the expected number of edges in $\ensuremath{M^+}$ that have
such an augmenting path, which, by Lemma~\ref{lem:1-neighbor}, is lower bounded by the function $F$ defined below. For the two vectors
$d_u:= (d(u_1),\ldots,d(u_{\card{\ensuremath{M^+}}}))$ and $d_v:= (d(v_1),\ldots,d(v_{\card{\ensuremath{M^+}}}))$,
\begin{align*}
F(d_u,d_v):= \sum_{i \in [\card{\ensuremath{M^+}}]} \paren{1 - O(p_0)} \cdot f({1 \over e})\cdot {p \over e^2} \paren{1 - e^{-p \cdot d(v_i)}} \cdot \max\set{d(u_i) - 1, 0}
\end{align*}
The goal now is to find the smallest value of $F(d_u,d_v)$, with the constraint on the vectors $d_u$ and $d_v$ formulated in the following
(non-linear) minimization program (referred to as \ensuremath{\textnormal{MP-(\ref{eq:MP})}}\xspace).
\begin{tbox}
\begin{equation}
\begin{array}{ll@{}ll}
\text{minimize} & F(d_u,d_v) &\\
\text{subject to}& \sum_{i \in [\card{\ensuremath{M^+}}]} d(u_i) + d(v_i) = \card{C} \label{eq:MP}\\
& d(u_i),d(v_i) \in \bracket{\ensuremath{\floor{{1 \over p}}}}~~~~~~~~ i=1 ,\ldots,\card{\ensuremath{M^+}}
\end{array}
\end{equation}
\end{tbox}
The constraint on each individual $d(u_i)$ and $d(v_i)$ holds because $C \subseteq \ensuremath{B_M} \subseteq B$ and $B$ is a $\ensuremath{\floor{{1 \over p}}}$-matching. The following lemma lower bounds the
value of the objective function in \ensuremath{\textnormal{MP-(\ref{eq:MP})}}\xspace.
\begin{lemma}\label{lem:allocation}
Let $\ensuremath{F^{\star}}$ denote the optimal value of \ensuremath{\textnormal{MP-(\ref{eq:MP})}}\xspace; then,
\[\ensuremath{F^{\star}} \ge \paren{p \cdot \card{C} - \card{\ensuremath{M^+}}} \cdot \eta -O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace\]
where $\eta := f({1 \over e})\cdot {1 \over e^2} \paren{1 - e^{-1}} > 0.07157$.
\end{lemma}
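The numeric claims about $\eta$ (both the lower bound here and the inequality ${1 \over 3} - 3\eta > 0$ used later) are immediate to verify:
\begin{verbatim}
import math
f = lambda x: (1 - math.exp(-x)) / x
eta = f(1 / math.e) / math.e ** 2 * (1 - 1 / math.e)
assert eta > 0.07157 and 1 / 3 - 3 * eta > 0
print(f'eta = {eta:.6f}')
\end{verbatim}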
The proof of Lemma~\ref{lem:allocation} is technical, and we defer it to Section~\ref{sec:mp}. By Lemma~\ref{lem:allocation} and
Claim~\ref{clm:C-size} (the lower bound on $\card{C}$) the expected increment (over $\ensuremath{M^+}$) of the matching size is at least
\begin{align*}
\ensuremath{F^{\star}} \ge &\paren{p \cdot \card{C} - \card{\ensuremath{M^+}}} \cdot \eta -O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace \\
\ge &\paren{p \cdot(2 \card{\ensuremath{B_M}} - 2 \card{\ensuremath{M^+}}/p) - \card{\ensuremath{M^+}}} \cdot \eta - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace \tag{By Claim~\ref{clm:C-size}}\\
= &\paren{ 2p \card{\ensuremath{B_M}} - 3 \card{\ensuremath{M^+}}} \cdot \eta - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace\\
= &\paren{ 2p (\card{B} - \card{\ensuremath{B_{\overline{M}}}}) - 3 (\card{M} + \ensuremath{s_{\overline{M}}})} \cdot \eta - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace \\
= &\paren{ 2p \card{B} - 3\card{M} - 2p\card{\ensuremath{B_{\overline{M}}}} - 3\ensuremath{s_{\overline{M}}}} \cdot \eta - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace \\
= &\paren{ 2p (1 - O(p_0)){\ensuremath{\mbox{\sc opt}}\xspace \over p} - 3 \paren{{1 \over 2} - \varepsilon_0}\ensuremath{\mbox{\sc opt}}\xspace - 2p\card{\ensuremath{B_{\overline{M}}}} - 3\ensuremath{s_{\overline{M}}}}
\cdot \eta - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace \\
= &\paren{ \paren{ {1 \over 2} + 3\varepsilon_0}{\ensuremath{\mbox{\sc opt}}\xspace} - 2p\card{\ensuremath{B_{\overline{M}}}} - 3\ensuremath{s_{\overline{M}}}} \cdot \eta - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
Since the original matching $\ensuremath{M^+}$ is of size $(1/2 - \varepsilon_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace + \ensuremath{s_{\overline{M}}}$, the expected matching size in $M \cup B_p$, i.e.,
$\mu(M\cup B_p)$, is
\begin{align*}
&\expect{\mu(M\cup B_p)} = \sum_{\ensuremath{s_{\overline{M}}}} \prob{\ensuremath{s_{\overline{M}}}} \expect{\mu{(M\cup B_p)} \mid \ensuremath{s_{\overline{M}}}} \tag{$\expect{X} = \sum_Y \prob{Y}\expect{X \mid
Y}$} \\
\ge &\sum_{\ensuremath{s_{\overline{M}}}} \prob{\ensuremath{s_{\overline{M}}}} (\card{\ensuremath{M^+}} + {\ensuremath{F^{\star}}}) \\
\geq& \paren{{1 \over 2} - \varepsilon_0}\cdot \ensuremath{\mbox{\sc opt}}\xspace + \expect{\ensuremath{s_{\overline{M}}}} +
\paren{ \paren{ {1 \over 2} + 3\varepsilon_0}{\ensuremath{\mbox{\sc opt}}\xspace} - 2p\card{\ensuremath{B_{\overline{M}}}} - 3\expect{\ensuremath{s_{\overline{M}}}}}
\cdot \eta - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace\\
\ge& \paren{{1 \over 2} + {\eta \over 2} - \varepsilon_0 + 3\eta\varepsilon_0}\cdot \ensuremath{\mbox{\sc opt}}\xspace + (1 - 3\eta)\expect{\ensuremath{s_{\overline{M}}}} - 2p\eta\card{\ensuremath{B_{\overline{M}}}} -
O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace\\
\ge& \paren{{1 \over 2} + {\eta \over 2} - \varepsilon_0 + 3\eta\varepsilon_0}\cdot \ensuremath{\mbox{\sc opt}}\xspace + (1 - 3\eta) (1 - O(p_0)){p\card{\ensuremath{B_{\overline{M}}}} \over 3} -
2p\eta\card{\ensuremath{B_{\overline{M}}}} - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace \tag{By Equation~\ref{eq:bout}}\\
\ge& \paren{{1 \over 2} + {\eta \over 2} - \varepsilon_0 + 3\eta\varepsilon_0}\cdot \ensuremath{\mbox{\sc opt}}\xspace + \paren{{1 \over 3} - 3\eta}p\card{\ensuremath{B_{\overline{M}}}} - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
Since $\eta \approx 0.07157$, ${1 \over 3} - 3\eta > 0$, and we have
\begin{align*}
& \paren{{1 \over 2} + {\eta \over 2} - \varepsilon_0 + 3\eta\varepsilon_0}\cdot \ensuremath{\mbox{\sc opt}}\xspace + \paren{{1 \over 3} - 3\eta}p\card{\ensuremath{B_{\overline{M}}}} -
O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace\\
\ge & \paren{{1 \over 2} + {\eta \over 2} - \varepsilon_0 + 3\eta\varepsilon_0}\cdot \ensuremath{\mbox{\sc opt}}\xspace - O(p_0)\cdot\ensuremath{\mbox{\sc opt}}\xspace\\
> &~0.52 \cdot \ensuremath{\mbox{\sc opt}}\xspace \tag{$\varepsilon_0 = 0.02001$, $\eta > 0.07157$, and $p_0$ is sufficiently small.} \\
= & (1/2 + \delta_0) \cdot \ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
completing the proof of Lemma~\ref{lem:b-matching-augmentation-raw}.
\end{proof}
\subsection{Lower Bounding the Value of \ensuremath{\textnormal{MP-(\ref{eq:MP})}}\xspace}\label{sec:mp}
\newcommand{\dustar}{\ensuremath{d^\star_u}}
\newcommand{\duistar}{\ensuremath{d^\star(u_i)}}
\newcommand{\duipstar}{\ensuremath{d^\star(u_{i'})}}
\newcommand{\duiOstar}{\ensuremath{d^\star(u_{i_1})}}
\newcommand{\duiTstar}{\ensuremath{d^\star(u_{i_2})}}
\newcommand{\dvstar}{\ensuremath{d^\star_v}}
\newcommand{\dvistar}{\ensuremath{d^\star(v_i)}}
\newcommand{\dviOstar}{\ensuremath{d^\star(v_{i_1})}}
\newcommand{\dviTstar}{\ensuremath{d^\star(v_{i_2})}}
In this section, we prove Lemma~\ref{lem:allocation}, i.e., the following inequality,
\begin{align*}
\ensuremath{F^{\star}} \paren{= \min F(d_u, d_v)} \ge (p \cdot \card{C} - \card{\ensuremath{M^+}}) \cdot \eta - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
where $\eta := f({1 \over e})\cdot {1 \over e^2} \paren{1 - e^{-1}}$.
Recall that
\begin{align*}
F(d_u,d_v) = &\sum_i \paren{1 - O(p_0)} \cdot f({1 \over e})\cdot {p \over e^2} \paren{1 - e^{-p \cdot d(v_i)}} \cdot \max\set{d(u_i) - 1,0} \\
= & \paren{1 - O(p_0)} \cdot f({1 \over e})\cdot {p \over e^2} \sum_i\paren{1 - e^{-p \cdot d(v_i)}} \cdot \max\set{d(u_i) - 1,0}
\end{align*}
Since the term $ \paren{1 - O(p_0)} \cdot f({1 \over e})\cdot {p \over e^2} $ is independent of $d_u$ and $d_v$,
\begin{align*}
\arg\min_{d_u,d_v} F = \arg\min_{d_u, d_v} \sum_i \paren{1 - e^{-p \cdot d(v_i)}} \cdot \max\set{d(u_i) - 1,0}
\end{align*}
Define $d(V) := \sum_id(v_i)$ and $d(U) := \sum_i d(u_i)$; then, $d(V) + d(U) = \card{C}$. We need to prove that for any choice of $d(V)$
and $d(U)$, the lemma statement holds. First of all, we can assume $d(U) \ge \card{\ensuremath{M^+}}$: otherwise, since $d(V) \le \ensuremath{\floor{{1 \over p}}}\card{\ensuremath{M^+}}$, we
will have
\begin{align*}
\card{C} = d(V) + d(U) \le \ensuremath{\floor{{1 \over p}}}\card{\ensuremath{M^+}} + \card{\ensuremath{M^+}} \le \paren{{1 \over p} + 1} \card{\ensuremath{M^+}}
\end{align*}
Therefore, the target lower bound on $\ensuremath{F^{\star}}$ satisfies
\begin{align*}
& (p \cdot \card{C} - \card{\ensuremath{M^+}})\cdot \eta - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace \\
\le & \paren{p \paren{{1 \over p} + 1} \card{\ensuremath{M^+}} - \card{\ensuremath{M^+}}} \cdot \eta - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace \\
\le & p \eta \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace \\
\le & p_0 \eta \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
which can be made negative by choosing the constant hidden in the $O(p_0)$ term to be $1$; since $F(d_u,d_v) \ge 0$ always, the bound holds trivially in this case, proving Lemma~\ref{lem:allocation}.
We further assume $d(U) - \card{\ensuremath{M^+}}$ is an integer multiple of $\ensuremath{\floor{{1 \over p}}} -1$ and $d(V)$ is an integer multiple of $\ensuremath{\floor{{1 \over p}}}$. This can be
achieved by removing at most $1/p$ edges from $d(U)$ and $d(V)$ respectively. Since $F$ is monotonically increasing with any $d(u_i)$ or
$d(v_i)$, removing edges from $d(U)$ and $d(V)$ can only make $\ensuremath{F^{\star}}$ even smaller. Therefore, if we show that after removing these
edges, the target lower bound on $\ensuremath{F^{\star}}$ holds, then it definitely holds for the original $d(U)$ and $d(V)$. In the following, we fix any
$d(U)$ and $d(V)$ where $d(U) - \card{\ensuremath{M^+}}$ is an integer multiple of $\ensuremath{\floor{{1 \over p}}} -1$, $d(V)$ is an integer multiple of $\ensuremath{\floor{{1 \over p}}}$, and
$d(V) + d(U) \ge \card{C} - {2 \over p}$. We prove the following key property of $F(d_u, d_v)$.
\begin{lemma}\label{lem:two-values}
There exists $d_u$ and $d_v$ that minimizes $F(d_u, d_v)$ where any entry in $d_u$ is either $1$ or $\ensuremath{\floor{{1 \over p}}}$ and any entry in $d_v$ is
either $0$ or $\ensuremath{\floor{{1 \over p}}}$.
\end{lemma}
We first show why Lemma~\ref{lem:two-values} implies the target lower bound on $\ensuremath{F^{\star}}$, and then prove Lemma~\ref{lem:two-values}. Fix
$\dustar$ and $\dvstar$ that satisfy the property in Lemma~\ref{lem:two-values}. Since every entry in $\dustar$ is either $1$ or
$\ensuremath{\floor{{1 \over p}}}$, the number of $1$'s in $\dustar$ is $x := \card{\ensuremath{M^+}} - (d(U) - \card{\ensuremath{M^+}})/( \ensuremath{\floor{{1 \over p}}} - 1)$. Similarly, the number of $0$'s in
$\dvstar$ is $y := \card{\ensuremath{M^+}} - d(V)/\ensuremath{\floor{{1 \over p}}}$. Therefore, the number of edges in $\ensuremath{M^+}$ where $\dvistar = \duistar = \ensuremath{\floor{{1 \over p}}}$ is at least
\begin{align*}
\card{\ensuremath{M^+}} - x - y = &\card{\ensuremath{M^+}} - \paren{\card{\ensuremath{M^+}} - {d(U) - \card{\ensuremath{M^+}} \over \ensuremath{\floor{{1 \over p}}} - 1}} - \paren{ \card{\ensuremath{M^+}} - {d(V)
\over \ensuremath{\floor{{1 \over p}}}}}\\
= & {d(U) - \card{\ensuremath{M^+}} \over \ensuremath{\floor{{1 \over p}}} - 1} - \card{\ensuremath{M^+}} + {d(V) \over \ensuremath{\floor{{1 \over p}}}}\\
\ge & {d(U) - \card{\ensuremath{M^+}} \over {1 \over p}} - \card{\ensuremath{M^+}} + d(V)p \tag{$\ensuremath{\floor{{1 \over p}}} -1 \le \ensuremath{\floor{{1 \over p}}} \le {1 \over p}$}\\
= & p \cdot d(U) + p \cdot d(V) - (1 + p)\card{\ensuremath{M^+}}\\
\ge & p \paren{\card{C} - {2 \over p}} - (1 + p)\card{\ensuremath{M^+}} \tag{$d(V) + d(U) \ge \card{C} - {2 \over p}$.}\\
= &p \card{C} - \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace \tag{$\card{\ensuremath{M^+}} = O(\ensuremath{\mbox{\sc opt}}\xspace)$}
\end{align*}
Focusing on just these $p \card{C} - \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace$ edges, we have
\begin{align*}
\ensuremath{F^{\star}} \ge &\paren{p \card{C} - \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace} \cdot (1- O(p_0)) \cdot f({1\over e})\cdot {p \over e^2} \paren{1 - e^{-1}} \cdot (\ensuremath{\floor{{1 \over p}}} -
1) \\
\ge &\paren{p \card{C} - \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace} \cdot (1- O(p_0)) \cdot f({1\over e})\cdot {p \over e^2} \paren{1 - e^{-1}} \cdot
{1 \over p} \tag{$\ensuremath{\floor{{1 \over p}}} -1 \ge (1 - O(p_0)){1 \over p}$}\\
\ge &\paren{p \card{C} - \card{\ensuremath{M^+}} - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace} \cdot (1- O(p_0)) \cdot \eta
\\
\ge &\paren{p \card{C} - \card{\ensuremath{M^+}} } \cdot \eta - O(p_0)\ensuremath{\mbox{\sc opt}}\xspace \tag{$\card{\ensuremath{M^+}} = O(\ensuremath{\mbox{\sc opt}}\xspace)$, $p \card{C} \le p\card{\ensuremath{B_M}}= O(\ensuremath{\mbox{\sc opt}}\xspace)$}\\
\end{align*}
which proves Lemma~\ref{lem:allocation}.
We now prove Lemma~\ref{lem:two-values} which will complete the proof.
\begin{proof}[Proof of Lemma~\ref{lem:two-values}]
Fix any allocation $\dustar$ and $\dvstar$ that minimizes $F(d_u, d_v)$. We first show that there is a sequence of local
reallocations of the values (i.e., degrees) in $\dustar$, none of which changes the value of $F(d_u, d_v)$, such that at the end every entry in
$\dustar$ is either $1$ or $\ensuremath{\floor{{1 \over p}}}$. After modifying the vector $\dustar$, we then show that there is a sequence of local
reallocations of the values in $\dvstar$, again without changing the value of $F(d_u, d_v)$, such that at the end every entry in $\dvstar$ is
either $0$ or $\ensuremath{\floor{{1 \over p}}}$.
We first explain how to change $\dustar$. To simplify the presentation, we define $q_i = {1 - e^{-p \cdot \dvistar}}$ and the target
expression becomes
\begin{align}
\sum_i \paren{1 - e^{-p \cdot \dvistar}} \cdot \max\set{\duistar - 1,0} = \sum_i q_i \cdot \max\set{\duistar -1, 0} \label{eq:has-q}
\end{align}
Recall that $\duistar$ satisfies $\duistar \in \bracket{\ensuremath{\floor{{1 \over p}}}}$ (from \ensuremath{\textnormal{MP-(\ref{eq:MP})}}\xspace), that $q_i \ge 0$, and that $\sum_i \duistar = d(U)$. First
of all, if there exists some $i_1$ where $\duiOstar = 0$, then since $d(U) \ge \card{\ensuremath{M^+}}$, there must exist an index $i_2$ where
$\duiTstar \ge 2$. Then, we can shift one degree from $\duiTstar$ to $\duiOstar$, and after the shift, $(a)$ $\max\set{\duiOstar -1, 0}$
remains $0$ and hence $q_{i_1}\max\set{\duiOstar -1, 0}$ remains $0$, and $(b)$ $\max\set{\duiTstar -1, 0}$ decreases and hence
$q_{i_2} \max\set{\duiTstar -1, 0}$ does not increase. Therefore, $F(d_u, d_v)$ does not increase after the shift, and from now on, we
may assume $\duistar \ge 1$ for all $i \in [\card{\ensuremath{M^+}}]$. To proceed, we need the following property of $\dustar$.
\begin{claim}\label{clm:duistar}
For any pair of indices $i_1,i_2$ where $q_{i_1} > q_{i_2}$, either $\duiTstar = \ensuremath{\floor{{1 \over p}}}$ or $\duiOstar \le 1$.
\end{claim}
\begin{proof}
Suppose not; then $\duiTstar < \ensuremath{\floor{{1 \over p}}}$ and $\duiOstar > 1$. We can shift one degree from $\duiOstar$ to
$\duiTstar$ and still get a valid allocation. In the following, we show that this new allocation achieves a smaller value of $F$, which
contradicts the optimality of $\dustar$.
Since shifting from $\duiOstar$ to $\duiTstar$ only changes the degrees for $u_{i_1}$ and $u_{i_2}$, it suffices for us to prove that
\begin{align*}
\Delta := &(q_{i_2} \max\set{\duiTstar -1, 0} + q_{i_1} \max\set{\duiOstar-1,0}) \\
&- (q_{i_2} \max\set{\duiTstar, 0} + q_{i_1} \max\set{\duiOstar - 2, 0}) > 0
\end{align*}
Since $\duiOstar \ge 2$, $\max\set{\duiOstar - 1, 0} = \duiOstar - 1$, $\max\set{\duiOstar - 2, 0} = \duiOstar - 2$. In addition,
since $\duiTstar \ge 0$, $ \max\set{\duiTstar, 0} = \duiTstar $. We have
\begin{align*}
\Delta = &(q_{i_2} \max\set{\duiTstar -1, 0} + q_{i_1} (\duiOstar-1)) - (q_{i_2} \duiTstar + q_{i_1} (\duiOstar - 2))\\
= & q_{i_2} \paren{\max\set{\duiTstar -1, 0} - \duiTstar} +q_{i_1}\\
\ge & q_{i_2} (\duiTstar -1 - \duiTstar) +q_{i_1} \\
= & q_{i_1} - q_{i_2}.
\end{align*}
Since $q_{i_1} > q_{i_2}$, the value of $F$ in Eq~(\ref{eq:has-q}) decreases after the shift, a contradiction.
\end{proof}
We use Claim~\ref{clm:duistar} to prove the correctness of the following sequence of reallocations of $\dustar$. Now, as long as there
exists an index $i_1$ where $\duiOstar \in (1, \ensuremath{\floor{{1 \over p}}})$, since $d(U) - \card{\ensuremath{M^+}}$ is an integer multiple of $(\ensuremath{\floor{{1 \over p}}} - 1)$, there must
exist some index $i_2$ where $\duiTstar \in (1, \ensuremath{\floor{{1 \over p}}})$ (recall that $\duistar \ge 1$), and we shift the values between
$\duiOstar$ and $\duiTstar$ such that one of them becomes either $1$ or $\ensuremath{\floor{{1 \over p}}}$ and both of them remain at least $1$ (it is easy to
see this is always possible). First of all, every step of this reallocation reduces the number of vertices with
$\duistar \in (1, \ensuremath{\floor{{1 \over p}}})$, and hence the process terminates. To see that this process never changes $F(d_u, d_v)$: $(a)$ it cannot be that
$q_{i_1} \neq q_{i_2}$, since otherwise the indices $i_1$ and $i_2$ would contradict Claim~\ref{clm:duistar}, and $(b)$ if
$q_{i_1} = q_{i_2}$, shifting the allocation between $i_1$ and $i_2$ does not change $F(d_u, d_v)$.
Therefore, we can focus on the case where the entries of $\duistar$ are either $1$ or $\ensuremath{\floor{{1 \over p}}}$. We now consider $\dvistar$. Recall that
$d(V) = \sum_i d(v_i)$ and $d(V)$ is an integer multiple of $\ensuremath{\floor{{1 \over p}}}$. The target expression can be written as
\begin{align*}
&\sum_i \paren{1 - e^{-p \cdot \dvistar}} \cdot \max\set{\duistar - 1, 0} \\
= & \sum_{i:~ \duistar = \ensuremath{\floor{{1 \over p}}}} \paren{1 - e^{-p \cdot \dvistar}} \paren{\ensuremath{\floor{{1 \over p}}} - 1} + \sum_{i:~ \duistar = 1} \paren{1 - e^{-p
\cdot \dvistar}} \cdot 0 \\
= & \sum_{i:~ \duistar = \ensuremath{\floor{{1 \over p}}}} \paren{1 - e^{-p \cdot \dvistar}}
\end{align*}
Since $F$ is monotonically increasing when any $d(v_i)$ increases, for the indices $i$ where $\duistar = 1$, ideally, one should allocate
as many degrees to $\dvistar$ as possible, i.e., $\dvistar = \ensuremath{\floor{{1 \over p}}}$. However, it might be the case that $d(V)$ cannot supply $\ensuremath{\floor{{1 \over p}}}$
degrees for all $i$ where $\duistar = 1$. But in this case, we are done, since reallocating between different $\dvistar$ where
$\duistar = 1$ does not change $F$ (in fact, $F$ is always $0$ here), and we can shift the degrees such that as many entries as possible
satisfy $\dvistar = \ensuremath{\floor{{1 \over p}}}$ and the rest equal $0$.
In the following, we assume $d(V)$ can supply $\ensuremath{\floor{{1 \over p}}}$ degrees for all $i$ with $\duistar = 1$, and hence $\dvistar = \ensuremath{\floor{{1 \over p}}}$ whenever
$\duistar = 1$. It suffices to only focus on $\dvistar$ where $\duistar = \ensuremath{\floor{{1 \over p}}}$. We need the following property of $\dvistar$ to
complete the argument.
\begin{claim}\label{clm:dvistar}
For any pair of indices $i_1$ and $i_2$ such that $\duiOstar = \duiTstar = \ensuremath{\floor{{1 \over p}}}$, we have that either $\min\set{\dviOstar, \dviTstar} = 0$ or
$\max\set{\dviOstar, \dviTstar} = \ensuremath{\floor{{1 \over p}}}$.
\end{claim}
\begin{proof}
Suppose not. Then, for some $i_1$ and $i_2$, we have $\min\set{\dviOstar, \dviTstar} > 0$, and also $\max\set{\dviOstar, \dviTstar} <
\ensuremath{\floor{{1 \over p}}}$. Without lose of generality, assume $1 \le \dviOstar \le \dviTstar \le \ensuremath{\floor{{1 \over p}}} -1$. Then, shifting one degree from $\dviOstar$ to
$\dviTstar$ leads to a valid allocation, and we prove in the following that the new allocation decreases the objective function which
contradicts the optimality of $\dvstar$.
Since only the indices $i_1$ and $i_2$ are affected, it suffices for us to prove that
\begin{align*}
\Delta := \paren{1 - e^{-p \cdot \dviOstar}} + \paren{1 - e^{-p \cdot \dviTstar}} - \paren{1 - e^{-p \cdot (\dviOstar - 1)}} - \paren{1
- e^{-p \cdot (\dviTstar + 1)}} > 0
\end{align*}
We have
\begin{align*}
\Delta = e^{-p \cdot \dviOstar} (e^p - 1) + e^{-p \cdot \dviTstar}(e^{-p} - 1)
\end{align*}
Since $\dviOstar \le \dviTstar$, $e^{-p \cdot \dviOstar} \ge e^{-p \cdot \dviTstar}$. We further have
\begin{align*}
e^{-p \cdot \dviOstar} (e^p - 1) + e^{-p \cdot \dviTstar}(e^{-p} - 1) &\ge e^{-p \cdot \dviTstar} (e^p - 1) + e^{-p \cdot \dviTstar}(e^{-p} - 1)\\
&\ge e^{-p \cdot \dviTstar} (e^p - 1 + e^{-p} - 1) \\
&> e^{-p \cdot \dviTstar} (2 \sqrt{e^p \cdot e^{-p}} - 2) \\
&= 0
\end{align*}
where the strict inequality is true since the two terms can only be equal when $e^{p} = e^{-p}$ which does not happen for $p > 0$.
\end{proof}
Using Claim~\ref{clm:dvistar}, we can now show that any $\dvistar$ where $\duistar = \ensuremath{\floor{{1 \over p}}}$ must be either $0$ or $\ensuremath{\floor{{1 \over p}}}$. Suppose
not. If $\dviOstar \in (0, \ensuremath{\floor{{1 \over p}}})$, then since $d(V)$ is an integer multiple of $\ensuremath{\floor{{1 \over p}}}$, there must exist some other index $i_2$
where $\dviTstar \in (0, \ensuremath{\floor{{1 \over p}}})$ (necessarily with $\duiTstar = \ensuremath{\floor{{1 \over p}}}$, since $\dvistar = \ensuremath{\floor{{1 \over p}}}$ whenever $\duistar = 1$); hence,
$0 < \min\set{\dviOstar, \dviTstar} \le \max\set{\dviOstar, \dviTstar} < \ensuremath{\floor{{1 \over p}}}$, contradicting Claim~\ref{clm:dvistar}.
\end{proof}
\end{proof}
\section{The Optimality of the $b$-Matching Lemma}\label{app:b-limitation}
In this section, we establish that our $b$-matching lemma is essentially optimal, in the sense that it is impossible to find a $b$-matching
with at least $b\cdot \ensuremath{\mbox{\sc opt}}\xspace(G)$ edges for $b$ much larger than $1/p$. In particular, we show that,
\begin{claim}\label{clm:b-match-opt}
For any constant $0 < p < 1$, there exist bipartite graphs $G$ where $G_p$ has a matching of size $n-o(n)$ in expectation, and for any
$b > {2 \over p}$, there is no $b$-matching in $G$ with (at least) $b\cdot 0.99n$ edges; here $n$ is the number of
vertices on each side of $G$.
\end{claim}
\begin{proof}
Consider bipartite graphs $G(L,R,E)$ where the vertex set $L$ consists of two disjoint sets $L_1$ and $L_2$ with $\card{L_1} = N$ and
$\card{L_2} = N/e$, for the parameter $N = {n \over (1 + 1/e)}$. Similarly, $R$ contains two sets $R_1$ and $R_2$ with $\card{R_1} = N$
and $\card{R_2} = N/e$.
The edges in $G$ consist of two parts. First, there is a complete bipartite graph between $L_1$ and $R_2$, and a complete bipartite graph
between $L_2$ and $R_1$. Second, there is a sparse graph between $L_1$ and $R_1$ defined through the following random process: each edge
between $L_1$ and $R_1$ is chosen w.p. ${1 \over pN}$, independent of each other.
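The following Python sketch builds one instance of this construction and reports the matching size of a single sampled realization $G_p$ (an illustration only; it assumes the networkx package, the sizes are arbitrary small choices for speed, and one realization is sampled rather than computing the expectation):
\begin{verbatim}
# Build one instance of the lower-bound construction and look at a
# single realization G_p (toy sizes; requires networkx).
import math, random
import networkx as nx

p, N = 0.3, 60
n = int(N * (1 + 1 / math.e))
L1 = [('L1', i) for i in range(N)]
L2 = [('L2', i) for i in range(int(N / math.e))]
R1 = [('R1', i) for i in range(N)]
R2 = [('R2', i) for i in range(int(N / math.e))]

G = nx.Graph()
G.add_edges_from((u, v) for u in L1 for v in R2)  # complete L1-R2
G.add_edges_from((u, v) for u in L2 for v in R1)  # complete L2-R1
for u in L1:                                      # sparse random L1-R1
    for v in R1:
        if random.random() < 1 / (p * N):
            G.add_edge(u, v)

# One realization: keep each edge of G independently w.p. p.
Gp = nx.Graph([e for e in G.edges() if random.random() < p])
M = nx.max_weight_matching(Gp, maxcardinality=True)
print('matching size in G_p:', len(M), ' n =', n)
\end{verbatim}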
In the following, we show that for a graph $G$ created through the above process, w.p. $1 - o(1)$, $G_p$ has a matching of size $n-o(n)$
in expectation, and w.p. $1 - o(1)$, there is no $b$-matching in $G$ with $b\cdot 0.99n$ edges, for $b > {2 \over p}$. Hence, by applying a
union bound, the above process finds a graph with both properties w.p. $1 - o(1)$, proving the claim.
To see that $G_p$ has a matching of size $n-o(n)$ in expectation, we realize the edges in $G$ in two steps: first realize the edges
between $L_1$ and $R_1$, and then the other edges (i.e., the two complete graphs between $L_1, R_2$ and between $L_2, R_1$,
respectively). For the subgraph between $L_1$ and $R_1$, notice that each edge between $L_1$ and $R_1$ is realized w.p.
${1 \over pN} \cdot p = {1 \over N}$ (chosen w.p. ${1 \over pN}$ in the above process and realized w.p. $p$). Since
$\card{L_1} = \card{R_1} = N$, the subgraph between $L_1$ and $R_1$ is a random bipartite graph with $N$ vertices on each side where each
edge is chosen w.p. $1/N$, independently. Therefore, by~\cite{bollobas1998random}, w.p. $1-o(1)$, there is a matching of size $(1 - 1/e)N$
between $L_1$ and $R_1$. Now for the remaining $N/e$ unmatched vertices in $L_1$ (resp. in $R_1$), since there is a complete graph between
$L_1$ and $R_2$ (resp. $R_1$ and $L_2$), w.p. $1 - o(1)$, a perfect matching is realized between the unmatched vertices in $L_1$ and $R_2$
(resp. between $R_1$ and $L_2$). The total matching size is $(1 - 1/e)N + N/e + N/e = (1 + 1/e)N = n$, and the expected maximum matching
size in $G_p$ is at least $(1 - o(1)) n + o(1) \cdot 0 = n - o(n)$.
It remains to show that w.p. $1 - o(1)$, $G$ has no $b$-matching with $b\cdot 0.99n$ edges for $b>{2 \over p}$. For any $b$-matching in $G$,
the number of edges incident on $L_2$ and $R_2$ is at most $b \cdot (\card{L_2} + \card{R_2}) = 2 b N/e$. The remaining edges of this
$b$-matching must be between $L_1$ and $R_1$. Each edge between $L_1$ and $R_1$ is chosen w.p. ${1 \over pN}$, and there are $N^2$
possible edges between $L_1$ and $R_1$. By the Chernoff bound, w.p. $1 - o(1)$, the number of chosen edges between $L_1$ and $R_1$ is at
most $(1 +\varepsilon_0) {N \over p}$ (for a constant $\varepsilon_0$ to be fixed later). Therefore, the total number of edges of any $b$-matching in
$G$ is at most
\begin{align*}
2 b N/e + (1 + \varepsilon_0){N \over p} &= b\cdot n - b(1 - 1/e)N + (1 + \varepsilon_0) {N \over p}\tag{$(1 + 1/e)N = n$}\\
& < b\cdot n - \paren{(1 - 1/e) - {1 + \varepsilon_0 \over 2}}bN \tag{$b > 2/p$ and hence $1/p < b/2$}\\
& = b\cdot 0.99 n \tag{fix $\varepsilon_0$ s.t., $1 - {1 \over e} - {1 + \varepsilon_0 \over 2} = 0.01 (1 + {1\over e})$ ($\varepsilon_0 \approx 0.24$)}
\end{align*}
\end{proof}
\section{$b$-Matching Lemma}\label{sec:b-matching}
Here, we develop one of the main ingredients of our algorithm, namely, that any input graph $G$ contains a $b$-matching of size almost
$b \cdot \ensuremath{\mbox{\sc opt}}\xspace(G)$ for $b = 1/p$. Intuitively, if the expected matching size in $G$ is $\ensuremath{\mbox{\sc opt}}\xspace$, then since only $p$ fraction of edges are
realized in expectation, one may hope to find up to $1/p$ edge-disjoint matchings of size \ensuremath{\mbox{\sc opt}}\xspace in $G$. The following lemma formalizes this
intuition by using $b$-matchings (for $b = 1/p$) instead of a collection of edge-disjoint matchings.
\begin{lemma}[$b$-matching lemma]\label{lem:b-matching}
Let $b = \floor{\frac{1}{p}}$; any graph $G(V,E)$ has a $b$-matching of size at least $(b-1)\cdot\ensuremath{\mbox{\sc opt}}\xspace (G)$.
\end{lemma}
\begin{proof}
Suppose by contradiction that the maximum $b$-matching $B$ in $G$ is of size less than $(b-1)\cdot\ensuremath{\mbox{\sc opt}}\xspace$. Consequently, by Theorem~\ref{thm:b-matching-char}, there exist disjoint subsets $U,W$ of $V$ such that,
\begin{align}
b \cdot \card{U} + \card{E[W]} + \sum_{K} \floor{\frac{1}{2}\Paren{b\cdot \card{K} + \card{E[K,W]}}} < (b-1)\cdot\ensuremath{\mbox{\sc opt}}\xspace \label{eq:b-matching-1}
\end{align}
where $K$ ranges over all connected components in the graph $G[V - U - W]$. Let $c$ be the number of connected components in $G[V-U-W]$. We first note that $c < 2\ensuremath{\mbox{\sc opt}}\xspace$; otherwise,
\begin{align*}
\sum_{K} \floor{\frac{1}{2}\Paren{b\cdot \card{K} + \card{E[K,W]}}} &\geq \sum_{K} \floor{\frac{b}{2}} \geq c \cdot \paren{\frac{b-1}{2}} \geq (b-1)\cdot\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
and hence the LHS in Eq~(\ref{eq:b-matching-1}) would be at least $(b-1) \cdot \ensuremath{\mbox{\sc opt}}\xspace$, i.e., the RHS; a contradiction.
Additionally, we have $\card{U} + \card{W} + \sum_{K} \card{K} = n$. Hence, by multiplying both sides of Eq~(\ref{eq:b-matching-1}) by $2$ and using this identity together with the bound $c < 2\ensuremath{\mbox{\sc opt}}\xspace$, we have,
\begin{align*}
2b\cdot\ensuremath{\mbox{\sc opt}}\xspace-2\ensuremath{\mbox{\sc opt}}\xspace &> nb - b\card{W} + b\card{U} + 2\card{E[W]} + \sum_{K} \paren{\card{E[K,W]} - 1} \\
&\geq nb - b\card{W} + b\card{U} + 2\card{E[W]} + \sum_{K} \card{E[K,W]} - 2\ensuremath{\mbox{\sc opt}}\xspace
\end{align*}
Let $T := V \setminus (U \cup W)$, i.e., the set of vertices in the connected components $K$. Using this notation, we can write the above inequality simply as,
\begin{align}
b\cdot\card{W} - b\cdot\card{U} - 2\card{E[W]} - \card{E[T,W]} > b \cdot \paren{n-2\ensuremath{\mbox{\sc opt}}\xspace} \label{eq:b-matching-2}
\end{align}
Now consider the partition $T,U,W$ in a \emph{realized} graph $G(V,E_p)$. Let $E_p[W]$ and $E_p[T,W]$ denote, respectively, the set of edges in $E[W]$ and $E[T,W]$ after sampling the edges
w.p. $p$. For any matching $M$ in $G_p$, define $x(M)$ to be the number of \emph{unmatched} vertices (by $M$) in $W$. Finally, define $x^{\star} := \min_{M} x(M)$, where the minimum is taken over all
matchings in $G_p$. Clearly, $x^{\star}$ is a random variable depending on the choice of edges in $G_p$. We have the following simple claim.
\begin{claim}\label{clm:x-size}
For any realization $G_p$, $x^{\star} \geq \card{W}-\card{U} - 2\card{E_p[W]} - \card{E_p[T,W]}$.
\end{claim}
\begin{proof}
Consider the set of vertices in $W$. At most $\card{U}$ vertices of $W$ can be matched to vertices in $U$. Additionally, any edge in $E_p[W]$ can further reduce the number
of unmatched vertices in $W$ by at most $2$. Finally, any edge in $E_p[T,W]$ can reduce the number of remaining unmatched vertices in $W$ by at most $1$.
\end{proof}
Using the fact that $\Exp{x^{\star}} \leq n-2\ensuremath{\mbox{\sc opt}}\xspace$ (a maximum matching of $G_p$ leaves at most $n - 2\mu(E_p)$ vertices unmatched in total, hence also in $W$, and $\Exp{\mu(E_p)} = \ensuremath{\mbox{\sc opt}}\xspace$), we have,
\begin{align*}
b\cdot (n-2\ensuremath{\mbox{\sc opt}}\xspace) &\geq b\cdot\Exp{x^{\star}} \\
&\geq b\cdot\card{W} - b\cdot\card{U} - b\cdot\EX{2\card{E_p[W]} + \card{E_p[T,W]}} \tag{by Claim~\ref{clm:x-size}}\\
&= b\cdot\card{W} - b\cdot\card{U} - pb \cdot \paren{2\card{E[W]} + \card{E[T,W]}} \\
&\geq b\cdot\card{W} - b\cdot\card{U} - 2\card{E[W]} - \card{E[T,W]} \tag{since $pb = p \floor{\frac{1}{p}} \leq 1$} \\
&> b \cdot (n-2\ensuremath{\mbox{\sc opt}}\xspace) \tag{by Eq~(\ref{eq:b-matching-2})}
\end{align*}
a contradiction.
\end{proof}
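As an aside, the lemma can be spot-checked numerically on tiny graphs. The Python sketch below is an illustration only: it assumes the networkx package, uses an exhaustive search over edge subsets for the maximum $b$-matching, and estimates $\ensuremath{\mbox{\sc opt}}\xspace$ by Monte Carlo on an arbitrary small random graph.
\begin{verbatim}
# Spot-check of the b-matching lemma on a tiny random graph:
# max b-matching (exhaustive) vs (b-1) * opt (Monte-Carlo estimate).
import itertools, math, random
import networkx as nx

p = 0.5
b = math.floor(1 / p)                       # b = 2
G = nx.gnm_random_graph(6, 9, seed=1)
edges = list(G.edges())

def is_b_matching(S):
    deg = {}
    for u, v in S:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return all(d <= b for d in deg.values())

max_b = max(len(S) for r in range(len(edges) + 1)
            for S in itertools.combinations(edges, r) if is_b_matching(S))

trials = 2000
opt = sum(len(nx.max_weight_matching(
              nx.Graph([e for e in edges if random.random() < p]),
              maxcardinality=True))
          for _ in range(trials)) / trials
print(max_b, '>=?', (b - 1) * opt)          # the lemma predicts >=
\end{verbatim}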
We further prove that the bound established in Lemma~\ref{lem:b-matching} is essentially tight (see Appendix~\ref{app:b-limitation}).
\begin{claim}\label{clm:b-match-opt}
For any constant $0 < p < 1$, there exist bipartite graphs $G$ where $G_p$ has a matching of size $n-o(n)$ in expectation, but for any
$b > {2 \over p}$, there is no $b$-matching in $G$ with (at least) $b\cdot 0.99n$ edges; here $n$ is the number of
vertices on each side of $G$.
\end{claim}
Finally, we establish the following auxiliary lemma.
\begin{lemma}\label{lem:outside-edges}
Let $B$ be a $\floor{\frac{1}{p}}$-matching with $\paren{\floor{\frac{1}{p}}\cdot N}$ edges; then, $\EX{\mu(B_p)} \geq (1-3p) \cdot {\frac{N}{3}}$.
\end{lemma}
\begin{proof}
We first partition the edges of $B$ into a collection of matchings. Since the degree of each vertex in $G(V,B)$ is at most $\floor{\frac{1}{p}}$, by Vizing's Theorem~\cite{Vizing64}, we can color
the edges in $G(V,B)$ with $\floor{\frac{1}{p}}+1$ colors such that no two edges with the same color are incident on a vertex.
This ensures that $B$ can be decomposed into $R = \floor{\frac{1}{p}}+1$ matchings
$M_1,\ldots,M_R$.
Next, we define the following process. Define $M^{(0)} = \emptyset$; for $i = 1$ to $R$ rounds, let $M^{(i)}$ be the matching
obtained by adding to $M^{(i-1)}$ the set of realized edges in $M_i$ that are not incident on vertices in $M^{(i-1)}$. Define
$M := M^{(R)}$.
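The following Python sketch simulates this process in a simplified setting (an illustration only: here $B$ is taken to be a disjoint union of $\floor{\frac{1}{p}}$ random perfect matchings between two sides of $N$ vertices, so that $\card{B} = \floor{\frac{1}{p}} \cdot N$ as in the lemma, sidestepping the Vizing decomposition; all parameters are arbitrary):
\begin{verbatim}
# Monte-Carlo simulation of the round-by-round process, with B a
# disjoint union of floor(1/p) random perfect matchings on N+N vertices.
import math, random

p, N, trials = 0.1, 50, 500
R = math.floor(1 / p)           # one round per matching in the toy B

def run_once():
    matched, size = set(), 0
    for _ in range(R):
        perm = list(range(N))
        random.shuffle(perm)    # M_i matches left i to right perm[i]
        for i, j in enumerate(perm):
            # keep the edge if it is realized and both endpoints are free
            if (random.random() < p and ('L', i) not in matched
                    and ('R', j) not in matched):
                matched.update({('L', i), ('R', j)})
                size += 1
    return size

est = sum(run_once() for _ in range(trials)) / trials
print(est, '>=?', (1 - 3 * p) * N / 3)
\end{verbatim}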
We argue that $\Exp{\card{M}} \geq (1-3p) \cdot {\frac{N}{3}}$. To do this, we need the following notation. Define $Y_i$ as a random variable denoting
the \emph{set of edges} in $M_i$ that are \emph{not} incident on any vertex of matching $M^{(i-1)}$. Note that $Y_i$ depends only on the realization of edges in $M_1,\ldots,M_{i-1}$ and is \emph{independent} of
the realization of $M_i$. Moreover, define $X_i$ as a random variable indicating the \emph{number of edges} in (a realization of) $Y_i$ that are added to $M^{(i-1)}$ (after updating by the edges in $M_i$). We first have,
\begin{align*}
\card{Y_i} \geq \card{M_i} - 2 \card{M^{(i-1)}}
\end{align*}
since any edge in $M^{(i-1)}$ can be incident on at most two edges of $M_i$. Moreover, conditioned on any valuation
for $Y_i$, we have $\Exp{X_i} = p \cdot \card{Y_i}$ since each edge in $M_i$ is realized w.p. $p$, independent
of the choice of $Y_i$. Consequently,
\begin{align*}
\Exp{X_i} = p \cdot \Exp{\card{Y_i}} \geq p \cdot \paren{\card{M_i} - 2\EX{\card{M^{(i-1)}}}}
\end{align*}
We again stress that the expectation for $X_i$ is taken over the choice of edges in $M_i$, while the expectation for $Y_i$ (and $M^{(i-1)}$) is taken over the choice of edges in $M_1,\ldots,M_{i-1}$.
We now have,
\begin{align*}
\Exp{\card{M}} &= \sum_{i=1}^R \Exp{X_i} \geq \sum_{i=1}^{R} p \cdot \paren{\card{M_i} - 2\EX{\card{M^{(i-1)}}}} \\
&\geq p \cdot \paren{\sum_{i=1}^{R} \card{M_i} - 2 \sum_{i=1}^{R} \Exp{\card{M}}} \tag{$\Exp{\card{M}} \geq \EX{\card{M^{(i-1)}}}$} \\
&\geq p \cdot \paren{\floor{\frac{1}{p}} \cdot N - 2 \paren{\floor{\frac{1}{p}}+1} \cdot \Exp{\card{M}}} \tag{$R = \floor{\frac{1}{p}}+1$}
\end{align*}
Rearranging, and using $1 - p \le p\floor{\frac{1}{p}} \le 1$, this implies that
\begin{align*}
\Exp{\card{M}} \geq (1-3p) \cdot \frac{N}{3}
\end{align*}
which concludes the proof.
\end{proof}
\section{Concluding Remarks and Open Problems}\label{sec:conc}
We presented the first non-adaptive algorithm for stochastic matching with an approximation ratio that is strictly better than half. In
particular, we showed that any graph $G$ has a subgraph $H$ with maximum degree $O(\frac{\log{(1/p)}}{p})$ such that the ratio of expected
size of a maximum matching in realizations of $H$ and $G$ is at least $0.52$ when $p$ is sufficiently small, i.e., the case of vanishing probabilities, and $0.5+\delta_0$ (for an
absolute constant $\delta_0 > 0$) for any $p \in (0,1)$.
A main open problem is to determine the best approximation ratio achievable by a non-adaptive algorithm. In particular, can non-adaptive
algorithms qualitatively match the performance of adaptive algorithms by achieving a $(1-\varepsilon)$-approximation for any $\varepsilon > 0$ using a
subgraph with maximum degree $f(\varepsilon,p)$ for some function $f$? In the following, we mention some potential directions towards resolving this problem.
\paragraph{A barrier to obtaining a $(1-\varepsilon)$-approximation.} We briefly explain here a barrier to a $(1-\varepsilon)$-approximation algorithm that was noted in~\cite{ECpaper}.
It was shown in~\cite{ECpaper} that any (non-adaptive) $(1-\varepsilon)$-approximation algorithm for stochastic matching needs to solve the following problem.
\begin{problem*}[\!\cite{ECpaper}]
Suppose you are given a bipartite graph $G(L, R, E)$ ($\card{L} = \card{R} = n$) with the property that the expected maximum matching size
between two \emph{uniformly at random chosen} subsets $A \subseteq L$ and $B \subseteq R$ with $\card{A} = \card{B} = n/3$, is
$n/3 - o(n)$. The goal is to compute a subgraph $H(L,R,Q)$ with max-degree of $O(1)$, such that the expected size of a maximum matching
between two randomly chosen subsets $A$ and $B$ is $\Omega(n)$.
\end{problem*}
For the harder problem in which the two subsets $A$ and $B$ are chosen \emph{adversarially}, it is known that there exist graphs (in
particular, a Ruzsa-Szemer\'{e}di\xspace graph; see, e.g.~\cite{AlonMS12,FischerLNRRS02}) that admit no such sparse subgraph $H$ (see~\cite{ECpaper} for more
details). However, in the stochastic matching application, our interest is in \emph{randomly} chosen subsets $A$ and $B$, and it is not
known if there are instances such that the random set version of the problem is hard.
\paragraph{A direct application of $b$-matching lemma.} There is another possible way of utilizing the $b$-matching lemma. In
Lemma~\ref{lem:outside-edges}, we showed that for any $\frac{1}{p}$-matching $B$ of size $\frac{\ensuremath{\mbox{\sc opt}}\xspace}{p}$, the expected maximum matching
size of a realization of $B$ is at least $\frac{\ensuremath{\mbox{\sc opt}}\xspace}{3}$. In fact, using a more careful analysis, we can improve this bound to
$\approx 0.4 \cdot \ensuremath{\mbox{\sc opt}}\xspace$. This, together with our $b$-matching lemma, immediately implies a simple $0.4$-approximation algorithm for
stochastic matching. However, it is not clear to us whether this bound can be significantly improved to get a matching of size strictly
more than $\frac{\ensuremath{\mbox{\sc opt}}\xspace}{2}$. It is worth mentioning that using a result of Karp and Sipser~\cite{KarpS81} on \emph{sparse random graphs}
(see also~\cite{AronsonFP98}, Theorem~4), one can show that if the $\frac{1}{p}$-matching itself is chosen \emph{randomly}, then its
realizations contain a matching of size $\approx 0.56 \cdot \ensuremath{\mbox{\sc opt}}\xspace$ in expectation. However, this result relies heavily on the fact that the
original graph (in our case a realization of a random $\frac{1}{p}$-matching) is chosen randomly, and it seems unlikely that a similar
result holds for an \emph{adversarially} chosen $\frac{1}{p}$-matching.
\section{Introduction}\label{sec:intro}
We study the problem of finding a maximum matching in presence of \emph{uncertainty} in the input graph. Specifically, we consider the
\emph{stochastic} setting where for an input graph $G(V,E)$ and a parameter $p > 0$, each edge in $E$ is realized \emph{independently}
w.p.\footnote{Throughout, we use \emph{w.p.}, \emph{w.h.p}, and \emph{prob.} to abbreviate ``with probability'', ``with high probability'',
and ``probability'', respectively.} $p$. We call the graph obtained from this stochastic process (which should be viewed as a random
variable) a \emph{realization} of $G(V,E)$, denoted by $G_p(V,E_p)$. The \emph{stochastic matching} problem can now be defined as follows.
Given a general (not necessarily bipartite) graph $G(V,E)$ and an edge realization probability $p > 0$, compute a subgraph $H$ of $G$ such
that:
\begin{enumerate}[(i)]
\item The expected maximum matching size in a realization of $H$ is close to the expected maximum matching size in a realization of $G$.
\item The degree of each vertex in $H$ is bounded by some function that only depends on $p$, independent of the size of $G$.
\end{enumerate}
In other words, the stochastic matching problem asks if every graph $G$ contains a subgraph $H$ of \emph{bounded degree} (depending only on the realization probability $p$)
such that the expected matching size in realizations of $G$ and $H$ are close.
\paragraph{Kidney exchange.}
A canonical and arguably the most important application of the stochastic matching problem appears in \emph{kidney exchange}, where patients
waiting for a kidney transplant can \emph{swap} their incompatible donors to each get a compatible donor. The goal is to identify a maximum
set of patient-donor pairs to perform such a swap (i.e., to find a maximum matching). However, through medical records of patients and donors, one can
only filter out the patient-donor pairs where donation is \emph{impossible}, and more costly and time-consuming tests must be performed
before a transplant can take place.
The stochastic setting captures the essence of the need of extra tests for kidney exchange: an algorithm selects a set of patient-donor
pairs to perform the extra tests (i.e., computes a subgraph $H$), while making sure that there is a large matching among the pairs that pass
the extra tests. The objective that the subgraph $H$ has small degree captures the essence of minimizing the number of (costly and time
consuming) tests that each patient needs to go through. The kidney exchange problem has been extensively studied in the literature,
particularly under stochastic settings (see, e.g.,~\cite{DickersonPS13,Anderson20012015,ManloveO14,Unver10,AkbarpourLG14,AndersonAGK15,AwasthiS09,DickersonPS12,DickersonS15}).
We remark that the stochastic matching problem captures the simplest form of kidney exchange, referred to as \emph{pairwise exchange}. Modern kidney exchange programs regularly
employ swaps between three or more patient-donor pairs, and this setting has also been studied previously in the literature; we refer the interested reader to~\cite{BlumDHPSS15} for more details.
\paragraph{Previous work.} Our results are directly related to the results in~\cite{BlumDHPSS15} and~\cite{ECpaper} which we describe in
detail below. Blum~{\it et al.\,}~\cite{BlumDHPSS15} introduced the (variant of) stochastic matching problem and proposed a
$(\frac{1}{2}-\varepsilon)$-approximation algorithm (for any $\varepsilon > 0$) which requires the subgraph $H$ to have maximum degree of
$\frac{\log{(1/\varepsilon)}}{p^{\Theta(1/\varepsilon)}}$. The algorithm of Blum~{\it et al.\,}~\cite{BlumDHPSS15} works as follows: Pick a maximum matching $M_i$
in $G$ and remove the edges in $M_i$; repeat for $R:=\frac{\log{(1/\varepsilon)}}{p^{\Theta(1/\varepsilon)}}$ times. In order to analyze this algorithm,
the authors showed that, for any $i\in[R]$, if the size of the maximum matching among the realized edges in $M_1,\ldots,M_i$ is less than
$\ensuremath{\mbox{\sc opt}}\xspace/2$, then the matching $M_{i+1}$ contains many augmenting paths of length $O(\frac{1}{\varepsilon})$ for this realized matching; since each such augmenting path is
realized w.p. $p^{O(\frac{1}{\varepsilon})}$, one needs to repeat this augmentation process $\frac{1}{p^{O(\frac{1}{\varepsilon})}}$ times (which is
roughly the value of $R$) to increase the matching size to $(\frac{1}{2}-\varepsilon)\cdot\ensuremath{\mbox{\sc opt}}\xspace$.
In a recent work~\cite{ECpaper}, we showed that in order to obtain a $(\frac{1}{2}-\varepsilon)$-approximation algorithm, one only needs a subgraph
$H$ with max-degree of $O(\frac{\log{(1/\varepsilon p)}}{\varepsilon p})$, significantly smaller than the bounds in~\cite{BlumDHPSS15}. Interestingly, the
algorithm of~\cite{ECpaper} and the one in~\cite{BlumDHPSS15} are essentially identical (modulo an extra sparsification part required
in~\cite{ECpaper}) and the main difference is in the analysis. In~\cite{ECpaper}, we completely bypassed the need for using augmenting paths
in the analysis and instead, took advantage of structural properties of matchings in a global manner (by using \emph{Tutte-Berge formula};
see, e.g.,~\cite{lovasz2009matching}). In particular, we showed that repeatedly picking $O(\frac{\log{(1/\varepsilon p)}}{\varepsilon p})$ maximum
matchings (as described before) suffices to ensure that, among the chosen edges, a matching of size (essentially) equal to the size of the
last chosen matching would be realized (with high probability). Having this, one can show that running the aforementioned algorithm even for
$R:= O(\frac{\log{(1/\varepsilon p)}}{\varepsilon p})$ suffices to obtain a $(\frac{1}{2}-\varepsilon)$-approximation.
\emph{Adaptive} algorithms for stochastic matching have also been studied by~\cite{BlumDHPSS15,ECpaper}. In an adaptive algorithm, instead
of a single graph $H$, one is allowed to pick a \emph{small} number of bounded-degree graphs $H_1,\ldots,H_k$ where the choice of each $H_i$
can be made after \emph{probing} the edges in $H_1, H_2, \ldots, H_{i-1}$ to see if they are realized or not. A $(1-\varepsilon)$-approximation adaptive
algorithm for this problem was first proposed in~\cite{BlumDHPSS15} and further refined in~\cite{ECpaper}.
\paragraph{Beating the half approximation.}
This state-of-the-art highlights the following natural question:
\begin{quote}
\emph{Is half-approximation the limit for non-adaptive algorithms or is there a non-adaptive algorithm that achieves approximation
guarantee of {strictly better than half}? }
\end{quote}
It is worth mentioning that in many variants of the maximum matching problem, obtaining a half-approximation is typically a relatively easy task (usually via a greedy approach), while beating the half-approximation turns out to be a
difficult task. Some notable examples include randomized greedy matching~\cite{DyerF91,AronsonDFS95,PoloczekS12,ChanCWZ14}, online stochastic matching~\cite{KVV90,MehtaP12,MehtaWZ15}, and
semi-streaming matching~\cite{FKMSZ05,KonradMM12}.
\subsection{Our Contributions}
We resolve the aforementioned question of obtaining an algorithm for stochastic matching with an approximation guarantee of \emph{strictly}
better than half. Formally,
\begin{theorem}\label{thm:main}
There exists an algorithm that given any graph $G(V,E)$ and any parameter $p > 0$, computes a subgraph $H(V,Q)$ of $G$ with a maximum
degree of $\bigO{\frac{\log{(1/p)}}{p}}$ such that the ratio of the expected maximum matching size of a realization of $H$ to that of a
realization of $G$ is at least:
\begin{enumerate} [(i)]
\item \label{part:main1} $0.52$ when $p \leq p_0$ for an \emph{absolute} constant $\ensuremath{p_0} > 0$.
\item \label{part:main2} $0.5 + \delta_0$ for any $0 < p < 1$, where $\delta_0 > 0$ is an \emph{absolute} constant.
\end{enumerate}
\end{theorem}
Our result in Theorem~\ref{thm:main} makes progress towards an open problem posed by Blum~{\it et al.\,}~\cite{BlumDHPSS15} regarding the possibility
of having a non-adaptive $(1-\varepsilon)$-approximation algorithm for stochastic matching. We further remark that the assumption in Part~(\ref{part:main1}) of Theorem~\ref{thm:main} is standard
in the stochastic matching literature and is referred to as the case of \emph{vanishing probabilities}, see, e.g.~\cite{MehtaP12,MehtaWZ15}.
It is worth mentioning that the max-degree of $H$ achieved in Theorem~\ref{thm:main} is essentially the best possible (up to an
$O(\log{{1 \over p}})$ factor) for any \emph{constant factor} approximation algorithm: suppose $G$ is a complete graph; in this case, the
expected matching size in a realization of $G$ is $ n-o(n)$ by standard results on random graphs (see, e.g.,~\cite{Bollobas2001}, Chapter~7); however, if
max-degree of $H$ is $o(\frac{1}{p})$, then the expected number of realized edges in $H$ is $o(n)$, implying that the expected matching size
in $H$ is $o(n)$.
Our approach to proving Theorem~\ref{thm:main} can be divided into two parts. In the first part, we prove a structural result showing that
if a realization of $G$ has expected maximum matching size $\ensuremath{\mbox{\sc opt}}\xspace$, then $G$ itself should contain essentially $\frac{1}{p}$ edge-disjoint
matchings of size $\ensuremath{\mbox{\sc opt}}\xspace$ each. This result, established through a characterization of $b$-matching size in general graphs (see
Section~\ref{sec:prelim}), sheds more light on the structure of a graph in terms of its expected maximum matching size, which may be of
independent interest.
In the second part, we combine the aforementioned structural result with the $(\frac{1}{2}-\varepsilon)$-approximation algorithm of~\cite{ECpaper}
to obtain a matching of size strictly larger than $\ensuremath{\mbox{\sc opt}}\xspace/2$. In order to do this, we first find a collection of $\frac{1}{p}$ edge-disjoint
matchings of size at least $\ensuremath{\mbox{\sc opt}}\xspace$, remove them from the graph, and then run the algorithm of~\cite{ECpaper} on the remaining edges. We show
that the edges in this collection of edge-disjoint matchings must form many \emph{length-three augmenting paths} for the matching computed by
the algorithm of~\cite{ECpaper}, hence leading to a matching of size strictly larger than $\ensuremath{\mbox{\sc opt}}\xspace/2$. The analysis is separated into two steps:
we first formulate the increment in the matching size (through these augmenting paths) via a (non-linear) minimization program, and then
analyze the optimal solution of this minimization program to lower bound the increment in the matching size obtained from the
augmenting paths.
\paragraph{Other related work.} Multiple variants of stochastic matching have been considered in the literature. Blum~{\it et al.\,}~\cite{BlumGPS13}
studied a similar setting where one can only probe \emph{two} edges incident on any vertex and the goal is to find the optimal set of edges
to query. Another well studied setting is the \emph{query-commit} model, whereby an algorithm probes one edge at a time and if an edge $e$
is probed and realized, then the algorithm must take $e$ as part of the matching it
outputs~\cite{CostelloTT12,Adamczyk11,ChenIKMR09,BansalGLMNR12,GuptaN13}. We refer the reader to~\cite{BlumDHPSS15} for a detailed
description of the related work.
\paragraph{Organization.} The rest of the paper is organized as follows. We start by providing a high level overview of our algorithm in
Section~\ref{sec:overview}. Next, in Section~\ref{sec:prelim}, we introduce the notation and preliminaries needed for the rest of the
paper. We prove our main structural result, i.e., $b$-matching lemma in Section~\ref{sec:b-matching}. Our main algorithm and its analysis, i.e., the proof
of Part~(\ref{part:main1}) of Theorem~\ref{thm:main} are provided in Section~\ref{sec:main-alg}. Proof of Part~(\ref{part:main2}) of Theorem~\ref{thm:main}, i.e., an algorithm that works
for the large-probability case appears in Section~\ref{app:large-p}. We conclude the paper in Section~\ref{sec:conc}.
\section*{Omitted Proofs}\label{app:missing}
\subsection{Proof of Proposition~\ref{prop:upper-exp}}\label{app:upper-exp}
\begin{proposition*}
Let $f(x) := {1 - e^{-x} \over x}$. Then, for any $c > 0$, and any $x \in [0, c]$, $e^{-x} \le 1 - f(c) \cdot x$.
\end{proposition*}
\begin{proof}
We first note that $f(x)$ is \emph{monotone decreasing}, since,
\begin{align*}
\frac{d f}{d x} = \frac{e^{-x} \cdot x - 1+e^{-x}}{x^2} = \frac{(x+1)\cdot e^{-x} - 1}{x^2} \leq \frac{e^{x}\cdot e^{-x} - 1}{x^2} = 0
\end{align*}
where we used the inequality $(1+x) \leq e^{x}$.
Consequently, since $x \le c$,
\begin{align*}
f(c) \le f(x) = {1 - e^{-x} \over x}
\end{align*}
which implies $e^{-x} \le 1 - f(c) \cdot x$.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:upper-exp-2}}\label{app:upper-exp-2}
\begin{proposition*}
For any $x \in (0,0.43]$, $(1-x)^{{1\over x}} \ge {1 - x \over e}$.
\end{proposition*}
\begin{proof}
We first examine equivalent formulations of the target inequality.
\begin{align*}
& &(1-x)^{{1\over x}} \ge {1 - x \over e} \\
&\Longleftrightarrow & (1-x)^{{1\over x} - 1} \ge {1 \over e} \\
&\Longleftrightarrow & ({1\over x} - 1)\log(1-x) \ge -1 \tag{taking the natural log of both sides}
\end{align*}
Now, since $\log(1 - x) \ge -x - {x^2 \over 2} - {x^3 \over 2}$ when $x \in (0,0.43]$, we have
\begin{align*}
({1\over x} - 1)\log(1-x) &\ge ({1\over x} - 1) (-x - {x^2 \over 2} - {x^3 \over 2}) \tag{since $({1\over x} - 1) > 0$} \\
& = -1 - {x \over 2} - {x^2 \over 2} + x + {x^2 \over 2} + {x^3 \over 2} \\
& = - 1 + {x \over 2} + {x^3 \over 2} \\
& \ge -1
\end{align*}
which completes the proof.
\end{proof}
\section{Technical Overview}\label{sec:overview}
In this section, we give a more detailed overview of the main ideas used in our algorithm for stochastic matching. For clarity of
exposition, throughout this section, we assume $p$ is a sufficiently small constant (corresponding to Part~(\ref{part:main1}) of
Theorem~\ref{thm:main}) and the expected maximum matching size in $G$ (i.e., $\ensuremath{\mbox{\sc opt}}\xspace$) is $n-o(n)$, or in other words, a realization of $G$,
$G_p$, has a near perfect matching in expectation.
Our starting point is the following observation: In order for $G_p$ to have a (near) perfect matching in expectation, the input graph $G$
must have many (roughly $1/p$) edge-disjoint (near) perfect matchings. To gain some intuition why this is true, suppose for the moment that
the input graph is a bipartite graph $G(L,R,E)$. Then, by \emph{Hall's Marriage Theorem}, we know that in order for $G_p$ to have a matching
of size $n - o(n)$, for any two subsets $X \subseteq L$ and $Y \subseteq R$ with $\card{X} - \card{Y}$ larger than the $o(n)$ deficiency term, at least one edge from
$X$ to $\bar{Y}$ should be realized in $G_p$. However, this requirement implies that in $G$, there should be roughly $1/p$ edges from $X$ to $\bar{Y}$,
so that at least one of these edges appears in $G_p$. One can then show that a bipartite graph $G$ with such a structure has $1/p$
edge-disjoint matchings of size at least $n-o(n)$.
In general, we need to handle graphs that are not necessarily bipartite. In order to adapt the previous strategy, we slightly relax our
requirement of having $1/p$ edge-disjoint matchings to having one (simple) $b$-matching\footnote{Recall that a (simple) $b$-matching is
simply a graph with degree of each vertex bounded by $b$. See Section~\ref{sec:prelim} for more details.} of size $nb$ for the parameter
$b = \frac{1}{p}$. We show that,
\begin{itemize}
\item[] \textbf{b-Matching Lemma.} Any graph $G$ where $G_p$ has a matching of size $n-o(n)$ in expectation has a $\frac{1}{p}$-matching of size (essentially) $\frac{n}{p}$.
\end{itemize}
Next, we combine the fact that a large $\frac{1}{p}$-matching, denoted by $B$, always exists in $G$, with the
$(\frac{1}{2}-\varepsilon)$-approximation algorithm of~\cite{ECpaper} to obtain a strictly better than $\frac{1}{2}$-approximation algorithm.
To continue, we briefly describe the algorithm of~\cite{ECpaper}, which we refer to as \ensuremath{\textsf{MatchingCover}}\xspace. \ensuremath{\textsf{MatchingCover}}\xspace works by repeatedly picking a maximum
matching $M_i$ in $G$ and removing its edges, for $R:= \Theta\paren{\frac{\log{(1/p)}}{p}}$ iterations\footnote{We remark
that this algorithm has an extra \emph{sparsification} step which is needed to handle the case where $\ensuremath{\mbox{\sc opt}}\xspace = o(n)$. However, since in this
section we assume $\ensuremath{\mbox{\sc opt}}\xspace = n-o(n)$, this extra step is not required.}. This collection of matchings, denoted by $E_{MC}$, is referred to as
a \emph{matching cover} of the original graph $G$. The main property of this matching cover, proved in~\cite{ECpaper}, is that the set of
realized edges in $E_{MC}$ has a matching of size (essentially) $\card{M_R}$; note that $M_R$ is the smallest of the
matchings in $E_{MC}$.
We are now ready to define our main algorithm: Pick a maximum $\frac{1}{p}$-matching $B$ from $G$; run \ensuremath{\textsf{MatchingCover}}\xspace over the edges
$E\setminus B$ and obtain a matching cover $E_{MC}$; return $H(V,B \cup E_{MC})$. If $\card{M_R} < ({1 \over 2} - \delta_0)n$, using the
fact that $E_{MC}$ is obtained by repeatedly picking maximum matchings, one can show that any matching $M$ of size $n-o(n)$ in $G$ has more
than $(\frac{1}{2} + \delta_0)n - o(n)$ edges in $B \cup E_{MC}$. This also implies that the expected matching size in $H$ is at least
$(\frac{1}{2} + \delta_0)n - o(n)$. The more difficult case, where we concentrate the bulk of our technical effort, is when
$\card{M_R} \geq (\frac{1}{2}-\delta_0)n$. For simplicity, assume $\card{M_R} = n/2$ from here on.
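Purely as an illustration, the following Python sketch assembles these pieces (assuming the networkx package; the greedy routine is only a hypothetical stand-in for a maximum $\frac{1}{p}$-matching computation, and the number of rounds is a free parameter rather than the exact $R$ used by \ensuremath{\textsf{MatchingCover}}\xspace):
\begin{verbatim}
# High-level sketch of the algorithm: a (greedy stand-in for a) maximum
# 1/p-matching B, plus repeated maximum matchings on the remaining edges.
import math
import networkx as nx

def greedy_b_matching(G, b):
    deg = {v: 0 for v in G}
    B = []
    for u, v in G.edges():
        if deg[u] < b and deg[v] < b:
            deg[u] += 1; deg[v] += 1
            B.append((u, v))
    return B

def stochastic_matching_subgraph(G, p, rounds):
    b = math.floor(1 / p)
    B = greedy_b_matching(G, b)
    H = G.copy()
    H.remove_edges_from(B)
    chosen = set(B)
    for _ in range(rounds):          # the MatchingCover phase
        M = nx.max_weight_matching(H, maxcardinality=True)
        if not M:
            break
        chosen.update(M)
        H.remove_edges_from(M)
    Q = nx.Graph(list(chosen))
    Q.add_nodes_from(G)
    return Q                         # max degree <= b + rounds

# Example: H = stochastic_matching_subgraph(nx.complete_graph(30), 0.25, 8)
\end{verbatim}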
As stated above, if $|M_R| = n/2$, then in almost every realization of the edges in $E_{MC}$, there exists a matching $M$ of size at least
$n/2$. Our strategy is to \emph{augment} the matching $M$ using the (realized) edges in $B$, so that the matching size becomes
$({1 \over 2} + \delta_0) n$. It is important to note that the edge sets $E_{MC}$ and $B$ are disjoint, and hence the realizations of the
edges in $E_{MC}$ and in $B$ are independent of each other.
Let $U$ be the set of vertices matched by $M$. There are two cases here to consider:
\begin{itemize}
\item \textbf{Case 1.} Nearly all edges in $B$ are incident on vertices in $U$.
\item \textbf{Case 2.} An $\varepsilon$-fraction of edges in $B$ are not incident on $U$ (for some constant $\varepsilon > 0$).
\end{itemize}
The second case is relatively easy to handle: we show that a realization of a $\frac{1}{p}$-matching with $N/p$ edges has a matching of size
at least $N/3$ in expectation. This implies that $B_p$ has a matching $M'$ of size $\varepsilon \cdot \frac{n}{3} = \Theta(\varepsilon) \cdot n$ which is
not incident on $U$. Consequently, $B \cup E_{MC}$ has a matching of size $\frac{n}{2} + \Theta(\varepsilon) \cdot n$ in expectation. The more
challenging task is to tackle the first case. To convey the main idea, we make a series of simplifying assumptions here: $(i)$ all edges in
$B$ are incident on $U$, $(ii)$ each edge in $B$ is incident on exactly one vertex in $U$, and $(iii)$ every vertex in $U$ is incident on
exactly $\frac{1}{p}$ edges of $B$.
Our goal is to identify a large collection of length-three augmenting paths for the matching $M$ using the edges of $B$. To achieve this,
we consider the event that an edge $(u,v)$ in $M$ has a length-three augmenting path $a-u-v-b$ of realized edges of $B$, where $u$ (resp. $v$) is the only realized neighbor of
$a$ (resp. $b$). We say such an edge $(u,v)$ is \emph{successful}. Since the length-three augmenting paths that certify successful edges are
vertex-disjoint by definition, they can all (simultaneously) augment $M$. Consequently, it suffices to lower bound the expected number of
successful edges, or, equivalently, to lower bound the prob. that each edge is successful.
Let us further assume for the moment that $G$ is a bipartite graph. In this case, $u$ and $v$ do not share a common neighbor and we can
consider the neighborhood of $u$ and $v$ separately. The prob. that $u$ has a neighbor $w$ where $u$ is the only neighbor of $w$ (we say
$u$ is successful in this case) is not difficult to bound: enumerate all $1/p$ neighbors $w$ of $u$ and account for the prob. that the
edge $(u,w)$ is realized and the prob. that no other edge incident on $w$ is realized. A similar argument can be made for $v$. Now, the
prob. that $(u,v)$ is successful is simply the product of the prob. that $u$ is successful and the prob. that $v$ is successful.
However, in general (non-bipartite) graphs, $u$ and $v$ might have common neighbors, which results in the event that $u$ is successful being \emph{not independent} of
the event that $v$ is successful. Handling this case requires a more careful argument and analysis. Moreover, recall that in the above
discussion, we made rather strong simplifying assumptions about how the edges in $B$ are distributed across the vertices of $U$. In order
to further remove these assumptions, in the actual analysis, we cast the probability of each edge $(u,v)$ being successful as a function of
the degrees of the vertices $u$ and $v$, and formulate a (non-linear) minimization program to capture the minimum number of possible
successful edges. Finally, we analyze the optimal solution of this minimization program, which allows us to achieve the target lower bound
on the expected increment in the matching size.
\section{Preliminaries}\label{sec:prelim}
\paragraph{Notation.}
For a graph $G(V,E)$, $n$ denotes the number of vertices in $G$. For any $U \subseteq V$, we use $G[U]$ to denote the subgraph of $G$
induced only on vertices in $U$, and use $E[U]$ to denote the set of edges in $G[U]$, i.e., the set of edges with both end points in
$U$. For any two subsets $U,W$ of $V$, we further use $E[U,W]$ to denote the set of edges with one end point in $U$ and another in $W$. For
any $X \subseteq E$, we use $V(X)$ to denote the set of vertices incident on $X$. Finally, we use $\mu(E)$ to denote the maximum matching
size of a set of edges $E$.
When sampling from a set of edges $X$ (resp. a graph $H$) where each edge in $X$ (resp. $H$) is sampled w.p. $p$, we use $X_p$ (resp. $H_p$)
to denote the random variable for the set of sampled edges. We use $\ensuremath{\mbox{\sc opt}}\xspace(G)$ (or shortly $\ensuremath{\mbox{\sc opt}}\xspace$ if the graph $G$ is clear from the context)
to denote the \emph{expected} maximum matching size of a realization of $G$ (i.e., $G_p(V,E_p)$)\footnote{We assume $\ensuremath{\mbox{\sc opt}}\xspace = \omega(1)$ to
obtain the desired concentration bounds (for example in Lemma~\ref{lem:basic-alg}). }. For any algorithm for the stochastic
matching problem, we use $\ensuremath{\mbox{\sc alg}}\xspace$ to denote the expected matching size in a realization of $H$, where $H$ is the subgraph computed by the
algorithm.
\paragraph{b-matchings.} For any graph $G(V,E)$ and any integer $b \geq 1$, a subset $M \subseteq E$ is called a \emph{simple $b$-matching} iff the number of edges of $M$ that are incident on each
vertex is at most $b$. Throughout, we drop the word `simple', and refer to $M$ as a $b$-matching.
We use the following characterization of the maximum $b$-matching size in general graphs (see~\cite{COBook}, Volume A, Chapter 33).
\begin{theorem}\label{thm:b-matching-char}
Let $G(V,E)$ be a graph and $b \geq 1$ be any integer. The maximum size of a $b$-matching is equal to the minimum value of
\begin{align*}
b \cdot \card{U} + \card{E[W]} + \sum_{K} \floor{\frac{1}{2}\Paren{b\cdot \card{K} + \card{E[K,W]}}}
\end{align*}
taken over all disjoint subsets $U,W$ of $V$, where $K$ ranges over all connected components in the graph $G[V - U - W]$.
\end{theorem}
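For concreteness, the characterization can be verified by brute force on a tiny example; the following Python sketch (an illustration only, assuming the networkx package, with an arbitrary small graph and $b = 2$) computes both sides of the equality:
\begin{verbatim}
# Brute-force verification of the b-matching characterization on one
# small graph (requires networkx).
import itertools, math
import networkx as nx

G = nx.cycle_graph(5)
G.add_edge(0, 2)
b = 2
edges = list(G.edges())

def is_b_matching(S):
    deg = {v: 0 for v in G}
    for u, v in S:
        deg[u] += 1; deg[v] += 1
    return max(deg.values()) <= b

max_b = max(len(S) for r in range(len(edges) + 1)
            for S in itertools.combinations(edges, r) if is_b_matching(S))

best = math.inf
for assign in itertools.product('UWN', repeat=G.number_of_nodes()):
    U = [v for v, a in zip(G, assign) if a == 'U']
    W = [v for v, a in zip(G, assign) if a == 'W']
    rest = G.subgraph([v for v, a in zip(G, assign) if a == 'N'])
    val = b * len(U) + G.subgraph(W).number_of_edges()
    for K in nx.connected_components(rest):
        EKW = sum(1 for u in K for w in W if G.has_edge(u, w))
        val += (b * len(K) + EKW) // 2
    best = min(best, val)

print(max_b, '==', best)   # the theorem predicts equality
\end{verbatim}
The exhaustive searches limit this check to very small graphs, but equality holds on such instances, as the theorem predicts.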
\paragraph{Useful inequalities.} We also use the following simple inequalities. The proofs are provided in Appendix~\ref{app:prelim} for completeness.
\begin{proposition}\label{prop:upper-exp}
Let $f(x) := {1 - e^{-x} \over x}$. Then, for any $c > 0$, and any $x \in [0, c]$, $e^{-x} \le 1 - f(c) \cdot x$.
\end{proposition}
\begin{proposition}\label{prop:upper-exp-2}
For any $x \in (0, 0.43]$, $(1-x)^{{1\over x}} \ge {1 - x \over e}$. \footnote{This inequality actually holds for any $x \in [0,1]$. However, as we only need the range $(0,0.43]$
in our proofs and this allows us to provide a simpler proof, we only consider this range.}
\end{proposition}
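Both inequalities are easy to spot-check numerically; the following Python snippet (a grid check, not a proof; the value of $c$ and the grid resolutions are arbitrary) evaluates them on sampled points:
\begin{verbatim}
# Grid spot-check of the two propositions (not a proof).
import math

def f(x):
    return (1 - math.exp(-x)) / x

c = 2.0
assert all(math.exp(-x) <= 1 - f(c) * x + 1e-12
           for x in (k * c / 1000 for k in range(1001)))       # Prop. 1
assert all((1 - x) ** (1 / x) >= (1 - x) / math.e - 1e-12
           for x in (k / 1000 for k in range(1, 431)))         # Prop. 2
print('both inequalities hold on the sampled grids')
\end{verbatim}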
\subsection{\ensuremath{\textsf{MatchingCover}}\xspace Algorithm}\label{sec:tools}
We use the $(0.5-\varepsilon)$-approximation algorithm of~\cite{ECpaper} (Algorithm~3) as a sub-routine. For simplicity, throughout the paper, we
refer to this algorithm as \ensuremath{\textsf{MatchingCover}}\xspace. In the following lemma, we summarize the properties of \ensuremath{\textsf{MatchingCover}}\xspace that we use in this paper. The proof
of this lemma immediately follows from Lemma~3.9 and Lemma~5.2 in~\cite{ECpaper}.
\begin{lemma}[\cite{ECpaper}]\label{lem:basic-alg}
For any graph $G(V,E)$, and any input parameter $\varepsilon > 0$, $\ensuremath{\textsf{MatchingCover}}\xspace(G,\varepsilon)$ outputs a collection of $R$ matchings $M_1, M_2, \ldots, M_R$
(denote $E_{MC} = M_1 \cup M_2 \cup \ldots\cup M_R$), such that, w.p. $1-o(1)$:
\begin{enumerate}
\item The size of a maximum matching among realized edges in $E_{MC}$ is at least $\paren{1-\varepsilon} \card{M_R}$.
\item $\card{M_1} \geq \ldots \geq \card{M_R} \geq (1-\varepsilon)\cdot\mu\paren{E \setminus E_{MC}}$.
\item $R = \Theta(\frac{\log{1/(\varepsilon p)}}{\varepsilon p})$.
\end{enumerate}
\end{lemma}
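A minimal sketch of the repeated-maximum-matching procedure underlying \ensuremath{\textsf{MatchingCover}}\xspace is given below (an illustration only: it assumes the networkx package, omits the sparsification step of~\cite{ECpaper}, and uses \texttt{max\_weight\_matching} as a stand-in for a maximum-cardinality matching routine):
\begin{verbatim}
# Sketch of the repeated-maximum-matching core of MatchingCover
# (requires networkx; sparsification and concentration details omitted).
import math
import networkx as nx

def matching_cover(G, p, eps):
    H = G.copy()
    R = max(1, round(math.log(1 / (eps * p)) / (eps * p)))
    cover = []
    for _ in range(R):
        M = nx.max_weight_matching(H, maxcardinality=True)
        if not M:
            break
        cover.append(set(M))
        H.remove_edges_from(M)   # matchings in the cover are edge-disjoint
    return cover

cover = matching_cover(nx.complete_graph(20), p=0.2, eps=0.1)
print([len(M) for M in cover])   # sizes are non-increasing, as in (2)
\end{verbatim}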
We can also prove the following simple claim based on the second property of \ensuremath{\textsf{MatchingCover}}\xspace in Lemma~\ref{lem:basic-alg}. Roughly speaking, this claim states that if \ensuremath{\textsf{MatchingCover}}\xspace is not able to
extract any further large matching (of size essentially $\ensuremath{\mbox{\sc opt}}\xspace/2$) from $G$, then the set of extracted edges already provides a matching of size essentially $\ensuremath{\mbox{\sc opt}}\xspace/2$ in expectation.
A similar result is proven in~\cite{ECpaper} (see Lemma~5.3); however, since Claim~\ref{clm:small-L} does not follow directly from the results in~\cite{ECpaper}, we
provide a self-contained proof of this claim here.
\begin{claim}\label{clm:small-L}
Fix $0 < \varepsilon < \delta < 1$. Let $G(V,E)$ be a graph, $X$ be any arbitrary subset of $E$, and $(M_1,\ldots,M_R) = \ensuremath{\textsf{MatchingCover}}\xspace(G(V,E\setminus X),\varepsilon)$.
Define $E_{MC} = M_1 \cup \ldots \cup M_R$. If $\card{M_R} \leq \paren{\frac{1}{2}-\delta} \ensuremath{\mbox{\sc opt}}\xspace$, then the expected maximum matching size in a realization
of $G(V, X \cup E_{MC})$ is at least $\paren{\frac{1}{2} + \delta - \varepsilon} \ensuremath{\mbox{\sc opt}}\xspace$.
\end{claim}
\begin{proof}
For each realization of $G_p$, we fix one maximum matching. Now the expected matching size in $G_p$
can be written as
\begin{align*}
{\ensuremath{\mbox{\sc opt}}\xspace} = \sum_M\prob{\text{$M$ is the fixed maximum matching in $G_p$}} \cdot \card{M}
\end{align*}
By property~(2) of \ensuremath{\textsf{MatchingCover}}\xspace in Lemma~\ref{lem:basic-alg}, the maximum matching size in the graph $G(V,E \setminus (X \cup E_{MC}))$ is at most $(1+\varepsilon)\card{M_R}$.
Therefore, for any matching $M$, at most $(1+\varepsilon)\card{M_R}$ edges of $M$ are in $E \setminus (X \cup E_{MC})$, and hence at least $\card{M} - (1+\varepsilon)\card{M_R}$ edges
of $M$ are in $X \cup E_{MC}$. This implies that if all edges in $M$ are
realized, a matching of size at least $\card{M} - (1+\varepsilon)\card{M_R}$ is realized in $G(V, X \cup E_{MC})$. Let $\ensuremath{\mbox{\sc alg}}\xspace$ be the expected maximum matching size in $G(V,X \cup E_{MC})$; we have,
\begin{align*}
\ensuremath{\mbox{\sc alg}}\xspace &\ge \sum_M\prob{\text{$M$ is the fixed maximum matching in $G_p$}} (\card{M} - (1+\varepsilon)\card{M_R})\\
&={\ensuremath{\mbox{\sc opt}}\xspace} - (1+\varepsilon)\card{M_R}
\end{align*}
Since $\card{M_R} \le (1/2 - \delta){\ensuremath{\mbox{\sc opt}}\xspace}$, we have,
\[ \ensuremath{\mbox{\sc alg}}\xspace \ge {\ensuremath{\mbox{\sc opt}}\xspace} - (1+\varepsilon)\cdot(1/2 - \delta) \cdot {\ensuremath{\mbox{\sc opt}}\xspace} \geq (1/2 + \delta - \varepsilon) \cdot {\ensuremath{\mbox{\sc opt}}\xspace}\]
which concludes the proof.
\end{proof}
| {
"timestamp": "2017-05-08T02:08:23",
"yymm": "1705",
"arxiv_id": "1705.02280",
"language": "en",
"url": "https://arxiv.org/abs/1705.02280",
"abstract": "In the stochastic matching problem, we are given a general (not necessarily bipartite) graph $G(V,E)$, where each edge in $E$ is realized with some constant probability $p > 0$ and the goal is to compute a bounded-degree (bounded by a function depending only on $p$) subgraph $H$ of $G$ such that the expected maximum matching size in $H$ is close to the expected maximum matching size in $G$. The algorithms in this setting are considered non-adaptive as they have to choose the subgraph $H$ without knowing any information about the set of realized edges in $G$. Originally motivated by an application to kidney exchange, the stochastic matching problem and its variants have received significant attention in recent years.The state-of-the-art non-adaptive algorithms for stochastic matching achieve an approximation ratio of $\\frac{1}{2}-\\epsilon$ for any $\\epsilon > 0$, naturally raising the question that if $1/2$ is the limit of what can be achieved with a non-adaptive algorithm. In this work, we resolve this question by presenting the first algorithm for stochastic matching with an approximation guarantee that is strictly better than $1/2$: the algorithm computes a subgraph $H$ of $G$ with the maximum degree $O(\\frac{\\log{(1/ p)}}{p})$ such that the ratio of expected size of a maximum matching in realizations of $H$ and $G$ is at least $1/2+\\delta_0$ for some absolute constant $\\delta_0 > 0$. The degree bound on $H$ achieved by our algorithm is essentially the best possible (up to an $O(\\log{(1/p)})$ factor) for any constant factor approximation algorithm, since an $\\Omega(\\frac{1}{p})$ degree in $H$ is necessary for a vertex to acquire at least one incident edge in a realization.",
"subjects": "Data Structures and Algorithms (cs.DS)",
"title": "The Stochastic Matching Problem: Beating Half with a Non-Adaptive Algorithm",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717484165853,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449322265791
} |
https://arxiv.org/abs/1905.03778 | Splitting hairs with transcendental entire functions | In recent years, there has been significant progress in the understanding of the dynamics of transcendental entire functions with bounded postsingular set. In particular, for certain classes of such functions, a complete description of their topological dynamics in terms of a simpler model has been given inspired by methods from polynomial dynamics. In this paper, and for the first time, we give analogous results in cases when the postsingular set is unbounded. More specifically, we show that if $f$ is of finite order, has bounded criticality on its Julia set $J(f)$, and its singular set consists of finitely many critical values that escape to infinity and satisfy a certain separation condition, then $J(f)$ is a collection of dynamic rays or hairs, that split at critical points, together with their corresponding landing points. In fact, our result holds for a much larger class of functions with bounded singular set. Moreover, this result is a consequence of a significantly more general one: we provide a topological model for the action of $f$ on its Julia set. | \section{Introduction}
For a polynomial $p$ of degree $d \geq 2$, Böttcher's Theorem provides a conjugacy between $p$ and the simpler map $z \mapsto z^{d}$ in a neighbourhood of infinity. Whenever all the orbits of the critical points of $p$ are bounded (or equivalently when its Julia set $J(p)$ is connected), this conjugacy can be extended to a biholomorphic map between ${\mathbb{C}} \setminus \overline{{\mathbb{D}}}$ and the basin of infinity of $p$. In particular, it allows us to define \textit{dynamic rays} for $p$ as the curves that arise as preimages of radial rays from $\partial {\mathbb{D}}$ to $\infty$ under this conjugacy, and provide a natural foliation of the set of points of $p$ that escape to infinity under iteration. Whenever $J(p)$ is locally connected, each ray has a unique accumulation point in $J(p)$, and we say that the ray \textit{lands}. This limiting behaviour of dynamic rays has been used with great success to provide a combinatorial description of the dynamics of $p$ in $J(p)$. For example, in this situation, Douady \cite{douady_pinchedmodel} constructed a \textit{topological model} for $J(p)$ as a ``pinched disc'', that is, as the quotient of $\partial{\mathbb{D}}$ by a natural equivalence relation.
Since, for a transcendental entire map $f$, infinity is an essential singularity, Böttcher's Theorem no longer applies. Still, it is natural to ask about the existence of dynamic rays, their limiting behaviour, and, more generally, topological models for the dynamics of $f$ in $J(f)$. Answers to these questions depend largely on its \emph{singular set} $S(f)$, that is, the closure of the set of its critical and asymptotic values, as well as on its \emph{postsingular set} $P(f) \defeq\overline{\bigcup_{n\geq 0}f^n (S(f))}$. In fact, in this paper, we restrict ourselves to the widely studied \emph{Eremenko-Lyubich class}~$\mathcal{B}$, consisting of all transcendental entire functions with bounded singular set, \cite{eremenkoclassB}. Then, it is known that if $f\in {\mathcal{B}}$ is a finite composition $f=f_k\circ\cdots\circ f_1$ of functions of \textit{finite order}, i.e., such that $\log \log \vert f_i(z)\vert=O(\log \vert z \vert)$ as $\vert z \vert \rightarrow \infty$ for each $i$, then every point in its \textit{escaping set}
\[I(f)\defeq \{z\in{\mathbb{C}} : f^n(z)\to \infty \text{ as } n\to \infty\}\]
can be connected to infinity by an escaping curve, subsequently called \textit{dynamic ray} by analogy with the polynomial case, \cite{Baranski_Trees, RRRS}. In particular, these functions are \textit{criniferous}. More precisely, we adopt \cite[Definition 2.2]{RRRS} and \cite[Definition~1.2]{lasse_dreadlocks}:
\begin{defn}[Dynamic rays, criniferous maps]\label{def_ray}
Let $f$ be a transcendental entire function. A \emph{ray tail} of $f$ is an injective curve $\gamma :[t_0,\infty)\rightarrow I(f)$, with $t_0>0$, such that
\begin{itemize}
\item for each $n\geq 1$, $t \mapsto f^{n}(\gamma(t))$ is injective with $\lim_{t \rightarrow \infty} f^{n}(\gamma(t))=\infty$;
\item $f^{n}(\gamma(t))\rightarrow \infty$ uniformly in $t$ as $n\rightarrow \infty$.
\end{itemize}
A \emph{dynamic ray} of $f$ is a maximal injective curve $\gamma :(0,\infty)\rightarrow I(f)$ such that the restriction $\gamma_{|[t,\infty)}$ is a ray tail for all $t > 0$. We say that $\gamma$ \emph{lands} at $z$ if $\lim_{t \rightarrow 0^+} \gamma(t)=z$, and we call $z$ the \emph{endpoint} of $\gamma$. Moreover, we say that $f$ is \emph{criniferous} if for every $z\in I(f)$, there is $N\defeq N(z)\in {\mathbb{N}}$ so that $f^n(z)$ is in a ray tail for all $n\geq N$.
\end{defn}
We note that, as occurs for the maps in the exponential family whose asymptotic value escapes, \cite{lasse_nonlanding}, the accumulation set of a dynamic ray might be topologically rather complicated, and, in particular, need not be a point. In fact, until now a complete topological description of the Julia set as a collection of dynamic rays that land has been achieved only for certain transcendental functions whose postsingular set is bounded, e.g. \cite{dierkParadox,lasseBrushing, lasseRidigity, helenaSemi, mashael}. One of the challenges that a transcendental entire map $f$ presents is that, unlike for polynomials, $J(f)$ contains a large set of points whose orbits are neither bounded nor escaping \cite{dave_osb_bungee}. Moreover, for polynomials with escaping singular values, dynamic rays can be extended when they hit critical points using Green's function in a natural way, \cite{Goldberg_Milnor,kiwi_rationalrays}. However, with the essential singularity at infinity, we encounter very different dynamics for a transcendental map $f$, and, a priori, it is not obvious what to expect concerning dynamic rays when $P(f)$ is unbounded, not even when $P(f) \subset I(f)$. Our first result gives some answers to this question. Recall that an entire function $f$ has \textit{bounded criticality} on $J(f)$ if $J(f)$ contains no asymptotic values of $f$ and there is a uniform upper bound on the local degree of $f$ at the critical points in $J(f)$.
\begin{thm}[Landing of rays for functions with escaping singular orbits] \label{thm_1intro} Let $f\in {\mathcal{B}}$ be a finite composition of functions of finite order. Suppose that $S(f)$ is a finite collection of critical values that escape to infinity, $f$ has bounded criticality on $J(f)$, and there exists $\epsilon>0$ so that $\vert w-z\vert \geq \epsilon\max\{\vert z \vert, \vert w \vert\}$ for all distinct $z,w \in P(f)$. Then, every dynamic ray of $f$ lands, and every point in $J(f)$ is either on a dynamic ray or is the landing point of one such ray.
\end{thm}
The hypotheses in Theorem \ref{thm_1intro} will be discussed later, but first we note that for any $f$ satisfying them, $J(f)={\mathbb{C}}$. Moreover, since $I(f)$ contains critical values, dynamic rays \textit{split} at critical points. This can be illustrated with $f(z)=\cosh(z)$. In this case, $S(f)=\operatorname{CV}(f)=\{-1, 1\}$, and $P(f)$ equals $S(f)$ together with the orbit of $f(-1)=f(1)$, which consists of a sequence of positive real points converging to infinity at an exponential rate. By \cite{dierkCosine}, $I(f)$ is a union of dynamic rays. Note that $0$ is a critical point, and it is easy to check that $(-\infty, 0]$ and $[0, \infty)$ are both ray tails. The vertical segments $[0, -i\pi/2]$ and $[0, i\pi/2]$ are mapped univalently to $[0,1]\subset {\mathbb{R}}^+$, and thus the union of each segment with either one of the ray tails $(-\infty, 0]$ and $[0, \infty)$ forms a different ray tail. We can think of this structure as four ray tails that partially overlap pairwise.
Their endpoints $-i\pi/2$ and $i\pi/2$ are preimages of $0$, and so the structure described has a preimage attached to each of them, see Figure \ref{fig:rays_cosh}. This leads again to two possible extensions of each ray tail. We show in \cite{mio_signed_addr} that for criniferous maps with escaping critical values, such extensions can be made in a systematic and dynamically meaningful way that leads to a foliation of their escaping sets into piecewise-overlapping rays. The additional assumptions in Theorem~\ref{thm_1intro} guarantee that such rays always land; see \cite{mio_cosine} for more details on the dynamics of~$\cosh$.
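For the reader's convenience, we record the elementary computations behind this example; these are standard facts about $\cosh$ rather than results from the cited works. The derivative $f'(z)=\sinh(z)$ vanishes exactly at the points $z=\pi i k$ with $k\in{\mathbb{Z}}$, and
\[ f(\pi i k)=\cos(\pi k)=(-1)^{k}, \]
so that indeed $\operatorname{CV}(f)=\{-1,1\}$. On the imaginary axis, $f(it)=\cosh(it)=\cos(t)$ decreases from $1$ to $0$ as $t$ runs through $[0,\pi/2]$; since $\cos$ is even, both segments $[0,\pm i\pi/2]$ are mapped univalently onto $[0,1]$. Finally, $f$ maps $[0,\infty)$ increasingly onto $[1,\infty)$, and $\cosh(x)\geq e^{x}/2>x$ for $x\geq 0$, so the iterates $f^{n}$ tend to infinity uniformly on $[t_0,\infty)$ for every $t_0\geq 0$; this confirms that $[0,\infty)$, and by symmetry $(-\infty,0]$, is a ray tail.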
\begin{figure}[htb]
\centering
\resizebox{0.6\textwidth}{!}{\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\ifx\svgwidth\undefined%
\setlength{\unitlength}{368.50393066bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{\svgwidth}%
\fi%
\global\let\svgwidth\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.69230772)%
\put(0,0){\includegraphics[width=\unitlength]{cosh_split.pdf}}%
\put(0.51311181,0.21003289){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\fontsize{9pt}{1em} -i\frac{\pi}{2}$}}}%
\put(0.51591789,0.35817888){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$0$}}}%
\put(0.51520385,0.52821125){\color[rgb]{0,0,0}\makebox(0,0)[lb]{\smash{$\fontsize{9pt}{1em} i\frac{\pi}{2}$}}}%
\end{picture}%
\endgroup}
\caption[Caption for LOF]{In colour, some ray tails of $f(z)=\cosh(z)$. Both the red and dark-blue ones are mapped into themselves, and the remaining coloured tails are first and second preimages of these tails. Further iterated preimages, some depicted in grey, lead to further extensions of these tails.\protect\footnotemark}
\label{fig:rays_cosh}
\end{figure}
\footnotetext{Original picture by L. Rempe, modified for this paper. It first appeared in \cite[p.~163]{mio_thesis}.}
Theorem \ref{thm_1intro} is a consequence of a more general result: in analogy to Douady's ``Pinched Disc Model'' for polynomials, we construct a topological model for the dynamics of any function satisfying its hypotheses. So far, all existing models for transcendental entire maps have concerned functions in ${\mathcal{B}}$ with bounded postsingular set. The seminal work in this direction is \cite{AartsOversteegen}, where it is shown that the Julia set of certain exponential and sine maps is homeomorphic to a topological object known as \textit{straight brush} (see Definition \ref{def_brush}). When this occurs, the Julia set is said to be a \textit{Cantor bouquet}; compare with \cite{Bhattacharjee_exp, lasseTopExp} for other parameters in the exponential family. Subsequently, it was shown in \cite{lasseBrushing} that if $f$ is a finite composition of functions of finite order and of \textit{disjoint type}, i.e., with connected Fatou set $F(f)$ and such that $P(f) \Subset F(f)$, then $J(f)$ is a Cantor bouquet.
Understanding the dynamics of disjoint type functions is particularly useful, since if $f\in {\mathcal{B}}$, then for $\lambda \in {\mathbb{C}}$ with $\vert \lambda \vert$ small enough, the function $\lambda f$ is of disjoint type. In particular, $\lambda f$ is in the \textit{parameter space} of $f$, that is, $f$ and $\lambda f$ are quasiconformally equivalent. This implies that their dynamics near infinity are related by a certain analogue of Böttcher's Theorem for transcendental maps \cite{lasseRidigity}. One might regard disjoint type functions as having the simplest dynamics among those in the parameter space of $f$, and in particular, they play an analogous role for $f$ as $z\mapsto z^d$ does for a polynomial of degree~$d$. If, in addition, $f$ is of finite order, so is any disjoint type map $g\defeq \lambda f$, and by the result in \cite{lasseBrushing} mentioned earlier, $J(g)$ is a Cantor bouquet. This is used in \cite{lasseRidigity} to show that when $f$ is hyperbolic, $g\vert _{J(g)}$ and $f\vert _{J(f)}$ are conjugate. After identifying some endpoints of $J(g)$, this result is extended in \cite{helenaSemi} to \textit{strongly subhyperbolic} maps. See also \cite{mashael} for a further generalization.
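As an illustration, which is ours rather than taken from the cited works, one can verify the disjoint type condition using the standard fact that $f$ is of disjoint type whenever there is a bounded Jordan domain $D\supset S(f)$ with $f(\overline{D})\subset D$. For $f=\cosh$ and $0<\vert\lambda\vert\leq 1/3$, we have, for all $\vert z\vert\leq 1/2$,
\[ \vert \lambda \cosh(z)\vert \;\leq\; \vert\lambda\vert \cosh(\vert z \vert)\;\leq\; \tfrac{1}{3}\cosh\bigl(\tfrac{1}{2}\bigr)\;<\;\tfrac{1}{2}, \]
so $\lambda\cosh$ maps $\overline{{\mathbb{D}}_{1/2}}$ into ${\mathbb{D}}_{1/2}$, which contains $S(\lambda\cosh)=\{-\lambda,\lambda\}$; hence $\lambda\cosh$ is of disjoint type.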
The function $f(z) = \cosh(z)$ illustrates the fact that functions satisfying the hypotheses of Theorem \ref{thm_1intro} may have critical values in their dynamic rays. It follows that any model that describes their dynamics must reflect the splitting of ray tails at critical points. In particular, since, roughly speaking, a Cantor bouquet is a collection of disjoint curves, called \textit{hairs}, $J(f)$ can no longer be a Cantor bouquet, and so for $g\defeq \lambda f$ of disjoint type, $g\vert_{J(g)}$ and $f\vert_{J(f)}$ can no longer be conjugate. However, the analysis performed on the map $\cosh$, where each ray tail ``splits into two'' at critical points, suggests considering two copies of each hair of $J(g)$, and mapping each copy to one of the two possible extensions. With that aim, we build a topological model for $f\vert_{J(f)}$ in the following way: we consider the set $J(g)_\pm \defeq J(g)\times \{-,+\}$, which is endowed with a topology that preserves the order of rays at infinity (see \S\ref{sec_model}), and call this a \textit{model space} for $f$. Then, we define an \textit{associated model function} $\tilde{g} \colon J(g)_\pm \rightarrow J(g)_\pm$ to act as $g$ on the first coordinate, and as the identity on the second. We let $I(g)_{\pm}\defeq I(g)\times \{ -,+\}$ and show the following:
\begin{thm}[Semiconjugacy to model space]\label{thm_main_intro1}Let $f$ be as in Theorem \ref{thm_1intro}, let $J(g)_\pm$ be a model space for $f$, and let $\tilde{g}$ be its associated model function. Then, there exists a continuous surjective function $\varphi:J(g)_{\pm} \rightarrow J(f)$ so that $f\circ\varphi = \varphi\circ \tilde{g}$. Moreover, $\varphi(I(g)_{\pm})=I(f)$.
\end{thm}
In fact, we are able to prove significantly stronger results than those of Theorems \ref{thm_1intro} and \ref{thm_main_intro1}. However, they require some rather technical definitions, which is why we started with the more straightforward results of Theorems \ref{thm_1intro} and \ref{thm_main_intro1}. Firstly, we note that the conditions on $P(f)$ ensure that $f$ expands an \textit{orbifold metric} defined on a neighbourhood of $J(f)$. We prove this result in \cite{mio_orbifolds} for a much larger class of maps. More precisely, we say that $f\in {\mathcal{B}}$ is \textit{strongly postcritically separated} if $P(f)\cap F(f)$ is compact, $f$ has bounded criticality on $J(f)$, there is a uniform bound on the number of critical points in the orbit of any $z\in J(f)$, and there is $\epsilon>0$ so that for any distinct $z,w\in P_J\defeq P(f)\cap J(f)$, $\vert z-w\vert\geq \epsilon \max\{\vert z \vert, \vert w \vert\}$. In particular, in addition to escaping critical values, the Julia set of a strongly postcritically separated map may contain preperiodic postsingular points; see \S \ref{sec_postsep}.
Secondly, as mentioned earlier, the assumption that $f\in {\mathcal{B}}$ is a finite composition of functions of finite order guarantees that $f$ is criniferous, and implies that $J(\lambda f)$ is a Cantor bouquet whenever $\lambda f$ is of disjoint type. In \cite{mio_thesis,mio_newCB}, we introduce the more general class ${\mathcal{CB}}$ of all $f \in {\mathcal{B}}$ so that $J(\lambda f)$ is a Cantor bouquet for $\vert \lambda \vert$ sufficiently small. As an extension of \cite[Theorem 1.2]{RRRS}, it is shown in \cite{mio_newCB} that all maps in ${\mathcal{CB}}$ are \textit{criniferous}. The model from Theorem \ref{thm_main_intro1} only requires that $J(\lambda f)$ be a Cantor bouquet when $\vert \lambda \vert$ is small enough. Consequently, the same model can be defined for any function in ${\mathcal{CB}}$.
We are now able to state the main result of this paper, from which our other results follow quickly. This concerns all functions in ${\mathcal{CB}}$ that are also strongly postcritically separated.
\begin{thm}[Semiconjugacy to model space]\label{thm_main_intro}Let $f \in \mathcal{CB}$ be strongly postcritically separated, let $J(g)_\pm$ be a model space for $f$ and let $\tilde{g}$ be its associated model function. Then, there exists a continuous surjective function
\begin{equation}\label{eq_phi_intro}
\varphi:J(g)_{\pm} \rightarrow J(f)
\end{equation}
so that $f\circ\varphi = \varphi\circ \tilde{g}$. Moreover, $\varphi(I(g)_{\pm})=I(f)$.
\end{thm}
Since all functions considered in Theorem \ref{thm_main_intro1} are in $\mathcal{CB}$ and strongly postcritically separated, Theorem \ref{thm_main_intro1} follows from Theorem \ref{thm_main_intro}. Likewise, Theorem \ref{thm_1intro} is a consequence of the following corollary:
\begin{cor}[Landing of rays for strongly postcritically separated functions in ${\mathcal{CB}}$] \label{cor_intro} Under the assumptions of Theorem \ref{thm_main_intro}, every dynamic ray of $f$ lands, and every point in $J(f)$ is either on a dynamic ray, or is the landing point of at least one such ray.
\end{cor}
\begin{structure}
Sections \ref{sec_symbolic}, \ref{sec_CB} and \ref{sec_postsep} respectively summarize the concepts and results on symbolic dynamics, the class ${\mathcal{CB}}$ and strongly postcritically separated maps developed in \cite{mio_signed_addr, mio_orbifolds, mio_newCB} and required in this paper. In Section \ref{sec_model}, we define and study the properties of the topological model $J(g)_\pm$. Finally, in \S\ref{sec_semiconj} we combine the tools from previous sections to construct the semiconjugacy from Theorem \ref{thm_main_intro} and prove Corollary \ref{cor_intro}.
\end{structure}
\subsection*{Basic notation}
As introduced throughout this section, the Fatou, Julia and escaping set of an entire function $f$ are denoted by $F(f)$, $J(f)$ and $I(f)$ respectively. The set of critical values is $\operatorname{CV}(f)$, that of asymptotic values is $\operatorname{AV}(f)$, and the set of critical points is denoted $\operatorname{Crit}(f)$. The set of singular values of $f$ is $S(f)$, and $P(f)$ denotes the postsingular set. Moreover, $P_{J}\defeq P(f)\cap J(f)$. We denote the complex plane by ${\mathbb{C}}$, the Riemann sphere by $\widehat{{\mathbb{C}}}$, and the disc centred at zero with radius $R$ by ${\mathbb{D}}_R$. We will indicate the closure of a domain $U$ by $\overline{U}$, which must be understood to be taken in ${\mathbb{C}}$. $A\Subset B$ means that $A$ is compactly contained in $B$. The annulus with radii $a,b \in {\mathbb{R}}^+$, $a<b$, will be denoted by $A(a,b)\defeq\lbrace w\in {\mathbb{C}} : a< \vert w \vert < b \rbrace$. For a holomorphic function $f$ and a set $A$, $\operatorname{Orb}^{-}(A)\defeq \bigcup^{\infty}_{n=0} f^{-n}(A)$ and $\operatorname{Orb}^{+}(A)\defeq \bigcup^{\infty}_{n=0} f^{n}(A)$ are the respective backward and forward orbit of $A$ under $f$.
\subsection*{Acknowledgements}
I am very grateful to my supervisors Lasse Rempe and Dave Sixsmith for their continuous help and advice. I also thank Vasiliki Evdoridou, Daniel Meyer and Phil Rippon for valuable comments.
\section{Combinatorics: signed addresses and inverse branches }\label{sec_symbolic}
This section summarizes the combinatorial concepts and results developed in \cite{mio_signed_addr} on criniferous maps with escaping critical values that we shall require. We start by recalling the widely used notion of \textit{external address} for functions in ${\mathcal{B}}$, which allows us to assign symbolic dynamics to points whose orbit stays away from a neighbourhood of their singular set.
\begin{defn}[Tracts, fundamental domains]\label{def_fund}
Fix $f\in{\mathcal{B}}$ and let $D$ be a bounded Jordan domain around the origin, containing $S(f)$ and $f(0)$. Each connected component of $f^{-1}({\mathbb{C}}\setminus \overline{D})$ is a \emph{tract} of $f$, and $\mathcal{T}_f$ denotes the set of all tracts. Let $\delta$ be an arc connecting a point of $\overline{D}$ to infinity in the complement of the closure of the tracts. Denote
\begin{equation}\label{eq_deffund}
\mathcal{W} \defeq {\mathbb{C}}\setminus( \overline{D}\cup\delta).
\end{equation}
Each connected component of $f^{-1}(\mathcal{W})$ is a \emph{fundamental domain} of $f$, and we call the collection of all of them an \textit{alphabet of fundamental domains}, that we denote $\mathcal{A}(D,\delta)$. Moreover, for each $F\in \mathcal{A}(D,\delta)$, $\unbdd{F}$ is the unbounded connected component of $F\setminus \overline{D}$.
\end{defn}
\begin{defn}[External addresses] \label{def_extaddr}
Let $f\in{\mathcal{B}}$ and let $\mathcal{A}(D,\delta)$ be an alphabet of fundamental domains. An \emph{(infinite) external address} is a sequence ${\underline s}= F_0 F_1 F_2 \dots$ of elements in $\mathcal{A}(D,\delta)$. Moreover, for each external address ${\underline s}$, we let
\begin{equation}\label{eq_Js}
J_{{\underline s}}\defeq\left\{z\in{\mathbb{C}}\colon f^n(z)\in\unbdd{F_n} \text{ for all $n\geq 0$}\right\},
\end{equation}
and denote by $\operatorname{Addr}(f)$ the set of all ${\underline s}$ for which $J_{{\underline s}}$ is non-empty. If $z\in J_{\underline s}$ for some ${\underline s} \in \operatorname{Addr}(f)$, then we say that $z$ \textit{has (external) address} ${\underline s}.$ Moreover, $\sigma$ stands for the one-sided \emph{shift operator} on external addresses; that is, $\sigma(F_0 F_1 F_2\ldots ) = F_1F_2 \ldots$.
\end{defn}
\begin{observation}[Points with external address]\label{obs_addrz} Following the definition above, the sets in \eqref{eq_Js} lie entirely in $J(f)$, see \cite[Lemma 2.6]{lasse_dreadlocks}. Since these sets are by definition pairwise disjoint, whenever it is defined, the external address of a point is unique. Moreover, it follows from \cite[Proposition 2.8]{helenaSemi} that in the special case when $f$ is of disjoint type, all points in $J(f)$ have an external address. That is,
\begin{equation}\label{eq_continuadisj}
f \text { is of disjoint type }\quad \Rightarrow \quad J(f)=\bigcup_{{\underline s}\in \operatorname{Addr}(f)}J_{\underline s}.
\end{equation}
\end{observation}
However, not all escaping points of every function in ${\mathcal{B}}$ have an external address, as in particular occurs for $f\in{\mathcal{B}}$ with singular values in $I(f)$. This problem is resolved in \cite{mio_signed_addr} for criniferous maps in ${\mathcal{B}}$ with escaping critical values. More precisely, recall from the introduction that whenever a ray tail contains a critical value, some components of its preimage contain critical points; these components can be interpreted as tails that \textit{split} or \textit{break} at the critical points, and they can be extended so as to overlap pairwise. In order to assign dynamically meaningful combinatorics to these points, we introduce the concept of \textit{signed address}:
\begin{discussion}[Space of signed addresses]\label{discussion_signed_addr}
Let $f\in {\mathcal{B}}$, and let $\operatorname{Addr}(f)$ be a set of external addresses defined from an alphabet of fundamental domains $\mathcal{A}(D,\delta)$. Consider the set
$$\operatorname{Addr}(f)_\pm \defeq \operatorname{Addr}(f) \times \{-,+\},$$
that we shall endow with a topology. There is a natural \emph{cyclic order} on the alphabet $\mathcal{A}(D,\delta)$ together with the curve $\delta$: if $X, Y, Z\in \mathcal{A}(D,\delta) \cup \{\delta\}$, then we write
\begin{equation}\label{eq_orderinfty}
[X,Y,Z]_{\infty} \: \: \Leftrightarrow \: \: Y \text{ tends to infinity between }X\text{ and } Z \text{ in positive orientation.}
\footnote{See \cite[13. Appendix]{lasse_dreadlocks} for details on the existence of a cyclic order on any pairwise disjoint collection of unbounded, closed, connected subsets of ${\mathbb{C}}$, none of which separates the plane.}
\end{equation}
From this cyclic order, it is possible to define a \textit{lexicographical order} on the set $\operatorname{Addr}(f)$: we first define a linear order on the set of fundamental domains by ``cutting'' at $\delta$ as follows: $$F < \tilde{F} \quad \text{ if and only if } \quad [\delta, F, \tilde{F}]_\infty.$$
Then, the set of fundamental domains becomes totally ordered, and this order gives rise to a lexicographical order ``$<_{_\ell}$'' on external addresses, defined in the usual sense.
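That is, two distinct addresses are compared at the first index at which they differ: for instance, if $F_1<\tilde{F}_1$, then $F_0 F_1 F_2\ldots <_{_\ell} F_0 \tilde{F}_1 \tilde{F}_2\ldots$ regardless of the later entries.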
Let us give the set $\lbrace -,+\rbrace$ the order $\lbrace -\rbrace \prec\lbrace +\rbrace$. Define the linear order on $\operatorname{Addr}(f)_\pm$
\begin{equation} \label{eq_linear_addr}
(\underline{s}, \ast )<_{_A} (\underline{\tau}, \star) \qquad \text{ if and only if } \qquad \underline{s} <_{_\ell} \underline{\tau} \quad \text{ or } \quad \underline{s} =_{_\ell} \underline{\tau}\: \text{ and } \: \ast \prec \star,
\end{equation}
where the symbols ``$\ast, \star$'' denote generic elements of $\lbrace-, +\rbrace.$
This linear order gives rise to a cyclic order: for $a,x,b \in \operatorname{Addr}(f)_\pm$,
\begin{equation*} \label{eq_orderAfpm}
[a,x,b]_{_A} \quad \text{if and only if} \quad a<_{_A} x<_{_A} b \quad \text{ or } \quad x <_{_A}b <_{_A} a \quad \text{ or }\quad b <_{_A} a <_{_A} x.
\end{equation*}
The cyclic order allows us to provide the set $\operatorname{Addr}(f)_\pm$ with a topology $\tau_A$: given two different elements $({\underline s},\ast),(\underline{\tau},\star) \in \operatorname{Addr}(f)_{\pm}$, we define the \textit{open interval} from $({\underline s},\ast)$ to $(\underline{\tau},\star)$, denoted by $(({\underline s},\ast),(\underline{\tau},\star))$, as the set of all signed addresses $(\underline{\alpha}, \cdot)\in \operatorname{Addr}(f)_\pm$ such that $[({\underline s},\ast), (\underline{\alpha}, \cdot), (\underline{\tau},\star)]_{A}$. The collection of all these open intervals forms a base for the \textit{cyclic order topology.\footnote{In particular, the open sets in this topology happen to be exactly those ones which are open in every compatible linear order.}}
\end{discussion}
\begin{defn}[Signed external addresses for criniferous functions]\label{defn_signedaddr} Let $f\in {\mathcal{B}}$ be a criniferous function and let $(\operatorname{Addr}(f)_\pm, \tau_A)$ be as in \ref{discussion_signed_addr}. A \emph{signed (external) address} for $f$ is any element of $\operatorname{Addr}(f)_\pm$.
\end{defn}
Our next goal is to define signed addresses for all escaping points of certain criniferous functions. Since for criniferous maps in ${\mathcal{B}}$, the sets in \eqref{eq_Js} contain unbounded curves, see \cite[Theorem 2.12]{mio_signed_addr}, we will achieve our task by extending them in a careful and systematic way. The following definition specifies which sub-curves can be used to perform such extensions:
\begin{defn}[Initial configuration of tails]\label{def_initial_conf} Let $f\in {\mathcal{B}}$ be such that for each ${\underline s} \in \operatorname{Addr}(f)$, there exists a curve $\gamma^0_{\underline s}\subset J_{\underline s}$ that is either a ray tail, or a dynamic ray possibly with its endpoint. The set of curves $\{\gamma^0_{\underline s}\}_{{\underline s} \in \operatorname{Addr}(f)}$ is a \emph{valid initial configuration} for $f$ if, for each ${\underline s}\in \operatorname{Addr}(f)$, $f(\gamma^0_{\underline s}) \subset \gamma^0_{\sigma({\underline s})}$ and
\begin{equation}\label{eq_S0}
I(f) \subset \bigcup_{{\underline s} \in \operatorname{Addr}(f)} \bigcup_{n\geq 0} f^{-n}(\gamma^0_{\underline s}) \eqdef \mathcal{S}.
\end{equation}
\end{defn}
The next theorem, which gathers \cite[Definition 3.5, Theorem 3.8 and Observation~3.13]{mio_signed_addr}, tells us that the escaping sets of the functions under consideration can be described as a collection of rays, which we call \textit{canonical}, indexed by signed addresses. For a more detailed description of the construction and overlapping of these curves, we refer to \cite[\S 3]{mio_signed_addr}, or \cite[\S5]{mio_cosine} for the maps $\cosh$ and $\cosh^2$.
\begin{thm}[Canonical rays]\label{thm_signed} \normalfont Let $f\in {\mathcal{B}}$ be criniferous such that $J(f)\cap \operatorname{AV}(f)=\emptyset$. Let $\{\gamma^0_{\underline s}\}_{{\underline s} \in \operatorname{Addr}(f)}$ be a valid initial configuration for $f$. Then, for each $({\underline s}, \ast) \in \operatorname{Addr}(f)_\pm$, there exists a curve $\Gamma({\underline s}, \ast)$, that is either a ray tail or a dynamic ray possibly with its endpoint, and that can be written as a nested union
\begin{equation}\label{eq_canonical}
\Gamma({\underline s}, \ast)= \bigcup_{n\geq 0}\gamma^n_{({\underline s}, \ast)},
\end{equation}
satisfying:
\begin{enumerate}[label=(\alph*)]
\item $\gamma^0_{({\underline s},-)}\defeq\gamma^0_{({\underline s},+)}\defeq\gamma^0_{\underline s}\subset J_{\underline s}$;
\item for all $n\geq 1$, $\gamma^{n-1}_{({\underline s}, \ast)} \subseteq \gamma^{n}_{({\underline s}, \ast)}$ and $f \colon \gamma^{n}_{({\underline s}, \ast)} \to \gamma^{n-1}_{(\sigma({\underline s}), \ast)}$ is a bijection.
\label{item:tails_bijection}
\end{enumerate}
We call any sub-curve of those in \eqref{eq_canonical} \emph{canonical}. Landing of all canonical rays implies landing of all dynamic rays in $J(f)$, and
$$\bigcup_{({\underline s}, \ast)\in \operatorname{Addr}(f)_\pm}\Gamma({\underline s}, \ast) \supset \mathcal{S} \supset I(f),$$
where $\mathcal{S}$ is the set from \eqref{eq_S0}.
\end{thm}
\begin{defn}[Signed addresses for escaping points]\label{def_addrpm} Following Theorem~\ref{thm_signed}, for each $z\in \mathcal{S} \supset I(f)$, we say that $z$ has \emph{signed (external) address} $({\underline s}, \ast)$ if $ z\in \Gamma({\underline s}, \ast)$, and we denote by $\operatorname{Addr}(z)_\pm$ the set of all signed addresses of $z$. By \cite[Proposition 3.9 and Observation 3.11]{mio_signed_addr}, for each $z\in \mathcal{S}\supset I(f)$,
\begin{equation}\label{eq_singed_adddr}
\# \operatorname{Addr}(z)_\pm= 2 \prod^{\infty}_{j=0}\deg(f,f^{j}(z))<\infty.
\end{equation}
\end{defn}
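To illustrate \eqref{eq_singed_adddr} with the example from the introduction (this computation is ours): for $f(z)=\cosh(z)$, the critical point $0$ escapes, $\deg(f,0)=2$, and $f^{j}(0)\in[1,\infty)$ is a regular point for every $j\geq 1$, since the critical points $\pi i k$ are not positive reals. Hence
\[ \#\operatorname{Addr}(0)_\pm \;=\; 2\cdot\deg(f,0)\;=\;4, \]
in agreement with the four partially overlapping ray tails through $0$ described in the introduction.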
The key feature of canonical tails, for $f$ satisfying certain further assumptions, is that, given the topology with which we have provided $\operatorname{Addr}(f)_\pm$, we can define inverse branches on neighbourhoods of these canonical tails in such a way that the branches agree on whole intervals of addresses. These branches will be instrumental in the proof of Theorem \ref{thm_main_intro}. The following theorem is \cite[Proposition~4.7 and Theorem 4.8]{mio_signed_addr}.
\begin{thm}[Inverse branches for canonical tails]\label{thm_inverse_hand}\normalfont Let $f\in {\mathcal{B}}$ be criniferous such that $J(f)\cap \operatorname{AV}(f)=\emptyset$, $P(f)\setminus I(f)$ is bounded and $S(f)\cap I(f)$ is finite. Then, there is a choice of $\operatorname{Addr}(f)$ such that, if we apply Theorem \ref{thm_signed} to $f$ with any valid initial configuration, then for each $n\in {\mathbb{N}}$ and $({\underline s}, \ast) \in \operatorname{Addr}(f)_\pm$, there exists a, not necessarily open, neighbourhood
$$\tau_n({\underline s}, \ast) \supset \gamma^n_{({\underline s}, \ast)}$$ with the following properties:
\begin{enumerate}[label=(\arabic*)]
\item \label{item1} There exists an open interval of signed addresses $\Upsilon^n({\underline s},\ast) \ni ({\underline s},\ast)$ such that $$\tau_n(\underline{\alpha}, \star) \subseteq \tau_n({\underline s}, \ast) \quad \text{ for all } \quad (\underline{\alpha}, \star) \in \Upsilon^n({\underline s},\ast).$$
\item \label{item2} If $n\geq 1$, the restriction $f\vert_{\tau_n({\underline s}, \ast)}$ is injective and maps to $\tau_{n-1}(\sigma({\underline s}), \ast)$.
\end{enumerate}
Hence, for all $n\geq 1$, we can define the inverse branch
\begin{equation}\label{eq_inversebranch}
f^{-1,[n]}_{({\underline s},\ast)}\defeq \left(f\vert_{ \tau_n({\underline s}, \ast)}\right)^{-1} \colon f(\tau_n({\underline s}, \ast)) \to \tau_n({\underline s}, \ast).
\end{equation}
\end{thm}
\begin{observation}[Chains of inverse branches {\cite[Observation 4.9]{mio_signed_addr}}]\label{obs_chain_inverse} Following Theorem \ref{thm_inverse_hand}, for each $n\geq 0$ and $({\underline s}, \ast)\in \operatorname{Addr}(f)_\pm$, we denote
$$f^{-n}_{({\underline s},\ast)}\defeq \left(f^n\vert_{ \tau_n({\underline s}, \ast)}\right)^{-1} \colon f^n(\tau_n({\underline s}, \ast)) \to\tau_n({\underline s}, \ast).$$
Then, by Theorem \ref{thm_inverse_hand}\ref{item2}, the following chain of embeddings holds:
$$ \tau_n({\underline s}, \ast)\xhookrightarrow{ \; \;\;\; f \;\;\; \; } \tau_{n-1}(\sigma({\underline s}), \ast)\xhookrightarrow{\; \;\; \; f\;\; \; \; } \tau_{n-2}(\sigma^2({\underline s}), \ast) \xhookrightarrow{ \;\;\; \; f\;\;\; \; } \cdots \xhookrightarrow{\; \;\; \; f \; \; \;\; } \tau_{0}(\sigma^{n}({\underline s}), \ast). $$
This means that we can express the action of $f^{-n}_{({\underline s}, \ast)}$ in $f^n(\tau_n({\underline s}, \ast))$ as a composition of functions defined in \eqref{eq_inversebranch}. That is,
$$ \tau_n({\underline s}, \ast)\xleftarrow{ f^{-1,[n]}_{({\underline s}, \ast)} } f(\tau_n({\underline s}, \ast))\xleftarrow{ f^{-1, [n-1]}_{(\sigma({\underline s}), \ast)} } f^2(\tau_n({\underline s}, \ast)) \xleftarrow{ f^{-1, [n-2]}_{(\sigma^2({\underline s}), \ast)} } \cdots \xleftarrow{ f^{-1, [1]}_{(\sigma^{n-1}({\underline s}), \ast)} } f^n(\tau_n({\underline s}, \ast)). $$
More precisely,
$$f^{-n}_{({\underline s},\ast)} \equiv \left(f^{-1, [n]}_{({\underline s}, \ast)}\circ f^{-1, [n-1]}_{(\sigma({\underline s}), \ast)} \circ \cdots \circ f^{-1, [1]}_{(\sigma^{n-1}({\underline s}), \ast)}\right)\!\Big\vert_{f^n(\tau_n({\underline s}, \ast))}.$$
Moreover, combining this with Theorem \ref{thm_inverse_hand}, for all $(\underline{\alpha}, \star) \in \Upsilon^n({\underline s},\ast)$,
$$f^n(\tau_n(\underline{\alpha}, \star))\subseteq f^n(\tau_n({\underline s}, \ast)) \quad \text{ and } \quad f^{-n}_{({\underline s},\ast)}\big\vert_{f^n(\tau_n(\underline{\alpha}, \star))}\equiv f^{-n}_{(\underline{\alpha},\star)}\big\vert_{f^n(\tau_n(\underline{\alpha}, \star))}.$$
Finally, note that by Theorem \ref{thm_signed}\ref{item:tails_bijection}, $f^{-n}_{({\underline s},\ast)}\colon \gamma^0_{(\sigma^n({\underline s}), \ast)}\to \gamma^n_{({\underline s}, \ast)}$ is a bijection.
\end{observation}
\section{Cantor bouquets and the class ${\mathcal{CB}}$}\label{sec_CB}
This section gathers the definitions and results from \cite{mio_newCB} that we need for our purposes. Firstly, we have adopted the definition of Cantor bouquet from \cite{AartsOversteegen,lasseBrushing}.
\begin{defn}[Straight brush, Cantor bouquet] \label{def_brush}
A subset $B$ of $[0,+\infty) \times ({\mathbb{R}}\setminus\mathbb {Q})$ is a \emph{straight brush} if the following properties are satisfied:
\begin{itemize}
\item The set $B$ is a closed subset of ${\mathbb{R}}^2$.
\item For each $(x,y) \in B$, there is $t_y \geq 0$ so that $\{x: (x,y)\in B\} = [t_y, +\infty)$.
The set $[t_y, +\infty) \times \{y\}$ is called the \textit{hair} attached at $y$, and the point $(t_y, y)$ is called its \textit{endpoint}.
\item The set $\{y: (x,y) \in B \text{ for some } x\}$ is dense in ${\mathbb{R}}\setminus\mathbb {Q}$.
Moreover, for every $(x, y) \in B$, there exist two sequences of hairs attached respectively at $\beta_n, \gamma_n \in {\mathbb{R}}\setminus\mathbb {Q}$ such that $\beta_n < y < \gamma_n$, $\beta_n, \gamma_n \to y$ and $t_{\beta_n}, t_{\gamma_n} \to t_y$ as $n\to\infty$.
\end{itemize}
\noindent A \emph{Cantor bouquet} is any subset of the plane that is ambiently homeomorphic to a straight brush. A \textit{hair} (resp. \textit{endpoint}) of a Cantor bouquet is any preimage of a hair (resp. endpoint) of a straight brush under a corresponding ambient homeomorphism.
\end{defn}
\begin{observation}\label{obs_inverse_CB} Let $g\in {\mathcal{B}}$ be of disjoint type such that $J(g)$ is a Cantor bouquet. It follows from results in \cite{lasse_arclike} that each hair $\eta$ of $J(g)$ is a dynamic ray together with its endpoint, and $g\vert_\eta$ is a bijection to another hair, see \cite[Proposition 3.3]{mio_newCB}. Suppose that $\operatorname{Addr}(g)$ has been defined with respect to some alphabet of fundamental domains $\{F_i\}_{i\in I}.$ For each ${\underline s}=F_0 F_1 F_2\dots \in \operatorname{Addr}(g)$ and $n\in {\mathbb{N}}_{\geq 1}$, we denote
\[ g_{\underline s}^n \defeq g\vert_{F_{n-1}}\circ g\vert_{F_{n-2}} \circ \cdots \circ g\vert_{F_{0}} \quad \text{ and } \quad g^{-n}_{\underline s}\defeq \left(g^{n}_{\underline s}\right)^{-1}.\]
Let $X\subset J(g)$ be a Cantor bouquet such that $I(g)\subset \operatorname{Orb}^-(X)$, and for each ${\underline s}\in \operatorname{Addr}(g)$ and $n\geq 0$, let
\begin{equation}\label{eq_Isbeta}
I_{\underline s}\defeq J_{\underline s}\cap I(g), \quad X_{\underline s}\defeq X\cap I_{\underline s} \quad \text{ and } \quad \beta^n_{\underline s}\defeq g_{{\underline s}}^{-n}(X_{\sigma^n({\underline s})}).
\end{equation}
Then, by Observation \ref{obs_addrz},
\[I_{\underline s}=\bigcup_{n\geq 0} \beta^n_{\underline s} \quad \text{ and } \quad g^n\colon \beta^n_{\underline s} \to X_{\sigma^n({\underline s})} \text{ is a bijection for all } n\geq 0. \]
\end{observation}
Recall from the introduction that the class ${\mathcal{CB}}$ comprises all $f\in {\mathcal{B}}$ for which $J(\lambda f)$ is a Cantor bouquet whenever $\vert \lambda\vert$ is sufficiently small. This class is introduced in \cite{mio_newCB}, where, in particular, it is shown that all maps in ${\mathcal{CB}}$ are criniferous. The following theorem gathers together all the results on functions in ${\mathcal{CB}}$ that we use in \S \ref{sec_semiconj}, and it is a slightly simplified version of \cite[Theorem 4.13]{mio_newCB}.
\begin{thm}\label{thm_CB} Let $f\in {\mathcal{CB}}$ and let $L>0$ be such that $S(f)\subset {\mathbb{D}}_L$. Then, $f$ is criniferous, and there exists a disjoint type map $g$ with a Cantor bouquet $X\subset J(g)$, and a continuous map $\theta \colon J(g)\rightarrow J(f)$ with the following properties:
\begin{enumerate}[label=(\alph*)]
\item $I(g)\subset \operatorname{Orb}^-(X)$;
\item \label{item:inclusions}$ \theta(J(g)) \cup J(g)\subset {\mathbb{C}}\setminus {\mathbb{D}}_L$;
\item \label{item:commute} $\theta \circ g =f\circ \theta $, except in a bounded set $B\subset {\mathbb{C}}\setminus X$;
\item \label{item:boundedC} there is a bounded set $C$ so that if $z\in B$, then the curve $[\theta(g(z)), f(\theta(z))]$ belongs to $C\cap f^{-1}({\mathbb{C}} \setminus {\mathbb{D}}_L)$;
\item \label{item:homeomX} $\theta\vert_X$ is a homeomorphism onto its image;
\item \label{item:imageBouquet} $\theta(X)$ is a forward invariant Cantor bouquet so that $I(f)\subset \operatorname{Orb}^-(\theta(X))$;
\item Each hair of $\theta(X)$ is either a ray tail or a dynamic ray with its endpoint;\label{item:rays}
\item \label{item:annulus} there is
$M\defeq M(L)>0$ such that for every $z \in J(g)$, $\theta(z) \in \overline{A(M^{-1}\vert z\vert, M\vert z\vert)}$; in particular, $\theta(X\cap I(g))\subset I(f)$;
\item \label{item:addr} the map $\theta$ establishes an order-preserving one-to-one correspondence between external addresses of $g$ and $f$.
\end{enumerate}
\end{thm}
\begin{observation}\label{obs_CB_conf} By Theorem \ref{thm_CB}\ref{item:imageBouquet},\ref{item:rays},\ref{item:addr}, the hairs of
$\theta(X)$ are a valid initial configuration for $f$ in the sense of Definition \ref{def_initial_conf}.
\end{observation}
\section{Strongly postcritically separated maps}\label{sec_postsep}
Recall that Theorem \ref{thm_main_intro} holds for those maps in ${\mathcal{CB}}$ that are additionally strongly postcritically separated. The reason is that, for the latter class of maps, the postsingular points in the Julia set are ``sufficiently spread out'' to guarantee that the maps \textit{expand} an orbifold metric defined on a neighbourhood of the Julia set, \cite{mio_orbifolds}. In this section, we include the main definitions and results from \cite{mio_orbifolds} that we require, and refer to that paper for more details.
Recall that for a holomorphic map $f:\widetilde{S}\rightarrow S$
between Riemann surfaces, the \emph{local degree} of $f$
at a point $z_0\in \widetilde{S}$, denoted by $\deg(f,z_0)$, is the unique integer $n\geq 1$
such that the local power series development of $f$ is of the form
\begin{equation*}
f(z)=f(z_0) + a_n (z-z_0)^n + \text{(higher terms)},
\end{equation*}
where $a_n\neq 0$. We say that $f$ has \textit{bounded criticality} on a set $A$ if $\operatorname{AV}(f) \cap A=\emptyset$ and there exists a constant $M<\infty$ such that $\operatorname{deg}(f,z)<M \text{ for all } z \in A.$
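As an elementary illustration (the computation is ours): for $f=\cosh$ and a critical point $z_0=\pi i k$, $k\in{\mathbb{Z}}$, writing $z=z_0+w$ gives
\[ \cosh(\pi i k+w)=(-1)^{k}\cosh(w)=(-1)^{k}\Bigl(1+\frac{w^{2}}{2}+\frac{w^{4}}{24}+\cdots\Bigr), \]
so $\deg(\cosh,z_0)=2$ at every critical point. Since moreover $\operatorname{AV}(\cosh)=\emptyset$, the map $\cosh$ has bounded criticality on all of ${\mathbb{C}}$.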
\begin{defn}[Strongly postcritically separated functions]\label{def_strongps}
A transcendental entire map $f$ is \textit{strongly postcritically separated} if:
\begin{enumerate}[label=(\alph*)]
\item \label{item_Fatou} $P(f)\cap F(f)$ is compact;
\item \label{itema_defsps} $f$ has bounded criticality on $J(f)$;
\item \label{itemb_defsps} there exists $c\in{\mathbb{N}}$ such that, for each $z\in J(f)$, $\#(\operatorname{Orb}^+(z)\cap \operatorname{Crit}(f)) \leq c$;
\item \label{itemd_defsps} there exists $\epsilon>0$ such that, for all distinct $z,w\in P_J\defeq P(f)\cap J(f)$, $\vert z-w\vert\geq \epsilon \max\{\vert z \vert, \vert w \vert\}$.
\end{enumerate}
\end{defn}
\begin{observation}[Dichotomy of points in $P_J$, {\cite[Observation 2.2]{mio_orbifolds}}] \label{rem_setting} If $f$ is strongly postcritically separated, then any $p\in P_J$ is either (pre)periodic, or it escapes to infinity. If in addition $f\in \mathcal{B}$, then there are at most finitely many points in $S(f)\cap I(f)$, and so $P(f)\setminus I(f)$ is bounded.
\end{observation}
The following theorem gathers the main results on these maps that we will use, and is a compendium of \cite[Definition and Proposition 5.1, Theorem 1.1 and Lemma 5.3]{mio_orbifolds}.
\begin{thm}[Orbifold expansion for strongly postcritically separated maps] \label{thm_orbifolds} Let $f\in {\mathcal{B}}$ be a strongly postcritically separated map. Then, there exist hyperbolic orbifolds ${\widetilde{\mathcal{O}}}=(\widetilde{S},\tilde{\nu})$ and ${\mathcal{O}}=(S,\nu)$ with the following properties:
\begin{enumerate}[label=(\alph*)]
\item \label{item_a_assocorb} Either $S={\mathbb{C}} =\tilde{S}$, or $\text{cl}(\tilde{S})\subset S={\mathbb{C}}\setminus \overline{U}$, where $U$ is a finite union of bounded Jordan domains. Moreover, $J(f) \Subset S$.
\item \label{item_e_assocorb} $f \colon{\widetilde{\mathcal{O}}}\to{\mathcal{O}}$ is an orbifold covering map.
\item There exists a constant $\Lambda > 1$ such that
\begin{equation}\label{eq_exp_lambda}
\Vert {\rm D} f(z)\Vert_{{\mathcal{O}}}\defeq\frac{\vert f'(z)\vert\rho_{{\mathcal{O}}}(f(z))}{\rho_{{\mathcal{O}}}(z)} \geq\Lambda,
\end{equation}
whenever the quotient is defined.
\item \label{lem_annulus} For any fixed $K>1$, there is $R>0$ so that if $p,q\in \overline{A(t, Kt)}\subset A(t/K,t K^2) \subset S$ for some $t>0$, then $d_{\mathcal{O}}(p,q) \leq R$.
\end{enumerate}
\end{thm}
\begin{cor}[Shrinking of preimages of bounded curves {\cite[Corollary 5.4]{mio_orbifolds}}] \label{cor_uniform} Under the assumptions of Theorem \ref{thm_orbifolds}, for any curve $\gamma_0 \subset {\mathcal{O}}$, for all $k\geq 1$, and each curve $\gamma_k \subset f^{-k}(\gamma_0)$ such that $f^k\vert_{\gamma_k}$ is injective, we have $\ell_{{\mathcal{O}}}(\gamma_k)\leq \frac{\ell_{{\mathcal{O}}}(\gamma_0)}{\Lambda^{k}}$, where $\Lambda>1$ is the constant from \eqref{eq_exp_lambda}.
\end{cor}
Note that the preceding corollary is meaningful only when applied to some $\gamma_0$ of bounded orbifold length. We may not be able to guarantee this in all the cases we need, so instead we will consider rectifiable curves in the same ``sort of homotopy class'', in the following sense.
\begin{discussion}[Definition of the sets $\mathcal{H}^{q}_{p} (W(k))$] Let us fix an entire function $f$ and let $k\in {\mathbb{N}}$. We suggest the reader keep in mind the case $k=0$, since it will be the one of greatest interest for us. Let $W(k)$ be a finite set of (distinct) points in $f^{-k}(P(f))$, totally ordered with respect to some relation ``$\prec$''. That is, $W(k)\defeq(W(k), \prec)=\{w_1, \ldots, w_N \}\subset f^{-k}(P(f))$ such that $w_{j-1}\prec w_j\prec w_{j+1}$ for all $ 1< j< N$. We note that $W(k)$ can be the empty set. Then, for every pair of points\footnote{In particular, $p$ and $q$ might belong to $f^{-k}(P(f))$.} $p,q \in {\mathbb{C}}\setminus W(k)$, we denote by $\mathcal{H}^{q}_{p} (W(k))$ the collection of all curves in ${\mathbb{C}}$ with endpoints $p$ and $q$ that join the points in $W(k)$ \textit{in the order} ``$\prec $'', starting from $p$. More formally, $\gamma\in \mathcal{H}^{q}_{p} (W(k))$ if $\text{int}(\gamma)\cap f^{-k}(P(f))=W(k)$ and $\gamma$ can be parametrized so that $\gamma(0)=p$, $\gamma(1)=q$ and $\gamma(\frac{j}{N+1})=w_j$ for all $1\leq j \leq N$. In particular, $\gamma$ can be expressed as a concatenation of $N+1$ curves
\begin{equation}\label{def_concat}
\gamma=\gamma^{w_{1}}_{p} \bm{\cdot}\gamma^{w_{2}}_{w_1} \bm{\cdot} \cdots \bm{\cdot}\gamma^{q}_{w_N},
\end{equation}
each of them with endpoints in $W(k) \cup\{p,q\}$ and such that $$\text{int}(\gamma^{w_1}_{p}), \text{int}(\gamma^{w_{i+1}}_{w_i}), \text{int}(\gamma^{q}_{w_N}) \subset {\mathbb{C}} \setminus f^{-k}(P(f))$$
for each $1\leq i \leq N$.
\end{discussion}
\noindent We use the following notion of homotopy for the sets of curves described:
\begin{defn}[Post-$k$-homotopic curves] \label{def_post0}
Consider $W(k)=\{w_1, \ldots, w_N \}\subset f^{-k}(P(f))$ and two curves $\gamma,\beta\in \mathcal{H}^{w_{N+1}}_{w_0}(W(k))$, for some $\{w_0, w_{N+1}\}\subset {\mathbb{C}}\setminus W(k)$. We say that $\gamma$ is \emph{post-$k$-homotopic to $\beta$} if for all $0 \leq i\leq N$, $\gamma^{w_{i+1}}_{w_i}$ is homotopic to $\beta^{w_{i+1}}_{w_i} \text{ in } ({\mathbb{C}} \setminus f^{-k}(P(f))) \cup \{w_i, w_{i+1} \}$.
\end{defn}
In other words, for each $1\leq i\leq N$, the restrictions of $\gamma$ and $\beta$ between $w_i$ and $w_{i+1}$ are homotopic in the space $({\mathbb{C}} \setminus f^{-k}(P(f))) \cup \{w_i, w_{i+1}\}$. It is easy to see that this defines an equivalence relation in $\mathcal{H}^q_p (W(k))$, with $p=w_0$ and $q=w_{N+1}$. For each $\gamma\in \mathcal{H}^q_p (W(k))$, we denote by $[\gamma]_{_k}$ its equivalence class. Note that if $W(k)=\emptyset$ and $p,q \in {\mathbb{C}} \setminus f^{-k}(P(f))$, then for any curve $\gamma\in\mathcal{H}^q_p (W(k))$, $[\gamma]_{_k}$ equals the equivalence class of $\gamma$ in ${\mathbb{C}} \setminus f^{-k}(P(f))$ in the usual sense. Moreover, if $\gamma$ is any curve that meets only finitely many elements of $f^{-k}(P(f))$, then it belongs to a unique set of the form ``$\mathcal{H}^q_p(W(k))$'' up to reparametrization of $\gamma$, and so its equivalence class $[\gamma]_{_k}$ is defined in an obvious sense. Hence, the notion of post-$k$-homotopy is well-defined for all such curves.
Recall that if $f$ is an entire function and $\gamma, \beta \subset f({\mathbb{C}})\setminus P(f)$ are two curves homotopic relative to their endpoints, then, by the \textit{homotopy lifting property}, see \cite{hatcher2002algebraic}, for each curve in $f^{-1}(\gamma)$ there exists a curve in $f^{-1}(\beta)$ homotopic to it relative to their endpoints. The following is an analogue of the Homotopy Lifting Property for post-$k$-homotopic curves.
\begin{prop}[Post-homotopy lifting property]\label{cor_homot}
Let $f$ be an entire map and let $C\subset {\mathbb{C}}$ be a domain such that $f^{-1}(C) \subset C$ and $\operatorname{AV}(f) \cap C=\emptyset.$ Let $\gamma \subset C$ be a bounded curve such that $\#(\gamma \cap P(f))< \infty$. Then, for any $k\geq 0$ and any curve $\gamma_k\subset f^{-k}(\gamma)$ such that $f^k_{\vert \gamma_k}$ is injective, for each $\beta \in [\gamma]_{_0}$, there exists a unique curve $\beta_k\subset f^{-k}(\beta)$ satisfying $\beta_k \in [\gamma_k]_{_k}$. In particular, $\beta_k$ and $\gamma_k$ share their endpoints.
\end{prop}
Finally, the next result tells us that if $f$ is an entire function that has dynamic rays, and $K$ is a suitable subdomain of a hyperbolic orbifold, then there exists a constant $\mu$ such that for every piece of dynamic ray contained in $K$, we can find a curve in its post-$0$-homotopy class with orbifold length at most $\mu$. This result will be of great use to us for the following reason: in Lemma \ref{lem_UHSC}, rather than pulling back pieces of dynamic rays that might not be rectifiable, we will instead pull back curves in the same post-$0$-homotopy class, for which, by Theorem \ref{cor_homot2}, we have a uniform bound on their length.
\begin{thm}[Pieces of rays with uniformly bounded length {\cite[Corollary 7.9]{mio_orbifolds}}]\label{cor_homot2}
Let $f\in {\mathcal{B}}$, let ${\mathcal{O}}=(S, \nu)$ be a hyperbolic orbifold with $S\subset {\mathbb{C}}$, and let $U\Subset S$ be a simply connected domain with locally connected boundary. Assume that $P(f)\cap \overline{U} \subset J(f)$, $\#( P(f)\cap \overline{U})$ is finite and there exists a dynamic ray or ray tail landing at each point in $P(f)\cap U$. Then, there exists a constant $L_U \geq 0$, depending only on $U$, such that for any (connected) piece of ray tail $\xi \subset U$, there exists $\delta\in [\xi]_{_0}$ with $\ell_{{\mathcal{O}}}(\delta)\leq L_U.$
\end{thm}
\section{The model space}\label{sec_model}
Gathering together results from previous sections, we know that if $f \in {\mathcal{CB}}$, then
\begin{enumerate}[label=(\arabic*)]
\item \label{item1_intromodel} $f$ is criniferous, and hence, when $\operatorname{AV}(f)\cap J(f)=\emptyset$, $I(f)$ is covered by a collection of canonical rays indexed by $\operatorname{Addr}(f)_\pm$, Theorem \ref{thm_signed};
\item \label{item2_intromodel} $f$ is conjugate to a disjoint type function $g$ on points that stay in a neighbourhood of infinity, Theorem \ref{thm_CB}.
\end{enumerate}
Recall that our goal is to define a model for the action of $f$ in $J(f)$, Theorem \ref{thm_main_intro}. Ideally, we would extend the conjugacy in \ref{item2_intromodel} to the whole of $J(g)$, a strategy used previously for other classes of functions, e.g.\ \cite{lasseRidigity, helenaSemi, mashael}. However, as reflected in \ref{item1_intromodel}, the presence of critical values, and hence critical points, in $I(f)$, provides the escaping set with a topological structure different from a (Pinched) Cantor bouquet. Nonetheless, \ref{item1_intromodel} suggests the use of two copies of $J(g)$, say $J(g)\times \{-,+\}$, as a candidate model space: then, in a very rough sense, a function $\varphi$ could map $J(g)\times \{-,+\}$ to $J(f)$ by mapping $J(g)\times \{+\}$ to the closure of those canonical rays of the form $\Gamma(\cdot, +)$, and $J(g)\times \{ -\}$ to the closure of those canonical rays of the form $\Gamma(\cdot, -)$. We will proceed this way in \S\ref{sec_semiconj}.
Note that for the function $\varphi$ to be continuous, we need to provide the set $J(g)\times \{-,+ \}$ with the ``right topology'': we want to use the map $\theta$ from Theorem \ref{thm_CB} to conjugate near infinity each copy of $J(g)\times \{-,+ \}$ to a subset of $J(f)$. Since, by Theorem~\ref{thm_CB}, the function $\theta$ preserves the cyclic orders in $\operatorname{Addr}(g)$ and $\operatorname{Addr}(f)$, we must endow $J(g)\times \{-,+ \}$ with a topology that is compatible with that of $\operatorname{Addr}(f)_\pm$ defined in \ref{discussion_signed_addr}. This is our main task in this section. Although a topology could be defined directly on $J(g)\times \{-,+ \}$, for convenience and to simplify arguments, we instead define it on two copies of any straight brush $B$, and, using the corresponding ambient homeomorphism $\psi\colon J(g) \rightarrow B$, we induce a topology on our model set, see \ref{dis_topology_model}.
For the rest of the section, let us fix any $f\in {\mathcal{CB}}$, as well as a disjoint type function $g$ from its parameter space. Let us moreover assume that the topological space $\operatorname{Addr}(g)_\pm$ has been defined following Definition \ref{defn_signedaddr}. Recall from Definition \ref{def_brush} that a straight brush is defined as a subset of $[0,\infty)\times ({\mathbb{R}} \setminus \mathbb {Q})$. Hence, we consider the set
\begin{equation}\label{eq_setM}
{\mathcal{M}}\defeq [0,\infty)\times ({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace,
\end{equation}
that we aim to endow with a topology. We will use the symbols ``$\ast, \star, \circledast$'' to refer to generic elements of $\lbrace -,+\rbrace$.
\begin{discussion}[Topology in $({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$]\label{dis_modelM_topIrr}
We start by providing $({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$ with a topology compatible with that of $\operatorname{Addr}(g)_\pm$. Let $<_{_i}$ be the usual linear order on irrationals, and let us give the set $\lbrace -,+\rbrace$ the order $\lbrace -\rbrace \prec \lbrace +\rbrace$. Then, for elements in the set $({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$, we define the order relation
\begin{equation}\label{eq_orderB}
(r, \ast )< (s, \star) \qquad \text{ if and only if } \qquad r <_{_i} s \quad \text{ or } \quad r = s \: \text{ and } \: \ast \prec \star.
\end{equation}
This gives a total order to $({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$. Thus, we can define a cyclic order induced by ``$<$'' in the usual way: for $a,x,b \in ({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$,
$$[a,x,b]_{_I} \quad \text{if and only if} \quad a< x< b \quad \text{ or } \quad x < b < a \quad \text{ or }\quad b < a < x.$$
Moreover, given two different elements $a,b \in ({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$, we define the \textit{open interval} from $a$ to $b$, denoted by $(a,b)$, as the set of all points $x\in ({\mathbb{R}} \setminus \mathbb {Q})\times \lbrace -,+\rbrace$ such that $[a,x,b]_{_I}$. The collection of all such open intervals forms a base for the \textit{cyclic order topology}, that we denote by $\tau_I$.
\end{discussion}
Before we proceed to define a topology in $\mathcal{M}$, let us check that the topological spaces $(\operatorname{Addr}(g)_\pm,\tau_A)$ and $(({\mathbb{R}} \setminus \mathbb {Q})\times \{-,+\}, \tau_I)$ are indeed closely related.
\begin{prop}[Correspondence between spaces]\label{prop_mapC} Let $\psi\colon J(g)\rightarrow B$ be an ambient homeomorphism, and for each ${\underline s} \in \operatorname{Addr}(g)$, let $\operatorname{Irr}({\underline s})\defeq y,$ where $y$ is the irrational such that $\psi(J_{\underline s})=[t_y, \infty) \times \{y\}\subset B$. Let $\mathcal{C}\colon\operatorname{Addr}(g)_\pm \rightarrow ({\mathbb{R}} \setminus \mathbb {Q})\times \{-,+ \}$ be given by $\mathcal{C}(\underline{s}, \ast)=(\operatorname{Irr}({\underline s}),\ast)$. Then $\mathcal{C}$ is an open map.
\end{prop}
\begin{proof} Let $\underline{s}, \underline{\tau}, \underline{\alpha} \in \operatorname{Addr}(g)$. Let $[\cdot]_\ell$ denote the lexicographic cyclic order on $\operatorname{Addr}(g)$ as described in \ref{discussion_signed_addr}. Then,
$$[\underline{s}, \underline{\tau}, \underline{\alpha}]_{_\ell} \xLeftrightarrow{ \; \; _{(1)} \; \;} [ J_{\underline s}, J_{\underline{\tau}}, J_{\underline{\alpha}}]_\infty \xLeftrightarrow{ \; \; _{(2)} \; \;} [
\psi (J_{\underline s}),\psi (J_{\underline{\tau}}), \psi (J_{\underline{\alpha}})]_\infty \xLeftrightarrow{ \; \; _{(3)} \; \;}[\operatorname{Irr}(\underline{s}),\operatorname{Irr}( \underline{\tau}),\operatorname{Irr}( \underline{\alpha})]_{_i},$$
where (1) is \cite[Observation 2.14]{mio_signed_addr}, ${(2)}$ holds because $\psi$ is an ambient homeomorphism and hence preserves the cyclic order at infinity, and ${(3)}$ holds by defining a cyclic order on the irrationals from the usual linear order. Then, cutting the cyclic orders $[\cdot]_\ell$ and $[\cdot]_i$ at some external address ${\underline s}$ and at $\operatorname{Irr}({\underline s})$, and since the linear orders on $\operatorname{Addr}(g)_\pm$ and $({\mathbb{R}} \setminus \mathbb {Q})\times \{-,+ \}$ are defined in \eqref{eq_linear_addr} and \eqref{eq_orderB} in the same way, it follows that
\begin{equation}\label{eq_mapC}
[({\underline s}, \ast),(\underline{\alpha}, \star),(\underline{\tau},\circledast)]_{_A} \quad \text{if and only if} \quad [\mathcal{C}(({\underline s}, \ast)),\mathcal{C}((\underline{\alpha}, \star)),\mathcal{C}((\underline{\tau},\circledast))]_{_I}.
\end{equation}
Since these orders were used to define the cyclic order topologies $\tau_A$ and $\tau_I$ on the domain and codomain of $\mathcal{C}$ respectively, \eqref{eq_mapC} shows that $\mathcal{C}$ maps basic open sets to basic open sets, and hence $\mathcal{C}$ is an open map.
\end{proof}
We observe some properties of the topological space defined in \ref{dis_modelM_topIrr}.
\begin{observation}[Open and closed sets in $(({\mathbb{R}} \setminus \mathbb {Q}) \times \{-,+\}, \tau_I)$]\label{rem_cyclicIrr}
Let $A$ be an open set of $(({\mathbb{R}} \setminus \mathbb {Q}) \times \{-,+\}, \tau_I)$ and suppose that $(s,-), (s,+)\in A$. Then, since $ \tau_I$ is generated by open intervals, there exist irrationals $r<_{_i}s<_{_i}t$ such that $((r,\ast), (s,+))\ni (s,-)$ and $((s,-),(t,\ast)) \ni (s,+)$. Hence, both $(s,-), (s,+) \in((r,\ast), (t,\ast)) \subset A$. Moreover, for any pair $r<_{_i} s$, the sets $U\defeq ((r, +), (s,-))$, $U \cup \{(r,+)\}$, $U \cup \{(s,-)\}$ and $U \cup \{(r,+),(s,-)\}=((r, -), (s, +)) \eqdef V$ are open intervals. In addition, $V$ is also closed, since it contains its boundary points.
\end{observation}
\begin{discussion}[Definition of topologies]\label{dis_topology_model}
Let $\mathcal{M}$ be the set from \eqref{eq_setM}. We define the topological space $({\mathcal{M}}, \tau_{\mathcal{M}})$, with $\tau_{\mathcal{M}}$ being the product topology of $[0,\infty)$ with the usual topology, and $({\mathbb{R}} \setminus \mathbb {Q})\times \lbrace -,+\rbrace$ with the topology $\tau_I$. Let $B$ be a straight brush and $\psi$ an ambient homeomorphism for which $\psi(J(g))=B$. Let $B_{\pm}\defeq B \times \lbrace -,+\rbrace$ be the subspace of ${\mathcal{M}}$ with the induced topology $\tau_{B_{\pm}}$ from $\tau_{\mathcal{M}}$. Consider the set $J(g)_{\pm}\defeq J(g)\times \lbrace -,+\rbrace$ and the bijection $\tilde{\psi}\colon J(g)_{\pm} \to B_\pm$ defined as $\tilde{\psi}((z,\ast))\defeq (\psi(z),\ast)$. We can then induce a topology in $J(g)_{\pm}$ from the space $(B_\pm,\tau_{B_{\pm}})$, namely
\begin{equation}\label{eq_topologyJ}
\tau_J\defeq \lbrace \tilde{\psi}^{-1}(U) \colon U\in \tau_{B_\pm} \rbrace.
\end{equation}
Note that in particular, $\tilde{\psi}\colon (J(g)_{\pm}, \tau_J) \to (B_\pm,\tau_{B_{\pm}})$ is a homeomorphism. We moreover define $I(g)_{\pm}\defeq I(g)\times \lbrace -,+\rbrace \subset J(g)_{\pm}$ as a subspace equipped with the induced topology.
\end{discussion}
\begin{defn}[Model for functions in ${\mathcal{CB}}$]\label{defn_modelspace}
Let $f\in {\mathcal{CB}}$ and let $g$ be any disjoint type function on its parameter space. Then, the space $(J(g)_\pm, \tau_J)$, with $\tau_J$ defined following \ref{dis_topology_model}, is a \textit{model space} for $f$. Moreover, we define its \textit{associated model function} $\tilde{g} \colon J(g)_\pm\rightarrow J(g)_\pm$ as $\tilde{g}(z, \ast)\defeq(g(z),\ast)$.
\end{defn}
\begin{observation}[All models for a fixed function are conjugate]\label{obs_unique_model} Let $f\in {\mathcal{CB}}$ and let $g_1$ and $g_2$ be two disjoint type functions on its parameter space. Let $J(g_1)_\pm$, $J(g_2)_\pm$ and $\tilde{g}_1$, $\tilde{g}_2$ be the corresponding models and respective associated model functions. Then, there exists a homeomorphism $\Phi\colon J(g_1)_\pm \rightarrow J(g_2)_\pm$ such that $\Phi\circ \tilde{g}_1=\tilde{g}_2 \circ \Phi$. To see this, note that since any two straight brushes are ambiently homeomorphic, we may assume without loss of generality that the topologies in $J(g_1)_\pm$ and $J(g_2)_\pm$ have been induced from the same space $B_\pm$ following \ref{dis_topology_model}. It follows from \cite[Proof of Theorem 3.1]{lasseRidigity}, see \cite[Corollary 4.3]{mio_newCB}, that there exists a homeomorphism $\varphi\colon J(g_1)\rightarrow J(g_2)$ such that $\varphi\circ g_1=g_2 \circ \varphi$. Defining $\Phi(z, \ast)\defeq(\varphi(z),\ast)$, our claim follows.
\end{observation}
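In computational terms, the doubling construction is simply tagging each point with a sign, and the lifted maps act only on the first coordinate. The following minimal runnable Python sketch uses toy maps in place of $g_1$, $g_2$ and the conjugacy $\varphi$ (purely illustrative; nothing here depends on the actual dynamics).
\begin{verbatim}
# Minimal sketch of the doubling construction with toy stand-ins for
# g_1, g_2 and the conjugacy (illustration only).
def lift(h):
    """Lift a map of J(g) to J(g) x {-,+}, leaving the sign unchanged."""
    return lambda p: (h(p[0]), p[1])

g1 = lambda z: z + 1      # toy stand-in for g_1
conj = lambda z: 2 * z    # toy stand-in for the conjugacy varphi
g2 = lambda z: z + 2      # chosen so that conj o g1 == g2 o conj

g1_tilde, g2_tilde, Phi = lift(g1), lift(g2), lift(conj)
p = (3.0, "+")
assert Phi(g1_tilde(p)) == g2_tilde(Phi(p))   # the lifted conjugacy
\end{verbatim}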
\begin{remark}
The space $({\mathcal{M}},\tau_{\mathcal{M}})$ is not second countable, see \cite[remark on p.124]{mio_thesis}, and hence it cannot be (topologically) embedded in the plane. Similarly, neither can $(J(g)_{\pm}, \tau_J)$. Nonetheless, consider any open set $U$ of ${\mathcal{M}}$ of the form $U \defeq (t_1, t_2) \times (x,y)$, with $t_1<t_2$ and $x=(r,\ast) ,\: y=(s,\star) \in ({\mathbb{R}} \setminus \mathbb {Q}) \times \lbrace -,+\rbrace$ for some $r<_{_i}s$. Then, the interval $(x,y)$ comprises all elements $(\alpha, \ast)$ with $r<_{_i}\alpha<_{_i}s$, and so we can think of $U$ as a sort of ``box''. This intuition will become clearer in the proof of the next proposition.
\end{remark}
\begin{prop}[Continuity of functions from the model space]\label{prop_contmodel} Let $f\in {\mathcal{CB}}$, and let $J(g)_\pm$ be a model space for $f$. Then, both its associated model function $\tilde{g}$ and the function $\text{Proj}\colon J(g)_{\pm} \rightarrow J(g)$ given by $\text{Proj}(z, \ast)\defeq z$ are continuous.
\end{prop}
\begin{proof}
Let $\tilde{\psi}\colon J(g)_{\pm} \to B_\pm$ and $\psi \colon J(g)\rightarrow B$ be the homeomorphisms from \ref{dis_topology_model}. Since proving continuity of $\text{Proj}$ is equivalent to proving continuity of $\mathcal{P}\defeq (\psi\circ \text{Proj} \circ \tilde{\psi}^{-1})\colon B_\pm \rightarrow B$, we do the latter. For any $x=(t,r,\ast)\in B_\pm$,
\begin{equation} \label{eq_P}
\mathcal{P}(x)=(\psi\circ\text{Proj}\circ\tilde{\psi}^{-1})(x)= (\psi\circ\text{Proj})(\psi^{-1}(t,r), \ast)=(\psi\circ\psi^{-1})(t,r)=(t,r).
\end{equation}
Fix $x=(t,r,\ast)\in B_\pm$, any $\epsilon>0$ and let ${\mathbb{D}}_{\epsilon}(t,r)$ be the (Euclidean) ball of radius $\epsilon$ centred at $\mathcal{P}(x)$. We can find a pair of irrational numbers $r_1<r<r_2$ such that the rectangle $(t-\epsilon/2,t+\epsilon/2 )\times (r_1, r_2) \subset {\mathbb{D}}_{\epsilon}(t,r)$. Then, $R\defeq \left((t-\epsilon/2,t+\epsilon/2) \times ((r_1,+), (r_2,-))\right)\cap B_\pm$ is an open subset of $B_\pm$ containing $x$ and such that
$$\mathcal{P}\left(R\right)\subseteq(t-\epsilon/2,t+\epsilon/2 )\times (r_1, r_2)\subset {\mathbb{D}}_{\epsilon}(t,r),$$
and so $\mathcal{P}$ is continuous. Similarly, proving that $\tilde{g}\colon J(g)_\pm \rightarrow J(g)_\pm$ is continuous is equivalent to proving that $\tilde{h}\defeq \tilde{\psi}\circ \tilde{g}\circ \tilde{\psi}^{-1}\colon B_\pm \rightarrow B_\pm$ is continuous. For any $x=(t,r,\ast)\in B_\pm$,
$$\tilde{h}(x)=(\tilde{\psi}\circ\tilde{g}\circ\tilde{\psi}^{-1})(x)=(\tilde{\psi}\circ\tilde{g})(\psi^{-1}(t,r), \ast)=((\psi\circ g \circ\psi^{-1})(t,r), \ast).$$
That is, $\tilde{h}(t,r,\ast)= (h(t,r), \ast)$, where $h\defeq \psi\circ g\circ\psi^{-1}\colon B \rightarrow B$ is a continuous function.
Fix $x\in B_\pm$ and let $V_x$ be an open neighbourhood of $\tilde{h}(x)\eqdef(t,\alpha,\ast)$. We may assume without loss of generality that $V_x$ is of the form $V_x\defeq (t_1,t_2) \times (w,y)$, with $t_1< t <t_2\in {\mathbb{R}}$ and $w=(r, \circledast), y=(s,\star) \in B_\pm$ such that $r\leq_{_i} \alpha \leq_{_i} s$. Let $\mathcal{P}\colon B_\pm \rightarrow B$ be the function specified in \eqref{eq_P}. If $r \lneq_{_i}\alpha \lneq_{_i} s$, then $(t,\alpha, -), (t,\alpha, +) \in V_x$, and by Observation~\ref{rem_cyclicIrr}, there exists a pair of irrationals $\alpha^-, \alpha^+$ so that $r\leq \alpha^-<\alpha<\alpha^+\leq s$ and $H\defeq (t_1,t_2) \times ((\alpha^-,+), (\alpha^+,-)) \subset V_x$. In particular, $\mathcal{P}(H)$ is open and $(\mathcal{P}^{-1}\circ\mathcal{P})(H)=H$. Since both $h$ and $\mathcal{P}$ are continuous functions, $X\defeq (\mathcal{P}^{-1}\circ h^{-1}\circ \mathcal{P})(H)$ is an open set in $B_\pm$, and by construction, $X$ is a neighbourhood of $x$ such that $\tilde{h}(X)\subset V_x$. Otherwise, either $r=\alpha$, in which case, for $V_x$ to be an open neighbourhood of $\tilde{h}(x)$, $w$ must be of the form $w=(r, -)$ and $\tilde{h}(x)=(t, r, +)$; or, by the same reasoning, $s=\alpha$, $y=(s,+)$ and $\tilde{h}(x)=(t,s,-)$. We only argue continuity in the first case and remark that the second case can be dealt with analogously. Define $R\defeq (t_1,t_2) \times \{(r, -)\}$ and $H\defeq (t_1,t_2) \times ((r,+), (s,\star)) \subset V_x$. Note that $\mathcal{P}(H)$ is an open set and $\mathcal{P}(R)\subset \partial \mathcal{P}(H)$. Since $g$ is of disjoint type, $J(g) \cap \operatorname{Crit}(g)=\emptyset$, and so $g$ is locally injective in $J(g)$. Therefore, so is $h$, which implies that $h$ locally preserves the order of the hairs of the straight brush $B$. Consequently, $(h^{-1}\circ\mathcal{P})(R) \subset \partial (h^{-1} \circ\mathcal{P})(H).$ By construction and a similar argument to that in the previous case, the set $\left((h^{-1}\circ\mathcal{P})(R)\times \{-\}\right)\cup (\mathcal{P}^{-1}\circ h^{-1} \circ \mathcal{P})(H)$ is an open neighbourhood of $x$ whose image under $\tilde{h}$ lies in $V_x$, and continuity follows.
\end{proof}
We conclude this section with some topological properties of model spaces that will be of use to us in Section \ref{sec_semiconj} when proving surjectivity of the function $\varphi$ from Theorem \ref{thm_main_intro}.
\begin{lemma}[Compactification of the model space]\label{compactification} Let $f\in {\mathcal{CB}}$ and let $J(g)_\pm$ be a model space for $f$. Then, $J(g)_\pm$ admits a one-point (or Alexandroff) compactification, whose topology we denote by $\tau_\infty$. The resulting compact space $(J(g)_{\pm}\cup\lbrace\tilde{\infty}\rbrace,\tau_\infty)$ is a sequential space. Moreover, given a sequence $\lbrace x_k \rbrace_{k\in {\mathbb{N}}} \subset J(g)_{\pm}\cup\lbrace\tilde{\infty}\rbrace$,
\begin{equation}\label{limits}
\lim_{k \rightarrow \infty} x_k=\tilde{\infty} \quad \Longleftrightarrow \quad \lim_{k \rightarrow \infty} \text{Proj}(x_k)=\infty.
\end{equation}
\end{lemma}
\begin{proof}
We show that $J(g)_{\pm}$ admits a one-point compactification by proving that $J(g)_{\pm}$ is a locally compact, Hausdorff space. Equivalently, since these are topological properties (preserved under homeomorphisms), we instead show that a corresponding double brush $(B_\pm,\tau_{B_{\pm}})$, see \ref{dis_topology_model}, is locally compact and Hausdorff. Note that the space $({\mathcal{M}},\tau_{{\mathcal{M}}})$ defined in \ref{dis_topology_model} is Hausdorff: for any $(t,s)\in {\mathbb{R}}^2$,
\begin{equation*}
\begin{split}
(t,s,-) \in V_{-}\defeq (t-1, t+1)\times((s-1,-),(s,+)), \\ (t,s,+) \in V_{+}\defeq (t-1, t+1)\times ((s,-),(s+1,+)),
\end{split}
\end{equation*}
and $V_{-} \cap V_+=\emptyset$. Disjoint neighbourhoods of any pair of points in ${\mathcal{M}}$ can be constructed similarly. Since being Hausdorff is a hereditary property, $(B_\pm,\tau_{B_{\pm}})$ is Hausdorff.
We prove local compactness of $(B_\pm,\tau_{B_{\pm}})$ by showing that for each $x \in B_\pm$ and each open bounded neighbourhood $U_x \ni x$, the closure of $U_x$ in $B_\pm$, which we denote by $\overline{U}_x$, is compact. To that end, let $\mathcal{U}=\lbrace{U_i}\rbrace_{i \in I}$ be an open cover of $\overline{U}_x$. By definition, $B_\pm\setminus \overline{U}_x$ is an open set, and so
$$ \mathcal{U}'\defeq \lbrace{U_i}\rbrace_{i\in I} \cup \lbrace B_\pm\setminus \overline{U}_x \rbrace$$
is an open cover of $B_\pm$. Hence, for each $(t,s, \ast) \in B_\pm$, there exists $U_{(t,s, \ast)} \in \mathcal{U}'$ such that $(t,s,\ast)\in U_{(t,s, \ast)}.$ For each $(t,s) \in B$, denote
$$V_{(t,s)}\defeq U_{(t,s, -)} \cup U_{(t,s, +)} \quad \text{ and }\quad \mathcal{V}\defeq \{V_{(t,s)}\}_{(t,s) \in B}.$$
Let $\mathcal{P}\colon B_\pm \rightarrow B$ be the projection function specified in \eqref{eq_P}, and observe that $\mathcal{P}(V_{(t,s)})$ might not be open, but since $\{(t,s, -), (t,s, +) \}\subset V_{(t,s)}$, by Observation \ref{rem_cyclicIrr}, $\mathcal{P}(V_{(t,s)})$ always contains an open neighbourhood $W_{(t,s)} \ni (t,s)$, which we take to be $\mathcal{P}(V_{(t,s)})$ when this set is open. Then,
$$\mathcal{W}\defeq \lbrace W_{(t,s)} \rbrace_{(t,s)\in B}$$
forms an open cover of $B\subset {\mathbb{R}}^2$, and in particular of the closure $\overline{\mathcal{P}(\overline{U}_x)}$. Note that $\mathcal{P}(\overline{U}_x)$ is a bounded set, since $\overline{U}_x$ is bounded and we showed in the proof of Proposition \ref{prop_contmodel} that $\mathcal{P}$ is continuous. Since the straight brush $B\subset {\mathbb{R}}^{2}$ satisfies the Heine-Borel property, there exists a finite subcover $\tilde{\mathcal{W}}=\lbrace{W_k}\rbrace_{k \in K} \subset \mathcal{W}$ of $\overline{\mathcal{P}(\overline{U}_x)}$. For each $k\in K$, choose $V_k \in \mathcal{V}$ such that $W_k \subseteq \mathcal{P}(V_k)$ and denote $$\tilde{\mathcal{V}}\defeq \{V_k\}_{k \in K} \quad \text{ and } \quad \tilde{ \mathcal{U}} \defeq \lbrace U_{(t,s, \ast)} \in \mathcal{U} \colon U_{(t,s, \ast)} \subset V_{k} \in \tilde{ \mathcal{V}}\rbrace.$$
By definition, the set $\tilde{ \mathcal{V}}$ has the same number of elements as ${\tilde{\mathcal{W}}}$, and $\tilde{ \mathcal{U}}$ has at most twice as many; in particular, $\tilde{ \mathcal{U}}$ is finite. Thus, if we show that $\tilde{\mathcal{U}}$ is an open subcover of $\overline{U}_x$, then we will have shown that $(B_\pm,\tau_{B_{\pm}})$ is locally compact. Note that $\mathcal{P}^{-1}(\tilde{ \mathcal{W}})$ is an open cover of $\mathcal{P}^{-1}(\overline{\mathcal{P}(\overline{U}_x)}) \supset \overline{U}_x$, and hence so is $\tilde{ \mathcal{V}}$. Therefore, for each $(t, s, \ast )\in \overline{U}_x$, there exists $k\in K$ such that $(t, s, \ast )\in V_k= U_{(t', s',+)}\cup U_{(t',s',-)}$ for some $(t',s')\in B$. If $U_{(t',s', -)}= B_\pm\setminus \overline{U}_x = U_{(t',s', +)}$, then $V_k \cap \overline{U}_x=\emptyset$, which contradicts $ (t, s, \ast )\in V_k$. Hence, $(t, s, \ast ) \in U_{(t',s',\ast)}\in \tilde{\mathcal{U}}$ for some $\ast \in \{-,+\}$, and so $\tilde{ \mathcal{U}}$ is an open subcover of $\overline{U}_x$.
We have shown that $B_\pm$ admits a (Hausdorff) one-point compactification, which we denote by $B_\pm \cup\lbrace \tilde{\infty} \rbrace $. We will see that $B_\pm \cup\lbrace \tilde{\infty}\rbrace $ is a sequential space by proving the stronger property that it is first countable, i.e., that each point of $B_\pm \cup\lbrace \tilde{\infty} \rbrace$ has a countable neighbourhood basis. By definition, the open sets in $B_\pm \cup\lbrace \tilde{\infty} \rbrace $ are all sets that are open in $B_\pm$, together with all sets of the form $(B_\pm \setminus C)\cup \lbrace \tilde{\infty} \rbrace$, where $C$ is any closed and compact set in $B_\pm$. For each $(t,s,\ast)\in B_\pm$, a local basis can be chosen to be the collection of sets $\{U_n\}_{n\in {\mathbb{N}}}$ given by
$$ U_{n}\defeq \left((t-1/n, t+1/n)\times \left((s-1/n,-), (s+1/n,+) \right)\right)\cap B_\pm.$$
In order to find a local basis for $\tilde{\infty}$, for each $N \in {\mathbb{N}}$, let
$$ C_{N}\defeq [0,N]\times ((-N, -), (N, +)),$$
and note that by Observation \ref{rem_cyclicIrr}, $C_{N}$ equals its closure. Thus, reasoning as in the first part of the proof of this lemma, one can see that $C_{N}$ is compact, and therefore $ \left\lbrace (B_\pm \setminus C_N) \cup \lbrace \tilde{\infty} \rbrace \right\rbrace_{N \in {\mathbb{N}}}$ forms a local basis for $\lbrace \tilde{\infty}\rbrace$. Thus, we have shown that $B_\pm \cup\lbrace \tilde{\infty} \rbrace$ is a sequential space.
Finally, if $\lbrace x_k \rbrace_{k\in {\mathbb{N}}} \subset B_\pm\cup\lbrace\tilde{\infty}\rbrace$ is a sequence such that $x_k\to\tilde{\infty}$ as $k\rightarrow \infty$, then for every $N\in {\mathbb{N}}$, there exists $K(N)\in {\mathbb{N}}$ such that for all $k \geq K(N)$, $x_k \in (B_\pm \setminus C_N)$. Hence, for all $k\geq K(N)$, $\mathcal{P}(x_k) \in \mathcal{P}(B_\pm \setminus C_N) \subset {\mathbb{R}}^2 \setminus ([0,N] \times [-N,N])$. Therefore,
$$ x_k \to \tilde{\infty} \quad \Longleftrightarrow \quad \text{for every } N \in {\mathbb{N}}, \; x_k \notin C_N \text{ for all sufficiently large } k \quad \Longleftrightarrow \quad \mathcal{P}(x_k) \to \infty,$$
and the last claim of the statement follows.
\end{proof}
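The two kinds of basic neighbourhoods appearing in this proof are easy to test for membership, and the role of the signs at the endpoint heights is worth making explicit. The following toy Python sketch does so (floats stand in for the irrational heights, and the endpoint equality tests are for illustration only; this is not part of the argument).
\begin{verbatim}
# Toy membership tests for the basic neighbourhoods in the proof above;
# floats stand in for the irrational heights (illustration only).
def in_U_n(x, t0, s0, n):
    """U_n, a basic neighbourhood of (t0, s0, sign): the signed interval
    ((s0 - 1/n, -), (s0 + 1/n, +)) contains its endpoint heights only
    with the inward-pointing signs."""
    t, s, sign = x
    return (t0 - 1/n < t < t0 + 1/n) and (
        (s0 - 1/n < s < s0 + 1/n)
        or (s == s0 - 1/n and sign == "+")
        or (s == s0 + 1/n and sign == "-"))

def in_C_N(x, N):
    """C_N = [0, N] x ((-N, -), (N, +)); the complement of C_N together
    with the extra point is a basic neighbourhood of infinity."""
    t, s, sign = x
    return (0 <= t <= N) and (
        (-N < s < N) or (s == -N and sign == "+") or (s == N and sign == "-"))

# A sequence tends to the point at infinity iff it leaves every C_N:
assert not in_C_N((12.0, 3.5, "-"), 10) and in_C_N((5.0, 3.5, "-"), 10)
\end{verbatim}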
\section{The semiconjugacy}\label{sec_semiconj}
In this section, we prove a more precise version of Theorem \ref{thm_main_intro}, namely Theorem~\ref{thm_main}. We start by bringing together the parameters and functions that will be involved.
\begin{discussion}[Combination of previous results]\label{dis_setting_semi}
Let $f \in {\mathcal{CB}}$ be an arbitrary but fixed, strongly postcritically separated function. Let us fix a pair of hyperbolic orbifolds ${\mathcal{O}}=(S,\nu)$ and ${\widetilde{\mathcal{O}}}=(\widetilde{S},\tilde{\nu})$ associated to $f$ provided by Theorem \ref{thm_orbifolds}. In particular, by Theorem \ref{thm_orbifolds}\ref{item_a_assocorb}, $S={\mathbb{C}} \setminus \overline{U}$, where $\overline{U}$ is a, possibly empty, compact set. By Observation \ref{rem_setting} and since $f\in {\mathcal{B}}$, we can fix $K>0$ sufficiently large so that
\begin{equation}\label{eq_dk2}
\lbrace P_J \setminus I(f),\overline{U},S(f), 0, f(0) \rbrace \subset {\mathbb{D}}_{K}.
\end{equation}
Let us choose a constant $L>K$ such that $f({\mathbb{D}}_K) \Subset {\mathbb{D}}_L$, and apply Theorem \ref{thm_CB} to $f$ with this constant. This provides us with a disjoint type map $g$ and a continuous function $\theta\colon J(g)\to J(f)$, which in particular conjugates $g$ and $f$ on subsets of their Julia sets. Moreover, $\theta$ determines an order-preserving bijection between any choice of $\operatorname{Addr}(g)$ and $\operatorname{Addr}(f)$, as well as a valid initial configuration
\begin{equation}\label{eq_initial_semi}
\left\{\gamma^0_{\underline s}\right\}_{{\underline s} \in \operatorname{Addr}(f)}
\end{equation}
in the sense of Definition \ref{def_initial_conf}, see Observation \ref{obs_CB_conf}. Since $f$ is strongly postcritically separated, $f$ satisfies all the assumptions in Theorem \ref{thm_inverse_hand}, see Observation \ref{rem_setting}. Thus, we can fix the choice of $\operatorname{Addr}(f)$ provided by this theorem, and define the space of signed external addresses $\operatorname{Addr}(f)_\pm$ from $\operatorname{Addr}(f)$, see Definition \ref{defn_signedaddr}. Using the initial configuration in \eqref{eq_initial_semi}, we define the set of canonical tails
$$\mathcal{C}\defeq \left\{\gamma^n_{({\underline s}, \ast)} : n\geq 0 \text{ and }({\underline s}, \ast)\in \operatorname{Addr}(f)_\pm\right\}$$ provided by Theorem \ref{thm_signed}. In particular, by Theorem \ref{thm_inverse_hand}, for each curve $\gamma^n_{({\underline s}, \ast)} \in \mathcal{C}$, there exists a neighbourhood $\tau_n({\underline s}, \ast) \supset \gamma^n_{({\underline s}, \ast)}$ where we can define an inverse branch of $f$
\begin{equation}\label{eq_inverse_semi}
f^{-1,[n]}_{({\underline s},\ast)}\defeq \left(f\vert_{ \tau_n({\underline s}, \ast)}\right)^{-1} \colon f(\tau_n({\underline s}, \ast)) \rightarrow \tau_n({\underline s}, \ast),
\end{equation}
as well as an inverse branch of $f^n$ provided by Observation \ref{obs_chain_inverse},
\begin{equation}\label{eq_inverse_semi2}
f^{-n}_{({\underline s},\ast)}\defeq \left(f^n\vert_{ \tau_n({\underline s}, \ast)}\right)^{-1} \colon f^n(\tau_n({\underline s}, \ast)) \rightarrow \tau_n({\underline s}, \ast),
\end{equation}
both of which have properties that we shall use later on. Next, using the function $g$, we fix a model space $(J(g)_\pm, \tau_J)$ for $f$ (see Definition \ref{defn_modelspace}) and the corresponding associated model function
$$\tilde{g}\colon J(g)_\pm\rightarrow J(g)_\pm; \quad \quad (z, \ast)\mapsto(g(z),\ast).$$ In addition,
$$\text{Proj}\colon J(g)_{\pm} \rightarrow J(g); \quad \quad (z, \ast)\mapsto z,$$
is the projection function from Proposition \ref{prop_contmodel}. Finally, for each $({\underline s}, \ast) \!\in \!\operatorname{Addr}(g)_\pm$, we denote
\begin{equation}\label{def_Jsmodel}
J_{({\underline s}, \ast)}\defeq J_{\underline s} \times \{\ast\} \quad \text{ and } \quad I_{({\underline s}, \ast)}\defeq I_{\underline s} \times \{\ast\},
\end{equation}
where $I_{\underline s}=J_{\underline s}\cap I(f)$, see Observation \ref{obs_inverse_CB}. In particular, by Observation \ref{obs_addrz}, $$J(g)_\pm=\bigcup_{({\underline s}, \ast) \in \operatorname{Addr}(g)_\pm} J_{({\underline s},\ast)}.$$
Moreover, for each $x \in J(g)_\pm$, $\operatorname{addr}(x)$ denotes the unique $({\underline s}, \ast) \!\in \!\operatorname{Addr}(g)_\pm$ such that $x\in J_{({\underline s}, \ast)}$, see Observation \ref{obs_addrz}.
\end{discussion}
After setting in \ref{dis_setting_semi} the functions we shall use in the proof of Theorem \ref{thm_main}, for ease of understanding, we now comment on the main ideas of this proof. For the functions $f$ and $\tilde{g}$ fixed in \ref{dis_setting_semi}, we aim to obtain the function $\varphi \colon J(g)_\pm \rightarrow J(f)$ that semiconjugates them as a limit of functions $\varphi_n\colon J(g)_\pm \rightarrow J(f)$, which are successively \textit{better approximations} of $\varphi$. For each $x\in J(g)_\pm$ and each $n\geq 0$, roughly speaking, $\varphi_n(x)$ is defined in the following way: we iterate $x$ under the model function $\tilde{g}$ $n$ times. In particular, if $\operatorname{addr}(x) = ({\underline s}, \ast)$, then $\tilde{g}^n(x)\in J_{(\sigma^n({\underline s}), \ast)}$. Then, we use the functions $\text{Proj}$ and $\theta$ to \textit{move} from the space $J(g)_\pm$ to the dynamical plane of $f$. More precisely, if $\text{Proj}(x)=z$, then $\theta(g^n(z)) \in \gamma^0_{(\sigma^n({\underline s}), \ast)}$. Then, we use the composition of $n$ inverse branches of $f$ of the form specified in \eqref{eq_inverse_semi} to obtain, thanks to Theorem \ref{thm_inverse_hand} and Observation \ref{obs_chain_inverse}, a point in $\gamma^n_{({\underline s}, \ast)}$, which is $\varphi_n(x)$; see Figure \ref{figure_defphi_n}.
\begin{figure}[htb]
\centering
\begingroup%
\makeatletter%
\providecommand\color[2][]{%
\errmessage{(Inkscape) Color is used for the text in Inkscape, but the package 'color.sty' is not loaded}%
\renewcommand\color[2][]{}%
}%
\providecommand\transparent[1]{%
\errmessage{(Inkscape) Transparency is used (non-zero) for the text in Inkscape, but the package 'transparent.sty' is not loaded}%
\renewcommand\transparent[1]{}%
}%
\providecommand\rotatebox[2]{#2}%
\newcommand*\fsize{\dimexpr\f@size pt\relax}%
\newcommand*\lineheight[1]{\fontsize{\fsize}{#1\fsize}\selectfont}%
\ifx\svgwidth\undefined%
\setlength{\unitlength}{396.8503937bp}%
\ifx\svgscale\undefined%
\relax%
\else%
\setlength{\unitlength}{\unitlength * \real{\svgscale}}%
\fi%
\else%
\setlength{\unitlength}{\svgwidth}%
\fi%
\global\let\svgwidth\undefined%
\global\let\svgscale\undefined%
\makeatother%
\begin{picture}(1,0.64285714)%
\lineheight{1}%
\setlength\tabcolsep{0pt}%
\put(0,0){\includegraphics[width=\unitlength,page=1]{diagram_conjugacy.pdf}}%
\put(0.06885787,0.55097077){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0$\\\end{tabular}}}}%
\put(0.10365825,0.59811102){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$x$\end{tabular}}}}%
\put(0.24948594,0.61641088){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\tilde{g}$\end{tabular}}}}%
\put(0.3476481,0.59800039){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\tilde{g}(x)$\end{tabular}}}}%
\put(0.46950918,0.61812749){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\tilde{g}$\end{tabular}}}}%
\put(0.58302438,0.5979423){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\tilde{g}^2(x)$\end{tabular}}}}%
\put(0.58211806,0.55474563){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0$\\\end{tabular}}}}%
\put(0.02707864,0.46642075){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0(x)$\\\end{tabular}}}}%
\put(0.01466578,0.37245731){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_1(x)$\\\end{tabular}}}}%
\put(0.07802527,0.262754){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_2(x)$\\\end{tabular}}}}%
\put(0.09477709,0.19495234){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_3(x)$\\\end{tabular}}}}%
\put(0.40205596,0.47180553){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0(\tilde{g}(x))$\\\end{tabular}}}}%
\put(0.39013192,0.35351285){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_1(\tilde{g}(x))$\\\end{tabular}}}}%
\put(0.37657855,0.2393222){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_2(\tilde{g}(x))$\\\end{tabular}}}}%
\put(0.7309367,0.61927964){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\tilde{g}$\end{tabular}}}}%
\put(0.82721269,0.59827995){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\tilde{g}^3(x)$\end{tabular}}}}%
\put(0.3500939,0.5552378){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0$\\\end{tabular}}}}%
\put(0.81890286,0.55608284){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0$\\\end{tabular}}}}%
\put(0.63276978,0.48642789){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0(\tilde{g}^2(x))$\\\end{tabular}}}}%
\put(0.65488494,0.35706613){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_1(\tilde{g}^2(x))$\\\end{tabular}}}}%
\put(0.86606505,0.46935196){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\varphi_0(\tilde{g}^3(x))$\\\end{tabular}}}}%
\put(0.09621703,0.02858903){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\gamma^3_{(\underline{s}, *)}$\end{tabular}}}}%
\put(0.32526093,0.11842221){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\gamma^2_{(\sigma(\underline{s}), *)}$\end{tabular}}}}%
\put(0.59959205,0.22581635){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\gamma^1_{(\sigma^2(\underline{s}), *)}$\end{tabular}}}}%
\put(0.83300808,0.3080798){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}$\gamma^0_{(\sigma^3(\underline{s}), *)}$\end{tabular}}}}%
\put(0.1238784,-0.01879229){\color[rgb]{0.4,0.4,0.4}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em}$\tau^3(\underline{s},*)$\end{tabular}}}}%
\put(0.36134649,0.07061125){\color[rgb]{0.4,0.4,0.4}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em}$\tau^2(\sigma(\underline{s}),*)$\end{tabular}}}}%
\put(0.64123399,0.17912979){\color[rgb]{0.4,0.4,0.4}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em}$\tau^1(\sigma^2(\underline{s}),*)$\end{tabular}}}}%
\put(0.86875572,0.25616289){\color[rgb]{0.4,0.4,0.4}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em}$\tau^0(\sigma^3(\underline{s}),*)$\end{tabular}}}}%
\put(0.21181309,0.03307293){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\begin{minipage}{0.00457828\unitlength}\raggedright \end{minipage}}}%
\put(0.69632983,0.44380553){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em} $f^{-1,[1]}_{(\sigma^2(\underline{s}),*)}$\end{tabular}}}}%
\put(0.2267857,0.4325813){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em} $f^{-1,[1]}_{(\underline{s},*)}$\end{tabular}}}}%
\put(0.22235448,0.32219301){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em} $f^{-1,[2]}_{(\underline{s},*)}$\end{tabular}}}}%
\put(0.22613423,0.24187309){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em} $f^{-1,[3]}_{(\underline{s},*)}$\end{tabular}}}}%
\put(0.45453303,0.43102334){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em} $f^{-1,[1]}_{(\sigma(\underline{s}),*)}$\end{tabular}}}}%
\put(0.44739388,0.31767172){\color[rgb]{0,0,0}\makebox(0,0)[lt]{\lineheight{1.25}\smash{\begin{tabular}[t]{l}\fontsize{9pt}{1em} $f^{-1,[2]}_{(\sigma(\underline{s}),*)}$\end{tabular}}}}%
\end{picture}%
\endgroup
\caption{A schematic of the functions and curves involved in the definition of the maps $\left\{\varphi_n\right\}_{n\in {\mathbb{N}}}$.}
\label{figure_defphi_n}
\end{figure}
Since all the functions involved in the definition of $\varphi_n$ are continuous, continuity of $\varphi_n$ will follow. Moreover, we use Theorem \ref{thm_orbifolds}, that is, orbifold expansion of $f$ in a neighbourhood of $J(f)$, to show that the functions $\varphi_n$ converge to a limit function $\varphi$ in Lemma \ref{lem_UHSC}. Finally, using that, since $J(g)$ is a Cantor bouquet and $g$ is of disjoint type, every point of $J(g)$ apart from some endpoints of its hairs is escaping (Observation \ref{obs_inverse_CB}), we show surjectivity of $\varphi$.
\begin{defn}[Functions $\varphi_n$]\label{def_phin} Following \ref{dis_setting_semi}, for each $n\geq 0$, we define the function $\varphi_n \colon J(g)_{\pm} \to J(f)$ as
$$\varphi_0(x)\defeq \theta(\text{Proj}(x)) \qquad \text{and}\qquad \varphi_{n+1}(x)\defeq f^{-1, [n+1]}_{\operatorname{addr}(x)}(\varphi_{n}(\tilde{g}(x))).$$
\end{defn}
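Operationally, and setting aside all analytic issues, the recursion is simple bookkeeping. The following Python-style sketch records it; the callables \texttt{theta}, \texttt{proj}, \texttt{addr}, \texttt{g\_tilde} and \texttt{inverse\_branch(address, n)} are hypothetical stand-ins for the maps fixed in \ref{dis_setting_semi}, so this describes the recursion of Definition \ref{def_phin} and the expansion \eqref{eq_expdef} below, without implementing those maps.
\begin{verbatim}
# Sketch of the recursion defining phi_n; theta, proj, addr, g_tilde and
# inverse_branch(address, n) are hypothetical stand-ins for the maps
# fixed above (bookkeeping only, no dynamics is implemented here).
def phi(n, x, theta, proj, addr, g_tilde, inverse_branch):
    """phi_n(x): push x forward n times in the model, transfer to the
    dynamical plane of f via theta o Proj, then pull the result back
    along n inverse branches of f chosen by the address of x."""
    if n == 0:
        return theta(proj(x))                     # phi_0 = theta o Proj
    # phi_n(x) = f^{-1,[n]}_{addr(x)} ( phi_{n-1}( g_tilde(x) ) )
    return inverse_branch(addr(x), n)(
        phi(n - 1, g_tilde(x), theta, proj, addr, g_tilde, inverse_branch))
\end{verbatim}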
We claim that these functions are well-defined. Indeed, the function $\varphi_0$ is well-defined since $\text{Proj}(J(g)_\pm)\subset J(g)$. For each $n\geq 1$, choose any $x\in J(g)_\pm$ and suppose that $\operatorname{addr}(x)=({\underline s}, \ast)$. Then, expanding definitions and using Observation \ref{obs_chain_inverse},
\begin{equation}\label{eq_expdef}
\begin{split}
\varphi_{n}(x)&= \left(f^{-1, [n]}_{\operatorname{addr}(x)}\circ f^{-1, [n-1]}_{\operatorname{addr}(\tilde{g}(x))} \circ \cdots \circ f^{-1,[1]}_{\operatorname{addr}(\tilde{g}^{n-1}(x))} \circ \varphi_{0} \circ \tilde{g}^n\right)(x)\\
&=\left(f^{-1, [n]}_{({\underline s}, \ast)}\circ f^{-1, [n-1]}_{(\sigma({\underline s}), \ast)} \circ \cdots \circ f^{-1, [1]}_{(\sigma^{n-1}({\underline s}), \ast)} \circ \varphi_{0} \circ \tilde{g}^n\right)(x)\\
&= f^{-n}_{({\underline s}, \ast)} (\varphi_0(\tilde{g}^n(x))).
\end{split}
\end{equation}
Since the equalities in \eqref{eq_expdef} only depend on $\operatorname{addr}(x)$ but not on the point $x$ itself, the action of $\varphi_{n}$ can be expressed in terms of the sets in \eqref{def_Jsmodel} as
\begin{equation}\label{eq_defphikabbr}
\varphi_n\vert_{J_{({\underline s}, \ast)}} \equiv f^{-n}_{({\underline s}, \ast)} \circ \varphi_0 \circ \tilde{g}^n \vert_{J_{({\underline s}, \ast)}}.
\end{equation}
Thus, since each $x \in J(g)_\pm$ belongs to the unique set $J_{({\underline s}, \ast)}$ for $({\underline s}, \ast)=\operatorname{addr}(x)$, $\varphi_n$ is a well-defined function for all $n\geq 0$. In particular, by Observation \ref{obs_chain_inverse}, for each $({\underline s}, \ast)\in \operatorname{Addr}(g)_\pm$,
\begin{equation}\label{eq_phin_gamman}
\varphi_n(J_{({\underline s},\ast)})=\gamma^n_{({\underline s},\ast)}.
\end{equation}
Moreover, by construction (apply $f$ to both sides of the recursion in Definition \ref{def_phin}), for all $n\geq 0$,
\begin{equation} \label{commute1}
\varphi_{n}\circ \tilde{g}= f \circ \varphi_{n+1}.
\end{equation}
\begin{prop}[Continuity of $\varphi_n$] \label{new_prop_contphi_k} For each $n\geq 0$, the function $\varphi_n \colon J(g)_{\pm} \rightarrow J(f)$ is continuous.
\end{prop}
\begin{proof}
The function $\varphi_0$ is continuous because it is the composition of two continuous functions, see Theorem \ref{thm_CB} and Proposition \ref{prop_contmodel}. Fix any $n\geq 1$, fix an arbitrary $x\in J(g)_\pm$, let $\operatorname{addr}(x)\eqdef ({\underline s}, \ast)$ and let $\Upsilon^n({\underline s}, \ast)$ be the interval in $\operatorname{Addr}(f)_\pm$ provided by Theorem \ref{thm_inverse_hand}. By Theorem \ref{thm_CB}\ref{item:addr}, $\theta$ establishes a one-to-one and order-preserving correspondence between $\operatorname{Addr}(f)$ and $\operatorname{Addr}(g)$. Hence, up to this correspondence, the topological spaces $\operatorname{Addr}(f)_\pm$ and $\operatorname{Addr}(g)_\pm$ are the same, and so, $\Upsilon^n({\underline s}, \ast)$ is an open interval in $(\operatorname{Addr}(g)_\pm, \tau_A)$. Let us consider the subset of $J(g)_\pm$
$$ A\defeq \bigcup_{(\underline{\tau}, \star)\in \Upsilon^n({\underline s}, \ast)} J_{(\underline{\tau},\star)}.$$ Then, by Observation \ref{obs_chain_inverse} and \eqref{eq_defphikabbr},
\begin{equation} \label{eq_pruebacontphi_n}
\varphi_n\vert_{ A} \equiv f^{-n}_{({\underline s}, \ast)} \circ \varphi_0 \circ \tilde{g}^n\vert_{ A}.
\end{equation}
It follows that $\varphi_n\vert_{A}$ is continuous as it is a composition of continuous functions: we have just shown that $\varphi_0$ is continuous, and $\tilde{g}$ is continuous by Proposition \ref{prop_contmodel}. Moreover, by Observation \ref{obs_chain_inverse}, it holds that $\varphi_0(\tilde{g}^n(A)) \subset f^n(\tau_n({\underline s}, \ast))$, and thus, the restriction of $f^{-n}_{({\underline s}, \ast)}$ to $\varphi_0 (\tilde{g}^n (A))$ is well-defined and continuous.
We are only left to show that $A$ contains an open neighbourhood of $x$. Recall that we defined in Proposition \ref{prop_mapC} an open map $\mathcal{C}:(\operatorname{Addr}(g)_\pm, \tau_A) \rightarrow ({\mathbb{R}} \setminus \mathbb {Q}\times \{-,+ \}, \tau_I)$. Since $\tilde{\psi}(x)=(t, \mathcal{C}({\underline s}, \ast)) \in B_\pm$ for some $t>0$ and $\mathcal{C}$ is an open map, $\mathcal{C}(\Upsilon^n({\underline s}, \ast))$ is an open interval in $({\mathbb{R}}\setminus \mathbb {Q} \times \{ -,+ \},\tau_I)$. Then, $U \defeq ((t_1, t_2) \times \mathcal{C}(\Upsilon^n({\underline s}, \ast))) \cap B_\pm$ is an open neighbourhood of $\tilde{\psi}(x)$ in $B_\pm$ for any choice of $t_1,t_2\in {\mathbb{R}}^+$ such that $t_1 <t<t_2$. Consequently, see \ref{dis_topology_model}, $\tilde{\psi}^{-1}(U)$ is an open neighbourhood of $x$ that lies in $A$.
\end{proof}
\subsection*{Convergence to the semiconjugacy}
For the hyperbolic orbifold ${\mathcal{O}}\Supset J(f)$ fixed in \ref{dis_setting_semi}, let $d_{\mathcal{O}}$ be the distance function defined from its orbifold metric; see \cite[\S3]{mio_orbifolds} for details. We shall now see that for any given point $x\in J(g)_\pm$, $d_{\mathcal{O}}(\varphi_{n+1}(x),\varphi_n(x))\rightarrow 0$ as $n\rightarrow \infty$.
\begin{lemma}[The functions $\varphi_n$ form a Cauchy sequence] \label{lem_UHSC} There exist constants $\mu>0$ and $\Lambda>1$ such that for each $x \in J(g)_\pm$,
\begin{enumerate}[label=(\Alph*)]
\item \label{itemA_cauchy} $d_{\mathcal{O}}(\text{Proj}(x),\varphi_{0}(x))<\mu$,
\item \label{itemB_cauchy} $d_{\mathcal{O}}(\varphi_{n+1}(x),\varphi_n(x))\leq \frac{\mu}{\Lambda^{n}}$ for every $n \geq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
Fix $x\in J(g)_\pm$ and let $z\defeq \text{Proj}(x)$. In particular, $\varphi_0(x)=\theta(z)$. By Theorem~\ref{thm_CB}\ref{item:inclusions}\ref{item:annulus},
\begin{equation}\label{eq_thetaJk}
\theta(J(g))\cup J(g) \subset {\mathbb{C}} \setminus {\mathbb{D}}_{K}\subset {\mathcal{O}} \quad \text{ and } \quad \theta(z) \in \overline{A(M^{-1} \vert z\vert, M\vert z\vert)},
\end{equation}
for some $M>0$ that does not depend on the point $z$.
Let us choose a constant $\tilde{K}$ with $1<\tilde{K}<M$. Then, by Theorem \ref{thm_orbifolds}\ref{lem_annulus}, there exists $\tilde{R}>0$ so that if $\overline{A(r, \tilde{K}r)}\subset A(r/\tilde{K},r \tilde{K}^2) \subset {\mathcal{O}}$, then the ${\mathcal{O}}$-distance between any two points in $\overline{A(r, \tilde{K}r)}$ is less than $\tilde{R}$.
We want to combine this result with \eqref{eq_thetaJk} to get an upper bound for $d_{\mathcal{O}}(z,\theta(z))$ by expressing the annulus in \eqref{eq_thetaJk} as a finite union of annuli of the form $A(r, \tilde{K}r)$ for some $r>0$. More specifically, let $N$ be the smallest number for which $\tilde{K}^N\geq M^{2}$. That is, $N\defeq \left \lceil{\frac{2\log M}{\log \tilde{K}}}\right \rceil $, and let $r\defeq M^{-1}\vert z \vert$. Then, by \eqref{eq_thetaJk} and the choice of $\tilde{K}$,
\begin{equation}\label{eq_annuli}
z,\: \theta(z) \in \bigcup_{i=1}^{N} \overline{A\left(\tilde{K}^{i-1}r, \tilde{K}^{i}r \right)} \subset \bigcup_{i=1}^{N} A\left(\tilde{K}^{i-2}r, \tilde{K}^{i+1}r \right) \subset{\mathcal{O}}.
\end{equation}
Thus, since the constant $N$ does not depend on the point $z\in J(g)$, we have that for all $x\in J(g)_\pm$, $d_{\mathcal{O}}(\varphi_0(x),\text{Proj}(x))\leq N \cdot \tilde{R} \eqdef\mu_1$, and item \ref{itemA_cauchy} is proved.
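For instance, with the purely illustrative values $M=8$ and $\tilde{K}=2$, we get $N=\left\lceil 2\log 8/\log 2\right\rceil=6$: six overlapping annuli of the form $A(\tilde{K}^{i-1}r,\tilde{K}^{i}r)$ already cover $\overline{A(8^{-1}\vert z\vert, 8\vert z\vert)}$, and then $\mu_1=6\tilde{R}$.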
We now prove item \ref{itemB_cauchy}. Let $\operatorname{addr}(x)=({\underline s}, \ast)$ and fix any $n\in {\mathbb{N}}$. Recall that, by \eqref{eq_phin_gamman} and since $\gamma^n_{({\underline s}, \ast)} \subseteq \gamma^{n+1}_{({\underline s}, \ast)}$ by Theorem \ref{thm_signed}\ref{item:tails_bijection},
$$\varphi_n(x), \varphi_{n+1}(x) \in \gamma^{n+1}_{({\underline s}, \ast)}.$$
Thus, the ${\mathcal{O}}$-length of the piece of $\gamma^{n+1}_{({\underline s}, \ast)}$ that joins $\varphi_n(x)$ and $\varphi_{n+1}(x)$ provides an upper bound for the ${\mathcal{O}}$-distance between these two points. Let $\delta(n)$ be that curve. Then, using \eqref{eq_defphikabbr} and \eqref{commute1}, it holds that $$f^n(\varphi_n(x))=\varphi_0(\tilde{g}^n(x)), \qquad f^n(\varphi_{n+1}(x))=\varphi_1(\tilde{g}^n(x)),$$ and
$\delta(1)\defeq f^n(\delta(n)) \subset \gamma^1_{(\sigma^n({\underline s}), \ast)}$ is a curve with endpoints $\varphi_0(\tilde{g}^n(x))$ and $\varphi_1(\tilde{g}^n(x))$ and such that $f^{-n}_{({\underline s}, \ast)}(\delta(1))=\delta(n)$; see Figure \ref{figure_defphi_n}. Since $f \in{\mathcal{B}}$ and is strongly postcritically separated, by Corollary \ref{cor_uniform}, any upper bound for $\ell_{\mathcal{O}}(\delta(1))$ is also an upper bound for $\ell_{\mathcal{O}}(\delta(n))$. In particular, if we find a constant $C$ that bounds the ${\mathcal{O}}$-length of the sub-curve in $\gamma^1_{\operatorname{addr}(y)}$ between $\varphi_0(y)$ and $\varphi_1(y)$ for all $y\in J(g)_\pm$, with $C$ independent of the point $y$, then \ref{itemB_cauchy} will follow. However, those sub-curves are pieces of ray tails, and in principle they might not be rectifiable. Instead, we find curves in their post-$0$-homotopy class (see Definition \ref{def_post0}) with bounded orbifold length. More specifically:
\begin{Claim}
There exists a constant $\mu_2>0$ such that for each $x\in J(g)_\pm$ and $n\geq 0$, if $\delta(1)$ is the piece of $\gamma^1_{\operatorname{addr}(x)}$ joining $\varphi_0(x)$ and $\varphi_1(x)$, then there exists $\tilde{\delta}(1) \in [\delta(1)]_{_0}$ with $\ell_{{\mathcal{O}}}(\tilde{\delta}(1))\leq \mu_2.$
\end{Claim}
\begin{subproof}
Fix an arbitrary $x\in J(g)_\pm$ and let $z\defeq \text{Proj}(x)$. If $\varphi_{0}(x)=\varphi_{1}(x)$, the claim holds trivially. Otherwise, since $f\vert_{\gamma_{\operatorname{addr}(x)}^1}$ is injective,
\begin{equation}
\varphi_{0}(x) \neq \varphi_{1}(x) \iff f(\varphi_0(x))=f(\theta(z))\neq \theta(g(z))=f(\varphi_1(x)).
\end{equation}
But by Theorem \ref{thm_CB}\ref{item:boundedC}, the inequality on the right only occurs when the piece of ray $\xi$ joining $f(\varphi_1(x))$ and $f(\varphi_0(x))$ belongs to a compact set $C\subset ({\mathbb{C}}\setminus {\mathbb{D}}_L) \cap f^{-1}({\mathbb{C}}\setminus {\mathbb{D}}_L) \subset {\mathcal{O}}$.
Using Theorem \ref{cor_homot2}, we aim to find curves post-$0$-homotopic to the pieces of dynamic rays totally contained in $C$ with uniformly bounded ${\mathcal{O}}$-length. Observe that $C \cap P(f) \subset I(f)$, since by the choice of the constant $K$ in \eqref{eq_dk2}, $(P(f)\setminus I(f)) \Subset {\mathbb{D}}_K$. Moreover, by discreteness of $P_J$, $C \cap P_J$ is a finite set, and since, by Theorem \ref{thm_CB}, $f$ is criniferous, there exists at least one ray tail connecting each point of $C \cap P_J$ to infinity; see \cite[Theorem 1.3]{mio_newCB}. We note that $f^{-1}({\mathbb{C}}\setminus {\mathbb{D}}_L)$ does not intersect $S(f)$, and so its connected components form a collection of tracts $\mathcal{T}$ for $f$. Then, see \cite[Lemma 2.1]{lasse_dreadlocks}, at most finitely many pieces of the tracts in $\mathcal{T}$ intersect $C$, say $\{T_1, \ldots, T_m\}$. Each of these pieces is simply connected and its boundary is an analytic curve, and hence locally connected. Thus, we can apply Theorem \ref{cor_homot2} to the closure of each $T_i$ in $C$, to obtain a constant $L_i$ such that for any (connected) piece of ray tail $\xi \subset \overline{T_i}\cap C$, there exists $\delta\in [\xi]_{_0}$ with $\ell_{{\mathcal{O}}}(\delta)\leq L_i.$ Letting $\mu_2\defeq \max_{1\leq i\leq m}L_i$, and using Corollary \ref{cor_uniform}, the claim follows.
\end{subproof}
Then, for each $x\in J(g)_\pm$ and $n\geq 0$, if $\delta(1)$ is the piece of $\gamma^1_{(\sigma^n({\underline s}), \ast)}$ joining $\varphi_0(\tilde{g}^n(x))$ and $\varphi_1(\tilde{g}^n(x))$, then there exists $\tilde{\delta}(1) \in [\delta(1)]_{_0}$ with $\ell_{{\mathcal{O}}}(\tilde{\delta}(1))\leq \mu_2.$ Hence, by Proposition~\ref{cor_homot}, if $\delta(n)\subset f^{-n}_{({\underline s}, \ast)}(\delta(1))$ is the curve joining $\varphi_n(x)$ and $\varphi_{n+1}(x)$, then there exists a unique curve $\tilde{\delta}(n) \subseteq f^{-n}_{({\underline s}, \ast)}(\tilde{\delta}(1))$ satisfying $\tilde{\delta}(n) \in [\delta(n)]_{_0}$. In particular, $\tilde{\delta}(n)$ has endpoints $\varphi_n(x)$ and $\varphi_{n+1}(x)$, and moreover, by Corollary \ref{cor_uniform}, there exists a constant $\Lambda>1$, which does not depend on $x$, such that
$$d_{\mathcal{O}}(\varphi_{n+1}(x),\varphi_n(x))\leq \ell_{\mathcal{O}}(\tilde{\delta}(n)) \leq \frac{\ell_{\mathcal{O}}(\tilde{\delta}(1))}{\Lambda^{n}} \leq \frac{\mu_2}{\Lambda^{n}}.$$
Letting $\mu\defeq \max\{\mu_1, \mu_2\}$, the lemma follows.
\end{proof}
\noindent Finally, we state and prove a more detailed version of Theorem \ref{thm_main_intro}.
\begin{thm}\label{thm_main} Let $f\in {\mathcal{CB}}$ be strongly postcritically separated, let $J(g)_\pm$ be a model space for $f$ and let $\tilde{g}$ be its associated model function. Then, there exists a continuous surjective function
\begin{equation*}
\varphi \colon J(g)_{\pm} \rightarrow J(f) \qquad \text{ so that } \qquad f\circ\varphi = \varphi\circ \tilde{g}
\end{equation*}
and $\varphi(I(g)_{\pm})=I(f)$; in addition, there is $K\in {\mathbb{N}}$ such that $\# \varphi^{-1}(z)< K$ for every $z\in I(f)$.
Moreover, for each $({\underline s}, \ast) \in \operatorname{Addr}(g)_\pm$, the restriction map $\varphi\colon J_{({\underline s}, \ast)}\rightarrow\overline{\Gamma({\underline s}, \ast)}$ is a bijection, and so $\overline{\Gamma({\underline s}, \ast)}$ is a canonical ray together with its endpoint.
\end{thm}
\begin{observation}\label{obs_orders_semi} We have implicitly stated in Theorem \ref{thm_main} that $\varphi$ establishes a one-to-one correspondence between $\operatorname{Addr}(g)_\pm$ and $\operatorname{Addr}(f)_\pm$, since with some abuse of notation, we have stated that for each $({\underline s}, \ast)\in \operatorname{Addr}(g)_\pm$, $J_{({\underline s}, \ast)}\subset J(g)_\pm$ is mapped to $\overline{\Gamma({\underline s}, \ast)} \subset J(f)$ for $({\underline s}, \ast) \in \operatorname{Addr}(f)_\pm$. Here, $\overline{\Gamma({\underline s}, \ast)}$ denotes the closure of $\Gamma({\underline s}, \ast)$ in ${\mathbb{C}}$. In particular, we are claiming that $\varphi$ is an order-preserving continuous map.
\end{observation}
\begin{proof}[Proof of Theorem \ref{thm_main}]
Since by Observation \ref{obs_unique_model}, any two models for $f$ are conjugate, we may assume without loss of generality that $g$ is the disjoint type function fixed in \ref{dis_setting_semi}. Then, $J(g)_\pm$ and $\tilde{g}$ are also fixed in \ref{dis_setting_semi}. Let $\{\varphi_n\}_{n\geq 0}$ be the sequence of functions given by Definition \ref{def_phin} following \ref{dis_setting_semi}. By Proposition \ref{new_prop_contphi_k} and Lemma \ref{lem_UHSC}, $\{\varphi_n\}_{n\geq 0}$ is a uniformly Cauchy sequence of continuous functions. Since the orbifold metric in ${\mathcal{O}}$ is complete, they converge uniformly to a continuous limit function
$\varphi :J(g)_{\pm} \rightarrow{\mathcal{O}},$ which by the functional equation (\ref{commute1}) satisfies
\begin{equation} \label{commutative}
\varphi\circ \tilde{g}= f \circ \varphi.
\end{equation}
\noindent By Lemma \ref{lem_UHSC},
\begin{equation*}\label{eq_sum}
\begin{split}
d_{\mathcal{O}}(\varphi(x), \text{Proj}(x)) &\leq d_{\mathcal{O}}(\varphi(x),\varphi_0(x))+ d_{\mathcal{O}}(\varphi_0(x), \text{Proj}(x)) \\ & \leq \sum_{k=0} ^{\infty} d_{\mathcal{O}}(\varphi_{k+1}(x),\varphi_k(x)) +\mu
\leq 2\sum_{j=0}^{\infty}\frac{\mu}{\Lambda^{j}}=\frac{2\mu\Lambda}{\Lambda-1}.
\end{split}
\end{equation*}
\noindent This means for sequences $\lbrace x_n \rbrace_{n\in {\mathbb{N}}} \subset J(g)_{\pm}$ that as $n\rightarrow \infty$,
\begin{equation}\label{corrId}
\varphi(x_n)\rightarrow\infty \quad \text{ if and only if } \quad \text{Proj}(x_n)\rightarrow\infty.
\end{equation}
In particular, this holds when $\lbrace x_n \rbrace_{n\in {\mathbb{N}}}=\lbrace \tilde{g}^{n}(x) \rbrace_{n\in {\mathbb{N}}}$ is the orbit of some $x \in I(g)_\pm$. Using that, by \eqref{commutative}, $\varphi(\tilde{g}^{n}(x))=f^{n}(\varphi(x))$, we have that $x\in I(g)_{\pm}$ if and only if $\varphi(x)\in I(f)$. Equivalently,
\begin{equation}\label{eq_quasisurjective}
\varphi(I(g)_\pm) \subseteq I(f) \quad \text{ and } \quad \varphi(J(g)_\pm \setminus I(g)_\pm) \subseteq J(f)\setminus I(f).
\end{equation}
Recall from Observation \ref{obs_inverse_CB} that since $g$ is a disjoint type function whose Julia set is a Cantor bouquet, each of the sets $J_{\underline s}$ from \eqref{eq_Js} is a dynamic ray together with its endpoint, and hence, contains at most one non-escaping point, namely its endpoint $e_{\underline s}$. Thus, for each $\ast \in \{-,+\}$,
\begin{equation}\label{eq_e_s}
J_{({\underline s}, \ast)} \setminus I_{({\underline s}, \ast)} \subseteq \{(e_{\underline s}, \ast)\}.
\end{equation}
\begin{Claim}\label{claim_main}
For each $({\underline s},\ast)\in \operatorname{Addr}(g)_\pm$, $\varphi\colon I_{({\underline s}, \ast)}\rightarrow \Gamma({\underline s}, \ast)\cap I(f)$ is a bijection.
\end{Claim}
\begin{subproof}
For each $n\in {\mathbb{N}}$, let $\hat{\gamma}^n_{({\underline s}, \ast)}\defeq\gamma^n_{({\underline s}, \ast)}\cap I(f)$, and recall from Theorem~\ref{thm_signed} that $\gamma^n_{({\underline s}, \ast)}\setminus \hat{\gamma}^n_{({\underline s}, \ast)}$ is at most the endpoint of $\gamma^n_{({\underline s}, \ast)}$. Recall from Observation \ref{obs_inverse_CB} and Theorem \ref{thm_signed} that
\begin{equation*}
I_{({\underline s},\ast)}=\bigcup_{n\geq 0}\beta^n_{\underline s}\times \{\ast\} \quad \text{ and } \quad \Gamma({\underline s},\ast)\cap I(f)=\bigcup_{n\geq 0}\hat{\gamma}^n_{({\underline s},\ast)},
\end{equation*}
where $\beta^n_{\underline s}$ is a ray tail or dynamic ray.
If for each $n\geq 0$, we denote $\beta=\beta(n)\defeq \beta^n_{\underline s}\times \{\ast\}$, then $\varphi\colon \beta \to \hat{\gamma}^n_{({\underline s},\ast)}$ is a bijection: for all $m\geq n$, by definition of $\varphi_m$ and Theorem \ref{thm_CB},
$$\varphi_{m}(\beta)= f^{-n}_{({\underline s}, \ast)}\circ f^{n-m}\circ \theta \circ \tilde{g}^{m} (\beta)=f^{-n}_{({\underline s}, \ast)}\circ\theta\circ \tilde{g}^{n}(\beta)=f^{-n}_{({\underline s}, \ast)}(\hat{\gamma}^0_{(\sigma^n({\underline s}),\ast)}),$$
and so $\varphi\vert_ \beta=f^{-n}_{({\underline s}, \ast)}\circ\theta\circ \tilde{g}^{n}\vert_\beta$, which by Theorem \ref{thm_CB} and Observations \ref{obs_inverse_CB} and \ref{obs_chain_inverse}, is a composition of bijections.
\end{subproof}
The equality $\varphi(I(g)_\pm)=I(f)$ is now a consequence of the claim together with Theorem~\ref{thm_signed} and \eqref{eq_quasisurjective}. In addition, for each $({\underline s}, \ast)\in \operatorname{Addr}(g)_\pm$, if $(e_{\underline s}, \ast)\in J(g)_\pm \setminus I(g)_\pm$, then by \eqref{eq_quasisurjective} and \eqref{eq_e_s}, $\varphi((e_{\underline s}, \ast))\in J(f) \setminus I(f)$, and so by the previous claim and continuity of $\varphi$, $\varphi\vert_{ J_{({\underline s}, \ast)}}$ is injective and
\begin{equation}\label{eq_phiJinGamma}
\Gamma({\underline s}, \ast)\subset \varphi(J_{({\underline s},\ast)})\subset \overline{\Gamma({\underline s}, \ast)}.
\end{equation}
In order to prove surjectivity of $\varphi$, let $J(g)_{\pm}\cup\lbrace\tilde{\infty} \rbrace$ be the one point compactification of $J(g)_{\pm}$ provided by Lemma \ref{compactification}, and denote by $J(f) \cup \lbrace \infty \rbrace$ the compactification of $J(f)$ as a subset of the Riemann sphere $\widehat{{\mathbb{C}}}$. By Lemma \ref{compactification} and \eqref{corrId}, given a sequence $\lbrace x_n \rbrace_{n\in {\mathbb{N}}} \subset J(g)_{\pm}\cup\lbrace\tilde{\infty}\rbrace$, we have
\begin{equation}\label{eq_iff}
\lim_{n \rightarrow \infty} x_n=\tilde{\infty} \quad \Longleftrightarrow \quad \lim_{n \rightarrow \infty}\text{Proj}(x_n)=\infty \quad \Longleftrightarrow \quad \lim_{n \rightarrow \infty}\varphi(x_n)=\infty.
\end{equation}
Since by Lemma \ref{compactification} $J(g)_{\pm}\cup\lbrace\tilde{\infty} \rbrace$ is a sequential space, and so is $\widehat{{\mathbb{C}}}$, the notions of continuity and sequential continuity for functions between these spaces are equivalent. Therefore, by \eqref{eq_iff}, we can extend $\varphi$ to a continuous map $\hat{\varphi}: J(g)_{\pm} \cup \lbrace \tilde{\infty} \rbrace \rightarrow J(f) \cup \lbrace \infty \rbrace$ by defining $\hat{\varphi}(\tilde{\infty})=\infty$. By continuity of $\hat{\varphi}$, we have that $\hat{\varphi}\left(J(g)_{\pm}\cup\lbrace\tilde{\infty} \rbrace\right)$ is compact. By definition of $\hat{\varphi}$, it must be the case that $\hat{\varphi}(J(g)_{\pm})=\varphi(J(g)_{\pm})$, and by removing $\lbrace\infty\rbrace$ from the codomain of $\hat{\varphi}$, we can conclude that $\varphi(J(g)_{\pm})$ is (relatively) closed in $J(f)$ with respect to the original topologies. By this and since the Julia set is the closure of the escaping set for any function in ${\mathcal{B}}$, \cite{eremenkoclassB}, we have
\begin{equation*}
I(f) =\varphi(I(g)_\pm) \subset \varphi (J(g)_\pm) \subset J(f)= \overline{I(f)}.
\end{equation*}
Consequently, $\varphi (J(g)_\pm)$ must be equal to $J(f)$, showing that $\varphi$ is surjective. Moreover, arguing exactly the same way, we can see that for each $({\underline s}, \ast)\in \operatorname{Addr}(f)_\pm$, the set $\varphi(J_{({\underline s}, \ast)})$ is closed in $J(f)$, and hence, by \eqref{eq_phiJinGamma}, $\varphi\colon J_{({\underline s}, \ast)}\rightarrow\overline{\Gamma({\underline s}, \ast)}$ is a bijection. In particular, $\overline{\Gamma({\underline s}, \ast)}$ is a canonical ray together with its endpoint.
Finally, each $z\in \mathcal{S}\supset I(f)$ belongs to $\#\operatorname{Addr}(z)_\pm=\prod^{\infty}_{j=0}\deg(f,f^{j}(z))$ canonical curves, see Definition \ref{defn_signedaddr}. By the claim in this proof, for each $z\in I(f)$, $\#\varphi^{-1}(z)=\#\operatorname{Addr}(z)_\pm$. Moreover, since $f$ is strongly postcritically separated, by items \ref{itema_defsps} and \ref{itemb_defsps} in Definition~\ref{def_strongps}, there exist constants $N,c\in {\mathbb{N}}$ such that for each $z\in J(f)$, $\#(\operatorname{Orb}^+(z)\cap \operatorname{Crit}(f)) \leq c$ and $\deg(f,w)\leq N$ for all $w\in \operatorname{Crit}(f)$. Hence, letting $K\defeq N^c$, the claim in the statement follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_main_intro}]
It is a direct consequence of Theorem \ref{thm_main}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor_intro}]
Note that $f\in {\mathcal{CB}}$ is in particular criniferous, see Theorem \ref{thm_CB}, and since it is also strongly postcritically separated, it has no asymptotic values in its Julia set. Hence, by Theorem \ref{thm_signed}, proving that all canonical rays of $f$ land suffices to conclude that all its dynamic rays land. Since by Theorem \ref{thm_main}, for each $({\underline s},\ast)\in \operatorname{Addr}(f)_\pm$, $ \overline{\Gamma({\underline s},\ast)}$ is a canonical ray together with its landing point, the corollary follows.
\end{proof}
\begin{proof}[Proofs of Theorems \ref{thm_1intro} and \ref{thm_main_intro1}] If $f\in {\mathcal{B}}$ is a finite composition of functions of finite order, then $f\in {\mathcal{CB}}$, see \cite[Proposition 4.5]{mio_newCB}. Moreover, if $S(f)$ is a finite collection of critical values in $I(f)$, $f$ has bounded criticality on $J(f)$, and $\vert w-z\vert \geq \epsilon\max\{\vert z \vert, \vert w \vert\}$ for some $\epsilon>0$ and all distinct $z,w \in P(f)$, then $f$ is strongly postcritically separated, see Definition \ref{def_strongps}. Thus, Theorems \ref{thm_1intro} and \ref{thm_main_intro1} respectively follow from Corollary \ref{cor_intro} and Theorem \ref{thm_main_intro}.
\end{proof}
\bibliographystyle{alpha}
| {
"timestamp": "2020-10-22T02:12:09",
"yymm": "1905",
"arxiv_id": "1905.03778",
"language": "en",
"url": "https://arxiv.org/abs/1905.03778",
"abstract": "In recent years, there has been significant progress in the understanding of the dynamics of transcendental entire functions with bounded postsingular set. In particular, for certain classes of such functions, a complete description of their topological dynamics in terms of a simpler model has been given inspired by methods from polynomial dynamics. In this paper, and for the first time, we give analogous results in cases when the postsingular set is unbounded. More specifically, we show that if $f$ is of finite order, has bounded criticality on its Julia set $J(f)$, and its singular set consists of finitely many critical values that escape to infinity and satisfy a certain separation condition, then $J(f)$ is a collection of dynamic rays or hairs, that split at critical points, together with their corresponding landing points. In fact, our result holds for a much larger class of functions with bounded singular set. Moreover, this result is a consequence of a significantly more general one: we provide a topological model for the action of $f$ on its Julia set.",
"subjects": "Dynamical Systems (math.DS); Complex Variables (math.CV)",
"title": "Splitting hairs with transcendental entire functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717472321277,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449313754345
} |
https://arxiv.org/abs/1402.0290 | Finite time blowup for an averaged three-dimensional Navier-Stokes equation | The Navier-Stokes equation on the Euclidean space $\mathbf{R}^3$ can be expressed in the form $\partial_t u = \Delta u + B(u,u)$, where $B$ is a certain bilinear operator on divergence-free vector fields $u$ obeying the cancellation property $\langle B(u,u), u\rangle=0$ (which is equivalent to the energy identity for the Navier-Stokes equation). In this paper, we consider a modification $\partial_t u = \Delta u + \tilde B(u,u)$ of this equation, where $\tilde B$ is an averaged version of the bilinear operator $B$ (where the average involves rotations and Fourier multipliers of order zero), and which also obeys the cancellation condition $\langle \tilde B(u,u), u \rangle = 0$ (so that it obeys the usual energy identity). By analysing a system of ODE related to (but more complicated than) a dyadic Navier-Stokes model of Katz and Pavlovic, we construct an example of a smooth solution to such an averaged Navier-Stokes equation which blows up in finite time. This demonstrates that any attempt to positively resolve the Navier-Stokes global regularity problem in three dimensions has to use finer structure on the nonlinear portion $B(u,u)$ of the equation than is provided by harmonic analysis estimates and the energy identity. We also propose a program for adapting these blowup results to the true Navier-Stokes equations. | \section{Introduction}
\subsection{Statement of main result}
The purpose of this paper is to formalise the ``supercriticality'' barrier for the (infamous) global regularity problem for the Navier-Stokes equation, using a blowup solution to a certain averaged version of Navier-Stokes equation to demonstrate that any proposed positive solution to the regularity problem which does not use the finer structure of the nonlinearity cannot possibly be successful. This barrier also suggests a possible route to provide a negative answer to this problem, that is to say it suggests a program for constructing a blowup solution to the true Navier-Stokes equations.
The barrier is not particularly sensitive to the precise formulation\footnote{See \cite{tao-local} for an analysis of the relationship between different formulations of the Navier-Stokes regularity problem in three dimensions. It is likely that our main results also extend to higher dimensions than three, although we will not pursue this matter here.} of the regularity problem, but to state the results in the cleanest fashion we will take the homogeneous global regularity problem in the Euclidean setting in three spatial dimensions as our formulation:
\begin{conjecture}[Navier-Stokes global regularity]\label{nsconj} \ \cite[(A)]{fefferman} Let $\nu > 0$, and let $u_0: \R^3 \to \R^3$ be a divergence-free vector field in the Schwartz class. Then there exist a smooth vector field $u: [0,+\infty) \times \R^3 \to \R^3$ (the velocity field) and smooth function $p: [0,+\infty) \times \R^3 \to \R$ (the pressure field) obeying the equations
\begin{equation}\label{ns}
\begin{split}
\partial_t u + (u \cdot \nabla) u &= \nu \Delta u - \nabla p \\
\nabla \cdot u &= 0 \\
u(0,\cdot) &= u_0
\end{split}
\end{equation}
as well as the finite energy condition $u \in L^\infty_t L^2_x([0,T] \times \R^3)$ for every $0 < T < \infty$.
\end{conjecture}
By applying the rescaling $\tilde u(t,x) := \nu u( \nu t, x )$, $\tilde p(t,x) := \nu^2 p( \nu t, x)$ we may normalise $\nu=1$ (note that there is no smallness requirement on the initial data $u_0$), and we shall do so henceforth.
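Indeed, a quick check (all functions on the right-hand side being evaluated at $(\nu t, x)$): if $(u,p)$ solves \eqref{ns} with $\nu = 1$, then
$$ \partial_t \tilde u + (\tilde u \cdot \nabla) \tilde u = \nu^2 \left( \partial_t u + (u \cdot \nabla) u \right) = \nu^2 \left( \Delta u - \nabla p \right) = \nu \Delta \tilde u - \nabla \tilde p,$$
so that $(\tilde u, \tilde p)$ solves \eqref{ns} with viscosity $\nu$ and initial data $\nu u_0$; applying this with $u_0$ replaced by $\nu^{-1} u_0$ gives the claimed reduction.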
To study this conjecture, we perform some standard computations to eliminate the role of the pressure $p$, and to pass from the category of smooth (classical) solutions to the closely related category of \emph{mild} solutions in a high regularity class. It will not matter too much what regularity class we take here, as long as it is subcritical, but for sake of concreteness (and to avoid some very minor technicalities) we will take a quite high regularity space, namely the Sobolev space $H^{10}_\df(\R^3)$ of (distributional) vector fields $u: \R^3 \to \R^3$ with $H^{10}$ regularity (thus the weak derivatives $\nabla^j u$ are square-integrable for $j=0,\dots,10$) and which are divergence free in the distributional sense: $\nabla \cdot u = 0$. By using the $L^2$ inner product\footnote{We will not use the $H^{10}_\df$ inner product in this paper, thus all appearances of the $\langle,\rangle$ notation should be interpreted in the $L^2$ sense.}
$$ \langle u, v \rangle := \int_{\R^3} u \cdot v\ dx$$
on vector fields $u, v: \R^3\to \R^3$, the dual $H^{10}_\df(\R^3)^*$ may be identified with the negative-order Sobolev space $H^{-10}_\df(\R^3)$ of divergence-free distributions $u: \R^3 \to \R^3$ of $H^{-10}$ regularity. We introduce the \emph{Euler bilinear operator} $B: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ via duality as
$$ \langle B(u,v), w \rangle :=
-\frac{1}{2} \int_{\R^3} \left(\left(\left(u \cdot \nabla\right) v\right) \cdot w\right) + \left(\left(\left(v \cdot \nabla\right) u\right) \cdot w\right)\ dx$$
for $u,v,w \in H^{10}_\df(\R^3)$; it is easy to see from Sobolev embedding that this operator is well defined. More directly, we can write
$$ B(u,v) = -\frac{1}{2} P \left[ (u \cdot \nabla) v + (v \cdot \nabla) u \right]$$
where $P$ is the \emph{Leray projection} onto divergence-free vector fields, defined on square-integrable $u: \R^3 \to \R^3$ by the formula
$$ Pu_i := u_i - \Delta^{-1} \partial_i \partial_j u_j$$
with the usual summation conventions, where $\Delta^{-1} \partial_i \partial_j$ is defined as the Fourier multiplier with symbol $\frac{\xi_i \xi_j}{|\xi|^2}$. Note that $B(u,v)$ takes values in $L^2(\R^3)$ (and not just in $H^{10}_\df(\R^3)^*$) when $u,v \in H^{10}_\df(\R^3)$. We refer to the form $(u,v,w) \mapsto\langle B(u,v),w \rangle$ as the \emph{Euler trilinear form}. As is well known, we have the important cancellation law
\begin{equation}\label{cancellation}
\langle B(u,u), u \rangle = 0
\end{equation}
for all $u \in H^{10}_\df(\R^3)$, as can be seen by a routine integration by parts exploiting the divergence-free nature of $u$, with all manipulations being easily justified due to the high regularity of $u$. It will also be convenient to express the Euler trilinear form in terms of the Fourier transform $\hat u(\xi) := \int_{\R^3} u(x) e^{-2\pi i x \cdot \xi}\ dx$ as
\begin{equation}\label{bwing}
\langle B(u,v), w \rangle = -\pi i \int_{\xi_1+\xi_2+\xi_3=0} \Lambda_{\xi_1,\xi_2,\xi_3}( \hat u(\xi_1), \hat v(\xi_2), \hat w(\xi_3) )
\end{equation}
for all $u,v,w \in H^{10}_\df(\R^3)$,
where we adopt the shorthand
$$\int_{\xi_1+\xi_2+\xi_3=0} F(\xi_1,\xi_2,\xi_3) := \int_{\R^3} \int_{\R^3} F(\xi_1,\xi_2,-\xi_1-\xi_2)\ d\xi_1 d\xi_2$$
and $\Lambda_{\xi_1,\xi_2,\xi_3}: \xi_1^\perp \times \xi_2^\perp \times \xi_3^\perp \to \R$ is the trilinear form
\begin{equation}\label{lambda-def}
\Lambda_{\xi_1,\xi_2,\xi_3}( X_1, X_2, X_3 ) := (X_1 \cdot \xi_2)(X_2 \cdot X_3) + (X_2 \cdot \xi_1) (X_1 \cdot X_3),
\end{equation}
defined for vectors $X_i$ in the orthogonal complement $\xi_i^\perp := \{ X_i \in \R^3: X_i \cdot \xi_i = 0 \}$ of $\xi_i$ for $i=1,2,3$; note the divergence-free condition ensures that $\hat u(\xi_1) \in \xi_1^\perp$ for (almost) all $\xi_1 \in \R^3$, and similarly for $v$ and $w$. This also provides an alternate way to establish \eqref{cancellation}.
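For the reader's convenience, we also record the physical-space derivation of \eqref{cancellation}. Since $P$ is self-adjoint with $Pu = u$ for divergence-free $u$ (as is immediate from the Fourier representation $\widehat{Pu}(\xi) = \hat u(\xi) - \frac{\xi (\xi \cdot \hat u(\xi))}{|\xi|^2}$), the Leray projection in the definition of $B$ may be discarded when paired against $u$, and
$$ \langle B(u,u), u \rangle = -\int_{\R^3} ((u \cdot \nabla) u) \cdot u\ dx = -\int_{\R^3} u_j \partial_j \frac{|u|^2}{2}\ dx = \int_{\R^3} (\partial_j u_j) \frac{|u|^2}{2}\ dx = 0,$$
where the final integration by parts incurs no boundary terms thanks to the regularity and decay available at the $H^{10}_\df$ level.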
Given a Schwartz divergence-free vector field $u_0: \R^3 \to \R^3$ and a time interval $I \subset [0,+\infty)$ containing $0$, we define a \emph{mild $H^{10}$ solution to the Navier-Stokes equations} (or \emph{mild solution} for short) with initial data $u_0$ to be a continuous map $u: I \to H^{10}_\df(\R^3)$ obeying the integral equation
\begin{equation}\label{integral}
u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} B(u(t'),u(t'))\ dt'
\end{equation}
for all $t \in I$, where $e^{t\Delta}$ are the usual heat propagators (defined on $L^2(\R^3)$, for instance); formally, \eqref{integral} implies the projected Navier-Stokes equation
\begin{equation}\label{ns-smooth}
\begin{split}
\partial_t u &= \Delta u + B(u,u)\\
u(0,\cdot) &= u_0
\end{split}
\end{equation}
in a distributional sense at least (actually, at the $H^{10}_\df$ level of regularity it is not difficult to justify \eqref{ns-smooth} in the classical sense for mild solutions).
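Indeed, formally differentiating \eqref{integral} in $t$ and applying the Leibniz rule gives
$$ \partial_t u(t) = \Delta e^{t\Delta} u_0 + B(u(t),u(t)) + \int_0^t \Delta e^{(t-t')\Delta} B(u(t'),u(t'))\ dt' = \Delta u(t) + B(u(t),u(t)),$$
since the first and third terms reassemble into $\Delta$ applied to the right-hand side of \eqref{integral}.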
The distinction between smooth finite energy solutions and $H^{10}_\df$ mild solutions is essentially non-existent (at least\footnote{For data which is only in $H^{10}_\df$, there is a technical distinction between the two solution concepts, due to a lack of unlimited time regularity at the initial time $t=0$ that is ultimately caused by the non-local effects of the divergence-free condition $\nabla \cdot u = 0$, requiring one to replace the notion of a smooth solution with that of an \emph{almost smooth solution}; see \cite{tao-local} for details. However, in this paper we will only concern ourselves with Schwartz initial data, so that this issue does not arise.} for Schwartz initial data), and the reader may wish to conflate the two notions on a first reading. More rigorously, we can reformulate Conjecture \ref{nsconj} as the following logically equivalent conjecture:
\begin{conjecture}[Navier-Stokes global regularity, again]\label{reg-again} Let $u_0: \R^3 \to \R^3$ be a divergence-free vector field in the Schwartz class. Then there exists a mild solution $u: [0,+\infty) \to H^{10}_\df(\R^3)$ to the Navier-Stokes equations with initial data $u_0$.
\end{conjecture}
\begin{lemma} Conjecture \ref{nsconj} and Conjecture \ref{reg-again} are equivalent.
\end{lemma}
\begin{proof} We use the results from \cite{tao-local}, although this equivalence is essentially classical and was previously well known to experts.
Let us first show that Conjecture \ref{nsconj} implies Conjecture \ref{reg-again}. Let $u_0: \R^3 \to \R^3$ be a Schwartz divergence-free vector field; then by Conjecture \ref{nsconj} we may find a smooth vector field $u: [0,+\infty) \times \R^3 \to \R^3$ and a smooth function $p: [0,+\infty) \times \R^3 \to \R$ obeying the equations \eqref{ns} and the finite energy condition. By \cite[Corollary 11.1]{tao-local}, $u$ is an $H^1$ solution, that is to say $u \in L^\infty_t H^1_x([0,T] \times \R^3)$ for all finite $T$. By \cite[Corollary 4.3]{tao-local}, we then have the integral equation \eqref{integral}, and by \cite[Theorem 5.4(ii)]{tao-local}, $u \in L^\infty_t H^k_x([0,T] \times \R^3)$ for every $k$, which easily implies (from \eqref{integral}) that $u$ is a continuous map from $[0,+\infty)$ to $H^{10}_\df(\R^3)$. This gives Conjecture \ref{reg-again}.
Conversely, if Conjecture \ref{reg-again} holds, and $u_0: \R^3 \to \R^3$ is a divergence-free vector field in the Schwartz class, we may find a mild solution $u: [0,+\infty) \to H^{10}_\df(\R^3)$ with this initial data. By \cite[Theorem 5.4(ii)]{tao-local}, $u\in L^\infty_t H^k_x([0,T] \times \R^3)$ for every $k$. If we define the normalised pressure
$$ p := -\Delta^{-1} \partial_i \partial_j(u_i u_j)$$
then by \cite[Theorem 5.4(iv)]{tao-local}, $u$ and $p$ are smooth on $[0,+\infty) \times \R^3$, and for each $j, k \geq 0$, the functions $\partial_t^j u, \partial_t^j p$ lie in $L^\infty_t H^k_x([0,T] \times \R^3)$ for all finite $T$. By differentiating \eqref{integral}, we have
\begin{align*}
\partial_t u &= \Delta u + B(u,u) \\
&= \Delta u - (u \cdot \nabla) u - \nabla p,
\end{align*}
and Conjecture \ref{nsconj} follows.
\end{proof}
If we take the inner product of \eqref{ns-smooth} with $u$ and integrate in time using \eqref{cancellation}, we arrive at\footnote{One has to justify the integration by parts of course, but this is routine under the hypothesis of a mild solution; we omit the (standard) details. } the fundamental \emph{energy identity}
\begin{equation}\label{energy-ident}
\frac{1}{2} \int_{\R^3} |u(T,x)|^2\ dx + \int_0^T \int_{\R^3} |\nabla u(t,x)|^2\ dx dt = \frac{1}{2} \int_{\R^3} |u_0(x)|^2\ dx
\end{equation}
for any mild solution to the Navier-Stokes equation.
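To spell out the computation: pairing \eqref{ns-smooth} against $u(t)$ and using \eqref{cancellation}, we have
$$ \frac{d}{dt} \frac{1}{2} \int_{\R^3} |u(t,x)|^2\ dx = \langle \Delta u(t), u(t) \rangle + \langle B(u(t),u(t)), u(t) \rangle = -\int_{\R^3} |\nabla u(t,x)|^2\ dx,$$
and \eqref{energy-ident} follows upon integrating from $0$ to $T$.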
If one was unaware of the supercritical nature of the Navier-Stokes equation, one might attempt to obtain a positive solution to Conjecture \ref{nsconj} or Conjecture \ref{reg-again} by combining \eqref{energy-ident} (or equivalently, \eqref{cancellation}) with various harmonic analysis estimates for the inhomogeneous heat equation
\begin{align*}
\partial_t u &= \Delta u + F \\
u(0,\cdot) &= u_0
\end{align*}
(or, in integral form, $u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} F(t')\ dt'$), together with harmonic analysis estimates for the Euler bilinear operator $B$, a simple example of which is the estimate
\begin{equation}\label{bouv}
\| B(u,v) \|_{L^2(\R^3)} \leq C \left( \|\nabla u \|_{L^4(\R^3)} \| v \|_{L^4(\R^3)} + \|\nabla v \|_{L^4(\R^3)} \| u \|_{L^4(\R^3)} \right)
\end{equation}
for some absolute constant $C$. Such an approach succeeds for instance if the initial data $u_0$ is sufficiently small\footnote{One can of course also consider other perturbative regimes, in which the solution $u$ is expected to be close to some other special solution than the zero solution. There is a vast literature in these directions, see e.g. \cite{chemin} and the references therein.} in a suitable critical norm (see \cite{koch-tataru} for an essentially optimal result in this direction), or if the dissipative operator $-\Delta$ is replaced by a hyperdissipative operator $(-\Delta)^\alpha$ for some $\alpha \geq 5/4$ (see \cite{katz-hyper}) or with very slightly less hyperdissipative operators (see \cite{tao-hyper}). Unfortunately, standard scaling heuristics (see e.g. \cite[\S 2.4]{tao-struct}) have long indicated to the experts that the energy estimate \eqref{energy-ident} (or \eqref{cancellation}), together with the harmonic analysis estimates available for the heat equation and for the Euler bilinear operator $B$, are not sufficient by themselves to affirmatively answer Conjecture \ref{nsconj}. However, these scaling heuristics are not formalised as a rigorous barrier to solvability, and the above mentioned strategy to solve the Navier-Stokes global regularity problem continues to be attempted on occasion.
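To indicate the scaling heuristic in its simplest form: the Navier-Stokes equation (with $\nu=1$) is invariant under the scaling $u_\lambda(t,x) := \lambda u(\lambda^2 t, \lambda x)$, $p_\lambda(t,x) := \lambda^2 p(\lambda^2 t, \lambda x)$, whereas the quantities controlled by \eqref{energy-ident} transform as
$$ \| u_\lambda(0) \|_{L^2(\R^3)}^2 = \lambda^{-1} \| u(0) \|_{L^2(\R^3)}^2, \qquad \int_0^{T/\lambda^2} \int_{\R^3} |\nabla u_\lambda|^2\ dx dt = \lambda^{-1} \int_0^{T} \int_{\R^3} |\nabla u|^2\ dx dt.$$
Thus the energy becomes arbitrarily weak at fine scales (i.e. as $\lambda \to \infty$), which is the sense in which the energy identity is supercritical in three dimensions.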
The most conclusive way to rule out such a strategy would of course be to demonstrate\footnote{It is a classical fact that mild solutions to a given initial data are unique, see e.g. \cite[Theorem 5.4(iii)]{tao-local}.} a mild solution to the Navier-Stokes equation that develops a singularity in finite time, in the sense that the $H^{10}_\df$ norm of $u(t)$ goes to infinity as $t$ approaches a finite time $T_*$. Needless to say, we are unable to produce such a solution. However, we will in this paper obtain a finite time blowup (mild) solution to an \emph{averaged} equation
\begin{equation}\label{ns-modified}
\begin{split}
\partial_t u &= \Delta u + \tilde B(u,u)\\
u(0,\cdot) &= u_0,
\end{split}
\end{equation}
where $\tilde B: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ will be a (carefully selected) averaged version of $B$ that has equal or lesser ``strength'' from a harmonic analysis point of view (indeed, $\tilde B$ obeys slightly \emph{more} estimates than $B$ does), and which still obeys the fundamental cancellation property \eqref{cancellation}. Thus, any successful method to affirmatively answer Conjecture \ref{nsconj} (or Conjecture \ref{reg-again}) must either use finer structure of the Navier-Stokes equation beyond the general form \eqref{ns-smooth}, or else must rely crucially on some estimate or other property of the Euler bilinear operator $B$ that is not shared by the averaged operator $\tilde B$.
We pause to mention some previous blowup results in this direction. If one drops the cancellation requirement \eqref{cancellation}, so that one no longer has the energy identity \eqref{energy-ident}, then blowup solutions for various Navier-Stokes type equations have been constructed in the literature. For instance, in \cite{mont} finite time blowup for a ``cheap Navier-Stokes equation'' $\partial_t u = \Delta u + \sqrt{-\Delta}(u^2)$ (with $u$ now a scalar field) was constructed in the one-dimensional setting, with the results extended to higher dimensions in \cite{gallagher}. As remarked in that latter paper, it is essential to the methods of proof that no energy identity is available. In a slightly different direction, finite time blowup was established in \cite{sinai} for a complexified version of the Navier-Stokes equations, in which the energy identity was again unavailable (or more precisely, it is available but non-coercive). These models are not exactly of the type \eqref{ns-modified} considered in this paper, but are certainly very similar in spirit.
Further models of Navier-Stokes type, which obey an energy identity, were introduced by Plech\'a\v{c} and \v{S}ver\'ak \cite{plechac}, \cite{plechac-2}, by Katz and Pavlovic \cite{katz-dyadic}, and by Hou and Lei \cite{hou}; of these three, the model in \cite{katz-dyadic} is the most relevant for our work and will be discussed in detail in Section \ref{overview-sec} below. These models differ from each other in several respects, but interestingly, in all three cases there is substantial evidence of blowup in five and higher dimensions, but not in three or four dimensions; indeed, for all three of the models mentioned above there are global regularity results in three dimensions, even in the presence of blowup results for the corresponding inviscid model. Numerical evidence for blowup for the Navier-Stokes equations is currently rather scant (except in the infinite energy setting, see \cite{grundy}, \cite{naga}); the blowup evidence is much stronger in the case of the Euler equations (see \cite{hlwz} for a recent result in this direction, and \cite{hou-survey} for a survey), but it is as yet unclear\footnote{However, in \cite{hsw}, finite time blowup for a three-dimensional ``partially viscous'' Navier-Stokes type model, in which some but not all of the fields are subject to a viscosity term, was established.} whether these blowup results have direct implications for Navier-Stokes in the three-dimensional setting, due to the relatively significant strength of the dissipation.
Finally, we mention work \cite{kiselev}, \cite{alibaud}, \cite{dong} establishing finite time blowup for supercritical fractal Burgers equations; such equations are not exactly of Navier-Stokes type, being scalar one-dimensional equations rather than incompressible vector-valued three-dimensional ones, but from a scaling perspective the results are of the same type, namely a demonstration of blowup whenever the norms controlled by the conservation and monotonicity laws are all supercritical.
We now describe more precisely the type of averaged operator $\tilde B: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ we will consider. We consider three types of symmetries on $H^{10}_\df(\R^3)$ that we will average over. Firstly, we have rotation symmetry: if $R \in \SO(3)$ is a rotation matrix on $\R^3$ and $u \in H^{10}_\df(\R^3)$, then the rotated vector field
$$ \Rot_R(u)( x ) := R u( R^{-1} x )$$
is also in $H^{10}_\df(\R^3)$; note that the Fourier transform also rotates by the same law,
$$ \widehat{\Rot_R(u)}(\xi) = R \hat u(R^{-1}\xi).$$
Clearly, these rotation operators are uniformly bounded on $H^{10}_\df(\R^3)$, and also on every Sobolev space $W^{s,p}(\R^3)$ with $s \in \R$ and $1 < p < \infty$.
Next, define a (complex) \emph{Fourier multiplier of order $0$} to be an operator $m(D)$ defined on (the complexification $H^{10}_\df(\R^3) \otimes \C$ of) $H^{10}_\df(\R^3)$ by the formula
$$ \widehat{m(D) u}(\xi) := m(\xi) \hat u(\xi)$$
where $m: \R^3 \to \C$ is a function that is smooth away from the origin, with the seminorms
\begin{equation}\label{seminorm}
\| m \|_k := \sup_{\xi \neq 0} |\xi|^k |\nabla^k m(\xi)|
\end{equation}
being finite for every natural number $k$. We say that $m(D)$ is \emph{real} if the symbol $m$ obeys the symmetry $m(-\xi) = \overline{m(\xi)}$ for all $\xi \in \R^3 \backslash \{0\}$; in this case, $m(D)$ maps $H^{10}_\df(\R^3)$ to itself. From the H\"ormander-Mikhlin multiplier theorem (see e.g. \cite{stein-singular}), complex Fourier multipliers of order $0$ are also bounded on (the complexifications of) every Sobolev space $W^{s,p}(\R^3)$ for all $s \in \R$ and $1 < p < \infty$, with an operator norm that depends linearly on finitely many of the $\|m\|_k$. We let $\mathcal{M}_0$ denote the space of all real Fourier multipliers of order $0$, so that $\mathcal{M}_0 \otimes \C$ is the space of complex Fourier multipliers (note that every complex Fourier multiplier $m(D)$ of order $0$ can be uniquely decomposed as $m(D) = m_1(D) + i m_2(D)$ with $m_1(D),m_2(D)$ real Fourier multipliers of order $0$). Fourier multipliers of order $0$ do not necessarily commute with the rotation operators $\Rot_R$, but the group of rotation operators normalises the algebra $\mathcal{M}_0$, and hence also the complexification $\mathcal{M}_{0} \otimes \C$.
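Two model examples, both of which obey the seminorm bounds \eqref{seminorm} since each derivative in $\xi$ gains a factor of $|\xi|^{-1}$, are the symbol of a component of the Leray projection and the differentiation operators of imaginary order that will reappear in Section \ref{euler-avg}:
$$ m(\xi) = \frac{\xi_i \xi_j}{|\xi|^2} \quad \text{(real)}, \qquad m(\xi) = |\xi|^{i\tau}, \ \tau \in \R \quad \text{(complex in general)}.$$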
Finally, we will average\footnote{In an earlier version of this manuscript, no averaging over dilations was assumed, but it was pointed out to us by the referee that the non-degeneracy condition \eqref{c-nondeg} failed if one did not introduce dilation averaging.} over the dilation operators
\begin{equation}\label{dila}
\Dil_\lambda(u)(x) := \lambda^{3/2} u(\lambda x)
\end{equation}
for $\lambda > 0$. These operators do not quite preserve the $H^{10}_\df(\R^3)$ norm, but if $\lambda$ is restricted to a compact subset of $(0,+\infty)$ then these operators (and their inverses) will be uniformly bounded on $H^{10}_\df(\R^3)$.
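Indeed, $\Dil_\lambda$ is unitary on $L^2(\R^3)$, and a change of variables gives
$$ \| \nabla^j \Dil_\lambda u \|_{L^2(\R^3)} = \lambda^j \| \nabla^j u \|_{L^2(\R^3)} \quad \text{for } j = 0,\dots,10,$$
so that $\| \Dil_\lambda u \|_{H^{10}} \leq \max(1,\lambda^{10}) \| u \|_{H^{10}}$, which is uniformly bounded for $\lambda$ ranging in a compact subset of $(0,+\infty)$.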
We now define an \emph{averaged Euler bilinear operator} to be an operator $\tilde B: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$, defined via duality by the formula
\begin{equation}\label{bogo}
\langle \tilde B(u,v), w \rangle := \E \left\langle B\left( m_1(D) \Rot_{R_1} \Dil_{\lambda_1} u, m_2(D) \Rot_{R_2} \Dil_{\lambda_2} v\right), m_3(D) \Rot_{R_3} \Dil_{\lambda_3} w\right\rangle
\end{equation}
for all $u,v,w \in H^{10}_\df(\R^3)$, where $m_1(D),m_2(D),m_3(D)$ are random real Fourier multipliers of order $0$, $R_1,R_2,R_3$ are random rotations, and $\lambda_1,\lambda_2,\lambda_3$ are random dilations, obeying the moment bounds
$$ \E \| m_1 \|_{k_1} \|m_2 \|_{k_2} \|m_3\|_{k_3} < \infty$$
and
$$ C^{-1} \leq \lambda_1,\lambda_2,\lambda_3 \leq C$$
almost surely for any natural numbers $k_1,k_2,k_3$ and some finite $C$. To phrase this definition without probabilistic notation, we have
\begin{equation}\label{bogo-alt}
\langle \tilde B(u,v), w \rangle = \int_\Omega \left\langle B\left( m_{1,\omega}(D) \Rot_{R_{1,\omega}} \Dil_{\lambda_{1,\omega}} u, m_{2,\omega}(D) \Rot_{R_{2,\omega}} \Dil_{\lambda_{2,\omega}} v\right), m_{3,\omega}(D) \Rot_{R_{3,\omega}} \Dil_{\lambda_{3,\omega}} w\right\rangle\ d\mu(\omega)
\end{equation}
for some probability space $(\Omega,\mu)$ and some measurable maps $R_{i,\cdot}: \Omega \to \SO(3)$, $\lambda_{i,\cdot}: \Omega \to (0,+\infty)$ and $m_{i,\cdot}(D): \Omega \to \mathcal{M}_{0}$, where $\mathcal{M}_{0}$ is given the Borel $\sigma$-algebra coming from the seminorms $\| \|_k$, and one has
$$ \int_\Omega \| m_{1,\omega} \|_{k_1} \| m_{2,\omega} \|_{k_2} \| m_{3,\omega} \|_{k_3}\ d\mu(\omega) < \infty$$
and
$$ C^{-1} \leq \lambda_1(\omega), \lambda_2(\omega), \lambda_3(\omega) \leq C$$
for all natural numbers $k_1,k_2,k_3$ and $\mu$-almost every $\omega \in \Omega$. One can also express $\tilde B(u,v)$ without duality by the formula
$$ \tilde B(u,v) = \int_\Omega \Dil_{\lambda_{3,\omega}^{-1}} \Rot_{R_{3,\omega}^{-1}} \overline{m_{3,\omega}}(D) B\left ( m_{1,\omega}(D) \Rot_{R_{1,\omega}} \Dil_{\lambda_{1,\omega}} u, m_{2,\omega}(D) \Rot_{R_{2,\omega}} \Dil_{\lambda_{2,\omega}} v\right)\ d\mu(\omega)$$
where the integral is interpreted in the weak sense (i.e. the Gelfand-Pettis integral). However, we will not use this formulation of $\tilde B$ here.
\begin{remark} By the rotation symmetry $\langle B( \Rot_R u, \Rot_R v), \Rot_R w \rangle = \langle B(u,v), w \rangle$, we may eliminate one of the three rotation operators $\Rot_{R_{i,\omega}}$ in \eqref{bogo-alt} if desired, and similarly for the dilation operator. By some Fourier analysis (related to the fractional Leibniz rule) it should also be possible to eliminate one of the Fourier multipliers $m_{i,\omega}(D)$. However, we will not attempt to do so here.
\end{remark}
From duality, the triangle inequality (or more precisely, Minkowski's inequality for integrals), and the H\"ormander-Mikhlin multiplier theorem, we see that every estimate on the Euler bilinear operator $B$ in Sobolev spaces $W^{s,p}(\R^3)$ with $1 < p < \infty$ implies a corresponding estimate for averaged Euler bilinear operators $\tilde B$ (but possibly with a larger constant). For instance, from \eqref{bouv} we have
\begin{equation}\label{bouv-avg}
\| \tilde B(u,v) \|_{L^2(\R^3)} \leq C_{\tilde B} ( \|\nabla u \|_{L^4(\R^3)} \| v \|_{L^4(\R^3)} + \|\nabla v \|_{L^4(\R^3)} \| u \|_{L^4(\R^3)} )
\end{equation}
for $u,v \in H^{10}_\df(\R^3)$, where the constant $C_{\tilde B}$ depends\footnote{Note that by applying the transformation $(u,\tilde B) \to (\lambda u, \lambda^{-1} \tilde B)$ to \eqref{ns-modified}, we have the freedom to multiply $\tilde B$ by an arbitrary constant, and so the constants $C_{\tilde B}$ appearing in any given estimate such as \eqref{bouv-avg} can be normalised to any absolute constant (e.g. $1$) if desired.} only on $\tilde B$. A similar argument shows that the expectation in \eqref{bogo} (or the integral in \eqref{bogo-alt}) is absolutely convergent for any $u,v,w \in H^{10}_\df(\R^3)$.
Similar considerations hold for most other basic bilinear estimates\footnote{There is a possible exception to this principle if the estimate involves endpoint spaces such as $L^1$ and $L^\infty$ for which the H\"ormander-Mikhlin multiplier theorem is not available, or non-convex spaces such as $L^{1,\infty}$ for which the triangle inequality is not available. However, as the Leray projection $P$ is also badly behaved on these spaces, such endpoint spaces rarely appear in these sorts of analyses of the Navier-Stokes equation.} on $B$ in popular function spaces such as H\"older spaces, Besov spaces, or Morrey spaces. Because of this, the local theory (and related theory, such as the concentration-compactness theory) for \eqref{ns-modified} is essentially identical to that of \eqref{ns-smooth} (up to changes in the explicit constants), although we will not attempt to formalise this assertion here. In particular, we may introduce the notion of a \emph{mild solution} to the averaged Navier-Stokes equation \eqref{ns-modified} with initial data $u_0 \in H^{10}_\df(\R^3)$ on a time interval $I\subset [0,+\infty)$ containing $0$, defined to be a continuous map $u: I \to H^{10}_\df(\R^3)$ obeying the integral equation
\begin{equation}\label{integral-avg}
u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} \tilde B(u(t'),u(t'))\ dt'
\end{equation}
for all $t \in I$. It is then a routine matter to extend the $H^{10}$ local existence and uniqueness theory (see e.g. \cite[\S 5]{tao-local}) for mild solutions of the Navier-Stokes equations, to mild solutions of the averaged Navier-Stokes equations, basically because of the previous observation that all the estimates on $B$ used in that local theory continue to hold for $\tilde B$.
Because we have not imposed any symmetry or anti-symmetry hypotheses on the averaging measure $\mu$, rotations $R_j$, and Fourier multipliers $m_j(D)$, the analogue
\begin{equation}\label{cancellation-2}
\langle \tilde B( u, u ), u \rangle = 0
\end{equation}
of the cancellation condition \eqref{cancellation} is not automatically satisfied. If however we have \eqref{cancellation-2} for all $u \in H^{10}_\df(\R^3)$, then mild solutions to \eqref{ns-modified} enjoy the same energy identity \eqref{energy-ident} as mild solutions to the true Navier-Stokes equation.
We are now ready to state the main result of the paper.
\begin{theorem}[Finite time blowup for an averaged Navier-Stokes equation]\label{main} There exists a symmetric averaged Euler bilinear operator $\tilde B: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ obeying the cancellation property \eqref{cancellation-2} for all $u \in H^{10}_\df(\R^3)$, and a Schwartz divergence-free vector field $u_0$, such that there is no global-in-time mild solution $u: [0,+\infty) \to H^{10}_\df(\R^3)$ to the averaged Navier-Stokes equation \eqref{ns-modified} with initial data $u_0$.
\end{theorem}
In fact, the arguments used to prove the above theorem can be pushed a little further to construct a smooth mild solution $u: [0,T_*) \to H^{10}_\df(\R^3)$ for some $0 < T_* < \infty$ that blows up (at the spatial origin) as $t$ approaches $T_*$ (and with subcritical norms such as $\|u(t)\|_{H^{10}_\df(\R^3)}$ diverging to infinity as $t \to T_*$).
\begin{remark} One can also rewrite the averaged Navier-Stokes equation \eqref{ns-modified} in a form more closely resembling \eqref{ns}, namely
\begin{align*}
\partial_t u + T(u,u) &= \Delta u - \nabla p \\
\nabla \cdot u &= 0 \\
u(0,\cdot) &= u_0
\end{align*}
where $T$ is an averaged version of the convection operator $(u \cdot \nabla)u$, defined by $T = \frac{1}{2} (T_{12} + T_{21})$ where
$$ T_{ij}(u,u) :=
\int_\Omega \Dil_{\lambda_{3,\omega}^{-1}} \Rot_{R_{3,\omega}^{-1}} \overline{m_{3,\omega}}(D) \left ( (m_{i,\omega}(D) \Rot_{R_{i,\omega}} \Dil_{\lambda_{i,\omega}} u \cdot \nabla) m_{j,\omega}(D) \Rot_{R_{j,\omega}} \Dil_{\lambda_{j,\omega}} u\right)\ d\mu(\omega)$$
for $ij=12,21$. We can also ensure that the inviscid form of the averaged Navier-Stokes equation conserves helicity, as well as total momentum, angular momentum, and vorticity; see Remark \ref{helicity} below.
\end{remark}
Our construction of this averaged bilinear operator $\tilde B: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ and blowup solution $u$ will admittedly be rather artificial, as the averaged operator $\tilde B$ will only retain a carefully chosen (and carefully weighted) subset of the nonlinear interactions present in the original operator $B$, with the weights designed to facilitate a specific blowup mechanism while suppressing other nonlinear interactions that could potentially disrupt this mechanism. There is however a possibility that the proof strategy in Theorem \ref{main} could be adapted to the true Navier-Stokes equations; see Section \ref{program} below. Even without this possibility, however, we view this result as a significant (but not completely impenetrable) \emph{barrier} to a certain class of strategies for excluding such blowup based on treating the bilinear Euler operator $B$ abstractly, as it shows that any strategy that fails to distinguish between the Euler bilinear operator $B$ and its averaged counterparts $\tilde B$ (assuming that the averages obey the cancellation \eqref{cancellation-2}) is doomed to failure. We emphasise however that this barrier does not rule out arguments that crucially exploit specific properties of the Navier-Stokes equation that are not shared by the averaged versions. For instance, the arguments in \cite{ess} (see also the subsequent paper \cite{kenig-koch} for an alternate treatment), which establish global regularity for Navier-Stokes subject to a hypothesis of bounded critical norm, rely on a unique continuation property for backwards heat equations which in turn relies on being able to control the nonlinearity pointwise in terms of the solution and its first derivatives. This is a particular feature of the Navier-Stokes equation \eqref{ns} (in vorticity formulation) which is difficult to discern from the projected formulation \eqref{ns-smooth}, and does not hold in general in \eqref{ns-modified}; in particular, it is not obvious to the author whether the main results in \cite{ess} extend\footnote{This would not be in contradiction to Theorem \ref{main}, as the blowup solution constructed in the proof of that theorem is of ``Type II'' in the sense that critical norms of the solution $u(t)$ diverge in the limit $t \to T_*$. In contrast, the results in \cite{ess} rule out ``Type I'' blowup, in which a certain critical norm stays bounded.} to averaged Navier-Stokes equations. As such, arguments based on such unique continuation properties are (currently, at least) examples of approaches to the regularity problem that are not manifestly subject to this barrier (unless progress is made on the program outlined in Section \ref{program} below). Another example of a positive recent result on the Navier-Stokes problem that uses the finer structure of the nonlinearity (and is thus not obviously subject to this barrier) is the work in \cite{chemin} constructing large data smooth solutions to the Navier-Stokes equations in which the initial data varies slowly in one direction, and which relies on certain delicate algebraic properties of the symbol of $B$.
\subsection{Overview of proof}\label{overview-sec}
The philosophy of proof of Theorem \ref{main} is to treat the dissipative term $\Delta u$ of \eqref{ns-modified} as a perturbative error (which is possible thanks to the supercritical nature of the energy, due to the fact that we are in more than two spatial dimensions), and to construct a stable blowup solution to the ``averaged Euler equation'' $\partial_t u = \tilde B(u,u)$ that blows up so rapidly that the effect of adding a dissipation\footnote{Indeed, our arguments permit one to add any supercritical hyperdissipation $(-\Delta)^\alpha$, $\alpha < 5/4$, to the equation \eqref{ns-modified} while still obtaining blowup for certain choices of initial data, although for sake of exposition we will only discuss the classical $\alpha=1$ case here.} term is negligible. This blowup solution will have a significant portion of its energy concentrating on smaller and smaller balls around the spatial origin $x=0$; more precisely, there will be an increasing sequence of times $t_n$ converging exponentially fast to a finite limit $T_*$, such that a large fraction of the energy (at least $(1+\epsilon_0)^{-\eps n}$ for some small $\eps, \epsilon_0>0$) is concentrated in the ball $B(0,(1+\epsilon_0)^{-n})$ centred at the origin. We will be able to make the difference $t_{n+1}-t_n$ of the order $(1+\epsilon_0)^{(-\frac{5}{2}+O(\eps))n}$ for some small $\eps>0$; this is about as short as one can hope from scaling heuristics (see e.g. \cite{tao-hyper} for a discussion), and indicates a blowup which is almost as rapid and efficient as possible, given the form of the nonlinearity. In particular, for large $n$, the time difference $t_{n+1}-t_n$ will be significantly shorter than the dissipation time $(1+\epsilon_0)^{-2n}$ at that spatial scale, which helps explain why the effect of the dissipative term $\Delta u$ will be negligible.
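The dimensional analysis behind these exponents runs as follows (we give only a heuristic sketch). If at time $t_n$ an energy $E_n \gtrsim (1+\epsilon_0)^{-\eps n}$ is concentrated at spatial scale $r_n := (1+\epsilon_0)^{-n}$, then the typical velocity and the nonlinear interaction time are heuristically
$$ |u| \sim \left(\frac{E_n}{r_n^3}\right)^{1/2}; \qquad t_{n+1} - t_n \sim \frac{r_n}{|u|} \sim r_n^{5/2} E_n^{-1/2} \lesssim (1+\epsilon_0)^{(-\frac{5}{2}+O(\eps))n},$$
which for large $n$ is indeed far shorter than the dissipative time scale $r_n^2 = (1+\epsilon_0)^{-2n}$.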
To construct the stable blowup solution, we were motivated by the work on regularity and blowup of the system of ODE
\begin{equation}\label{xn}
\partial_t X_n = - \lambda^{2n\alpha} X_n + \lambda^{n-1} X_{n-1}^2 - \lambda^n X_n X_{n+1}
\end{equation}
for a system $(X_n)_{n\in \Z}$ of scalar unknown functions $X_n: [0,T_*) \to \R$, where $\lambda>1$ and $\alpha>0$ are parameters. This system was introduced by Katz-Pavlovic \cite{katz-dyadic} (with $\lambda=2$ and $\alpha=2/5$) as a dyadic model\footnote{Strictly speaking, the equation studied in \cite{katz-dyadic} is slightly different, in that there is a nonlinear interaction between each wavelet in the model and all of the children of that wavelet, whereas the model here corresponds to the case where each wavelet interacts with only one of its children at most. The equation in \cite{katz-dyadic} turns out to be a bit more dispersive than the model \eqref{xn}, and in particular enjoys global regularity (by an unpublished argument of Nazarov), and is thus not directly suitable as a model for proving Theorem \ref{main}.} for the Navier-Stokes equations \eqref{ns}, and is related to hierarchical shell models for these equations (see also \cite{des} for an earlier derivation of these equations from Fourier-analytic considerations). Roughly speaking, a solution $(X_n)_{n \in \Z}$ to this system (with $\alpha=2/5$) corresponds (at a heuristic level) to a solution $u$ to an equation similar to \eqref{ns-smooth} or \eqref{ns-modified} with $u$ of the shape
\begin{equation}\label{utx}
u(t,x) \approx \sum_n X_n(t) \lambda^{3n/5} \psi( \lambda^{2n/5} x )
\end{equation}
for some Schwartz function $\psi$ with Fourier transform vanishing near the origin. We remark that the analogue of the energy identity \eqref{energy-ident} in this setting is the identity
\begin{equation}\label{energy-ident-dyadic}
\frac{1}{2} \sum_n X_n(T)^2 + \int_0^T \sum_n \lambda^{2n\alpha} X_n(t)^2 dt = \frac{1}{2} \sum_n X_n(0)^2,
\end{equation}
valid whenever $X_n$ exhibits sufficient decay as $n \to \pm \infty$ (we do not formalise this statement here).
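To see why the ansatz \eqref{utx} (with $\alpha=2/5$) is the natural correspondence, observe that each mode in \eqref{utx} is $L^2$-normalised, while each derivative costs a factor of $\lambda^{2n/5}$:
$$ \| \lambda^{3n/5} \psi(\lambda^{2n/5} \cdot) \|_{L^2(\R^3)} = \|\psi\|_{L^2(\R^3)}; \qquad \| \nabla( \lambda^{3n/5} \psi(\lambda^{2n/5} \cdot) ) \|_{L^2(\R^3)} = \lambda^{2n/5} \|\nabla \psi\|_{L^2(\R^3)}.$$
Thus $\frac{1}{2} \sum_n X_n(t)^2$ plays the role of the energy $\frac{1}{2} \int |u|^2$, and the dissipation acts on the $n^{\operatorname{th}}$ mode with strength $(\lambda^{2n/5})^2 = \lambda^{2n\alpha}$ with $\alpha = 2/5$, in agreement with \eqref{energy-ident-dyadic}.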
We will defer for now the technical issue (which we regard as being of secondary importance) of transferring blowup results from dyadic Navier-Stokes models to averaged Navier-Stokes models, and focus on the question of whether blowup solutions may be constructed for ODE systems such as \eqref{xn}.
Blowup solutions for the equation \eqref{xn} are known to exist for sufficiently small $\alpha$; specifically, for $\alpha < 1/4$ this was (essentially) established in \cite{katz-dyadic}, while for $\alpha < 1/3$ this was established
in \cite{ches}, with global regularity established in the critical and subcritical regimes $\alpha \geq 1/2$. If a blowup solution could be constructed\footnote{The results in \cite{katz-dyadic} can be however adapted to establish a version of Theorem \ref{main} in six and higher dimensions, while the results in \cite{ches} give a version in five and higher dimensions (and just barely miss the four-dimensional case); this can be done by adapting the arguments in this paper (and using the above-cited blowup results as a substitute for the lengthier ODE analysis in this paper), and we leave the details to the interested reader. Interestingly, the results in \cite{plechac}, \cite{plechac-2} on a somewhat different Navier-Stokes type model also indicate blowup in five and higher dimensions, while giving global regularity instead in lower dimensions; similarly for a third Navier-Stokes model introduced in \cite{hou}.} with the value $\alpha=2/5$, then this would be a dyadic analogue of Theorem \ref{main}. Unfortunately for our purposes, for the values $\lambda=2^{1/\alpha}, \alpha=2/5$, global regularity was established in \cite{bmr} (for non-negative initial data $X_n(0)$), by carefully identifying a region of phase space that is invariant under forward evolution of \eqref{xn}, and which in particular prevents the energy $X_n$ from concentrating too strongly at a single value of $n$. However, the argument in \cite{bmr} is sensitive to the specific numerical value of $\lambda$ (and also relies heavily on the assumption of initial non-negativity), and does not rule out the possibility of blowup at $\alpha=2/5$ for some variant of the system \eqref{xn}.
From multiplying \eqref{xn} by $X_n$, we arrive at the energy transfer equations
\begin{equation}\label{e-transfer}
\partial_t \left(\frac{1}{2} X_n^2\right) = - \lambda^{2n\alpha} X_n^2 + \lambda^{n-1} X_n X_{n-1}^2 - \lambda^n X_{n+1} X_n^2
\end{equation}
for $n \in \Z$,
which are a local version of \eqref{energy-ident-dyadic}, and reveal in particular (in the non-negative case $X_n \geq 0$) that there is a flow of energy at rate $\lambda^n X_{n+1} X_n^2$ from the $n^{\operatorname{th}}$ mode $X_n$ to the $(n+1)^{\operatorname{st}}$ mode $X_{n+1}$. In principle, whenever one is in the supercritical regime $\alpha < 1/2$, one should be able to start with a delta function initial data $X_n(0) = 1_{n=n_0}$ for some sufficiently large $n_0$, and then this transfer of energy should allow for a ``low-to-high frequency cascade'' solution in which the energy moves rapidly from the $n_0^{\operatorname{th}}$ mode to the $(n_0+1)^{\operatorname{st}}$ mode, with the cascade fast enough to ``outrun'' the dissipative effect of the term $-\lambda^{2n\alpha} X_n^2$ in the energy transfer equation \eqref{e-transfer}, which is lower order when $\alpha<1/2$. However, as observed in \cite{bmr}, this cascade scenario does not actually occur as strongly as the above heuristic reasoning suggests, because the energy in $X_{n+1}$ is partially transferred to $X_{n+2}$ before the transfer of energy from $X_n$ to $X_{n+1}$ is fully complete, leading instead to a solution in which the bulk of the energy remains in low values of $n$ and is eventually dissipated away by the $-\lambda^{2n\alpha} X_n^2$ term before forming a singularity. Thus we see that there is an interference effect between the energy transfer between $X_n$ and $X_{n+1}$, and the energy transfer between $X_{n+1}$ and $X_{n+2}$, that disrupts the naive blowup scenario.
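We remark that the flux interpretation is consistent with \eqref{energy-ident-dyadic}: upon summing \eqref{e-transfer} in $n$, the flux terms telescope after shifting $n \mapsto n+1$ in the first sum,
$$ \sum_{n \in \Z} \lambda^{n-1} X_n X_{n-1}^2 - \sum_{n \in \Z} \lambda^n X_{n+1} X_n^2 = 0,$$
leaving $\partial_t \frac{1}{2} \sum_n X_n^2 = -\sum_n \lambda^{2n\alpha} X_n^2$, which integrates to \eqref{energy-ident-dyadic} (again assuming sufficient decay of the $X_n$ as $n \to \pm \infty$).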
One can fix this problem by suitably modifying the model equation \eqref{xn}. One rather drastic (and not particularly satisfactory) way to do this is to forcibly (i.e., \emph{exogenously}) shut off most of the nonlinear interactions, so that only one pair $X_n,X_{n+1}$ of adjacent modes experiences a nonlinear (but energy-conserving) interaction at any given time. Specifically, one can consider a truncated-nonlinearity ODE
\begin{equation}\label{trunc-non}
\partial_t X_n = - \lambda^{2n\alpha} X_n + 1_{n-1 = n(t)} \lambda^{n-1} X_{n-1}^2 - 1_{n = n(t)} \lambda^n X_n X_{n+1}
\end{equation}
where $n: [0,T_*) \to \Z$ is a piecewise constant function that one specifies in advance, and which describes which pair of modes $X_{n(t)}, X_{n(t)+1}$ is ``allowed'' to interact at a given time $t$. It is not difficult to construct a blowup solution for this truncated ODE; we do so in Section \ref{beyond}. Such a result corresponds to a weak version of Theorem \ref{main} in which the averaged nonlinearity $\tilde B$ is now allowed to be time dependent, $\tilde B = \tilde B(t)$, with the dependence of $\tilde B(t)$ on $t$ being piecewise constant (and experiencing an unbounded number of discontinuities as $t$ approaches $T_*$). In particular, the nonlinearity $\tilde B(t)$ is experiencing an exogenous oscillatory singularity in time as $t$ approaches $T_*$, making the spatial singularity of the solution $u$ become significantly less surprising\footnote{It is worth noting, however, that a surprisingly large portion of the local theory for Navier-Stokes would survive with a time-dependent nonlinearity, even if it were discontinuous in time, so even this weakened version of Theorem \ref{main} provides a somewhat non-trivial barrier that can still exclude certain solution strategies to the Navier-Stokes regularity problem.}.
Our strategy, then, is to design a system of ODE similar to \eqref{xn} that can \emph{endogenously} simulate the exogenous truncations $1_{n-1=n(t)}$, $1_{n=n(t)}$ of \eqref{trunc-non}. As shown in \cite{bmr}, this cannot be done for the scalar equation \eqref{xn}, at least when $\lambda$ is equal to $2$. However, by replacing \eqref{xn} with a vector-valued generalisation, in which one has four scalar functions $X_{1,n}(t),X_{2,n}(t),X_{3,n}(t),X_{4,n}(t)$ associated to each scale $n$, rather than a single scalar function $X_n(t)$, it turns out to be possible to use quadratic interactions of the same strength as the terms $\lambda^{n-1} X_{n-1}^2, \lambda^n X_n X_{n+1}$ appearing in \eqref{xn} to induce such a simulation, while still respecting the energy identity. The precise system of ODE used is somewhat complicated (see Section \ref{blowup-sec}), but it can be described as a sequence of ``quadratic circuits'' connected in series, with each circuit built out of a small number of ``quadratic logic gates'', each corresponding to a certain type of basic quadratic nonlinear interaction. Specifically, we will combine together some ``pump'' gates that transfer energy from one mode to another (and which are the only gate present in \eqref{xn}) with ``amplifier'' gates (that use one mode to ignite exponential growth in another mode) and ``rotor'' gates (that use one mode to rotate the energy between two other modes). By combining together these gates with carefully chosen coupling constants (a sort of ``quadratic engineering'' task somewhat analogous to the more linear circuit design tasks in electrical engineering), we can set up a transfer of energy from scale $n$ to scale $n+1$ which can be made arbitrarily abrupt, in that the duration of the time interval separating the regime in which most of the energy is at scale $n$, and most of the energy is at scale $n+1$, can be made as small as desired. Furthermore, this transfer is delayed somewhat from the time at which the scale $n$ first experiences a large influx of energy. The combination of the delay in energy transfer and the abruptness of that transfer means that the process of transferring energy from scale $n$ to scale $n+1$ is not itself interrupted (up to negligible errors) by the process of transferring energy from scale $n+1$ to $n+2$, and this permits us (after a lengthy bootstrap argument) to construct a blowup solution to this equation, which resembles the blowup solution for the truncated ODE \eqref{trunc-non}.
We now briefly discuss how to pass from the dyadic model problem of establishing blowup for a variant of \eqref{xn} to a problem of the form \eqref{ns-modified}, though as noted before we view the dyadic analysis as containing the core results of the paper, with the conversion to the non-dyadic setting being primarily for aesthetic reasons (and to eliminate any lingering suspicion that the blowup here is arising from some purely dyadic phenomenon that is somehow not replicable in the non-dyadic setup). By using an ansatz of the form \eqref{utx} and rewriting everything in Fourier space, one can map the dyadic model problem to a problem similar to \eqref{ns-modified}, but with the Laplacian replaced by a ``dyadic Laplacian'' (similar to the one appearing in \cite{katz-dyadic}, \cite{fp}), and with a bilinear operator $\tilde B$ which has a Fourier representation
$$ \langle \tilde B(u,v), w \rangle = \int\int\int \tilde m(\xi_1,\xi_2,\xi_3)( \hat u(\xi_1), \hat v(\xi_2), \hat w(\xi_3) )\ d\xi_1 d\xi_2 d\xi_3 $$
for a certain tensor-valued symbol $\tilde m(\xi_1,\xi_2,\xi_3)$ that is supported on the region of frequency space where $\xi_1,\xi_2,\xi_3$ are comparable in magnitude, and having magnitude $\sim |\xi_1|$ in that region (together with the usual estimates on derivatives of the symbol). Meanwhile, thanks to \eqref{bwing}, $B$ has a similar representation
$$ \langle B(u,v), w \rangle = \int\int\int m(\xi_1,\xi_2,\xi_3)( \hat u(\xi_1), \hat v(\xi_2), \hat w(\xi_3) )\ d\xi_1 d\xi_2 d\xi_3 $$
with $m$ being a singular (tensor-valued) distribution on the hyperplane $\xi_1+\xi_2+\xi_3=0$. After averaging over some rotations, one can\footnote{For minor technical and notational reasons, the formal version of this argument performed in Section \ref{euler-avg} does not quite perform these steps in the order indicated here, however all the ingredients mentioned here are still used at some point in the rigorous argument.} ``smear out'' the distribution $m$ to be absolutely continuous with respect to $d\xi_1 d\xi_2 d\xi_3$, and then by suitably modulating by Fourier multipliers of order $0$ (and in particular, differentiation operators of imaginary order) one can localise the symbol to the region of frequency space where $\xi_1,\xi_2,\xi_3$ are comparable in magnitude. By performing some suitable Fourier-type decompositions of the latter symbol, we are then able to express $\tilde m$ as an average of various transformations of $m$, giving rise to a description of $\tilde B$ as an averaged Navier-Stokes operator. Ultimately, the problem boils down to the task of establishing a certain non-degeneracy property of the tensor symbol $\Lambda$ defined in \eqref{lambda-def}, which one establishes by a short geometric calculation. The averaging over dilations in Theorem \ref{main} is needed in order to ensure this non-degeneracy property, but it is likely that this averaging can be dropped by a more careful analysis.
This almost finishes the proof of Theorem \ref{main}, except that the dyadic model equation involves the dyadic Laplacian instead of the Euclidean Laplacian. However, it turns out that the analysis of the dyadic system of ODE can be adapted to the case of non-dyadic dissipation, by using local energy inequalities as a substitute for the exact ODE that appear in the dyadic model. While this complicates the analysis slightly, the effect is ultimately negligible due to the perturbative nature of the dissipation.
\subsection{A program for establishing blowup for the true Navier-Stokes equations?}\label{program}
To summarise the strategy of proof of Theorem \ref{main}, a solution to a carefully chosen averaged version
$$ \partial_t u = \tilde B(u,u)$$
of the Euler equations is constructed which behaves like a ``von Neumann machine'' (that is, a self-replicating machine) in the following sense: at a given time $t_n$, it evolves as a sort of ``quadratic computer'', made out of ``quadratic logic gates'', which is ``programmed'' so that after a reasonable period of time $t_{n+1}-t_n$, it abruptly ``replicates'' into a rescaled version of itself (being $1+\epsilon_0$ times smaller, and about $(1+\epsilon_0)^{5/2}$ times faster), while also erasing almost completely the previous iteration of this machine. This replication process is stable with respect to perturbations, and in particular can survive the presence of a supercritical dissipation if the initial scale of the machine is sufficiently small.
This suggests an ambitious (but not obviously impossible) program (in both senses of the word) to achieve the same effect for the true Navier-Stokes equations, thus obtaining a negative answer to Conjecture \ref{nsconj}. Define an \emph{ideal (incompressible, inviscid) fluid} to be a divergence-free vector field $u$ that evolves according to the true Euler equations
$$ \partial_t u = B(u,u).$$
Somewhat analogously to how a quantum computer can be constructed from the laws of quantum mechanics (see e.g. \cite{benioff}), or a Turing machine can be constructed from cellular automata such as Conway's ``Game of Life'' (see e.g. \cite{adam}), one could hope to design logic gates entirely out of ideal fluid (perhaps by using suitably shaped vortex sheets to simulate the various types of physical materials one would use in a mechanical computer). If these gates were sufficiently ``Turing complete'', and also ``noise-tolerant'', one could then hope to combine enough of these gates together to ``program'' a von Neumann machine consisting of ideal fluid that, when it runs, behaves qualitatively like the blowup solution used to establish Theorem \ref{main}. Note that such replicators, as well as the related concept of a \emph{universal constructor}, have been built within cellular automata such as the ``Game of Life''; see e.g. \cite{adam-life}.
Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program are of a ``software engineering'' nature, and would be in principle achievable, even if the details could be extremely complicated in practice. The main mathematical difficulty in executing this ``fluid computing'' program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties. In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful.
A significant (but perhaps not insuperable) obstacle to this program is that in addition to the conservation of energy, the Euler equations obey a number of additional conservation laws, such as conservation of helicity, with vortex lines also being transported by the flow; see e.g. \cite{bert}. This places additional limitations on the type of fluid gates one could hope to construct; however, as these conservation laws are indefinite in sign, it may still be possible to design computational gates that respect all of these laws.
It is worth pointing out, however, that even if this program is successful, it would only demonstrate blowup for a very specific type of initial data (and tiny perturbations thereof), and is not necessarily in contradiction with the belief that one has global regularity for \emph{most} choices of initial data (for some carefully chosen definition of ``most'', e.g. with overwhelming (but not almost sure) probability with respect to various probability distributions of initial data). However, we do not have any new ideas to contribute on how to address this latter question, other than to state the obvious fact that deterministic methods alone are unlikely to be sufficient to resolve the problem, and that stochastic methods (e.g. those based on invariant measures) are probably needed.
\subsection{Acknowledgments}
I thank Nets Katz for helpful discussions, Zhen Lei and Gregory Seregin for help with the references, and the anonymous referee for a careful reading and pointing out an error in a previous version of this manuscript. The author is supported by a Simons Investigator grant, the James and Carol Collins Chair, the Mathematical Analysis \& Application Research Fund Endowment, and by NSF grant DMS-1266164.
\section{Notation}
We use $X=O(Y)$ or $X \lesssim Y$ to denote the estimate $|X| \leq CY$, for some quantity $C$ (which we call the \emph{implied constant}). If we need the implied constant to depend on a parameter (e.g. $k$), we will either indicate this convention explicitly in the text, or use subscripts, e.g. $X = O_k(Y)$ or $X \lesssim_k Y$.
If $\xi$ is an element of $\R^3$, we use $|\xi|$ to denote its Euclidean magnitude. For $\xi_0 \in \R^3$ and $r>0$, we use $B(\xi_0,r) := \{ \xi \in \R^3: |\xi-\xi_0| < r \}$ to denote the open ball of radius $r$ centred at $\xi_0$. Given a subset $B$ of $\R^3$ and a real number $\lambda$, we use $\lambda \cdot B := \{ \lambda \xi: \xi \in B \}$ to denote the dilate of $B$ by $\lambda$.
If $P$ is a mathematical statement, we use $1_P$ to denote the quantity $1$ when $P$ is true and $0$ when $P$ is false.
Given two real vector spaces $V,W$, we define the tensor product $V \otimes W$ to be the real vector space spanned by formal tensor products $v \otimes w$ with $v \in V$ and $w \in W$, subject to the requirement that the map $(v,w) \mapsto v \otimes w$ is bilinear. Thus for instance $V \otimes \C$ is the complexification of $V$, that is to say the space of formal linear combinations $v_1+iv_2$ with $v_1,v_2 \in V$.
\section{Averaging the Euler bilinear operator}\label{euler-avg}
In this section we show that certain bilinear operators, which are spatially localised variants of the ``cascade operators'' introduced in \cite{katz-dyadic}, can be viewed as averaged Euler bilinear operators.
We now formalise the class of local cascade operators we will be working with.
For technical reasons, we will use the integer powers $(1+\epsilon_0)^n$ of $1+\epsilon_0$ for some sufficiently small $\epsilon_0>0$ as our dyadic range of scales, rather than the more traditional powers of two, $2^n$. Roughly speaking, the reason for this is to ensure that any triangle whose side lengths are of comparable size, in the sense that they all lie between $(1+\epsilon_0)^{n-O(1)}$ and $(1+\epsilon_0)^{n+O(1)}$ for some $n$, is almost equilateral; this lets us avoid some degeneracies in the tensor symbol implicit in \eqref{bwing} that would otherwise complicate the task of expressing certain bilinear operators as averages of the Euler bilinear operator $B$ (specifically, the smallness of $\epsilon_0$ is needed to establish the non-degeneracy condition \eqref{c-nondeg} below).
\begin{definition}[Local cascade operators]\label{cascdef} Let $\epsilon_0>0$. A \emph{basic local cascade operator} (with dyadic scale parameter $\epsilon_0>0$) is a bilinear operator $C: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ defined via duality by the formula
\begin{equation}\label{cuw}
\langle C(u,v), w \rangle = \sum_{n \in \Z} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \langle w, \psi_{3,n} \rangle
\end{equation}
for all $u,v,w \in H^{10}_\df(\R^3)$, where for $i=1,2,3$ and $n \in \Z$, $\psi_{i,n}: \R^3 \to \R^3$ is the $L^2$-rescaled function
$$ \psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i\left( (1+\epsilon_0)^n x \right)$$
and $\psi_i: \R^3 \to \R^3$ is a Schwartz function whose Fourier transform is supported on the annulus $\{ \xi: 1-2\epsilon_0 \leq |\xi| \leq 1+2\epsilon_0 \}$. A \emph{local cascade operator} is defined to be a finite linear combination of basic local cascade operators.
\end{definition}
Note from the Plancherel theorem that one has
$$ \sum_n (1 + (1+\epsilon_0)^{2n})^{10} |\langle u, \psi_{1,n} \rangle|^2 < \infty$$
whenever $u \in H^{10}_\df(\R^3)$ and $\psi_{1,n}$ is as in Definition \ref{cascdef}. Similarly for $\psi_{2,n}$ and $\psi_{3,n}$. From this and the H\"older inequality it is an easy matter to verify that the sum in \eqref{cuw} is absolutely convergent for any $u,v,w \in H^{10}_\df(\R^3)$, so that local cascade operators are well defined, and that such operators are bounded from $H^{10}_\df(\R^3) \times H^{10}_\df(\R^3)$ to $H^{10}_\df(\R^3)^*$; indeed, the same argument shows that such operators map $H^{10}_\df(\R^3) \times H^{10}_\df(\R^3)$ to $L^2(\R^3)$. (One could in fact extend such operators to significantly rougher spaces than $H^{10}_\df(\R^3)$, but we will not need to do so here.)
We did not impose that the $\psi_i$ were divergence free, but one could easily do so via Leray projections if desired, in which case the operators $C(u,v)$ defined via duality in \eqref{cuw} can be expressed more directly as
$$
C(u,v) = \sum_{n \in \Z} (1+\epsilon_0)^{5n/2} \langle u, \psi_{1,n} \rangle \langle v, \psi_{2,n} \rangle \psi_{3,n}.
$$
We remark that the exponent $5/2$ appearing in \eqref{cuw} ensures that local cascade operators enjoy a dyadic version of the scale invariance that the Euler bilinear form enjoys. Indeed, recalling the dilation operators \eqref{dila}, one can compute that for any $u,v,w \in H^{10}_\df(\R^3)$, one has
$$ \langle B( \Dil_\lambda u, \Dil_\lambda v ), \Dil_\lambda w \rangle = \lambda^{5/2} \langle B(u,v), w \rangle,$$
and similarly for any local cascade operator $C$ one has
$$ \langle C( \Dil_\lambda u, \Dil_\lambda v ), \Dil_\lambda w \rangle = \lambda^{5/2} \langle C(u,v), w \rangle,$$
under the additional restriction that $\lambda$ is an integer power of $1+\epsilon_0$.
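These identities follow from routine changes of variable, which we record for the reader's convenience. For $B$, the Leray projection commutes with dilations, each of the three factors of $\Dil_\lambda$ contributes $\lambda^{3/2}$, the derivative contributes $\lambda$, and the substitution $y = \lambda x$ contributes $\lambda^{-3}$:
$$ \int_{\R^3} ((\Dil_\lambda u \cdot \nabla) \Dil_\lambda v) \cdot \Dil_\lambda w\ dx = \lambda^{9/2} \cdot \lambda \cdot \lambda^{-3} \int_{\R^3} ((u \cdot \nabla) v) \cdot w\ dy = \lambda^{5/2} \int_{\R^3} ((u \cdot \nabla) v) \cdot w\ dy,$$
and similarly with $u$ and $v$ interchanged. For $C$, one instead uses the identity $\Dil_{(1+\epsilon_0)^{-m}} \psi_{i,n} = \psi_{i,n-m}$ (so that $\langle \Dil_\lambda u, \psi_{i,n} \rangle = \langle u, \psi_{i,n-m} \rangle$ when $\lambda = (1+\epsilon_0)^m$) and shifts the summation index in \eqref{cuw} by $m$, which multiplies the weight $(1+\epsilon_0)^{5n/2}$ by exactly $\lambda^{5/2}$.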
Theorem \ref{main} is then an immediate consequence of the following two results.
\begin{theorem}[Local cascade operators are averaged Euler operators]\label{avg} Let $\epsilon_0>0$ be a sufficiently small absolute constant. Then every local cascade operator (with dyadic scale parameter $\epsilon_0$) is an averaged Euler bilinear operator.
\end{theorem}
\begin{theorem}[Blowup for a local cascade equation]\label{blowup} Let $0 < \epsilon_0 < 1$. Then there exists a symmetric local cascade operator $C: H^{10}_\df(\R^3) \times H^{10}_\df(\R^3) \to H^{10}_\df(\R^3)^*$ (with dyadic scale parameter $\epsilon_0$) obeying the cancellation property
\begin{equation}\label{cancelled}
\langle C(u,u), u \rangle = 0
\end{equation}
for all $u \in H^{10}_\df(\R^3)$, and a Schwartz divergence-free vector field $u_0$, such that there does not exist any global mild solution $u: [0,+\infty) \to H^{10}_\df(\R^3)$ to the initial value problem
\begin{equation}\label{system}
\begin{split}
\partial_t u &= \Delta u + C(u,u) \\
u(0,\cdot) &= u_0,
\end{split}
\end{equation}
that is to say there does not exist any continuous $u: [0,+\infty) \to H^{10}_\df(\R^3)$ with
$$ u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} C\left( u(t'), u(t') \right)\ dt'$$
for all $t \in [0,+\infty)$.
\end{theorem}
Theorem \ref{blowup} is the main technical result of this paper, and its proof will occupy the subsequent sections of this paper. In this section we establish Theorem \ref{avg}. This will be done by a somewhat lengthy series of averaging arguments and Fourier decompositions, together with some elementary three-dimensional geometry, with the result ultimately following from a certain non-degeneracy property of the trilinear form $\Lambda$ defined in \eqref{lambda-def}; the arguments are unrelated to those in the rest of the paper, and readers may wish to initially skip this section and move on to the rest of the argument.
Henceforth $\epsilon_0>0$ will be assumed to be sufficiently small (e.g. $\epsilon_0 = 10^{-10}$ will suffice). In this section, the implied constants in the $O()$ notation are not permitted to depend on $\epsilon_0$.
\subsection{First step: complexification}
It will be convenient to complexify the problem in order to freely use Fourier-analytic tools at later stages of the argument. To this end, we introduce the following notation.
\begin{definition}[Complex averaging]\label{avg-def} Let $C, C': H^{10}_\df(\R^3) \otimes \C \times H^{10}_\df(\R^3) \otimes \C \to H^{10}_\df(\R^3)^* \otimes \C$ be bounded (complex-)bilinear operators. We say that $C$ is a \emph{complex average} of $C'$ if there exists a finite measure space $(\Omega,\mu)$ and measurable functions $m_{i,\cdot}(D): \Omega \to {\mathcal M}_0 \otimes \C$, $R_{i,\cdot}: \Omega \to \SO(3)$, $\lambda_{i,\cdot}: \Omega \to (0,+\infty)$ for $i=1,2,3$ such that
\begin{equation}\label{cuvw}
\begin{split}
&\langle C(u,v), w \rangle = \int_\Omega \\
&\left\langle C'\left( m_{1,\omega}(D) \Rot_{R_{1,\omega}} \Dil_{\lambda_{1,\omega}} u, m_{2,\omega}(D) \Rot_{R_{2,\omega}} \Dil_{\lambda_{2,\omega}} v\right), m_{3,\omega}(D) \Rot_{R_{3,\omega}} \Dil_{\lambda_{3,\omega}} w\right\rangle\ d\mu(\omega),
\end{split}\end{equation}
and that one has the integrability conditions
\begin{equation}\label{integrab}
\int_\Omega \| m_{1,\omega}(D) \|_{k_1} \| m_{2,\omega}(D) \|_{k_2} \| m_{3,\omega}(D) \|_{k_3}\ d\mu(\omega) < \infty
\end{equation}
and
$$ C_0^{-1} \leq \lambda_{1,\omega}, \lambda_{2,\omega}, \lambda_{3,\omega} \leq C_0$$
for any natural numbers $k_1,k_2,k_3$ (recall that the seminorms $\| \|_k$ on ${\mathcal M}_0 \otimes \C$ were defined in \eqref{seminorm}) and some finite $C_0$. Here, we complexify the inner product $\langle,\rangle$ by defining
$$ \langle u, v \rangle := \int_{\R^3} u(x) \cdot v(x)\ dx$$
for complex vector fields $u,v \in H^{10}_\df(\R^3) \otimes \C$; note that we do \emph{not} place a complex conjugate on the $v$ factor, so the inner product is complex bilinear rather than sesquilinear.
\end{definition}
Suppose we can show that every local cascade operator $C$ is a complex average of the Euler bilinear operator $B$ in the sense of the above definition. The multipliers $m_{j,\omega}(D)$ for $j=1,2,3$ appearing in the expansion \eqref{cuvw} are not required to be real, but we can decompose them as $m_{j,\omega,1}(D) + i m_{j,\omega,2}(D)$ where $m_{j,\omega,1}(D), m_{j,\omega,2}(D)$ are real (and with the seminorms of $m_{j,\omega,1}(D), m_{j,\omega,2}(D)$ bounded by a multiple of the corresponding seminorm of $m_{j,\omega}(D)$). Thus we can decompose the right-hand side of \eqref{cuvw} as the sum of $2^3=8$ pieces, each of which is of the same form as the original right-hand side up to a power of $i$, and with all the $m_{j,\omega}(D)$ appearing in each piece being a real Fourier multiplier. As the left-hand side of \eqref{cuvw} is real (as are the inner products on the right-hand side), we may eliminate all the terms on the right-hand side involving odd powers of $i$ by taking real parts. The power of $i$ in each of the four remaining terms is now just a sign $\pm 1$ and can be absorbed into the $m_{1,\omega}(D)$ factor; by concatenating together four copies of $(\Omega,\mu)$ we may now obtain an expansion of the form \eqref{cuvw} in which all the $m_{j,\omega}(D)$ are real. Finally, by multiplying $m_{1,\omega}(D)$ by a normalising constant we may take $(\Omega,\mu)$ to be a probability space rather than a finite measure space. Combining all these manipulations, we conclude Theorem \ref{avg}. Thus, it will suffice to show that every local cascade operator is a complex average of the Euler bilinear operator $B$.
\subsection{Second step: frequency localisation}
By again using $m_{1,\omega}(D)$ to absorb scalar factors, we see that if $C$ is a complex average of $C'$, then any complex scalar multiple of $C$ is a complex average of $C'$; also, by concatenating finite measure spaces together we see from Definition \ref{avg-def} that if $C_1, C_2$ are both complex averages of $C'$, then $C_1+C_2$ is a complex average of $C'$. Thus the space of averages of the Euler bilinear operator is closed under finite linear combinations, and so it will suffice to show that every \emph{basic} local cascade operator is a complex average of the Euler bilinear operator.
By decomposing the $\psi_j$, $j=1,2,3$ in \eqref{cuw} into finitely many (complex-valued) pieces, we may replace the basic local cascade operator with the complexified basic local cascade operator $C$ defined by
\begin{equation}\label{cuw-soft}
\langle C(u,v), w \rangle = \sum_{n \in \Z} (1+\epsilon_0)^{5n/2} \langle u, \overline{\psi_{1,n}} \rangle \langle v, \overline{\psi_{2,n}} \rangle \langle w, \overline{\psi_{3,n}} \rangle,
\end{equation}
where each $\psi_j: \R^3 \to \C^3$ is now a Schwartz complex vector field with Fourier transform $\hat \psi_j$ supported on the ball $B(\xi^0_j, \epsilon_0^3)$ for some non-zero $\xi^0_j \in \R^3$ with magnitude comparable to $1$. Henceforth we fix $C$ to be such a complexified basic local cascade operator. Note that due to the presence of rotations and dilations in the definition of a complex average, we have the freedom to rotate and dilate each of the $\xi^0_j$ as we please. We shall select the normalisation
\begin{equation}\label{xi-split}
\begin{split}
\xi^0_1 &= (0,1,0) \\
\xi^0_2 &= (-1,-1,0) \\
\xi^0_3 &= (1,0,0)
\end{split}
\end{equation}
so that in particular
\begin{equation}\label{xisum}
\xi^0_1 + \xi^0_2 + \xi^0_3 = 0;
\end{equation}
see Figure \ref{fig:freq}. The exact normalisation in \eqref{xi-split} is somewhat arbitrary, but the vanishing \eqref{xisum} is convenient for technical reasons; also, it is necessary to ensure that $\xi^0_1,\xi^0_2,\xi^0_3$ do not all have the same magnitude, in order to avoid a certain degeneracy later in the argument (namely, the failure of \eqref{c-nondeg} below).
\begin{figure} [t]
\centering
\includegraphics{./tripod.png}
\caption[Frequencies]{The frequencies $\xi^0_1,\xi^0_2,\xi^0_3$. The frequency variables $\xi_j$ (and later on, the normalised frequencies $\tilde \xi_j$) will be localised to within $O(\epsilon_0^3)$ of $\xi_j^0$; this localisation is represented schematically in this figure by the circles around the reference frequencies $\xi_j^0$.}
\label{fig:freq}
\end{figure}
Once we perform this normalisation, we will have no further need of averaging over dilations, and will rely purely on Fourier and rotation averaging to obtain the required representation of the cascade operator $C$.
Note that ${\mathcal M}_0 \otimes \C$ is closed under composition, and from \eqref{seminorm} and the Leibniz rule we have the inequalities
$$ \| m(D) m'(D) \|_k \leq C_k \sum_{k_1=0}^k \sum_{k_2=0}^k \| m(D) \|_{k_1} \|m'(D)\|_{k_2}$$
for all natural numbers $k$ and all $m(D), m'(D) \in {\mathcal M}_0 \otimes \C$ (where $C_k$ depends only on $k$). From this, Fubini's theorem, and H\"older's inequality, together with the observation that rotation and dilation operators normalise ${\mathcal M}_0 \otimes \C$, we have the following transitivity property: if $C_1$ is a complex average of $C_2$, and $C_2$ is a complex average of $C_3$, then $C_1$ is a complex average of $C_3$. Our proof strategy will exploit this transitivity by passing from the Euler bilinear operator $B$ to the local cascade operator $C$ in stages, performing a sequence of averaging operations on $B$ to gradually make it resemble the local cascade operator.
\subsection{Third step: forcing frequency comparability}
We now use some differential operators of imaginary order to localise the frequencies $\xi_1,\xi_2,\xi_3$ to be comparable to each other in magnitude. Let $\varphi: \R \to \R$ be a smooth function supported on $[-2,2]$ that equals one on $[-1,1]$. We then define the function $\eta: (0,+\infty)^3 \to \R$ by
$$ \eta(N_1,N_2,N_3) := \prod_{j=2}^3 \varphi\left( \frac{1}{10\epsilon_0^2} \left( \frac{N_j}{N_1} - \frac{|\xi^0_j|}{|\xi^0_1|} \right) \right);$$
thus $\eta(|\xi_1|, |\xi_2|, |\xi_3|)$ is non-vanishing only when the ratios $|\xi_2|/|\xi_1|$ and $|\xi_3|/|\xi_1|$ lie within $20\epsilon_0^2$ of $|\xi^0_2|/|\xi^0_1|$ and $|\xi^0_3|/|\xi^0_1|$ respectively; in particular, $\xi_1,\xi_2,\xi_3$ then have comparable magnitude.
Note that $\eta(N_1,N_2,N_3) = \eta(1,e^{\log(N_2/N_1)},e^{\log(N_3/N_1)})$, and that $(x,y) \mapsto \eta(1,e^x,e^y)$ is a smooth compactly supported function. By Fourier\footnote{One could also use Mellin inversion here if desired.} inversion, we thus have a representation of the form
\begin{align*}
\eta(N_1,N_2,N_3) &= \eta( 1, e^{\log(N_2/N_1)}, e^{\log(N_3/N_1)}) \\
&= \int_\R \int_\R e^{it_2 \log(N_2/N_1)} e^{it_3 \log(N_3/N_1)} \phi(t_2,t_3)\ dt_2 dt_3\\
&= \int_\R \int_\R N_1^{-it_2-it_3} N_2^{it_2} N_3^{it_3} \phi(t_2,t_3)\ dt_2 dt_3
\end{align*}
for any $N_1,N_2,N_3 > 0$, where $\phi: \R^2 \to \C$ is a rapidly decreasing function, thus
$$ \int_\R \int_\R |\phi(t_2,t_3)| (1+|t_2|+|t_3|)^k\ dt_2 dt_3 <\infty$$
for all $k \geq 0$. If we then define the bilinear operator $B_\eta: H^{10}_\df(\R^3) \otimes \C \times H^{10}_\df(\R^3) \otimes \C \to H^{10}_\df(\R^3)^* \otimes \C$ via duality by the formula
$$ \langle B_\eta(u,v), w \rangle :=
\int_\R \int_\R \left\langle B\left( D^{-it_2-it_3} u, D^{it_2} v \right), D^{it_3} w \right\rangle \phi(t_2,t_3)\ dt_2 dt_3$$
where $D^{it}$ is the Fourier multiplier
$$ \widehat{D^{it} u}(\xi) := |\xi|^{it} \hat u(\xi)$$
then $B_\eta$ is a complex average of $B$ (note that $\| D^{it} \|_k$ grows polynomially in $t$ for each $k$). From \eqref{bwing} and Fubini's theorem (working first with Schwartz $u,v,w$ to justify all the exchanges of integrals, and then taking limits) we see that\footnote{Note that we do not define $\eta(|\xi_1|,|\xi_2|,|\xi_3|)$ when one of $\xi_1,\xi_2,\xi_3$ vanishes, but this only occurs on a set of measure zero and so there is no difficulty defining the integral.}
$$
\langle B_\eta(u,v), w \rangle = -\pi i \int_{\xi_1+\xi_2+\xi_3=0} \eta(|\xi_1|,|\xi_2|,|\xi_3|) \Lambda_{\xi_1,\xi_2,\xi_3}( \hat u(\xi_1), \hat v(\xi_2), \hat w(\xi_3) ).
$$
It thus suffices to show that $C$ is a complex average of $B_\eta$.
Next, we localise the frequency $\xi_1$ to the correct sequence of balls. Let $\rho: \R^3 \to \C$ be the function
$$ \rho(\xi_1) := \sum_{n \in \Z} \varphi\left( \frac{1}{\epsilon_0^2} \left( (1+\epsilon_0)^{-n} \xi_1 - \xi_1^0 \right) \right)$$
with $\varphi$ defined as before; thus $\rho$ is supported on the union of the balls $(1+\epsilon_0)^n \cdot B( \xi_1^0, 2\epsilon_0^2)$ for $n \in \Z$. Let $\rho(D)$ be the associated Fourier multiplier; this is easily checked to be a Fourier multiplier of order $0$. By Definition \ref{avg-def}, the bilinear operator $B_{\eta,\rho}$ defined by
$$ B_{\eta,\rho}(u,v) := B_\eta( \rho(D) u, v )$$
is clearly a complex average of $B_\eta$, and so it suffices to show that $C$ is a complex average of $B_{\eta,\rho}$.
\subsection{Fourth step: localising to a single frequency scale}
Now we localise to a single scale. Observe that we can decompose $B_{\eta,\rho} = -\pi i \sum_{n \in \Z} (1+\epsilon_0)^{5n/2} B_{\eta,\rho,n}$, where for each $n \in \Z$ we may define the operator $B_{\eta,\rho,n}: H^{10}_\df(\R^3) \otimes \C \times H^{10}_\df(\R^3) \otimes \C \to H^{10}_\df(\R^3)^* \otimes \C$ by the formula
\begin{align*}
\langle B_{\eta,\rho,n}(u,v), w \rangle &= (1+\epsilon_0)^{-5n/2} \int_{\xi_1+\xi_2+\xi_3=0} \varphi\left(\frac{1}{\epsilon_0^2} \left((1+\epsilon_0)^{-n} \xi_1 - \xi_1^0\right)\right) \\
&\quad \eta(|\xi_1|,|\xi_2|,|\xi_3|) \Lambda_{\xi_1,\xi_2,\xi_3}( \hat u(\xi_1), \hat v(\xi_2), \hat w(\xi_3))
\end{align*}
for $u,v,w \in H^{10}_\df(\R^3)$.
In a similar vein, we may use \eqref{cuw-soft} to decompose $C = \sum_{n \in \Z} (1+\epsilon_0)^{5n/2} C_n$, where
\begin{equation}\label{cnuv}
\langle C_n(u,v), w \rangle = \langle u, \overline{\psi_{1,n}} \rangle \langle v, \overline{\psi_{2,n}} \rangle \langle w, \overline{\psi_{3,n}} \rangle.
\end{equation}
Observe (by using the change of variables $\tilde \xi := \xi / (1+\epsilon_0)^n$) that we have the scaling laws
$$
\langle B_{\eta,\rho,n}(u,v), w \rangle = \left\langle B_{\eta,\rho,0}\left(\Dil_{(1+\epsilon_0)^{-n}} u, \Dil_{(1+\epsilon_0)^{-n}} v\right), \Dil_{(1+\epsilon_0)^{-n}} w \right\rangle
$$
and similarly
$$
\langle C_n(u,v), w \rangle = \left\langle C_0\left(\Dil_{(1+\epsilon_0)^{-n}} u, \Dil_{(1+\epsilon_0)^{-n}} v\right), \Dil_{(1+\epsilon_0)^{-n}} w \right\rangle
$$
for any $n \in \Z$ and $u,v,w \in H^{10}_\df(\R^3) \otimes \C$.
Suppose for now that we can show that $C_0$ is a complex average of $B_{\eta,\rho,0}$ (without the use of dilation operators), thus
\begin{equation}\label{couv}
\langle C_0(u,v), w \rangle = \int_\Omega \left\langle B_{\eta,\rho,0}\left( m_{1,\omega}(D) \Rot_{R_{1,\omega}} u, m_{2,\omega}(D) \Rot_{R_{2,\omega}} v\right), m_{3,\omega}(D) \Rot_{R_{3,\omega}} w\right\rangle\ d\mu(\omega)
\end{equation}
for some $m_{j,\omega}$, $R_{j,\omega}$ ($j=1,2,3$), and $(\Omega,\mu)$ as in Definition \ref{avg-def}. From the definition of $C_0$ (and the support hypotheses on $\psi_1,\psi_2,\psi_3$), we see that we may smoothly localise each $m_{j,\omega}$ to the ball $B( \xi_j^0, O(\epsilon_0^3) )$ without loss of generality (and without destroying the fact that the $m_{i,\omega}(D)$ are Fourier multipliers of order $0$ that obey \eqref{integrab}). If we then define
$$ m_{i,\omega,n}(\xi) := m_{i,\omega}( (1+\epsilon_0)^{-n} \xi )$$
and $\tilde m_{i,\omega} := \sum_{n \in \Z} m_{i,\omega,n}$, then the $\tilde m_{i,\omega}(D)$ are also Fourier multipliers of order $0$ obeying \eqref{integrab}, and the quantity
$$ \int_\Omega \left\langle B_{\eta,\rho}\left( m_{1,\omega,n_1}(D) \Rot_{R_{1,\omega}} u, m_{2,\omega,n_2}(D) \Rot_{R_{2,\omega}} v\right), m_{3,\omega,n_3}(D) \Rot_{R_{3,\omega}} w\right\rangle\ d\mu(\omega)$$
is equal to $-\pi i (1+\epsilon_0)^{5n_1/2} \langle C_{n_1}(u,v),w\rangle$ when $n_1=n_2=n_3$, and vanishing otherwise if $\epsilon_0$ is small enough (thanks to the support properties of $m_{i,\omega,n}$, $\eta$ and $\rho$). Summing, we see that
$$ \langle C(u,v), w \rangle = \frac{1}{-\pi i} \int_\Omega \left\langle B_{\eta,\rho}\left( \tilde m_{1,\omega}(D) \Rot_{R_{1,\omega}} u, \tilde m_{2,\omega}(D) \Rot_{R_{2,\omega}} v\right), \tilde m_{3,\omega}(D) \Rot_{R_{3,\omega}} w\right\rangle\ d\mu(\omega)$$
(as before, one can work first with Schwartz $u,v,w$, and then take limits), thus demonstrating that $C$ is a complex average of $B_{\eta,\rho}$ as desired (absorbing the $\frac{1}{-\pi i}$ factor into $\tilde m_{1,\omega}$). Thus, to finish the proof of Theorem \ref{avg}, it suffices to show that $C_0$ is a complex average of $B_{\eta,\rho,0}$.
\subsection{Fifth step: extracting the symbol}
We have reduced matters to the task of obtaining a representation \eqref{couv} for $\langle C_0(u,v), w \rangle$. By \eqref{cnuv} and Plancherel's theorem, we may expand $\langle C_0(u,v),w\rangle$ as
$$
\int_{\R^3} \int_{\R^3} \int_{\R^3} \left(\hat u(\xi_1) \cdot \overline{\hat \psi_1(\xi_1)}\right) \left(\hat v(\xi_2) \cdot \overline{\hat \psi_2(\xi_2)}\right) \left(\hat w(\xi_3) \cdot \overline{\hat \psi_3(\xi_3)}\right)\ d\xi_1 d\xi_2 d\xi_3$$
which we rewrite as
\begin{equation}\label{expansion}
\int_{\R^3} \int_{\R^3} \int_{\R^3} \left(\hat u(\xi_1) \otimes \hat v(\xi_2) \otimes \hat w(\xi_3)\right) \cdot \left(\overline{\hat \psi_1(\xi_1)} \otimes \overline{\hat \psi_2(\xi_2)} \otimes \overline{\hat \psi_3(\xi_3)}\right)\ d\xi_1 d\xi_2 d\xi_3
\end{equation}
where $\cdot$ here denotes the standard complex-bilinear inner product on the $3^3=27$-dimensional complex vector space $\C^3 \otimes \C^3 \otimes \C^3$.
Meanwhile, the right-hand side of \eqref{couv} can be expanded as
\begin{align*}
&\int_\Omega \int_{\xi_1+\xi_2+\xi_3=0} m_{1,\omega}(\xi_1) m_{2,\omega}(\xi_2) m_{3,\omega}(\xi_3) \varphi\left(\frac{1}{\epsilon_0^2} \left(\xi_1 - \xi_1^0\right)\right) \eta(|\xi_1|,|\xi_2|,|\xi_3|) \times \\
&\quad
\Lambda_{\xi_1,\xi_2,\xi_3}\left( R_{1,\omega} \hat u(R_{1,\omega}^{-1} \xi_1), R_{2,\omega} \hat v(R_{2,\omega}^{-1} \xi_2), R_{3,\omega} \hat w(R_{3,\omega}^{-1} \xi_3)\right)\ d\mu(\omega).
\end{align*}
Rewriting the integral $\int_{\xi_1+\xi_2+\xi_3=0}$ (by a slight abuse\footnote{If one wanted to be more formally rigorous here, one could replace the Dirac delta function $\delta(\xi)$ here with an approximation to the identity $\frac{1}{\eps^3} \phi(\frac{\xi}{\eps})$ for some smooth compactly supported function $\phi: \R^3 \to \R$ of total mass one, and then add a limit symbol $\lim_{\eps \to 0}$ outside of the integration.} of notation) as $\int_{\R^3} \int_{\R^3} \int_{\R^3} \delta(\xi_1+\xi_2+\xi_3)\ d\xi_1 d\xi_2 d\xi_3$, where $\delta$ is the Dirac delta function on $\R^3$, and then applying the change of variables $\xi_j \mapsto R_{j,\omega}^{-1} \xi_j$, we may rewrite the above expression as
\begin{align*}
&\int_{\R^3} \int_{\R^3} \int_{\R^3} \int_\Omega \delta( R_{1,\omega} \xi_1 + R_{2,\omega} \xi_2 + R_{3,\omega} \xi_3 )
m_{1,\omega}(R_{1,\omega} \xi_1) m_{2,\omega}(R_{2,\omega} \xi_2) m_{3,\omega}(R_{3,\omega} \xi_3)\\
&\quad \varphi\left(\frac{1}{\epsilon_0^2} \left(R_{1,\omega} \xi_1 - \xi_1^0\right)\right) \eta(|\xi_1|, |\xi_2|,|\xi_3|) \times \\
&\quad
\Lambda_{R_{1,\omega} \xi_1, R_{2,\omega} \xi_2, R_{3,\omega} \xi_3}( R_{1,\omega} \hat u(\xi_1), R_{2,\omega} \hat v(\xi_2), R_{3,\omega} \hat w(\xi_3)) \\
&\quad d\mu(\omega) d\xi_1 d\xi_2 d\xi_3.
\end{align*}
Comparing this with the expansion \eqref{expansion} of the left-hand side of \eqref{couv}, we claim that our task is now reduced to that of constructing a finite measure space $(\Omega,\mu)$ and measurable functions $R_{i,\cdot}: \Omega \to \SO(3)$, $m_{i,\cdot}(D): \Omega \to {\mathcal M}_0 \otimes \C$, and $F: \Omega \to \C^3\otimes \C^3 \otimes \C^3$ obeying \eqref{integrab} with $F$ bounded, such that we have the identity
\begin{equation}\label{bigmess}
\begin{split}
X_1 \otimes X_2 \otimes X_3 &= \int_\Omega \delta( R_{1,\omega} \xi_1 + R_{2,\omega} \xi_2 + R_{3,\omega} \xi_3 ) F(\omega)
m_{1,\omega}(R_{1,\omega} \xi_1) m_{2,\omega}(R_{2,\omega} \xi_2) m_{3,\omega}(R_{3,\omega} \xi_3)\\
&\quad \varphi\left(\frac{1}{\epsilon_0^2} \left(R_{1,\omega} \xi_1 - \xi_1^0\right)\right) \eta(|\xi_1|,|\xi_2|,|\xi_3|) \times \\
&\quad
\Lambda_{R_{1,\omega} \xi_1, R_{2,\omega} \xi_2, R_{3,\omega} \xi_3}( R_{1,\omega} X_1, R_{2,\omega} X_2, R_{3,\omega} X_3)\ d\mu(\omega)
\end{split}
\end{equation}
for all $\xi_j \in B(\xi_j^0,\epsilon_0^3)$ and $X_j \in \xi_j^\perp$, $j=1,2,3$. Indeed, if one applies \eqref{bigmess} with $(X_1,X_2,X_3) = (\hat u(\xi_1), \hat v(\xi_2),\hat w(\xi_3))$, contracts the resulting tensor against
$\overline{\hat \psi_1(\xi_1)} \otimes \overline{\hat \psi_2(\xi_2)} \otimes \overline{\hat \psi_3(\xi_3)}$ and then integrates in $\xi_1,\xi_2,\xi_3$ (absorbing the $\overline{\hat \psi_i}$ and $F$ factors into the $m_{j,\omega}$ terms, after first breaking $F$ into $27$ components), we obtain the desired decomposition \eqref{couv} (after replacing $\Omega$ with the disjoint union of $27$ copies of $\Omega$ to accommodate the contributions from the various components of $F$). As before, one may wish to first work with Schwartz $u,v,w$ to justify the interchanges of integrals, and then take limits at the end of the argument.
\subsection{Sixth step: simplifying the weights}
It remains to obtain the decomposition \eqref{bigmess}. We will restrict attention to those rotations $R_{j,\omega}$ which almost fix $\xi_j^0$ in the sense that
\begin{equation}\label{roxi}
|R_{j,\omega} \xi_j^0 - \xi_j^0| < \epsilon_0^2/2
\end{equation}
for $j=1,2,3$. With this restriction, the weight $\varphi(\frac{1}{\epsilon_0^2} (R_{1,\omega} \xi_1 - \xi_1^0)) \eta(|\xi_1|,|\xi_2|,|\xi_3|)$ is equal to one for $\epsilon_0$ small enough: indeed, by \eqref{roxi} one has $|R_{1,\omega} \xi_1 - \xi_1^0| \leq |R_{1,\omega}(\xi_1 - \xi_1^0)| + |R_{1,\omega} \xi_1^0 - \xi_1^0| < \epsilon_0^3 + \epsilon_0^2/2 < \epsilon_0^2$, so the $\varphi$ factor equals one, while $|\xi_j| = |\xi_j^0| + O(\epsilon_0^3)$ keeps the arguments of $\eta$ in the region where $\eta$ equals one. Thus \eqref{bigmess} simplifies to
\begin{equation}\label{bigmess-2}
\begin{split}
X_1 \otimes X_2 \otimes X_3 &= \int_\Omega \delta( R_{1,\omega} \xi_1 + R_{2,\omega} \xi_2 + R_{3,\omega} \xi_3 ) F(\omega)
m_{1,\omega}(R_{1,\omega} \xi_1) m_{2,\omega}(R_{2,\omega} \xi_2) m_{3,\omega}(R_{3,\omega} \xi_3) \times \\
&\quad
\Lambda_{R_{1,\omega} \xi_1, R_{2,\omega} \xi_2, R_{3,\omega} \xi_3}( R_{1,\omega} X_1, R_{2,\omega} X_2, R_{3,\omega} X_3)\ d\mu(\omega).
\end{split}
\end{equation}
Let
$$ \Sigma \subset \SO(3) \times \SO(3) \times \SO(3) \times \R^3 \times \R^3 \times \R^3$$
denote the set of sextuples $(R_1,R_2,R_3,\xi_1,\xi_2,\xi_3)$ where $R_j \in \SO(3)$ with $|R_j \xi_j^0 -\xi_j^0| < \epsilon_0^2/4$ for $j=1,2,3$, and $\xi_i \in B( \xi_i^0, 2\epsilon_0^3 )$ for $i=1,2,3$ with
$$ R_1 \xi_1 + R_2 \xi_2 + R_3 \xi_3 = 0.$$
For $\epsilon_0$ small enough, we see from the implicit function theorem that this is a smooth manifold (of dimension $15$), and that for any choice of $\xi_j \in B( \xi_j^0, 2\epsilon_0^3 )$ for $j=1,2,3$, the slice
$$ \Sigma_{\xi_1,\xi_2,\xi_3} := \{ (R_1,R_2,R_3): (R_1,R_2,R_3,\xi_1,\xi_2,\xi_3) \in \Sigma \}$$
is a smooth manifold (of dimension $6$).
Suppose that we can find a smooth function
$$ F': \Sigma \to \C^3 \otimes \C^3 \otimes \C^3$$
such that we have the identity
\begin{equation}\label{bigmess-3}
\begin{split}
X_1 \otimes X_2 \otimes X_3 &= \int_{\Sigma_{\xi_1,\xi_2,\xi_3}} F'( R_1,R_2,R_3, \xi_1,\xi_2,\xi_3) \times \\
&\quad
\Lambda_{R_1 \xi_1, R_2 \xi_2, R_3 \xi_3}( R_1 X_1, R_2 X_2, R_3 X_3 )\ d\sigma(R_1,R_2,R_3)
\end{split}
\end{equation}
whenever $\xi_j \in B(\xi_j^0,\epsilon_0^3)$ and $X_j \in \xi_j^\perp$, $j=1,2,3$, where $d\sigma(R_1,R_2,R_3)$ is surface measure on $\Sigma_{\xi_1,\xi_2,\xi_3}$. By a change of variables, this can be rewritten as
\begin{align*}
X_1 \otimes X_2 \otimes X_3 &= \int_{U} \delta( R_1 \xi_1 + R_2 \xi_2 + R_3 \xi_3 )
\tilde F'( R_1,R_2,R_3, \xi_1,\xi_2,\xi_3) \times \\
&\quad
\Lambda_{R_1 \xi_1, R_2 \xi_2, R_3 \xi_3}( R_1 X_1, R_2 X_2, R_3 X_3 )\ dR_1 dR_2 dR_3
\end{align*}
where $\tilde F': \Sigma \to \C^3 \otimes \C^3 \otimes \C^3$ is another smooth function ($F'$ multiplied by some Jacobian factors) and $dR_1,dR_2,dR_3$ denote Haar measure on $\SO(3)$, with
$$ U := \{ (R_1,R_2,R_3) \in \SO(3)^3: |R_j \xi_j^0 -\xi_j^0| < \epsilon_0^2/4 \hbox{ for } j=1,2,3 \}.$$
We may smoothly extend $\tilde F'$ to become a smooth compactly supported function on the larger domain
$$ U \times B(\xi_1^0,2\epsilon_0^3) \times B(\xi_2^0,2\epsilon_0^3) \times B(\xi_3^0,2\epsilon_0^3).$$
By a Fourier expansion and another smooth truncation, we may thus write
$$ \tilde F'( R_1,R_2,R_3, \xi_1,\xi_2,\xi_3) = \int_{\R^3 \times \R^3 \times \R^3} f(R_1,R_2,R_3,x_1,x_2,x_3) \prod_{j=1}^3 (e^{2\pi i x_j \cdot \xi_j} m_j(\xi_j))\ dx_1 dx_2 dx_3$$
whenever $(R_1,R_2,R_3) \in U$ and $\xi_j \in B(\xi_j^0,\epsilon_0^3)$, where $m_j$ is a smooth function supported on $B(\xi_j^0, 3\epsilon_0^3)$, and $f: U \times \R^3 \times \R^3 \times \R^3 \to \C^3 \otimes \C^3 \otimes \C^3$ is rapidly decreasing in $x_1,x_2,x_3$, uniformly in $R_1,R_2,R_3$. Inserting this expansion into \eqref{bigmess-3}, we obtain the desired expansion \eqref{bigmess-2} (taking $\Omega$ to be $U \times \R^3 \times \R^3 \times \R^3$, with $\mu$ being Haar measure weighted by $|f|$, choosing the $m_{j,\omega}$ to be an appropriately rotated version of $m_j$, twisted by a plane wave, and with $F := f/|f|$).
\subsection{Seventh step: restricting to rotations around fixed axes}
It remains to find a smooth function $F'$ for which one has the required representation \eqref{bigmess-3}. Observe from \eqref{xisum} and the implicit function theorem (for $\epsilon_0$ small enough) that if $\xi_j \in B(\xi_j^0,\epsilon_0^3)$ for $j=1,2,3$, one can find rotations $R_{j,\xi_1,\xi_2,\xi_3} \in \SO(3)$ for $j=1,2,3$ with
$$
R_{j,\xi_1,\xi_2,\xi_3} = I + O(\epsilon_0^3) $$
(where $I$ is the identity matrix) and such that the tuple $(\tilde \xi_1,\tilde \xi_2, \tilde \xi_3)$
defined by
\begin{equation}\label{txdef}
\tilde \xi_j := R_{j,\xi_1,\xi_2,\xi_3} \xi_j
\end{equation}
lives in the space
\begin{equation}\label{gamma-def}
\Gamma := \{ (\eta_1,\eta_2,\eta_3) \in B(\xi_1^0, C\epsilon_0^3) \times B(\xi_2^0, C\epsilon_0^3) \times B(\xi_3^0, C\epsilon_0^3): \eta_1 + \eta_2 + \eta_3 = 0 \}
\end{equation}
for some absolute constant $C$ independent of $\epsilon_0$. Furthermore, from the implicit function theorem we may make $R_{j,\xi_1,\xi_2,\xi_3}$ and hence $\tilde \xi_1, \tilde \xi_2, \tilde \xi_3$ depend smoothly on $\xi_1,\xi_2,\xi_3$ in the indicated domain if $\epsilon_0$ is small enough. If we let $R_\xi^\theta \in \SO(3)$ denote the rotation by $\theta$ around the axis $\xi$ using the right-hand rule\footnote{More precisely, if $u$ is the unit vector $u = \xi/|\xi|$, we define $R_\xi^\theta X := (X \cdot u) u + \cos(\theta) (X - (X \cdot u) u) + \sin(\theta) u \times X$.} for any $\xi \in \R^3 \backslash \{0\}$ and $\theta \in \R/2\pi \Z$, we then see that the six-dimensional manifold
\begin{equation}\label{slor}
\{ ( S R_{\tilde \xi_1}^{\theta_1} R_{1,\xi_1,\xi_2,\xi_3}, S R_{\tilde \xi_2}^{\theta_2} R_{2,\xi_1,\xi_2,\xi_3}, S R_{\tilde \xi_3}^{\theta_3} R_{3,\xi_1,\xi_2,\xi_3}): S \in \SO(3); \theta_1,\theta_2,\theta_3 \in \R/2\pi \Z; \| S - I \| \leq \epsilon_0^2/8 \}
\end{equation}
(where $\| \|$ denotes the operator norm) is an open submanifold of $\Sigma_{\xi_1,\xi_2,\xi_3}$. Also, if we use the ansatz
$$ (R_1,R_2,R_3) = ( S R_{\tilde \xi_1}^{\theta_1} R_{1,\xi_1,\xi_2,\xi_3}, S R_{\tilde \xi_2}^{\theta_2} R_{2,\xi_1,\xi_2,\xi_3}, S R_{\tilde \xi_3}^{\theta_3} R_{3,\xi_1,\xi_2,\xi_3})$$
then from \eqref{lambda-def} we see that
$$ \Lambda_{R_1 \xi_1, R_2 \xi_2, R_3\xi_3}(R_1 X_1, R_2 X_2, R_3 X_3) = \Lambda_{\tilde \xi_1, \tilde \xi_2, \tilde \xi_3}\left(
R_{\tilde \xi_1}^{\theta_1} R_{1,\xi_1,\xi_2,\xi_3} X_1,
R_{\tilde \xi_2}^{\theta_2} R_{2,\xi_1,\xi_2,\xi_3} X_2,
R_{\tilde \xi_3}^{\theta_3} R_{3,\xi_1,\xi_2,\xi_3} X_3 \right)$$
for $X_j \in \xi_j^\perp$, $j=1,2,3$.
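Incidentally, the axis-angle formula for $R_\xi^\theta$ given in the earlier footnote is easy to sanity-check numerically; the following short sketch (an illustration only, with an arbitrarily chosen axis and angle, using standard \texttt{numpy} tools) verifies that the formula yields an orthogonal matrix of determinant one fixing the axis:
\begin{verbatim}
# Check the axis-angle formula R X = (X.u)u + cos(t)(X - (X.u)u) + sin(t) u x X
# with u = xi/|xi|: the resulting matrix lies in SO(3) and fixes xi.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(3)          # arbitrary non-zero axis
u = xi / np.linalg.norm(xi)
theta = 0.7                          # arbitrary angle

def R(X):
    return (X @ u) * u + np.cos(theta) * (X - (X @ u) * u) \
        + np.sin(theta) * np.cross(u, X)

M = np.stack([R(e) for e in np.eye(3)], axis=1)  # columns R(e_1), R(e_2), R(e_3)
print(np.allclose(M.T @ M, np.eye(3)))           # True: orthogonal
print(np.isclose(np.linalg.det(M), 1.0))         # True: determinant one
print(np.allclose(M @ xi, xi))                   # True: the axis is fixed
\end{verbatim}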
Thus, if we can find a smooth function
$$F'': \R/2\pi\Z \times \R/2\pi\Z \times \R/2\pi\Z \times \Gamma \to \C^3 \otimes \C^3 \otimes \C^3$$
with the property that
\begin{equation}\label{bigmess-4}
\begin{split}
Y_1 \otimes Y_2 \otimes Y_3 &= \int_{ \R/2\pi\Z \times \R/2\pi\Z \times \R/2\pi\Z} F''( \theta_1,\theta_2,\theta_3, \eta_1, \eta_2, \eta_3 ) \times \\
&\quad
\Lambda_{\eta_1,\eta_2,\eta_3}( R_{\eta_1}^{\theta_1} Y_1, R_{\eta_2}^{\theta_2} Y_2, R_{\eta_3}^{\theta_3} Y_3 )\ d\theta_1 d\theta_2 d\theta_3
\end{split}
\end{equation}
for all $(\eta_1,\eta_2,\eta_3) \in \Gamma$ and $Y_j \in \eta_j^\perp$ for $j=1,2,3$, then by substituting $Y_j = R_{j,\xi_1,\xi_2,\xi_3} X_j$ and $\eta_j = \tilde \xi_j$, we have
\begin{align*}
R_{1,\xi_1,\xi_2,\xi_3} X_1 \otimes R_{2,\xi_1,\xi_2,\xi_3} X_2 \otimes R_{3,\xi_1,\xi_2,\xi_3} X_3 &= \int_{\R/2\pi\Z \times \R/2\pi\Z \times \R/2\pi\Z} F''( \theta_1,\theta_2,\theta_3, \tilde \xi_1, \tilde \xi_2, \tilde \xi_3 ) \times \\
&\quad
\Lambda_{R_1 \xi_1, R_2 \xi_2, R_3 \xi_3}( R_1 X_1, R_2 X_2, R_3 X_3 )\ d\theta_1 d\theta_2 d\theta_3
\end{align*}
for any $\xi_j \in B(\xi_j^0,\epsilon_0^3)$ and $X_j \in \xi_j^\perp$. Averaging this over all $S \in \SO(3)$ with $\| S - I \| \leq \epsilon_0^2/8$, and inverting the tensored rotation operator $R_{1,\xi_1,\xi_2,\xi_3} \otimes R_{2,\xi_1,\xi_2,\xi_3} \otimes R_{3,\xi_1,\xi_2,\xi_3}$, we obtain a representation of the desired form \eqref{bigmess-3}. Thus it suffices to find a smooth function $F''$ with the representation \eqref{bigmess-4}.
\subsection{Eighth step: parameterising in terms of rotation angles}
Note that if $(\eta_1,\eta_2,\eta_3) \in \Gamma$, then the vectors $\eta_1,\eta_2,\eta_3$ are coplanar, and so we may find a unit vector $n = n(\eta_1,\eta_2,\eta_3)$ orthogonal to all of the $\eta_i$; by the implicit function theorem we may ensure that $n$ depends smoothly on $\eta_1,\eta_2,\eta_3$. From \eqref{xi-split} we may normalise $n$ to be close to $(0,0,1)$ (as opposed to close to $(0,0,-1)$). To prove \eqref{bigmess-4}, it suffices by homogeneity to consider the case when $Y_1,Y_2,Y_3$ are unit vectors; as $Y_j \in \eta_j^\perp$, this means that we may write $Y_j = R_{\eta_j}^{\alpha_j} n$ for some $\alpha_j \in \R/2\pi\Z$ for all $j=1,2,3$. We may thus rewrite \eqref{bigmess-4} as the claim that
\begin{equation}\label{bigmess-5}
\begin{split}
R_{\eta_1}^{\alpha_1} n \otimes R_{\eta_2}^{\alpha_2} n \otimes R_{\eta_3}^{\alpha_3} n &= \int_{ \R/2\pi\Z \times \R/2\pi\Z \times \R/2\pi\Z} F''( \theta_1,\theta_2,\theta_3, \eta_1, \eta_2, \eta_3 ) \times \\
&\quad \Theta_{\eta_1,\eta_2,\eta_3}( \theta_1 + \alpha_1, \theta_2 + \alpha_2, \theta_3 + \alpha_3 )\ d\theta_1 d\theta_2 d\theta_3
\end{split}
\end{equation}
for all $(\eta_1,\eta_2,\eta_3) \in \Gamma$ and $\alpha_1,\alpha_2,\alpha_3 \in \R/2\pi\Z$, where $\Theta_{\eta_1,\eta_2,\eta_3}: (\R/2\pi\Z)^3 \to \R$ is the function
\begin{equation}\label{theta-def} \Theta_{\eta_1,\eta_2,\eta_3}(\gamma_1,\gamma_2,\gamma_3) :=
\Lambda_{\eta_1,\eta_2,\eta_3}( R_{\eta_1}^{\gamma_1} n, R_{\eta_2}^{\gamma_2} n, R_{\eta_3}^{\gamma_3} n ).
\end{equation}
Note that for fixed $\eta_1,\eta_2,\eta_3$ and each $j=1,2,3$, each of the three coefficients of $R_{\eta_j}^{\alpha_j} n \in \R^3$ is a complex linear combination of $e^{-i \alpha_j}$ and $e^{ i \alpha_j}$, with coefficients depending smoothly on $\eta_1,\eta_2,\eta_3$. Thus to show \eqref{bigmess-5}, it suffices to obtain a representation
\begin{equation}\label{bigmess-6}
\begin{split}
e^{i (\sigma_1 \alpha_1 + \sigma_2 \alpha_2 + \sigma_3 \alpha_3)} &= \int_{ \R/2\pi\Z \times \R/2\pi\Z \times \R/2\pi\Z} F_{\sigma_1,\sigma_2,\sigma_3}( \theta_1,\theta_2,\theta_3, \eta_1, \eta_2, \eta_3 ) \times \\
&\quad \Theta_{\eta_1,\eta_2,\eta_3}( \theta_1 + \alpha_1, \theta_2 + \alpha_2, \theta_3 + \alpha_3 )\ d\theta_1 d\theta_2 d\theta_3
\end{split}
\end{equation}
for all eight choices of sign patterns $(\sigma_1,\sigma_2,\sigma_3) \in \{-1,+1\}^3$, and some smooth functions
$$ F_{\sigma_1,\sigma_2,\sigma_3}: \R/2\pi\Z \times \R/2\pi\Z \times \R/2\pi\Z \times \Gamma \to \C.$$
\subsection{Ninth step: Fourier inversion and checking a non-degeneracy condition}
By \eqref{theta-def}, \eqref{lambda-def} and decomposing $R_{\eta_j}^{\gamma_j} n$ into a complex linear combination of $e^{-i \gamma_j}$ and $e^{i \gamma_j}$, we see that for fixed $\eta_1,\eta_2,\eta_3$, we may expand
\begin{equation}\label{cdoc}
\Theta_{\eta_1,\eta_2,\eta_3}( \gamma_1,\gamma_2,\gamma_3 ) = \sum_{(\sigma_1,\sigma_2,\sigma_3) \in \{-1,+1\}^3} c_{\sigma_1,\sigma_2,\sigma_3}(\eta_1,\eta_2,\eta_3) e^{i (\sigma_1 \gamma_1 + \sigma_2 \gamma_2 + \sigma_3 \gamma_3)}
\end{equation}
for some smooth coefficients $c_{\sigma_1,\sigma_2,\sigma_3}: \Gamma \to \C$. From the Fourier inversion formula on $(\R/2\pi\Z)^3$, we thus obtain \eqref{bigmess-6} as long as we have the non-degeneracy condition
\begin{equation}\label{c-nondeg}
c_{\sigma_1,\sigma_2,\sigma_3}(\eta_1,\eta_2,\eta_3) \neq 0
\end{equation}
for all $(\eta_1,\eta_2,\eta_3) \in \Gamma$ and all choices of signs $(\sigma_1,\sigma_2,\sigma_3) \in \{-1,+1\}^3$.
For this, we finally need to use the precise form of $\Lambda$. From \eqref{theta-def}, \eqref{lambda-def} we can write $\Theta_{\eta_1,\eta_2,\eta_3}( \gamma_1,\gamma_2,\gamma_3 )$ as
$$
(R_{\eta_1}^{\gamma_1} n \cdot \eta_2) (R_{\eta_2}^{\gamma_2} n \cdot R_{\eta_3}^{\gamma_3} n) + (R_{\eta_2}^{\gamma_2} n \cdot \eta_1) (R_{\eta_1}^{\gamma_1} n \cdot R_{\eta_3}^{\gamma_3} n)$$
which we expand further as
\begin{align*}
& \sin(\gamma_1) \left(\left(u_1 \times n\right) \cdot \eta_2\right) \left(\cos(\gamma_2) \cos(\gamma_3) + \left(u_2 \cdot u_3\right) \sin(\gamma_2) \sin(\gamma_3)\right)\\
& \quad +
\sin(\gamma_2) \left(\left(u_2 \times n\right) \cdot \eta_1\right) \left(\cos(\gamma_1) \cos(\gamma_3) + \left(u_1 \cdot u_3\right) \sin(\gamma_1) \sin(\gamma_3)\right)
\end{align*}
where $u_i := \eta_i/|\eta_i|$. Expanding
\begin{equation}\label{sin}
\sin(\gamma) = \frac{1}{2i} (e^{i\gamma} - e^{-i\gamma}) = \frac{1}{2i} \sum_{\sigma = \pm 1} \sigma e^{i \sigma \gamma}
\end{equation}
and
$$\cos(\gamma) = \frac{1}{2} (e^{i\gamma} + e^{-i\gamma}) = \frac{1}{2} \sum_{\sigma = \pm 1} e^{i \sigma \gamma}$$
we have
$$ \cos(\gamma_2) \cos(\gamma_3) = \frac{1}{4} \sum_{\sigma_2,\sigma_3 = \pm 1} e^{i (\sigma_2 \gamma_2 + \sigma_3 \gamma_3)}$$
and
$$ \sin(\gamma_2) \sin(\gamma_3) = - \frac{1}{4} \sum_{\sigma_2,\sigma_3 = \pm 1} \sigma_2 \sigma_3 e^{i (\sigma_2 \gamma_2 + \sigma_3 \gamma_3)}$$
(the minus sign arising here from the $i$ in the denominator in \eqref{sin}). Similarly with $\sigma_2, \gamma_2$ replaced by $\sigma_1, \gamma_1$ respectively. Inserting these expansions and comparing with \eqref{cdoc}, we conclude that
$$ c_{\sigma_1,\sigma_2,\sigma_3}(\eta_1,\eta_2,\eta_3) =\frac{1}{8i}
\left( \left(\left(u_1 \times n\right) \cdot \eta_2\right) \sigma_1 \left(1 - \left(u_2 \cdot u_3\right) \sigma_2 \sigma_3 \right) +
\left(\left(u_2 \times n\right) \cdot \eta_1\right) \sigma_2 \left(1 - \left(u_1 \cdot u_3\right) \sigma_1 \sigma_3 \right) \right).$$
But by \eqref{gamma-def}, $\eta_j = \xi_j^0 + O(\epsilon_0^3)$, which from \eqref{xi-split} implies that
\begin{align*}
\left(u_1 \times n\right) \cdot \eta_2 &= -1 + O(\epsilon_0) \\
\left(u_2 \times n\right) \cdot \eta_1 &= \frac{1}{\sqrt{2}} + O(\epsilon_0) \\
u_2 \cdot u_3 &= -\frac{1}{\sqrt{2}} + O(\epsilon_0) \\
u_1 \cdot u_3 &= O(\epsilon_0)
\end{align*}
and thus
$$ c_{\sigma_1,\sigma_2,\sigma_3}(\eta_1,\eta_2,\eta_3) = -\frac{1}{8i} ( - \sigma_1 + \frac{1}{\sqrt{2}} \sigma_2 + \frac{1}{\sqrt{2}} \sigma_1 \sigma_2 \sigma_3 ) + O(\epsilon_0).$$
As $- \sigma_1 + \frac{1}{\sqrt{2}} \sigma_2 + \frac{1}{\sqrt{2}} \sigma_1 \sigma_2 \sigma_3$ is bounded away from zero for $\sigma_1,\sigma_2,\sigma_3 \in \{-1,+1\}$, the non-degeneracy claim \eqref{c-nondeg} follows for $\epsilon_0$ small enough. This concludes the proof of Theorem \ref{avg}.
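As a quick sanity check on this last arithmetic assertion (an illustration only, not needed for the proof), one can enumerate the eight sign patterns in a few lines of Python; the minimal magnitude is $\sqrt{2}-1 \approx 0.414$, and the same lower bound holds however the three $\pm$ signs are distributed among the terms:
\begin{verbatim}
# Enumerate all eight sign patterns (s1, s2, s3) and check that
#   -s1 + s2/sqrt(2) + s1*s2*s3/sqrt(2)
# is bounded away from zero, as used for the non-degeneracy condition.
from itertools import product
from math import sqrt

vals = [abs(-s1 + s2/sqrt(2) + s1*s2*s3/sqrt(2))
        for s1, s2, s3 in product((-1, 1), repeat=3)]
print(min(vals))  # sqrt(2) - 1 = 0.4142..., uniformly nonzero
\end{verbatim}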
\begin{remark} The averaging over dilation operators was only needed to place the base frequencies $\xi^0_1,\xi^0_2,\xi^0_3$ in a location where the non-degeneracy condition \eqref{c-nondeg} held. This condition in fact holds for generic $\xi^0_1,\xi^0_2,\xi^0_3$, and so even without the use of averaging over dilations it should be the case that \emph{most} local cascade operators are expressible as averaged Euler operators. As there is some freedom to select the local cascade operators in Theorem \ref{blowup}, this should still be enough to establish a slightly stronger version of Theorem \ref{main} in which one does not use any averaging over dilations. We will however not pursue this matter here.
\end{remark}
\section{Reduction to an infinite-dimensional ODE}
We now begin the proof of Theorem \ref{blowup}. We fix $0 < \epsilon_0 < 1$; henceforth we allow all implied constants in the $O()$ notation to depend on $\epsilon_0$. We suppose that Theorem \ref{blowup} failed, so that one can always construct\footnote{This hypothesis of global existence is technically convenient so that we may assume some \emph{a priori} regularity on our solution, namely $H^{10}$. Alternatively, one could develop an $H^{10}$ local well-posedness theory for \eqref{system}, and unconditionally construct a mild $H^{10}$ solution that blows up in a finite time by a minor modification of the arguments in this paper; we leave the details of this variant of the argument to the interested reader.}
global mild solutions to any initial value problem of the form \eqref{system} with $C$ a local cascade operator and $u_0$ a Schwartz divergence-free vector field.
To apply this hypothesis, we need to construct a local cascade operator $C$ and an initial velocity field $u_0$. We need a dimension parameter $m$, which will be a positive integer (eventually we will set $m=4$). Let $B_1,\dots,B_m$ be balls in the annulus $\{ \xi \in \R^3: 1 < |\xi| \leq 1+\epsilon_0/2 \}$, chosen so that the $2m$ balls $B_1,\dots,B_m,-B_1,\dots,-B_m$ are all disjoint. For each $i=1,\dots,m$, let $\psi_i \in H^{10}_\df(\R^3)$ be Schwartz with Fourier transform real-valued and supported\footnote{We need to have $\hat \psi_i$ supported on $B_i \cup -B_i$ rather than just $B_i$, otherwise we could not require $\psi_i$ to be real.} on $B_i \cup -B_i$, normalised so that $\|\psi_i\|_{L^2(\R^3)} = 1$.
As in Definition \ref{cascdef}, we define the rescaled functions
$$ \psi_{i,n}(x) := (1+\epsilon_0)^{3n/2} \psi_i\left( (1+\epsilon_0)^n x \right)$$
for $i=1,\dots,m$ and $n \in \Z$, and then define the local cascade operator $C$ by the formula
\begin{equation}\label{cudef}
\begin{split}
C(u,v) &:= \sum_{n \in \Z} \sum_{(i_1,i_2,i_3,\mu_1,\mu_2,\mu_3) \in \{1,\dots,m\}^3 \times S}\\
&\quad
\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5n/2} \langle u, \psi_{i_1,n+\mu_1} \rangle \langle v, \psi_{i_2,n+\mu_2} \rangle \psi_{i_3,n+\mu_3}
\end{split}
\end{equation}
for $u,v \in H^{10}_\df(\R^3)$,
where $S \subset \Z^3$ is the four-element set
$$ S := \{ (0,0,0), (1,0,0), (0,1,0), (0,0,1) \},$$
the $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} \in \R$ are structure constants to be chosen later, and which obey the symmetry condition
\begin{equation}\label{symmetry}
\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} = \alpha_{i_2,i_1,i_3,\mu_2,\mu_1,\mu_3}
\end{equation}
for all $i_1,i_2,i_3 \in \{1,\dots,m\}$ and $(\mu_1,\mu_2,\mu_3) \in S$. From Definition \ref{cascdef} we see that $C$ is indeed a local cascade operator (it is a sum of $|S| m^3 = 4m^3$ basic local cascade operators), and \eqref{symmetry} ensures that $C$ is symmetric. Clearly
\begin{align*}
\langle C(u,u),u \rangle &= \sum_{n \in \Z} \sum_{(i_1,i_2,i_3,\mu_1,\mu_2,\mu_3) \in \{1,\dots,m\}^3 \times S}\\
&\quad
\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5n/2} \langle u, \psi_{i_1,n+\mu_1} \rangle \langle u, \psi_{i_2,n+\mu_2} \rangle \langle u, \psi_{i_3,n+\mu_3} \rangle
\end{align*}
for $u \in H^{10}_\df(\R^3)$. From this, we see that the cancellation condition \eqref{cancelled} will follow from the cancellation conditions
\begin{equation}\label{cyclic}
\sum_{\{a,b,c\} = \{1,2,3\}} \alpha_{i_a,i_b,i_c,\mu_a,\mu_b,\mu_c} = 0
\end{equation}
for all $i_1,i_2,i_3 \in \{1,\dots,m\}$ and $(\mu_1,\mu_2,\mu_3) \in S$.
We will select initial data $u_0$ of the form\footnote{Our analysis is in fact somewhat stable, and will also apply if $u_0$ is a sufficiently small perturbation of $\psi_{1,n_0}$ in the $H^{10}_\df$ norm, thus creating blowup for a non-empty open set of initial data in smooth topologies, although this open set is rather small and is also quite far from the origin (due to the large nature of $n_0$). We leave the details of this modification to the interested reader.}
\begin{equation}\label{upsi}
u_0 := \psi_{1,n_0}
\end{equation}
for some sufficiently large\footnote{Alternatively (and equivalently), one could hold $n_0$ fixed (e.g. $n_0=0$), and rescale the viscosity $\nu$ to be small, thus one is now studying the equation $\partial_t u = \nu \Delta u + C(u,u)$ with some small $\nu>0$. One can then repeat all the arguments below, basically with $\nu$ playing the role of the quantity $(1+\epsilon_0)^{-n_0/2}$ that will make a prominent appearance in later sections. We leave the details of this variant of the argument to the interested reader.} integer $n_0$ to be chosen later. This is clearly a Schwartz divergence-free vector field. By hypothesis, we thus have a global mild solution $u: [0,+\infty) \to H^{10}_\df(\R^3)$ to the system \eqref{system}. We record some basic properties of this solution here:
\begin{lemma}[Equations of motion]\label{eqmot} Let $C$ be a cascade operator of the form \eqref{cudef}, with coefficients $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3}$ obeying the symmetry \eqref{symmetry} and cancellation property \eqref{cyclic}. Let $u: [0,+\infty) \to H^{10}_\df(\R^3)$ be a global mild solution to the equation \eqref{system} with initial data given by \eqref{upsi} for some $n_0$. For each $n \in \Z$, $t \geq 0$ and $i=1,\dots,m$, let $u_{i,n}(t)$ be the Fourier projection of $u(t)$ to the region $(1+\epsilon_0)^n \cdot (B_i \cup -B_i)$, thus
$$ \widehat{u_{i,n}(t)}(\xi) = \widehat{u(t)}(\xi) 1_{\xi \in (1+\epsilon_0)^n \cdot (B_i \cup -B_i)}$$
and then define the coefficients
$$ X_{i,n}(t) := \langle u(t), \psi_{i,n} \rangle = \langle u_{i,n}(t),\psi_{i,n} \rangle$$
and the local energies
\begin{equation}\label{ein-def}
E_{i,n}(t) := \frac{1}{2} \|u_{i,n}(t)\|_{L^2(\R^3)}^2.
\end{equation}
\begin{itemize}
\item[(i)] (A priori regularity) We have
\begin{equation}\label{slog}
\sup_{0 \leq t\leq T} \sup_{n \in\Z} \sup_{i=1,\dots,m} (1 + (1+\epsilon_0)^{10n}) |X_{i,n}(t)| < \infty
\end{equation}
and
\begin{equation}\label{slog-2}
\sup_{0 \leq t\leq T} \sup_{n \in\Z} \sup_{i=1,\dots,m} (1 + (1+\epsilon_0)^{10n}) E_{i,n}(t)^{1/2} < \infty
\end{equation}
for all $0 < T < \infty$.
\item[(ii)] (Initial conditions) For any $n \in \Z$ and $i=1,\dots,m$, we have
\begin{equation}\label{ein-init}
E_{i,n}(0) = \frac{1}{2} X_{i,n}(0)^2
\end{equation}
and
\begin{equation}\label{xin-init}
X_{i,n}(0) = 1_{(i,n) = (1,n_0)}.
\end{equation}
\item[(iii)] (Equations of motion) For any $n \in\Z$ and $i=1,\dots,m$, we have the equation of motion
\begin{equation}\label{xin}
\partial_t X_{i,n} = \sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S}
\alpha_{i_1,i_2,i,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2} + O\left( (1+\epsilon_0)^{2n} E_{i,n}^{1/2} \right)
\end{equation}
and the energy inequality
\begin{equation}\label{ein}
\partial_t E_{i,n} \leq \sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S} \alpha_{i_1,i_2,i,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2} X_{i,n}
\end{equation}
for all $t \geq 0$.
\item[(iv)] (Energy defect) For any $n \in \Z$ and $i=1,\dots,m$, we have
\begin{equation}\label{exin}
\frac{1}{2} X_{i,n}^2(t) \leq E_{i,n}(t) \leq \frac{1}{2} X_{i,n}^2(t) + O\left( (1+\epsilon_0)^{2n} \int_0^t E_{i,n}(t')\ dt' \right)
\end{equation}
for all $t \geq 0$.
\item[(v)] (No very low frequencies) One has
\begin{equation}\label{nolo}
X_{i,n}(t) = E_{i,n}(t) = 0
\end{equation}
for all $n < n_0$, $i=1,\dots,m$, and $t \geq 0$.
\end{itemize}
\end{lemma}
\begin{proof} As $u$ is a mild solution to \eqref{system}, we have
\begin{equation}\label{uko}
u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-t')\Delta} C( u(t'), u(t') )\ dt'
\end{equation}
for all $t \geq 0$. Taking Fourier transforms, we see in particular that $\hat u(t)$ is supported on the union $\bigcup_{n \in \Z} \bigcup_{i=1}^m \bigcup_{\mu = \pm 1} \mu (1+\epsilon_0)^n \cdot B_i$ of dilations of the balls $\pm B_1,\ldots,\pm B_m$. As these dilated balls are disjoint, we thus have a decomposition
$$ u(t) = \sum_{n \in \Z} \sum_{i=1}^m u_{i,n}(t)$$
(which is unconditionally convergent in $H^{10}$). If we define the scalar functions $X_{i,n}: [0,+\infty) \to \C$ by the formula
$$ X_{i,n}(t) := \langle u(t), \psi_{i,n} \rangle = \langle u_{i,n}(t),\psi_{i,n} \rangle,$$
then from the \emph{a priori} regularity $u \in C^0_t H^{10}_x$ we obtain \eqref{slog} from the Plancherel identity. Taking inner products of \eqref{uko} with $\psi_{i,n}$, we have
\begin{align*}
u_{i,n}(t) &= e^{t\Delta} X_{i,n}(0) \psi_{i,n} +
\sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S} \\
&\quad\quad
\alpha_{i_1,i_2,i,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} \int_0^t X_{i_1,n-\mu_3+\mu_1}(t') X_{i_2,n-\mu_3+\mu_2}(t') e^{(t-t')\Delta} \psi_{i,n}\ dt'
\end{align*}
or in differentiated form (using \eqref{slog} to justify the calculations)
\begin{equation}\label{upp}
\partial_t u_{i,n} = \Delta u_{i,n} + \sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S}
\alpha_{i_1,i_2,i,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2} \psi_{i,n}.
\end{equation}
In particular this shows that $u_{i,n}$ is continuously differentiable in time (in the $L^2_x$ topology, say), which implies that the $X_{i,n}$ are continuously differentiable.
It is unfortunate that the $\psi_{i,n}$ are not eigenfunctions of the Laplacian $\Delta$, otherwise $u_{i,n}$ would always be a scalar multiple of $\psi_{i,n}$ (that is, $u_{i,n} = X_{i,n} \psi_{i,n}$), and the equation \eqref{system} would collapse to a system of ODE in the $X_{i,n}$ variables. However, it is still possible to get good control on the dynamics even without the eigenfunction property. To do this, we use the local energies $E_{i,n}$ from \eqref{ein-def}. From Cauchy-Schwarz we have
\begin{equation}\label{loco}
\frac{1}{2} X_{i,n}(t)^2 \leq E_{i,n}(t),
\end{equation}
and from Plancherel and the $C^0_t H^{10}_x$ bound on $u$ we have \eqref{slog-2}
for all $0 < T < \infty$.
By taking inner products of \eqref{upp} with $u_{i,n}$, and noting that
$$ \langle \Delta u_{i,n}, u_{i,n} \rangle \leq 0$$
we obtain the \emph{local energy inequality} \eqref{ein}. Indeed, one could use Fourier analysis to place an additional dissipation term of $-8 \pi^2 (1+\epsilon_0)^{2n} E_{i,n}$ on the right-hand side of \eqref{ein}, but we will not need to use this term here (it is too small to be of much use, since we are in the regime where dissipation can be treated as a negligible perturbation).
If instead we take inner products of \eqref{upp} with $\psi_{i,n}$, and note that
\begin{align*}
\langle \Delta u_{i,n}, \psi_{i,n} \rangle &= \langle u_{i,n}, \Delta \psi_{i,n} \rangle \\
&= O\left( E_{i,n}^{1/2} \|\Delta \psi_{i,n} \|_{L^2(\R^3)} \right) \\
&= O\left( (1+\epsilon_0)^{2n} E_{i,n}^{1/2} \right)
\end{align*}
we conclude \eqref{xin}.
From \eqref{xin}, \eqref{ein} we see that
$$
\partial_t \left(E_{i,n} - \frac{1}{2} X_{i,n}^2\right) \leq O\left( (1+\epsilon_0)^{2n} E_{i,n} \right)
$$
while from \eqref{ein-init} we see that $E_{i,n} - \frac{1}{2} X_{i,n}^2$ vanishes at time zero. The claim \eqref{exin} then follows from \eqref{loco} and the fundamental theorem of calculus.
Finally, we prove \eqref{nolo}. For $i_3=1,\dots,m$ and $n < n_0$, we see from \eqref{ein}, \eqref{ein-init}, \eqref{xin-init} and the fundamental theorem of calculus that
$$
E_{i_3,n}(t) \leq \sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S} \alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} \int_0^t X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2} X_{i_3,n}(t')\ dt'$$
for any $t \geq 0$. Summing this over $i_3=1,\dots,m$ and $n < n_0$, and using \eqref{slog}, \eqref{slog-2} to ensure all summations and integrals are absolutely convergent, we conclude that
\begin{align*}
\sum_{n<n_0} \sum_{i=1}^m E_{i,n}(t) &\leq \sum_{i_1,i_2,i_3 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S} \sum_{n < n_0}\\
&\quad \alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} \int_0^t X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2} X_{i_3,n}(t')\ dt'.
\end{align*}
By \eqref{cyclic}, all the terms here can be grouped into terms that sum to zero, except for those terms with $n=n_0-1$, $(\mu_1,\mu_2,\mu_3) \in \{ (1,0,0), (0,1,0) \}$; thus
\begin{align*}
\sum_{n<n_0} \sum_{i=1}^m E_{i,n}(t) &\leq \sum_{i_1,i_2,i_3 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in \{ (1,0,0), (0,1,0) \}}
\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n_0-1-\mu_3)/2} \\
&\quad \int_0^t X_{i_1,n_0-1-\mu_3+\mu_1} X_{i_2,n_0-1-\mu_3+\mu_2} X_{i_3,n_0-1}(t')\ dt'.
\end{align*}
By the constraint on $(\mu_1,\mu_2,\mu_3)$, two of the terms $X_{i_1,n_0-1-\mu_3+\mu_1}$, $X_{i_2,n_0-1-\mu_3+\mu_2}$, $X_{i_3,n_0-1}$ may be bounded by $\sum_{n<n_0} \sum_{i=1}^m E_{i,n}$, and the remaining term may be controlled by \eqref{slog}, leading to the bound
$$
\sum_{n<n_0} \sum_{i=1}^m E_{i,n}(t) \leq C_{T,n_0} \int_0^{t}\sum_{n<n_0} \sum_{i=1}^m E_{i,n}(t')\ dt'
$$
for all $0 \leq t \leq T$ and some finite quantity $C_{T,n_0}$ depending on $T, n_0$ (and on the quantity in \eqref{slog}). By Gronwall's inequality, we conclude that $\sum_{n<n_0} \sum_{i=1}^m E_{i,n}(t) =0$ for all $t \geq 0$, giving \eqref{nolo}.
\end{proof}
The above lemma shows that \eqref{system} almost collapses into an ODE system for the $X_{i,n}$. As a first approximation, the reader may wish to ignore the role of the energies $E_{i,n}$ (or identify them with $\frac{1}{2} X_{i,n}^2$), and pretend that \eqref{xin} is replaced by either the inviscid equation
\begin{equation}\label{inviscid-model}
\partial_t X_{i,n} = \sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S}
\alpha_{i_1,i_2,i,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2}
\end{equation}
or the viscous equation
\begin{equation}\label{viscous-model}
\partial_t X_{i,n} = - (1+\epsilon_0)^{2n} X_{i,n} + \sum_{i_1,i_2 \in \{1,\dots,m\}} \sum_{(\mu_1,\mu_2,\mu_3) \in S}
\alpha_{i_1,i_2,i,\mu_1,\mu_2,\mu_3} (1+\epsilon_0)^{5(n-\mu_3)/2} X_{i_1,n-\mu_3+\mu_1} X_{i_2,n-\mu_3+\mu_2}
\end{equation}
in the analysis that follows. Note that the viscous equation generalises the dyadic Katz-Pavlovic equation \eqref{xn} (with $\lambda=(1+\epsilon_0)^{5/2}$ and $\alpha=2/5$), which corresponds to the simple case $m=1$.
Theorem \ref{blowup} now follows from the following ODE result:
\begin{theorem}[ODE blowup]\label{ood} Let $0 < \epsilon_0 < 1$. Then there exist a natural number $m \geq 0$, structure constants $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3} \in \R$ for $i_1,i_2,i_3 \in \{1,\dots,m\}$ and $(\mu_1,\mu_2,\mu_3) \in S$ obeying the symmetry condition \eqref{symmetry} and the cancellation condition \eqref{cyclic}, with the property that for sufficiently large $n_0$ (depending on the implied constants in \eqref{ein}, \eqref{xin}), there do not exist continuously differentiable functions $X_{i,n}: [0,+\infty) \to \R$ and $E_{i,n}: [0,+\infty) \to [0,+\infty)$ obeying the conclusions \eqref{slog}-\eqref{nolo} of Lemma \ref{eqmot}.
\end{theorem}
We will prove Theorem \ref{ood} in Section \ref{blowup-sec}, but we first warm up with some finite dimensional ODE toy problems in the next section.
\begin{remark}[Helicity conservation]\label{helicity} As is well known (see e.g. \cite{bert}), the inviscid Euler equations $\partial_t u = B(u,u)$ conserve helicity $\int_{\R^3} u \cdot (\operatorname{curl} u)$. This is equivalent to the additional cancellation law
\begin{equation}\label{cancel-hel}
\langle B(u,u), \operatorname{curl} u \rangle = 0
\end{equation}
for all $u \in H^{10}_\df(\R^3)$. One can ask whether we can similarly enforce the cancellation law
\begin{equation}\label{cancel-hel-2}
\langle \tilde B(u,u), \operatorname{curl} u \rangle = 0
\end{equation}
for the averaged operators $\tilde B$. In general, the operator $C$ defined in \eqref{cudef} will not obey \eqref{cancel-hel-2}. However, we may still ensure \eqref{cancel-hel-2} (while preserving the other desired properties of $\tilde B$) as follows. Firstly, observe that we may choose the functions $\psi_1,\psi_2,\psi_3$ in the construction of $C$ to be odd, thus $\psi_i(-x) = -\psi_i(x)$ for all $i=1,2,3$ and $x \in \R^3$. Next, from \eqref{cudef}, \eqref{cyclic}, and \eqref{symmetry}, we see that the operator $C(u,v)$ in \eqref{cudef} is a finite linear combination of operators of the form
$$\frac{1}{2} \sum_{n \in \Z} (1+\epsilon_0)^{5n/2} (A_n(u,v) + A_n(v,u)),$$
where
$$ A_n(u,v) := \langle u, \psi'_n \rangle \langle v, \psi''_n \rangle \psi'''_n - \langle u, \psi'_n \rangle \langle v, \psi'''_n \rangle \psi''_n $$
and $\psi', \psi'', \psi''' \in H^{10}_\df(\R^3)$ are odd functions with Fourier transform supported on an annulus. These operators obey the energy cancellation law $\langle A_n(u,u), u \rangle = 0$, but do not necessarily obey the helicity cancellation law $\langle A_n(u,u), \operatorname{curl} u \rangle = 0$. However, if we introduce the modified operator
\begin{align*}
\tilde A_n(u,v) &:= \langle u, \psi'_n \rangle \langle v, \psi''_n \rangle \psi'''_n - \langle u, \psi'_n \rangle \langle v, \psi'''_n \rangle \psi''_n \\
&\quad - \langle u, \psi'_n \rangle \langle v, \operatorname{curl} \psi'''_n \rangle \operatorname{curl}^{-1} \psi''_n + \langle u, \operatorname{curl}^{-1} \psi''_n \rangle \langle v, \operatorname{curl} \psi'''_n \rangle \psi'_n \\
&\quad - \langle u, \operatorname{curl}^{-1} \psi''_n \rangle \langle v, \operatorname{curl} \psi'_n \rangle \psi'''_n + \langle u, \psi'''_n \rangle \langle v, \operatorname{curl} \psi'_n \rangle \operatorname{curl}^{-1} \psi''_n
\end{align*}
where $\operatorname{curl}^{-1} := \Delta^{-1} \operatorname{curl}$ inverts the curl operator on divergence-free functions with Fourier support on an annulus, one can check that $\tilde A_n$ obeys both the energy cancellation $\langle \tilde A_n(u,u), u \rangle = 0$ and the helicity cancellation $\langle \tilde A_n(u,u), \operatorname{curl} u \rangle = 0$. Thus if we define $\tilde C$ by replacing all occurrences of $A_n$ with their counterparts $\tilde A_n$, then $\tilde C$ also obeys energy and helicity cancellation. Furthermore, observe that $\tilde A_n(u,u) = A_n(u,u)$ is an odd function whenever $u$ is an odd function (basically because the curl or inverse curl of an odd function is even, and thus orthogonal to all odd functions), and so $\tilde C(u,u) = C(u,u)$ when $u$ is an odd function. It is then easy to see that any mild solution to $\partial_t u = \tilde C(u,u)$ with odd initial data is then odd for all time, and thus also solves $\partial_t u = C(u,u)$. From this, we see that Theorem \ref{blowup} for $C$ implies Theorem \ref{blowup} for $\tilde C$. As a consequence, we can enforce helicity conservation in Theorem \ref{main} if desired. Of course, it was unlikely in any event that global helicity conservation would have been useful for the global regularity problem, given that the helicity of an odd vector field is automatically zero, and that odd vector fields are preserved by the Euler and Navier-Stokes flows, and are not expected to be any more difficult\footnote{Indeed, any non-odd initial data for such flows may be made odd by first translating by a large displacement and then anti-symmetrising, which will asymptotically have no impact on the dynamics after renormalising.} to handle than general vector fields.
Finally, we remark that the Euler equations $\partial_t u = B(u,u)$ formally conserve total momentum $\int_{\R^3} u\ dx$, total angular momentum $\int_{\R^3} x \times u\ dx$, and total vorticity $\int_{\R^3} \nabla \times u\ dx$. These quantities are also formally conserved by the equation $\partial_t u = C(u,u)$ for any local cascade operator $C$, basically because the wavelets $\psi_1,\psi_2,\psi_3$ used in building these cascade operators have Fourier transform vanishing near the origin; we omit the details.
\end{remark}
\section{Quadratic circuits}
Our objective is to solve an infinite-dimensional system of ODE, roughly of the form \eqref{inviscid-model}. In order to build up some intuition for doing so, we will first study a finite-dimensional ``toy'' model, namely ODEs of the form
\begin{equation}\label{ode}
\partial_t X = G(X,X)
\end{equation}
where $X: [t_1,t_2] \to \R^m$ is a vector-valued trajectory for some finite $m$, and $G: \R^m \times \R^m \to \R^m$ is a bilinear operator obeying the cancellation condition
\begin{equation}\label{g-cancel}
G(X,X) \cdot X = 0
\end{equation}
for all $X \in \R^m$ (so in particular $\partial_t |X|^2 = 2 G(X,X) \cdot X = 0$: the flow \eqref{ode} preserves the norm of $X$, and so the ODE is globally well posed). It will be important for us that there is no size restriction on the coefficients of the bilinear operator $G$, although the coefficients must of course be real. The terminology ``circuit'' is meant to invoke an analogy with electrical engineering (and also with computational complexity theory). Clearly, \eqref{ode} is a toy model for the system \eqref{inviscid-model}, and can also be viewed as a toy model for the Euler equations $\partial_t u = B(u,u)$. We will build a quadratic circuit to accomplish a specific task (namely, to abruptly transfer energy from one mode to another, after a delay) out of ``quadratic logic gates'', by which we mean quadratic circuits \eqref{ode} of a very small size (with $m=2$ or $m=3$) and a simple structure for $G$, which each accomplish a single simple task of transforming a certain type of input into a certain type of output.
We first discuss in turn the three quadratic logic gates we will be using, which we call the ``pump'', the ``amplifier'', and the ``rotor'', and then show how these gates can be combined to build a circuit with the desired properties. It looks likely that the set of quadratic gates is sufficiently ``Turing complete'' in that they can perform extremely general computational tasks\footnote{Of course, this is bearing in mind that, being globally well-posed ODE, circuits of the form \eqref{ode} are necessarily limited to perform continuous (i.e. analog) operations rather than perfectly digital operations. Also, as the equation \eqref{ode} is time reversible, only reversible computing tasks may be performed by quadratic circuits, at least in the absence of dissipation.}, but we will not pursue\footnote{See \cite{pour} for a treatment of continuous computation in PDE, and \cite{turing} for continuous computation in ODE.} this matter further here.
Strictly speaking, the discussion here is not actually needed for the proof of our main results, but we believe that the model problems studied here will assist the reader in understanding what may otherwise be a highly unmotivated construction and set of arguments in the next section.
\subsection{The pump gate}
We first describe the \emph{pump gate}. This is a binary gate (so $m=2$), with unknown $X(t) = (x(t),y(t)) \in \R^2$ obeying the quadratic ODE
\begin{equation}\label{pump}
\begin{split}
\partial_t x &= - \alpha xy \\
\partial_t y &= \alpha x^2
\end{split}
\end{equation}
where $\alpha >0$ is a fixed coupling constant (representing the strength of the pump). We will be applying this pump in the regime where $x$ is initially positive and $y \geq 0$; by Gronwall's inequality (or by integrating factors), we see that $x$ remains positive for all subsequent time, while $y$ is increasing. As the total energy $x^2+y^2$ is conserved, we thus see that energy is being pumped from $x$ to $y$. For instance, we have the explicit solution
\begin{equation}\label{pomp}
x(t) = A \operatorname{sech}(\alpha A t); \quad y(t) = A \operatorname{tanh}( \alpha A t )
\end{equation}
for any amplitude $A>0$, which at time $t=0$ is at the initial state $(x(0),y(0)) = (A,0)$. For times $0 \leq t \leq \frac{1}{\alpha A}$, the $y$ component increases more or less linearly at rate comparable to $\alpha A^2$, with a corresponding drain of energy from $x$; after this time, $x$ decays exponentially fast (at rate $\alpha A$), with the energy in $x$ being transferred more or less completely to $y$ after time $t \geq \frac{C}{\alpha A}$ for a large constant $C$. Thus, the pump can be used to execute a delayed, but \emph{gradual}, transition of energy from one mode (the $x$ mode) to another (the $y$ mode). We will schematically depict the pump by a thick arrow: see Figure \ref{fig:pump}.
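As a quick consistency check, \eqref{pomp} can be verified by direct differentiation: writing $u := \alpha A t$, one has
$$ \partial_t x = -\alpha A^2 \operatorname{sech}(u)\operatorname{tanh}(u) = -\alpha xy; \qquad \partial_t y = \alpha A^2 \operatorname{sech}^2(u) = \alpha x^2,$$
while the identity $\operatorname{sech}^2(u) + \operatorname{tanh}^2(u) = 1$ recovers the conservation of energy $x^2+y^2=A^2$.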
\begin{figure} [t]
\centering
\includegraphics{./pump.png}
\caption[Pump gate]{The pump gate from the $x$ mode to the $y$ mode with coupling constant $\alpha$.}
\label{fig:pump}
\end{figure}
If one ignores the dissipation term, the dyadic model equation \eqref{xn} can be viewed as a sequence of pumps chained together, with the coupling constant $\lambda^n$ of the pump from one mode $X_n$ to the next $X_{n+1}$ increasing exponentially with $n$.
One useful feature of the pump which we will exploit is that it can ``integrate'' an alternating input $x$ into a monotone output $y$, somewhat analogously to how a rectifier in electrical engineering converts AC current to DC current. Indeed, if one couples the $x$ input of the pump to an external forcing term, thus
\begin{align*}
\partial_t x &= - \alpha xy + F\\
\partial_t y &= \alpha x^2
\end{align*}
with $F$ highly oscillatory, then $x$ may oscillate in sign also (if the $F$ term dominates the energy drain term $-\alpha xy$), but the $y$ output continues to increase at a more or less steady rate. If for instance $F(t) = A\omega \cos(\omega t)$ with some quantities $A, \omega$ which are large compared to the coupling constant $\alpha$, and we set initial conditions $x(0)=y(0)=0$ for simplicity, then we expect $x$ to behave like $A \sin(\omega t)$, and $y$ to increase at rate about $\frac{1}{2} \alpha A^2$ on average.
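This rectification effect is easy to observe numerically. The following minimal sketch (the values of $\alpha, A, \omega$ are illustrative choices obeying $A, \omega \gg \alpha$, and are not drawn from the later construction) integrates the forced pump and compares the observed mean growth rate of $y$ with the prediction $\frac{1}{2}\alpha A^2$:
\begin{verbatim}
# Forced pump ("rectifier"): x oscillates like A*sin(omega*t) while y
# climbs steadily at mean rate close to alpha*A^2/2.  All parameter
# values here are illustrative stand-ins.
import numpy as np
from scipy.integrate import solve_ivp

alpha, A, omega = 0.1, 2.0, 50.0

def rhs(t, X):
    x, y = X
    F = A * omega * np.cos(omega * t)        # oscillatory forcing on x
    return [-alpha * x * y + F, alpha * x**2]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], rtol=1e-9, atol=1e-12)
print(sol.y[1, -1] / 10.0)   # observed mean growth rate of y
print(0.5 * alpha * A**2)    # predicted mean rate alpha*A^2/2
\end{verbatim}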
If instead we couple the pump to an oscillatory forcing term on the output, thus
\begin{align*}
\partial_t x &= - \alpha xy \\
\partial_t y &= \alpha x^2 + G
\end{align*}
then it is possible that $y$ can turn negative, which causes the pump to reverse in energy flow to become an amplifier (see below). This behaviour will be undesirable for us, so we will take some care to design our circuit so that the output of a pump does not experience significant negative forcing at key epochs in the dynamics, unless this forcing is counterbalanced by an almost equivalent amount of positive forcing.
\subsection{Application: finite time blowup for an exogenously truncated dyadic model}\label{beyond}
As a quick application of the pump gate, we establish blowup for the truncated version \eqref{trunc-non} of the dyadic model system \eqref{xn}, whenever one has supercritical dissipation:
\begin{proposition}[Blowup for a truncated dyadic model]\label{bkp} Let $\lambda > 1$ and $0 < \alpha < 1/2$, and let $0 < \delta < 1-2\alpha$. Then there exists a natural number $n_0$, a sequence of times
$$ 0 = t_{n_0} < t_{n_0+1} < t_{n_0+2} < \dots$$
increasing to a finite limit $T_*$, and continuous, piecewise smooth functions $X_n: [0,T_*) \to \R$ for $n \geq n_0$ such that $X_{n_0+k}(t)=0$ whenever $k \geq 1$ and $0 \leq t \leq t_{n_0+k-1}$, and such that
\begin{equation}\label{trunc-non-again}
\partial_t X_n = - \lambda^{2n\alpha} X_n + 1_{(t_{n-1},t_n)}(t) \lambda^{n-1} X_{n-1}^2 - 1_{(t_n,t_{n+1})}(t) \lambda^n X_n X_{n+1}
\end{equation}
for all $t \in [0,T_*)$ other than the times $t_{n_0}, t_{n_0+1},\ldots$, and all $n \geq n_0$, with the convention that $t_{n_0-1}=0$ and $X_{n_0-1}=0$. Furthermore, we have
$$ X_{n_0+k}(t_{n_0+k}) = \lambda^{- \delta k}$$
for every $k \geq 0$. In particular, for any $\delta'>\delta$, we have the blowup
$$\limsup_{t \to T_*} \sup_n \lambda^{\delta'n} |X_n(t)| = +\infty.$$
\end{proposition}
This proposition is not needed for the blowup results in the rest of the paper, but is easier to prove than those results, and already illustrates the basic features of the blowup solutions being constructed. Note that the blowup here is available for all values of the dissipation parameter up to the critical value of $1/2$, in contrast to the results in \cite{katz-dyadic} and \cite{ches} for the untruncated equation \eqref{xn} which cover the ranges $\alpha < 1/4$ and $\alpha < 1/3$ respectively, as well as the results in \cite{bmr} establishing global solutions when $\lambda=2$ and $2/5 \le \alpha \le 1/2$.
\begin{proof} We let $n_0$ be a sufficiently large natural number (depending on $\lambda,\alpha,\delta$) to be chosen later.
We then construct $t_{n_0}, t_{n_0+1},\ldots$ and $X_n(t)$ iteratively as follows:
\begin{itemize}
\item[Step 1.] Initialise $k=0$ and $t_{n_0}=0$. We also initialise
$$ X_{n_0}(0)=1; \quad X_{n_0+k}(0) = 0 \hbox{ for all } k \geq 1.$$
\item[Step 2.] Now suppose that $t_{n_0+k}$ has been constructed, and the solution $X_n(t)$ constructed for all times $0 \leq t \leq t_{n_0+k}$ and $n \geq n_0$. We then solve the pump system with dissipation
\begin{align}
\partial_t X_{n_0+k} &= -\lambda^{2(n_0+k)\alpha} X_{n_0+k} - \lambda^{n_0+k} X_{n_0+k} X_{n_0+k+1} \label{eot}\\
\partial_t X_{n_0+k+1} &= -\lambda^{2(n_0+k+1)\alpha} X_{n_0+k+1} + \lambda^{n_0+k} X_{n_0+k}^2\label{eot-2}
\end{align}
within the time interval $t \in [t_{n_0+k}, t_{n_0+k+1}]$, where $t_{n_0+k+1}$ is the first time for which $X_{n_0+k+1}(t_{n_0+k+1}) = \lambda^{-\delta(k+1)}$; we justify the existence of such a time below.
\item[Step 3.] For each $n \neq n_0+k,n_0+k+1$, we evolve $X_n$ on $[t_{n_0+k}, t_{n_0+k+1}]$ by the linear ODE
$$ \partial_t X_n = -\lambda^{2n\alpha} X_n.$$
\item[Step 4.] Increment $k$ to $k+1$ and return to Step 2.
\end{itemize}
Let us now establish that the time $t_{n_0+k+1}$ introduced in Step 2 is well defined for any given $k \geq 0$. If we make the change of variables
$$ x(t) := \lambda^{\delta k} X_{n_0+k}( t_{n_0+k} + \lambda^{-n_0-k + \delta k} t )$$
$$ y(t) := \lambda^{\delta k} X_{n_0+k+1}( t_{n_0+k} + \lambda^{-n_0-k + \delta k} t )$$
then we see from construction that we have the initial conditions
$$ x(0)=1, y(0)=0$$
and the evolution equations
\begin{align}
\partial_t x &= - \eps x - xy \label{able}\\
\partial_t y &= - \lambda^{2\alpha} \eps y + x^2 \label{bble}
\end{align}
where
$$ \eps := \lambda^{-(1-2\alpha)n_0 - (1-2\alpha-\delta)k},$$
and our task is to show that $y(t) = \lambda^{-\delta}$ for some finite $t>0$. However, from the explicit solution \eqref{pomp} to the pump gate \eqref{pump}, we see that in the case $\eps=0$, this occurs at time $t = \operatorname{tanh}^{-1}( \lambda^{-\delta} )$; standard perturbation arguments then show that if $n_0$ is sufficiently large (which forces $\eps$ to be sufficiently small), the value $y(t) = \lambda^{-\delta}$ is attained at some time $t \leq 2 \operatorname{tanh}^{-1}( \lambda^{-\delta} )$ (say). Undoing the scaling, we see that
$$ t_{n_0+k+1} - t_{n_0+k} \leq 2 \operatorname{tanh}^{-1}( \lambda^{-\delta} ) \lambda^{-n_0-k+\delta k}$$
so $t_n$ converges to a finite limit $T_*$ as $n \to \infty$, and the claim follows.
\end{proof}
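The scheme in the above proof is also straightforward to simulate. The sketch below (the values $\lambda = 2$, $\alpha = 0.4$, $\delta = 0.1$, $n_0 = 30$ are illustrative choices consistent with the hypotheses of Proposition \ref{bkp}) integrates the rescaled system \eqref{able}, \eqref{bble} stage by stage, stopping each stage when $y$ reaches $\lambda^{-\delta}$ and converting the rescaled stage duration back to physical time; the printed times visibly converge to a finite limit $T_*$:
\begin{verbatim}
# Stage-by-stage simulation of the truncated dyadic scheme: at stage k
# the rescaled pair (x, y) starts from (1, 0) and evolves under the
# pump-with-dissipation until y reaches lambda^{-delta}.
import numpy as np
from scipy.integrate import solve_ivp

lam, alpha, delta, n0 = 2.0, 0.4, 0.1, 30   # illustrative admissible values
target = lam**(-delta)

def rhs(t, X, eps_k):
    x, y = X
    return [-eps_k * x - x * y, -lam**(2 * alpha) * eps_k * y + x * x]

def hit(t, X, eps_k):                        # event: y reaches the target
    return X[1] - target
hit.terminal, hit.direction = True, 1

T = 0.0
for k in range(25):
    eps_k = lam**(-(1 - 2 * alpha) * n0 - (1 - 2 * alpha - delta) * k)
    sol = solve_ivp(rhs, (0.0, 10 * np.arctanh(target)), [1.0, 0.0],
                    args=(eps_k,), events=hit, rtol=1e-12, atol=1e-14)
    tau_k = sol.t_events[0][0]               # rescaled duration of stage k
    T += lam**(-n0 - k + delta * k) * tau_k  # undo the rescaling of time
    print(f"k={k:2d}  tau_k={tau_k:.6f}  t={T:.12e}")
\end{verbatim}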
As mentioned in the introduction, one can use (a slight modification of) this proposition to obtain a weaker ``exogenous'' version of Theorem \ref{main} in which the averaged operator $\tilde B = \tilde B(t)$ is now allowed to depend on the time coordinate $t$ (in a piecewise constant fashion, with an unbounded number of discontinuities as $t$ approaches the blowup time). We leave the details (which are an adaptation of those in Section \ref{euler-avg}) to the interested reader.
\begin{remark} One cannot take $\delta=0$ in the above argument, because the pump gate never quite transfers all of its energy from the $x$ mode to the $y$ mode. If however we worked with the modified equation
$$
\partial_t X_n = - \frac{\lambda^n}{g(\lambda^n)^2} X_n + 1_{(t_{n-1},t_n)}(t) \lambda^{n-1} (X_{n-1}^2 + X_{n-1} X_n) - 1_{(t_n,t_{n+1})}(t) \lambda^n (X_n X_{n+1} + X_{n+1}^2)$$
for some function $g: [0,+\infty) \to [0,+\infty)$ increasing to infinity, and defined $t_{n_0+k+1}$ to be the first time for which $X_{n_0+k}(t_{n_0+k+1})=0$ (so that $X_{n_0+k+1}$ is the only non-zero mode at this time), then a modification of the above argument establishes finite time blowup whenever $n_0$ is sufficiently large and
$$ \int_1^\infty \frac{ds}{sg(s)^2} < \infty,$$
basically because one can show inductively that $X_{n_0+k}(t_{n_0+k})$ is comparable to $1$, $t_{n_0+k+1}-t_{n_0+k}$ is comparable to $\lambda^{-(n_0+k)}$, and the energy dissipation on each time interval $[t_{n_0+k},t_{n_0+k+1}]$ is comparable to $\frac{1}{g(\lambda^{n_0+k})^2}$; we omit the details. This is compatible with the heuristic calculation in \cite[Remark 1.2]{tao-hyper}. In the converse direction, the arguments in \cite{tapay} or \cite{wu} should ensure global regularity for the above equation (or for the analogous hyperdissipative version of \eqref{xn}) under the condition
$$ \int_1^\infty \frac{ds}{sg(s)^4} = +\infty.$$
This leaves an intermediate regime (e.g. $g(s) = \log(1+s)^\beta$ for $1/4 < \beta \leq 1/2$) in which it is unclear whether one can force blowup\footnote{Since the initial release of this manuscript, it has been shown in \cite{bmr-conj} (see also \cite{bmr-dyadic}) that blowup in fact does not occur in this intermediate regime. Roughly speaking, the basic point is that as the energy moves from low frequency modes to high frequency modes, it must transition through all intermediate frequency scales, and the cumulative energy dissipation from such transitions is enough to prevent the solution from escaping to frequency infinity in this intermediate regime.} with any of these ODE models. The analysis in \cite{tapay} or \cite{wu} suggests that this may be possible, but one would have to work with models in which many different modes are activated at once (in contrast to the situation in Proposition \ref{bkp}, in which only two modes have interesting dynamics at any given time).
\end{remark}
\subsection{The amplifier gate}
The \emph{amplifier gate} is a reversed version of the pump gate:
\begin{equation}\label{amp}
\begin{split}
\partial_t x &= - \alpha y^2 \\
\partial_t y &= \alpha xy.
\end{split}
\end{equation}
Here again $\alpha>0$ is a coupling constant, indicating the strength of the amplifier. We will use this gate in the regime in which $x$ is positive and large, and $y$ is positive but small. In this case, we can explicitly solve the second equation to obtain
\begin{equation}\label{y-gronwall}
y(t) = \exp\left( \alpha \int_{t_0}^t x(t')\ dt' \right) y(t_0)
\end{equation}
for any $t \geq t_0$, which suggests that $y$ grows exponentially at rate comparable to $\alpha x(t_0)$, until such time that the $y$ mode begins to drain a significant fraction of energy from the $x$ mode. Thus, the $x$ mode can be viewed as causing exponential amplification in the $y$ mode. Of course, in the presence of forcing terms, we no longer have the exact formula \eqref{y-gronwall}, but we may take advantage of Gronwall's inequality to obtain analogous control on $y$.
As with the pump gate, the amplifier gate preserves the total energy $x^2+y^2$. An explicit solution to \eqref{amp} is given by
$$ x(t) = A \operatorname{tanh}(\alpha A (T-t)); \quad y(t) = A \operatorname{sech}( \alpha A (T-t) )$$
for any $A>0$ and $T>0$. For $0 < t < T$, the quantity $y$ increases exponentially at rate about $\alpha A$, while $x$ stays roughly steady at $A$.
By using the amplifier with a large coupling constant $\alpha$, $x$ large and positive, and $y$ small and positive, we can cause $y$ to grow at a rapid exponential rate, and in particular to transition abruptly from being small (e.g. $y \leq \eps$ for some threshold $\eps$) to being large (e.g. $y>2\eps$), if the threshold $\eps$ is set low enough that $y$ does not yet begin to drain significant amounts of energy from $x$. This ability to generate abrupt transitions is of course needed in our quest to engineer an abrupt delayed transition of energy from one mode to another. This behaviour can be disrupted if $x$ becomes negative at some point, but we will avoid this in practice by making $x$ the output of a pump (which, as discussed previously, can serve to ``rectify'' an alternating input into a steadily increasing output). We will represent the amplifier schematically by a triangle-headed arrow (Figure \ref{fig:amp}).
\begin{figure} [t]
\centering
\includegraphics{./amplifier.png}
\caption[Amplifier gate]{The amplifier gate from the $x$ mode to the $y$ mode with coupling constant $\alpha$.}
\label{fig:amp}
\end{figure}
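A small numerical sketch (with illustrative parameter values) confirms this picture: while $y$ remains small, $x$ stays near its initial value and the logarithmic slope of $y$ stays close to $\alpha x(0)$.
\begin{verbatim}
# Amplifier gate: while y is small, x stays near x(0) = 1 and y grows
# exponentially at rate close to alpha*x(0).  Illustrative parameters.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 1.0

def rhs(t, X):
    x, y = X
    return [-alpha * y**2, alpha * x * y]

sol = solve_ivp(rhs, (0.0, 8.0), [1.0, 1e-6], dense_output=True,
                rtol=1e-10, atol=1e-14)
y5, y6 = sol.sol(5.0)[1], sol.sol(6.0)[1]
print(np.log(y6 / y5))   # log-slope of y over [5, 6]; close to alpha*x(0) = 1
\end{verbatim}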
\subsection{The rotor gate}
The \emph{rotor gate} is a ternary gate
\begin{equation}\label{rotor}
\begin{split}
\partial_t x &= -\alpha yz \\
\partial_t y &= \alpha xz \\
\partial_t z &= 0
\end{split}
\end{equation}
where again $\alpha >0$ is a parameter.
This of course preserves the total energy $x^2+y^2+z^2$ and has the explicit solution
\begin{align*}
x(t) &= x(t_0) \cos\left( \alpha z(t_0) (t-t_0) \right) - y(t_0) \sin\left( \alpha z(t_0) (t-t_0) \right) \\
y(t) &= y(t_0) \cos\left( \alpha z(t_0) (t-t_0) \right) + x(t_0) \sin\left( \alpha z(t_0) (t-t_0) \right) \\
z(t) &= z(t_0)
\end{align*}
in which $(x(t),y(t))$ rotates around the origin at a constant angular rate $\alpha z(t_0)$, while $z$ remains fixed. Thus the $z$ mode can be viewed as driving the oscillating interchange of energy between the $x$ and $y$ modes.
Because we will be coupling the rotor to various forcing terms in $x$, $y$, and $z$, we cannot rely directly on the above explicit solution, although this solution is of course very useful for supplying intuition as to how the rotor behaves. Instead, we will use energy-based analyses of the rotor, which are much more robust with respect to forcing terms. Firstly we observe that for the rotor with no forcing, the combined energy of the $x$ and $y$ modes is conserved:
$$ \partial_t (x^2+y^2) = 0.$$
In a related spirit, we have the \emph{equipartition of energy identity}
$$ \alpha z (x^2-y^2) = \partial_t (xy)$$
or in integral form
$$ \alpha \int_{t_0}^T z(t) (x^2(t) - y^2(t))\ dt = x(T)y(T) - x(t_0)y(t_0).$$
Using the conserved energy $x^2+y^2=E$ (which lets us write $x^2 - y^2 = 2x^2 - E$), the constant nature of $z$, and the crude bound $|xy| \leq E/2$, this becomes
$$ \frac{1}{T-t_0} \int_{t_0}^T x^2(t)\ dt = \frac{1}{2} E + O\left( \frac{E}{\alpha |z(t_0)| (T-t_0)}\right)$$
and similarly with $x(t)$ replaced by $y(t)$. Thus we see that over any time interval significantly longer than the period $\frac{2\pi}{\alpha |z(t_0)|}$, the $x$ mode absorbs about half the energy $E$ of the combined pair $x,y$, and similarly for $y$.
In our application, we will use the rotor with the driving mode $z$ being the output of an amplifier. As noted previously, amplifier outputs can transition rapidly from being small to being large, so the pair $(x,y)$ will initially be almost stationary, and then suddenly transition to a highly oscillatory state. This creates a ``jolt'' of ``alternating current'', which we will then quickly transform to ``direct current'' via a pump gate.
We describe the rotor gate schematically by a loop connecting the $x$ and $y$ modes that is driven by the $z$ mode: see Figure \ref{fig:rotor}.
\begin{figure} [t]
\centering
\includegraphics{./rotor.png}
\caption[Rotor gate]{The rotor gate that uses the $z$ mode to exchange energy between the $x$ and $y$ modes using the coupling constant $\alpha$.}
\label{fig:rotor}
\end{figure}
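The equipartition estimate above is also easy to test numerically; in the following sketch (illustrative parameter values only), the time average of $x^2$ over a window much longer than the rotation period comes out close to $E/2$.
\begin{verbatim}
# Rotor gate: averaged over many rotation periods, the x mode carries
# about half of the conserved energy of the (x, y) pair.  Here the
# rotation rate is alpha*z0 = 5 and the window T = 20 (about 16 periods).
import numpy as np
from scipy.integrate import solve_ivp

alpha, z0, T = 1.0, 5.0, 20.0

def rhs(t, X):
    x, y, z = X
    return [-alpha * y * z, alpha * x * z, 0.0]

sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0, z0], dense_output=True,
                rtol=1e-10, atol=1e-12)
ts = np.linspace(0.0, T, 20001)
x = sol.sol(ts)[0]
print((x**2).mean())   # time average of x^2; here E/2 = 0.5
\end{verbatim}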
\subsection{A delayed and abrupt energy transition}
We can now build a ``quadratic circuit'' that achieves the goal of abruptly transitioning almost all of its energy from one mode to another, after a certain delay (and thus exhibiting ``digital'' transition behaviour rather than ``analog'' transition behaviour). To describe this circuit, we first need a large parameter $K \gtrsim 1$, together with a very small parameter $0 < \eps \lesssim 1$ which will be sufficiently small depending on $K$ (e.g. one could choose $\eps := 1/\exp(\exp(CK))$ for some large absolute constant $C$). We will then consider a five-mode circuit $X = (a,b,c,d,\tilde a)$ obeying the equations
\begin{align}
\partial_t a &= - \eps^{-2} c d - \eps ab - \eps^2 \exp(-K^{10}) ac \label{a-eq} \\
\partial_t b &= \eps a^2 - \eps^{-1} K^{10} c^2 \label{b-eq} \\
\partial_t c &= \eps^2 \exp(-K^{10}) a^2 + \eps^{-1} K^{10} bc \label{c-eq} \\
\partial_t d &= \eps^{-2} ca - K d\tilde a \label{d-eq}\\
\partial_t \tilde a &= K d^2\label{ta-eq}
\end{align}
and with initial data
\begin{equation}\label{a-init}
a(0) = 1; \quad b(0)=c(0)=d(0)=\tilde a(0)=0.
\end{equation}
This system looks complicated and artificial, with a rather arbitrary-looking set of coupling constants of wildly differing magnitudes, but it should be viewed as a superposition of five quadratic gates:
\begin{itemize}
\item A pump of coupling constant $\eps$ that transfers a small amount of energy from $a$ to $b$;
\item A pump of coupling constant $\eps^2 \exp(-K^{10})$ that transfers a minute amount of energy from $a$ to $c$;
\item An amplifier of coupling constant $\eps^{-1} K^{10}$ that uses $b$ to rapidly amplify $c$;
\item A rotor of coupling constant $\eps^{-2}$ that uses $c$ to (eventually) rotate energy very rapidly between $a$ and $d$; and
\item A pump of coupling constant $K$ that drains energy from $d$ to $\tilde a$ at a moderately fast pace.
\end{itemize}
See Figure \ref{fig:delay}.
\begin{figure} [t]
\centering
\includegraphics{./delay.png}
\caption[Delay circuit]{A circuit that creates a delayed, but abrupt, transition of energy from $a$ to $\tilde a$.}
\label{fig:delay}
\end{figure}
One should view $a$ as the input mode for this circuit, and $\tilde a$ as the output mode; in later sections we will chain an infinite sequence of these circuits (rescaled by an exponentially growing parameter) together by identifying the output mode for each circuit with the input mode for the next. One can check by hand that this system is of the form \eqref{ode} with bilinear form $G$ obeying the cancellation condition \eqref{g-cancel} (basically because the system is composed of gates, each of which individually satisfies this condition).
As a caricature, the evolution of this system can be described as follows, involving a critical time $t_c \approx \sqrt{2}$:
\begin{enumerate}
\item[(i)] At early times $0 \leq t \leq t_c - 1/\sqrt{K}$ (say), nothing much appears to happen: $a$ remains very close to $1$, $b$ grows linearly like $\eps t$, $c$ grows exponentially like $\eps^2 \exp( (\frac{1}{2}t^2-1) K^{10} )$, and $d$ and $\tilde a$ are close to $0$.
\item[(ii)] At a critical time $t_c \approx \sqrt{2}$, there is an abrupt transition when the exponentially growing $c$ suddenly (within a time of $O(K^{-10})$ or so) transitions from being much smaller than $\eps^2$ to being much larger than $\eps^2$. This ignites the rotor gate, which then begins to rapidly transfer energy between $a$ and $d$. By equipartition of energy, $d^2$ will approximately be equal to $1/2$ on the average.
\item[(iii)] After time $t_c+1/K$ or so, the pump between $d$ and $\tilde a$ begins to activate, and steadily drains the energy from $d$ (which, as mentioned before, contains about half the energy of the system) to $\tilde a$. Meanwhile, $b$ and $c$ remain very small (because of the $\eps$ factors), but with $c$ large enough to continue the rapid mixing of energy between $a$ and $d$ throughout this process.
\item[(iv)] By time $t_c + 1/\sqrt{K}$ (say), all but an exponentially small remnant of energy has been drained into $\tilde a$.
\end{enumerate}
We depict these dynamics schematically in Figure \ref{fig:transfer}.
\begin{figure} [t]
\centering
\includegraphics{./energy-transfer.png}
\caption{A schematic description (not entirely drawn to scale) of the dynamics of $a,b,c,d,\tilde a$ before, near, and after the critical time $t_c$. Note that $b$ and $c$ are of significantly smaller magnitude (by powers of $\eps$) than the other modes $a,d,\tilde a$, even after taking into account the exponential growth of $c$. Note also how the pump gates convert alternating inputs to monotone outputs, and how the amplifier gate converts a slowly growing input to an exponentially growing output. Finally, observe the transient nature of $d$, which only plays a role near the critical time $t_c$; the bulk of the energy is concentrated at the $a$ mode before time $t_c$ and at the $\tilde a$ mode after time $t_c$.}
\label{fig:transfer}
\end{figure}
To summarise the above description of the dynamics, this circuit has achieved the stated goal of creating an abrupt transition (of duration $O(1/\sqrt{K})$) of energy from one mode $a$ to another $\tilde a$, after a long delay (of duration $t_c \approx \sqrt{2}$). (It is by no means the only circuit that can accomplish this task, but the author was not able to locate a circuit of lower complexity that did so.)
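Although the regime of Theorem \ref{daet} below ($K$ enormous, $\eps$ tiny) is far outside floating-point range, since factors such as $\exp(-K^{10})$ underflow double precision, a scaled-down version of the circuit can still be watched in action. In the sketch below, all parameter values are illustrative stand-ins; in particular a moderate constant \verb|kappa| replaces $K^{10}$. One sees a long quiescent period followed by a rapid hand-off of energy from $a$ to $\tilde a$ beginning near $t \approx \sqrt{2}$, though at these modest parameter values the transition is of course much less sharp than in the theorem:
\begin{verbatim}
# Scaled-down delay circuit: kappa is a moderate stand-in for K^{10}
# (exp(-K^{10}) would underflow double precision), and eps, K are
# illustrative values.  The printed total energy is conserved, and a^2
# hands off to atilde^2 shortly after t ~ sqrt(2).
import numpy as np
from scipy.integrate import solve_ivp

eps, kappa, K = 5e-3, 40.0, 10.0

def rhs(t, X):
    a, b, c, d, at = X
    return [-c * d / eps**2 - eps * a * b - eps**2 * np.exp(-kappa) * a * c,
            eps * a**2 - kappa * c**2 / eps,
            eps**2 * np.exp(-kappa) * a**2 + kappa * b * c / eps,
            c * a / eps**2 - K * d * at,
            K * d**2]

sol = solve_ivp(rhs, (0.0, 2.5), [1.0, 0.0, 0.0, 0.0, 0.0],
                method="Radau", rtol=1e-10, atol=1e-30, dense_output=True)

for t in (0.5, 1.0, 1.4, 1.5, 1.7, 2.0, 2.5):
    a, b, c, d, at = sol.sol(t)
    print(f"t={t:4.2f}  a^2={a*a:7.4f}  atilde^2={at*at:7.4f}  "
          f"energy={a*a + b*b + c*c + d*d + at*at:.6f}")
\end{verbatim}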
More formally, we claim
\begin{theorem}[Delayed abrupt energy transition]\label{daet} If $K$ is sufficiently large, and $\eps$ sufficiently small depending on $K$, then there exists a time
\begin{equation}\label{tcable}
t_c = \sqrt{2} + O(1/\sqrt{K})
\end{equation}
such that
\begin{equation}\label{able2}
a(t) = 1 + O(K^{-10}); \quad b(t), c(t), d(t), \tilde a(t) = O(K^{-10})
\end{equation}
for $0 \leq t \leq t_c - 1/\sqrt{K}$, and
\begin{equation}\label{beable}
\tilde a(t) = 1 + O(K^{-10}); \quad a(t), b(t), c(t), d(t) = O(K^{-10})
\end{equation}
for $t \geq t_c + 1/\sqrt{K}$.
\end{theorem}
\begin{proof} We shall use the usual bootstrap procedure of starting with crude estimates and steadily refining them to stronger estimates on the time interval $[0,2]$, using continuity arguments if necessary in case the crude estimates are initially only available at $t=0$ rather than for all $t \in [0,2]$.
From conservation of energy and \eqref{a-init} we have
\begin{equation}\label{energy-con}
a(t)^2 + b(t)^2 + c(t)^2 + d(t)^2 + \tilde a(t)^2 = 1
\end{equation}
throughout this interval. In particular, we have
\begin{equation}\label{ob}
a(t),b(t),c(t),d(t),\tilde a(t) = O(1).
\end{equation}
We can improve this bound on $b$ and $c$ as follows. From \eqref{b-eq}, \eqref{c-eq} we have the local energy identity
$$ \partial_t (b^2+c^2) = 2 \eps a^2 b + 2\eps^2 \exp(-K^{10}) a^2 c$$
and thus by \eqref{ob}
$$ \partial_t (b^2+c^2) = O\left( \eps (b^2+c^2)^{1/2} \right)$$
or equivalently
$$ \partial_t \left((b^2+c^2)^{1/2}\right) = O( \eps )$$
(in a weak derivative\footnote{To justify this step (a very simple example of the \emph{diamagnetic inequality} (see e.g. \cite[\S 7.19-7.22]{ll}) in action), one can first work instead with $(b^2+c^2+\delta)^{1/2}$ for some small $\delta>0$, in order to avoid any singularity, and then take distributional limits as $\delta \to 0$.} sense), and so from \eqref{a-init} and the fundamental theorem of calculus we see in particular that
\begin{equation}\label{ob-2}
b(t), c(t) = O(\eps)
\end{equation}
for $t \in [0,2]$. Inserting this into \eqref{c-eq}, we see that
\begin{equation}\label{cgrow}
\partial_t c = O( \eps^2 \exp(-K^{10}) ) + O( K^{10} c )
\end{equation}
for $t \in [0,2]$ (note from the initial condition $c(0)=0$ and a comparison argument that $c(t) \geq 0$ for all $t \geq 0$). By Gronwall's inequality we thus have
\begin{equation}\label{code}
0 \leq c(t) \lesssim \eps^2 \exp\left((Ct-1)K^{10} \right)
\end{equation}
for all $t \in [0,2]$ and some absolute constant $C>0$. Finally, from \eqref{d-eq}, \eqref{ta-eq} we have the local energy identity
$$
\partial_t (d^2+\tilde a^2) = 2 \eps^{-2} c ad
$$
and hence by \eqref{ob}
$$
\partial_t \left((d^2+\tilde a^2)^{1/2}\right) = O( \eps^{-2} c ) $$
and thus by \eqref{a-init} and the fundamental theorem of calculus
\begin{equation}\label{dora}
d(t), \tilde a(t) = O\left( \eps^{-2} \int_0^t c(t')\ dt' \right).
\end{equation}
In particular, from \eqref{code} we have
\begin{equation}\label{explorer-2}
|d(t)|, |\tilde a(t)| \lesssim K^{-10} \exp\left( (Ct-1) K^{10} \right),
\end{equation}
which is a good bound for short times $t \leq 1/C$.
Having obtained crude bounds on all five quantities $a(t), b(t), c(t), d(t),\tilde a(t)$, we now return to obtain sharper bounds on these quantities. Let $t_c$ be the supremum of all the times $t \in [0,2]$ for which $c(t') \leq K^{-10} \eps^2$ for all $0 \leq t' \leq t$, thus $0 < t_c \leq 2$ and
\begin{equation}\label{boots}
c(t) \leq K^{-10} \eps^2
\end{equation}
for $t \in [0,t_c]$. Comparing this with \eqref{code} we conclude that $t_c \gtrsim 1$. From \eqref{dora} we have
$$ |d(t)|, |\tilde a(t)| \lesssim K^{-10}$$
for $t \in [0,t_c]$. Inserting these bounds and \eqref{ob-2} back into \eqref{a-eq}, we have
$$
\partial_t a = O( K^{-20} ) + O( \eps^2 )
$$
on $[0,t_c]$, so from \eqref{a-init} (and assuming $\eps$ sufficiently small depending on $K$) we have
$$
a(t) = 1 + O( K^{-20} )$$
for $t \in [0,t_c]$. This already gives all the bounds \eqref{able2}. Inserting the $a$ bound into \eqref{b-eq} and using \eqref{boots}, we have
$$ \partial_t b = \eps + O( K^{-20} \eps ) + O( K^{-10} \eps^3 ) $$
and so from \eqref{a-init} (again assuming $\eps$ sufficiently small depending on $K$) we have
\begin{equation}\label{bogo-2}
b(t) = \eps t + O( K^{-20} \eps )
\end{equation}
for $t \in [0,t_c]$. Inserting these bounds into \eqref{c-eq}, we have
$$ \partial_t c = (1 + O(K^{-20})) \eps^2 \exp(-K^{10}) + (K^{10} t + O(K^{-10})) c$$
and hence by \eqref{a-init} and Gronwall's inequality
\begin{align*}
c(t) &= \int_0^t \exp\left(\int_{t'}^t \left(K^{10} t'' + O(K^{-10})\right)\ dt''\right) \left(1 + O(K^{-20})\right) \eps^2 \exp(-K^{10})\ dt' \\
&= \left(1 + O(K^{-10})\right) \eps^2 \exp\left(\left(\frac{1}{2} t^2- 1\right)K^{10}\right) \int_0^t \exp\left( - \frac{1}{2} (t')^2 K^{10}\right)\ dt'
\end{align*}
for $t \in [0,t_c]$. In particular (since $t_c \gtrsim 1$), standard asymptotics on the error function give
$$ c(t_c) = \left(1 + O(K^{-10})\right) \eps^2 \exp\left(\left(\frac{1}{2} t_c^2-1\right)K^{10}\right) \sqrt{\frac{\pi}{2 K^{10}}}
$$
which, when compared against the definition of $t_c$, shows \eqref{tcable}. In particular, $t_c < 2$ (for $K$ large enough), and so
\begin{equation}\label{c-bound}
c(t_c) = K^{-10} \eps^2.
\end{equation}
Having described the evolution up to time $t_c$, we now move to the future of $t_c$. From \eqref{bogo-2} we have
$$ b(t_c) \gtrsim \eps.$$
Meanwhile, from \eqref{code}, \eqref{b-eq} (discarding the non-negative $\eps a^2$ term) we have
$$ \partial_t b \geq - \eps^3 \exp(O( K^{10} ) )$$
for $t \in [t_c,2]$, so (for $\eps$ small enough) we also have
$$ b(t) \gtrsim \eps$$
for $t \in [t_c,2]$. Inserting this bound into \eqref{c-eq}, and discarding the non-negative $\eps^2 \exp(-K^{10}) a^2$ term, we arrive at the exponential growth
$$ \partial_t c(t) \gtrsim K^{10} c(t)$$
for $t \in [t_c,2]$. From this, \eqref{c-bound}, and Gronwall's inequality, we see in particular that
\begin{equation}\label{c-large}
c(t) \geq K^{100} \eps^2
\end{equation}
for $t$ in the time interval $I := [t_c + K^{-9}, 2]$. In other words, the rotor gate will be continuously and strongly activated from time $t_c+K^{-9}$ onwards.
On the other hand, from \eqref{cgrow}, \eqref{c-large} we also have
\begin{equation}\label{solace}
\partial_t c(t) = O( K^{10} c(t) )
\end{equation}
for $t \in I$, so the exponential growth rate of $c$ remains under control in this region.
Now we use equipartition of energy to establish some reasonably rapid energy drain from $a,b,c,d$ to $\tilde a$.
From \eqref{a-eq}-\eqref{d-eq} one has
$$ \partial_t (a^2+b^2+c^2+d^2) = - 2Kd^2 \tilde a.$$
Similarly, from \eqref{a-eq}, \eqref{d-eq}, and \eqref{ob} one has
$$ \partial_t(ad) = \eps^{-2} c (a^2-d^2) + O(K).$$
Finally, from \eqref{ta-eq}, \eqref{ob} one has
$$ \partial_t \tilde a = O(K).$$
We conclude using \eqref{c-large}, \eqref{solace}, and the product rule that
\begin{equation}\label{douse}
\begin{split}
\partial_t( ad \frac{\eps^2}{c} \tilde a) &= (a^2-d^2) \tilde a - ad \frac{\eps^2}{c} \frac{\partial_t c}{c} \tilde a + ad \frac{\eps^2}{c} \partial_t \tilde a + O(K^{-99}) \\
&= (a^2-d^2) \tilde a + O(K^{-90})
\end{split}
\end{equation}
for $t \in I$,
so if we define the modified energy
$$ E_* := \frac{1}{2}(a^2+b^2+c^2+d^2) - \frac{1}{2} K ad \frac{\eps^2}{c} \tilde a$$
then
$$ \partial_t E_* = - \frac{1}{2} K (a^2+d^2) \tilde a + O( K^{-80} )$$
for $t \in I$.
From \eqref{ob-2} we have
\begin{equation}\label{est}
E_* = \frac{1}{2}(a^2+b^2+c^2+d^2) + O(K^{-99}) = \frac{1}{2} (a^2+d^2) + O(K^{-99})
\end{equation}
and thus
$$ \partial_t E_* = - K \tilde a E_* + O( K^{-80} ).$$
Starting with the crude bound $E_*(t_c + K^{-9}) = O(1)$ from \eqref{est}, we thus see from Gronwall's inequality that
$$ E_*(t) \lesssim \exp\left( - K \int_{t_c+K^{-9}}^t \tilde a(t')\ dt'\right) + O(K^{-80})$$
for all $t \in I$.
Observe from \eqref{ta-eq} that $\tilde a$ is non-decreasing, and thus
\begin{equation}\label{toke}
E_*(t) \lesssim \exp( - K (t-t') \tilde a(t')) + O(K^{-80})
\end{equation}
whenever $t_c + K^{-9} \leq t' \leq t \leq 2$. We will use this bound with $t' := t_c + 1/K$. We claim that
\begin{equation}\label{atc}
\tilde a(t_c+1/K) \geq 0.1.
\end{equation}
Suppose this is not the case; then by \eqref{ta-eq} we have
$$ \int_{t_c}^{t_c+1/K} d(t'')^2\ dt'' \leq \frac{1}{10K}.$$
However, by repeating the derivation of \eqref{douse} we have
$$ \partial_t \left(ad \frac{\eps^2}{c}\right) = (a^2-d^2) + O(K^{-90})$$
and hence by the fundamental theorem of calculus and \eqref{c-large} we have
$$ \int_{t_c}^{t_c+1/K} (a(t'')^2 - d(t'')^2)\ dt'' = O( K^{-90} );$$
combining this with the previous estimate, we conclude that
$$ \int_{t_c}^{t_c+1/K} \frac{1}{2} (a^2+d^2)(t'')\ dt'' \leq \frac{1}{10K} + O(K^{-90}).$$
On the other hand, for $t'' \in [t_c,t_c+1/K]$ one has $\tilde a(t'') \leq 0.1$ by monotonicity of $\tilde a$, and hence by \eqref{est}, \eqref{energy-con} we have
$$ a^2+d^2 \geq 0.99 + O(K^{-99})$$
in this interval, giving the required contradiction.
Inserting the bound \eqref{atc} into \eqref{toke}, we conclude in particular that
$$ E_*(t) \lesssim K^{-80}$$
for $t_c+\frac{1}{\sqrt{K}} \leq t \leq 2$, and \eqref{beable} follows from \eqref{est} and \eqref{energy-con}.
\end{proof}
\section{Blowup for the cascade ODE}\label{blowup-sec}
We can now prove Theorem \ref{ood} (and hence Theorem \ref{blowup} and Theorem \ref{main}). The idea is to chain together an infinite sequence of circuits of the form \eqref{a-eq}-\eqref{ta-eq}, so that (a more complicated version of) the analysis from Theorem \ref{daet} may be applied.
\subsection{First step: constructing the ODE}
Let $\epsilon_0 > 0$ be fixed; we allow all implied constants in the $O()$ notation to depend on $\epsilon_0$. As in the previous section, we need a large constant $K \geq 1$, which we assume to be sufficiently large depending on $\epsilon_0$, and then a small constant $0 < \eps < 1$, which we assume to be sufficiently small depending on both $K$ and $\epsilon_0$. Finally, we take $n_0$ sufficiently large depending on $\epsilon_0, K, \eps$.
The reader may wish to keep in mind the hierarchy of parameters
$$ 1 \ll \frac{1}{\epsilon_0} \ll K \ll \frac{1}{\eps} \ll n_0$$
as a heuristic for comparing the magnitude of various quantities appearing in the sequel. Thus, for instance, a quantity of the form $O\left( \exp\left( O\left(K^{10}\right) \right) \left(1+\epsilon_0\right)^{-n_0/2} \right)$ will be smaller than $\exp\left(-K^{10}\right) \eps^2$; a quantity of the form $O\left( \exp\left( O\left( K^{10}\right)\right) \eps \right)$ will be smaller than $K^{-100}$; and so forth.
\begin{table}
\centering
\caption{The non-zero values of $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3}$.}
\label{alpha-table}
\begin{tabular}{lllllll}
\toprule
$i_1$ & $i_2$ & $i_3$ & $\mu_1$ & $\mu_2$ & $\mu_3$ & $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3}$ \\
\midrule
$3$ & $4$ & $1$ & $0$ & $0$ & $0$ & $-\eps^{-2} / 2$ \\
$4$ & $3$ & $1$ & $0$ & $0$ & $0$ & $-\eps^{-2} / 2$ \\
$1$ & $3$ & $4$ & $0$ & $0$ & $0$ & $\eps^{-2} / 2$ \\
$3$ & $1$ & $4$ & $0$ & $0$ & $0$ & $\eps^{-2} / 2$ \\
$1$ & $2$ & $1$ & $0$ & $0$ & $0$ & $-\eps / 2$ \\
$2$ & $1$ & $1$ & $0$ & $0$ & $0$ & $-\eps / 2$ \\
$1$ & $1$ & $2$ & $0$ & $0$ & $0$ & $\eps$ \\
$1$ & $3$ & $1$ & $0$ & $0$ & $0$ & $-\eps^{2} \exp(-K^{10})/ 2$ \\
$3$ & $1$ & $1$ & $0$ & $0$ & $0$ & $-\eps^{2} \exp(-K^{10}) / 2$ \\
$1$ & $1$ & $3$ & $0$ & $0$ & $0$ & $\eps^{2} \exp(-K^{10})$ \\
$3$ & $3$ & $2$ & $0$ & $0$ & $0$ & $-\eps^{-1} K^{10}$ \\
$2$ & $3$ & $3$ & $0$ & $0$ & $0$ & $\eps^{-1} K^{10}/2$ \\
$3$ & $2$ & $3$ & $0$ & $0$ & $0$ & $\eps^{-1} K^{10}/2$ \\
$4$ & $4$ & $1$ & $0$ & $0$ & $1$ & $(1+\epsilon_0)^{5/2} K$ \\
$1$ & $4$ & $4$ & $1$ & $0$ & $0$ & $-(1+\epsilon_0)^{5/2} K/2$ \\
$4$ & $1$ & $4$ & $0$ & $1$ & $0$ & $-(1+\epsilon_0)^{5/2} K/2$\\
\bottomrule
\end{tabular}
\end{table}
The dimension parameter $m$ for the system we will use to prove Theorem \ref{ood} will be taken to be $m=4$.
We set the coefficients $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3}$ by using Table \ref{alpha-table}, with $\alpha_{i_1,i_2,i_3,\mu_1,\mu_2,\mu_3}$ set equal to zero if it does not appear in the above table. It is clear that the required symmetry property \eqref{symmetry} and the cancellation property \eqref{cyclic} hold. Also, the hypotheses of Lemma \ref{eqmot}(v) are satisfied.
Suppose for contradiction that Theorem \ref{ood} fails for this choice of parameters, so that we have global continuously differentiable functions $X_{i,n}: [0,+\infty) \to \R$ and $E_{i,n}: [0,+\infty) \to [0,+\infty)$ obeying the conclusions of Lemma \ref{eqmot}. More precisely, by Lemma \ref{eqmot}(i), we have the \emph{a priori} regularity
\begin{equation}\label{many}
\sup_{0 \leq t\leq T} \sup_{n \in\Z} \sup_{i=1,\dots,4} \left(1 + (1+\epsilon_0)^{10n}\right) |X_{i,n}(t)| < \infty
\end{equation}
and
$$
\sup_{0 \leq t\leq T} \sup_{n \in\Z} \sup_{i=1,\dots,4} \left(1 + (1+\epsilon_0)^{10n}\right) E_{i,n}(t)^{1/2} < \infty,
$$
for all $0 < T< \infty$.
It will be convenient to work with the combined energy
$$E_n := E_{1,n} + E_{2,n} + E_{3,n} + E_{4,n},$$
so that
\begin{equation}\label{many-2}
\sup_{0 \leq t\leq T} \sup_{n \in\Z} \left(1 + (1+\epsilon_0)^{10n}\right) E_{n}(t)^{1/2} < \infty
\end{equation}
for all $0 < T < \infty$.
By Lemma \ref{eqmot}(ii), we have the equations of motion
\begin{align}
\partial_t X_{1,n} &= (1+\epsilon_0)^{5n/2} (- \eps^{-2} X_{3,n} X_{4,n} - \eps X_{1,n} X_{2,n} - \eps^2 \exp(-K^{10}) X_{1,n} X_{3,n} + K X_{4,n-1}^2)\nonumber\\
&\quad\quad + O\left( (1+\epsilon_0)^{2n} E_n^{1/2} \right) \label{axin-eq} \\
\partial_t X_{2,n} &= (1+\epsilon_0)^{5n/2} (\eps X_{1,n}^2 - \eps^{-1} K^{10} X_{3,n}^2) + O\left( (1+\epsilon_0)^{2n} E_n^{1/2} \right) \label{bxin-eq} \\
\partial_t X_{3,n} &= (1+\epsilon_0)^{5n/2} (\eps^2 \exp(-K^{10}) X_{1,n}^2 + \eps^{-1} K^{10} X_{2,n} X_{3,n} ) + O\left( (1+\epsilon_0)^{2n} E_n^{1/2} \right) \label{cxin-eq} \\
\partial_t X_{4,n} &= (1+\epsilon_0)^{5n/2} (\eps^{-2} X_{3,n} X_{1,n} - (1+\epsilon_0)^{5/2} K X_{4,n} X_{1,n+1}) + O\left( (1+\epsilon_0)^{2n} E_n^{1/2} \right) \label{dxin-eq}
\end{align}
(compare with \eqref{a-eq}-\eqref{ta-eq}) and the local energy inequality
\begin{equation}\label{lei-0}
\partial_t E_n \leq (1+\epsilon_0)^{5n/2} K X_{4,n-1}^2 X_{1,n} - (1+\epsilon_0)^{5(n+1)/2} K X_{4,n}^2 X_{1,n+1}
\end{equation}
for any $n \in \Z$ and $t \geq 0$.
\begin{remark} As mentioned in the previous section, if one ignores the dissipation terms, the system \eqref{axin-eq}-\eqref{dxin-eq} describes an infinite number of (rescaled) copies of the quadratic circuit analysed in Theorem \ref{daet}, with the output of each such circuit chained to the input of a slightly faster-running version of the same circuit; see Figure \ref{fig:circ1}.
\end{remark}
\begin{figure} [t]
\centering
\includegraphics{./circuit-1.png}
\caption[Cascade of circuits]{A portion of the system \eqref{axin-eq}-\eqref{dxin-eq}, focusing on the modes at or near scale $n$, and ignoring the dissipation terms. Compare with Figure \ref{fig:delay}.}
\label{fig:circ1}
\end{figure}
By Lemma \ref{eqmot}(iii), we have the initial conditions
\begin{equation}\label{initial-cond}
E_{n}(0) = \frac{1}{2} 1_{n=n_0}; \quad X_{i,n}(0) = 1_{(i,n)=(1,n_0)}
\end{equation}
for all $n \in \Z$ and $i=1,\dots,4$.
By Lemma \ref{eqmot}(iv), we have
\begin{equation}\label{ein-sum}
\frac{1}{2} \sum_{i=1}^4 X_{i,n}^2(t) \leq E_n(t) \leq \frac{1}{2}\sum_{i=1}^4 X_{i,n}^2(t) + O\left( (1+\epsilon_0)^{2n} \int_0^t E_n(t')\ dt' \right)
\end{equation}
for any $n \in \Z$ and $t \geq 0$.
Finally, from Lemma \ref{eqmot}(v) we have
\begin{equation}\label{nomode}
E_n(t) = X_{1,n}(t) = X_{2,n}(t)=X_{3,n}(t) = X_{4,n}(t)=0
\end{equation}
for all $n < n_0$ and $t \geq 0$.
To prove Theorem \ref{ood}, it thus suffices to show
\begin{theorem}[No global solution for ODE system]\label{ood2} Let $0 < \epsilon_0 < 1$, let $K>0$ be sufficiently large depending on $\epsilon_0$, let $\eps>0$ be sufficiently small depending on $\epsilon_0,K$, and let $n_0$ be sufficiently large depending on $\epsilon_0, K, \eps$, and the implied constants in \eqref{axin-eq}, \eqref{dxin-eq}, \eqref{ein-sum}. Then there do not exist continuously differentiable functions $X_{i,n}: [0,+\infty) \to \R$ and $E_n: [0,+\infty) \to [0,+\infty)$ obeying the estimates and equations \eqref{many}-\eqref{nomode} for the indicated range of parameters.
\end{theorem}
It remains to prove Theorem \ref{ood2}.
\subsection{Second step: describing the blowup dynamics}
We will establish the following description of the dynamics of $X_{i,n}$ and $E_{i,n}$:
\begin{proposition}[Blowup dynamics]\label{blowdyn} Let the hypotheses and notation be as in Theorem \ref{ood2}, and suppose for contradiction that we may find continuously differentiable functions $X_{i,n}:[0,+\infty) \to \R$ and $E_n: [0,+\infty) \to [0,+\infty)$ with the stated properties \eqref{many}-\eqref{nomode}.
Let $N \geq n_0$ be an integer. Then there exist times
$$ 0 \leq t_{n_0} < t_{n_0+1} < \dots < t_N < \infty$$
and amplitudes
$$ e_{n_0}, \dots, e_N > 0$$
obeying the following properties:
\begin{itemize}
\item[(vi)] (Initialisation) We have
\begin{equation}\label{t-init}
t_{n_0} = 0
\end{equation}
and
\begin{equation}\label{e-init}
e_{n_0} = 1.
\end{equation}
\item[(vii)] (Scale evolution) For all $n_0 < n \leq N$, one has the amplitude stability
\begin{equation}\label{en-stable}
(1+\epsilon_0)^{-1/100} e_{n-1} \leq e_n \leq (1+\epsilon_0)^{1/100} e_{n-1}
\end{equation}
and the lifespan bound
\begin{equation}\label{lifespan}
\frac{1}{100} (1+\epsilon_0)^{-5(n-1)/2} e_{n-1}^{-1} \leq t_n - t_{n-1} \leq 100 (1+\epsilon_0)^{-5(n-1)/2} e_{n-1}^{-1}.
\end{equation}
\item[(viii)] (Transition state) For all $n_0 \leq n \leq N$, we have the bounds
\begin{align}
X_{1,n}(t_n) &= e_n \label{an-init} \\
|X_{2,n}(t_n)| &\leq 10^{-5} \eps e_n \label{bn-init}\\
|X_{3,n}(t_n)| &\leq 10^{-5} \exp( -K^{10} ) \eps^2 e_n \label{cn-init}\\
X_{3,n}(t_n) &\geq - (1+\epsilon_0)^{-n_0/4} e_n\label{cn-init-2}\\
|X_{4,n}(t_n)| &\leq K^{-10} e_{n} \label{dn-en} \\
E_{n-1}(t_n) &\leq K^{-20} e_n^2. \label{en1-init}
\end{align}
If $n_0 < n \leq N$, we have the additional bounds
\begin{align}
X_{2,n-1}(t_n) &\geq 10^{-5} \eps e_n \label{spin-1}\\
X_{2,n-1}(t_n) &\leq 10^{5} \eps e_n \label{spin-1a}\\
X_{3,n-1}(t_n) &\geq \exp( K^{9} ) \eps^2 e_n \label{spin-2}\\
X_{3,n-1}(t_n) &\leq \exp( K^{10} ) \eps^2 e_n.\label{spin-2a}
\end{align}
\item[(ix)] (Energy estimates) For all $n_0 < n \leq N$ and $t_{n-1} \leq t \leq t_n$, we have the bounds
\begin{align}
E_{n-m}(t) &\leq K^{-10} (1+\epsilon_0)^{m/10} e_{n-1}^2 \hbox{ for all } m \geq 2 \label{before-en} \\
E_{n-1}(t)+E_n(t) &\leq e_{n-1}^2 \label{during-en} \\
E_{n+m}(t) &\leq K^{-30} (1+\epsilon_0)^{-10m} e_{n-1}^2 \hbox{ for all } m \geq 1 \label{after-en}
\end{align}
\end{itemize}
\end{proposition}
These bounds may appear somewhat complicated, but roughly speaking they assert that at each time $t_n$, the solution concentrates an important part of its energy at scale $n$ (and significantly less energy at adjacent scales); see Table \ref{energy-table} and Figure \ref{energy-figure}. The precise bounds here do have to be chosen carefully, because of a rather intricate induction argument in which the estimates for a given value of $N$ are used to prove the estimates for $N+1$. For this reason, no use of the asymptotic notation $O()$ appears in the above proposition. Of the four modes $X_{1,n}, X_{2,n}, X_{3,n}, X_{4,n}$, it is the first mode $X_{1,n}$ that carries most of the energy at the checkpoint time $t_n$; the secondary modes $X_{2,n}, X_{3,n}$ play an important role in driving the dynamics (and so many of the more technical bounds in (viii) are devoted to controlling these modes) but carry\footnote{As a crude first approximation (ignoring factors depending on $K$), one should think of $X_{2,n}$ as being about $\eps$ the size of $X_{1,n}$ or $X_{4,n}$, and $X_{3,n}$ being about $\eps^2$ the size of $X_{1,n}$ or $X_{4,n}$.} very little energy, while the $X_{4,n}$ mode is only used as a conduit to transfer energy from the $X_{1,n}$ mode to the $X_{1,n+1}$ mode. The bounds \eqref{spin-1}-\eqref{spin-2a} are technical; they are needed to ensure that the rotor at scale $n-1$ is rotating so quickly that the modes at scale $n-1$ do not cause any ``constructive interference'' with the modes at scale $n$ at time $t_n$ (or at slightly later times).
\begin{table}
\centering
\caption{Upper bounds for energies at scales close to $N$, at times close to $t_N$. Note that at time $t_N$, the energy is locally concentrated at scale $N$, but transitions as $t_N \leq t \leq t_{N+1}$ to a state at time $t=t_{N+1}$ at which the energy is locally concentrated at scale $N+1$.}
\label{energy-table}
\begin{tabular}{lllll}
\toprule
Energy & $t_{N-1} < t < t_N$ & $t=t_N$ & $t_N < t < t_{N+1}$ & $t=t_{N+1}$ \\
\midrule
$E_{N-2}$ & $K^{-10} (1+\epsilon_0)^{2/10} e_{N-1}^2$ & $K^{-10} (1+\epsilon_0)^{2/10} e_{N-1}^2$ & $K^{-10} (1+\epsilon_0)^{3/10} e_{N}^2$ & $K^{-10} (1+\epsilon_0)^{3/10} e_{N}^2$ \\
$E_{N-1}$ & $e_{N-1}^2$ & $K^{-20} e_N^2$ & $K^{-10} (1+\epsilon_0)^{2/10} e_{N}^2$ & $K^{-10} (1+\epsilon_0)^{2/10} e_{N}^2$ \\
$E_N$ & $e_{N-1}^2$ & $(\frac{1}{2} + O(K^{-20}))e_N^2$ & $e_N^2$ & $K^{-20} e_{N+1}^2$ \\
$E_{N+1}$ & $K^{-30} (1+\epsilon_0)^{-10} e_{N-1}^2$ & $K^{-30} (1+\epsilon_0)^{-10} e_{N-1}^2$ & $e_N^2$ & $(\frac{1}{2} + O(K^{-20})) e_{N+1}^2$\\
$E_{N+2}$ & $K^{-30} (1+\epsilon_0)^{-20} e_{N-1}^2$ & $K^{-30} (1+\epsilon_0)^{-20} e_{N-1}^2$ & $K^{-30} (1+\epsilon_0)^{-10} e_{N}^2$ & $K^{-30} (1+\epsilon_0)^{-10} e_{N}^2$\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure} [t]
\centering
\includegraphics{./energy-blowup.png}
\caption{A schematic description (not entirely drawn to scale) of the dynamics of the energies $E_n$ for $n$ close to $N$ and times close to $t_N$. Observe that the energy is concentrated at a single scale $N$ for a lengthy time interval (most of $[t_N,t_{N+1}]$), but then abruptly transfers almost all of its energy (minus some losses due to dissipation and interaction with other scales) to the next finer scale $N+1$ by the time $t_{N+1}$. The length of the time intervals $[t_N,t_{N+1}]$ decreases geometrically, leading to blowup at some finite time $T$.}
\label{energy-figure}
\end{figure}
Let us now see how the above proposition implies Theorem \ref{ood2} (and hence Theorem \ref{ood}, Theorem \ref{blowup} and Theorem \ref{main}). Let $N \geq n_0$ be arbitrary. From \eqref{e-init}, \eqref{en-stable}, \eqref{lifespan} we have
$$ t_n - t_{n-1} \lesssim (1+\epsilon_0)^{-(\frac{5}{2}-\frac{1}{100}) n} $$
and hence by \eqref{t-init} and summing the geometric series we have
$$ t_N \leq T$$
for some finite $T = T_{\epsilon_0}$ independent of $N$. On the other hand, from \eqref{an-init}, \eqref{en-stable}, \eqref{e-init} we have
$$ |X_{1,N}(t_N)| \geq (1+\epsilon_0)^{-N/100}$$
and hence
$$ \sup_{0 \leq t\leq T} \sup_{n \in\Z} \sup_{i=1,\dots,4} (1 + (1+\epsilon_0)^{10n}) |X_{i,n}(t)| \geq (1+\epsilon_0)^{9N}$$
for any $N$. Sending $N$ to infinity, we contradict \eqref{many}.
\subsection{Third step: setting up the induction}
It remains to prove Proposition \ref{blowdyn}. We do so by an induction on $N$. The base case $N=n_0$ is easy: one sets $t_{n_0}=0$ and $e_{n_0}=1$, and all the required claims are either vacuously true or follow immediately from the initial conditions \eqref{initial-cond}. It remains to establish the inductive case of this proposition. For the convenience of the reader, we state this inductive case as an explicit proposition.
\begin{proposition}[Blowup dynamics, inductive case]\label{blowdyn-induct} Let the hypotheses and notation be as in Theorem \ref{ood2}, and suppose for contradiction that we may find continuously differentiable functions $X_{i,n}:[0,+\infty) \to \R$ and $E_n: [0,+\infty) \to [0,+\infty)$ with the stated properties \eqref{many}-\eqref{nomode}.
Assume that Proposition \ref{blowdyn} has already been established for some $N \geq n_0$, giving times
$$ 0 \leq t_{n_0} < t_{n_0+1} < \dots < t_N < \infty$$
and energies
$$ e_{n_0}, \dots, e_N > 0$$
with the properties \eqref{t-init}-\eqref{after-en} stated in that proposition. Then there exists a time
$$ t_N < t_{N+1} < \infty$$
and an amplitude $e_{N+1} > 0$ obeying the following properties:
\begin{itemize}
\item[(vii')] (Scale evolution) One has the amplitude stability
\begin{equation}\label{en-stable-induct}
(1+\epsilon_0)^{-1/100} e_N \leq e_{N+1} \leq (1+\epsilon_0)^{1/100} e_N
\end{equation}
and the lifespan bound
\begin{equation}\label{lifespan-induct}
\frac{1}{100} (1+\epsilon_0)^{-5N/2} e_N^{-1} \leq t_{N+1} - t_N \leq 100 (1+\epsilon_0)^{-5N/2} e_N^{-1}.
\end{equation}
\item[(viii')] (Transition state) We have the bounds
\begin{align}
X_{1,N+1}(t_{N+1}) &= e_{N+1} \label{an-init-induct} \\
|X_{2,N+1}(t_{N+1})| &\leq 10^{-5} \eps e_{N+1} \label{bn-init-induct}\\
|X_{3,N+1}(t_{N+1})| &\leq 10^{-5} \exp( -K^{10} ) \eps^2 e_{N+1} \label{cn-init-induct}\\
X_{3,N+1}(t_{N+1}) &\geq - (1+\epsilon_0)^{-n_0/4} e_{N+1} \label{cn-init-induct-2}\\
|X_{4,N+1}(t_{N+1})| &\leq K^{-10} e_{N+1} \label{dn-en-induct} \\
E_N(t_{N+1}) &\leq K^{-20} e_{N+1}^2 \label{en1-init-induct}\\
X_{2,N}(t_{N+1}) &\geq 10^{-5} \eps e_{N+1}\label{spin-1-induct}\\
X_{2,N}(t_{N+1}) &\leq 10^{5} \eps e_{N+1}\label{spin-1a-induct}\\
X_{3,N}(t_{N+1}) &\geq \exp( K^{9} ) \eps^2 e_{N+1} \label{spin-2-induct}\\
X_{3,N}(t_{N+1}) &\leq \exp( K^{10} ) \eps^2 e_{N+1}.\label{spin-2a-induct}
\end{align}
\item[(ix')] (Energy estimates) For all $t_N \leq t \leq t_{N+1}$, we have the bounds
\begin{align}
E_{N+1-m}(t) &\leq K^{-10} (1+\epsilon_0)^{m/10} e_N^2 \hbox{ for all } m \geq 2 \label{before-en-induct} \\
E_{N}(t)+E_{N+1}(t) &\leq e_{N}^2 \label{during-en-induct} \\
E_{N+1+m}(t) &\leq K^{-30} (1+\epsilon_0)^{-10m} e_N^2 \hbox{ for all } m \geq 1 \label{after-en-induct}
\end{align}
\end{itemize}
\end{proposition}
Clearly, Proposition \ref{blowdyn-induct} implies Proposition \ref{blowdyn} (and hence Theorems \ref{ood2}, \ref{ood}, \ref{blowup} and \ref{main}).
Roughly speaking, the situation is as follows. At time $t_N$, the solution $(X_{i,n})_{i=1,\dots,4; n \in \Z}$ has a large amount of energy at a single mode $X_{1,N}$ at scale $N$ (thanks to \eqref{an-init}) and small amounts of energy at nearby modes and scales (thanks to \eqref{bn-init}, \eqref{cn-init}, \eqref{dn-en}, \eqref{en1-init}, \eqref{before-en}, \eqref{after-en}). We wish to run the evolution forward to a later time $t_{N+1}$ (which can be approximately determined using \eqref{lifespan-induct}) for which the energy near scale $N$ has now largely transitioned to the $X_{1,N+1}$ mode (see \eqref{an-init-induct}, \eqref{en-stable-induct}), but with little energy at nearby modes and scales (see \eqref{bn-init-induct}, \eqref{cn-init-induct}, \eqref{en1-init-induct}, \eqref{before-en-induct}, \eqref{dn-en-induct}, \eqref{after-en-induct}). In particular, the transition of energy to the $X_{1,N+1}$ mode needs to be so abrupt that no significant amount of energy leaks into the $N+2$ modes yet (see \eqref{after-en-induct}). To establish this, we shall first show (under a bootstrap hypothesis) that all scales other\footnote{This is an oversimplification; one has to take some care to also exclude the possibility of ``constructive interference'' between the $N-1$ and $N$ modes, but this is a technical issue of secondary importance.} than the $N$ and $N+1$ scales are under control, thus largely reducing matters to the study of the dynamics between the $N$ and $N+1$ modes, at which point one can run the analysis used to establish Theorem \ref{daet}.
\subsection{Fourth step: renormalising the dynamics}
It is convenient to perform a rescaling to essentially eliminate the role of the time $t_N$, the energy $e_N$, and the scale $(1+\epsilon_0)^{-N}$, in order to make the dynamics closely resemble those in Theorem \ref{daet}.
More precisely, Proposition \ref{blowdyn-induct} rescales as follows.
\begin{proposition}[Rescaled inductive step]\label{induct-rescale} Let $0 < \epsilon_0 < 1$, let $K>0$ be sufficiently large depending on $\epsilon_0$, let $\eps>0$ be sufficiently small depending on $\epsilon_0,K$, and let $n_0$ be sufficiently large depending on $\epsilon_0, K, \eps$, and the implied constants in \eqref{axin-eq-resc}-\eqref{dxin-eq-resc}, \eqref{cain}, \eqref{tau-stable} below. Let $N \geq n_0$, and suppose we have rescaled times
$$\tau_{n_0-N} < \dots < \tau_0 = 0$$
and continuously differentiable functions $a_k,b_k,c_k,d_k: [\tau_{n_0-N},+\infty) \to \R$ and $\tilde E_k: [\tau_{n_0-N},+\infty) \to [0,+\infty)$ obeying the following properties:
\begin{itemize}
\item[(i)] (A priori regularity) We have
\begin{equation}\label{many-norm}
\sup_{\tau_{n_0-N} \leq t\leq T} \sup_{k \in\Z} \left(1 + (1+\epsilon_0)^{10k}\right) (|a_k(t)|+|b_k(t)|+|c_k(t)|+|d_k(t)|) < \infty
\end{equation}
and
\begin{equation}\label{many-norm-2}
\sup_{\tau_{n_0-N} \leq t\leq T} \sup_{k \in\Z} \left(1 + (1+\epsilon_0)^{10k}\right) \tilde E_k(t)^{1/2} < \infty,
\end{equation}
for any $T > \tau_{n_0-N}$.
\item[(ii)] (Equations of motion) One has
\begin{align}
\partial_t a_k &= (1+\epsilon_0)^{5k/2} \left(- \eps^{-2} c_k d_k - \eps a_k b_k - \eps^2 \exp(-K^{10}) a_k c_k + K d_{k-1}^2 \right) \nonumber \\
&\quad\quad+ O\left( (1+\epsilon_0)^{2k - n_0/2} \tilde E_k^{1/2} \right) \label{axin-eq-resc} \\
\partial_t b_k &= (1+\epsilon_0)^{5k/2} \left(\eps a_k^2 - \eps^{-1} K^{10} c_k^2 \right) + O\left( (1+\epsilon_0)^{2k - n_0/2} \tilde E_k^{1/2} \right) \label{bxin-eq-resc} \\
\partial_t c_k &= (1+\epsilon_0)^{5k/2} \left(\eps^2 \exp(-K^{10}) a_k^2 + \eps^{-1} K^{10} b_k c_k \right) + O\left( (1+\epsilon_0)^{2k - n_0/2} \tilde E_k^{1/2} \right) \label{cxin-eq-resc} \\
\partial_t d_k &= (1+\epsilon_0)^{5k/2} \left(\eps^{-2} c_k a_k - (1+\epsilon_0)^{5/2} K d_k a_{k+1} \right) + O\left( (1+\epsilon_0)^{2k - n_0/2} \tilde E_k^{1/2} \right) \label{dxin-eq-resc}
\end{align}
and
\begin{equation}\label{lei}
\partial_t \tilde E_k \leq K (1+\epsilon_0)^{5k/2} ( d_{k-1}^2 a_k - (1+\epsilon_0)^{5/2} d_k^2 a_{k+1} )
\end{equation}
for all $k \in \Z$ and $t \geq \tau_{n_0-N}$.
\item[(iii)] (Initial conditions) One has
\begin{equation}\label{nomad}
a_k(\tau_{n_0-N})=b_k(\tau_{n_0-N})=c_k(\tau_{n_0-N})=d_k(\tau_{n_0-N})=\tilde E_k(\tau_{n_0-N})=0
\end{equation}
whenever $k > n_0-N$.
\item[(iv)] (Energy defect) One has
\begin{equation}\label{cain}
\begin{split}
\frac{1}{2} \left(a_k^2(t)+b_k^2(t)+c_k^2(t)+d_k^2(t)\right) &\leq \tilde E_k(t)\\
& \leq \frac{1}{2} \left(a_k^2(t)+b_k^2(t)+c_k^2(t)+d_k^2(t)\right)\\
&\quad\quad + O\left( (1+\epsilon_0)^{2k-n_0/2} \int_{\tau_{n_0-N}}^t \tilde E_k(t')\ dt' \right)
\end{split}
\end{equation}
whenever $k \in\Z$ and $t \geq \tau_{n_0-N}$.
\item[(v)] (No very low frequencies) One has
\begin{equation}\label{nomodo}
a_k(t)=b_k(t)=c_k(t)=d_k(t)=\tilde E_k(t)=0
\end{equation}
whenever $k < n_0-N$ and $t \geq \tau_{n_0-N}$.
\item[(vii)] (Scale evolution) One has
\begin{equation}\label{tau-stable}
-O\left((1+\epsilon_0)^{(\frac{5}{2}+\frac{1}{100})|k|}\right) \leq \tau_k \leq 0
\end{equation}
for any $n_0-N \leq k \leq 0$.
\item[(viii)] (Transition state) We have
\begin{align}
a_0(0) &= 1 \label{an-init-rescaled}\\
|b_0(0)| &\leq 10^{-5} \eps \label{bn-init-rescaled}\\
|c_0(0)| &\leq 10^{-5} \exp(-K^{10}) \eps^2 \label{cn-init-rescaled}\\
c_0(0) &\geq - (1+\epsilon_0)^{-n_0/4} \label{cn-init-rescaleda}\\
|d_0(0)| &\leq K^{-10} \label{dn-init-rescaled}\\
\tilde E_{-1}(0) &\leq K^{-20} \label{en1-init-rescaled}
\end{align}
and if $N > n_0$ we have the additional bounds
\begin{align}
b_{-1}(0) &\geq 10^{-5} \eps \label{spin-1-scaled}\\
b_{-1}(0) &\leq 10^{5} \eps \label{spin-1a-scaled}\\
c_{-1}(0) &\geq \exp( K^{9} ) \eps^2 \label{spin-2-scaled}\\
c_{-1}(0) &\leq \exp( K^{10} ) \eps^2.\label{spin-2a-scaled}
\end{align}
\item[(ix)] (Energy estimates) We have
\begin{align}
\tilde E_{k-m}(t) &\leq K^{-10} (1+\epsilon_0)^{m/10 + |k-1|/50}\hbox{ for all } m \geq 2 \label{before-en-rescaled} \\
\tilde E_{k-1}(t)+\tilde E_k(t) &\leq (1+\epsilon_0)^{|k-1|/50} \label{during-en-rescaled} \\
\tilde E_{k+m}(t) &\leq K^{-30} (1+\epsilon_0)^{-10m + |k-1|/50} \hbox{ for all } m \geq 1 \label{after-en-rescaled}
\end{align}
whenever $n_0-N < k\leq 0$ and $\tau_{k-1} \leq t \leq \tau_k$.
\end{itemize}
Then there exists a time
\begin{equation}\label{lifespan-rescaled}
\frac{1}{100} \leq \tau_1 \leq 100
\end{equation}
and an amplitude
\begin{equation}\label{en-stable-rescaled}
(1+\epsilon_0)^{-1/100} \leq \mu_1 \leq (1+\epsilon_0)^{1/100}
\end{equation}
such that we have the bounds
\begin{align}
a_1(\tau_1) &= \mu_1 \label{an-init-next} \\
|b_1(\tau_1)| &\leq 10^{-5} \eps \mu_1 \label{bn-init-next}\\
|c_1(\tau_1)| &\leq 10^{-5} \exp( -K^{10} ) \eps^2 \mu_1 \label{cn-init-next}\\
c_1(\tau_1) &\geq - (1+\epsilon_0)^{-n_0/4} \mu_1 \label{cn-init-rescaleda-next}\\
|d_1(\tau_1)| &\leq K^{-10} \mu_1 \label{dn-en-next} \\
\tilde E_0(\tau_1) &\leq K^{-20} \mu_1^2 \label{en1-init-next}\\
b_0(\tau_1) &\geq 10^{-5} \eps \mu_1 \label{spin-1-next}\\
b_0(\tau_1) &\leq 10^{5} \eps \mu_1 \label{spin-1a-next}\\
c_0(\tau_1) &\geq \exp( K^{9} ) \eps^2 \mu_1\label{spin-2-next}\\
c_0(\tau_1) &\leq \exp( K^{10} ) \eps^2 \mu_1\label{spin-2a-next}
\end{align}
and
\begin{align}
\tilde E_{1-m}(t) &\leq K^{-10} (1+\epsilon_0)^{m/10} \hbox{ for all } m \geq 2 \label{before-en-next} \\
\tilde E_{0}(t)+\tilde E_{1}(t) &\leq 1 \label{during-en-next} \\
\tilde E_{1+m}(t) &\leq K^{-30} (1+\epsilon_0)^{-10m} \hbox{ for all } m \geq 1 \label{after-en-next}
\end{align}
for all $0 \leq t \leq \tau_1$.
\end{proposition}
\begin{remark} The dynamics \eqref{axin-eq-resc}-\eqref{dxin-eq-resc} are depicted in Figure \ref{fig:circ2} (with the dissipative terms ignored). Note how the rescaling has placed the tiny factor of $(1+\epsilon_0)^{-n_0/2}$ in front of all the viscosity terms in \eqref{axin-eq-resc}-\eqref{dxin-eq-resc}, thus highlighting the lower-order nature of these terms for our analysis. This small factor ultimately reflects the supercritical nature of the dissipation; in practice, it will allow us to treat all dissipative terms as negligible.
\end{remark}
\begin{figure} [t]
\centering
\includegraphics{./circuit-2.png}
\caption[Rescaled circuits]{A portion of the rescaled dynamics \eqref{axin-eq-resc}-\eqref{dxin-eq-resc}, again ignoring the dissipation terms. The factors of $(1+\epsilon_0)$ here play no significant role and may be ignored at a first reading.}
\label{fig:circ2}
\end{figure}
Let us now explain why Proposition \ref{induct-rescale} implies Proposition \ref{blowdyn-induct} (and hence Proposition \ref{blowdyn} and Theorems \ref{ood2}, \ref{ood}, \ref{blowup} and \ref{main}). Let the notation and hypotheses be as in Proposition \ref{blowdyn-induct}. We then define the rescaled times
\begin{equation}\label{taudef}
\tau_k := (1+\epsilon_0)^{5N/2} e_N (t_{N+k} - t_N)
\end{equation}
and rescaled amplitudes
\begin{equation}\label{mudef}
\mu_k := e_N^{-1} e_{N+k}
\end{equation}
for $n_0-N \leq k \leq 0$, as well as the rescaled solutions
\begin{align*}
a_k(t) &:= e_N^{-1} X_{1,N+k}\left( t_N + (1+\epsilon_0)^{-5N/2} e_N^{-1} t \right) \\
b_k(t) &:= e_N^{-1} X_{2,N+k}\left( t_N + (1+\epsilon_0)^{-5N/2} e_N^{-1} t \right) \\
c_k(t) &:= e_N^{-1} X_{3,N+k}\left( t_N + (1+\epsilon_0)^{-5N/2} e_N^{-1} t \right) \\
d_k(t) &:= e_N^{-1} X_{4,N+k}\left( t_N + (1+\epsilon_0)^{-5N/2} e_N^{-1} t \right)
\end{align*}
and rescaled energies
\begin{align*}
\tilde E_k(t) &:= e_N^{-2} E_{N+k}\left( t_N + (1+\epsilon_0)^{-5N/2} e_N^{-1} t \right)
\end{align*}
for $k \in \Z$ and $t \geq \tau_{n_0-N}$. Under this rescaling, the \emph{a priori} regularity \eqref{many-norm}, \eqref{many-norm-2} follows from \eqref{many}, \eqref{many-2}. The bound
\begin{equation}\label{mu-stable}
(1+\epsilon_0)^{-|k|/100} \leq \mu_{k} \leq (1+\epsilon_0)^{|k|/100}
\end{equation}
for $n_0-N \leq k \leq 0$ follows from \eqref{en-stable} and \eqref{mudef}; from this, \eqref{lifespan}, \eqref{taudef} and summing the geometric series we then obtain \eqref{tau-stable} (recall that we allow implied constants in the $\lesssim$ or $O()$ notation to depend on $\epsilon_0$).
If we directly rescale \eqref{axin-eq}-\eqref{dxin-eq}, we obtain \eqref{axin-eq-resc}-\eqref{dxin-eq-resc}, except with the factors $(1+\epsilon_0)^{2k-n_0/2}$ replaced by
$(1+\epsilon_0)^{2k - N/2} e_N^{-1}$. However, from \eqref{en-stable}, \eqref{e-init} we have
$$ e_N^{-1} \leq (1+\epsilon_0)^{(N-n_0)/100}$$
and hence
$$ (1+\epsilon_0)^{2k - N/2} e_N^{-1} \leq (1+\epsilon_0)^{2k-n_0/2}.$$
This gives the equations of motion \eqref{axin-eq-resc}-\eqref{dxin-eq-resc}. The energy inequality \eqref{lei} is similarly obtained from rescaling \eqref{lei-0}.
The initial conditions \eqref{nomad} follow from rescaling \eqref{initial-cond} (and also using \eqref{t-init}). Similarly, \eqref{cain} follows from rescaling \eqref{ein-sum}, and \eqref{nomodo} follows from rescaling \eqref{nomode}. Similarly, the conditions \eqref{an-init-rescaled}-\eqref{spin-2a-scaled} follow from rescaling \eqref{an-init}-\eqref{spin-2a}. Finally, \eqref{before-en-rescaled}-\eqref{after-en-rescaled} follow from rescaling \eqref{before-en}-\eqref{after-en} and using \eqref{mu-stable}. We then apply Proposition \ref{induct-rescale} to obtain $\tau_1, \mu_1$ with the stated properties \eqref{lifespan-rescaled}-\eqref{after-en-next}. It is then routine to verify that the conclusions of Proposition \ref{blowdyn-induct} are satisfied with
$$ t_{N+1} := t_N + (1+\epsilon_0)^{-5N/2} e_N^{-1} \tau_1$$
and
$$ e_{N+1} := \mu_1 e_N.$$
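For instance, unpacking the definition of $t_{N+1}$ above, the bound \eqref{lifespan-rescaled} on $\tau_1$ becomes
$$ \frac{1}{100} (1+\epsilon_0)^{-5N/2} e_N^{-1} \leq t_{N+1} - t_N \leq 100 (1+\epsilon_0)^{-5N/2} e_N^{-1},$$
while \eqref{en-stable-rescaled} gives $(1+\epsilon_0)^{-1/100} e_N \leq e_{N+1} \leq (1+\epsilon_0)^{1/100} e_N$; these are the one-step forms of the bounds required by \eqref{lifespan} and \eqref{en-stable} at the next stage (compare \eqref{mu-stable}).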
For future reference, we record one consequence of the energy estimates \eqref{before-en-rescaled}-\eqref{after-en-rescaled}:
\begin{lemma}[Cumulative energy bound]\label{cumeng} For any integer $m = O(1)$, one has
$$ \int_{\tau_{n_0-N}}^0 \tilde E_m(t)\ dt \lesssim 1.$$
(The implied constant here may depend on $m$.)
\end{lemma}
\begin{proof}
From \eqref{during-en-rescaled}, \eqref{after-en-rescaled} we have
$$ \tilde E_m(t) \lesssim (1+\epsilon_0)^{10k + |k|/50} $$
whenever $\tau_{k-1} \leq t \leq \tau_k$ and $n_0-N < k \leq 0$, and so
$$ \int_{\tau_{n_0-N}}^0 \tilde E_m(t)\ dt \lesssim \sum_{n_0-N < k \leq 0} (1+\epsilon_0)^{10k + |k|/50} |\tau_{k-1}|.$$
Applying \eqref{tau-stable} and summing the geometric series, we obtain the claim.
\end{proof}
\subsection{Fifth step: crude energy estimates for distant modes}
We now begin the proof of Proposition \ref{induct-rescale}. For the rest of this section, we assume the notations and hypotheses are as in that proposition.
The first stage is to establish the energy bounds \eqref{before-en-next}, \eqref{during-en-next}, \eqref{after-en-next} (and also the bound \eqref{dn-en-next}) on a certain time interval $0 \leq t \leq T_1$; the quantity $\tau_1$ will later be chosen between $0$ and $T_1$, thus establishing the required bounds \eqref{before-en-next}, \eqref{during-en-next}, \eqref{after-en-next}, \eqref{dn-en-next} for $0 \leq t \leq \tau_1$.
We first establish bounds at time $t=0$ that are slightly better than the required bounds \eqref{before-en-next}, \eqref{during-en-next}, \eqref{after-en-next}, \eqref{dn-en-next}.
\begin{lemma}[Initial bounds]\label{prelim} Let the notation and assumptions be as in Proposition \ref{induct-rescale}. Then we have
\begin{align}
\tilde E_{1-m}(0) &\leq (1+\epsilon_0)^{-0.08} K^{-10} (1+\epsilon_0)^{m/10} \hbox{ for all } m \geq 2 \label{before-en-0} \\
\tilde E_{0}(0)+\tilde E_{1}(0) &\leq 0.6 \label{during-en-0} \\
\tilde E_{1+m}(0) &\leq (1+\epsilon_0)^{-9.98} K^{-30} (1+\epsilon_0)^{-10m} \hbox{ for all } m \geq 1 \label{after-en-0}\\
|d_1(0)| &\leq \sqrt{2} K^{-15}. \label{dn-en-0}
\end{align}
\end{lemma}
\begin{proof}
From \eqref{before-en-rescaled}, \eqref{after-en-rescaled} for $k=0$ and $t=0$, we have
\begin{equation}\label{before-1}
\tilde E_{-m}(0) \leq K^{-10} (1+\epsilon_0)^{m/10 + 1/50} \hbox{ for all } m \geq 2
\end{equation}
and
\begin{equation}\label{after-1}
\tilde E_{m}(0) \leq K^{-30} (1+\epsilon_0)^{-10m + 1/50} \hbox{ for all } m \geq 1
\end{equation}
which implies the claims \eqref{before-en-0} for all $m \geq 3$ and \eqref{after-en-0} for $m \geq 1$, after shifting $m$ by one. The claim \eqref{before-en-0} for $m=2$ follows from \eqref{en1-init-rescaled} (since $K$ is large depending on $\epsilon_0$). Also, from \eqref{after-1} we have
\begin{equation}\label{half}
\tilde E_1(0) \leq K^{-30};
\end{equation}
the claim \eqref{dn-en-0} then follows from \eqref{cain}.
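(In more detail: granting, as in Lemma \ref{primary} below, that \eqref{cain} identifies $\tilde E_1(0)$ with $\frac{1}{2}\left(a_1(0)^2+b_1(0)^2+c_1(0)^2+d_1(0)^2\right)$ up to $O((1+\epsilon_0)^{-n_0/2})$ corrections, \eqref{half} gives
$$ d_1(0)^2 \leq 2 \tilde E_1(0) + O\left( (1+\epsilon_0)^{-n_0/2} \right) \leq 2 K^{-30} + O\left( (1+\epsilon_0)^{-n_0/2} \right), $$
and \eqref{dn-en-0} follows since $n_0$ is large depending on $K$.)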
It remains to establish \eqref{during-en-0}. From \eqref{cain} one has
$$ \tilde E_0(0) \leq \frac{1}{2} \left(a_0^2(0)+b_0^2(0)+c_0^2(0)+d_0^2(0)\right) + O\left( (1+\epsilon_0)^{-n_0/2} \int_{\tau_{n_0-N}}^0 \tilde E_0(t)\ dt \right).$$
From \eqref{an-init-rescaled}-\eqref{dn-init-rescaled} we have
$$ \frac{1}{2} (a_0^2(0)+b_0^2(0)+c_0^2(0)+d_0^2(0)) = 0.5 + O( K^{-10} ).$$
Applying Lemma \ref{cumeng}, and recalling that $K$ and $n_0$ are assumed sufficiently large, the claim \eqref{during-en-0} follows.
\end{proof}
We now define $T_1$ to be the largest time in $[0,100]$ for which one has the bounds
\begin{align}
\tilde E_{1-m}(t) &\leq K^{-10} (1+\epsilon_0)^{m/10} \hbox{ for all } m \geq 2 \label{before-en-next'} \\
\tilde E_{0}(t)+\tilde E_{1}(t) &\leq 1 \label{during-en-next'} \\
\tilde E_{1+m}(t) &\leq K^{-30} (1+\epsilon_0)^{-10m} \hbox{ for all } m \geq 1 \label{after-en-next'}\\
|d_1(t)| &\leq \frac{1}{2} K^{-10}. \label{dn-en-next'}
\end{align}
for all $0 \leq t \leq T_1$. Lemma \ref{prelim} ensures that $T_1$ is well-defined (note that all the conditions here are closed conditions in $t$).
We record a variant of the arguments in Lemma \ref{prelim} that will be needed later:
\begin{lemma}[Almost all energy in primary modes]\label{primary} For any $k=-1,0,1$ and $0 \leq t \leq T_1$, one has
$$ \tilde E_k(t) = \frac{1}{2} \left(a_k(t)^2 + b_k(t)^2 + c_k(t)^2 + d_k(t)^2 \right) + O\left( (1+\epsilon_0)^{-n_0/2} \right).$$
\end{lemma}
\begin{proof} By \eqref{cain} it suffices to show that
$$ \int_{\tau_{n_0-N}}^{T_1} \tilde E_k(t)\ dt \lesssim 1.$$
The portion of the integral with $0 \leq t \leq T_1$ is controlled by \eqref{before-en-next'}, \eqref{during-en-next'}, and the trivial bound $T_1 \leq 100$. The claim now follows from Lemma \ref{cumeng}.
\end{proof}
The bounds \eqref{before-en-next'}-\eqref{dn-en-next'} look like an infinite number of conditions, but note from the qualitative decay property \eqref{many-norm-2} that
$$
\sup_{0\leq t \leq 100} \sup_{k \in\Z} (1 + (1+\epsilon_0)^{20k}) \tilde E_k(t) < \infty,
$$
which implies that the bounds \eqref{before-en-next'}, \eqref{after-en-next'} are automatically satisfied for all $m \geq M$ and some finite $M$ (independent of $T_1$). So there are really only a finite number of conditions in the definition of $T_1$. As $\tilde E_m$ and $d_1$ vary continuously in time, we conclude that either $T_1=100$, or that at least one of the inequalities \eqref{before-en-next'}-\eqref{dn-en-next'} is obeyed with equality (for some $m$) at time $t=T_1$. In particular, from Lemma \ref{prelim} we see that $T_1 \neq 0$, thus $0 < T_1 \leq 100$.
Now we use local energy estimates to rule out several of the ways in which one can ``exit'' the bounds \eqref{before-en-next'}-\eqref{dn-en-next'}.
\begin{lemma}[Closing some exits] We have
\begin{equation}\label{far-exit}
\tilde E_{1-m}(T_1) < K^{-10} (1+\epsilon_0)^{m/10} \hbox{ for all } m \geq 3
\end{equation}
and
\begin{equation}\label{far2-exit}
\tilde E_{1+m}(T_1) < K^{-30} (1+\epsilon_0)^{-10m} \hbox{ for all } m \geq 1
\end{equation}
and
\begin{equation}\label{near-exit}
\tilde E_{0}(T_1)+\tilde E_{1}(T_1) < 1.
\end{equation}
\end{lemma}
\begin{proof} Integrating \eqref{lei} on $[0,T_1]$, we conclude that
\begin{equation}\label{energy-ineq}
\tilde E_k(T_1) \leq \tilde E_k(0) + K (1+\epsilon_0)^{5k/2} \int_0^{T_1} d_{k-1}^2 a_k(t) - (1+\epsilon_0)^{5/2} d_k^2 a_{k+1}(t)\ dt
\end{equation}
for any $k \in \Z$; summing this for $k=0$ and $k=1$ and using \eqref{during-en-0}, we conclude the variant
\begin{equation}\label{energy-ineq2}
\tilde E_0(T_1) +\tilde E_1(T_1) \leq 0.6 + K \int_0^{T_1} d_{-1}^2 a_0(t) - (1+\epsilon_0)^{5} d_1^2 a_{2}(t)\ dt.
\end{equation}
From \eqref{before-en-next'}, \eqref{during-en-next'}, \eqref{after-en-next'} we have
$$
d_{-1}^2 a_0(t) - (1+\epsilon_0)^{5} d_1^2 a_{2}(t) = O( K^{-5} )$$
and hence from \eqref{energy-ineq2}
$$ \tilde E_0(T_1) +\tilde E_1(T_1) \leq 0.6 + O(K^{-4})$$
giving \eqref{near-exit} for $K$ large enough.
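(Concretely: \eqref{before-en-next'} with $m=2$ gives $\tilde E_{-1}(t) \leq K^{-10} (1+\epsilon_0)^{1/5}$, and hence $d_{-1}^2 = O(K^{-10})$; meanwhile $a_0 = O(1)$ by \eqref{during-en-next'}, $|d_1| \leq \frac{1}{2} K^{-10}$ by \eqref{dn-en-next'}, and $a_2^2 = O(K^{-30})$ by \eqref{after-en-next'}, where as in the proof of Lemma \ref{prelim} individual modes are controlled by the corresponding local energies up to negligible corrections. The left-hand side above is thus in fact $O(K^{-10})$.)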
Now suppose that $m \geq 3$. From \eqref{energy-ineq} with $k=1-m$ and \eqref{before-en-0}, we have
\begin{align*}
\tilde E_{1-m}(T_1) &\leq (1+\epsilon_0)^{-0.08} K^{-10} (1+\epsilon_0)^{m/10} \\
&\quad + O\left( K (1+\epsilon_0)^{-5m/2} \int_0^{T_1} |d_{-m}|^2 |a_{1-m}|(t) + |d_{1-m}|^2 |a_{2-m}(t)|\ dt \right).
\end{align*}
From \eqref{before-en-next'} (and now using the hypothesis $m \geq 3$) we have
$$ |d_{-m}(t)|^2 |a_{1-m}(t)| + |d_{1-m}(t)|^2 |a_{2-m}(t)| \lesssim K^{-15} (1+\epsilon_0)^{3m/20}$$
for $0 \leq t \leq T_1$, and so (since $T_1 \leq 100$)
$$
\tilde E_{1-m}(T_1) \leq K^{-10} (1+\epsilon_0)^{m/10} \left((1+\epsilon_0)^{-0.09} + O\left( K^{-4} (1+\epsilon_0)^{-(\frac{5}{2}-\frac{1}{20})m}\right)\right)$$
and \eqref{far-exit} follows (assuming $K$ large enough).
Similarly, if $m \geq 1$, we may apply \eqref{energy-ineq} with $k=1+m$ and use \eqref{after-en-0} to obtain
$$
\tilde E_{1+m}(T_1) \leq (1+\epsilon_0)^{-9.98} K^{-30} (1+\epsilon_0)^{-10m} + O\left( K (1+\epsilon_0)^{5m/2} \int_0^{T_1} |d_{m}(t)|^2 |a_{m+1}(t)| + |d_{m+1}(t)|^2 |a_{m+2}(t)|\ dt\right).$$
From \eqref{after-en-next'} (and \eqref{dn-en-next'} when $m=1$) we have
$$ |d_{m}(t)|^2 |a_{m+1}(t)| + |d_{m+1}(t)|^2 |a_{m+2}(t)| \lesssim K^{-35} (1+\epsilon_0)^{-15m} $$
for $0 \leq t \leq T_1$, and thus
$$
\tilde E_{1+m}(T_1) \leq K^{-30} (1+\epsilon_0)^{-10m} \left((1+\epsilon_0)^{-9.98} + O\left( K^{-4} (1+\epsilon_0)^{-5m/2} \right) \right)$$
and \eqref{far2-exit} follows (assuming $K$ large enough).
\end{proof}
From this lemma and the previous discussion, we have some partial control on how we exit the regime:
\begin{corollary}[Exit trichotomy]\label{exit} At least one of the following assertions hold:
\begin{itemize}
\item (Backwards flow of energy) We have
\begin{equation}\label{exit-1}
\tilde E_{-1}(T_1) = K^{-10} (1+\epsilon_0)^{2/10}.
\end{equation}
\item (Forwards flow of energy) We have
\begin{equation}\label{exit-2}
|d_1(T_1)| = \frac{1}{2} K^{-10}.
\end{equation}
\item (Running out the clock) We have
\begin{equation}\label{exit-3}
T_1 = 100.
\end{equation}
\end{itemize}
\end{corollary}
Although we will not need this fact here, it turns out (using a refinement of the analysis below) that it is option \eqref{exit-2} which actually occurs in this trichotomy.
Thanks to \eqref{before-en-next'}-\eqref{dn-en-next'}, the task of proving Proposition \ref{induct-rescale} has now reduced to the following claim:
\begin{proposition}[Reduced induction claim]\label{reduced-claim} Let the notation and hypotheses be as in Proposition \ref{induct-rescale}, and let $T_1$ be defined as above. There exists a time
\begin{equation}\label{lifespan-rescaled-2}
\frac{1}{100} \leq \tau_1 \leq T_1
\end{equation}
(in particular, $T_1 \geq 1/100$) and an amplitude
\begin{equation}\label{en-stable-rescaled-2}
(1+\epsilon_0)^{-1/100} \leq \mu_1 \leq (1+\epsilon_0)^{1/100}
\end{equation}
such that we have the bounds
\begin{align}
a_1(\tau_1) &= \mu_1 \label{an-init-next-2} \\
|b_1(\tau_1)| &\leq 10^{-5} \eps \mu_1 \label{bn-init-next-2}\\
|c_1(\tau_1)| &\leq 10^{-5} \exp( -K^{10} ) \eps^2 \mu_1 \label{cn-init-next-2}\\
c_1(\tau_1) &\geq - (1+\epsilon_0)^{-n_0/4} \mu_1 \label{cn-init-next-2a}\\
\tilde E_0(\tau_1) &\leq K^{-20} \mu_1^2 \label{en1-init-next-2}\\
b_0(\tau_1) &\geq 10^{-5} \eps \mu_1 \label{spin-1-next-2}\\
b_0(\tau_1) &\leq 10^{5} \eps \mu_1 \label{spin-1a-next-2}\\
c_0(\tau_1) &\geq \exp( K^{9} ) \eps^2 \mu_1\label{spin-2-next-2}\\
c_0(\tau_1) &\leq \exp( K^{10} ) \eps^2 \mu_1.\label{spin-2a-next-2}
\end{align}
\end{proposition}
Indeed, the remaining claims \eqref{dn-en-next}, \eqref{before-en-next}, \eqref{during-en-next}, \eqref{after-en-next} of Proposition \ref{induct-rescale} follow for $\tau_1$ obeying \eqref{lifespan-rescaled-2} from \eqref{before-en-next'}-\eqref{dn-en-next'} (using \eqref{en-stable-rescaled-2} to handle the $\mu_1$ factor in \eqref{dn-en-next}).
\subsection{Sixth step: eliminating the role of $b_1,c_1,d_1$}
We now begin the proof of Proposition \ref{reduced-claim}. Henceforth the notation and assumptions are as in that proposition.
It turns out that we can reduce to the setting in which the dynamics of $b_1,c_1,d_1$ are essentially trivial. The key proposition is
\begin{proposition}[Small $a_1$ implies small $b_1,c_1,d_1$]\label{todos} Suppose that $0 \leq \tau \leq T_1$ is a time such that
\begin{equation}\label{sold}
\int_0^\tau a_1(t)^2\ dt \leq K^{-1/4}
\end{equation}
Then we have the bounds
\begin{align}
|b_1(t)| &\lesssim K^{-1/4} \eps \label{b-small} \\
|c_1(t)| &\lesssim K^{-1/4} \exp( - K^{10}/2 ) \eps^2 \label{c-small}\\
c_1(t) &\geq - O( (1+\epsilon_0)^{-n_0/3} ) \label{c-small-2}\\
|d_1(t)| &\lesssim K^{-20} \label{d-small}
\end{align}
for all $0 \leq t \leq \tau$.
\end{proposition}
\begin{proof} We first observe from \eqref{after-en-rescaled} that
$$ \tilde E_1(t) \lesssim K^{-30} (1+\epsilon_0)^{10k}$$
whenever $n_0-N < k\leq 0$ and $\tau_{k-1} \leq t \leq \tau_k$. From this and \eqref{tau-stable} we conclude the crude bound
\begin{equation}\label{ai}
\tilde E_1(t) \lesssim \frac{K^{-30}}{1+|t|^3}
\end{equation}
whenever $\tau_{n_0-N} \leq t \leq 0$.
Let $\tau'$ be the largest time in $[\tau_{n_0-N},\tau]$ for which
\begin{equation}\label{champ}
\int_{\tau_{n_0-N}}^{\tau'} |b_1(t)|\ dt \leq \frac{1}{10} \eps.
\end{equation}
From continuity we see that either $\tau'=\tau$, or else
\begin{equation}\label{opes}
\int_{\tau_{n_0-N}}^{\tau'} |b_1(t)|\ dt = \frac{1}{10} \eps.
\end{equation}
We rule out the latter possibility as follows. From \eqref{cxin-eq-resc} one has
$$ |\partial_t c_1| \leq O\left(\eps^2 \exp(-K^{10}) \tilde E_1\right) + (1+\epsilon_0)^{5/2} \eps^{-1} K^{10} |b_1| |c_1| + O\left((1+\epsilon_0)^{-n_0/2} \tilde E_1^{1/2}\right)$$
and
$$ \partial_t c_1 \geq - O( (1+\epsilon_0)^{-n_0/2} \tilde E_1 ) - (1+\epsilon_0)^{5/2} \eps^{-1} K^{10} |b_1| |c_1|$$
for all $t \geq \tau_{n_0-N}$, while from \eqref{nomad} one has $c_1(\tau_{n_0-N})=0$. From Gronwall's inequality and \eqref{champ}, we conclude that
\begin{align*}
|c_1(t)| &\lesssim \exp( (1+\epsilon_0)^{5/2} \frac{1}{10} K^{10}) \times \\
&\quad \left( \eps^2 \exp(-K^{10}) \int_{\tau_{n_0-N}}^t \tilde E_1(t')\ dt' + O\left( (1+\epsilon_0)^{-n_0/2} \int_{\tau_{n_0-N}}^t \tilde E_1(t')^{1/2}\ dt' \right) \right)
\end{align*}
and
$$c_1(t) \geq - O( \exp( (1+\epsilon_0)^{5/2} \frac{1}{10} K^{10}) (1+\epsilon_0)^{-n_0/2} \int_{\tau_{n_0-N}}^t \tilde E_1(t')^{1/2}\ dt' )$$
for any $\tau_{n_0-N} \leq t \leq \tau'$. In particular, from \eqref{ai} and \eqref{during-en-next'} we have
\begin{equation}\label{dream}
|c_1(t)| \lesssim \frac{\eps^2\exp(-3K^{10}/4)}{1+|t|^2}
\end{equation}
and
\begin{equation}\label{dream-2}
c_1(t) \geq - O\left( \exp\left( O(K^{10}) \right) (1+\epsilon_0)^{-n_0/2} \right)
\end{equation}
for all $\tau_{n_0-N} \leq t \leq \tau'$ (here we use the trivial bound $\tau' \leq \tau \leq T_1 \leq 100$).
In a similar spirit, from \eqref{bxin-eq-resc} one has
$$|\partial_t b_1| \leq \eps a_1^2 + \eps^{-1} K^{10} c_1^2 + O\left( (1+\epsilon_0)^{-n_0} \tilde E_1^{1/2} \right) $$
for all $t \geq \tau_{n_0-N}$, and hence by \eqref{nomad}
$$ |b_1(t)| \leq \int_{\tau_{n_0-N}}^t \eps a_1^2(t') + \eps^{-1} K^{10} c_1^2(t') + O\left((1+\epsilon_0)^{-n_0} \tilde E_1^{1/2}\right) \ dt'.$$
In particular, from \eqref{dream}, \eqref{ai}, \eqref{sold} we have
$$ |b_1(t)| \lesssim \frac{K^{-30}}{1+|t|^2} \eps$$
for $\tau_{n_0-N} \leq t \leq 0$, and
\begin{equation}\label{wake}
|b_1(t)| \lesssim K^{-1/4} \eps
\end{equation}
for $0 \leq t \leq \tau'$. However, this is inconsistent with \eqref{opes} if $K$ is large enough (recalling that $\tau' \leq 100$). Thus $\tau'=\tau$. The bounds \eqref{b-small}, \eqref{c-small}, \eqref{c-small-2} now follow from \eqref{dream}, \eqref{dream-2}, \eqref{wake}.
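(Quantitatively: integrating the bound $|b_1(t)| \lesssim K^{-30}\eps/(1+|t|^2)$ over $\tau_{n_0-N} \leq t \leq 0$, and \eqref{wake} over $0 \leq t \leq \tau' \leq 100$, gives
$$ \int_{\tau_{n_0-N}}^{\tau'} |b_1(t)|\ dt \lesssim K^{-30} \eps + K^{-1/4} \eps,$$
which is strictly less than $\frac{1}{10}\eps$ once $K$ is large enough; this is the promised contradiction with \eqref{opes}.)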
Finally, from \eqref{dxin-eq-resc} and \eqref{c-small}, \eqref{during-en-next'}, \eqref{after-en-next'} we have
$$
\partial_t d_1 = O\left( \exp\left( -K^{10}/2 \right) \right) + O( K^{-14} |d_1| )
$$
for $0 \leq t \leq \tau'$ (taking $n_0$ large enough), and from this, \eqref{dn-en-0}, and Gronwall's inequality one obtains \eqref{d-small} (for $K$ large enough).
\end{proof}
Let $T_2$ be the largest time in $[0,T_1]$ such that
$$
\int_0^{T_2} a_1(t)^2\ dt \leq K^{-1/4}.
$$
Combining Proposition \ref{todos} with Corollary \ref{exit} and using continuity, we conclude
\begin{corollary}[Exit trichotomy, again]\label{exit-again} At least one of the following assertions hold:
\begin{itemize}
\item (Backwards flow of energy) One has
\begin{equation}\label{exit-1a}
\tilde E_{-1}(T_2) = K^{-10} (1+\epsilon_0)^{2/10}.
\end{equation}
\item (Forwards flow of energy) One has
\begin{equation}\label{exit-2a}
\int_0^{T_2} a_1(t)^2\ dt = K^{-1/4}.
\end{equation}
\item (Running out the clock) One has
\begin{equation}\label{exit-3a}
T_2 = 100.
\end{equation}
\end{itemize}
\end{corollary}
Again, it turns out that it is option \eqref{exit-2a} that actually occurs, although we will not quite prove (or use) this assertion here.
The most important modes for the remainder of the analysis are $a_0,b_0,c_0,d_0$, and $a_1$.
From \eqref{axin-eq-resc}-\eqref{dxin-eq-resc}, the energy bounds \eqref{before-en-next'}-\eqref{after-en-next'}, and Proposition \ref{todos}, we observe the equations of motion
\begin{align}
\partial_t a_0 &= - \eps^{-2} c_0 d_0 + O\left( K^{-9} \right) \label{axin-eq-0a} \\
\partial_t b_0 &= \eps a_0^2 - \eps^{-1} K^{10} c_0^2 + O\left( (1+\epsilon_0)^{- n_0/2}\right) \label{bxin-eq-0a} \\
\partial_t c_0 &= \eps^2 \exp(-K^{10}) a_0^2 + \eps^{-1} K^{10} b_0 c_0 + O\left( (1+\epsilon_0)^{- n_0/2} \right) \label{cxin-eq-0a} \\
\partial_t d_0 &= \eps^{-2} c_0 a_0 - (1+\epsilon_0)^{5/2} K d_0 a_1 + O\left( (1+\epsilon_0)^{- n_0/2} \right) \label{dxin-eq-0a}\\
\partial_t a_1 &= (1+\epsilon_0)^{5/2} K d_0^2 + O( K^{-1} |a_1| ) + O\left( K^{-20} \right) \label{axin-eq-1a}
\end{align}
for these modes in the time interval $0 \leq t \leq T_2$. When $N > n_0$, we also need to keep some track of the modes $a_{-1}, b_{-1}, c_{-1}, d_{-1}$; again from \eqref{axin-eq-resc}-\eqref{dxin-eq-resc} and \eqref{before-en-next'}-\eqref{after-en-next'}, these equations may be given as
\begin{align}
\partial_t a_{-1} &= - (1+\epsilon_0)^{-5/2} \eps^{-2} c_{-1} d_{-1} + O\left( K^{-9} \right) \label{axin-eq-minus} \\
\partial_t b_{-1} &= (1+\epsilon_0)^{-5/2} \eps a_{-1}^2 - (1+\epsilon_0)^{-5/2} \eps^{-1} K^{10} c_{-1}^2 + O\left( (1+\epsilon_0)^{- n_0/2}\right) \label{bxin-eq-minus} \\
\partial_t c_{-1} &= (1+\epsilon_0)^{-5/2} \eps^2 \exp(-K^{10}) a_{-1}^2 + (1+\epsilon_0)^{-5/2} \eps^{-1} K^{10} b_{-1} c_{-1} + O\left( (1+\epsilon_0)^{- n_0/2} \right) \label{cxin-eq-minus} \\
\partial_t d_{-1} &= (1+\epsilon_0)^{-5/2} \eps^{-2} c_{-1} a_{-1} - K d_{-1} a_{0} + O\left( (1+\epsilon_0)^{- n_0/2} \right). \label{dxin-eq-minus}
\end{align}
The dynamics of these variables $a_{-1},b_{-1},c_{-1},d_{-1}$ do not directly impact the dynamics in \eqref{axin-eq-0a}-\eqref{axin-eq-1a}; however we will still need to track these variables in order to prevent a premature exit of the form \eqref{exit-1a} that could potentially be caused by energy flowing back from $a_0$ to $d_{-1}$.
The task of proving Proposition \ref{reduced-claim} has now reduced further, to that of establishing the following claim.
\begin{proposition}[Reduced induction claim, II]\label{reduced-claim-2} Let the notation and hypotheses be as in Proposition \ref{induct-rescale}, and let $T_1$ and $T_2$ be defined as above. There exists a time
\begin{equation}\label{lifespan-rescaled-3}
\frac{1}{100} \leq \tau_1 \leq T_2
\end{equation}
(in particular, $T_2 \geq 1/100$)
such that we have the bounds
\begin{align}
(1+\epsilon_0)^{-1/100} \leq a_1(\tau_1) &\leq (1+\epsilon_0)^{1/100} \label{an-init-next-3}\\
b_0(\tau_1) &\geq 2 \times 10^{-5} \eps \label{spin-1-next-3}\\
b_0(\tau_1) &\leq \frac{1}{2} 10^{5} \eps \label{spin-1a-next-3}\\
c_0(\tau_1) &\geq 2 \times \exp( K^{9} ) \eps^2 \label{spin-2-next-3}\\
c_0(\tau_1) &\leq \frac{1}{2} \exp( K^{10} ) \eps^2 \label{spin-2a-next-3}\\
\tilde E_0(\tau_1) &\leq \frac{1}{2} K^{-20}. \label{en1-init-next-3}
\end{align}
\end{proposition}
Indeed, Proposition \ref{reduced-claim} follows from Proposition \ref{reduced-claim-2} and Proposition \ref{todos} once we set $\mu_1 := a_1(\tau_1)$ (and take $K$ sufficiently large, $\eps$ sufficiently small, and $n_0$ sufficiently large).
\subsection{Seventh step: dynamics at the zero scale}
We now prove Proposition \ref{reduced-claim-2} (and hence Propositions \ref{reduced-claim}, \ref{blowdyn-induct}, \ref{blowdyn} and Theorems \ref{ood2}, \ref{ood}, \ref{blowup} and \ref{main}).
The task at hand is now very close to the situation in Theorem \ref{daet}, and we will now repeat the proof of that theorem with minor modifications, except for a technical distraction having to do with eliminating a premature exercise of the option \eqref{exit-1a}, which requires some analysis of the $-1$-scale dynamics.
From \eqref{during-en-next'} we have
\begin{equation}\label{ab}
a_0(t), b_0(t), c_0(t), d_0(t), a_1(t) = O(1)
\end{equation}
for all $0 \leq t \leq T_2$. Actually, we can do a bit better than this. From \eqref{axin-eq-0a}-\eqref{axin-eq-1a} and \eqref{ab} we have
$$ \partial_t (a_0^2+b_0^2+c_0^2+d_0^2+a_1^2) = O( K^{-1} ) $$
for $0 \leq t \leq T_2$
(if $n_0$ is large enough), whereas from \eqref{an-init-rescaled}-\eqref{dn-init-rescaled} and \eqref{after-en-rescaled} we have
$$ a_0(0)^2+b_0(0)^2+c_0(0)^2+d_0(0)^2+a_1(0)^2 = 1 + O(K^{-20}).$$
By the fundamental theorem of calculus, we conclude that
\begin{equation}\label{ab-refine}
a_0(t)^2+b_0(t)^2+c_0(t)^2+d_0(t)^2+a_1(t)^2 = 1 + O(K^{-1})
\end{equation}
for all $0 \leq t \leq T_2$.
Now (as in the proof of Theorem \ref{daet}) we obtain improved bounds on $b,c$. From \eqref{bxin-eq-0a}, \eqref{cxin-eq-0a}, \eqref{ab} one has
$$
\partial_t (b_0^2+c_0^2) =
2 \eps a_0^2 b_0 + 2 \eps^2 \exp(-K^{10}) a_0^2 c_0 + O\left( (1+\epsilon_0)^{-n_0/2} (b_0^2+c_0^2)^{1/2} \right)$$
for all $0 \leq t \leq T_2$, and thus by \eqref{ab-refine}
$$
\partial_t (b_0^2+c_0^2)^{1/2} \leq (1 + O(K^{-1})) \eps$$
for all $0 \leq t \leq T_2$ (interpreting the derivative in a weak sense).
On the other hand, from \eqref{bn-init-rescaled}, \eqref{cn-init-rescaled} we have
$$ (b_0(0)^2+c_0(0)^2)^{1/2} \leq (10^{-5} + O(K^{-1})) \eps.$$
From the fundamental theorem of calculus, we conclude that
\begin{equation}\label{ab-2}
|b_0(t)|, |c_0(t)| \leq (10^{-5} + t + O(K^{-1})) \eps
\end{equation}
for all $0 \leq t \leq T_2$. Inserting this (and \eqref{ab-refine}) into \eqref{cxin-eq-0a}, we obtain
$$ |\partial_t c_0| \leq O( \eps^2 \exp(-K^{10}) ) + \left(10^{-5}+t+O(K^{-1})\right) K^{10} |c_0|$$
for all $0 \leq t \leq T_2$.
In particular, by \eqref{cn-init-rescaled} and Gronwall's inequality, we have the bound
\begin{equation}\label{code2}
|c_0(t)| \lesssim \eps^2 \exp\left( K^{10} \left( - \frac{1}{4} + 10^{-5} t + \frac{1}{2} t^2 + O(K^{-1}) \right) \right)
\end{equation}
for all $0 \leq t \leq T_2$.
Finally, from \eqref{dxin-eq-0a}, \eqref{axin-eq-1a} we have
$$ \partial_t \left(d_0^2 + a_1^2\right) = 2 \eps^{-2} c_0 a_0 d_0 + O\left( K^{-1} \left(d_0^2+a_1^2\right) \right) + O\left( K^{-20} \left(d_0^2+a_1^2\right)^{1/2} \right) $$
for all $0 \leq t \leq T_2$,
and hence by \eqref{ab}
\begin{equation}\label{explorer}
\partial_t \left(d_0^2 + a_1^2\right)^{1/2} = O( \eps^{-2} |c_0|) + O\left( K^{-1} \left(d_0^2+a_1^2\right)^{1/2} \right) + O( K^{-20} )
\end{equation}
for all $0 \leq t \leq T_2$ (interpreted in a weak sense).
From \eqref{dn-init-rescaled}, \eqref{after-en-rescaled} we have
\begin{equation}\label{slam}
\left(d_0(0)^2+a_1(0)^2\right)^{1/2} = O( K^{-10} )
\end{equation}
and hence by \eqref{code2} and Gronwall's inequality
\begin{equation}\label{domo}
|d_0(t)|, |a_1(t)| \lesssim \exp\left( K^{10} \left( - 1/4 + 10^{-5} t + \frac{1}{2} t^2 + O(K^{-1}) \right) \right) + K^{-10}
\end{equation}
for all $0 \leq t \leq T_2$.
Inserting this bound into \eqref{axin-eq-0a}, we see that
$$
|\partial_t a_0| \lesssim \exp\left( 2K^{10} \left( - 1/4 + 10^{-5} t + \frac{1}{2} t^2 + O\left(K^{-1}\right) \right) \right) + K^{-9}$$
for all $0 \leq t \leq T_2$,
which among other things implies (from \eqref{an-init-rescaled}) that $a_0(t) \geq 0$ whenever $0 \leq t \leq \min(T_2,1/2)$. From the $k=-1$ case of \eqref{lei}, we thus have
$$ \partial_t \tilde E_{-1} \leq K (1+\epsilon_0)^{-5/2} d_{-2}^2 a_{-1}$$
for $0 \leq t \leq \min(T_2,1/2)$; by \eqref{before-en-next'} we conclude that
$$ \partial_t \tilde E_{-1} \leq O( K^{-14} )$$
on this interval, and hence by \eqref{before-en-0}
$$ \tilde E_{-1}\left(\min(T_2,1/2)\right) \leq K^{-10} \left(1+\epsilon_0\right)^{2/10} \left(\left(1+\epsilon_0\right)^{-0.08} + O\left( K^{-4} \right)\right),$$
which rules out the first option of Corollary \ref{exit-again} if $T_2 \leq 1/2$. The second option of this corollary is also ruled out when $T_2 \leq 1/2$, thanks to \eqref{domo}. We conclude that
\begin{equation}\label{tc-below}
T_2 \geq 1/2.
\end{equation}
Now we sharpen the bounds on $a_0(t), b_0(t), c_0(t), d_0(t), a_1(t)$. Let $t_c$ be the supremum of all the times $t \in [0,T_2]$ for which $|c_0(t')| \leq K^{-10} \eps^2$ for all $0 \leq t' \leq t$, thus
$$0 \leq t_c \leq T_2 \leq T_1 \leq 100$$
and
\begin{equation}\label{boots2}
|c_0(t)| \leq K^{-10} \eps^2
\end{equation}
for all $0 \leq t \leq t_c$. Comparing this with \eqref{code2}, \eqref{tc-below}, we conclude that
\begin{equation}\label{cando}
t_c \geq 1/2.
\end{equation}
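(As a check: for $0 \leq t \leq 1/2$ the exponent in \eqref{code2} is at most $K^{10}\left( -\frac{1}{4} + \frac{1}{2} \cdot 10^{-5} + \frac{1}{8} + O(K^{-1}) \right) \leq -\frac{1}{10} K^{10}$, so $|c_0(t)| \lesssim \eps^2 \exp(-K^{10}/10) \ll K^{-10} \eps^2$ there; since also $T_2 \geq 1/2$ by \eqref{tc-below}, the threshold in \eqref{boots2} cannot be reached before time $1/2$.)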
From \eqref{explorer}, \eqref{slam}, \eqref{boots2}, and Gronwall's inequality one has
\begin{equation}\label{ak10}
|d_0(t)|, |a_1(t)| \lesssim K^{-10}
\end{equation}
for all $0 \leq t \leq t_c$. Inserting these bounds and \eqref{ab-2} back into \eqref{axin-eq-0a}, we see that
$$ \partial_t a_0 = O( K^{-9} ) $$
for $0 \leq t \leq t_c$, and thus by \eqref{an-init-rescaled} we have
\begin{equation}\label{aorta}
a_0(t) = 1 + O(K^{-9})
\end{equation}
for $0 \leq t \leq t_c$. Inserting this into \eqref{bxin-eq-0a} and using \eqref{boots2}, we conclude that
$$ \partial_t b_0(t) = \eps \left(1 + O\left(K^{-9}\right)\right) $$
for $0 \leq t \leq t_c$, and hence by \eqref{bn-init-rescaled}
\begin{equation}\label{stan-lee}
\eps\left(t - 10^{-5} - O\left(K^{-9}\right)\right) \leq b_0(t) \leq \eps\left(t + 10^{-5} + O\left(K^{-9}\right)\right)
\end{equation}
for all $0 \leq t \leq t_c$. Meanwhile, inserting \eqref{aorta} into \eqref{cxin-eq-0a}, we obtain
$$
\partial_t c_0(t) \geq \left(1 + O(K^{-9})\right) \eps^2 \exp(-K^{10}) + \eps^{-1} K^{10} b_0(t) c_0(t) $$
for all $0 \leq t \leq t_c$, and hence by
\eqref{cn-init-rescaleda}, \eqref{stan-lee} and Gronwall's inequality we see that
$$
c_0(t) \gtrsim \exp\left( \left(\frac{1}{2} t^2 - 10^{-5} t - 1 + O(K^{-9})\right) K^{10} \right) \eps^2$$
whenever $1/2 \leq t \leq t_c$. Comparing this with \eqref{boots2} we see that
\begin{equation}\label{tc-bound}
t_c \leq 2
\end{equation}
(say), which by \eqref{boots2} and the definition of $t_c$ implies that
\begin{equation}\label{c-bound-happy}
c_0(t_c) = K^{-10} \eps^2.
\end{equation}
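(Indeed, if one had $t_c > 2$, then taking $t=2$ in the lower bound above would give $c_0(2) \gtrsim \exp\left( (1 - 2 \times 10^{-5} + O(K^{-9})) K^{10} \right) \eps^2 \geq \exp( K^{10}/2 ) \eps^2$, far above the threshold in \eqref{boots2}; and \eqref{c-bound-happy} then follows by continuity of $c_0$ at the exit time $t_c$.)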
Having described the evolution up to time $t_c$, we now move to the future of $t_c$, and specifically to the interval $[t_c,\tau_1]$, where
\begin{equation}\label{t3-def}
\tau_1 := \min( t_c + K^{-1/2}, T_2 ).
\end{equation}
From \eqref{ak10} we have
$$ \int_0^{t_c} a_1(t)^2\ dt \lesssim K^{-10}$$
and hence by \eqref{during-en-next'} and \eqref{t3-def} we have
$$ \int_0^{t} a_1(t')^2\ dt' < K^{-1/4}$$
whenever $t \leq \tau_1$. From this and Corollary \ref{exit-again} (and \eqref{tc-bound}) we conclude
\begin{proposition}[Exit dichotomy]\label{exit-options} At least one of the following assertions hold:
\begin{itemize}
\item (Backwards flow of energy) One has
\begin{equation}\label{exit-1b}
\tilde E_{-1}(\tau_1) = K^{-10} (1+\epsilon_0)^{2/10}.
\end{equation}
\item (Running out the clock) One has
\begin{equation}\label{exit-3b}
\tau_1 = t_c + K^{-1/2}.
\end{equation}
\end{itemize}
\end{proposition}
We will shortly eliminate the option \eqref{exit-1b}, but first we need more control on the dynamics.
From \eqref{stan-lee} and \eqref{cando} we have
$$ b_0(t_c) \geq 10^{-1} \eps.$$
Meanwhile, from \eqref{code2}, \eqref{bxin-eq-0a} (discarding the non-negative $\eps a_0^2$ term) we have
$$
\partial_t b_0 \geq - O\left( \eps^3 \exp\left( O\left( K^{10} \right) \right) \right)$$
for all $t_c \leq t \leq \tau_1$ (with $n_0$ large enough); we conclude (for $\eps$ small enough) that
\begin{equation}\label{botany}
b_0(t) \geq 10^{-4} \eps
\end{equation}
(say) for all $t_c \leq t \leq \tau_1$. Inserting this bound into \eqref{cxin-eq-0a}, and discarding the non-negative $\eps^2 \exp(-K^{10}) a_0^2$ term, we see from \eqref{c-bound-happy} and a continuity argument that
\begin{equation}\label{gargle}
c_0(t) \geq K^{-10} \eps^2
\end{equation}
for $t_c \leq t \leq \tau_1$ (in particular, $c_0$ is positive on this interval), and furthermore that we have the exponential growth
\begin{equation}\label{chap}
\partial_t c_0(t) \gtrsim K^{10} c_0(t)
\end{equation}
for $t_c \leq t \leq \tau_1$. We conclude that
\begin{equation}\label{c-large-again}
c_0(t) \geq K^{100} \eps^2
\end{equation}
for $t$ in the interval $I := [t_c+K^{-9}, \tau_1]$. (We have not yet ruled out the possibility that this time interval is empty, although we will shortly show that this is not the case.) In the opposite direction, we see from \eqref{ab}, \eqref{ab-2}, \eqref{gargle}, \eqref{cxin-eq-0a} that
\begin{equation}\label{solace-again}
\partial_t c_0 \lesssim K^{10} c_0
\end{equation}
for $t_c \leq t \leq \tau_1$. From \eqref{c-bound-happy}, \eqref{t3-def}, and Gronwall's inequality, we thus have the upper bound
\begin{equation}\label{co-upper}
c_0(t) \lesssim \exp\left( O( K^{10 - 1/2} ) \right) \eps^2
\end{equation}
for $t_c \leq t \leq \tau_1$. Crucially, this upper bound will be significantly smaller than a lower bound for $c_{-1}$ in the same interval, leading to an important mismatch in speeds between the $0$-scale and $-1$-scale dynamics that prevents a premature exit via \eqref{exit-1b}. More precisely, we have
\begin{proposition}[No exit to coarse scales]\label{noexit} We have
\begin{equation}\label{toast}
\tilde E_{-1}(t) \lesssim K^{-14}
\end{equation}
for all $t_c \leq t \leq \tau_1$.
In particular, by Proposition \ref{exit-options} we have
\begin{equation}\label{tau-tc}
\tau_1 = t_c + K^{-1/2}
\end{equation}
and hence the interval $I = [t_c+K^{-9}, \tau_1]$ is non-empty.
\end{proposition}
\begin{proof} If $N=n_0$ then this is immediate from \eqref{nomodo}, so we may assume that $N > n_0$. In particular, the bounds \eqref{spin-1-scaled}-\eqref{spin-2a-scaled} are available.
We will need some additional bounds on $b_{-1}, c_{-1}$.
From \eqref{bxin-eq-minus}, \eqref{before-en-next'} (discarding the second term in \eqref{bxin-eq-minus} as being non-positive) we have
$$ \partial_t b_{-1}(t) \leq O( \eps )$$
for all $0 \leq t \leq \tau_1$. From this and \eqref{spin-1a-scaled}, we have
\begin{equation}\label{bmn}
b_{-1}(t) \leq O( \eps )
\end{equation}
for $0 \leq t \leq \tau_1$. Meanwhile, from \eqref{cxin-eq-minus} (using \eqref{before-en-next'} to bound $a_{-1}$) we have
$$ \partial_t |c_{-1}|^2 = O( \eps^2 |c_{-1}| ) + 2 (1+\epsilon_0)^{-5/2} \eps^{-1} K^{10} b_{-1} |c_{-1}|^2 $$
and thus by \eqref{bmn}
$$ \partial_t |c_{-1}|^2(t) \leq O( \eps^2 |c_{-1}| ) + O\left( K^{10} |c_{-1}|^2 \right)$$
and thus
$$ \partial_t |c_{-1}|(t) \leq O( \eps^2 ) + O\left( K^{10} |c_{-1}| \right).$$
By Gronwall's inequality and \eqref{spin-2a-scaled}, we thus have
\begin{equation}\label{cibosh}
|c_{-1}(t)| \lesssim \exp\left( O\left( K^{10} \right) \right) \eps^2
\end{equation}
for $0 \leq t \leq \tau_1$.
Inserting this back into \eqref{bxin-eq-minus}, we see that
$$ \partial_t b_{-1} \geq - O\left( \exp\left( O\left( K^{10} \right) \right) \right) \eps^3$$
and hence by \eqref{spin-1-scaled}
$$ b_{-1}(t) \geq 10^{-6} \eps$$
for $0 \leq t \leq \tau_1$ (if $\eps$ is small enough). Inserting this into \eqref{cxin-eq-minus}, we see that
$$ \partial_t c_{-1} \geq 10^{-6} K^{10} c_{-1} - O\left( (1+\epsilon_0)^{- n_0/2} \right)$$
for $0 \leq t \leq \tau_1$, and hence by \eqref{spin-2-scaled}
\begin{equation}\label{cmn}
c_{-1}(t) \geq \exp( 10^{-8} K^{10} ) \eps^2
\end{equation}
for $1/10 \leq t \leq \tau_1$ (if $n_0$ is large enough). Returning to \eqref{bxin-eq-minus}, we now have (thanks to \eqref{cibosh}) that
$$ \partial_t b_{-1}(t) = O( \eps )$$
for $0 \leq t \leq \tau_1$, and thus by \eqref{spin-1a-scaled}
\begin{equation}\label{bibosh}
b_{-1}(t) = O( \eps )
\end{equation}
for $0 \leq t \leq \tau_1$; from \eqref{cxin-eq-minus}, \eqref{cmn}, \eqref{before-en-next'} we now have
\begin{equation}\label{c-bang}
\partial_t c_{-1}(t) = O( K^{10} c_{-1}(t) )
\end{equation}
for $0 \leq t \leq \tau_1$.
From \eqref{before-en-0} we have
\begin{equation}\label{eop}
\tilde E_{-1}(0) \lesssim K^{-10}.
\end{equation}
This bound is too big for \eqref{toast}, so we will first need to establish some decay in $\tilde E_{-1}$ as one moves from time $t=0$ to time $t=t_c$.
From \eqref{lei} we have
$$ \partial_t \tilde E_{-1} \leq K (1+\epsilon_0)^{-5/2} d_{-2}^2 a_{-1} - K d_{-1}^2 a_0. $$
From \eqref{before-en-next'} we have $d_{-2}^2 a_{-1}(t) = O(K^{-15})$ for $0 \leq t \leq \tau_1$, so
\begin{equation}\label{chimp}
\partial_t \tilde E_{-1} \leq - K d_{-1}^2 a_0 + O( K^{-14} )
\end{equation}
for $0 \leq t \leq \tau_1$.
Equipartition of energy suggests that $d_{-1}^2$ oscillates around $\tilde E_{-1}$ on the average. To formalise this, observe from \eqref{axin-eq-minus}, \eqref{dxin-eq-minus}, \eqref{before-en-next'} that
$$
\partial_t (a_{-1} d_{-1}) = (1+\epsilon_0)^{-5/2} \eps^{-2} c_{-1} \left(a_{-1}^2 - d_{-1}^2\right) - K a_{-1} d_{-1} a_0 + O(K^{-14}) $$
and hence by the product rule
\begin{align*}
\partial_t \left(a_{-1} d_{-1} \frac{\eps^2}{c_{-1}} a_0\right)& = (1+\epsilon_0)^{-5/2} \left(a_{-1}^2 - d_{-1}^2\right) a_0 - a_{-1} d_{-1} \frac{\eps^2}{c_{-1}} \frac{\partial_t c_{-1}}{c_{-1}} a_0 - \frac{\eps^2}{c_{-1}} K a_{-1} d_{-1} a_0^2 \\
&\quad + a_{-1} d_{-1} \frac{\eps^2}{c_{-1}} \partial_t a_0 + O\left(K^{-14} \frac{\eps^2}{c_{-1}}\right)
\end{align*}
for $0 \leq t \leq \tau_1$.
From \eqref{axin-eq-0a}, \eqref{during-en-next'}, \eqref{co-upper} we have
$$ \partial_t a_0 = O\left( \exp\left( O\left( K^{10 - 1/2} \right) \right) \right).$$
Using \eqref{cmn}, \eqref{c-bang}, \eqref{before-en-next'}, we conclude that
$$ \partial_t \left(a_{-1} d_{-1} \frac{\eps^2}{c_{-1}} a_0\right) = (1+\epsilon_0)^{-5/2} \left(a_{-1}^2 - d_{-1}^2\right) a_0 + O\left(K^{-100}\right)$$
for $0 \leq t \leq \tau_1$.
If we define the modified energy
$$ E^* := \tilde E_{-1} - \frac{1}{2} (1+\epsilon_0)^{5/2} K a_{-1} d_{-1} \frac{\eps^2}{c_{-1}} a_0$$
then from \eqref{cmn}, \eqref{before-en-next'} we have
\begin{equation}\label{vamonos}
E^* = \tilde E_{-1} + O(K^{-100})
\end{equation}
while from \eqref{chimp} we have
$$ \partial_t E^* \leq -\frac{1}{2} K (1+\epsilon_0)^{-5/2} \left(a_{-1}^2 + d_{-1}^2\right) a_0 + O\left(K^{-14} \right)$$
for $0 \leq t \leq \tau_1$.
By Lemma \ref{primary}, \eqref{bibosh}, \eqref{cibosh} we have
$$ \tilde E_{-1} = \frac{1}{2} \left(a_{-1}^2 + d_{-1}^2\right) + O(\eps^2)$$
and thus by \eqref{vamonos}
$$ \partial_t E^* \leq -\frac{1}{2} K (1+\epsilon_0)^{-5/2} E^* a_0 + O\left(K^{-14} \right).$$
From \eqref{eop} we have $E^*(0) \lesssim K^{-10}$, and by \eqref{vamonos} it will suffice to show that $E^*(t) \lesssim K^{-14}$ for $t_c \leq t \leq \tau_1$. By Gronwall's inequality, it thus suffices to show that
$$ \int_0^t a_0(t')\ dt' \gtrsim 1$$
for all $t_c \leq t \leq \tau_1$. But from \eqref{during-en-next'} and the bound $\tau_1 \leq t_c + K^{-1/2}$ we have
$$ \int_0^t a_0(t')\ dt' = \int_0^{t_c} a_0(t')\ dt' + O(K^{-1/2})$$
and the claim now follows from \eqref{aorta} and \eqref{cando}.
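(Explicitly, \eqref{aorta} gives $\int_0^{t_c} a_0(t')\ dt' = (1 + O(K^{-9})) t_c$, which by \eqref{cando} is at least $\frac{1}{2} - O(K^{-9})$; the $O(K^{-1/2})$ error above is thus negligible, and $\int_0^t a_0(t')\ dt' \geq \frac{1}{3}$, say.)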
\end{proof}
We now resume the analysis of the $0$-scale modes.
From Proposition \ref{noexit} and \eqref{axin-eq-resc} (and \eqref{during-en-next'}), we see that we can improve the error term in \eqref{axin-eq-0a} to
\begin{equation}\label{achoo}
\partial_t a_0 = - \eps^{-2} c_0 d_0 + O\left( K^{-14} \right)
\end{equation}
in the interval $t_c \leq t \leq \tau_1$. This improvement will be needed in order to close the bootstrap argument.
From \eqref{bxin-eq-0a}, \eqref{during-en-next'}, \eqref{co-upper} we have
$$ |\partial_t b_0| \leq 10 \eps$$
for $t_c \leq t \leq \tau_1$, and hence by \eqref{stan-lee} and \eqref{botany} we have
\begin{equation}\label{bilboa}
| b_0| \leq 10^4 \eps
\end{equation}
for $t_c \leq t \leq \tau_1$. Inserting this into \eqref{cxin-eq-0a} and using \eqref{gargle}, \eqref{during-en-next'} we have
$$
|\partial_t c_0(t)| \lesssim K^{10} c_0(t)
$$
for $t_c \leq t \leq \tau_1$; combining this with \eqref{chap} we have
\begin{equation}\label{solace-final}
K^{10} c_0(t) \lesssim \partial_t c_0(t) \lesssim K^{10} c_0(t).
\end{equation}
Now we use equipartition of energy to establish some energy drain from $a_0,d_0$ to $a_1$. From \eqref{achoo}, \eqref{dxin-eq-0a}, \eqref{during-en-next'}, \eqref{after-en-next'} one has
\begin{equation}\label{sanitary}
\partial_t \frac{1}{2}(a_0^2+d_0^2) = - (1+\epsilon_0)^{5/2} K d_0^2 a_1 + O\left( K^{-14} \left(a_0^2+d_0^2\right)^{1/2} \right) + O( K^{-100} )
\end{equation}
and
$$ \partial_t \left(a_0 d_0\right) = \eps^{-2} c_0 \left(a_0^2 - d_0^2\right) + O( K )$$
for $t \in I$.
Meanwhile, from \eqref{axin-eq-1a}, \eqref{during-en-next'} we have
$$ \partial_t a_1 = O(K).$$
From this and \eqref{c-large-again}, \eqref{solace-final} we conclude that
\begin{equation}\label{douse-again}
\begin{split}
\partial_t\left( a_0d_0 \frac{\eps^2}{c_0} a_1\right) &= (a_0^2-d_0^2) a_1 - a_0d_0 \frac{\eps^2}{c_0} \frac{\partial_t c_0}{c_0} a_1 + a_0d_0 \frac{\eps^2}{c_0} \partial_t a_1 + O(K^{-100}) \\
&= (a_0^2-d_0^2) a_1 + O(K^{-100})
\end{split}
\end{equation}
for $t \in I$,
so if we define the modified energy
$$
E_* := \frac{1}{2}\left(a_0^2+d_0^2\right) - \frac{1}{2} K a_0d_0 \frac{\eps^2}{c_0} a_1
$$
then from \eqref{c-large-again}, \eqref{during-en-next'}, \eqref{after-en-next'} we have
\begin{equation}\label{est-again}
E_* = \tilde E_0 + O(K^{-100}) = \frac{1}{2} \left(a_0^2+d_0^2\right) + O(K^{-100})
\end{equation}
and
$$ \partial_t E_* = - \frac{1}{2} K \left(a_0^2+d_0^2\right) a_1 + O\left( K^{-14} \left(a_0^2+d_0^2\right)^{1/2} \right) + O( K^{-99} )$$
for $t \in I$, and hence
$$ \partial_t E_* = - K a_1 E_* + O\left( K^{-14} E_*^{1/2} \right) + O( K^{-99} )$$
and hence
$$ \partial_t \left(E_* + K^{-28}\right)^{1/2} = - \frac{1}{2} K a_1 \left(E_* + K^{-28}\right)^{1/2} + O(K^{-14})$$
for $t \in I$.
Starting with the crude bound $E_*(t_c+K^{-9}) \lesssim 1$ from \eqref{est-again}, \eqref{during-en-next'}, we conclude from Gronwall's inequality that
$$
\left(E_*(t)+K^{-28}\right)^{1/2} \lesssim \exp\left( - \frac{1}{2} K \int_{t_c+K^{-9}}^t a_1(t')\ dt'\right) + O( K^{-14} )$$
for any $t \in I$. In particular, from \eqref{est-again} we have
\begin{equation}\label{eeyore}
\tilde E_0(t) \lesssim \exp\left( - K \int_{t_c+K^{-9}}^t a_1(t')\ dt'\right) + O( K^{-28} )
\end{equation}
for $t \in I$.
From \eqref{axin-eq-1a} we have
\begin{equation}\label{fallow}
\partial_t a_1 \geq - O( K^{-1} |a_1| ) - O( K^{-20} )
\end{equation}
for $t \in I$. In particular, if we can show
\begin{equation}\label{atc-again}
a_1(t_c+1/K) \geq 0.1
\end{equation}
then by Gronwall's inequality we will have
\begin{equation}\label{gb}
a_1(t) \geq 0.05
\end{equation}
for all $t_c+1/K \leq t \leq \tau_1 = t_c+1/K^{1/2}$, and in particular from \eqref{eeyore} we have
\begin{equation}\label{samuel}
\tilde E_0(\tau_1) \lesssim K^{-28}
\end{equation}
giving \eqref{en1-init-next-3}.
We now show \eqref{atc-again}. Suppose this is not the case. From \eqref{axin-eq-1a}, \eqref{ak10}, \eqref{ab-refine} we have
$$ a_1(t_c + K^{-9}) \lesssim K^{-1} $$
so from \eqref{axin-eq-1a}, \eqref{after-en-next'}, and the failure of \eqref{atc-again} we have
$$ \int_{t_c+K^{-9}}^{t_c+1/K} K d_0(t)^2\ dt \leq 0.1 + O( K^{-1} )$$
and thus
$$ \int_{t_c+K^{-9}}^{t_c+1/K} d_0(t)^2\ dt \leq \frac{1}{10K} + O( K^{-2} ).$$
However, by repeating the derivation of \eqref{douse-again} we have
$$ \partial_t \left(a_0d_0 \frac{\eps^2}{c_0}\right) = (a_0^2-d_0^2) + O(K^{-100})$$
on $I$,
and hence by the fundamental theorem of calculus and \eqref{c-large-again} we have
$$ \int_{t_c+K^{-9}}^{t_c+1/K} a_0(t)^2 - d_0(t)^2\ dt = O( K^{-100} )$$
and thus
\begin{equation}\label{jam}
\int_{t_c+K^{-9}}^{t_c+1/K} \frac{1}{2} (a_0^2+d_0^2)(t)\ dt \leq \frac{1}{10K} + O(K^{-2}).
\end{equation}
On the other hand, for $t \in [t_c+K^{-9},t_c+1/K]$ one has $a_1(t) \leq 0.1 + O(K^{-20})$ by \eqref{fallow}, the failure of \eqref{atc-again}, and Gronwall's inequality. From \eqref{ab-refine}, \eqref{est-again} we conclude that
$$ a_0(t)^2+d_0(t)^2 \geq 0.99 - O(K^{-1}),$$
which contradicts \eqref{jam}. This concludes the proof of \eqref{atc-again} and hence \eqref{en1-init-next-3}.
To finish up, we need to establish the bounds \eqref{an-init-next-3}-\eqref{spin-2a-next-3} (the bound \eqref{lifespan-rescaled-3} coming from \eqref{cando}, \eqref{tc-below}, and the construction \eqref{t3-def} of $\tau_1$). From \eqref{ab-refine}, \eqref{samuel} we have
$$ a_1(\tau_1)^2 = 1 + O(K^{-1})$$
and \eqref{an-init-next-3} follows from this and \eqref{gb}. The bounds \eqref{spin-1-next-3}, \eqref{spin-1a-next-3} follow from \eqref{bilboa} and \eqref{botany}, while the bounds \eqref{spin-2-next-3}, \eqref{spin-2a-next-3} follow from \eqref{c-bound-happy}, \eqref{solace-final}, \eqref{tau-tc}, and Gronwall's inequality. This (finally!) completes the proof of Proposition \ref{reduced-claim-2}, and hence of Theorem \ref{main}.
| {
"timestamp": "2015-04-02T02:04:20",
"yymm": "1402",
"arxiv_id": "1402.0290",
"language": "en",
"url": "https://arxiv.org/abs/1402.0290",
"abstract": "The Navier-Stokes equation on the Euclidean space $\\mathbf{R}^3$ can be expressed in the form $\\partial_t u = \\Delta u + B(u,u)$, where $B$ is a certain bilinear operator on divergence-free vector fields $u$ obeying the cancellation property $\\langle B(u,u), u\\rangle=0$ (which is equivalent to the energy identity for the Navier-Stokes equation). In this paper, we consider a modification $\\partial_t u = \\Delta u + \\tilde B(u,u)$ of this equation, where $\\tilde B$ is an averaged version of the bilinear operator $B$ (where the average involves rotations and Fourier multipliers of order zero), and which also obeys the cancellation condition $\\langle \\tilde B(u,u), u \\rangle = 0$ (so that it obeys the usual energy identity). By analysing a system of ODE related to (but more complicated than) a dyadic Navier-Stokes model of Katz and Pavlovic, we construct an example of a smooth solution to such a averaged Navier-Stokes equation which blows up in finite time. This demonstrates that any attempt to positively resolve the Navier-Stokes global regularity problem in three dimensions has to use finer structure on the nonlinear portion $B(u,u)$ of the equation than is provided by harmonic analysis estimates and the energy identity. We also propose a program for adapting these blowup results to the true Navier-Stokes equations.",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Finite time blowup for an averaged three-dimensional Navier-Stokes equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373085,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449310917195
} |
https://arxiv.org/abs/1507.04169 | Phase transition in a sequential assignment problem on graphs | We study the following game on a finite graph $G = (V, E)$. At the start, each edge is assigned an integer $n_e \ge 0$, $n = \sum_{e \in E} n_e$. In round $t$, $1 \le t \le n$, a uniformly random vertex $v \in V$ is chosen and one of the edges $f$ incident with $v$ is selected by the player. The value assigned to $f$ is then decreased by $1$. The player wins, if the configuration $(0, \dots, 0)$ is reached; in other words, the edge values never go negative. Our main result is that there is a phase transition: as $n \to \infty$, the probability that the player wins approaches a constant $c_G > 0$ when $(n_e/n : e \in E)$ converges to a point in the interior of a certain convex set $\mathcal{R}_G$, and goes to $0$ exponentially when $(n_e/n : e \in E)$ is bounded away from $\mathcal{R}_G$. We also obtain upper bounds in the near-critical region, that is when $(n_e/n : e \in E)$ lies close to $\partial \mathcal{R}_G$. We supply quantitative error bounds in our arguments. | \section{Introduction}
\label{sec:intro}
Consider the following game (known in different versions
\cite{J16}, \cite[Section 1.7]{Pbook}).
Players start with a row of $N$ empty boxes. In each of $N$ rounds,
a random digit is generated, and each player has to place it
into one of the empty boxes they have. A player's score is
the $N$ digit number obtained after the last round.
The game is a special case of \emph{sequential stochastic assignment}
introduced by Derman, Lieberman and Ross \cite{DLR72}. In sequential
assignment, there are $N$ jobs with given values $p_1 \le \dots \le p_N$
that have to be assigned to $N$ workers, as they appear in sequence.
The $i$-th worker has ability $X_i$, where $X_1, \dots, X_N$ are i.i.d.~random
variables from a given distribution $F$. The reward from assigning
the job of value $p_i$ to a worker with ability $x$ is $p_i x$,
and the overall reward of the assignment is the sum of the individual
rewards. The game mentioned at the start is recovered when
$p_i = 10^{i-1}$, and $X_i$ is uniform in $\{ 0, \dots, 9 \}$.
The paper \cite{DLR72} showed that there is a strategy that maximizes the
expected score independently of what $p_1, \dots, p_N$ are. This
strategy has the following form. There are numbers
$-\infty = a_{0,n} \le a_{1,n} \le \dots \le a_{n-1,n} \le a_{n,n} = \infty$,
$n \ge 1$, that only depend on the distribution $F$, such that if there are
$n$ jobs remaining to be assigned, with values $p'_1 \le \dots \le p'_n$,
and the next worker has ability $x$ with
$a_{i-1,n} \le x \le a_{i,n}$, then the worker is assigned to the
job with value $p'_i$.
Albright and Derman \cite{AD72} showed, using law of large numbers type arguments, that when $F$ is absolutely
continuous, one has $\lim_{n \to \infty} a_{q n, n} = F^{-1}(q)$ for $0 < q < 1$.
In particular, when the number $n$ of jobs is large,
a worker with ability $x$ should be assigned to a job with rank approximately
$q n$, where $F^{-1}(q) = x$. Note that when $F$ is discrete,
this way of determining the asymptotics breaks down: when $x$
is an atom of $F$, the graph of $F^{-1}$ has a horizontal
piece at height $x$. For large finite $n$, the value of $q$
where the profile $a_{q n, n}$ crosses height $x$ can be expected to be somewhere in the corresponding interval of constancy of $F^{-1}$, and its precise location can be expected to be governed by large deviation effects.
In order to motivate the subject of our paper, consider the
following modification of the game mentioned at the beginning. Suppose that each digit can take
the values $1, \dots, k$, with equal probability. Also suppose that
the goal of the player is to maximize the probability of achieving
the maximum possible score, that is to reach the unique final assignment
consisting of $k$ contiguous intervals of equal digits.
Let $\tau$ be the first time when all $k$ numbers have occurred
at least once. At time $\tau$, the empty boxes form $k-1$ intervals
of lengths $n_1, \dots, n_{k-1}$, where $n - \tau = \sum_{i=1}^{k-1} n_i$.
The $i$-th interval has a box filled with $i$ adjacent to it on the right, and
a box filled with $i+1$ adjacent to it on the left.
It is plausible that there exist numbers
$0 = \alpha_1 < \alpha_2 < \dots < \alpha_{k-1} < \alpha_{k} = 1$, such that for large $n$, under the optimal strategy,
$n_i/n \sim \alpha_{i+1} - \alpha_i$, $i = 1, \dots, k-1$.
We will be interested in the following question.
Suppose that an
alternative position is imposed on the player, where the intervals have
length $n'_i \sim (\beta_{i+1} - \beta_i) n'$, $i = 1, \dots, k-1$, where
$0 = \beta_1 < \beta_2 < \dots < \beta_{k-1} < \beta_k = 1$.
What is the behaviour of the probability that the player
can achieve the maximal score \emph{from this position}?
We show that the above probability displays a
sharp transition in the limit $n' \to \infty$.
When the vector $(\beta_{i+1} - \beta_i : i = 1, \dots, k-1)$
lies in the interior of a certain convex set $\mathcal{R}_k$, the probability approaches a positive constant,
whereas it goes to $0$ exponentially when
the vector is at a positive distance from $\mathcal{R}_k$.
More generally, we consider the above transition on a general finite graph $G = (V,E)$
with vertices labelled $1, \dots, k$. The starting position is a vector
$(n_e : e \in E)$, and $n = \sum_{e \in E} n_e$. When a number
$1 \le i \le k$ is rolled, one of the edges $f$ incident with vertex $i$
is selected by the player, and the value assigned to edge $f$ is decreased
by $1$. We assign a final reward of $1$ when the configuration
$(0, \dots, 0)$ is reached, and refer to this as `winning'.
In the game described at the beginning, the graph is a path of length $k-1$.
We believe the study of this model is interesting for a number of reasons.
\begin{enumerate}
\item Questions of reachability have been studied in control theory for a
long time \cite[Sections 19,20]{PBGMbook}. In our model, the controllable
set $\mathcal{R}_G$, that allows the player to reach the state $(0,\dots,0)$ with
uniformly positive probability, has a simple characterization,
which however involves the graph structure in a non-trivial way;
see Eqn.~\eqref{e:RG-def} and Lemma \ref{lem:prop-R_G}. As we show,
choosing the right control is only essential near $\partial \mathcal{R}_G$.
We believe our model, that is tractable on a general graph, is a useful
example system to have in understanding the behaviour of discrete controlled
systems with spatial structure near critical regions. Indeed, the main technical
effort in this paper is getting estimates in the near critical region,
that we do in Section \ref{sec:critical-proof}.
\item In deriving the optimal strategy for sequential assignment,
Derman, Lieberman and Ross \cite{DLR72} used
Hardy's inequality, of which we have no analogue on graphs.
Our proofs work without knowledge of the optimal strategy,
and only rely on martingale and Lyapunov function techniques, as well
as an explicit relationship between $\mathcal{R}_G$ and available controls.
Thus our arguments may be adaptable to other models.
It may be that the transition phenomenon itself can be
established with less effort, given more information
on the optimal strategy (see for example Question \ref{prob:exp-converge}
in Section \ref{sec:open}). Nevertheless, we believe that the
quantitative bounds we derive are of independent interest.
\item As the title of this paper suggests, we view the
transition studied in this paper as an instance of a critical
phenomenon.\footnote{A reader unfamiliar with critical phenomena
can find a good introduction in the short text \cite{Gbook10}.
We note that such familiarity is not required for understanding
this paper.}
While such transitions are ubiquitous in stochastic
control, we found little in the literature that connects them
with critical phenomena. We believe that such a point of view
can be beneficial, and was indeed our original motivation for
this study. Examples of works in the physics
literature that address an interplay between controllability
and network structure are \cite{NV12,JLCPSB13,SM13}.
\item Further problems that are important for applications can be studied
in our model or suitable modifications thereof. For example, we
see no obvious \emph{distributed} control, where vertices would
only have local information about the graph structure.
\end{enumerate}
\subsection{Definition of the model}
\label{ssec:model}
Throughout $G = (V,E)$ will be a finite connected simple graph
(without multiple edges or loops). We write
$k = |V|$, and assume $|E| \ge 2$ (the case with
one edge being trivial).
We write $\deg_G(v)$ for the degree of $v \in V$, and
$\deg_F(v)$ for the degree of $v$ in the
subgraph of $G$ induced by the set of edges $F \subset E$.
The state at time $0 \le t \le n$ is an integer vector
$\mathbf{N}(t) = (N_e(t) : e \in E)$, where the starting
state is $\mathbf{N}(0) = \mathbf{n} = (n_e : e \in E)$. Usually we will
use capitalized letters for random variables or random
processes, and lowercase letters for their possible values.
We write
$n = \sum_{e \in E} n_e$. Let $V_1, \ldots, V_n \in V$ be an
i.i.d.~sequence of vertices with $\mathbf{P} [ V_i = v ] = \frac{1}{k}$,
$v \in V$, $i = 1, \dots, n$.
If the player allocates $V_t$ to the edge $e$ incident
with $V_t$, the state is updated as
\eqnst
{ \mathbf{N}(t) = \mathbf{N}(t-1) - \mathbf{1}^e, \quad \text{ where } \quad
\mathbf{1}^e = (1^e_f : f \in E), \quad
1^e_f
= \begin{cases}
1 & \text{if $f = e$;}\\
0 & \text{if $f \not= e$.}
\end{cases} }
The gambler wins if $\mathbf{N}(n) = (0, \dots, 0) \in \mathbb{N}^E$,
and loses otherwise. We denote by $p_G(\mathbf{n})$
the probability of winning under the optimal strategy, when the
starting state is $\mathbf{n}$. This satisfies
\eqn{e:optimality}
{ p_G(\mathbf{n})
= \frac{1}{k} \, \sum_{v \in V} \, \max_{e \in E : e \sim v} \, p_G(\mathbf{n} - \mathbf{1}^e), }
known as the \emph{optimality equation} \cite[Section I.1]{Rbook},
where $e \sim v$ means that $e$ is incident with $v$.
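For small instances, the optimality equation \eqref{e:optimality} (together with the conventions, implicit in the model, that $p_G(\mathbf{0}) = 1$ and that $p_G(\mathbf{n}) = 0$ whenever some entry of $\mathbf{n}$ is negative) can be evaluated exactly by memoized recursion; this is one way to produce plots such as those in Figure \ref{fig:k4-plot} below. The following sketch, in Python, illustrates this for the path on $k = 4$ vertices; the identifiers are our own and are not tied to any package.
\begin{verbatim}
from functools import lru_cache

# Path on k = 4 vertices (0--1--2--3); EDGES[i] holds the endpoints of edge i.
EDGES = [(0, 1), (1, 2), (2, 3)]
K = 4  # number of vertices

@lru_cache(maxsize=None)
def p(state):
    # Optimal winning probability from the configuration
    # state = (n_e : e in EDGES), via the optimality equation.
    if any(n < 0 for n in state):
        return 0.0  # an edge value went negative: the player has lost
    if all(n == 0 for n in state):
        return 1.0  # the configuration (0,...,0) has been reached
    total = 0.0
    for v in range(K):
        # When vertex v is drawn, the player decrements the best
        # edge incident with v (every vertex here has such an edge).
        best = 0.0
        for i, (x, y) in enumerate(EDGES):
            if v in (x, y):
                s = list(state)
                s[i] -= 1
                best = max(best, p(tuple(s)))
        total += best
    return total / K

# (6, 8, 6)/20 = (0.3, 0.4, 0.3) lies in the interior of R_G for this
# path, so p should remain bounded away from 0 as n grows.
print(p((6, 8, 6)))
\end{verbatim}
Since the state space has size $\prod_{e \in E} (n_e + 1)$, this direct evaluation is only feasible for small $n$.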
We introduce some notation needed to state our main theorem.
We write $\mathcal{S}_G$ for the probability simplex in $\mathbb{R}^E$, that is, the set
of non-negative vectors $\mathbf{x} \in \mathbb{R}^E$ such that $\sum_{e \in E} x_e = 1$.
We define
\eqnspl{e:RG-def}
{ d(F)
&= \left| \{ v \in V : \deg_F(v) = \deg_G(v) \} \right|, \quad
\emptyset \subset F \subset E; \\
\mathcal{R}_G
&= \left\{ \mathbf{x} \in \mathcal{S}_G : \text{for all
$\emptyset \subsetneq F \subsetneq E$ we have $\sum_{e \in F} x_e >
\frac{1}{k} d(F)$} \right\}; \\
\mathcal{I}_G
&= \left\{ \mathbf{x} \in \mathcal{S}_G: \text{there exists
$\emptyset \subsetneq F \subsetneq E$ such that $\sum_{e \in F} x_e <
\frac{1}{k} d(F)$} \right\}. }
The letters `$d$', `$\mathcal{R}$' and `$\mathcal{I}$' are intended to evoke
`degree', `reachable' and `inaccessible', as we explain.
For any non-empty set
$F$ of edges, $\frac{d(F)}{k}$ is the probability that the player receives
a vertex that has full degree in $F$.
Any such vertex \emph{must} be allocated to one of the edges in $F$.
For starting positions $\mathbf{n} = (n_e : e \in E)$ where the proportion
of space $\sum_{e \in F} n_e / n$ available at the beginning
is smaller than $d(F)/k$, the probability
of winning goes to $0$ (as $n \to \infty$).
Therefore, from the region $\mathcal{I}_G$ the winning
position is asymptotically inaccessible.
On the other hand, as we show in Theorem \ref{thm:phase-trans-graph},
if $\mathbf{n} = n \, \mathbf{x}$ with $\mathbf{x} \in \mathcal{R}_G$, then the winning position
is asymptotically reachable from $\mathbf{n}$.
As we point out in Section \ref{ssec:prelim}, the set $\mathcal{R}_G$
arises as the region of controllability for a simple
(deterministic) linear control system associated to the game.
It can be verified that when $G$ is a tree with $k$ vertices
($k \ge 3$), $\mathcal{R}_G$ is a parallelepiped.
As we will not need this fact, we omit the proof.
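To illustrate the definition \eqref{e:RG-def}, consider the path on $k = 4$ vertices with edges $e_1, e_2, e_3$ in order (the graph of Figure \ref{fig:k4-plot}). Here $d(\{e_1\}) = d(\{e_3\}) = 1$, $d(\{e_2\}) = 0$, and $d(F) = 2$ for each two-edge set $F$, so the conditions $\sum_{e \in F} x_e > \frac{1}{4} d(F)$ for $F \in \{ \{e_1\}, \{e_3\}, \{e_1,e_2\}, \{e_2,e_3\} \}$ give
\eqnst
{ \mathcal{R}_G
= \left\{ \mathbf{x} \in \mathcal{S}_G : \tfrac{1}{4} < x_{e_1} < \tfrac{1}{2}, \
\tfrac{1}{4} < x_{e_3} < \tfrac{1}{2} \right\}, }
the rectangle visible in Figure \ref{fig:k4-plot}(a); the conditions for $F = \{e_2\}$ and $F = \{e_1,e_3\}$ are implied by these.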
\begin{remark}
The arguments we present in this paper are also applicable
to the slightly more general model when $V_1, \dots, V_n$ are not
uniformly distributed (but still i.i.d.). Suppose
$\mathbf{P} [ V_i = v ] = p_v$ with a probability vector
$\mathbf{p} = (p_v : v \in V)$ such that $p_v > 0$ for all $v \in V$.
In this case $\mathcal{R}_G$ and $\mathcal{I}_G$ are replaced by
\eqnsplst
{ \mathcal{R}_{G,\mathbf{p}}
&= \left\{ \mathbf{x} \in \mathcal{S}_G : \text{for all
$\emptyset \subsetneq F \subsetneq E$ we have $\sum_{e \in F} x_e >
\sum_{v : \deg_F(v) = \deg_G(v)} p_v$} \right\}; \\
\mathcal{I}_{G,\mathbf{p}}
&= \left\{ \mathbf{x} \in \mathcal{S}_G: \text{there exists
$\emptyset \subsetneq F \subsetneq E$ such that $\sum_{e \in F} x_e <
\sum_{v : \deg_F(v) = \deg_G(v)} p_v$} \right\}. }
As the required changes in the proofs are minor but including them would
further burden the notation, we state and prove the results only
in the uniform case. All the essential difficulties are already present
in the uniform model.
\end{remark}
\subsection{Main results}
\label{ssec:main-result}
Theorems \ref{thm:phase-trans-graph} and \ref{thm:bdry} below state
our main results. Figure \ref{fig:k4-plot} illustrates these
when $G$ is a path of length three, that is $k = 4$.
\begin{theorem}
\label{thm:phase-trans-graph}
Let $G$ be a finite connected simple graph with $|E| \ge 2$. \\
(i) If $\mathbf{x} \in \mathcal{I}_G$, and $\mathbf{n} = n \mathbf{x} + O(1)$, then
$p_G(\mathbf{n}) \to 0$ exponentially fast, as $n \to \infty$,
at a rate depending on $\mathbf{x}$. The rate of decay is bounded
away from $0$ on subsets bounded away from $\mathcal{R}_G$. \\
(ii) There exists a constant $c_G > 0$, such that
if $\mathbf{x} \in \mathcal{R}_G$, and $\mathbf{n} = n \mathbf{x} + O(1)$, then
$p_G(\mathbf{n}) \to c_G$, as $n \to \infty$.
\end{theorem}
\begin{figure}[htpb]
(a) \hskip-1cm\includegraphics[scale=0.7]{k4-plot-p4-bw.eps} \
(b) \hskip-1cm\includegraphics[scale=0.7]{k4-plot-p4-critical-bw.eps}
\caption{%
(a) Image of $p_G(m, 200-m-\ell, \ell)$ when $G$ is a path of length
three ($k = 4$) and $n = 200$.
The limit of $p_G$ is a positive constant in the rectangle
$\frac{1}{4} < x = m/n, y=\ell/n < \frac{1}{2}$ (dark region),
and goes to $0$ when $(x,y)$ is away from the rectangle (white region).
The maximum of $p_G$ is $\approx 0.2583299$.
(b) Detailed image of $p_G$ near the corner of the critical region
$0.15 \le m/n \le 0.35$, $0.4 \le \ell/n \le 0.6$.}
\label{fig:k4-plot}
\end{figure}
In Section \ref{sec:critical-proof} we obtain bounds on the
behaviour near $\partial \mathcal{R}_G$. These show that the
`critical window' has width of order $\sqrt{n}$ around
$n \partial \mathcal{R}_G$. Our bounds in particular imply the following
upper bound on $p_G(\mathbf{n})$ in this region. Fix any $\delta > 0$, and let
\eqnst
{ \overline{M}_n
= \overline{M}_n(\delta)
= \max \left\{ p_G(\mathbf{n}) : \mathbf{n}/n \in \mathcal{S}_G,\,
\mathrm{dist}(\mathbf{n}/n, \partial \mathcal{R}_G) \le \delta \right\}. }
\begin{theorem}
\label{thm:bdry}
For any $\delta > 0$ we have
$\limsup_{n \to \infty} \overline{M}_n(\delta) \le c_G$.
\end{theorem}
Combining Theorems \ref{thm:phase-trans-graph} and \ref{thm:bdry}
we obtain the following corollary.
\begin{corollary}
\label{cor:max-pG}
The configuration $\mathbf{n}$ that maximizes $p_G(\mathbf{n})$, with $n$ fixed,
satisfies $p_G(\mathbf{n}) = c_G + o(1)$, as $n \to \infty$.
\end{corollary}
Theorems \ref{thm:phase-trans-graph} and \ref{thm:bdry} do not rule
out the possibility that $p_G(\mathbf{n})$ is maximized near the critical surface,
at a distance that is $o(n)$. But of course we expect that
the location of the maximum, when rescaled by $1/n$, converges to a point in the
interior of $\mathcal{R}_G$. It is also plausible that the location of this point
can be characterized in terms of large deviation rates for events
of the form `the gambler runs out of space on the edges in $F$',
that is:
\eqnst
{ \left\{ \sum_{v : \deg_F(v) = \deg_G(v)} \sum_{t=1}^n \mathbf{1}_{V_t = v}
> \sum_{e \in F} n_e \right\}, \quad
\emptyset \subsetneq F \subsetneq E. }
We state an explicit conjecture for a path of length $k-1$, where this
is easiest to formulate.
Let
\eqnst
{ a_*(j;k)
= \frac{\log \left( \frac{k-j-1}{k-j} \right)}{\log \left(
\frac{j \, (k-j-1)}{(j+1) \, (k - j)} \right)}, \quad
1 \le j \le k-2 \qquad\quad
a_*(0;k) = 0 \qquad\quad
a_*(k-1;k) = 1. }
Let $\mathbf{n}^{\max} = (n^{\max}_j : j = 1, \dots, k-1)$ denote a point in $n \, \mathcal{S}_G$ where
$p_G(\mathbf{n})$ is maximized, $n \ge 1$.
\begin{conjecture}
\label{conj:path}
Let $k \ge 3$. Then for $1 \le j \le k-2$ we have
\eqnst
{ \lim_{n \to \infty} \frac{1}{n} \sum_{\ell=1}^j n^{\max}_\ell
= a_*(j;k). }
\end{conjecture}
The number $a_*(j;k)$ is obtained as the unique point
$a \in \left( \frac{j}{k}, \frac{j+1}{k} \right)$, for which the
`cheaper' of the two large deviation events
\eqnst
{ \left\{ \sum_{v=1}^j \sum_{t=1}^n \mathbf{1}_{V_t = v}
> a \, n \right\} \quad \text{ and } \quad
\left\{ \sum_{v=j+2}^k \sum_{t=1}^n \mathbf{1}_{V_t = v}
> (1-a) \, n \right\} }
is as `expensive' as possible. (This number $a$ can be obtained by equating
the large deviation rates of the two events.) Each $a_*(j;k)$ marks
out a linear submanifold of $\mathcal{S}_G$, and the location of the optimum
is their intersection. We expect that a similar characterization
holds for any connected graph $G$.
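For a quick numerical illustration, the following sketch (Python, added
here only to evaluate the displayed formula) tabulates $a_*(j;k)$ in
the case $k = 4$ of Figure \ref{fig:k4-plot}.
\begin{verbatim}
import math

def a_star(j, k):
    """The conjectured cumulative optimal fractions a_*(j;k)."""
    if j == 0:
        return 0.0
    if j == k - 1:
        return 1.0
    num = math.log((k - j - 1) / (k - j))
    den = math.log(j * (k - j - 1) / ((j + 1) * (k - j)))
    return num / den

print([round(a_star(j, 4), 4) for j in range(4)])
# -> [0.0, 0.3691, 0.6309, 1.0]
\end{verbatim}
The resulting cumulative fractions correspond to
$(n_1, n_2, n_3)/n \approx (0.369, 0.262, 0.369)$, a point strictly
inside the rectangle $\frac{1}{4} < x, y < \frac{1}{2}$ of
Figure \ref{fig:k4-plot}, consistent with the expectation above that
the maximizer lies in the interior of $\mathcal{R}_G$.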
The structure of the paper is as follows.
The proof of Theorem \ref{thm:phase-trans-graph} is given in
Section \ref{sec:main-proof}. We study the behaviour near $\partial \mathcal{R}_G$
in Section \ref{sec:critical-proof}, and deduce Theorem \ref{thm:bdry}.
We stress, however, that our analysis provides a much more refined picture
than Theorem \ref{thm:bdry}; see Propositions \ref{prop:far-enough},
\ref{prop:in-enough} and \ref{prop:near-critical}, and their proof. The estimates
in these propositions suggest Gaussian behaviour near $\partial \mathcal{R}_G$.
We conclude with some further questions in Section \ref{sec:open}.
\section{Proof of the phase transition}
\label{sec:main-proof}
The next section collects some preliminaries and useful notation.
\subsection{Basic properties of $\mathcal{R}_G$}
\label{ssec:prelim}
It will be convenient to have the version of $\mathcal{R}_G$ in which the
inequalities are not strict:
\eqnst
{ \mathcal{K}_G
= \left\{ \mathbf{x} \in \mathcal{S}_G : \text{for all $F \subset E$
we have $\sum_{e \in F} x_e \ge \frac{1}{k} d(F)$} \right\}. }
We denote by $H_F$ the hyperplanes appearing in these inequalities:
\eqnst
{ H_F
= \left\{ \mathbf{x} \in \mathbb{R}^E : \sum_{e \in F} x_e = \frac{1}{k} d(F) \right\},
\emptyset \not= F \subset E. }
In particular, $\mathcal{S}_G$, $\mathcal{R}_G$, $\mathcal{I}_G$ and $\mathcal{K}_G$ are all subsets of $H_E$.
\begin{lemma}
\label{lem:prop-R_G} \ \\
(i) The sets $\mathcal{K}_G$ and $\mathcal{R}_G$ are convex with a non-empty interior
relative to $H_E$.\\
(ii) $\mathcal{K}_G = \overline{\mathcal{R}_G}$ (the closure of $\mathcal{R}_G$ in $H_E$).
\end{lemma}
\begin{proof}
(i) As intersections of halfspaces with $H_E$, both $\mathcal{K}_G$ and $\mathcal{R}_G$ are convex.
Also, since the halfspaces defining $\mathcal{R}_G$ (resp.~$\mathcal{K}_G$) are open (resp.~closed),
$\mathcal{R}_G$ (resp.~$\mathcal{K}_G$) is a relatively open (resp.~closed) subset of $H_E$.
The containment $\mathcal{R}_G \subset \mathcal{K}_G$ is immediate from the definitions.
To show that $\mathcal{R}_G$ has non-empty interior, we check that the vector
\eqn{e:x^*}
{ \mathbf{x}^*
= (x^*_e : e \in E), \quad
x^*_e
= \frac{1}{k} \sum_{\substack{v \in V \\ v \sim e}} \frac{1}{\deg(v)},
\quad e \in E, }
belongs to $\mathcal{R}_G$. First, $\mathbf{x}^* \in H_E$ can be seen by summing the
formula for $x^*_e$ over $e \in E$ and exchanging the two sums. It is also
immediate that $x^*_e > 0$, and therefore $\mathbf{x}^* \in \mathcal{S}_G$.
Now fix any $\emptyset \subsetneq F \subsetneq E$. Since
$G$ is connected, there exists a vertex $v \in V$ such that
$0 < \deg_F(v) < \deg_G(v)$. Therefore,
\eqnsplst
{ \sum_{e \in F} x^*_e
&= \sum_{e \in F} \frac{1}{k} \sum_{\substack{v \in V \\ v \sim e}} \frac{1}{\deg(v)}
= \frac{1}{k} \sum_{\substack{v \in V \\ \deg_F(v) = \deg_G(v)}}
\sum_{\substack{e \in F \\ e \sim v}} \frac{1}{\deg_G(v)}
+ \frac{1}{k} \sum_{\substack{v \in V \\ \deg_F(v) < \deg_G(v)}}
\sum_{\substack{e \in F \\ e \sim v}} \frac{1}{\deg_G(v)} \\
&> \frac{1}{k} \sum_{\substack{v \in V \\ \deg_F(v) = \deg_G(v)}} 1
= \frac{d(F)}{k}. }
This shows that $\mathbf{x}^* \in \mathcal{R}_G$, and since $\mathcal{R}_G$ is open in $H_E$,
$\mathbf{x}^*$ is an interior point. The containment $\mathcal{R}_G \subset \mathcal{K}_G$ implies
that $\mathbf{x}^*$ is also an interior point of $\mathcal{K}_G$.
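(For instance, on the path with four vertices, \eqref{e:x^*} gives
$\mathbf{x}^* = \left( \tfrac{3}{8}, \tfrac{1}{4}, \tfrac{3}{8} \right)$,
which is the centre of the rectangle in Figure \ref{fig:k4-plot}.)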
(ii) Since $\mathcal{K}_G$ is closed, we have $\overline{\mathcal{R}_G} \subset \mathcal{K}_G$.
Therefore, it is enough to show that $\mathcal{K}_G \setminus \mathcal{R}_G \subset \overline{\mathcal{R}_G}$.
Let $\mathbf{x} \in \mathcal{K}_G \setminus \mathcal{R}_G$. Let $\mathbf{x}(t) = t \mathbf{x} + (1-t) \mathbf{x}^*$.
Convexity of $\mathcal{K}_G$ implies that $\mathbf{x}(t) \in \mathcal{K}_G$ for all $0 \le t \le 1$.
Moreover, since the expressions $\sum_{e \in F} x_e(t)$ are monotone linear functions
of $t$, and $\sum_{e \in F} x_e(0) > d(F)/k$, and $\sum_{e \in F} x_e(1) \ge d(F)/k$,
we must have the inequality $\sum_{e \in F} x_e(t) > \frac{1}{k} d(F)$
for all $0 \le t < 1$. This implies that $\mathbf{x}(t) \in \mathcal{R}_G$ for
$0 \le t < 1$, and hence $\mathbf{x} \in \overline{\mathcal{R}_G}$, as required.
\end{proof}
The optimality equation implies that the optimal deterministic strategy is
also optimal among randomized strategies. The next lemma states a
connection between elements of $\mathcal{K}_G$ and possible moves in a
randomized strategy. In its statement, we think of $q^{(v)}(e)$ as the
probability of assigning vertex $v$ to the edge $e$ in such a move.
\begin{lemma}
\label{lem:interpret}
We have $\mathbf{x} \in \mathcal{K}_G$ if and only if there exists a
collection $\{ q^{(v)}(e) : v \in V,\, e \in E \}$
of non-negative numbers such that:\\
(i) $\sum_{e \in E} q^{(v)}(e) = 1$ for all $v \in V$; \\
(ii) $q^{(v)}(e) = 0$ if $e$ is not incident with $v$; \\
(iii) $\frac{1}{k} \sum_{v \in V} q^{(v)}(e) = x_e$ for all $e \in E$.
\end{lemma}
\begin{proof}
We deduce the statement from the Max-Flow-Min-Cut Theorem \cite[Theorem III.1]{Bbook}.
Define an auxiliary
directed graph $G'$ as follows. Replace each edge $\{ v, w \}$
of $G$ by two directed edges $( v, u_e )$ and $( w, u_e )$, introducing
the new vertex $u_e$ for each $e \in E$. Also add new vertices
$s$ and $t$. Add a directed edge $( s, v )$ for each $v \in V$ and
a directed edge $( u_e, t )$ for each $e \in E$. Thus $G'$ has
$|V| + |E| + 2$ vertices and $2 |E| + |V| + |E|$ edges.
Consider flows of strength $1$ from $s$ to $t$ in $G'$, where
we assign capacity $1/k$ to each edge $(s, v)$, $v \in V$,
capacity $2$ to each $(v, u_e)$ and capacity $x_e$ to
each $(u_e, t)$.
Suppose $q^{(v)}(e)$ satisfy (i)--(iii). Define a flow by letting
$1/k$ flow on each $(s,v)$, $q^{(v)}(e)/k$ flow on each $(v, u_e)$,
and $x_e$ flow on each $(u_e,t)$. This flow satisfies
the capacity constraints, and it is a maximal flow, since
$\{ (s, v) : v \in V \}$ is a cut with value $1$. Therefore any
other cut must have value at least $1$.
Given $\emptyset \subset F \subset E$, consider the cut
\eqn{e:cut}
{ \{ (s, v) : \deg_F(v) < \deg_G(v) \}
\cup \{ (u_e, t) : e \in F \}, }
with value
\eqnst
{ \frac{k - d(F)}{k} + \sum_{e \in F} x_e
= 1 - \frac{d(F)}{k} + \sum_{e \in F} x_e
\ge 1. }
This implies that $\mathbf{x} \in \mathcal{K}_G$.
For the converse, suppose that $\mathbf{x} \in \mathcal{K}_G$, and
consider a maximal flow on $G'$. The conditions in the
definition of $\mathcal{K}_G$ imply that all cuts of the form
\eqref{e:cut} have value $\ge 1$, and the cut corresponding
to $F = E$ has value $1$. It is easy to check
that any minimal cut is necessarily of this form, and therefore
the maximal flow is $1$. Letting $q^{(v)}(e)$ be $k$-times the
amount flowing on $(v, u_e)$ we obtain a collection satisfying
(i)--(iii).
\end{proof}
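The construction in this proof also yields a computational membership
test for $\mathcal{K}_G$; the following minimal sketch (Python, assuming the
networkx library) builds the auxiliary network $G'$ with the capacities
above and checks whether the maximal flow equals $1$.
\begin{verbatim}
import networkx as nx

def in_K_G(edges, k, x, tol=1e-9):
    """Test x in K_G via the auxiliary network G': capacities
    1/k on (s, v), 2 on (v, u_e), and x_e on (u_e, t)."""
    D = nx.DiGraph()
    for v in range(k):
        D.add_edge('s', ('v', v), capacity=1.0 / k)
    for i, (a, b) in enumerate(edges):
        D.add_edge(('v', a), ('u', i), capacity=2.0)
        D.add_edge(('v', b), ('u', i), capacity=2.0)
        D.add_edge(('u', i), 't', capacity=x[i])
    value, _ = nx.maximum_flow(D, 's', 't')
    return value >= 1.0 - tol

path = [(0, 1), (1, 2), (2, 3)]
print(in_K_G(path, 4, [0.375, 0.25, 0.375]))  # True
print(in_K_G(path, 4, [0.20, 0.40, 0.40]))    # False
\end{verbatim}
The second call fails because $F = \{e_1\}$ has $d(F) = 1$ while
$x_{e_1} = 0.2 < \frac{1}{4}$: the corresponding cut \eqref{e:cut} has
value $\frac{3}{4} + 0.2 = 0.95 < 1$.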
Basic for Theorem \ref{thm:phase-trans-graph} is the following
computation. Suppose that our current state is
$\mathbf{n} = n \mathbf{x}$, $\mathbf{x} \in \mathcal{S}_G$.
Let $\{ q^{(v)}(e) \}_{v \in V, e \in E}$ be a set of
probabilities representing a randomized move (that is:
$q^{(v)}(e)$ is the probability that edge $e$ will be used,
conditional on the event that vertex $v$ has been drawn). Let
$\mathbf{N}' = (n-1) \mathbf{X}'$ be the random outcome of the move.
Let $y_e = \frac{1}{k} \sum_{v \in V} q^{(v)}(e)$. We have
\eqnspl{e:drift1}
{ \mathbf{E} \mathbf{X}'
&= \frac{1}{n-1} \mathbf{E} \mathbf{N}'
= \frac{1}{n-1} \left( \mathbf{n} - \sum_{e \in E} y_e \mathbf{1}^e \right)
= \frac{n}{n-1} \mathbf{x} - \frac{1}{n-1} \mathbf{y}
= \mathbf{x} + \frac{1}{n-1} ( \mathbf{x} - \mathbf{y} ). }
If $\mathbf{x} \in \mathcal{R}_G$, then due to Lemma \ref{lem:interpret}
it is possible to choose $\mathbf{y} \in \mathcal{K}_G$ in such a
way that the average displacement
points in any desired direction.
On the other hand, if $\mathbf{x} \in \mathcal{I}_G$, convexity of $\mathcal{K}_G$ implies that
the process will always move away from $\mathcal{R}_G$ on average.
The above observations are also reflected in the following deterministic
controlled differential equation:
\eqnst
{ \frac{d \mathbf{x}}{dt}
= \mathbf{x} - \mathbf{u}(t), \quad \text{where the control $\mathbf{u}$ satisfies
$\mathbf{u}(t) \in \mathcal{K}_G$ for all $t \ge 0$.} }
It is easy to see (for example using as Lyapunov function the distance
from $H_E \cap H_F$ for suitable $F$) that:\\
(i) If $\mathbf{x}(0) \not\in \mathcal{K}_G$, then for any control $\mathbf{u}$ we have
$\mathbf{x}(t) \not\in \mathcal{K}_G$ for all $t \ge 0$;\\
(ii) If $\mathbf{x}(0) \in \mathcal{R}_G$, then for any $\mathbf{x}' \in \mathcal{R}_G$ there exists a
control $\mathbf{u}$ such that $\lim_{t \to \infty} \mathbf{x}(t) = \mathbf{x}'$.
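As a sanity check of \eqref{e:drift1}, the following minimal sketch
(Python, for illustration) simulates one step on the path graph with
the move $q^{(v)}(e) = 1/\deg(v)$, for which $\mathbf{y} = \mathbf{x}^*$, and
compares the empirical mean of $\mathbf{X}'$ with
$\mathbf{x} + \frac{1}{n-1} (\mathbf{x} - \mathbf{y})$.
\begin{verbatim}
import random

EDGES = [(0, 1), (1, 2), (2, 3)]
K, n = 4, 100
x = [0.375, 0.25, 0.375]                # current rescaled state (= x*)
q = {0: {0: 1.0}, 1: {0: 0.5, 1: 0.5},
     2: {1: 0.5, 2: 0.5}, 3: {2: 1.0}}  # q^{(v)}(e) = 1/deg(v)
y = [sum(q[v].get(e, 0.0) for v in q) / K for e in range(3)]

trials, acc = 200000, [0.0, 0.0, 0.0]
for _ in range(trials):
    v = random.randrange(K)
    e = random.choices(list(q[v]), weights=list(q[v].values()))[0]
    for f in range(3):
        acc[f] += (n * x[f] - (1 if f == e else 0)) / (n - 1) / trials

print([round(a, 3) for a in acc])       # empirical mean of X'
print([round(x[f] + (x[f] - y[f]) / (n - 1), 3) for f in range(3)])
\end{verbatim}
With $\mathbf{x} = \mathbf{x}^*$ and this $\mathbf{y}$, both lines print
$(0.375, 0.25, 0.375)$: the average displacement of the rescaled state
vanishes, as the zero-drift case of \eqref{e:drift1} predicts.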
Let us introduce some further notation.
Throughout we write $\| \mathbf{w} \|_1 = \sum_{e \in E} |w_e|$ and
$| \mathbf{w} | = \sqrt{\sum_{e \in E} |w_e|^2}$ for any
vector $\mathbf{w} = (w_e : e \in E) \in \mathbb{R}^E$. For $\mathbf{w} \in \mathbb{R}^E$ and
$A \subset \mathbb{R}^E$ we write $\mathrm{dist}(\mathbf{w}, A) = \inf_{\mathbf{y} \in A} |\mathbf{w} - \mathbf{y}|$.
We will write $\langle \cdot, \cdot \rangle$ for the Euclidean
scalar product.
For each $\emptyset \subsetneq F \subsetneq E$ we fix a point $\mathbf{z}^F \in \mathcal{K}_G$
such that $\sum_{e \in F} z^F_e = \frac{d(F)}{k}$. Let $\mathbf{u}^F$ be the
unit vector of the form
\eqnst
{ u^F_e
= \begin{cases}
a_F & \text{if $e \in F$;}\\
-b_F & \text{if $e \in E \setminus F$,}
\end{cases} }
with $a_F, b_F > 0$, and such that
$\sum_{e \in E} u^F_e = 0$. For all $\mathbf{w} \in \mathcal{K}_G$ we have
$\langle \mathbf{w} - \mathbf{z}^F, \mathbf{u}^F \rangle \ge 0$. We will often use linear
functions of the form:
\eqnsplst
{ L^{F,n}(\mathbf{n})
= \langle \mathbf{n} - n \mathbf{z}^F, \mathbf{u}^F \rangle
= \sum_{e \in E} (n_e - n z^F_e) u^F_e. }
The last expression can be rewritten as follows:
\eqnsplst
{ &\sum_{e \in E} (n_e - n z^F_e) u^F_e
= a_F \sum_{e \in F} n_e + (-b_F) \left( n - \sum_{e \in F} n_e \right)
- n a_F \sum_{e \in F} z^F_e - n (-b_F) \left( 1 - \sum_{e \in F} z^F_e \right) \\
&\qquad = (a_F + b_F) \sum_{e \in F} n_e - n b_F - n (a_F + b_F) \sum_{e \in F} z^F_e + n b_F
= (a_F + b_F) \left( \sum_{e \in F} n_e - n \frac{d(F)}{k} \right). }
We define $\kappa = \kappa(G) = \min \{ (a_F + b_F) : \emptyset \subsetneq F \subsetneq E \} > 0$.
We will need the following lemma.
\begin{lemma}
\label{lem:equiv-metric}
There exist constants $b = b(G) > 0$ and $B = B(G)$ such that for all $\mathbf{w} \in \mathcal{K}_G$ we have
\eqn{e:equiv-metric}
{ b \, \mathrm{dist}( \mathbf{w}, \partial \mathcal{R}_G )
\le \min_{\emptyset \subsetneq F \subsetneq E} \left\{ \sum_{e \in F} w_e - \frac{d(F)}{k} \right\}
\le B \, \mathrm{dist}( \mathbf{w}, \partial \mathcal{R}_G ). }
We also have
\eqn{e:equiv-metric2}
{ \frac{1}{2} L^{F,n}(n \mathbf{w})
\le n \left( \sum_{e \in F} w_e - \frac{d(F)}{k} \right)
\le \frac{1}{\kappa} L^{F,n}(n \mathbf{w}), \quad n \ge 1. }
\end{lemma}
\begin{proof}
The proof of Lemma \ref{lem:prop-R_G}(ii) showed that $\mathcal{K}_G \setminus \mathcal{R}_G = \partial \mathcal{R}_G$.
Therefore, if $\mathbf{w} \in \mathcal{K}_G \setminus \mathcal{R}_G$ then $\sum_{e \in F} w_e = d(F)/k$ for some
$\emptyset \subsetneq F \subsetneq E$, and $\mathrm{dist}(\mathbf{w}, \partial \mathcal{R}_G) = 0$. In particular, the first
statement of the lemma holds when $\mathbf{w} \in \mathcal{K}_G \setminus \mathcal{R}_G$. Henceforth assume that
$\mathbf{w} \in \mathcal{R}_G$. Then since $\partial \mathcal{R}_G = \cup_{\emptyset \subsetneq F \subsetneq E} H_F \cap \mathcal{K}_G$,
we have
\eqn{e:dist-HF}
{ \mathrm{dist}( \mathbf{w}, \partial \mathcal{R}_G )
= \min_{\emptyset \subsetneq F \subsetneq E} \mathrm{dist}( \mathbf{w}, \mathcal{K}_G \cap H_F )
\ge \min_{\emptyset \subsetneq F \subsetneq E} \mathrm{dist}( \mathbf{w}, H_E \cap H_F ). }
We claim that the last inequality is in fact an equality. Let $F$ be a set
for which the minimum in the right hand side of \eqref{e:dist-HF} is attained.
Let $\mathbf{w}_0$ be the orthogonal projection of $\mathbf{w}$ onto $H_E \cap H_F$ in the linear
space $H_0$. If the line segment $\mathbf{w} \, \mathbf{w}_0$ had an interior point $\mathbf{w}_1$
belonging to some other $H_{F'}$, then this would contradict the minimal
choice of $F$. Therefore, the entire line segment $\mathbf{w} \, \mathbf{w}_0$, apart from $\mathbf{w}_0$,
belongs to $\mathcal{R}_G$, with $\mathbf{w}_0 \in \partial \mathcal{R}_G$.
Hence $\mathrm{dist} ( \mathbf{w}, H_E \cap H_F) = \mathrm{dist}( \mathbf{w}, \mathbf{w}_0 ) \ge \mathrm{dist} ( \mathbf{w}, \partial \mathcal{R}_G )$.
This proves our claim. Since $\mathbf{w} \in H_E$, there exists a constant $B_0$, that only depends on
$\min \{ \text{angle between $H_E$ and $H_F$} : \emptyset \subsetneq F \subsetneq E \}$,
such that
\eqnst
{ \mathrm{dist}( \mathbf{w}, H_F )
\le \mathrm{dist}( \mathbf{w}, H_E \cap H_F )
\le B_0 \, \mathrm{dist} ( \mathbf{w}, H_F ). }
This implies the first statement of the lemma, since
$\mathrm{dist} ( \mathbf{w}, H_F ) = |F|^{-1/2} \left( \sum_{e \in F} w_e - \frac{d(F)}{k} \right)$.
The second statement of the lemma follows from the definition of $\kappa(G)$,
and the fact that $a_F, b_F \le 1$ (since $\mathbf{u}^F$ is a unit vector).
\end{proof}
Recall that we write $\mathbf{n} = n \mathbf{x}$ for the starting state. Given a randomized
strategy, we write $\mathbf{X}(t) = \frac{1}{n-t} \mathbf{N}(t)$.
Note that we allow the processes $\mathbf{N}(t)$, $\mathbf{X}(t)$, etc.~to have
negative entries, and once this happens, we have $\mathbf{X}(t) \not\in \mathcal{S}_G$
for all further times.
We write $\mathbf{Y}(t-1)$ for the vector of edge weights that our strategy
prescribes for round $t$, and $E(t) \in E$ for the random edge selected
in round $t$ according to this strategy. We write
\eqnst
{ \mathcal{F}_t
= \sigma \left( \mathbf{N}(s),\, \mathbf{Y}(s) : 0 \le s \le t \right) }
for the filtration of the process.
\subsection{Steering}
\label{ssec:steering}
In the following proposition we show that if $n$ is large enough,
then starting from any state in $\mathcal{R}_G$ that is bounded away from the
boundary, there is a strategy that steers the process close to
any other such point in $\mathcal{R}_G$.
\begin{proposition}
\label{prop:main-steer}
Given $\delta > 0$, there exist $c_1 = c_1(G,\delta) > 0$,
$\lambda_1 = \lambda_1(G, \delta) > 0$, $n_0 = n_0 (G, \delta)$,
$K_1 = K_1(G, \delta)$ and $C_1 = C_1(G, \delta)$ such that the following holds.
Let $n$ and $n_1$ be any positive integers such that $n \ge ( 1 + K_1) n_1$ and
$n_1 \ge n_0$. Suppose that $\mathbf{n} = n \mathbf{x}$ with $\mathrm{dist}(\mathbf{x}, \partial \mathcal{R}_G) \ge \delta$.
Suppose also that $\mathbf{z} \in \mathcal{R}_G$ with $\mathrm{dist}( \mathbf{z}, \partial \mathcal{R}_G ) \ge \delta$,
with $n_1 \mathbf{z}$ having integer coordinates. There exists a randomized
strategy starting from state $\mathbf{n}$ such that under this strategy we have:
\eqn{e:hit-exact}
{ \mathbf{P} [ \mathbf{N}(n - n_1) = n_1 \mathbf{z} ]
\ge c_1; }
and for all $q \ge 1$ we have
\eqn{e:not-deviate}
{ \mathbf{P} \left[ \left| \mathbf{N}(n - n_1) - n_1 \mathbf{z} \right| > q \right]
\le C_1 \exp ( - \lambda_1 q ). }
\end{proposition}
The strategy will be defined in three stages: in the first stage
we reduce $| \mathbf{N}(t) - (n - t) \mathbf{z}|$ to $O(1)$; in the second stage we
keep it within $O(1)$ until time $n - n_1 - O(1)$; and we use the
last $O(1)$ steps to attempt to hit $n_1 \mathbf{z}$ exactly.
The first two of these steps are the content of the next two lemmas.
After proving the lemmas we assemble them to prove
Proposition \ref{prop:main-steer}.
\begin{lemma}
\label{lem:1st-stage}
Given $\delta > 0$ there exist $K_2 = K_2(G, \delta)$,
$d_0 = d_0(\delta)$, $\lambda_2 = \lambda_2(G, \delta) > 0$
and $C_2 = C_2(G)$ such that for any $\mathbf{x}, \mathbf{z}$ with
$\mathrm{dist}(\mathbf{x}, \partial \mathcal{R}_G), \mathrm{dist}(\mathbf{z}, \partial \mathcal{R}_G) \ge \delta$
the following holds. For any $n, n'$ with $n \ge K_2 n'$ and
$n'$ large enough there is a randomized strategy starting from
state $\mathbf{n} = n \mathbf{x}$ such that the stopping time
\eqnst
{ \tau_{d_0}
= \inf \{ t \ge 0 : | \mathbf{N}(t) - (n-t) \mathbf{z} | \le d_0 \} }
satisfies
\eqn{e:1st-stage-estimate}
{ \mathbf{P} [ \tau_{d_0} > n - n' ]
\le C_2 \exp ( - \lambda_2 n' ). }
\end{lemma}
\begin{proof}
The value of $d_0 > 0$ will be chosen in the course of the proof.
We are also going to use a small parameter $0 < \varepsilon_0 < \delta/4$,
chosen later. The first step of the proof is to reach an
$\varepsilon_0$-neighbourhood of $\mathbf{z}$.
Let $\mathbf{y}$ be the point where the halfline starting at $\mathbf{z}$ and
passing through $\mathbf{x}$ intersects $\partial \mathcal{R}_G$. Let $\mathbf{u}$ denote
the unit vector with the same direction as $\mathbf{x} - \mathbf{z}$.
In the first step, we use the following strategy: given the current state
$\mathbf{N}(t) = (n-t) \mathbf{X}(t)$, we select $\mathbf{Y}(t) \in \partial\mathcal{R}_G$
such that $\mathbf{Y}(t) - \mathbf{X}(t)$ is a positive multiple of $\mathbf{u}$.
In particular, $\mathbf{Y}(0) = \mathbf{y}$.
We employ this strategy until the stopping time $\tau(1)$ defined by
\eqnst
{ \tau(1)
= \inf \{ t \ge 0 : | \mathbf{X}(t) - \mathbf{z} | \le \varepsilon_0 \}. }
Let us write $\mathbf{X}^\mathrm{ort}(t)$ for the component of the vector
$\mathbf{X}(t) - \mathbf{z}$ orthogonal to $\mathbf{u}$.
Let
\eqn{e:S(t)-1}
{ S(t)
= \langle \mathbf{N}(t) - (n-t) \mathbf{z}, \mathbf{u} \rangle. }
Since
\eqnst
{ \mathbf{N}(t+1)
= \left( \mathbf{N}(t) - \mathbf{Y}(t) \right) + \left( \mathbf{Y}(t) - \mathbf{1}^{E(t+1)} \right), }
and the second term has mean $\mathbf{0}$ given $\mathcal{F}_t$, we have
\eqnspl{e:mart}
{ \mathbf{E} [ S(t+1) \,|\, \mathcal{F}_t ]
&= S(t) - \langle \mathbf{Y}(t) - \mathbf{z}, \mathbf{u} \rangle. }
Since $\mathbf{x}$ and $\mathbf{z}$ are bounded away from $\partial \mathcal{R}_G$,
there exist $\mu = \mu(G,\delta) > 1$ and $\varepsilon_0 = \varepsilon_0(G, \delta) > 0$
such that as long as $|\mathbf{X}^\mathrm{ort}(t)| \le \frac{\varepsilon_0}{2}$, we have
\eqn{e:drift-lb}
{ \langle \mathbf{Y}(t) - \mathbf{z}, \mathbf{u} \rangle
\ge \mu |\mathbf{x} - \mathbf{z}|. }
This implies that $S'(t) = S(t) + t \mu |\mathbf{x} - \mathbf{z}|$ is a
supermartingale as long as $|\mathbf{X}^\mathrm{ort}(t)| \le \varepsilon_0/2$.
On the other hand, due to the calculation in \eqref{e:drift1},
$\mathbf{X}^\mathrm{ort}(t)$ is a martingale.
Let $t_1 = \frac{1 + \mu}{2 \mu} n$. Due to the choice of
$\mu$ and $\varepsilon_0$, we have the inclusions
\eqnspl{e:terminate}
{ \left\{ \tau(1) > t_1 \right\}
&\subset \{ \text{$|\mathbf{X}^\mathrm{ort}(s)| > \varepsilon_0/2$ for some $0 \le s \le t_1$} \} \\
&\qquad\qquad \cup \{ \text{$S(s) > \mu (n-s) |\mathbf{x} - \mathbf{z}|$ for some $0 \le s \le t_1$} \}
\cup \{ \text{$S(t_1) \ge 1$} \} \\
&\subset \left\{ \max_{0 \le s \le t_1} |\mathbf{X}^\mathrm{ort}(s)| > \varepsilon_0/2 \right\}
\cup \left\{ \max_{0 \le s \le t_1} S'(s) - S'(0) > (\mu - 1) n |\mathbf{x} - \mathbf{z}| \right\} \\
&\qquad\qquad \cup \left\{ \max_{0 \le s \le t_1} S'(s) - S'(0)
> \frac{\mu - 1}{2} n |\mathbf{x} - \mathbf{z}| \right\}. }
The inclusions \eqref{e:terminate} imply
\eqnspl{e:1st-stage-bad-0}
{ \mathbf{P} [ \tau(1) > t_1 ]
\le \mathbf{P} \left[ \max_{0 \le s \le t_1} S'(s) - S'(0) > \frac{\mu - 1}{2} n |\mathbf{x} - \mathbf{z}| \right]
+ \mathbf{P} \left[ \max_{0 \le s \le t_1} |\mathbf{X}^\mathrm{ort}(s)| > \varepsilon_0/2 \right]. }
Since $S'(t)$ has increments bounded by $(1 + \mu) \sqrt{2}$,
while $|\mathbf{X}^\mathrm{ort}(t+1) - \mathbf{X}^\mathrm{ort}(t)| \le \sqrt{2}/(n-t-1)$,
we can apply the Azuma-Hoeffding inequality
(see \cite[Exercise E14.2]{Wbook} or \cite[Theorem 12.2(3)]{GSbook})
to $\{ S'(t) \}_{t \ge 0}$ as well as to the projection of
$\{ \mathbf{X}^\mathrm{ort}(t) \}_{t \ge 0}$ to each coordinate direction.
This yields
\eqnspl{e:1st-stage-bad}
{ \mathbf{P} [ \tau(1) > t_1 ]
&\le \exp \left( - \frac{(\mu - 1)^2}{8} \frac{n^2 |\mathbf{x} - \mathbf{z}|^2}{t_1 \, 2 \, (1+\mu)^2} \right)
+ 2 |E| \exp \left( - \frac{1}{8} \frac{\varepsilon_0^2}{t_1 |E| \sum_{s = 1}^{t_1} \frac{2}{(n - s)^2} } \right) \\
&\le C' \exp ( - \lambda'n ) }
for some $\lambda' = \lambda'(\mu, \varepsilon_0) > 0$ and $C' = C'(G)$.
For the second step we condition on the point
$\mathbf{n}_1 = n_1 \mathbf{x}_1 = \mathbf{N}(\tau(1))$, such that $n - n_1 \le t_1$ and
$| \mathbf{x}_1 - \mathbf{z} | \le \varepsilon_0 < \delta/4$. For ease of notation,
we re-parametrize time for this step so that $\mathbf{N}(0) = \mathbf{n}_1$.
We choose $\mathbf{Y}(t)$ to be the point where the halfline starting at $\mathbf{z}$
and passing through $\mathbf{X}(t)$ intersects $\partial \mathcal{R}_G$. Let us write
$\mathbf{u}(t)$ for the unit vector with the same direction as $\mathbf{X}(t) - \mathbf{z}$.
Decompose $\mathbf{X}(t+1) - \mathbf{z} = X'(t+1) \mathbf{u}(t) + \mathbf{X}''(t+1)$, where
$\langle \mathbf{X}''(t+1), \mathbf{u}(t) \rangle = 0$.
As long as $|\mathbf{N}(t) - (n-t) \mathbf{z}| \ge d_0$, we have
\eqnsplst
{ | \mathbf{N}(t+1) - (n-t-1) \mathbf{z}|
&= \sqrt{ \langle \mathbf{N}(t+1) - (n-t-1) \mathbf{z}, \mathbf{u}(t) \rangle^2 + (n-t-1)^2|\mathbf{X}''(t+1)|^2 } \\
&\le \sqrt{ \langle \mathbf{N}(t+1) - (n-t-1) \mathbf{z}, \mathbf{u}(t) \rangle^2 + 2 } \\
&\le \langle \mathbf{N}(t+1) - (n-t-1) \mathbf{z}, \mathbf{u}(t) \rangle + \frac{2}{d_0-\sqrt{2}}. }
Therefore,
\eqnsplst
{ \mathbf{E} \big( | \mathbf{N}(t+1) - (n-t-1) \mathbf{z}| \,\big|\, \mathcal{F}_t \big)
&\le \mathbf{E} \big( \langle \mathbf{N}(t+1) - (n-t-1) \mathbf{z}, \mathbf{u}(t) \rangle \,\big|\, \mathcal{F}_t \big)
+ \frac{2}{d_0-\sqrt{2}} \\
&= \langle \mathbf{N}(t) - (n-t) \mathbf{z}, \mathbf{u}(t) \rangle - \langle \mathbf{Y}(t) - \mathbf{z}, \mathbf{u}(t) \rangle
+ \frac{2}{d_0-\sqrt{2}} \\
&\le |\mathbf{N}(t) - (n-t) \mathbf{z}| - \delta + \frac{2}{d_0-\sqrt{2}}. }
Hence if we require that $d_0 \ge \sqrt{2} + \frac{4}{\delta}$, then
\eqnst
{ D(t)
= |\mathbf{N}(t) - (n-t) \mathbf{z}| + \frac{\delta}{2} t, \quad t \ge 0, }
is a supermartingale until $\tau_{d_0}$. Since the increments of
$D(t)$ are bounded by $2 + \frac{\delta}{2} < 3$, and
$\varepsilon_0 < \frac{\delta}{4}$, it follows with $t_2 = \frac{3}{4} n_1$ that
\eqnst
{ \mathbf{P} \left[ \tau_{d_0} > t_2 \right]
\le \mathbf{P} \left[ \max_{0 \le s \le t_2} (D(s) - D(0)) > \frac{\delta}{8} n_1 \right]
\le \exp \left( - \frac{\delta^2 t_2^2}{64 \cdot 3^2 \, t_2} \right)
\le \exp ( - \lambda'' n_1 ) }
with some $\lambda'' = \lambda''(\delta) > 0$.
Putting the two parts together, the statement follows if we choose
$K_2 = \frac{8 \mu}{\mu - 1}$.
\end{proof}
\begin{lemma}
\label{lem:2nd-stage}
Given $\delta > 0$ there exist $\lambda_3 = \lambda_3(\delta) > 0$
and $C_3 = C_3(\delta)$ such that for all $n' \ge n'' \ge 0$
and all $\mathbf{w}, \mathbf{z} \in \mathcal{K}_G$ with $\mathrm{dist}(\mathbf{z}, \partial \mathcal{R}_G) \ge \delta$
and $|n' \mathbf{w} - n'\mathbf{z}| \le d_0(\delta)$, the following holds. There
exists a randomized strategy starting in state $\mathbf{n}' = n' \mathbf{w}$ such that
for all $q \ge 1$ we have
\eqn{e:2nd-stage-estimate}
{ \mathbf{P} \left[ | \mathbf{N}(n' - n'') - n'' \mathbf{z} | > q \right]
\le C_3 \exp ( - \lambda_3 q ). }
\end{lemma}
\begin{proof}
When $| \mathbf{N}(t) - (n'-t) \mathbf{z}| < d_0$, let us apply an arbitrary move;
otherwise, let us follow the strategy used in the second part of
Lemma \ref{lem:1st-stage}. We saw in the proof of Lemma \ref{lem:1st-stage}
that
\eqnst
{ D(t)
= |\mathbf{N}(t) - (n'-t) \mathbf{z}| + \frac{\delta}{2} \sum_{0 \le s < t}
I [ |\mathbf{N}(s) - (n'-s) \mathbf{z}| \ge d_0 ] }
is a supermartingale on any time interval $s \in [t_1,t_2)$ on which
$|\mathbf{N}(s) - (n'-s) \mathbf{z}| \ge d_0$. Consider the event
\eqnst
{ F(q)
= \left\{ \text{$|\mathbf{N}(n' - n'') - n'' \mathbf{z} | > 4 q$} \right\}, }
and suppose $q > d_0$. When $n' - n'' < q$, the event $F(q)$ is
impossible, because $| \mathbf{N}(0) - n' \mathbf{z} | \le d_0 < q$ and the
increments of $|\mathbf{N}(t) - (n'-t) \mathbf{z}|$ are bounded by $2$.
Hence we may assume that $\ell_\mathrm{max} := \lfloor (n' - n'')/q \rfloor \ge 1$.
Since $D(0) \le d_0 < q$, the inequalities
\eqn{e:not-all}
{ | \mathbf{N}(n' - n'' - \ell q ) - (n'' + \ell q) \mathbf{z} |
> 4 q, \quad \ell = 0, \dots, \ell_\mathrm{max}, }
cannot all simultaneously be satisfied. Summing over the
smallest $\ell$ for which \eqref{e:not-all} fails, we have
\eqnspl{e:2nd-stage-bad-3}
{ \mathbf{P} [ F(q) ]
&\le \sum_{1 \le \ell \le \ell_\mathrm{max}} \mathbf{P} \left[ D(n' - n'')
- D(n' - n'' - \ell q) > \frac{\delta}{2} q \ell \right] \\
&\le \sum_{\ell \ge 1} \exp \left( - \frac{1}{8} \frac{\delta^2 q^2 \ell^2}{3^2 \, q \ell} \right)
\le C_3 \exp ( - \lambda_3 q ). }
Adjusting the constant $C_3$, if necessary, we have the statement for all
$q > 0$. This completes the proof.
\end{proof}
\begin{remark}
Note that the above strategy does not require the coordinates to stay positive.
This will become important in Section \ref{ssec:II}.
\end{remark}
\begin{proof}[Proof of Proposition \ref{prop:main-steer}.]
Observe that if there is no point $\mathbf{w}$ such that
$\mathrm{dist}( \mathbf{w}, \partial \mathcal{R}_G ) \ge \delta$, then the statement of the
Proposition holds vacuously. Henceforth assume that $\delta$ is small enough
so that the set above is non-empty. We choose $q_0 \ge 2$ so that for the event $F(q)$ introduced in the
proof of Lemma \ref{lem:2nd-stage} we have $\mathbf{P} [ F(q_0/4) ] \le \frac{1}{2}$.
Let $M$ be the smallest integer such that
\eqnst
{ M
\ge \left( \min \left\{ w_e : e \in E,\, \mathbf{w} \in \mathcal{R}_G,\,
\mathrm{dist} (\mathbf{w}, \partial \mathcal{R}_G) \ge \delta \right\} \right)^{-1}, }
which is finite by our assumption on $\delta$.
We choose $K_1$ and $n_0$ such that $n \ge K_1 n_1$ and $n_1 \ge n_0$ imply
$n \ge K_2 ( n_1 + M q_0 )$, where $K_2$ is the constant from
Lemma \ref{lem:1st-stage}.
Following the strategies in Lemmas \ref{lem:1st-stage} and \ref{lem:2nd-stage}
during the first $n - n_1 - M q_0$ steps we have
\eqn{e:2nd-stage-good}
{ \mathbf{P} \left[ | \mathbf{N}(n - n_1 - M q_0) - (n_1 + M q_0 ) \mathbf{z} | \le q_0 \right]
\ge \frac{1}{2} - C_2 \exp ( - \lambda_2 n_1 )
\ge \frac{1}{4}, }
if $n_0$ is large enough. On the event in \eqref{e:2nd-stage-good} we have
\eqnsplst
{ &N_e(n - n_1 - M q_0) - n_1 z_e \\
&\qquad \ge (M q_0) z_e - | N_e(n - n_1 - M q_0)
- (n_1 + M q_0) z_e | \\
&\qquad \ge q_0 - q_0
= 0, \quad e \in E. }
Therefore, $\mathbf{N}(n - n_1 - M q_0) \ge n_1 \mathbf{z}$ componentwise, and
there is a strictly positive probability $c_1 = c_1(G, \delta) > 0$
that $n_1 \mathbf{z}$ can be hit exactly from the state $\mathbf{N}(n - n_1 - M q_0)$.
This proves \eqref{e:hit-exact} of the Proposition.
Since the form of the bound \eqref{e:2nd-stage-estimate} is not
affected by taking $M q_0$ extra steps, statement
\eqref{e:not-deviate} follows from the estimates \eqref{e:1st-stage-estimate}
and \eqref{e:2nd-stage-estimate} of Lemmas \ref{lem:1st-stage} and \ref{lem:2nd-stage}.
\end{proof}
\subsection{Proof of the Main Theorem}
\label{ssec:proof-main}
In this section we complete the proof of Theorem \ref{thm:phase-trans-graph}.
\begin{proof}[Proof of Theorem \ref{thm:phase-trans-graph}(i).]
Fix $\mathbf{x} \in \mathcal{I}_G$, and let
$\emptyset \subsetneq F \subsetneq E$ be a set such that
$\sum_{e \in F} x_e < \frac{d(F)}{k}$. Then for some $\varepsilon = \varepsilon(G, \mathbf{x}) > 0$
and sufficiently large $n$ we have
$\frac{1}{n} \sum_{e \in F} N_e(0) < \frac{d(F)}{k} - \varepsilon$.
Let
\eqnst
{ Y_t
= \begin{cases}
1 & \text{if $\deg_F(V_t) = \deg_G(V_t)$;} \\
0 & \text{otherwise.}
\end{cases} }
Since any $v$ with $\deg_F(v) = \deg_G(v)$ must be
assigned to one of the edges in $F$, we have
\eqnsplst
{ p_G(\mathbf{n})
&\le \mathbf{P} \left[ \sum_{t = 1}^n Y_t \le \sum_{e \in F} N_e(0) \right]
\le \mathbf{P} \left[ \frac{1}{n} \sum_{t=1}^n Y_t < \frac{d(F)}{k} - \varepsilon \right]
\le \exp \left( -n \frac{\varepsilon^2}{4} \right) }
by Bernstein's inequality; see \cite[Theorem 2.2(1)]{GSbook}.
The rate of decay is bounded away from $0$ as long as
$\mathbf{x}$ is bounded away from $\partial \mathcal{R}_G$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:phase-trans-graph}(ii).]
We show that for any fixed $\delta > 0$ we have
\eqn{e:limits}
{ \lim_{n \to \infty} M_n
= \lim_{n \to \infty} m_n
= \alpha, }
where
\eqnsplst
{ m_n
&= m_n(\delta)
= \min \left\{ p_G(\mathbf{n}) : \sum_{e \in E} n_e = n,\,
\mathrm{dist}(\mathbf{n}/n, \partial \mathcal{R}_G) \ge \delta \right\}, \quad n \ge 1; \\
M_n
&= M_n(\delta)
= \max \left\{ p_G(\mathbf{n}) : \sum_{e \in E} n_e = n,\,
\mathrm{dist}(\mathbf{n}/n, \partial \mathcal{R}_G) \ge \delta \right\}, \quad n \ge 1; \\
\alpha
&= \alpha(\delta)
= \liminf_{n \to \infty} m_n(\delta). }
We consider $n' \ge n_0$, $n \ge K_1 n'$ and $\mathbf{n} = n \mathbf{x}$ such that
$m_n = p_G(\mathbf{n})$. We apply Proposition \ref{prop:main-steer} with
$\mathbf{z} = \mathbf{n}'/n'$, where $\mathbf{n}'$ is chosen so that $M_{n'} = p_G(\mathbf{n}')$.
Let $\varphi(\mathbf{r})$ denote the probability that with the
strategy described in Proposition \ref{prop:main-steer}
the state at time $n - n'$ is $n' \mathbf{z} + \mathbf{r}$,
where $\sum_{e \in E} r_e = 0$. Due to
Proposition \ref{prop:main-steer}, we have
$\varphi(\mathbf{0}) \ge c_1$. Therefore, we can write
\eqnsplst
{ m_n
= p_G(\mathbf{n})
&\ge \sum_{\mathbf{r} : \sum_{e \in E} r_e = 0} \varphi(\mathbf{r}) \, p_G(n' \mathbf{z} + \mathbf{r})
\ge c_1 p_G(n' \mathbf{z}) + \sum_{\substack{\mathbf{r} \not= \mathbf{0} : \\ \sum_{e \in E} r_e = 0}}
\varphi(\mathbf{r}) \, p_G(n' \mathbf{z} + \mathbf{r}) \\
&\ge c_1 (M_{n'} - m_{n'}) + \sum_{\mathbf{r} : \sum_{e \in E} r_e = 0}
\varphi(\mathbf{r}) \, m_{n'} \\
&\ge c_1 (M_{n'} - m_{n'}) + m_{n'} - C \exp ( -\lambda n' ) }
with some $\lambda > 0$ and $C$ depending on $\delta$ and
$\lambda_1, \lambda_2, \lambda_3$. Rearranging gives
\eqn{e:with-c}
{ M_{n'} - m_{n'}
\le \frac{1}{c_1} ( m_n - m_{n'} ) + \frac{C}{c_1} \exp ( -\lambda n' ). }
Since $n \ge K_1 n'$ was arbitrary, taking $\liminf_{n \to \infty}$ yields
\eqn{e:liminf}
{ M_{n'} - m_{n'}
\le \frac{1}{c_1} (\alpha - m_{n'}) + \frac{C}{c_1} \exp ( - \lambda n' ). }
Taking $\limsup_{n' \to \infty}$ in \eqref{e:liminf} yields $M_{n'} - m_{n'} \to 0$.
Taking $\liminf_{n' \to \infty}$ in \eqref{e:liminf} yields
\eqnst
{ 0
\le \liminf_{n' \to \infty} (M_{n'} - m_{n'})
\le \frac{1}{c_1} (\alpha - \limsup_{n' \to \infty} m_{n'})
\le 0. }
This shows that $\lim_{n' \to \infty} m_{n'} = \alpha$, and the
proof of \eqref{e:limits} is complete.
The limit does not depend on $\delta$, since for
$0 < \delta_1 < \delta_2$ we have
\eqnst
{ m_n(\delta_1)
\le m_n(\delta_2)
\le M_n(\delta_2)
\le M_n(\delta_1), }
and hence $\alpha(\delta_1) = \alpha(\delta_2) = c_G$.
We conclude the proof by noting that $c_G > 0$. This is because
Proposition \ref{prop:main-steer} implies that
the process can be steered close to the point $n_0 \mathbf{x}^*$
for a sufficiently large $n_0$ with positive probability, and
from here there is a strictly positive probability of winning.
\end{proof}
\begin{remark}
Since the left hand side of \eqref{e:liminf} is non-negative,
we can rearrange to get
\eqnst
{ m_{n'}
\le \alpha + C \exp ( - \lambda n' ), \quad n' \ge n_0. }
We do not have a corresponding exponential lower bound on the speed
at which the limit $\alpha$ is approached.
See Question \ref{prob:exp-converge} in Section \ref{sec:open}.
\end{remark}
\section{Upper bounds in the critical region}
\label{sec:critical-proof}
In this section we obtain estimates in the critical region.
This requires distinguishing a few cases, which we state as
separate propositions in the next subsection and then use to prove
Theorem \ref{thm:bdry}. The proofs of the three propositions are given
in Sections \ref{ssec:I}, \ref{ssec:II} and \ref{ssec:III},
respectively.
\subsection{Statements of upper bounds in three subregions}
\label{ssec:state-regions}
We define the sets of configurations
\eqnspl{e:cB_G}
{ \mathcal{B}_G^I(n; A)
&= \left\{ \mathbf{n} \in n \mathcal{S}_G : \, \text{for some
$\emptyset \subsetneq F \subsetneq E$ we have
$L^{F,n}(\mathbf{n}) \le -A \sqrt{n}$} \right\} \\
\mathcal{B}_G^{II}(n; A)
&= \left\{ \mathbf{n} \in n \mathcal{S}_G : \, \text{for all
$F$ with $0 < d(F) < k$ we have
$L^{F,n}(\mathbf{n}) \ge A \sqrt{n}$} \right\} \\
\mathcal{B}_G^{III}(n; A)
&= \left\{ \mathbf{n} \in n \mathcal{S}_G : \,
-A \sqrt{n} < \min_{F: 0 < d(F) < k} L^{F,n}(\mathbf{n})
< A \sqrt{n} \right\}. }
\begin{proposition}
\label{prop:far-enough}
For all $A > 0$ we have
\eqnst
{ \limsup_{n \to \infty} \, \max \{ p_G(\mathbf{n}) : \mathbf{n} \in \mathcal{B}_G^I(n; A) \}
\le \exp \left( - \frac{A^2}{8} \right). }
In particular, the $\limsup$ is at most $c_G$, if
$A \ge \sqrt{8 \log (1/c_G)}$.
\end{proposition}
\begin{proposition}
\label{prop:in-enough}
There exist constants $C_4 = C_4(G)$ and $\lambda_4 = \lambda_4(G) > 0$
such that for all $A \ge 1$ we have
\eqn{e:in-enough}
{ \limsup_{n \to \infty} \, \max \{ p_G(\mathbf{n}) : \mathbf{n} \in \mathcal{B}_G^{II}(n; A) \}
\le c_G + C_4 \exp ( - \lambda_4 A^2 ). }
\end{proposition}
\begin{proposition}
\label{prop:near-critical}
There exists $A_0 = A_0(G)$ such that for all $A \ge A_0$ we have
\eqnst
{ \limsup_{n \to \infty} \, \max \{ p_G(\mathbf{n}) : \mathbf{n} \in \mathcal{B}_G^{III}(n; A) \}
\le c_G + C_4 \exp ( - \lambda_4 A^2 ). }
\end{proposition}
\begin{proof}[Proof of Theorem \ref{thm:bdry} assuming
Propositions \ref{prop:far-enough},
\ref{prop:in-enough}, \ref{prop:near-critical}.]
Given $\varepsilon > 0$, choose $A$ sufficiently large so that
each of the upper bounds in Propositions \ref{prop:far-enough},
\ref{prop:in-enough} and \ref{prop:near-critical}
is at most $c_G + \varepsilon$. Since with this fixed choice of $A$
the sets $\mathcal{B}_G^I$, $\mathcal{B}_G^{II}$ and $\mathcal{B}_G^{III}$
cover all possibilities, the statement follows.
\end{proof}
\subsection{Upper bound for $\mathcal{B}_G^{I}$}
\label{ssec:I}
\begin{proof}[Proof of Proposition \ref{prop:far-enough}.]
We may fix the set $F$ in the definition of $\mathcal{B}_G^I(n; A)$
and argue separately for each such set. Let us fix $\delta > 0$.
Due to Theorem \ref{thm:phase-trans-graph}(i),
we may restrict to $\mathbf{n}$ such that
\eqnst
{ - \delta n
< L^{F,n}(\mathbf{n})
\le - A \sqrt{n}. }
Let us follow the optimal strategy starting in
configuration $\mathbf{n}$. The process $S(t) = L^{F,n-t} (\mathbf{N}(t))$
is a supermartingale due to
\eqn{e:S(t)-supermart}
{ \mathbf{E} [ S(t+1) \,|\, \mathcal{F}_t ]
= S(t) - \langle \mathbf{Y}(t) - \mathbf{z}^F, \mathbf{u}^F \rangle
\le S(t). }
Consider the stopping time
\eqnst
{ \tau
= \left( \lfloor n - c \sqrt{n} \rfloor + 1 \right) \, \wedge \,
\inf \{ t \ge 0 : S(t) < -\delta(n-t) \}, }
where $c = \frac{A}{2 \delta}$. Then we have
\eqnsplst
{ \mathbf{P} [ \tau > n - c \sqrt{n} ]
&\le \mathbf{P} \left[ \max_{0 \le t \le \lfloor n - c\sqrt{n} \rfloor} S(t) - S(0) > (A - \delta c ) \sqrt n \right] \\
&\le \exp \left( -\frac{1}{2} \frac{(A - \delta c)^2 \, n}{\lfloor n - c \sqrt{n} \rfloor} \right)
\le \exp \left( - \frac{A^2}{8} \right). }
Due to the optimality equation, $p_G(\mathbf{N}(t))$ is a bounded martingale.
Hence by optional stopping we have
\eqnspl{e:OST}
{ p_G(\mathbf{n})
&= \mathbf{E} [ p_G(\mathbf{N}(\tau)) ;\, \tau \le n - c \sqrt{n},\, S(\tau) < - \delta (n - \tau) ]
+ \mathbf{E} [ p_G(\mathbf{N}(\tau)) ;\, \tau > n - c \sqrt{n} ]. }
The first term in the right hand side of \eqref{e:OST} is at most
\eqnst
{ \max \left\{ p_G(\mathbf{n}') : \| \mathbf{n}' \|_1 \ge c \sqrt{n},\, L^{F,n'}(\mathbf{n}') < -\delta n' \right\}, }
which goes to $0$, as $n \to \infty$, due to Theorem \ref{thm:phase-trans-graph}(i).
The second term in the right hand side of \eqref{e:OST} is
at most $\mathbf{P} [ \tau > n - c \sqrt{n} ] \le \exp ( - \frac{A^2}{8} ) < c_G$, due to our choice
of $A$. This completes the proof of the Proposition.
\end{proof}
\subsection{Upper bound for $\mathcal{B}_G^{II}$}
\label{ssec:II}
We start with two propositions that strengthen
Proposition \ref{prop:main-steer}, and will be used in the proof of
Proposition \ref{prop:in-enough}. In the first, we give a lower
bound on the probability that the process can be steered away
from the boundary, provided it starts at least of order $\sqrt{n}$ away.
\begin{proposition}
\label{prop:strengthened}
There exist $\lambda_5 = \lambda_5(G) > 0$, $\gamma = \gamma(G) > 0$,
$c_5 = c_5(G)$, $C_5 = C_5(G)$ and $n'_0 = n'_0(G)$ such that for
all $A \ge 1$ the following holds. Let $n, n'$ satisfy
$n^{\gamma} \ge n' \ge n'_0$, and let $\mathbf{n} = n \mathbf{x}$ be a
configuration such that
\eqnspl{e:assump}
{ \sum_{e \in F} x_e
&\ge \frac{1}{k} d(F) + \frac{A}{\sqrt{n}}, \quad
\text{for all $\emptyset \subsetneq F \subsetneq E$.} }
There exists a randomized strategy starting from $\mathbf{n}$ such that
for the stopping time
\eqnst
{ \tau
= \inf \{ t \ge 0 : \mathrm{dist}(\mathbf{X}(t), \partial \mathcal{R}_G) \ge c_5 \} }
we have
\eqnst
{ \mathbf{P} [ \tau > n - n' ]
\le C_5 \exp ( -\lambda_5 A^2 ). }
\end{proposition}
\begin{proof}
Let $\mathbf{y}$ be the point where the halfline
starting at $\mathbf{x}^*$ and passing through $\mathbf{x}$ intersects $\partial \mathcal{R}_G$.
Write $d = | \mathbf{x} - \mathbf{y} |$, and note that $d \ge \frac{A}{B} \frac{1}{\sqrt{n}}$,
due to Lemma \ref{lem:equiv-metric}.
Let $r$ be the smallest integer such that $(3/2)^r d \ge \frac{1}{2} | \mathbf{x}^* - \mathbf{y} |$.
We fix a small number $\eta > 0$ such that $\frac{1}{2} - \eta > \frac{4}{9}$.
Then it is straightforward to check that the choice of $r$ ensures that
there exists $0 < \gamma = \gamma(G) < 1$ such that
$(\frac{1}{2} - \eta)^r n \ge n^{\gamma}$, if $n \ge n_0$ for
some $n_0 = n_0(G)$.
Consider the sequence of points $\mathbf{x} = \mathbf{y}(0), \mathbf{y}(1), \dots, \mathbf{y}(r)$ defined by
\eqnst
{ \mathbf{y}(i)
= \mathbf{y} + (3/2)^i (\mathbf{x} - \mathbf{y}), \quad i = 0, 1, \dots, r. }
The following statement can be proved in essentially the same way as
Lemma \ref{lem:1st-stage}. For $\varepsilon > 0$ sufficiently small, there
exists $\lambda = \lambda(G,\eta,\varepsilon) > 0$ such that given any point $\mathbf{w} \in \mathcal{R}_G$
with $|\mathbf{w} - \mathbf{y}(i)| < \varepsilon (3/2)^i d$ and any $n$ such that
$(\frac{1}{2} - \eta) n \ge n_0$ the following holds.
There exists a randomized strategy starting in state $n \mathbf{w}$ such that
for the stopping time
\eqnst
{ \tau(i)
= \inf \{ t \ge 0 : |\mathbf{X}(t) - \mathbf{y}(i+1)| < \varepsilon (3/2)^{i+1} d \} }
we have
\eqnst
{ \mathbf{P} \left[ \tau(i) > \left( \frac{1}{2} + \eta \right) n \right]
\le \exp \left( - \lambda (3/2)^{2i} A^2 \right). }
Summing the upper bounds on $\tau(0), \tau(1), \dots, \tau(r-1)$ we obtain
that there is a randomized strategy starting from state $\mathbf{n}$ such that
for the stopping time
\eqnst
{ \tau'
= \inf \{ t \ge 0 : |\mathbf{X}(t) - \mathbf{y}(r)| < \varepsilon (3/2)^{r} d \} }
we have
\eqnst
{ \mathbf{P} [ \tau' > n - n^{\gamma} ]
\le C \exp ( - \lambda A^2 ). }
Due to the choice of $r$, and for a sufficiently small $\varepsilon$,
the point $\mathbf{X}(\tau')$ is at least a fixed positive distance $c_5$
from $\partial \mathcal{R}_G$, and hence $\tau \le \tau'$. This completes
the proof.
\end{proof}
The next proposition extends the result of Proposition \ref{prop:main-steer}
to the case when the target state is anywhere in $\mathcal{K}_G$.
\begin{proposition}
\label{prop:hit-any}
Given $\delta > 0$, there exist $\lambda_6 = \lambda_6(G) > 0$, $C_6 = C_6(G)$,
$c_6 = c_6(G) > 0$, $K_6 = K_6(G,\delta)$ and $n_6 = n_6(G,\delta)$ such that for any
$n_1 \ge K_6 n'$, $n' \ge n_6$ and configurations $\mathbf{n}_1 = n_1 \mathbf{x}$, $\mathbf{x} \in \mathcal{R}_G$,
$\mathrm{dist}(\mathbf{x}, \partial \mathcal{R}_G) \ge \delta$, and $\mathbf{n}' = n' \mathbf{z}$, $\mathbf{z} \in \mathcal{K}_G$,
the following holds. There exists a randomized strategy starting in state
$\mathbf{n}_1$ such that
\eqn{e:exactly2}
{ \mathbf{P} \left[ \mathbf{N}(n_1 - n') = \mathbf{n}' \right]
\ge c_6, }
and
\eqn{e:not-deviate2}
{ \mathbf{P} \left[ | \mathbf{N}(n_1 - n') - \mathbf{n}' | > q \right]
\le C_6 \exp ( - \lambda_6 q ), \quad q > 0. }
\end{proposition}
\begin{proof}
We consider the following intermediate point:
\eqnsplst
{ \mathbf{x}''
= \frac{1}{2} \mathbf{x} + \frac{1}{2} \mathbf{z} \qquad \text{ and } \qquad
\mathbf{n}''
= n' \mathbf{x} + \mathbf{n}' + O(1), }
where the $O(1)$ term guarantees that $\mathbf{n}''$ has integer coordinates.
Observe that $\mathrm{dist}( \mathbf{x}'', \partial \mathcal{R}_G)$
is at least a positive constant. Due to Proposition \ref{prop:main-steer}
we can steer the process from $\mathbf{n}_1$ to a $(\delta/4)$-neighbourhood
of $\mathbf{x}''$ with probability at least $1 - C_1 \exp ( - \lambda_1 n' )$,
provided $K_6 \ge 2 K_1(G,\delta)$.
Let us call the point reached this way $(2 n') \mathbf{y}''$. Since
\eqnst
{ \mathbf{y}''
= \mathbf{x}'' + (\mathbf{y}'' - \mathbf{x}'')
= \frac{1}{2} (\mathbf{x} - 2(\mathbf{y}'' - \mathbf{x}'')) + \frac{1}{2} \mathbf{z},
and $| 2 (\mathbf{y}'' - \mathbf{x}'') | < \frac{\delta}{2}$,
the point $\mathbf{w} = \mathbf{x} - 2(\mathbf{y}'' - \mathbf{x}'')$ satisfies
$\mathrm{dist}( \mathbf{w}, \partial \mathcal{R}_G ) \ge \frac{\delta}{2}$.
Now consider the steps of the strategy of Lemma \ref{lem:2nd-stage}
for the starting state $n' \mathbf{w}$ and target state $0 \mathbf{w}$ over
the time interval $[0,n' - M q_0]$, where
$M \ge ( \min \{ w_e : e \in E \} )^{-1}$, and $q_0$
is chosen so that $\mathbf{P} [ F(q_0/4) ] \le \frac{1}{2}$.
Let $\widetilde{\mathbf{N}}(t)$, $t \ge 0$ denote this process.
If the coordinates do stay positive until time $n' - M q_0$,
there is a strictly positive probability of hitting state $\mathbf{0}$.
When $\mathbf{0}$ is not hit exactly, we have the bound
\eqnst
{ \mathbf{P} [ |\widetilde{\mathbf{N}}(n')| > q ]
= \mathbf{P} [ |\widetilde{\mathbf{N}}(n') - \mathbf{0}| > q ]
\le C_3 \exp ( - \lambda_3 q ). }
If we now apply exactly the same moves to the configuration
$(2 n') \mathbf{y}''$, we obtain that the process
$\mathbf{N}(t) = \mathbf{n}' + \widetilde{\mathbf{N}}(t)$ hits
$\mathbf{n}' = n' \mathbf{z}$ with positive probability, and satisfies the
bound in \eqref{e:not-deviate2}.
\end{proof}
Since the proof of Proposition \ref{prop:in-enough} is quite long,
we first give a brief outline. Suppose we can select configurations
$\mathbf{n}$ and $\mathbf{n}(\ell), \dots, \mathbf{n}(1)$ in such a way that:\\
(a) $\mathbf{n}/n$ is bounded away from $\partial \mathcal{R}_G$, so that
we have $p_G(\mathbf{n}) \le c_G + \varepsilon$;\\
(b) $\mathbf{n}(\ell), \dots, \mathbf{n}(1)$ are in the respective sets $\mathcal{B}_G^{II}$
with each $p_G(\mathbf{n}(i))$ close to the $\limsup$ in \eqref{e:in-enough};\\
(c) We can steer the process as follows:
$\mathbf{n} \to \mathbf{n}(\ell) \to \mathbf{n}(\ell-1) \to \dots \to \mathbf{n}(1)$;\\
(d) In each steering step we hit the target exactly with probability
bounded away from $0$.\\
If $\ell$ is large, step (d) ensures that $p_G(\mathbf{n})$ cannot be
much smaller than the smallest of the $p_G(\mathbf{n}(i))$'s, and the claim will follow.
The crux of the proof is parts (c)--(d), which rely on
Propositions \ref{prop:strengthened} and \ref{prop:hit-any}.
The argument is somewhat delicate, since the $\mathbf{n}(i)$'s can now be
arbitrarily close to $\partial \mathcal{R}_G$;
recall the definition of $\mathcal{B}_G^{II}$ in \eqref{e:cB_G}.
Therefore, Propositions \ref{prop:strengthened} and \ref{prop:hit-any}
will be applied on a suitable subgraph that omits some edges.
We carry out the plan (a)--(d). We start with some preliminaries.
The first step is to subdivide $\mathcal{B}_G^{II}$ according to which part
of $\partial \mathcal{R}_G$ the configuration is close to. Given $\mathbf{n} \in \mathcal{B}_G^{II}$, let
\eqnsplst
{ \mathcal{G}
= \mathcal{G}(\mathbf{n} ; G, A)
= \left\{ F \subset E :
L^{F,n}(\mathbf{n}) < \frac{\kappa A}{2^{|E|+1}} \sqrt{n} \right\}
\qquad \text{ and } \qquad
\overline{F}
= \cup \mathcal{G}, }
where $\kappa$ is the constant from Lemma \ref{lem:equiv-metric}.
It may so happen that $\overline{F} = \emptyset$, in which case the arguments
we have to make are similar to and simpler than when $\overline{F} \not= \emptyset$.
We will not spell out such arguments. Note that $F \in \mathcal{G}$ implies
$d(F) = 0$, since $\mathbf{n} \in \mathcal{B}_G^{II}$. Hence we have
\eqn{e:sum-e-bound}
{ \sum_{e \in \overline{F}} n_e
\le \sum_{F \in \mathcal{G}} \sum_{e \in F} n_e
\le \sum_{F \in \mathcal{G}} \frac{1}{\kappa} L^{F,n}(\mathbf{n})
< \frac{1}{2} A \sqrt{n}. }
This implies $d(\overline{F}) = 0$, for $n$ large enough. Note that any $F$ with
$d(F) = 0$ that is not contained entirely inside $\overline{F}$ satisfies
\eqnst
{ \sum_{e \in F} n_e
\ge \frac{1}{2} L^{F,n}(\mathbf{n})
\ge \frac{\kappa A}{2^{|E|+2}} \sqrt{n}. }
Let us abbreviate $\kappa_0 = \kappa / 2^{|E|+2}$.
In the remainder of this section, we are going to fix a possible
value $F_0$ of $\overline{F}$, and argue separately for each $F_0$.
With this in mind we make the following definitions. For any
$F_0$ such that $d(F_0) = 0$, let
\eqnspl{e:max-def}
{ \mathcal{B}_G^{II}(n; A, F_0)
&= \left\{ \mathbf{n} \in \mathcal{B}_G^{II}(n; A) : \,
\parbox{8cm}{$\sum_{e \in F_0} n_e < \frac{1}{2} A \sqrt{n}$,
and for all $F$ not contained in $F_0$ we have
$\sum_{e \in F} n_e - \frac{n}{k} d(F) \ge \kappa_0 A \sqrt{n}$} \right\} \\
M_n(F_0)
&= \max \left\{ p_G(\mathbf{n}) : \, \mathbf{n} \in \mathcal{B}_G^{II}(n; A, F_0) \right\} \\
\beta
&= \limsup_{n \to \infty} M_n(F_0). }
Our task is to show that $\beta \le c_G + C \exp ( - \lambda A^2 )$
for each $F_0$ such that $\mathcal{B}_G^{II}(n; A, F_0)$ is non-empty.
We will need to work on subgraphs of the form $G^{H} = (V, E^{H})$,
where $E^{H} = E \setminus H$, $H \subset F_0$. We write $\mathbf{n}^H$ for
the restriction of $\mathbf{n}$ to $G^H$, that is: $\mathbf{n}^H = (n_e : e \in E^H)$.
When no confusion can arise, we will write $n^H = \sum_{e \in E^H} n_e$.
\begin{lemma}
If $\mathcal{B}_G^{II}(n; A, F_0)$ is non-empty, then
for any $H \subset F_0$ the graph $G^{H}$ is connected.
\end{lemma}
\begin{proof}
It is enough to consider $H = F_0$. Should $G^{F_0}$ not be
connected, we could write $E = E_1 \cup F_0 \cup E_2$
as a disjoint union, where $E_1$ and $E_2$ are non-empty and
do not share any vertex. Then we have
$0 < d(E_1 \cup F_0), d(E_2 \cup F_0) < k$ and
$d(E_1 \cup F_0) + d(E_2 \cup F_0) \ge k$. Therefore,
if $\mathbf{n} \in \mathcal{B}_G^{II}(n; A, F_0)$, we have
\eqnsplst
{ \sum_{e \in E} n_e
&= \sum_{e \in E_1 \cup F_0} n_e + \sum_{e \in E_2 \cup F_0} n_e
- \sum_{e \in F_0} n_e \\
&\ge \frac{n}{k} d(E_1 \cup F_0) + \frac{1}{2} A \sqrt{n}
+ \frac{n}{k} d(E_2 \cup F_0) + \frac{1}{2} A \sqrt{n}
- \frac{1}{2} A \sqrt{n} \\
&\ge n + \frac{1}{2} A \sqrt{n}
> n, }
a contradiction.
\end{proof}
\begin{lemma}
\label{lem:str-applies}
Let $H \subset F_0$ and $\mathbf{n} \in \mathcal{B}_G^{II}(n; A, F_0)$.\\
(i) We have $\mathbf{n}^H/n^H \in \mathcal{K}_{G^H}$.\\
(ii) Suppose in addition that $n_e \ge c A \sqrt{n}$
for all $e \in F_0 \setminus H$, with some $c > 0$.
Then $\mathbf{n}^H$ satisfies the assumption on the starting state of
Proposition \ref{prop:strengthened}, with $A$ replaced by
$\min\{ c A, \kappa_0 A \}$.
\end{lemma}
\begin{proof}
Both statements will be proved by the same computations.
Let $\emptyset \subsetneq F \subsetneq (E \setminus H)$.
Since $d(H) \le d(F_0) = 0$, we have $d(F \cup H ; G) = d(F ; G^{H})$.
When this common value is $\ge 1$, we have
\eqnspl{e:restr-ge1}
{ \sum_{e \in F} n_e
&\ge \sum_{e \in F \cup H} n_e - \frac{1}{2} A \sqrt{n}
\ge \frac{n}{k} d(F \cup H ; G) + A \sqrt{n} - \frac{1}{2} A \sqrt{n} \\
&\ge \frac{n^{H}}{k} d(F ; G^{H}) + \frac{1}{2} A \sqrt{n^H}
\ge \frac{n^{H}}{k} d(F ; G^{H}). }
This already suffices for part (i). When $d(F \cup H ; G) = d(F; G^H) = 0$
and $F$ is not a subset of $F_0$, we have
\eqn{e:restr-=0-1}
{ \sum_{e \in F} n_e
\ge \kappa_0 A \sqrt{n}
\ge \kappa_0 A \sqrt{n^H}. }
When $\emptyset \subsetneq F \subset F_0 \setminus H$, under the assumption
made in part (ii) we have
\eqn{e:restr-=0-2}
{ \sum_{e \in F} n_e
\ge c A \sqrt{n}
\ge c A \sqrt{n^H}. }
The three cases \eqref{e:restr-ge1}, \eqref{e:restr-=0-1} and
\eqref{e:restr-=0-2} complete the proof of part (ii).
\end{proof}
The main technical difficulty in the proof of
Proposition \ref{prop:in-enough} is that we have no
control over how small $n_e(i)$ can get for $e \in F_0$, and therefore
these coordinates \emph{must} be hit exactly at each stage.
We can do this, if the difference $n_e(i+1) - n_e(i) \ge 0$
is sufficiently small so that we have enough opportunity to
play these edges (once the exact value is achieved,
we can ignore any such edge, since $d(F_0) = 0$).
The configurations introduced next will help us
overcome this technical difficulty.
Let $\mathbf{x}^{*,F_0}$ denote the configuration
introduced in \eqref{e:x^*}, with the graph $G$ replaced by $G^{F_0}$.
Given $\delta > 0$ and $H \subsetneq F_0$, let
\eqnst
{ \mathbf{y}^{*,F_0}(\delta; H)
= (1 - \delta) \mathbf{x}^{*,F_0}
+ \delta \frac{1}{|F_0 \setminus H|} \sum_{e \in F_0 \setminus H} \mathbf{1}^e, }
where all vectors are regarded as being in $\mathbb{R}^{E^H}$.
Let $\mathbf{n}^{*,F_0}(H) = n \mathbf{y}^{*,F_0}(\delta ; H) + O(1)$.
\begin{lemma}
\label{lem:x^*-mod} \ \\
(i) We have $\mathbf{x}^{*,F_0} \in \mathcal{K}_{G^H}$.\\
(ii) For all sufficiently small $\delta > 0$ we have
$\mathbf{y}^{*,F_0}(\delta; H) \in \mathcal{R}_{G^H}$ and
$\mathrm{dist}(\mathbf{y}^{*,F_0}(\delta; H), \partial \mathcal{R}_{G^H}) \ge \delta (B |F_0 \setminus H|)^{-1}$.\\
(iii) There exists $c_7(G) > 0$ such that for all sufficiently small $\delta > 0$
and all $\emptyset \subsetneq F \subsetneq E^{F_0}$ we have
\eqnst
{ \left( \sum_{e \in E^{F_0}} n^{*,F_0}_e(H) \right)^{-1}
\sum_{e \in F} n^{*,F_0}_e(H)
\ge \frac{d(F ; G^{F_0})}{k} + c_7. }
\end{lemma}
\begin{proof}
(i) Let $\emptyset \subsetneq F \subsetneq E^H$. We first consider the case
when $F \not\subset F_0 \setminus H$ and
$E \setminus F_0 \not\subset F$. Then we have
\eqn{e:not-subset1}
{ \sum_{e \in F} x^{*,F_0}_e
= \sum_{e \in F \setminus F_0} x^{*,F_0}_e
> \frac{d(F \setminus F_0 ; G^{F_0})}{k}
= \frac{d(F \cup (F_0 \setminus H) ; G^H)}{k}
\ge \frac{d(F ; G^H)}{k}. }
When $F \not\subset F_0 \setminus H$ and $E \setminus F_0 \subset F$, we have instead
\eqn{e:not-subset2}
{ \sum_{e \in F} x^{*,F_0}_e
= \sum_{e \in F \setminus F_0} x^{*,F_0}_e
= 1
> \frac{d(F ; G^H)}{k}. }
If $\emptyset \subsetneq F \subset F_0 \setminus H$, we have
\eqn{e:subset}
{ \sum_{e \in F} x^{*,F_0}_e
= 0
= \frac{d(F ; G^H)}{k}. }
This completes the proof of part (i).
(ii) If $\delta$ is sufficiently small, the inequalities \eqref{e:not-subset1}
and \eqref{e:not-subset2}, with $\mathbf{x}^{*,F_0}$ replaced by $\mathbf{y}^{*,F_0}(\delta; H)$,
remain strict. Also, Eqn.~\eqref{e:subset} becomes a strict inequality.
The lower bound on the distance follows from Lemma \ref{lem:equiv-metric}.
(iii) This follows from \eqref{e:not-subset1}, since the normalization
factor in the front is $[n (1 - O(\delta))]^{-1}$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:in-enough}.]
Given $\varepsilon > 0$, we select a subsequence along which
$M_n(F_0) > \beta - \varepsilon$. For each $n$ in the subsequence,
select $\mathbf{n} \in \mathcal{B}_G^{II}(n; A, F_0)$ such that
$p_G(\mathbf{n}) > \beta - \varepsilon$. By passing to a further subsequence,
we may assume that for each $e \in F_0$ the coordinates
$n_e$ are nondecreasing along the subsequence.
We now choose $\mathbf{n}(1), \dots, \mathbf{n}(\ell)$ and $\mathbf{n}$.
Let $n(1) < \dots < n(\ell)$ and let $\mathbf{n}(i) \in \mathcal{B}_G^{II}(n(i); F_0)$,
$i = 1, \dots, \ell$, be a sequence of points such that:\\
(i) $n(i+1) \ge 2 (2 K_6 n(i))^{1/\gamma}$, $i = 1, \dots, \ell-1$;\\
(ii) $n_e(i+1) \ge n_e(i)$, for all $e \in F_0$,
$i = 1, \dots, \ell-1$;\\
(iii) $p_G(\mathbf{n}(i)) \ge \beta - \varepsilon$, $i = 1, \dots, \ell$.\\
We further define $\mathbf{n}$ in the following way.
Let $n = 2 K_6 n(\ell)$, where $K_6$ is the constant of
Proposition \ref{prop:hit-any}, and let
$\mathbf{n} = K_6 \, n(\ell) \, \mathbf{y}^{*,F_0}(\delta_1; \emptyset) + K_6 \, \mathbf{n}(\ell) + O(1)$
for a small $\delta_1 > 0$ for which the conclusions of
Lemma \ref{lem:x^*-mod}(ii)--(iii) hold.
We will need that for all $e \in F_0$ we have
\eqn{e:F_0-bound}
{ n_e
\le K_6 \, n(\ell) \, \frac{\delta_1}{|F_0|}
+ K_6 \, \frac{1}{2} \, A \, \sqrt{n(\ell)} + O(1)
< 2 \delta_1 K_6 \, n(\ell)
= \delta_1 n, }
if $n(\ell)$ is large enough.
Also note that an application of Theorem \ref{thm:phase-trans-graph}(ii)
yields $p_G(\mathbf{n}) < c_G + \varepsilon$.
We now define the strategy to steer from $\mathbf{n}$ towards $\mathbf{n}(\ell)$.
We first employ a strategy that plays an edge $e \in F_0$
with $N_e(t) > n_e(\ell)$, whenever that is possible, but never plays
an edge $e \in F_0$ with $N_e(t) = n_e(\ell)$. We stop the first
time $t$ when for all $e \in F_0$ we have $N_e(t) = n_e(\ell)$.
Such a strategy exists, since $d(F_0) = 0$. Since we start
with $N_e(0) - n_e(\ell) \le \delta_1 n$ (recall \eqref{e:F_0-bound}),
if $\delta_1$ is sufficiently small, there is probability
$\ge 1 - \exp ( - \lambda n )$ that we stop before time
$C \delta_1 n$ for some $C = C(G)$ and $\lambda > 0$.
Moreover, the value on every edge is decreased
by an amount at most $C \delta_1 n$, and therefore it follows
from Lemma \ref{lem:x^*-mod}(iii) that the configuration
$\mathbf{n}'$ reached has the property that
$(\mathbf{n}')^{F_0}$ is bounded away from $\partial \mathcal{R}_{G^{F_0}}$.
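To make this first stage concrete, here is a minimal sketch of a single round under the rules of the game (a uniformly random vertex arrives and the player decrements one incident edge), implementing the preference just described; the data layout and names are our own illustration.
\begin{verbatim}
import random

def play_round(vertices, incident, counts, targets, F0):
    # One round: a uniformly random vertex v arrives; the player picks an
    # incident edge and decreases its value by 1.  First-stage preference:
    # play an F0-edge whose count still exceeds its target whenever possible,
    # and never touch an F0-edge that already equals its target.
    v = random.choice(vertices)
    above = [e for e in incident[v] if e in F0 and counts[e] > targets[e]]
    other = [e for e in incident[v] if e not in F0 and counts[e] > 0]
    choices = above or other
    if not choices:
        return None  # no admissible incident edge: this stage is stuck
    e = choices[0]
    counts[e] -= 1
    return e
\end{verbatim}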
We can now apply Proposition \ref{prop:hit-any} to
$(\mathbf{n}')^{F_0}$ and $(\mathbf{n}(\ell))^{F_0}$
on the connected graph $G^{F_0}$. We can implement
the moves given by the strategy in that proposition as a strategy
on $G$, because $d(F_0) = 0$.
Let $\varphi_\ell(\mathbf{r}(\ell))$ denote the probability that
at time $n(\ell)$ we reach state $\mathbf{n}(\ell) + \mathbf{r}(\ell)$.
Let us write $c_\ell = \varphi_\ell(\mathbf{0})$ for the probability
that $\mathbf{n}(\ell)$ was hit exactly. Note that since
we applied the strategy on $G^{F_0}$, we have $r_e(\ell) = 0$
for all $e \in F_0$. This restriction will be implicit
in our notation. Proposition \ref{prop:hit-any} implies
\eqnspl{e:1st-hit}
{ c_G + \varepsilon
&\ge p_G(\mathbf{n})
\ge c_\ell p_G(\mathbf{n}(\ell)) + \sum_{\mathbf{r}(\ell) \not= \mathbf{0}} \varphi_\ell(\mathbf{r}(\ell)) \,
p_G(\mathbf{n}(\ell) + \mathbf{r}(\ell)) \\
&\ge c_\ell (\beta - \varepsilon) + \sum_{0 < |\mathbf{r}(\ell)| < \nu A \sqrt{n(\ell)}}
\varphi_\ell(\mathbf{r}(\ell)) \, p_G(\mathbf{n}(\ell) + \mathbf{r}(\ell)), }
where $\nu > 0$ is arbitrary; the value of $\nu$ will be chosen in what follows.
We now inductively define the strategy that steers
from $\mathbf{n}(i+1) + \mathbf{r}(i+1)$ towards $\mathbf{n}(i)$, for
$i = \ell-1, \ell-2, \dots, 1$. We assume
$|\mathbf{r}(i+1)| < \nu A \sqrt{n(i+1)}$. Let
\eqnst
{ H
= \{ e \in F_0 : n_e(i+1) < \delta_2 A \sqrt{n(i+1)} \}, }
where $\delta_2 > 0$ will be chosen in a moment. We will first
reduce the edges in $H$ to their target value $n_e(i)$.
Then we use Proposition \ref{prop:strengthened} and
Proposition \ref{prop:main-steer} in $G^H$ to reach
a target where the edges $e \in F_0 \setminus H$ do not
have much excess compared to $n_e(i)$, so that these can be
reduced to $n_e(i)$ as well. Following this, we use
Proposition \ref{prop:hit-any} in $G^{F_0}$ to hit $\mathbf{n}(i)$.
The first part of the strategy is to reduce the value on each
edge $e \in H$, whenever that is possible, until it equals $n_e(i)$,
and in such a way that no edge in $F_0 \setminus H$ is used.
We stop the first time $t$ when $N_e(t) = n_e(i)$ for all $e \in H$.
Since $d(F_0) = 0$, such a strategy exists. The goal is achieved before time
$C \delta_2 A \sqrt{n(i+1)}$ with probability
$\ge 1 - \exp ( - \lambda \sqrt{n(i+1)} )$, if $\delta_2$ is sufficiently small.
Moreover, the value of every $e \in E \setminus F_0$ is decreased by
no more than $C \delta_2 A \sqrt{n(i+1)}$.
Let $\mathbf{n}'(i+1)$ denote the configuration reached.
\begin{lemma}
\label{lem:applies}
If $\delta_2$ and $\nu$ are sufficiently small, the restriction of the
configuration $\mathbf{n}'(i+1)$ to $G^H$ satisfies the assumption on the starting
state of Proposition \ref{prop:strengthened} with $A$ replaced by
$\min \{ \frac{1}{2} \kappa_0 A, \delta_2 A \}$.
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{lem:str-applies}.
Let $\emptyset \subsetneq F \subsetneq E \setminus H$.
If $d(F \cup H ; G) \ge 1$, we have
\eqnspl{e:ge1}
{ \sum_{e \in F} n'_e(i+1)
&= \sum_{e \in F \cup H} n'_e(i+1) - \sum_{e \in H} n_e(i)
\ge \sum_{e \in F \cup H} n'_e(i+1) - \sum_{e \in H} (n_e(i+1) + r_e(i+1)) \\
&\ge \sum_{e \in F \cup H} (n_e(i+1) + r_e(i+1)) - (C + |H|) \delta_2 A \sqrt{n(i+1)} \\
&\ge \sum_{e \in F \cup H} n_e(i+1) - \sqrt{|E|} |\mathbf{r}(i+1)|
- (C + |H|) \delta_2 A \sqrt{n(i+1)} \\
&\ge \frac{n(i+1)}{k} d(F \cup H ; G) + A \sqrt{n(i+1)} - ( \sqrt{|E|} \nu
+ (C + |H|) \delta_2) A \sqrt{n(i+1)} \\
&\ge \frac{n'(i+1)}{k} d(F ; G^H) + (1 - C' \nu - C'' \delta_2 ) A \sqrt{n'(i+1)}. }
Hence we will require that $1 - C' \nu - C'' \delta_2 \ge \frac{1}{2}$, say.
When $d(F \cup H ; G) = 0$ and $F$ is not a subset of $F_0$, we have
\eqnspl{e:=0-1}
{ \sum_{e \in F} n'_e(i+1)
&\ge \sum_{e \in F} (n_e(i+1) + r_e(i+1)) - C \delta_2 A \sqrt{n(i+1)} \\
&\ge \sum_{e \in F} n_e(i+1) - (\sqrt{|E|} \nu + C \delta_2) A \sqrt{n(i+1)} \\
&\ge (\kappa_0 - \sqrt{|E|} \nu - C \delta_2 ) A \sqrt{n(i+1)} \\
&\ge \frac{1}{2} \kappa_0 A \sqrt{n'(i+1)}, }
if $\nu$ and $\delta_2$ are small enough.
Finally, if $\emptyset \subsetneq F \subset F_0 \setminus H$, we have
\eqn{e:=0-2}
{ \sum_{e \in F} n'_e(i+1)
= \sum_{e \in F} n_e(i+1)
\ge \sum_{e \in F} \delta_2 A \sqrt{n(i+1)}
\ge \delta_2 A \sqrt{n'(i+1)}. }
The cases \eqref{e:ge1}, \eqref{e:=0-1} and \eqref{e:=0-2} complete the proof.
\end{proof}
We need one more auxiliary configuration. Let $n''(i) = 2 K_6 n(i)$,
where $K_6$ is the constant from Proposition \ref{prop:hit-any}, and let
\eqnst
{ \mathbf{n}''(i)
= K_6 n(i) \mathbf{y}^{*,F_0}(\delta_1; H)
+ (K_6 - 1) \frac{n(i)}{(n(i))^{H}} (\mathbf{n}(i))^{H}
+ \mathbf{n}(i) + O(1). }
Due to Lemma \ref{lem:x^*-mod}(ii), $\mathbf{n}''(i)/n''(i) \in \mathcal{R}_G$ and
$(\mathbf{n}''(i))^H/(n''(i))^H$ is at least distance $c \delta_1$ away from
$\partial \mathcal{R}_{G^H}$. Therefore, we can apply Proposition \ref{prop:main-steer}
on the graph $G^H$ to steer the process from $(\mathbf{n}'(i+1))^H$ to a
$\delta_3$ neighbourhood of $(\mathbf{n}''(i))^H$, which succeeds with probability at least
$1 - C_1 \exp ( - \lambda_1 \delta_3 n(i) )$. Moreover, due to Lemma \ref{lem:x^*-mod}(iii),
the configuration $\mathbf{n}''(i) + \mathbf{s}$ reached this way satisfies
\eqn{e:not-subset-bound}
{ (2 K_6 n(i))^{-1} \sum_{e \in F} (n''_e(i) + s_e)
\ge \frac{d(F ; G^{F_0})}{k} + c_7',
\quad \emptyset \subsetneq F \subsetneq E^{F_0}. }
Also, for $e \in F_0 \setminus H$ we have
\eqnsplst
{ (n''_e(i) + s_e) - n_e(i)
&\ge K_6 n(i) y^{*,F_0}_e(\delta_1; H) - \sqrt{|E|} |\mathbf{s}| - \frac{1}{2} A \sqrt{n(i)} \\
&\ge K_6 n(i) \frac{\delta_1}{|F_0|}
- 2 K_6 n(i) \sqrt{|E|} \delta_3 - \frac{1}{2} A \sqrt{n(i)}
\ge 0, }
if $\delta_3 < \delta_1 (4 |F_0| \sqrt{|E|})^{-1}$
and $n(i)$ is large enough. On the other hand:
\eqnsplst
{ n''_e(i) + s_e
&\le K_6 n(i) \delta_1 + \sqrt{|E|} |\mathbf{s}|
+ K_6 \frac{1}{2} A \sqrt{n(i)} (1 + O(n(i)^{-1/2})) \\
&\le K_6 n(i) \delta_1 + 2 K_6 n(i) \sqrt{|E|} \delta_3
\le 2 K_6 n(i) \delta_1, }
if $n(i)$ is large enough.
If $\delta_1$ is sufficiently small, we can now employ a strategy
starting from state $\mathbf{n}''(i) + \mathbf{s}$, that reduces the values
on all $e \in F_0 \setminus H$, whenever that is possible,
until they all equal $n_e(i)$, but never uses an edge in $H$.
This only changes the values on $e \in E^{F_0}$ by at most $2 C \delta_1 K_6 n(i)$,
and succeeds with probability at least $1 - \exp ( - \lambda 2 K_6 n(i) )$.
Let $\mathbf{n}'''(i)$ denote the configuration reached. It follows from
\eqref{e:not-subset-bound} that $(\mathbf{n}'''(i))^{F_0}$ is bounded away from
$\partial \mathcal{R}_{G^{F_0}}$.
Finally, we can apply Proposition \ref{prop:hit-any} on the graph
$G^{F_0}$ with starting state $(\mathbf{n}'''(i))^{F_0}$
and target state $(\mathbf{n}(i))^{F_0}$. Let $\varphi_i(\mathbf{r}(i))$
denote the probability that at time $n(i)$ we reach state $\mathbf{n}(i) + \mathbf{r}(i)$.
Let us write $c_i = \varphi_i(\mathbf{0})$ for the probability
that $\mathbf{n}(i)$ is hit exactly. This gives the following
inductive bound:
\eqnspl{e:i-th-hit}
{ p_G(\mathbf{n}(i+1) + \mathbf{r}(i+1))
&\ge c_i p_G(\mathbf{n}(i)) + \sum_{\mathbf{r}(i) \not= \mathbf{0}} \varphi_i(\mathbf{r}(i)) \, p_G(\mathbf{n}(i) + \mathbf{r}(i)) \\
&\ge c_i (\beta - \varepsilon) + \sum_{0 < |\mathbf{r}(i)| < \nu A \sqrt{n(i)}}
\varphi_i(\mathbf{r}(i)) \, p_G(\mathbf{n}(i) + \mathbf{r}(i)). }
Combining \eqref{e:1st-hit} and \eqref{e:i-th-hit}, Proposition \ref{prop:hit-any}
yields
\eqnsplst
{ c_G + \varepsilon
&\ge (\beta - \varepsilon) \left[ c_\ell + (1 - c_\ell) c_{\ell-1}
+ \dots + (1 - c_\ell) \cdots (1 - c_2) c_1 \right] \\
&\qquad\qquad - C \ell \exp ( - \lambda A^2 ) - C \exp ( - \lambda \nu A \sqrt{n(1)} ). }
Since each $c_j \ge c > 0$, we extract a factor arbitrarily close
to $\beta - \varepsilon$. Letting $\varepsilon \downarrow 0$ shows that
$c_G \ge \beta (1 - e^{-c \ell}) - C \ell \exp ( - \lambda A^2)$.
Choosing $\ell$ of order $A^2$ completes the proof.
\end{proof}
\subsection{Upper bound for $\mathcal{B}_G^{III}$}
\label{ssec:III}
In the proof of Proposition \ref{prop:near-critical} we are going to
need the following lemma about supermartingales. It is a close variant
of \cite[Propositions 17.19 and 17.20]{LPWbook} and hence we
omit the proof.
\begin{lemma}
\label{lem:super-exit}
Let $Z(t)$ be a non-negative supermartingale with respect to $\mathcal{F}_t$,
and $\tau$ a stopping time with respect to $\mathcal{F}_t$. Suppose that\\
(i) $Z(0) = k \ge 1$; \\
(ii) $|Z(t+1) - Z(t)| \le B$; \\
(iii) there exist constants $\sigma^2 > 0$ and $b > 0$ such that
almost surely on the event $\{ \tau > t \}$,
either $\mathsf{Var} ( Z(t+1) \,|\, \mathcal{F}_t ) \ge \sigma^2$ or
$\mathsf{Var}(Z(t+1) \,|\, \mathcal{F}_t ) = 0$ and $\mathbf{E} [ Z(t+1) - Z(t) \,|\, \mathcal{F}_t ] \le -b$.
Then there exist $u_1 = u_1(B,b,\sigma)$ and $C = C(b,\sigma)$
such that if $u \ge u_1$ then
\eqnst
{ \mathbf{P} [ \tau > u ]
\le C \frac{k}{\sqrt{u}}. }
\end{lemma}
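Although we omit the proof, the bound is easy to test numerically in the simplest setting satisfying the hypotheses: a simple random walk started at $k$ and stopped upon hitting $0$ (so $B = 1$ and $\sigma^2 = 1$). The sketch below (Python; entirely our own illustration) estimates $\mathbf{P}[\tau > u]$; the scaled column $p\sqrt{u}/k$ should stay roughly bounded, consistent with the lemma.
\begin{verbatim}
import math
import random

def exit_tail(k, u, trials=1000):
    # Monte Carlo estimate of P[tau > u], where tau is the first hitting
    # time of 0 for a simple random walk started at k.
    still_positive = 0
    for _ in range(trials):
        z = k
        for _ in range(u):
            z += random.choice((-1, 1))
            if z == 0:
                break
        else:
            still_positive += 1  # never hit 0 within u steps, i.e. tau > u
    return still_positive / trials

for k in (1, 2, 4):
    for u in (100, 400, 1600):
        p = exit_tail(k, u)
        print(k, u, p, round(p * math.sqrt(u) / k, 2))
\end{verbatim}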
\begin{proof}[Proof of Proposition \ref{prop:near-critical}.]
Given $\varepsilon > 0$ choose $A_0(\varepsilon)$ large enough so that the conclusions
of Propositions \ref{prop:far-enough} and \ref{prop:in-enough}
are satisfied for all $A \ge A_0$. Under the optimal strategy,
we consider the process
\eqn{e:min-dev}
{ Z(t)
= \min \{ L^{F,n-t}(\mathbf{N}(t)) : \, F,\, 0 < d(F) < k \}, }
which is a supermartingale, because the $L^{F,n-t}$ are.
Since the increments of $L^{F,n}$ are bounded,
condition (ii) of Lemma \ref{lem:super-exit} is satisfied.
We show that $Z(t)$ satisfies the condition (iii) of Lemma \ref{lem:super-exit}
as well. Let $F$ be the set contributing the minimum in \eqref{e:min-dev}.
Since $d(F) > 0$, there exists an edge $e \in F$ such that
$N_e$ gets updated with probability at least $1/k$.
On this event we have
\eqnst
{ L^{F,n-t-1}(\mathbf{N}(t+1)) - L^{F,n-t}(\mathbf{N}(t))
= - \langle \mathbf{1}^e - \mathbf{z}^F, \mathbf{u}^F \rangle
=: - b(e;F)
< 0, }
since $d(F) < k$. Therefore, if $\mathsf{Var} ( Z(t+1) \,|\, \mathcal{F}_t ) = 0$,
we have $\mathbf{E} [ Z(t+1) - Z(t) \,|\, \mathcal{F}_t ] \le -b(e;F)$.
On the other hand, since there are only finitely many
possible shifts in the values of the $L^{F,n-t}$, and only
finitely many possible vectors $\mathbf{Y}(t)$ (recall that there
exists a deterministic optimal strategy), if
$\mathsf{Var} ( Z(t+1) \,|\, \mathcal{F}_t )$ is non-zero, then
it is bounded below by some $\sigma^2 = \sigma^2(G) > 0$.
We will choose a small $a > 0$, and subdivide $\mathcal{B}_G^{III}(n; A)$ into
the slices:
\eqnsplst
{ \mathcal{B}_G^{III}(n; a, k)
&= \left\{ \mathbf{n} \in n \mathcal{S}_G : \, \min \left\{ L^{F,n}(\mathbf{n}) :
\, F,\, 0 < d(F) < k \right\}
\in [a k \sqrt{n}, a (k+1) \sqrt{n}) \right\}, \\
&\qquad\qquad a > 0,\, -k_\mathrm{max}-2 \le k \le k_\mathrm{max}+1, }
where $k_\mathrm{max} = \lceil A/a \rceil$.
Let $\mathbf{n} \in \mathcal{B}_G^{III}(n; a, k)$. The idea of the proof is to
run the martingale $p_G(\mathbf{N}(t))$ until $Z(t)$ moves well
into one of the neighbouring slices, and use
optional stopping to get an inequality
relating the maximum of $p_G(\mathbf{n})$ over
$\mathcal{B}_G^{III}(n; a, k)$ to the maxima over
$\mathcal{B}_G^{III}(n'; a, k-1)$ and $\mathcal{B}_G^{III}(n'; a, k+1)$,
with $\frac{1}{4} n \le n' < n$. The parameter $a$ will be chosen small
so that we can apply Lemma \ref{lem:super-exit} to the stopping rule.
We will need to handle $k \ge 1$, $k = 0, -1$ and $k \le -2$ separately.
It will be convenient to introduce the following notation:
\eqnsplst
{ M_n(k)
&= \max \left\{ p_G(\mathbf{n}) : \mathbf{n} \in \mathcal{B}_G^{III}(n ; a, k) \right\} \\
\overline{M}_n(k)
&= \sup_{m \ge n} M_m(k) \\
\beta(k)
&= \limsup_{n \to \infty} M_n(k)
= \lim_{n \to \infty} \overline{M}_n(k). }
\emph{Case $1 \le k \le k_\mathrm{max}$.} We define the stopping time
\eqnsplst
{ \tau_k
&= \sqrt{a} n \left( \frac{1}{k} - \frac{1}{4 k^2} \right) \, \wedge \,
\inf \left\{ t \ge 0 : Z(t) < \left( k - \frac{1}{2} \right) a \sqrt{n-t} \right\} \\
&\qquad\qquad \wedge \, \inf \left\{ t \ge 0 :
Z(t) \ge \left( k + \frac{3}{2} \right) a \sqrt{n-t} \right\}. }
It is straightforward to check that whenever
$\tau_k < \sqrt{a} \, n ( \frac{1}{k} - \frac{1}{4 k^2} )$,
the value of $Z(\tau_k)$ is such that $\mathbf{N}(\tau_k)$ is either in
the slice $\mathcal{B}_G^{III}(n-\tau_k ; a, k-1)$ or in the slice
$\mathcal{B}_G^{III}(n-\tau_k ; a, k+1)$.
An application of Lemma \ref{lem:super-exit} to
$Z(t) - (k-1) a \sqrt{n}$ yields
\eqn{e:bound-prob}
{ \mathbf{P} \left[ \tau_k \ge \sqrt{a} n \left( \frac{1}{k} - \frac{1}{4 k^2} \right) \right]
\le C \frac{2 a \sqrt{n}}{a^{1/4} \sqrt{n} \sqrt{ \frac{1}{k} - \frac{1}{4 k^2} }}
\le C \frac{4 a^{3/4}}{\sqrt{\frac{a}{2 A} \left(4 - \frac{a}{2 A}\right)}}
= \frac{4 C}{\sqrt{\frac{1}{A} \left(\frac{2}{\sqrt{a}} - \frac{\sqrt{a}}{2 A}\right)}}. }
By optional stopping, we have
\eqnspl{e:pG-mart}
{ p_G(\mathbf{n})
&= \mathbf{E} [ p_G(\mathbf{N}(\tau_k)) ] \\
&\le \mathbf{P} [ Z(\tau_k) < k a \sqrt{n - \tau_k} ] \overline{M}_{n/4}(k-1)
+ \mathbf{P} [ Z(\tau_k) \ge (k+1) a \sqrt{n - \tau_k} ] \overline{M}_{n/4} (k+1) \\
&\qquad + \mathbf{P} [ Z(\tau_k) \in [k a \sqrt{n - \tau_k}, (k+1) a \sqrt{n - \tau_k}) ]
\overline{M}_{n/4} (k). }
Note that due to our choice of $a$ in \eqref{e:bound-prob} the
probability in the third term of \eqref{e:pG-mart} is at most
$C(A) \sqrt{a}$. Maximizing $p_G(\mathbf{n})$ over its slice yields
\eqn{e:rel-1}
{ M_n(k)
\le c_n(k) \overline{M}_{n/4}(k-1) + d_n(k) \overline{M}_{n/4}(k) +
e_n(k) \overline{M}_{n/4}(k+1),
\quad 1 \le k \le k_\mathrm{max}, }
where $d_n(k) \le C(A) \sqrt{a}$. By stopping the supermartingale
$Z'(t) = Z(t) - (k-1) a \sqrt{n}$ at $\tau_k$ we have
\eqn{e:super-mart-ineq}
{ 2 a \sqrt{n}
\ge Z'(0)
\ge \mathbf{E} [ Z'(\tau_k) ; Z'(\tau_k) \ge \frac{5}{2} a \sqrt{n-\tau_k} ]
\ge \frac{5}{2} a \sqrt{n} \sqrt{1 - \sqrt{a}} \, e_n(k). }
When $a$ is sufficiently small, the inequalties \eqref{e:super-mart-ineq}
and $d_n(k) \le C(A) \sqrt{a}$ imply that $c_n(k) \ge \frac{1}{6}$.
\emph{Case $k = -1, 0$.} We define
\eqnst
{ \tau_k
= \frac{3}{4} a n \wedge \inf \left\{ t \ge 0 :
Z(t) < \left( k - \frac{1}{2} \right) a \sqrt{n-t} \right\}
\wedge \inf \left\{ t \ge 0 :
Z(t) \ge \left( k + \frac{3}{2} \right) a \sqrt{n-t} \right\}. }
We now have
\eqn{e:bound-prob-2}
{ \mathbf{P} \left[ \tau_k \ge \frac{3}{4} a n \right]
\le C \frac{2 a \sqrt{n}}{\sqrt{\frac{3}{4} a n}}
= \frac{2 \sqrt{a} C}{\sqrt{3/4}}. }
Analogously to \eqref{e:rel-1} we obtain
\eqn{e:rel-2}
{ M_n(k)
\le c_n(k) \overline{M}_{n/4}(k-1) + d_n(k) \overline{M}_{n/4}(k) +
e_n(k) \overline{M}_{n/4}(k+1),
\quad k = -1, 0. }
By an argument similar to the one for the previous case, for $a$
sufficiently small we have $c_n(k) \ge \frac{1}{4}$.
\emph{Case $-k_\mathrm{max}-1 \le k \le -2$.} This time we define
\eqnsplst
{ \tau_k
&= n \sqrt{a} \left( \frac{1}{1-k} - \frac{1}{4 (1-k)^2} \right) \, \wedge \,
\inf \left\{ t \ge 0 : Z(t) < \left( k - \frac{1}{2} \right) a \sqrt{n-t} \right\} \\
&\qquad\qquad \wedge \, \inf \left\{ t \ge 0 :
Z(t) \ge \left( k + \frac{3}{2} \right) a \sqrt{n-t} \right\}. }
Then with the same choice of $a$ as in the case $k \ge 1$ we have
\eqnst
{ \mathbf{P} \left[ \tau_k > n \sqrt{a} \left( \frac{1}{1-k} - \frac{1}{4 (1-k)^2} \right) \right]
\le C \frac{4 a^{3/4}}{\sqrt{\frac{a}{2 A} \left(4 - \frac{a}{2 A}\right)}}
\le C(A) \sqrt{a}. }
This yields the relation
\eqn{e:rel-3}
{ M_n(k)
\le c_n(k) \overline{M}_{n/4}(k-1) + d_n(k) \overline{M}_{n/4}(k) +
e_n(k) \overline{M}_{n/4}(k+1),
\quad -k_\mathrm{max}-1 \le k \le -2, }
where $c_n(k) \ge \frac{1}{4}$ for sufficiently small $a$.
We select a subsequence of $n$ along which $c_n(k), d_n(k), e_n(k)$ all
converge to some limits $c(k), d(k), e(k)$, and along which all $M_n(k)$
converge to $\beta(k)$. Then we get
\eqn{e:beta-ks}
{ \beta(k)
\le c(k) \beta(k-1) + d(k) \beta(k) + e(k) \beta(k+1). }
Due to Proposition \ref{prop:far-enough} we have $\beta(-k_\mathrm{max}-2) \le \varepsilon$
and $\beta(k_\mathrm{max}+1) \le c_G + \varepsilon$. It is easy to deduce from the relation
\eqref{e:beta-ks} and $c(k) \ge \frac{1}{6} > 0$ that
if $\beta(k) \ge \beta(k+1)$ then also $\beta(k-1) \ge \beta(k)$.
Hence the maximum in the variable $k$ occurs at the
right endpoint and $\beta(k) \le c_G + \varepsilon$
for all $-k_\mathrm{max}-2 \le k < k_\mathrm{max}+1$. This completes the proof of the
Proposition.
\end{proof}
\section{Further Questions}
\label{sec:open}
\begin{problem}
\label{prob:exp-converge}
It is plausible that the limit $c_G$ is reached at an
exponential rate everywhere in $\mathcal{R}_G$.
If one could show that
$p_G(\mathbf{n})$ is maximized in the interior of $\mathcal{R}_G$,
then this would follow rather easily from \eqref{e:liminf}.
Can one describe the asymptotic behaviour of the optimal
strategy?
\end{problem}
\begin{problem}
The estimates in Section \ref{sec:critical-proof} strongly
suggest Gaussian behaviour near $\partial \mathcal{R}_G$. Can one
make this more precise?
\end{problem}
\begin{problem}
It is plausible that under the optimal strategy, the games starting from
$\mathbf{n}, \mathbf{n}' \in n \mathcal{R}_G$ (and with the same sequence of vertices drawn)
couple with high probability. This may provide an alternative approach
to the rather technical arguments of
Theorem \ref{thm:phase-trans-graph}(ii)
and Proposition \ref{prop:in-enough}.
\end{problem}
\begin{problem}
We describe a possible definition of an ``order parameter'', in analogy
with statistical physics models. Let $0 \le \alpha \le 1$, and
suppose that the player has to give up proportion $\alpha$ of
her/his moves to an adversary, at which times the
move is chosen by the adversary. Let $p_{G,\alpha}(\mathbf{n})$ denote
the probability of winning in such a game. Let
\eqnst
{ \theta(\mathbf{x})
= \inf \{ 0 \le \alpha \le 1 :
\lim_{n \to \infty} p_{G,\alpha}(n \mathbf{x}) = 0 \}. }
The methods of Theorem \ref{thm:phase-trans-graph} show that
$\theta(\mathbf{x}) > 0$ in $\mathcal{R}_G$ and $\theta(\mathbf{x}) = 0$ in $\mathcal{I}_G$.
Can one analyze $\theta$, or a suitable alternative?
\end{problem}
\bigbreak
| {
"timestamp": "2016-09-21T02:07:04",
"yymm": "1507",
"arxiv_id": "1507.04169",
"language": "en",
"url": "https://arxiv.org/abs/1507.04169",
"abstract": "We study the following game on a finite graph $G = (V, E)$. At the start, each edge is assigned an integer $n_e \\ge 0$, $n = \\sum_{e \\in E} n_e$. In round $t$, $1 \\le t \\le n$, a uniformly random vertex $v \\in V$ is chosen and one of the edges $f$ incident with $v$ is selected by the player. The value assigned to $f$ is then decreased by $1$. The player wins, if the configuration $(0, \\dots, 0)$ is reached; in other words, the edge values never go negative. Our main result is that there is a phase transition: as $n \\to \\infty$, the probability that the player wins approaches a constant $c_G > 0$ when $(n_e/n : e \\in E)$ converges to a point in the interior of a certain convex set $\\mathcal{R}_G$, and goes to $0$ exponentially when $(n_e/n : e \\in E)$ is bounded away from $\\mathcal{R}_G$. We also obtain upper bounds in the near-critical region, that is when $(n_e/n : e \\in E)$ lies close to $\\partial \\mathcal{R}_G$. We supply quantitative error bounds in our arguments.",
"subjects": "Probability (math.PR); Optimization and Control (math.OC)",
"title": "Phase transition in a sequential assignment problem on graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373084,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449310917195
} |
https://arxiv.org/abs/2111.00356 | A note on the uniformity threshold for Berge hypergraphs | A Berge copy of a graph is a hypergraph obtained by enlarging the edges arbitrarily.Grósz, Methuku and Tompkins in 2020 showed that for any graph $F$, there is an integer $r_0=r_0(F)$, such that for any $r\ge r_0$, any $r$-uniform hypergraph without a Berge copy of $F$ has $o(n^2)$ hyperedges. The smallest such $r_0$ is called the uniformity threshold of $F$ and is denoted by $th(F)$. They showed that $th(F)\le R(F,F')$, where $R$ denotes the off-diagonal Ramsey number and $F'$ is any graph obtained form $F$ by deleting an edge.We improve this bound to $th(F)\le R(K_{\chi(F)},F')$, and use the new bound to determine $th(F)$ exactly for several classes of graphs. | \section{Introduction}
Given a graph $G$ and a hypergraph $\cH$, we say that $\cH$ is a \textit{Berge} copy of $G$ (Berge-$G$ in short) if $V(G)\subset V(\cH)$ and there is a bijection $f:E(G)\rightarrow E(\cH)$ such that for any edge $e\in E(G)$, we have $e\subset f(e)$. We also say that $G$ is the \textit{core} of $\cH$. Note that there are several non-isomorphic Berge copies of a graph, and a hypergraph can be a Berge copy of several non-isomorphic graphs. Also, a hypergraph can have multiple isomorphic cores.
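Deciding whether a given hypergraph is a Berge copy of a given graph is a bipartite matching problem: each edge of $G$ must be assigned a distinct hyperedge containing it. A minimal Python sketch (our own illustration, using the standard augmenting-path algorithm; it does not check $V(G)\subset V(\cH)$):
\begin{verbatim}
def is_berge_copy(graph_edges, hyperedges):
    # Is there a bijection f: E(G) -> E(H) with e a subset of f(e)?
    if len(graph_edges) != len(hyperedges):
        return False
    candidates = [[j for j, h in enumerate(hyperedges) if set(e) <= set(h)]
                  for e in graph_edges]
    match = [-1] * len(hyperedges)  # match[j] = graph edge assigned to j

    def augment(i, seen):
        for j in candidates[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, set()) for i in range(len(graph_edges)))

# A Berge triangle: each pair of {1,2,3} gets its own hyperedge.
print(is_berge_copy([(1, 2), (2, 3), (1, 3)],
                    [{1, 2, 4}, {2, 3, 5}, {1, 3, 4, 5}]))  # True
\end{verbatim}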
Berge copies (or Berge hypergraphs), extending the notion of hypergraph cycles due to Berge, were introduced by Gerbner and Palmer \cite{gp1}. Since then, extremal problems for Berge hypergraphs have attracted a lot of attention, see Section 5.2.2 of \cite{gp} for a survey.
In this paper we are concerned with the maximum number of hyperedges in $r$-uniform $n$-vertex hypergraphs that do not contain any Berge copy of a given graph $F$ (in short: Berge-$F$-free hypergraphs). We denote this quantity by $\ex_r(n,\textup{Berge-}F)$. Note that a 2-uniform Berge copy of $F$ is $F$, and $\ex(n,F):=\ex_2(n,\textup{Berge-}F)$ is the Tur\'an number of $F$.
Gerbner and Palmer showed that if the uniformity is large enough, then a Berge-$F$-free hypergraph can have at most quadratic many hyperedges.
\begin{proposition}[Gerbner, Palmer,\cite{gp1}]
If $r\ge |V(F)|$, then $\ex_r(n,\textup{Berge-}F)=O(\ex(n,F))=O(n^2)$.
\end{proposition}
This was improved by Gr\'osz, Methuku and Tompkins \cite{GMT} in the case the uniformity is even larger.
\begin{thm}[Gr\'osz, Methuku, Tompkins \cite{GMT}]
For any graph $F$, there is an integer $r_0=r_0(F)$ such that for any $r\ge r_0$, $\ex_r(n,\textup{Berge-}F)=o(n^2)$.
\end{thm}
The smallest possible $r_0$ in the above theorem is called the \textit{uniformity threshold} of $F$ and is denoted by $\thre(F)$.
Gr\'osz, Methuku and Tompkins \cite{GMT} initiated the study of the uniformity threshold. They proved the following general upper bound.
\begin{thm}[Gr\'osz, Methuku, Tompkins \cite{GMT}]\label{grmeto}
For any graph $F$ and any edge $e$ of $F$, we have $\thre(F)\le R(F,F\setminus e)$.
\end{thm}
Here we let $F\setminus e$ denote an arbitrary graph obtained by deleting an edge from $F$.
$R$ denotes the off-diagonal Ramsey number, i.e., $R(H,G)$ is the smallest number $r$ of vertices such that whenever we color each edge of $K_r$ blue or red, we can find either a blue copy of $H$ or a red copy of $G$.
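As a toy illustration of this definition, one can test a given $r$ by exhaustive search over the $2$-colorings of $E(K_r)$; the Python sketch below (our own illustration) does this for the classical case $H=G=K_3$, where $R(K_3,K_3)=6$.
\begin{verbatim}
from itertools import combinations, product

def triangle_ramsey_witness(n):
    # Search for a 2-coloring of K_n with no monochromatic triangle;
    # one exists if and only if R(K_3, K_3) > n.
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    for colors in product((0, 1), repeat=len(edges)):
        col = dict(zip(edges, colors))
        if all(len({col[(a, b)], col[(a, c)], col[(b, c)]}) > 1
               for a, b, c in triangles):
            return col
    return None

print(triangle_ramsey_witness(5) is not None)  # True:  R(3,3) > 5
print(triangle_ramsey_witness(6) is None)      # True:  R(3,3) <= 6
\end{verbatim}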
Using the above theorem, Gr\'osz, Methuku and Tompkins \cite{GMT} determined the uniformity threshold exactly for seven graphs on at most 5 vertices, including the triangle. Other than those, the uniformity threshold is known only in some ``easy'' cases, where $\ex_r(n,\textup{Berge-}F)$ is well studied for every $r$. For example, $\thre(F)=2$ for bipartite graphs with a vertex whose removal results in a forest \cite{gmv}, and $\thre(F)=3$ for odd cycles of length more than three \cite{gyl}.
Our main result is the following bound.
\begin{thm}\label{main}
For any graph $F$ and any edge $e$ of $F$, we have $\thre(F)\le R(K_{\chi(F)},F\setminus e)$.
\end{thm}
We prove another bound for a class of graphs. Before stating our next theorem, we have to introduce a notion closely connected to Berge hypergraphs.
Given graphs $H$ and $G$, we let $\cN(H,G)$ denote the number of copies of $H$ in $G$. Given $F$, we let $\ex(n,H,F):=\max\{\cN(H,G): G \text{ is an $F$-free $n$-vertex graph}\}$. This quantity is called the \textit{generalized Tur\'an number} of $H$ and $F$. After several sporadic results, the systematic study of generalized Tur\'an problems was initiated by Alon and Shikhelman \cite{as}.
The connection to Berge hypergraphs was established by Gerbner and Palmer \cite{gp2}, who showed that $\ex(n,K_r,F)\le \ex_r(n,\textup{Berge-}F)\le\ex(n,K_r,F)+\ex(n,F)$.
\begin{thm}\label{kicsi}
If $\ex(n,K_3,F)=o(n^2)$, then $\thre(F)\le (\chi(F)-1)(|V(F)|-1)+1$.
\end{thm}
Using the above upper bounds and known results on Ramsey numbers, we can determine the uniformity threshold exactly for several classes of graphs. Let $F_k$ denote the \textit{$k$-fan}, $k$ triangles sharing a vertex. Let $B_t$ denote the \textit{book with $t$ pages}, $t$ triangles sharing an edge. Let $W_k$ denote the \textit{wheel with $k$ spokes}, a $C_k$ with an additional vertex connected to each vertex of the cycle. Let $B_{p,q}(m)$ denote the \textit{generalized book}, $m$ copies of $K_q$ each sharing a fixed set of $p$ vertices.
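For concreteness, these graph classes are simple to generate explicitly; the following minimal Python sketch builds their edge lists (the vertex labels are our own convention).
\begin{verbatim}
def fan(k):
    # k-fan: k triangles sharing the vertex 0.
    return ([(0, i) for i in range(1, 2 * k + 1)]
            + [(2 * i + 1, 2 * i + 2) for i in range(k)])

def book(t):
    # book with t pages: t triangles sharing the edge (0, 1).
    return ([(0, 1)] + [(0, i) for i in range(2, t + 2)]
            + [(1, i) for i in range(2, t + 2)])

def wheel(k):
    # wheel with k spokes: a C_k on 1..k plus a hub 0 joined to
    # every cycle vertex.
    cycle = [(i, i % k + 1) for i in range(1, k + 1)]
    return cycle + [(0, i) for i in range(1, k + 1)]
\end{verbatim}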
\begin{thm}\label{rams}
We have $\thre(F)= (\chi(F)-1)(|V(F)|-1)+1$ if $F$ is the $k$-fan with $k>1$, $B_t$ with $t>1$, wheel graph $W_k$ with $k\ge 6$ an even number, or a generalized book $B_{p,q}(m)$ with $m$ large enough. In particular
\begin{itemize}
\item $\thre(F_k)=4k+1$ for $k>1$,
\item $\thre(B_t)=2t+3$ for $t>1$,
\item $\thre(W_k)=2k+1$ for even $k>4$,
\item $\thre(B_{p,q}(m))=(q-1)(m(q-p)+p-1)+1$ for $m$ large enough.
\end{itemize}
\end{thm}
The structure of the paper is as follows. In Section 2, we prove our theorems through a series of propositions.
We finish the paper with some concluding remarks in Section 3.
\section{Proofs}
We will use the removal lemma \cite{efr}. It states that if an $n$-vertex graph $G$ has $o(n^{|V(H)|})$ copies of $H$, then we can make $G$ $H$-free by removing $o(n^2)$ edges.
The \textit{shadow graph} of a hypergraph $\cH$ is the graph $G$ on the same vertex set with $uv\in E(G)$ if and only if there is a hyperedge of $\cH$ containing both $u$ and $v$.
Lu and Wang \cite{lw} initiated the study of the maximum number of edges in the shadow graph of a Berge-$F$-free $r$-uniform $n$-vertex hypergraph. This quantity is called the \textit{$r$-cover Tur\'an number} of $F$ and is denoted by $\hat{\ex}_r(n,\textup{Berge-}F)$. We initiate the study of a generalized version of this, the straightforward analogue of generalized Tur\'an numbers. We let $\hat{\ex}_r(n,H,\textup{Berge-}F)$ denote the maximum number of copies of $H$ in the shadow graph of an $r$-uniform Berge-$F$-free $n$-vertex hypergraph.
\begin{proposition}\label{propp} If $r\ge |V(F)|$, then $\hat{\ex}_r(n,H,\textup{Berge-}F)\le O(\ex(n,H,F))+O(n^{|V(H)|-1})$.
\end{proposition}
\begin{proof}
Let $\cH$ be a Berge-$F$-free $r$-uniform $n$-vertex hypergraph. Recall that $\cH$ has $O(n^2)$ hyperedges by a result of Gerbner and Palmer \cite{gp1}.
We distinguish two types of copies of $H$ in the shadow graph $G$ of $\cH$. Those copies of $H$ that contain at least 3 vertices from some hyperedge of $\cH$ can be counted by picking a hyperedge in $O(n^2)$ ways, picking 3 vertices of it in constantly many ways, and then picking the $|V(H)|-3$ other vertices in $O(n^{|V(H)|-3})$ ways. Therefore, there are $O(n^{|V(H)|-1})$ such copies.
Let us consider now the other copies of $H$. For each hyperedge of $\cH$, let us pick a sub-edge randomly with uniform distribution, independently from the other edges. Let $G'$ be the graph having those edges. Then $G'$ is clearly $F$-free. Observe that a copy of $H$ that shares at most two vertices with any hyperedge of $\cH$ is in $G'$ with probability at least $1/\binom{r}{2}^{|E(H)|}$. Indeed, every edge of the copy of $H$ is in at least one hyperedge of $\cH$, thus it is in $G'$ with probability at least $1/\binom{r}{2}$.
Distinct edges of the copy of $H$ are in $G'$ independently of each other, as they may be included in $G'$ only via distinct hyperedges. This implies that the number of the copies of the second type of $H$ is at most $\binom{r}{2}^{|E(H)|}\cN(H,G')\le \binom{r}{2}^{|E(H)|}\ex(n,H,F)$, completing the proof.
\end{proof}
We are going to be interested in the case when $\hat{\ex}_r(n,H,\textup{Berge-}F)=o(n^{|V(H)|})$. By the above result, it holds if
$\ex(n,H,F)=o(n^{|V(H)|})$. By a result of Alon and Shikhelman \cite{as} this happens if and only if $F$ is a subgraph of a blow-up of $H$.
\begin{corollary}\label{coro1} If $r\ge |V(F)|$ and $F$ is a subgraph of a blow-up of $H$, then
$\hat{\ex}_r(n,H,\textup{Berge-}F)=o(n^{|V(H)|})$.
\end{corollary}
What we are actually interested in is the following quantity. We say that a set $E'$ of edges covers a subgraph of $G$ if the subgraph contains an edge from $E'$. Let $C(H,G)$ denote the minimum number of edges in $G$ that cover each copy of $H$. Let $x(n,H,F)$ denote the largest value of $C(H,G)$ in $F$-free $n$-vertex graphs $G$. Let $\hat{x}_r(n,H,\textup{Berge-}F)$ denote the largest value of $C(H,G)$ in the shadow graph $G$ of a Berge-$F$-free $n$-vertex $r$-uniform hypergraph. Corollary \ref{coro1} and the removal lemma imply the following.
\begin{corollary}\label{coro2} If $r\ge |V(F)|$ and $F$ is a subgraph of a blow-up of $H$, then
$\hat{x}_r(n,H,\textup{Berge-}F)=o(n^{2})$. In particular, $\hat{x}_r(n,K_{\chi(F)},\textup{Berge-}F)=o(n^{2})$.
\end{corollary}
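To make the quantity $C(H,G)$ concrete in the simplest relevant case $H=K_3$, here is a toy brute-force computation (Python, our own illustration; feasible only for very small graphs).
\begin{verbatim}
from itertools import combinations

def min_triangle_cover(vertices, edges):
    # C(K_3, G): the minimum number of edges of G meeting every triangle.
    E = sorted(set(tuple(sorted(e)) for e in edges))
    tris = [T for T in combinations(sorted(vertices), 3)
            if all(tuple(sorted(p)) in E for p in combinations(T, 2))]
    for size in range(len(E) + 1):
        for S in combinations(E, size):
            if all(any(tuple(sorted(p)) in S for p in combinations(T, 2))
                   for T in tris):
                return size

K4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(min_triangle_cover([1, 2, 3, 4], K4))  # 2: a perfect matching works
\end{verbatim}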
Given a hypergraph $\cH$, we say that an edge $uv$ of the shadow graph is $t$-heavy if $u$ and $v$ are contained in at least $t$ hyperedges of $\cH$. Otherwise $uv$ is $t$-light. We say that a subgraph of the shadow graph is $t$-heavy (resp. $t$-light) if each edge of it is $t$-heavy (resp. $t$-light).
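Both notions are straightforward to compute; the following minimal Python sketch builds the shadow graph together with the number of hyperedges containing each pair, and extracts the $t$-heavy edges (data layout and names are our own illustration).
\begin{verbatim}
from collections import Counter
from itertools import combinations

def shadow_with_multiplicities(hyperedges):
    # Shadow graph: an edge uv for every pair inside some hyperedge,
    # weighted by the number of hyperedges containing both u and v.
    weight = Counter()
    for h in hyperedges:
        for u, v in combinations(sorted(h), 2):
            weight[(u, v)] += 1
    return weight

def heavy_edges(hyperedges, t):
    # t-heavy edges: pairs contained in at least t hyperedges.
    w = shadow_with_multiplicities(hyperedges)
    return {e for e, c in w.items() if c >= t}

H = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {3, 4, 5}]
print(heavy_edges(H, 3))  # {(1, 2)}: the only pair in three hyperedges
\end{verbatim}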
Let us describe first the main advantage of heavy edges.
\begin{observation}\label{obs1}
Assume that $t\ge |E(F)|$
and we find a Berge copy of a subgraph $F'$ of $F$ in $\cH$, such that its core is extended to a copy of $F$ with $t$-heavy edges in the shadow graph $G$. Then this copy is the core of a Berge-$F$ in $\cH$.
\end{observation}
\begin{proof}
All we need to do is to pick distinct hyperedges for the additional edges. We go through the additional edges in an arbitrary order. For each of them, there are at most $|E(F)|-1$ hyperedges picked earlier (either already in the original Berge copy of $F'$, or picked for an earlier one of the additional edges), while each additional edge is $t$-heavy and thus contained in at least $t\ge |E(F)|$ hyperedges. Thus we can pick a new hyperedge for each of the additional edges, to complete the Berge copy of $F$.
\end{proof}
In particular, this implies that if $\cH$ is Berge-$F$-free, then there is no $t$-heavy copy of $F$ in the shadow graph.
\begin{proposition}\label{lig}
If $\cH$ is a Berge-$F$-free $r$-uniform hypergraph, then there are at most $(t-1)\hat{x}_r(n,H,\textup{Berge-}F)$ hyperedges in $\cH$ containing a $t$-light copy of $H$.
\end{proposition}
\begin{proof}
Let $G$ be the shadow graph of $\cH$ and $S$ be a set of $\hat{x}_r(n,H,\textup{Berge-}F)$ edges covering each copy of $H$. Then there is a $t$-light edge of $S$ inside every hyperedge that contains a $t$-light copy of $H$. Each $t$-light edge of $S$ is counted in at most $t-1$ hyperedges, thus there are at most $(t-1)|S|$ such hyperedges, completing the proof.
\end{proof}
\begin{proposition}\label{fo}
If $F$ is a subgraph of a blow-up of $H$, then $\thre(F)\le R(H,F\setminus e)$.
\end{proposition}
\begin{proof} Let $\cH$ be an $r$-uniform Berge-$F$-free $n$-vertex hypergraph and let $t=|E(F)|$. We apply Proposition \ref{lig} to obtain that there are at most $(t-1)\hat{x}_r(n,H,\textup{Berge-}F)$ hyperedges in $\cH$ containing a $t$-light copy of $H$. By Corollary \ref{coro2}, $\hat{x}_r(n,H,\textup{Berge-}F)=o(n^2)$, thus there are $o(n^2)$ hyperedges containing a $t$-light copy of $H$. Let $\cH'$ denote the subhypergraph obtained by deleting these $o(n^2)$ hyperedges. We will show that if $r\ge R(H,F\setminus e)$, then $\cH'$ is empty.
By Observation \ref{obs1}, if a hyperedge $h$ of $\cH'$ contains a $t$-heavy copy of $F\setminus e$, then that copy is the core of a Berge-$(F\setminus e)$ in $\cH'\setminus h$.
Now we can add $h$, representing $e$, to obtain a copy of Berge-$F$ in $\cH'$, a contradiction. We obtained that every hyperedge $h$ of $\cH'$ contains no $t$-heavy copy of $F\setminus e$, nor a $t$-light copy of $H$. But the classification of the pairs inside $h$ into $t$-heavy and $t$-light edges is a 2-coloring of the complete graph on $h$, thus $h$ has fewer than $R(H,F\setminus e)\le r$ vertices, a contradiction finishing the proof.
\end{proof}
Theorem \ref{main} immediately follows from the above result by observing that $F$ is a subgraph of a blow-up of $K_{\chi(F)}$. We are also ready to prove Theorem \ref{kicsi}. Recall that it states the upper bound $\thre(F)\le (\chi(F)-1)(|V(F)|-1)+1$ if $\ex(n,K_3,F)=o(n^2)$.
\begin{proof}[Proof of Theorem \ref{kicsi}]
Let $\cH$ be a Berge-$F$-free $r$-uniform $n$-vertex hypergraph and $t=|E(F)|$. As in the proof of Proposition \ref{fo}, we have that no hyperedge contains a $t$-heavy $F\setminus e$ and $o(n^2)$ hyperedges contain a $t$-light copy of $K_{\chi(F)}$. Let $\cH_1$ denote the subhypergraph of those hyperedges that contain a triangle with two $t$-heavy edges and one $t$-light edge. We claim that $\cH_1$ has $o(n^2)$ hyperedges.
Let us pick a triangle with two $t$-heavy edges and one $t$-light edge for each hyperedge in $\cH_1$ and let $G_1$ denote the graph that contains the three edges picked for each hyperedge. We claim that $G_1$ is $F$-free. Indeed, assume that there is a copy of $F$. First we pick the hyperedges for the $t$-light edges in that copy. This is doable, as for each of those edges $uv$ there is at least one hyperedge $h$ that contains $uv$ such that $uv$ is in the triangle we picked for $h$. Then we pick $h$ for $uv$. This way we pick every hyperedge $h$ at most once, as we picked only one $t$-light edge for $h$. Now we have a Berge copy of a subgraph of $F$, and the core of this subgraph is extended to $F$ by $t$-heavy edges, thus we can use Observation \ref{obs1} to find a Berge-$F$ in $\cH_1$, a contradiction.
Thus we have that $G_1$ is $F$-free and hence by our assumption has $o(n^2)$ triangles. For every hyperedge of $\cH_1$, we picked a triangle in $G_1$. Observe that each triangle $uvw$ was picked at most $t-1$ times. Indeed, the triangle contains a $t$-light edge $uv$, thus at most $t-1$ hyperedges contain this triangle. This implies that $\cH_1$ has $o(n^2)$ hyperedges.
We will show that there are no further hyperedges if $r\ge (\chi(F)-1)(|V(F)|-1)+1$. Assume that $h$ is a hyperedge not in $\cH_1$ that contains no $t$-light copy of $K_{\chi(F)}$, and let us consider its subedges. Since triangles with exactly two $t$-heavy edges are forbidden, the $t$-heavy edges form vertex-disjoint cliques in $h$. Such a clique has at most $|V(F)|-1$ vertices, as otherwise we have a $t$-heavy copy of $F$. If there are $k$ vertex-disjoint $t$-heavy cliques, then there is a $t$-light $K_k$, thus $k<\chi(F)$. Hence $h$ has at most $(\chi(F)-1)(|V(F)|-1)$ vertices, a contradiction.
\end{proof}
Let us turn our attention to lower bounds.
Gr\'osz, Methuku and Tompkins \cite{GMT} proved a general lower bound. We say that a partition of $V(F)$ into
sets of size at most $t$ is a \textit{$t$-admissible partition} of $F$ if there is at most one edge between any two sets in $F$. Given a $t$-admissible partition $P$, we let $F(P)$ be the graph which has the sets of the partition as vertices, with $UU'$ being an edge if and only if there is a vertex in $U$ connected to a vertex of $U'$.
Given $F$, we let $c_t(F)$ denote the smallest chromatic number of graphs $F(P)$, where $P$ is a $t$-admissible partition.
\begin{thm}[Gr\'osz, Methuku, Tompkins \cite{GMT}]
Let $F$ be a graph with $c_t(F)\ge 3$ and $1\le t\le |V(F)|-1$. Then $\thre(F)\ge (c_t(F)-1)t+1$.
\end{thm}
Observe that partitioning into singletons is $t$-admissible, thus $c_t(F)\le \chi(F)$. This means that the lower bound on $\thre(F)$ is at most $(\chi(F)-1)(|V(F)|-1)+1$. Let us compare this to our upper bounds.
Theorem \ref{kicsi} has the same quantity as an upper bound, and $(\chi(F)-1)(|V(F)|-1)+1$ is also a well-known and easy lower bound on $R(K_{\chi(F)},F\setminus e)$ if $F\setminus e$ is connected \cite{chv}.
This means that to find graphs where we can determine the uniformity threshold, we should check the cases of equality in the lower and upper bounds.
\begin{corollary}
Let $F$ consist of a subgraph $F_0$ and an additional vertex $v$ connected to each vertex of $F_0$. If $F_0$ is connected or $F_0$ is a bipartite graph such that at least two of its components contain an edge,
then $\thre(F)\ge (\chi(F)-1)(|V(F)|-1)+1$.
\end{corollary}
\begin{proof}
Let us consider a $(|V(F)|-1)$-admissible partition $P$ of $F$. If $\{v\}$ is a part, then each other vertex is a part, thus $\chi(F(P))=\chi(F)$. Otherwise $v$ is in a part with another vertex $u$. Then each other neighbor of $u$ must be in the same part. If $F\setminus v$ is connected, then all the vertices must be in that part, a contradiction.
Otherwise, a connected component of $F_0$ is in the part of $v$. Then the other vertices $w$ must form parts $\{w\}$, as otherwise a part would contain two neighbors of $v$. As $F_0$ contains an edge not in the component of $v$, we have $\chi(F(P))\ge\chi(F)=3$.
\end{proof}
Let us consider now when $R(K_{\chi(F)},F\setminus e)=(\chi(F)-1)(|V(F)|-1)+1$.
This is in fact a well-studied notion in Ramsey theory. A graph $G$ is called \textit{$p$-good} if $R(K_p,G)=(p-1)(|V(G)|-1)+1$.
Now Theorem \ref{rams} follows from Theorem \ref{main} and known results in Ramsey theory that we list below.
Li and Rousseau \cite{lr} showed that the $k$-fan is 3-good for every $k>1$.
Rousseau and Sheehan \cite{rs} showed that the book $B_t$ is 3-good for every $t>1$. Burr and Erd\H os \cite{be} showed that the wheel $W_k$ is 3-good for $k\ge 5$.
Nikiforov and Rousseau \cite{nr} showed that the generalized book $B_{p,q}(m)$ is $q$-good for $m$ large enough. The threshold on $m$ was improved in \cite{fhw}.
We remark that Theorem \ref{kicsi} also implies two of the statements in Theorem \ref{rams} using the following known results in generalized Tur\'an theory. Alon and Shikhelman \cite{as} showed that $\ex(n,K_3,B_t)=o(n^2)$ and $\ex(n,K_3,F_k)=O(n)$.
\section{Concluding remarks}
The main question regarding the uniformity threshold for Berge hypergraphs is whether it can grow exponentially with the number of vertices for some graphs. Our general upper bounds rely on Ramsey numbers, which can grow exponentially. Theorem \ref{main}, together with a theorem from \cite{aks}, shows that for graphs $F$ with chromatic number $k$, $\thre(F)$ is polynomial in $|V(F)|$, where the degree of the polynomial may depend on $k$. However, this bound is still exponential if $\chi(F)$ grows with $|V(F)|$, e.g. for cliques. In fact, for cliques Theorem \ref{main} does not improve Theorem \ref{grmeto}. On the other hand, the lower bound on $\thre(K_k)$ is quadratic in $k$.
We initiated the study of generalized cover Tur\'an numbers for Berge hypergraphs. We only proved Proposition \ref{propp} and its corollaries concerning this notion, but we believe that the question is interesting in general.
Let us mention that $\hat{\ex}_r(n,F,\textup{Berge-}F)$ may be interesting as well.
Instead of Theorem \ref{kicsi}, we proved the much more general Proposition \ref{fo}. However, we could not find any example where applying Proposition \ref{fo} improves the bound on $\thre(F)$.
\bigskip
\textbf{Funding}: Research supported by the National Research, Development and Innovation Office - NKFIH under the grants KH 130371, SNN 129364, FK 132060, and KKP-133819.
| {
"timestamp": "2021-11-02T01:14:54",
"yymm": "2111",
"arxiv_id": "2111.00356",
"language": "en",
"url": "https://arxiv.org/abs/2111.00356",
"abstract": "A Berge copy of a graph is a hypergraph obtained by enlarging the edges arbitrarily.Grósz, Methuku and Tompkins in 2020 showed that for any graph $F$, there is an integer $r_0=r_0(F)$, such that for any $r\\ge r_0$, any $r$-uniform hypergraph without a Berge copy of $F$ has $o(n^2)$ hyperedges. The smallest such $r_0$ is called the uniformity threshold of $F$ and is denoted by $th(F)$. They showed that $th(F)\\le R(F,F')$, where $R$ denotes the off-diagonal Ramsey number and $F'$ is any graph obtained form $F$ by deleting an edge.We improve this bound to $th(F)\\le R(K_{\\chi(F)},F')$, and use the new bound to determine $th(F)$ exactly for several classes of graphs.",
"subjects": "Combinatorics (math.CO)",
"title": "A note on the uniformity threshold for Berge hypergraphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373085,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449310917195
} |
https://arxiv.org/abs/1001.4574 | Birational invariants and A^1-connectedness | We study some aspects of the relationship between A^1-homotopy theory and birational geometry. We study the so-called A^1-singular chain complex and zeroth A^1-homology sheaf of smooth algebraic varieties over a field k. We exhibit some ways in which these objects are similar to their counterparts in classical topology and similar to their motivic counterparts (the (Voevodsky) motive and zeroth Suslin homology sheaf). We show that if k is infinite the zeroth A^1-homology sheaf is a birational invariant of smooth proper varieties, and we explain how these sheaves control various cohomological invariants, e.g., unramified étale cohomology. In particular, we deduce a number of vanishing results for cohomology of A^1-connected varieties. Finally, we give a partial converse to these vanishing statements by giving a characterization of A^1-connectedness by means of vanishing of unramified invariants. | \section{Introduction}
In this paper, we continue to investigate the relationship between birational geometry and connectedness in the sense of $\aone$-homotopy theory that was initiated in \cite{AM}. Developing some ideas of \cite[\S 4]{AM}, we study cohomological consequences of homotopical connectivity hypotheses and, more specifically, vanishing results for various types of cohomological invariants such as unramified \'etale cohomology. For the most part, however, the results of this paper are logically independent of \cite{AM}.
In the context of the Morel-Voevodsky $\aone$-homotopy theory of smooth schemes over a field $k$ \cite{MV}, one may associate with any smooth scheme $X$ a Nisnevich sheaf of sets, denoted $\pi_0^{\aone}(X)$, called the sheaf of $\aone$-connected components of $X$. A smooth scheme $X$ is called {\em $\aone$-connected} if $\pi_0^{\aone}(X)$ is isomorphic to the constant $1$-point sheaf (see Definition \ref{defn:aoneconnected} for more details). In \cite[Definition 2.2.2 and Corollary 2.4.4]{AM} it was shown that varieties that are $\aone$-connected are ``nearly rational in a strong sense.'' For example, if $k$ has characteristic $0$, then stably $k$-rational smooth proper varieties or, more generally, Saltman's retract $k$-rational smooth proper varieties are $\aone$-connected.
Many schemes of interest are not $\aone$-connected and providing an explicit description of $\pi_0^{\aone}(X)$ for any arbitrary smooth variety seems difficult at the moment. For that reason, we seek to understand auxiliary invariants that are controlled by the sheaf of $\aone$-connected components. Here, we study the homological counterpart of $\pi_0^{\aone}(X)$: the zeroth $\aone$-homology sheaf, denoted $\H_0^{\aone}(X)$ (see Definition \ref{defn:aonehomology}). While in some ways $\aone$-homology is similar to the more familiar Suslin homology (see, e.g., \cite{SuslinVoevodsky}), the two theories are different. While the zeroth Suslin homology sheaf (see Definition \ref{defn:suslinhomology}) of a smooth proper $k$-scheme $X$ can be described using the Chow group of $0$-cycles on $X$, as we explain, the zeroth $\aone$-homology is more closely related to $R$-equivalence classes in $X$.
Intuitively speaking, the sheaf $\pi_0^{\aone}(X)$ formalizes the idea of algebraic path components of $X$, where algebraic paths are interpreted as chains of affine lines. Indeed \cite[Theorem 6.2.1]{AM} proves that if $X$ is smooth and proper over $k$, then for any finitely generated separable extension $L/k$, the set of sections $\pi_0^{\aone}(X)(L)$ can be identified with the set of $R$-equivalence classes in $X(L)$ in the sense of Manin. While this result does not identify the whole sheaf $\pi_0^{\aone}(X)$, it suggests that $\pi_0^{\aone}(X)$ is a birational invariant of smooth proper $k$-varieties.
By analogy with topology, one might expect that the sheaf $\H_0^{\aone}(X)$ should be the free abelian group on the $\aone$-connected components of $X$. Mirroring the expected behavior of $\pi_0^{\aone}(X)$ above, one might also expect that $\H_0^{\aone}(X)$ is a birational invariant for smooth proper $k$-varieties. Both of these statements are true provided that in the first statement one interprets the expression ``free abelian group'' correctly, and in the second statement one restricts $k$ appropriately (the corresponding statement for Suslin homology is well known). Precisely, we prove the following results.
\begin{propintro}[See Proposition \ref{prop:dependsonpi0}]
\label{propintro:universalproperty}
Suppose $k$ is a field and $X$ is a smooth $k$-scheme. The morphism $\H_0^{\aone}(X) \to \H_0^{\aone}(\pi_0^{\aone}(X))$, induced by the canonical morphism $X \to \pi_0^{\aone}(X)$, is an isomorphism of Nisnevich sheaves of abelian groups.
\end{propintro}
\begin{remintro}
By Lemma \ref{lem:h0universal}, $\H_0^{\aone}(X)$ is the free strictly $\aone$-invariant sheaf (see Definition \ref{defn:strictaoneinvariance}) of abelian groups generated by $X$. Thus, this theorem says that $\H_0^{\aone}(X)$ is the free strictly $\aone$-invariant sheaf of groups on $\pi_0^{\aone}(X)$. One particularly useful consequence of this result is that if the morphism $\H_0^{\aone}(X) \to \H_0^{\aone}(\Spec k)$ is not an isomorphism, then $X$ is $\aone$-disconnected.
\end{remintro}
\begin{thmintro}[See Theorem \ref{thm:birationalclass}]
\label{thmintro:birationalinvariance}
Suppose $k$ is an infinite field. If $X$ and $X'$ are stably $k$-birationally equivalent smooth proper schemes, then $\H_0^{\aone}(X) \cong \H_0^{\aone}(X')$.
\end{thmintro}
For any field $k$, the sheaf $\H_0^{\aone}(\Spec k)$ is isomorphic to $\Z$ (see Example \ref{ex:aonehomologyofapoint}) and thus coincides with the zeroth Suslin homology of a point. For a general smooth $k$-scheme $X$, various classical stable birational invariants are related to $\H_0^{\aone}(X)$. Suppose $n$ is an integer coprime to the characteristic of $k$, and $L/k$ is a finitely generated separable field extension. Consider the functor on $k$-algebras defined by $A \mapsto H^i_{\et}(A,\mu_n^{\tensor j})$ (we abuse terminology and write $A$ instead of $\Spec A$ for notational convenience). Given a discrete valuation $\nu$ of $L/k$ with associated valuation ring $A$, one says that a class $\alpha \in H^i_{\et}(L,\mu_n^{\tensor j})$ is unramified at $\nu$ if $\alpha$ lies in the image of the restriction map $H^i_{\et}(A,\mu_n^{\tensor j}) \to H^i_{\et}(L,\mu_n^{\tensor j})$. For any integers $i,j$, Colliot-Th\'el\`ene and Ojanguren \cite{CTO} define the unramified cohomology group $H^i_{ur}(L/k,\mu_n^{\tensor j})$ as the subgroup of $H^i_{\et}(L,\mu_n^{\tensor j})$ consisting of those classes $\alpha$ that are unramified at every discrete valuation of $L$ trivial on $k$. Colliot-Th\'el\`ene and Ojanguren also proved that the groups $H^i_{ur}(L/k,\mu_n^{\tensor j})$ are stable birational invariants of smooth proper varieties (see \cite[Proposition 1.2]{CTO}). For another point of view on these statements see \cite[Theorem 4.1.1]{CTPurity}. The next result demonstrates that the groups $H^i_{ur}(L/k,\mu_n^{\tensor j})$ are controlled by $\H_0^{\aone}(X)$; in essence this was observed by Gabber ({\em cf.} \cite[Remark 1.1.3]{CTO}).
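In symbols, writing $A_\nu$ for the valuation ring of $\nu$, the definition just given reads
\[
H^i_{ur}(L/k,\mu_n^{\tensor j}) = \bigcap_{\nu} \mathrm{im}\left(H^i_{\et}(A_\nu,\mu_n^{\tensor j}) \longrightarrow H^i_{\et}(L,\mu_n^{\tensor j})\right),
\]
where $\nu$ runs through the discrete valuations of $L$ that are trivial on $k$.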
\begin{lemintro}[See Lemma \ref{lem:unramifiedrelation}]
\label{lemintro:unramifiedetalecohomology}
Suppose $k$ is a field, and $n$ is an integer coprime to the characteristic of $k$. If $X$ is a smooth proper $k$-scheme, then there is a canonical bijection
\[
H^i_{ur}(X,\mu_n^{\tensor j}) \isomto \hom(\H_0^{\aone}(X),{\mathbf H}^i_{\et}(\mu_n^{\tensor j})),
\]
where the group on the right hand side is computed in the category of Nisnevich sheaves of abelian groups, and the sheaf ${\mathbf H}^i_{\et}(\mu_n^{\tensor j})$ is defined in \textup{Example \ref{ex:unramifiedetalecohomology}}.
\end{lemintro}
Proposition \ref{propintro:universalproperty} shows that if $X$ is $\aone$-connected, then $\H_0^{\aone}(X)$ is trivial. Thus, for example, non-triviality of unramified \'etale cohomology can be used to detect $\aone$-disconnectedness. More generally, via Lemma \ref{lem:h0universal}, we will see that $\H_0^{\aone}(X)$ is a ``universal unramified invariant" for smooth proper schemes, in an appropriate sense; see Lemma \ref{lem:unramifiedinvariants} for a precise statement. For example, $\H_0^{\aone}(X)$ controls unramified Milnor K-theory (see Example \ref{ex:unramifiedMilnorktheory}) and the unramified Witt sheaf (see Example \ref{ex:unramifiedwittgroup}); this point of view is developed in \S \ref{s:unramifiedelements}.
All of the theories described in the previous paragraph have transfers of an appropriate kind, and the first two are even controlled by Suslin homology. However, the zeroth $\aone$-homology sheaf controls unramified invariants that do not possess transfers. Using this additional information, if $X$ is smooth and proper we can show that $\aone$-connectedness is characterized by vanishing of unramified invariants. More precisely, we prove the following result.
\begin{thmintro}[See Theorem \ref{thm:characterization}]
\label{thmintro:characterization}
If $k$ is a field, and $X$ is a smooth proper $k$-scheme, then $X$ is $\aone$-connected if and only if the canonical morphism $\H_0^{\aone}(X) \to \Z$ is an isomorphism; the same statement holds with rational coefficients.
\end{thmintro}
The use of strictly $\aone$-invariant sheaves that do not possess transfers seems essential. Indeed, Suslin homology (even with integral coefficients) cannot detect $\aone$-connectedness, in large part because of its inability to see the difference between rational points and $0$-cycles of degree $1$. We recall an example of Parimala (see Example \ref{ex:parimala}), pointed out to us by Sasha Merkurjev, showing that even if $X$ is a smooth projective variety such that the degree morphism $\H_0^S(X) \to \Z$ is an isomorphism, $X$ need not be $\aone$-connected.
Section \ref{s:aonehomology} is devoted to briefly reviewing aspects of $\aone$-homotopy theory, Voevodsky's theory of motives, and $\aone$-homology theory; in particular, we fix our notation for the rest of the paper. Section \ref{s:birationalproperties} studies the birational properties of the zeroth $\aone$-homology and Suslin homology sheaves and relates these two objects to the zeroth $\aone$-homotopy sheaf. Finally, Section \ref{s:unramifiedelements} provides a field theoretic point of view useful for studying $\aone$-homology and Suslin homology sheaves together with the universality statement alluded to above.
\subsubsection*{Relationship with other work}
This work is part of a sequence of papers including \cite{AH,AH2,ABK} studying the $\aone$-homology sheaf, its relationship with rational points, and rationality questions. In \cite{AH}, we prove that if $X$ is smooth and proper over a field, then $\H_0^{\aone}(X)$ detects rational points. More precisely \cite[Corollary 2.9]{AH} states that a smooth proper variety $X$ has a $k$-rational point if and only if the canonical map $\H_0^{\aone}(X) \to \Z$ is an epimorphism. On the other hand $\pi_0^{\aone}(X)$ controls the unramified cohomology of smooth varieties $X$. In \cite{ABK}, we produce rationally connected, non-rational smooth proper varieties $X$ where non-rationality is detected by a degree $n$ unramified cohomology class, but cannot be detected by lower degree invariants; the connection with this work is mentioned at the end of Section \ref{s:unramifiedelements}.
\subsubsection*{General conventions}
Throughout the paper $k$ will be a fixed base field. Write $\Sm_k$ for the category of schemes that are smooth, separated and have finite type over $k$. When we use the word {\em sheaf} without modification, we always mean Nisnevich sheaf of sets on $\Sm_k$.
\subsubsection*{Acknowledgements}
We were led to the study of cohomological vanishing properties of $\aone$-connected schemes by B. Bhatt's observation that the Brauer group of an $\aone$-connected smooth scheme is trivial (this is unpublished, but see \cite[Theorem 4.3]{Gille}), which suggested that higher cohomological invariants could obstruct $\aone$-connectedness as well. This work also owes an intellectual debt to Fabien Morel, who has stressed the importance of $\aone$-homological invariants; we thank him for much encouragement and many discussions through the course of this work. Finally, we thank Christian H\"asemeyer for many discussions around this topic, Sasha Merkurjev for pointing out Example \ref{ex:parimala}, and Jean-Louis Colliot-Th\'el\`ene for helpful comments and correspondence.
\section{$\aone$-homotopy, $\aone$-homology and Suslin homology (sheaves)}
\label{s:aonehomology}
We review the construction and basic properties of $\aone$-derived categories and $\aone$-homology as sketched in \cite{MStable} and developed in \cite{MField}. For completeness, we also give a detailed comparison between $\aone$-homology and Suslin homology sheaves, which was alluded to in \cite{MICM} but has not been developed in detail in the literature. For more discussion of the homological algebra underlying the $\aone$-derived category, we refer the reader to \cite[\S 4]{CisinskiDeglise1}. The results stated in this section are essential to the formulation and proofs of results in subsequent sections.
\subsubsection*{Simplicial homotopy categories}
Let $\simpnis$ denote the category of simplicial Nisnevich sheaves on $\Sm_k$; we will refer to objects in this category as $k$-spaces, or simply as spaces if $k$ is clear from context. The Yoneda embedding provides a fully-faithful functor $\Sm_k \to \simpnis$. We use this to identify $\Sm_k$ with a full subcategory of $\simpnis$, and systematically abuse notation by denoting a smooth scheme and the corresponding simplicial sheaf (the sheaf of $n$-simplices is the Nisnevich sheaf represented by the scheme, and all face and degeneracy morphisms are the identity morphism) by the same roman letter. Generally, we use calligraphic letters (e.g., ${\mathcal X},{\mathcal Y}$) for objects of $\simpnis$.
The category $\simpnis$ admits a proper closed model structure where the cofibrations are monomorphisms, the weak equivalences are those morphisms of simplicial sheaves that stalkwise induce weak equivalences of the corresponding simplicial sets, and the fibrations are those morphisms having the right lifting property with respect to morphisms that are simultaneously cofibrations and weak equivalences (see, e.g., \cite[\S2 Theorem 1.4]{MV}). The resulting model structure is called the injective local model structure, or the Joyal-Jardine model structure. The {\em simplicial homotopy category}, denoted $\hsnis$, is the homotopy category of this model structure. Throughout, we write $[{\mathcal X},{\mathcal Y}]_s$ for $\hom_{\hsnis}({\mathcal X},{\mathcal Y})$.
\subsubsection*{Derived categories of sheaves of $R$-modules}
For a commutative unital ring $R$, we let $\Mod_k(R)$ denote the category of Nisnevich sheaves of $R$-modules. Similarly, we let $\simpmod_k(R)$ denote the category of simplicial Nisnevich sheaves of $R$-modules. Given any object ${\mathcal X} \in \simpnis$, write $R({\mathcal X})$ for the Nisnevich sheaf of $R$-modules freely generated by the simplices of ${\mathcal X}$; $R({\mathcal X})$ is an object of $\simpmod_k(R)$. This construction defines a functor $R(\cdot): \simpnis \to \simpmod_k(R)$ that is left adjoint to the forgetful functor $\simpmod_k(R) \to \simpnis$.
Let $Ch_{\geq 0}(\Mod_k(R))$ denote the category of chain complexes (differential of degree $-1$) of Nisnevich sheaves of $R$-modules situated in degrees $\geq 0$. There is a normalized chain complex functor $N(\cdot): \simpmod_k(R) \to Ch_{\geq 0}(\Mod_k(R))$. The sheaf-theoretic Dold-Kan correspondence produces an adjoint equivalence $K(\cdot): Ch_{\geq 0}(\Mod_k(R)) \to \simpmod_k(R)$.
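For orientation, we recall the standard description of the normalized complex (up to natural isomorphism): in degree $n$ it is given by
\[
N(A)_n = \bigcap_{i=0}^{n-1} \ker(d_i: A_n \longrightarrow A_{n-1}),
\]
with differential induced by $(-1)^n d_n$.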
Let $Ch_{-}(\Mod_k(R))$ denote the category of bounded below chain complexes of Nisnevich sheaves of $R$-modules; objects in this category will be referred to simply as {\em complexes}. The category $\Sm_k$ is essentially small, so the category $Ch_{-}(\Mod_k(R))$ is the category of bounded below complexes in a Grothendieck abelian category. Therefore, results of Beke imply that $Ch_{-}(\Mod_k(R))$ can be equipped with a model category structure where cofibrations are monomorphisms, weak equivalences are quasi-isomorphisms, and fibrations are those morphisms having the right lifting property with respect to morphisms that are simultaneously cofibrations and weak equivalences (see \cite[Proposition 3.13]{Beke}). This model structure---the injective local model structure---has homotopy category the bounded below derived category $D_-(\Mod_k(R))$. We denote by $((-)^f,\theta)$ a fixed fibrant resolution functor, i.e., $(-)^f$ is an endofunctor of $Ch_{-}(\Mod_k(R))$ and $\theta: Id \to (-)^f$ is a natural transformation such that if $A$ is a complex, the induced map $A \to A^f$ is a quasi-isomorphism and a monomorphism, and $A^f$ is a fibrant complex. The homotopy category $D_-(\Mod_k(R))$ is a triangulated category with the usual shift functor.
\begin{notation}
We use {\em homological conventions} for complexes. More precisely, if $C_*$ is a complex, then the shift functor satisfies $(C_*[1])_n = C_{n-1}$, so that $H_i(C_*[1]) = H_{i-1}(C_*)$; this convention will be justified in the next subsection. Any complex $C_*$ can be considered as a cohomological complex $C^*$ with $C^i = C_{-i}$; we use this convention when computing hypercohomology.
\end{notation}
\subsubsection*{Hurewicz theory}
If ${\mathcal X}$ is a space, set $C_*({\mathcal X},R) := N(R({\mathcal X}))$. The assignment ${\mathcal X} \mapsto C_*({\mathcal X},R)$ provides a functor
\[
\simpnis \longrightarrow Ch_{\geq 0}(\Mod_k(R)).
\]
This functor sends monomorphisms to monomorphisms, and sends weak equivalences to quasi-isomorphisms. Thus, it descends to a functor
\[
\hsnis \longrightarrow D_-(\Mod_k(R)).
\]
There is a corresponding version of this functor in the setting of pointed spaces as well. If ${\mathcal X}$ is a space, the structure morphism ${\mathcal X} \to \Spec k$ induces a morphism of complexes $C_*({\mathcal X},R) \to C_*(\Spec k,R)$; we let $\tilde{C}_*({\mathcal X},R)$ denote the kernel of this morphism. If $\mathcal{X}$ is pointed, then $C_*({\mathcal X},R) \to C_*(\Spec k,R)$ is split, and $\tilde{C}_*({\mathcal X},R)$ is a summand of $C_*({\mathcal X},R)$.
In the other direction, the adjoint $K(\cdot)$ (coming from the Dold-Kan correspondence) composed with the inclusion $\simpmod_k(R) \to \simpnis$ produces a functor
\[
Ch_{\geq 0}(\Mod_k(R)) \longrightarrow \simpnis.
\]
This composite functor sends quasi-isomorphisms to weak equivalences, and using properties of adjunctions can be shown to preserve fibrations as well. In fact, there is an adjunction
\begin{equation}
\label{eqn:simplicialderivedadjunction}
[{\mathcal X},K(A,n)]_s \isomto \hom_{D_-(\Mod_k(R))}(R({\mathcal X}),A[n]),
\end{equation}
which we use freely in the sequel.
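For example, if ${\mathcal X} = X$ is a smooth scheme and $A$ is a sheaf of $R$-modules viewed as a complex placed in degree $0$, both sides of \ref{eqn:simplicialderivedadjunction} compute Nisnevich cohomology:
\[
[X,K(A,n)]_s \cong H^n_{Nis}(X,A).
\]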
We set $H_i({\mathcal X},R) := H_i(C_*({\mathcal X},R))$ and, if ${\mathcal X}$ is pointed, $\tilde{H}_i({\mathcal X},R) := H_i(\tilde{C}_*({\mathcal X},R))$. If $S^1_s$ denotes the constant sheaf defined by the simplicial circle, we let $\Sigma^1_s {\mathcal X} = S^1_s \wedge {\mathcal X}$ for pointed ${\mathcal X}$. It is not hard to check that $\tilde{H}_i(\Sigma^1_s {\mathcal X},R) = \tilde{H}_{i-1}({\mathcal X},R)$.
\subsubsection*{$\aone$-homotopy categories}
The $\aone$-homotopy category, constructed in \cite[\S 2 Theorem 3.2]{MV}, is obtained as a categorical localization of $\simpnis$. Recall that a space ${\mathcal X}$ is called {\em $\aone$-local} if for any space ${\mathcal Y}$ the canonical map
\[
[{\mathcal Y},{\mathcal X}]_{s} \longrightarrow [{\mathcal Y} \times \aone,{\mathcal X}]_s
\]
is a bijection. A morphism $f: {\mathcal X} \to {\mathcal Y}$ is an {\em $\aone$-weak equivalence} if the induced map $[{\mathcal Y},{\mathcal Z}]_{s} \to [{\mathcal X},{\mathcal Z}]_s$ is a bijection for all $\aone$-local spaces $\mathcal{Z}$. The category $\simpnis$ can be equipped with a model structure where weak equivalences are $\aone$-weak equivalences, cofibrations are monomorphisms and fibrations are those morphisms having the right lifting property with respect to morphisms that are simultaneously cofibrations and $\aone$-weak equivalences. We write $\ho{k}$ for the resulting homotopy category, and $[{\mathcal X},{\mathcal Y}]_{\aone}$ for $\hom_{\ho{k}}({\mathcal X},{\mathcal Y})$. The full subcategory of $\hsnis$ spanned by $\aone$-local objects can be taken as a model for the $\aone$-homotopy category. Thus, if ${\mathcal X}$ is $\aone$-local, we have $[{\mathcal Y},{\mathcal X}]_s = [{\mathcal Y},{\mathcal X}]_{\aone}$; we use this freely in the sequel.
\begin{defn}
\label{defn:aoneconnected}
Suppose ${\mathcal X}$ is a space. The {\em sheaf of $\aone$-connected components} of $\mathcal{X}$, denoted $\pi_0^{\aone}({\mathcal X})$, is the Nisnevich sheaf associated with the presheaf $U \mapsto [U,{\mathcal X}]_{\aone}$. A space ${\mathcal X}$ is {\em $\aone$-connected} if the canonical morphism $\pi_0^{\aone}({\mathcal X}) \to \ast$ (where $\ast$ is the constant one-point sheaf) is an isomorphism.
\end{defn}
\subsubsection*{$\aone$-derived categories}
There is an analogous abelianized version of the $\aone$-homotopy category; we recall the basic definitions, which were originally introduced by Morel.
\begin{defn}
\label{defn:strictaoneinvariance}
A complex $A$ of Nisnevich sheaves of abelian groups is {\em $\aone$-local} if for any smooth $k$-scheme $U$, and every integer $n$
\[
{\mathbb H}^n_{Nis}(U,A) \to {\mathbb H}^n_{Nis}(U \times \aone,A)
\]
is an isomorphism. If $A$ is simply a sheaf of abelian groups, we say that $A$ is {\em strictly $\aone$-invariant} if it is $\aone$-local when viewed as a complex of sheaves placed in degree $0$.
\end{defn}
\begin{rem}
\label{rem:stronginvariance}
A sheaf of sets ${\mathcal S}$ is {\em $\aone$-invariant} if for any smooth scheme $U$, the map ${\mathcal S}(U) \to {\mathcal S}(U \times \aone)$ is a bijection. A sheaf of groups ${\mathcal G}$ is {\em strongly $\aone$-invariant} if for any smooth scheme $U$, and any integer $i \in \{0,1\}$, the maps $H^i(U,{\mathcal G}) \to H^i(U \times \aone,{\mathcal G})$ are bijections.
\end{rem}
Denote the localizing subcategory of $D_-(\Mod_k(R))$ generated by complexes of the form $R(X \times \aone) \to R(X)$ for smooth schemes $X$ by $T(\aone,R)$. By the theory of localizing categories, a quotient category $D_-(\Mod_k(R))/T(\aone,R)$ exists; this category, denoted $D_{\aone}(k,R)$, is called the $\aone$-derived category. Let $D_-(\Mod_k(R))^{\aone-loc} \subset D_-(\Mod_k(R))$ denote the full subcategory consisting of $\aone$-local complexes. This inclusion admits a left adjoint
\[
L_{\aone}: D_-(\Mod_k(R)) \longrightarrow D_-(\Mod_k(R))^{\aone-loc}
\]
that can be used to identify, up to equivalence, $D_-(\Mod_k(R))/T(\aone,R)$ with $D_-(\Mod_k(R))^{\aone-loc}$. The functor $L_{\aone}$ is called the $\aone$-localization functor; for more details see, e.g., \cite[Proposition 4.3]{CisinskiDeglise1}. For simplicity, we will write $D_{\aone}(k)$ for $D_{\aone}(k,\Z)$. Observe that if $A$ is an $\aone$-local complex, then $K(A,n)$ is an $\aone$-local space by the adjunction of \ref{eqn:simplicialderivedadjunction}.
\begin{defn}
\label{defn:aonehomology}
The {\em $\aone$-singular chain complex of ${\mathcal X}$ with $R$-coefficients}, denoted $C_*^{\aone}({\mathcal X},R)$, is defined to be the $\aone$-localization $L_{\aone}(C_*({\mathcal X},R))$. We write $C_*^{\aone}({\mathcal X})$ for $C_*^{\aone}({\mathcal X},\Z)$. The {\em $\aone$-homology sheaves of ${\mathcal X}$ with $R$-coefficients} are defined by $\H_i^{\aone}({\mathcal X},R) := H_i(C_*^{\aone}({\mathcal X},R))$. If ${\mathcal X}$ is pointed, the {\em reduced $\aone$-homology sheaves of ${\mathcal X}$ with $R$-coefficients} are defined by $\tilde{\H}_i^{\aone}({\mathcal X},R) := H_i(L_{\aone}(\tilde{C}_*({\mathcal X},R)))$.
\end{defn}
\begin{ex}
\label{ex:aonehomologyofapoint}
Suppose $k$ is a field. The complex $C_*(\Spec k)$ is just the Nisnevich sheaf $\Z$ placed in degree $0$. The Nisnevich sheaf $\Z$ is even Nisnevich flasque so all higher Nisnevich cohomology of a smooth $k$-scheme with coefficients in $\Z$ vanishes. Since the functor $\Z(\cdot)$ is clearly $\aone$-invariant, it follows immediately that $\Z$ is strictly $\aone$-invariant. As a consequence $\H_0^{\aone}(\Spec k) = \Z$ and all the higher $\aone$-homology sheaves of $\Spec k$ vanish, as topological intuition suggests.
\end{ex}
\begin{rem}[Mayer-Vietoris]
If $X$ is a smooth scheme, there is a Nisnevich Mayer-Vietoris sequence allowing computation of $\H_i^{\aone}(X)$ from the pieces of an open cover. This follows immediately from \cite[Remark 1.7]{MV} together with the fact that the $\aone$-localization functor is exact.
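For instance, if $X = U \cup V$ is a union of two open subschemes, the resulting sequence takes the familiar form
\[
\cdots \longrightarrow \H_i^{\aone}(U \cap V) \longrightarrow \H_i^{\aone}(U) \oplus \H_i^{\aone}(V) \longrightarrow \H_i^{\aone}(X) \longrightarrow \H_{i-1}^{\aone}(U \cap V) \longrightarrow \cdots.
\]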
\end{rem}
\subsubsection*{Sheaves with transfers and Suslin homology of motives}
Again, let $R$ be a commutative unital ring. Write $Cor_k(X,Y)$ for the free $R$-module generated by integral closed subschemes of $X \times Y$ that are finite and surjective over a component of $X$ (an element of this group is referred to as a {\em finite correspondence}). We let $R_{tr}(X)$ denote the presheaf on $\Sm_k$ defined by $U \mapsto Cor_k(U,X)$. If $X$ is a smooth scheme, the functor $U \mapsto R_{tr}(X)(U \times \Delta^{\bullet})$ defines a simplicial sheaf of $R$-modules; we write $C_*^S(R_{tr}(X))$ for the associated normalized chain complex. By definition $C_*^S(R_{tr}(X))$ is situated in positive (homological) degrees.
Write $\Cor_k$ for the category whose objects are smooth schemes, and morphisms are finite correspondences between smooth schemes (this category is described in detail in \cite[Chapter 1]{MVW}). There is a functor $\Sm_k \to \Cor_k$ that sends an element of $\hom(X,Y)$ to the finite correspondence defined by its graph. An additive contravariant functor from $\Cor_k$ to abelian groups is called a presheaf with transfers. Any presheaf with transfers can be viewed as a presheaf of abelian groups on $\Sm_k$ by restriction to $\Sm_k$. A presheaf with transfers whose restriction to $\Sm_k$ is a Nisnevich sheaf is called a Nisnevich sheaf with transfers. The presheaves $R_{tr}(X)$ are all Nisnevich sheaves with transfers (\cite[Lemma 6.2]{MVW}).
One can define a derived category of Nisnevich sheaves with transfers: take the homotopy category of complexes of Nisnevich sheaves with transfers and localize at the quasi-isomorphisms. A complex of Nisnevich sheaves with transfers is $\aone$-local if it is $\aone$-local as a complex of Nisnevich sheaves after forgetting the transfers. A {\em strictly $\aone$-invariant sheaf with transfers} is a sheaf with transfers that is $\aone$-local when viewed as a complex of Nisnevich sheaves with transfers. Taking the quotient of the derived category of Nisnevich sheaves with transfers by the localizing subcategory generated by the complexes $R_{tr}(X \times \aone) \to R_{tr}(X)$ (for smooth schemes $X$), one obtains Voevodsky's (``big") derived category of motives $\dmeff$. We refer the reader to \cite[\S 13]{MVW} for a much more detailed discussion of this construction. The techniques given so far are sufficient to define Suslin homology for a smooth scheme, but we will need to define Suslin homology of an arbitrary space for some later statements. To do this, recall the following result.
\begin{lem}[{\cite[\S 2 Lemma 1.16]{MV}}]
There is a pair $(\Phi_{rep},\theta)$ consisting of an endofunctor $\Phi_{rep}: \simpnis \to \simpnis$ and a natural transformation $\theta: \Phi_{rep} \to Id$ such that for any simplicial sheaf ${\mathcal X}$, $\Phi_{rep}({\mathcal X})_n$ is a coproduct of representable sheaves, and $\Phi_{rep}({\mathcal X}) \to {\mathcal X}$ is a simplicial weak equivalence and stalkwise a fibration of simplicial sets.
\end{lem}
We refer to $\Phi_{rep}({\mathcal X})$ as a resolution of ${\mathcal X}$ by representables. As above, let $R$ be a commutative unital ring. Using $\Phi_{rep}$, one can define the motive and the Suslin homology of any ${\mathcal X} \in \simpnis$.
\begin{defn}
\label{defn:suslinhomology}
Suppose ${\mathcal X}$ is a $k$-space. The {\em motive of ${\mathcal X}$}, denoted ${\bf M}({\mathcal X})$, is the class of the normalized complex of $R_{tr}(\Phi_{rep}({\mathcal X})_\bullet)$ in $\dmeff$. The $i$-th Suslin homology sheaf of $\mathcal{X}$, denoted $\H_i^S({\mathcal X},R)$ (we write $\H_i^S({\mathcal X})$ when $R = \Z$), is defined as $H_i(L_{\aone}R_{tr}(\Phi_{rep}({\mathcal X})_\bullet))$.
\end{defn}
\begin{rem}
In fact, this construction can be extended to a functor $\ho{k} \to \dmeff$; see, e.g., \cite{WeibelRoadmap} for more details regarding this construction.
\end{rem}
While this is not the usual definition of Suslin homology, it coincides with the usual one when $k$ is perfect, via the following important foundational result.
\begin{thm}[{\cite[Corollary 14.9]{MVW}}]
\label{thm:suslincomplexisaonelocal}
If $k$ is a perfect field, and $X$ is a smooth $k$-scheme, the complex $C_*^S(R_{tr}(X))$ is $\aone$-local.
\end{thm}
\begin{rem}
Suppose $k$ is a field having characteristic $p$. If $k$ is not perfect, it is not known whether $C_*^S(R_{tr}(X))$ is $\aone$-local. On the other hand, unpublished work of Suslin establishes that so long as $p$ is invertible in $R$, then Voevodsky's theorem that homotopy invariant presheaves of $R$-modules with transfers have homotopy invariant cohomology \cite[Theorem 13.8]{MVW} still holds. Using Suslin's result, if $p$ is invertible in $R$, then one can show that $C_*^S(R_{tr}(X))$ is $\aone$-local. Thus, in this situation, the definition of Suslin homology given above agrees with the usual definition of Suslin homology.
\end{rem}
\subsubsection*{Comparing homology sheaves}
For any $\mathcal{X} \in \simpnis$, recall that $Sing_*^{\aone}({\mathcal X})$ is defined to be the diagonal of the bisimplicial sheaf $(i,j) \mapsto \underline{\hom}(\Delta^{i}_k,\mathcal{X}_j)$, where $\Delta^i_k$ is the algebraic $i$-simplex and we write $\underline{\hom}$ for the internal hom in the category of Nisnevich sheaves of sets. By construction, there is a canonical morphism $\mathcal{X} \to Sing_*^{\aone}({\mathcal X})$ that is an $\aone$-weak equivalence (see \cite[p. 88]{MV}). In particular, for any smooth scheme $X$ the morphism $X \to Sing_*^{\aone}(X)$ induces a morphism $C_*(X,R) \to C_*(Sing_*^{\aone}(X),R)$ that becomes an isomorphism after $\aone$-localization.
For any smooth scheme $Y$, there is a canonical monomorphism of sheaves $R(Y) \to R_{tr}(Y)$ (send a morphism $U \to Y$ to the correspondence defined by its graph), and this construction induces a map $N(R(Sing_*^{\aone}(X))) \to C_*^S(R_{tr}(X))$. Combining this morphism with the discussion of the previous paragraph and applying the $\aone$-localization functor we get morphisms
\[
C_*^{\aone}(X,R) \longrightarrow C_*^{\aone}(Sing_*^{\aone}(X),R) \longrightarrow L_{\aone}(C_*^S(R_{tr}(X))).
\]
We thus obtain a comparison morphism from $\aone$-homology to Suslin homology, and we summarize this construction as follows.
\begin{cor}
\label{cor:aonetosuslin}
For any smooth scheme $X$, there are comparison maps
\[
\H_i^{\aone}(X,R) \longrightarrow \H_i^S(X,R)
\]
induced by the morphism $C_*^{\aone}(X,R) \to L_{\aone}(C_*^S(R_{tr}(X)))$.
\end{cor}
\begin{rem}
With a little more work, one can extend this comparison map to a natural transformation of functors on spaces.
\end{rem}
The following examples show this morphism is {\em not} an isomorphism in general.
\begin{ex}
\label{ex:aonehomologyofgm}
If $(X,x)$ is a pointed smooth scheme, the zeroth Suslin homology sheaf splits $\H_0^S(X) \isomt \Z \oplus \tilde{\H}_0^{S}(X)$. The morphism of Corollary \ref{cor:aonetosuslin} is compatible with this splitting and there is an induced morphism $\tilde{\H}_0^{\aone}(X) \to \tilde{\H}_0^S(X)$. In general, this morphism is not an isomorphism, e.g., for $X = \gm$. Indeed, the sheaf of groups $\gm$ is strictly $\aone$-invariant with transfers given by the usual norm map on units. Lemma \ref{lem:h0universal} shows that the identity map $\gm \to \gm$ induces a homomorphism of strictly $\aone$-invariant sheaves (with transfers) $\tilde{\H}_0^{\aone}(\gm) \to \gm$ (resp. $\tilde{\H}_0^S(\gm) \to \gm$). Theorem 3.1 of \cite{SuslinVoevodsky} shows the map $\tilde{\H}_0^S(\gm) \to \gm$ is an isomorphism. More generally, $\tilde{\H}_0^{S}(\gm^{\wedge n})$ is closely related to Milnor K-theory. On the other hand, $\tilde{\H}_0^{\aone}(\gm^{\wedge n})$ has been computed by Morel (combine \cite[Theorem 2.37]{MField} and \cite[Theorem 4.46]{MField}): when $n \geq 1$, this sheaf is a mixture of Milnor K-theory and Witt groups, which one refers to as Milnor-Witt K-theory. Similar examples can be constructed from any smooth proper curve.
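For $k$ perfect and $n \geq 1$, these computations can be summarized as
\[
\tilde{\H}_0^{S}(\gm^{\wedge n}) \cong \K^M_n, \qquad \tilde{\H}_0^{\aone}(\gm^{\wedge n}) \cong \K^{MW}_n,
\]
where $\K^M_n$ denotes the unramified Milnor K-theory sheaf recalled in Example \ref{ex:unramifiedMilnorktheory} below and $\K^{MW}_n$ denotes Morel's $n$-th Milnor-Witt K-theory sheaf.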
\end{ex}
\begin{rem}
There is also a ``stabilized" version $\H_0^{s\aone}(X)$ of $\H_0^{\aone}(X)$ where one ``inverts $\gm$." If $S^0_s = \Spec k_+$, then $\tilde{\H}_0^{s\aone}(S^0_s) = \H_0^{s\aone}(\Spec k)$ essentially by definition. One can also work directly with the stable $\aone$-homotopy category to prove $\H_0^{s\aone}(\Spec k)$ coincides with the $0$-th stable $\aone$-homotopy sheaf of the motivic sphere spectrum (see \cite[p. 7]{MStable}). This sheaf has been identified as $\K^{MW}_0$ by \cite[Corollary 6.4.1]{Morelpi0}.
For any finitely generated separable extension $L/k$ there is an isomorphism $\K^{MW}_0(L) \isomt GW(L)$ (see \cite[Remark 6.1.6b]{MStable} or \cite[Lemmas 2.9-2.10]{MField}), where $GW(L)$ denotes the Grothendieck-Witt group of isomorphism classes of non-degenerate symmetric bilinear forms. On the other hand it follows from the definitions given above that $\H_0^{S}(\Spec k) = \Z$. These computations suggest that $D_{\aone}(k)$, or perhaps its stabilized version, provides a version of Voevodsky's triangulated category of motives incorporating data from the theory of quadratic forms. This point of view is further developed in \cite{AH2}.
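For orientation, recall the standard fact (at least when the characteristic of $L$ is not $2$) that $GW(L)$ sits in a short exact sequence
\[
0 \longrightarrow I(L) \longrightarrow GW(L) \stackrel{\operatorname{rank}}{\longrightarrow} \Z \longrightarrow 0,
\]
where $I(L)$ denotes the fundamental ideal recalled in Example \ref{ex:unramifiedfundamentalideal} below; comparing with the computation $\H_0^{S}(\Spec k) = \Z$ makes the quadratic refinement explicit.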
\end{rem}
\subsubsection*{Connectivity and the $t$-structure}
We now recall some basic facts regarding the structure of $\aone$ (or Suslin) homology sheaves, all due to Morel. Recall that a complex of sheaves $A_*$ is called {\em $(-1)$-connected (or positive)} if its homology sheaves $H_i(A_*)$ vanish for $i < 0$. Each of the following results was proven in the context of stable $\aone$-homotopy theory by Morel in \cite{MStable}. However, the proofs he gives apply just as well (as he observes) to the setting of derived categories that we consider. For this reason, we give references to the corresponding statements in stable $\aone$-homotopy theory.
\begin{thm}[{\cite[Theorem 6.1.8]{MStable}}]
\label{thm:stableaoneconnectivity}
If $A_*$ is a $(-1)$-connected complex of $R$-modules, then its $\aone$-localization $L_{\aone}(A_*)$ is also $(-1)$-connected.
\end{thm}
\begin{thm}[{\cite[Theorem 6.2.7]{MStable}}]
\label{thm:strictaoneinvarianceofhomology}
If ${\mathcal X}$ is a $k$-space, then for every integer $i$ the sheaves $\H_i^{\aone}({\mathcal X},R)$ and $\H_i^{S}({\mathcal X},R)$ are always strictly $\aone$-invariant, and these sheaves are trivial if $i < 0$.
\end{thm}
\begin{proof}
By construction, the complexes $C_*^{\aone}({\mathcal X},R)$ are $\aone$-localizations of the complexes $C_*({\mathcal X},R)$. The latter complexes are $(-1)$-connected by definition. The result follows immediately from Theorem \ref{thm:stableaoneconnectivity}. The statement for Suslin homology is proven in an identical fashion.
\end{proof}
Before we state the next result, let us recall a variant of \cite[D\'efinition 1.3.1]{BBD}.
\begin{defn}[Homological $t$-structure]
\label{defn:homologicaltstructure}
Let ${\mathcal T}$ be a triangulated category. A {\em homological $t$-structure} on ${\mathcal T}$ consists of a pair of strictly full subcategories ${\mathcal T}_{\leq 0} \subset {\mathcal T}$ and ${\mathcal T}_{\geq 0} \subset {\mathcal T}$ such that, setting ${\mathcal T}_{\leq n}:= {\mathcal T}_{\leq 0}[n]$ and ${\mathcal T}_{\geq n} := {\mathcal T}_{\geq 0}[n]$, the following properties hold:
\begin{itemize}
\item[i)] for any $C \in {\mathcal T}_{\geq 0}$ and any $D \in {\mathcal T}_{\leq -1}$, one has $\hom_{{\mathcal T}}(C,D) = 0$;
\item[ii)] there are inclusions ${\mathcal T}_{\geq 1} \subset {\mathcal T}_{\geq 0}$, and ${\mathcal T}_{\leq 0} \subset {\mathcal T}_{\leq 1}$;
\item[iii)] for any object $X \in {\mathcal T}$ there is a distinguished triangle
\[
A \longrightarrow X \longrightarrow B \longrightarrow A[1]
\]
with $A \in {\mathcal T}_{\geq 0}$ and $B \in {\mathcal T}_{\leq -1}$.
\end{itemize}
\end{defn}
If $({\mathcal T},{\mathcal T}_{\leq 0},{\mathcal T}_{\geq 0})$ is a homological $t$-structure on ${\mathcal T}$, then by \cite[Proposition 1.3.3]{BBD} there are truncation functors $\tau_{\geq n}: {\mathcal T} \to {\mathcal T}_{\geq n}$, and $\tau_{\leq n}: {\mathcal T} \to {\mathcal T}_{\leq n}$ adjoint to the corresponding inclusion functors. Using homological conventions as above, the proof of {\em loc. cit.} gives the following result.
\begin{prop}
\label{prop:homologicaltstructure}
Suppose $({\mathcal T},{\mathcal T}_{\leq 0},{\mathcal T}_{\geq 0})$ is a $t$-category. If $X$ is any object in ${\mathcal T}$, there exists a unique morphism $d \in \hom^1(\tau_{\leq -1}X,\tau_{\geq 0}X)$ such that the triangle
\[
\tau_{\geq 0} X \longrightarrow X \longrightarrow \tau_{\leq -1} X \stackrel{d}{\longrightarrow} \tau_{\geq 0}X[1]
\]
is distinguished.
\end{prop}
A complex $A_*$ is called {\em negative} if $H_i(A_*) = 0$ for $i > 0$ and {\em positive} if $H_i(A_*) = 0$ for $i < 0$. We write $D_{\aone}(k)_{\leq 0}$ for the full subcategory of $D_{\aone}(k)$ consisting of $\aone$-local negative complexes, and $D_{\aone}(k)_{\geq 0}$ for the full subcategory consisting of $\aone$-local positive complexes.
\begin{prop}[{\cite[Lemma 6.2.11]{MStable}}]
\label{prop:aonetstructure}
The triple $(D_{\aone}(k),D_{\aone}(k)_{\leq 0},D_{\aone}(k)_{\geq 0})$ is a homological $t$-structure on $D_{\aone}(k)$.
\end{prop}
Write $\Ab^{\aone}_k$ for the category of strictly $\aone$-invariant sheaves of abelian groups; this category can be identified with the heart of the $t$-structure of Proposition \ref{prop:aonetstructure}. By \cite[Th\'eor\`eme 1.3.6]{BBD}, we get the following result.
\begin{cor}
\label{cor:strictlyaoneinvariantabeliancategory}
The category $\Ab^{\aone}_k$ is abelian.
\end{cor}
If $k$ is perfect, Voevodsky showed that $\dmeff$ admits a $t$-structure defined in a manner identical to Proposition \ref{prop:aonetstructure}. The heart of the resulting $t$-structure is precisely the category of strictly $\aone$-invariant Nisnevich sheaves {\em with transfers}; see \cite[\S 4.3]{Deglise1} for a discussion. We write $\Ab^{\aone}_{tr,k}$ for the abelian category of strictly $\aone$-invariant sheaves with transfers. We can define $\H_0^{S}$ as a functor from $\dmeff$ to $\Ab^{\aone}_{tr,k}$ (see {\em loc. cit.} Formula 4.12a).
\subsubsection*{Gersten resolutions}
Suppose $A$ is an $\aone$-local complex. The axiomatic approach of \cite{CTHK} provides general machinery for producing a Gersten resolution associated with the Nisnevich hypercohomology of $A$ (see {\em ibid.} \S 7). Let $(X,Z)$ be a pair where $X$ is a smooth scheme and $Z \subset X$ is a closed subscheme. The functor
\[
(X,Z) \longmapsto {\mathbb H}^*_{Z}(X_{Nis},A)
\]
(i.e., Nisnevich hypercohomology with supports on $Z$) defines a cohomology theory with supports in the sense of \cite[Definition 5.1.1]{CTHK}. Moreover, this theory satisfies Nisnevich excision (Axiom {\bf COH1} of \cite[p. 55]{CTHK}) and $\aone$-homotopy invariance (Axiom {\bf COH3} of {\em ibid} p. 58). Let ${\mathbb H}^n_{Zar}(A)$ denote the Zariski sheaf associated with the presheaf $U \mapsto {\mathbb H}^n_{Nis}(U,A)$; this apparent abuse of terminology will be justified momentarily.
\begin{prop}[{\cite[Corollary 5.1.11]{CTHK}}]
\label{prop:gerstenresolution}
Suppose $k$ is an infinite field. For any smooth $k$-scheme $X$, and any $\aone$-local complex $A$, the complex
\[
{\mathbb H}^n_{Zar}(A)|_X \longrightarrow \coprod_{x \in X^{(0)}} i_{x_*}{\mathbb H}^n_x(X,A) \longrightarrow \cdots \longrightarrow \coprod_{x \in X^{(p)}} i_{x_*}{\mathbb H}^{p+n}_x(X,A) \longrightarrow \cdots
\]
is a flasque resolution.
\end{prop}
\begin{proof}
We just observe that, by \cite[Proposition 5.3.2a]{CTHK}, Axiom {\bf COH2} is implied by Axiom {\bf COH3}.
\end{proof}
We write ${\mathbb H}^n_{Nis}(A)$ for the {\em Nisnevich} sheaf associated with the presheaf $U \mapsto {\mathbb H}^n_{Nis}(U,A)$. We use the following fundamental comparison result.
\begin{thm}[{\cite[Theorem 8.3.1]{CTHK}}]
\label{thm:comparisonZariskiNisnevich}
Suppose $k$ is an infinite field. For any smooth $k$-scheme $X$, and any $\aone$-local complex $A$, the canonical maps
\[
H^i_{Zar}(X,{\mathbb H}^n_{Zar}(A)) \longrightarrow H^i_{Nis}(X,{\mathbb H}^n_{Nis}(A))
\]
are isomorphisms.
\end{thm}
\section{Birational geometry and strictly $\aone$-invariant sheaves}
\label{s:birationalproperties}
In this section, we study the relationship between the zeroth $\aone$-homology or Suslin homology sheaf (recall Definitions \ref{defn:aonehomology} and \ref{defn:suslinhomology}) and the zeroth $\aone$-homotopy sheaf (recall Definition \ref{defn:aoneconnected}). The principal results of this section imply Proposition \ref{propintro:universalproperty} and Theorem \ref{thmintro:birationalinvariance} of the introduction. We prove in Lemma \ref{lem:h0universal} that the zeroth $\aone$-homology (resp. Suslin homology) sheaf of a smooth scheme $X$ is initial among strictly $\aone$-invariant sheaves (with transfers) admitting a morphism from $X$. In Proposition \ref{prop:dependsonpi0} we establish that the zeroth $\aone$-homology (resp. Suslin homology) sheaf of $X$ is the free strictly $\aone$-invariant sheaf (with transfers) generated by $\pi_0^{\aone}(X)$, and in Theorem \ref{thm:birationalclass} that it is a stable birational invariant for smooth proper schemes over infinite fields.
\subsubsection*{A factorization lemma}
The functor sending a Nisnevich sheaf $\F$ to the corresponding constant simplicial sheaf (all face and degeneracy maps are the identity) is fully-faithful. The full subcategory of $\simpnis$ consisting of constant simplicial sheaves will be referred to as the subcategory of spaces of simplicial dimension $0$ (see \cite[p. 47]{MV}). Spaces of simplicial dimension $0$ are automatically simplicially fibrant (see \cite[\S 2 Remark 1.14]{MV}); in particular, since smooth schemes have simplicial dimension $0$, they are simplicially fibrant. If ${\mathcal X}$ is any space, then the unstable $\aone$-$0$-connectivity theorem \cite[\S 2 Corollary 3.22]{MV} gives an epimorphism ${\mathcal X} \to \pi_0^{\aone}({\mathcal X})$. We begin by stating a result that will be used repeatedly in the sequel.
\begin{lem}
\label{lem:factorization}
Suppose ${\mathcal X}$ and ${\mathcal Y}$ are $k$-spaces of simplicial dimension $0$, and ${\mathcal Y}$ is $\aone$-local. The canonical epimorphism ${\mathcal X} \to \pi_0^{\aone}({\mathcal X})$ induces a bijection
\begin{equation}
\label{eqn:factorization}
\hom_{\Shv_{Nis}(\Sm_k)}(\pi_0^{\aone}({\mathcal X}),{\mathcal Y}) \longrightarrow \hom_{\Shv_{Nis}(\Sm_k)}({\mathcal X},{\mathcal Y})
\end{equation}
functorial in both inputs.
\end{lem}
\begin{proof}
Since ${\mathcal X}$ and ${\mathcal Y}$ have simplicial dimension $0$, the canonical map
\[
\hom_{\Shv_{Nis}(\Sm_k)}({\mathcal X},{\mathcal Y}) \longrightarrow [{\mathcal X},{\mathcal Y}]_s
\]
is a bijection, as we observed just before the statement of the lemma. Since ${\mathcal Y}$ is $\aone$-local, we also have identifications $[{\mathcal X},{\mathcal Y}]_{\aone} = [{\mathcal X},{\mathcal Y}]_s$.
Since smooth schemes have simplicial dimension $0$, for any smooth scheme $U$, we have $[U,{\mathcal Y}]_{\aone} = \hom_{\Shv_{Nis}(\Sm_k)}(U,{\mathcal Y})$. Sheafifying for the Nisnevich topology, we deduce that $\pi_0^{\aone}({\mathcal Y}) = {\mathcal Y}$. To finish, observe that any morphism ${\mathcal X} \to {\mathcal Y} = \pi_0^{\aone}({\mathcal Y})$ factors uniquely through $\pi_0^{\aone}({\mathcal X})$ by the definition of $\pi_0^{\aone}(\cdot)$.
\end{proof}
\begin{cor}
\label{cor:aonelocaldetection}
Suppose $X$ is an $\aone$-connected smooth $k$-scheme. If ${\mathcal Y}$ is an $\aone$-local space of simplicial dimension $0$, then the map ${\mathcal Y}(k) \to {\mathcal Y}(X)$ induced by the structure map is a bijection. In particular, if $M$ is a strictly $\aone$-invariant sheaf, and $X$ is an $\aone$-connected smooth $k$-scheme, the canonical map $M(k) \to M(X)$ is a bijection.
\end{cor}
\begin{lem}
\label{lem:h0universal}
If $M$ (resp. $M'$) is a strictly $\aone$-invariant sheaf of $R$-modules (with transfers), then for any space ${\mathcal X}$ there are bijections
\[
\begin{split}
H^0_{Nis}({\mathcal X},M) &\isomto \hom_{\Ab^{\aone}_k}(\H_0^{\aone}({\mathcal X},R),M) \\
H^0_{Nis}({\mathcal X},M') &\isomto \hom_{\Ab^{\aone}_{tr,k}}(\H_0^{S}({\mathcal X},R),M')
\end{split}
\]
functorial in both ${\mathcal X}$ and $M$ (resp. $M'$).
\end{lem}
\begin{proof}
As above, since $M$ is $\aone$-local we have identifications
\[
\hom_{\Shv_{Nis}(\Sm_k)}({\mathcal X},M) \isomto [{\mathcal X},M]_{\aone} = [{\mathcal X},K(M,0)]_{\aone}.
\]
The adjunction between the $\aone$-homotopy and $\aone$-derived categories allows one to identify the last abelian group with $\hom_{D_{\aone}(k)}(C_*^{\aone}({\mathcal X},R),M)$.
Now, by Proposition \ref{prop:aonetstructure}, we know that $D_{\aone}(k)$ admits a homological $t$-structure. By the stable $\aone$-connectivity theorem, we know that $C_*^{\aone}({\mathcal X},R) \in D_{\aone}(k)_{\geq 0}$. The result follows from a general fact about homological $t$-structures (see Definition \ref{defn:homologicaltstructure}). Let $\mathcal{T}$ be a triangulated category with a homological $t$-structure $(\mathcal{T}_{\geq 0},\mathcal{T}_{\leq 0})$ and heart $\mathcal{A}$. Let $M \in {\mathcal A}$, and write $M[0]$ for $M$ viewed as an object of ${\mathcal T}$ situated in degree $0$. For any object $C \in {\mathcal T}_{\geq 0}$, Proposition \ref{prop:homologicaltstructure} and a shifting argument give rise to the distinguished triangle
\[
\tau_{\geq 1}C \longrightarrow C \longrightarrow \tau_{\leq 0}C \longrightarrow \tau_{\geq 1}C[1].
\]
Applying the functor $\hom_{{\mathcal T}}(\cdot,M[0])$ to this distinguished triangle, we get a map
\[
\hom_{{\mathcal T}}(\tau_{\leq 0}C,M[0]) \longrightarrow \hom_{{\mathcal T}}(C,M[0]),
\]
and this map is an isomorphism directly from Definition \ref{defn:homologicaltstructure}(i) and (ii), which show that $\hom_{{\mathcal T}}(\tau_{\geq 1}C,M[0])$ and $\hom_{{\mathcal T}}(\tau_{\geq 1}C[1],M[0])$ vanish. Since $\tau_{\geq 0}C = C$, and $H_0(C) = \tau_{\leq 0}\tau_{\geq 0}C$ (see \cite[Th\'eor\`eme 1.3.6]{BBD}) we get an isomorphism
\[
\hom_{{\mathcal A}}(H_0(C),M) \isomto \hom_{\mathcal{T}}(C,M[0]).
\]
The proof for Suslin homology sheaves is similar; one uses \cite[Exercise 13.6]{MVW} to identify $\hom_{\Shv_{Nis}(\Sm_k)}({\mathcal X},M')$ with $\hom_{\dmeff}({\mathbf M}({\mathcal X}),M')$ (since $M'$ is $\aone$-local) together with an identical truncation argument.
\end{proof}
\begin{cor}
\label{cor:factorization}
If $X$ is a smooth $k$-scheme, and $M$ is a strictly $\aone$-invariant sheaf of $R$-modules with transfers, then any morphism $\varphi: \H_0^{\aone}(X,R) \to M$ factors as a composite
\[
\H_0^{\aone}(X,R) \longrightarrow \H_0^{S}(X,R) \longrightarrow M,
\]
where the first map is the morphism of \textup{Corollary \ref{cor:aonetosuslin}}.
\end{cor}
\begin{proof}
The morphism of Corollary \ref{cor:aonetosuslin} induces for any strictly $\aone$-invariant sheaf of $R$-modules with transfers a morphism
\[
\hom_{\Ab^{\aone}_k}(\H_0^{S}(X,R),M) \longrightarrow \hom_{\Ab^{\aone}_k}(\H_0^{\aone}(X,R),M).
\]
Lemma \ref{lem:h0universal} implies that this morphism is a bijection.
\end{proof}
\subsubsection*{Dependence on the sheaf of $\aone$-connected components}
If $M$ is a (sufficiently nice) topological space, the Mayer-Vietoris sequence shows that the ordinary singular homology group $H_0(M,R)$ is the free $R$-module generated by the connected components of $M$. On the other hand, Lemma \ref{lem:h0universal} can be interpreted as saying that $\H_0^{\aone}({\mathcal X},R)$ (resp. $\H_0^{S}({\mathcal X},R)$) is the free strictly $\aone$-invariant sheaf (with transfers) on the sheaf ${\mathcal X}$. We now show that $\H_0^{\aone}({\mathcal X},R)$ (resp. $\H_0^{S}({\mathcal X},R)$) is the free strictly $\aone$-invariant sheaf (with transfers) on the sheaf of $\aone$-connected components of ${\mathcal X}$.
\begin{prop}
\label{prop:dependsonpi0}
For any space ${\mathcal X}$, and any commutative unital ring $R$, the maps
\[
\begin{split}
\H_0^{\aone}({\mathcal X},R) &\longrightarrow \H_0^{\aone}(\pi_0^{\aone}({\mathcal X}),R) \text{ and }\\
\H_0^{S}({\mathcal X},R) &\longrightarrow \H_0^{S}(\pi_0^{\aone}({\mathcal X}),R),
\end{split}
\]
induced by the canonical epimorphism ${\mathcal X} \to \pi_0^{\aone}({\mathcal X})$ are isomorphisms.
\end{prop}
\begin{proof}
We prove the first statement; the second statement is proven in an essentially identical manner. Assume first that ${\mathcal X}$ has simplicial dimension $0$. Let $M$ be an arbitrary strictly $\aone$-invariant sheaf (with transfers for the second statement). We have a commutative diagram
\[
\xymatrix{
\hom_{\Shv_{Nis}(\Sm_k)}(\pi_0^{\aone}({\mathcal X}),M) \ar[r]\ar[d] & \hom_{\Ab^{\aone}_k}(\H_0^{\aone}(\pi_0^{\aone}({\mathcal X})),M) \ar[d] \\
\hom_{\Shv_{Nis}(\Sm_k)}({\mathcal X},M) \ar[r] & \hom_{\Ab^{\aone}_k}(\H_0^{\aone}({\mathcal X}),M)
}.
\]
By Lemma \ref{lem:h0universal} the horizontal maps are isomorphisms, and by Lemma \ref{lem:factorization} the left vertical map is a bijection. Indeed, all these bijections are functorial in both variables. It follows that the right vertical map is a bijection functorially in both variables as well. The result then follows from the Yoneda lemma.
To treat the general case, it suffices to observe that by \cite[\S 2 Proposition 3.14]{MV} every space ${\mathcal X}$ is $\aone$-weakly equivalent to a space of simplicial dimension $0$.
\end{proof}
\begin{rem}[A non-abelian variant]
One may also prove a non-abelian version of Proposition \ref{prop:dependsonpi0}. Because one needs to keep track of base points, this version seems not as widely applicable. Recall that if $({\mathcal S},s)$ is a pointed sheaf of sets, we can consider $F_{\aone}({\mathcal S}) := \pi_1^{\aone}(\Sigma^1_s {\mathcal S})$. Results of \cite{MField} show that this sheaf is strongly $\aone$-invariant (see Remark \ref{rem:stronginvariance}). As above, one can show that the canonical map $F_{\aone}({\mathcal S}) \to F_{\aone}(\pi_0^{\aone}({\mathcal S}))$ is an isomorphism. This result is compatible with the previous results via the $\aone$-Hurewicz theorem (also proven by Morel). The sheaves $F_{\aone}(\pi_0^{\aone}({\mathcal S}))$ contain ``non-abelian" information, e.g., related to finite covers with non-abelian fundamental group.
\end{rem}
\begin{lem}[{\cite[Lemma 6.4.4]{MStable}}]
\label{lem:excision}
Suppose $M$ is a strictly $\aone$-invariant sheaf of groups. If $X$ is a smooth $k$-scheme, and $U \subset X$ is an open subscheme whose complement has codimension $\geq d$ in $X$, then the restriction map
\[
H^i_{Nis}(X,M) \to H^i_{Nis}(U,M)
\]
is a monomorphism if $i \leq d-1$ and a bijection if $i \leq d-2$.
\end{lem}
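The special case $i = 0$, used repeatedly below, deserves emphasis: if the closed complement of $U$ has codimension $\geq 2$, restriction induces an isomorphism
\[
M(X) \isomto M(U).
\]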
\begin{prop}
\label{prop:epimorphism}
Suppose $X$ is a smooth $k$-scheme and $U \subset X$ is an open subscheme of $X$. Assume the complement of $U$ in $X$ has codimension $\geq d$, for some integer $d > 0$. For any commutative unital ring $R$, the canonical maps
\[
\begin{split}
\H_0^{\aone}(U,R) &\longrightarrow \H_0^{\aone}(X,R), \text{ and } \\
\H_0^{S}(U,R) &\longrightarrow \H_0^{S}(X,R)
\end{split}
\]
are epimorphisms if $d = 1$, and isomorphisms if $d \geq 2$.
\end{prop}
\begin{proof}
For the first statement, if $M$ is an arbitrary strictly $\aone$-invariant sheaf of $R$-modules, we have functorial bijections $\hom_{\Ab^{\aone}_k}(\H_0^{\aone}(X,R),M) \isomt M(X)$ by Lemma \ref{lem:h0universal}. Likewise, if $M$ is an arbitrary strictly $\aone$-invariant sheaf of $R$-modules with transfers, we have functorial bijections $\hom_{\Ab^{\aone}_{tr,k}}(\H_0^{S}(X,R),M) \isomt M(X)$ by Lemma \ref{lem:h0universal}. Thus, the result follows immediately from Lemma \ref{lem:excision} and the Yoneda lemma.
\end{proof}
\subsubsection*{Stable birational equivalence}
Recall that two smooth proper $k$-varieties $X$ and $Y$ are stably $k$-birationally equivalent if $X \times {\mathbb P}^n$ is $k$-birationally equivalent to $Y \times {\mathbb P}^m$ for some integers $m,n \geq 0$. If $X$ is stably $k$-birationally equivalent to projective space, we say that $X$ is stably $k$-rational.
\begin{thm}
\label{thm:birationalclass}
Suppose $k$ is an infinite field, and $R$ is a commutative unital ring. If $X$ and $X'$ are stably $k$-birationally equivalent smooth proper varieties then $\H_0^{\aone}(X,R) \cong \H_0^{\aone}(X',R)$ and $\H_0^{S}(X,R) \cong \H_0^{S}(X',R)$.
\end{thm}
\begin{proof}
Consider the composite map $Y \times {\mathbb A}^n \hookrightarrow Y \times {\mathbb P}^{n} \longrightarrow Y$. Since $\H_0^{\aone}$ is $\aone$-homotopy invariant, it follows that the composite map $\H_0^{\aone}(Y \times {\mathbb A}^n) \longrightarrow \H_0^{\aone}(Y)$ is an isomorphism. On the other hand, the map $\H_0^{\aone}(Y \times {\mathbb A}^n) \longrightarrow \H_0^{\aone}(Y \times {\mathbb P}^n)$ is an epimorphism by Proposition \ref{prop:epimorphism}. A diagram chase shows that the projection map $\H_0^{\aone}(Y \times {\mathbb P}^n) \longrightarrow \H_0^{\aone}(Y)$ must then also be an isomorphism. The same argument works for Suslin homology.
If $k$ has characteristic $0$, we may finish the proof by means of a straightforward geometric argument using resolution of singularities. Indeed, given any $k$-birational morphism $X \to Y$, there is a commutative diagram of $k$-birational morphisms of the form
\[
\xymatrix{
X' \ar[r]\ar[d] & Y' \ar[d]\ar[dl] \\
X \ar[r] & Y
}
\]
where all the vertical maps are composites of a finite number of blow-ups with smooth centers. We claim that it suffices to show that the morphism on zeroth $\aone$-homology sheaves induced by a blow-up with smooth center is an isomorphism. If that is the case, since the composite map $\H_0^{\aone}(X') \to \H_0^{\aone}(Y') \to \H_0^{\aone}(X)$ is an isomorphism, we realize $\H_0^{\aone}(X)$ as a summand of $\H_0^{\aone}(Y)$ and vice versa (by reversing the roles of $X$ and $Y$).
Let us check the result for $f: X' \to X$, where $f$ is a blow-up at a codimension $\geq 2$ smooth subscheme $Z \subset X$. The induced map $X' \setminus f^{-1}(Z) \to X \setminus Z$ is an isomorphism. The morphism $X' \setminus f^{-1}(Z) \to X'$ is an open immersion with complement having codimension $1$, and the morphism $X \setminus Z \to X$ is an open immersion with complement having codimension $\geq 2$. By the previous proposition, the map $\H_0^{\aone}(X' \setminus f^{-1}(Z),R) \to \H_0^{\aone}(X',R)$ is an epimorphism, the map $\H_0^{\aone}(X' \setminus f^{-1}(Z),R) \to \H_0^{\aone}(X \setminus Z,R)$ is an isomorphism, and the map $\H_0^{\aone}(X \setminus Z,R) \to \H_0^{\aone}(X,R)$ is an isomorphism. Composing the second and third of these isomorphisms, we see that the morphism $\H_0^{\aone}(X' \setminus f^{-1}(Z),R) \to \H_0^{\aone}(X,R)$ is an isomorphism. Consequently, the morphism $\H_0^{\aone}(X',R) \to \H_0^{\aone}(X,R)$ is an isomorphism as well. The case of the zeroth Suslin homology sheaf is identical.
If $k$ is just infinite, we argue as follows. If $M$ is an arbitrary strictly $\aone$-invariant sheaf, then $M$ is $\aone$-local and so admits a Gersten resolution by Proposition \ref{prop:gerstenresolution}. By \cite[Theorem 8.5.1]{CTHK} it follows that $M(X)$ is a birational invariant of smooth proper varieties (this explanation is expanded slightly in Lemma \ref{lem:unramifiedinvariants}). Since $M$ was arbitrary, it follows from Lemma \ref{lem:h0universal} that the same statement holds for the zeroth $\aone$-homology sheaf. An analogous argument works for the zeroth Suslin homology sheaf.
\end{proof}
\begin{rem}
As discussed in \cite[\S 2]{AM}, we know that $\aone$-connectedness is a birational invariant for fields having characteristic $0$. However, we do not know whether the sheaf $\pi_0^{\aone}(X)$ is itself a (stable) birational invariant. The above result shows that after abelianization this is the case. Note also that the first proof above implies that if $X$ and $X'$ are two schemes, not necessarily proper, that can be linked by a chain of blow-ups at smooth schemes, then $\H_0^{\aone}(X) \cong \H_0^{\aone}(X')$. In fact, it is not at the moment known whether $\pi_0^{\aone}(X)$ is unchanged by blow-ups along smooth schemes!
\end{rem}
\begin{rem}
\label{rem:birationalinvarianceofchow}
In Example \ref{ex:unramifiedchow} we will see that if $k$ is a perfect field and $X$ is a smooth proper $k$-variety, then for any separable finitely generated extension $L/k$, one can identify $\H_0^S(X)(L)$ with $CH_0(X_L)$, functorially in $L$. However, birational invariance for the Chow group of $0$-cycles is a much older result. Indeed, if $k$ has characteristic $0$ then \cite[Proposition 6.3]{CTC} establishes $k$-birational invariance of $CH_0(X)$, and Fulton \cite[Example 16.1.11]{Fulton} generalizes this to arbitrary characteristic.
\end{rem}
\section{An unramified characterization of $\aone$-connectedness}
\label{s:unramifiedelements}
In this section, we recall aspects of a ``field theoretic" or ``unramified" approach to strictly $\aone$-invariant sheaves pioneered by Morel \cite{MMilnor,MStable,MField} following foundational work of Rost \cite{RostChow}. If $M$ is a strictly $\aone$-invariant sheaf (see Definition \ref{defn:strictaoneinvariance}), we now explain how to identify sections of $M$ over a smooth scheme $X$ in terms of the function field $k(X)$ of $X$ and codimension $1$ geometry of $X$, i.e., geometric discrete valuations on $k(X)$. We then provide a number of examples of strictly $\aone$-invariant sheaves. Combining this result with the discussion of \S \ref{s:birationalproperties} (specifically Lemma \ref{lem:h0universal}), Lemma \ref{lem:unramifiedinvariants} explains the sense in which $\H_0^{\aone}(X)$ is a ``universal unramified invariant" as mentioned in the introduction. Theorem \ref{thm:characterization} and the subsequent corollary give the unramified characterization of $\aone$-connectedness stated in the introduction. By means of an example, we show that Suslin homology is not sufficiently refined to detect $\aone$-connectedness, or more loosely that $\aone$-connectedness cannot be characterized solely by means of ``unramified invariants with transfers;" see Proposition \ref{prop:rationallyconnected}, Example \ref{ex:artinmumford} and Example \ref{ex:parimala} for more details.
\subsubsection*{Unramified elements}
Fix a field $k$, and suppose $M$ is a strictly $\aone$-invariant sheaf (on $\Sm_k$). Suppose $S$ is an essentially smooth $k$-scheme, i.e., a filtering inverse limit of smooth schemes with smooth affine transition morphisms. If we write $S = \lim X_{\alpha}$, we can define $M(S) = \colim M(X_{\alpha})$. One can check that this colimit is independent of the choice of filtering inverse system defining $S$. Thus, we can extend $M$ uniquely to a functor on the category of essentially smooth $k$-schemes. If $\F_k$ denotes the category of finitely generated extension fields (morphisms are inclusions of fields), then $M$ gives rise to a (covariant) functor on $\F_k$. Abusing notation, we will denote all these functors by $M$.
By Lemma \ref{lem:excision}, for an open immersion of smooth schemes $U \hookrightarrow X$, the restriction map $M(X) \to M(U)$ is injective. If $L/k$ is a finitely generated extension of $k$, $\nu$ is a geometric discrete valuation of $L$ with valuation ring $\O_{\nu}$, and $\kappa_{\nu}$ is the associated residue field then we have a morphism $M(\O_{\nu}) \to M(L)$; this morphism is injective by what we've just said. We now use these observations to define unramified groups associated with any strictly $\aone$-invariant sheaf.
\begin{defn}
Suppose $X$ is an irreducible smooth $k$-scheme. Given $x \in X^{(1)}$, write $\nu_{x}$ for the corresponding discrete valuation. For any $x \in X^{(1)}$, the map $M(X) \to M(\O_{\nu_x})$ is injective. Set
\[
M^{ur}(X) := \bigcap_{x \in X^{(1)}} M(\O_{\nu_x}),
\]
where the intersection is taken in $M(k(X))$.
\end{defn}
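To illustrate the definition, take $M = \gm$, which is strictly $\aone$-invariant as noted in Example \ref{ex:aonehomologyofgm}. For $X$ smooth and irreducible, a rational function invertible at every codimension $1$ point of $X$ is a global unit (by normality of $X$ and algebraic Hartogs), so
\[
\gm^{ur}(X) = \bigcap_{x \in X^{(1)}} \O_{\nu_x}^{\times} = \O_X(X)^{\times} = \gm(X),
\]
consistent with the lemma that follows.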
\begin{lem}
\label{lem:unramifiedinvariants}
The induced map $M(X) \to M^{ur}(X)$ is an isomorphism. Thus, if $X$ is a smooth variety, the functor $M \mapsto M^{ur}(X)$ (from the category of strictly $\aone$-invariant sheaves of groups to the category of abelian groups) is representable on $\Ab^{\aone}_k$ by the sheaf $\H_0^{\aone}(X)$.
\end{lem}
\begin{proof}
Injectivity of $M(X) \to M^{ur}(X)$ follows from Lemma \ref{lem:excision}, so it remains to prove surjectivity. A class $\alpha \in M^{ur}(X)$ is a class in $M(k(X))$ lying in the image of $M(\O_{\nu_x})$ for each codimension $1$ point $x$ of $X$. If $\alpha$ is in the image of $M(\O_{\nu})$, then by definition there is an open subscheme $U_{\nu} \subset X$ on which $\alpha$ is defined. Thus, we can find a collection of open subschemes $U_{i}$ such that $\alpha$ extends to a class on $U_i$ for each $i$. Using the sheaf property and induction, these classes glue to give a class on the union $U$ of the $U_i$. By assumption, this union contains all codimension $1$ points of $X$, so its closed complement has codimension $\geq 2$. By Lemma \ref{lem:excision}, the restriction map $M(X) \to M(U)$ is then an isomorphism, and $\alpha$ lies in the image of $M(X)$. The second statement follows immediately from the first one via Lemma \ref{lem:h0universal}.
\end{proof}
\begin{cor}
\label{cor:isomorphismsofstrictlyaoneinvariantsheaves}
Given $M,M' \in \Ab^{\aone}_k$, a morphism $f: M \to M'$ is an isomorphism if and only if for every separable, finitely generated extension $L/k$ the induced map $M(L) \to M'(L)$ is an isomorphism.
\end{cor}
\begin{proof}
Since $\Ab^{\aone}_k$ is abelian, it suffices to prove that the strictly $\aone$-invariant sheaves $\ker(f)$ and $\operatorname{coker}(f)$ are trivial. However, it follows immediately from Lemma \ref{lem:unramifiedinvariants} that a strictly $\aone$-invariant sheaf $A$ is trivial if and only if $A(L)$ is trivial for every separable, finitely generated extension $L/k$.
\end{proof}
\begin{rem}
If $M$ admits transfers, then the point of view on strictly $\aone$-invariant sheaves (with transfers) discussed above is closely related to Rost's theory of cycle modules \cite{RostChow}. In fact, all of the examples of strictly $\aone$-invariant sheaves used below can be constructed using either Rost's theory or a modification developed by Morel. The relationship between strictly $\aone$-invariant sheaves with transfers and Rost's theory of cycle modules has been developed by D\'eglise \cite{Deglise1,Deglise2} (the former category is a localization of the latter). The counterpart of Lemma \ref{lem:unramifiedinvariants} in the setting of cycle modules is given by a result of Merkurjev \cite[Theorem 2.10]{Merkurjev}. In fact, the strictly $\aone$-invariant sheaf with transfers associated with Merkurjev's cycle module by D\'eglise's theory (or rather its degree $0$ part) is precisely the $0$-th Suslin homology sheaf as we explain below in Example \ref{ex:unramifiedchow}.
\end{rem}
\begin{rem}
There is a quotient map $\O_{\nu} \to \kappa_{\nu}$, and this induces a morphism $M(\O_{\nu}) \to M(\kappa_{\nu})$. By choosing local parameters, one can define appropriate notions of residue maps for strictly $\aone$-invariant sheaves, though if $M$ does not admit transfers, these residues depend on the choices made. This point of view is developed in \cite[\S 1]{MField}, but we will not use this theory below.
\end{rem}
\subsubsection*{Unramified \'etale cohomology and other examples}
Recall that, by Corollary \ref{cor:aonelocaldetection}, if $M$ is a strictly $\aone$-invariant sheaf and $X$ is an $\aone$-connected smooth scheme, the pullback map $M(\Spec k) \to M(X)$ is a bijection. In this section, we give a number of examples of unramified sheaves to show what kind of ``vanishing" statements $\aone$-connectedness entails.
\begin{ex}
\label{ex:unramifiedetalecohomology}
Suppose $k$ is a field, and $n$ is an integer that is not divisible by the characteristic of $k$. Let ${\mathbf H}^p_{\et}(\mu_n^{\tensor q})$ denote the (Nisnevich) sheaf (on $\Sm_k$) associated with the presheaf $U \mapsto H^p_{\et}(U,\mu_n^{\tensor q})$. The sheaf ${\mathbf H}^p_{\et}(\mu_n^{\tensor q})$ is strictly $\aone$-invariant. There are many ways to see this; for example, it follows with a bit of work from homotopy invariance for \'etale cohomology (\cite[Expose XV Lemme 4.2]{SGA43}).
\end{ex}
\begin{lem}
\label{lem:unramifiedrelation}
Suppose $n$ is an integer that is coprime to the characteristic of $k$. If $X \in \Sm_k$, then we have
\[
\hom_{\Ab^{\aone}_k}(\H_0^{\aone}(X),{\mathbf H}^p_{\et}(\mu_n^{\tensor q})) = H^0_{Nis}(X,{\mathbf H}^p_{\et}(\mu_n^{\tensor q})).
\]
If furthermore $X$ is proper, then the latter group is precisely the group $H^p_{ur}(k(X)/k,\mu_n^{\tensor q})$.
\end{lem}
\begin{proof}
Since ${\mathbf H}^p_{\et}(\mu_n^{\tensor q})$ is strictly $\aone$-invariant, the equality in the statement follows immediately from Lemma \ref{lem:h0universal}. By Lemma \ref{lem:unramifiedinvariants}, if $X$ is an irreducible smooth scheme, ${\mathbf H}^p_{\et}(\mu_n^{\tensor q})(X)$ coincides with the subgroup of $H^p_{\et}(k(X),\mu_n^{\tensor q})$ consisting of unramified elements.
\end{proof}
\begin{rem}
For a development of unramified \'etale cohomology see, e.g., \cite[\S 4]{CTPurity}. These groups admit an alternate description. If $L/k$ is a finitely generated field extension and $\nu$ is a discrete valuation of $L/k$ with residue field $\kappa$, there are residue maps
\[
\partial_{\nu}: H^p_{\et}(L,\mu_n^{\tensor q}) \longrightarrow H^{p-1}_{\et}(\kappa,\mu_n^{\tensor q-1}).
\]
The subgroup of unramified elements of $H^p_{\et}(L,\mu_n^{\tensor q})$ can be identified with the intersection of the kernels of the residue maps $\partial_{\nu}$ as $\nu$ ranges over the discrete valuations of $L/k$.
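For example, when $p = q = 1$, Kummer theory identifies $H^1_{\et}(L,\mu_n)$ with $L^{\times}/(L^{\times})^{n}$ and $H^0_{\et}(\kappa,\mu_n^{\tensor 0})$ with $\Z/n$, and the residue map is induced by the valuation:
\[
\partial_{\nu}(u \bmod (L^{\times})^{n}) = \nu(u) \bmod n.
\]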
\end{rem}
\begin{ex}
\label{ex:unramifiedMilnorktheory}
Let $X$ be a smooth $k$-variety, with function field $k(X)$. Given a codimension $1$ point $x \in X^{(1)}$, we write $\partial_x$ for the residue map associated with the valuation ring defined by $x$ (see \cite[Lemma 2.1]{Milnor} for the construction of these residue maps). We define
\[
\K^M_n(X) := \ker(K^M_n(k(X)) \stackrel{\bigoplus_{x \in X^{(1)}}\partial_x}{\longrightarrow} \bigoplus_{x \in X^{(1)}} K^M_{n-1}(\kappa_{\nu_x})).
\]
We record one piece of notation for later use: if $\varphi: A \to B$ is a ring homomorphism, the induced map $K^M_n(A) \to K^M_n(B)$ is usually denoted by either $\varphi_*$ or $Res_{A/B}$.
One can show that $\K^M_n$ is a strictly $\aone$-invariant sheaf (see \cite[\S 2.2]{MMilnor} for more details), and in fact $\K^M_n$ is a strictly $\aone$-invariant sheaf with transfers. For any integer $m$, multiplication by $m$ extends to a morphism of sheaves $\K^M_n \stackrel{\times m}{\longrightarrow} \K^M_n$. The category of strictly $\aone$-invariant sheaves is abelian (see Corollary \ref{cor:strictlyaoneinvariantabeliancategory}), and the cokernel of this morphism of sheaves, which is necessarily strictly $\aone$-invariant, is denoted $\K^M_n/m$. By construction, we have identifications $\K^M_n(L) = K^M_n(L)$ and $\K^M_n/m(L) = K^M_n(L)/m$ for any finitely generated extension $L/k$.
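For concreteness, recall the standard characterization of these residue maps: if $\pi$ is a uniformizer for $\nu_x$ and $u_1,\ldots,u_n$ are units in $\O_{\nu_x}$, then
\[
\partial_x(\{\pi,u_2,\ldots,u_n\}) = \{\bar{u}_2,\ldots,\bar{u}_n\} \quad \text{and} \quad \partial_x(\{u_1,\ldots,u_n\}) = 0,
\]
where $\bar{u}_i$ denotes the image of $u_i$ in the residue field.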
\end{ex}
\begin{ex}
\label{ex:unramifiedchow}
Let $k$ be a perfect field and let $X$ be a smooth proper $k$-variety. Consider the functor assigning to a finitely generated separable extension $L/k$ the group $CH_0(X_L)$. By means of duality in Voevodsky's derived category of motives, one can show (see \cite[Theorem 2.2]{HuberKahn} or \cite[\S 3.4]{Deglise2}) that for $L$ as above, there is a canonical identification
\[
\H_0^S(X)(L) \isomto CH_0(X_L).
\]
If one replaces $CH_0(X_L)$ by its rationalization, a similar statement is true for Suslin homology with $\Q$-coefficients. The sections of $\H_0^S(X)$ over a smooth scheme $U$ with function field $L$ can thus be described either in terms of unramified elements, or as follows. If $u$ is a codimension $1$ point of $U$, there are specialization maps $CH_0(X_L) \to CH_0(X_{\kappa_{\nu}})$ \cite[\S 20.3]{Fulton}, and $\H_0^S(X)(U)$ can be realized as the intersection of the kernels of these specialization maps.
\end{ex}
\begin{rem}
By the equivalence of categories between an appropriate category of strictly $\aone$-invariant sheaves with transfers and Rost's category of cycle modules \cite[Th\'eor\`eme 3.3]{Deglise1}, $\H_0^S(X)$ gives rise to Merkurjev's universal cycle module from \cite[\S 2.3]{Merkurjev}. Lemma \ref{lem:h0universal} combined with this observation can be used to give an alternate proof of \cite[Theorem 2.11]{Merkurjev} under the hypothesis that $k$ is perfect.
\end{rem}
\begin{ex}
\label{ex:unramifiedwittgroup}
Let $k$ be a field having characteristic unequal to $2$. For any smooth $k$-scheme $X$, let $W(X)$ be the associated Witt group. Using the purity results of \cite{OjangurenPanin}, one can study the Nisnevich sheafification of this presheaf. Indeed, the Nisnevich sheafification of the functor $X \mapsto W(X)$ defines a sheaf ${\bf W}$, which we refer to as the unramified Witt sheaf. One can identify the group of sections ${\bf W}(X)$ as the subgroup of $W(k(X))$ with trivial (second) residues at points of codimension $1$ of $X$. See \cite[Appendice]{CTO}, \cite[\S 2.1]{MMilnor} and the references therein for more details. While Witt groups do not admit transfers in the same sense as Milnor K-theory or unramified \'etale cohomology, there is a notion of transfer for Witt groups.
\end{ex}
\begin{ex}
\label{ex:unramifiedfundamentalideal}
Let ${\bf W}$ be the unramified Witt sheaf as just defined. Let $I(k)$ denote the fundamental ideal in the Witt ring of $k$, i.e., the ideal of even dimensional forms, and let $I^n$ denote the $n$-th power of the fundamental ideal (which is known to be additively generated by Pfister forms). We then set ${\bf I}^n(X) = I^n(k(X)) \cap {\bf W}(X)$. The presheaf $U \mapsto {\bf I}^n(U)$ is a strictly $\aone$-invariant sheaf by, e.g., \cite[Theorem 2.3]{MMilnor}. There is a monomorphism of strictly $\aone$-invariant sheaves ${\bf I}^{n+1} \hookrightarrow {\bf I}^{n}$ (coming from the corresponding injective maps on sections over fields), and it follows that ${\bf I}^n/{\bf I}^{n+1}$ is also strictly $\aone$-invariant.
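On sections over fields, the associated graded of this filtration is governed by Milnor K-theory: the resolution of Milnor's conjecture on quadratic forms (due to Orlov-Vishik-Voevodsky) provides, for any field $L$ of characteristic $\neq 2$, isomorphisms
\[
K^M_n(L)/2 \isomto I^n(L)/I^{n+1}(L);
\]
we recall this only for orientation, as it is not needed below.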
\end{ex}
\begin{ex}
\label{ex:unramifiedbrauergroup}
If $k$ is a field having characteristic exponent $p$, let $\gm'$ denote the \'etale sheaf $\gm \tensor_{\Z} \Z[\frac{1}{p}]$. Let ${\mathbf H}^2_{\et}(\gm')$ denote the Nisnevich sheaf associated with the presheaf $U \mapsto H^2_{\et}(U,\gm')$. One can show that ${\mathbf H}^2_{\et}(\gm')$ is strictly $\aone$-invariant. Using Lemma \ref{lem:h0universal} one deduces that
\[
\hom_{\Ab^{\aone}_k}(\H_0^{\aone}(X),{\mathbf H}^2_{\et}(\gm')) = H_{Nis}^0(X,{\mathbf H}^2_{\et}(\gm')).
\]
Furthermore, one can show using purity that if $X$ is smooth and proper, $H_{Nis}^0(X,{\mathbf H}^2_{\et}(\gm'))$ is precisely the cohomological Brauer group $H^2_{\et}(X,\gm')$. It follows from Proposition \ref{prop:dependsonpi0} and Lemma \ref{lem:h0universal} that if $X$ is an $\aone$-connected smooth scheme over an algebraically closed field, then $H_{Nis}^0(X,{\mathbf H}^2_{\et}(\gm'))$ is trivial. That $H^2_{\et}(X,\gm')$ is trivial if $X$ is $\aone$-connected was first observed by B. Bhatt; this result is stated (with proof) in \cite[Theorem 4.3]{Gille}. For yet another proof of this statement, see \cite[Proposition 4.2]{AM}.
\end{ex}
Given any object in the stable $\aone$-homotopy category (i.e., a $\pone$-spectrum), Morel's connectivity results (recalled here as Theorem \ref{thm:stableaoneconnectivity}) show that the associated stable $\aone$-homotopy sheaves are strictly $\aone$-invariant. Thus, given any cohomology theory representable in the stable $\aone$-homotopy category, one can get corresponding strictly $\aone$-invariant sheaves; this applies notably to motivic cohomology, algebraic K-theory, Hermitian K-theory, etc. Another notable example comes from the stable $\aone$-homotopy groups of motivic spheres; the known computations are related to the Milnor-Witt K-theory sheaves mentioned in Example \ref{ex:aonehomologyofgm}.
\subsubsection*{Detecting $\aone$-connectedness with $\aone$-homology and birational sheaves}
Finally, we provide the ``unramified'' characterization of $\aone$-connectedness stated in the introduction as Theorem \ref{thmintro:characterization}. This result can be viewed as an extension of \cite[Theorem 1]{AH}, and the techniques are similar.
\begin{thm}
\label{thm:characterization}
If $k$ is a field, $R$ is a commutative unital ring (e.g., $\Z$ or $\Q$) and $X$ is a smooth proper $k$-scheme, then $X$ is $\aone$-connected if and only if the canonical map $\H_0^{\aone}(X,R) \to R$ is an isomorphism.
\end{thm}
\begin{proof}
We prove only the statement with $\Z$-coefficients; the corresponding statement with $R$-coefficients follows by repeating the argument word for word with $\Z$ replaced by $R$. If $X$ is $\aone$-connected, then the canonical map in question is an isomorphism by Proposition \ref{prop:dependsonpi0} (note: this does not require properness). In the other direction, suppose $X$ is not $\aone$-connected. It suffices to provide a strictly $\aone$-invariant sheaf $M$ such that the map $M(k) \to M(X)$ is not an isomorphism. By Lemma \ref{lem:h0universal} this is equivalent to proving that the map
\[
\hom_{\Ab^{\aone}_k}(\Z,M) \longrightarrow \hom_{\Ab^{\aone}_k}(\H_0^{\aone}(X),M)
\]
is not a bijection.
Recall that a presheaf of sets $\F$ on $\Sm_k$ is called birational if for any open dense immersion $U \hookrightarrow X$ the map $\F(X) \to \F(U)$ is a bijection. In \cite[Theorem 6.2.1]{AM}, we showed that if $X$ is a smooth proper $k$-scheme, then there are a birational and $\aone$-invariant sheaf $\pi_0^{b\aone}(X)$ and a morphism $X \to \pi_0^{b\aone}(X)$ (functorial in morphisms of smooth proper schemes) characterized by the property that if $L$ is any finitely generated separable extension of $k$, then $\pi_0^{b\aone}(X)(L) = X(L)/R$. Here, the set $X(L)/R$ is the set of $R$-equivalence classes of points in $X(L)$. Now, either $X(k)$ is empty or not.
{\em Case 1}. Suppose $X$ is $\aone$-disconnected, but $X(k)$ is empty. In \cite[Lemma 2.4]{AH}, we proved that the free sheaf of abelian groups on $\pi_0^{b\aone}(X)$, denoted $\Z(\pi_0^{b\aone}(X))$, is birational and strictly $\aone$-invariant. Homomorphisms $\Z \to \Z(\pi_0^{b\aone}(X))$ correspond precisely to elements of $\pi_0^{b\aone}(X)(k)$. In \cite[Corollary 2.9]{AH}, we showed that $X(k)$ is non-empty if and only if the map $\H_0^{\aone}(X) \to \Z$ is an epimorphism. Since $X(k)$ is empty here, this map is not an epimorphism, and in particular not an isomorphism.
{\em Case 2}. Assume that $X$ is $\aone$-disconnected, but $X(k)$ is non-empty. Any rational point in $X(k)$ induces a splitting $\Z \to \H_0^{\aone}(X)$, and a corresponding splitting $\Z \to \Z(\pi_0^{b\aone}(X))$. Since the category of strictly $\aone$-invariant sheaves of groups is abelian (see Corollary \ref{cor:strictlyaoneinvariantabeliancategory}), we have direct sum decompositions $\H_0^{\aone}(X) \cong \Z \oplus \tilde{\H}_0^{\aone}(X)$ and $\Z(\pi_0^{b\aone}(X)) \cong \Z \oplus \widetilde{\Z(\pi_0^{b\aone}(X))}$, and these splittings are compatible in the sense that the morphism $\H_0^{\aone}(X) \to \Z(\pi_0^{b\aone}(X))$ induces a morphism $\Z \oplus \tilde{\H}_0^{\aone}(X) \to \Z \oplus \widetilde{\Z(\pi_0^{b\aone}(X))}$ that is the identity morphism on the first summand.
By \cite[Corollary 2.4.4]{AM}, $X$ is $\aone$-connected if and only if for every finitely generated separable extension $L/k$ the set $\pi_0^{b\aone}(X)(L)$ is reduced to a point. Thus, by assumption, there exists a separable extension $K/k$ such that $\pi_0^{b\aone}(X)(K)$ consists of (strictly) more than $1$ element. Write $X_K$ for the base extension of $X$ to $\Spec K$. Pullback gives an identification $\H_0^{\aone}(X)(K) = \H_0^{\aone}(X_K)(K)$ by \cite[\S 5.1]{MStable}, see in particular Example 5.1.3. Thus, without loss of generality, we can assume $k = K$ and that $\pi_0^{b\aone}(X)(k)$ consists of strictly more than $1$ element.
Each element of $\pi_0^{b\aone}(X)(k)$ determines a homomorphism $\Z \to \H_0^{\aone}(X)$ that is non-trivial, since the composite morphism $\Z \to \H_0^{\aone}(X) \to \Z(\pi_0^{b\aone}(X))$ is non-trivial. Taking the sum of these homomorphisms gives rise to a non-trivial homomorphism $\Z(\pi_0^{b\aone}(X))(k) \to \H_0^{\aone}(X)(k)$. It follows immediately that $\tilde{\H}_0^{\aone}(X)(k)$ is non-trivial.
\end{proof}
Combining Lemma \ref{lem:unramifiedinvariants} with Theorem \ref{thm:characterization} we deduce the following result.
\begin{cor}
If $k$ is a field and $X$ is a smooth proper $k$-scheme, then $X$ is $\aone$-connected if and only if for every strictly $\aone$-invariant sheaf $M$, the canonical map $M(k) \to M^{ur}(X)$ is a bijection.
\end{cor}
\subsubsection*{Detecting $\aone$-connectedness with Suslin homology}
As it turns out, the key point in the proof of Theorem \ref{thm:characterization} is the use of strictly $\aone$-invariant sheaves that do not necessarily possess transfers. In general, e.g., if $X$ is a smooth curve of genus $g \geq 1$, the sheaves $\Z(\pi_0^{b\aone}(X))$ do not possess transfers; for a discussion of this point, see \cite[\S 2]{LevineST}. If $X$ is a smooth proper $k$-variety, neither the zeroth Suslin homology sheaf of $X$ with integral coefficients nor its variant with rational coefficients can detect $\aone$-connectedness. With rational coefficients, this follows from a general statement about Suslin homology of separably rationally connected varieties (see, e.g., \cite[Chapter 4 Definition 3.2]{Kollar}), combined with an appeal to Example \ref{ex:artinmumford}. If $k$ is a perfect field, \cite[Chapter 4 Theorem 3.9.4]{Kollar} shows that separably rationally connected varieties $X/k$ have the property that for every separably closed extension $L/k$, every two $L$-points can be connected by a $\pone$.
\begin{prop}
\label{prop:rationallyconnected}
If $k$ is a perfect field, and $X$ is a smooth proper $k$-scheme such that for every separably closed field $L/k$ we have $X(L)/R = \ast$, then the canonical morphism $\H_0^{S}(X,\Q) \to \Q$ is an isomorphism.
\end{prop}
\begin{proof}
Since the sheaf $\H_0^S(X,\Q)$ is strictly $\aone$-invariant, it suffices by Corollary \ref{cor:isomorphismsofstrictlyaoneinvariantsheaves} to prove that the map in question is an isomorphism on sections over every finitely generated separable extension $L/k$. By means of the identification $\H_0^S(X,\Q)(L) = CH_0(X_L)_{\Q}$ from Example \ref{ex:unramifiedchow} it therefore suffices to prove $CH_0(X_L)_{\Q}$ is isomorphic to $\Q$ under the stated hypotheses.
If $\bar{L}$ is an algebraic closure of $L$, then we have a restriction map for Chow groups
\[
CH_0(X_L) \to CH_0(X_{\bar{L}}).
\]
The kernel of this map is a torsion subgroup. Indeed, if $Z$ is a cycle in $CH_0(X_L)$ that goes to zero in $CH_0(X_{\bar{L}})$, then $Z$ necessarily goes to zero over some finite extension $L'/L$. On the other hand, pullback followed by pushforward is multiplication by $[L':L]$, and thus $[L':L] Z = 0$.
Under the assumption on $X$, we know that $CH_0(X_{\bar{L}}) = \Z$. Upon tensoring with $\Q$, restriction becomes an isomorphism: it is surjective since $X_L$ has a $0$-cycle of finite degree coming from a point over some finite extension $L'/L$ and injective since the kernel of restriction is torsion and therefore becomes trivial after tensoring with $\Q$.
\end{proof}
\begin{ex}
\label{ex:artinmumford}
The classic examples of \cite{ArtinMumford} provide unirational (hence separably rationally connected) smooth proper varieties $X$ over $\cplx$ that are non-rational, but for which $Br(X)$ is non-trivial. In particular, these varieties have $\H_0^{S}(X,\Q) = \Q$, but are not $\aone$-connected, e.g., by Example \ref{ex:unramifiedbrauergroup} and Corollary \ref{cor:aonelocaldetection}. Other examples along these lines are provided in \cite{CTO, Peyre1} and \cite{ABK}.
\end{ex}
Finally, we observe that the zeroth integral Suslin homology sheaf of a smooth projective variety cannot detect $\aone$-connectedness. Since we know that the zeroth $\aone$-homology detects rational points, and the zeroth Suslin homology is related to $0$-cycles, a natural place to look for a counterexample is among the smooth projective $k$-varieties that possess a $0$-cycle of degree $1$ but that have no $k$-rational point; we thank Sasha Merkurjev for pointing out the following example due to Parimala.
\begin{ex}
\label{ex:parimala}
If $X$ is a smooth projective variety such that the morphism $\H_0^S(X) \to \Z$ is an isomorphism, it need not be the case that $X$ is $\aone$-connected. In \cite[Theorem 3]{Parimala}, Parimala gives a field $k$ (having characteristic $0$) and a projective homogeneous space $X$ under a connected reductive linear algebraic group over $k$ such that (i) $X$ has a point over an extension of $k$ of degree $2$ and over an extension of odd degree $p$, but (ii) $X$ has no $k$-rational point. Point (ii) guarantees that $X$ is not $\aone$-connected.
Again combining Example \ref{ex:unramifiedchow} and Corollary \ref{cor:isomorphismsofstrictlyaoneinvariantsheaves}, to prove that $\H_0^S(X) \to \Z$ is an isomorphism, it suffices to prove that $CH_0(X_L) \to \Z$ is an isomorphism for every finitely generated separable extension $L/k$. If $K$ is a field, and $X$ is a projective homogeneous space under a connected reductive linear algebraic group $G$ such that $X(K)$ is non-empty, then the choice of $x \in X(K)$ determines an isomorphism $G/P \isomt X$, where $P$ is a $K$-parabolic subgroup of $G$. Then, \cite[V Theorem 21.20(ii)]{Borel} states that $X_K$ is $K$-rational, and $K$-birational invariance of the Chow group of $0$-cycles (see Remark \ref{rem:birationalinvarianceofchow}) allows one to deduce that $CH_0(X_K) = \Z$. Combining this discussion with point (i), it follows that for any extension $L/k$, the degree map $CH_0(X_L) \to \Z$ is an isomorphism. In fact, since having a $0$-cycle of degree $1$ is equivalent to having points over extensions of coprime degrees, this argument shows that if $X$ is any projective homogeneous space under a connected reductive group that has a $0$-cycle of degree $1$, then $\H_0^S(X) \to \Z$ is an isomorphism.
\end{ex}
| {
"timestamp": "2011-11-22T02:01:03",
"yymm": "1001",
"arxiv_id": "1001.4574",
"language": "en",
"url": "https://arxiv.org/abs/1001.4574",
"abstract": "We study some aspects of the relationship between A^1-homotopy theory and birational geometry. We study the so-called A^1-singular chain complex and zeroth A^1-homology sheaf of smooth algebraic varieties over a field k. We exhibit some ways in which these objects are similar to their counterparts in classical topology and similar to their motivic counterparts (the (Voevodsky) motive and zeroth Suslin homology sheaf). We show that if k is infinite the zeroth A^1-homology sheaf is a birational invariant of smooth proper varieties, and we explain how these sheaves control various cohomological invariants, e.g., unramified étale cohomology. In particular, we deduce a number of vanishing results for cohomology of A^1-connected varieties. Finally, we give a partial converse to these vanishing statements by giving a characterization of A^1-connectedness by means of vanishing of unramified invariants.",
"subjects": "Algebraic Geometry (math.AG); Algebraic Topology (math.AT); K-Theory and Homology (math.KT)",
"title": "Birational invariants and A^1-connectedness",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373085,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449310917195
} |
https://arxiv.org/abs/2006.15614 | Multiple list colouring of $3$-choice critical graphs | A graph $G$ is called $3$-choice critical if $G$ is not $2$-choosable but any proper subgraph is $2$-choosable. A characterization of $3$-choice critical graphs was given by Voigt in [On list Colourings and Choosability of Graphs, Habilitationsschrift, Tu Ilmenau(1998)]. Voigt conjectured that if $G$ is a bipartite $3$-choice critical graph, then $G$ is $(4m, 2m)$-choosable for every integer $m$. This conjecture was disproved by Meng, Puleo and Zhu in [On (4, 2)-Choosable Graphs, Journal of Graph Theory 85(2):412-428(2017)]. They showed that if $G=\Theta_{r,s,t}$ where $r,s,t$ have the same parity and $\min\{r,s,t\} \ge 3$, or $G=\Theta_{2,2,2,2p}$ with $p \ge 2$, then $G$ is bipartite $3$-choice critical, but not $(4,2)$-choosable. On the other hand, all the other bipartite 3-choice critical graphs are $(4,2)$-choosable. This paper strengthens the result of Meng, Puleo and Zhu and shows that all the other bipartite $3$-choice critical graphs are $(4m,2m)$-choosable for every integer $m$. | \section{Introduction}
An {\em $a$-list assignment} of a graph $G$ is a mapping $L$ which assigns to each vertex $v$ of $G$ a set $L(v)$ of $a$ colours. A {\em $b$-fold colouring} of $G$ is a mapping $\phi$ which assigns to each vertex $v$ of $G$ a set $\phi(v)$ of $b$ colours such that for every edge $uv$, $\phi(u) \cap \phi(v) = \emptyset$.
An {\em $(L,b)$-colouring} of $G$ is a $b$-fold colouring $\phi$ of $G$ such that $\phi(v) \subseteq L(v)$ for each vertex $v$.
We say $G$ is {\em $(a,b)$-choosable} if for any $a$-list assignment $L$ of $G$, there is an $(L,b)$-colouring of $G$.
We say $G$ is {\em $a$-choosable} if $G$ is $(a,1)$-choosable.
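The following standard example illustrates that choosability is genuinely stronger than colourability. The complete bipartite graph $K_{2,4}$ is $2$-colourable but not $2$-choosable: let the part of size $2$ consist of $u_1,u_2$ with $L(u_1)=\{1,2\}$ and $L(u_2)=\{3,4\}$, and give the four vertices of the other part the lists $\{1,3\},\{1,4\},\{2,3\},\{2,4\}$. For any choice of colours $c\in L(u_1)$ and $c'\in L(u_2)$, the vertex with list $\{c,c'\}$ has no available colour, so $K_{2,4}$ admits no $(L,1)$-colouring.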
The concept of list colouring of graphs was introduced independently by Erd\H{o}s, Rubin and Taylor \cite{ERT} and Vizing \cite{Vizing1976} in the 1970s. Since then, list colouring of graphs has attracted considerable attention and has become an important branch of chromatic graph theory.
Erd\H{o}s, Rubin and Taylor \cite{ERT} characterized all the $2$-choosable graphs. Given a graph $G$, the {\em core} of $G$ is obtained from $G$ by repeatedly removing degree $1$ vertices. Denote by $\Theta_{k_1,k_2,\ldots, k_q}$ the graph consisting of internally vertex disjoint paths of lengths $k_1,k_2, \ldots, k_q$ connecting two vertices $u$ and $v$. Erd\H{o}s, Rubin and Taylor proved that a graph $G$ is $2$-choosable if and only if the core of $G$ is $K_1$ or an even cycle or $\Theta_{2,2,2p}$ for some positive integer $p$.
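For instance, $\Theta_{2,2,2}$ is the complete bipartite graph $K_{2,3}$, which is therefore $2$-choosable, while $K_{2,4}=\Theta_{2,2,2,2}$ is its own core and is not of any of the listed types; this is consistent with the list assignment exhibited above, which shows directly that $K_{2,4}$ is not $2$-choosable.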
We say a graph $G$ is {\em $3$-choice-critical} if $G$ is not $2$-choosable and any proper subgraph of $G$ is $2$-choosable.
In 1998, Voigt characterized all the $3$-choice-critical graphs.
\begin{theorem}[\cite{Voigt1998}]
\label{3-choice-critical}
A graph is 3-choice-critical if and only if it is one of the following:
\begin{enumerate}
\item An odd cycle.
\item Two vertex-disjoint even cycles joined by a path.
\item Two even cycles with one vertex in common.
\item $\Theta_{2r,2s,2t}$ with $r\geq 1$ and $s, t>1$, or
$\Theta_{2r+1,2s+1,2t+1}$ with $r\ge 0$, $s,t > 0$.
\item $\Theta_{2,2,2,2t}$ with $t\geq 1$.
\end{enumerate}
\end{theorem}
Except for the odd cycles, all the other $3$-choice-critical graphs are bipartite.
In \cite{Voigt1998}, Voigt conjectured that every bipartite $3$-choice-critical graph $G$ is $(2m,m)$-choosable for every even integer $m$. This conjecture is true if $G=\Theta_{2,2,2,2}$ \cite{TuzaVoigt1996}.
However, Meng, Puleo and Zhu \cite{4choose2} proved that if $\min\{r,s,t\} \geq 3$ and $r,s,t$ have the same parity, then $\Theta_{r,s,t}$ is not $(4,2)$-choosable, and if $t \geq 2$, then $\Theta_{2,2,2,2t}$ is not $(4,2)$-choosable.
Nevertheless, the other bipartite 3-choice-critical graphs are $(4,2)$-choosable \cite{4choose2}.
It was conjectured by Erd\H{o}s, Rubin and Taylor \cite{ERT} that every $(a,b)$-choosable graph is $(am,bm)$-choosable. This conjecture was recently refuted by Dvo\v{r}\'{a}k, Hu and Sereni \cite{DHS}, who proved that for any integer $k \ge 4$, there exists a $k$-choosable graph which is not $(2k,2)$-choosable.
On the other hand,
it was proved by Tuza and Voigt \cite{TuzaVoigt1996-2choosable} that if $G$ is $2$-choosable, then $G$ is $(2m,m)$-choosable for any positive integer $m$.
A natural question is whether all the $(4,2)$-choosable $3$-choice critical graphs are $(4m,2m)$-choosable for all integers $m$. In this paper, we answer this question in the affirmative.
\begin{theorem}
\label{thm-main}
If $G$ consists of two vertex-disjoint even cycles joined by a path or
two even cycles intersecting at a single vertex, or $G = \Theta_{r,s,t}$ where $r \leq 2$, $s,t>2$ and $r,s,t$ have the same parity, then $G$ is $(4m,2m)$-choosable for every integer $m$.
\end{theorem}
The {\em strong fractional choice number} of a graph $G$, studied in \cite{JZ2019,LZ2019,Zhu2018}, is defined as
$$ch_f^*(G)= \inf\{r \in \mathbb{R}: G \text{ is $(a,b)$-choosable for any $a,b$ for which $a/b \ge r$}\}.$$
As a consequence of Theorem \ref{thm-main}, every $(4,2)$-choosable $3$-choice critical graph $G$ has $ch_f^*(G)=2$. It remains an open problem whether every bipartite $3$-choice critical graph $G$ has $ch_f^*(G) =2$.
\section{Proof of Theorem \ref{thm-main}}
The idea of the proof of Theorem \ref{thm-main} is the following: Assume $G$ is a graph as in Theorem \ref{thm-main} and $L$ is a $4m$-list assignment of $G$. Let $H$ be the set of vertices of $G$ of degree at least $3$. Then $G-H$ is the disjoint union of a family of two or three paths, where each end vertex of these paths has exactly one neighbour in $H$ unless the path consists of a single vertex $w$, in which case $w$ has two neighbours in $H$,
and other vertices of the paths have no neighbour in $H$.
We shall assign a set of $2m$ colours to each vertex in $H$.
We then extend this pre-colouring of $H$ to an $(L, 2m)$-colouring of the remaining vertices of $G$, which form two or three paths.
The extensions to the different paths are independent of each other.
The question of concern becomes the following: Assume $P$ is a path with vertices $v_1, v_2, \ldots, v_n$ in order and $L$ is a $4m$-list assignment on $P$. Assume $S,T$ are the $2m$-sets of colours assigned to the neighbours of $v_1$ and $v_n$ in $H$, respectively (note that the neighbours of $v_1$ and $v_n$ may be the same, in which case $S=T$). Under what conditions can we find an $(L,2m)$-colouring of $P$ in which the end vertices of $P$ avoid the colours in $S$ and $T$, respectively?
A sufficient condition for the existence of such an extension to a $2m$-fold colouring of the whole path was given in \cite{4choose2}. We shall use this condition to show that there exists an appropriate $(L,2m)$-colouring of $H$ such that the colouring can be extended to all the paths in $G-H$. This is the same idea as used in \cite{4choose2}.
\begin{definition}
\label{def-slp1}
Assume $P$ is an $n$-vertex path with vertices $v_1, v_2, \ldots, v_n$ in order.
For a list assignment $L$ of $P$, let
\begin{equation*}
\begin{array}{rl}
X_1 =& L(v_1), \\
X_i =& L(v_i)-X_{i-1}, i \in \{2,3,\ldots, n\},\\
S_L(P)=&\sum_{i=1}^{n}|X_i|.
\end{array}
\end{equation*}
\end{definition}
The following lemma was proved in \cite{4choose2}.
\begin{lemma}[ \cite{4choose2}]
\label{42lemma}
Let $P$ be an $n$-vertex path and let $L$ be a list assignment on $P$.
If $|L(v_1)|, |L(v_n)| \geq 2m$ and $|L(v_i)|= 4m$ for $i \in \{2,3,\ldots, n-1\}$, then the path $P$ is $(L,2m)$-colourable if and only if $S_L(P) \geq 2nm$.
\end{lemma}
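As an illustration of Definition \ref{def-slp1} and Lemma \ref{42lemma}, consider the following small instance with $m=1$: let $P$ be the path $v_1v_2v_3$ with $L(v_1)=\{1,2,3,4\}$, $L(v_2)=\{3,4,5,6\}$ and $L(v_3)=\{5,6,7,8\}$. Then $X_1=\{1,2,3,4\}$, $X_2=L(v_2)-X_1=\{5,6\}$ and $X_3=L(v_3)-X_2=\{7,8\}$, so $S_L(P)=4+2+2=8 \geq 2nm=6$; indeed, $\phi(v_1)=\{1,2\}$, $\phi(v_2)=\{3,4\}$, $\phi(v_3)=\{5,6\}$ is an $(L,2)$-colouring. We shall return to this example below.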
\begin{definition}
Assume $L$ is a $4m$-list assignment on $P$ and $S,T$ are two colour sets.
Let $L\ominus(S,T)$ be the list assignment obtained
from $L$ by deleting all colours in $S$ from $L(v_1)$, all colours in $T$ from $L(v_n)$, and leaving all other lists unchanged.
The damage of $(S,T)$ with respect to $L$ and $P$ is defined as
$$dam_{L,P}(S,T)=S_L(P)-S_{L\ominus(S,T)}(P).$$
\end{definition}
So, when $|S|,|T| \leq 2m$, to prove that $P$ has an $(L\ominus(S,T),2m)$-colouring it suffices, by Lemma \ref{42lemma}, to show that
$$S_L(P) - dam_{L,P}(S,T) \ge 2nm.$$
For this purpose, a few lemmas were proved in \cite{4choose2} that give lower bounds for $S_L(P)$ and upper bounds for $dam_{L,P}(S,T)$.
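In the example above, take $S=\{1,2\}$ and $T=\{7,8\}$. Then $L\ominus(S,T)$ assigns $\{3,4\}$ to $v_1$ and $\{5,6\}$ to $v_3$, so the sets of Definition \ref{def-slp1} become $X_1=\{3,4\}$, $X_2=\{5,6\}$ and $X_3=\emptyset$, giving $S_{L\ominus(S,T)}(P)=4$ and $dam_{L,P}(S,T)=8-4=4 > S_L(P)-2nm=2$. Indeed, $\phi(v_1)$ is forced to be $\{3,4\}$, then $\phi(v_2)$ is forced to be $\{5,6\}$, and no colours remain for $v_3$.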
\begin{definition}[\cite{4choose2}]
\label{def-slpAX1Xn}
Assume $n$ is an odd integer, $P$ is an $n$-vertex path with vertices $v_1, v_2, \ldots, v_n$ in order, and $L$ is a list assignment on $P$.
Let
\begin{equation*}
\begin{array}{rl}
\Lambda= &\mathop{\bigcap}\limits_{x\in V(P)}L(x), \\
\hat{X}_1 =& \{c \in L(v_1)-\Lambda: \mbox{the smallest index $i$ for which $c \notin L(v_i) $ is even} \}, \\
\hat{X}_n =& \{c \in L(v_n)-\Lambda: \mbox{the largest index $i$ for which $c \notin L(v_i) $ is even}\}.
\end{array}
\end{equation*}
\end{definition}
\begin{lemma}[\cite{4choose2}]
\label{slp-dam}
Let $L$ be a list assignment on an $n$-vertex path $P$, where $n$ is odd. For any sets of colours $S,T$,
$$dam_{L,P}(S,T)=|\hat{X}_1\cap S|+|\hat{X}_n\cap T|+|\Lambda \cap(S\cup T)|.$$
\end{lemma}
\begin{lemma}[\cite{4choose2}]
\label{first lower bound for SLP}
If $L$ is a list assignment on an $n$-vertex path $P$, where $n$ is odd and $|L(v_i)|=4m$ for all $i$, then $$S_{L}(P)\ge \max\{ 2(n-1)m+ |\hat{X}_1|+|\hat{X}_n|+|\Lambda|, 2(n+1)m\}.$$
\end{lemma}
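In the running example, $\Lambda=L(v_1)\cap L(v_2)\cap L(v_3)=\emptyset$ and $\hat{X}_1=\{1,2\}$: for $c\in\{1,2\}$ the smallest index $i$ with $c \notin L(v_i)$ is $2$, while for $c\in\{3,4\}$ it is $3$. Similarly $\hat{X}_n=\{7,8\}$. Lemma \ref{slp-dam} then gives $dam_{L,P}(\{1,2\},\{7,8\})=2+2+0=4$, matching the direct computation above, and the bound of Lemma \ref{first lower bound for SLP} reads $S_L(P)\geq \max\{4+4, 8\}=8$, which holds with equality.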
The following is a key lemma for the proof in this paper.
\begin{lemma}
\label{main-lemma}
Let $m, \ell$ and $\tau$ be fixed integers, where $m\geq 1$, $0 \leq \ell \leq 4m$, $0 \leq \tau \leq 2m-2$, $\ell +\tau \geq 2m+2$, and both $\ell$ and $\tau$ are even. Assume $x,y$ are non-negative integers with $x+y \le \ell$.
Let
$$F(x,y)=\sum \binom{x}{a}\binom{y}{b}\binom{\ell-x-y}{2m-\tau-a-b},$$
where the summation is over all non-negative integer pairs $(a,b)$ for which $0 \leq a \leq x$, $0 \leq b \leq y$, $a+b \le 2m-\tau$ and $2a+b \geq \max\{2x+y+2m+1-\ell-\tau, 2m+1-\tau\}$. Then
$$ F(x,y) \leq \frac{1}{2}\binom{\ell}{2m-\tau}-1.$$
\end{lemma}
Note that when $a > x$ or $b > y$, we have ${x \choose a}{y \choose b} = 0$. Also, $a+b \le 2m-\tau$ and
$2a+b \geq 2x+y+2m-\tau+1-\ell$ imply that
$2x+y \le \ell +\tau -2m-1 + 2a+b \le \ell +2m-\tau-1$.
Thus the summation can be restricted to $0 \le a \le x$, $0 \le b \le y$, $a+b \leq 2m-\tau$, and we may assume $2x+y \le \ell +2m-\tau-1$.
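To illustrate Lemma \ref{main-lemma}, take $m=1$, $\ell=4$ and $\tau=0$, so that the asserted bound is $\frac12\binom{4}{2}-1=2$. For $x=2$ and $y=1$, the constraints $a+b\le 2$ and $2a+b\geq \max\{2x+y+3-4,\,3\}=4$ admit only $(a,b)=(2,0)$, giving $F(2,1)=\binom22\binom10\binom10=1$; for $x=1$ and $y=2$, only $(a,b)=(1,1)$ is admissible, giving $F(1,2)=\binom11\binom21\binom10=2$, so the bound is attained.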
The proof of Lemma \ref{main-lemma} will be given in the next section. In the rest of this section, we prove Theorem \ref{thm-main}. In Section \ref{sec-first-half}, we prove the first half of Theorem \ref{thm-main}: if $G$ is a graph consisting of two edge-disjoint even cycles $E$ and $F$ connected by a path $Q$ (possibly $Q$ is a single-vertex path), then $G$ is $(4m,2m)$-choosable for every positive integer $m$. In Section \ref{sec-first-second}, we prove the second half of Theorem \ref{thm-main}: $\Theta_{r,s,t}$ is $(4m,2m)$-choosable if $r,s,t$ have
the same parity and $r \leq 2$, $s,t > 2$.
\subsection{Proof of the first part of Theorem \ref{thm-main}}
\label{sec-first-half}
\begin{definition}
\label{bad(S,S)}
Assume $P$ is a path with $n$ vertices, $L$ is a $4m$-list assignment for $P$, and $S$ is a $2m$-set of colours. We say $S$ is {\em bad with respect to $(L,P)$} if $dam_{L,P}(S,S) > S_L(P) - 2nm$.
\end{definition}
\begin{lemma}
\label{half-bad}
Assume $P$ is a path with an odd number of vertices, $L$ is a $4m$-list assignment on $P$, and $W$ is a set of $4m$ colours. Then $W$ has less than $\frac{1}{2}\binom{4m}{2m}$ bad $2m$-subsets with respect to $(L,P)$.
\end{lemma}
\begin{proof}
Let $\Lambda,\hat{X}_1,\hat{X}_n$ be calculated for $P$ as in Definition \ref{def-slpAX1Xn}. Let $X=\hat{X}_1\cap \hat{X}_n\cap W $, $Y=[(\hat{X}_1\Delta\hat{X}_n)\cup \Lambda]\cap W$ ($\Delta$ means symmetric difference) and $Z=W-Y-X$.
Assume $S$ is a bad subset of $W$ with respect to $(L,P)$.
Let $A=S\cap X$, $B=S\cap Y$ and $C= S\cap Z$, see Figure \ref{W&S}. Let $|X|=x$, $|Y|=y$, $|Z|=z$, $|A|=a$, $|B|=b$ and $|C|=c$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[>=latex]
\draw [line width =0.8pt](1.4,2)ellipse(1.4 and 2);
\draw [line width =0.8pt](0.2,1)--(2.6,1);
\draw [line width =0.8pt](0.2,3)--(2.6,3);
\draw [line width =0.8pt,dashed](0.1,1.67)--(2.7,1.67);
\draw [line width =0.8pt,dashed](0.1,2.33)--(2.7,2.33);
\filldraw [line width =0.8pt, fill opacity=0](0.8,3)--(0.8,3.6)--(2,3.6)--(2,3);
\filldraw [line width =0.8pt,draw opacity=0, fill opacity=0.1](0.8,3)--(0.8,1)--(2,1)--(2,3)--(0.8,3);
\filldraw [line width =0.8pt, fill opacity=0](0.8,1)--(0.8,0.4)--(2,0.4)--(2,1);
\filldraw [line width =0.8pt,draw opacity=1, fill opacity=0](0.8,0.4)--(0.8,3.6)--(2,3.6)--(2,0.4)--(0.8,0.4);
\draw [line width =1pt,->](2.2,3.4)--(3.2,3.4);
\draw [line width =1pt,->](2,3.6)--(2.5,4);
\draw [line width =1pt,->](2.5,2.6)--(3.2,2.6);
\draw [line width =1pt,->](2.5,2)--(3.2,2);
\draw [line width =1pt,->](2.5,1.4)--(3.2,1.4);
\draw [line width =1pt,->](1.2,4)--(1.6,4.4);
\node at (4.6,3.4){$(\hat{X}_1\cap\hat{X}_n)\cap W$};
\node at (4.6,2.6){$(\hat{X}_1-\hat{X}_n)\cap W$};
\node at (4,2){$\Lambda\cap W$};
\node at (4.6,1.4){$(\hat{X}_n-\hat{X}_1)\cap W$};
\node at (-0.5,3.4){$X$};
\node at (-0.5,2){$Y$};
\node at (-0.5,0.6){$Z$};
\node at (1.4,3.3){$A$};
\node at (1.4,2){$B$};
\node at (1.4,0.7){$C$};
\node at (1.7,4.6){$W$};
\node at (2.7,4.2){$S$};
\end{tikzpicture}
\caption{$W, X,Y,Z$ and $S, A, B,C$}
\label{W&S}
\end{figure}
Since $W$ is the disjoint union of $X, Y,Z$ and $S$ is the disjoint union of $A,B,C$, we have
\begin{eqnarray*}
x+y+z=4m, \ \ a+b+c =2m.
\end{eqnarray*}
As
\begin{equation*}
\begin{array}{lll}
|\hat{X}_1|+|\hat{X}_n|+|\Lambda| & = & |\hat{X}_1 \cup \hat{X}_n|+|\hat{X}_1 \cap \hat{X}_n|+|\Lambda|\\
& \geq & |(\hat{X}_1 \cup \hat{X}_n)\cap W|+|(\hat{X}_1\cap \hat{X}_n)\cap W|+|\Lambda \cap W|\\
& = & |(\hat{X}_1 \Delta\hat{X}_n)\cap W|+2|(\hat{X}_1\cap \hat{X}_n)\cap W|+|\Lambda \cap W|\\
& = & 2x+ y,
\end{array}
\end{equation*}
by Lemma \ref{first lower bound for SLP}, we have $S_{L}(P) \geq \max\{ 2x+ y+2(n-1)m, 2(n+1)m\}$.
By Lemma \ref{slp-dam}, $$dam_{L,P}(S,S) = |\hat{X}_1\cap S|+|\hat{X}_n\cap S|+|\Lambda \cap S|
=|(\hat{X}_1 \Delta\hat{X}_n)\cap S|+2| (\hat{X}_1 \cap \hat{X}_n ) \cap S|+|\Lambda \cap S|=2a+b.$$
As $
dam_{L,P}(S,S) > S_{L}(P) -2nm$, we conclude that
\begin{equation*}
\label{2b+c geq max}
2a+b \geq \max\{ 2x+ y-2m+1, 2m+1\}.
\end{equation*}
Thus the number of bad $2m$-subsets of $W$ with respect to $(L,P)$ is at most
\begin{equation*}
F(x,y)= \sum \binom{x}{a}\binom{y}{b}\binom{4m-x-y}{2m-a-b},
\end{equation*}
where the summation is over all pairs of integers $(a,b)$ with $0 \leq a \leq x$, $0 \leq b \leq y$, $a+b \leq 2m$ and
$2a+b \geq \max\{2x+y-2m+1,2m+1\}$. By the special case with $\ell =4m$ and $\tau=0$ of Lemma \ref{main-lemma}, $F(x,y) < \frac{1}{2}\binom{4m}{2m}$.
This completes the proof of Lemma \ref{half-bad}.
\end{proof}
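For instance, when $m=1$ the lemma asserts that fewer than $3$ of the $\binom42=6$ two-element subsets of $W$ are bad with respect to $(L,P)$. Consequently, given two such paths, at most $4$ of the $6$ subsets are bad for at least one of them, so a subset that is bad for neither always exists; this counting is used repeatedly below.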
If $G$ consists of two even cycles intersecting at a single vertex $v$, then $G-v$ consists of two paths $P_1$ and $P_2$, each with an odd number of vertices. It follows from Lemma \ref{half-bad} that there is a $2m$-subset $S$ of $L(v)$ which is bad with respect to neither $(L,P_1)$ nor $(L,P_2)$. Thus we can colour $v$ by $S$ and extend this colouring to an $(L, 2m)$-colouring of $G$.
Assume $G$ consists of two even cycles $E$ and $F$ joined by a path $Q$, and let $u,v$ be the end vertices of $Q$, as shown in
Figure \ref{decomposing}.
\begin{figure}[H]{}
\centering
\begin{tikzpicture}
\draw [line width =0.8pt](1,1)circle (1);
\filldraw [line width =0.8pt](2,1)circle (0.05)--(4,1)circle (0.05);
\filldraw [line width =0.8pt](1.85,1.3)--(2.05,1.4);
\filldraw [line width =0.8pt](1.85,0.7)--(2.05,0.6);
\filldraw [line width =0.8pt](4.15,0.7)--(3.95,0.6);
\filldraw [line width =0.8pt](4.15,1.3)--(3.95,1.4);
\draw [line width =0.8pt](5,1)circle (1);
\node at (3,1.3){$Q$};
\node at (1,1){$E$};
\node at (5,1){$F$};
\node at (1.7,1){$u$};
\node at (4.3,1){$v$};
\node at (-0.3,1){$P$};
\node at (6.3,1){$R$};
\end{tikzpicture}
\caption{Decomposing $G$ into $P,Q,R$.}
\label{decomposing}
\end{figure}
Observe that there is an injective function $h:\binom{L(u)}{2m} \rightarrow \binom{L(v)}{2m}$ such that for all $S \in \binom{L(u)}{2m}$, the precolouring $\phi(u)=S$, $\phi(v)=h(S)$ extends to all of $Q$. Indeed, if $Q$ consists of a single vertex $v$, then $u=v$ and $h(S)=S$.
Otherwise, for each $S \in \binom{L(u)}{2m}$, let $\phi(u) = S$ and extend $\phi$ to a $2m$-fold $L$-colouring of $Q$; we simply let $h(S) = \phi(v)$.
By Lemma \ref{half-bad}, $L(u)$ has less than $\frac{1}{2}\binom{4m}{2m}$ bad $2m$-subsets with respect to $(L,P)$, and $L(v)$ has less than $\frac{1}{2}\binom{4m}{2m}$ bad subsets of size $2m$ with respect to $(L,R)$. So there exists some $S$ such that $S$ is not bad with respect to $(L,P)$ and $h(S)$ is not bad with respect to $(L,R)$. Therefore the pre-colouring $\phi$ of $u,v$ defined as $\phi(u)=S$ and $\phi(v)=h(S)$ extends to an $(L,2m)$-colouring of $G$. This completes the first half of Theorem \ref{thm-main}.
\qed
\subsection{Proof of the second part of Theorem \ref{thm-main}}
\label{sec-first-second}
The following lemma was proved in \cite{4choose2}.
\begin{lemma}[\cite{4choose2}]
Let $G$ be a graph, let $v \in V(G)$, and let $G'$ be obtained from G by deleting $v$ and merging its neighbours. If $G$ is $(4m,2m)$-choosable, then $G'$ is $(4m,2m)$-choosable.
\end{lemma}
\begin{corollary}
If $\Theta_{2,2r,2s}$ is $(4m,2m)$-choosable, then $\Theta_{1,2r-1,2s-1}$ is $(4m,2m)$-choosable.
\end{corollary}
So it suffices to show that $\Theta_{2,2r,2s}$ is $(4m,2m)$-choosable, where $r, s \ge 2$. Instead of proving it directly, we prove the following stronger result.
\begin{theorem}
\label{main-second-part-stronger}
Assume $G=\Theta_{2,2r,2s}$, where $r, s \geq 2$. Let $u,v$ be the two vertices of degree $3$. Let $P^0,P^1,P^2$ be the three paths of $G-\{u,v\}$.
Assume $P^i=(v^i_1,v^i_2,\ldots,v^i_{n_i} )$, $|V(P^0)|=1$, $v^i_1$ is adjacent to $u$ and $v^i_{n_i}$ is adjacent to $v$.
Assume $\ell, \tau$ are non-negative even integers and $L$ is a list assignment for $G$ satisfying the following:
\begin{enumerate}[label= {(C\arabic*)}]
\item \label{C1} $\tau \leq 2m$ and $\ell +\tau \geq 2m$.
\item \label{C2} $|L(u)|=|L(v)| = \ell \ge 0$.
\item \label{C3} For each $i \in \{0,1,2\}$, $|L(v^i_1)|,|L(v^i_{n_i})|\geq 4m-\tau$.
\item \label{C4} $|L(w)|= 4m$ for $w \ne u,v, v^i_1, v^i_{n_i}$.
\item \label{C5} For $i=0,1,2$, $S_L(P^i) \geq 2m|V(P^i)|+2m-\tau$, and $$dam_{L,P^i}(L(u),L(v)) \leq S_L(P^i)-2m|V(P^i)|+\ell-2m+\tau.$$
\end{enumerate}
Then there exists a set $S \subset L(u)$ and a set $T \subset L(v)$ satisfying $|S|=|T|=2m-\tau$ such that for each $i$, $$dam_{L,P^i}(S,T) \leq S_L(P^i)-2m|V(P^i)|.$$
\end{theorem}
The second part of Theorem \ref{thm-main} follows from Theorem \ref{main-second-part-stronger} by setting $\ell =4m$ and $\tau=0$. Indeed, by Lemma \ref{first lower bound for SLP}, $S_{L}(P^i) \geq 2m|V(P^i)|+2m$, and $S_{L}(P^i) -2m|V(P^i)|+2m \geq |\hat{X}^i_1|+|\hat{X}^i_n|+|\Lambda^i| \geq dam_{L,P^i}(L(u),L(v))$ (the last inequality holds by Lemma \ref{slp-dam}). So there exist two sets $S \subset L(u)$, $T \subset L(v)$ such that $|S|=|T|=2m$ and $dam_{L,P^i}(S,T) \leq S_L(P^i) -2m|V(P^i)|$ for each $i$, which implies that $G$ is $(4m,2m)$-choosable.
Let $L(u)=\{c_1, c_2, \ldots, c_{\ell} \}$ and $L(v)=\{c'_1,c'_2, \ldots, c'_{\ell}\}$ be indexed in such a way that $c_j=c'_j$ whenever $c_j \in L(u)\cap L(v)$. In other words, $\{c_i,c'_i\}\cap \{c_j,c'_j\}=\emptyset$ whenever $i\neq j$.
\begin{definition}
\label{bad simple pair}
For a fixed indexing of $L(u)$ and $L(v)$, a {\rm couple} is a tuple of the form $(c_j,c'_j)$ for $j\in\{1,2,\ldots,\ell \}$. When we write a couple, we suppress the parentheses and simply write $c_jc'_j$. A {\rm pair} is a tuple $(S,T)$ with $S\subset L(u)$, $T\subset L(v)$, and $|S|=|T|$, and we define the \emph{size} of a pair as $|S|$.
A pair $(S,T)$ is {\rm bad with respect to $(L,P)$} if $dam_{L,P}(S,T) > S_{L}(P) - 2m|V(P)|$.
{A {\rm simple pair} is a pair $(S,T)$ such that $S,T$ have the same index set.}
\end{definition}
By Lemma \ref{slp-dam}, we know that if $(S,T)$ is a simple pair, then
\begin{equation}
\label{comput for dam(S,T)}
dam_{L,P}(S,T)=\sum_{c_j\in S}dam_{L,P}(\{c_j\},\{c'_j\}).
\end{equation}
In the following, we may write $dam_{L,P}(c,c')$ for $dam_{L,P}(\{c\},\{c'\})$.
The following observation follows from Lemma \ref{slp-dam}.
\begin{observation}
\label{obs-couple}
For any couple $cc'$, the following hold:
\begin{enumerate}
\item $dam_{L,P}(c,c') = 2$ if $c \in \hat{X}_1 \cup \Lambda$ and $c' \in \hat{X}_n \cup \Lambda$, except when $c = c' \in \Lambda$;
\item $dam_{L,P}(c,c') = 1$ if exactly one of $c \in \hat{X}_1 \cup \Lambda$ and $c' \in \hat{X}_n \cup \Lambda$ holds, and also when $c = c' \in \Lambda$;
\item $dam_{L,P}(c,c') = 0$ if $c \notin \hat{X}_1 \cup \Lambda$ and $c' \notin \hat{X}_n \cup \Lambda$.
\end{enumerate}
\end{observation}
\begin{definition}
\label{safe-light-heavy}
Assume $c_jc'_j$ is a couple.
\begin{itemize}
\item $c_jc'_j$ is {\rm heavy} for the internal path $P$ if $dam_{L,P}(c_j, c'_j)=2$;
\item $c_jc'_j$ is {\rm light} for the internal path $P$ if $dam_{L,P}(c_j, c'_j)=1$;
\item $c_jc'_j$ is {\rm safe} for the internal path $P$ if $dam_{L,P}(c_j, c'_j)=0$.
\end{itemize}
\end{definition}
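These notions can be illustrated on the $3$-vertex path considered after Lemma \ref{42lemma}, for which $\hat{X}_1=\{1,2\}$, $\hat{X}_n=\{7,8\}$ and $\Lambda=\emptyset$: if that path occurred as an internal path $P$, then by Observation \ref{obs-couple} a couple $cc'$ with $(c,c')=(1,7)$ would be heavy for $P$, one with $(c,c')=(1,5)$ would be light, and one with $(c,c')=(3,5)$ would be safe.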
For each $i\in \{0,1,2\}$, let $x^{(i)},y^{(i)},z^{(i)}$ denote the numbers of heavy, light and safe couples for $P^{i}$, respectively. For a simple pair $(S,T)$ of size $2m-\tau$, let $a^{(i)}(S,T),b^{(i)}(S,T),c^{(i)}(S,T)$ denote the numbers of heavy, light and safe couples for $P^{i}$ in $(S,T)$, respectively.
It follows from the definition that $x^{(i)}+y^{(i)}+z^{(i)}=\ell$, and $a^{(i)}(S,T)+b^{(i)}(S,T)+c^{(i)}(S,T)=2m-\tau$. Thus by Equality (\ref{comput for dam(S,T)}), $dam_{L,P^i}(S,T) = 2a^{(i)}(S,T)+b^{(i)}(S,T)$. Let $\beta(P^{i})$ denote the number of bad simple pairs of size $2m-\tau$ with respect to $(L, P^i)$. We write $\hat{X}^i_1, \hat{X}^i_n$ and $\Lambda^i$ for the sets $\hat{X}_1, \hat{X}_n, \Lambda$ calculated for $P^i$.
\medskip
\noindent
{\bfseries Proof of Theorem \ref{main-second-part-stronger}:}
First we observe that the conclusion of
Theorem \ref{main-second-part-stronger} is equivalent to the statement that there exists a pair $(S,T)$ which is not a bad pair for any of the paths $P^0,P^1,P^2$.
The proof is by induction on $2\ell+\tau$. First assume that $2\ell + \tau =2m$. Since both $\ell$ and $\tau$ are non-negative, and $\ell +\tau \geq 2m$, we have that $\ell = 0$ and $\tau= 2m$. By assumption, for each $i \in \{0,1,2\}$,
$$S_L(P^i)-2m|V(P^i)| \geq dam_{L,P^i}(L(u),L(v))=0,$$ so $S_L(P^i) \geq 2m|V(P^i)|$. Then let $S=L(u)= \emptyset$, $T=L(v) =\emptyset$, and we are done. This finishes the base case of the induction.
Thus in the sequel, we assume that $2\ell+\tau \geq 2m+2$. If $\ell + \tau =2m$, then we let $S=L(u)$, $T=L(v)$, and we are done. Hence we assume that $\ell +\tau \geq 2m+2$. If $\tau =2m$, then the statement holds obviously, since we can just take $S=T=\emptyset$ and we are done. So we assume that $\tau \leq 2m-2$. Assume that Theorem \ref{main-second-part-stronger} is not true for $L$.
\begin{claim}
\label{all at least light}
There does not exist a simple pair $(D_u,D_v)$ such that $|D_u|=|D_v|=d \le \ell -2m+\tau$ is even, and $dam_{L,P^i}(D_u,D_v) \geq d$ for each $i \in \{0,1,2\}$.
\end{claim}
\begin{proof}
Assume $(D_u,D_v)$ is such a simple pair. Let $L'$ be a new list assignment for $G$ with $L'(u)= L(u)-D_u$, $L'(v)=L(v)-D_v$, $L'(w)=L(w)$ for $w \in V(G) \setminus \{u,v\}$.
\ref{C1}-\ref{C4} of Theorem \ref{main-second-part-stronger} are easily seen to be satisfied by $L'$, with $ \ell'= \ell - d$ and $\tau'=\tau$. As
\begin{equation*}
\begin{array}{ll}
dam_{L',P^i}(L'(u),L'(v)) & = dam_{L,P^i}(L(u),L(v))-dam_{L,P^i}(D_u,D_v) \\
& \leq S_L(P^i)-2m|V(P^i)|+\ell-2m+\tau-d \\
& = S_{L'}(P^i)-2m|V(P^i)| + \ell'+ \tau'-2m,
\end{array}
\end{equation*}
\ref{C5} is also satisfied by $L'$.
By induction applied to $L'$, there exists a pair $(S,T)$ with $|S|=|T|=2m-\tau$ such that for each $i \in \{0,1,2\}$, $dam_{L',P^i}(S,T) \leq S_{L'}(P^i)-2m|V(P^i)|$. Since $L'$ agrees with $L$ on each $P^i$, we have $S_{L'}(P^i)=S_L(P^i)$ and $dam_{L',P^i}(S,T)=dam_{L,P^i}(S,T)$, so $$dam_{L,P^i}(S,T) \leq S_L(P^i)-2m|V(P^i)|,$$ contradicting our assumption. This completes the proof of the claim.
\end{proof}
\begin{claim}
\label{all at most light}
There does not exist a simple pair $(D_u,D_v)$ such that $|D_u|=|D_v|=d \le 2m-\tau$ is even, and $dam_{L,P^i}(D_u,D_v) \leq d$ for each $i \in \{0,1,2\}$.
\end{claim}
\begin{proof}
Assume $(D_u,D_v)$ is such a simple pair. Let $L'$ be a new list assignment for $G$ with $L'(u)=L(u)-D_u$, $L'(v)=L(v)-D_v$, for $i=1,2$, $L'(v^i_1)= L(v^i_1)-D_u$, $L'(v^i_{n_i})=L(v^i_{n_i})-D_v$, $L'(v^i_j)=L(v^i_j)$ for $1 < j < n_i$, and $L'(v^0_1)=L(v^0_1)-(D_u \cup D_v)$.
Note that $dam_{L,P^0}(D_u,D_v) \leq d$ implies that $|L(v^0_1)-(D_u \cup D_v)| \ge |L(v^0_1)|-d$. So
\ref{C1}-\ref{C4} of Theorem \ref{main-second-part-stronger} are satisfied by $L'$, with $ \ell'= \ell - d$ and $\tau'=\tau+d$. As
$S_{L}(P^i) = S_{L'}(P^i) + dam_{L,P^i}(D_u,D_v)$, we have $S_{L'}(P^i) = S_{L}(P^i)- dam_{L,P^i}(D_u,D_v) \geq 2m|V(P^i)|+2m-\tau-d$, and also by the second part of \ref{C5},
it follows that
\begin{equation*}
\begin{array}{ll}
dam_{L',P^i}(L'(u),L'(v)) & \le dam_{L,P^i}(L(u),L(v))-d \\
& \leq S_L(P^i)-2m|V(P^i)|+\ell-2m+\tau-d \\
& = S_{L'}(P^i)+d-2m|V(P^i)|+\ell-2m+\tau-d \\
& = S_{L'}(P^i)-2m|V(P^i)|+(\ell-d)+(\tau+d)-2m.
\end{array}
\end{equation*}
So \ref{C5} is also satisfied by $L'$.
By induction, there exists a pair $(S',T')$, where $|S'|=|T'|=2m-\tau' = 2m-\tau-d$ such that for every $i$, $$dam_{L',P^i}(S',T') \leq S_{L'}(P^i)-2m|V(P^i)|.$$ Let $S=S' \cup D_u$ and $T=T' \cup D_v$. We have $|S|=|T|=2m-\tau$ and
\begin{equation*}
\begin{array}{ll}
dam_{L,P^i}(S,T) & = dam_{L,P^i}(D_u,D_v)+dam_{L,P^i}(S',T') \\
& \leq dam_{L,P^i}(D_u,D_v)+ S_{L'}(P^i)-2m|V(P^i)|\\
& = S_{L}(P^i)-2m|V(P^i)|.
\end{array}
\end{equation*}
This completes the proof of Claim \ref{all at most light}.
\end{proof}
The following claim gives a necessary condition for a simple pair of size $2m-\tau$ being bad with respect to $(L,P^i)$.
\begin{claim}
\label{claim-bad simple pair}
If $(S,T)$ is a bad simple pair of size $2m-\tau$ with respect to $(L, P^i)$, then
$dam_{L,P^i}(S,T) = 2a^{(i)}(S,T)+b^{(i)}(S,T) \geq \max\{2x^{(i)}+y^{(i)}+2m+1-\ell -\tau, 2m+1-\tau\}$.
\end{claim}
\begin{proof}
By Equality (\ref{comput for dam(S,T)}), $dam_{L,P^i}(L(u),L(v))= 2x^{(i)}+y^{(i)}$. By \ref{C5}, $$S_L(P^i) \geq \max\{2m|V(P^i)|+2x^{(i)}+y^{(i)}+2m-\ell-\tau, 2m|V(P^i)|+ 2m-\tau\}.$$ If $(S,T)$ is a bad simple pair of size $2m-\tau$ with respect to $(L, P^i)$, then by Definition \ref{bad simple pair} and the above inequality,
\begin{equation*}
\begin{array}{ll}
dam_{L,P^i}(S,T) & = 2a^{(i)}(S,T)+b^{(i)}(S,T) \\
& \geq S_L(P^i) -2m|V(P^i)| + 1 \\
& \ge \max\{2x^{(i)}+y^{(i)} +2m+1-\ell-\tau, 2m+1-\tau\}.
\end{array}
\end{equation*}
This proves the claim.
\end{proof}
The following claim gives an upper bound and a lower bound on the number of bad simple pairs of size $2m-\tau$ with respect to $(L,P^i)$.
\begin{claim}
\label{second case half lemma}
For each $i \in \{0,1,2\}$, $2 \le \beta(P^i) \leq \frac{1}{2}\binom{\ell}{2m-\tau}-1$.
\end{claim}
\begin{proof}
If a simple pair $(S,T)$ of size $2m-\tau$ is bad with respect to $(L,P^i)$, then by Claim \ref{claim-bad simple pair}, $dam_{L,P^i}(S,T) \geq \max\{2x^{(i)}+y^{(i)} +2m+1-\ell-\tau, 2m+1-\tau\}$. Since $a^{(i)}(S,T) + b^{(i)}(S,T) +c^{(i)}(S,T) =2m-\tau$, it follows from Lemma \ref{main-lemma} that $\beta(P^i) \leq \frac{1}{2}\binom{\ell}{2m-\tau}-1$.
If $\beta(P^i) \le 1$ for some $i$, then $\beta(P^0)+\beta(P^1)+\beta(P^2) \le \binom{\ell}{2m-\tau}-1$. Since there are $\binom{\ell}{2m-\tau}$ simple pairs of size $2m-\tau$, there exists such a pair which is not bad with respect to any $(L,P^i)$, contradicting our assumption.
\end{proof}
\begin{claim}
\label{2x+yupper bound and xz at least one}
For each $i \in \{0,1,2\}$, $2x^{(i)}+y^{(i)} \leq \ell + 2m-\tau-1$, and $x^{(i)},z^{(i)} \geq 1$.
\end{claim}
\begin{proof}
If $S_{L}(P^i) \geq 2m|V(P^i)|+2(2m-\tau)$, then for any simple pair $(S,T)$ of size $2m-\tau$,
$S_L(P^i) - dam_{L,P^i}(S,T) \ge 2m|V(P^i)|$ (as $dam_{L,P^{i}}(S,T) = 2a^{(i)}(S,T)+b^{(i)}(S,T) \leq 2(2m-\tau)$), and hence $(S,T)$ is not bad with respect to $(L,P^i)$, which means that $\beta(P^i)=0$, a contradiction. Thus we may assume that $S_{L}(P^i) \leq 2m|V(P^i)|+4m-1-2\tau$. By \ref{C5},
\begin{equation*}
\label{2x+y-upperbound-6m-1}
2x^{(i)}+y^{(i)} =dam_{L,P^i} (L(u),L(v)) \leq S_{L}(P^i) - 2m|V(P^i)|+ \ell+\tau - 2m \leq \ell+2m-1-\tau.
\end{equation*}
Assume $x^{(i)}=0$ for some $i \in \{0,1,2\}$, say $x^{(0)}=0$. Then for every simple pair $(S,T)$ of size $2m-\tau$, $dam_{L,P^0}(S,T) \leq 2m-\tau$. As $S_L(P^0)-2m|V(P^0)| \geq 2m-\tau$ (by \ref{C5}), $(S,T)$ is not bad with respect to $(L,P^0)$, so $\beta(P^0)=0$, contrary to Claim \ref{second case half lemma}. Thus $x^{(i)} \geq 1$.
Assume $z^{(i)}=0$ for some $i$, say $z^{(0)}=0$. Then $x^{(0)}+y^{(0)}=\ell$, and for any bad simple pair $(S,T)$ of size $2m-\tau$ with respect to $(L,P^0)$, $a^{(0)}(S,T)+b^{(0)}(S,T)=2m-\tau$. By Claim \ref{claim-bad simple pair}, we have
\begin{equation*}
\begin{array}{ll}
2a^{(0)}(S,T)+b^{(0)}(S,T) & = a^{(0)}(S,T) +2m-\tau \\
& \geq 2x^{(0)}+y^{(0)} +2m+1-\ell-\tau \\
& = x^{(0)} + \ell +2m+1-\ell-\tau \\
& = x^{(0)}+1 +2m-\tau.
\end{array}
\end{equation*}
This implies that $a^{(0)}(S,T) \geq x^{(0)}+1$, contrary to the fact that $a^{(0)}(S,T) \leq x^{(0)}$. Hence there is no bad simple pair of size $2m-\tau$ with respect to $(L,P^0)$, i.e., $\beta(P^0)=0$, contrary to Claim \ref{second case half lemma}. Thus $z^{(i)} \geq 1$.
\end{proof}
\begin{claim}
\label{evey couple is HSL}
Every couple is heavy (respectively, safe) for at most one internal path.
There is at most one couple which is light for all internal paths. If there exists a couple which is light for at least two internal paths, then it is light for all internal paths.
\end{claim}
\begin{proof}
Assume to the contrary that $c_jc'_j$ is heavy for two paths, say for both $P^0$ and $P^1$. If $c_jc'_j$ is also heavy for $P^2$, then for any other couple $c_kc'_k$, the pair $(\{c_j,c_k\},\{c'_j,c'_k\})$ is a simple pair of size $2$ contradicting Claim \ref{all at least light}. Thus $c_jc'_j$ is not heavy for $P^2$. Since $x^{(2)} \geq 1$, there exists a couple $c_kc'_k$ which is heavy for $P^2$. It follows that $(\{c_j,c_k\},\{c'_j,c'_k\})$ is a simple pair of size $2$ contradicting Claim \ref{all at least light}.
Similarly, we can prove that no couple is safe for at least two internal paths.
If there are two couples which are light for all internal paths, then these two couples comprise a simple pair of size $2$ which contradicts Claim \ref{all at most light}.
Assume the last sentence of this claim is not true, say $c_jc'_j$ is light for $P^0$ and $P^1$ but not light for $P^2$. Note that by Claim \ref{2x+yupper bound and xz at least one}, $z^{(2)} \geq 1$, so if $c_jc'_j$ is heavy for $P^2$, then there exists a distinct couple $c_kc'_k$ which is safe for $P^2$. By the first part of this claim, $c_kc'_k$ is safe for neither $P^0$ nor $P^1$. Then $(\{c_j,c_k\},\{c'_j,c'_k\})$ is a simple pair of size $2$ contradicting Claim \ref{all at least light}. If $c_jc'_j$ is safe for $P^2$, then since $x^{(2)} \geq 1$ (by Claim \ref{2x+yupper bound and xz at least one}), there exists a distinct couple $c_kc'_k$ which is heavy for $P^2$. By the first part of this claim, $c_kc'_k$ is heavy for neither $P^0$ nor $P^1$. Then $(\{c_j,c_k\},\{c'_j,c'_k\})$ is a simple pair of size $2$ contradicting Claim \ref{all at most light}.
This completes the proof of Claim \ref{evey couple is HSL}.
\end{proof}
\medskip
Without loss of generality, we may assume that $c_0c'_0$ is heavy for $P^0$, light for $P^1$ and safe for $P^2$.
\begin{claim}
\label{LuLv-HSL}
For any couple $c_jc'_j$,
\begin{itemize}
\item if it is heavy for $P^0$, then it is light for $P^1$, safe for $P^2$;
\item if it is light for $P^0$, then it is safe for $P^1$, heavy for $P^2$;
\item if it is safe for $P^0$, then it is heavy for $P^1$, light for $P^2$.
\end{itemize}
Consequently, $x^{(0)}=y^{(1)}=z^{(2)}$, $y^{(0)}=z^{(1)}=x^{(2)}$ and $z^{(0)}=x^{(1)}=y^{(2)}$.
\end{claim}
\begin{proof}
If $c_jc'_j$ is safe for $P^0$,
then $c_jc'_j$ is light for $P^2$, for otherwise, by Claim \ref{evey couple is HSL}, $c_jc'_j$ is heavy for $P^2$ and light for $P^1$. Then $(\{c_0,c_j\},\{c'_0,c'_j\})$ is a simple pair of size $2$ which contradicts Claim \ref{all at most light}. By Claim \ref{evey couple is HSL},
$c_jc'_j$ is heavy for $P^1$.
By Claim \ref{2x+yupper bound and xz at least one}, $z^{(0)} \geq 1$, there is a couple $c_ic'_i$, which is safe for $P^0$. Hence $c_ic'_i$ is light for $P^2$ and {heavy} for $P^1$.
Also by Claim \ref{2x+yupper bound and xz at least one}, $z^{(1)} \geq 1$, there exists a couple $c_kc'_k$ which is safe for $P^1$.
Then $c_kc'_k$ is light for $P^0$, for otherwise, by Claim \ref{evey couple is HSL}, $c_kc'_k$ is heavy for $P^0$ and light for $P^2$. Then $(\{c_i,c_k\},\{c'_i,c'_k\})$ is a simple pair of size $2$ which contradicts Claim \ref{all at most light}.
By Claim \ref{evey couple is HSL},
$c_kc'_k$ is heavy for $P^2$.
If $c_jc'_j$ is heavy for $P^0$, then it is light for $P^1$, for otherwise it is safe for $P^1$ and light for $P^2$, but then $(\{c_i,c_j\},\{c'_i,c'_j\})$ is a simple pair of size $2$ which contradicts Claim \ref{all at most light}.
Next we show that no couple is light for all internal paths.
Assume that there exists a couple $cc'$ which is light for all internal paths. If $\tau \leq 2m-4$, then $2m-\tau \geq 4$, so $(\{c_0,c_i,c_k,c\},\{c'_0,c'_i,c'_k,c'\})$ is a simple pair of size $4$ contradicting Claim \ref{all at most light}.
Assume $\tau =2m-2$. Recall that $\ell+\tau \geq 2m+2$, so if $\ell \neq 4$, then $\ell \geq 6$ and $\ell-2m+\tau \geq 4$. Thus $(\{c_0,c_i,c_k,c\},\{c'_0,c'_i,c'_k,c'\})$ is a simple pair of size $4$ which contradicts Claim \ref{all at least light}.
Assume $\tau =2m-2$ and $\ell=4$. Since $|V(P^0)|=1$, we know that $\hat{X}^0_1=\hat{X}^0_n=\emptyset$. By Observation \ref{obs-couple}, $c_0,c'_0 \in \Lambda^0$ and $c_0 \neq c'_0$. As $c_0c'_0$ is light for $P^1$, by Observation \ref{obs-couple}, $c_0 \notin \hat{X}^1_1 \cup \Lambda^1$ or $c'_0 \notin \hat{X}^1_n \cup \Lambda^1$. By symmetry, we may assume that $c_0 \notin \hat{X}^1_1 \cup \Lambda^1$. Then for $S=\{c_0,c\}$ and $T=\{c'_i,c'\}$, the conclusion of Theorem \ref{main-second-part-stronger} holds, a contradiction.
If $c_jc'_j$ is light for $P^0$, then since $c_jc'_j$ is not light for all internal paths, it must be safe for $P^1$: otherwise
$c_jc'_j$ is heavy for $P^1$, and $(\{c_k,c_j\},\{c'_k,c'_j\})$ is a simple pair of size $2$ contradicting Claim \ref{all at most light}. This implies that $c_jc'_j$ is heavy for $P^2$.
\end{proof}
\begin{claim}
For each $i \in \{0,1,2\}$, $x^{(i)},y^{(i)},z^{(i)} \geq2$.
\end{claim}
\begin{proof}
Assume to the contrary that the claim fails. By Claim \ref{LuLv-HSL}, each of the nine quantities equals $x^{(i)}$ for some $i$, and each is at least $1$ by Claim \ref{2x+yupper bound and xz at least one} together with Claim \ref{LuLv-HSL}. Since the relations in Claim \ref{LuLv-HSL} are invariant under cyclically relabelling $P^0,P^1,P^2$, we may assume that $x^{(0)}=1$.
As $x^{(0)}+y^{(0)}+z^{(0)} = \ell$, we have
$z^{(0)}=\ell -y^{(0)}-1$.
By Claim \ref{2x+yupper bound and xz at least one} and Claim \ref{LuLv-HSL}, $2y^{(0)}+z^{(0)}=2x^{(2)}+y^{(2)} \leq \ell +2m-\tau-1$. So $2y^{(0)}+(\ell -1 -y^{(0)}) \leq \ell +2m-\tau -1$, which implies that $y^{(0)} \leq 2m-\tau$.
If $y^{(0)} \leq 2m-\tau- 2$, then for any simple pair $(S,T)$,
$$2a^{(0)}(S,T)+b^{(0)}(S,T)\leq 2x^{(0)} +y^{(0)} \le 2 + 2m-\tau-2 = 2m-\tau.$$
This implies that $(S,T)$ is not bad with respect to $(L,P^0)$. Hence $\beta(P^0)=0$, contrary to Claim \ref{second case half lemma}. So we have $y^{(0)} \geq 2m-\tau-1$, thus $y^{(0)}= 2m-\tau-1$ or $y^{(0)}=2m-\tau$.
If $y^{(0)} = 2m-\tau-1$, then a bad simple pair with respect to $(L,P^0)$ must consist of the unique couple which is heavy for $P^0$ together with all $2m-\tau-1$ couples which are light for $P^0$. So
$\beta(P^0)=1$, contrary to Claim \ref{second case half lemma}.
Assume $y^{(0)} =2m-\tau$. Suppose $(S,T)$ is a bad simple pair with respect to $(L,P^2)$. By Claim \ref{claim-bad simple pair}, $$2a^{(2)}(S,T)+b^{(2)}(S,T) \geq \max\{2x^{(2)}+y^{(2)}+2m-\tau+1-\ell,2m-\tau+1\}.$$ By Claim \ref{LuLv-HSL}, $x^{(2)}=y^{(0)}$ and $y^{(2)}=z^{(0)}=\ell-x^{(0)}-y^{(0)}=\ell-1-y^{(0)}$. Hence, \begin{equation}
\label{claim8-eq}
2a^{(2)}(S,T)+b^{(2)}(S,T) \geq y^{(0)}+2m-\tau = 4m-2\tau.
\end{equation}
As $a^{(2)}(S,T)+b^{(2)}(S,T)=2m-\tau-c^{(2)}(S,T) \leq 2m-\tau$, we have $2a^{(2)}(S,T)+b^{(2)}(S,T) \leq 2x^{(2)}+(2m-\tau-x^{(2)}) = x^{(2)}+2m-\tau= 4m-2\tau$. Together with Inequality (\ref{claim8-eq}), we have $2a^{(2)}(S,T)+b^{(2)}(S,T)=4m-2\tau$ and hence $a^{(2)}(S,T)=x^{(2)}=2m-\tau$, i.e., a bad simple pair with respect to $(L,P^2)$ consists of exactly the $2m-\tau$ couples which are heavy for $P^2$. So $\beta(P^2)=1$, contrary to Claim \ref{second case half lemma}.
\end{proof}
Without loss of generality, we assume that
\begin{itemize}
\item $c_0c'_0$ and $c_1c'_1$ are heavy for $P^0$ (and hence light for $P^1$ and safe for $P^2$ by Claim \ref{LuLv-HSL}).
\item $c_2c'_2$ and $c_3c'_3$ are light for $P^0$ (and hence safe for $P^1$ and heavy for $P^2$).
\item $c_4c'_4$ and $c_5c'_5$ are safe for $P^0$ (and hence heavy for $P^1$ and light for $P^2$).
\end{itemize}
As $|V(P^0)|=1$, we know that $\hat{X}^0_1=\hat{X}^0_n=\emptyset$.
By Observation \ref{obs-couple},
$c_0 \neq c'_0$ and $c_1 \neq c'_1$, and we may assume that $c_0 \notin \hat{X}^1_1 \cup \Lambda^1$.
Let $S_1=\{c_0,c_4\}$, $T_1=\{c'_2,c'_4\}$, and let $S_3=\{c_0,c_1,\ldots,c_5\}$, $T_3=\{c'_0,c'_1,\ldots,c'_5\}$. If $c_1\notin \hat{X}^1_1 \cup \Lambda^1$, then let $S_2=\{c_0,c_1,c_4,c_5\}$, $T_2=\{c'_2,c'_3,c'_4,c'_5\}$. Otherwise $c'_1\notin \hat{X}^1_n \cup \Lambda^1$ (again by Observation \ref{obs-couple}), and we let $S_2=\{c_0,c_2,c_4,c_5\}$, $T_2=\{c'_1,c'_3,c'_4,c'_5\}$.
Now we show that for each $i\in \{0,1,2\}$ and $j \in \{1,2,3\}$, $dam_{L,P^i}(S_j,T_j) \leq 2j$.
For each $i \in \{0,1,2\}$, among the six couples $c_0c'_0, \ldots, c_5c'_5$, two are light, two are safe and two are heavy for $P^i$. Therefore, $dam_{L,P^i}(S_3, T_3)=6$.
As $c_4c'_4$ is safe for $P^0$,
$dam_{L,P^0}({c}_4, {c'}_4)=0$. Hence
$dam_{L,P^0}(S_1, T_1)=dam_{L,P^0}(c_0, c'_2) \le 2$. As $c_0\notin \hat{X}^1_1 \cup \Lambda^1$
and $c_2c'_2$ is safe for $P^1$, we have
$dam_{L,P^1}(c_0, {c'}_2) =0$. Hence
$dam_{L,P^1}(S_1, T_1)=dam_{L,P^1}({c}_4, {c'}_4) = 2$. As $c_0c'_0$ is safe for $P^2$, $c_2c'_2$ is heavy for $P^2$ and $c_4c'_4$ is light for $P^2$, we have
$dam_{L,P^2}({c}_0, {c'}_2) =1$ and $dam_{L,P^2}({c}_4, {c'}_4) =1$. Thus
$dam_{L,P^2}(S_1, T_1)=2$.
Consider the case that $c_1\notin \hat{X}^1_1 \cup \Lambda^1$. As $c_4c'_4,c_5c'_5$ are safe for $P^0$, we conclude that
$dam_{L,P^0}(S_2, T_2)=dam_{L,P^0}({c}_0, {c'}_2)+
dam_{L,P^0}({c}_1, {c'}_3) \le 4$. As $c_0,c_1 \notin \hat{X}^1_1 \cup \Lambda^1$ and $c_2c'_2, c_3c'_3$ are safe for $P^1$, it follows that
$dam_{L,P^1}(S_2, T_2)=dam_{L,P^1}({c}_4, {c'}_4)+
dam_{L,P^1}({c}_5, {c'}_5) = 4$. As $c_0c'_0, c_1c'_1$ are safe for $P^2$, it follows that
$dam_{L,P^2}(c_0,c'_2)+ dam_{L,P^2}(c_1,c'_3) \le 2$. As $c_4c'_4, c_5c'_5$ are light for $P^2$, we have
$dam_{L,P^2}(S_2, T_2) \le 2+ dam_{L,P^2}({c}_4, {c'}_4)+
dam_{L,P^2}({c}_5, {c'}_5) = 4$.
The other case is verified similarly and the details are omitted.
If $\tau \leq 2m-6$, then $2m-\tau \geq 6$ and $(S_3,T_3)$ is a simple pair of size $6$ with $dam_{L,P^i}(S_3,T_3) \leq 6$ for each $i$, which contradicts Claim \ref{all at most light}. So we have that $\tau \in \{2m-4,2m-2\}$.
If $\tau =2m-4$, then we let $S=S_2$ and $T=T_2$ and if $\tau=2m-2$, then we let $S=S_1$ and $T=T_1$. In each case, $|S|=|T| = 2m-\tau$ and by the arguments above, $dam_{L, P^i}(S,T) \leq 2m-\tau$ for $i=0,1,2$.
By \ref{C5}, for $i=0,1,2$, $$dam_{L,P^i}(S,T) \leq 2m-\tau \leq S_{L}(P^i)-2m|V(P^i)|.$$
Thus $(S,T)$ satisfies the conclusion of the theorem, contradicting our assumption. This completes the proof of Theorem \ref{main-second-part-stronger}. \qed
\section{Proof of Lemma \ref{main-lemma}}
This section is devoted to the proof of Lemma \ref{main-lemma}, i.e., of the inequality
\begin{equation}
\label{orginal-def-lemma}
F(x,y)=\sum \binom{x}{a}\binom{y}{b}\binom{\ell-x-y}{2m-\tau-a-b} \le \frac 12 {\ell \choose 2m-\tau}-1,
\end{equation}
where $x+y \le \ell, 2x+y \le \ell+2m-\tau-1$, $m\geq 1$, $0 \leq \ell \leq 4m$, $0 \leq \tau \leq 2m-2$, $\ell +\tau \geq 2m+2$, $\ell$ and $\tau$ are both even,
and the summation is over non-negative integer pairs $(a,b)$ for which $0 \leq a \leq x$, $0 \leq b \leq y$, $a+b \le 2m-\tau$ and $2a+b \geq \max\{2x+y+2m-\tau+1-\ell, 2m-\tau+1\}$.
Note that $a+b \le 2m-\tau$ and $2a+b \geq 2m-\tau+1$ implies that $a \ge 1$.
In the sequel, we define
$$\binom{p}{q}_+=
\begin{cases}
\binom{p}{q} & \text{if $p \geq q \geq 0$,}\\
0 & \text{if $q<0$ or $p <q$.}
\end{cases}$$
For convenience, we allow $p<q$ or $q<0$ in the binomial coefficients in the summations below. It is easy to check that in these cases either the pair $(a,b)$ does not lie in the range of the summation, and hence contributes $0$ to the summation, or the identity $\binom{p+1}{q}=\binom{p}{q}+\binom{p}{q-1}$ remains valid with this convention. For readability, we suppress the subscript `$+$'.
First, we analyze the monotonicity of $F(x,y)$ in $y$ when $x$ is fixed, say $x=x_0$. For convenience, we let $2k = 2m-\tau$.
\begin{lemma}
\label{monotonous}
Assume $x=x_0$ is fixed.
\begin{itemize}
\item If $y \ge \ell-2x_0$, then $F(x_0,y+1)\leq F(x_0,y)$.
\item If $ y < \ell -2x_0$, then $F(x_0,y) \leq F(x_0,y+1)$.
\end{itemize}
\end{lemma}
\begin{proof}
In the following, let $z=\ell-x_0-y$.
\noindent
{\bf Case 1.} $\ell-2x_0 \leq y \leq \ell+ 2m-\tau-1-2x_0 = \ell+2k-1-2x_0$.
In this case, $\max \{2x_0+y+2m-\tau+1-\ell, 2m-\tau+1\} = 2x_0+y+2m-\tau+1-\ell =2x_0+y+2k+1-\ell$.
For brevity, let
$$
t(y) = 2x_0+y+1-\ell, \ \
r(y) = t(y)+2k.
$$
As $2a+b \geq r(y)$ and $a+b \leq 2m-\tau =2k$, we have $a \geq r(y)-2k =t(y)$ and $ r(y) - 2a \le b \le 2k-a$.
So
$$F(x_0,y)
=\sum\limits_{i=t(y)}^{x_0}\sum\limits_{j=r(y)-2i}^{2k-i}\binom{x_0}{i}\binom{y}{j}\binom{z}{2k-i-j}.
$$
Note that $r(y+1)=r(y)+1$ and $t(y+1) = t(y)+1 $. Let $\Delta = F(x_0,y) -F(x_0,y+1)$
and for $t(y+1) \le i \le x_0$, let
\begin{equation}
\label{eq-lem9-1}
\Delta_i=\sum\limits_{j=r(y)-2i}^{2k-i}\binom{y}{j}\binom{z}{2k-i-j}-\sum\limits_{j=r(y)+1-2i}^{2k-i}\binom{y+1}{j}\binom{z-1}{2k-i-j}.
\end{equation}
Since $2k-t(y)=r(y)-2t(y)$, we have
$$\Delta \geq \sum\limits_{j=r(y)-2t(y)}^{2k-t(y)}\binom{x_0}{t(y)}\binom{y}{j}\binom{z}{2k-t(y)-j}+ \sum\limits_{i=t(y+1)}^{x_0}\binom{x_0}{i}\Delta_i = \binom{x_0}{t(y)}\binom{y}{2k-t(y)}+\sum\limits_{i=t(y+1)}^{x_0}\binom{x_0}{i}\Delta_i.$$
Therefore, to prove that $F(x_0,y) \ge F(x_0,y+1)$, it suffices to prove that $\Delta_i \geq 0$ for $t(y)+1 \leq i \leq x_0$.
Using the equalities $\binom{z}{2k-i-j} = \binom{z-1}{2k-i-j} + \binom{z-1}{2k-i-j-1}$ (in the first sum of Equality (\ref{eq-lem9-1}))
and $\binom{y+1}{j} = \binom{y}{j}+\binom{y}{j-1}$ (in the second sum of Equality (\ref{eq-lem9-1})), and cancelling the term $ \sum\limits_{j=r(y)+1-2i}^{ 2k-i}\binom{y}{j}\binom{z-1}{2k-i-j}$, we have
$$
\Delta_i
= \sum\limits_{j=r(y)+1-2i}^{2k-i}\binom{y}{j}\binom{z-1}{ 2k-1-i-j}-\sum\limits_{j=r(y)+1-2i}^{ 2k-i}\binom{y}{j-1}\binom{z-1}{ 2k-i-j}+\binom{y}{r(y)-2i}\binom{z}{ 2k+i-r(y)}.$$
When $j=2k-i$ in the first sum, we have $\binom{z-1}{-1}=0$. Writing the second sum in the equality as
$\sum\limits_{j=r(y)-2i}^{ 2k-1-i}\binom{y}{j}\binom{z-1}{ 2k-1-i-j}$, we have
$$ \Delta_i = \binom{y}{r(y)-2i}\big[-\binom{z-1}{ 2k+i-r(y)-1} + \binom{z}{2k+i-r(y)}\big]= \binom{y}{r(y)-2i}\binom{z-1}{ 2k+i-r(y)} \geq 0. $$
\bigskip
\noindent
{\bf Case 2.} $ 0 \leq y < \ell -2x_0 $.
In this case, $2x_0+y \leq \ell$, thus we have that $2a+b \geq \max\{2x_0+y+2k+1-\ell,2k+1\}=2k+1$. Let $s(y) = \max\{\lceil \frac{ 2k+1-y}{2}\rceil,1\}$. As $2a+b \ge 2k+1$ and $b \le y$, we have $a \geq \lceil \frac{ 2k+1-y}{2} \rceil$; we have also observed that $a\geq 1$. So $a \ge s(y)$, and
$$ F(x_0,y) = \sum\limits_{i=s(y)}^{x_0}\sum\limits_{j= 2k+1-2i}^{ 2k-i}\binom{x_0}{i}\binom{y}{j}\binom{z}{ 2k-i-j}.$$
Note that $s(y+1) \leq s(y)$ and the equality holds when $y$ is odd.
Let $\Delta = F(x_0,y+1) -F(x_0,y)$ and for $s(y) \le i \le x_0$, let
\begin{equation}
\label{second-deltai}
\Delta_i =\sum\limits_{j= 2k+1-2i}^{ 2k-i}\binom{y+1}{j}\binom{z-1}{ 2k-i-j}-\sum\limits_{j= 2k+1-2i}^{ 2k-i}\binom{y}{j}\binom{z}{2k-i-j}.
\end{equation}
Then
$\Delta \geq \sum\limits_{i=s(y)}^{x_0}\binom{x_0}{i}\Delta_i$. To prove that $F(x_0,y+1) \ge F(x_0,y)$, it suffices to prove that for each $i$, $\Delta_i \geq 0$.
Using the equalities $\binom{y+1}{j} = \binom{y}{j}+\binom{y}{j-1}$ and $\binom{z}{2k-i-j}=\binom{z-1}{2k-i-j}+\binom{z-1}{2k-i-j-1}$ in Equality (\ref{second-deltai}), and cancelling the term $ \sum\limits_{j=2k+1-2i}^{ 2k-i}\binom{y}{j}\binom{z-1}{2k-i-j}$,
we have
$$\Delta_i =\sum\limits_{j=2k+1-2i}^{2k-i}\binom{y}{j-1}\binom{z-1}{2k-i-j}-\sum\limits_{j=2k+1-2i}^{2k-i}\binom{y}{j}\binom{z-1}{2k-1-i-j}.$$
When $j=2k-i$ in the second sum, we have $\binom{z-1}{-1}=0$. Writing the first sum in the equality above as
$\sum\limits_{j=2k-2i}^{ 2k-1-i}\binom{y}{j}\binom{z-1}{ 2k-1-i-j}$, we have
$$ \Delta_i = \binom{y}{2k-2i}\binom{z-1}{i-1} \geq 0. $$
\end{proof}
Now, we continue with the proof of Lemma \ref{main-lemma}. First assume that $x < \frac{\ell}{2}$. By Lemma \ref{monotonous}, $F(x,y) \leq F(x,\ell-2x)$.
So it suffices to show that $F(x,\ell-2x) \le \frac 12 \binom{\ell}{2k}-1$.
Recall that (by Equality (\ref{orginal-def-lemma}))
$$F(x,\ell-2x) = \sum\limits_{t=2k+1}^{4k}\sum\limits_{2a+b =t}\binom{x}{a}\binom{\ell-2x}{b}\binom{x}{2k-a-b} =\sum\limits_{t=2k+1}^{4k} C(t,x),$$
where $$C(t,x) = \sum\limits_{2a + b=t} \binom{x}{a} \binom{\ell-2x}{b} \binom{x}{2k -a-b} = \sum\limits_{2a \leq t} \binom{x}{a} \binom{\ell-2x}{t-2a} \binom{x}{2k +a - t}.$$
Then $\sum_{t=0}^{4k}C(t,x) = \binom{\ell}{2k}$ by the Vandermonde convolution. Moreover, for any $0 \leq t \leq 2k$, $$ C(t,x) = \sum\limits_{2a \leq t} \binom{x}{a} \binom{\ell-2x}{t-2a} \binom{x}{2k +a - t} = \sum\limits_{2a' \leq 4k-t} \binom{x}{a'} \binom{\ell-2x}{4k-t-2a'} \binom{x}{a'+t-2k} = C(4k -t,x), $$
where $a'=2k+a-t$. So $$F(x,\ell-2x) = \sum_{t=2k+1}^{4k}C(t,x) = \frac {\binom{\ell}{2k} - C(2k,x)}{2} \le \frac 12 \binom{\ell}{2k}-1.$$
Here we used the fact that $C(2k, x)\geq 2$ when $1 \leq x < \frac{\ell}{2}$. Indeed, if $x \geq k$,
\begin{equation*}
C(2k, x) \geq \binom{x}{k}^2\binom{\ell-2x}{0}+\binom{x}{k-1}^2\binom{\ell-2x}{2} \geq 1+1 = 2.
\end{equation*}
If $1 \leq x \leq k-1$, then
\begin{equation*}
C(2k, x) \geq \binom{x}{x}^2\binom{\ell-2x}{2k-2x}+\binom{x}{x-1}^2\binom{\ell-2x}{2k-2x+2} \geq 1+1 =2.
\end{equation*}
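As an illustrative sanity check (not needed for the proof), take $\ell=6$, $k=2$ and $x=1$: then
\begin{equation*}
C(4, 1) = \sum\limits_{2a \leq 4} \binom{1}{a}^2\binom{4}{4-2a} = \binom{4}{4}+\binom{4}{2} = 7 \geq 2.
\end{equation*}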
Now we assume that $x \geq \frac{\ell}{2}$. It follows that $y \geq 0\geq \ell-2x$. Hence, by the first part of Lemma \ref{monotonous}, $F(x,y) \leq F(x,0)$. So it suffices to prove that $F(x,0) \leq \frac{1}{2}\binom{\ell}{2k}-1$.
Note that in this case, $b=y=0$ and $2x+y+2k+1-\ell \geq 2k+1$, so $2a+b=2a \geq 2x+y+2k+1-\ell = 2x+2k+1-\ell$, which implies that $a \geq x+k-\ell'+1$, where $\ell'=\frac{\ell}{2}$; thus
\begin{equation*}
F(x,0) = \sum\limits_{i=x+k-\ell'+1}^{2k}\binom{x}{i}\binom{\ell-x}{2k-i}.
\end{equation*}
We first prove that when $x \geq \frac{\ell}{2}=\ell'$, $F(x,0)\ge F(x+1,0)$. Let $\Delta = F(x,0) -F(x+1,0)$, then
\begin{equation}
\label{delta3}
\Delta=\sum\limits_{i=x+1+k-\ell'}^{2k}\binom{x}{i}\binom{\ell-x}{2k-i}-\sum\limits_{i=x+2+k-\ell'}^{2k}\binom{x+1}{i}\binom{\ell-1-x}{2k-i}.
\end{equation}
Using the equalities $\binom{x+1}{i} = \binom{x}{i}+\binom{x}{i-1}$ and $\binom{\ell-x}{2k-i}=\binom{\ell-x-1}{2k-i}+\binom{\ell-x-1}{2k-i-1}$, and cancelling the term $ \sum\limits_{i=x+k+2-\ell'}^{2k}\binom{x}{i}\binom{\ell-x-1}{2k-i}$, we have
$$\Delta =\binom{x}{x+1+k-\ell'}\binom{\ell-x}{k+\ell'-x-1}+\sum\limits_{i=x+2+k-\ell'}^{ 2k}\binom{x}{i}\binom{\ell-1-x}{2k-1-i} - \sum\limits_{i=x+2+k-\ell'}^{2k}\binom{x}{i-1}\binom{\ell-1-x}{2k-i}.$$
When $i=2k$ in the first sum above, we have $\binom{\ell-1-x}{-1}=0$. Writing the last sum in the equality above as $\sum\limits_{i=x+1+k-\ell'}^{2k-1}\binom{x}{i}\binom{\ell-1-x}{2k-1-i}$, we have
\begin{eqnarray*}
\Delta &=& \binom{x}{x+1+k-\ell'}\binom{\ell-x}{k+\ell'-x-1} - \binom{x}{x+1+k-\ell'}\binom{\ell-x-1}{k+\ell'-x-2} \\
&=& \binom{x}{x+1+k-\ell'}\binom{\ell-x-1}{k+\ell'-x-1} \geq 0.
\end{eqnarray*}
So, $F(x,y) \leq F(\ell',0)$. Note that $
\sum\limits_{i=k+1}^{2k}\binom{\ell'}{i}\binom{\ell'}{2k-i} = \sum\limits_{i=0}^{k-1}\binom{\ell'}{2k-i}\binom{\ell'}{i}$
and
\begin{equation*}
\binom{\ell}{2k} =\sum\limits_{i=k+1}^{2k}\binom{\ell'}{i}\binom{\ell'}{2k-i}+ \sum\limits_{i=0}^{k-1}\binom{\ell'}{2k-i}\binom{\ell'}{i}+\binom{\ell'}{k}^2.
\end{equation*}
As $\ell'=\frac{\ell}{2} \geq k+1 \geq 2$, we have
\begin{equation*}
F(x,y) \leq F(\ell',0)
= \sum\limits_{i=k+1}^{2k}\binom{\ell'}{i}\binom{\ell'}{2k-i} = \frac{\binom{\ell}{2k}-\binom{\ell'}{k}^2}{2} < \frac{1}{2}\binom{\ell}{2k}-1.
\end{equation*}
This completes the proof of Lemma \ref{main-lemma}.
| {
"timestamp": "2020-06-30T02:20:13",
"yymm": "2006",
"arxiv_id": "2006.15614",
"language": "en",
"url": "https://arxiv.org/abs/2006.15614",
"abstract": "A graph $G$ is called $3$-choice critical if $G$ is not $2$-choosable but any proper subgraph is $2$-choosable. A characterization of $3$-choice critical graphs was given by Voigt in [On list Colourings and Choosability of Graphs, Habilitationsschrift, Tu Ilmenau(1998)]. Voigt conjectured that if $G$ is a bipartite $3$-choice critical graph, then $G$ is $(4m, 2m)$-choosable for every integer $m$. This conjecture was disproved by Meng, Puleo and Zhu in [On (4, 2)-Choosable Graphs, Journal of Graph Theory 85(2):412-428(2017)]. They showed that if $G=\\Theta_{r,s,t}$ where $r,s,t$ have the same parity and $\\min\\{r,s,t\\} \\ge 3$, or $G=\\Theta_{2,2,2,2p}$ with $p \\ge 2$, then $G$ is bipartite $3$-choice critical, but not $(4,2)$-choosable. On the other hand, all the other bipartite 3-choice critical graphs are $(4,2)$-choosable. This paper strengthens the result of Meng, Puleo and Zhu and shows that all the other bipartite $3$-choice critical graphs are $(4m,2m)$-choosable for every integer $m$.",
"subjects": "Combinatorics (math.CO)",
"title": "Multiple list colouring of $3$-choice critical graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717464424892,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449308080046
} |
https://arxiv.org/abs/2003.03005 | The Multiple Points of Fractional Brownian Motion | Nils Tongring (1987) proved sufficient conditions for a compact set to contain $k$-tuple points of a Brownian motion. In this paper, we extend these findings to the fractional Brownian motion. Using the property of strong local nondeterminism, we show that if $B$ is a fractional Brownian motion in $\mathbb{R}^d$ with Hurst index $H$ such that $Hd=1$, and $E$ is a fixed, nonempty compact set in $\mathbb{R}^d$ with positive capacity with respect to the function $\phi(s) = (\log_+(1/s))^k$, then $E$ contains $k$-tuple points with positive probability. For the $Hd > 1$ case, the same result holds with the function replaced by $\phi(s) = s^{-k(d-1/H)}$. | \section{Introduction}
It is a well known result that almost all sample paths of Brownian motion in $\mathbb{R}^2$ have $k$-tuple points for any positive integer $k$ \cite{DEK}. It is also well known that a Brownian path in $\mathbb{R}^2$ will hit a compact set if and only if the set has positive logarithmic capacity \cite{K44}. Nils Tongring \cite{Tongring} combined these results in 1987 to prove that if $B$ is a Brownian motion in $\mathbb{R}^2$, and $E$ is a fixed compact set in the plane with positive capacity with respect to the function $\phi(s)=(\log(1/s))^k$, $k$ a positive integer, then $E$ contains $k$-tuple points for almost all paths.
We say that $x$ is a \emph{$k$-tuple point} (or \emph{$k$-multiple point}) for the path $\omega$ if there are times $t_1 < t_2 < \cdots < t_k$ such that $x = B(t_1, \omega) = B(t_2, \omega) = \cdots = B(t_k, \omega)$.
In this paper, we extend Tongring's results to the case of fractional Brownian motion, an extension of Brownian motion in which the requirement of independent increments is dropped. The fractional Brownian motion of Hurst index $0 < H < 1$ is defined as a mean zero Gaussian process $\{B_t : t \ge 0\}$ with continuous sample paths and covariance function
$\mathbb{E}(B_t B_s)=(|t|^{2H}+|s|^{2H}-|t-s|^{2H})/2$.
The fractional Brownian motion in $\mathbb{R}^d$ is defined as $B_t = (B_t^1, \dots, B^d_t)$ where $B^1, \dots, B^d$ are i.i.d.~copies of one-dimensional fractional Brownian motion.
The fractional Brownian motion has the scaling property that for any $c > 0$, the process $\{ c^{-H} B_{ct} : t \ge 0 \}$ is also a fractional Brownian motion.
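Indeed, the scaling property follows directly from the covariance function: for any $c > 0$,
\[ \mathbb{E}\big(c^{-H}B_{ct}\, c^{-H}B_{cs}\big) = \tfrac{1}{2}\, c^{-2H}\big(|ct|^{2H}+|cs|^{2H}-|ct-cs|^{2H}\big) = \tfrac{1}{2}\big(|t|^{2H}+|s|^{2H}-|t-s|^{2H}\big), \]
so $\{ c^{-H} B_{ct} : t \ge 0 \}$ is again a mean zero Gaussian process with the same covariance.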
The existence of multiple points for fractional Brownian motion was studied by several authors. The results of K\^{o}no \cite{K78}, Goldman \cite{G81} and Rosen \cite{R84} show that if $k > (k-1)Hd$, then $k$-tuple points exist in any interval $T \subset (0, \infty)$; and if $k < (k-1)Hd$, then $k$-tuple points do not exist.
Talagrand \cite{T98} proved that in the critical case $k = (k-1)Hd$, $k$-tuple points do not exist. In fact, their results cover the multiparameter case $(t \in \mathbb{R}^N)$.
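For example, for planar Brownian motion ($H = 1/2$ and $d = 2$, so that $Hd = 1$), the condition $k > (k-1)Hd$ holds for every positive integer $k$, consistent with the classical existence of $k$-tuple points recalled above \cite{DEK}.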
Rosen \cite{R84} also studied the Hausdorff dimension of multiple times using local times.
We study sufficient conditions for a fixed set $E$ in $\mathbb{R}^d$ to contain multiple points of a fractional Brownian motion in $\mathbb{R}^d$.
Our main results are Theorem \ref{ktuplepointsHd=1} and \ref{ktupleHd>1}, concerning the cases $Hd = 1$ and $Hd > 1$ respectively.
We show that if $Hd = 1$ (resp.~$Hd>1$) and $E$ is a fixed, nonempty compact set in
$\mathbb{R}^d$ with positive capacity with respect to the function
$\phi(s) = (\log_+(1/s))^k$ (resp.~$\phi(s) = s^{-k(d-1/H)}$), then $E$ contains
$k$-tuple points with positive probability. To avoid negative values of the logarithm, we define $\log_+(x) = \max\{\log(x), 0\}$.
Our results extend Theorem 1 and 3 of Tongring \cite{Tongring}.
For the case that $Hd<1$, it is known that for any fixed point $x$ in $\mathbb{R}^d$, the Hausdorff dimension of the level set $B^{-1}(x) = \{ t \ge 0 : B(t) = x \}$ of the fractional Brownian motion is a.s.~equal to $1 - Hd$, which is positive (see \cite{A, K}).
It follows that $B^{-1}(x)$ contains uncountably many time points and therefore $x$ is a multiple point of infinitely large multiplicity.
Unlike the Brownian motion, the fractional Brownian motion (for $H \ne 1/2$) does not have independent increments.
In order to estimate the joint distributions, our proofs rely on the property of strong local nondeterminism for the fractional Brownian motion due to Pitt \cite{Pitt} (also see Proposition \ref{Prop:LND} below).
The concept of local nondeterminism (LND) was first introduced by Berman \cite{B73} to study local times of Gaussian processes.
We refer to \cite{B87, C78, Pitt} for general conditions of LND for Gaussian processes.
LND is useful for studying small ball probability, H\"{o}lder conditions of local times, Hausdorff measure of ranges and level sets, and other properties of Gaussian random fields (see e.g.~\cite{X97, X08}).
Let us fix notation and gather useful formulas before we end the introduction.
For any nonempty Borel set $E$ in $\mathbb{R}^d$, the \emph{capacity} of $E$ with respect to a function $\phi: [0, \infty) \to [0, \infty]$, or the \emph{$\phi$-capacity}, can be defined as the quantity
\[ C_\phi(E) = \bigg[\inf_\mu \int_E \int_E \phi(|z_1 - z_2|) \mu(dz_1) \mu(dz_2)\bigg]^{-1} \]
where the infimum is taken over all probability measures $\mu$ on $E$.
It follows that $E$ has positive capacity with respect to $\phi$ if and only if there exists a probability measure $\mu$ on $E$ such that
\[ \int_E \int_E \phi(|z_1 - z_2|) \mu(dz_1) \mu(dz_2) < \infty.\]
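For example, a singleton $E = \{z_0\}$ has zero $\phi$-capacity whenever $\phi(0) = \infty$ (as is the case for the functions considered in this paper): the only probability measure on $E$ is the point mass $\delta_{z_0}$, and its energy integral equals $\phi(0) = \infty$.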
Recall that for any mean zero Gaussian vector $(Z, Z_1, \dots, Z_n)$, the conditional variance of $Z$ given $Z_1, \dots, Z_n$ is equal to the squared $L^2(\mathbb{P})$-distance of $Z$ from the subspace generated by $Z_1, \dots, Z_n$, namely,
\begin{equation}\label{cond_var=L2_dist}
\mathrm{Var}(Z|Z_1, \dots, Z_n) = \inf_{a_1, \dots, a_n \in \mathbb{R}} \mathbb{E}\bigg[\Big(Z - \sum_{j=1}^n a_j Z_j\Big)^2 \bigg].
\end{equation}
The covariance matrix of the Gaussian vector $(Z_1, \dots, Z_n)$ is denoted by
$\mathrm{Cov}(Z_1, \dots, Z_n)$.
A useful formula for the determinant of the covariance matrix is:
\begin{equation}\label{Eq:detcov}
\det\mathrm{Cov}(Z_1, \dots, Z_n) = \mathrm{Var}(Z_1) \prod_{j=2}^n \mathrm{Var}(Z_j|Z_1, \dots, Z_{j-1}).
\end{equation}
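For instance, when $n = 2$, minimizing the quadratic $\mathbb{E}[(Z_2 - a_1 Z_1)^2]$ in \eqref{cond_var=L2_dist} gives $\mathrm{Var}(Z_2|Z_1) = \mathrm{Var}(Z_2) - \mathrm{Cov}(Z_1, Z_2)^2/\mathrm{Var}(Z_1)$ (provided $\mathrm{Var}(Z_1) > 0$), so that
\[ \mathrm{Var}(Z_1)\,\mathrm{Var}(Z_2|Z_1) = \mathrm{Var}(Z_1)\mathrm{Var}(Z_2) - \mathrm{Cov}(Z_1, Z_2)^2 = \det\mathrm{Cov}(Z_1, Z_2), \]
which is \eqref{Eq:detcov} for $n = 2$; the general case follows by iterating this computation.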
An oft-used result in this paper will be the following proposition.
\begin{prop}\label{Prop:LND}
Let $B$ be a fractional Brownian motion of Hurst index $H$. There exists a constant $0<C_0 \le1$ such that for all $n \ge 1$, for all $t, s_1, ..., s_n \ge 0$,
\begin{equation*}
C_0 \min\limits_{1\le i \le n}|t-s_i|^{2H}\le \mathrm{Var}(B_t|B_{s_1},\cdots,B_{s_n})\le \min\limits_{1\le i \le n}|t-s_i|^{2H}.
\end{equation*}
\end{prop}
The upper bound follows easily from \eqref{cond_var=L2_dist} and the fact that $\mathbb{E}[(B_t - B_s)^2] = |t-s|^{2H}$ (take $a_i = 1$ and $a_j = 0$ for $j \neq i$).
The lower bound was proved by Pitt \cite[Lemma 7.1]{Pitt} and is known as the property of strong local nondeterminism.
Formula \eqref{Eq:detcov} and Proposition \ref{Prop:LND} together allow us to estimate the joint distributions of fractional Brownian motion.
Throughout this paper, $C$ denotes a positive finite constant whose value may vary in each appearance, and $C_1, C_2, \dots$ denote specific constants.
For any $z \in \mathbb{R}^d$, $|z|$ denotes its Euclidean norm.
Also, $D(z, \varepsilon)$ denotes the closed ball in $\mathbb{R}^d$ centered at $z$ of radius $\varepsilon$ under the Euclidean norm, i.e.~$D(z, \varepsilon) = \{ x \in \mathbb{R}^d : |x-z| \le \varepsilon \}$.
\section{Multiple Points of Fractional Brownian Motion with $Hd = 1$}
\begin{lem}\label{cap_scaling}
Let $E$ be a Borel set in $\mathbb{R}^d$ and $\phi(s) = (\log_+(1/s))^k$, where $k$ is a positive integer.
If $E$ has positive $\phi$-capacity, then $\lambda E$ has positive $\phi$-capacity for any $\lambda > 0$.
\end{lem}
\begin{proof}
Suppose that $E$ has positive $\phi$-capacity and $\mu$ is a probability measure on $E$ such that
\begin{equation}\label{finite_energy}
\int_E \int_E \bigg(\log_+\frac{1}{|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2) < \infty.
\end{equation}
Define the probability measure $\mu_\lambda$ on $\lambda E$ by $\mu_\lambda(A) = \mu(\lambda^{-1}A)$ for any Borel subset $A$ of $\lambda E$. We claim that
\begin{equation}\label{finite_energy_scaled}
\int_{\lambda E} \int_{\lambda E} \bigg(\log_+\frac{1}{|z_1-z_2|}\bigg)^k \mu_\lambda(dz_1) \mu_\lambda(dz_2) < \infty.
\end{equation}
By changing variables,
\[ \int_{\lambda E} \int_{\lambda E} \bigg(\log_+\frac{1}{|z_1-z_2|}\bigg)^k \mu_\lambda(dz_1) \mu_\lambda(dz_2) = \int_E \int_E \bigg(\log_+\frac{1}{\lambda|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2). \]
If $\lambda \ge 1$, then $\log_+(\frac{1}{\lambda|z_1-z_2|}) \le \log_+(\frac{1}{|z_1-z_2|})$ and \eqref{finite_energy_scaled} clearly holds. Suppose $\lambda < 1$. Then
\begin{align*}
\int_E \int_E \bigg(\log_+\frac{1}{\lambda|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2)
&= \iint_{\substack{z_1, z_2 \in E, \\|z_1 - z_2| \le \lambda^{-1}}} \bigg(\log \frac{1}{\lambda|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2)\\
& = \iint_{\substack{z_1, z_2 \in E, \\|z_1 - z_2| \le \lambda^{-1}}} \bigg(\log \frac{1}{\lambda} + \log \frac{1}{|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2).
\end{align*}
We can split the last integral into two parts:
\begin{align*}
\Bigg(\iint_{\substack{z_1, z_2 \in E, \\|z_1 - z_2| \le 1}} + \iint_{\substack{z_1, z_2 \in E, \\ 1 < |z_1 - z_2| \le \lambda^{-1}}} \Bigg) \bigg(\log \frac{1}{\lambda} + \log \frac{1}{|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2).
\end{align*}
For the first part, we can use the inequality $(a+b)^k \le 2^{k-1}(a^k+b^k)$ for $a, b \ge 0$ and $k \ge 1$. For the second part, we note that $0 \le \log(1/\lambda) - \log |z_1-z_2| \le \log(1/\lambda)$. It follows that
\begin{align*}
\int_E \int_E \bigg(\log_+\frac{1}{\lambda|z_1-z_2|}\bigg)^k \mu(dz_1) \mu(dz_2)
&\le 2^{k-1} \iint_{\substack{z_1, z_2 \in E, \\|z_1 - z_2| \le 1}} \Bigg[\left(\log\frac{1}{\lambda}\right)^k + \bigg( \log\frac{1}{|z_1-z_2|}\bigg)^k \Bigg]\mu(dz_1) \mu(dz_2) \\
&\quad + \iint_{\substack{z_1, z_2 \in E,\\ 1 < |z_1 - z_2| \le \lambda^{-1}}} \left(\log \frac{1}{\lambda}\right)^k \mu(dz_1) \mu(dz_2).
\end{align*}
The right-hand side is finite by \eqref{finite_energy}. Hence \eqref{finite_energy_scaled} holds and $\lambda E$ has positive $\phi$-capacity.
\end{proof}
The following is our main result for the case of $Hd=1$.
\begin{thm}
\label{ktuplepointsHd=1}
If B is a fractional Brownian motion in $\mathbb{R}^d$ with Hurst index $H$ such that $Hd=1$, and $E$ is a fixed, nonempty compact set in $\mathbb{R}^d$ with positive capacity with respect to the function $\phi(s) = (\log_+(1/s))^k$, then E contains $k$-tuple points with positive probability.
\end{thm}
\begin{proof}
Suppose that $E$ has positive capacity with respect to the function $\phi(s) = (\log_+(1/s))^k$. That is, there is a probability measure $\mu$ defined on $E$ such that
\begin{equation}\label{assump}
\int_{E}\int_{E}\bigg(\log_+\frac{1}{|z_1-z_2|}\bigg)^k\mu(dz_1)\mu(dz_2) < \infty.
\end{equation}
By Lemma \ref{cap_scaling} and the scaling property of the fractional Brownian motion, we can assume without loss of generality that the set $E$ is contained in a ball of radius $1/3$ centered at the origin, so that $|z_1 - z_2| \le 2/3$ and $\log_+(1/|z_1 - z_2|)$ has a positive lower bound:
\begin{equation}\label{Eq:log_lower_bd}
\log_+(1/|z_1 - z_2|) \ge \log(3/2) > 0, \quad \forall\, z_1, z_2 \in E.
\end{equation}
For $u \in \mathbb{R}^+$, let $\mathbf{K}_\varepsilon(u)$ be the indicator function
\[
\mathbf{K}_\varepsilon(u) = \begin{cases}
1, & u\leq \varepsilon, \\
0, & u> \varepsilon.
\end{cases}
\]
For $0 < \varepsilon < 1$, consider the following integral:
\begin{align}\label{Int:I_epsilon}
I_{\varepsilon}= \frac{1}{\varepsilon^{kd}}\int_{E}\mu(dz) \prod_{j=1}^k \int_{2j-1}^{2j}ds_j \textbf{K}_\varepsilon({|B_{s_j}-z|}).
\end{align}
To prove that $E$ contains $k$-tuple points with positive probability, we will show that $\mathbb{P}(I_\varepsilon>0)$ has a positive lower bound independent of $\varepsilon$.
Since $I_\varepsilon$ is non-negative, $\mathbb{E}(I_\varepsilon) = \mathbb{E}(I_\varepsilon \cdot {\bf 1}_{\{ I_\varepsilon > 0 \}})$.
By the Cauchy--Schwarz inequality, we have
\begin{equation}\label{Eq:CS}
\mathbb{P}(I_\varepsilon>0)\ge\frac{\mathbb{E}(I_\varepsilon)^2}{\mathbb{E}(I_\varepsilon^2)}.
\end{equation}
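Indeed, since $I_\varepsilon \ge 0$, Cauchy--Schwarz applied to $I_\varepsilon \cdot {\bf 1}_{\{ I_\varepsilon > 0 \}}$ gives $\mathbb{E}(I_\varepsilon)^2 = \mathbb{E}(I_\varepsilon {\bf 1}_{\{ I_\varepsilon > 0 \}})^2 \le \mathbb{E}(I_\varepsilon^2)\, \mathbb{P}(I_\varepsilon > 0)$.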
To prove that the right-hand side is positive, we show that the numerator is bounded below and the denominator is bounded above; both bounds are independent of $\varepsilon$.
First we prove a lower bound for $\mathbb{E}(I_\varepsilon)^2$.
By Fubini's theorem,
\begin{equation}\label{Eq:E(I_ep)}
\mathbb{E}(I_\varepsilon)= \frac{1}{\varepsilon^{kd}}\int_{E}\mu(dz) \int_1^2 ds_1 \int_3^4 ds_2 \cdots \int_{2k-1}^{2k} ds_k\, \mathbb{P}\bigg( \bigcap_{j=1}^k \big\{|B_{s_j}-z|\le\varepsilon\big\}\bigg).
\end{equation}
Let $s_j \in [2j-1, 2j]$ and write $B_{s_j} = (B_{s_j}^1, \dots, B_{s_j}^d)$ for $j = 1, \dots, k$.
Let $\Sigma_0 = \mathrm{Cov}(B_{s_1}^1, \dots, B_{s_k}^1)$ be the covariance matrix of the $k$-dimensional Gaussian vector $(B_{s_1}^1, \dots, B_{s_k}^1)$, and let $\Sigma = \mathrm{Cov}(B_{s_1}, \dots, B_{s_k})$ be the covariance matrix of the $kd$-dimensional Gaussian vector $(B_{s_1}, \dots, B_{s_k})$.
Then,
\begin{align*}
\mathbb{P}\bigg( \bigcap_{j=1}^k \big\{|B_{s_j} - z| \le \varepsilon \big\}\bigg) = \int\cdots\int_{D(z, \varepsilon)^k} \frac{1}{(2\pi)^{kd/2} (\det\Sigma)^{1/2}}
\exp\left(-\frac{x \Sigma^{-1}x^T}{2} \right) dx_1 \cdots dx_k,
\end{align*}
where $x$ denotes the row vector $(x_1, \dots, x_k)$, with $x_j \in \mathbb{R}^d$, and $x^T$ denotes its transpose.
Since $B^1, \dots, B^d$ are i.i.d., $\det\Sigma = (\det\Sigma_0)^d$, and by \eqref{Eq:detcov}, we have
\[\det\Sigma_0 = \mathrm{Var}(B_{s_1}^1) \prod_{j=2}^k \mathrm{Var}(B_{s_j}^1|B_{s_1}^1, \dots, B_{s_{j-1}}^1).\]
Then by Proposition \ref{Prop:LND}, we get that
\[\det\Sigma \le [s_1^{2H} (s_2-s_1)^{2H} \cdots (s_k-s_{k-1})^{2H}]^d \le [2^{2H} 3^{2H(k-1)}]^d=: C_1. \]
Let $\lambda_1 \le \dots \le \lambda_k$ be the eigenvalues of the matrix $\Sigma_0$.
Then,
\[ x \Sigma^{-1} x^T \le \frac{1}{\lambda_1}(|x_1|^2 + \dots + |x_k|^2). \]
Recall that the largest eigenvalue of a real symmetric $k\times k$ matrix $A$ is equal to $\sup\{ v A v^T : v \in \mathbb{R}^k, |v| = 1 \}$.
Since the entries of $\Sigma_0$ are bounded as a function of $(s_1, \dots, s_k)$ on the compact set $\prod_{j=1}^k[2j-1, 2j]$, the eigenvalues of $\Sigma_0$ are bounded by a constant $C_2$.
It follows that
\[ \det\Sigma_0 = \prod_{j=1}^k \lambda_j \le \lambda_1 C_2^{k-1}. \]
Also, by Proposition \ref{Prop:LND}, we have
\begin{align*}
\det\Sigma_0 &=\mathrm{Var}(B_{s_1}^1) \prod_{j=2}^k \mathrm{Var}(B_{s_j}^1|B_{s_1}^1, \dots, B_{s_{j-1}}^1)\\
& \ge s_1^{2H} \prod_{j=2}^k[C_0(s_j-s_{j-1})^{2H}] \ge C_0^{k-1}.
\end{align*}
Hence $\lambda_1\ge (C_0/C_2)^{k-1}$. Note that for $x_j \in D(z, \varepsilon)$ we have $|x_j|\le |z| + \varepsilon < 1$ (recall that $|z| \le 1/3$, and we may assume $\varepsilon < 2/3$), so $|x_1|^2 + \dots + |x_k|^2 < k$ and
\begin{align*}
\mathbb{P}\bigg(\bigcap_{j=1}^k\big \{|B_{s_j}-z|\le \varepsilon\big\}\bigg)
&\ge \frac{1}{C_1^{1/2}(2\pi)^{kd/2}} \int\dots \int_{D(z, \varepsilon)^k} \exp\bigg(-\frac{k}{2(C_0/C_2)^{k-1}}\bigg)dx_1\cdots dx_k
= C \varepsilon^{kd}.
\end{align*}
Plugging this back into \eqref{Eq:E(I_ep)} gives $\mathbb{E}(I_\varepsilon)^2 \ge C_3$, where $C_3 > 0$ is a constant independent of $\varepsilon$.
Next we show that $\mathbb{E}(I_\varepsilon^2)$ has a constant upper bound independent of $\varepsilon$.
Following \cite{Tongring}, we divide $\mathbb{E}(I_\varepsilon^2)$ into two parts, $F$ and $S$. The first part $F$ is given by restricting the variables $z_1, z_2$ to values for which $|z_1 - z_2| \leq 4\varepsilon$, and the second part $S$ is given by the remaining $z_1, z_2$ for which $|z_1 - z_2| > 4\varepsilon$. That is,
\[ \mathbb{E}(I_\varepsilon^2) = F + S,\]
where $F$ and $S$ are defined by
\begin{multline*}
F= \frac{1}{\varepsilon^{2kd}}\iint_{\substack{z_1, z_2 \in E,\\ |z_1 - z_2| \le 4\varepsilon}} d\mu (z_1) d\mu (z_2) \iint_{[1,2]^2} ds_1 d\widehat{s}_1 \iint_{[3, 4]^2} ds_2 d\widehat{s}_2\\\cdots
\iint_{[2k-1, 2k]^2}ds_k d\widehat{s}_k \,\mathbb{P}\bigg(\bigcap_{j=1}^k\big\{|B_{s_j}-z_1|\le\varepsilon,|B_{\widehat{s}_j}-z_2|\le\varepsilon\big\}\bigg)
\end{multline*}
and
\begin{multline*}
S= \frac{1}{\varepsilon^{2kd}}\iint_{\substack{z_1, z_2 \in E,\\ |z_1-z_2| > 4\varepsilon}} d\mu (z_1)d\mu (z_2) \iint_{[1,2]^2} ds_1 d\widehat{s}_1 \iint_{[3, 4]^2} ds_2 d\widehat{s}_2\\\cdots
\iint_{[2k-1, 2k]^2}ds_k d\widehat{s}_k \,\mathbb{P}\bigg(\bigcap_{j=1}^k\big\{|B_{s_j}-z_1|\le\varepsilon,|B_{\widehat{s}_j}-z_2|\le\varepsilon\big\}\bigg).
\end{multline*}
Let us consider $S$. Assume $|z_1 - z_2| > 4\varepsilon$. First, we fix $s_j, \widehat s_j \in [2j-1, 2j]$ for $j =1, \dots, k$ and consider the joint probability that appears in $S$.
We may assume without loss of generality that $s_1<\widehat{s}_1 < \cdots <s_k<\widehat{s}_k$, because for other possible orderings, we can interchange $s_j$ and $\widehat s_j$, and follow the same argument.
By the triangle inequality, the joint probability is
\begin{align*}
\mathbb{P}\bigg(\bigcap_{j=1}^k\big\{|B_{s_j}-z_1|\le\varepsilon,|B_{\widehat{s}_j}-z_2|\le\varepsilon\big\}\bigg)
\le \mathbb{P}\bigg( \bigcap_{j=1}^k \big\{ |B_{s_j} - z_1| \le \varepsilon, |B_{\widehat{s}_j} - B_{s_j} - (z_2-z_1)| \le 2\varepsilon \big\} \bigg).
\end{align*}
Now, we normalize the random variables $B_{s_1}, B_{\widehat{s}_1} - B_{s_1}, \dots, B_{s_k}, B_{\widehat{s}_k} - B_{s_k}$ by dividing by their respective standard deviations.
We will see that this allows us to bound the quadratic form $x\Sigma^{-1}x^T$ from below via the maximum eigenvalue of the resulting correlation matrix.
Let
\[X_j=\frac{B_{s_j}}{s_j^H}, \quad \widehat{X}_j=\frac{B_{\widehat{s}_j}-B_{s_j}}{|\widehat{s}_j-s_j|^H} \]
for $j = 1, \dots, k$.
Each of these is a $d$-dimensional random vector; we write $X_j = (X_j^1, X_j^2, \dots, X_j^d)$, and similarly for $\widehat{X}_j$.
To simplify notations, let us denote
\[y_j=\frac{z_1}{s_j^H}, \quad
\widehat{y}_j=\frac{z_2-z_1}{|\widehat{s}_j-s_j|^H}\]
and
\[
\varepsilon_j = \frac{\varepsilon}{s_j^H}, \quad
\widehat{\varepsilon}_j = \frac{\varepsilon}{|\widehat{s}_j-s_j|^H}
\]
for $j = 1, \dots, k$.
Then
\begin{align}\label{Eq:UBint}
\notag &\quad\, \mathbb{P}\bigg( \bigcap_{j=1}^k \big\{ |B_{s_j} - z_1| \le \varepsilon, |B_{\widehat{s}_j} - B_{s_j} - (z_2-z_1)| \le 2\varepsilon \big\} \bigg)\\
\notag&= \mathbb{P}\bigg( \bigcap_{j=1}^k \big\{ |X_j - y_j| \le \varepsilon_j, |\widehat{X}_j - \widehat{y}_j| \le 2\widehat{\varepsilon}_j \big\} \bigg)\\
& = \int\dots\int_D \frac{1}{(2\pi)^{kd}(\mathrm{det}\Sigma)^{1/2}}\;\exp\bigg({-\frac{x\Sigma^{-1}x^T}{2}}\bigg)dx_1d\widehat{x}_1\cdots dx_kd\widehat{x}_k
\end{align}
where $D = \prod_{j=1}^k [D(y_j, \varepsilon_j) \times D(\widehat{y}_j, 2\widehat{\varepsilon}_j)]$, $x = (x_1, \widehat{x}_1, \dots, x_k, \widehat{x}_k)$ and
\[ \Sigma = \mathrm{Cov}(X_1, \widehat{X}_1, \dots, X_k, \widehat{X}_k). \]
By \eqref{Eq:detcov}, $\mathrm{det}\Sigma$ is equal to
\[ \Big\{\mathrm{Var}(X_1^1)\mathrm{Var}(\widehat{X}_1^1|X_1^1)\prod_{j=2}^k\big[\mathrm{Var}(X_j^1|X_1^1, \widehat{X}_1^1, \dots, X_{j-1}^1, \widehat{X}_{j-1}^1) \mathrm{Var}(\widehat{X}_j^1|X_1^1, \widehat{X}_1^1, \dots, X_{j-1}^1, \widehat{X}_{j-1}^1, X_j^1)\big]\Big\}^d.\]
Clearly, $\mathrm{Var}(X^1_1) = 1$.
To find lower bounds for the conditional variances, we use the property of local nondeterminism.
Using \eqref{cond_var=L2_dist} and Proposition \ref{Prop:LND}, we get that
\begin{align*}
\mathrm{Var}(\widehat{X}^1_1|X^1_1)
\ge \frac{1}{|\widehat{s}_1 - s_1|^{2H}}\mathrm{Var}(B^1_{\widehat{s}_1}|B^1_{s_1}) \ge C_0.
\end{align*}
Similarly, for all $j = 2, \dots, k$, we have
\begin{align*}
\mathrm{Var}(X_j^1|X_1^1,\widehat{X}_1^1, \dots, X_{j-1}^1, \widehat{X}_{j-1}^1)
& \ge \frac{1}{s_j^{2H}} \mathrm{Var}(B^1_{s_j}|B^1_{s_1}, B^1_{\widehat{s}_1}, \dots, B^1_{s_{j-1}}, B^1_{\widehat{s}_{j-1}})\\
& \ge \frac{C_0 |s_j - \widehat{s}_{j-1}|^{2H}}{s_j^{2H}} \ge \frac{C_0}{(2j)^{2H}}
\end{align*}
and
\begin{align*}
\mathrm{Var}(\widehat{X}_j^1|X_1^1,\widehat{X}_1^1, \dots, X_{j-1}^1, \widehat{X}_{j-1}^1, X_j^1)
& \ge \frac{1}{|\widehat{s}_j-s_j|^{2H}} \mathrm{Var}(B^1_{\widehat{s}_j}|B^1_{s_1}, B^1_{\widehat{s}_1}, \dots, B^1_{s_{j-1}}, B^1_{\widehat{s}_{j-1}}, B^1_{s_j}) \ge C_0.
\end{align*}
It follows that
\begin{equation}\label{Eq:detSigma}
\det\Sigma \ge C
\end{equation}
for some constant $C > 0$ independent of $\varepsilon$.
Let us consider the exponential function of the joint density in \eqref{Eq:UBint}.
Let $\lambda_{\mathrm{max}}$ denote the maximum eigenvalue of $\Sigma$.
Then for any $x = (x_1, \widehat{x}_1, \dots, x_k, \widehat x_k) \in D$,
\begin{align*}
x\Sigma^{-1}x^T
& \ge \frac{1}{\lambda_{\mathrm{max}}}|x|^2
\ge \frac{1}{\lambda_{\mathrm{max}}}(|\widehat x_1|^2 + \cdots + |\widehat x_k|^2).
\end{align*}
By normalizing the random variables, we are able to use Gershgorin's circle theorem \cite[Theorem 6.1.1]{HJ} to find this bound. Since the entries in the $\Sigma$ matrix are the correlation coefficients between the random variables, the diagonal entries are 1 and the sum of the absolute values of the non-diagonal entries in each row is bounded by $2k-1$, thus $\lambda_{\mathrm{max}} \le 1 + (2k-1) = 2k$. It follows that
\begin{equation}\label{Eq:UBexp}
\exp\Big(-\frac{x\Sigma^{-1}x^T}{2}\Big)
\le \exp\bigg(-\frac{|\widehat x_1|^2+\dots + |\widehat x_k|^2}{4k}\bigg).
\end{equation}
Using \eqref{Eq:detSigma} and \eqref{Eq:UBexp}, with a change of variables, we get that the integral in \eqref{Eq:UBint} is bounded above by
\begin{align*}
C \prod_{j=1}^k \iint_{D(0, \varepsilon_j) \times D(0, 2\widehat{\varepsilon}_j)} \exp\bigg(-\frac{|\widehat x_j+\widehat y_j|^2}{4k}\bigg)dx_j d\widehat x_j.
\end{align*}
Note that the assumption $|z_1-z_2| > 4\varepsilon$ implies $2\varepsilon |\widehat{s}_j - s_j|^{-H} \le |\widehat{y}_j|/2$ for all $j$.
By the triangle inequality, if $\widehat x_j \in D(0, 2\widehat{\varepsilon}_j)$, then
\begin{align*}
|\widehat{y}_j + \widehat{x}_j| \ge |\widehat{y}_j| - \frac{2\varepsilon}{|\widehat{s}_j - s_j|^H} \ge |\widehat{y}_j| - \frac{|\widehat{y}_j|}{2} = \frac{|\widehat{y}_j|}{2},
\end{align*}
which implies
\[ \exp\bigg(-\frac{|\widehat{x}_j+\widehat{y}_j|^2}{4k}\bigg) \le \exp\bigg(-\frac{|\widehat{y}_j|^2}{16k}\bigg).\]
Recall that $\widehat\varepsilon_j = \varepsilon/|\widehat{s}_j - s_j|^{H}$ and $Hd = 1$.
It follows that
\begin{align*}
\mathbb{P}\bigg( \bigcap_{j=1}^k \big\{ |B_{s_j} - z_1| \le \varepsilon, |B_{\widehat{s}_j} - B_{s_j} - (z_2-z_1)| \le 2\varepsilon \big\} \bigg)
& \le C \prod_{j=1}^k \iint_{D(0, \varepsilon_j) \times D(0, 2\widehat \varepsilon_j)}\exp\bigg( - \frac{|\widehat{y}_j|^2}{16k} \bigg) dx_j d\widehat x_j\\
& \le C' \varepsilon^{2kd} \prod_{j=1}^k\left[\frac{1}{|\widehat{s}_j - s_j|} \exp\Big( - \frac{|\widehat{y}_j|^2}{16k} \Big)\right].
\end{align*}
Returning to our integral S, we have
\begin{align}\label{integralSwithCOV}
S \le C\iint_{\substack{z_1, z_2 \in E,\\|z_1-z_2|>4\varepsilon}}\mu(dz_1)\mu(dz_2) \prod_{j=1}^k \iint_{[2j-1, 2j]^2} \frac{1}{|\widehat{s}_j-s_j|}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) ds_j d\widehat{s}_j.
\end{align}
Let us write
\begin{align*}
\iint_{[2j-1, 2j]^2}\frac{1}{|\widehat{s}_j-s_j|}\exp\bigg(-\frac{|\widehat{y_j}|^2}{16k}\bigg) ds_jd\widehat{s}_j = L_j+M_j,
\end{align*}
where
\begin{align*}
L_j = \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\quad\\|\widehat{s}_j-s_j|<|z_1-z_2|^{1/H}}}\frac{1}{|\widehat{s}_j-s_j|}\exp\bigg(-\frac{|\widehat{y}_j|^2}{16k}\bigg)ds_jd\widehat{s}_j
\end{align*}
and
\begin{align*}
M_j = \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\quad\\|\widehat{s}_j-s_j|\ge|z_1-z_2|^{1/H}}}\frac{1}{|\widehat{s}_j-s_j|}\exp\bigg(-\frac{|\widehat{y}_j|^2}{16k}\bigg)ds_jd\widehat{s}_j.
\end{align*}
For the integral $L_j$, recalling the definition of $\widehat{y}_j$ and noting that the function $y \mapsto |y|^{1/H} \exp(-|y|^2/16k)$ is bounded on $\mathbb{R}^d$, we see that
\begin{align*}
L_j &=\iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\quad\\|\widehat{s}_j-s_j|<|z_1-z_2|^{1/H}}}\frac{1}{|z_1-z_2|^{1/H}}|\widehat{y}_j|^{1/H}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) ds_jd\widehat{s}_j\\
& \le \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\quad\\|\widehat{s}_j-s_j|<|z_1-z_2|^{1/H}}}\frac{C}{|z_1-z_2|^{1/H}}ds_jd\widehat{s}_j\\
& \le C'
\end{align*}
and by \eqref{Eq:log_lower_bd}, we have
\[ L_j \le C \log\frac{1}{|z_1-z_2|} . \]
Considering $M_j$, since $\exp(-|y|^2/16k) \le 1$ for all $y \in \mathbb{R}^d$, we get that
\begin{align*}
M_j \le \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\quad\\ |\widehat{s}_j-s_j|\ge|z_1-z_2|^{1/H}}}\frac{1}{|\widehat{s}_j-s_j|}ds_jd\widehat{s}_j.
\end{align*}
Elementary calculations show that for any $a \in \mathbb{R}$ and $0 < x < 1$,
\begin{equation}\label{int_est_log}
\iint_{\substack{s, \widehat{s} \in [a ,a+1],\\ |\widehat{s}-s| > x}} \frac{1}{|\widehat{s}-s|} ds\, d\widehat{s} = 2\log\frac{1}{x} - 2(1-x).
\end{equation}
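For completeness, here is the computation behind \eqref{int_est_log} (after translating $[a, a+1]$ to $[0,1]$ and using the symmetry in $s, \widehat{s}$):
\[ \iint_{\substack{s, \widehat{s} \in [0,1],\\ |\widehat{s}-s| > x}} \frac{ds\, d\widehat{s}}{|\widehat{s}-s|} = 2\int_0^{1-x} \int_{s+x}^1 \frac{d\widehat{s}\, ds}{\widehat{s}-s} = 2\int_0^{1-x} \Big( \log(1-s) - \log x \Big)\, ds = 2\log\frac{1}{x} - 2(1-x). \]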
It follows that
\[ M_j \le \frac{2}{H}\log\frac{1}{|z_1-z_2|}. \]
Then we can combine the estimates for $L_j$ and $M_j$ to deduce from \eqref{integralSwithCOV} that $S$ has the following upper bound:
\begin{align}\label{bound_S}
S \le C \iint_{\substack{z_1, z_2 \in E,\\|z_1-z_2|>4\varepsilon}}
\bigg(\log\frac{1}{|z_1-z_2|}\bigg)^k\mu(dz_1)\mu(dz_2).
\end{align}
Let us consider $F$. Assume $|z_1 - z_2| \leq 4\varepsilon$. Again, we first consider the probability
\[ \mathbb{P}\bigg(\bigcap_{j=1}^k\big\{|B_{s_j}-z_1|\le\varepsilon,|B_{\widehat{s}_j}-z_2|\le\varepsilon\big\}\bigg)\]
with normalized terms. We use the same process from \eqref{Eq:UBint} to \eqref{Eq:UBexp}, and integrate $dx_1, \dots, dx_k$ to find
\begin{align*}
\mathbb{P}\bigg(\bigcap_{j=1}^k\big\{|B_{s_j}-z_1|\le\varepsilon,|B_{\widehat{s}_j}-z_2|\le\varepsilon\big\}\bigg)
& \le \int\cdots\int_D \frac{1}{(2\pi)^{kd}(\det\Sigma)^{1/2}} \exp\bigg(-\frac{x\Sigma^{-1}x^T}{2} \bigg) dx_1 d\widehat x_1 \cdots dx_k d\widehat x_k\\
& \le C \varepsilon^{kd} \prod_{j=1}^k \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j.
\end{align*}
It follows that
\begin{align}\label{Eq:bound_F}
\begin{aligned}
F \le C\varepsilon^{-kd}\iint_{\substack{z_1, z_2 \in E,\\ |z_1 - z_2| \le 4\varepsilon}}
\mu(dz_1) \mu(dz_2) \prod_{j=1}^k \iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j.
\end{aligned}
\end{align}
Now, we consider the integral
\begin{align*}
\iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j
\end{align*}
as a sum of two integrals: one over the region where the variables $\widehat{s}_j$ and $s_j$ satisfy $|\widehat{s}_j-s_j|>\varepsilon^d$, and the other over the region where $|\widehat{s}_j-s_j| \leq \varepsilon^d$ (note that $\varepsilon^d = \varepsilon^{1/H}$ since $Hd=1$).
Then,
\begin{align*}
& \iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j\\
& = \iint\limits_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\\|\widehat{s}_j-s_j|>\varepsilon^d}}ds_jd\widehat{s}_j\int_{D(\widehat{y}_j,2\widehat{\varepsilon}_j)}\exp\bigg(-\frac{|\widehat{x}_j|^2}{16k}\bigg)d\widehat{x}_j + \iint\limits_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\\|\widehat{s}_j-s_j|\le\varepsilon^d}}ds_jd\widehat{s}_j\int_{D(\widehat{y}_j,2\widehat{\varepsilon}_j)}\exp\bigg(-\frac{|\widehat{x}_j|^2}{16k}\bigg)d\widehat{x}_j .
\end{align*}
For the first integral, we use the fact that $\exp(-|x|^2/16k) \le 1$ for all $x$, and the Lebesgue measure of the ball $D(\widehat{y}_j, 2\widehat{\varepsilon}_j)$ is $C {\widehat{\varepsilon_j}}^d = C\varepsilon^d/|\widehat{s}_j - s_j|$ (recall that $\widehat\varepsilon_j = \varepsilon/|\widehat{s}_j - s_j|^{H}$ and $Hd = 1$). For the second integral, we use $\int_{\mathbb{R}^d} \exp(-|x|^2/16k) dx < \infty$.
We get that
\begin{align*}
&\iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j\\
&\le \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\\|\widehat{s}_j-s_j|>\varepsilon^d}} \frac{C\varepsilon^d}{|\widehat{s}_j-s_j|}ds_jd\widehat{s}_j + \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\\|\widehat{s}_j-s_j|\leq\varepsilon^d}}C ds_jd\widehat{s}_j.
\end{align*}
Then by \eqref{int_est_log} and \eqref{Eq:log_lower_bd}, and also the assumption that $|z_1 - z_2| \le 4\varepsilon$, we have
\begin{align*}
&\iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j\\
& \le C\varepsilon^d\log\frac{1}{\varepsilon} + C\varepsilon^d
\le C'\varepsilon^d\log\frac{1}{|z_1 - z_2|}.
\end{align*}
It follows that
\begin{align}\label{bound_F}
F \le C \iint_{\substack{z_1, z_2 \in E,\\|z_1-z_2|\le4\varepsilon}} \bigg(\log \frac{1}{|z_1-z_2|} \bigg)^k \mu(dz_1) \mu(dz_2).
\end{align}
Now consider $F + S$. Combining \eqref{bound_S} and \eqref{bound_F}, we have
\[ \mathbb{E}(I_\varepsilon^2)=F+S\le C\int_E \int_E \bigg(\log\frac{1}{|z_1-z_2|}\bigg)^k \mu(dz_1)\mu(dz_2) =: C_4. \]
By \eqref{assump}, $C_4$ is a finite constant.
Given the lower bound of $\mathbb{E}(I_\varepsilon)^2$ and the upper bound of $\mathbb{E}(I_\varepsilon^2)$, we deduce from \eqref{Eq:CS} that
\[ \mathbb{P}(I_\varepsilon>0) \ge C_3/C_4 > 0 \]
for all small $\varepsilon > 0$.
Because $C_3$ and $C_4$ are independent of $\varepsilon$, we can let $\varepsilon \rightarrow 0$ to conclude that the event that $E$ contains a $k$-tuple point $z = B_{s_1} =\cdots = B_{s_k}$ for some $s_1 \in [1, 2], s_2 \in [3, 4], \dots, s_k \in [2k-1, 2k]$ has positive probability.
\end{proof}
\section{Multiple Points of Fractional Brownian Motion with $Hd > 1$}
The following result is the analogue of Theorem \ref{ktuplepointsHd=1} for $Hd > 1$.
\begin{thm}
\label{ktupleHd>1}
Let $B$ be a fractional Brownian motion in $\mathbb{R}^d$ with Hurst index $H$ such that $Hd>1$. If $E$ is a fixed, nonempty compact set in $\mathbb{R}^d$ with positive capacity with respect to the function $\phi(s) = s^{-k(d-1/H)}$, then $E$ contains $k$-tuple points with positive probability.
\end{thm}
\begin{proof}
It is easy to see that $E$ has positive $\phi$-capacity if and only if $\lambda E$ has positive $\phi$-capacity for all $\lambda > 0$, so we can assume that $E$ is contained in a ball of radius $1/3$ centered at the origin, so that $|z_1-z_2| \le 2/3 < 1$ for all $z_1, z_2 \in E$.
We can find a probability measure $\mu$ on $E$ such that
\begin{align}\label{finite_energy2}
\int_E\int_E \frac{1}{|z_1-z_2|^{k(d-1/H)}} \mu(dz_1)\mu(dz_2) < \infty.
\end{align}
As before, for small $\varepsilon > 0$, we consider the integral
\begin{align*}
I_{\varepsilon}= \frac{1}{\varepsilon^{kd}}\int_{E}\mu(dz) \prod_{j=1}^k \int_{2j-1}^{2j}ds_j \textbf{K}_\varepsilon({|B_{s_j}-z|}).
\end{align*}
We prove a lower bound for $\mathbb{E}(I_\varepsilon)$ and an upper bound for $\mathbb{E}(I_\varepsilon^2)$.
The proof follows as in Theorem \ref{ktuplepointsHd=1} with changes after \eqref{integralSwithCOV}.
With $Hd > 1$, \eqref{integralSwithCOV} now becomes
\begin{align}\label{Eq:bound_S}
S \le C\iint_{\substack{z_1, z_2 \in E,\\|z_1-z_2|>4\varepsilon}}\mu(dz_1)\mu(dz_2) \prod_{j=1}^k \iint_{[2j-1, 2j]^2} \frac{1}{|\widehat{s}_j-s_j|^{Hd}}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) ds_j d\widehat{s}_j.
\end{align}
Splitting the integral in $ds_j d\widehat s_j$ into $L_j + M_j$ as before, we get that
\begin{align}\label{L+M}
\notag&\iint_{[2j-1, 2j]^2} \frac{1}{|\widehat{s}_j-s_j|^{Hd}}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) ds_j d\widehat{s}_j\\
\notag& = \iint_{\substack{s_j, \widehat s_j \in [2j-1, 2j],\quad\\|\widehat s_j - s_j| < |z_1 - z_2|^{1/H}}} \frac{ds_j d\widehat{s}_j }{|\widehat{s}_j-s_j|^{Hd}}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) + \iint_{\substack{s_j, \widehat s_j \in [2j-1, 2j],\quad\\|\widehat s_j - s_j| \ge |z_1 - z_2|^{1/H}}} \frac{ds_j d\widehat{s}_j }{|\widehat{s}_j-s_j|^{Hd}}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) \\
& \le \iint_{\substack{s_j, \widehat s_j \in [2j-1, 2j],\quad\\|\widehat s_j - s_j| < |z_1 - z_2|^{1/H}}} \frac{ds_j d\widehat{s}_j }{|z_1 - z_2|^{d}}|\widehat y_j|^d\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) + \iint_{\substack{s_j, \widehat s_j \in [2j-1, 2j],\quad\\|\widehat s_j - s_j| \ge |z_1 - z_2|^{1/H}}} \frac{ds_j d\widehat{s}_j }{|\widehat{s}_j-s_j|^{Hd}}.
\end{align}
Since the function $y \mapsto |y|^d \exp(-|y|^2/16k)$ is bounded on $\mathbb{R}^d$, the first integral in \eqref{L+M} is
\[ \le \frac{C}{|z_1-z_2|^{d-1/H}}. \]
Elementary calculations (sketched below) show that for any $a \in \mathbb{R}$, $0 < x < 1$ and $Hd > 1$ with $Hd \neq 2$,
\begin{align}\label{int_est_power}
\iint_{\substack{s, \widehat{s} \in [a, a+1],\\ |\widehat{s}-s| > x}} \frac{1}{|\widehat{s}-s|^{Hd}} ds \, d\widehat{s}
&= \frac{2(Hd-1)^{-1}}{x^{Hd-1}} + \frac{2(2-Hd)^{-1}}{x^{Hd-2}} + \frac{2}{(1-Hd)(2-Hd)},
\end{align}
which is bounded by $C/x^{Hd-1}$ for some constant $C$.
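In detail, by symmetry the left-hand side of \eqref{int_est_power} equals
\[ 2\int_0^{1-x}\int_{s+x}^{1} (\widehat{s}-s)^{-Hd}\, d\widehat{s}\, ds = \frac{2}{Hd-1}\int_0^{1-x} \Big( x^{1-Hd} - (1-s)^{1-Hd} \Big)\, ds, \]
and evaluating the remaining integral yields \eqref{int_est_power} when $Hd \neq 2$. In the excluded case $Hd = 2$, the same computation gives $2(1-x)/x - 2\log(1/x) \le 2/x^{Hd-1}$, so the bound $C/x^{Hd-1}$ holds for all $Hd > 1$.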
So the second integral in \eqref{L+M} is
\[ \le \frac{C}{(|z_1-z_2|^{1/H})^{Hd-1}} = \frac{C}{|z_1-z_2|^{d-1/H}}. \]
It follows that
\[ \iint_{[2j-1, 2j]^2} \frac{1}{|\widehat{s}_j-s_j|^{Hd}}\exp\bigg(\frac{-|\widehat{y}_j|^2}{16k}\bigg) ds_j d\widehat{s}_j \le \frac{C}{|z_1-z_2|^{d-1/H}}, \]
and hence
\begin{align*}
S \le C \iint_{\substack{z_1, z_2 \in E,\\|z_1-z_2| > 4\varepsilon}} \frac{1}{|z_1-z_2|^{k(d-1/H)}} \mu(dz_1) \mu(dz_2).
\end{align*}
For $F$, we consider $|z_1 - z_2| \le 4\varepsilon$. The bound \eqref{Eq:bound_F} is still valid. For the integral
\begin{align*}
\iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j
\end{align*}
in \eqref{Eq:bound_F}, we consider it as the sum of two integrals over $|\widehat{s}_j - s_j| > \varepsilon^{1/H}$ and
$|\widehat{s}_j - s_j| \le \varepsilon^{1/H}$:
\begin{align*}
& \iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j\\
& = \iint\limits_{\substack{s_j, \widehat s_j \in [2j-1, 2j],\\|\widehat s_j - s_j|> \varepsilon^{1/H}}} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j + \iint\limits_{\substack{s_j, \widehat s_j \in [2j-1, 2j],\\ |\widehat s_j - s_j| \le \varepsilon^{1/H}}} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j.
\end{align*}
For the first term, we bound the exponential function by 1 and integrate over the ball $D(\widehat{y}_j, 2\widehat{\varepsilon}_j)$. For the second term, we bound it by replacing the ball $D(\widehat{y}_j, 2\widehat{\varepsilon}_j)$ by all of $\mathbb{R}^d$ and note that $\int_{\mathbb{R}^d} \exp(-|\widehat{x}_j|^2/16k) d\widehat{x}_j$ is finite. It follows that
\begin{align*}
&\iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j\\
& \le \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\\|\widehat{s}_j - s_j|> \varepsilon^{1/H}}} \frac{C\varepsilon^d}{|\widehat{s}_j-s_j|^{Hd}}ds_j d\widehat{s}_j
+ \iint_{\substack{s_j, \widehat{s}_j \in [2j-1,2j],\\|\widehat{s}_j - s_j|\le \varepsilon^{1/H}}}C ds_j d\widehat{s}_j.
\end{align*}
Then by \eqref{int_est_power} and $|z_1-z_2| \le 4\varepsilon$,
\begin{align*}
&\iint_{[2j-1, 2j]^2} ds_j d\widehat s_j \int_{D(\widehat y_j, 2\widehat \varepsilon_j)} \exp\bigg( -\frac{|\widehat x_j|^2}{16k}\bigg) d\widehat x_j\\
& \le \frac{C\varepsilon^d}{\varepsilon^{d-1/H}} + C\varepsilon^{1/H}\\
& \le \frac{C'\varepsilon^d}{|z_1-z_2|^{d-1/H}}.
\end{align*}
Hence
\begin{align*}
F \le C \iint_{\substack{z_1, z_2 \in E,\\|z_1-z_2| \le 4\varepsilon}} \frac{1}{|z_1-z_2|^{k(d-1/H)}} \mu(dz_1)\mu(dz_2).
\end{align*}
Therefore, we obtain the following bound for $\mathbb{E}(I_\varepsilon^2)$:
\[ \mathbb{E}(I_\varepsilon^2) = S + F \le C \int_E \int_E \frac{1}{|z_1-z_2|^{k(d-1/H)}} \mu(dz_1)\mu(dz_2). \]
This bound is finite by \eqref{finite_energy2} and is independent of $\varepsilon$. The rest of the proof is the same as in Theorem \ref{ktuplepointsHd=1}.
\end{proof}
\section{Acknowledgements}
We would like to thank Dr.~Yimin Xiao for his guidance in the direction of our research project.
This paper was written as a part of the SURIEM REU at Michigan State University, and we would finally like to thank Dr. Robert Bell for his contributions to organizing this program. The SURIEM REU is supported by MSU, the NSA, and the NSF.
| {
"timestamp": "2020-03-09T01:04:39",
"yymm": "2003",
"arxiv_id": "2003.03005",
"language": "en",
"url": "https://arxiv.org/abs/2003.03005",
"abstract": "Nils Tongring (1987) proved sufficient conditions for a compact set to contain $k$-tuple points of a Brownian motion. In this paper, we extend these findings to the fractional Brownian motion. Using the property of strong local nondeterminism, we show that if $B$ is a fractional Brownian motion in $\\mathbb{R}^d$ with Hurst index $H$ such that $Hd=1$, and $E$ is a fixed, nonempty compact set in $\\mathbb{R}^d$ with positive capacity with respect to the function $\\phi(s) = (\\log_+(1/s))^k$, then $E$ contains $k$-tuple points with positive probability. For the $Hd > 1$ case, the same result holds with the function replaced by $\\phi(s) = s^{-k(d-1/H)}$.",
"subjects": "Probability (math.PR)",
"title": "The Multiple Points of Fractional Brownian Motion",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98657174604767,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449305242898
} |
https://arxiv.org/abs/1512.04001 | Number Systems with Simplicity Hierarchies II | In [15], the algebraico-tree-theoretic simplicity hierarchical structure of J. H. Conway's ordered field No of surreal numbers was brought to the fore and employed to provide necessary and sufficient conditions for an ordered field to be isomorphic to an initial subfield of No, i.e. a subfield of No that is an initial subtree of No. In this sequel to [15], analogous results for ordered abelian groups and ordered domains are established which in turn are employed to characterize the convex subgroups and convex subdomains of initial subfields of No that are themselves initial. It is further shown that an initial subdomain of No is discrete if and only if it is an initial subdomain of No's canonical integer part Oz of omnifc integers. Finally, extending results of [15], the theories of divisible ordered abelian groups and real-closed ordered fields are shown to be the sole theories of ordered abelian groups and ordered fields all of whose models are isomorphic to initial subgroups and initial subfields of No. | \section{Introduction}
J. H.
Conway \cite{CO} introduced a real-closed field of \emph{surreal numbers} embracing the reals and the ordinals as well as a great
many less familiar numbers including $ - \omega $, $\omega /2$, $1/\omega $, $\sqrt \omega $
and $e^\omega $, to name only a few. This particular real-closed field, which
Conway calls ${\bf No}$, is so remarkably inclusive that, subject to the proviso that
numbers---construed here as members of ordered fields---be individually definable in
terms of sets of von Neumann-Bernays-G\"{o}del set theory with global choice (NBG)
\cite{ME}, it may be said to contain ``All Numbers Great and Small.'' In this respect, ${\bf No}$ bears much the same relation to ordered fields that the ordered field $\mathbb{R}$ of real numbers bears to Archimedean ordered fields (\cite{EH3}, \cite{EH5}, \cite{EH8}).
In addition to its inclusive structure as an ordered field, ${\bf No}$ has a rich \emph{simplicity hierarchical (\emph{or} s-hierarchical) structure} \cite{EH5}, \cite{EH4}, that depends upon
its structure as a \emph{lexicographically ordered full binary tree} and arises
from the fact that the sums and products of any two members of the tree are the simplest possible
elements of the tree consistent with ${\bf No}$'s structure as an ordered group and an ordered
field, respectively, it being understood that $x$ is \emph{simpler than} $y$ just in case $x$ is a
predecessor of $y$ in the tree.
Among the striking s--hierarchical features of ${\bf No}$ that emerged from \cite{EH5} is that much as the surreal numbers emerge from the empty set of surreal
numbers by means of a transfinite recursion that provides an unfolding of the entire spectrum of
numbers great and small (modulo the aforementioned provisos), the recursive process of defining
${\bf No}$'s arithmetic in turn provides an unfolding of the entire spectrum of ordered abelian
groups (ordered fields) in such a way that an isomorphic copy of every such
system either emerges as an initial subtree of ${\bf No}$ or is contained in a theoretically
distinguished instance of such a system that does. In particular, it was shown that every divisible
ordered abelian group (real-closed ordered field) is isomorphic to an \emph{initial subgroup
(initial subfield)} of ${\bf No}$.
The divisible ordered abelian groups and real-closed ordered fields, however, do not exhaust the ordered abelian groups and ordered fields that are isomorphic to initial subgroups and subfields of ${\bf No}$. For example, every 2-divisible Archimedean ordered abelian group has an initial isomorphic copy in ${\bf No}$, as does every Archimedean ordered field \cite[Theorem 8]{EH5}, and these groups and fields of course are not in general divisible or real-closed. In the case of ordered fields, more generally, in \cite[Theorem 18]{EH5} it was shown that:
\vspace{5pt}
\noindent
\emph{An ordered field is isomorphic to an initial subfield of {\bf No} if and only if it is isomorphic to a truncation closed, cross sectional subfield of a power series field ${\mathbb R}((t^\Gamma))_{\bf On}$ where $\Gamma$ is isomorphic to an initial subgroup of {\bf No}.}
\vspace{5pt}
The present paper is a sequel to \cite{EH5}. Following some preliminary material, in \S 5 and \S 6 we generalize for ordered abelian groups and ordered domains the just-said result for ordered fields, and in \S 7 we employ these generalizations to characterize the convex subgroups and convex subdomains of initial subfields of ${\bf No}$ that are themselves initial. We further show that an initial subdomain of ${\bf No}$ is discrete if and only if it is an initial subdomain of ${\bf No}$'s canonical integer part ${\bf Oz}$ of \emph{omnific integers}. And, in \S 8, we extend results of \cite{EH5} by showing that the theories of divisible ordered abelian groups and real-closed ordered fields are the \emph{sole} theories of ordered abelian groups and ordered fields \emph{all} of whose models are isomorphic to initial subgroups and initial subfields of ${\bf No}$. Finally, in \S 9 we state a pair of open questions regarding s-hierarchical ordered algebraic systems that supplement another open question raised in \S 8.
Throughout the paper, the underlying set theory is assumed to be NBG and as such by
\emph{class} we mean set or proper class, the latter of which, in virtue of the axioms of global
choice and foundation, always has the ``cardinality'' of the universe of sets. For additional
information on formalizing the theory of surreal numbers in NBG, we refer the reader to
\cite{EH2}.
\section{Preliminaries I: Lexicographically ordered binary trees and Surreal Numbers}
A \emph{tree} $\left\langle {A, < _s } \right\rangle $ is a partially ordered class such that for
each $x \in A$, the class $\left\{ {y \in A:y < _s x} \right\}$ of \emph{predecessors} of $x$,
written `$pr_A \left( x \right)$', is a set well ordered by $ < _s $. A maximal subclass of $A$ well
ordered by $ < _s $ is called a \emph{branch} of the tree. Two elements $x$ and $y$ of $A$ are
said to be \emph{incomparable} if $x \ne y$, $x\not < _s y$ and $y\not < _s x$. An
\emph{initial subtree} of $\left\langle {A, < _s } \right\rangle $ is a subclass $A'$ of $A$ with
the induced order such that for each $x \in A'$, $pr_{A'} \left( x \right) = pr_A \left( x \right)$.
The \emph{tree-rank} of $x \in A$, written `$\rho _A (x)$', is the ordinal corresponding to the
well-ordered set $\left\langle {pr_A \left( x \right), <_s}\right\rangle $; the $\alpha $th
\emph{level} of $A$ is $\left\{ {x \in A:\rho _A (x) = \alpha } \right\}$; and a root of
$A$ is a member of the zeroth level. If $x,y \in A$, then $y$ is said to be an \emph{immediate
successor} of $x$ if $x < _s y$ and $\rho _A (y) = \rho _A (x) + 1$; and if $(x_\alpha
)_{\alpha < \beta }$ is a chain in $A$ (i.e., a subclass of $A$ totally ordered by $ < _s $), then
$y$ is said to be an \emph{immediate successor of the chain} if $x_\alpha < _s y$ for all
$\alpha < \beta$ and $\rho _A (y)$ is the least ordinal greater than the tree-ranks of the members
of the chain. The \emph{length} of a chain $(x_\alpha )_{\alpha < \beta }$ in $A$ is the
ordinal $\beta $.
A tree $\left\langle {A, < _s } \right\rangle $ is said to be \emph{binary} if each member
of $A$ has at most two immediate successors and every chain in $A$ of limit length has at most
one immediate successor. If every member of $A$ has two immediate successors and every
chain in $A$ of limit length (including the empty chain) has an immediate successor, then the
binary tree is said to be \emph{full}. Since a full binary tree has a level for each ordinal, the
universe of a full binary tree is a proper class.
Following \cite[Definition 1]{EH5}, a binary tree $\left\langle {A, < _s } \right\rangle $ together with a total ordering $ < $ defined
on $A$ will be said to be \emph{lexicographically ordered} if for all $x,y \in A$, $x$ is
incomparable with $y$ if and only if $x$ and $y$ have a common predecessor lying between
them (i.e. there is a $z \in A$ such that $z<_s x$, $z<_s y$ and either $x<z<y$ or $y<z<x$). The appellation ``lexicographically ordered" is motivated by the fact that: $\left\langle {A, < , < _s } \right\rangle $ is a lexicographically ordered binary tree if and only if $\left\langle {A, < , < _s } \right\rangle $ is isomorphic to an initial ordered subtree of the \emph{lexicographically ordered canonical full binary tree} $\langle B, < _{lex\left( B \right)} , < _B \rangle $, where $B$ is the class
of all sequences of $ - $s and $ + $s indexed over some ordinal, $x < _B y$ signifies that
$x$ is a proper initial subsequence of $y$, and $(x_\alpha )_{\alpha < \mu } < _{lex\left(
B \right)} (y_\alpha )_{\alpha < \sigma }$ if and only if $x_\beta = y_\beta $ for all $\beta < $
some $\delta $, but $x_\delta < y_\delta$, it being understood that $ - < $ \emph{undefined}
$ < + $ \cite[Theorem 1]{EH5}.
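To illustrate, consider the sequences $(+,-)$, $(+)$ and $(+,+)$ in $B$: $(+)$ is a proper initial subsequence of the other two, so $(+) <_B (+,-)$ and $(+) <_B (+,+)$, while $(+,-) <_{lex\left( B \right)} (+) <_{lex\left( B \right)} (+,+)$ since $ - < $ \emph{undefined} $ < + $ in the second coordinate. The $<_B$-incomparable pair $(+,-)$ and $(+,+)$ thus has the common predecessor $(+)$ lying between them; under the usual identification of sign sequences with surreal numbers, these three sequences are $1/2$, $1$ and $2$, respectively.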
\begin{notational conventions*}
{\rm Let $\left\langle {A, < , < _s } \right\rangle $ be a
lexicographically ordered binary tree. If $\left( {L,R} \right)$ is a pair of subclasses of $A$ for
which every member of $L$ precedes every member of $R$, then we will write `$L < R$'.
Also, if $x$ and $y$ are members of $A$, then `$x < _s y$' will be read ``$x$ \emph{is simpler
than} $y$''; and if there is an $x \in I = \left\{ {y \in A:L < \left\{ y \right\} < R} \right\}$ such
that $x < _s y$ for all $y \in I - \left\{ x \right\}$, then we will denote this \emph{simplest
member of} $A$ \emph{lying between the members of} $L$ \emph{and the members of} $R$
by `$\left\{ {L\, | \, R} \right\}$'. Finally, by `$L_{s\left( x \right)} $' we mean $\left\{ {a \in A:a <
_s x\;{\rm{and}}\;a < x} \right\}$ and by `$R_{s\left( x \right)} $' we mean $\left\{ {a \in A:a <
_s x\;{\rm{and}}\;x < a} \right\}$.}
\end{notational conventions*}
The following three propositions collect together a number of properties of, or results about,
lexicographically ordered binary trees that will be appealed to in subsequent portions of the
paper.
\begin{proposition}\cite[Theorem 2]{EH5}\label{P:Bc}
Let $\left\langle {A, < , < _s } \right\rangle $ be a lexicographically ordered binary tree. (i) For
all $x \in A$, $x = \left\{ {L_{s\left( x \right)} \, | \, R_{s\left( x \right)} } \right\}$; (ii) for all $x,y
\in A$, $x < _s y$ if and only if $L_{s\left( x \right)} < \left\{ y \right\} < R_{s\left( x \right)} $
and $y \ne x$; (iii) for all $x \in A$ and all $L,R \subseteq A$, $x = \left\{ {L \, | \, R} \right\}$ if
and only if $L$ is cofinal with $L_{s\left( x \right)} $ and $R$ is coinitial with $R_{s\left( x
\right)} $ if and only if $L < \{x\} < R$ and $\{ y \in A: L < \{y\} < R\} \subseteq \{ y \in A: L_{s(x)} < \{y\} < R_{s(x)}\}$.
\end{proposition}
Let $\left\langle {{\bf{No}}, < , < _s } \right\rangle $ be the \emph{lexicographically ordered binary tree of surreal numbers} constructed in any of the manners found in the literature (\cite{EH4}, \cite{EH5}, \cite{EH5.1}, \cite{EH8}, \cite{vdDE}, \cite{SCST}), including simply letting $\left\langle {{\bf{No}}, < , < _s } \right\rangle =\langle B, < _{lex\left( B \right)} , < _B \rangle $.\footnote{Conway's cut construction \cite{CO} and the related construction based on Cuesta Dutari cuts introduced in \cite{EH1} (and adopted in \cite{AE1} and \cite{AE2}), do not include {\bf{No}}'s lexicographically ordered binary tree structure. However, as was noted in \cite[page 257]{EH4}, they admit relational extensions to the ordered tree structure vis-\'a-vis the definition: for all $x = (L,R) , y \in {\bf{No}}$, $x<_sy$ if and only if $L < \{y\} < R$ and $y \neq x$.
The identification $\left\langle {{\bf{No}}, < , < _s } \right\rangle =\langle B, < _{lex\left( B \right)} , < _B \rangle $, which is employed in \cite{EH5}, \cite{vdDE},\cite{KM} and \cite{BM}, is simply a relational extension of the familiar (non tree-theoretic) construction of {\bf{No}} based on \emph{sign-expansions}---the members of $B$---introduced by Conway \cite[page 65]{CO}, and made popular by Gonshor \cite{GO}.} Central to the development of the s-hierarchical theory of surreal numbers is the following result where a lexicographically ordered binary tree $\left\langle {A, < , < _s } \right\rangle $ is said
to be \emph{complete} \cite[Definition 6]{EH5}, if whenever $L$ and $R$ are subsets
of $A$ for which $L < R$, there is an $x \in A$ such that $x = \left\{ {L\, | \, R} \right\}$.
\begin{proposition}\cite[Theorem 4]{EH5}\cite{EH5.2}\label{P:Bd}
A lexicographically ordered binary tree is complete if and only if it is full if and only if it is isomorphic to $\left\langle {{\bf{No}}, < , < _s } \right\rangle $.
\end{proposition}
An immediate consequence of Proposition \ref{P:Bd} is
\begin{proposition}\label{P:eta}
Let $\left\langle {A, < , < _s } \right\rangle $ be a lexicographically ordered binary tree. $\left\langle {A, < _s } \right\rangle $ is full if and only if $\left\langle {A, < } \right\rangle $ is an $\eta_{\bf {On}}$-\emph{ordering} (i.e. whenever $L$ and $R$ are subsets
of $A$ for which $L < R$, there is an $x \in A$ such that $L <{x}< R$).
\end{proposition}
\section{Preliminaries II: Conway names}
\label{sec:theorems}
Let $\mathbb{D}$ be the set of all surreal numbers having finite tree-rank, and
$$\mathbb{R}=\mathbb{D}\cup \left \{ \left \{ L\, | \, R \right \}:\left ( L,R \right )\textrm{ is a
Dedekind gap in }\mathbb{D} \right \}.$$
The following result regarding the structure of
$\mathbb{R}$ is essentially due to Conway \cite[pages 12, 23-25]{CO}.
\begin{proposition}\label{P:Cb}
$\mathbb{R}$ (with $ + , - , \cdot $ and $ < $ defined \`{a} la ${\bf{No}}$) is
isomorphic to the ordered field of real numbers defined in any of the more familiar ways,
$\mathbb{D}$ being ${\bf{No}}$'s ring of dyadic rationals (i.e., rationals of the form $m/2^n
$ where $m$ and $n$ are integers); $n = \left\{ {0,\dots,n - 1 \, | \, \varnothing } \right\}$ for each
positive integer $n$, $ - n = \left\{ {\varnothing \, | - \left( {n - 1} \right),\dots,0} \right\}$ for each
positive integer $n$, $0 = \left\{ {\varnothing \, | \, \varnothing } \right\}$, and the remainder of the
dyadics are the arithmetic means of their left and right predecessors of greatest tree-rank; e.g.,
$1/2 = \left\{ {0 \, | \, 1} \right\}$. The systems of natural numbers and integers so defined are henceforth denoted $\mathbb{N}$ and $\mathbb{Z}$, respectively.
\end{proposition}
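By way of illustration (a computational sketch of the simplicity rule just described, with function names of our own choosing; it is not part of the formal development), the value of a cut $\left\{ L \, | \, R \right\}$ of finite sets of dyadic rationals can be computed as follows:
\begin{verbatim}
from fractions import Fraction
from math import floor, ceil

def cut_value(L, R):
    """Value of {L | R} for finite sets of dyadic rationals with L < R:
    the earliest-born (simplest) number lying strictly between them."""
    lo = max(L) if L else None
    hi = min(R) if R else None
    if lo is None and hi is None:
        return Fraction(0)                 # { | } = 0
    if hi is None:                         # { L | }: simplest number > lo
        return Fraction(0) if lo < 0 else Fraction(floor(lo) + 1)
    if lo is None:                         # { | R }: simplest number < hi
        return Fraction(0) if hi > 0 else Fraction(ceil(hi) - 1)
    if lo < 0 < hi:
        return Fraction(0)
    if hi <= 0:                            # mirror into the positives
        return -cut_value({-hi}, {-lo})
    n = floor(lo) + 1
    if n < hi:                             # an integer fits; take the least
        return Fraction(n)
    a, b = Fraction(floor(lo)), Fraction(floor(lo) + 1)
    x = (a + b) / 2                        # bisect: arithmetic means of the
    while not (lo < x < hi):               # predecessors of greatest tree-rank
        if x <= lo:
            a = x
        else:
            b = x
        x = (a + b) / 2
    return x

# cut_value({Fraction(0)}, {Fraction(1)}) == Fraction(1, 2),
# matching 1/2 = {0 | 1} above.
\end{verbatim}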
${\bf No}$'s canonical class $\bf On$ of ordinals consists of the members of the ``rightmost" branch of $\left\langle {{\bf{No}}, < , < _s } \right\rangle $, i.e. the unique branch of $\left\langle {{\bf{No}}, < , < _s } \right\rangle $ whose members satisfy the condition: $x<y$ if and only if $x<_sy$. In those formulations where surreal numbers are pairs $(L,R)$ of sets of previously defined surreal numbers (\cite{CO}, \cite{EH8}, \cite{AE1}), the ordinals are the surreal numbers of the form $(L,\varnothing)$; and in the formulation \cite{GO} where surreal numbers are sign-expansions (see \S 2), the ordinals are the sequences (including the empty sequence) consisting solely of +s.
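To make the sign-expansion representation concrete, here is an informal sketch restricted to the dyadic (finite tree-rank) case; general surreal numbers require sign sequences of ordinal length, and the function name is our own:
\begin{verbatim}
from fractions import Fraction

def sign_expansion(x):
    """Finite sign expansion of a dyadic rational x as a '+'/'-' string,
    read off by walking the binary tree from its root 0 toward x."""
    x = Fraction(x)
    assert x.denominator & (x.denominator - 1) == 0, "x must be dyadic"
    signs, cur, step = "", Fraction(0), Fraction(1)
    while cur != x:
        s = "+" if x > cur else "-"
        # the step stays 1 while the expansion is a uniform run of one sign
        # (the integer stage) and halves on every move thereafter
        if signs and (s != signs[-1] or len(set(signs)) > 1):
            step /= 2
        signs += s
        cur += step if s == "+" else -step
    return signs

# sign_expansion(3) == "+++"   (ordinals are the all-plus sequences)
# sign_expansion(Fraction(3, 4)) == "+-+"
\end{verbatim}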
A striking s--hierarchical feature of ${\bf No}$ is that every surreal number can
be assigned a canonical ``proper name'' (or normal form) that is a reflection of its characteristic
s--hierarchical properties. These \emph{Conway names}, as we call them, are expressed as
formal sums of the form $\sum\nolimits_{\alpha < \beta } \omega ^{y_\alpha } \cdot r_\alpha $
where $\beta $ is an ordinal, $\left( {y_\alpha} \right)_{\alpha < \beta } $ is a strictly decreasing
sequence of surreals, and $\left( {r_\alpha } \right)_{\alpha < \beta } $ is a sequence of nonzero
real numbers, the Conway name of an ordinal being just its Cantor normal form \cite[pages 31-33]{CO},\cite[\S3.1 and \S5]{EH5}.
The surreal numbers having Conway names of the form $\omega^y$ are called \emph{leaders} since they denote the simplest positive members of the various Archimedean classes of ${\bf No}$. More formally, they may be inductively defined by the formula
\begin{equation}
\omega^{y} = \left\{ 0, n\omega^{y^L}\, |\, \frac{1}{2^n}\omega^{y^R} \right\},
\end{equation}
\noindent
where $n$ ranges over the positive integers, and $y^L$ and $y^R$ range over the elements of $L_{s(y)}$ and $R_{s(y)}$, respectively.
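For instance, taking $y=1$ in the displayed formula, $L_{s(1)}=\{0\}$ and $R_{s(1)}=\varnothing$, so $\omega^{1}=\left\{0,n\omega^{0}\,|\,\varnothing\right\}=\left\{0,n\,|\,\varnothing\right\}=\omega$; and taking $y=-1$, $L_{s(-1)}=\varnothing$ and $R_{s(-1)}=\{0\}$, so $\omega^{-1}=\left\{0\,|\,\frac{1}{2^{n}}\omega^{0}\right\}=\left\{0\,|\,\frac{1}{2^{n}}\right\}=1/\omega$, the simplest positive infinitesimal.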
There are a number of significant
relations between surreal numbers that are reflected in terms of relations
between their respective Conway names. The following collection of such results, which are known from the literature, will be appealed to in the subsequent discussion.
\begin{proposition}\cite[Theorems 11 and 15]{EH5}
\label{thrm15ehr01}
(i) For all $x,y \in {\bf No}$, $\omega^{x}<_s \omega^{y}$ if and only if $x<_{s} y$;
(ii) $\sum_{\alpha < \mu} \omega^{y_\alpha} \cdot r_\alpha <_s \sum_{\alpha < \beta} \omega^{y_\alpha} \cdot r_\alpha$ whenever $\mu <_s \beta$;\\
\begin{equation*}
\begin{split}
(iii) \sum_{\alpha < \beta} \omega^{y_\alpha} \cdot r_\alpha= \left\{\sum_{\alpha < \mu} \omega^{y_\alpha} \cdot r_\alpha+\omega^{y_\mu}\cdot\right. & \left.\left(r_\mu-\frac{1}{2^n}\right)\ \right |\\
& \left.\sum_{\alpha < \mu} \omega^{y_\alpha} \cdot r_\alpha+\omega^{y_\mu}\cdot\left(r_\mu+\frac{1}{2^n}\right)\right\},
\end{split}
\end{equation*}
if $\beta$ is a limit ordinal (where $n$ and $\mu$ range over all positive integers and all ordinals less than $\beta$, respectively).
\end{proposition}
We shall also appeal to the following compilation of results regarding Conway names which, while new to the literature, essentially consists of corollaries of results from the first author's analysis of the surreal number tree \cite{EH7} or improvements (based on that analysis) of a result of Gonshor \cite[Lemma 5.8(a)]{GO}.
\begin{lemma}
\label{squeeze}
Suppose $a,b,r \in {\mathbb R}$ and $x, y \in {\bf No}$. Then: (i) $\omega^y \cdot a <_s \omega^y \cdot b$ whenever $a <_s b$. (ii) If either $r \in \mathbb{R}-\mathbb{D}$, or $r \in \mathbb{D}-\mathbb{Z}$ and $L_{s(y)}= \varnothing$, then $$ \omega^{y}\cdot r= \{ \omega^{y}\cdot r^L\, |\, \omega^{y}\cdot r^R\};$$ moreover, in virtue of (i), $\omega^y \cdot r^L<_s \omega^y \cdot r$ and $\omega^y \cdot r^R <_s \omega^y \cdot r$ for all $r^L \in L_{s(r)}$ and $r^R \in R_{s(r)}$. (iii) If $r \in \mathbb{D}-\mathbb{Z}$ and $L_{s(y)} \neq \varnothing$, then $$ \omega^{y}\cdot r= \{ \omega^{y}\cdot r^L+\omega^{y^L}\cdot n\, |\, \omega^{y}\cdot r^R-\omega^{y^L}\cdot n\},$$ where $n$ ranges over $\mathbb{N}$; moreover, $ \omega^{y}\cdot r^L+\omega^{y^L}\cdot n\ <_s \omega^{y}\cdot r$ and $ \omega^{y}\cdot r^R-\omega^{y^L}\cdot n\ <_s \omega^{y}\cdot r$, for all $y^L \in L_{s(y)}$, $r^L \in L_{s(r)}$ and $r^R \in R_{s(r)}$.
(iv) For all $n \in \mathbb{N}$, $\frac{1}{2^n}\omega^y <_s \omega^{x}$ if $y \in R_{s(x)}$.
\end{lemma}
\begin{proof}
(i) follows immediately from \cite[Theorems 3.13 and 3.16]{EH7}, by considering a number $c \in {\mathbb R} - {\mathbb D}$ such that $b=c$ or $b <_s c$. Necessarily, $\omega^y \cdot c$ is the immediate successor of a chain of limit length having a cofinal subchain of the form $(\omega^y\cdot c_n )_{n<\omega}$ where $c$ is the immediate successor of $(c_n)_{n<\omega}$. As $a \in (c_n)_{n<\omega}$ and either $b \in (c_n)_{n<\omega}$ or $b=c$, it is the case that $\omega^y\cdot a \in (\omega^y\cdot c_n )_{n<\omega}$ and either $\omega^y \cdot b \in (\omega^y\cdot c_n )_{n<\omega}$ or $\omega^y \cdot b = \omega^y\cdot c$. In either case, $\omega^y\cdot a <_s \omega^y \cdot b$. For (ii), if $r \in \mathbb{R}-\mathbb{D}$, the result can be proved from (i) by simply forming the specified cut in the just-said cofinal subchain; and if $L_{s(y)}= \varnothing$ and $r \in \mathbb{D}-\mathbb{Z}$, the result follows from (i) and the second part of Gonshor's Lemma 5.8(a) from \cite{GO}. (iii) follows from the first part of Gonshor's Lemma 5.8(a) from \cite{GO} and repeated applications of \cite[Theorem 3.18(ii)]{EH7}. For (iv), suppose without loss of generality that $x$ is the immediate left successor of $y$; for if it is not, the immediate left successor $w$ of $y$ is a predecessor of $x$, and if we show that $\frac{1}{2^n}\omega^y$ is simpler than $\omega^{w}$ for each $n$, then, since $\omega^{w} <_s \omega^{x}$, it follows that $\frac{1}{2^n}\omega^y$ is simpler than $\omega^{x}$ as well. Consider the chain $pr_{\textbf{No}}(\omega^x)$ of predecessors of $\omega^x$. Clearly, this is a chain of limit length, as $\omega^x$ is the immediate successor of another surreal number if and only if $x=0$, in which case $R_{s(x)} = \varnothing$. By \cite[Theorems 3.13 and 3.16]{EH7}, we may conclude that $pr_{\textbf{No}}(\omega^x)$ contains a cofinal chain of the form $(\omega^{y}\cdot a_n)_{n<\omega}$ where $a_n=\frac{1}{2^n}$ for each $n$. As this chain is contained in $pr_{\textbf{No}}(\omega^x)$, $\frac{1}{2^n}\omega^y <_s \omega^{x}$ for each $n$.
\end{proof}
Let ${\mathbb R}((t^\Gamma))_{\bf On}$ be the ordered group (ordered domain; ordered field) of power series (defined \`a la Hahn \cite{H}) consisting of all formal power series of the form $\sum_{\alpha < \beta} r_\alpha t^{y_\alpha}$ where $(y_\alpha)_{\alpha < \beta \in {\bf On}}$ is a possibly empty descending sequence of elements of an ordered class (ordered monoid; ordered abelian group) $\Gamma$ and $r_\alpha \in {\mathbb R} - \{0\}$ for each $\alpha < \beta$. ${\mathbb R}((t^\Gamma))_{\bf On}$ is a set (often simply written ${\mathbb R}((t^\Gamma))$) if $\Gamma$ is a set, and a proper class otherwise. An element $x \in {\mathbb R}((t^\Gamma))_{\bf On}$ is said to be a \textit{proper truncation} of $\sum_{\alpha < \beta} r_\alpha t^{y_\alpha} \in {\mathbb R}((t^\Gamma))_{\bf On}$ if $x = \sum_{\alpha < \sigma} r_\alpha t^{y_\alpha}$ for some $\sigma < \beta$. A subgroup (subdomain; subfield) $A$ of ${\mathbb R}((t^\Gamma))_{\bf On}$ is said to be \textit{truncation closed} if every proper truncation of every member of $A$ is itself a member of $A$. A subgroup (subdomain; subfield) $A$ of ${\mathbb R}((t^\Gamma))_{\bf On}$ is said to be \textit{cross sectional} if $\{t^y : y \in \Gamma \} \subseteq A$. For a truncation closed, cross sectional subgroup (subdomain; subfield) $A$ of ${\mathbb R}((t^\Gamma))_{\bf On}$, the set ${\mathbb R}_y = \{r \in {\mathbb R} : rt^y \in A \}$ is an Archimedean ordered group (domain; field) which we will call the \textit{$y$-coefficient group (domain; field)} of $A$.
\begin{proposition}[\cite{EH5}, \cite{EH8}]
\label{prop 5}
There is an isomorphism of ordered groups from {\bf No} onto ${\mathbb R}((t^{\bf No}))_{\bf On}$ that sends each surreal number $\sum_{\alpha < \beta} \omega^{y_\alpha} \cdot r_\alpha$ to $\sum_{\alpha < \beta} r_\alpha t^{y_\alpha}$. The isomorphism is in fact an isomorphism of ordered domains and, hence, of ordered fields.
\end{proposition}
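For example, under this isomorphism the surreal number $2\omega-1+1/\omega$, whose Conway name is $\omega^{1}\cdot 2+\omega^{0}\cdot(-1)+\omega^{-1}\cdot 1$, corresponds to the Hahn series $2t^{1}-t^{0}+t^{-1}$.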
\section{Preliminaries III: s-hierarchical ordered structures }
Following \cite[Definition 2]{EH5}, $\left\langle A,+,<,<_{s},0\right\rangle $ is said
to be an {\it s-hierarchical ordered group} if (i) $\left\langle
A,+,<,0\right\rangle $ is an ordered abelian group; (ii) $\left\langle
A,<,<_{s}\right\rangle $ is a lexicographically ordered binary tree; and
(iii) for all $x,y\in A$
$$
x+y=\left\{ x^{L}+y,x+y^{L} \, | \, x^{R}+y,x+y^{R}\right\} \text{.\medskip }
$$
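Though nothing below depends on it, the genetic definition of addition just displayed, together with Conway's definition of the order $\le$, can be prototyped directly for surreal numbers with hereditarily finite sets of options; the following sketch (with names of our own devising, not part of the formal development) mirrors the displayed formula:
\begin{verbatim}
from functools import lru_cache

# A finite surreal number as a pair (L, R) of frozensets of options.
ZERO = (frozenset(), frozenset())
ONE  = (frozenset({ZERO}), frozenset())
HALF = (frozenset({ZERO}), frozenset({ONE}))    # {0 | 1} = 1/2

@lru_cache(maxsize=None)
def leq(x, y):
    """Conway's order: x <= y iff no x^L satisfies y <= x^L
    and no y^R satisfies y^R <= x."""
    xL, _ = x
    _, yR = y
    return (all(not leq(y, xl) for xl in xL)
            and all(not leq(yr, x) for yr in yR))

@lru_cache(maxsize=None)
def add(x, y):
    """x + y = { x^L + y, x + y^L  |  x^R + y, x + y^R }."""
    xL, xR = x
    yL, yR = y
    return (frozenset(add(xl, y) for xl in xL) | frozenset(add(x, yl) for yl in yL),
            frozenset(add(xr, y) for xr in xR) | frozenset(add(x, yr) for yr in yR))

# 1/2 + 1/2 has the same value as 1 (equal as numbers, not as trees):
assert leq(add(HALF, HALF), ONE) and leq(ONE, add(HALF, HALF))
\end{verbatim}
Note that \texttt{add} returns a representative pair of sets of options rather than the canonical (simplest) representative; equality of values is tested, as above, by two applications of \texttt{leq}.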
$\left\langle A,+,\cdot ,<,<_{s},0,1\right\rangle $ will be said to be an {\it s-hierarchical ordered domain }if (i) $\left\langle
A,+,\cdot ,<,0,1\right\rangle $ is an ordered domain; (ii) $\left\langle
A,+,<,<_{s},0\right\rangle $ is an s-hierarchical ordered group; and (iii)
for all $x,y\in A$
\begin{eqnarray*}
\!\!\!xy &=&\{x^{L}y+xy^{L}-x^{L}y^{L},x^{R}y+xy^{R}-x^{R}y^{R} \,| \\
&&\quad \quad \qquad \qquad \qquad \text{\quad }
x^{L}y+xy^{R}-x^{L}y^{R},x^{R}y+xy^{L}-x^{R}y^{L}\}\text{.}
\end{eqnarray*}
\noindent
Moreover, $\left\langle A,+,\cdot ,<,<_{s},0\right\rangle $ will be
said to be an {\it s-hierarchical ordered $K$-module} if
(i) $K$ is an s-hierarchical ordered domain, (ii) $A$ is an s-hierarchical
ordered group, and (iii) $A$ is an ordered $K$-module in which for all
$x\in K$ and all $y\in A$%
\begin{eqnarray*}
\!\!\!xy &=&\{x^{L}y+xy^{L}-x^{L}y^{L},x^{R}y+xy^{R}-x^{R}y^{R} \, | \\
&&\quad \quad \qquad \qquad \qquad \text{\quad }%
x^{L}y+xy^{R}-x^{L}y^{R},x^{R}y+xy^{L}-x^{R}y^{L}\}\text{.}
\end{eqnarray*}
s-hierarchical ordered domains and modules are generalizations of the s-hierarchical ordered fields and vector spaces introduced in \cite{EH5}. In virtue of Conway's field operations, $\left\langle {\bf {No}},+,\cdot ,<,<_{s},0,1\right\rangle $ is an s-hierarchical ordered domain. In fact, it is (up to isomorphism) the unique universal and unique maximal s-hierarchical ordered domain (in the sense of \cite[page 1239]{EH5}). Moreover, if $K$ is an s-hierarchical ordered subdomain of {\bf {No}}, then {\bf {No}} is an s-hierarchical ordered $K$-module. Furthermore, extending the argument for s-hierarchical ordered subfields and subspaces of {\bf {No}} from \cite[page 1236]{EH5}, it is evident that
\begin{proposition}
The s-hierarchical ordered subdomains and submodules of
{\bf {No}} coincide with the initial subdomains and the initial submodules of {\bf {No}}, respectively.
\end{proposition}
Extending the notation employed in \cite{EH5}, if $A$ is an ordered module and $B \subseteq A$, then by $(B)_{A}$ we mean \emph{the ordered submodule of $A$ generated by $B$.}
The next preparatory result is a modest generalization of \cite[Theorem 6]{EH5}.
\begin{lemma}
\label{thrm6anlg}
Let $M'$ be an s-hierarchical ordered $K$-module and $M$ be an initial submodule of $M'$. If $(L,R)$ is a partition of $M$ and $b=\{L\,| \, R\}^{M'}$, then $(M\cup \{b\})_{M'}$ is an initial submodule of $M'$.
\end{lemma}
\begin{proof}
Except for replacing the references to ``$K$-vector spaces" and ``subspaces" with references to ``$K$-modules" and ``submodules", the proof is the same as the proof of \cite[Theorem 6]{EH5}.
\end{proof}
\section{Initial subgroups and submodules of {\bf No}}
\label{sec:groups}
To fully characterize the initial subgroups of $\bf {No}$, we must first characterize the initial subgroups of ${\mathbb R}$.
\begin{lemma}
\label{initarchgroups}
An ordered group $G$ is an initial subgroup of ${\mathbb R}$ if and only if either $G=\{0\}$, $\mathbb{D} \subseteq G$ or $G=\{\frac{z}{2^m}: z \in {\mathbb Z}\}$ for some $m \in {\mathbb N}$.
\end{lemma}
\begin{proof}
It follows from the definition of ${\mathbb R}$ and Proposition \ref{P:Cb} that the subgroups of ${\mathbb R}$ specified in the statement of the lemma are initial. Now suppose $G$ is a nontrivial initial subgroup of ${\mathbb R}$ that does not contain ${\mathbb D}$. Then there is a greatest $m \in {\mathbb N}$ such that $\frac{z}{2^m} \in G$ for some odd $z \in {\mathbb Z}$. Using closure under subtraction and the fact that we must always have 1 and, therefore, ${\mathbb Z}$ in any nontrivial initial subgroup of $\textbf{No}$, it follows that $\frac{1}{2^m} \in G$ and, by closure under addition and subtraction, $\{\frac{z}{2^m}: z \in {\mathbb Z}\} \subseteq G$. But since $m$ is the greatest member of ${\mathbb N}$ for which $\frac{z}{2^m} \in G$ for some odd $z$, the inclusion is not proper, thereby proving the lemma.
\end{proof}
Our final preparatory result follows immediately from the definitions.
\begin{proposition}
\label{modgengroup}
If $G$ is a cross sectional, truncation closed subgroup of ${\mathbb R}((t^\Gamma))_{\bf On}$ and
\begin{equation*}
Z = \left\{ \sum_{\mu < \nu}r_\mu t^{y_\mu} \in G : \nu \textit{ is an infinite limit ordinal and }r_0 = 1 \right\},
\end{equation*}
then $\{rt^y : y \in \Gamma,\ r \in {\mathbb R}_y\}\cup Z$ constitutes a class of generators for $G$ considered as a ${\mathbb Z}$-module.
\end{proposition}
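To see why, note for instance that if $x=\sum_{\mu < \nu}r_\mu t^{y_\mu} \in G$ has infinite limit length and leading coefficient $r_0 \neq 1$, then, since $G$ is truncation closed, $r_0 t^{y_0} \in G$, and, since $G$ is cross sectional, $t^{y_0} \in G$; hence $z = x - r_0 t^{y_0} + t^{y_0}$ lies in $Z$, and $x = z + r_0 t^{y_0} - t^{y_0}$ exhibits $x$ as a ${\mathbb Z}$-linear combination of the displayed generators.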
As is noted above, {\bf No} is isomorphic to ${\mathbb R}((t^{\bf{No}}))_{\bf On}$. We now prove more generally
\begin{theorem}
\label{initgroups}
An ordered abelian group is isomorphic to an initial subgroup of {\bf No} if and only if it is isomorphic to a truncation closed, cross sectional subgroup $G$ of a power series group ${\mathbb R}((t^\Gamma))_{\bf On}$, where (i) $\Gamma$ is isomorphic to an initial ordered subclass of {\bf No},
(ii) every $y$-coefficient group ${\mathbb R}_y$ of $G$ is an initial subgroup of ${\mathbb R}$, and
(iii) ${\mathbb D} \subseteq {\mathbb R}_y$ for all $x, y \in \Gamma$ where $y \in R_{s(x)}$.
\end{theorem}
\begin{proof}
Let $A$ be an initial subgroup of {\bf No}, $\textit{Lead}(A)$ be the class of leaders in $A$, and $\Gamma=\{y: \omega^y \in \textit{Lead}(A)\}$. Following \cite[Definition 14]{EH5}, a class $B$ of surreal numbers is said to be \emph{approximation complete} if $\sum_{\alpha < \sigma} \omega^{y_\alpha} \cdot r_\alpha \in B$ whenever $\sum_{\alpha < \beta} \omega^{y_\alpha} \cdot r_\alpha \in B$ and $\sigma < \beta$. Since $A$ is initial, it follows from parts (i) and (ii) of Proposition \ref{thrm15ehr01} that $A$ is approximation complete and $\Gamma$ is initial. Moreover, since $A$ is a group, for each $y \in \Gamma$ the set $ {\mathbb R}_y=\{r\in {\mathbb R}: \omega^y\cdot r \in A\}$ is a subgroup of ${\mathbb R}$. Furthermore, since $A$ is initial, it follows from Lemma \ref{squeeze}(i) that ${\mathbb R}_y$ is itself initial. Now suppose $x, y \in \Gamma$ where $y \in R_{s(x)}$. In virtue of Lemma \ref{squeeze}(iv), $\frac{1}{2^n}\omega^y <_s \omega^{x}$ for each $n \in {\mathbb N}$. Therefore, $\frac{1}{2^n} \in {\mathbb R}_y$ for each $n \in {\mathbb N}$; and thus, since groups are closed under addition and subtraction, ${\mathbb D} \subseteq {\mathbb R}_y$. Accordingly, by appealing to the restriction to $A$ of the isomorphism specified in Proposition \ref{prop 5}, it is evident that
$G=\{\sum_{\alpha < \beta} r_\alpha t^{y_\alpha} \in {\mathbb R}((t^\Gamma))_{\bf On}: \sum_{\alpha < \beta} \omega^{y_\alpha} \cdot r_\alpha \in A\}$ is a truncation closed, cross sectional subgroup of ${\mathbb R}((t^\Gamma))_{\bf On}$ having the requisite properties listed in the statement of the theorem, thereby establishing the ``only if'' portion of the theorem.
Aspects of the ``if'' portion of the proof borrow from the first author's proof of \cite[Theorem 18]{EH5}. However, whereas the latter proof concerns an ordered subfield $F$ of ${\mathbb R}((t^\Gamma))_{\bf On}$, which is treated as an ordered vector space over the Archimedean ordered field $\{r \in{\mathbb R}: rt^0 \in F\}$, here we are concerned with an ordered subgroup $G$ of ${\mathbb R}((t^\Gamma))_{\bf On}$ that we may only assume to be an ordered ${\mathbb Z}$-module, which complicates the argument. To keep the argument largely self-contained, however, we repeat with modifications portions of the earlier proof.
Let $G$ be a subgroup of ${\mathbb R}((t^\Gamma))_{\bf On}$ satisfying the conditions specified in the statement of the theorem and let $A$ be the isomorphic copy of $G$ in {\bf No} that is the image of the restriction to $G$ of the inverse of the mapping referred to in Proposition \ref{prop 5}. That is, let $A=\{\sum_{\alpha < \beta} \omega^{y_\alpha} \cdot r_\alpha \in {\bf No}: \sum_{\alpha < \beta} r_\alpha t^{y_\alpha} \in G \}$. To show that $A$ is an initial subgroup of \textbf{No}, it suffices to show that $\langle A, <_s | A \rangle$ is an initial subtree of $\langle \textbf{No}, <_s \rangle$. We do this by induction on $\Gamma$.
Let $a_0, \ldots, a_\alpha, \ldots ( \alpha < \beta)$ be a well-ordering of $\Gamma$ such that $\rho_{\textbf{No}}(a_\mu) \leq \rho_{\textbf{No}}(a_\nu)$ whenever $\mu < \nu < \beta$. We consider $A$ as an ordered ${\mathbb Z}$-module. Let $A_\alpha$ be the submodule of $A$ containing $0$ as well as all of the elements in $A$ with exponents only from $\Gamma_\alpha = \{ a_\delta : \delta \leq \alpha \}$. We see that, considered as an ordered ${\mathbb Z}$-module, $A=\bigcup_{\alpha < \beta} A_\alpha$. Notice also that since $a_0=0$, $A_0={\mathbb R}_0$, which, by condition (ii), is an initial subgroup of \textbf{No}. Therefore, $\langle A_0, <_s|A_0 \rangle$ is an initial subtree of \textbf{No}. To complete the proof that $A$ is an initial subtree of \textbf{No}, it remains to show that for all $0<\alpha < \beta$, $\langle A_\alpha, <_s | A_\alpha \rangle$ is an initial subtree of \textbf{No}, if $\langle \bigcup_{\mu < \alpha} A_\mu, <_s | \bigcup_{\mu < \alpha} A_\mu\rangle$ is an initial subtree of \textbf{No}.
Accordingly, let $0<\alpha < \beta$ and suppose $\langle \bigcup_{\mu < \alpha} A_\mu, <_s | \bigcup_{\mu < \alpha} A_\mu\rangle$ is an initial subtree of \textbf{No}. Also, let $Z_\alpha :=$
\begin{equation*}
\left\{ \sum_{\mu < \nu} \omega^{y_\mu} \cdot r_\mu \in A_\alpha - \bigcup_{\mu < \alpha} A_\mu : \textit{$\nu$ {\rm is an infinite limit ordinal and} $r_0 = 1$}\right\},
\end{equation*}
and let $b_0,\ldots,b_\sigma,\ldots (\sigma < \tau)$ be a well-ordering of $\{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}\} \cup Z_\alpha$ such that for all $\gamma,\ \delta < \tau$:
\begin{enumerate}
\item if $b_\gamma$, $b_\delta \in \{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}\}$, then $\gamma < \delta$ only if $\rho_{\textbf{No}}(b_\gamma) \leq \rho_{\textbf{No}}(b_\delta)$;
\item if $b_\gamma \in \{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}\}$ and $b_\delta \in Z_\alpha$, then $\gamma < \delta$;
\item if $b_\gamma$, $b_\delta \in Z_\alpha$ and the initial sequence of ordinals over which the exponents in $b_\gamma$ are indexed is properly contained in the initial sequence of ordinals over which the exponents in $b_\delta$ are indexed, then $\gamma < \delta$.
\end{enumerate}
By appealing to Proposition \ref{modgengroup} (and recalling that $(X)_{A}$ denotes the ordered submodule of $A$ generated by $X$), it is easy to see that $\{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}\} \cup Z_{\alpha} \cup\bigcup_{\mu < \alpha} A_\mu$ is a class of generators for $A_\alpha$, and hence that $A_\alpha = \bigcup_{\sigma < \tau} B_\sigma$, where $B_0 = \left( \{b_0\} \cup \bigcup_{\mu < \alpha} A_\mu \right)_A$ and $B_\sigma = \left( \{b_\sigma\} \cup \bigcup_{\delta < \sigma} B_\delta \right)_A$ for $0 < \sigma < \tau$. Thus to show that $A_\alpha$ is an initial subtree of \textbf{No}, it suffices to show that $B_\sigma$ is an initial subtree of \textbf{No} for each $\sigma < \tau$. Moreover, since $B_\sigma = \bigcup_{\delta < \sigma} B_\delta$ whenever $b_\sigma \in \bigcup_{\delta < \sigma} B_\delta$, henceforth we need only consider those $b_\sigma \notin \bigcup_{\delta < \sigma} B_\delta$.
First, note that $b_0 = \omega^{a_\alpha}$. Moreover, since $\Gamma$ is assumed to be initial, both $L_{s(a_\alpha)}$ and $R_{s(a_\alpha)} \subseteq \{a_\delta : \delta < \alpha \}$. Let $a^L$ and $a^R$ be typical elements of $L_{s(a_\alpha)}$ and $R_{s(a_\alpha)}$, respectively. It follows from Equation (1) (see \S3) that
\begin{equation*}
b_0 = \omega^{a_\alpha} = \left\{ 0, n\omega^{a^L} \, | \, \frac{1}{2^n}\omega^{a^R} \right\}
\end{equation*}
where $n$ ranges over the positive integers. As every element of $\bigcup_{\mu < \alpha} A_\mu - \{ 0 \}$ is Archimedean equivalent to a unique member of $\{\omega^{a_\delta} : \delta < \alpha \} \subseteq \bigcup_{\mu < \alpha} A_\mu - \{ 0 \}$, there is a unique partition $(L_\alpha, R_\alpha)$ of $\bigcup_{\mu < \alpha} A_\mu$ where $L_\alpha < R_\alpha$, $\{ 0, n\omega^{a^L} \}$ is cofinal with $L_\alpha$ and $\{ \frac{1}{2^n}\omega^{a^R} \}$ is coinitial with $R_\alpha$. In virtue of the well ordering, each $n\omega^{a^L}$ is in $\bigcup_{\mu < \alpha} A_\mu$ as is each $\omega^{a^R}$. Plainly then, $\{n\omega^{a^L} \} \subseteq L_\alpha$. Moreover, in virtue of the well ordering and the fact that condition (iii) in the statement of the theorem requires ${\mathbb D} \subseteq {\mathbb R}_{a^R}$ for each $a^R$, it follows that $\{ \frac{1}{2^n}\omega^{a^R} \} \subseteq R_\alpha$. But then by Proposition \ref{P:Bc}, $b_0 = \{L_\alpha\, |\, R_\alpha \}$; and so, by Lemma \ref{thrm6anlg}, $B_0$ is an initial subtree of \textbf{No}.
If $\{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}-{\mathbb Z}\}=\varnothing$, we turn to $Z_\alpha$. Otherwise, we take the first remaining $b_\sigma\in \{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}-{\mathbb Z}\}$ for which $b_\sigma \notin \bigcup_{\delta < \sigma} B_\delta$ and observe that, in virtue of the nature of our well ordering, $b_\sigma=\omega^{a_\alpha}\cdot r_\alpha$ for some $r_\alpha \in {\mathbb D} - {\mathbb Z}$ since every $r_\alpha \in {\mathbb R} - {\mathbb D}$ has predecessors in ${\mathbb D} - {\mathbb Z}$. Moreover, either $L_{s(a_\alpha)} = \varnothing$ or $L_{s(a_\alpha)} \neq \varnothing$. If $L_{s(a_\alpha)} = \varnothing$, then by the relevant portion of Lemma \ref{squeeze}(ii), we have
\begin{equation*}
b_\sigma = \{ \omega^{a_\alpha}\cdot r_\alpha^L \, | \, \omega^{a_\alpha}\cdot r_\alpha^R\},
\end{equation*}
where $\omega^{a_\alpha}\cdot r_\alpha^L$, $\omega^{a_\alpha}\cdot r_\alpha^R <_s b_\sigma$; and this together with the nature of our well ordering implies $\omega^{a_\alpha}\cdot r_\alpha^L$, $\omega^{a_\alpha}\cdot r_\alpha^R \in \bigcup_{\delta < \sigma} B_\delta$. Furthermore, if $L_{s(a_\alpha)} \neq \varnothing$, then by Lemma \ref{squeeze}(iii), we have
\begin{equation*}
b_\sigma = \{ \omega^{a_\alpha}\cdot r_\alpha^L+\omega^{a_\alpha^L}\cdot n \, | \,\omega^{a_\alpha}\cdot r_\alpha^R-\omega^{a_\alpha^L}\cdot n\},
\end{equation*}
where $\omega^{a_\alpha}\cdot r_\alpha^L+\omega^{a_\alpha^L}\cdot n <_s \omega^{a_\alpha}\cdot r_\alpha$ and $\omega^{a_\alpha}\cdot r_\alpha^R-\omega^{a_\alpha^L}\cdot n <_s \omega^{a_\alpha}\cdot r_\alpha$, for all $a_\alpha^L \in L_{s(a_\alpha)}$, $r_\alpha^L \in L_{s(r_\alpha)}$, $r_\alpha^R \in R_{s(r_\alpha)}$ and $n \in\mathbb{N}$; and, again, this together with the nature of our well ordering implies that all those options are contained in $\bigcup_{\delta < \sigma} B_\delta$. Thus, in virtue of Proposition \ref{P:Bc} (iii), in both cases there is a partition $(L_\sigma', R_\sigma')$ of $\bigcup_{\delta < \sigma} B_\delta$ such that $b_\sigma = \{ L_\sigma'\, |\, R_\sigma' \}$; and so, by Lemma \ref{thrm6anlg}, $B_\sigma$ is an initial subtree of \textbf{No}. This portion of the proof is repeated until each $\omega^{a_\alpha}\cdot r_\alpha$ with $r_\alpha \in {\mathbb D} - {\mathbb Z}$ has been dealt with.
Next, if $\{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}-{\mathbb D}\}=\varnothing$, we turn to $Z_\alpha$. If not, we take the first remaining $b_\sigma\in \{ \omega^{a_\alpha} \cdot r_\alpha : r_\alpha \in {\mathbb R}_{a_\alpha}-{\mathbb D}\}$ for which $b_\sigma \notin \bigcup_{\delta < \sigma} B_\delta$ and argue exactly as we did in the case where $r_\alpha \in {\mathbb D} - {\mathbb Z}$ and $L_{s(a_\alpha)} = \varnothing$, except we appeal to the relevant portion of Lemma \ref{squeeze}(ii) concerned with members of ${\mathbb R} - {\mathbb D}$. This portion of the proof is repeated until each $\omega^{a_\alpha}\cdot r_\alpha$ with $r_\alpha \in {\mathbb R} - {\mathbb D}$ has been dealt with.
Finally, if $Z_{\alpha }=\varnothing $, we are
finished; if not, let $0<\sigma <\tau $ and, as our induction hypothesis,
suppose $\bigcup_{\delta <\sigma }B_{\delta }$ is an initial subtree of $\bf No$.
Since $0<\sigma <\tau $, $b_{\sigma }$ has a Conway name of the form $\sum\nolimits_{\alpha <\pi }\omega ^{y_{\alpha }}\cdot r_{\alpha }$, where $\pi$ is an infinite limit ordinal and $r_{0}=1$. Moreover, by part (iii) of Proposition \ref{thrm15ehr01}, $b_{\sigma }=\left\{ L \, | \, R\right\}$ where
\[
L=\left\{ \sum\nolimits_{\alpha <\mu }\omega ^{y_{\alpha }}\cdot r_{\alpha }+\omega ^{y_{\mu }}\cdot \left( r_{\mu }-\frac{1}{2^{n}}\right) \right\}_{0<n<\omega ,\,\mu <\pi }
\]
and
\[
R=\left\{ \sum\nolimits_{\alpha <\mu }\omega ^{y_{\alpha }}\cdot r_{\alpha }+\omega ^{y_{\mu }}\cdot \left( r_{\mu }+\frac{1}{2^{n}}\right) \right\}_{0<n<\omega ,\,\mu <\pi }.
\]
But since, by construction, $L\cup R\subseteq \bigcup_{\delta <\sigma
}B_{\delta }$ and $b_{\sigma }\notin \bigcup_{\delta <\sigma }B_{\delta }$,
there is a partition $\left( L_{\sigma }^{\prime },R_{\sigma }^{\prime
}\right) $ of $\bigcup_{\delta <\sigma } B_{\delta }$ such that
\[
b_{\sigma }=\left\{ L_{\sigma }^{\prime } \, | \, R_{\sigma }^{\prime }\right\}
\text{.}
\]
Therefore, by virtue of the induction hypothesis and Lemma \ref{thrm6anlg}, $B_{\sigma }$
is an initial subtree of ${\bf {No}}$. Thus, by induction, $B_{\sigma }$ is an
initial subtree of $\bf No$ for each $\sigma <\tau $, and so $A_{\alpha }$ and,
hence, $\left\langle A,<_{s}|A\right\rangle $ are initial subtrees of $\bf No$;
thereby proving the theorem.
\end{proof}
\begin{remark*}
Using the axiom of choice or global choice (if the class is a proper class), it is a routine matter to prove that every ordered class is isomorphic to an initial ordered subclass of {\bf {No}}.
This might seem to suggest that in the statement of Theorem \ref{initgroups} one may omit the assumption that $\Gamma$ is isomorphic to an initial ordered subclass of {\bf No}. However, the presence of condition (iii), which refers to the tree structure of $\Gamma$, precludes the omission of that assumption.
\end{remark*}
\subsection{Densely and discretely ordered initial subgroups of {\bf No}}
A nontrivial ordered group $\langle G, <, +, 0\rangle$ is said to be \textit{discrete or discretely ordered} if it contains a least positive member, and it is said to be \textit{dense or densely ordered} if for all $a,b\in G$ where $a<b$ there is a $c\in G$ such that $a<c<b$. A nontrivial ordered group $\langle G, <, +, 0\rangle$ is dense if and only if it is not discrete.
The following proposition provides a simple means of distinguishing between nontrivial initial subgroups of {\bf No} that are discrete and those that are dense.
\begin{proposition}
\label{grouptypes}
An initial subgroup $G$ of {\bf No} is discrete if and only if there is a member of $G$ of the form $\frac{1}{2^n}\omega^{-\alpha}$ (where $n \in {\mathbb N}$ and $\alpha \in {\bf On}$) having no immediate left successor in $G$.
\end{proposition}
\begin{proof}
Note that elements of \textbf{No} of the form $\frac{1}{2^n}\omega^{-\alpha}$ are precisely those having 0 as their sole left predecessor. We must show that for any $g \in G$, $g$ is the least positive element of $G$ if and only if $L_{s(g)}=\{0\}$ and $g$ has no immediate left successor in $G$.
First, suppose there is a least positive $g \in G$ for which $L_{s(g)}\neq \{0\}$. If $0\notin L_{s(g)}$, then $g$ is not positive, a contradiction. If there is an $a\in L_{s(g)}$ where $a\neq 0$, then $a$ is a positive element less than $g$, another contradiction.
Next, notice that any least positive $g \in G$ must have no immediate left successor in $G$, since, if $g$ has an immediate left successor in $G$, this successor must be a positive element of $G$ less than $g$.
To show the other direction, suppose $G$ is initial. Further suppose $g$ has no immediate left successor in $G$, $L_{s(g)}=\{0\}$ and there is an $a\in G$ where $0<a<g$. Since $G$ is lexicographically ordered, it follows that for any $x,y \in G$ where $x<y$, $x$ is incomparable with $y$ if and only if $x$ and $y$ have a common predecessor $z$ such that $x<z<y$ (see \S2). Now $a$ must be incomparable with $g$: a predecessor of $g$ less than $g$ lies in $L_{s(g)}=\{0\}$ and so cannot be positive, and a successor of $g$ less than $g$ would extend the immediate left successor of $g$, which, since $G$ is initial and $a \in G$, would place that immediate left successor in $G$, contrary to assumption. Therefore, $a$ and $g$ must have a common predecessor $z$ such that $a<z<g$; but by the assumption that $L_{s(g)}=\{0\}$, $z$ must equal 0, making $a$ negative, which is impossible.
\end{proof}
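For example, in the initial subgroup $G=\{\frac{z}{2^m}: z \in {\mathbb Z}\}$ of ${\mathbb R}$, the least positive member is $\frac{1}{2^m}=\frac{1}{2^m}\omega^{-0}$, which is of the indicated form; its immediate left successor in {\bf No}, namely $\frac{1}{2^{m+1}}$, lies outside of $G$.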
\section{Initial subdomains of \textbf{No}}
\label{sec:domains}
We now turn to the characterization of initial subdomains of $\bf {No}$ beginning with the initial subdomains of ${\mathbb R}$.
\begin{lemma}
\label{archdom}
The initial subdomains of ${\mathbb R}$ are ${\mathbb Z}$ and the subdomains of ${\mathbb R}$ containing ${\mathbb D}$. Every initial subdomain of $\bf {No}$ is an extension of an initial subdomain of ${\mathbb R}$.
\end{lemma}
\begin{proof}
Let $K$ be a subdomain of ${\mathbb R}$. If $K={\mathbb Z}$, then by Proposition \ref{P:Cb} $K$ is initial. Now suppose $K \neq {\mathbb Z}$. If $K$ is initial, then ${\mathbb Z}\subset K$ and, by Proposition \ref{P:Cb}, $K$ must contain some element of the form $z+\frac{1}{2}$ where $z \in {\mathbb Z}$. By subtracting $z$, we see that $\frac{1}{2} \in K$. But since the domain ${\mathbb D}$ is generated by $\frac{1}{2}$, ${\mathbb D}\subseteq K$. Finally, if ${\mathbb D}\subseteq K$, then $K$ is initial since every predecessor of a member of ${\mathbb R}-{\mathbb D}$ is a member of ${\mathbb D}$, and ${\mathbb D}$ is initial. The second part of the lemma is trivial.
\end{proof}
\begin{theorem}
\label{initdoms}
An ordered domain is isomorphic to an initial subdomain of {\bf No} if and only if
it is isomorphic to a truncation closed, cross sectional subdomain $K$ of a power series domain ${\mathbb R}((t^\Gamma))_{\bf On}$, where $\Gamma$ is isomorphic to an initial submonoid of {\bf No}, every $y$-coefficient domain ${\mathbb R}_y$ of $K$ is an initial ordered subdomain of ${\mathbb R}$, and ${\mathbb D} \subseteq {\mathbb R}_y$
for any $x, y \in \Gamma$ where $y \in R_{s(x)}$.
\end{theorem}
\begin{proof}
First note that the above conditions are precisely the conditions for initial groups, with the exception of the stipulation that $\Gamma$ must form a monoid. The latter condition is necessary since, if $K$ is cross sectional, then for all $x, y \in \Gamma$, $t^x$ and $t^y$ are in $K$, so $t^x\cdot t^y = t^{x+y}$ is in $K$ and $x+y$ is in $\Gamma$. Of course, the rest of the conditions are necessary for ordered domains as well, since they are necessary for their additive groups. In order to show that the conditions are also sufficient, we may treat $K$ as a ${\mathbb Z}$-module and repeat the second part of the proof of Theorem \ref{initgroups}.
\end{proof}
\subsection{Densely ordered initial subdomains of \textbf{No}}
An ordered domain is said to be \textit{dense} if its ordered additive group is dense.
\begin{corollary}
\label{initdensedoms}
A densely ordered domain is isomorphic to an initial subdomain of {\bf No} if and only if it is isomorphic to a truncation closed, cross sectional subdomain $K$ of a power series domain ${\mathbb R}((t^\Gamma))_{\bf On}$ where $\Gamma$ is isomorphic to an initial submonoid of {\bf No} and ${\mathbb D}$ is an initial subdomain of $K$.
\end{corollary}
\begin{proof}
Let $K$ be a dense initial subdomain of {\bf No}. In light of Theorem \ref{initdoms}, to prove the corollary it suffices to show that ${\mathbb D}$ is an initial subdomain of $K$ if and only if (i) every $y$-coefficient domain ${\mathbb R}_y$ of $K$ is an initial ordered subdomain of ${\mathbb R}$, and (ii) ${\mathbb D} \subseteq {\mathbb R}_y$
for any $x, y \in \Gamma$ where $y \in R_{s(x)}$. Suppose ${\mathbb D}$ is an initial subdomain of $K$. Then ${\mathbb D} \subseteq {\mathbb R}_0$. Moreover, since $K$ is cross sectional, ${\mathbb R}_0 \subseteq {\mathbb R}_y$ for all $y \in \Gamma$, since $t^y \in K$ and $r\cdot t^0 \in K$ for all $r \in {\mathbb R}_0$, which implies $r\cdot t^y \in K$ for all $r \in {\mathbb R}_0$. This implies (ii) is satisfied, which along with Lemma \ref{archdom} implies (i) is satisfied as well. Now suppose (i) and (ii) are the case. If ${\mathbb D}$ is not an initial subdomain of $K$, then by (i) and Lemma \ref{archdom}, ${\mathbb R}_0={\mathbb Z}$. Moreover, since $K$ is both dense and initial, the simplest member of {\bf No} lying between $0$ and $1$, namely $\frac{1}{2}$, is in $K$. But this implies ${\mathbb D}$ is an initial subdomain of $K$, contrary to assumption.
\end{proof}
The following result, which is a special case of Corollary \ref{initdensedoms}, is the aforementioned result (see \S 1) characterizing the initial subfields of ${\bf No}$ established in \cite{EH5}. Since the special case is about ordered fields, the ordered monoid $\Gamma$ must be an ordered abelian group and the reference to ${\mathbb D}$ may be deleted since every ordered field is an extension of an isomorphic copy of ${\mathbb D}$, the latter of which is initial in {\bf No}.
\begin{corollary}
\label{initfields}
An ordered field is isomorphic to an initial subfield of {\bf No} if and only if it is isomorphic to a truncation closed, cross sectional subfield of a power series field ${\mathbb R}((t^\Gamma))_{\bf On}$ where $\Gamma$ is isomorphic to an initial subgroup of {\bf No}.
\end{corollary}
\subsection{ Discretely ordered initial subdomains of \textbf{No}}
An ordered domain is said to be \textit{discrete} if its additive group is discrete. Accordingly, an ordered domain is discrete if and only if it is not dense. The least positive member of a discretely ordered domain is its multiplicative identity 1.
A discrete subdomain $A$ of an ordered field $B$ is said to be an \emph{integer part} if every member of $B$ is at most a distance 1 from a member of $A$. Conway introduced a canonical integer part \textbf{Oz} of \textbf{No}
consisting of the surreal numbers of the form $x = \{x-1\, |\, x+1\}.$ These \emph{omnific integers}, as Conway calls them, are precisely the surreal numbers having Conway names of the form $\sum_{\alpha < \beta} \omega^{y_{\alpha}} \cdot r_{\alpha}$ where $y_{\alpha} \geq 0$ for all $\alpha < \beta$, and $r_{\alpha}$ is an integer if $y_{\alpha} = 0$.
As we will now see, \textbf{Oz} is in fact (up to isomorphism) the unique discrete s-hierarchical ordered domain containing an initial isomorphic copy of every discrete s-hierarchical ordered domain.
\begin{theorem}
\label{omnificdiscrete}
An initial subdomain of {\bf No} is discrete if and only if it is an initial subdomain of {\bf Oz}.
\end{theorem}
\begin{proof}
Suppose $K$ is an initial subdomain of \textbf{Oz}. As \textbf{Oz} is an initial subdomain of \textbf{No} \cite [p. 3: Note 2]{EH7}, $K$ must be as well. To see that $K$ is discrete, suppose on the contrary there is an element in $K$ between 0 and 1. Since $\frac{1}{2}= \{0 \, | \, 1\}$, it follows that $\frac{1}{2} \in K$. But this is impossible since $\frac{1}{2} \notin \textbf{Oz}$, as $\{\frac{1}{2}-1\, |\, \frac{1}{2}+1\}=0$. For the converse, suppose that there is a discrete initial subdomain $K$ of \textbf{No} containing some element $a\notin \textbf{Oz}$. Let $\sum_{\alpha < \beta} \omega^{y_{\alpha}} \cdot r_{\alpha}$ be the Conway name of $a$. Also let $b = \sum_{\alpha < \beta} \omega^{y_{\alpha}} \cdot r'_{\alpha}$ where $r'_\alpha=r_\alpha$ if $y_\alpha > 0$, $r'_\alpha= 0$ if $y_\alpha<0$ and $r'_\alpha$ is the largest integer $\le r_\alpha$ if $y_\alpha=0$. Note that $b \in \textbf{Oz}$, so $b = \{b-1\, |\, b+1\}$. Moreover, since $b-1 < a < b+1$ and $a \neq b$, $b <_s a$; and so, as $K$ is initial and $a\in K$, $b\in K$. As $K$ is a domain, $a-b\in K$; but $0<|a-b|<1$, so $\frac{1}{2}\le_s |a-b|$ and, therefore, since $K$ is initial and closed under negation, $\frac{1}{2} \in K$, which contradicts the assumption that $K$ is discrete.
\end{proof}
\section{Initial subgroups and subdomains that are convex}
Among the important subgroups and subdomains of ordered groups and fields are those that are convex. Using Theorems \ref{initgroups} and \ref{initdoms}, we now identify the convex subgroups of initial subgroups of \textbf{No} that are themselves initial as well as the convex subdomains (i.e. the valuation rings) of initial subfields of \textbf{No} that are likewise initial. As we shall see, unlike the convex subgroups and subdomains of ordinary ordered groups and ordered fields, the initial convex subgroups and subdomains of initial subgroups and subfields of \textbf{No} are always well ordered by inclusion.
Let $A$ be a nontrivial initial subgroup of \textbf{No} and $\textbf{On}(A):=\textbf{On}\cap A$ be its subtree of ordinals. Following \cite[Definition 19]{EH5}, $A$ is said to be $\alpha\textit{-Archimedean}$ if $\alpha$ is the height of $\textbf{On}(A)$. A nontrivial initial subgroup of \textbf{No} is $\alpha\text{-Archimedean}$ if and only if for each $x\in A$ there is a $\beta \in \textbf{On}(A)$ such that $-\beta < x < \beta$ \cite[Theorem 24]{EH5}.
Ordinals of the form $\omega^\phi$ are said to be \textit {additively indecomposable} since they are precisely the nonzero ordinals $\lambda$ such that $\mu + \nu < \lambda$ for all ordinals $\mu,\nu < \lambda$, and ordinals of the form $\omega^{\omega^\phi}$ are said to be \textit{multiplicatively indecomposable} since they are precisely the infinite ordinals $\lambda$ such that $\mu\nu < \lambda$ for all ordinals $\mu,\nu < \lambda$, where the just-said sums and products of ordinals are the familiar Cantorian operations. Every nontrivial initial subgroup (resp. initial subdomain) is $\omega^\phi$-Archimedean for some nonzero ordinal (resp. nonzero additively indecomposable ordinal) $\phi$; moreover, $A$ is Archimedean if and only if $A$ is $\omega$-Archimedean \cite[Theorem 24]{EH5}.
Let $A$ be an $\omega^{\phi}$-Archimedean initial subgroup of \textbf{No} and for each nonzero ordinal $ \tau \le \phi$, let $$A[\omega^{\tau}] := \{x \in A: -\alpha < x < \alpha \;\text{for some ordinal}\; \alpha < \omega^{\tau}\}.$$
\begin{proposition}
\label{pp}
Let $A$ be an $\omega^{\phi}$-Archimedean initial subgroup of {\bf No} and $\tau$ be a nonzero ordinal $\le \phi \le {\bf On}$.\newline
(i) $A[\omega^{\tau}]=A$ if and only if $\tau = \phi$.\newline
(ii) $A[\omega^{\tau}] = \{x \in A: -n\alpha < x < n\alpha \;\text{for some}\; n \in \mathbb{N} \;\text{and some}\; \alpha < \omega^{\tau}\}$.\newline
(iii) The class of ordinals $< \omega^{\tau}$ is a cofinal subclass of $A[\omega^{\tau}]$.\newline
(iv) For each leader $\omega^y \in A$ there is a unique leader $\omega^{\alpha} \in A$, where $\alpha$ is an ordinal $< \phi$, such that $\omega^y \le \omega^{\alpha}$ and $\omega^{\alpha} \le_s \omega^y$. If $\omega^y \neq \omega^{\alpha}$, then $\omega^{\alpha}$ is the least ordinal $> \omega^y$. Moreover, if $\omega^{x} <_s \omega^y$, then $\omega^x \le \omega^{\alpha}$.
\end{proposition}
\begin{proof}
(i) is trivial, (ii) follows from the additively indecomposable nature of $\omega^{\tau}$, and (iii) follows from the definition of $A[\omega^{\tau}]$ and the initial nature of $A$. For (iv), note that for each $z>0$ in \textbf{No}, there is a unique ordinal $\alpha$ such that $z \le \alpha$ and $\alpha \le_s z$; moreover, if $z \neq \alpha$, then $\alpha$ is the least ordinal $>z$. But by \cite[Theorem 11]{EH5} and \cite[Proposition 3.6 (i)]{EH7}, for all $z,y \in \textbf{No}$, $z<_s y$ if and only if $\omega^z <_s \omega^y$, which implies the first two parts of (iv). Finally, suppose $\omega^x <_s \omega^y$. If $x$ is an ordinal, then plainly $\omega^x \le \omega^\alpha$; and if $x$ is not an ordinal, then the sign-expansion of $\omega^x$ begins with $\omega^\alpha$ pluses followed by a minus, which implies $\omega^x < \omega^\alpha$, thereby completing the proof.
\end{proof}
\begin{theorem}
\label{GROUP}
Let $A$ be an $\omega^{\phi}$-Archimedean initial subgroup of \textbf{\emph{No}}. Then a subclass $K$ of $A$ is a nontrivial initial convex subgroup of $A$ if and only if $K= A[\omega^{\tau}]$ for some additively indecomposable infinite ordinal $\omega^{\tau} \le \omega^{\phi}$.
\end{theorem}
\begin{proof}
First suppose $K$ is a nontrivial initial convex subgroup of $A$. Since $K$ is a nontrivial initial subgroup of $A$, $K$ is $\omega^{\tau}$-Archimedean for some nonzero ordinal $\tau \le \phi$. Moreover, since $K$ is a convex subgroup of $A$, $A[\omega^{\tau}] \subseteq{K}$. Furthermore, if $A[\omega^{\tau}] \neq{K}$, there is an $x\in K$ such that $A[\omega^{\tau}] < \{x\} <\omega^{\tau}$. But then $\omega^{\tau} <_s x$, which implies $K$ is not initial, since $\omega^{\tau} \notin K$; and so $K=A[\omega^{\tau}]$.
Now suppose $K= A[\omega^{\tau}]$ for some additively indecomposable infinite ordinal $\omega^{\tau} \le \omega^{\phi}$. If $\tau=\phi$, then $K=A$, which is trivially an initial convex subgroup of $A$. Now suppose $1\le\tau < \phi$. In virtue of the definition of $A[\omega^{\tau}]$, $K$ is a convex subclass of $A$. Moreover, by Proposition \ref{pp}(ii) and the convex nature of $K$, every element of $A$ that is Archimedean equivalent to some $x \in K$ is likewise in $K$. Plainly then, $x+y \in K$ whenever $x, y \in K$, since $x+y$ is either $0$ or in the Archimedean class containing $\omega^z$ for some $z$ no greater than the maximal member of the supports of $x$ and $y$, which shows $K$ is a group. To establish $K$ is initial, first note that since $A$ is cross sectional and closed under truncation, it follows from the fact that $K$ is a convex subgroup of $A$ (in which every element of $A$ that is Archimedean equivalent to some $x \in K$ is likewise in $K$) that $K$ is also cross sectional and closed under truncation. In addition, since $K$ is a convex subgroup of $A$ and $A$ is initial, the $y$-coefficient groups of $K$ satisfy conditions (ii) and (iii) of Theorem \ref{initgroups}. Therefore, in virtue of Theorem \ref{initgroups}, to complete the proof it remains to show $\{y \in \textbf{No}: \omega^y \in K\}$ is an initial subtree of \textbf{No}. For this purpose, suppose $\omega^{y} \in K$ and further suppose $\omega^{x} <_s \omega^y$. Since $x<_s y$ if and only if $\omega^{x} <_s \omega^y$ for all $x,y \in \bf{No}$, it suffices to show $\omega^x\in K$.
To this end, note that by Proposition \ref{pp}(iii), there is an ordinal $\beta \in K$ such that $\omega^y < \beta< \omega^\tau$. Moreover, by Proposition \ref{pp}(iv), there is an ordinal $\omega^{\alpha}$ such that either $\omega^{\alpha}=\omega^y$ or $\omega^{\alpha}$ is the least ordinal $> \omega^y$ and $\omega^x \le \omega^{\alpha}$. Plainly then, $\omega^x \le \omega^{\alpha} \leq \beta$. And so, since $K$ is a convex subgroup of $A$, $\omega^x \in K$.
\end{proof}
\begin{theorem}
\label{DOMAIN}
Let $A$ be an $\omega^{\phi}$-Archimedean initial subfield of \textbf{\emph{No}}. Then a subclass $K$ of $A$ is an initial convex subdomain of $A$ if and only if $K= A[\omega^{\tau}]$ for some multiplicatively indecomposable ordinal $\omega^{\tau} \le \omega^{\phi}$.
\end{theorem}
\begin{proof}
Since $A$ is a field and $K$ is convex, it follows that for each leader $\omega^{y} \in K$, $\{r \in \mathbb{R}: \omega^{y}\cdot r \in K\} = \{r \in \mathbb{R}: \omega^{0}\cdot r \in K\}$ and so, for each leader $\omega^{y} \in K$, $\mathbb{D}$ is a subdomain of $\{r \in \mathbb{R}: \omega^{y}\cdot r \in K\}$. Accordingly, in virtue of Theorem \ref{GROUP} and Theorem \ref{initdoms}, to complete the proof it suffices to show that $K$ (which is convex) is a subdomain of $A$ if and only if $K= A[\omega^{\tau}]$ for some multiplicatively indecomposable ordinal $\omega^{\tau} \le \omega^{\phi}$. If $\omega^{\tau}$ is not multiplicatively indecomposable, there are ordinals $\alpha,\beta \in K$ where $\alpha\beta\ge\omega^{\tau}$ and, hence, $\alpha\beta \notin K$, which shows $K$ is not a domain. Now suppose $\omega^{\tau}$ is multiplicatively indecomposable. If $\tau=\phi$, then $K=A$, which is trivially a subdomain of $A$. If $\tau=1$, $K$ consists of the finitely bounded members of $A$, in which case $K$ is obviously a subdomain of $A$. Moreover, if $1<\tau < \phi$, then $\{\omega^{\beta}: \beta < \tau\}$ is a cofinal subclass of $K$ without a greatest member. Accordingly, if $x,y \in K$, $|x |\le$ some $\omega^{\beta} \in K$ and $|y| \le$ some $\omega^{\gamma} \in K$, where $\beta, \gamma$ are ordinals $<\tau$, and so $|xy| \le \omega^{\beta + \gamma}<\omega^{\tau}$. But then $\omega^{\beta + \gamma}$ and, hence, $|xy|$ are in $K$, which suffices to show $K$ is a domain.
\end{proof}
\begin{remark*}
{\rm Since every real-closed ordered field is isomorphic to an initial subfield of \textbf{No}, it is natural to inquire if this is true for real-closed ordered domains more generally, the latter of which coincide with the convex subdomains of real-closed ordered fields \cite{CD}. However, since the convex subdomains of real-closed ordered fields are not in general well-ordered by inclusion, Theorem \ref{DOMAIN} implies this is not the case.}
\end{remark*}
\section{Optimality results and an open question}
As we mentioned above, in \cite{EH5} it was shown that every divisible
ordered abelian group (real-closed ordered field) is isomorphic to an initial subgroup
(initial subfield) of ${\bf No}$. The following result shows that in an important sense these results are optimal.
Let $T^{OAG}$ and $T^{DOAG}$ be the theories of ordered abelian groups and divisible ordered abelian groups in the language $\{\leq,+,0\}$ of ordered additive groups, and let $T^{OF}$ and $T^{RCF}$ be the theories of ordered fields and real-closed ordered fields in the language $\{\leq,+,\cdot,0,1\}$ of ordered fields.
\begin{theorem}
\label{OP}
\vspace{5pt}
\noindent
(i) If $T^{OAG} \subseteq T \subseteq T^{DOAG}$, then every model of $T$ is isomorphic to an initial subgroup of \textbf{\em {No}} if and only if $T = T^{DOAG}$.
\vspace{5pt}
\noindent
(ii) If $ T^{OF} \subseteq T \subseteq T^{RCF}$, then every model of $T$ is isomorphic to an initial subfield of \textbf{\em {No}} if and only if $T = T^{RCF}$.
\end{theorem}
\begin{proof}
In light of the above-mentioned results on divisible ordered abelian groups and real-closed fields from \cite{EH5}, it remains to consider the cases where $T \neq T^{DOAG}$ and $T \neq T^{RCF}$. Let $T^{OAG} \subseteq T \subset T^{DOAG}$ and let $M$ be a countable model of $T$ that is not divisible (such an $M$ exists, since otherwise every model of $T$ would be divisible and $T$ would equal $T^{DOAG}$). There is an elementary chain $M_\alpha$ ($\alpha < {\bf On}$) of models of $T$ such that for each $\alpha $, $M_\alpha$ is an $\omega_{\alpha+1}$-saturated elementary extension of $M$ of power $2^{\omega_\alpha}$ \cite[Lemma 5.1.4]{CK}. Since each $M_\alpha$ is an $\eta_{\alpha+1}$-ordering (\cite[Page 369]{CK}; also see \cite{ALK}), the union $M'$ of the chain is a model of $T$ that is an $\eta_{\bf On}$-ordering (see Proposition \ref{P:eta}). However, since {\bf {No}} is a lexicographically ordered full binary tree and no lexicographically ordered binary tree contains a proper initial ordered subtree that is isomorphic to itself \cite[Lemmas 1 and 2]{EH5}, it follows from Propositions \ref{P:Bd} and \ref{P:eta} that an initial ordered subtree of {\bf {No}} is an $\eta_{\bf On}$-ordering if and only if it is {\bf {No}} itself. But since {\bf {No}} is divisible and $M'$ is not, there is no isomorphism of $M'$ onto an initial subgroup of {\bf {No}}. Except for trivial modifications, the same argument applies to (ii), which suffices to prove the theorem.
\end{proof}
The above proof of Theorem \ref{OP} makes critical use of class models. Using a classical result from the theory of saturated models \cite[Lemma 5.1.5]{CK} together with a natural generalization of Theorem 4 of (\cite{EH5}, \cite{EH5.2}), a variation of the above proof shows that an analog of Theorem \ref{OP} that applies solely to structures whose universes are sets can be established in NBG supplemented with the existence of an inaccessible cardinal $< {\bf On}$ or an instance of GCH. This naturally suggests:
\begin{question}
Can an analog of Theorem \ref{OP} that applies solely to ordered abelian groups and ordered fields whose universes are sets be established in \rm{NBG}?
\end{question}
An ordered field $K$ is said to be $n$-\emph{real-closed} \cite[page 327]{B} if every polynomial of degree at most $n$ admitting a root in a real closure of $K$ admits a root in $K$. Boughattas \cite{B} has shown that, for each positive integer $n$, there is a model of the theory of $n$-real-closed fields, whose universe is a set, that does not have an integer part. Therefore, since every initial subfield of {\bf {No}} has a canonical integer part \cite[Theorem 25]{EH5}, it follows that for each positive integer $n$, there is a model of the theory of $n$-real-closed fields, whose universe is a set, that is not isomorphic to an initial subfield of {\bf {No}}. This, however, does not provide a positive answer to the field portion of Question 1, since there are theories of ordered fields that are not equivalent to the theory of $n$-real-closed fields for any $n$. One can also find theories of ordered abelian groups having models whose universes are sets that are not isomorphic to initial subgroups of {\bf {No}}. For example, if we say that an ordered abelian group is $n$-\emph{divisible} if every element is divisible by $n$, then for each prime $p>2$ the $p$-divisible ordered abelian group generated by 1 is not isomorphic to an initial subgroup of {\bf{No}}. However, we are not aware of a proof that applies to every theory of ordered abelian groups lacking full divisibility, which leaves the group portion of Question 1 open as well.
\section{Further open questions}
Unlike discrete initial subdomains of {\bf{No}}, discrete initial subgroups of {\bf{No}} need not be subgroups of {\bf{Oz}}. A case in point is the subgroup of {\bf{No}} consisting of all elements of the form $d+(a/\omega)$, where $d \in {\mathbb D}$ and $a \in {\mathbb Z}$. On the other hand, this discrete initial subgroup of {\bf{No}} is isomorphic to an initial subgroup of {\bf{Oz}}, as is evident from the mapping $f(d+(a/\omega))=\omega\cdot d + a$ for all $d \in {\mathbb D}$ and all $a \in {\mathbb Z}$. This motivates
\begin{question}
Is every discrete initial subgroup of \emph{\bf{No}} isomorphic to an initial subgroup of \emph{\bf{Oz}}?
\end{question}
\begin{question}
What is a set of conditions that are individually necessary and collectively sufficient for an arbitrary ordered monoid to be isomorphic to an initial submonoid of \emph{\bf{No}}?
\end{question}
| {
"timestamp": "2015-12-15T02:08:52",
"yymm": "1512",
"arxiv_id": "1512.04001",
"language": "en",
"url": "https://arxiv.org/abs/1512.04001",
"abstract": "In [15], the algebraico-tree-theoretic simplicity hierarchical structure of J. H. Conway's ordered field No of surreal numbers was brought to the fore and employed to provide necessary and sufficient conditions for an ordered field to be isomorphic to an initial subfield of No, i.e. a subfield of No that is an initial subtree of No. In this sequel to [15], analogous results for ordered abelian groups and ordered domains are established which in turn are employed to characterize the convex subgroups and convex subdomains of initial subfields of No that are themselves initial. It is further shown that an initial subdomain of No is discrete if and only if it is an initial subdomain of No's canonical integer part Oz of omnifc integers. Finally, extending results of [15], the theories of divisible ordered abelian groups and real-closed ordered fields are shown to be the sole theories of ordered abelian groups and ordered fields all of whose models are isomorphic to initial subgroups and initial subfields of No.",
"subjects": "Logic (math.LO)",
"title": "Number Systems with Simplicity Hierarchies II",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717456528508,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449302405748
} |
https://arxiv.org/abs/2203.01091 | Doubly truncated moment risk measures for elliptical distributions | In this paper, we define doubly truncated moment (DTM), doubly truncated skewness (DTS) and kurtosis (DTK). We derive DTM formulae for elliptical family, with emphasis on normal, student-$t$, logistic, Laplace and Pearson type VII distributions. We also present explicit formulas of the DTE (doubly truncated expectation), DTV (doubly truncated variance), DTS and DTK for those distributions. As illustrative example, DTEs, DTVs, DTSs and DTKs of three industry segments' (Banks, Insurance, Financial and Credit Service) stock return in London stock exchange are discussed. | \section{Introduction}
Landsman et al. (2016b) defined a new tail conditional moment (TCM) risk measure for a random variable $X$:
\begin{align}\label{(1)}
\mathrm{TCM}_{q}(X^{n})=\mathrm{E}\left[(X-\mathrm{TCE}_{q}(X))^{n}|X>x_{q}\right],
\end{align}
where
\begin{align*}
\mathrm{TCE}_{q}(X)=\mathrm{E}(X|X>x_{q})
\end{align*}
is the tail conditional expectation (TCE) of $X$, $x_{q}$ is the $q$-th quantile of $X$ and $q\in(0,1)$.
Furthermore, they also defined novel types of tail conditional skewness and kurtosis (TCS and TCK):
\begin{align}\label{(2)}
\mathrm{TCS}_{q}(X)=\frac{\mathrm{E}\left[(X-\mathrm{TCE}_{q}(X))^{3}|X>x_{q}\right]}{\mathrm{TV}_{q}^{3/2}(X)}
\end{align}
and
\begin{align}\label{(3)}
\mathrm{TCK}_{q}(X)=\frac{\mathrm{E}\left[(X-\mathrm{TCE}_{q}(X))^{4}|X>x_{q}\right]}{\mathrm{TV}_{q}^{2}(X)}-3,
\end{align}
where
\begin{align*}
\mathrm{TV}_{q}(X)=\mathrm{E}[(X-\mathrm{TCE}_{q}(X))^{2}|X>x_{q}]
\end{align*}
is the tail variance (TV) of $X$.
Landsman et al. (2016b) derived formulae of the TCM for elliptical and log-elliptical distributions, and presented the TCS and TCK for those distributions; Eini and Khaloozadeh (2021) generalized those results to generalized skew-elliptical distributions, and Zuo and Yin (2021a) extended them to some shifted distributions.
Recently, Roozegar et al. (2020) derived explicit expressions of the first two moments for doubly truncated multivariate normal mean-variance mixture distributions. Zuo and Yin (2021b) defined multivariate doubly truncated expectation and covariance risk measures, and derived formulas of the multivariate doubly truncated expectation (MDTE) and covariance (MDTCov) for elliptical distributions. As special cases of the MDTE and MDTCov risk measures, the authors also defined doubly truncated expectation (DTE) and variance (DTV) risk measures for a random variable $X$ as follows, respectively:
\begin{align}\label{(5)}
\mathrm{DTE}_{(p,q)}(X)=\mathrm{E}(X|x_{p}<X<x_{q})
\end{align}
and
\begin{align}\label{(8)}
\mathrm{DTV}_{(p,q)}(X)=\mathrm{E}[(X-\mathrm{DTE}_{(p,q)}(X))^{2}|x_{p}<X<x_{q}],
\end{align}
where $x_{k}$ $(k=p,~q)$ is the $k$-th quantile of $X$, and $p,~q\in(0,1)$ with $p<q$.
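To fix ideas, for a continuous random variable $X$ with density $f_{X}$ we have $\mathrm{P}(x_{p}<X<x_{q})=q-p$, so that (\ref{(5)}) takes the explicit integral form
\begin{align*}
\mathrm{DTE}_{(p,q)}(X)=\frac{1}{q-p}\int_{x_{p}}^{x_{q}}xf_{X}(x)\mathrm{d}x.
\end{align*}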
Inspired by those works, we define the doubly truncated moment (DTM), and also define doubly truncated skewness (DTS) and kurtosis (DTK). Moreover, we derive doubly truncated moments (DTM) for the elliptical family, and give explicit expressions of the DTE, DTV, DTS and DTK for this family and several of its special cases, such as the normal, student-$t$, logistic, Laplace and Pearson type VII distributions. As an illustrative example, we discuss DTEs, DTVs, DTSs and DTKs of three industry segments' (Banks, Insurance, Financial and Credit Service) stock returns in the London stock exchange.
The rest of the paper is organized as follows. Section 2 defines several doubly truncated risk measures. Section 3 introduces the elliptical family and its properties. In Section 4, we present the $n$-th doubly truncated moment (DTM) for elliptical distributions, and derive explicit expressions of DTV, DTS and DTK for this family. Special cases are given in Section 5. We give an illustrative example in Section 6. Finally, Section 7 contains the concluding remarks.
\section{Doubly truncated risk measures}
We define doubly truncated moment (DTM) risk measure of a random variable $X$ as follows:
\begin{align}\label{(4)}
\mathrm{DTM}_{(p,q)}(X^{n})=\mathrm{E}\left[(X-\mathrm{DTE}_{(p,q)}(X))^{n}|x_{p}<X<x_{q}\right],
\end{align}
where
$\mathrm{DTE}_{(p,q)}(X)$ is as in (\ref{(5)}), $x_{k}$ $(k=p,~q)$ is the $k$-th quantile of $X$, and $p,~q\in(0,1)$.\\
$\mathbf{Remark~1.}$ When $q\rightarrow1$, the doubly truncated moment (DTM) reduces to the tail conditional moment (TCM); when $p\rightarrow0$ and $q\rightarrow1$, the DTM reduces to the central moment.
Further, we define doubly truncated skewness (DTS) and kurtosis (DTK) risk measures:
\begin{align}\label{(6)}
\mathrm{DTS}_{(p,q)}(X)=\frac{\mathrm{E}\left[(X-\mathrm{DTE}_{(p,q)}(X))^{3}|x_{p}<X<x_{q}\right]}{\mathrm{DTV}_{(p,q)}^{3/2}(X)},
\end{align}
and
\begin{align}\label{(7)}
\mathrm{DTK}_{(p,q)}(X)=\frac{\mathrm{E}\left[(X-\mathrm{DTE}_{(p,q)}(X))^{4}|x_{p}<X<x_{q}\right]}{\mathrm{DTV}_{(p,q)}^{2}(X)}-3,
\end{align}
where
$\mathrm{DTV}_{(p,q)}(X)$ is as in (\ref{(8)}).\\
$\mathbf{Remark~2.}$ When $q\rightarrow1$, the doubly truncated skewness (DTS) reduces to the tail conditional skewness (TCS), and the doubly truncated kurtosis (DTK) reduces to the tail conditional kurtosis (TCK); when $p\rightarrow0$ and $q\rightarrow1$, the DTS and DTK reduce to the ordinary skewness and kurtosis, respectively.
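These definitions translate directly into an empirical estimator: estimate the quantiles, restrict the sample to $(x_{p},x_{q})$, and form central moments of the restricted sample. A minimal Monte Carlo sketch in Python, with hypothetical sample data and arbitrary $(p,q)$:
\begin{verbatim}
import numpy as np

def dtm(sample, n, p, q):
    # empirical DTM_{(p,q)}(X^n): the n-th central moment of X
    # conditional on x_p < X < x_q, cf. definition (4)
    x_p, x_q = np.quantile(sample, [p, q])
    body = sample[(sample > x_p) & (sample < x_q)]
    dte = body.mean()                        # empirical DTE_{(p,q)}(X)
    return np.mean((body - dte) ** n)

rng = np.random.default_rng(0)
x = rng.standard_t(df=6, size=10**6)         # hypothetical data
dtv = dtm(x, 2, 0.05, 0.95)                  # DTV, cf. (8)
dts = dtm(x, 3, 0.05, 0.95) / dtv**1.5       # DTS, cf. (6)
dtk = dtm(x, 4, 0.05, 0.95) / dtv**2 - 3     # DTK, cf. (7)
\end{verbatim}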
Note that the set risk measures proposed by Molchanov and Cascos (2016), Cai et al. (2017) and Shushi and Yao (2020) (defined as a map from a subset $S$ of
possible outcomes of losses $\Omega$ to some measure-valued space
$\mathcal{X}$, i.e., $S \subset \Omega \Rightarrow \rho(S) \in\mathcal{X}$) are
mathematically abstract and very complicated when dealing
with risks. By contrast,
tail conditional moment risk measures are comparatively simple
and can be derived explicitly,
which is important for actuarial users (see Landsman et al., 2016a). In addition to these advantages of the tail conditional moment, doubly truncated moment risk measures are more flexible than tail conditional moment risk measures: according to different needs, we can choose different $(p,q)$.
\section{Elliptical distributions}
A random variable $X$ is said to have an elliptically
symmetric distribution (see Landsman and Valdez, 2003) if its density has the form
\begin{align}\label{(9)}
f_{X}(x):=\frac{c_{1}}{\sigma}g_{1}\left\{\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right\},~x\in\mathbb{R},
\end{align}
where $\mu$ is a location parameter, $\sigma>0$ is a scale parameter, and $g_{1}(u)$, $u\geq0$, is the density generator of $X$; we denote this by $X\sim E_{1}(\mu,~\sigma^{2},~g_{1})$. The density generator $g_{1}$ satisfies the condition
\begin{align}\label{(10)}
\int_{0}^{\infty}s^{-1/2}g_{1}(s)\mathrm{d}s<\infty,
\end{align}
and the normalizing constant $c_1$ is given by
\begin{align*}
c_{1}&=\frac{\Gamma(1/2)}{(2\pi)^{1/2}}\left[\int_{0}^{\infty}s^{-1/2}g_{1}(s)\mathrm{d}s\right]^{-1}\\
&=\frac{1}{\sqrt{2}}\left[\int_{0}^{\infty}s^{-1/2}g_{1}(s)\mathrm{d}s\right]^{-1}.
\end{align*}
We define a sequence of cumulative generators $\overline{G}_{(k)}$, $k=1,2,\ldots,n$:
\begin{align}\label{(11)}
\overline{G}_{(1)}(u)=\int_{u}^{\infty}g_{1}(s)\mathrm{d}s
\end{align}
and
\begin{align}\label{(11a)}
\overline{G}_{(k)}(u)=\int_{u}^{\infty}\overline{G}_{(k-1)}(s)\mathrm{d}s,~k\geq2.
\end{align}
The normalizing constants $c_{(k)}^{\ast},~k\geq1,$ are given by
\begin{align}\label{(12)}
c_{(k)}^{\ast}=\frac{1}{\sqrt{2}}\left[\int_{0}^{\infty}s^{-1/2}\overline{G}_{(k)}(s)\mathrm{d}s\right]^{-1}.
\end{align}
The cumulative generators $\overline{G}_{(k)},~k=1,2,\ldots,n,$ satisfy the condition
\begin{align}\label{(13)}
\int_{0}^{\infty}s^{-1/2}\overline{G}_{(k)}(s)\mathrm{d}s<\infty,~k=1,2,\ldots,n.
\end{align}
\section{$n$-th doubly truncated moment}
In this section, we present the $n$-th doubly truncated moment (DTM) of elliptical distributions, and also present the DTV, DTS and DTK of these distributions.
To derive the $n$-th DTM of elliptical distributions, we define a truncated distribution function as follows (see Zuo and Yin, 2021b): $$F_{Z}(a,b)=\int_{a}^{b}f_{Z}(z)\mathrm{d}z,$$
where $f_{Z}(z)$ is the pdf of the random variable $Z$.
Firstly, we give the following lemma.
\begin{lemma}\label{le.1}
Let $X\sim E_{1}(\mu,~\sigma^{2},~g_{1})$. Assume it satisfies conditions (\ref{(10)}) and (\ref{(13)}).
Then
\begin{align}\label{(a9)}
\nonumber&\mathrm{E}[X^{n}|x_{p}<X<x_{q}]\\
&=\mu^{n}+n\mu^{n-1}\sigma\mathrm{DTE}_{(p,q)}(Y)+\sum_{i=2}^{n}\binom{n}{i}\mu^{n-i}\sigma^{i}\left[L_{1}+(i-1)\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right],
\end{align}
where
$$\mathrm{DTE}_{(p,q)}(Y)=\frac{c_{1}\left(\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\overline{G}_{(1)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right)}{F_{Y}(\xi_{p},\xi_{q})},$$
$$L_{1}=\frac{c_{1}\left[\xi_{p}^{i-1}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\xi_{q}^{i-1}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})},$$
$$L_{2}=\frac{\int_{\xi_{p}}^{\xi_{q}}y^{i-2}c_{(1)}^{\ast}\overline{G}_{(1)}\left(\frac{1}{2}y^{2}\right)\mathrm{d}y}{F_{Y}(\xi_{p},\xi_{q})},$$
$\xi_{k}=\frac{x_{k}-\mu}{\sigma}$, $k=p,q$, and $Y\sim E_{1}(0,~1,~g_{1})$.
\end{lemma}
\noindent $\mathbf{Proof}$ Using the definition, we have
\begin{align*}
\mathrm{E}[X^{n}|x_{p}<X<x_{q}]=\frac{\int_{x_{p}}^{x_{q}}x^{n}\frac{c_{1}}{\sigma}g_{1}\left(\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{2}\right)\mathrm{d}x}{F_{X}(x_{p},x_{q})}.
\end{align*}
Applying the transformation $y=\frac{x-\mu}{\sigma}$, and using the Binomial Theorem, we obtain
\begin{align*}
\mathrm{E}[X^{n}|x_{p}<X<x_{q}]&=\frac{\int_{\xi_{p}}^{\xi_{q}}(\sigma y+\mu)^{n}c_{1}g_{1}\left(\frac{1}{2}y^{2}\right)\mathrm{d}y}{F_{Y}(\xi_{p},\xi_{q})}\\
&=\frac{\sum_{i=0}^{n}\binom{n}{i}\mu^{n-i}\sigma^{i}\int_{\xi_{p}}^{\xi_{q}} y^{i}c_{1}g_{1}\left(\frac{1}{2}y^{2}\right)\mathrm{d}y}{F_{Y}(\xi_{p},\xi_{q})}.
\end{align*}
Therefore,
\begin{align*}
&\mathrm{E}[X^{n}|x_{p}<X<x_{q}]\\
&=\mu^{n}+n\mu^{n-1}\sigma\mathrm{DTE}_{(p,q)}(Y)+\sum_{i=2}^{n}\binom{n}{i}\mu^{n-i}\sigma^{i}\left[L_{1}+(i-1)\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right],
\end{align*}
as required.
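For a concrete generator, Lemma \ref{le.1} can be spot-checked numerically. The sketch below (Python with SciPy; the parameter values are arbitrary) computes the left-hand side $\mathrm{E}[X^{n}|x_{p}<X<x_{q}]$ by quadrature for the normal generator $g_{1}(u)=e^{-u}$, using the fact that $F_{X}(x_{p},x_{q})=q-p$ because $x_{p}$ and $x_{q}$ are quantiles of $X$:
\begin{verbatim}
from scipy import integrate, stats

mu, sigma, n, p, q = 0.5, 1.5, 3, 0.1, 0.9    # arbitrary values
x_p, x_q = stats.norm.ppf([p, q], loc=mu, scale=sigma)
num, _ = integrate.quad(lambda x: x**n * stats.norm.pdf(x, mu, sigma),
                        x_p, x_q)
moment = num / (q - p)    # F_X(x_p, x_q) = q - p by definition
\end{verbatim}
The result can then be compared with the right-hand side of (\ref{(a9)}) once $L_{1}$ and $L_{2}$ are evaluated for the chosen generator (for the normal generator this is carried out in Example 1 below).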
Now we establish the formula of DTM for elliptical distributions.
\begin{theorem}\label{th.4} Suppose that $X\sim E_{1}(\mu,\sigma^{2},~g_{1})$, which satisfies conditions (\ref{(10)}) and (\ref{(13)}).
Then
\begin{align}\label{(a10)}
&\nonumber\mathrm{DTM}_{(p,q)}(X^{n})=(-1)^{n}\mathrm{DTE}_{(p,q)}^{n}(X)+(-1)^{n-1}n\mathrm{DTE}_{(p,q)}^{n}(X)\\
&\nonumber~~~~~~~~~~~~~~~~~~+\sum_{k=2}^{n}\binom{n}{k}(-\mathrm{DTE}_{(p,q)}(X))^{n-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&+\sum_{k=2}^{n}\sum_{i=2}^{k}\binom{n}{k}\binom{k}{i}(-\mathrm{DTE}_{(p,q)}(X))^{n-k}\mu^{k-i}\sigma^{i}\left[L_{1}+(i-1)\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right],~n\geq2,
\end{align}
where
\begin{align}\label{(a11)} \mathrm{DTE}_{(p,q)}(X)=\mu+\sigma\frac{c_{1}\left[\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\overline{G}_{(1)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})},
\end{align}
$\xi_{k}$, $k=p,q$, $L_{1},$ $L_{2}$ and $Y$ are the same as those in Lemma 1.
\end{theorem}
\noindent $\mathbf{Proof}$ Using the Binomial Theorem and basic algebraic calculations, we have
\begin{align*}
\mathrm{DTM}_{(p,q)}(X^{n})&=\mathrm{E}[(X-\mathrm{DTE}_{(p,q)}(X))^{n}|x_{p}<X<x_{q}]\\
&=\mathrm{E}\left[\sum_{k=0}^{n}\binom{n}{k}X^{k}(-\mathrm{DTE}_{(p,q)}(X))^{n-k}|x_{p}<X<x_{q}\right]\\
&=\sum_{k=0}^{n}\binom{n}{k}(-\mathrm{DTE}_{(p,q)}(X))^{n-k}\mathrm{E}[X^{k}|x_{p}<X<x_{q}].
\end{align*}
By (45) of Zuo and Yin (2021b), we obtain that $\mathrm{DTE}_{(p,q)}(X)$ is as in (\ref{(a11)}).\\
Then, using Lemma 1, we obtain Eq.(\ref{(a10)}), as required.\\
$\mathbf{Remark~3.}$ When $q\rightarrow1$, the $n$-th tail conditional moment (TCM) for the elliptical distribution is given by
\begin{align}\label{(a12)}
\nonumber&\mathrm{TCM}_{p}(X^{n})=(-1)^{n}\mathrm{TCE}_{p}^{n}(X)+(-1)^{n-1}n\mathrm{TCE}_{p}^{n}(X)\\
&\nonumber~~~~~~~~~~~~~~~~+\sum_{k=2}^{n}\binom{n}{k}(-\mathrm{TCE}_{p}(X))^{n-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{TCE}_{p}(Y)]\\
&+\sum_{k=2}^{n}\sum_{i=2}^{k}\binom{n}{k}\binom{k}{i}(-\mathrm{TCE}_{p}(X))^{n-k}\mu^{k-i}\sigma^{i}\left[L_{1}+(i-1)\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right],~n\geq2,
\end{align}
where
\begin{align}\label{(a13)} \mathrm{TCE}_{p}(X)=\mu+\sigma\frac{c_{1}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)}{\overline{F}_{Y}(\xi_{p})},
\end{align}
$$L_{1}=\frac{c_{1}\xi_{p}^{i-1}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)}{\overline{F}_{Y}(\xi_{p})},$$
$$L_{2}=\frac{\int_{\xi_{p}}^{\infty}y^{i-2}c_{(1)}^{\ast}\overline{G}_{(1)}\left(\frac{1}{2}y^{2}\right)\mathrm{d}y}{\overline{F}_{Y}(\xi_{p})},$$
and $\overline{F}_{X}(\cdot)$ denotes the tail function of $X$.\\
Note that (\ref{(a12)}) is the result of Theorem 1 in Landsman et al. (2016b).\\
$\mathbf{Remark~4.}$ Letting $p\rightarrow0$ and $q\rightarrow1$ in Theorem 1, the $n$-th central moment (CM) of the elliptical distribution becomes
\begin{align}\label{(a14)}
\nonumber\mathrm{CM}(X^{n})=&(-1)^{n}\mu^{n}+(-1)^{n-1}n\mu^{n}+\sum_{k=2}^{n}\binom{n}{k}(-\mu)^{n-k}\mu^{k}\\
&+\sum_{k=2}^{n}\sum_{i=2}^{k}\binom{n}{k}\binom{k}{i}(-1)^{n-k}\mu^{n-i}\sigma^{i}(i-1)\frac{c_{1}}{c_{(1)}^{\ast}}L_{2},~n\geq2,
\end{align}
where
$$L_{2}=\int_{-\infty}^{\infty}y^{i-2}c_{(1)}^{\ast}\overline{G}_{(1)}\left(\frac{1}{2}y^{2}\right)\mathrm{d}y.$$
Now, we give explicit expressions of DTV, DTS and DTK for elliptical distributions.
\begin{corollary}\label{co.1} Under conditions of Theorem 1, we have
\begin{align}\label{(a15)}
&\mathrm{DTV}_{(p,q)}(X)=-\mathrm{DTE}_{(p,q)}^{2}(X)+\mu^{2}+2\mu\sigma\mathrm{DTE}_{(p,q)}(Y)+\sigma^{2}\left(L_{1}+\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right),
\end{align}
where
\begin{align*}
L_{1}=\frac{c_{1}\left[\xi_{p}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\xi_{q}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})},~
L_{2}=\frac{F_{Y_{(1)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
where
$\mathrm{DTE}_{(p,q)}(X)$, $\xi_{k},~k=p,q,$ and $Y$ are the same as those in Theorem 1. In addition, $Y_{(1)}\sim E_{1}(0,~1,~\overline{G}_{(1)})$.
\end{corollary}
Note that (\ref{(a15)}) coincides with the result of (65) in Zuo and Yin (2021b). When $q\rightarrow1$, (\ref{(a15)}) is the result of (1.7) in Furman and Landsman (2006).\\
\begin{corollary}\label{co.2} Under conditions of Theorem 1, we have
\begin{align}\label{(a16)}
\nonumber&\mathrm{DTS}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-3/2}(X)\bigg\{\sum_{k=2}^{3}\binom{3}{k}[-\mathrm{DTE}_{(p,q)}(X)]^{3-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&+2\mathrm{DTE}_{(p,q)}^{3}(X)+3[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{2}\left(L_{1}+\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right)+\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)\bigg\},
\end{align}
where
\begin{align*}
L_{1}^{\ast}=\frac{c_{1}\left[\xi_{p}^{2}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\xi_{q}^{2}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast}&=\frac{c_{1}\left[\overline{G}_{(2)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\overline{G}_{(2)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
where
$\mathrm{DTV}_{(p,q)}(X)$, $L_{1}$ and $L_{2}$ are the same as those in Corollary 1.
\end{corollary}
\begin{corollary}\label{co.3} Under conditions of Theorem 1, we have
\begin{align}\label{(a17)}
\nonumber&\mathrm{DTK}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-2}(X)\bigg\{-3\mathrm{DTE}_{(p,q)}^{4}(X)
+6[\mu-\mathrm{DTE}_{(p,q)}(X)]^{2}\sigma^{2}\left(L_{1}+\frac{c_{1}}{c_{(1)}^{\ast}}L_{2}\right)\\
&\nonumber~~+\sum_{k=2}^{4}\binom{4}{k}(-\mathrm{DTE}_{(p,q)}(X))^{4-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&~~+4[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)+\sigma^{4}\left(L_{1}^{\ast\ast}+3L_{2}^{\ast\ast}\right)\bigg\}-3,
\end{align}
where
\begin{align*}
L_{1}^{\ast\ast}=\frac{c_{1}\left[\xi_{p}^{3}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\xi_{q}^{3}\overline{G}_{(1)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast\ast}&=\frac{c_{1}\left[\xi_{p}\overline{G}_{(2)}\left(\frac{1}{2}\xi_{p}^{2}\right)-\xi_{q}\overline{G}_{(2)}\left(\frac{1}{2}\xi_{q}^{2}\right)\right]}{F_{Y}(\xi_{p},\xi_{q})}+\frac{c_{1}}{c_{(2)}^{\ast}}\frac{F_{Y_{(2)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})}.
\end{align*}
Here
$\mathrm{DTV}_{(p,q)}(X)$, $L_{1}^{\ast}$ and $L_{2}^{\ast}$ are the same as those in Corollary 2. In addition, $Y_{(2)}\sim E_{1}(0,~1,~\overline{G}_{(2)})$.
\end{corollary}
Note that (\ref{(a16)}) and (\ref{(a17)}) coincide with the results of (3.22) and (3.24) in Landsman et al. (2016b) as $q\rightarrow1$, respectively.
\section{Special cases}
In the following, we present the DTV, DTS and DTK for several special members of the univariate elliptical family, namely the normal, student-$t$, logistic, Laplace and Pearson type VII distributions.\\
$\mathbf{Example~1}$ (Normal distribution). Let $X\sim N_{1}(\mu,~\sigma^{2})$. In this case, the density generators are given by:
\begin{align*}
g_{1}(u)=\overline{G}_{(1)}(u)=\overline{G}_{(2)}(u)=\exp\{-u\},
\end{align*}
and the normalizing constants are written as:
\begin{align*}
c_{1}=c_{(1)}^{\ast}=c_{(2)}^{\ast}=(2\pi)^{-\frac{1}{2}}.
\end{align*}
Then
\begin{align*}
&\mathrm{DTV}_{(p,q)}(X)=-\mathrm{DTE}_{(p,q)}^{2}(X)+\mu^{2}+2\mu\sigma\mathrm{DTE}_{(p,q)}(Y)+\sigma^{2}\left(L_{1}+1\right),
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTS}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-3/2}(X)\bigg\{\sum_{k=2}^{3}\binom{3}{k}[-\mathrm{DTE}_{(p,q)}(X)]^{3-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&~~+2\mathrm{DTE}_{(p,q)}^{3}(X)+3[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{2}\left(L_{1}+1\right)+\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)\bigg\},
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTK}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-2}(X)\bigg\{-3\mathrm{DTE}_{(p,q)}^{4}(X)
+6[\mu-\mathrm{DTE}_{(p,q)}(X)]^{2}\sigma^{2}\left(L_{1}+1\right)\\
&\nonumber~~+\sum_{k=2}^{4}\binom{4}{k}(-\mathrm{DTE}_{(p,q)}(X))^{4-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&~~+4[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)+\sigma^{4}\left[L_{1}^{\ast\ast}+3(L_{1}+1)\right]\bigg\}-3,
\end{align*}
where
\begin{align*}
\mathrm{DTE}_{(p,q)}(X)=\mu+\sigma\frac{\phi(\xi_{p})-\phi(\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}=\frac{\xi_{p}\phi(\xi_{p})-\xi_{q}\phi(\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},~
L_{1}^{\ast}=\frac{\xi_{p}^{2}\phi(\xi_{p})-\xi_{q}^{2}\phi(\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast}=\frac{\phi(\xi_{p})-\phi(\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},~
L_{1}^{\ast\ast}=\frac{\xi_{p}^{3}\phi(\xi_{p})-\xi_{q}^{3}\phi(\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
$\xi_{k}=\frac{x_{k}-\mu}{\sigma}$, $k=p,q$, and $Y\sim N_{1}(0,~1)$. In addition, $\phi(\cdot)$ is the pdf of the $1$-dimensional standard normal distribution.\\
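For the normal distribution, the four quantities above are exactly the mean, variance, skewness and excess kurtosis of a doubly truncated normal, so they can be cross-checked against SciPy's \texttt{truncnorm}. A sketch with arbitrary $\mu$, $\sigma$, $p$, $q$:
\begin{verbatim}
from scipy import stats

mu, sigma, p, q = 1.0, 2.0, 0.05, 0.95       # arbitrary values
xi_p, xi_q = stats.norm.ppf([p, q])          # standardized quantiles
tn = stats.truncnorm(xi_p, xi_q, loc=mu, scale=sigma)
dte, dtv, dts, dtk = tn.stats(moments='mvsk')
# closed form for DTE from Example 1:
phi, F_Y = stats.norm.pdf, q - p
assert abs(dte - (mu + sigma*(phi(xi_p) - phi(xi_q))/F_Y)) < 1e-10
\end{verbatim}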
$\mathbf{Example~2}$ (Student-$t$ distribution). Let
$X\sim St_{1}\left(\mu,~\sigma^{2},~m\right).$
In this case, the density generators are given by (for details, see Zuo et al., 2021):
\begin{align*}
g_{1}(u)=\left(1+\frac{2u}{m}\right)^{-(m+1)/2},
\end{align*}
\begin{align*}
\overline{G}_{(1)}(u)=\frac{m}{m-1}\left(1+\frac{2u}{m}\right)^{-(m-1)/2}
\end{align*}
and
\begin{align*}
\overline{G}_{(2)}(u)=\frac{m^{2}}{(m-1)(m-3)}\left(1+\frac{2u}{m}\right)^{-(m-3)/2}.
\end{align*}
The normalizing constants are written as:
\begin{align*}
c_{1}=\frac{\Gamma\left((m+1)/2\right)}{\Gamma(m/2)(m\pi)^{\frac{1}{2}}},
\end{align*}
\begin{align*}
\nonumber c_{(1)}^{\ast}&=\frac{(m-1)\Gamma(1/2)}{(2\pi)^{1/2}m}\left[\int_{0}^{\infty}u^{1/2-1}\left(1+\frac{2u}{m}\right)^{-(m-1)/2}\mathrm{d}u\right]^{-1}\\
&=\frac{(m-1)}{m^{3/2}B(\frac{1}{2},~\frac{m-2}{2})},~if~m>2
\end{align*} and
\begin{align*}
\nonumber c_{(2)}^{\ast}&=\frac{(m-1)(m-3)\Gamma(1/2)}{(2\pi)^{1/2}m^{2}}\left[\int_{0}^{\infty}u^{1/2-1}\left(1+\frac{2u}{m}\right)^{-(m-3)/2}\mathrm{d}u\right]^{-1}\\
&=\frac{(m-1)(m-3)}{m^{5/2}B(\frac{1}{2},~\frac{m-4}{2})},~if~m>4,
\end{align*}
where $\Gamma(\cdot)$ and $B(\cdot,\cdot)$ are the Gamma function and the Beta function, respectively. Then
\begin{align*}
\nonumber&\mathrm{DTV}_{(p,q)}(X)\\
&=-\mathrm{DTE}_{(p,q)}^{2}(X)+\mu^{2}+2\mu\sigma\mathrm{DTE}_{(p,q)}(Y)+\sigma^{2}\left(L_{1}+\frac{m}{m-2}L_{2}\right),~m>2,
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTS}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-3/2}(X)\bigg\{\sum_{k=2}^{3}\binom{3}{k}(-\mathrm{DTE}_{(p,q)}(X))^{3-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
\nonumber&+2\mathrm{DTE}_{(p,q)}^{3}(X)+3[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{2}\left(L_{1}+\frac{m}{m-2}L_{2}\right)+\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)\bigg\},\\
&~m>2,
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTK}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-2}(X)\bigg\{-3\mathrm{DTE}_{(p,q)}^{4}(X)
+6[\mu-\mathrm{DTE}_{(p,q)}(X)]^{2}\sigma^{2}\left(L_{1}+\frac{m}{m-2}L_{2}\right)\\
&\nonumber~~+\sum_{k=2}^{4}\binom{4}{k}(-\mathrm{DTE}_{(p,q)}(X))^{4-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&~~+4[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)+\sigma^{4}\left(L_{1}^{\ast\ast}+3L_{2}^{\ast\ast}\right)\bigg\}-3,~m>2,
\end{align*}
where
\begin{align*} \mathrm{DTE}_{(p,q)}(X)=\mu+\sigma\frac{\Gamma\left((m+1)/2\right)\sqrt{m}\left[\left(1+\frac{\xi_{p}^{2}}{m}\right)^{-(m-1)/2}-\left(1+\frac{\xi_{q}^{2}}{m}\right)^{-(m-1)/2}\right]}{\Gamma(m/2)(m-1)\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}=\frac{\Gamma\left((m+1)/2\right)\sqrt{m}\left[\xi_{p}\left(1+\frac{\xi_{p}^{2}}{m}\right)^{-(m-1)/2}-\xi_{q}\left(1+\frac{\xi_{q}^{2}}{m}\right)^{-(m-1)/2}\right]}{\Gamma(m/2)(m-1)\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}&=\frac{F_{Y_{(1)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast}=\frac{\Gamma\left((m+1)/2\right)\sqrt{m}\left[\xi_{p}^{2}\left(1+\frac{\xi_{p}^{2}}{m}\right)^{-(m-1)/2}-\xi_{q}^{2}\left(1+\frac{\xi_{q}^{2}}{m}\right)^{-(m-1)/2}\right]}{\Gamma(m/2)(m-1)\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast}&=\frac{\Gamma\left((m+1)/2\right)m^{3/2}\left[\left(1+\frac{\xi_{p}^{2}}{m}\right)^{-(m-3)/2}-\left(1+\frac{\xi_{q}^{2}}{m}\right)^{-(m-3)/2}\right]}{\Gamma(m/2)(m-1)(m-3)\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast\ast}=\frac{\Gamma\left((m+1)/2\right)\sqrt{m}\left[\xi_{p}^{3}\left(1+\frac{\xi_{p}^{2}}{m}\right)^{-(m-1)/2}-\xi_{q}^{3}\left(1+\frac{\xi_{q}^{2}}{m}\right)^{-(m-1)/2}\right]}{\Gamma(m/2)(m-1)\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast\ast}=&\frac{\Gamma\left((m+1)/2\right)m^{3/2}\left[\xi_{p}\left(1+\frac{\xi_{p}^{2}}{m}\right)^{-(m-3)/2}-\xi_{q}\left(1+\frac{\xi_{q}^{2}}{m}\right)^{-(m-3)/2}\right]}{\Gamma(m/2)(m-1)(m-3)\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})}\\
&+\frac{m^{2}}{(m-2)(m-4)}\frac{F_{Y_{(2)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},~m>4,
\end{align*}
$\xi_{k}=\frac{x_{k}-\mu}{\sigma}$, $k=p,q$, $Y\sim St_{1}(0,~1,~m)$, $Y_{(1)}\sim E_{1}(0,~1,~\overline{G}_{(1)})$ and $Y_{(2)}\sim E_{1}(0,~1,~\overline{G}_{(2)})$.\\
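Since $Y\sim St_{1}(0,~1,~m)$ has exactly the standard Student-$t$ density with $m$ degrees of freedom, the closed-form expression for $\mathrm{DTE}_{(p,q)}(X)$ above can be verified by quadrature. A sketch with arbitrary $m$, $p$, $q$ (and $\mu=0$, $\sigma=1$):
\begin{verbatim}
import math
from scipy import integrate, stats

m, p, q = 6, 0.1, 0.9                        # arbitrary values
xi_p, xi_q = stats.t.ppf([p, q], df=m)
F_Y = q - p
lhs = integrate.quad(lambda y: y*stats.t.pdf(y, df=m),
                     xi_p, xi_q)[0] / F_Y
rhs = (math.gamma((m+1)/2) * math.sqrt(m)
       * ((1 + xi_p**2/m)**(-(m-1)/2) - (1 + xi_q**2/m)**(-(m-1)/2))
       / (math.gamma(m/2) * (m-1) * math.sqrt(math.pi) * F_Y))
assert abs(lhs - rhs) < 1e-10
\end{verbatim}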
$\mathbf{Example~3}$ (Logistic distribution). Let $X\sim Lo_{1}\left(\mu,~\sigma^{2}\right)$. In this case, the density generators are given by (for details, see Zuo et al., 2021):
\begin{align*}
g_{1}(u)=\frac{\exp(-u)}{[1+\exp(-u)]^{2}},
\end{align*}
\begin{align*}
\overline{G}_{(1)}(u)=\frac{\exp(-u)}{1+\exp(-u)}
\end{align*}
and
\begin{align*}
\overline{G}_{(2)}(u)=\ln\left[1+\exp(-u)\right].
\end{align*}
The normalizing constants are written as:
\begin{align*}
c_{1}=\frac{1}{(2\pi)^{1/2}\Psi_{2}^{\ast}(-1,\frac{1}{2},1)},
\end{align*}
\begin{align*}
c_{(1)}^{\ast}=\frac{1}{(2\pi)^{1/2}\Psi_{1}^{\ast}(-1,\frac{1}{2},1)}
\end{align*}
and
\begin{align*}
c_{(2)}^{\ast}=\frac{1}{(2\pi)^{1/2}\Psi_{1}^{\ast}(-1,\frac{3}{2},1)}.
\end{align*} Then
\begin{align*}
\nonumber&\mathrm{DTV}_{(p,q)}(X)\\
&=-\mathrm{DTE}_{(p,q)}^{2}(X)+\mu^{2}+2\mu\sigma\mathrm{DTE}_{(p,q)}(Y)+\sigma^{2}\left[L_{1}+\frac{\Psi_{1}^{\ast}(-1,\frac{1}{2},1)}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)}L_{2}\right],
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTS}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-3/2}(X)\bigg\{\sum_{k=2}^{3}\binom{3}{k}(-\mathrm{DTE}_{(p,q)}(X))^{3-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&+2\mathrm{DTE}_{(p,q)}^{3}(X)+3[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{2}\left[L_{1}+\frac{\Psi_{1}^{\ast}(-1,\frac{1}{2},1)}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)}L_{2}\right]+\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)\bigg\},
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTK}_{(p,q)}(X)=\\
\nonumber&\mathrm{DTV}_{(p,q)}^{-2}(X)\bigg\{-3\mathrm{DTE}_{(p,q)}^{4}(X)
+6[\mu-\mathrm{DTE}_{(p,q)}(X)]^{2}\sigma^{2}\left[L_{1}+\frac{\Psi_{1}^{\ast}(-1,\frac{1}{2},1)}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)}L_{2}\right]\\
&\nonumber+\sum_{k=2}^{4}\binom{4}{k}(-\mathrm{DTE}_{(p,q)}(X))^{4-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&+4[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)+\sigma^{4}\left(L_{1}^{\ast\ast}+3L_{2}^{\ast\ast}\right)\bigg\}-3,
\end{align*}
where
\begin{align*} \mathrm{DTE}_{(p,q)}(X)=\mu+\sigma\frac{\phi(\xi_{p})(1+\sqrt{2\pi}\phi(\xi_{q}))-\phi(\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)F_{Y}(\xi_{p},\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))(1+\sqrt{2\pi}\phi(\xi_{q}))},
\end{align*}
\begin{align*}
L_{1}=\frac{\xi_{p}\phi(\xi_{p})(1+\sqrt{2\pi}\phi(\xi_{q}))-\xi_{q}\phi(\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)F_{Y}(\xi_{p},\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))(1+\sqrt{2\pi}\phi(\xi_{q}))},~
L_{2}&=\frac{F_{Y_{(1)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast}=\frac{\xi_{p}^{2}\phi(\xi_{p})(1+\sqrt{2\pi}\phi(\xi_{q}))-\xi_{q}^{2}\phi(\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)F_{Y}(\xi_{p},\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))(1+\sqrt{2\pi}\phi(\xi_{q}))},
\end{align*}
\begin{align*}
L_{2}^{\ast}&=\frac{\ln(1+\sqrt{2\pi}\phi(\xi_{p}))-\ln(1+\sqrt{2\pi}\phi(\xi_{q}))}{\sqrt{2\pi}\Psi_{2}^{\ast}(-1,\frac{1}{2},1)F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast\ast}=\frac{\xi_{p}^{3}\phi(\xi_{p})(1+\sqrt{2\pi}\phi(\xi_{q}))-\xi_{q}^{3}\phi(\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)F_{Y}(\xi_{p},\xi_{q})(1+\sqrt{2\pi}\phi(\xi_{p}))(1+\sqrt{2\pi}\phi(\xi_{q}))},
\end{align*}
\begin{align*}
L_{2}^{\ast\ast}&=\frac{\xi_{p}\ln(1+\sqrt{2\pi}\phi(\xi_{p}))-\xi_{q}\ln(1+\sqrt{2\pi}\phi(\xi_{q}))}{\sqrt{2\pi}\Psi_{2}^{\ast}(-1,\frac{1}{2},1)F_{Y}(\xi_{p},\xi_{q})}+\frac{\Psi_{1}^{\ast}(-1,\frac{3}{2},1)}{\Psi_{2}^{\ast}(-1,\frac{1}{2},1)}\frac{F_{Y_{(2)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
$\xi_{k}=\frac{x_{k}-\mu}{\sigma}$, $k=p,q$, $Y\sim Lo_{1}(0,~1)$, $Y_{(1)}\sim E_{1}(0,~1,~\overline{G}_{(1)})$ and $Y_{(2)}\sim E_{1}(0,~1,~\overline{G}_{(2)})$. \\
$\mathbf{Remark~5.}$ Here $\Psi_{\kappa}^{\ast}(z,s,a)$ is the generalized Hurwitz-Lerch zeta function defined by (see
Lin et al., 2006)
$$\Psi_{\kappa}^{\ast}(z,s,a)=\frac{1}{\Gamma(\kappa)}\sum_{n=0}^{\infty}\frac{\Gamma(\kappa+n)}{n!}\frac{z^{n}}{(n+a)^{s}},$$
which has an integral representation
$$\Psi_{\kappa}^{\ast}(z,s,a)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{t^{s-1}e^{-at}}{(1-ze^{-t})^{\kappa}}\mathrm{d}t,$$
where $\mathcal{R}(a)>0$, $\mathcal{R}(s)>0$ when $|z|\leq1~(z\neq1)$, $\mathcal{R}(s)>1$ when $z=1$.\\
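For numerical work the integral representation is the convenient one, since the series need not converge for the parameter values used above (e.g., $\kappa=2$, $s=\tfrac{1}{2}$, $z=-1$). A direct quadrature sketch in Python:
\begin{verbatim}
import math
from scipy import integrate

def psi_star(kappa, z, s, a):
    # generalized Hurwitz-Lerch zeta via the integral representation
    val, _ = integrate.quad(
        lambda t: t**(s-1)*math.exp(-a*t)/(1 - z*math.exp(-t))**kappa,
        0, math.inf)
    return val / math.gamma(s)

# e.g. the normalizing constant c_1 of the logistic case above:
c1 = 1 / (math.sqrt(2*math.pi) * psi_star(2, -1, 0.5, 1))
\end{verbatim}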
$\mathbf{Example~4}$ (Laplace distribution). Let $X\sim La_{1}\left(\mu,~\sigma^{2}\right)$. In this case, the density generators are given by (for details, see Zuo et al., 2021):
\begin{align*}
g_{1}(u)=\exp(-\sqrt{2u}),
\end{align*}
\begin{align*}
\overline{G}_{(1)}(u)=(1+\sqrt{2u})\exp(-\sqrt{2u})
\end{align*}
and
\begin{align*}
\overline{G}_{(2)}(u)=(3+2u+3\sqrt{2u})\exp(-\sqrt{2u}).
\end{align*}
The normalizing constants are written as:
\begin{align*}
c_{1}=\frac{1}{2},~c_{(1)}^{\ast}=\frac{1}{4},~ c_{(2)}^{\ast}=\frac{1}{16}.
\end{align*}
Then
\begin{align*}
&\mathrm{DTV}_{(p,q)}(X)=-\mathrm{DTE}_{(p,q)}^{2}(X)+\mu^{2}+2\mu\sigma\mathrm{DTE}_{(p,q)}(Y)+\sigma^{2}\left(L_{1}+2L_{2}\right),
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTS}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-3/2}(X)\bigg\{\sum_{k=2}^{3}\binom{3}{k}(-\mathrm{DTE}_{(p,q)}(X))^{3-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&+2\mathrm{DTE}_{(p,q)}^{3}(X)+3[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{2}\left(L_{1}+2L_{2}\right)+\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)\bigg\},
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTK}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-2}(X)\bigg\{-3\mathrm{DTE}_{(p,q)}^{4}(X)
+6[\mu-\mathrm{DTE}_{(p,q)}(X)]^{2}\sigma^{2}\left(L_{1}+2L_{2}\right)\\
&\nonumber~~+\sum_{k=2}^{4}\binom{4}{k}(-\mathrm{DTE}_{(p,q)}(X))^{4-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&~~+4[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)+\sigma^{4}\left(L_{1}^{\ast\ast}+3L_{2}^{\ast\ast}\right)\bigg\}-3,
\end{align*}
where
\begin{align*} \mathrm{DTE}_{(p,q)}(X)=\mu+\sigma\frac{(1+|\xi_{p}|)\exp(-|\xi_{p}|)-(1+|\xi_{q}|)\exp(-|\xi_{q}|)}{2F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}=\frac{\xi_{p}(1+|\xi_{p}|)\exp(-|\xi_{p}|)-\xi_{q}(1+|\xi_{q}|)\exp(-|\xi_{q}|)}{2F_{Y}(\xi_{p},\xi_{q})},~
L_{2}&=\frac{F_{Y_{(1)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast}=\frac{\xi_{p}^{2}(1+|\xi_{p}|)\exp(-|\xi_{p}|)-\xi_{q}^{2}(1+|\xi_{q}|)\exp(-|\xi_{q}|)}{2F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast}&=\frac{(3+\xi_{p}^{2}+3|\xi_{p}|)\exp(-|\xi_{p}|)-(3+\xi_{q}^{2}+3|\xi_{q}|)\exp(-|\xi_{q}|)}{2F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast\ast}=\frac{\xi_{p}^{3}(1+|\xi_{p}|)\exp(-|\xi_{p}|)-\xi_{q}^{3}(1+|\xi_{q}|)\exp(-|\xi_{q}|)}{2F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast\ast}&=\frac{\xi_{p}(3+\xi_{p}^{2}+3|\xi_{p}|)\exp(-|\xi_{p}|)-\xi_{q}(3+\xi_{q}^{2}+3|\xi_{q}|)\exp(-|\xi_{q}|)}{2F_{Y}(\xi_{p},\xi_{q})}+\frac{8F_{Y_{(2)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
$\xi_{k}=\frac{x_{k}-\mu}{\sigma}$, $k=p,q$, $Y\sim La_{1}(0,~1)$, $Y_{(1)}\sim E_{1}(0,~1,~\overline{G}_{(1)})$ and $Y_{(2)}\sim E_{1}(0,~1,~\overline{G}_{(2)})$. In addition, $|\cdot|$ is the absolute value function.\\
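Here $Y\sim La_{1}(0,~1)$ is the standard Laplace distribution with density $\frac{1}{2}e^{-|y|}$, so the $\mathrm{DTE}$ expression can again be checked by quadrature. A sketch with arbitrary $p$, $q$ (and $\mu=0$, $\sigma=1$):
\begin{verbatim}
import math
from scipy import integrate, stats

p, q = 0.2, 0.9                              # arbitrary values
xi_p, xi_q = stats.laplace.ppf([p, q])
F_Y = q - p
lhs = integrate.quad(lambda y: y*stats.laplace.pdf(y),
                     xi_p, xi_q)[0] / F_Y
rhs = ((1 + abs(xi_p))*math.exp(-abs(xi_p))
       - (1 + abs(xi_q))*math.exp(-abs(xi_q))) / (2*F_Y)
assert abs(lhs - rhs) < 1e-10
\end{verbatim}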
$\mathbf{Example~5}$ (Pearson type VII distribution). Let
$X\sim PVII_{1}\left(\mu,~\sigma^{2},~t\right).$
In this case, the density generators are given by:
\begin{align*}
g_{1}(u)=(1+2u)^{-t},
\end{align*}
\begin{align*}
\overline{G}_{(1)}(u)=\frac{1}{2(t-1)}(1+2u)^{-(t-1)}
\end{align*}
and
\begin{align*}
\overline{G}_{(2)}(u)=\frac{1}{4(t-1)(t-2)}(1+2u)^{-(t-2)}.
\end{align*}
The normalizing constants are written as:
\begin{align*}
c_{1}=\frac{\Gamma\left(t\right)}{\Gamma(t-1/2)\pi^{\frac{1}{2}}},~t>\frac{1}{2},
\end{align*}
\begin{align*}
c_{(1)}^{\ast}=\frac{2(t-1)}{B(\frac{1}{2},t-\frac{3}{2})},~t>\frac{3}{2}
\end{align*}
and
\begin{align*}
c_{(2)}^{\ast}=\frac{4(t-1)(t-2)}{B(\frac{1}{2},t-\frac{5}{2})},~t>\frac{5}{2}.
\end{align*}
Then
\begin{align*}
\nonumber&\mathrm{DTV}_{(p,q)}(X)\\
&=-\mathrm{DTE}_{(p,q)}^{2}(X)+\mu^{2}+2\mu\sigma\mathrm{DTE}_{(p,q)}(Y)+\sigma^{2}\left(L_{1}+\frac{1}{2t-3}L_{2}\right),~t>\frac{3}{2},
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTS}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-3/2}(X)\bigg\{\sum_{k=2}^{3}\binom{3}{k}(-\mathrm{DTE}_{(p,q)}(X))^{3-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
\nonumber&+2\mathrm{DTE}_{(p,q)}^{3}(X)+3[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{2}\left(L_{1}+\frac{1}{2t-3}L_{2}\right)+\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)\bigg\},\\
&~t>\frac{3}{2},
\end{align*}
\begin{align*}
\nonumber&\mathrm{DTK}_{(p,q)}(X)\\
\nonumber&=\mathrm{DTV}_{(p,q)}^{-2}(X)\bigg\{-3\mathrm{DTE}_{(p,q)}^{4}(X)
+6[\mu-\mathrm{DTE}_{(p,q)}(X)]^{2}\sigma^{2}\left(L_{1}+\frac{1}{2t-3}L_{2}\right)\\
&\nonumber~~+\sum_{k=2}^{4}\binom{4}{k}(-\mathrm{DTE}_{(p,q)}(X))^{4-k}[\mu^{k}+k\mu^{k-1}\sigma\mathrm{DTE}_{(p,q)}(Y)]\\
&~~+4[\mu-\mathrm{DTE}_{(p,q)}(X)]\sigma^{3}\left(L_{1}^{\ast}+2L_{2}^{\ast}\right)+\sigma^{4}\left(L_{1}^{\ast\ast}+3L_{2}^{\ast\ast}\right)\bigg\}-3,~t>\frac{3}{2},
\end{align*}
where
\begin{align*} \mathrm{DTE}_{(p,q)}(X)=\mu+\sigma\frac{\Gamma(t-1)\left[\left(1+\xi_{p}^{2}\right)^{-(t-1)}-\left(1+\xi_{q}^{2}\right)^{-(t-1)}\right]}{2\Gamma(t-\frac{1}{2})\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}=\frac{\Gamma(t-1)\left[\xi_{p}\left(1+\xi_{p}^{2}\right)^{-(t-1)}-\xi_{q}\left(1+\xi_{q}^{2}\right)^{-(t-1)}\right]}{2\Gamma(t-\frac{1}{2})\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},~
L_{2}&=\frac{F_{Y_{(1)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast}=\frac{\Gamma(t-1)\left[\xi_{p}^{2}\left(1+\xi_{p}^{2}\right)^{-(t-1)}-\xi_{q}^{2}\left(1+\xi_{q}^{2}\right)^{-(t-1)}\right]}{2\Gamma(t-\frac{1}{2})\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast}&=\frac{\Gamma(t-2)\left[\left(1+\xi_{p}^{2}\right)^{-(t-2)}-\left(1+\xi_{q}^{2}\right)^{-(t-2)}\right]}{4\Gamma(t-\frac{1}{2})\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{1}^{\ast\ast}=\frac{\Gamma(t-1)\left[\xi_{p}^{3}\left(1+\xi_{p}^{2}\right)^{-(t-1)}-\xi_{q}^{3}\left(1+\xi_{q}^{2}\right)^{-(t-1)}\right]}{2\Gamma(t-\frac{1}{2})\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})},
\end{align*}
\begin{align*}
L_{2}^{\ast\ast}=&\frac{\Gamma(t-2)\left[\xi_{p}\left(1+\xi_{p}^{2}\right)^{-(t-2)}-\xi_{q}\left(1+\xi_{q}^{2}\right)^{-(t-2)}\right]}{4\Gamma(t-\frac{1}{2})\sqrt{\pi}F_{Y}(\xi_{p},\xi_{q})}\\
&+\frac{1}{(2t-5)(2t-3)}\frac{F_{Y_{(2)}}(\xi_{p},\xi_{q})}{F_{Y}(\xi_{p},\xi_{q})},~t>\frac{5}{2},
\end{align*}
$\xi_{k}=\frac{x_{k}-\mu}{\sigma}$, $k=p,q$, $Y\sim PVII_{1}(0,~1,~t)$, $Y_{(1)}\sim E_{1}(0,~1,~\overline{G}_{(1)})$ and $Y_{(2)}\sim E_{1}(0,~1,~\overline{G}_{(2)})$.
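A Pearson type VII variable $Y\sim PVII_{1}(0,~1,~t)$ can be realized as $T/\sqrt{2t-1}$ with $T$ a Student-$t$ variable with $2t-1$ degrees of freedom, which gives a quick numerical check of the $\mathrm{DTE}$ expression. A sketch with arbitrary $t$, $p$, $q$:
\begin{verbatim}
import math
from scipy import integrate, stats

t_par, p, q = 3.0, 0.1, 0.8                  # arbitrary values
m = 2*t_par - 1
pdf = lambda y: math.sqrt(m)*stats.t.pdf(y*math.sqrt(m), df=m)
xi_p, xi_q = stats.t.ppf([p, q], df=m) / math.sqrt(m)
F_Y = q - p
lhs = integrate.quad(lambda y: y*pdf(y), xi_p, xi_q)[0] / F_Y
rhs = (math.gamma(t_par - 1)
       * ((1 + xi_p**2)**(-(t_par-1)) - (1 + xi_q**2)**(-(t_par-1)))
       / (2*math.gamma(t_par - 0.5)*math.sqrt(math.pi)*F_Y))
assert abs(lhs - rhs) < 1e-10
\end{verbatim}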
\section{Illustrative example}
We discuss the DTE, DTV, DTS and DTK of three industry segments in finance
(Banks $X_{1}$, Insurance $X_{2}$, Financial and Credit Service $X_{3}$), collecting stock return data from the London stock exchange from April 2013 to November 2019 (for the data, see
https://finance.yahoo.com/ and https://www.londonstockexchange.com/),
and using the parameter estimates of Shushi and Yao (2020). We fit the data with a multivariate normal distribution, denoted by
$$\mathbf{X}=(X_{1},X_{2},X_{3})^{T}\sim N_{3}(\boldsymbol{\mu},\mathbf{\Sigma}).$$
Parameters are computed using maximum likelihood estimation:
\begin{align*}
\boldsymbol{\mu}=10^{-3}\left(\begin{array}{c}
-1.140677\\
5.896240\\
2.107343
\end{array}
\right),
\mathbf{\Sigma}=10^{-4}\left(\begin{array}{ccc}
19.088935&12.503116&-3.720492\\
12.503116&20.268816&-3.162601\\
-3.720492&-3.162601&8.851913
\end{array}
\right).
\end{align*}
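The univariate computations reported below can be reproduced directly: each marginal $X_{k}$ is normal with mean $\mu_{k}$ and variance given by the $k$-th diagonal entry of $\mathbf{\Sigma}$, so \texttt{truncnorm} yields its DTE, DTV, DTS and DTK. A sketch:
\begin{verbatim}
import numpy as np
from scipy import stats

mu  = 1e-3*np.array([-1.140677, 5.896240, 2.107343])
sig = np.sqrt(1e-4*np.array([19.088935, 20.268816, 8.851913]))
for p, q in [(0.05, 0.95), (0.10, 0.90), (0.15, 0.85),
             (0.20, 0.80), (0.25, 0.75), (0.30, 0.70)]:
    a, b = stats.norm.ppf([p, q])
    for k in range(3):
        tn = stats.truncnorm(a, b, loc=mu[k], scale=sig[k])
        print((p, q), k + 1, tn.stats(moments='mvsk'))
\end{verbatim}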
(i) Considering $p+q=1$, let $(p,q)=(0.05,0.95),~(0.10,0.90),~(0.15,0.85),$ $(0.20,0.80),~(0.25,0.75),~(0.30,0.70)$; the results are presented in Table 1 and Figures 1, 2 and 3.
Table 1 and Figures 1-3 show the DTEs, DTVs, DTSs and DTKs of $X_{1}$ (Banks), $X_{2}$ (Insurance) and $X_{3}$ (Financial and Credit Service) for $(p,q)=(0.05,0.95),~(0.10,0.90),~(0.15,0.85),$ $(0.20,0.80),~(0.25,0.75),~(0.30,0.70)$, respectively. In Table 1 we see that the DTEs of Banks $X_{1}$ are the same for the different $(p,q)$ and equal to the mean $\mu_{1}=-1.140677\times10^{-3}$; the DTEs of Insurance $X_{2}$ are the same for the different $(p,q)$ and equal to the mean $\mu_{2}=5.896240\times10^{-3}$; and the DTEs of Financial and Credit Service $X_{3}$ are the same for the different $(p,q)$ and equal to the mean $\mu_{3}=2.107343\times10^{-3}$.
The DTEs of Insurance $X_{2}$ are the greatest, and the DTEs of Banks $X_{1}$ are the least.
As we see in Figure 1, there is a clear difference among the DTVs of the three industry segments in finance. No matter how $p$ changes, the DTVs of Financial and Credit Service $X_{3}$ are the least, and the DTVs of Insurance $X_{2}$ are
the greatest. The
dispersion between the DTVs of Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ is largest at $(p,q)=(0.05,0.95)$. Moreover, the DTVs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ decrease as $p(<0.5)$ increases, which means that the volatility of the data becomes smaller as $p(<0.5)$ increases.
From Figure 2, we see that no matter how $p$ changes, the DTSs of Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ are $0$. This implies that for the normal distribution the value of DTS on a symmetric interval is $0$; that is, the distribution has no skewness on such an interval. This also explains why the skewness of the normal distribution is $0$.
We observe from Figure 3 that the DTKs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ decrease as $p$ increases. Furthermore, no matter how $p$ changes, the DTKs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ are the same, which means that the value of DTK is not affected by the expectation $\mu_{k}$ and variance $\sigma_{k}$ $(k=1,~2,~3)$.
(ii) Considering $q-p=0.65$, let $(p,q)=(0.05,0.70),~(0.10,0.75),~(0.15,0.80),$ $(0.20,0.85),~(0.25,0.90),~(0.30,0.95)$; the results are shown in Figures 4, 5, 6 and 7.
Figures 4-7 show DTEs, DTVs, DTSs and DTKs of $X_{1}$ (Banks), $X_{2}$ (Insurance) and $X_{3}$ (Financial and Credit Service) for $(p,q)=(0.05,0.70),$~\\$(0.10,0.75),~(0.15,0.80),$ $(0.20,0.85),~(0.25,0.90),~(0.30,0.95),$ respectively.
From Figure 4, we observe that the DTEs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ increase as $p$ increases. Among the three industry segments, the rate of increase of the DTEs of Financial and Credit Service $X_{3}$ is the smallest, while the rates of increase of the DTEs of Banks $X_{1}$ and Insurance $X_{2}$ are almost equal. The DTE of Financial and Credit Service $X_{3}$ is
the greatest for $p=0.05$, and the DTEs of Insurance $X_{2}$ are the greatest for $p>0.05$. However, the DTEs of Banks $X_{1}$ are
the least for $p\leq 0.25$, and the DTE of Financial and Credit Service $X_{3}$ is
the least for $p=0.3$.
As we see in Figure 5, there is a clear difference among the DTVs of the three industry segments in finance. No matter how $p$ changes, the DTVs of Financial and Credit Service $X_{3}$ are the least, and the DTVs of Insurance $X_{2}$ are
the greatest. Moreover, the DTVs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ first decrease and then increase as $p$ increases, and they are smallest at the middle point. This can be explained by the fact that the closer the truncation interval is to the mean (center), the smaller the deviation from the center.
We observe from Figure 6 that the DTSs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ increase as $p$ increases. No matter how $p$ changes, the DTSs of the three segments are the same, which indicates that the value of DTS is not affected by the expectation $\mu_{k}$ and variance $\sigma_{k}$ ($k=1,~2,~3$). For $p\leq 0.15$, the value of DTS is negative, indicating that the distribution is left-skewed on the interval;
for $p\geq 0.20$, the value of DTS is positive, indicating that the distribution is right-skewed on the interval.
From Figure 7, we notice that the DTKs of Banks $X_{1}$, Insurance $X_{2}$ and Financial and Credit Service $X_{3}$ first decrease and then increase as $p$ increases, and they are smallest at the middle point. No matter how $p$ changes, the DTKs of the three segments are the same, which again indicates that the value of DTK is not affected by the expectation $\mu_{k}$ and variance $\sigma_{k}$ ($k=1,~2,~3$).
From the above table and figures, it can be concluded that the values of DTS and DTK are not affected by the expectation and variance for the normal distribution. In addition, choosing different $(p,q)$ may lead to different policy decisions.
\section{Concluding remarks}
In this paper, we give the DTM risk measure for the elliptical family, which is a generalization of the TCM in Landsman et al. (2016b). The TCE, TV, TCS and TCK risk measures are extended to the DTE, DTV, DTS and DTK risk measures, respectively. There are many special cases of this elliptical family; we treat several of them, including the normal, student-$t$, logistic, Laplace and Pearson type VII distributions. Finally, we discuss the DTEs, DTVs, DTSs and DTKs of three industry segments' (Banks, Insurance, Financial and Credit Service) stock return in the London stock exchange, and conclude that choosing different $(p,q)$ may lead to different policy decisions. Furthermore, Eini and Khaloozadeh (2021) derived a formula for the TCM of generalized skew-elliptical distributions. It would therefore be of interest to extend the results established here to
generalized skew-elliptical distributions.
\section*{Acknowledgments}
\noindent The research was supported by the National Natural Science Foundation of China (No. 12071251).
\section*{Conflicts of Interest}
\noindent The authors declare that they have no conflicts of interest.
\section*{References}
\bibliographystyle{model1-num-names}
| {
"timestamp": "2022-03-03T02:28:07",
"yymm": "2203",
"arxiv_id": "2203.01091",
"language": "en",
"url": "https://arxiv.org/abs/2203.01091",
"abstract": "In this paper, we define doubly truncated moment (DTM), doubly truncated skewness (DTS) and kurtosis (DTK). We derive DTM formulae for elliptical family, with emphasis on normal, student-$t$, logistic, Laplace and Pearson type VII distributions. We also present explicit formulas of the DTE (doubly truncated expectation), DTV (doubly truncated variance), DTS and DTK for those distributions. As illustrative example, DTEs, DTVs, DTSs and DTKs of three industry segments' (Banks, Insurance, Financial and Credit Service) stock return in London stock exchange are discussed.",
"subjects": "Statistics Theory (math.ST); Risk Management (q-fin.RM)",
"title": "Doubly truncated moment risk measures for elliptical distributions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717448632123,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.708944929673145
} |
https://arxiv.org/abs/2206.04339 | Conjugacy classes of maximal cyclic subgroups of metacyclic $p$-groups | In this paper, we set $\eta (G)$ to be the number of conjugacy classes of maximal cyclic subgroups of a finite group $G$. We compute $\eta (G)$ for all metacyclic $p$-groups. We show that if $G$ is a metacyclic $p$-group of order $p^n$ that is not dihedral, generalized quaternion, or semi-dihedral, then $\eta (G) \ge n-2$, and we determine when equality holds. | \section{Introduction}
Unless otherwise stated, all groups in this paper are finite, and we will follow standard notation from \cite{isstext}. As in \cite{pre1} and \cite{pre2}, we set $\eta (G)$ to be the number of conjugacy classes of maximal cyclic subgroups of a group $G$. For $p = 2$, we have that $\eta (G) = 3$ when $G$ is a dihedral $2$-group, a generalized quaternion $2$-group, or a semi-dihedral group. In \cite{pre}, the second and third authors along with Yiftach Barnea and Mikhail Ershov have shown that for every prime $p \ge 5$ there are infinitely many $p$-groups with $\eta = p + 2$ and for $p =3$ there are infinitely many $3$-groups with $\eta = 9$. This answers negatively Question 5.0.9 from \cite{von} which asked whether $\eta(G)$ grows with the order of $G$ when $G$ is a $p$-group and $p$ is odd.
On the other hand, it is rare for this to occur. Indeed, the only $2$-groups (in fact the only $p$-groups) that have $\eta = 3$ are the Klein $4$-group, the dihedral groups, the generalized quaternion groups, and the semi-dihedral groups. To see this, we know that $\eta (G) \ge \eta (G/G')$ (see \cite{pre1}), and for $p$-groups $\eta (G/G') \ge p+1$ when $G/G'$ is not cyclic (see \cite{pre2}). Thus, $\eta = 3$ can only occur when $p = 2$. Also, in \cite{pre2}, we show that $\eta (G/G') = 3$ if and only if $G/G' \cong C_2 \times C_2$. It is well known that if $G$ is a $2$-group of order at least $8$ and $|G:G'| = 4$, then $G$ is either dihedral, generalized quaternion, or semi-dihedral. (See Problem 6B.8 of \cite{isstext}.)
Now, dihedral groups, generalized quaternion groups, and semi-dihedral groups are examples of metacyclic groups, i.e., groups $G$ with a normal subgroup $N$ such that both $N$ and $G/N$ are cyclic.
This motivated us to investigate the invariant $\eta$ for all metacyclic $p$-groups. Indeed this project began before the results of \cite{pre} were known and
we were originally curious as to whether we would find another family of metacyclic $p$-groups with fixed $\eta$. However, we prove the following:
\begin{theorem} \label{main1}
Let $G$ be a metacyclic $p$-group of order $p^n$ that is not a dihedral group, generalized quaternion group, or semi-dihedral group. Then $\eta (G) \ge n - 2$.
\end{theorem}
In fact, we compute $\eta (G)$ for every metacyclic $p$-group $G$. Thus, we list the metacyclic $p$-groups where equality occurs in Theorem \ref{main1}. King in \cite{king} gave a description of all metacyclic $p$-groups. We will give this description of these groups in Section \ref{secn:meta}. In particular, King divided the metacyclic $p$-groups into two families of groups which he called {\it positive type} and {\it negative type}. The negative type groups only occur when $p = 2$, so if $p$ is an odd prime, then all of the metacyclic $p$-groups are of positive type. We have the following result for the metacyclic groups of positive type.
\begin{theorem} \label{main2}
Let $G$ be a metacyclic group of positive type. Then $\eta (G) = \eta (G/G')$.
\end{theorem}
We note that Rog\'erio in \cite{rogerio} has a formula to compute $\eta (A)$ for an abelian group $A$. His formula involves the Euler $\phi$-function and a second number theoretic function. When $G$ is a metacyclic abelian $p$-group, we prove in \cite{pre2} a formula for $\eta (G)$ that is only in terms of the sizes of the direct factors of $G$. Notice in Theorem \ref{main2} that $G/G'$ will be a metacyclic abelian $p$-group, and so, our formula will compute $\eta (G/G')$ and hence, $\eta (G)$.
When $G$ is a metacyclic $p$-group of negative type, it is not usually the case that $\eta (G)$ and $\eta (G/G')$ are equal. However, we will find that there usually is a proper quotient whose value of $\eta$ equals $\eta (G)$. We will also see for most metacyclic groups of negative type that the formula for $\eta$ is dependent on the formula for $\eta$ that we found for the metacyclic abelian $p$-groups.
The authors would like to thank Emanuele Pacifici for a number of helpful conversations while working on this paper.
\section{Preliminaries}
In our preprint \cite{pre1}, we prove two results that we need in this paper. The first is a criterion for determining when a quotient of a $p$-group $G$ has the same value of $\eta$ as $G$ itself. Given a prime $p$, we set $G^{\{p\}} = \{ g^p \mid g \in G \}$. I.e., $G^{\{p\}}$ is the set of $p$-th powers in $G$.
\begin{theorem}\label{quot}
Let $N$ be a normal subgroup of the $p$-group $G$. Then $\eta(G/N) \leq \eta(G).$ Furthermore, $\eta(G/N) = \eta(G)$ if and only if $N\subseteq G^{\{p\}}$ and for all $x \in G \setminus G^{\{p\}}$ every element of $xN$ is conjugate to a generator of $\langle x \rangle$. In particular, if $\eta(G/N) = \eta(G)$, then $G^{\{p\}}$ is a union of $N$-cosets and $G^{\{p\}}N = G^{\{p\}}$.
\end{theorem}
The second proposition relates $\eta (G)$ to the number of $G$-orbits of maximal cyclic subgroups of a normal subgroup.
\begin{proposition}\label{centre}
Let $N$ be a normal subgroup of a group $G$ and let $\eta^*(N)$ be the number of $G$-orbits on the $N$-conjugacy classes of maximal cyclic subgroups of $N$. Then $\eta (G) \geq \eta^*(N)$. In particular,
\begin{enumerate}
\item[(i)] if $N$ is central in $G$, then $\eta(G) \geq \eta(N)$.
\item[(ii)] if $|G:N| = k$, then $\eta (G) \geq \eta(N)/k$.
\end{enumerate}
\end{proposition}
Let $p$ be a prime, and let $a$ and $b$ be positive integers. We take $k = {\rm max} (a,b)$ and $l = {\rm min} (a,b)$. We set $g_p (a,b) = p^{(l-1)}((k-l) (p-1) + p + 1)$. In \cite{pre2}, we prove the following lemma.
\begin{lemma}\label{two}
If $p$ is a prime and $a$ and $b$ are positive integers so that $G = C_{p^a} \times C_{p^b}$, then $g_p (a,b) = \eta (G)$.
\end{lemma}
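For small parameters, Lemma \ref{two} can be verified by brute force: in an abelian group conjugation is trivial, so $\eta$ is simply the number of maximal cyclic subgroups. A minimal Python sketch:
\begin{verbatim}
from itertools import product

def g_p(p, a, b):
    k, l = max(a, b), min(a, b)
    return p**(l-1) * ((k-l)*(p-1) + p + 1)

def eta_abelian(p, a, b):
    # count the maximal cyclic subgroups of C_{p^a} x C_{p^b}
    m, n = p**a, p**b
    def cyc(g):
        H, k = set(), (0, 0)
        while True:
            H.add(k)
            k = ((k[0] + g[0]) % m, (k[1] + g[1]) % n)
            if k == (0, 0):
                return frozenset(H)
    subs = {cyc(g) for g in product(range(m), range(n))}
    return sum(1 for H in subs if not any(H < K for K in subs))

for p, a, b in [(2, 1, 1), (2, 3, 2), (3, 2, 2), (5, 2, 1)]:
    assert eta_abelian(p, a, b) == g_p(p, a, b)
\end{verbatim}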
We close this section with an easy lemma that computes $g_2$ for small values and gives a lower bound for larger values. We remark that when $p = 2$, this function is much easier to work with.
\begin{lemma}\label{g2 comp}
Suppose $k \ge l$. Then the following hold:
\begin{enumerate}
\item If $l = 1$, then $g_2 (k,1) = k+2$.
\item If $l = 2$, then $g_2 (k,2) = 2(k+1)$.
\item If $l = 3$, then $g_2 (k,3) = 4k$.
\item If $l \ge 4$, then $g_2 (k,l) \ge 4k + 2l$.
\end{enumerate}
\end{lemma}
\noindent
{\bf Proof.}
We have $g_2 (a,b) = g_2 (k,l) = 2^{l-1}(k-l+3)$. Conclusions (1), (2), and (3) are immediate. We focus on (4). Begin with $g_2 (4,4) =24$; so the result holds for $g_2 (4,4)$. Next, $g_2 (l,l) - 6l = 3 \cdot 2^{l-1} - 6l$ is clearly increasing when $l \ge 3$. Thus, we have $g_2 (l,l) \ge 4l + 2l$ when $l \ge 4$. Let $k = l + m$ for $m \ge 0$. Then $g_2 (k,l) = g_2 (l+m,l) = 2^{l-1}(m+3)$ and $4k + 2l = 4(l+m) + 2l = 6l + 4m$. Fixing $l \ge 4$, we note that $2^{l-1}(m+3) - 6l - 4m$ is an increasing function of $m$. We conclude that $g_2 (k,l) \ge 4k + 2l$ for $l \ge 4$.
$\Box$\\
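The cases of the lemma are easy to spot-check numerically against the function \texttt{g\_p} from the sketch above:
\begin{verbatim}
for k in range(4, 10):
    assert g_p(2, k, 1) == k + 2                 # case (1)
    assert g_p(2, k, 2) == 2*(k + 1)             # case (2)
    assert g_p(2, k, 3) == 4*k                   # case (3)
    for l in range(4, k + 1):
        assert g_p(2, k, l) >= 4*k + 2*l         # case (4)
\end{verbatim}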
\section{Metacyclic $p$-Groups} \label{secn:meta}
For the rest of the paper, we will focus on metacyclic $p$-groups. A finite metacyclic $p$-group can be described as follows. This description is taken from \cite{king},
$$G_p(\alpha, \beta, \epsilon, \delta, \pm ) = \langle x,y \mid x^{p^{\alpha}}=1, y^{p^{\beta}}= x^{p^{\alpha - \epsilon}}, x^y = x^r \rangle$$
where $r = p^{\alpha - \delta} +1$ (positive type) or $r = p^{\alpha - \delta} -1$ (negative type).
The integers $\alpha$, $\beta$, $\delta$, $\epsilon$ satisfy $\alpha, \beta >0$ and $\delta, \epsilon$
nonnegative; furthermore, $\delta \leq {\rm min} \{\alpha -1, \beta \}$ and $\delta + \epsilon \leq \alpha$. When $G$ has negative type, only $\epsilon = 0$ or $1$ occurs. For $p$ odd,
$$G \cong G_p(\alpha, \beta, \epsilon, \delta, +).$$
In other words, the negative type only occurs when $p = 2$; when $p$ is odd, only the positive type occurs. Metacyclic $2$-groups can be of either positive type or negative type. We note that dihedral, semi-dihedral and generalized quaternion groups are all of negative type.
If $p=2$, then in addition $\alpha - \delta >1$ and
$$G \cong G_2(\alpha, \beta, \epsilon, \delta, +)\;\;{\rm or}\;\;G \cong G_2(\alpha, \beta, \epsilon, \delta, -).$$
Note that the above presentation does not guarantee nonisomorphic groups for different parameters (see \cite{beuerle}). However, the parameters do determine some structural information about $G$. For example, $|G| = p^{\alpha + \beta}$ and $G' = \langle x^{p^{\alpha - \delta}} \rangle$ if $G$ is of positive type and $G' = \langle x^2 \rangle$ if $G$ is of negative type. All elements of $G$ can be written as $y^bx^a$ for some integers $a$ and $b$. Also, if $G$ is of positive type, then $Z(G) = \langle x^{p^{\delta}}, y^{p^{\delta}} \rangle$ and $|Z(G)| = p^{\alpha + \beta - 2 \delta}$, while if $G$ is of negative type, then $Z(G) = \langle x^{2^{\alpha -1}}, y^{2^{\max \{1, \delta\}}} \rangle$ (see \cite[Prop. 2.5]{beuerle}). Note that if $G$ is of positive type and $\delta = 0$, then $G$ will be abelian.
As we mentioned above, the dihedral groups, the generalized quaternion groups, and the semi-dihedral groups are the only $p$-groups $G$ that satisfy $\eta(G) = 3$. These are also precisely the $2$-groups of maximal class. We have also mentioned that they are metacyclic. In terms of our notation, the dihedral groups are $G_2 (\alpha,1,0,0,-)$, the generalized quaternion groups are $G_2 (\alpha,1,1,0,-)$, and the semi-dihedral groups are $G_2 (\alpha,1,0,1,-)$.
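For small parameter values, this presentation can be realized directly on normal forms $y^{b}x^{a}$ with $0\le b<p^{\beta}$ and $0\le a<p^{\alpha}$, using $x^{a}y^{b'} = y^{b'}x^{ar^{b'}}$ and the central relation $y^{p^{\beta}}=x^{p^{\alpha-\epsilon}}$, which allows $\eta(G)$ to be computed by brute force. A Python sketch (here \texttt{sign} encodes the type, $+1$ for positive and $-1$ for negative); as a sanity check it reproduces $\eta = 3$ for a dihedral group:
\begin{verbatim}
def eta_metacyclic(p, alpha, beta, eps, delta, sign):
    pa, pb = p**alpha, p**beta
    r = p**(alpha - delta) + sign
    def mul(g, h):                 # product in normal form y^b x^a
        (b1, a1), (b2, a2) = g, h
        b, a = b1 + b2, (a1*pow(r, b2, pa) + a2) % pa
        if b >= pb:                # y^{p^beta} = x^{p^{alpha-eps}}
            b, a = b - pb, (a + p**(alpha - eps)) % pa
        return (b, a)
    G, e = [(b, a) for b in range(pb) for a in range(pa)], (0, 0)
    inv = {g: next(t for t in G if mul(g, t) == e) for g in G}
    def cyc(g):                    # cyclic subgroup generated by g
        H, k = set(), e
        while True:
            H.add(k)
            k = mul(k, g)
            if k == e:
                return frozenset(H)
    subs = {cyc(g) for g in G}
    maximal = [H for H in subs if not any(H < K for K in subs)]
    classes = 0                    # conjugacy classes of maximal ones
    while maximal:
        H = maximal.pop()
        orbit = {frozenset(mul(mul(inv[g], h), g) for h in H)
                 for g in G}
        maximal = [K for K in maximal if K not in orbit]
        classes += 1
    return classes

assert eta_metacyclic(2, 4, 1, 0, 0, -1) == 3   # dihedral, order 32
\end{verbatim}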
For Lemmas \ref{quo1} and \ref{conjugacy}, we are writing $G_p (\alpha,\beta,\epsilon,\delta, \pm )$ as $G_p (\alpha, \beta,\epsilon,\delta, \gamma)$ where we take $\gamma = +$ when $G$ is of positive type and $\gamma = -$ when $G$ is of negative type. We consider quotients of $G$. Note that this lemma would not be well defined if $\delta = 0$ and would not say anything if $\delta = 1$.
\begin{lemma} \label{quo1}
Suppose $G$ is $G_p (\alpha, \beta, \epsilon, \delta, \gamma)$ with $\delta \ge 2$. Then $N = \langle x^{p^{\alpha - \delta + 1}} \rangle$ is a normal subgroup of $G$ and $G/N$ is isomorphic to
$$G_p (\alpha-\delta+1, \beta, (\epsilon - \delta +1)^*, 1, \gamma )$$ where $(\epsilon - \delta +1)^* = \epsilon - \delta +1$ when $\epsilon \ge \delta - 1$ and $(\epsilon - \delta +1)^* = 0$ when $\epsilon < \delta -1$.
\end{lemma}
\noindent
{\bf Proof.}
Set $Z = \langle x^{p^{\alpha -1}} \rangle \le Z (G)$. We first prove that $G/Z$ is isomorphic to $G_p (\alpha -1, \beta, \epsilon - 1, \delta - 1, \gamma)$ when $\epsilon \ge 1$ and $G_p (\alpha -1, \beta, 0, \delta - 1, \gamma)$ when $\epsilon = 0$. We know that $G/Z = \langle xZ, yZ \rangle$ where $xZ$ has order $p^{\alpha - 1}$. Observe that $(yZ)^{p^\beta} = y^{p^\beta} Z = x^{p^{\alpha -\epsilon}}Z$. When $\epsilon \ge 1$, we have $$x^{p^{\alpha -\epsilon}}Z = x^{p^{(\alpha - 1) - (\epsilon -1)}}Z$$ and when $\epsilon = 0$, we have $$x^{p^{\alpha -\epsilon}}Z = x^{p^\alpha}Z = Z.$$ Also, $$(xZ)^{yZ} = x^y Z = x^{p^{\alpha - \delta} + \gamma} Z = x^{p^{(\alpha - 1) - (\delta - 1)}+ \gamma}Z.$$ Hence, $G/Z$ satisfies the hypotheses for $G_p (\alpha -1, \beta, \epsilon - 1, \delta - 1, \gamma)$ when $\epsilon \ge 1$ and $G_p (\alpha -1, \beta, 0, \delta - 1, \gamma)$ when $\epsilon = 0$.
We know that $X = \langle x \rangle$ is a cyclic, normal subgroup of $G$. Observe that $N$ is contained in $X$ and so is characteristic. This implies that $N$ is normal in $G$. Observe that $Z \le N$ and we have shown that
$G/Z \cong G_p (\alpha -1, \beta, \epsilon - 1, \delta - 1, \gamma)$ or $G_p (\alpha -1, \beta, 0, \delta - 1, \gamma)$. If $\delta = 2$, then $N = Z$, and we have the desired result. Otherwise, we have $\delta \ge 3$. Using induction, we see that $G/N \cong (G/Z)/(N/Z)$ is isomorphic to either
$$
G_p((\alpha-1)-(\delta - 1)+1, \beta, (\epsilon - 1) - (\delta - 1) +1, 1, \gamma )
\cong G_p (\ \alpha-\delta+1, \beta, \epsilon - \delta + 1, 1, \gamma )
$$
or
$$ G_p((\alpha-1)-(\delta - 1)+1, \beta, 0, 1, \gamma ) \cong G_p ( \alpha-\delta+1, \beta, 0, 1, \gamma ). ~~\Box
$$
\\
We consider the metacyclic groups of positive type and use Theorem \ref{quot} and Lemma \ref{two}. Thus, we first analyze $G/G'$.
\begin{lemma}\label{ten}
Suppose $G = G_p(\alpha, \beta, \epsilon, \delta, +)$.
\begin{enumerate}
\item [(i)] If $\delta \geq \epsilon$ or $\delta < \epsilon$ and $\alpha \geq \beta + \epsilon$, then $G/G' = C_{p^{\alpha - \delta}} \times C_{p^{\beta}}$.
\item [(ii)] If $\delta < \epsilon$ and $\alpha < \beta + \epsilon$, then $G/G' = C_{p^{\alpha - \epsilon}} \times C_{p^{\beta + \epsilon -\delta}}$.
\end{enumerate}
\end{lemma}
\noindent
{\bf Proof.}
Now $G' = \langle x^{p^{\alpha - \delta}} \rangle$, so $|G'| = p^{\delta}$. Also $|G| = p^{\alpha + \beta}$,
so $|G:G'| = p^{\alpha +\beta - \delta}.$
If $\delta \geq \epsilon$, then $\langle y \rangle \cap G' = \langle x^{p^{\alpha-\epsilon}} \rangle = \langle x \rangle \cap \langle y \rangle$. We see that $xG'$ has order $p^{\alpha - \delta}$, and $yG'$ has order $p^\beta$
and $G/G' = \langle xG' \rangle \times \langle yG' \rangle$ yielding the desired result.
Now suppose $\delta < \epsilon$. In this case, we see that $G' < \langle x^{p^{\alpha - \epsilon}} \rangle = \langle x \rangle \cap \langle y \rangle$. We see that $xG'$ has order $p^{\alpha - \delta}$ and $yG'$ has order $p^{\beta + \epsilon - \delta}$. Since $G' < \langle x \rangle \cap \langle y \rangle$, we do not have that $G/G'$ is a direct product of $\langle xG' \rangle$ and $ \langle yG' \rangle$. We see that $G/G'$ is abelian and generated by $xG'$ and $yG'$, so every element of $G/G'$ has order $\le {\rm max} \{p^{\alpha - \delta}, p^{\beta+\epsilon-\delta} \}$. If $\alpha \geq \beta +\epsilon$, then $\alpha - \delta \geq \beta + \epsilon - \delta$. In this case, $xG'$ has the largest order of any element in $G/G'$, and so we get $G/G' = C_{p^{\alpha - \delta}} \times C_{p^\beta}$ since $|G/G'| = p^{\alpha + \beta - \delta}$. On the other hand, if $\alpha < \beta + \epsilon$, then $\alpha - \delta < \beta + \epsilon - \delta$. In this case, $yG'$ has the largest order of any element in $G/G'$ and we get $G/G' = C_{p^{\alpha - \epsilon}} \times C_{p^{\beta + \epsilon - \delta}}$.
$\Box$\\
Given an element $g \in G$, we write ${\rm cl} (g)$ to denote the conjugacy class of $g$ in $G$.
\begin{lemma} \label{conjugacy}
Let $G = G_p (\alpha, \beta, \epsilon, \delta, \gamma )$. If $g = y^{pl+a}x^m$ for integers $l$, $m$, and $a$ so that $a \in \{ 1, \dots, p-1\}$, then ${\rm cl} (g) = g G'$.
\end{lemma}
\noindent
{\bf Proof.}
We first claim that $G = \langle x, g \rangle$. We know that $G = \langle x, y \rangle$. Obviously, $\langle x, g \rangle \le G$. Observe that $y^{pl + a} = gx^{-m} \in \langle x, g \rangle$. Since the order of $y$ is a power of $p$, this implies that $y \in \langle x, g \rangle$. We conclude that $G = \langle x, y \rangle \le \langle x, g \rangle \le G$. This proves the claim.
Because $\langle x \rangle$ is normal in $G$, we obtain $G = \langle x \rangle \langle g \rangle$. Observe that $\langle g \rangle \le C_G (g)$. By Dedekind's lemma (see Lemma X.3 on page 328 of \cite{isstext}), it follows that $C_G (g) = (C_G(g) \cap \langle x \rangle ) \langle g \rangle = C_{\langle x \rangle} (g) \langle g \rangle $. Since $x$ centralizes $x^m$, we have
$$C_{\langle x \rangle} (g) = C_{\langle x \rangle} (y^{pl+a}x^m) = C_{\langle x \rangle} (y^{pl+a}) = C_{\langle x \rangle} (y) = \langle x^{p^t} \rangle,$$ where $t = \delta$ if $\gamma = +$ and $t = \alpha - 1$ when $\gamma = -$.
We see that $C_G (g) = \langle g, x^{p^t} \rangle$. We deduce that
$$|G:C_G (g)| = |\langle x \rangle:\langle x^{p^t} \rangle| = p^t = |G'|.$$ Since ${\rm cl} (g) \subseteq gG'$, we conclude that ${\rm cl} (g) = g G'$.
$\Box$ \\
Given a group $G$ and a prime $p$, we define $G^p = \langle G^{\{p\}} \rangle$. I.e., $G^p$ is the subgroup generated by $G^{\{p\}}$. In a similar fashion, we define $G^4 = \langle g^4 \mid g \in G \rangle$. Following the literature, we say that a finite $p$-group $G$ is {\it powerful} if (i) $G' \le G^p$ when $p$ is odd and (ii) $G' \le G^4$ when $p = 2$. If $G$ is a powerful $p$-group, then it is known that $G^p = G^{\{p\}}$, i.e., the set of $p$-powers of elements of $G$ is equal to the subgroup the $p$-powers generate. (See Section 2 of \cite{powerful} and in particular Proposition 2.6 of that citation.)
We claim that metacyclic $p$-groups of positive type are powerful. If $G$ is $G_p (\alpha, \beta, \epsilon, \delta, +)$, then $G' = \langle x^{p^{\alpha - \delta}} \rangle.$ As $\alpha - \delta \ge 1$, it follows immediately that $G$ is powerful when $p$ is odd. For $p = 2$, we note that $\alpha - \delta \ge 2$, so again $G$ is powerful.
When $G$ is of positive type, we extend Lemma \ref{conjugacy}.
\begin{lemma} \label{unnamed}
Let $G = G_p (\alpha, \beta, \epsilon, \delta, +)$ and $g \in G \setminus G^{\{p\}}$. Then ${\rm cl}(g) =
gG'$.
\end{lemma}
\noindent
{\bf Proof.}
Let $g \in G$; then $g = y^nx^m$ for some integers $n$ and $m$. As $G$ is
powerful, it follows that if $g \in G \setminus G^{\{p\}}$, then $g \not\in G^p$, and thus one of $n$ and $m$
is not divisible by $p$. When $n$ is not divisible by $p$, we obtain the conclusion by Lemma \ref{conjugacy}.
We now suppose that $g = y^nx^m$ where $m$ is not divisible by $p$. We want to prove that ${\rm cl} (g) = gG'$. We know that ${\rm cl} (g) \subseteq gG'$. It suffices to prove that $|{\rm cl} (g)| \ge |gG'| = |G'| = p^\delta$. On the other hand, we know that $y$ acts as an automorphism of order $p^\delta$ on $\langle x \rangle$, so $x$ has $p^\delta$ distinct images under powers of $y$. Thus, if $1 \le a,b \le p^\delta$, then $x^{y^a} = x^{y^b}$ if and only if $a = b$. Since $m$ is coprime to $p$, we see that $(x^{y^a})^m = (x^{y^b})^m$ if and only if $a = b$. Hence, we have that $g^{y^a} = g^{y^b}$ if and only if $(y^nx^m)^{y^a} = (y^nx^m)^{y^b}$ and this occurs if and only if $a = b$. We deduce that $g$ has at least $p^\delta$ distinct conjugates under $\langle y \rangle$ and so $|{\rm cl} (g)| \ge p^{\delta}$ as desired. This proves the lemma.
$\Box$\\
We now prove that if $G$ is metacyclic of positive type, then $\eta (G) = \eta (G/G')$. Combining this fact with Lemmas \ref{two} and \ref{ten}, we are able to compute $\eta (G)$ for all primes $p$.
\begin{corollary} \label{positive quo}
Suppose $G$ is $G_p(\alpha, \beta, \epsilon, \delta, +)$. Then $\eta(G) = \eta(G/G')$.
\end{corollary}
\noindent
{\bf Proof.}
As $G$ is powerful, by Theorem \ref{quot}, we need to show that for all
$g \in G \setminus G^{\{p\}}$ every element of $gG'$ is conjugate to a generator of
$\langle g \rangle$; this follows from Lemma \ref{unnamed}.
$\Box$\\
For later reference, we explicitly record the value of $\eta (G)$ when $G$ is a metacyclic group of positive type.
\begin{corollary} \label{explicit}
Suppose $G$ is $G_p(\alpha, \beta, \epsilon, \delta, +)$.
\begin{enumerate}
\item[(i)] If $\delta \ge \epsilon$, or if $\delta < \epsilon$ and $\alpha \ge \beta + \epsilon$, then $\eta (G) = g_p (\alpha -\delta, \beta)$.
\begin{enumerate}
\item If $\beta \le \alpha - \delta$, then $\eta (G) = p^{\beta - 1} ((\alpha - \delta - \beta)(p-1) + p +1)$.
\item If $\beta > \alpha - \delta$, then $\eta (G) = p^{\alpha - \delta - 1} ((\beta - \alpha + \delta)(p-1) + p + 1)$.
\end{enumerate}
\item[(ii)] If $\delta < \epsilon$ and $\alpha < \beta + \epsilon$, then $\eta (G) = g_p (\alpha - \epsilon, \beta+\epsilon - \delta)= p^{\alpha -\epsilon - 1} ((\beta - \alpha + 2\epsilon - \delta)(p-1) + p + 1)$.
\end{enumerate}
\end{corollary}
\noindent
{\bf Proof.}
Using Corollary \ref{positive quo}, we have $\eta (G) = \eta (G/G')$. If $\delta \ge \epsilon$, or if $\delta < \epsilon$ and $\alpha \ge \beta + \epsilon$, then, in view of Lemma \ref{ten}, we see that $G/G' = C_{p^{\alpha - \delta}} \times C_{p^{\beta}}$ and $\eta (G) = g_p (\alpha - \delta, \beta)$. The remainder of (i) follows from the definition of $g_p$. Suppose $\delta < \epsilon$ and $\alpha < \beta + \epsilon$. Applying Lemma \ref{ten}, we see that $G/G' = C_{p^{\alpha - \epsilon}} \times C_{p^{\beta + \epsilon -\delta}}$. Observe that $\alpha < \beta + \epsilon$ yields $\alpha - \epsilon < \beta < \beta + \epsilon - \delta$ as we are assuming $\delta < \epsilon$. In light of the definition of $g_p$, we obtain conclusion (ii).
$\Box$\\
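The closed formulas in Corollary \ref{explicit} can also be compared against a direct enumeration: list all maximal cyclic subgroups and count their conjugacy classes. The sketch below (ours) does this for a few small positive-type parameter tuples; the tuples are our choices and may need adjusting if they are not in the canonical form fixed in Section \ref{secn:meta}.
\begin{verbatim}
# eta(G) by brute force versus the corollary's closed formulas (ours).
def g(p, a, b):
    k, l = max(a, b), min(a, b)
    return p**(l - 1) * ((k - l) * (p - 1) + p + 1)

def eta_bruteforce(p, alpha, beta, eps, delta):
    px, py, c = p**alpha, p**beta, p**(alpha - eps)
    r = p**(alpha - delta) + 1                     # positive type
    def mul(a, b):
        (i, j), (k, l) = a, b
        q, rem = divmod(i + k, py)
        return (rem, (j * pow(r, k, px) + l + c * q) % px)
    elems = [(i, j) for i in range(py) for j in range(px)]
    inv = {x: next(h for h in elems if mul(x, h) == (0, 0)) for x in elems}
    def cyc(x):                                    # the subgroup <x>
        s, h = {(0, 0)}, x
        while h not in s:
            s.add(h)
            h = mul(h, x)
        return frozenset(s)
    cycs = {cyc(x) for x in elems}
    maximal = [C for C in cycs if not any(C < D for D in cycs)]
    orbits = {frozenset(frozenset(mul(mul(inv[h], a), h) for a in C)
                        for h in elems)
              for C in maximal}
    return len(orbits)                             # = eta(G)

for p, alpha, beta, eps, delta in [(3,3,2,1,1), (2,4,2,1,1), (5,2,1,1,1)]:
    if delta >= eps or alpha >= beta + eps:
        predicted = g(p, alpha - delta, beta)
    else:
        predicted = g(p, alpha - eps, beta + eps - delta)
    print((p, alpha, beta, eps, delta),
          eta_bruteforce(p, alpha, beta, eps, delta), predicted)
\end{verbatim}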
When $G$ is metacyclic of positive type, we show that $\eta (G) \ge \alpha + \beta$.
\begin{corollary}
If $G$ is $G_p (\alpha,\beta,\epsilon,\delta,+)$, then $\eta (G) \ge \alpha + \beta$.
\end{corollary}
\noindent
{\bf Proof.}
We consider separately the cases given in Corollary \ref{explicit}. We use the fact that $2^{\beta - 1} \geq \beta$ for every positive integer $\beta$. First, consider case (i)(a), where $\alpha - \delta \geq \beta$:
\begin{eqnarray*}
\eta(G) & = & p^{\beta -1}((\alpha - \delta - \beta)(p-1) + p + 1) \\
& \geq & 2^{\beta -1} (\alpha - \delta - \beta + 3 )\\
& \geq & \alpha - \delta - \beta + 3\beta\\
& \geq & \alpha + \beta
\end{eqnarray*}
since $\beta \geq \delta$.
Now consider case (i)(b), so $\alpha - \delta < \beta$. First assume $\alpha - \delta > 1$; then
\begin{eqnarray*}
\eta(G) & = & p^{\alpha - \delta - 1}((\beta - \alpha + \delta)(p-1) + p+1)\\
& \geq & 2^{\alpha - \delta -1}(\beta - \alpha + \delta + 3)\\
& \geq & 2(\beta - \alpha + \delta) + 3(\alpha - \delta)\\
& = & 2\beta + (\alpha - \delta)\\
& \geq & \beta + \alpha + (\beta - \delta) \\
& \geq & \beta + \alpha
\end{eqnarray*}
since $\beta \geq \delta$. If $\alpha - \delta = 1$, then $p \geq 3$ (recall that $\alpha - \delta \geq 2$ when $p = 2$); also note that $\alpha = 1 + \delta \leq 1 + \beta$. So, we have
$$\eta(G) \geq 2(\beta - \alpha + \delta) + 4 = 2\beta + 2 > \beta + \alpha.$$
Case (ii) follows similarly to (i)(a): we have $\alpha - \epsilon \leq \beta + \epsilon - \delta$, and
\begin{eqnarray*}
\eta(G) & = & p^{\alpha - \epsilon -1}((\beta - \alpha + 2\epsilon - \delta)(p-1) + p+1) \\
& \geq & 2^{\alpha - \epsilon -1}(\beta - \alpha + 2\epsilon - \delta + 3)\\
& \geq & \beta - \alpha + 2 \epsilon - \delta + 3(\alpha - \epsilon)\\
& = & \beta + \alpha + (\alpha - \epsilon - \delta)\\
& \geq & \beta + \alpha
\end{eqnarray*}
since $\alpha \geq \delta + \epsilon$. $\Box$\\
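The inequality can also be sanity-checked arithmetically by sweeping small parameters through the case analysis of Corollary \ref{explicit}. The sketch below (ours) imposes only the constraints visible in this section ($\delta \le \beta$, $\alpha \ge \delta + \epsilon$, $\alpha - \delta \ge 1$, and $\alpha - \delta \ge 2$ when $p = 2$) and guards against degenerate formula arguments.
\begin{verbatim}
# Sweep check (ours) of eta(G) >= alpha + beta for positive type.
def g(p, a, b):
    k, l = max(a, b), min(a, b)
    return p**(l - 1) * ((k - l) * (p - 1) + p + 1)

for p in (2, 3, 5):
    for alpha in range(1, 9):
        for beta in range(1, 9):
            for delta in range(0, beta + 1):
                if alpha - delta < (2 if p == 2 else 1):
                    continue
                for eps in range(0, alpha - delta + 1):
                    if delta >= eps or alpha >= beta + eps:
                        a, b = alpha - delta, beta
                    else:
                        a, b = alpha - eps, beta + eps - delta
                    if min(a, b) < 1:       # outside the formulas
                        continue
                    assert g(p, a, b) >= alpha + beta
print("eta(G) >= alpha + beta across the sweep")
\end{verbatim}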
\section{Metacyclic Groups of Negative Type}
The goal of this section is to compute $\eta$ when $G$ is a metacyclic group of negative type. We begin by looking at quotients of $G$, starting with a preliminary result that is useful in understanding them.
Using the notation of Section \ref{secn:meta} and applying Theorem \ref{quot}, we have that if $G = G_2(\alpha, \beta, \epsilon, \delta, - )$ with $\delta \ge 1$ and $N = \langle x^{2^{\alpha - \delta + 1}} \rangle$, then $\eta (G) \ge \eta (G/N)$. We remind the reader that $\alpha - \delta \ge 2$ when $p = 2$.
We now prove the promised equality between $\eta (G)$ and $\eta (G/N)$.
\begin{theorem}\label{negative quo}
Let $G = G_2 (\alpha, \beta, \epsilon, \delta, - )$ where $\delta \ge 1$. Then $\eta (G) = \eta (G/N)$ where $N = \langle x^{2^{\alpha-\delta +1}} \rangle$.
\end{theorem}
\noindent
{\bf Proof.}
Note that $N$ does not make sense if $\delta = 0$; that is why we assume $\delta \ge 1$. Also, if $\delta = 1$, then $N = 1$, so the conclusion is trivial in this case. Hence, we will assume $\delta \ge 2$.
We first prove that $\eta (G) = \eta (G/Z)$ where $Z = \langle x^{2^{\alpha-1}} \rangle$. Recall from Theorem \ref{quot} that to prove $\eta (G) = \eta (G/Z)$, we need to prove that $Z \subseteq G^{\{2\}}$ and if $g \in G \setminus G^{\{2\}}$, then every element of $gZ$ is conjugate to a generator of $\langle g \rangle$. Observe that $Z \subseteq G^{\{2\}}$. Since $x^{2^{\alpha - 1}}$ is the only nonidentity element of $Z$, it suffices to prove that if $g \not\in G^{\{2\}}$, then $\langle g \rangle$ and $\langle gx^{2^{\alpha - 1}} \rangle$ are conjugate. We know from \cite{beuerle} that $G' = \langle x^2 \rangle$.
We prove the claim by working by induction on $\delta$. We begin with the case that $\delta = 2$. We know that $x^y = x^{2^{\alpha - 2} - 1}$. It follows that
$$(x^2)^y = (x^y)^2 = (x^{2^{\alpha -2} -1})^2 = x^{2^{\alpha - 1} - 2} = (x^{-2})x^{2^{\alpha - 1}}.$$
Observe that this yields that $(x^{-2})^y = x^2 x^{2^{\alpha -1}}$. Using this fact and the observation that $x^{2^{\alpha -1}}$ is central, we then have
$$(x^2)^{y^2} = (x^{-2} x^{2^{\alpha -1}})^y = x^2 x^{2^{\alpha -1}} x^{2^{\alpha -1}} = x^2.$$
It follows that $x^2 $ and $y^2$ commute. Let $A = \langle x^2, y^2 \rangle$, and observe that $G' \le A$, so $A$ is a normal, abelian subgroup of $G$.
We know that every element of $G$ has the form $y^k x^m$ where $0 \le k \le 2^\beta - 1$ and $0 \le m \le 2^\alpha - 1$ are integers. Notice that if $4$ divides both $k$ and $m$, then $g \in A^{\{2\}} \subseteq G^{\{2\}}$. Also, $x^2, y^2 \in G^{\{2\}}$.
If $g = y^{2l+1}x^m$ for integers $l$ and $m$, then we can appeal to Lemma \ref{conjugacy} to see that $g$ is conjugate to $g x^{2^{\alpha - 1}}$ and so, $\langle g \rangle$ and $\langle gx^{2^{\alpha - 1}} \rangle$ are conjugate, as desired.
Since $x^y = x^{2^{\alpha - 2} - 1}$, we have $x^{y^2} = x^{2^{2\alpha -4} - 2^{\alpha - 1} + 1}$. Since $\delta \ge 2$, we know that $\alpha \ge 4$ (this is using the fact that $\alpha - \delta \ge 2$), so $2\alpha - 4 \geq \alpha$. Hence, we have $x^{y^2} = x^{-2^{\alpha - 1} + 1}$. In addition, $x^{2^{\alpha - 1}}$ has order $2$, so $x^{-2^{\alpha - 1}} = x^{2^{\alpha - 1}}$. Thus, we have shown $x^{y^2} = x^{2^{\alpha - 1} + 1}$.
Suppose now that $g = y^{2l} x^{2h +1}$ for integers $l$ and $h$. From above, we have
$$g^{y^2} = (y^{2l}x^{2h+1})^{y^2} = y^{2l} (x^{y^2})^{2h+1} = y^{2l}(x^{2^{\alpha -1} + 1})^{2h+1} = y^{2l} x^{2h+1}x^{2^{\alpha - 1}} = g x^{2^{\alpha - 1}}.$$
We deduce that $\langle g \rangle$ and $\langle gx^{2^{\alpha - 1}} \rangle$ are conjugate, as desired.
We have shown that $x^{y^2} = x x^{2^{\alpha -1}}$. This implies that $x^{-1} y^{-2} x = x^{2^{\alpha - 1}} y^{-2}$. Inverting, we obtain $(y^2)^x = y^2 x^{2^{\alpha -1}}$. Now, suppose that $g = y^{2l}x^{2h}$. We can assume from above that either $l$ is odd or $h$ is odd. Assume first that $l$ is odd. We have
$$g^{x} = (y^{2l}x^{2h})^x = ((y^2)^x)^l x^{2h} = (y^2 x^{2^{\alpha -1}})^l x^{2h} = y^{2l} x^{2h} x^{2^{\alpha -1}} = g x^{2^{\alpha - 1}}.$$
We obtain $\langle g \rangle$ and $\langle gx^{2^{\alpha - 1}} \rangle$ are conjugate, as desired.
We are left with the case that $g = y^{4l} x^{2(2h+1)}$ for integers $h$ and $l$. We claim that $g \in G^{\{ 2 \}}$. Notice that there is an integer $k$ so that $\langle g \rangle = \langle y^{4k} x^2 \rangle$ and that $g \in G^{\{ 2\}}$ if and only if $y^{4k} x^2 \in G^{\{2\}}$. We show that $y^{4k}x^2 \in G^{\{2\}}$. We have $x^{y^2} = x x^{2^{\alpha -1}}$. It follows that $xy^2 = y^2 x x^{2^{\alpha - 1}}$ and
$$(y^{2k}x)^2 = y^{2k}xy^{2k}x = y^{2k} y^{2k}xx^{2^{\alpha - 1}k}x = y^{4k} x^2 x^{2^{\alpha - 1}k}.$$
When $k$ is even, we see that $(y^{2k}x)^2 = y^{4k} x^2$. Now assume that $k$ is odd. We have
\begin{eqnarray*}
(y^{2k}xx^{2^{\alpha - 2}})^2 &=& y^{2k}x x^{2^{\alpha - 2}}y^{2k}xx^{2^{\alpha - 2}} = y^{2k} y^{2k}xx^{2^{\alpha - 1}k}x x^{2\cdot 2^{\alpha - 2}}\\
&=& y^{4k} x^2 x^{2^{\alpha - 1}(k+1)} = y^{4k}x^2.
\end{eqnarray*}
Note that we are using the fact that $x^{2^{\alpha - 2}}$ commutes with both $x$ and $y^2$ here. Thus, this yields $g \in G^{\{2\}}$. We conclude for all elements $g \in G \setminus G^{\{2\}}$ that $g$ and $gx^{2^{\alpha -1}}$ are conjugate and we have proved that $\eta (G) = \eta (G/Z)$ when $\delta = 2$.
We now assume that $\delta > 2$. Let $M = \langle x^2, y \rangle$. Since $x^y = x^{2^{\alpha - \delta} - 1}$, we see that $(x^2)^y = (x^{2^{\alpha - \delta}-1})^2 = (x^2)^{2^{(\alpha -1) - (\delta - 1)}-1}$. Also, $y^{2^\beta} = x^{2^{\alpha - \epsilon}} = (x^2)^{2^{(\alpha -1) - \epsilon}}$. Observe that $(x^2)^{2^{(\alpha - 1) -1}} = x^{2^{\alpha -1}}$. We conclude that $M = G_2( \alpha - 1, \beta, \epsilon, \delta - 1, -)$. Let $g \in G \setminus G^{\{2\}}$. If $g \in M$, then $g \in M \setminus M^{\{2\}}$. By induction, we have that $g$ is conjugate to $g (x^2)^{2^{(\alpha - 1) - 1}}$, and so, $g$ and $gx^{2^{\alpha - 1}}$ are conjugate. Thus, we may assume that $g \not\in M$. This implies that $g = y^l x^{2m + 1}$ for integers $l$ and $m$. We know that $y$ induces an automorphism of $\langle x \rangle$ of order $2^\delta$. It follows that $y^{2^{\delta -1}}$ induces an automorphism of $\langle x \rangle$ of order $2$. Since $\delta \ge 3$, we know that this automorphism is a square. It is not difficult to see that $x \mapsto x x^{2^{\alpha - 1}}$ is the unique automorphism of $\langle x \rangle$ that has order $2$ and is a square. Hence, we see that $x^{y^{2^{\delta -1}}} = x x^{2^{\alpha - 1}}$. We conclude that $g^{y^{2^{\delta -1}}} = (y^l x^{2m+1})^{y^{2^{\delta -1}}} = y^l (x^{y^{2^{\delta -1}}})^{2m+1} = y^l (x x^{2^{\alpha - 1}})^{2m+1} = y^l x^{2m+1} x^{2^{\alpha -1}} = g x^{2^{\alpha - 1}}$. This completes the proof of the claim that $\eta (G) = \eta (G/Z)$.
We now prove $\eta (G) = \eta (G/N)$ by induction on $\delta$. If $\delta = 2$, then $N = Z$, and the above claim yields the result. We assume that $\delta \ge 3$. We have that $\eta (G) = \eta (G/Z)$. By induction, $\eta (G/Z) = \eta ((G/Z)/(N/Z))$, and the Third Isomorphism Theorem implies that $G/N \cong (G/Z)/(N/Z)$, so $\eta (G/N) = \eta ((G/Z)/(N/Z))$, and we have the desired equality.
$\Box$\\
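The two automorphism identities driving the proof, $x^{y^2} = x x^{2^{\alpha-1}}$ in the base case and $x^{y^{2^{\delta-1}}} = x x^{2^{\alpha-1}}$ in the inductive step, are statements about the order of $r = 2^{\alpha-\delta}-1$ in $(\mathbb{Z}/2^\alpha\mathbb{Z})^\times$ and are easy to spot-check; a small sketch (ours):
\begin{verbatim}
# Modular checks (ours): with x^y = x^r, r = 2^(alpha - delta) - 1, the
# automorphism x -> x^r of C_{2^alpha} has order 2^delta, and its
# 2^(delta - 1) power is x -> x^(2^(alpha - 1) + 1) = x x^(2^(alpha - 1)).
def mult_order(r, m):
    o, t = 1, r % m
    while t != 1:
        t = (t * r) % m
        o += 1
    return o

for alpha in range(4, 11):
    for delta in range(2, alpha - 1):       # keep alpha - delta >= 2
        m = 2**alpha
        r = 2**(alpha - delta) - 1
        assert mult_order(r, m) == 2**delta
        assert pow(r, 2**(delta - 1), m) == 2**(alpha - 1) + 1
print("order and power identities verified")
\end{verbatim}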
In light of Theorem \ref{negative quo} and Lemma \ref{quo1}, we see that if we can compute $\eta$ for $G_2 (\alpha,\beta,\epsilon,\delta, -)$ when $\delta = 0, 1$, then we can compute $\eta$ for all metacyclic $2$-groups of negative type. There are a number of cases to consider when $\delta = 0$ or $1$, and then using these cases, we will compute $\eta$ when $\delta \ge 2$. Recall that the dihedral $2$-groups are the groups of the form $G_2 (\alpha,1,0,0,-)$, the generalized quaternion $2$-groups are of the form $G_2 (\alpha,1,1,0,-)$, and the semi-dihedral groups are of the form $G_2 (\alpha,1,0,1,-)$. Also, it is known that $G_2 (\alpha, \beta, 1, 0, - )$ and $G_2 (\alpha, \beta, 1, 1, - )$ are isomorphic for all $\alpha \geq 3$ and $\beta \geq 2$. Since $\delta \le \beta$, it follows that dihedral, generalized quaternion, and semi-dihedral are the only groups of negative type where $\beta = 1$.
Thus, we need to analyze the negative metacyclic $2$-groups of type
$$G_2 (\alpha, \beta, \epsilon, \delta, -)$$
with $\beta \geq 2$. We recall a few facts about the classification of such groups. In particular, for negative type $\epsilon$ is either $0$ or $1$. Moreover, the parameters satisfy $\alpha \geq \delta + 2$, with $\beta \geq \delta$ when $\epsilon = 0$ and $\beta \geq \delta + 1$ when $\epsilon = 1$.
When $\delta = 0$ or $1$, there is a particular abelian normal subgroup $M$ of $G$. For this subgroup $M$, we determine which maximal cyclic subgroups of $M$ are maximal in $G$ and how many maximal cyclic subgroups of $G$ lie outside of $M$. This yields the following result. Recall that $\eta^* (M)$ is the number of $G$-orbits on the $M$-conjugacy classes of maximal cyclic subgroups of $M$.
\begin{proposition} \label{one prop}
Suppose $G$ is $G_2 (\alpha, \beta, \epsilon, \delta, -)$ where $\delta = 0$ or $1$ and $\beta \geq 2$. Let $M = \langle x, y^2 \rangle$. Then $M$ is a normal abelian subgroup of $G$ and the following holds:
\begin{enumerate}
\item[(i)] If $\delta = 0$, then $\eta(G) = \eta^*(M) + 1$ and every maximal cyclic subgroup of $M$ is maximal cyclic in $G$ except $\langle y^2 \rangle$.
\item[(ii)] If $\delta = 1$, then $\eta(G) = \eta^*(M)$ and every maximal cyclic subgroup of $M$ is maximal cyclic in $G$ except $\langle y^2 \rangle$ and $\langle y^2 x^{2^{\alpha -1}} \rangle$.
\end{enumerate}
\end{proposition}
\noindent
{\bf Proof.} As $M$ is a subgroup of index $2$ in $G$ it follows that $M$ is normal in $G$. Let $Y = \langle y^2 \rangle$. Observe that $y^2$ centralizes $\langle x \rangle$ and is obviously central in $\langle y \rangle$; so $Y = \langle y^2 \rangle$ is central in $G$. Now, $M$ is central-by-cyclic, so $M$ is abelian.
We now prove that there are exactly two conjugacy classes of maximal cyclic subgroups of $G$ outside of $M$. Since $\langle x \rangle$ is normal in $G$ and $G = \langle x \rangle \langle y \rangle = \langle x \rangle \langle xy \rangle$, we see that $C_G (\langle y \rangle) = C_{\langle x \rangle} (\langle y \rangle) \langle y \rangle = \langle x^{2^{\alpha-1}} \rangle \langle y \rangle$ and $C_G (\langle xy \rangle) = \langle x^{2^{\alpha-1}} \rangle \langle xy \rangle$. It follows that both $\langle y \rangle$ and $\langle xy \rangle$ lie in conjugacy classes of size $|\langle x \rangle:\langle x^{2^{\alpha-1}}\rangle | = 2^{\alpha-1}$. It is not difficult to see now that every cyclic subgroup of $G$ outside of $M$ is conjugate to either $\langle y \rangle$ or $\langle xy \rangle$.
$(i)$ For $\delta = 0$ we show that every maximal cyclic subgroup of $M$ is a maximal cyclic subgroup of $G$ except
$\langle y^2 \rangle$ which lies in exactly $2$ different conjugacy classes of maximal cyclic subgroups of $G$, namely
$\langle y \rangle$ and $\langle xy \rangle$.
Observe that $yY$ acts on $M/Y$ inverting every element. Thus, $M/Y$
is a cyclic subgroup of index $2$ in $G/Y$. We have $(yY)^2 = Y$, so
$G/Y$ is a dihedral group. It follows that if $g \in G \setminus M$, then
$(gY)^2 = Y$ and so, $g^2 \in Y$. Hence, $Y$ is the only maximal cyclic
subgroup of $M$ that is not maximal cyclic in $G$. Notice that $Y \le \langle y \rangle$. Also, we know that $\langle yY \rangle$ and $\langle xyY \rangle$ are in
different conjugacy classes of subgroups of $G/Y$, so $\langle y \rangle$ and $\langle xy \rangle$ are in different conjugacy classes of $G$. Since $x^y = x^{-1}$, we have $xy = yx^{-1}$. It follows that $(yx)^2 = yxyx = y(yx^{-1})x = y^2$.
$(ii)$ For $\delta =1$ we show that the only maximal cyclic subgroups of $M$ that are not maximal in $G$ are $\langle y^2 \rangle$
and $\langle y^2 x^{2^{\alpha - 1}} \rangle$. Again there are exactly $2$ different conjugacy classes of maximal
cyclic subgroups outside of $M$ given by $\langle y \rangle$ and $\langle xy \rangle$. Note that $\langle y \rangle$ contains
$\langle y^2 \rangle$ and $\langle xy \rangle$ contains $\langle y^2 x^{2^{\alpha -1}} \rangle$.
Note that
$M/Y$ is cyclic in $G/Y$ of order $2^\alpha$. Also, $(yY)^2 = Y$ and
$(xY)^{yY} = x^{2^{\alpha-1} - 1} Y = (xY)^{2^{\alpha-1} - 1}$. It follows that $G/Y$ is
isomorphic to a semi-dihedral group. Let $Z = \langle x^{2^{\alpha-1}}, Y \rangle$, and
observe that $Z/Y = Z(G/Y)$. Notice that if $g \in G \setminus M$, then
$(gY)^2 \in Z/Y$. This implies that $g^2 \in Z$. Observe that $\langle y^2 \rangle$ and
$\langle y^2 x^{2^{\alpha-1}} \rangle$ are central (and hence normal) in $G$. It follows
that the square of any conjugate of $y$ will be $y^2$. Since $\delta = 1$, we
have $x^y = x^{2^{\alpha-1} - 1}$, so $xy = yx^{2^{\alpha-1} - 1}$. We have
$(yx)^2 = yxyx = y(yx^{2^{\alpha-1} - 1})x = y^2 x^{2^{\alpha-1}}$. This implies that the square
of any conjugate of $xy$ will be $y^2 x^{2^{\alpha-1}}$. Hence, any other
subgroup of $M$ that is maximal cyclic in $M$ will be maximal cyclic in $G$.
$\Box$\\[1ex]
We now work to compute $\eta$ for the groups with negative type and $\delta$ equal to $0$ or $1$. We will first handle the case when $\epsilon = 0$ and $\beta = 2$. For the following lemma recall that $\alpha \geq \delta +2$ when $p = 2$, so when $\delta =1$ we must have $\alpha \geq 3$.
\begin{lemma} \label{epsilon=0}
Suppose $G$ is $G_2 (\alpha, 2, 0, \delta, - )$. Then
\begin{enumerate}
\item[(i)] $\eta(G) = \alpha + 3$ if $\delta = 0$ and
\item[(ii)] $\eta(G) = \alpha + 2$ if $\delta = 1$.
\end{enumerate}
\end{lemma}
\noindent
{\bf Proof.}
Following Proposition \ref{one prop}, we take $M = \langle x, y^2 \rangle$; so $M$ is abelian. We have $M \cong C_{2^{\alpha}} \times C_2$ and $\eta(M) = \alpha +2$ by Lemma \ref{two}. We claim that all subgroups of $M$ are normal in $G$. To see this, note that if $K$ is a subgroup of $M$, then either (1) $K$ is a subgroup of $\langle x \rangle$, (2) $K = \langle x^a, y^2 \rangle$ for some integer $1 \le a \le 2^\alpha - 1$, or (3) $K = \langle x^a y^2 \rangle$ for some integer $1 \le a \le 2^\alpha -1$. When $\delta = 0$, we know that $x^y = x^{-1}$, so $(x^a)^y = (x^a)^{-1}$ and $(x^ay^2)^y = (x^ay^2)^{-1}$ for every integer $a$. When $\delta = 1$, we have $(x^a)^y = x^{a(2^{\alpha - 1} - 1)}$. Observe that $\langle x^a \rangle = \langle x^{a(2^{\alpha - 1} - 1)} \rangle$, $\langle x^a, y^2 \rangle = \langle x^{a(2^{\alpha - 1} - 1)}, y^2 \rangle$, and $\langle x^a y^2 \rangle = \langle x^{a(2^{\alpha - 1} - 1)} y^2 \rangle$. This proves the claim. Therefore $\eta^*(M) = \eta(M)$, and the result follows from Proposition \ref{one prop}.
$\Box$\\
We continue with the case where $\epsilon = 0$, now considering the subcase $\beta \ge 3$. Recall that $g_p (a,b) = p^{(l-1)}((k-l) (p-1) + p + 1)$ where $p$ is a prime, $a$ and $b$ are positive integers, and we take $k = {\rm max} (a,b)$ and $l = {\rm min} (a,b)$. Recall also that $g_p (a,b) = \eta (C_{p^a} \times C_{p^b})$. The following can be viewed as an improvement on Proposition \ref{centre}(ii).
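Since $g_p$ is used repeatedly below, we note that it can be verified directly: in an abelian group every subgroup is normal, so $\eta(C_{p^a} \times C_{p^b})$ is simply the number of maximal cyclic subgroups. A short enumeration sketch (ours):
\begin{verbatim}
# Check (ours) that g_p(a, b) counts maximal cyclic subgroups of
# C_{p^a} x C_{p^b}; in an abelian group eta is exactly this count.
def g(p, a, b):
    k, l = max(a, b), min(a, b)
    return p**(l - 1) * ((k - l) * (p - 1) + p + 1)

def eta_abelian(p, a, b):
    A, B = p**a, p**b
    def cyc(v):                      # cyclic subgroup generated by v
        s, h = {(0, 0)}, v
        while h not in s:
            s.add(h)
            h = ((h[0] + v[0]) % A, (h[1] + v[1]) % B)
        return frozenset(s)
    cycs = {cyc((i, j)) for i in range(A) for j in range(B)}
    return sum(1 for C in cycs if not any(C < D for D in cycs))

for p in (2, 3):
    for a in range(1, 4):
        for b in range(1, 4):
            assert eta_abelian(p, a, b) == g(p, a, b)
print("g_p matches the direct count on all small cases")
\end{verbatim}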
\begin{theorem} \label{large}
Suppose $G$ is $G_2 (\alpha, \beta, 0, \delta, -)$ with $\beta \ge 3$. As previously let $M = \langle x, y^2 \rangle$.
Then the following hold:
\begin{enumerate}
\item If $\delta = 1$, then $\eta (G) = \eta(M)/2 + 2 = g_2 (\alpha,\beta - 1)/2 + 2$.
\item If $\delta = 0$, then $\eta (G) =\eta(M)/2 + 3 = g_2 (\alpha,\beta - 1)/2 + 3$.
\end{enumerate}
\end{theorem}
\noindent
{\bf Proof.}
Note that we are assuming $\delta$ is $0$ or $1$. As in Proposition \ref{one prop}, we let $M = \langle x, y^2 \rangle$; so it follows that $M$ is abelian. In particular, since we are assuming that $\epsilon = 0$, we have $M \cong \langle x \rangle \times \langle y^2 \rangle = C_{2^\alpha} \times C_{2^{\beta - 1}}$. Using Lemma \ref{two}, we obtain $\eta (M) = g_2 (\alpha,\beta -1)$. Let $k$ be the maximum of $\alpha$ and $\beta - 1$ and let $l$ be the minimum of $\alpha$ and $\beta - 1$; so that $\eta (M) = g_2 (\alpha,\beta -1) = 2^{l-1} (k-l + 3)$. We now work to prove that $\eta^* (M) = g_2(\alpha,\beta -1)/2 +2$. Once this is done, then we will have the conclusion via Proposition \ref{one prop}.
It is not difficult to see that $\langle x \rangle$, $\langle y^2 \rangle$, and $\langle y^2 x^{2^{\alpha - 1}} \rangle$ are maximal cyclic subgroups of $M$ that are normal in $G$. We claim that $\langle y^{2^{\beta -1}} x \rangle$ is a maximal cyclic subgroup of $M$ that is normal in $G$. It is easy to see that it is maximal cyclic. When $\delta = 0$, we see that $(\langle y^{2^{\beta -1}} x \rangle)^y = \langle y^{2^{\beta -1}} x^{-1} \rangle = \langle (y^{2^{\beta -1}} x)^{-1} \rangle$, and when $\delta = 1$, we have $(\langle y^{2^{\beta -1}} x \rangle)^y = \langle y^{2^{\beta -1}} x^{2^{\alpha -1}-1} \rangle = \langle (y^{2^{\beta -1}} x)^{2^{\alpha -1}-1} \rangle$. This proves that it is normal in $G$.
We will prove that all the other maximal cyclic subgroups of $M$ will be in conjugacy classes of size $2$ in $G$. Thus, $\eta^*(M) = (\eta(M) - 4)/2 + 4 = \eta (M)/2 -2 +4 = g_2 (\alpha,\beta - 1)/2 + 2$.
Let $C$ be a maximal cyclic subgroup of $M$. It is not difficult to see that $C$ will be generated by an element of the form $y^{2l}x$ or one of the form $y^2 x^l$. When $\delta = 0$, we have that $(y^{2l}x)^y = y^{2l}x^{-1}$ and $(y^2x^l)^y = y^2 x^{-l}$. For $C$ to be normal, we need this conjugate to be in $C$. When the generator is $y^{2l}x$, we need $y^{2l}x^{-1} = (y^{2l}x)^k = y^{2lk}x^k$ for some integer $k$. This implies that $y^{2l-2lk} = x^{k+1}$. Since $\epsilon = 0$, we have that $y^{2l-2lk} = x^{k+1} = 1$. We see that we must have $2^\alpha$ dividing $k+1$ and $2^{\beta}$ must divide $2l(1-k)$. Thus, there is an integer $r$ so that $k+1 = 2^\alpha r$, and thus, $k = 2^\alpha r - 1$. We obtain that $2^{\beta -1}$ must divide $l(1-(2^\alpha r -1)) = l(2 - 2^\alpha r) = 2l (1 - 2^{\alpha - 1}r)$. Since we know that $\alpha \ge 2$, this implies that $2^{\beta - 2}$ must divide $l$. It follows that $\langle x \rangle$ and $\langle y^{2^{\beta-1}} x \rangle$ are the only two maximal cyclic subgroups of $M$ that are normal in $G$ that are generated by an element of the form $y^{2l}x$ when $\delta = 0$.
When the generator is $y^2 x^l$, we need $y^2x^{-l} = (y^2 x^l)^k = y^{2k}x^{lk}$ for some integer $k$. This implies that $y^{2-2k} = x^{lk+l} = 1$. This implies that $2^\beta$ divides $2(1-k)$ and so, $2^{\beta -1}$ divides $1-k$. Hence, there is an integer $r$ so that $1-k = r2^{\beta -1}$, and hence, $k = 1 -r2^{\beta -1}$. We see that $2^\alpha$ divides $l(1 + k) = l (1 + (1 - r2^{\beta -1})) = l (2-r2^{\beta-1}) = 2l(1-r2^{\beta - 2})$. Since $\beta \ge 3$, we deduce that $2^{\alpha - 1}$ must divide $l$. It follows that $\langle y^2 \rangle$ and $\langle y^2 x^{2^{\alpha - 1}} \rangle$ are the only maximal cyclic subgroups of $M$ that are normal in $G$ that are generated by an element of the form $y^2 x^l$ when $\delta = 0$. This proves the result when $\delta = 0$.
Now we suppose that $\delta = 1$. Recall that $\alpha \ge \delta + 2$, so $\alpha \ge 3$. We have that $(y^{2l}x)^y = y^{2l}x^{2^{\alpha-1}-1}$ and $(y^2x^l)^y = y^2 x^{l(2^{\alpha-1}-1)}$. For $C$ to be normal, we need this conjugate to be in $C$. Suppose the generator is $y^{2l}x$. We need $y^{2l}x^{2^{\alpha-1}-1} = (y^{2l}x)^k = y^{2lk}x^k$ for some integer $k$. This implies that $y^{2l-2lk} = x^{k-2^{\alpha -1} +1} = 1$. We deduce that $2^\alpha$ must divide $k - 2^{\alpha -1} + 1$, and so, there is an integer $r$ so that $k - 2^{\alpha -1} + 1 = 2^\alpha r$. We obtain $k = 2^\alpha r + 2^{\alpha - 1} - 1$. We have that $2^\beta$ divides $2l(1-k) = 2l(1 - 2^\alpha r - 2^{\alpha - 1} + 1)$. It follows that $2^{\beta - 2}$ divides $l(1 - 2^{\alpha -1}r -2^{\alpha - 2})$. Since $\alpha \ge 3$, we see that $2^{\beta - 2}$ divides $l$. We conclude that $\langle x \rangle$ and $\langle y^{2^{\beta-1}} x \rangle$ are the only two maximal cyclic subgroups of $M$ that are normal in $G$ that are generated by an element of the form $y^{2l}x$ when $\delta = 1$.
When the generator is $y^2 x^l$, we need $y^2x^{l(2^{\alpha - 1}-1)} = (y^2 x^l)^k = y^{2k}x^{lk}$ for some integer $k$. We see that $y^{2-2k} = x^{lk - l(2^{\alpha -1} - 1)} = 1$. It follows that $2^\beta$ divides $2 (1-k)$, and so, $2^{\beta -1}$ divides $1-k$. There is an integer $r$ so that $1-k = 2^{\beta -1}r$ which yields $k = 1 - 2^{\beta - 1}r$. We now determine that $2^\alpha$ divides $l(k - 2^{\alpha -1} + 1) = l (1 - 2^{\beta -1}r -2^{\alpha -1} + 1) = 2l (1 - 2^{\beta - 2}r -2^{\alpha - 2})$. Since $\alpha \ge 3$ and $\beta \ge 3$, we have that $2^{\alpha -1}$ divides $l$. We conclude that $\langle y^2 \rangle$ and $\langle y^2 x^{2^{\alpha - 1}} \rangle$ are the only maximal cyclic subgroups of $M$ that are normal in $G$ that are generated by an element of the form $y^2 x^l$ when $\delta = 1$. This proves the result when $\delta = 1$.
$\Box$\\
In this next corollary, recall that $\delta \le \beta$, so when $\beta = 2$, we must have $\delta = 2$. We are able to use Theorem \ref{large} to compute $\eta$ for groups of negative type where $\delta \ge 2$.
\begin{corollary} \label{delta ge 2}
Suppose $G$ is $G_2 (\alpha,\beta,\epsilon,\delta,-)$ with $\delta \ge 2$, then
\begin{enumerate}
\item $\eta (G) = \alpha - \delta + 3 = \alpha + 1$ if $\beta = 2$.
\item $\eta (G) = g_2 (\alpha - \delta +1,\beta-1)/2 + 2$ if $\beta \ge 3$.
\end{enumerate}
\end{corollary}
\noindent
{\bf Proof.}
By Theorem \ref{negative quo}, we have that $\eta (G) = \eta (G/N)$ where $N = \langle x^{2^{\alpha - \delta + 1}} \rangle$. Applying Lemma \ref{quo1}, we see that $G/N \cong G_2 (\alpha - \delta +1, \beta, 0,1, -)$. Using Lemma \ref{epsilon=0}, we see that $\eta (G/N) = \alpha - \delta + 1 + 2 = \alpha + 3 - \delta$ when $\beta = 2$. Since $2 \leq \delta \leq \beta =2$, we see that $\delta = 2$, and so, $\eta (G) = \alpha + 1$. When $\beta \ge 3$, we apply Theorem \ref{large} to see that $\eta (G) = \eta (G/N) = g_2 (\alpha - \delta +1, \beta -1)/2 + 2$.
$\Box$\\
We now compute $\eta$ for groups of negative type with $\delta = 0$ and $\epsilon = 1$. We first handle the case where $\beta = 2$.
\begin{lemma} \label{beta=2}
Suppose $G$ is $G_2 (\alpha, 2, 1, 0, -)$ then $\eta(G) = \alpha + 2$.
\end{lemma}
\noindent
{\bf Proof.}
Define $M = \langle x, y^2 \rangle$. By Proposition \ref{one prop}, we know that $M$ is a normal abelian subgroup of $G$. First note that $(x^{2^{\alpha - 2}}y^2)^2 = x^{2^{\alpha-1}}y^4 = x^{2^{\alpha -1}}x^{2^{\alpha -1}} = x^{2^{\alpha}} =1.$ Thus, $M = \langle x \rangle \times \langle x^{2^{\alpha - 2}} y^2 \rangle \cong C_{2^\alpha} \times C_2$ and $\eta(M) = \alpha + 2$. Consideration of the maximal cyclic subgroups of $M$ shows that all are normal except $\langle (1, x^{2^{\alpha -2}}y^2) \rangle$ and $\langle (x^{2^{\alpha-1}}, x^{2^{\alpha - 2}}y^2) \rangle$ which are conjugate in $G$ via $y$. To see that these two subgroups are conjugate, observe that $M$ has three subgroups of order $2$ and that $\langle x^{2^{\alpha - 1}} \rangle = \langle y^{2^\beta} \rangle$ is central in $G$ and that $Z(G)$ is cyclic. Either $y$ normalizes both of the other two subgroups of order $2$ or it permutes them. However, if $y$ were to normalize them, they would be normal in $G$ and since they have order $2$, that would imply that they would be central in $G$. This however would contradict the fact that the center of $G$ is cyclic. Thus $\eta^*(M) = \alpha + 1$. The result follows from Proposition \ref{one prop}.
$\Box$\\
We continue with the groups of negative type where $\delta = 0$ and $\epsilon = 1$. We next consider $\beta \ge 3$ and $\alpha = 2$.
\begin{lemma} \label{betage3}
Suppose $G$ is $G_2 ( 2, \beta, 1, 0 , - )$ with $\beta \geq 3$. Then $\eta(G) = \beta + 2$.
\end{lemma}
\noindent
{\bf Proof.}
Define $M = \langle x, y^2 \rangle$. By Proposition \ref{one prop}, we know that $M$ is a normal abelian subgroup of $G$. Note that $(xy^{2^{\beta -1}})^2 = x^2y^{2^{\beta}} = x^2x^2 = x^4 = 1$. So $M = \langle xy^{2^{\beta -1}} \rangle \times \langle y^2 \rangle \cong C_2 \times C_{2^{\beta}}$ and $\eta(M) = \beta + 2$. Consideration of the maximal cyclic subgroups of $M$ shows that all are normal except for $\langle (xy^{2^{\beta -1}}, 1) \rangle$ and $\langle (xy^{2^{\beta -1}}, y^{2^{\beta}}) \rangle$ which are conjugate in $G$ via $y$. The proof that these two subgroups are conjugate is similar to the proof of Lemma \ref{beta=2}. In particular, $Z(G)$ is cyclic, $M$ has three subgroups of order $2$, and if $y$ normalized these two subgroups, then it would centralize them and contradict the fact that $Z (G)$ is cyclic. Thus $\eta^*(M) = \beta + 1$. The result follows from Proposition \ref{one prop}.
$\Box$\\
We conclude by computing $\eta$ when $\delta = 0$, $\epsilon = 1$, $\alpha \ge 3$, and $\beta \ge 3$. Note this also covers the cases $\delta =1$,
$\epsilon =1$ and $\alpha, \beta \ge 3$.
\begin{theorem} \label{largeone}
Suppose $G$ is $G_2 (\alpha, \beta, 1, 0, -)$ with $\alpha \ge 3$ and $\beta \ge 3$. Let $M = \langle x, y^2 \rangle$.
\begin{enumerate}
\item If $\alpha \ge \beta$, then $\eta (G) = \eta(M)/2 + 3 = g_2(\alpha,\beta-1)/2 + 3$.
\item If $\alpha < \beta$, then $\eta (G) = \eta(M)/2+ 3 = g_2 (\alpha-1,\beta)/2 + 3$.
\end{enumerate}
\end{theorem}
\noindent
{\bf Proof.}
As in Proposition \ref{one prop}, we let $M = \langle x, y^2 \rangle$; so it follows that $M$ is abelian. We know that $|M| = 2^{\alpha + \beta - 1}$, that $x$ has order $2^\alpha$ and $y^2$ has order $2^\beta$. Suppose $\alpha \ge \beta$, then $M \cong C_{2^\alpha} \times C_{2^{\beta -1}}$, and so $\eta (M) = g_2 (\alpha,\beta-1)$. Let $w = y^2 x^{2^{\alpha - \beta}}$. Observe that $w^{2^{\beta-2}} = (y^2x^{2^{\alpha-\beta}})^{2^{\beta-2}} = y^{2^{\beta -1}} x^{2^{\alpha - 2}} \not\in \langle x \rangle$ and $w^{2^{\beta-1}} = (y^2x^{2^{\alpha-\beta}})^{2^{\beta-1}} = y^{2^{\beta}} x^{2^{\alpha - 1}} = x^{2^{\alpha -1}} x^{2^{\alpha -1}} = 1$. It follows that $M = \langle x \rangle \times \langle w \rangle$.
If $\beta \ge \alpha + 1$, then $M \cong C_{2^{\alpha-1}} \times C_{2^{\beta}}$, and so $\eta (M) = g_2 (\alpha-1,\beta)$. Let $u = y^{2^{\beta-\alpha+1}}x$. We compute $u^{2^{\alpha -2}} = (y^{2^{\beta-\alpha+1}}x)^{2^{\alpha -2}}= y^{2^{\beta-1}}x^{2^{\alpha - 2}} \not\in \langle y^2 \rangle$ and $u^{2^{\alpha -1}} = (y^{2^{\beta-\alpha+1}}x)^{2^{\alpha -1}}= y^{2^{\beta}}x^{2^{\alpha - 1}} = x^{2^{\alpha -1}} x^{2^{\alpha -1}} = 1$. We deduce that $M = \langle u \rangle \times \langle y^2 \rangle$.
In both cases, we will show that $\eta^* (M) = \eta (M)/2 + 2$, and we obtain the conclusion by applying Proposition \ref{one prop}. Notice that a maximal cyclic subgroup of $M$ will be generated either by an element of the form $y^{2l}x$ for some integer $l$ or by an element of the form $y^2 x^l$ for some integer $l$. Observe that $\langle x \rangle$ and $\langle y^2 \rangle$ are maximal cyclic subgroups of $M$ that are normal in $G$.
We next show that $\langle y^{2^{\beta -1}} x\rangle$ and $\langle y^2 x^{2^{\alpha -2}} \rangle$ are normal subgroups in $G$. Since $M$ is abelian and has index $2$ in $G$, it suffices to show that $y$ normalizes these subgroups. We compute $(y^{2^{\beta -1}} x)^y = y^{2^{\beta -1}} x^{-1} = (y^{2^{\beta -1}} x)^{-1}$. Since $y$ conjugates the generator of $\langle y^{2^{\beta-1}}x \rangle $ to its inverse, this implies that $\langle y^{2^{\beta -1}} x\rangle$ is normal in $G$.
We now turn to $\langle y^2 x^{2^{\alpha -2}} \rangle$. We begin with the observation that $(y^2 x^{2^{\alpha - 2}})^4 = y^8$. Since $\beta \ge 3$, we see that $x^{2^{\alpha -1}} = y^{2^\beta} \in \langle y^2 x^{2^{\alpha -2}} \rangle$. Conjugating yields $(y^2 x^{2^{\alpha -2}})^y = y^2 x^{-2^{\alpha - 2}}$. Note that $x^{-2^{\alpha -2}} = x^{2^{\alpha -2}} x^{2^{\alpha -1}}$. We have $(y^2 x^{2^{\alpha -2}})^y = y^2 x^{2^{\alpha -2}} x^{2^{\alpha -1}}$. Since both $y^2 x^{2^{\alpha -2}}$ and $x^{2^{\alpha -1}}$ lie in $\langle y^2 x^{2^{\alpha -2}} \rangle$, we conclude that $(y^2 x^{2^{\alpha -2}})^y$ lies in $\langle y^2 x^{2^{\alpha -2}} \rangle$.
We deduce that $\langle y^2 x^{2^{\alpha -2}} \rangle$ is normal in $G$.
We prove that the remaining maximal cyclic subgroups of $M$ lie in orbits of size $2$. We have noted that a maximal cyclic subgroup $C$ of $M$ will have a generator of the form $y^{2l} x$ or of the form $y^2 x^l$ for some integer $l$. If $C$ has a generator of the form $y^{2l} x$, then for $C$ to be normal we need $(y^{2l}x)^y = y^{2l}x^{-1} \in C$. This implies that $y^{2l} x^{-1} = (y^{2l}x)^k$ for some integer $k$. We have $y^{2l-2lk} = x^{k+1} = u \in \langle x \rangle \cap \langle y^2 \rangle = \langle x^{2^{\alpha - 1}} \rangle$. Hence, $u$ is either $1$ or $x^{2^{\alpha - 1}}$. If $u = 1$, then $2^{\alpha}$ divides $k+1$ and $2^{\beta +1}$ divides $2l(1-k)$. We see that there is an integer $r$ so that $k+1 = 2^{\alpha} r$, and hence, $k = 2^\alpha r - 1$. This implies that $2^{\beta +1}$ divides $2l (1-k) = 2l(1-2^\alpha r +1) = 4 l(1 -2^{\alpha -1}r)$. Since $\alpha \ge 2$, this yields $2^{\beta -1}$ divides $l$. When $u = x^{2^{\alpha -1}}$, we obtain that $k + 1 \equiv 2^{\alpha -1} ~({\rm mod}~2^{\alpha})$. Hence, there is an integer $r$ so that $k + 1= 2^{\alpha-1} + r 2^{\alpha}$, and so, $k = 2^{\alpha - 1} + r2^\alpha - 1$. We see that $2l(1-k) \equiv 2^\beta ~({\rm mod}~2^{\beta+1})$. This implies that $2^{\beta+1}$ divides $2l(1-k) - 2^{\beta} = 2l (1 -2^{\alpha -1} - r 2^\alpha + 1) -2^\beta = 4l(1-2^{\alpha -2} -r 2^{\alpha-1}) -2^\beta$. We deduce that $2^{\beta-2}$ divides $l$. We conclude that $\langle x \rangle$ and $\langle y^{2^{\beta -1}} x \rangle$ are the only maximal cyclic subgroups of $M$ having the form $\langle y^{2l}x \rangle$ that are normal in $G$.
We now suppose that $C$ has a generator of the form $y^2 x^l$. We need $(y^2 x^l)^y = y^2 x^{-l} \in C$. Hence, we have that $y^2 x^{-l} = (y^2 x^l)^k = y^{2k} x^{lk}$ for some integer $k$. We have $y^{2-2k} = x^{lk+l} = u$. As in the previous paragraph, we see that $u$ is either $1$ or $x^{2^{\alpha -1}}$. If $u = 1$, then we have that $2^{\beta +1}$ divides $2(1-k)$, and so, there is an integer $r$ so that $1-k = 2^\beta r$. We determine that $2^\alpha$ divides $l(k+1) = l(1 - 2^\beta r +1) = 2l(1-2^{\beta-1}r)$. It follows that $2^{\alpha - 1}$ divides $l$. Now, suppose that $u = x^{2^{\alpha -1}}$. We must have that $2(1-k) \equiv 2^\beta ~({\rm mod}~2^{\beta +1})$ and $l(k+1) \equiv 2^{\alpha -1} ~({\rm mod}~2^\alpha)$. Hence, there is an integer $r$ so that $2(1-k) = 2^\beta + 2^{\beta + 1}r$. This implies that $k = 1 -2^{\beta -1} - 2^{\beta} r$. We then obtain that $2^\alpha$ divides $l (k+1) - 2^{\alpha - 1} = l (1 -2^{\beta-1} - 2^\beta r + 1) -2^{\alpha -1} = 2 (l(1 - 2^{\beta -2} - 2^{\beta -1}r) - 2^{\alpha - 2})$. This implies that $2^{\alpha -1}$ divides $l (1 - 2^{\beta -2} - 2^{\beta -1}r) -2^{\alpha -2}$. Hence, there is an integer $s$ so that $l (1 - 2^{\beta -2} - 2^{\beta -1}r) -2^{\alpha -2} = 2^{\alpha -1}s$. This leads to $l (1 - 2^{\beta -2} - 2^{\beta -1}r) = 2^{\alpha -1}s + 2^{\alpha -2} = 2^{\alpha -2} (2s + 1)$. This yields $2^{\alpha -2}$ divides $l$. Observe that $x^{2^{\alpha - 1}} = y^{2^\beta}$, and so, $\langle y^2 x^{2^{\alpha -1}} \rangle = \langle y^2 \rangle$. We deduce that $\langle y^2 \rangle$ and $\langle y^2 x^{2^{\alpha - 2}} \rangle$ are the only maximal cyclic subgroups of $M$ having the form $\langle y^2 x^l \rangle$ that are normal in $G$.
We now see that the number of $G$-orbits of maximal cyclic subgroups of $M$ is $(\eta (M) - 4)/2 +4 = \eta (M)/2 - 2 +4 = \eta (M)/2 + 2$, which completes the proof of the result.
$\Box$\\
We close by proving that when $G$ is metacyclic of negative type and is not dihedral, generalized quaternion, or semi-dihedral, then $\eta (G) \ge \alpha + \beta - 2$, and we determine when equality occurs. We first handle the case where $\delta$ equals $0$ or $1$. In this case, we have $\eta (G) \ge \alpha + \beta$.
\begin{proposition}
Suppose $G = G_2(\alpha, \beta, \epsilon, \delta, -)$ with $\delta = 0$ or $1$ and $\beta \geq 2$. Then $\eta(G) \geq \alpha + \beta.$
\end{proposition}
\noindent{\bf Proof.}
(i) Suppose $\epsilon = 0$.
Denote $l = \min (\alpha, \beta - 1)$ and $k = \max (\alpha, \beta -1)$.
First, consider $l \geq 3$. Then $\beta \geq 4$ and by Theorem \ref{large} and Lemma \ref{g2 comp}
$$\eta(G) \geq g_2(\alpha, \beta -1)/2 + 2 \geq 2k + 2 \geq \alpha + \beta.$$
Next, assume $l = 2$. So $\beta \geq 3$ and by Theorem \ref{large} and Lemma \ref{g2 comp}
$$\eta(G) \ge g_2(\alpha, \beta -1 )/2 + 2 = k+3 \geq \alpha + \beta.$$
Finally, set $l=1$. As $\alpha \geq 2$, we have $\beta = 2$. The result follows from Lemma \ref{epsilon=0}.
(ii) Now suppose $\epsilon = 1$. Assume $\alpha \geq \beta$, then $l = \min (\alpha, \beta -1) = \beta -1$
and $k = \max (\alpha, \beta - 1) = \alpha$. If $l \geq 3$, then $\beta \geq 4$ and $\alpha \geq 4$, so we can assume $\delta = 0$. Applying Theorem \ref{largeone} and Lemma \ref{g2 comp} yields
$$\eta(G) = g_2(\alpha, \beta -1)/2 + 3 \geq 2k + 3 \geq \alpha + \beta.$$
If $l=2$, then $\beta = 3$,
and we again appeal to Theorem \ref{largeone} to obtain
$$\eta(G) = g_2(\alpha, \beta -1)/2 + 3 = g_2(k, 2)/2 + 3 = k+4 \geq \alpha + \beta.$$
If $l=1$, then $\beta = 2$. If $\alpha = 2$ then $\delta =0$ and if $\alpha \geq 3$ we can assume $\delta =0$. Thus we apply Lemma \ref{beta=2}.
Finally, suppose $\epsilon =1$ and $\alpha < \beta$.
We set $l = \min (\alpha-1, \beta) = \alpha -1$ and $k= \max (\alpha - 1, \beta) = \beta$.
When $l \geq 3$, we apply Theorem \ref{largeone} and Lemma \ref{g2 comp} to get
$$\eta(G) = g_2(\alpha -1, \beta)/2 + 3 \geq 2k + 3 \geq \alpha + \beta.$$
If $l=2$, then $\alpha = 3$ and $\beta > 3$. Apply Theorem \ref{largeone} with Lemma \ref{g2 comp} to give
$$\eta(G) = g_2(\beta, 2)/2 + 3 = \beta +4 \geq \alpha + \beta.$$
If $l=1$, then $\alpha = 2$ and $\delta = 0$; the result follows from Lemma \ref{betage3}.
$\Box$\\
We now have the case where $\delta \ge 2$.
\begin{proposition}
Suppose $G = G_2(\alpha, \beta, \epsilon, \delta, -)$ with $\delta \geq 2$. Then $\eta(G) \geq \alpha + \beta -2$. Equality holds if and only if $\beta = \delta$ and either (i) $\beta=3$ or (ii) $\beta \geq 4$ and $\alpha - \beta =2$.
\end{proposition}
\noindent{\bf Proof.}
Set $l = \min (\alpha - \delta +1, \beta -1)$ and $k = \max (\alpha - \delta +1, \beta-1)$.
We consider various cases according to the value of $l$.
First, suppose $l \geq 4$.
Then by Corollary \ref{delta ge 2} and Lemma \ref{g2 comp}
\begin{eqnarray*}
\eta(G)
& = & g_2(\alpha - \delta +1, \beta-1)/2 + 2 \\
& = & g_2( k,l)/2 + 2 \geq 2k + l + 2 \\
& = & (k+l) + k + 2\\
& \geq & \alpha - \delta + \beta + \beta -1 + 2 \\
& \geq & \alpha + \beta + 1\\
\end{eqnarray*}
since $\delta \leq \beta$.
Now consider $l = 3$. We use Corollary \ref{delta ge 2} and Lemma \ref{g2 comp} to find an exact value for $\eta(G)$.
$$\eta(G)
= g_2(\alpha - \delta + 1, \beta -1)/2 + 2
= g_2( k,3)/2 + 2
= 2k + 2. $$
If $\alpha - \delta + 1 > \beta - 1 = 3$, then $\beta = 4$, so $\delta \leq 4$, and
$$\eta(G) = 2(\alpha - \delta + 1) + 2 = \alpha + (\alpha - \delta + 2) + (-\delta + 2) > \alpha + \beta -2.$$
On the other hand, when $\beta - 1 \geq \alpha - \delta + 1 = 3$, we obtain $\beta \ge 4$ and $\alpha - \delta = 2$, so $\alpha - 2 \leq \beta$ and
$$\eta(G) = 2(\beta - 1) + 2 = 2 \beta \geq \beta + \alpha - 2$$
with equality if and only if $\beta = \delta$.
Next suppose $l=2$. Since $\alpha - \delta + 1 \ge 2+1 = 3$, we must have $\beta =3$. Applying Corollary \ref{delta ge 2} and Lemma \ref{g2 comp},
\begin{eqnarray*}
\eta(G) & = & g_2(\alpha - \delta +1, \beta - 1)/2 + 2 = g_2(k, 2)/2 + 2 \\
& = & k+3
= \alpha - \delta + 4 \\
& \geq & \alpha + 1
= \alpha + \beta -2
\end{eqnarray*}
with equality if and only if $\delta = 3 = \beta.$
Lastly consider $l=1$. In this case $\beta = 2$ and the result follows from Corollary \ref{delta ge 2}.
$\Box$\\
| {
"timestamp": "2022-06-10T02:12:35",
"yymm": "2206",
"arxiv_id": "2206.04339",
"language": "en",
"url": "https://arxiv.org/abs/2206.04339",
"abstract": "In this paper, we set $\\eta (G)$ to be the number of conjugacy classes of maximal cyclic subgroups of a finite group $G$. We compute $\\eta (G)$ for all metacyclic $p$-groups. We show that if $G$ is a metacyclic $p$-group of order $p^n$ that is not dihedral, generalized quaternion, or semi-dihedral, then $\\eta (G) \\ge n-2$, and we determine when equality holds.",
"subjects": "Group Theory (math.GR)",
"title": "Conjugacy classes of maximal cyclic subgroups of metacyclic $p$-groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717448632122,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.708944929673145
} |
https://arxiv.org/abs/2011.10724 | A Quantized Analogue of the Markov-Krein Correspondence | We study a family of measures originating from the signatures of the irreducible components of representations of the unitary group, as the size of the group goes to infinity. Given a random signature $\lambda$ of length $N$ with counting measure $\mathbf{m}$, we obtain a random signature $\mu$ of length $N-1$ through projection onto a unitary group of lower dimension. The signature $\mu$ interlaces with the signature $\lambda$, and we record the data of $\mu,\lambda$ in a random rectangular Young diagram $w$. We show that under a certain set of conditions on $\lambda$, both $\mathbf{m}$ and $w$ converge as $N\to\infty$. We provide an explicit moment generating function relationship between the limiting objects. We further show that the moment generating function relationship induces a bijection between bounded measures and certain continual Young diagrams, which can be viewed as a quantized analogue of the Markov-Krein correspondence. |
\section{Introduction}
Given a random matrix $M_N$ whose distribution is invariant under conjugation by unitary matrices, let $\lambda$ be the random vector of its eigenvalues and $\mu = \pi_{N,N-1}\lambda$ be the random vector of the eigenvalues of a principal $(N-1)\times(N-1)$ submatrix of $M_N$. We begin with a folk theorem in random matrix theory, stated here as a conjecture:
\begin{conjecture}
\label{con:MK_intro}
Suppose that we have a sequence of unitarily invariant random matrices $M_N$ such that the spectral measure $\frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i}$ converges weakly in probability to a deterministic measure $\mathbf{m}_{\mathrm{RMT}}$ as $N\to\infty$. Let $\mu=\pi_{N,N-1}\lambda$. The random measure $\sum_{i=1}^N \delta_{\lambda_i} - \sum_{i=1}^{N-1} \delta_{\mu_i}$ converges weakly in probability to a signed measure $\mathbf{d}_{\mathrm{RMT}}$ which is related to $\mathbf{m}_{\mathrm{RMT}}$ by
\begin{equation} \label{eq:MK_intro}
\exp\left(\sum_{k=1}^\infty\frac{z^k}{k}\int_\mathbb{R}x^k \mathbf{d}_{\mathrm{RMT}}(dx)\right)=\sum_{k=0}^\infty z^k\int_\mathbb{R} x^k\mathbf{m}_{\mathrm{RMT}}(dx).
\end{equation}
\end{conjecture}
\begin{figure}[ht]
\subfile{SemicircleVKLSCurve}
\caption{Scaled Semicircle Law (left) and Vershik-Kerov-Logan-Shepp Curve, the limit of the Plancherel measure on Young diagrams (right)}
\label{fig:semicircle}
\end{figure}
Furthermore, the above relation is directly related to the Markov-Krein correspondence, which is a bijection between probability measures $\mathbf{m}$ and certain continual Young diagrams, observed by Kerov \cite{KEROV}. Here, $\mathbf{d}$ is the second derivative of the continual Young diagram, see also \cite{KREIN} and Theorem \ref{thm:BMK} in this text. A fascinating instance of this bijection, discovered by Kerov \cite{KEROV}, is where $\mathbf{m}$ is given by the semicircle law and $\mathbf{d}$ is the second derivative of the Vershik-Kerov-Logan-Shepp (VKLS) curve. We note that the semicircle law is the limiting spectral law of the Gaussian Unitary Ensemble (GUE) and more generally of Wigner matrices, see \cite{AGZ}*{Section 2.1}. On the other hand, the VKLS curve arises as the limiting diagram of the Plancherel measure on Young diagrams, see \cite{VK}, \cite{LOGAN}, and \Cref{fig:semicircle}.
Though a proof of \Cref{con:MK_intro} is unavailable in the literature, \cite{Bu} proved the special case where $M_N$ is an $N\times N$ GUE matrix, and more generally a Wigner matrix. Due to the lack of unitary invariance for general Wigner matrices, this suggests that the assumption of unitary invariance may be relaxed, though it is not clear how much one can relax the hypothesis.
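Although \Cref{con:MK_intro} concerns deterministic limits, the moment relation \eqref{eq:MK_intro} is easy to probe numerically in the GUE case proved in \cite{Bu}. The sketch below (ours; the sample sizes are arbitrary) estimates the moments $m_k$ of $\mathbf{m}_{\mathrm{RMT}}$ and $d_k$ of $\mathbf{d}_{\mathrm{RMT}}$ from samples and compares $d_k$ with $k$ times the $k$-th coefficient of $\log\sum_k m_k z^k$, which is what \eqref{eq:MK_intro} predicts.
\begin{verbatim}
# Monte Carlo probe (ours) of the moment relation for the GUE.
import numpy as np

rng = np.random.default_rng(0)
N, samples, K = 400, 40, 4
m_emp = np.zeros(K + 1)          # moments of (1/N) sum delta_{lambda_i}
d_emp = np.zeros(K + 1)          # moments of sum delta_l - sum delta_mu
for _ in range(samples):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2     # GUE; E|H_ij|^2 = 1
    M = H / np.sqrt(N)           # semicircle scaling, support [-2, 2]
    lam = np.linalg.eigvalsh(M)
    mu = np.linalg.eigvalsh(M[:-1, :-1])
    for k in range(K + 1):
        m_emp[k] += np.sum(lam**k) / N / samples
        d_emp[k] += (np.sum(lam**k) - np.sum(mu**k)) / samples

# coefficients of log(sum_k m_k z^k) via the standard recurrence (m_0 = 1)
logc = np.zeros(K + 1)
for n in range(1, K + 1):
    logc[n] = m_emp[n] - sum(j * logc[j] * m_emp[n - j]
                             for j in range(1, n)) / n
for k in range(1, K + 1):        # prediction: d_k = k * logc[k]
    print(k, round(d_emp[k], 3), round(k * logc[k], 3))
\end{verbatim}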
The main result of this article establishes a quantized analogue of \Cref{con:MK_intro} where matrices are replaced with representations, and eigenvalues with signatures. Moreover, we find a quantized version of the Markov-Krein correspondence which is directly linked to our main result. We now introduce the quantized setting and our main result.
Let $\mathrm{U}(N)$ denote the $N$-dimensional unitary group. Recall that the irreducible representations of $\mathrm{U}(N)$ are in bijection with the set $\mathbb{GT}_N$ of $N$-tuples of non-increasing integers $\lambda = (\lambda_1, \ldots,\lambda_N)$ called \emph{signatures of length $N$}. Let $V_N^\lambda$ denote the irreducible representation corresponding to $\lambda \in \mathbb{GT}_N$. Given an arbitrary finite-dimensional representation $V_N$ of $\mathrm{U}(N)$, we define a probability measure $\rho[V_N]$ on $\mathbb{GT}_N$ where
\[ \rho[V_N](\lambda) = \frac{m_\lambda\dim V_N^\lambda}{\dim V_N}, \quad V_N = \bigoplus_{\lambda \in \mathbb{GT}_N} m_\lambda V_N^\lambda. \]
In other words, the probability weight of $\lambda$ is proportional to the dimension of the isotypic component corresponding to $\lambda$. For $\lambda \sim \rho[V_N]$, we define $\pi_{N,N-1}\lambda$ to be the random element of $\mathbb{GT}_{N-1}$ such that the probability distribution of $\pi_{N,N-1} \lambda$ given $\lambda$ is $\rho[V_N^\lambda|_{\mathrm{U}(N-1)}]$. In particular, the marginal distribution of $\pi_{N,N-1}\lambda$ is given by $\rho[V_N|_{\mathrm{U}(N-1)}]$.
Our main result is:
\begin{theorem}
\label{thm:DMK_intro}
Suppose that we have a sequence of representations $V_N$ of $\mathrm{U}(N)$ with $\lambda\sim\rho[V_N]$ such that the distributions $\{\rho[V_N]\}_{N\ge 1}$ satisfy the technical assumption in Definition \ref{def:LLNA}. Then, the counting measure $\frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i+N-i}$ converges weakly in probability to a deterministic measure $\mathbf{m}$ as $N\to\infty$, and the random measure $\sum_{i=1}^N \delta_{\lambda_i+N-i} - \sum_{i=1}^{N-1} \delta_{\mu_i+N-1-i}$, where $\mu=\pi_{N,N-1}\lambda$, converges weakly in probability to a signed measure $\mathbf{d}$; the measures $\mathbf{m}$ and $\mathbf{d}$ are related by
\begin{equation} \label{eq:QMK_intro}
\exp\left(\sum_{k=1}^\infty\frac{z^k}{k}\int_\mathbb{R}x^k \mathbf{d}(dx)\right)=\frac{1}{z}\left(-1+\exp\left(z\sum_{k=0}^\infty z^k\int_\mathbb{R}x^k \mathbf{m}(dx)\right)\right).
\end{equation}
\end{theorem}
Furthermore, we show that the above relation defines a bijection between probability measures with density bounded by $1$ and a certain subclass of continual Young diagrams, where $\mathbf{m}$ is the bounded measure, and $\mathbf{d}$ can be viewed as the second derivative of the continual Young diagram. An exact statement is given in Theorem \ref{thm:BQMK}. This bijection is a quantized analogue of the Markov-Krein correspondence. Moreover, the Markov-Krein correspondence can be obtained from our quantized correspondence through a semiclassical limit, see Section \ref{ssec:semiclassical} for details.
Just as the semicircle law and the VKLS curve are linked by the Markov-Krein correspondence, we similarly find that the one-sided Plancherel measures are linked to the VKLS curve by our quantized correspondence, see \Cref{sec:appendixb}. Here, the one-sided Plancherel measures are a one-parameter family of distributions describing $N\to\infty$ limits of certain characters of $\mathrm{U}(N)$, see \cite{BBO}.
We expect that \Cref{con:MK_intro} may be proved by adapting the proof of \Cref{thm:DMK_intro}. Since the focus of this article is on the quantized setting, we do not pursue this direction, though we point out the necessary adaptations, see the end of \Cref{ssec:random_signatures}.
The main tool we use is the Fourier transform on representations of the unitary group and differential operators whose eigenfunctions are given by Schur functions. The action of these operators on the Fourier transform yields combinatorial expressions for the moments of the measures $\rho[V_N]$ and $\rho[V_N^\lambda|_{\mathrm{U}(N-1)}]$, from which we can study their behavior in the large $N$ limit. The use of the Fourier transform to study large $N$ limits of representations of the unitary group was pioneered by Bufetov and Gorin in \cite{BuG1}, where they studied limit shapes for the classical Lie groups. Their methods were further developed in \cites{BuG2,BuG3} to study global fluctuations for discrete particle systems, encompassing a variety of applications including large $N$ limits of lozenge tilings, domino tilings, and representations of the unitary group. Related methods were also recently used to obtain large $N$ local asymptotics of measures $\rho[V_N](\lambda)$ \cite{AHN}. For our purposes, we require the expansion of the moments of $\rho[V_N]$ to subleading terms. A critical property that we use for our analysis is that the differential operators commute with each other asymptotically relative to the order of the limit.
Using this approach, it should be possible to further refine our results to study the global fluctuations of the signed measure $\sum_{i=1}^N \delta_{\lambda_i+N-i} - \sum_{i=1}^{N-1} \delta_{\mu_i+N-1-i}$, though we do not pursue this here. We note that the global fluctuations of the random matrix analogue of these signed measures were studied in \cite{ERDOS} for Wigner matrices, and identified with a derivative of the Gaussian free field --- a $2$ dimensional conformally invariant, universal random distribution. Similar results were also established for the $\beta$-Jacobi corners process \cite{GZ} and the $\beta$-Hermite ensemble \cite{AG}.
The article is organized as follows. In \Cref{sec:main results}, we set up and state our main results, \Cref{thm:DMK}, a technical modification of \Cref{thm:DMK_intro}, and \Cref{thm:BQMK}, the quantized Markov-Krein bijection, as well as touch upon their respective random matrix analogues. In \Cref{sec:momlim}, we set up and prove \Cref{thm:momreg,thm:momdiff}, which explicitly calculate the moments referenced in \Cref{thm:DMK}. In \Cref{sec:theproof}, we use \Cref{thm:momreg,thm:momdiff} to prove \Cref{thm:DMK}. In \Cref{sec:bijection_proof}, we prove \Cref{thm:BQMK} and show how we can extract the Markov-Krein correspondence through a semiclassical limit. Finally, in \Cref{sec:appendixb}, we give an example of objects paired by the quantized Markov-Krein correspondence.
\textbf{Acknowledgements}. We would like to thank our mentor Andrew J. Ahn for guiding us throughout this project. Also, we would like to thank Vadim Gorin for suggesting the project and providing comments.
\section{Main Results and Random Matrix Analogues}
\label{sec:main results}
There are two main results we prove. The first is a technical restatement of Theorem \ref{thm:DMK_intro}. The second main result establishes that the relation \eqref{eq:QMK_intro} gives a bijective correspondence between a class of continual Young diagrams with second derivative $\mathbf{d}$ and probability measures $\mathbf{m}$. For further motivation, we also present random matrix analogues of these results.
\subsection{Asymptotics of Random Signatures} \label{ssec:random_signatures}
An $N$-tuple of non-increasing integers $\lambda=(\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N)$ is called a \textit{signature} of length $N$, and we denote by $\mathbb{GT}_N$ the set of all signatures of length $N$. The \textit{Schur function} $s_\lambda$, for $\lambda\in\mathbb{GT}_N$, is a symmetric rational function defined by
\[s_\lambda(x_1,\ldots,x_N)=\frac{\det\left[x_i^{\lambda_j+N-j}\right]_{i,j=1}^N}{\prod_{i<j}(x_i-x_j)}.\]
Let $\mathfrak{r}$ be a probability measure on the set $\mathbb{GT}_N$. The \textit{Schur generating function} $S_\mathfrak{r}(x_1,\ldots,x_N)$ of $\mathfrak{r}$ is a symmetric Laurent power series in $x_1,\ldots,x_N$ given by
\[S_\mathfrak{r}(x_1,\ldots,x_N)=\sum_{\lambda\in\mathbb{GT}_N}\mathfrak{r}(\lambda)\frac{s_\lambda(x_1,\ldots,x_N)}{s_\lambda(1^N)}.\]
In general, we will assume that the measure $\mathfrak{r}$ is such that this sum is uniformly convergent in an open neighborhood of $(1^N)$.
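For small $N$ these objects can be computed symbolically. The sketch below (ours, using SymPy) builds $s_\lambda$ from the determinantal formula above and confirms the classical evaluation $s_\lambda(1^N)=\prod_{i<j}\frac{\lambda_i-\lambda_j+j-i}{j-i}$, the Weyl dimension formula, which supplies the normalizing constants in $S_\mathfrak{r}$.
\begin{verbatim}
# Symbolic sketch (ours): bialternant Schur function and its value at 1^N.
import sympy as sp
from itertools import combinations

def schur(lam, xs):
    n = len(xs)
    det = sp.Matrix(n, n, lambda i, j: xs[i]**(lam[j] + n - 1 - j)).det()
    vand = sp.prod(xs[i] - xs[j] for i, j in combinations(range(n), 2))
    return sp.cancel(det / vand)     # a polynomial in x_1, ..., x_N

lam, N = (3, 1, 0), 3
xs = sp.symbols('x1:%d' % (N + 1))
s = schur(lam, xs)
weyl = sp.prod(sp.Rational(lam[i] - lam[j] + j - i, j - i)
               for i, j in combinations(range(N), 2))
assert s.subs({x: 1 for x in xs}) == weyl    # here both equal 15
print(sp.expand(s))                          # monomial expansion
\end{verbatim}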
A key object of study is the \textit{projection map} $\pi_{N,N-1}$, depending on context either a map from random variables on $\mathbb{GT}_N$ to random variables on $\mathbb{GT}_{N-1}$, or a map from probability distributions on $\mathbb{GT}_N$ to probability distributions on $\mathbb{GT}_{N-1}$.
\begin{definition}
\label{def:proj}
For any fixed $\lambda\in\mathbb{GT}_N$, let $V_N^\lambda$ denote the corresponding irreducible representation of $\mathrm{U}(N)$. Suppose that its restriction onto $\mathrm{U}(N-1)$ decomposes into irreducible representations as
\[V_N^\lambda|_{\mathrm{U}(N-1)} = \bigoplus_{\mu \in \mathbb{GT}_{N-1}} m_\mu V_{N-1}^\mu. \]
Now, if $\lambda$ is a random variable, then $\pi_{N,N-1}\lambda$ is the random element of $\mathbb{GT}_{N-1}$ such that
\[ \mathbb{P}(\pi_{N,N-1}\lambda = \mu |\lambda) = \frac{m_\mu\dim(V_{N-1}^\mu)}{\dim (V_N^\lambda|_{\mathrm{U}(N-1)})}. \]
\end{definition}
Our goal is to study asymptotics of counting measures of random signatures. To this end, for any $\lambda\in\mathbb{GT}_N$, we define the counting measure
\[m[\lambda]:=\frac{1}{N}\sum_{i=1}^N\delta\left(\frac{\lambda_i+N-i}{N}\right).\]
Furthermore, for any $\lambda\in\mathbb{GT}_N$ and $\mu\in\mathbb{GT}_{N-1}$, we define
\[d[\lambda,\mu]:=\sum_{i=1}^N\delta\left(\frac{\lambda_i+N-i}{N}\right)-\sum_{i=1}^{N-1}\delta\left(\frac{\mu_i+N-1-i}{N}\right).\]
The pushforward of a measure $\mathfrak{r}$ on $\mathbb{GT}_N$ with respect to the map $\lambda\to m[\lambda]$ defines a random measure on $\mathbb{R}$ which we denote by $m[\mathfrak{r}]$. Furthermore, the random measure $d[\mathfrak{r}]$ on $\mathbb{R}$ is defined as $d[\lambda,\mu]$ where $\lambda$ is distributed according to $\mathfrak{r}$ and $\mu$ is distributed according to $\pi_{N,N-1}\lambda$.
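These definitions are straightforward to simulate. By the standard Gelfand--Tsetlin branching rule, $V_N^\lambda|_{\mathrm{U}(N-1)}$ contains $V_{N-1}^\mu$ with multiplicity one exactly when $\mu$ interlaces $\lambda$, so sampling $\pi_{N,N-1}\lambda$ amounts to choosing an interlacing $\mu$ with probability proportional to $\dim V_{N-1}^\mu$. A small sketch (ours; the signature and seed are arbitrary):
\begin{verbatim}
# Simulation sketch (ours) of pi_{N,N-1} and of the measures m and d.
import random
from itertools import product
from math import prod

def dim(mu):                     # Weyl dimension formula for U(len(mu))
    n = len(mu)
    num = prod(mu[i] - mu[j] + j - i
               for i in range(n) for j in range(i + 1, n))
    den = prod(j - i for i in range(n) for j in range(i + 1, n))
    return num // den            # exact: the quotient is an integer

def project(lam):                # one step lambda -> mu = pi lambda
    N = len(lam)
    mus = list(product(*[range(lam[i + 1], lam[i] + 1)
                         for i in range(N - 1)]))    # interlacing mu
    return random.choices(mus, weights=[dim(s) for s in mus])[0]

random.seed(1)
lam = (6, 4, 2, 1, 0)            # a signature of length N = 5
N = len(lam)
mu = project(lam)
m_pts = [(lam[i] + N - 1 - i) / N for i in range(N)]      # atoms of m
mu_pts = [(mu[i] + N - 2 - i) / N for i in range(N - 1)]
print("mu =", mu)
for k in (1, 2, 3):
    mk = sum(t**k for t in m_pts) / N
    dk = sum(t**k for t in m_pts) - sum(t**k for t in mu_pts)
    print(k, mk, dk)
\end{verbatim}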
We will be interested in asymptotics as $N\to\infty$, so we have the following definition.
\begin{definition}[\cite{BuG2}*{Definition 2.1}]
\label{def:LLNA}
A sequence of symmetric functions $\{F_N(x_1,\ldots,x_N)\}_{N\ge 1}$ is called \textbf{LLN-appropriate} if there exists a collection of reals $\{c_k\}_{k\ge 1}$ such that
\begin{itemize}
\item For any $N$, the function $\log F_N(x_1,\ldots,x_N)$ is holomorphic in an open complex neighborhood of $(1^N)$.
\item For any index $j$ and any $k\in\mathbb{N}$ we have
\[\lim_{N\to\infty}\left.\frac{\partial_j^k\log F_N(x_1,\ldots,x_N)}{N}\right|_{x_i=1}=c_k.\]
\item For any $s\in\mathbb{N}$ and any indices $i_1,\ldots,i_s$ such that at least two are distinct, we have
\[\lim_{N\to\infty}\left.\frac{\partial_{i_1}\cdots\partial_{i_s}\log F_N(x_1,\ldots,x_N)}{N}\right|_{x_i=1}=0.\]
\item The power series
\[\sum_{k=1}^\infty\frac{c_k}{(k-1)!}(x-1)^{k-1}\]
converges in a neighborhood of unity.
\end{itemize}
Now, a sequence $\rho=\{\rho_N\}_{N\ge 1}$, where $\rho_N$ is a probability measure on $\mathbb{GT}_N$, is called \textbf{LLN-appropriate} if the sequence $\{S_{\rho_N}\}_{N\ge 1}$ of its Schur generating functions is LLN-appropriate. For this sequence, we let $H_\rho(x)$ be a holomorphic function in a neighborhood of unity such that
\[H_\rho'(x) = \sum_{k=1}^\infty\frac{c_k}{(k-1)!}(x-1)^{k-1}.\]
\end{definition}
We have the following technical restatement of Theorem \ref{thm:DMK_intro}, which is what we will refer to for the rest of the paper.
\begin{theorem}
\label{thm:DMK}
Suppose that a sequence of probability measures $\rho=\{\rho_N\}_{N\ge 1}$, where $\rho_N$ is a probability measure on $\mathbb{GT}_N$, is LLN-appropriate. Then the random measures $m[\rho_N]$ and $d[\rho_N]$ converge as $N\to\infty$, in probability and in the sense of moments, to \textit{deterministic} measures $\mathbf{m}$ and $\mathbf{d}$ on $\mathbb{R}$, respectively. Furthermore, their moment generating functions are related by
\begin{equation}
\label{eq:DMK}
\exp\left(\sum_{k=1}^\infty\frac{z^k}{k}\int_\mathbb{R}x^k \mathbf{d}(dx)\right)=\frac{1}{z}\left(-1+\exp\left(z\sum_{k=0}^\infty z^k\int_\mathbb{R}x^k \mathbf{m}(dx)\right)\right).
\end{equation}
\end{theorem}
Here are two applications demonstrating that the natural operations of tensor products and projections give LLN-appropriate sequences, so that Theorem \ref{thm:DMK} applies to them. We adopt the following notion from \cite{BuG1}.
\begin{definition}[\cite{BuG1}*{Definition 2.5}] \label{def:reg}
A sequence $\lambda(N)$ of signatures is called \textit{regular} if there is a piecewise continuous function $f(t)$ and a constant $C$ such that
\[\lim_{N\to\infty}\sum_{j=1}^N\left|\frac{\lambda_j(N)}{N}-f(j/N)\right|=0\]
and
\[\left|\frac{\lambda_j(N)}{N}-f(j/N)\right|<C\]
for all $j=1,\ldots,N$ and $N=1,2,\ldots$.
\end{definition}
For a sequence of representations $V_N$, recall the measure $\rho[V_N]$ on $\mathbb{GT}_N$ defined in the introduction. Note that the Schur generating function of $\rho[V_N]$ is the normalized character of $V_N$; see the proof of \Cref{lem:ESGF} for justification.
The following result is about tensor products of irreducible representations.
\begin{corollary}
\label{thm:tensor}
Suppose $\lambda^{(1)}(N),\ldots,\lambda^{(r)}(N)$ are regular sequences of signatures. Let $V_N$ be the representation of $\mathrm{U}(N)$ given by
\[V_N=V_N^{\lambda^{(1)}(N)}\otimes\cdots\otimes V_N^{\lambda^{(r)}(N)}.\]
As $N\to\infty$, the measures $m[\rho[V_N]]$ and $d[\rho[V_N]]$ converge to measures $\mathbf{m}$ and $\mathbf{d}$ that are related by (\ref{eq:DMK}).
\end{corollary}
\begin{proof}
Note that the character of $V_N^{\lambda^{(1)}(N)}\otimes\cdots\otimes V_N^{\lambda^{(r)}(N)}$ is simply the product of the characters of the $V_N^{\lambda^{(i)}(N)}$, so
\[S_{\rho[V_N]}(x_1,\ldots,x_N)=\prod_{i=1}^r\frac{s_{\lambda^{(i)}(N)}(x_1,\ldots,x_N)}{s_{\lambda^{(i)}(N)}(1^N)}.\]
It is well known that the normalized Schur functions of a regular sequence of signatures are LLN-appropriate (see \cite{BuG2}*{Theorem 8.1}). Furthermore, it is easy to see that a product of LLN-appropriate functions is also LLN-appropriate, so the conclusion of Theorem \ref{thm:DMK} holds here.
\end{proof}
The next result is about projections of irreducible representations.
\begin{corollary}
\label{thm:project}
Suppose we have a regular sequence $\lambda(N)$ of signatures, and some fixed $0<\eta<1$. Let $V_N$ be the representation of $\mathrm{U}(\lfloor\eta N\rfloor)$ given by
\[V_N=V_N^{\lambda(N)}\big|_{\mathrm{U}(\lfloor\eta N\rfloor)}.\]
As $N\to\infty$, the measures $m[\rho[V_N]]$ and $d[\rho[V_N]]$ converge to measures $\mathbf{m}$ and $\mathbf{d}$ that are related by (\ref{eq:DMK}).
\end{corollary}
\begin{proof}
It is easy to see that the character of $V_N$ is the character of $V_N^\lambda$ with $1$s plugged in for the last $N-\lfloor\eta N\rfloor$ entries, so the Schur generating function of $\rho[V_N]$ is simply
\[S_{\rho[V_N]}(x_1,\ldots,x_{\lfloor\eta N\rfloor})=\frac{s_\lambda(x_1,\ldots,x_{\lfloor\eta N\rfloor},1^{N-\lfloor\eta N\rfloor})}{s_\lambda(1^N)}.\]
It is easy to see that the restrictions of LLN-appropriate functions are also LLN-appropriate, so the conclusion of Theorem \ref{thm:DMK} holds in this setting.
\end{proof}
We now give some details on the random matrix analogue of these results, avoiding precise technical statements. Let $(M_N)_{N\ge 1}$ be a sequence of unitarily invariant random matrices --- that is, the distribution of $M_N$ is invariant under conjugation by fixed unitary matrices. Letting $\mathbb{T}_N$ denote the set of all strictly decreasing sequences of real numbers of length $N$, we see that the eigenvalues of $M_N$ induce a probability measure $\rho[M_N]$ on $\mathbb{T}_N$. Furthermore, we may define a probability measure $\tilde{\rho}[M_N]$ on $\mathbb{T}_N\times\mathbb{T}_{N-1}$ given by the eigenvalues of $M_N$ and its principal $(N-1)\times(N-1)$ submatrix.
The counting measures $m[\rho]$ and $d[\tilde{\rho}]$ can be defined in the same way as discussed previously. Then, one can produce an analogue of LLN-appropriateness for the sequence $(M_N)_{N \ge 1}$ such that the limiting measures of $m[\rho]$ and $d[\tilde{\rho}]$ are $\mathbf{m}$ and $\mathbf{d}$, respectively. Moreover, the measures $\mathbf{m}$ and $\mathbf{d}$ satisfy
\begin{equation}
\label{eq:MK}
\exp\left(\sum_{k=1}^\infty\frac{z^k}{k}\int_\mathbb{R}x^k \mathbf{d}(dx)\right)=\sum_{k=0}^\infty z^k\int_\mathbb{R}x^k\mathbf{m}(dx).
\end{equation}
This can be shown using similar methods as ours, replacing Schur functions with multivariate Bessel functions. For more details, see Section 2 of \cite{GS}.
We note that this step is nontrivial, and we do not discuss the proof further. Moreover, the unitarily invariant matrix ensemble can be recovered from the random signatures through a semiclassical limit. A partial description of this semiclassical limit is provided in \Cref{sec:bijection_proof}, where we prove results about the bijective nature of the correspondences.
To be clear, the statement above is heuristic and given without proof. However, we note that \eqref{eq:MK} was shown to hold for the Wigner and Wishart ensembles in \cite{Bu}, which intersects the above result in the case of the GUE ensemble.
\subsection{Quantized Markov-Krein Correspondence Bijection} \label{ssec:QMK_correspondence}
The next main result establishes that the relation \eqref{eq:DMK} produces a bijection between two classes of objects. An analogous result was proved by Kerov in \cite{KEROV} for the relation \eqref{eq:MK}, which is the Markov-Krein correspondence. We begin by introducing some classes of objects.
Let $\mathcal{M}[a,b]$ denote the set of probability measures supported on the interval $[a,b]$. For any $\mu\in\mathcal{M}[a,b]$, define
\[\mu_k=\int_{-\infty}^{\infty} t^k d\mu(t).\]
Let $\widetilde{\mathcal{M}}[a,b]$ denote the set of probability measures supported on the interval $[a,b]$ with density bounded between $0$ and $1$. A continual Young diagram is defined to be a function $w:\mathbb{R}\to\mathbb{R}$ that satisfies
\begin{itemize}
\item $|w(x_1)-w(x_2)|\le|x_1-x_2|$ for all $x_1,x_2\in\mathbb{R}$.
\item There exists $x_0\in\mathbb{R}$ such that $w(x)=|x-x_0|$ for all sufficiently large $|x|$.
\end{itemize}
For an interval $[a,b]$, let $\mathcal{D}[a,b]$ denote the set of continual Young diagrams satisfying $w(x)=|x-x_0|$ for all $x\not\in[a,b]$. Furthermore, let $\widetilde{\mathcal{D}}[a,b]$ denote the set of $w\in\mathcal{D}[a,b]$ such that $R_w(u)>-1$ for real $u$ outside the interval $[a,b]$ ($R_w$ is defined below).
Define the function $p_k:\mathcal{D}[a,b]\to\mathbb{R}$ for $k\in\mathbb{Z}_{\ge 1}$ by
\begin{equation}
\label{eq:pkdef}
p_k(w)=\frac{1}{2}\int_a^b t^k w''(t)dt.
\end{equation}
Note that $w''$ is $0$ outside of the interval $[a,b]$.
For a measure $\mu\in\mathcal{M}[a,b]$, define its $R$-function to be
\[R_\mu(u)=\int_a^b\frac{d\mu(t)}{u-t} = \frac{1}{u}\sum_{k=0}^\infty u^{-k}\mu_k.\]
Similarly, for a measure $\psi\in\widetilde{\mathcal{M}}[a,b]$, define its $R$-function to be
\[R_\psi(u)=-1+\exp\int_a^b\frac{d\psi(t)}{u-t} = -1+\exp\left(\frac{1}{u}\sum_{k=0}^\infty u^{-k}\psi_k\right).\]
Also, for a diagram $w\in\mathcal{D}[a,b]$ with associated $\sigma(x)=\frac{1}{2}(w(x)-|x|)$, define its $R$-function to be
\[R_w(u)=\frac{1}{u}\exp\int_a^b\frac{d\sigma(t)}{t-u} = \frac{1}{u}\exp\sum_{k=1}^\infty\frac{u^{-k}}{k}p_k(w).\]
These functions are holomorphic outside the interval $[a,b]$.
As our next main result, we will prove that \eqref{eq:DMK} produces a bijective correspondence between $\widetilde{\mathcal{M}}[a,b]$ and $\widetilde{\mathcal{D}}[a,b]$.
\begin{theorem}
\label{thm:BQMK}
There is a bijective correspondence between $\widetilde{\mathcal{M}}[a,b]$ and $\widetilde{\mathcal{D}}[a,b]$ such that $\psi\leftrightarrow w$ if and only if
\[\frac{1}{z}\left(-1+\exp\left(z\sum_{k=0}^\infty \psi_k z^k\right)\right)=\exp\left(\sum_{k=1}^\infty\frac{p_k(w)}{k}z^k\right).\]
The above relation is equivalent to $R_\psi(u)=R_w(u)$.
\end{theorem}
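As a concrete illustration of \Cref{thm:BQMK} (a worked example that can be checked by hand): let $\psi$ be Lebesgue measure restricted to $[0,1]$, so $\psi\in\widetilde{\mathcal{M}}[0,1]$ with density identically $1$ and moments $\psi_k=\frac{1}{k+1}$. Then
\[\frac{1}{z}\left(-1+\exp\left(z\sum_{k=0}^\infty \frac{z^k}{k+1}\right)\right)=\frac{1}{z}\left(-1+e^{-\log(1-z)}\right)=\frac{1}{1-z}.\]
On the other side, the diagram $w(x)=|x-1|$ has $w''=2\delta_1$, so $p_k(w)=1$ for all $k\ge 1$ and
\[\exp\left(\sum_{k=1}^\infty\frac{z^k}{k}\right)=e^{-\log(1-z)}=\frac{1}{1-z}.\]
Thus the uniform measure on $[0,1]$ corresponds to the one-corner diagram $w(x)=|x-1|$ under the quantized Markov-Krein correspondence.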
Moreover, the Markov-Krein correspondence stated in \eqref{eq:MK}, shown by Krein and Nudelman in \cite{KREIN}, is bijective as well. The statement below, due to Kerov, is that the Markov-Krein correspondence is a bijection between $\mathcal{M}[a,b]$ and $\mathcal{D}[a,b]$.
\begin{theorem}[\cite{KEROV}*{p.\,107}]
\label{thm:BMK}
There is a bijective correspondence between $\mathcal{M}[a,b]$ and $\mathcal{D}[a,b]$ such that $\mu\leftrightarrow w$ if and only if
\[\sum_{k=0}^\infty\mu_k z^k = \exp\left(\sum_{k=1}^\infty\frac{p_k(w)}{k}z^k\right).\]
The above relation is equivalent to $R_\mu(u)=R_w(u)$.
\end{theorem}
One can realize \Cref{thm:BMK} as a semiclassical limit of \Cref{thm:BQMK}. We give a heuristic description of this semiclassical limit in \Cref{sec:bijection_proof}.
\subsection{Connection between Theorems \ref{thm:DMK} and \ref{thm:BQMK}}
The measure $\mathbf{m}$ lives in the space of probability measures with compact support and density bounded by $1$. Instead of working with $\mathbf{d}$ directly, we consider a related object which can be viewed as a second anti-derivative of $\mathbf{d}$.
Let $\{x_i\}$ and $\{y_i\}$ be two \textit{interlacing} sequences of real numbers, i.e.
\[x_1\le y_1\le x_2\le\cdots\le x_{N-1}\le y_{N-1}\le x_N.\]
Define $w^{\{x_i\},\{y_i\}}(x)$ to be the \textit{rectangular Young diagram} of $\{x_i\}$ and $\{y_i\}$ in the following way. Let $z_0=\sum_{i=1}^N x_i-\sum_{i=1}^{N-1}y_i$. Then, $w^{\{x_i\},\{y_i\}}(x)$ is the unique continuous function with the following properties.
\begin{itemize}
\item $w^{\{x_i\},\{y_i\}}(x)=|x-z_0|$ for $x\le x_1$ and $x\ge x_N$.
\item $\frac{d}{dx}w^{\{x_i\},\{y_i\}}(x) = 1$ for $x_i< x < y_i$ and $\frac{d}{dx}w^{\{x_i\},\{y_i\}}(x)=-1$ for $y_i<x<x_{i+1}$.
\end{itemize}
An example of a rectangular Young diagram is in Figure \ref{fig:interlacing}. For a probability measure $\mathfrak{r}$ on $\mathbb{GT}_N$, $w[\mathfrak{r}]$ is defined to be the random rectangular Young diagram $w^{\{x_i\},\{y_i\}}(x)$ with
\[\{x_i\}=\left\{\frac{\lambda_i+N-i}{N}\right\}, \quad \quad \{y_i\}=\left\{\frac{\mu_i+N-1-i}{N}\right\}\]
where $\lambda$ is distributed according to $\mathfrak{r}$ and $\mu$ is distributed according to $\pi_{N,N-1}\lambda$. For an LLN-appropriate sequence of probability measures $\{\rho_N\}$, convergence in probability of $d[\rho_N]$ to some measure $\mathbf{d}$ implies convergence of $w[\rho_N]$ to a limiting continual Young diagram $w$ such that $\frac{1}{2}w''(x)$ is given by the density of $\mathbf{d}$. This follows from the lemma below.
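For concreteness, the rectangular Young diagram admits the closed form $w^{\{x_i\},\{y_i\}}(x)=\sum_i|x-x_i|-\sum_i|x-y_i|$, which is readily checked against the two defining properties above; the short Python sketch below (with arbitrary illustrative data) evaluates it.
\begin{verbatim}
def w_rect(t, xs, ys):
    # Closed form for the rectangular Young diagram of interlacing xs, ys:
    # slope +1 on (x_i, y_i), slope -1 on (y_i, x_{i+1}), and w(t) = |t - z0|
    # outside [x_1, x_N], where z0 = sum(xs) - sum(ys).
    return sum(abs(t - x) for x in xs) - sum(abs(t - y) for y in ys)

xs, ys = [0.0, 1.0, 3.0], [0.5, 2.0]          # x1 <= y1 <= x2 <= y2 <= x3
z0 = sum(xs) - sum(ys)                        # = 1.5
print(w_rect(-1.0, xs, ys), abs(-1.0 - z0))   # 2.5 2.5
print(w_rect(5.0, xs, ys), abs(5.0 - z0))     # 3.5 3.5
\end{verbatim}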
\begin{figure}[ht]
\subfile{Interlacing}
\caption{Interlacing sequences and rectangular Young diagram with $N= 6$}
\label{fig:interlacing}
\end{figure}
\begin{lemma}[\cite{Bu}*{Lemma 2.1}]
\label{lem:diagram_top}
Let $\mathcal{F}[a,b]$ be the set of all Lipschitz-$1$ real-valued functions $f(x)$ supported on $[a,b]$. The weak topology on this set defined by the functionals
\[f(x)\to\int_{a}^b f(x)x^k dx\]
for $k\ge 0$ coincides with the uniform topology.
\end{lemma}
Now, \Cref{thm:DMK} and \eqref{eq:pkdef} imply that the limits of $m[\rho_N]$ and $w[\rho_N]$ are paired by the quantized Markov-Krein bijection of Theorem \ref{thm:BQMK}.
\section{Moments of Limiting Measure}
\label{sec:momlim}
Here we will explicitly compute the moments referenced in \Cref{thm:DMK}. This is important for our eventual proof of the theorem.
\begin{theorem}[\cite{BuG1}*{Theorem 5.1}]
\label{thm:momreg}
Suppose that a sequence of probability measures $\rho=\{\rho_N\}_{N\ge 1}$, where $\rho_N$ is a probability measure on $\mathbb{GT}_N$, is LLN-appropriate. Let $H(x)=H_\rho(x)$ be the associated function from Definition \ref{def:LLNA}. The random measure $m[\rho_N]$ converges as $N\to\infty$, in probability and in the sense of moments, to a deterministic measure $\mathbf{m}$ on $\mathbb{R}$ with moments
\[\int_\mathbb{R} x^k\mathbf{m}(dx)=\left.\sum_{\ell=0}^k\frac{1}{(\ell+1)!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}.\]
\end{theorem}
\begin{theorem}
\label{thm:momdiff}
Suppose that a sequence of probability measures $\rho=\{\rho_N\}_{N\ge 1}$, where $\rho_N$ is a probability measure on $\mathbb{GT}_N$, is LLN-appropriate. Let $H(x)=H_\rho(x)$ be the associated function from Definition \ref{def:LLNA}. The random measure $d[\rho_N]$ converges as $N\to\infty$, in probability and in the sense of moments, to a deterministic measure $\mathbf{d}$ on $\mathbb{R}$ with moments
\[\int_\mathbb{R} x^k\mathbf{d}(dx)=\left.\sum_{\ell=0}^k\frac{1}{\ell!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}.\]
\end{theorem}
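Before proving these theorems, let us record a quick consistency check: for a concrete choice of $H$, the two moment formulas do satisfy the relation (\ref{eq:DMK}) order by order in $z$. The following \texttt{sympy} sketch (assuming, purely for illustration, $H'(z)=z$) verifies this up to degree $5$.
\begin{verbatim}
import sympy as sp

z, t = sp.symbols('z t')
Hp = z     # illustrative assumption: H'(z) = z, i.e. H(z) = (z^2 - 1)/2
K = 5

def moment(k, shift):
    # shift = 1 reproduces the formula for m_k; shift = 0 the one for d_k.
    return sum(sp.binomial(k, l) / sp.factorial(l + shift)
               * sp.diff(z**k * Hp**(k - l), z, l).subs(z, 1)
               for l in range(k + 1))

m = [moment(k, 1) for k in range(K + 1)]
d = [moment(k, 0) for k in range(K + 1)]
lhs = sp.exp(sum(d[k] * t**k / k for k in range(1, K + 1)))
rhs = (sp.exp(t * sum(m[k] * t**k for k in range(K + 1))) - 1) / t
print(sp.series(lhs - rhs, t, 0, K))   # O(t**5): (eq:DMK) holds to this order
\end{verbatim}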
\subsection{Moments and Schur Generating Functions}
\label{sec:moments and schur gf}
In this section, we introduce the tools necessary to prove \Cref{thm:momreg,thm:momdiff}. The point of this section is that the moments of $m[\mathfrak{r}]$ and $d[\mathfrak{r}]$ can be found by applying certain differential operators to the Schur generating function of $\mathfrak{r}$.
\begin{definition}
\label{def:operator}
Define a differential operator on functions of $(x_1,\ldots,x_N)$ by
\[\mathcal{D}_{N,k} := \frac{1}{\displaystyle\prod_{1\le i<j\le N}(x_i-x_j)}\left(\sum_{i=1}^N(x_i\partial_i)^k\right)\prod_{1\le i<j\le N}(x_i-x_j)\]
where $\partial_i := \frac{\partial}{\partial x_i}$.
\end{definition}
The following theorem shows how this can be used to find the moments of $m[\mathfrak{r}]$ and $d[\mathfrak{r}]$. Note that this is similar to \cite{BuG1}*{Theorem 4.5}.
\begin{theorem}
\label{thm:opmom}
Let $\mathfrak{r}$ be a probability distribution on $\mathbb{GT}_N$. Then, we have
\[\mathbb{E}\left[\left(\int_\mathbb{R}x^km[\mathfrak{r}](dx)\right)^n\right]=\left.\frac{1}{N^{n(k+1)}}\mathcal{D}_{N,k}^nS_\mathfrak{r}(x_1,\ldots,x_N)\right|_{x_1=\cdots=x_N=1}\]
and
\[\mathbb{E}\left[\left(\int_\mathbb{R}x^kd[\mathfrak{r}](dx)\right)^n\right]=\left.\frac{1}{N^{nk}}\left(\sum_{\ell=0}^n\binom{n}{\ell}(-1)^{n-\ell}\mathcal{D}_{N-1,k}^{n-\ell}\mathcal{D}_{N,k}^\ell\right)S_\mathfrak{r}(x_1,\ldots,x_N)\right|_{x_1=\cdots=x_N=1}.\]
\end{theorem}
To prove this, we need the following lemma that allows us to get a handle on the Schur generating function after applying the projection map.
\begin{lemma}
\label{lem:ESGF}
For any $\lambda\in\mathbb{GT}_N$, we have
\[S_{\pi_{N,N-1}\delta_\lambda}(x_1,\ldots,x_{N-1})=\frac{s_\lambda(x_1,\ldots,x_{N-1},1)}{s_\lambda(1^N)}.\]
\end{lemma}
\begin{proof}
Let $V_N^\lambda$ be the irreducible representation of $\mathrm{U}(N)$ indexed by $\lambda$, and let its restriction onto $\mathrm{U}(N-1)$ decompose into irreducible representations as
\[V_N^\lambda|_{\mathrm{U}(N-1)} = \bigoplus_{\mu \in \mathbb{GT}_{N-1}} m_\mu V_{N-1}^\mu. \]
Then,
\begin{align*}
S_{\pi_{N,N-1}\delta_\lambda}(x_1,\ldots,x_{N-1}) &= \sum_{\mu\in\mathbb{GT}_{N-1}}\frac{m_\mu\dim(V_{N-1}^\mu)}{\dim (V_N^\lambda|_{\mathrm{U}(N-1)})}\frac{s_\mu(x_1,\ldots,x_{N-1})}{s_\mu(1^{N-1})} \\
&= \sum_{\mu\in\mathbb{GT}_{N-1}}\frac{m_\mu}{\dim (V_N^\lambda|_{\mathrm{U}(N-1)})}s_\mu(x_1,\ldots,x_{N-1}).
\end{align*}
This is just the normalized character of $V_N^\lambda|_{\mathrm{U}(N-1)}$ due to the fact that characters add under direct sums. Therefore, the proof is complete, since the normalized character is also
\[\frac{s_\lambda(x_1,\ldots,x_{N-1},1)}{s_\lambda(1^N)}.\]
\end{proof}
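The branching rule underlying this proof can be checked symbolically in small cases. The \texttt{sympy} sketch below (with a hand-picked $\lambda$) verifies that $s_\lambda(x_1,x_2,1)$ equals the sum of $s_\mu(x_1,x_2)$ over interlacing $\mu$, which is the multiplicity-free form of the decomposition used above.
\begin{verbatim}
import sympy as sp

def schur(lam, xs):
    # Bialternant formula: s_lam = det(x_i^{lam_j + N - j}) / det(x_i^{N - j}).
    N = len(xs)
    num = sp.Matrix(N, N, lambda i, j: xs[i] ** (lam[j] + N - 1 - j))
    den = sp.Matrix(N, N, lambda i, j: xs[i] ** (N - 1 - j))
    return sp.cancel(num.det() / den.det())

x1, x2 = sp.symbols('x1 x2')
lam = (2, 1, 0)
lhs = schur(lam, [x1, x2, sp.Integer(1)])
mus = [(a, b) for a in range(lam[1], lam[0] + 1)   # all interlacing mu
              for b in range(lam[2], lam[1] + 1)]
rhs = sum(schur(mu, [x1, x2]) for mu in mus)
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}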
We are now in a position to prove Theorem \ref{thm:opmom}.
\begin{proof}[Proof of Theorem \ref{thm:opmom}]
The key to this theorem is that Schur functions $s_\lambda$ are eigenfunctions of $\mathcal{D}_{N,k}$. In particular, for any $\lambda\in\mathbb{GT}_N$, we have
\[\mathcal{D}_{N,k}s_\lambda(x_1,\ldots,x_N) = \left(\sum_{i=1}^N(\lambda_i+N-i)^k\right)s_\lambda(x_1,\ldots,x_N).\]
This is shown by a straightforward calculation with the determinant definition of Schur functions. We have that
\[\mathbb{E}\left[\left(\int_\mathbb{R} x^km[\mathfrak{r}](dx)\right)^n\right]=\sum_{\lambda\in\mathbb{GT}_N}\mathfrak{r}(\lambda)\left(\frac{1}{N}\sum_{i=1}^N\left(\frac{\lambda_i+N-i}{N}\right)^k\right)^n.\]
By the eigenfunction observation, this can be written as
\[\mathbb{E}\left[\left(\int_\mathbb{R} x^km[\mathfrak{r}](dx)\right)^n\right]=\frac{1}{N^{n(k+1)}}\sum_{\lambda\in\mathbb{GT}_N}\mathfrak{r}(\lambda)\frac{\mathcal{D}_{N,k}^n s_\lambda(x_1,\ldots,x_N)}{s_\lambda(x_1,\ldots,x_N)}.\]
Evaluating the right side at $x_1=\cdots=x_N=1$, the first result is immediate by definition of $S_\mathfrak{r}$.
We will now show the second result. By similar logic as above, we may set
\begin{align*}
T(x_1,\ldots,x_N)&:=\frac{1}{N^{\ell k}}\mathcal{D}_{N,k}^\ell\sum_{\lambda\in\mathbb{GT}_N}\mathfrak{r}(\lambda)\frac{s_\lambda(x_1,\ldots,x_N)}{s_\lambda(1^N)}\\
&= \sum_{\lambda\in\mathbb{GT}_N}\mathfrak{r}(\lambda)\left(\sum_{i=1}^N\left(\frac{\lambda_i+N-i}{N}\right)^k\right)^\ell\frac{s_\lambda(x_1,\ldots,x_N)}{s_\lambda(1^N)}.
\end{align*}
Our goal is to evaluate
\[\left.\frac{1}{N^{(n-\ell)k}}\mathcal{D}_{N-1,k}^{n-\ell}T(x_1,\ldots,x_N)\right|_{x_i=1}.\]
Since $\mathcal{D}_{N-1,k}$ does not act on the variable $x_N$, we may evaluate the above expression with $T(x_1,\ldots,x_N)$ replaced by $T(x_1,\ldots,x_{N-1},1)$. By Lemma \ref{lem:ESGF},
\begin{align*}
&\frac{1}{N^{(n-\ell)k}}\mathcal{D}_{N-1,k}^{n-\ell}\frac{s_\lambda(x_1,\ldots,x_{N-1},1)}{s_\lambda(1^N)}\\
=&\frac{1}{N^{(n-\ell)k}}\mathcal{D}_{N-1,k}^{n-\ell}\sum_{\mu\in\mathbb{GT}_{N-1}}\pi_{N,N-1}\delta_\lambda(\mu)\frac{s_\mu(x_1,\ldots,x_{N-1})}{s_\mu(1^{N-1})} \\
=&\sum_{\mu\in\mathbb{GT}_{N-1}}\pi_{N,N-1}\delta_\lambda(\mu)\left(\sum_{i=1}^{N-1}\left(\frac{\mu_i+N-1-i}{N}\right)^k\right)^{n-\ell}\frac{s_\mu(x_1,\ldots,x_{N-1})}{s_\mu(1^{N-1})}, \end{align*}
so
\begin{align*}
&\left.\frac{1}{N^{(n-\ell)k}}\mathcal{D}_{N-1,k}^{n-\ell}T(x_1,\ldots,x_{N-1},1)\right|_{x_i=1} \\
=& \sum_{\lambda\in\mathbb{GT}_N}\mathfrak{r}(\lambda)\left(\sum_{i=1}^N\left(\frac{\lambda_i+N-i}{N}\right)^k\right)^\ell\sum_{\mu\in\mathbb{GT}_{N-1}}\pi_{N,N-1}\delta_\lambda(\mu)\left(\sum_{i=1}^{N-1}\left(\frac{\mu_i+N-1-i}{N}\right)^k\right)^{n-\ell} \\
=&\sum_{\substack{\lambda\in\mathbb{GT}_N \\ \mu\in\mathbb{GT}_{N-1}}}\mathfrak{r}(\lambda)\pi_{N,N-1}\delta_\lambda(\mu)\left(\sum_{i=1}^N\left(\frac{\lambda_i+N-i}{N}\right)^k\right)^\ell\left(\sum_{i=1}^{N-1}\left(\frac{\mu_i+N-1-i}{N}\right)^k\right)^{n-\ell}.
\end{align*}
The theorem now follows by the binomial theorem.
\end{proof}
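The eigenfunction relation at the heart of this proof is easy to confirm symbolically for small $N$ and $k$; the following \texttt{sympy} sketch does so for one hand-picked case.
\begin{verbatim}
import sympy as sp

def schur(lam, xs):
    # Bialternant formula for the Schur polynomial.
    N = len(xs)
    num = sp.Matrix(N, N, lambda i, j: xs[i] ** (lam[j] + N - 1 - j))
    den = sp.Matrix(N, N, lambda i, j: xs[i] ** (N - 1 - j))
    return sp.cancel(num.det() / den.det())

xs = list(sp.symbols('x1 x2 x3'))
lam, k = (3, 1, 0), 2
N = len(xs)
V = sp.prod([xs[i] - xs[j] for i in range(N) for j in range(i + 1, N)])
s = schur(lam, xs)

def xd_pow(f, x, k):
    # Apply the operator (x d/dx) to f, k times.
    for _ in range(k):
        f = x * sp.diff(f, x)
    return f

Ds = sp.cancel(sum(xd_pow(sp.expand(V * s), x, k) for x in xs) / V)
eig = sum((lam[i] + N - 1 - i) ** k for i in range(N))   # = 29 here
print(sp.simplify(Ds - eig * s))   # 0
\end{verbatim}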
\subsection{Two Lemmas}
We introduce two lemmas that will be useful in our analysis of the moments of the counting measure. In particular, they help to reduce the number of variables in the calculation. Before proceeding, we introduce the following notation.
Given a function $f(z_1,\ldots,z_n)$, define
\[\sum_{\mathrm{cyc}} f(z_1,\ldots,z_n):=f(z_1,\ldots,z_n)+f(z_2,\ldots,z_n,z_1)+\cdots+f(z_n,z_1,\ldots,z_{n-1}).\]
\begin{lemma}[\cite{BuG1}*{Lemma 5.5}]
\label{lem:BG 5.5}
Let $g(z)$ be a function analytic in some neighborhood of $z=1$. Then
\[\lim_{z_i\to 1}\sum_{\mathrm{cyc}} \frac{g(z_1)}{(z_1-z_2)(z_1-z_3)\cdots(z_1-z_n)} = \left.\frac{1}{(n-1)!}\frac{\partial^{n-1}g(z)}{\partial z^{n-1}}\right|_{z=1}.\]
\end{lemma}
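This lemma is the statement that the cyclic sum is a divided difference of $g$ evaluated at coinciding points; for polynomial $g$ it can be checked directly, as in the brief \texttt{sympy} sketch below (with $g(t)=t^5$ and $n=3$ as an arbitrary test case).
\begin{verbatim}
import sympy as sp

t = sp.Symbol('t')
zs = sp.symbols('z1 z2 z3')
g = t**5       # arbitrary test function, analytic near 1
n = len(zs)
cyc = sum(g.subs(t, zs[i])
          / sp.prod([zs[i] - zs[j] for j in range(n) if j != i])
          for i in range(n))
val = sp.cancel(cyc).subs({zi: 1 for zi in zs})
ref = sp.diff(g, t, n - 1).subs(t, 1) / sp.factorial(n - 1)
print(val, ref)   # both equal 10
\end{verbatim}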
\begin{lemma}
\label{lem:expansion}
For positive integers $k$, we have that
\[\mathcal{D}_{N,k} = \sum_{m=1}^k\stirling{k}{m}\sum_{\ell=0}^m\binom{m}{\ell}\ell!\sum_{\{i_0,\ldots,i_\ell\}\subseteq[N]}\sum_{\mathrm{cyc}} \frac{x_{i_0}^m\partial_{i_0}^{m-\ell}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})},\]
where $\stirling{k}{m}$ are Stirling numbers of the second kind.
\end{lemma}
\begin{proof}
Let $\Delta(x)=\prod_{i<j}(x_i-x_j)$ denote the Vandermonde determinant. We first compute $\partial_p^m\Delta(x)$. By the Leibniz rule, we have
\[\partial_p^m\Delta(x) = \sum_{\substack{(k_{ij})_{1\le i<j\le N} \\k_{i,j}\in\mathbb{Z}_{\ge 0} \\ \sum k_{i,j}=m}}\binom{m}{k_{1,2},\ldots,k_{N-1,N}}\prod_{i<j}\partial_p^{k_{i,j}}(x_i-x_j).\]
With some work, this reduces to
\[\partial_p^m\Delta(x) = m!\prod_{i<j}(x_i-x_j)\sum_{\substack{S\subseteq[N]\setminus\{p\}\\|S|=m}}\frac{1}{\displaystyle\prod_{i\in S}(x_p-x_i)}.\]
It is well known that
\[\sum_{i=1}^N (x_i \partial_i)^k = \sum_{i=1}^N\sum_{m=1}^k\stirling{k}{m}x_i^m\partial_i^m,\]
so in fact
\begin{align*}
\mathcal{D}_{N,k} &= \Delta(x)^{-1}\sum_{m=1}^k\stirling{k}{m}\sum_{i=1}^N x_i^m\sum_{\ell=0}^m\binom{m}{\ell}(\partial_i^\ell\Delta(x))\partial_i^{m-\ell} \\
&= \sum_{m=1}^k\stirling{k}{m}\sum_{\ell=0}^m\binom{m}{\ell}\ell!\sum_{i=1}^N x_i^m\sum_{\substack{S\subseteq[N]\setminus\{i\}\\|S|=\ell}}\left(\prod_{j\in S}\frac{1}{x_i-x_j}\right)\partial_i^{m-\ell}.
\end{align*}
But we have that
\[\sum_{i=1}^N x_i^m\sum_{\substack{S\subseteq[N]\setminus\{i\}\\|S|=\ell}}\left(\prod_{j\in S}\frac{1}{x_i-x_j}\right)\partial_i^{m-\ell} = \sum_{\{i_0,\ldots,i_\ell\}\subseteq[N]}\sum_{\mathrm{cyc}} \frac{x_{i_0}^m\partial_{i_0}^{m-\ell}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})},\]
which completes the proof.
\end{proof}
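The Stirling-number identity used in this proof is classical; in one variable it can be confirmed symbolically, as in the sketch below (with an arbitrary smooth test function).
\begin{verbatim}
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.Symbol('x')
f = sp.exp(2 * x)   # arbitrary smooth test function
k = 4
lhs = f
for _ in range(k):  # apply (x d/dx) four times
    lhs = x * sp.diff(lhs, x)
rhs = sum(stirling(k, m) * x**m * sp.diff(f, x, m) for m in range(1, k + 1))
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}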
\subsection{Proof of Theorem \ref{thm:momreg}}
Note that the proof presented here is very similar to that given in \cite{BuG1}*{Section 5.2}. It suffices to show
\begin{align}\label{eq:firstorder}
\lim_{N\to\infty}\mathbb{E}\left(\int_\mathbb{R} x^k m[\rho_N](dx)\right)=\sum_{\ell=0}^k\frac{1}{(\ell+1)!}\binom{k}{\ell}\left.\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}
\end{align}
and
\begin{align}\label{eq:secondorder}\lim_{N\to\infty}\mathbb{E}\left[\left(\int_\mathbb{R} x^k m[\rho_N](dx)\right)^2\right] = \lim_{N\to\infty}\left[\mathbb{E}\left(\int_\mathbb{R} x^k m[\rho_N](dx)\right)\right]^2.\end{align}
First, we will show (\ref{eq:firstorder}). By Theorem \ref{thm:opmom},
\begin{align}\label{eq:4}
\mathbb{E}\left(\int_\mathbb{R} x^k m[\rho_N](dx)\right) = \frac{1}{N^{k+1}}\lim_{x_i\to 1}\mathcal{D}_{N,k}S_{\rho_N}(x_1,\ldots,x_N).
\end{align}
Define $T_N(x_1,\ldots,x_N)$ by
\begin{align}\label{eq:almost_mult}
S_{\rho_N}(x_1,\ldots,x_N)=\exp\left(\sum_{i=1}^N NH(x_i)\right)T_N(x_1,\ldots,x_N),
\end{align}
where $T_N$ is analytic in some open neighborhood of $(1^N)$. By Definition \ref{def:LLNA}, $T_N(1^N)=1$ and, for any fixed $k$,
\[\lim_{N\to\infty}\frac{1}{N}\log T_N(x_1,\ldots,x_k,1^{N-k})=0\] uniformly in some open neighborhood of $(1^k)$. Differentiating the above result, we have
\begin{align}\label{eq:deriv_die}
\lim_{N\to\infty}\frac{1}{N}\frac{\partial_1^{a_1}\cdots\partial_k^{a_k} T_N(x_1,\ldots,x_k,1^{N-k})}{T_N(x_1,\ldots,x_k,1^{N-k})}=0
\end{align}
for any $(a_1,\ldots,a_k)\in\mathbb{Z}_{\ge 0}^k$. By Lemma \ref{lem:expansion}, $\mathcal{D}_{N,k}S_{\rho_N}$ is
\begin{align}\label{eq:7}
\mathcal{D}_{N,k}S_{\rho_N} = \sum_{m=1}^k\stirling{k}{m}\sum_{\ell=0}^m\binom{m}{\ell}\ell!\sum_{\{i_0,\ldots,i_\ell\}\subseteq[N]}\sum_{\mathrm{cyc}} \frac{x_{i_0}^m\partial_{i_0}^{m-\ell}S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})}.
\end{align}
We wish to take the limit $x_i\to 1$ of both sides. To do this, consider the following proposition.
\begin{proposition}
\label{prelimit}
The leading order term of
\[L:=\lim_{x_i\to 1}\sum_{\mathrm{cyc}} \frac{x_{i_0}^m\partial_{i_0}^{m-\ell}S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})}\]
is
\[\frac{1}{\ell!}N^{m-\ell}\left.\left(\frac{\partial}{\partial z}\right)^\ell(z^mH'(z)^{m-\ell})\right|_{z=1}.\]
\end{proposition}
\begin{proof}
Since all functions are symmetric, we may assume $\{i_0,\ldots,i_\ell\}=\{1,\ldots,\ell+1\}$. For $r\ge \ell+2$, we can set $x_r=1$, so
\[L=\lim_{x_1,\ldots,x_{\ell+1}\to 1}\sum_{\mathrm{cyc}}\frac{x_1^m\partial_1^{m-\ell}S_{\rho_N}(x_1,\ldots,x_{\ell+1},1^{N-\ell-1})}{(x_{1}-x_{2})\cdots(x_{1}-x_{\ell+1})}.\]
From (\ref{eq:almost_mult}) and (\ref{eq:deriv_die}), the leading order term of
\[\partial_1^{m-\ell}S_{\rho_N}(x_1,\ldots,x_{\ell+1},1^{N-\ell-1})\]
is $N^{m-\ell}H'(x_1)^{m-\ell}S_{\rho_N}(x_1,\ldots,x_{\ell+1},1^{N-\ell-1})$. Thus, the leading order term of $L$ is
\[N^{m-\ell}\lim_{x_1,\ldots,x_{\ell+1}\to 1}\sum_{\mathrm{cyc}}\frac{x_1^mH'(x_1)^{m-\ell}}{(x_{1}-x_{2})\cdots(x_{1}-x_{\ell+1})}S_{\rho_N}(x_1,\ldots,x_{\ell+1},1^{N-\ell-1}).\]
Applying Lemma \ref{lem:BG 5.5} and noting that $S_{\rho_N}(1^N)=1$ yields the desired result.
\end{proof}
Therefore, the leading order term of the right side of (\ref{eq:7}) under the limit $x_i\to 1$ is
\begin{align}\label{eq:setcount}
\left.\sum_{m=1}^k\stirling{k}{m}\sum_{\ell=0}^m\binom{m}{\ell}\binom{N}{\ell+1}N^{m-\ell}\left(\frac{\partial}{\partial z}\right)^\ell(z^mH'(z)^{m-\ell})\right|_{z=1}.
\end{align}
Since the order of the summand is $N^{\ell+1}N^{m-\ell}=N^{m+1}$, the only contribution in the limit comes from $m=k$, so the leading order term is
\[\left.N^{k+1}\sum_{\ell=0}^k\binom{k}{\ell}\frac{1}{(\ell+1)!}\left(\frac{\partial}{\partial z}\right)^\ell(z^kH'(z)^{k-\ell})\right|_{z=1}.\]
Thus, by (\ref{eq:4}),
\[\lim_{N\to\infty}\mathbb{E}\left(\int_\mathbb{R} x^k m[\rho_N](dx)\right)=\sum_{\ell=0}^k\frac{1}{(\ell+1)!}\binom{k}{\ell}\left.\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1},\]
as desired.
We will now show (\ref{eq:secondorder}). It suffices to show that the leading order terms of $(\mathcal{D}_{N,k}S_{\rho_N})^2$ and $(\mathcal{D}_{N,k})^2S_{\rho_N}$ match up when we take $x_i\to 1$. We see that
\begin{align*}
(\mathcal{D}_{N,k}S_{\rho_N})^2 =& \sum_{m=1}^k\sum_{m'=1}^k\stirling{k}{m}\stirling{k}{m'}\sum_{\ell=0}^m\sum_{\ell'=0}^{m'}\binom{m}{\ell}\ell!\binom{m'}{\ell'}\ell'!\\
&\sum_{\{i_0,\ldots,i_\ell\}\subseteq[N]}\sum_{\{i_0',\ldots,i_{\ell'}'\}\subseteq[N]}\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}S_{\rho_N})(x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}S_{\rho_N})}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}
\end{align*}
and
\begin{align*}
\numberthis \label{eq:Dnk2}(\mathcal{D}_{N,k})^2S_{\rho_N} =& \sum_{m=1}^k\sum_{m'=1}^k\stirling{k}{m}\stirling{k}{m'}\sum_{\ell=0}^m\sum_{\ell'=0}^{m'}\binom{m}{\ell}\ell!\binom{m'}{\ell'}\ell'!\\
&\sum_{\{i_0,\ldots,i_\ell\}\subseteq[N]}\sum_{\{i_0',\ldots,i_{\ell'}'\}\subseteq[N]}\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'})S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}.
\end{align*}
The only difference in the two is the numerator of the final summand. We have the following claim that is an analogue of Proposition \ref{prelimit}.
\begin{proposition}
\label{prop:equal_leading_terms}
The leading order terms of
\[\lim_{x_i\to 1}\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}S_{\rho_N})(x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}S_{\rho_N})}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}\]
and
\[\lim_{x_i\to 1}\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'})S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}\]
are identical.
\end{proposition}
\begin{proof}
As in the proof of Proposition \ref{prelimit}, set $x_r=1$ for all $r\not\in\{i_0,\ldots,i_\ell\}\cup\{i_0',\ldots,i_\ell'\}$. Then, using (\ref{eq:almost_mult}) and (\ref{eq:deriv_die}), the leading order term of $(x_{i_0}^m\partial_{i_0}^{m-\ell}S_{\rho_N})(x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}S_{\rho_N})$ is
\[N^{m-\ell+m'-\ell'}x_{i_0}^mx_{i_0'}^{m'}H'(x_{i_0})^{m-\ell}H'(x_{i_0'})^{m'-\ell'}S_{\rho_N}^2.\]
Now, the leading order term of $(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'})S_{\rho_N}$ is the same as the leading order term of
\[N^{m'-\ell'}x_{i_0}^m\partial_{i_0}^{m-\ell}\left(x_{i_0'}^{m'}H'(x_{i_0'})^{m'-\ell'}S_{\rho_N}\right).\]
When applying the product rule, each derivative falling on $S_{\rho_N}$ produces an additional factor of $N$, while derivatives falling on the other factors do not. Therefore, the leading order term of this is
\[N^{m-\ell+m'-\ell'}x_{i_0}^mx_{i_0'}^{m'}H'(x_{i_0})^{m-\ell}H'(x_{i_0'})^{m'-\ell'}S_{\rho_N}.\]
Taking $x_i\to 1$ and using Lemma \ref{lem:BG 5.5}, we will get the same thing since $S_{\rho_N}(1^N)^2=S_{\rho_N}(1^N)=1$. Thus, the leading order terms match as desired.
\end{proof}
This shows (\ref{eq:secondorder}), and the proof of Theorem \ref{thm:momreg} is complete.
\subsection{Proof of Theorem \ref{thm:momdiff}}
This proof is almost identical to the proof of Theorem \ref{thm:momreg}, except for a few key differences. We will highlight those differences, and the rest of the proof carries through in a similar way.
As in the previous proof, it suffices to show
\begin{align}\label{eq:firstorder_diff}
\lim_{N\to\infty}\mathbb{E}\left(\int_\mathbb{R} x^k d[\rho_N](dx)\right)=\sum_{\ell=0}^k\frac{1}{\ell!}\binom{k}{\ell}\left.\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}
\end{align}
and
\begin{align}\label{eq:secondorder_diff}\lim_{N\to\infty}\mathbb{E}\left[\left(\int_\mathbb{R} x^k d[\rho_N](dx)\right)^2\right] = \lim_{N\to\infty}\left(\mathbb{E}\left(\int_\mathbb{R} x^k d[\rho_N](dx)\right)\right)^2.\end{align}
First, we will show (\ref{eq:firstorder_diff}). From Theorem \ref{thm:opmom},
\begin{align}\label{eq:4_diff}
\mathbb{E}\left(\int_\mathbb{R} x^k d[\rho_N](dx)\right) = \frac{1}{N^k}\lim_{x_i\to 1}(\mathcal{D}_{N,k}-\mathcal{D}_{N-1,k})S_{\rho_N}(x_1,\ldots,x_N),
\end{align}
and from Lemma \ref{lem:expansion},
\[\mathcal{D}_{N,k}-\mathcal{D}_{N-1,k} = \sum_{m=1}^k\stirling{k}{m}\sum_{\ell=0}^m\binom{m}{\ell}\ell!\sum_{\substack{\{i_0,\ldots,i_\ell\}\subseteq[N]\\ N\in\{i_0,\ldots,i_\ell\}}}\sum_{\mathrm{cyc}} \frac{x_{i_0}^m\partial_{i_0}^{m-\ell}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})}.\]
Thus, the proof of (\ref{eq:firstorder_diff}) is the same as the proof of (\ref{eq:firstorder}), except for the step at (\ref{eq:setcount}): instead of $\binom{N}{\ell+1}$, we now have $\binom{N-1}{\ell}$. The loss of one order of $N$ here is compensated by the corresponding loss in (\ref{eq:4_diff}), while the factor of $\frac{1}{(\ell+1)!}$ in Theorem \ref{thm:momreg} that came from $\binom{N}{\ell+1}$ becomes $\frac{1}{\ell!}$. Thus, the moments of $d[\rho_N]$ converge to
\[\left.\sum_{\ell=0}^k\frac{1}{\ell!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1},\]
as desired.
The modification for (\ref{eq:secondorder_diff}) is slightly more complicated, since $\mathcal{D}_{N,k}^2$ in (\ref{eq:Dnk2}) is replaced with
\[\mathcal{D}_{N,k}^2-2\mathcal{D}_{N-1,k}\mathcal{D}_{N,k}+\mathcal{D}_{N-1,k}^2=(\mathcal{D}_{N,k}-\mathcal{D}_{N-1,k})^2+(\mathcal{D}_{N,k}\mathcal{D}_{N-1,k}-\mathcal{D}_{N-1,k}\mathcal{D}_{N,k})\]
instead of just $(\mathcal{D}_{N,k}-\mathcal{D}_{N-1,k})^2$. Ignoring the commutator term, we see that (\ref{eq:secondorder_diff}) holds by a proof similar to that of Theorem \ref{thm:momreg}, with
\[\frac{1}{N^{2k}}(\mathcal{D}_{N,k}-\mathcal{D}_{N-1,k})^2S_{\rho_N}=\frac{1}{N^{2k}}((\mathcal{D}_{N,k}-\mathcal{D}_{N-1,k})S_{\rho_N})^2\]
in the limit $N\to\infty$. So, it suffices to show that the commutator term does not contribute to the leading order. In other words, it suffices to show that
\begin{align}
\label{eq:lim_commutator}
\lim_{N\to\infty}\frac{1}{N^{2k}}\lim_{x_i\to 1}(\mathcal{D}_{N,k}\mathcal{D}_{N-1,k}-\mathcal{D}_{N-1,k}\mathcal{D}_{N,k})S_{\rho_N}(x_1,\ldots,x_N)=0.
\end{align}
We see that
\begin{align*}
\numberthis \label{eq:commutator_explicit}(\mathcal{D}_{N,k}\mathcal{D}_{N-1,k}-\mathcal{D}_{N-1,k}\mathcal{D}_{N,k})S_{\rho_N} &= \sum_{m=1}^k\sum_{m'=1}^k\stirling{k}{m}\stirling{k}{m'}\sum_{\ell=0}^m\sum_{\ell'=0}^{m'}\binom{m}{\ell}\ell!\binom{m'}{\ell'}\ell'!\\
&\left(\sum_{\{i_0,\ldots,i_\ell\}\subseteq[N]}\sum_{\{i_0',\ldots,i_{\ell'}'\}\subseteq[N-1]}-\sum_{\{i_0,\ldots,i_\ell\}\subseteq[N-1]}\sum_{\{i_0',\ldots,i_{\ell'}'\}\subseteq[N]}\right) \\
&\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'})S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}.
\end{align*}
Note that if $i_0\ne i_0'$, the operators $x_{i_0}^m\partial_{i_0}^{m-\ell}$ and $x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}$ commute, so we may restrict our attention to sets $\{i_0,\ldots,i_\ell\}$ and $\{i_0',\ldots,i_{\ell'}'\}$ having nonempty intersection. Furthermore, if both sets $\{i_0,\ldots,i_\ell\}$ and $\{i_0',\ldots,i_{\ell'}'\}$ are contained in $[N-1]$, the two summations cancel. Thus, we may rewrite (\ref{eq:commutator_explicit}) as
\begin{align*}
(\mathcal{D}_{N,k}\mathcal{D}_{N-1,k}-\mathcal{D}_{N-1,k} & \mathcal{D}_{N,k})S_{\rho_N} = \sum_{m=1}^k\sum_{m'=1}^k\stirling{k}{m}\stirling{k}{m'}\sum_{\ell=0}^m\sum_{\ell'=0}^{m'}\binom{m}{\ell}\ell!\binom{m'}{\ell'}\ell'!\\
&\sum_{\substack{\{i_0,\ldots,i_\ell\}\subseteq[N] \\ N\in \{i_0,\ldots,i_\ell\} \\ \{i_0',\ldots,i_{\ell'}'\}\subseteq[N-1] \\ \{i_0,\ldots,i_\ell\}\cap\{i_0',\ldots,i_{\ell'}'\}\ne\emptyset}}\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}-x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}x_{i_0}^m\partial_{i_0}^{m-\ell})S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}.
\end{align*}
We already noted that $(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}-x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}x_{i_0}^m\partial_{i_0}^{m-\ell})S_{\rho_N}=0$ if $i_0\ne i_0'$, but even if $i_0=i_0'$, its order is bounded above by $N^{m+m'-\ell-\ell'-1}$ using essentially the same argument as in the proof of Proposition \ref{prop:equal_leading_terms}. The number of pairs of sets satisfying the four conditions in the summation is on the order of $N^{-1}\cdot N^{\ell}\cdot N^{\ell'+1}=N^{\ell+\ell'}$, so the order of
\[\sum_{\substack{\{i_0,\ldots,i_\ell\}\subseteq[N] \\ N\in \{i_0,\ldots,i_\ell\} \\ \{i_0',\ldots,i_{\ell'}'\}\subseteq[N-1] \\ \{i_0,\ldots,i_\ell\}\cap\{i_0',\ldots,i_{\ell'}'\}\ne\emptyset}}\sum_{\mathrm{cyc}}\sum_{\mathrm{cyc}} \frac{(x_{i_0}^m\partial_{i_0}^{m-\ell}x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}-x_{i_0'}^{m'}\partial_{i_0'}^{m'-\ell'}x_{i_0}^m\partial_{i_0}^{m-\ell})S_{\rho_N}}{(x_{i_0}-x_{i_1})\cdots(x_{i_0}-x_{i_\ell})(x_{i_0'}-x_{i_1'})\cdots(x_{i_0'}-x_{i_{\ell'}'})}\]
is at most $N^{m+m'-1}$, which is at most $N^{2k-1}$. The sum
\[\sum_{m=1}^k\sum_{m'=1}^k\stirling{k}{m}\stirling{k}{m'}\sum_{\ell=0}^m\sum_{\ell'=0}^{m'}\binom{m}{\ell}\ell!\binom{m'}{\ell'}\ell'!\]
is fixed with respect to $N$, so the order of
\[(\mathcal{D}_{N,k}\mathcal{D}_{N-1,k}-\mathcal{D}_{N-1,k}\mathcal{D}_{N,k})S_{\rho_N}\]
is at most $N^{2k-1}$. This proves (\ref{eq:lim_commutator}), completing the proof of Theorem \ref{thm:momdiff}.
\section{Proof of Theorem \ref{thm:DMK}}
\label{sec:theproof}
Assume the hypotheses of \Cref{thm:DMK}, and as before, let $H(x)=H_\rho(x)$. For convenience of notation, define
\[m_k:=\int_\mathbb{R} x^k\mathbf{m}(dx)=\left.\sum_{\ell=0}^k\frac{1}{(\ell+1)!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}\]
and
\[d_k := \int_\mathbb{R} x^k\mathbf{d}(dx)=\left.\sum_{\ell=0}^k\frac{1}{\ell!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}. \]
It turns out these can be expressed as contour integrals, which will allow for more convenient computations in the proof of Theorem \ref{thm:DMK}.
\begin{proposition}
\label{prop: mcounter}
The moment $m_k$ can be expressed as the contour integral
\[m_k=\frac{1}{k+1}\oint_1\frac{1}{w}\left(wH'(w)+\frac{w}{w-1}\right)^{k+1}\frac{dw}{2\pi\i},\]
where the contour is counterclockwise around $1$. Similarly, the moment $d_k$ can be expressed as the contour integral
\[d_k = \oint_1\frac{1}{w-1}\left(wH'(w)+\frac{w}{w-1}\right)^k\frac{dw}{2\pi\i},\]
where the contour is counterclockwise around $1$.
\end{proposition}
\begin{proof}
Expanding with the binomial theorem yields
\begin{align*}
\frac{1}{k+1}\oint_1\frac{1}{w}\left(wH'(w)+\frac{w}{w-1}\right)^{k+1}\frac{dw}{2\pi\i}
= & \frac{1}{k+1}\left(\oint_1w^kH'(w)^{k+1} +\sum_{\ell=0}^k\dbinom{k+1}{\ell+1}\dfrac{w^{k}H'(w)^{k-\ell}}{(w-1)^{\ell+1}}\frac{dw}{2\pi\i}\right)
\\ = & \sum_{\ell = 0}^k\frac{1}{\ell+1}\dbinom{k}{\ell}\oint_1 \dfrac{w^kH'(w)^{k-\ell}}{(w-1)^{\ell+1}}\frac{dw}{2\pi\i}.
\end{align*}
Here the term $w^kH'(w)^{k+1}$ was dropped because it is holomorphic at $w=1$ and contributes no residue. By Cauchy's Differentiation Formula,
\[\oint_1 \dfrac{w^kH'(w)^{k-\ell}}{(w-1)^{\ell+1}} \frac{dw}{2\pi\i} = \dfrac{\left.\left(\dfrac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1}}{\ell!},\]
so
\begin{align*}
\frac{1}{k+1}\oint_1\frac{1}{w}\left(wH'(w)+\frac{w}{w-1}\right)^{k+1}\frac{dw}{2\pi\i} & =\left.\sum_{\ell=0}^k\frac{1}{(\ell+1)!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1},
\end{align*}
which is $m_k$ by \Cref{thm:momreg}.
Similarly,
\begin{align*}
\oint_1\frac{1}{w-1}\left(wH'(w)+\frac{w}{w-1}\right)^k\frac{dw}{2\pi\i} & = \oint_1\sum_{\ell = 0}^k\dbinom{k}{\ell} \dfrac{w^kH'(w)^{k-\ell}}{(w-1)^{\ell+1}}\frac{dw}{2\pi\i} \\
& = \sum_{\ell = 0}^k\dbinom{k}{\ell}\oint_1 \dfrac{w^kH'(w)^{k-\ell}}{(w-1)^{\ell+1}}\frac{dw}{2\pi\i} \\
& = \left.\sum_{\ell=0}^k\frac{1}{\ell!}\binom{k}{\ell}\left(\frac{\partial}{\partial z}\right)^\ell\left(z^k H'(z)^{k-\ell}\right)\right|_{z=1},
\end{align*}
which is $d_k$ by \Cref{thm:momdiff}.
\end{proof}
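Since the integrands above are rational whenever $H'$ is polynomial, the proposition can also be sanity-checked by a residue computation; the \texttt{sympy} sketch below does this, again under the illustrative assumption $H'(w)=w$.
\begin{verbatim}
import sympy as sp

w, z = sp.symbols('w z')
Hp = w    # illustrative assumption: H'(w) = w
k = 3
mk_res = sp.residue((w * Hp + w / (w - 1)) ** (k + 1) / ((k + 1) * w), w, 1)
dk_res = sp.residue((w * Hp + w / (w - 1)) ** k / (w - 1), w, 1)
mk = sum(sp.binomial(k, l) / sp.factorial(l + 1)
         * sp.diff(z**k * z**(k - l), z, l).subs(z, 1) for l in range(k + 1))
dk = sum(sp.binomial(k, l) / sp.factorial(l)
         * sp.diff(z**k * z**(k - l), z, l).subs(z, 1) for l in range(k + 1))
print(sp.simplify(mk_res - mk), sp.simplify(dk_res - dk))   # 0 0
\end{verbatim}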
To simplify our expressions, define $F(z)$ and $G(z)$ as
\[F(z)=\sum_{k=0}^\infty z^k\int_\mathbb{R}x^k \mathbf{m}(dx)=
\sum_{k=0}^\infty m_kz^k\]
and
\[G(z)=\sum_{k=1}^\infty\frac{z^k}{k}\int_\mathbb{R}x^k \mathbf{d}(dx)=
\sum_{k=1}^\infty d_k\frac{z^k}{k},\]
respectively, for the proof of Theorem \ref{thm:DMK}. We have the following lemma that allows us to work with the contour integrals.
\begin{lemma}
\label{lem:inverse}
Let
\[y(w)=\frac{1}{wH'(w)+\frac{w}{w-1}}.\]
This function is locally invertible in a neighborhood of $1$ with inverse
\[w=e^{yF(y)}.\]
\end{lemma}
\begin{proof}
In a neighborhood of $1$, the function $y(w)$ behaves like $\frac{w-1}{w}$ since $H$ is holomorphic, so $y'(1)=1$, and therefore, the inverse exists by the inverse function theorem. Furthermore, a counterclockwise contour of $w$ around $1$ is equivalent to a counterclockwise contour of $y$ around $0$. Thus, there is some holomorphic function $A(y)$ such that
\[\log w=A(y)\]
for $y$ in some neighborhood of $0$. Then, $\frac{dw}{w}=A'(y)dy$, so
\[m_k=\frac{1}{k+1}\oint_0\frac{1}{y^{k+1}}\frac{A'(y)dy}{2\pi\i},\]
which implies
\[m_k=\frac{1}{(k+1)!}\left[\left(\frac{\partial}{\partial y}\right)^kA'(y)\right]_{y=0}.\]
Since $\displaystyle\sum_{k=0}^\infty m_kz^k=F(z)$, we have
\[A(z)=zF(z),\]
as desired.
\end{proof}
\begin{rmk}
This lemma is essentially the same as combining equations (4.5) and (2.5) from \cite{BuG1}. We opt to prove the lemma directly here since our setup differs slightly from \cite{BuG1}, and our proof avoids mentioning $R$-transforms.
\end{rmk}
Now, we are ready to prove Theorem \ref{thm:DMK}.
\begin{proof}[Proof of Theorem \ref{thm:DMK}]
Using the notation and statement of Lemma \ref{lem:inverse}, we have
\[d_k=\oint_0\frac{e^{yF(y)}}{e^{yF(y)}-1}y^{-k}[F(y)+yF'(y)]\frac{dy}{2\pi\i}=\oint_0y^{-(k+1)}\frac{ye^{yF(y)}}{e^{yF(y)}-1}[F(y)+yF'(y)]\frac{dy}{2\pi\i}.\]
Applying Cauchy's Differentiation Formula yields
\[ d_k=\left.\frac{1}{k!}\left(\frac{\partial}{\partial y}\right)^k\left[\frac{ye^{yF(y)}}{e^{yF(y)}-1}[F(y)+yF'(y)]\right]\right|_{y=0}. \]
Thus,
\[1+\sum_{k=1}^{\infty}d_kz^k=\frac{ze^{zF(z)}}{e^{zF(z)}-1}[F(z)+zF'(z)],\]
so
\[1+zG'(z)=\frac{ze^{zF(z)}}{e^{zF(z)}-1}[F(z)+zF'(z)].\]
Integrating both sides and matching constant terms (both $\frac{1}{z}\left(e^{zF(z)}-1\right)$ and $e^{G(z)}$ tend to $1$ as $z\to 0$, since $m_0=1$ and $G(0)=0$), we conclude that $e^{G(z)}=\frac{1}{z}\left(e^{zF(z)}-1\right)$, completing the proof of Theorem \ref{thm:DMK}.
\end{proof}
\section{Quantized Markov-Krein Correspondence} \label{sec:bijection_proof}
In this section, we prove \Cref{thm:BQMK} and heuristically show that \Cref{thm:BMK} can be recovered from a semiclassical limit of \Cref{thm:BQMK}.
\subsection{Proof of Theorem \ref{thm:BQMK}}
We will need the following theorem for our proof.
\begin{theorem}[\cite{KREIN}*{Theorem A.6}]
\label{thm:A6}
Let $\mathcal{R}[a,b]$ be the class of complex functions $F(z)$ that satisfy the following conditions.
\begin{itemize}
\item $F(z)$ is holomorphic for $\Im z>0$.
\item $\Im F(z)\ge 0$ for $\Im z>0$.
\item $F(z)$ is holomorphic and positive in the interval $(-\infty,a)$ and holomorphic and negative in the interval $(b,\infty)$.
\end{itemize}
Then, a function $F(z)$ is in the class $\mathcal{R}[a,b]$ if and only if there is a nonnegative measure $\psi$ on $[a,b]$ such that
\[F(z)=\int_a^b\frac{d\psi(t)}{t-z}.\]
Furthermore, this representation is unique.
\end{theorem}
The proof of Theorem \ref{thm:BMK} can be found in \cite{KEROV}. We present the proof of Theorem \ref{thm:BQMK}, which is similar and reduces to the following simpler statement.
\begin{theorem}
\label{thm:BQMK1}
There is a bijective correspondence between $\widetilde{\mathcal{M}}[a,b]$ and $\{\mu\in\mathcal{M}[a,b]:R_\mu(u)>-1\text{ for }u<a\}$ where $\psi\leftrightarrow\mu$ if and only if
\[R_\psi(u)=R_\mu(u).\]
\end{theorem}
Note that Theorems \ref{thm:BMK} and \ref{thm:BQMK1} immediately imply Theorem \ref{thm:BQMK}.
\subsubsection{Proof of $\psi\to\mu$}
Suppose we are given $\psi\in\widetilde{\mathcal{M}}[a,b]$. Writing $u=\xi+\i\eta$, note that for $\eta>0$,
\[0\le\Im\int_a^b\frac{d\psi(t)}{t-u}<\int_{-\infty}^{\infty}\frac{\eta\, dt}{(t-\xi)^2+\eta^2}=\pi,\]
since the density of $\psi$ satisfies $0\le\psi'(t)\le 1$. To show that we get a unique $\mu$ satisfying $R_\mu(u)=R_\psi(u)$, it suffices to show that
\[F(u):=1-\exp\left(-G(u)\right)\]
satisfies the conditions of Theorem \ref{thm:A6}, where
\[G(u):=\int_a^b\frac{d\psi(t)}{t-u}\]
is a function satisfying the conditions of Theorem \ref{thm:A6}, along with the restriction
\[0\le \Im G(u)<\pi\]
for $\Im u\ge 0$. Indeed, this is not hard to verify. Now,
\[R_\mu(u)=\exp\left(-G(u)\right)-1>-1,\]
so the proof of this direction is complete.
\subsubsection{Proof of $\mu\to\psi$}
Suppose we are given $\mu\in\{\mu\in\mathcal{M}[a,b]:R_\mu(u)>-1\text{ for }u<a\}$. Let
\[-R_\mu(u)=F(u)=\int_a^b\frac{d\mu(t)}{t-u}.\]
Define
\[G(u)=-\log(1-F(u)),\]
which exists since $F(u)<1$ whenever $F(u)$ is real. The conditions of Theorem \ref{thm:A6} are satisfied, so there exists a unique measure $\psi\in\mathcal{M}[a,b]$ such that
\[G(u)=\int_a^b\frac{d\psi(t)}{t-u}.\]
Since
\[0\le \Im G(u)<\pi,\]
we have $\psi'(t)\le 1$ (by the Cauchy--Stieltjes inversion formula), so $\psi\in\widetilde{\mathcal{M}}[a,b]$, as desired. This completes the proof of Theorem \ref{thm:BQMK1}, and thus of Theorem \ref{thm:BQMK}.
\subsection{Semiclassical Limit}
\label{ssec:semiclassical}
In fact, the Markov-Krein correspondence (\Cref{thm:BMK}) can be obtained with a semiclassical limit of the quantized Markov-Krein correspondence (\Cref{thm:BQMK}).
Specifically, we heuristically derive the correspondence between measures $\mu\in \mathcal{M}[a,b]$ and diagrams $w\in\mathcal{D}[a,b]$ as a limit; the bijectivity of the limiting correspondence is addressed at the end of this subsection. To proceed, let a sequence of probability measures $\psi_\varepsilon$ satisfy
\[\psi_\varepsilon\in \widetilde{\mathcal{M}}[a/\varepsilon, b/\varepsilon]\]
for all $\varepsilon > 0$. Define the sequence of probability measures $\hat{\mu}_\varepsilon$ by $d\hat{\mu}_\varepsilon(t) = d\psi_\varepsilon(t/\varepsilon)$, so that $\hat{\mu}_\varepsilon\in\mathcal{M}[a, b]$ has density bounded by $1/\varepsilon$. Define $w_\varepsilon\in\widetilde{\mathcal{D}}[a/\varepsilon, b/\varepsilon]$ to be the diagram corresponding to $\psi_\varepsilon$ under \Cref{thm:BQMK}, and define $\hat{w}_\varepsilon\in\mathcal{D}[a,b]$ by $d\hat{w}_\varepsilon(t) = d w_\varepsilon(t/\varepsilon)$.
In the limit $\varepsilon\rightarrow 0$, construct the $\psi_\varepsilon$ so that the $\hat{\mu}_\varepsilon$ converge to $\mu$, and observe that the $\hat{w}_\varepsilon$ then converge to some $w$. Alternatively, we could have started by constructing the $w_\varepsilon$ so that the $\hat{w}_\varepsilon$ converge to some $w$, and then determined the $\psi_\varepsilon$ from the $w_\varepsilon$. Now, each $\psi_\varepsilon$ satisfies the quantized Markov-Krein correspondence, that is,
\begin{equation}
\label{eq:QMK_eps1}
\frac{1}{z}\left(-1+\exp\left(z\sum_{k=0}^\infty \psi_{\varepsilon, k} z^k\right)\right)=\exp\left(\sum_{k=1}^\infty\frac{p_k(w_\varepsilon)}{k}z^k\right).
\end{equation}
Replacing $z$ by $\varepsilon z$ in (\ref{eq:QMK_eps1}), we obtain
\begin{equation}
\label{eq:QMK_eps2}
\frac{1}{\varepsilon z}\left(-1+\exp\left(\varepsilon z\sum_{k=0}^\infty \psi_{\varepsilon, k} (\varepsilon z)^k\right)\right)=\exp\left(\sum_{k=1}^\infty\frac{p_k(w_{\varepsilon})}{k}(\varepsilon z)^k\right).
\end{equation}
Next, to simplify the expression, define $F_{\varepsilon}(z)$ as
\[F_{\varepsilon}(z)= \sum_{k=0}^\infty \psi_{\varepsilon, k} (\varepsilon z)^k.\]
Also, note that $\varepsilon^k\psi_{\varepsilon, k}=\hat{\mu}_{\varepsilon, k}$ and $\varepsilon^k p_k(w_{\varepsilon})=p_k(\hat{w}_{\varepsilon}).$
The first implies that
\[ F_\varepsilon(z)=\sum_{k=0}^\infty\psi_{\varepsilon, k} \left(\varepsilon z\right)^k = \sum_{k=0}^\infty\hat{\mu}_{\varepsilon, k} z^k \to \sum_{k=0}^\infty\mu_k z^k,\]
and the second implies that
\[\sum_{k=1}^\infty\frac{p_k(w_{\varepsilon})}{k}(\varepsilon z)^k= \sum_{k=1}^\infty\frac{p_k(\hat{w}_{\varepsilon})}{k} z^k \to \sum_{k=1}^\infty\frac{p_k(w)}{k} z^k,\]
under the limit $\varepsilon\rightarrow 0$.
Now, expanding the exponential in a Taylor series, we obtain
\[\frac{1}{\varepsilon z}\left(-1+\exp(\varepsilon zF_{\varepsilon}(z))\right) = F_{\varepsilon}(z)+\sum_{k=2}^\infty \frac{(\varepsilon z)^{k-1}F_{\varepsilon}(z)^k}{k!},\]
and, since $F_\varepsilon(z)$ converges to a function holomorphic near the origin, the terms carrying positive powers of $\varepsilon$ vanish as $\varepsilon\rightarrow 0$. Therefore, (\ref{eq:QMK_eps2}) becomes, under the limit,
\[F_\varepsilon(z)=\exp\left(\sum_{k=1}^\infty\frac{p_k(w_{\varepsilon})}{k}(\varepsilon z)^k\right)=\exp\left(\sum_{k=1}^\infty\frac{p_k(\hat{w}_{\varepsilon})}{k} z^k\right),\]
or
\[
\sum_{k=0}^\infty\mu_k z^k= \exp\left( \sum_{k=1}^\infty\frac{p_k(w)}{k}z^k\right),
\]
which is the Markov-Krein correspondence.
To address bijectivity, assume that there exist two diagrams $w_1, w_2\in\mathcal{D}[a,b]$ satisfying the Markov-Krein correspondence with the same $\mu$. Then $p_k(w_1)=p_k(w_2)$ for all positive integers $k$. But since $w_1''$ and $w_2''$ are supported on $[a,b]$, \[\sum_{k=1}^\infty\frac{p_k(w)}{k}z^k\]
has a positive radius of convergence for $w=w_1$ and $w=w_2$; as $w_1''$ and $w_2''$ are compactly supported with equal moments, $w_1''=w_2''$, and hence $w_1=w_2$. From this, there is a unique diagram $w$ satisfying the Markov-Krein correspondence for each $\mu$. By constructing the $w_\varepsilon$ first, the other direction can be shown to hold as well.
| {
"timestamp": "2021-07-19T02:03:53",
"yymm": "2011",
"arxiv_id": "2011.10724",
"language": "en",
"url": "https://arxiv.org/abs/2011.10724",
"abstract": "We study a family of measures originating from the signatures of the irreducible components of representations of the unitary group, as the size of the group goes to infinity. Given a random signature $\\lambda$ of length $N$ with counting measure $\\mathbf{m}$, we obtain a random signature $\\mu$ of length $N-1$ through projection onto a unitary group of lower dimension. The signature $\\mu$ interlaces with the signature $\\lambda$, and we record the data of $\\mu,\\lambda$ in a random rectangular Young diagram $w$. We show that under a certain set of conditions on $\\lambda$, both $\\mathbf{m}$ and $w$ converge as $N\\to\\infty$. We provide an explicit moment generating function relationship between the limiting objects. We further show that the moment generating function relationship induces a bijection between bounded measures and certain continual Young diagrams, which can be viewed as a quantized analogue of the Markov-Krein correspondence.",
"subjects": "Probability (math.PR); Mathematical Physics (math-ph); Combinatorics (math.CO); Representation Theory (math.RT); Spectral Theory (math.SP)",
"title": "A Quantized Analogue of the Markov-Krein Correspondence",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717448632122,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.708944929673145
} |
https://arxiv.org/abs/2205.00670 | Parameter estimation for reflected OU processes | In this paper, we investigate the parameter estimation problem for reflected OU processes. Both the estimates based on continuously observed processes and discretely observed processes are considered. The explicit formulas for the estimators are derived using the least squares method. Under some regular conditions, we obtain the consistency and establish the asymptotic normality for the estimators. Numerical results show that the proposed estimators perform well with moderate sample sizes. | \section{Introduction}
Consider a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq 0}, \mathbb{P})$ where the filtration $\{\mathcal{F}_{t}\}_{t\geq 0}$ satisfies the usual conditions. Let $W=\{W_{t}\}_{t\geq 0}$ be a standard Brownian motion adapted to $\{\mathcal{F}_{t}\}_{t\geq 0}$. The reflected Ornstein–Uhlenbeck (OU) process reflected at $0$ is described by the following stochastic differential equation (SDE)
\begin{equation}\label{eq1}
\left\{
\begin{aligned}
&\mathrm{d}X_{t}=-\theta X_{t}\mathrm{d}t+\sigma\mathrm{d}W_{t}+\mathrm{d}L_{t},\\
&X_{t}\geq 0 \quad\text{for all}\quad t\geq0,\\
&X_{0}=x,
\end{aligned}
\right.
\end{equation}
where $\theta\in (0,\infty)$ is the unknown parameter, $\sigma\in (0,\infty)$ is a constant and $L=\{L_{t}\}_{t\geq 0}$ is the minimal continuous increasing process which ensures that $X_{t}\geq 0$ for all $t\geq 0$. The process $L$ increases only when $X$ hits the boundary $0$, so that
\begin{equation*}
\int_{[0, \infty)} I(X_{t}> 0) \mathrm{d} L_{t}=0,
\end{equation*}
where $I(\cdot)$ is the indicator function.
The reflected OU process behaves like a standard OU process in the interior of its domain $(0, \infty)$. Benefiting from its reflecting barrier, the reflected OU process has been widely used in many areas such as the queueing system \citep{Ward2005}, financial engineering \citep{Bo2010} and mathematical biology \citep{Ricciardi1987}. The reflecting barrier is assumed to be $0$ for the physical restriction of the state processes such as queue-length, stock prices and interest rates, which take non-negative values. For more details on reflected OU processes and their broad applications, one can refer to \cite{Harrison1985} and \cite{Whitt2002}.
The parameter estimation problem in the reflected OU process has gained much attention in recent years due to its increased applications in broad fields. It is necessary that the parameters which characterize the reflected OU process should be estimated via the data in many real-world applications.
As far as we know, the maximum likelihood estimator (MLE) for the drift parameter $\theta$ is studied in \cite{Bo2011}. They obtain the strong consistency and asymptotic normality of their estimator, but they do not derive an explicit form for the asymptotic variance. The sequential MLE based on processes observed continuously over a random time interval $[0,\tau]$, where $\tau$ is a stopping time, is studied in \cite{Lee2012}. The main tool used in both papers is Girsanov's theorem for reflected Brownian motion.
On the other hand, an ergodic-type estimator for $\theta$ based on discrete observations is studied in \cite{Hu2015}. Recently, moment estimators for all parameters $(\theta,\sigma)$ based on the ergodic theorem were studied in \cite{Hu2021}. However, there is only limited literature on least squares estimators (LSEs) for the drift parameter of a reflected OU process.
In this paper, we propose two types of LSEs for the drift parameter $\theta$ based on continuously observed processes and discretely observed processes respectively.
The continuous-type LSE is motivated by aiming to minimize
\begin{equation*}
\int_{0}^{T}\left|\dot{X}_{t}+\theta X_{t}-\dot{L}_{t}\right|^{2} \mathrm{d}t.
\end{equation*}
This is a quadratic function of $\theta$; although the derivatives $\dot{X}_{t}$ and $\dot{L}_{t}$ do not exist in the usual sense, the minimizer below is still well defined. The minimum is achieved when
\begin{equation*}
\hat{\theta}_{T}=-\frac{\int_{0}^{T} X_{t}\mathrm{d} X_{t}-\int_{0}^{T}X_{t}\mathrm{d}L_{t}}{\int_{0}^{T} X_{t}^{2} \mathrm{d} t}.
\end{equation*}
Assume that $h\rightarrow0$ and $nh\rightarrow\infty$ as $n\rightarrow\infty$. When the process is observed at the discrete time instants $\{t_{k}=kh, k=0,1,\cdots,n\}$, the discrete-type LSE is motivated by minimizing the following contrast function
\begin{equation*}
\sum_{k=0}^{n-1}|X_{t_{k+1}}-X_{t_{k}}+\theta X_{t_{k}}h-\vartriangle_{k}L|^{2},
\end{equation*}
where $\vartriangle_{k}L=L_{t_{k+1}}-L_{t_{k}}$. The minimum is achieved when
\begin{equation*}
\tilde{\theta}_{n}=-\frac{\sum_{k=0}^{n-1}X_{t_{k}}(X_{t_{k+1}}-X_{t_{k}}-\vartriangle_{k}L)}{\sum_{k=0}^{n-1}X_{t_{k}}^{2}h}.
\end{equation*}
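As an illustration, the following minimal Python sketch (the parameter values and the Euler scheme with Skorokhod reflection are our own choices for this demonstration) simulates a reflected OU path and evaluates $\tilde{\theta}_{n}$. Since the simulation applies the reflection map step by step, the increments $\vartriangle_{k}L$ are available exactly, matching the assumption that $L$ is observed along with $X$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.0      # true values, chosen only for this demo
h, n = 0.01, 200_000
X = np.empty(n + 1); X[0] = 1.0
dL = np.empty(n)
for k in range(n):
    # Euler step of the unreflected dynamics, then minimal push to stay >= 0:
    y = X[k] - theta * X[k] * h + sigma * np.sqrt(h) * rng.standard_normal()
    dL[k] = max(0.0, -y)
    X[k + 1] = y + dL[k]

theta_hat = -np.sum(X[:-1] * (np.diff(X) - dL)) / (np.sum(X[:-1] ** 2) * h)
print(theta_hat)             # close to theta = 2.0 for this sample size
\end{verbatim}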
The remainder of this paper is organized as follows. In Section \ref{sec2}, we describe some preliminary results related to our context. Section \ref{sec3} is devoted to obtaining the asymptotic behavior of the two estimators. Section \ref{sec4} presents some numerical results and Section \ref{sec5} concludes.
\section{Preliminaries}\label{sec2}
In this section, we first introduce some basic facts. Throughout this paper, we shall use the notation ``$\stackrel{P}{\longrightarrow}$" to denote ``convergence in probability" and the notation ``$\sim$" to denote ``convergence in distribution".
By previous results \citep{Hu2015,Linetsky2005,Ward2003}, the unique invariant density of $\{X_{t}\}_{t\geq 0}$ is
\begin{equation}\label{eq2}
p(x)=2 \sqrt{\frac{2 \theta}{\sigma^{2}}} \phi\left(\sqrt{\frac{2 \theta}{\sigma^{2}}} x\right), \quad x \in[0, \infty),
\end{equation}
where $\phi(u)=(2 \pi)^{-1 / 2} e^{-\frac{u^{2}}{2}}$ is the (standard) Gaussian density function.
Based on the basic stability theories of Markov processes, we have the following ergodic lemma.
\begin{lem}\label{lem1}
For any $x \in \mathbb{R}_{+}$ and any $f \in L_{1}(\mathbb{R}_{+}, \mathcal{B}(\mathbb{R}_{+}))$, we have
\begin{enumerate}[a.]
\item The continuously observed process $\{X_{t}\}_{t\geq0}$ is ergodic:
\begin{equation*}
\lim _{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T}f(X_{t})\mathrm{d}t=\mathbb{E}[f(X_{\infty})]=\int_{0}^{\infty} f(x) p(x) d x.
\end{equation*}
\item The discretely observed process $\{X_{t_{k}}, k=0,1,\cdots,n\}$ is ergodic:
\begin{equation*}
\lim _{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} f(X_{t_{k}})=\mathbb{E}[f(X_{\infty})]=\int_{0}^{\infty} f(x) p(x) d x.
\end{equation*}
\end{enumerate}
\end{lem}
\noindent\textbf{Proof of Lemma \ref{lem1}}. One can see \cite{Han2016} and \cite{Hu2015} for a proof.
\hfill$\square$
By Lemma \ref{lem1} and the unique invariant density in Eq. (\ref{eq2}), we obtain the following second-moment formula:
\begin{equation}\label{eq3}
\lim_{T \rightarrow \infty}\frac{1}{T}\int_{0}^{T}X^{2}_{t}\mathrm{d}t=\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{k=1}^{n}X_{t_{k}}^{2}=\mathbb{E}|X_{\infty}|^{2}=\int_{0}^{\infty} x^{2} p(x) \mathrm{d} x=\frac{\sigma^{2}}{2\theta}.
\end{equation}
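Both the normalization of $p$ and the second moment above can be verified symbolically; a short \texttt{sympy} check (the density is rewritten in the equivalent half-normal form):
\begin{verbatim}
import sympy as sp

x, theta, sigma = sp.symbols('x theta sigma', positive=True)
# Invariant density (eq2): a half-normal law with scale sigma / sqrt(2 theta).
p = (2 * sp.sqrt(2 * theta) / (sigma * sp.sqrt(2 * sp.pi))
     * sp.exp(-theta * x**2 / sigma**2))
print(sp.simplify(sp.integrate(p, (x, 0, sp.oo))))          # 1
print(sp.simplify(sp.integrate(x**2 * p, (x, 0, sp.oo))))   # sigma**2/(2*theta)
\end{verbatim}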
\section{Asymptotic behavior of the least squares estimators}\label{sec3}
In this section, we consider the asymptotic behavior of the LSEs for the drift parameter $\theta$. By Eq. (\ref{eq1}), we derive two useful alternative expressions for $\hat{\theta}_{T}$ and $\tilde{\theta}_{n}$:
\begin{equation}\label{eq4}
\hat{\theta}_{T}=\theta-\sigma \frac{\int_{0}^{T} X_{t} \mathrm{d} W_{t}}{\int_{0}^{T} X_{t}^{2} \mathrm{d} t},
\end{equation}
and
\begin{equation}\label{eq5}
\tilde{\theta}_{n}=\theta+\frac{\sum_{k=0}^{n-1}X_{t_{k}}\left(\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\right)}{\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.
\end{equation}
The following theorem proves the consistency of the continuous-type LSE.
\begin{thm}\label{thm1}
The continuous-type LSE $\hat{\theta}_{T}$ of $\theta$ is consistent, i.e.,
\begin{equation*}
\hat{\theta}_{T}\stackrel{P}{\longrightarrow} \theta,
\end{equation*}
as $T$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm1}}. From the alternative expression Eq. (\ref{eq4}), we have
\begin{equation*}
\hat{\theta}_{T}-\theta=-\sigma \frac{\frac{1}{T}\int_{0}^{T} X_{t} \mathrm{d} W_{t}}{\frac{1}{T}\int_{0}^{T} X_{t}^{2} \mathrm{d} t}.
\end{equation*}
By Lemma \ref{lem1} and Eq. (\ref{eq3}), we have
\begin{equation}\label{eq6}
\lim_{T \rightarrow \infty}\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t= \frac{\sigma^{2}}{2\theta}.
\end{equation}
Note that the process $\{\int_{0}^{t}X_{s}\mathrm{d}W_{s},\ t\geq 0\}$ is a martingale with quadratic variation $\int_{0}^{t}X_{s}^{2}\mathrm{d}s$. Then
\begin{equation*}
\mathbb{E}\bigg[\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\bigg]=0,
\end{equation*}
and
\begin{equation*}
\mathbb{E}\bigg[\Big(\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\Big)^{2}\bigg]=\frac{1}{T^{2}}\,\mathbb{E}\int_{0}^{T}X_{t}^{2}\,\mathrm{d}t=O(T^{-1}).
\end{equation*}
By Chebyshev's inequality, we have
\begin{equation}\label{eq7}
\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\stackrel{P}{\longrightarrow}0, \quad T\rightarrow\infty.
\end{equation}
Combining Eq. (\ref{eq6}) and (\ref{eq7}), we obtain the desired results.
\hfill$\square$
We establish the asymptotic normality of the continuous-type LSE in the following theorem. The convergence rate is comparable to that of the MLE-based approach \citep{Bo2011}, and we obtain an explicit formula for the asymptotic variance.
\begin{thm}\label{thm2}
The continuous-type LSE $\hat{\theta}_{T}$ of $\theta$ is asymptotically normal, i.e.,
\begin{equation*}
\sqrt{T}(\hat{\theta}_{T}-\theta)\sim\mathcal{N}(0, 2\theta),
\end{equation*}
as $T$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm2}}.
Note that
\begin{equation*}
\begin{aligned}
\sqrt{T}(\hat{\theta}_{T}-\theta)&=-\sigma\sqrt{T}\frac{\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\int_{0}^{T} X_{t}^{2}\mathrm{d}t}\\
&=-\frac{\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t}.
\end{aligned}
\end{equation*}
From Eq. (\ref{eq3}), we have that $\frac{1}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t$ converges to $\frac{\sigma^{2}}{2\theta}$
almost surely, as $T$ tends to infinity. It is therefore sufficient to show that $\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}$ converges in law to a centered normal distribution as $T$ tends to infinity. Since $X_{t}$ is adapted with respect to $\mathcal{F}_{t}$, the process $\{\sigma\int_{0}^{t}X_{s}\mathrm{d}W_{s}\}_{t\geq 0}$ is a centered martingale with quadratic variation $\sigma^{2}\int_{0}^{t}X_{s}^{2}\mathrm{d}s$. Based on Eq. (\ref{eq3}) again, the central limit theorem for martingales yields
\begin{equation}\label{eq8}
\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\sim\mathcal{N}(0, \frac{\sigma^{4}}{2\theta}).
\end{equation}
By Slutsky's theorem and Eq. (\ref{eq8}), we have
\begin{equation*}
\frac{\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t}\sim\mathcal{N}(0,2\theta),
\end{equation*}
which completes the proof.
\hfill$\square$
The following theorem proves the consistency of the discrete-type LSE.
\begin{thm}\label{thm3}
The discrete-type LSE $\tilde{\theta}_{n}$ is consistent, i.e.,
\begin{equation*}
\tilde{\theta}_{n}\stackrel{P}{\longrightarrow}\theta,
\end{equation*}
as $n$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm3}}. From the alternative expression Eq. (\ref{eq5}), we have
\begin{equation*}
\tilde{\theta}_{n}-\theta=\frac{\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\big(\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\frac{1}{nh}\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.
\end{equation*}
We first estimate $\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|$. For $t_{k}\leq t\leq t_{k+1}$, we have
\begin{equation*}
\begin{aligned}
&|X_{t}-X_{t_{k}}|\\
=&|-\theta\int_{t_{k}}^{t} X_{u}\mathrm{d}u+\sigma(W_{t}-W_{t_{k}})+(L_{t}-L_{t_{k}})|\\
=&|-\theta\int_{t_{k}}^{t}(X_{u}-X_{t_{k}})\mathrm{d}u-\theta X_{t_{k}}(t-t_{k})+\sigma(W_{t}-W_{t_{k}})+(L_{t}-L_{t_{k}})|\\
\leq&|X_{t_{k}}|h\theta+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)+\theta\int_{t_{k}}^{t}|X_{u}-X_{t_{k}}|\mathrm{d}u.
\end{aligned}
\end{equation*}
By Gronwall's inequality, we have
\begin{equation*}
|X_{t}-X_{t_{k}}|\leq\bigg(|X_{t_{k}}|h\theta+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)\bigg)e^{\theta(t-t_{k})}.
\end{equation*}
It follows that
\begin{equation*}
\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|\leq \bigg(\theta|X_{t_{k}}|h+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)\bigg)e^{\theta h}.
\end{equation*}
By the properties of the process $L$, we have
\begin{equation*}
L_{t_{k+1}}-L_{t_{k}}=\max\big(0, A_{t_{k}}-X_{t_{k}}\big),
\end{equation*}
where $A_{t_{k}}=\sup_{t_{k}\leq t\leq t_{k+1}}\big\{\theta X_{t_{k}}(t-t_{k})-\sigma(W_{t}-W_{t_{k}})\big\}$.
Since almost all paths of Brownian motion are $\alpha$-H\"{o}lder continuous, where $\alpha\in(0,\frac{1}{2})$, we have
\begin{equation*}
\begin{aligned}
\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|&\leq Ch^{\alpha}e^{\theta h}=O(h^{\alpha}),
\end{aligned}
\end{equation*}
where $C$ is a constant depending on the sample path.
Then
\begin{equation}\label{eq10}
\begin{aligned}
&\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t\\
\leq&\frac{\theta}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|h\\
=&O(h^{\alpha}),
\end{aligned}
\end{equation}
which goes to $0$ as $h\rightarrow0$.
Let $\phi_{k}(t)=X_{t_{k}}I_{\{t\in[t_{k},t_{k+1})\}}(t)$. Then we have
\begin{equation}\label{eq11}
\lim_{n \rightarrow \infty}\sum_{k=0}^{n-1}X_{t_{k}}\vartriangle_{k}W=\lim_{n \rightarrow \infty}\sum_{k=0}^{n-1}\int_{0}^{nh}\phi_{k}(t)\mathrm{d}W_{t}.
\end{equation}
By arguments similar to those in the proof of Theorem \ref{thm1}, we have
\begin{equation}\label{eq12}
\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\sigma\vartriangle_{k}W\stackrel{P}{\longrightarrow}0, \quad n\rightarrow\infty.
\end{equation}
Combining Eq. (\ref{eq3}), (\ref{eq10}) and (\ref{eq12}), we obtain the desired results.
\hfill$\square$\\
The following theorem establishes the asymptotic normality of the discrete-type LSE.
\begin{thm}\label{thm4}
Assume that $nh^{1+2\alpha}\rightarrow 0$ for some $\alpha\in(0,1/2)$, as $n$ tends to infinity. The discrete-type LSE $\tilde{\theta}_{n}$ of $\theta$ is asymptotically normal, i.e.,
\begin{equation*}
\sqrt{nh}(\tilde{\theta}_{n}-\theta)\sim\mathcal{N}(0,2\theta),
\end{equation*}
as $n$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm4}}. Note that
\begin{equation*}
\sqrt{nh}(\tilde{\theta}_{n}-\theta)=\frac{\frac{1}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\big(\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\frac{1}{nh}\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.
\end{equation*}
By Eq. (\ref{eq10}), we have
\begin{equation}
\frac{1}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\int_{t_{k}}^{t_{k+1}}\theta (X_{t}-X_{t_{k}})\mathrm{d}t= O(\sqrt{nh^{1+2\alpha}}),
\end{equation}
which goes to $0$ as $n$ tends to infinity. By arguments similar to those in the proof of Theorem \ref{thm2}, we have
\begin{equation*}
\frac{\sigma}{\sqrt{nh}}\int_{0}^{nh}\sum_{k=0}^{n-1}\phi_{k}(t)\mathrm{d}W_{t}\sim\mathcal{N}\Big(0,\frac{\sigma^{4}}{2\theta}\Big).
\end{equation*}
By Eq. (\ref{eq11}), we have
\begin{equation*}
\frac{\sigma}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\vartriangle_{k}W\sim\mathcal{N}(0,\frac{\sigma^{4}}{2\theta}).
\end{equation*}
By Eq. (\ref{eq3}) and Slutsky's theorem, we obtain the desired results.
\hfill$\square$
\begin{rmk}
Our method can be applied to the reflected OU processes with two-sided reflecting barriers $(0,b)$, where $b\in(0,\infty)$. The two types of LSEs of a two-sided reflected OU process are
\begin{equation*}
\hat{\theta}_{T}=-\frac{\int_{0}^{T} X_{t}\mathrm{d} X_{t}-\int_{0}^{T}X_{t}\mathrm{d}L_{t}+\int_{0}^{T}X_{t}\mathrm{d}R_{t}}{\int_{0}^{T} X_{t}^{2} \mathrm{d} t},
\end{equation*}
and
\begin{equation*}
\tilde{\theta}_{n}=-\frac{\sum_{k=0}^{n-1}X_{t_{k}}(X_{t_{k+1}}-X_{t_{k}}-\vartriangle_{k}L+\vartriangle_{k}R)}{\sum_{k=0}^{n-1}X_{t_{k}}^{2}h},
\end{equation*}
where $R$ is the minimal continuous increasing process such that $X_{t}\leq b$ for all $t\geq 0$. The unique invariant density is given by \citep{Linetsky2005}
\begin{equation*}
p(x)=\frac{\sqrt{2 \theta}}{\sigma} \frac{\phi\left(\frac{\sqrt{2 \theta}}{\sigma} x\right)}{\Phi\left(\frac{\sqrt{2 \theta}}{\sigma} b\right)-\frac{1}{2}}, \quad x \in[0, b],
\end{equation*}
where $\Phi(y)=\int_{-\infty}^{y}\phi(u)\mathrm{d}u$.
Hence the consistency and the asymptotic distributions of the two estimators can be obtained; the proofs are similar to those of Theorems \ref{thm1}--\ref{thm4}, and we omit the details here.
\end{rmk}
\section{Numerical results}\label{sec4}
In this section, we present some numerical results. For a Monte Carlo simulation of the reflected OU process, one can refer to \cite{Lepingle(1995)}, which is known to yield the same rate of convergence as the usual Euler–Maruyama scheme.
We set the discretization step to $h=0.01$ and perform $N=1000$ Monte Carlo simulations of the sample paths generated by the model under different settings. The parameter estimates are evaluated by their bias, standard deviation (Std.dev) and mean squared error (MSE). We also report the empirical variance (Asy.var) of $\sqrt{nh}(\tilde{\theta}_{n}-\theta)$. The results are presented in Table \ref{table1}.
We emphasize that this empirical variance closely approximates $2\theta$ under the different settings of $\theta$, which verifies the explicit closed-form formulas proposed in Theorems \ref{thm2} and \ref{thm4}.
Table \ref{table1} summarizes the main findings over the 1000 simulations. We observe that as the sample size increases, the bias decreases and remains small, and the empirical and model-based standard errors agree reasonably well; the performance improves with larger sample sizes.
The distribution of the proposed estimator under two different settings is illustrated as a histogram in Figures \ref{fig1} and \ref{fig2}. In each figure, the standard normal density is overlaid as a solid curve, and the histogram approximates it well. Thus, the LSEs work well whether $\theta$ is moderate ($\theta=0.5$) or small ($\theta=0.2$), and whether the horizon is fairly short ($T=100$) or long ($T=1000$).
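For reproducibility, the experiment can be set up along the following lines. The Python sketch below is ours; it uses a simple projection (reflected Euler) scheme in place of the L\'epingle scheme cited above, which suffices for illustration, and it tracks the regulator increments $\vartriangle_{k}L$ needed by the discrete-type LSE:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2022)

def simulate_and_estimate(theta, sigma, h, n, x0=1.0):
    # Projection (reflected Euler) scheme for
    # dX_t = -theta X_t dt + sigma dW_t + dL_t, reflected at 0,
    # tracking the regulator increments dL.
    x = np.empty(n + 1)
    dL = np.empty(n)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    for k in range(n):
        y = x[k] - theta * x[k] * h + sigma * dW[k]   # unreflected step
        x[k + 1] = max(y, 0.0)                        # project onto [0, inf)
        dL[k] = max(-y, 0.0)                          # regulator increment
    dx = np.diff(x)
    # Discrete-type LSE of theta
    return -np.sum(x[:-1] * (dx - dL)) / (np.sum(x[:-1] ** 2) * h)

estimates = np.array([simulate_and_estimate(0.5, 0.2, 0.01, 10**4)
                      for _ in range(1000)])
nh = 10**4 * 0.01
print("bias:", estimates.mean() - 0.5)
print("empirical Asy.var:", np.var(np.sqrt(nh) * (estimates - 0.5)))
\end{verbatim}
The printed empirical variance of $\sqrt{nh}(\tilde{\theta}_{n}-\theta)$ should be close to $2\theta$, in line with Theorem \ref{thm4}.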
\begin{table}[htp]
\caption{Simulation results}
\begin{tabular}{ccccc}
\hline
\makebox[0.2\textwidth]{True parameter} & \makebox[0.1\textwidth]{} & \makebox[0.18\textwidth]{$n=10^{3}$} & \makebox[0.18\textwidth]{$n=10^{4}$} & \makebox[0.18\textwidth]{$n=10^{5}$}\tabularnewline
\hline
\hline
$\theta=0.5$,
& Bias & 0.2006 & 0.0129 & -0.0076\tabularnewline
$\sigma=0.2$
& Std.dev & 0.4380 & 0.1040 & 0.0310\tabularnewline
& Asy.var & 1.9100 & 1.0800 & 0.9620\tabularnewline
& MSE & 0.2320 & 0.0110 & 0.0010\tabularnewline
\hline
$\theta=0.5$,
& Bias & 0.1890 & 0.0171 & -0.0084\tabularnewline
$\sigma=0.5$
& Std.dev & 0.4550 & 0.1080 & 0.0315\tabularnewline
& Asy.var & 2.0700 & 1.1700 & 0.9900\tabularnewline
& MSE & 0.2430 & 0.0120 & 0.0011\tabularnewline
\hline
$\theta=1$,
& Bias & 0.1410 & -0.0124 & -0.0255\tabularnewline
$\sigma=1$
& Std.dev & 0.5030 & 0.1450 & 0.0439\tabularnewline
& Asy.var & 2.5300 & 2.1200 & 1.9300\tabularnewline
& MSE & 0.2730 & 0.0213 & 0.0026\tabularnewline
\hline
\end{tabular}
\label{table1}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{100.001.05.02}
\caption{Histogram of $\sqrt{T}(\hat{\theta}_{T}-\theta)$ with $T=100$, $h=0.01$, $\theta=0.5$ and $\sigma=0.2$.}
\label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{1000.001.02.02}
\caption{Histogram of $\sqrt{T}(\hat{\theta}_{T}-\theta)$ with $T=1000$, $h=0.01$, $\theta=0.2$ and $\sigma=0.2$.}
\label{fig2}
\end{figure}
\section{Conclusion}\label{sec5}
In this paper, we have presented two types of least squares estimators for the reflected Ornstein–Uhlenbeck process, based on continuously observed and discretely observed processes respectively. The consistency and the asymptotic normality of both estimators have been studied. Moreover, we have derived the explicit formula of the asymptotic variance, which is $2\theta$. Numerical results show that the least squares estimators work well under different settings.
Further research may include investigating statistical inference for other reflected diffusions.
| {
"timestamp": "2022-05-03T02:33:33",
"yymm": "2205",
"arxiv_id": "2205.00670",
"language": "en",
"url": "https://arxiv.org/abs/2205.00670",
"abstract": "In this paper, we investigate the parameter estimation problem for reflected OU processes. Both the estimates based on continuously observed processes and discretely observed processes are considered. The explicit formulas for the estimators are derived using the least squares method. Under some regular conditions, we obtain the consistency and establish the asymptotic normality for the estimators. Numerical results show that the proposed estimators perform well with moderate sample sizes.",
"subjects": "Methodology (stat.ME); Statistics Theory (math.ST)",
"title": "Parameter estimation for reflected OU processes",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717444683929,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7089449293894302
} |
https://arxiv.org/abs/2104.05464 | Convexity of $λ$-hypersurfaces | We prove that any $n$-dimensional closed mean convex $\lambda$-hypersurface is convex if $\lambda\le 0.$ This generalizes Guang's work on $2$-dimensional strictly mean convex $\lambda$-hypersurfaces. As a corollary, we obtain a gap theorem for closed $\lambda$-hypersurfaces with $\lambda\le 0.$ | \section{{\bf Introduction}}
A hypersurface $M^n$ in $\bb R^{n+1}$ is called a $\lambda$-\textit{hypersurface} if it satisfies
\begin{equation}\label{lambda}
H - \frac{\pair{x,\textbf{n}}}{2} = \lambda
\end{equation}
where $H$ is the mean curvature, $\textbf{n}$ is the outer unit normal of $M,$ $x$ is the position vector, and $\lambda$ is a constant. This equation arises in the study of isoperimetric problems in weighted (Gaussian) Euclidean spaces (c.f. \cite{MR15}), which is a long-standing topic studied in various fields in science (\cite{L94}, \cite{B01}, \cite{B03}, etc.). Recently, Cheng and Wei \cite{CW18} defined a weighted volume functional, and showed that the critical points of the functional under some weighted volume-preserving variations are exactly $\lambda$-hypersurfaces.
When $\lambda=0,$ $\lambda$-hypersurfaces are exactly \textit{self-shrinkers}. Self-shrinkers play an important role in the study of mean curvature flow (MCF), since White \cite{W97} and Ilmanen \cite{I95} showed that self-shrinkers arise as the tangent flows of MCF, based on Huisken's monotonicity formula \cite{H90} and Brakke's compactness theorem \cite{B78}. Many classification results for self-shrinkers have been obtained. Abresch and Langer \cite{AL86} showed that the only $1$-dimensional closed embedded self-shrinker is the circle $S^1.$ Huisken \cite{H90} later dealt with the higher-dimensional cases, proving that any closed, embedded, and mean convex (which means $H\ge 0$) $n$-dimensional self-shrinker is exactly the sphere $S^n.$ For the non-compact situation, Huisken \cite{H93} proved that all smooth, embedded, and mean convex self-shrinkers with polynomial volume growth and bounded second fundamental form are generalized cylinders $S^k\times \bb R^{n-k}.$ This result was later improved by Colding and Minicozzi \cite{CM12}, who removed the condition of bounded second fundamental form in Huisken's classification.
When $\lambda\neq 0,$ there are relatively few and incomplete classification results so far. In \cite{CW18}, Cheng and Wei characterized compact $\lambda$-hypersurfaces with $H-\lambda\ge 0$ and some curvature conditions (c.f. theorem \ref{CW}). Inspired by \cite{SX20}, Guang \cite{G21} showed that any strictly mean convex (which means $H>0$) $2$-dimensional $\lambda$-hypersurface is in fact convex if $\lambda\le 0.$ The main goal of this paper is to generalize Guang's result to higher-dimensional mean convex $\lambda$-hypersurfaces.
\begin{thm}\label{main}
Let $M^n$ be a smooth, closed, and embedded $\lambda$-hypersurface in $\bb R^{n+1}$ with $\lambda\le 0.$ If $M$ is mean convex, then it is convex.
\end{thm}
This theorem is a generalization of Guang's result in \cite{G21}. Guang used the explicit expressions for the derivatives of the principal curvatures at the non-umbilical points of a surface, which were first derived in \cite{HIMW19}. We follow the same spirit to derive a differential inequality for the sum of a part of principal curvatures at the points where there is a gap among some principal curvatures (c.f. lemma \ref{LS}). Though we could not derive similar explicit expressions, it turns out that the information we derive is sufficient to obtain the higher-dimensional generalization. Also, we use the maximum principle to weaken the assumption of strict mean convexity in \cite{G21}, which Huisken \cite{H90} also applied when classifying closed mean convex self-shrinkers.
A natural further question is whether Huisken-type classification also holds for $\lambda$-hypersurfaces. That is, could we classify all $\lambda$-hypersurfaces given some curvature conditions, like mean-convexity? In the curve case, Guang \cite{G18} proved that any smooth embedded $1$-dimensional $\lambda$-hypersurface (or $\lambda$-curve) is either a straight line or a circle if $\lambda\ge 0,$ which generalized Abresch and Langer's result. For the higher dimensional case, Heilman \cite{H17} proved that convex $n$-dimensional $\lambda$-hypersurfaces are generalized cylinders if $\lambda\ge 0.$ However, when $\lambda<0,$ Chang \cite{C17} showed that for certain $\lambda<0,$ there are some closed embedded mean convex $\lambda$-curves other than circles. Thus we could not expect Huisken-type results to hold for general $\lambda\in\bb R.$ We hope that theorem \ref{main} will shed some light on the higher-dimensional case when $\lambda\le 0.$ In particular, using the curvature condition discovered in \cite{CW18}, we can prove the following gap theorem for mean convex $\lambda$-hypersurfaces when $\lambda\le 0.$
\begin{thm}\label{gap}
Let $M^n$ be a smooth, closed, and embedded $\lambda$-hypersurface in $\bb R^{n+1}.$ If $\lambda\le 0$ and the mean curvature of $M$ satisfies
$$0\le H\le \frac{\sqrt{\lambda^2+2}+\lambda}{2},$$
then $M$ is a round sphere.
\end{thm}
We remark that if we assume $M$ is a convex $\lambda$-hypersurface, then the result of theorem \ref{gap} could also be derived from the gap theorem proven by Guang \cite{G18}. What is new here is that we only need to assume $H\ge 0$; the convexity then follows from theorem \ref{main}.
The organization of this paper is as follows. In section \ref{2}, we introduce the Simons-type identities for $\lambda$-hypersurfaces, which were given by Guang in \cite{G18}. In section \ref{3}, we derive a differential inequality for the sum of a part of the principal curvatures at the points where there is a gap among some principal curvatures. In section \ref{4}, we use the identities and the inequality in the preceding sections to prove the main theorem \ref{main}. In section \ref{5}, we prove a gap theorem \ref{gap} by applying Cheng and Wei's theorem.
\subsection*{\bf Acknowledgement}
The author is grateful to Prof. Bill Minicozzi for his helpful and inspiring comments. He also thanks Kai-Hsiang Wang for pointing out some deficiencies in an earlier draft, and Qiang Guang for generously sharing some useful references. This work was completed when the author visited the National Center for Theoretical Science (NCTS) in Taiwan, and the author is also grateful for much helpful discussion with people in NCTS.
\section{\bf Simons-type Identities}\label{2}
On a hypersurface $M$ in $\bb R^{n+1},$ we consider the drift Laplacian
$$\mathcal{L} := \Delta-\frac 12\nabla_{x^T}(\cdot)$$
and also the following linear operator
$$L := \mathcal{L} + |A|^2 + \frac 12
= \Delta - \frac 12\nabla_{x^T}(\cdot) + |A|^2 + \frac 12$$
where $\Delta$ and $A$ denote the Laplacian operator and the second fundamental form of $M,$ and $x^T$ is the tangential component (with respect to $M$) of the position vector $x.$ These operators were introduced by Colding and Minicozzi to study the stability of self-shrinkers. In fact, the operator $L$ appears in the second variation formula of the $F$-functional (c.f. \cite{CM12}).
Guang \cite{G18} established the following Simons-type identities. These identities will play a crucial role in the proof of the main theorem \ref{main}. We remark that these kinds of identities have been developed in \cite{CM12} and \cite{CM15} for self-shrinkers. For completeness, we include the proof in \cite{G18} here.
\begin{lemma}[\cite{G18}]\label{LALH} If $M$ is a $\lambda$-hypersurface in $\bb R^{n+1},$ then
\begin{equation}\label{LA}
LA = A-\lambda A^2
\end{equation}
and in particular, taking the trace of \eqref{LA} gives
\begin{equation}\label{LH}
LH = H + \lambda|A|^2.
\end{equation}
\end{lemma}
\noindent{\bf (Proof.)} For any fixed $p\in M,$ take a local orthonormal frame $\{e_i\}_{i=1,\cdots,n}$ such that $\nabla^M_{e_i}e_j=0$ for all $i$ and $j,$ where $\nabla^M$ is the Riemannian connection of $M.$ Thus we can write $\nabla_{e_i}e_j=a_{ij}n$ where $a_{ij}$ is the component of the second fundamental form $A.$ As a result,
\begin{align*}
\text{Hess}_{\pair{x,\textbf{n}}}(e_i,e_j)
= \nabla_{e_j}\nabla_{e_i}\pair{x,\textbf{n}}
& = \nabla_{e_j}\sum_{l=1}^n \pair{x,-a_{il}e_l}\\
& = -a_{ij} + \sum_{l=1}^n \left(-a_{il,j}\pair{x,e_l} - a_{il}\pair{x,a_{jl}\textbf{n}} \right)\\
& = -A(e_i,e_j) - (\nabla_{x^T}A)(e_i,e_j) - \pair{x,\textbf{n}}A^2(e_i,e_j)
\end{align*}
where $a_{il,j}$ is the component of $\nabla A,$ and we use the Codazzi equation $a_{il,j} = a_{ij,l}.$ In conclusion, we derive
\begin{equation}\label{Hij}
\text{Hess}_{\pair{x,\textbf{n}}} = -A - \nabla_{x^T}A - \pair{x,\textbf{n}}A^2.
\end{equation}
Plug this into the Simons identity
\begin{equation}\label{Simons}
\Delta A = -|A|^2 A - HA^2 - \text{Hess}_H
\end{equation}
which holds for any hypersurface in $\bb R^{n+1}$ (c.f. the formula (2.14) in \cite{CM11}), and we get
\begin{align*}
LA
& = \Delta A - \frac 12\nabla_{x^T}(A) + |A|^2A + \frac 12 A\\
& = A - \left(H - \frac {\pair{x,\textbf{n}}}{2}\right)A^2\\
& = A - \lambda A^2
\end{align*}
based on the $\lambda$-hypersurface equation \eqref{lambda}. \eqref{LH} follows directly after taking the trace since $\text{tr} A = - H.$\qed
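In detail, since $\text{tr} A=-H$ and $\text{tr} A^2=|A|^2,$ tracing \eqref{LA} gives
\begin{equation*}
-LH = L(\text{tr} A) = \text{tr} A - \lambda\,\text{tr} A^2 = -H-\lambda|A|^2,
\end{equation*}
which is \eqref{LH}.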
\section{\bf Estimates of Principal Curvatures}\label{3}
In this section, we let $M$ be a smooth mean convex hypersurface in $\bb R^{n+1},$ and we write $k_1\le\cdots\le k_n$ for the principal curvatures of $M$ in ascending order. For $l\ge 1,$ consider
\begin{equation*}
S_l:=\sum_{m=l+1}^nk_m,
\end{equation*}
which is the sum of the largest $n-l$ principal curvatures. In general, $S_l$ is just a continuous function on $M.$ However, if $k_l<k_{l+1}$ at a point $p\in M,$ the inverse function theorem implies that $k_1+\cdots+k_l,$ and thus $S_l,$ are differentiable near $p.$ At such a point, we establish the following differential inequality for $S_l.$
\begin{lemma}\label{LS}
Suppose $k_l<k_{l+1}$ for some $l\ge 1$ at a point $p\in M.$ Then at $p,$ we have
\begin{equation}\label{LS>=}
\mathcal{L} S_l \ge \frac {S_l}2 - |A|^2 S_l +\lambda\sum_{m=l+1}^nk_m^2.
\end{equation}
\end{lemma}
\noindent{\bf(Proof.)} We only need to consider those points near which we could take a principal frame $\{v_1,\cdots,v_n\}$ such that
\begin{equation}\label{ki}
k_i=-a_{ii}:=-A(v_i,v_i)
\end{equation}
and
\begin{equation}\label{aperp}
a_{ij}:=A(v_i,v_j)=0\text{ for }1\le i\neq j\le n.
\end{equation}
Such points form a dense and open set in $M$ (c.f. \cite{S75}), so after proving \eqref{LS>=} at these points, it follows that \eqref{LS>=} holds for all $p\in\{k_l<k_{l+1}\}$ by continuity.
Now assume $v_1,\cdots, v_n$ form a principal frame near $p.$ For any fixed $i,$ since $\pair{v_i,v_i}=1,$ we have
\begin{equation}\label{perp}
\pair{\nabla_v v_i,v_i}=\frac 12\nabla_v\pair{v_i,v_i}=0
\end{equation}
for any local vector field $v.$ Hence we can write
\begin{equation}\label{cimj}
\nabla_{v_i}v_m=\sum_{j\neq m}c_{i}^{mj}v_j
\end{equation}
for some smooth functions $c_{i}^{mj}$ near $p.$ Based on \eqref{aperp} and \eqref{cimj}, near the point $p,$ we have
\begin{equation*}
\nabla_{v_i}k_m
= -\nabla_{v_i}(A(v_m,v_m))
= -(\nabla_{v_i}A)(v_m,v_m) - 2A(\nabla_{v_i} v_m,v_m)
= -(\nabla_{v_i}A)(v_m,v_m),
\end{equation*}
so
\begin{align*}
\Delta k_m
= -\sum_{i=1}^n\nabla_{v_i}\nabla_{v_i}(A(v_m,v_m))
& = -\sum_{i=1}^n\nabla_{v_i}((\nabla_{v_i}A)(v_m,v_m))\\
& = - (\Delta A)(v_m,v_m) - 2\sum_{i=1}^n(\nabla_{v_i}A)(\nabla_{v_i}v_m,v_m)\\
& = - (\Delta A)(v_m,v_m) - 2\sum_{i=1}^n\sum_{j\neq m}c_i^{mj}a_{jm,i}\numberthis\label{deltak_m}
\end{align*}
by \eqref{cimj}. To calculate the term involving the derivative of the second fundamental form, notice that for $j\neq m,$ $A(v_j,v_m)=0,$ based on which we have
\begin{align*}
0
& = \nabla_{v_i}(A(v_j,v_m))\\
& = a_{jm,i} + A(\nabla_{v_i}v_j,v_m) + A(v_j,\nabla_{v_i}v_m)\\
& = a_{jm,i} + A\left(\sum_{l\neq j}c_i^{jl}v_l,v_m \right) + A\left(v_j, \sum_{l\neq m}c_i^{ml}v_l \right)\\
& = a_{jm,i} - c_i^{jm}k_m - c_i^{mj}k_j\numberthis\label{half}
\end{align*}
where we use the decomposition \eqref{cimj} and the relations \eqref{ki} and \eqref{aperp}. To get a more precise form, observe that the orthogonality condition $\pair{v_j,v_m}=0$ implies
\begin{equation}\label{ccommutator}
0
= \nabla_{v_i}\pair{v_j,v_m}
= \pair{\nabla_{v_i}v_j,v_m} + \pair{v_j,\nabla_{v_i}v_m}
= c_i^{jm} + c_i^{mj}
\end{equation}
due to the orthonormality and \eqref{cimj}. Putting \eqref{ccommutator} back into \eqref{half}, we obtain
$$0 = a_{jm,i} + c_i^{mj}(k_m-k_j),$$
with which we could simplify \eqref{deltak_m} as
\begin{equation}\label{dkm}
\Delta k_m
= - (\Delta A)(v_m,v_m) + 2\sum_{i=1}^n\sum_{j\neq m} (c_i^{mj})^2(k_m-k_j).
\end{equation}
Now we apply the Simons-type identity \eqref{LA}, which gives
\begin{align*}
(\Delta A)(v_m,v_m)
& = \frac 12 (\nabla_{x^T}A) (v_m,v_m)
+ \frac 12A(v_m,v_m)
- |A|^2A(v_m,v_m)
- \lambda A^2(v_m,v_m).\\
& = - \frac 12 \nabla_{x^T} k_m
- \frac 12 k_m
+ |A|^2 k_m
- \lambda k_m^2.
\end{align*}
Combining this with \eqref{dkm}, we derive
\begin{equation*}
\Delta k_m
= \frac 12 \nabla_{x^T} k_m
+ \frac 12 k_m
- |A|^2 k_m
+ \lambda k_m^2
+ 2\sum_{i=1}^n\sum_{j\neq m} (c_i^{mj})^2(k_m-k_j).
\end{equation*}
As a result,
\begin{equation*}
\mathcal{L} k_m
= \Delta k_m - \frac 12\nabla_{x^T} k_m
= \frac 12 k_m
- |A|^2 k_m
+ \lambda k_m^2
+ 2\sum_{i=1}^n\sum_{j\neq m} (c_i^{mj})^2(k_m-k_j).
\end{equation*}
Therefore, summing over $m$ from $l+1$ to $n$ leads to
\begin{align*}
\mathcal{L} S_l
= \sum_{m=l+1}^n\mathcal{L} k_m
& = \frac 12 \sum_{m=l+1}^n k_m
- |A|^2 \sum_{m=l+1}^n k_m
+ \lambda \sum_{m=l+1}^n k_m^2
+ 2\sum_{m=l+1}^n\sum_{i=1}^n\sum_{j\neq m} (c_i^{mj})^2(k_m-k_j)\\
& = \frac 12 S_l - |A|^2S_l + \lambda \sum_{m=l+1}^n k_m^2
+ 2 \sum_{i=1}^n \sum_{m=l+1}^n\sum_{j=1}^l(c_i^{mj})^2(k_m-k_j)
\end{align*}
where some of the terms in the large sum get cancelled when $m$ and $j$ are switched since \eqref{ccommutator} implies
$$(c_i^{jm})^2 = (c_i^{mj})^2$$
for all $j\neq m.$ Then the inequality \eqref{LS>=} follows since by our convention, $k_m-k_j\ge 0$ for all $m>l\ge j.$\qed
\section{\bf Proof of the Main Theorem}\label{4}
We are in a position to prove the main theorem \ref{main} using lemmas \ref{LALH} and \ref{LS}. We state the main theorem here again.
\begin{thm}
Let $M^n$ be a smooth, closed, and embedded $\lambda$-hypersurface in $\bb R^{n+1}$ with $\lambda\le 0.$ If $M$ is mean convex, then it is convex.
\end{thm}
\noindent{\bf(Proof.)} The case with $\lambda=0$ directly follows from the classification of closed mean convex self-shrinkers, so we may assume $M$ is a mean convex $\lambda$-hypersurface with $\lambda<0.$
First we show that $M$ is strictly convex. In fact, \eqref{LH} implies
\begin{align*}
\Delta H - \frac 12 \nabla_{x^T} H + \left(|A|^2 - \frac 12 \right)H
= \lambda |A|^2
\le 0.
\end{align*}
Therefore, if $H$ vanished at some point, the maximum principle would imply that $H\equiv 0.$ Then the displayed equation would give $\lambda|A|^2\equiv 0,$ so $A\equiv 0$ and $M$ would be planar, contradicting the assumption that $M$ is closed. Consequently, $M$ is strictly mean convex; that is, $H>0$ on $M.$ In particular, $S_l>0$ on $M$ for all $l\ge 1.$
Next, we prove the conclusion of the theorem by contradiction: assume there exists $\ovl p\in M$ such that $k_1(\ovl p)<0.$ Then
$$\frac H{S_1}=1+\frac {k_1}{S_1}$$
would attain its minimum, which would be less than $1,$ at some point, say $p.$ We can find $l\ge 1$ such that at this point $p,$
$$k_1=\cdots=k_l<k_{l+1}.$$
We claim that at $p,$ the function $\frac H{S_l}$ also attains its minimum. Otherwise, if
$\frac{H(q)}{S_l(q)}<\frac{H(p)}{S_l(p)}$
for some $q\neq p,$ which means
$$\frac{\sum_{m=1}^l k_m(q)}{H(q)-\sum_{m=1}^l k_m(q)} < \frac{\sum_{m=1}^l k_m(p)}{H(p)-\sum_{m=1}^l k_m(p)},$$
then after expanding the terms, we get
$$H(p)\sum_{m=1}^l k_m(q)
<H(q)\sum_{m=1}^l k_m(p)
= H(q)\cdot lk_1(p).$$
This particularly implies
$$H(p)k_1(q)
\le H(p)\cdot \frac 1l\sum_{m=1}^l k_m(q)
< H(q)k_1(p),
$$
which then results in
$$\frac {H(q)}{S_1(q)}
= 1 + \frac{k_1(q)}{S_1(q)}
< 1 + \frac{k_1(p)}{S_1(p)}
= \frac {H(p)}{S_1(p)},
$$
contradicting the minimality of $\frac H{S_1}$ at $p.$ Thus we prove that $\frac H{S_l}$ attains its minimum at $p.$ Consequently, we have
\begin{equation}\label{41}
\mathcal{L} \left(\frac H{S_l} \right) \ge 0\text{ and }\nabla \left(\frac H{S_l} \right)=0
\end{equation}
at $p,$ where $S_l$ is differentiable at $p$ since $k_l(p)<k_{l+1}(p).$ Note that \eqref{LH} implies
$$\mathcal{L} H = \frac H2 - |A|^2 H + \lambda|A|^2.$$
Combining this with lemma \ref{LS}, we obtain that at $p,$
\begin{align*}
\mathcal{L} \left(\frac H{S_l} \right)
& = \frac{S_l\mathcal{L} H - H\mathcal{L} S_l}{S_l^2} - 2\pair{\nabla\left(\frac H{S_l} \right), \frac{\nabla S_l}{S_l}}\\
& \le \frac{1}{S_l}\left(\frac H2 - |A|^2 H + \lambda|A|^2 \right) - \frac H{S_l^2}\left(\frac {S_l}2 - |A|^2 S_l +\lambda\sum_{m=l+1}^nk_m^2 \right)\\
& = \frac \lambda {S_l} \left(|A|^2 - \frac H{S_l} \sum_{m=l+1}^nk_m^2 \right)\\
& = \frac{\lambda}{S_l} \left(\sum_{i=1}^n k_i^2 -\left(1+\frac {\sum_{j=1}^lk_j}{S_l} \right) \left(\sum_{m=l+1}^n k_m^2 \right)\right)\\
& = \frac{\lambda}{S_l}\left(\sum_{i=1}^lk_i^2 - \frac {\sum_{j=1}^lk_j}{S_l}\left(\sum_{m=l+1}^n k_m^2 \right)\right)\\
& = \frac{l\lambda k_1}{S_l}\left(k_1 -\frac 1{S_l}\left(\sum_{m=l+1}^n k_m^2\right) \right),
\end{align*}
which is negative since $k_1(p)=\cdots=k_l(p)<0.$ Thus we derive a contradiction with \eqref{41}, and the conclusion of the theorem follows.\qed
\section{\bf Gap theorem for Mean Convex $\lambda$-hypersurfaces}\label{5}
In \cite{CW18}, Cheng and Wei proved a rigidity theorem for $\lambda$-hypersurfaces under some curvature assumptions. Their result is an application of the arguments that Huisken applied in \cite{H90} and \cite{H93}. (Note that the definition of $\lambda$-hypersurfaces in \cite{CW18} is different from that in this article by a constant. The sign convention of the second fundamental form in \cite{CW18} is also different from ours.) We use the maximum principle to give a proof of the theorem here following the ideas in \cite{H90}. In the mean convex case, we can use theorem \ref{main} to derive a gap theorem when $\lambda\le 0.$
\begin{thm}[\cite{CW18}]\label{CW}
Let $M^n$ be a smooth, closed, and embedded $\lambda$-hypersurface in $\bb R^{n+1}.$ If $H-\lambda\ge 0$ and $\lambda(2(H-\lambda)\text{tr} A^3 + |A|^2) \le 0,$ then $M$ is a round sphere.
\end{thm}
\noindent{\bf(Proof.)} By the maximum principle, we have $H-\lambda>0.$ Using \eqref{Hij}, \eqref{Simons}, and the $\lambda$-hypersurface equation \eqref{lambda}, we can derive
\begin{equation*}\label{DH}
\Delta H
= \frac 12 H + \frac 12 \nabla_{x^T} H - (H-\lambda) |A|^2
\end{equation*}
and
\begin{equation*}
\Delta |A|^2
= 2|\nabla A|^2 + |A|^2 - 2|A|^4 + \frac 12\nabla_{x^T} |A|^2 - 2\lambda \text{tr} A^3.
\end{equation*}
As a result,
\begin{align*}
\Delta\left(\frac{|A|^2}{(H-\lambda)^2} \right)
& = \frac{\Delta|A|^2}{(H-\lambda)^2}
- \frac{2|A|^2}{(H-\lambda)^3}\Delta H
- \frac{4}{(H-\lambda)^3}\pair{\nabla |A|^2,\nabla H}
+ \frac{6|A|^2}{(H-\lambda)^4}|\nabla H|^2\\
& = \frac 1{(H-\lambda)^4}\left(
2(H-\lambda)^2|\nabla A|^2
+ \frac 12 (H-\lambda)^2\nabla_{x^T}|A|^2
- (H-\lambda)|A|^2\nabla_{x^T}H
\right)\\
& +\frac 1{(H-\lambda)^4}\left(
-\lambda(H-\lambda)\left( 2(H-\lambda)\text{tr} A^3 + |A|^2 \right)
- 4(H-\lambda)\pair{\nabla |A|^2,\nabla H}
+ 6|A|^2 |\nabla H|^2
\right).
\end{align*}
Plugging in
$$
|a_{ij}\nabla_l H - (H-\lambda)\nabla_l a_{ij}|^2
= |A|^2 |\nabla H|^2
+ |\nabla A|^2 (H-\lambda)^2
- (H-\lambda)\pair{\nabla H,\nabla |A|^2}
$$
and
$$
\nabla\left(\frac{|A|^2}{(H-\lambda)^2} \right)
= \frac{\nabla |A|^2}{(H-\lambda)^2} - \frac{2|A|^2}{(H-\lambda)^3}\nabla H,
$$
we finally obtain
\begin{align*}
\Delta\left(\frac{|A|^2}{(H-\lambda)^2} \right)
& = \frac{2}{(H-\lambda)^2}\left(
|a_{ij}\nabla_l H - (H-\lambda)\nabla_l a_{ij}|^2
-\frac 12\lambda(H-\lambda)\left( 2(H-\lambda)\text{tr} A^3 + |A|^2 \right)
\right)\\
& + \pair{- \frac{2}{H-\lambda}\nabla H
+ \frac {x^T}2,
\nabla\left(\frac{|A|^2}{(H-\lambda)^2} \right)}.
\end{align*}
By our assumptions, we have
$$|a_{ij}\nabla_l H - (H-\lambda)\nabla_l a_{ij}|^2
-\frac 12\lambda(H-\lambda)\left( 2(H-\lambda)\text{tr} A^3 + |A|^2 \right)\ge 0,$$
so the maximum principle implies $|A|^2 = C(H-\lambda)^2$ for some constant $C$ and that
$$|a_{ij}\nabla_l H - (H-\lambda)\nabla_l a_{ij}|^2
-\frac 12\lambda(H-\lambda)\left( 2(H-\lambda)\text{tr} A^3 + |A|^2 \right)= 0.$$
In particular, we have
$$|a_{ij}\nabla_l H - (H-\lambda)\nabla_l a_{ij}|^2=0.$$
This tells us that the anti-symmetric part of this tensor also vanishes, which implies
\begin{equation}\label{aijajl}
|a_{ij}\nabla_l H -a_{il}\nabla _j H|^2=0
\end{equation}
by the Codazzi equation.
Now we assume $M$ is not a round sphere. Then we can find a point $p\in M$ at which $\nabla H\neq 0.$ If we take a local frame $e_1,\cdots,e_n$ such that $e_1=\frac{\nabla H}{|\nabla H|}$ at $p,$ then \eqref{aijajl} implies
$$|\nabla H|^2\left(|A|^2-\sum_{i=1}^n a_{1i}^2 \right)=0$$
at $p.$ Since $\nabla H(p)\neq 0,$ we get $|A|^2-\sum_{i=1}^n a_{1i}^2=0$ at $p.$ As a result,
$$
\sum_{i=1}^n a_{1i}^2
= |A|^2
= \sum_{j=1}^n\sum_{k=1}^n a_{jk}^2,
$$
which implies $a_{jk}=0$ if $(j,k)\neq (1,1).$ Thus we get $|A|^2=a_{11}^2=H^2.$ This, along with the fact that $|A|/(H-\lambda)$ is constant, implies that $H$ is constant on a neighborhood of $p,$ contradicting $\nabla H(p)\neq 0.$\qed
We remark that when $\lambda=0,$ the calculations above reduce to those in \cite{H90}. Now we can use theorem \ref{CW} to prove our gap theorem \ref{gap} for mean convex $\lambda$-hypersurfaces. We state theorem \ref{gap} here again.
\begin{thm}
Let $M^n$ be a smooth, closed, and embedded $\lambda$-hypersurface in $\bb R^{n+1}.$ If $\lambda\le 0$ and the mean curvature of $M$ satisfies
$$0\le H\le \frac{\sqrt{\lambda^2+2}+\lambda}{2},$$
then $M$ is a round sphere.
\end{thm}
\noindent{\bf (Proof.)} By theorem \ref{main}, we know that $M$ is convex. That is, $k_i\ge 0$ for all $i=1,\cdots,n.$ In particular, this implies
$$
H|A|^2 = \left(\sum_{i=1}^n k_i \right)\cdot \left(\sum_{j=1}^n k_j^2 \right)
\ge \sum_{m=1}^n k_m^3
= -\text{tr} A^3.
$$
On the other hand, by the upper bound of $H,$ we can conclude that $1\ge 2(H-\lambda)H.$ Combining these gives
\begin{align*}
|A|^2 \ge 2(H-\lambda)H|A|^2
\ge -2(H-\lambda)\text{tr} A^3,
\end{align*}
which implies $\lambda(2(H-\lambda)\text{tr} A^3 + |A|^2) \le 0.$ Applying theorem \ref{CW}, the conclusion follows.\qed
| {
"timestamp": "2021-06-21T02:08:06",
"yymm": "2104",
"arxiv_id": "2104.05464",
"language": "en",
"url": "https://arxiv.org/abs/2104.05464",
"abstract": "We prove that any $n$-dimensional closed mean convex $\\lambda$-hypersurface is convex if $\\lambda\\le 0.$ This generalizes Guang's work on $2$-dimensional strictly mean convex $\\lambda$-hypersurfaces. As a corollary, we obtain a gap theorem for closed $\\lambda$-hypersurfaces with $\\lambda\\le 0.$",
"subjects": "Differential Geometry (math.DG); Analysis of PDEs (math.AP)",
"title": "Convexity of $λ$-hypersurfaces",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717444683929,
"lm_q2_score": 0.7185943985973773,
"lm_q1q2_score": 0.7089449293894302
} |
https://arxiv.org/abs/1011.0045 | Domino shuffling for the Del Pezzo 3 lattice | We present a version of the domino shuffling algorithm (due to Elkies, Kuperberg, Larsen and Propp) which works on a different lattice: the hexagonal lattice superimposed on its dual graph. We use our algorithm to count perfect matchings on a family of finite subgraphs of this lattice whose boundary conditions are compatible with our algorithm. In particular, we re-prove an enumerative theorem of Ciucu, as well as finding a related family of subgraphs which have 2^{(n+1)^2} perfect matchings. We also give three-variable generating functions for perfect matchings on both families of graphs, which encode certain statistics on the height functions of these graphs. | \section{Introduction}
The theory of perfect matchings on planar bipartite graphs is quite rich and
mature, and has seen a great deal of activity over the past two decades. The
central questions in this theory are counting questions, in which one attempts
to give a generating function, or even just a count, of perfect matchings on a
fixed planar bipartite graph $G$.
In the 1960s, Kasteleyn~\cite{kasteleyn} developed a powerful
algebraic tool for answering (in principle) all such questions. In particular,
one can compute the number of perfect matchings of a planar bipartite graph by
taking the determinant of a signed version of its bipartite adjacency matrix (usually called the \emph{Kasteleyn matrix}).
This paper is concerned with a different phenomenon which is not immediately explained by Kasteleyn's formalism. Given a
periodic planar bipartite graph, it is often possible to find a family of
finite subgraphs for which the number of perfect matchings decomposes into
small, simple factors (as do the generating functions of the graphs). The
canonical example of this phenomenon is the Aztec diamond of order $n$~\cite{eklp}, which
is a diamond-shaped region of the square lattice. The
Aztec diamond of order $n$ has $2^{n(n+1)/2}$ perfect matchings~\cite{eklp}.
Elkies et al. prove this by using a technique called \emph{domino shuffling}, which is
a type of random mapping on the set of perfect matchings on a square lattice.
\begin{figure}
\caption{Diamonds of orders 3 and 3.5
\label{fig:some diamonds}}
\begin{center}
\includegraphics[height=3in]{order3.pdf}
\includegraphics[height=3in]{order3half.pdf}
\end{center}
\end{figure}
This paper follows the approach of~\cite{eklp}, but on a different lattice: namely, the hexagon lattice superimposed upon its dual triangular lattice (see Figure~\ref{fig:the dp3 lattice}). We
name this lattice the \emph{$dP_3$ lattice}, since it is associated to the del Pezzo
3 surface (the nature of this association is explained in
Section~\ref{sec:height function}). This association was first made clear in the physics literature~\cite{franco} in a quite general setting; it is from this paper that we take the terminology $dP_3$. Indeed,~\cite{franco} introduces several quivers which correspond to the $dP_3$ surface; ours is their Model I.
We describe a type of domino shuffling
which works on the $dP_3$ lattice. We also define an analogue of the Aztec diamond
of order $n \in \frac{1}{2}\mathbb{Z}$ for this lattice (see Figure~\ref{fig:some
diamonds}), which is well behaved under domino shuffling, and prove that the
number of perfect matchings on these graphs is a power of 2:
\begin{theorem}
\label{thm:main}
Let $m \in \frac{1}{2}\mathbb{Z}$ and let $n = \lfloor m \rfloor$. Then
the number of perfect matchings on a diamond of order $m \in \frac{1}{2}\mathbb{Z}$ is
\begin{align*}
&\left\{
\begin{array}{cl}
2^{n(n+1)} & \text{if }m \in \mathbb{N} \\
2^{(n+1)^2} & \text{if }m + \frac{1}{2} \in \mathbb{N}.
\end{array}
\right.
\end{align*}
\end{theorem}
Moreover, we can give a generating function for these matchings; we defer the statement of that theorem until Section~\ref{sec:generating_functions}, due to the number of definitions required to state the result.
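For concreteness, the count in Theorem~\ref{thm:main} is simple to tabulate; the following Python helper (the name \texttt{num\_matchings} is ours) returns, for instance, $2$ for $m=\frac{1}{2}$, $4$ for $m=1$, and $16$ for $m=\frac{3}{2}$:
\begin{verbatim}
from math import floor

def num_matchings(m):
    # Number of perfect matchings on the order-m diamond,
    # per the theorem above.
    n = floor(m)
    if m == n:                     # m is an integer
        return 2 ** (n * (n + 1))
    return 2 ** ((n + 1) ** 2)     # m is a half-integer

assert num_matchings(1) == 4
assert num_matchings(0.5) == 2
assert num_matchings(1.5) == 16
\end{verbatim}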
We constructed our domino shuffle by reverse engineering a particular
application of urban renewal, replacing the averaging procedure with a careful
accounting of the local structure of the perfect matching.
Propp later
refined his domino shuffle in~\cite{propp-2003} by reworking domino shuffling so that it can be
applied locally, to a square face of a weighted planar bipartite graph. The resulting technique,
\emph{urban renewal}, involves
altering the graph at a square face $S$, as well as averaging the weights of the
edges around $S$ in a certain way. Ciucu gave a further refinement of this
formalism, proving the results in~\cite{ciucu} as well as many others.
Unbeknownst to us when we began this work, Ciucu had already proved half of
our Theorem~\ref{thm:main} in~\cite{ciucu} using a powerful reduction theorem. Moreover, there
is a type of domino shuffle that one can do on any finite planar bipartite
graph G: simply embed G inside a suitably large Aztec diamond and use the generalized
domino shuffle~\cite{propp-2003} on the square lattice. It is therefore important to note that
our main contribution is the construction of the domino shuffle on the $dP_3$
lattice itself, rather than its applications to counting Aztec diamonds. Our
shuffle is indeed different from the ones used in the aforementioned papers.
There are at least two good reasons to look for simple domino
shuffles on particular planar bipartite graphs rather than appealing to general
theory. The first of these came to light in~\cite{szendroi, young-2009}, and
involves a curious interplay between the combinatorics of perfect matchings and
algebraic geometry. By studying a particular class of perfect matchings on
(infinite) periodic planar bipartite graphs, one can compute a
\emph{Donaldson-Thomas (DT) partition function} of the toric surface
associated to the graph by the construction in~\cite{franco}. The DT partition function is a virtual count
of curves in the space (really a type of Euler characteristic on the moduli
space of subschemes of the surface). There is an emerging picture in which
domino shuffling corresponds to a geometric procedure called \emph{wall crossing}
\cite{bryan-cadman-young, kontsevich-soibelman, nagao-nakajima, bryan-young}, and which, for example, links the DT theory of a singular surface to that of its
crepant resolution.
The generating functions that we compute in Section~\ref{sec:generating_functions} are a prelude to our next project, wherein we will compute the Donaldson-Thomas partition function for this lattice; this turns out to be a generating function for a different type of dimer configuration on this lattice, but which uses essentially the same weighting. We do not address Donaldson-Thomas theory at all in this article, but see~\cite{szendroi, young-2009} for an account of the corresponding problem on the square lattice.
The second modern setting for domino shuffling is in the
work of ~\cite{nordenstam, borodin-gorin} and others, which describe square-lattice domino shuffling
as a type of random point process. This process turns out to be partially
determinantal, which makes it one of a select few three-dimensional statistical
physical models which can be treated completely mathematically, without appealing to
heuristic physical reasoning. One current frontier of exactly solvable statistical
mechanics is precisely this jump from two to three
dimensions, and it is quite important to understand fully the phenomenology of
any tractable models in this area. It seems very likely that our domino shuffle
fits into the framework of~\cite{borodin-gorin}, but we have yet to exploit this connection.
In a very recent string theory paper~\cite{franco-eager-mauricio}, Franco, Eager and Romo have reproduced (up to a change of variables) and substantially generalized our Theorem~\ref{thm:main}, using different methods. We gratefully acknowledge their help in spotting a typo in Theorem~\ref{thm:main} in an earlier version of this preprint.
\section{The Lattice}
Before taking the time to describe these subgraphs, we must better understand the $dP_3$ lattice itself (see Figure~\ref{fig:the dp3 lattice}).
It is periodic, bipartite, and planar, where each face is bounded by a cycle of size 4. The edges incident to centers of hexagons have length 1, and other edges have length $\sqrt3/3$. Color the vertices of degree four black, and the others white.
\begin{definition}
We define a \emph{square} to be a face bounded by a cycle of size 4. A \emph{long edge} of a square is one of length $1$; a \emph{short edge} has length $\sqrt{3} / 3$. Every square has a unique vertex of degree 3. The \emph{tail} of a given square is the edge which is incident to this unique vertex and not on the boundary of the square. It is incident to another vertex which is not the center of a hexagon. See Figure~\ref{fig:short edge, long edge, tail, square and kite}.
\end{definition}
\begin{figure}
\caption{Short edge, long edge, tail, square and kite; orientations.
\label{fig:short edge, long edge, tail, square and kite}
\label{fig:pairs of orientations}}
\begin{center}
\includegraphics[height=1in]{kites.pdf}
\end{center}
\end{figure}
\begin{figure}
\caption{A strip
\label{fig:a strip}}
\begin{center}
\includegraphics[height= 1in]{strip.pdf}
\end{center}
\end{figure}
We orient our lattice such that for each hexagon, a white vertex is directly north of the center of the hexagon. The square which is bounded by both the center and this white vertex has orientation N, and the square directly south has orientation S. We shall often have cause to pair up orientations which are in opposite directions on the compass: N/S, NE/SW, NW/SE, and we shall speak of these as \emph{pairs of orientations}.
\begin{definition}
A \emph{kite} is a square together its tail, and a \emph{strip} of kites is a set of adjacent kites with the same pair of orientations, which meet at a degree-four vertex of their squares (not their tail). Given a kite $K$, let the unique vertex of degree 3 that is on the boundary of the square and incident to the tail be $K$'s \emph{root vertex}. See Figure~\ref{fig:short edge, long edge, tail, square and kite}.
\end{definition}
Note that if a kite has its square oriented N/S, and the tail is north of the square, we have a N kite, while we have a S kite if the tail is south of the square; the same holds for NE, SW, NW, SE kites.
\begin{definition}
A \emph{matching} is a set of vertex-disjoint edges of a graph $G$. It is \emph{perfect} when it covers every vertex of $G$.
\end{definition}
Two graph transformations which are often used in perfect matching enumeration are urban renewal and double-edge contraction (see Figure~\ref{fig:urban renewal and double edge contraction}). Double-edge contraction is the simpler of the two techniques, in which a vertex of degree two is merged with its two neighbors; it is easy to check that this transformation induces a bijection on perfect matchings.
Urban renewal is slightly more complicated; it replaces a square $S$ in the graph with a smaller square $S'$, together with four new edges connecting the vertices of $S$ to the vertices of $S'$ (see Figure~\ref{fig:urban renewal and double edge contraction}). When applied to an edge-weighted graph, it is possible to define new weights in such a way that urban renewal leaves the perfect matching generating function invariant up to a constant~\cite{propp-2003}. The reader should note that in our paper, we do \emph{not} do this change of edge weights, and indeed we are often not working with weighted graphs at all; for us, ``urban renewal'' simply means the graph transformation shown in Figure~\ref{fig:urban renewal and double edge contraction}.
\begin{figure}
\caption{Urban renewal and double edge contraction
\label{fig:urban renewal and double edge contraction}}
\includegraphics[height=1.5in]{urbanrenewal.pdf}
\end{figure}
\section{The Integer-order Diamonds}
We now define a family of subgraphs called \emph{diamonds}. This family is indexed by the \emph{order} of the diamonds, which will be an element of $\frac{1}{2}\mathbb{Z}_{\geq 0}$.
Our shuffling algorithm takes a perfect matching on an order-$m$ diamond and outputs a perfect matching on an order-$(m+\frac{1}{2})$ diamond.
We begin by discussing the integer-order diamonds, which were first discussed by Propp~\cite{propp-1999} and further studied by Ciucu~\cite{ciucu}, under the name \emph{Aztec dragons}.
\begin{definition}
A \emph{diamond of order 1} is denoted $D_1$ (see Figure~\ref{fig:order 1 diamond}); it is a connected subgraph of $dP_3$ which consists of 3 squares. One of them is N, while the other two are oriented NW and SE and are adjacent on the NE/SW strip they lie on.
Each $D_1$ contains a unique degree-four vertex which is on the boundary of all of its squares. We call this vertex the \emph{center vertex} of the $D_1$.
\end{definition}
\begin{figure}
\caption{The diamond of order 1, $D_1$
\label{fig:order 1 diamond}}
\begin{center}
\includegraphics[height=1in]{labeledorder1.pdf}
\end{center}
\end{figure}
Note that an N-S strip is tiled by copies of $D_1$, half of which have been rotated 180 degrees. We refer to these as \emph{North} and \emph{South $D_1$s}, according to the direction of their N/S kite.
Now, we develop a coordinate system such that we can define a diamond of order $n$, $n \in \mathbb{N}$. Recall that the $dP_3$ lattice decomposes into strips, each of which can be tiled by copies of $D_1$.
\begin{definition}
\label{defn:dn_integer_coords}
First, fix in the lattice a $D_1$ and call it $T(0,0)$. If $n$ is even, let $T(0,0)$ be oriented South, and if $n$ is odd, let $T(0,0)$ be oriented North. Call $T(1,0)$ the $D_1$ directly east of $T(0,0)$, and $T(-1,0)$ the $D_1$ directly west of $T(0,0)$. Similarly, the $D_1$ directly north of $T(0,0)$ is $T(0,1)$ and the one directly south of $T(0,0)$ is $T(0,-1)$. If $i,j \in \mathbb{Z}$, then $T(i,j)$ must be directly north of $T(i,j-1)$, directly south of $T(i,j+1)$, directly east of $T(i-1,j)$, and directly west of $T(i+1,j)$.
For $i \in \mathbb{N}$, let $S_i$ be the strip containing $T(0,i)$.
Finally, let
\[D_n = \displaystyle\bigcup_{\substack{-n < i,j < n \\ |i| +|j| < n}} T(i,j),\] where the union denotes the union of vertex and edge sets of subgraphs of $dP_3$.
\end{definition}
\begin{figure}[h]
\caption{Coordinates on integer- and half-integer-order diamonds. In both, $T(2,1)$ is shaded. Not all graph edges are shown.
\label{fig:coordinates on a whole diamond}
\label{fig:constructing a half diamond}}
\begin{center}
\includegraphics[height=2.80in]{order4_coordinates.pdf}
\rule{0.5in}{0in}
\includegraphics[height=2.90in]{order3half_coordinates.pdf}
\end{center}
\end{figure}
\section{The Half-integer-order Diamonds}
We construct a second, new family of finite subgraphs of $dP_3$. These are \emph{half-integer-order diamonds}, $D_{n + \frac{1}{2}}$ where $n \in \mathbb{N}$.
Again we describe them with a coordinate system.
\begin{definition}
Let $T(0,0)$ be oriented North when $n$ is even, and let it be oriented South when $n$ is odd. Let $D_1'$ denote a $D_1$ which has been reflected in the $y$ axis. Decompose the N-S strips into copies of $D_1'$; let $T(i,j)$ denote the $D_1'$ which is $i$ steps east and $j$ steps north of $T(0,0)$ as in Definition~\ref{defn:dn_integer_coords}.
Define $L_n$ to be the SE-oriented square of $T(n,0)$, and $R_n$ to be the SW-oriented square of $T(n+1,0)$. Let $D_{n + \frac{1}{2}}$ be the following graph (see Figure~\ref{fig:constructing a half diamond}):
\begin{align*}
D_{n+\frac{1}{2}} = \left( \displaystyle\bigcup_{\substack{-n < i,j < n \\ |i| +|j| < n}} T(i,j) \right) \cup \left( \displaystyle\bigcup_{\substack{0 < j \le n \\ 0 < i < n + 1 + j}} T(i,j) \right) \cup R_n \cup L_n.
\end{align*}
\end{definition}
Note that these graphs no longer decompose completely into copies of $D_1'$: there are two extra squares, $L_n$ and $R_n$. Finally, we specifically describe $D_{\frac{1}{2}}$.
\begin{definition}
\emph{$D_{\frac{1}{2}}$} consists of a single NE square.
\end{definition}
\begin{definition}
Let \emph{$M_m$} be the number of edges in a perfect matching on a diamond of order $m$, where $m \in \mathbb{N}$ or $(m + \frac{1}{2}) \in \mathbb{N}$.
\end{definition}
\begin{lemma}
Let $n = \lfloor m \rfloor$. Then
\begin{center}
$ M_m = \left\{
\begin{array}{cl}
{n(3n+1)} & m \in \mathbb{N} \\
{3n^2 + 4n + 2} & (m + \frac{1}{2}) \in \mathbb{N}.
\end{array}
\right. $
\end{center}
\end{lemma}
\begin{proof} The number of edges in a perfect matching is half of the number of vertices in the graph, so this is a straightforward computation using Euler's formula for planar graphs and the definitions of the graphs $D_n$.
\end{proof}
\section{The Rules of the Shuffling Algorithm}
Throughout this section, we let $n = \lfloor m \rfloor$.
We define a procedure called \emph{domino shuffling}, whose input is a perfect matching $M$ on a $D_m$, and whose output is a perfect matching on $D_{m + \frac{1}{2}}$.
We were inspired to work out this algorithm by the following observation:
\begin{figure}
\caption{Applying urban renewal and double-edge contraction to a kite
\label{fig: applying urban renewal and double edge contraction to a kite}}
\begin{center}
\includegraphics[height=2in]{rulecases.pdf}
\end{center}
\end{figure}
\begin{lemma}
\label{lem:lattice shift}
Urban renewal applied to all kites of a given pair of orientations, followed by contracting double edges wherever possible, simply translates the $dP_3$ lattice.
\end{lemma}
\begin{proof}
Without loss of generality, let us perform urban renewal on North and South kites; the same argument applies to NE/SW and NW/SE kites.
First of all, we choose a N/S kite and call it $K$; its square has vertices $a,b,c,d$ and its tail has edge $de$. Then, apply urban renewal to all of the N/S kites (see Figure ~\ref{fig: applying urban renewal and double edge contraction to a kite}). For each kite we have a new square $a',b',c',d'$ and 4 new edges $aa', bb', cc', dd'$.
We consider the new edges associated with $K$. We observe that one of the new edges, $dd'$, is incident to the root vertex of $K$. Two other new edges, $aa'$ and $cc'$, are adjacent to the new edges of the N/S kites which lie on the same N/S strip as $K$. We now apply double-edge contraction at the vertices $a,c,d$, and we do this for every kite to which we applied urban renewal. Hence, for each kite we have deleted 1 old and 3 new edges, and added 1 new edge. This new edge is $bb'$, which is incident to the vertex opposite the original root vertex. Therefore, the new figure is in fact a kite with tail $bb'$, oriented in the opposite direction to the original kite: we have flipped the kite. Moreover, this flip has occurred at every N/S kite, hence we have shifted the lattice.
\end{proof}
\begin{figure}
\caption{Applying urban renewal and double-edge contraction to N/S kites in a strip.
\label{fig:applying urban renewal to dP3 lattice}}
\begin{center}
\includegraphics[height=0.88in]{figure8.pdf}
\end{center}
\end{figure}
Now, we proceed to define the algorithm as applied to a perfect matching $M$ on $D_m$. Let $n = \lfloor m \rfloor$.
\begin{figure}
\caption{Steps 1 through 3 of shuffling}
\begin{center}
\includegraphics[height=2in]{order2ovals.pdf}
\includegraphics[height=2in]{order2tails.pdf}
\includegraphics{order25extavertices.pdf}
\end{center}
\end{figure}
\begin{algorithm}[The domino shuffle] Begin with a perfect matching on $D_m$.
\label{alg:domino shuffle}
\begin{enumerate}
\item If $m$ is an integer, draw an oval around every NE/SW kite in $D_m$, including partial kites. Otherwise, draw an oval around every NW/SE kite, including partial kites.
\item Fill in all of the partial kites around the boundary by adding new edges to the diamond. None of these new edges are added to the matching, with the following exception: the root vertex of each circled kite must be matched, so if it is not, add the tail of that kite to $M$. We will add $2n +1$ tails in this way, by Lemma 6 below.
\item Flip each kite in each oval so that it has the opposite orientation.
\item Draw the new matching $M'$ using the rules in Algorithm~\ref{alg:edge rules} (see below), shown in Figure~\ref{fig:rules of domino shuffling on a kite}.
\item Drop the unmatched vertices on the boundary ovals (the tail of every other kite is never part of the matching).
\end{enumerate}
\end{algorithm}
Observe that the edges not on any circled kite are not altered by our algorithm: they remain as they are (either matched or unmatched).
\begin{figure}
\caption{Steps 4 and 5 of the domino shuffle
\label{fig:steps 5 and 6 of shuffling}
\label{fig:rules of domino shuffling on a kite}}
\begin{center}
\includegraphics[width=2.8in]{thearticlerules.pdf}
\rule{0.5in}{0in}
\includegraphics[width=1.3in]{order25wovertices.pdf}
\end{center}
\end{figure}
As stated earlier, the shuffling is inspired by applying urban renewal and double contraction to the squares of the circled kites (but not changing any of the weights on these edges -- indeed, all of this will work on an unweighted graph), and keeping track of what happens to the associated perfect matching.
The rules are shown graphically in Figure~\ref{fig:rules of domino shuffling on a kite}, which is perhaps the most useful format. However, to ensure clarity, we will also describe the rules more formally.
We call our kite $K$ and adopt the notation of Lemma~\ref{lem:lattice shift}, illustrated in Figure~\ref{fig: applying urban renewal and double edge contraction to a kite}: the vertices of $K$ are $a,b,c,d,e$ and after urban renewal, the square of $K$ is replaced with a new graph $H$ with vertices $a,a',b,b',c,c',d,d'$.
\begin{algorithm}[Rules for edges]
\label{alg:edge rules}
Let $K$ be a kite with edges as described above.
\begin{enumerate}
\item[Case 1:] No edges in $K$ are in $M$.
Then, the root vertex would not be matched, since it is not adjacent to any edges outside $K$; so this case does not occur.
\item[Case 2:] 1 edge in $K$ is in $M$.
\begin{itemize}
\item[a.] Suppose that the tail $ed$ is in $M$. Take either $a'b'$ and $c'd'$, or $a'd'$ and $c'b'$, to be in the new matching. This is called a \emph{creation}.
\item[b.] Suppose that the tail is not in $M$. Then $M$ contains a short edge, either $ad$ or $dc$. Take $b'c'$ (respectively, $a'b'$) to be in the matching. We call this the \emph{short edge} case.
\end{itemize}
\item[Case 3:] 2 edges in $K$ are in $M$.
\begin{itemize}
\item[a.] Suppose that the tail of $K$ is in $M$. Then, either $bc$ or $ab$ is in $M$. Take $a'd'$ (respectively, $c'd'$) to be in the new matching, as well as the new tail $bb'$. We call this case the \emph{long edge} case.
\item[b.] Suppose that the tail of $K$ is not in $M$. Then take only the new tail $bb'$ to be in the new matching.
We call this case an \emph{annihilation}.
\end{itemize}
\end{enumerate}
\end{algorithm}
Let us analyze this algorithm, to see that it works as advertised.
\begin{lemma}
Shuffling a perfect matching on all of $dP_3$, choosing one of the two outcomes of each creation arbitrarily, produces another perfect matching on the translation of $dP_3$.
\end{lemma}
\begin{proof}
We examine one oval, and proceed by cases as in Algorithm~\ref{alg:edge rules}. Our strategy is to perform the operations of urban renewal and double edge contraction, each of which comes with an induced map on perfect matchings. In each case, we verify that we have duplicated the shuffling rules of Algorithm~\ref{alg:edge rules}.
\begin{enumerate}
\item[Case 1:] No edges in $K$ are in $M$.
As remarked above, this is not possible.
\item[Case 2:] 1 edge in $K$ is in $M$.
\begin{itemize}
\item[a.] Suppose that the tail $ed$ is in $M$. Then, $a,b,$ and $c$ are matched outside of $K$. When we apply urban renewal, we must match the vertices on $S'$. Hence, we can take either $a'b'$ and $c'd'$ or $a'd'$ and $c'b'$. There are two choices. When we apply double-edge contraction we see that $K'$ will have either $a'b'$ and $c'd'$ or $a'd'$ and $c'b'$ in its matching.
\item[b.] Suppose that the tail is not in $M$. In order to ensure that our root vertex is matched, we only have a case where our matching contains a short edge, either $ad$ or $dc$. Without loss of generality, suppose that this edge is $ad$. Then, $b,c,$ and $e$ are matched outside of $K$. We only need to match $S'$ and $a$ and $d$ on $H$. Hence, we let $aa'$ and $dd'$ be in the matching as well as $b'c'$. Then, we have a perfect matching. Finally, we apply double-edge contraction and obtain $K'$ such that only $b'c'$ is in the matching.
\end{itemize}
\item[Case 3:] 2 edges in $K$ are in $M$.
\begin{itemize}
\item[a.] Suppose that the tail of $K$ is in $M$. Then, we must have a long edge in $M$. Without loss of generality, let this edge be $bc$. Then, we have that $a$ is matched outside of $K$. We must now match $S'$ and $b$ and $c$ in $H$. Thus, we take the edges $a'd'$, $bb'$, and $cc'$. Applying double-edge contraction, we get that the edges in $K'$ which are in our new matching are $a'd'$ and $b'b$.
\item[b.] Suppose that the tail of $K$ is not in $M$. Then, the 2 edges in $M$ must be a short edge and a long edge on $S$. Without loss of generality, we let these two edges be $ad$ and $bc$; the other possibility is choosing $dc$ and $ab$. Now, we have that $e$ is matched outside of $K$, so we include the edges $aa'$, $bb'$, $cc'$, and $dd'$. Then, when we apply double-edge contraction, we will only have $bb'$ in our new matching. We see that this occurs no matter which pair of edges on $S$ we select.
\end{itemize}
\end{enumerate}
\end{proof}
\begin{lemma}
When applying step 2 of our shuffling algorithm, we add $2n + 1$ tails to ensure that every root vertex is matched.
\end{lemma}
\begin{proof} Consider first the case $m \in \mathbb{Z}$.
For each strip, we see that the two $D_1$s on the boundary are either both oriented N, or both oriented S. If they are both N diamonds, we circle two kites on the eastern boundary, and one on the western boundary. If they are both S diamonds, we circle two kites on the western boundary, and one on the eastern boundary. Two of these kites have their tails in $D_m$, while the third one only has one long edge in $D_m$. Hence, its root vertex will never be matched. So, we will have to add as many tails as the number of strips.
Finally, note that there are two more kites on the boundary: one north of $T(0,n-1)$ and the other south of $T(0,-(n-1))$. Each of these kites has only one long edge in $D_m$, hence its root vertex also must be matched.
Thus, we add exactly $2n-1 + 2 = 2n+1$ tails.
The same argument holds for half-integer-order diamonds, if we replace $D_1$ by $D_1'$ throughout.
\end{proof}
\begin{lemma}
Given a perfect matching $M$ on $D_m$, applying shuffling to $M$ gives us a matching $M'$ which covers all of the vertices on a diamond of order $m + \frac{1}{2}$.
\end{lemma}
\begin{proof}
First of all, by Lemma~\ref{lem:lattice shift}, domino shuffling applied to the entire lattice behaves as a simple shift. Hence we simply need to show that the number of strips on the new subgraph is correct, that the correct types of squares are present on the boundary, and that step (5) is deterministic and removes only tail vertices from the boundary.
We treat in detail the case where $m=n$ is an integer; the half-integer case is similar. We encourage the reader to refer to Figures~\ref{fig:some diamonds} and~\ref{defn:dn_integer_coords} to verify these claims.
First, note that an N square transforms into a SE square and \textit{vice versa}; a S square becomes a NW square and \textit{vice versa}; and a SW square becomes a NE square and \textit{vice versa}.
When we flip the kite directly north of $T(0,n-1)$ we have added a
square that is in a strip which was previously not included in our subgraph.
This new square is a SW, since the original is NE. Similarly, when we flip the
kite which is directly south of $T(0,-(n-1))$, we include a square which was
not on any strips that were on our original graph. The new square will be NE,
since the original was SW. We see that we have increased the number of
strips by 1.
Now, we check that our new boundary is correct.
We start with the northernmost kite that is added in Step 2: it is a NE kite. Hence, the square we obtain after shuffling is a SW square. The northernmost square on the original subgraph was a SE square, and this one will become a N square. There is also an adjacent SW square in the original graph which becomes a NE square. These three new squares share a common vertex which becomes the center vertex of an eastern N $D_1$; these form the correct shape for the top corner of a half-integer-order diamond. Moreover, one can apply this to the $D_1$'s on the strips which lie on or to the north of the center strip $S_0$.
We can apply a similar argument for the strips which lie south of $S_0$. We start with the southernmost kite which is added in Step 2: it is a SW kite. Hence, the square we obtain after shuffling is a NE square. The southernmost square on the original subgraph was a NW square, and this one will become a S square. Similarly, we added a NE kite in Step 2, whose square will be a SW square after we apply shuffling. These three new squares share a common vertex which becomes the center vertex of a western S $D_1$. Hence, we have the exact shape which we were looking for. One can apply this to the $D_1$s on the strips which lie south of $S_0$.
Next, we check that step (5), in which unmatched boundary vertices are removed, is completely deterministic. Indeed, all tail vertices on the boundary are always unmatched after shuffling. This follows from an inspection of the shuffling rules. Let $K$ be a kite. Observe that the tail of $K$ is unmatched after shuffling precisely when the vertex $v$ incident to the two long edges of $K$ is matched outside of $K$ \emph{before} shuffling, which is always the case when $v$ is on the boundary of $D_m$.
\end{proof}
\section{Proof of Theorem 1}
Our proof is essentially an analysis of the domino shuffling algorithm. In short, the algorithm behaves much like a $1$-to-$2^{n+1}$ correspondence for enumerative purposes.
\begin{proof}
Let $n = \lfloor m \rfloor$.
When we apply the shuffling algorithm to $M \in PM(D_m)$, we add $2n + 1$ tails, but we know that the associated matching on $D_{m + \frac{1}{2}}$ has $3n + 2$ more edges than those in $M$. We can account for $2n + 1$ of these new edges by the $2n + 1$ tails added when applying our algorithm. Hence, the difference between the number of creations and number of annihilations is $n+1$.
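In more detail, the per-kite bookkeeping implied by Algorithm~\ref{alg:edge rules} justifies this count: a creation replaces one matched edge by two (net $+1$), an annihilation replaces two matched edges by one (net $-1$), and the short edge and long edge cases each preserve the number of matched edges. Hence, writing $C$ and $A$ for the numbers of creations and annihilations,
\[
3n+2 = (2n+1) + C - A, \qquad\text{that is,}\qquad C - A = n+1.
\]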
Suppose that $M$ has $k$ annihilations. There is a class of exactly $2^k$ matchings which have $k$ annihilations on these kites, and are otherwise identical to $M$ (by Case 3b, either choice of pair of edges on each such kite produces the same output). Moreover, $M$ must experience $k + n + 1$ creations under shuffling.
On each kite which had a creation, there are 2 possible outcomes. Hence, the $2^k$ matchings with $k$ annihilations map to exactly $2^{k+n+1}$ matchings on $PM(D_{m + \frac{1}{2}})$. As such,
\begin{align*}
|PM(D_{n+1})| &= 2^{n+1} |PM(D_{n+\frac{1}{2}})|; \\
|PM(D_{n+\frac{1}{2}})| &= 2^{n+1} |PM(D_{n})|.
\end{align*}
The count of perfect matchings now follows by a standard induction argument, the base cases being $|PM(D_0)| = 1$, because the empty graph has only the empty matching, and $|PM(D_{\frac{1}{2}})| = 2$, because the order-$\frac{1}{2}$ diamond is a SW square and has two matchings (see Figure~\ref{fig:perfect matchings on order half diamond}).
\end{proof}
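Unwinding these recurrences from the base cases yields the closed forms
\[
|PM(D_n)| = \prod_{k=1}^{n} 2^{k}\cdot 2^{k} = 2^{n(n+1)}, \qquad |PM(D_{n+\frac{1}{2}})| = 2^{n+1}\cdot 2^{n(n+1)} = 2^{(n+1)^2},
\]
the second of which is the count of $2^{(n+1)^2}$ perfect matchings for the half-integer family.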
\begin{figure}
\caption{Case $m = \frac{1}{2}$
\label{fig:perfect matchings on order half diamond}}
\begin{center}
\includegraphics[height=1in]{basecase1.pdf}
\end{center}
\end{figure}
\section{Height function}
\label{sec:height function}
Our next aim is to give generating functions for the perfect matchings on $D_n$. To describe the statistics encoded in our generating functions, we will need to introduce the notion of \emph{height function}. This is a rather general concept associated to any perfect matching on a planar bipartite graph (see, for example,~\cite{kenyon}), but it is necessary to work out the precise nature of the height function on $dP_3$ to proceed further.
\begin{figure}
\caption{A fundamental domain of the dual quiver of the $dP_3$ lattice
\label{fig:dualquiver}}
\begin{center}
\includegraphics[width=1.5in]{quiver.pdf}
\end{center}
\end{figure}
If a planar graph $G$ is bipartite, its dual graph $Q$ inherits a natural orientation: orient each edge of $Q$ so that a black vertex of $G$ is on the left (say). We use the letter $Q$ because the dual graph should be regarded as a \emph{quiver}~\cite{szendroi, young-2009}, but for combinatorialists a quiver is precisely the same structure as a directed graph. If we do this construction for the $dP_3$ graph, we obtain an orientation on the (6,4,3) lattice with arrows pointing counterclockwise on hexagons and triangles, and clockwise on squares (see Figure~\ref{fig:dualquiver}).
We will assign an integer to each arrow $e$ of $Q$ as follows:
\begin{equation}
H(e)= \begin{cases}
1 & \text{if $e$ crosses a long edge} \\
2 & \text{if $e$ crosses a short edge}
\end{cases}
\end{equation}
This integer is called the \emph{height change} of $e$. Moreover, given a perfect matching $M$ on $G$, we will define a function $h:V(Q) \rightarrow \mathbb{Z}$ as follows: fix $a$ to be one of the vertices in $Q$ and set $h(a) = 0$. Now, if $e:b \rightarrow c$ is an edge of $Q$ and $h(b)$ has been defined, set
\begin{equation}
h(c) =
\begin{cases}
h(b) + H(e) - 6 &\text{if }e \in M \\
h(b) + H(e) &\text{if }e \not \in M.
\end{cases}
\end{equation}
It is easy to check that $h$ is well-defined up to the initial choice of height $h(a)$. This is essentially the same height function defined in~\cite{kenyon}.
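As a consistency check, note that the cycle of arrows bounding a face of $Q$ crosses exactly the edges of $G$ incident to a single vertex $v$, and exactly one of those edges belongs to the perfect matching $M$. Well-definedness of $h$ around that face therefore amounts to the total height change satisfying
\[
\sum_{e} H(e) = 6,
\]
which forces the assignment above: hexagonal faces of $Q$ are bounded by six arrows crossing long edges ($6\times 1$), triangular faces by three arrows crossing short edges ($3\times 2$), and square faces by two of each ($2\times 1 + 2\times 2$).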
\begin{definition}
Let $f$ be a face of $G$, bounded by edges $E = \{e_1, \cdots, e_{2k}\}$, listed in cyclic order. Let $M$ be a perfect matching on $G$ such that $M \cap E = \{e_1, e_3, \ldots, e_{2k-1}\}$. The \emph{plaquette flip of $M$ around $f$} is the perfect matching $M'$ which agrees with $M$ except around $f$, where $M' \cap E = \{e_2, e_4, \ldots, e_{2k} \}$.
\end{definition}
It is easy to check that performing a plaquette flip at a kite $S$ changes the height at $S$ by $\pm 6$, leaving the height function otherwise unchanged. Observe also that one can transform any perfect matching into any other by a sequence of plaquette flips. This is because if $h_1, h_2$ are height functions for two perfect matchings, then so is $\min(h_1, h_2)$; one constructs its associated perfect matching by superimposing the two matchings, producing a collection of disjoint doubled edges and loops of even length.
There are two ways to remove every second edge from a loop, leaving a perfect matching on the vertices of the loop; for each loop, remove the set of edges which minimizes the resulting height function inside the loop.
Doing this, we obtain the matching associated to $\min(h_1, h_2)$.
In particular, the function
\[
\min_{M \in PM(D_n)} h(M)
\]
is a height function. Observe that this matching admits only plaquette moves which increase $h$.
\begin{figure}
\caption{Tiles for encoding the height function on the $dP_3$ lattice.
\label{fig:tiles}}
\includegraphics[width=3in]{threetiles.png}
\end{figure}
\begin{figure}
\caption{A perfect matching, viewed as a stack of bricks in a tray.
\label{fig:traytiles}}
\includegraphics[width=1.6in]{order3_1.png}
\includegraphics[width=1.6in]{order3_3.png}
\includegraphics[width=1.6in]{order3_5.png}
\end{figure}
In light of these observations, we may visualize a perfect matching on $D_n$ as a pile of special bricks, contained in a V-shaped tray (just as is done in~\cite{eklp}). Plaquette flipping corresponds to adding a brick, and the tray encodes the shape of the minimal height function (see Figure~\ref{fig:traytiles}).
See Figure~\ref{fig:tiles} for a picture of the tiles. They are constructed so as to meet the following criteria:
\begin{enumerate}
\item Each tile covers exactly the four faces in $Q$ which are dual to the vertices composing a square in $G$;
\item If one tile is placed partially atop another, covering two adjacent kites $a$, $b$ of $G$ joined by a dual arrow $e$ in $Q$, then the vertical distance between the two tiles is the height change $H(e)$;
\item The tiles, viewed from above, have two dimers stencilled on top: the same two dimers which are present \emph{after} a plaquette flip on the corresponding kite has been performed, so as to \emph{increase} the height function there.
\end{enumerate}
Any tiles meeting these criteria will work to reproduce perfect matchings on $D_n$. The tiles we have chosen are the simplest possible ones: all of the tiles are identical, all of the faces are flat, and the underside of each tile is planar. We have colored each tile according to its orientation.
This construction appears to be highly artificial, so we should say a few words to explain why it is not. Our notion of height function comes from noncommutative Donaldson-Thomas theory~\cite{szendroi}, in which we study the representation theory of a certain quotient of the path algebra $\mathbb{C}Q$, the algebra of directed paths in $Q$ (multiplication being given by concatenation when the paths share an endpoint, and the zero map when they do not). This theory was worked out first in the physics literature in~\cite{franco}. In particular, let
$\mathcal{P}_{CCW}$ and $\mathcal{P}_{CW}$ denote the sets of counterclockwise (respectively, clockwise) directed paths around single faces in $Q$, and let
$W \in \mathbb{C} Q / [\mathbb{C} Q, \mathbb{C} Q]$ be the \emph{superpotential}
\[
\sum_{p \in \mathcal{P}_{CCW}} \prod_{a \in p} a -
\sum_{p \in \mathcal{P}_{CW}} \prod_{a \in p} a.
\]
Let $I_W$ be the ideal generated by the partial derivatives $\partial W / \partial a$ for each arrow $a \in Q$. The algebra that we study in noncommutative Donaldson--Thomas theory is $\mathbb{C} Q / I_W$. In this algebra, any two closed loops around a face with the same endpoint are identified; moreover, if $p$ is a path from $a$ to $b$, and $\ell_a$ and $\ell_b$ are loops enclosing one face rooted at $a$ and $b$ respectively, one can show that
\begin{equation}
\label{eqn:moving a loop along a path}
[\ell_b p] = [p \ell_a].
\end{equation}
(see~\cite{davison} for a proof).
For our particular quiver $Q$, this algebra is non-commutative, but its center is described by a system of equations which cut out a cone on a del Pezzo surface.
Our height change, as defined, is designed so that any two paths which are equal in $\mathbb{C} Q / I_W$ have the same height change. Indeed, consider the map
\begin{align*}
\Psi_a: \mathbb{C} Q / I_W &\rightarrow \widetilde{Q} \times \mathbb{Z}_{\geq 0}
\end{align*}
which sends a path to its endpoint in the universal cover $\widetilde{Q}$ together with its height change. We have chosen the definition of the height function $h$ in such a way that $\Psi_a$ is well-defined, and in nice cases, namely when $Q$ is \emph{consistent}, $\Psi_a$ is injective. Our quiver is in fact consistent. See~\cite{bocklandt} for a number of equivalent definitions and implications of consistency, as well as~\cite{broomhead, davison,szendroi} for an introduction to the portions of the representation theory of quivers relevant to this project. See also~\cite{eager} for a physics perspective.
\section{The Generating Function}
\label{sec:generating_functions}
We aim to give a generating function which counts, for each perfect matching $M$, the number of tiles needed to construct $M$. Indeed, we can do slightly better: we can record some information about the orientations of these tiles.
\begin{theorem}
\begin{align*}
Z_{n}(a,b,c) &=\prod_{i = 0}^{n-1} (1 + a^ib^{i}c^{i+1})^{n-i}(1+a^ib^{i +1}c^{i+1 })^{n-i}. \\
Z_{n+\frac{1}{2}}(a,b,c) &= \prod_{i=0}^{n} \left(1+a^{i+1}b^ic^i\right)^{n-i+1}
\prod_{i=0}^{n-1}
\left(1+a^{i+1}b^{i+1}c^i\right)^{n-i}.
\end{align*}
\end{theorem}
\begin{proof}
Our strategy is to use a standard, folklore technique for encoding the weight of a perfect matching on $D_n$ as an edge weight function
\[
w_{n;abc} : E(D_n) \rightarrow \{\text{Laurent monomials in }a,b,c\}
\]
where a Laurent monomial is a term of the form $a^ib^jc^k$, $i,j,k \in \mathbb{Z}$. This was worked out explicitly for the square lattice in~\cite{young-2009}.
Consider a hexagon-shaped fundamental domain $H$ of the $dP_3$ lattice, which has six squares, one square of each orientation. Assign weight 1 to both of the horizontal edges in $H$, as well as to the eight edges along the upper and lower boundary of $H$. Assign weight $X$ to the easternmost vertical edge of the SE kite in $H$, and weight $Y$ to the westernmost vertical edge of the NW kite in $H$. Having done this, there is now a unique way to assign Laurent monomial weights to the other edges in $H$ such that a plaquette flip around one of the six faces $f$ multiplies the weight of a perfect matching by the weight of $f$; it is shown in Figure~\ref{fig:weights around H} on the left. Denote these weights by
$w^{\text{local}}_{X,Y;a,b,c}$.
\begin{figure}
\caption{Weights on the lattice, before and after shuffling.
\label{fig:weights around H}
\label{fig:weights around H afterwards}}
\begin{center}
\input{beforeweights.pdftex_t}
\rule{0.5in}{0in}
\input{afterweights.pdftex_t}
\end{center}
\end{figure}
For $n \in \frac{1}{2}\mathbb{Z}$, let $G$ be a minimal covering of $D_n$ by hexagons. Define $w_{n;abc}(e) = 1$ for all vertical edges $e$ on the western boundary of $G$. This has the effect of setting $Y=X(abc)^{-1}=1$ in $w^{\text{local}}_{X,Y;abc}$; for edges $e$ in these hexagons, let
\[w_{n;a,b,c}(e) = w^{\text{local}}_{abc,1;a,b,c}.\]
We can now define $w_{n;a,b,c}$ inductively for the remaining edges in $G$. Finally we restrict $w_{n;a,b,c}$ to $D_n$ to obtain the desired weight function.
We define the generating function
\begin{align*}
Z_n(a,b,c)
&= K \! \sum_{\pi \text{ perfect matching}} \! w_{n;a,b,c}(\pi) \quad \in \mathbb{Z}[a,b,c].
\end{align*}
Now, when we perform the NE/SW domino shuffle (which takes $Z_n$ to $Z_{n+\frac{1}{2}}$) on this weighted lattice, we allow each edge to retain its weight (see Figure~\ref{fig:weights around H afterwards}). This can be achieved by a simple change of variables in the generating function:
\begin{align*}
Z_{n+\frac{1}{2}}(a, (ac)^{-1}, (ab)^{-1}) &=
(1+a)^{n+1} \cdot K Z_n(a,b,c)
\end{align*}
Here, $K$ is the Laurent monomial such that the constant term of $Z_n(a,b,c)$ is $1$; it is the inverse of the weight assigned to the matching with minimal height function.
Changing variables $x=a, y=(ac)^{-1}, z=(ab)^{-1}$ for the moment, and doing the analogous computation for the NW/SE shuffle, we obtain the following system of recurrences:
\begin{align*}
Z_{n+\frac{1}{2}}(x,y,z) &= K' \cdot (1+x)^{n+1} \cdot Z_n(x, (zx)^{-1}, (yx)^{-1}) \\
Z_{n+1}(x,y,z) &= K'' \cdot (1+z)^{n+1} Z_{n+\frac{1}{2}}((yz)^{-1}, (xz)^{-1}, z)
\end{align*}
for some Laurent monomials $K', K''$.
Let us solve these for the integer-order diamonds, leaving the half-integer order ones as a nearly identical exercise. We change variables again: $x=a, y=b, z=c$. It is also helpful to set $q=abc$, so that we obtain
\begin{align*}
Z_{n+1}(a,b,c) &= K''\,(1+c)^{n+1}\, Z_{n+\frac{1}{2}}\!\left((bc)^{-1}, (ac)^{-1}, c\right)\\
&= K'K''\,(1+c)^{n+1} \left(1+(bc)^{-1}\right)^{n+1} Z_n\!\left((bc)^{-1}, b, abc^2\right) \\
&= K'K''(1+c)^{n+1}\left(1+\frac{a}{q}\right)^{n+1} \cdot Z_n\!\left(\frac{a}{q}, b, qc\right).
\end{align*}
Thus, writing $1+\frac{a}{q} = \frac{a}{q}\left(1+\frac{q}{a}\right)$ and factoring out $\left(\frac{a}{q}\right)^{n+1}$, we have
\begin{align}
\label{eqn:recurrence_with_constant}
Z_{n+1}(a,b,c) &= K' \cdot K'' \cdot (1+c)^{n+1}\left(\frac{a}{q}\right)^{n+1}\left(1+\frac{q}{a}\right)^{n+1}Z_n\left(\frac{a}{q}, b, qc\right).
\end{align}
We know that the following are polynomials with constant term 1:
\[
\begin{array}{cccc}
Z_{n+1}(a,b,c), &
(1+c)^{n+1}, &
\left(1+\frac{q}{a}\right)^{n+1}, &
Z_n\left(\frac{a}{q}, b, qc\right).
\end{array}
\]
Hence, the monomial $K'K''\left(\frac{a}{q}\right)^{n+1}$ is equal to 1 and can be omitted from \eqref{eqn:recurrence_with_constant}, giving the easy linear recurrence relation
\[
Z_{n+1}(a,b,c) = (1+c)^{n+1}\left(1+\frac{q}{a}\right)^{n+1}Z_n\left(\frac{a}{q}, b, qc\right).
\]
It is elementary to check that the first formula in the theorem statement satisfies this first-order recurrence. As such, the theorem will follow once we check that $Z_1(a,b,c)$ is equal to $(1+c)(1+bc)$, which it is.
\end{proof}
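As a sanity check at $n=1$: writing $q=abc$, we have $1+\frac{q}{a}=1+bc$ and $Z_1\left(\frac{a}{q},b,qc\right)=(1+qc)(1+b\,qc)=(1+abc^2)(1+ab^2c^2)$, so the recurrence gives
\[
Z_2(a,b,c) = (1+c)^2(1+bc)^2(1+abc^2)(1+ab^2c^2),
\]
in agreement with the product formula in the theorem.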
\section{Conclusion}
We should reiterate that our main contribution is the domino shuffling algorithm itself, rather than our enumeration formulae. One other application of this algorithm is to a different type of boundary conditions on this lattice, coming from Donaldson-Thomas theory, analogous to the pyramid partitions studied in~\cite{young-2009}; this example was mentioned in~\cite{mozgovoy-reineke}.
Indeed, we were originally inspired to study this problem because of its algebro-geometric connections. Accordingly, in a future paper, we will explain how to
compute this generating function.
The reader may have noted that when $n$ is an integer, the number of perfect matchings of our order-$n$ diamonds is the square of the number of perfect matchings of the order-$n$ Aztec diamond; one might hope for a bijective proof of this fact. However, the corresponding statement is \emph{not} true of the generating functions, so such a proof will likely be very difficult to find.
\begin{figure}[h]
\caption{A uniform random matching on an order 100 diamond
\label{fig:bigguy}}
\begin{center}
\includegraphics[height=2.62in]{order100_1.png}
\end{center}
\end{figure}
We have used our shuffling algorithm to generate several perfect matchings of large diamonds, chosen uniformly at random from the set of all matchings. An order-100 example is shown in Figure~\ref{fig:bigguy}. One can clearly see the boundary approaching a smooth curve; indeed it is natural~\cite{kenyon-okounkov} to guess that it is an algebraic curve, with boundary fluctuations given by the Airy kernel~\cite{johansson}. The simplicity of the domino shuffle that we have described suggests that the approach of~\cite{jockusch-propp-shor} may be effective in calculating this limiting curve: the frozen boundary evolves according to a TASEP-like process which is likely amenable to analysis.
The first author was supported in part by an undergraduate research scholarship
from the Institut des Sciences Math\'ematiques (ISM).
\bibliographystyle{plain}
| {
"timestamp": "2011-10-25T02:07:18",
"yymm": "1011",
"arxiv_id": "1011.0045",
"language": "en",
"url": "https://arxiv.org/abs/1011.0045",
"abstract": "We present a version of the domino shuffling algorithm (due to Elkies, Kuperberg, Larsen and Propp) which works on a different lattice: the hexagonal lattice superimposed on its dual graph. We use our algorithm to count perfect matchings on a family of finite subgraphs of this lattice whose boundary conditions are compatible with our algorithm. In particular, we re-prove an enumerative theorem of Ciucu, as well as finding a related family of subgraphs which have 2^{(n+1)^2} perfect matchings. We also give three-variable generating functions for perfect matchings on both families of graphs, which encode certain statistics on the height functions of these graphs.",
"subjects": "Combinatorics (math.CO); High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG)",
"title": "Domino shuffling for the Del Pezzo 3 lattice",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717440735736,
"lm_q2_score": 0.7185943985973772,
"lm_q1q2_score": 0.7089449291057152
} |
https://arxiv.org/abs/2103.08002 | Variance of the number of zeros of dependent Gaussian trigonometric polynomials | We compute the variance asymptotics for the number of real zeros of trigonometric polynomials with random dependent Gaussian coefficients and show that under mild conditions, the asymptotic behavior is the same as in the independent framework. In fact our proof goes beyond this framework and makes explicit the variance asymptotics of various models of random Gaussian polynomials. Though we use the Kac--Rice formula, we do not use the explicit closed formula for the second moment of the number of zeros, but we rather rely on intrinsic properties of the Kac--Rice density. | \section{Introduction}
The asymptotic behavior of the variance of the number of zeros of random trigonometric polynomials $\sum a_k\cos(kt) + b_k\sin(kt)$ with independent Gaussian coefficients has been established in \cite{Gra11}. Since then, the variances of numerous models have been studied: for instance, see \cite{Aza16} for the analogous model $\sum a_k\cos(kt)$, see \cite{Bal19, Do20} in the independent framework with non-Gaussian coefficients, or more recently \cite{Lub21} for random orthogonal polynomials on the real line. In higher dimensions, the asymptotic behavior of the variance of the random nodal volume has been established for several kinds of random wave models; see for instance the survey \cite{Ros19} and the references therein.\jump
Roughly speaking, two distinct methods are used in the literature to study the asymptotic behavior of the variance: the Wiener chaos expansion of the number of zeros (see for instance \cite{Aza16, Mar20}), and the Kac--Rice formula (see \cite{Gra11,Lub21}). In this paper we adopt the second method to make explicit the asymptotics of the variance of the number of zeros of random trigonometric polynomials with dependent coefficients.\jump
This paper can be viewed as the natural continuation of \cite{Ang19} and \cite{Ang21}, in which the asymptotic behavior of the mean number of zeros of this model has been established. To the best of our knowledge, the question of variance for dependent trigonometric polynomials has not been considered until now, and the aforementioned examples do not seem to cover this situation.\jump
Let us now detail our model. Let $\T = \R/2\pi\Z$ be the one-dimensional torus, which can be identified with a segment of $\R$ of length $2\pi$. For $s,t\in \T$ we define the distance
\[\dist(s,t)=d(s-t,2\pi\Z).\numberthis\label{eq:19}\]
For $s\in\T$ we define
\[X_n(s) = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\left(a_k\cos(ks) + b_k\sin(ks)\right),\]
with $(a_k)_k, (b_k)_k$ two independent stationary centered Gaussian processes with correlation function $\rho : \Z\rightarrow \R$. That is,
\[\forall k,l\geq0,\quad\E[a_ka_l] = \E[b_kb_l] = \rho(k-l).\]
Thanks to Bochner's theorem, the correlation function $\rho$ is associated with a spectral measure $\mu$ on the torus $\T$ via the relation
\[\rho(k) = \frac{1}{2\pi}\int_\T e^{-iku}\dd \mu(u).\]
We denote by $Z_{X_n}(I)$ the number of zeros of $X_n$ on a subinterval $I$ of the torus $\T$. Under suitable conditions on the spectral measure $\mu$, it has been shown in \cite{Ang19} and \cite{Ang21} that the expectation of the number of zeros of the process $X_n$ on $\T$ behaves like $\frac{2}{\sqrt{3}}n$, as in the independent framework.\jump
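For instance, i.i.d. standard Gaussian coefficients correspond to the correlation function $\rho(k)=\delta_{0k}$, that is, to the flat spectral measure $\dd\mu(u) = \dd u$ with constant density $\psi\equiv 1$:
\[\rho(k) = \frac{1}{2\pi}\int_\T e^{-iku}\dd u = \delta_{0k}.\]
The dependent models considered below deform this flat density while keeping it positive and continuous.\jump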
The first main theorem of this article is the following, and makes explicit the variance asymptotics for the number of zeros of the process $X_n$ on $\T$, as $n$ grows to infinity, under the assumption that the spectral measure $\mu$ has a positive continuous density.
\begin{theorem}
\label{theorem1}
We suppose that the spectral measure $\mu$ has a positive continuous density $\psi$ with respect to the Lebesgue measure on the torus $\T$. Then there is an explicit positive constant $C_\infty$ that does not depend on $\psi$, such that for any subinterval $J$ of the torus,
\[\lim_{n\rightarrow +\infty} \frac{\Var(Z_{X_n}(J))}{n} = \mathrm{length}(J)\,C_\infty.\]
\end{theorem}
The constant $C_\infty\simeq 0.089$ is thus the same constant computed in \cite{Gra11} in the particular framework of independent Gaussian random variables. It is quite remarkable that the limiting value is universal with respect to the dependence of the Gaussian coefficients, as long as the spectral measure $\mu$ has a positive continuous density.\jump
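The following short simulation is not part of the paper; it is only a minimal numerical sketch of Theorem \ref{theorem1} on the whole torus, where $\mathrm{length}(\T)=2\pi$. The spectral density $\psi(u)=1+\frac{1}{2}\cos(u)$, the approximate spectral synthesis of the coefficients, and all grid sizes are illustrative assumptions.
\begin{verbatim}
# Monte Carlo sketch: zeros of X_n with dependent Gaussian coefficients.
import numpy as np

rng = np.random.default_rng(0)

def stationary_coeffs(n, psi, m=4096):
    # Approximate spectral synthesis: the sequence below has
    # E[a_k a_l] ~ (1/2pi) int_T psi(u) cos((k-l)u) du = rho(k-l).
    u = 2.0 * np.pi * np.arange(m) / m
    z = rng.normal(size=m) + 1j * rng.normal(size=m)
    return np.real(np.fft.fft(np.sqrt(psi(u)) * z))[:n] / np.sqrt(m)

def count_zeros(n, psi):
    # Evaluate X_n on a fine grid of the torus and count sign changes.
    t = np.linspace(0.0, 2.0 * np.pi, 40 * n, endpoint=False)
    k = np.arange(1, n + 1)
    a, b = stationary_coeffs(n, psi), stationary_coeffs(n, psi)
    X = (np.cos(np.outer(t, k)) @ a + np.sin(np.outer(t, k)) @ b) / np.sqrt(n)
    s = np.sign(X)
    return int(np.count_nonzero(s * np.roll(s, -1) < 0))

psi = lambda u: 1.0 + 0.5 * np.cos(u)   # a positive continuous density
n, trials = 50, 500
Z = np.array([count_zeros(n, psi) for _ in range(trials)])
print("E[Z]/n   :", Z.mean() / n, "  target 2/sqrt(3) =", 2 / np.sqrt(3))
print("Var(Z)/n :", Z.var() / n, "  target 2*pi*C_inf ~", 2 * np.pi * 0.089)
\end{verbatim}
Up to Monte Carlo and discretization error, the two printed ratios should approach $2/\sqrt{3}\approx 1.155$ and $2\pi\,C_\infty\approx 0.56$ respectively.\jump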
Theorem \ref{theorem1} is in fact an application of the next, more general Theorem \ref{theorem2}, which puts forward the principal ingredients necessary to obtain such universal asymptotics for the variance. Let $(X_n)_n$ be a sequence of centered Gaussian random processes defined on an open subinterval $I$ of the torus $\T$, with covariance functions $(r_n)_n$ defined for $n\geq 0$ and $s,t\in I$ by
\[r_n(s,t) = \E[X_n(s)X_n(t)].\]
\begin{theorem}
\label{theorem2}
Let $J$ be a subinterval of $I$ such that $\overline{J}\subset I$. We suppose that the sequence of random processes $(X_n)_n$ satisfies the two following conditions
\begin{itemize}
\item[$(A1)$] There exists a continuous positive function $\psi$ on $I$ such that for every compact subset $K$ of $\R$, uniformly on $x\in J$, $u,v\in K$, and $a,b\in \lbrace 0,1,2,3,4\rbrace$,
\[\lim_{n\rightarrow+\infty} \frac{1}{n^{a+b}} r_n^{(a,b)}\left(x+\frac{u}{n},x+\frac{v}{n}\right) = (-1)^b\psi(x)\sinc^{(a+b)}(u-v).\]
\item[$(A2)$] There is a constant $C$ and an exponent $\alpha > \frac{1}{2}$ such that for $a,b\in\lbrace 0,1\rbrace$,
\begin{align*}
\forall s,t\in J,\quad|r_n^{(a,b)}(s,t)|\leq C\frac{n^{a+b}}{(n\dist(s,t))^\alpha}.
\end{align*}
\end{itemize}
Then there is an explicit positive constant $C_\infty$ independent of the function $\psi$, such that
\[\lim_{n\rightarrow +\infty} \frac{\Var(Z_{X_n}(J))}{n} = \mathrm{length}(J)\,C_\infty.\]
\end{theorem}
This last Theorem \ref{theorem2} covers Theorem \ref{theorem1}, as we will prove that the assumptions on the spectral measure $\mu$ in Theorem \ref{theorem1} ensure that the associated sequence of processes $(X_n)_{n}$ satisfies hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2}. In this particular case, we can take advantage of the fact that the covariance function $r_n$ of the model is a trigonometric polynomial of degree $n$, to show that the general statements of hypotheses $(A1)$ and $(A2)$ roughly follow from the case $a=b=0$.\jump
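To illustrate hypothesis $(A1)$ in the simplest situation: for i.i.d. coefficients (so that $\psi\equiv 1$), the covariance of the model is $r_n(s,t) = \frac{1}{n}\sum_{k=1}^{n}\cos(k(t-s))$, and the Riemann sum
\[
r_n\left(x+\frac{u}{n},x+\frac{v}{n}\right) = \frac{1}{n}\sum_{k=1}^{n}\cos\left(\frac{k}{n}(u-v)\right) \xrightarrow[n\rightarrow+\infty]{} \int_0^1\cos(\lambda(u-v))\dd\lambda = \sinc(u-v)
\]
gives the case $a=b=0$ of $(A1)$; each derivative in $s$ or $t$ brings out a factor of order $n$, which produces the normalization $n^{a+b}$ and the derivatives of $\sinc$.\jump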
In fact, Theorem \ref{theorem2} covers various previously known results about the asymptotics of the variance of the number of zeros of general random trigonometric polynomials. Indeed, it also allows us to make explicit the variance asymptotics for the number of zeros on any compact subinterval of $]0,\pi[$ of the process
\[\widetilde{X}_n(s)= \frac{1}{\sqrt{n}}\sum_{k=0}^n a_k\cos(ks+\theta),\quad\quad\theta\in\R,\numberthis\label{eq:33}\]
with $(a_k)_{k\geq 0}$ a stationary sequence of centered Gaussian random variables whose spectral measure has a positive continuous density. Note that the variance asymptotics for the number of zeros on $[0,\pi]$ for this process was established in the case of i.i.d. Gaussian $(a_k)_{k\geq 0}$ in \cite{Aza16}. In fact, a slight modification of our proof would yield the asymptotics for the variance of the number of zeros on the whole interval $[0,\pi]$.\jump
More recently, the authors in \cite{Lub21} established the variance asymptotics for the number of zeros of sums of real orthogonal polynomials (with respect to some compactly supported measure $\nu$) with independent Gaussian coefficients. The one-to-one mapping between $[-1,1]$ and the half-torus $[0,\pi]$ given by the relation $x =\cos(\theta)$ allows us to partially connect their results with Theorem \ref{theorem2}. Let $(P_n)_n$ be a sequence of real orthogonal polynomials with respect to a measure $\nu$ supported on $[-1,1]$. We suppose that $\nu$ has a continuous positive density $\phi$ on $]-1,1[$ which satisfies
\[\int_0^\pi \log(\phi(\cos\theta))\dd \theta > -\infty\quand \int_0^1 \frac{\omega_\phi(x)}{x}\dd x <+\infty.\]
We define
\[\overline{X}_n(s) = \frac{1}{\sqrt{n}}\sum_{k=0}^n a_kP_k(\cos(s)),\]
where $(a_n)_{n\geq 0}$ are i.i.d. centered Gaussian random variables. Then the hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2} are satisfied with $\psi= 1/\phi$ (see \cite{Lub09,Tot00}), and Theorem \ref{theorem2} implies \cite[Cor.~1.3]{Lub21}.\jump
Section \ref{sec1} of the paper is devoted to the study of the Kac density, that is, the integrand in the Kac--Rice formula for the second moment of the number of zeros of a general one-dimensional Gaussian process. We first recall standard formulas for Gaussian conditional expectation, and the integral expression for the second moment of the number of zeros of a smooth Gaussian process. Under suitable regularity assumptions on the covariance function, we then deduce the behavior of the Kac density near the diagonal and we give a non-singular formula for this integrand, valid in a neighborhood of the diagonal. Lastly, we give an explicit bound for the Kac density far away from the diagonal.\jump
In Section \ref{sec3}, we make explicit the asymptotics of the Kac density associated with a sequence of processes satisfying the hypotheses of Theorem \ref{theorem2}. Hypothesis $(A1)$ implies in particular that as $n$ grows to infinity, the sequence $(X_n)_n$ has a local limit proportional to the $\sinc$ process on $\R$. Together with the results of Section \ref{sec1}, we deduce after a suitable scaling that the Kac density associated with the process $X_n$ converges uniformly to the Kac density of the $\sinc$ process on $\R$ as $n$ grows to infinity. This fact leads to the proof of Theorem \ref{theorem2}. We then check that the model of trigonometric polynomials with dependent coefficients satisfying the hypotheses of Theorem \ref{theorem1} also satisfies the hypotheses of Theorem \ref{theorem2}, from which the conclusion of Theorem \ref{theorem1} immediately follows.
\section{Kac--Rice formula for the second moment}\label{sec1}
In this section we give the two main lemmas that quantify the behavior of the Kac density along the diagonal and far from the diagonal, for a general centered Gaussian process satisfying some natural regularity assumptions. First we recall some formulas for the conditional density of Gaussian vectors. For a Gaussian stochastic process $(Y(s))_{s\in I}$, where $I$ is a subinterval of $\R$, we denote by $r$ its covariance function defined for $s,t\in I$ by
\[r(s,t) = \E[Y(s)Y(t)],\]
and for $a,b\geq 0$, we denote by $r^{(a,b)}(s,t)$ the derivatives
\[r^{(a,b)}(s,t) = \partial_1^a\partial_2^b r(s,t) = \E[Y^{(a)}(s)Y^{(b)}(t)],\]
provided that $Y$ is sufficiently regular.
\subsection{Determinant formulas}
The Kac--Rice formula for the expectation and variance of the number of zeros of a centered Gaussian stochastic process $(Y(s))_{s\in I}$ involves the conditional Gaussian distributions
\[\gamma_0(s) \sim \mathrm{Law}\left((Y'(s)|Y(s)=0)\right)\;\,\text{and}\;\, (\gamma_{1}(s,t),\gamma_{2}(s,t))\sim\mathrm{Law}\left((Y'(s),Y'(t)|Y(s) = Y(t) = 0)\right).\]
We will need formulas for the distribution of $\gamma_0$, $\gamma_{1}$ and $\gamma_{2}$. For now we assume that the process $Y$ has $\CC^2$ sample paths and that the joint distribution of the Gaussian vector $(Y(s),Y(t))$ does not degenerate for $s\neq t$. Let
\[\Omega(s) := \Cov(Y(s),Y'(s)),\quand\Sigma(s,t) := \Cov(Y(s),Y(t),Y'(s),Y'(t)).\]
Then we have
\[
\Omega(s) = \begin{pmatrix}
r(s,s) & r^{(1,0)}(s,s) \\
r^{(1,0)}(s,s) & r^{(1,1)}(s,s)
\end{pmatrix},\]
and
\[\Sigma(s,t) = \begin{pmatrix}[cc|cc]
r(s,s) & r(s,t) & r^{(1,0)}(s,s) & r^{(0,1)}(s,t) \\
r(s,t) & r(t,t) & r^{(1,0)}(s,t) & r^{(1,0)}(t,t) \\
\hline
r^{(1,0)}(s,s) & r^{(1,0)}(s,t) & r^{(1,1)}(s,s) & r^{(1,1)}(s,t) \\
r^{(0,1)}(s,t) & r^{(1,0)}(t,t) & r^{(1,1)}(s,t) & r^{(1,1)}(t,t)
\end{pmatrix} = \begin{pmatrix}[c|c]
\Sigma_{11} & \Sigma_{12} \\
\hline
^T\Sigma_{12} & \vphantom{\int^\int}\Sigma_{22}\quad
\end{pmatrix}.
\]
Now, set
\[\omega(s) := r^{(1,1)}(s,s) - \frac{r^{(1,0)}(s,s)^2}{r(s,s)}\quand\Gamma := \Sigma_{22} - ^T\Sigma_{12}\Sigma_{11}^{-1}\Sigma_{12}.\numberthis\label{eq:30}\]
The matrix $\Gamma$ is the Schur complement of $\Sigma_{11}$ in $\Sigma$.
Let $x\in\R^4$ with $x = (x_1,x_2)$ and $x_1,x_2\in\R^2$. Let $y=(y_1,y_2)\in\R^2$. We define
\[m = \frac{r^{(1,0)}(s,s)}{r(s,s)}y_1\quand M = \,^T\Sigma_{12}\Sigma_{11}^{-1}x_1.\]
The identities
\[^Ty\Omega^{-1}y = \frac{y_1^2}{r(s,s)} + \frac{(y_2-m)^2}{\omega(s)}\quand\,^Tx \Sigma^{-1} x = \,^Tx_1\Sigma_{11}^{-1}x_1 + \,^T(x_2-M)\Gamma^{-1}(x_2-M)\numberthis\label{eq:07}\]
imply, according to the standard formula for the conditional density, the following lemma.
\begin{lemma}
\label{lemma7}
\[\E[\gamma_0]=0,\quad\E[\gamma_0^2] = \omega(s),\quand\E[\gamma_{1}]=0,\;\E[\gamma_{2}]=0,\quad\Cov(\gamma_{1},\gamma_{2}) = \Gamma(s,t).\]
\end{lemma}
By row reduction we have
\[\det(\Sigma) = \det(\Sigma_{11})\det(\Gamma)\quand \Sigma^{-1} = \begin{pmatrix}
* & * \\
* & \Gamma^{-1}
\end{pmatrix},\numberthis\label{eq:06}\]
and thus,
\[\det[\Cov(\gamma_{1},\gamma_{2})] = \frac{\det\left[\Cov(Y(s),Y(t),Y'(s),Y'(t))\right]}{\det\left[\Cov(Y(s),Y(t))\right]}.\numberthis\label{eq:31}\]
The relation between the inverse of a matrix and its adjugate matrix implies
\[\E[\gamma_{1}^2] = \frac{\det\left[\Cov(Y(s),Y(t),Y'(s))\right]}{\det\left[\Cov(Y(s),Y(t))\right]},\quad \E[\gamma_{2}^2] = \frac{\det\left[\Cov(Y(s),Y(t),Y'(t))\right]}{\det\left[\Cov(Y(s),Y(t))\right]},\numberthis\label{eq:03}\]
\[\E[\gamma_{1}\gamma_{2}] = \frac{\det\left[\Sigma_{3,4}\right]}{\det\left[\Cov(Y(s),Y(t))\right]},\quad\text{and also}\quad \E[\gamma_0^2] = \frac{\det\left[\Cov(Y(s),Y'(s))\right]}{\E[Y(s)^2]},\]
where $\Sigma_{3,4}$ is the matrix $\Sigma$ with the third row and the fourth column removed.\jump
From the definition of $\Gamma$ in Equation \eqref{eq:30} and the identity $\det(A)\leq \det(A+B)$, valid for any two positive semi-definite matrices $A$ and $B$, we also have the inequality
\[\det(\Gamma) \leq \det(\Sigma_{22}).\numberthis\label{eq:08}\]
\jump
Note that when $s=t$, the formulas for the distribution of $\gamma_0$ and $(\gamma_{1},\gamma_{2})$ are singular. We will make explicit in Lemma \ref{lemma8} below the behavior of these distributions near the diagonal, provided that the process $Y$ is sufficiently regular.
\subsection{Kac--Rice formula for the variance}
Let $(Y(t))_{t\in I}$ be a centered Gaussian process such that the joint distribution of the Gaussian vector $(Y(s),Y(t),Y'(s),Y'(t))$ does not degenerate for $s\neq t$. Denote by $p_s$ the density of $Y(s)$ and by $p_{s,t}$ the density of $(Y(s),Y(t))$. We define
\[Z_Y(I) = \Card\enstq{s\in I}{Y(s)=0}\]
the number of zeros of $Y$ in $I$. Kac--Rice formula (see \cite[Thm.~3.2]{Aza09}) then asserts that
\begin{align*}
\E[Z_Y] &= \int_I \rho_1(s)\dd s,
\end{align*}
with
\begin{align*}
\rho_1(s) &= \E\left[|Y'(s)|\;\middle|\; Y(s)=0\right]p_s(0)\;= \frac{1}{2\pi\sqrt{\det(\Omega)}}\int_\R |y|\exp\left(-\frac{y^2}{2\omega(s)}\right)\dd y,
\end{align*}
and
\[\E[Z_Y^2]-\E[Z_Y] = \iint_{I^2} \rho_2(s,t)\dd s\dd t,\]
with
\begin{align*}
\rho_2(s,t)&= \E\left[|Y'(s)||Y'(t)|\;\middle|\; Y(s)=0, Y(t)=0\right]p_{s,t}(0,0)\\
&=\frac{1}{(2\pi)^2\sqrt{\det(\Sigma(s,t))}}
\iint_{\R^2} |y_1||y_2|\exp\left(-\frac{1}{2}\,^T\!y\Gamma^{-1}(s,t)y\right)\dd y_1\dd y_2.
\end{align*}
Hence we deduce the integral representation
\begin{align*}
\Var(Z_Y) &= \left(\E[Z_Y^2] - \E[Z_Y] - \E[Z_Y]^2\right) + \E[Z_Y]\\
&=\left(\iint_{I^2} \left(\rho_2(s,t) - \rho_1(s)\rho_1(t)\right)\dd s \dd t\right) + \int_I\rho_1(t)\dd t.\numberthis\label{eq:29}
\end{align*}
\begin{remark}
\label{remark3}
$\rho_1$ and $\rho_2$ have explicit values given by
\[\rho_1(s) = \frac{1}{\pi}\frac{\sqrt{\vphantom{\int}\det(\Omega)}}{r(s,s)}\quand\rho_2(s,t)=\frac{1}{\pi^2\sqrt{\det(\Sigma_{11})}}\left(\sqrt{\det(\Gamma)}+\Gamma_{12}\arcsin
\left(\frac{\Gamma_{12}}{\sqrt{\Gamma_{11}\Gamma_{22}}}\right)\right),\]
but these explicit formulas will not be used throughout the proof.
\end{remark}
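The value of $\rho_1$ follows from the Gaussian integral $\int_\R |y|\exp\left(-\frac{y^2}{2\omega}\right)\dd y = 2\omega$ together with the identity $\omega(s)=\det(\Omega)/r(s,s)$:
\[
\rho_1(s) = \frac{2\omega(s)}{2\pi\sqrt{\det(\Omega)}} = \frac{1}{\pi}\frac{\sqrt{\det(\Omega)}}{r(s,s)}.
\]
In particular, for a stationary process normalized so that $r(s,s)=1$ (hence $r^{(1,0)}(s,s)=0$), this reduces to Rice's classical formula $\rho_1 = \frac{1}{\pi}\sqrt{r^{(1,1)}(s,s)}$.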
\subsection{Estimates for the singularity along the diagonal}
Our aim now is to derive the asymptotics of formulas \eqref{eq:31} and \eqref{eq:03} as $\tau = (t-s)$ goes to zero. In the case where the process $Y$ is sufficiently regular, a Taylor expansion around the point $s$ to sufficiently high order allows us to remove the singularity at $\tau=0$. This procedure is standard, see for instance \cite{Anc20} for a general treatment, or \cite[Prop.~5.8]{Aza09}.\jump
In the following, $(Y(s))_{s\in I}$ is a Gaussian process such that its covariance function is of class $\CC^4$ in each variable. We define the following quantities.
\[\xi(s) = \frac{1}{4}\det\left[\Cov(Y(s),Y'(s),Y^{(2)}(s))\right],\]
\[\Delta(s) = \frac{1}{144}\det\left[\Cov(Y(s),Y'(s),Y^{(2)}(s),Y^{(3)}(s))\right],\]
\[\zeta(s) = \frac{1}{24}\det\left[\Cov(Y(s),Y'(s),Y^{(3)}(s))\right].\]
The following lemma details the exact asymptotics for the distributions of $\gamma_0,\gamma_1$ and $\gamma_2$ as the parameter $\tau=(t-s)$ goes to zero.
\begin{lemma}
\label{lemma8}
Suppose that there is a constant $C$ such that for $a,b\in\lbrace 0,1,2,3,4\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C\quand\det[\Omega(s)]\geq \frac{1}{C}.\]
Then there is an explicit constant $T_\varepsilon$ depending only on the constant $C$, such that for all $s\in I$ and $\tau\in[-T_\varepsilon,T_\varepsilon]$,
\[\det[\Sigma_{11}(s,s+\tau)]=\tau^2\det[\Omega(s)] + \tau^3R_{\Sigma_{11}}(s,\tau),\]
\[\Gamma(s,s+\tau) = \frac{\tau^2}{\det[\Omega(s)]} \begin{pmatrix}
\xi(s) & -\xi(s) \\
-\xi(s) & \xi(s)
\end{pmatrix} + \tau^3R_\Gamma(s,\tau)\]
\[\det\left[\Sigma(s,t)\right] = \tau^8\Delta(s) + \tau^9R_\Sigma(s,\tau),\]
\[\E[(\gamma_1+\gamma_2)^2] = \tau^4\frac{\zeta(s)}{\det[\Omega(s)]} + \tau^5R_{\gamma}(s,\tau).\]
Moreover, the functions $\Omega,\xi,\Delta,\zeta,R_{\Sigma_{11}},R_\Gamma,R_\Sigma,R_\gamma$ are continuous functionals, with respect to the $\|.\|_\infty$ norm, of the covariance function $r$ and its derivatives up to order $4$.
\end{lemma}
\begin{proof}
By Taylor expansion with integral remainder, we have
\[Y(s+\tau) = \sum_{k=0}^n \frac{Y^{(k)}(s)}{k!}\tau^k + \frac{\tau^{n+1}}{n!}\int_0^1(1-u)^nY^{(n+1)}(s+\tau u)\dd u.\]
We will use this expression with $n=1,2,3$. For instance, by applying a unitary transformation on the covariance matrix and the above Taylor formula with $n=1$, we get
\begin{align*}
\det[\Sigma_{11}(s,t)]=\det\left[\Cov(Y(s),Y(t))\right] &= \det\left[\Cov(Y(s),Y(t)-Y(s))\right]\\
&= \tau^2\det\left[\Cov(Y(s),Y'(s))\right] + \tau^3 R_0(s,\tau)\\
&= \tau^2\det\left[\Omega(s)\right] + \tau^3 R_0(s,\tau).
\end{align*}
The remainder $R_0(s,\tau)$ is explicit and follows from the expansion of the determinant. Here we have
\begin{align*}
R_0(s,\tau) = ae-2bc - \tau c^2,
\end{align*}
with
\[a = \E[Y(s)^2] = r(s,s),\quad b = \E[Y(s)Y'(s)] = r^{(0,1)}(s,s),\quad d = \E[Y'(s)^2] = r^{(1,1)}(s,s),\]
\[c = \int_0^1 (1-u)r^{(0,2)}(s,s+\tau u)\dd u,\]
\[e= 2\int_0^1 (1-u)r^{(1,2)}(s,s+\tau u)\dd u + \tau\int_0^1\int_0^1 (1-u)(1-v)r^{(2,2)}(s+\tau u,s+\tau v)\dd u\dd v.\]
By our assumptions the remainder $R_0(s,\tau)$ is a continuous functional of the covariance function of $r$ and its partial derivatives up to order $2$. We deduce that there are explicit positive constants $C'$ and $T_\varepsilon$ depending only on $C$, such that for all $s\in I$ and $\tau\in[-T_\varepsilon,T_\varepsilon]$,
\[\frac{1}{\tau^2}\det[\Sigma_{11}(s,s+\tau)] \geq C'.\numberthis\label{eq:18}\]
We can do similar computations for the other determinants. In particular, we have
\begin{align*}
\det\left[\Cov(Y(s),Y(t),Y'(s))\right] &= \det\left[\Cov(Y(s),Y'(s),Y(t)-Y(s)-\tau Y'(s))\right]\\
&= \tau^4\xi(s) + \tau^5 R_{11}(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Cov(Y(s),Y(t),Y'(t))\right] = \tau^4\xi(s) + \tau^5 R_{22}(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Sigma_{3,4}(s,t)\right] = -\tau^4\xi(s) + \tau^5 R_{12}(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Sigma(s,t)\right] = \tau^8\Delta(s) + \tau^9 R_\Sigma(s,\tau),
\end{align*}
\begin{align*}
\det\left[\Cov(Y(s),Y(t),Y'(t)+Y'(s))\right] = \tau^6\zeta(s) + \tau^7 R_3(s,\tau).
\end{align*}
As for the computation of $R_0(s,\tau)$, the remainders $R_{11}$, $R_{12}$, $R_{22}$, $R_\Sigma$ and $R_3$ can be explicitly computed and are defined as sums of integrals (on a compact interval) of quantities that depend polynomially on $r^{(a,b)}(s+\tau_1,s+\tau_2)$ for $a,b\in\lbrace 0,1,2,3,4\rbrace$ and $\tau_1,\tau_2\in[-T_\varepsilon,T_\varepsilon]$. In particular, they are continuous functionals of the covariance function $r$ and its derivatives up to order $4$.\jump
The asymptotics for $\Gamma$ and $(\gamma_1+\gamma_2)$ follow from these identities, together with the fact that the common denominator $\det(\Sigma_{11})$ appearing in the expressions \eqref{eq:03} for the moments of $\gamma_1$, $\gamma_2$ and $(\gamma_1+\gamma_2)$ satisfies $\det[\Sigma_{11}(s,s+\tau)]\geq C'\tau^2$ for $\tau\in[-T_\varepsilon,T_\varepsilon]$, according to \eqref{eq:18}.
\end{proof}
\begin{remark}
\label{remark2}
From Lemma \ref{lemma8}, we have the following behavior near the diagonal for $\Gamma$.
\[\Gamma(s,s+\tau) \simeq \frac{\tau^2}{\det[\Omega(s)]} \begin{pmatrix}
\xi(s) & -\xi(s) \\
-\xi(s) & \xi(s)
\end{pmatrix}.\]
This means that at the scaling limit, the vector $(\gamma_1,\gamma_2)$ degenerates. If $X\sim \mathcal{N}(0,\xi(s))$, then we have approximately
\[\E[|\gamma_1||\gamma_2|] \simeq \frac{\tau^2}{\det[\Omega(s)]}\,\E[|X|\,|\!-\!X|] = \frac{\tau^2}{\det[\Omega(s)]}\,\E[X^2] = \tau^2\frac{\xi(s)}{\det[\Omega(s)]}.\numberthis\label{eq:04}\]
It will be convenient to define
\[(\widetilde{\gamma}_1,\widetilde{\gamma}_2) = \left(\frac{\gamma_1+\gamma_2}{\tau^2},\frac{\gamma_2}{\tau}\right),\quand \widetilde{\Gamma}\,(s,s+\tau) = \Cov(\widetilde{\gamma}_1,\widetilde{\gamma}_2).\]
We have then
\[\det[\Cov(\widetilde{\gamma}_1,\widetilde{\gamma}_2)] = \frac{\det[\Cov(\gamma_{1},\gamma_{2})]}{\tau^6} = \frac{\Delta(s)}{\det\left[\Omega(s)\right]}+\tau R_\Sigma(s,\tau).\]
This time, we have the following asymptotics
\[\widetilde{\Gamma}(s,s+\tau) \simeq \begin{pmatrix}
\zeta(s) & \eta(s) \\
\eta(s) & \xi(s)
\end{pmatrix},\numberthis\label{eq:05}\]
with $\eta(s)$ some explicit function of $s$. The limit matrix is nondegenerate, provided that $\Delta(s)>0$.
\end{remark}
The following lemma makes precise the behavior of $\rho_2(s,t)$ near the diagonal.
\begin{lemma}
\label{lemma3}
Suppose that there is a constant $C$ such that for $a,b\in\lbrace 0,1,2,3,4\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C,\quad \det[\Omega(s)]\geq \frac{1}{C} \quand \Delta(s)\geq \frac{1}{C}.\]
Then there exists a positive constant $T_\varepsilon$ depending only on the constant $C$, and continuous bounded functions $\kappa$ and $R$ such that
\[\forall \tau\in[-T_\varepsilon,T_\varepsilon],\quad \rho_2(s,s+\tau) = \tau\kappa(s) + \tau^2R(s,\tau).\]
Moreover, the functions $\kappa$ and $R$ are continuous functionals of the covariance function $r$ and its derivatives up to order $4$.
\end{lemma}
\begin{proof}
We use the results established in Lemma \ref{lemma8}. In order to obtain a relevant scaling, we follow Remark \ref{remark2} and we make the change of variable $(\gamma_1,\gamma_2)\rightarrow (\widetilde{\gamma_1},\widetilde{\gamma_2})$. By the previous asymptotics, we deduce
\begin{align*}
\rho_2(s,s+\tau)&= p_{s,s+\tau}(0,0)\E\left[|\gamma_1||\gamma_2|\right]\\
&=\frac{\tau^2}{2\pi\sqrt{\det[\Sigma_{11}(s,s+\tau)]}}
\E\left[|\tau\widetilde{\gamma_1}-\widetilde{\gamma_2}||\widetilde{\gamma_2}|\right]\\
&= \frac{\tau^2}{(2\pi)^2\sqrt{\det(\Sigma_{11})\det(\widetilde{\Gamma})}}\iint_{\R^2} |\tau x_1-x_2||x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_1\dd x_2\\
&= \frac{\tau^2}{(2\pi)^2\sqrt{\frac{1}{\tau^6}\det(\Sigma)}}\iint_{\R^2} |\tau x_1-x_2||x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_1\dd x_2\\
&= \frac{\tau}{(2\pi)^2\sqrt{\Delta(s) + \tau R_\Sigma(s,\tau)}}\int_\R(I_\tau (x_1) + J_\tau(x_1))\dd x_1,
\end{align*}
where
\[I_\tau(x_1) = \int_{\tau x_1}^{+\infty}(x_2-\tau x_1)|x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_2,\]
and
\[J_\tau(x_1) = \int_{-\infty}^{\tau x_1}(\tau x_1-x_2)|x_2|\exp\left(-\frac{1}{2}\,^T\!x\widetilde{\Gamma}^{-1}x\right)\dd x_2.\]
By Remark \ref{remark2} and Equation \eqref{eq:05}, there are constants $T_\varepsilon$ and $C'$ such that the matrix $\widetilde{\Gamma}$ satisfies the inequality $\det(\widetilde{\Gamma})\geq 1/C'$ for all $s\in I$ and $\tau\in[-T_\varepsilon,T_\varepsilon]$. The formula for the inverse matrix shows that the functions $\tau\mapsto I_\tau(x_1)$ and $\tau\mapsto J_\tau(x_1)$ are continuous functionals of the covariance function $r$ and its derivatives up to order $4$, as well as their derivatives with respect to the parameter $\tau$. This means that we can write
\[\rho_2(s,s+\tau) = \tau\kappa(s) + \tau^2R(s,\tau),\]
with $\kappa$ and $R$ some explicit continuous functional of the covariance function $r$ and its derivatives up to order $4$.
\end{proof}
\begin{remark}
\label{remark4}
As suggested by Remark \ref{remark2} and Equation \eqref{eq:04}, the proof of Lemma \ref{lemma3} shows that
\[\kappa(s) = \frac{\xi(s)}{2\pi\det[\Omega(s)]^{3/2}}.\]
\end{remark}
\subsection{Decay estimates far from the diagonal}
In order to derive the asymptotics of the variance of the number of zeros, we need to estimate the quantity $(\rho_2(s,t)-\rho_1(s)\rho_1(t))$ in the Kac--Rice formula \eqref{eq:29} for the variance. When $s$ and $t$ are far enough from each other, the random vectors $(Y(s),Y'(s))$ and $(Y(t),Y'(t))$ are ``almost'' decorrelated. Heuristically we should therefore have
\begin{align*}
\E\left[|Y'(s)||Y'(t)|\;\middle|\; Y(s)=0, Y(t)=0\right] \simeq \E\left[|Y'(s)|\;\middle|\; Y(s)=0\right]\;\E\left[|Y'(t)|\;\middle|\; Y(t)=0\right],
\end{align*}
and
\[p_{s,t}(0,0)\simeq p_s(0)p_t(0).\]
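Indeed, if $(Y(s),Y'(s))$ and $(Y(t),Y'(t))$ were exactly independent, then after permuting rows and columns the matrix $\Sigma(s,t)$ would be the block-diagonal matrix $\mathrm{diag}(\Omega(s),\Omega(t))$, the matrix $\Gamma(s,t)$ would be $\mathrm{diag}(\omega(s),\omega(t))$, and the two approximations above, hence also $\rho_2(s,t)=\rho_1(s)\rho_1(t)$, would hold exactly.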
The following two lemmas make the above heuristic rigorous and quantify the error term. We define
\[M(\tau) := \sup_{s\in I}\sup_{a,b\in\lbrace 0,1\rbrace} |r^{(a,b)}(s,s+\tau)|.\numberthis\label{eq:27}\]
\begin{lemma}
\label{lemma9}
We assume the existence of a constant $C$ such that for $a,b\in\lbrace0,1\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C.\]
Then
\[|\det(\Sigma(s,s+\tau))-\det(\Omega(s))\det(\Omega(s+\tau))|\leq 20C^2 M(\tau)^2.\]
\end{lemma}
\begin{proof}
For a pair $(p,q)$ in $\lbrace1,2,3,4\rbrace$, with $p<q$ we denote by $(\tilde{p},\tilde{q})$ the complementary pair of $(p,q)$ in $\lbrace1,2,3,4\rbrace$. For instance, if $(p,q) = (2,3)$ then $(\tilde{p},\tilde{q})=(1,4)$. We set
\[\Sigma_{p,q} = \begin{pmatrix}
\Sigma_{1p} & \Sigma_{1q} \\
\Sigma_{3p} & \Sigma_{3q}
\end{pmatrix},\quand \widetilde{\Sigma}_{p,q} = \begin{pmatrix}
\Sigma_{2\tilde{p}} & \Sigma_{2\tilde{q}} \\
\Sigma_{4\tilde{p}} & \Sigma_{4\tilde{q}}
\end{pmatrix}.\]
The formula for Laplace expansion along rows $1$ and $3$ asserts that
\[\det(\Sigma) = \sum_{p<q}(-1)^{p+q}\det(\Sigma_{p,q})\det(\widetilde{\Sigma}_{p,q}).\]
Observe that
\[\Sigma_{1,3} = \Omega(s),\quand \widetilde{\Sigma}_{1,3} = \Omega(t),\]
and thus,
\[\det(\Sigma) = \det(\Omega(s))\det(\Omega(t)) + \sum_{\underset{(p,q)\neq (1,3)}{p<q}} (-1)^{p+q} \det(\Sigma_{p,q})\det(\widetilde{\Sigma}_{p,q}).\]
For each pair $(p,q)\neq (1,3)$, there is at least one column of the form $^T(r^{(a,b)}(s,s+\tau),r^{(c,d)}(s,s+\tau))$ in both $\Sigma_{p,q}$ and $\widetilde{\Sigma}_{p,q}$. For instance,
\[\Sigma_{1,4} = \begin{pmatrix}
r(s,s) & r^{(0,1)}(s,t) \\
r^{(1,0)}(s,s) & r^{(1,1)}(s,t)
\end{pmatrix},\quand \widetilde{\Sigma}_{1,4} = \begin{pmatrix}
r(t,t) & r^{(0,1)}(s,t) \\
r^{(0,1)}(t,t) & r^{(1,1)}(s,t)
\end{pmatrix}.\]
This means that for every pair $(p,q)\neq (1,3)$, we have the inequalities
\[|\det(\Sigma_{p,q})|\leq 2CM(\tau),\quand |\det(\widetilde{\Sigma}_{p,q})|\leq 2CM(\tau),\]
and thus
\[\left|\det(\Sigma(s,t)) - \det(\Omega(s))\det(\Omega(t))\right| \leq 20C^2M(\tau)^2.\]
\end{proof}
\begin{lemma}
\label{lemma4}
We assume the existence of a constant $C$ such that for $a,b\in\lbrace0,1\rbrace$ and $s,t\in I$,
\[|r^{(a,b)}(s,t)|\leq C,\quad\quad \det(\Omega(s)) \geq 1/C.\]
There are positive constants $\varepsilon$ and $C'$ depending only on the constant $C$, such that for all $\tau$ satisfying $M(\tau)\leq \varepsilon$, we have
\[|\rho_2(s,s+\tau)-\rho_1(s)\rho_1(s+\tau)|\leq C'M(\tau)^2.\]
\end{lemma}
\begin{proof}
The hypotheses of Lemma \ref{lemma4} imply that
\[r(s,s)\geq \frac{\det(\Omega(s))}{r^{(1,1)}(s,s)}\geq \frac{1}{C^2},\quand \omega(s)=\frac{\det(\Omega(s))}{r(s,s)}\geq \frac{1}{C^2}.\numberthis\label{eq:12}\]
Set $\varepsilon = 1/(\sqrt{40}C^2)$. Lemma \ref{lemma9} above implies that for all $\tau$ such that $M(\tau)\leq \varepsilon$, we have the following inequality
\begin{align*}
\det(\Sigma(s,s+\tau))&\geq \det(\Omega(s))\det(\Omega(t))- 20C^2M(\tau)^2 \geq \frac{1}{2C^2}.\numberthis\label{eq:11}
\end{align*}
Moreover, Equations \eqref{eq:06} and \eqref{eq:08} imply that
\[\det(\Sigma)=\det(\Sigma_{11})\det(\Gamma)\leq C^2\det(\Gamma),\quand \det(\Sigma)\leq \det(\Sigma_{11})\det(\Sigma_{22})\leq C^2 \det(\Sigma_{11}).\]
Combining these inequalities with \eqref{eq:11}, we deduce the following inequalities for all $\tau$ satisfying $M(\tau)\leq \varepsilon$
\[\det(\Sigma_{11}(s,s+\tau)) \geq \frac{1}{2C^4}\,,\quand \det(\Gamma(s,s+\tau)) \geq \frac{1}{2C^4}.\numberthis\label{eq:10}\]
Now let $s,t\in I$ such that for $\tau = t-s$ we have $M(\tau)\leq \varepsilon$. We can express the difference
\begin{align*}
&\rho_2(s,s+\tau)-\rho_1(s)\rho_1(s+\tau) = R_1 + R_2,
\end{align*}
with
\[R_1 = \left[\frac{1}{\sqrt{\det(\Sigma)}}\!-\!\frac{1}{\sqrt{\det(\Omega(s))\det(\Omega(t))}}\right]\!\!
\left(\int_\R |x|\exp\left(-\frac{x^2}{2\omega(s)}\right)\dd x\right)\!\!\left(\int_\R |y|\exp\left(-\frac{y^2}{2\omega(t)}\right)\dd y\right),\]
\[R_2 = \frac{1}{\sqrt{\det(\Sigma)}}\iint_{\R^2}|x||y|\left[\exp\left(-\frac{1}{2}\,^T\!(x,y)\Gamma^{-1}\,(x,y)\right)-\exp\left(-\frac{1}{2}\left(\frac{x^2}{\omega(s)}+\frac{y^2}{\omega(t)}\right)\right)\right]\dd x\dd y.\]
We treat first the term $R_1$. Let
\[\mathrm{Denom} = \sqrt{\det(\Sigma)\det(\Omega(s))\det(\Omega(t))}\left(\sqrt{\det(\Sigma)}+\sqrt{\det(\Omega(s))\det(\Omega(t))}\right).\]
From Lemma \ref{lemma9} and Equation \eqref{eq:11}, there is an explicit constant $C'$ depending only on $C$ such that
\begin{align*}
\left|\frac{1}{\sqrt{\det(\Sigma)}}-\frac{1}{\sqrt{\det(\Omega(s))\det(\Omega(t))}}\right|& = \frac{\left|\det(\Sigma)- \det(\Omega(s))\det(\Omega(t))\right|}{\mathrm{Denom}}\\
& \leq C'M(\tau)^2.
\end{align*}
From the estimate \eqref{eq:12}, the quantity $\omega(s)$ is bounded from below by $1/C^2$. We deduce the existence of a constant $C''$, depending only on $C$, such that
\[|R_1|\leq C''M(\tau)^2.\]
For the term $R_2$, we estimate the distance between $\Gamma$ and the diagonal matrix $\mathrm{diag}(\omega(s),\omega(t))$. First, we have
\[\left|\vphantom{\Sigma^2}\det\left[\Cov(Y(s),Y(t))\right] - r(s,s)r(t,t)\right| = r(s,t)^2 \leq M(\tau)^2.\numberthis\label{eq:13}\]
Developing the determinant $\det[\Cov(Y(s),Y(t),Y'(s))]$ along the second row, we also deduce
\[\left|\det\left[\Cov(Y(s),Y(t),Y'(s))\right] - r(t,t)\det(\Omega(s))\right|\leq 6CM(\tau)^2.\numberthis\label{eq:14}\]
If the parameter $\tau$ satisfies the inequality $M(\tau)\leq \varepsilon$, the inequalities \eqref{eq:13} and \eqref{eq:10} imply
\begin{align*}
r(s,s)r(t,t)\geq \det(\Sigma_{11}(s,s+\tau)) - M(\tau)^2 \geq \frac{1}{2C^4} - \frac{1}{40C^4} \geq \frac{1}{3C^4}.\numberthis\label{eq:15}
\end{align*}
Using the formulas \eqref{eq:03} for the coefficients of $\Gamma(s,t)$, we get
\begin{align*}
\left|\E[\gamma^2_1]-\omega(s)\right|=\left|\frac{\det\left[\Cov(Y(s),Y(t),Y'(s))\right]}{\det\left[\Cov(Y(s),Y(t))\right]}- \frac{r(t,t)\det(\Omega(s))}{r(s,s)r(t,t)}\right|\leq C'M(\tau)^2\numberthis\label{eq:16},
\end{align*}
where the last inequality is justified by the inequalities \eqref{eq:13} and \eqref{eq:14}, and the lower bounds for the denominators given by inequalities \eqref{eq:10} and \eqref{eq:15}.
Similarly,
\[\left|\E[\gamma^2_2] -\omega(t)\right| \leq C'M(\tau)^2\quand|\E[\gamma_{1}\gamma_{2}]| \leq C'M(\tau)^2.\numberthis\label{eq:17}\]
From Equation \eqref{eq:10}, $\det(\Gamma)$ is bounded from below by an explicit positive constant. Estimates \eqref{eq:16} and \eqref{eq:17} imply the existence of a constant $C'$ depending only on $C$ such that
\[\left\|\Gamma - \mathrm{diag}(\omega(s),\omega(t))\right\| \leq C'M(\tau)^2,\]
\[\left\|\Gamma^{-1} - \mathrm{diag}\left(\frac{1}{\omega(s)},\frac{1}{\omega(t)}\right)\right\| \leq C'M(\tau)^2.\]
In order to recover the estimate on $R_2$ we use the following inequality valid for $a,b\in\R$
\[|e^{b} - e^{a}|\leq |b-a|\left(e^{a} + e^{b}\right).\]
Since the quantities $\omega^{-1}$ and $\det(\Sigma)=\det(\Sigma_{11})\det(\Gamma)$ are uniformly bounded from below by an explicit positive constant, and $\Gamma^{-1}$ is bounded, we get
\begin{align*}
|R_2|&\leq \frac{C'M(\tau)^2}{\sqrt{\det(\Sigma)}}\iint_{\R^2} |x||y||(|x|+|y|)\left[\exp\left(-\frac{1}{2}\,^T\!(x,y)\Gamma^{-1}\,(x,y)\right)\right.\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad+\left.\exp\left(-\frac{1}{2}\left(\frac{x^2}{\omega(s)}+\frac{y^2}{\omega(t)}\right)\right)\right]\dd x\dd y.
\end{align*}
Since $\omega(s)$, $\det(\Gamma(s,s+\tau))$ and $\det(\Sigma(s,s+\tau))$ are bounded below by an explicit constant depending only on $C$, we deduce the existence of a constant $C''$ (depending only on $C$) such that
\[|R_2|\leq C''M(\tau)^2.\]
Combining the estimates for $R_1$ and $R_2$ we obtain
\[|\,\rho_2(s,s+\tau)- \rho_1(s)\rho_1(t)\,|\leq C'M(\tau)^2,\]
where $C'$ is a constant depending only on the constant $C$.
\end{proof}
\section{Proof of the main theorems}\label{sec3}
\subsection{Asymptotics of the Kac density}
Let $(X_n)_n$ be a sequence of processes on the torus $\T$ satisfying the hypotheses of Theorem \ref{theorem2}, and let $J$ be a subinterval of $I$ with $\overline{J}\subset I$. We define for $x\in J$
\[\forall s\in\R,\; Y_n(s) :=X_n\left(\frac{s}{n}\right)\quand\forall \tau\in\R,\; Y_n^x(\tau) := X_n\left(x+\frac{\tau}{n}\right).\]
Let $Y_\infty$ be a centered stationary Gaussian process with $\sinc$ covariance function. An explicit computation gives
\[\E[(Y_\infty'(0))^2] = \frac{1}{3},\quad \E[(Y_\infty^{(2)}(0))^2] = \frac{1}{5},\quand \E[(Y_\infty^{(3)}(0))^2] = \frac{1}{7}.\numberthis\label{eq:23}\]
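These three values follow from the Taylor expansion
\[\sinc(\tau)=1-\frac{\tau^2}{6}+\frac{\tau^4}{120}-\frac{\tau^6}{5040}+O(\tau^8),\]
together with the identity $\E[(Y_\infty^{(a)}(0))^2]=(-1)^a\sinc^{(2a)}(0)$: indeed, $-\sinc''(0)=\frac{2}{6}=\frac{1}{3}$, $\sinc^{(4)}(0)=\frac{24}{120}=\frac{1}{5}$ and $-\sinc^{(6)}(0)=\frac{720}{5040}=\frac{1}{7}$.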
Let
\[q_n(s,t):=\E[Y_n(s)Y_n(t)]=r_n\left(\frac{s}{n},\frac{t}{n}\right),\quad\text{and thus}\quad q_n^{(a,b)}(s,t) = \frac{1}{n^{a+b}}r_n^{(a,b)}\left(\frac{s}{n},\frac{t}{n}\right).\]
Hypothesis $(A1)$ of Theorem \ref{theorem2} implies that for $a,b\in\lbrace 0,1,2,3,4\rbrace$, we have the following uniform convergence on $x\in J$ and on $\tau$ in compact subsets of $\R$
\[\lim_{n\rightarrow+\infty} q_n^{(a,b)}(nx,nx + \tau) = \psi(x)(-1)^b\sinc^{(a+b)}(\tau).\numberthis\label{eq:25}\]
This means that the finite dimensional distributions of the process $(Y_n^x(\tau))_{\tau\in\R}$ and of its derivatives up to order $4$ converge in distribution towards the finite dimensional distributions of the process $\psi(x)Y_\infty$ and of its derivatives up to order $4$. Moreover, it implies that the diagonal quantities $q_n^{(a,a)}(nx,nx)$ are uniformly bounded over $x\in J$ and $a\in\lbrace0,1,2,3,4\rbrace$ by some constant $C$. By the Cauchy--Schwarz inequality for covariance functions, we then deduce that for $s,t\in nJ$,
\[|q_n^{(a,b)}(s,t)|\leq \sqrt{q_n^{(a,a)}(s,s)}\sqrt{q_n^{(b,b)}(t,t)} \leq C.\numberthis\label{eq:35}\]
Denote by $\rho_{1,n}$ and $\rho_{2,n}$ (resp. $\rho_{1,\infty}$ and $\rho_{2,\infty}$) the Kac densities of the process $Y_n$ (resp. $Y_\infty$). Since the process $Y_\infty$ is stationary, the function $\rho_{1,\infty}$ is a constant and the function $\rho_{2,\infty}$ depends only on $t-s$.
Similarly, we will use the notation $\Omega_n,\Omega_\infty,\Gamma_n,\Gamma_\infty$, etc. as defined in Section \ref{sec1}. The following lemma gives the asymptotics of the Kac densities as $n$ grows to infinity.
\begin{lemma}
\label{lemma5}
We have the following uniform convergences, with $x\in J$ and $\tau$ in a compact set of $\R$
\[\lim_{n\rightarrow+\infty} \rho_{1,n}(nx) = \rho_{1,\infty}\quand\lim_{n\rightarrow+\infty} \rho_{2,n}(nx,nx+\tau) = \rho_{2,\infty}(\tau).\]
\end{lemma}
\begin{proof}
We begin with the convergence of $\rho_{1,n}$. From Equation \eqref{eq:25} with $\tau=0$ and the fact that the function $\psi$ is bounded from below by a positive constant $C_\psi$, we deduce the following uniform convergence on $x\in J$
\[\lim_{n\rightarrow+\infty} \Omega_n(nx) = \psi(x)\Omega_\infty,\quand\lim_{n\rightarrow+\infty} \omega_n(nx) = \psi(x)\omega_\infty.\]
It follows from \eqref{eq:23} that $\Omega_\infty = \mathrm{diag}(1,1/3)$ and $\omega_\infty = 1/3$. This implies the existence of a rank $n_0$ independent of $x$ such that
\[\forall n\geq n_0,\forall x\in J,\quad\det[\Omega_n(nx)]\geq \frac{C_\psi^2}{4}\quand \omega_n(nx) \geq \frac{C_\psi}{4}.\numberthis\label{eq:24}\]
It implies the following uniform convergence on $x\in J$
\begin{align*}
\lim_{n\rightarrow+\infty} \rho_{1,n}(nx) &= \lim_{n\rightarrow+\infty}\frac{1}{2\pi\sqrt{\det(\Omega_n(nx))}}\int_\R |y|\exp\left(-\frac{y^2}{2\omega_n(nx)}\right)\dd y\\
&=\frac{1}{2\pi\psi(x)\sqrt{\det(\Omega_\infty)}}\int_\R |y|\exp\left(-\frac{y^2}{2\psi(x)\omega_\infty}\right)\dd y\\
&=\frac{1}{2\pi\sqrt{\det(\Omega_\infty)}}\int_\R |u|\exp\left(-\frac{u^2}{2\omega_\infty}\right)\dd u\\
&= \rho_{1,\infty}.
\end{align*}
Now for the convergence of $\rho_{2,n}$, let $T_\varepsilon$ be some small positive constant and $K$ a large compact set of $\R$. Observe that the process $Y_\infty$ is stationary and the support of its spectral measure has an accumulation point. It implies (see \cite[Ex.~3.5]{Aza09}) that for $\tau\in K\setminus[-T_\varepsilon,T_\varepsilon]$, the covariance matrix of the Gaussian vector $(Y_\infty(s),Y_\infty(s+\tau),Y_\infty'(s),Y_\infty'(s+\tau))$ is nondegenerate and moreover, one can find an explicit positive constant $C$ such that
\[\forall \tau\in K\setminus[-T_\varepsilon,T_\varepsilon],\quad 1/C\leq \det(\Sigma_\infty)\leq C\quand 1/C\leq \det(\Sigma_{11,\infty}) \leq C.\]
From Equation \eqref{eq:06} we deduce that the matrices $\Gamma_\infty$ and $\Sigma_\infty$ are nondegenerate on $K\setminus[-T_\varepsilon,T_\varepsilon]$ and we have the following convergences, uniformly on $x\in J$ and $\tau\in K\setminus[-T_\varepsilon,T_\varepsilon]$
\[\lim_{n\rightarrow+\infty} \Sigma_n(nx,nx+\tau) = \psi(x)\Sigma_\infty(\tau),\]
\[\lim_{n\rightarrow+\infty} \Gamma_n(nx,nx+\tau) = \psi(x)\Gamma_\infty(\tau).\]
Moreover, there exists a rank $n_0$ depending only on $T_\varepsilon$ such that
\[\forall n\geq n_0,\forall \tau\in K\setminus[-T_\varepsilon,T_\varepsilon],\quad\det(\Sigma_n(nx,nx+\tau))\geq \frac{C_\psi^4}{2C}\quand \det(\Gamma_n(nx,nx+\tau))\geq \frac{C_\psi^2}{2C^2}.\]
We deduce the following convergence, uniform on $x\in J$ and $\tau\in K\setminus[-T_\varepsilon,T_\varepsilon]$
\begin{align*}
\lim_{n\rightarrow+\infty} \rho_{2,n}(nx,nx+\tau) &= \lim_{n\rightarrow+\infty} \frac{1}{(2\pi)^2\sqrt{\det(\Sigma_n)}}
\iint_{\R^2} |y_1||y_2|\exp\left(-\frac{1}{2}\,^T\!y\Gamma_n^{-1}y\right)\dd y_1\dd y_2\\
&= \frac{1}{(2\pi)^2\psi^2(x)\sqrt{\det(\Sigma_\infty)}}
\iint_{\R^2} |y_1||y_2|\exp\left(-\frac{1}{2\psi(x)}\,^T\!y\Gamma_\infty^{-1}y\right)\dd y_1\dd y_2\\
&= \frac{1}{(2\pi)^2\sqrt{\det(\Sigma_\infty)}}
\iint_{\R^2} |u_1||u_2|\exp\left(-\frac{1}{2}\,^T\!u\Gamma_\infty^{-1}u\right)\dd u_1\dd u_2\\
&= \rho_{2,\infty}(\tau).
\end{align*}
It remains to prove the uniform convergence in the case where $\tau$ lives in a neighborhood of the origin. To this end, we apply Lemma \ref{lemma3}. We first check that our process $Y_n$ fulfills the hypotheses of that lemma. From Equation \eqref{eq:35}, there is a constant $C$ such that for $n\geq0$, $x\in J$, $\tau\in \R$ and $a,b\in\lbrace 0,1,2,3,4\rbrace$,
\[ |q_n^{(a,b)}(nx,nx + \tau)|\leq C.\numberthis\label{eq:26}\]
Moreover, we have uniformly on $x\in J$
\[\lim_{n\rightarrow +\infty} \det(\Omega_n(nx)) = \psi^2(x)\det(\Omega_\infty) = \frac{\psi^2(x)}{3}.\]
Set
\[\Delta_n(nx) = \det[\Cov(Y_n(nx),Y_n'(nx),Y_n^{(2)}(nx),Y_n^{(3)}(nx))].\]
From identities \eqref{eq:23}, we have
\begin{align*}
\lim_{n\rightarrow +\infty} \Delta_n(nx) = \psi^4(x)\Delta_\infty = \psi^4(x)\begin{vmatrix}
1 & 0 & -\frac{1}{3} & 0 \\
0 & \frac{1}{3} & 0 & -\frac{1}{5} \\
-\frac{1}{3} & 0 & \frac{1}{5} & 0 \\
0 & -\frac{1}{5} & 0 & \frac{1}{7}
\end{vmatrix} = \frac{16}{23625}\psi^4(x).
\end{align*}
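(The value $\frac{16}{23625}$ can be checked by hand: after the simultaneous permutation of rows and columns $(1,3,2,4)$, the matrix above becomes block diagonal, so its determinant is the product
\[\begin{vmatrix} 1 & -\frac{1}{3} \\ -\frac{1}{3} & \frac{1}{5}\end{vmatrix}\cdot\begin{vmatrix} \frac{1}{3} & -\frac{1}{5} \\ -\frac{1}{5} & \frac{1}{7}\end{vmatrix}=\frac{4}{45}\cdot\frac{4}{525}=\frac{16}{23625}.)\]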
In particular there is a rank $n_0$ such that uniformly on $x\in J$
\[\forall n\geq n_0,\quad \det(\Omega_n(nx))\geq \frac{C_\psi^2}{4}\quand \Delta_n(nx)\geq \frac{C_\psi^4}{2000}.\]
Hence, the hypotheses of Lemma \ref{lemma3} are satisfied: there exists a positive constant $T_\varepsilon$ such that, for $n$ greater than $n_0$, there exist continuous functions $R_n$ and $\kappa_n$ such that
\[\forall \tau\in[-T_\varepsilon,T_\varepsilon],\quad \rho_{2,n}(nx,nx+\tau) = \tau \kappa_n(nx) + \tau^2R_n(nx,\tau),\]
and the functions $\kappa_n$ and $R_n$ are continuous functionals of $q_n$ and its partial derivatives up to order $4$. From the uniform convergence of $q_n(nx,nx+\tau)$ and its derivatives, we obtain the uniform convergence of $\kappa_n(nx)$, $R_n(nx,\tau)$ and thus of $\rho_{2,n}(nx,nx+\tau)$ towards $\kappa_\infty$, $R_\infty(\tau)$ and $\rho_{2,\infty}(\tau)$ on $[-T_\varepsilon,T_\varepsilon]$.\jump
Gathering the uniform convergence of $\rho_{2,n}(nx,nx+\tau)$ towards $\rho_{2,\infty}(\tau)$ on $K\setminus[-T_\varepsilon,T_\varepsilon]$ and on $[-T_\varepsilon,T_\varepsilon]$, we have proved the uniform convergence, for $x\in J$ and $\tau\in K$, of $\rho_{2,n}(nx,nx+\tau)$ towards its limit.
\end{proof}
The following lemma establishes a decay property for the Kac density.
\begin{lemma}
\label{lemma6}
Let $\alpha$ be the exponent in Assumption $(A2)$ of Theorem \ref{theorem2}. There exist a constant $C$ and a rank $n_0$ independent of $x\in J$ such that for all $\tau$ with $x+\frac{\tau}{n}\in J$ and $n\geq n_0$, we have
\[\left|\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right| \leq \frac{C}{\dist(\tau,2\pi n\Z)^{2\alpha}}.\]
\end{lemma}
\begin{proof}
We check that the hypotheses of Lemma \ref{lemma4} are satisfied with the process $Y_n$ defined on the subinterval $nJ$ of $\R$. In virtue of Hypothesis $(A2)$, there is a constant $C$ such that for $a,b\in\lbrace 0,1\rbrace$, $n\in\N$ and $x\in J$,
\begin{align*}
\left|q_n^{(a,b)}(nx,nx+\tau)\right| = \frac{1}{n^{a+b}}\left|r_n^{(a,b)}\left(x,x+\frac{\tau}{n}\right)\right|\leq \frac{C}{\dist(\tau,2\pi n\Z)^\alpha}.
\end{align*}
It implies that the function $M(\tau)$ defined in \eqref{eq:27} satisfies,
\[\forall \tau\in\R,\quad M(\tau)\leq \frac{C}{\dist(\tau, 2\pi n\Z)^\alpha}.\]
From Equation \eqref{eq:35}, the function $q_n^{(a,b)}$ is uniformly bounded for $a,b\in\lbrace 0,1\rbrace$ by a constant $C$, and inequality \eqref{eq:24} states that for $n$ greater than a rank $n_0$ independent of $x$, and uniformly in $x\in J$, $\det[\Omega_n(nx)]\geq C_\psi^2/4$. For $n\geq n_0$, Lemma \ref{lemma4} implies the existence of positive constants $\varepsilon$ and $C'$ independent of $n$ and $x$, such that for all $\tau$ with $x+\frac{\tau}{n}
\in J$ satisfying $M(\tau)\leq \varepsilon$, we have
\[|\rho_{2,n}(nx,nx+\tau)-\rho_{1,n}(nx)\rho_{1,n}(nx+\tau)|\leq \frac{C'}{\dist(\tau,2\pi n\Z)^{2\alpha}}.\numberthis\label{eq:02}\]
If $M(\tau)\geq \varepsilon$ then
\[\dist(\tau,2\pi n\Z)\leq \frac{C}{\varepsilon}.\]
In that case, Lemma \ref{lemma5} implies that the left-hand side of Equation \eqref{eq:02} is bounded by a constant $C_\varepsilon$ independent of $n$ and $x$. To conclude, note that Inequality \eqref{eq:02} remains valid for all $\tau\in\R$, with $C'$ replaced by $C' + C_\varepsilon(C/\varepsilon)^{2\alpha}$.
\end{proof}
\subsection{Proof of Theorem \ref{theorem2}}
Let $J$ be a subinterval of $I$ with $\overline{J}\subset I$. We identify $J$ with a segment $[a,b]$, such that $|b-a|\leq 2\pi$. Let $n\geq n_0$, where $n_0$ is the rank defined in Lemma \ref{lemma6}. We write
\begin{align*}
\Var(Z_{X_n}(J)) &= \Var(Z_{Y_n}(nJ))\\
&=n\int_{a}^{b}\!\rho_{1,n}(nx)\dd x + n\int_{a}^{b}\!\int_{n(a-x)}^{n(b-x)} \left(\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right)\dd \tau\dd x.
\end{align*}
For the first term, Lemma \ref{lemma5} asserts that uniformly on $x\in J$, we have
\[\lim_{n\rightarrow+\infty} \rho_{1,n}(nx) = \rho_{1,\infty},\quad\text{and thus}\quad \lim_{n\rightarrow+\infty} \int_{a}^{b}\rho_{1,n}(nx)\dd x = |b-a|\rho_{1,\infty}.\]
For the second term, we define
\begin{align*}
R_n = &\int_{a}^{b}\int_{n(a-x)}^{n(b-x)} \left(\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right)\dd \tau\dd x\\
&\quad-\quad|b-a|\int_\R \left(\rho_{2,\infty}(\tau) - \rho_{1,\infty}^2\right)\dd \tau.
\end{align*}
Let $A$ be a large constant. We split $R_n$ into four parts:
\[R_n = R_{1,n}(A)- R_{2,n}(A)+ R_{3,n}(A) - R_{3,\infty}(A),\]
where,
\begin{align*}
R_{1,n}(A) &= \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\leq A} \left[\left(\rho_{2,n}(nx,nx+\tau)-\rho_{2,\infty}(\tau)\right)\right]\dd \tau\dd x,\\
R_{2,n}(A) &= \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\leq A} \left[\left(\rho_{1,n}(nx)\rho_{1,n}(nx+\tau)-\rho_{1,\infty}^2\right)\right]\dd\tau\dd x,\\
R_{3,n}(A) &= \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\geq A} \left(\rho_{2,n}(nx,nx+\tau) - \rho_{1,n}(nx)\rho_{1,n}(nx+\tau)\right)\dd \tau\dd x,\\
R_{3,\infty}(A) &= |b-a|\int_{\R\setminus[-A,A]}(\rho_{2,\infty}(\tau) - \rho_{1,\infty}^2)\dd \tau.
\end{align*}
For the terms $R_{1,n}(A)$ and $R_{2,n}(A)$, Lemma \ref{lemma5} shows that the two integrands converge uniformly towards $0$, and thus,
\[\lim_{n\rightarrow +\infty} R_{1,n}(A) = 0\quand \lim_{n\rightarrow +\infty} R_{2,n}(A) = 0.\]
For the term $R_{3,n}(A)$, Lemma \ref{lemma6} and the inequality $\alpha>1/2$ give
\begin{align*}
|R_{3,n}(A)|\leq \int_{a}^{b}\int_{n(a-x)}^{n(b-x)}\one_{|\tau|\geq A} \frac{C}{\tau^{2\alpha}}\dd \tau\dd x\,\leq\, 2C|b-a|\int_{A}^\infty \frac{\dd\tau}{\tau^{2\alpha}} \,=\, \frac{2C|b-a|}{(2\alpha-1)A^{2\alpha-1}}.\numberthis\label{eq:32}
\end{align*}
For the term $R_{3,\infty}$, we apply dominated convergence to $R_{3,n}(A)$. Equation \eqref{eq:32} implies in particular that the integrand is bounded by an integrable function that does not depend on the parameter $n$, and Lemma \ref{lemma5} shows that the integrand converges to the integrand of $R_{3,\infty}(A)$. Hence, by Equation \eqref{eq:32},
\begin{align*}
|R_{3,\infty}(A)| = \lim_{n\rightarrow +\infty} |R_{3,n}(A)| \leq \frac{2C|b-a|}{(2\alpha-1)A^{2\alpha-1}}.
\end{align*}
Gathering the estimates for $R_{1,n}(A)$, $R_{2,n}(A)$, $R_{3,n}(A)$ and $R_{3,\infty}(A)$, we deduce that
\[\limsup_{n\rightarrow+\infty} |R_n| \leq \frac{4C|b-a|}{(2\alpha-1)A^{2\alpha-1}}.\]
Letting $A$ go to infinity, we deduce that $\lim_{n\rightarrow+\infty} R_n = 0$, from which follows the following convergence
\[\lim_{n\rightarrow +\infty} \frac{\Var(Z_{X_n}(J))}{n} = \mathrm{length}(J)\,C_\infty,\]
where
\[C_\infty := \int_\R \left(\rho_{2,\infty}(\tau) - \rho_{1,\infty}^2\right)\dd \tau + \rho_{1,\infty}.\]
It remains to show the positivity of the constant $C_\infty$. A slight modification of the above proof would show that
\[C_\infty = \lim_{n\rightarrow +\infty} \frac{\Var(Z_{Y_\infty}[0,n])}{n}.\]
It has been shown by several methods (see for example \cite{Anc20,Slu91}) that $C_\infty>0$, not only for the process $Y_\infty$, but also for a large class of stationary Gaussian processes satisfying mild conditions on their covariance functions.
\begin{remark}
Using the explicit formulas given by Remark \ref{remark3}, we have
\[\rho_{1,\infty} = \frac{1}{\pi\sqrt{3}}\quand \rho_{2,\infty}(\tau)=\frac{1}{\pi^2\sqrt{\det(\Sigma_{11,\infty}(\tau))}}\left(\sqrt{\det(\Gamma_\infty(\tau))}+\Gamma_{12,\infty}\arcsin
\left(\frac{\Gamma_{12,\infty}}{\sqrt{\Gamma_{11,\infty}\Gamma_{22,\infty}}}\right)\right).\]
\end{remark}
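For concreteness, the constants above can be evaluated numerically. The following Python sketch (ours, purely illustrative) implements the formulas of the remark, computing $\Gamma_\infty(\tau)$ as the covariance of $(Y_\infty'(0),Y_\infty'(\tau))$ conditioned on $Y_\infty(0)=Y_\infty(\tau)=0$, and approximates $C_\infty$ by truncating the integral at $\tau=200$:
\begin{verbatim}
# Numerical sketch: rho_{1,infty}, rho_{2,infty}(tau) and C_infty for the
# sinc-covariance process, via the formulas of the remark above.
import numpy as np

def sinc_derivs(t):
    r  = np.sin(t)/t                                          # rho
    r1 = np.cos(t)/t - np.sin(t)/t**2                         # rho'
    r2 = -np.sin(t)/t - 2*np.cos(t)/t**2 + 2*np.sin(t)/t**3   # rho''
    return r, r1, r2

rho1_inf = 1.0/(np.pi*np.sqrt(3.0))

def rho2_inf(t):
    r, r1, r2 = sinc_derivs(t)
    S11 = np.array([[1.0, r], [r, 1.0]])       # Cov(Y(0), Y(t))
    S12 = np.array([[0.0, r1], [-r1, 0.0]])    # Cov((Y(0),Y(t)), (Y'(0),Y'(t)))
    S22 = np.array([[1/3, -r2], [-r2, 1/3]])   # Cov(Y'(0), Y'(t))
    G = S22 - S12.T @ np.linalg.inv(S11) @ S12  # conditional covariance Gamma
    a = np.clip(G[0, 1]/np.sqrt(G[0, 0]*G[1, 1]), -1.0, 1.0)
    num = np.sqrt(max(np.linalg.det(G), 0.0)) + G[0, 1]*np.arcsin(a)
    return num/(np.pi**2*np.sqrt(np.linalg.det(S11)))

taus = np.linspace(0.01, 200.0, 100001)        # truncation at tau = 200
vals = np.array([rho2_inf(t) for t in taus]) - rho1_inf**2
print(rho1_inf, 2*np.trapz(vals, taus) + rho1_inf)  # rho_{2,infty} is even
\end{verbatim}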
\subsection{Proof of Theorem \ref{theorem1}}
In the following, we consider the sequence of trigonometric Gaussian polynomials $(X_n)_{n}$ defined in the introduction. Assuming the hypotheses of Theorem \ref{theorem1}, we show that this sequence of processes satisfies hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2} with $I=\T$, from which the conclusion of Theorem \ref{theorem1} follows. Following \cite{Ang21}, the next computation gives an integral expression for the covariance function of $X_n$.
\begin{align*}
r_n(s,t) :\!&= \E[X_n(s)X_n(t)]\\
&= \frac{1}{n}\sum_{k,l=1}^n \rho(k-l)\cos(ks-lt)\\
&= \frac{1}{n}\sum_{k,l=1}^n\left(\frac{1}{2\pi}\int_0^{2\pi}e^{-i(k-l)u}\dd\mu(u)\right)\cos(ks-lt)\\
&=\frac{1}{2\pi n}\int_0^{2\pi}\mathrm{Re}\left(\sum_{k,l=1}^ne^{-i(k-l)u}e^{iks-ilt}\right)\dd\mu(u)\\
&=\frac{1}{2\pi n}\int_0^{2\pi}\mathrm{Re}\left(\left(\sum_{\,k=1}^ne^{ik(s-u)}\right)\left(\sum_{\,l=1}^ne^{-il(t-u)}\right)\right)\dd\mu(u)\\
&= \cos\left(\frac{n+1}{2}(s-t)\right)\frac{1}{2\pi}\int_0^{2\pi} K_n(s-u,t-u)\dd \mu(u),\numberthis\label{eq:36}
\end{align*}
where $K_n$ is the two-point Fejér kernel
\[K_n(s,t) = \frac{1}{n}\frac{\sin\left(\frac{ns}{2}\right)}{\sin\left(\frac{s}{2}\right)}\frac{\sin\left(\frac{nt}{2}\right)}{\sin\left(\frac{t}{2}\right)}.\]
In the case where $\rho(k-l) = \delta_{k,l}$, the measure $\mu$ is the normalized Lebesgue measure on $[-\pi,\pi]$, and we denote the corresponding covariance function by $r_{0,n}$. The function $r_{0,n}$ has the following explicit expression.
\[r_{0,n}(s,t) = \frac{1}{2n}\left[\frac{\sin\left(\left(n+\frac{1}{2}\right)(s-t)\right)}{\sin\left(\frac{s-t}{2}\right)}-1\right].\numberthis\label{eq:01}\]
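As a sanity check, Formula \eqref{eq:01} can be compared numerically with the defining sum $r_{0,n}(s,t)=\frac{1}{n}\sum_{k=1}^{n}\cos(k(s-t))$; the following Python sketch (purely illustrative) does so at random points of the torus:
\begin{verbatim}
# Compare the closed formula (eq:01) with the defining cosine sum.
import numpy as np

n = 25
rng = np.random.default_rng(0)
for s, t in rng.uniform(0.0, 2*np.pi, size=(5, 2)):
    direct = np.mean(np.cos(np.arange(1, n + 1)*(s - t)))
    closed = (np.sin((n + 0.5)*(s - t))/np.sin((s - t)/2) - 1)/(2*n)
    print(abs(direct - closed))   # of the order of machine precision
\end{verbatim}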
Let us now assume that the spectral measure $\mu$ has a continuous and positive density $\psi$ on $\T$. The two following lemmas show that the covariance function $r_n$ satisfies hypotheses $(A1)$ and $(A2)$ of Theorem \ref{theorem2} with any exponent $\alpha\in]1/2,1[$. \jump
\begin{lemma}
\label{lemma2}
Let $a,b\in \lbrace 0,1,2,3,4\rbrace$. Uniformly for $s\in \T$ and $u,v$ in any compact subset of $\R$,
\[\lim_{n\rightarrow+\infty} \frac{1}{n^{a+b}}r_n^{(a,b)}\left(s+\frac{u}{n},s+\frac{v}{n}\right) = \psi(s)(-1)^b\sinc^{(a+b)}(u-v).\]
\end{lemma}
\begin{proof}
Let us first remark that the covariance function $r_n$ is here a trigonometric polynomial and can thus be extended to an analytic function on $\C$. We will prove that the conclusion of Lemma \ref{lemma2} holds when $u$ and $v$ belong to a compact subset of $\C$. In that case, it suffices to prove the lemma for $a=b=0$. The general case follows from the analyticity of the covariance function $r_n$ with respect to the parameters $u$ and $v$, and the uniform convergence on compact subsets of $\C$. We have
\begin{align*}
r_n\left(s+\frac{u}{n},s+\frac{v}{n}\right) &= I_n + \psi(s)r_{0,n}\left(s+\frac{u}{n},s+\frac{v}{n}\right),
\end{align*}
where
\[I_n = \cos\left(\frac{n+1}{2}\frac{u-v}{n}\right)\frac{1}{2\pi}\int_{-\pi}^{\pi} K_n\left(\frac{u}{n}-x,\frac{v}{n}-x\right)\left[\psi\left(x+s\right)-\psi(s)\right]\dd x.\]
Firstly, we have
\begin{align*}
r_{0,n}\left(s+\frac{u}{n},s+\frac{v}{n}\right) &= \frac{1}{2n}\left[\frac{\sin\left(\left(n+\frac{1}{2}\right)\frac{u-v}{n}\right)}{\sin\left(\frac{u-v}{2n}\right)}-1\right]\\
&=\sinc(u-v) + O\left(\frac{1}{n}\right),
\end{align*}
where the remainder is uniform on $s\in\R$ and $u,v$ in compact subsets of $\C$. It remains to prove that the quantity $I_n$ converges towards $0$ uniformly on $s\in\T$ and $u,v$ in compact sets of $\C$. Let $K$ be a compact subset of $\C$ and $C(K) = 1+\sup_{u\in K}|\mathrm{Re}(u)|$. We have
\begin{align*}
|I_n| &\leq \frac{1}{2\pi n}\int_{-\pi}^{\pi} \left|\frac{\sin\left(\frac{u-nx}{2}\right)}{\sin\left(\frac{1}{2}\left(\frac{u}{n}-x\right)\right)}\frac{\sin\left(\frac{v-nx}{2}\right)}{\sin\left(\frac{1}{2}\left(\frac{v}{n}-x\right)\right)}\right|\omega_\psi(x)\dd x\\
&\leq \frac{1}{2\pi n^2}\int_{-n\pi}^{n\pi} \left|\frac{\sin\left(\frac{u-y}{2}\right)}{\sin\left(\frac{u-y}{2n}\right)}\frac{\sin\left(\frac{v-y}{2}\right)}{\sin\left(\frac{v-y}{2n}\right)}\right|\omega_\psi\left(\frac{y}{n}\right)\dd y\\
&\leq R_1 + R_2,\\
\end{align*}
where
\[R_1 = \frac{1}{2\pi n^2}\int_{-C(K)}^{C(K)} \left|\frac{\sin\left(\frac{u-y}{2}\right)}{\sin\left(\frac{u-y}{2n}\right)}\frac{\sin\left(\frac{v-y}{2}\right)}{\sin\left(\frac{v-y}{2n}\right)}\right|\omega_\psi\left(\frac{y}{n}\right)\dd y,\]
and
\[R_2 = \frac{1}{2\pi n^2}\int_{-n\pi}^{n\pi} \one_{\lbrace|y|\geq C(K)\rbrace}\left|\frac{\sin\left(\frac{u-y}{2}\right)}{\sin\left(\frac{u-y}{2n}\right)}\frac{\sin\left(\frac{v-y}{2}\right)}{\sin\left(\frac{v-y}{2n}\right)}\right|\omega_\psi\left(\frac{y}{n}\right)\dd y.\]
For the term $R_1$, each of the two sine ratios in the integrand is bounded by $C_1 n$ for some constant $C_1$ depending only on $K$ (the numerators are bounded on compact sets, while $|\sin(w/(2n))|\geq |w|/(4n)$ whenever $|w|\leq 2n$), so that
\[R_1 \leq \frac{C_1^2}{2\pi}\int_{-C(K)}^{C(K)} \omega_\psi\left(\frac{y}{n}\right)\dd y.\]
Since the spectral density is continuous (hence uniformly continuous) on $\T$, the values $\omega_\psi(y/n)$ converge uniformly to zero on $[-C(K),C(K)]$, so the quantity $R_1$ converges towards zero as $n$ goes to infinity, uniformly on $s\in\T$ and $u,v\in K$. For the term $R_2$ we use the following inequalities, valid for $\mathrm{Re}(z)\in[-5\pi/6,5\pi/6]$:
\[\frac{3}{5\pi}|\mathrm{Re}(z)|\leq |\sin(\mathrm{Re}(z))|\leq |\sin(z)|\numberthis\label{eq:22}.\]
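(The right inequality holds because $|\sin(x+iy)|^2=\sin^2(x)+\sinh^2(y)\geq\sin^2(x)$, while the left one follows from the concavity of the sine function on $[0,\pi]$, which gives $\sin(x)\geq \frac{\sin(5\pi/6)}{5\pi/6}\,x=\frac{3}{5\pi}\,x$ for $0\leq x\leq 5\pi/6$.)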
There is a rank $n_0$ depending only on the compact subset $K$ such that, for all $n\geq n_0$, $u,v\in K$ and $y\in [-n\pi,n\pi]$,
\[-\frac{5\pi}{6}\leq \frac{u-y}{2n}\leq \frac{5\pi}{6}\quand -\frac{5\pi}{6}\leq \frac{v-y}{2n}\leq \frac{5\pi}{6}.\]
Define
\[y_K := \sup_{z\in K} |\mathrm{Im}(z)|\quand I(K) := \left[\sup_{|\mathrm{Im}(z)|\leq y_K}|\sin(z)|\right]^2.\]
It follows from the series of inequalities \eqref{eq:22} that
\begin{align*}
R_2 &\leq \frac{I(K)}{2\pi n^2}\int_{-n\pi}^{n\pi} \one_{\lbrace|y|\geq C(K)\rbrace}\frac{1}{\left|\sin\left(\frac{u-y}{2n}\right)\right|}\frac{1}{\left|\sin\left(\frac{v-y}{2n}\right)\right|}\omega_\psi\left(\frac{y}{n}\right)\dd y\\
&\leq \frac{5}{3}I(K)\int_{-n\pi}^{n\pi} \one_{\lbrace|y|\geq C(K)\rbrace}\frac{1}{|\mathrm{Re}(u)-y||\mathrm{Re}(v)-y|}\omega_\psi\left(\frac{y}{n}\right)\dd y\\
&\leq \frac{5}{3}I(K)\int_{-\infty}^\infty \one_{\lbrace|y|\geq C(K)\rbrace}\frac{1}{\left(|y|-C(K)+1\right)^2}\,\omega_\psi\left(\frac{y}{n}\right)\dd y.
\end{align*}
By dominated convergence, the quantity $R_2$ also converges towards $0$ when $n$ goes to infinity, uniformly on $s\in\T$ and $u,v\in K$.
\end{proof}
\begin{lemma}
\label{lemma1}
Let $a,b\in\lbrace 0,1\rbrace$ and $0<\alpha<1$. There is a constant $C$ such that
\begin{align*}
\forall s,t\in \T,\quad |r_n^{(a,b)}(s,t)|\leq C\frac{n^{a+b}}{(n\dist(s,t))^\alpha}.
\end{align*}
\end{lemma}
\begin{proof}
Let $s,t\in \T$. If $\dist(s,t)\leq 4/n$ then according to the previous Lemma \ref{lemma2}, there is a constant $C$ such that
\[|r_n^{(a,b)}(s,t)|\leq Cn^{a+b} \leq 4^\alpha C\frac{n^{a+b}}{(n\dist(s,t))^\alpha},\]
and the conclusion of Lemma \ref{lemma1} holds. It suffices then to prove Lemma \ref{lemma1} for $s,t\in \T$ such that $\dist(s,t)\geq 4/n$. Let $C(s,\varepsilon)$ denote the circle of center $s$ and radius $\varepsilon$. By the Cauchy integral formula,
\begin{align*}
|r_n^{(a,b)}(s,t)| &= \left|\frac{1}{(2i\pi)^{2}}\int_{C\left(s,\frac{1}{n}\right)}\int_{C\left(t,\frac{1}{n}\right)}\frac{r_n(w,z)}{(s-w)^{a+1}(t-z)^{b+1}}\dd w\dd z\right|\\
&\leq \vphantom{\int^\int}n^{a+b}\!\!\!\sup_{w\in C\left(s,\frac{1}{n}\right)}\sup_{z\in C\left(t,\frac{1}{n}\right)} |r_n(w,z)|\\
&\leq n^{a+b}\sup_{|u|\leq 1}\sup_{|v|\leq 1} \left|r_n\left(s+\frac{u}{n},t+\frac{v}{n}\right)\right|.
\end{align*}
Let $u,v$ be complex numbers such that $|u|\leq 1$ and $|v|\leq 1$. Using the explicit formula \eqref{eq:36} for $r_n$ we obtain
\begin{align*}
r_n\!\left(s+\frac{u}{n},t+\frac{v}{n}\right) &= \frac{1}{2\pi n}\cos\left(\frac{n+1}{2}\left(s-t+\frac{u-v}{n}\right)\right)\!\!\int_{-\pi}^\pi\! \frac{\sin\left(\frac{n(s-x)+u}{2}\right)}{\sin\left(\frac{s-x+\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(t-x)+v}{2}\right)}{\sin\left(\frac{t-x+\frac{v}{n}}{2}\right)}\psi(x)\dd x.
\end{align*}
Using the fact that the sine and cosine functions are bounded by some constant $C$ on the complex strip $\enstq{z\in\C}{|\mathrm{Im}(z)|\leq 1}$, we have
\begin{align*}
\left|r_n\left(s+\frac{u}{n},t+\frac{v}{n}\right)\right| &\leq \frac{C\|\psi\|_\infty}{2\pi n}\int_\T \left|\frac{\sin\left(\frac{n(s-x)+u}{2}\right)}{\sin\left(\frac{s-x+\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(t-x)+v}{2}\right)}{\sin\left(\frac{t-x+\frac{v}{n}}{2}\right)}\right|\dd x.
\end{align*}
Let $\delta = \dist(s,t)/2$. Up to translating $s$ and $t$ by $\pm 2\pi$ and exchanging $s$ and $t$, we can assume that $\delta = \frac{t-s}{2}$. We then make the change of variable $x = y + \frac{t+s}{2}$ to obtain
\[\int_\T \left|\frac{\sin\left(\frac{n(s-x)+u}{2}\right)}{\sin\left(\frac{s-x+\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(t-x)+v}{2}\right)}{\sin\left(\frac{t-x+\frac{v}{n}}{2}\right)}\right|\dd x =
\int_{-\pi}^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|\dd y.\]
This last integral splits into two integrals $I_1$ and $I_2$ defined by
\[I_1 = \int_{0}^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|\dd y\quand I_2 = \int_{-\pi}^0 \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|\dd y.\]
Both terms can be treated in exactly the same way. By the Hölder inequality with $0<\alpha<1$, we have
\begin{align*}
I_1&\leq \left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\right|^\frac{1}{1-\alpha}\dd y\right)^{1-\alpha}
\left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|^\frac{1}{\alpha}\dd y\right)^\alpha.\numberthis\label{eq:38}
\end{align*}
For the left integral in \eqref{eq:38}, we make use of the following inequalities, which are consequences of inequalities \eqref{eq:22}, $|u|\leq 1$ and $\delta\geq 2/n$.
\[\left|\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)\right|\geq \frac{3}{10\pi}\left(\delta +y - \frac{\mathrm{Re}(u)}{n}\right)\geq\frac{3}{10\pi}\left(\frac{\delta}{2}+ y\right),\]
to get
\begin{align*}
\left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta+y)-u}{2}\right)}{\sin\left(\frac{\delta+y-\frac{u}{n}}{2}\right)}\right|^\frac{1}{1-\alpha}\dd y\right)^{1-\alpha}&\leq
C\left(\int_0^\infty \frac{\dd y}{\left(y+\frac{\delta}{2}\right)^\frac{1}{1-\alpha}}\right)^{1-\alpha}\\
&\leq \frac{C'}{\delta^\alpha}.\numberthis\label{eq:37}
\end{align*}
For the right integral in \eqref{eq:38}, we make the change of variable $x=n(\delta-y)+\mathrm{Re}(v)$. We also use the inequality
\[|x+iy|\leq C|\sin(x+iy)|,\]
valid for $x\in [-2\pi/3,2\pi/3]$ and $y\in [-1/4,1/4]$, and some positive constant $C$.
\begin{align*}
\left(\int_0^\pi \left|\frac{\sin\left(\frac{n(\delta-y)+v}{2}\right)}{\sin\left(\frac{\delta-y+\frac{v}{n}}{2}\right)}\right|^\frac{1}{\alpha}\dd y\right)^\alpha&\leq
n^{-\alpha}\left(\int_{n(\delta-\pi)+\mathrm{Re}(v)}^{n\delta + \mathrm{Re}(v)} \left|\frac{\sin\left(\frac{x-i\,\mathrm{Im}(v)}{2}\right)}{\sin\left(\frac{x-i\,\mathrm{Im}(v)}{2n}\right)}\right|^\frac{1}{\alpha}\dd x\right)^\alpha\\
&\leq 2Cn^{1-\alpha}\left(\int_{-\infty}^\infty \left|\frac{\sin\left(\frac{x-i\,\mathrm{Im}(v)}{2}\right)}{x-i\,\mathrm{Im}(v)}\right|^\frac{1}{\alpha}\dd x\right)^\alpha\\
&\leq C'n^{1-\alpha},\numberthis\label{eq:39}
\end{align*}
where in the last inequality we used the fact that the integrand is uniformly bounded in a neighborhood of the origin, and that $\frac{1}{\alpha}>1$ so the integrand is also integrable near $\pm\infty$. Plugging estimates \eqref{eq:37} and \eqref{eq:39} into inequality \eqref{eq:38} we obtain
\[\left|r_n\left(s+\frac{u}{n},t+\frac{v}{n}\right)\right|\leq \frac{C\|\psi\|_\infty}{2\pi n}(I_1+I_2) \leq \frac{2C\|\psi\|_\infty}{2\pi n}\left(C'^2\frac{n^{1-\alpha}}{\delta^\alpha}\right) \leq \frac{C''}{(n\dist(s,t))^\alpha},\]
where $C''$ is a positive constant independent of $s$, $t$ and $n$.
\end{proof}
\begin{remark}
If we assume the following Dini--Lipschitz condition on the spectral density $\psi$, that is
\[\int_0^1 \frac{\omega_\psi(x)}{x}\dd x<+\infty,\]
then we can take $\alpha = 1$ in Lemma \ref{lemma1}, and we have the following inequality
\[|r_n^{(a,b)}(s,t)|\leq C\frac{n^{a+b-1}}{\dist(s,t)}\left[\|\psi\|_\infty+\int_0^{2\pi} \frac{\omega_\psi(x)}{x}\dd x\right],\]
where $C$ is an explicit constant.
\end{remark}
\printbibliography
\end{document}
| {
"timestamp": "2021-03-16T01:21:20",
"yymm": "2103",
"arxiv_id": "2103.08002",
"language": "en",
"url": "https://arxiv.org/abs/2103.08002",
"abstract": "We compute the variance asymptotics for the number of real zeros of trigonometric polynomials with random dependent Gaussian coefficients and show that under mild conditions, the asymptotic behavior is the same as in the independent framework. In fact our proof goes beyond this framework and makes explicit the variance asymptotics of various models of random Gaussian polynomials. Though we use the Kac--Rice formula, we do not use the explicit closed formula for the second moment of the number of zeros, but we rather rely on intrinsic properties of the Kac--Rice density.",
"subjects": "Probability (math.PR)",
"title": "Variance of the number of zeros of dependent Gaussian trigonometric polynomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717480217662,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7089449259972688
} |
https://arxiv.org/abs/1510.00930 | Isometric embeddings of dual polar graphs in Grassmann graphs over finite fields | We consider the Grassmann graphs and dual polar graphs over the same finite field and show that, up to graph automorphism, for every dual polar graph there is the unique isometric embedding in the corresponding Grassmann graph. | \section{Introduction}
Grassmann graphs and polar Grassmann graphs (not necessarily over finite fields)
are interesting for many reasons
\cite{BC-book,D-book,Pankov-book1,Pankov-book2,Pasini-book, Shult-book}.
For example, they are closely related to buildings of classical types \cite{Tits}.
Also, Grassmann graphs and dual polar graphs (over finite fields)
are classical examples of distance regular graphs \cite{BCN-book}.
Embeddings (not necessarily isometric in some cases)
of Grassmann graphs and polar Grassmann graphs
(over division rings) are investigated in
\cite{Pankov-paper1,Pankov-book2,KP,Pankov-paper2}.
All dual polar graphs defined by sesquilinear, quadratic and pseudo-quadratic forms
are naturally isometrically embedded in the corresponding Grassmann graphs.
In this short note we consider the Grassmann graphs and dual polar graphs over
the {\it same finite} field.
We show that, up to graph automorphism, for every dual polar graph
there is a unique isometric embedding in the corresponding Grassmann graph.
This statement is related to the problem formulated in \cite{KMP}.
The author's interest is the general case when
Grassmann graphs and dual polar graphs are
related to different not necessarily finite division rings.
To describe isometric embeddings and get a result in the spirit of \cite[Chapter 3]{Pankov-book2},
we need semilinear embeddings of special type.
This is a topic for a more detailed research.
\section{Result}
Let $V$ be an $n$-dimensional vector space over a division ring.
Denote by ${\mathcal G}_{k}(V)$ the Grassmannian formed by
$k$-dimensional subspaces of $V$.
The corresponding {\it Grassmann graph} $\Gamma_{k}(V)$ is
the graph whose vertex set is ${\mathcal G}_{k}(V)$
and two $k$-dimensional subspaces are adjacent vertices of this graph if their intersection is $(k-1)$-dimensional.
If $k=1$ or $k=n-1$ then any two distinct vertices of $\Gamma_{k}(V)$ are adjacent, and we will always suppose that
$1<k<n-1$. Also, we can suppose that $2k\le n$, since the Grassmann graphs $\Gamma_{k}(V)$ and $\Gamma_{n-k}(V^{*})$ are
isomorphic.
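For very small parameters everything can be generated exhaustively. The following Python sketch (an illustration of the definitions only) builds $\Gamma_{2}(V)$ for a $4$-dimensional vector space $V$ over the field with two elements and confirms that it has $35$ vertices (the Gaussian binomial coefficient) and is $18$-regular:
\begin{verbatim}
# Toy illustration: the Grassmann graph of 2-dimensional subspaces of a
# 4-dimensional vector space over the field with two elements.
from itertools import product, combinations

vecs = [v for v in product((0, 1), repeat=4) if any(v)]   # nonzero vectors

def span(u, v):                       # the 2-dimensional subspace <u, v>
    s = tuple((a + b) % 2 for a, b in zip(u, v))
    return frozenset([(0, 0, 0, 0), u, v, s])

# over this field, any two distinct nonzero vectors are independent
subspaces = {span(u, v) for u, v in combinations(vecs, 2)}
print(len(subspaces))                 # 35 vertices

def adjacent(A, B):                   # intersection {0, w}: dimension 1
    return len(A & B) == 2

print({sum(adjacent(A, B) for B in subspaces if B != A) for A in subspaces})
# {18}: the graph is regular of degree 18
\end{verbatim}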
We write $\Pi_{V}$ for the projective space associated to $V$
(the points are the $1$-dimensional subspaces and the lines are defined by the $2$-dimensional subspaces).
Let $\Pi=({\mathcal P},{\mathcal L})$ be a rank $m$ polar space,
see \cite{BC-book, Ueberberg} for the precise definition.
Also, we suppose that the polar space $\Pi$ is embedded in
the projective space $\Pi_{V}$, i.e.
the lines of $\Pi$ are lines of $\Pi_{V}$.
Then $2m\le n$ and all maximal singular subspaces of $\Pi$ can be identified with some $m$-dimensional subspaces of $V$.
The set of all such $m$-dimensional subspaces is denoted by ${\mathcal G}(\Pi)$.
The corresponding {\it dual polar graph} $\Gamma(\Pi)$
is the restriction of the Grassmann graph $\Gamma_{m}(V)$
to the set of maximal singular subspaces ${\mathcal G}(\Pi)$.
The existence of isometric embeddings of $\Gamma(\Pi)$ in $\Gamma_{k}(V)$
implies that $m\le k$, i.e.
the diameter of the dual polar graph is not greater than the diameter of the Grassmann graph.
\begin{theorem}
If $m\le k$ and $V$ is a vector space over a finite field then,
up to automorphism of the Grassmann graph $\Gamma_{k}(V)$,
there is a unique isometric embedding of $\Gamma(\Pi)$ in $\Gamma_{k}(V)$.
\end{theorem}
\section{Sketch of proof}
Let $U$ be a subspace of $V$ whose dimension is less than $k$.
Denote by $[U\rangle_{k}$ the set of all $k$-dimensional subspaces containing $U$.
If $U$ is a singular subspace for $\Pi$ then
$$[U\rangle:={\mathcal G}(\Pi)\cap [U\rangle_{m}$$
is the set of all maximal singular subspaces containing $U$.
Let $f:{\mathcal G}(\Pi)\to {\mathcal G}_{k}(V)$
be an isometric embedding of $\Gamma(\Pi)$ in $\Gamma_{k}(V)$,
i.e. an injection preserving the distance between vertices.
\begin{lemma}
The image of $f$ is contained in $[U\rangle_{k}$,
where $U$ is a $(k-m)$-dimensional subspace.
\end{lemma}
\begin{proof}
A simple modification of the proof of \cite[Lemma 6]{Pankov-paper1}.
\end{proof}
By Lemma 1, our isometric embedding
can be considered as an isometric embedding $g$
of $\Gamma(\Pi)$ in $\Gamma_{m}(W)$, where $W=V/U$.
Let $P$ be a $1$-dimensional subspace of $V$
which is a point of the polar space $\Pi$.
All lines of $\Pi$ containing $P$ form a polar space of rank $m-1$
and the associated dual polar graph is
the restriction of $\Gamma(\Pi)$ to the set $[P\rangle$.
Lemma 1 implies the existence of a $1$-dimensional subspace $q(P)\subset W$
such that
$$g([P\rangle)\subset [q(P)\rangle_{m}.$$
In other words, our isometric embedding induces a certain mapping
$q:{\mathcal P}\to {\mathcal G}_{1}(W)$.
The mapping $q$ is injective (this follows from the fact that $g$ is an isometric embedding).
Next, we establish the following: if $P$ and $Q$ are collinear points of $\Pi$
then
$$g([P+Q\rangle)\subset [q(P)+q(Q)\rangle_{m}$$
which implies that $q$ sends lines of $\Pi$ to subsets of lines of $\Pi_{W}$
(our objects are not necessarily over the same finite field).
However, if $V$ is over a finite field
then $q$ sends every line of $\Pi$ to a line of $\Pi_{W}$.
The restriction of $q$ to every maximal singular subspace of $\Pi$
is a collineation to a certain projective space.
By the Fundamental Theorem of Projective Geometry,
this restriction is induced by a semilinear isomorphism
between the corresponding vector spaces.
Using \cite[Section III.3, f)]{D-book},
we extend $q$ to a collineation of $\Pi_{V'}$ to $\Pi_{W'}$,
where $V'$ is the minimal subspace of $V$ containing $\Pi$
and $W'$ is a subspace of $W$
\footnote{These arguments do not work if the Grassmann graphs and dual polar graphs are
related to different not necessarily finite division rings}.
This gives the claim.
| {
"timestamp": "2015-10-06T02:10:40",
"yymm": "1510",
"arxiv_id": "1510.00930",
"language": "en",
"url": "https://arxiv.org/abs/1510.00930",
"abstract": "We consider the Grassmann graphs and dual polar graphs over the same finite field and show that, up to graph automorphism, for every dual polar graph there is the unique isometric embedding in the corresponding Grassmann graph.",
"subjects": "Combinatorics (math.CO)",
"title": "Isometric embeddings of dual polar graphs in Grassmann graphs over finite fields",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373085,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7089449251461243
} |
https://arxiv.org/abs/1511.09429 | Chromatic roots and limits of dense graphs | In this short note we observe that recent results of Abert and Hubai and of Csikvari and Frenkel about Benjamini--Schramm continuity of the holomorphic moments of the roots of the chromatic polynomial extend to the theory of dense graph sequences. We offer a number of problems and conjectures motivated by this observation. | \section{Introduction}
Recently, there has been much work on developing limit theories of
discrete structures, and of graphs in particular. The best understood
limit concepts are those for dense graph sequences and bounded-degree graph sequences.
The former one was developed by Borgs, Chayes, Lov\'asz, S\'os, Szegedy and Vesztergombi~\cite{BCLSV,LovSze}, and the
latter was initiated by Benjamini and Schramm~\cite{BS}.
The convergence notion in both these theories is based on frequencies
of finite subgraphs, and it is a fundamental programme to understand
what other parameters are captured in the limits (i.e., are continuous
with respect to the corresponding topologies). In this short note we
show that recent proofs of Ab\'ert and Hubai and of Csikv\'ari and Frenkel
about the convergence of holomorphic moments of the chromatic roots
in a Benjamini--Schramm convergent sequence translate to the dense
model as well. Furthermore, we conjecture that in the dense
model we actually have weak convergence of the root distributions.
Let us now give the details. We assume the reader's familiarity with
basics of graph limits, we shall however give pointers to Lov\'asz's recent monograph~\cite{Lov:Book} throughout.
Recall that given a graph $G$ of order $n$, its \emph{chromatic polynomial} $P(G,x)$ (in a complex variable $x$) is defined as
\begin{equation}\label{eq:defchromatic}
P(G,x)=\sum_{k=0}^n \mathrm{ip}(G,k)x(x-1)\ldots(x-k+1)\;,
\end{equation}
where $\mathrm{ip}(G,k)$ is the number of partitions of $V(G)$ into $k$ non-empty independent sets. In other words, for nonnegative
integer values of $x$, $P(G,x)$ counts the number of proper vertex-colorings of $G$ with $x$ colors.
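For small graphs, $P(G,x)$ and its roots are easy to compute with the standard deletion--contraction recurrence $P(G,x)=P(G-e,x)-P(G/e,x)$; the following Python sketch (an illustration only, exponential in the number of edges) does exactly this:
\begin{verbatim}
# Chromatic polynomial of a simple graph via deletion-contraction:
# P(G,x) = P(G - e, x) - P(G / e, x).  Edges: sorted pairs, no duplicates.
import numpy as np

def chrom_poly(n, edges):
    if not edges:
        return np.poly1d([1.0] + [0.0]*n)   # empty graph on n vertices: x^n
    (u, v), rest = edges[0], edges[1:]
    deletion = chrom_poly(n, rest)
    # contract v into u; the set merges any parallel edges this creates
    merged = {tuple(sorted((u if a == v else a, u if b == v else b)))
              for (a, b) in rest}
    contraction = chrom_poly(n - 1, [e for e in merged if e[0] != e[1]])
    return deletion - contraction

# Example: P(C_4, x) = (x-1)^4 + (x-1) = x^4 - 4x^3 + 6x^2 - 3x
P = chrom_poly(4, [(0, 1), (1, 2), (2, 3), (0, 3)])
print(P.coeffs)            # [ 1. -4.  6. -3.  0.]
print(np.roots(P.coeffs))  # the chromatic roots
\end{verbatim}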
Next, we recall the result of Ab\'ert and Hubai \cite{AbeHub}, which concerns convergence of
bounded-degree graphs. To this end, recall that a sequence of graphs $(G_n)_n$ of maximum
degree uniformly upper-bounded by $D$ is \it Benjamini--Schramm convergent \rm if
for each fixed connected graph $F$, the number sequence $\hom (F, G_n)/
v(G_n)$ converges. Here $\hom (F,G)$ denotes the number of homomorphisms from $F$ to $G$, and $v(G)$ denotes the number of nodes in $G$. There are many equivalent definitions of Benjamini--Schramm convergence. A detailed treatment appears in~\cite[
Chapter 19]{Lov:Book}.
Suppose that $G$ is a graph of maximum degree at most $D$.
We can associate to it the uniform
probability measure $\mu_{G}$ on the multiset of the roots of
the chromatic polynomial $P(G,x)$. The Sokal bound \cite{Sokal:Bound}
tells us that this \it chromatic measure \rm $\mu_{G}$ is supported in the disk of radius (strictly
less than) $8D$. The main result of~\cite{AbeHub} then reads as
follows.
\begin{thm} \label{thm:AbHu}
Suppose that $(G_{n})_{n}$ is a Benjamini--Schramm
convergent sequence of graphs of maximum degree at most $D$. Suppose
that $f:B\rightarrow\mathbb{C}$ is a holomorphic function defined
on the open disk $B=B(0,8D)$. Then the sequence $$\int f(z)\mathrm{d}\mu_{G_{n}}(z)$$
converges.
\end{thm}
Note that to prove Theorem~\ref{thm:AbHu} it suffices to prove the convergence
of the
\emph{holomorphic moments} $$\int z^{k}{\mathrm{d}}\mu_{G_{n}}(z)\qquad (k\in\mathbb{N})$$ for a Benjamini--Schramm convergent graph sequence $(G_n)_n$, and this is indeed how the proof goes.
As was noted in~\cite{AbeHub}, it is not always the case that the
measures $\mu_{G_{n}}$ in a Benjamini--Schramm convergent graph sequence
converge weakly. This can be seen from the following example.
\begin{example}
\label{ex:Counterexample}
Consider paths $P_{n}$ and
cycles $C_{n}$ of growing order. These two sequences have the same
Benjamini--Schramm limit but the weak limit of $(\mu_{P_{n}})_{n\rightarrow\infty}$
is concentrated on 1 whereas the weak limit of $(\mu_{C_{n}})_{n\rightarrow\infty}$
is the uniform measure on the unit circle centered at~$1$.
\end{example}
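Both claims follow from the closed formulas $P(P_n,x)=x(x-1)^{n-1}$ and $P(C_n,x)=(x-1)^n+(-1)^n(x-1)$. All but one chromatic root of the path equal $1$, while $P(C_n,x)=(x-1)\left[(x-1)^{n-1}+(-1)^n\right]$, so the nontrivial roots of the cycle are the points $1+\zeta$ with $\zeta^{n-1}=(-1)^{n+1}$, which equidistribute on the unit circle centered at~$1$ as $n\rightarrow\infty$.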
Csikv\'ari and Frenkel~\cite{CsiFre} generalized Theorem~\ref{thm:AbHu} to
a wider class of graph polynomials. This is discussed in Section~\ref{ssec:otherpoly}.\medskip{}
Let us now turn to dense graphs. The convergence notion in the dense model
was introduced by Borgs, Chayes, Lov\'asz, S\'os, Szegedy and Vesztergombi. Of the
many equivalent definitions, we shall give the one that is the most convenient for our
purposes. We refer to \cite[Chapters 11-12]{Lov:Book} for more details. A sequence of graphs
$(G_n)_n$ is \it convergent in the dense model \rm if for each fixed graph $F$, the number sequence $\hom (F,G_n)/
v(G_n)^{v(F)}$ converges. Let us also recall that
if we have a convergent sequence of dense graphs then we can associate to it a limit
object, a so-called \it graphon, \rm see \cite[\S 11.3]{Lov:Book}.
Suppose that $G$ is a graph of order
$n$. Then the vertices of $G$ have arbitrary degrees between
$0$ and $n-1$. The measure $\mu_{G}$ need not be supported in a
bounded region for a such a graph $G$; the Sokal bound gives
only a bound of roughly $8n$ on the modulus of the chromatic roots.
This bound can probably be improved down to $n-1$ (see Conjecture~\ref{conj:modulusN}) but not more.
Thus, it is natural to scale down $\mu_{G}$ by the factor of $n$, defining
a new probability measure $\nu_{G}$, $\nu_{G}(X):=\mu_{G}(nX)$, where for $X\subset\mathbb{C}$ we define $nX=\{nx:x\in X\}\subset\mathbb{C}$.
Now, $\nu_{G}$ is supported in the disk of radius $8$. The main
result of this note is the observation that Theorem~\ref{thm:AbHu} has a counterpart
for sequence of dense graphs.
\begin{thm} \label{thm:Dense} Suppose that $(G_{n})_{n}$ is a sequence of graphs which converges in the dense model. Suppose that $f:B\rightarrow\mathbb{C}$
is a holomorphic function defined on an open disk $B=B(0,8)$. Then
the sequence $$\int f(z)\mathrm{d}\nu_{G_{n}}(z)$$ converges.
\end{thm}
We will give a sketch of a proof of Theorem~\ref{thm:Dense} in Section~\ref{sec:ProofThmDense}.
\medskip
Note that by a standard argument from complex analysis, we can approximately count the number of colorings in a convergent graph sequence.
Note that when $G$ has $n$ vertices and $\ell=Cn$, we expect $P(G,\ell)$ to grow as $(cn)^n$ for some $c\in\mathbb R$.
\begin{thm}
Let $(G_n)_n$ be a sequence of graphs convergent in the dense model, where $G_n$ has order $n$. Then for each $C>8$, the quantity
$$\frac{\sqrt[n]{P(G_n,Cn)}}n$$ converges as $n\rightarrow \infty$.
\end{thm}
The proof follows the same lines as that of Theorem 1.2 of~\cite{AbeHub}. Also, the result can be stated a bit more generally, as can again be seen in Theorem 1.2 of~\cite{AbeHub}.
\medskip
We believe that there exists no counterpart to Example~\ref{ex:Counterexample}
for dense graphs. This is the main conjecture of the present paper.
\begin{conjecture}
\label{conj:weak}Suppose that $(G_{n})_{n}$ is a sequence of graphs convergent in the dense model. Then the rescaled chromatic measures $\nu_{G_{n}}$ converge
weakly.
\end{conjecture}
In general, a graphon does not carry much information about chromatic
properties of graphs which converge to it. For example, it is easy
to construct a sequence of graphs such that their chromatic numbers grow
almost linearly with their orders, yet converge to the constant-zero
graphon. On the other hand it is easy to construct another sequence of graphs such that their chromatic numbers grow arbitrarily slowly, yet converge to the constant-one graphon. That is, in a sense, the chromatic number is not even semicontinuous with respect to the cut-distance.
An immediate consequence of Conjecture~\ref{conj:weak}
would be that it would allow us to associate ``chromatic roots''
to graphons. This is perhaps the most substantial information about
chromatic properties which could be reflected in the limit.
The only support for Conjecture~\ref{conj:weak} is a lack of counterexamples
we could come up with. In particular, the Conjecture asserts that
the normalized chromatic measures of Erd\H{o}s--R\'enyi random graphs
(with constant edge probability) or more generally random graphs coming
from sampling from a graphon converge --- and this seems to be a very
weak form of the conjecture. It would be very interesting to prove
this, and to describe the weak limit.
\begin{problem}
\label{prob:ErdosRenyi}
What is the typical distribution of the chromatic
roots of the Erd\H{o}s--R\'enyi random graph $\mathbb{G}_{n,p}$, for a fixed
$p\in(0,1)$?
\end{problem}
Computational restrictions allowed us to run simulations only for $n\le 10$. Such limited simulations did not hint at any limit behavior.
\medskip
Last, let us remark that the measure $\nu_{G}$ is not trivialized
by the scaling we introduced. This is stated in the following proposition.
\begin{prop} \label{prop:nottrivial}
For every $\delta>0$ there exists $\epsilon>0$
such that the following holds. Suppose that $G$ is a graph of order
$n$ with at least $\delta n^{2}$ edges. Then at least $\epsilon n$
of the chromatic roots of $G$ have modulus at least $\epsilon n$.
\end{prop}
\section{\label{sec:ProofThmDense}Proof of Theorem~\ref{thm:Dense}}
We only need to observe that the argument in~\cite{AbeHub}
is valid even in the dense model. More precisely, in \cite[Theorem 3.4]{AbeHub}
the following is proven. The symbol $\hom(T,H)$ stands for the number
of homomorphisms from a graph $T$ to a graph~$H$.
\begin{thm}[{\cite[Theorem 3.4]{AbeHub}}]
Let $H$ be a graph, and for $k\in\mathbb{N}$ let $$p_{k}=|V(H)|\int z^{k}\mathrm{d}\mu_{H}(z).$$ Then
\begin{equation}
p_{k}=\sum_{T}(-1)^{k-1}kc_{k}(T)\hom(T,H)\;,\label{eq:pk}
\end{equation}
where $c_{k}(T)$ are constants, and the summation ranges over connected
graphs $T$ of order at most $k+1$.
\end{thm}
With this result, the proof of Theorem~\ref{thm:Dense} is straightforward. Let us write $p_{k,n}$ for the number $p_{k}$ from the previous
theorem associated to the graph $G_{n}$. As was remarked earlier,
it suffices to prove the theorem for $f(z)=z^{k}$, $k\in\mathbb{N}$.
For simplicity, let us assume that the graph $G_{n}$ has $n$ vertices.
We have
\[
\int z^{k}\mathrm{d}\nu_{G_{n}}(z)=\frac{1}{n^{k}}\int z^{k}\mathrm{d}\mu_{G_{n}}(z)=\frac{p_{k,n}}{n^{k+1}}\;.
\]
The sequence $(G_{n})$ is convergent. In particular, for every graph
$T$ of order at most $k+1$ the quantity $$\frac{\hom(T,G_{n})}{n^{k+1}}$$
converges. Observe that the right-hand side of (\ref{eq:pk}) (for
a fixed number $k$) contains only a bounded number of summands. Consequently,
$$\frac{p_{k,n}}{n^{k+1}}$$ converges as $n\rightarrow\infty$, finishing
the proof.
\section{Proof of Proposition~\ref{prop:nottrivial}}
It is well known, and easy to see from the formula \eqref{eq:defchromatic}, that
the sum of the chromatic roots of $G$ is the number of edges in $G$.
By the assumption of the proposition, this
is at least
$\delta n^{2}$. Also, recall that the chromatic roots are contained in the disk of radius~$8n$. Thus, for $$\delta n^2\le \sum_{x\;\mathrm{ chr.root}}x$$
to hold, we must have at least $\frac{\delta}{9}n$ roots $x$
of the chromatic polynomial with $\Re(x)\in\left[\frac{\delta}{9}n,8n\right]$. Indeed, if fewer than $\frac{\delta}{9}n$ roots had real part at least $\frac{\delta}{9}n$, the real part of the sum would be less than $\frac{\delta}{9}n\cdot 8n+n\cdot\frac{\delta}{9}n=\delta n^2$, a contradiction.
\section{Remarks and conjectures}
\subsection{Variants of Sokal's bound}
Recall that the bound asserts
that if a graph has maximum degree $\Delta$, then all the chromatic
roots lie in the disk of radius $r=8\Delta$. The value $8\Delta$
is not optimal; Sokal himself actually gives $7.96\ldots\times \Delta$.
On the other hand the complete bipartite graph $K_{\Delta,\Delta}$
shows~\cite{Sokal:Dense}
that one cannot go below $r=1.59\ldots\Delta$.\footnote{Let us note that this has not been proven rigorously.} Here, we suggest to bound the moduli of the chromatic roots by the
order instead of the maximum degree.
\begin{conjecture} \label{conj:modulusN}
Every graph $G$ of order $n$ has all the chromatic
roots of modulus at most $n-1$.
\end{conjecture}
If true, complete graphs would be the extremal graphs for the problem.\footnote{Recall that the roots of the chromatic polynomial of a complete graph $K_n$ are $\{0,1,\ldots,n-1\}$.} Note that Conjecture~\ref{conj:modulusN} is known to be true for real zeros. Indeed, if $x>n-1$ is real then each summand in~\eqref{eq:defchromatic} is non-negative, and the summand for $k=n$ is strictly positive, yielding $P(G,x)>0$. Secondly, we claim that if $x$ is negative then it is not a root of $P(G,\cdot)$. Indeed, it is well-known (see e.g.~\cite[Corollary~2.3.1]{Dong:Chromaticbook}) that the coefficients of $P(G,\cdot)$ alternate in sign. The value $P(G,x)$ is then a sum of terms with the same sign, and in particular, non-zero.
By enumerating all graphs of a given order on a computer, we have verified Conjecture~\ref{conj:modulusN} for $n\le 10$.
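Such a verification is easy to reproduce in a crude form (without isomorphism reduction, hence only sensible for very small $n$); the following Python fragment reuses the function \texttt{chrom\_poly} from the sketch in the introduction:
\begin{verbatim}
# Crude check of the conjecture above over all labelled graphs on n
# vertices; the loop has 2^(n choose 2) iterations, so keep n <= 5 or so.
from itertools import combinations
import numpy as np

def max_root_modulus(n):
    pairs = list(combinations(range(n), 2))
    worst = 0.0
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if (mask >> i) & 1]
        roots = np.roots(chrom_poly(n, edges).coeffs)
        if roots.size:
            worst = max(worst, np.abs(roots).max())
    return worst

print(max_root_modulus(5))   # 4.0 up to roundoff, attained by K_5
\end{verbatim}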
\medskip
Our next problem can be seen as an extension of Sokal's bound, but is also connected to Conjecture~\ref{conj:weak} as we show below.
\begin{problem} \label{prob:addedges}
Suppose that $G$ is a graph and $G'$ is obtained
from $G$ by adding edges in such a way that the degree at each vertex
increases by at most $\Delta$. Is it true that the chromatic roots
move by at most $c\Delta$, for some absolute constant $c$? (By ``moving''
we mean that there is a bijection $\pi$ from the multiset of the
chromatic roots of $G$ to the multiset of the chromatic roots of
$G'$ so that $\left|x-\pi(x)\right|\le c\Delta$ for each chromatic
root $x$ of $G$.)
\end{problem}
Note that to answer Problem~\ref{prob:addedges} in the affirmative, it
would suffice to prove the case $\Delta=1$.
Suppose that $G_1$ and $G_2$ are two $n$-vertex graphs with edit-distance at most $\epsilon n^2$. This means that after a suitable vertex identification of $V(G_1)$ and $V(G_2)$, the graph $G$ on the same vertex set whose edges are the common edges of $G_1$ and $G_2$ has the property that for at most $2\sqrt{\epsilon}n$ vertices do we have $\deg_G(v)\le \deg_{G_1}(v)-\sqrt{\epsilon}n$ or $\deg_G(v)\le \deg_{G_2}(v)-\sqrt{\epsilon}n$. For the sake of drawing the link to Conjecture~\ref{conj:weak}, let us assume that there are no such exceptional vertices. Then a positive solution to Problem~\ref{prob:addedges} would give that the chromatic measures $\nu_{G_1}$ and $\nu_{G_2}$ are close in the weak$^*$ topology. In particular, a positive answer to Problem~\ref{prob:addedges} would provide support for Conjecture~\ref{conj:weak} when the topology generated by the cut-distance is replaced by the stronger $L^1$-metric.
\subsection{Matching polynomial}\label{ssec:otherpoly}
As mentioned before, Theorem~\ref{thm:AbHu} has been~\cite{CsiFre} extended to
a large class of ``multiplicative graph polynomials of bounded exponential
type''. (In particular, this includes univariate polynomials derived
from the Tutte polynomial, and a modified version of the matching
polynomial. For the matching polynomial, the behavior of the root distribution in
Benjamini--Schramm convergent graph sequences was discussed in \cite{ACSFK,
ACSH}.)
The proof in~\cite{CsiFre} of this more general statement translates
verbatim to the dense setting as well,\footnote{Let us remark that the proof
is quite different from the original proof by Ab\'ert and Hubai.} thus
giving Theorem~\ref{thm:Dense} for multiplicative graph polynomials of bounded
exponential type. Problem~\ref{prob:ErdosRenyi} can be asked for these alternative graph polynomials as well.
The case of the matching polynomial is particularly simple. We recall the definition.
Let $G=(V,E)$ be a finite graph on $v(G)=n$ vertices. Let $m_k(G)$ be the number of $k$-matchings. Note that $m_0(G)=1$ and $m_k(G)=0$ for $k>\lfloor n/2\rfloor$. The \emph{matching polynomial} $\mu(G,x)$ in one variable $x$ is defined as
\begin{align*}
\mu(G,x)&=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^km_k(G)x^{n-2k}.
\end{align*}
A well-known result of Heilmann and Lieb \cite{hei} asserts that the roots
of the matching polynomial are all real.
It is easy to see that the matching
polynomial is multiplicative (w.r.t.\ disjoint union) and the coefficient of
$x^{n-i}$ is a linear combination of subgraph counts. Thus, the version of the main
result of~\cite{CsiFre} for dense graphs applies. Recall the Weierstrass approximation theorem: on a compact interval, any continuous function can be uniformly approximated by polynomials.
It follows that
convergence of
(holomorphic) moments, i.e., $$\int x^k\mathrm d\nu_n\to\int x^k\mathrm d\nu,\quad (k=1,2,\dots)$$ is equivalent to the
weak convergence $\nu_n\to\nu$ for probability distributions $\nu_n$, $\nu$
supported on a compact interval. We get the following. If $(G_n)$ is a sequence of
graphs converging in the dense model, consider the uniform distribution
$\pi_n$ on the roots of the matching polynomial $\mu(G_n,x)$. Then $\pi_n$ scaled down by a factor of $v(G_n)$ converges weakly. Let us explain
why this corollary is trifling. Indeed, the full Heilmann--Lieb theorem
asserts that if $G$ is a graph of maximum degree $D$, and $G$ is not a
matching, then the roots of the matching polynomial $\mu(G,x)$ lie in the
interval $\left[-2\sqrt{D-1},2\sqrt{D-1}\right]$, and in particular in
$\left[-2\sqrt{v(G)-2},2\sqrt{v(G)-2}\right]$. In other words, the
distribution $\pi_n$ scaled down by a factor of $v(G_n)$ converges to the Dirac measure at 0. So,
the rescaling suggested by the Heilmann--Lieb theorem is by a factor of
$\sqrt{v(G_n)}$. To get the right statement, we need to
introduce the \emph{modified matching polynomial}. This is a polynomial in one variable $x$ defined by
$$M(G,x)=\sum_{k=0}^{\lfloor n/2\rfloor}(-1)^km_k(G)x^{n-k}.$$
The matching polynomial and its modified version encode the same information. Indeed, we have $\mu(G,x)=x^{-n}M(G,x^{2})$.
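As a quick symbolic illustration of this identity (our own sanity check, not part of the source), one can verify it for the $3$-vertex path, which has $m_0=1$, $m_1=2$ and $m_2=0$:
\begin{verbatim}
# Check (ours) of mu(G,x) = x^(-n) * M(G,x^2) for the 3-vertex path,
# whose two edges share a vertex, so m_0 = 1, m_1 = 2, m_2 = 0.
from sympy import symbols, expand

x = symbols('x')
n, m = 3, [1, 2]  # m[k] = m_k; the vanishing m_2 is omitted
mu = sum((-1)**k * m[k] * x**(n - 2*k) for k in range(len(m)))
M = sum((-1)**k * m[k] * x**(n - k) for k in range(len(m)))
print(expand(mu - x**(-n) * M.subs(x, x**2)))  # prints 0
\end{verbatim}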
We can factor $M(G,x)$ as
\begin{equation}
\label{WER}
M(G,x)=x^{\lceil n/2\rceil}\prod_{i=1}^{\lfloor n/2\rfloor}(x-\gamma_i(G) )\;.
\end{equation}
Then the real numbers$$\left(\pm \sqrt{\gamma_i(G)}\right)_{i=1}^{\lfloor n/2\rfloor},$$ together with an
extra zero if $n$ is odd, are the roots of $\mu(G,x)$.
It can be easily checked directly from~\cite[Definitions~1.3,~1.4]{CsiFre}
that $M(G,x)$ is
a graph polynomial of bounded exponential type. So, it is the \emph{modified}
matching polynomial to which we want to apply the main result
of~\cite{CsiFre}. We thus readily obtain a counterpart of Conjecture~\ref{conj:weak} for the roots of the matching polynomial, with the right scaling.
\begin{thm}\label{thm:matchingconvergence}
Suppose that $(G_{n})_{n}$ is a
sequence of graphs convergent in the dense model. Let $\pi_n$ be the uniform probability measure on roots of the matching polynomial $\mu(G_n,x)$. Then the rescaled measures $$\lambda_n(X):=\pi_n\left({\sqrt{v(G_n)}}X\right)$$ converge
weakly.
\end{thm}
In particular, this allows us to associate a ``matching measure'' to a graphon (cf.\ text below Conjecture~\ref{conj:weak}).
\medskip
In the rest of this section we answer the counterpart of Problem~\ref{prob:ErdosRenyi} for the matching polynomial. This was done independently, and prior to the current manuscript being publicly available, in~\cite{ChLiLi:Matching}. Our proof relates the roots of the matching polynomial of $G_n$ to those of the complete graphs $K_n$. To compare, the proof in~\cite{ChLiLi:Matching} goes via counting ``tree-like walks'', a concept introduced in~\cite{GodsilTreeWalks}.
We can now state the main result of~\cite{ChLiLi:Matching}.
\bigskip
\begin{thm}
Let $p\in(0,1)$, and let $(G_n)_n$ be a sequence of Erd\H{o}s--R\'enyi random
graphs $G_n\sim \mathbb{G}_{n,p}$. Let $\pi_n$ be the uniform probability
distribution on the roots of the matching polynomial of $G_n$. Then almost
surely, the measures $\lambda_n(X):=\pi_n\left(\sqrt n X\right)$ converge weakly to the semicircle distribution $SC_p$ whose density function is
$$\rho_p(x):=\frac1{2\pi\sqrt p}\sqrt{4-\tfrac{x^2}p}\;, \quad -2\sqrt p\le x\le 2\sqrt p\;.$$
\end{thm}
In combination with Theorem~\ref{thm:matchingconvergence}, this determines the limit of matching measures for an arbitrary sequence of quasirandom (in the sense of Chung--Graham--Wilson) graphs. Also, note that we present a proof only for $p$ fixed, but the same technique works also for $$p=\Omega\left(\frac{\log^{\text{const}}n}{n}\right).$$
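As a quick numerical sanity check of the normalization of $SC_p$ (our own sketch, using Python's \texttt{mpmath}; the test value $p=1/3$ is arbitrary), the density above integrates to $1$:
\begin{verbatim}
# Numerical check (ours) that rho_p is a probability density.
from mpmath import mp, mpf, pi, quad, sqrt

mp.dps = 25
p = mpf(1) / 3
rho = lambda t: sqrt(4 - t**2 / p) / (2 * pi * sqrt(p))
print(quad(rho, [-2 * sqrt(p), 2 * sqrt(p)]))  # 1.0
\end{verbatim}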
\begin{proof}
Since all the roots of the matching polynomial are real, the convergence of the holomorphic moments $\int z^k\mathrm{d}\lambda_n(z)$, $k\in\mathbb{N}$ readily implies convergence in distribution. Let us thus argue that for each $k\in\mathbb{N}$, almost surely we have $$\int z^k\mathrm{d}\lambda_n(z)\rightarrow \int z^k\mathrm{d} SC_p(z)\;.$$
For each fixed $k=0,1,2,\ldots$, and for the random graphs $G_n$, we asymptotically almost surely have
\begin{equation}\label{eq:POI}
\frac{m_k(G_n)}{m_k(K_n)}=(1+o(1))p^k\;,
\end{equation}
as each set of $k$ disjoint pairs of vertices has probability $p^k$ of being entirely
included as edges of $G_n$, and this quantity is concentrated around the
expectation. For details on how to prove such a result, see \cite[Chapter 4]{AlonSpencer}.
Since the $m_i(G)$ are elementary symmetric polynomials of the roots $\gamma_i(G)$ (cf.~\eqref{WER}), the Newton identities give that for each fixed $k$,
\begin{equation*}
\sum_{i=1}^{\lfloor n/2\rfloor}\gamma_i(G)^k=P_k(m_1(G),\dots ,m_k(G))
\end{equation*}
for some multivariate polynomial $P_k$. It follows from the Newton identities that the polynomial $P_k$ has the property that
$$P_k(a_1t,a_2t^2,\dots ,a_kt^k)=t^kP_k(a_1,\dots ,a_k).$$
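For instance, for $k=2$ the Newton identities give $\sum_i\gamma_i(G)^2=m_1(G)^2-2m_2(G)$, and indeed replacing each $m_i(G)$ by $m_i(G)t^i$ multiplies the right-hand side by $t^2$.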
Putting this together with~\eqref{eq:POI}, we get that
$$\sum_{i=1}^{\lfloor n/2\rfloor}\gamma_i(G_n)^k=(1+o(1))p^kP_k(m_1(K_n),\dots ,m_k(K_n)).$$
We conclude that
$$\sum_{i=1}^{\lfloor n/2\rfloor}\gamma_i(G_n)^k=(1+o(1))p^k\sum_{i=1}^{\lfloor n/2\rfloor}\gamma_i(K_n)^k.$$
By a classical result of Heilmann and Lieb~\cite[(3.15)]{hei}, the matching polynomials of complete graphs are the Hermite polynomials. The distribution of zeros of the Hermite polynomial of degree $n$ scaled down by $\sqrt{2n}$ converges to the semicircle distribution $SC_1$, see for instance \cite{lal}. Hence almost surely the measures $\lambda_n$ converge weakly to the semicircle distribution $SC_p$. (Note that the zeros of the matching polynomial are supported on $\pm \sqrt{\gamma_i}$ so we have to rescale the semicircle distribution only by a factor of $\sqrt{p}$.)
\end{proof}
\section{Acknowledgements}
Most of the work was done in the summer of~2013 while JH was visiting E\"otv\"os Lor\'and University.
He would like to thank L\'aszl\'o Lov\'asz for helping him with the arrangements and all the members of the group for a stimulating atmosphere.
\bigskip
The contents of this publication reflect only the authors' views and not necessarily
the views of the European Commission or the European Union.
| {
"timestamp": "2016-11-07T02:06:26",
"yymm": "1511",
"arxiv_id": "1511.09429",
"language": "en",
"url": "https://arxiv.org/abs/1511.09429",
"abstract": "In this short note we observe that recent results of Abert and Hubai and of Csikvari and Frenkel about Benjamini--Schramm continuity of the holomorphic moments of the roots of the chromatic polynomial extend to the theory of dense graph sequences. We offer a number of problems and conjectures motivated by this observation.",
"subjects": "Combinatorics (math.CO)",
"title": "Chromatic roots and limits of dense graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373085,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7089449251461243
} |
https://arxiv.org/abs/1412.6886 | $p$-local stable splitting of quasitoric manifolds | We show a homotopy decomposition of $p$-localized suspension $\Sigma M_{(p)}$ of a quasitoric manifold $M$ by constructing power maps. As an application we investigate the $p$-localized suspension of the projection $\pi$ from the moment-angle complex onto $M$, from which we deduce its triviality for $p>\dim M/2$. We also discuss non-triviality of $\pi_{(p)}$ and $\Sigma^\infty\pi$. | \section{Introduction and statement of results}
Manifolds which are now known as quasitoric manifolds were introduced by Davis and Januszkiewicz \cite{DJ} as a topological counterpart of smooth projective toric varieties, and have been the subject of recent interest in the study of manifolds with torus action. As well as toric varieties, quasitoric manifolds have been studied in a variety of contexts where combinatorics, geometry, and topology interact in a fruitful way. We refer the reader to the exposition \cite{BP} written by Buchstaber and Panov for basics of quasitoric manifolds. This note studies a topological aspect of quasitoric manifolds involving their $p$-localized suspension. A quasitoric manifold $M$ over a simple $n$-polytope $P$ is by definition a $2n$-manifold on which the compact $n$-torus $T^n$ acts in such a way that the orbit space $M/T^n$ is identified with the simple polytope $P$ as manifolds with corners. A fundamental example of quasitoric manifolds is the complex projective space $\C P^n$ which is the only quasitoric manifold over the $n$-simplex, whereas there are several quasitoric manifolds on the same simple polytope in general. Observe that since $\C P^n$ admits power maps, the $p$-localization of the suspension $\Sigma\C P^n_{(p)}$ splits into a wedge of $p-1$ spaces as in \cite{MNT}. We prove that any quasitoric manifold also admits power maps, and as a consequence the $p$-localization of its suspension splits into a wedge of $p-1$ spaces.
\begin{theorem}
\label{main-split}
For a quasitoric manifold $M$ there is a homotopy equivalence
$$\Sigma M_{(p)}\simeq X_1\vee\cdots\vee X_{p-1}$$
such that for each $i$, $\widetilde{H}_*(X_i;\Z)=0$ unless $*\equiv 2i+1\mod 2(p-1)$.
\end{theorem}
As a corollary we get a kind of rigidity of quasitoric manifolds over the same polytope, which also follows from a more general result, Proposition \ref{split-even}.
\begin{corollary}
\label{main-rigid}
Let $M,N$ be quasitoric manifolds over the same simple $n$-polytope. For $p>n$ there is a homotopy equivalence
$$\Sigma M_{(p)}\simeq\Sigma N_{(p)}.$$
\end{corollary}
To a simplicial complex $K$ we can assign a space $\ZZ_K$ which is called the moment-angle complex for $K$ (see \cite{DJ,BP}). The fundamental construction involving quasitoric manifolds is that every quasitoric manifold over a simple polytope $P$ is obtained by the quotient of a certain free torus action on the moment-angle complex $\ZZ_{K(P)}$, where $K(P)$ denotes the boundary of the dual simplicial polytope of $P$. Then for a quasitoric manifold $M$ over $P$ the projection $\pi\colon\ZZ_{K(P)}\to M$ is of particular importance. We investigate the $p$-localization of the suspension of this projection through the $p$-local stable splitting of Theorem \ref{main-split}. Let $K$ be a simplicial complex on the vertex set $V$. Recall from \cite{BBCG} that there is a homotopy equivalence
\begin{equation}
\label{BBCG}
\Sigma\ZZ_K\simeq\bigvee_{\emptyset\ne I\subset V}\Sigma^{|I|+2}|K_I|
\end{equation}
where $K_I$ denotes the full subcomplex of $K$ on the vertex set $I\subset V$, i.e. $K_I=\{\sigma\in K\,\vert\,\sigma\subset I\}$, and $|K_I|$ means the geometric realization of $K_I$. We identify the map $\Sigma\pi_{(p)}\colon\Sigma(\ZZ_{K(P)})_{(p)}\to\Sigma M_{(p)}$ through the homotopy equivalences of Theorem \ref{main-split} and \eqref{BBCG}. Note that if $P$ has $m$ facets, then the vertex set of $K(P)$ is $[m]:=\{1,\ldots,m\}$.
\begin{theorem}
\label{main-projection}
Let $M$ be a quasitoric manifold over a simple polytope $P$ with $m$ facets. Then through the homotopy equivalences of Theorem \ref{main-split} and \eqref{BBCG}, the map $\Sigma\pi_{(p)}\colon\Sigma(\ZZ_{K(P)})_{(p)}\to\Sigma M_{(p)}$ is identified with a wedge of maps
$$\bigvee_{\substack{\emptyset\ne I\subset [m]\\|I|\equiv i\mod p-1}}(\Sigma^{|I|+2}|K(P)_I|)_{(p)}\to X_i$$
for $i=1,\ldots,p-1$.
\end{theorem}
\begin{corollary}
\label{main-projection-trivial}
Let $M$ be a quasitoric manifold over a simple $n$-polytope $P$. For $p>n$, the map $\Sigma\pi_{(p)}\colon\Sigma(\ZZ_{K(P)})_{(p)}\to\Sigma M_{(p)}$ is null homotopic.
\end{corollary}
We also discuss the necessity of suspension and localization for the triviality of the projection $\pi\colon\ZZ_{K(P)}\to M$ in Corollary \ref{main-projection-trivial}. Consider the complex projective space $\C P^1$ as a quasitoric manifold. Then the projection $\pi$ is the Hopf map $S^3\to\C P^1$, so neither $\Sigma^\infty\pi$ nor $\pi_{(p)}$ for any $p$ is null homotopic. We will discuss this problem for more general quasitoric manifolds.
The authors are grateful to Kouyemon Iriye and Shuichi Tsukuda for useful comments.
\section{Cohomology of quasitoric manifolds}
This section collects basic properties of the cohomology of quasitoric manifolds which will be used later. Let $P$ be a simple $n$-polytope, and let $M$ be a quasitoric manifold over $P$. Put $f_i(P)$ to be the number of $(n-i-1)$-dimensional faces of $P$ for $i=-1,0,\ldots,n-1$. The $h$-vector of $P$ is defined by $(h_0(P),\ldots,h_n(P))$ such that for $k=0,\ldots,n$,
$$h_k(P)=\sum_{i=0}^k(-1)^{k-i}\binom{n-i}{n-k}f_{i-1}(P).$$
It is known that the module structure of the cohomology of $M$ is described by the $h$-vector of $P$, implying that the module structure depends only on $P$.
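To illustrate the formula (a small sketch of ours, not from the source), the following Python function computes the $h$-vector from the face numbers $f_{-1}(P),f_0(P),\dots,f_{n-1}(P)$; for a pentagon ($n=2$, $f_{-1}=1$, $f_0=f_1=5$) it returns $(1,3,1)$, so by Proposition \ref{h-vector} below any quasitoric manifold over a pentagon has even Betti numbers $1,3,1$.
\begin{verbatim}
# Sketch (ours): the h-vector of a simple n-polytope from its face
# numbers; f[i] holds f_{i-1}(P) for i = 0, 1, ..., n.
from math import comb

def h_vector(n, f):
    return [sum((-1)**(k - i) * comb(n - i, n - k) * f[i]
                for i in range(k + 1))
            for k in range(n + 1)]

print(h_vector(2, [1, 5, 5]))  # pentagon: [1, 3, 1]
\end{verbatim}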
\begin{proposition}
[Davis and Januszkiewicz {\cite[Theorem 3.1]{DJ}} (cf. \cite{BP})]
\label{h-vector}
Let $M$ be a quasitoric manifold over $P$. Then we have
$$H^{\rm odd}(M;\Z)=0\quad\text{and}\quad H^{2i}(M;\Z)\cong\Z^{h_i(P)}.$$
\end{proposition}
Let $K$ be a simplicial complex on the vertex set $[m]$. The moment-angle complex $\ZZ_K$ is defined by
$$\ZZ_K:=\bigcup_{\sigma\in K}D(\sigma)\quad(\subset(D^2)^m)$$
where $D(\sigma)=\{(x_1,\ldots,x_m)\in(D^2)^m\,\vert\,|x_i|=1\text{ whenever }i\not\in\sigma\}$ and $D^2$ is regarded as the unit disk of $\C$. Then the canonical action of $T^m$ on $(D^2)^m$ restricts to the action of $T^m$ on $\ZZ_K$. Let $M$ be a quasitoric manifold over a simple $n$-polytope $P$ with $m$ facets. Then we may regard the vertex set of $K(P)$ as $[m]$. As in \cite{DJ,BP}, $M$ is obtained by quotienting out the moment-angle complex $\ZZ_{K(P)}$ by a certain free $T^{m-n}$-action which is the restriction of the canonical $T^m$-action. Then there is a homotopy fibration
\begin{equation}
\label{fibration}
\ZZ_{K(P)}\xrightarrow{\pi}M\xrightarrow{\alpha}BT^{m-n}.
\end{equation}
One easily sees that $\ZZ_{K(P)}$ is 2-connected (cf. \cite{BP}), hence the transgression $H^1(T^{m-n};\Z)\to H^2(M;\Z)$ associated with the fibration $T^{m-n}\to\ZZ_{K(P)}\xrightarrow{\pi}M$ is an isomorphism. In particular the induced map $\alpha^*\colon H^2(BT^{m-n};\Z)\to H^2(M;\Z)$ is an isomorphism. It is also known as in \cite[Theorem 4.14]{DJ} (cf. \cite{BP}) that the cohomology ring $H^*(M;\Z)$ is generated by 2-dimensional elements. We record these properties of the cohomology of $M$.
\begin{proposition}
\label{H^2}
Let $M$ be a quasitoric manifold over a simple $n$-polytope $P$ with $m$ facets.
\begin{enumerate}
\item The transgression $H^1(T^{m-n};\Z)\to H^2(M;\Z)$ associated with the fibration $T^{m-n}\to\ZZ_{K(P)}\xrightarrow{\pi}M$ is an isomorphism.
\item The map $\alpha^*\colon H^2(BT^{m-n};\Z)\to H^2(M;\Z)$ is an isomorphism.
\item The cohomology ring $H^*(M;\Z)$ is generated by $H^2(M;\Z)$.
\end{enumerate}
\end{proposition}
\section{Proofs of the main results}
Let $P$ be a simple $n$-polytope with $m$ facets, and let $M$ be a quasitoric manifold over $P$. We construct power maps of $M$. Let $u$ be an integer. By the definition of moment-angle complexes, the degree $u$ self-map of $S^1$ induces a self-map $\underline{u}\colon\ZZ_{K(P)}\to\ZZ_{K(P)}$.
\begin{lemma}
\label{power-map}
There is a self-map $\underline{u}\colon M\to M$ satisfying
$$\underline{u}^*=u^k\colon H^{2k}(M;\Z)\to H^{2k}(M;\Z).$$
\end{lemma}
\begin{proof}
Since the self-map $\underline{u}\colon\ZZ_{K(P)}\to\ZZ_{K(P)}$ commutes with the canonical $T^m$-action, it induces a self-map $\underline{u}\colon M\to M$ since $M$ is the quotient of the restriction of the canonical $T^m$-action to a certain subtorus, and by construction it satisfies the commutative diagram
$$\xymatrix{T^{m-n}\ar[d]^{\underline{u}}\ar[r]&\ZZ_{K(P)}\ar[r]^\pi\ar[d]^{\underline{u}}&M\ar[d]^{\underline{u}}\\
T^{m-n}\ar[r]&\ZZ_{K(P)}\ar[r]^\pi&M}$$
where $\underline{u}\colon T^{m-n}\to T^{m-n}$ is the product of the degree $u$ map of $S^1$. Then by Proposition \ref{H^2} and naturality of transgression, we see that the self-map $\underline{u}\colon M\to M$ has the desired property.
\end{proof}
We now recall the result of \cite{MNT}; we reproduce the proof here in order to clarify naturality. Let $X$ be a connected CW-complex of finite type satisfying
\begin{enumerate}
\item $H_{\rm odd}(X;\Z)=0$ and $H_{\rm even}(X;\Z)$ is free, and
\item there is a self-map $\varphi\colon X\to X$ satisfying $\varphi_*=u^k\colon H_{2k}(X;\Z)\to H_{2k}(X;\Z)$ for any $k\ge 0$, where $u$ is an integer whose modulo $p$ reduction is a primitive $(p-1)^\text{\rm th}$ root of unity in $\Z/p$.
\end{enumerate}
Define a self-map $\alpha_i\colon\Sigma X\to\Sigma X$ by $\alpha_i:=(\Sigma\varphi-u^1)\circ\cdots\circ\widehat{(\Sigma\varphi-u^i)}\circ\cdots\circ(\Sigma\varphi-u^{p-1})$ for $i=1,\ldots,p-1$. Then $(\alpha_i)_*\colon\widetilde{H}_{2k+1}(\Sigma X;\Z/p)\to\widetilde{H}_{2k+1}(\Sigma X;\Z/p)$ is trivial for $k\not\equiv i\mod p-1$ and is an isomorphism for $k\equiv i\mod p-1$. Put
$$X_i=\mathrm{hocolim}\{\Sigma X_{(p)}\xrightarrow{\alpha_i}\Sigma X_{(p)}\xrightarrow{\alpha_i}\Sigma X_{(p)}\xrightarrow{\alpha_i}\cdots\}.$$
Then it is easy to check that $X_i$ is $p$-locally of finite type and
$$\widetilde{H}_{2k+1}(X_i;\Z/p)=\begin{cases}\widetilde{H}_{2k+1}(\Sigma X;\Z/p)&k\equiv i\mod p-1\\0&k\not\equiv i\mod p-1\end{cases}$$
such that the canonical map $\Sigma X_{(p)}\to X_i$ induces the projection in mod $p$ homology. Then the composite $\Sigma X_{(p)}\to\Sigma X_{(p)}\vee\cdots\vee\Sigma X_{(p)}\to X_1\vee\cdots\vee X_{p-1}$ is an isomorphism in mod $p$ homology, hence an isomorphism in homology with coefficients in $\Z_{(p)}$ since the spaces on both sides are $p$-locally of finite type, where the first arrow in the composite is defined by using the suspension comultiplication. Therefore by the J.H.C. Whitehead theorem we obtain:
\begin{lemma}
[Mimura, Nishida, and Toda \cite{MNT}]
\label{MNT}
Let $X$ and $X_i$ be as above. There is a homotopy equivalence
$$\Sigma X_{(p)}\simeq X_1\vee\cdots\vee X_{p-1}$$
such that $\widetilde{H}_*(X_i;\Z/p)=0$ unless $*\equiv 2i+1\mod 2(p-1)$ for $i=1,\ldots,p-1$.
\end{lemma}
We now prove the main results.
\begin{proof}
[Proof of Theorem \ref{main-split}]
Combine Lemmas \ref{power-map} and \ref{MNT}.
\end{proof}
\begin{proof}
[Proof of Corollary \ref{main-rigid}]
Recall that $M$ is of dimension $2n$. Applying Theorem \ref{main-split} to $M$, we get $\Sigma M_{(p)}\simeq X_1\vee\cdots\vee X_{p-1}$. If $p>n$, the space $X_i$ satisfies $\widetilde{H}_*(X_i;\Z/p)=0$ unless $*=2i+1$. Then since $X_i$ is simply connected, $X_i$ is a wedge of copies of $S^{2i+1}_{(p)}$, where the number of spheres is the $2i$-dimensional Betti number of $M$, which equals $h_i(P)$ by Proposition \ref{h-vector}. So we obtain a homotopy equivalence $\Sigma M_{(p)}\simeq\bigvee_{i=1}^{p-1}\bigvee^{h_i(P)}S^{2i+1}_{(p)}$. The same homotopy equivalence holds for $N$, and therefore the proof is completed.
\end{proof}
\begin{proof}
[Proof of Theorem \ref{main-projection}]
Define a map $\beta_i\colon\Sigma \ZZ_{K(P)}\to\Sigma\ZZ_{K(P)}$ by $\beta_i=(\Sigma\underline{u}-u^1)\circ\cdots\circ\widehat{(\Sigma\underline{u}-u^i)}\circ\cdots\circ(\Sigma\underline{u}-u^{p-1})$ for $i=1,\ldots,p-1$, where $u$ is an integer whose modulo $p$ reduction is a primitive $(p-1)^\text{\rm th}$ root of unity in $\Z/p$. Put
$$Y_i=\mathrm{hocolim}\{\Sigma(\ZZ_{K(P)})_{(p)}\xrightarrow{\beta_i}\Sigma(\ZZ_{K(P)})_{(p)}\xrightarrow{\beta_i}\Sigma(\ZZ_{K(P)})_{(p)}\xrightarrow{\beta_i}\cdots\}.$$
By naturality of the homotopy equivalence \eqref{BBCG} with respect to self-maps of $S^1$, the self-map $\underline{u}\colon\Sigma\ZZ_{K(P)}\to\Sigma\ZZ_{K(P)}$ is identified with a wedge of the degree $u^{|I|}$ maps
$$u^{|I|}\colon\Sigma^{|I|+2}|K(P)_I|\to\Sigma^{|I|+2}|K(P)_I|$$
for $\emptyset\ne I\subset[m]$. Then we have $Y_i=\bigvee_{\substack{\emptyset\ne I\subset[m]\\|I|\equiv i\mod p-1}}\Sigma^{|I|+2}|K(P)_I|$ and the canonical map $\Sigma(\ZZ_{K(P)})_{(p)}\to Y_i$ is the projection similarly to the proof of Proposition \ref{MNT}. So the composite $\Sigma(\ZZ_{K(P)})_{(p)}\to\Sigma(\ZZ_{K(P)})_{(p)}\vee\cdots\vee\Sigma(\ZZ_{K(P)})_{(p)}\to Y_1\vee\cdots\vee Y_{p-1}$ is a homotopy equivalence, where the first map is defined by the suspension comultiplication and the second map is a wedge of the canonical maps into the homotopy colimits. On the other hand, by Lemma \ref{power-map} there is a commutative diagram
$$\xymatrix{\Sigma\ZZ_{K(P)}\ar[r]^{\beta_i}\ar[d]^{\Sigma\pi}&\Sigma\ZZ_{K(P)}\ar[d]^{\Sigma\pi}\\
\Sigma M\ar[r]^{\alpha_i}&\Sigma M}$$
where $\alpha_i$ is as above. Then there are maps $\pi_i\colon Y_i\to X_i$ satisfying a commutative diagram
$$\xymatrix{\Sigma(\ZZ_{K(P)})_{(p)}\ar[d]^{\Sigma\pi_{(p)}}\ar[r]&Y_1\vee\cdots\vee Y_{p-1}\ar[d]^{\pi_1\vee\cdots\vee\pi_{p-1}}\\
\Sigma M_{(p)}\ar[r]&X_1\vee\cdots\vee X_{p-1}}$$
where the horizontal arrows are the prescribed homotopy equivalences. Thus the proof is completed.
\end{proof}
\begin{proof}
[Proof of Corollary \ref{main-projection-trivial}]
Since $p>n$, the map $\Sigma\pi_{(p)}\colon\Sigma(\ZZ_{K(P)})_{(p)}\to\Sigma M_{(p)}$ is identified with a wedge of the maps $\bigvee_{I\subset[m],\,|I|=i}(\Sigma^{|I|+2}|K(P)_I|)_{(p)}\to\bigvee S^{2i+1}_{(p)}$ for $i=1,\ldots,p-1$. If $\dim K(P)_I= |I|-1$, then $K(P)_I$ is a simplex, so $|K(P)_I|$ is contractible. Then $\bigvee_{I\subset[m],\,|I|=i}\Sigma^{|I|+2}|K(P)_I|$ is homotopy equivalent to a CW-complex of dimension at most $2i$, while a wedge of copies of $S^{2i+1}_{(p)}$ is $2i$-connected, so the above maps are null homotopic, completing the proof.
\end{proof}
We close this section by showing a general homotopy-theoretical property of finite complexes consisting only of even-dimensional cells, from which Corollary \ref{main-rigid} also follows, since quasitoric manifolds admit cell decompositions with only even-dimensional cells.
\begin{proposition}
\label{split-even}
Let $X$ be an $n$-dimensional connected finite complex consisting only of even-dimensional cells. If $p>n$, then $\Sigma X_{(p)}$ is homotopy equivalent to a wedge of $p$-local odd spheres.
\end{proposition}
\begin{proof}
Induct on the skeleta of $X$. We may assume the 0-skeleton is a point since $X$ is connected, so the claim is trivially true for the 0-skeleton. Suppose that $\Sigma X^{(2k-2)}_{(p)}\simeq\bigvee_{i=1}^{k-1}\bigvee^{m_i}S^{2i+1}_{(p)}$. Then the attaching maps of $(2k+1)$-cells of $\Sigma X_{(p)}$ are identified with maps $S^{2k}\to\bigvee_{i=1}^{k-1}\bigvee^{m_i}S^{2i+1}_{(p)}$. By the Hilton-Milnor theorem, $\Omega(\bigvee_{i=1}^{k-1}\bigvee^{m_i}S^{2i+1}_{(p)})$ is homotopy equivalent to a weak product of the loop spaces of $p$-local odd spheres of dimension $\ge 3$. Then since $p>k$ and $\pi_{2j}(S^{2\ell+1})_{(p)}=0$ for $j<\ell+p-1$, the attaching maps are null homotopic, hence the induction proceeds.
\end{proof}
\section{Non-triviality of the projection $\pi$}
Let $M$ be a quasitoric manifold over an $n$-polytope $P$ and let $\pi\colon\ZZ_{K(P)}\to M$ denote the projection. By Corollary \ref{main-projection-trivial}, $\Sigma \pi_{(p)}$ is trivial for $p>n$. So one may ask whether $\pi_{(p)}$ and $\Sigma^\infty\pi$ are trivial or not. This section shows the non-triviality of $\pi_{(p)}$ and examines the non-triviality of $\Sigma^\infty\pi$ for quasitoric manifolds over a product of simplices and for low-dimensional quasitoric manifolds. We first consider the $p$-localization.
\begin{proposition}
The $p$-localization $\pi_{(p)}$ is not null homotopic.
\end{proposition}
\begin{proof}
Recall that there is a homotopy fibration \eqref{fibration}. Then if $\pi_{(p)}$ were null homotopic, we would have $T^{m-n}_{(p)}\simeq(\ZZ_{K(P)})_{(p)}\times\Omega M_{(p)}$. It is shown in \cite{MN} that if $X$ is a simply connected $p$-local finite complex which is not contractible, then $\pi_*(X)$ has torsion. By \eqref{fibration} we also see that $\ZZ_{K(P)}$ is not contractible at any prime $p$, since $M_{(p)}$ is $p$-locally finite but $BT^{m-n}_{(p)}$ is not. Then since $\ZZ_{K(P)}$ is simply connected, we get that $T^{m-n}_{(p)}\simeq(\ZZ_{K(P)})_{(p)}\times\Omega M_{(p)}$ has torsion in homotopy groups, a contradiction.
\end{proof}
We next consider non-triviality of $\Sigma^\infty\pi$ for quasitoric manifolds over a product of simplices. We start with the easiest case. Recall that the complex projective space $\C P^n$ is the only quasitoric manifold over the $n$-simplex $\Delta^n$, and that the projection $\pi$ is the canonical map $S^{2n+1}\to\C P^n$. Then since the cofiber of $\pi$ is $\C P^{n+1}$ whose top cell does not split after stabilization, one sees that $\Sigma^\infty\pi$ is not null homotopic. We here record this almost trivial fact.
\begin{lemma}
\label{CP^k}
The projection $\pi\colon\ZZ_{K(\Delta^n)}\to\C P^n$ is not null homotopic after stabilization.
\end{lemma}
It is helpful to recall a fact about moment-angle complexes of products of simple polytopes. For simple polytopes $P_1,P_2$ the product $P_1\times P_2$ is also a simple polytope and $K(P_1\times P_2)=K(P_1)*K(P_2)$, the join of $K(P_1)$ and $K(P_2)$. By definition we have $\ZZ_{K(P_1\times P_2)}=\ZZ_{K(P_1)*K(P_2)}=\ZZ_{K(P_1)}\times\ZZ_{K(P_2)}$, and in particular $\ZZ_{K(P_1)}$ is a retract of $\ZZ_{K(P_1\times P_2)}$. We prepare a simple lemma.
\begin{lemma}
\label{criterion-proj}
Let $P$ be a simple polytope, and let $M$ be a quasitoric manifold over $P\times\Delta^k$. If there is a map $q\colon M\to\C P^k$ satisfying a homotopy commutative diagram
$$\xymatrix{\ZZ_{K(P\times\Delta^k)}\ar[r]^{\rm proj}\ar[d]^\pi&\ZZ_{K(\Delta^k)}\ar[d]^\pi\\
M\ar[r]^q&\C P^k,}$$
then the projection $\pi\colon\ZZ_{K(P\times\Delta^k)}\to M$ is not null homotopic after stabilization.
\end{lemma}
\begin{proof}
Since $\ZZ_{K(\Delta^k)}$ is a retract of $\ZZ_{K(P\times\Delta^k)}$, it follows from Lemma \ref{CP^k} that $\Sigma^\infty(q\circ\pi)$ is not null homotopic. Therefore since $\Sigma^\infty(q\circ\pi)=\Sigma^\infty q\circ\Sigma^\infty\pi$, the proof is completed.
\end{proof}
There is a class of quasitoric manifolds over products of simplices, called generalized Bott manifolds, which have been intensively studied in toric topology. See \cite{CMS} for details.
By definition a generalized Bott manifold $B$ over $\Delta^{n_1}\times\cdots\times\Delta^{n_\ell}$ satisfies a commutative diagram
$$\xymatrix{\ZZ_{K(\Delta^{n_1}\times\cdots\times\Delta^{n_\ell})}\ar[r]\ar[d]^\pi&\ZZ_{K(\Delta^{n_1}\times\cdots\times\Delta^{n_{\ell-1}})}\ar[r]\ar[d]^\pi&\cdots\ar[r]&\ZZ_{K(\Delta^{n_1}\times\Delta^{n_2})}\ar[r]\ar[d]^\pi&\ZZ_{K(\Delta^{n_1})}\ar[d]^\pi\\
B_\ell\ar[r]^{q_\ell}&B_{\ell-1}\ar[r]^{q_{\ell-1}}&\cdots\ar[r]^{q_2}&B_2\ar[r]^{q_1}&B_1}$$
where the upper horizontal arrows are the projections. Since $B_1=\C P^{n_1}$, we get the following by Lemma \ref{criterion-proj}.
\begin{corollary}
\label{Bott}
If $B$ is a generalized Bott manifold over $\Delta^{n_1}\times\cdots\times\Delta^{n_\ell}$, then the projection $\pi\colon\ZZ_{K(\Delta^{n_1}\times\cdots\times\Delta^{n_\ell})}\to B$ is not null homotopic after stabilization.
\end{corollary}
In order to examine non-triviality of $\Sigma^\infty\pi$ for quasitoric manifolds other than generalized Bott manifolds, we give a cohomological generalization of Lemma \ref{criterion-proj}.
\begin{lemma}
\label{criterion-1}
Let $X$ be a space such that $H^2(X;\Z)=\Z\langle x_1,\ldots,x_k\rangle$ and $H^*(X;\Z/p)$ is generated by the mod $p$ reduction of $x_1,\ldots,x_k$ as a ring, and let $F$ be the homotopy fiber of a map $\alpha=(x_1,\ldots,x_k)\colon X\to BT^k$. Suppose the following conditions hold:
\begin{enumerate}
\item There are $x\in H^{2\ell-2i}(BT^k;\Z/p)$ and transgressive $a\in H^{2\ell-1}(F;\Z/p)$ such that
$$\tau(a)=\theta(x)$$
for some degree $2i$ Steenrod operation $\theta$.
\item There is a map $f\colon S^{2\ell-1}\to F$ such that $f^*(a)\ne 0$ in mod $p$ cohomology.
\end{enumerate}
Then the stabilization of the fiber inclusion $F\to X$ is not null homotopic.
\end{lemma}
\begin{proof}
Let $i\colon F\to X$ and $j\colon X\to C_{i\circ f}$ denote the inclusions. Then there is a commutative diagram of exact sequences
$$\xymatrix{0\ar[r]&H^{2\ell-1}(S^{2\ell-1};\Z/p)\ar[r]^\delta&H^{2\ell}(C_{i\circ f};\Z/p)\ar[r]^{j^*}&H^{2\ell}(X;\Z/p)\\
&H^{2\ell-1}(F;\Z/p)\ar[r]^\delta\ar[u]_{f^*}&H^{2\ell}(X,F;\Z/p)\ar[u]_{f^*}&H^{2\ell}(BT^k;\Z/p)\ar[l]_{\alpha^*}\ar[u]_{\alpha^*}.}$$
Put $\bar{x}:=f^*\circ\alpha^*(x)$. Since $\tau(a)=\theta(x)$, we have $\theta(\bar{x})=f^*\circ\alpha^*(\theta(x))=\delta\circ f^*(a)\ne 0$, where $\theta(x)=\tau(a)\ne 0$ since $H^{\rm odd}(X;\Z/p)=0$. Then we obtain
$$H^*(C_{i\circ f};\Z/p)\cong A\oplus\langle\theta(\bar{x})\rangle,\quad A\cong H^*(X;\Z/p)$$
as modules, implying that $\theta(A)\not\subset A$ since $\bar{x}\in A$. If $\Sigma^\infty i$ were null homotopic, we would have $\theta(A)\subset A$, which contradicts the above calculation, so $\Sigma^\infty i$ is not null homotopic.
\end{proof}
We apply Lemma \ref{criterion-1} to quasitoric manifolds over a product of two simplices which are not necessarily generalized Bott manifolds.
\begin{proposition}
\label{two}
If $M$ is a quasitoric manifold over $\Delta^k\times\Delta^{n-k}$ and neither $n+2$ nor $n-k+2$ is a power of $2$, then $\Sigma^\infty\pi$ is not null homotopic.
\end{proposition}
\begin{proof}
By Lemma \ref{CP^k} we may assume $0<k<n$. By \cite{CMS} the mod 2 cohomology of $M$ is given by
$$H^*(M;\Z/2)=\Z/2[x,y]/(x^{k'-\ell+1}(x+y)^\ell,y^{n-k'+1})$$
for some $\ell\ge 0$, where $k'=k$ or $k'=n-k$.
It is sufficient to check that $M$ satisfies the conditions of Lemma \ref{criterion-1}.
By Proposition \ref{H^2}, $M$ satisfies the conditions imposed on the space $X$ in Lemma \ref{criterion-1}.
It follows immediately from Lucas' theorem that $\theta(t^r)=t^{n-k'+1}$ for some $r$, some Steenrod operation $\theta$, and $t\in H^2(BT^2;\Z/2)$ corresponding to $y$, so the latter part of condition (1) is satisfied.
Since $\ZZ_{K(\Delta^k\times\Delta^{n-k})}=S^{2k+1}\times S^{2(n-k)+1}$, there is $a\in H^{2(n-k')+1}(\ZZ_{K(\Delta^k\times\Delta^{n-k})};\Z/2)$ satisfying $\tau(a)=t^{n-k'+1}$ for a degree reason.
Then the former part of the condition (1) is also satisfied. Moreover, the element $a$ is spherical, so the condition (2) is satisfied.
\end{proof}
We next specialize Lemma \ref{criterion-1} for applications to low dimensional quasitoric manifolds.
\begin{proposition}
\label{criterion-x^2}
Let $M$ be a quasitoric manifold. If there is non-zero $x\in H^2(M;\Z/2)$ satisfying $x^2=0$, then $\Sigma^\infty\pi$ is not null homotopic.
\end{proposition}
\begin{proof}
It is sufficient to check that the conditions of Lemma \ref{criterion-1} are satisfied. Let $P$ be the polytope over which $M$ is defined. Since $\ZZ_{K(P)}$ is 2-connected, there is $a\in H^3(\ZZ_{K(P)};\Z/2)$ satisfying $\tau(a)=t^2$, where $t\in H^2(BT^{m-n};\Z/2)$ satisfies $\alpha^*(t)=x$. Then, since $t^2=\mathrm{Sq}^2t$, the condition (1) of Lemma \ref{criterion-1} is satisfied. We also have that the Hurewicz map $\pi_3(\ZZ_{K(P)})\to H_3(\ZZ_{K(P)};\Z)$ is an isomorphism, so any element of $H^3(\ZZ_{K(P)};\Z/2)$ is spherical. Then the condition (2) of Lemma \ref{criterion-1} is satisfied, and therefore the proof is complete.
\end{proof}
We now apply Proposition \ref{criterion-x^2} to low dimensional quasitoric manifolds.
\begin{corollary}
\label{h}
If $M$ is a 4-dimensional quasitoric manifold, then $\Sigma^\infty\pi$ is not null homotopic.
\end{corollary}
\begin{proof}
Suppose that $M$ is a quasitoric manifold over a 2-polytope $P$. If $P=\Delta^2$, the corollary follows from Lemma \ref{CP^k} since $\C P^2$ is the only quasitoric manifold over $\Delta^2$. If $P\ne\Delta^2$, then $P$ is a $k$-gon for $k\ge 4$, hence $h_2(P)=1<k-2=h_1(P)$. Then it follows from Proposition \ref{h-vector} that $\dim H^4(M;\Z/2)<\dim H^2(M;\Z/2)$, implying that there must be non-zero $x\in H^2(M;\Z/2)$ satisfying $x^2=0$. Thus the proof is completed by Proposition \ref{criterion-x^2}.
\end{proof}
\begin{remark}
We here remark that $h_1(P)=h_2(P)$ by the Dehn-Sommerville equation for $\dim P=3$ and $h_1(P)>h_2(P)$ for $\dim P>3$ by the $g$-theorem (cf. \cite{BP}), so the argument in the proof of Corollary \ref{h} does not work for $\dim P\ge 3$.
\end{remark}
\begin{corollary}
\label{cube}
If $M$ is a quasitoric manifold over the 3-cube, then $\Sigma^\infty\pi$ is not null homotopic.
\end{corollary}
\begin{proof}
It is calculated in \cite{CMS,H} that the mod 2 cohomology of $M$ is given by
$$H^*(M;\Z/2)=\Z/2[x,y,z]/(x^2+x(ay+bz),y^2+y(cx+dz),z^2+z(ex+fy))$$
for $a,b,c,d,e,f\in\Z/2$ satisfying
$$ac=df=0,\quad\begin{vmatrix}1&c&e\\a&1&f\\b&d&1\end{vmatrix}=1.$$
We now suppose that $w^2\ne 0$ for all non-zero $w\in H^2(M;\Z/2)$. Then, since $x^2\ne 0$, the pair $(a,b)$ is one of $(1,0),(0,1),(1,1)$. Consider the case $(a,b)=(1,0)$. Since $a=1$, the relation $ac=0$ implies $c=0$, so $d=1$ since $y^2\ne 0$. Then $f=0$, implying $e=1$ since $z^2\ne 0$. Hence we obtain $\begin{vmatrix}1&c&e\\a&1&f\\b&d&1\end{vmatrix}=\begin{vmatrix}1&0&1\\1&1&0\\0&1&1\end{vmatrix}=0$, a contradiction. In the cases $(a,b)=(0,1)$ and $(1,1)$ we similarly get $(c,d,e,f)=(0,1,1,0)$, so a contradiction occurs again. Thus there is non-zero $w\in H^2(M;\Z/2)$ with $w^2=0$, and the proof is completed by Proposition \ref{criterion-x^2}.
\end{proof}
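Since the coefficients range over a finite set, the case analysis above can also be verified by brute force. The following Python sketch (ours, not part of the source) uses that $\dim H^4(M;\Z/2)=h_2=3$ for the $3$-cube, so that $xy,xz,yz$ form a basis of $H^4(M;\Z/2)$ and $w^2=(\alpha a+\beta c)xy+(\alpha b+\gamma e)xz+(\beta d+\gamma f)yz$ for $w=\alpha x+\beta y+\gamma z$.
\begin{verbatim}
# Brute-force check (ours): every admissible tuple (a,b,c,d,e,f)
# admits a nonzero w = A*x + B*y + C*z with w^2 = 0 in H^4(M; Z/2).
from itertools import product

def det3_mod2(a, b, c, d, e, f):
    # determinant of [[1,c,e],[a,1,f],[b,d,1]] over F_2
    return ((1 - f*d) - c*(a - f*b) + e*(a*d - b)) % 2

for a, b, c, d, e, f in product((0, 1), repeat=6):
    if a*c or d*f or det3_mod2(a, b, c, d, e, f) != 1:
        continue  # not an admissible coefficient tuple
    assert any(((A*a + B*c) % 2, (A*b + C*e) % 2, (B*d + C*f) % 2)
               == (0, 0, 0)
               for A, B, C in product((0, 1), repeat=3)
               if (A, B, C) != (0, 0, 0))
print("every admissible tuple admits nonzero w with w^2 = 0")
\end{verbatim}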
Finally, we dare to conjecture the following, motivated by Propositions \ref{criterion-x^2} and \ref{two} and Corollaries \ref{Bott}, \ref{h} and \ref{cube}.
\begin{conjecture}
For any quasitoric manifold $M$, $\Sigma^\infty\pi$ is not null homotopic.
\end{conjecture}
| {
"timestamp": "2014-12-23T02:16:38",
"yymm": "1412",
"arxiv_id": "1412.6886",
"language": "en",
"url": "https://arxiv.org/abs/1412.6886",
"abstract": "We show a homotopy decomposition of $p$-localized suspension $\\Sigma M_{(p)}$ of a quasitoric manifold $M$ by constructing power maps. As an application we investigate the $p$-localized suspension of the projection $\\pi$ from the moment-angle complex onto $M$, from which we deduce its triviality for $p>\\dim M/2$. We also discuss non-triviality of $\\pi_{(p)}$ and $\\Sigma^\\infty\\pi$.",
"subjects": "Algebraic Topology (math.AT)",
"title": "$p$-local stable splitting of quasitoric manifolds",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717464424892,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7089449248624093
} |
https://arxiv.org/abs/1707.07193 | The expected number of elements to generate a finite group with $d$-generated Sylow subgroups | Given a finite group $G,$ let $e(G)$ be expected number of elements of $G$ which have to be drawn at random, with replacement, before a set of generators is found. If all the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G)\leq d+\kappa$ with $\kappa \sim 2.75239495.$ The number $\kappa$ is explicitly described in terms of the Riemann zeta function and is best possible. If $G$ is a permutation group of degree $n,$ then either $G=S_3$ and $e(G)=2.9$ or $e(G)\leq \lfloor n/2\rfloor+\kappa^*$ with $\kappa^* \sim 1.606695.$ | \section{Introduction}
In 1989, R. Guralnick \cite{rg} and the first author \cite{al} independently proved
that if all the Sylow subgroups of a finite group $G$ can be generated by $d$ elements, then the group $G$ itself can be generated by $d+1$ elements.
A probabilistic version of this result was obtained in \cite{alex}.
Let $G$ be a nontrivial finite group and let $x=(x_n)_{n\in\mathbb N}$ be a sequence of independent, uniformly distributed $G$-valued random variables.
We may define a random variable $\tau_G$ by
$\tau_G=\min \{n \geq 1 \mid \langle x_1,\dots,x_n \rangle = G\}.$
We denote by $e(G)$ the expectation $\Ee(\tau_G)$ of this random variable:
$e(G)$ is the expected number of elements of $G$ which have to be drawn at random, with replacement,
before a set of generators is found. In \cite{alex} it was proved that
if all the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G)\leq d+\eta$ with $\eta \sim 2.875065.$ This bound is not too far from being best possible. Indeed in \cite{pom}, Pomerance proved that if $\Omega_d$ is the set of all the $d$-generated finite abelian groups, then
$$\sup_{G\in \Omega_d} e(G)=d+\sigma, \text { where } \sigma \sim 2.11846.$$ However the bound $e(G)\leq d+\eta$ is only approximate, and one could be interested in finding a best possible estimate for $e(G).$ We give an exhaustive answer to this question, proving the following result.
\begin{thm}\label{tuno}
Let $G$ be a finite group. If all the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G)\leq d+\kappa$ with $\kappa \sim 2.75239495.$ The number $\kappa$ is explicitly described in terms of
the Riemann zeta function and is best possible.
\end{thm}
This bound can be further improved under some additional assumptions on $G.$ For example we prove that if all the Sylow subgroups of $G$ can be generated by $d$ elements and $G$ is not soluble, then $e(G)\leq d+2.7501$ (Proposition \ref{mainn}). A stronger result holds if $|G|$ is odd.
\begin{thm}\label{teorema}
Let $G$ be a finite group of odd order. If all the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G)\leq d+\tilde \kappa$ with $\tilde \kappa \sim
2.148668.$
\end{thm}
If $G$ is a $p$-subgroup of $\perm(n),$ then $G$ can be generated by $\lfloor n/p\rfloor$ elements (see \cite{kp}), so Theorem \ref{tuno} has the following consequence: if $G$ is a permutation group of degree $n,$ then $e(G)\leq \lfloor n/2\rfloor+\kappa.$ However this bound is not best possible and a better result can be obtained:
\begin{cor}
If $G$ is a permutation group of degree $n,$ then either $G=\perm(3)$ and $e(G)=2.9$ or $e(G)\leq \lfloor n/2\rfloor+\kappa^*$ with $\kappa^* \sim 1.606695.$
\end{cor}
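For example (a direct check, not spelled out in the text), M\"obius inversion on the subgroup lattice of $\perm(3)$ gives $$P_{\perm(3)}(k)=\frac{6^k-3^k-3\cdot 2^k+3}{6^k},$$ whence $$e(\perm(3))=\sum_{k\geq 0}\frac{3^k+3\cdot 2^k-3}{6^k}=2+3\cdot\frac{3}{2}-3\cdot\frac{6}{5}=2.9,$$ which is the exceptional value in the corollary.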
The number $\kappa^*$ is best possible. Let $m=\lfloor n/2\rfloor$ and set $G_n=\perm(2)^m$ if $m$ is even, $G_n=\perm(2)^{m-1}\times \perm(3)$ if $m$ is odd. If $n\geq 8,$ then $e(G_n)-m$ increases with $n$ and $\lim_{n\to \infty} (e(G_n)-m)=\kappa^*\sim 1.606695.$
\section{Preliminary results}
Let $G$ be a finite group and use the following notations:
\begin{itemize}
\item For a given prime $p,$ $d_p(G)$ is the smallest cardinality of a generating set of a Sylow $p$-subgroup of $G.$
\item For a given prime $p$ and a positive integer $t,$ $\alpha_{p,t}(G)$ is the number of complemented factors of order $p^t$ in a chief series of $G.$
\item For a given prime $p,$ $\alpha_p(G)=\sum_t \alpha_{p,t}(G)$ is the number of complemented factors of $p$-power order in a chief series of $G.$
\item $\beta(G)$ is the number of nonabelian factors in a chief series of $G.$
\end{itemize}
\begin{lemma}\label{stime}For every finite group $G,$ we have:
\begin{enumerate}
\item $\alpha_p(G)\leq d_p(G).$
\item $\alpha_2(G)+\beta(G)\leq d_2(G).$
\item If $\beta(G)\neq 0$, then $\beta(G)\leq d_2(G)-1.$
\item If $\alpha_{2,1}(G)=0,$ then $\alpha_2(G)+\beta(G)\leq d_2(G)-1.$
\item If $\alpha_{p,1}(G)=0,$ then $\alpha_p(G)\leq d_p(G)-1.$
\end{enumerate}
\end{lemma}
\begin{proof}
(1), (2) and (3) are proved in \cite[Lemma 4]{alex}. Now assume that no complemented chief factor of $G$ has order 2 and let $r=\alpha_2(G)+\beta(G)$. There exists a sequence $X_r\leq Y_r\leq\dots \leq X_1\leq Y_1$ of normal subgroups of $G$ such that, for every $1\leq i\leq r,$
$Y_i/X_i$ is a complemented chief factor of $G$ of even order. Notice that $\beta(G/Y_1)=\alpha_2(G/Y_1)=0,$ hence $G/Y_1$ is a finite soluble group all of whose complemented chief factors have odd order, but then $G/Y_1$ has odd order and consequently $d_2(G)=d_2(Y_1).$ Moreover, as in the proof of
\cite[Lemma 4]{alex}, $d_2(Y_1)\geq d_2(Y_1/X_1)+r-1.$ Since $|Y_1/X_1|\neq 2$ and the Sylow 2-subgroups of a finite nonabelian simple group cannot be cyclic \cite[10.1.9]{rob}, we deduce $d_2(Y_1/X_1)\geq 2$ and consequently $d_2(G)=d_2(Y_1)\geq r+1.$ This proves (4). The proof of (5) is similar.
\end{proof}
Recall (see \cite[(1.1)]{alex} for more details) that
\begin{equation}\label{inizi1}
\begin{aligned}e(G)=\sum_{n\geq 0}(1-P_G(n))
\end{aligned}
\end{equation}
where $$P_G(n) =
\frac{|\{(g_1,\dots,g_n)\in G^n \mid \langle g_1,\dots,g_n\rangle=G\}|}{|G|^n}$$ is the probability that $n$ randomly chosen
elements of $G$ generate $G.$
Denote by $m_n(G)$ the number of index $n$ maximal subgroups of $G.$ We have (see \cite[11.6]{sub}):
\begin{equation}\label{inizi2}1-P_G(k)\leq \sum_{n\geq 2}\frac{m_n(G)}{n^{k}}.
\end{equation}
Using the notations introduced in \cite[Section 2]{pak}, we say that a maximal subgroup $M$ of $G$ is of type A if $\soc(G/\core_G(M))$ is abelian, of type B otherwise, and we denote by $m^A_n(G)$ (respectively $m^B_n(G)$) the number of maximal subgroups of $G$ of type A (respectively B) of
index $n.$ Given $t\in \mathbb N$ and $p\in \pi(G),$ define $$\mu^*(G,t)=\sum_{k\geq t}\left(\sum_{n\geq 5} \frac{m_n^B(G)}{n^k}\right), \quad \mu_p(G,t)=\sum_{k\geq t}\left(\sum_{n\geq 1}\frac{m_{p^n}^A(G)}{p^{nk}}\right).$$
\begin{lemma}\label{e123}
Let $t\in \mathbb N.$ Then $e(G)\leq t+\mu^*(G,t)+\sum_{p\in \pi(G)}\mu_p(G,t).$
\end{lemma}
\begin{proof}
By (\ref{inizi1}) and (\ref{inizi2}), $$e(G)\leq t+\sum_{n\geq t}(1-P_G(n))\leq t+\sum_{k\geq t}\left(\sum_{n\geq 2}\frac{m_n(G)}{n^k}\right).$$ The claimed bound follows since every maximal subgroup of $G$ of type A has prime-power index, while every maximal subgroup of type B has index at least $5$.
\end{proof}
\begin{lemma}\label{mustar}Let $t\in \mathbb N$.
If $\beta(G)=0,$ then $\mu^*(G,t)=0.$
If $t\geq \beta(G)+3,$ then $$\mu^*(G,t)\leq \frac{\beta(G)(\beta(G)+1)}{2\cdot 5^{t-4}}\cdot \frac{1}{4}.$$
\end{lemma}
\begin{proof}
It follows from \cite[Lemma 8]{alex} and its proof.
\end{proof}
\begin{lemma}\label{mup}Let $t\in \mathbb N$ and $p\in \pi(G).$ If $\alpha_p(G)=0,$ then $\mu_p(G,t)=0.$
\begin{enumerate}
\item If $\alpha_2(G)\leq t-1$ and $\alpha_{2,u}(G)\leq t-2$ for every $u>1,$ then $$\mu_2(G,t)\leq \frac{1}{2^{t-\alpha_2(G)-1}}.$$
\item Let $p$ be an odd prime. If $\alpha_p(G)\leq t-2$ then $$\mu_p(G,t)\leq \frac{1}{p^{t-\alpha_p(G)-2}}\frac{1}{(p-1)^2}.$$
\end{enumerate}
\end{lemma}
\begin{proof}
It follows from \cite[Lemma 7]{alex} and its proof.
\end{proof}
Let $G$ be a finite soluble group and let $\mathcal A$ be a set of representatives for the irreducible $G$-modules that are $G$-isomorphic to some complemented chief factor of $G.$ For every $A\in\mathcal A,$ let $\delta_A$ be the number of
complemented factors $G$-isomorphic to $A$ in a chief series of $G$, $q_A=|\End_{G}(A)|$, $r_A=\dim_{\End_{G}(A)}(A),$ $\zeta_A=0$ if $A$ is a trivial $G$-module, $\zeta_A=1$ otherwise. Moreover, for every $l\in \mathbb N,$ let $Q_{A,l}(s)$ be the Dirichlet polynomial defined by
$$Q_{A,l}(s)=1-\frac{q_A^{l+r_A\cdot \zeta_A}}{q_A^{r_A\cdot s}}.$$
By \cite[Satz 1]{g2}, for every positive integer $k$ we have
\begin{equation}\label{gzpr}
P_G(k)=\prod_{A\in \mathcal A}\left(\prod_{0\leq l\leq \delta_A-1}Q_{A,l}(k)\right).
\end{equation}
For every prime $p$ dividing $|G|$, let $\mathcal A_p$ be the subset of $\mathcal A$ consisting of the irreducible $G$-modules having order a power of $p$ and let $$P_{G,p}(k)=\prod_{A\in \mathcal A_p}\left(\prod_{0\leq l\leq \delta_A-1}Q_{A,l}(k)\right).$$
\begin{defn}For every prime $p$ and every positive integer $\alpha$ let
$$C_{p,\alpha}(s)=\prod_{0\leq i\leq \alpha-1}\left(1-\frac{p^i}{p^s}\right), \quad D_{p,\alpha}(s)=\prod_{1\leq i\leq \alpha}\left(1-\frac{p^i}{p^s}\right).$$
\end{defn}
\begin{lemma}\label{confronti}
Let $G$ be a finite soluble group and let $k$ be a positive integer.
\begin{enumerate}
\item If $d_p(G)\leq d$, then $P_{G,p}(k)\geq D_{p,d}(k)$.
\item If $p$ divides $|G/G^\prime|,$ then $P_{G,p}(k)\geq C_{p,d}(k)$.
\item If $\alpha_{p,1}(G)=0,$ then $P_{G,p}(k)\geq C_{p,d}(k)$.
\item If $d_2(G)\leq d$, then $P_{G,2}(k)\geq C_{2,d}(k)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $\mathcal A_p=\{A_1,\dots ,A_t\}$ and let $q_i=q_{A_i},$ $r_i=r_{A_i},$ $\zeta_i=\zeta_{A_i}$ and $\delta_i=\delta_{A_i}.$
Recall that
\begin{equation}\label{a}
P_{G,p}(k)=\prod_{\substack{1 \leq i \leq t \\
0 \leq l \leq \delta_i-1}}
Q_{A_i,l}(k).
\end{equation}
By Lemma \ref{stime}, $\delta_1+\delta_2+\dots +\delta_t=\alpha_p(G)\leq d_p(G)\leq d$, hence the number of factors $Q_{A_i,l}(k)$ in (\ref{a}) is at most $d$.
We order these factors in such a way that $Q_{A_i,u}(k)$ precedes $Q_{A_j,v}(k)$ if either $i < j$ or $i=j$ and $u < v$. Moreover we order the elements of $\mathcal A_p$ in such a way that $A_1$ is the trivial $G$-module if $p$ divides $|G/G^\prime|.$
\noindent 1)
Since $D_{p,d}(k)=0$ if $k\leq d$, we may take $k>d$.
To show that $P_{G,p}(k)\geq D_{p,d}(k),$ it is sufficient to show that the $j$-th factor $Q_j(k)=Q_{A_i,l}(k)$ of $P_{G,p}(k)$ is at least the $j$-th factor
$$D_j(k)=1-\frac{p^j}{p^k}$$ of $D_{p,d}(k)$.
If $j\leq \delta_1$ then $Q_j(k)=Q_{A_1,l}(k)
$ with $l=j-1$.
If $j>\delta_1$ then $Q_j(k)=
Q_{A_i,l}(k)$
for some $i\in \{2,\dots, t\}$ and $l\in \{0,\dots, \delta_i-1\}$, thus $$j=\delta_1+\delta_2+\dots +\delta_{i-1}+l+1\geq l+2.$$ In any case,
$$q_i^{r_i\zeta_i}q_i^{l}\leq q_i^{r_i(l+1)}\leq q_i^{r_ij}.$$
We have $q_i=p^{n_i}$ for some $n_i\in \mathbb N.$ Since $j\leq d<k$, we deduce that
\begin{equation*}
\frac{q_i^{r_i\zeta_i}q_i^{l}}{q_i^{r_ik}}\leq \frac{q_i^{r_ij}}{q_i^{r_ik}} =\left(\frac{{p^{j}}}{{p^k}}\right)^{r_in_i}\leq \frac{p^j}{p^k}.
\end{equation*}
But then
\begin{equation*}
Q_j(k)= 1-\frac{q_i^{r_i\zeta_i}q_i^{l}}{q_i^{r_ik}}\geq 1-\frac{p^j}{p^k}=D_j(k).
\end{equation*}
\noindent
2) Since $C_{p,d}(k)=0$ if $k < d$, we may take $k\geq d$.
To show that $P_{G,p}(k)\geq C_{p,d}(k),$ it is sufficient to show that the $j$-th factor $Q_j(k)=Q_{A_i,l}(k)$ of $P_{G,p}(k)$ is at least the $j$-th factor
$$C_j(k)=1-\frac{p^{j-1}}{p^k}$$ of $C_{p,d}(k)$.
If $i=1,$ then, by the way in which we ordered the elements of $\mathcal A_p$, we have $Q_j(k)=C_j(k)$. Otherwise, as we have seen in the proof of (1), $l+2\leq j$ so $r_i\zeta_i+l \leq r_i+j-2\leq r_i(j-1)$. Since $j\leq d\leq k$, we deduce that
\begin{equation*}
\frac{q_i^{r_i\zeta_i}q_i^{l}}{q_i^{r_ik}}\leq \frac{q_i^{r_i(j-1)}}{q_i^{r_ik}} \leq \frac{p^{j-1}}{p^k}\text {\ \ and\ \ }
Q_j(k)= 1-\frac{q_i^{r_i\zeta_i}q_i^{l}}{q_i^{r_ik}}\geq 1-\frac{p^{j-1}}{p^k}=C_j(k).
\end{equation*}
\noindent 3) Assume that no complemented chief factor of $G$ has order $p.$ By (5) of Lemma \ref{stime}, $\alpha_p(G)\leq d_p(G)-1\leq d-1.$ But then there are at most $d-1$ factors $Q_{A_i,l}(k)$ in (\ref{a}) and, arguing as in the proof of (1), we conclude $P_{G,p}(k)\geq D_{p,d-1}(k)\geq C_{p,d}(k).$
\noindent 4) We may assume $\alpha_2(G)\neq 0$ (otherwise $P_{G,2}(k)=1).$ Since $\alpha_{2,1}(G)\neq 0$ if and only if $2$ divides $|G/G^\prime|$, the conclusion follows from (2) and (3).
\end{proof}
\section{The main result}\label{main}
\begin{prop}\label{mainn}Let $G$ be a finite group. If all the Sylow subgroups of $G$ can be generated by $d$ elements and $G$ is not soluble, then $$e(G)\leq d+2.7501.$$
\end{prop}
\begin{proof}
Let $\beta=\beta(G).$ Since $G$ is not soluble, $\beta > 0$, hence by (2) and (3) of Lemma \ref{stime}, we have $1\leq \beta\leq d_2(G)-1\leq d-1$ and $\alpha_2(G)\leq d_2(G)-\beta\leq d-1.$
We distinguish two cases:
\noindent a) $\beta < d-1.$ By Lemmas \ref{e123}, \ref{mustar} and
\ref{mup}, and using an accurate estimate of $\sum_p (p-1)^{-2}$ given in \cite{hc}, we conclude
$$\begin{aligned}e(G)&\leq d+2+\mu^*(G,d+2)+\mu_2(G,d+2)+\sum_{p>2}\mu_p(G,d+2)\\&\leq d+2+\frac{1}{20}+\frac{1}{4}+\sum_{p>2}\frac{1}{(p-1)^2}\leq d+2.6751.
\end{aligned}$$
\noindent b) $\beta = d-1.$ By (2) and (4) of Lemma \ref{stime}, either $\alpha_2(G)=0$ or $\alpha_2(G)=\alpha_{2,1}(G)=1.$ In the first case $\mu_2(G,d+2)=0,$ in the second case $m_{2}^A(G)=1$ and consequently $$\mu_{2}(G,d+2)=\sum_{k\geq d+2}\frac{m_{2}^A(G)}{2^{k}}\leq \sum_{k\geq d+2}\frac{1}{2^{k}}\leq \sum_{k\geq 4}\frac{1}{2^{k}}\leq \frac{1}{8}.$$
By Lemmas \ref{e123}, \ref{mustar} and
\ref{mup}, we conclude
$$\begin{aligned}e(G)&\leq d+2+\mu^*(G,d+2)+\mu_2(G,d+2)+\sum_{p>2}\mu_p(G,d+2)\\&\leq d+2+\frac{1}{4}+\frac{1}{8}+\sum_{p>2}\frac{1}{(p-1)^2}\leq d+2.7501.\quad\qedhere
\end{aligned}$$
\end{proof}
The previous proposition reduces the proof of Theorem \ref{tuno} to the particular case when $G$ is soluble. To deal with this case, we are going to introduce, for every positive integer $d$ and every finite set of primes $\pi$ containing $2$, a supersoluble group $H_{\pi,d}$ with the property that $e(G)\leq e(H_{\pi,d})$ whenever $G$ is soluble, $\pi(G)\subseteq \pi$ and the Sylow subgroups of $G$ are $d$-generated.
\begin{defn}Let $\pi$ be a finite set of prime integers with $2\in \pi,$ and let
$d$ be a positive integer. We define $H_{\pi,d}$ as the semidirect product
$$H_{\pi,d}=\left(\left(\prod_{p \in \pi \setminus \{2\}} C_p^d\right) \rtimes C_2\right)\times C_2^{d-1}$$ where $C_p$ is the cyclic group of order $p$ and $C_2=\langle y \rangle$ acts on $A=\prod_{p \in \pi \setminus \{2\}} C_p^d$ by setting $x^y=x^{-1}$ for all $x\in A$.
\end{defn}
\begin{thm}\label{mainsol}Let $G$ be a finite soluble group. If all the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G)\leq e(H_{\pi,d})$, where $\pi=\pi(G)\cup\{2\}.$
\end{thm}
\begin{proof}
Let $H=H_{\pi,d}$, $p\in \pi$ and $k\in\mathbb N.$ By (\ref{gzpr}), $P_{H,p}(k)=D_{p,d}(k)$ if $p\neq 2,$ while $P_{H,2}(k)=C_{2,d}(k).$ By Lemma \ref{confronti},
$P_{G,p}(k)\geq P_{H,p}(k)$ for every $p\in \pi(G).$ This implies
$$P_G(k)=\prod_{p\in \pi(G)}P_{G,p}(k)\geq \prod_{p\in \pi}P_{H,p}(k)=P_H(k)$$
and consequently $e(G)=\sum_{k\geq 0}(1-P_G(k))\leq \sum_{k\geq 0}(1-P_H(k))=e(H).$
\end{proof}
\begin{defn}Let $e_d=\sup_{\pi}e(H_{\pi,d})$ and $\kappa=\sup_d (e_d-d).$
\end{defn}
Let $2\in \pi$ and let $\pi^*=\pi\setminus \{2\}.$ Since $P_{H_{\pi,d}}(k)=0$ for all $k\leq d$ we have
$$\begin{aligned}
e(H_{\pi,d})&=\sum_{k\geq 0} \left(1 - P_{H_{\pi,d}}(k)\right)=d+1+\sum_{k\geq d+1}\left(1-C_{2,d}(k)\prod_{p\in \pi^*}D_{p,d}(k)\right)
\\&=d+1+\sum_{k\geq d+1}\left(1-\prod_{1\leq i\leq d}\left(1-\frac{2^{i-1}}{2^k}\right)\prod_{p\in \pi^*}\prod_{1\leq i\leq d}\left(1-\frac{p^i}{p^k}\right)\right)\\&=d+1+\sum_{t\geq 0}\left(1-\prod_{1\leq i\leq d}\left(1-\frac{2^{i-1}}{2^{t+(d+1)}}\right)\prod_{p\in \pi^*}\prod_{1\leq i\leq d}\left(1-\frac{p^i}{p^{t+(d+1)}}\right)\right)
.\end{aligned}$$
We immediately deduce that $e(H_{\pi,d})-d$ increases as $d$ increases. Moreover we have
$$\begin{aligned}
e_d-d&=\sup_\pi \left(e(H_{\pi,d})-d\right)\\&=1+\sum_{k\geq d+1}\left(1-\frac{(1-\frac{1}{2^k})}{(1-\frac{2^d}{2^k})}\prod_{p}\prod_{1\leq i\leq d}\left(1-\frac{p^i}{p^k}\right)\right).\end{aligned}$$
As $\pi$ ranges over larger and larger sets of primes, for $k=d+1$ the double product tends to $0$, while for $k\geq d+2$ it tends to $\prod_{1\leq i\leq d}{\zeta(k-i)}^{-1}$, and so we get
$$\begin{aligned}
e_d-d&=2+\sum_{k\geq d+2}\left(1-\frac{(1-\frac{1}{2^k})}{(1-\frac{2^d}{2^k})}\prod_{1\leq i\leq d}{\zeta(k-i)}^{-1}\right)\\
&=2+\sum_{j\geq 1}\left(1-\frac{(1-\frac{1}{2^{j+(d+1)}})}{(1-\frac{1}{2^{j+1}})}\prod_{1\leq l\leq d}{\zeta(j+l)}^{-1}\right)\\
&=2+\sum_{j\geq 1}\left(1-\left(\frac{2^{j+1}-2^{-d}}{2^{j+1}-1}\right)\prod_{1+j\leq n\leq d+j}{\zeta(n)}^{-1}\right)
\end{aligned}$$
Let $c=\prod_{2\leq n\leq \infty}{\zeta(n)}^{-1}.$ Since $e_d-d$ increases as $d$ grows, we get
$$\begin{aligned}
\kappa& = \lim_{d \to \infty} e_d-d \\
&=2+\left(1-\left(\frac{2^{2}}{2^{2}-1}\right)c\right)+\sum_{j\geq 2}\left(1-\left(\frac{2^{j+1}}{2^{j+1}-1}\right)c\prod_{2\leq n\leq j}{\zeta(n)}\right)\\
&=2+\left(1-\frac{4}{3}\cdot c\right)+\sum_{j\geq 2}\left(1-\left(1+\frac{1}{2^{j+1}-1}\right)c\prod_{2\leq n\leq j}{\zeta(n)}\right).
\end{aligned}$$
Using the computer algebra system \textbf{PARI/GP} \cite{PARI2}, we get
\begin{equation*}
\kappa=2+\left(1-\frac{4}{3}\cdot c\right)+\sum_{j\geq 2}\left(1-\left(1+\frac{1}{2^{j+1}-1}\right)c\prod_{2\leq n\leq j}{\zeta(n)}\right)\sim 2.75239495.
\end{equation*}
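The series converges geometrically, so the value is also easy to reproduce without PARI/GP. The following Python sketch (ours; \texttt{mpmath} plays the role of GP's \texttt{zeta}, and the truncation bounds are our own choices) evaluates the same expression:
\begin{verbatim}
# Numerical evaluation (ours) of the displayed series for kappa; the
# neglected tails are of size roughly 2^(-200).
from mpmath import mp, mpf, zeta

mp.dps = 30
c = mpf(1)
for n in range(2, 400):   # c = prod_{n >= 2} zeta(n)^(-1), truncated
    c /= zeta(n)

kappa = 2 + (1 - mpf(4)/3 * c)
P = mpf(1)                # running product prod_{2 <= n <= j} zeta(n)
for j in range(2, 200):
    P *= zeta(j)
    kappa += 1 - (1 + mpf(1)/(2**(j + 1) - 1)) * c * P
print(kappa)              # ~ 2.75239495
\end{verbatim}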
Combining this result with Proposition \ref{mainn} and Theorem \ref{mainsol}, we obtain the proof of Theorem \ref{tuno}.
\section{Finite groups of odd order}
\begin{thm} \label{4}
Let $G$ be a finite soluble group. There exists a finite supersoluble group $H$ such that
\begin{enumerate}
\item $\pi(H)=\pi(G)$,
\item $P_G(k)\geq P_H(k)$ for all $k\in \mathbb{N}$,
\item $d_p(G)\geq d_p(H)$ for all $p\in \pi(G)$,
\item $\pi(G/G^\prime) \subseteq \pi(H/H^\prime).$
\end{enumerate}
\end{thm}
\begin{proof} Let $\pi(G)=\{p_1,\dots ,p_n\}$ with $p_1\leq\dots\leq p_n$. For $i\in\{1,\dots,n\},$ set $\pi_i= \{p_1,\dots ,p_i\}$. We will prove, by induction on $i,$ that for every $i\in\{1,\dots,n\}$ there exists a supersoluble group $H_i$ such that $\pi(H_i)=\pi_i$ and, for every $j\leq i$,
\begin{enumerate} \item $P_{H_i,p_j}(k)\leq P_{G,p_j}(k)$ for all $k\in \mathbb{N}$, \item $d_{p_j}(H_i)\leq d_{p_j}(G),$ \item if $C_{p_j}$ is an epimorphic image of $G$, then $C_{p_j}$ is an epimorphic image of $H_{i}$.
\end{enumerate}
Assume that $H_i$ has been constructed and set $p_{i+1}=p$ and $d_p(G)=d_p$.
We distinguish two different cases:
\noindent 1) Either $p$ divides $|G/G^\prime|$ or $G$ contains no complemented chief factor of order $p.$
We consider the direct product
$H_{i+1}=H_i\times C_p^{d_p}.$ Clearly $P_{H_{i+1},p_j}(k)=P_{H_i,p_j}(k)\leq P_{G,p_j}(k)$ if $j\leq i.$ Moreover, by (2) and (3) of Lemma \ref{confronti}, $P_{H_{i+1},p}(k)=C_{p,d_p}(k)\leq P_{G,p}(k).$
\noindent 2) $p$ does not divide $|G/G^\prime|$ but $G$ contains a complemented chief factor which is isomorphic to a nontrivial $G$-module, say $A,$ of order $p$. In this case $G/C_G(A)$ is a nontrivial cyclic group whose order divides $p-1.$ Let $q$ be a prime divisor of
$|G/C_G(A)|$ (it must be $q=p_j$ for some $j\leq i$). Since $q$ divides $|G/G^\prime|,$ we have that $q$ divides also $|H_i/H_i^\prime|,$ hence there exists a normal subgroup $N$ of $H_i$ with $H_i/N\cong C_q$ and a nontrivial action of $H_i$ on $C_p$ with kernel $N.$
We use this action to construct the supersoluble group $H_{i+1}=C_p^{d_p}\rtimes H_i.$ Clearly $P_{H_{i+1},p_j}(k)=P_{H_i,p_j}(k)\leq P_{G,p_j}(k)$ if $j\leq i.$ Moreover, by (1) of Lemma \ref{confronti}, $P_{H_{i+1},p}(k)=D_{p,d_p}(k)\leq P_{G,p}(k).$
We conclude the proof by noticing that $H=H_n$ satisfies the requirements of the statement.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainsol}]
Let $\pi(G)=\pi.$ By Theorem \ref{4}, there exists a supersoluble group $H$ such that $\pi(H)=\pi,$ $d_p(H)\leq d$ for every $p\in \pi$ and $P_G(k)\geq P_H(k)$ for every $k\in \mathbb N.$ In particular $e(G)=\sum_{k\geq 0}\left(1-P_G(k)\right)\leq \sum_{k\geq 0}\left(1-P_H(k)\right)=e(H).$
Since $H$ is supersoluble, if $A$ is $H$-isomorphic to a chief factor of $H,$ then $|A|=p$ for some $p\in \pi$ and $H/C_H(A)$ is a cyclic group of order dividing $p-1.$ If $p$ is a Fermat prime, then $H/C_H(A)$ is a 2-group and, since $|H|$ is odd, we must have $H=C_H(A).$ This implies that if $p\in \pi$ is a Fermat prime, then
$P_{H,p}(k)=C_{p,d_p(H)}(k)\geq C_{p,d}(k).$ For all the other primes in $\pi,$ by (1) of Lemma \ref{confronti} we have $P_{H,p}(k)\geq D_{p,d}(k).$
Therefore, denoting by $\Lambda$ the set of the Fermat primes and by $\Delta$ the set of the remaining odd primes, we get
\begin{equation*}
P_H(k)=\prod_{p\in \pi}P_{H,p}(k)
\geq \underset{p\in \Lambda}{\prod}C_{p,d}(k)\underset{p \in \Delta}{\prod}D_{p,d}(k).
\end{equation*}
It follows that
\begin{equation*}
\begin{split}
e(H)&=\sum_{k\geq 0}\left(1-P_H(k)\right)\\
&\leq \sum_{k\geq 0}\left(1-\underset{p\in \Lambda}{\prod}\prod_{1\leq i\leq d}\left(1-\frac{p^{i-1}}{p^k}\right)\underset{\substack{p\in \Delta\\ p\neq 2}}{\prod}\prod_{1\leq i\leq d}\left(1-\frac{p^{i}}{p^k}\right)\right)\\
&=d+1+\sum_{k\geq d+1}\left(1-\underset{p\in \Lambda}{\prod}\prod_{1\leq i\leq d}\left(1-\frac{p^{i-1}}{p^k}\right)\underset{\substack{p\in \Delta}}{\prod}\prod_{1\leq i\leq d}\left(1-\frac{p^{i}}{p^k}\right)\right)\\
&= d+1+\sum_{t\geq 0}\left(1-\prod_{p\in \Lambda}\prod_{1\leq i\leq d}\left(1-\frac{p^{i-1}}{p^{t+(d+1)}}\right)\prod_{p\in \Delta
}\prod_{1\leq i\leq d}\left(1-\frac{p^i}{p^{t+(d+1)}}\right)\right).
\end{split}
\end{equation*}
Let $$\tilde\kappa_d=\sum_{t\geq 0}\left(1-\prod_{p\in \Lambda}\prod_{1\leq i\leq d}\left(1-\frac{p^{i-1}}{p^{t+(d+1)}}\right)\prod_{p\in \Delta
}\prod_{1\leq i\leq d}\left(1-\frac{p^i}{p^{t+(d+1)}}\right)\right)+1.$$
It can be easily checked that $\tilde\kappa_d$ increases as $d$ increases. Let
\begin{equation*}
b={\underset{1\leq n\leq \infty}{\prod}\left(1-\frac{1}{2^{n}}\right)^{-1}},\qquad c=\prod_{2\leq n\leq \infty}{\zeta(n)}^{-1}
\end{equation*}
and let $\Lambda^*=\{3,\;5,\;17,\ 257,\; 65537\}$ be the set of the known Fermat primes.
Computations similar to the ones in the final part of Section~\ref{main}
lead to the conclusion
$$\begin{aligned}
\tilde\kappa_d
& \leq 3\!-\!\frac{b\cdot c}{2}\prod_{p\in \Lambda}\frac{p^{2}}{p^{2}\!-\!1}+\sum_{j\geq 2} \left(1\!-\! b\!\!\underset{1\leq n\leq j}{\prod}\!\!\!\left(\!1-\!\frac{1}{2^{n}}\right)\prod_{p\in \Lambda}\left(1\!+\!\frac{1}{p^{j+1}-1}\right)\!c\!\!\prod_{2\leq n\leq j}\!\!\!{\zeta(n)}\right) \\
& \leq 3\!-\!\frac{b\cdot c}{2}\!\prod_{p\in \Lambda^*}\!\frac{p^{2}}{p^{2}\!-\!1}\!+\sum_{j\geq 2} \left(\!1\!-\! b\!\!\underset{1\leq n\leq j}{\prod}\!\!\!\left(1-\frac{1}{2^{n}}\right)\prod_{p\in \Lambda^*}\!\!\left(1\!+\!\frac{1}{p^{j+1}-1}\right)\!c\!\!\!\prod_{2\leq n\leq j}\!\!\!{\zeta(n)}\right)\!.
\end{aligned}$$
Let $$\tilde \kappa=3\!-\!\frac{b\cdot c}{2}\!\prod_{p\in \Lambda^*}\!\frac{p^{2}}{p^{2}\!-\!1}\!+\sum_{j\geq 2} \left(\!1\!-\! b\underset{1\leq n\leq j}{\prod}\!\!\!\left(1-\frac{1}{2^{n}}\right)\prod_{p\in \Lambda^*}\!\!\left(1\!+\!\frac{1}{p^{j+1}-1}\right)\!c\!\!\!\prod_{2\leq n\leq j}\!\!\!{\zeta(n)}\right).$$
With the help of \textbf{PARI/GP}, we get that $\tilde \kappa \sim 2.148668.$ \end{proof}
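For readers who wish to reproduce this estimate, the following is a minimal numerical sketch (in Python with \textbf{mpmath}, rather than the original \textbf{PARI/GP} computation; the truncation bounds $N$ and $J$ are our own choices, taken large enough for the printed digits):
\begin{verbatim}
# Numerical sketch of the constant kappa~ defined above.
from mpmath import mp, mpf, zeta, fprod

mp.dps = 30
FERMAT = [3, 5, 17, 257, 65537]        # Lambda^*: the known Fermat primes
N, J = 300, 80                         # truncation bounds (our choice)

b = 1 / fprod(1 - mpf(2)**(-n) for n in range(1, N + 1))
c = fprod(1 / zeta(n) for n in range(2, N + 1))

kappa = 3 - (b * c / 2) * fprod(mpf(p)**2 / (p**2 - 1) for p in FERMAT)
for j in range(2, J + 1):
    t = b * fprod(1 - mpf(2)**(-n) for n in range(1, j + 1))
    t *= fprod(1 + 1 / (mpf(p)**(j + 1) - 1) for p in FERMAT)
    t *= c * fprod(zeta(n) for n in range(2, j + 1))
    kappa += 1 - t
print(kappa)                           # expected: approximately 2.148668
\end{verbatim}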
\section{Permutation groups}
\begin{thm}{\cite[Corollary]{kp}\label{kopr}} If $G$ is a $p$-subgroup of $\perm(n),$ then $G$ can be generated by $\lfloor n/p\rfloor$ elements.
\end{thm}
\begin{thm}{\cite[Theorem 10.0.5]{nina}}\label{nin}
The chief length of a permutation group of degree $n$ is at most $n-1.$
\end{thm}
\begin{lemma}\label{basso}
If $G\leq \perm(n)$ and $n\geq 8$, then $\beta(G)\leq \lfloor n/2 \rfloor -3$.
\end{lemma}
\begin{proof}
Let $R(G)$ be the soluble radical of $G.$ By \cite[Theorem 2]{holt}, $G/R(G)$ has a faithful permutation representation of degree at most $n,$ so we may assume $R(G)=1$. In particular $\soc(G)=S_1\times \cdots \times S_r$ where $S_1,\dots,S_r$ are nonabelian simple groups and, by \cite[Theorem 3.1]{ep}, $n\geq 5r.$ Let $K=N_G(S_1)\cap \dots \cap N_G(S_r).$ We have that $K/\soc(G)$ is soluble and that $G/K\leq \perm(r),$ so by Theorem \ref{nin}, $\beta(G/K)\leq r-1$ (and indeed $\beta(G/K)=0$ if $r\leq 4$). But then $\beta(G)\leq 2r-1\leq 2\lfloor n/5 \rfloor -1$ if $r\geq 5,$ and $\beta(G)\leq r \leq \lfloor n/5 \rfloor$ otherwise. In both cases $\beta(G)\leq \lfloor n/2 \rfloor -3$: when $r\geq 5$ we have $n\geq 5r\geq 25$, so $2\lfloor n/5 \rfloor -1\leq \lfloor n/2 \rfloor -3$, while for $r\leq 4$ one checks directly that $\lfloor n/5 \rfloor\leq \lfloor n/2 \rfloor -3$ for every $n\geq 8$.
\end{proof}
\begin{lemma}\label{notsol}
Suppose that $G\leq \perm(n)$ with $n\geq 8$. If $G$ is not soluble, then $$e(G)\leq \lfloor n/2 \rfloor +1.533823.$$
\end{lemma}
\begin{proof} Let $m=\lfloor n/2 \rfloor.$ By Theorem $\ref{kopr},$ $d_2(G)\leq m.$ Since $G$ is not soluble, we must have $\beta(G)\geq 1$. By Lemma \ref{basso}, $\beta(G)\leq m-3$, hence, by Lemma \ref{mustar}, $\mu^*(G,m)\leq 1/4.$ By (2) and (4) of Lemma \ref{stime}, $\alpha_2(G)\leq m-1$ and $\alpha_{2,u}(G)\leq m-2$ for every $u > 1,$ hence, by Lemma \ref{mup}, $\mu_2(G,m)\leq 1.$ If $p\geq 5,$ then, by Theorem $\ref{kopr},$ $m-\alpha_p(G)\geq m-d_p(G)\geq m - \lfloor n/5 \rfloor \geq 3$ so,
by Lemma \ref{mup},
$\mu_p(G,m)\leq (p(p-1)^2)^{-1}.$ Since $n\geq 8$ we have $m-\alpha_3(G)\geq m - \lfloor n/3 \rfloor \geq 2$
if $n\neq 9.$ On the other hand, it can be easily checked that $\alpha_3(G)\leq 2$ for every insoluble subgroup $G$ of $\perm(9),$ so
$m-\alpha_3(G)\geq 2$ also when $n=9.$ But then, again by Lemma \ref{mup},
$\mu_3(G,m)\leq 1/4.$ It follows
$$\begin{aligned}e(G)&\leq m + \mu^*(G,m)+\mu_2(G,m)+\mu_3(G,m)+\sum_{p>3}\mu_p(G,m)\\&\leq m+\!\frac{1}{4}\!+\!1+\!\frac{1}{4}\!+\!\sum_{p\geq 5}\frac{1}{p(p-1)^2}\leq m\!+\!\frac{3}{2}\!+\!\sum_{n\geq 5}\frac{1}{n(n-1)^2}\leq m\!+\!1.533823.\ \qedhere \end{aligned}$$
\end{proof}
\begin{lemma}
Suppose that $G\leq \perm(n)$ with $n\geq 8$. If $G$ is soluble
and $\alpha_{2,1}(G)<\lfloor n/2 \rfloor,$ then
$$e(G)\leq \lfloor n/2 \rfloor + 1.533823.$$
\end{lemma}
\begin{proof}
Let $\alpha=\alpha_{2,1}(G)$, $\alpha^*=\sum_{i>1}\alpha_{2,i}(G)$ and $m=\lfloor n/2 \rfloor.$ Notice that $\alpha^*\leq m-1$ by Lemma \ref{stime} (4). Set $$\mu_{2,1}(G,t)=\sum_{k\geq t}\frac{m_{2}^A(G)}{2^{k}},\quad
\mu_{2,2}(G,t)=\sum_{k\geq t}\left(\sum_{n\geq 2}\frac{m_{2^n}^A(G)}{2^{nk}}\right).$$
We distinguish two cases:
\noindent a) $\alpha_{2,u}(G)<m-1$ for every $u\geq 2.$
Since ${m_{2}^A(G)}=2^\alpha-1,$ we have $$\mu_{2,1}(G,m)\leq \sum_{k\geq m}\frac{2^\alpha}{2^k}=\frac{1}{2^{m-\alpha-1}}\leq 1.$$ Moreover, arguing as in the proof of \cite[Lemma 7]{alex}, we deduce $$\mu_{2,2}(G,m)\leq \frac{1}{2^{m-\alpha^*-1}}\leq 1.$$ Notice that if $\alpha=m-1,$ then $\alpha^*\leq 1$ and consequently $\mu_{2,2}(G,m)\leq 2^{2-m}\leq 1/4.$ Similarly, if $\alpha^*=m-1,$ then $\alpha\leq 1$ and $\mu_{2,1}(G,m)\leq 2^{2-m}\leq 1/4.$ It follows that $\mu_2(G,m)=\mu_{2,1}(G,m)+\mu_{2,2}(G,m)\leq 5/4.$ Except in the case when $n=9$ and $\alpha_3(G)=3$, arguing as at the end of the proof of Lemma \ref{notsol}, we conclude
$$\begin{aligned}e(G)&\leq m +\mu_2(G,m)+\mu_3(G,m)+\sum_{p>3}\mu_p(G,m)\\&\leq m+\frac{5}{4}+\frac{1}{4}+\sum_{p\geq 5}\frac{1}{p(p-1)^2}\leq m+1.533823.\quad \end{aligned}$$
It remains to deal with the case when $G$ is a soluble subgroup of $\perm(9)$ with $\alpha_3(G)=3.$ This occurs only if $G$ is contained in the wreath product
$\perm(3)\wr \perm(3).$ In particular $\alpha_2(G)\leq 3.$ If $\alpha_2(G)\leq 2,$ then, by Lemma \ref{mup},
$$e(G)\leq 5+\mu_2(G,5)+\mu_3(G,5)\leq 5+\frac{1}{4}+\frac{1}{4} = 5.5.$$
We have $\alpha_2(G)=\alpha_3(G)=3$ only in two cases: $\perm(3)\times \perm(3) \times \perm(3)$ and $\langle (1,2,3), (4,5,6), (1,4)(2,5)(3,6), (1,2)(4,5)\rangle \times \perm(3).$ In these two cases, $G$ contains exactly 16 maximal subgroups, 7 of index 2 and 9 of index 3. But then
$$e(G)\leq 4+\sum_{k\geq 4}\frac{m_2(G)}{2^k}+\sum_{k\geq 4}\frac{m_3(G)}{3^k}= 4+\sum_{k\geq 4}\frac{7}{2^k}+\sum_{k\geq 4}\frac{9}{3^k}=4+\frac{7}{8}+\frac{1}{6}\sim 5.0417.
$$
\noindent b) $\alpha_{2,u}(G)=m-1$ for some $u\geq 2.$ In this case
$m_2^A(G)\leq 1,$ so $$\mu_{2,1}(G,m+1)\leq \sum_{k\geq m+1}\frac{1}{2^{k}}=\frac{1}{2^m}\leq \frac{1}{16}.$$
Moreover, by \cite[Lemma 5]{alex}, $m_{2^u}^A(G)\leq 2^{u\alpha_{2,u}(G)+u}$, hence
$$\begin{aligned}\mu_{2,2}(G,m+1)&=\sum_{k\geq m+1}\left(\sum_{n\geq 2}\frac{m_{2^n}^A(G)}{2^{nk}}\right)=\sum_{k\geq m+1}\frac{m_{2^u}^A(G)}{2^{uk}}\leq \sum_{k\geq m+1}\frac{2^{u\alpha_{2,u}(G)+u}}{2^{uk}}\\&\leq \sum_{k\geq m+1}\frac{2^{um}}{2^{uk}}=\frac{1}{2^u-1}\leq \frac 1{3}.
\end{aligned}$$
If $p\geq 5,$ then $m-\alpha_p(G)\geq 3$ so,
by Lemma \ref{mup},
$\mu_p(G,m+1)\leq (p(p-1))^{-2}.$ Moreover $m-\alpha_3(G)\geq 2$ (notice that there is no subgroup of $\perm(9)$ with $\alpha_3(G)=3$ and $\alpha_{2,u}(G)=3$ for some $u\geq 2)$, so, again by Lemma \ref{mup}, $\mu_3(G,m+1)\leq 1/12.$ It follows
$$\begin{aligned}e(G)&\leq m\! + \!1 +\mu_{2,1}(G,m+1)+\mu_{2,2}(G,m+1)+\mu_3(G,m+1)+\sum_{p>3}\mu_p(G,m+1)\\&\leq m\!+\!1\!+\!\frac{1}{16}\!+\!\frac 1{3}\!+\!\frac 1{12}\!+\!\!\sum_{p\geq 5}\frac{1}{p^2(p-1)^2}\leq m\!+\!\frac {71}{48}\!+\!\!\sum_{n\geq 5}\frac{1}{n^2(n-1)^2}\leq m\!+\!1.4843.\!\qedhere \end{aligned}$$
\end{proof}
When $G\leq \perm(n)$ and $n \leq 7,$ the precise value of $e(G)$ can be computed by \textbf{GAP} \cite{gap} using the formula $$e(G)=-\sum_{H < G}\frac{\mu_G(H)|G|}{|G|-|H|},$$ where $\mu_G$ is the M\"obius function defined on the
subgroup lattice of $G$ (see \cite[Theorem 1]{mon}). The crucial information is summarized in the following lemma.
\begin{lemma}
Suppose that $G\leq \perm(n)$ with $n\leq 7$. Either $e(G)\leq \lfloor n/2 \rfloor+1$ or one of the following cases occurs:\begin{enumerate}
\item $G\cong \perm(3),$ $n=3$, $e(G)=29/10;$
\item $G\cong C_2\times C_2,$ $n=4$, $e(G)=10/3;$
\item $G\cong D_8,$ $n=4$, $e(G)=10/3;$
\item $G\cong C_2\times \perm(3),$ $n=5$, $e(G)=1181/330;$
\item $G\cong C_2\times C_2\times C_2,$ $n=6$, $e(G)=94/21;$
\item $G\cong C_2\times D_8,$ $n=6$, $e(G)=94/21;$
\item $G\cong C_2\times C_2\times \perm(3),$ $n=7$, $e(G)=241789/53130;$
\item $G\cong D_8\times \perm(3),$ $n=7$, $e(G)=241789/53130.$
\end{enumerate}
\end{lemma}
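As an illustration of the M\"obius-function formula above, the following self-contained sketch (in Python, as a stand-in for the \textbf{GAP} computation and practical only for very small groups) recovers the value $e(G)=29/10$ of case (1) for $G\cong\perm(3)$ by brute-force enumeration of the subgroup lattice:
\begin{verbatim}
# e(G) = -sum_{H<G} mu_G(H)|G|/(|G|-|H|), tested on G = Sym(3).
from itertools import combinations, permutations
from fractions import Fraction

def compose(a, b):                     # (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

G = list(permutations(range(3)))       # Sym(3) as tuples

def closure(gens):                     # subgroup generated by gens
    s = {tuple(range(3))} | set(gens)
    while True:
        new = {compose(a, b) for a in s for b in s} - s
        if not new:
            return frozenset(s)
        s |= new

subgroups = {closure(g) for r in range(len(G) + 1)
             for g in combinations(G, r)}

whole, mu = frozenset(G), {}
for H in sorted(subgroups, key=len, reverse=True):   # supersets first
    mu[H] = 1 if H == whole else -sum(mu[K] for K in subgroups if H < K)

e = -sum(mu[H] * Fraction(len(G), len(G) - len(H))
         for H in subgroups if H != whole)
print(e)                               # 29/10, matching case (1) above
\end{verbatim}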
\begin{thm}
Let $G$ be a permutation group of degree $n\neq 3$. If $\alpha_{2,1}(G)=\lfloor n/2 \rfloor$, then
$e(G)\leq \lfloor n/2 \rfloor+\nu,$ with $\nu\sim 1.606695.$
\end{thm}
\begin{proof}
Let $m=\lfloor n/2 \rfloor.$ We have that $\alpha_{2,1}(G)=m$ if and only if $C_2^m$ is an epimorphic image of $G.$ By \cite{kp}, if $C_2^m$ is an epimorphic image of $G$, then $G$ is the direct product of its transitive constituents
and each constituent is one of the following: $\perm(2),$ of degree 2, $\perm(3),$ of degree 3,
$C_2 \times C_2,$ $D_8,$ of degree 4, and the central product $D_8 \circ D_8,$ of degree 8. Consequently:
$$
G/\frat(G)\simeq \begin{cases} C_2^m &\text{ if $n=2m$},\\
C_2^{m-1}\times \perm(3) &\text{ if $n=2m+1.$}
\end{cases}
$$
Hence, by (\ref{gzpr}),
\begin{equation*}
P_G(k)=P_{G/\frat(G)}(k)=\prod_{0\leq i\leq m-1}\left(1-\frac{2^i}{2^k}\right){\left(1-\frac{3}{3^k}\right)}^{n-2m}.
\end{equation*}
Setting $\eta=0$ if $n$ is even, $\eta=1$ otherwise, we have
\begin{equation*}
\begin{split}
e(G)&=\sum_{k\geq 0}\left(1-P_G(k)\right)\leq \sum_{k\geq 0}\left(1-\prod_{0\leq i\leq m-1}\left(1-\frac{2^i}{2^k}\right){\left(1-\frac{3}{3^k}\right)^\eta}\right)\\
&= m+ \sum_{k\geq m}\left(1-\prod_{0\leq i\leq m-1}\left(1-\frac{2^i}{2^k}\right){\left(1-\frac{3}{3^k}\right)^\eta}\right)\\
&= m+ \sum_{j\geq 0}\left(1-\prod_{1\leq l\leq m}\left(1-\frac{1}{2^{j+l}}\right){\left(1-\frac{3}{3^{j+m}}\right)^\eta}\right).
\end{split}
\end{equation*}
Set $$\omega_{m,\eta}=\sum_{j\geq 0}\left(1-\prod_{1\leq l\leq m}\left(1-\frac{1}{2^{j+l}}\right){\left(1-\frac{3}{3^{j+m}}\right)^\eta}\right).$$
Clearly $\omega_{m,0}$ increases with $m.$ On the other hand, if $m\geq 4$ and $j\geq 0$, then
\begin{equation*}
\begin{split}
\left(1-\frac{1}{2^{j+m+1}}\right)\left(1-\frac{3}{3^{j+m+1}}\right)\leq \left(1-\frac{3}{3^{j+m}}\right)
\end{split}
\end{equation*}
and so $\omega_{m,1} \leq \omega_{m+1,1}$ if $m \geq 4.$
Moreover $$\lim_{m\to \infty}\omega_{m,1}=\lim_{m\to \infty}\omega_{m,0}\sim 1.606695.$$
But then $e(G)\leq m+1.606695$ whenever $m\geq 4.$ The value of $e(G)$ for small $n$ is given by the following table (which also indicates how fast $e(G)-m$ tends to $1.606695$).
\begin{center}
\begin{tabular}{|p{0.7cm}|p{4.2cm}|}
\hline
\rule[-2mm]{0mm}{0.7cm}
$n$ & $e(G)$ \\
\hline
\rule[-2mm]{0mm}{0.7cm}
$2$ & $2$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$3$ & $\frac{29}{10}=2.900$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$4$ & $\frac{10}{3}\sim 3.333$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$5$ & $\frac{1181}{330}\sim 3.579$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$6$ & $\frac{94}{21}\sim 4.476$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$7$ & $\frac{241789}{53130}\sim 4.551$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$8$ & $\frac{194}{35}\sim 5.5429$\\
\hline
\end{tabular}
\begin{tabular}{|p{0.7cm}|p{4.2cm}|}
\hline
\rule[-2mm]{0mm}{0.7cm}
$n$ & $e(G)$ \\
\hline
\rule[-2mm]{0mm}{0.7cm}
$9$ & $\frac{4633553}{832370}\sim 5.5667$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$10$ & $\frac{7134}{1085}\sim 6.5751$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$11$ & $\frac{3227369181}{490265930}\sim 6.5828$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$12$ & $\frac{74126}{9765}\sim 7.59099$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$13$ & $\frac{6399598043131}{842767133670}\sim 7.59355$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$14$ & $\frac{10663922}{1240155}\sim 8.59886$\\
\hline
\rule[-2mm]{0mm}{0.7cm}
$15$ & $\frac{70505670417749503}{8198607229768494}\sim 8.59971$\\
\hline
\end{tabular}
\end{center}
From the information contained in this table, we deduce that $e(G)\leq m+1.606695$, except when $G=\perm(3).$
\end{proof}
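As a quick sanity check on the even-degree rows of the table (a hypothetical snippet, not part of the original computation; for odd $n$ one would also include the factor $(1-3/3^k)$), one can truncate the series $e(G)=\sum_{k\geq 0}\left(1-P_G(k)\right)$ numerically:
\begin{verbatim}
# For n = 2m: e(G) = sum_{k>=0} (1 - prod_{0<=i<m} (1 - 2^i/2^k)).
from math import prod

def e_even(m, kmax=80):                # kmax: truncation of the series
    return sum(1 - prod(1 - 2.0**(i - k) for i in range(m))
               for k in range(kmax))

for n in (2, 4, 6, 8, 10):
    print(n, e_even(n // 2))           # ~2.0, ~3.333, ~4.476, ~5.543, ~6.575
\end{verbatim}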
| {
"timestamp": "2017-07-25T02:05:33",
"yymm": "1707",
"arxiv_id": "1707.07193",
"language": "en",
"url": "https://arxiv.org/abs/1707.07193",
"abstract": "Given a finite group $G,$ let $e(G)$ be expected number of elements of $G$ which have to be drawn at random, with replacement, before a set of generators is found. If all the Sylow subgroups of $G$ can be generated by $d$ elements, then $e(G)\\leq d+\\kappa$ with $\\kappa \\sim 2.75239495.$ The number $\\kappa$ is explicitly described in terms of the Riemann zeta function and is best possible. If $G$ is a permutation group of degree $n,$ then either $G=S_3$ and $e(G)=2.9$ or $e(G)\\leq \\lfloor n/2\\rfloor+\\kappa^*$ with $\\kappa^* \\sim 1.606695.$",
"subjects": "Group Theory (math.GR)",
"title": "The expected number of elements to generate a finite group with $d$-generated Sylow subgroups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.98657174604767,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7089449245786944
} |
https://arxiv.org/abs/1410.5115 | Remarks on the circumcenter of mass | Suppose that to every non-degenerate simplex Delta in n-dimensional Euclidean space a `center' C(Delta) is assigned so that the following assumptions hold: (i) The map that assigns C(Delta) to Delta commutes with similarities and is invariant under the permutations of the vertices of the simplex; (ii) The map that assigns Vol(Delta) C(Delta) to Delta is polynomial in the coordinates of the vertices of the simplex. Then C(Delta) is an affine combination of the center of mass and the circumcenter of Delta (with the coefficients independent of the simplex). The motivation for this theorem comes from the recent study of the circumcenter of mass of simplicial polytopes by the authors and by A. Akopyan. | \section{Introduction} \label{intro}
Given a homogeneous polygonal lamina $P$, one way to find its center of mass is as follows: triangulate $P$, assign to each triangle its centroid, taken with the weight equal to the area of the triangle, and find the center of mass of the resulting system of point masses. That the resulting point, $CM(P)$, does not depend on the triangulation, is a consequence of the Archimedes Lemma: {\it if an object is divided into smaller objects, then the center of mass of the compound object is the weighted average of the centers of mass of the parts, with the weights equal to the respective areas}.
Replace, in the above construction, the centroids of the triangles by their circumcenters. The resulting weighted average is called the {\it circumcenter of mass} of the polygon $P$, denoted by $CCM(P)$. This point is well defined, that is, does not depend on the triangulation (assuming that degenerate triangles are avoided), see Figure \ref{Defn}.
\begin{figure}[hbtp]
\centering
\includegraphics[height=2in]{Defn.pdf}
\caption{Circumcenter of mass}
\label{Defn}
\end{figure}
This construction is mentioned in the 19th century book \cite{La}, where it is attributed to the Italian algebraic geometer G. Bellavitis. We learned about this reference from B. Gr\"unbaum who, together with G. C. Shephard, studied this construction in the early 1990s \cite{Gr}. Independently, and at about the same time, the circumcenter of mass was rediscovered by V. Adler \cite{Ad1,Ad2} as an integral of a discrete dynamical system called recutting of polygons.
The explicit formulas are as follows. Let the coordinates of the vertices of the polygon $P$, taken in the cyclic order, be $(x_i,y_i),\ i=0,\ldots,n-1$, with the indices understood mod $n$. Then $CCM(P)=$
$$
\frac{1}{4 A(P)}\left(\sum_{i=0}^{n-1}y_{i}(x_{i-1}^{2}+y_{i-1}^{2}-x_{i+1}^{2}-y_{i+1}^{2}),\sum_{i=0}^{n-1}-x_{i}(x_{i-1}^{2}+y_{i-1}^{2}-x_{i+1}^{2}-y_{i+1}^{2})\right),
$$
where $A(P)$ is the signed area of $P$. For comparison, $CM(P)=$
$$
\frac{1}{6 A(P)} \left(\sum_{i=0}^{n-1}(x_{i}+x_{i+1})(x_{i}y_{i+1}-x_{i+1}y_{i}),\sum_{i=0}^{n-1}(y_{i}+y_{i+1})(x_{i}y_{i+1}-x_{i+1}y_{i})\right).
$$
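For concreteness, the following minimal sketch (in Python; our own illustration of the two formulas above, with vertices indexed mod $n$) evaluates both centers; for a single triangle it returns the circumcenter and the centroid, as expected:
\begin{verbatim}
# The planar formulas above; P is a list of (x, y) vertices in cyclic order.
def signed_area(P):
    n = len(P)
    return 0.5 * sum(P[i][0] * P[(i + 1) % n][1]
                     - P[(i + 1) % n][0] * P[i][1] for i in range(n))

def ccm(P):                                  # circumcenter of mass
    n, A = len(P), signed_area(P)
    q = [x * x + y * y for x, y in P]        # squared norms |V_i|^2
    cx = sum(P[i][1] * (q[i - 1] - q[(i + 1) % n]) for i in range(n))
    cy = sum(-P[i][0] * (q[i - 1] - q[(i + 1) % n]) for i in range(n))
    return (cx / (4 * A), cy / (4 * A))

def cm(P):                                   # center of mass
    n, A = len(P), signed_area(P)
    w = [P[i][0] * P[(i + 1) % n][1] - P[(i + 1) % n][0] * P[i][1]
         for i in range(n)]
    cx = sum((P[i][0] + P[(i + 1) % n][0]) * w[i] for i in range(n))
    cy = sum((P[i][1] + P[(i + 1) % n][1]) * w[i] for i in range(n))
    return (cx / (6 * A), cy / (6 * A))

T = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(ccm(T))   # (0.5, 0.5): the circumcenter of T
print(cm(T))    # (0.333..., 0.333...): the centroid of T
\end{verbatim}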
The construction of the circumcenter of mass extends to higher dimensions, and to the elliptic and hyperbolic geometries. We studied it in \cite{TT2} in relation with the so-called discrete bicycle transformation \cite{TT1}. See also the paper by A. Akopyan \cite{Ak}.
The construction in ${\mathbb R}^n$ is similar. Given a simplicial polytope $P$, consider its non-degenerate triangulation. Assign the circumcenter $CC(\Delta_i)$ to each simplex $\Delta_i$ of the triangulation, and take the center of mass of these points with weights equal to the oriented volumes of the respective simplices:
\begin{equation} \label{totsum}
CCM(P)=\frac{1}{\mathrm{Vol}(P)}\sum_i \mathrm{Vol}(\Delta_i)\ CC(\Delta_i).
\end{equation}
The result does not depend on the triangulation.
The explicit formula is as follows. Let $F=(V_1,\ldots,V_n)$ be a face of $P$, where $V_i$ are vectors in ${\mathbb R}^n$. Let $A(F)$ be the $n\times n$ matrix made of vectors $V_i$, and let $A_i(F)$ be obtained from $A(F)$ by replacing the $i$th row with $(|V_1|^2,\ldots,|V_n|^2)$. Then the $i$th component of the circumcenter of mass is given by
$$
CCM(P)_i=\frac{1}{2 (n!) {\rm Vol}(P)} \sum_{F\subset \partial P} \det A_i(F).
$$
One can take affine combinations $t CM + (1-t) CCM,\ t\in{\mathbb R}$, resulting in a line, called the {\it generalized Euler line} of the polytope $P$ (for a triangle, the Euler line is the line through the centroid and the circumcenter; it passes through the orthocenter as well).
In this note we are interested in the uniqueness of this construction.
Suppose that to every non-degenerate simplex $\Delta \subset {\mathbb R}^n$ a `center' $C(\Delta) \in {\mathbb R}^n$ is assigned so that the following assumptions hold:
\begin{enumerate}
\item The map $\Delta \mapsto C(\Delta)$ commutes with similarities (both orientation-preserving and orientation-reversing);
\item The map $\Delta \mapsto C(\Delta)$ is invariant under the permutations of the vertices of the simplex $\Delta$;
\item The map $\varphi: \Delta \mapsto \mathrm{Vol}(\Delta) C(\Delta)$ is polynomial in the coordinates of the vertices of the simplex $\Delta$;
\end{enumerate}
\begin{theorem} \label{main}
Under these assumptions, $C(\Delta)$ is an affine combination of the center of mass and the circumcenter:
$$C(\Delta) = t CM(\Delta) + (1-t) CC(\Delta),$$
where the constant $t\in{\mathbb R}$ depends on the map $\Delta \mapsto C(\Delta)$ (and does not depend on the simplex $\Delta$).
\end{theorem}
\section{Basic determinants} \label{basic}
Let $x_1,\ldots,x_n$ be Cartesian coordinates in ${\mathbb R}^n$. Let $\Delta =(V_0,\ldots,V_n)$ be a simplex, and let $V_j=(x_1^j,\ldots,x_n^j),\ j=0,\ldots,n$, be the coordinates of its vertices (where $j$ is a superscript, not an exponent).
Let
\begin{align*}
V=\left|\begin{array}{ccccccccccccc}
x_1^0 & x_1^1 & \cdots & x_1^n \\
x_2^0 & x_2^1 & \cdots & x_2^n \\
\vdots & \vdots & \ddots & \vdots \\
x_n^0 & x_n^1 & \cdots & x_n^n \\
1 & 1 & \cdots & 1
\end{array}\right|,
\end{align*}
a multiple of the oriented volume of $\Delta$, and
\begin{align*}
X_{i,jk}=\left|\begin{array}{ccccccccccccc}
x_1^0 & x_1^1 & \cdots & x_1^n \\
x_2^0 & x_2^1 & \cdots & x_2^n \\
\vdots & \vdots & \ddots & \vdots \\
x_{i-1}^0 & x_{i-1}^1 & \cdots & x_{i-1}^n \\
x_j^0 x_k^0 & x_j^1 x_k^1 & \cdots & x_j^n x_k^n \\
x_{i+1}^0 & x_{i+1}^1 & \cdots & x_{i+1}^n \\
\vdots & \vdots & \ddots & \vdots \\
x_n^0 & x_n^1 & \cdots & x_n^n \\
1 & 1 & \cdots & 1
\end{array}\right|=\text{Skew}(x_1^0 x_2^1 \cdots \widehat{x_i^{i-1}} \cdots x_n^{n-1} x_j^{i-1} x_k^{i-1}), \quad 1 \leq i,j,k \leq n,
\end{align*}
where Skew is skew-symmetrization over superscripts.
Evidently, $X_{i,jk}=X_{i,kj}$, and the number of such polynomials equals $n^2(n+1)/2$. Both determinants, $V$ and $X_{i,jk}$, are skew-symmetric under permutations of the vertices of the simplex.
\begin{lemma} \label{basis}
The polynomials $X_{i,jk}$ constitute a linear basis of the space ${\cal S}$ of homogeneous polynomials of degree $n+1$ in the variables $x_1^0,x_2^0,\ldots,x_n^n$, skew-symmetric under permutations of the superscripts.
\end{lemma}
\paragraph{Proof.}
Since
\[
X_{i,jk}=\text{Skew}(x_1^0 x_2^1 \cdots \widehat{x_i^{i-1}} \cdots x_n^{n-1} x_j^{i-1} x_k^{i-1}),
\]
a monomial $x_1^0 x_2^1 \cdots \widehat{x_i^{i-1}} \cdots x_n^{n-1} x_j^{i-1} x_k^{i-1}$ determines $X_{i,jk}$. We will show that:
\begin{enumerate}
\item There exists no nonidentity permutation acting on superscripts that maps this monomial to itself.
\item These monomials lie in different orbits under this action.
\end{enumerate}
The former shows that this monomial does not cancel in the expression of $X_{i,jk}$. The latter will then imply that different monomials give rise to different determinants.
Suppose that there exists a permutation $\sigma$ such that
\[
x_1^0 x_2^1 \cdots \widehat{x_i^{i-1}} \cdots x_n^{n-1} x_j^{i-1} x_k^{i-1}=x_1^{\sigma(0)} x_2^{\sigma(1)} \cdots \widehat{x_{i'}^{\sigma(i'-1)}} \cdots x_n^{\sigma(n-1)} x_{j'}^{\sigma(i'-1)} x_{k'}^{\sigma(i'-1)}.
\]
The superscripts $i-1$ and $\sigma(i'-1)$ are the unique ones which occur twice. Therefore $\sigma(i'-1)=i-1$. The corresponding subscripts are $j,k$ and $j',k'$, so that $\{j,k\}=\{j',k'\}$. Dividing both sides by $x_j^{i-1} x_k^{i-1}$, we get
\[
x_1^0 x_2^1 \cdots \widehat{x_i^{i-1}} \cdots x_n^{n-1} =x_1^{\sigma(0)} x_2^{\sigma(1)} \cdots \widehat{x_{i'}^{\sigma(i'-1)}} \cdots x_n^{\sigma(n-1)},
\]
so that $i=i'$ and $\sigma$ is the identity.
Let $f \in {\cal S}$. Then $f$ is equal to its skew-symmetrization. Write $f$ in its monomial basis:
$$
f(x_1^0,\ldots,x_n^n)=\sum c_{\beta}^{\alpha} x_{\beta}^{\alpha}, \quad |\alpha|=|\beta|=n+1.
$$
Consider the skew-symmetrization of a monomial $x_{\beta}^{\alpha}$. If some number appears in $\alpha$ with multiplicity $3$ or greater, then $\alpha$ must be missing some two distinct numbers $i,j \in \{0,1,\ldots,n\}$. Each permutation $\sigma$ has a counterpart $\sigma (i \, j)$ of opposite sign which maps $x_\beta^{\alpha}$ to the same monomial. Therefore the skew-symmetrization of $x_\beta^{\alpha}$ in this case is zero.
Now suppose that $\alpha$ contains $n+1$ different elements of $\{0,1,\ldots,n\}$. Since the entries of $\beta$ are elements of $\{1,2,\ldots,n\}$, there exist some $\beta_i$ and $\beta_j$ with $i \neq j$ such that $\beta_i=\beta_j$. Each permutation $\sigma$ has a counterpart $\sigma (\alpha_i \, \alpha_j)$ of opposite sign which maps $x_\beta^{\alpha}$ to the same element.
It follows that the only monomials appearing in $f$ are those for which $\alpha$ is a permutation of $(0,1,\ldots,\widehat{i-1},\ldots,n-1,i-1,i-1)$. Assume without loss of generality that $\alpha$ is of this form. Let $\alpha=(\alpha_1,\alpha_2,\ldots,\alpha_n,\alpha_{n+1})$ and $\beta=(\beta_1,\beta_2,\ldots,\beta_n,\beta_{n+1})$. Suppose that $\beta_{i_1}=\beta_{i_2}=\ldots=\beta_{i_k}$. For the monomial to not vanish under skew-symmetrization, the corresponding multiset $\alpha_{i_1},\alpha_{i_2},\ldots,\alpha_{i_k}$ cannot be invariant under any transpositions. Knowing the structure of $\alpha$, we see that this implies that $k=1,2$ or $3$. If $k=2$, then $(\alpha_{i_1},\alpha_{i_2})=(i-1,i-1)$. If $k=3$ then $(\alpha_{i_1},\alpha_{i_2},\alpha_{i_3})=(i-1,i-1,q)$, with $q \neq i-1$. This proves the claim.
\proofend
Consider the map $\varphi: \Delta \mapsto \mathrm{Vol}(\Delta) C(\Delta)$, and let $(y_1,\ldots,y_n)$ be its components. Assumption 3 implies that each $y_\ell,\ \ell=1,\ldots,n$, is a polynomial in the variables $x_1^0,x_2^0,\ldots,x_n^n$. Assumption 1, applied to scaling, implies that these polynomials are homogeneous of degree $n+1$, and assumption 2 implies that they are skew-symmetric under permutations of the superscripts. Lemma \ref{basis} implies that
\begin{equation*}
y_\ell = \sum A_{i,jk}^\ell X_{i,jk},\ \ell=1,\ldots,n,
\end{equation*}
where the coefficients $A_{i,jk}^\ell$ satisfy $A_{i,jk}^\ell= A_{i,kj}^\ell$. We always assume that summation is over repeated indices.
\begin{example} \label{twoknown}
{\rm The center of mass and the circumcenter of mass correspond to the functions
$$
y_\ell = \frac{1}{n+1}\sum X_{i,i\ell}\ \ {\rm and}\ \ y_\ell = \frac{1}{2}\sum X_{\ell,ii},
$$
respectively. In terms of the coefficients, one has:
\begin{equation} \label{both}
A_{i,jk}^\ell = \frac{1}{2(n+1)} (\delta_{ij}\delta_{\ell k}+\delta_{ik}\delta_{\ell j}) \ \ \ {\rm and}\ \ \ A_{i,jk}^\ell = \frac{1}{2} \delta_{i\ell}\delta_{jk},
\end{equation}
where $\delta$ is the Kronecker symbol.
}
\end{example}
We now show that the Archimedes Lemma is automatically satisfied for any choice of coefficients $A_{i,jk}^\ell$. Let $\Delta=(V_0,\ldots,V_n)$ be a simplex and $O$ a point. Consider the simplices
$$\Delta_i = (V_0,\ldots,V_{i-1},O,V_{i+1},\ldots,V_n),\ i=0,\ldots,n.
$$
\begin{lemma} \label{addit}
For every choice of $i,j,k$, one has:
$$
X_{i,jk}(\Delta)=\sum_{t=0}^n X_{i,jk}(\Delta_t).
$$
\end{lemma}
\paragraph{Proof.}
Let $f:{\mathbb R}^n \to {\mathbb R}^n$ be a mapping. Consider the $(n+2)\times (n+2)$-determinant
\begin{equation*}
\begin{split}
0=\left|\begin{array}{ccccccccccccc}
f(O) & f(V_0) & f(V_1) & \cdots & f(V_n)\\
1 & 1 & 1 & \cdots & 1\\
1 & 1 & 1 & \cdots & 1
\end{array}\right| =
\left|\begin{array}{ccccccccccccc}
f(V_0) & f(V_1) & \cdots & f(V_n)\\
1 & 1 & \cdots & 1
\end{array}\right| \\
- \left|\begin{array}{ccccccccccccc}
f(O) & f(V_1) & \cdots & f(V_n)\\
1 & 1 & \cdots & 1
\end{array}\right| + \ldots + (-1)^{n+1}
\left|\begin{array}{ccccccccccccc}
f(O) & f(V_0) & \cdots & f(V_{n-1})\\
1 & 1 & \cdots & 1
\end{array}\right|.
\end{split}
\end{equation*}
Let
$$
f(x_1,\ldots,x_n)=(x_1,\ldots,x_{i-1},x_jx_k,x_{i+1},\ldots,x_n).
$$
Then, taking the orientations of the simplices into account, the above equality for determinants yields the result.
\proofend
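The identity of Lemma \ref{addit} is easy to test numerically. The following sketch (in Python with \textbf{numpy}; our own illustration, with $n=2$, random data, and $0$-based row indices) checks it for a planar simplex:
\begin{verbatim}
# Numerical check of X_{i,jk}(Delta) = sum_t X_{i,jk}(Delta_t) for n = 2.
import numpy as np

n = 2
rng = np.random.default_rng(0)

def X(verts, i, j, k):                   # verts: n+1 points in R^n
    M = np.vstack([[v[r] for v in verts] for r in range(n)]
                  + [[1.0] * (n + 1)])   # coordinate rows + row of ones
    M[i] = [v[j] * v[k] for v in verts]  # replace the i-th coordinate row
    return np.linalg.det(M)

V = [rng.standard_normal(n) for _ in range(n + 1)]
O = rng.standard_normal(n)
for i, j, k in [(0, 0, 0), (0, 0, 1), (1, 1, 1)]:
    lhs = X(V, i, j, k)
    rhs = sum(X(V[:t] + [O] + V[t + 1:], i, j, k) for t in range(n + 1))
    print(np.isclose(lhs, rhs))          # True for every choice of i, j, k
\end{verbatim}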
In the next section, we shall use assumption 1, namely, the fact that the map $\Delta \mapsto C(\Delta)$ commutes with parallel translations, rotations, and transpositions of coordinates, to conclude that the coefficients $A_{i,jk}^\ell$ must be affine combinations of the ones in (\ref{both}).
\section{Proof of Theorem} \label{tedious}
Introduce infinitesimal parallel translations in $r$th direction and rotations in the $p,q$-plane:
$$
\xi_r = \frac{\partial}{\partial x_r},\ \ \eta_{pq}=x_p \frac{\partial}{\partial x_q} - x_q \frac{\partial}{\partial x_p},\ \ p,q,r=1,\ldots,n.
$$
Let $\sigma$ denote a transposition of coordinates. The next lemma describes the action of these transformations on the polynomials $X_{i,jk}$.
\begin{lemma} \label{action}
One has
\begin{equation*}
\begin{split}
\sigma (X_{i,jk}) = - X_{\sigma(i),\sigma(j)\sigma(k)}, \
\xi_r (X_{i,jk})=(\delta_{ij}\delta_{rk}+\delta_{ik}\delta_{rj}) V,\\
\eta_{pq} (X_{i,jk})= \delta_{qj} X_{i,pk} + \delta_{qk} X_{i,pj} - \delta_{pi} X_{q,jk}
- \delta_{pj} X_{i,qk} - \delta_{pk} X_{i,qj} + \delta_{qi} X_{p,jk}.
\end{split}
\end{equation*}
\end{lemma}
\paragraph{Proof.}
The first equality follows from the fact that, along with the transposition of indices, exactly two rows of the determinant $X_{i,jk}$ are interchanged. For the second equality, notice that
$$
X_{i,jk} = x_jx_k \frac{\partial}{\partial x_i} (V),\
\frac{\partial}{\partial x_r} (V) =0,\ x_j \frac{\partial}{\partial x_i} (V) = \delta_{ij} V.
$$
It follows that
$$
\xi_r (X_{i,jk})= \left[\frac{\partial}{\partial x_r},x_jx_k \frac{\partial}{\partial x_i}\right] (V) =
\delta_{rj}\ x_k \frac{\partial}{\partial x_i} (V) + \delta_{rk}\ x_j \frac{\partial}{\partial x_i} (V)
= (\delta_{ij}\delta_{rk}+\delta_{ik}\delta_{rj}) V.
$$
The third equality is proved similarly.
\proofend
The covariance of the map $\Delta \mapsto C(\Delta)$ with respect to rigid motions is expressed by the next equations on the coefficients $A_{i,jk}^\ell$.
\begin{proposition} \label{eqcoeff}
For every transposition $\sigma$ of the indices $1,\ldots,n$, one has
\begin{equation} \label{trasp}
A_{i,jk}^\ell = A_{\sigma(i),\sigma(j)\sigma(k)}^{\sigma(\ell)}.
\end{equation}
The covariance with respect to infinitesimal translations is given by
\begin{equation} \label{transl}
\sum A_{i,ir}^\ell = \frac{1}{2} \delta_{\ell r}\ \ {\rm for\ all}\ \ \ell, r,
\end{equation}
and with respect to infinitesimal rotations by
\begin{equation} \label{rota}
\begin{split}
A_{a,qc}^\ell \delta_{pb} + A_{a,qb}^\ell \delta_{pc} - A_{p,bc}^\ell \delta_{qa}
- A_{a,pc}^\ell \delta_{qb} - A_{a,pb}^\ell \delta_{qc} + A_{q,bc}^\ell \delta_{pa}\\
- A_{a,bc}^p \delta_{\ell q} + A_{a,bc}^q \delta_{\ell p}=0,
\end{split}
\end{equation}
for all $a,b,c,p,q,\ell$.
\end{proposition}
\paragraph{Proof.}
A transposition of coordinates is reflection in a hyperplane, and it changes the sign of $V$. Hence the covariance of the map $\Delta \mapsto C(\Delta)$ with respect to $\sigma$ implies the equality
$$
\sum A_{i,jk}^\ell X_{\sigma(i),\sigma(j)\sigma(k)} = \sum A_{i,jk}^{\sigma(\ell)} X_{i,jk}
$$
for all $\ell$. Since the polynomials $X_{i,jk}$ form a basis, for each term on the right, there is a matching term on the left:
$$
A_{i,jk}^{\sigma(\ell)} = A_{\sigma(i),\sigma(j)\sigma(k)}^\ell.
$$
Renaming $\sigma(\ell)$ by $\ell$, we obtain (\ref{trasp}).
To establish (\ref{transl}), we use Lemma \ref{action} to calculate:
$$
\xi_r \left(\frac{y_\ell}{V}\right)= \sum A_{i,jk}^\ell (\delta_{ij}\delta_{rk}+\delta_{ik}\delta_{rj}) = \sum 2 A_{i,ir}^\ell.
$$
On the other hand, $y_\ell/V$ is the $\ell$th component of the map $\Delta \mapsto C(\Delta)$, and the infinitesimal translation in the $r$th direction sends it to $\delta_{\ell r}$.
By translation covariance, the above sum equals $\delta_{\ell r}$, as claimed.
Likewise, the infinitesimal rotation in the $p,q$-plane annihilates $y_\ell$ for $\ell$ distinct from $p,q$, and sends $y_q$ to $y_p$, and $y_p$ to $-y_q$; in short,
$$
y_\ell \mapsto y_p \delta_{\ell q} - y_q \delta_{\ell p}.
$$
On the other hand, by Lemma \ref{action},
\begin{equation*}
\begin{split}
\eta_{pq} (y_\ell) = \sum A_{i,jk}^\ell (\delta_{qj} X_{i,pk} + \delta_{qk} X_{i,pj} - \delta_{pi} X_{q,jk}
- \delta_{pj} X_{i,qk} - \delta_{pk} X_{i,qj} + \delta_{qi} X_{p,jk})\\
= 2A_{i,qj}^\ell X_{i,pj} - A_{p,jk}^\ell X_{q,jk} - 2A_{i,pj}^\ell X_{i,qj} + A_{q,jk}^\ell X_{p,jk}.
\end{split}
\end{equation*}
Equate this to
$$
y_p \delta_{\ell q} - y_q \delta_{\ell p} = \sum (A_{i,jk}^p \delta_{\ell q} - A_{i,jk}^q \delta_{\ell p}) X_{i,jk},
$$
and then, for fixed $a,b,c$, equate the coefficients in front of $X_{a,bc}$ in both expressions to obtain (\ref{rota}).
\proofend
Now we need to solve the system of linear equations (\ref{trasp})--(\ref{rota}) on the unknowns $A_{i,jk}^\ell$. We use (\ref{trasp}) to reduce the number of variables.
Consider the following four cases. If $|\{i,j,k,\ell\}| = 4$ then, applying an appropriate sequence of transpositions, we obtain: $A_{i,jk}^\ell= A^1_{2,34}=:t$. If $|\{i,j,k,\ell\}| = 3$, then one has four sub-cases, and $A_{i,jk}^\ell$ is equal to
$$
A^1_{1,23}=:u,\ {\rm or}\ A^1_{2,13}=:v,\ {\rm or}\ A^2_{3,11}=:w,\ {\rm or}\ A^2_{1,13}=:s.
$$
Likewise, if $|\{i,j,k,\ell\}| = 2$, then one has five sub-cases, and $A_{i,jk}^\ell$ is equal to
$$
A^1_{1,22}=:\phi,\ {\rm or}\ A^1_{2,12}=:\psi,\ {\rm or}\ A^2_{1,11}=:\alpha,\ {\rm or}\ A^1_{1,12}=:\beta,\ {\rm or}\ A^1_{2,11}=:\gamma.
$$
Finally, if $|\{i,j,k,\ell\}| = 1$, then $A_{i,jk}^\ell= A^1_{1,11}=:\nu$. Thus we have 11 unknowns.
Now the strategy is to consider particular cases of (\ref{rota}) and (\ref{transl}).
To start with, consider (\ref{rota}) with $\ell=q=b=c \neq p=a$. One obtains
$$
- 2 A^q_{p,pq} + A^q_{q,qq} - A^p_{p,qq}=0,
$$
or
$2\psi+\phi=\nu.$ Likewise, (\ref{transl}) with $\ell=r$ yields
$ (n-1) \psi + \nu = 1/2.$
It follows that
\begin{equation} \label{soln}
\nu = \frac{1}{2} - (n-1) \psi,\ \phi = \frac{1}{2} - (n+1) \psi.
\end{equation}
We pause to check against Example \ref{twoknown}. For the center of mass,
$$
\psi = \frac{1}{2(n+1)},\ \nu=\frac{1}{n+1},
$$
and the rest of variables vanish; for the circumcenter of mass,
$\phi = \nu = 1/2,$
and the rest vanishes. In both cases, (\ref{soln}) holds.
To finish the proof of Theorem \ref{main}, we need to show that all variables, except $\phi, \psi, \nu$, vanish. We proceed in a similar fashion: (\ref{rota}) with $\ell=q=a=b=c \neq p$ yields
$$
\alpha+2\beta+\gamma=0,
$$
(\ref{rota}) with $\ell=q=c \neq p=a=b$ yields $\gamma=\alpha$. Hence $\beta=-\alpha$.
Next, (\ref{transl}) with $\ell\neq r$ yields
$$
(n-2)s+\alpha+\beta=0,
$$
hence $s=0$.
Next, (\ref{rota}) with $\ell=q=c$, but distinct from pairwise distinct $p,a,b$, yields $t=0$. It remains to eliminate $u,v,w$ and $\alpha$. To this end, (\ref{rota}) with $\ell=q=c\neq p=b$ and distinct from $a$ yields
$$
\gamma=v+w,\ \ {\rm hence}\ \ \alpha=v+w.
$$
Likewise, (\ref{rota}) with $\ell=q=c\neq p=a$ and distinct from $b$, yields
$$
-s+\beta-u=0, \ \ {\rm hence}\ \ \alpha+u=0.
$$
Next, (\ref{rota}) with $\ell=q=c=a$, but distinct from pairwise distinct $p,b$, yields
$$
u+v+s=0, \ \ {\rm hence}\ \ u+v=0,
$$
and (\ref{rota}) with $\ell=q=c=b$, but distinct from pairwise distinct $p,a$, yields
$2v+w=0.$ We have obtained four linear equations on $u,v,w,\alpha$, and the only solution of this system is zero. This completes the proof.
\section{Final remarks} \label{rmks}
(i) Degenerate simplices can be safely ignored when calculating the center of mass: such a simplex has a finite centroid and zero volume, making no contribution to the total sum. Not so for the circumcenter of mass: although the volume of a nearly degenerate simplex tends to zero, its circumcenter may go to infinity, and the contribution to the sum (\ref{totsum}) may be non-negligible. The map $\varphi: \Delta \mapsto \mathrm{Vol}(\Delta)\ CC(\Delta)$, being polynomial in the coordinates of the vertices, is continuous.
For example, consider an isosceles right triangle $ABC$. Its circumcenter is the midpoint $M$ of the hypotenuse $AC$. Consider the triangulation in Figure \ref{discont} consisting of three triangles, one of which, $AMC$, is degenerate. If one ignored this triangle, then, by the Archimedes Lemma, the circumcenter of mass of $\triangle ABC$ would be the midpoint of the segment connecting the midpoints of the hypotenuses $AB$ and $BC$ of the triangles $ABM$ and $BCM$. The latter point is the circumcenter of mass of the {\it quadrilateral} $ABCM$, not of the {\it triangle} $ABC$.
\begin{figure}[hbtp]
\centering
\includegraphics[width=5in]{triangles.pdf}
\caption{Contribution of a degenerate triangle}
\label{discont}
\end{figure}
(ii) One may wish to extend the notion of the circumcenter of mass to more general sets. For example, let $\gamma(t)$ be a parameterized smooth curve, star-shaped with respect to a point $O$, see Figure \ref{curve}.
It is natural to define the circumcenter of mass by continuity as
\begin{equation} \label{integ}
\frac{\int {C}(t)\ dA}{\int dA},
\end{equation}
where $C(t)$ denotes the limiting position, as $\varepsilon \to 0$, of the vector from $O$ to the circumcenter of the infinitesimal triangle $O \gamma(t) \gamma(t+\varepsilon)$, and $dA$ is the area of this infinitesimal triangle. However, this does not give anything new: the integral (\ref{integ}) is the center of mass of the lamina bounded by the curve \cite{TT2}.
\begin{figure}[hbtp]
\centering
\includegraphics[width=2in]{curve.pdf}
\caption{Continuous limit of the circumcenter of mass}
\label{curve}
\end{figure}
(iii) Although the rational map $\Delta \mapsto CC(\Delta)$ is discontinuous, the polynomial map $\varphi : P \mapsto \mathrm{Vol}(P)\ CCM(P)$, defined on simplicial polytopes in ${\mathbb R}^n$, is continuous and is a valuation:
$$
\varphi (P_1 \cup P_2) + \varphi (P_1\cap P_2) = \varphi(P_1) + \varphi(P_2).
$$
This valuation is isometry covariant; see, e.g., \cite{Sch} for the theory of valuations.
${\mathbb R}^n$-valued
continuous isometry covariant valuations on convex compact subsets of ${\mathbb R}^n$ were classified in \cite{HS} as linear combinations of intrinsic moments. Namely, given a convex set $K$, let $K_\varepsilon$ be its $\varepsilon$-neighborhood. Then the moment vector
\begin{equation} \label{moment}
\int_{K_\varepsilon} x\ dx
\end{equation}
is a polynomial in $\varepsilon$, and its coefficients span the space ${\cal V}_n$ of continuous isometry covariant valuations (the free term being $\mathrm{Vol}(K)\ CM(K)$). One has: dim ${\cal V}_n = n+1$.
The next proposition states that the circumcenter of mass is not a linear combination of the intrinsic moments.
\begin{proposition} \label{new}
The map $\varphi : P \mapsto \mathrm{Vol}(P)\ CCM(P)$ is not an element of the space ${\cal V}_n$.
\end{proposition}
\paragraph{Proof.}
We argue in dimension 2; the general case is similar.
Consider an isosceles triangle $K(\alpha)$ whose base is aligned with the $x$-axis and has length 2, whose axis of symmetry is the $y$-axis, and whose base angle is $\alpha$. If $\alpha$ is close to zero then the moment vector (\ref{moment}) is close to the zero vector, and so are the coefficients of the powers of $\varepsilon$ in (\ref{moment}) that constitute a basis in ${\cal V}_2$.
Assume that $\varphi$ is a linear combination of the three basis vectors of the space ${\cal V}_2$ (with constant coefficients, independent of $K$). Then $\varphi(K(\alpha)) \to 0$ as $\alpha\to 0$. However, a straightforward computation shows that
$$
\lim_{\alpha\to 0} \varphi(K(\alpha)) = \left(0, -\frac{1}{2}\right)
$$
(without computation, this can be seen in Figure \ref{discont}).
This is a contradiction.
\proofend
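This limit can also be confirmed numerically. The snippet below (Python) reuses \texttt{signed\_area} and \texttt{ccm} from the sketch in the Introduction, and places $K(\alpha)$ with vertices $(-1,0)$, $(1,0)$ and $(0,\tan\alpha)$, our normalization of the triangle above:
\begin{verbatim}
# Numeric check that Vol(K(a)) * CCM(K(a)) -> (0, -1/2) as a -> 0.
import math

for a in (0.5, 0.1, 0.01, 0.001):
    K = [(-1.0, 0.0), (1.0, 0.0), (0.0, math.tan(a))]
    A = signed_area(K)
    cx, cy = ccm(K)
    print(a, (A * cx, A * cy))   # second coordinate tends to -0.5
\end{verbatim}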
(iv) We finish with a conjecture and two problems.
Assume that one assigns a ``center'' to every simplicial polytope in ${\mathbb R}^n$ so that the center depends analytically on the polytope, commutes with dilations, and satisfies the Archimedes Lemma (with the weights equal to the respective volumes).
\begin{conjecture} \label{conj}
The space of such centers is 1-dimensional: they are affine combinations of the centers of mass and the circumcenters of mass.
\end{conjecture}
For $n=2$, this is proved in \cite{TT2}.
As we mentioned, the construction of the circumcenter of mass extends to the spherical and hyperbolic geometries \cite{Ak,TT2}. Thus one has versions of Conjecture \ref{conj} for ${\mathbb S}^n$ and ${\mathbb H}^n$ as well.
Next, we pose the following problem.
\begin{problem} \label{prbl}
Describe ${\mathbb R}^n$-valued continuous isometry covariant valuations on simplicial polytopes in ${\mathbb R}^n$.
\end{problem}
Finally, it is interesting to find an axiomatic description of the centers for simplicial polygons and polytopes, discussed in this note, in the three geometries of constant curvature (for the center of mass, see \cite{Ga}). These centers should be isometry covariant and satisfy some additivity condition (Archimedes Lemma or valuation-like).
Such a description should include the valuations from Problem \ref{prbl}. At the moment of writing, we do not know such an axiomatic description.
\bigskip
{\bf Acknowledgments}. This work is an extension of the project that originated in the Summer@ICERM 2012 program; it is a pleasure to acknowledge the inspiring atmosphere and hospitality of the institute. We are grateful to V. Adler, A. Akopyan, Yu. Baryshnikov and B. Gr\"unbaum for their interest and help.
The first author was supported by the NSF grant DMS-1105442, and the second author was supported by an NSF Graduate Research Fellowship under Grant No. DGE 1106400.
| {
"timestamp": "2014-10-21T02:12:14",
"yymm": "1410",
"arxiv_id": "1410.5115",
"language": "en",
"url": "https://arxiv.org/abs/1410.5115",
"abstract": "Suppose that to every non-degenerate simplex Delta in n-dimensional Euclidean space a `center' C(Delta) is assigned so that the following assumptions hold: (i) The map that assigns C(Delta) to Delta commutes with similarities and is invariant under the permutations of the vertices of the simplex; (ii) The map that assigns Vol(Delta) C(Delta) to Delta is polynomial in the coordinates of the vertices of the simplex. Then C(Delta) is an affine combination of the center of mass and the circumcenter of Delta (with the coefficients independent of the simplex). The motivation for this theorem comes from the recent study of the circumcenter of mass of simplicial polytopes by the authors and by A. Akopyan.",
"subjects": "Metric Geometry (math.MG)",
"title": "Remarks on the the circumcenter of mass",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717448632121,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7089449237275495
} |
https://arxiv.org/abs/1908.01933 | Surfaces on the Severi line in positive characteristics | Let $X$ be a minimal surface of general type over an algebraically closed field $\mathbf{k}$ of $\mathrm{char}.(\mathbf{k})=p\ge 0$. If the Albanese morphism $a_X:X\to \mathrm{Alb}_X$ is generically finite onto its image, we formulate a constant $c(X,L)\ge 0$ for a very ample line bundle $L$ on $\mathrm{Alb}_X$ such that $c(X,L)=0$ if and only if $\dim \mathrm{Alb}_X=2$ and $a_X: X\to \mathrm{Alb}_X$ is a double cover. A refined Severi inequality $$K^2_X\ge (4+{\rm min}\{\,c(X,L),\,\frac{1}{3}\,\})\chi(\mathcal{O}_X)$$ is proved. Then we prove that $K^2_X=4\chi(\mathcal{O}_X)$ if and only if the canonical model of $X$ is a flat double cover of an Abelian surface. | \section{Introduction}
Let $X$ be a minimal algebraic surface of general type with maximal Albanese dimension defined over an algebraically closed field. The Severi inequality asserts that
$K_X^2\ge 4\chi({\mathcal O}_X)$.
For a long time, the validity of this inequality was referred to as the Severi conjecture (cf.\cite{Reid78} and \cite{Catanese83}). In \cite{Manetti03}, Manetti proved this conjecture under the extra assumption that $K_X$ is ample. Later, Pardini \cite{Pardini05} managed to give a complete proof of the Severi conjecture in characteristic zero, based on her elegant covering trick. After that, Yuan and Zhang \cite{Y-Z14} generalized Pardini's result to fields of all characteristics.
\begin{thm}[Severi inequality, Pardini 05' \& Yuan-Zhang 14']
Let $X$ be a minimal surface of general type with maximal Albanese dimension, then $K_X^2\ge 4\chi({\mathcal O}_X)$.
\end{thm}
A natural question then arises: when does the equality hold?
\begin{defn}
A surface $X$ with maximal Albanese dimension is said to be \emph{on the Severi line} if the equality $K_X^2=4\chi({\mathcal O}_X)$ holds.
\end{defn}
\begin{conj}[\protect{\cite[Section~5.2]{L-P12} \& \cite[Section~0]{Manetti03}}] \label{Conj: main}
A minimal surface $X$ of general type with maximal Albanese dimension is on the Severi line if and only if its canonical model $X_{\mathrm{can}}$ is a flat double cover of an Abelian surface.
\end{conj}
This conjecture was confirmed by Barja-Pardini-Stoppino \cite{B-P-S16} and Lu-Zuo \cite{L-Z17} in characteristic zero independently.
\begin{thm}[Barja-Pardini-Stoppino 16', Lu-Zuo 17']\label{thm: B-P-S, L-Z}
Let $X$ be a minimal surface of general type with maximal Albanese dimension over an algebraically closed field of characteristic zero, then $X$ is on the Severi line if and only if the canonical model $X_{\mathrm{can}}$ is a flat double cover of an Abelian surface.
\end{thm}
The main result of this paper is a generalization of above theorem to fields of all characteristics:
\begin{thm}[Corollary~\ref{cor: double cover comes out} and Corollary~\ref{Cor: main 1}]\label{thm:main}
Let $X$ be a minimal surface of general type with maximal Albanese dimension over an arbitrary algebraically closed field, then $X$ is on the Severi line if and only if the canonical model $X_{\mathrm{can}}$ is a flat double cover of an Abelian surface.
\end{thm}
Our method also relies on Pardini's covering trick and the slope inequality (as in \cite{Pardini05} and \cite{L-Z17}), but with many refinements to overcome the obstructions in characteristic $p>0$, especially when $p=2$. In characteristic $2$, firstly, due to the presence of purely inseparable double covers, the slope inequality of \cite{L-Z17} can fail to produce a double cover $X\dashrightarrow Y$ factoring the Albanese morphism of $X$ (a crucial step of the method in \cite{L-Z17}). Secondly, the double cover theory is quite different in this case, and therefore the reduction process of \cite{L-Z17} does not work well (cf. Remark~\ref{Rem: not easy to do reductions} for an explanation).
Our strategy here is to formulate and to prove a refined Severi inequality $$K^2_X\ge (4+{\rm min}\{\,c(X,L),\,\frac{1}{3}\,\})\chi({\mathcal O}_X)$$
where $c(X,L)\ge 0$ is a constant defined as follows: there are at most finitely many rational double covers $\pi_i$ ($i=1,...,s$)
$$
\xymatrix{
X\ar@{-->}[rr]^{\pi_i}\ar[rd]_{a_X} && Y_i \ar[ld]^{h_i}\\
&\mathrm{Alb}_X&
}
$$
where $Y_i$ is taken to be a minimal model. For a very ample line bundle $L$ on ${\rm Alb}_X$, let
$c_i(X,L):=\frac{K_{Y_i}\cdot h_i^*L}{K_X\cdot a_X^*L}$ for all $i=1,...,s$. If $p=2$ and $a_X$ is separable, let $c_0(X,L):=\frac{(2K_X-R_X)\cdot L_X}{2K_X\cdot L_X}$,
where $L_X:=a_X^*L$ and $R_X:=c_1(\mathrm{det}(\Omega_{X/\mathrm{Alb}_X}))$. Then the constant $c(X,L)$ is defined to be
$$c(X,L)={\rm min}\{\,\,c_i(X,L)\,\,|\,\,0\le i\le s\,\}.$$
It is easy to see that $c(X,L)=0$ if and only if $\dim \mathrm{Alb}_X=2$ and the Albanese morphism $a_X:X\to {\rm Alb}_X$ is a double cover. Thus, when $K^2_X=4\chi({\mathcal O}_X)$, our inequality implies that $X$ is a double cover of an Abelian surface. After this result, in {\bf Section~\ref{Sec: 6}}, by a detailed study of flat double covers in all characteristics, we prove that $K^2_X=4\chi({\mathcal O}_X)$ holds if and only if the canonical model $X_{\mathrm{can}}$ is a flat double cover of an Abelian surface.
To prove our refined Severi inequality, following Pardini's covering trick, we construct the following commutative diagrams:
$$\xymatrix{
X_n \ar[d]_{a_n} \ar[r]^{\nu_n} & X \ar[d]^{a_X} \\
\mathrm{Alb}_X \ar[r]^{\mu_n} & \mathrm{Alb}_X }\,\quad \,\xymatrix{
\widetilde{X}_n\ar[r]^\pi \ar[rrd]_{\varphi_n} & X_n\ar[r]^{a_n} \ar@{-->}[dr] & \mathrm{Alb}_X\ar@{-->}[d]^{\iota_n} \\
&& \mathbb{P}_\sk^1}$$
where $L$ is a very ample line bundle on $\mathrm{Alb}_X$, $\iota_n$ is defined by the one-dimensional linear pencil in $|L|$ generated by a general pair
$(B_n,B'_n)\in |L|\times |L|$, and $\pi$ is the minimal elimination of the indeterminacy. One obtains a series of fibrations $\varphi_n: \widetilde X_n\to \mathbb{P}_\sk^1$ of fibre genus $g_n$ with slopes
$$\lambda_{\varphi_n}=\frac{K^2_{\varphi_n}}{\chi_{\varphi_n}}=\frac{2K_X^2+6n^{-2}K_X\cdot L_X +8n^{-4}L_X^2}{2\chi({\mathcal O}_X)+n^{-2}K_X\cdot L_X+n^{-4}L_X^2}\rightarrow \frac{K_X^2}{\chi({\mathcal O}_X)}$$ (as $n\rightarrow\infty$) where $L_X=a_X^*L$. Then, by Xiao's slope inequality $$\lambda_{\varphi_n}\ge 4-\frac{4}{g_n},$$
Pardini was able to prove the Severi inequality $K_X^2\ge 4\chi({\mathcal O}_X)$ in \cite{Pardini05}, since $g_n\rightarrow\infty$ as $n\rightarrow\infty$. This part will be generalized to
characteristic $p>0$ in our {\bf Section~\ref{Section: Severi inequality}} ($\varphi_n$ may unavoidably have a singular generic geometric fibre, since the Bertini theorem is not as strong as over $\mathbb{C}$).
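To make this convergence explicit, here is a small illustration (in Python; the intersection numbers are stand-ins, not taken from any actual surface):
\begin{verbatim}
# The slope formula above with stand-in values for K_X^2, chi(O_X),
# K_X.L_X and L_X^2; lambda_n tends to K^2/chi as n grows.
K2, chi, KL, L2 = 24.0, 5.0, 10.0, 4.0
for n in (1, 2, 5, 10, 100):
    lam = (2*K2 + 6*KL/n**2 + 8*L2/n**4) / (2*chi + KL/n**2 + L2/n**4)
    print(n, lam)                      # -> K2/chi = 4.8
\end{verbatim}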
To prove our refined Severi inequality, we choose $(B_n,B_n')\in |L|\times |L|$ such that $\varphi_n: \widetilde X_n\to \mathbb{P}_\sk^1$ are non-hyperelliptic fibrations of genus $g_n$; the Harder-Narasimhan filtration $$0=E_0\subset E_1\subset\cdots\subset E_{m}=(\varphi_{n})_*\omega_{\widetilde X_n/\mathbb{P}_\sk^1}$$ defines a series of rational maps $\phi_{n,i}$
$$
\xymatrix{
\widetilde X_n\ar@{-->}[rr]^{\phi_{n,i} \ \ \ \ }\ar[rd]_{\varphi_n } && Z_{n,i}\subseteq\mathbb{P}(E_i) \ar[ld]\\
&\mathbb{P}_\sk^1&}$$ by the canonical morphism $\varphi_n^*E_i\hookrightarrow \varphi_n^*(\varphi_{n})_*\omega_{\widetilde X_n/\mathbb{P}_\sk^1}\to \omega_{\widetilde X_n/\mathbb{P}_\sk^1}$.
If ${\rm deg}(\phi_{n,i})\neq 2$ for all $i$, the slope inequality of \cite{L-Z17} (see Theorem~\ref{thm: slope with gonality p} (1) for the characteristic $p>0$ version) shows
$$\lambda_{\varphi_n}\ge \frac{9}{2}\frac{g_n-1}{g_n+2}.$$ Thus, when $K^2_X=4\chi({\mathcal O}_X)$, the authors of \cite{L-Z17} are able to obtain double covers $\phi_n:\widetilde X_n\dashrightarrow Z_{n,i}$ for some $i$. The key point of \cite{L-Z17} is to show that these double covers lead to a double cover $X\dashrightarrow Y$ factoring $a_X:X\to {\rm Alb}_X$, and then to argue by induction. This argument does not seem to work in characteristic $2$.
Our strategy is to use another kind of slope inequality (see (2) of Theorem~\ref{thm: slope with gonality p}). Let $b_{n,i}$ be the genus of the generic fiber of $Z_{n,i}\to \mathbb{P}_\sk^1$, and let $c_n:=\mathrm{min}\{\,\, \dfrac{b_{n,i}}{g_{n}}\,\,|\,\, \deg(\phi_{n,i})=2 \,\, \}$. We show in {\bf Section~\ref{Sec: slope inequality}} the following slope inequality:
\begin{equation}\label{eq.1}
\lambda_{\varphi_n}\ge (4+{\rm min}\{\frac{1}{3},\,c_n\})\frac{g_n-1}{g_n+2}.
\end{equation}
The key technical part of {\bf Section~\ref{Sec: 5}} is devoted to showing that
$$\varlimsup\limits_{n\to \infty}c_n\ge c(X,L)$$
for a suitable choice of $\varphi_n$ (see Proposition \ref{prop: ratio}), which implies
$$\varlimsup\limits_{n\to \infty}\,\lambda_{\varphi_n}\ge 4+{\rm min}\{\frac{1}{3},\,c(X,L)\}$$
and thus the refined Severi inequality.
In {\bf Section~\ref{Sec: 6}}, a study of flat double covers of Abelian surfaces is carried out to obtain Theorem~\ref{thm:main}. Finally, in {\bf Section~7}, we give two examples of surfaces on the Severi line in characteristic $2$: one with an inseparable Albanese morphism and one with a separable one.
To make the paper more self-contained, some preliminaries are given in {\bf Section~\ref{Sec: pre}}; we suggest that readers first skip this section and return to it when it is referred to.
{\bf Conventions:}
We make the following conventions in this paper:
\begin{enumerate}
\item $\mathbf{k}$ is an algebraically closed field of characteristic $p>0$;
\item a surface fibration is a flat morphism $f$ from a smooth projective surface $S$ to a smooth curve $C$ such that $f_*{\mathcal O}_S={\mathcal O}_C$. For such a fibration, we write $K_f:=K_{S}-f^*K_C$ and $\chi_f:=\deg (f_*\omega_{S/C})$.
\item for a rational map $f: S\dashrightarrow T$ of $\mathbf{k}$-varieties, a dominant rational map $g: S\dashrightarrow T'$ is said to be \emph{relative to $f$ or to $T$} if there is another rational map $h: T'\dashrightarrow T$ such that $f=h\circ g$. Note that such an $h$ is unique if it exists.
\end{enumerate}
\section{Preliminaries}\label{Sec: pre}
\subsection{A Bertini type Theorem}
Let $X$ be a smooth projective variety over $\mathbf{k}$. Let $\varphi: X\to \mathbb{P}_\mathbf{k}^r$ be a non-degenerate morphism.
\begin{thm}\label{thm: Bertini}
Suppose $\varphi$ does not factor through the relative Frobenius morphism $F_{X/\mathbf{k}}: X\to X^{(-1)}:=X\times_{\mathbf{k}, F_\mathbf{k}} \mathbf{k}$ and $\dim \varphi(X)\ge 2$, then $\varphi^*(H)$ is a reduced and irreducible divisor for a general hyperplane $H$ in $\mathbb{P}_\mathbf{k}^r$.
\end{thm}
\begin{proof}
Let $Z:=\{(x,H)\in X\times (\mathbb{P}_\mathbf{k}^r)^*| \varphi(x) \in H\}$ be the incidence variety. Its first projection $p_1: Z\to X$ is a $\mathbb{P}^{r-1}$ bundle (hence $Z$ is smooth) and the second projection $p_2: Z\to (\mathbb{P}_\mathbf{k}^r)^*$ is flat. The fibre $p_2^{-1}(H)$ for $H\in (\mathbb{P}_\mathbf{k}^r)^*$ is by construction $\varphi^{-1}(H)\subseteq X$. By \cite[Thm.~I.6.10(1)]{Jouanolou}, $\varphi^{-1}(H)$ is irreducible for a general $H$, since $\dim\varphi(X)\ge 2$. It then remains to show that the irreducible divisor $\varphi^{-1}(H)$ is smooth at some point. Note that $x\in \varphi^{-1}(H)$ is a smooth point if and only if $p_2$ is smooth at $(x,H)\in Z$. Since the smooth locus of $p_2$ on $Z$ is either empty or dominates $(\mathbb{P}_\mathbf{k}^r)^*$, it suffices to prove that $p_2$ is smooth at some point.
Now we prove the generic smoothness of $p_2$. Choose the standard affine subset $\mathbb{A}^r_\sk\subsetneq \mathbb{P}^r_{\sk}$ and take $X_0:=\varphi^{-1}(\mathbb{A}^r_\sk)$. The morphism $\varphi: X_0\to \mathbb{A}^r_\sk$ is then given by $r$ regular functions $f_1,...,f_r$ as $$x\mapsto [1,f_1(x),...,f_r(x)] \in \mathbb{P}^r_{\sk}.$$ Since $\varphi(X)$ is non-degenerate, $f_i$ does not vanish identically on $X_0$; we may therefore find an open dense subset $U\subseteq X_0$ where $f_1,...,f_r$ are all invertible.
Above the open affine subset $$(\mathbb{A}_\mathbf{k}^r)^*=\{H_{1,t_1,...,t_r}: X_0+t_1X_1+\cdots+t_rX_r=0| t_1,...,t_r\in \mathbf{k}\}\subsetneq (\mathbb{P}_\mathbf{k}^r)^*,$$ the space $Z_0:=\{(x, H_{1,t_1,...,t_r})| x\in U, \varphi(x)\in H_{1,t_1,...,t_r}\}\subseteq Z$ is nothing but $\mathrm{Spec}\, {\mathcal O}(U)[t_1,...,t_r]/(t_1f_1+\cdots+t_rf_r+1).$
Now let us calculate the relative K\"ahler differential sheaf $\Omega_{Z/(\mathbb{P}_\mathbf{k}^r)^*}|_{Z_0}$. By construction, $\Omega_{Z/(\mathbb{P}_\mathbf{k}^r)^*}|_{Z_0}$ is isomorphic to $\Omega_{U/\mathbf{k}}[t_1,...,t_r]/(t_1\mathrm{d}f_1+\cdots+ t_r\mathrm{d}f_r).$ Note that $\Omega_{U/\mathbf{k}}$ is a locally free sheaf of rank $d=\dim X$, which is one more than the relative dimension of $p_2$; thus $p_2$ is not smooth anywhere on $Z_0$ only if $t_1\mathrm{d}f_1+\cdots+ t_r\mathrm{d}f_r$ vanishes identically on $Z_0$. In other words, for any closed point $x\in U$ and any $t_1,...,t_r\in \mathbf{k}$ such that $t_1f_1(x)+\cdots+t_rf_r(x)=-1$, we have $t_1\overline{\mathrm{d}f_1}+\cdots +t_r\overline{\mathrm{d}f_r}=0\in \Omega_{U/\mathbf{k}}\otimes \mathbf{k}(x).$ Assume this is the case; then for any $x\in U$, considering the point $$\xi_i=(x, (0,\cdots, -f_i^{-1}, \cdots, 0))\in Z_0,$$ the vanishing of $t_1\overline{\mathrm{d}f_1}+\cdots +t_r\overline{\mathrm{d}f_r}$ now gives the vanishing of $f_i^{-1}\mathrm{d} f_i$ at $x$. By varying $x$, we see that $f_i^{-1}\mathrm{d}f_i$ vanishes identically on $U$. As a consequence, $\mathrm{d} f_i\equiv 0$ on $U$ for $i=1,...,r$, so the morphism $\varphi^*\Omega_{\mathbb{P}_\mathbf{k}^r}\to \Omega_{X}$ also vanishes identically on $U$ and hence on $X$. It then follows that $\varphi$ factors through the relative Frobenius morphism, a contradiction to our assumption.
\end{proof}
\begin{rmk}\label{Rem: remark on Bertini}
(1). In \cite[Thm.~I.6.10]{Jouanolou}, the unramification assumption ({\it i.e.}, $\Omega_{X/\mathbb{P}_{\mathbf{k}}^r}=0$) is needed for the reducedness of $\varphi^*H$.
(2). Suppose $\varphi: X\to Y\subseteq \mathbb{P}^r_{\sk}$ is a finite purely inseparable morphism of smooth $d$-folds $X,Y$, then $\varphi^*H_i, i=1,...,d$ can never intersect transversely for any hyperplanes $H_1,...,H_d$.
\end{rmk}
\subsection{Inseparable double cover and foliations}\label{Subsec: foliation}
Suppose $p=2$ and $Y$ is a smooth projective surface over $\mathbf{k}$. It is well known that for any local derivations $D_1, D_2$ on $Y$, both $[D_1,D_2]=D_1\cdot D_2-D_2\cdot D_1$ and $D_1^2$ are again derivations.
\begin{defn}[\protect{\cite[\S~1]{Ekedahl2}}]\label{defn: 1-foliation}
A $1$-foliation on $Y$ is a saturated subsheaf ${\mathcal F}$ of the tangent sheaf $\mathcal{T}_{Y/\mathbf{k}}$ such that for any local derivations $D_1,D_2$ in ${\mathcal F}$, both $[D_1,D_2]$ and $D_1^2$ are also in ${\mathcal F}$.
\end{defn}
\begin{thm}[\protect{\cite[Prop.~2.4]{Ekedahl}}]\label{thm: 1-1 on foliation}
There is a $1-1$ correspondence between the set of $1$-foliations ${\mathcal F}$ of rank $1$ and the set of finite inseparable double covers $\pi: Y\to T$ with $T$ normal.
\end{thm}
This correspondence is given by
$$\{\pi: Y\to T \}\mapsto \{{\mathcal F}=\mathcal{T}_{Y/T}\} \quad\text{and}\quad \{{\mathcal F}\subseteq \mathcal{T}_{Y/\mathbf{k}}\}\mapsto \{\pi: Y\to T=Y/{\mathcal F}\}.$$
Now given a $1$-foliation ${\mathcal F}$ of rank $1$, we obtain automatically the following exact sequence:
\begin{equation}\label{equ: exact sequence associated to a 1-foliation}
0\to {\mathcal F}\to \mathcal{T}_{Y/\mathbf{k}}\to {\mathcal I}_Z\mathcal{M}\to 0,
\end{equation}
where $\mathcal{M}$ is an invertible coherent sheaf and $Z$ is a scheme of finite length. The scheme $Z$ is called the singular scheme of ${\mathcal F}$; it lies exactly above the singularities of $T=Y/{\mathcal F}$ (cf. \cite[\S~3]{Ekedahl}). In particular, $Z=\emptyset$ if and only if $T$ is smooth.
By (\ref{equ: exact sequence associated to a 1-foliation}), we have
\begin{equation}\label{equ: for Z}
\begin{split}
\deg Z&=c_2(\mathcal{T}_{Y/\mathbf{k}})+c_1({\mathcal F})\cdot (c_1({\mathcal F})+K_Y)\\
&=c_2(Y)+c_1({\mathcal F})\cdot (c_1({\mathcal F})+K_Y).
\end{split}
\end{equation}
\begin{ex}\label{exp: 1}
One typical example of a $1$-foliation of rank $1$ is obtained from a fibration. Let $f: Y\to C$ be a surface fibration; then $\mathcal{T}_{Y/C}$ is a $1$-foliation of rank $1$. The finite inseparable double cover $\pi: Y\to T=Y/\mathcal{T}_{Y/C}$ it associates is exactly the normalized relative Frobenius morphism:
$$
\xymatrix{
Y\ar[r]^{\pi} \ar@/^2pc/[rrr]^{F_{Y/C}} \ar[rd]_f & T\ar[rr]^{\text{normalisation} \ \ \ \ }\ar[d]^g && Y\times_{C,F_{C}} C \ar[r] \ar[d] & Y\ar[d]^{f}\\
& C \ar@{=}[rr] && C\ar[r]^{F_C} & C
}$$
Conversely, if $\pi: Y\to T$ is a finite inseparable double cover relative to $f: Y\to C$ with $T$ normal, then $\pi$ must be the normalized relative Frobenius as above.
\end{ex}
\section{Slope inequalities of non-hyperelliptic fibrations}\label{Sec: slope inequality}
Let $f: Y\to C$ be a relatively minimal, non-hyperelliptic surface fibration of fibre genus $g\ge 2$.
Let
$$0=E_0\subseteq E_1\subseteq \cdots \subseteq E_m=f_*\omega_{Y/C}$$
be the Harder-Narasimhan filtration of $f_*\omega_{Y/C}$. Then each $i$ gives rise to a natural rational map $\phi_i: Y\dashrightarrow \mathbb{P}_C(E_i)$, induced by the generically surjective morphism $f^*E_i \hookrightarrow f^*f_*\omega_{Y/C}\to \omega_{Y/C}$. Whenever $\mathrm{rank}(E_i)>1$, $Z_i:=\phi_i(Y)\subset \mathbb{P}(E_i)$ is a surface and $\phi_i$ is generically finite. In this case, denote by
\begin{itemize}
\item $\gamma_i:=\deg(\phi_i)$ and $b_i:=$ the fibre arithmetic genus of $Z_i\to C$.
\end{itemize} Note that $\phi_i$ factors birationally through $\phi_{i+1}$, and thus $\gamma_{i+1}\mid \gamma_{i}$.
\begin{thm}\label{thm: slope with gonality p}
Suppose $f: Y\to \mathbb{P}_\sk^1$ is a relatively minimal, non-hyperelliptic surface fibration of fibre genus $g\ge 2$ over $\mathbf{k}$.
\begin{enumerate}
\item (cf. \cite[Thm~1.2]{L-Z18} for characteristic zero) Assume that $K_f$ is nef and that there is an integer $\delta$ such that either $\gamma_i=1$ or $\gamma_i>\delta$ holds for each $i$. Then $$K_f^2\ge (5-\dfrac{1}{\delta})\dfrac{g-1}{g+2}\chi_f.$$
\item Assume that $K_f$ is nef and that there is a constant $0<c\le \dfrac{1}{3}$ such that $b_i\ge cg$ whenever $\gamma_i=2$. Then $$K_f^2\ge (4+c)\dfrac{g-1}{g+2} \chi_f.$$
\end{enumerate}
\end{thm}
To prove this theorem, let us make some preparations first. As we are working over the base $\mathbb{P}_\sk^1$, due to Grothendieck, we have $$f_*\omega_{Y/\mathbb{P}_\sk^1}=\mathop{\oplus}\limits_{j=1}^m {\mathcal O}(\mu_j)^{n_j}, \quad \mu_1>\mu_2> \cdots >\mu_m .$$ By construction $E_i=\mathop{\oplus}\limits_{j=1}^{i}{\mathcal O}(\mu_j)^{n_j}$ gives the Harder-Narasimhan filtration and $r_i:=\mathrm{rank}(E_i)=\sum\limits_{j=1}^i n_j$. Also we have $$\chi_f=\sum\limits_{i=1}^m \mu_i\cdot n_i=\sum\limits_{i=1}^m r_i(\mu_i-\mu_{i+1}),$$ where $\mu_{m+1}:=0$.
We denote by $\mathcal{L}_i:=(\mathrm{Im}(f^*E_i \hookrightarrow f^*f_*\omega_{Y/\mathbb{P}_\sk^1} \to \omega_{Y/\mathbb{P}_\sk^1}))^{\vee \vee}$ the resulting invertible coherent sheaf and set $d_i:=\mathcal{L}_i\cdot F$, $i=1,...,m$, where $F$ is a general closed fibre of $f$. As $\mathcal{L}_i\otimes f^*{\mathcal O}(-\mu_i)$ is generically globally generated by construction, it is nef. So Xiao's approach (cf. {\it e.g.} \cite[Prop.~2.4]{L-Z18} or \cite{S-S-Z18}) now gives
\begin{align}
\label{Xiao's approach of slope inequality} K^2_f&\geq \sum_{i=1}^m(d_i+d_{i+1})(\mu_i-\mu_{i+1}),\\
\label{Xiao's approach of slope inequality II}K^2_f&\geq (2g-2)(\mu_1+\mu_m),\\
\label{Xiao's approach of slope inequality III}K_f^2&\geq (2g-2)\mu_1
\end{align}
Here $d_{m+1}:=2g-2$.
\begin{rmk}
In positive characteristic, $\mu_m=\mu_m-\mu_{m+1}$ may be negative. So unless we know a priori that $K_f$ is nef, \cite[Prop.~2.4]{L-Z18} works only if $i_k=m$. Namely, the last inequality $K_f^2\geq (2g-2)\mu_1$ can fail if $K_f$ is not nef.
\end{rmk}
Next, since $f$ is non-hyperelliptic, the second multiplication map $$\varrho: S^2f_*\omega_{Y/C} \to f_*\omega_{Y/C}^{\otimes 2}$$ is generically surjective by Max Noether's theorem.
\begin{lem}
We have $$K_f^2=\deg(f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})-\chi_f+l,$$
where $l:=\dim_\mathbf{k} (R^1f_*{\mathcal O}_Y)_{\mathrm{tor}}$. In particular, we have
\begin{equation}\label{equ: K_f via second}
K^2_{f}\ge \deg{\mathcal F}-\chi_f.
\end{equation}
Here ${\mathcal F}:=\mathrm{Im}(\varrho)$.
\end{lem}
\begin{proof}
Note that we have $R^1f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2}=0$, so
\begin{align*}
\chi({\mathcal O}_Y)&=\chi({\mathcal O}_{\mathbb{P}_\sk^1})-\chi(R^1f_*{\mathcal O}_Y)=\chi_f-(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1})- l\\
\chi(\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})&=\chi(f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})=\deg f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2}+3(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1}).
\end{align*}
On the other hand, we have by the Riemann-Roch formula
\begin{equation*}
\chi(\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})-\chi({\mathcal O}_Y)=K_f^2+4(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1}).
\end{equation*}
We are done by a simple calculation.
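Explicitly, eliminating $\chi({\mathcal O}_Y)$ and $\chi(\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})$ from the three displayed identities gives
\begin{align*}
K_f^2&=\chi(\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})-\chi({\mathcal O}_Y)-4(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1})\\
&=\big(\deg f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2}+3(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1})\big)-\big(\chi_f-(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1})-l\big)-4(g-1)\chi({\mathcal O}_{\mathbb{P}_\sk^1})\\
&=\deg(f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})-\chi_f+l.
\end{align*}
The inequality (\ref{equ: K_f via second}) then follows since $l\ge 0$ and $\deg{\mathcal F}\le \deg(f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2})$, the quotient $f_*\omega_{Y/\mathbb{P}_\sk^1}^{\otimes 2}/{\mathcal F}$ being a torsion sheaf (hence of nonnegative degree) by the generic surjectivity of $\varrho$.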
\end{proof}
To approach the slope inequality desired, it suffices to work out a lower bound of $\deg {\mathcal F}$. Let
\begin{equation}\label{1}
0:=\mathcal{F}_0\subseteq \mathcal{F}_1\subseteq \mathcal{F}_2\subseteq \cdots \subseteq \mathcal{F}_{m-1}\subseteq \mathcal{F}_m:={\mathcal F}
\end{equation} be the filtration of ${\mathcal F}$ defined as ${\mathcal F}_i:=\mathrm{Im}(\varrho: S^2(E_i)\to f_*\omega^{\otimes 2}_{Y/\mathbb{P}_\sk^1})$. Since ${\mathcal F}_i/{\mathcal F}_{i-1}$ is a quotient sheaf of $S^2E_i$, its slope is at least $2\mu_i$. So we have
\begin{equation}\label{2}
\mathrm{deg}({\mathcal F})\geq 2\sum\limits_{i=1}^m (\mathrm{rk}({\mathcal F}_{i})-\mathrm{rk}({\mathcal F}_{i-1}))\mu_i =2\sum\limits_{i=1}^m \mathrm{rk}({\mathcal F}_i)(\mu_i-\mu_{i+1}).
\end{equation}
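The second equality in (\ref{2}) is a summation by parts, which we record for the reader's convenience (with $\mu_{m+1}=0$ and $\mathrm{rk}({\mathcal F}_0)=0$):
$$\sum\limits_{i=1}^m (\mathrm{rk}({\mathcal F}_{i})-\mathrm{rk}({\mathcal F}_{i-1}))\mu_i=\sum\limits_{i=1}^m \mathrm{rk}({\mathcal F}_i)\mu_i-\sum\limits_{i=1}^{m-1}\mathrm{rk}({\mathcal F}_{i})\mu_{i+1}=\sum\limits_{i=1}^m \mathrm{rk}({\mathcal F}_i)(\mu_i-\mu_{i+1}).$$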
Now we turn to studying the rank of each ${\mathcal F}_i$. The next lemma gives a lower bound for it. Recall that $b_i$ by definition (at the beginning of this section) is the arithmetic genus of the image $\phi_i(F)$ for a general fibre $F$.
\begin{lem}[Clifford plus theorem, \protect{\cite[\S~III.2]{A-C-G-H}} or \protect{\cite[\S~1 ]{Harris81}}] For each $1\leq i\leq m$, we have
$$\mathrm{rk}(\mathcal{F}_i)\geq
\left\{\begin{array}{cl}
3r_i-3,& \,\,\,\, \text{if}\,\, r_i\leq b_i+1; \\
2r_i+b_i-1,& \,\,\,\, \text{if}\,\, r_i\geq b_i+2
\end{array}
\right.
$$
In particular, if $\phi_i$ is a birational morphism, then $\mathrm{rk}(\mathcal{F}_i)\geq 3r_i-3$.
\end{lem}
\begin{proof}[Proof of Theorem~\ref{thm: slope with gonality p}] We prove here only the second part; the proof of the first part is similar, and one can also refer to \cite[\S~2,3]{L-Z17}.
For the second part of Theorem~\ref{thm: slope with gonality p}, we take $\ell:=\mathrm{min}\{i|\, \gamma_i=1\}$, $\ell':=\mathrm{min}\{i|\, \gamma_i=2\}$ and $\ell'':=\mathrm{min}\{\ell'\le i<\ell|\, r_i\ge b_i+2\}$ (when $i$ ranges in $[\ell', \ell-1]$, $b_i$ decreases and $r_i$ increases).
Then from (\ref{equ: K_f via second}), (\ref{2}) and the lemma above, we have the following inequality:
\begin{equation}\label{equation K2 refined}
\begin{split}
K_f^2&\geq 2\sum\limits_{i=1}^{\ell'-1}(2r_i-1)(\mu_i-\mu_{i+1})+2\sum\limits_{i=\ell'}^{\ell''-1}(3r_i-3)(\mu_i-\mu_{i+1}) \\
&+ 2\sum\limits_{i=\ell''}^{\ell-1}(2r_i+b_i-1)(\mu_i-\mu_{i+1})+2\sum\limits_{i=\ell}^{m}(3r_i-3)(\mu_i-\mu_{i+1})-\chi_f\\
&\geq \sum\limits_{i=1}^{\ell'-1}(3r_i-2)(\mu_i-\mu_{i+1})+\sum\limits_{i=\ell'}^{\ell''-1}(5r_i-6)(\mu_i-\mu_{i+1}) \\
&+ \sum\limits_{i=\ell''}^{\ell-1}((3+2c)r_i-2)(\mu_i-\mu_{i+1})+\sum\limits_{i=\ell}^{m}(5r_i-6)(\mu_i-\mu_{i+1})
\end{split}
\end{equation}
On the other hand, by (\ref{Xiao's approach of slope inequality}) we have another inequality:
\begin{equation}\label{dffjska}
\begin{split}
K_f^2&\geq \sum\limits_{i=1}^m(d_i+d_{i+1})(\mu_i-\mu_{i+1})\\
&\geq \sum\limits_{i=1}^{\ell'-1}(5r_i-4)(\mu_i-\mu_{i+1})+\sum\limits_{i=\ell'}^{\ell''-1}(4r_i-4)(\mu_i-\mu_{i+1}) \\
&+ \sum\limits_{i=\ell''}^{\ell-1}((4+2c)r_i-2)(\mu_i-\mu_{i+1})+\sum\limits_{i=\ell}^{m}(4r_i-2)(\mu_i-\mu_{i+1})-2\mu_m
\end{split}
\end{equation}
Here we note that
\begin{itemize}
\item in case $r_1=1$, we have $(d_1+d_2)=d_2\ge (5r_1-4)=1$;
\item for $i<\ell$ and $r_i>1$, we have $d_i=\gamma_i\cdot d_i'$ where $d_i':=\deg \phi_i(F)$. Castelnuovo's bound and Clifford's theorem then give:
$
d_i\ge \left\{\begin{array}{cc}
3(r_i-1), &i<\ell';\\
2(2r_i-3), &\ell'\le i<\ell'';\\
2(r_i+b_i-1), &\ell''\le i<\ell;\\
2(r_i-1), &\ell\le i\le m-1.
\end{array} \right.
$
\end{itemize}
Finally, $c\cdot$(\ref{equation K2 refined})$+(1-c)\cdot (\ref{dffjska})$ gives:
\begin{align*}
K_f^2&\ge (4+c)\sum\limits_{i=1}^{m}r_i(\mu_i-\mu_{i+1})+ (1-3c)\sum\limits_{i=1}^{\ell'-1}r_i(\mu_i-\mu_{i+1})\\
&-(4-2c)\sum\limits_{i=1}^{\ell'-1}(\mu_i-\mu_{i+1})-(4+2c)\sum\limits_{i=\ell'}^{\ell''}(\mu_i-\mu_{i+1})\\
&-(1+c)\sum\limits_{i=\ell''}^{\ell-1}(\mu_i-\mu_{i+1})-(2+4c)\sum\limits_{i=\ell}^{m-1}(\mu_i-\mu_{i+1})-(4+2c)\mu_m\\
&\geq (4+c)\chi_f-(4+2c)(\mu_1-\mu_m)-(4+2c)\mu_m\\
&=(4+c)\chi_f-(4+2c)\mu_1.
\end{align*}Now the above inequality along with (\ref{Xiao's approach of slope inequality III}) gives $K_f^2\ge (4+c)\dfrac{g-1}{g+1+c}\chi_f$, which implies our desired inequality whether $\chi_f$ is positive or not.
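In detail: since $K_f$ is nef, (\ref{Xiao's approach of slope inequality III}) gives $K_f^2\ge 0$ and $\mu_1\le \dfrac{K_f^2}{2g-2}$, so
$$\Big(1+\dfrac{4+2c}{2g-2}\Big)K_f^2\ge (4+c)\chi_f,\qquad \text{i.e.}\qquad K_f^2\ge (4+c)\dfrac{g-1}{g+1+c}\,\chi_f.$$
If $\chi_f\le 0$, the asserted inequality holds trivially because $K_f^2\ge 0$; if $\chi_f>0$, it follows since $g+1+c\le g+2$.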
\end{proof}
\section{Severi inequality in positive characteristics}\label{Section: Severi inequality}
In this section, as a preparation for studying surfaces on the Severi line, we recall Pardini's elegant covering trick for the Severi inequality. From now on until the end of this paper, we let $X$ be a minimal algebraic surface of general type over $\mathbf{k}$ with maximal Albanese dimension. Namely, the Albanese morphism $a_X: X\to \mathrm{Alb}_X$ is generically finite onto its image. For our purpose, parts of Pardini's original argument are modified.
First we take a very ample line bundle $L$ on $\mathrm{Alb}_X$ and set $L_X:=a_X^*L$.
For any integer $n\geq 2$ satisfying $(n,p)=1$, let $\mu_n: \mathrm{Alb}_X\rightarrow \mathrm{Alb}_X$ be the multiplication by $n$ morphism on $\mathrm{Alb}_X$ and $X_n$ be the base change as follows.
$$
\xymatrix{
X_n \ar[d]_{a_n} \ar[r]^{\nu_n} & X \ar[d]^{a_X} \\
\mathrm{Alb}_X \ar[r]^{\mu_n} & \mathrm{Alb}_X }$$
Then $X_n$ is again a minimal surface of general type with maximal Albanese dimension and $$K^2_{X_n}=n^{2q}K_X^2,\,\,\,\,\,\, \chi({\mathcal O}_{X_n})=n^{2q}\chi({\mathcal O}_X).$$
Here $q:=\dim \mathrm{Alb}_X$.
As is well known, on Abelian varieties $$\mu^{\ast}_n(L)\sim_{\mathrm{num}}n^2L,$$ so we have $L_{X_n}:=a_n^{\ast}(L)\sim_{\mathrm{num}}n^{-2}\nu_n^{\ast}(L_X)$.
Therefore $$L_{X_n}^2=n^{2q-4}L_X^2,\,\,\,\,\,\,\, K_{X_n}\cdot L_{X_n}=n^{2q-2}K_X\cdot L_X.$$
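These identities follow from the projection formula: $\nu_n$ is étale of degree $n^{2q}$ (as $\mu_n$ is) and $K_{X_n}=\nu_n^*K_X$, so for instance
$$L_{X_n}^2=(n^{-2}\nu_n^*L_X)^2=n^{-4}\cdot n^{2q}L_X^2=n^{2q-4}L_X^2,\qquad K_{X_n}\cdot L_{X_n}=n^{-2}\,\nu_n^*K_X\cdot \nu_n^*L_X=n^{2q-2}K_X\cdot L_X.$$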
\begin{lem}[Bertini]
Let $B$ be a general member of $|L|$, then $a_n^*B$ is irreducible and reduced.
\end{lem}
\begin{proof}
See Theorem~\ref{thm: Bertini}, and note that $a_n$ does not factor through the relative Frobenius morphism, since the Albanese morphism $a_X: X\to \mathrm{Alb}_X$ never does.
\end{proof}
\begin{cor}\label{Cor: bigstar}
There is a Zariski open dense subset $U_n(X)\subseteq |L|\times |L|$ such that any $(B_n,B_n')\in U_n(X)$ satisfies:
($\bigstar$) both divisors $D_n:=a_n^*B_n, D_n':=a_n^*B_n'$ are irreducible, reduced and $D_n$ intersects $D_n'$ only at their common smooth locus.
\end{cor}
\begin{proof}
In fact, choose $l+1$ different general elements $\widetilde{B}_1,...,\widetilde{B}_l , \widetilde{B}'$ in $|L|$. Since $\widetilde{B}'$ is general, $\widetilde{D}':=a_n^*\widetilde{B}'$ will not pass through any point in the finite set consisting of (a.) the intersection points of $\widetilde{D}_i:=a_n^*\widetilde{B}_i$ and $\widetilde{D}_j$ for $i\neq j$, and (b.) the singular points of $\widetilde{D}_i,i=1,...,l$.
So unless $\widetilde{D}'$ contains at least $l$ singular points, $\widetilde{D}'$ intersects some $\widetilde{D}_i$ only at their common smooth locus. Note that the number of singular points on $\widetilde{D}'$ is bounded by $p_a(\widetilde{D}')$, which depends only on the divisor class of $L$ and not on the choice of $\widetilde{B}'$; so, taking $l\ge p_a(\widetilde{D}')+1$ from the start, we can take $B_n=\widetilde{B}_i$ for some $i$ and $B_n'=\widetilde{B}'$ to fulfil all conditions in ($\bigstar$).
Note that the existence of one single pair $(B_n,B_n')\in |L|\times |L|$ satisfying $(\bigstar)$ actually implies that any $(B_n,B_n')$ in an open dense Zariski subset $U_n(X)\subseteq |L|\times |L|$ fulfils $(\bigstar)$.
\end{proof}
Up to now, by choosing a general member $(B_n,B_n')\in |L|\times |L|$ such that $(\bigstar)$ holds, we obtain the following commutative diagram of (rational) morphisms.
$$\xymatrix{
\widetilde{X}_n\ar[r]^\pi \ar[drr]_{\varphi_n} & X_n\ar[r]^{a_n} \ar@{-->}[dr]^{\varphi_n} & \mathrm{Alb}_X \ar@{-->}[d]^{\iota_n}\\
&& \mathbb{P}_\sk^1}$$
Here $\iota_n$ is defined by the one-dimensional linear pencil in $|L|$ generated by $B_n$ and $B_n'$, $\pi: \widetilde{X}_n\to X_n$ is the minimal elimination of the base points of $\varphi_n$, and by abuse of language both the rational map $X_n\dashrightarrow \mathbb{P}_\sk^1$ and the morphism $\widetilde{X}_n\to \mathbb{P}_\sk^1$ are denoted by $\varphi_n$. We write $p_1,...,p_r$ for the intersection points of $D_n$ and $D_n'$. Since each $p_i$ is a smooth point of both $D_n$ and $D_n'$ by construction, we have the following lemma.
\begin{lem}
The dual graph of the exceptional divisors for $\pi: \widetilde{X}_n\to X_n$ above $p_i$ is a line (type $A_{m_i}$) as below:
$$
\xymatrix{
\bullet \ar@{-}[r]& \bullet \ar@{-}[r]& \cdots \ar@{-}[r]& \bullet \\
E_{i1} & E_{i2} &\cdots& E_{im_i}}
$$
Here $E_{i1}$ is the exceptional divisor obtained from the first blow-up, $E_{i2}$ is the second and so on.
\end{lem}
\begin{prop}\label{prop: for varphis_n}
We have
\begin{enumerate}
\item $K_{\widetilde{X}_n}=\pi^*K_{X_n}+\sum\limits_{i=1}^r (E_{i1}+2E_{i2}+\cdots+ m_iE_{im_i})$.
\item $E_{ik},k=1,...,m_i-1$ are all $(-2)$-curves and $E_{im_i}$ is a $(-1)$-curve.
\item the strict transform $\widetilde{D}_n$ of $D_n$ is a fibre of $\varphi_n$. In particular, the fibration $\varphi_n$ has connected fibres.
\item $E_{im_i}$ is a section of $\varphi_n$ and $E_{ik},k=1,...,m_i-1$ are all vertical with respect to $\varphi_n$. In particular, $\varphi_n$ has no multiple fibres.
\item $K_{\varphi_n}$ is nef.
\end{enumerate}
\end{prop}
\begin{proof}
(1), (2) follow from the previous lemma.
(3) We have $\pi^*D_n=\widetilde{D}_n+\sum\limits_{i=1}^r (E_{i1}+2E_{i2}+\cdots+ m_iE_{im_i})$ and $\pi^*D_n'=\widetilde{D}_n'+\sum\limits_{i=1}^r (E_{i1}+2E_{i2}+\cdots+ m_iE_{im_i})$, so the strict transform $\widetilde{D}_n$ is a fibre of $\varphi_n$.
(4) Note that $E_{im_i}\cdot \widetilde{D}_n=1$, so $E_{im_i}$ is a section. On the other hand, $E_{ik}\cdot \widetilde{D}_n=0$ for $k=1,...,m_i-1$, so these are vertical.
(5) Since $K_{\widetilde{X}_n}=\pi^*K_{X_n}+\sum\limits_{i=1}^r (E_{i1}+2E_{i2}+\cdots+ m_iE_{im_i})$, we have $$K_{\varphi_n}=\pi^*K_{X_n}+\sum\limits_{i=1}^r (E_{i1}+2E_{i2}+\cdots+ m_iE_{im_i})+2F$$ with $F$ being a fibre of $\varphi_n$. It suffices to show $K_{\varphi_n} \cdot E_{ik}\ge 0$. In fact, we have
\begin{itemize}
\item $K_{\varphi_n}\cdot E_{ik}=K_{\widetilde{X}_n}\cdot E_{ik}=0, k=1,...,m_i-1$;
\item $K_{\varphi_n}\cdot E_{im_i}=K_{\widetilde{X}_n}\cdot E_{im_i}+2E_{im_i}\cdot F=-1+2=1$.
\end{itemize}
\end{proof}
As a consequence, the fibration $\varphi_n$ has the following invariants:
\begin{equation}\label{equ: formula of varphi_n}
g_n=\dfrac{K_{X_n}\cdot D+D^2}{2}+1=\dfrac{n^{2q-2}K_X\cdot L_X+ n^{2q-4}L_X^2}{2}+1.
\end{equation}
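This is adjunction on $\widetilde{X}_n$: writing $D=D_n$ and $G:=\sum\limits_{i=1}^r (E_{i1}+2E_{i2}+\cdots+m_iE_{im_i})$, so that $K_{\widetilde{X}_n}=\pi^*K_{X_n}+G$ and $\widetilde{D}_n=\pi^*D-G$ by Proposition~\ref{prop: for varphis_n}, we get
$$2g_n-2=(K_{\widetilde{X}_n}+\widetilde{D}_n)\cdot \widetilde{D}_n=(\pi^*K_{X_n}+\pi^*D)\cdot(\pi^*D-G)=K_{X_n}\cdot D+D^2,$$
because $\pi^*(\cdot)\cdot G=0$ for any divisor pulled back from $X_n$. Similarly, one computes: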
$$\aligned K_{\varphi_n}^2&=K_{X_n}^2-D^2+4(K_{X_n}\cdot D+D^2)\\
&=n^{2q}K_X^2+3n^{2q-4}L_X^2+4n^{2q-2} K_X\cdot L_X,\\
\chi_{\varphi_{n}}&=\chi({\mathcal O}_{X_n})+ \dfrac{K_{X_n}\cdot D+D^2}{2}\\
&= n^{2q}\chi({\mathcal O}_X)+\dfrac{n^{2q-2}K_X\cdot L_X+ n^{2q-4}L_X^2}{2}.\endaligned$$
It should be noted here that
\begin{itemize}
\item $\varphi_n$ may unavoidably have a singular generic geometric fibre, since the Bertini theorem is not as strong as in characteristic $0$;
\item $\chi({\mathcal O}_X)$, $\chi({\mathcal O}_{X_n})$ (and hence $\chi_{\varphi_n}, n\gg 0$) are all positive integers by \cite{Shepherd-Barron91};
\end{itemize}
So we obtain a sequence of fibred surfaces
$\varphi_n:\widetilde{X}_n\rightarrow \mathbb{P}^1$ for any $(n,p)=1$ with slopes
\begin{equation}\label{equ: formula of slope of varphin}
\begin{split}
\lambda_{\varphi_n}=\frac{K^2_{\varphi_n}}{\chi_{\varphi_n}}=\frac{2K_X^2+8n^{-2}K_X\cdot L_X +6n^{-4}L_X^2}{2\chi({\mathcal O}_X)+n^{-2}K_X\cdot L_X+n^{-4}L_X^2}\rightarrow \frac{K_X^2}{\chi({\mathcal O}_X)}.
\end{split}
\end{equation}
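This expression is obtained from (\ref{equ: formula of varphi_n}) and the formulas for $K_{\varphi_n}^2$, $\chi_{\varphi_n}$ above, multiplying numerator and denominator by $2n^{-2q}$:
$$\lambda_{\varphi_n}=\frac{n^{2q}K_X^2+4n^{2q-2}K_X\cdot L_X+3n^{2q-4}L_X^2}{n^{2q}\chi({\mathcal O}_X)+\frac{1}{2}\big(n^{2q-2}K_X\cdot L_X+n^{2q-4}L_X^2\big)}=\frac{2K_X^2+8n^{-2}K_X\cdot L_X+6n^{-4}L_X^2}{2\chi({\mathcal O}_X)+n^{-2}K_X\cdot L_X+n^{-4}L_X^2}.$$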
Note that for the fibrations $\varphi_n:\widetilde{X}_n\rightarrow \mathbb{P}^1$, all the quotients $E_i/E_{i-1}$ appearing in the Harder-Narasimhan filtration of $(\varphi_n)_{\ast}\omega_{\widetilde{X}_n/\mathbb{P}^1}$ are also strongly semistable. Thus Xiao's approach for slope inequalities works for $\mathrm{char}.(\mathbf{k})>0$ without any modification (cf. \cite{S-S-Z18}) and we have $$\lambda_{\varphi_n}\ge 4-\dfrac{4}{g_n},$$
which, together with (\ref{equ: formula of slope of varphin}), clearly implies $K_X^2\ge 4\chi({\mathcal O}_X)$ by letting $n\to +\infty$.
\section{Refined slopes of $X$}\label{Sec: 5}
In this section, we shall first define a constant $c(X,L)$ playing the role of $c$ in Theorem~\ref{thm: slope with gonality p}(2) for $\varphi_n$ when $n\gg 0$.
Then, we adjust the morphisms $\varphi_n$ carefully to make this constant $c(X,L)$ actually work.
Finally, by applying Theorem~\ref{thm: slope with gonality p}(2), we show that $X$ is on the Severi line only if it is a double cover of an Abelian surface.
\subsection{The constant $c(X,L)$}
First note that there are only finitely many rational double covers $\pi_i: X\dashrightarrow Y_i$, $i=1,...,s$, relative to $a_X$ up to rational equivalence:
$$
\xymatrix{
X\ar@{-->}[rr]^{\pi_i} \ar[rd]_{a_X}&& Y_i\ar[dl]\\
&\mathrm{Alb}_X&}
$$ where $Y_i$ is a minimal model.
In fact, the separable rational double covers relative to $a_X$ are in $1$-$1$ correspondence with the set of involutions of $X$ relative to $\mathrm{Alb}_X$, hence form a finite set. An inseparable rational double cover relative to $a_X$ exists only if $a_X$ is inseparable itself. However, when $a_X$ is inseparable, the relative tangent sheaf $\mathcal{T}_{X/\mathrm{Alb}_X}$ is a $1$-foliation (cf. Definition~\ref{defn: 1-foliation}) of rank $1$, since the relative Frobenius morphism of $X$ can never factor through the Albanese morphism. By construction the inseparable double cover $X\to X/\mathcal{T}_{X/\mathrm{Alb}_X}$ (cf. Theorem~\ref{thm: 1-1 on foliation}) must factor through any other inseparable rational map $X\dashrightarrow T$ relative to $\mathrm{Alb}_X$. Therefore $X\to X/\mathcal{T}_{X/\mathrm{Alb}_X}$ is the unique inseparable double cover relative to $\mathrm{Alb}_X$ up to rational equivalence.
For each $\pi_i: X\dashrightarrow Y_i, i=1,...,s$, we denote by $$c_i(X,L):=\dfrac{K_{Y_i}\cdot L_{Y_i}}{K_X\cdot L_X},$$ where $L_{Y_i}:=h_i^*L$ and $h_i: Y_i\to \mathrm{Alb}_X$ is the morphism in the diagram above.
If $p=2$ and $a_X$ is separable, there is furthermore a constant $c_0(X,L)$ defined as follows:
$$c_0(X,L):=\dfrac{(2K_X-R_X)\cdot L_X}{2K_X\cdot L_X},$$
where $R_X:=c_1(\mathrm{det}(\Omega_{X/\mathrm{Alb}_X}))$. To avoid inaccuracy, we note that $R_X$ is actually the divisor $\sum\limits_{j}\alpha_jP_j$ where
\begin{itemize}
\item $P_j$ run through all prime divisors contained in the support of $\Omega_{X/\mathrm{Alb}_X}$.
\item let $\xi_j$ be the generic point of $P_j$; then $\alpha_j:=\mathrm{length}(\Omega_{X/\mathrm{Alb}_X})_{\xi_j}$.
\end{itemize}
\begin{defn}\label{defn: c}
We take $c(X,L):=\mathrm{min}\{c_i(X,L)\}$ over all the $c_i(X,L)$ defined as above, and define $c(X,L)=1/2$ if none of the $c_i(X,L)$ is defined.
\end{defn}
\begin{prop}\label{prop: characterization of c}
The number $c(X,L)=0$ if and only if $q=2$ (recall that $q$ is the dimension of $\mathrm{Alb}_X$) and $a_X: X\to \mathrm{Alb}_X$ is a double cover.
\end{prop}
\begin{proof}
The `if' part is clear.
For the `only if' part, first note that for $i>0$, $c_i(X,L)=0$ if and only if $K_{Y_i}\cdot L_{Y_i}=0$. As $L_{Y_i}$ is nef and big and $K_{Y_i}$ is nef, this forces $K_{Y_i}\equiv_{\mathrm{num}}0$ by the Hodge index theorem. Since $Y_i$ has maximal Albanese dimension, this holds only if $Y_{i}$ is an Abelian surface. Namely we get $q=2$ and $a_X: X\to \mathrm{Alb}_X$ is a double cover.
Then for $c_0(X,L)$, when it is defined we have an exact sequence:
$$
a_X^*\Omega_{\mathrm{Alb}_X/\mathbf{k}}\to \Omega_{X/\mathbf{k}}\to \Omega_{X/\mathrm{Alb}_X}\to 0
$$
Denote by ${\mathcal B}:=\mathrm{Im}(a_X^*\Omega_{\mathrm{Alb}_X/\mathbf{k}}\to \Omega_{X/\mathbf{k}})$; its double dual $({\mathcal B})^{\vee \vee}$ is a rank $2$ locally free sheaf which is generically globally generated. In particular, $c_1({\mathcal B})=c_1(\wedge^2({\mathcal B})^{\vee \vee})$ is effective. By the above exact sequence we have $K_X=R_X+c_1({\mathcal B})\ge R_X$. As a consequence $2K_X-R_X\ge K_X$, and in particular we have $c_0(X,L)\ge \dfrac{1}{2}$ as $L_X$ is nef.
\end{proof}
\subsection{Refinements of $\varphi_n$}
Recall that in the construction of $\varphi_n$ in the previous section, we need to choose a general member $(B_n, B_n')\in |L|\times |L|$ meeting the condition $(\bigstar)$ in Corollary~\ref{Cor: bigstar}. Namely, our $\varphi_n$ is actually defined only after choosing a $\Xi\in U_n(X)$, so we shall write $\varphi_{n, \Xi}$ from now on. As shown in the same corollary, such a choice ranges over an open dense subset $U_n(X)\subset |L|\times |L|$.
\begin{lem}\label{Lem: non-hyperelliptic}
For any $n\gg 0$ and $(n,p)=1$, there is a Zariski open dense subset $V_n(X)\subseteq U_n(X)$ such that $\varphi_{n,\Xi}$ is non-hyperelliptic for all $\Xi\in V_n(X)$.
\end{lem}
\begin{proof}
Note that $\varphi_{n,\Xi}$ can never be inseparably hyperelliptic (namely, such that the canonical double cover is inseparable), since otherwise the fibres of $\varphi_{n,\Xi}$ would be rational, contradicting the maximal Albanese dimension assumption. So whenever $\varphi_{n,\Xi}$ is hyperelliptic, it gives an involution $\sigma_{n,\Xi}$ on $X_n$ relative to $\varphi_{n,\Xi}$, as $X_n$ is the minimal model.
On the other hand, the involution $\sigma_{n,\Xi}$ clearly cannot be relative to $a_n$ by the maximal Albanese dimension assumption. We are done by the next lemma.
\end{proof}
\begin{lem}\label{lem:4}
Suppose for a dense subset $\Lambda_n \subseteq U_n(X)$ that each $\Xi\in \Lambda_n$ is equipped with an involution $\sigma_{n,\Xi}$ of $X_n$ relative to $\varphi_{n,\Xi}$, then there is a dense subset $\Lambda_n'\subseteq \Lambda_n$ so that $\sigma_{n,\Xi}$ is relative to $a_n$ for each $\Xi\in \Lambda_n'$.
\end{lem}
\begin{proof}
First, there are only finitely many involutions on $X_n$. Next note that for each involution $\sigma$, the set $\{\Xi\in U_n(X)|\, \sigma \ \text{is relative to} \ \varphi_{n,\Xi}\}$ is a Zariski closed subset. And finally, $\sigma$ is relative to $a_n$ if it is relative to every $\varphi_{n,\Xi}$, $\Xi \in U_n(X)$. The lemma then follows.
\end{proof}
Now we choose $\Xi_n\in V_n(X)$, so that the associated fibration $\varphi_{n,\Xi}: \widetilde{X}_n\to \mathbb{P}_\sk^1$ is not hyperelliptic. Applying the construction at the beginning of Section~\ref{Sec: slope inequality} to $\varphi_{n,\Xi}$, by taking the Harder-Narasimhan filtration of $(\varphi_{n,\Xi})_*\omega_{\widetilde X_n/\mathbb{P}_\sk^1}$, we obtain a sequence of rational maps as in Section~\ref{Sec: slope inequality}: $$\phi_{n,\Xi,i}: \widetilde{X}_n \dashrightarrow Z_{n,\Xi,i}\subseteq \mathbb{P}(E_i).$$
With the help of (\ref{equ: formula of slope of varphin}) and Theorem~\ref{thm: slope with gonality p}(1) we have:
\begin{cor}\label{cor:existence of double cover}
Suppose $K_X^2 <\dfrac{9}{2}\chi({\mathcal O}_X)$. Then for all $n\gg 0$ with $(n,p)=1$ and all $\Xi_n\in V_n(X)$, some of the maps $\phi_{n,\Xi,i}: \widetilde{X}_n\dashrightarrow Z_{n,\Xi,i}$ are rational double covers.
\end{cor}
Fixing $n,\Xi$, there may be more than one $\phi_{n,\Xi,i}: X_n\dashrightarrow Z_{n,\Xi,i}$ of degree $2$, but they are all birationally equivalent. We then take $Z_{n,\Xi}$ to be the minimal model of all such $Z_{n,\Xi,i}$ and denote by $\phi_{n,\Xi}: \widetilde{X}_n \dashrightarrow Z_{n,\Xi}$ the induced rational map:
$$
\xymatrix{
\widetilde{X}_n \ar[drr]_{\varphi_{n,\Xi}}\ar@{-->}[rr]^{\phi_{n,\Xi}} && Z_{n,\Xi}\ar@{-->}[d]^{\tau_{n,\Xi}} && Z'_{n,\Xi}\ar[ll]_{\vartheta_{n,\Xi}}\ar[dll]^{\tau'_{n,\Xi}} \\
&&\mathbb{P}_\sk^1&
}$$
\begin{defn}
Taking $\vartheta_{n,\Xi}: Z'_{n,\Xi}\to Z_{n,\Xi}$ to be the minimal resolution of the indeterminacy of $\tau_{n,\Xi}$, we denote by $g'_{n,\Xi}$ the fibre genus of $\tau_{n,\Xi}': Z'_{n,\Xi}\to\mathbb{P}_\sk^1$.
\end{defn}
It is clear that $g'_{n,\Xi}$ is no larger than the fibre arithmetic genus of $Z_{n,\Xi,i}\to \mathbb{P}_\sk^1$. In the spirit of Theorem~\ref{thm: slope with gonality p}(2), to prove Theorem~\ref{Thm: slope of X with c} below in this section, it suffices to show that $\varlimsup\limits_{n\to \infty}{\dfrac{g'_{n,\Xi}}{g_n}}\ge c(X,L)$ for a suitable choice of $\Xi_n$ defining $\varphi_{n,\Xi_n}$ for each $n\gg 0$ with $(n,p)=1$.
Now we start to adjust the choice of $\Xi_n$. Take $W_n(X):=V_n(X)\cap \bigcap \limits_{i=1}^s U_n(Y_i)$ (here $Y_i$ is defined in the previous subsection and $U_n(Y_i)$ is defined similarly to $U_n(X)$); it is a Zariski open dense subset of $|L|\times |L|$.
\begin{lem}[\protect{\cite[Lemma~3.2]{L-Z17}}]\label{Lem: choice of XI}
Assume $K_X^2<\dfrac{9}{2}\chi({\mathcal O}_X)$. Then for each $n\gg 0$ with $(n,p)=1$, either \begin{enumerate}[i)]
\item there is a $\Xi\in W_n(X)$ such that $\phi_{n,\Xi}$ is separable and relative to $a_n$; or
\item there is an open dense subset $W_n(X)'\subset W_n(X)$ such that $\phi_{n,\Xi}$ is inseparable for any $\Xi\in W_n(X)'$.
\end{enumerate}
\end{lem}
\begin{proof}
Following from Lemma~\ref{lem:4}, case (i) happens if there is a dense subset of $W_n(X)$ such that $\phi_{n,\Xi}$ is separable for all $\Xi$ in this subset. Clearly, if this is not the case, we obtain (ii).
\end{proof}
We shall now choose $\Xi_n$ for each $n$ as follows.
\begin{itemize}
\item If (i) in the above lemma happens, we choose any $\Xi_n$ making $\phi_{n,\Xi_n}$ separable and relative to $a_n$.
\item If (i) fails and $a_n: X_n\to \mathrm{Alb}_X$ is inseparable, we choose any $\Xi_n\in W_n(X)'$ making $\phi_{n,\Xi_n}$ inseparable.
\item If (i) fails and $a_n: X_n\to \mathrm{Alb}_X$ is separable, we choose $\Xi_n$ as in the following lemma.
\end{itemize}
\begin{lem}\label{lemma: g'}
Suppose $a_n: X_n\to \mathrm{Alb}_X$ is separable and $\phi_{n,\Xi}$ is inseparable for any $\Xi$ contained in an open dense subset $W_n(X)'\subset W_n(X)$. Then there is a suitable choice of $\Xi_n$ such that $$\dim_{\mathbf{k}(t)}(\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}})_\eta\le c_1(\Omega_{X_n/\mathrm{Alb}_X})\cdot L_{X_n}$$
for the associated fibration $\varphi_{n,\Xi_n}: \widetilde X_n \to \mathbb{P}_\sk^1$. Here $\eta:=\mathrm{Spec}(\mathbf{k}(t))$ is the generic point of $\mathbb{P}_\sk^1$.
\end{lem}
This lemma is crucial to Proposition~\ref{prop: ratio} below; the inequality in it is used to control the lower bound of $g_{n,\Xi}'$. Before we prove this lemma, we first introduce some necessary notation. Denote by $X_n'\subset X_n$ the open locus where $a_n$ is unramified, and by $P_1,...,P_s$ the prime divisors contained in the complement of $X'_n$. By Theorem~\ref{thm: Bertini} and \cite[Thm.~I.6.10(2)]{Jouanolou}, there is an open subset $V\subset |L|$ such that for any $H\in V$, $a_n^*H$ is irreducible, reduced and moreover smooth inside $X_n'$. The choice of $\Xi_n=(B_n,B_n')$ is then as below.
\begin{enumerate}[(a)]
\item First take $W'\subseteq |L|$ to be a dense open subset contained in the projection of $W_n(X)\subseteq |L|\times |L|$ onto the first factor.
\item Then choose any $B_n\in V\cap W'$ not coinciding with any $P_j$. By construction, there is an open dense subset $M\subset |L|$ such that $B_n\times M$ is contained in $W_n(X)'$.
\item Finally choose $B_n'$ as a general member of $M$.
\end{enumerate}
\begin{proof}
Let us first note that $\dim_{\mathbf{k}(t)}(\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}})_\eta$ is calculated as follows. First pick out all the horizontal (with respect to $\varphi_{n,\Xi_n}$) divisors contained in the support of $\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}}$, say $E_1,...,E_m$. Then $$\dim_{\mathbf{k}(t)}(\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}})_\eta=\sum\limits_{j=1}^m\mathrm{length}(\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1, \mathrm{tor}})_{\eta_j} \cdot [E_j:\mathbb{P}_\sk^1],$$ where $\eta_j$ is the generic point of $E_j$.
Next we show that, by our choice of $B_n$, the divisors $E_1,...,E_m$ form a subset of the strict transforms of the prime divisors $P_1,...,P_s$. In fact, since the exceptional divisors of $\widetilde{X}_n\to X_n$ consist of either sections of $\varphi_{n,\Xi}$ or vertical divisors by Proposition~\ref{prop: for varphis_n}, they are not contained in the horizontal component of the support of $\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}}$. Next, by the special choice of $B_n$, a general member of the linear system generated by $B_n,B_n'$ lies in $V$; in particular, its pull-back is smooth inside $X_n'$. Namely, for any $E_j$ its intersection with a general fibre of $\varphi_{n,\Xi}$ is contained in the inverse image of the complement of $X_n'$. As a result $E_j$ has to be one of the strict transforms of the $P_j$.
Finally, we calculate the length. After fixing $B_n$, the morphism $X_n \stackrel{a_n}{\to} \mathrm{Alb}_X \subseteq \mathbb{P}(H^0(L))$
restricted to $X_n\backslash D_n$, where $D_n:=a_n^*B_n$, is a morphism to the affine space $$X_n\backslash D_n\stackrel{(f_1,...,f_l)}{\longrightarrow}\mathbb{A}^l,$$ with $l=\dim H^0(L)-1$. Then a general choice of a vector $(\lambda_1,...,\lambda_l)\in \mathbb{A}^{l}(\mathbf{k})$ gives rise to a general choice of $B_n'$ whose pull-back is the zero locus of the function $\sum\limits_{i=1}^l\lambda_i f_i$. Now denote by $\xi_j$ the generic point of $P_j$.
The module $\Omega_{X_n/\mathbf{k},\xi_j}$ is a free ${\mathcal O}_{X_n,\xi_j}$-module. By the above choice of $B_n'$, we have $$\Omega_{\widetilde X_n/\mathbb{P}_\sk^1,\xi_j}=\Omega_{X_n/\mathbf{k},\xi_j}/(\sum\limits_{i=1}^l\lambda_i\mathrm{d} f_i).$$ In particular, its torsion length is $\mathrm{max}\{s\in \mathbb{N}| t_j^{-s}(\sum\limits_{i=1}^l\lambda_i\mathrm{d} f_i)\in \Omega_{X_n/\mathbf{k},\xi_j}\}$, where $t_j$ is a local parameter. It then follows from Lemma~\ref{Lemma on DVR} that this length is not larger than the length of the torsion sheaf $$\Omega_{X_n/\mathbf{k},\xi_j}/(\mathrm{d} f_1,\mathrm{d} f_2,...,\mathrm{d} f_l)=\Omega_{X_n/\mathrm{Alb}_X,\xi_j}.$$ In other words, we have
\begin{align*}
\sum\limits_{j=1}^s\mathrm{length}(\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1, \mathrm{tor}})_{\xi_j}[P_j:\mathbb{P}_\sk^1]&\le \sum\limits_{j=1}^s\mathrm{length}(\Omega_{{X}_n/\mathrm{Alb}_X})_{\xi_j}[P_j:\mathbb{P}_\sk^1]\\
&=c_1(\Omega_{X_n/\mathrm{Alb}_X})\cdot L_{X_n}.
\end{align*}
\end{proof}
\begin{lem}\label{Lemma on DVR}
Let $R$ be a D.V.R. containing a field $\mathbf{k}$ and let $t$ be a uniformizing parameter. Suppose $M$ is a free $R$-module of finite rank and $u_i\in M,i=1,...,n$. For each $u\in M$ we set $v(u):=\mathrm{max}\{s|t^{-s}u\in M\}\in \mathbb{N} \cup \{+\infty\}$. Then,
\begin{enumerate}
\item the torsion length of $M/(u_1,...,u_n)$ is at least $\mathrm{min}\{v(u_i)|i=1,...,n\}$;
\item there is a proper linear subspace $N$ of $\mathbf{k}^n$ such that for all $(\lambda_1,...,\lambda_n)\in \mathbf{k}^n\backslash N$, we have $v(\sum\limits_{i=1}^n\lambda_i\cdot u_i)=\mathrm{min} \{v(u_i)|i=1,...,n\}$.
\end{enumerate}
\end{lem}
\begin{proof}
We rearrange $u_i$ such that $r=v(u_1)\le v(u_2)\le \cdots \le v(u_n)$.
(1). By construction, the canonical map $R/(t^{r}) \stackrel{\cdot t^{-r}u_1}{\to} M/(u_1,...,u_n)$ is an embedding.
(2). Dividing by $t^{r}$, we may assume that $v(u_1)=0$. As a result, we may find a basis $e_1=u_1,e_2,...,e_k$ of $M$. Then each $u_i=\sum\limits_{j=1}^k f_{ij}e_j$ with $f_{ij}\in R$, $i=2,...,n$. Consider the map $$\mathbf{k}^n\to R/tR: (\lambda_1,...,\lambda_n)\mapsto \lambda_1+\sum\limits_{i=2}^n\lambda_i\overline{f_{i1}}.$$
This map is a non-zero $\mathbf{k}$-linear map. If $(\lambda_1,...,\lambda_n)$ is not in the kernel of this map, then by construction we have
$v(\sum\limits_{i=1}^n\lambda_i\cdot u_i)=0$.
\end{proof}
From now on, we fix a choice of $\Xi_n$ following the above rule, and for simplicity we drop the subscript $\Xi_n$ from the notation introduced previously. For example, we write $g_n'=g_{n,\Xi_n}'$ defined above.
We take $\Lambda_1:=\{n| \phi_n \ \text{is inseparable}\}$ and $\Lambda_2:=\{n| \phi_n \ \text{is separable} \}.$
\begin{prop}\label{prop:existence of descent}
If $\Lambda_2$ contains infinitely many prime numbers, then $\phi_n: X_n\dashrightarrow Z_n$ descends (with respect to $\nu_n:X_n\to X$) to $\pi_i: X\dashrightarrow Y_i$ for a fixed $i$ for infinitely many $n\in \Lambda_2$.
\end{prop}
In \cite{L-Z17}, the authors prove a similar result \cite[Thm.~3.1]{L-Z17} by using Xiao's linear bound (in terms of $c_1^2$) on the automorphism groups of surfaces of general type (cf. \cite{Xiao94}). Such a linear bound is not yet available in positive characteristic, so we provide a new argument here instead.
To proceed, we need the following set-up. Let $A$ be an abelian variety over $\mathbf{k}$, and $\mu_\ell:A\rightarrow A$ be the multiplication by $\ell$ morphism. Let $V\hookrightarrow A$ be a fixed subvariety and $V_{\ell}$ be the base change of $V$ via $\mu_\ell$:
$$
\xymatrix{
V_\ell\ar[rr]^{\nu_{\ell}}\ar@{^(->}[d] && V \ar@{^(->}[d] \\
A\ar[rr]^{\mu_\ell} && A
}
$$
\begin{lem}\label{Lem1}
Given any generically finite morphism $\pi: V'\to V$ (not necessarily proper) where $V'$ is a variety, assume that for infinitely many primes $\ell$ the morphism $\nu_{\ell}: V_{\ell}\to V$ factors through $\pi$; then $\pi$ is an isomorphism.
\end{lem}
\begin{proof}
We consider the field extension $K(V) \subseteq K(V')$. By our assumption, we may find some prime $\ell$ such that $\ell> [K(V'):K(V)]$ and the covering $\nu_\ell: V_\ell\to V$ factors through $\pi$ as $$\xymatrix{
V'_\ell\ar[r]^\tau \ar@/^1pc/[rr]^{\nu'_\ell} & V' \ar[r]^{\pi} & V.
}$$
Therefore $K(V')$ is $K(V)$-linearly embedded into $K(V'_\ell)$, where $V'_\ell$ is a component of $V_\ell$. By our construction, $\mu_\ell$ (and hence $\nu_\ell$) is a $(\mathbb{Z}/\ell\mathbb{Z})^{2g}$ Galois cover, and $V'_\ell$ is a $\Gamma$ Galois cover of $V$ for a certain subgroup $\Gamma\subseteq (\mathbb{Z}/\ell\mathbb{Z})^{2g}$. Thus we have $$[K(V'):K(V)]\cdot [K(V'_\ell):K(V')]=|\Gamma|\mid \ell^{2g}.$$ This is possible only if $[K(V'):K(V)]=1$, since $\ell> [K(V'):K(V)]$ by assumption.
By the above argument, the morphism $\pi$ is birational. Note that $\mu_\ell$ is finite and $\tau$ is projective. Thus, $\tau$ is surjective and quasi-finite, which implies it is also finite. By Chevalley's theorem, the morphism $\pi$ is also finite. On the other hand, it is clear that ${\mathcal O}_{V'}\subseteq {\mathcal O}_{V'_\ell}^\Gamma={\mathcal O}_V$ and we are done.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:existence of descent}]
We write $V:=a_X(X)$ for the schematic image of $X$ and denote by $$M:=\mathrm{Aut}_{V}(X)[2]$$ the scheme of order $2$ automorphisms of $X$ over $V$. Since $X\to V$ is generically finite, $M$ is also generically finite over $V$. Let $M_1, M_2,...,M_s$ be the (reduced) components of $M$ dominating $V$. By our assumption, $\nu_\ell: V_\ell\to V$ factors through $M_i$ (for some fixed $i$) for infinitely many primes $\ell \in \Lambda_2$, the factorization being represented by $\sigma_\ell$. It then follows from Lemma~\ref{Lem1} that $M_i$ is isomorphic to $V$. Namely there is a non-trivial automorphism $\sigma$ of $X$ of order $2$ relative to $\mathrm{Alb}_X$ descending infinitely many of the $\sigma_\ell$. We are done.
\end{proof}
\begin{prop}\label{Prop: inseparable descent}
If $a_X: X\to \mathrm{Alb}_X$ is inseparable, then for any $n\in \Lambda_1$, the morphism $\phi_n$ descends rationally to the unique inseparable double cover $\pi: X\to X/\mathcal{T}_{X/\mathrm{Alb}_X}$.
\end{prop}
\begin{proof}
Note that when $n\in \Lambda_1$, $\phi_n$ is purely inseparable relative to the base $\mathbb{P}_\sk^1$. So $\phi_n$ is obtained from the $1$-foliation $\mathcal{T}_{X_n/\mathbb{P}_\sk^1}$ (cf. Example~\ref{exp: 1}). However, we clearly have $\mathcal{T}_{X_n/\mathrm{Alb}_X}\subseteq \mathcal{T}_{X_n/\mathbb{P}_\sk^1}$, as the map $\varphi_n: X_n\dashrightarrow \mathbb{P}_\sk^1$ factors rationally through $a_n: X_n\to \mathrm{Alb}_X$ by construction. In case $a_X$ is inseparable, so is $a_n$, and hence both $1$-foliations $\mathcal{T}_{X_n/\mathrm{Alb}_X}, \mathcal{T}_{X_n/\mathbb{P}_\sk^1}$ are of rank $1$; namely, they coincide. Finally, since $\mu_n$ is \'etale, $\mathcal{T}_{X_n/\mathrm{Alb}_X}$ descends to $\mathcal{T}_{X/\mathrm{Alb}_X}$. Therefore $\phi_n$ descends as desired.
\end{proof}
\begin{prop}\label{prop: ratio}
We have $\varlimsup\limits_{n\to \infty}\dfrac{g_n'}{g_n}\ge c(X,L)$.
\end{prop}
\begin{proof}
Due to Proposition~\ref{prop:existence of descent} and Proposition~\ref{Prop: inseparable descent}, there are actually two possibilities:
\begin{enumerate}
\item $\phi_n$ descends to a $\pi_i$ for infinitely many $n$;
\item $a_X$ is separable and $\phi_n$ is inseparable for infinitely many $n$.
\end{enumerate}
In the first case, the rational fibration $\tau_n: Z_n\dashrightarrow \mathbb{P}_\sk^1$ is obtained similarly to $\varphi_n: X_n\dashrightarrow \mathbb{P}_\sk^1$ by replacing $X_n$ with $(Y_i)_n$. By our choice of $\tau'_n$ (cf. the construction of $W_n(X)$ in this subsection), we can directly calculate that $$g_n'=\dfrac{n^{2q-2}K_{Y_i}\cdot L_{Y_i}+n^{2q-4} L_{Y_i}^2}{2}+1,$$ as we did for $X_n$ (cf. formula (\ref{equ: formula of varphi_n})).
So $g_n'/g_n\rightarrow c_i(X,L)=\dfrac{K_{Y_i}\cdot L_{Y_i}}{K_X\cdot L_X}$.
In the second case, we shall need the genus change formula. By \cite[\S~2.1, Prop.~2.2]{Gu16}, the genus $g_n'$ is given by
$$g_n'=g_n-\dfrac{1}{4}\dim_{\mathbf{k}(t)} (\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}})_\eta.$$
Here $\eta:=\mathrm{Spec}(\mathbf{k}(t))$ is the generic point of the base $\mathbb{P}_\sk^1$. By Lemma~\ref{lemma: g'}, we have
$$\dim_{\mathbf{k}(t)} (\Omega_{\widetilde{X}_n/\mathbb{P}_\sk^1,\mathrm{tor}})_\eta\le L_{X_n}\cdot c_1(\mathrm{det}(\Omega_{X_n/\mathrm{Alb}_X})).$$
Now as $\mu_n: \mathrm{Alb}_X\to \mathrm{Alb}_X$ is \'etale, $$ c_1(\mathrm{det}(\Omega_{X_n/\mathrm{Alb}_X}))= \nu_n^* c_1(\Omega_{X/\mathrm{Alb}_X})=\nu_n^*R_X.$$
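Here we again use the projection formula: since $L_{X_n}\sim_{\mathrm{num}}n^{-2}\nu_n^*L_X$ and $\deg \nu_n=n^{2q}$,
$$L_{X_n}\cdot \nu_n^*R_X=n^{-2}\,\nu_n^*L_X\cdot \nu_n^*R_X=n^{-2}\cdot n^{2q}\,L_X\cdot R_X=n^{2q-2}\,R_X\cdot L_X.$$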
All together, the following inequality holds:
\begin{align*}
g_n'&\ge g_n-\dfrac{L_{X_n}\cdot \nu_n^*R_X}{4}\\
&=\dfrac{n^{2q-2}K_X\cdot L_X+n^{2q-4}L_X^2}{2}-\dfrac{n^{2q-2}R_X\cdot L_X}{4}+1\\
&=\dfrac{n^{2q-2}(2K_X-R_X)\cdot L_X+2n^{2q-4}L_X^2}{4}+1,
\end{align*}
and $\varlimsup\limits_{n\to \infty} \dfrac{g_n'}{g_n}\ge \dfrac{(2K_X-R_X)\cdot L_X}{2K_X\cdot L_X}=c_0(X,L)$.
\end{proof}
\subsection{Slopes revisited}
Immediately from Theorem~\ref{thm: slope with gonality p} and Proposition~\ref{prop: ratio}, we have the following theorem.
\begin{thm}\label{Thm: slope of X with c}
We have $K_X^2\ge [4+\mathrm{min}\{c(X,L),\dfrac{1}{3}\}]\chi({\mathcal O}_X)$.
\end{thm}
Combining this with Proposition \ref{prop: characterization of c}, the following corollary is clear.
\begin{cor}\label{cor: double cover comes out}
The surface $X$ is on the Severi line only if $q=2$ and $a_X: X\to \mathrm{Alb}_X$ is a double cover.
\end{cor}
\section{Double covers of Abelian surfaces}\label{Sec: 6}
\subsection{Invariants of a double cover}
We start from an arbitrary morphism $\vartheta: T\to Y$ of degree $2$ between minimal smooth surfaces over $\mathbf{k}$. Take $\vartheta_0: T_0\to Y$ to be the normalisation of $Y$ in $K(T)$.
Then $\vartheta_0$ is a flat double cover. In fact, since $T_0$ is normal, ${\vartheta_0}_*{\mathcal O}_{T_0}$ is reflexive and hence $S_2$. It then follows from \cite{A-B57} that ${\vartheta_0}_*{\mathcal O}_{T_0}$ is locally free. As a consequence, we have an exact sequence:
\begin{equation}\label{equ: exact of structures}
0\to {\mathcal O}_{Y}\to {\vartheta_0}_*{\mathcal O}_{T_0}\to \mathcal{L}\to 0
\end{equation}
for an invertible coherent sheaf $\mathcal{L}$ on $Y$. By \cite[Prop.~0.1.3]{C-D}, $T_0$ is Gorenstein and $\omega_{T_0/Y}=\vartheta_0^*\mathcal{L}^{-1}$. So by the Riemann-Roch formula, we have
\begin{align}
K_{T_0}^2&=2(K_Y-c_1(\mathcal{L}))^2;\\
\label{equs: K^2 and chi}\chi({\mathcal O}_{T_0})&=2\chi({\mathcal O}_{Y})+\dfrac{c_1(\mathcal{L})(c_1(\mathcal{L})-K_Y)}{2}.
\end{align}
Thus
\begin{equation}\label{equ: formula of c^2-4chi}
K_{T_0}^2-4\chi({\mathcal O}_{T_0})=2(K_Y^2-4\chi({\mathcal O}_Y))+\vartheta_0^*K_Y\cdot K_{T_0/Y}.
\end{equation}
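Indeed, since $K_{T_0/Y}=-\vartheta_0^*c_1(\mathcal{L})$, formula (\ref{equ: formula of c^2-4chi}) is a direct expansion:
\begin{align*}
K_{T_0}^2-4\chi({\mathcal O}_{T_0})&=2(K_Y-c_1(\mathcal{L}))^2-8\chi({\mathcal O}_{Y})-2c_1(\mathcal{L})\cdot(c_1(\mathcal{L})-K_Y)\\
&=2K_Y^2-2K_Y\cdot c_1(\mathcal{L})-8\chi({\mathcal O}_{Y})\\
&=2(K_Y^2-4\chi({\mathcal O}_Y))+\vartheta_0^*K_Y\cdot K_{T_0/Y},
\end{align*}
where the last step uses $\vartheta_0^*K_Y\cdot \vartheta_0^*c_1(\mathcal{L})=2\,K_Y\cdot c_1(\mathcal{L})$.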
To work out the numerical invariants of $T$, we need to resolve the singularities of $T_0$. One typical way to do this is the canonical resolution (cf. \cite[\S~2]{Gu16}). Suppose $Q$ is a singularity of $T_0$ and $P=\vartheta_0(Q)\in Y$; we then blow up $P$ via $\rho: Y'\to Y$ and take $\vartheta_0': T_0'\to Y'$ to be again the normalisation in $K(T)$.
$$
\xymatrix{
T_0'\ar[r]^{\rho'}\ar[d]_{\vartheta_0'} & T_0 \ar[d]^{\vartheta_0}\\
Y'\ar[r]^\rho & Y}
$$
We again have
\begin{equation}\label{equ: local use}
0\to {\mathcal O}_{Y'}\to {\vartheta_0'}_*{\mathcal O}_{T'_0}\to \mathcal{L}'\to 0
\end{equation}
Note that $\mathcal{L}'=\rho^*(\mathcal{L})(rE)$ for some $r\in \mathbb{Z}$, where $E$ is the exceptional divisor of $\rho$.
\begin{lem}\label{lem: r>1}
We have $r\ge 1$.
\end{lem}
\begin{proof}
Suppose $x,y\in \mathfrak{m}_P$ form a system of parameters. In this case, there is a unique $Q\in T_0$ lying above $P$. Let $z\in \mathfrak{m}_Q$ be a function such that ${\mathcal O}_{T_0,Q}={\mathcal O}_{Y,P}+z\cdot{\mathcal O}_{Y,P}$. We can write out the minimal polynomial of $z$ with respect to ${\mathcal O}_{Y,P}$ as $z^2+az+b=0$, $a,b\in \mathfrak{m}_P.$ Then the space of differentials $\Omega_{T_0/\mathbf{k}}\otimes\mathbf{k}(Q)$ at $Q$ is given by $$\mathbf{k}(Q)\mathrm{d}x\oplus \mathbf{k}(Q)\mathrm{d}y\oplus \mathbf{k}(Q)\mathrm{d}z/ \mathbf{k}(Q)\mathrm{d} b.$$ As $Q$ is not smooth, we have $\mathrm{d}b=0$ in $\Omega_{Y,P}\otimes \mathbf{k}(P)$, or equivalently $b\in \mathfrak{m}_P^2$.
Now consider the blowing-up. In the open subset of $Y'$ where $t=\dfrac{x}{y}$ is defined ($y$ is the generator of $E$ on this open subset), we see that the minimal polynomial of $\dfrac{z}{y}$ is
$$(\dfrac{z}{y})^2+\dfrac{a}{y}\cdot \dfrac{z}{y}+\dfrac{b}{y^2}=0.$$ All coefficients are regular on this piece, as $a\in \mathfrak{m}_P$ and $b\in \mathfrak{m}_P^2$. So $r\ge 1$.
\end{proof}
Using (\ref{equ: local use}) together with formula (\ref{equ: formula of c^2-4chi}), we see that
\begin{align}
\label{equ: change of chi by blowing up}
K_{T_0'}^2&=K_{T_0}^2-2(r-1)^2\\
\chi({\mathcal O}_{T'_0})&=\chi({\mathcal O}_{T_0})-\dfrac{r(r-1)}{2}
\end{align}
and
\begin{equation}\label{equ: chang of K^2-4}
K_{T'_0}^2-4\chi({\mathcal O}_{T'_0})=K_{T_0}^2-4\chi({\mathcal O}_{T_0})+2(r-1).
\end{equation}
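Indeed, (\ref{equ: chang of K^2-4}) is just the combination of the two change formulas above:
$$K_{T'_0}^2-4\chi({\mathcal O}_{T'_0})=\big(K_{T_0}^2-2(r-1)^2\big)-4\Big(\chi({\mathcal O}_{T_0})-\dfrac{r(r-1)}{2}\Big)=K_{T_0}^2-4\chi({\mathcal O}_{T_0})+2(r-1).$$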
The canonical resolution process is the iteration of the above blowing-up and normalisation procedure starting from $T_0$. We write it out as the following diagram; it stops once $T_n$ is smooth over $\mathbf{k}$.
$$
\xymatrix{
\dots \ar[r] & T_n\ar[d]^{\vartheta_n} \ar[r]^{\rho'_n}& \cdots \ar[r]^{\rho'_2} \ar[d] & T_1\ar[r]^{\rho_1' \ \ \ }\ar[d]^{\vartheta_1} & T_0 \ar[d]^{\vartheta_0}\\
\cdots \ar[r]& Y_n\ar[r]^{\rho_n} & \cdots \ar[r]^{\rho_2} & Y_1\ar[r]^{\rho_1 \ \ \ } & Y_0=Y
}
$$
\begin{prop}\label{Prop:1}
The canonical resolution process stops in finitely many steps.
\end{prop}
\begin{proof}
If $p\neq 2$, this result is well known. For example one can refer to \cite[\S~2]{Gu16}.
When $p=2$: if $\vartheta$ is inseparable, this is Proposition~\ref{prop: normalized in p=2} below, and if $\vartheta$ is separable, this is Proposition~\ref{prop: sep}.
\end{proof}
As a consequence of Proposition~\ref{Prop:1} and formula (\ref{equ: chang of K^2-4}), we may assume $T_n$ is smooth, and we define $r_i$ by $\mathcal{L}_{i+1}=\rho_i^*\mathcal{L}_i(r_iE_i)$ as before. Then we have
\begin{equation}\label{equ: 90}
\begin{split}
K_{T}^2-4\chi({\mathcal O}_{T})\ge& K_{T_n}^2-4\chi({\mathcal O}_{T_n})\\
=&2(K_Y^2-4\chi({\mathcal O}_Y))+\vartheta^*_0 K_Y\cdot (K_{T_0/Y})+2\sum\limits_{i=1}^{n-1}(r_i-1)\\
\ge &2(K_Y^2-4\chi({\mathcal O}_Y))+\vartheta^*_0 K_Y\cdot (K_{T_0/Y}).
\end{split}
\end{equation}
Note that the equality
\begin{equation}
\label{equ: local for ...}K_{T}^2-4\chi({\mathcal O}_{T})=2(K_Y^2-4\chi({\mathcal O}_Y))+\vartheta^*_0 K_Y\cdot (K_{T_0/Y})
\end{equation}
holds if and only if
\begin{itemize}
\item $T_n=T$ is minimal and
\item $r_i= 1$ for $i=1,...,n-1$.
\end{itemize}
By (\ref{equ: change of chi by blowing up}), $r_i=1$ for all $i$ if and only if $\chi({\mathcal O}_{T_0})=\chi({\mathcal O}_{T_n})$, namely $T_0$ has at worst rational singularities. On the other hand, since $T_0$ is a flat double cover, its singularities are automatically double points. So by \cite[Cor.~4.19]{Badescu01}, $T_0$ has at worst A-D-E singularities in this case. To summarize, (\ref{equ: local for ...}) holds if and only if the morphism $T\to T_{\mathrm{can}}$ to the canonical model factors through $T_0$. As a result, we have the following corollary.
\begin{cor}\label{Cor: main 1}
Suppose $q=2$ and $a_X: X\to \mathrm{Alb}_X$ is a double cover, then $X$ is on the Severi line if and only if the canonical model is a flat double cover of $\mathrm{Alb}_X$.
\end{cor}
\begin{proof}
Take $T=X$, $Y=\mathrm{Alb}_X$. Then (\ref{equ: local for ...}) holds if and only if $X$ is on the Severi line, namely if and only if the canonical morphism $X\to X_\mathrm{can}$ factors through $T_0$. But since all the rational curves of $X$ are contracted in $T_0$, the latter must be the canonical model itself. By construction, $T_0$ is a flat double cover of $\mathrm{Alb}_X$.
\end{proof}
\begin{rmk}\label{Rem: not easy to do reductions}
In our particular case, we have $Y=\mathrm{Alb}_X$, hence $K_Y=0$ and the term $\vartheta_0^*K_Y\cdot K_{T_0/Y}=0$. In general, when $\vartheta_0$ is inseparable, $K_{T_0/Y}$ may or may not be effective and $\vartheta_0^*K_Y\cdot K_{T_0/Y}$ can be negative. As a consequence, one cannot run the reduction process of \cite{L-Z17} starting from formula (\ref{equ: 90}). This prevents us from imitating the reduction argument of \cite{L-Z17} in characteristic $2$.
\end{rmk}
\subsection{Inseparable double covers}
In this subsection, we assume $\vartheta$ (hence $\vartheta_0$) is inseparable and aim to prove that the canonical resolution process stops in finitely many steps. For such a $\vartheta_0: T_0\to Y$, we have another purely inseparable double cover $\pi_0: Y\to T_0^{(-1)}=T_0\times_{\mathbf{k}, F_\mathbf{k}} \mathbf{k}$ fitting into the commutative diagram below.
$$
\xymatrix{
T_0 \ar[dr]_{F_{T_0/\mathbf{k}}} \ar[rr]^{\vartheta_0}&& Y\ar[ld]^{\pi_0}\\
&T_0^{(-1)} & \\
}
$$
Since $T_0^{(-1)}$ is isomorphic to $T_0$ as an abstract scheme and has the same numerical invariants, it suffices to resolve the singularities of $T_0^{(-1)}$ and work out its numerical invariants.
Recall that such a purely inseparable morphism $\pi_0$ (or equivalently $\vartheta_0$) of degree $2$ is characterized by a $1$-foliation of rank $1$ on $Y$ (cf. Subsection~\ref{Subsec: foliation}). We denote by ${\mathcal F}_0$ the $1$-foliation associated to $\pi_0$.
\begin{prop}[\protect{\cite[\S~3]{Ekedahl}}]\label{Prop: formula on 1-foliation}
For $\vartheta_0: T_0\to Y$, $\mathcal{L}$ and ${\mathcal F}_0$ as above, we have the following relation:
$$2c_1(\mathcal{L})=K_Y+c_1({\mathcal F}_0).$$
Here $\mathcal{L}$ is defined as in (\ref{equ: exact of structures}).
\end{prop}
Denote by $Z_0$ the singular scheme of ${\mathcal F}_0$ (cf. Subsection~\ref{Subsec: foliation}). Now consider a blowing-up $\rho: Y'\to Y$ centred at a point $P\in Z_0$, denote again by $T_0'$ the normalisation of $Y'$ in $K(T)$, and define ${\mathcal F}_0',\mathcal{L}', Z_0'$ similarly as for $\vartheta_0: T_0\to Y$. We again write $\mathcal{L}'=\rho^*\mathcal{L}(rE)$ with $r\ge 1$ as in Lemma~\ref{lem: r>1}. Then by (\ref{equ: for Z}):
\begin{equation}
\label{equ: change of Z}\deg Z_0'=\deg Z_0-(4r^2-2r-1)<\deg Z_0.
\end{equation}
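This follows from (\ref{equ: for Z}) together with Proposition~\ref{Prop: formula on 1-foliation}: from $2c_1(\mathcal{L}')=K_{Y'}+c_1({\mathcal F}_0')$ and $\mathcal{L}'=\rho^*\mathcal{L}(rE)$ one gets $c_1({\mathcal F}_0')=\rho^*c_1({\mathcal F}_0)+(2r-1)E$, whence
\begin{align*}
\deg Z_0'&=\big(c_2(Y)+1\big)+\big(\rho^*c_1({\mathcal F}_0)+(2r-1)E\big)\cdot\big(\rho^*(c_1({\mathcal F}_0)+K_{Y})+2rE\big)\\
&=\deg Z_0+1-2r(2r-1)=\deg Z_0-(4r^2-2r-1),
\end{align*}
and $4r^2-2r-1\ge 1>0$ as $r\ge 1$.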
Since $Z_0$ controls the singularities of $T_0^{(-1)}$ (or equivalently of $T_0$), this gives another proof of \cite[Prop.~2.6]{Hirokado99}.
\begin{prop}\label{prop: normalized in p=2}
We can resolve the singularities of $T_0$ (equivalently, those of a $1$-foliation) by a finite sequence of normalized blowing-ups. In other words, the canonical resolution stops in finitely many steps.
\end{prop}
\subsection{Separable double covers}
Assume $\vartheta: T\to Y$ is separable; we are going to show that the process of canonical resolution stops in finitely many steps. For our purpose, we assume $\kappa(T)\ge 0$ and $\kappa(Y)\ge 0$, so that there is an involution $\sigma: T\to T$ associated to the double cover $\vartheta: T\to Y$.
\begin{lem}\label{lem: 34}
If there is a regular model $\pi: T'\to T$ lifting the $\sigma$-action and such that $T'/\sigma$ is regular, then the canonical resolution process stops in finitely many steps.
\end{lem}
\begin{proof}
Since $Y$ is minimal, the surface $T'/\sigma$ is obtained from $Y$ by a sequence of blowing-ups.
$$
\xymatrix{
T'=T_n\ar[d]^{\vartheta_n} \ar[r]^{\rho'_n}& \cdots \ar[r]^{\rho'_2} \ar[d] & T_1\ar[r]^{\rho_1' \ \ \ }\ar[d]^{\vartheta_1} & T_0 \ar[d]^{\vartheta_0}\\
T'/\sigma=Y_n\ar[r]^{\rho_n} & \cdots \ar[r]^{\rho_2} & Y_1\ar[r]^{\rho_1 \ \ \ } & Y_0=Y
}
$$
We can clearly remove the redundant intermediate blowing-ups whose centres lie below smooth points of $T_i$. Then the sequence becomes the canonical resolution, which thus stops in at most $n$ steps.
\end{proof}
We then choose an arbitrary Lefschetz pencil $\iota: Y\dashrightarrow \mathbb{P}_\sk^1$ and the associated $\tau: T\dashrightarrow \mathbb{P}_\sk^1$. Denote by $T_1\to \mathbb{P}_\sk^1$ the relatively minimal model of $T$. Then clearly $\sigma$ lifts to $T_1$. Denote by $Y_1=T_1/\sigma\to \mathbb{P}_\sk^1$ the associated model. The generic fibre of $Y_1\to \mathbb{P}_\sk^1$ is regular since $Y_1$ is normal. Now \cite[Thm.~7.3]{L-L99} applies.
\begin{thm}[\protect{\cite[Thm.~7.3]{L-L99}}]\label{thm: ll}
Let $C/\mathbf{k}$ be a curve and $K:=K(C)$ be its function field. Let $f: X_K\to Y_K$ be an arbitrary Galois cover of proper regular curves over $K$ with Galois group $G$. Suppose $\mathcal{X} \to C$ is a proper regular model of $X_K$ such that the $G$-action spreads onto it. If for every closed point $x\in \mathcal{X}$, its inertia group $I_x$ has order at most $3$, then there is a suitable choice of proper regular models $\widetilde{\mathcal{X}}\to C$ and $\widetilde{\mathcal{Y}}\to C$ such that $f$ extends to a finite Galois cover $\widetilde{f}:\widetilde{\mathcal{X}} \to \widetilde{\mathcal{Y}}$. In other words, the $G$-action spreads to $\widetilde{\mathcal{X}}$ and its quotient $\widetilde{\mathcal{X}}/G$ is regular.
\end{thm}
In this theorem, we let $C=\mathbb{P}_\sk^1$ with function field $K=\mathbf{k}(t)$, and take $\mathcal{X}=T_1\to \mathbb{P}_\sk^1$ and $Y_K:=Y_1\times_{\mathbb{P}_\sk^1} K$. In our case, $|G|=2$ and the inertia group assumption is satisfied automatically. The theorem then gives a model $T'=\widetilde{\mathcal{X}}$ lifting the $\sigma$-action such that $T'/\sigma=\widetilde{\mathcal{Y}}$ is regular. So Lemma~\ref{lem: 34} gives:
\begin{prop}\label{prop: sep}
Suppose $\kappa(T)\ge 0$ and $\kappa(Y)\ge 0$; then the canonical resolution process stops in finitely many steps.
\end{prop}
\begin{rmk}
Whether or not the canonical resolution process stops in finitely many steps is a local property, so the Kodaira dimension and minimality assumptions are actually redundant.
\end{rmk}
\section{Examples of surfaces on the Severi line in characteristic $2$}\label{sec.7}
We give two examples of surfaces on the Severi line in characteristic $2$: one with an inseparable Albanese morphism and one with a separable one.
Denote by $$E: y^2+y=x^3+x$$ the unique supersingular elliptic curve in characteristic $2$, and let $A:=E_1\times_\mathbf{k} E_2$, where $E_1,E_2$ are two copies of $E$. We use the subscript $i=1,2$ to indicate the associated points, functions or vector fields on $E_i$. We write $\Gamma_1:=\infty \times E_2$ and $\Gamma_2:=E_1\times \infty$ for the two divisors at infinity. Also we denote by $P=(0,0)$ and $Q=(0,1)$ the zeroes of $x$.
\subsection{Inseparable Albanese morphism}
We denote by $\partial:= \dfrac{\partial}{\partial x}$ the global vector field on $E$ such that
$$
\left\{\begin{array}{cl}
\partial x&=1;\\
\partial y&=x^2+1
\end{array}\right.
$$
Then $\widetilde{\partial}:=x_1\partial_1+x_2\partial_2$ is a $2$-closed derivation. It generates a $1$-foliation ${\mathcal F}:=K(A) \cdot \widetilde{\partial} \cap \mathcal{T}_{A/\mathbf{k}}$.
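Let us verify the $2$-closedness. On $E$ we have $\partial^2=0$, since $\partial^2x=0$ and $\partial^2 y=\partial(x^2+1)=2x=0$; hence
$$\widetilde{\partial}^2=x_1(\partial_1x_1)\partial_1+x_1^2\partial_1^2+x_2(\partial_2x_2)\partial_2+x_2^2\partial_2^2+2x_1x_2\partial_1\partial_2=x_1\partial_1+x_2\partial_2=\widetilde{\partial},$$
where the mixed term vanishes because $\partial_1,\partial_2$ commute and $2=0$ in $\mathbf{k}$. So $\widetilde{\partial}^2=\widetilde{\partial}\in{\mathcal F}$.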
\begin{prop}
\begin{enumerate}
\item The $1$-foliation ${\mathcal F}$ has $5$ singularities, all of which have multiplicity less than $5$;
\item The surface $X_0^{(-1)}:=A/{\mathcal F}$ has $5$ A-D-E singularities and the canonical bundle is ample;
\item Let $X$ be the minimal model of $X_0$, then $X$ is of general type and on the Severi line.
\end{enumerate}
\end{prop}
\begin{proof}
(1). By construction, it is not difficult to see that the singular scheme $Z$ of ${\mathcal F}$ consists of $5$ points: $\infty \times \infty$, $P_1\times P_2, P_1\times Q_2, Q_1\times P_2$ and $Q_1\times Q_2$, with multiplicities $4,1,1,1,1$ respectively.
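As a consistency check with (\ref{equ: for Z}): the function $x_i$ has a double pole along $\Gamma_i$, so $c_1({\mathcal F})=-2\Gamma_1-2\Gamma_2$, and
$$\deg Z=c_2(A)+c_1({\mathcal F})\cdot(c_1({\mathcal F})+K_A)=0+(2\Gamma_1+2\Gamma_2)^2=8\Gamma_1\cdot\Gamma_2=8=4+1+1+1+1.$$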
(2). Following (\ref{equ: change of Z}), we must have $r=1$ in each blowing-up (otherwise $\deg Z_0$ would drop by at least $4\cdot 2^2-2\cdot 2-1=11$, while each singularity has multiplicity at most $4$). So $X_0$ has rational double points only. Note that the canonical bundle of $X_0$ is the pull-back of $-\dfrac{c_1({\mathcal F})}{2}\equiv \Gamma_1+\Gamma_2$, which is ample; therefore $X_0$ is the canonical model of a surface of general type.
(3). It follows from the formula in the previous section that $K_X^2=4\chi({\mathcal O}_X)$.
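Concretely, applying the formulas of the previous section to the double cover $X_0\to A$ (with $Y=A$, $T_0=X_0$), Proposition~\ref{Prop: formula on 1-foliation} gives $c_1(\mathcal{L})=\dfrac{K_A+c_1({\mathcal F})}{2}=-(\Gamma_1+\Gamma_2)$, and then
$$K_{X_0}^2=2\big(K_A-c_1(\mathcal{L})\big)^2=2(\Gamma_1+\Gamma_2)^2=4,\qquad \chi({\mathcal O}_{X_0})=2\chi({\mathcal O}_A)+\dfrac{c_1(\mathcal{L})^2}{2}=1,$$
so indeed $K_X^2=4=4\chi({\mathcal O}_X)$, as $X_0$ has only A-D-E singularities.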
\end{proof}
\subsection{Separable Albanese morphism with wild branch divisor}
We shall construct an example of a separable admissible flat double cover (cf. \cite[Chap.~0]{C-D}) $\mu_0: X_0\to A$ such that $X_0$ has at worst A-D-E singularities while the branch divisor on $A$ has singularities of arbitrarily large multiplicity. Note that in other characteristics, the branch locus can have singularities of multiplicity at most $3$ in order for the flat double cover to have at worst A-D-E singularities.
First choose an invertible coherent sheaf $\mathcal{L}$ on $A$. We define an ${\mathcal O}_A$-algebra structure on ${\mathcal A}:={\mathcal O}_A\oplus \mathcal{L}^{-1}$ by $$\mathcal{L}^{-2}\stackrel{s_2,s_1}{\longrightarrow} {\mathcal O}_A \oplus \mathcal{L}^{-1}={\mathcal A}.$$
Here $s_2, s_1$ are non-zero sections of $\mathcal{L}^2$ and $\mathcal{L}$ respectively.
In other words, let $e_i$ be a local generator of $\mathcal{L}$ on an open subset $U_i$. Then
$${\mathcal A}|_{U_i}={\mathcal O}_{U_i}[z_i]/(z_i^2+a_iz_i+b_i), \quad a_i, b_i\in {\mathcal O}_{U_i}.$$
Here $s_1=a_ie_i, s_2=b_ie_i^2$ on $U_i$, and on $U_i\cap U_j$ with $e_i=\alpha_{ij}e_j$ we set $z_i=\alpha_{ij}^{-1}z_j$.
Denote by $\mu_0: X_0:=\mathrm{Spec}({\mathcal A})\to A$ the associated flat double cover and by $D_i:=\mathrm{div}(s_i) \in |\mathcal{L}^i|$ ($i=1,2$) the associated divisors.
\begin{prop}\label{Prop: 54}
\begin{enumerate}
\item The branch divisor of $\mu_0$ is $D_1$.
\item If $D_2$ passes through every point in the singular locus of $D_1$ smoothly, then $X_0$ has at worst A-D-E singularities.
\end{enumerate}
\end{prop}
\begin{proof}
(1). By the local description above, on $U_i$ the relative K\"ahler differential module $\Omega_{X_0/A}|_{U_i}$ is isomorphic to ${\mathcal A}/a_i{\mathcal A}$. As a result, the ramification divisor is locally defined by $a_i$; in other words, the branch divisor is defined by $a_i$ on $U_i$.
(2). Let $P\in D_1$. If $D_1$ is smooth at $P$, then $X_0$ has at worst an A-D-E singularity above $P$ by \cite[Remark~0.2.2]{C-D}. Now suppose $P$ is a singular point of $D_1$; then by our assumption $D_2$ passes through $P$ and is smooth at $P$. Namely, in the local function $z_i^2+a_iz_i+b_i$ defining $X_0$ near $P$ we have $a_i\in\mathfrak{m}_P$ and $b_i\in \mathfrak{m}_P\backslash \mathfrak{m}_P^2$. This clearly implies $X_0$ is regular above $P$.
\end{proof}
Conversely, given any effective divisors $D_1, D_2$ such that $D_2\in |2D_1|$, one can construct an admissible example as above.
As a result, the singularities of the branch divisor $D_1$ do not necessarily lead to singularities of $X_0$. In this way, we can construct examples where the branch divisor is very singular at some points while $X_0$ has at worst A-D-E singularities.
\begin{ex}
We take $D_1$ to be the divisor defined by the equation $$x_1^{2n}+x_2^{2n+1}=0.$$ Then $D_1\in|2n\Gamma_1+(2n+1)\Gamma_2|$ and the singular locus of $D_1$ consists again of the five points $\infty \times \infty$, $P_1\times P_2, P_1\times Q_2, Q_1\times P_2$ and $Q_1\times Q_2$. Now we can construct $D_2\in |2D_1|=|4n\Gamma_1+(4n+2)\Gamma_2|$ passing through the five points smoothly. We take $D_2=E_1\times P_2+E_1\times Q_2+\Gamma_2+D_2'$, where $D_2'\in |4n\Gamma_1+(4n-1)\Gamma_2|$ is a general member which does not pass through the five points above. Here note that the linear system $|4n\Gamma_1+(4n-1)\Gamma_2|$ is very ample for all $n\in \mathbb{N}_+$.
We denote by $X_0$ the separable flat double cover of $A$ defined by $D_1, D_2$. Then $X_0$ is the canonical model of a minimal surface of general type $X$ on the Severi line by Corollary~\ref{Cor: main 1} and Proposition~\ref{Prop: 54}. In this case the branch divisor of $X_0\to A$ is wild (of multiplicity $2n$) at the four points $P_1\times P_2, P_1\times Q_2, Q_1\times P_2$ and $Q_1\times Q_2$.
\end{ex}
\section*{\bf Acknowledgement}
The first author would like to thank D. Lorenzini for drawing his attention to Theorem~\ref{thm: ll} and T. Zhang for helpful communications.
| {
"timestamp": "2019-09-19T02:08:56",
"yymm": "1908",
"arxiv_id": "1908.01933",
"language": "en",
"url": "https://arxiv.org/abs/1908.01933",
"abstract": "Let $X$ be a minimal surface of general type over an algebraically closed field $\\mathbf{k}$ of $\\mathrm{char}.(\\mathbf{k})=p\\ge 0$. If the Albanese morphism $a_X:X\\to \\mathrm{Alb}_X$ is generically finite onto its image, we formulate a constant $c(X,L)\\ge 0$ for a very ample line bundle $L$ on $\\mathrm{Alb}_X$ such that $c(X,L)=0$ if and only if $\\dim \\mathrm{Alb}_X=2$ and $a_X: X\\to \\mathrm{Alb}_X$ is a double cover. A refined Severi inequality $$K^2_X\\ge (4+{\\rm min}\\{\\,c(X,L),\\,\\frac{1}{3}\\,\\})\\chi(\\mathcal{O}_X)$$ is proved. Then we prove that $K^2_X=4\\chi(\\mathcal{O}_X)$ if and only if the canonical model of $X$ is a flat double cover of an Abelian surface.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Surfaces on the Severi line in positive characteristics",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717440735735,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7089449231601197
} |
https://arxiv.org/abs/1810.03133 | On the inverse problem of Moebius geometry on the circle | Any (boundary continuous) hyperbolic space induces on the boundary at infinity a Moebius structure which reflects most essential asymptotic properties of the space. In this paper, we initiate the study of the inverse problem: describe Moebius structures which are induced by hyperbolic spaces at least in the simplest case of the circle. For a large class of Moebius structures on the circle, we define a canonical "filling" each of them, which serves as a natural candidate for a solution of the inverse problem. This is a 3-dimensional (pseudo)metric space Harm, which consists of harmonic 4-tuples of the respective Moebius structure with a distance determined by zig-zag paths. Our main result is the proof that every line in Harm is a geodesic, i.e., shortest in the zig-zag distance on each segment. This gives a good starting point to show that Harm is Gromov hyperbolic with the prescribed Moebius structure at infinity. | \section{Introduction} A M\"obius structure on a set
$X$
is a class of (semi)metrics whose cross-ratios take one and the same value on every given
4-tuple of points in
$X$.
M\"obius structures naturally arise as geometric structures on the boundary
at infinity of hyperbolic spaces. The classical example is the extended
Euclidean space
$\widehat\mathbb{R}^n=\mathbb{R}^n\cup\{\infty\}$,
which gives rise to the canonical M\"obius structure
$M_0$
over the sphere
$S^n=\widehat\mathbb{R}^n$,
whose group of M\"obius transformations is isomorphic to the isometry
group of the hyperbolic space
$\operatorname{H}^{n+1}$.
The inverse problem of M\"obius geometry asks to describe M\"obius structures which are
induced by hyperbolic spaces. The papers \cite{BS14}, \cite{BS15} can be regarded as
solutions of this problem in the case of rank 1 symmetric spaces. In the general case, it seems
that very little is known, cp.~\cite{BeS17}, \cite{BFI18}. Thus we consider the simplest nontrivial case, when
$X=S^1$
is the circle.
The class of all M\"obius structures on the circle is very large: any extended (semi)metric
on
$\widehat\mathbb{R}$
generates some M\"obius structure on
$S^1$.
Note that various hyperbolic cone constructions (see \cite{BoS}, \cite{BS07}) give a hyperbolic metric space with
prescribed metric at infinity. However, none of them is equivariant with respect to M\"obius
transformations of the metric. Thus one can consider the inverse problem as the existence problem
of an equivariant hyperbolic cone over a given metric.
Asking for more, one has to pay an additional price: we introduce a set of axioms
which allow us to define a reasonable candidate for a solution of the inverse problem.
This is the set
$\operatorname{Harm}$
of harmonic 4-tuples with respect to a given M\"obius structure
$M$.
It has a natural structure
of a 3-dimensional manifold, which in the case of the canonical structure
$M_0$
is homeomorphic
to the projectivized tangent bundle of
$\operatorname{H}^2$.
Note that
$\operatorname{Harm}$
is automatically invariant under M\"obius transformations of
$M$.
It follows from our axioms that any pair
$(x,y)$
of different points in
$X$
uniquely determines a line
$h=h_{(x,y)}$
in
$\operatorname{Harm}$,
which consists of all pairs of different points
$(z,u)$
such that the 4-tuple
$q=((x,y),(z,u))$
is harmonic. It turns out that
$h$
is homeomorphic to
$\mathbb{R}$
and, moreover,
$h$
is isometric to
$\mathbb{R}$
with respect to the naturally defined distance
$$|qq'|=\left|\ln\frac{d(x,z')d(y,z)}{d(x,z)d(y,z')}\right|,$$
$q'=((x,y),(z',u'))$,
where
$d$
is any metric from
$M$
($|qq'|$
is independent of the choice of
$d$).
The pairs
$(x,y)$, $(z,u)$
are called {\em axes} of
$q\in\operatorname{Harm}$.
Since every harmonic
$q$
has two axes, moving along a line in
$\operatorname{Harm}$,
it is possible to change the axis at any moment. This leads to the notion
of special curves in
$\operatorname{Harm}$,
which are called {\em zz-paths}. Every (finite) zz-path
$\sigma\subset\operatorname{Harm}$
consists of a finite number of consecutive sides, every side is a segment of a line,
and adjacent sides meet each other at a common harmonic 4-tuple
$q$
as the different
axes of
$q$.
The point of this construction is that while in general two different
$q$, $q'\in\operatorname{Harm}$
cannot be connected by a segment of a line, they are always connected by
a finite zz-path.
The length
$|\sigma|$
of a zz-path
$\sigma$
is the sum of the lengths of its sides.
The
$\delta$-distance
on
$\operatorname{Harm}$
is defined by
$$\delta(q,q')=\inf_\sigma|\sigma|,$$
where the infimum is taken over all zz-paths between
$q$
and
$q'$.
The
$\delta$-distance
is symmetric, nonnegative and satisfies the triangle inequality.
However, it is not clear whether
$\delta$
is positive between distinct points; that is, a priori
$\delta$
is only a pseudometric. Nevertheless, our main result says that lines are geodesics
with respect to the
$\delta$-distance.
\begin{thm}\label{thm:main} Every line
$h\subset\operatorname{Harm}$
is a geodesic with respect to the
$\delta$-distance,
i.e.
$\delta(q,q')=|qq'|$
for any
$q$, $q'\in h$.
\end{thm}
This is not at all obvious or trivial. The precise statement of Theorem~\ref{thm:main}
requires listing axioms for M\"obius structures under which the theorem is true,
see sect.~\ref{sect:distance_segments}. The key property we require of a M\"obius structure
for Theorem~\ref{thm:main} to hold is the {\em Increment} axiom,
see sect.~\ref{subsect:increment_axiom}. To prove Theorem~\ref{thm:main}, for every
line
$h\subset\operatorname{Harm}$
we define the so-called {\em midpoint projection} of
$\operatorname{Harm}$
to
$h$.
The increment axiom allows us to show that the midpoint projection decreases distances along
zz-paths, which leads to Theorem~\ref{thm:main}.
{\it Acknowledgment.} The author is very much grateful to Viktor Schroeder for numerous
discussions on the topic of the paper, which led, in particular, to the notion of
a monotone M\"obius structure, and for the proof of Lemma~\ref{lem:unique_common_perpendicular}.
\section{M\"obius structures}
\label{sect:moebius_structures}
\subsection{Basic notions}
\label{subsect:basics}
Let
$X$
be a set. A 4-tuple
$q=(x,y,z,u)\in X^4$
is said to be {\em admissible} if no entry occurs three or
four times in
$q$.
A 4-tuple
$q$
is {\em nondegenerate}, if all its entries are pairwise
distinct. Let
$\mathcal{P}_4=\mathcal{P}_4(X)$
be the set of all ordered admissible 4-tuples of
$X$, $\operatorname{reg}\mathcal{P}_4\subset\mathcal{P}_4$
the set of nondegenerate 4-tuples.
A function
$d:X^2\to\widehat\mathbb{R}=\mathbb{R}\cup\{\infty\}$
is called a {\em semi-metric}, if it is symmetric,
$d(x,y)=d(y,x)$
for each
$x$, $y\in X$,
positive outside of the diagonal, vanishes on the diagonal
and there is at most one infinitely remote point
$\omega\in X$
for
$d$,
i.e. such that
$d(x,\omega)=\infty$
for some
$x\in X\setminus\{\omega\}$.
Moreover, we require that if
$\omega\in X$
is such a point, then
$d(x,\omega)=\infty$
for all
$x\in X$, $x\neq\omega$.
A metric is a semi-metric that satisfies the triangle inequality.
A {\em M\"obius structure}
$M$
on
$X$
is a class of M\"obius equivalent semi-metrics on
$X$,
where two semi-metrics are equivalent if and only if they have
the same cross-ratios on every
$q\in\operatorname{reg}\mathcal{P}_4$.
Given
$\omega\in X$,
there is a semi-metric
$d_\omega\in M$
with infinitely remote point
$\omega$.
It can be obtained from any semi-metric
$d\in M$
for which
$\omega$
is not infinitely remote by a {\em metric inversion},
$$d_\omega(x,y)=\frac{d(x,y)}{d(x,\omega)d(y,\omega)}.$$
Such a semi-metric is unique up to a homothety, see \cite{FS},
and we use notation
$|xy|_\omega=d_\omega(x,y)$
for the distance between
$x$, $y\in X$
in that semi-metric. We also use notation
$X_\omega=X\setminus\{\omega\}$.
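As a quick illustration (our own sanity check, not needed later): on the extended real line with $d(x,y)=|x-y|$ and infinitely remote point $\infty$, the metric inversion at $\omega=0$ yields
$$d_0(x,y)=\frac{|x-y|}{|x|\,|y|},\qquad d_0(x,\infty)=\lim_{y\to\infty}\frac{|x-y|}{|x|\,|y|}=\frac{1}{|x|},$$
so after the inversion $0$ is the infinitely remote point, while $\infty$ has become a finite one.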
There is a distinguished class of M\"obius structures called {\em ptolemaic}.
The property to be ptolemaic is characterized by the inequality
\begin{equation}\label{eq:ptolemaic}
d(x,y)d(z,u)\le d(x,z)d(y,u)+d(x,u)d(y,z)
\end{equation}
for every semi-metric
$d$
of the M\"obius structure and every 4-tuple
$q=(x,y,z,u)\in X^4$.
The property to be ptolemaic is invariant under any metric inversion, and this
invariance can serve as an equivalent definition of ptolemaic M\"obius structures.
It follows from (\ref{eq:ptolemaic}) that any semi-metric of a ptolemaic M\"obius
structure with infinitely remote point
$\omega\in X$
is a metric on
$X_\omega$,
i.e., it satisfies the triangle inequality.
Every M\"obius structure
$M$
on
$X$
determines the
$M$-{\em topology}
whose subbase is given by all open balls centered at finite points
of all semi-metrics from
$M$
having infinitely remote points.
\begin{exa}\label{exa:canonical_moebius_circle} Our basic example is the
{\em canonical} M\"obius structure
$M_0$
on the circle
$X=S^1$.
We think of
$S^1$
as the unit circle in the plane,
$S^1=\{(x,y)\in\mathbb{R}^2 : x^2+y^2=1\}$.
For
$\omega=(0,1)\in X$
the stereographic projection
$X_\omega\to\mathbb{R}$
identifies
$X_\omega$
with real numbers
$\mathbb{R}$.
We let
$d_\omega$
be the standard metric on
$\mathbb{R}$,
that is,
$d_\omega(x,y)=|x-y|$
for any
$x,y\in\mathbb{R}$.
This generates a M\"obius structure on
$X$
which is called {\em canonical}. The basic feature of the canonical M\"obius
structure on
$X=S^1$
is that for any 4-tuple
$(\sigma,x,y,z)\subset X$
with the cyclic order
$\sigma xyz$
we have
$d_\sigma(x,y)+d_\sigma(y,z)=d_\sigma(x,z)$.
In particular, the canonical M\"obius structure is ptolemaic.
\end{exa}
\subsection{An alternative description}
\label{subsect:alternative}
The following is an alternative description of a M\"obius structure which
is convenient in many cases. For any semi-metric
$d$
on
$X$
we have three cross-ratios
$$q\mapsto \operatorname{cr}_1(q)=\frac{|x_1x_3||x_2x_4|}{|x_1x_4||x_2x_3|};
\operatorname{cr}_2(q)=\frac{|x_1x_4||x_2x_3|}{|x_1x_2||x_3x_4|};
\operatorname{cr}_3(q)=\frac{|x_1x_2||x_3x_4|}{|x_2x_4||x_1x_3|}$$
for
$q=(x_1,x_2,x_3,x_4)\in\operatorname{reg}\mathcal{P}_4$,
whose product equals 1, where
$|x_ix_j|=d(x_i,x_j)$.
We associate with
$d$
a map
$M_d:\operatorname{reg}\mathcal{P}_4\to L_4$
defined by
\begin{equation}\label{eq:moeb_map}
M_d(q)=(\ln\operatorname{cr}_1(q),\ln\operatorname{cr}_2(q),\ln\operatorname{cr}_3(q)),
\end{equation}
where
$L_4\subset\mathbb{R}^3$
is the 2-plane given by the equation
$a+b+c=0$.
Two semi-metrics
$d$, $d'$
on
$X$
are M\"obius equivalent if and only
$M_d=M_{d'}$.
Thus a M\"obius structure on
$X$
is completely determined by a map
$M=M_d$
for any semi-metric
$d$
of the M\"obius structure, and we often identify a M\"obius structure
with the respective map
$M$.
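The map $M_d$ is straightforward to evaluate numerically. The following Python sketch (ours, purely for illustration; the function names are ad hoc) computes the coordinates (\ref{eq:moeb_map}) from a semi-metric given as a function of two points, and tests the harmonicity condition introduced in sect.~\ref{subsect:harmonic_4_tuples} below:
\begin{verbatim}
import math

def moebius_map(d, q):
    # Cross-ratio coordinates (ln cr_1, ln cr_2, ln cr_3) of a
    # nondegenerate 4-tuple q = (x1, x2, x3, x4); they sum to 0
    # because cr_1 * cr_2 * cr_3 = 1.
    x1, x2, x3, x4 = q
    cr1 = d(x1, x3) * d(x2, x4) / (d(x1, x4) * d(x2, x3))
    cr2 = d(x1, x4) * d(x2, x3) / (d(x1, x2) * d(x3, x4))
    cr3 = d(x1, x2) * d(x3, x4) / (d(x2, x4) * d(x1, x3))
    return (math.log(cr1), math.log(cr2), math.log(cr3))

def is_harmonic(d, q, tol=1e-9):
    # q is harmonic iff some coordinate of M_d(q) vanishes.
    return any(abs(c) < tol for c in moebius_map(d, q))

d = lambda x, y: abs(x - y)   # semi-metric with omega = infinity
print(moebius_map(d, (0.0, 3.0, -1.0, 0.6)))  # first coordinate is 0
print(is_harmonic(d, (0.0, 3.0, -1.0, 0.6)))  # True
\end{verbatim}
Here the 4-tuple $(0,3,-1,0.6)$ is harmonic since $|0-(-1)|\cdot|3-0.6|=2.4=|0-0.6|\cdot|3-(-1)|$.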
Let
$S_n$
be the symmetric group on
$n$
elements. The group
$S_4$
acts on
$\operatorname{reg}\mathcal{P}_4$
by entries permutations of any
$q\in\operatorname{reg}\mathcal{P}_4$.
The group
$S_3$
acts on
$L_4$
by signed permutations of coordinates, where a permutation
$\sigma:L_4\to L_4$
has the sign
``$-1$''
if and only if
$\sigma$
is odd.
The {\em cross-ratio} homomorphism
$\phi:S_4\to S_3$
can be described as follows: a permutation of the ordered vertices of a tetrahedron
$(1,2,3,4)$
gives rise to a permutation of pairs of opposite edges
$((12)(34),(13)(24),(14)(23))$.
We denote by
$\operatorname{sign}:S_4\to\{\pm 1\}$
the homomorphism that associates to every odd permutation the sign
``$-1$''.
One easily checks that any M\"obius structure
$M:\operatorname{reg}\mathcal{P}_4\to L_4$
is equivariant with respect to the signed cross-ratio homomorphism,
\begin{equation}\label{eq:signed_cross-ratio_homomorphism}
M(\pi(q))=\operatorname{sign}(\pi)\phi(\pi)M(q)
\end{equation}
for every
$q\in\operatorname{reg}\mathcal{P}_4$, $\pi\in S_4$,
where
$\phi:S_4\to S_3$
is the cross-ratio homomorphism.
\subsection{Monotone M\"obius structures}
In what follows we assume that M\"obius structures we consider are ptolemaic.
We say that a M\"obius structure
$M$
on
$X=S^1$
is {\em monotone}, if it satisfies the following axioms
\begin{itemize}
\item [(T)] Topology: $M$-topology
on
$X$
is that of
$S^1$;
\item[(M)] Monotonicity: given a 4-tuple
$q=(x,y,z,u)\in X^4$
such that the pairs
$(x,y)$, $(z,u)$
separate each other,
we have
$$|xy|\cdot|zu|>\max\{|xz|\cdot|yu|,|xu|\cdot|yz|\}$$
for some and hence any semi-metric from
$M$.
\end{itemize}
A choice of
$\omega\in X$
uniquely determines the interval
$xy\subset X_\omega$
for any distinct
$x$, $y\in X$
different from
$\omega$
as the arc in
$X$
with the end points
$x$, $y$
that does not contain
$\omega$.
As a useful reformulation of Axiom~(M) we have
\begin{cor}\label{cor:interval_monotone} Assume for a nondegenerate
4-tuple
$q=(x,y,z,u)\in\operatorname{reg}\mathcal{P}_4$
the interval
$xz\subset X_u$
is contained in
$xy$, $xz\subset xy\subset X_u$.
Then
$|xz|_u<|xy|_u$.
\end{cor}
\begin{proof} By the assumption, the pairs
$(x,y)$, $(z,u)$
separate each other. Hence, by Axiom~(M) we have
$|xz||yu|<|xy||zu|$
for any semi-metric from
$M$.
In particular,
$|xz|_u<|xy|_u$.
\end{proof}
\begin{lem}\label{lem:nozero_value} Assume a M\"obius structure
$M$
on
$X=S^1$
is monotone. Then
$M(q)\neq(0,0,0)$
for every
$q\in\operatorname{reg}\mathcal{P}_4$.
\end{lem}
\begin{proof} Assume
$M(q)=(0,0,0)$
for
$q=(x,y,z,u)\in\operatorname{reg}\mathcal{P}_4$.
Then in a metric from
$M$
with infinitely remote point
$u$
we have
$|xy|_u=|xz|_u=|yz|_u$.
Whatever is the order of
$x,y,z$
on
$X_u=X\setminus\{u\}$,
these equalities contradict the monotonicity Axiom~(M).
\end{proof}
\subsection{Increment axiom}
\label{subsect:increment_axiom}
Increment axiom for monotone M\"obius structures
has been introduced in \cite{Bu17}, where
it plays an important role since it implies the time
inequality. In this paper, it also plays a key role in solving the
inverse problem for M\"obius structures on the circle. We briefly
recall this axiom and some properties of monotone M\"obius structures
satisfying it.
We use notation
$\operatorname{reg}\mathcal{P}_n$
for the set of ordered nondegenerate
$n$-tuples
of points in
$X=S^1$, $n\in\mathbb{N}$.
For
$q\in\operatorname{reg}\mathcal{P}_n$
and a proper subset
$I\subset\{1,\dots,n\}$
we denote by
$q_I\in\operatorname{reg}\mathcal{P}_k$, $k=n-|I|$,
the
$k$-tuple
obtained from
$q$
(with the induced order) by crossing out all entries which correspond to elements of
$I$.
(I) Increment Axiom: for any
$q\in\operatorname{reg}\mathcal{P}_7$
with cyclic order
$\operatorname{co}(q)=1234567$
such that
$q_{247}$
and
$q_{157}$
are harmonic, we have
$$\operatorname{cr}_1(q_{345})>\operatorname{cr}_1(q_{123}).$$
For the definition of harmonic 4-tuples see sect.~\ref{subsect:harmonic_4_tuples}.
It is proved in \cite[Proposition~7.10]{Bu17} that the canonical M\"obius
structure
$M_0$
on the circle
$X=S^1$
satisfies Increment Axiom. Moreover, the class
$\mathcal{I}$
of monotone M\"obius structures on the circle which satisfy Axiom~(I)
contains a neighborhood of
$M_0$
which is open in a fine topology,
see \cite[Proposition~7.14]{Bu17}.
\section{Filling}
\label{sect:filling}
Here we define a space of harmonic pairs which will serve as a filling
of a monotone M\"obius structure on the circle.
\subsection{Harmonic 4-tuples}
\label{subsect:harmonic_4_tuples}
Let
$M$
be a monotone M\"obius structure on the circle
$X=S^1$.
A 4-tuple
$q\in\operatorname{reg}\mathcal{P}_4$
is said to be {\em harmonic} if
$M(q)\in L_4$
has a zero coordinate. It follows from Lemma~\ref{lem:nozero_value} that for
$q$
harmonic,
$M(q)$
has a unique zero coordinate. Therefore, we have three types of harmonic
4-tuples
$q=(x,y,z,u)\in\operatorname{reg}\mathcal{P}_4$,
determined by conditions
\begin{itemize}
\item [(1)] $|xz|\cdot|yu|=|xu|\cdot|yz|$,
\item[(2)] $|xu|\cdot|yz|=|xy|\cdot|zu|$,
\item[(3)] $|xy|\cdot|zu|=|xz|\cdot|yu|$,
\end{itemize}
for some and hence every semi-metric from
$M$,
which correspond to the first, the second and the third coordinate of
$M(q)$
respectively.
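For example (a sanity check of ours in the canonical structure $M_0$ of Example~\ref{exa:canonical_moebius_circle}, identifying $X_\omega$ with $\mathbb{R}$): the 4-tuple $q=(0,\infty,-1,1)$ is harmonic of type (1). Indeed, in the standard metric with infinitely remote point $\infty$, condition (1) reduces, after cancelling the two infinite factors, to $|0-(-1)|=|0-1|$; that is, $x=0$ is the midpoint of $z=-1$ and $u=1$.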
\begin{lem}\label{lem:three_embeddings} For
$i=1,2,3$
there is an embedding
$e_i:\operatorname{reg}\mathcal{P}_3\to\operatorname{reg}\mathcal{P}_4$
of the set
$\operatorname{reg}\mathcal{P}_3\subset X^3$
of nondegenerate 3-tuples, whose image
$e_i(\operatorname{reg}\mathcal{P}_3)$
is the set of harmonic 4-tuples of type
$(i)$.
\end{lem}
\begin{proof} Given
$t=(x_1,x_2,x_3)\in\operatorname{reg}\mathcal{P}_3$,
we take a semi-metric
$|\cdot\cdot|_i$
from
$M$
with infinitely remote point
$x_i$.
The distance function
$x\mapsto|x_{i+1}x|_i$
is continuous on
$X_{x_i}$
(see \cite[Lemma~4.1]{Bu17}), thus there is
$y_i\in X_{x_i}$
with
$|x_{i+1}y_i|_i=|y_ix_{i+2}|_i$
(indices are taken modulo 3). By Corollary~\ref{cor:interval_monotone},
$y_i$
is uniquely determined and moreover the pairs
$(x_i,y_i)$
and
$(x_{i+1},x_{i+2})$
separate each other. Now, we put
$e_i(t)=(y_i,x_1,x_2,x_3)$.
By construction,
$e_i(t)$
satisfies
$$|x_{i+1}y_i|\cdot|x_ix_{i+2}|=|y_ix_{i+2}|\cdot|x_ix_{i+1}|$$
for any semi-metric from
$M$,
and thus
$e_i(t)$
is harmonic of type
$(i)$.
Conversely, given a harmonic 4-tuple
$q=(x,y,z,u)$,
we take either of
$y$, $z$, $u$
as an infinitely remote point and see that
$x$
is the midpoint between the remaining two points for harmonicity types (1), (2), (3)
respectively. Therefore, every harmonic 4-tuple of type
$(i)$
is
$e_i(t)$
for an appropriate
$t\in\operatorname{reg}\mathcal{P}_3$.
\end{proof}
The set
$\operatorname{reg}\mathcal{P}_3\subset X^3$
in the induced topology consists of two connected components each of which
is homeomorphic to the unit tangent bundle
$U\operatorname{H}^2$
of the hyperbolic plane
$\operatorname{H}^2$,
that is, it is the trivial
$S^1$-bundle
over
$\mathbb{R}^2$.
By Lemma~\ref{lem:three_embeddings},
$e_i(\operatorname{reg}\mathcal{P}_3)$
is the set of harmonic 4-tuples of type
$(i)$.
Therefore, the set of harmonic 4-tuples consists of six connected
components each of which is homeomorphic to
$\mathbb{R}^2\times S^1$.
The group
$S_4$
acting on
$\operatorname{reg}\mathcal{P}_4$
permutes these components with the stabilizer of each one isomorphic to the
cyclic group
$\mathbb{Z}_4$.
These facts are not used in what follows, they only describe the general structure
of the space of harmonic 4-tuples.
\subsection{Harmonic pairs}
\label{subsect:harm_pairs}
As a topological space, the required filling is defined as the set
$\operatorname{Harm}$
of harmonic pairs. It is convenient to use unordered pairs
$(x,y)\sim(y,x)$
of distinct points on
$X=S^1$,
and we denote the set of them by
$\operatorname{aY}=S^1\times S^1\setminus\Delta/\sim$,
where
$\Delta=\{(x,x) : x\in S^1\}$
is the diagonal. A pair
$(a,b)\in\operatorname{aY}\times\operatorname{aY}$
is harmonic if
\begin{equation}\label{eq:harmonic}
|xz|\cdot|yu|=|xu|\cdot|yz|
\end{equation}
for some and hence any semi-metric of the M\"obius structure, where
$a=(x,y)$, $b=(z,u)$.
That is, we use the first type of harmonic 4-tuples to define harmonic
pairs. The choice of the type is irrelevant to our construction
because different types of harmonicity are permuted with each other by
the group
$S_4$.
Note that the pairs of points
$a$, $b$
separate each other for every harmonic pair
$(a,b)$.
This follows from monotonicity of
$M$,
see the proof of Lemma~\ref{lem:three_embeddings}.
The set
$\operatorname{Harm}$
of the harmonic pairs is a 3-dimensional subspace in
$\operatorname{aY}\times\operatorname{aY}$
given by Equation~(\ref{eq:harmonic}). There is an involution
$\pi(x,y,z,u)=(y,x,u,z)$
acting on the set of harmonic 4-tuples of the first type, whose quotient is
$\operatorname{Harm}$.
Therefore,
$\operatorname{Harm}$
is homeomorphic to the projectivized tangent bundle of
$\operatorname{H}^2$.
Given
$q=(a,b)\in\operatorname{Harm}$,
the pair
$a\in\operatorname{aY}$
is called the {\em left axis} and the pair
$b\in\operatorname{aY}$
the {\em right axis} of
$q$.
There is a canonical involution
$j:\operatorname{Harm}\to\operatorname{Harm}$
without fixed points given by
$j(a,b)=(b,a)$.
The quotient space we denote by
$\operatorname{Hm}:=\operatorname{Harm}/j$.
In other words,
$\operatorname{Hm}$
is the set of unordered harmonic pairs of unordered pairs
of points in
$X$.
Note that
$j(q)=(b,a)$
is harmonic with the left axis
$b$
and the right axis
$a$
for every harmonic pair
$q=(a,b)\in\operatorname{Harm}$.
The space
$\operatorname{Harm}$
has two canonical structures of a locally trivial bundle
$\operatorname{pr}_i:\operatorname{Harm}\to\operatorname{aY}$
with respect to the factor projections
$\operatorname{pr}_i:\operatorname{aY}\times\operatorname{aY}\to\operatorname{aY}$, $i=1,2$,
$\operatorname{pr}_1(a,b)=a$, $\operatorname{pr}_2(a,b)=b$.
It follows from Lemma~\ref{lem:three_embeddings}, that the fibers of
$\operatorname{pr}_i$
are homeomorphic to an open arc in
$S^1$,
i.e. to
$\mathbb{R}$.
We obviously have
$\operatorname{pr}_i\circ j=\operatorname{pr}_{i+1}$
for
$i=1,2$,
where the indices are taken modulo 2. Both
$\mathbb{R}$-bundles
$\operatorname{pr}_1$, $\operatorname{pr}_2$
are nontrivial, i.e.
$\operatorname{Harm}$
is not homeomorphic to the product
$\operatorname{aY}\times\mathbb{R}$.
\subsection{Lines and zig-zag paths in $\operatorname{Harm}$}
\label{subsect:lines_harm}
A {\em left line}
$\operatorname{lh}_a$, $a\in\operatorname{aY}$,
in
$\operatorname{Harm}$
is the subset
$\operatorname{lh}_a=\operatorname{pr}_1^{-1}(a)\subset\operatorname{Harm}$.
The pair
$a\in\operatorname{aY}$
is called the {\em axis} of
$\operatorname{lh}_a$.
Similarly, a {\em right line}
$\operatorname{rh}_b$, $b\in\operatorname{aY}$,
is the subset
$\operatorname{rh}_b=\operatorname{pr}_2^{-1}(b)\subset\operatorname{Harm}$.
The pair
$b\in\operatorname{aY}$
is called the {\em axis} of
$\operatorname{rh}_b$.
Note that
$j(\operatorname{lh}_a)=\operatorname{rh}_a$
and
$j(\operatorname{rh}_b)=\operatorname{lh}_b$.
Every fiber of the fibration
$\operatorname{pr}_1:\operatorname{Harm}\to\operatorname{aY}$
is a left line, while every fiber of the fibration
$\operatorname{pr}_2:\operatorname{Harm}\to\operatorname{aY}$
is a right line. Thus every left (right) line is homeomorphic to
$\mathbb{R}$.
The axis
$a$
of
$\operatorname{lh}_a$
is the common left axis for all
$q\in\operatorname{lh}_a$.
The axis
$b$
of
$\operatorname{rh}_b$
is the common right axis for all
$q\in\operatorname{rh}_b$.
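For example, in the canonical structure $M_0$ (an illustration of ours, not used later): for $a=(0,\infty)\in\operatorname{aY}$, a pair $((0,\infty),(z,u))$ is harmonic precisely when $0$ is the midpoint of $z$, $u$ in the standard metric on $\mathbb{R}$, so $\operatorname{lh}_a=\{(a,(-s,s)) : s>0\}$, which is indeed homeomorphic to $\mathbb{R}$ via $s\mapsto\ln s$.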
A line in
$\operatorname{Hm}$
is the image of a left line or a right line under the canonical projection
$\operatorname{Harm}\to\operatorname{Hm}$.
Thus in
$\operatorname{Hm}$
we do not distinguish left and right lines. The notion of the axis of
a line is preserved by
$j$,
and we denote by
$\operatorname{h}_a\subset\operatorname{Hm}$
a line with the axis
$a\in\operatorname{aY}$.
We say that
$b$, $b'\in\operatorname{aY}$
are in the {\em strong causal relation} if either of them lies
on an open arc in
$X$
determined by the other one (for more on this terminology see \cite{Bu17}).
\begin{lem}\label{lem:common_perp} For different
$q=(a,b)$, $q'=(a,b')$
lying of on a left line
$\operatorname{lh}_a$,
the pairs
$b$, $b'\in\operatorname{aY}$
are in the strong causal relation. Conversely, given
$b$, $b'\in\operatorname{aY}$
in the strong causal relation, there is a left line
$\operatorname{lh}_a$
such that
$q=(a,b)$, $q'=(a,b')\in\operatorname{lh}_a$.
Similar properties hold true also for right lines and lines in
$\operatorname{Hm}$.
\end{lem}
\begin{proof} The arguments can be found in \cite[Proposition~5.8, Proposition~3.2(b)]{Bu17}.
For convenience of the reader we briefly recall them.
Let
$a=(x,y)$, $b=(z,u)$, $b'=(z',u')\in\operatorname{aY}$,
where
$q=(a,b)$, $q'=(a,b')$
lie on a left line
$\operatorname{lh}_a$.
Taking a semi-metric from
$M$
with infinitely remote point
$x$,
we observe that
$y$
is the midpoint of the segments
$zu$, $z'u'\subset X_x$.
Since
$b\not=b'$,
we can assume that
$z'y\subset zy$.
By Axiom~(M),
$|z'y|_x<|zy|_x$,
and thus
$|u'y|_x<|uy|_x$.
Then again by Axiom~(M),
$u'y\subset uy$.
It follows that
$b'$
lies on an open arc in
$X$
determined by
$b$,
i.e.,
$b$, $b'$
are in the strong causal relation.
Conversely, Lemma~\ref{lem:three_embeddings} implies that for every
$b=(z,u)\in\operatorname{aY}$
there is a well defined involutive homeomorphism
$\rho_b:X\to X$,
called the {\em reflection} with respect to
$b$,
that fixes
$z$, $u$,
such that the pair
$(a,b)$
is harmonic for every
$x\in X\setminus b$,
where
$a=(x,\rho_b(x))$.
For
$b$, $b'\in\operatorname{aY}$
in the strong causal relation, we take the composition
$\rho=\rho_b\circ\rho_{b'}$
of the respective reflections and note that
$\rho(b^+)\subset\operatorname{int}(b^+)$,
where
$b^+\subset X$
is the closed arc determined by
$b$
that does not include
$b'$.
Thus there is a fixed point
$x\in\operatorname{int} b^+$
of
$\rho$.
Then
$a=(x,y)\in\operatorname{aY}$,
where
$y=\rho_{b'}(x)$,
is preserved by
$\rho_b$, $\rho_{b'}$,
and
$q=(a,b)$, $q'=(a,b')\in\operatorname{lh}_a$.
\end{proof}
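In the canonical structure $M_0$ the reflection is explicit (an illustration of ours): for $b=(-1,1)\subset\widehat{\mathbb{R}}$ one has $\rho_b(x)=1/x$ (with $0\mapsto\infty$). It is involutive, fixes $\pm1$, and for every $x\neq 0,\pm1,\infty$ the pair $a=(x,1/x)$ satisfies
$$|x+1|\cdot\left|\frac1x-1\right|=\frac{|x+1|\,|1-x|}{|x|}=|x-1|\cdot\left|\frac1x+1\right|,$$
so that $(a,b)$ is harmonic of the first type.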
The pair
$a\in\operatorname{aY}$
above is called a {\em common perpendicular} to
$b$, $b'$.
We postpone the proof of uniqueness to sect.~\ref{subsect:distance_harmonic_pairs},
see Lemma~\ref{lem:unique_common_perpendicular}.
We say that
$d\in\operatorname{aY}$
{\em separates}
$b$
and
$c\in\operatorname{aY}$
if
$b$
and
$c$
lie on different open arcs in
$X$
defined by
$d$.
Note that in this case
$b$, $c$, $d$
are in the strong causal relation with each other.
Given a left line
$\operatorname{lh}_a\subset\operatorname{Harm}$
and distinct
$q=(a,b)$, $q'=(a,b')\in\operatorname{lh}_a$,
we define the {\em left segment}
$qq'\subset\operatorname{lh}_a$
as the union of
$q$, $q'$
and all of
$q''=(a,b'')\in\operatorname{lh}_a$
such that
$b''$
separates
$b$, $b'$.
The points
$q$, $q'$
are the {\em ends} of
$qq'$.
Similarly, we define right segments on a right line. More generally,
a segment
$qq'$
in
$\operatorname{Harm}$ ($\operatorname{Hm}$)
is a segment of a line in
$\operatorname{Harm}$ ($\operatorname{Hm}$).
In this case the harmonic pairs
$q$, $q'$
have a common axis.
By the first part of Lemma~\ref{lem:common_perp},
$b$, $b'$
are in the strong causal relation. Denote by
$b^-\subset X$
the open arc determined by
$b$
that contains
$b'$,
and by
$(b')^-$
the open arc determined by
$b'$
that contains
$b$.
Then
$a$
does not meet
$b^-\cap(b')^-$
because
$a,b$
separate each other as well as
$a,b'$.
By Lemma~\ref{lem:three_embeddings}, for every
$z''\in b^-\cap(b')^-$
there is
$u''\in X$
such that
$(a,b'')$
is harmonic, i.e.,
$(a,b'')\in\operatorname{lh}_a$,
where
$b''=(z'',u'')$.
Thus
$b''$
is in the strong causal relation with
$b$
as well as with
$b'$.
Hence,
$u''\in b^-\cap(b')^-$.
In other words, the intersection
$b^-\cap(b')^-$
is invariant under the reflection
$\rho_a:X\to X$,
see proof of Lemma~\ref{lem:common_perp}.
We conclude that the segment
$qq'\subset\operatorname{lh}_a$
is homeomorphic to the standard segment
$[0,1]$.
A {\em zig-zag} path, or zz-path,
$S\subset\operatorname{Harm}$
is defined as an alternating finite (maybe empty) sequence of left and right segments
$\sigma_i$
in
$\operatorname{Harm}$,
where consecutive segments
$\sigma_i$, $\sigma_{i+1}$
have a common end. Segments
$\sigma_i$
are also called {\em sides} of
$S$.
\begin{lem}\label{lem:zz_connected} Given
$q$, $q'\in\operatorname{Harm}$,
there is a zz-path
$S$
in
$\operatorname{Harm}$
with at most five sides that connects
$q$
and
$q'$.
\end{lem}
\begin{proof} Let
$q=(a,b)$, $q'=(a',b')$.
The pairs
$a$, $a'\in\operatorname{aY}$
separate
$X$
into (at most four) open arcs. Taking
$a''\in\operatorname{aY}$
on such an arc, we see that
$a''$
is in the strong causal relation with
$a$
as well as with
$a'$.
By Lemma~\ref{lem:common_perp}, there is a common
perpendicular
$\widetilde b$
to
$a$, $a''$,
and there is a common perpendicular
$\widetilde b'$
to
$a'$, $a''$.
Then the pairs
$\widetilde q=(a,\widetilde b)$, $q''=(a'',\widetilde b)$,
$\widetilde q''=(a'',\widetilde b')$, $\widetilde q'=(a',\widetilde b')$
are harmonic, and the alternating sequence
$$S=q\widetilde q,\ \widetilde q q'',\ q''\widetilde q'',\ \widetilde q''\widetilde q',\ \widetilde q'q'$$
of left and right segments connects
$q$, $q'$
having at most 5 sides.
\end{proof}
A zz-path in
$\operatorname{Hm}$
is the image of a zz-path in
$\operatorname{Harm}$
under the canonical projection
$\operatorname{Harm}\to\operatorname{Hm}$.
This is also an alternating (in the obvious sense) finite sequence
of segments in
$\operatorname{Hm}$,
where consecutive segments have a common end.
Lemma~\ref{lem:zz_connected} holds true also in
$\operatorname{Hm}$.
\section{Pseudometric on $\operatorname{Harm}$}
\label{sect:pseudometric}
\subsection{Distance between harmonic pairs with common axis}
\label{subsect:distance_harmonic_pairs}
Given two harmonic pairs
$q$, $q'\in\operatorname{Harm}$
with a common axis, say
$q=(a,b)$
and
$q'=(a,b')$,
we define {\em the distance}
$|qq'|$
between them as
\begin{equation}\label{eq:distance}
|qq'|=|j(q)j(q')|=\left|\ln\frac{|xz'|\cdot|yz|}{|xz|\cdot|yz'|}\right|
\end{equation}
for some and hence any semi-metric on
$X$
from
$M$,
where
$a=(x,y)$, $b=(z,u)$, $b'=(z',u')\in\operatorname{aY}$,
and
$j:\operatorname{Harm}\to\operatorname{Harm}$
is the canonical involution.
Note that
\begin{equation}\label{eq:distance_different}
|qq'|=\left|\ln\frac{|xu'|\cdot|yu|}{|xu|\cdot|yu'|}\right|=
\left|\ln\frac{|xu'|\cdot|yz|}{|xz|\cdot|yu'|}\right|=
\left|\ln\frac{|xz'|\cdot|yu|}{|xu|\cdot|yz'|}\right|
\end{equation}
by harmonicity of
$q$, $q'$.
In this way, (\ref{eq:distance}) defines the distance along
the left hyperbolic line
$\operatorname{lh}_a\subset\operatorname{Harm}$
as well as along the right hyperbolic line
$\operatorname{rh}_a\subset\operatorname{Harm}$.
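As a worked example (ours, continuing the parametrization of $\operatorname{lh}_a$ for $a=(0,\infty)$ in the canonical structure): let $q_s=(a,(-s,s))$ and $q_{s'}=(a,(-s',s'))$, $s,s'>0$. Since the right hand side of (\ref{eq:distance}) is a cross-ratio, we may compute it in the semi-metric $d_0$ with infinitely remote point $x=0$, where the two factors containing $x$ formally cancel (cf. the proof of Lemma~\ref{lem:distance_additive} below) and $|yz|_0=d_0(\infty,-s)=1/s$:
$$|q_sq_{s'}|=\left|\ln\frac{|yz|_0}{|yz'|_0}\right|=\left|\ln\frac{s'}{s}\right|,$$
so $t\mapsto q_{e^t}$ is an isometric parametrization of $\operatorname{lh}_a$ by $\mathbb{R}$.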
\begin{lem}\label{lem:distance_additive} Given
$a\in\operatorname{aY}$
and
$q=(a,b)$, $q'=(a,b')$, $q''=(a,b'')\in\operatorname{lh}_a$
such that
$b'$
separates
$b$
and
$b''$,
we have
$|qq''|=|qq'|+|q'q''|$.
A similar property holds true also for right lines.
\end{lem}
\begin{proof} Let
$a=(x,y)$, $b=(z,u)$, $b'=(z',u')$, $b''=(z'',u'')$.
In the semi-metric from
$M$
with infinitely remote point
$x$, $y$
is the midpoint of the segments
$zu$, $z'u'$, $z''u''\subset X_x$.
Using that
$b'$
separates
$b$
and
$b''$,
we can assume without loss of generality that
$z''u''\subset z'u'\subset zu$.
Then
$|yz''|_x<|yz'|_x<|yz|_x$
and thus
$$|qq'|=\ln\frac{|yz|_x}{|yz'|_x},\ |q'q''|=\ln\frac{|yz'|_x}{|yz''|_x},
\ |qq''|=\ln\frac{|yz|_x}{|yz''|_x}.$$
Therefore
$|qq''|=|qq'|+|q'q''|$.
\end{proof}
Now, we can prove uniqueness of the common perpendicular.
\begin{lem}\label{lem:unique_common_perpendicular} Given
$b$, $b'\in\operatorname{aY}$
in the strong causal relation, there is at most one common perpendicular
$a\in\operatorname{aY}$
to
$b$, $b'$.
\end{lem}
\begin{proof} Assume there are common perpendiculars
$a=(z,u)$, $a'=(z',u')\in\operatorname{aY}$
to
$b$, $b'$.
By the first part of Lemma~\ref{lem:common_perp},
$a$
and
$a'$
are in the strong causal relation.
Let
$b=(x,y)$, $b'=(x',y')$.
Using that the pairs
$a,a'$
and
$b,b'$
are in the strong causal relation, we assume without loss of generality
that on
$X_x$
we have the following order of points
$zz'yy'u'ux'$.
We denote by
$q_1=(b,a)$, $q_2=(b',a)$,
$q_1'=(b,a')$, $q_2'=(b',a')$
respective harmonic pairs. Then
$q_1,q_2\in\operatorname{rh}_a$, $q_1',q_2'\in\operatorname{rh}_{a'}$,
and we have well defined distances
$l=|q_1q_2|$, $l'=|q_1'q_2'|$.
Computing them in a semi-metric of the M\"obius
structure with infinitely remote point
$x$,
we obtain
$$e^l=\frac{|zx'|}{|x'u|}\quad
e^{l'}=\frac{|z'x'|}{|x'u'|}.$$
Using the order of points
$zz'yy'u'ux'$
on
$X_x$,
we have, in particular, that the interval
$z'x'$
is contained in the interval
$zx'$.
By Corollary~\ref{cor:interval_monotone},
$|zx'|\ge|z'x'|$.
Similarly,
$x'u\subset x'u'$
and hence
$|x'u|\le|x'u'|$.
Thus
$l\ge l'$
and if
$a'\neq a$,
the inequality is strict. Applying this argument with infinitely remote point
$y$,
we obtain
$l\le l'$.
Therefore
$l=l'$
and
$a=a'$.
\end{proof}
\subsection{Defining a pseudometric $\delta$ on $\operatorname{Hm}$ and $\operatorname{Harm}$}
\label{subsect:def_pseudometric}
It follows from Lemma~\ref{lem:distance_additive} that for harmonic pairs
$q$, $q'$
on one and the same left (right) line, the length of the segment
$\sigma=qq'$
is equal to the distance
$|qq'|$,
that is, it can be computed by any of Equalities~(\ref{eq:distance}),
(\ref{eq:distance_different}).
Let
$S=\{\sigma_i\}$
be a zz-path in
$\operatorname{Harm}$.
We define the length of
$S$
as the sum
$|S|=\sum_i|\sigma_i|$
of the length of its sides. Now, we define a distance
$\delta$
on
$\operatorname{Harm}$
by
$$\delta(q,q')=\inf_S|S|,$$
where the infimum is taken over all zz-paths
$S\subset\operatorname{Harm}$
from
$q$
to
$q'$.
\begin{pro}\label{pro:sym_finite_dist} The distance
$\delta$
on
$\operatorname{Harm}$
is symmetric,
$\delta(q,q')=\delta(q',q)$, $\delta(q,q)=0$,
satisfies the triangle inequality,
$$\delta(q,q'')\le\delta(q,q')+\delta(q',q''),$$
and is finite,
$\delta(q,q')<\infty$,
for all
$q,q',q''\in\operatorname{Harm}$.
\end{pro}
\begin{proof} The property of
$\delta$
to be symmetric and the triangle inequality immediately
follow from the definition. Taking an empty zz-path, we see that
$\delta(q,q)=0$
for any
$q\in\operatorname{Harm}$.
The fact that the distance
$\delta(q,q')$
is finite for every
$q$, $q'\in\operatorname{Harm}$,
follows from Lemma~\ref{lem:zz_connected}.
\end{proof}
We similarly define the distance on
$\operatorname{Hm}$,
for which we use the same notation
$\delta$.
The canonical projection
$\operatorname{Harm}\to\operatorname{Hm}$
is a 2-sheeted covering of 3-manifolds with deck transformation group
isomorphic to
$\mathbb{Z}_2$
acting by
$\delta$-isometries. Then the distance on
$\operatorname{Harm}$
is obtained by lifting the distance on
$\operatorname{Hm}$.
A basic problem is to prove that
$\delta$
is nondegenerate, i.e.,
$\delta(q,q')>0$
for any distinct
$q$, $q'\in\operatorname{Harm}$.
It is not at all clear that this holds even in the case when
$q$, $q'$
lie on a line, and moreover that
$\delta(q,q')=|qq'|$
in this case.
\section{Projections to a line}
\label{sect:project_line}
\subsection{$s$-projection and midpoint projection}
\label{subsect:midpoint_project}
It follows from Lemma~\ref{lem:three_embeddings} that given
$a\in\operatorname{aY}$
and
$x\in X$, $x\notin a$,
there is a uniquely determined
$y\in X$
such that the pair
$(a,b)$
is harmonic,
$(a,b)\in\operatorname{Hm}$,
where
$b=(x,y)$.
In this case, we use notation
$x_a:=b$
and say that
$x_a\in\operatorname{h}_a$
is the projection of
$x$
to the line
$\operatorname{h}_a$.
We say that a one-parametric family of segments
$v_tw_t\subset\mathbb{R}$
is {\em monotone}, if its ends
$v_t$, $w_t$
are monotone in the same sense, i.e.,
$v_t<v_{t'}$
if and only if
$w_t<w_{t'}$
for
$t\neq t'$.
\begin{lem}\label{lem:monotone_segments} Given two lines
$\operatorname{h}_a$, $\operatorname{h}_c\subset\operatorname{Hm}$
with
$a=(z,u)\in\operatorname{aY}$,
the family of segments
$v_aw_a=v_aw_a(p)\subset\operatorname{h}_a$
is monotone in
$p=(c,d)\in\operatorname{h}_c$,
where
$d=(v,w)\in\operatorname{aY}$,
as
$p$
runs over the segment
$z_cu_c\subset\operatorname{h}_c$.
\end{lem}
\begin{proof} If
$c=a$,
then there is nothing to prove because
$z_cu_c=\operatorname{h}_c$
in this case and
$v_a=p=w_a$
for any
$p\in\operatorname{h}_c$.
Thus we assume that
$c\neq a$.
Another trivial case occurs when the pair
$(a,c)$
is harmonic. In that case,
$z_c=u_c$,
i.e.
the segment
$z_cu_c$
is degenerate, and for
$p=z_c=u_c$,
the family
$v_aw_a(p)=h_a$
is constant. Thus we assume that the pair
$(a,c)$
is not harmonic.
Let
$z'=\rho_c(z)$, $u'=\rho_c(u)$,
where
$\rho_c:X\to X$
is the reflection with respect to
$c$
(see the proof of Lemma~\ref{lem:common_perp} and \cite{Bu17}).
Then by definition
$z_c=(z,z')$, $u_c=(u,u')$.
Note that
$z_cu_c\subset\operatorname{h}_c$
is a ray when
$a$
and
$c$
have a common end.
Let
$zu\subset X$
be an open arc determined by
$z$, $u$
that does not contain at least one of the ends of
$c$,
and let
$z'u'\subset X$
be the
$\rho_c$
image of
$zu$.
Then the ends
$v$, $w\in X$
of
$d$
miss the intersection
$zu\cap z'u'$
(which is nonempty if and only if the pairs
$a$, $c$
separate each other). We assume without loss of generality that
$v\in zu\setminus z'u'$
($zu\setminus z'u'\neq\emptyset$
by the assumption that
$(a,c)$
is not harmonic). Then
$w\in z'u'\setminus zu$
because
$w=\rho_c(v)$.
Under our assumption, an order on
$z_cu_c$
induces well defined orders on the arcs
$zu\setminus z'u'$, $z'u'\setminus zu$
such that
$p<p'$
if and only if
$v<v'$
and
$w<w'$
for
$p=(c,d)$, $p'=(c,d')$, $d=(v,w)$, $d'=(v',w')$.
Taking projections on
$a$,
we see that the family
$v_aw_a(p)\subset\operatorname{h}_a$
is monotone in
$p$.
\end{proof}
We say that
$b\in\mathbb{R}$
is the
$s$-point
of an (oriented) segment
$vw\subset\mathbb{R}$, $s>0$,
if
$b\in vw$
and
$|vb|/|bw|=s$.
For example,
the $1$-point
is the midpoint of a segment
$vw$.
In that case, the order of
$v$, $w$
on
$\mathbb{R}$
is not important.
\begin{lem}\label{lem:midpoint_project} Given
$s>0$, $q=(a,b)\in\operatorname{Harm}$, $a=(z,u)\in\operatorname{aY}$,
and a line
$\operatorname{h}_c\subset\operatorname{Hm}$,
there is a unique
$p=(c,d)\in z_cu_c\subset\operatorname{h}_c$, $d=(v,w)\in\operatorname{aY}$,
such that
$b\in\operatorname{h}_a$
is the
$s$-point
of the (maybe degenerate) segment
$v_aw_a\subset\operatorname{h}_a$.
\end{lem}
\begin{proof} If
$c=a$,
then
$z_cu_c=\operatorname{h}_c$,
and for every
$p=(c,d)\in\operatorname{h}_c$
the segment
$v_aw_a$
is degenerate,
$v_a=w_a=p$.
In this case, we take
$p=(c,b)$.
We also do not exclude
the case when the segment
$z_cu_c$
is degenerate, i.e.,
$z_c=u_c$.
In this case, the pair
$(a,c)$
is harmonic,
$d=a$
and
$v_aw_a=\operatorname{h}_a$.
Therefore, any
$b\in\operatorname{h}_a$
is understood as the
$s$-point
of the segment of
$\operatorname{h}_a$
with both ends at infinity.
As in the proof of Lemma~\ref{lem:monotone_segments}, we always assume that
$v\in zu\setminus z'u'$
for
$d=(v,w)$,
where
$z'=\rho_c(z)$, $u'=\rho_c(u)$.
Then
$w\in z'u'\setminus zu$,
and we consider
$v_aw_a\subset\operatorname{h}_a$
as an oriented segment. This is well defined because by
Lemma~\ref{lem:monotone_segments} the family
$v_aw_a=v_aw_a(p)$
is monotone in
$p\in z_cu_c$.
First, we show that any
$b\in\operatorname{h}_a$
separates the
$s$-points
$m_a'$
and
$m_a''$
of segments
$v_a'w_a'$, $v_a''w_a''$
respectively for appropriate
$d'=(v',w')$, $d''=(v'',w'')\in z_cu_c\subset\operatorname{h}_c$.
To this end, note that the
$s$-point
$m_a$
of the segment
$v_aw_a\subset\operatorname{h}_a$
approaches the
$\operatorname{h}_a$
ends at infinity
$z$
or
$u$
as
$d=(v,w)\in z_cu_c$
approaches
$z_c$
or
$u_c$
respectively. Indeed, one of
$v_a$, $w_a$
stays bounded on
$\operatorname{h}_a$
while the other one goes along
$\operatorname{h}_a$
to infinity as
$d\to z_c$
or
$u_c$.
Thus
$m_a$
goes to
$z$
or
$u$
respectively. (It may happen that one of
$z_c$, $u_c\in\operatorname{h}_c$
is at infinity but not both of them when
$a$, $c\in\operatorname{aY}$
have a common end. In that case, the admissible segment
$z_cu_c$
is a ray on
$\operatorname{h}_c$,
and both
$v_a$, $w_a$
together with their
$s$-point
$m_a$
go to respective end at infinity of
$\operatorname{h}_a$
when
$d=(v,w)\in\operatorname{h}_c$
goes to the infinite end of the ray).
Second, we conclude that any
$b\in\operatorname{h}_a$
is the
$s$-point
of a respective segment
$v_aw_a\subset\operatorname{h}_a$, $b=m_a$.
By the first part,
$b$
lies between
$m_a'$, $m_a''$.
Now, we move from
$d'$
to
$d''$
along
$\operatorname{h}_c$,
i.e. consider
$d_t=(1-t)d'+td''\in\operatorname{h}_c$, $0\le t\le 1$,
$d_t=(v_t,w_t)$.
The
$s$-point
$(m_t)_a$
of
$(v_t)_a(w_t)_a$
varies continuously from
$m_a'$
to
$m_a''$
as
$t$
goes from 0 to 1. Therefore, there is
$0<\tau<1$
such that
$b=(m_\tau)_a$.
Finally, we show that the required
$p=(c,d)\in\operatorname{h}_c$
is unique. Indeed, by Lemma~\ref{lem:monotone_segments}, segments
$v_aw_a$,
where
$d=(v,w)$,
are monotone in
$p\in z_cu_c$.
Thus for any other
$p'=(c,d')\in z_cu_c$
neither of the segments
$v_aw_a$, $v_a'w_a'$
contains the other one. Hence, the respective
$s$-points
$m_a\neq m_a'$.
\end{proof}
For every
$s>0$
and a line
$\operatorname{h}_c\subset\operatorname{Hm}$
Lemma~\ref{lem:midpoint_project} determines a map
$\operatorname{pr}_c^s:\operatorname{Harm}\to\operatorname{h}_c$,
which is called the
$s$-{\em projection}
to the line
$\operatorname{h}_c$.
In the case
$s=1$
we abbreviate
$\operatorname{pr}_c:=\operatorname{pr}_c^1$,
and the map
$\operatorname{pr}_c:\operatorname{Harm}\to\operatorname{h}_c$
is called the {\em midpoint} projection to
$\operatorname{h}_c$.
The map
$\operatorname{pr}_c^s\circ j:\operatorname{Harm}\to\operatorname{h}_c$
in general differs from
$\operatorname{pr}_c^s$,
thus in
$\operatorname{Hm}$
we have two maybe different projections to the line
$\operatorname{h}_c$
depending on the choice of one of the entries of
$q=(a,b)\in\operatorname{Hm}$.
However,
$\operatorname{pr}_c^s$
is well defined along any line
$\operatorname{h}_a\subset\operatorname{Hm}$,
and hence along any zz-path.
\subsection{Equal ratio projection}
\label{subsect:equal_ratio_project}
\begin{lem}\label{lem:equal_ratio_project} Given
$q=(a,b)\in\operatorname{Harm}$, $a=(x,y)$, $b=(z,u)\in\operatorname{aY}$,
and a line
$\operatorname{h}_c\subset\operatorname{Hm}$,
there is a unique
$p=(c,d)\in x_cy_c\cap z_cu_c\subset\operatorname{h}_c$, $d=(v,w)$,
such that
$a\in v_bw_b\subset\operatorname{h}_b$, $b\in v_aw_a\subset\operatorname{h}_a$
and
$$\frac{|v_ba|}{|aw_b|}=\frac{|v_ab|}{|bw_a|}.$$
\end{lem}
\begin{proof} We fix an orientation of the line
$\operatorname{h}_c$
and the respective order. Note that one of the segments
$x_cy_c$, $z_cu_c$
is degenerate if and only if one of the pairs
$(a,c)$
or
$(c,b)$
is harmonic. Then there is nothing to prove because
$p=(c,a)$, $d=(v,w)=(x,y)$, $a=v_b=w_b\in\operatorname{h}_b$, $b\in v_aw_a=\operatorname{h}_a$
in the first case, and
$p=(c,b)$, $d=(v,w)=(z,u)$, $a\in v_bw_b=\operatorname{h}_b$, $b=v_a=w_a\in\operatorname{h}_a$,
in the second case (and the required equality is understood as
$0/0=\infty/\infty$).
Thus we assume that none of the segments
$x_cy_c$, $z_cu_c$
is degenerate. Moreover, their intersection
$x_cy_c\cap z_cu_c$
is not empty and also a nondegenerate segment because the pairs
$(x,y)$
and
$(z,u)$
being harmonic separate each other. Without loss of generality, we assume that
$x_c<y_c$, $u_c<z_c$.
Then
$u_c<y_c$
because the pairs
$(x,y)$
and
$(z,u)$
separate each other on
$X$.
Hence every
$d\in x_cy_c\cap z_cu_c$
separates
$u_c$, $y_c$
and
$x_c$, $z_c$.
Furthermore,
$c$
cannot separate
$(x,y)$, $(z,u)$.
Thus we can assume without loss of generality that
$c$
does not separate
$x$, $z$.
We also assume that
$v$
lies on the same arc in
$X$
determined by
$c$
as
$x$
and
$z$.
Then the assumption
$d=(v,w)\in x_cy_c\cap z_cu_c$
implies that
$v$
lies between
$x$
and
$z$
on that arc.
It follows that when we are moving along
$\operatorname{h}_a$
from
$x$
to
$y$,
we meet
$v$
earlier than
$z$,
and
$u$
earlier than
$w$.
Hence,
$b\in v_aw_a$.
Similarly, when we are moving along
$\operatorname{h}_b$
from
$z$
to
$u$,
we meet
$v$
earlier than
$x$,
and
$y$
earlier than
$w$.
Hence
$a\in v_bw_b$.
To be definite we assume that
$x_cy_c\cap z_cu_c=x_cz_c$
(other cases are considered similarly). Thus if
$p=(c,d)\in x_cz_c$
goes to
$x_c$,
then
$v_a\to\infty$,
while
$w_a$
stays bounded on
$\operatorname{h}_a$,
and
$v_b\to a$,
while
$\lim w_b\neq a$.
Setting
$s=|v_ba|/|aw_b|$, $t=|v_ab|/|bw_a|$,
we see that
$(s,t)\to (0,\infty)$
as
$p\to x_c$.
Similarly, if
$p\to z_c$,
then
$v_a\to b$,
while
$\lim w_a\neq b$,
and
$v_b\to\infty$,
while
$w_b$
stays bounded. Therefore,
$(s,t)\to (\infty,0)$
in this case. By continuity, there is
$p\in x_cz_c$
with
$s=t$.
This gives a required
$p\in x_cy_c\cap z_cu_c\subset \operatorname{h}_c$.
By Lemma~\ref{lem:monotone_segments}, segments
$v_aw_a$, $v_bw_b$
are monotone in
$p\in x_cy_c\cap z_cu_c\subset\operatorname{h}_c$.
Since
$b\in v_aw_a$, $a\in v_bw_b$,
this implies that the ratios
$s$, $t$
are monotone. Thus the required
$p$
is unique.
\end{proof}
For a line
$\operatorname{h}_c\subset\operatorname{Hm}$,
Lemma~\ref{lem:equal_ratio_project} determines a map
$\operatorname{prr}_c:\operatorname{Harm}\to\operatorname{h}_c$,
which is called the {\em equal ratio projection} to the line
$\operatorname{h}_c$.
Note that for
$q=(a,b)\in\operatorname{Harm}$
we have
$\operatorname{prr}_c(q)=\operatorname{pr}_c^s(q)$
for some well determined
$s>0$,
where
$s$
depends on
$q$.
\subsection{Strictly contracting property of the midpoint projection}
\label{subsect:strictly_contacting}
Increment axiom (I) is only used in the proof of the following proposition
which plays a key role in the paper.
\begin{pro}\label{pro:strict_monotone} Given lines
$\operatorname{h}_a$, $\operatorname{h}_c\subset\operatorname{Hm}$, $a\neq c$,
and points
$d=(v,w)$, $d'=(v',w')\in\operatorname{h}_c$
such that the pairs
$(v,w')$, $(v',w)$
separate each other, we have
$$\frac{1}{2}\left(|v_av_a'|+|w_aw_a'|\right)>|dd'|.$$
\end{pro}
For its proof see \cite[Proposition~7.11]{Bu17}.
\begin{lem}\label{lem:midpoint_project_estimate} The midpoint projection
$\operatorname{pr}_c:\operatorname{h}_a\to\operatorname{h}_c$
to any line
$\operatorname{h}_c\subset\operatorname{Hm}$
is strictly contracting along any line
$\operatorname{h}_a\subset\operatorname{Hm}$, $a\neq c$.
\end{lem}
\begin{proof} Given
$q=(a,b)$, $q'=(a,b')\in\operatorname{h}_a$,
we let
$p=\operatorname{pr}_c(q)$, $p'=\operatorname{pr}_c(q')$
be the midpoint projections to
$\operatorname{h}_c$.
Then
$p=(c,d)$, $p'=(c,d')$
with
$d=(v,w)$, $d'=(v',w')\in\operatorname{aY}$,
so that
$c$
and
$(v,w)$
separate each other as well as
$c$
and
$(v',w')$.
We assume without loss of generality that
$v$, $v'$
lie on an arc in
$X$
determined by
$c$,
while
$w$, $w'$
lie on the other arc determined by
$c$.
Then the pairs
$(v,w')$, $(v',w)$
separate each other.
By Proposition~\ref{pro:strict_monotone}
$$\frac{1}{2}(|v_av_a'|+|w_aw_a'|)>|dd'|.$$
By definition of the midpoint projection,
$d$, $d'\in z_cu_c$,
where
$a=(z,u)$.
Thus
$a$
separates
$(v,v')$
and
$(w,w')$
(in terms of \cite{Bu17}, it means that the event
$(z,u)\in\operatorname{aY}$
is strictly between events
$(v,v')$, $(w,w')\in\operatorname{aY}$).
By Lemma~\ref{lem:common_perp}, the pairs
$d=(v,w)$
and
$d'=(v',w')\in\operatorname{h}_c$
are in the strong causal relation. Since
$(z,u)$
separates
$(v,v')$
and
$(w,w')$,
it follows that moving along
$a$,
we meet
$v,v'$
and
$w,w'$
in the same order. Identifying the line
$\operatorname{h}_a$
with the real line
$\mathbb{R}$,
it means that the signs of
$v_a-v_a'$
and
$w_a-w_a'$
coincide.
Since
$b$
is the midpoint of
$v_aw_a$
and
$b'$
is the midpoint of
$v_a'w_a'$,
we obtain
\begin{align*}
|bb'|&=\left|\frac{1}{2}(v_a+w_a)-\frac{1}{2}(v_a'+w_a')\right|\\
&=\frac{1}{2}|v_a-v_a'+w_a-w_a'|\\
&=\frac{1}{2}(|v_av_a'|+|w_aw_a'|).
\end{align*}
Hence,
$|bb'|>|dd'|$
and therefore
$|qq'|>|pp'|$.
\end{proof}
\section{Distance $\delta$ along segments}
\label{sect:distance_segments}
We assume that a monotone M\"obius structure
$M$
satisfies Increment axiom~(I), and under this assumption we prove
Theorem~\ref{thm:main}.
\begin{pro}\label{pro:sides_short} For any side
$\sigma$
of a closed zz-path
$S\subset\operatorname{Harm}$
we have
$$|\sigma|<\sum_{\sigma'}|\sigma'|,$$
where the sum is taken over all sides
$\sigma'$
of
$S$, $\sigma'\neq\sigma$.
\end{pro}
\begin{proof} Let
$S'$
be a zz-path which is the union of all segments of
$S$
excluding
$\sigma$,
that is,
$S=\sigma\cup S'$.
The idea is to use the midpoint projection of
$S'$
to the line
$h_c$
determined by
$\sigma$, $\sigma\subset h_c$,
and apply Lemma~\ref{lem:midpoint_project_estimate}. The problem is
that the midpoint projections on adjacent segments of
$S'$
may not coincide on the common vertex (in the case of the canonical
M\"obius structure on
$S^1$
they coincide). This could create gaps on
$\sigma$
which are not covered by the projection and would thus prevent the required estimate.
Let
$V=V(S')$
be the vertex set of
$S'$.
We fix
$\varepsilon>0$
and for every vertex
$v\in V$
we take
$\varepsilon_v>0$
such that
$\sum_{v\in V}\varepsilon_v<\varepsilon$.
We use the midpoint projection on
$S'$
outside of the
$\varepsilon_v$-neighborhoods
$U_v(\varepsilon_v)$, $v\in V$,
of vertices, the equal ratio projection on the vertices,
and interpolate between these types of projections inside of
$U_v(\varepsilon_v)$
to obtain a continuous projection
$\operatorname{pr}_\sigma:S'\to\operatorname{h}_c$
with controlled metric properties.
Let
$\sigma'\subset S'$
be a side of
$S$
different from
$\sigma$, $h_a\subset \operatorname{Hm}$
the line containing
$\sigma'$, $\sigma'=pq\subset h_a$.
If
$\sigma'$
is adjacent to
$\sigma$,
then we assume to be definite that
$p\in\operatorname{Hm}$
is the common vertex of
$\sigma$, $\sigma'$,
in particular,
$p=(a,c)$.
Then
$q=(a,c')$
for some
$c'\in\operatorname{aY}$.
In this case, by Lemmas~\ref{lem:midpoint_project}, \ref{lem:equal_ratio_project},
the whole segment
$\sigma'$
is projected to
$p$,
$\operatorname{pr}_\sigma(\sigma')=p$.
Thus we assume that
$\sigma'$
is not adjacent to
$\sigma$.
Let
$pq'\subset\sigma'$
be the minimal subsegment containing the
$\varepsilon_p$-neighborhood of
$p$
in
$\sigma'$, $|pq'|=\varepsilon_p$.
We define
$\operatorname{pr}_\sigma$
on
$pq'$
by taking
$\operatorname{pr}_\sigma(p)=\operatorname{prr}_c(p)$, $\operatorname{pr}_\sigma(q')=\operatorname{pr}_c(q')$
and
$\operatorname{pr}_\sigma(p_\tau)=\operatorname{pr}_c^s(p_\tau)$,
where
$p_\tau=(1-\tau)p+\tau q'$,
$s=s(\tau)=(1-\tau)s_p+\tau$,
and
$s_p>0$
is determined by
$\operatorname{prr}_c(p)$, $\operatorname{prr}_c(p)=\operatorname{pr}_c^{s_p}(p)$
(we take
$s_p=1$
if the segment
$\sigma''\in S'$
adjacent to
$\sigma'$
at
$p$
is adjacent to
$\sigma$).
We have constructed a projection
$\operatorname{pr}_\sigma:S'\to\operatorname{h}_c$
of the zz-path
$S'$
to the line
$\operatorname{h}_c$.
Continuity of
$\operatorname{pr}_\sigma$
along sides of
$S'$
follows from the uniqueness property of
$\operatorname{pr}_c^s$,
continuity at common vertices of adjacent sides
follows from the definition of
$\operatorname{prr}_c$.
Since
$\operatorname{pr}_\sigma$
is constant on the sides
$\sigma_1$, $\sigma_2$
adjacent to
$\sigma$,
which are mapped to the vertices of
$\sigma$,
the continuity of
$\operatorname{pr}_\sigma$
implies that the image
$\operatorname{pr}_\sigma(S')\subset\operatorname{h}_c$
covers
$\sigma$.
Thus
$|\sigma|\le\sum_{\sigma'\neq\sigma}|\operatorname{pr}_\sigma(\sigma')|$.
We decompose the right hand side of this inequality as
$\sum_{\sigma'\neq\sigma}|\operatorname{pr}_\sigma(\sigma')|=A+B$,
where
$A$
is the length of
$\operatorname{pr}_\sigma(S'\setminus\cup_{v\in V}U_v(\varepsilon_v))$
and
$B$
the length of
$\operatorname{pr}_\sigma(\cup_{v\in V}U_v(\varepsilon_v))$.
Since
$\operatorname{pr}_\sigma$
coincides with the midpoint projection
$\operatorname{pr}_c$
on
$S'$
outside of the union
$\cup_{v\in V}U_v(\varepsilon_v)$,
Lemma~\ref{lem:midpoint_project_estimate} gives
$A<|S''|-\sum_v\varepsilon_v<|S''|$,
where
$S''=S'\setminus(\sigma_1\cup\sigma_2)$.
Since
$\operatorname{pr}_\sigma$
is continuous on
$S'$,
we can make
$B$
arbitrarily small taking
$\varepsilon$
sufficiently small, say
$B<\delta(\varepsilon)<|\sigma_1|+|\sigma_2|$.
Thus
$|\sigma|\le|S''|+\delta(\varepsilon)<|S''|+|\sigma_1|+|\sigma_2|=|S'|$.
\end{proof}
\begin{cor}\label{cor:distance_line} For any
$q$, $q'\in\operatorname{Harm}$
on a line we have
$\delta(q,q')=|qq'|$.
\end{cor}
\begin{proof} Let
$S$
be a zz-path in
$\operatorname{Harm}$
between
$q$, $q'$
different from the segment
$qq'$.
By definition, the first and last sides of
$S$
lie on the line determined by the segment
$qq'$.
We denote by
$\widetilde q$, $\widetilde q'$
the ends of the first and the last sides respectively,
and assume that we have the order
$q<\widetilde q<\widetilde q'<q'$
along the segment
$qq'$.
Any other order only makes the arguments easier.
Let
$S'$
be the zz-subpath of
$S$
between
$\widetilde q$, $\widetilde q'$.
Then
$S'$
together with the segment
$\widetilde q\widetilde q'$
gives a closed zz-path in
$\operatorname{Harm}$.
By Proposition~\ref{pro:sides_short} we obtain
$|\widetilde q\widetilde q'|<|S'|$.
On the other hand,
$|qq'|=|\widetilde q\widetilde q'|+a$
and
$|S|=|S'|+a$,
where
$a=|q\widetilde q|+|\widetilde q'q'|$.
Hence
$|S|>|qq'|$
and thus
$\delta(q,q')=|qq'|$.
\end{proof}
This completes the proof of Theorem~\ref{thm:main}.
| {
"timestamp": "2018-10-09T02:11:53",
"yymm": "1810",
"arxiv_id": "1810.03133",
"language": "en",
"url": "https://arxiv.org/abs/1810.03133",
"abstract": "Any (boundary continuous) hyperbolic space induces on the boundary at infinity a Moebius structure which reflects most essential asymptotic properties of the space. In this paper, we initiate the study of the inverse problem: describe Moebius structures which are induced by hyperbolic spaces at least in the simplest case of the circle. For a large class of Moebius structures on the circle, we define a canonical \"filling\" each of them, which serves as a natural candidate for a solution of the inverse problem. This is a 3-dimensional (pseudo)metric space Harm, which consists of harmonic 4-tuples of the respective Moebius structure with a distance determined by zig-zag paths. Our main result is the proof that every line in Harm is a geodesic, i.e., shortest in the zig-zag distance on each segment. This gives a good starting point to show that Harm is Gromov hyperbolic with the prescribed Moebius structure at infinity.",
"subjects": "Metric Geometry (math.MG)",
"title": "On the inverse problem of Moebius geometry on the circle",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717428891155,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7089449223089748
} |
https://arxiv.org/abs/1206.3057 | Optimally solving a transportation problem using Voronoi diagrams | We consider the following variant of the Monge-Kantorovich transportation problem. Let S be a finite set of point sites in d dimensions. A bounded set C in d-dimensional space is to be distributed among the sites p in S such that (i) each p receives a subset C_p of prescribed volume, and (ii) the average distance of all points of C from their respective sites p is minimized. In our model, volume is quantified by some measure, and the distance between a site p and a point z is given by a function d_p(z). Under quite liberal technical assumptions on C and on the functions d_p we show that a solution of minimum total cost can be obtained by intersecting with C the Voronoi diagram of the sites in S, based on the functions d_p with suitable additive weights. Moreover, this optimum partition is unique up to sets of measure zero. Unlike the deep analytic methods of classical transportation theory, our proof is based directly on geometric arguments. | \section{Introduction} \label{intro-sec}
In 1781, Gaspard Monge~\cite{m-mtdr-81} raised the following problem.
Given two sets $C$ and $S$ of equal mass
in $\mathbb{R}^d$, transport each mass unit of $C$ to a mass unit of $S$
at minimal cost. More formally, given two measures $\mu$ and $\nu$,
find a map $f$ satisfying $\mu(f^{-1}(\cdot)) = \nu(\cdot)$
that minimizes
\[
\int \! d(z,f(z)) \, \mathrm{d} \mu(z),
\]
where $d(z,z')$ describes the cost of moving $z$ to $z'$.
Because of its obvious relevance to economics, and perhaps due to its
mathematical challenge, this problem has received a lot of attention.
Even with the Euclidean distance as cost function~$d$ it is not at all clear
in which cases an optimal map $f$ exists.
Progress by Appell~\cite{a-mdrsc-87} was honored with a prize by the
Academy of Paris in 1887. Kantorovich~\cite{k-pm-48} achieved a breakthrough
by solving a relaxed version of Monge's original problem.
In 1975, he received a Nobel prize in Economics; see
Gangbo and McCann~\cite{gm-got-96} for mathematical and historical details.
While usually known as the {\em Monge-Kantorovich transportation problem},
the minimum cost of a transportation is sometimes called {\em Wasserstein metric}
or, in computer science, {\em earth mover's distance} between the two measures
$\mu$ and $\nu$. It can be used to measure similarity in image retrieval;
see Rubner et al.~\cite{rtg-mdaid-98}.
If both measures $\mu$ and $\nu$ have finite support, Monge's problem becomes
the minimum weight matching problem for complete bipartite graphs, where edge weights
represent transportation cost; see Rote~\cite{r-tapm-09}, Vaidya~\cite{v-ghm-89} and Sharathkumar et al.~\cite{sa-atpgs-12}.
We are interested in the case where only measure $\nu$ has finite support.
More precisely, we assume that a set $S$ of $n$ point sites $p_i$ is given, and
numbers $\lambda_i > 0$ whose sum equals~1. A body $C$ of volume~1 must be split into
subsets $C_i$ of volume $\lambda_i$ in such a way that the total cost of transporting,
for each $i$, all points of $C_i$ to their site $p_i$ becomes a minimum.
In this setting, volume is measured by some measure $\mu$, and
transport cost $d(z,z')$ by some measure of distance.
Gangbo and McCann~\cite{gm-got-96} report on the cases
where either $d(z,z') = h(z-z')$ with a strictly convex function $h$,
or $d(z,z') = l(|z-z'|)$ with a non-negative, strictly concave function $l$
of the Euclidean norm.
As a consequence of deep results on the general Monge-Kantorovich problem,
they prove a surprising fact. The minimum cost partition of $C$ is given
by the additively weighted Voronoi diagram of the sites $p_i$, based
on cost function $d$ and additive weights $w_i$. In this structure, the Voronoi
region of $p_i$ contains all points $z$ satisfying
\[
d(p_i,z) - w_i \ < \ d(p_j,z) - w_j \mbox{ for all } j\not= i.
\]
Villani~\cite{v-oton-09} has more recently observed that this holds even for more general
cost functions that need not be invariant under translations, and in the case where both distributions are continuous.
Figure~\ref{cones-fig} depicts how to obtain this structure in dimension $d=2$. For each point site $p\in S$
we construct in $\mathbb{R}^{3}$ the cone $\{ (z,d(p,z)-w_p)\ | \ z \in \mathbb{R}^2\}$. The lower envelope of the cones, projected
onto the $XY$--plane, results in the Voronoi diagram.
\begin{figure}[hbtp]%
\begin{center}%
\includegraphics[scale=0.5,keepaspectratio]{Figures/cones.pdf}%
\caption{An additively weighted Voronoi diagram as the lower envelope of cones.}%
\label{cones-fig}
\end{center}%
\end{figure}
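Equivalently, the lower envelope can be sampled pointwise: each $z$ is assigned to the site minimizing $d(p,z)-w_p$. A minimal sketch (NumPy; the sites and weights are illustrative, not taken from the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sites = rng.random((5, 2))      # point sites p in S
weights = 0.2 * rng.random(5)   # additive weights w_p

# Evaluate the lower envelope of the cones z -> d(p,z) - w_p on a grid.
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
envelope = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2) - weights
labels = envelope.argmin(axis=1)  # Voronoi region containing each grid point
\end{verbatim}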
Independently, Aurenhammer et al.~\cite{aha-mttls-98} have studied the case where $d(z,z')=|z-z'|^2$.
Here, a {\em power diagram} (see~\cite{ak-vd-99}) gives an optimum splitting of $C$. In addition
to structural results, they provide algorithms for computing the weights $w_i$.
Regarding the case where the transportation cost from point $z$ to site $p_i$ is given by individual cost functions $d_{p_i}(z)$, it is still unknown how to compute the weights $w_i$.
\vspace{\baselineskip}
In this paper we consider the situation where the cost $d(p_i,z)$ of transporting point $z$ to site $p_i$
is given by individual distance functions $d_{p_i}(z)$.
We require that the weighted Voronoi
diagram based on the functions $d_{p_i}(\cdot)$ is well-behaved, in the following sense.
Bisectors are $(d-1)$--dimensional, and increasing weight $w_i$ causes the bisectors of $p_i$ to sweep
$d$--space in a continuous way.
These requirements are fulfilled for the Euclidean metric and,
at least in dimension~2, if the distance function $d_p(z) = d(z-p)$ assigned to each site $p$ is a translate of the same strictly convex distance function $d(\cdot)$.
We show that these assumptions are strong enough to obtain the result of~\cite{gm-got-96,aha-mttls-98}:
The weighted Voronoi diagram based on the functions $d_{p_i}(\cdot)$ optimally solves the transportation problem,
if weights are suitably chosen.
Whereas~\cite{gm-got-96} derives this fact from a more general measure-theoretic theorem, our proof uses
the minimization of a quadratic objective function and
geometric arguments.
The purpose of our paper is to show that such a simple proof is possible.
After stating some definitions in Section~\ref{defi-sec} we generalize arguments from~\cite{aha-mttls-98}
to prove, in Section~\ref{parti-sec}, that $C$ can be split into parts of arbitrary given
volumes by a weighted Voronoi diagram, for a suitable choice of weights.
Then, in Section~\ref{opti-sec},
we show that such a partition is optimal, and unique.
In Section~\ref{unique-sec} we show that if $C$ is connected, these weights are uniquely determined, up to addition of a constant.
\section{Definitions} \label{defi-sec}
Let $\mu$ be a measure defined for all Lebesgue-measurable
subsets of $\mathbb{R}^d$.
We assume that $\mu$ and the $d$-dimensional Lebesgue measure are mutually continuous, that is, $\mu$ vanishes exactly on the sets of Lebesgue measure zero.
Let $S$ denote a set of $n$ point sites in $\mathbb{R}^d$. For each $p \in S$ we are given
a continuous function
\begin{eqnarray*}
d_p \colon \mathbb{R}^d \to \mathbb{R}_{\geq 0}
\end{eqnarray*}
that assigns to each point $z$ of $\mathbb{R}^d$ a nonnegative value $d_p(z)$ as the ``distance''
from site $p$ to $z$.
For $p \not=q \in S$ and $\gamma \in \mathbb{R}$ we define
the \emph{bisector}
\[
{B_\gamma}(p,q) := \{\, z \in \mathbb{R}^d \mid d_p(z) - d_q(z) = \gamma\,\}
\]
and the \emph{region}
\[
{R_\gamma}(p,q) := \{\, z \in \mathbb{R}^d \mid d_p(z) - d_q(z) < \gamma\,\}.
\]
The sets $R_{\gamma}(p,q)$ are open and increase with $\gamma$.
Now let us assume that for each site $p \in S$ an additive weight $w_p$ is given,
and let
\[
w=(w_1, w_2, \ldots, w_n)
\]
denote the vector of all weights, according to some ordering $p_1, p_2, \ldots$ of $S$. Then,
$B_{w_i - w_j}(p_i,p_j)$ is called the {\em additively weighted bisector} of $p_i$ and $p_j$, and
\[
\mbox{VR}_w(p_i,S):= \bigcap_{j \not= i} {R_{w_i - w_j}}(p_i,p_j)
\]
is the {\em additively weighted Voronoi region} of $p_i$ with respect to $S$.
It consists of all points~$z$ for which $d_{p_i}(z) - w_i$ is smaller
than all values $d_{p_j}(z) - w_j$, by definition.
As usual,
\[
V_w(S) := \mathbb{R}^d \setminus \bigcup_i \mbox{VR}_w(p_i,S)
\]
is called the {\em additively weighted Voronoi diagram} of $S$;
see~\cite{ak-vd-99}. Clearly, $\mbox{VR}_w(p_i,S)$ and $V_w(S)$ do not change if
the same constant is added to all weights in $w$.
Increasing a single value $w_i$ will increase the size of $p_i$'s Voronoi region.
\begin{figure}[hbtp]%
\begin{center}%
\includegraphics[scale=0.42,keepaspectratio]{Figures/hyperbolae2.pdf}%
\caption{(i) Sets $R_\gamma(p,q)$ for increasing values of $\gamma$.
(ii) An additively weighted Voronoi diagram.}%
\label{hyperbolae-fig}
\end{center}%
\end{figure}
\vspace{\baselineskip}
\noindent
{\bf Example.} Let $d=2$ and let $d_p(z)=|p-z|$ be the Euclidean distance.
Given weights $w_p, w_q$, the bisector $B_{w_p - w_q}(p,q)$ is a line if $w_p=w_q$,
and a branch of a hyperbola otherwise. Figure~\ref{hyperbolae-fig}(i) shows how
$B_\gamma(p,q)$
sweeps across the plane as $\gamma$ grows
from $-\infty$ to $\infty$. In fact, when $|\gamma|=|p-q|$, the bisector degenerates to a ray, and for larger $|\gamma|$, the bisectors are empty.
The bisector $B_\gamma(p,q)$ forms the boundary of the set $R_\gamma(p,q)$, which increases with $\gamma$.
Any bounded set $C$ in the plane is eventually reached as $\gamma$ grows.
From then on the volume of $C\cap R_\gamma (p,q)$ is continuously
growing until all of $C$ is contained in $R_\gamma(p,q)$.
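This behaviour can be checked empirically: a Monte Carlo estimate of $\gamma \mapsto \mu(C \cap R_\gamma(p,q))$ grows continuously from $0$ to $\mu(C)$. A small illustrative sketch (NumPy; here $C$ is taken to be the unit square with the Lebesgue measure):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p, q = np.array([0.2, 0.5]), np.array([0.8, 0.5])
Z = rng.random((100_000, 2))   # uniform sample of C = unit square

diff = np.linalg.norm(Z - p, axis=1) - np.linalg.norm(Z - q, axis=1)
for gamma in np.linspace(-0.6, 0.6, 7):   # [m_pq, M_pq] = [-|p-q|, |p-q|]
    vol = (diff < gamma).mean()           # estimate of mu(C cap R_gamma(p,q))
    print(f"gamma={gamma:+.1f}  volume ~ {vol:.3f}")
\end{verbatim}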
Given $n$ points $p_j$ with additive weights $w_j$, raising the value of a single
weight $w_i$ will cause all sets $R_{w_i - w_j}(p_i, p_j)$ to grow until $C$ is fully
contained in the Voronoi region $\mbox{VR}_w(p_i,S)$ of~$p_i$.
Figure~\ref{hyperbolae-fig}(ii) shows an additively weighted Voronoi diagram $V_w(S)$ based on the Euclidean distance.
It partitions the plane into~5 two-dimensional cells (Voronoi regions), and consists of~9
cells of dimension~1 (Voronoi edges)
and of 5~cells
of dimension~0 (Voronoi vertices). Each cell is homeomorphic to an open ball of appropriate dimension.
The next definition generalizes the above properties to the setting used in this paper.
\vspace{\baselineskip}
\begin{definition} \label{admi-defi}
A system of continuous distance functions $d_p(\cdot)$, where $p \in S$, is called {\em admissible} if
for all $p \not= q \in S$, and for each bounded open set $C \subset \mathbb{R}^d$,
there exist values $m_{pq}$ and $M_{pq}$ such that
$ \gamma \mapsto \mu \left(C \cap R_\gamma (p,q)\right)$
is continuously increasing from $0$ to $\mu(C)$ as
$\gamma$ grows from $m_{pq}$ to $M_{pq}$.
Furthermore,
$C \cap R_\gamma (p,q) = \emptyset$ if $\gamma \leq m_{pq}$ and
$C \subset R_\gamma (p,q)$ if $M_{pq} \leq \gamma$.
\end{definition}
By symmetry we can assume w.~l.~o.~g.~$m_{qp} = - M_{pq}$ and $M_{qp} = - m_{pq}$;
see Figure~\ref{hyperbolae-fig}(i).
We will need the following structural property, which essentially says that bisectors have measure~0.
\begin{lemma} \label{bisector-volume-lemm}
For a system of admissible distance functions $d_p(\cdot)$, where $p\in S$, for any two points $p\not=q \in S$ and any $\gamma \in \mathbb{R}$, and
for any bounded open set $C\subset \mathbb{R}^d$, we have $\mu \left(C \cap B_\gamma (p,q)\right) = 0$.
\end{lemma}
\begin{proof}
By definition of $R_\gamma(p,q)$ and $B_\gamma(p,q)$, for all $\varepsilon > 0$ and $\gamma\in \mathbb{R}$ the following inequality holds:
\[\mu(C\cap R_{\gamma + \varepsilon}(p,q)) \geq \mu(C\cap R_{\gamma}(p,q)) + \mu(C\cap B_{\gamma}(p,q))\]
We let $\varepsilon$ decrease to 0.
Since, according to Definition~\ref{admi-defi}, the function $\gamma \mapsto \mu \left(C \cap R_\gamma (p,q)\right)$ is continuous, we get
\[\lim_{\varepsilon \searrow 0} \left(\mu(C\cap R_{\gamma+\varepsilon}(p,q))\right) = \mu(C\cap R_{\gamma}(p,q)).\]
This implies $\mu(C\cap B_{\gamma}(p,q)) = 0$.
\end{proof}
\section{Partitions of prescribed size} \label{parti-sec}
The following theorem shows that we can use an additively weighted Voronoi
diagram based on an admissible system of distance functions $d_p$ to partition $C$ into subsets of prescribed
sizes.
\begin{theorem} \label{partition-theo}
Let $n \geq 2$ and
let $d_{p_i}(\cdot)$, $1 \leq i \leq n$,
be an admissible system as in
Definition~\ref{admi-defi}.
Let $C$ be a bounded open
subset of $\mathbb{R}^d$. Suppose we are given $n$ real numbers
$\lambda_i >0$ with
$\lambda_1 + \lambda_2 + \cdots + \lambda_n =\mu(C)$.
Then
there is a weight vector $w=(w_1, w_2, \ldots, w_n)$ such that
\[
\mu(C \cap \mathrm{VR}_w(p_i,S)) = \lambda_i
\]
holds for $1 \leq i \leq n$.
If, moreover, $C$ is pathwise connected
then $w$ is unique up to addition of a constant to all $w_i$, and
the parts $C_i = C \cap \mathrm{VR}_w(p_i,S)$ of this partition are unique.
\end{theorem}
\begin{proof}
The function
\[
\Phi(w) := \sum_{i=1}^n \bigl( \mu(C \cap \mbox{VR}_w(p_i,S)) - \lambda_i \bigr)^2
\]
measures how far $w$ is from fulfilling the theorem.
Since each function $\gamma \mapsto \mu(C \cap R_\gamma(p_i,p_j))$ is
continuous by Definition~\ref{admi-defi},
and because
\begin{multline*}
\lvert \mu(C \cap \mbox{VR}_w(p_i,S)) - \mu(C \cap \mbox{VR}_{w'}(p_i,S)) \rvert \leq\\
\sum_{j \not= i} \lvert \mu(C \cap R_{w_i-w_j}(p_i,p_j)) - \mu(C \cap R_{w'_i-w'_j}(p_i,p_j))\rvert,
\end{multline*}
we conclude that the function $\Phi$ is continuous, too.
Let
$ D := \max \{\,M_{pq} \mid p \not= q \,\}$
with $M_{pq}$ as in
Definition~\ref{admi-defi}.
Note that $m_{pq}=-M_{qp}$ and $m_{pq}<M_{pq}$ together imply that $D$ is a positive constant.
On the compact set $[0,D]^n$, the function $\Phi$ attains its minimum value at some argument $w$.
If $\Phi(w) = 0$ we are done.
Suppose that $\Phi(w) >0$. Since the volumes of the Voronoi regions inside $C$ add up to~$\mu(C)$,
there must be some sites $p_j$ whose Voronoi regions have too large an intersection with $C$, i.e., of volume $> \lambda_j$,
while other regions' intersections with $C$ are too small.
We now show that by decreasing the weights of some points of $S$ we can decrease the value of $\Phi$.
For any point $p_i\in S$ we consider the continuous {``excess'' function}
$$\phi_{p_i}(w) := \mu(C\cap \mbox{VR}_w(p_i,S)) - \lambda_i$$ that indicates by which amount of volume the region of $p_i$ is too large or too small.
We have $$\sum_{i=1}^n\phi_{p_i}(w)=0.$$
Let $T \subseteq S$ be the set of points $p_i$ for which $\phi_{p_i}(w)$ attains its maximum value, $\tau$.
By assumption, $T$ is neither empty nor equal to $S$.
We will reduce the weights of the points in $T$ by the same amount $\delta$, obtaining a new weight vector $w'$.
Note that, by construction,
$\mbox{VR}_w(t,S) \supseteq \mbox{VR}_{w'}(t,S)$
for any site $t\in T$,
and $\mbox{VR}_w(s,S) \subseteq \mbox{VR}_{w'}(s,S)$,
for any site $s\in S \setminus T$.
Let $w'(\delta)$ denote the weight vector $(w'_1,\dots, w'_n)$ where $w'_i := w_i - \delta$ if $p_i\in T$, and $w'_i := w_i$ otherwise.
The volume of the set of points of $C$ that is assigned by $V_{w}(S)$ to some point in $T$ and by $V_{w'(\delta)}(S)$ to some point in $S\setminus T$ is \[\mbox{Loss}(\delta) := \sum_{t\in T}\bigl( \phi_t(w)-\phi_t(w'(\delta))\bigr).\]
Recall that $\tau = \max_{p\in S}\phi_{p}(w) > 0$, and let $\tau' := \max_{s\in S\setminus T}\phi_s(w) < \tau$.
Our goal is to choose $\delta$ in such a way that
\begin{equation}
\label{eq:loss}
\text{Loss}(\delta) = \frac{\tau-\tau'}{2n}
\end{equation}
and to show that this choice decreases $\Phi$ strictly.
Since $\mbox{Loss}(0) = 0$ and $\mbox{Loss}(\delta)$ is a monotone and continuous function,
our aim is to solve \eqref{eq:loss} by the intermediate value theorem.
For all $\delta > 2D$ we observe that $\mbox{Loss}(\delta) = \sum_{p_i\in T}(\tau + \lambda_i) > \tau$ holds, since in this case the Voronoi region $\mbox{VR}_{w'(\delta)}(p_i,S)$ of every point $p_i\in T$ contains no point of $C$.
Now $0 < \frac{\tau-\tau'}{2n} < \tau$ implies that we can indeed solve \eqref{eq:loss} by the intermediate value theorem.
The latter inequality is evident if $\tau' \geq 0$. Otherwise, it follows from $-\tau'\leq
-\sum_{s\in S \setminus T}\phi_s(w)
= \sum_{t\in T}\phi_t(w)
= |T|\cdot \tau \leq (n-1)\tau$.
In the following let $w' := w'(\delta)$, for short.
In order to prove the theorem we use the following reformulation of the statement $\Phi(w)>\Phi(w')$:
\begin{eqnarray}
\sum_{t\in T} \bigl(\phi_t(w)^2 - \phi_t(w')^2\bigr) > \sum_{s\in S\setminus T}\bigl( \phi_s(w')^2 - \phi_s(w)^2\bigr)\label{final-eqn}
\end{eqnarray}
Now, in order to
prove
inequality~$(\ref{final-eqn})$,
we
give an upper bound on the right-hand side of the inequality,
which will be compared with a lower bound on the left-hand side of the inequality.
Let $\varepsilon_s \geq 0$ denote the volume increase, in $C$, of the region of site $s \in S \setminus T$.
By the choice of $\delta$, we have
$\sum_{s\in S\setminus T}\varepsilon_s =\frac{\tau - \tau'}{2n}$, and
\begin{align*}
\sum_{s\in S\setminus T}\bigl(\phi_s(w')^2 - \phi_s(w)^2\bigr)
&= \sum_{s\in S\setminus T}\bigl((\phi_s(w)+\varepsilon_s)^2 - \phi_s(w)^2\bigr)\\
&= \sum_{s\in S\setminus T} 2\varepsilon_s\phi_s(w) + \sum_{s\in S\setminus T}\varepsilon_s^2\\%\label{means-inequality}\\
&\leq 2\tau'\sum_{s\in S\setminus T}\varepsilon_s + \Bigl(\sum_{s\in S\setminus T}\varepsilon_s\Bigr)^2\\%\label{tau-defi}\\
&= 2\tau'\frac{\tau - \tau'}{2n} + \left(\frac{\tau - \tau'}{2n}\right)^2
\end{align*}
We now give a lower bound on the left-hand side of~$(\ref{final-eqn})$.
If $\varepsilon_t \geq 0$ denotes the volume decrease, in $C$, of the region of site $t\in T$, then the corresponding calculation for the points in $T$ yields:
\begin{align*}
\sum_{t\in T}\bigl(\phi_t(w)^2 - \phi_t(w')^2\bigr)
&= \sum_{t\in T}\bigl(\phi_t(w)^2 - (\phi_t(w)-\varepsilon_t)^2\bigr)\\
&= \sum_{t\in T} 2\phi_t(w)\varepsilon_t - \sum_{t\in T}\varepsilon_t^2\\
&\geq 2\tau\sum_{t\in T}\varepsilon_t - \Bigl(\sum_{t\in T}\varepsilon_t\Bigr)^2\\
&=2\tau\frac{\tau-\tau'}{2n} - \left(\frac{\tau-\tau'}{2n}\right)^2
\end{align*}
where we have used the same arguments as above and also that $\phi_t(w) = \tau$, by definition, for all $t\in T$.
It is easy to verify the inequality
\begin{eqnarray}
2\tau\frac{\tau-\tau'}{2n} - \left(\frac{\tau-\tau'}{2n}\right)^2
> 2\tau'\frac{\tau - \tau'}{2n} + \left(\frac{\tau - \tau'}{2n}\right)^2\nonumber
\end{eqnarray}
for
$\tau-\tau' > 0$ and $n \geq 2$.
Hence inequality~$(\ref{final-eqn})$ holds.
We have now shown the existence of a weight vector $w'$ with $\Phi(w')<\Phi(w)$.
If $w'$ belongs to $[0,D]^n$, we are done. Otherwise,
suppose the maximum weight $w'_m$ occurs for some point $p_m$.
We add $D-w'_m$ to all weights. This does not change the Voronoi diagram defined by $w'$, but now the maximum weight $w'_m$ is exactly $D$.
Next we observe that if now the weight of some point $p_j$ is $<0$, then its Voronoi region is empty, because $C$ is contained in the set $R_{w'_m-w'_j}(p_m,p_j)$, by definition of $D$. Assigning weight $0$ to each of those points also results in empty Voronoi regions for those points. Altogether, we now have $0 \leq w'_i \leq D$ for all $p_i\in S$.
We have shown the existence of a weight vector $w'\in [0,D]^n$ that satisfies $\Phi(w')<\Phi(w)$.
This contradicts the minimality of $\Phi(w)$.
Hence the minimum value of $\Phi$ is $0$.
Uniqueness of the partition
will be discussed in Section~\ref{unique-sec}.
\end{proof}
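In practice, a weight vector realizing prescribed volumes can be approximated numerically, e.g.\ by growing under-full regions and shrinking over-full ones; the algorithms of~\cite{aha-mttls-98} are more refined. A crude Monte Carlo sketch (NumPy; step size and sample size are illustrative, not taken from the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
sites = rng.random((4, 2))
lam = np.array([0.4, 0.3, 0.2, 0.1])   # prescribed volumes, sum = mu(C) = 1
Z = rng.random((200_000, 2))           # Monte Carlo sample of C = unit square
dist = np.linalg.norm(Z[:, None, :] - sites[None, :, :], axis=2)

w = np.zeros(4)
for _ in range(500):
    vol = np.bincount((dist - w).argmin(axis=1), minlength=4) / len(Z)
    w += 0.5 * (lam - vol)   # grow under-full regions, shrink over-full ones
print("target:", lam, " achieved:", np.round(vol, 3))
\end{verbatim}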
\section{Optimality} \label{opti-sec}
Again, let $d_{p}(\cdot)$, $p \in S = \{p_1, p_2, \ldots, p_n\}$,
be an admissible system as in Definition~\ref{admi-defi},
and let $C$ denote a bounded and open subset of $\mathbb{R}^d$.
Moreover, we are given real numbers $\lambda_i >0$ whose sum equals~$\mu(C)$.
By Theorem~\ref{partition-theo} there
exists a weight vector $w=(w_1, w_2, \ldots, w_n)$ satisfying
\[
\mu(C \cap \mbox{VR}_w(p_i,S)) = \lambda_i \mbox{ for } i=1, \ldots, n.
\]
Now we prove that this subdivision of $C$
minimizes the transportation cost, i.~e., the average distance from each
point to its site.
It is convenient to describe partitions of $C$ by $\mu$-measurable maps $f\colon C\to S$
where, for each $p \in S$, $f^{-1}(p)$ denotes the region assigned to $p$.
Let $F_\Lambda$ denote the set of those maps $f$ satisfying
$\mu(f^{-1}(p_i)) = \lambda_i
$ for $i = 1, \ldots, n$.
\begin{theorem} \label{opti-theo}
The partition of $C$ into regions $C_i := C \cap \mathrm{VR}_w(p_i,S)$
minimizes
\[\mathrm{cost}(f) := \int_C \! d_{f(z)}(z) \, \mathrm{d} \mu(z)\]
over all maps $f \in F_\Lambda$. Any other partition of minimal cost differs at most by
sets of measure zero from $(C_i)_i$.
\end{theorem}
If the measure $\mu$ were a discrete measure concentrated on a finite
set $C$ of points, the optimum partition problem would turn into the
classical transportation problem, a special type of network flow
problem on a bipartite graph. The weights $w_i$ would be obtained as
dual variables.
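The claimed optimality can also be illustrated numerically: for fixed weights, the weighted-Voronoi assignment is never more expensive than any other assignment with the same part sizes. A small sanity check (NumPy; permuting the labels preserves the part sizes):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
sites = rng.random((3, 2))
w = np.array([0.0, 0.05, 0.1])   # any fixed additive weights
Z = rng.random((50_000, 2))
dist = np.linalg.norm(Z[:, None, :] - sites[None, :, :], axis=2)

labels = (dist - w).argmin(axis=1)   # f_w: weighted-Voronoi assignment
base = dist[np.arange(len(Z)), labels].sum()
for _ in range(5):
    perm = rng.permutation(labels)   # same part sizes, different map
    assert dist[np.arange(len(Z)), perm].sum() >= base
\end{verbatim}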
The following proof mimics the
optimality proof for the discrete case.
\begin{proof}
Let $f_w$ be defined by $f_w (C_i)=\{p_i\}$ for all $i$, and let $f\in F_\Lambda$ be another map.
If we define $c(z,p) := d_p(z)$, then the cost of map $f$ is
\begin{align}
\text{cost}(f)
&= \int_C \left(c(z,f(z)) - w_{f(z)} + w_{f(z)}\right) \, \mathrm{d} \mu(z)
\nonumber \\%\label{cost-first-line-eqn}\\
&= \int_C \left(c(z,f(z)) - w_{f(z)}\right) \, \mathrm{d} \mu(z) + \int_C w_{f(z)} \, \mathrm{d} \mu(z)\label{cost-estimate-line-one-eqn}\\
&\geq \int_C \left(c(z,f_w(z)) - w_{f_w(z)}\right) \, \mathrm{d} \mu(z) + \int_C w_{f(z)} \, \mathrm{d} \mu(z)\label{cost-estimate-line-two-eqn}\\
&= \text{cost}(f_w)
- \int_Cw_{f_w(z)} \, \mathrm{d} \mu(z) + \int_C w_{f(z)} \, \mathrm{d} \mu(z)\label{cost-last-line-eqn}
\end{align}
where the inequality between lines~$(\ref{cost-estimate-line-one-eqn})$ and~$(\ref{cost-estimate-line-two-eqn})$ follows from the fact that
\begin{equation}
\label{eq:weight-better}
c(z,f_w(z)) - w_{f_w(z)} \leq c(z,f(z)) - w_{f(z)},
\end{equation}
by the definition of additively weighted Voronoi diagrams. Here we have assumed for simplicity that the map $f_w$ assigns every point $p\in V_w(S)$ to \emph{some} weighted nearest neighbor of $p$ in $S$, according to some tie-break rule. Since Lemma~\ref{bisector-volume-lemm} implies $\mu(C \cap V_w(S)) = 0$, changing the assignment of those points has no influence on $\mbox{cost}(f_w)$.
Both maps $f$ and $f_w$ partition $C$ into regions of volume $\lambda_i$, $1\leq i\leq n$, i.~e., $\mu(f_w^{-1}(p_i)) = \mu(f^{-1}(p_i)) = \lambda_i$ for all $p_i\in S$. Therefore,
\begin{equation*}
\int_C w_{f(z)} \, \mathrm{d} \mu(z)
= \sum_{p_i\in S}\int_{f^{-1}(p_i)}w_{f(z)} \, \mathrm{d} \mu(z)
= \sum_{p_i\in S} \lambda_i w_i,
\end{equation*}
and the same value is obtained for $f_w$:
\begin{equation*}
\int_C w_{f_w(z)} \, \mathrm{d} \mu(z)
= \sum_{p_i\in S}\int_{f_w^{-1}(p_i)}w_{f_w(z)} \, \mathrm{d} \mu(z)
= \sum_{p_i\in S} \lambda_i w_i.
\end{equation*}
Now the optimality claim follows from~$(\ref{cost-last-line-eqn})$.
We still have to prove uniqueness of the solution, up to a set of measure~0.
Assume that $f$ differs from $f_w$ on some set $G$ of positive measure.
We are done if we show that the inequality between~$(\ref{cost-estimate-line-one-eqn})$ and~$(\ref{cost-estimate-line-two-eqn})$
holds as a strict inequality.
We can remove the points of $V_w(S)$ from $G$, since they have measure 0.
Then \eqref{eq:weight-better} holds as a strict inequality on $G$.
The difference
between~$(\ref{cost-estimate-line-one-eqn})$ and~$(\ref{cost-estimate-line-two-eqn})$ is then the integral of the positive function
$g(z) := c(z,f_w(z)) - w_{f_w(z)} - \bigl( c(z,f(z)) - w_{f(z)}\bigr)$ over a
set $G$ of positive measure. Such an integral
$\int_{z\in G} g(z)\,\mathrm{d}\mu(z)$
always has a positive value:
the countably many sets
$\{\,z\in G \mid \frac1k >g(z)\ge \frac1{k+1}\,\}$
and
$\{\,z\in G \mid g(z)\ge 1\,\}$
partition $G$, and thus their measures add up to $\mu(G)$. Therefore,
at least one of these sets has positive measure, and it follows directly
that the integral is positive.
\end{proof}
\section{Uniqueness}\label{unique-sec}
In contradistinction to Theorem~\ref{opti-theo}, the weight vector $w$
and the resulting weighted Voronoi partition
in Theorem~\ref{partition-theo} are not unique in general:
if $C$ is disconnected, changing $w$ may move the
bisectors between different connected components of $C$ without
affecting the partition, or only changing it in boundary points of $C$.
However, uniqueness can be obtained under the additional assumption that
$C$ is pathwise connected.
Then
in Theorem~\ref{partition-theo},
the weight vector $w=(w_1,\dots,w_n)$ that partitions $C$ into regions of prescribed size is unique up to addition of the same constant to all $w_i$.
Consequently, the parts in the
weighted Voronoi partition
outside of $C$
are also uniquely determined.
The proof uses the following lemma.
\begin{lemma}\label{decrease-lemm}
Assume that $C$ is path-connected, in addition to being an open
bounded set. Consider a partition of the point set $S$ in two
nonempty sets $T$ and $S\setminus T$. Suppose that, for a given
weight vector $w$, all regions $C\cap \mbox{VR}_w(p,S)$ have
positive measure.
If we increase the weight of every point in $T$ by the same constant
$\delta>0$, then $\mu(C\cap \mbox{VR}_w(t,S))$ increases, for some
$t\in T$.
\end{lemma}
\begin{proof}
In this section, we will omit the reference to the set $S$ from the Voronoi regions and simply write
$ \mbox{VR}_w(p)$ instead of
$ \mbox{VR}_w(p,S)$, since the point set $S$ is fixed.
It is clear that the regions $\mbox{VR}_w(t)$
for $t\in T$ can only
grow, in the sense of gaining new points, but we have to show that
this growth results in a strict increase of~$\mu(C\cap\mbox{VR}_w(t))$
for some $t\in T$.
Consider the continuous function
$$d_T(z) := \min \{\,d_p(z)-w_p\mid p\in T\,\}$$
and the function
$d_{S\setminus T}(z)$, which is defined analogously,
and define
\begin{align*}
\mathrm{VR}_w(T) & := \{\,z\in \mathbb{R}^d \mid d_T(z) < d_{S\setminus T}(z)\,\}
\\
\mathrm{VR}_w(S\setminus T) & := \{\,z\in \mathbb{R}^d \mid d_T(z) > d_{S\setminus T}(z)\,\}
\end{align*}
These two sets partition $\mathbb{R}^d$, apart from some ``generalized bisector''
where equality holds.
Roughly,
$\mathrm{VR}_w(T)$ is obtained by merging
all Voronoi regions $ \mbox{VR}_w(t)$ for $t\in T$ into one set,
together with some bisecting boundaries between them.
Since these boundaries have zero measure inside $C$, we have
\begin{equation}
\label{eq:sum-parts}
\mu(C\cap \mbox{VR}_w(T))
= \sum_{t\in T} \mu(C\cap \mbox{VR}_w(t))
\end{equation}
Let $w = (w_1,\dots, w_n)$ and $w' = (w'_1,\dots, w'_n)$, where $w'_i
:= w_i+\delta$ if $p_i\in T$ and $w'_i := w_i$ otherwise.
It suffices to show that in going from $w$ to $w'$,
$ \mu(C\cap \mbox{VR}_w(T)) $ increases, since then, by
\eqref{eq:sum-parts}, one of the constituent Voronoi regions
$\mu(C\cap \mbox{VR}_w(t))$ must increase.
As mentioned, the set
$C\cap\mbox{VR}_w(T)$ can only grow, but we have to show that this
growth results in a strict increase of~$\mu$.
If $C\cap\mathrm{VR}_{w'}(S\setminus T)=\emptyset$, we are done because
$\mu(C\cap \mbox{VR}_{w'}(T))$ has increased to $\mu(C)$.
Otherwise, consider a path
in $C$ connecting some point $p_1 \in
\mbox{VR}_w(T)$ with some point $p_2 \in \mathrm{VR}_{w'}(S\setminus
T)$.
We have, by definition,
$d_T(p_1) < d_{S\setminus T}(p_1)$ and
$d_T(p_2)-\delta > d_{S\setminus T}(p_2)$.
Hence, by the intermediate value theorem, we can find a point $p$
on the path
with
$d_T(p)-\delta/2= d_{S\setminus T}(p)$.
This point is an interior point of
$C\cap \mbox{VR}_{w}(S\setminus T)$
and $C\cap \mbox{VR}_{w'}(T)$, and hence there is a neighborhood of $p$
which adds a positive measure to
$C\cap \mbox{VR}_{w}(T)$ when going from $w$ to $w'$.
\end{proof}
To finish the proof of uniqueness in Theorem~\ref{partition-theo},
suppose that two weight vectors $w, w'$ do the job of producing the
desired measures $\lambda_i$ for the regions, but $w' - w \not= (c, c,
\ldots, c)$ for any constant $c$.
We may assume, by adding a constant to all entries, that $w\le w'$, and $w_i =w'_i$ for some $i$.
We will now gradually change $w$ into $w'$ by modifying its entries.
Let $T := \{\,p_i\in S \mid w'_i-w_i = \max_j (w'_j-w_j)\,\}$.
By assumption, $T$ is neither empty nor equal to~$S$.
We increase the weights of the points in $T$, leaving
the remaining values fixed. The amount $\delta$ is chosen in such
a way that
$w'_i-w_i$ for $i\in T$ does not decrease below the remaining
differences $w'_j-w_j$ for $j\in S\setminus T$.
By Lemma~\ref{decrease-lemm}, the measure of the region assigned to
some point $t_0\in T$ strictly increases in this process. When the
limiting value $\delta$ is reached, the set $T$ of points where
$w'_i-w_i$ achieves its maximum is enlarged, and the whole process is
repeated until $w=w'$. During this process, $t_0$ will always be among
the points whose weight is increased, and therefore, the measure of
its region can never shrink back to the original value---a contradiction.
\qed
\section{Conclusion and future work}
We have given a geometric proof for the fact that additively weighted Voronoi diagrams
can optimally solve some cases of the Monge-Kantorovich transportation problem, where one measure has
finite support. Surprisingly, the existence of an optimal solution---the main mathematical challenge in the general
case---is an easy consequence of our proof. It remains to be seen to which extent our assumptions on the
distance functions $d_{p_i}$ can be further generalized.
\bibliographystyle{model1-num-names}
| {
"timestamp": "2012-06-15T02:02:07",
"yymm": "1206",
"arxiv_id": "1206.3057",
"language": "en",
"url": "https://arxiv.org/abs/1206.3057",
"abstract": "We consider the following variant of the Monge-Kantorovich transportation problem. Let S be a finite set of point sites in d dimensions. A bounded set C in d-dimensional space is to be distributed among the sites p in S such that (i) each p receives a subset C_p of prescribed volume, and (ii) the average distance of all points of C from their respective sites p is minimized. In our model, volume is quantified by some measure, and the distance between a site p and a point z is given by a function d_p(z). Under quite liberal technical assumptions on C and on the functions d_p we show that a solution of minimum total cost can be obtained by intersecting with C the Voronoi diagram of the sites in S, based on the functions d_p with suitable additive weights. Moreover, this optimum partition is unique up to sets of measure zero. Unlike the deep analytic methods of classical transportation theory, our proof is based directly on geometric arguments.",
"subjects": "Metric Geometry (math.MG)",
"title": "Optimally solving a transportation problem using Voronoi diagrams",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717420994768,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7089449217415449
} |
https://arxiv.org/abs/hep-th/9402065 | Solutions of the Spherically Symmetric Wave Equation in $p+q$ Dimensions | We discuss solutions of the spherically symmetric wave equation and Klein Gordon equation in an arbitrary number of spatial and temporal dimensions. Starting from a given solution, we present various procedures to generate futher solutions in the same or in different dimensions. The transition from odd to even or non integer dimensions can be performed by fractional derivation or integration. The dimensional shift, however, can also be interpreted simply as a modification of the dynamics. We also discuss the analytic continuation to arbitrary real powers of the D'Alembert operator. There, particular peculiarities in the pole structure show up when $p$ and $q$ are both even. Finally we give operators which transform a time into a space coordinate and v.v. and comment on their possible relation to black holes. In this context, we describe a few aspects of the extension of our discussion to a curved metrics. | \section{Introduction}
The wave equation plays a very important role in practically all branches of
physics. It has a fundamental meaning in classical as well as
quantum physics, including field theory. This refers to both
the non-relativistic and the relativistic description. Hence there
is strong motivation to discuss solutions of the wave eq.,
or variants thereof such as the Klein Gordon eq., in all possible
situations.
Also the use of space-time dimensions different from $3+1$
is well established nowadays in most parts of physics. In this
paper we provide a
background of solutions for the spherically symmetric wave eq.,
which may prove useful in various contexts. But we do not discuss
particular applications here.
Recently, Bollini and Giambiagi described fractional powers of the
D'Alembert operator and of the Laplace operator in one temporal
and an arbitrary number of spatial dimensions and discussed Huygens'
principle and causality in this framework \cite{BG}. In a subsequent
paper, Giambiagi referred to the standard D'Alembert operator
and discussed relations among solutions of the wave
eq. again in one temporal and different spatial dimensions. He also
considered the Klein Gordon eq. and Green's functions.
Now we want to generalize both of these considerations
to $p+q$ dimensions. We are particularly interested in operations
shifting the dimensions in which a solution is valid.
As we will see, interesting and qualitatively new
properties arise, particularly in the pole structure for even $p$ and $q$.
The mathematical background for this is beautifully outlined in the
classical book of Gelfand and Shilov \cite{GS}.
At the end, we add some remarks about a generalization to curved metrics.
This is motivated from the fact that transitions of a spatial
to a temporal dimension and vice versa are known in general relativity:
if we cross the boundary of a black hole -- described e.g. by the
Schwarzschild metric -- time becomes spatial and
the radial component becomes temporal.
\section{The spherical wave equation in $d+1$ dimensions}
The case of one time and an arbitrary number of spatial
dimensions has been discussed extensively in \cite{jjg}.
We start by adding some complementary observations to this
case.
We consider solutions of the spherically symmetric wave eq.
\begin{equation} \label{wed1}
\Big[ \partial_{r}^{2} + \frac{d-1}{r} \partial_{r} - \partial_{t}^{2} \Big]
\phi_{d}(t,r) = 0
\end{equation}
where $\partial_{r} \doteq \partial / \partial r $ etc. We are not
interested in constant factors or additive constants in $\phi_{d}$.
With the well known substitution:
\begin{equation}
\Omega_{d}(t,r) \doteq r^{(d-1)/2} \phi_{d}(t,r)
\end{equation}
we can absorb the linear derivative. Eq. (\ref{wed1}) takes the form:
\begin{equation}
\Big[ \partial_{r}^{2} - \frac{(d-1)(d-3)}{4r^{2}} - \partial_{t}^{2} \Big]
\Omega_{d}(t,r) =0
\end{equation}
and we can immediately read off general solutions for $d \in \{1,3 \}$:
\begin{eqnarray}
\Omega_{1}(t,r) &=& F_{1}(t-r) + F_{2}(t+r) \ ; \quad
\Omega_{3}(t,r) = f_{1}(t-r) + f_{2}(t+r) \nonumber \\
\to \quad \phi_{1}(t,r) &=& F_{1}(t-r)+F_{2}(t+r) \ ; \quad
\phi_{3}(t,r) = \frac{f_{1}(t-r)+f_{2}(t+r)}{r} \label{d13}
\end{eqnarray}
where $F_{1},F_{2},f_{1},f_{2}$ are arbitrary functions or symbolic functions.
If we choose $\delta $ functions for them, the above solutions represent
waves which are: for $d=1$ moving to the right/left, and for $d=3$
outgoing/incoming.
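These solutions are easy to verify symbolically. A small illustrative check (SymPy) that $\phi_{3}=f(t-r)/r$ satisfies eq.~(\ref{wed1}) for $d=3$:
\begin{verbatim}
import sympy as sp

t, r = sp.symbols("t r", positive=True)
f = sp.Function("f")

phi = f(t - r) / r   # outgoing wave in d = 3
lhs = (sp.diff(phi, r, 2) + 2 / r * sp.diff(phi, r)
       - sp.diff(phi, t, 2))
print(sp.simplify(lhs))   # -> 0
\end{verbatim}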
Here $d=1$ and $d=3$ play a special role, since for no other dimension
there is a solution of the type:
\begin{equation}
\phi_{d}(t,r) = f(t \ ^{+}_{-} \ r)r^{\alpha} \qquad (\alpha \in {\kern+.25em\sf{R}\kern-.78em\sf{I}\kern+.78em\kern-.25em} )
\end{equation}
This can be seen by inserting this ansatz in (\ref{wed1}), which
yields the conditions: $2 \alpha = 1-d $ and $ \alpha^{2}=\alpha
(2-d)$, with the only solutions: $ \alpha =0, \ d=1 $ or $\alpha
=-1, \ d=3$.
In \cite{jjg} it was shown that if $\phi_{d}$ is a solution of (\ref{wed1}),
then the same holds for
\begin{equation} \label{difd}
\phi_{d+2}(t,r) = - \frac{1}{r} \partial_{r} \phi_{d}(t,r)
\end{equation}
In this way we can identify in (\ref{d13}): $F_{1}'=f_{1}, \
F_{2}'=f_{2}$ and we obtain a set of solutions for all odd $d$,
generated by the choice for $F_{1}, \ F_{2}$, e.g.
\begin{equation}
\phi_{5}(t,r) = \frac{f_{1}(t-r)+f_{2}(t+r)}{r^{3}} +
\frac{f_{1}'(t-r)-f_{2}'(t+r)}{r^{2}} \qquad {\rm etc.}
\end{equation}
Let's consider the two signs individually and first let $F_{2} \equiv 0$.
Then the solutions for all odd $d$ -- generated from $F_{1}$ by means
of (\ref{difd}) -- have the form:
\begin{equation} \label{series}
\phi_{2k+3} (t,r) = \sum_{n=0}^{k} a_{n}^{(k)} \frac{f^{(n)}(t-r)}
{r^{2k+1-n}}
\end{equation}
The coefficients $a_{n}^{(k)}$ are the elements of a ``modified
Pascalian triangle'':
\begin{equation} \label{pascal}
\begin{array}{ccccccccccccc}
k=0 &&&&&& 1 &&&&&& \\
k=1 &&&&&& 1 & 1 &&&&& \\
k=2 &&&&&& 3 & 3 & 1 &&&& \\
k=3 &&&&&& 15 & 15 & 6 & 1 &&& \\
k=4 &&&&&& 105 & 105 & 45 & 10 & 1 &&\\
k=5 &&&&&& 945 & 945 & 420 & 105 & 15 & 1 &\\
\dots &&&&&& \dots &&&&& \\
&&&&&& n=0 & n=1 & n=2 & n=3 & n=4 & n=5 & \dots
\end{array}
\end{equation}
($d=1$ is not represented here.)
The elements at the margins are: $a_{0}^{(k)}=(2k-1)!! \ , \ a_{k}^{(k)} =1$,
and the rest is determined by the recursion formula:
\begin{equation} \label{recur}
a_{n}^{(k)} = a_{n-1}^{(k-1)}+(2k-n-1)a_{n}^{(k-1)}
\end{equation}
As in the Pascalian triangle, the elements off the margin are built
from the two elements vertically and on the left above. The only
modifications are the left margin and the factor $(2k-n-1)$ in
(\ref{recur}).
We add two observations for the elements next to the margins, which
can easily be proved by induction. For all $k \in {\kern+.25em\sf{N}\kern-.86em\sf{I}\kern+.86em\kern-.25em} $ these elements
take the form:
\begin{equation}
a_{1}^{(k)} = a_{0}^{(k)} \ , \qquad a_{k-1}^{(k)}
= \left( \begin{array}{c} k+1 \\ 2 \end{array} \right)
\end{equation}
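The triangle can be reproduced programmatically from the two margins and the recursion (\ref{recur}); a short illustrative Python sketch:
\begin{verbatim}
def triangle(kmax):
    """Rows a_n^(k), built from a_0^(k) = (2k-1)!!, a_k^(k) = 1 and
    the recursion a_n^(k) = a_{n-1}^(k-1) + (2k-n-1) a_n^(k-1)."""
    rows = [[1]]
    for k in range(1, kmax + 1):
        prev = rows[-1]
        row = [(2 * k - 1) * prev[0]]   # left margin (2k-1)!!
        row += [prev[n - 1] + (2 * k - n - 1) * prev[n] for n in range(1, k)]
        row.append(1)                    # right margin
        rows.append(row)
    return rows

for row in triangle(5):
    print(row)   # 1 / 1 1 / 3 3 1 / 15 15 6 1 / 105 105 45 10 1 / ...
\end{verbatim}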
Now we consider the other case: $F_{1} \equiv 0$. The solutions
generated by (\ref{difd}) from $F_{2}$ can be represented in
the same way as (\ref{series}):
\begin{equation}
\phi_{2k+3}(t,r) = \sum_{n=0}^{k} \bar a_{n}^{(k)} \frac{f^{(n)}(t+r)}
{r^{2k+1-n}}
\end{equation}
The triangle for $\bar a_{n}^{(k)}$, however, is different:
\begin{equation}
\begin{array}{ccccccccccccc}
k=0 &&&&&& 1 &&&&&& \\
k=1 &&&&&& 1 & -1 &&&&& \\
k=2 &&&&&& 3 & -3 & 1 &&&& \\
k=3 &&&&&& 15 & -15 & 6 & -1 &&& \\
k=4 &&&&&& 105 & -105 & 45 & -10 & 1 && \\
k=5 &&&&&& 945 & -945 & 420 & -105 & 15 & -1 & \\
\dots &&&&&& \dots &&&&&& \\
&&&&&& n=0 & n=1 & n=2 & n=3 & n=4 & n=5 & \dots \\
\end{array}
\end{equation}
The left margin is the same as in (\ref{pascal}), but now the signs
alternate along each row: $\bar a_{n}^{(k)} = (-1)^{n} a_{n}^{(k)}$,
as one sees by substituting $r \to -r$ in (\ref{series}) (up to an
irrelevant overall sign). Accordingly, the recursion relation
(\ref{recur}) holds with the first term negated,
$\bar a_{n}^{(k)} = -\bar a_{n-1}^{(k-1)}+(2k-n-1)\bar a_{n}^{(k-1)}$,
and the right margin is oscillating: $\bar a_{k}^{(k)} = (-1)^{k}$.
For the second column from the right we have the
general expression:
\begin{equation}
\bar a_{k-1}^{(k)} = (-1)^{k-1} \left( \begin{array}{c} k+1 \\ 2
\end{array} \right)
\end{equation}
\section{The spherical wave equation in $p+q$ dimensions}
We consider a flat space with coordinates $(t_{1}, \dots , t_{q},
x_{1}, \dots x_{p})$ and search for solutions of the wave eq., which
depend only on $r \doteq \sqrt{\sum_{i=1}^{p} x_{i}^{2}}$ and
$\tau \doteq \sqrt{\sum_{i=1}^{q}t_{i}^{2}}$. They have to fulfill:
\begin{equation} \label{we}
\Big[ \partial_{r}^{2}+ \frac{p-1}{r}\partial_{r}-\partial_{\tau}^{2}
- \frac{q-1}{\tau} \partial_{\tau} \Big] \phi_{p,q}(\tau ,r) = 0
\end{equation}
i.e. we generalize the case $q=1$ considered in the preceding section.
Accordingly, we also generalize the first ansatz to:
\begin{equation} \label{ansa}
\phi_{p,q}(\tau ,r) = f(\tau \ ^{+}_{-} \ r) \tau^{\alpha}r^{\beta}
\qquad (\alpha ,\beta \in R)
\end{equation}
Again the wave eq. imposes constraints, which allow $\alpha $ and $\beta $
only to take the values $0$ or $-1$, and therefore: $p,q \in \{ 1,3 \}$.
The only new solution that we obtain is:
\begin{equation} \label{33}
\phi_{3,3}(\tau ,r) = \frac{f(\tau \ ^{+}_{-} \ r)}{\tau r}
\end{equation}
Now we search again for a rule for generating solutions in higher
dimensions starting from these particular ones.\\
{\em\bf Rule 1} If $\phi_{p,q}$ is a solution of the wave eq.
(\ref{we}), then also
\begin{displaymath}
\phi_{p+2,q} \doteq - \frac{1}{r} \partial_{r} \phi_{p,q} \quad
{\rm and} \quad \phi_{p,q+2} \doteq - \frac{1}{\tau} \partial_{\tau}
\phi_{p,q}
\end{displaymath}
are solutions (always for the corresponding dimensions). \\
This generalizes the rule (\ref{difd}) and yields for example
the solution (\ref{33}). The proof by induction is straightforward,
if we just consider
\begin{displaymath}
\Big[ \partial^{2}_{r}+\frac{p+1}{r} \partial_{r} \Big] \frac{1}{r}
\partial_{r} = \frac{1}{r} \partial_{r} \Big[ \partial^{2}_{r}
+ \frac{p-1}{r} \partial_{r} \Big]
\end{displaymath}
and the same for $\tau $.
Hence starting from any $\phi_{1,1} = F_{1}(\tau -r)+F_{2}(\tau +r)$
we can generate immediately solutions {\em for all odd} $p$ and $q$.
If we choose $F_{1},F_{2}$ to be step functions, then the degree
of the pole in a solution $\phi_{p,q}$, which is generated
by application of rule 1, is $(p-1)/2 +(q-1)/2$. In particular
this set of solutions includes the physical monopole solution in
3+1 dimensions.
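Rule 1 is also easy to check symbolically; a minimal sketch (SymPy) generating $\phi_{3,3}$ from a $\phi_{1,1}$ and verifying (\ref{we}):
\begin{verbatim}
import sympy as sp

tau, r = sp.symbols("tau r", positive=True)
F = sp.Function("F")

def wave_op(phi, p, q):
    # spherical wave operator: p spatial and q temporal dimensions
    return (sp.diff(phi, r, 2) + (p - 1) / r * sp.diff(phi, r)
            - sp.diff(phi, tau, 2) - (q - 1) / tau * sp.diff(phi, tau))

phi11 = F(tau - r)
phi31 = -sp.diff(phi11, r) / r      # rule 1: p -> p + 2
phi33 = -sp.diff(phi31, tau) / tau  # rule 1: q -> q + 2
print(sp.simplify(wave_op(phi33, 3, 3)))   # -> 0
\end{verbatim}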
Now we want to postulate a second rule for generating solutions
in higher dimensions from a given one:
{\em\bf Rule 2} If $\phi_{p,q}$ is a solution of (\ref{we}),
then also
\begin{displaymath}
\bar \phi_{p+2,q+2} \doteq - \Big[ \frac{1}{r} \partial_{r} + \frac{1}{\tau}
\partial_{\tau} \Big] \phi_{p,q}
\end{displaymath}
is a solution.\\
The proof consists just of inserting rule 1.
But we emphasize that $\bar \phi_{p+2,q+2}$ does {\em not} coincide with
$\phi_{p+2,q+2} = \frac{1}{r \tau } \partial_{r} \partial_{\tau} \phi_{p,q}$.
To clarify the situation, we introduce the concept of equivalence
classes of solutions in different dimensions:
{\em Definition:} The solutions $\phi_{p+2,q} \ (\phi_{p,q+2})$ and
$\phi_{p,q}$ belong to the same equivalence class, if they are related
as $\phi_{p+2,q} = -\frac{1}{r} \partial_{r} \phi_{p,q}$
($\phi_{p,q+2} = - \frac{1}{\tau} \partial_{\tau} \phi_{p,q}$),
where we really mean ``equal to'', excluding different additive or
multiplicative constants.\\
Hence every solution $\phi_{1,1}$ defines an equivalence class with a unique
$\phi_{p,q}$ for all odd $p,q$. Now rule 2 can be formulated like this:
If $\phi_{p+2,q}$ and $\phi_{p,q+2}$ are solutions belonging to the
same equivalence class, then their superposition
$\bar \phi_{p+2,q+2} = \phi_{p+2,q}+\phi_{p,q+2}$ is a solution too,
which does, however, not belong to the same equivalence class.
Of course we may also define a second type of equivalence classes
of solutions related by rule 2. In such classes, $p-q$ must be fixed.
Starting from a given $\phi_{p,q}$, all
\begin{equation}
\bar \phi_{p+2n,q+2n} = \sum_{k=0}^{n} \left( \begin{array}{c} n \\ k \end{array} \right) \phi_{p+2k,q+2(n-k)}
\end{equation}
belong to the same second type equivalence class as
$\phi_{p,q}$, if all the $\phi_{p+2k,q+2(n-k)}$ belong to the same
first type equivalence class as $\phi_{p,q}$. This generalizes the above
formulation.\\
Let's reconsider $p=q=3$ and start from $\phi_{1,1}$ given in
(\ref{d13}). We obtain:
\begin{eqnarray}
\phi_{3,3}(\tau ,r) &=& \frac{1}{\tau r} [-F_{1}''(\tau -r)+F_{2}''(\tau + r)]
\nonumber \\
\bar \phi_{3,3}(\tau ,r) &=& \frac{1}{\tau r} [ (\tau -r)F_{1}'
(\tau -r) + (\tau + r) F_{2}'(\tau + r)]
\end{eqnarray}
These functions are different, but not basically different in the
sense that both of them fit with solution (\ref{33}).
If we choose $F_{1},F_{2}$ to be step functions, then $F_{1}'',F_{2}''=
\delta '$, i.e. we get from the first rule a dipole solution in
3+3 dimensions. (We noted before that in general the pole has the degree
$(p/2 + q/2 -1)$). The second rule seems to generate a monopole solution.
But in fact it vanishes since $x \delta (x) \equiv 0$.\\
To see that the two prescriptions do provide basically different solutions
in general, we consider as an example
$p=q=5$ and start from $\phi_{3,3} = f(\tau -r)/\tau r$:
\begin{eqnarray}
\phi_{5,5}(\tau ,r) &=& \frac{1}{(\tau r)^{3}} [ f(\tau -r) - f'(\tau -r)
(\tau -r) - f''(\tau -r)\tau r] \\
\bar \phi_{5,5}(\tau ,r) &=& \frac{1}{(\tau r)^{3}} [ f(\tau -r) (\tau^{2}
+r^{2})+ f'(\tau -r) \tau r(\tau -r)]
\end{eqnarray}
Clearly, these solutions are not related any more by a redefinition
of $f$ in one of them; this we see already from the different pole structure
for $f= \delta '$.
As a last example we start from an outgoing wave
$\phi_{3,1}=f(\tau -r)/r$ and proceed to $5+3$ dimensions:
\begin{eqnarray}
\phi_{5,3}(\tau ,r) &=& - \frac{1}{\tau r^{3}} [r f''(\tau -r)
+ f'(\tau -r)] \\
\bar \phi_{5,3}(\tau ,r) &=& - \frac{1}{\tau r^{3}} [ f'(\tau -r) r (\tau -r)
+ \tau f(\tau -r)]
\end{eqnarray}
If we choose $f = \delta $, then the above solution
$ \bar \phi_{5,3}$ vanishes.
The first prescription is more powerful, since the second one is restricted
to a simultaneous increase of spatial and temporal dimensions (hence it
does, e.g. not yield solutions in $1+q$ or $p+1$ dimensions). But the latter
is useful to complete the first one, because it provides basically different
solutions. We also note that the application of the two prescriptions
commutes. Hence if we proceed e.g. from a $\phi_{3,5}$ to $\bar \phi_{7,7}$,
it does not matter if we first go to $\phi_{5,5}$ and then to
$\bar \phi_{7,7}$, or if we start with $\bar \phi_{5,7}$ and then apply
the first prescription to obtain the same $\bar \phi_{7,7}$.
On the other hand, if we go e.g. from $\phi_{3,3}$ to a solution in
$7+7$ dimensions, we have to distinguish if we use the second rule
not at all, once or twice (regardless of the order of the operators).
If we start from the form (\ref{33}), then the solution in $7+7$
dimensions contains maximally 4, 3, 2 derivatives of $f$, respectively.
Generally: if we replace two applications of the first rule by one
application of the second rule, then the maximal pole strength
is reduced by one.
\section{Solutions, which depend only on $\xi \doteq (\tau -r)(\tau +r)$}
Here we treat the two signs between $\tau $ and $r$ democratically.
Let $\psi_{n}(\xi )$ be a solution in $n=p+q$ dimensions. In
\cite{jjg} it was noted that such solutions are:
\begin{equation} \label{xixi}
\psi_{2}(\xi ) = \ln \xi \ , \quad
\psi_{n}(\xi ) = \xi^{1-n/2} \qquad (n \geq 3)
\end{equation}
where for $n > 2$ we cannot identify $p$ and $q$ separately.
Note that $\psi_{2}(\xi )$ is a special case of the form given
in (\ref{d13}).
This string of solutions corresponds to rule 1, since
\begin{equation} \label{partxi}
- \frac{1}{r} \partial_{r} \psi_{n}(\xi ) = 2 \psi'_{n}(\xi ) = \frac{1}{\tau}
\partial_{\tau} \psi_{n}(\xi )
\end{equation}
is indeed the solution given in (\ref{xixi}) for $n+2$.
The second rule is supposed to generate a different solution
for $n+4$, which would be strange since the set (\ref{xixi}) is complete
(up to constants).
But looking at (\ref{partxi}) we recognize that the second rule only
yields trivial solutions here: $\bar \psi_{n+4}(\xi ) \equiv 0$.
So for this special type of solutions only the first rule is useful.
For further details, see \cite{jjg}.
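The solutions (\ref{xixi}) can be spot-checked symbolically for general $p$ and $q$; a minimal sketch (SymPy, valid off the light cone):
\begin{verbatim}
import sympy as sp

tau, r, p, q = sp.symbols("tau r p q", positive=True)
n = p + q
psi = (tau**2 - r**2)**(1 - n / 2)   # the solution for n >= 3

lhs = (sp.diff(psi, r, 2) + (p - 1) / r * sp.diff(psi, r)
       - sp.diff(psi, tau, 2) - (q - 1) / tau * sp.diff(psi, tau))
print(sp.simplify(lhs))   # -> 0
\end{verbatim}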
\section{Generalization to the Klein Gordon equation}
Now we generalize (\ref{we}) to the spherically symmetric Klein
Gordon equation, i.e. the relativistic equation of motion for
a {\em massive} scalar particle:
\begin{equation} \label{KG}
\Big[ \partial_{r}^{2} + \frac{p-1}{r} \partial_{r} - \partial_{\tau}^{2}
- \frac{q-1}{\tau} \partial_{\tau} + m^{2} \Big] \phi_{p,q}^{(m)}(\tau ,r) =0
\end{equation}
It is easy to check that both rules still hold for this generalized
case. The reason is simply that the factor $m^{2}$ commutes with the
differential operators. (However, additive constants are not arbitrary
any more in $\phi^{(m)}_{p,q}$.)
Again we search for solutions of the form $\psi^{(m)}_{n}(\xi )$.
In terms of $\xi $, eq. (\ref{KG}) reads:
\begin{equation} \label{KGxi}
\Big[ \xi \partial_{\xi}^{2} + \frac{n}{2} \partial_{\xi} - \Big(
\frac{m}{2} \Big) ^{2} \Big] \psi_{n}^{(m)}(\xi ) =0
\end{equation}
In \cite{GR}, p. 972 we find the solution of the differential eq.
\begin{equation}
u''(z) + \frac{1-\nu}{z} u'(z) - \frac{1}{4z} u(z) =0
\end{equation}
namely:
\begin{equation}
u(z) = z^{\nu /2} Z_{\nu}(i \sqrt{z})
\end{equation}
where $Z_{\nu}$ is any Bessel function.
For $m \neq 0$ we can transform (\ref{KGxi}) to this form by substituting
$z \doteq m^{2} \xi$.
Thus the solution is:
\begin{equation}
\psi^{(m)}_{n}(\xi ) = (m \sqrt{\xi})^{1-n/2} Z_{1-n/2}(im\sqrt{\xi })
\end{equation}
(where we assume $m>0$). In
\begin{equation}
\partial_{m^{2}} \psi^{(m)}_{n}(\xi ) = \frac{2-n}{4m^{2}} \psi^{(m)}_{n}
(\xi ) + \frac{1}{2} i \xi (m \sqrt{\xi})^{-n/2} Z'_{1-n/2}(im\sqrt{\xi})
\end{equation}
the differentiated Bessel function can be expressed in various ways
in terms of (non differentiated) $Z_{1-n/2 \pm 1}$, which describe
the dynamics in different dimensions.
Of course, in the limit $m \to 0$ we recover the solutions of the
preceding section.
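For real $m$ this solution can be checked numerically, writing $Z_{\nu}(is) \propto I_{\nu}(s)$ in terms of the modified Bessel function. A small finite-difference sketch (NumPy/SciPy; the values of $m,p,q$ are illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import iv

m, p, q = 1.0, 3, 2
n = p + q
nu = 1 - n / 2

def psi(xi):                      # psi_n^(m), with Z_nu(i s) ~ I_nu(s)
    s = m * np.sqrt(xi)
    return s**nu * iv(nu, s)

xi, h = np.linspace(0.5, 3.0, 6), 1e-4
d1 = (psi(xi + h) - psi(xi - h)) / (2 * h)
d2 = (psi(xi + h) - 2 * psi(xi) + psi(xi - h)) / h**2
res = xi * d2 + (n / 2) * d1 - (m / 2)**2 * psi(xi)
print(np.max(np.abs(res)))        # small, limited by finite differences
\end{verbatim}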
\section{Transition to even and fractional dimensions}
The first rule in the form of section 3 only permits steps
of two dimensions. To arrive at non odd dimensions from the explicit
solutions given above by means of that rule, we need a fractional
application of the operator, which provides the dimensional shift.
This concept becomes much simpler if we use for this operator
the identity:
\begin{equation}
-\frac{1}{r} \partial_{r} \equiv -2 \partial_{r^{2}}
\end{equation}
We ignore the factor -2 and postulate the natural {\em generalization of
rule 1}:
If $\phi_{p,q}$ is a solution of the spherical wave eq. (\ref{we})
(or Klein Gordon eq. (\ref{KG})), then also
\begin{equation}
\phi_{p+2\alpha ,q} \doteq \partial_{r^{2}}^{\alpha} \phi_{p,q} \quad {\rm and}
\quad \phi_{p,q+2\beta} \doteq \partial_{\tau^{2}}^{\beta} \phi_{p,q}
\end{equation}
are solutions in $(p+2\alpha )+q$ and $p+(q+2\beta )$ dimensions,
respectively, for all $\alpha , \beta \in {\kern+.25em\sf{R}\kern-.78em\sf{I}\kern+.78em\kern-.25em} $, provided $p+2\alpha
\geq 1 $ resp. $q+2\beta \geq 1$.\\
We saw this before for integer $\alpha , \beta $, hence we just consider
non integers now.
Clearly the two statements are equivalent, so let's only consider
the first one.
For the fractional derivation -- or integration for $\alpha <0$ --
we refer to the definition
given by the Weyl transformation (see e.g. \cite{GS,Bateman})
\begin{eqnarray}
\phi_{p+2\alpha ,q}(\tau ,r) &=& \frac{1}{\Gamma (-\alpha )}
\int_{r^{2}}^{\infty} (u-r^{2})^{-1-\alpha} \phi_{p,q}(\tau ,\sqrt{u}) du \\
&=& \frac{1}{\Gamma (-\alpha )} \int_{0}^{\infty} x^{-1-\alpha}
\phi_{p,q}(\tau , \sqrt{r^{2}+x}) dx \nonumber
\end{eqnarray}
We substitute $y \doteq \sqrt{r^{2}+x}$, and we have to check now if
the following expression vanishes:
\begin{eqnarray*}
&& \Big[ \partial_{r}^{2} + \frac{p-1+2\alpha}{r} \partial_{r}
-\partial_{\tau}^{2} - \frac{q-1}{\tau} \partial_{\tau} \Big]
\int_{0}^{\infty} x^{-1-\alpha} \phi_{p,q}(\tau ,y) \, dx \\
& \propto & \int_{0}^{\infty} x^{-1-\alpha} \Big[ \Big( 1 - \frac{x}{y^{2}} \Big)
\partial_{y}^{2}+\frac{p+2\alpha - r^{2}/y^{2}}{y} \partial_{y}
- \partial_{\tau}^{2}- \frac{q-1}{\tau} \partial_{\tau} \Big] \phi_{p,q}
(\tau ,y) \, dx \\
&=& \int_{0}^{\infty}x^{-1-\alpha} \Big[ - \frac{2x}{y} \partial_{x}
+ \frac{2\alpha}{y} + \frac{x}{y^{3}} \Big] \partial_{y} \phi_{p,q}
(\tau ,y) dx
\end{eqnarray*}
In the sense of analytic continuation we can perform partial integration
in the first term without boundary terms. This shows immediately that this
expression vanishes, as we postulated. \\
In principle also the second rule could be generalized accordingly,
but the resulting fractional operator is awkward to handle.
Now we have a key for the construction of solutions in non odd dimensions.
General solutions are, for instance:
\begin{eqnarray}
\phi_{2,1}(t,r) &=& \int_{0}^{\infty}x^{-3/2}[F_{1}(t-\sqrt{r^{2}+x})+
F_{2}(t+\sqrt{r^{2}+x})] dx \\
\phi_{2,2}(\tau ,r) &=& \int _{0}^{\infty}dx \int_{0}^{\infty}dy (xy)^{-3/2}
\cdot \nonumber \\
&& [F_{1}(\sqrt{\tau^{2}+y}-\sqrt{r^{2}+x}) + F_{2}(\sqrt{\tau^{2}+y}+
\sqrt{r^{2}+x})]
\end{eqnarray}
etc.
Of course we would like to have really explicit solutions. But before
evaluating them for particular functions $F_{1},F_{2}$,
we construct some solutions for even dimensions in an independent way.
\section{Explicit solutions for even dimensions}
In particular the case of $2+1$ dimensions is motivated from solid
state physics: some crystals consist of layers, where the interaction
between layers is strongly suppressed with respect to the interactions
inside the layers.
In view of our recursion rules for the generation of solutions in higher
dimensions, we have to concentrate on finding particular solutions
for $2+1, \ 1+2 $ and $2+2$ dimensions.
So far, we only have the -- manifestly Lorentz invariant --
solutions of section 4 for even dimensions. Now we want to
construct different ones, and then discuss them in the context of
section 6.\\
If we look for polynomials in $r, \tau$ fulfilling (\ref{we}),
the only non trivial -- i.e. non constant -- solution is (except for
the linear solutions for $\phi_{1,1}$)
\begin{equation} \label{poly}
\phi_{p,p}(\tau ,r) = \tau^{2}+r^{2}
\end{equation}
So we have a new solution for any $p=q$, e.g. $\phi_{2,2}$, but it
does not yield a non trivial string of solutions from our theorems.
However, it is new, even in $p=q=1$, and it will help us to find
less obvious solutions.
We make an ansatz that generalizes somehow the solutions depending
only on $\xi $:
\begin{equation}
\phi (\tau ,r) = f(\tau ,r) (\tau^{2} \ ^{+}_{-} \ r^{2})^{\alpha}
\qquad (\alpha \in {\kern+.25em\sf{R}\kern-.78em\sf{I}\kern+.78em\kern-.25em} )
\end{equation}
For the upper sign we can only reproduce (\ref{poly}), so we concentrate
on the lower sign (Lorentz invariant bracket), assume $\tau^{2} \neq r^{2}$
(off light cone) and denote $f_{r} \doteq \partial_{r}f$ etc.
We arrive at the condition:
\begin{equation} \label{fwe}
(\tau^{2}-r^{2}) \Big\{ f_{rr}+ \frac{p-1}{r}f_{r}-
f_{\tau \tau}-\frac{q-1}{\tau} f_{\tau} \Big\} = 4\alpha \Big\{
rf_{r}+\tau f_{\tau}+(\alpha -1 +\frac{n}{2}) f \Big\}
\end{equation}
(still with $n=p+q$). We want the curly brackets to vanish, i.e.
$f$ should also obey the wave eq. If we take the trivial solution
$f \equiv const.$, we obtain again the solutions (\ref{xixi}), so
we look for something more original.
Let's consider $q=1$ or $p=1$: then a simple solution of (\ref{fwe})
is $f=\tau $ resp. $f=r$ with $\alpha =-n/2$, which yields:
\begin{equation} \label{mix}
\phi_{p,1} = \frac{\tau}{(\tau^{2}-r^{2})^{(p+1)/2}} \quad ; \qquad
\phi_{1,q} = \frac{r}{(\tau^{2}-r^{2})^{(q+1)/2}}
\end{equation}
In $p=q=1$ we get special cases of the form (\ref{d13}), but more
interesting are $\phi_{2,1}$ and $\phi_{1,2}$ given in (\ref{mix}).
If we proceed by means of rule 1 to $\phi_{4,1}, \ \phi_{1,4}$
etc. we stay inside the set described by (\ref{mix}). But in addition
we obtain solutions for {\em all mixed} $p$ and $q$ (with respect
to even/odd), e.g.
\begin{equation}
\phi_{2,3} = \frac{1}{\tau} \partial_{\tau} \frac{\tau}{(\tau^{2}-r^{2})^{3/2}}
= \frac{1}{\tau (\tau^{2}-r^{2})^{3/2}}- \frac{3\tau}{(\tau^{2}-r^{2})^{5/2}}
\end{equation}
etc.
We should still find new solutions for $p$ and $q$ both even.
The above solutions for $f$ fail (they lead to a trivial $\phi_{p,q}$),
therefore we try the polynomial solution: $f=\tau^{2}+r^{2}$ for $p=q$.
We obtain:
\begin{equation} \label{even}
\phi_{p,p} = \frac{\tau^{2}+r^{2}}{(\tau^{2}-r^{2})^{1+p}}
\end{equation}
Now we have in particular the desired solution for $\phi_{2,2}$ and
thus a string of explicit solutions for all even $p$ and $q$.
If we use the first rule to produce $\phi_{p+2,p+2}$ etc.,
we keep the form of (\ref{even}), and the second rule leads back to
the form (\ref{xixi}). But we can generate new solutions for $p \neq q$,
such as
\begin{equation}
\phi_{4,2} = \frac{2\tau^{2}+r^{2}}{(\tau^{2}-r^{2})^{4}} \quad ; \qquad
\phi_{2,4} = \frac{\tau^{2}+2r^{2}}{(\tau^{2}-r^{2})^{4}}
\end{equation}
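Again these closed forms are easy to verify symbolically; a minimal sketch (SymPy) for $\phi_{2,1}$ from (\ref{mix}) and $\phi_{2,2}$ from (\ref{even}):
\begin{verbatim}
import sympy as sp

tau, r = sp.symbols("tau r", positive=True)

def wave_op(phi, p, q):           # spherical wave operator, p + q dims
    return (sp.diff(phi, r, 2) + (p - 1) / r * sp.diff(phi, r)
            - sp.diff(phi, tau, 2) - (q - 1) / tau * sp.diff(phi, tau))

phi21 = tau / (tau**2 - r**2)**sp.Rational(3, 2)
phi22 = (tau**2 + r**2) / (tau**2 - r**2)**3
print(sp.simplify(wave_op(phi21, 2, 1)))   # -> 0
print(sp.simplify(wave_op(phi22, 2, 2)))   # -> 0
\end{verbatim}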
Finally, we want to relate these solutions to the concept of
fractional derivatives described in section 6. If we start
from $\phi_{1,1}= \delta (\tau -r)$, then
\begin{equation}
\phi_{2,1}=\int_{0}^{\infty} x^{-3/2}\delta (\tau - \sqrt{r^{2}+x})dx
= \vert \tau \vert \int_{0}^{\infty} x^{-3/2}\delta (x-\tau^{2}+r^{2}) \, dx
\end{equation}
If we drop the proportionality constant (including sign$(\tau )$),
we recover $\phi_{2,1}$ given in (\ref{mix}).
If we begin with $\phi_{1,1}=\Theta (\tau -r)$ or $\phi_{3,1}=
\delta (\tau -r)/r$ and move to $\phi_{2,1}$ by half a differentiation
resp. integration with respect to $r^{2}$, then we end up with the
form (\ref{xixi}). If we proceed from this solution to
$\phi_{2,2}$ by applying $\partial_{\tau^{2}}^{1/2}$, we stay
in the set of solutions (\ref{xixi}).
\section{Construction of new solutions for fixed $p$ and $q$}
Up to now we considered procedures to build up solutions
in different dimensions from one given solution. Now that we
already know solutions for all integer $p+q$, we look for
different solutions in the same dimension as the given one.
For this we need an operator $A$, which commutes with the spherical
D'Alembert operator:
\begin{equation}
[(\partial_{r}^{2}+\frac{p-1}{r}\partial_{r}-\partial_{\tau}^{2}
-\frac{q-1}{\tau}\partial_{\tau}),A] = 0
\end{equation}
(Of course, $A=const.$ is not interesting.)
{\em\bf Rule 3} For $p=1$ rsp. $q=1$
the operator $A=\partial_{r}^{\alpha}$ resp. $\partial_{\tau}^{\alpha}$
works for all $\alpha \in {\kern+.25em\sf{R}\kern-.78em\sf{I}\kern+.78em\kern-.25em}^{+} $.
Consider as an example $\phi_{2,1} = (\tau^{2}-r^{2})^{-1/2}$ (included
in (\ref{xixi})). One differentiation with respect to $\tau $ immediately yields the solution
given in (\ref{mix}), and in addition we obtain:
\begin{equation}
\tilde \phi_{2,1}(\tau ,r) =
\partial_{\tau}^{2}(\tau^{2}-r^{2})^{-1/2} \propto \frac{2\tau^{2}+r^{2}}
{(\tau^{2}-r^{2})^{5/2}} \qquad {\rm etc.}
\end{equation}
But if the dimension, which is $>1$, is also odd, we gain nothing
because there we already have solutions involving arbitrary functions
of $(\tau \ ^{+}_{-} \ r)$. Take e.g. $\phi_{3,1}$ from (\ref{d13}):
application of rule 3 just reproduces the same structure.\\
What can we do if $p$ and $q$ are both $>1$? It can easily be seen
that any ansatz $A=P_{1}(\tau ,r) P_{2}(\partial_{\tau },
\partial_{r})$, where $P_{1},P_{2}$ are finite polynomials, fails.
But we can combine the generalized rule 1 with rule 3 to
obtain:
{\em\bf Rule 4} If $\phi_{p,q}$ is a solution, then also
\begin{displaymath}
\partial_{r^{2}}^{(p-1)/2} \partial_{r}^{\alpha} \partial_{r^{2}}
^{-(p-1)/2} \phi_{p,q} \qquad {\rm and} \qquad
\partial_{\tau^{2}}^{(q-1)/2} \partial_{\tau}^{\beta} \partial_{\tau^{2}}
^{-(q-1)/2} \phi_{p,q}
\end{displaymath}
are solutions in $p+q$ dimensions, for any $\alpha , \beta \in {\kern+.25em\sf{R}\kern-.78em\sf{I}\kern+.78em\kern-.25em}^{+} $.\\
As an example we consider $\phi_{3,2}$ from (\ref{xixi}). For
$\alpha = 1,2 \dots $ we get further solutions in $3+2$ dimensions,
namely:
\begin{equation}
\frac{\tau^{2}/r+2r}{(\tau^{2}-r^{2})^{5/2}} \qquad , \qquad
\frac{3\tau^{2} + 2 r^{2}}{(\tau^{2}-r^{2})^{7/2}} \qquad {\rm etc.}
\end{equation}
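The first of these arises from $\alpha =1$, assuming the normalization
$\phi_{3,2}=(\tau^{2}-r^{2})^{-3/2}$ from (\ref{xixi}): since
$\partial_{r^{2}}^{-1}(\tau^{2}-r^{2})^{-3/2}=2(\tau^{2}-r^{2})^{-1/2}$
and $\partial_{r^{2}}=(2r)^{-1}\partial_{r}$, we get
\begin{displaymath}
\partial_{r^{2}}\,\partial_{r}\,\partial_{r^{2}}^{-1}
(\tau^{2}-r^{2})^{-3/2}
= \frac{1}{2r}\,\partial_{r}\Big[ 2r\,(\tau^{2}-r^{2})^{-3/2}\Big]
= \frac{\tau^{2}/r+2r}{(\tau^{2}-r^{2})^{5/2}} \ .
\end{displaymath}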
Note, however, that the practical application of this theorem is often
complicated. In particular if $p$ and $q$ are both even, we are forced
to use fractional integration and differentiation.\\
We may also build new solutions by a combination of the first and
second rule, which is different from unity, e.g. with the operator:
\begin{equation}
[\partial_{\tau^{2}} + \partial_{r^{2}}][\partial_{\tau^{2}} \partial_{r^{2}}
]^{-1} = \partial_{\tau^{2}}^{-1} + \partial_{r^{2}}^{-1}
\end{equation}
\section{Definition of $ \Box ^{\lambda}$ for $q>1$}
In \cite{BG,jjg} the analytic continuation of $(\tau^{2}-r^{2})^{\lambda}$
with respect to real $\lambda $ was discussed and applied extensively
in $d+1$ dimensions. The discussion revealed that its singularities
and residues are closely related to physical properties. In particular,
in \cite{jjg} it was shown that a shift of the dimension, not
necessarily by an integer, can be interpreted in terms of solutions for
different powers of the D'Alembert operator.
This consideration shall now be generalized to $p+q$ dimensions as well.
For that purpose, we need a definition of $\Box^{\lambda}$ for
multiple times. The extension of the analytic properties is not
straightforward, but displays interesting and qualitatively new
properties, as we will see.
Let us summarize the analytic properties of the distributions
\begin{displaymath}
P_{+-}^{\lambda} \doteq (\tau^{2} -r^{2})^{\lambda}_{+-} \quad {\rm and}
\quad (P+i0)^{\lambda}
\end{displaymath}
(see \cite{GS} p. 350 ff.). The subscripts $+$ and $-$ indicate that the
distribution vanishes outside resp. inside the light cone, and
$(P+i0)^{\lambda}$ denotes the limit of $( \tau^{2} -r^{2}
+i\varepsilon )^{\lambda}$ as $\varepsilon \to 0$. The main results
are the following.\\
a) $p$ odd, $q$ even or v.v. \\
$P^{\lambda}_{+}$ has simple poles at $\lambda = -1, \ -2 , \dots $
and $\lambda = - n/2, \ -n/2-1, \dots $, where $n \doteq p+q$.
For $k \in {\mathbb N}$ the residues are:
\begin{eqnarray}
^{res}_{\lambda \to -k} P^{\lambda}_{+} &=& \frac{(-1)^{k-1}}{(k-1)!}
\delta^{(k-1)}(P) \\
^{res}_{\lambda \to -n/2-k} P_{+}^{\lambda} &=& \frac{(-1)^{p/2}\pi^{n/2}}
{4^{k} k! \Gamma (n/2+k)} \Box^{k} \delta (x_{1}, \dots ,x_{n})
\end{eqnarray}
(here the components $x_{i}$ run over both types of coordinates).\\
b) $p,q$ even \\
$P^{\lambda}_{+}$ has simple poles for $\lambda = -1,-2, \dots ,-k,\dots $
\begin{eqnarray}
^{res}_{\lambda \to -k} P^{\lambda}_{+} &=& (-1)^{k-1} \delta^{(k-1)}(P)
\qquad \quad (k \leq n/2) \\
^{res}_{\lambda \to -n/2-k} P^{\lambda}_{+} &=& \frac{(-1)^{n/2+k-1}}
{(n/2+k-1)!} \delta^{(n/2+k-1)}(P) + \frac{(-1)^{p/2} \pi^{n/2}}{4^{k} k!
\Gamma (n/2+k)} \Box^{k} \delta (x_{1}, \dots ,x_{n})
\end{eqnarray}
c) $p,q$ odd \\
$P_{+}^{\lambda}$ has simple poles for $\lambda =-1,-2, \dots ,-n/2+1$ with
\begin{equation}
^{res}_{\lambda \to -k} P_{+}^{\lambda} = \frac{(-1)^{k-1}}{(k-1)!}
\delta^{(k-1)}(P)
\end{equation}
and single {\em and double} poles for $\lambda = -n/2, -n/2-1, \dots $;
the principal part of the Laurent expansion around $\lambda =-n/2-k$
reads (observe the difference from the case $p,q$ even)
\begin{equation}
P^{\lambda}_{+}= c_{1} \frac{ \Box^{k}
\delta (P)}{(\lambda+n/2+k)^{2}} + \frac{c_{2} \Box^{k} \delta
(x_{1}, \dots ,x_{n}) + c_{2}' \delta^{(n/2+k-1)}(P)}{ \lambda +n/2+k }
+ \dots
\end{equation}
For $P_{-}$ exchange $p$ and $q$ and replace $\delta^{(k-1)}(P)$ by
$\delta^{(k-1)}(-P)$ and $\Box $ by $-\Box $. \\
d) The poles of $(P+i0)^{\lambda}$ are simple for $\lambda = -n/2-k, \
k=0,1,2 \dots $ and
\begin{equation}
^{res}_{\lambda \to -n/2-k} (P+i0)^{\lambda} = \frac{e^{-i \pi p/2}
\pi^{n/2}}{4^{k} k! \Gamma (n/2+k)} \Box^{k} \delta(x_{1}, \dots ,x_{n})
\end{equation}
and its complex conjugate for $(P-i0)^{\lambda}$.\\
In \cite{BG} a definition of $\Box^{\lambda}$ for real $\lambda $ was given,
which reduces to $\Box^{k}$ if $\lambda \in {\mathbb N}$. This was achieved
as follows
\begin{equation} \label{bogaeq}
\Box^{\lambda}_{R,A} * \phi (x)= \frac{2^{2\lambda +1}(t^{2}-r^{2})
^{-\lambda-n/2}\, \Theta (\mp t)}{\pi^{n/2-1} \Gamma(1-\lambda-n/2)
\Gamma(-\lambda )} * \phi (x)
\end{equation}
$R,A$ stand for retarded, advanced and refer to the upper (negative)
resp. lower (positive) sign in the argument of the step function $\Theta $.
It is easy to verify that (\ref{bogaeq})
reduces to $\Box^{k} \phi (x)$ when $\lambda =k$ and $q=1$, due to the
properties a), b) and c).
This is generally the case when $p,q$ are both odd. E.g. in dimensions
3+3 or 3+1 the double pole plays an essential role. (In 3+3, $t$ is to
be understood as $\pm \tau $.)
So for a classical theory with $p,q$ both odd, $\Box^{\lambda}$ given
by (\ref{bogaeq}) is well defined, retarded as well as advanced.\\
The same happens if we consider a quantum theory, where (see \cite{BG})
\begin{equation} \label{bgquant}
\Box^{\lambda}_{\pm} * \phi (x) = \pm i\, e^{\pm i \pi (\lambda
+n/2)}4^{\lambda} (4\pi )^{n/2} \frac{\Gamma (\lambda +n/2)}
{\Gamma (- \lambda )}
(t^{2}-r^{2} \pm i0)^{\lambda -n/2} * \phi (x)
\end{equation}
Thus if $\lambda = k \in {\mathbb N}$ the numerator as well as the denominator
pick up a pole, and we are left with the residue $\Box^{k} \delta $
(see property d)). \\
If we try to repeat the same procedure for $p,q$ even, e.g. 4+2 dimensions,
the classical theory, as described by eq. (\ref{bogaeq}), runs into
trouble. The above reasoning does not work any more, since $(t^{2}-r^{2})
_{+}^{\lambda}$ has only simple poles, and there is no way to compensate
the double poles of the denominator occurring in (\ref{bogaeq}).
Everything would be swept away.
And even if we write a phenomenological $\Box^{\lambda}$ with a simple
pole in the denominator, we arrive for integer $\lambda $ at a linear
combination
\begin{displaymath}
a_{1} \Box^{k} + a_{2} \delta^{(k-1)}(P)
\end{displaymath}
and {\em not} the desired result $\Box^{k}$.
However, this is not the case for a quantum theory of the form (\ref{bgquant}).
The latter leads in fact to $\Box^{k}$ for $\lambda \to k$.
Hence everything works for the quantum operator $\Box^{\lambda}_{\pm}$
with multiple times when $p$ and $q$ are both even or both odd, whereas
the classical definition for $\Box^{\lambda}$ fails for even dimensions.
So if we consider the quantum definition of $\Box^{\lambda}$, we can extend
the result of \cite{BG}, section V, according to which a derivative with
respect to $r^{2} \ (\tau^{2})$ increases the spatial (temporal) dimension
by 2 or diminishes the power $\alpha $ by 1. If we apply the operator
$\partial_{r^{2}}^{\gamma} \ (\partial_{\tau^{2}}^{\gamma})$ on a radial
solution, we increase $p$ ($q$) by $2\gamma $ or diminish $\alpha $ by
$\gamma $.
The results for the ``mixed case'' (with respect to even/odd) are obvious
and not plagued by any problems due to double poles.
\section{The wave equation in curved space}
In particular, the previous formalism can be used to transform a time
coordinate into a space coordinate. The operator which does this job
is
\begin{equation} \label{tr}
\partial_{\tau^{2}}^{-1/2} \ \partial_{r^{2}}^{1/2}
\end{equation}
Of course, this reminds us of the process taking place when we
enter a black hole. When crossing the Schwarzschild radius,
a time coordinate becomes spatial and vice versa, as we see
from the Schwarzschild metric in polar coordinates:
\begin{equation} \label{Schwarzschild}
ds^{2}= \Big( 1-\frac{2M}{r} \Big) d\tau^{2} + \tau^{2} d \Omega_{q}^{2}
- \frac{1}{1-2M/r}dr^{2}-r^{2} d \Omega_{p}^{2}
\end{equation}
We refer to a spherically symmetric, static black hole solution.
$d\Omega_{q}, \ d\Omega_{p}$ are the surface elements on the temporal,
spatial unit sphere, respectively.
\footnote{The singularity of the metric {\em on} the boundary is an
artifact of the choice of coordinates and can be removed by
a different choice, as Kruskal showed for 3+1 dimensions \cite{Kruskal}.}
(Generally, a sensible dimensional
continuation of general relativity is given by the Lovelock equations
\cite{Lovelock}. For its application to static black holes,
see \cite{DBH}.)
Formula (\ref{Schwarzschild}) displays immediately that in a transition
from the outside to the inside of a black hole (of radius $2M$)
the radial temporal component $\tau $ becomes spatial while the radial
spatial component $r$ becomes temporal. The character of the
further components remains unchanged, so we have a transition
$p+q \to p+q$. (However, inside the black hole, the angular terms in
(\ref{Schwarzschild}) suffer from a mismatch between the radial and
the angular factor.)
Let's consider this in view of spherically symmetric waves.
We might describe the time-to-space transition by the operator (\ref{tr}),
but to include also
the simultaneous space-to-time transition we have no reasonable alternative
to its inverse operator, so the combination is trivial and we do not end up
with anything instructive.
We could describe a transition of $3+1 \to 3+1$ -- or generally:
$p+q \to p+q$ -- dimensions by an operator different from unity, e.g. by using
the procedures of section 8,
but a motivation for this remains to be found.\\
A highly interesting transition, which is {\em not} $p+q \to p+q$ is the
{\em Wick rotation}. It transforms e.g. Schr\"{o}dinger's eq. to the
diffusion eq., hence deterministic and reversible quantum mechanics
to an irreversible stochastic process (Brownian motion).
But unfortunately our transformation rules 1 and 2 fail as soon as
$p$ or $q$ vanishes, since we have always assumed $\partial_{r}^{2}-
\partial_{\tau}^{2}$ to be part of the D'Alembert operator. In
Euclidean space we would be left with the search for harmonic functions,
which is well established in the literature.\\
Still, the most attractive feature in this context seems to us the
exchange of a time coordinate for a space coordinate in gravitation theory.
But in order to approach such questions as the transition
across the boundary of black holes seriously,
we have to expand our discussion to a curved space.
We add some simple remarks on this generalization. However, much
of this extension remains to be worked out. Here we want
to illustrate one interesting property, which is related to
the previous discussion of dimensional shifts. As we will see,
certain types of curvature can be described in a flat metric
by altering the dimensions.
Assume the
temporal and spatial sectors of the metric to be decoupled:
\begin{equation} \label{ggg}
g= \left( \begin{array}{cc} g^{(\tau )} & 0 \\ 0 & g^{(r)} \end{array}
\right) \ ,
\end{equation}
where $g^{(\tau )}$ and $g^{(r)}$ are $q \times q$ and $p\times p$ matrices,
respectively.
First we consider the spatial part of the generalized D'Alembert
operator in this metric, which is the Laplace-Beltrami operator:
\begin{equation} \label{LB}
\Delta = \frac{1}{\sqrt{\vert \det g^{(r)}\vert }} \partial_{\mu} \Big(
\sqrt{\vert \det g^{(r)}\vert } \ g^{(r) \mu \nu} \partial_{\nu} \Big)
\end{equation}
In flat space and polar coordinates $r, \theta_{1} \dots \theta_{p-1}$
we have $g^{(r)rr}=1$ and
$\sqrt{\vert \det g^{(r)} \vert } = r^{p-1} \sin^{p-2}\theta_{p-1}
\sin^{p-3}\theta_{p-2} \dots \sin \theta_{2}$.
Even if we generalize this to
\begin{equation}
\sqrt{\vert \det g^{(r)} \vert }= r^{p-1} F(\theta_{1}, \dots ,\theta_{p-1}) \ ,
\end{equation}
where $F$ is an arbitrary function, we always end up with the radial part
\begin{equation}
\Delta_{(r)} = \partial_{r}^{2} + \frac{p-1}{r} \partial_{r}
\end{equation}
that we have used so far.
A generalization of $\Delta_{(r)}$ can be achieved, however, if we let
the matrix elements $g^{(r)\mu \nu}$ depend on $r$. We still consider
the spatial part separately and assume that it takes the form:
\begin{equation}
(g^{(r)\mu \nu}) = \left( \begin{array}{cccc} g^{rr} & 0 & \dots & 0 \\
0 &&& \\ \vdots & & g_{ij} & \\ 0 &&& \\ \end{array} \right)
\end{equation}
with $i,j=1, \dots ,p-1$, i.e. in addition the radial and the
angular part are decoupled. Hence:
\begin{equation} \label{detg}
\det g^{(r)} = g^{rr} \cdot \det (g_{ij})
\end{equation}
Let us consider the general case where both factors in (\ref{detg})
pick up a nontrivial dependence on $r$.
\begin{eqnarray} \label{met1}
g^{rr} &=& f_{1}(r) \\
\det (g_{ij}) &=& f_{2}(r)\, r^{2(p-1)} F^{2}(\theta_{1}, \dots ,\theta_{p-1}) \label{met2}
\end{eqnarray}
where $f_{1},f_{2}$ are functions ${\mathbb R}^{+} \to {\mathbb R}^{+}$.
Inserting this into (\ref{LB}) we obtain:
\begin{equation} \label{delr}
\Delta_{(r)} = f_{1}(r) \left[ \partial^{2}_{r} + \Big\{
\frac{p-1}{r} + \frac{3}{2} ( \partial_{r} \ln f_{1}(r)) + \frac{1}{2}
(\partial_{r} \ln f_{2}(r)) \Big\} \partial_{r} \right]
\end{equation}
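For completeness, here is the short computation behind (\ref{delr}); it
assumes the determinant conventions of (\ref{met1},\ref{met2}), i.e.
$\det g^{(r)} = f_{1}(r)\, f_{2}(r)\, r^{2(p-1)} F^{2}$, so that
\begin{displaymath}
\partial_{r} \ln \sqrt{\vert \det g^{(r)}\vert }
= \frac{1}{2}\,\partial_{r} \ln f_{1} + \frac{1}{2}\,\partial_{r} \ln f_{2}
+ \frac{p-1}{r} \ .
\end{displaymath}
The radial part of (\ref{LB}) then becomes
\begin{displaymath}
\frac{1}{\sqrt{\vert \det g^{(r)}\vert }}\,
\partial_{r}\Big( \sqrt{\vert \det g^{(r)}\vert }\, f_{1}\,\partial_{r}\Big)
= f_{1}\Big[ \partial_{r}^{2} + \Big( \partial_{r}\ln f_{1}
+ \partial_{r} \ln \sqrt{\vert \det g^{(r)}\vert }\Big)\partial_{r}\Big] \ ,
\end{displaymath}
which is exactly (\ref{delr}).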
In general, such functions $f_{1},f_{2}$ require new types of solutions,
completely different from the case $f_{1}=f_{2}= const.$ considered
above. An exception is the case where they take the simple monomial
form:
\begin{eqnarray}
f_{1}(r) &=& c_{1} r^{\alpha} \label{f1} \\
f_{2}(r) &=& c_{2} r^{\beta} \qquad
(c_{1},c_{2},\alpha , \beta : \ constants \in {\mathbb R} ) \label{f2}
\end{eqnarray}
Here the modification of the dynamics due to the metric -- with
respect to the flat space -- corresponds to a dimensional shift
in the flat space (as long as $c_{1},c_{2} \neq 0, \ 3\alpha
+\beta \geq 2(1-p)$):
\begin{equation}
p \to p + \frac{3}{2} \alpha + \frac{1}{2} \beta
\end{equation}
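A minimal illustration: choosing $f_{1}\equiv 1$ (i.e. $\alpha =0$,
$c_{1}=1$) and $f_{2}(r)=c_{2}r^{2}$ (i.e. $\beta =2$), eq. (\ref{delr})
gives
\begin{displaymath}
\Delta_{(r)} = \partial_{r}^{2} + \Big( \frac{p-1}{r} + \frac{1}{r}\Big)
\partial_{r} = \partial_{r}^{2} + \frac{p}{r}\,\partial_{r} \ ,
\end{displaymath}
the flat radial Laplacian in $p+1$ dimensions, with no prefactor; this is
precisely the situation $f_{1}(r)\equiv 1$ that is exploited again below.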
Hence ironing out the curved space -- as is standard practice --
corresponds, for the special metric given in
(\ref{met1},\ref{met2},\ref{f1},\ref{f2}),
to a shift of the flat space dimension.
Thus in a Euclidean space we have to look for harmonic functions in a modified
dimension. E.g. for critical phenomena it would be
most consequential if we could argue that the effective dimension of
the flat space, where most models are defined, is slightly
different from four, due to a weak curvature of this type.\\
We now return to the temporal part, which we have ignored so far.
We still assume that the metric does not couple it to
the spatial part, see (\ref{ggg}). We are now interested
in cases where the metric causes a shift in the flat
dimensions $p$ and/or $q$, the same effect that was obtained
before from other operations.
If we want to leave the temporal part in the flat form, then
the factor $f_{1}(r)$ in (\ref{delr}) is an obstruction, in the sense
that we do not arrive at the flat D'Alembert operator in a
modified dimension unless $f_{1}(r) \equiv 1$. For different
functions $f_{1}$, the flat solutions are only valid if they are
separable: $\phi_{p,q}(\tau ,r) = \psi_{1}(\tau) \psi_{2}(r)$,
with $\psi_{1},\psi_{2}$ harmonic functions in $q,p$ dimensions,
respectively. An angle dependent factor in (\ref{met1})
would be an obstruction in the same sense.
In order to use flat solutions which do not have
this separable form, we have to deal with
$f_{1}(r) \equiv 1$, e.g. by using Riemann normal coordinates,
and arrange the dimensional shift only
by a non-constant $f_{2}(r)$ in (\ref{met2}) of the form (\ref{f2}).
Of course the analogous statements hold if we want to introduce
a curved temporal metric and keep a flat spatial metric.
If we only use curvatures of the type (\ref{met2},\ref{f2}), then we can
easily shift $p$ and $q$ simultaneously.\\
Finally, we can also cause a simultaneous
modification of the flat $p$ and $q$ by the following choice:
\begin{equation}
g^{(r)rr} = g^{(\tau)\tau \tau } = c_{1} r^{\alpha} \tau ^{\bar \alpha}
\qquad (c_{1},\alpha, \bar \alpha : \ constants \in {\mathbb R} )
\end{equation}
If we combine this with
\begin{equation}
f_{2}(r) = c_{2} r^{\beta} \ , \quad \bar f_{2}(\tau) =
\bar c_{2} \tau^{\bar \beta}
\end{equation}
where $\bar f_{2}(\tau)$ is the temporal analogue to $f_{2}(r)$,
then we arrive at the dimensional transformation:
\begin{equation}
p \to p + \frac{3}{2} \alpha + \frac{1}{2} \beta \quad , \qquad
q \to q + \frac{3}{2} \bar \alpha + \frac{1}{2} \bar \beta \quad .
\end{equation}
\section{Conclusions}
We have provided a reservoir of solutions of the spherically symmetric
wave equation in $p+q$ dimensions and have given some insight into their
structure. We gave a large number of explicit solutions and a set of
prescriptions for constructing new solutions out of them. In particular
we showed how to modify the dynamics such that it fulfills the wave
equation in different temporal or spatial dimensions. The same
prescriptions also hold for the Klein-Gordon equation.
The transition in steps of two dimensions is very simple. Its analytic
continuation also allows for transitions by one or by fractional
dimensions, but its application is somewhat more involved. Such transitions
in the flat space also correspond to certain types of curvature, i.e.
special curved metrics can be described in the flat space by altering
its dimensions.
We found an operator transforming a space into a time coordinate or
vice versa, a transition that actually takes place when we cross
the boundary of a black hole. However, our description of this process
is not yet complete.
The analytic continuation of $\Box^{\lambda}$ with respect to $\lambda $
corresponds in some cases again to the dynamics of the standard
D'Alembert operator in modified dimensions. This continuation is
feasible for the classical as well as for the quantum definition in
odd $p$ and/or odd $q$. If $p,q$ are {\em both} even, however,
the classical definition fails
and only the quantum definition yields a sensible result.\\
{\bf Acknowledgement} \ One of us (J.J.G.) is indebted to
O. Obregon for motivating discussions.
| {
"timestamp": "1994-02-10T20:16:35",
"yymm": "9402",
"arxiv_id": "hep-th/9402065",
"language": "en",
"url": "https://arxiv.org/abs/hep-th/9402065",
"abstract": "We discuss solutions of the spherically symmetric wave equation and Klein Gordon equation in an arbitrary number of spatial and temporal dimensions. Starting from a given solution, we present various procedures to generate futher solutions in the same or in different dimensions. The transition from odd to even or non integer dimensions can be performed by fractional derivation or integration. The dimensional shift, however, can also be interpreted simply as a modification of the dynamics. We also discuss the analytic continuation to arbitrary real powers of the D'Alembert operator. There, particular peculiarities in the pole structure show up when $p$ and $q$ are both even. Finally we give operators which transform a time into a space coordinate and v.v. and comment on their possible relation to black holes. In this context, we describe a few aspects of the extension of our discussion to a curved metrics.",
"subjects": "High Energy Physics - Theory (hep-th); General Relativity and Quantum Cosmology (gr-qc)",
"title": "Solutions of the Spherically Symmetric Wave Equation in $p+q$ Dimensions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717484165853,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7089449203353883
} |
https://arxiv.org/abs/1410.0713 | A Combinatorial Algorithm to Find the Minimal Free Resolution of an Ideal with Binomial and Monomial Generators | In recent years, the combinatorial properties of monomial ideals and binomial ideals have been widely studied. In particular, combinatorial interpretations of free resolution algorithms have been given in both cases. In this present work, we will introduce similar techniques, or modify existing ones, to obtain two new results. The first is $S[\Lambda]$-resolutions of $\Lambda$-invariant submodules of $k[\mathbb{Z}^n]$ where $\Lambda$ is a lattice in $\mathbb{Z}^n$ satisfying some trivial conditions. A consequence will be the ability to resolve submodules of $k[\mathbb{Z}^n/\Lambda]$, and in particular ideals $J$ of $S/I_{\Lambda}$, where $I_{\Lambda}$ is the lattice ideal of $\Lambda$. Second, we will provide a detailed account in three dimensions on how to lift the aforementioned resolutions to resolutions in $k[x,y,z]$ of ideals with monomial and binomial generators. | \section{Introduction}
In recent decades, various groups of mathematicians have independently studied resolutions of binomial ideals, and resolutions of monomial ideals. Many beautiful results have been obtained, but resolutions of sums of such ideals remain elusive. It is exactly these types of ideals that will be studied in this present work.
In the first section, we will discuss the combinatorial setup we will be using for the rest of work. The objects of interest are subsets of $\Z^n$ that are typically infinite. (In the existing theory, researchers utilized finite subsets of $\N^n$.) We will draw on the language of \cite{BHS} to generalize the tools from \cite{ES} and \cite{MS}.
The next section examines subsets of $\Z^n$ that are groups as well as antichains. We will call them antichain lattices, and we will work intimately with them throughout the remainder of the work. Our antichain condition parallels other work where the subgroups are not allowed to intersect the positive orthant anywhere but 0; requiring that the lattice is an antichain is a more concise way to state this condition.
We will give a brief review of resolutions in the following sections, specifically focusing on resolutions of certain types of binomial ideals that have been studied in \cite{ES} and \cite{MS}.
The penultimate section will take us on our final step before we begin resolving our desired ideals. We will need to enter the world of Laurent monomial modules, which is the analogue of monomial ideals, but in a larger ambient space. We will look at $k[x_1,\dots,x_n]$-modules contained in the Laurent polynomial ring over $k$.
The final section, where the bulk of the new work lies, will tie everything together in the full generality of $\Z^n$, but our final computation will actually be in $\Z^3$ because the increasingly complex computations in $\Z^n$ do not lend themselves to concise notation. That is, we will give the general combinatorial algorithm for the resolution of certain ideals with binomial and monomial generators in $k[x_1,x_2,x_3]$ as the main result. We will conclude with a detailed example outlining the full algorithm.
\section{Subsets of $\Z^n$}
The general setup we will be working with is one of $M$-sets, where $M$ is a monoid.
\begin{definition}
Let $M=<M,\ast, 0>$ be a monoid. Then an $M$-set is a set $S$ together with a map $$\begin{array}{ccc}M\times S & \rightarrow & S \\ (m,s) & \mapsto & ms\end{array}$$ such that $(m\ast m')s=m(m's)$ and $0s=s$.
\end{definition}
\vspace{.3cm}
\subsection{Subsets of $\Z^n$ as a Poset}
We have the following definitions and notations for elements $\alpha, \beta$ and subsets $A$ of $\Z^n$:
\singlespace
\begin{enumerate}
\item If $\alpha\in\R^n$, then $\pi_j(\alpha)$ denotes the $j^{th}$ component of $\alpha$.
\item $\alpha \leq \beta$ if $\pi_i(\alpha) \leq \pi_i(\beta)$, $i=1, \dots, n$
\item $\alpha < \beta$ if $\alpha \leq \beta$, and $\alpha \neq \beta$
\item $\alpha << \beta$ if $\pi_i(\alpha) < \pi_i(\beta)$, $i=1, \dots, n$\footnote{At times, we use the notation $a<< b$ for $a,b\in\R$ to mean that $b$ is much greater than $a$, but context will prevent any notational confusion.}
\item $\min(A) := \{\alpha \in A | \zeta < \alpha \Rightarrow \zeta \notin A\}$
\item If $A = A + \N^n$, then $A$ is an $\N^n$-set with the map being defined by $(\eta,\alpha)\mapsto \eta\alpha=\alpha+\eta$.
\item The $\N^n$ set generated by $A$ is $A+\N^n=\{\zeta \in \Z^n | \exists \alpha \in A \text{ with } \alpha \leq \zeta\}$
\item If $\alpha,\beta\in\Z^n$, then $\alpha\vee\beta=(\text{sup}\{\alpha_1,\beta_1\}, \dots, \text{sup}\{\alpha_n,\beta_n\})$, and $\alpha\wedge\beta=-(-\alpha\vee-\beta)$.
\end{enumerate}
\doublespace
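Since these componentwise operations are used constantly in what follows, we record a small computational sketch (Python, on integer tuples; the helper names are ours, not from any library):
\begin{verbatim}
# Componentwise order, joins/meets and min(A) for finite A in Z^n,
# with points represented as tuples of ints.

def leq(a, b):   # a <= b componentwise
    return all(x <= y for x, y in zip(a, b))

def lt(a, b):    # a < b : a <= b and a != b
    return leq(a, b) and a != b

def ll(a, b):    # a << b : strict in every component
    return all(x < y for x, y in zip(a, b))

def join(*pts):  # componentwise supremum  a v b v ...
    return tuple(max(xs) for xs in zip(*pts))

def meet(*pts):  # componentwise infimum = -((-a) v (-b) v ...)
    return tuple(min(xs) for xs in zip(*pts))

def min_set(A):  # minimal elements of A: an antichain
    return {a for a in A if not any(lt(b, a) for b in A)}
\end{verbatim}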
\begin{definition}
A descending chain in a poset $X$ is a function $f:I\rightarrow X$ where $I\subseteq \N$ is an interval and $f(i)>f(j)$ if $i<j$. If $A\subseteq X$ does not have any infinite descending chains, we will say it satisfies the descending chain condition, and we call it a DCC set.
\end{definition}
If $A\subseteq\Z^n$ is a DCC $\N^n$-set, then $\min(A)+\N^n=A$. The definition of $\min(A)$ implies that it is an antichain with respect to the weak order on $\Z^n$.
There is a bijection between monomials in $k[x_1, \dots, x_n]$ and vectors in $\N^n$. If $I=<m_1, \dots, m_s>$, where $m_i=X^{a_i}$, then the monomials in $I$ are exactly the vectors in the $\N^n$-set generated by $A=\{a_1, \dots, a_s\}$.
\begin{definition}
For $\alpha\in\Z^n$, the support of $\alpha$ is $\supp(\alpha)=\{i\mid\pi_i(\alpha)\neq0\}$.
\end{definition}
\begin{definition}
Let $\eta\in\Z^n$, and let $[n]=\{1, \dots, n\}$. Let $T_{\eta}=\eta-\N^n=\{\eta-\alpha\mid\alpha\in\N^n\}$, and say that for nonempty $X\subseteq[n]$, an $X$-face of $T_{\eta}$ is $\{\alpha\in T_{\eta} \mid \pi_i(\alpha)=\pi_i(\eta) \text{ for all } i\in X\}$. Let $T^o_{\eta}=\eta-\N^n_{>0}$.
\end{definition}
\begin{definition}\label{generic}
Let $A\subseteq \Z^n$. We say $A$ is generic if for all $\eta\in\Z^n$, such that $T^o_{\eta}\cap A=\emptyset$, $T_{\eta}$ contains at most one element of $A$ on each face.
\end{definition}
If $A$ is an $\N^n$-set that has a minimal element, it is never generic. This is because if $\alpha\in\min(A)$, then $T^o_{\alpha+(1,0,\dots,0)}\cap A=\emptyset$, but $T_{\alpha+(1,0,\dots,0)}$ contains two elements of $A$ on one face. Because of this, we will adopt the convention of calling a DCC $\N^n$-set generic if its generating antichain is generic.
\subsection{Neighborly Sets}
If $A\subset\Z^n$, we wish to have a way of distinguishing certain subsets of $A$ that have desirable properties. This distinction will be in the form of neighborly sets.
\begin{definition}
Let $A\subset\Z^n$, and let $B\subset A$. We say that $B$ is neighborly in $A$ if $T^o_{\vee B}\cap A=\emptyset$. We say $B$ is maximally neighborly if $B$ is neighborly and $B'\supset B$ implies $B'$ is not neighborly.
\end{definition}
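Computationally, neighborliness is a direct transcription of the definition: $B$ is neighborly when no point of $A$ lies strictly below $\vee B$ in every coordinate. Since $A$ is typically infinite, any implementation can only examine a finite window $W$ of $A$ large enough to contain every potential witness; with that caveat, a sketch reusing the helpers above:
\begin{verbatim}
def is_neighborly(B, W):
    # B: finite subset of A (a set of tuples); W: finite window of A
    # containing all candidate witnesses alpha << join(B).
    # Tests T^o_{vB} intersect A = empty.
    top = join(*B)
    return not any(ll(a, top) for a in W)

def is_maximally_neighborly(B, W):
    # no strict superset within the window is neighborly
    return is_neighborly(B, W) and not any(
        is_neighborly(B | {a}, W) for a in W if a not in B)
\end{verbatim}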
\begin{example}
\
\begin{enumerate}
\item If $A\subset\Z^n$ is an antichain, then each $\alpha\in A$ is a neighborly set.
\item If $\Lambda \subset\Z^2$ is generated by $(1,-1)$, and $A=\{(1,1)\}+\Lambda$, then $\{(i+1,1-i),(i+2,-i)\}$ is a maximally neighborly set of $A$.
\item The empty set.
\end{enumerate}
\end{example}
\begin{lemma}\label{inducedneighborly}
If $A\subseteq\Z^n$, and $B\subseteq A$ is neighborly, then every subset of $B$ is neighborly.
\end{lemma}
\begin{proof}$ $
Let $B'\subseteq B$ be arbitrary. Since $B$ is neighborly, we have that $T^o_{\vee B}\cap A=\emptyset$. Additionally, since $B'\subseteq B$, we have $\vee B'\leq\vee B$, so that $T^o_{\vee B'}\cap A\subseteq T^o_{\vee B}\cap A=\emptyset$, and hence $T^o_{\vee B'}\cap A=\emptyset$. Therefore, $B'$ is neighborly.
\end{proof}
\begin{definition}
Let $A\subset\Z^n$ and let $B\subset A$. If $B'\subseteq A$ and $\vee B'=\vee B$ implies that $B'=B$ for all such $B'\subseteq A$, then $B$ is called strongly neighborly.
\end{definition}
\begin{prop}\label{niceprop}
Let $A\subset\Z^n$. Then $B\subset A$ strongly neighborly implies that $B$ is neighborly, and the converse holds if $A$ is generic.
\end{prop}
\begin{proof}$ $
Let $B$ be strongly neighborly. Then for any $B'\subset A$ such that $\bigvee B=\bigvee B'$, we have that $B=B'$. If $T^o_{\vee B}\cap A\neq\emptyset$, then there exists $\alpha\in A$ such that $\alpha<<\bigvee B$, and hence $\bigvee B=\bigvee(B\cup\{\alpha\})$. Then $B=B\cup\{\alpha\}$, which is a contradiction, and hence $T^o_{\vee B}\cap A=\emptyset$, so $B$ is neighborly.
Now suppose that $B$ is neighborly and $A$ is generic, then at most one element of $A$ lies on each face of $T_{\vee B}$ by definition. Now consider $B'$ such that $\bigvee B=\bigvee B'$. Each $\beta\in B$ contributes to $\bigvee B$ in some component because of genericity. If $\beta'\in B'$ contributes to $\bigvee B$ what $\beta$ did, then they lie in the same face of $T_{\vee B}$ and hence must be the same. In this manner, we conclude that each element of $B$ matches up with an element of $B'$, and vice versa, and hence $B=B'$, so $B$ is strongly neighborly.
\end{proof}
\begin{definition}\label{scarfdef}
If $A\subseteq\Z^n$, let $N(A):=\{\text{strongly neighborly sets of } A\}$, and let $N_i(A):=\{\sigma\in N(A) | |\sigma|=i+1\}$. We call $N(A)$ the Scarf complex of $A$.
\end{definition}
\begin{prop}\label{prop2.15}
If $A\subseteq\Z^n$, then $N(A)$ is a simplicial complex.
\end{prop}
\begin{proof}$ $
By Lemma \ref{inducedneighborly}, neighborliness is closed under taking subsets. Hence, $\sigma\in N_i(A)$ is an $i$-face of $N(A)$, and $N_{i-1}(A)\ni\tau\subseteq\sigma$ is a face of $\sigma$.
\end{proof}
\section{Antichain Lattices}
In the existing literature, the requirement that $\Lambda\cap\N^n=0$ is often imposed on lattices $\Lambda\subseteq\Z^n$. For brevity, we will work with lattices that are also antichains.
If $\Lambda\subseteq\Z^n$ is an antichain lattice, then we define $\IL\subset k[x_1,\dots,x_n]$ to be the ideal generated by $$\{X^{\lambda^+}-X^{\lambda^-}\mid\lambda\in\Lambda\}$$ Notice that any monoid morphism $\phi:\N^n\rightarrow\N^m$ extends to a group homomorphism $\overline{\phi}:\Z^n\rightarrow\Z^m$, and that $\ker(\overline{\phi})$ is an antichain lattice. Also, $\phi$ induces $\hat{\phi}:k[x_1,\dots,x_n]\rightarrow k[y_1,\dots,y_m]$, and $\ker(\hat{\phi})=I_{\ker(\overline{\phi})}=<X^{\alpha^+}-X^{\alpha^-} \mid \alpha\in\ker(\overline{\phi})>$.
\subsection{Markov Bases} $ $
A Markov basis is a useful tool in bridging the gap between the combinatorial Scarf complex and the algebraic object $I_{\Lambda}$. This will be done via the fundamental theorem of Markov bases (Theorem \ref{markdef}). Save for Proposition \ref{markovneighbors}, the basic Markov basis theory treatment is from \cite{DS}.
Consider an antichain lattice $\Lambda \subseteq \Z^n$. Define the \emph{fiber over u} for $u \in \N^n$ to be $\mathcal{F}(u):= (u+\Lambda)\cap\N^n = \{v\in \N^n | u-v\in\Lambda\}$. Now consider an arbitrary finite subset $\mathcal{B}\subseteq\Lambda$. For an arbitrary element $u\in\N^n$, we can define a graph denoted $\mathcal{F}(u)_{\mathcal{B}}$ where the vertices are the elements of $\mathcal{F}(u)$ and the edges are between vertices $v, w$ if $v - w$ or $w - v$ are in $\mathcal{B}$.
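The definition below can be tested fiber by fiber. The following sketch (Python; union--find on a finite fiber) checks connectivity of $\mathcal{F}(u)_{\mathcal{B}}$ for a single fiber only, so it can refute, but never by itself certify, that $\mathcal{B}$ is a Markov basis:
\begin{verbatim}
from itertools import combinations

def fiber_connected(F, B):
    # F: a finite fiber F(u), as a list of points of N^n (tuples);
    # B: candidate Markov basis, a collection of lattice vectors.
    parent = {v: v for v in F}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    moves = set(B) | {tuple(-x for x in b) for b in B}
    for v, w in combinations(F, 2):
        if tuple(x - y for x, y in zip(v, w)) in moves:
            parent[find(v)] = find(w)      # merge components
    return len({find(v) for v in F}) <= 1
\end{verbatim}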
\begin{definition}\label{markdef1}A Markov basis of a lattice $\Lambda\subseteq\Z^n$ is a finite set $\mathcal{B}\subseteq\Lambda$ such that $\mathcal{F}(u)_{\mathcal{B}}$ is connected for all $u\in\N^n$. We call a Markov basis \emph{minimal} if it is such with respect to inclusion.
\end{definition}
\begin{theorem}\emph{[Theorem 1.3.2, \cite{DS}]}If $\mathcal{B}$ and $\mathcal{B}'$ are minimal Markov bases for a lattice, then $|\mathcal{B}|=|\mathcal{B}'|$.
\end{theorem}
\begin{theorem}\label{markdef}\emph{[Theorem 1.3.6, \cite{DS}]}A subset $\mathcal{B}$ of a lattice $\Lambda$ is a (minimal) Markov basis if and only if the set $\{X^{b^+}-X^{b^-} | b\in\mathcal{B}\}\subset k[x_1, \dots, x_n]$ forms a (minimal) generating set of the lattice ideal $\IL=<X^{b^+}-X^{b^-} | b\in\Lambda>$.
\end{theorem}
In the future, we will be referring to Theorem \ref{markdef} more often than to Definition \ref{markdef1}.
\begin{definition}
Let $\Lambda\subset\Z^n$ be a lattice. For any $\beta\in\Z^n$, the fiber over $\beta$ is $(\beta+\N^n)\cap\Lambda$.
\end{definition}
\begin{prop}\label{markovneighbors}
Let $\Lambda\subseteq\Z^n$ be a lattice that is an antichain. If $B$ is a minimal Markov basis of $\Lambda$ and $\mathcal{N}$ is the set of neighbors of the origin, then $\mathcal{N}=B\cup-B$.
\end{prop}
\begin{proof} $ $
First, notice that $\mathcal{N}\subseteq B\cup -B$ because if $\lambda_1$ and $\lambda_2$ are neighborly, then there is a fiber of $\Lambda$ that contains only $\lambda_1$ and $\lambda_2$.
For the opposite inclusion, it suffices to show that $\mathcal{N}$ is a Markov basis. As a Markov basis, it will contain a minimal Markov basis, and because neighborliness is closed under taking negatives, it will also contain the negative of that minimal Markov basis. For any two minimal Markov bases, $B$ and $B'$, it is the case that $B\cup -B=B'\cup-B'$, so we will be finished. We proceed by proving that $\mathcal{N}$ is a Markov basis by showing that for any fiber, any two points in the fiber are connected by a path of neighborly pairs of elements.
Suppose that $F$ is a fiber of $\Lambda$ that contains only two elements. Then those two elements are neighborly, and hence there is a neighborly path between them. Now suppose that the result holds for all fibers $F$ such that $|F|<m$. Suppose $F$ is a fiber such that $|F|=m$, and suppose $\lambda_1,\lambda_2\in F$ where $\lambda_1$ and $\lambda_2$ are not neighborly. Without loss of generality, let $F$ be the fiber over $\lambda_1\wedge\lambda_2$. Since $\lambda_1$ and $\lambda_2$ are not neighborly, there exists $\alpha\in F$ such that $\alpha<<\lambda_1\vee\lambda_2$.
Let $$\Delta_1=\{i\in[1,\dots,n]\mid\pi_i(\lambda_1)>\pi_i(\lambda_2)\}$$ Then $\pi_i(\lambda_1)>\pi_i(\alpha)$ for all $i\in\Delta_1$ and $\pi_j(\lambda_1)<\pi_j(\alpha)$ for all $j\in\Delta_1^c$. By construction, $\pi_i(\alpha)>\pi_i(\lambda_2)$ for all $i\in\Delta_1$, and $\pi_j(\alpha)<\pi_j(\lambda_2)$ for all $j\in\Delta_1^c$. Therefore, $(\alpha-\lambda_1)\wedge 0>(\lambda_2-\lambda_1)\wedge 0$ and hence $(\alpha\wedge\lambda_1)>(\lambda_1\wedge\lambda_2)$.
We can draw two conclusions from this final inequality. The first is that $(\alpha\wedge\lambda_1+\N^n)\cap\Lambda\subset(\lambda_1\wedge\lambda_2+\N^n)\cap\Lambda$, and the second is that $\lambda_2\notin(\alpha\wedge\lambda_1+\N^n)\cap\Lambda$. The final conclusion to draw is that the minimal fiber containing $\alpha$ and $\lambda_1$ has size less than $m$, and likewise for the one containing $\alpha$ and $\lambda_2$. Thus, by the inductive hypothesis, there is a neighborly path from $\lambda_1$ to $\alpha$ and another from $\alpha$ to $\lambda_2$, creating the desired neighborly path from $\lambda_1$ to $\lambda_2$.
\end{proof}
Our use of Markov bases will be ubiquitous henceforth. The primary goal of this section was to establish the fact that the generating sets of the ideals we will work with later all have a very specific form. More structural lemmas along these lines will establish this fact more rigorously later.
\begin{comment}
\begin{example}\label{markovgrowth}In the following examples, we will look at some numerical semigroups generated by varying lists of integers, and look at a Markov bases of the corresponding lattices.\footnote{All computations either were, or could be, obtained using the command markov in 4ti2.}
\begin{enumerate}
\item If $G=<3, 4, 5>$, then an associated Markov basis is \newline $\{(3,-1,-1),(-1,2,-1),(-2,-1,2)\}$.
\item If $G=<4, 5, 6>$, then an associated Markov basis is $\{(1,-2,1),(3,0,-2)\}$.
\item If $G=<20, 24, 25, 31>$\footnote{This example was given in \cite{MS} as the smallest generic codimension 1 lattice in $\Z^4$}, then an associated Markov basis is \newline $\{(4,-1,-1,-1),(3,-2,2,-2),(2,3,-2,-2),(1,2,1,-3),(-2,4,-1,-1),\newline (-3,3,2,-2),(-1,-1,3,-1)\}$.
\item If $G=<20, 24, 25, 32>$, then an associated Markov basis is \newline $\{(2, -3, 0, 1),(2, 1, 0, -2),(4, -2, 0, -1),(5, 0, -4, 0)\}$.
\item If $G=<17, 25, 31, 47, 66>$, then an associated Markov basis has 11 elements.
\item If $G=<928, 963, 968, 1275, 1321>$\footnote{This is the smallest generic codimension 1 lattice in $\Z^5$ known to the author.}, then an associated Markov basis has 15 elements.
\end{enumerate}
\end{example}
Example \ref{markovgrowth} was intended to show the reader that as the dimension of the lattice increases, the size of the Markov basis grows faster and seemingly erratically. We can bound the size of the Markov basis, though, and under certain conditions, we can show that the size is fixed. Under more general conditions, bounds have been found; they are generally very large, but nonetheless achievable. The interested reader can find the bounds in \cite{shallcross}, although the lexicon and setting is different.
\end{comment}
\subsection{Generic Lattices}
In our quest to unite the various definitions of genericity, we will now consolidate two definitions of generic from the literature. Namely, we will unite Definition \ref{genlat} from \cite{PS} and Definition \ref{generic} from \cite{MS}.
\begin{definition}\label{genlat}If $\Lambda\subset\Z^n$ is an antichain lattice, we say $\Lambda$ is generic if there is a minimal Markov basis $L$ of $\Lambda$ such that each $\lambda\in L$ is fully supported.
\end{definition}
\begin{lemma}\label{genericsmatch}
If $\Lambda\subset\Z^n$ is an antichain lattice, then $\Lambda$ is generic as in Definition \ref{genlat} if and only if $\Lambda$ is generic in $\Z^n$ as in Definition \ref{generic}.
\end{lemma}
\begin{proof} $ $
By Proposition \ref{markovneighbors}, it suffices to consider an equivalent statement: the neighbors of the origin with respect to $\Lambda$ are fully supported if and only if there are no neighborly pairs that share a component.
Let $\Lambda$ be generic by Definition \ref{genlat}. Under lattice translations, if $$L=\{\text{neighbors of the origin with respect to }\Lambda\},$$ then $$\alpha+L=\{\text{neighbors of }\alpha\text{ with respect to }\Lambda\}$$ If $\beta\in\alpha+L$, then $\pi_i(\alpha)\neq\pi_i(\beta)$ for $i=1,\dots, n$, because $\beta=\alpha+\ell$ for some $\ell\in L$ and the elements of $L$ are fully supported. Because of this, if there exists a $\beta$ such that $\pi_i(\beta)=\pi_i(\alpha)$ for some $i$, then $\alpha$ and $\beta$ are not neighborly. Therefore, there exists $\gamma\in T^o_{\alpha\vee\beta}\cap\Lambda$ by definition. That is, there exists $\gamma<<\alpha\vee\beta$ and hence $\Lambda$ is generic by Definition \ref{generic}.
Let $\Lambda$ be generic by Definition \ref{generic}. Then for all $\alpha,\beta\in\Lambda$ such that $\pi_i(\alpha)=\pi_i(\beta)$ for some $i$, there exists $\gamma\in\Lambda$ such that $\gamma<<\alpha\vee\beta$. That is, $\gamma\in T^o_{\alpha\vee\beta}\cap\Lambda$. Therefore, if $\pi_i(\alpha)=\pi_i(\beta)$ for some $i=1,\dots, n$, then they are not neighborly. Hence, if $\alpha,\beta$ are to be neighborly, $\alpha-\beta$ must be fully supported. Thus, if $A$ is the set of neighbors of $\alpha$, then the vectors $\{\alpha-\beta\mid\beta\in A\}$ are fully supported, and hence $\Lambda$ is generic by Definition \ref{genlat}.
\end{proof}
Lemma \ref{genericsmatch} shows us that the notion of a generic lattice from \cite{PS} matches the definition for generic we have already seen for $\N^n$-sets.
\section{$\Lambda$-sets}
In this section, we will generalize the lattices from the previous section into $\Lambda$-sets, and then reform some of the notions and definitions we had for lattices. Unless explicitly mentioned otherwise, our lattices will continue to be subsets of $\Z^n$, antichains, and generic.
The primary object of study in this section is a $\Lambda$-set, which is a specific case of an $M$-set, where $M$ is a monoid. If $A\subseteq\Z^n$, and $A=A+\Lambda$, then $A$ is a $\Lambda$-set under the map $A\times\Lambda\rightarrow A$ defined by $(\alpha,\lambda)\mapsto\alpha+\lambda$.
\subsection{Structure of $\Lambda$-sets}
\begin{definition}
Suppose $A=A+\Lambda$. If $A_0\subseteq A$, we call $A_0$ a set of $\Lambda$-representatives for $A$ if
\begin{enumerate}
\item $A=A_0+\Lambda$
\item $a,b\in A_0$ implies $a-b\notin\Lambda$
\end{enumerate}
Call $A$ $\Lambda$-finite if $A$ has a finite set of representatives.
\end{definition}
\begin{remark}
All $\Lambda$-finite sets are DCC sets, a fact that will be used nearly constantly without mention.
\end{remark}
Unless $\Lambda=\{0\}$, infinitely many options for $A_0$ exist. When thinking of $A=\Lambda\cup(\alpha_0+\Lambda)$, we could choose $A_0=\{\alpha,\alpha_0+\beta\}$ for any $\alpha,\beta\in\Lambda$, without any control on the Euclidean distance between $\alpha$ and $\alpha_0+\beta$. It will be important later to be able to address this distance, so we will develop a method for choosing an $A_0$ that has an additional desirable property: closeness.
\begin{lemma}\label{closeness}
Let $\Lambda\subset\R^n$ and let $A$ be $\Lambda$-finite. Let $V$ be the subspace of $\R^n$ spanned by $\Lambda$, and let $\mathcal{C}$ be a fundamental region ($k$-parallelepiped, where $\Lambda$ has codimension $n-k$) of $\Lambda$ in $V$. If $\pi:\R^n\rightarrow V$ is the orthogonal projection map, then there is a set of $\Lambda$-representatives for $A$ contained in $\pi^{-1}(\mathcal{C})$.
\end{lemma}
\begin{proof}
We have that $A$ is $\Lambda$-finite, so choose $A_0$ as a finite set of representatives. For ease, order $A_0$ as $\alpha_1 < \alpha_2<\cdots<\alpha_s$, and consider $\pi(\alpha_1)+\mathcal{C}$. Since $\pi(\alpha_1)\neq\pi(\alpha_i)$ for all $i>1$, and $\pi(\alpha_i)+\mathcal{C}+\Lambda$ is a division of $V$ into $k$-parallelepipeds, there exists a $\lambda_i\in\Lambda$ such that $(\pi(\alpha_i)+\mathcal{C}+\lambda_i)\cap(\pi(\alpha_1)+\mathcal{C})\neq\emptyset$. To complete the proof, let the representative set be $\{\alpha_1\}\cup\{\alpha_i+\lambda_i \mid i>1\}$.
\end{proof}
Although there will be many situations where this property is not needed, we will henceforth only consider sets of $\Lambda$-representatives of $\Lambda$-finite sets of the form of the conclusion of Lemma \ref{closeness}.
\begin{prop}\label{structural}Let $A$ be a generic $\Lambda$-finite set. Then $N_i(A)$ is a $\Lambda$-finite set under the map $N_i(A)\times\Lambda\rightarrow N_i(A)$ where $(\sigma,\lambda)\mapsto\sigma+\lambda$.
\end{prop}
\begin{proof}$ $
If $\sigma\in N_i(A)$, then $\sigma+\lambda\in N_i(A)$ for all $\lambda\in\Lambda$, so $N_i(A) = N_i(A)+\Lambda$, and hence it is a $\Lambda$-set. The $\Lambda$-finiteness property will come as a corollary to Lemma \ref{locallyfinitelemma}.
\end{proof}
\begin{comment}
We will conclude this section with a few remarks about the generalizations we have done here. First, note that if $A_0$ is a singleton, then $A$ is a translation of $\Lambda$. Additionally, the simplicial complex put on $\Lambda$ is identical to the simplicial complex that we can define iteratively by saying two points $\alpha,\beta\in\Lambda$ have an edge between them there is no third element $\gamma\in\Lambda$ such that $\gamma<<\alpha\vee\beta$. This coincides verbatim with the definition of the Scarf complex (Lemma \ref{scarfneighbors}), which in turn coincides with the Buchberger graph from \cite{MS}, and shows the reader that we are augmenting the space of objects that previous algorithms have been applied to.
Furthermore, since we now have a simplicial complex, we are granted the existence of certain mappings involving the faces of subsimplices; these maps will be exploited in great detail in later chapters.
\end{comment}
\section{Resolutions}
This section will review our primary object of study: resolutions. We will mostly address the general definitions via our specific uses, and in particular, via a constructive algorithm. We will cover the definitions associated to cellular resolutions, which encompass the algorithm that we will apply to the Scarf complex in later chapters.
\begin{definition}
Let $M$ be an $S$-module, then a \emph{resolution} of $M$ is a complex $F_{\bullet}$ with maps $\delta_i$ such that $$0\longleftarrow M \overset{\delta_0}\longleftarrow F_0 \overset{\delta_1}\longleftarrow F_1 \overset{\delta_2}\longleftarrow \cdots \leftarrow :F_{\bullet}$$ is exact. I.e., if $\ker(\delta_i)=\im(\delta_{i+1})$. The resolution is free if $F_i$ is free for all $i$. If the resolution is free, then $F_i=S^{\beta_i}:=\underbrace{S\oplus \cdots \oplus S}_{\beta_i\text{ times}}$, and if it is minimal, the $\beta_i$s are collectively called the Betti numbers of the resolution.
\end{definition}
\subsection{Resolutions of Lattice Ideals}
Later, we will cover resolutions of lattice ideals in more generality, but for this section, we will give the basic results concerning lattice ideals.
\begin{definition}\label{M_A}\emph{[Definition 9.11, \cite{MS}]} Let $A\subseteq\Z^n$. Then $M_A$ is the $S$-submodule of the Laurent polynomial ring $S^{\pm}=k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$ generated by $\{X^{\alpha} | \alpha\in A\}$.
\end{definition}
In \cite{MS}, one will find that the Scarf complex of $A\subseteq\Z^n$ is defined as the set of strongly neighborly sets, which is how we defined it in Definition \ref{scarfdef}. We saw in Proposition \ref{niceprop} that when $A$ is generic, strongly neighborly and neighborly are identical, and as such, the reader does not need to make any distinction going forward.
We will finish this section with a prelude to what we intend to do with the machinery we have hitherto developed. In section \ref{next}, we will construct a collection of maps that we will associate to simplicial complexes. When we apply this construction to the Scarf complex of a generic $\Lambda$-set, $A$, we will obtain a free resolution of $M_A$ as an $S$-module. Additionally, we will be able to resolve lattice ideals by considering the construction modulo the lattice. The machinery behind these ideas will be developed in later sections in more general situations. The machinery will primarily exploit the structure of the lattice, and in fact, we will use a more general version of the Scarf complex.
\subsubsection{Lattice Ideal Resolutions in $\Z^3$}
In $\Z^3$, we have a remarkable amount of control over Markov bases of lattices. In particular, for the codimension-1 antichain lattices considered below, the Markov bases will have three elements, $\lambda_1, \lambda_2$, and $\lambda_3$, and they can be chosen such that $\lambda_1=-(\lambda_2+\lambda_3)$.
\begin{lemma}\label{lambdares}
If $\Lambda\subset\Z^3$ is a generic antichain lattice with codimension 1 and Markov basis $\{\lambda_1 = (\alpha_1,-\beta_1,-\gamma_1), \lambda_2 = (\alpha_2,-\beta_2,-\gamma_2), \lambda_3 = (\alpha_3,-\beta_3,-\gamma_3)\}$, then the minimal free resolution of $S/\IL$ is
\begin{displaymath}
\begin{array}{ccccccc}
S/\IL & \leftarrow & S & \leftarrow & Se_{\lambda_1}\oplus Se_{\lambda_2}\oplus Se_{\lambda_3} & \leftarrow & Se_{p_1}\oplus Se_{p_2} \\
& & b_1 & \mapsfrom & e_{\lambda_1} & x_3^{\gamma_2}e_{\lambda_1}+x_1^{\alpha_3}e_{\lambda_2}+x_2^{\beta_1}e_{\lambda_3} & \mapsfrom e_{p_1} \\
& & b_2 & \mapsfrom & e_{\lambda_2} & x_2^{\beta_3}e_{\lambda_1}+x_3^{\gamma_1}e_{\lambda_2}+x_1^{\alpha_2}e_{\lambda_3} & \mapsfrom e_{p_2} \\
& & b_3 & \mapsfrom & e_{\lambda_3} & &
\end{array}
\end{displaymath}
\end{lemma}
\begin{proof}
Apply the tools from section \ref{6} that we will cover later. Alternatively, see \cite{H3}.
\end{proof}
\subsection{Cellular Resolutions}\label{next}
Let $\Lambda\subseteq\Z^n$ be an antichain lattice, and let $A$ be a generic $\Lambda$-finite set. We already have that $N(A)$ is a simplicial complex; to the simplicial structure, we can add more information in the form of face labels. We will label the face $\sigma$ of $N(A)$ with $\vee\sigma$.
\begin{definition}
Let $S=k[x_1, \dots, x_n]$, and let $F_i(N(A)):=\displaystyle{\bigoplus_{\sigma\in N_i(A)}Se_{\sigma}}$ be the free $S$-module with generators $\{e_{\sigma}\mid\sigma\in N_i(A)\}$.
\end{definition}
If $\sigma=\{\sigma_0,\dots,\sigma_i\}\in N_i(A)$, then $\partial_j\sigma=\{\sigma_0,\dots,\sigma_{j-1},\sigma_{j+1},\dots,\sigma_i\}$. Let $\phi_i: F_i(N(A))\rightarrow F_{i-1}(N(A))$ be defined as follows: \begin{equation}\label{mapeq}\begin{array}{cccc}\phi_i: & F_i(N(A)) & \rightarrow & F_{i-1}(N(A)) \\ & e_{\sigma} & \mapsto & \sum_{j=0}^i(-1)^jX^{\vee\sigma-\vee\partial_j\sigma}e_{\partial_j\sigma}\end{array}\end{equation}
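The maps (\ref{mapeq}) are entirely combinatorial: each coefficient is the monomial whose exponent vector is $\vee\sigma-\vee\partial_j\sigma$. A sketch of the differential (Python; a face is an ordered tuple of at least two labeled points, and \texttt{join} is the componentwise supremum from the earlier sketch):
\begin{verbatim}
def differential(sigma):
    # sigma: tuple of points (vertices of a face), len(sigma) >= 2.
    # Returns the terms of phi_i(e_sigma) as triples
    # (sign, exponent vector of the monomial coefficient, facet).
    top = join(*sigma)
    terms = []
    for j in range(len(sigma)):
        facet = sigma[:j] + sigma[j + 1:]
        exponent = tuple(t - s for t, s in zip(top, join(*facet)))
        terms.append(((-1) ** j, exponent, facet))
    return terms
\end{verbatim}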
\begin{prop}
With $\phi_i$ defined above, $\phi_{i-1}\phi_{i}=0$.
\end{prop}
\begin{proof}
Chapter 8 of \cite{W1}.
\end{proof}
\begin{definition}
Let $X$ be a simplicial complex labeled with suprema in $\Z^n$, and let $X_i$ be the set of $i$-faces of $X$. The cellular free complex supported on $X$, denoted $\mathcal{F}_{X}$, is the complex of free $S$-modules generated by $e_{\sigma}$ for $\sigma\in X_i$, together with the maps $\phi$ from (\ref{mapeq}). If $X$ is acyclic, we call $\mathcal{F}_{X}$ the cellular free resolution supported on $X$.
\end{definition}
\begin{definition}
If $X$ is a simplicial complex labeled with elements of $\Z^n$, then for all $b\in\Z^n$, $X_{\preceq b}$ is the subcomplex supported on all faces $\sigma$ such that $\vee\sigma\leq b$.
\end{definition}
\begin{prop}
The cellular free complex $\mathcal{F}_{X}$ supported on $X$ is a cellular resolution if and only if $X_{\preceq b}$ is acyclic over $k$ for all $b\in\Z^n$. When $\mathcal{F}_{X}$ is acyclic, it is a free resolution of $M_{X}$, the $k[x_1,\dots,x_n]$-submodule of $k[x_1^{\pm1}, \dots, x_n^{\pm1}]$ generated by $\{X^{\zeta}\mid\zeta\text{ the label of some vertex of } X\}$.
\end{prop}
\begin{proof}$ $
This is an extension of the finite case given in Proposition 4.5 in \cite{MS}, but the proof runs identically.
\end{proof}
\begin{example}\label{isresolution}
Let $\Lambda\subset\Z^n$ be an antichain lattice, and let $A$ be a generic $\Lambda$-finite set. Then $$F_{\bullet}: \cdots \rightarrow F_i(N(A))\overset{\phi_i}\rightarrow F_{i-1}(N(A))\overset{\phi_{i-1}}\rightarrow\cdots \overset{\phi_1}\rightarrow F_0(N(A))\overset{\phi_0}\rightarrow M_A$$ is a resolution of $M_A$ as an $S$-module.
\end{example}
\subsection{Taylor and Hull Resolutions}
\subsubsection{Hull Complex}
We begin with some notation. We will always assume that $t\in\R$ with $t>1$ and that $A\subset\Z^n$. Let $$E_t(\alpha)=(t^{\pi_1(\alpha)}, \dots, t^{\pi_n(\alpha)})$$ for $\alpha\in\Z^n$ and $$E_t(A)=\{E_t(\alpha)\mid\alpha\in A\}$$ Additionally, we will let $$\mathcal{P}_t(A)=\conv(E_t(A)+\N^n)= \R_{\geq0}^n+\conv(E_t(A))$$
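For a finite window of $A$ and a moderate $t$, the faces of $\mathcal{P}_t(A)$ can be inspected numerically. A sketch using scipy (the window and the cutoff $t$ are our choices; floating point limits how large $t$ can be, and scipy triangulates non-simplicial facets):
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def lower_faces(window, t=2.0):
    # window: finite list of points of A (tuples in Z^n).
    # Returns the facets of conv(E_t(window)) whose outward normal
    # has all components negative -- the bounded faces visible on
    # the boundary of P_t(A) = conv(E_t(A)) + R_{>=0}^n.
    pts = np.array([[float(t) ** x for x in a] for a in window])
    hull = ConvexHull(pts)
    faces = []
    for simplex, eq in zip(hull.simplices, hull.equations):
        if all(c < 0 for c in eq[:-1]):   # eq = (normal, offset)
            faces.append([window[i] for i in simplex])
    return faces
\end{verbatim}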
\begin{lemma}\label{exponentialconvex}
If $A\subseteq\Z^n$ is a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subseteq\Z^n$, then for $t>1$, the vertices of $\mathcal{P}_t(A)$ are $E_t(A)$.
\end{lemma}
\begin{proof}$ $
It suffices to show that the points of $E_t(A)$ are in convex position for large enough $t$. First note that from \cite{RK}, we have the following condition for convexity: a set $C\subseteq\R^n$ is convex if and only if for all $x,y\in C$, $<N_C(x)-N_C(y), x-y>\geq 0$, where $N_C(x)$ is the normal vector to $C$ at $x$.\footnote{In \cite{RK}, as here, we will consider a normal vector at a point to be any vector inside the normal cone at that point. That is, we can choose a normal vector to any plane that is tangent at the point, and the result still holds.}
Let $a\in\R_{>0}^n$ and let $t\in\R_{>1}$. Let $$H_a=\{x\in\R^n | a\cdot x\geq 0\}$$
and
$$\partial H_a=\{x\in\R^n | a\cdot x=0\}.$$
Then $$t(H_a)=\{t(x) | a\cdot x \geq 0\}$$
$$=\{(t^{x_1}, \dots, t^{x_n}) | a_1x_1 + \cdots + a_nx_n\geq 0\}$$
$$=\{(\xi_1, \dots, \xi_n) | \xi_1^{a_1}\dots\xi_n^{a_n}\geq 1, \xi_i=t^{x_i}\}$$
and
$$t(\partial H_a)=\{(\xi_1, \dots, \xi_n) | \xi_1^{a_1}\dots\xi_n^{a_n} = 1\}$$
To simplify notation, let $f_a(\xi)=\xi_1^{a_1}\dots\xi_n^{a_n}$. Then we have that $t(\partial H_a)$ is the level set defined by $f_a(\xi)=1$ and $t(H_a)=\{\xi | f_a(\xi)\geq 1\}$. Note that since $t$ is a homeomorphism from $\R^n$ to $\R^n$, we have that $t(\partial H_a)=\partial t(H_a)$.
We wish to show that $C=t(H_a)$ is convex. By the aforementioned convexity condition, we can show $<N_C(x)-N_C(y),x-y>\geq 0$ for all $x,y\in C$. We clearly only need to check this on the boundary of $C$, which is what we will do. The (outward facing) normal vector to $\partial C$ at $\xi$ is $-\nabla f_a(\xi)$. Now $\frac{\partial f_a}{\partial \xi_i}=\frac{a_i}{\xi_i}f_a(\xi)$ and if $\xi\in\partial C$, then $f_a(\xi)=1$. Thus $\nabla f_a(\xi)=(\frac{a_1}{\xi_1},\cdots, \frac{a_n}{\xi_n})$ for all $\xi\in\partial C$.
To finish the computation, choose $\xi,\eta\in\partial C$. Then $$N_C(\xi)=-(\frac{a_1}{\xi_1},\cdots, \frac{a_n}{\xi_n})$$
and
$$N_C(\eta)=-(\frac{a_1}{\eta_1},\cdots, \frac{a_n}{\eta_n})$$
Now $<N_C(\xi)-N_C(\eta),\xi-\eta>=<(\dots, \frac{a_i(\xi_i-\eta_i)}{\xi_i\eta_i},\dots),(\dots,\xi_i-\eta_i,\dots)>$
$=a_1\frac{(\xi_1-\eta_1)^2}{\xi_1\eta_1} + \cdots + a_n\frac{(\xi_n-\eta_n)^2}{\xi_n\eta_n}\geq 0$. So we have that $C$ is convex.
\end{proof}
\begin{corollary}\label{cor5.2}
Let $A\subseteq\Z^n$ be a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subseteq\Z^n$, and $t>1$. If $F$ is a face of $\conv(E_t(A))$, then $F\cap E_t(A)=E_t(\sigma)$ where $\sigma\in N(A)$.
\end{corollary}
\begin{proof}
We already have that $E_t(A)$ is the vertex set of $\mathcal{P}_t(A)$. Suppose that $F$ is a maximal face of $\conv(E_t(A))$ and let $F\cap E_t(A)=\{E_t(\alpha_1), \dots, E_t(\alpha_r)\}$. Suppose for a contradiction that $\{\alpha_1, \dots, \alpha_r\}\notin N(A)$. Then there exists $b\in A$ such that $b<<\vee\alpha_i$. Therefore $E_t(b)<<E_t(\vee\alpha_i)=\vee E_t(\alpha_i)$. We have three cases to consider.
\begin{enumerate}
\item $E_t(b)\in\conv(E_t(\alpha_1), \dots, E_t(\alpha_r),\vee E_t(\alpha_i))$.
\item $E_t(b)\notin\conv(E_t(\alpha_1), \dots, E_t(\alpha_r),\vee E_t(\alpha_i))$.
\item $E_t(b)\in F$.
\end{enumerate}
Examining each case:
\begin{enumerate}
\item We would have that $E_t(b)$ lies in the interior of $\mathcal{P}_t(A)$, contradicting Lemma \ref{exponentialconvex}.
\item This would imply that the hyperplane containing $F$ separates $E_t(A)$, contradicting the convexity of $\mathcal{P}_t(A)$.
\item If $E_t(b)\in F$, increase $t$ by $\epsilon> 0$ to be back in case 2.
\end{enumerate}
\end{proof}
Before we cover the main concepts in this section, we first need a structural lemma that underlies many statements that will be made later.
\begin{lemma}
Let $A\subset\Z^n$, and suppose that $A$ is a generic $\Lambda$-finite set for some antichain lattice $\Lambda$. Then for $t>>0$, $\partial(\conv(E_t(A)+\N^n))\cong\R^{n-1}$.
\end{lemma}
\begin{proof} $ $
Let $B=\{\beta\in\R^n\mid\pi_1(\beta)+\cdots+\pi_n(\beta)=0\}$, and for each $\beta\in B$, let $\ell_{\beta}=\{\beta+s(1,\dots,1)\mid s\in\R\}$. Since $\conv(E_t(A)+\N^n)$ is convex, and $B\cap\N^n=\{0\}$, we have that each $\ell_{\beta}$ intersects $\partial(\conv(E_t(A)+\N^n))$ in at most one point. To see that $\ell_{\beta}$ intersects $\partial(\conv(E_t(A)+\N^n))$ at all, notice that the point of $\partial(\conv(E_t(A)+\N^n))$ that is closest to the origin is the point of intersection with $\ell_0$. Call this point $\gamma$. Then $\N^n\subset\partial(\conv(E_t(A)+\N^n))-\gamma$. The line connecting any point $\eta$ on any coordinate face of $\N^n$ to the closest point on $B$ passes through $\partial(\conv(E_t(A)+\N^n))-\gamma$, showing that each $\ell_{\beta}$ intersects $\partial(\conv(E_t(A)+\N^n))$ in exactly one point.
Therefore, we have a bijection between $B$ and $\partial(\conv(E_t(A)+\N^n))$. For each $\beta\in B$, call this point of intersection $\beta'$. Consider the map $$\begin{array}{cccc} f: & \partial(\conv(E_t(A)+\N^n)) & \rightarrow & B \\ & \beta' & \mapsto & \beta\end{array}$$
Since $f$ maps different elements along lines parallel to $(1,\dots,1)$, two points that are close in $\partial(\conv(E_t(A)+\N^n))$ remain close under $f$. This also holds mutatis mutandis for $f^{-1}$, which maps $\beta$ to $\beta'$. Therefore, we have a continuous bijection with a continuous inverse, and hence $\partial(\conv(E_t(A)+\N^n))$ and $B$ are homeomorphic. Since $B$ is a hyperplane in $\R^n$, it is homeomorphic to $\R^{n-1}$, and hence, so is $\partial(\conv(E_t(A)+\N^n))$.
\end{proof}
Continuing, we need to show an important property of $A$.
\begin{lemma}\label{locallyfinitelemma}
Let $\Lambda$ be an antichain lattice, and let $A\subseteq\Z^n$ be a generic $\Lambda$-finite set. Then for each $\alpha\in A$, the set of neighbors of $\alpha$ is finite.
\end{lemma}
\begin{proof}
The proof runs similarly to the proof of Proposition 9.4 in \cite{MS}. Since $A$ is $\Lambda$-finite, we can choose a set of $\Lambda$-representatives and call it $A_0$. Then we have $|A_0|$ copies of $\Lambda$ in $A$. We can find all the primitive elements (defined in the referenced proof) by individually translating each copy of $\Lambda$ to contain the origin, finding the associated primitive elements, then translating them back. There are only finitely many primitive elements for each copy of $\Lambda$, and hence only finitely many overall.
The second half of the proof runs identically.
\end{proof}
For $A\subseteq\Z^n$, let $\hull_t(A)=\{E_t(F)\subseteq E_t(A)\mid \conv(E_t(F))\text{ is a face of }\mathcal{P}_t(A)\}$.
\begin{prop}\label{stableposet}
If $A\subseteq\Z^n$ is generic, then there exists $T\in\R$ such that for $t,t'\geq T$, $\hull_t(A)=\hull_{t'}(A)$.
\end{prop}
\begin{proof}$ $
Let $B_i=B(0,i)$ be the ball of radius $i$ about the origin in $\R^n$. If $\mathcal{V}_{i,t}=B_i\cap\hull_t(A)$, then $\hull_t(A)=\varinjlim\mathcal{V}_{i,t}$. By Proposition 4.14 of \cite{MS}, there exists a $T\in\R$ such that for $t,t'\geq T$, $\hull_t(\mathcal{V}_{i,t})=\hull_{t'}(\mathcal{V}_{i,t})$. Specifically, the Proposition tells us that we may take $T=(n+1)!$. Since this holds for all $\mathcal{V}_{i,t}$, it holds under the direct limit, and hence for $t,t'\geq(n+1)!$, $\hull_t(A)=\hull_{t'}(A)$.
\end{proof}
\begin{remark}
Although not mentioned explicitly, if $A$ were not generic, Proposition \ref{stableposet} would fail. This is because there would exist two elements that share a component without a third element dividing the supremum of the first two. Under the exponentiation, these two elements would continue to share a component for all $t$, which would imply the existence of a supporting hyperplane of $\mathcal{P}_t(A)$ parallel to a coordinate plane, violating Lemma \ref{exponentialconvex}.
\end{remark}
When $t$ is large enough, $\hull_t(A)$ is independent of $t$, so we will drop the subscript and use $\hull(A)$ when it is understood that $t\geq T$.
\begin{prop}\label{locallyfinite}
Let $A\subset\Z^n$ be a $\Lambda$-finite set for some antichain lattice $\Lambda\subset\Z^n$. For all $\alpha\in A$, $$|\{\sigma\in\hull(A)\mid\alpha\in\sigma\}|<\infty$$
\end{prop}
\begin{proof}$ $
If a face of $\hull(A)$ were incident with infinitely many other faces, that would imply the existence of an edge that was incident with infinitely many other edges; up to a suitable translation, we could consider the point of incidence to be 0, contradicting Lemma \ref{locallyfinitelemma}.
\end{proof}
\begin{remark}
In Lemma \ref{locallyfinitelemma}, we worked strictly in $A$ and $N(A)$, but Proposition \ref{locallyfinite} made a claim about $\hull(A)$. However, we have a structure-preserving bijection between the two objects, so, up to notation, the claim in the lemma could have been made as a claim about $\hull(A)$.
\end{remark}
\begin{prop}
If $A\subseteq\Z^n$ is a $\Lambda$-finite set for some antichain lattice $\Lambda\subseteq\Z^n$, then every face of $\conv(E_t(A))$ is a polyhedron.
\end{prop}
\begin{proof} $ $
It is clear that $\conv(E_t(A))$ is the intersection of half-spaces from Lemma \ref{exponentialconvex}, so it remains to show that each face is the convex hull of finitely many points.
If $\conv(E_t(A))$ had a supporting hyperplane that contained infinitely many points, that would imply the existence of a hyperplane containing infinitely many points of $A$. The only such hyperplanes are those that are parallel to $\Lambda$ and that contain $\alpha_0+\Lambda$ for some $\alpha_0\in A$. But by Theorem 9.14 of \cite{MS}, these collections of points are mapped to locally finite sets under the exponentiation map, and hence no supporting hyperplane of $\conv(E_t(A))$ containing infinitely many points exists.
\end{proof}
\subsubsection{Taylor Complexes and Resolutions}
\begin{definition}
A simplicial complex with labels from a lattice is a simplicial complex together with a function from its vertices to the lattice. The label of a simplex is the supremum of the labels of its vertices.
\end{definition}
\begin{definition}\label{cellrescoeffs}
Let $\Delta$ be a simplicial complex labeled with suprema from $\Z^n$, and let $\Delta_i=\{i\text{-faces of }\Delta\}$. Let $S=k[x_1,\dots,x_n]$ and let $S(e_{\sigma})$ be the free $S$-module of rank one generated by $e_{\sigma}$. The Taylor complex supported on $\Delta$ is $$\mathcal{F}_{\Delta}: \cdots \overset{d_{i+1}}\rightarrow\mathcal{F}_i\overset{d_i}\rightarrow\cdots\overset{d_1}\rightarrow\mathcal{F}_0\overset{d_0}\rightarrow 0$$ where $$\mathcal{F}_i=\bigoplus_{\sigma\in\Delta_i}S(e_{\sigma})$$ and if $\sigma=\{\alpha_0,\dots,\alpha_i\}$, and $\sigma\setminus j=\{\alpha_0,\dots,\alpha_{j-1},\alpha_{j+1},\dots,\alpha_i\}$, $$d(e_{\sigma})=\sum_{\alpha_j\in\sigma}(-1)^{j-1}(X^{\vee\sigma-\vee\sigma\setminus j})e_{\sigma\setminus j}.$$
\end{definition}
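The differential is concrete enough to compute mechanically. Below is a minimal sketch (in Python; the vertex labels are illustrative, and the sign is only pinned down up to the orientation convention) of the coefficient rule: a simplex is labeled by the componentwise supremum of its vertex labels, and each facet contributes the monomial $X^{\vee\sigma-\vee\sigma\setminus j}$.
\begin{verbatim}
# Sketch: Taylor-complex differential coefficients for Z^n-labeled vertices.
def join(*labels):
    # Componentwise supremum: the label of a simplex.
    return tuple(max(c) for c in zip(*labels))

def differential(simplex):
    # List the (sign, monomial exponent, facet) terms of d(e_simplex).
    top, terms = join(*simplex), []
    for j in range(len(simplex)):
        facet = simplex[:j] + simplex[j + 1:]
        expo = tuple(a - b for a, b in zip(top, join(*facet)))
        terms.append(((-1) ** (j + 1), expo, facet))
    return terms

# The edge between (1,2,0) and (0,4,-1) (it reappears in the final example)
# has label (1,4,0); the coefficients are X^(1,0,1) = xz and X^(0,2,0) = y^2.
print(differential(((1, 2, 0), (0, 4, -1))))
\end{verbatim}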
\begin{remark}
In \cite{MS}, the Taylor complex is defined on a finite set in $\N^n$, but there is no reason for this other than making the $\Delta_i$ finite.
\end{remark}
\begin{definition}
The Taylor resolution of $A\subseteq \Z^n$ is the Taylor complex supported on the simplicial complex that is full over $A$; i.e., the faces of the simplicial complex are in bijection with the finite subsets of $A$.
\end{definition}
\begin{definition}
If $N(A)$ is the Scarf complex of $A$ (Definition \ref{scarfdef}), then $\mathcal{F}_{N(A)}$ is the algebraic Scarf complex, which is the Taylor complex supported on the Scarf complex.
\end{definition}
\begin{remark}
Note that the Scarf complex is a labeled simplicial complex, and the algebraic Scarf complex is that complex coupled with a collection of maps.
\end{remark}
\begin{prop}\label{scarfsubcomplex}
If $A\subseteq\Z^n$, then every free $S$-resolution of $M_A$ contains the algebraic Scarf complex $\mathcal{F}_{N(A)}$ as a subcomplex.
\end{prop}
\begin{proof}$ $
The Taylor resolution is an $S$-resolution of $M_A$. By \cite{P2}, it must contain a minimal resolution. Call that minimal resolution $\mathcal{F}_{\bullet}$. By definition, $\mathcal{F}_{\bullet}$ must contain all relations of $M_A$ in all dimensions. Additionally, the Taylor resolution contains the Scarf complex by construction, which in turn contains relations of $M_A$ without repetition. Since every relation appearing in the Scarf complex must appear in any resolution, the Scarf complex is a subcomplex of $\mathcal{F}_{\bullet}$.
\end{proof}
\begin{theorem}\label{theorem137}
If $A\subset\Z^n$, then $\mathcal{F}_{N(A)}$ is isomorphic to a subcomplex of $\hull(A)$.
\end{theorem}
\begin{proof} $ $
Let $\sigma=\{\alpha_1,\dots,\alpha_p\}\subset A$ be a face of the Scarf complex. Then $\sigma$ is strongly neighborly. We wish to relabel the elements of $\sigma$ in a meaningful way. To do this, consider $i\in [p]$ and let $$J(i)=\{j\in [n] \mid \pi_j(\vee\sigma\setminus i)<\pi_j(\vee\sigma)\}$$
Notice that $J(i)$ is nonempty: if it were empty, then $\alpha_i$ would not contribute to $\bigvee \sigma$, and hence $\sigma$ could not be neighborly, because $\bigvee\sigma = \bigvee(\sigma\setminus\alpha_i)$. Additionally, for similar reasons, $J(i)\nsubseteq\bigcup_{k\neq i}J(k)$. Therefore, for each $i\in [p]$, there is a $j=j(i)\in [n]$ such that $\alpha_i$ contributes to $\bigvee\sigma$ in component $j(i)$ and no other element of $\sigma$ does. Now for each $\alpha_i\in\sigma$, choose such a $j(i)$, and relabel $\alpha_i$ as $\alpha_{j(i)}$. Then $\pi_i(\alpha_i)>\pi_i(\alpha_k)$ for all $k\neq i$.
The second step of the proof is to show that the matrix $(t^{\pi_i(\alpha_k)})_{i,k}$ is nonsingular for large enough $t$. It suffices to show that for large enough $t$, \begin{equation}\label{tlarge}\prod_{i=1}^pt^{\pi_i(\alpha_i)}>p!\prod_{i=1}^pt^{\pi_i(\alpha_{\rho(i)})}\end{equation} for any non-identity permutation $\rho$ of $[p]$. If (\ref{tlarge}) is satisfied, then the term $\prod_{i=1}^pt^{\pi_i(\alpha_i)}$ will dominate all other terms of $\det((t^{\pi_i(\alpha_k)})_{i,k})$, and hence the matrix will be nonsingular. Assume $t>p!$. The exponents satisfy $\pi_i(\alpha_i)-\pi_i(\alpha_{\rho(i)})\geq1$ whenever $\rho(i)\neq i$ and equal $0$ otherwise, so since $\rho$ is not the identity, $$\frac{\prod_{i=1}^pt^{\pi_i(\alpha_i)}}{\prod_{i=1}^pt^{\pi_i(\alpha_{\rho(i)})}}=\prod_{i=1}^pt^{\pi_i(\alpha_i)-\pi_i(\alpha_{\rho(i)})}\geq t>p!$$
Therefore, inequality (\ref{tlarge}) is satisfied for all non-identity permutations $\rho$.
This says that the points $\{t^{\alpha_1}, \dots, t^{\alpha_p}\}$ are affinely independent. Because they are affinely independent, the convex hull of the points forms a simplex in which every point is a vertex.
By definition, $\hull(A)_{\preceq\vee\sigma}$ is exactly the convex hull of $\{t^{\alpha_1}, \dots, t^{\alpha_p}\}$. Because $\sigma$ is (strongly) neighborly, there is no other subset of $A$ that has the same supremum as $\sigma$. As such, if a face of $\hull(A)$ is labeled with $x^{\vee\sigma}$, it necessarily came from the image of $\sigma$, and since the exponential map is injective, there can be only one such face. Proposition \ref{scarfsubcomplex} says that every free resolution contains the algebraic Scarf complex as a subcomplex. This tells us that in addition to there being at most one face with label $x^{\vee\sigma}$, there also must be at least one. Therefore, every strongly neighborly set of $A$ is present as a face in $\hull(A)$.
\end{proof}
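The domination step is easy to test numerically. The following sketch (Python with sympy; the three relabeled points are hypothetical, chosen only so that $\pi_i(\alpha_i)>\pi_i(\alpha_k)$ for $k\neq i$) verifies that the matrix $(t^{\pi_i(\alpha_k)})$ is already nonsingular for a small $t$, so that the points $t^{\alpha_i}$ are affinely independent.
\begin{verbatim}
# Sketch: nonsingularity of the matrix (t^{pi_i(alpha_k)}).
from sympy import Matrix, Rational

alphas = [(3, 1, 0), (0, 2, 1), (1, 0, 4)]  # hypothetical relabeled points
p, t = len(alphas), 4
M = Matrix(p, p, lambda i, k: Rational(t) ** alphas[k][i])
print(M.det())  # nonzero (= 260865): the diagonal term dominates
\end{verbatim}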
\begin{remark}It will be common to drop the phrase ``is isomorphic to'' from Theorem \ref{theorem137} and just say that $\mathcal{F}_{N(A)}$ is a subcomplex of $\hull(A)$.
\end{remark}
\begin{theorem}\label{scarfequalshull}
If $A\subset\Z^n$ is a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subset\Z^n$, then $N(A)\cong\hull(A)$.
\end{theorem}
We need a lemma to prove the theorem.
\begin{lemma}\label{lemma137}
If $A\subset\Z^n$ is a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subset\Z^n$, and $F$ is a face of $\hull(A)$, then for every $\alpha\in A$, there is a component $\pi_j(\alpha)$ such that $\pi_j(\alpha)\geq\pi_j(\vee F)$.
\end{lemma}
\begin{proof}$ $
The analogous statement, Lemma 6.14 of \cite{MS}, has a finite $A\subset\N^n$, but finiteness is never used, and the proof runs identically for infinite $A\subset\Z^n$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{scarfequalshull}]$ $
Let $F$ be a face of $\hull(A)$ and let $\{\alpha_1, \dots, \alpha_p\}\subset A$ be the points that correspond to the vertices of $F$. That is, $F=\{E_t(\alpha_i)\}$. Without loss of generality, we can assume that $\pi_i(\bigvee_j\alpha_j)\neq0$. For a contradiction, assume that $\{\alpha_1, \dots, \alpha_p\}$ is not a face of $N(A)$. This could occur in two cases:
\begin{enumerate}
\item There exists $k\in\{1,\dots,p\}$ such that $\bigvee_{j\neq k}\alpha_j=\bigvee_j\alpha_j$.
\item There exists $\beta\in A$ such that $t^{\beta}\notin F$ and $\beta<\bigvee_j\alpha_j$. (I.e., $\bigvee_j\alpha_j=\beta\vee\bigvee_j\alpha_j$.)
\end{enumerate}
For the first case, if we apply Lemma \ref{lemma137} to $\alpha_k$, then there exists an index $m$ such that $\pi_m(\alpha_k)=\pi_m(\bigvee_j\alpha_j)$, and hence there is an element $\alpha_{\ell}$ such that $\pi_m(\alpha_k)=\pi_m(\alpha_{\ell})$. Since $A$ is generic, there exists $\gamma\in A$ such that $\gamma<<\alpha_k\vee\alpha_{\ell}$, and hence $\gamma\leq\bigvee_j\alpha_j$, contradicting Lemma \ref{lemma137}.
In the second case, assuming we are not in the first case, for any $\alpha_k\in\{\alpha_1, \dots, \alpha_p\}$ there exists $j$ such that $\pi_j(\alpha_k)=\pi_j(\bigvee_i\alpha_i)\geq\pi_j(\beta)$. If the inequality were an equality, then by genericity there would exist $\beta'<<\bigvee_i\alpha_i$, contradicting Lemma \ref{lemma137} again; so the inequality is strict. Having a strict inequality means that $\beta<<\bigvee_i\alpha_i$, again contradicting Lemma \ref{lemma137}.
In both cases, we reached contradictions, and hence every face of $\hull(A)$ is a face of the Scarf complex. Coupled with Theorem \ref{theorem137}, we have that $\hull(A)\cong N(A)$.
\end{proof}
\begin{corollary}\label{scarfresolves}
If $A\subset\Z^n$ is a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subset\Z^n$, then $\mathcal{F}_{N(A)}$ minimally resolves $M_A$ as an $S$-module.
\end{corollary}
\begin{proof}$ $
We already have that $\mathcal{F}_{\hull(A)}$ resolves $M_A$, and Theorems \ref{theorem137} and \ref{scarfequalshull} together give us that $\mathcal{F}_{N(A)}$ also resolves it. The resolution is minimal because no two faces of $N(A)$ have the same degree.
\end{proof}
\section{Different Module Structures}\label{6}
Currently, we are operating under the condition that $A\subset\Z^n$ is a generic $\Lambda$-finite set such that $\Lambda\subseteq\Z^n$ is an antichain lattice. With these assumptions, we have constructed a minimal free resolution of the $S$-submodule $M_A=\{\sum_{\alpha}c_{\alpha}X^{\alpha}\}$ of the Laurent polynomial ring $S^{\pm}=k[x_1^{\pm 1}, \dots, x_n^{\pm 1}]$. The minimal free resolution we constructed, namely the algebraic Scarf complex of $A$, is nonzero in only finitely many dimensions, but in the dimensions where it is nonzero, the module is typically infinitely generated. That is, $$(\mathcal{F}_{N(A)})_i=\bigoplus_{\sigma\in N_i(A)}Se_{\sigma}$$ is nonzero for only finitely many $i$, but for the $i$'s for which it is nonzero, there are typically infinitely many $\sigma\in N_i(A)$.
An underlying structure that we have hitherto underutilized is the grading on $S$, and hence on the $S$-modules.
\subsection{Gradings on S}
The polynomial ring $S=k[x_1, \dots, x_n]$ is graded by $\N^n$, and hence all the $S$-modules we have seen have also been graded by $\N^n$. Because of this grading, and our ability to associate any monomial in $S$ to a vector in $\N^n$, it will be helpful at times to consider $S$ as the monoid algebra $k[\N^n]$. This notation will be used when considering gradings that are less common than the $\N^n$-grading. There is a second grading present for many examples that we have yet to consider: the $\Lambda$-grading.
Consider the rings $$S[\Lambda]\cong k[\N^n][\Lambda]=\{\sum_{\alpha,\lambda}c_{\alpha\lambda}X^{\alpha}z^{\lambda}\mid c_{\alpha\lambda}\in k\text{ finitely non-zero}, \alpha\in \N^n, \lambda\in\Lambda\}$$ and $$k[\N^n+\Lambda]=\{\sum_{\beta}c_{\beta}X^{\beta}\mid c_{\beta}\in k\text{ finitely non-zero}, \beta\in\N^n+\Lambda\}\;.$$
\begin{lemma}\label{lemma1137}Let $A\subset\Z^n$ be a $\Lambda$-finite set such that $\Lambda\subset\Z^n$ is an antichain lattice. With $M_A=\{\sum_{\alpha}c_{\alpha}t^{\alpha}\mid c_{\alpha}\in k \text{ finitely non-zero }, \alpha\in A+\N^n\}$,
\
\begin{enumerate}
\item $M_A$ is a $k[\N^n+\Lambda]$-module, with action defined by:
$$(x^\beta, t^\alpha)\mapsto t^{\alpha+\beta},\; \alpha\in A+\N^n, \beta\in \N^n+\Lambda,$$ and linearity.
\item $M_A$ is a $S[\Lambda]$-module, with action defined by:
$$(x^\beta z^{\lambda}, t^\alpha)\mapsto t^{\alpha+\beta+\lambda},\; \alpha\in A, \beta\in \N^n, \lambda\in\Lambda,$$ and linearity.
\item The set $\{\,t^\alpha\mid\alpha\in A\,\}$ is a minimal set of generators for $M_A$ as an $S$-module.
\item The set $\{\,t^\alpha\mid\alpha\in A_0\,\}$ is a minimal set of generators for $M_A$ as a $k[\N^n+\Lambda]$-module.
\item If $A\subseteq \Lambda+\N^n$ and $A=A+\Lambda+\N^n$, then $M_A$ is an ideal in $k[\N^n+\Lambda]$.
\end{enumerate}
\end{lemma}
\begin{proof}
\
\begin{enumerate}
\item We have the following equalities that show the result:
\begin{enumerate}
\item $(x^{\beta},t^{\alpha_1}+t^{\alpha_2})\mapsto t^{\alpha_1+\beta}+t^{\alpha_2+\beta}=x^{\beta}t^{\alpha_1}+x^{\beta}t^{\alpha_2}$.
\item $(x^{\beta_1}+x^{\beta_2},t^{\alpha})\mapsto t^{\alpha+\beta_1}+t^{\alpha+\beta_2}=x^{\beta_1}t^{\alpha}+x^{\beta_2}t^{\alpha}$.
\item $(x^{\beta_1}x^{\beta_2},t^{\alpha})\mapsto t^{\alpha+\beta_1+\beta_2}=x^{\beta_1}t^{\alpha+\beta_2}=x^{\beta_1}(x^{\beta_2}t^{\alpha})$.
\item $(1,t^{\alpha})\mapsto t^{\alpha+0}=t^{\alpha}$.
\end{enumerate}
\item Identical to part 1, using that $\alpha+\lambda\in A$ for $\alpha\in A$ and $\lambda\in\Lambda$, so that $\alpha+\beta+\lambda\in A+\N^n$.
\item Let $M_A\ni m=\sum c_{\alpha}t^{\alpha}$ for finitely many $\alpha\in A+\N^n$. If some $\alpha$ is not in $A$, then there exists an $\eta\in\N^n$ and $\alpha_0\in A$ such that $\alpha=\alpha_0+\eta$. Then we have that $c_{\alpha}t^{\alpha}=c_{\alpha}t^{\alpha_0+\eta}=c_{\alpha}t^{\eta}t^{\alpha_0}$. But $t^{\eta}\in S$, so $A$ generates $M_A$ as an $S$-module.
\item Mutatis mutandis with part 3, except that now every $\alpha\in A+\N^n$ is written as $\alpha_0+\lambda+\eta$ with $\alpha_0\in A_0$, $\lambda\in\Lambda$, and $\eta\in\N^n$.
\item It suffices to show that $\alpha+\beta\in A+\N^n$ when $\alpha\in A+\N^n$ and $\beta\in\N^n+\Lambda$. If $\alpha=a+\eta_1$ with $a\in A$ and $\eta_1\in\N^n$, and $\beta=\lambda_2+\eta_2$ with $\lambda_2\in\Lambda$ and $\eta_2\in\N^n$, then $\alpha+\beta=a+\lambda_2+\eta_1+\eta_2$, and since $A=A+\Lambda+\N^n$, we have that $\alpha+\beta\in A+\N^n$.
\end{enumerate}
\end{proof}
We have already defined the algebraic Scarf complex to be the Taylor complex supported on $N(A)$. Implicit in this definition was the consideration of the algebraic Scarf complex as a complex of $S$-modules. We have now seen that these modules can be considered as $S[\Lambda]$-modules.
\begin{definition}
If $A\subset\Z^n$ is a $\Lambda$-finite set such that $\Lambda\subset\Z^n$ is an antichain lattice, then the Taylor complex supported on $N(A)/\Lambda$ considered as a complex of $S[\Lambda]$-modules is $\mathcal{F}^{\Lambda}_{N(A)}$.
\end{definition}
A typical free module in $\mathcal{F}^{\Lambda}_{N(A)}$ is of the form $$\bigoplus_{\sigma+\Lambda\in N_i(A)/\Lambda}S(e_{\sigma+\Lambda})\;.$$ Due to the onerous nature of this notation, we will often refrain from writing out the modules in detail.
\subsubsection{The Functor $\underline{\white{M}}\otimesS S$}
Let $J$ be the ideal $<1-z^{\lambda}\mid\lambda\in\Lambda>$ in $S[\Lambda]$, and let $\overline{J}$ be the image of $J$ in $k[\N^n+\Lambda]$ under the map $z^{\lambda}x^{\alpha}\mapsto x^{\alpha+\lambda}$.
\begin{lemma}
Let $M$ be an $S[\Lambda]$-module. Then $S\otimes_{S[\Lambda]}M\cong M/JM$.
\end{lemma}
\begin{proof}$ $
Define $b:S\times M\rightarrow M/JM$ by $b(s,m)=sm+JM$. Then $b$ is surjective and $S$-bilinear. Furthermore, $b$ is $S[\Lambda]$-bilinear because $b(z^{\lambda}s,m)=\overline{sm}=b(s,z^{\lambda}m)$. Therefore, $b$ induces an $S$-module morphism from $S\otimes_{S[\Lambda]}M$ to $M/JM$, and we can exhibit an inverse. The kernel of the map $m\mapsto1\otimes m:M\rightarrow S\otimes_{S[\Lambda]}M$ contains $JM$, hence this map induces a morphism from $M/JM$ to $S\otimes_{S[\Lambda]}M$.
\end{proof}
Let $A=\Lambda$, under the usual conditions, and consider $M_A\otimesS S=M_{\Lambda}\otimesS S$. If $I_A=\IL=<X^{\lambda^+}-X^{\lambda^-}\mid\lambda\in\Lambda>$ as usual, then $\IL=\overline{J}\cap S$, and $$M_{\Lambda}\otimesS S\cong k[\N^n+\Lambda]/\overline{J}\cong (S+\overline{J})/\overline{J}\cong S/(\overline{J}\cap S)\cong S/\IL$$
More generally, we can let $M_0$ be the $S$-submodule of $k[\Z^n]$ generated by $\{x^{\alpha}\mid\alpha\in A_0\}$, where $A=A_0+\Lambda$, as usual. Then notice that if $\alpha\in A$, we can write $\alpha=\alpha_0+\lambda$ for some $\alpha_0\in A_0$ and $\lambda\in\Lambda$, and as such, we have that $x^{\alpha}=x^{\alpha_0}-(1-z^{\lambda})x^{\alpha_0}$. With this representation of $x^{\alpha}$, we see that $M_A=M_0+JM_A$. Therefore, we have $$M_A\otimesS S\cong M_A/JM_A\cong (M_0+JM_A)/JM_A\cong M_0/(JM_A\cap M_0)$$
This is too general to say much about, so we will make the assumption that $A\subset\N^n+\Lambda$. With this assumption, we have the following useful lemma.
\begin{lemma}\label{repsinN}
If $A\subset\N^n+\Lambda$, where $\Lambda\subset\Z^n$ and $\Lambda\cap\N^n=0$, then for any $\alpha\in A$, there are $\alpha_0\in\N^n$ and $\lambda\in\Lambda$ such that $\alpha=\alpha_0+\lambda$.
\end{lemma}
\begin{proof}$ $
Let $\alpha\in A$. Then there exists $\lambda\in\Lambda$ such that $\alpha\in-T_{\lambda}$. Let $\alpha_0=\alpha-\lambda$. Then $\alpha_0\in-T_{\alpha-\lambda}\subseteq\N^n$, completing the proof.
\end{proof}
If we choose a generating set for $A$ that is distinguished by being contained in $\N^n$, then using Lemma \ref{lemma1137}, we have that $$M_A\otimesS S\cong M_0/(JM_A\cap M_0)= M_0/(\IL\cap M_0)\cong (M_0+\IL)/\IL$$
Therefore, in this case, we can identify $M_A\otimesS S$ with the monomial ideal of $S/\IL$ that is generated by $$\{x^{\alpha}+\IL\mid\alpha\in A_0\}$$
Additionally, we have $$k[\Z^n]\otimesS S\cong k[\Z^n]/Jk[\Z^n]\cong k[\Z^n/\Lambda]$$
With this last computation, since $M_A$ is an $S$-submodule of $k[\Z^n]$, we claim that $M_A\otimesS S$ is the $S$-submodule of $k[\Z^n/\Lambda]$ generated by the image of $M_0$. The proof of this claim will come as a corollary to Theorem \ref{equivalentcategories}.
\subsection{Categorical Equivalence}
Let $\mathcal{A}$ be the category of $S[\Lambda]$-modules with the usual $\Z^n$-grading. Under the tensor product $\underline{\white{M}}\otimesS S$ that we just worked with, the images are $\Z^n/\Lambda$-graded. With this setup, let $\mathcal{B}$ be the category of $\Z^n/\Lambda$-graded $S$-modules.
\begin{theorem}\label{equivalentcategories}\emph{[Theorem 9.17, \cite{MS}]}
The tensor product $\pi(\underline{\white{M}})=\underline{\white{M}}\otimesS S:\mathcal{A}\rightarrow\mathcal{B}$ is an equivalence of categories.
\end{theorem}
\begin{corollary}
If $\mathcal{F}_{\bullet}$ is any $\Z^n$-graded free resolution of $M_A$ over $S[\Lambda]$, then $\pi(\mathcal{F}_{\bullet})$ is a $\Z^n/\Lambda$-graded free resolution of $S/\IL$ over $S$. Moreover, $\mathcal{F}_{\bullet}$ is minimal if and only if $\pi(\mathcal{F}_{\bullet})$ is minimal.
\end{corollary}
\begin{theorem}\label{thm6.8}
For an antichain lattice $\Lambda\subset\Z^n$ and a $\Lambda$-finite set $A\subset\Z^n$, the following two complexes are isomorphic:
\begin{enumerate}
\item the algebraic Scarf complex of $A$, $\mathcal{F}_{N(A)}$;
\item the hull resolution of $A$.
\end{enumerate}
Moreover, each is a minimal free $S$-resolution of $M_A$.
\end{theorem}
\begin{proof}$ $
This theorem is a generalization of Theorem 9.24 from \cite{MS}. The machinery is unchanged, but the setting is broader with the same conclusion and identical proof.
\end{proof}
\begin{corollary}
The isomorphism in Theorem \ref{thm6.8} can be chosen to commute with the $\Lambda$-actions; therefore the isomorphism holds for $S[\Lambda]$-modules, and we obtain a minimal free $S[\Lambda]$-resolution of $M_A$.
\end{corollary}
\begin{proof}
This is an identical statement to Corollary \ref{cor5.2}, but with a different application.
\end{proof}
\begin{corollary}
The minimal free resolution of a generic lattice ideal $I_{\Lambda}$ is $\pi(N(\Lambda))$.
\end{corollary}
\section{Application of the Horseshoe Lemma}
To bring everything we have worked on together, we will need the first part of the Horseshoe Lemma.
\begin{lemma}\label{horseshoe}
Suppose given a commutative diagram $$\begin{array}{cccccccccc} & & & & & & & 0 & & \\ & & & & & & & \downarrow & & \\ \cdots & P_2' & \overset{d'_2}\rightarrow & P_1' & \overset{d'_1}\rightarrow & P_0' & \overset{d'_0}\rightarrow & A' & \rightarrow & 0 \\ & & & & & & & \white{M}\downarrow i_A & & \\ & & & & & & & A & & \\ & & & & & & & \white{M}\downarrow \pi_A & & \\ \cdots & P_2'' & \overset{d''_2}\rightarrow & P_1'' & \overset{d''_1}\rightarrow & P_0'' & \overset{d''_0}\rightarrow & A'' & \rightarrow & 0 \\ & & & & & & &\downarrow & & \\ & & & & & & & 0 & & \end{array}$$
where the column is exact and the rows are projective resolutions. Set $P_n=P_n'\oplus P_n''$. Then there exist maps from $P_n$ to $P_{n-1}$ built from $d'_n$ and $d''_n$ such that $P_{\bullet}$ is a projective resolution of $A$.
\end{lemma}
In our particular case of using cyclic $S$-modules, all of our modules are free and hence projective. Before we arrive at a situation where we can use the Horseshoe Lemma, we need to verify a few conditions first.
\begin{lemma}\label{containsmarkov1}
Let $A\subset\Z^n$ be a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subset\Z^n$ such that $A=A_0+\Lambda$ with $A_0\subset\N^n$ and $A_0\neq\{0\}$. Let $B$ be a minimal Markov basis of $\Lambda$, and assume that $\alpha\nleq\lambda^+$ and $\alpha\nleq\lambda^-$ for all $\alpha\in A_0$ and all $\lambda\in B$. Then every minimal generating set of $\IL+I_{A_0}\subseteq S$ contains $$\{X^{\lambda^+}-X^{\lambda^-}\mid\lambda\in L\}$$ for some minimal Markov basis $L$ of $\Lambda$.
\end{lemma}
\begin{proof}$ $
Because of Proposition \ref{markovneighbors}, a minimal Markov basis is a subset of a finite set of positive and negative pairs of vectors: a minimal Markov basis is any subset of this set that chooses one vector from each pair. As such, minimal bases differ only by sign patterns, and hence the property that $\alpha\nleq\lambda^+$ and $\alpha\nleq\lambda^-$ for all $\alpha\in A_0$ and $\lambda\in B$ holds for all minimal Markov bases.
This condition tells us that $X^{\lambda^+}-X^{\lambda^-}\notin I_{A_0}$. However, we know that $X^{\lambda^+}-X^{\lambda^-}\in \IL+I_{A_0}$, and hence $X^{\lambda^+}-X^{\lambda^-}\in \IL$. This holds for all $\lambda\in B$, and hence by the fundamental theorem of Markov bases (Theorem \ref{markdef}), the generating set of $\IL+I_{A_0}$ must contain binomials corresponding to a Markov basis.
\end{proof}
So we have shown that for our generic $\Lambda$-finite sets $A\subseteq\Z^n$ with $\Lambda$-representatives $A_0$, every minimal generating set of the ideal $\IL+I_{A_0}$ in $S$ must contain binomials corresponding to a minimal Markov basis of $\Lambda$.
\begin{prop}\label{mainresult}
Let $A\subset\Z^n$ be a generic $\Lambda$-finite set for some antichain lattice $\Lambda\subset\Z^n$ with $\Lambda$-representatives $A_0\subset\N^n$ and $A_0\neq\{0\}$. If $I_{A_0}=<X^{\alpha}\mid\alpha\in A_0>$, then the syzygy modules of the minimal free resolution of $I_{\Lambda}+I_{A_0}$ are submodules of the syzygy modules of $\pi(\mathcal{F}_{N(\Lambda)})\oplus\pi(\mathcal{F}_{N(A)}) $.
\end{prop}
\begin{proof}$ $
Consider the exact sequence $$0\rightarrow \IL \hookrightarrow \IL+I_{A_0} \twoheadrightarrow (I_{A_0}+\IL)/\IL \rightarrow 0$$ By previous arguments, $\pi(\mathcal{F}_{N(\Lambda)})$ and $\pi(\mathcal{F}_{N(A)})$ are free resolutions of $\IL$ and $(I_{A_0}+\IL)/\IL$, respectively. By the Horseshoe Lemma, there exist maps that can be paired with the syzygy modules of $\pi(\mathcal{F}_{N(\Lambda)})\oplus\pi(\mathcal{F}_{N(A)})$ to form a resolution of $\IL+I_{A_0}$. By \cite{P2}, all graded free resolutions contain a minimal graded free resolution, completing the proof.
\end{proof}
Unfortunately, even though $\pi(\mathcal{F}_{N(\Lambda)})$ and $\pi(\mathcal{F}_{N(A)})$ minimally resolve the binomial ideal $\IL\subset S$, and the monomial ideal $(I_{A_0}+\IL)/\IL\subseteq S/\IL$ respectively, the Horseshoe Lemma makes no claim as to the minimality of $\pi(\mathcal{F}_{N(\Lambda)})\oplus\pi(\mathcal{F}_{N(A)})$ as a resolution. The key to utilizing the Horseshoe Lemma is to understand the maps that are created from the separate resolutions.
\subsection{Lifting Terms}
The proof of the Horseshoe Lemma provides a method for defining the new maps of the constructed resolution. In the diagram in Lemma \ref{horseshoe}, the horizontal maps terminating in $A$ are defined first by lifting the map $d''_0$ to a map $\overline{d}''_0:P_0''\rightarrow A$, and then defining $(i_A\circ d'_0)\oplus\overline{d}''_0:P_0'\oplus P_0''\rightarrow A$. Once this map is constructed, the process is iterated. A lifting is defined when we choose a representative of $N_i(A)$ from its $\Lambda$-orbit for each $i$.
\subsection{Lifting Terms in $\Z^3$}
When working with the syzygy modules of the ideal $I_A=\IL+I_{A_0}$, we have several symbols that must be handled very carefully. In particular, if we have chosen a set of representatives for each $\Lambda$-orbit of $N(A)$, then each face $F$ has a representative face $F'$ such that $F=F'+\lambda$ for some $\lambda\in\Lambda$. Additionally, each face of $F$ has its own representative that may or may not be a face of $F$. These considerations lead us to the following potential problem. In $N(A)/\Lambda$, we have generators of our modules of the form $e_{\sigma+\Lambda}=e_{\overline{\sigma}}$; in $N(A)$, it would appear that we have generators of the form $e_{\sigma}$, but that is only true of the representative we chose for the lifting. As such, we need a definition for $e_{\sigma}$ if $\sigma$ is not a representative.
\begin{lemma}\label{abcd}
Let $\Lambda\subset\Z^3$ be an antichain lattice with minimal Markov basis $\{\lambda_i\}$, and let $g\in\Lambda$. Then there exists $\{c_i\}\subset S$ such that $$X^{g^+}-X^{g^-}=\sum_ic_i(X^{\lambda_i^+}-X^{\lambda_i^-})$$
\end{lemma}
\begin{proof} $ $
By definition of $\IL$, if $g\in\Lambda$, then $X^{g^+}-X^{g^-}\in\IL$, and by the fundamental theorem of Markov bases, $\{X^{\lambda_i^+}-X^{\lambda_i^-}\}$ generates $\IL$.
\end{proof}
\begin{definition}\label{1234}
Let $\Lambda$ be an antichain lattice in $\Z^3$ with minimal Markov basis $\{\lambda_i\}$, and let $P_0=\pi(\mathcal{F}_{N(A)})_0\oplus\pi(\mathcal{F}_{N(\Lambda)})_0=(\displaystyle{\bigoplus_{\sigma\in A_0}}Se_{\sigma})\oplus Se_{\lambda_1}\oplus Se_{\lambda_2}\oplus Se_{\lambda_3}$, where $A_0$ is a set of $\Lambda$-representatives of $N_0(A)$. Let $g\in\Lambda$ such that $X^{g^+}-X^{g^-}=\sum_ic_i(X^{\lambda_i^+}-X^{\lambda_i^-})$ and let $C=\{c_i\}$. Then we define $e_g(C)=\sum c_ie_{\lambda_i}$.
\end{definition}
\begin{remark}\label{uptosomething}
For all $C\subset S$ satisfying Definition \ref{1234}, if $d_0:P_0\rightarrow I_A$, then $d_0(e_g(C))=X^{g^+}-X^{g^-}$. Because of this, we can relax the notational dependence of $e_g$ on $C$.
\end{remark}
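To illustrate Definition \ref{1234}, one convenient way to produce a valid set of coefficients $C=\{c_i\}$ is multivariate polynomial division against the Markov-basis binomials. The sketch below (Python with sympy) uses the lattice and Markov basis of the worked example at the end of the paper; by Remark \ref{uptosomething}, any cofactors with zero remainder serve equally well.
\begin{verbatim}
# Sketch: one valid choice of C = {c_i} by division against the basis.
from sympy import symbols, reduced

x, y, z = symbols('x y z')
markov = [y**2 - x*z, x**3 - y*z, x**2*y - z**2]  # lambda_1,2,3 binomials

# g = lambda_1 - lambda_2 = (-4,3,0), so X^{g^+} - X^{g^-} = y^3 - x^4.
coeffs, rem = reduced(y**3 - x**4, markov, x, y, z, order='grevlex')
assert rem == 0   # g lies in the lattice, so the division is exact
print(coeffs)     # [y, -x, 0]: y(y^2 - xz) - x(x^3 - yz) = y^3 - x^4
\end{verbatim}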
We now have a way to consider the symbol $e_g$ in terms of the symbols $e_{\lambda_i}$, which are generators of the $0^{th}$ dimensional module in the resolution of $\IL$. These symbols will often arise in symbolic computations, and need to be addressed before we proceed.
\begin{lemma}\label{N_1lifting}
Let $A\subset\Z^3$ be a generic $\Lambda$-finite set for a codimension 1 lattice $\Lambda\subset\Z^3$. Let $B,C\in\N^3$, $f,g\in\Lambda$ and $\sigma=\{B+f,C+g\}\in N_1(A)$ oriented from $B+f$ to $C+g$. If $P_0=(\displaystyle{\bigoplus_{\sigma\in A_0}}Se_{\sigma})\oplus Se_{\lambda_1}\oplus Se_{\lambda_2}\oplus Se_{\lambda_3}$, $I_A=\IL+I_{A_0}$, and $d_0:P_0\rightarrow I_A$, then the lifted differential satisfies $$d_1(e_\sigma)=X^{S-(C+g)}e_C-X^{S-(B+f)}e_B+X^{S-g^+}e_g-X^{S-f^+}e_f$$ where $S=(B+f)\vee(C+g)$.
\end{lemma}
\begin{proof}$ $
The first two terms of the expression are $d'(e_{\sigma})$ when we consider $\sigma$ as an element of $N_1(A)/\Lambda$. So we need to show that if we attempted to use this same map for $d(e_{\sigma})$, then we would have the second pair of terms of the expression left over. Computing, $$dd(e_{\sigma})=d(X^{S-(C+g)}e_C-X^{S-(B+f)}e_B)=X^{S-(C+g)}d(e_C)-X^{S-(B+f)}d(e_B)$$ $$=X^{S-g}-X^{S-f}\neq 0$$
Therefore, we need to add an expression to $X^{S-(C+g)}e_C-X^{S-(B+f)}e_B$ such that applying $d$ to that expression will give us $X^{S-g}-X^{S-f}$. That expression is exactly $X^{S-g^+}e_g-X^{S-f^+}e_f$. Applying $d$, we get $$X^{S-g^+}(X^{g^+}-X^{g^-})-X^{S-f^+}(X^{f^+}-X^{f^-})$$ $$=X^S-X^{S-g^++g^-}-X^S+X^{S-f^++f^-}=X^{S-g}-X^{S-f}$$ as required.
\end{proof}
\begin{remark}
In Lemma \ref{N_1lifting}, even though we were equipped with Definition \ref{1234}, it appears as though we did not use it. This is because if we had replaced $e_g$ with $\sum c_ie_{\lambda_i}$, all the terms would have canceled just as if we had left $e_g$ in the computation. This situation repeats itself often in similar computations, and when we are able, we will use the analogues of $e_g$ directly in future computations with the understanding that they are only symbolic.
\end{remark}
Since we are in $\Z^3$, we need only Definition \ref{1234} and a similar definition for faces to handle all possible cases we might run into.
\begin{lemma}\label{efgh}
Let $A\subset\Z^3$ be a generic $\Lambda$-finite set for some codimension 1 antichain lattice $\Lambda\subset\Z^3$ with minimal Markov basis $\{\lambda_i\}$. Let $A_1$ be a set of $\Lambda$-representatives of $N_1(A)$. Suppose $t\in N_1(A)$ with endpoints $B+f$ and $C+g$. Let $t^r\in N_1(A)$ be the representative of $t$ and assume that $t=t^r+h$ with $h\in\Lambda$. Let $c_i, c_i', d_i, d_i'$ be the coefficients described in Lemma \ref{abcd} for $g-h, g, f-h$, and $f$, respectively. If $P_1=(\displaystyle{\bigoplus_{\sigma\in A_1}}Se_{\sigma})\oplus Se_{p_1}\oplus Se_{p_2}$ where $p_1,p_2$ are as in Lemma \ref{lambdares}, and $P_0=(\displaystyle{\bigoplus_{\sigma\in A_0}}Se_{\sigma})\oplus Se_{\lambda_1}\oplus Se_{\lambda_2}\oplus Se_{\lambda_3}$ and $d_1:P_1\rightarrow P_0$, then, symbolically, $$d_1(e_{t^r})-d_1(e_t)=\sum(c_iX^{\vee t^r-(g-h)^+}-c_i'X^{\vee t-g^+}-d_iX^{\vee t^r-(f-h)^+}+d_i'X^{\vee t-f^+})e_{\lambda_i}$$
\end{lemma}
\begin{proof}$ $
We compute with the understanding that $d(e_t)$ is a symbolic computation. To aid in the computation, we can create a diagram out of the hypothesis as follows:
\begin{center}
\scalemath{.6}{\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw (2.0,6.0)-- (4.0,2.0);
\draw (4.0,2.0)-- (7.0,4.0);
\draw (7.0,4.0)-- (5.0,8.0);
\draw (5.0,8.0)-- (2.0,6.0);
\draw (1.1600000000000006,6.720000000000002) node[anchor=north west] {$B+f$};
\draw (4.600000000000002,8.780000000000003) node[anchor=north west] {$C+g$};
\draw (3.200000000000002,2.000000000000001) node[anchor=north west] {$B+(f-h)$};
\draw (6.940000000000004,4.000000000000002) node[anchor=north west] {$C+(g-h)$};
\draw (2.6000000000000014,4.280000000000001) node[anchor=north west] {$h$};
\draw (6.120000000000004,6.460000000000003) node[anchor=north west] {$h$};
\draw (3.2600000000000016,7.620000000000003) node[anchor=north west] {$t$};
\draw (5.560000000000003,3.160000000000001) node[anchor=north west] {$t^r$};
\begin{scriptsize}
\draw [fill=black] (2.0,6.0) circle (2.5pt);
\draw [fill=black] (5.0,8.0) circle (2.5pt);
\draw [fill=black] (4.0,2.0) circle (2.5pt);
\draw [fill=black] (7.0,4.0) circle (2.5pt);
\end{scriptsize}
\end{tikzpicture}}
\end{center}
$$d(e_{t^r})=X^{\vee t^r-(C+g-h)}e_C-X^{\vee t^r-(B+f-h)}e_B + X^{\vee t^r-(g-h)^+}\sum c_ie_{\lambda_i}-X^{\vee t^r-(f-h)^+}\sum d_ie_{\lambda_i}$$
$$d(e_t)=X^{\vee t-(C+g)}e_C-X^{\vee t-(B+f)}e_B + X^{\vee t-g^+}\sum c'_ie_{\lambda_i}-X^{\vee t-f^+}\sum d'_ie_{\lambda_i}$$
Taking the difference and rearranging, we get
\begin{equation}\label{eq137}\begin{split}&(X^{\vee t^r - (C+g-h)}-X^{\vee t -(C+g)})e_C-(X^{\vee t^r - (B+f-h)}-X^{\vee t -(B+f)})e_B\\
&+\sum(c_iX^{\vee t^r-(g-h)^+}-c'_iX^{\vee t-g^+})e_{\lambda_i}-\sum(d_iX^{\vee t^r-(f-h)^+}-d'_iX^{\vee t-f^+})e_{\lambda_i}\end{split}\end{equation}
Notice now that $$\vee t^r-(C+g-h)=(B+f-h)\vee(C+g-h)-(C+g-h)=(B+f)\vee(C+g)-h-(C+g-h)$$
$$=(B+f)\vee(C+g)-(C+g)=\vee t-(C+g),$$ so the first parenthetical expression of (\ref{eq137}) is 0, and by an identical computation, the second parenthetical expression is also 0. This leaves us with the desired result.
\end{proof}
Definition \ref{exists?} will exemplify the nature of Remark \ref{uptosomething} in the sense that we will define the term exactly by how it acts under the mapping, and not how it acts as a module element. As in Lemma \ref{N_1lifting}, we will not need to reference the defining set in practice, and will suppress the notation.
\begin{definition}\label{exists?}
Under the conditions of Lemma \ref{efgh}, we define $d(e_t(\mathcal{B}))=d(e_{t^r})+\sum b_id(e_{p_i})$, where $p_i$ is as in Lemma \ref{lambdares}, $\mathcal{B}=\{b_i\}$, and the $b_i$ satisfy $$\sum(c_iX^{\vee t^r-(g-h)^+}-c_i'X^{\vee t-g^+}-d_iX^{\vee t^r-(f-h)^+}+d_i'X^{\vee t-f^+})e_{\lambda_i}=\sum b_id(e_{p_i})$$
\end{definition}
We will call the expressions computed for Definitions \ref{1234} and \ref{exists?} lifting terms in their respective dimensions.
\subsection{Example}
To conclude, we will compute an example using the tools developed here.
\begin{example}
Let $\Lambda$ be the lattice generated by $\{(-1,2,-1),(3,-1,-1)\}$ in $\Z^3$, and let $A_0=\{\alpha\}=\{(1,2,0)\}$. A minimal Markov basis of $\Lambda$ is $\{\lambda_1,\lambda_2,\lambda_3\}=\{(-1,2,-1),(3,-1,-1),(-2,-1,2)\}$\footnote{Markov basis computations can be performed in 4ti2, \cite{ti}}. For representatives, we will choose \linebreak $A_1=\{r,s,t\}= \{\{(1,2,0),(4,1,-1)\},\{(1,2,0),(3,3,-2)\},\{(1,2,0),(0,4,-1)\}\}$, and $A_2=\{u,v\}=\{\{(1,2,0),(0,4,-1),(3,3,-2)\},\{(1,2,0),(3,3,-2),(4,1,-1)\}\}$ with the orientations as listed, and we obtain the following diagram for $N(A)/\Lambda$, where the representatives are indicated by solid lines or filled-in circles, and the suprema are labeled in the appropriate places.
\begin{center}
\begin{tikzpicture}[scale=.65][line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\fill[fill=black,fill opacity=1.0] (-0.12,3.77) -- (0.12,3.77) -- (0,3.98) -- cycle;
\fill[fill=black,fill opacity=1.0] (3.77,0.12) -- (3.77,-0.12) -- (3.98,0) -- cycle;
\fill[fill=black,fill opacity=1.0] (3.86,4.02) -- (4.03,3.85) -- (4.09,4.08) -- cycle;
\draw (0,0)-- (0,8);
\draw [dash pattern=on 3pt off 3pt] (0,8)-- (8,8);
\draw [dash pattern=on 3pt off 3pt] (8,8)-- (8,0);
\draw (8,0)-- (0,0);
\draw (0,0)-- (8,8);
\draw (-0.12,3.77)-- (0.12,3.77);
\draw (0.12,3.77)-- (0,3.98);
\draw (0,3.98)-- (-0.12,3.77);
\draw [shift={(5.07,1.46)}] plot[domain=-3.25:1.69,variable=\t]({1*0.54*cos(\t r)+0*0.54*sin(\t r)},{0*0.54*cos(\t r)+1*0.54*sin(\t r)});
\draw [shift={(1.75,4.84)}] plot[domain=-3.25:1.69,variable=\t]({1*0.54*cos(\t r)+0*0.54*sin(\t r)},{0*0.54*cos(\t r)+1*0.54*sin(\t r)});
\draw (-1.25,-0.17) node[anchor=north west] {(1,2,0)};
\draw (7,-0.17) node[anchor=north west] {(0,4,-1)};
\draw (-1.25,9) node[anchor=north west] {(4,1,-1)};
\draw (7,9) node[anchor=north west] {(3,3,-2)};
\draw (-2.3,4.3) node[anchor=north west] {(4,2,0)};
\draw (3.42,-0.1) node[anchor=north west] {(1,4,0)};
\draw (8,4.3) node[anchor=north west] {(3,4,-1)};
\draw (3.42,8.9) node[anchor=north west] {(4,3,-1)};
\draw (3.9,4.37) node[anchor=north west] {(3,3,0)};
\draw (2.19,6.5) node[anchor=north west] {(4,3,0)};
\draw (5.61,2.66) node[anchor=north west] {(3,4,0)};
\begin{scriptsize}
\fill [color=black] (0,0) circle (2.5pt);
\draw [color=black] (0,8) circle (2.5pt);
\draw [color=black] (8,0) circle (2.5pt);
\draw [color=black] (8,8) circle (2.5pt);
\fill [color=black,shift={(5,2)},rotate=90] (0,0) ++(0 pt,3.75pt) -- ++(3.25pt,-5.625pt)--++(-6.5pt,0 pt) -- ++(3.25pt,5.625pt);
\fill [color=black,shift={(1.68,5.38)},rotate=90] (0,0) ++(0 pt,3.75pt) -- ++(3.25pt,-5.625pt)--++(-6.5pt,0 pt) -- ++(3.25pt,5.625pt);
\end{scriptsize}
\end{tikzpicture}
\end{center}
We must first compute the resolution of $(\IL+I_{A_0})/\IL$ using the coefficients computed from Definition \ref{cellrescoeffs}. For example, the relation associated to the edge $t$ is $xze_{\overline{\alpha}}-y^2e_{\overline{\alpha}}=(xz-y^2)e_{\overline{\alpha}}$.\footnote{We are making a slight abuse of the diagram here: the diagram should only explicitly be used for the resolution of $I_A$, but if we ignore the repeated edges, we can make use of it as a guide for the resolution of $(I_A+\IL)/\IL.$} The relation associated to the face $u$ is $x^2e_{\overline{t}}+ze_{\overline{r}}-ye_{\overline{s}}$. Omitting the details of the remaining computations, we have that the resolution of $(\IL+I_{A_0})/\IL$, $\pi(\mathcal{F}_{N(A)})$, is
$$\begin{array}{ccccccc}Se_{\overline{u}}\oplus Se_{\overline{v}} & \rightarrow & Se_{\overline{r}}\oplus Se_{\overline{s}}\oplus Se_{\overline{t}} & \rightarrow & Se_{\overline{\alpha}} & \rightarrow & (\IL+I_{A_0})/\IL\\
e_{\overline{u}} & \mapsto & x^2e_{\overline{t}}+ze_{\overline{r}}-ye_{\overline{s}} & & & & \\
e_{\overline{v}} & \mapsto & ze_{\overline{t}}+ye_{\overline{r}}-xe_{\overline{s}} & & & & \\
& & e_{\overline{t}} & \mapsto & (xz-y^2)e_{\overline{\alpha}} & & \\
& & e_{\overline{r}} & \mapsto & (y-x^3z)e_{\overline{\alpha}} & & \\
& & e_{\overline{s}} & \mapsto & (z^2-x^2y)e_{\overline{\alpha}} & & \\
& & & & e_{\overline{\alpha}} & \mapsto & xy^2+\IL
\end{array}$$
Using the same diagram for the lifting computations, we will again show one example from each dimension. The edge $t$ is of the form $\{\alpha,\alpha+\lambda_1\}$ oriented from $\alpha$ to $\alpha+\lambda_1$. Making the substitutions into Lemma \ref{N_1lifting}, we have that $B=\alpha$, $f=0$ (consequently, $e_f=0$), $C=\alpha$, and $g=\lambda_1$. Therefore, our lifted map will be $$d_1(e_t)=X^{S-(\alpha+\lambda_1)}e_{\alpha}-X^{S-\alpha}e_{\alpha}+X^{S-\lambda_1^+}e_{\lambda_1}$$ $$=xze_{\alpha}-y^2e_{\alpha}+xy^2e_{\lambda_1}$$$$=(xz-y^2)e_{\alpha}+xy^2e_{\lambda_1}$$
We will show the use of Lemma \ref{efgh} for the face $u$. Notice that the edges $t$ and $s$ are already representatives, so we will only need a lifting term for our translation of the edge $r$. From Lemma \ref{efgh}, we have that $f=\lambda_1$, $g=-\lambda_3$, and $h=\lambda_1$. Additionally, we have already computed that $\vee r=(3,4,-1)$ and $\vee r^r=(4,2,0)$. What is left to compute are the $c_i$'s, $c_i'$'s, $d_i$'s, and $d_i'$'s. The three easy cases are the $c_i'$'s, $d_i$'s, and $d_i'$'s: $f=\lambda_1$ implies $d'_1=1$ and $d'_2=d'_3=0$; $g=-\lambda_3$ implies $c'_3=-1$ and $c'_1=c'_2=0$; and $f=h$ implies $d_i=0$ for all $i$. For $g-h$, we need to write $X^{(g-h)^+}-X^{(g-h)^-}=\sum c_i(X^{\lambda_i^+}-X^{\lambda_i^-})$. Since $g-h=\lambda_2$, we have that $c_2=1$ and $c_1=c_3=0$.
Continuing, we have $$d_1(e_{r^r})-d_1(e_r)=X^{(3,4,-1)-(0,2,0)}e_{\lambda_1} + X^{(4,2,0)-(3,0,0)}e_{\lambda_2}-X^{(3,4,-1)-(2,1,0)}e_{\lambda_3}$$ $$=x^3y^2z^{-1}e_{\lambda_1}+xy^2e_{\lambda_2}-xy^3z^{-1}e_{\lambda_3}$$
To use this, $$d_2(e_u)=x^2e_t+ze_r-ye_s$$
$$=x^2e_{t^r}+z(e_{r^r}-xy^2(x^2z^{-1}d^{-1}_1(e_{\lambda_1})+d^{-1}_1(e_{\lambda_2})-yz^{-1}d^{-1}_1(e_{\lambda_3})))-ye_{s^r}$$
$$=x^2e_{t^r}+ze_{r^r}-ye_{s^r}-xy^2e_{p_1}$$
Omitting the remaining similar computations, we have
$$\scalemath{.8}{\begin{array}{ccccccc} Se_u\oplus Se_v & \overset{d_2}\rightarrow & Se_{p_1}\oplus Se_{p_2} \oplus Se_r \oplus Se_s\oplus Se_t & \overset{d_1}\rightarrow & Se_{\lambda_1}\oplus Se_{\lambda_2}\oplus Se_{\lambda_3} \oplus Se_{\alpha} & \overset{d_0}\rightarrow & I_A \\
e_u & \mapsto & x^2e_{t^r}+ze_{r^r}-ye_{s^r}-xy^2e_{p_1} & & & & \\
e_v & \mapsto & ze_{t^r}+ye_{r^r}-xe_{s^r}-xy^2e_{p_2} & & & & \\
& & e_{p_1} & \mapsto & x^2e_{\lambda_1}+ze_{\lambda_2}-ye_{\lambda_3} & & \\
& & e_{p_2} & \mapsto & ze_{\lambda_1}+ye_{\lambda_2}-xe_{\lambda_3} & & \\
& & e_r & \mapsto & xy^2e_{\lambda_2} - (x^3-yz)e_{\alpha} & & \\
& & e_s & \mapsto & xy^2e_{\lambda_3} - (x^2y-z^2)e_{\alpha} & & \\
& & e_t & \mapsto & xy^2e_{\lambda_1} - (y^2-xz)e_{\alpha} & & \\
& & & & e_{\lambda_1} & \mapsto y^2-xz \\
& & & & e_{\lambda_2} & \mapsto x^3-yz \\
& & & & e_{\lambda_3} & \mapsto x^2y-z^2 \\
& & & & e_{\alpha} & \mapsto xy^2
\end{array}}$$
\end{example}
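As a sanity check on the resolution just displayed, one can verify symbolically that consecutive maps compose to zero. The sketch below (Python with sympy) encodes module elements as dictionaries from basis symbols to polynomial coefficients, identifying $e_{t^r},e_{r^r},e_{s^r}$ with the keys \texttt{t}, \texttt{r}, \texttt{s}.
\begin{verbatim}
# Sketch: check d0 . d1 = 0 and d1 . d2 = 0 for the resolution above.
from sympy import symbols, expand

x, y, z = symbols('x y z')
d0 = {'l1': y**2 - x*z, 'l2': x**3 - y*z, 'l3': x**2*y - z**2, 'a': x*y**2}
d1 = {'p1': {'l1': x**2, 'l2': z, 'l3': -y},
      'p2': {'l1': z, 'l2': y, 'l3': -x},
      'r':  {'l2': x*y**2, 'a': -(x**3 - y*z)},
      's':  {'l3': x*y**2, 'a': -(x**2*y - z**2)},
      't':  {'l1': x*y**2, 'a': -(y**2 - x*z)}}
d2 = {'u': {'t': x**2, 'r': z, 's': -y, 'p1': -x*y**2},
      'v': {'t': z, 'r': y, 's': -x, 'p2': -x*y**2}}

def push(element, nxt):
    # Apply the next map to a dict-encoded element and expand.
    out = {}
    for gen, coeff in element.items():
        image = nxt[gen] if isinstance(nxt[gen], dict) else {'*': nxt[gen]}
        for tgt, c in image.items():
            out[tgt] = expand(out.get(tgt, 0) + coeff * c)
    return {k: v for k, v in out.items() if v != 0}

assert all(push(img, d0) == {} for img in d1.values())
assert all(push(img, d1) == {} for img in d2.values())
print("d0 . d1 = 0 and d1 . d2 = 0")
\end{verbatim}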
\begin{remark}
During long computations, such as the one we have just completed, many small perturbations occur without mention, such as rearranging terms or moving negative signs around. One notable point from the previous computation was the occurrence of $z^{-1}$ during an intermediate step. Although $z^{-1}\notin S$, the end result justified the means, so we choose to ignore the phenomenon.
\end{remark}
\section{Conclusion}
The main result of this paper is Proposition \ref{mainresult} together with Lemmas \ref{N_1lifting} and \ref{efgh}. The general case runs identically, where we perform formal computations and match it with what our representatives should look like, defining the lifting terms in higher dimensions accordingly. The result is analogous, but messier, versions of Lemmas \ref{N_1lifting} and \ref{efgh} for any dimension.
The author has recently become acquainted with the work of L\"{u} in \cite{lu1} and \cite{lu2}, in which one can make very nice statements concerning the minimality of resolutions obtained from applications of the Horseshoe Lemma. In the three-dimensional case covered here, minimality was essentially free, but in higher dimensions, the computation of all the lifting terms is a daunting undertaking. Using these new results has the potential to yield some very clean statements about minimality, and this will be explored in the future.
An additional line of research lies in studying ideals of the form $I=<X^{\lambda_i^+}-X^{\lambda_i^-} | \lambda_i \text{ generates } \Lambda>$ for some generic antichain lattice $\Lambda$. This is different from the existing case in that we are not requiring a full Markov basis, just a full lattice basis. The idea that is supported by preliminary computations is that one can pass from the deficient ideal to the full lattice ideal $I_{\Lambda}$, then perform the algorithm outlined in this paper, then pass from that resolution into another resolution via a simple algorithm. This has been shown to work in three dimensions, and further cases will be studied.
| {
"timestamp": "2014-10-06T02:03:13",
"yymm": "1410",
"arxiv_id": "1410.0713",
"language": "en",
"url": "https://arxiv.org/abs/1410.0713",
"abstract": "In recent years, the combinatorial properties of monomials ideals and binomial ideals have been widely studied. In particular, combinatorial interpretations of free resolution algorithms have been given in both cases. In this present work, we will introduce similar techniques, or modify existing ones to obtain two new results. The first is $S[\\Lambda]$-resolutions of $\\Lambda$-invariant submodules of $k[\\mathbb{Z}^n]$ where $\\Lambda$ is a lattice in $\\mathbb{Z}^n$ satisfying some trivial conditions. A consequence will be the ability to resolve submodules of $k[\\mathbb{Z}^n/\\Lambda]$, and in particular ideals $J$ of $S/I_{\\Lambda}$, where $I_{\\Lambda}$ is the lattice ideal of $\\Lambda$.Second, we will provide a detailed account in three dimensions on how to lift the aforementioned resolutions to resolutions in $k[x,y,z]$ of ideals with monomial and binomial generators.",
"subjects": "Commutative Algebra (math.AC)",
"title": "A Combinatorial Algorithm to Find the Minimal Free Resolution of an Ideal with Binomial and Monomial Generators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717480217661,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7089449200516733
} |
https://arxiv.org/abs/1809.10904 | Computational Number Theory in Relation with L-Functions | We give a number of theoretical and practical methods related to the computation of L-functions, both in the local case (counting points on varieties over finite fields, involving in particular a detailed study of Gauss and Jacobi sums), and in the global case (for instance Dirichlet L-functions, involving in particular the study of inverse Mellin transforms); we also give a number of little-known but very useful numerical methods, usually but not always related to the computation of L-functions. | \section{$L$-Functions}\label{sec:one}
This course is divided into five parts. In the first part (Sections 1 and 2),
we introduce the notion of $L$-function, give a number of results and
conjectures concerning them, and explain some of the computational problems
in this theory. In the second part (Sections 3 to 6), we give a number of
computational methods for obtaining the Dirichlet series coefficients of the
$L$-function; this part is thus \emph{arithmetic} in nature. In the third part
(Section 7), we give a number of \emph{analytic} tools necessary for working
with $L$-functions. In the fourth part (Sections 8 and 9), we give a number of
very useful numerical methods which are not sufficiently well-known, most of
which are also related to the computation of $L$-functions. The fifth part
(Sections 10 and 11) gives the {\tt Pari/GP} commands corresponding to most of
the algorithms and examples given in the course. A final Section 12 gives
as an appendix some basic definitions and results used in the course which
may be less familiar to the reader.
\subsection{Introduction}
The theory of $L$-functions is one of the most exciting subjects in number
theory. It includes for instance two of the crowning achievements of
twentieth century mathematics, first the proof of the Weil conjectures
and of the Ramanujan conjecture by Deligne in the early 1970's, using the
extensive development of modern algebraic geometry initiated by Weil himself
and pursued by Grothendieck and followers in the famous EGA and SGA treatises,
and second the proof of the Shimura--Taniyama--Weil conjecture by Wiles et
al., implying among other things the proof of Fermat's last theorem. It
also includes two of the seven 1 million dollar Clay problems for the
twenty-first century, first the Riemann hypothesis, and second the
Birch--Swinnerton-Dyer conjecture which in my opinion is the most beautiful,
if not the most important, conjecture in number theory, or even in the whole
of mathematics, together with similar conjectures such as the
Beilinson--Bloch conjecture.
There are two kinds of $L$-functions: local $L$-functions and global
$L$-functions. Since the proof of the Weil conjectures, local $L$-functions
are rather well understood from a theoretical standpoint, but somewhat less
from a computational standpoint. Much less is known on global $L$-functions,
even theoretically, so here the computational standpoint is much more
important since it may give some insight on the theoretical side.
Before giving a definition of $L$-functions, we look in some detail at a
large number of special cases of global $L$-functions.
\subsection{The Prototype: the Riemann Zeta Function $\zeta(s)$}
The simplest of all (global) $L$-functions is the Riemann zeta function $\zeta(s)$
defined by
$$\zeta(s)=\sum_{n\ge1}\dfrac{1}{n^s}\;.$$
This is an example of a \emph{Dirichlet series} (more generally
$\sum_{n\ge1}a(n)/n^s$, or even more generally $\sum_{n\ge1}1/\lambda_n^s$, but
we will not consider the latter). As such, it has a half-plane of absolute
convergence, here $\Re(s)>1$.
The properties of this function, studied initially by Bernoulli
and Euler, are as follows, listed in historical order:
\begin{enumerate}\item (Bernoulli, Euler): it has \emph{special values}.
When $s=2$, $4$,... is a strictly positive even integer, $\zeta(s)$ is equal
to $\pi^s$ times a \emph{rational number}. $\pi$ is here a \emph{period},
and is of course the usual $\pi$ used for measuring circles. These rational
numbers have elementary \emph{generating functions}, and are equal up to easy
terms to the so-called \emph{Bernoulli numbers}. For example
$\zeta(2)=\pi^2/6$, $\zeta(4)=\pi^4/90$, etc. This was conjectured by Bernoulli
and proved by Euler. Note that the proof in 1735 of the so-called
\emph{Basel problem}:
$$\zeta(2)=1+\dfrac{1}{2^2}+\dfrac{1}{3^2}+\dfrac{1}{4^2}+\cdots=\dfrac{\pi^2}{6}$$
is one of the crowning achievements of mathematics of that time.
\item (Euler): it has an \emph{Euler product}: for $\Re(s)>1$ one has the
identity
$$\zeta(s)=\prod_{p\in P}\dfrac{1}{1-1/p^s}\;,$$
where $P$ is the set of prime numbers. This is exactly equivalent to the
so-called fundamental theorem of arithmetic. Note in passing (this does not
seem interesting here but will be important later) that if we consider
$1-1/p^s$ as a polynomial in $1/p^s=T$, its reciprocal roots all have the
same modulus, here $1$, this being of course trivial.
\item (Riemann, but already ``guessed'' by Euler in special cases): it has
an \emph{analytic continuation} to a meromorphic function in the whole complex
plane, with a single pole, at $s=1$, with residue $1$, and a \emph{functional
equation} $\Lambda(1-s)=\Lambda(s)$, where $\Lambda(s)=\Gamma_{{\mathbb R}}(s)\zeta(s)$,
with $\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$, and $\Gamma$ is the gamma function
(see appendix).
\item As a consequence of the functional equation, we have $\zeta(s)=0$
when $s=-2$, $-4$,..., $\zeta(0)=-1/2$, but we also have \emph{special
values} at $s=-1$, $s=-3$,... which are symmetrical to those at $s=2$, $4$,...
(for instance $\zeta(-1)=-1/12$, $\zeta(-3)=1/120$, etc.). This is the
part which was guessed by Euler. (A quick numerical check of these special values and of the Euler product is given just after this list.)
\end{enumerate}
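All of the above is easy to check numerically. Here is a quick sketch (Python, using the standard \texttt{mpmath} and \texttt{sympy} libraries) that verifies the Basel value, one of the special values guessed by Euler, and the truncated Euler product at $s=2$.
\begin{verbatim}
# Sketch: numerical checks of the listed properties of zeta(s).
from mpmath import mp, zeta, pi
from sympy import primerange

mp.dps = 25
print(zeta(2), pi**2 / 6)        # Basel problem: both are 1.6449340...
print(zeta(-1))                  # the special value -1/12
prod = mp.mpf(1)
for p in primerange(2, 10**5):   # Euler product at s = 2, truncated
    prod *= 1 / (1 - mp.mpf(p)**-2)
print(prod)                      # tends to pi^2/6 as the cutoff grows
\end{verbatim}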
Roughly speaking, one can say that a global $L$-function is a function
having properties similar to \emph{all} the above. We will of course be
completely precise below. Two things should be added immediately: first, the
existence of special values will not be part of the definition but, at
least conjecturally, a consequence. Second, all the global $L$-functions
that we will consider should \emph{conjecturally} satisfy a Riemann hypothesis:
when suitably normalized, and excluding ``trivial'' zeros, all the zeros
of the function should be on the line $\Re(s)=1/2$, axis of symmetry of the
functional equation. Note that even for the simplest $L$-function, $\zeta(s)$,
this is not proved.
\subsection{Dedekind Zeta Functions}
The Riemann zeta function is perhaps too simple an example to get the correct
feeling about global $L$-functions, so we generalize:
Let $K$ be a number field (a finite extension of ${\mathbb Q}$) of degree $d$. We
can define its \emph{Dedekind zeta function} $\zeta_K(s)$ for $\Re(s)>1$ by
$$\zeta_K(s)=\sum_{\a}\dfrac{1}{\N(\a)^s}=\sum_{n\ge1}\dfrac{i(n)}{n^s}\;,$$
where $\a$ ranges over all (nonzero) integral ideals of the ring of integers
${\mathbb Z}_K$ of $K$, $\N(\a)=[{\mathbb Z}_K:\a]$ is the norm of $\a$, and $i(n)$ denotes
the number of integral ideals of norm $n$.
This function has very similar properties to those of $\zeta(s)$ (which is the
special case $K={\mathbb Q}$). We give them in a more logical order:
\begin{enumerate}\item It can be analytically continued to the whole complex
plane into a meromorphic function having a single pole, at $s=1$, with known
residue, and it has a functional equation $\Lambda_K(1-s)=\Lambda_K(s)$, where
$$\Lambda_K(s)=|D_K|^{s/2}\Gamma_{{\mathbb R}}(s)^{r_1+r_2}\Gamma_{{\mathbb R}}(s+1)^{r_2}\zeta_K(s)\;,$$
where $(r_1,2r_2)$ are the number of real and complex embeddings of $K$
and $D_K$ its discriminant.
\item It has an Euler product $\zeta_K(s)=\prod_{{\mathfrak p}}1/(1-1/\N({\mathfrak p})^s)$, where
the product is over all prime ideals of ${\mathbb Z}_K$. Note that this can also be
written
$$\zeta_K(s)=\prod_{p\in P}\prod_{{\mathfrak p}\mid p}\dfrac{1}{1-1/p^{f({\mathfrak p}/p)s}}\;,$$
where $f({\mathfrak p}/p)=[{\mathbb Z}_K/{\mathfrak p}:{\mathbb Z}/p{\mathbb Z}]$ is the so-called \emph{residual index}
of ${\mathfrak p}$ above $p$. Once again, note that if we set as usual $1/p^s=T$,
the reciprocal roots of $1-T^{f({\mathfrak p}/p)}$ all have modulus $1$.
\item It has \emph{special values}, but only when $K$ is a \emph{totally real}
number field ($r_2=0$, $r_1=d$): in that case $\zeta_K(s)$ is a \emph{rational
number} if $s$ is a negative odd integer, or equivalently by the functional
equation, it is a rational multiple of $\sqrt{|D_K|}\pi^{ds}$ if $s$ is a
positive even integer.\end{enumerate}
An important new phenomenon occurs: recall that
$\sum_{{\mathfrak p}\mid p}e({\mathfrak p}/p)f({\mathfrak p}/p)=d$, where $e({\mathfrak p}/p)$ is the so-called
\emph{ramification index}, which is equivalent to the
defining equality $p{\mathbb Z}_K=\prod_{{\mathfrak p}\mid p}{\mathfrak p}^{e({\mathfrak p}/p)}$. In particular
$\sum_{{\mathfrak p}\mid p}f({\mathfrak p}/p)=d$ if and only if $e({\mathfrak p}/p)=1$ for all ${\mathfrak p}$, which
means that $p$ is \emph{unramified} in $K/{\mathbb Q}$; one can prove that this is
equivalent to $p\nmid D_K$. Thus, the \emph{local $L$-function}
$L_{K,p}(T)=\prod_{{\mathfrak p}\mid p}(1-T^{f({\mathfrak p}/p)})$ has degree in $T$ exactly
equal to $d$ for all but a finite number of primes $p$, which are exactly
those which divide the discriminant $D_K$, and for those ``bad'' primes
the degree is strictly less than $d$. In addition, note that the
number of $\Gamma_{{\mathbb R}}$ factors in the \emph{completed} function $\Lambda_K(s)$
is equal to $r_1+2r_2$, hence once again equal to $d$.
\smallskip
{\bf Examples:}
\begin{enumerate}\item Let $D$ be the discriminant of a quadratic field, and
let $K={\mathbb Q}(\sqrt{D})$. In that case, $\zeta_K(s)$ \emph{factors} as
$\zeta_K(s)=\zeta(s)L(\chi_D,s)$, where $\chi_D=\lgs{D}{.}$ is the
Legendre--Kronecker symbol, and $L(\chi_D,s)=\sum_{n\ge 1}\chi_D(n)/n^s$.
Thus, the local $L$-function at a prime $p$ is given by
$$L_{K,p}(T)=(1-T)(1-\chi_D(p)T)=1-a_pT+\chi_D(p)T^2\;,$$
with $a_p=1+\chi_D(p)$. Note that for odd $p$, $a_p$ is equal to the number of solutions
in ${\mathbb F}_p$ of the equation $x^2=D$ (a numerical check of this factorization is given right after this list).
\item Let us consider two special cases of (1): first $K={\mathbb Q}(\sqrt{5})$.
Since it is a real quadratic field, it has special values, for instance
$$\zeta_K(-1)=\dfrac{1}{30}\;,\quad \zeta_K(-3)=\dfrac{1}{60}\;,\quad \zeta_K(2)=\dfrac{2\sqrt{5}\pi^4}{375}\;,\quad \zeta_K(4)=\dfrac{4\sqrt{5}\pi^8}{84375}\;.$$
In addition, note that its \emph{gamma factor} is $5^{s/2}\Gamma_{{\mathbb R}}(s)^2$.
Second, consider $K={\mathbb Q}(\sqrt{-23})$. Since it is not a totally real field,
$\zeta_K(s)$ does not have special values. However, because of the factorization
$\zeta_K(s)=\zeta(s)L(\chi_D,s)$, we can look \emph{separately} at the special values
of $\zeta(s)$, which we have already seen (negative odd integers and positive
even integers), and of $L(\chi_D,s)$. It is easy to prove that the special
values of this latter function occur at negative \emph{even} integers
and positive \emph{odd} integers, which have empty intersection with those
of $\zeta(s)$ and explains why $\zeta_K(s)$ itself has none. For instance,
$$L(\chi_D,-2)=-48\;,\quad L(\chi_D,-4)=6816\;,\quad L(\chi_D,3)=\dfrac{96\sqrt{23}\pi^3}{12167}\;.$$
In addition, note that its gamma factor is
$$23^{s/2}\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)=23^{s/2}\Gamma_{{\mathbb C}}(s)\;,$$
where we set by definition
$$\Gamma_{{\mathbb C}}(s)=\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)=2\cdot(2\pi)^{-s}\Gamma(s)$$
by the duplication formula for the gamma function.
\item Let $K$ be the unique cubic field up to isomorphism of discriminant
$-23$, defined for instance by a root of the equation $x^3-x-1=0$. We
have $(r_1,2r_2)=(1,2)$ and $D_K=-23$. Here, one
can prove (it is less trivial) that $\zeta_K(s)=\zeta(s)L(\rho,s)$, where
$L(\rho,s)$ is a holomorphic function. Using the known properties of both
$\zeta_K$ and $\zeta$, one deduces that this $L$-function has the following properties:
\begin{itemize}\item It extends to an entire function on ${\mathbb C}$ with a functional
equation $\Lambda(\rho,1-s)=\Lambda(\rho,s)$, with
$$\Lambda(\rho,s)=23^{s/2}\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)L(\rho,s)=23^{s/2}\Gamma_{{\mathbb C}}(s)L(\rho,s)\;.$$
Note that this is the \emph{same} gamma factor as for ${\mathbb Q}(\sqrt{-23})$.
However the functions are fundamentally different, since
$\zeta_{{\mathbb Q}(\sqrt{-23})}(s)$ has a pole at $s=1$, while $L(\rho,s)$ is an
entire function.
\item It is immediate to show that if we let $L_{\rho,p}(T)=L_{K,p}(T)/(1-T)$
be the local $L$ function for $L(\rho,s)$, we have
$L_{\rho,p}(T)=1-a_pT+\chi_{-23}(p)T^2$, with
$a_p=1$ if $p=23$, $a_p=0$ if $\lgs{-23}{p}=-1$, and
$a_p=-1$ or $2$ if $\lgs{-23}{p}=1$ (for instance $a_2=-1$, since
$x^3-x-1$ is irreducible modulo $2$, so that $2$ is inert in $K$;
see also the sketch after these examples).
\end{itemize}
\end{enumerate}
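Both of the above descriptions of $a_p$ are easy to check by brute force.
Here is a minimal plain-Python sketch, assuming nothing beyond the statements
above (the lists of primes are arbitrary small samples): it verifies
$a_p=1+\chi_D(p)$ for $K={\mathbb Q}(\sqrt{5})$ by counting solutions of $x^2=D$ in
${\mathbb F}_p$, and computes the $a_p$ of $L(\rho,s)$ as (number of roots of
$x^3-x-1$ modulo $p$) minus $1$, which is valid for $p\ne23$.
\begin{verbatim}
# Sketch: check Example (1) for K = Q(sqrt(5)) and Example (3) for the
# cubic field of discriminant -23, by brute force over small primes.

def legendre(a, p):                  # Legendre symbol via Euler's criterion
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

D = 5                                # a_p = 1 + chi_D(p) counts x^2 = D
for p in [3, 7, 11, 13, 19, 29]:     # odd primes not dividing D
    sols = sum(1 for x in range(p) if (x * x - D) % p == 0)
    assert sols == 1 + legendre(D, p)

# a_p of L(rho,s): number of roots of x^3 - x - 1 mod p, minus 1
for p in [2, 3, 5, 7, 11, 13, 29, 31, 41, 59]:    # p != 23
    ap = sum(1 for x in range(p) if (x**3 - x - 1) % p == 0) - 1
    if legendre(-23, p) == -1:
        assert ap == 0
    else:
        assert ap in (-1, 2)         # e.g. a_2 = -1
\end{verbatim}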
\begin{remark} In all of the above examples, the function $\zeta_K(s)$
is \emph{divisible} by the Riemann zeta function $\zeta(s)$, i.e., the function
$\zeta_K(s)/\zeta(s)$ is an \emph{entire function}. This is known for some number
fields $K$, but is \emph{not} known in general, even in degree $d=5$ for
instance: it is a consequence of the more precise \emph{Artin conjecture} on
the holomorphy of Artin $L$-functions.\end{remark}
\subsection{Further Examples in Weight $0$}
It is now time to give examples not coming from number fields.
Define $a_1(n)$ by the formal equality
$$q\prod_{n\ge1}(1-q^n)(1-q^{23n})=\sum_{n\ge1}a_1(n)q^n=q-q^2-q^3+q^6+q^8-\cdots\;,$$
and set $L_1(s)=\sum_{n\ge1}a_1(n)/n^s$. The theory of modular forms
(here of the Dedekind eta function) tells us that $L_1(s)$ will satisfy
exactly the same properties as $L(\rho,s)$ with $\rho$ as above.
Define $a_2(n)$ by the formal equality
$$\dfrac{1}{2}\left(\sum_{(m,n)\in{\mathbb Z}\times{\mathbb Z}}q^{m^2+mn+6n^2}-q^{2m^2+mn+3n^2}\right)=\sum_{n\ge1}a_2(n)q^n\;,$$
and set $L_2(s)=\sum_{n\ge1}a_2(n)/n^s$. The theory of modular forms
(here of theta functions) tells us that $L_2(s)$ will satisfy
exactly the same properties as $L(\rho,s)$.
And indeed, it is an interesting \emph{theorem} that
$$L_1(s)=L_2(s)=L(\rho,s)\;.$$
The ``moral'' of this story is the following, which can be made mathematically
precise: if two $L$-functions are holomorphic and have the same gamma factor
(including, in this case, the $23^{s/2}$), then (conjecturally in general) they
belong to a finite-dimensional vector space. Thus in particular if this vector
space is $1$-dimensional and the $L$-functions are suitably normalized
(usually with $a(1)=1$), this implies as here that they are equal.
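The equality $L_1=L_2$ is easy to test numerically. Here is a minimal
plain-Python sketch, assuming only the two formal definitions above, which
generates both $q$-expansions naively and checks $a_1(n)=a_2(n)$ for
$n\le 50$ (the bound $B=50$ and the lattice radius $R=15$ are arbitrary but
sufficient choices):
\begin{verbatim}
# Sketch: check a_1(n) = a_2(n) for n <= B by brute force.
B = 50

# a_1: coefficients of q * prod_{n>=1} (1-q^n)(1-q^{23n})
P = [0] * (B + 1); P[0] = 1
for m in list(range(1, B + 1)) + [23 * k for k in range(1, B // 23 + 1)]:
    for i in range(B, m - 1, -1):    # multiply in place by (1 - q^m)
        P[i] -= P[i - m]
a1 = [0] + P[:B]                     # the leading q shifts exponents by 1

# a_2: (1/2) sum over (m,n) in Z^2 of q^{m^2+mn+6n^2} - q^{2m^2+mn+3n^2}
a2 = [0] * (B + 1)
R = 15                               # |m|, |n| <= R covers exponents <= B
for m in range(-R, R + 1):
    for n in range(-R, R + 1):
        q1 = m * m + m * n + 6 * n * n
        q2 = 2 * m * m + m * n + 3 * n * n
        if q1 <= B: a2[q1] += 1
        if q2 <= B: a2[q2] -= 1
a2 = [c // 2 for c in a2]

assert a1[1:] == a2[1:]              # the two Dirichlet series coincide
print(a1[1:13])                      # 1, -1, -1, 0, 0, 1, 0, 1, 0, 0, 0, 0
\end{verbatim}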
\subsection{Examples in Weight $1$}
Although we have not yet defined the notion of weight, let me give two
further examples.
Define $a_3(n)$ by the formal equality
$$q\prod_{n\ge1}(1-q^n)^2(1-q^{11n})^2=\sum_{n\ge1}a_3(n)q^n=q-2q^2-q^3+2q^4+\cdots\;,$$
and set $L_3(s)=\sum_{n\ge1}a_3(n)/n^s$. The theory of modular forms
(again of the Dedekind eta function) tells us that $L_3(s)$ will satisfy
the following properties, analogous to but more general than those satisfied by
$L_1(s)=L_2(s)=L(\rho,s)$:
\begin{itemize}\item It has an analytic continuation to the whole complex
plane, and if we set
$$\Lambda_3(s)=11^{s/2}\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)L_3(s)=11^{s/2}\Gamma_{{\mathbb C}}(s)L_3(s)\;,$$
we have the functional equation $\Lambda_3(2-s)=\Lambda_3(s)$. Note the crucial
difference that here $1-s$ is replaced by $2-s$.
\item There exists an Euler product $L_3(s)=\prod_{p\in P}1/L_{3,p}(1/p^s)$
similar to the preceding ones in that $L_{3,p}(T)$ is for all but a finite
number of $p$ a second degree polynomial in $T$. More precisely,
if $p=11$ we have $L_{3,p}(T)=1-T$, while for $p\ne11$ we have
$L_{3,p}(T)=1-a_pT+pT^2$, for some $a_p$ such that $|a_p|<2\sqrt{p}$.
This is expressed more vividly by saying that for $p\ne11$ we have
$L_{3,p}(T)=(1-\alpha_pT)(1-\beta_pT)$, where the reciprocal roots $\alpha_p$ and
$\beta_p$ have modulus exactly equal to $p^{1/2}$. Note again the crucial
difference with ``weight $0$'' in that the coefficient of $T^2$ is equal to
$p$ instead of $\pm1$, hence that $|\alpha_p|=|\beta_p|=p^{1/2}$ instead of $1$.
\end{itemize}
\smallskip
As a second example, consider the equation $y^2+y=x^3-x^2-10x-20$ (an
elliptic curve $E$), and denote by $N_q(E)$ the number of projective points
of this curve over the finite field ${\mathbb F}_q$ (it is clear that there is a unique
point at infinity, so if you want $N_q(E)$ is one plus the number of affine
points). There is a universal recipe to construct an $L$-function out of
a variety which we will recall below, but here let us simplify: for $p$
prime, set $a_p=p+1-N_p(E)$ and
$$L_4(s)=\prod_{p\in P}1/(1-a_pp^{-s}+\chi(p)p^{1-2s})\;,$$
where $\chi(p)=1$ for $p\ne11$ and $\chi(11)=0$. It is not difficult to show
that $L_4(s)$ satisfies exactly the same properties as $L_3(s)$ (using for
instance the elementary theory of modular curves), so by the moral explained
above, it should not come as a surprise that in fact $L_3(s)=L_4(s)$.
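Again this is easy to test numerically; the following minimal Python sketch
(brute-force point counting at the good primes $p\ne11$, nothing assumed
beyond the two definitions above) compares the $q$-expansion of
$q\prod_{n\ge1}(1-q^n)^2(1-q^{11n})^2$ with $a_p=p+1-N_p(E)$:
\begin{verbatim}
# Sketch: a_3(p) from the eta product vs. a_4(p) = p + 1 - N_p(E) for
# E : y^2 + y = x^3 - x^2 - 10x - 20, at good primes p != 11.
B = 40

P = [0] * (B + 1); P[0] = 1          # prod (1-q^m)^2 (1-q^{11m})^2
for m in range(1, B + 1):
    for _ in range(2 + (2 if m % 11 == 0 else 0)):
        for i in range(B, m - 1, -1):
            P[i] -= P[i - m]
a3 = [0] + P[:B]                     # shift by the leading factor q

def a4(p):                           # p + 1 - #E(F_p) = p - #affine points
    aff = sum(1 for x in range(p) for y in range(p)
              if (y * y + y - (x**3 - x * x - 10 * x - 20)) % p == 0)
    return p - aff

for p in [2, 3, 5, 7, 13, 17, 19, 23, 29, 31, 37]:
    assert a3[p] == a4(p)
\end{verbatim}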
\subsection{Definition of a Global $L$-Function}
With all these examples at hand, it is quite natural to give the following
definition of an $L$-function, which is not the most general but will be
sufficient for us.
\begin{definition} Let $d$ be a nonnegative integer. We say that a
Dirichlet series $L(s)=\sum_{n\ge1}a(n)n^{-s}$ with $a(1)=1$ is an
$L$-function of \emph{degree $d$} and \emph{weight $0$} if the following
conditions are satisfied:
\begin{enumerate}\item (Ramanujan bound): we have $a(n)=O(n^\varepsilon)$ for
all $\varepsilon>0$, so that in particular the Dirichlet series converges
absolutely and uniformly in any half plane $\Re(s)\ge\sigma>1$.
\item (Meromorphy and Functional equation): The function $L(s)$ can be
extended to ${\mathbb C}$ to a meromorphic function of order $1$ (see appendix) having
a finite number of poles; furthermore there exist complex numbers $\lambda_i$ with
nonnegative real part and an integer $N$ called the \emph{conductor} such
that if we set
$$\gamma(s)=N^{s/2}\prod_{1\le i\le d}\Gamma_{{\mathbb R}}(s+\lambda_i)\text{\quad and\quad}\Lambda(s)=\gamma(s)L(s)\;,$$
we have the \emph{functional equation}
$$\Lambda(s)=\omega\ov{\Lambda(1-\ov{s})}$$
for some complex number $\omega$, called the \emph{root number}, which will
necessarily be of modulus~$1$.
\item (Euler Product): For $\Re(s)>1$ we have an Euler product
$$L(s)=\prod_{p\in P}1/L_p(1/p^s)\text{\quad with\quad}L_p(T)=\prod_{1\le j\le d}(1-\alpha_{p,j}T)\;,$$
and the reciprocal roots $\alpha_{p,j}$ are called the \emph{Satake parameters}.
\item (Local Riemann hypothesis): for $p\nmid N$ we have $|\alpha_{p,j}|=1$,
and for $p\mid N$ we have either $\alpha_{p,j}=0$ or $|\alpha_{p,j}|=p^{-m/2}$
for some $m$ such that $1\le m\le d$.
\end{enumerate}\end{definition}
\begin{remarks}{\rm \begin{enumerate}
\item Selberg has defined a more general class of $L$-functions, which first
allows factors $\Gamma(\mu_i s+\lambda_i)$ with $\mu_i$ positive real in the gamma factor, and second makes weaker assumptions on $N$ and the Satake parameters.
\item Note that $d$ is \emph{both} the number of $\Gamma_{{\mathbb R}}$ factors,
\emph{and} the degree in $T$ of the Euler factors $L_p(T)$, at
least for $p\nmid N$, while the degree decreases for the ``bad'' primes $p$
which divide $N$.
\item The Ramanujan bound (1) is easily seen to be a consequence of the
conditions that we have imposed on the Satake parameters: in Selberg's more
general definition this is not the case.
\end{enumerate}}
\end{remarks}
It is important to generalize this definition in the following trivial way:
\begin{definition} Let $w$ be a nonnegative integer. A function $L(s)$ is said
to be an $L$-function of degree $d$ and \emph{motivic weight} $w$ if
$L(s+w/2)$ is an $L$-function of degree $d$ and weight $0$ as above
(with the slight additional technical condition that the nonzero Satake
parameters $\alpha_{p,j}$ for $p\mid N$ satisfy $|\alpha_{p,j}|=p^{-m/2}$ with
$1\le m\le w$).
\end{definition}
For an $L$-function of motivic weight $w$, it is clear that the functional equation
is $\Lambda(s)=\omega\ov{\Lambda(k-\ov{s})}$ with $k=w+1$,
and that the Satake parameters will satisfy $|\alpha_{p,j}|=p^{w/2}$ for
$p\nmid N$, and for $p\mid N$ we have either $\alpha_{p,j}=0$ or
$|\alpha_{p,j}|=p^{(w-m)/2}$ for some integer $m$ such that $1\le m\le w$.
Thus, the first examples that we have given are all of weight $0$, and
the last two (which are in fact equal) are of weight $1$. For those who
know the theory of modular forms, note that the motivic weight (that we
denote by $w$) is one less than the weight $k$ of the modular form.
\section{Origins of $L$-Functions}
As can already be seen in the above examples, it is possible to construct
$L$-functions in many different ways. In the present section, we look at three
different ways of constructing $L$-functions: the first is by the theory of
modular forms or more generally of \emph{automorphic forms} (of which we have
seen a few examples above), the second is by using Weil's construction of
local $L$-functions attached to varieties, and more generally
to \emph{motives}, and third, as a special but much simpler case of this,
by the theory of \emph{hypergeometric motives}.
\subsection{$L$-Functions coming from Modular Forms}
The basic notion that we need here is that of \emph{Mellin transform}:
if $f(t)$ is a nice function tending to zero exponentially fast at infinity,
we can define its Mellin transform $\Lambda(f;s)=\int_0^\infty t^sf(t)\,dt/t$,
the integral being written in this way because $dt/t$ is the invariant Haar
measure on the locally compact group ${\mathbb R}_{>0}$. If we set $g(t)=t^{-k}f(1/t)$
and assume that $g$ also tends to zero exponentially fast at infinity,
it is immediate to see by a change of variable that
$\Lambda(g;s)=\Lambda(f;k-s)$. This is exactly the type of functional equation
needed for an $L$-function.
The other fundamental property of $L$-functions that we need is the existence
of an Euler product of a specific type. This will come from the theory of
\emph{Hecke operators}.
\smallskip
{\bf A crash course in modular forms} (see for instance \cite{Coh-Str} for a
complete introduction): we use the notation $q=e^{2\pi i\tau}$,
for $\tau\in{\mathbb C}$ such that $\Im(\tau)>0$, so that $|q|<1$. A function
$f(\tau)=\sum_{n\ge1}a(n)q^n$ is said to be a modular cusp form of (positive,
even) weight $k$ if $f(-1/\tau)=\tau^kf(\tau)$ for all $\Im(\tau)>0$.
Note that because of the notation $q$ we also have $f(\tau+1)=f(\tau)$,
hence it is easy to deduce that $f((a\tau+b)/(c\tau+d))=(c\tau+d)^kf(\tau)$
if $\psmm{a}{b}{c}{d}$ is an integer matrix of determinant $1$.
We define the $L$-function attached to $f$ as $L(f;s)=\sum_{n\ge1}a(n)/n^s$,
and the Mellin transform $\Lambda(f;s)$ of the function $f(it)$ is on the
one hand equal to $(2\pi)^{-s}\Gamma(s)L(f;s)=(1/2)\Gamma_{{\mathbb C}}(s)L(f;s)$, and on the
other hand as we have seen above satisfies the functional equation
$\Lambda(f;k-s)=(-1)^{k/2}\Lambda(f;s)$.
One can easily show the fundamental fact that the vector space of modular
forms of given weight $k$ is \emph{finite dimensional}, and compute its
dimension explicitly.
If $f(\tau)=\sum_{n\ge1}a(n)q^n$ is a modular form and $p$ is a prime number,
one defines $T(p)(f)$ by $T(p)(f)=\sum_{n\ge1}b(n)q^n$ with
$b(n)=a(pn)+p^{k-1}a(n/p)$, where $a(n/p)$ is by convention $0$ when
$p\nmid n$, or equivalently
$$T(p)(f)(\tau)=p^{k-1}f(p\tau)+\dfrac{1}{p}\sum_{0\le j<p}f\left(\dfrac{\tau+j}{p}\right)\;.$$
Then $T(p)f$ is also a modular cusp form, so $T(p)$ is an operator on the
space of modular forms, and it is easy to show that the $T(p)$ commute and
are diagonalizable, so they are simultaneously diagonalizable hence there
exists a basis of common \emph{eigenforms} for all the $T(p)$. Since one can
show that for such an eigenform one has $a(1)\ne0$, we can normalize them
by asking that $a(1)=1$, and we then obtain a canonical basis.
If $f(\tau)=\sum_{n\ge1}a(n)q^n$ is such a \emph{normalized eigenform}, it
follows that the corresponding $L$-function $\sum_{n\ge1}a(n)/n^s$ indeed has
an Euler product, and the elementary properties of the operators $T(p)$
show that it is in fact of the form:
$$L(f;s)=\prod_{p\in P}\dfrac{1}{1-a(p)p^{-s}+p^{k-1-2s}}\;.$$
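To make the Hecke action concrete, here is a minimal Python sketch (assuming
only the formulas above) which checks the eigenform property on the cusp form
$\Delta=q\prod_{m\ge1}(1-q^m)^{24}$ of weight $k=12$, which will reappear
below: one verifies $T(p)(\Delta)=\tau(p)\Delta$ coefficient by coefficient.
\begin{verbatim}
# Sketch: Hecke eigenform check on Delta = q prod (1-q^m)^24, weight k = 12:
# with b(n) = a(pn) + p^{k-1} a(n/p), verify b(n) = tau(p) tau(n).
B, k = 60, 12

P = [0] * B; P[0] = 1
for m in range(1, B):
    for _ in range(24):              # multiply in place by (1 - q^m), 24 times
        for i in range(B - 1, m - 1, -1):
            P[i] -= P[i - m]
tau = {n: P[n - 1] for n in range(1, B + 1)}   # Delta = q * P(q)

for p in [2, 3, 5]:
    for n in range(1, B // p + 1):
        b = tau[p * n] + (p**(k - 1) * tau[n // p] if n % p == 0 else 0)
        assert b == tau[p] * tau[n]  # T(p) Delta = tau(p) Delta
\end{verbatim}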
As a final remark, note that the analytic continuation and functional equation
of this $L$-function is an \emph{elementary consequence} of the definition of
a modular form. This is totally different from the motivic cases that we will
see below, where this analytic continuation is in general completely
\emph{conjectural}.
\smallskip
The above describes briefly the theory of modular forms on the modular group
$\PSL_2({\mathbb Z})$. One can generalize (nontrivially) this theory to \emph{subgroups}
of the modular group, the most important being $\Gamma_0(N)$ (matrices as above
with $N\mid c$), to other \emph{Fuchsian groups}, to forms in several
variables, and even more generally to \emph{reductive groups}.
\subsection{Local $L$-Functions of Algebraic Varieties}
The second very important source of $L$-functions comes from algebraic
geometry. Let $V$ be some algebraic object. In modern terms, $V$ may be a
\emph{motive}, whatever that may mean for the moment, but assume for instance
that $V$ is an algebraic variety, in other words that for each suitable field
$K$, $V(K)$ is the set of common zeros of a family of polynomials in several
variables. If $K$ is a \emph{finite} field ${\mathbb F}_q$ (recall that we must then
have $q=p^n$ for some prime $p$ and that ${\mathbb F}_q$ exists and is unique up to
isomorphism), then $V({\mathbb F}_q)$ will also be finite.
After studying a number of special cases, such as elliptic curves
(due to Hasse), and quasi-diagonal hypersurfaces in $\P^d$, in 1949 Weil was
led to make a number of more precise conjectures concerning the number of
\emph{projective} points $|V({\mathbb F}_q)|$, assuming that $V$ is a
\emph{smooth projective} variety, and proved these conjectures in the special
case of curves (the proof is already quite deep).
\smallskip
The first \emph{Weil conjecture} says that (for $p$ fixed) the number
$|V({\mathbb F}_{p^n})|$ of projective points of $V$ over the finite field ${\mathbb F}_{p^n}$
satisfies a (non-homogeneous) linear recurrence with
constant coefficients. For instance, if $V$ is an \emph{elliptic curve}
defined over ${\mathbb Q}$ (such as $y^2=x^3+x+1$) and if we set
$a(p^n)=p^n+1-|V({\mathbb F}_{p^n})|$, then
$$a(p^{n+1})=a(p)a(p^n)-\chi(p)pa(p^{n-1})\;,$$
where $\chi(p)=1$ unless $p$ divides the so-called \emph{conductor} of the
elliptic curve, in which case $\chi(p)=0$ (this is not quite true because
we must choose a suitable model for $V$, but it suffices for us).
\begin{exercise} Using the above recursion for $a(p^n)$, find the corresponding
recursion for $v_n=|V({\mathbb F}_{p^n})|$.
\end{exercise}
\begin{exercise}\begin{enumerate}\item Given a prime $p$ and $n\ge1$, write a
computer program which runs through all the elements of ${\mathbb F}_{p^n}$,
represented in a suitable way.
\item For the elliptic curve $y^2=x^3+x+1$, compute (on a computer) $a(5)$
and $a(5^2)$, and check the recursion.
\item Similarly, compute $a(31)$ and $a(31^2)$, and check the recursion
(here $\chi(31)=0$).\end{enumerate}
\end{exercise}
This first Weil conjecture was proved by Dwork in the early 1960's. It is
better reformulated in terms of \emph{local $L$-functions} as follows:
define the Hasse--Weil zeta function of $V$ as the \emph{formal power series}
in $T$ given by the formula
$$Z_p(V;T)=\exp\Biggl(\sum_{n\ge1}\dfrac{|V({\mathbb F}_{p^n})|}{n}T^n\Biggr)\;.$$
There should be no difficulty in understanding this: setting for simplicity
$v_n=|V({\mathbb F}_{p^n})|$, we have
\begin{align*}
Z_p(V;T)&=\exp(v_1T+v_2T^2/2+v_3T^3/3+\cdots)\\
&=1+v_1T+(v_1^2+v_2)T^2/2+(v_1^3+3v_1v_2+2v_3)T^3/6+\cdots\end{align*}
For instance, if $V$ is projective $d$-space $\P^d$, we have
$|V({\mathbb F}_q)|=q^d+q^{d-1}+\cdots+1$, and since
$\sum_{n\ge1}p^{nj}T^n/n=-\log(1-p^jT)$, we deduce that
$Z_p(\P^d;T)=1/((1-T)(1-pT)\cdots(1-p^dT))$.
In terms of this language, the existence of the recurrence relation is
equivalent to the fact that $Z_p(V;T)$ is a \emph{rational function} of $T$,
and as already mentioned, this was proved by Dwork in 1960.
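As a sanity check on this formalism, the following Python sketch (exact
rational arithmetic, nothing assumed beyond the definitions above)
exponentiates the series for $V=\P^2$ and recovers the rational function
$1/((1-T)(1-pT)(1-p^2T))$ to order $T^{10}$:
\begin{verbatim}
# Sketch: for V = P^2, |V(F_{p^n})| = p^{2n} + p^n + 1; exponentiating the
# series should give Z_p(P^2;T) = 1/((1-T)(1-pT)(1-p^2 T)).
from fractions import Fraction

p, M = 5, 10
s = [Fraction(0)] + [Fraction(p**(2*n) + p**n + 1, n) for n in range(1, M + 1)]

e = [Fraction(1)] + [Fraction(0)] * M          # e = exp(sum s_j T^j), via the
for m in range(1, M + 1):                      # recursion coming from E' = S'E
    e[m] = sum(j * s[j] * e[m - j] for j in range(1, m + 1)) / m

z = [sum(p**(b + 2*c)                          # convolve the three geometric
         for b in range(m + 1) for c in range(m + 1 - b))
     for m in range(M + 1)]                    # series (a = m - b - c gives 1)

assert e == [Fraction(v) for v in z]
\end{verbatim}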
\smallskip
The second conjecture of Weil states that this rational function is of the form
$$Z_p(V;T)=\prod_{0\le i\le 2d}P_{i,p}(V;T)^{(-1)^{i+1}}=\dfrac{P_{1,p}(V;T)\cdots P_{2d-1,p}(V;T)}{P_{0,p}(V;T)P_{2,p}(V;T)\cdots P_{2d,p}(V;T)}\;,$$
where $d=\dim(V)$, and the $P_{i,p}$ are polynomials in $T$.
Furthermore, a basic result in algebraic geometry called Poincar\'e duality
implies that $Z_p(V;1/(p^dT))=\pm p^{de/2}T^eZ_p(V;T)$, where $e$ is the
degree of the rational function (called the Euler characteristic of $V$),
which means that there is a relation between $P_{i,p}$ and $P_{2d-i,p}$.
In addition the $P_{i,p}$ have integer coefficients, and $P_{0,p}(T)=1-T$,
$P_{2d,p}(T)=1-p^dT$. For instance, for \emph{curves} (so $d=1$), this means that
$Z_p(V;T)=P_1(V;T)/((1-T)(1-pT))$, the polynomial $P_1$ is of even degree
$2g$ ($g$ is the so-called \emph{genus} of the curve) and satisfies
$p^{g}T^{2g}P_1(V;1/(pT))=\pm P_1(V;T)$.
For knowledgeable readers, in highbrow language, the polynomial $P_{i,p}$ is
the reverse characteristic polynomial of the Frobenius endomorphism acting on
the $i$th $\ell$-adic cohomology group $H^i(V;{\mathbb Q}_{\ell})$ for any $\ell\ne p$.
\smallskip
The third, most important and most difficult of the Weil conjectures is the
local \emph{Riemann hypothesis}, which says that the reciprocal roots of
$P_{i,p}$ have modulus exactly equal to $p^{i/2}$, in other words that
$$P_{i,p}(V;T)=\prod_j(1-\alpha_{i,j}T)\text{\quad with\quad}|\alpha_{i,j}|=p^{i/2}\;.$$
This last is the most important in applications.
The Weil conjectures were completely proved by Deligne in the early 1970's,
following a strategy already put forward by Weil; this is considered one of
the two or three major accomplishments of mathematics of the second half of
the twentieth century.
\begin{exercise} (You need to know some algebraic number theory for this).
Let $P\in{\mathbb Z}[X]$ be a monic irreducible polynomial and let $K={\mathbb Q}(\th)$,
where $\th$ is a root of $P$, be the corresponding number field. Assume that
$p^2\nmid\disc(P)$. Show that the Hasse--Weil zeta function at $p$ of the
$0$-dimensional variety defined by $P=0$ is the Euler factor at $p$ of
the Dedekind zeta function $\zeta_K(s)$ attached to $K$, where $p^{-s}$ is
replaced by $T$.\end{exercise}
\subsection{Global $L$-Function Attached to a Variety}
We are now ready to ``globalize'' the above construction, and build
\emph{global} $L$-functions attached to a variety.
Let $V$ be an algebraic variety defined over ${\mathbb Q}$, say. We assume that
$V$ is ``nice'', meaning for instance that we choose $V$ to be projective,
smooth, and absolutely irreducible. For all but a finite number of primes $p$
we can consider $V$ as a smooth variety over ${\mathbb F}_p$, so for each $i$ we can
set $L_i(V;s)=\prod_p 1/P_{i,p}(V;p^{-s})$, where the product
is over all the ``good'' primes, and the $P_{i,p}$ are as above. The factor
$1/P_{i,p}(V;p^{-s})$ is as usual called the Euler factor at $p$. These
functions $L_i$ can be called the global $L$-functions attached to $V$.
This na\"\i ve definition is insufficient to construct interesting objects.
First and most importantly, we have omitted a finite number of
Euler factors at the so-called ``bad primes'', which include in particular
those for which $V$ is not smooth over ${\mathbb F}_p$, and although there do
exist cohomological recipes to define them, as far as the author is aware
these recipes do not really give practical algorithms. (In highbrow language,
these recipes are based on the computation of $\ell$-adic cohomology groups,
for which the known algorithms are useless in practice; in the simplest case
of Artin $L$-functions, one must determine the action of Frobenius on the
vector space fixed by the inertia group, which can be done reasonably easily.)
\smallskip
Another much less important reason is the fact that most of the $L_i$ are
uninteresting or related. For instance in the case of elliptic curves seen
above, we have (up to a finite number of Euler factors)
$L_0(V;s)=\zeta(s)$ and $L_2(V;s)=\zeta(s-1)$, so the only interesting $L$-function,
called \emph{the} $L$-function of the elliptic curve, is the function
$L_1(V;s)=\prod_p(1-a(p)p^{-s}+\chi(p)p^{1-2s})^{-1}$ (if the model of
the curve is chosen to be \emph{minimal}, this happens to be the correct
definition, including for the ``bad'' primes). For varieties of higher
dimension $d$, as we have mentioned as part of the Weil conjecture
the functions $L_i$ and $L_{2d-i}$ are related by Poincar\'e duality, and
$L_0$ and $L_{2d}$ are translates of the Riemann zeta function (as above), so
only the $L_i$ for $1\le i\le d$ need to be studied.
\subsection{Hypergeometric Motives}
Still another way to construct $L$-functions is through the use of
\emph{hypergeometric motives}, due to Katz and Rodriguez-Villegas. Although
this construction is a special case of the construction of $L$-functions of
varieties studied above, the corresponding variety is \emph{hidden} (although
it can be recovered if desired), and the computations are in some sense much
simpler.
Let me give a short and unmotivated introduction to the subject: let
$\gamma=(\gamma_n)_{n\ge1}$ be a finite sequence of (positive or negative) integers
satisfying the essential condition $\sum_nn\gamma_n=0$.
For any finite field ${\mathbb F}_q$ with $q=p^f$ and any character $\chi$ of ${\mathbb F}_q^*$,
recall that the Gauss sum $\gg(\chi)$ is defined by
$$\gg(\chi)=\sum_{x\in{\mathbb F}_q^*}\chi(x)\exp(2\pi i\Tr_{{\mathbb F}_q/{\mathbb F}_p}(x)/p)\;,$$
see Section \ref{sec:gausssum} below. We set
$$Q_q(\gamma;\chi)=\prod_{n\ge1}\gg(\chi^n)^{\gamma_n}$$
and for any $t\in{\mathbb F}_q\setminus\{0,1\}$
$$a_q(\gamma;t)=\dfrac{1}{1-q}\left(1+\sum_{\chi\ne\varepsilon}\chi(Mt)Q_q(\gamma;\chi)\right)\;,$$
where $\varepsilon$ is the trivial character and $M=\prod_nn^{n\gamma_n}$ is a
normalizing constant (this is not quite the exact formula but it will
suffice for our purposes). The theorem of Katz is that for $t\ne0,1$ the
quantity $a_q(\gamma;t)$ is the \emph{trace of Frobenius} on some \emph{motive}
\emph{defined over ${\mathbb Q}$}.
In the language of $L$-functions this means the following:
define as usual the local $L$-function at $p$ by the formal power series
$$L_p(\gamma;t;T)=\exp\left(\sum_{f\ge1}a_{p^f}(\gamma;t)\dfrac{T^f}{f}\right)\;.$$
Then $L_p$ is a rational function of $T$, satisfies the local Riemann
hypothesis, and if we set
$$L(\gamma;t;s)=\prod_pL_p(\gamma;t;p^{-s})^{-1}\;,$$
then $L$ once completed at the ``bad'' primes should be a global $L$-function
of the standard type described above.
\smallskip
Let me give one of the simplest examples of a hypergeometric motive, and show
how one can recover the underlying algebraic variety. We choose
$\gamma_1=4$, $\gamma_2=-2$, $\gamma_n=0$ for $n>2$, which does satisfy the condition
$\sum_nn\gamma_n=0$ (we could choose the simpler values $\gamma_1=2$, $\gamma_2=-1$,
but this would give a zero-dimensional variety, i.e., a number field, so
less representative of the general case). We thus have
$Q_q(\gamma,\chi)=\gg(\chi)^4/\gg(\chi^2)^2$ and $M=1/4$. By the results on
Jacobi sums that we will see below (Proposition \ref{jacgaufq}), if $\chi^2$
is not the trivial character $\varepsilon$ we have $Q_q(\gamma,\chi)=J(\chi,\chi)^2$,
where $J(\chi,\chi)=\sum_{x\in{\mathbb F}_q\setminus\{0,1\}}\chi(x)\chi(1-x)$. As
mentioned above, we did not give the precise formula, here it simply
corresponds to setting $Q_q(\gamma,\chi)=J(\chi,\chi)^2$, including when
$\chi^2=\varepsilon$. Thus
$$a_q(\gamma;t)=\dfrac{1}{1-q}\left(1+\sum_{\chi\ne\varepsilon}\chi(t/4)J(\chi,\chi)^2\right)\;.$$
If by a temporary abuse of notation\footnote{The definition of $J$ given
below is a sum over all $x\in{\mathbb F}_q$, so that it would give
$J(\varepsilon,\varepsilon)=q$, not $q-2$.} we define $J(\varepsilon,\varepsilon)$ by the same formula as above, we have
$J(\varepsilon,\varepsilon)=q-2$, hence $J(\varepsilon,\varepsilon)^2=(q-2)^2$ and
$$a_q(\gamma;t)=\dfrac{1}{1-q}\left(1-(q-2)^2+\sum_{\chi}\chi(t/4)J(\chi,\chi)^2\right)\;.$$
Now
$$\sum_{\chi}\chi(t/4)J(\chi,\chi)^2=\sum_{x,y\in{\mathbb F}_q\setminus\{0,1\}}\sum_{\chi}\chi(t/4)\chi(x)\chi(1-x)\chi(y)\chi(1-y)\;.$$
The point of writing it this way is that because of orthogonality of
characters (Exercise \ref{exoorth} below) the sum on $\chi$ vanishes unless
the argument is equal to $1$ in which case it is equal to $q-1$, so that
$$\sum_{\chi}\chi(t/4)J(\chi,\chi)^2=(q-1)N_q(t)\;,\text{\quad where\quad }N_q(t)=\sum_{\substack{x,y\in{\mathbb F}_q\setminus\{0,1\}\\(t/4)x(1-x)y(1-y)=1}}1$$
is the number of \emph{affine} points over ${\mathbb F}_q$ of the algebraic variety
defined by $(t/4)x(1-x)y(1-y)=1$ (which automatically implies $x$ and $y$
different from $0$ and $1$). We have thus shown that
$$a_q(\gamma;t)=\dfrac{1}{1-q}(1-(q-2)^2+(q-1)N_q(t))=q-3-N_q(t)\;.$$
\begin{exercise} By making the change of variables $X=(4/t)(1-1/x)$,
$Y=(4/t)(y-1)(1-1/x)$, show that
$$a_q(\gamma;t)=q+1-|E({\mathbb F}_q)|\;,$$
where $|E({\mathbb F}_q)|$ is the number of projective points over ${\mathbb F}_q$ of the
elliptic curve $Y^2+XY=X(X-4/t)^2$. Thus, the global $L$-function
attached to the hypergeometric motive defined by $\gamma$ is equal to
the $L$-function attached to the elliptic curve $E$.
\end{exercise}
Since we will see below fast methods for computing expressions such as
$\sum_{\chi}\chi(t/4)J(\chi,\chi)^2$, these will consequently give fast
methods for computing $|E({\mathbb F}_q)|$ for an arbitrary elliptic curve $E$.
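The identity $a_q(\gamma;t)=q-3-N_q(t)$ can already be tested directly. The
Python sketch below builds the characters of ${\mathbb F}_p^*$ from a primitive root,
computes the character-sum side with the convention
$Q_q(\gamma,\chi)=J(\chi,\chi)^2$ used above, and compares it with the
brute-force point count (the choices $p=13$ and $t=3$ are arbitrary):
\begin{verbatim}
# Sketch: verify a_q(gamma;t) = q - 3 - N_q(t) for q = p prime.
import cmath

def prime_factors(n):
    f, d = set(), 2
    while d * d <= n:
        while n % d == 0: f.add(d); n //= d
        d += 1
    if n > 1: f.add(n)
    return f

p, t = 13, 3
g = next(z for z in range(2, p)                # primitive root mod p
         if all(pow(z, (p - 1) // r, p) != 1 for r in prime_factors(p - 1)))
dlog = {pow(g, j, p): j for j in range(p - 1)} # discrete logarithms

def chi(k, x):                                 # chi_k(g^j) = exp(2 pi i jk/(p-1))
    return cmath.exp(2j * cmath.pi * k * dlog[x % p] / (p - 1))

tq = t * pow(4, -1, p) % p                     # t/4 in F_p
S = 0
for k in range(p - 1):                         # all characters, trivial included
    J = sum(chi(k, x) * chi(k, 1 - x) for x in range(2, p))    # x != 0, 1
    S += chi(k, tq) * J * J
a = (1 - (p - 2)**2 + S.real) / (1 - p)

N = sum(1 for x in range(2, p) for y in range(2, p)
        if tq * x * (1 - x) * y * (1 - y) % p == 1)
print(round(a), p - 3 - N)                     # the two values agree
\end{verbatim}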
\begin{exercise}\begin{enumerate}
\item In a similar way, study the hypergeometric motive
corresponding to $\gamma_1=3$, $\gamma_3=-1$, and $\gamma_n=0$ otherwise,
assuming that the correct formula for $Q_q$ corresponds as above to
the replacement of quotients of Gauss sums by Jacobi sums for all
characters $\chi$, not only those allowed by Proposition \ref{jacgaufq}.
To find the elliptic curve, use the change of variable $X=-xy$,
$Y=x^2y$.
\item Deduce that the global $L$-function of this hypergeometric motive
is equal to the $L$-function attached to the elliptic curve
$y^2=x^3+x^2+4x+4$ and to the $L$-function attached to the modular form
$q\prod_{n\ge1}(1-q^{2n})^2(1-q^{10n})^2$.
\end{enumerate}
\end{exercise}
\subsection{Other Sources of $L$-Functions}
There exist many other sources of $L$-functions in addition to those that we
have already mentioned, which we will not expand upon:
\begin{itemize}
\item Hecke $L$-functions, attached to Hecke Gr\"ossencharacters.
\item Artin $L$-functions, of which we have met a couple of examples in
Section \ref{sec:one}.
\item Functorial constructions of $L$-functions such as Rankin--Selberg
$L$-functions, symmetric squares and more generally symmetric powers.
\item $L$-functions attached to Galois representations.
\item General automorphic $L$-functions.
\end{itemize}
Of course these are not disjoint sets, and as already mentioned, when an
$L$-function lies in an intersection, this usually corresponds to an
interesting arithmetic property. Probably the most general such correspondence
is the \emph{Langlands program}.
\subsection{Results and Conjectures on $L(V;s)$}
The problem with global $L$-functions is that most of their properties are only
\emph{conjectural}. We mention these conjectures in the case of global
$L$-functions attached to algebraic varieties:
\smallskip
\begin{enumerate}\item The function $L_i$ is only defined through its Euler
product, and thanks to the last of Weil's conjectures, the local Riemann
hypothesis, proved by Deligne, it converges absolutely for $\Re(s)>1+i/2$.
Note that, with the definitions introduced above, $L_i$ is an $L$-function
of degree $d_i$, the common degree of $P_{i,p}$ for all but a finite number of
$p$, and of motivic weight exactly $w=i$ since the Satake parameters satisfy
$|\alpha_{i,j}|=p^{i/2}$, again by the local Riemann hypothesis.
\item A first conjecture is that $L_i$ should have an
\emph{analytic continuation} to the whole complex plane with a
\emph{finite number} of \emph{known} poles with \emph{known} polar part.
\item A second conjecture, which can in fact be considered as part of the
first, is that this extended $L$-function should satisfy a \emph{functional
equation} when $s$ is changed into $i+1-s$. More precisely, when completed
with the Euler factors at the ``bad'' primes as mentioned (but not explained)
above, if we set
$$\Lambda_i(V;s)=N^{s/2}\prod_{1\le j\le d_i}\Gamma_{{\mathbb R}}(s+\mu_j)L_i(V;s)$$
then $\Lambda_i(V;i+1-s)=\omega\ov{\Lambda_i(V^*;s)}$ for some variety $V^*$ in
some sense ``dual'' to $V$ and a complex number $\omega$ of modulus $1$. In the
above, $N$ is some integer divisible exactly by all the ``bad'' primes, i.e.,
essentially (but not exactly) the primes for which $V$ reduced modulo $p$ is
not smooth, and the $\mu_j$ are in this case (varieties) \emph{integers}
which can be computed in terms of the \emph{Hodge numbers} $h^{p,q}$ of
the variety thanks to a recipe due to Serre \cite{Ser}. The number $i$ is
called the \emph{motivic weight}, and it is important to note that the
``weight'' $k$ usually attached to an $L$-function with functional equation
$s\mapsto k-s$ is equal to $k=i+1$, i.e., to \emph{one more} than the motivic
weight.
In many cases the $L$-function is self-dual, in which case the functional
equation is simply of the form $\Lambda_i(V;i+1-s)=\pm\Lambda_i(V;s)$.
\item The function $\Lambda_i$ should satisfy the generalized Riemann
hypothesis (GRH): all its zeros in ${\mathbb C}$ are on the vertical line
$\Re(s)=(i+1)/2$. Equivalently, the zeros of $L_i$ are on the one hand
real zeros at some integers coming from the poles of the gamma factors,
and all the others satisfy $\Re(s)=(i+1)/2$.
\item The function $\Lambda_i$ should have \emph{special values}: for the
integer values of $s$ (called special points) which are those for which
neither the gamma factor at $s$ nor at $i+1-s$ has a pole, it should be
computable ``explicitly'': it should be equal to a \emph{period}
(integral of an algebraic function on an algebraic cycle) times an algebraic
number. This has been stated (conjecturally) in great detail by Deligne in the
1970's.\end{enumerate}
It is conjectured that \emph{all} $L$-functions of degree $d_i$ and weight
$i$ as defined at the beginning should satisfy all the above properties, not
only the $L$-functions coming from varieties.
\smallskip
I now give the status of these conjectures.
\begin{enumerate}\item The first conjecture (analytic continuation) is known
only for a very restricted class of $L$-functions: first, $L$-functions of
degree $1$, which can be shown to be Dirichlet $L$-functions; then $L$-functions
of Hecke characters, $L$-functions attached to modular forms as shown above, and
more generally to \emph{automorphic forms}. For $L$-functions attached to
varieties, one knows this \emph{only} when one can prove that the
corresponding $L$-function comes from an automorphic form: this is how Wiles
proves the analytic continuation of the $L$-function attached to an elliptic
curve defined over ${\mathbb Q}$, a very deep and difficult result which is,
together with Deligne's proof of the Weil conjectures, one of the most important
results of the end of the 20th century. More results of this type are known for
certain higher-dimensional varieties such as certain \emph{Calabi--Yau
manifolds}. Note however that for such simple objects as most
\emph{Artin $L$-functions} (motivic weight $0$, in which case only \emph{meromorphic}
continuation is known) or abelian surfaces, this is not
known, although the work of Brumer--Kramer--Poor--Yuen, as well as more
recent work of G.~Boxer, F.~Calegari, T.~Gee, and V.~Pilloni on the
\emph{paramodular conjecture} may some day lead to a proof in this last case.
\item The second conjecture on the existence of a functional equation is
of course intimately linked to the first, and the work of Wiles et al.
also proves the existence of this functional equation. But in
addition, in the case of Artin $L$-functions for which only meromorphy
(possibly with infinitely many poles) is known thanks to a theorem of
Brauer, this same theorem implies the functional equation which is thus known
in this case. Also, as mentioned, the Euler factors which we must include
for the ``bad'' primes in order to have a clean functional equation are often
quite difficult to compute.
\item The (global) Riemann hypothesis is not known for \emph{any} global
$L$-function of the type mentioned above, not even for the simplest one, the
Riemann zeta function $\zeta(s)$. Note that it \emph{is} known for other kinds of
$L$-functions such as \emph{Selberg zeta functions}, but these are
functions of order $2$, so are not in the class considered above.
\item Concerning \emph{special values}: many cases are known, and many
conjectured. This is probably one of the most \emph{fun} conjectures since
everything can be computed explicitly to thousands of decimals if desired.
For instance, for modular forms it is a theorem of Manin, for symmetric
squares of modular forms it is a theorem of Rankin, and for higher symmetric
powers one has very precise conjectures of Deligne, which check out perfectly
on a computer, but none of them are proved. For the Riemann zeta function
or Dirichlet $L$-functions, of course all these results such as $\zeta(2)=\pi^2/6$
date back essentially to Euler.
In the case of an elliptic curve $E$ over ${\mathbb Q}$, the only special point is
$s=1$, and in this case the whole subject revolves around the \emph{Birch and
Swinnerton-Dyer conjecture} (BSD) which predicts the behavior of $L_1(E;s)$
around $s=1$. The only known results, already quite deep, due to Kolyvagin
and Gross--Zagier, deal with the case where the \emph{rank} of the elliptic
curve is $0$ or $1$.
\end{enumerate}
There exist a number of other very important conjectures linked to the behavior
of $L$-functions at integer points which are not necessarily special,
such as the Bloch, Beilinson, Kato, Lichtenbaum, or Zagier conjectures,
but it would carry us too far afield to describe them in general. However,
in the next subsections, we will give three completely explicit numerical
examples of these conjectures, so that the reader can convince himself both
that they are easy to check numerically, and that the results are spectacular.
\subsection{An Explicit Numerical Example of BSD}\label{sec:BSD}
Let us now be a little more precise. Even if this subsection involves notions
not introduced in these notes, we ask the reader to be patient since the
numerical work only involves standard notions.
Let $E$ be an elliptic curve defined over ${\mathbb Q}$. Elliptic curves have a
natural \emph{abelian group} structure, and it is a theorem of Mordell
that the group of rational points on $E$ is \emph{finitely generated}, i.e.,
$E({\mathbb Q})\simeq{\mathbb Z}^r\oplus E_{\text{tors}}({\mathbb Q})$, where $E_{\text{tors}}({\mathbb Q})$ is
a finite group, and $r$ is called the \emph{rank} of the curve.
On the analytic side, we have mentioned that $E$ has an $L$-function $L(E,s)$
(denoted $L_1$ above), and the deep theorem of Wiles et al. says that it has
an analytic continuation to the whole of ${\mathbb C}$ into an entire function with
a functional equation linking $L(E,s)$ to $L(E,2-s)$. The only special point
in the above sense is $s=1$, and a weak form of the Birch and Swinnerton-Dyer
conjecture states that the order of vanishing $v$ of $L(E,s)$ at $s=1$ should
be equal to $r$.
This has been proved for $r=0$ (by Kolyvagin) and for $r=1$
(by Gross--Zagier--Kolyvagin), and nothing is known for $r\ge2$. However,
this is not quite true: if $r=2$ then we cannot have $v=0$ or $1$ by the
previous results, so $v\ge2$. On the other hand, for any given elliptic curve
it is easy to check numerically that $L''(E,1)\ne0$, hence that $v=2$.
Similarly, if $r=3$ we again cannot have $v=0$ or $1$. But for any given
elliptic curve one can compute the \emph{sign} of the functional equation
linking $L(E,s)$ to $L(E,2-s)$, and this will show that if $r=3$ all
derivatives $L^{(k)}(E,1)$ for $k$ even vanish. Thus we cannot have
$v=2$, and once again for any $E$ it is easy to check that $L'''(E,1)\ne0$,
hence to check that $v=3$.
Unfortunately, this argument does not work for $r\ge4$. Assume for instance
$r=4$. The same reasoning will show that $L(E,1)=0$ (by Kolyvagin), that
$L'(E,1)=L'''(E,1)=0$ (because the sign of the functional equation will be
$+$), and that $L''''(E,1)\ne0$ by direct computation. The BSD conjecture
tells us that $L''(E,1)=0$, but this is not known for a single curve.
\medskip
Let us give the simplest numerical example, based on an elliptic curve with
$r=4$. I emphasize that no knowledge of elliptic curves is needed for this.
For every prime $p$, consider the congruence
$$y^2+xy\equiv x^3-x^2-79x+289\pmod{p}\;,$$
and denote by $N(p)$ the number of pairs $(x,y)\in({\mathbb Z}/p{\mathbb Z})^2$ satisfying it.
We define an arithmetic function $a(n)$ in the following way:
\smallskip
\begin{enumerate}
\item $a(1)=1$.
\item If $p$ is prime, we set $a(p)=p-N(p)$.
\item For $k\ge2$ and $p$ prime, we define $a(p^k)$ by induction:
$$a(p^k)=a(p)a(p^{k-1})-\chi(p)p\cdot a(p^{k-2})\;,$$
where $\chi(p)=1$ unless $p=2$ or $p=117223$, in which case $\chi(p)=0$.
\item For arbitrary $n$, we extend by multiplicativity: if $n=\prod_ip_i^{k_i}$
then $a(n)=\prod_ia(p_i^{k_i})$.
\end{enumerate}
\begin{remarks}{\rm \begin{itemize}
\item The number $117223$ is simply a prime factor
of the discriminant of the cubic equation obtained by completing the square
in the equation of the above elliptic curve.
\item Even though the definition of $a(n)$ looks complicated, it is \emph{very}
easy to compute (see below), for instance only a few seconds for a million
terms. In addition $a(n)$ is quite small: for $n=1,2,\dots$ we have
$$a(n)=1,-1,-3,1,-4,3,-5,-1,6,4,-6,-3,-6,5,\ldots$$
\end{itemize}}
\end{remarks}
On the analytic side, define a function $f(x)$ for $x>0$ by
$$f(x)=\int_1^\infty e^{-xt}\log(t)^2\,dt\;.$$
Note that it is very easy to compute this integral to thousands of digits if
desired; note also that $f$ tends to $0$ exponentially fast as $x\to\infty$
(more precisely $f(x)\sim 2e^{-x}/x^3$).
In this specific situation, the BSD conjecture tells us that $S=0$, where
$$S=\sum_{n\ge1}a(n)f\left(\dfrac{2\pi n}{\sqrt{234446}}\right)\;.$$
It takes only a few seconds to compute \emph{thousands} of digits of $S$,
and we can indeed check that $S$ is extremely close to $0$, but as of now
nobody knows how to prove that $S=0$.
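Here is a minimal Python sketch of this computation using the {\tt mpmath}
library (a modest-precision version of the {\tt Pari/GP} script mentioned at
the end of the course; the bound $B=1500$ and the 15 digits of working
precision are arbitrary choices, and in pure Python the run takes a minute or
two, already exhibiting $S\approx0$ to several decimals):
\begin{verbatim}
# Sketch (mpmath): compute S = sum_{n <= B} a(n) f(2 pi n / sqrt(234446)).
from mpmath import mp, quad, exp, log, pi, sqrt, inf

mp.dps = 15
B = 1500                                  # truncation; tail is ~ e^{-0.013 B}

def legendre(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def a_p(p):                               # a(p) = p - N(p); for odd p, count
    if p == 2:                            # y-solutions of y^2 + xy = c via
        return -1                         # the discriminant x^2 + 4c
    return p - sum(1 + legendre(x*x + 4*(x**3 - x*x - 79*x + 289), p)
                   for x in range(p))

spf = list(range(B + 1))                  # smallest-prime-factor sieve
for i in range(2, int(B**0.5) + 1):
    if spf[i] == i:
        for j in range(i * i, B + 1, i):
            if spf[j] == j: spf[j] = i

a = [0] * (B + 1); a[1] = 1
for n in range(2, B + 1):
    p, m, k = spf[n], n, 0
    while m % p == 0: m //= p; k += 1
    if m > 1:                             # multiplicativity
        a[n] = a[p**k] * a[m]
    elif k == 1:
        a[n] = a_p(p)
    else:                                 # recursion for prime powers
        chi = 0 if p in (2, 117223) else 1
        a[n] = a[p] * a[p**(k - 1)] - chi * p * a[p**(k - 2)]

f = lambda x: quad(lambda u: exp(-x * u) * log(u)**2, [1, inf])
S = sum(a[n] * f(2 * pi * n / sqrt(234446)) for n in range(1, B + 1))
print(S)                                  # very close to 0, as BSD predicts
\end{verbatim}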
\subsection{An Explicit Numerical Example of Beilinson--Bloch}\label{sec:BB}
This subsection is entirely due to V.~Golyshev (personal communication)
whom I heartily thank.
Let $u>1$ be a real parameter. Consider the elliptic curve $E(u)$ with
affine equation
$$y^2=x(x+1)(x+u^2)\;.$$
As usual one can define its $L$-function $L(E(u),s)$ using a general recipe.
The BSD conjecture deals with the value of $L(E(u),s)$ (and its derivatives)
at $s=1$. The Beilinson--Bloch conjectures deal with values at other
integer values of $s$, in the present case we consider $L(E(u),2)$. Once
again it is very easy to compute thousands of decimals of this quantity if
desired.
On the other hand, for $u>1$ consider the function
$$g(u)=2\pi\int_0^1\dfrac{\asin(t)}{\sqrt{1-t^2/u^2}}\,\dfrac{dt}{t}+\pi^2\acosh(u)=\dfrac{\pi^2}{2}\left(2\log(4u)-\sum_{n\ge1}\dfrac{\binom{2n}{n}^2}{n}(4u)^{-2n}\right)\;.$$
The conjecture says that when $u$ is an integer, $L(E(u),2)/g(u)$ should be a
\emph{rational number}. In fact, if we let $N(u)$ be the \emph{conductor}
of $E(u)$ (a notion that I have not defined), then it seems that when
$u\ne4$ and $u\ne8$ we even have $F(u)=N(u)L(E(u),2)/g(u)\in{\mathbb Z}$.
Once again, this is a conjecture which can immediately be tested on
modern computer algebra systems such as {\tt Pari/GP}. For instance, for
$u=2,3,\ldots$ we find \emph{numerically} to thousands of decimal digits
(remember that nothing is proved)
$$F(u)=1,2,4/11,8,32,8,4/3,8,32,64,8,96,256,48,16,16,192,\ldots$$
\begin{exercise} Check numerically that the conjecture seems still to be true
when $4u\in{\mathbb Z}$, i.e., if $u$ is a rational number with denominator $2$ or $4$.
On the other hand, it is definitely wrong for instance if $3u\in{\mathbb Z}$ (and
$u\notin{\mathbb Z}$), i.e., when the denominator is $3$. It is possible that there
is a replacement formula, but Bloch and Golyshev tell me that this is
unlikely.\end{exercise}
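Both expressions given above for $g(u)$ are straightforward to evaluate; the
following minimal Python/{\tt mpmath} sketch checks that the integral and the
series agree (this is a consistency check of the formula itself, independent
of the conjecture):
\begin{verbatim}
# Sketch (mpmath): the integral and series expressions for g(u) agree.
from mpmath import mp, quad, nsum, asin, acosh, sqrt, log, pi, binomial, inf

mp.dps = 30

def g_integral(u):
    I = quad(lambda t: asin(t) / (sqrt(1 - t**2 / u**2) * t), [0, 1])
    return 2 * pi * I + pi**2 * acosh(u)

def g_series(u):
    s = nsum(lambda n: binomial(2*n, n)**2 / (n * (4*u)**(2*n)), [1, inf])
    return pi**2 / 2 * (2 * log(4 * u) - s)

for u in [2, 3, 5]:
    print(g_integral(u) - g_series(u))   # ~ 0 to working precision
\end{verbatim}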
\subsection{An Explicit Numerical Example of Mahler Measures}
This example is entirely due to W.~Zudilin (personal communication)
whom I heartily thank. The reader does not need any knowledge of Mahler
measures since we are again going to give the example as an equality
between values of $L$-functions and integrals. Note that this can also be
considered an isolated example of the Bloch--Beilinson conjecture.
Consider the elliptic curve $E$ with equation $y^2=x^3-x^2-4x+4$, of conductor
$24$. Its associated $L$-function $L(E,s)$ can easily be shown to be equal
to the $L$-function associated to the modular form
$$q\prod_{n\ge1}(1-q^{2n})(1-q^{4n})(1-q^{6n})(1-q^{12n})$$
(we do not need this for this example, but this will give us two
ways to create the $L$-function in {\tt Pari/GP}). We have the conjectural
identity due to Zudilin:
$$L(E,3)=\dfrac{\pi^2}{36}\left(\pi G+\int_0^1\asin(x)\asin(1-x)\,\dfrac{dx}{x}\right)\;,$$
where $G=\sum_{n\ge0}(-1)^n/(2n+1)^2=0.91596559\cdots$ is Catalan's constant.
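The right-hand side is again easy to evaluate to high precision; here is a
minimal Python/{\tt mpmath} sketch (the printed value can then be compared
with $L(E,3)$ as computed, for instance, by the {\tt Pari/GP} $L$-function
package mentioned below):
\begin{verbatim}
# Sketch (mpmath): numerical value of the right-hand side of the identity.
from mpmath import mp, quad, asin, pi, catalan

mp.dps = 30
I = quad(lambda x: asin(x) * asin(1 - x) / x, [0, 1])
print(pi**2 / 36 * (pi * catalan + I))   # conjecturally equals L(E,3)
\end{verbatim}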
At the end of this course, the reader will find three complete {\tt Pari/GP}
scripts which implement the BSD, Beilinson--Bloch, and Mahler measure examples
that we have just given.
\subsection{Computational Goals}
Now that we have a handle on what $L$-functions are, we come to the
computational and algorithmic problems, which are the main focus of these
notes. This involves many different aspects, all interesting in their own
right.
In a first type of situation, we assume that we are ``given''
the $L$-function, in other words that we are given a reasonably ``efficient''
algorithm to compute the coefficients $a(n)$ of the Dirichlet series
(or the Euler factors), and that we know the gamma factor $\gamma(s)$.
The main computational goals are then the following:
\begin{enumerate}\item Compute $L(s)$ for ``reasonable'' values of $s$:
for example, compute $\zeta(3)$. More sophisticated, but much more interesting:
check the Birch--Swinnerton-Dyer conjecture, the Beilinson--Bloch
conjecture, and the conjectures of Deligne concerning special values of
symmetric power $L$-functions of modular forms.
\item Check the numerical validity of the functional equation, and
in passing, if unknown, compute the numerical value of the \emph{root
number} $\omega$ occurring in the functional equation.
\item Compute $L(s)$ for $s=1/2+it$ for rather large real values of $t$
(in the case of weight $0$; more generally for $s=(w+1)/2+it$),
and/or make a plot of the corresponding $Z$ function (see below).
\item Compute all the zeros of $L(s)$ on the critical line up to a given
height, and check the corresponding Riemann hypothesis.
\item Compute the residue of $L(s)$ at $s=1$ (typically): for instance
if $L$ is the Dedekind zeta function of a number field, this gives the
product $hR$.
\item Compute the \emph{order} of the zeros of $L(s)$ at integer points
(if any), and the leading term in the Taylor expansion: for instance
for the $L$-function of an elliptic curve and $s=1$, this gives
the \emph{analytic rank} of an elliptic curve, together with the
Birch and Swinnerton-Dyer data.
\end{enumerate}
\medskip
Unfortunately, we are not always given an $L$-function completely
explicitly. We can lack more or less partial information on the
$L$-function:
\begin{enumerate}\item One of the most frequent situations is that
one knows the Euler factors for the ``good'' primes, as well as the
corresponding part of the conductor, and that one is lacking both
the Euler factors for the bad primes and the bad part of the conductor.
The goal is then to find numerically the missing factors and missing parts.
\item A more difficult but much more interesting problem is when
essentially nothing is known on the $L$-function except $\gamma(s)$, in
other words the $\Gamma_{{\mathbb R}}$ factors and the constant $N$, essentially equal
to the conductor. It is quite amazing that nonetheless one can quite often
tell whether an $L$-function with the given data can exist, and give
some of the initial Dirichlet coefficients (even when several $L$-functions
may be possible).
\item Even more difficult is when essentially nothing is known except
the degree $d$ and the constant $N$, and one looks for possible $\Gamma_{{\mathbb R}}$
factors: this is the case in the search for Maass forms over $\SL_n({\mathbb Z})$,
which has been conducted very successfully for $n=2$, $3$, and $4$.
\end{enumerate}
We will not consider these more difficult problems.
\subsection{Available Software for $L$-Functions}
Many people working on the subject have their own software; here I mention
only what is publicly available.
$\bullet$ M.~Rubinstein's {\tt C++} program {\tt lcalc}, which can compute
values of $L$-functions, make large tables of zeros, and so on.
The program uses the {\tt C++} type {\tt double}, so it is limited to 15 decimal
digits, but it is highly optimized, hence very fast, and is used in most
situations. It is also optimized for large values of the imaginary part,
using the Riemann--Siegel formula, and it is available in {\tt Sage}.
\smallskip
$\bullet$ T.~Dokchitser's program {\tt computel}, initially written in
{\tt GP/Pari}, rewritten for {\tt magma}, and also available in {\tt Sage}.
Similar to Rubinstein's, but allows arbitrary precision, hence slower,
and has no built-in zero finder, although this is not too difficult
to write. It is not optimized for large imaginary parts.
\smallskip
$\bullet$ Since June 2015, {\tt Pari/GP} has a complete package for computing
with $L$-functions, written by B.~Allombert, K.~Belabas, P.~Molin, and myself,
based on the ideas of T.~Dokchitser for the computation
of inverse Mellin transforms (see below) but put on a more solid footing,
and on the ideas of P.~Molin for computing the $L$-function values themselves,
which avoid computing generalized incomplete gamma functions (see also below).
Note the related complete {\tt Pari/GP} package for computing with modular
forms, available since July 2018.
\smallskip
$\bullet$ Last but not least, not a program but a huge \emph{database}
of $L$-functions, modular forms, number fields, etc., which is the
result of a collaborative effort of approximately 30 to 40 people headed
by D.~Farmer. This database can of course be queried in many different
ways, it is possible and useful to navigate between related pages, and
it also contains {\tt knowls}, bits of knowledge which give the main
definitions. In addition to the stored data, the site can compute
additional required information on the fly using the software mentioned above,
i.e., {\tt Pari}, {\tt Sage}, {\tt magma}, and {\tt lcalc}. It is
available at:
\centerline{\tt http://www.lmfdb.org}
\section{Arithmetic Methods: Computing $a(n)$}
We now come to the second part of this course: the computation of
the Dirichlet series coefficients $a(n)$ and/or of the Euler factors,
which is usually the same problem. Of course this depends entirely on how
the $L$-function is \emph{given}: in view of what we have seen, it can be
given for instance (but not only) as the $L$-function attached to a modular
form, to a variety, or to a hypergeometric motive. Since there are so many
relations between these $L$-functions (we have seen several identities above),
we will not separate the way in which they are given, but treat everything
at once.
\smallskip
In view of the preceding section, an important computational problem is the
computation of $|V({\mathbb F}_q)|$ for a variety $V$. This may of course be done by a
na\"\i ve point count: if $V$ is defined by polynomials in $n$ variables, we can
range through the $q^n$ possibilities for the $n$ variables and count the
number of common zeros. In other words, there always exists a trivial
algorithm requiring $q^n$ steps. We of course want something better.
\subsection{General Elliptic Curves}
Let us first look at the special case of \emph{elliptic curves}, i.e.,
a projective curve $V$ with affine equation $y^2=x^3+ax+b$ such that
$p\nmid 6(4a^3+27b^2)$, which is almost the general equation for an
\emph{elliptic curve}. For simplicity assume that $q=p$, but it is immediate
to generalize. If you know the definition of the Legendre symbol, you know
that the number of solutions in ${\mathbb F}_p$ to the equation $y^2=n$ is equal to
$1+\lgs{n}{p}$. If you do not, since ${\mathbb F}_p$ is a field, it is clear that this
number is equal to $0$, $1$, or $2$, and so one can \emph{define} $\lgs{n}{p}$
as one less, so $-1$, $0$, or $1$. Thus, since it is immediate to see that
there is a single projective point at infinity, we have
\begin{align*}|V({\mathbb F}_p)|&=1+\sum_{x\in{\mathbb F}_p}\left(1+\leg{x^3+ax+b}{p}\right)=p+1-a(p)\;,\quad\text{with}\\
a(p)&=-\sum_{0\le x\le p-1}\leg{x^3+ax+b}{p}\;.\end{align*}
Now a Legendre symbol can be computed very efficiently using the
\emph{quadratic reciprocity law}. Thus, considering that it can be computed
in constant time (which is not quite true but almost), this gives an $O(p)$
algorithm for computing $a(p)$, already much faster than the trivial $O(p^2)$
algorithm consisting in looking at all pairs $(x,y)$.
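Here is a minimal Python sketch of this $O(p)$ method; it uses Euler's
criterion for the Legendre symbol instead of quadratic reciprocity (slower,
one modular exponentiation per term, but simpler to write), so the run below
takes a few seconds:
\begin{verbatim}
# Sketch of the O(p) point count for y^2 = x^3 + ax + b.
def legendre(n, p):                      # (n/p), p an odd prime
    n %= p
    return 0 if n == 0 else (1 if pow(n, (p - 1) // 2, p) == 1 else -1)

def a_p(a, b, p):                        # assumes p !| 6(4a^3 + 27b^2)
    return -sum(legendre(x**3 + a * x + b, p) for x in range(p))

p = 1000003
ap = a_p(1, 1, p)                        # the curve y^2 = x^3 + x + 1
print(ap, abs(ap) < 2 * p**0.5)          # Hasse bound |a(p)| < 2 sqrt(p)
\end{verbatim}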
\smallskip
To do better, we have to use an additional and crucial property of an elliptic
curve: it is an \emph{abelian group}. Using this combined with the so-called
Hasse bounds $|a(p)|<2\sqrt{p}$ (a special case of the Weil conjectures), and
the so-called \emph{baby-step giant-step algorithm} due to Shanks, one can
obtain an $O(p^{1/4})$ algorithm, which is very fast for all practical
purposes.
\smallskip
However a remarkable discovery due to Schoof in the early 1980's is that
there exists a practical algorithm for computing $a(p)$ which is
\emph{polynomial in $\log(p)$}, for instance $O(\log^6(p))$. The idea is to
compute $a(p)$ modulo $\ell$ for small primes $\ell$ using
\emph{$\ell$-division polynomials}, and then use the Chinese remainder theorem
and the bound $|a(p)|<2\sqrt{p}$ to recover $a(p)$. Several
important improvements have been made on this basic algorithm, in particular
by Atkin and Elkies, and the resulting SEA algorithm (which is implemented
in many computer packages) is able to compute $a(p)$ for $p$ with several
thousand decimal digits. Note however that in practical ranges (say
$p<10^{12}$), the $O(p^{1/4})$ algorithm mentioned above is sufficient.
\subsection{Elliptic Curves with Complex Multiplication}
In certain special cases it is possible to compute $|V({\mathbb F}_q)|$ for an elliptic
curve $V$ much faster than with any of the above methods: when the elliptic
curve $V$ has \emph{complex multiplication}. Let us consider the special
cases $y^2=x^3-nx$ (the general case is more complicated but not
really slower). By the general formula for $a(p)$, we have for
$p\ge3$:
\begin{align*}a(p)&=-\sum_{-(p-1)/2\le x\le (p-1)/2}\leg{x(x^2-n)}{p}\\
&=-\sum_{1\le x\le (p-1)/2}\left(\leg{x(x^2-n)}{p}+\leg{-x(x^2-n)}{p}\right)\\
&=-\left(1+\leg{-1}{p}\right)\sum_{1\le x\le(p-1)/2}\leg{x(x^2-n)}{p}\end{align*}
by the multiplicative property of the Legendre symbol. This already
shows that if $\lgs{-1}{p}=-1$, in other words $p\equiv3\pmod4$, we
have $a(p)=0$. But we can also find a formula when $p\equiv1\pmod4$:
recall that in that case by a famous theorem due to Fermat, there
exist integers $u$ and $v$ such that $p=u^2+v^2$. If necessary by
exchanging $u$ and $v$, and/or changing the sign of $u$, we may
assume that $u\equiv-1\pmod4$, in which case the decomposition is
unique, up to the sign of $v$. It is then not difficult to
prove the following theorem (see Section 8.5.2 of \cite{Coh3} for the proof):
\begin{theorem} Assume that $p\equiv1\pmod4$ and $p=u^2+v^2$ with
$u\equiv-1\pmod4$. The number of projective points on the elliptic
curve $y^2=x^3-nx$ (where $p\nmid n$) is equal to $p+1-a(p)$, where
$$a(p)=2\leg{2}{p}\begin{cases}
-u&\text{\quad if\quad $n^{(p-1)/4}\equiv1\pmod{p}$}\\
u&\text{\quad if\quad $n^{(p-1)/4}\equiv-1\pmod{p}$}\\
-v&\text{\quad if\quad $n^{(p-1)/4}\equiv-u/v\pmod{p}$}\\
v&\text{\quad if\quad $n^{(p-1)/4}\equiv u/v\pmod{p}$}\end{cases}$$
(note that one of these four cases must occur).
\end{theorem}
To apply this theorem from a computational standpoint we note the
following two \emph{facts}:
(1) The quantity $n^{(p-1)/4}\bmod p$ can be computed efficiently
by the \emph{binary powering algorithm} (in $O(\log^3(p))$
operations). It is however possible to compute it more efficiently
in $O(\log^2(p))$ operations using the \emph{quartic reciprocity law}.
(2) The numbers $u$ and $v$ such that $u^2+v^2=p$ can be computed
efficiently (in $O(\log^2(p))$ operations) using \emph{Cornacchia's
algorithm} which is very easy to describe but not so easy to prove.
It is a variant of Euclid's algorithm. It proceeds as follows:
\smallskip
$\bullet$ As a first step, we compute a square root of $-1$ modulo $p$,
i.e., an $x$ such that $x^2\equiv-1\pmod{p}$. This is done by choosing
randomly a $z\in[1,p-1]$ and computing the Legendre symbol $\lgs{z}{p}$
until it is equal to $-1$ (we can also simply try $z=2$, $3$, ...).
Note that this is a fast computation. When this is the case, we have
by definition $z^{(p-1)/2}\equiv-1\pmod{p}$, hence $x^2\equiv-1\pmod{p}$
for $x=z^{(p-1)/4}\bmod{p}$. Reducing $x$ modulo $p$ and possibly
changing $x$ into $p-x$, we normalize $x$ so that $p/2<x<p$.
\smallskip
$\bullet$ As a second step, we perform the Euclidean algorithm on the pair
$(p,x)$, writing $a_0=p$, $a_1=x$, and $a_{n-1}=q_na_n+a_{n+1}$
with $0\le a_{n+1}<a_n$, and we stop at the first $n$ for which
$a_n^2<p$. It can be proved (this is the difficult part) that for
this specific $n$ we have $a_n^2+a_{n+1}^2=p$, so up to exchange of
$u$ and $v$ and/or change of signs, we can take $u=a_n$ and $v=a_{n+1}$.
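The following Python sketch implements Cornacchia's algorithm exactly as
described, together with the case analysis of the theorem above, and checks
the result against the direct $O(p)$ Legendre-symbol sum for a few small
primes $p\equiv1\pmod4$:
\begin{verbatim}
# Sketch: Cornacchia plus the CM theorem, checked against the direct sum.
def legendre(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def cornacchia(p):                   # p = 1 mod 4; returns (u, v), u^2+v^2 = p
    z = 2
    while legendre(z, p) != -1:      # find a quadratic non-residue
        z += 1
    x = pow(z, (p - 1) // 4, p)      # a square root of -1 mod p
    if x <= p // 2: x = p - x        # normalize p/2 < x < p
    a, b = p, x
    while b * b >= p:                # Euclid; stop at first a_n with a_n^2 < p
        a, b = b, a % b
    return b, a % b                  # then a_n^2 + a_{n+1}^2 = p

def a_p_cm(n, p):                    # the theorem; p = 1 mod 4, p !| 2n
    s, r = cornacchia(p)
    u, v = (s, r) if s % 2 else (r, s)        # u odd, v even
    if u % 4 != 3: u = -u                     # normalize u = -1 mod 4
    L2 = 1 if p % 8 in (1, 7) else -1         # the symbol (2/p)
    w = pow(n, (p - 1) // 4, p)
    if w == 1:     return -2 * L2 * u
    if w == p - 1: return  2 * L2 * u
    if w == -u * pow(v, -1, p) % p: return -2 * L2 * v
    return 2 * L2 * v

def a_p_direct(n, p):
    return -sum(legendre(x**3 - n * x, p) for x in range(p))

for p in [13, 17, 29, 37, 41, 53]:
    for n in [1, 2, 3, 5]:
        assert a_p_cm(n, p) == a_p_direct(n, p)
\end{verbatim}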
\smallskip
Note that Cornacchia's algorithm can easily be generalized to solving
efficiently $u^2+dv^2=p$ or $u^2+dv^2=4p$ for any $d\ge1$, see Section 1.5.2
of \cite{Coh1} (incidentally one can also solve this for $d<0$, but it poses
completely different problems since there may be infinitely many solutions).
\smallskip
The above theorem is given for the special elliptic curves
$y^2=x^3-nx$ which have complex multiplication by the (ring of integers
of the) field ${\mathbb Q}(i)$, but a similar theorem is valid for all curves
with complex multiplication, see Section 8.5.2 of \cite{Coh3}.
\subsection{Using Modular Forms of Weight $2$}
By Wiles' celebrated theorem, the $L$-function of an elliptic curve is
equal to the $L$-function of a modular form of weight $2$ for $\Gamma_0(N)$,
where $N$ is the conductor of the curve. We do not need to give the
precise definitions of these objects, but only a specific example.
Let $V$ be the elliptic curve with affine equation $y^2+y=x^3-x^2$.
It has conductor $11$. It can be shown using classical modular form methods
(i.e., without Wiles' theorem) that the global $L$-function
$L(V;s)=\sum_{n\ge1}a(n)/n^s$ is the same as that of the modular form
of weight $2$ over $\Gamma_0(11)$ given by
$$f(\tau)=q\prod_{m\ge1}(1-q^m)^2(1-q^{11m})^2\;,$$
with $q=\exp(2\pi i\tau)$. Even with no knowledge of modular forms, this
simply means that if we formally expand the product on the right hand side
as
$$q\prod_{m\ge1}(1-q^m)^2(1-q^{11m})^2=\sum_{n\ge1}b(n)q^n\;,$$
we have $b(n)=a(n)$ for all $n$, and in particular for $n=p$ prime.
We have already seen this example above with a slightly different equation
for the elliptic curve (which makes no difference for its $L$-function outside
of the primes $2$ and $3$).
We see that this gives an alternate method for computing $a(p)$ by
expanding the infinite product. Indeed, the function
$$\eta(\tau)=q^{1/24}\prod_{m\ge1}(1-q^m)$$
is a modular form of weight $1/2$ with known expansion:
$$\eta(\tau)=\sum_{n\ge1}\leg{12}{n}q^{n^2/24}\;,$$
and so using Fast Fourier Transform techniques for formal power series
multiplication we can compute all the coefficients $a(n)$ simultaneously
(as opposed to one by one) for $n\le B$ in time $O(B\log^2(B))$. This
amounts to computing each individual $a(n)$ in time $O(\log^2(n))$, so
it seems to be competitive with the fast methods for elliptic curves
with complex multiplication, but this is an illusion since we must
store all $B$ coefficients, so it can be used only for $B\le 10^{12}$,
say, far smaller than what can be reached using Schoof's algorithm,
which is truly polynomial in $\log(p)$ for each fixed prime $p$.
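As a small illustration of the principle (a naive sketch that does not use
FFT multiplication, so only suitable for tiny $B$), the following Python
lines expand the product above and print $b(n)=a(n)$ for $n\le30$:
\begin{verbatim}
# Expand q * prod_{m>=1} (1-q^m)^2 (1-q^(11m))^2 up to q^B (naively).
B = 30
c = [0] * (B + 1)
c[1] = 1                                  # the leading factor q
for m in range(1, B + 1):
    for e in (m, m, 11 * m, 11 * m):      # multiply by (1 - q^e), four factors
        if e <= B:
            for n in range(B, e - 1, -1):
                c[n] -= c[n - e]
print([c[n] for n in range(1, B + 1)])
# output starts 1, -2, -1, 2, 1, 2, -2, 0, -2, -2, 1, ...
\end{verbatim}
For large $B$ one replaces the inner loops by FFT-based power series
multiplication to reach the $O(B\log^2(B))$ bound mentioned above.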
\subsection{Higher Weight Modular Forms}
It is interesting to note that the dichotomy between elliptic curves
with or without complex multiplication is also valid for modular forms
of higher weight (again, whatever that means, you do not need to know
the definitions). For instance, consider
$$\Delta(\tau)=\Delta_{24}(\tau)=\eta^{24}(\tau)=q\prod_{m\ge1}(1-q^m)^{24}:=\sum_{n\ge1}\tau(n)q^n\;.$$
The function $\tau(n)$ is a famous function called the \emph{Ramanujan
$\tau$ function}, and has many important properties, analogous to those
of the $a(p)$ attached to an elliptic curve (i.e., to a modular
form of weight $2$).
There are several methods to compute $\tau(p)$ for $p$ prime, say. One
is to do as above, using FFT techniques. The running time is similar,
but again we are limited to $B\le 10^{12}$, say. A second more
sophisticated method is to use the \emph{Eichler--Selberg trace formula},
which enables the computation of an individual $\tau(p)$ in time
$O(p^{1/2+\varepsilon})$ for all $\varepsilon>0$. A third very deep method, developed
by Edixhoven, Couveignes, et al., is a generalization of Schoof's algorithm.
While in principle polynomial time in $\log(p)$, it is not yet practical
compared to the preceding method.
For those who want to see the resulting trace formula explicitly, we
let $H(N)$ be the
\emph{Hurwitz class number} (essentially the class number of imaginary
quadratic orders counted with suitable multiplicity): if we set
$H_3(N)=H(4N)+2H(N)$ (note that $H(4N)$ can be computed in terms of $H(N)$),
then for $p$ prime
\begin{align*}\tau(p)&=28p^6-28p^5-90p^4-35p^3-1\\
&\phantom{=}-128\sum_{1\le t<p^{1/2}}t^6(4t^4-9pt^2+7p^2)H_3(p-t^2)\;,
\end{align*}
which is the fastest \emph{practical} formula that I know for computing
$\tau(p)$.
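In code the formula is a few lines; the sketch below assumes a
user-supplied function \texttt{hurwitz(N)} (a hypothetical helper, by far
the real work here) returning the Hurwitz class number $H(N)$ as an exact
rational, e.g.\ a \texttt{fractions.Fraction}:
\begin{verbatim}
from math import isqrt

def tau(p, hurwitz):
    # hurwitz(N) must return H(N); H3(N) = H(4N) + 2 H(N) as above.
    H3 = lambda N: hurwitz(4 * N) + 2 * hurwitz(N)
    s = 28*p**6 - 28*p**5 - 90*p**4 - 35*p**3 - 1
    for t in range(1, isqrt(p) + 1):      # 1 <= t < sqrt(p), p prime
        s -= 128 * t**6 * (4*t**4 - 9*p*t**2 + 7*p**2) * H3(p - t**2)
    return s
\end{verbatim}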
\smallskip
On the contrary, consider
$$\Delta_{26}(\tau)=\eta^{26}(\tau)=q^{13/12}\prod_{m\ge1}(1-q^m)^{26}:=q^{13/12}\sum_{n\ge1}\tau_{26}(n)q^n\;.$$
This is what is called a modular form with complex multiplication. Whatever
the definition, this means that the coefficients $\tau_{26}(p)$ can be
computed in time polynomial in $\log(p)$ using a generalization of
Cornacchia's algorithm, hence very fast.
\begin{exercise} (You need some extra knowledge for this.) In the literature
find an exact formula for $\tau_{26}(p)$ in terms of values of Hecke
\emph{Gr\"ossencharacters}, and program this formula. Use it to compute
some values of $\tau_{26}(p)$ for $p$ prime as large as you can go.
\end{exercise}
\subsection{Computing $|V({\mathbb F}_q)|$ for Quasi-diagonal Hypersurfaces}
We now consider a completely different situation where $|V({\mathbb F}_q)|$ can
be computed without too much difficulty.
As we have seen, in the case of elliptic curves $V$ defined over ${\mathbb Q}$, the
corresponding $L$-function is of \emph{degree $2$}, in other words is of
the form $\prod_p1/(1-a(p)p^{-s}+b(p)p^{-2s})$, where $b(p)\ne0$ for all but
a finite number of $p$. $L$-functions of degree $1$ such as the Riemann
zeta function are essentially $L$-functions of Dirichlet characters, in other
words simple ``twists'' of the Riemann zeta function. $L$-functions of degree
$2$ are believed to be always $L$-functions attached to modular forms,
and $b(p)=\chi(p)p^{k-1}$ for a suitable integer $k$ ($k=2$ for elliptic
curves), the \emph{weight} (note that this is \emph{one more} than the
so-called \emph{motivic weight}). Even though many unsolved questions remain,
this case is also quite well understood. Much more mysterious are $L$-functions
of higher degree, such as $3$ or $4$, and it is interesting to study natural
mathematical objects leading to such functions. A case where this can be done
reasonably easily is the case of diagonal or \emph{quasi-diagonal hypersurfaces}. We study a special case:
\begin{definition} Let $m\ge2$, for $1\le i\le m$ let $a_i\in{\mathbb F}_q^*$ be
nonzero, and let $b\in{\mathbb F}_q$. The quasi-diagonal hypersurface defined by this
data is the hypersurface in $\P^{m-1}$ defined by the projective equation
$$\sum_{1\le i\le m}a_ix_i^m-b\prod_{1\le i\le m}x_i=0\;.$$
When $b=0$, it is a diagonal hypersurface.
\end{definition}
Of course, we could study more general equations, for instance where the
degree is not equal to the number of variables, but we stick to this
special case.
To compute the number of (projective) points on this hypersurface, we need
an additional definition:
\begin{definition} We let $\omega$ be a generator of the group of characters
of ${\mathbb F}_q^*$, either with values in ${\mathbb C}$, or in the $p$-adic field ${\mathbb C}_p$
(do not worry if you are not familiar with this).\end{definition}
Indeed, by a well-known theorem of elementary algebra, the multiplicative
group ${\mathbb F}_q^*$ of a finite field is \emph{cyclic}, so its group of
characters, which is \emph{non-canonically isomorphic} to ${\mathbb F}_q^*$, is also
cyclic, so $\omega$ indeed exists.
It is not difficult to prove the following theorem:
\begin{theorem}\label{thmquasi} Assume that $\gcd(m,q-1)=1$ and $b\ne0$, and
set $B=\prod_{1\le i\le m}(a_i/b)$. If $V$ is the above quasi-diagonal
hypersurface, the number $|V({\mathbb F}_q)|$ of \emph{affine} points on $V$ is given by
$$|V({\mathbb F}_q)|=q^{m-1}+(-1)^{m-1}+\sum_{1\le n\le q-2}\omega^{-n}(B)J_m(\omega^n,\dotsc,\omega^n)\;,$$
where $J_m$ is the $m$-variable Jacobi sum.
\end{theorem}
We will study in great detail below the definition and properties of
$J_m$.
Note that the number of \emph{projective} points is simply
$(|V({\mathbb F}_q)|-1)/(q-1)$.
There also exists a more general theorem with no restriction on
$\gcd(m,q-1)$, which we do not give.
The occurrence of Jacobi sums is very natural and frequent in point counting
results. It is therefore important to look at efficient ways to compute them,
and this is what we do in the next section, where we also give complete
definitions and basic results.
\section{Gauss and Jacobi Sums}
In this long section, we study in great detail Gauss and Jacobi sums.
Most results are standard, and I would like to emphasize
that almost all of them can be proved with little difficulty
by easy algebraic manipulations.
\subsection{Gauss Sums over ${\mathbb F}_q$}\label{sec:gausssum}
We can define and study Gauss and Jacobi sums in two different contexts: first,
and most importantly, over finite fields ${\mathbb F}_q$, with $q=p^f$ a prime power
(note that from now on we write $q=p^f$ and not $q=p^n$).
Second, over the ring ${\mathbb Z}/N{\mathbb Z}$. The two notions coincide when $N=q=p$ is prime,
but the methods and applications are quite different.
To give the definitions over ${\mathbb F}_q$ we need to recall some fundamental (and
easy) results concerning finite fields.
\begin{proposition} Let $p$ be a prime, $f\ge1$, and ${\mathbb F}_q$ be the finite field
with $q=p^f$ elements, which exists and is unique up to isomorphism.
\begin{enumerate}\item The map $\phi$ such that $\phi(x)=x^p$ is a field
isomorphism from ${\mathbb F}_q$ to itself leaving ${\mathbb F}_p$ fixed. It is called the
\emph{Frobenius map}.
\item The extension ${\mathbb F}_q/{\mathbb F}_p$ is a \emph{Galois} (i.e., normal and separable)
field extension, whose Galois group is cyclic of order $f$, generated
by~$\phi$.
\end{enumerate}\end{proposition}
In particular, we can define the \emph{trace} $\Tr_{{\mathbb F}_q/{\mathbb F}_p}$ and the
\emph{norm} $\N_{{\mathbb F}_q/{\mathbb F}_p}$, and we have the formulas (where from now on we
omit ${\mathbb F}_q/{\mathbb F}_p$ for simplicity):
$$\Tr(x)=\sum_{0\le j\le f-1}x^{p^j}\text{\quad and\quad}
\N(x)=\prod_{0\le j\le f-1}x^{p^j}=x^{(p^f-1)/(p-1)}=x^{(q-1)/(p-1)}\;.$$
\begin{definition} Let $\chi$ be a character from ${\mathbb F}_q^*$ to an
algebraically closed field $C$ of characteristic $0$. For $a\in{\mathbb F}_q$
we define the \emph{Gauss sum} ${\mathfrak g}(\chi,a)$ by
$${\mathfrak g}(\chi,a)=\sum_{x\in{\mathbb F}_q^*}\chi(x)\zeta_p^{\Tr(ax)}\;,$$
where $\zeta_p$ is a fixed primitive $p$th root of unity in $C$.
We also set ${\mathfrak g}(\chi)={\mathfrak g}(\chi,1)$.
\end{definition}
Note that strictly speaking this definition depends on the choice
of $\zeta_p$. However, if $\zeta'_p$ is some other primitive $p$th root of
unity we have $\zeta'_p=\zeta_p^k$ for some $k\in{\mathbb F}_p^*$, so
$$\sum_{x\in{\mathbb F}_q^*}\chi(x){\zeta'_p}^{\Tr(ax)}={\mathfrak g}(\chi,ka)\;.$$
In fact it is trivial to see (this follows from the next proposition)
that ${\mathfrak g}(\chi,ka)=\chi^{-1}(k){\mathfrak g}(\chi,a)$.
\begin{definition}\label{defeps} We define $\varepsilon$ to be the trivial character,
i.e., such that $\varepsilon(x)=1$ for all $x\in{\mathbb F}_q^*$. We extend characters $\chi$
to the whole of ${\mathbb F}_q$ by setting $\chi(0)=0$ if $\chi\ne\varepsilon$ and $\varepsilon(0)=1$.
\end{definition}
Note that this apparently innocuous definition of $\varepsilon(0)$ is \emph{crucial}
because it simplifies many formulas. Note also that the definition of
${\mathfrak g}(\chi,a)$ is a sum over $x\in{\mathbb F}_q^*$ and not $x\in{\mathbb F}_q$, while for
Jacobi sums we will use all of ${\mathbb F}_q$.
\begin{exercise}\label{exoorth}\begin{enumerate}\item
Show that ${\mathfrak g}(\varepsilon,a)=-1$ if $a\in{\mathbb F}_q^*$ and ${\mathfrak g}(\varepsilon,0)=q-1$.
\item If $\chi\ne\varepsilon$, show that ${\mathfrak g}(\chi,0)=0$, in other words that
$$\sum_{x\in{\mathbb F}_q}\chi(x)=0$$
(here it does not matter if we sum over ${\mathbb F}_q$ or ${\mathbb F}_q^*$).
\item Deduce that if $\chi_1\ne\chi_2$ then
$$\sum_{x\in{\mathbb F}_q^*}\chi_1(x)\chi_2^{-1}(x)=0\;.$$
This relation is called for evident reasons \emph{orthogonality of
characters}.
\item Dually, show that if $x\ne0,1$ we have $\sum_{\chi}\chi(x)=0$, where
the sum is over all characters of ${\mathbb F}_q^*$.
\end{enumerate}
\end{exercise}
Because of this exercise, if necessary we may assume that $\chi\ne\varepsilon$
and/or that $a\ne0$.
\begin{exercise} Let $\chi$ be a character of ${\mathbb F}_q^*$ of exact order $n$.
\begin{enumerate}\item Show that $n\mid(q-1)$ and that
$\chi(-1)=(-1)^{(q-1)/n}$. In particular, if $n$ is odd and $p>2$ we have
$\chi(-1)=1$.
\item Show that ${\mathfrak g}(\chi,a)\in{\mathbb Z}[\zeta_n,\zeta_p]$, where as usual $\zeta_m$ denotes
a primitive $m$th root of unity.
\end{enumerate}
\end{exercise}
\begin{proposition}\begin{enumerate}\item If $a\ne0$ we have
$${\mathfrak g}(\chi,a)=\chi^{-1}(a){\mathfrak g}(\chi)\;.$$
\item We have
$${\mathfrak g}(\chi^{-1})=\chi(-1)\ov{{\mathfrak g}(\chi)}\;.$$
\item We have
$${\mathfrak g}(\chi^p,a)=\chi^{1-p}(a){\mathfrak g}(\chi,a)\;.$$
\item If $\chi\ne\varepsilon$ we have
$$|{\mathfrak g}(\chi)|=q^{1/2}\;.$$
\end{enumerate}\end{proposition}
\subsection{Jacobi Sums over ${\mathbb F}_q$}
Recall that we have extended characters of ${\mathbb F}_q^*$ by setting $\chi(0)=0$
if $\chi\ne\varepsilon$ and $\varepsilon(0)=1$.
\begin{definition} For $1\le j\le k$ let $\chi_j$ be characters of ${\mathbb F}_q^*$.
We define the Jacobi sum
$$J_k(\chi_1,\dotsc,\chi_k;a)=\sum_{x_1+\cdots+x_k=a}\chi_1(x_1)\cdots\chi_k(x_k)$$
and $J_k(\chi_1,\dotsc,\chi_k)=J_k(\chi_1,\dotsc,\chi_k;1)$.
\end{definition}
Note that, as mentioned above, we do not exclude the cases where some
$x_i=0$, using the convention of Definition \ref{defeps} for $\chi(0)$.
The following easy lemma shows that it is only necessary to study
$J_k(\chi_1,\dotsc,\chi_k)$:
\begin{lemma}\label{lemjactriv} Set $\chi=\chi_1\cdots\chi_k$.
\begin{enumerate}\item If $a\ne0$ we have
$$J_k(\chi_1,\dotsc,\chi_k;a)=\chi(a)J_k(\chi_1,\dotsc,\chi_k)\;.$$
\item If $a=0$, abbreviating $J_k(\chi_1,\dotsc,\chi_k;0)$ to $J_k(0)$ we have
$$J_k(0)=\begin{cases} q^{k-1}&\text{\quad if $\chi_j=\varepsilon$ for all $j$\;,}\\
0&\text{\quad if $\chi\ne\varepsilon$\;,}\\
\chi_k(-1)(q-1)J_{k-1}(\chi_1,\dotsc,\chi_{k-1})&\text{\quad if $\chi=\varepsilon$ and $\chi_k\ne\varepsilon$\;.}\end{cases}$$
\end{enumerate}
\end{lemma}
As we have seen, a Gauss sum ${\mathfrak g}(\chi)$ belongs to the rather large ring
${\mathbb Z}[\zeta_{q-1},\zeta_p]$ (and in general not to a smaller ring). The advantage of
Jacobi sums is that they belong to the smaller ring ${\mathbb Z}[\zeta_{q-1}]$, and as
we are going to see, that they are closely related to Gauss sums. Thus, when
working \emph{algebraically}, it is almost always better to use Jacobi sums
instead of Gauss sums. On the other hand, when working \emph{analytically}
(for instance in ${\mathbb C}$ or ${\mathbb C}_p$), it may be better to work with Gauss sums:
we will see below the use of root numbers (suggested by Louboutin), and of the
Gross--Koblitz formula.
Note that $J_1(\chi_1)=1$. Outside of this trivial case, the close link between
Gauss and Jacobi sums is given by the following easy proposition, whose
apparently technical statement is due only to the possible presence of the trivial character $\varepsilon$:
if none of the $\chi_j$ nor their product is trivial, we have the simple formula
given by (3).
\begin{proposition}\label{jacgaufq} Denote by $t$ the number of $\chi_j$ equal
to the trivial character $\varepsilon$, and as above set $\chi=\chi_1\dotsc\chi_k$.
\begin{enumerate}\item If $t=k$ then $J_k(\chi_1,\dots,\chi_k)=q^{k-1}$.
\item If $1\le t\le k-1$ then $J_k(\chi_1,\dots,\chi_k)=0$.
\item If $t=0$ and $\chi\ne\varepsilon$ then
$$J_k(\chi_1,\dotsc,\chi_k)=\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)}{{\mathfrak g}(\chi_1\cdots\chi_k)}=\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)}{{\mathfrak g}(\chi)}\;.$$
\item If $t=0$ and $\chi=\varepsilon$ then
\begin{align*}J_k(\chi_1,\dotsc,\chi_k)&=-\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)}{q}\\
&=-\chi_k(-1)\dfrac{{\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_{k-1})}{{\mathfrak g}(\chi_1\cdots\chi_{k-1})}=-\chi_k(-1)J_{k-1}(\chi_1,\dotsc,\chi_{k-1})\;.\end{align*}
In particular, in this case we have
$${\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)=\chi_k(-1)qJ_{k-1}(\chi_1,\dotsc,\chi_{k-1})\;.$$
\end{enumerate}
\end{proposition}
\begin{corollary}\label{corjacrecur} With the same notation, assume that
$k\ge2$ and all the $\chi_j$ are nontrivial. Setting
$\psi=\chi_1\cdots\chi_{k-1}$, we have the following recursive formula:
$$J_k(\chi_1,\dotsc,\chi_k)=\begin{cases}J_{k-1}(\chi_1,\dotsc,\chi_{k-1})J_2(\psi,\chi_k)&\text{\quad if $\psi\ne\varepsilon$\;,}\\
\chi_{k-1}(-1)qJ_{k-2}(\chi_1,\dotsc,\chi_{k-2})&\text{\quad if $\psi=\varepsilon$\;.}\end{cases}$$
\end{corollary}
The point of this recursion is that the definition of a $k$-fold Jacobi sum
$J_k$ involves a sum over $q^{k-1}$ values for $x_1,\dotsc,x_{k-1}$, the last
variable $x_k$ being
determined by $x_k=1-x_1-\cdots-x_{k-1}$, so neglecting the time to compute
the $\chi_j(x_j)$ and their product (which is a reasonable assumption), using
the definition takes time $O(q^{k-1})$. On the other hand, using the above
recursion boils down at worst to computing $k-1$ Jacobi sums $J_2$, for a
total time of $O((k-1)q)$. Nonetheless, we will see that in some cases it is
still better to use directly Gauss sums and formula (3) of the proposition.
Since Jacobi sums $J_2$ are the simplest and the above recursion in fact shows
that one can reduce to $J_2$, we will drop the subscript $2$ and simply write
$J(\chi_1,\chi_2)$. Note that
$$J(\chi_1,\chi_2)=\sum_{x\in{\mathbb F}_q}\chi_1(x)\chi_2(1-x)\;,$$
where the sum is over the whole of ${\mathbb F}_q$ and \emph{not} ${\mathbb F}_q\setminus\{0,1\}$
(which makes a difference only if one of the $\chi_i$ is trivial). More
precisely it is clear that $J(\varepsilon,\varepsilon)=q$, and that if $\chi\ne\varepsilon$
we have $J(\chi,\varepsilon)=\sum_{x\in{\mathbb F}_q}\chi(x)=0$, which are special cases of
Proposition \ref{jacgaufq}.
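For experimentation, $J(\omega^{r_1},\omega^{r_2})$ over ${\mathbb F}_p$ can be
evaluated directly from the definition. The following Python sketch (for
small primes only, using floating-point roots of unity) takes a primitive
root $g$ of $p$ and respects the convention $\chi(0)=0$ for $\chi\ne\varepsilon$
and $\varepsilon(0)=1$:
\begin{verbatim}
import cmath

def jacobi_sum(p, g, r1, r2):
    # J(omega^r1, omega^r2) over F_p, where omega(g) = zeta_{p-1}.
    dlog, x = {}, 1
    for u in range(p - 1):                # discrete logarithm table
        dlog[x] = u
        x = x * g % p
    zeta = cmath.exp(2j * cmath.pi / (p - 1))
    def chi(r, y):                        # omega^r(y), with the 0-convention
        y %= p
        if y == 0:
            return 1.0 if r % (p - 1) == 0 else 0.0
        return zeta ** ((r * dlog[y]) % (p - 1))
    return sum(chi(r1, a) * chi(r2, 1 - a) for a in range(p))

# Example: abs(jacobi_sum(13, 2, 1, 1))**2 is 13 up to rounding errors.
\end{verbatim}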
\begin{exercise} Let $n\mid(q-1)$ be the order of $\chi$. Prove that
${\mathfrak g}(\chi)^n\in{\mathbb Z}[\zeta_n]$.
\end{exercise}
\begin{exercise} Assume that none of the $\chi_j$ is equal to $\varepsilon$, but that
their product $\chi$ is equal to $\varepsilon$. Prove that (using the same notation
as in Lemma \ref{lemjactriv}):
$$J_k(0)=\left(1-\dfrac{1}{q}\right){\mathfrak g}(\chi_1)\cdots{\mathfrak g}(\chi_k)\;.$$
\end{exercise}
\begin{exercise} Prove the following reciprocity formula for Jacobi sums:
if the $\chi_j$ are all nontrivial and $\chi=\chi_1\cdots\chi_k$, we have
$$J_k(\chi_1^{-1},\dotsc,\chi_k^{-1})=\dfrac{q^{k-1-\delta}}{J_k(\chi_1,\dotsc,\chi_k)}\;,$$
where $\delta=1$ if $\chi=\varepsilon$, and otherwise $\delta=0$.
\end{exercise}
\subsection{Applications of $J(\chi,\chi)$}
In this short subsection we give without proof a couple of
applications of the special Jacobi sums $J(\chi,\chi)$. Once again
the proofs are not difficult. We begin by the following result,
which is a special case of the Hasse--Davenport relations that we
will give below.
\begin{lemma} Assume that $q$ is odd, and let $\rho$ be the unique
character of order $2$ on ${\mathbb F}_q^*$. For any nontrivial character
$\chi$ we have
$$\chi(4)J(\chi,\chi)=J(\chi,\rho)\;.$$
Equivalently, if $\chi\ne\rho$ we have
$${\mathfrak g}(\chi){\mathfrak g}(\chi\rho)=\chi^{-1}(4){\mathfrak g}(\rho){\mathfrak g}(\chi^2)\;.$$
\end{lemma}
\begin{exercise}\begin{enumerate}
\item Prove this lemma.
\item Show that ${\mathfrak g}(\rho)^2=(-1)^{(q-1)/2}q$.
\end{enumerate}
\end{exercise}
\begin{proposition}\label{propjac34}\begin{enumerate}
\item Assume that $q\equiv1\pmod4$, let $\chi$ be one of the two
characters of order $4$ on ${\mathbb F}_q^*$, and write $J(\chi,\chi)=a+bi$.
Then $q=a^2+b^2$, $2\mid b$, and $a\equiv-1\pmod4$.
\item Assume that $q\equiv1\pmod3$, let $\chi$ be one of the two
characters of order $3$ on ${\mathbb F}_q^*$, and write $J(\chi,\chi)=a+b\rho$,
where $\rho=\zeta_3$ is a primitive cube root of unity.
Then $q=a^2-ab+b^2$, $3\mid b$, $a\equiv-1\pmod3$, and
$a+b\equiv q-2\pmod{9}$.
\item Let $p\equiv2\pmod3$, $q=p^{2m}\equiv1\pmod3$, and let $\chi$
be one of the two characters of order $3$ on ${\mathbb F}_q^*$. We have
$$J(\chi,\chi)=(-1)^{m-1}p^m=(-1)^{m-1}q^{1/2}\;.$$
\end{enumerate}\end{proposition}
\begin{corollary}\begin{enumerate}
\item (Fermat.) Any prime $p\equiv1\pmod4$ is a sum of two squares.
\item Any prime $p\equiv1\pmod3$ is of the form $a^2-ab+b^2$ with
$3\mid b$, or equivalently $4p=(2a-b)^2+27(b/3)^2$ is of the form
$c^2+27d^2$.
\item (Gauss.) $p\equiv1\pmod3$ is itself of the form $p=u^2+27v^2$
if and only if $2$ is a cube in ${\mathbb F}_p^*$.\end{enumerate}\end{corollary}
\begin{exercise} Assuming the proposition, prove the corollary.
\end{exercise}
\subsection{The Hasse--Davenport Relations}
All the results that we have given up to now on Gauss and Jacobi sums
have rather simple proofs, which is one of the reasons we have not
given them. Perhaps surprisingly, there exist other important
relations which are considerably more difficult to prove. Before
giving them, it is instructive to explain how one can ``guess''
their existence, if one knows the classical theory of the gamma
function $\Gamma(s)$ (of course skip this part if you do not know it,
since it would only confuse you, or read the appendix).
Recall that $\Gamma(s)$ is defined (at least for $\Re(s)>0$) by
$$\Gamma(s)=\int_0^\infty e^{-t}t^s dt/t\;,$$ and the beta function
$B(a,b)$ by $B(a,b)=\int_0^1 t^{a-1}(1-t)^{b-1}\,dt$.
The function $e^{-t}$ transforms sums into products, so is an
\emph{additive} character, analogous to $\zeta_p^t$. The function
$t^s$ transforms products into products, so is a multiplicative
character, analogous to $\chi(t)$ ($dt/t$ is simply the Haar
invariant measure on ${\mathbb R}_{>0}$). Thus $\Gamma(s)$ is a continuous
analogue of the Gauss sum ${\mathfrak g}(\chi)$.
Similarly, since $J(\chi_1,\chi_2)=\sum_t\chi_1(t)\chi_2(1-t)$, we
see the similarity with the function $B$. Thus, it does not come
too much as a surprise that analogous formulas are valid on both
sides. To begin with, it is not difficult to show that
$B(a,b)=\Gamma(a)\Gamma(b)/\Gamma(a+b)$, exactly analogous to
$J(\chi_1,\chi_2)={\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)/{\mathfrak g}(\chi_1\chi_2)$.
The analogue of $\Gamma(s)\Gamma(-s)=-\pi/(s\sin(s\pi))$ is
$${\mathfrak g}(\chi){\mathfrak g}(\chi^{-1})=\chi(-1)q\;.$$
But it is well-known that the gamma function has a duplication formula
$\Gamma(s)\Gamma(s+1/2)=2^{1-2s}\Gamma(1/2)\Gamma(2s)$, and more generally
a multiplication (or distribution) formula. This duplication
formula is clearly the analogue of the formula
$${\mathfrak g}(\chi){\mathfrak g}(\chi\rho)=\chi^{-1}(4){\mathfrak g}(\rho){\mathfrak g}(\chi^2)$$
given above. The \emph{Hasse--Davenport product relation} is
the analogue of the distribution formula for the gamma function.
\begin{theorem} Let $\rho$ be a character of exact order $m$ dividing
$q-1$. For any character $\chi$ of ${\mathbb F}_q^*$ we have
$$\prod_{0\le a<m}{\mathfrak g}(\chi\rho^a)=\chi^{-m}(m)k(p,f,m)q^{(m-1)/2}{\mathfrak g}(\chi^m)\;,$$
where $k(p,f,m)$ is the fourth root of unity given by
$$k(p,f,m)=\begin{cases}
\leg{p}{m}^f&\text{ if $m$ is odd,}\\
(-1)^{f+1}\leg{(-1)^{m/2+1}m/2}{p}^f\leg{-1}{p}^{f/2}&\text{ if $m$ is even,}\\
\end{cases}$$
where $(-1)^{f/2}$ is to be understood as $i^f$ when $f$ is odd.
\end{theorem}
\begin{remark} For some reason, in the literature this formula is usually
stated in the weaker form where the constant $k(p,f,m)$ is not
given explicitly.\end{remark}
Contrary to the proof of the distribution formula for the gamma
function, the proof of this theorem is quite long. There are
essentially two completely different proofs: one using classical
algebraic number theory, and one using $p$-adic analysis. The latter
is simpler and gives directly the value of $k(p,f,m)$. See
Section 3.7.2 of \cite{Coh3} and Section 11.7.4 of \cite{Coh4} for both
detailed proofs.
\smallskip
Gauss sums satisfy another type of nontrivial relation, also due to
Hasse--Davenport, the so-called \emph{lifting relation}, as follows:
\begin{theorem} Let ${\mathbb F}_{q^n}/{\mathbb F}_q$ be an extension of finite fields,
let $\chi$ be a character of ${\mathbb F}_q^*$, and define the \emph{lift}
of $\chi$ to ${\mathbb F}_{q^n}$ by the formula
$\chi^{(n)}=\chi\circ\N_{{\mathbb F}_{q^n}/{\mathbb F}_q}$. We have
$${\mathfrak g}(\chi^{(n)})=(-1)^{n-1}{\mathfrak g}(\chi)^n\;.$$
\end{theorem}
This relation is essential in the initial proof of the Weil conjectures
for diagonal hypersurfaces done by Weil himself. This is not surprising,
since we have seen in Theorem \ref{thmquasi} that $|V({\mathbb F}_q)|$ is closely
related to Jacobi sums, hence also to Gauss sums.
\section{Practical Computations of Gauss and Jacobi Sums}
As above, let $\omega$ be a character of order exactly $q-1$, so that
$\omega$ is a generator of the group of characters of ${\mathbb F}_q^*$.
For notational simplicity, we will write $J(r_1,\dotsc,r_k)$ instead of
$J(\omega^{r_1},\dotsc,\omega^{r_k})$. Let us consider the specific example of
efficient computation of the quantity
$$S(q;z)=\sum_{0\le n\le q-2}\omega^{-n}(z)J_5(n,n,n,n,n)\;,$$
which occurs in the computation of the Hasse--Weil zeta function of
a quasi-diagonal threefold, see Theorem \ref{thmquasi}.
\subsection{Elementary Methods}
By the recursion of Corollary \ref{corjacrecur}, we have \emph{generically}
(i.e., except for special values of $n$ which will be considered separately):
$$J_5(n,n,n,n,n)=J(n,n)J(2n,n)J(3n,n)J(4n,n)\;.$$
Since $J(n,an)=\sum_{x}\omega^n(x)\omega^{an}(1-x)$, the cost of
computing $J_5$ as written is $\Os(q)$, where here and after we
write $\Os(q^\alpha)$ to mean $O(q^{\alpha+\varepsilon})$ for all $\varepsilon>0$
(soft-$O$ notation). Thus computing $S(q;z)$ by this direct
method requires time $\Os(q^2)$.
We can however do much better. Since the values of the characters are all in
${\mathbb Z}[\zeta_{q-1}]$, we work in this ring. In fact, even better, we work in the
ring with zero divisors $R={\mathbb Z}[X]/(X^{q-1}-1)$, together with the natural
surjective map sending the class of $X$ in $R$ to $\zeta_{q-1}$. Indeed, let $g$
be the generator of ${\mathbb F}_q^*$ such that $\omega(g)=\zeta_{q-1}$. We have,
again \emph{generically}:
$$J(n,an)=\sum_{1\le u\le q-2}\omega^n(g^u)\omega^{an}(1-g^u)
=\sum_{1\le u\le q-2}\zeta_{q-1}^{nu+an\log_g(1-g^u)}\;,$$
where $\log_g$ is the \emph{discrete logarithm} to base $g$ defined modulo
$q-1$, i.e., such that $g^{\log_g(x)}=x$. If $(q-1)\nmid n$ but $(q-1)\mid an$
we have $\omega^{an}=\varepsilon$ so we must add the contribution of $u=0$, which is $1$,
and if $(q-1)\mid n$ we must add the contribution of $u=0$ \emph{and} of
$x=0$, which is $2$ (recall the \emph{essential} convention that
$\chi(0)=0$ if $\chi\ne\varepsilon$ and $\varepsilon(0)=1$, see Definition \ref{defeps}).
In other words, if we set
$$P_a(X)=\sum_{1\le u\le q-2}X^{(u+a\log_g(1-g^u))\bmod{(q-1)}}\in R\;,$$
we have
$$J(n,an)=P_a(\zeta_{q-1}^n)+\begin{cases}
0&\text{\quad if $(q-1)\nmid an$\;,}\\
1&\text{\quad if $(q-1)\mid an$ but $(q-1)\nmid n$\;, and}\\
2&\text{\quad if $(q-1)\mid n$\;.}\end{cases}$$
Thus, if we set finally
$$P(X)=P_1(X)P_2(X)P_3(X)P_4(X)\bmod{(X^{q-1}-1)}\in R\;,$$
we have (still generically) $J_5(n,n,n,n,n)=P(\zeta_{q-1}^n)$.
Assume for the moment that this is true for all $n$ (we will correct this
below), let $\ell=\log_g(z)$, so that $\omega(z)=\omega(g^\ell)=\zeta_{q-1}^\ell$,
and write
$$P(X)=\sum_{0\le j\le q-2}a_jX^j\;.$$
We thus have
$$\omega^{-n}(z)J_5(n,n,n,n,n)=\zeta_{q-1}^{-n\ell}\sum_{0\le j\le q-2}a_j\zeta_{q-1}^{nj}
=\sum_{0\le j\le q-2}a_j\zeta_{q-1}^{n(j-\ell)}\;,$$
hence
\begin{align*}S(q;z)&=\sum_{0\le n\le q-2}\omega^{-n}(z)J_5(n,n,n,n,n)
=\sum_{0\le j\le q-2}a_j\sum_{0\le n\le q-2}\zeta_{q-1}^{n(j-\ell)}\\
&=(q-1)\sum_{0\le j\le q-2,\ j\equiv\ell\pmod{q-1}}a_j=(q-1)a_{\ell}\;.\end{align*}
The result is thus immediate as soon as we know the coefficients of the
polynomial $P$. Since there exist fast methods for computing discrete
logarithms, this leads to a $\Os(q)$ method for computing $S(q;z)$.
\smallskip
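The generic computation just described is short to implement; here is a
Python sketch (with a naive $O(q^2)$ cyclic convolution for clarity,
whereas the stated $\Os(q)$ bound assumes FFT-based multiplication and
fast discrete logarithms). The correction terms for the special $n$,
worked out next, are \emph{not} included:
\begin{verbatim}
# Generic part of S(q; z) = (q-1) a_ell, q = p prime, g a primitive root,
# z not divisible by p.
def S_generic(p, g, z):
    q1 = p - 1
    dlog, x = [0] * p, 1
    for u in range(q1):                   # discrete logarithm table
        dlog[x] = u
        x = x * g % p
    def P(a):                             # coefficient vector of P_a(X)
        coef, gu = [0] * q1, g
        for u in range(1, q1):            # u = 1, ..., q-2
            coef[(u + a * dlog[(1 - gu) % p]) % q1] += 1
            gu = gu * g % p
        return coef
    def mul(A, B):                        # product in R = Z[X]/(X^(q-1) - 1)
        C = [0] * q1
        for i, ai in enumerate(A):
            if ai:
                for j, bj in enumerate(B):
                    C[(i + j) % q1] += ai * bj
        return C
    prod = P(1)
    for a in (2, 3, 4):
        prod = mul(prod, P(a))
    return q1 * prod[dlog[z % p]]         # (q-1) a_ell with ell = log_g(z)
\end{verbatim}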
To obtain the correct formula, we need to adjust for the special $n$
for which $J_5(n,n,n,n,n)$ is not equal to $J(n,n)J(n,2n)J(n,3n)J(n,4n)$,
namely those $n$ for which $(q-1)\mid an$ for some $a$ such that
$2\le a\le 5$. This is easy but boring, and should be
skipped on first reading.
\begin{enumerate}\item For $n=0$ we have $J_5(n,n,n,n,n)=q^4$, and on the
other hand $P(1)=(J(0,0)-2)^4=(q-2)^4$, so the correction term is
$q^4-(q-2)^4=8(q-1)(q^2-2q+2)$.
\item For $n=(q-1)/2$ (if $q$ is odd) we have
$$J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})={\mathfrak g}(\omega^n)^4={\mathfrak g}(\rho)^4$$
since $5n\equiv n\pmod{q-1}$, where $\rho$ is the character of order $2$,
and we have ${\mathfrak g}(\rho)^2=(-1)^{(q-1)/2}q$, so $J_5(n,n,n,n,n)=q^2$.
On the other hand
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)(J(n,2n)-1)J(n,3n)(J(n,4n)-1)\\
&=J(\rho,\rho)(0-1)J(\rho,\rho)(0-1)=J(\rho,\rho)^2={\mathfrak g}(\rho)^4/q^2=1\;,\end{align*}
so the correction term is $\rho(z)(q^2-1)$.
\item For $n=\pm(q-1)/3$ (if $q\equiv1\pmod3$), writing $\chi_3=\omega^{(q-1)/3}$,
which is one of the two cubic characters, we have
\begin{align*}J_5(n,n,n,n,n)&={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{-n})\\
&={\mathfrak g}(\omega^n)^6/({\mathfrak g}(\omega^{-n}){\mathfrak g}(\omega^n))={\mathfrak g}(\omega^n)^6/q\\
&=qJ(n,n)^2\end{align*}
(check all this). On the other hand
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)J(n,2n)(J(n,3n)-1)J(n,4n)\\
&=\dfrac{{\mathfrak g}(\omega^n)^2}{{\mathfrak g}(\omega^{2n})}\dfrac{{\mathfrak g}(\omega^n){\mathfrak g}(\omega^{2n})}{q}\dfrac{{\mathfrak g}(\omega^n)^2}{{\mathfrak g}(\omega^{2n})}\\
&=\dfrac{{\mathfrak g}(\omega^n)^5}{q{\mathfrak g}(\omega^{-n})}=\dfrac{{\mathfrak g}(\omega^n)^6}{q^2}=J(n,n)^2\;,\end{align*}
so the correction term is
$2(q-1)\Re(\chi_3^{-1}(z)J(\chi_3,\chi_3)^2)$.
\item For $n=\pm(q-1)/4$ (if $q\equiv1\pmod4$), writing $\chi_4=\omega^{(q-1)/4}$,
which is one of the two quartic characters, we have
$$J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})={\mathfrak g}(\omega^n)^4
=\omega^n(-1)qJ_3(n,n,n)\;.$$
In addition, we have
$$J_3(n,n,n)=J(n,n)J(n,2n)=\omega^n(4)J(n,n)^2=\rho(2)J(n,n)^2\;,$$
so
$$J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^4=\omega^n(-1)q\rho(2)J(n,n)^2\;.$$
Note that
$$\chi_4(-1)=\chi_4^{-1}(-1)=\rho(2)=(-1)^{(q-1)/4}\;,$$
(Exercise: prove it!), so that $\omega^n(-1)\rho(2)=1$ and the above
simplifies to $J_5(n,n,n,n,n)=qJ(n,n)^2$.
On the other hand,
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)J(n,2n)J(n,3n)(J(n,4n)-1)\\
&=\dfrac{{\mathfrak g}(\omega^n)^2}{{\mathfrak g}(\omega^{2n})}\dfrac{{\mathfrak g}(\omega^n){\mathfrak g}(\omega^{2n})}{{\mathfrak g}(\omega^{3n})}
\dfrac{{\mathfrak g}(\omega^n){\mathfrak g}(\omega^{3n})}{q}\\
&=\dfrac{{\mathfrak g}(\omega^n)^4}{q}=\omega^n(-1)\rho(2)J(n,n)^2=J(n,n)^2\end{align*}
as above, so the correction term is
$2(q-1)\Re(\chi_4^{-1}(z)J(\chi_4,\chi_4)^2)$.
\item For $n=a(q-1)/5$ with $1\le a\le 4$ (if $q\equiv1\pmod5$), writing
$\chi_5=\omega^{(q-1)/5}$ we have
$J_5(n,n,n,n,n)=-{\mathfrak g}(\chi_5^a)^5/q$, while abbreviating
${\mathfrak g}(\omega^{mn})={\mathfrak g}(\chi_5^{am})$ to $g(mn)$ we have
\begin{align*}P(\zeta_{q-1}^n)&=J(n,n)J(n,2n)J(n,3n)J(n,4n)\\
&=-\dfrac{g(n)^2}{g(2n)}\dfrac{g(n)g(2n)}{g(3n)}\dfrac{g(n)g(3n)}{g(4n)}\dfrac{g(n)g(4n)}{q}\\
&=-\dfrac{g(n)^5}{q}\;,\end{align*}
so there is no correction term.\end{enumerate}
Summarizing, we have shown the following:
\begin{proposition} Let $S(q;z)=\sum_{0\le n\le q-2}\omega^{-n}(z)J_5(n,n,n,n,n)$.
Let $\ell=\log_g(z)$ and let $P(X)=\sum_{0\le j\le q-2}a_jX^j$ be the polynomial
defined above. We have
$$S(q;z)=(q-1)(T_1+T_2+T_3+T_4+a_{\ell})\;,$$
where $T_m=0$ if $m\nmid(q-1)$ and otherwise
\begin{align*}T_1&=8(q^2-2q+2)\;,\quad T_2=\rho(z)(q+1)\;,\\
T_3&=2\Re(\chi_3^{-1}(z)J(\chi_3,\chi_3)^2)\;,\text{\quad and\quad}T_4=2\Re(\chi_4^{-1}(z)J(\chi_4,\chi_4)^2)\;,\end{align*}
with the above notation.
\end{proposition}
Note that thanks to Proposition \ref{propjac34}, these supplementary Jacobi
sums $J(\chi_3,\chi_3)$ and $J(\chi_4,\chi_4)$ can be computed in logarithmic
time using Cornacchia's algorithm (this is not quite true, one needs an
additional slight computation, do you see why?).
Note also for future reference that the above proposition \emph{proves} that
$(q-1)\mid S(q;z)$, which is not clear from the definition.
\subsection{Sample Implementations}
For simplicity, assume that $q=p$ is prime. I have written simple
implementations of the computation of $S(q;z)$. In the first implementation,
I use the na\"\i ve formula expressing $J_5$ in terms of $J(n,an)$ and sum on
$n$, except that I use the reciprocity formula which gives
$J_5(-n,-n,-n,-n,-n)$ in terms of $J_5(n,n,n,n,n)$ to sum only over $(p-1)/2$
terms instead of $p-1$. Of course to avoid recomputation, I precompute
a discrete logarithm table.
The timings for $p\approx 10^k$ for $k=2$, $3$, and $4$ are
$0.03$, $1.56$, and $149$ seconds respectively, compatible with $\Os(q^2)$
time.
\smallskip
On the other hand, implementing in a straightforward manner the algorithm
given by the above proposition gives timings for $p\approx 10^k$ for
$k=2$, $3$, $4$, $5$, $6$, and $7$ of $0$, $0.02$, $0.08$, $0.85$, $9.90$,
and $123$ seconds respectively, of course much faster and compatible with
$\Os(q)$ time.
The main drawback of this method is that it requires $O(q)$ storage: it is
thus applicable only for $q\le 10^8$, say, which is more than sufficient
for many applications, but of course not for all. For instance, the case
$p\approx 10^7$ mentioned above already required a few gigabytes of storage.
\subsection{Using Theta Functions}
A completely different way of computing Gauss and Jacobi sums has been
suggested by S.~Louboutin. It is related to the theory of $L$-functions of
Dirichlet characters that we study below, and in our context is valid only
for $q=p$ prime, not for prime powers, but in the context of Dirichlet
characters it is valid in general (simply replace $p$ by $N$ and ${\mathbb F}_p$
by ${\mathbb Z}/N{\mathbb Z}$ in the following formulas when $\chi$ is a primitive character
of conductor $N$, see below for definitions):
\begin{definition} Let $\chi$ be a character on ${\mathbb F}_p$, and let $e=0$ or $1$
be such that $\chi(-1)=(-1)^e$. The \emph{theta function} associated to
$\chi$ is the function defined on the upper half-plane by
$$\Theta(\chi,\tau)=2\sum_{m\ge1}m^e\chi(m)e^{i\pi m^2\tau/p}\;.$$
\end{definition}
The main property of this function, which is a direct consequence of the
\emph{Poisson summation formula}, and is equivalent to the functional
equation of Dirichlet $L$-functions, is as follows:
\begin{proposition} We have the functional equation
$$\Theta(\chi,-1/\tau)=\omega(\chi)(\tau/i)^{(2e+1)/2}\Theta(\chi^{-1},\tau)\;,$$
with the principal determination of the square root, and where
$\omega(\chi)={\mathfrak g}(\chi)/(i^ep^{1/2})$ is the so-called \emph{root number}.
\end{proposition}
\begin{corollary} If $\chi(-1)=1$ we have
$${\mathfrak g}(\chi)=p^{1/2}\dfrac{\sum_{m\ge1}\chi(m)\exp(-\pi m^2/pt)}{t^{1/2}\sum_{m\ge1}\chi^{-1}(m)\exp(-\pi m^2t/p)}$$
and if $\chi(-1)=-1$ we have
$${\mathfrak g}(\chi)=p^{1/2}i\dfrac{\sum_{m\ge1}\chi(m)m\exp(-\pi m^2/pt)}{t^{3/2}\sum_{m\ge1}\chi^{-1}(m)m\exp(-\pi m^2t/p)}$$
for any $t$ such that the denominator does not vanish.
\end{corollary}
Note that the optimal choice of $t$ is $t=1$, and (at least for $p$ prime)
it seems that the denominator never vanishes (there are counterexamples
when $p$ is not prime, but apparently only four, see \cite{Coh-Zag}).
It follows from this corollary that ${\mathfrak g}(\chi)$ can be computed numerically as
a complex number in $\Os(p^{1/2})$ operations. Thus,
if $\chi_1$ and $\chi_2$ are nontrivial characters such that
$\chi_1\chi_2\ne\varepsilon$ (otherwise $J(\chi_1,\chi_2)$ is trivial to compute),
the formula $J(\chi_1,\chi_2)={\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)/{\mathfrak g}(\chi_1\chi_2)$ allows
the computation of $J_2$ \emph{numerically} as a complex number in
$\Os(p^{1/2})$ operations.
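Here is a minimal Python sketch of this numerical method for an \emph{even}
nontrivial character $\chi=\omega^r$ (i.e., $r$ even), taking $t=1$ and
truncating the series as in the error analysis below; the odd case is
entirely similar using the second formula of the corollary:
\begin{verbatim}
import cmath
from math import exp, log, pi, sqrt

def gauss_sum_theta(p, g, r):
    # g(omega^r) for even nontrivial omega^r, corollary above with t = 1.
    q1 = p - 1
    dlog, x = {}, 1                       # full table for simplicity; only
    for u in range(q1):                   # m <= M is actually needed
        dlog[x] = u
        x = x * g % p
    zeta = cmath.exp(2j * pi / q1)
    chi = lambda s, m: zeta ** ((s * dlog[m % p]) % q1)
    M = int(sqrt(p * (1.5 + 0.7 * log(p)))) + 1   # truncation limit
    # (M < p for the primes considered here, so m is never divisible by p)
    num = sum(chi(r, m) * exp(-pi * m * m / p) for m in range(1, M))
    den = sum(chi(-r, m) * exp(-pi * m * m / p) for m in range(1, M))
    return sqrt(p) * num / den
\end{verbatim}
One can check on small primes that $|{\mathfrak g}(\omega^r)|^2\approx p$, as it must be.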
To recover $J$ itself as an algebraic number we could compute all its
conjugates, but this would require more time than the direct computation of
$J$; or we could use the LLL algorithm, which although fast, would also
require some time. In practice, to perform computations such as that of
the sum $S(q;z)$ above, we only need
$J$ to sufficient accuracy: we perform all the elementary operations in
${\mathbb C}$, and since we know that at the end the result will be an integer
for which we know an upper bound, we thus obtain a proven exact result.
More generally, we have generically $J_5(n,n,n,n,n)={\mathfrak g}(\omega^n)^5/{\mathfrak g}(\omega^{5n})$,
which can thus be computed in $\Os(p^{1/2})$ operations. It follows that
$S(p;z)$ can be computed in $\Os(p^{3/2})$ operations, which is slower than
the elementary method seen above. The main advantage is that we do not need
much storage: more precisely, we want to compute $S(p;z)$ accurately
enough to recognize it as an integer, so a priori with an absolute error
less than $0.5$. However, we have seen that $(p-1)\mid S(p;z)$:
it is thus sufficient to compute $S(p;z)$ with an absolute error less than
$(p-1)/2$, hence at worst each of the $p-1$ terms in the sum with an
absolute error less than $1/2$. Since generically $|J_5(n,n,n,n,n)|=p^2$, we need a relative
error less than $1/(2p^2)$, so less than $1/(10p^2)$ on each Gauss sum.
In practice of course this is overly pessimistic, but it does not matter.
For $p\le 10^9$, this means that $19$ decimal digits suffice.
The main term in the theta function computation (with $t=1$) is
$\exp(-\pi m^2/p)$, so we need $\exp(-\pi m^2/p)\le 1/(100p^2)$, say, in other
words $\pi m^2/p\ge 4.7+2\log(p)$, so $m^2\ge p(1.5+0.7\log(p))$.
This means that we will need the values of $\omega(m)$ only up to this limit,
of the order of $O((p\log(p))^{1/2})$, considerably smaller than $p$.
Thus, instead of computing a full discrete logarithm table, which takes
some time but more importantly a lot of memory, we compute only discrete
logarithms up to that limit, using specific algorithms for doing so
which exist in the literature, some of which are quite easy.
A straightforward implementation of this method gives timings for
$k=2$, $3$, $4$, and $5$ of $0.02$, $0.40$, $16.2$, and $663$ seconds
respectively, compatible with $\Os(p^{3/2})$ time. This is faster than
the completely na\"\i ve method, but slower than the method explained above.
Its advantage is that it requires much less memory. For $p$ around $10^7$,
however, it is much too slow so this method is rather useless. We will see
that its usefulness is mainly in the context where it was invented, i.e.,
for $L$-functions of Dirichlet characters.
\subsection{Using the Gross--Koblitz Formula}
This section is of a higher mathematical level than the
preceding ones, but is very important since it gives the best method for
computing Gauss (and Jacobi) sums. We refer to Sections 11.6 and 11.7 of
\cite{Coh4} for complete details, and urge the reader to try to understand
what follows.
In the preceding sections, we have considered Gauss sums as belonging to a
number of different rings: the ring ${\mathbb Z}[\zeta_{q-1},\zeta_p]$ or the field ${\mathbb C}$ of
complex numbers, and for Jacobi sums the ring ${\mathbb Z}[\zeta_{q-1}]$, but also the
ring ${\mathbb Z}[X]/(X^{q-1}-1)$, and again the field ${\mathbb C}$.
In number theory there exist other algebraically closed fields which are
useful in many contexts, the fields ${\mathbb C}_\ell$ of $\ell$-adic numbers, one
for each prime number $\ell$. These fields come with a topology and analysis
which are rather special: one of
the main things to remember is that a sequence of elements tends to $0$
if and only if the $\ell$-adic valuation of the elements (the largest exponent
of $\ell$ dividing them) tends to infinity. For instance $2^m$ tends to $0$
in ${\mathbb C}_2$, but in no other ${\mathbb C}_{\ell}$, and $15^m$ tends to $0$ in
${\mathbb C}_3$ and in ${\mathbb C}_5$.
The most important subrings of ${\mathbb C}_{\ell}$ are the ring ${\mathbb Z}_{\ell}$
of $\ell$-adic integers, the elements of which can be written as
$x=a_0+a_1\ell+\cdots+a_k\ell^k+\cdots$ with $a_j\in[0,\ell-1]$, and its field
of fractions ${\mathbb Q}_{\ell}$, which contains ${\mathbb Q}$, whose elements can be
represented in a similar way as $x=a_{-m}\ell^{-m}+a_{-(m-1)}\ell^{-(m-1)}+\cdots+a_{-1}\ell^{-1}+a_0+a_1\ell+\cdots.$
In dealing with Gauss and Jacobi sums over ${\mathbb F}_q$ with $q=p^f$,
the only ${\mathbb C}_{\ell}$ which is of use for us is the one with $\ell=p$
(in highbrow language, we are going to use implicitly \emph{crystalline}
$p$-adic methods, while for $\ell\ne p$ it would be \emph{\'etale} $\ell$-adic
methods).
Apart from this relatively strange topology, many definitions and results
valid on ${\mathbb C}$ have analogues in ${\mathbb C}_p$. The main object that we will
need in our context is the analogue of the gamma function, naturally called
the $p$-adic gamma function, in the present case due to Morita (there is
another one, see Section 11.5 of \cite{Coh4}), and denoted $\Gamma_p$.
Its definition is in fact quite simple:
\begin{definition} For $s\in{\mathbb Z}_p$ we define
$$\Gamma_p(s)=\lim_{m\to s}(-1)^m\prod_{\substack{0\le k<m\\ p\nmid k}}k\;,$$
where the limit is taken over any sequence of positive integers $m$
tending to $s$ for the $p$-adic topology.\end{definition}
It is of course necessary to show that this definition makes sense,
but this is not difficult, and most of the important properties
of $\Gamma_p(s)$, analogous to those of $\Gamma(s)$, can be deduced from it.
\begin{exercise} Choose $p=5$ and $s=-1/4$, so that $p$-adically
$s=1/(1-5)=1+5+5^2+5^3+\cdots$.
\begin{enumerate}\item Compute the right hand side of
the above definition with small $5$-adic accuracy for $m=1$, $1+5$,
and $1+5+5^2$.
\item It is in fact easy to compute that
$$\Gamma_5(-1/4)=4 + 4\cdot5 + 5^3 + 3\cdot5^4 + 2\cdot5^5 + 2\cdot5^6 + 2\cdot5^7 + 4\cdot5^8+\cdots$$
Using this, show that $\Gamma_5(-1/4)^2/16$ seems to be a $5$-adic root of
the polynomial $5X^2+4X+1$. This is in fact true, see the Gross--Koblitz
formula below.\end{enumerate}
\end{exercise}
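For part (1) of the exercise, the finite products in the definition are
immediate to tabulate; a small Python sketch, whose successive outputs
stabilize $5$-adically:
\begin{verbatim}
def gamma_p_partial(p, m, k):
    # (-1)^m * prod of 0 <= j < m with p not dividing j, reduced mod p^k:
    # the finite products whose p-adic limit defines Gamma_p(s).
    mod, r = p ** k, 1
    for j in range(1, m):
        if j % p:
            r = r * j % mod
    return (-r) % mod if m % 2 else r

# p = 5, s = -1/4 = 1 + 5 + 5^2 + ...: take m = 1, 6, 31, so that
# gamma_p_partial(5, 1, 1), gamma_p_partial(5, 6, 2), gamma_p_partial(5, 31, 3)
# recover Gamma_5(-1/4) = 4 + 4*5 + 0*5^2 + ... digit by digit.
\end{verbatim}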
We need a much deeper property of $\Gamma_p(s)$ known as the
Gross--Koblitz formula: it is in fact an analogue of a formula for
$\Gamma(s)$ known as the Chowla--Selberg formula, and it is also closely
related to the Davenport--Hasse relations that we have seen above.
The proof of the Gross--Koblitz formula was initially given using tools of
crystalline cohomology, but an elementary proof due to A.~Robert now
exists, see for instance Section 11.7 of \cite{Coh4} once again.
The Gross--Koblitz formula tells us that certain products of $p$-adic gamma
functions at \emph{rational} arguments are in fact \emph{algebraic
numbers}, more precisely \emph{Gauss sums} (explaining their
importance for us). This is quite surprising since usually
transcendental functions such as $\Gamma_p$ take transcendental values.
To give a specific example, we have $\Gamma_5(1/4)^2=-2+\sqrt{-1}$,
where $\sqrt{-1}$ is the square root in ${\mathbb Z}_5$ congruent to
$3$ modulo $5$. In view of the elementary properties of the
$p$-adic gamma function, this is equivalent to the result stated
in the above exercise as $\Gamma_5(-1/4)^2=-(16/5)(2+\sqrt{-1})$.
\smallskip
Before stating the formula we need to collect a number of facts,
both on classical algebraic number theory and on $p$-adic analysis.
None are difficult to prove, see Chapter 4 of \cite{Coh3}. Recall that
$q=p^f$.
\medskip
$\bullet$ We let $K={\mathbb Q}(\zeta_p)$ and $L=K(\zeta_{q-1})={\mathbb Q}(\zeta_{q-1},\zeta_p)={\mathbb Q}(\zeta_{p(q-1)})$, so that $L/K$ is an extension of degree $\phi(q-1)$.
There exists a unique prime ideal ${\mathfrak p}$ of $K$ above $p$, and we have
${\mathfrak p}=(1-\zeta_p){\mathbb Z}_K$ and ${\mathfrak p}^{p-1}=p{\mathbb Z}_K$, and ${\mathbb Z}_K/{\mathfrak p}\simeq{\mathbb F}_p$. The prime
ideal ${\mathfrak p}$ splits into a product of $g=\phi(q-1)/f$ prime ideals
${\mathfrak P}_j$ of degree $f$ in the extension $L/K$, i.e., ${\mathfrak p}{\mathbb Z}_L={\mathfrak P}_1\cdots{\mathfrak P}_g$,
and for any prime ideal ${\mathfrak P}={\mathfrak P}_j$ we have ${\mathbb Z}_L/{\mathfrak P}\simeq{\mathbb F}_q$.
\begin{exercise} Prove directly that for any $f$ we have $f\mid\phi(p^f-1)$.
\end{exercise}
$\bullet$ Fix one of the prime ideals ${\mathfrak P}$ as above. There exists a unique
group isomorphism $\omega=\omega_{{\mathfrak P}}$ from $({\mathbb Z}_L/{\mathfrak P})^*$ to the group of
$(q-1)$st roots of unity in $L$, such that for all $x\in({\mathbb Z}_L/{\mathfrak P})^*$ we have
$\omega(x)\equiv x\pmod{{\mathfrak P}}$. It is called the \emph{Teichm\"uller character},
and it can be considered as a character of order $q-1$ on
${\mathbb F}_q^*\simeq({\mathbb Z}_L/{\mathfrak P})^*$. We can thus \emph{instantiate} the definition of
a Gauss sum over ${\mathbb F}_q$ by defining it as ${\mathfrak g}(\omega_{{\mathfrak P}}^{-r})\in L$.
\smallskip
$\bullet$ Let $\zeta_p$ be a primitive $p$th root of unity in ${\mathbb C}_p$,
fixed once and for all. There exists a unique $\pi\in{\mathbb Z}[\zeta_p]$
satisfying $\pi^{p-1}=-p$, $\pi\equiv1-\zeta_p\pmod{\pi^2}$, and
we set $K_{{\mathfrak p}}={\mathbb Q}_p(\pi)={\mathbb Q}_p(\zeta_p)$, and $L_{{\mathfrak P}}$ the \emph{completion}
of $L$ at ${\mathfrak P}$. The field extension $L_{{\mathfrak P}}/K_{{\mathfrak p}}$ is Galois, with Galois
group isomorphic to ${\mathbb Z}/f{\mathbb Z}$ (which is the same as the Galois group of
${\mathbb F}_q/{\mathbb F}_p$, where ${\mathbb F}_p$ (resp., ${\mathbb F}_q$) is the so-called
\emph{residue field} of $K$ (resp., $L$)).
\smallskip
$\bullet$ We set the following:
\begin{definition} We define the \emph{$p$-adic Gauss sum} by
$${\mathfrak g}_q(r)=\sum_{x\in L_{{\mathfrak P}},\ x^{q-1}=1}x^{-r}\zeta_p^{\Tr_{L_{{\mathfrak P}}/K_{{\mathfrak p}}}(x)}\in L_{{\mathfrak P}}\;.$$
\end{definition}
Note that this depends on the choice of $\zeta_p$, or equivalently of $\pi$.
Since ${\mathfrak g}_q(r)$ and ${\mathfrak g}(\omega_{{\mathfrak P}}^{-r})$ are algebraic numbers, it is
clear that they are equal, although viewed in fields having different
topologies. Thus, results about ${\mathfrak g}_q(r)$ translate immediately into results
about ${\mathfrak g}(\omega_{{\mathfrak P}}^{-r})$, hence about general Gauss sums over finite fields.
\smallskip
The Gross--Koblitz formula is as follows:
\begin{theorem}[Gross--Koblitz] Denote by $s(r)$ the sum of digits in base $p$
of the integer $r\bmod{(q-1)}$, i.e., of the unique integer $r'$ such that
$r'\equiv r\pmod{q-1}$ and $0\le r'<q-1$. We have
$${\mathfrak g}_q(r)=-\pi^{s(r)}\prod_{0\le i<f}\Gamma_p\left(\left\{\dfrac{p^{f-i}r}{q-1}\right\}\right)\;,$$
where $\{x\}$ denotes the fractional part of $x$.\end{theorem}
Let us show how this can be used to compute Gauss or Jacobi sums, and in
particular our sum $S(q;z)$. Assume for simplicity that $f=1$, in other
words that $q=p$: the right hand
side is thus equal to $-\pi^{s(r)}\Gamma_p(\{pr/(p-1)\})$. Since we can always
choose $r$ such that $0\le r<p-1$, we have $s(r)=r$ and
$\{pr/(p-1)\}=\{r+r/(p-1)\}=r/(p-1)$, so the RHS is $-\pi^r\Gamma_p(r/(p-1))$.
Now an easy property of $\Gamma_p$ is that it is differentiable: recall that $p$
is ``small'' in the $p$-adic topology, so $r/(p-1)$ is close to $-r$, more
precisely $r/(p-1)=-r+pr/(p-1)$ (this is how we obtained it in the first
place!). Thus in particular, if $p>2$ we have the Taylor expansion
\begin{align*}\Gamma_p(r/(p-1))&=\Gamma_p(-r)+(pr/(p-1))\Gamma'_p(-r)+O(p^2)\\
&=\Gamma_p(-r)-pr\Gamma'_p(-r)+O(p^2)\;.\end{align*}
Since ${\mathfrak g}_q(r)$ depends only on $r$ modulo $p-1$, we will assume that
$0\le r<p-1$. In that case it is easy to show from the definition that
$$\Gamma_p(-r)=1/r!\text{\quad and\quad}\Gamma'_p(-r)=(-\gamma_p+H_r)/r!\;,$$
where $H_r=\sum_{1\le n\le r}1/n$ is the harmonic sum, and $\gamma_p=-\Gamma'_p(0)$
is the $p$-adic analogue of Euler's constant.
\begin{exercise} Prove these formulas, as well as the congruence for
$\gamma_p$ given below.
\end{exercise}
There exist infinite ($p$-adic)
series enabling accurate computation of $\gamma_p$, but since we only need it
modulo $p$, we use the easily proved congruence
$\gamma_p\equiv((p-1)!+1)/p=W_p\pmod{p}$, the so-called \emph{Wilson quotient}.
\smallskip
We will see below that, as a consequence of the Weil conjectures proved
by Deligne, it is sufficient to compute $S(p;z)$ modulo $p^2$. Thus, in the
following $p$-adic computation we only work modulo $p^2$.
The Gross--Koblitz formula tells us that for $0\le r<p-1$ we have
$${\mathfrak g}_q(r)=-\dfrac{\pi^r}{r!}(1-pr(H_r-W_p)+O(p^2))\;.$$
It follows that for $(p-1)\nmid 5r$ we have
$$J(-r,-r,-r,-r,-r)=\dfrac{{\mathfrak g}(\omega_{{\mathfrak P}}^{-r})^5}{{\mathfrak g}(\omega_{{\mathfrak P}}^{-5r})}=\dfrac{{\mathfrak g}_q(r)^5}{{\mathfrak g}_q(5r)}=\pi^{f(r)}(a+bp+O(p^2))\;,$$
where $a$ and $b$ will be computed below and
\begin{align*}f(r)&=5r-(5r\bmod{p-1})=5r-(5r-(p-1)\lfloor5r/(p-1)\rfloor)\\
&=(p-1)\lfloor 5r/(p-1)\rfloor\;,\end{align*}
so that $\pi^{f(r)}=(-p)^{\lfloor 5r/(p-1)\rfloor}$ since $\pi^{p-1}=-p$.
Since we want the result modulo $p^2$, we consider three intervals together
with special cases:
\begin{enumerate}\item If $r>2(p-1)/5$ but $(p-1)\nmid 5r$, we have
$$J(-r,-r,-r,-r,-r)\equiv0\pmod{p^2}\;.$$
\item If $(p-1)/5<r<2(p-1)/5$ we have
$$J(-r,-r,-r,-r,-r)\equiv(-p)\dfrac{(5r-(p-1))!}{r!^5}\pmod{p^2}\;.$$
\item If $0<r<(p-1)/5$ we have $f(r)=0$ and $0\le 5r<(p-1)$ hence
\begin{align*}J(-r,-r,-r,-r,-r)&=\dfrac{(5r)!}{r!^5}(1-5pr(H_r-W_p)+O(p^2))\cdot\\
&\phantom{=}\cdot(1+5pr(H_{5r}-W_p)+O(p^2))\\
&\equiv\dfrac{(5r)!}{r!^5}(1+5pr(H_{5r}-H_r))\pmod{p^2}\;.\end{align*}
\item Finally, if $r=j(p-1)/5$ we have $J(-r,-r,-r,-r,-r)=p^4\equiv0\pmod{p^2}$
if $j=0$, and otherwise $J(-r,-r,-r,-r,-r)=-{\mathfrak g}_q(r)^5/p$, and since the
$p$-adic valuation of ${\mathfrak g}_q(r)$ is equal to $r/(p-1)=j/5$, that of
$J(-r,-r,-r,-r,-r)$ is equal to $j-1$, which is greater or equal to $2$
as soon as $j\ge3$. For $j=2$, i.e., $r=2(p-1)/5$, we thus have
$$J(-r,-r,-r,-r,-r)\equiv p\dfrac{1}{r!^5}\equiv(-p)\dfrac{(5r-(p-1))!}{r!^5}\pmod{p^2}\;,$$
which is the same formula as for $(p-1)/5<r\le 2(p-1)/5$.
For $j=1$, i.e., $r=(p-1)/5$, we thus have
$$J(-r,-r,-r,-r,-r)\equiv-\dfrac{1}{r!^5}(1-5pr(H_r-W_p))\pmod{p^2}\;,$$
while on the other hand
$$(5r)!=(p-1)!=-1+pW_p\equiv-1-p(p-1)W_p\equiv-1-5prW_p\;,$$ and
$H_{5r}=H_{p-1}\equiv0\pmod{p}$ (Wolstenholme's congruence, easy), so
\begin{align*}\dfrac{(5r)!}{r!^5}(1+5pr(H_{5r}-H_r))&\equiv-\dfrac{1}{r!^5}(1-5prH_r)(1+5prW_p)\\
&\equiv-\dfrac{1}{r!^5}(1-5pr(H_r-W_p))\pmod{p^2}\;,\end{align*}
which is the same formula as for $0<r<(p-1)/5$.
\end{enumerate}
An important point to note is that we are working $p$-adically, but the
final result $S(p;z)$ being an integer, it does not matter at the end.
There is one small additional detail to take care of: we have
\begin{align*}S(p;z)&=\sum_{0\le r\le p-2}\omega^{-r}(z)J(r,r,r,r,r)\\
&=\sum_{0\le r\le p-2}\omega^r(z)J(-r,-r,-r,-r,-r)\;,\end{align*}
so we must express $\omega^r(z)$ in the $p$-adic setting. Since
$\omega=\omega_{{\mathfrak P}}$ is the \emph{Teichm\"uller character}, in the $p$-adic
setting it is easy to show that $\omega(z)$ is the $p$-adic limit of
$z^{p^k}$ as $k\to\infty$. In particular $\omega(z)\equiv z\pmod{p}$, but more
precisely $\omega(z)\equiv z^p\pmod{p^2}$.
\begin{exercise} Let $p\ge3$. Assume that $z\in{\mathbb Z}_p\setminus p{\mathbb Z}_p$ (for
instance that $z\in{\mathbb Z}\setminus p{\mathbb Z}$). Prove that $z^{p^k}$ has a $p$-adic
limit $\omega(z)$ when $k\to\infty$, that $\omega^{p-1}(z)=1$, that
$\omega(z)\equiv z\pmod{p}$, and $\omega(z)\equiv z^p\pmod{p^2}$.
\end{exercise}
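In code, $\omega(z)$ modulo $p^k$ is a two-liner, since each application of
$x\mapsto x^p$ gains one power of $p$ (a sketch along the lines of the
exercise):
\begin{verbatim}
def teichmuller(z, p, k):
    # omega(z) mod p^k; k-1 Frobenius iterations starting from z suffice.
    w = z % p ** k
    for _ in range(k - 1):
        w = pow(w, p, p ** k)
    return w

# In particular teichmuller(z, p, 2) == pow(z, p, p*p): omega(z) = z^p mod p^2.
\end{verbatim}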
We have thus proved the following:
\begin{proposition} We have
\begin{align*}S(p;z)&\equiv\sum_{0<r\le(p-1)/5}\dfrac{(5r)!}{r!^5}(1+5pr(H_{5r}-H_r))z^{pr}\\
&\phantom{=}-p\sum_{(p-1)/5<r\le2(p-1)/5}\dfrac{(5r-(p-1))!}{r!^5}z^r\pmod{p^2}\;.\end{align*}
In particular
$$S(p;z)\equiv\sum_{0<r\le(p-1)/5}\dfrac{(5r)!}{r!^5}z^r\pmod{p}\;.$$
\end{proposition}
\begin{remarks}{\rm \begin{enumerate}
\item Note that, as must be the case, all mention of $p$-adic numbers has
disappeared from this formula. We used the $p$-adic setting only in the proof.
It can be proved ``directly'', but with some difficulty.
\item We used the Taylor expansion only to order $2$. It is of course possible
to use it to any order, thus giving a generalization of the above proposition
to any power of $p$.\end{enumerate}}
\end{remarks}
The point of giving all these details is as follows: it is easy to show that
$(p-1)\mid S(p;z)$ (in fact we have seen this in the elementary method above).
We can thus easily compute $S(p;z)$ modulo $p^2(p-1)$. On the other hand,
it is possible to prove (but not easy, it is part of the Weil conjectures
proved by Deligne), that $|S(p;z)-p^4|<4p^{5/2}$. It follows that as soon
as $8p^{5/2}<p^2(p-1)$, in other words $p\ge67$, the computation that we
perform modulo $p^2$ is sufficient to determine $S(p;z)$ exactly. It is
clear that the time to perform this computation is $\Os(p)$, and in fact
much faster than any that we have seen.
\smallskip
In fact, implementing in a reasonable way the algorithm
given by the above proposition gives timings for $p\approx 10^k$ for
$k=2$, $3$, $4$, $5$, $6$, $7$, and $8$ of $0$, $0.01$, $0.03$, $0.21$, $2.13$,
$21.92$, and $229.6$ seconds respectively, of course much faster and
compatible with $\Os(p)$ time. The great additional advantage is that we
use very small memory. This is therefore the best known method.
\smallskip
{\bf Numerical example:} Choose $p=10^6+3$ and $z=2$. In $2.13$ seconds we find
that $S(p;z)\equiv a\pmod{p^2}$ with $a=356022712041$. Using the Chinese
remainder formula
$$S(p;z)=p^4+((a-(1+a)p^2)\bmod((p-1)p^2))\;,$$
we immediately deduce that
$$S(p;z)=1000012000056356142712140\;.$$
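For completeness, here is a straightforward Python sketch of the whole
procedure (modular inverses via three-argument \texttt{pow} need
Python~3.8+; the harmonic numbers are computed naively by Fermat
inversion, which batch inversion would speed up):
\begin{verbatim}
def S_exact(p, z):
    # S(p; z) via the proposition above plus the lift; z prime to p.
    p2 = p * p
    fact = [1] * p                        # n! mod p^2 for 0 <= n <= p-1
    for n in range(1, p):
        fact[n] = fact[n - 1] * n % p2
    H = [0] * p                           # H_n mod p
    for n in range(1, p):
        H[n] = (H[n - 1] + pow(n, p - 2, p)) % p
    S, r1 = 0, (p - 1) // 5
    for r in range(1, r1 + 1):            # 0 < r <= (p-1)/5
        t = fact[5 * r] * pow(fact[r], -5, p2) % p2
        t = t * (1 + 5 * p * r * (H[5 * r] - H[r])) % p2
        S = (S + t * pow(z, p * r, p2)) % p2
    for r in range(r1 + 1, 2 * (p - 1) // 5 + 1):
        t = fact[5 * r - (p - 1)] * pow(fact[r], -5, p2) % p2
        S = (S - p * t * pow(z, r, p2)) % p2
    # Chinese remainder lift as in the text:
    return p**4 + (S - (1 + S) * p2) % ((p - 1) * p2)

# S_exact(10**6 + 3, 2) == 1000012000056356142712140, as found above.
\end{verbatim}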
\smallskip
Here is a summary of the timings (in seconds) that we have mentioned:
\bigskip
\centerline{
\begin{tabular}{|c||c|c|c|c|c|c|c|}
\hline
$k$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ \\
\hline\hline
Na\"\i ve & $0.03$ & $1.56$ & $149$ & $*$ & $*$ & $*$ & $*$\\
\hline
Theta & $0.02$ & $0.40$ & $16.2$ & $663$ & $*$ & $*$ & $*$\\
\hline
Mod $X^{q-1}-1$ & $0$ & $0.02$ & $0.08$ & $0.85$ & $9.90$ & $123$ & $*$\\
\hline
Gross--Koblitz & $0$ & $0.01$ & $0.03$ & $0.21$ & $2.13$ & $21.92$ & $229.6$\\
\hline
\end{tabular}}
\medskip
\centerline{Time for computing $S(p;z)$ for $p\approx10^k$}
\section{Gauss and Jacobi Sums over ${\mathbb Z}/N{\mathbb Z}$}
Another context in which one encounters Gauss sums is over finite rings
such as ${\mathbb Z}/N{\mathbb Z}$. The theory coincides with that over ${\mathbb F}_q$ when
$q=p=N$ is prime, but is rather different otherwise. These other Gauss sums
enter in the important theory of \emph{Dirichlet characters}.
\subsection{Definitions}
We recall the following definition:
\begin{definition} Let $\chi$ be a (multiplicative) character from the
multiplicative group $({\mathbb Z}/N{\mathbb Z})^*$ of invertible elements of ${\mathbb Z}/N{\mathbb Z}$ to
the complex numbers ${\mathbb C}$.
By abuse of notation we again denote by $\chi$ the map from ${\mathbb Z}$ to ${\mathbb C}$
defined by $\chi(x)=\chi(x\bmod N)$ when $x$ is coprime to $N$, and
$\chi(x)=0$ if $x$ is not coprime to $N$, and call it the Dirichlet character
modulo $N$ associated to $\chi$.\end{definition}
It is clear that a Dirichlet character satisfies $\chi(xy)=\chi(x)\chi(y)$
for all $x$ and $y$, that $\chi(x+N)=\chi(x)$, and that $\chi(x)=0$
if and only if $x$ is not coprime with $N$. Conversely, it is immediate that
these properties characterize Dirichlet characters.
A crucial notion (which has no equivalent in the context of characters of
${\mathbb F}_q^*$) is that of \emph{primitivity}:
Assume that $M\mid N$. If $\chi$ is a Dirichlet character modulo $M$, we can
transform it into a character $\chi_N$ modulo $N$ by setting
$\chi_N(x)=\chi(x)$ if $x$ is coprime to $N$, and $\chi_N(x)=0$ otherwise.
We say that the characters $\chi$ and $\chi_N$ are \emph{equivalent}.
Conversely, if $\psi$ is a character modulo $N$, it is not always true that
one can find $\chi$ modulo $M$ such that $\psi=\chi_N$. If it is possible,
we say that $\psi$ \emph{can be defined modulo $M$}.
\begin{definition} Let $\chi$ be a character modulo $N$. We say that
$\chi$ is a \emph{primitive character} if $\chi$ cannot be defined modulo
$M$ for any proper divisor $M$ of $N$, i.e., for any $M\mid N$ such that
$M\ne N$.\end{definition}
\begin{exercise} Assume that $N\equiv2\pmod4$. Show that there do not exist
any primitive characters modulo $N$.
\end{exercise}
\begin{exercise} Assume that $p^a\mid N$ with $p$ prime. Show that if $\chi$
is a primitive character modulo $N$, the \emph{order} of $\chi$ (the smallest
$k\ge1$ such that $\chi^k$ is the trivial character) is \emph{divisible}
by $p^{a-1}$.
\end{exercise}
As we will see, questions about general Dirichlet characters can always be
reduced to questions about primitive characters, and the latter have much
nicer properties.
\begin{proposition} Let $\chi$ be a character modulo $N$. There exists
a divisor $f$ of $N$ called the \emph{conductor} of $\chi$ (this $f$ has
nothing to do with the $f$ used above such that $q=p^f$), having the following
properties:
\begin{enumerate}\item The character $\chi$ can be defined modulo $f$,
in other words there exists a character $\psi$ modulo $f$ such that
$\chi=\psi_N$ using the notation above.
\item $f$ is the smallest divisor of $N$ having this property.
\item The character $\psi$ is a primitive character modulo $f$.
\end{enumerate}\end{proposition}
There is also the notion of \emph{trivial character modulo $N$}: however
we must be careful here, and we set the following:
\begin{definition} The trivial character modulo $N$ is the Dirichlet
character associated with the trivial character of $({\mathbb Z}/N{\mathbb Z})^*$. It is
usually denoted by $\chi_0$ (but be careful, the index $N$ is implicit, so
$\chi_0$ may represent different characters), and its values are as follows:
$\chi_0(x)=1$ if $x$ is coprime to $N$, and $\chi_0(x)=0$ if $x$ is not
coprime to $N$.\end{definition}
In particular, $\chi_0(0)=0$ if $N\ne1$. The character $\chi_0$ can also be
characterized as the only character modulo $N$ of conductor $1$.
\begin{definition} Let $\chi$ be a character modulo $N$. The \emph{Gauss sum}
associated to $\chi$ and $a\in{\mathbb Z}$ is
$${\mathfrak g}(\chi,a)=\sum_{x\bmod N}\chi(x)\zeta_N^{ax}\;,$$
where $\zeta_N=e^{2\pi i/N}$, and we write simply ${\mathfrak g}(\chi)$ instead of ${\mathfrak g}(\chi,1)$.
\end{definition}
The most important result concerning these Gauss sums is the following:
\begin{proposition} Let $\chi$ be a character modulo $N$.\begin{enumerate}
\item If $a$ is coprime to $N$ we have
$${\mathfrak g}(\chi,a)=\chi^{-1}(a){\mathfrak g}(\chi)=\ov{\chi(a)}{\mathfrak g}(\chi)\;,$$
and more generally
${\mathfrak g}(\chi,ab)=\chi^{-1}(a){\mathfrak g}(\chi,b)=\ov{\chi(a)}{\mathfrak g}(\chi,b)$.
\item If $\chi$ is a \emph{primitive} character, we have
$${\mathfrak g}(\chi,a)=\ov{\chi(a)}{\mathfrak g}(\chi)$$
for \emph{all} $a$, in other words, in addition to (1), we have
${\mathfrak g}(\chi,a)=0$ if $a$ is not coprime to $N$.
\item If $\chi$ is a \emph{primitive} character, we have
$|{\mathfrak g}(\chi)|^2=N$.\end{enumerate}\end{proposition}
Note that (1) is trivial, and that since $\chi(a)$ has modulus $1$ when
$a$ is coprime to $N$, we can write indifferently $\chi^{-1}(a)$ or
$\ov{\chi(a)}$. On the other hand, (2) is not completely trivial.
\smallskip
We leave to the reader the easy task of defining Jacobi sums and of proving
the easy relations between Gauss and Jacobi sums.
\subsection{Reduction to Prime Gauss Sums}
A fundamental and little-known fact is that in the context of Gauss
sums over ${\mathbb Z}/N{\mathbb Z}$ (as opposed to ${\mathbb F}_q$), one can in fact always reduce
to prime $N$. First note (with proof) the following easy result:
\begin{proposition} Let $N=N_1N_2$ with $N_1$ and $N_2$ coprime, and
let $\chi$ be a character modulo $N$.\begin{enumerate}
\item There exist unique characters $\chi_i$ modulo $N_i$ such that
$\chi=\chi_1\chi_2$ in an evident sense, and if $\chi$ is primitive,
the $\chi_i$ will also be primitive.
\item We have the identity (valid even if $\chi$ is not primitive):
$${\mathfrak g}(\chi)=\chi_1(N_2)\chi_2(N_1){\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)\;.$$
\end{enumerate}\end{proposition}
\begin{proof} (1). Since $N_1$ and $N_2$ are coprime there exist $u_1$ and $u_2$
such that $u_1N_1+u_2N_2=1$. We define $\chi_1(x)=\chi(xu_2N_2+u_1N_1)$ and
$\chi_2(x)=\chi(xu_1N_1+u_2N_2)$. We leave to the reader to check (1)
using these definitions.
\smallskip
(2). When $x_i$ ranges modulo $N_i$, $x=x_1u_2N_2+x_2u_1N_1$ ranges
modulo $N$ (check it, in particular that the values are distinct!),
and $\chi(x)=\chi_1(x)\chi_2(x)=\chi_1(x_1)\chi_2(x_2)$. Furthermore,
$$\zeta_N=\exp(2\pi i/N)=\exp(2\pi i(u_1/N_2+u_2/N_1))=\zeta_{N_1}^{u_2}\zeta_{N_2}^{u_1}\;,$$
hence
\begin{align*}{\mathfrak g}(\chi)&=\sum_{x\bmod N}\chi(x)\zeta_N^x\\
&=\sum_{x_1\bmod N_1,\ x_2\bmod N_2}\chi_1(x_1)\chi_2(x_2)\zeta_{N_1}^{u_2x_1}\zeta_{N_2}^{u_1x_2}\\
&={\mathfrak g}(\chi_1,u_2){\mathfrak g}(\chi_2,u_1)=\chi_1^{-1}(u_2)\chi_2^{-1}(u_1){\mathfrak g}(\chi_1){\mathfrak g}(\chi_2)\;,\end{align*}
so the result follows since $N_2u_2\equiv1\pmod{N_1}$ and
$N_1u_1\equiv1\pmod{N_2}$.\qed\end{proof}
Thanks to the above result, the computation of Gauss sums modulo $N$ can be
reduced to the computation of Gauss sums modulo prime powers.
Here a remarkable simplification occurs, due to Odoni: Gauss sums modulo
$p^a$ for $a\ge2$ can be ``explicitly computed'', in the sense that there
is a direct formula not involving a sum over $p^a$ terms for computing
them. Although the proof is not difficult, we do not give it, and refer
instead to \cite{Coh5} which can be obtained from the author. We use the
classical notation $\mathbf e(x)$ to mean $e^{2\pi i x}$. Furthermore, we use
the $p$-adic logarithm $\log_p(m)$, but in a totally elementary manner
since we will always have $m\equiv1\pmod p$ and the standard expansion
$-\log_p(1-x)=\sum_{k\ge1}x^k/k$ which we stop as soon as all the terms
are divisible by $p^n$:
\begin{theorem}[Odoni et al.]\label{thmodoni} Let $\chi$ be a \emph{primitive}
character modulo $p^n$.
\begin{enumerate}\item Assume that $p\ge3$ is prime and $n\ge2$. Write
$\chi(1+p)=\mathbf e(-b/p^{n-1})$ with $p\nmid b$. Define
$$A(p)=\dfrac{p}{\log_p(1+p)}\text{\quad and\quad}B(p)=A(p)(1-\log_p(A(p)))\;,$$
except when $p^n=3^3$, in which case we define $B(p)=10$. Then
$${\mathfrak g}(\chi)=p^{n/2}\mathbf e\left(\dfrac{bB(p)}{p^n}\right)\chi(b)\cdot\begin{cases}
1&\text{\quad if $n\ge2$ is even,}\\
\leg{b}{p}i^{p(p-1)/2}&\text{\quad if $n\ge3$ is odd.}
\end{cases}$$
\item Let $p=2$ and assume that $n\ge4$. Write
$\chi(1+p^2)=\mathbf e(b/p^{n-2})$ with $p\nmid b$. Define
$$A(p)=-\dfrac{p^2}{\log_p(1+p^2)}\text{\quad and\quad}B(p)=A(p)(1-\log_p(A(p)))\;,$$
except when $p^n=2^4$, in which case we define $B(p)=13$. Then
$${\mathfrak g}(\chi)=p^{n/2}\mathbf e\left(\dfrac{bB(p)}{p^n}\right)\chi(b)\cdot\begin{cases}
\mathbf e\left(\dfrac{b}{8}\right)&\text{\quad if $n\ge4$ is even,}\\
\mathbf e\left(\dfrac{(b^2-1)/2+b}{8}\right)&\text{\quad if $n\ge5$ is odd.}
\end{cases}$$
\item If $p^n=2^2$, or $p^n=2^3$ and $\chi(-1)=1$, we have ${\mathfrak g}(\chi)=p^{n/2}$,
and if $p^n=2^3$ and $\chi(-1)=-1$ we have ${\mathfrak g}(\chi)=p^{n/2}i$.
\end{enumerate}\end{theorem}
Thanks to this theorem, we see that the computation of Gauss sums in the
context of Dirichlet characters can be reduced to the computation of Gauss
sums modulo $p$ for prime $p$. This is of course the same as the
computation of a Gauss sum for a character of ${\mathbb F}_p^*$.
We recall the available methods for computing a single Gauss sum of this
type:
\begin{enumerate}\item The na\"\i ve method, time $\Os(p)$ (applicable in
general, time $\Os(N)$).
\item Using the Gross--Koblitz formula, also time $\Os(p)$, but the implicit
constant is much smaller, and also computations can be done modulo $p$ or
$p^2$ for instance, if desired (applicable only to $N=p$, or in the
context of finite fields).
\item Using theta functions, time $\Os(p^{1/2})$ (applicable in general,
time $\Os(N^{1/2})$).\end{enumerate}
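As a concrete illustration of the na\"\i ve method (1), here is a small
Python sketch (ours, purely illustrative and not part of the original text;
the names \texttt{gauss\_sum} and \texttt{chi5} are our own):
\begin{verbatim}
# Naive O(N) computation of the Gauss sum g(chi, a) for a Dirichlet
# character chi mod N given as a plain function (illustrative sketch).
from cmath import exp, pi

def gauss_sum(chi, N, a=1):
    # g(chi, a) = sum_{x mod N} chi(x) * e^{2*pi*i*a*x/N}
    return sum(chi(x) * exp(2j * pi * a * x / N) for x in range(N))

# Example: the quadratic character mod 5 (the Legendre symbol), which is
# primitive, so |g(chi)|^2 should equal 5.
chi5 = lambda x: [0, 1, -1, -1, 1][x % 5]
g = gauss_sum(chi5, 5)
print(g, abs(g) ** 2)   # approximately 2.2360679... + 0j,  5.0
\end{verbatim}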
\subsection{General Complete Exponential Sums over ${\mathbb Z}/N{\mathbb Z}$}
We have just seen the (perhaps surprising) fact that Gauss sums modulo
$p^a$ for $a\ge2$ can be ``explicitly computed''. This is in fact
a completely general fact. Let $\chi$ be a Dirichlet character modulo $N$,
and let $F\in {\mathbb Q}[X]$ be integer-valued. Consider the following
\emph{complete exponential sum}:
$$S(F,N)=\sum_{x\bmod N}\chi(x)e^{2\pi i F(x)/N}\;.$$
For this to make sense we must of course assume that $x\equiv y\pmod N$
implies $F(x)\equiv F(y)\pmod{N}$, which is for instance the case if
$F\in{\mathbb Z}[X]$. As we did for Gauss sums, using Chinese remaindering we can
reduce the computation to the case where $N=p^a$ is a prime power. But
the essential point is that if $a\ge2$, $S(F,p^a)$ can be ``explicitly
computed'', see \cite{Coh5} for the detailed statement and proof, so
we are again reduced to the computation of $S(F,p)$.
A simplified and incomplete version of the result, when $\chi$ is the
trivial character, is as follows:
\begin{theorem} Let $S=\sum_{x\bmod{p^a}}e^{2\pi iF(x)/p^a}$, and
assume that $a\ge2$ and $p>2$. Then under suitable assumptions on $F$ we
have the following:
\begin{enumerate}
\item If there does not exist $y$ such that $F'(y)\equiv0\pmod p$ then $S=0$.
\item Otherwise, there exists a unique $u\in{\mathbb Z}_p$ such that
$F'(u)=0$ and $v_p(F''(u))=0$, and we have
$$S=p^{a/2}e^{2\pi iF(u)/p^a}g(u,p,a)\;,$$
where $g(u,p,a)=1$ if $a$ is even and otherwise
$$g(u,p,a)=\leg{F''(u)}{p}i^{p(p-1)/2}\;.$$
\end{enumerate}
\end{theorem}
\begin{exercise} Let $F(x)=cx^3+dx$ with $c$ and $d$ integers, and let $p$
be a prime number such that $p\nmid 6cd$. The assumptions of the theorem
will then be satisfied. Compute explicitly
$\sum_{x\bmod{p^a}}e^{2\pi iF(x)/p^a}$ for $a\ge2$. You will need to
introduce a square root of $-3cd$ modulo $p^a$.\end{exercise}
For instance, using a variant of the above theorem, it is immediate to prove
the following result due to Sali\'e:
\begin{proposition} The \emph{Kloosterman sum} $K(m,n,N)$ is defined by
$$K(m,n,N)=\sum_{x\in({\mathbb Z}/N{\mathbb Z})^*}e^{2\pi i(mx+nx^{-1})/N}\;,$$
where $x$ runs over the invertible elements of ${\mathbb Z}/N{\mathbb Z}$. If $p>2$
is a prime such that $p\nmid n$ and $a\ge2$ we have
$$K(n,n,p^a)=\begin{cases}
2p^{a/2}\cos(4\pi n/p^a)&\text{ if $2\mid a$,}\\
2p^{a/2}\leg{n}{p}\cos(4\pi n/p^a)&\text{ if $2\nmid a$ and $p\equiv1\pmod4$,}\\
-2p^{a/2}\leg{n}{p}\sin(4\pi n/p^a)&\text{ if $2\nmid a$ and $p\equiv3\pmod4$.}\end{cases}$$
\end{proposition}
Note that it is immediate to reduce general $K(m,n,N)$ to the case $m=n$
and $N=p^a$, and to give formulas also for the case $p=2$. As usual the
case $N=p$ is \emph{not} explicit, and, contrary to the case of Gauss sums
where it is easy to show that $|{\mathfrak g}(\chi)|=\sqrt{p}$ for a primitive character
$\chi$, the bound $|K(m,n,p)|\le 2\sqrt{p}$ for $p\nmid nm$ due to Weil is
much more difficult to prove, and in fact follows from his proof of the
Riemann hypothesis for curves.
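Since everything in Sali\'e's formula is elementary, it is easy to check it
numerically. The following Python sketch (ours, purely illustrative) compares
the defining sum with the closed form for a few small cases:
\begin{verbatim}
# Numerical check of Salie's closed form for K(n, n, p^a), p odd prime,
# p not dividing n, a >= 2 (illustrative sketch).
from math import gcd, cos, sin, pi, sqrt

def kloosterman(m, n, N):
    # direct evaluation of the defining sum, as a complex number
    s = 0
    for x in range(1, N):
        if gcd(x, N) == 1:
            t = 2 * pi * (m * x + n * pow(x, -1, N)) / N
            s += complex(cos(t), sin(t))
    return s

def salie(n, p, a):
    leg = 1 if pow(n, (p - 1) // 2, p) == 1 else -1  # Legendre symbol (n|p)
    pa = p ** a
    if a % 2 == 0:
        return 2 * sqrt(pa) * cos(4 * pi * n / pa)
    if p % 4 == 1:
        return 2 * sqrt(pa) * leg * cos(4 * pi * n / pa)
    return -2 * sqrt(pa) * leg * sin(4 * pi * n / pa)

for n, p, a in [(1, 5, 2), (2, 7, 3), (3, 13, 2)]:
    print(kloosterman(n, n, p ** a), salie(n, p, a))
\end{verbatim}
The imaginary parts of the direct sums should be numerically zero, and the
real parts should match the closed form to machine precision.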
\section{Numerical Computation of $L$-Functions}
\subsection{Computational Issues}
Let $L(s)$ be a general $L$-function as defined in Section \ref{sec:one},
and let $N$ be its conductor. There are several computational problems that we
want to solve. The first, but not necessarily the most important, is the
numerical computation of $L(s)$ for given complex values of $s$. This problem
is of widely varying difficulty depending on the size of $N$ and of the
imaginary part of $s$ (note that if the \emph{real part} of $s$ is quite
large, the defining series for $L(s)$ converges quite well, if not
exponentially fast, so there is no problem in that range, and by the
functional equation the same is true if the real part of $1-s$ is quite large).
The problems for $\Im(s)$ large are quite specific, and are already crucial
in the case of the Riemann zeta function $\zeta(s)$. It is by an efficient
management of this problem (for instance by using the so-called
\emph{Riemann--Siegel formula}) that one is able to compute billions of
nontrivial zeros of $\zeta(s)$. We will not consider these problems here, but
concentrate on reasonable ranges of $s$.
The second problem is specific to general $L$-functions as opposed to
$L$-functions attached to Dirichlet characters for instance: in the general
situation, we are given an $L$-function by an Euler product known outside of
a finite and small number of ``bad primes''. Using recipes dating to the
late 1960's and well explained in a beautiful paper of Serre \cite{Ser}, one
can give the ``gamma factor'' $\gamma(s)$, and some (but not all) of the information
about the ``conductor'', which is the exponential factor, at least in the
case of $L$-functions of varieties, or more generally of motives.
\smallskip
We will ignore these problems and assume that we know all the bad primes,
gamma factor, conductor, and root number. Note that if we know the gamma
factor and the bad primes, using the formulas that we will give below for
different values of the argument it is easy to recover the conductor and the
root number. What is most difficult to obtain are the Euler factors at the
bad primes, and this is the object of current work.
\subsection{Dirichlet $L$-Functions}
Let $\chi$ be a Dirichlet character modulo $N$. We define the $L$-function
attached to $\chi$ as the complex function
$$L(\chi,s)=\sum_{n\ge1}\dfrac{\chi(n)}{n^s}\;.$$
Since $|\chi(n)|\le1$, it is clear that $L(\chi,s)$ converges absolutely
for $\Re(s)>1$. Furthermore, since $\chi$ is multiplicative, as for the
Riemann zeta function we have an \emph{Euler product}
$$L(\chi,s)=\prod_p\dfrac{1}{1-\chi(p)/p^s}\;.$$
The denominator of this product being generically of degree $1$, this is
also called an $L$-function of degree $1$, and conversely, with a suitable
definition of the notion of $L$-function, one can show that these are the
only $L$-functions of degree $1$.
If $f$ is the conductor of $\chi$ and $\chi_f$ is the character modulo $f$
equivalent to $\chi$, it is clear that
$$L(\chi,s)=\prod_{p\mid N, p\nmid f}(1-\chi_f(p)p^{-s})L(\chi_f,s)\;,$$
so if desired we can always reduce to primitive characters, and this is
what we will do from now on.
Dirichlet $L$-series have important analytic and arithmetic properties, some
of them conjectural (such as the Riemann Hypothesis), which should (again
conjecturally) be shared by all global $L$-functions, see the discussion
in the introduction. We first give the following:
\begin{theorem} Let $\chi$ be a \emph{primitive} character modulo $N$, and
let $e=0$ or $1$ be such that $\chi(-1)=(-1)^e$.
\begin{enumerate}
\item (Analytic continuation.)
The function $L(\chi,s)$ can be analytically continued to the whole
complex plane into a meromorphic function, which is in fact holomorphic
except in the special case $N=1$, $L(\chi,s)=\zeta(s)$, where it has a unique
pole, at $s=1$, which is simple with residue $1$.
\item (Functional equation.)
There exists a \emph{functional equation} of the following form:
letting $\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$, we set
$$\Lambda(\chi,s)=N^{(s+e)/2}\Gamma_{{\mathbb R}}(s+e)L(\chi,s)\;,$$
where $e$ is as above. Then
$$\Lambda(\chi,1-s)=\omega(\chi)\Lambda(\ov{\chi},s)\;,$$
where $\omega(\chi)$, the so-called \emph{root number}, is a complex
number of modulus $1$ given by the formula
$\omega(\chi)={\mathfrak g}(\chi)/(i^eN^{1/2})$.
\item (Special values.)
For each integer $k\ge1$ we have the \emph{special values}
$$L(\chi,1-k)=-\dfrac{B_k(\chi)}{k}-\delta_{N,1}\delta_{k,1}\;,$$
where $\delta$ is the Kronecker symbol, and the \emph{generalized Bernoulli
numbers} $B_k(\chi)$ are easily computable algebraic numbers. In particular,
when $k\not\equiv e\pmod{2}$ we have $L(\chi,1-k)=0$ (except when $k=N=1$).
By the functional equation this is equivalent to the formula
for $k\equiv e\pmod{2}$, $k\ge1$:
$$L(\chi,k)=(-1)^{k-1+(k+e)/2}\omega(\chi)\dfrac{2^{k-1}\pi^k\ov{B_k(\chi)}}{N^{k-1/2}k!}\;.$$
\end{enumerate}\end{theorem}
To state the next theorem, which for the moment we state for Dirichlet
$L$-functions, we need still another important special function:
\begin{definition} For $x>0$ we define the \emph{incomplete gamma function}
$\Gamma(s,x)$ by
$$\Gamma(s,x)=\int_x^\infty t^se^{-t}\,\dfrac{dt}{t}\;.$$
\end{definition}
Note that this integral converges for \emph{all} $s\in{\mathbb C}$, and that it
tends to $0$ exponentially fast when $x\to\infty$, more precisely
$\Gamma(s,x)\sim x^{s-1}e^{-x}$. In addition (but this would carry us too far here)
there are many efficient methods to compute it; see however the section
on inverse Mellin transforms below.
\begin{theorem} Let $\chi$ be a \emph{primitive} character modulo $N$. For all
$A>0$ we have:
\begin{align*}\Gamma\left(\dfrac{s+e}{2}\right)L(\chi,s)&=\delta_{N,1}\pi^{s/2}\left(\dfrac{A^{(s-1)/2}}{s-1}-\dfrac{A^{s/2}}{s}\right)
+\sum_{n\ge 1}\dfrac{\chi(n)}{n^s}\Gamma\left(\dfrac{s+e}{2},\dfrac{\pi n^2 A}{N}\right)\\
&\phantom{=}+\omega(\chi)\left(\dfrac{\pi}{N}\right)^{s-1/2}\sum_{n\ge 1}\dfrac{\ov{\chi}(n)}{n^{1-s}}\Gamma\left(\dfrac{1-s+e}{2},\dfrac{\pi n^2}{AN}\right)\;.\end{align*}
\end{theorem}
\begin{remarks}{\rm \begin{enumerate}
\item Thanks to this theorem, we can compute numerical values of $L(\chi,s)$
(for $s$ in a reasonable range) in time $\Os(N^{1/2})$.
\item The optimal value of $A$ is $A=1$, but the theorem is stated in this
form for several reasons, one of them being that by varying $A$ (for instance
taking $A=1.1$ and $A=0.9$) one can check the correctness of the
implementation, or even compute the root number $\omega(\chi)$ if it is not known.
\item To compute values of $L(\chi,s)$ when $\Im(s)$ is large, one
does not use the theorem as stated, but variants, see \cite{Rub}.
\item The above theorem, called the \emph{approximate functional equation},
evidently implies the functional equation itself, so it seems to be more
precise; however this is an illusion since one can show that under very mild
assumptions functional equations in a large class imply corresponding
approximate functional equations.
\end{enumerate}}
\end{remarks}
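To make the preceding remarks concrete, here is a minimal Python/mpmath
sketch (ours, not from the text) of the theorem with $A=1$, for a primitive
nontrivial $\chi$, so that the pole term disappears:
\begin{verbatim}
# Gamma((s+e)/2) * L(chi, s) via the approximate functional equation,
# with A = 1, chi primitive and nontrivial (illustrative sketch).
from mpmath import mp, mpc, mpf, exp, pi, sqrt, conj, gammainc

mp.dps = 30

def lambda_chi(chi, N, s, terms=100):
    e = 0 if chi(-1) == 1 else 1
    g = sum(chi(x) * exp(2j * pi * x / N) for x in range(N))   # Gauss sum
    omega = g / (mpc(0, 1) ** e * sqrt(N))                     # root number
    total = mpc(0)
    for n in range(1, terms + 1):
        total += chi(n) / mpf(n) ** s * gammainc((s + e) / 2, pi * n**2 / N)
        total += (omega * (pi / N) ** (s - mpf(1) / 2) * conj(chi(n))
                  / mpf(n) ** (1 - s) * gammainc((1 - s + e) / 2, pi * n**2 / N))
    return total

# Example: chi_{-4}, the odd primitive character mod 4.  Since
# Gamma((1+1)/2) = 1 and L(chi_{-4}, 1) = pi/4, this prints about 0.785398.
chi4 = lambda n: [0, 1, 0, -1][n % 4]
print(lambda_chi(chi4, 4, mpf(1)))
\end{verbatim}
Here \texttt{gammainc(z, x)} is mpmath's upper incomplete gamma function
$\Gamma(z,x)$; the convergence is so fast that \texttt{terms=100} is already
far more than needed for $N=4$.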
\subsection{Approximate Functional Equations}
In fact, let us make this last statement completely precise. For the sake of
simplicity we will assume that the $L$-functions have no poles (this
corresponds for Dirichlet $L$-functions to the requirement that $\chi$ not be
the trivial character). We begin by the following (where we restrict to
certain kinds of gamma products, but it is easy to generalize; incidentally
recall the \emph{duplication formula} for the gamma function
$\Gamma(s/2)\Gamma((s+1)/2)=2^{1-s}\pi^{1/2}\Gamma(s)$, which allows the reduction of
factors of the type $\Gamma(s+a)$ to several of the type $\Gamma(s/2+a')$ and
conversely).
\begin{definition} Recall that we have defined $\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$,
which is the gamma factor attached to $L$-functions of even characters, for
instance to $\zeta(s)$. A \emph{gamma product} is a function of the type
$$\gamma(s)=f^{s/2}\prod_{1\le j\le d}\Gamma_{{\mathbb R}}(s+b_j)\;,$$
where $f>0$ is a real number. The number $d$ of gamma factors is called the
\emph{degree} of $\gamma(s)$.\end{definition}
Note that the $b_j$ need not be real numbers, but in the case of $L$-functions
attached to motives they will always be real, and in fact integers.
\begin{proposition} Let $\gamma$ be a gamma product.\begin{enumerate}
\item There exists a function
$W(t)$ called the \emph{inverse Mellin transform} of $\gamma$ such that
$$\gamma(s)=\int_0^\infty t^sW(t)\,dt/t$$
for $\Re(s)$ sufficiently large (greater than the real part of the rightmost
pole of $\gamma(s)$ suffices).
\item $W(t)$ is given by the following \emph{Mellin inversion formula} for
$t>0$:
$$W(t)=\M^{-1}(\gamma)(t)=\dfrac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}t^{-s}\gamma(s)\,ds\;,$$
for any $\sigma$ larger than the real part of the poles of $\gamma(s)$.
\item $W(t)$ tends to $0$ exponentially fast when $t\to+\infty$. More
precisely, as $t\to\infty$ we have
$$W(t)\sim C\cdot(t/f^{1/2})^B\exp(-\pi d(t/f^{1/2})^{2/d})$$
with $B=(1-d+\sum_{1\le j\le d}b_j)/d$ and $C=2^{(d+1)/2}/d^{1/2}$.
\end{enumerate}\end{proposition}
\begin{definition} Let $\gamma(s)$ be a gamma product and $W(t)$ its inverse
Mellin transform. The \emph{incomplete gamma product} $\gamma(s,x)$ is defined
for $x>0$ by
$$\gamma(s,x)=\int_x^\infty t^sW(t)\,\dfrac{dt}{t}\;.$$
\end{definition}
Note that this integral always converges since $W(t)$ tends to $0$
exponentially fast when $t\to\infty$. In addition, thanks to the above
proposition it is immediate to show the following:
\begin{corollary}\label{asympunsmooth}\begin{enumerate}
\item For any $\sigma$ larger than the real part of the poles of $\gamma(s)$
we have
$$\gamma(s,x)=\dfrac{x^s}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\dfrac{x^{-z}\gamma(z)}{z-s}\,dz\;.$$
\item For $s$ fixed, as $x\to\infty$ we have with the same
constants $B$ and $C$ as above
$$\gamma(s,x)\sim \dfrac{C}{2\pi}x^s(x/f^{1/2})^{B-2/d}\exp(-\pi d(x/f^{1/2})^{2/d})$$
and so has essentially the same exponential decay as $W(x)$.
\end{enumerate}
\end{corollary}
The first theorem, essentially due to Lavrik and an exercise in complex
integration, is as follows (recall that a function $f$ is of \emph{finite
order} $\alpha\ge0$ if for all $\varepsilon>0$ and sufficiently large $|z|$ we have
$|f(z)|\le \exp(|z|^{\alpha+\varepsilon})$):
\begin{theorem}\label{thmapprox} For $i=1$ and $i=2$, let
$L_i(s)=\sum_{n\ge 1}a_i(n)n^{-s}$ be
Dirichlet series converging in some right half-plane $\Re(s)\ge\sigma_0$.
For $i=1$ and $i=2$, let $\gamma_i(s)$ be gamma products having the same
degree $d$. Assume that the functions $\Lambda_i(s)=\gamma_i(s)L_i(s)$
extend analytically to ${\mathbb C}$ into holomorphic functions of \emph{finite order},
and that we have the functional equation
$$\Lambda_1(k-s)=w\cdot\Lambda_2(s)$$ for some constant
$w\in{\mathbb C}^*$ and some real number $k$.
Then for all $A>0$, we have
$$\Lambda_1(s)=\sum_{n\ge1}\dfrac{a_1(n)}{n^s}\gamma_1(s,nA)+
w\sum_{n\ge1}\dfrac{a_2(n)}{n^{k-s}}\gamma_2\Bigl(k-s,\dfrac{n}{A}\Bigr)$$
and symmetrically
$$\Lambda_2(s)=\sum_{n\ge1}\dfrac{a_2(n)}{n^s}\gamma_2\Bigl(s,\dfrac{n}{A}\Bigr)+
w^{-1}\sum_{n\ge1}\dfrac{a_1(n)}{n^{k-s}}\gamma_1(k-s,nA)\;,$$
where $\gamma_i(s,x)$ are the corresponding incomplete gamma products.
\end{theorem}
Note that, as already mentioned, it is immediate to modify this theorem
to take into account possible poles of $L_i(s)$.
Since the incomplete gamma products $\gamma_i(s,x)$ tend to $0$ exponentially
fast when $x\to\infty$, the above formulas are rapidly
convergent series. We can make this more precise: if we write as above
$\gamma_i(s,x)\sim C_ix^{B'_i}\exp(-\pi d(x/f_i^{1/2})^{2/d})$, since
the convergence of the series is dominated by the exponential term, choosing
$A=1$, to have the $n$th term of the series less than $e^{-D}$, say, we
need (approximately) $\pi d(n/f^{1/2})^{2/d}>D$, in other words
$n>(D/(\pi d))^{d/2}f^{1/2}$, with $f=\max(f_1,f_2)$. Thus, if the
``conductor'' $f$ is large, we may have some trouble. But this stays
reasonable for $f<10^8$, say.
\smallskip
The above argument leads to the belief that, apart from special values which
can be computed by other methods, the computation of values of $L$-functions
of conductor $f$ requires at least $C\cdot f^{1/2}$ operations. It has
however been shown by Hiary (see \cite{Hia}), that if $f$ is far from
squarefree (for instance if $f=m^3$ for Dirichlet $L$-functions), the
computation can be done faster (in $\Os(m)$ in the case $f=m^3$), at least in
the case of Dirichlet $L$-functions.
\medskip
For practical applications, it is very useful to introduce an additional
function as a parameter. We state the following version due to Rubinstein
(see \cite{Rub}), whose proof is essentially identical to that of the
preceding version. To simplify the exposition, we again assume that the
$L$ function has no poles (it is easy to generalize), but also that
$L_2=\ov{L_1}$.
\begin{theorem} Let $L(s)=\sum_{n\ge1}a(n)n^{-s}$ be an $L$-function as above
with functional equation $\Lambda(k-s)=w\ov{\Lambda}(s)$ with
$\Lambda(s)=\gamma(s)L(s)$. For simplicity of exposition, assume that $L(s)$
has no poles in ${\mathbb C}$. Let $g(s)$ be an entire function such that for fixed
$s$ we have $|\Lambda(z+s)g(z+s)/z|\to0$ as $|\Im(z)|\to\infty$ in any bounded
strip $|\Re(z)|\le \alpha$. We have
$$\Lambda(s)g(s)=\sum_{n\ge1}\dfrac{a(n)}{n^s}f_1(s,n)
+w\sum_{n\ge1}\dfrac{\ov{a(n)}}{n^{k-s}}f_2(k-s,n)\;,$$
where
$$f_1(s,x)=\dfrac{x^s}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\dfrac{\gamma(z)g(z)x^{-z}}{z-s}\,dz\text{\quad and\quad}f_2(s,x)=\dfrac{x^s}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\dfrac{\gamma(z)\ov{g(k-\ov{z})}x^{-z}}{z-s}\,dz\;,$$
where $\sigma$ is any real number greater than the real parts of all the
poles of $\gamma(z)$ and than $\Re(s)$.
\end{theorem}
Several comments are in order concerning this theorem:
\begin{enumerate}\item As already mentioned, the proof is a technical but
elementary exercise in complex analysis. In particular, it is very easy to
modify the formula to take into account possible poles of $L(s)$, see
\cite{Rub} once again.
\item
As in the unsmoothed case, the functions $f_i(s,x)$ are exponentially
decreasing as $x\to\infty$. Thus this gives fast formulas for computing values
of $L(s)$ for reasonable values of $s$. The very simplest case of this
approximate functional equation, even simpler than the Riemann zeta function,
is for the computation of the value at $s=1$ of the $L$-function of an
\emph{elliptic curve} $E$: if the sign of its functional equation is equal
to $+1$ (otherwise $L(E,1)=0$), the (unsmoothed) formula reduces to
$$L(E,1)=2\sum_{n\ge1}\dfrac{a(n)}{n}e^{-2\pi n/N^{1/2}}\;,$$
where $N$ is the conductor of the curve; a numerical sketch of this formula
is given after these remarks.
\item It is not difficult to show that as $n\to\infty$ we have a similar
behavior for the functions $f_i(s,n)$ as in the unsmoothed case
(Corollary \ref{asympunsmooth}), i.e.,
$$f_i(s,n)\sim C_i\cdot n^{B'_i}e^{-\pi d(n/N^{1/2})^{2/d}}$$
for some explicit constants $C_i$ and $B'_i$ (in the preceding example $d=2$).
\item The theorem can be used with $g(s)=1$ to compute values of
$L(s)$ for ``reasonable'' values of $s$. When $s$ is unreasonable,
for instance when $s=1/2+iT$ with $T$ large (to check the Riemann
hypothesis for instance), one chooses other functions $g(s)$ adapted
to the computation to be done, such as $g(s)=e^{is\th}$ or
$g(s)=e^{-a(s-s_0)^2}$; I refer to Rubinstein's paper for detailed
examples.
\item By choosing two very simple functions $g(s)$ such as $a^s$ for
two different values of $a$ close to $1$, one can compute numerically
the value of the root number $\omega$ if it is unknown. In a similar manner,
if the $a(n)$ are known but not $\omega$ nor the conductor $N$, by
choosing a few easy functions $g(s)$ one can find them. But much more
surprisingly, if almost nothing is known apart from the gamma factors and $N$,
say, by cleverly choosing a number of functions $g(s)$ and applying techniques
from numerical analysis such as singular value decomposition and least
squares methods, one can prove or disprove (numerically of course)
the existence of an $L$-function having the given gamma factors and
conductor, and find its first few Fourier coefficients if they exist.
This method has been used extensively by D.~Farmer in his search for
$\GL_3({\mathbb Z})$ and $\GL_4({\mathbb Z})$ Maass forms, by Poor and Yuen in computations
related to the paramodular conjecture of Brumer--Kramer and abelian surfaces,
and by A.~Mellit in the search of $L$-functions of degree $4$ with
integer coefficients and small conductor. Although a fascinating and
active subject, it would carry us too far afield to give more detailed
explanations.\end{enumerate}
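As an illustration of remark (2), the following Python sketch (ours, not from
the text) evaluates $L(E,1)$ for the curve of conductor $N=11$, whose
coefficients $a(n)$ can be read off the eta product
$q\prod_{m\ge1}(1-q^m)^2(1-q^{11m})^2$, the weight $2$ newform of level $11$:
\begin{verbatim}
# L(E,1) for the conductor-11 curve via the unsmoothed formula
# L(E,1) = 2 * sum_{n>=1} a(n)/n * exp(-2*pi*n/sqrt(N))  (sketch).
from math import exp, pi, sqrt

M, N = 200, 11   # exp(-2*pi*200/sqrt(11)) is utterly negligible

# q-expansion of prod (1-q^m)^2 (1-q^(11m))^2 up to degree M-1
poly = [0] * M
poly[0] = 1
for m in range(1, M):
    for _ in range(2):                      # multiply by (1 - q^m) twice
        for i in range(M - 1, m - 1, -1):
            poly[i] -= poly[i - m]
    if 11 * m < M:
        for _ in range(2):                  # multiply by (1 - q^(11m)) twice
            for i in range(M - 1, 11 * m - 1, -1):
                poly[i] -= poly[i - 11 * m]

a = [0] + poly[:M - 1]                      # shift by q: a(n) = poly[n-1]
L1 = 2 * sum(a[n] / n * exp(-2 * pi * n / sqrt(N)) for n in range(1, M))
print(L1)   # about 0.253842, the known nonzero value of L(E,1)
\end{verbatim}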
\subsection{Inverse Mellin Transforms}
We thus see that it is necessary to compute inverse Mellin transforms of some
common gamma factors. Note that the exponential factors (either involving the
conductor and/or $\pi$) are easily taken into account: if
$\gamma(s)=\M(W)(s)=\int_0^\infty W(t)t^s\,dt/t$ is the Mellin transform of $W(t)$,
we have for $a>0$, setting $u=at$:
$$\int_0^\infty W(at)t^s\,dt/t=\int_0^\infty W(u)u^sa^{-s}\,du/u=a^{-s}\gamma(s)\;,$$
so the inverse Mellin transform of $a^{-s}\gamma(s)$ is simply $W(at)$.
As we have seen, there exists an explicit formula for the inverse Mellin
transform, which is immediate from the Fourier inversion formula.
We will see that although this looks quite technical, it is in practice very
useful for computing inverse Mellin transforms.
Let us look at the simplest examples (omitting the exponential factor $f^{s/2}$
thanks to the above remark):
\begin{enumerate}
\item $\M^{-1}(\Gamma_{{\mathbb R}}(s))=2e^{-\pi x^2}$ (this occurs for $L$-functions of
even characters, and in particular for $\zeta(s)$).
\item $\M^{-1}(\Gamma_{{\mathbb R}}(s+1))=2xe^{-\pi x^2}$ (this occurs for $L$-functions of
odd characters).
\item $\M^{-1}(\Gamma_{{\mathbb C}}(s))=2e^{-2\pi x}$ (this occurs for $L$-functions
attached to modular forms and to elliptic curves).
\item $\M^{-1}(\Gamma_{{\mathbb R}}(s)^2)=4K_0(2\pi x)$ (this occurs for instance for
Dedekind zeta functions of real quadratic fields). Here $K_0(z)$ is a
well-known special function called a $K$-Bessel function. Of course this is
just a name, but it can be computed quite efficiently and can be found in
all computer algebra packages.
\item $\M^{-1}(\Gamma_{{\mathbb C}}(s)^2)=8K_0(4\pi x^{1/2})$.
\item $\M^{-1}(\Gamma_{{\mathbb C}}(s)\Gamma_{{\mathbb C}}(s-1))=8K_1(4\pi x^{1/2})/x^{1/2}$, where
$K_1(z)$ is another $K$-Bessel function which can be defined by
$K_1(z)=-K_0'(z)$.
\end{enumerate}
\begin{exercise} Prove all these formulas.
\end{exercise}
It is clear however that when the gamma factor is more complicated, we
cannot write such ``explicit'' formulas, for instance what must be done for
$\gamma(s)=\Gamma_{{\mathbb C}}(s)\Gamma_{{\mathbb R}}(s)$ or $\gamma(s)=\Gamma_{{\mathbb R}}(s)^3$ ? In fact all of
the above formulas involving $K$-Bessel functions are ``cheats'' in the sense
that we have simply given a \emph{name} to these inverse Mellin transform,
without explaining how to compute them.
\smallskip
However the Mellin inversion formula does provide such a method. The
main point to remember (apart of course from the crucial use of the Cauchy
residue formula and contour integration), is that the gamma function
\emph{tends to zero exponentially fast} on vertical lines, uniformly in the
real part (this may seem surprising if you have never seen it since
the gamma function grows so fast on the real axis, see appendix).
This exponential decrease implies that in the Mellin inversion
formula we can \emph{shift} the line of integration without changing
the value of the integral, as long as we take into account the residues
of the poles which are encountered along the way.
The line $\Re(s)=\sigma$ has been chosen so that $\sigma$ is larger than
the real part of any pole of $\gamma(s)$, so shifting to the right does not
bring anything. On the other hand, shifting towards the left shows that
for any $r<0$ not a pole of $\gamma(s)$ we have
$$W(t)=\sum_{\substack{s_0\text{ pole of $\gamma(s)$}\\\Re(s_0)>r}}\Res_{s=s_0}(t^{-s}\gamma(s))+\dfrac{1}{2\pi i}\int_{r-i\infty}^{r+i\infty}t^{-s}\gamma(s)\,ds\;.$$
Using the reflection formula for the gamma function
$\Gamma(s)\Gamma(1-s)=\pi/\sin(s\pi)$, it is easy to show that if $r$ stays say
half-way between the real part of two consecutive poles of $\gamma(s)$ then
$\gamma(s)$ will tend to $0$ exponentially fast on $\Re(s)=r$ as $r\to-\infty$,
in other words that the integral tends to $0$ (exponentially fast). We thus
have the \emph{exact formula}
$$W(t)=\sum_{s_0\text{ pole of $\gamma(s)$}}\Res_{s=s_0}(t^{-s}\gamma(s))\;.$$
Let us see the simplest examples of this, taken from those given above.
\begin{enumerate}
\item For $\gamma(s)=\Gamma_{{\mathbb C}}(s)=2\cdot(2\pi)^{-s}\Gamma(s)$ the poles of $\gamma(s)$
are at $s_0=-n$, $n$ a nonnegative integer, and since
$\Gamma(s)=\Gamma(s+n+1)/((s+n)(s+n-1)\cdots s)$, the residue at $s_0=-n$ is equal to
$$2\cdot (2\pi t)^n\Gamma(1)/((-1)(-2)\cdots(-n))=(-1)^n(2\pi t)^n/n!\;,$$
so we obtain $W(t)=2\sum_{n\ge0}(-1)^n(2\pi t)^n/n!=2\cdot e^{-2\pi t}$.
Of course we knew that!
\item For $\gamma(s)=\Gamma_{{\mathbb C}}(s)^2=4(2\pi)^{-2s}\Gamma(s)^2$, the inverse Mellin
transform is $8K_0(4\pi x^{1/2})$ whose expansion we do \emph{not} yet know.
The poles of $\gamma(s)$ are again for $s_0=-n$, but here all the poles are
double poles, so the computation is slightly more complicated. More precisely
we have $$\Gamma(s)^2=\Gamma(s+n+1)^2/((s+n)^2(s+n-1)^2\cdots s^2)\;,$$ so setting
$s=-n+\varepsilon$ with $\varepsilon$ small this gives
\begin{align*}\Gamma(-n+\varepsilon)^2&=\dfrac{\Gamma(1+\varepsilon)^2}{\varepsilon^2}\dfrac{1}{(1-\varepsilon)^2\cdots(n-\varepsilon)^2}\\
&=\dfrac{1+2\Gamma'(1)\varepsilon+O(\varepsilon^2)}{n!^2\varepsilon^2}(1+2\varepsilon/1)(1+2\varepsilon/2)\cdots(1+2\varepsilon/n)\\
&=\dfrac{1+2\Gamma'(1)\varepsilon+O(\varepsilon^2)}{n!^2\varepsilon^2}(1+2H_n\varepsilon)\;,\end{align*}
where we recall that $H_n=\sum_{1\le j\le n}1/j$ is the harmonic sum.
Since $(4\pi^2t)^{-(-n+\varepsilon)}=(4\pi^2t)^{n-\varepsilon}=(4\pi^2t)^n(1-\varepsilon\log(4\pi^2t)+O(\varepsilon^2))$, it follows
that
$$(4\pi^2t)^{-(-n+\varepsilon)}\Gamma(-n+\varepsilon)^2=\dfrac{(4\pi^2t)^n}{n!^2\varepsilon^2}(1+\varepsilon(2H_n+2\Gamma'(1)-\log(4\pi^2t)))\;,$$
so that the residue of $\gamma(s)$ at $s=-n$ is equal to
$4((4\pi^2t)^n/n!^2)(2H_n+2\Gamma'(1)-\log(4\pi^2t))$.
We thus have
$2K_0(4\pi t^{1/2})=\sum_{n\ge0}((4\pi^2t)^n/n!^2)(2H_n+2\Gamma'(1)-\log(4\pi^2t))$,
hence using the easily proven fact that $\Gamma'(1)=-\gamma$, where
$$\gamma=\lim_{n\to\infty}(H_n-\log(n))=0.57721566490\dots$$
is Euler's constant, this gives finally the expansion
$$K_0(t)=\sum_{n\ge0}\dfrac{(t/2)^{2n}}{n!^2}(H_n-\gamma-\log(t/2))\;.$$
\end{enumerate}
\begin{exercise} In a similar manner, or directly from this formula, find the
expansion of $K_1(t)$.
\end{exercise}
\begin{exercise}\label{exga}
Like all inverse Mellin transforms of gamma factors, the
function $K_0(x)$ tends to $0$ exponentially fast as $x\to\infty$
(more precisely $K_0(x)\sim(2x/\pi)^{-1/2}e^{-x}$). Note that this is
absolutely not ``visible'' on the expansion given above. Use this remark
and the above expansion to write an algorithm which computes Euler's
constant $\gamma$ \emph{very efficiently} to a given accuracy.
\end{exercise}
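One possible solution sketch for this exercise (ours; it is essentially the
classical Brent--McMillan idea): since $K_0(t)/I_0(t)\sim\pi e^{-2t}$, while
the expansion above gives $K_0(t)=S_1(t)-(\gamma+\log(t/2))I_0(t)$ with
$I_0(t)=\sum_{n\ge0}(t/2)^{2n}/n!^2$ and $S_1(t)=\sum_{n\ge0}H_n(t/2)^{2n}/n!^2$,
we obtain $\gamma=S_1(t)/I_0(t)-\log(t/2)+O(e^{-2t})$:
\begin{verbatim}
# Euler's constant from gamma = S1(t)/I0(t) - log(t/2) + O(e^{-2t}),
# a Brent--McMillan style sketch based on the K_0 expansion above.
from mpmath import mp, mpf, log

def euler_gamma(digits=50):
    mp.dps = digits + 10
    t = mpf(digits) * log(10) / 2 + 1     # so that pi*e^{-2t} < 10^{-digits}
    x = (t / 2) ** 2
    term, s0, s1, h = mpf(1), mpf(1), mpf(0), mpf(0)
    for n in range(1, 100 * digits):
        term *= x / n**2                  # term = (t/2)^{2n} / n!^2
        h += mpf(1) / n                   # harmonic number H_n
        s0 += term
        s1 += term * h
        if term < s0 * mpf(10) ** (-mp.dps):
            break
    return s1 / s0 - log(t / 2)

print(euler_gamma(50))   # 0.5772156649015328606065120900824024...
\end{verbatim}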
It must be remarked that even though the series defining the inverse Mellin
transform converge for \emph{all} $x>0$, one needs a large number of terms
before the terms become very small when $x$ is large. For instance, we have
seen that for $\gamma(s)=\Gamma(s)$ we have
$W(t)=\M^{-1}(\gamma)(t)=\sum_{n\ge0}(-1)^nt^n/n!=e^{-t}$,
but this series is not very good for computing $e^{-t}$.
\begin{exercise} Show that for $t>0$, to compute $e^{-t}$ to any reasonable
accuracy (even to $1$ decimal) we must take at least $n>3.6\cdot t$
($e=2.718...$), and work to accuracy at most $e^{-2t}$ in an evident sense.
\end{exercise}
The reason that this is not a good way is that there is catastrophic
cancellation in the series. One way to circumvent this problem is to
compute $e^{-t}$ as
$$e^{-t}=1/e^t=1/\sum_{n\ge0}t^n/n!\;,$$
and the cancellation problem disappears. However this is very special to
the exponential function, and is not applicable for instance to the
$K$-Bessel function.
Nonetheless, an important result is that for any inverse Mellin transform
as above, or more importantly for the corresponding incomplete gamma
product, there exist \emph{asymptotic expansions} as $x\to\infty$, in other
words nonconvergent series which however give a good approximation if limited
to a few terms.
Let us take the simplest example of the incomplete gamma function
$\Gamma(s,x)=\int_x^\infty t^se^{-t}\,dt/t$. The \emph{power series} expansion
is easily seen to be (at least for $s$ not a negative or zero integer,
otherwise the formula must be slightly modified):
$$\Gamma(s,x)=\Gamma(s)-\sum_{n\ge0}(-1)^n\dfrac{x^{n+s}}{n!(s+n)}\;,$$
which has the same type of (bad when $x$ is large) convergence behavior as
$e^{-x}$. On the other hand, it is immediate to prove by integration by parts
that
\begin{align*}\Gamma(s,x)&=e^{-x}x^{s-1}\left(1+\dfrac{s-1}{x}+\dfrac{(s-1)(s-2)}{x^2}+\cdots\right.\\
&\phantom{=}\left.+\dfrac{(s-1)(s-2)\cdots(s-n)}{x^n}+R_n(s,x)\right)\;,\end{align*}
and one can show that in reasonable ranges of $s$ and $x$ the modulus of
$R_n(s,x)$ is smaller than the first ``neglected term'' in an evident sense.
This is therefore quite a practical method for computing these functions
when $x$ is rather large.
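For instance, here is a small Python/mpmath sketch (ours) of this asymptotic
method for $\Gamma(s,x)$, with the optimal truncation at the smallest term,
compared against mpmath's built-in value:
\begin{verbatim}
# Gamma(s,x) by the asymptotic series e^{-x} x^{s-1} (1 + (s-1)/x + ...),
# truncated just before the terms start to grow (illustrative sketch).
from mpmath import mp, mpf, exp, gammainc

mp.dps = 30

def gamma_upper_asymp(s, x, nmax=100):
    term, total = mpf(1), mpf(1)
    for n in range(1, nmax):
        new = term * (s - n) / x        # next term (s-1)...(s-n)/x^n
        if abs(new) >= abs(term):       # terms no longer decrease: stop
            break
        term = new
        total += term
    return exp(-x) * x**(s - 1) * total

s, x = mpf('2.5'), mpf('20')
print(gamma_upper_asymp(s, x))          # asymptotic value
print(gammainc(s, x))                   # mpmath's reference value
\end{verbatim}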
\begin{exercise} Explain why the asymptotic series above terminates when
$s$ is a strictly positive integer.
\end{exercise}
\subsection{Hadamard Products and Explicit Formulas}
This could be the subject of a course in itself, so we will be quite
brief. I refer to Mestre's paper \cite{Mes} for a precise and
general statement (note that there are quite a number of evident
misprints in the paper).
\smallskip
In Theorem \ref{thmapprox} we assume that the $L$-series that we consider
satisfy a functional equation, together with some mild growth conditions,
in particular that they are of finite order. According to a well-known
theorem of complex analysis, this implies that they have a so-called
\emph{Hadamard product}, see Appendix. For instance, in the case of the
Riemann zeta function, which is of order $1$, we have
$$\zeta(s)=\dfrac{e^{bs}}{s(s-1)\Gamma(s/2)}\prod_{\rho}\left(1-\dfrac{s}{\rho}\right)e^{s/\rho}\;,$$
where the product is over all nontrivial zeros of $\zeta(s)$ (i.e., such that
$0\le\Re(\rho)\le1$), and $b=\log(2\pi)-1-\gamma/2$. In fact, this can be written
in a much nicer way as follows: recall that
$\Lambda(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$ satisfies $\Lambda(1-s)=\Lambda(s)$. Then
$$s(s-1)\Lambda(s)=\prod_{\rho}\left(1-\dfrac{s}{\rho}\right)\;,$$
where it is now understood that the product is taken as the limit as
$T\to\infty$ of $\prod_{|\Im(\rho)|\le T}(1-s/\rho)$.
\smallskip
However, almost all $L$-functions that are used in number theory not only
have the above properties, but have also \emph{Euler products}. Taking again
the example of $\zeta(s)$, we have for $\Re(s)>1$ the Euler product
$\zeta(s)=\prod_p(1-1/p^s)^{-1}$. It follows that (in a suitable range of $s$)
we have equality between two products, hence taking logarithms, equality
between two \emph{sums}. In our case the Hadamard product gives
$$\log(\Lambda(s))=-\log(s(s-1))+\sum_{\rho}\log(1-s/\rho)\;,$$
while the Euler product gives
\begin{align*}\log(\Lambda(s))&=-(s/2)\log(\pi)+\log(\Gamma(s/2))-\sum_p\log(1-1/p^s)\\
&=-(s/2)\log(\pi)+\log(\Gamma(s/2))+\sum_{p,k\ge1}1/(kp^{ks})\;.\end{align*}
Equating the two sides gives a relation between on the one hand a
sum over the nontrivial zeros of $\zeta(s)$, and on the other hand a
sum over prime powers.
In itself, this is not very useful. The crucial idea is to introduce
a test function $F$ which we will choose to the best of our interests,
and obtain a formula depending on $F$ and some transforms of it.
This is in fact quite easy to do, and even though not very useful in this
case, let us perform the computation for Dirichlet $L$-function of
even primitive characters.
\begin{theorem} Let $\chi$ be an even primitive Dirichlet character of
conductor $N$, and let $F$ be a real function satisfying a number of easy
technical conditions (see \cite{Mes}). We have the \emph{explicit formula}:
\begin{align*}\sum_{\rho}\Phi(\rho)&-2\delta_{N,1}\int_{-\infty}^\infty F(x)\cosh(x/2)\,dx\\
&=-\sum_{p,k\ge1}\dfrac{\log(p)}{p^{k/2}}(\chi^k(p)F(k\log(p))+\ov{\chi^k(p)}F(-k\log(p)))\\
&\phantom{=}+F(0)\log(N/\pi)\\
&\phantom{=}+\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}\dfrac{F(x/2)+F(-x/2)}{2}\right)\,dx\;,\end{align*}
where we set
$$\Phi(s)=\int_{-\infty}^\infty F(x)e^{(s-1/2)x}\,dx\;,$$
and as above the sum on $\rho$ is a sum over all the nontrivial zeros of
$L(\chi,s)$ taken symmetrically
($\sum_{\rho}=\lim_{T\to\infty}\sum_{|\Im(\rho)|\le T}$).
\end{theorem}
\begin{remarks}{\rm \begin{enumerate}
\item Write $\rho=1/2+i\gamma$ (if the GRH is true all $\gamma$ are real,
but even without GRH we can always write this). Then
$$\Phi(\rho)=\int_{-\infty}^\infty F(x)e^{i\gamma x}\,dx=\widehat{F}(\gamma)$$
is simply the value at $\gamma$ of the \emph{Fourier transform}
$\widehat{F}$ of $F$.
\item It is immediate to generalize to odd $\chi$ or more general
$L$-functions:
\begin{exercise} After studying the proof, generalize to an arbitrary pair
of $L$-functions as in Theorem \ref{thmapprox}.
\end{exercise}
\end{enumerate}}
\end{remarks}
\begin{proof} The proof is not difficult, but involves a number of integral
transform computations. We will omit some detailed justifications which
are in fact easy but boring.
As in the theorem, we set
$$\Phi(s)=\int_{-\infty}^\infty F(x)e^{(s-1/2)x}\,dx\;,$$
and we first prove some lemmas.
\begin{lemma}
We have the inversion formulas, valid for any $c>1$:
$$F(x)=e^{x/2}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)e^{-sx}\,ds\quad\text{and}\quad
F(-x)=e^{x/2}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(1-s)e^{-sx}\,ds\;.$$
\end{lemma}
\begin{proof} This is in fact a hidden version of the Mellin inversion formula:
setting $t=e^x$ in the definition of $\Phi(s)$, we deduce that
$\Phi(s)=\int_0^\infty F(\log(t))t^{s-1/2}\,dt/t$, so that
$\Phi(s+1/2)$ is the Mellin transform of $F(\log(t))$. By Mellin inversion
we thus have for sufficiently large $\sigma$:
$$F(\log(t))=\dfrac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+i\infty}\Phi(s+1/2)t^{-s}\,ds\;,$$
so changing $s$ into $s-1/2$ and $t$ into $e^x$ gives the first formula
for $c=\sigma+1/2$ sufficiently large, and the assumptions on $F$ (which
we have not given) imply that we can shift the line of integration to any
$c>1$ without changing the integral.
For the second formula, we simply note that
$$\Phi(1-s)=\int_{-\infty}^\infty F(x)e^{-(s-1/2)x}\,dx
=\int_{-\infty}^\infty F(-x)e^{(s-1/2)x}\,dx\;,$$
so we simply apply the first formula to $F(-x)$.\qed\end{proof}
\begin{corollary} For any $c>1$ and any $p\ge1$ we have
\begin{align*}
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)p^{-ks}\,ds&=F(k\log(p))p^{-k/2}\text{\quad and}\\
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(1-s)p^{-ks}\,ds&=F(-k\log(p))p^{-k/2}\;.
\end{align*}
\end{corollary}
\begin{proof} Simply apply the lemma to $x=k\log(p)$.\qed\end{proof}
Note that we will also use this corollary for $p=1$.
\begin{lemma} Denote as usual by $\psi(s)$ the logarithmic derivative
$\Gamma'(s)/\Gamma(s)$ of the gamma function. We have
\begin{align*}
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)\psi(s/2)\,ds&=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}F(x/2)\right)\,dx\text{\quad and}\\
\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(1-s)\psi(s/2)\,ds&=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}F(-x/2)\right)\,dx\;.\end{align*}
\end{lemma}
\begin{proof} We use one of the most common integral representations of
$\psi$, see Proposition 9.6.43 of \cite{Coh4}: we have
$$\psi(s)=\int_0^\infty\left(\dfrac{e^{-x}}{x}-\dfrac{e^{-sx}}{1-e^{-x}}\right)\,dx\;.$$
Thus, assuming that we can interchange integrals (which is easy to justify),
we have, using the preceding lemma:
\begin{align*}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)\psi(s/2)\,ds&=\int_0^\infty\left(\dfrac{e^{-x}}{x}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)\,ds\right.\\
&\phantom{=}\left.-\dfrac{1}{1-e^{-x}}\dfrac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\Phi(s)e^{-(s/2)x}\,ds\right)\,dx\\
&=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}F(x/2)\right)\,dx\;,\end{align*}
proving the first formula, and the second follows by changing $F(x)$ into
$F(-x)$.\qed\end{proof}
\smallskip
\noindent
{\it Proof of the theorem.\/} Recall from above that if we set
$\L(s)=N^{s/2}\pi^{-s/2}\Gamma(s/2)L(\chi,s)$ we have the functional
equation $\L(1-s)=\omega(\chi)\L(\ov{\chi},s)$ for some $\omega(\chi)$ of modulus
$1$.
For $c>1$, consider the following integral
$$J=\dfrac{1}{2i\pi}\int_{c-i\infty}^{c+i\infty}\Phi(s)\dfrac{\L'(s)}{\L(s)}\,ds\;,$$
which by our assumptions does not depend on $c>1$. We shift the line of
integration to the left (it is easily seen that this is allowed) to the
line $\Re(s)=1-c$, so by the residue theorem we obtain
$$J=S+\dfrac{1}{2i\pi}\int_{1-c-i\infty}^{1-c+i\infty}\Phi(s)\dfrac{\L'(s)}{\L(s)}\,ds\;,$$
where $S$ is the sum of the residues in the rectangle $[1-c,c]\times{\mathbb R}$.
We first have possible poles at $s=0$ and $s=1$, which occur only for $N=1$,
and they contribute to $S$
$$-\delta_{N,1}(\Phi(0)+\Phi(1))=-2\delta_{N,1}\int_{-\infty}^\infty F(x)\cosh(x/2)\,dx\;,$$
and of course second we have the contributions from the nontrivial zeros
$\rho$, which contribute $\sum_{\rho}\Phi(\rho)$, where it is understood that
zeros are counted with multiplicity, so that
$$S=-2\delta_{N,1}\int_{-\infty}^\infty F(x)\cosh(x/2)\,dx+\sum_{\rho}\Phi(\rho)\;.$$
On the other hand, by the functional equation we have
$\L'(1-s)/\L(1-s)=-\overline{\Lambda}'(s)/\overline{\Lambda}(s)$ (note that this does not involve
$\omega(\chi)$), where we write $\overline{\Lambda}(s)$ for $\L(\ov{\chi},s)$, so that
\begin{align*}\int_{1-c-i\infty}^{1-c+i\infty}\Phi(s)\dfrac{\L'(s)}{\L(s)}\,ds
&=\int_{c-i\infty}^{c+i\infty}\Phi(1-s)\dfrac{\L'(1-s)}{\L(1-s)}\,ds\\
&=-\int_{c-i\infty}^{c+i\infty}\Phi(1-s)\dfrac{\overline{\Lambda}'(s)}{\overline{\Lambda}(s)}\,ds\;.\end{align*}
Thus,
\begin{align*}S&=J-\dfrac{1}{2i\pi}\int_{1-c-i\infty}^{1-c+i\infty}\Phi(s)\dfrac{\L'(s)}{\L(s)}\,ds\\
&=\dfrac{1}{2i\pi}\int_{c-i\infty}^{c+i\infty}\left(\Phi(s)\dfrac{\L'(s)}{\L(s)}+\Phi(1-s)\dfrac{\overline{\Lambda}'(s)}{\overline{\Lambda}(s)}\right)\,ds\;.\end{align*}
Now by definition we have as above
$$\log(\L(s))=\dfrac{s}{2}\log(N/\pi)+\log\left(\Gamma\left(\dfrac{s}{2}\right)\right)+\sum_{p,k\ge1}\dfrac{\chi^k(p)}{kp^{ks}}$$
(where the double sum is over primes and integers $k\ge1$), so
$$\dfrac{\L'(s)}{\L(s)}=\dfrac{1}{2}\log(N/\pi)+\dfrac{1}{2}\psi(s/2)
-\sum_{p,k\ge1}\chi^k(p)\log(p)p^{-ks}\;,$$
and similarly for $\overline{\Lambda}'(s)/\overline{\Lambda}(s)$. Thus, by the above lemmas and corollaries,
we have
$$S=\log(N/\pi)F(0)+J_1-\sum_{p,k\ge1}\dfrac{\log(p)}{p^{k/2}}(\chi^k(p)F(k\log(p))+\ov{\chi^k(p)}F(-k\log(p)))\;,$$
where
$$J_1=\int_0^\infty\left(\dfrac{e^{-x}}{x}F(0)-\dfrac{e^{-x/4}}{1-e^{-x}}\dfrac{F(x/2)+F(-x/2)}{2}\right)\,dx\;,$$
proving the theorem.\qed\end{proof}
\smallskip
This theorem can be used in several different directions, and has
been an extremely valuable tool in analytic number theory. Just to
mention a few:
\begin{enumerate}\item Since the conductor $N$ occurs, we can obtain
\emph{bounds} on $N$, assuming certain conjectures such as the
generalized Riemann hypothesis. For instance, this is how
Stark--Odlyzko--Poitou--Serre find \emph{lower bounds for discriminants}
of number fields. This is also how Mestre finds lower bounds
for conductors of abelian varieties, and so on.
\item When the $L$-function has a zero at its central point (here of
course it usually does not, but for more general $L$-functions
it is important), this can give good upper bounds for the order
of the zero.
\item More generally, suitable choices of the test functions
can give information on the nontrivial zeros $\rho$ of small
imaginary part.
\end{enumerate}
\section{Some Useful Analytic Computational Tools}
We finish this course by giving a number of little-known numerical methods
which are not always directly related to the computation of $L$-functions, but
which are often very useful.
\subsection{The Euler--MacLaurin Summation Formula}
This numerical method is \emph{very} well-known (there is in fact even a
whole chapter in Bourbaki devoted to it!), and is as old as Taylor's
formula, but deserves to be mentioned since it is very useful. We will be
vague on purpose, and refer to \cite{Bou} or Section 9.2 of \cite{Coh4} for
details. Recall that the \emph{Bernoulli numbers} are defined by the formal
power series
$$\dfrac{T}{e^T-1}=\sum_{n\ge0}\dfrac{B_n}{n!}T^n\;.$$
We have $B_0=1$, $B_1=-1/2$, $B_2=1/6$, $B_3=0$, $B_4=-1/30$, and
$B_{2k+1}=0$ for $k\ge1$.
Let $f$ be a $C^\infty$ function defined on ${\mathbb R}_{>0}$. The basic statement of
the Euler--MacLaurin formula is that there exists a constant $z=z(f)$ such that
$$\sum_{n=1}^Nf(n)=\int_1^N f(t)\,dt+z(f)+\dfrac{f(N)}{2}+\sum_{1\le k\le p}
\dfrac{B_{2k}}{(2k)!}f^{(2k-1)}(N)+R_p(N)\;,$$
where $R_p(N)$ is ``small'', in general smaller than the first neglected term,
as in most asymptotic series.
The above formula can be slightly modified at will, first by changing the
lower bound of summation and/or of integration (which simply changes the
constant $z(f)$), and second by writing
$\int_1^Nf(t)\,dt+z(f)=z'(f)-\int_N^\infty f(t)\,dt$ (when $f$ tends to
$0$ sufficiently fast for the integral to converge), where
$z'(f)=z(f)+\int_1^\infty f(t)\,dt$.
\medskip
The Euler--MacLaurin summation formula can be used in many contexts, but we
mention the two most important ones.
$\bullet$ First, to have some idea of the size of $\sum_{n=1}^Nf(n)$.
Let us take an example. Consider $S_2(N)=\sum_{n=1}^N n^2\log(n)$. Note
incidentally that
$$\exp(S_2(N))=\prod_{n=1}^N n^{n^2}=1^{1^2}2^{2^2}\cdots N^{N^2}\;.$$
What is the size of this generalized kind of factorial? Euler--MacLaurin
tells us that there exists a constant $z$ such that
\begin{align*}S_2(N)&=\int_1^N t^2\log(t)\,dt+z+\dfrac{N^2\log(N)}{2}\\
&\phantom{=}+\dfrac{B_2}{2!}(N^2\log(N))'+\dfrac{B_4}{4!}(N^2\log(N))'''+\cdots\;.\end{align*}
We have $\int_1^N t^2\log(t)\,dt=(N^3/3)\log(N)-(N^3-1)/9$,
$(N^2\log(N))'=2N\log(N)+N$, $(N^2\log(N))''=2\log(N)+3$, and
$(N^2\log(N))'''=2/N$, so using $B_2=1/6$ we obtain for some other constant
$z'$:
$$S_2(N)=\dfrac{N^3\log(N)}{3}-\dfrac{N^3}{9}+\dfrac{N^2\log(N)}{2}+\dfrac{N\log(N)}{6}+\dfrac{N}{12}+z'+O\left(\dfrac{1}{N}\right)\;,$$
which essentially answers our question, up to the determination of the constant
$z'$. Thus we obtain a generalized Stirling's formula:
$$\exp(S_2(N))=N^{N^3/3+N^2/2+N/6}e^{-(N^3/9-N/12)}C\;,$$
where $C=\exp(z')$ is an a priori unknown constant. In the case of the usual
Stirling's formula we have $C=(2\pi)^{1/2}$, so we can ask for a similar
formula here. And indeed, such a formula exists: we have
$$C=\exp(\zeta(3)/(4\pi^2))\;.$$
\begin{exercise} Do a similar (but simpler) computation for
$S_1(N)=\sum_{1\le n\le N}n\log(n)$. The corresponding constant is explicit
but more difficult (it involves $\zeta'(-1)$; more generally the constant
in $S_r(N)$ involves $\zeta'(-r)$).
\end{exercise}
$\bullet$ The second use of the Euler--MacLaurin formula is to increase
considerably the speed of convergence of slowly convergent series.
For instance, if you want to compute $\zeta(3)$ directly using the series
$\zeta(3)=\sum_{n\ge1}1/n^3$, since the remainder term after $N$ terms is
asymptotic to $1/(2N^2)$ you will never get more than $15$ or $20$ decimals
of accuracy. On the other hand, it is immediate to use Euler--MacLaurin:
\begin{exercise} Write a computer program implementing the computation
of $\zeta(3)$ (and more generally of $\zeta(s)$ for reasonable $s$) using
Euler--MacLaurin, and compute it to $100$ decimals.
\end{exercise}
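One possible sketch of such a program (ours, not from the text), using the
standard Euler--MacLaurin tail for $f(n)=n^{-s}$:
\begin{verbatim}
# zeta(s) by Euler--MacLaurin: sum n^-s for n < N, then the corrections
# N^(1-s)/(s-1) + N^-s/2 + sum_k B_{2k}/(2k)! (s)_{2k-1} N^{1-s-2k},
# where (s)_m denotes the rising factorial s(s+1)...(s+m-1).
from mpmath import mp, mpf, bernoulli, rf, factorial

def zeta_em(s, N=60, p=60):
    total = sum(mpf(n) ** (-s) for n in range(1, N))
    total += mpf(N) ** (1 - s) / (s - 1) + mpf(N) ** (-s) / 2
    for k in range(1, p + 1):
        total += (bernoulli(2 * k) / factorial(2 * k) * rf(s, 2 * k - 1)
                  * mpf(N) ** (1 - s - 2 * k))
    return total

mp.dps = 105
print(zeta_em(mpf(3)))   # zeta(3) = 1.20205690315959428539973816151...
\end{verbatim}
With $N=p=60$ the first neglected term is already below $10^{-100}$.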
A variant of the method is to compute limits: a typical example is the
computation of Euler's constant
$$\gamma=\lim_{N\to\infty}\left(\sum_{n=1}^N\dfrac{1}{n}-\log(N)\right)\;.$$
Using Euler--MacLaurin, it is immediate to find the \emph{asymptotic expansion}
$$\sum_{n=1}^N\dfrac{1}{n}=\log(N)+\gamma+\dfrac{1}{2N}-\sum_{k\ge1}\dfrac{B_{2k}}{2kN^{2k}}$$
(note that this is not a misprint, the last denominator is $2kN^{2k}$, not
$(2k)!N^{2k}$).
\begin{exercise} Implement the above, and compute $\gamma$ to $100$ decimal
digits.\end{exercise}
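One possible sketch (ours): with $N=K=80$ the expansion above already gives
well over $100$ correct digits.
\begin{verbatim}
# Euler's constant from the asymptotic expansion of H_N given above:
# gamma = H_N - log(N) - 1/(2N) + sum_k B_{2k}/(2k N^{2k})  (sketch).
from mpmath import mp, mpf, log, bernoulli

mp.dps = 110
N, K = 80, 80
H = sum(mpf(1) / n for n in range(1, N + 1))
g = H - log(N) - mpf(1) / (2 * N)
g += sum(bernoulli(2 * k) / (2 * k * mpf(N) ** (2 * k)) for k in range(1, K + 1))
print(g)   # 0.5772156649015328606065120900824024...
\end{verbatim}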
Note that this is \emph{not} the fastest way to compute Euler's constant,
the method using Bessel functions given in Exercise \ref{exga} is better.
\subsection{Variant: Discrete Euler--MacLaurin}
One problem with the Euler--MacLaurin method is that we need to compute
the derivatives $f^{(2k-1)}(N)$. When $k$ is tiny, say $k=2$ or $k=3$, this
can be done explicitly. When $f(x)$ has a special form, such as
$f(x)=1/x^{\alpha}$, it is very easy to compute all derivatives. In fact, this
is more generally the case when the expansion of $f(1/x)$ around $x=0$ is
known explicitly. But in general none of this is available.
One way around this is to use finite differences instead of derivatives:
we can easily compute
$$\Delta_{\delta}(f)(x)=(f(x+\delta)-f(x-\delta))/(2\delta)$$
and iterates of this, where $\delta$ is some fixed and nonzero number.
The choice of $\delta$ is essential: it should not be too large, otherwise
$\Delta_{\delta}(f)$ would be too far away from the true derivative
(which will be reflected in the speed of convergence of the asymptotic
formula), and it should not be too small, otherwise catastrophic cancellation
errors will occur. After numerous trials, the value $\delta=1/4$ seems
reasonable.
One last thing must be done: find the analogue of the Bernoulli numbers.
This is a very instructive exercise which we leave to the reader.
\subsection{Zagier's Extrapolation Method}
The following nice trick is due to D.~Zagier. Assume that you have
a sequence $u_n$ that you suspect of converging to some limit $a_0$ when
$n\to\infty$ in a regular manner. How do you give a reasonable numerical
estimate of $a_0$?
Assume for instance that as $n\to\infty$ we have
$u_n=\sum_{0\le i\le p}a_i/n^i+O(n^{-p-1})$ for any $p$. One idea would be to
choose suitable values of $n$ and solve a linear system. This would in
general be quite unstable and inaccurate. Zagier's trick is instead to
proceed as follows: choose some reasonable integer $k$, say $k=10$, set
$u'_n=n^ku_n$, and compute the $k$th \emph{forward difference}
$\Delta^k(u'_n)$ of this sequence (the forward difference of a sequence $w_n$
is the sequence $\Delta(w)_n=w_{n+1}-w_n$). Note that
$$u'_n=a_0n^k+\sum_{1\le i\le k}a_in^{k-i}+O(1/n)\;.$$
The two crucial points are the following:
\begin{itemize}\item The $k$th forward difference of a polynomial of degree
less than or equal to $k-1$ vanishes, and that of $n^k$ is equal to
$k!$.
\item Assuming reasonable regularity conditions, the $k$th forward difference
of an asymptotic expansion beginning at $1/n$ will begin at $1/n^{k+1}$.
\end{itemize}
Thus, under reasonable assumptions we have
$$a_0=\Delta^k(u')_n/k!+O(1/n^{k+1})\;,$$
so choosing $n$ large enough can give a good estimate for $a_0$.
A number of remarks concerning this basic method:
\begin{remarks}{\rm \begin{enumerate}
\item It is usually preferable to apply this not to the sequence $u_n$
itself, but for instance to the sequence $u_{n+100}$, if it is not too
expensive to compute, since the first terms of $u_n$ are usually far from
the asymptotic expansion.
\item It is immediate to modify the method to compute further coefficients
$a_1$, $a_2$, etc.
\item If the asymptotic expansion of $u_n$ is (for instance) in powers of
$1/n^{1/2}$, it is not difficult to modify this method, see below.
\end{enumerate}}
\end{remarks}
\medskip
{\bf Example.} Let us compute numerically the constant occurring in
the first example of the use of Euler--MacLaurin that we have given. We
set
$$u_N=\sum_{1\le n\le N}n^2\log(n)-(N^3/3+N^2/2+N/6)\log(N)+N^3/9-N/12\;.$$
We compute for instance that $u_{1000}=0.0304456\cdots$, which has only
$4$ correct decimal digits. On the other hand, if we apply the above
trick with $k=12$ and $N=100$, we find
$$a_0=\lim_{N\to\infty}u_N=0.0304484570583932707802515304696767\cdots$$
with $28$ correct decimal digits: recall that the exact value is
$$\zeta(3)/(4\pi^2)=0.03044845705839327078025153047115477\cdots\;.$$
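The computation is easy to reproduce; here is a sketch in Python/mpmath
(ours), using $\Delta^k(w)_n=\sum_{0\le j\le k}(-1)^{k-j}\binom{k}{j}w_{n+j}$:
\begin{verbatim}
# Zagier extrapolation for the constant in the S_2(N) example (sketch):
# a0 ~ Delta^k(n^k u_n)/k!  evaluated at n = N, with k = 12, N = 100.
from mpmath import mp, mpf, log, pi, zeta, binomial, factorial

mp.dps = 50

def u(N):
    s = sum(mpf(n) ** 2 * log(n) for n in range(1, N + 1))
    return (s - (mpf(N)**3 / 3 + mpf(N)**2 / 2 + mpf(N) / 6) * log(N)
            + mpf(N)**3 / 9 - mpf(N) / 12)

k, N = 12, 100
a0 = sum((-1) ** (k - j) * binomial(k, j) * mpf(N + j) ** k * u(N + j)
         for j in range(k + 1)) / factorial(k)
print(a0)                       # extrapolated limit, ~28 correct digits
print(zeta(3) / (4 * pi**2))    # exact value, for comparison
\end{verbatim}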
\medskip
Assume now that $u_n$ has an asymptotic expansion in integral powers of
$1/n^{1/2}$, i.e.,
$u_n=\sum_{0\le i\le p}a_i/n^{i/2}+O(n^{-(p+1)/2})$ for any $p$. We can modify
the above method as follows. First write
$u_n=v_n+w_n/n^{1/2}$, where $v_n=\sum_{0\le i\le q}a_{2i}/n^i+O(n^{-q-1})$
and $w_n=\sum_{0\le i\le q}a_{2i+1}/n^i+O(n^{-q-1})$ are two sequences as
above. Once again we choose some reasonable integer $k$ such as $k=10$, and
we now multiply the sequence $u_n$ by $n^{k-1/2}$, so we set
$u'_n=n^{k-1/2}u_n=n^{k-1/2}v_n+n^{k-1}w_n$. Thus, when we compute
the $k$th forward difference we will have
$$\Delta^k(n^{k-1/2}v_n)=\dfrac{(k-1/2)(k-3/2)\cdots 1/2}{n^{1/2}}\left(a_0+\sum_{0\le i\le q+k}b_{k,i}/n^i\right)$$
for certain coefficients $b_{k,i}$, while as above since
$n^{k-1}w_n=P_{k-1}(n)+O(1/n)$ for some polynomial $P_{k-1}(n)$ of degree
$k-1$, we have $\Delta^k(n^{k-1}w_n)=O(1/n^k)$. Thus we have essentially
eliminated the sequence $w_n$, so we now apply the usual method to
$v'_n=n^{1/2}\Delta^k(n^{k-1/2}v_n)$, which has an expansion in integral
powers of $1/n$: we will thus have
$$\Delta^k(n^kv'_n)/k!=((k-1/2)(k-3/2)\cdots(1/2))\,a_0+O(1/n^k)$$
(in fact we do not even have to take the same $k$ for this last step).
This method can immediately be generalized to sequences $u_n$ having an
asymptotic expansion in integral powers of $n^{1/q}$ for small integers $q$.
\subsection{Computation of Euler Sums and Euler Products}
Assume that we want to compute numerically
$$S_1=\prod_p\left(1+\dfrac{1}{p^2}\right)\;,$$
where here and elsewhere, the expression $\prod_p$ always means the product
over all prime numbers. Trying to compute it using a large table of prime
numbers will not give much accuracy: if we use primes up to $X$, we will
make an error of the order of $1/X$, so it will be next to impossible to
have more than $8$ or $9$ decimal digits.
On the other hand, if we simply notice that $1+1/p^2=(1-1/p^4)/(1-1/p^2)$,
by definition of the Euler product for the Riemann zeta function this implies
that
$$S_1=\dfrac{\zeta(2)}{\zeta(4)}=\dfrac{\pi^2/6}{\pi^4/90}=\dfrac{15}{\pi^2}=1.519817754635066571658\cdots\;.$$
Unfortunately this is based on a special identity. What if we wanted instead
to compute $S_2=\prod_p(1+2/p^2)$? There is no special identity to help us
here.
The way around this problem is to approximate the function of which we want
to take the product (here $1+2/p^2$) by \emph{infinite products} of values
of the Riemann zeta function. Let us do it step by step before giving the
general formula.
When $p$ is large, $1+2/p^2$ is close to $1/(1-1/p^2)^2$, which is the
Euler factor for $\zeta(2)^2$. More precisely,
$(1+2/p^2)(1-1/p^2)^2=1-3/p^4+2/p^6$, so we deduce that
$$S_2=\zeta(2)^2\prod_p(1-3/p^4+2/p^6)=(\pi^4/36)\prod_p(1-3/p^4+2/p^6)\;.$$
Even though this looks more complicated, what we have gained is that the
new Euler product converges \emph{much} faster. Once again, if we compute it
for $p$ up to $10^8$, say, instead of having $8$ decimal digits we now
have approximately $24$ decimal digits (convergence in $1/X^3$ instead
of $1/X$). But there is no reason to stop there: we have
$(1-3/p^4+2/p^6)/(1-1/p^4)^3=1+O(1/p^6)$ with evident notation and explicit
formulas if desired, so we get an even better approximation by writing
$S_2=\zeta(2)^2/\zeta(4)^3\prod_p(1+O(1/p^6))$, with convergence in $1/X^5$.
More generally, it is easy to compute by induction exponents $a_n\in{\mathbb Z}$ such
that $S_2=\prod_{2\le n\le N}\zeta(n)^{a_n}\prod_p(1+O(1/p^{N+1}))$
(in our case $a_n=0$ for $n$ odd but this will not be true in general).
It can be shown in essentially all examples that one can pass to the limit,
and for instance here write $S_2=\prod_{n\ge2}\zeta(n)^{a_n}$.
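In {\tt Pari/GP} this whole mechanism is packaged in the built-in command
{\tt prodeulerrat} (described in the command list at the end of these notes),
so for instance:
\begin{verbatim}
? prodeulerrat(1 + 2/x^2)             \\ computes S_2 directly
? prodeulerrat(1 + 1/x^2) - 15/Pi^2   \\ recovers S_1: essentially 0
\end{verbatim}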
\begin{exercise}\begin{enumerate}
\item Compute explicitly the recursion for the $a_n$ in the example of $S_2$.
\item More generally, if $S=\prod_pf(p)$, where $f(p)$ has a convergent
series expansion in $1/p$ starting with $f(p)=1+1/p^b+o(1/p^b)$ with $b>1$
(not necessarily integral), express $S$ as a product of zeta values raised
to suitable exponents, and find the recursion for these exponents.
\end{enumerate}\end{exercise}
An important remark needs to be made here: even though the product
$\prod_{n\ge2}\zeta(n)^{a_n}$ may be convergent, it may converge rather slowly:
remember that when $n$ is large we have $\zeta(n)-1\sim1/2^n$, so that in fact
if the $a_n$ grow like $3^n$ the product will not even converge.
The way around this, which must be used even when the product converges, is
as follows: choose a reasonable integer $N$, for instance $N=50$, and
compute $\prod_{p\le 50}f(p)$, which is of course very fast. Then
the tail $\prod_{p>50}f(p)$ of the Euler product will be equal to
$\prod_{n\ge2}\zeta_{>50}(n)^{a_n}$, where $\zeta_{>N}(n)$ is the zeta function
without its Euler factors up to $N$, in other words
$\zeta_{>N}(n)=\zeta(n)\prod_{p\le N}(1-1/p^n)$ (I am assuming here that we have
zeta values at integers as in the $S_2$ example above, but it is immediate
to generalize). Since $\zeta_{>N}(n)-1\sim1/(N+1)^n$,
the convergence of our zeta product will of course be considerably faster.
\smallskip
Note that by using the power series expansion of the logarithm
together with \emph{M\"obius inversion}, it is immediate to do the same for
Euler \emph{sums}, for instance to compute $\sum_p1/p^2$ and the like,
see Section 10.3.6 of \cite{Coh4} for details. Using \emph{derivatives} of the
zeta function we can compute Euler sums of the type $\sum_p\log(p)/p^2$, and
using antiderivatives we can compute sums of the type $\sum_p1/(p^2\log(p))$.
We can even compute sums of the form $\sum_p\log(\log(p))/p^2$, but this
is slightly more subtle: it involves taking derivatives with respect to the
order of \emph{fractional derivation}.
We can also compute products and sums over primes
which involve Dirichlet characters, as long as their conductor is small,
as well as such products and sums where the primes are restricted to
certain congruence classes:
\begin{exercise} Compute to 100 decimal digits
$$\prod_{p\equiv1\pmod{4}}(1-1/p^2)\quad\text{and}\quad\prod_{p\equiv1\pmod4}(1+1/p^2)$$
by using products of $\zeta(ns)$ and of $L(\chi_{-4},ns)$ as above, where
as usual $\chi_{-4}$ is the character $\lgs{-4}{n}$.
\end{exercise}
\subsection{Summation of Alternating Series}
This is due to F.~Rodriguez-Villegas, D.~Zagier, and the author \cite{Coh-Vil-Zag}.
We have seen above the use of the Euler--MacLaurin summation formula to sum
quite general types of series. If the series is \emph{alternating} (the terms
alternate in sign), the method cannot be used as is, but it is trivial to
modify it: simply write
$$\sum_{n\ge1}(-1)^nf(n)=\sum_{n\ge1}f(2n)-\sum_{n\ge1}f(2n-1)$$
and apply Euler--MacLaurin to each sum. One can even do better and avoid this
double computation, but this is not what I want to mention here.
A completely different method which is much simpler since it avoids completely
the computation of derivatives and Bernoulli numbers, due to the above authors,
is as follows. The idea is to express (if possible) $f(n)$ as a \emph{moment}
$$f(n)=\int_0^1 x^nw(x)\,dx$$
for some \emph{weight function} $w(x)$. Then it is clear that
$$S=\sum_{n\ge0}(-1)^nf(n)=\int_0^1\dfrac{1}{1+x}w(x)\,dx\;.$$
Assume that $P_n(X)$ is a polynomial of degree $n$ such that $P_n(-1)\ne0$.
Evidently
$$\dfrac{P_n(-1)-P_n(X)}{X+1}=\sum_{k=0}^{n-1}c_{n,k}X^k$$
is still a polynomial (of degree $n-1$), and we note the trivial fact that
\begin{align*}S&=\dfrac{1}{P_n(-1)}\int_0^1\dfrac{P_n(-1)}{1+x}w(x)\,dx\\
&=\dfrac{1}{P_n(-1)}\left(\int_0^1\dfrac{P_n(-1)-P_n(x)}{1+x}w(x)\,dx
+\int_0^1\dfrac{P_n(x)}{1+x}w(x)\,dx\right)\\
&=\dfrac{1}{P_n(-1)}\sum_{k=0}^{n-1}c_{n,k}f(k)+R_n\;,\end{align*}
with
$$|R_n|\le\dfrac{M_n}{|P_n(-1)|}\int_0^1\dfrac{1}{1+x}w(x)\,dx
=\dfrac{M_n}{|P_n(-1)|}S\;,$$
and where $M_n=\sup_{x\in[0,1]}|P_n(x)|$.
Thus if we can manage to have $M_n/|P_n(-1)|$ small, we obtain a good
approximation to $S$.
It is a classical result that the best choice for $P_n$ is the shifted
Chebyshev polynomial defined by $P_n(\sin^2(t))=\cos(2nt)$, but in any
case we can use these polynomials and ignore that they are the best.
\medskip
This leads to an incredibly simple algorithm which we write explicitly:
\medskip
$d\gets (3+\sqrt{8})^n$; $d\gets (d+1/d)/2$; $b\gets -1$; $c\gets -d$; $s\gets0$; For $k=0,\dotsc,n-1$ do:
$c\gets b-c$; $s\gets s+c\cdot f(k)$; $b\gets(k+n)(k-n)b/((k+1/2)(k+1))$;
The result is $s/d$.
\medskip
The convergence is in $5.83^{-n}$.
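Here is a direct GP transcription of this algorithm for
$\sum_{k\ge0}(-1)^kf(k)$ (the name {\tt sumaltCVZ} is mine; the built-in
{\tt sumalt} is based on the same ideas):
\begin{verbatim}
sumaltCVZ(f, n) =
{
  my(d = (3 + sqrt(8))^n, b = -1, c, s = 0);
  d = (d + 1/d)/2; c = -d;
  for (k = 0, n - 1,
    c = b - c;
    s += c*f(k);
    b = (k + n)*(k - n)*b / ((k + 1/2)*(k + 1)));
  s/d;
}
? sumaltCVZ(k -> 1/(k + 1), 50) - log(2)   \\ error of order 5.83^(-50)
\end{verbatim}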
It is interesting to note that, even though this algorithm is designed to
work with functions $f$ of the form $f(n)=\int_0^1 x^nw(x)\,dx$ with
$w$ continuous and positive, it is in fact valid outside its proven region
of validity. For example:
\begin{exercise}
It is well-known that the Riemann zeta function $\zeta(s)$
can be extended analytically to the whole complex plane, and that we have
for instance $\zeta(-1)=-1/12$ and $\zeta(-2)=0$. Apply the above algorithm to the
\emph{alternating} zeta function
$$\beta(s)=\sum_{n\ge1}(-1)^{n-1}\dfrac{1}{n^s}=\left(1-\dfrac{1}{2^{s-1}}\right)\zeta(s)$$
(incidentally, prove this identity), and by using the above algorithm, show
the nonconvergent ``identities''
$$1-2+3-4+\cdots=1/4\text{\quad and\quad}1-2^2+3^2-4^2+\cdots=0\;.$$
\end{exercise}
\begin{exercise} (B.~Allombert.) Let $\chi$ be a periodic arithmetic function
of period $m$, say, and assume that $\sum_{0\le j<m}\chi(j)=0$ (for instance
$\chi(j)=(-1)^j$ with $m=2$).
\begin{enumerate}\item Using the same polynomials $P_n$ as above, write
a similar algorithm for computing $\sum_{n\ge0}\chi(n)f(n)$, and estimate
its rate of convergence.
\item Using this, compute to 100 decimals
$L(\chi_{-3},k)=1-1/2^k+1/4^k-1/5^k+\cdots$ for $k=1$, $2$, and $3$,
and recognize the exact value for $k=1$ and $k=3$.\end{enumerate}\end{exercise}
\subsection{Numerical Differentiation}
The problem is as follows: given a function $f$, say defined and $C^\infty$
on a real interval, compute $f'(x_0)$ for a given value of $x_0$. To be able
to analyze the problem, we will assume that $f'(x_0)$ is not too close to $0$,
and that we want to compute it to a given \emph{relative accuracy}, which
is what is usually required in numerical analysis.
The na\"\i ve, although reasonable, approach, is to choose a small $h>0$ and
compute $(f(x_0+h)-f(x_0))/h$. However, it is clear that (using the same
number of function evaluations) the formula $(f(x_0+h)-f(x_0-h))/(2h)$
will be better. Let us analyze this in detail. For simplicity we will
assume that all the derivatives of $f$ around $x_0$ that we consider are
neither too small nor too large in absolute value. It is easy to modify the
analysis to treat the general case.
Assume $f$ computed to a relative accuracy of $\varepsilon$, in other words that
we know values $\tilde{f}(x)$ such that
$\tilde{f}(x)(1-\varepsilon)<f(x)<\tilde{f}(x)(1+\varepsilon)$
(the inequalities being reversed if $f(x)<0$). The absolute error
in computing $(f(x_0+h)-f(x_0-h))/(2h)$ is thus essentially equal to
$\varepsilon |f(x_0)|/h$. On the other hand, by Taylor's theorem we have
$(f(x_0+h)-f(x_0-h))/(2h)=f'(x_0)+(h^2/6)f'''(x)$ for some $x$ close to $x_0$,
so the absolute error made in computing $f'(x_0)$ as
$(f(x_0+h)-f(x_0-h))/(2h)$ is close to $\varepsilon |f(x_0)|/h+(h^2/6)|f'''(x_0)|$.
For a given value of $\varepsilon$ (i.e., the accuracy to which we compute $f$)
the optimal value of $h$ is $(3\varepsilon |f(x_0)/f'''(x_0)|)^{1/3}$ for an
absolute error of $(1/2)(3\varepsilon |f(x_0)|)^{2/3}|f'''(x_0)|^{1/3}$, hence a relative
error of $(3\varepsilon |f(x_0)|)^{2/3}|f'''(x_0)|^{1/3}/(2|f'(x_0)|)$.
Since we have assumed that the derivatives have reasonable size,
the relative error is roughly $C\varepsilon^{2/3}$,
so if we want this error to be less than $\eta$, say, we need $\varepsilon$
of the order of $\eta^{3/2}$, and $h$ will be of the order of $\eta^{1/2}$.
\smallskip
Note that this result is not completely intuitive. For instance,
assume that we want to compute derivatives to $38$ decimal digits.
With our assumptions, we choose $h$ around $10^{-19}$, and perform
the computations with $57$ decimals of relative accuracy. If for some
reason or other we are limited to $38$ decimals in the computation of $f$,
the ``intuitive'' way would be also to choose $h=10^{-19}$, and the above
analysis shows that we would obtain only approximately $19$ decimals.
On the other hand, if we chose $h=10^{-13}$ for instance, close to
$10^{-38/3}$, we would obtain $25$ decimals.
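The corresponding experiment in GP, at the default $38$-digit accuracy (the
test function $\sin$ at $x_0=1$ is of course just an illustrative choice):
\begin{verbatim}
? h = 1e-13;   \\ close to 10^(-38/3)
? (sin(1 + h) - sin(1 - h))/(2*h) - cos(1)   \\ about 25 correct decimals
? h = 1e-19;   \\ the "intuitive" choice
? (sin(1 + h) - sin(1 - h))/(2*h) - cos(1)   \\ only about 19
\end{verbatim}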
\smallskip
There are of course many other formulas for computing $f'(x_0)$, or for
computing higher derivatives, which can all easily be analyzed as above.
For instance (exercise), one can look for approximations to $f'(x_0)$ of the
form $S=(\sum_{1\le i\le 3}\lambda_if(x_0+h/a_i))/h$, for any nonzero and pairwise
distinct $a_i$, and we find that this is possible as soon as
$\sum_{1\le i\le 3}a_i=0$ (for instance, if $(a_1,a_2,a_3)=(-3,1,2)$
we have $(\lambda_1,\lambda_2,\lambda_3)=(-27,-5,32)/20$), and the absolute error is then
of the form $C_1\varepsilon/h+C_2h^3$, so the same analysis shows that we should
work with accuracy $\varepsilon^{4/3}$ instead of $\varepsilon^{3/2}$. Even though we
have $3/2$ times more evaluations of $f$, we require less accuracy:
for instance, if $f$ requires time $O(D^a)$ to be computed to $D$ decimals,
as soon as $(3/2)\cdot((4/3)D)^a<((3/2)D)^a$, i.e., $3/2<(9/8)^a$, hence
$a\ge3.45$, this new method will be faster.
Perhaps the best known method with more function evaluations is the
approximation
$$f'(x_0)\approx(f(x_0-2h)-8f(x_0-h)+8f(x_0+h)-f(x_0+2h))/(12h)\;,$$
which requires accuracy $\varepsilon^{5/4}$, and since this requires $4$ evaluations
of $f$, this is faster than the first method as soon as
$2\cdot(5/4)^a<(3/2)^a$, in other words $a>3.81$, and faster than the
second method as soon as $(4/3)\cdot(5/4)^a<(4/3)^a$, in other words
$a>4.46$. To summarize, use the first method if $a<3.45$, the second method
if $3.45\le a<4.46$, and the third if $a>4.46$. Of course this game can
be continued at will, but there is not much point in doing so. In practice
the first method is sufficient.
\medskip
\subsection{Double Exponential Numerical Integration}
A remarkable although little-known technique invented around 1970 deals with
\emph{numerical integration} (the numerical computation of a definite
integral $\int_a^b f(t)\,dt$, where $a$ and $b$ are allowed to be $\pm\infty$).
In usual numerical analysis courses one teaches very elementary techniques
such as the trapezoidal rule, Simpson's rule, or more sophisticated methods
such as Romberg or Gaussian integration. These methods apply to very general
classes of functions $f(t)$, but are unable to compute more than a few
decimal digits of the result, except for Gaussian integration which we will
mention below.
However, in most mathematical (as opposed for instance to physical) contexts,
the function $f(t)$ is \emph{extremely regular}, typically holomorphic or
meromorphic, at least in some domain of the complex plane. It was observed
in the late 1960's by H.~Takahashi and M.~Mori \cite{Tak-Mor} that
this property can be used to obtain a \emph{very simple} and
\emph{incredibly accurate} method to compute definite integrals of such
functions. It is now instantaneous to compute $100$ decimal digits, and takes
only a few seconds to compute $500$ decimal digits, say.
In view of its importance it is essential to have some knowledge of this
method. It can of course be applied in a wide variety of contexts, but note
also that in his thesis \cite{Mol}, P.~Molin has applied it specifically to
the \emph{rigorous} and \emph{practical} computation of values of
$L$-functions, which brings us back to our main theme.
\medskip
There are two basic ideas behind this method. The first is in fact a theorem,
which I state in a vague form: If $F$ is a holomorphic function which tends to
$0$ ``sufficiently fast'' when $x\to\pm\infty$, $x$ real, then the most
efficient method to compute $\int_{{\mathbb R}}F(t)\,dt$ is indeed the trapezoidal
rule. Note that this is a \emph{theorem}, not so difficult but a little
surprising nonetheless. The definition of ``sufficiently fast'' can be
made precise. In practice, it means decay at least like $e^{-ax^2}$ ($e^{-a|x|}$
is not fast enough), but it can be shown that the best results are obtained
with functions tending to $0$ \emph{doubly exponentially fast} such as
$\exp(-\exp(a|x|))$. Note that it would be (very slightly) worse to choose
functions tending to $0$ even faster.
To be more precise, we have an estimate coming for instance from the
\emph{Euler--MacLaurin summation formula}:
$$\int_{-\infty}^{\infty}F(t)\,dt=h\sum_{n=-N}^NF(nh)+R_N(h)\;,$$
and under suitable holomorphy conditions on $F$, if we choose $h=a\log(N)/N$
for some constant $a$ close to $1$, the remainder term $R_N(h)$ will
satisfy $R_N(h)=O(e^{-bN/\log(N)})$ for some other (reasonable) constant $b$,
showing exponential convergence of the method.
\medskip
The second and of course crucial idea of the method is as follows: evidently
not all functions tend to $0$ doubly exponentially at $\pm\infty$,
and definite integrals are not all from $-\infty$ to $+\infty$. But it is
possible to reduce to this case by using clever \emph{changes of variable}
(the essential condition of holomorphy must of course be preserved).
Let us consider the simplest example, but others that we give below are
variations on the same idea. Assume that we want to compute
$$I=\int_{-1}^1f(x)\,dx\;.$$
We make the ``magical'' change of variable $x=\phi(t)=\tanh(\sinh(t))$, so that
if we set $F(t)=f(\phi(t))$ we have
$$I=\int_{-\infty}^{\infty}F(t)\phi'(t)\,dt\;.$$
Because of the elementary properties of the hyperbolic sine and tangent,
we have gained two things at once: first the integral from $-1$ to $1$ is
now from $-\infty$ to $\infty$, but most importantly the function
$\phi'(t)$ is easily seen to tend to $0$ doubly exponentially. We thus
obtain an \emph{exponentially good approximation}
$$\int_{-1}^1f(x)\,dx=h\sum_{n=-N}^Nf(\phi(nh))\phi'(nh)+R_N(h)\;.$$
To give an idea of the method, if one takes $h=1/200$ and $N=500$, hence
only $1000$ evaluations of the function $f$, one can compute $I$ to several
hundred decimal places!
\medskip
Before continuing, I would like to comment that in this theory many results
are not completely rigorous: the method works very well, but the proof that
it does is sometimes missing. Thus I cannot resist giving a \emph{proven and
precise} theorem due to P.~Molin (which is of course just an example).
We keep the above notation $\phi(t)=\tanh(\sinh(t))$, and note that
$\phi'(t)=\cosh(t)/\cosh^2(\sinh(t))$.
\begin{theorem}[P.~Molin] Let $f$ be holomorphic on the disc $D=D(0,2)$
centered at the origin and of radius $2$. Then for all $N\ge1$, if we choose
$h=\log(5N)/N$ we have
$$\int_{-1}^1f(x)\,dx=h\sum_{n=-N}^Nf(\phi(nh))\phi'(nh)+R_N\;,$$
where
$$|R_N|\le \left(e^4\sup_{D}|f|\right)\exp(-5N/\log(5N))\;.$$
\end{theorem}
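To make this concrete, here is a bare-hands GP version of the quadrature rule
with Molin's choice of $h$ (the function names are mine); the test integrand
$e^x$ is holomorphic on $D(0,2)$, so the theorem applies:
\begin{verbatim}
phi(t) = tanh(sinh(t));
phip(t) = cosh(t)/cosh(sinh(t))^2;
DEint(f, N) =
{
  my(h = log(5*N)/N);
  h * sum(n = -N, N, f(phi(n*h)) * phip(n*h));
}
? DEint(x -> exp(x), 130) - (exp(1) - exp(-1))   \\ error at the 38-digit level
\end{verbatim}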
\medskip
Coming back to the general situation, I briefly comment on the computation
of general definite integrals $\int_a^b f(t)\,dt$.
\begin{enumerate}\item If $a$ and $b$ are finite, we can reduce to $[-1,1]$
by affine changes of variable.
\item If $a$ (or $b$) is finite and the function has an algebraic singularity
at $a$ (or $b$), we remove the singularity by a polynomial change of variable.
\item If $a=0$ (say) and $b=\infty$, then if $f$ does \emph{not} tend to $0$
exponentially fast (for instance $f(x)\sim 1/x^k$), we use
$x=\phi(t)=\exp(\sinh(t))$.
\item If $a=0$ (say) and $b=\infty$ and if $f$ does tend to $0$
exponentially fast (for instance $f(x)\sim e^{-ax}$ or $f(x)\sim e^{-ax^2}$),
we use $x=\phi(t)=\exp(t-\exp(-t))$.
\item If $a=-\infty$ and $b=\infty$, use $x=\phi(t)=\sinh(\sinh(t))$ if
$f$ does not tend to $0$ exponentially fast, and $x=\phi(t)=\sinh(t)$
otherwise.
\end{enumerate}
The problem of \emph{oscillating} integrals such as
$\int_0^\infty f(x)\sin(x)\,dx$ is more subtle, but similar methods do exist
when, as here, the oscillations are completely under control.
\begin{remark} The theorems are valid when the function is holomorphic in
a sufficiently large region compared to the path of integration. If the
function is only \emph{meromorphic}, with known poles, the direct application
of the formulas may give totally wrong answers. However, if we take into
account the poles, we can recover perfect agreement. Example of bad behavior:
$f(t)=1/(1+t^2)$ (poles $\pm i$). Integrating on the intervals
$[0,\infty]$, $[0,1000]$, or even $[-\infty,\infty]$, which involve different
changes of variables, gives perfect results (the latter being somewhat
surprising). On the other hand, integrating on $[-1000,1000]$ gives
a totally wrong answer because the poles are ``too close'', but it is easy
to take them into account if desired.
\end{remark}
Apart from the above pathological behavior, let us give a couple of examples
where we must slightly modify the direct use of doubly-exponential
integration techniques.
\medskip
\newcommand{\hh}[1]{\^{}{#1}}
$\bullet$ Assume for instance that we want to compute
$$J=\int_1^\infty\left(\dfrac{1+e^{-x}}{x}\right)^2\,dx\;,$$
and that we use the built-in function {\tt intnum} of {\tt Pari/GP} for
doing so. The function tends to $0$ slowly at infinity, so we should compute
it using the {\tt GP} syntax {\tt oo} to represent $\infty$, so we write
{\tt f(x)=((1+exp(-x))/x)\hh{2};}, then {\tt intnum(x=1,oo,f(x))}.
This will give some sort of error, because the software will try to
evaluate $\exp(-x)$ for large values of $x$, which it cannot do since there
is exponent underflow. To compute the result, we need to split it into
its slow part and fast part: when a function tends exponentially fast to
$0$ like $e^{-ax}$, $\infty$ is represented as {\tt [oo,a]}, so we write
$J=J_1+J_2$, with $J_1$ and $J_2$ computed by:
{\tt J1=intnum(x=1,[oo,1],(exp(-2*x)+2*exp(-x))/x\hh{2});} and
{\tt J2=intnum(x=1,oo,1/x\hh{2});}
(which of course is equal to $1$), giving
$$J=1.3345252753723345485962398139190637\cdots\;.$$
Note that we could have tried to ``cheat'' and written directly
{\tt intnum(x=1,[oo,1],f(x))}, but the answer would
be wrong, because the software would have assumed that $f(x)$ tends to $0$
exponentially fast, which is not the case.
\medskip
$\bullet$ A second situation where we must be careful is when we have
``apparent singularities'' which are not real singularities.
Consider the function $f(x)=(\exp(x)-1-x)/x^2$. It has an apparent singularity
at $x=0$ but in fact it is completely regular. If you ask
{\tt J=intnum(x=0,1,f(x))}, you will get a result which is reasonably correct,
but never more than $19$ decimals, say. The reason is \emph{not} due to
a defect in the numerical integration routine, but more in the computation
of $f(x)$: if you simply write {\tt f(x)=(exp(x)-1-x)/x\hh{2};}, the results
will be bad for $x$ close to $0$.
Assuming that you want $38$ decimals, say, the solution is to write
\noindent
{\tt f(x)=if(x<10\hh{(-10)},1/2+x/6+x\hh{2}/24+x\hh{3}/120,(exp(x)-1-x)/x\hh{2});}
and now we obtain the value of our integral as
$$J=0.59962032299535865949972137289656934022\cdots$$
\subsection{The Use of Abel--Plana for Definite Summation}
We finish this course by describing an identity, which is first quite amusing
and second can be used efficiently for definite summation. Consider for
instance the following theorem:
\begin{theorem} Define by convention $\sin(n/10)/n$ as equal to its limit
$1/10$ when $n=0$, and define $\sum'_{n\ge0}f(n)$ as
$f(0)/2+\sum_{n\ge1}f(n)$. We have
$$\sideset{}{'}\sum_{n\ge0}\left(\dfrac{\sin(n/10)}{n}\right)^k=\int_0^\infty\left(\dfrac{\sin(x/10)}{x}\right)^k\,dx$$
for $1\le k\le 62$, but not for $k\ge63$.
\end{theorem}
If you do not like all these conventions, replace the left-hand side by
$$\dfrac{1}{2\cdot 10^k}+\sum_{n\ge1}\left(\dfrac{\sin(n/10)}{n}\right)^k\;.$$
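As a quick sanity check (not part of the statement above), take $k=2$:
writing $\sin^2(n/10)=(1-\cos(n/5))/2$ and using the classical Fourier
expansion $\sum_{n\ge1}\cos(n\theta)/n^2=\pi^2/6-\pi\theta/2+\theta^2/4$ for
$0\le\theta\le2\pi$, the left-hand side equals
$$\dfrac{1}{200}+\dfrac{\zeta(2)}{2}-\dfrac{1}{2}\left(\dfrac{\pi^2}{6}-\dfrac{\pi}{10}+\dfrac{1}{100}\right)=\dfrac{\pi}{20}\;,$$
which is indeed the value of $\int_0^\infty(\sin(x/10)/x)^2\,dx$.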
It is clear that something is going on: it is the Abel--Plana formula.
There are several forms of this formula, here is one of them:
\begin{theorem}[Abel--Plana] Assume that $f$ is an entire function
and that $f(z)=o(\exp(2\pi|\Im(z)|))$ as $|\Im(z)|\to\infty$ uniformly in
vertical strips of bounded width, and a number of less important additional
conditions which we omit. Then
\begin{align*}\sum_{m\ge1}f(m)&=\int_0^\infty f(t)\,dt-\dfrac{f(0)}{2}+i\int_0^\infty\dfrac{f(it)-f(-it)}{e^{2\pi t}-1}\,dt\\
&=\int_{1/2}^\infty f(t)\,dt-i\int_0^\infty\dfrac{f(1/2+it)-f(1/2-it)}{e^{2\pi t}+1}\,dt\;.\end{align*}
In particular, if the function $f$ is \emph{even}, we have
$$\dfrac{f(0)}{2}+\sum_{m\ge1}f(m)=\int_0^\infty f(t)\,dt\;.$$
\end{theorem}
Since we have seen above that using doubly-exponential techniques it is easy
to compute numerically a definite \emph{integral}, the Abel--Plana formula can
be used to compute numerically a \emph{sum}. Note that in the first version
of the formula there is an apparent singularity (but which is not a
singularity) at $t=0$, and the second version avoids this problem.
In practice, this summation method is very competitive with other methods
if we use the doubly-exponential method to compute $\int_0^\infty f(t)\,dt$,
but most importantly if we use a variant of \emph{Gaussian integration} to
compute the complex integrals, since the nodes and weights for the
function $t/(e^{2\pi t}-1)$ can be computed once and for all by using
continued fractions, see Section \ref{sec:gauss}.
\section{The Use of Continued Fractions}
\subsection{Introduction}
The last idea that I would like to mention and that is applicable in quite
different situations is the use of continued fractions. Recall that a
continued fraction is an expression of the form
$$a_0+\dfrac{b_0}{a_1+\dfrac{b_1}{a_2+\dfrac{b_2}{a_3+\ddots}}}\;.$$
The problem of \emph{convergence} of such expressions (when they are unlimited)
is difficult and will not be considered here. We refer to any good textbook
on the elementary properties of continued fractions. In particular, recall that
if we denote by $p_n/q_n$ the $n$th \emph{convergent} (obtained by
stopping at $b_{n-1}/a_n$) then both $p_n$ and $q_n$ satisfy the same recursion
$u_n=a_nu_{n-1}+b_{n-1}u_{n-2}$.
We will mainly consider continued fractions representing \emph{functions}
as opposed to simply numbers. Whatever the context, the interest of continued
fractions (in addition to the fact that they are easy to evaluate) is that
they give essentially the \emph{best possible} approximations, both for
real numbers (this is the standard theory of \emph{regular} continued
fractions, where $b_n=1$ and $a_n\in{\mathbb Z}_{\ge1}$ for $n\ge1$), and for
functions (this is the theory of \emph{Pad\'e approximants}).
\subsection{The Two Basic Algorithms}
The first algorithm that we need is the following: assume that we want
to expand a (formal) power series $S(z)$ (without loss of generality
such that $S(0)=1$) into a continued fraction:
$$S(z)=1+c(1)z+c(2)z^2+\cdots = 1+\dfrac{b(0)z}{1+\dfrac{b(1)z}{1+\dfrac{b(2)z}{1+\ddots}}}\;.$$
The following method, called the \emph{quotient-difference} (QD) algorithm
does what is required:
We define two arrays $e(j,k)$ for $j\ge0$ and $q(j,k)$ for $j\ge1$ by
$e(0,k)=0$, $q(1,k)=c(k+2)/c(k+1)$ for $k\ge0$, and by induction for $j\ge1$
and $k\ge0$:
\begin{align*}e(j,k)&=e(j-1,k+1)+q(j,k+1)-q(j,k)\;,\\
q(j+1,k)&=q(j,k+1)e(j,k+1)/e(j,k)\;.\end{align*}
Then $b(0)=c(1)$ and $b(2n-1)=-q(n,0)$ and $b(2n)=-e(n,0)$ for $n\ge1$.
Three essential implementation remarks: first, keeping the whole arrays is
costly; it is sufficient to keep the latest vectors of $e$ and $q$. Second,
even if the $c(n)$ are rational numbers it is essential to do the computation
with floating point approximations to avoid coefficient explosion. The
algorithm can become unstable, but this is corrected by increasing the working
accuracy. Third, it is of course possible that some division by $0$ occurs,
and this is in fact quite frequent. There are several ways to overcome this,
probably the simplest being to multiply or divide the power series by
something like $1-z/\pi$.
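Here is a GP sketch of the QD algorithm exactly as described, keeping the full
arrays for clarity rather than just the latest vectors (the name {\tt qd} is
mine); it needs {\tt \#c} at least $2m+1$ to produce $b(0),\dots,b(2m)$, and of
course it fails on any input making the algorithm divide by $0$:
\begin{verbatim}
qd(c, m) =
{ \\ c = [c(1),...,c(M)] with S(z) = 1 + c(1)z + ...; returns [b(0),...,b(2m)]
  my(M = #c, q = matrix(m + 1, M), e = matrix(m + 1, M), b = vector(2*m + 1));
  for (i = 1, M - 1, q[1, i] = c[i+1]/c[i]);   \\ q(1,k) stored in column k+1
  for (j = 1, m,                               \\ e(j,k) stored in e[j+1, k+1]
    for (i = 1, M - 2*j, e[j+1, i] = e[j, i+1] + q[j, i+1] - q[j, i]);
    for (i = 1, M - 2*j - 1, q[j+1, i] = q[j, i+1] * e[j+1, i+1] / e[j+1, i]));
  b[1] = c[1];
  for (n = 1, m, b[2*n] = -q[n, 1]; b[2*n + 1] = -e[n+1, 1]);
  b;
}
? bestappr(qd(vector(10, k, 1.0/k!), 3))  \\ [1, -1/2, 1/6, -1/6, 1/10, -1/10, 1/14]
\end{verbatim}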
\medskip
The second algorithm is needed to \emph{evaluate} the continued fraction for
a given value of $z$. It is well-known that this can be done from bottom to
top (start at $b(n)z/1$, then $b(n-1)/(1+b(n)z/1)$, etc.), or from top
to bottom (start at $(p(-1),q(-1))=(1,0)$, $(p(0),q(0))=(1,1)$, and use
the recursion). It is in general better to evaluate from bottom to top, but
before doing this we can considerably improve on the speed by using an identity
due to Euler:
$$1+\dfrac{b(0)z}{1+\dfrac{b(1)z}{1+\dfrac{b(2)z}{1+\ddots}}}
=1+\dfrac{B(0)}{Z+A(1)+\dfrac{B(1)}{Z+A(2)+\dfrac{B(2)}{Z+A(3)+\ddots}}}\;,$$
where $Z=1/z$, $A(1)=b(1)$, $A(n)=b(2n-2)+b(2n-1)$ for $n\ge2$,
$B(0)=b(0)$, $B(n)=-b(2n)b(2n-1)$ for $n\ge1$.
The reason for which this is much faster is that we replace
$n$ multiplications ($b(j)*z$) plus $n$ divisions by
$1$ multiplication plus approximately $1+n/2$ divisions, counting as usual
additions as negligible.
This is still not the end of the story since we can ``compress'' any
continued fraction by taking, for instance, two steps at once instead
of one, which further reduces the cost. In any case this leads to a very efficient method for evaluating
continued fractions.
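For completeness, here is a bottom-to-top evaluator for the plain form (again
a sketch, with names of my choosing; in practice one would first convert to the
Euler form just described):
\begin{verbatim}
cfeval(b, z) =
{ \\ evaluates 1 + b[1]z/(1 + b[2]z/(1 + ... b[#b]z/1)) from bottom to top
  my(t = 1);
  forstep (i = #b, 2, -1, t = 1 + b[i]*z/t);
  1 + b[1]*z/t;
}
? b = qd(vector(16, k, 1.0/k!), 6);   \\ qd as in the sketch above
? cfeval(b, 1/2) - exp(1/2)           \\ small: the truncation error at z = 1/2
\end{verbatim}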
\subsection{Using Continued Fractions for Inverse Mellin Transforms}
We have mentioned above that one can use asymptotic expansions to compute
the incomplete gamma function $\Gamma(s,x)$ when $x$ is large. But this method
cannot give us great accuracy since we must stop the asymptotic expansion
at its smallest term. We can of course always use the power series expansion,
which has infinite radius of convergence, but when $x$ is large this is not
very efficient (remember the example of computing $e^{-x}$).
In the case of $\Gamma(s,x)$, continued fractions save the day: indeed, one can
prove that
$$\Gamma(s,x)=\dfrac{x^se^{-x}}{x+1-s-\dfrac{1(1-s)}{x+3-s-\dfrac{2(2-s)}{x+5-s-\ddots}}}\;,$$
with precisely known speed of convergence. This formula is the best method
for computing $\Gamma(s,x)$ when $x$ is large (say $x>50$), and can give arbitrary
accuracy.
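As a sketch, this continued fraction can be evaluated in GP by truncating it
after $n$ levels and working from bottom to top (the name {\tt incgamCF} and
the truncation point are mine):
\begin{verbatim}
incgamCF(s, x, n) =
{
  my(t = x + 2*n + 1 - s);
  forstep (m = n, 1, -1, t = x + 2*m - 1 - s - m*(m - s)/t);
  x^s * exp(-x) / t;
}
? incgamCF(1/2, 50, 40) - incgam(1/2, 50)   \\ agreement to the working accuracy
\end{verbatim}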
However here we were in luck: we had an ``explicit'' continued fraction
representing the function that we wanted to compute. Evidently, in general
this will not be the case.
It is a remarkable idea of T.~Dokchitser \cite{Dok} that it does not really
matter if the continued fraction is not explicit, at least in the context of
computing $L$-functions, for instance for inverse Mellin transforms. Simply do
the following:
\begin{enumerate}\item First compute sufficiently many terms of the asymptotic
expansion of the function to be computed. This is very easy because our
functions all satisfy a \emph{linear differential equation} with polynomial
coefficients, which gives a \emph{recursion} on the coefficients of the
asymptotic expansion.
\item Using the quotient-difference algorithm seen above, compute the
corresponding continued fraction, and write it in the form due to Euler
to evaluate it as efficiently as possible.
\item Compute the value of the function at all desired arguments by evaluating
the Euler continued fraction.\end{enumerate}
The first two steps are completely automatic and rigorous. The whole problem
lies in the third step, the evaluation of the continued fraction. In the case
of the incomplete gamma function, we had a theorem giving us the speed of
convergence. In the case of inverse Mellin transforms, not only do we not
have such a theorem, but we do not even know how to prove that the continued
fraction converges! However experimentation shows that not only does the
continued fraction converge, but rather fast, in fact at a similar speed to
that of the incomplete gamma function.
Even though this step is completely heuristic, since its introduction by
T.~Dokchitser it is used in all packages computing $L$-functions since it is
so useful. It would of course be nice to have a \emph{proof} of its validity,
but for now this seems completely out of reach, except for the simplest
examples where there are at most two gamma factors (for instance the problem
is completely open for the inverse Mellin transform of $\Gamma(s)^3$).
\subsection{Using Continued Fractions for Gaussian Integration and Summation}\label{sec:gauss}
We have seen above the doubly-exponential method for numerical integration,
which is robust and quite generally applicable. However, an extremely classical
method is \emph{Gaussian integration}: it is orders of magnitude faster,
but note the crucial fact that it is much less robust, in that it works
much less frequently.
The setting of Gaussian
integration is the following: we have a measure $d\mu$ on a (compact or
infinite) interval $[a,b]$; you can of course think of $d\mu$ as $K(x)dx$ for
some fixed function $K(x)$. We want to compute $\int_a^bf(x)d\mu$ by
means of \emph{nodes} and \emph{weights}, i.e., for a given $n$ compute $x_i$
and $w_i$ for $1\le i\le n$ such that $\sum_{1\le i\le n}w_if(x_i)$
approximates as closely as possible the exact value of the integral.
Note that \emph{classical} Gaussian integration such as Gauss--Legendre
integration (integration of a continuous function on a compact interval)
is easy to perform because one can easily compute explicitly the necessary
nodes and weights using standard \emph{orthogonal polynomials}. What I want to
stress here is that \emph{general} Gaussian integration can be performed very
simply using continued fractions, as follows.
In general the measure $d\mu$ is (or can be) given through its \emph{moments}
$M_k=\int_a^bx^kd\mu$. The remarkably simple algorithm to compute
the $x_i$ and $w_i$ using continued fractions is as follows:
\begin{enumerate}
\item Set $\Phi(z)=\sum_{k\ge0}M_kz^{k+1}$, and using the
quotient-difference algorithm compute $c(m)$ such that
$\Phi(z)=c(0)z/(1+c(1)z/(1+c(2)z/(1+\cdots)))$ (see the
remark made above in case the algorithm has a division by $0$; it may also
happen that the odd or even moments vanish, so that the continued fraction
is only in powers of $z^2$, but this is also easily dealt with).
\item For any $m$, denote as usual by $p_m(z)/q_m(z)$ the $m$th
convergent obtained by stopping the continued fraction at
$c(m)z/1$, and denote by $N_n(z)$ the reciprocal polynomial of
$p_{2n-1}(z)/z$ (which has degree $n-1$) and by $D_n(z)$ the
reciprocal polynomial of $q_{2n-1}$ (which has degree $n$).
\item The $x_i$ are the $n$ roots of $D_n$ (which are all simple
and in the interval $]a,b[$), and the $w_i$ are given by the formula
$w_i=N_n(x_i)/D'_n(x_i)$.
\end{enumerate}
By construction, this Gaussian integration method will work when the
function $f(x)$ to be integrated is well approximated by polynomials,
but otherwise will fail miserably, and this is why we say that the method
is much less ``robust'' than doubly-exponential integration.
\medskip
The fact that Gaussian ``integration'' can also be used very efficiently for
numerical \emph{summation} was discovered quite recently by H.~Monien. We
explain the simplest case. Consider the measure on $]0,1]$ given by
$d\mu=\sum_{n\ge1}\delta_{1/n}/n^2$, where $\delta_x$ is the Dirac measure
centered at $x$. Thus by definition
$\int_0^1 f(x)d\mu=\sum_{n\ge1}f(1/n)/n^2$. Let us apply the recipe
given above: the $k$th moment $M_k$ is given by
$M_k=\sum_{n\ge1}(1/n)^k/n^2=\zeta(k+2)$, so that
$\Phi(z)=\sum_{k\ge1}\zeta(k+1)z^k$. Note that this is closely related to the
digamma function $\psi(z)$, but we do not need this. Applying the
quotient-difference algorithm, we write
$\Phi(z)=c(0)z/(1+c(1)z/(1+\cdots))$, and compute the $x_i$ and $w_i$ as
explained above. We will then have that $\sum_iw_if(x_i)$ is a very good
approximation to $\sum_{n\ge1}f(1/n)/n^2$, or equivalently (changing the
definition of $f$) that $\sum_iw_if(y_i)$ is a very good approximation
to $\sum_{n\ge1}f(n)$, with $y_i=1/x_i$.
To take essentially the simplest example, stopping the continued fraction
after two terms we find that
$y_1=1.0228086266\cdots$, $w_1=1.15343168\cdots$,
$y_2=4.371082834\cdots$, and $w_2=10.3627543\cdots$,
and (by definition) we have $\sum_{1\le i\le 2}w_if(y_i)=\sum_{n\ge1}f(n)$
for $f(n)=1/n^k$ with $k=2$, $3$, $4$, and $5$.
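A quick numerical check of this two-point rule, using the truncated values
just printed (so agreement only to the $8$ or so digits shown):
\begin{verbatim}
? y = [1.0228086266, 4.371082834]; w = [1.15343168, 10.3627543];
? vector(4, j, w[1]/y[1]^(j+1) + w[2]/y[2]^(j+1) - zeta(j+1))
\end{verbatim}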
\section{{\tt Pari/GP} Commands}
In this section, we give some of the {\tt Pari/GP} commands related to the
subjects studied in this course, together with examples. Unless mentioned
otherwise, the commands assume the default accuracy, i.e., $38$ decimal digits.
{\tt zeta(s)}: Riemann zeta function at $s$.
\begin{verbatim}
? zeta(3)
? zeta(1/2+14*I)
- 0.10325812326645005790236309555257383451*I
\end{verbatim}
{\tt lfuncreate(obj)}: create $L$-function attached to mathematical object
{\tt obj}.
{\tt lfun(pol,s)}: Dedekind zeta function of the number field $K$ defined by
{\tt pol} at $s$. Identical to {\tt L=lfuncreate(pol); lfun(L,s)}.
\begin{verbatim}
? L = lfuncreate(x^3-x-1); lfunan(L,10)
? lfun(L,1)
? lfun(L,2)
\end{verbatim}
{\tt lfunlambda(pol,s)}: same, but for the completed function $\Lambda_K(s)$,
identical to {\tt lfunlambda(L,s)} where {\tt L} is as above.
\begin{verbatim}
? lfunlambda(L,2)
\end{verbatim}
{\tt lfun(D,s)}: $L$-function of quadratic character $(D/.)$ at $s$.
\noindent
Identical to {\tt L=lfuncreate(D); lfun(L,s)}.
\begin{verbatim}
? lfun(-23,-2)
? lfun(5,-1)
\end{verbatim}
{\tt L1=lfuncreate(pol); L2=lfuncreate(1); L=lfundiv(L1,L2)}: $L$-function
attached to $\zeta_K(s)/\zeta(s)$.
\begin{verbatim}
? L1 = lfuncreate(x^3-x-1); L2 = lfuncreate(1);
? L = lfundiv(L1,L2); lfunan(L,14)
\end{verbatim}
{\tt lfunetaquo($[m_1,r_1;m_2,r_2]$)}: $L$-function of eta product
$\eta(m_1\tau)^{r_1}\eta(m_2\tau)^{r_2}$, for instance with
{\tt [1,1;23,1]} or {\tt [1,2;11,2]}.
\begin{verbatim}
? L1 = lfunetaquo([1,1;23,1]); lfunan(L1,14)
? L2 = lfunetaquo([1,2;11,2]); lfunan(L2,14)
\end{verbatim}
{\tt lfuncreate(ellinit(e))}: $L$-function of elliptic curve $e$, for
instance with $e=[0,-1,1,-10,-20]$.
\begin{verbatim}
? e = ellinit([0,-1,1,-10,-20]);
? L = lfuncreate(e); lfunan(L,14)
\end{verbatim}
{\tt ellap(e,p)}: compute $a(p)$ for an elliptic curve $e$.
\begin{verbatim}
? ellap(e,nextprime(10^42))
\end{verbatim}
{\tt eta(q+O(q\^{}B))\^{}m}: compute the $m$th power of $\eta$ to $B$ terms.
\begin{verbatim}
? eta(q+O(q^5))^26
\end{verbatim}
{\tt D=mfDelta(); mfcoefs(D,B)}: compute $B+1$ terms of the Fourier expansion
of $\Delta$.
\begin{verbatim}
? D = mfDelta(); mfcoefs(D,7)
\end{verbatim}
{\tt ramanujantau(n)}: compute Ramanujan's tau function $\tau(n)$ using
the trace formula.
\begin{verbatim}
? ramanujantau(nextprime(10^7))
\end{verbatim}
{\tt qfbhclassno(n)}: Hurwitz class number $H(n)$.
\begin{verbatim}
? vector(13,n,qfbhclassno(n-1))
\end{verbatim}
{\tt qfbsolve(Q,n)}: solve $Q(x,y)=n$ for a binary quadratic form $Q$
(contains in particular Cornacchia's algorithm).
\begin{verbatim}
? Q = Qfb(1,0,1); p = 10^16+61; qfbsolve(Q,p)
\end{verbatim}
{\tt gamma(s)}: gamma function at $s$.
\begin{verbatim}
? gamma(1/4)*gamma(3/4)-Pi*sqrt(2)
\end{verbatim}
{\tt incgam(s,x)}: incomplete gamma function $\Gamma(s,x)$.
\begin{verbatim}
? incgam(1,5/2)
\end{verbatim}
{\tt G=gammamellininvinit(A)}: initialize data for computing inverse Mellin
transforms of $\prod_{1\le i\le d}\Gamma_{{\mathbb R}}(s+a_i)$, with $A=[a_1,\ldots,a_d]$.
{\tt gammamellininv(G,t)}: inverse Mellin transform at $t$ of $A$, with
$G$ initialized as above.
\begin{verbatim}
? G = gammamellininvinit([0,0]); gammamellininv(G,2)
\end{verbatim}
{\tt besselk(nu,x)}: $K_{\nu}(x)$, $K$-Bessel function of (complex) index $\nu$ at
$x$.
\begin{verbatim}
? 4*besselk(0,4*Pi)
\end{verbatim}
{\tt sumnum(n=a,f(n))}: numerical summation of $\sum_{n\ge a}f(n)$ using
discrete Euler--MacLaurin.
\begin{verbatim}
? sumnum(n=1,1/(n^2+n^(4/3)))
\end{verbatim}
{\tt sumnumap(n=a,f(n))}: numerical summation of $\sum_{n\ge a}f(n)$ using
Abel--Plana.
{\tt sumnummonien(n=a,f(n))}: numerical summation using Monien's Gaussian
summation method (there also exists {\tt sumnumlagrange}, which can also be
very useful).
{\tt limitnum(n->f(n))}: limit of $f(n)$ as $n\to\infty$ using a variant
of Zagier's method, assuming asymptotic expansion in integral powers of $1/n$
(also {\tt asympnum} to obtain more coefficients).
\begin{verbatim}
? limitnum(n->(1+1/n)^n)
? asympnum(n->(1+1/n)^n*exp(-1))
\end{verbatim}
{\tt sumeulerrat(f(x))}: $\sum_{p\ge2}f(p)$, $p$ ranging over primes
(a more general variant exists for $\sum_{p\ge a}f(p^s)$).
\begin{verbatim}
? sumeulerrat(1/(x^2+x))
\end{verbatim}
{\tt prodeulerrat(f(x))}: $\prod_{p\ge2}f(p)$, $p$ ranging over primes, with
same variants.
\begin{verbatim}
? prodeulerrat((1-1/x)^2*(1+2/x))
\end{verbatim}
{\tt sumalt(n=a,(-1)\^{}n*f(n))}: $\sum_{n\ge a}(-1)^nf(n)$, assuming $f$
positive.
\begin{verbatim}
? sumalt(n=1,(-1)^n/(n^2+n))
\end{verbatim}
{\tt f'(x)} (or {\tt deriv(f)(x)}): numerical derivative of $f$ at $x$.
\begin{verbatim}
? -zeta'(-2)
? zeta(3)/(4*Pi^2)
\end{verbatim}
{\tt intnum(x=a,b,f(x))}: numerical computation of $\int_a^b f(x)\,dx$ using
general doubly-exponential integration.
{\tt intnumgauss(x=a,b,f(x))}: numerical integration using Gaussian
integration.
\begin{verbatim}
? intnum(t=0,1,lngamma(t+1))
\end{verbatim}
For instance, for $500$ decimal digits, after the initial computation of
nodes and weights in both cases ({\tt intnuminit(0,1)} and
{\tt intnumgaussinit()}) this example requires $2.5$ seconds by
doubly-exponential integration but only $0.25$ seconds by Gaussian
integration.
\section{Three Pari/GP Scripts}
\subsection{The Birch--Swinnerton-Dyer Example}
Here is a list of commands which implements the explicit BSD example given
in Section \ref{sec:BSD}, again assuming the default accuracy of $38$ decimal
digits.
\begin{verbatim}
? E = ellinit([1,-1,0,-79,289]); /* initialize */
? N = ellglobalred(E)[1] /* compute conductor */
? /* define the integral $f(x)$ */
? f(x) = intnum(t=1,[oo,x],exp(-x*t)*log(t)^2);
? /* check that f(100) is small enough for 38D */
? f(100)
? A = ellan(E,8000); /* compute 8000 coefficients */
? /* Note that $2\pi 8000/sqrt(N) > 100$ */
? S = sum(n=1,8000,A[n]*f(2*Pi*n/sqrt(N)))
? /* compute APPARENT order of vanishing of L(E,s) */
? ellanalyticrank(E)[1]
\end{verbatim}
Note that for illustrative purposes we use the {\tt intnum} command to compute
$f(x)$, corresponding to the use of doubly-exponential integration, but in the
present case there are methods which are orders of magnitude faster.
The last command, which is almost immediate, implements these methods.
\subsection{The Beilinson--Bloch Example}
The code for the explicit Beilinson--Bloch example seen in Section
\ref{sec:BB} is simpler (I have used the integral representation of $g(u)$,
but of course I could have used the series expansion instead):
\begin{verbatim}
? e(u) =
{
my(E = ellinit([0,u^2+1,0,u^2,0]));
lfun(E,2)*ellglobalred(E)[1];
}
? g(u) =
{
my(S);
S = 2*Pi*intnum(t=0,1,asin(t)/(t*sqrt(1-(t/u)^2)));
S+Pi^2*acosh(u);
}
? e(5)/g(5)
? /* we obtain perfect accuracy */
? /* for example: */
? for(u = 2,18,print1(bestappr(e(u)/g(u),10^6)," "))
\end{verbatim}
\subsection{The Mahler Measure Example}
\begin{verbatim}
? L=lfunetaquo([2,1;4,1;6,1;12,1]);
\\ Equivalently L=lfuncreate(ellinit([0,-1,0,-4,4]));
? lfun(L,3)
? (Pi^2/36)*(Catalan*Pi+intnum(t=0,1,asin(t)*asin(1-t)/t))
\end{verbatim}
\section{Appendix: Selected Results}
\subsection{The Gamma Function}
The Gamma function, denoted by $\Gamma(s)$, can be defined in several different
ways. My favorite is the one I give in Section 9.6.2 of \cite{Coh4}, but for
simplicity I will recall the classical definition. For $s\in{\mathbb C}$ we define
$$\Gamma(s)=\int_0^\infty e^{-t}t^s\,\dfrac{dt}{t}\;.$$
It is immediate to see that this converges if and only if $\Re(s)>0$ (there is
no problem at $t=\infty$, the only problem is at $t=0$), and integration by
parts shows that $\Gamma(s+1)=s\Gamma(s)$, so that if $s=n$ is a positive integer,
we have $\Gamma(n)=(n-1)!$. We can now \emph{define} $\Gamma(s)$ for
all complex $s$ by using this recursion backwards, i.e., setting
$\Gamma(s)=\Gamma(s+1)/s$. It is then immediate to check that $\Gamma(s)$ is a meromorphic
function on ${\mathbb C}$ having poles at $s=-n$ for $n=0$, $1$, $2$,\dots, which
are simple with residue $(-1)^n/n!$.
The gamma function has numerous additional properties, the most important
being recalled below:
\begin{enumerate}
\item (Stirling's formula for large $\Re(s)$): as $s\to\infty$, $s\in{\mathbb R}$ (say,
there is a more general formulation) we have
$\Gamma(s)\sim s^{s-1/2}e^{-s}(2\pi)^{1/2}$.
\item (Stirling's formula for large $\Im(s)$): as $|T|\to\infty$, $\sigma\in{\mathbb R}$
being fixed (say, once again there is a more general formulation), we have
$|\Gamma(\sigma+iT)|\sim |T|^{\sigma-1/2}e^{-\pi |T|/2}(2\pi)^{1/2}$. In particular,
it tends to $0$ exponentially fast on vertical strips.
\item (Reflection formula): we have $\Gamma(s)\Gamma(1-s)=\pi/\sin(\pi s)$.
\item (Duplication formula): we have $\Gamma(s)\Gamma(s+1/2)=2^{1-2s}\pi^{1/2}\Gamma(2s)$
(there is also a more general distribution formula giving
$\prod_{0\le j<N}\Gamma(s+j/N)$ which we do not need). Equivalently, if we set
$\Gamma_{{\mathbb R}}(s)=\pi^{-s/2}\Gamma(s/2)$ and $\Gamma_{{\mathbb C}}(s)=2\cdot(2\pi)^{-s}\Gamma(s)$, we have
$\Gamma_{{\mathbb R}}(s)\Gamma_{{\mathbb R}}(s+1)=\Gamma_{{\mathbb C}}(s)$.
\item (Link with the beta function): let $a$ and $b$ in ${\mathbb C}$ with $\Re(a)>0$
and $\Re(b)>0$. We have
$$B(a,b):=\int_0^1t^{a-1}(1-t)^{b-1}\,dt=\dfrac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\;.$$
\end{enumerate}
\subsection{Order of a Function: Hadamard Factorization}
Let $F$ be a holomorphic function in the whole of ${\mathbb C}$ (it is immediate
to generalize to the case of meromorphic functions, but for simplicity we
stick to the holomorphic case). We say that $F$ has \emph{finite order} if
there exists $\alpha\ge0$ such that as $|s|\to\infty$ we have
$|F(s)|\le e^{|s|^{\alpha}}$. The infimum of such $\alpha$ is called the order of
$F$. It is an immediate consequence of Liouville's theorem that entire
functions of polynomial growth are polynomials. Most functions occurring in number theory,
and in particular all $L$-functions occurring in this course, have order $1$.
The Selberg zeta function, which we do not consider, is also an interesting
function and has order $2$.
The Weierstrass--Hadamard factorization theorem is the following:
\begin{theorem} Let $F$ be a holomorphic function of order $\rho$, set
$p=\lfloor\rho\rfloor$, let $(a_n)_{n\ge1}$ be the non-zero zeros of $F$
repeated with multiplicity, and let $m$ be the order of the zero at $z=0$.
There exists a polynomial $P$ of degree at most $p$ such that for all
$z\in{\mathbb C}$ we have
$$F(z)=z^me^{P(z)}\prod_{n\ge1}\left(1-\dfrac{z}{a_n}\right)\exp\left(\dfrac{z/a_n}{1}+\dfrac{(z/a_n)^2}{2}+\cdots+\dfrac{(z/a_n)^p}{p}\right)\;.$$
\end{theorem}
In the case of order $1$ which is of interest to us, this reads
$$F(z)=B\cdot z^me^{Az}\prod_{n\ge1}\left(1-\dfrac{z}{a_n}\right)e^{z/a_n}\;.$$
For example, we have
$$\sin(\pi z)=\pi z\prod_{n\ge1}\left(1-\dfrac{z^2}{n^2}\right)\text{\quad and\quad}\dfrac{1}{\Gamma(z+1)}=e^{\gamma z}\prod_{n\ge1}\left(1+\dfrac{z}{n}\right)e^{-z/n}\;,$$
where as usual $\gamma=0.57721\cdots$ is Euler's constant.
\begin{exercise}\begin{enumerate}
\item Using these expansions, prove the reflection formula and the duplication
formula for the gamma function, and find the distribution formula giving
$\prod_{0\le j<N}\Gamma(s+j/N)$.
\item Show that the above expansion for the sine function is equivalent to
the formula expressing $\zeta(2k)$ in terms of Bernoulli numbers.
\item Show that the above expansion for the gamma function is equivalent to
the Taylor expansion
$$\log(\Gamma(z+1))=-\gamma z+\sum_{n\ge2}(-1)^n\dfrac{\zeta(n)}{n}z^n\;,$$
and prove the validity of this Taylor expansion for $|z|<1$, hence of
the above Hadamard product.
\end{enumerate}
\end{exercise}
\subsection{Elliptic Curves}
We will not need the abstract definition of an elliptic curve. For us, an
elliptic curve $E$ defined over a field $K$ will be a nonsingular projective
curve defined by the (affine) generalized Weierstrass equation with
coefficients in $K$:
$$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6\;.$$
This curve has a \emph{discriminant} (obtained essentially by completing the
square and computing the discriminant of the resulting cubic), and the
essential property of being nonsingular is equivalent to the discriminant being
nonzero.
This curve has a unique point ${\mathcal O}$ at infinity, with projective
coordinates $(0:1:0)$. Using chord and tangents one can define an addition
law on this curve, and the first essential (but rather easy) result is that
it is an \emph{abelian group law} with neutral element ${\mathcal O}$, making
$E$ into an algebraic group.
In the case where $K={\mathbb Q}$ (or more generally a number field), a deeper theorem
due to Mordell states that the group $E({\mathbb Q})$ of rational points of $E$ is a
\emph{finitely generated abelian group}, i.e., is isomorphic to
${\mathbb Z}^r\oplus E({\mathbb Q})_{\text{tors}}$, where $E({\mathbb Q})_{\text{tors}}$ (the torsion
subgroup) is a finite group, and the integer $r$ is called the (algebraic)
\emph{rank} of the curve.
Still in the case $K={\mathbb Q}$, for all prime numbers $p$ except a finite number,
we can \emph{reduce} the equation modulo $p$, thus obtaining an elliptic curve
over the finite field ${\mathbb F}_p$. Using an algorithm due to J.~Tate, we can find
first a \emph{minimal Weierstrass equation} for $E$, second the behavior of
$E$ reduced at the ``bad'' primes in terms of so-called \emph{Kodaira symbols},
and third the algebraic \emph{conductor} $N$ of $E$, product of the bad primes
raised to suitable exponents (and other important quantities).
The deep theorem of Wiles et al. tells us that the $L$-function of $E$
(as defined in the main text) is equal to the $L$-function of a rational
Hecke eigenform in the modular form space $M_2(\Gamma_0(N))$, where $N$ is
the conductor of $E$.
A weak form of the Birch and Swinnerton-Dyer conjecture says that the
algebraic rank $r$ is equal to the analytic rank defined as the order of
vanishing of the $L$-function of $E$ at $s=1$.
\bigskip
| {
"timestamp": "2018-10-01T02:09:03",
"yymm": "1809",
"arxiv_id": "1809.10904",
"language": "en",
"url": "https://arxiv.org/abs/1809.10904",
"abstract": "We give a number of theoretical and practical methods related to the computation of L-functions, both in the local case (counting points on varieties over finite fields, involving in particular a detailed study of Gauss and Jacobi sums), and in the global case (for instance Dirichlet L-functions, involving in particular the study of inverse Mellin transforms); we also give a number of little-known but very useful numerical methods, usually but not always related to the computation of L-functions.",
"subjects": "Number Theory (math.NT)",
"title": "Computational Number Theory in Relation with L-Functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9865717468373085,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7089449192005287
} |
https://arxiv.org/abs/2207.07536 | The Edge-Connectivity of Vertex-Transitive Hypergraphs | A graph or hypergraph is said to be vertex-transitive if its automorphism group acts transitively upon its vertices. A classic theorem of Mader asserts that every connected vertex-transitive graph is maximally edge-connected. We generalise this result to hypergraphs and show that every connected linear uniform vertex-transitive hypergraph is maximally edge-connected. We also show that if we relax either the linear or uniform conditions in this generalisation, then we can construct examples of vertex-transitive hypergraphs which are not maximally edge-connected. | \section{Introduction}
A graph or hypergraph is {\it connected} if there is a path connecting each pair of vertices, where a {\it path} is a sequence of alternating incident vertices and edges without repetition.
A {\it cut set} of edges in a graph or hypergraph is a set of edges whose deletion renders the graph or hypergraph disconnected.
The {\it edge-connectivity} of a graph or hypergraph $H$ is the size of a minimum cut set of edges and is denoted $\kappa'(H)$.
For a graph or hypergraph $H$, $\delta(H)$ is the minimum degree among the vertices and $\Delta(H)$ is the maximum degree among the vertices, where the degree of a vertex is the number of edges incident with it.
In~\cite{W}, Whitney observes that, for a graph $G$, $\kappa'(G)$ never exceeds $\delta(G)$, a result which extends naturally to hypergraphs.
This bound is in fact tight and a graph or hypergraph $H$ which satisfies $\kappa'(H)=\delta(H)$ is said to be {\it maximally edge-connected}.
Hellwig and Volkmann list several sufficient conditions for graphs to be maximally edge-connected in their 2008 survey~\cite{HV}.
The subject of connectivity in hypergraphs has been developing recently with results like those in \cite{BS,DPP,JS}.
In~\cite{BS}, Bahmanian and \v{S}ajna study various connectivity properties in hypergraphs with an emphasis on cut sets of edges and vertices.
In~\cite{DPP}, Dewar, Pike and Proos consider both vertex and edge-connectivity in hypergraphs with additional details on the computational complexity of these problems.
In~\cite{JS}, Jami and Szigeti investigate the edge-connectivity of permutation hypergraphs.
Dankelmann and Meierling extend several well-known sufficient conditions for graphs to be maximally edge-connected to the realm of hypergraphs in~\cite{DM}.
Tong and Shan continue this work with more extensions from graphs to hypergraphs in~\cite{TS}.
Zhao and Meng present sufficient conditions for linear uniform hypergraphs to be maximally edge-connected that generalise results from graphs in~\cite{ZM}.
These three papers were primarily focused on the properties of distance and girth.
In this paper, we investigate the edge-connectivity of vertex-transitive hypergraphs which are linear and uniform.
A graph or hypergraph $H$ is said to be {\it vertex-transitive} if, for any two vertices $u$ and $v$ of $V(H)$, there exists some automorphism $\phi$ of $H$ such that $\phi(u)=v$.
Note that any vertex-transitive graph or hypergraph must also be regular, and so $\delta(H)=\Delta(H)$.
A {\it linear} hypergraph is one in which any pair of vertices is contained in at most one edge.
A {\it uniform} hypergraph is one in which each edge has the same cardinality; moreover, if each edge has cardinality $k$, then we say that the hypergraph is $k$-uniform.
A classic result of Mader establishes the edge-connectivity of vertex-transitive graphs.
\begin{Theorem}\label{Mader}\cite{M}
Let $G$ be a vertex-transitive and connected graph. Then $G$ is maximally edge-connected.
\end{Theorem}
Our main result is a generalisation of Mader's Theorem (Theorem~\ref{Mader}) to linear uniform hypergraphs.
In particular, we show the following:
\begin{Theorem}\label{main}
Let $H$ be a linear $k$-uniform hypergraph with $k\geq3$. If $H$ is vertex-transitive and connected, then $H$ is maximally edge-connected.
\end{Theorem}
In Section 2 we demonstrate the existence of vertex-transitive hypergraphs which fail to be maximally edge-connected when we relax either the uniformity or linearity conditions of Theorem~\ref{main}.
In Section 3 we present the proof of Theorem~\ref{main}.
\section{Non-Uniform and Non-Linear Hypergraphs}
In this section, we present two examples of vertex-transitive hypergraphs which are not maximally edge-connected.
Both examples meet all of the criteria of the hypothesis of Theorem~\ref{main} except for linearity in the first case and uniformity in the second.
\subsection{Uniform but Non-Linear Hypergraphs}
Let $H$ be the complete $k$-uniform hypergraph on $n$ vertices, i.e. $V(H)$ consists of $n$ vertices and $E(H)$ is equal to the set of all $k$-subsets of $V(H)$.
Then $H$ is a connected $k$-uniform hypergraph which is simple but non-linear, where a {\it simple} hypergraph is one with no repeated edges and no loops.
For any two vertices $u$ and $v$, there exists an automorphism $\phi$ such that $\phi(u)=v$, $\phi(v)=u$ and $\phi(w)=w$ for any other vertex $w$.
Therefore $H$ is also vertex-transitive.
Now let $H_1,H_2,\dots,H_k$ be distinct copies of $H$, each with its own vertex set $V(H_i)=V(H)\times \{i\}$.
Take $H^*$ to be the union of these copies along with $n$ edges of the form $E_v=\{(v,1),(v,2),\dots,(v,k)\}$ (one for each vertex $v\in V(H)$).
Then $H^*$ is a connected $k$-uniform hypergraph which is simple but non-linear.
Now we must verify that $H^*$ is vertex-transitive.
For any two vertices within the same copy of $H$, we can find an automorphism $\phi$ of $H^*$ similar to the ones described for $H$; for example, to map $(u,1)$ to $(v,1)$, use the map $\phi:H^*\rightarrow H^*$ defined by
$$\phi(u,i)=(v,i), \phi(v,i)=(u,i) \text{ and }\phi(w,i)=(w,i) \text{ when }w\not\in\{u,v\}.$$
For any two vertices within an edge of the form $E_v$, simply take an automorphism $\psi$ of $H^*$ which exchanges the two corresponding copies of $H$ and fixes the rest; for example, to map $(v,1)$ to $(v,2)$, use the map $\psi:H^*\rightarrow H^*$ defined by
$$\psi(u,1)=(u,2), \psi(u,2)=(u,1) \text{ and }\psi(u,i)=(u,i) \text{ when }i\not\in\{1,2\}.$$
Finally, for any two vertices in general, we may take a composition (if needed) of the two types of automorphisms we have just described.
Therefore, $H^*$ is a vertex-transitive hypergraph.
However, so long as $n \geq k+2$ and $k \geq 3$, $$\kappa'(H^*)\leq n < \binom{n-1}{k-1}+1=\Delta(H)+1=\Delta(H^*)$$ and so $H^*$ is not maximally edge-connected.
\subsection{Linear but Non-Uniform Hypergraphs}
Let $P$ be the affine plane of order 3 on the point set $\{1,2,\dots,9\}$ with lines shown in Table~\ref{plane}.
\begin{table}[h]
\centering
\begin{tabular}{llll}
$\Pi_0:$ & $\{1,2,3\}$, & $\{4,5,6\}$, & $\{7,8,9\}$ \\
$\Pi_1:$ & $\{1,4,7\}$, & $\{2,5,8\}$, & $\{3,6,9\}$ \\
$\Pi_2:$ & $\{1,5,9\}$, & $\{3,4,8\}$, & $\{2,6,7\}$ \\
$\Pi_3:$ & $\{1,6,8\}$, & $\{2,4,9\}$, & $\{3,5,7\}$
\end{tabular}
\caption{Lines of the affine plane $P$}
\label{plane}
\end{table}
Note that the parallel classes $\Pi_0,\dots,\Pi_3$ of $P$ are indicated horizontally.
Now let $H$ denote $P$ with $\Pi_0$ removed and observe that $H$ may be viewed as a connected linear 3-uniform hypergraph.
To verify that $H$ is vertex-transitive, let $x$ and $y$ be two vertices of $H$.
Find the parallel class in $P$ that has a line which contains the pair $\{x,y\}$ and write its blocks in order as a permutation.
For example, to map point 1 to point 2, take the permutation $(1,2,3)(4,5,6)(7,8,9)$ corresponding to $\Pi_0$.
We call this permutation $\sigma$ and note that either $\sigma$ or $\sigma^{-1}$ is an automorphism of $H$ which maps $x$ to $y$ and preserves all of the parallel classes of $H$.
Now take a copy of $H$ (denoted $H'$) on the vertex set $\{1',2',\dots,9'\}$ with edges corresponding to those of $H$.
Form three additional edges of size six as follows:
$$\{1,2,3,1',2',3'\}, \{4,5,6,4',5',6'\}, \{7,8,9,7',8',9'\}.$$
Then take the union of $H$, $H'$, and the three edges of size six to form the hypergraph $H^*$.
Note that $H^*$ is a connected linear non-uniform hypergraph with edges of sizes 3 and 6.
By composing the automorphisms described for $H$ with the automorphism which maps each vertex of $H$ to its copy in $H'$, we can verify that $H^*$ is also vertex-transitive.
However, the edge-connectivity $\kappa'(H^*)=3$ whereas the degree $\Delta(H^*)=4$, and so $H^*$ is not maximally edge-connected.
\section{A Generalisation of Mader's Theorem}
Let $H$ be a hypergraph with vertex set $V(H)$.
For $Y\subseteq V(H)$, we let $\partial(Y)$ denote the set of hyperedges of $H$ that have at least one vertex in $Y$ and at least one vertex in $V(H)\setminus Y$.
A key part of the proof of our main theorem is the following lemma.
\begin{Lemma}\label{uncross}
Let $H$ be a $k$-uniform hypergraph and $X,Y\subseteq V(H)$. Then $$|\partial(X\cup Y)|+|\partial(X\cap Y)|\leq|\partial(X)|+|\partial(Y)|.$$
\end{Lemma}
\begin{Proof} In a Venn diagram of two (possibly intersecting) sets, there are four distinct regions.
For our subsets $X$ and $Y$, these are $X\setminus Y$, $Y\setminus X$, $X\cap Y$ and $(X\cup Y)^C$.
Only edges meeting more than one of these regions can contribute to the values of $|\partial(X\cup Y)|+|\partial(X\cap Y)|$ and $|\partial(X)|+|\partial(Y)|$.
When $k=2$, we have $\binom{4}{2}=6$ pairs of regions and hence six types of relevant edges which may exist.
By checking each pair of regions, we see that $|\partial(X)|+|\partial(Y)|$ accounts for all of the edges of $|\partial(X\cup Y)|+|\partial(X\cap Y)|$ but counts any edge between $X\setminus Y$ and $Y\setminus X$ twice, whereas $|\partial(X\cup Y)|+|\partial(X\cap Y)|$ does not count these edges at all.
When $k=3$, there are $\binom{4}{3}=4$ additional types of possible edges, namely those meeting exactly three of the four regions.
Again, $|\partial(X)|+|\partial(Y)|$ accounts for all of the edges of $|\partial(X\cup Y)|+|\partial(X\cap Y)|$ and counts any edge meeting both $X\setminus Y$ and $Y\setminus X$ twice, whereas $|\partial(X\cup Y)|+|\partial(X\cap Y)|$ counts such an edge at most once.
When $k\geq4$, there is only one additional type of possible edge: one that contains vertices from all four regions.
Such an edge is counted exactly twice by $|\partial(X)|+|\partial(Y)|$ and exactly twice by $|\partial(X\cup Y)|+|\partial(X\cap Y)|$.
In every case, each edge contributes at least as much to $|\partial(X)|+|\partial(Y)|$ as to $|\partial(X\cup Y)|+|\partial(X\cap Y)|$, and the inequality follows.
\end{Proof}
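Though no substitute for the proof, the inequality of Lemma~\ref{uncross} can also be tested empirically; the following Python sketch checks it on randomly generated 3-uniform hypergraphs.
\begin{verbatim}
# Randomised test of the submodularity inequality of the lemma
# on small random 3-uniform hypergraphs.
import random
from itertools import combinations

def boundary(edges, X):
    return sum(1 for e in edges if e & X and e - X)

random.seed(1)
V = list(range(8))
triples = [frozenset(S) for S in combinations(V, 3)]
for _ in range(1000):
    edges = random.sample(triples, 10)
    X = set(random.sample(V, random.randint(1, 7)))
    Y = set(random.sample(V, random.randint(1, 7)))
    assert (boundary(edges, X | Y) + boundary(edges, X & Y)
            <= boundary(edges, X) + boundary(edges, Y))
print("inequality held in all 1000 trials")
\end{verbatim}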
We now proceed with the proof of our main result. Note that the examples detailed in Section 2 imply the necessity of the linear and uniform conditions in the statement of this result.
\begin{manualtheorem}{2}
Let $H$ be a linear $k$-uniform hypergraph with $k\geq3$. If $H$ is vertex-transitive and connected, then $H$ is maximally edge-connected.
\end{manualtheorem}
\begin{Proof} Since $\kappa'(H)\leq \Delta(H)$, it suffices to show that $\kappa'(H)\geq \Delta(H)$. Choose a proper subset $X \subset V(H)$ such that
\begin{itemize}
\item[$(i)$] $|\partial(X)|$ is minimum and
\item[$(ii)$] $|X|$ is minimum (subject to $(i)$).
\end{itemize}
Note that by condition $(i)$, $|\partial(X)|=\kappa'(H)$, so it suffices to show that $|\partial(X)|\geq \Delta(H)$.
By definition $\partial(X)=\partial(V(H)\setminus X)$, so condition $(ii)$ implies that $|X|\leq \frac{1}{2}|V(H)|$.
In~\cite{GR} such a set $X$ is referred to as an {\it edge atom}.
Now suppose there exists $\phi \in \text{Aut}(H)$ such that $\emptyset\neq X\cap\phi(X)\neq X$.
Then by Lemma~\ref{uncross}, $$|\partial(X\cup\phi(X))|+|\partial(X\cap\phi(X))|\leq|\partial(X)|+|\partial(\phi(X))|=2|\partial(X)|.$$
If $|\partial(X\cup\phi(X))|< |\partial(X)|$ then the set $X\cup\phi(X)$, which is a proper subset of $V(H)$ since $|X|=|\phi(X)|\leq\frac{1}{2}|V(H)|$ and $X\cap\phi(X)\neq\emptyset$, contradicts our choice of $X$ by condition $(i)$.
Otherwise, $|\partial(X\cap\phi(X))|\leq|\partial(X)|$, but then $X\cap\phi(X)$ contradicts our choice of $X$ by condition $(i)$ or $(ii)$.
Therefore, for every $\phi \in \text{Aut}(H)$, either $X\cap\phi(X)=X$ or $X\cap\phi(X)=\emptyset$.
For this reason, we say that $X$ is a {\it block of imprimitivity} (for more information on this terminology, see~\cite{GR}).
So far this proof has loosely followed the proof of Mader's Theorem found in~\cite{GR}; to proceed from here, however, we require new techniques.
Now, for $Y\subseteq V(H)$, we let $\partial_i(Y)$ denote the set of hyperedges of $H$ that have exactly $i$ vertices in $Y$ and $k-i$ vertices in $V(H)\setminus Y$.
Note that $\partial(Y)=\bigcup_{i=1}^{k-1}\partial_i(Y)$.
For any $x\in X$ and $1\leq i\leq k$, let $a_i$ be the number of neighbours of $x$ in $X$ that share an edge of $\partial_i(X)$ with $x$.
Similarly, let $b_i$ be the number of neighbours of $x$ in $V\setminus X$ that share an edge of $\partial_i(X)$ with $x$.
Since $H$ is vertex-transitive and $X$ is a block of imprimitivity, the values of $a_i$ and $b_i$ for $1\leq i \leq k$ do not depend on the choice of $x\in X$.
If $|X|=1$, then $|\partial(X)|=\Delta(H)$, so from now on we assume $|X|\geq 2$.
Let $y\in X$ and note that $$|\partial(X\setminus\{y\})|=|\partial(X)|+\frac{a_k}{k-1}-\frac{b_1}{k-1}:$$ removing $y$ turns each of the $\frac{a_k}{k-1}$ $\partial_k$-edges through $y$ into a boundary edge and each of the $\frac{b_1}{k-1}$ $\partial_1$-edges through $y$ into a non-boundary edge (by linearity, distinct edges through $y$ meet in $y$ alone, which gives these counts), while every other boundary edge of $X$ remains a boundary edge of $X\setminus\{y\}$.
So, if $a_k\leq b_1$ then $X\setminus\{y\}$ contradicts our choice of $X$.
Otherwise we may assume $a_k>b_1$, which implies that $\partial_k(X)$ is nonempty.
For the remainder of the proof, we will refer to an edge contained in the set $\partial_i(X)$ as a $\partial_i$-edge.
If $|X|=k$ then $X$ is precisely the vertex set of a single $\partial_k$-edge.
Then by linearity, the only boundary edges are $\partial_1$-edges, and since vertex-transitivity implies that $H$ is regular, $|\partial(X)|=k(\Delta(H)-1)$.
Now $|\partial(X)|=k(\Delta(H)-1)$ is strictly greater than $|\partial(\{x\})|=\Delta(H)$ as long as $k\geq 3$ and $\Delta(H)\geq 2$.
The latter condition is easy to confirm, as a connected hypergraph with maximum degree 1 would be a single edge on $k$ vertices and could not properly contain the $k$-element set $X$.
So $\{x\}$ contradicts our choice of $X$ by condition $(i)$. Moreover, since $\partial_k(X)$ is nonempty we have $|X|\geq k$, and hence $|X|$ must be strictly greater than $k$.
Now since $a_k\neq 0$ and $X$ is a block of imprimitivity, every vertex of $X$ must be incident with at least one $\partial_k$-edge.
Then the subhypergraph of $H$ induced by $X$, denoted $H_X$, is either a collection of non-intersecting edges or a $k$-uniform hypergraph in which each vertex lies at the intersection of at least two edges.
In the first case, take $X'$ to be the vertex set of a single edge of $H_X$. Every boundary edge of $X'$ is also a boundary edge of $X$, since an edge meeting both $X'$ and $X\setminus X'$ cannot lie entirely within $X$ (it would then be an edge of $H_X$ intersecting two of the non-intersecting edges). Hence $|\partial(X')|\leq|\partial(X)|$ while $|X'|<|X|$, so $X'$ contradicts our choice of $X$ by condition $(i)$ or $(ii)$.
Therefore, we know that each vertex of $X$ lies at the intersection of at least two edges of $H_X$.
For $x\in X$, let $r_x$ be the number of $\partial_k$-edges within $X$ which contain $x$.
Observe that, by linearity, $r_x=\frac{a_k}{k-1}$, and so $r_x$ does not depend on our choice of $x$.
So we will simply use $r$ to denote the number of $\partial_k$-edges within $X$ which contain any given vertex of $X$.
Observe that the degree $\Delta(H)$ must be strictly greater than $r$, since otherwise every neighbour of any vertex in $X$ must also be a vertex of $X$, and therefore $H_X$ is either a disconnected component of $H$ or $H_X=H$; neither is possible, as $H$ is connected and $X$ is a proper subset of $V(H)$.
In addition, we note that $\Delta(H)$ must be strictly greater than $|\partial(X)|$, since otherwise $\kappa'(H)=|\partial(X)|=\Delta(H)$ and we are done.
Also $|\partial(X)|\geq \frac{|X|(\Delta(H)-r)}{k-1}$, since each vertex of $X$ lies in exactly $\Delta(H)-r$ boundary edges and each edge of $\partial(X)$ contains at most $k-1$ vertices of $X$.
Therefore, $$\Delta(H)>\frac{|X|(\Delta(H)-r)}{k-1};$$ rearranging for $|X|$ gives a strict upper bound $$|X|<\frac{\Delta(H)(k-1)}{\Delta(H)-r}.$$
Observe that $X$ contains the vertex $x$ and, by linearity, at least $r(k-1)$ other vertices, since the $r$ $\partial_k$-edges through $x$ pairwise intersect only in $x$.
So $$\frac{\Delta(H)(k-1)}{\Delta(H)-r}>|X|\geq 1+r(k-1).$$
This implies $\Delta(H)(k-1)>(\Delta(H)-r)+(\Delta(H)-r)r(k-1)$ and since $\Delta(H)-r>0$, we have $\Delta(H)(k-1)>(\Delta(H)-r)r(k-1)$.
Dividing both sides by $k-1\neq 0$ gives $\Delta(H)>(\Delta(H)-r)r$, which rearranges to $r^2>\Delta(H)(r-1)$.
To make the arithmetic easier, let $d$ be the difference $\Delta(H)-r$, and note that $d$ is a positive integer.
Substitute $d+r$ for $\Delta(H)$ and continue:
$$\begin{array}{lccl}
& r^2 &>& (d+r)(r-1)\\
\Rightarrow & r^2 &>& r^2+dr-d-r\\
\Rightarrow & 0 &>& dr-d-r\\
\Rightarrow & d &>& r(d-1).
\end{array}$$
If $d>1$ then $r<\frac{d}{d-1}$, a ratio of two consecutive positive integers, so $1\leq r<\frac{d}{d-1}\leq 2$ which implies $r=1$.
This means that each vertex of $X$ is incident with a single $\partial_k$-edge of $X$.
But we previously established that each vertex of $X$ lies at the intersection of at least two $\partial_k$-edges, a contradiction.
Finally, if $d=1$, then each vertex of $X$ is incident with exactly one boundary edge.
Recall the lower bound $|X|\geq 1+r(k-1)$.
Replacing $r$ with $\Delta(H)-d=\Delta(H)-1$, we get $|X|\geq 1+(\Delta(H)-1)(k-1)$, which is strictly greater than $(\Delta(H)-1)(k-1)$.
So we have $\frac{|X|}{k-1}>\Delta(H)-1$.
Observe that $|\partial(X)|\geq \frac{|X|}{k-1}$ since boundary edges take up vertices of $X$ at most $k-1$ at a time.
Therefore $\frac{|X|}{k-1}>\Delta(H)-1$ implies $|\partial(X)|>\Delta(H)-1$ and so $\kappa'(H)=|\partial(X)|\geq \Delta(H)$.
\end{Proof}
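As an illustration of the theorem, the Fano plane is a connected linear 3-uniform vertex-transitive hypergraph in which every point lies on $\Delta=3$ lines; the following Python sketch confirms by brute force that $\kappa'=3$.
\begin{verbatim}
# The Fano plane: connected, linear, 3-uniform, vertex-transitive,
# with every point on Delta = 3 lines.
from itertools import combinations

edges = [frozenset(e) for e in
         [{1,2,3},{1,4,5},{1,6,7},{2,4,6},
          {2,5,7},{3,4,7},{3,5,6}]]
V = range(1, 8)

def boundary(X):
    return sum(1 for e in edges if e & X and e - X)

kappa = min(boundary(set(S)) for r in range(1, 7)
            for S in combinations(V, r))
print(kappa)  # expect 3: the Fano plane is maximally edge-connected
\end{verbatim}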
\section{Acknowledgements}
Authors Burgess and Pike acknowledge NSERC Discovery Grant support and Luther acknowledges NSERC scholarship support.